
Imagine buying Windows in 2023

62
NLPcels are :marseydepressed: over the release of GPT-4
20
Daily Tedpost: Uncle Ted on AI

172. First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary.

174. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite-just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.

Tedphobes btfo he was right about literally everything

:marseyunabomber: :!marppyenraged: :!marppyenraged: :!marppyenraged:

51
When will bbbb be upgraded to GPT-4?

Imagine the immersive power bbbb can hold over dramacels if it was powered by GPT-4.

!codecels @HateMonster

74
GPT 4 can turn a napkin sketch into a functioning website

In the year of twenty-eighteen, Alice and Bob, a married team, Their income combined reached new heights, As they worked hard day and night.

Their son Charlie was their joy, A little baby, a lovely boy, A household they maintained together, Yet lived apart, without a tether.

To calculate their tax, it's true, A standard deduction we must construe, For married folks who file jointly, Twenty-four thousand dollars, quite pointy.

Their income sum, seventy-eight thousand nine eighty-one, Minus the standard deduction, the math's begun, With exemptions being zero, the next line we trace, A taxable income of fifty-four thousand nine eighty-one takes place.

Now to the tax table, a liability we seek, For married couples, the outlook's not bleak, In range of thirty-six thousand nine to eighty-nine thousand one fifty, The formula's set, no longer shifty.

Five thousand five hundred thirty-five, it starts, Plus twenty-eight percent of the excess imparts, Eighteen thousand eighty-one, the difference we find, Multiplied by point two eight, the tax liability's designed.

Ten thousand five hundred ninety-seven dollars and sixty-eight cents, A tax liability for Alice and Bob, a sum quite dense, In this world of numbers, a story unfolds, Their financial journey, in a poem it's told.
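The poem's arithmetic actually checks out. A minimal sketch that verifies it, using only the figures the poem itself quotes:

```python
# Check the tax poem's math, using only the numbers it quotes.
income = 78_981              # Alice and Bob's combined income
standard_deduction = 24_000  # married filing jointly, per the poem
taxable = income - standard_deduction
assert taxable == 54_981     # "fifty-four thousand nine eighty-one"

# Quoted bracket: $5,535 plus 28% of the excess over $36,900
base_tax, bracket_floor, rate = 5_535, 36_900, 0.28
excess = taxable - bracket_floor          # 18,081, as the poem says
tax = base_tax + rate * excess
print(f"${tax:,.2f}")                     # $10,597.68
```

So GPT-4's rhyming tax return is numerically correct to the cent.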

34
Reddit is down

That's it, that's the title

75

Acceleration chads WYA?


Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned.

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company's AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.

"Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this," the company said in a statement. "Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice. [...] We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey."

But employees said the ethics and society team played a critical role in ensuring that the company's responsible AI principles are actually reflected in the design of the products that ship.

"People would look at the principles coming out of the office of responsible AI and say, 'I don't know how this applies,'" one former employee says. "Our job was to show them and to create rules in areas where there were none."

In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger "responsible innovation toolkit" that the team posted publicly.

More recently, the team has been working to identify risks posed by Microsoft's adoption of OpenAI's technology throughout its suite of products.

The ethics and society team was at its largest in 2020, when it had roughly 30 employees including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.

In a meeting with the team following the reorg, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move swiftly. "The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very very high to take these most recent openAI models and the ones that come after them and move them into customers hands at a very high speed," he said, according to audio of the meeting obtained by Platformer.

Because of that pressure, Montgomery said, much of the team was going to be moved to other areas of the organization.

Some members of the team pushed back. "I'm going to be bold enough to ask you to please reconsider this decision," one employee said on the call. "While I understand there are business issues at play ... what this team has always been deeply concerned about is how we impact society and the negative impacts that we've had. And they are significant."

Montgomery declined. "Can I reconsider? I don't think I will," he said. "Cause unfortunately the pressures remain the same. You don't have the view that I have, and probably you can be thankful for that. There's a lot of stuff being ground up into the sausage."

In response to questions, though, Montgomery said the team would not be eliminated.

"It's not that it's going away --- it's that it's evolving," he said. "It's evolving toward putting more of the energy within the individual product teams that are building the services and the software, which does mean that the central hub that has been doing some of the work is devolving its abilities and responsibilities."

Most members of the team were transferred elsewhere within Microsoft. Afterward, remaining ethics and society team members said that the smaller crew made it difficult to implement their ambitious plans.

About five months later, on March 6, remaining employees were told to join a Zoom call at 11:30AM PT to hear a "business critical update" from Montgomery. During the meeting, they were told that their team was being eliminated after all.

One employee says the move leaves a foundational gap on the user experience and holistic design of AI products. "The worst thing is we've exposed the business to risk and human beings to risk in doing this," they explained.

The conflict underscores an ongoing tension for tech giants that build divisions dedicated to making their products more socially responsible. At their best, they help product teams anticipate potential misuses of technology and fix any problems before they ship.

But they also have the job of saying "no" or "slow down" inside organizations that often don't want to hear it --- or spelling out risks that could lead to legal headaches for the company if surfaced in legal discovery. And the resulting friction sometimes boils over into public view.

In 2020, Google fired ethical AI researcher Timnit Gebru after she published a paper critical of the large language models that would explode into popularity two years later. The resulting furor resulted in the departures of several more top leaders within the department, and diminished the company's credibility on responsible AI issues.

Members of the ethics and society team said they generally tried to be supportive of product development. But they said that as Microsoft became focused on shipping AI tools more quickly than its rivals, the company's leadership became less interested in the kind of long-term thinking that the team specialized in.

It's a dynamic that bears close scrutiny. On one hand, Microsoft may now have a once-in-a-generation chance to gain significant traction against Google in search, productivity software, cloud computing, and other areas where the giants compete. When it relaunched Bing with AI, the company told investors that every 1 percent of market share it could take away from Google in search would result in $2 billion in annual revenue.

That potential explains why Microsoft has so far invested $11 billion into OpenAI, and is currently racing to integrate the startup's technology into every corner of its empire. It appears to be having some early success: the company said last week Bing now has 100 million daily active users, with one third of them new since the search engine relaunched with OpenAI's technology.

On the other hand, everyone involved in the development of AI agrees that the technology poses potent and possibly existential risks, both known and unknown. Tech giants have taken pains to signal that they are taking those risks seriously --- Microsoft alone has three different groups working on the issue, even after the elimination of the ethics and society team. But given the stakes, any cuts to teams focused on responsible work seem noteworthy.

II.

The elimination of the ethics and society team came just as its remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released tools powered by OpenAI to a global audience.

Last year, the team wrote a memo detailing brand risks associated with the Bing Image Creator, which uses OpenAI's DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft's first public collaborations with OpenAI.

While text-to-image technology has proved hugely popular, Microsoft researchers correctly predicted that it could also threaten artists' livelihoods by allowing anyone to easily copy their style.

"In testing Bing Image Creator, it was discovered that with a simple prompt including just the artist's name and a medium (painting, print, photography, or sculpture), generated images were almost impossible to differentiate from the original works," researchers wrote in the memo.

They added: "The risk of brand damage, both to the artist and their financial stakeholders, and the negative PR to Microsoft resulting from artists' complaints and negative public reaction is real and significant enough to require redress before it damages Microsoft's brand."

In addition, last year OpenAI updated its terms of service to give users "full ownership rights to the images you create with DALL-E." The move left Microsoft's ethics and society team worried.

"If an AI-image generator mathematically replicates images of works, it is ethically suspect to suggest that the person who submitted the prompt has full ownership rights of the resulting image," they wrote in the memo.

Microsoft researchers created a list of mitigation strategies, including blocking Bing Image Creator users from using the names of living artists as prompts, and creating a marketplace to sell an artist's work that would be surfaced if someone searched for their name.

Employees say neither of these strategies were implemented, and Bing Image Creator launched into test countries anyway.

Microsoft says the tool was modified before launch to address concerns raised in the document, and prompted additional work from its responsible AI team.

But legal questions about the technology remain unresolved. In February 2023, Getty Images filed a lawsuit against Stability AI, makers of the AI art generator Stable Diffusion. Getty accused the AI startup of improperly using more than 12 million images to train its system.

The accusations echoed concerns raised by Microsoft's own AI ethicists. "It is likely that few artists have consented to allow their works to be used as training data, and likely that many are still unaware how generative tech allows variations of online images of their work to be produced in seconds," employees wrote last year.


lmao

129
:marseyyes: :duckdance:

I was having a conversation with Bing, speculating about microsoft's goals and long term business plans for their various products. Conversation went over a few points about Open source software, increasing Linux support, and more. We started talking about executives, and I suggested it analyze the social media, blog posts, and personal interviews of the product director of Bing.

https://i.rdrama.net/images/16787585102395484.webp

https://i.rdrama.net/images/16787585103702786.webp

Very interesting results. I have access to bing chat somehow, so drop any requests and I'll let you know what it says.


Oh did you click on another thread? Let me check your connection again.

Oh did you click on yet another thread? Let me check your connection again.

Oh did you click too fast? Better give you a 429 error for a minute.

Want to view who awarded you? Sorry let me give you an error message.

Want to search something or someone? Let me check your connection again.

And the cycle repeats.

Can Null at least try to code a better DDoS system that works competently while not being too overprotective?
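The complaint above is essentially about a rate limiter with no burst tolerance. A hypothetical sketch of the usual fix, a per-client token bucket (everything here, class name, capacity, refill rate, is invented for illustration, not anything the site actually runs):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: short bursts up to `capacity` are allowed,
    and the sustained rate is capped at `refill_rate` requests/second, so
    a user clicking through a few threads never trips the limiter."""

    def __init__(self, capacity=20, refill_rate=2.0):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = defaultdict(lambda: float(capacity))
        self.last_seen = defaultdict(time.monotonic)

    def allow(self, client_id):
        now = time.monotonic()
        elapsed = now - self.last_seen[client_id]
        self.last_seen[client_id] = now
        # Refill tokens for the time elapsed, capped at the burst capacity.
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.refill_rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True   # serve the request
        return False      # only now respond with a 429
```

With the burst capacity set well above normal browsing speed, fast clickers sail through while sustained floods still get throttled.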

Reported by:
  • 9 : this is a good thing
  • Thirtythirst4sissies : Transition tou skinny manlets shits you have nothing to loose

Bonus bolded 41% mention in the post

Lmfao look at OP’s post history. embarrassing asf

https://old.reddit.com/r/csMajors/comments/11oviuv/evidence_for_hiring_discrimination_in_favor_of/jbux7e0/

This is whataboutism and that article only presents anecdotes. [-63]

https://old.reddit.com/r/csMajors/comments/11oviuv/evidence_for_hiring_discrimination_in_favor_of/jbv0j27/

Too many issues with this study and your conclusion.

  • The last names highly imply white or black women, so intersectionality isn’t taken into account
  • Your 41% here is meant to encourage outrage when it’s 11.2% to 15.8%, which while is a slight advantage, isn’t as egregious as you make it seem
  • Almost all of the jobs in this study fall into the category “senior”. It says that job titles aren’t mutually exclusive. You aren’t even competing for those positions as a college grad, so you aren’t losing there.

It even says it in the study itself:

“The results of the field experiment support the hypothesis that female software engineers experience positive discrimination at the screening phase of hiring processes. This is at least found to be true for applicants with moderately impressive backgrounds and a few years of full-time work experience, and this result could be generalized through follow up research.”

You aren’t getting hired because you can’t even be half-assed to read the conclusion. If you wanna whine about new grad/internships, at least pull a study that actually looks at that experience level.

OP consistently posts on purple pill. That tell me enough about his views on women and why he thinks his failing is directly caused by something like this.

Your 41% here is meant to encourage outrage when it’s 11.2% to 15.8%, which while is a slight advantage, isn’t as egregious as you make it seem

Comparing the difference in percentage points makes no sense when most applications don't receive a callback.

Women get 40% more callbacks so they have way more options at that point than men. If a man and a woman with an otherwise identical resume both send 100 applications, that's 4-5 more callbacks for the woman.
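The two framings in this exchange are both arithmetically right; they just measure different things. Checking the numbers from the quoted study:

```python
# Callback rates quoted from the study: men vs. women.
men, women = 0.112, 0.158

# The "slight advantage" framing: percentage-point gap.
print(f"percentage-point gap: {women - men:.3f}")        # 0.046

# The "41%" framing: relative advantage in callback odds.
print(f"relative advantage:   {women / men - 1:.0%}")    # 41%

# Concretely, over 100 otherwise-identical applications:
print(f"extra callbacks per 100 apps: {100 * (women - men):.1f}")  # 4.6
```

Both the 4.6-point gap and the 41% relative edge come from the same two rates; which one sounds "egregious" is a framing choice.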

You aren’t getting hired

Do you not see my role? I'm just a freshman, I'm not applying to jobs or internships yet. What's up with the immediate assumptions and personal attacks? [-16]

https://old.reddit.com/r/csMajors/comments/11oviuv/evidence_for_hiring_discrimination_in_favor_of/jbuxk6f/

https://old.reddit.com/r/csMajors/comments/11oviuv/evidence_for_hiring_discrimination_in_favor_of/jbuzpjr/


Don’t entirely disagree but let’s see what happens monday


Orange Site:

https://news.ycombinator.com/item?id=35112818


One of the YC team is in there trying to defend himself.

![](https://i.rdrama.net/images/16785829983699899.webp)

Edit: It's now [flagged] and people aren't happy :marseyschizowall:

Related: Twitter needs a longpost bot

128
So that's why they call him Linus "Hard R" Tech Tips

54
As seen on the github dramautism compendium: fork edition!

![](https://i.rdrama.net/images/16785724210963054.webp)

104
Tech bros are about to disrupt the old model of being homeless on the streets of San Fran
123

An actual decent reddit post in /r/toopoortobuyaniphone for once, and a good Saturday morning thread to drink a cup of Joe with. :marseycoffeemug:

Many of us have witnessed the breathtaking moon photos taken with the latest zoom lenses, starting with the S20 Ultra. Nevertheless, I've always had doubts about their authenticity, as they appear almost too perfect. While these images are not necessarily outright fabrications, neither are they entirely genuine. Let me explain.

There have been many threads on this, and many people believe that the moon photos are real (inputmag) - even MKBHD has claimed in this popular youtube short that the moon is not an overlay, like Huawei has been accused of in the past. But he's not correct. So, while many have tried to prove that Samsung fakes the moon shots, I think nobody succeeded - until now.

WHAT I DID

  1. I downloaded this high-res image of the moon from the internet - https://i.imgur.com/PIAjVKp

  2. I downsized it to 170x170 pixels and applied a gaussian blur, so that all the detail is GONE. This means it's not recoverable, the information is just not there, it's digitally blurred: https://i.imgur.com/xEyLajW

And a 4x upscaled version so that you can better appreciate the blur: https://i.imgur.com/3STX9mZ

  3. I full-screened the image on my monitor (showing it at 170x170 pixels, blurred), moved to the other end of the room, and turned off all the lights. Zoomed into the monitor and voila - https://i.imgur.com/ifIHr3S

  4. This is the image I got - https://i.imgur.com/bXJOZgI
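The downscale-and-blur steps of the experiment can be sketched with Pillow. This is an illustrative stand-in, not OP's actual code: it synthesizes a noisy test image in place of the downloaded moon photo, then applies the same 170x170 resize and Gaussian blur to show the fine detail really is destroyed:

```python
from PIL import Image, ImageFilter, ImageStat

# Stand-in for the downloaded high-res moon photo: Gaussian noise
# gives us plenty of fine detail to destroy.
src = Image.effect_noise((1000, 1000), 64)

# Steps 1-2 of the post: downsize to 170x170, then Gaussian-blur, so
# the fine detail is gone -- not hidden, mathematically unrecoverable.
small = src.resize((170, 170), Image.LANCZOS)
blurred = small.filter(ImageFilter.GaussianBlur(radius=3))

# The blur flattens local contrast: per-pixel variation drops sharply,
# which is exactly why no optics could "recover" craters from this.
print(ImageStat.Stat(blurred).stddev[0] < ImageStat.Stat(small).stddev[0])
```

Any detail that shows up in a photo of the blurred version therefore has to come from somewhere other than the displayed image.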

INTERPRETATION

To put it into perspective, here is a side by side: https://i.imgur.com/ULVX933

In the side-by-side above, I hope you can appreciate that Samsung is leveraging an AI model to put craters and other details on places which were just a blurry mess. And I have to stress this: there's a difference between additional processing a la super-resolution, when multiple frames are combined to recover detail which would otherwise be lost, and this, where you have a specific AI model trained on a set of moon images, in order to recognize the moon and slap on the moon texture on it (when there is no detail to recover in the first place, as in this experiment). This is not the same kind of processing that is done when you're zooming into something else, when those multiple exposures and different data from each frame account to something. This is specific to the moon.

CONCLUSION

The moon pictures from Samsung are fake. Samsung's marketing is deceptive. It is adding detail where there is none (in this experiment, it was intentionally removed). In this article, they mention multi-frames, multi-exposures, but the reality is, it's AI doing most of the work, not the optics, the optics aren't capable of resolving the detail that you see. Since the moon is tidally locked to the Earth, it's very easy to train your model on other moon images and just slap that texture when a moon-like thing is detected.

Now, Samsung does say "No image overlaying or texture effects are applied when taking a photo, because that would cause similar objects to share the same texture patterns if an object detection were to be confused by the Scene Optimizer.", which might be technically true - you're not applying any texture if you have an AI model that applies the texture as a part of the process, but in reality and without all the tech jargon, that's what's happening. It's a texture of the moon.

If you turn off "scene optimizer", you get the actual picture of the moon, which is a blurry mess (as it should be, given the optics and sensor that are used).

To further drive home my point, I blurred the moon even further and clipped the highlights, which means the area which is above 216 in brightness gets clipped to pure white - there's no detail there, just a white blob - https://i.imgur.com/9XMgt06

I zoomed in on the monitor showing that image and, guess what, again you see slapped on detail, even in the parts I explicitly clipped (made completely 100% white): https://i.imgur.com/9kichAp
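The highlight-clipping step is easy to reproduce too. A sketch with Pillow (again a stand-in, not OP's code; a flat bright square plays the role of the clipped region of the moon image):

```python
from PIL import Image

# A flat bright region, brightness 230, stands in for the moon's highlights.
bright = Image.new("L", (64, 64), 230)

# The clipping step: every pixel above 216 is forced to pure white (255),
# so no genuine detail can survive in those regions.
clipped = bright.point(lambda v: 255 if v > 216 else v)

print(clipped.getextrema())   # (255, 255): a featureless white blob
```

Any craters a camera "recovers" inside that uniform white blob must be invented by a model, because the optics had nothing to work with.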

TL;DR: Samsung is using AI/ML to slap on a texture of the moon on your moon pictures, and while some think that's your camera's capability, it's actually not. And it's not sharpening, it's not adding detail from multiple frames, because in this experiment all the frames contain the same amount of detail. None of the frames have the craters etc. because they're intentionally blurred, yet the camera somehow miraculously knows that they are there. And don't even get me started on the motion interpolation on their "super slow-mo", maybe that's another post in the future.

The Input article in particular that OP links to is egregious :!marseyneat:

Is the Galaxy S21 Ultra using AI to fake detailed Moon photos?

The gist of the article is 3 sources say its abilities are real, 3 say they're fake, so Input does their own test pitting a S21 Ultra vs a Sony A7R III w/ 600mm telephoto lens. It's really funny and I suggest you read it as I can't do it justice.

But if you're lazy and don't want to read it here's the ending of the article.

3,500-something words into this investigation and I feel confident between my own comparison, the lack of Moon overlay photos or maps within the camera’s software, and Samsung’s own detailed explanations that there is no faking going on with 100x Moon photos. The S21 Ultra's doing a ton of correction on a 100x photo of the Moon and I have no reason to believe any addition of third-party imagery is happening. The S21 Ultra’s 100x zoom (with intelligent software tuning) is really that impressive and gives it a considerable edge over other phones.

But I also want to include one caveat: the S21 Ultra’s Scene Optimizer will not suddenly make all 100x zoom photos look as crispy as the Moon. Samsung flat-out says the Scene Optimizer can recognize “more than 30 scenes.” That includes the following according to a spokesperson:

Food, Portraits, Flowers, Indoor scenes, Animals, Landscapes, Greenery, Trees, Sky, Mountains, Beaches, Sunrises and sunsets, Watersides, Street scenes, Night scenes, Waterfalls, Snow, Birds, Backlit, Text, Clothing, Vehicle, Shoe, Dog, Face, Drink, Stage, Baby, People, Cat, Moon.

Scenes and objects that aren’t recognized by the Scene Optimizer will likely look like grainy mush at 100x zoom. So take that into consideration when using the S21 Ultra’s max zoom.

Honestly, I can’t believe I spent this many words debunking such a silly conspiracy theory. But consider the case closed (for now). Now, if you’ll excuse me, I have to return to my regular scheduled programming that consists of dunking on flat earthers and people who believe in UFOs.

Key takeaway(s): AI technology has easily been able to fool some tech journ*lists and pop-tech YouTube reviewers who don't have an understanding of either technology or physics.

:marseyl:

orange site discussion
