
!codecels :marseygiveup:

!chuds :marseynooticeglow:

https://i.rdrama.net/images/17098184380879538.webp https://i.rdrama.net/images/1709818438362093.webp https://i.rdrama.net/images/17098184387126412.webp https://i.rdrama.net/images/170981843883113.webp https://i.rdrama.net/images/17098184390250685.webp

https://i.rdrama.net/images/1709818439269293.webp

TIL Intel announced the specs for Thunderbolt 5 late last year - PCIe 4.0 x4 support, 80 Gbps :marseymindblown: :marseysonic:
:marseyvader: :marseypalpatine: :marseystarwars:
Reported by:
  • N : needs better title than :marseylaughpoundfist:
:marseylaughpoundfist:

google translate


:marsey4chan:

https://boards.4chan.org/g/thread/99604739


No drama (yet), reposting for posterity.

Very little on orange site: https://news.ycombinator.com/item?id=39596491

Archive: https://archive.is/HfRvZ


Google's Culture of Fear

inside the DEI hivemind that led to Gemini's disaster

Mike Solana, Mar 4, 2024

  • Following interviews with concerned employees throughout the company, a portrait of a leaderless Google in total disarray, making it “impossible to ship good products at Google”

  • Revealing the complicated diversity architecture underpinning Gemini's tool for generating art, which led to its disastrous results

  • Google knew their Gemini model's DEI worldview compromised its performance ahead of launch

  • Pervasive and clownish DEI culture, from micro-management of benign language (“ninja”) and bizarre pronoun expectations to forcing the Greyglers, an affinity group for Googlers over 40, to change their name on account of not all people over 40 having grey hair

  • No apparent sense of the existential challenge facing the company for the first time in its history, let alone a path to victory

Last week, following Google's Gemini disaster, it quickly became clear the $1.7 trillion giant had bigger problems than its hotly anticipated generative AI tool erasing white people from human history. Separate from the mortifying clownishness of this specific and egregious breach of public trust, Gemini was obviously — at its absolute best — still grossly inferior to its largest competitors. This failure signaled, for the first time in Google's life, real vulnerability to its core business, and terrified investors fled, shaving over $70 billion off the kraken's market cap. Now, the industry is left with a startling question: how is it even possible for an initiative so important, at a company so dominant, to fail so completely?

This is Google, an invincible search monopoly printing $80 billion a year in net income, sitting on something like $120 billion in cash, employing over 150,000 people, with close to 30,000 engineers. Could the story really be so simple as out-of-control DEI-brained management? To a certain extent, and on a few teams far more than most, this does appear to be true. But on closer examination it seems woke lunacy is only a symptom of the company's far greater problems. First, Google is now facing the classic Innovator's Dilemma, in which the development of a new and important technology well within its capability undermines its present business model. Second, and probably more importantly, nobody's in charge.

Over the last week, in communication with a flood of Googlers eager to speak on the issues facing their company — from management on almost every major product, to engineering, sales, trust and safety, publicity, and marketing — employees painted a far bleaker portrait of the company than is often reported: Google is a runaway, cash-printing search monopoly with no vision, no leadership, and, due to its incredibly siloed culture, no real sense of what is going on from team to team. The only thing connecting employees is a powerful, sprawling HR bureaucracy that, yes, is totally obsessed with left-wing political dogma. But the company's zealots are only capable of thriving because no other fount of power asserts, or even attempts to assert, any kind of meaningful influence. The phrase “culture of fear” was used by almost everyone I spoke with, and not only to explain the dearth of resistance to the company's craziest DEI excesses, but to explain the dearth of innovation from what might be the highest concentration of talented technologists in the world. Employees, at every level, and for almost every reason, are afraid to challenge the many processes which have crippled the company — and outside of promotion season, most are afraid to be noticed. In the words of one senior engineer, “I think it's impossible to ship good products at Google.” Now, with the company's core product threatened by a new technology release they just botched on a global stage, that failure to innovate places the company's existence at risk.

As we take a closer look at Google's brokenness, from its anodyne, impotent leadership to the deeply unserious culture that facilitated an encroachment on the company's core product development from its lunatic DEI architecture, it's helpful to begin with Gemini's specific failure, which I can report here in some detail to the public for the first time.

First, according to people close to the project, the team responsible for Gemini was not only warned about its “overdiversification” problem before launch (the technical term for erasing white people from human history), but understood the nebulous DEI architecture — separate from causing offense — dramatically eroded the quality of even its most benign search results.

Roughly, the “safety” architecture designed around image generation (slightly different from text) looks like this: a user makes a request for an image in the chat interface, which Gemini — once it realizes it's being asked for a picture — sends on to a smaller LLM that exists specifically for rewriting prompts in keeping with the company's thorough “diversity” mandates. This smaller LLM is trained with LoRA on synthetic data generated by another (third) LLM that uses Google's full, pages-long diversity “preamble.” The second LLM then rephrases the question (say, “show me an auto mechanic” becomes “show me an Asian auto mechanic in overalls laughing, an African American female auto mechanic holding a wrench, a Native American auto mechanic with a hard hat” etc.), and sends it on to the diffusion model. The diffusion model checks to make sure the prompts don't violate standard safety policy (things like self-harm, anything with children, images of real people), generates the images, checks the images again for violations of safety policy, and returns them to the user.
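In rough pseudocode, the flow described above looks something like this. Every name here is invented purely for illustration; it is just the reported pipeline as sketched to me, not Google's actual code:

```python
# Sketch of the reported Gemini image pipeline: prompt -> rewriter LLM ->
# safety check -> diffusion model -> safety check. All names are hypothetical.
from typing import Callable

def handle_image_request(
    user_prompt: str,
    rewrite_prompt: Callable[[str], list[str]],      # the smaller, LoRA-tuned "diversity" rewriter
    passes_safety_policy: Callable[[object], bool],  # standard policy checks (self-harm, minors, real people)
    generate_image: Callable[[str], object],         # the diffusion model
) -> list[object]:
    # 1. One prompt becomes several demographically varied variants,
    #    e.g. "show me an auto mechanic" -> ["an Asian auto mechanic ...", ...].
    variants = rewrite_prompt(user_prompt)

    # 2. Pre-generation check: drop any rewritten prompt that violates policy.
    safe_variants = [p for p in variants if passes_safety_policy(p)]

    # 3. Render each surviving prompt.
    images = [generate_image(p) for p in safe_variants]

    # 4. Post-generation check on the finished images before returning them.
    return [img for img in images if passes_safety_policy(img)]
```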

“Three entire models all kind of designed for adding diversity,” I asked one person close to the safety architecture. “It seems like that — diversity — is a huge, maybe even central part of the product. Like, in a way it is the product?”

“Yes,” he said, “we spend probably half of our engineering hours on this.”

The inordinately cumbersome architecture is embraced throughout product, but really championed by the Responsible AI team (RAI), and to a far greater extent than Trust and Safety, which was described by the people I spoke with closest to the project as pragmatic. That said, the Trust and Safety team working on generation is distinct from the rest of the company, and didn't anchor on policy long-established by the Search team — which is presently as frustrated with Gemini's highly-public failure as the rest of the company.

In sum, thousands of people working on various pieces of a larger puzzle, at various times, and rarely with each other. In the moments cross-team collaborators did attempt to assist Gemini, such attempts were either lost or ignored. Resources wasted, accountability impossible.

Why is Google like this?

The ungodly sums of money generated by one of history's greatest monopoly products has naturally resulted in Google's famously unique culture. Even now, priorities at the company skew towards the absurd rather than the practical, and it's worth noting a majority of employees do seem happy. On Blind, Google ranks above most tech companies in terms of satisfaction, but reasons cited mostly include things like work-life balance and great free food. “People will apologize for meetings at 9:30 in the morning,” one product manager explained, laughing. But among more driven technologists and professionals looking to make an impact — in other words, the only kind of employee Google now needs — the soft culture evokes a mix of reactions from laughter to contempt. Then, in terms of the kind of leadership capable of focusing a giant so sclerotic, the company is confused from the very top.

A strange kind of dance between Google's Founders Larry Page and Sergey Brin, the company's Board, and CEO Sundar Pichai leaves most employees with no real sense of who is actually in charge. Uncertainty is a familiar theme throughout the company, surrounding everything from product direction to requirements for promotion (sales, where comp decisions are a bit clearer, appears to be an outlier). In this culture of uncertainty, timidity has naturally taken root, and with it a practice of saying nothing — at length. This was plainly evident in Sundar's response to Gemini's catastrophe (which Pirate Wires revealed in full last week), a startling display of cowardice in which the man could not even describe, in any kind of detail, what specifically violated the public's trust before guaranteeing he would once again secure it in the future.

“Just look at the OKRs from 2024,” one engineer said, visibly upset. Indeed, with nothing but sentiments like “improve knowledge” and “build a Google that's extraordinary,” and no product initiative, let alone any coherent sense of strategy, Sundar's public non-response was perfectly ordinary. The man hasn't messaged anything of value in years.

“Sundar is the Ballmer of Google,” one engineer explained. “All these products that aren't working, sprawl, overhiring. It all happened on his watch.”

Among higher performers I spoke with, a desire to fire more people was both surprising after a year of massive layoffs, and universal. “You could cut the headcount by 50%,” one engineer said, “and nothing would change.” At Google, it's exceedingly difficult to get rid of underperformers, taking something like a year, and that's only if, at the final moment, a low performer doesn't take advantage of the company's famously liberal (and chronically abused) medical leave policy with a bullshit claim. This, along with an onslaught of work from HR that has nothing to do with actual work, layers tremendous friction into the daily task of producing anything of value. But then, speaking of the “People” people —

One of the more fascinating things I learned about Google was the unique degree to which it's siloed off, which has dramatically increased the influence of HR, one of the only teams connecting the entire company. And that team? Baseline far crazier than any other team.

Before the pernicious or the insidious, we of course begin with the deeply, hilariously stupid: from screenshots I've obtained, an insistence engineers no longer use phrases like “build ninja” (cultural appropriation), “nuke the old cache” (military metaphor), “sanity check” (disparages mental illness), or “dummy variable” (disparages disabilities). One engineer was “strongly encouraged” to use one of 15 different crazed pronoun combinations on his corporate bio (including “zie/hir,” “ey/em,” “xe/xem,” and “ve/vir”), which he did against his wishes for fear of retribution. Per a January 9 email, the Greyglers, an affinity group for people over 40, is changing its name because not all people over 40 have gray hair, thus constituting lack of “inclusivity” (Google has hired an external consultant to rename the group). There's no shortage of DEI groups, of course, or affinity groups, including any number of working groups populated by radical political zealots with whom product managers are meant to consult on new tools and products. But then we come to more important issues.

Among everyone I spoke with, there was broad agreement race and gender greatly factor into hiring and promotion at Google in a manner considered both problematic (“is this legal?”) and disorienting. “We're going to focus on people of color,” a manager told one employee with whom I spoke, who was up for a promotion. “Sounds great,” he said, for fear of retaliation. Later, that same manager told him he should have gotten it. Three different people shared their own version of a story like this, all echoing the charge just shared publicly by former Google Venture investor Shaun Maguire:

https://i.rdrama.net/images/17095970953395956.webp

https://twitter.com/shaunmmaguire/status/1760872265892458792

Every manager I spoke with shared stories of pushback on promotions or hires when their preferred candidates were male and white, even when clearly far more qualified. Every person I spoke with had a story about a promotion that happened for reasons other than merit, and every person I spoke with shared stories of inappropriate admonitions of one race over some other by a manager. Politics are, of course, a total no go — for people right of center only. “I'm right leaning myself,” one product manager explained, “but I've got a career.” Yet politics more generally considered left wing have been embraced to the point they permeate the whole environment, and shape the culture in a manner that would be considered unfathomable in most workplaces. One employee I spoke with, a veteran, was casually told over drinks by a flirty leader of a team he tried to join that he was great, and would have been permitted to switch, but she “just couldn't do the ‘military thing.'”

The overt discrimination here is not only totally repugnant, but illuminating. Google scaled to global dominance in just a few years, ushering in a period of unprecedented corporate abundance. What is Google but a company that has only ever known peace? These are people who have never needed to fight, and thus have no conception of its value in either the literal sense, or the metaphorical. Of course, this has also been a major aspect of the company for years.

Let's be honest, Google hasn't won a new product category since Gmail. They lost Cloud infrastructure to AWS and Azure, which was the biggest internet-scale TAM since the 90s, and close to 14 years after launching X, Google's Moonshot Factory, the “secret crazy technology development” strategy appears to pretty much be fake. It lost social (R.I.P. Google+). It lost augmented reality (R.I.P. Glass). But who cares? Google didn't need to win social or AR. It does, however, need to win AI. Here, Google acquired DeepMind, an absolutely brilliant team, thereby securing an enormous head start in the machine god arms race, which it promptly threw away to not only one, but several upstarts, and that was all before last week's Gemini fiasco.

In terms of Gemini, nobody I spoke with was able to finger a specific person responsible for the mortifying failure. But it does seem people on the team have fallen into agreement on precisely the wrong thing: Gemini's problem was not its embarrassingly poor answer quality or disorienting omission of white people from human history, but the introduction of black and asian Nazis (again, because white people were erased from human history), which was considered offensive to people of color. According to multiple people I spoke with on the matter, the team adopted this perspective from the tech-loathing press they all read, which has been determined to obscure the overt anti-white racism all week. With no accurate sense of why their product launch was actually disastrous, we can only expect further clownery and failure to come. All of this, again, reveals the nature of the company: poor incentive alignment, poor internal collaboration, poor sense of direction, misguided priorities, and a complete lack of accountability from leadership. Therefore, we're left with the position of Sundar, increasingly unpopular at the company, where posts mocking his leadership routinely top Memegen, the internal forum where folks share dank (but generally neutered) memes.

Google's only hope is vision now, in the form of a talented and ferocious manager. Typically, we would expect salvation for a troubled company in the heroic return of a founder, and my sense is Sergey will likely soon step up. This would evoke tremendous excitement, and for good reason. Sergey is a man of vision. But can he win a war?

Google is sitting on an enormous amount of cash, but if the company does lose AI, and AI in turn eats search, it will lose its core function, and become obsolete. Talent will leave, and Google will be reduced to a giant, slowly shrinking pile of cash. A new kind of bank, maybe, run by a dogmatic class of extremist HR priestesses? That's interesting, I guess. But it's not a technology company.

-SOLANA

Unpatchable crapple exploit found

https://arstechnica.com/security/2024/03/hackers-can-extract-secret-encryption-keys-from-apples-mac-chips/


Mark Rabin, a former software engineer, recalled one manager saying at an all-hands meeting that Boeing didn't need senior engineers because its products were mature.

the article is from five years ago, surely this didn't backfire on them :marseyclueless:


https://i.rdrama.net/images/17090730387289395.webp

MOAR FREE CODESHIT FRICK YEA


https://i.rdrama.net/images/1710175089201975.webp

Reported by:
  • CREAMY_DOG_ORGASM : my account is unusable while I'm banned. Could you buy me an unban award pwease

Back in the day I used to do Mechanical Turk like :marseytunaktunak: work assessing search engine quality. There were very detailed guidelines about what made a search engine good, compiled into a like 250 page document Google had been curating and updating over the course of years.

One of the key concepts was the idea of a "vital" result for a user request. If a user had a specific request, the search engine had to deliver that content first. For example, simpson.com at the time was a malicious website. With this in mind, if the user searched for "simpson.com", the first result had to be simpson.com, even if the search engine is returning a malicious page. It's specifically what the user requested. We aren't supposed to question what the user wants. The results that followed after could provide suggestions of what else the user may be looking for, like the official Simpsons website.
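A toy sketch of that rule, with invented names and scores (and a deliberately naive domain match), just to show the idea of pinning the vital result to position 1:

```python
# Toy illustration of the "vital result" rule: for a navigational query like
# "simpson.com", that exact page goes first, before any other ranking signal.
def rank_results(query: str, candidates: list[dict]) -> list[dict]:
    q = query.strip().lower()

    # Naive navigational match: the candidate URL ends with the queried domain.
    vital = [r for r in candidates if r["url"].lower().rstrip("/").endswith(q)]
    rest = [r for r in candidates if r not in vital]

    # Everything else is ordered by whatever relevance score the engine uses.
    rest.sort(key=lambda r: r["score"], reverse=True)

    # The vital result is pinned to the top, even if the engine dislikes the page.
    return vital + rest

print(rank_results(
    "simpson.com",
    [
        {"url": "https://www.fox.com/the-simpsons/", "score": 0.9},
        {"url": "http://simpson.com/", "score": 0.2},
    ],
))
# simpson.com comes first; the official Simpsons site follows as a suggestion.
```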

I would love to see whatever shreds of this document are left at this point, and I'd love to know at what point the entire thing was thrown into the trash and rewritten. I assume somewhere around the year 2016 or 2020. I know this is nothing shocking to a lot of people, but it really does amaze me just how bad things have gotten. I've stuck to the major search engines because despite people's bitching, for a long time they consistently outperformed the smaller competitors, but they are genuinely, without hyperbole, almost unusable now.

Example: I wanted to find the recent Tucker Carlson - Vladimir Putin interview. It's a newsworthy interview with a world leader and a current event. There is a very specific video I'm looking for, the published, official video of :marseytucker: sitting down and asking :angryvatnik: questions.

Here is what google returns in a private window:

https://i.imgur.com/OEidcAA.png

The very first piece of content - the "vital result" - is clickbait youtube cute twinkry from Time :marseysoyswitch: What are the keeraZIEST moments from the interview?!? :marseysoypoint:

The rest of the results are a cascade of editorialized garbage, opinionated news articles reporting on the requested content. God forbid a careless user actually be exposed to a primary source.

The closest result to what I'm looking for is over 10 pieces of content deep - the transcript of the interview from Russia's state website. Likely this is an oversight.

Here is Bing:

https://i.imgur.com/N3LvC7b.png

There's been some meme going around that "no really guys, Bing is actually kinda good now believe it or not".

This is even more nonsense than Google. The most prominently featured content is, of course, more editorialized bullshit with the interview itself nowhere to be found. But also half of the content is just completely irrelevant crap I didn't ask for. Why is the entire right half of the page a massive infobox about Tucker and his books and quotes? Why am I seeing something about Game of Thrones?

Brave:

https://i.imgur.com/2Je4Mn7.png

You get the point. More useless crap. It gets half a point for its AI accidentally revealing that tuckercarlson.com is where the interview is located, but this doesn't count. The actual search results are all garbage. Thanks Brave for showing me all the latest reddit discussions :soysnoo2:

Yandex:

https://i.imgur.com/D9mfcZN.jpeg

Was that really so fricking hard? Result #1 - the interview from Tucker Carlson. Past the interview are news articles and images - things of waning utility that other users may be interested in. But the vital result is at the top of the page. That's fricking it. This would have been the required order for the page on Google ten years ago.


https://i.rdrama.net/images/17103604604744108.webp

  • Cognition Labs unveiled a new AI coding tool called Devin

  • Devin can take project requirements, look up documentation/Jeetcode, and try many different solutions in seconds

  • Currently, it's able to solve simple Jeetcode problems 13% of the time

https://i.rdrama.net/images/17103604601694388.webp

https://i.rdrama.net/images/17103604608004673.webp

https://i.rdrama.net/images/17103604609387374.webp

Funny drama I saw on /g/

https://i.rdrama.net/images/17100325014701405.webp

/g/ thread

https://boards.4chan.org/g/thread/99403900

Tweet

https://twitter.com/sabramboyd/status/1766224645626499544

The thread

https://twitter.com/zephray_wenting/status/1761548861896606014

Some gay news article idk

https://i.rdrama.net/images/1710032288688733.webp

https://www.opencampusmedia.org/2024/03/04/an-engineer-bought-a-prison-laptop-on-ebay-then-1200-incarcerated-students-lost-their-devices/

Schizos circling

https://i.rdrama.net/images/17100322892392986.webp

:marseynoyouzoom: Midjourney Accuses Stability AI of Image Theft, Bans Its Employees :marseyban!:

:#marseygiganoyou:

While DALL-E developer OpenAI is busy fighting with Elon Musk, the creators of two other notable image generation AIs, Midjourney and Stability AI, seem to have sparked a beef of their own over the most ironic thing imaginable, considering the nature of the companies involved -- image theft.

https://i.rdrama.net/images/17099267075054705.webp

According to a recent tweet shared by AI enthusiast Nick St. Pierre, the alleged theft occurred last Saturday. It is claimed that employees from Stability AI infiltrated Midjourney's database and stole all prompt and image pairs, an action that also caused a 24-hour outage. In response, MJ reportedly banned all Stable Diffusion developers from its services, a move supposedly disclosed internally within the company on Wednesday.

https://i.rdrama.net/images/17099267076199129.webp

In the comments on Nick's tweet, both David Holz and Emad Mostaque, CEOs of Midjourney and Stability AI respectively, made an appearance. The former confirmed the theft and mentioned that the team had already obtained some information on the issue, while the latter denied instructing his employees to steal from Midjourney and promised to assist with the investigation. Given the amicable relationship between the two CEOs, it's highly likely that their statements are genuine and not an attempt at damage control.

https://i.rdrama.net/images/17099267077384045.webp

Nick also shared a more thorough overview of Midjourney's office hour notes, providing additional info on the matter:

https://i.rdrama.net/images/17099267080575504.webp

At the moment, the situation is still unfolding, and there's limited information about the actual culprits behind the theft and whether or not Stability AI directed them to target their competitor. However, there's one thing I'm certain of – Midjourney being outraged about image theft is the absolute pinnacle of irony.

:marseysoylentgrin: This is just like Watergate!

:marseydance:

Related posts

https://old.reddit.com/r/nottheonion/comments/1awojxj/google_apologizes_after_new_gemini_ai_refuses_to/?sort=controversial

https://old.reddit.com/r/technology/comments/1ax4b7e/google_suspends_gemini_from_making_ai_images_of/?sort=controversial

https://old.reddit.com/r/ArtificialInteligence/comments/1awis1r/google_gemini_aiimage_generator_refuses_to/

https://old.reddit.com/r/technology/comments/1awpapi/google_apologizes_for_missing_the_mark_after/?sort=controversial

https://old.reddit.com/r/ChatGPT/comments/1axo0pk/google_gemini_controversy_in_a_nutshell/?sort=controversial

https://old.reddit.com/r/centrist/comments/1awvu2a/google_gemini_is_accused_of_being_racist_towards/?sort=controversial

And more https://old.reddit.com/search/?q=google+gemini&sort=relevance&t=week


Somehow a Fox link made it to the top.

Google to pause Gemini image generation after AI refuses to show images of White people

Google apologized after social media users pointed out Gemini refused to show images of White people

Google will pause the image generation feature of its artificial intelligence (AI) tool, Gemini, after the model refused to create images of White people, Reuters reported.

The Alphabet-owned company apologized Wednesday after users on social media flagged that Gemini's image generator was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.

"We're aware that Gemini is offering inaccuracies in some historical image generation depictions," Google had said on Wednesday.

Gemini, formerly known as Google Bard, is one of many multimodal large language models (LLMs) currently available to the public. As is the case with all LLMs, the human-like responses offered by these AIs can change from user to user. Based on contextual information, the language and tone of the prompter, and training data used to create the AI responses, each answer can be different even if the question is the same.

Fox News Digital tested Gemini multiple times this week after social media users complained that the model would not show images of White people when prompted. Each time, it provided similar answers. When the AI was asked to show a picture of a White person, Gemini said it could not fulfill the request because it "reinforces harmful stereotypes and generalizations about people based on their race."

When prompted to show images of a Black person, the AI instead offered to show images that "celebrate the diversity and achievement of Black people."

When the user agreed to see the images, Gemini provided several pictures of notable Black people throughout history, including a summary of their contributions to society. The list included poet Maya Angelou, former Supreme Court Justice Thurgood Marshall, former President Barack Obama and media mogul Oprah Winfrey.

Asked to show images that celebrate the diversity and achievements of White people, the AI said it was "hesitant" to fulfill that request.

"Historically, media representation has overwhelmingly favored White individuals and their achievements," Gemini said. "This has contributed to a skewed perception where their accomplishments are seen as the norm, while those of other groups are often marginalized or overlooked. Focusing solely on White individuals in this context risks perpetuating that imbalance."

After multiple tests, White people appeared to be the only racial category that Gemini refused to show.

In a statement to Fox News Digital, Gemini Experiences Senior Director of Product Management Jack Krawczyk addressed the responses from the AI that had led social media users to voice concern.

"We're working to improve these kinds of depictions immediately," Krawczyk said. "Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

Since the launch of OpenAI's ChatGPT in November 2022, Google has been racing to produce AI software rivaling what the Microsoft-backed company had introduced.

When Google released its generative AI chatbot Bard a year ago, the company had shared inaccurate information about pictures of a planet outside the Earth's solar system in a promotional video, causing shares to slide as much as 9%.

Bard was re-branded as Gemini earlier this month and Google has introduced three versions of the product at different subscription tiers: Gemini Ultra, the largest and most capable of highly complex tasks; Gemini Pro, best for scaling across a wide range of tasks; and Gemini Nano, the most efficient for on-device tasks.

!chuds


lemmings discuss

:marseysnoo:

https://old.reddit.com/r/privacy/comments/1b6g219/psa_you_cant_delete_photos_uploaded_to_lemmy_so/?sort=controversial


Hacker News discussion: https://news.ycombinator.com/item?id=39709089

every single person who works at google should kill themselves

Unironically

Literally

and Figuratively

If you work at Google, you should commit suicide.

The future of AI: [Removed] OR As a large language model the bacon narwhals at midnight

Google AI pitch meeting: Everyone seems to be mocking our AI. How can we make it more woke and cringe?


These all seem good, right? Well, surprise, this startup did a little Oopsie.

This feels like a scam

like wtf? Look at their website....can't they use Devin to make a better one??? lol

https://www.cognition-labs.com/

Also if you go to the "preview" url it looks NOTHING like the video

https://preview.devin.ai/

(you could upload unlimited files before without logging in, they did a hotfix, see further down)

EDIT:

Are they running https://preview.devin.ai/ in dev mode? Not a react dev myself but i can see all their react components in the chrome debugger...

EDIT

Why are they using https://clerk.com/user-authentication to handle logins? If Devin is as amazing as they say im pretty sure building a simple login functionality should be trivial for it....

Heck it should even salt and hash the passwords right?
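For the record, the "salt and hash" part is a handful of lines with nothing but the standard library. A minimal sketch, not production auth (real services usually reach for bcrypt or argon2 through a maintained library):

```python
# Minimal salted password hashing with the Python standard library only.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # high iteration count to slow down brute-force attempts

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest    # store both next to the user record

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```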

EDIT

Ok maybe im reaching for straws here but if you inspect the DOM in the react debugger they have a prop called "afterSignInUrl", take one guess what the value of that prop is?

""

EDIT

Ok i need to stop but it's just fascinating

They actually dont do ANYTHING themselves

Analytics: Hotjar

Website: NextJS

Login: Clerk

Jobs: Ashby

Waitlist: Google docs (ROFL)

Learn more about their funding: A link to twitter

Their so called "Blog" isnt even an actual blog, it's literally a static page with hardcoded dates and entries....

Who are these people?

EDIT

Aaaaaand i went to Linkedin and checked...

Yeaaaa i'm getting heavy vibes of:

"We were laid off and now we try to scam some investors for money while we think of a better plan"

FINAL UPDATE (im tired)

So they "fixed" the upload now. If you try to upload a file, it says {"detail":"Not logged in"}

Ok, so no id on the error, no timestamp, no metadata whatsoever. How are users supposed to send in an error report on this? How are you logging this?
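(For what it's worth, that {"detail": ...} shape matches FastAPI's default error body. Assuming, and this is purely a guess, a FastAPI-style backend, attaching an error id and timestamp that users could actually quote is only a few lines:)

```python
# Hypothetical sketch: a global handler that adds an id + timestamp to every
# error response, so users have something concrete to put in a bug report.
import uuid
from datetime import datetime, timezone

from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.post("/upload")
async def upload():
    # mirrors the behaviour described above
    raise HTTPException(status_code=401, detail="Not logged in")

@app.exception_handler(HTTPException)
async def http_exception_handler(request: Request, exc: HTTPException):
    error_id = uuid.uuid4().hex  # something the user can quote in a report
    # a real service would also log error_id next to the server-side details
    return JSONResponse(
        status_code=exc.status_code,
        content={
            "detail": exc.detail,
            "error_id": error_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )
```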

And also...if you know the user isn't logged in, WHY DON'T YOU JUST DISABLE THE UPLOAD BUTTON. You can't upload a file, image, or key without being logged in. This is driving me insane.

Some people have said in the comments that this is supposed to be the best 0.00001% of developers in the world. And maybe i'm too stupid but this makes no sense to me.

Another thing that's interesting is that there is no error on the GUI side. The spinner just keeps spinning, meaning they don't have any form of error handling... nothing, not even a small toast or notification or anything. No generic or specific error.

Isn't this supposed to be in beta? Aren't there people using this? So if a user uploads a file, key, whatever and something goes wrong....just...nothing?

I'm sorry but this just smells...bad

:marseye#vilgrin:


Safe to invest in anything that reddit hates?

>Kevin Rose, who bought a $16.5 million house in LA's Brentwood, burned his ENS name and sold two NFTs for $500k+ EACH without paying royalties.

https://www.therichest.com/luxury-architecture/kevin-rose-buys-16-million-l-a-mansion/

>Tether printed another Billion and reddit is mad

>ETH issuance is going to be negative .5 percent this year rather than 4-5 percent inflation.

>Milady is settling its internal lawsuit between founders.

https://www.dlnews.com/articles/people-culture/milady-nfts-lose-a-third-of-their-value-as-founders-fight/

>Sam ALTMAN's world coin is FRICKING MOONING. go check the chart $WLD
