
Do you want to get gassed up?

:marseyfart::!marseybrap:


:marseysnoo:

https://old.reddit.com/r/technology/comments/10eslve/top_ibm_execs_again_accused_of_cheating_investors/


Orange Site:

https://news.ycombinator.com/item?id=34414420

:marseysnoo:

https://old.reddit.com/r/ScienceUncensored/comments/10erhla/conservatives_are_panicking_about_ai_bias_think/

:marseybluecheck:

https://x.com/search?q=https%3A%2F%2Fwww.vice.com%2Fen%2Farticle%2F93a4qe%2Fconservatives-panicking-about-ai-bias-years-too-late-think-chatgpt-has-gone-woke&src=typed_query

:marsey4chan:

https://archived.moe/g/thread/90946023


Orange Site:

https://news.ycombinator.com/item?id=34422627


Possibly the dorkiest article I've ever read. Treating helmet-wearing as a bad thing because it makes bicycling look scary is like saying seat belts make driving too intimidating for group X (women, minorities, etc.).

This is the opening paragraph, which sets the tone for the rest of the article:

Last year, health officials in Seattle decided to stop requiring bicyclists to wear helmets. Independent research found that nearly half of Seattle’s helmet tickets in recent years went to unhoused people, while Black and Native American cyclists in the city were four times and two times more likely, respectively, than white cyclists to be cited.

...

Helmet mandates intimidate potential riders, they argued, by framing cycling as an activity so dangerous it necessitates body armor.

My brain is too big to risk getting damaged while riding, thank you.

:marseybiker::marseybigbrain:


https://old.reddit.com/r/SubredditDrama/comments/10d0p2x/lengthy_grammatical_slapfight_in_rbuildapc/

I did it. I fixed @automeme's hashtag algorithm.

It's late as frick and I just deployed it, so I don't want to fully explain how it works, but the gist is that I have been secretly collecting ~500,000 rdrama comments. I wrote a script to figure out what terms correlate with other terms based on the comments. I also look at Wikipedia's top trending articles to see what people are talking about.
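The post doesn't include the script, but a minimal sketch of the term-correlation step might look like this, assuming plain co-occurrence counts scored with pointwise mutual information (PMI); the function names and the PMI choice are guesses for illustration, not the bot's actual code:

```python
# Hypothetical sketch of the term-correlation idea described above:
# count how often terms co-occur in the same comment, then rank
# candidate hashtag terms by pointwise mutual information (PMI).
import math
import re
from collections import Counter
from itertools import combinations

def tokenize(comment):
    """Lowercase a comment and return its unique word tokens."""
    return set(re.findall(r"[a-z']+", comment.lower()))

def build_pmi(comments):
    """Compute PMI for every pair of terms that co-occur in a comment."""
    term_counts = Counter()
    pair_counts = Counter()
    for comment in comments:
        terms = tokenize(comment)
        term_counts.update(terms)
        pair_counts.update(combinations(sorted(terms), 2))
    n = len(comments)
    pmi = {}
    for (a, b), joint in pair_counts.items():
        p_joint = joint / n
        p_a = term_counts[a] / n
        p_b = term_counts[b] / n
        pmi[(a, b)] = math.log(p_joint / (p_a * p_b))
    return pmi

def suggest_hashtag(post_terms, pmi, top_k=1):
    """Rank terms by their summed PMI with the words in a new post."""
    scores = Counter()
    for (a, b), score in pmi.items():
        if a in post_terms and b not in post_terms:
            scores[b] += score
        elif b in post_terms and a not in post_terms:
            scores[a] += score
    return [f"#{term}" for term, _ in scores.most_common(top_k)]
```

Tagging a new post would then be something like `suggest_hashtag(tokenize(post_text), pmi)`, with the Wikipedia-trending signal presumably mixed in as an extra weight.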

Hilariously, because of rdrama's fixation on trans people, and the fact that a prominent trans activist recently died, almost every post I tried ended up having "#FrickingTransWomen" appended to the end of it. I thought that was hilarious, so I knew I had to get the bot running before it went away.

Heymoon it won't fix anything, your bot sux, keep yourself safe

i dont care


If only America implemented this. (Hopefully soon)

:marseysnoo:

https://old.reddit.com/r/technology/comments/10d8r6n/huge_win_for_privacy_facebook_tracking_is_illegal/

https://old.reddit.com/r/tutanota/comments/107f0ci/huge_win_for_privacy_facebook_tracking_is_illegal/


https://www.techemails.com/p/bill-gates-tries-to-install-movie-maker


:#marseysal:

:marseysnoo:

https://old.reddit.com/r/technology/comments/10cdg44/wyoming_wants_to_phase_out_sales_of_evs_by_2035/

https://old.reddit.com/r/nottheonion/comments/10ch138/wyoming_wants_to_phase_out_sales_of_evs_by_2035/

https://old.reddit.com/r/teslamotors/comments/10cdgpe/wyoming_wants_to_phase_out_sales_of_evs_by_2035/

https://old.reddit.com/r/collapse/comments/10cimgl/wyoming_wants_to_phase_out_sales_of_evs_by_2035/

https://old.reddit.com/r/TeslaLounge/comments/10cjmws/wyoming_wants_to_phase_out_sales_of_evs_by_2035/

https://old.reddit.com/r/environment/comments/10ckvgl/wyoming_wants_to_phase_out_sales_of_evs_by_2035/

https://old.reddit.com/r/Conservative/comments/10co7ho/wyoming_wants_to_phase_out_sales_of_evs_by_2035/

Orange Site:

https://news.ycombinator.com/item?id=34390556

:marseybluecheck:

https://x.com/Teslarati/status/1614466674920505344#m

https://x.com/TheTeslaLife/status/1614650964622954498#m


https://ctl.utexas.edu/5-things-know-about-chatgpt

As a first step, learning about this tool will help instructors gain awareness and know to seek assistance when issues related to ChatGPT arise. In addition, the release of ChatGPT encourages us to revisit the best ways to assess student learning in a variety of instructional contexts (5). It invites us to ask important questions, such as:

  • Why and how do we best equip students as strong writers?

  • What other ways can students demonstrate learning in addition to written papers?

  • What is the best way to solicit student writing that is meaningful and authentic?

  • If students rely on ChatGPT as a source of information to answer factual questions, how will that affect their development of research skills?

This focus on the relationship between students and instructors and the educational mission of the university fits with broader efforts underway to reinforce the importance of the process of learning, including making and correcting mistakes. The university is in the process of refreshing our honor code and honor code affirmation to renew our commitment to supporting students in their journey to master complex knowledge and skills.

With these types of questions and issues in mind, we have gathered a variety of suggestions you can pick and choose to incorporate in your teaching practice if students’ use of ChatGPT is relevant for you. Incorporating 1-2 of these approaches may help ease concerns and challenges that could arise with the introduction of the ChatGPT tool.

Beginning of the semester:

  • Be clear on what you want your students to know and be able to do or demonstrate by the end of the course and why that knowledge is valuable to their lives. (See this resource for assistance in developing learning outcomes for your course.) Help students see that the ways you are assessing their learning are key to understanding what they are gaining from the course and where they may need extra coaching and support. (6)

  • Talk to your students about how relying heavily on this tool may interfere with achieving the learning outcomes you hope they will achieve in this course (e.g., problem solving, developing an authentic writing voice, etc.).

    • In particular, “If you can explain to students the value of writing, and convince them that you are genuinely interested in their ideas, they are less likely to reach for the workaround.” (7)
  • Have an open discussion with your students about the ethical implications of ChatGPT and the value of authentic learning for students' lifelong development as learners. This may include having conversations around digital literacy and bias in research and scholarship, as AI writing tools like ChatGPT are limited to the public source material they have access to on the internet. Don't feel you have to have all of the answers, as this is a continually evolving issue. (6)

Assignment design:

  • Ask students to reference and/or cite class materials, notes, and sources (particularly sources that are behind firewalls such as JSTOR articles) in their written assignments. This instruction is valuable because ChatGPT draws on text models from public websites.

  • Require students to reflect more deeply and critically on course topics. This is always a good assessment strategy, and ChatGPT currently performs better on more superficial and less detailed responses. (8)

  • Use in-class time for students to demonstrate knowledge and understanding in a variety of ways through low-tech, low-stakes in-person activities like freewriting and live presentations.

  • Craft an assignment where you generate a ChatGPT output based on a prompt and ask your students to critique the response, indicating where it did a good job of articulating key points and what nuances it missed. (For 10 other ways to creatively use ChatGPT in course assignments, see “Update your course syllabus for ChatGPT”; keep in mind that asking students to engage with ChatGPT may generate privacy concerns, so it may be better practice to provide them with a copy of ChatGPT responses that they can use.)

  • Focus on critical skills that artificial intelligence struggles with. NPR education correspondent Anya Kamenetz describes three of these areas as:

    • Give a hug: empathy, collaboration, communication, and leadership skills;

    • Solve a mystery: generating questions and problem finding; and

    • Tell a story: finding what's relevant in a sea of data or applying values, ethics, morals, or aesthetic principles to a situation. (9)

  • Carefully scaffold assignments with time and space for students to complete each step along the way, and consider whether the number of time-intensive tasks might require more bandwidth than students have to spend. Students are more likely to utilize a tool like ChatGPT when they are short on time. (6)

  • Treat ChatGPT as a tool that some students may want to use to help get started writing. For example, students who have difficulty starting writing assignments might be encouraged to generate a paragraph with ChatGPT as a stub that enables them to continue writing. As long as the student ultimately adds significant new material and thoroughly edits or ultimately eliminates the output from ChatGPT, they are producing a document that reflects their own work.

Classroom Climate:

One way to help encourage students to make better decisions about using tools such as ChatGPT is to design your classroom climate to engender mastery approaches to learning, which involve a focus on deeply understanding the knowledge and skills rather than simply achieving a particular score on an assessment. In a mastery-oriented classroom, students are more likely to engage in strategies that will help them truly learn the material rather than simply perform a task and receive a grade for their work.

Three simple tips for encouraging mastery approaches in higher education classrooms include:

  1. offering flexible evaluation design: consider providing opportunities for students to revise and redo specific portions of assignments;

  2. focusing feedback on process and effort: offer feedback oriented toward student effort and their learning processes rather than toward high grades and performance relative to others. When possible, offer elaborative feedback rather than feedback based simply on correctness;

  3. building a sense of belonging: discuss, emphasize, and model that making errors and mistakes is part of everyone's learning process rather than something that only poor performers or people who "don't get it" do.


:marseysnoo:

https://old.reddit.com/r/technology/comments/10bxkga/spy_software_found_a_worker_wasnt_working_as_much/

(Surprisingly /r/technology is dunking on the dumb foid)

https://old.reddit.com/r/overemployed/comments/10b7uka/spy_software_found_a_worker_wasnt_working_as_much/

https://old.reddit.com/r/antiwork/comments/10b7teb/spy_software_found_a_worker_wasnt_working_as_much/

https://old.reddit.com/r/byebyejob/comments/10c833x/spy_software_found_a_worker_wasnt_working_as_much/

I change my mind, I fricking hate Rust.

>:marseynerd:Hey rust, can I access my data?

>:marseyrave: No.

>:marseyreading: Why not?

>:marseyakshually: There exists a theoretical use case where accessing this data can cause issues.

>:marseygamer: Okay, Stack Overflow what do I do?

>:marseytrans2: Just write 400 lines of code so the pattern can exclusively work with one specific case

>:marseylongpost2: For fricks sake I guess I'll try. $ cargo build && ./target/debug/r-slur.exe

>:marseysnappyautism: function panic

>:marseyterrydavis: That’s enough of that. :taddance: Miss me with that :marseytrain: software. :realisticelephant: I code for god now.

JPG bros, how do we cope?
Reported by:
  • JimothyX5 : Stupid, unfunny and completely unrelated to drama

They're getting destroyed in court, aren't they?

Reported by:
  • JimothyX5 : Stupid, unfunny and completely unrelated to drama

Hi, I'm a long time reader of Slate Star Codex and I used to post on the Reddit forum until I got banned. There are a few reasons that I believe I got banned.

  1. Uncharitably claiming that Leftist censorship was a threat to the rationalist community

  2. Advocating for violence

  3. Not being kind

I understand why all of these things could have been a problem in the Reddit community, but I would like to know whether they're still going to be a problem here, since I don't want to invest a lot of time creating a profile and having good-faith discussions with people if I'm only going to be banned again. Here are the reasons that I think these three issues shouldn't be a problem anymore.

  1. I was right, and everybody who disagreed with me was wrong. The fact that the community had to move here proves it. I'm not expecting an apology but I think that time has proven me correct on that score.

  2. Violence is a completely justifiable response to tyranny. While calls to violence may be against Reddit rules (and the community was right to ban me from Reddit because my rhetoric could have caused problems for the mods) there are no such rules here. In fact, rdrama (which helped set up this offsite community, and whom you should all be grateful to) actively encourages calls to violence. If a rational and logical case can be made for violence then I think there is no good reason not to hear that case out. If you're forced to censor people you disagree with because you're unable to make a stronger case for pacifism over violence in the open marketplace of ideas, then you should question whether your pacifism is actually a worthwhile philosophy.

  3. Kindness and truth are different terminal values. If you optimize for kindness then it is self-evident that you will have to sacrifice truth at some point. Obviously the Reddit community has chosen kindness as its terminal value, but I'm hoping that this offsite community is enlightened enough to choose truth.

I'm linking to a few articles from my Substack here so you have a few examples of my style of writing and can make a better judgement about whether I would be a good fit for the offsite community. I'm also on rdrama where my username is sirpingsalot. If you think I'm not a good fit for the offsite either, then no hard feelings - I'm happy to take my ideas to more sympathetic communities instead. I just don't want to put in the effort of investing time and energy here if I'm only going to get banned again for the same reasons.


Orange site: https://news.ycombinator.com/item?id=34377910

Hello. This is Matthew Butterick. I'm a writer, designer, programmer, and lawyer. In November 2022, I teamed up with the amazingly excellent class-action litigators Joseph Saveri, Cadio Zirpoli, and Travis Manfredi at the Joseph Saveri Law Firm to file a lawsuit against GitHub Copilot for its "unprecedented open-source software piracy". (That lawsuit is still in progress.)

Since then, we've heard from people all over the world---especially writers, artists, programmers, and other creators---who are concerned about AI systems being trained on vast amounts of copyrighted work with no consent, no credit, and no compensation.

Today, we're taking another step toward making AI fair & ethical for everyone. On behalf of three wonderful artist plaintiffs---Sarah Andersen, Kelly McKernan, and Karla Ortiz---we've filed a class-action lawsuit against Stability AI, DeviantArt, and Midjourney for their use of Stable Diffusion, a 21st-century collage tool that remixes the copyrighted works of millions of artists whose work was used as training data.

Joining as co-counsel are the terrific litigators Brian Clark and Laura Matson of Lockridge Grindal Nauen P.L.L.P.

Today's filings:

As a lawyer who is also a longtime member of the visual-arts community, it's an honor to stand up on behalf of fellow artists and continue this vital conversation about how AI will coexist with human culture and creativity.

The image-generator companies have made their views clear.

Now they can hear from artists.

Stable Diffusion is an artificial intelligence (AI) software product, released in August 2022 by a company called Stability AI.

Stable Diffusion contains unauthorized copies of millions---and possibly billions---of copyrighted images. These copies were made without the knowledge or consent of the artists.

Even assuming nominal damages of $1 per image, the value of this misappropriation would be roughly $5 billion. (For comparison, the largest art heist ever was the 1990 theft of 13 artworks from the Isabella Stewart Gardner Museum, with a current estimated value of $500 million.)

Stable Diffusion belongs to a category of AI systems called generative AI. These systems are trained on a certain kind of creative work---for instance text, software code, or images---and then remix these works to derive (or "generate") more works of the same kind.

Having copied the five billion images---without the consent of the original artists---Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.

These resulting images may or may not outwardly resemble the training images. Nevertheless, they are derived from copies of the training images, and compete with them in the marketplace. At minimum, Stable Diffusion's ability to flood the market with an essentially unlimited number of infringing images will inflict permanent damage on the market for art and artists.

Even Stability AI CEO Emad Mostaque has forecast that "[f]uture [AI] models will be fully licensed". But Stable Diffusion is not. It is a parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future.

The problem with diffusion

The diffusion technique was invented in 2015 by AI researchers at Stanford University. The diagram below, taken from the Stanford team's research, illustrates the two phases of the diffusion process using a spiral as the example training image.

https://i.rdrama.net/images/1684135631929439.webp

The first phase in diffusion is to take an image and progressively add more visual noise to it in a series of steps. (This process is depicted in the top row of the diagram.) At each step, the AI records how the addition of noise changes the image. By the last step, the image has been "diffused" into essentially random noise.

The second phase is like the first, but in reverse. (This process is depicted in the bottom row of the diagram, which reads right to left.) Having recorded the steps that turn a certain image into noise, the AI can run those steps backwards. Starting with some random noise, the AI applies the steps in reverse. By removing noise (or "denoising") the data, the AI will emit a copy of the original image.

In the diagram, the reconstructed spiral (in red) has some fuzzy parts in the lower half that the original spiral (in blue) does not. Though the red spiral is plainly a copy of the blue spiral, in computer terms it would be called a lossy copy, meaning some details are lost in translation. This is true of numerous digital data formats, including MP3 and JPEG, that also make highly compressed copies of digital data by omitting small details.

In short, diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it's no different from an MP3 or JPEG---a way of storing a compressed copy of certain digital data.
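As a toy illustration of those two phases, here is a small numpy sketch using a standard DDPM-style noise schedule. The "oracle" that remembers the true noise stands in for the trained network; that shortcut is an assumption made purely to expose the closed-form math, not a claim about how deployed systems behave:

```python
# Toy sketch of the two diffusion phases described above, using a
# standard DDPM-style noise schedule. The "oracle" that remembers the
# true noise stands in for the trained network (illustrative only).
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)    # per-step noise amounts
alpha_bars = np.cumprod(1.0 - betas)  # cumulative signal retention

# Example training image: points along a 2-D spiral, as in the diagram.
theta = np.linspace(0, 4 * np.pi, 256)
x0 = np.stack([theta * np.cos(theta), theta * np.sin(theta)], axis=1)
x0 /= np.abs(x0).max()

def forward(x0, t, eps):
    """Phase 1: jump straight to step t of the noising process."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def denoise(x_t, t, eps_pred):
    """Phase 2: recover the clean data from a noise prediction."""
    return (x_t - np.sqrt(1.0 - alpha_bars[t]) * eps_pred) / np.sqrt(alpha_bars[t])

rng = np.random.default_rng(0)
eps = rng.standard_normal(x0.shape)
x_T = forward(x0, T - 1, eps)        # spiral -> near-random noise
x0_hat = denoise(x_T, T - 1, eps)    # noise -> spiral, given the true eps
print("max reconstruction error:", np.abs(x0_hat - x0).max())
```

A real model predicts the noise with a trained network and removes it over many small steps; the sketch only shows why the forward and reverse phases are mathematical mirror images.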

Interpolating with latent images

In 2020, the diffusion technique was improved by researchers at UC Berkeley in two ways:

1. They showed how a diffusion model could store its training images in a more compressed format without impacting its ability to reconstruct high-fidelity copies. These compressed copies of training images are known as latent images.

2. They found that these latent images could be interpolated---meaning, blended mathematically---to produce new derivative images.

The diagram below, taken from the Berkeley team's research, shows how this process works.

https://i.rdrama.net/images/16841356326009624.webp

The image in the red frame has been interpolated from the two “Source” images pixel by pixel. It looks like two translucent face images stacked on top of each other, not a single convincing face.

https://i.rdrama.net/images/16841356335785332.webp https://i.rdrama.net/images/16841356340051105.webp

The image in the green frame has been generated differently. In that case, the two source images have been compressed into latent images. Once these latent images have been interpolated, this newly interpolated latent image has been reconstructed into pixels using the denoising process. Compared to the pixel-by-pixel interpolation, the advantage is apparent: the interpolation based on latent images looks like a single convincing human face, not an overlay of two faces.

Despite the difference in results, in copyright terms, these two modes of interpolation are equivalent: they both generate derivative works by interpolating two source images.
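A minimal sketch of that structural difference, with a crude average-pooling "encoder" standing in for the trained encoder and decoder (the stand-in is an assumption; a real system learns both), might look like:

```python
# Sketch contrasting the two interpolation modes described above.
# A real system uses a trained encoder/decoder; a crude average-pooling
# "encoder" stands in here, purely to show where the blending happens.
import numpy as np

def encode(img, factor=8):
    """Stand-in 'latent': average-pool the image by `factor`."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(latent, factor=8):
    """Stand-in decoder: nearest-neighbour upsample back to pixels."""
    return latent.repeat(factor, axis=0).repeat(factor, axis=1)

def lerp(a, b, w):
    """Linear interpolation, the 'blended mathematically' step."""
    return (1.0 - w) * a + w * b

rng = np.random.default_rng(1)
face_a = rng.random((64, 64))   # placeholders for the two source images
face_b = rng.random((64, 64))

pixel_blend = lerp(face_a, face_b, 0.5)                           # red-frame mode
latent_blend = decode(lerp(encode(face_a), encode(face_b), 0.5))  # green-frame mode
```

Both paths compute the same linear blend; they differ only in whether it runs on raw pixels or on the compressed latent representation.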

Conditioning with text prompts

In 2022, the diffusion technique was further improved by researchers in Munich. These researchers figured out how to shape the denoising process with extra information. This process is called conditioning. (One of these researchers, Robin Rombach, is now employed by Stability AI as a developer of Stable Diffusion.)

The most common tool for conditioning is short text descriptions, also known as text prompts, that describe elements of the image, e.g.---"a dog wearing a baseball cap while eating ice cream". This gave rise to the dominant interface of Stable Diffusion and other AI image generators: converting a text prompt into an image.

The text-prompt interface serves another purpose, however. It creates a layer of magical misdirection that makes it harder for users to coax out obvious copies of the training images (though not impossible). Nevertheless, because all the visual information in the system is derived from the copyrighted training images, the images emitted---regardless of outward appearance---are necessarily works derived from those training images.
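One widely used way a prompt conditions each denoising step is classifier-free guidance, sketched below; the noise predictor here is a dummy stand-in for the trained U-Net, and everything about its internals is illustrative:

```python
# Sketch of classifier-free guidance, a common way a text prompt
# "conditions" each denoising step. `dummy_noise_model` is a stand-in
# for the trained network; its internals are purely illustrative.
import numpy as np

def dummy_noise_model(x_t, prompt_embedding):
    """Stand-in noise predictor; a real one is a trained U-Net."""
    bias = 0.0 if prompt_embedding is None else float(prompt_embedding.mean())
    return 0.1 * x_t + bias

def guided_noise(x_t, prompt_embedding, guidance_scale=7.5):
    """Blend unconditioned and prompt-conditioned predictions,
    pushing the denoiser toward images that match the prompt."""
    eps_uncond = dummy_noise_model(x_t, None)              # no conditioning
    eps_cond = dummy_noise_model(x_t, prompt_embedding)    # prompt conditioning
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

x_t = np.zeros((8, 8))               # toy "noisy image"
prompt = np.array([0.2, -0.1, 0.4])  # toy "text embedding"
print(guided_noise(x_t, prompt).mean())
```

The guidance scale (7.5 is a common default) controls how hard each step is pushed toward the prompt-conditioned prediction.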

The defendants

Stability AI

Stability AI, founded by Emad Mostaque, is based in London.

Stability AI funded LAION, a German organization that is creating ever-larger image datasets---without consent, credit, or compensation to the original artists---for use by AI companies.

Stability AI is the developer of Stable Diffusion. Stability AI trained Stable Diffusion using the LAION dataset.

Stability AI also released DreamStudio, a paid app that packages Stable Diffusion in a web interface.

DeviantArt

DeviantArt was founded in 2000 and has long been one of the largest artist communities on the web.

As shown by Simon Willison and Andy Baio, thousands---and probably closer to millions---of images in LAION were copied from DeviantArt and used to train Stable Diffusion.

Rather than stand up for its community of artists by protecting them against AI training, DeviantArt instead chose to release DreamUp, a paid app built around Stable Diffusion. In turn, a flood of AI-generated art has inundated DeviantArt, crowding out human artists.

When confronted about the ethics and legality of these maneuvers during a live Q&A session in November 2022, members of the DeviantArt management team, including CEO Moti Levy, could not explain why they betrayed their artist community by embracing Stable Diffusion, while intentionally violating their own terms of service and privacy policy.

Midjourney

Midjourney was founded in 2021 by David Holz in San Francisco. Midjourney offers a text-to-image generator through Discord and a web app.

Though holding itself out as a "research lab", Midjourney has cultivated a large audience of paying customers who use Midjourney's image generator professionally. Holz has said he wants Midjourney to be "focused toward making everything beautiful and artistic looking."

To that end, Holz has admitted that Midjourney is trained on "a big scrape of the internet". Though when asked about the ethics of massive copying of training images, he said---

There are no laws specifically about that.

And when Holz was further asked about allowing artists to opt out of training, he said---

We're looking at that. The challenge now is finding out what the rules are.

We look forward to helping Mr. Holz find out about the many state and federal laws that protect artists and their work.

The plaintiffs

Our plaintiffs are wonderful, accomplished artists who have stepped forward to represent a class of thousands---possibly millions---of fellow artists affected by generative AI.

https://i.rdrama.net/images/16841356346753697.webp

https://stablediffusionlitigation.com/


I'd love to be an epic 1337 hax0r admin of my own circlejerk website but most of my experience is in hosting video game servers.


:marsey4chan:

https://archived.moe/g/thread/90873235
