Bill Gates tries to install Movie Maker
— Internal Tech Emails (@TechEmails) January 15, 2023
January 15, 2003 pic.twitter.com/QIYv6JtIzL
https://www.techemails.com/p/bill-gates-tries-to-install-movie-maker
https://ctl.utexas.edu/5-things-know-about-chatgpt
As a first step, learning about this tool will help instructors gain awareness and know to seek assistance when issues related to ChatGPT arise. In addition, the release of ChatGPT encourages us to revisit the best ways to assess student learning in a variety of instructional contexts (5). It invites us to ask important questions, such as:
Why and how do we best equip students as strong writers?
What other ways can students demonstrate learning in addition to written papers?
What is the best way to solicit student writing that is meaningful and authentic?
If students rely on ChatGPT as a source of information to answer factual questions, how will that affect their development of research skills?
This focus on the relationship between students and instructors and the educational mission of the university fits with broader efforts underway to reinforce the importance of the process of learning, including making and correcting mistakes. The university is in the process of refreshing our honor code and honor code affirmation to renew our commitment to supporting students in their journey to master complex knowledge and skills.
With these types of questions and issues in mind, we have gathered a variety of suggestions you can pick and choose to incorporate in your teaching practice if students’ use of ChatGPT is relevant for you. Incorporating 1-2 of these approaches may help ease concerns and challenges that could arise with the introduction of the ChatGPT tool.
Beginning of the semester:
Be clear on what you want your students to know and be able to do or demonstrate by the end of the course and why that knowledge is valuable to their lives. (See this resource for assistance in developing learning outcomes for your course.) Help students see that the ways you are assessing their learning are key to understanding what they are gaining from the course and where they may need extra coaching and support. (6)
Talk to your students about how relying heavily on this tool may interfere with achieving the learning outcomes you hope they will achieve in this course (e.g., problem solving, developing an authentic writing voice, etc.).
- In particular, “If you can explain to students the value of writing, and convince them that you are genuinely interested in their ideas, they are less likely to reach for the workaround.” (7)
Have an open discussion with your students about the ethical implications of ChatGPT and the value of authentic learning for students’ lifelong development as learners. This may include having conversations around digital literacy and bias in research and scholarship, as AI writing tools like ChatGPT are limited to the public source material they have access to on the internet. Don’t feel you have to have all of the answers, as this is a continually evolving issue. (6)
Assignment design:
Ask students to reference and/or cite class materials, notes, and sources (particularly sources that are behind paywalls, such as JSTOR articles) in their written assignments. This instruction is valuable because ChatGPT draws on text models from public websites.
Require students to reflect more deeply and critically on course topics. This tip is always a good assessment strategy, and ChatGPT currently performs better on more superficial and less detailed responses. (8)
Use in-class time for students to demonstrate knowledge and understanding in a variety of ways through low-tech, low-stakes in-person activities like freewriting and live presentations.
Craft an assignment where you generate a ChatGPT output based on a prompt and ask your students to critique the response, indicating where it did a good job of articulating key points and what nuances it missed. (For 10 other ways to creatively use ChatGPT in course assignments, see “Update your course syllabus for ChatGPT”; keep in mind that asking students to engage with ChatGPT may generate privacy concerns, so it may be better practice to provide them with a copy of ChatGPT responses that they can use.)
Focus on critical skills that artificial intelligence struggles with. NPR education correspondent Anya Kamanetz describes three of these areas as:
Give a hug: empathy, collaboration, communication, and leadership skills;
Solve a mystery: generating questions and problem finding; and
Tell a story: finding what's relevant in a sea of data or applying values, ethics, morals, or aesthetic principles to a situation. (9)
Carefully scaffold assignments with time and space for students to complete each step along the way, and consider whether the number of time-intensive tasks might require more bandwidth than students have to spend. Students are more likely to utilize a tool like ChatGPT when they are short on time. (6)
Treat ChatGPT as a tool that some students may want to use to get started on writing. For example, students who have difficulty starting writing assignments might be encouraged to generate a paragraph with ChatGPT as a stub that enables them to continue writing. As long as the student ultimately adds significant new material and thoroughly edits or ultimately eliminates the output from ChatGPT, they are producing a document that reflects their own work.
Classroom Climate:
One way to help encourage students to make better decisions about using tools such as ChatGPT is to design your classroom climate to engender mastery approaches to learning, which involve a focus on deeply understanding the knowledge and skills rather than simply achieving a particular score on an assessment. In a mastery-oriented classroom, students are more likely to engage in strategies that help them truly learn the material, rather than working only to complete a task and receive a grade for it.
Three simple tips for encouraging mastery approaches in higher education classrooms include:
offering flexible evaluation design: consider providing opportunities for students to revise and redo specific portions of assignments;
focusing feedback on process and effort: offer feedback oriented toward student effort and their learning processes rather than on high grades and performance relative to others. When possible offer elaborative feedback rather than feedback based simply on correctness.
building a sense of belonging: discuss, emphasize, and model that making errors and mistakes is part of everyone's learning process rather than something that only poor performers or people who "don't get it" do.
- JimothyX5 : Stupid, unfunny and completely unrelated to drama
Hi, I'm a long-time reader of Slate Star Codex and I used to post on the Reddit forum until I got banned. There are a few reasons that I believe I got banned.
Uncharitably claiming that Leftist censorship was a threat to the rationalist community
Advocating for violence
Not being kind
I understand why all of these things could have been a problem on the Reddit community, but I would like to know if they're still going to be a problem here, since I don't want to invest a lot of time creating a profile and having good-faith discussions with people if I'm only going to be banned again. Here are the reasons that I think these three issues shouldn't be a problem anymore.
I was right, and everybody who disagreed with me was wrong. The fact that the community had to move here proves it. I'm not expecting an apology but I think that time has proven me correct on that score.
Violence is a completely justifiable response to tyranny. While calls to violence may be against Reddit rules (and the community was right to ban me from Reddit because my rhetoric could have caused problems for the mods) there are no such rules here. In fact, rdrama (which helped set up this offsite community, and whom you should all be grateful to) actively encourages calls to violence. If a rational and logical case can be made for violence then I think there is no good reason not to hear that case out. If you're forced to censor people you disagree with because you're unable to make a stronger case for pacifism over violence in the open marketplace of ideas, then you should question whether your pacifism is actually a worthwhile philosophy.
Kindness and truth are different terminal values. If you optimize for kindness then it is self-evident that you will have to sacrifice truth at some point. Obviously the Reddit community has chosen kindness as its terminal value, but I'm hoping that this offsite community is enlightened enough to choose truth.
I'm linking to a few articles from my Substack here so you have a few examples of my style of writing and can make a better judgement about whether I would be a good fit for the offsite community. I'm also on rdrama where my username is sirpingsalot. If you think I'm not a good fit for the offsite either, then no hard feelings - I'm happy to take my ideas to more sympathetic communities instead. I just don't want to put in the effort of investing time and energy here if I'm only going to get banned again for the same reasons.
Orange site: https://news.ycombinator.com/item?id=34377910
Hello. This is Matthew Butterick. I'm a writer, designer, programmer, and lawyer. In November 2022, I teamed up with the amazingly excellent class-action litigators Joseph Saveri, Cadio Zirpoli, and Travis Manfredi at the Joseph Saveri Law Firm to file a lawsuit against GitHub Copilot for its "unprecedented open-source software piracy". (That lawsuit is still in progress.)
Since then, we've heard from people all over the world---especially writers, artists, programmers, and other creators---who are concerned about AI systems being trained on vast amounts of copyrighted work with no consent, no credit, and no compensation.
Today, we're taking another step toward making AI fair & ethical for everyone. On behalf of three wonderful artist plaintiffs---Sarah Andersen, Kelly McKernan, and Karla Ortiz---we've filed a class-action lawsuit against Stability AI, DeviantArt, and Midjourney for their use of Stable Diffusion, a 21st-century collage tool that remixes the copyrighted works of millions of artists whose work was used as training data.
Joining as co-counsel are the terrific litigators Brian Clark and Laura Matson of Lockridge Grindal Nauen P.L.L.P.
Today's filings:
As a lawyer who is also a longtime member of the visual-arts community, it's an honor to stand up on behalf of fellow artists and continue this vital conversation about how AI will coexist with human culture and creativity.
The image-generator companies have made their views clear.
Now they can hear from artists.
Stable Diffusion is an artificial intelligence (AI) software product, released in August 2022 by a company called Stability AI.
Stable Diffusion contains unauthorized copies of millions---and possibly billions---of copyrighted images. These copies were made without the knowledge or consent of the artists.
Even assuming nominal damages of $1 per image, the value of this misappropriation would be roughly $5 billion. (For comparison, the largest art heist ever was the 1990 theft of 13 artworks from the Isabella Stewart Gardner Museum, with a current estimated value of $500 million.)
Stable Diffusion belongs to a category of AI systems called generative AI. These systems are trained on a certain kind of creative work---for instance text, software code, or images---and then remix these works to derive (or "generate") more works of the same kind.
Having copied the five billion images---without the consent of the original artists---Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.
These resulting images may or may not outwardly resemble the training images. Nevertheless, they are derived from copies of the training images, and compete with them in the marketplace. At minimum, Stable Diffusion's ability to flood the market with an essentially unlimited number of infringing images will inflict permanent damage on the market for art and artists.
Even Stability AI CEO Emad Mostaque has forecast that "[f]uture [AI] models will be fully licensed". But Stable Diffusion is not. It is a parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future.
The diffusion technique was invented in 2015 by AI researchers at Stanford University. The diagram below, taken from the Stanford team's research, illustrates the two phases of the diffusion process using a spiral as the example training image.
The first phase in diffusion is to take an image and progressively add more visual noise to it in a series of steps. (This process is depicted in the top row of the diagram.) At each step, the AI records how the addition of noise changes the image. By the last step, the image has been "diffused" into essentially random noise.
The second phase is like the first, but in reverse. (This process is depicted in the bottom row of the diagram, which reads right to left.) Having recorded the steps that turn a certain image into noise, the AI can run those steps backwards. Starting with some random noise, the AI applies the steps in reverse. By removing noise (or "denoising") the data, the AI will emit a copy of the original image.
In the diagram, the reconstructed spiral (in red) has some fuzzy parts in the lower half that the original spiral (in blue) does not. Though the red spiral is plainly a copy of the blue spiral, in computer terms it would be called a lossy copy, meaning some details are lost in translation. This is true of numerous digital data formats, including MP3 and JPEG, that also make highly compressed copies of digital data by omitting small details.
In short, diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it's no different from an MP3 or JPEG---a way of storing a compressed copy of certain digital data.
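The two phases described above can be sketched in a few lines of NumPy. This is a toy illustration only: it uses a 1-D signal in place of an image, and it records the exact noise added at each forward step so that the reverse phase can undo the steps precisely, as in the spiral diagram. (In a trained diffusion model the per-step noise is estimated by a neural network rather than stored verbatim; the sketch mirrors the post's description of the process, not a production system.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D signal standing in for pixel data.
x0 = np.sin(np.linspace(0, 4 * np.pi, 64))

# Forward phase: progressively mix in Gaussian noise over T steps,
# recording the noise added at each step (top row of the diagram).
T, beta = 50, 0.05
noises = []
x = x0.copy()
for _ in range(T):
    eps = rng.standard_normal(x.shape)
    noises.append(eps)
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * eps  # one diffusion step

# Reverse phase ("denoising"): undo the recorded steps in reverse
# order (bottom row of the diagram, read right to left).
for eps in reversed(noises):
    x = (x - np.sqrt(beta) * eps) / np.sqrt(1 - beta)

# With the per-step noise recorded exactly, the original signal is
# recovered up to floating-point error -- a lossy-in-practice,
# exact-in-principle reconstruction.
print(np.max(np.abs(x - x0)))
```

After the forward loop, `x` is close to pure noise; after the reverse loop it matches `x0` almost exactly, which is the "reconstruct a copy of the training data through denoising" behavior the post describes.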
Interpolating with latent images
In 2020, the diffusion technique was improved by researchers at UC Berkeley in two ways:
1. They showed how a diffusion model could store its training images in a more compressed format without impacting its ability to reconstruct high-fidelity copies. These compressed copies of training images are known as latent images.
2. They found that these latent images could be interpolated---meaning, blended mathematically---to produce new derivative images.
The diagram below, taken from the Berkeley team's research, shows how this process works.
The image in the red frame has been interpolated from the two “Source” images pixel by pixel. It looks like two translucent face images stacked on top of each other, not a single convincing face.
The image in the green frame has been generated differently. In that case, the two source images have been compressed into latent images. Once these latent images have been interpolated, this newly interpolated latent image has been reconstructed into pixels using the denoising process. Compared to the pixel-by-pixel interpolation, the advantage is apparent: the interpolation based on latent images looks like a single convincing human face, not an overlay of two faces.
Despite the difference in results, in copyright terms, these two modes of interpolation are equivalent: they both generate derivative works by interpolating two source images.
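The mathematical equivalence of the two interpolation modes can be shown with a small sketch: the blend formula is the same linear combination whether it is applied to raw pixels or to compressed latent codes. The encoder/decoder here is a made-up linear projection standing in for the learned networks in a real diffusion model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy "source images" as flat pixel vectors (16 "pixels" each).
img_a = rng.random(16)
img_b = rng.random(16)

def interpolate(u, v, t=0.5):
    """Linear blend of two vectors. The arithmetic is identical
    whether u and v are raw pixels or compressed latent codes."""
    return (1 - t) * u + t * v

# Pixel-space interpolation: blends the images directly, giving the
# "two translucent faces stacked on top of each other" effect.
pixel_mix = interpolate(img_a, img_b)

# Latent-space interpolation: encode each image to a compressed code,
# blend the codes, then decode. The linear maps below are toy
# stand-ins for the learned encoder/decoder.
W = rng.standard_normal((4, 16)) / 4   # toy encoder: 16 pixels -> 4 latents
encode = lambda x: W @ x
decode = lambda z: W.T @ z             # toy decoder: 4 latents -> 16 pixels

latent_mix = decode(interpolate(encode(img_a), encode(img_b)))

print(pixel_mix.shape, latent_mix.shape)
```

Both paths produce a new vector derived from the same two sources by the same `interpolate` step; only where in the pipeline the blend happens differs, which is the equivalence the paragraph above asserts.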
Conditioning with text prompts
In 2022, the diffusion technique was further improved by researchers in Munich. These researchers figured out how to shape the denoising process with extra information. This process is called conditioning. (One of these researchers, Robin Rombach, is now employed by Stability AI as a developer of Stable Diffusion.)
The most common tool for conditioning is short text descriptions, also known as text prompts, that describe elements of the image, e.g.---"a dog wearing a baseball cap while eating ice cream". (Result shown at right.) This gave rise to the dominant interface of Stable Diffusion and other AI image generators: converting a text prompt into an image.
The text-prompt interface serves another purpose, however. It creates a layer of magical misdirection that makes it harder for users to coax out obvious copies of the training images (though not impossible). Nevertheless, because all the visual information in the system is derived from the copyrighted training images, the images emitted---regardless of outward appearance---are necessarily works derived from those training images.
Stability AI
Stability AI, founded by Emad Mostaque, is based in London.
Stability AI funded LAION, a German organization that is creating ever-larger image datasets---without consent, credit, or compensation to the original artists---for use by AI companies.
Stability AI is the developer of Stable Diffusion. Stability AI trained Stable Diffusion using the LAION dataset.
Stability AI also released DreamStudio, a paid app that packages Stable Diffusion in a web interface.
DeviantArt
DeviantArt was founded in 2000 and has long been one of the largest artist communities on the web.
As shown by Simon Willison and Andy Baio, thousands---and probably closer to millions---of images in LAION were copied from DeviantArt and used to train Stable Diffusion.
Rather than stand up for its community of artists by protecting them against AI training, DeviantArt instead chose to release DreamUp, a paid app built around Stable Diffusion. In turn, a flood of AI-generated art has inundated DeviantArt, crowding out human artists.
When confronted about the ethics and legality of these maneuvers during a live Q&A session in November 2022, members of the DeviantArt management team, including CEO Moti Levy, could not explain why they betrayed their artist community by embracing Stable Diffusion, while intentionally violating their own terms of service and privacy policy.
Midjourney
Midjourney was founded in 2021 by David Holz in San Francisco. Midjourney offers a text-to-image generator through Discord and a web app.
Though holding itself out as a "research lab", Midjourney has cultivated a large audience of paying customers who use Midjourney's image generator professionally. Holz has said he wants Midjourney to be "focused toward making everything beautiful and artistic looking."
To that end, Holz has admitted that Midjourney is trained on "a big scrape of the internet". Though when asked about the ethics of massive copying of training images, he said---
There are no laws specifically about that.
And when Holz was further asked about allowing artists to opt out of training, he said---
We're looking at that. The challenge now is finding out what the rules are.
We look forward to helping Mr. Holz find out about the many state and federal laws that protect artists and their work.
Our plaintiffs are wonderful, accomplished artists who have stepped forward to represent a class of thousands---possibly millions---of fellow artists affected by generative AI.
I'd love to be an epic 1337 hax0r admin of my own circlejerk website but most of my experience is in hosting video game servers.
Due Diligence, more like dude bussy lmao.
Inb4 someone calls Bardfinn with this.
You can't convince me otherwise
First, /r/aiwars runs a poll:
https://old.reddit.com/r/aiwars/comments/108ohhe/how_do_you_identify_politically/
Then, /r/DefendingAIArt discusses:
Unfortunately, /r/LoveForAIArt and our virtuous mission have no mention in these threads.
Orange site: https://news.ycombinator.com/item?id=34339285