Machine learning """"researcher"""" just so happens to stumble upon a paper that solves every problem ever. Even though he's only read the abstract, he sure seems to know a lot about it! Some /r/machinelearning users take issue with this.

57  2017-10-18 by codosotopo

Skip to The Juicy Stuff if you don't want to hear my beautiful background

For those of you who don't know, machine learning is a buzzword field of research at the intersection of statistics and computer science. Machine learning research focuses on teaching computers how to learn patterns in data and teaching journalists how to exaggerate findings for pageviews. (Digression: remember that article about how two AIs at Facebook invented their own languages, prompting the researchers to shut them down because they were too dangerous? What actually happened is that the AIs were meant to "speak" English, but converged to a local optimum in which they spoke gibberish instead. The experiment didn't go as planned, so they shut it down. Shit like this happens every day.)

Anywho, now that machine learning (and deep learning, which is machine learning but more buzzwordy) is so popular, people are starting to wonder why all of these crazy methods actually work. Additionally, lots of big names such as Google publish research that lacks enough information to be reproducible. These factors have led to a big push to find theoretical explanations for how this magic works.

The Juicy Stuff

Recently, a paper claiming to answer every single question about why deep learning works was posted on /r/machinelearning here. The poster says:

I only read the abstract, and the end of the paper. I think I'm getting chill up my spine... Did they solve deep learning?

Wow! A paper that solved deep learning! For good! Period! We're witnessing a revolution! The poster's pretty damn helpful as well, working through so many questions raised by commenters. He knows so much for someone who's just reading through the paper for the first time! That's it, ladies and gentlemen, we're officially living in the future.

Of course, an internet detective finds that the OP was the author of the paper itself. Surprise surprise. Our poor, misunderstood machine learning hero does not react well to these accusations. Other posters start to catch on to the fishy nature of OP's comments and paper, some nicer than others. Some attack the very validity of the paper's claims themselves.

TL;DR: OP tries to pass off his own paper as the brilliant, earth-shattering work of anonymous researchers. /r/machinelearning doesn't buy it.

28 comments

Providing a Safe Space™ from SRD since 2009!

Snapshots:

  1. This Post - archive.org, megalodon.jp*, removeddit.com, archive.is

  2. /r/machinelearning - archive.org, megalodon.jp*, archive.is*

  3. here - archive.org, megalodon.jp*, removeddit.com, archive.is

  4. He knows so much for someone who's ... - archive.org, megalodon.jp*, removeddit.com, archive.is

  5. an internet detective finds that th... - archive.org, megalodon.jp*, removeddit.com, archive.is

  6. does not react well - archive.org, megalodon.jp*, removeddit.com, archive.is

  7. to these accusations. - archive.org, megalodon.jp*, removeddit.com, archive.is

  8. some - archive.org, megalodon.jp*, removeddit.com, archive.is

  9. others - archive.org, megalodon.jp*, removeddit.com, archive.is

  10. Some attack the very validity of th... - archive.org, megalodon.jp*, removeddit.com, archive.is

I am a bot. (Info / Contact)

THIS is the type of post we need! Take notes /u/IvankaTrumpisMyWaifu, /u/Thot_Crusher, /u/Ed_BussyToast and /u/AnnoysTheGoys

More than three pings nullifies all of them.

Damn it. I knew but forgot :/

Such a total faggot.

/u/Ed_ButteredToast u/annarchist /u/ComedicSans why are we pinging people again

Because you're retarded and if we don't keep reminding you, you might forget to breathe :(

It's out out in, right? And what about this plastic bag that /u/Ed_ButteredToast where does that go?

(☞゚ヮ゚)☞

lol i wrote that paper brah

I wrote it too and so did my wife

Of course, an internet detective finds that the OP was the author of the paper itself.

Bullshit. The author of the paper knows their QFT and is proud of describing DNNs using a scalar field theory. OP's comments don't touch on that aspect at all, beyond some parroting here and here.

You very well may be right, but which makes for better drama: "Of course, an internet detective finds that the OP was the author of the paper itself," or "The author could really be anyone, who knows"?

I still have this feeling that they're the same guy, if only because of their really similar writing styles. That said, it also makes sense that this guy's just an overenthusiastic researcher who saw something that seems cool and got really excited about it.

The latter, because then we can repeatedly find out who the real author is every few weeks.

/u/dukwon, I refer you to where the sidebar says:

Do your part to keep our community healthy by blowing everything out of proportion and making literally everything as dramatic as possible.

Verdict: /u/dukwon's complaint against /u/codosotopo is not sustained.

The first of its kind, AlexNet (Krizhevsky et al., 2012), led to many other neural architectures have been proposed to achieve start-of-the-art results in image processing at the time

Lol the second fucking sentence has a grammatical error. Even the labs filled completely with foreign nationals write better than that.

I took a machine learning course in my last year of college and I understood maybe a third of it. I know symmetry breaking is a physics concept but I have no idea what it means in the context of machine learning.

My understanding is that a symmetry is any value in a physical system that doesn't change after a transformation. Like a still object's position won't change with a time translation. Then spontaneous symmetry breaking is when a system breaks one of these symmetries on its own, I guess in physics it would answer some quantum shit. What is it doing here? Fuck my ass

It's a relevant concept in deep learning, basically referring to the notion that in a neural network with several layers, some of the nodes may become useless if they have the same weights as other nodes: once the nodes exhibit "symmetries" in some deep learning structures, they essentially become redundant.
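A minimal numpy sketch of that idea, under my own assumptions (the network, names, and numbers here are mine, not from the paper): if two hidden units start with identical weights, they compute identical activations, receive identical gradients, and so gradient descent can never tell them apart. This is the standard "permutation symmetry" argument for why we randomly initialize weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 network. Both hidden units start with IDENTICAL
# incoming and outgoing weights, so the network is symmetric
# under swapping them.
W1 = np.array([[0.5, -0.3],
               [0.5, -0.3]])   # two hidden units, same rows
b1 = np.zeros(2)
w2 = np.array([0.7, 0.7])      # identical outgoing weights too
b2 = 0.0

x = rng.normal(size=2)          # one arbitrary input
y = 1.0                         # one arbitrary target

# Forward pass
h = np.tanh(W1 @ x + b1)
pred = w2 @ h + b2
loss = 0.5 * (pred - y) ** 2

# Backward pass (hand-derived gradients for squared error)
dpred = pred - y
dw2 = dpred * h                 # gradient w.r.t. outgoing weights
dh = dpred * w2
dz = dh * (1.0 - h ** 2)        # tanh derivative
dW1 = np.outer(dz, x)           # gradient w.r.t. incoming weights

# The two hidden units get identical gradients, so a gradient
# step keeps them identical: they stay redundant forever.
print(np.allclose(dW1[0], dW1[1]))   # True
print(np.isclose(dw2[0], dw2[1]))    # True
```

Whether the paper's field-theory framing adds anything beyond this textbook observation is exactly what the commenters below are arguing about.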

Interesting and pretty sensible, but I really don't think putting in physics terminology does anything but puff this guy's paper up. Especially because redundant nodes don't seem to fit the same definition as a symmetry in physics, but it always seems like there is some backwards way to define a system and a transformation, so I'm probably wrong.

Physicists didn't invent the concept of symmetry or the breaking thereof. It might be a term used in physics (idk) but it is an established concept in the deep learning literature that was probably labeled independently of how physicists use the term.

It would make the article kind of trivially true then, though? Like yeah, we can save more resources by not having nodes detect the same patterns, duh. I want someone who really knows some shit to drop some knowledge on us, but the fact he deleted his profile makes me feel like it wasn't good enough to be worth defending.

If you want to know how fake a field is, just listen to an "expert" talk and see how much they sound like someone making fun of star trek.

For every spontaneous broken continuous symmetry, there exist a weight with zero Hessian eigenvalue.

As you can see here, machine learning is currently in the "nerds playing make believe" phase.

bazinga!

Wubba dubba lub lub

this tbh. them crackers should use ebonics for explaining stuff instead of stuff like modules and HOMOmorphisms(lmao).