Hacker News | 29athrowaway's comments

You can also see it in Street Fighter II: The World Warrior.

Meta was losing to TikTok, so they had to adapt by promoting brain rot.[1]

[1]: https://www.merriam-webster.com/slang/brain-rot


Except the content on TikTok wasn't only brain rot; the algorithm often steered you toward genuinely valuable content, if that was what you actually wanted. If you wanted brain rot, it would give you brain rot.

Meanwhile, you don't even get the choice on Facebook.


Hiroshi Yamauchi was highly selective when it came to what games could be released for Nintendo consoles.

Atari was not. Atari had many cash grab games like E.T. the Extra-Terrestrial, where more of the budget went to box art and marketing than to game development.


> Atari was not. Atari had many cash grab games like E.T. the Extra-Terrestrial, where more of the budget went to box art and marketing than to game development.

It's a little bit ironic that Spielberg's love of videogames kinda ruined Atari.

It was Spielberg who pursued Atari, not the other way around.

Basically, the video game companies weren't looking to do movie tie-ins at the time. Spielberg loved videogames, and made a request to have Atari's Howard Scott Warshaw come out to SoCal to meet Spielberg.

That meeting led to Atari's "Raiders of the Lost Ark" game. Warshaw had previously done "Yars' Revenge."

Then Spielberg asked Atari to make an E.T. game, and the rest is history.

Basically, if Atari had ignored Spielberg's call to make "Raiders," they wouldn't have made "ET" and they might have remained dominant for a few more years, preventing Nintendo from taking everything over in the mid 80s.


The Atari 2600 had a certain type of "adventure game", which was basically walking around until you found this blob of pixels (see manual). "Adventure" was famous, and "Superman" (also a Warner movie franchise) did well with this style.

Atari "Raiders of the Lost Ark" seemed to be a game that sold very well on name value, but it was hard dexerity, and required reading the manual, and so most people probably didn't make it more than about 5 screens in. That and Atari "Pac-Man" and a few other games, and HEY Atari is just ripping us off!!

"E.T." was pretty half-assed, but IMO a big part of it was the entire game design was not all that entertaining to begin with. It was "Superman" with pits.


They could have just made the game not suck. The original Star Wars arcade game that Atari did was received quite well. Video game franchises based on movies did tend to have this cash grab quality to them for a long time afterwards though (with some exceptions), even on the NES.


Wrong continents



And the lesser-known, newer, invisible, digital-watermark-based version of it. Unfortunately, very little information is available about that one. http://www.cl.cam.ac.uk/users/sjm217/projects/currency/


It's not memcmp that's slow; it's the .NET marshalling that adds the extra cost.



GTK 3 was released in 2011. The upgrade took 14 years.

GTK 4 was released in 2020.

I hereby declare the GIMP / GTK 4 challenge: use AI to migrate GIMP from GTK 3 to GTK 4. The prize: a drawing of a seven-legged spider.
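
To give a flavor of what that migration involves, here is a minimal, hedged sketch in C (a toy hello-world, not GIMP's actual code) of the kind of call-site rewrites a GTK 3 to GTK 4 port requires, assuming the gtk4 development package is installed:

    /* Toy GTK 4 program; comments show the GTK 3 call each line replaces.
       Build: cc demo.c $(pkg-config --cflags --libs gtk4) */
    #include <gtk/gtk.h>

    static void activate(GtkApplication *app, gpointer user_data) {
        GtkWidget *window = gtk_application_window_new(app);
        GtkWidget *box = gtk_box_new(GTK_ORIENTATION_VERTICAL, 6);
        GtkWidget *label = gtk_label_new("Ported to GTK 4");

        /* GTK 3: gtk_container_add(GTK_CONTAINER(window), box);
           GTK 4: GtkContainer is gone; every parent has its own setter. */
        gtk_window_set_child(GTK_WINDOW(window), box);

        /* GTK 3: gtk_box_pack_start(GTK_BOX(box), label, FALSE, FALSE, 0);
           GTK 4: the expand/fill/padding arguments are gone. */
        gtk_box_append(GTK_BOX(box), label);

        /* GTK 3: gtk_widget_show_all(window);
           GTK 4: widgets are visible by default and show_all was removed. */
        gtk_window_present(GTK_WINDOW(window));
    }

    int main(int argc, char **argv) {
        GtkApplication *app = gtk_application_new("org.example.gtk4port",
                                                  G_APPLICATION_DEFAULT_FLAGS);
        g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
        int status = g_application_run(G_APPLICATION(app), argc, argv);
        g_object_unref(app);
        return status;
    }

Each change is mechanical in isolation; the hard part is that GIMP has thousands of such call sites plus the bigger rewrites (event controllers replacing the old event signals, no more GtkContainer subclassing, and so on), which is why the challenge earns its spider.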


There are moments when you will need to know Erlang to debug issues in an Elixir application.


Granted, for many Elixir devs, learning (and knowing) Erlang happens precisely during such debugging efforts.


Many of the benchmarks I have seen in this space suffer from the Texas Sharpshooter fallacy, where you shoot first and then paint a target around the hole.

If you create a benchmark and your product outperforms everything else, it could mean many things, overfitting being one of them.


That's an interesting point. The bias might or might not be intentional. From the benchmarks I have seen, a lot of tools solve slightly different problems altogether, target slightly different data distributions, and in the end have to build the best solution around that.

Which is why publishing open benchmarks is the first step: it allows public scrutiny of whether the benchmark itself, irrespective of the results, is fair. In the end, the end user will choose the benchmark that best fits their use case, or more likely create a variation of their own and do their own unbiased evaluation.

