Before the iPhone, I worked on a few games for what were called "feature phones" (twitter.com/id_aa_carmack)
679 points by tosh on May 20, 2021 | 397 comments



I worked on a J2ME app that needed to access ___location on phones without GPS. I was in touch with the operator and got a list of cell IDs with their latitude and longitude.

With that I created a Symbian daemon that would query the cell ID and open a socket server to pass it to the J2ME app.
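
For illustration, a minimal sketch of what the J2ME side of that setup might have looked like, using the Generic Connection Framework to read the cell ID from the local daemon. The port number and the line-based response format here are assumptions for the example, not the original protocol:

    import java.io.IOException;
    import java.io.InputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.SocketConnection;

    // Hypothetical client for the Symbian daemon described above: it is assumed
    // to listen on localhost and answer with the current cell ID as one text line.
    public final class CellIdClient {
        public static String readCellId() throws IOException {
            SocketConnection sc = (SocketConnection) Connector.open("socket://127.0.0.1:5000");
            InputStream in = sc.openInputStream();
            try {
                StringBuffer id = new StringBuffer();
                int c;
                while ((c = in.read()) != -1 && c != '\n') {
                    id.append((char) c);
                }
                return id.toString(); // looked up against the operator's cell-ID -> lat/lon list
            } finally {
                in.close();
                sc.close();
            }
        }
    }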

With that we developed an app to request a taxi service, but none of the taxi companies wanted it. Some of the complaints were:

1. GPRS data plans were too expensive for end users.

2. No one would put an expensive Sony Ericsson touch phone on a windshield; thieves would break the windshield to steal it.

3. Looking at a screen on the windshield while driving was never going to be approved by the authorities.


"Good idea but before it's time" crashed so many mobile startups pre-iPhone.


I worked for a company that allowed you to play along with Jeopardy on your BREW/J2ME device in real time as you watched the show live on TV. You could compete with your friends in real time, with leaderboards, chat, etc. The company eventually went under but I can’t help but think that if it had been ported to iOS when it came out it would have been a hit. I’ve still never seen any technology like it.


At the time, "second screen" was the term I heard used for those types of experience. The only prominent example I can think of these days is kahoot, which is only sort of the same idea.


Ah yes, I'd forgotten about the phrase "second screen". It was going to be a big thing until people realized that no one wanted to look at 2 screens at once.


That's not my experience. There's a constant background hum of "what else have I seen him/her in" in my house.

Not so much "lets all talk about this in realtime!" I admit, but there seems to be a subset of people who use Twitter/Reddit like that when watching politics or sports.


I'm one of them, for example for Eurovision or Euro '20 in a few weeks. The memes and stuff are honestly better than watching it; it just becomes a base to make memes of.


Nobody could have anticipated that YouTube poop would be more compelling than professionally produced television.


FWIW I do this all day at work now :)


Teenagers now have a third screen (TV + phone for friends + tablet for watching a streamer)


The Jackbox series of party games also does this very well.


This was a prototype app we built circa 2012 for a rapid app dev class. Due to time constraints though, we took the less interesting path and ran off data files with pre-coded timings and answers.

The best thing to come out of the project was the "advertising" video featuring YT casually answering every question correctly. Which is a thing you can do when you've watched the same episode 15 times to record second-level timing!

It always confused me why Jeopardy never pushed anything official.


To be fair, all 3 of those are valid complaints.

#1 was true all the way till ~10 years ago when 3G became popular.

#2 is still true in most 'urban' areas. People just take their phones with them when they get out of their cars.

#3 is interesting - I think some regions had tried to ban mounting phones on the dashboard, but at this point, they must have given up


Confirming #2: I live in Switzerland (low crime rate etc...) but I would not even dream of leaving my phone visible in the car.

Btw, in the '90s I used to detach the front panel of the car's radio every single time I parked and carry it with me (leaving the storage box in front of the passenger seat open, to show that I hadn't just put it in there), to discourage people from breaking into the car to extract the radio (front panel & main car radio were individually coded to only work together - at least that's what I believed).

What a silly thing to do, hehe, but to be fair the price of a fancy radio at that time was similar to today's phones ("Pioneer" and "Sony" were high on my list - some models had quite complex display animations/colors/equalizers/etc & sound modes).


The detachable faceplate for anti-theft reasons was (is?) a very common feature for aftermarket radios/head-units/whatever in the US. Built-in ones have just increasingly moved to not being a discrete, accessible, interchangeable component - the other way around the problem.

I still remember when my dad had bought an aftermarket tape-deck-only radio, a Sony I believe, well into the era of the ubiquitous CD player (he just wanted a working tape deck). Someone broke into the car at some point and stole it and I can only assume they were very disappointed after looking closer.


Italy, mid-80s, early summer. My primary school organises a day trip to a city nearby. During the trip, a classmate from a “difficult” background ends up getting slightly hurt, banging his head against a wall. When we eventually get back to our hometown, his parents don’t come to pick him up, and somehow that means everyone else has to wait on the bus. Eventually, his dad shows up, drunk and high (on heroin, we’d find out later) with a car radio under his arm - and everybody knows he doesn’t have a car, he’s literally just stolen it…


To this day I’d be afraid to park my car out in the open with that piece of hardware attached to it.


And when I bought an ebike, the dealer told me to always remove the battery and carry it with me when I park the bike in public spaces.


GPRS was a godsend as it brought the always-connected, pay-for-data usage that we all know nowadays.

The precursor, HSCSD, was ridiculously expensive because you paid by the minute.

If my memory is correct, there was also dial-up on mobile/GSM (well, I used something before I had a phone that supported HSCSD), which was interesting, but quite useless at the time.

Also, #3 makes little sense seeing as there were already cars with built-in satnav, and you had to look down, which is worse than a phone on the windshield.


> there was also dial-up on mobile/GSM

Yeah, it was called CSD. HSCSD was high-speed CSD.


Right, a whole 9.6 kbit/s, HSCSD was a major upgrade at 56 kbit/s :D

The technology has come so far, it's pretty incredible.


Pre-iPhone I used my Symbian phone as the GPS system in my car, using software called Route 66 and a bluetooth based GPS receiver.

The map data was all stored on an MMC card so it didn't need any GPRS data. I used both a Nokia 6600 and then a 6680 with the same software. https://www.pocketgpsworld.com/route66-mobile-britain.php

Only the Bluetooth GPS receiver had to be kept on the windscreen, and both parts were small enough that I could easily take them out of the car when leaving it.

I suppose it was slightly bleeding-edge stuff for its time, but it worked pretty well considering the capabilities of the device and was much cheaper than a dedicated GPS unit, with no ongoing costs.


TomTom on my Orange-branded HTC Windows phone/palmtop for me.

Unrelated obviously, but speaking as someone who was surprisingly poor for an owner of such a phone in those days (it was on my mum's phone contract and we had to fight and haggle to get it): satnav software piracy was rampant back in the day.

I think there's still a Bluetooth GPS receiver somewhere in my old room at my folks' place...


> 3. Looking at a screen on the windshield while driving was never going to be approved by authorities.

The solution to that seems so obvious: make the phone speak to the driver instead. Was turn-by-turn navigation not feasible back then? Or what else am I missing?


Back in the before times, taxi drivers knew how to navigate a city just by memory. Give them an address and they'd probably know how to find it.

Most likely this app was not navigation. Remember, feature phone, probably had less than a couple hundred MB of memory for the entire device. I assume the app would just give the taxi driver address details and what not.


Oh, even easier then; the phone would just have to read the address aloud. Symbian phones were able to run text-to-speech as far back as 2002; my best friend is blind and used a Nokia phone with a screen reader back then.


That's not true in my experience. I didn't stop directing cabbies to my destination until Uber and Lyft became commonplace and I never had to use a cab service again.

In the major urban areas I've lived, every cab drive would start with "take this street to that street" or "head towards major landmark."

Even in cities with decent grids you couldn't trust a cabbie to take the fastest route.


> In the major urban areas I've lived, every cab drive would start with "take this street to that street" or "head towards major landmark."

> Even in cities with decent grids you couldn't trust a cabbie to take the fastest route.

I heard these stories and would direct cabs the same way, until I realized that the cab drivers were right and I was wrong, and that the depictions of them as shady thieves who would purposely take you out of your way, or were horribly incompetent, were urban myths.

And I was being disrespectful to the drivers, to presume that and treat them that way. I don't treat other service industry people that way. And if you think about it, cab drivers make more from a flag drop than a longer trip - as one cab driver said, 'people say these things to me - do you know how much I make for an extra few blocks? 50 cents? And what is my take of that?'

I learned to trust and respect the cab drivers, who after all were human beings, drove around all day long, and like most people were honest and considerate.


Being able to get to you and actually taking a good route are two different things though.

How did Cabbies know where to show up when called?

Or did they not? I only ever took Taxis from places I could flag them down pre-Uber.


Most taxi drivers required a license - and that required passing an exam. A big part of the exam was knowledge of the town.

People could also name the "main" roads, which would be known.

On a side note: my usual experience with taxis (not Uber) is that they first ask a question to figure out whether you know the town or not, and then they know very well which route is the longest. Also, even when asked for an estimate of the price, it would always be at the top end.

A big reason why Uber is so popular: you won't get a long ride through the whole town, and you can rate the driver.


Calling a cab resulted in a cabbie showing up maybe 20% of the time. You would hail cabbies in the street.


At least some of the time this was more to stretch out a fare on the unsuspecting, rather than a total lack of local knowledge. Not that it makes it any better.


> Back in the before times, taxi drivers knew how to navigate a city just by memory. Give them an address and they'd probably know how to find it.

In what city? London cab drivers famously were (are?) required to pass a memorization test, but in American cities I've had many cab drivers who didn't know their way around. They've been generally better than Lyft/Uber drivers.


In just about every city a cab driver was expected to know where they were going. For more obscure addresses they may have asked for major cross streets or similar, but it was generally assumed that the driver knew where he was going (which led to the inevitable problem when he didn't, or when the passenger was not precise in their description of the destination). You would also always find a Thomas Guide somewhere in the front of the taxi just in case...


People knew this too - and so navigating by landmarks was much more common.

Some cities are set up so that with most addresses you can pinpoint almost exactly where it is in the city. Seattle’s a good example of this.

Also if you think of most taxi trips they will either be to a major ___location (hotel, airport, restaurant) or to a place personally known by the rider (house, work).


These feature phones did have Google Maps, and lists of GMaps directions, and could have done navigation just like the Garmin devices - but they crucially didn't have GPS.


Dispatch knowing drivers' locations in realtime would have made this a killer app imo. Car services used to assign pickups based on drivers reporting their ___location by radio.


This is me guessing: this sounds like before turn-by-turn navigation was possible. If they had to approximate ___location via a set list of cell IDs, then that doesn't give very precise ___location data. Also, back then, I am guessing, the available map data wasn't as good, and especially not in a form that would fit on a phone. I would also think that this was before synthesised speech was really possible to do on a phone.

The world is a different place today.


Before smartphones and before cheap GPS, I got spoken turn-by-turn directions while driving from Tellme.

1-800-TELL-ME launched in 1999/2000. You told it your address and the destination address, and it read you the directions one at a time. E.g. after you made the first turn, you'd ask for the next one and it'd tell you. It did not need any GPS this way, it's like talking to someone sitting next to you reading the map for you.

Microsoft bought Tellme in 2007 for $800 million.


And did what with it?

And why?


To do the same thing Google was doing that year when they launched GOOG-411:

Using the interactive phone calls to train their speech recognition systems, so that they could eventually use what they learned to develop things like Google Assistant, Cortana, Windows Voice Recognition, etc.

Tellme was taking 2 billion calls a year when they were acquired. They had all the training data Microsoft could want to compete with Google in that area.


> In early 2012, Microsoft divested itself of Tellme Networks' interactive voice response (IVR) service and the majority of its employees to [24]7 Inc. The service was moved to a non-toll-free number.

https://en.wikipedia.org/wiki/Tellme_Networks


GPS navigation is 90s tech, though. I had a Garmin barking turns at me in the early 2000s, same as Siri today. In fact it usually had better reception than my cell phone if it was a clear day, since cell coverage is still terrible where you really need it, out in the boonies where gas stations are miles and miles apart.


Devices like the Garmin [0] were chunky though, weren't they - like a few cm thick; even compared to phones of the time they were big. And to my [limited and shaky] recollection people had car-mounted antennae for GPS (in the late 90s) because they worked poorly without one?

The first retro-fittable GPS was in 1997, the Alpine CVA-1005 [1], which weighed >3kg and had a display 26cm across; for navigation it connected to a base unit containing a CD-ROM drive [2]; here's the wiring diagram [3].

[0] https://spectrum.ieee.org/consumer-electronics/gadgets/the-c...
[1] https://ndrive.com/brief-history-gps-car-navigation/ (a good review of early GPS)
[2] https://www.ebay.co.uk/itm/Alpine-NVA-N751A-Navigation-Syste...
[3] https://elektrotanya.com/alpine_cva-1005_wiring_diagram.pdf/...


Chunky doesn't matter for a car GPS. They sit on your dash, not in your pocket. For my old Garmins, the weighted sandbag mount they used to hold the device onto the dash was honestly less frustrating than a suction cup that falls off the windshield down to your feet while you are driving on the highway.


Oh, for sure, but surfsvammel was on about tech that fits in your pocket, you responded that GPS was 90s tech, and I was just fleshing out that whilst it was 90s tech it was only really widely available as tech for a car (and of course USA restricted GPS accuracy, which meant it often got confused in the UK as to which road one was on).

The last Garmin my dad had, maybe 5 years ago was decidedly chunky compared with phones at the time (but you need a reasonable sized screen and the use case meant it didn't need to be thinner, so perhaps not a fair comparison).


GPS nav hasn't improved much since the 90s, but smartphone nav relying on multiple positioning methods has improved remarkably in even the past 5-6 years. It used to be neither a dedicated GPS device nor a smartphone could handle dense urban areas (where buildings cause satellite interference) if you were moving much faster than a pedestrian.


True, but also your phone utilizes the same GPS signals. It's all about the maps. Your Garmin typically had maps for the whole country preloaded, your phone typically downloads them on the fly. So in the boonies your phone knows exactly where it is in terms of longitude and latitude, but hasn't a clue where anything else is.


It's not just about maps. Unlike a dedicated GPS device, your phone uses A-GPS for faster geolocation. Before A-GPS, you often needed to wait a couple of minutes before the device had any idea of its ___location or the time. Offline maps are available for both phones and non-phones, and aren't an issue at all.


Is everyone here too old to remember MapQuest or CD-based car navigation systems? Apparently. We had decent turn-by-turn in the 90s. In the US, mapping data was mostly derived from public Census TIGER data, which had flaws but was good enough.


Just recently I had to explain how that thing works on an old car. In the end, my advice was to just slap an Android phone/tablet on the dashboard and forget the stupid CD satnav :D

They also had a backseat DVD player with (shudder) analog video input. The quality was atrocious!


> a daemon in Symbian that would query the cell id and open up a socket server to give it to the J2ME app

Why not just make the entire app native in Symbian if you require it anyway? Or did the Symbian SDK suck even more than I remember?


I tried to do a college end-of-year project in Symbian, and couldn't get anything to work for months. This was likely as much my lack of ability as the SDK's problems, but it definitely was not novice friendly.

I pivoted to J2ME with about a week to go in the project and managed to get an MVP working in time, after 3 months of wrestling with Symbian.


Ex Nokia here - yes, the Symbian SDKs required high comfort with C and C++ development environments on Windows, and then there was the Symbian C++ dialect.

Also it did not help that the SDK was rebooted like 4 times.

Initially based on Metrowerks, coupled with a mix of Perl and batch files, rebooted twice into Eclipse based IDEs, and finally the QtCreator initial effort before the burning platforms memo happened.

Still, it was much more friendly than dealing with NDK issues on Android.


Oh UIQ3? Sony’s M600i and later phones are my all time favourite smart phones.

That dual-key QWERTY is my favourite input, ever. I’d love something like that on the iPhone's soft keyboard lol


what product was this?


Points 2 and 3 are baffling to me. Garmins mounted on the windshield have been a thing in vehicles since the early 2000s at least.


In the early 2000s, I had to take my Garmin off the windshield and hide it in the trunk every time I parked, otherwise someone would break in and steal it. My car was broken into 3 times. Someone also stole my Microsoft Zune when I forgot to take that in with me.


Heck, this happened to me in the late 2010s...


Regarding point 2: when I worked for a valet parking company not too many years ago, it would astonish me how many people would 'remember' that they must secure their GPS in the glove box, despite leaving much more expensive items like iPhones/laptops/purses/wallets in plain sight, strewn around their vehicle. They were an easy and common theft target at one time, I guess.


In 2009 somebody smashed my car's window to steal my garmin. Even if the GPS is worth little, they are worth hiding because thieves are morons who will smash your window for it anyway.


Anything in a car is a theft target. I've had a window smashed so someone could steal my $7 knockoff lightning cable.


In the UK, we were reminded to clean the inside of the windscreen to remove the telltale suction cup marks, so thieves wouldn't assume there was likely a satnav hidden in the glovebox or boot. (Windscreen = Windshield, Boot = Trunk)


Can confirm. Had a Garmin/Palm handheld (iQue 3200 or sth) with Garmin mapping software for Windows to prepare/preload map tiles onto SD cards, plus a complex setup involving VirtualPC and NoMachine NX to make it run on my PowerBook.


A lot of people have entirely different perceptions of risk than actual risk exposure.


Oh man, do I have fond memories of developing on feature phones!

Started wayyy back in 2007, and was a cofounder of a startup that made popular J2ME games available for free to people by wrapping them in our proprietary ad serving software. We launched more or less the same time that AdMob did, invented more or less the same stack (ad delivery to mobile phones), but we focussed on the product (games) whereas they focussed on the platform (ad delivery). A few years later AdMob was acquired for mega $$$ by Google, whereas we just kind of limped along and died a slow, natural death! Many years later I discovered that my cofounder just let our 4 letter ___domain expire (www.hovr.com) and I think it's up for sale now :(

Also remember developing games on the BREW platform by Qualcomm, circa 2005. Whereas I was in India which mostly had GSM J2ME phones, BREW was much more popular on North American CDMA handsets. I, along with a friend, developed one of the first real-time multiplayer games called Blingster Battle, which was on top of Verizon's charts for a brief period of time! Truly groundbreaking stuff at that time..

The most amazing bit, though, was when we made some BREW apps around 2013. By then iOS and Android had firmly taken over the smartphone market, and all the cool kids were downloading apps/games on them. However there was a very significant portion of the market - primarily composed of the elderly - who were still hanging on to their old CDMA feature phones and were still interested in buying new apps. We made a couple of quiz-type games that actually generated a couple of thousand dollars in revenue every month until last year, when Qualcomm finally pulled the plug on BREW!


> I, along with a friend, developed one of the first real-time multiplayer games called Blingster Battle, which was on top of Verizon's charts for a brief period of time! Truly groundbreaking stuff at that time..

Impressive - how did you manage to run real-time multiplayer on that technology?


The game was initially developed as a single-player game (kind of like Tetris). Then to make it multiplayer the entire session management and messaging was moved to a third-party service (it was called electroserver, IIRC). Don't really remember the details now unfortunately!


I worked at a startup, Javaground [1], where we ported and developed games for J2ME phones. We had a room full of dressers full of all the different mobile phones of the day.

Each phone had different implementation quirks, such as variable audio delay when playing a file, audio/image formats that were faster/slower, odd button events (some used press, some used ondown, some had no ondown, etc). A lot of our work was learning all of these quirks and implementing them into the automated porting platform.
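For a flavour of what those quirks looked like in code, here is a hypothetical input shim of the kind a porting layer might generate (not Javaground's actual platform), normalising press/repeat/release differences into a single "key held" flag:

    import javax.microedition.lcdui.Canvas;

    // Hypothetical sketch: some handsets never delivered keyRepeated, others
    // leaned on it heavily, so the game logic only ever sees a boolean.
    public abstract class PortableCanvas extends Canvas {
        protected volatile boolean fireHeld;

        protected void keyPressed(int keyCode) {
            if (getGameAction(keyCode) == FIRE) fireHeld = true;
        }

        protected void keyRepeated(int keyCode) {
            // Treat repeats as "still held" for phones that only send repeats.
            if (getGameAction(keyCode) == FIRE) fireHeld = true;
        }

        protected void keyReleased(int keyCode) {
            if (getGameAction(keyCode) == FIRE) fireHeld = false;
        }
    }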

Then the iPhone came out and started to build momentum.

I still have my old flip phone, the one with the least quirks, with some of our games on it.

1: https://en.wikipedia.org/wiki/Javaground


Ah, it's nostalgia o'clock. I worked on Skype for J2ME with a 3-4 person dev team. Custom UI kit, 128x128 screens, 128kb of available memory and other fun limitations... I think we re-wrote our text rendering/styling/wrapping code more times than there were actual releases of the app :)
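
For anyone curious what that kind of code involved, here's a rough sketch (not the actual Skype implementation) of greedy word wrapping on MIDP, where every candidate line has to be measured in pixels with Font.stringWidth:

    import java.util.Vector;
    import javax.microedition.lcdui.Font;

    // Rough illustration only: break text into lines that fit maxWidth pixels.
    public final class TextWrap {
        public static Vector wrap(String text, Font font, int maxWidth) {
            Vector lines = new Vector();
            String line = "";
            int start = 0;
            while (start < text.length()) {
                int space = text.indexOf(' ', start);
                String word = (space < 0) ? text.substring(start) : text.substring(start, space);
                String candidate = (line.length() == 0) ? word : line + " " + word;
                if (font.stringWidth(candidate) <= maxWidth || line.length() == 0) {
                    line = candidate;          // word fits, or is forced onto an empty line
                } else {
                    lines.addElement(line);    // flush the current line, start a new one
                    line = word;
                }
                start = (space < 0) ? text.length() : space + 1;
            }
            if (line.length() > 0) lines.addElement(line);
            return lines;
        }
    }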

I honestly miss these times - it sparked so much creativity when one needed to achieve something in such a limited environment.


> I honestly miss these times - it sparked so much creativity when one needed to achieve something in such a limited environment.

As someone from the c64/apple][/atari generation... it was constantly amazing to see how much people could push a single device. Lots of creativity, as you say. But... we seemed to hit a wall with the whole j2me-era "just get creative to work around things!" mindset. Because, IIRC, there were dozens of different devices that all didn't work the same way - you'd have to get really creative to get things to work in 128k, but then do it again and again and again for each target device.

There were millions of C64 units in people's hands in, say, 1986. And similar for Apple ][ units. You could be assured of a decent audience/sales/users if you ported to that platform, even if there was a lot of 'creativity' to deal with. The j2me device world always seemed a lot more fragmented to me (but maybe it wasn't?). But just saying "this runs on a java device" was different from distribution - allowing 'regular' people to get something into a centralized store/distribution point seemed another big win for the iPhone world.

EDIT: fwiw, this made me spend a few minutes in youtube looking at old c64 and apple and Atari demos. what people ended up doing with those devices - years after they were mainstream - is still crazy to me.


> fwiw, this made me spend a few minutes in youtube looking at old c64 and apple and Atari demos. what people ended up doing with those devices - years after they were mainstream - is still crazy to me.

The craziest ever, to me, is the following in 256... bytes. Bytes (including the music ofc, which is the whole point).

https://youtu.be/sWblpsLZ-O8


Then you are going to love this 256 bytes demo.. :-)

Memories by Hellmood https://youtu.be/Imquk_3oFf4


That's incredible!


> The j2me device world always seemed a lot more fragmented to me (but maybe it wasn't?)

Oh it sure was! There were device specific workarounds and hacks all over the codebase.


Also that version of Skype was way better than this hot steaming pile of manure currently shipped by Microsoft.


Thank you! It really was a different era back then :)


My very first industry job was making J2ME games and oh my god does it still give me nightmares.


If you miss it, just try Android; contrary to Google's arguments against J2ME, the fragmentation experience is kept unchanged.


We have many more issues supporting multiple versions of iOS than supporting multiple versions of Android. But neither is even close to the nightmare that was J2ME.


Another one that wasn't blessed with OEM deviations from AOSP.


We have "been blessed" with OEM deviations from AOSP (we have a big number of clients and most of them are using Android because of regional characteristics), but still we have way more issues with customers using out-dated iOS versions than users on those devices. For comparison, we have way more iOS issues than Android ones, even if Android is the vast majority of our user base (~70% of our customers).

Anyway, even including those cases it is still very far from J2ME days.


- Bluetooth issues

- Camera is hit and miss, even after the renewed API

- Apps randomly killed on the background

- Intents that don't launch as expected

- NDK debugging that cannot attach to the server running on the device

- Unstable GPGPU drivers

- Keyboard handling

- Perfectly working code that needs to be rewritten just because

Yeah so much better than J2ME.


You know, even if you're right, we still see many more issues on iOS. For example, a simple Xcode minor upgrade can randomly break some flows.

> - Perfectly working code that needs to be rewritten just because

This issue is much more frequent on iOS than on Android, since iOS deprecates features much faster and there is no compatibility layer between versions.


Except on Android that happens on the same Android version across devices.


> - Bluetooth issues

Aren't they rewriting this thing in Rust?

> - Camera is hit and miss, even after the renewed API

Give it time. CameraX is already vastly better than what we had before.

> - Apps randomly killed on the background

Up to OEMs.

> - Intents that don't launch as expected

?

> - NDK debugging that cannot attach to the server running on the device

Welp.

> - Keyboard handling

Fixed.

> - Perfectly working code that needs to be rewritten just because

?


- Devices never see updates, other than a select few flagship models

- It does not matter if the blame is on Google or OEMs, it is still fragmented


Circular arguments are circular.


Nope, just the Google support team arguments that ignore how similar Android is to J2ME in many aspects.

One needs to support the home team after all.


So now I am promoted to the "Google support team" just because I brought some observations from my own company to the discussion? But you bringing your own observations does not put you on the "Apple support team"? This seems completely fair /s.

Well, my point in the end is that even with all the issues that you pointed out (some are true, some we didn't hit because we don't use those features), we still see more issues on iOS even though it's a smaller portion of our user base.

Heck, even the most problematic Android phones for us (Asus budget phones come to mind) still have fewer issues than iOS.


It's not as bad as it used to be. The "drawers full of devices", all terrible and broken in unique ways, gave me flashbacks to the Android 2.x days.



Especially with respect to Bluetooth quirks. Every single phone has a different set of bugs in its bluetooth components, and none of those sets are remotely empty. In the end, we decided we could only afford to support the 5 most popular models of the day and if you don't have that phone, then too bad for you.


That one doesn't seem like purely on Android. I've never seen a device without Bluetooth bugs. The protocol is so complicated and implementation relies so much on chips that never get fixes that I'm surprised it works anywhere.


I wasn't criticizing Android, I was criticizing Android phones. I'm sure iPhone doesn't implement the protocol perfectly either, but we certainly observed fewer issues. Whether that's because our BT stack vendors did more testing with Apple devices or because Apple has fewer bugs or both, I'm not sure.


From my experience Apple’s Bluetooth stack is more stable than a typical Android stack and miles better than the Windows one. I have so many headphones that can pair with a Windows laptop exactly once, getting them to work again requires a full reset on both sides every time.


It was kind of bad until recently; it's OK now, but it gets a lot of custom-protocol help if you buy AirPods.

There's a lot of tuning needed to avoid desense issues when your device supports 3+ wireless protocols - that's why your PC motherboard comes with an external antenna for Bluetooth and your iPhone doesn't.


But then they still insist on removing the headphone jacks because "wireless is better".


Don't get me started on programming with BLE on Android. *shudders*


I literally had to buy my first iPhone since the 3G yesterday because my wife bought some BLE hardware for her work that won't work properly with her Pixel 3. (It's a set of 12 devices and the Pixel can only connect to six at a time.) The manufacturer won't even list the Android devices they claim to have tested it with.


Our management kept asking us to fix these issues with software. We tried things like transparently rebooting our BT module and prompting the user to do common fixes (e.g., "turn your phone off and on again") but most of the time these wouldn't work, unsurprisingly.


It's gotten much much better in the modern times. Android 2.x would break your app in many spectacular ways, but Google has been steadily adding more coverage to its "compliance test suite" that a device must pass to be eligible for Play Services preinstall. On 4.x, there were some Chinese phones, like Xiaomi and Meizu, that meddled with notifications, action bars and list views, sometimes to the point of making your app crash, and I do remember having to work around them. But if you support Android 6.0+, like many new apps do these days, you don't have to worry much about device compatibility.


I read the J2ME spec, looked at the procedure to draw an array of pixels to the screen, and noped out of there. Browser Java was bad enough, but J2ME? I wasn't gonna do that to myself.



> Java be like, "we are cross-platform"

> mobile manufacturers, "hold my phone"

The problem with cross-platform development in a nutshell. Still to this day writing an Android/iOS app using PCL code is a headache


At least adapting to another private API was easier because the scope was pretty small. It's probably lost forever now, but I had a project which repacked jars to add a few wrapper classes to convert Nokia-specific j2me games to be compatible with my Siemens.
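In case it helps picture the trick: it usually amounted to dropping stub classes under the vendor's package name into the repacked jar. A minimal, hypothetical sketch of such a shim (the real Nokia FullCanvas also exposed key-code constants that a complete wrapper would have to provide):

    package com.nokia.mid.ui;

    import javax.microedition.lcdui.Canvas;

    // Hypothetical stand-in for Nokia's FullCanvas so that games compiled
    // against the Nokia UI API run on a plain MIDP 2.0 handset.
    public abstract class FullCanvas extends Canvas {
        protected FullCanvas() {
            setFullScreenMode(true); // MIDP 2.0 equivalent of the always-fullscreen canvas
        }
    }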


Yeah; as things get more complicated, though, I feel that this concept will never return


Oh wow, yeah my first mobile game studio had a huge industrial fastener shelf with little drawers stuffed with different phones. We started in 2004, and feature phone game dev was indeed wild. We did a lot of WWE games in 2d and 3d, needing ~15 different reference builds ranging from 64k 2d builds to 800k 3d for high end, plus another set for all the BREW devices. Managing code, asset pipelines, and QA across high/med/low 2d/3d Java/cpp permutations was a huge challenge, but it was so satisfying to be good at. I love what mobile has turned into, and never want to debug another random C crash on device ever again, but I do miss the cabinet of phones and all the crazy variation.


I remember Javaground!!

We actually worked with the BREW platform, so we only had 50 phones instead of the 300 java ones. Lol, too funny


Cool! The BREW automation guy at JG was a genius C coder. Those phones seemed more capable in general...but just as quirky.


I worked with a few people from JG. I also worked on a bunch of J2ME games. It was an interesting time to be in mobile.


I was just out of college, like the 5th employee at Irvine, a lowbie gamedev. I remember doing some collab with nearby studios (like WF), one where my future wife worked!


John Carmack writes:

> Unlike most emulator projects, Kemulator turned out to be closed source abandonware

It is amazing that in the past two decades most emulators transitioned from closed-source, closely guarded secrets to open source, often under a popular license.

The preservation is unparalleled.


Mind that this doesn't mean that they are not commercialized - modern console emulators make a lot of money on Patreon (in some cases up to $80k a month) for "private builds" that can run certain games, especially newer ones, better, and offer online features in some cases.


Which is how it should be, IMO - the base model is available to anyone for free. If you want extra features after using the base model, then a small donation to the developers for their time is a worthwhile tradeoff.


The problem is that it then gives the developer weird incentives. If someone in the community decides to build the same feature that the main developer kept as a paid feature, then the main developer won't want to merge it into their version.

Then you're stuck: the community member could certainly fork the project with their new feature, or maintain a patchset and forward-port to each new version of the original software, but both of those things are a ton of work.

Meanwhile, users are the ones who lose out; someone decided to build the feature and give it away for free, but users have to jump through hoops to use it.

As someone who has done a lot of open source work over the past ~20 years, I don't think anyone is entitled to find a business model in there. Sure, if you can find a way to work on your project full time but still support yourself, that's great. But often the ways people do so make for some (IMO) bad trade offs and perverse incentives.

I like the dual-licensing approach, where commercial users need to pay, but that's not workable for all projects.


Or once the new feature is paid for... then release it. Someone has paid for your time (to develop the feature). They get their feature (which they needed enough to part with money for), and the community gets it too (whether they need it or not).

It doesn't benefit from massive-scale revenue, but it has paid for your time...


I imagine the logic here would be developing a passive income to compensate for the free labor you put in in the first place, and cover whatever future free labor you might want to do.


> The problem is that it then gives the developer weird incentives. If someone in the community decides to build the same feature that the main developer kept as a paid feature, then the main developer won't want to merge it into their version.

See VirtualBox, and to some extent Chrome. For VirtualBox, I'm sure that the Extension Pack could be covered with an open-source effort by the community, and in the case of Chrome, their refusal to accept *BSD patches.


I remember reading recently about a "Firefox bug" caused by a buggy "portability" patch of one of their dependencies.

Accepting a 'portability patch' isn't free...


Could you give examples? I can’t picture the use cases, I don’t game often.


yuzu, a Switch emulator, had introduced network play support a while ago that was gated behind Patreon subscriptions but was subsequently removed. https://yuzu-emu.org/entry/yuzu-x-raptor/

IIRC, for a while cemu (a Wii U emulator) had builds with support for Breath of the Wild gated behind Patreon early access as well.


Yes, the raptor thing seemed a bit fishy to me back then as well, especially since they're a non-free third party.

Happy that they got rid of it, but there was quite a bit of community pushback needed.


It wasn't so much that builds with support for it were behind a paywall, more that the latest builds, which supported BOTW better, were.

BOTW would still play, just worse; then they'd release the next version on Patreon and make the last one free.

Now it just runs well.



It surprises me the number of people who thought they could make money selling emulators, in a space that is and always has been almost exclusively dedicated to piracy.


Your comment is completely wrong. There is a cottage industry of emulation developers funding their development through Patreon. There is a huge number of emulation enthusiasts who are adults with high levels of disposable income, willing to fund the development of emulators they enjoy using. Some of the larger emulators get tens of thousands of dollars per month on Patreon.

Check out the following links:

https://www.patreon.com/yuzuteam

https://www.patreon.com/cemu


They are not completely wrong. Crowdfunding a product is not the same as purchasing that product because often people only decide to donate because the resulting product is free. (For example, I give $5 a month to Lichess, but I am unwilling to pay for a Chess.com subscription.)


I can understand why one would want to donate if you find the product useful, but donating because it’s free doesn’t make any sense to me. Could you elaborate?


There is an argument that free chess service benefits chess community (and society in general) in a way that a paid service doesn't.


Lichess is not for profit, so donating to Lichess means that my dollar goes "farther" for infrastructure & helps subsidize the website (which has many features) for other people who may not be able to pay. The main developer only pays themselves $56k a year, when they could easily be making $300k+ in the valley.

https://docs.google.com/spreadsheets/d/1Si3PMUJGR9KrpE5lngSk...

Chess.com is for profit, so they have to maintain some profit margin and lock features behind paywalls to incentivize people to pay. The free experience is worse than Lichess.


I will admit to paying for Bleem! in the long long ago. I still have the CD. Frankly, it was pretty damned amazing.


Did burned game CDs work with Bleem? I assumed it made some check for an official disc.


I never had the PC version of Bleem, but I can confirm that a burned copy of Metal Gear Solid worked for the Bleemcast port of it.

My parents wouldn't let me buy M-rated games, so the easiest way for me to play MGS was on my Dreamcast with a copy of Bleemcast that I found used at Gamestop for four dollars with a pirated copy of the game.


I'm afraid I don't recall. I feel like it probably did if it was possible, as a defense against being called a piracy tool, but I'm not sure if there was any way for a consumer CDROM to check for the wobble groove. I owned a PlayStation and official games so I only recall using those.

P.S.: Some googling suggests that it played "backups" just fine.


The CD Key


Well, some emulator devs are making huge sums of money nowadays in "donations"/crowdfunding, much to the chagrin of others in the emudev scene.

It seems they got the business model right this time.


I'm sure that Nintendo's Virtual Console emulator series has made them quite a bit of money.


Well, they stopped doing it, so perhaps not.


They don't offer emulated games as standalone purchases anymore (and, frankly, the idea that they charged repeatedly for games is insane to me) -- instead, now it's tied to the Switch Online subscription service.


The library of retro games on Switch Online is laughably small and is one of my biggest gripes with the Switch compared to the Wii (U).


> and, frankly, the idea that they charged repeatedly for games is insane to me

I've been curious whether Switch libraries will follow you to whatever the next Nintendo console is. They haven't done that in the past, but online purchases might be so common now that they can no longer get away with not doing it.


Not at all, emulators are also a way to keep old games alive.

Like still being able to watch that old VHS movie on BluRay HD, or listening to Swing records from 1920 in 2021.


They're also a way to help developers write new games for old platforms.


I pretty much only play emulators these days as having a load of consoles and cables under my TV is a pain in the ass.

I recently paid five dollars for the redream Dreamcast emulator. Totally worth it. There's a free version that doesn't run hi-res, and that's fine.


The premium Redream is definitely worth the $


My old boss used to be a game dev, and many people at his company used an unofficial DS emulator (I believe No$GBA but could be wrong) quite extensively for debugging/development purposes.

They even paid the developer several thousand dollars so that they would improve the debugger function.

This was all unofficial, of course - Nintendo had no idea and would not have been happy if they found out.


It surprises me that people think that it isn't possible to make money selling emulators.

https://play.google.com/store/apps/details?id=com.dsemu.dras...

Over one million downloads, price £4.99

;)


You got me there! It shows the kind of market that can emerge when piracy is less practical than purchasing the software.

My point still stands though for emulators on PC where I believe software piracy remains popular.


PC software has a market too.


Well, I remember the moment No Cash suddenly had some cash after making the latest version of his popular emulator paid-only.


3dSen seems to have done a decent job monetizing - but they added a lot of value.


There are, "and always have been" commercial emulators for keeping old software running when the original system no longer exists (PDP, VAX, etc)


Invoking GC on every frame has different performance characteristics on an old feature phone and on a modern PC.

Reminds me of a technical document of Doom 3 BFG edition.

https://fabiensanglard.net/doom3_documentation/DOOM-3-BFG-Te...

In 2004, it was best practice to keep data in memory. By 2012, CPU and GPU performance had increased a lot but memory performance hadn't increased much, so calculating the necessary data on demand was often faster than keeping it in memory and retrieving it.


Also reminds me of all those framerate pacing hacks people put into old Flash movies. They literally spin in a loop until the current time advances to the next frame. AFAIK Ruffle explicitly pads out the time scripts see just to defeat this particular coding antipattern.


This is actually a great example of something I see in the wild. The most common I've seen are lookup tables for trig functions that are only as fast as, or even slower than, math.h.

You have to aggressively benchmark even across CPU generations to remain confident that your optimization has optimized anything.


Nowadays, most of the math code I worry about I throw into godbolt with -O3 then check the major instructions on Agner Fog. It's often immediately obvious that a modern compiler+CPU is already using a tiny number of cycles to do what I want. (One exception is hot paths that might need to be optimized by hand to use SIMD intrinsics.)


My cofounder and I started RiffWare in 2004 with our eyes set on BREW. Carrier billing was the reason... and if you made it onto the carrier store, you did well. So many great stories.

We didn’t have experience in games, so our thesis was to make the dumb phones ‘smart.’ We actually have quite a few ‘firsts’ (to our knowledge) and ended up with several of our own apps in the Best Sellers list on Verizon.

We were responsible for the first BREW-certified app to use the camera (for bar code reading), via a consulting gig.

We launched a Guitar Tuner that was actually quite high quality despite the cheap hardware. It quickly became a best seller and shocked the Verizon rep when people would buy it for $25.

We also launched a Do Not Disturb app that was fantastic. We tried to port that to the iPhone in 2010 but Apple wouldn’t allow it.

I am convinced we were also the very first people to honestly lose and recover a phone using tower based ___location services while testing a new app we called Secure Phone. Verizon wouldn’t launch that app though because they were concerned about privacy. 2 years later that was a moot point.

Another fun fact about that, Sam Altman launched Loopt during that time. I asked him how the heck he got Verizon to approve it... ‘board member’ was his answer. Smart money for the win.

Not a bad run for a small group of indie developers with no backing. Great memories.


I had a blast spinning up J2ME Loader on my phone (it's on f-droid) and playing games I had on my Sony Ericsson W595 back in the day :)

Worms, Zombie Infection, Sims, loads of Fishlab games, a silly GTA clone... Massive nostalgia hit

I expected it just to be the nostalgia, but actually those games hold up pretty well, especially considering the limitations of the platform! Certainly they are a breath of fresh air compared to the microtransaction/ads/spin-the-wheel/spyware-ridden games for mobile platforms today.


Modern mobile games are an absolute joke. Sure, there are some gems, but the vast, vast majority, even from the “big” names, are just junk. E.g. the official Tetris requires a monthly subscription to not have ads between EVERY game; other big names only let you play a single level, then the rest cost money for each and every additional level, etc. etc.

Our “phones” are now more powerful than gaming PCs from not that long ago, and could easily play plenty of proper PC and console games from a few generations back, and yet the mobile gaming industry is basically just a lucky dip buried in a landfill.


The stores should finally label “free” apps for what they are: adware, subscription-based, time-limited demo, …

I dream of returning to a free category that is actually free, or with a reasonable amount of advertising.

Apple, with so much “focus” on quality, still won't reject absolute user-hostile junk.


Mobile games are indeed 90% crap cash grabs and dark patterns. But given the absolutely immense number of mobile games available today, the remaining 10 percent still contains an insane amount of good games.


I love the way John uses \ as a continuation character to the next tweet -- ever the C/C++ programmer!


Two friends of mine used to work on projects that depended on BREW. The Brazilian video game console (Zeebo) was also dependent on it. At the time they didn't speak negatively about it. With my free software roots, I had a very negative view of proprietary platforms and dev tools.

Fast forward a few years: no Zeebo game can be played on anything but the original hardware, which hasn't been manufactured for about a decade. These games will become unplayable and unsalvageable too. Although not spectacular, losing part of history is always a dent in culture preservation.

Basically the same happens with the BREW software that won't run anywhere else and brew devices that simply became unusable because they require signatures.

Locked down platforms should be regulated or laws should exist to force companies to open specifications after some time and release signatures.


Had the same fix to a wildly different situation. A long running Spark job that accepts 100s of jars and runs 1000s of stages over its lifetime was having intermittent massive GC pauses. Too intermittent as it happens, practically periodic. Turns out Spark runs System.GC() every 30 minutes by default. DisableExplicitGC fixed everything right up.
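For anyone hitting the same thing, a sketch of one way to pass the flag through Spark's standard extraJavaOptions setting (in client mode the driver-side flag generally has to go on the spark-submit command line instead, since the driver JVM is already running). The app name here is just illustrative:

    import org.apache.spark.SparkConf;

    // Illustrative only: make the executors ignore Spark's periodic System.gc() calls.
    public final class DisableExplicitGcConf {
        public static SparkConf build() {
            return new SparkConf()
                    .setAppName("long-running-job")
                    .set("spark.executor.extraJavaOptions", "-XX:+DisableExplicitGC");
        }
    }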


Is there any disadvantage to limiting the emulator's JVM heap size to match the original execution environment (in this case what seems to be 128k), instead of explicitly disabling GC?


Ultimately it's just as complicated as disabling explicit GC calls, and since we have a lot more RAM to use now you'll get better performance if you just let Java have a bigger heap.

To be clear as well, the flag he added doesn't explicitly disable GC, it disables asking for GC explicitly, e.g. it makes "System.gc()" a no-op. The JVM will still garbage collect when its heuristics decide it should.
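
A toy way to see the difference (not from the emulator): time an explicit per-frame collection, then run the same program with -XX:+DisableExplicitGC and watch the call turn into a no-op while the JVM still collects on its own schedule.

    // Toy demo: java ExplicitGcDemo   vs.   java -XX:+DisableExplicitGC ExplicitGcDemo
    public final class ExplicitGcDemo {
        public static void main(String[] args) {
            byte[][] garbage = new byte[64][];
            for (int frame = 0; frame < 60; frame++) {
                for (int i = 0; i < garbage.length; i++) {
                    garbage[i] = new byte[16 * 1024]; // simulate per-frame allocations
                }
                long t0 = System.nanoTime();
                System.gc(); // what the old game loop did every frame
                long ms = (System.nanoTime() - t0) / 1000000;
                System.out.println("frame " + frame + ": explicit GC took " + ms + " ms");
            }
        }
    }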


> Ultimately it's just as complicated as disabling explicit GC calls, and since we have a lot more RAM to use now you'll get better performance if you just let Java have a bigger heap.

Will you get better performance? Or will it end up using a large amount of memory and then having a long GC pause that causes your game to drop a frame every now and then?


The JVM heuristics are quite good, and the GCs are state-of-the-art and a beast, so I doubt you would have frame drops because of GC. More often than not you are better off not tuning the JVM.


Every application is different, but I'd wager it would strictly be better. The current GC implementations are very good, especially ZGC for pause times; I'd be surprised if a J2ME game had a max GC pause time over 1ms with ZGC.


I'm not sure that in the present day of massive teams and > $100 million budgets that there's much room for new celebrity game developers to emerge on that scale. Pretty much every gamer knows who he is.

Now, even the biggest breakthrough indie game with a 5 person dev team wouldn't become a household name. These days it's the studios themselves that get most of the credit. Which may only be fair: When there's 100+ people on a project, it's such a group effort that singling out a handful doesn't really represent the achievement.


It’s not as recent as it seems in my mind, but Notch is a relatively well known game dev. And in the younger crowd he may be even better known than Carmack.


Ever since Mojang was sold to MS in 2014, I'd say his notoriety has dropped quite a bit. Most people that would have been familiar with him are now in their mid teens at the youngest.


Even before Notch left I'd say others like Jeb or Dinnerbone were more commonly known in the Minecraft communities as they were more active in the community & posting about upcoming things.

(then of course Notch went off the deep end and most communities rapidly began distancing themselves from him...)


And the fame associated with these names was only possible because they came from a small indie group that broke through to the mainstream so massively and successfully that I can't think of any other example on that scale in the last decade.

In the 90's and early 00's, many big names practically were the studio, the brand.


There are definitely still "well known names" in the indie scene. They may not hit mainstream success, but for example there are some pretty well known members of the Factorio team in the Factorio community (kovarex & Klonan come to mind). There's people like Maddy Thorson of Towerfall & Celeste fame. If they decide to embrace it, I imagine one of the very few people on the team behind Valheim would also fit this, but it doesn't seem like they are interested in that.

There are still small indie groups that make breakout or successful games that hit mainstream awareness (Untitled Goose Game anyone?), but it seems like many prefer to use a company branded twitter than make their own name(s) public or just let the game stand on its own.


Sure, but they're not breakthrough names like Carmack & a few others, and are unlikely to be known by anyone not familiar with those particular games.

Carmack on the other hand is completely unavoidable if you even casually tune in to gaming news. I've never played a Doom game, but I know who Carmack is. In contrast, I've played 200+ hours of Factorio, and had no idea what the names of the developers were.

I'm not saying it's impossible, just that it takes much more than a breakout hit from indie devs to thrust them to Carmack levels of fame.

Part of it is likely due to technology changes. A big part of Carmack's fame initially stemmed not only from having a wildly popular game, but from having done things with hardware that were practically magical at the time. The use of binary space partitioning - a not very well known rendering technique that had, to my knowledge, never been used in video games before - was possible because Carmack could not only program, he rose above the level of programmer to computer scientist. Many people study computer science in college or bootcamps, but most simply become programmers.

Compare that to today's game dev ecosystem: even indies are typically using a variety of middleware and off-the-shelf software to build their game. There is both less room and less need for the type of hackery of the Carmack era (which wasn't unique just to Carmack). These days if you want your game to do something more computationally intensive, you just do it, and up the minimum reqs from a gtx 950 to a gtx 960.

I think this is why the closest thing to the level of celebrity of a single dev we've had in a while is Notch w/ Minecraft. He didn't get there by pushing the boundaries of hardware to make something previously not possible; he simply put in long years of iterative design that resulted in a unique breakthrough hit that appealed to hugely different audiences. If we're looking for future celebrity devs, that's the sort I would expect, and they seem much more rare.


He hasn't really done much—or completed anything—since selling Minecraft. At one point, yes, I'd agree, but I don't think that is the case anymore.


Notch ruined his popularity tho by being a total bigot and jerk. :/


https://en.wikipedia.org/wiki/Markus_Persson#Controversy

He's said some truly awful things. It's too bad people are downvoting you - this is part of the story of gamedev. Carmack would never say these kinds of things. Persson did. One is a famous and well-known and loved developer, the other is relegated to parroting talking points of alt-right/Nazi discussion boards and does not associate much with the rest of society.

Notch could have been more famous and well-loved than Carmack even, but his personality and hatred of minorities stopped that from happening.


Well; and when it boils down to it, what did Notch even really do for programming in general?

Minecraft is incredibly popular and very well made; but in no ways is it revolutionary or game-changing for the industry in the way Wolf3D or DOOM was.

There is essentially the game industry before and after DOOM.

Quake similarly changed the game; and, in fact, the engines for Quake/II and DOOM/II would go on to be the engines behind a quite massive quantity of games in the 90’s.

And then there’s Carmack’s massive contribution to FOSS by allowing us inside the code to learn how the craziness was constructed...

Honestly, there are only a handful of people in the world who even had the chance to make that kind of impact.


I guess you remember the names that make history. You remember the names of the astronauts who first went to the moon, but no-one knows the names of the other people who went to the moon.


ConcernedApe with Stardew Valley has more revenue than any Sega Genesis game had back in the day.


It happened recently with Markus Persson and Jonathan Blow.


Jonathan is definitely not on the same scale as Notch. There was only a brief period when Jonathan was well known, and mainly due to Indie Game: The Movie.


Most people at AAA studios certainly know Jonathan, if nothing else for his contributions at GDC.


That wasn't a dev coming out of a major studio though, which is what I think is significantly harder these days. Notch got there only because it was a small indie team that hit a black swan event and reached the mainstream in a massive way, at the same time that it caught fire on YouTube, with many current game streamers and channels having their roots in the early days of Minecraft videos.

Had Minecraft come from a major studio, even hitting it just as big, I don't think we associate it nearly as much with particular devs, and I can't think of another indie hitting it like that in the last decade. But if big names do still emerge, I think it will be from small breakthrough indie teams.


Fame in gaming has largely moved into “content creators”. The most famous game developers are almost certainly YouTubers first and game developers second. There are some fairly big channels now that produce memey content about making games in nearly exactly the same way as people make memey content about Minecraft or Fortnite.

People like danidev: https://www.youtube.com/channel/UCIabPXjvT5BVTxRDPCBBOOQ


I think you may have a point. I don't play Fortnite, but I know who Ninja is, and casual observers of the gaming world pretty much know his name regardless of playing Fortnite. (Though he's not a game dev.)

It is an interesting trend that players can now become more famous for playing a game than the people that create the game. However, I think that's only possible with the advent of user-created content: Minecraft gameplay on YouTube ~2012-2013 was a big driver in Minecraft's popularity and a catalyst for more activity like that. I think gaming culture would have reached this point even without Minecraft, but as it stands right now it was the foundation of the massive gaming channels and streaming.


It’s not that unusual that players get famous for playing games rather than developers. Games are a much more active way of engaging with media, and playing a game is a far more common experience than making it.

Streaming and gaming personalities both pre-date Minecraft but for sure it’s had a massive cultural impact. Particularly in terms of the growth of an audience through all the kids participating.


I think you are right. The closest I can come to a modern example would be Darkest Dungeon (made by 2 people) and Slay the Spire/Stardew Valley (both made by single devs); however, owing to your point, I don’t remember their names despite reading about them at least a few times. I would recognize John Carmack’s face in a crowded room, not to mention I'm obviously unlikely to ever forget his name.


Now that you mention it, I know the names of the devs behind a few of the roguelikes I play. I think part of this stems from it being the work of very small teams that ALSO do all their own game marketing, so your “representative” for the game is the creator themselves.

Dwarf Fortress comes to mind, and I know it’s made by Tarn and his brother, but I don’t know their last name…but I guess it’s not exactly modern.

Similarly, Kyzrati/Josh Ge, of Cogmind & REXpaint fame, and pender/Brian Walker.

To the point above about “who does the representing,” even though I happen to know their names, I think of them by their monikers, pender and Kyzrati, not their real-world names.


Toby Fox and Undertale also comes to mind


I believe Megacrit, which built Slay the Spire, is two people, Casey Yano and Anthony Giovannetti. But to the broader point I knew the name of the studio off the top of my head while I needed to look up the names of the individuals.


Dwarf Fortress?


well yeah there's less crucial core stuff to pioneer these days


nobody's figured out a UX for VR yet so far as I know


it's a much bigger problem space with lots of different ways to do things based on hardware capability etc.


> These days it's the studios themselves that get most of the credit.

It certainly serves the studios' interests to reduce their talents' market power. If customers recognized individual devs and wanted games made by them, those devs could demand more money and more influence.


Very true. Atari did it deliberately with their devs. Probably to their detriment: having big-name devs with their own brands would only have brought more attention to the market. Who cares if they make their own game studio? Atari would still get royalties on cartridges sold.

With today's game studios though, they aren't the platform owner, so a game dev with a personal brand branching off to do their own thing is still a net loss for them. I wonder if they have rules in place about that sort of thing. It could even take the appearance of something benign & reasonable: "No one talks to the press, everything to the press comes out of PR & marketing."

If that's the case, there may very well be a dozen Carmacks in the big dev studios that made something seemingly impossible happen, and without whom their games would simply have died, or been flops.

Then again, with that level of fame, failures hit the individual quite a bit harder than the dev teams as well. Look at someone like Warren Spector, Richard Garriott, or Denis Dyack. I might read each of their examples as situations where their singular influence on their games and rise to fame ultimately led to their failures & blame as well. It created a lot of pressure (somewhat self-imposed, to be sure) to do something bigger & better each iteration that they eventually couldn't keep up with. A situation made worse along the way by financial backers giving them too much money to develop, meaning the devs didn't have to think as critically about what to include and what to prune from games. It led to games that were a mess of shiny features or broken promises lacking a solid core. Fame cuts both ways.


Don't know about the person or his work/fame, but Brendan Greene is PlayerUnknown(PU) in PUBG.


Some names that come to mind are Jonathan Blow, Rami Ismail and Edmund McMillen


Cliff Bleszinski worked on $100M+ projects, Martin O'Donnell did, so did Notch. Definitely still possible.


Not being a programmer, I never have any idea what Carmack is talking about, but I'm always enthralled.


In case you want a simple explanation of this story: he tried to run an old pre-iPhone mobile game on a computer. The game runs very slowly on computers, which is surprising considering the performance difference between those old phones and a modern computer.

The reason turned out to be that the game runs a memory cleaning command to avoid bugs arising from lack of space. Since modern computers have 10000x more memory to clean up, these commands now take way more time to complete, and thus slow the whole game down.
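
For the curious, the pattern in question looks roughly like this in a J2ME-era game loop (a sketch with hypothetical method names, not the actual game code):

    // Sketch of the per-frame explicit GC pattern (hypothetical names).
    public class GameLoop {
        private boolean running = true;

        public void run() {
            while (running) {
                updateWorld();  // game logic
                renderFrame();  // drawing
                // On a ~128 KB heap this call was cheap and kept pauses
                // predictable; on a desktop JVM with a multi-GB heap it
                // forces a full collection every single frame.
                System.gc();
            }
        }

        private void updateWorld() { /* ... */ }
        private void renderFrame() { /* ... */ }
    }

The fix mentioned elsewhere in the thread, the JVM flag -XX:+DisableExplicitGC, simply turns those System.gc() calls into no-ops.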


> Since modern computers have 10000x more memory to clean up

The understated insane part of this is that emulating a 100mhz ARM CPU with 128kb of RAM apparently takes gigabytes of RAM to accomplish. What on earth is that emulator doing?


Given the details in the tweet, it might simply be translating the GC run from the emulated context to the host context. Not what you would ever want to do, but it's abandonware from years ago after all; it might have been "slow but bearable" back when it was being developed.


It seems kind of crazy to me that people were using a GC language on a device with 128kB of memory. John even mentions how he was forced to run the GC on every frame to avoid problems. I would think when you are that constrained you would be closely tracking your memory usage. It's probably a miracle that those games weren't constantly hitching and crashing due to slamming up against the memory limits.


Garbage collection was invented for Lisp, which initially ran on a machine with 18,432 bytes of RAM.


> It seems kind of crazy to me that people were using a GC language on a device with 128kB of memory.

Look up JavaCard which apparently is a thing that still exists. It uses the Java language without a GC - that programming environment is truly miserable to work with.


Best practice was to pre-allocate everything, or at least not allocate insane numbers of objects per frame if at all possible, such that System.gc() would become a no-op up to tracing and maybe defragmentation anyway.
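
In practice "pre-allocate everything" looked something like this (a rough sketch, all names hypothetical):

    // Pool everything up front so a per-frame System.gc() has almost
    // nothing to trace or reclaim (illustrative example, not real game code).
    class Projectile { int x, y, vx, vy; boolean active; }

    class ProjectilePool {
        private final Projectile[] pool = new Projectile[32];
        private int live = 0;

        ProjectilePool() {
            for (int i = 0; i < pool.length; i++) {
                pool[i] = new Projectile(); // allocated once, at load time
            }
        }

        Projectile spawn(int x, int y) {
            if (live == pool.length) return null; // caller just drops the shot
            Projectile p = pool[live++];
            p.x = x; p.y = y; p.vx = 0; p.vy = 0; p.active = true;
            return p;
        }

        void reset() {
            live = 0; // "frees" every projectile without creating any garbage
        }
    }

Nothing here is exotic; the point is just that the steady state allocates nothing per frame.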


I'd follow a Twitter alt dedicated to ELI5-style summaries of John Carmack tweets.


> Since modern computers have 10000x more memory to clean up,

The thing I don't understand is that the application presumably isn't using any more or less memory than it did when it ran on a 1/10,000x computer. You're not scanning the whole RAM for memory to clean up, just the allocated memory. And on a game that was designed to use 128kB, that's presumably not very much.


If the game was built with premature scalability and you're letting it see how much RAM there actually is, instead of giving it 128KB, I can see it hitting some bugs and getting really aggressive.


That was very helpful, thank you!


A lot of times, even being a programmer, you never have any idea what Carmack is talking about...


https://m.imgur.com/xA8LLRu

Is a reference to both the many enthralling presentations Carmack has given and to https://www.youtube.com/watch?v=X68Mm_kYRjc



I used Qualcomm brew in grad school to create an accelerometer based app to detect falls for senior people. I remember the first time it detected a simulated fall !! I was new to such sophisticated phones and it felt like an amazing achievement. I remember talking to a Qualcomm engineer who helped me with the internal API and the whole setup. Fun times!!


The Qualcomm engineers were always very approachable. It was a small but cool community.

Congrats on doing something intrinsically valuable. Using the accelerometer like that so early was slick!

I got an email once from a woman that thanked me profusely for our Do Not Disturb app that kept an abusive boyfriend from harassing her. That felt good.


"Well, we are programmers, we should be able to fix it."

What an awesome "beginner's mind" perspective. It's too easy to write off a potential solution as difficult or impossible - but why not adopt this attitude, and at least try?


Absolutely, and having 30+ years of troubleshooting experience on a wide range of hardware and software platforms also helps :)

It's very inspiring to read about a problem and having the solution explained as matter-of-factly as here.


"it's either off by one error or caching problem" -- Carmack probably


> It's too easy to write off a potential solution as difficult or impossible - but why not adopt this attitude, and at least try?

You forgot the part where dude in question is a programming Jesus.


In the same vein, I remember the first time I was able to use a phone to access the "internet" from a computer. The "internet access" the phone had to offer was called "WAP", and I was communicating with the phone via infrared (wireless \o/); you had to dial cabalistic symbols from the computer and be really careful to use their proxy to avoid sell-an-organ-level out-of-plan charges.

After that Android's deceptively straightforward tethering feature was almost saddening...


I used to love WAP on my flip phone. It was perfect for scrolling through bash.org


> After that Android's deceptively straightforward tethering feature was almost saddening...

Ha - you should've seen early iPhone tethering. Back in those days, net neutrality wasn't a legal right yet (at least here in the Netherlands), so the carriers would push a profile to your phone that disabled tethering. The solution, of course, was to jailbreak your phone and install a tweak that turned it back on.


I remember that some apps passed review and were released with a hidden tethering feature, which was then discovered, and the apps were banned.


> Back in those days, net neutrality wasn't a legal right yet (at least here in the Netherlands) ...

You must live in the future!


I'm also not aware of net neutrality being protected in Europe yet. I think I recall ongoing initiatives aiming at that but I'm pretty sure even these don't cover cellular data networks. I think that, for example, plans including unmetered access for specific services are still a thing (say unlimited Facebook usage, exempt from regular data plan limits).


> I think that, for example, plans including unmetered access for specific services are still a thing (say unlimited Facebook usage, exempt from regular data plan limits).

Ah yes, the infamous 'zero rating'... this is currently the only hole in the net neutrality regulation. However, apart from that, net neutrality is indeed a legal right in the EU.

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A...

https://berec.europa.eu/eng/document_register/subject_matter...


Carriers still do that. I can only do tethering by running a SOCKS proxy in Termux and forwarding the port with adb.


I remember using IR-port tethering from my Sony Ericsson 68i to my Palm Tungsten T and it worked fine - good enough for ICQ, telnet and IIRC some lightweight web browsing. Also, you could finally feel like you were living in the future. :)

Even though I never did get a Sharp Zaurus in the end, I migrated directly from the Palm TX to the Neo FreeRunner and then to the Nokia N900, and the rest is history. And I'm running Sailfish OS on my Xperia X now. :)


Imagine having John Carmack randomly contribute a patch to your project, pretty awesome stuff.


I'd take a screenshot and hang it up on my wall like a trophy.


Worth introducing a bug that would annoy him enough. /s


Would absolutely blow my mind; I would even merge it without multiple approvals. :D And add a new contributors section in the readme with John Carmack at the top.


John Carmack is awesome, but I find this level of deification (of any individual) kinda creepy - it just feels unhealthy. Would you like to be treated that way, in Carmack's position?

I realize that your comment was likely made in jest, but it still bugs me.


I think the joke is that you write a program, and someone well known like Carmack or Linus submits a minor patch, and you from then on say "Linus and I wrote ..."


Yeah it's all in good fun, kind of like you have the Chuck Norris memes.


If the attention bothered him, the simple solution is to create another account under an alias. I do it to keep the worlds apart, and I am not even famous.

Given all that he has accomplished and the influence his work has had on multiple generations of computer and gaming nerds, I don't see the celebrity status as terribly unwarranted.


Sure, it's a simple solution, but it comes at a cost. There are a lot of reasons to use one's own name when writing.


So if people wish to be treated like normal human beings, they must hide behind pseudonyms?

That may be the current reality of the situation, but it doesn't mean we have to keep it that way.

I'm not saying he doesn't deserve his status, I'm just saying let's not get too fanatical about it.


No one is doing pilgrimages to his house or stealing his garbage or following him around here..

We're just acknowledging that he is an exceptional contributor to our industry, and that his work has inspired a TON of people.

To have someone of that stature contribute to your project is exciting! There's nothing strange or creepy about that, and if people want to celebrate that in their own way like printing a Git commit or something, whatever!

Imagine being an indie film director and having Kathryn Bigelow show up on set one day to give you some notes and feedback on your film. You might frame that piece of paper.

Imagine being a local chef in a restaurant and having Julia Sedefdjian stop by for a meal and compliment your food. You might get a photo to keep on the wall in your kitchen..

Nothing wrong with any of that, I think you are characterizing things to an unwarranted extreme here.


I don't take issue with anything listed in your comment. The comment I was originally replying to described:

a) Bypassing their own code review processes.

b) Creating a contributors list, just so they can put Carmack at the top of it (what about all the other contributors?!)

IMHO this crosses a line. Not in a big way, but one worthy of comment.


If he breaks something just go fix it. Would be a funny story.


I wouldn't bet that it's "no one".

Carmack, while not Cardi B or whoever, is famous enough that I'd bet he has at least a few extreme worshipers and extreme haters.

That said, if Carmack contributed to one of my projects, yeah, I'd be sure to let people know that. :-)


Fame and the admiration of the famous are as old as civilization. And yes, pseudonyms have been used for hundreds of years for the same reason. You are complaining about something that is deeply human in nature.

I don't think the comment was fanatical at all, it was a lighthearted joke.


Just because it appeals to nature doesn't make it good.

While I agree it was likely lighthearted, that does not exempt it from criticism.


You've had several HN readers suggest your criticism is unwarranted. I agree as well. Seriously, nothing from the OP indicates anything fanatic or out of the ordinary.


Eh, well while I agree, he's far better than Bezos, Musk, or Zuckerberg COMBINED, and look at all the idolatry those guys receive.

Purely in terms of a programmer. Things he's created.

Everyone should be able to have heroes. You don't need to necessarily elevate them to the level of Gods, but John is very readily a video game real life hero.


Something tells me Mr. Carmack himself wouldn't approve of such engineering practices, no?


It's not always necessary to do a strict pre-commit code review system. If you're working on commercial projects you might be used to pair programming or post-commit review (which IIRC isn't actually that risky.)


Of course, I made the comment in jest. But it certainly would be an honor for anyone nonetheless.

(if it did happen for real, the most I would do is print out the commit hash with his name next to it :D, because he is one of the people I look up to in CS, along with Knuth and Tarjan among others. I would do the same for them, but then again I don't think they are active open source contributors)

Nonetheless, Carmack is still one of the most impactful programmers of the modern era.


Interesting, yeah it's probably hard for a famous person to connect with you on a real level if you're busy groveling and kissing the ground they walk on.


I think I'd have to find at least one thing to have him fix, haha.


I tried running games from 2004 earlier this week and all failed in a spectacular fashion. I thought Windows was all about supporting legacy 32-bit applications.


You might have better luck running those old games under Wine or Proton.


There's a Windows version of Wine?


You can run it on WSL, so in a sense, yes - https://reddragdiva.dreamwidth.org/607714.html


You might be able to do it via WSL but it's probably not worth the effort.


You might have more luck running them in Proton or Wine.


Also ScummVM


Yesterday I ordered myself a feature phone, and now I see this tweet about game development on feature phones.

What a coincidence! Anyway, is anyone here still using a feature phone? I would love to hear about your experience.


What feature phones can get on modern networks? I thought LTE "needed" a smartphone for some reason


No way. There are plenty of dumb phones. I never heard the term feature phone before but when I google it I see it described as dumb phones.

I have a great Alcatel dumb phone with dual sims and NO OTHER FEATURES. :D That's exactly what you want from a dumb phone. It works just fine on any mobile network today in Sweden. It cost the equivalent of 24 USD.


Feature phones are distinct from dumb phones. Feature phone usually implies there is a suite of built in programs. It will have a note keeping application, an image viewer/gallery view, sometimes (rarely) email, a mp3 player, a handful of games which for some inexplicable reason always includes Snake, etc.


Oh, I see. I almost suspected that was the case. The Wikipedia article did not describe dumb phones, but in fact phones that had a lot cooler games than Snake.



Nokia's latest feature phones offer 4G. I'm not sure about LTE specifically, but they do get the job done.



I remember BREW but barely used more than the demos: we were in San Diego and Qualcomm was trying to get local developers interested. We had a few clients considering it but the terms we were getting were eye-watering: if memory serves, it was $50k or more per carrier just to be listed for sale, plus a big chunk of the purchase price, and that was just a floor — the carriers wanted to adjust up based on your perceived ability to pay. We had some household name clients but just having money didn’t mean they would entertain the idea of adding so much fixed cost to the project just to see if it’d eventually become popular enough to break even.


The BREW conference charged $5k each to attend. We would come up with speaking topics to get gratis badges...


So, bytecode instrumentation as a post-compilation functional modification tool is pretty interesting. More info here: https://blogs.sap.com/2016/03/09/java-bytecode-instrumentati....

I wonder if there's any 'defense' against this kind of thing.
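
For anyone curious what this looks like at the JDK level, one common way to do it on a modern JVM is a java.lang.instrument agent; here's a minimal sketch (class name hypothetical) that, when packaged with a Premain-Class manifest entry, gets to see and optionally rewrite every class's bytecode before it is loaded:

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    public class SketchAgent {
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    // A real tool would rewrite classfileBuffer here (e.g. with
                    // a bytecode library); returning null leaves the class as-is.
                    System.out.println("loading " + className);
                    return null;
                }
            });
        }
    }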


For as long as I can remember there has been the capability to sign jars. So you can detect tampering, though not prevent it.
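
As a concrete illustration of the detection side (a sketch, with a hypothetical file name): constructing a JarFile with verification enabled and reading every entry will throw a SecurityException if a signed entry's contents no longer match its signature digest.

    import java.io.InputStream;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class VerifySignedJar {
        public static void main(String[] args) throws Exception {
            // "game.jar" is a placeholder; pass any signed jar.
            try (JarFile jar = new JarFile("game.jar", true)) {
                Enumeration<JarEntry> entries = jar.entries();
                byte[] buf = new byte[8192];
                while (entries.hasMoreElements()) {
                    JarEntry entry = entries.nextElement();
                    try (InputStream in = jar.getInputStream(entry)) {
                        // Reading the bytes is what triggers digest checking;
                        // tampered entries raise SecurityException here.
                        while (in.read(buf) != -1) { }
                    }
                }
                System.out.println("all entries passed signature verification");
            }
        }
    }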


Worst part about decent Feature Phone games was the device / carrier lock in.

I was always upset as a massive FF7 fan that there was a canonical game only released for certain (one?) feature phones tied to a Japanese telecom, and despite my best efforts I was unable to obtain a phone with a copy of all episodes downloaded (though I did find some translated transcripts that I could read for the story).

Very happy that it will now be included as a part of Ever Crisis.


It blows my mind that a garbage collected language without aggregate value types was the language of choice for games on these tiny phones.


It makes more sense if you lived through the marketing hype.

I'd compare "Java as solution to everything" to the more recent "web scale" and "NoSQL" crazes, but with the backing of a PR firm instead of Internet echo chambers.


They had a limited amount of memory and accessing it was probably fast, comparatively speaking. So it was a better fit for those devices than it is for our modern ones.


Perhaps lower performance was a worthwhile tradeoff for avoiding segfaults.


Qualcomm MSM chipsets in feature phones probably all had ARM CPUs with the extensions that had the JVM acceleration instructions.


Just after iPhone was released I worked for a (now defunct) company that had an automated J2ME to BREW porting tool. They modified it to work on iPhone which was why they hired me. Had a few early iPhone games released through that platform.

Of course, times moved on and companies switched to native apps. I ended up leaving for an accounting startup.


GC every frame? Jesus. Gamedevs jump through hoops to avoid it.


Probably added after profiling on that system found that the GC pauses would fit within the frame budget, whereas not running it every frame would eventually produce a long pause that dropped frames.


This! If you GC every frame, you can almost guarantee that it runs fast enough to not cause a frame skip.


Only true for Java runtimes at the time. With modern JVMs and generational GCs it is even detrimental. Profile it, and only let the allocation rate increase to an acceptable level that can be reclaimed easily. Or nowadays one can use a low-latency GC as well.


Your other choices are:

- Never GC via using object pools. This code is nastier than C++ because Java is not intended to be used this way.

- GC whenever needed randomly. The game will just pause occasionally. Very annoying as a player.

- Write the actual game in C++. Make a few JNI calls here and there. On feature phones I only remember this being possible for some vendor apps.


Depends on when. If we are talking about a modern-day JVM, then even the non-latency-optimized default GC would have <10ms stop-the-world pauses for up to gigantic allocation rates, much less for the presumably minor one of a simple game. And then there are two latency-optimized ones, Shenandoah and ZGC, with the latter having <1ms pauses, meaning that your OS introduces more latency with thread switches.

So I think that, writing a game while profiling allocation rates and paying a bit of attention to not spam new everywhere, one should get decent performance without any frame drops. At most, optimize the hot loops with primitives and arrays.


If these phones were still around, I'd imagine there'd be another option now:

- Write your game in C++ and transpile it to Java using some fancy framework that dances around never using GC.


You'd have to do something like allocate a single byte[] for everything you'll ever need, and reading & writing data would just be a constant tax since you can't just in-place cast that to an int or whatever. It wouldn't be very fun.
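
Something like this toy sketch of the idea; every "field" access turns into manual byte shuffling, which is the constant tax:

    // Toy illustration (not from any real transpiler): one flat byte[] as
    // the only allocation, with ints packed and unpacked by hand.
    class Arena {
        private final byte[] mem = new byte[64 * 1024];

        void putInt(int addr, int value) {
            mem[addr]     = (byte) (value >>> 24);
            mem[addr + 1] = (byte) (value >>> 16);
            mem[addr + 2] = (byte) (value >>> 8);
            mem[addr + 3] = (byte) value;
        }

        int getInt(int addr) {
            return ((mem[addr] & 0xFF) << 24)
                 | ((mem[addr + 1] & 0xFF) << 16)
                 | ((mem[addr + 2] & 0xFF) << 8)
                 |  (mem[addr + 3] & 0xFF);
        }
    }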


But it would be transpiled so the programmer would never need to look at the very ugly stuff. The idea reminds me of the original asm.js


These phones had like 1 MB, 2 MB, 4 MB of RAM. And these were 3D-accelerated games running on them. A GC language was definitely the wrong choice for the platform, but J2ME was the industry standard because it was portable. So if you're allocating a dozen objects in a frame, it's best to GC them in the same frame or you're going to lose tens of frames later.

Even in modern times, for the longest time Android Java apps had laggy scrolling due to GC hitches that the refcounting iOS Obj-C apps avoided.


Refcounting has glitches as well.


The refcounting isn't so bad but freeing and running destructors can pause. There's also memory fragmentation from not having a compacting GC. It's all fixable though.


Though nowadays GC is also “fixable”, and probably more performant than refcounting, at least for non-single-threaded code.


Cross-thread refcounting is not that common and the CPU has very fast atomics anyway. It's still a better tradeoff than having to sweep (which might need to page in), make all pointers visible, accept the occasional peak memory increase, etc.


What are the options for sharing an object between threads then? And even with good atomics, it is a very significant overhead. Also, the primary reason modern GCs have significantly better performance than refcounting is that with a GC one can move the majority of the work to another thread, letting the mutator threads continue their work.

Refcounting is good for some simple programs where ownership is not trivial and the language doesn’t support a GC, or when memory is constrained, etc. But it is not an accident that high-level languages with a GC don’t choose refcounting; having the cost of destruction fall on the given thread is just one point, and afaik circular references are similarly not an easily solved problem. And basically with every single “solution” to these problems you are moving towards a full-blown GC.


> What are the options for sharing an object between threads then?

It works fine, it’s just not done that often. (Specifically contended refcount changing isn’t done, which is why having fast uncontended atomic helps.) Transferring between threads happens and just works.

ObjC has explicit weak pointers and ways to move destructors to another thread and it all works. Though you could use C# or JavaScript in your app, many people do.

Actually, PHP and Python do use refcounting internally, I think PHP only GCs on exits from functions…


Java and .NET also have weak references.

Also C# can do everything that Objective-C is capable of, provided one actually knows how to use the language.

CPython uses a mix of refcounting with a cycle collector tracing GC, other Python implementations use tracing GCs.

The language does not specify GC semantics and counting on them is a recipe to break code when moving across implementations.


I don’t get your point. Of course refcounting is a possible and working solution, but it’s not an accident that Java, JS, and C#, the languages with the best GCs, don’t use it.


Yes, C# is the best language to write a phone OS in if you want to go out of business.


Other languages need special hardware to beat C#, so I guess the language wasn't the issue.

https://github.com/ixy-languages/ixy-languages

https://blog.metaobject.com/2020/11/m1-memory-and-performanc...


There is no "special refcounting hardware" in the M1, it has exactly what I said it has above this. Alas, nobody trusts me…


Sure, but it's glitches you control. Everything is deterministic. With a GC, your only solution is to call the GC explicitly on every frame and pray for the best.


These games are turn- and grid-based, like Legend of Grimrock or chess, where input lag is much less important.


I really liked the game "Stolen in 60 Seconds"; in 8th grade I played it for hours on my Nokia phone: https://www.getjar.com/categories/all-games/puzzle-and-strat...

They never released an Android or iOS version, and the company is probably closed now. One of the best mobile games I have ever played!


Symbian s60 was amazing and I can’t believe that Nokia isn’t part of the current smartphone landscape. Same with Palm TBH. Even Windows Mobile had smartphone apps, GPS, camera, browser, high-speed data (HTC Apache etc). They were just so impressive so long ago, and now they’re nothing. Windows phone is dead. WebOS is used on LG TVs. Nokia is, as far as I can tell, doing nothing except maybe hanging onto some IP patents and maybe selling burner cells.


I played so many games on feature phones back in the day. You could download .jar files from umnet, etc and install them on just about any phone from any manufacturer.

My favorite was the Pirates of the Caribbean: At World's End game which came out as a tie in to the movie, back when every major film had tie in games.


The thrill of playing games on my Siemens phone was something I haven’t experienced since. There was just something mind blowing about this small device, considering I had a PC with a CRT display back then. The sheer size difference between those two devices showed how far the technology has gone.


I got a bit sad when I thought of a generational talent like Carmack porting Doom to a Nokia or whatever but then I realized that everything he’s invented has been by trying to wring every drop of performance out of hardware.


You can play a bunch of games from this era on PC using this amazing archive/emulator bundle:

https://bluemaxima.org/kahvibreak/


Hmm. Although they were not actually called feature phones. Feature phones was what we started calling phones that weren't smartphones after smartphones came to be a thing/word.


Agreed, I noticed that too ;)


H4sIADQep2ACAz1Su44bMQzs/RXENdfYwQXIF1x1TaqkOeAarZa7IiyLC5HKwn9/I62dSi/Og0P9kTX9OJ0+uDLtTAHL1+mkhQK9FN6NFq3t9nLut1n1KmWl4HidZVkAKg7YZOIDt6fOE2jSOnPNUviycpFmtHE1sIrRXsU7C8pRmHWlTc37UQr9/PV2iSnUEJ0rSYmVb9Cwg1xiol1ypq3qFKZ8p6JOExRjZDOZMgNDnpiW5u3oJZSZYiivo5CDCWBtm4PzfCaeZay950qVwwwBT9qcLFbNaGFFPJ/aaNeWZ3BLufYEsEFXKsUHmoGC479Ad+dHMdxDhh5tLjg8PEGrmwwNUpXiKL4yb89SXShrWUGEBtng4B2OpJjD4XlgH7kab0gLNXT4sSMr2NIeJv0Pc1w/q2ea7kMrxKvlYKk/ArFVdkZe2PKt5cM8Ni6XPkw4LZhdCy6YJSAT+858JG4aJWS6IVJgBOldhp3zcwiLFLEEp+IYwMgZ7LnneagH/Loak/zr/hAD2v6t+Efg55iK4uqOu28/M6keswIAAA==


I played my first games on my brother's Nokia 2600 C.

I played titles like Assassin's Creed, and found them way more entertaining than current smartphone titles.


Has anyone worked with KaiOS? It's an OS for modern feature phones. Just buttons, no touch screen. 2nd most popular phone OS in India.


"for what were called feature phones"

This way of phrasing it, makes that term look ancient, which is a hilarious way of making me feel old.


How did games get purchased and installed on those phones? I don't remember seeing any advertising for phone games back then.


I really want to play the Doom RPGs again, I've never gotten them to work in emulator.

Maybe I should try again with the GC off.


GC feels like a magic box. It does great things for you without you having to worry about memory leaks, which is great. But like anything magical, you give up some control. I guess it's a tradeoff.


Well, malloc and free are black boxes as well. And in typical implementations can potentially take arbitrary amounts of time to run, too. (Though they usually don't.)


It depends on your allocator: ptmalloc, the default Linux allocator, is open source, and there are plenty of very robust open allocators (jemalloc, mimalloc, tcmalloc, etc.). Understanding how your allocator works can be very important in certain contexts.

On windows I'd expect the default allocator to be a black box, but I might be wrong.

For garbage collection I strongly recommend this book (on top of the source code of your gc if available!) https://gchandbook.org/


> On windows I'd expect the default allocator to be a black box, but I might be wrong.

The UCRT is at least "source available" on Windows, up to a point, and distributed with the Windows SDK. The release heap codepath is a bit boring:

    malloc:       C:\Program Files (x86)\Windows Kits\10\Source\10.0.19041.0\ucrt\heap\malloc.cpp
    _malloc_base: C:\Program Files (x86)\Windows Kits\10\Source\10.0.19041.0\ucrt\heap\malloc_base.cpp
    HeapAlloc:    (kernel32.dll alias for ntdll.dll!RtlAllocateHeap() on my machine)
The debug codepath is a bit more interesting:

    malloc:                  C:\Program Files (x86)\Windows Kits\10\Source\10.0.19041.0\ucrt\heap\malloc.cpp
    _malloc_dbg:             C:\Program Files (x86)\Windows Kits\10\Source\10.0.19041.0\ucrt\heap\debug_heap.cpp
    heap_alloc_dbg:          C:\Program Files (x86)\Windows Kits\10\Source\10.0.19041.0\ucrt\heap\debug_heap.cpp
    heap_alloc_dbg_internal: C:\Program Files (x86)\Windows Kits\10\Source\10.0.19041.0\ucrt\heap\debug_heap.cpp
    HeapAlloc
HeapAlloc itself is a bit more of a black box (AFAIK), and contains a lot of the fun details about the actual process of heap allocation - although there's a bunch of hooks, debug functions, documentation, articles, alternative implementations (ReactOS), etc.


Thanks! I was wrong then.


Wouldn't the garbage collection for most common runtimes also be open source?


Technically, but they tend to be much harder to hack on.

It's trivial to replace malloc/free with my_malloc/my_free - and integrating libraries that replace malloc/free as-is without renaming also tends to be straightforward. In C++, you can overload new/delete to use my_* with little hassle, or placement new to instantiate classes on previously allocated memory directly.

Meanwhile, C# and Java provide absolutely no means of creating instances of their classes via anything other than their built-in GCs. You can't just distribute a .exe or .jar with a replaced GC - instead, you need to create/distribute/install an entirely new runtime, and even that doesn't really provide any sane means of having multiple GCs living side by side. This is all theoretically technically possible, but orders of magnitude more work.


C# has structs and support for native heap management, and as of C# 9 very few features missing versus something like Modula-3 or even D.

You can provide your own GC on .NET via the COM API.

https://github.com/Potapy4/dotnet-coreclr/blob/master/Docume...

Just like Java since version 10, https://medium.com/@unmeshvjoshi/writing-your-own-garbage-co...


> C# has structs

And yet so little code uses them that to eschew the built-in GC is to eschew basically the entire .NET framework. Even basic foreach loops go through IEnumerable interfaces - theoretically boxing even structs. They also come with different semantics - sometimes terrifyingly subtle differences when combined with properties.

> and support for native heap management

IDisposable and friends are awkward fill-ins for proper RAII tools for native heaps.

That said, these options can be incrementally deployed in your existing codebase without resorting to another language, so they're more accessible options.

> [links]

Hooking/replacing the GC seems more straightforward these days than when I last looked into it, though! Although coreclr APIs won't help with Unity or Mono. OpenJDK is at least used by modern Android these days, so perhaps there's a way to use its GC customization options...?


Some developers know their stuff,

https://devblogs.microsoft.com/aspnet/grpc-performance-impro...

Others are doomed never to move away from new.


Strange, if the software expects this little memory wouldn’t it be better to just limit the JVM to less memory?


mukesh610 had the same thought. [0] I think papercrane's response is correct: it makes more sense to disable explicit GC. Running a full GC cycle every frame is going to severely undermine a modern generational garbage collector. Disabling explicit GC, and using a modern low-pause GC, seems like the way to go.

[0] https://news.ycombinator.com/item?id=27222631


> Before the iPhone, I worked on ...

Did Carmack work on the iPhone? Or should I parse this differently?


Before the iPhone existed, there were feature phones, and Carmack wrote games for feature phones.


I think this means that after the iPhone appeared, j2me phones went extinct.


Working on the iPhone means writing apps for it.


I played some of those games back in the day. Smooth and very fun to play.


I remember playing Orcs and Elves on an old flip phone! Such a fun title.


Twitter is not suitable for such writings...


Before Twitter, John used a much more usable platform called basic text files to share his incredibly interesting opinions.


The chain all in one place, such that it's readable:

> Before the iPhone existed, I worked on a few games for what were called "feature phones": Doom RPG 1&2, Orcs&Elves 1&2, and Wolfenstein RPG. Qualcomm's native-code BREW platform had better versions, but I haven't seen any emulators and archives for it, so they may be lost at this point. The J2ME (java mobile) versions are still floating around, and can be emulated.

> My son wanted to get O&E2 running, so we set out on a little adventure. Kemulator ran the game, but audio was glitchy and it hung after you died in game. Well, we are programmers, we should be able to fix it. Unlike most emulator projects, Kemulator turned out to be closed source abandonware, so we moved over to freej2me, which is a live github project.

> The hang didn't happen, but audio was even worse. Missing sound effects was a simple bug fix -- MIDI sounds weren't seeking to the start on replays. We will submit a patch. Still, everything was glitchy with audio underruns. We noticed that the emulator was taking an absurd amount of CPU, despite the game being built for <100 MHz mobile CPUs.

> We spent a frustrating afternoon exploring java profiling tools, but finally, Flight Recorder and JDK Mission Control pointed out the root cause: explicitly invoked garbage collection. A vague memory of having to call system.GC() every frame to avoid problems on some mobile phones bubbled up. We couldn't change the source on the game, but the jvm has a handy option -XX:+DisableExplicitGC that fixed everything right up.

> This is an interesting case where an operation is >10x slower on a modern computer.

> A GC sweep on a phone with 128k of heap is a very different thing than a desktop with a multi-GB heap.

> Some old writing about the early cell phone work: https://web.archive.org/web/20060502175605/http://www.armadi...


I'm curious how the UI looks to other people, because "all in one place" isn't really a complaint I can understand about the Twitter UI I'm seeing. There's buttons and stuff between tweets, but with 280 characters per tweet (140 was definitely less readable) they're not significantly more difficult to read on Twitter than they are in the paragraphs you posted.


The Twitter web UI if you are not logged in is purposefully broken. Every so often, it will just show you "access denied" or "you don't have permission". It is the peak of dark patterns.


I think this is a bug rather than deliberately blocked. The on-page retry button will continuously fail, but if you go up to the address bar and keep hitting enter it will eventually work.

Still embarrassing for a tech company of twitter's size. Displaying a tweet to a logged out user should be the single simplest job their service has, but it's usually broken.


>embarrassing for a tech company of twitter's size

I'm a heavy user of Twitter. They have the most ridiculous bugs all the time

Let me list a few I can recall from the last two years:

1. A serious bug that made private lists' names, member counts, and descriptions visible to everyone for half a day

2. ANOTHER private list exposure bug after a year

3. Lots of their features are semi- or totally broken, the most obvious example being Moments. Some minor ones are like "twitter anniversary" etc.

4. Media files being totally lost/404, especially for some older tweets

5. UI: "Checked" mark for adding people to lists being invisible for at least a week

5.1. UI: some UI elements suddenly become black for a few weeks regardless what theme you use

6. Outage: Like function broken for almost a day

7. Outage: Timeline broken (no updates) for half a day

8. This probably isn't a bug but a "shadow block" feature: I can't follow some accounts (and their following/follower counts don't show) if my IP is in a certain region.

9. Huge feature disparity between web and app, or iOS and Android

And needless to say, their support is beyond unhelpful, and they don't really have a proper place to report technical issues/bugs of their service.


Don't forget videos appearing as a blocky mess for the majority of playback time no matter how good your internet connection is.


Yeah Twitter’s video compression is practically a parody of bad video compression.


I was surprised by being able to see a tweet from a user who had blocked me after someone mentioned it in my replies.


The fact that threaded tweet links like this don't work on non-official apps (I use Talon on Android), to me, is a sign that this is deliberate.


https://github.com/klinker-apps/talon-for-twitter-android/is...

Twitter's thread/reply API has been changed a lot. It definitely works better now.

I will also be honest, Talon basically isn't really updated any more.

I remember the last time I used it, it had a bug where it would always re-save/re-compress the image when you download it, which would be very easy to fix, but the author didn't do anything about it. I just checked, and the bug is still there.

https://github.com/klinker-apps/talon-for-twitter-android/is... and https://github.com/klinker-apps/talon-for-twitter-android/is...


They need users; they'd rather avoid non-users who just want the content.

Twitter is a private company but it has also become a public space for political discourse.

The need for profits and open free speech are butting heads.


Thinking that it was intentionally made broken, or deliberately broken at some point is rather presumptuous. As has been said before, don't assume maliciousness when incompetence is a better explanation.

That said, I would buy that ignoring it and never fixing it is deliberate.


Twitter has not fixed this bug for many, many months. I think they are very happy to have it, because I do not believe Twitter couldn't fix it if they wanted to.


Years. I’ve never used a Twitter app, but this bug has occurred a high percentage of the time I view a tweet in the browser of every smartphone I’ve ever owned.


Thank you! I had given up on reading any HN post from Twitter because of this issue.


This happens regardless of whether you are logged in or not. IIRC it's some weird implementation bug with how they use webworkers. There was an HN thread on it a couple of months ago that prescribed some fixes. It varies by browser, browser version and some other things like what chrome may be a/b testing for you. The fixes are temporary, though.

Dunno why Twitter seems to have completely deprioritized the issue though. They change the error message every once in a while but nothing else.


It seems to require randomly from 1 to 5 page refreshes to display anything (and don't be fooled by helpful retry button, it won't work, you need to use browser refresh). I thought it was just broken, but you say it works when logged in? WTF.


Most probable explanation: logged-in and logged-out requests go through different paths in the infrastructure, the logged-out one doesn't work that well, and they have no pressure to fix it. Maybe it's not a dark pattern, but it smells like one for sure.


Try copy/paste of the URL into a brand new browser tab. Works for me, every time. Which means it's probably something to do with the Referer header being set? Anti-flooding/hotlinking maybe?


On top of that this particular chain is quite egregious because Carmack didn't even bother editing his text to be twit-friendly. It just breaks in the middle of sentences.

It's pretty amusing to me that this modern, high-res, multi-megabyte page has worse usability than when we could just "finger" Carmack's .plan from a terminal 20 years ago.

Maybe we should try to bring finger back. We could pretend that it uses the blockchain to drive adoption.


We just need to sell an NFT each time a .plan changes.


It has been like that for years too.

I basically won't read Twitter threads because of it; perhaps I am better off for it.


That's entirely possible. I don't really ever browse it without being logged in.


I noticed it almost never happens if you copy/paste the url into a browser. Seems related to navigating in with a Referer header set.


And you have to click on "Show this thread" at the end to see more, otherwise you only see some of the tweets.

And yes, it's absolutely horrible that there are all those buttons and a repeated profile pic intruding at random points in the text, like in the middle of a damn word.

I get that you need to tweet to get views and nobody reads blogs any more but this "Twitter thread" concept is a UX horror.


And "Show this thread" takes me to the comments first, I have to scroll up to see other messages by the author and the order seems incorrect.


Yeah, I prefer threadreaderapp over Twitter's native UI for reading long threads all day everyday, but don't find the native one difficult to read.

My only complaint is when the thread is long, Twitter will start not showing it all at once. You have to click the last tweet to see anything after it.

But again, threadreaderapp itself also has trouble fetching all the parts of a super long thread.


Thank you for this! I have seen this before but I forgot the name. Bookmarking it and I'll probably create a simple bookmarklet so that I can quickly open it up on the current tweet

edit: in case anyone is interested, here's the code for the bookmarklet to turn the current tweet into a threadapp thread. This is not very deeply tested but it worked with the current tweet above so YMMV:

  javascript:(function(){window.___location.href = 'https://threadreaderapp.com/thread/' + window.___location.pathname.split('/').pop(); })();


To me it looks like : "Something went wrong. [Try again]". Clicking the try again button will never do anything. Reloading the page a few times will eventually load the tweet. Unless I'm on a mobile connection. And sometimes the web workers get screwed up and nothing will ever load until the browser is restarted.

When it does load, chances are that it only shows one tweet from a thread followed by half a dozen unrelated tweets that twitter thinks I might want to read instead.


Thank you. Twitter's user experience is the worst.


Although there is no reason he couldn't have posted on, say, Medium and shared the link. He chose to use a 140-char-limit site to write a novel.


280 now ;)

But I’d rather him use Twitter than the centralized broken mess that is medium.


Better the centralized broken mess you know than the centralized broken mess you don't


Is it really Twitter or the habit to write everything there, even lengthy posts?


It's interesting to think about why that habit exists. It seems like a low barrier to entry to me. If Carmack had wanted to post to Medium or something he would've had to write and edit the whole essay and get people to come read it.

On Twitter Carmack posts a paragraph as it comes to him. Could do the whole thing at once. Could take days to complete the thread, or longer. It could be as long as he wants. One tweet or one thousand. No real expectations of edits. No one would be surprised if it's only a three-paragraph thought, whereas for an essay that might feel a little brief. Nobody expects really profound or serious insight, just the thoughts on top of his head.

Readers also have the same low barrier to entry. No need to go to a separate page or app. Look at the first or first n paragraphs. Scroll by anytime.

The user experience for something like this may not be perfect on Twitter, but I think Twitter has a lot to recommend itself as the appropriate tool for sharing thoughts like this.


Reading Carmack's thought-to-text style of writing (a la Joyce's Ulysses) might be the only thing to get me to wade into the cesspool that is Twitter... Holy crap, that was the most arrogant, self-aggrandizing post I've ever made =( I wish I didn't feel that way, but I do =)


Has someone written an extension to aggregate tweets into 1 large tweet yet?


Dunno about an extension but there are apps

https://threadreaderapp.com/thread/1395089205986988043.html


nitter.com is an alternative ui that shows the whole thread on one page without needing js



The correct link seems to be nitter.net


Whoops


Yeah, @threadreaderapp


The worst is when lots of people each reply "unroll thread!" to a thread. So much noise and junk.


I detest it for this reason. Well, that and the fact that it's a closed-source service leeching off content from another website and slapping ads on top of it. The only thing worse are those awful video downloader bots.

I have opted out and blocked their bot on Twitter which apparently currently suffices to prevent their scraping. I'm more than happy to point anyone who asks to a text file. God, I miss blogs.

Sorry about the rant.


It's unfortunate. I'm guessing it's because people need that dopamine hit of getting a notification, just for them.


I wish John would move over to a proper blogging site like medium, rather than a Frankenstein string of tweets.

He has enough material to gain a good following.


Yes, I've wondered if there's any advantage to using Twitter if you have to say something longer than a single tweet. Is there any blocker to using good old blogs for this? There's no rule that a blog post should be at least a page long; it can be short.


He used to use Facebook for these things, back when he worked there.


Bring back .plan files!


You could also insert the Carmack’y “ah-ehm” between tweets :-)


Awesome thank you!


Must be fun being related to John Carmack.


Just a fun question. If you had a choice; would you rather be John Carmack or Tony Hawk?


For the general case, it’s going to be a lot easier to still be doing Carmack stuff at 80 than Hawk stuff.


80? Tony thinks he recently did his last ever 540 - at "only" 53.

https://twitter.com/tonyhawk/status/1372425655913123840


Jim Keller. He looks like Tony Hawk but is technical like Carmack.


The number of injuries Tony Hawk had to take to become this good is horrifying. That is enough to make me choose John.



