In 2004, it was a best practice to keep precomputed data in memory. By 2012, CPU and GPU performance had increased enormously while memory performance had not kept pace, so calculating the necessary data on demand became faster than keeping it in memory and retrieving it.
Also reminds me of all those framerate-pacing hacks people put into old Flash movies: they literally spin in a loop until the current time advances to the next frame. AFAIK Ruffle explicitly pads out the time that scripts see just to defeat this particular coding antipattern.
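The pattern is roughly this (a minimal C sketch of the antipattern as described, not actual ActionScript; FRAME_MS and the printf stand-in for per-frame work are placeholders of mine):

    /* Busy-wait frame pacing: the wall clock itself is the pacing
       mechanism, so the loop burns CPU until the next frame boundary
       instead of sleeping. */
    #include <stdio.h>
    #include <time.h>

    #define FRAME_MS 41.7  /* ~24 fps, a common Flash frame rate */

    static double now_ms(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    int main(void) {
        double next = now_ms();
        for (int frame = 0; frame < 100; frame++) {
            printf("frame %d\n", frame);  /* stand-in for per-frame work */
            next += FRAME_MS;
            while (now_ms() < next) {
                /* spin: burn CPU until the clock reaches the next frame */
            }
        }
        return 0;
    }

Because the clock itself does the pacing, a player that reported a frozen time would spin here forever, which is presumably why Ruffle fakes the time advancing.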
This is actually a great example of something I see in the wild. The most common case I've seen is lookup tables for trig functions that are at best as fast as math.h, and often slower (see the sketch below).
You have to benchmark aggressively, even across CPU generations, to remain confident that your optimization has actually optimized anything.
Nowadays, I throw most of the math code I worry about into godbolt with -O3 and check the major instructions against Agner Fog's tables. It's often immediately obvious that a modern compiler and CPU already do what I want in a tiny number of cycles. (One exception is hot paths that may need to be hand-optimized with SIMD intrinsics.)
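For anyone curious, here is a minimal benchmark sketch of that comparison in C. The 1024-entry table, nearest-entry lookup, random angle distribution, and timing method are my own arbitrary choices, not from any particular codebase; compile with something like gcc -O3 bench.c -lm:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define TABLE_SIZE 1024   /* power of two so the mask below works */
    #define N (1 << 20)
    #define REPS 50

    static float table[TABLE_SIZE];

    /* Nearest-entry lookup, no interpolation. */
    static inline float table_sin(float x) {
        int i = (int)(x * (TABLE_SIZE / (2.0f * (float)M_PI)) + 0.5f)
                & (TABLE_SIZE - 1);
        return table[i];
    }

    int main(void) {
        for (int i = 0; i < TABLE_SIZE; i++)
            table[i] = sinf(i * 2.0f * (float)M_PI / TABLE_SIZE);

        /* Random angles in [0, 2*pi) so accesses aren't trivially predictable. */
        static float angles[N];
        for (int i = 0; i < N; i++)
            angles[i] = (float)rand() / RAND_MAX * 2.0f * (float)M_PI;

        volatile float sink = 0.0f;  /* keeps -O3 from deleting the loops */

        clock_t t0 = clock();
        for (int r = 0; r < REPS; r++)
            for (int i = 0; i < N; i++)
                sink += table_sin(angles[i]);
        double t_table = (double)(clock() - t0) / CLOCKS_PER_SEC;

        t0 = clock();
        for (int r = 0; r < REPS; r++)
            for (int i = 0; i < N; i++)
                sink += sinf(angles[i]);
        double t_libm = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("table: %.2fs  sinf: %.2fs\n", t_table, t_libm);
        return 0;
    }

The point of the sketch is the comment above: run it on your own hardware rather than trusting folk wisdom about tables being fast, and rerun it when the hardware changes.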
Reminds me of the technical documentation for Doom 3 BFG Edition:
https://fabiensanglard.net/doom3_documentation/DOOM-3-BFG-Te...