> and I struggle to think of what would lead one to the urge to implement a key=value config file parser in 2025
C/C++ culture never changes.
No matter how many new build tools and package managers people come up with, the ‘default’ environment is still one where adding dependencies is hard, so people roll their own utilities instead.
I can only speak for myself, but I don't think you've got the cause and effect right. Dependencies tend to have their own dependencies (which have ...). It's not so much the difficulty as the awareness of that transitive cost that leads me to keep my dependencies to the bare minimum.
All my dependencies are locally cloned, and I build them from source in a network-isolated environment. And yeah, that makes it more expensive to bring new ones in, so I tend to shy away from it. I see that as a good thing.
That said, if you're willing to give CMake access to the network, things largely just work as long as you don't attempt anything too exotic compared to what the original authors did. For that matter, Boost already has a decent solution for pretty much anything and is available from your distro's repos. Rolling your own is very much a cultural pastime as opposed to a technical necessity.
> and I struggle to think of what would lead one to the urge to implement a key=value config file parser in 2025
That could be a symptom of LLM coding. I have found that at times they will go down a rabbit hole of coding up a complicated solution to something when I know a library already exists that it could've used. I'm sure part of the problem is that it isn't able to search for libraries to solve problems, so if its training data didn't use a particular library, it won't be able to use it.
> In this case we build a model with 1T index which we lookup for every token to make prediction with much smaller model.
This index seems to be used to minimize the size of models.
I'm familiar with term indexing as described in The Handbook of Automated Reasoning and I imagine that this index helps them recognize 'generalizations'.
Just as a rewrite rule can be used to reduce an infinite number of expressions, not just a single expression, a generalization can be used to shrink a model.
Generally, such an index would be some kind of prefix-tree.
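To make that concrete, here's a toy version of the kind of prefix tree I mean, in C++ since that's the flavor of this thread. This is just my own illustration of the general term-indexing idea, not the actual structure from the paper: store known keys, then walk the tree to find the longest stored prefix of a query.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Minimal prefix tree (trie) over strings. An illustration of the kind of
// index the term-indexing literature describes, not the paper's structure.
struct TrieNode {
    std::unordered_map<char, std::unique_ptr<TrieNode>> children;
    bool terminal = false;  // true if a stored key ends at this node
};

class Trie {
    TrieNode root;
public:
    void insert(const std::string& key) {
        TrieNode* node = &root;
        for (char c : key) {
            auto& child = node->children[c];
            if (!child) child = std::make_unique<TrieNode>();
            node = child.get();
        }
        node->terminal = true;
    }

    // Length of the longest stored key that is a prefix of `query`, or 0.
    size_t longest_prefix(const std::string& query) const {
        const TrieNode* node = &root;
        size_t best = 0;
        for (size_t i = 0; i < query.size(); ++i) {
            auto it = node->children.find(query[i]);
            if (it == node->children.end()) break;
            node = it->second.get();
            if (node->terminal) best = i + 1;
        }
        return best;
    }
};
```

The same shape works over token IDs instead of characters; lookup cost is proportional to the query length rather than the number of stored entries, which is what makes a 1T-entry index plausible to consult per token.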
> prima.cpp is a distributed implementation of llama.cpp that lets you run 70B-level LLMs on your everyday devices— laptops, desktops, phones, and tablets (GPU or no GPU, it’s all good). With it, you can run QwQ-32B, Qwen 2.5-72B, Llama 3-70B, or DeepSeek R1 70B right from your local home cluster!
Just my two cents from experience, any sufficiently advanced LLM training or inference pipeline eventually figures out that the real bottleneck is the network!
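To put rough numbers on that (all of these are illustrative assumptions, not measurements from prima.cpp): pipeline-splitting a model across home devices means shipping the hidden-state activations between devices for every token, so Wi-Fi latency alone sets a hard ceiling on tokens per second.

```cpp
#include <cstdio>

// Back-of-envelope: per-token network cost of pipeline-parallel inference
// split across home devices. All numbers below are assumptions chosen for
// illustration, not measurements.
int main() {
    const double hidden_dim   = 8192;       // Llama-70B-class hidden size
    const double bytes_per_el = 2;          // fp16 activations
    const double hops         = 3;          // device boundaries in the pipeline
    const double bandwidth    = 100e6 / 8;  // 100 Mbit/s Wi-Fi, in bytes/s
    const double rtt          = 3e-3;       // ~3 ms per hop on home Wi-Fi

    const double bytes_per_hop = hidden_dim * bytes_per_el;        // ~16 KB
    const double per_hop_s     = bytes_per_hop / bandwidth + rtt;  // ~4.3 ms
    const double per_token_ms  = hops * per_hop_s * 1e3;           // ~13 ms

    std::printf("per-token network overhead: %.1f ms (~%.0f tok/s ceiling)\n",
                per_token_ms, 1000.0 / per_token_ms);
    return 0;
}
```

With those numbers the network alone caps you at roughly 77 tokens/s before a single FLOP of compute, and real home networks are often worse. Hence: the bottleneck is the network.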
Sometimes, every config parser popular enough to be considered trusted is chock full of features you don't need, and it increases both your build time and binary size without bringing much value beyond the config parsing you could've written yourself in a few minutes.
Sometimes, what you want is just a simple key=value config parser.
A little re-inventing the wheel is better than a little dependency.
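For what it's worth, here's roughly what that "few minutes" version looks like in C++. It's a sketch with the usual arbitrary choices (skip blank lines and '#' comments, trim whitespace, last duplicate key wins), not a reference implementation:

```cpp
#include <fstream>
#include <string>
#include <unordered_map>

// Minimal key=value config parser: skips blank lines and '#' comments,
// trims surrounding whitespace, last duplicate key wins. A sketch of the
// hand-rolled version, not a reference implementation.
std::unordered_map<std::string, std::string>
parse_config(const std::string& path) {
    std::unordered_map<std::string, std::string> cfg;
    std::ifstream in(path);
    std::string line;

    auto trim = [](std::string s) {
        const char* ws = " \t\r\n";
        s.erase(0, s.find_first_not_of(ws));
        s.erase(s.find_last_not_of(ws) + 1);
        return s;
    };

    while (std::getline(in, line)) {
        line = trim(line);
        if (line.empty() || line[0] == '#') continue;
        auto eq = line.find('=');
        if (eq == std::string::npos) continue;  // or report a parse error
        cfg[trim(line.substr(0, eq))] = trim(line.substr(eq + 1));
    }
    return cfg;
}
```

No quoting, no escapes, no sections: if you need those, you've outgrown key=value and should reach for a real format. Until then this is the whole dependency.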
Even with consumer GPUs, the AI stack is completely dependent on ASML, isn't it?
Thought experiment: What would happen if the Dutch government decided that AI is bad for mankind and shuts down ASML? Would the world be stuck in terms of AI? For how long?
That's a silly thought. ASML isn't controlled by the Dutch government.
Also, everything in computing is dependent on semiconductors, and ASML is just one player. There are tens of thousands of companies involved in the industry, and some of them are single suppliers of critical materials, machines, or software. It's wrong to single out ASML.
ASML licensed technologies from US companies during the development of EUV. That's what gives the US the leverage to do things like block sales to China.
Like all novel things, once you prove it can be done, someone else will do it. If you shut ASML down, some other country that is already working on it will catch up. ASML existing is better because at least the person ahead can keep trying to remain ahead.
ASML publishes most of the research, and there's not much stopping people from building their own EUV lithography machines. It's just very, very hard, basically the equivalent of doing magic. China is making incredible progress on this front.
The problem with these things is that there are always trade secrets that aren't published anywhere. So you'd need to actually hire people with specific knowledge to be able to replicate it.
The world (and the West specifically) definitely needs to build redundancy ASAP here.
The new machines are 2-3 stories tall, require an Airbus to transport, and have complexity on par with the world's largest particle accelerators, if not more. Because of this, the supply chains are so highly intertwined that no one country can isolate them. The Dutch can't build these machines without our contributions, and neither could we without theirs. Lots of moving parts here, literally and figuratively.