Many language learning books used to come with audio media. I'm old enough to own a few that came with cassette tapes.
Books are still worthwhile IMO, if only because they provide a bit of structure to one's learning. With free resources it's way too easy to become paralyzed by choice.
I am old enough to remember them. You got maybe 4 hours of media - sentences from the book being read aloud and short, boring dialogs. You can't compare it to what is available now; it is like comparing a puddle of mud to the Atlantic Ocean. And I mean that in a positive way - those audio tapes amounted to almost nothing by comparison.
Beyond projects like Dreaming Spanish, there is a practically infinite amount of French, Italian, Spanish or German YouTube on whatever topic you want. There are even dedicated playlists for total beginners that you can start consuming with zero knowledge. There are thousands of foreign-language shows on Netflix at various difficulty levels - some actually suitable for beginners. Some you have already seen in your own language, so you can understand them more easily.
For major languages, there are dozens if not hundreds of podcasts with simplified news and "for beginners" discussions. Some of them are usable with a literally minuscule amount of knowledge.
To me, the cool (and uncommon in other languages' standard libraries) part about C++ ranges is that they reify pipelines so that you can cut and paste them into variables, like so:
#include <ranges>
#include <span>
#include <vector>

auto get_ids(std::span<const Widget> data)
{
    using std::views::filter, std::views::transform;
    auto pipeline = filter(&Widget::alive) | transform(&Widget::id);
    auto sink = std::ranges::to<std::vector>();  // C++23
    return data | pipeline | sink;
}
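To make the reuse angle concrete - a self-contained sketch of my own, where the Widget type is made up for illustration - the composed adaptor is an ordinary value, so you can build it once and apply it to as many ranges as you like:

#include <cstdint>
#include <iostream>
#include <ranges>
#include <vector>

struct Widget {
    bool alive;
    std::uint32_t id;
};

int main()
{
    // Build the pipeline once; it is a first-class object.
    auto alive_ids = std::views::filter(&Widget::alive)
                   | std::views::transform(&Widget::id);

    std::vector<Widget> a{{true, 1}, {false, 2}, {true, 3}};
    std::vector<Widget> b{{false, 4}, {true, 5}};

    for (auto id : a | alive_ids) std::cout << id << ' ';  // prints 1 3
    for (auto id : b | alive_ids) std::cout << id << ' ';  // prints 5
}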
> The websites I run generally have a honeypot page which is linked in the headers and disallowed to everyone in the robots.txt, and if an IP visits that page, they get added to a blocklist which simply drops their connections without response for 24 hours.
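For anyone who hasn't set one of these up: the trap path below is hypothetical, but the robots.txt side of the pattern looks roughly like this - compliant crawlers never fetch a disallowed path, so any client that requests it is either ignoring robots.txt or spoofing its identity:

# robots.txt (the /trap/ path is a hypothetical honeypot URL)
User-agent: *
Disallow: /trap/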
Also, x86 has an instruction (MUL) that multiplies two 32-bit values and stores the 64-bit product across two 32-bit registers (EDX:EAX). So the result of the division lands in the register holding the high half of the product, and you don't need to do a shift.
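To make that concrete, here's a hedged sketch of my own (not from the parent): divisor 10, multiplier ceil(2^32 / 10) = 0x1999999A. The error analysis only holds for x below about 2^30, so this is not a full-range divider:

#include <cassert>
#include <cstdint>

// Divide by 10 with one 32x32 -> 64 multiply; valid for x < 2^30.
std::uint32_t div10(std::uint32_t x)
{
    // On 32-bit x86 this compiles to a single MUL: the product lands
    // in EDX:EAX, and the ">> 32" costs nothing - the quotient is
    // simply whatever is in EDX.
    return static_cast<std::uint32_t>((std::uint64_t{x} * 0x1999999Au) >> 32);
}

int main()
{
    for (std::uint32_t x = 0; x < (1u << 20); ++x)
        assert(div10(x) == x / 10);
}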
Indeed; quoting from my winning entry [1] that same year:
> This program celebrates the close connection between obfuscation and conciseness, by implementing the most concise language known, Binary Lambda Calculus (BLC).
> The submission implements the universal machine in the most concise manner conceivable.
This is cool, but they should have just used Vulkan. Dawn is a massive dependency (and a PITA to build, in my experience) to get what's basically a wrapper around Vulkan. Vulkan has a reputation for being difficult to work with, but if you just want to use a compute queue it's not that horrible. Also, since Vulkan uses SPIR-V, the user would have more choices for shading languages. Additionally, with RenderDoc you get source-level shader debugging.
Shameless plug: in case anyone wants to see what doing just compute with Vulkan looks like, I wrote a similar library to compete on SHAllenge [0], which was posted here on HN a few days ago. My library is here: https://github.com/0xf00ff00f/vulkan-compute-playground/
Vulkan is definitely a valid angle and I seriously considered it as well. There are a few things that, in aggregate, led me to explore a different direction:
First, there are already a few teams taking a stab at the Vulkan approach, like Kompute, so it's not like that's uncovered territory. At the time I first looked into this, the Khronos/Apple drama + complaints about MoltenVK didn't seem encouraging, but I'd be happy to hear if the situation is a lot better now.
Second, even though it's not the initial focus, the possibility of browser targets is interesting.
Finally, there's not much in the fairly minimalist gpu.cpp design that couldn't be retargeted to a Vulkan backend at some point in the future, if it becomes clear that (e.g. with the right combination of Vulkan-specific extensions) the performance differential is sufficient to justify the higher implementation complexity, and that the Metal/Vulkan tug-of-war issues are a thing of the past.
Ultimately, there's much less happening with WebGPU, and the things that are happening tend to be ML inference infrastructure rather than libraries. It seemed to be a point in the design space worth exploring.
Regarding Dawn - I've lived where you're coming from. Some non-trivial amount of effort went into smoothing out the friction. First, if you look at the bottom of the repo README you'll see others have done a lot to make building easier - FetchContent with Elie's repo worked on the first try - but with gpu.cpp users shouldn't even have to deal with that if they don't want to. The reason there's a small script that takes a few seconds to fetch a prebuilt shared library on the first build is so that you can avoid the Dawn build entirely by default. After that, linking should be almost instantaneous and compile cycles should be a second or two.
But as I mention elsewhere in these threads, if the Dawn team shipped prebuilt shared libraries themselves, that would be an even better solution (if anyone at Google is reading this)!