I don't know if that's what the GP hinted at, but as a Svelte developer and big advocate for more than 6 years (single-handedly training and evangelizing 20+ developers on it), I found so many concerns with Svelte 5 that it simply made me switch back to React.
It's a temporary choice, and I'm desperately evaluating other ecosystems (looking at you, SolidJS).
Put simply, Svelte and React were at two ends of a spectrum. React gives you almost complete control over every aspect of the lifecycle, but you have to be explicit about most of the behavior you want to achieve. Building an app with React feels like it's about 80% on the JS side and 20% on the HTML side.
Svelte, on the other hand, felt like a breeze. Most of my app is plain, simple HTML, and I can sprinkle in as little JS as I need to achieve the behavior I want. Sure, Svelte <=4 had undefined behaviors, or maybe even too much magic. But that was part of the package, and it was an option for those of us who preferred that end of the trade-off.
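To give a sense of what I mean, here's a minimal sketch of a Svelte <=4 style component (a made-up example, not from my actual app): almost all of it is markup, and the only JS is one line of local state.

    <script>
      let count = 0;  // the only JS in this component
    </script>

    <button on:click={() => count += 1}>
      Clicked {count} {count === 1 ? 'time' : 'times'}
    </button>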
Svelte 5 aims to give you that precise level of control and tries to compete with React on its own turf (the other end of that spectrum), introducing a lot of non-standard syntax along the way.
It's neither rigorous JavaScript like React, where you benefit from all the standard tooling developed over the years (including tooling that wasn't designed for React in particular), nor a lightweight frontend framework, which was the niche Svelte happily occupied at first and which I now find sadly quite empty (htmx and Alpine.js are elegant conceptually but too limiting in practice _for my taste_).
For me it's a strange "worst of both worlds" kind of situation that is simply not worth it. Quite heartbreaking to be honest.
Ok, I see your point. I wrote in another thread that I loved the simplicity of using $: for deriveds and effects in Svelte 3 and 4. And yes, the conciseness and magic were definitely part of it. You could just move so fast with it. Getting better performance with the new reactivity system is important for my data viz work, so it helped me accept the other changes in Svelte 5.
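For anyone who hasn't followed the change, here's roughly the same logic in both styles (a toy sketch, not code from my project). Svelte 3/4 with $: labels:

    <script>
      let count = 0;
      $: doubled = count * 2;               // derived value
      $: console.log('count is', count);    // side effect, re-runs when count changes
    </script>

And the Svelte 5 runes equivalent:

    <script>
      let count = $state(0);
      let doubled = $derived(count * 2);    // derived value
      $effect(() => {
        console.log('count is', count);     // side effect, dependencies tracked automatically
      });
    </script>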
Exactly. There was a certain simplicity that might be lost. But yeah, I can imagine it working out differently for others as well. Glad to hear it is for you!
Have you considered other options? Curious if you came across anything particularly interesting from the simplicity or DX angle.
I just saw Nue and Datastar suggested somewhere but haven't had time to check them out yet. I will probably stick with Svelte, though; I need to get stuff built.
One thing that also came to mind regarding Svelte 5 is that I always use untrack() inside $effect() and declare dependencies explicitly; otherwise Svelte 5 becomes too magical for me.
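Concretely, the pattern looks something like this (a sketch with made-up names; search() is a hypothetical helper, not a real API):

    <script>
      import { untrack } from 'svelte';

      let query = $state('');
      let results = $state([]);

      $effect(() => {
        // Read the one dependency I want tracked, explicitly and up front.
        const q = query;
        // Everything inside untrack() is read without registering dependencies,
        // so this effect only re-runs when `query` changes.
        untrack(() => {
          results = search(q);  // hypothetical search helper
        });
      });
    </script>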
Thanks! covary is my first Svelte 5 project (have not yet migrated my Svelte 4 projects). The backend is surprisingly simple, but I'm relatively familiar with the data and statistics, so maybe that's why it's so simple and/or perceived as such by me. I really like working on the human interface layer, i.e. the frontend. Backend work for me is always in the service of that.
If you find a viable alternative to Svelte and React, please let me know.
What I am noticing with every new Gemini model that comes out is that the time to first token (TTFT) is not great. I guess it is because they gradually shift compute from old models to new models as demand increases.
We're in the same space, and we've known about those shady practices for a while now, so it feels pretty good to see them finally exposed. It doesn't give back the capital they sucked in, though.
We were using gpt-4o for our chat agent, and after some experiments I think we'll move to flash 2.0. Faster, cheaper, and even a bit more reliable.
I also experimented with the experimental thinking version, and there a single-node architecture seemed to work well enough (instead of multiple specialised sub-agent nodes). It actually did better than DeepSeek. Now I'm waiting for the official release before spending more time on it.
> "It's time for OpenAI to return to the open-source, safety-focused force for good it once was," Musk said in the press release. "We will make sure that happens.""
From the creator of Grok, this is such an insane thing to say
Their infotainment system uses a customized Debian distro. On a Model S you could easily get a shell on it, because they used freaking SSH with password-based authentication over Ethernet to connect from the instrument cluster to the computer in the central console.
This is a gist created 1 hour ago. No proof of the attack vector. What's the point of posting a private key?
Also, so what if they used Debian? Linux is used on everything. Debian has multiple licenses; it also has BSD-3 and others to choose from: https://www.debian.org/legal/licenses/
In case anybody wants it. I can do a more detailed writeup about hacking into my Tesla, but I'm not particularly interested in that. In short, I bought a Tesla instrument cluster on eBay and dumped the NAND chips from it.
They use plenty of GPL software there, including the Linux kernel itself.
Ok, you seem to be implying that merely using GPL software necessitates open sourcing anything you build on it or with it. If that were the case, all of AWS would have to be open sourced, as would every server backend built on Ubuntu clusters.
As far as I understand, it's only "derivative" works that must be open sourced, not merely a software program or hardware device built on top of a Debian OS. Tesla's control console is hardly a derivative work.
Eh, if they were being compliant and merely building modules on top of, and called by, BusyBox, they could get away with Mere Aggregation [0]*, but from a little looking around it looks like they were called out years ago for distributing modified BusyBox binaries without acknowledgement [1] and promised to work with the Software Conservancy to get into compliance. [2]
*but I would argue (a judge would be the only one who could say with certainty) that Tesla does not provide an infotainment application "alongside" a Linux host to run it on; they deliver a single product to the end user, of which Debian/BusyBox/whatever is a significant constituent.
(P.S. to cyberax: if you can demonstrate that Tesla is still shipping modified binaries as in [1], I think it would make a worthwhile update to the saga.)
This is modern-day tech CEO/politician playbook 101. And it's because of this that society in general is a shithole. There is no semblance of honesty or accountability at all anymore.
Grift and lie to everyone's faces because you know it doesn't matter what the fuck you say; as long as your political stance aligns with the right people, bootlickers will lap up anything you say for a chance at being noticed.
You need rabid fans though to make sure your doubters are yelled down. That's one way Musk gets away with this behavior. Thousands of dimwits yelling down anyone that suggests he may not be operating in good faith.
How many of them are even real, though? I'm pretty sure Musk has had a troll farm for a long time now; back in the Twitter days his supporters' profiles already looked very suspicious.
Grok has absolutely no safety mechanism in place; you can use it for anything and it will not block a query, all under the pretext of "free speech". And it's not open source either.
The opinion spectrum on AI seems to currently be [more safe] <---> [more willing to attempt to comply with any user query].
Consider as a hypothetical the query "synthesise a completely custom RNA sequence for a virus that makes ${specific minority} go blind".
A *competent* AI that complies with this isn't safe to release as open source to the general public, because someone will use it like this. Even putting it behind an API without giving out the model weights is a huge risk, because people keep finding ways past the filters.
An *incompetent* model might be safe just because it won't produce an effective RNA sequence — based on what they're like with code, I think current models probably aren't sufficiently competent for that, but I don't know how far away "sufficiently competent" is.
So: safe *or* open — can't have both, at least not forever.
(Annoyingly, there's also a strong argument that quite a bit of the recent work on making AI interpretable and investigating safety issues required open models, so we may be unable to have safe closed systems just as we're unable to have safe open systems.)
I would even say that he's now consistently lying to get what he wants. What started as "building hype" to raise more money and have a better chance of actually delivering on wild promises has become systematic lying about things big and small.