Layvier's comments | Hacker News

isn't the self-replicating property of life a huge benefit though?


This is really sad, honestly. It feels like we'll be stuck with React forever, and even with it there'll be fewer incentives to make API changes.


Why do you say that? You make it sound like it's not possible to write code without the help of LLMs.


Disclaimer: OT and pretty ranty.

I don't know if that's what the GP hinted at, but as a Svelte developer and big advocate for more than 6 years (single-handedly training and evangelizing 20+ developers on it), I found so many concerns with Svelte 5 that it simply made me use React again.

It's a temporary choice and I'm desperately evaluating other ecosystems (looking at you, SolidJS).


Can you expand on the concerns regarding Svelte 5?


Put simply, Svelte and React were at two ends of a spectrum. React gives you almost complete control over every aspect of the lifecycle, but you have to be explicit about most of the behavior you are seeking to achieve. Building an app with React feels about 80% on the JS and 20% on the HTML side.

Svelte on the other hand felt like a breeze. Most of my app is actually plain simple HTML, and I am able to sprinkle as little JS as I need to achieve my desired behaviors. Sure, Svelte <=4 has undefined behaviors, or maybe even too many magic capabilities. But that was part of the package, and it was an option for those of us who preferred this end of the trade-off.

Svelte 5 intends to give that precise level of control and is trying to compete with React on its turf (the other end of that spectrum), introducing a lot of non-standard syntax along the way.

It's neither rigorous JavaScript like React, where you can benefit from all the standard tooling developed over the years (including stuff that wasn't designed for React in particular), nor a lightweight frontend framework, which was the initial niche that Svelte happily occupied and which I find sadly quite empty now (htmx and alpinejs are elegant conceptually but too limiting in practice _for my taste_).

For me it's a strange "worst of both worlds" kind of situation that is simply not worth it. Quite heartbreaking to be honest.


Ok, I see your point. I wrote in another thread that I loved the simplicity of using $: for deriveds and effects in Svelte 3 and 4. And yes, the conciseness and magic were definitely part of it. You could just move so fast with it. Getting better performance with the new reactivity system is important to my data viz work, so it helped me to accept the other changes in Svelte 5.


Exactly. There was a certain simplicity that might be lost. But yeah I can imagine it might work out differently for others as well. Glad to hear it is for you!

Have you considered other options? Curious if you came across anything particularly interesting from the simplicity or DX angle.


I just saw Nue and Datastar suggested somewhere, but I haven't had time to check them out yet. I'll probably stick with Svelte; I need to get stuff built.

One thing that also came to mind regarding Svelte 5 is that I always use untrack() for $effect() and declare dependencies explicitly, otherwise Svelte 5 becomes too magical for me.
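For illustration, the pattern described above looks roughly like this (a minimal sketch; `filter` and `data` are made-up names, and `untrack` is imported from 'svelte'):

```svelte
<script>
  import { untrack } from 'svelte';

  let filter = $state('');
  let data = $state([]);

  $effect(() => {
    // Read the intended dependency explicitly, so only `filter`
    // is tracked by this effect.
    const f = filter;
    untrack(() => {
      // Reads inside untrack() are not registered as dependencies,
      // so changes to `data` will not re-run the effect.
      console.log(f, data.length);
    });
  });
</script>
```

This keeps the effect's dependency list visible at the top, closer to React's explicit dependency arrays.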


Yeah those are pretty cool and on my radar! And thanks for sharing the tip :)

Just checked your work on covary and it's pretty rad! What's your backend like?


Thanks! covary is my first Svelte 5 project (have not yet migrated my Svelte 4 projects). The backend is surprisingly simple, but I'm relatively familiar with the data and statistics, so maybe that's why it's so simple and/or perceived as such by me. I really like working on the human interface layer, i.e. the frontend. Backend work for me is always in the service of that.

If you find a viable alternative to Svelte and React, please let me know.


Agreed, it's not even possible to run an eval dataset. If someone from Google sees this, please at least increase the burst rate limit.


It is not without rate limits, but we do have elevated limits for our accounts through:

https://glama.ai/models/gemini-2.5-flash-preview-04-17

So if you just want to run evals, that should do it.

Though the first couple of days after a model comes out are usually pretty rough, because everyone tries to run their evals.


What I am noticing with every new Gemini model that comes out is that the time to first token (TTFT) is not great. I guess it is because they gradually transfer compute power from old models to new models as demand increases.


If you’re imagining that 2.5Pro gets dynamically loaded during the time to first token, then you’re vastly overestimating what’s physically possible.

It’s more likely a latency-throughput tradeoff. Your query might get put inside a large batch, for example.


That's very interesting, thanks for sharing!


Absolutely. So many use cases for it, and it's so cheap/fast/reliable


And stellar OCR performance. Flash 2.0 is cheaper and more accurate than AWS Textract, Google Document AI, etc.

Not only in benchmarks[0], but in my own production usage.

[0] https://getomni.ai/ocr-benchmark


I want to use these almost too cheap to meter models like Flash more, what are some interesting use cases for those?


We're in the same space, and we knew about those shady practices for a while now, so it feels pretty good to see them finally exposed. It doesn't give back the capital they sucked in though


We were using gpt-4o for our chat agent, and after some experiments I think we'll move to Flash 2.0. Faster, cheaper, and even a bit more reliable. I also experimented with the experimental thinking version, and there a single-node architecture seemed to work well enough (instead of multiple specialised sub-agent nodes). It did better than DeepSeek, actually. Now I'm waiting for the official release before spending more time on it.


> "It's time for OpenAI to return to the open-source, safety-focused force for good it once was," Musk said in the press release. "We will make sure that happens."

From the creator of Grok, this is such an insane thing to say


And Tesla, who (in)famously doesn't regularly publish their GPL-derived codebases.


What’s the background here? How can we know they use GPL licensed code? Was there some leak?


Their infotainment system uses a customized Debian distro. On a Model S you could easily get a shell into it, because they used freaking SSH with password-based authentication over Ethernet to connect from the instrument cluster to the computer in the central console.

You could sniff the password with a man-in-the-middle attack, if you knew the host key of the instrument cluster. Here's one from my previous Model S: https://gist.github.com/Cyberax/ad9866ab4306d43957dc480db573...


This is a gist created 1 hour ago. No proof of the attack vector. What's the point of posting a private key?

Also, so what if they used Debian? Linux is used on everything. Debian has multiple licenses; it also has BSD3 and others to choose from: https://www.debian.org/legal/licenses/


In case anybody wants it. I can do a more detailed writeup about hacking into my Tesla, but I'm not particularly interested in that. In short, I bought a Tesla instrument cluster on eBay and dumped the NAND chips from it.

They use plenty of GPL software there, including the Linux kernel itself.


Ok, you seem to be implying that just the use of GPL software necessitates the open sourcing of anything you build on it or with it. If that were the case, then all of AWS would be open sourced and all of the server backends built on Ubuntu clusters would have to be open sourced.

As far as I understand, it's only "derivative" works that must be open-sourced, not merely building a software program or hardware device on top of a Debian OS. Tesla's control console is hardly a derivative work.


Eh, if they were being compliant and merely building modules on top of and called by BusyBox, they could get away with Mere Aggregation [0]*, but from a little looking around it looks like they were called out years ago for distributing modified BusyBox binaries without acknowledgement [1] and promised to work with the Software Conservancy to get into compliance. [2]

[0] https://www.gnu.org/licenses/gpl-faq.html#MereAggregation

[1] https://lists.sfconservancy.org/pipermail/ccs-review/2018-Ma...

[2] https://sfconservancy.org/blog/2018/may/18/tesla-incomplete-...

*but I would argue (a judge would be the only one to say with certainty) that Tesla does not provide an infotainment application "alongside" a Linux host to run it on; they deliver a single product to the end user of which Debian/BusyBox/whatever is a significant constituent.

(P.S. to cyberax: if you can demonstrate that Tesla is still shipping modified binaries as in [1] I think it would make a worthwhile update to the saga.)


You'd need to post Linux kernel source, though.


Your post reads like Debian is available with multiple licenses including BSD3. This is not true.

The page you posted is a list of licenses various software in the Debian distribution are released with.

Of course the parent's idea that Tesla using Debian means they have to release the source of anything is incorrect.


source?


None of these people have any shame anymore, it's just exponentially growing levels of unwarranted chutzpah.


This is the modern-day tech CEO/politician playbook 101. And it's because of this that society in general is a shit hole. There is no semblance of honesty nor accountability at all anymore.

Grift and lie to everyone's faces because you know that it doesn't matter what the fuck you say, as long as your political stance aligns with the right people bootlickers will lick up anything you say for a chance at being noticed.


You need rabid fans though to make sure your doubters are yelled down. That's one way Musk gets away with this behavior. Thousands of dimwits yelling down anyone that suggests he may not be operating in good faith.


How many of them are even real, though? I'm pretty sure Musk has had a troll farm for a long time now; back in the Twitter days his supporters' profiles already looked very suspicious.


Wouldn't be surprised if the majority of xAI's GPU resources are being used for writing pro-Elon responses to Twitter comments.


Grok-1 was open-sourced. Grok-2 has yet to be open-sourced, but perhaps they will once they launch Grok-3.


Could you add more context? I’m unfamiliar with Grok. Is Elon being a hypocrite?


Grok has absolutely no safety mechanisms in place; you can use it for anything and it will not block a query, all under the pretext of "free speech". And it's not open source either.


>it will not block a query

This is good though?

>And it's not open source either

Wonder why Grok-1 is open-source but not Grok-2. Maybe it will be when Grok-3 releases?


> > it will not block a query

> This is good though?

The opinion spectrum on AI seems to currently be [more safe] <---> [more willing to attempt to comply with any user query].

Consider as a hypothetical the query "synthesise a completely custom RNA sequence for a virus that makes ${specific minority} go blind".

A *competent* AI that complies with this isn't safe to open-source to the general public, because someone will use it like this — even putting it behind an API without giving out model weights is a huge risk, because people keep finding ways to get past filters.

An *incompetent* model might be safe just because it won't produce an effective RNA sequence — based on what they're like with code, I think current models probably aren't sufficiently competent for that, but I don't know how far away "sufficiently competent" is.

So: safe *or* open — can't have both, at least not forever.

(Annoyingly, there's also a strong argument that quite a bit of the recent work on making AI interpretable and investigating safety issues required open models, so we may also be unable to have safe closed systems as well as being unable to have safe open systems).


Safetyism is bad, but Grok definitely has safeguards and will block certain queries.


why are you pro censorship?


Didn't X have to ban Grok output due to what their own users were making?


No, it's live and free for all X users.


I would even say that he's now consistently lying to get what he wants. What started as "building hype" to raise more and have more chances of actually delivering on wild promises became lying systematically about things big and small.


Well, on my side, I'm the proud owner of a new ed25519 key... The status page didn't update quickly enough.


as am I!


Just to say I love your product, guys. Well done ;)

