WhatIsDukkha's comments

I simply wouldn't use this as-is, but I would like it if it were a uv plugin. Poetry seems like a dead end in 2025 to me.

The alternative to irradiation is mixing less poop into your meat, etc.?

You understand that the majority of "food science" is designed to allow increasingly lazy and sloppy food handling while still keeping the result palatable/not killing too many people, right?

Don't fall into the "lower cost" idea either: being lazier and sloppier means higher corporate profits, not lower consumer prices (for worse food).

Compare the grass-fed/free-range Big Mac in Germany (produced on farms 1/10th the size of the US equivalent) with the one you get anywhere in the US: which do you think is healthier and tastier? Mysteriously, they are basically the same price to the consumer...


This is the ignorance I was talking about. There are many reasons to irradiate food besides substandard handling. For instance, potatoes can be irradiated to inhibit sprouting, increasing how long you can store them. And imported fruits can be irradiated to prevent the spread of insects and other pests (without needing to use far riskier pesticides).

You proved my point actually.

Neither of those things is actually useful to an eater of food?

You want less fresh food from sketchier sources, yet you think those are virtues?


They are useful to people who buy food (who hasn't had some potatoes sprout in a cabinet?), and to society generally. Insects are a fact of fruit; to call that "sketchy" is just ignorant.

Sprouted potatoes are a sign it's time to get some new ones... if you want to eat hoarded 6-month-old potatoes, glhf.

I don't eat fumigated strawberries so replacing fumigated strawberries with irradiated strawberries is... not useful?


If the potatoes last longer without going bad, then there's no reason to replace them prematurely. You have a preconceived narrative that any preservation method you aren't comfortable with is intrinsically bad because it lowers food quality, but I can guarantee you there are countless other forms of food preservation you have no problem with.

It comes down to superstition.


This is a case where the science evolved to justify a pre-decided narrative. This was absolutely necessary for an unsustainable food industry in an overly financialised nation (guess which). Don't waste your breath arguing logically. Just try your level best to ensure it doesn't occur in your local food economy, for the near future. Eventually, the GMO folks will reap.

European consumers seem to not want factory farms that produce food of such low quality that the animals need to be CRISPRed (as is the case with this story) just to be kept alive long enough.

I also am in that camp; I don't want to eat pork raised in unsanitary conditions and then sold to me at top dollar (because of lying/obscuring about sourcing).


Then you should want regulations about how the pigs are raised, not banning the use of CRISPR.

> Then you should want regulations about how the pigs are raised

We have those. EU animals have "five freedoms".


As an EATER of food, what is the benefit of CRISPR/GMO?

The answer, after a good 40 minutes of searching, is... nothing.

It's a technology 100% in service of being lazier/sloppier in industrial-scale food production, and in service of IP-restricting the food supply in favor of shareholder X or Y.

"but we can make tasteless US tomatoes on even more inappropriate cropland!"

...

Great for my stock portfolio to screw over developing countries, but useless for me as a first-world eater of food.

No proof of existence of a benefit.


Uh. Healthier animals.

This specific approval is for a gene edit to prevent PRRSV infection - a major porcine virus and one that regularly infects pigs in the EU.

It has nothing to do with mistreatment of animals or factory farming.


poor husbandry is the primary objection to US food products

the chicken has to be chlorinated because it has literally been produced covered in faeces

this would seem to enable it to become even worse


So don't import US food products if it scares you. That's a separate issue from whether to allow CRISPRed livestock.

Again, this disease regularly affects pigs in Europe and causes immense animal suffering.


> So don't import US food products if it scares you.

this is exactly the position of the EU, UK governments

and is one of the few policies that is universally supported by their populations


The EU and UK both import food from the US.

Some US food products are banned for concerns about safety, but they're hardly unique - the US also bans some food products from the EU and UK that are considered unsafe in the US.

None of that has to do with whether or not countries should allow CRISPRed livestock to be raised domestically.


no GM crops, no milk with growth hormone (nearly all of it), no beef with growth hormone (nearly all of it), no chlorinated chicken (nearly all of it), no washed eggs (nearly all of them)

and now pork will end up on that list too

> None of that has to do with whether or not countries should allow CRISPRed livestock to be raised domestically.

I couldn't care less if US'ians want to eat shit (here, literally)


I would never ask any of these questions of an LLM (and I use and rely on LLMs multiple times a day); this is a job for a computer.

I would also never ask a coworker for this precise number either.


But it's a good reminder when so many enterprises like to claim that hallucinations have "mostly been solved".


I agree with you partially, BUT

when is the long list of 'enterprise' coworkers, who have glibly and overconfidently answered questions without doing math or looking things up, going to be fired?


My reasoning for the plain question was: as people start to replace search engines with AI chat, I thought that asking "plain" questions to see how trustworthy the answers might be would be a good test. Because plain folks will ask plain questions and won't think about the subtle details. They would not expect a "precise number" either, i.e. not 23:06 PDT, but would like to know whether this weekend would be fine for a trip or whether the previous or next weekend would be better for booking a "dark sky" tour.

And, BTW, I thought that LLMs are computers too ;-)


I think it's much better to help people learn that an LLM is "not" a computer (even if it technically is).

Thinking it's a computer makes you do dumb things with it, things LLMs have simply never done a good job with.

Build intuitions about what they do well and intuitions about what they don't do well and help others learn the same things.

Don't encourage people to have poor ideas about how they work; it makes things worse.

Would you ask an LLM for a phone number? If it doesn't use a function call, the answer is simply not worth having.


First we wanted to be able to do calculations really quickly, so we built computers.

Then we wanted the computers to reason like humans, so we built LLMs.

Now we want the LLMs to do calculations really quickly.

It doesn't seem like we'll ever be satisfied.


Asking the LLM what calculations you might or should do (and how you might implement and test those calculations) is pretty wildly useful.


These models are proclaimed to be near AGI, so they should be smarter than hallucinating an answer.


I have about 140 Emacs plugins running.

I've never had to declare Emacs bankruptcy.

10 years of good times with Emacs+evil and 35 years of vim.

Fwiw.


Sadly that's not worth anything. ¯\_(ツ)_/¯ Anecdotal evidence in both directions.

I had a soft spot for Emacs for a _long_ time (almost two decades)... FWIW. ;)

Key word: had.


Now you're living large with JetBrains and making the big bucks!


Nope. Neovim. And even there some of the "distros" got on my nerves. I guess at one point I'll just learn Neovim's API and make my own blend like everyone else.

NOT looking forward to it. But I suppose there's no other way.

I still view Neovim as a huge improvement over Emacs though.


> I still view Neovim as a huge improvement

Neovim, unlike Emacs, is an editor. Emacs is not a mere code editor, nor an IDE, text processor, or web browser. Emacs first and foremost is a Lisp REPL with a built-in text editor. Without deeply understanding that aspect one can never truly appreciate the incredible power it grants you.

Do you use your editor to read and annotate PDFs? Or watch videos? Or manage your ebook library? Or track your expenses? Or control project management tools like Jira? Or keep your knowledge base and notes? Or interact with LLMs? Or explore APIs, like Postman? Or keep your spaced-repetition flash cards, like Anki? Or chat over platforms like Telegram and Slack? Or find and read RFCs and manpages? Or perform web search, search through your browser history, or Wikipedia? Do you have etymology lookup, a thesaurus, dictionaries, translation? Or order pizza? Or measure distances between coordinates on a map? Automate things based on the solar calendar or moon phases? Manage all your configs, aka dotfiles? OCR images with text? List, browse and code-review pull requests, etc., etc.?

In what sense exactly is Neovim/VSCode/IntelliJ/whatever a "huge improvement"? Please tell us.


Well you kind of answered yourself. After limping with Emacs for 19-ish years, during most of which I never made the serious effort you alluded to, I finally admitted I just don't have and don't want to have the mindset that's deemed necessary to make full use of Emacs.

All of my professional experience, which is at this point substantial (though of course I make no claims that quality stems from quantity!), has shown me that smaller specialized tools always perform better -- in every meaning of the word.

To me Neovim wins because it's extremely snappy, it has no visual noise, and doesn't show me a warning from a random plugin down there in the command bar almost every minute (something which Emacs apparently will always do). And is configurable in a way I find intuitive. I never cared about all the directories where I am supposed to install my own Elisp files, and I still don't. I want a plugin manager and I want to issue commands to it: install, update, delete. Neovim's Lazy and Mason do exactly what I expect.

I was never, in almost two decades, able to look at how Emacs does things and think to myself "oh but of course it will work like that".

So no, I haven't used Emacs for anything except coding. And I don't intend to use Neovim for anything else either (with the possible exception of lazygit integration; that one works great). Though I am also working towards having everything except my web browsers live in the terminal.

Again, many will say "skill issue" or "it's just not for you" (the smarter ones). Which, again again, I never denied. But I don't plan to mince words, and I am tired of the (to me) unjustified praise for Emacs. It absolutely is not: not just not for everyone, but not even for most, IMO.

If you have tamed it and find it intuitive, I am sure it empowers you. I never got to that point and I regret trying to fit in for such an extremely long time. But oh well, we live and learn. We live for sure. :)


The point you're trying to make is meaningless. You're trying to compare two things of distinct categories. It's as if I said "I like Zoom, we sometimes use it for screen recordings", and someone replied: "I've moved away from Zoom for my screen recordings - OBS is such a huge improvement over it..."

I don't care what editor you use or like or moved on to. I use Neovim myself. But like I said, Emacs is not just an editor. Come back when you find a better replacement for a "Lisp REPL with a built-in editor"; maybe then the conversation will start making sense.


Emacs was sold to me as an editor, however, and technically that wasn't a lie.

The distinction you're making is also only technically correct. Emacs is still (also) an editor. I judged its editing abilities and found them lacking. Finally I woke up, understood it's not for me, and moved on.

You can stop arguing now.


Care to share your .emacs? :)


github.com/agzam/.doom.d


thank you!


Here is an issue with some back and forth between MillenniumDB and QLever on a benchmarking attempt; they managed to build and import, but I don't see results.

https://github.com/MillenniumDB/MillenniumDB/issues/10

https://github.com/ad-freiburg/qlever


Well, being loopy isn't too good for economic success, so that's a factor? Though for sure I've met some incredibly loopy people who had some great dice-roll streaks.

But there are a lot of other factors in the problem and making it an economic issue really obscures some of the more important factors.

Intellectual laziness, idle brains, idle hands, toxic memes and bad actors making bad use of them.


Schizophrenia has been proven to be stress-triggered, and I'm just not interested in listening to humanity-idealization by engineers any more, with the endless utopias and perfect characters, and the blame on everyone not fitting into that template under duress. What good is a construction plan for a machine with all parts carved from diamond? If you can't integrate a real humanity into your perception, your plans and your interests, why even waste public bandwidth on your ideas and the terrors they will become?


I would accept the point that stress triggers schizophrenic behaviors.

When we look at magical thinking as one of those light schizotypal behaviors, large portions of the population exhibit these symptoms.

They drink from the cup of nonsense to excess.

There should always be room for drinking from the cup of nonsense, but too much and it's difficult to be useful to yourself and your family.


I don't understand the value of this abstraction.

I can see the value of something like DSPy where there is some higher level abstractions in wiring together a system of llms.

But this seems like an abstraction that doesn't really offer much besides "function calling but you use our python code".

I see the value of language server protocol but I don't see the mapping to this piece of code.

That's actually negative value if you are integrating into an existing software system, or just, you know... exposing functions that you've defined, versus remapping the functions you've defined into this intermediate abstraction.


Here's the play:

If integrations are required to unlock value, then the platform with the most prebuilt integrations wins.

The bulk of mass adopters don't have the in-house expertise or interest in building their own. They want turnkey.

No company can build integrations, at scale, more quickly itself than an entire community.

If Anthropic creates an integration standard and gets adoption, then it either at best has a competitive advantage (first mover and ownership of the standard) or at worst prevents OpenAI et al. from doing the same to it.

(Also, the integration piece is the necessary but least interesting component of the entire system. Way better to commodify it via standard and remove it as a blocker to adoption)


The secret sauce part is the useful part -- the local vector store. Anthropic is probably not going to release that without competitive pressure. Meanwhile this helps Anthropic build an ecosystem.

When you think about it, function calling needs its own local state (embedded db) to scale efficiently on larger contexts.

I'd like to see all this become open source / standardized.


I'm not sure what you mean - the embedding model is independent of the embeddings themselves. Once generated, the embeddings and vector store can exist 100% locally and thus aren't part of any secret sauce.
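
To make that concrete, here is a minimal sketch (toy 3-dimensional vectors and names of my own invention; no real embedding model) of why the search side is local: once the vectors exist, nearest-neighbor lookup is just cosine similarity over data you already hold.

    // Toy local "vector store": (label, embedding) pairs plus cosine search.
    fn cosine(a: &[f32], b: &[f32]) -> f32 {
        let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
        let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
        let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
        dot / (na * nb)
    }

    fn main() {
        // Pretend these vectors came back from an embedding model once,
        // then were stored locally; nothing below needs a network call.
        let store: Vec<(&str, Vec<f32>)> = vec![
            ("skiing notes", vec![0.9, 0.1, 0.0]),
            ("snowshoe notes", vec![0.1, 0.9, 0.0]),
        ];
        let query = vec![0.8, 0.2, 0.0];

        let best = store
            .iter()
            .max_by(|a, b| {
                cosine(&query, &a.1)
                    .partial_cmp(&cosine(&query, &b.1))
                    .unwrap()
            })
            .unwrap();
        println!("nearest: {}", best.0);
    }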


I'm not frustrated with you, but I'll explain why you might be getting the vibes here.

It's like people are learning about these new things called skis.

They fall on their face a few times but then they find "wow much better than good old snowshoes!"

Of course some people are falling every 2 feet while trying skis and then go to the top of the mountain and claim skis are fake and we should all go back to snowshoes because we don't know about snow or mountains.

They are insulting about it because it's important to the ragers that, despite failing at skiing, they are senior programmers, and everyone else must not know how to compile, test and review code and must be hallucinating their ski journeys!

Meanwhile a bunch of us took the falls and learned to ski and are laughing at the ragers.

The frustrating thing, though, is that for all of us skiers, we can't seem to get good conversations about how to ski because there is so much raging... oh well.


With your analogy I would be the one saying that I'm still not convinced that skis are faster than snowshoes.

I still use ChatGPT/Claude/Llama daily for both code generation and other things. And while it sometimes does exactly what I want, and I feel more productive, it wastes my time almost as often, and I have to give up and rewrite the thing manually or do a Google search/read the actual documentation. It's good to bounce things off, it's a good starting point for learning new stuff, and it gives you great direction for exploring and testing things quickly. My guess is that on a "happy path" it gives me a 1.3x speedup, which is great when that happens, but the caveat is that you are not on a "happy path" most of the time, and if you listen to the evangelists it seems like it should be a 2x-5x speedup (skis). So where's the disconnect?

I'm not here to disprove your experience, but with 2 years of almost daily usage of skis, how come I feel like I'm still barely breaking even compared with snowshoes? Am I that bad with my prompting skills?


I use -

Rust, aider.chat and

I thoughtfully limit the context of what I'm coding (to 2 of 15 files).

I ./ask a few times to get the context set up. I let it speculate on the path ahead but rein it in with more conservative goals.

I then say "let's carefully and conservatively implement this" (this is really important with sonnet as it's way too eager).

I get it to compile by doing ./test a few times; there is sometimes a doom loop though, so -

I reset the context with a better footing if things are going off track, or I just think "it's time".

I do not commit until I have a plausible, building set of functions (it can probably handle touching 2-3 functions or configs, or one complete function, but don't get much more elaborate without care and experience).

I either reset or use the remaining context to create some tests and validate.
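
For flavor, a session in that loop looks roughly like this (the file names and the prompt are made up; /ask, /test, /clear and /commit are aider chat commands, and /test assumes you've configured a test command):

    $ aider src/parser.rs src/config.rs   # limit the context to 2 of 15 files
    > /ask how should the new retry config get threaded through the parser?
    > let's carefully and conservatively implement just the config plumbing
    > /test                               # build and run tests, fix fallout
    > /clear                              # reset the context when it drifts
    > /commit                             # only once it plausibly builds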

I think saying 1.3x more productive is fair with only this loop BUT you have to keep a few things in perspective.

I wrote specs for everything I did; in other words, I wrote out in English my goals and expectations for the code. That was highly valuable and something I probably wouldn't have done otherwise.

Automatic literate programming!

Yak shaving is crazy fast with an LLM. Those tasks that would take you off into the weeds do feel 5x faster (with caveats).

I think the 2x-5x faster is true within certain bounds -

What are the things that you were psychologically avoiding/dragging on, or just skipping because they were too tedious to even think of?

Some people don't have that problem or maybe don't notice; to me it's a real crazy benefit I love!

That's where the real speedup happens, and it's amazing.


Do you mind sharing how much experience you have with the tech stacks you're generating code for? What I've found with LLMs is that the perception of AI-generated code differs depending on your own experience, and I would like to know whether that is only my experience.

I have more than 20 years of backend development and only limited experience with frontend tech stacks. I initially tried using an LLM for the frontend of a personal project. I found the code generated by the LLM to be very good. It produced code that worked immediately from my vague prompts. It happily fixed any issue I found, quickly and correctly. I also have enough knowledge to tweak anything I need, so at the end of the day I could see my project working as expected. I felt really productive with it.

Then I slowly started using LLMs for my backend projects at work. And I was surprised that the experience was completely the opposite. Both ChatGPT and Claude generated code that either followed bad practices or had flaws, or they just ignored instructions in my prompt and came back to bad solutions after only a few questions. They also failed to apply common practices from an architecture perspective. So the effort to make it work was much more than if I did all the coding myself.

At that point, I thought there were probably more frontend projects than backend projects used to train those models, and that the quality of generated frontend code was therefore much better. But when using an LLM with a language I did not have much experience in, for another backend project, I found out why my experiences differed so much, because I could now observe more clearly what is bad and good in the generated code.

In my previous backend project, since I have much more knowledge of the languages/frameworks/practices, my criteria were also higher. It is not just code that can run; it must be extensible, correctly structured, well architected, using the right idioms... Whereas with my more limited frontend experience, the generated code worked as I expected, but possibly it also violated all of those NFRs without my knowing. That explains the mixed experience of using a new programming language (something I don't have much experience with) in a backend project (my well-known ___domain): it seemed to provide me with working code but failed to follow good practices.

My hypothesis is that LLMs can generate code at an intermediate level, so if your experience is limited you see it as pure gold, but if your level is much higher, the generated code is just garbage. I really want to hear more from other people to validate this hypothesis, as people seem to have opposite experiences with this.


> Am I that bad with my prompting skills?

Or you're using skis on gravel. I'm a firm believer that the utility varies greatly depending on the tech stack and what you're trying to do, ranging from negative value to way more than 5x.

I also think "prompting" is a misrepresentation of where the actual skill and experience matter. It's about being efficient with the tooling. Prompting, waiting for a response, and then manually copy-pasting line by line into multiple places is something else entirely from having two LLMs work in tandem, with one figuring out the solution and the other applying the diff.

Good tooling also means that there's no overhead trying out multiple solutions. It should be so frictionless that you sometimes redo a working solution just because you want to see a different approach.

Finally, you must be really active and can't just passively wait for the LLM to finish before you start analyzing the output. Terminate early, reprompt and retry. The first 5 seconds after submitting are crucial, and being able to make a decision just from seeing a few lines of code is a completely new skill for me.


This seems like a weird use of Rust.

There is no mention of how much of the codebase is even in safe Rust after all this work, so there's no clear value to the migration?

Frequently, when people get their code ported, they then begin a process of reducing the unsafe surface area, but not here.

The author seems to place little or no value on safe Rust? At least it doesn't seem evident from reading/skimming his 4 articles on the process.

Interesting mechanical bits to read for sure though, so it's still a useful read more broadly.

It's unsurprising that the author would go use Zig next time since they didn't seem to have any value alignment with Rust's core safety guarantees.


> It's unsurprising that the author would go use Zig next time since they didn't seem to have any value alignment with Rust's core safety guarantees.

I don't think that's true. The end application talks to smart cards. See also one of the links: https://gaultier.github.io/blog/how_to_rewrite_a_cpp_codebas...

However, the codebase has the possibility to use multiple memory allocators, and Rust is simply actively bad when faced with that.

It just seems like the codebase has a set of idioms that really lean into the areas that Zig is actively good at and where Rust is weak. Memory allocators, lexical (not RAII) defer, C interop, C++ interop, and cross-compilation are Zig's raison d'être, after all.

The one thing I disagree with is the complaint about "repr(C)". Sorry, but I've become convinced that if we want our compiler and languages to work well on modern hardware, we're going to have to allow the compilers to do lots of struct-of-array to array-of-struct (and vice versa) transformations depending upon the actual access patterns. That means that a struct or an array will not be locked to a specific memory layout unless you specifically request as such.
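
For what it's worth, field reordering is a layout freedom rustc already exercises today, and it is exactly what repr(C) switches off. A small sketch (the repr(Rust) size shown is what current rustc happens to produce, not a language guarantee):

    // repr(C) must keep declaration order: u8, pad, u16, u8, pad = 6 bytes.
    #[repr(C)]
    struct FixedLayout {
        a: u8,
        b: u16,
        c: u8,
    }

    // The default repr(Rust) may reorder fields; current rustc packs this
    // as u16, u8, u8 = 4 bytes.
    struct FreeLayout {
        a: u8,
        b: u16,
        c: u8,
    }

    fn main() {
        assert_eq!(std::mem::size_of::<FixedLayout>(), 6);
        // Unspecified by the language, so print rather than assert.
        println!("repr(Rust) size: {}", std::mem::size_of::<FreeLayout>());
    }

An automatic AoS-to-SoA transform needs the same kind of freedom, just more of it, which is precisely what a pinned layout rules out.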


>Doing an incremental rewrite from C/C++ to Rust, we had to use a lot of raw pointers and unsafe{} blocks. And even when segregating these to the entry point of the library, they proved to be a big pain in the neck.

That's how the project ended up in such a bad state by the end. Instead of having Rust-Rust linkages that the compiler could check, they designed every function boundary to be an uncheckable Rust-* linkage. This would be like porting a C library to C++ but moving only one function at a time, such that every single function had to comply with extern "C".

Here is an important warning:

The difficulty of setting up boundaries to unsafe languages is the hidden reason people want to rewrite every C library in Rust. Do not choose a design pattern that requires more than one of these boundaries to exist within your own code!
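
A sketch of that single-boundary shape (names hypothetical): one extern "C" entry point owns the raw-pointer handling, and everything behind it is ordinary compiler-checked Rust.

    mod safe_impl {
        // Ordinary safe Rust; the compiler checks every Rust-to-Rust call.
        pub fn parse(input: &str) -> usize {
            input.split_whitespace().count()
        }
    }

    /// The single C-facing entry point; all the unsafety lives at this edge.
    #[no_mangle]
    pub extern "C" fn lib_parse(ptr: *const u8, len: usize) -> usize {
        // SAFETY: the C caller promises `ptr` is valid for `len` bytes.
        let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
        match std::str::from_utf8(bytes) {
            Ok(s) => safe_impl::parse(s),
            Err(_) => 0,
        }
    }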


I don't really understand why they chose to rewrite this in Rust if they're just going to spend their time writing unsafe C-style code in Rust.


Unsafe Rust is much safer than C


> Unsafe Rust is much safer than C

That is not at all an obvious axiom.

I am willing to concede that "Rust" is safer than "C".

However, in "unsafe Rust" it is super easy to violate a Rust API precondition that the compiler takes advantage of. Even the Rust intelligentsia have pointed out that writing correct "unsafe Rust" is significantly harder than writing correct C.


Do you have a source for that? I would like to know more. Not disputing it, just curious.



Thank you, appreciated!


To add on, unsafe Rust's main strength is the tools and the culture of encapsulating it well. It's probably the case that "well designed code that uses unsafe" is much safer overall, as you would expect for a memory-safe language. But it doesn't just come about from using unsafe.
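
The canonical illustration is essentially std's split_at_mut, sketched here: the unsafe block is buried behind an API that safe callers can't misuse, so the audit surface is a single function.

    use std::slice;

    /// Returns two non-overlapping mutable halves of `v`.
    fn split_halves(v: &mut [u8]) -> (&mut [u8], &mut [u8]) {
        let mid = v.len() / 2;
        let len = v.len();
        let ptr = v.as_mut_ptr();
        // SAFETY: [0, mid) and [mid, len) never overlap, so the two
        // &mut slices handed out cannot alias.
        unsafe {
            (
                slice::from_raw_parts_mut(ptr, mid),
                slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut data = [1u8, 2, 3, 4];
        let (a, b) = split_halves(&mut data);
        a[0] = 9;
        b[1] = 9;
        assert_eq!(data, [9, 2, 3, 9]);
    }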

