Hacker News | kevinventullo's comments

Something I’ve found very helpful is when I have a murky idea in my head that would take a long time for me to articulate concisely, and I use an LLM to compress what I’m trying to say. So I type (or even dictate) a stream of consciousness with lots of parentheticals and semi-structured thoughts and ask it to summarize. I find it often does a great job at saying what I want to say, but better.

(See also the famous Pascal quote: "I would have written a shorter letter, but I did not have the time.")

P.s. for reference I’ve asked an LLM to compress what I wrote above. Here is the output:

When I have a murky idea that’s hard to articulate, I find it helpful to ramble—typing or dictating a stream of semi-structured thoughts—and then ask an LLM to summarize. It often captures what I mean, but more clearly and effectively.


Like the linked article, I'd rather read your original text, even if it's less structured and rough.

Agreed, the messiness of the original text has character and humanity that are stripped from the summarized text. The first text is an original thought, exchanged in a dialogue, imperfectly.

Elsewhere in this comment section, people discuss the importance of having original thoughts, which the summarized text specifically isn't; the summarization has leeched that quality away.

The parent comment has actually made the case against the summarized text being "better" (if we're measuring anything that isn't word count).


Learning to articulate your thoughts is pretty vital in learning to think though.

An LLM can make something sound articulate even if your input is useless rambling that merely contains the keywords you want to think about. Having something validate a lack of thought as useful doesn't seem good for you in the long term.


Your original here is distinctly better! It shows your voice and thought patterns. All character is stripped away in the "compressed" version, which unsurprisingly is longer, too.

LLMs are very good at style transfer, like turning a piece of writing into a poem or whatever. I’ve found this to be helpful with respect to coding style as well.

E.g., I still kind of write Python as if I were writing C++. So sometimes I’ll write a for loop iterating over integer indexes and tell the LLM “Hey, can you rewrite this more pythonically?” and I get decent results.
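As a hedged illustration of the kind of rewrite meant here (the variable names are made up, not from any actual session), an index-based C++-style loop next to its pythonic equivalent:

```python
prices = [3.50, 1.25, 7.80]

# C++-style: iterate over integer indexes.
total = 0.0
for i in range(len(prices)):
    total += prices[i]

# Pythonic: iterate over the values directly, or use a builtin.
total = sum(prices)

# When the index is genuinely needed, enumerate pairs it with the value.
for i, price in enumerate(prices):
    print(i, price)
```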


But also, automated long-haul trucking has been pretty clearly on the short-term horizon for the last decade. I think most young people know that this is coming, and hence trucking is probably not the best career to invest your time in.

Whoosh

Full consensus is evil incarnate

So when there’s broad agreement that something is a good idea like, I don’t know, making sure the water is drinkable, your reaction to that is it’s “evil incarnate”?


If nodes have pointers to their parent, constant space tree traversal is easy enough.
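One sketch of what that might look like (my own illustration, not the commenter's code): with parent pointers, the in-order successor is computable in O(1) extra space, so the whole traversal needs no stack.

```python
class Node:
    """Binary tree node that keeps a pointer to its parent."""
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right
        self.parent = None
        for child in (left, right):
            if child:
                child.parent = self

def successor(node):
    """In-order successor using parent pointers; O(1) extra space."""
    if node.right:
        node = node.right
        while node.left:
            node = node.left
        return node
    # Climb until we arrive from a left child (or run out of tree).
    while node.parent and node.parent.right is node:
        node = node.parent
    return node.parent

def inorder(root):
    """Full in-order traversal with no stack or recursion."""
    node = root
    while node.left:
        node = node.left
    out = []
    while node:
        out.append(node.val)
        node = successor(node)
    return out

root = Node(2, Node(1), Node(3))
```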

Would be interesting to apply Interpretability techniques in order to understand how the model really reasons about it.

With immutable “streaming” input, there is an O(n) algorithm to obtain the unsorted top k with only O(k) extra memory.

The basic idea is to maintain a buffer of size 2k, run mutable unsorted top-k selection on that, drop the smaller half (i.e., the lowest k elements), then stream in the next k elements from the main list. Each iteration takes O(k), but you’re processing k elements at a time, so the overall runtime is O(n).

When you’re done, you can of course sort for an additional O(k log k) cost.
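The buffer-doubling idea above can be sketched as follows. This is my own sketch, not the commenter's code; for brevity it uses `heapq.nlargest` (O(k log k) per compaction) where a true linear-time selection such as quickselect would be needed to hit the stated O(k)-per-round bound.

```python
import heapq

def streaming_top_k(stream, k):
    """Top k of a stream using O(k) extra memory.

    Buffer grows to 2k, then a compaction keeps only the k largest
    ("drop the smaller half"), so the buffer never exceeds 2k items.
    """
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) == 2 * k:
            # Keep the k largest; quickselect would make this O(k).
            buf = heapq.nlargest(k, buf)
    # Final pass: the buffer may hold up to 2k - 1 leftovers.
    return heapq.nlargest(k, buf)
```

`heapq.nlargest` happens to return its result in descending order, which covers the optional final sort mentioned above at the same O(k log k) cost.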


Technically the emperor dubs Anakin as Vader just after the showdown where Anakin betrays Mace Windu. The battle with Obi-Wan happens after that.

I’m not normally this pedantic, but on the topic of Star Wars it somehow feels appropriate.


TIL, honestly have only watched Star Wars once, so I appreciate the correction.

IMO it would make the conjecture far more interesting, as it would be a surprise to most people who have thought about the problem.

Many natural questions would arise, starting with “Is this the only counterexample?”


Possibly, but it would join other false conjectures such as Euler's sum of powers conjecture, posed in 1769 with no counterexample found until 1966. There have only been three primitive counterexamples found so far.

(I got that from https://math.stackexchange.com/questions/514/conjectures-tha... which features some other false conjectures that may be of interest to you)


The implications aren't even the same. All empirical evidence strongly supports the Goldbach conjecture. Any counterexample would mean an entire field of mathematics has to be rewritten.

