Something I’ve found very helpful is when I have a murky idea in my head that would take a long time for me to articulate concisely, and I use an LLM to compress what I’m trying to say. So I type (or even dictate) a stream of consciousness with lots of parentheticals and semi-structured thoughts and ask it to summarize. I find it often does a great job at saying what I want to say, but better.
(See also the famous Pascal quote: “I would have written a shorter letter, but I did not have the time.”)
P.S. For reference, I’ve asked an LLM to compress what I wrote above. Here is the output:
When I have a murky idea that’s hard to articulate, I find it helpful to ramble—typing or dictating a stream of semi-structured thoughts—and then ask an LLM to summarize. It often captures what I mean, but more clearly and effectively.
Agreed, the messiness of the original text has character and humanity that is stripped from the summarized text. The first text is an original thought, exchanged in a dialogue, imperfectly.
Elsewhere in this comment section, people discuss the importance of having original thought; the summarized text specifically isn't that, and the summarization has leeched it away.
The parent comment has actually made the case against the summarized text being "better" (if we're measuring anything that isn't word count).
Learning to articulate your thoughts is pretty vital in learning to think though.
An LLM could make something sound articulate even if your input is useless rambling containing the keywords you want to think about. Having someone validate a lack of thought as something useful doesn't seem good for you in the long term.
Your original here is distinctly better! It shows your voice and thought patterns. All character is stripped away in the "compressed" version, which unsurprisingly is longer, too.
LLMs are very good at style transfer, like turning a piece of writing into a poem or whatever. I’ve found this to be helpful with respect to coding style as well.
E.g. I still kind of write Python as if I were writing C++. So, sometimes I’ll write a for loop iterating over integer indexes and tell the LLM “Hey can you rewrite this more pythonically” and I get decent results.
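For instance, a loop like the first version below usually comes back as something like the second (a made-up illustration of the kind of rewrite I mean, not output from any particular model):

    # C++-style: iterate over integer indexes
    names = ["ada", "grace", "alan"]
    result = []
    for i in range(len(names)):
        result.append(names[i].upper())

    # Pythonic rewrite: iterate over the elements directly
    result = [name.upper() for name in names]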
But also, automated long-haul trucking has been pretty clearly on the short-term horizon for the last decade. I think most young people know that this is coming, and hence trucking is probably not the best career to invest your time in.
So when there’s broad agreement that something is a good idea like, I don’t know, making sure the water is drinkable, your reaction to that is it’s “evil incarnate”?
With immutable, streaming input, there is an O(n) algorithm to obtain the unsorted top k with only O(k) extra memory.
The basic idea is to maintain a buffer of size 2k, run the in-place unsorted top-k selection on it, drop the smaller half (i.e. the lowest k elements), then stream in the next k elements from the main list. Each iteration takes O(k), but you’re consuming k elements at a time, so the overall runtime is O(n).
When you’re done, you can of course sort for an additional O(k log k) cost.
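A minimal Python sketch of the idea, with an average-case quickselect standing in for the unsorted top-k step (the function names are mine; an introselect-style pivot rule would make each selection worst-case linear):

    import random

    def top_k_inplace(buf, k):
        # Partition buf so its last k slots hold the k largest
        # elements (unsorted). Average O(len(buf)) via quickselect.
        lo, hi = 0, len(buf) - 1
        target = len(buf) - k  # first index of the top-k region
        while lo < hi:
            pivot = buf[random.randint(lo, hi)]
            i, j = lo, hi
            while i <= j:
                while buf[i] < pivot:
                    i += 1
                while buf[j] > pivot:
                    j -= 1
                if i <= j:
                    buf[i], buf[j] = buf[j], buf[i]
                    i += 1
                    j -= 1
            if target <= j:
                hi = j
            elif target >= i:
                lo = i
            else:
                break
        return buf[target:]

    def streaming_top_k(stream, k):
        # Unsorted top k of an iterable in O(k) extra memory.
        buf = []
        for x in stream:
            buf.append(x)
            if len(buf) == 2 * k:
                # Keep the k largest of the 2k buffered, drop the rest.
                buf = top_k_inplace(buf, k)
        return top_k_inplace(buf, k) if len(buf) > k else buf

    print(streaming_top_k(iter([5, 1, 9, 3, 7, 2, 8, 6]), 3))  # e.g. [7, 9, 8]

Each full-buffer pass does O(k) average work and consumes k fresh elements, which is where the overall O(n) comes from.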
Possibly, but it would join other false conjectures such as Euler's sum of powers conjecture, posed in 1769 with no counterexample found until 1966. Only three primitive counterexamples have been found so far.
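For concreteness, that 1966 counterexample (Lander and Parkin) is small enough to verify in one line of Python:

    # Lander & Parkin (1966): a counterexample to Euler's conjecture for k = 5
    assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5  # both sides are 61917364224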
The implications aren't even the same. All empirical evidence strongly supports the Goldbach conjecture. Any counterexample would mean an entire field of mathematics has to be rewritten.
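That empirical support is easy to reproduce on small inputs; here's a toy brute-force check (illustrative only, not evidence for the conjecture itself):

    def is_prime(n):
        # Trial division; fine for small n.
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return False
            d += 2
        return True

    def goldbach_witness(n):
        # Return primes (p, q) with p + q == n, for even n >= 4.
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return p, n - p
        return None  # a None here would be a counterexample

    assert all(goldbach_witness(n) for n in range(4, 10_000, 2))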