DeathArrow's comments | Hacker News

>Resource inequality leads to inefficient resource allocation.

Why don't we follow evolutionary laws and let nature and competition tackle resource allocation?


Same reason people prefer neural networks over genetic algorithms, evolution is slow.


I feel like people who espouse this sort of view often wildly overestimate their chance of not ending up with a pointy stick through an important organ.

We don't do that because we don't like seeing women and children get their brains beaten out with a large rock over a rabbit haunch that the bigger ape wanted.


Because of the intermingling of law and wealth. If you don’t intermediate, then justice becomes the will of the wealthiest, and the market stagnates. The only way that evolution works is if it is the highest and only law, so we would be warring individuals killing each other for resources on the daily.

As soon as you add social organization, you disrupt evolution inside that social unit: it then becomes one social unit vs others…. We’ve already been down that road, and now we live in large, relatively stable social units we call nations.

Anything inside a social unit is subject to the monopoly of coercion that the unit possesses over its members. Evolution is short-circuited by the presence of external existential forces.

Making evolution work inside a social unit is equivalent to dissolving it.


What does that look like, practically, considering how many rich people were born into an abundance of resources? Their survival is not a measure of fitness.


Inequality is somehow a consequence of nature. People and animals each have different traits. One is faster at running, another is cleverer at ambushing prey.

Trying to artificially reduce inequalities, as under communism, didn't yield good results, as everything was reduced to the lowest common denominator.


Let's see some concrete examples rather than bringing up communism as a general idea and claiming that it reduces inequality and that it is a bad thing. There's no substance provided for the argument you've made.


I wonder why a language should implement a SinglyLinkedList and a DoublyLinkedList in its standard library.

I do get why you need an array-like list, a dictionary/hashmap, a stack and a queue. I got the impression that linked lists aren't used very often. Or maybe it's a different case with Zig?


Maybe because it's used elsewhere in the standard library?

SinglyLinkedList is used in the standard library in std.Thread.Pool, and std.heap.ArenaAllocator. DoublyLinkedList is used in the http client's connection pool, as well as the ObjectCache for the package manager's git client.


> I got the impression that linked lists aren't used very often.

I did some cursory research on the software I've been using today and presented the results here: https://news.ycombinator.com/item?id=43684121

I think there is a misconception around linked lists mostly stemming from the fact that they're not great for cache locality. Someone presents the idea that linked lists under common circumstances are unintuitively slow and therefore should be considered carefully for performance sensitive applications. A game of Chinese whispers immediately forms around that idea and what comes out the other end is a new idea: that linked lists are bad, shouldn't be used and aren't used. Meanwhile, they continue to be used extensively in system software.


Linked lists get a bad reputation because they wound up being a default data structure--and that's almost always a mistake on modern computers.

Back in days of yore, small memory and CPU meant that linked lists and arrays were really the only effective data structures. Allocation and computation were super expensive, so following a link or using an index was really the only thing that worked.

On today's computers, CPU is so much faster than memory that it is almost always better to recompute a value than try to look it up. Locality is so important that linear actions trump asymptotic complexity until you get quite a lot more elements than you might expect. Memory is so cheap that it is worth exponentially overallocating in order to amortize complexity. etc.
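To make the over-allocation point concrete, here is a minimal C sketch (the Vec type and vec_push name are made up for illustration, not any particular library's API): doubling the capacity wastes up to half the buffer, but it makes appends O(1) amortized while keeping the elements contiguous for the cache.

    #include <stdlib.h>

    /* Minimal growable int vector; start from a zero-initialized Vec.
       Capacity doubles on overflow, so n pushes do O(n) total copying,
       i.e. O(1) amortized per push, and the elements stay contiguous. */
    typedef struct {
        int *data;
        size_t len;
        size_t cap;
    } Vec;

    static int vec_push(Vec *v, int value) {
        if (v->len == v->cap) {
            size_t new_cap = v->cap ? v->cap * 2 : 8;  /* exponential over-allocation */
            int *p = realloc(v->data, new_cap * sizeof *p);
            if (!p) return -1;                         /* allocation failure */
            v->data = p;
            v->cap = new_cap;
        }
        v->data[v->len++] = value;
        return 0;
    }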

At the end of the day with modern programming, you should almost always be reaching for something simpler than a linked list (array/vector) or you should be reaching for an abstraction that is more complex than a linked list (hash/dict, queue/deque, etc.).

With modern CPUs, if you are using a linked list for anything other than building a more complex data structure, you're probably making an incorrect choice on almost all fronts.


> Locality is so important that linear actions trump asymptotic complexity until you get quite a lot more elements than you might expect.

The reverse of this can be true. In the following paper, they found that Williams' heap construction (one by one, O(n log n) time) beat Floyd's heap construction (O(n) time) when the input is larger than the size of the off-chip cache.

https://doi.org/10.1145/351827.384257


>actions trump asymptotic complexity

Which operation of a linked list has any better complexity? The only one I can think of is frequent removal of random elements in the middle, if you opt not to use tombstones.


Linked lists have a higher constant cost memory-wise and are much slower to iterate; effectively, removal of a random element in the middle is the same as for a vector. When used as queues, ring buffers are better for most applications.

Their prime use is that adding new elements lacks a terrible worst case, aside from the fact that it puts more pressure on the memory allocator, which needs to be concurrency-friendly.

If there is full control over memory allocation (e.g. in kernels), linked lists make sense; for virtually all other cases there are better data structures.
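For the queue case, a rough sketch of the ring-buffer alternative (hypothetical C; the Ring type, names, and fixed capacity are assumptions for illustration): one contiguous allocation, no per-element nodes, O(1) push and pop.

    #include <stddef.h>

    /* Fixed-capacity ring buffer queue; start from a zero-initialized Ring.
       CAP must be a power of two so the index mask works; head and tail
       grow monotonically and wrap via unsigned arithmetic. */
    #define CAP 1024u

    typedef struct {
        int buf[CAP];
        size_t head;   /* next element to pop  */
        size_t tail;   /* next slot to push to */
    } Ring;

    static int ring_push(Ring *r, int v) {
        if (r->tail - r->head == CAP) return -1;   /* full  */
        r->buf[r->tail++ & (CAP - 1)] = v;
        return 0;
    }

    static int ring_pop(Ring *r, int *out) {
        if (r->tail == r->head) return -1;         /* empty */
        *out = r->buf[r->head++ & (CAP - 1)];
        return 0;
    }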


I'd first like to point out that yours is an argument from the perspective that performance is crucial and dependent on the use or non-use of linked lists. This is not necessarily the case. Performance may not be crucial, and even when it is, linked lists can find use in areas where they're not even nearly the bottleneck, for being the most obvious data structure that satisfies the required operations on the collection. That pointer chasing is slow is not an argument against the use of linked lists in general.

What if I only ever iterate over a list of some hundred entries in the rare case of an error? Should I be reaching for the most performant data structure or the one that most obviously and clearly maps to the operations the program needs to perform on it at some infinitesimal cost?

> Linked lists have a higher constant cost memory-wise

Than a fixed vector with a known maximum number of elements, yes. With something like a dynamic array you are however usually better off using a linked list memory-wise, unless you want to reallocate every few insertions or your node data is smaller than the link.

> effectively, removal of a random element in the middle is the same as for a vector.

You're conflating concepts. Seeking is O(n). Removal is O(1). There are cases where seeking for the purpose of removal is either amortized (e.g. you're already iterating over all the nodes, executing an associated handler for each which may or may not determine that the node should be unlinked in-place) or completely unnecessary (some other data structure already tracks a reference to the node and some code associated with that determines if/when it is removed).

One might similarly argue that I am conflating compacting (O(n)) with removal (O(1)) in the vector case, but for linked lists the difference is just a natural consequence of the design, while in a typical vector implementation compacting and removal are the same operation, and making them separate requires an additional data structure to track the holes.

Moving elements around in memory also comes with a host of other issues, for example if there are many references to the elements, which then requires maintaining an index array and accessing memory indirectly through it.
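To make the O(1) removal point concrete, here is a minimal C sketch (assuming an intrusive, doubly linked node; the Node and unlink_node names are made up for illustration): once you already hold a pointer to the node, unlinking needs no traversal, and no other element moves in memory.

    #include <stddef.h>

    /* Intrusive doubly linked node: prev/next live inside the element itself.
       Given a pointer to the node, unlinking is O(1). */
    typedef struct Node {
        struct Node *prev;
        struct Node *next;
        /* payload fields would follow here */
    } Node;

    static void unlink_node(Node *n) {
        if (n->prev) n->prev->next = n->next;
        if (n->next) n->next->prev = n->prev;
        n->prev = NULL;
        n->next = NULL;
    }

A real list would also patch its head/tail pointers when the unlinked node sits at either end; the point is just that nothing else has to move.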

> aside from the fact that it puts more pressure on the memory allocator, which needs to be concurrency-friendly.

This is again a conflation of concepts: that of using linked lists and that of making space for the nodes or preparing an allocator for concurrent application. The nodes could be on the stack for some applications, in a static pool for others.


What do you think is the data structure underneath a hash, stack or queue? It's likely a linked list.

So, you are completely correct. The issue is simply that you are at a different level of abstraction.
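The chained hash table is probably the most familiar example: each bucket is the head of a singly linked list of entries. A hedged C sketch (the Map/Entry names, fixed bucket count, and FNV-1a hash are assumptions for illustration, not any particular implementation):

    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 256u

    typedef struct Entry {
        struct Entry *next;         /* chain link */
        const char *key;            /* borrowed, not copied */
        int value;
    } Entry;

    typedef struct {
        Entry *buckets[NBUCKETS];   /* zero-initialize before use */
    } Map;

    static unsigned fnv1a(const char *s) {
        unsigned h = 2166136261u;
        while (*s) h = (h ^ (unsigned char)*s++) * 16777619u;
        return h;
    }

    static int map_put(Map *m, const char *key, int value) {
        unsigned i = fnv1a(key) % NBUCKETS;
        for (Entry *e = m->buckets[i]; e; e = e->next)
            if (strcmp(e->key, key) == 0) { e->value = value; return 0; }
        Entry *e = malloc(sizeof *e);
        if (!e) return -1;
        e->key = key;
        e->value = value;
        e->next = m->buckets[i];    /* push onto the bucket's chain */
        m->buckets[i] = e;
        return 0;
    }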


He forgot to mention that founders are all veterans of Unit 8200, the signals intelligence division of the Israeli military.


Someone explain to me how this works?

Big Tech acquires companies founded and run by literal foreign spies and recruits said agents into critical positions within their departments. Meanwhile, their alumni buddies down the street over at proscribed companies like NSO and Candiru hack into the products and services of these very same companies and use them to target citizens (including journalists, activists, politicians, diplomats) of America and its allies? And no one thinks there is a conflict of interest or a threat to national security here?


Yep, these guys have a very powerful network with access to a lot more CISO offices than you or I can get into. That network also includes a lot of the people who develop malware and exploits.


The network Assaf got from founding and selling a big cybersecurity company and then being a VP-equivalent at Microsoft for 5 years (immediately before founding Wiz) is more relevant than what he had from being an IC in the army 15 years before that.



Well, the context is how they built and sold a business, so unless that information is pertinent, why would he mention it? Perhaps you can elaborate on its relevance in more detail?


If someone needs a C#/.NET meta-rule, I shared one here: https://pastebin.com/EmNsTRwY

After quick testing, it seems to be reasonably good.


One of my favorite authors and one of the best novelists of all time. I am grateful he existed and that he wrote so many amazing books. I am sad that he's gone.

May he rest in peace!


>But over the decades, most converged to the Algol-style (statements, curly braces, often using semicolons, type before identifier, etc.). Look at what we did to programming:

>- Java, C, C++, C#, Kotlin, Rust, Swift, Go, TypeScript, JavaScript, ... → they look more or less the same

The upside being that if you come from C or Java, you will feel at home with Go.


I don't really think having an agent fleet is a much better solution than having a single agent.

We would like to think that having 10 agents working on the same task will improve the chances of success 10x.

But I would argue that some classes of problems are hard for LLMs and where one agent will fail, 10 agents or 100 agents will fail too.

As an easy example I suggest leetcode hard problems.


I'm authoring a self-compiling compiler with custom lexical tokens via LLM. I'm almost at stage 2, and approximately 50 "stdlib" concerns have specifications authored for them.

The idea of doing them individually in the IDE is very unappealing. Now that the object system, AST, lexer, parser, and garbage collection have stabilized, the codebase is at a point where fanning out agents makes sense.

As stage 3 nears, it won't make sense to fan out until the fundamentals are ready again/stabilised, but at that point, I'll need to fan out again.

https://x.com/GeoffreyHuntley/status/1911031587028042185


We need The Mythical Man-Month: LLM version book.


The fleet approach can work well, particularly because:

1) different models are trained differently, even though on mostly the same data (think someone who studied SWE at MIT vs one who studied at Harvard),

2) different agents can be given different prompts, which specializes their focus (think coder vs reviewer), and

3) the content of the context window influences the result (think someone who's seen the history of implementation attempts vs one seeing the problem for the first time).

Put those traits in various combinations and the results will be very different from a single agent.


Nit: it doesn't 10x the chance of success; it raises the chance of failure to the 10th power, so the chance of success becomes 1 - (chance of failure)^10.
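Worked out under an independence assumption (which, as argued upthread, often won't hold for agents sharing the same blind spots): if each agent succeeds with probability p, then

    P(at least one of n succeeds) = 1 - (1 - p)^n

so for p = 0.3 and n = 10 that is 1 - 0.7^10 ≈ 0.97, roughly a 3.2x improvement over a single agent, not 10x.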


neither, probably


Not at all. But Facebook is not used only by Usians. So the article might seem a bit misleading for people from different parts of the world.


"American" is the correct demonym for people from the United States of American.

Fun fact, there have been a dozen other countries which were "United States of X" throughout history: https://en.wikipedia.org/wiki/List_of_countries_that_include...


> I honestly don't see appeal why anyone would use Facebook when you literally have better alternatives that are specialized for the particular niche you might be interested in e.g. images, videos, gaming etc.

The particular niche I am into is staying connected to friends and people I like or have an interest in. And reading their posts.

I like to mainly read posts, not watch short videos or see images without too much explanation.


You can use WhatsApp and/or Facebook Messenger for connecting with friends (they are both owned by Facebook), but using Facebook per se seems less and less appealing, especially since it became bloated with non-friend content. Zuck is, though, introducing or reintroducing a "friends feed", so that might serve people better [0].

[0] https://www.theverge.com/news/637668/facebook-friends-only-f...

