You're in danger of making the perfect the enemy of the good.
Email is far from perfect, but it's good enough, and its federated nature means it's reasonable for institutions to use it as a default mode of communication and authentication.
What exactly is the alternative to federation? Is it possible for everyone to be their own admin?
In any case, under feudalism serfs didn't have the freedom to choose and switch their feudal lord as they saw fit.
I assumed that private or corporate control over something that becomes a default communications network is undesirable, because that was a basis for the discussion I was replying to.
I also believe it. I don't want to have to subject myself to the X dumpster fire or sign away my data to Facebook just to receive communications from my local police department or child's school.
The title is wrong, and is not the actual title of the paper. They discovered some new aspects of how regular mitochondria work, not a new kind of mitochondria.
Say a tissue needs a few cells of a certain type, roughly evenly spaced throughout. One strategy to achieve this is for all cells in the region to have a tendency towards developing those characteristics, but also for the quickest to do so to simultaneously produce a messaging molecule that suppresses that tendency in its near neighbours.
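That mechanism is usually called lateral inhibition. Here's a toy sketch of the idea in Python, with entirely made-up parameters (cell count, threshold and suppression radius are all hypothetical): a 1-D row of cells each drifts towards the fate at a random rate, and the first to commit suppresses its near neighbours.

    import random

    N, THRESHOLD, RADIUS = 60, 1.0, 4   # hypothetical parameters
    levels = [0.0] * N                  # each cell's progress towards the fate
    state = ["undecided"] * N           # undecided / committed / suppressed
    rates = [random.uniform(0.01, 0.05) for _ in range(N)]

    while "undecided" in state:
        for i in range(N):
            if state[i] != "undecided":
                continue
            levels[i] += rates[i]       # every cell drifts towards the fate
            if levels[i] >= THRESHOLD:
                state[i] = "committed"
                # the quickest cell's signal suppresses that tendency nearby
                for j in range(max(0, i - RADIUS), min(N, i + RADIUS + 1)):
                    if state[j] == "undecided":
                        state[j] = "suppressed"

    print("".join("*" if s == "committed" else "." for s in state))

Running it prints something like "....*....*.....*..." with the committed cells roughly evenly spaced, and no central coordination needed.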
Ah, that explains a lot. I was confused reading some of the comments here, because to my memory all the wartime computing/Enigma and post-war work was in the BP museum, and TNMOC had only a brief mention linking to it but was mostly later personal computing / early displays for ATC, etc.
TNMOC wasn't better to me, because I was more interested in Turing et al. than 80s vintage computing. (And some people would be the opposite; I was just surprised to see so many comments saying it's almost objectively better for this crowd.) But if all the computing side is there now, it makes sense that people are saying that.
Wealth distribution is important, because very rich people disproportionately buy up limited-supply assets in order to earn passive income on them. In densely populated places, that includes assets like homes, which pushes up housing costs.
Housing costs often aren't included in popular measures of inflation. If you do use an inflation measure that includes housing and all other everyday expenditure, then I think it's reasonable to argue the baseline going up is good enough.
Unless someone buys up so many properties that they can start unilaterally increasing prices, the problem with housing prices going up isn't rich people buying homes and renting them out at a market rate. It's the balance of demand vs supply that does it.
And you're right - this is possibly the single biggest issue that's a downgrade for people vs their parents/grandparents, and it's a huge one. But the problem needs to be attributed correctly.
More very rich people looking for somewhere to park their money and earn returns means more competition when you want to buy a home.
I agree it doesn't necessarily mean rents rise. You might actually expect a higher price-to-earnings ratio with more wealthy people competing to buy assets (see also stocks).
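To put made-up numbers on that: a home renting for $20k/yr that sells for $400k trades at a price-to-earnings ratio of 20; if asset-hungry buyers bid the price up to $600k while the rent stays flat, the ratio rises to 30 without tenants paying a cent more.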
But it will lead to the shrinking of the home-owning middle class.
That's not to say there isn't also a problem with lack of supply.
I don't think the point is to nitpick how realistic the representation is or how fair the question is. The point is children react to the question differently from the way adults do. And that's true despite (or perhaps because of) what an adult thinks of the question.
But that's a different point to the one GP was making. It wasn't that children answer differently to adults, it's that they get it 'right' more often than adults. Which is still more about ignorance allowing them to make the same assumptions as the questioner than about their thought processes. A child might not even be aware that people in other countries might drive on the other side of the road, and so be sure of their 'correct' answer, but most adults know that without knowing the ___location of this image, the question can't be answered.
EDIT: And if the question weren't ambiguous, you'd basically be telling people the answer, since as soon as you say "assume it's in the US", you give a massive clue that bilateral asymmetry is relevant.
As a kid I instinctively associated different days of the week with colours. This is a form of synaesthesia. As an adult I never think this way, and even when I try I can only remember one of the colour associations (Friday was orange).
My cod-biological explanation is that a child's brain is still forming connections and the process of becoming an adult involves pruning many of these connections to become more focused and efficient.
My letter-color synesthesia continues to fire on all cylinders well into adulthood. It's just a form of memory association, of course. I can see that this 'e' is (in my editor) white text on a muted grey background. But that 'e' is also, simultaneously, the brightest lime green you ever laid eyes on. Like #00FF00, extremely bright green. Every 'e' in every word lights up in my mind's eye in exactly this color. Most vowels do the same, consonants are inconsistent, numbers are pretty strong.
It's a fun little memory trick, and it was surprising to learn that it's not a universal experience. Not just "your letter-color association is different from mine" (it seems to be highly personal) but "wait, you don't do that at all?"
While reading this comment, I was thinking "Tuesday is blue, Friday? Definitely orange", and then read you saying (Friday was orange) a split second later.
Tuesday rhymes with blue. Not to mention "Blue Tuesday" is a phrase used quite often.
You might be associating orange with Friday because it makes you think of the sun and going outdoors?
It's all very interesting. Even more so because I don't think of days/colours. My mind gives the days size and mass, i.e. starting with Sunday (I'm American) the day is small. And as the week progresses the days get larger, heavier and denser. Until Saturday, which is a big fat puffy ball of "day". You can do what you want with it and it will just be there and be your friend. I love Saturdays.
If Tuesday is blue, Wednesday is green, and Friday is orange then it would make sense that Thursday is yellow, Monday is violet, Sunday is indigo and Saturday is red...
As a software developer, I think I should be almost exclusively working in the top right (important, not urgent). If there is something I need to do urgently then I view this as someone (maybe me) screwing up.
It's also not obvious to me why "urgent, not important" gets handled any differently to "not urgent, not important", except that you have to be extra vigilant not to be distracted by the former.
This has kind of crystallised for me why I find the whole generative AI and "prompt engineering" thing unexciting and tiresome. Obviously the technology is pretty incredible, but this is the exact opposite of what I love about software engineering and computer science: the determinism, the logic, and the explainability. The ability to create, in the computer, models of mathematical structures and concepts that describe and solve interesting problems. And preferably to encode the key insights accurately, clearly and concisely.
But now we are at the point that we are cargo-culting magic incantations (not to mention straight-up "lying" in emotional human language) which may or may not have any effect, in the uncertain hope of triggering the computer to do what we want slightly more effectively.
Yes, it's cool and fascinating, but it also seems unknowable or mystical. So we are reverting to bizarre rituals of the kind our forebears employed to control the weather.
It may or may not be the future. But it seems fundamentally different to the field that inspired me.
Thank you for this. I agree completely and have had trouble articulating it, but you really nailed it here: all this voodoo around LLMs feels like something completely different to the precision and knowability of most of the rest of computer science, where "taste" is a matter of how a truth is expressed and modeled, not whether it's even correct in the first place.
I have to say, I agree that prompt engineering has become very superstitious and in general rather tiresome. I do think it's important to think of the context, though. Even if you include "You are an AI large language model" or some such text in the system prompt, the AI doesn't know it's AI because it doesn't actually know anything. It's trained on (nearly exclusively) human created data; it therefore has human biases baked in, to some extent. You can see the same with models like Stable Diffusion making white people by default - making a black person can sometimes take some rather strong prompting, and it'll almost never do so by itself.
I don't like this one bit, but I haven't the slightest clue how we could fix it with the currently available training data. It's likely a question to be answered by people more intelligent than myself. For now I just sorta accept it, seeing as the alternative (no generative AI) is far more boring.
I actually sort of love it. It's so, so similar to "theurgy", a topic that Greek philosophers expended millions of words on, completely uselessly. Just endless explanations of how exactly to use ritual and sacrifices to get gods to answer your prayers more effectively.
I actually sort of think that revisiting Greek ideas about a universal mind is relevant when thinking about these gigantic models, because we really have constructed a universal shared intelligence. Everyone's copy of ChatGPT is exactly the same, but we only ever see our own facets of it.
It reminds me of human interactions. We repeatedly (and often mindlessly) say "thank you" to express respect, and use other social mechanics to improve relationships, which in turn improves collaboration. Apparently that is built into the training data in subtle ways, or perhaps it's an underpinning of all agent-based interactions: when the solicitor is polite/nice/aligned, make more effort in responding. ChatGPT seems amazingly human-like in some of its behaviors because it was trained on a huge corpus of human thought.
It's predicting the next token. The best answers, online, mostly come from polite discourse. It's not a big leap to think manufacturing politeness will yield better answers from a machine.
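And that's a testable claim rather than a matter of faith. A minimal sketch, assuming the OpenAI Python client and an example model name (both just illustrative choices): ask the same question with a curt and a polite framing and compare the answers yourself.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    QUESTION = "Why is my recursive Fibonacci function so slow for n = 40?"
    framings = {
        "curt": QUESTION,
        "polite": "Hi! I'd really appreciate your help. " + QUESTION + " Thanks!",
    }

    # send each framing as a single user message and print the replies side by side
    for label, prompt in framings.items():
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # example model, swap in whatever you use
            messages=[{"role": "user", "content": prompt}],
        )
        print(label.upper(), "->", reply.choices[0].message.content, "\n")

One pair of samples proves nothing, of course; you'd want many runs judged blind. But it at least turns the politeness folklore into something falsifiable.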