Is it a thing for angels to exit in the early rounds?
Instead of being shoved down the cap table by a giant tranche of series A preferred stock, might it not be appropriate to give the angel a payday instead?
I guess some angels want to keep their fingers in the pie? And, more likely, it’s just not a reasonable expectation to see an exit like that way before anyone else does?
It’s just the same thing as a take profit in the stock market. Intellectually it seems reasonable but because a lot of the bets go straight down (never raise another round for that take profit opportunity) you need a higher multiple of the ones that win.
You end up taking profit at a 1.1x return and in 10 years it turns out to have been Uber.
Positive skew strategies (lose a little on a lot of bets and win big on a few) are impossible to use take profits on because you need those big winners.
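A toy back-of-the-envelope (with made-up numbers, purely illustrative) shows why capping winners is fatal to a positive-skew portfolio:

```python
# Illustrative numbers only: 100 angel bets, 90 go to zero,
# 10 eventually return 50x. The losers never reach a 1.1x
# take-profit level (they "go straight down"), so taking profit
# only caps the winners -- the bets the whole strategy depends on.
n_bets = 100
n_winners = 10
win_multiple = 50.0
take_profit = 1.1

hold = n_winners * win_multiple / n_bets    # hold to the real exit: 5x overall
capped = n_winners * take_profit / n_bets   # sell every winner at 1.1x: ~0.11x

print(hold)
print(capped)
```

Same loss rate either way, but the capped portfolio turns a 5x outcome into a near-total loss.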
Early exits probably won't produce the kind of return an angel investor is after monetarily, since it takes more than fu-money to motivate them.
It’s not fair to characterize Brexit — a ridiculously over simplistic yes/no referendum question — as being inherently bad.
I think a charitable reading of your comment ought to replace Brexit with the subsequent implementation of Brexit by successive Conservative governments.
That’s also quite possibly what you meant anyway, but it’s still worth saying aloud.
> What does it say that people who voted "yes" without a clear plan of action already on the table?
The UK government explicitly refused to make a plan of action.
It turned out to be a foot-gun moment.
The leave campaign was not representative of the people in office, who were basically all remainers, and the referendum would not change the makeup of Parliament because it's not an election. As a result they had no ability to make an official plan, or even an unofficial one which could be put into place afterwards, because they did not have the power to do that and the vote was not going to give it to them.
The government's reasoning for not making a plan seemed to be that it might be too appealing if there was a plan, and they didn't want to give leave campaigners the ability to say "See, there's a plan, it'll be fine", when remain campaigners could instead use the lack of plan to say "Are you crazy, you're voting for the unknown! There's no plan!". And they were confident that the leave vote was DoA so making a plan was a waste of time and money anyway.
This seems like dirty pool to me, and it backfired anyway. Turns out that chaos and uncertainty were more or less explicitly what some leave voters wanted - give the whole establishment a kick in the pants.
I voted remain, FWIW, but I think holding "there was no plan" over leave-voters' heads is a bit rich given why there was no plan.
> The UK government explicitly refused to make a plan of action.
Could you elaborate on that?
I would assume that it would be the responsibility of the leave campaign to provide that, but that's not the case?
I wonder, then, if the mistake was to allow the exit to be executed even though a plan couldn't be created in the time frame allowed. Maybe have an abort clause instead of jumping off the cliff if the bridge isn't built yet.
Maybe the referendum should have required the government to build the plan which would be decided on later.
The mainstream political parties in the UK since the 70s were pro-EU, though with some muttering at the fringes. Among the general public there was a bit more ambivalence. Anti-EU sentiment was generally written off as racist, stupid etc etc and this wasn't necessarily wrong because the loudest voices were parties like the BNP - basically 'out' fascists.
This changed with the accession of central and eastern European countries to the EU. Most western EU countries imposed a two-year stop on free movement, but Tony Blair wanted to "rub their noses in diversity" (referring to the sceptics) and opened the doors to the UK on day 1. Immigrant populations became much more visible very quickly, and as a result so did anti-EU parties like UKIP. When a woman asked the next PM, Gordon Brown, something about migration, he was caught on a microphone he thought was deactivated saying something about "that bigoted woman", and the flames were fanned ...
2010s - the UK has a coalition government, and to curry favour with the eurosceptic side of the electorate the Conservative prime minister, David Cameron, goes into the 2015 election promising a referendum. He's still sure it's a fringe issue, and that once people have had their say on the matter UKIP (who are starting to eat 'his' vote share on the right) will be neutralised and we can all stop talking about it. This had worked for him twice: in coalition he was forced to run a referendum on changing FPTP to AV (Alternative Vote), and through a campaign that I think was run on disinformation he managed to head off any further discussion of democratic reform. He also granted the Scottish government's wish to hold an independence referendum and then successfully campaigned for a "remain" vote there. He was on a roll.
So he calls the EU membership referendum.
The terms are set - a simple in/out question. It was legislated as a guiding referendum rather than a binding one. The difference is that the act of parliament introducing the referendum on AV already contained legislation that would have been triggered the next day to change the voting system if it had passed. Power was actually delegated from Parliament to the people in that case. In contrast, the EU referendum and the Scottish independence referendum were more like national opinion polls - they didn't establish anything in law beyond the result itself.
The PM and his whole cabinet are pro-business, fairly socially progressive conservatives. Definitely pro-EU. As are the main figures in the Labour party (mostly). So the establishment all rally behind the remain campaign. It is assumed that there will be an easy, comfortable win for remaining even among hardcore leave supporters. Despite the referendum being advisory/guiding, David Cameron says that a vote to leave will be acted on immediately with no plan or preparation, to try and scare people into voting remain. When opinion polls start to look a little bit dodgy, the chancellor threatens the country with a punishment budget including big tax hikes if leave wins. The spectre of uncertainty is raised repeatedly but in the end that gives ammunition to the 'leave' side, who call it out as "Project Fear", a term the pro-independence Scots had coined to describe the 'remain' campaign in their referendum.
> Maybe having an abort clause instead of jump off the cliff if the bridge isn't built yet.
> Maybe the referendum should have required the government to build the plan which would be decided on later.
So that's the thing - the law that created the referendum didn't include any requirement or any plan to do anything, whatever the result. The way the whole shebang was run, nobody had the power or the mandate to make a plan, because the referendum was advisory and was really only meant to be for show. The real plan was that on the 24th of June 2016 David Cameron could address the nation and say "See, we told you, the British people have spoken, shut up about the EU, let's get on with our lives".
But when you as Prime Minister call a referendum and promise (or threaten) that the result will be carried out post-haste, even when you have quite deliberately not set out a course of action or legislated for it, and you lose ... you've backed yourself into a political corner and you basically have to do it.
Which is why he resigned the next day.
And that's why there followed years of parliamentary arguments, court cases and in-fighting about what the hell to do next. You can't just throw that sort of thing aside. If over half of the people who voted, voted to leave, it's probably electoral suicide to ignore it, and you're paving the way for UKIP to rise. You can't ask for a do-over, because there's a perceived history of the EU getting people to re-vote on important issues (Ireland and Denmark IIRC(?), over referenda rejecting the new EU constitution/Treaty of Lisbon), and the 'leave' side would have had a field day portraying the whole edifice as profoundly anti-democratic. But the majority of the people who are tasked with coming up with a plan don't want to do it. Eventually the government collapses; Boris Johnson comes to power, and the Conservatives are re-elected on a promise to "get Brexit done", which gives further political mandate to leaving. Through a variety of political manoeuvres, some questionably legal, a plan is finally approved and put into action four years later.
Sorry for the wall of text :)
Anyway, all of that is to say that while Brexit may well be the greatest act of political self-harm the UK has carried out in a good long while, that's why I feel the specific criticism that "You voted for something when there wasn't even a plan you dumb shits!" isn't really fair. There was never going to be a plan, and if they didn't vote for it there was probably never going to be another chance.
tl;dr - there wasn't a plan because the people with the power to make one didn't want one.
I'm an immigrant living in the UK, and have been for over 20 years. I'm practically British now, without the accent. I don't disagree with much of your post, but some of it feels emotionally biased.
>the chancellor threatens the country with a punishment budget
Biased. And whatever the cause and effect ended up being, our taxes have risen, immediately after the result came in GBP dropped, we had inflation, and to counter it all interest rates were dropped even further from already extremely low levels. There are no widely respected economists (though they're hard to take seriously anyway) who think leaving the EU has not harmed the UK.
So, I consider it the duty of the chancellor to have informed us of this, because the other side of the argument (the brexiters) had not one bit of moral integrity to present reality. Remember, we're dealing with a group of people who lied for 40 years to achieve their aims. No other country in the EU required an EU hosted web page dedicated to countering all the anti-EU lies.
The brexit side effectively ran at least two campaigns, with plausible deniability by the "official" campaign because Farage wasn't on their team. Farage was the face of the less savoury side of the campaign, and his group ran using things like this:
https://ichef.bbci.co.uk/images/ic/1920x1080/p078zmng.jpg
While on the "official" campaign we have gems like this (I still genuinely laugh when I read this blog post):
I watched all of this unfold, as an immigrant living in the UK, and we (immigrants) were very acutely aware of the sentiment that drove the brexit vote.
So, what is my main takeaway from all of this? That this referendum was about two valid political choices: remain inside a pooled-sovereignty union, or leave that pooled-sovereignty union. Both valid choices. But the travesty was how poorly the referendum was constructed and run. And that is because we just don't have a history of running referendums very well (see the alternative vote referendum), and this would never have passed the sniff test in, for example, Switzerland.
I don't think that bit's biased, myself, George Osborne literally threatened a punishment budget before the referendum. He may not have used those words, but everyone else did and he did come out telling everyone that he would be having an emergency budget after the vote which others in his party described at the time as "economic vandalism". IMHO there's a difference between telling people that they're making an economic mistake and detailing what will go wrong (which he did too), and saying "I'm going to raise income tax, raise inheritance tax and slash the NHS budget within a few weeks if you vote leave".
> So, I consider it the duty of the chancellor to have informed us of this
Absolutely agree, but that's not what I'm referring to.
I agree with the rest though, it was a clusterfuck in so many ways. I'm not going to try to claim I'm entirely unbiased - in the lead up to the referendum I was definitely in the 'leave' camp, part of the group of people who just wanted to see British politics given a righteous kick up the arse, regardless of what form that came in. I sorta came-round in the last few days and voted remain, mostly because I knew if Brexit happened a lot of people I care about would be upset, and some would have their lives upended. And then I got to watch it happen anyway.
Having seen the news from Runcorn today, I feel it's a shame the British people haven't got tired of the Farage clown show yet. But then my own father would probably vote for him (probably does), because he's got suckered into the Old-people's-outrage channel, GBNews, which can't be good for his blood pressure let alone British democracy. Currently I'm hoping (I think realistically) that my adopted home of Australia does better in the general election tomorrow. I'm not yet a citizen so just spectating on this one.
Also, in my understanding, it was a gamble by David Cameron. He promised the referendum before the previous general election, believing the Tories wouldn't get a majority and he could blame his Lib Dem coalition partners when they blocked the referendum. Then the Tories did win outright and oopsie, what do I do now, I've got to hold a referendum with no plan. Basically unintended consequences. Moral of the story... be careful. Maybe Cameron had had a long day and was tired or something when he made that decision ;)
I could have used wars as an example (Iraq, Afghanistan, Vietnam), but Brexit feels like more of a parallel as it's non-violent and somewhat economic. We will absolutely have a conclusive outcome for what we've decided to do as a nation. The unfortunate thing is we are not going to get back 4 years of our lives. It's just going to evaporate and that's the thing that political fervor masks. You got one life, you can spend it fighting China, I suppose. In the case of Europe, you can spend it exiting it, I suppose. There's a serious opportunity cost here that wasn't properly discussed due to the zealotry of both sides.
Policy discussion seems to be something the masses cannot handle without clearly defining an "other". I feel Jeffersonian (bigoted) in suggesting that it's a mistake to give ordinary people access to this debate. Almost like letting ten year olds get involved in how mom and dad handle the mortgage.
There are an infinite number of Brexits we didn’t get. We only got to try one. For most purposes I think it’s pretty reasonable to equate ‘Brexit’ with that one.
Frankly, I don’t think any of the Brexits we stood any chance of actually getting could have been good: it was only a question of how bad the one we eventually got would be.
And the problem with the less bad Brexits was: they would be less bad, but they would also be more directly comparable with no Brexit (e.g. “in order to improve trade we’re going to follow all the EU’s rules but not have a say in any of them”).
> (e.g. “in order to improve trade we’re going to follow all the EU’s rules but not have a say in any of them”).
We de facto do that anyway, because most of our trade is conducted with the EU and companies aren't stupid. They don't want to design and build to multiple different standards, so they just adhere to EU rules for simplicity and cost reasons. But now we don't get a say in those rules.
True. But given that's true, it would be so much better to be inside the single market or customs union. But apparently those are still 'red lines' ...
> If it had actually reduced mass immigration from the third world, as voters were promised, it would have been good.
There isn't a very high bar to understanding that immigration from "the third world" had nothing to do with the EU.
As an immigrant to the UK, I was very acutely aware of the sentiment leading up to, during and after the referendum, but I was mortified by the ignorance displayed by people around the topic.
I'm from one of those third world countries by the way.
On the topic* of having 24 cores and wanting to put them to work: when I were a lad the promise was that pure functional programming would trivially allow for parallel execution of functions. Has this future ever materialized in a modern language / runtime?
x = 2 + 2
y = 2 * 2
z = f(x, y)
print(z)
…where x and y evaluate in parallel without me having to do anything. Clojure, perhaps?
*And superficially off the topic of this thread, but possibly not.
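No mainstream runtime does this automatically, but you can see the hoped-for transformation by writing the futures out by hand. A minimal sketch in Python (the function names are mine, not from any auto-parallelizing system):

```python
# What an auto-parallelizing evaluator would do behind the scenes,
# spelled out with explicit futures. x and y have no data dependency,
# so they can be submitted concurrently; f joins on both results.
from concurrent.futures import ThreadPoolExecutor

def f(a, b):
    return a + b

with ThreadPoolExecutor() as pool:
    x = pool.submit(lambda: 2 + 2)   # independent task 1
    y = pool.submit(lambda: 2 * 2)   # independent task 2, may run concurrently
    z = f(x.result(), y.result())    # join point: f needs both values

print(z)
```

The dream was that the compiler or runtime would insert the `submit`/`result` plumbing itself wherever the dependency graph allows it.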
Superscalar processors (which include all mainstream ones these days) do this within a single core, provided there are no data dependencies between the assignment statements. They have multiple arithmetic logic units, and they can start a second operation while the first is executing.
But yeah, I agree that we were promised a lot more automatic multithreading than we got. History has proven that we should be wary of any promises that depend on a Sufficiently Smart Compiler.
Eh, in this case not splitting them up to compute them in parallel is the smartest thing to do. Locking overhead alone is going to dwarf every other cost involved in that computation.
Yeah, I think the dream was more like, “The compiler looks at a map or filter operation and figures out whether it’s worth the overhead to parallelize it automatically.” And that turns out to be pretty hard, with potentially painful (and nondeterministic!) consequences for failure.
Maybe it would have been easier if CPU performance didn’t end up outstripping memory performance so much, or if cache coherency between cores weren’t so difficult.
I think it has shaken out the way it has because compile-time optimizations to this extent require knowing runtime constraints/data at compile time. Which, for non-trivial situations, is impossible, as the code will be run with too many different types of input data, too many different cache sizes, etc.
The CPU has better visibility into the actual runtime situation, so can do runtime optimization better.
In some ways, it’s like a bytecode/JVM type situation.
If we can write code to dispatch different code paths (like has been used for decades for SSE, later AVX support within one binary), then we can write code to parallelize large array execution based on heuristics. Not much different from busy spins falling back to sleep/other mechanisms when the fast path fails after ca. 100-1000 attempts to secure a lock.
For the trivial example of 2+2 like above, of course, this is a moot discussion. The commenter should've led with a better example.
What kind of CPU auto-optimization? Here specifically I envisioned a macro-level optimization, when an array is detected to have length on the order of thousands/tens of thousands. I guess some advanced sorting algorithms do extend their operation to multi-thread in such cases.
For CPU machine code it's the compilers doing the hard work of reordering code to allow ILP (instruction-level parallelism), eliminate false dependencies, inlining and vectorization; whatever else it takes to keep the pipeline filled and busy.
I'd love for the sentiment "the dev knows" to be true, but I think this is no longer the case. Maybe if you are in a low-level language AND have time to reason about it? Add to this the reserved smile when I see someone "benchmarking" their piece of code in a "for i to 100000" loop, without other considerations. Next, suppose a high-level language project: the most straightforward optimization for new code is to apply proper algorithms and fitting data structures. And I think even this is too much to ask nowadays, because it takes time, effort, and the knowledge that such options exist.
Spawning threads or using a thread pool implicitly would be pretty bad - it would be difficult to reason about performance if the compiler was to make these choices for you.
I think you’re fixating on the very specific example. Imagine if instead of 2 + 2 it was multiplying arrays of large matrices. The compiler or runtime would be smart enough to figure out if it’s worth dispatching the parallelism or not for you. Basically auto vectorisation but for parallelism
I mean, theoretically it's possible. A super basic example would be if the data is known at compile time, it could be auto-parallelized, e.g.
int buf_size = 10000000;
auto vec = make_large_array(buf_size);
for (const auto& val : vec)
{
do_expensive_thing(val);
}
this could clearly be parallelised. That auto-parallelising C++ world doesn't exist, but we can see that the transformation would be valid.
If I replace it with
int buf_size = 10000000;
cin >> buf_size;
auto vec = make_large_array(buf_size);
for (const auto& val : vec)
{
do_expensive_thing(val);
}
the compiler could generate some code that looks like:
if buf_size >= SOME_LARGE_THRESHOLD {
DO_IN_PARALLEL
} else {
DO_SERIAL
}
With some background logic for managing threads, etc. In a C++-style world where "control" is important it likely wouldn't fly, but if this was python...
arr_size = 10000000
buf = [None] * arr_size
for x in buf:
do_expensive_thing(x)
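For what it's worth, the threshold-dispatch idea sketched above is easy to write as library code today. A hedged sketch (the function names and the cutoff value are invented; a real system would tune the threshold empirically per workload and machine):

```python
# Hypothetical auto-dispatch: run serially below a size cutoff,
# otherwise fan out to a thread pool. This is the "if buf_size >=
# SOME_LARGE_THRESHOLD" branch from above, written by hand.
from concurrent.futures import ThreadPoolExecutor

PARALLEL_THRESHOLD = 10_000  # made-up cutoff

def do_expensive_thing(x):
    return x * x  # stand-in for real work

def map_maybe_parallel(func, data):
    if len(data) < PARALLEL_THRESHOLD:
        return [func(v) for v in data]        # serial fast path
    with ThreadPoolExecutor() as pool:        # parallel path
        return list(pool.map(func, data, chunksize=1024))

out = map_maybe_parallel(do_expensive_thing, range(20_000))
print(out[123])
```

The hard part, as the thread notes, isn't the dispatch itself but choosing the threshold so the coordination overhead doesn't eat the gains.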
You’re fixated on the very specific examples in our existing tools and saying that this wouldn’t work. Numpy could have a switch inside an operation that decides whether to auto-parallelise or not, for example. It’s possible but nobody is doing it. Maybe for good reasons, maybe for bad.
I’m doing no such thing. I’m providing an example of why verifiable industry trends and current technical state of the art are the way they are.
You providing examples of why it totally-doesn’t-need-to-be-that-way are rather tangential, aren’t they? Especially when they aren’t addressing the underlying point.
Bend[1] and Vine[2] are two experimental programming languages that take similar approaches to automatically parallelizing programs: interaction nets[3]. IIUC, they basically turn the whole program into one big dependency graph, then the runtime figures out what can run in parallel and distributes the work to however many threads you can throw at it. It's also my understanding that they are currently both quite slow, which makes sense as the focus has been on making `write embarrassingly parallelizable program -> get highly parallelized execution` work at all until recently. Time will tell if they can manage enough optimizations that the approach enables you to get reasonably performing parallel functional programs 'for free'.
That looks more like a SIMD problem than a multi-core problem
You want bigger units of work for multiple cores, otherwise the coordination overhead will outweigh the work the application is doing
I think the Erlang runtime is probably the best use of functional programming and multiple cores. Since Erlang processes are shared nothing, I think they will scale to 64 or 128 cores just fine
Whereas the GC will be a bottleneck in most languages with shared memory ... you will stop scaling before using all your cores
But I don't think Erlang is as fine-grained as your example ...
AFAIU Erlang is not that fast an interpreter; I thought the Pony language was doing something similar (shared nothing?) with compiled code, but I haven't heard about it in a while
There's some sharing used to avoid heavy copies, though GC runs at the process level. The implementation is tilted towards copying between isolated heaps over sharing, but it's also had performance work done over the years. (In fact, if I really want to cause a global GC pause bottleneck in Erlang, I can abuse persistent_term to do this.)
I believe it's not the language preventing it but the nature of parallel computing. The overhead of splitting up things and then reuniting them again is high enough to make trivial cases not worth it. OTOH we now have pretty good compiler autovectorization which does a lot of parallel magic if you set things right. But it's not handled at the language level either.
> …where x and y evaluate in parallel without me having to do anything.
I understand that yours is a very simple example, but a) such things are already parallelized even on a single thread thanks to all the internal CPU parallelism, b) one should always be mindful of Amdahl's law, c) truly parallel solutions to various problems tend to be structurally different from serial ones in unpredictable ways, so there's no single transformation, not even a single family of transformations.
There have been experimental parallel graph reduction machines. Excel has a parallel evaluator these days.
Oddly enough, functional programming seems to be a poor fit for this because the fanout tends to be fairly low: individual operations have few inputs, and single-linked lists and trees are more common than arrays.
There have been Fortran compilers doing auto-parallelization for decades. I think Nvidia released a compiler that will take your code and do its best to run it on a GPU.
This works best for scientific computing things that run through very big loops where there is very little interaction between iterations.
I met a member of an EU pact
Who said: two vast and fruitful suits of law
Prevail in the courts. Near them, in their acts,
Half won, a shattered victory lies, whose maw
And wrinkled smile, a sneer of bitter spite,
Tell that its makers well those voters fed
Which yet survive, in those politic corps,
The lips that lied, the hearts that bled
And on the cover these words, in bold, underlined
“My name is Brussels, Home of Kings:
Look on my rules, ye Mighty, and be fined!”
No thing beside remains. Around the court
Of that great parliament, in open plans, Aerons reclined
The ever mighty FAANGs endure.
Additionally, mutable fields will quite often benefit from having a separate edit table which records the old value, the new value, who changed it, and when. Your main table’s created and updated times can be a function of (or a complement to) the edit table.
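As a concrete sketch of the edit-table pattern (using sqlite3 only for portability; the table and column names here are illustrative, not a prescribed schema):

```python
# Main table plus a companion edit table recording old value, new
# value, who changed it, and when. Schema names are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE user (
    id         INTEGER PRIMARY KEY,
    email      TEXT NOT NULL,
    updated_at TEXT
);
CREATE TABLE user_edit (
    id         INTEGER PRIMARY KEY,
    user_id    INTEGER NOT NULL REFERENCES user(id),
    field      TEXT NOT NULL,
    old_value  TEXT,
    new_value  TEXT,
    changed_by TEXT NOT NULL,
    changed_at TEXT NOT NULL DEFAULT (datetime('now'))
);
""")

def update_email(user_id, new_email, actor):
    # Capture the old value, apply the change, and append an edit row
    # in one place so the audit trail can't drift from the data.
    (old_email,) = db.execute(
        "SELECT email FROM user WHERE id = ?", (user_id,)).fetchone()
    db.execute("UPDATE user SET email = ?, updated_at = datetime('now') "
               "WHERE id = ?", (new_email, user_id))
    db.execute("INSERT INTO user_edit (user_id, field, old_value, "
               "new_value, changed_by) VALUES (?, 'email', ?, ?, ?)",
               (user_id, old_email, new_email, actor))

db.execute("INSERT INTO user (id, email) VALUES (1, 'a@example.com')")
update_email(1, 'b@example.com', 'admin')
print(db.execute("SELECT old_value, new_value FROM user_edit").fetchall())
```

The main table's `updated_at` is then redundant with (or derivable from) `MAX(changed_at)` in the edit table, which is the complement mentioned above.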
It is tempting to supernormalize everything into the relations object(id, type) and edit(time, actor_id, object_id, key, value). This is getting dangerously and excitingly close to a graph database implemented in a relational database! Implement one at your peril — what you gain in schemaless freedom you also lose in terms of having the underlying database engine no longer enforcing consistency on your behalf.
> This is getting dangerously and excitingly close to a graph database implemented in a relational database!
This feels like a great unresolved tension in database / backend design - or maybe I'm just not sophisticated enough to notice the solutions?
Is the solution event sourcing and using the relational database as a "read model" only? Is that where the truly sophisticated application developers are at? Is it really overkill for everybody not working in finance? Or is there just not a framework that's made it super easy yet?
Users demand flexible schemas - should we tell them no?
> supernormalize everything into the relations object(id, type) and edit(time, actor_id, object_id, key, value)
I frankly hate this sort of thing whenever I see it. Software engineers have a tendency to optimize for the wrong things.
Generic relations reduce the number of tables in the database. But who cares about the number of tables in the database? Are we paying per table? Optimize for the data model actually being understandable and consistently enforced (+ bonus points for ease of querying).
> Additionally, mutable fields will quite often benefit from having a separate edit table which records the old value, the new value, who changed it, and when.
Aren't you describing a non-functional approach to event sourcing? I mean, if the whole point of your system is to track events that caused changes, why isn't your system built around handling events that cause changes?
To be fair to DEI, the D could easily stand for “anti-homogeneity”. To that extent, Beijing, Xinjiang, Lahore, Jakarta, or Manila makes no difference. All that matters is you’re not being excluded just because you look or sound different.
Anyone can be anything and do anything they want in an abundant, machine assisted world. The connections, cliques, friends and network you cultivate are more important than ever before if you want to be heard above the noise. Sheer talent has long fallen by the wayside as a differentiator.
…or alternatively it’s not The Culture at all. Is live performance the new, ahem, rock star career? In fifty years time all the lawyers and engineers and bankers will be working two jobs for minimum wage. The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
The culture presents such a tempting world view for the type of people who populate HN.
I've transitioned from strongly actually believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.
I don't even think it'll be from terminators and nuclear wars and that sort of thing. I think it will come wrapped in a hyper-specific personalized emotional intelligence, tuned to find the chinks in our memetic firewalls just so. It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.
That's why it's so important to reduce all of your personal data points online. Imagine what they can reconstruct based on their modeling and comparing you to similar users. I have 60 years of involuntary data collection ahead of me. This is not going to be fun.
Yes, fully agree. I also think there is a future in personal memetic firewalls that will filter ideas before they hit our wetware, and filter the tics and habits that uniquely identify us on the way out. Such a firewall would need to be much smarter than we are, but not necessarily as smart as the smartest AI trying to 'attack'. It would have perfect knowledge of its user, and as such possess information asymmetry.
> I don't even think it'll be from terminators and nuclear wars and that sort of thing
I do. And I don't even think the issue is a hostile AI. There are 8 billion people in the world. Millions of those people have severe mental issues and would destroy the world if they could. It seems highly likely to me that AI will eventually give at least one of those people the means.
That'll be great for the world's natural outsiders. Those that hate pop music and dislike even tailored ads because of the creepy feeling of influence. Or who don't follow any politicians because they're all out to hoodwink you.
Oh, a subset will be at risk of being artificially satisfied but your hardcore grouch will always have a special "yeah, yeah, fuck off bot" attitude.
Alternatively they can sell you anything if they can make you feel content or euphoric on their command. Get your new drug gland today, free of charge, sponsored by Blackrock
There is a bias there in action: we are assuming that the entire world is like this thing we just happen to be thinking about.
It is not.
Even if it were just a minority, there are plenty of people outside "this thing" that will profit from the ((putative) majority's) anesthesia. Or which at least will try to set the world on fire (anybody remember the elections in USA a few months ago? That was really dumb. But sometimes a dumb feat shows that one is alive, which is better than doing nothing and being taken for dead. Or it is at least good-enough peacocking to attract mates and pass on the genes, which is just an extravagant theory of mine that I'm almost certainly sure is false. And do not take this as an endorsement of DJT). I'm not being an optimist here; I've seen firsthand the result of revolutions, but it may be the least-bad outcome.
> I've transitioned from strongly actually believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.
I think we'll just die out. Everyone will be too busy having fun to have kids. It's already started in the West.
While the west has gone this way, there also seems to be a strong undercurrent (at least here in Australia) of "we can't afford to have kids (yet)".
As housing has moved further out of reach of young people, some don't seem to feel their lives are stable enough to make the leap. The trend was down anyway, but the housing crisis seems to be an aggravating factor.
This subject has been investigated a lot. In many countries governments make it much easier and more affordable to have children, and it doesn't seem to make any difference.
I agree, using housing as a source of wealth has broken a whole generation. When the boomers of the world start to massively die out (any year now), housing will deflate, but not spectacularly without a crisis (people don't want to settle where the cheap houses are in a bull market).
I wouldn't call my kid-skipping activities fun, but go off.
Spending a life on the treadmill doesn't encourage more walks. It encourages burning it down. All I've known is work. Pass on more, thanks. I hear you/others now:
But past generations managed...
That's exactly my point. Despite all of our proclaimed progress, we're still "managing". Maintaining this circus/baby-crushing machine is a tough sell.
By the time I could afford to have kids, I had become both unprepared and uninterested.
What's more: I'm one of the lucky ones. I was given a fancy title and great-but-not-domicile-ownership-great pay for my sacrifice. Plenty do more work for even less reward.
> The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.
We already have so many of those that it’s very hard to make any sort of living at it. Very hard to see a world in which more people go into that market and can earn a living as anything other than a fantasy.
Cynically - I think we'd probably end up with more influencers, people who are young, good looking and/or charismatic enough to hold the attention of other people for long enough to sell them something.
The Culture is about a post-capitalist utopia. You’re describing yet another cyberpunk-esque world where people still have to do wage labor to avoid starving.
You’re right so I made a slight edit to separate my two ideas. Thanks for even reading them at all! I try to contribute positively to this site when I can, and riffing on the overlap between fiction and real-life — a la Doctorow — seems like a good way to be curious.
> Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to “not give up the night job” — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.
Controversial stance probably, but this very much sounds like a world I'd love to live in.
It’s implied that programs are personal but product code goes through peer review, checking the premise of the change as well as the implementation.
When someone reviews vibe coded patches and gives feedback, what should the reviewer expect the author to do? Pass the feedback onto their agent and upload the result? That feels silly.
How has code review changed in our brave, vibey new world? Are we just reviewing ideas now, and leaving the code up to the model? Is our software now just an accumulation of deterministic prompts that can be replayed to get the same result?
With the Altair you had switches to input bits of CPU instructions. Then punchcards. Then teletypes. Then assembly in a terminal. Then C, Pascal and Smalltalk. Then Java, C++, Python, PHP. Then the mountains of libraries and frameworks, realtime indexers, linters, autocomplete, suggestions.
The next step seems to be that we will write programs in prompts. Maybe we will get a language that is more structured than the current natural-language prompts and of much higher order than our current high-level programming constructs and libraries.
You can’t eliminate formalism in programming. You can only abstract it away by creating a metaprogramming layer on top of the current one. And a prompt isn’t that. Prompting is a blind shot hoping to hit something good.
Indeed, that is why we may end up with some sort of standardized and structured way to write functional descriptions of software that we can train AIs on. How close to natural language it will be I'm not sure. Arguably, since the birth of computing we have seen programming languages inching ever closer to natural language, albeit still far removed. Perhaps we will end up with something really formal and strict, but much more high-level. AIs could then be trained on the full language; maybe we could then solve the "confident yet wrong" problem with current agentic programming.
The first order fraud would be market manipulation run by the commander in chief.
The second-order play would be for Trump to tip off his associates that he was going to do this and let them profit handsomely from the bump. Congratulations, Mr President: your lackeys love you because they profited from fraud, and moreover you now have grade-A kompromat on all of them in the form of your Signal chat records showing they were complicit in market fraud!