Looking back, we had one CS professor who in 2007 predicted we'd all be jobless in ten years, i.e. 2017.
His prediction was based on the trends he was seeing at the time. But it wasn't even AI. Instead he made this prediction because he saw the rise of no-code tools that would replace software developers: managers could finally cut out the pesky "translators".
I said it then and I will say it now: if your managers could specify what they need in a manner that lets no-code tools, or now AI, generate the code they want, they would have to be extremely exact in their language. So exact, in fact, that they would effectively be specifying a program, whether in a click-and-drag interface or in human language.
Since they hire software developers to make the specification more rigid, and the managers don't seem to be getting better at this over time, why would you believe this skill set is going to go away?
In essence what has happened in software development is that the level of abstraction has gone up while the machine has taken over more and more of the nitty gritty details. From punchcards, to assembly, to COBOL, C, Perl, Java, Python, Erlang, Rust.
Of course I'm leaving out some languages here, but the level of abstraction has been rising.
But the rigidity of what is needed to specify a program that really does what you want hasn't gone down. This is especially evident in the fact that recent programming languages often have a specific area where they shine but don't raise the abstraction level all that much.
I'd be surprised if the next step is "Hi, I'm an ideas guy, please give me an app that does Uber, for bicycles, but better."
> So exact, in fact, that they would effectively be specifying a program, whether in a click-and-drag interface or in human language.
This. I started programming in Machine Code, where the "editor" was a pad of graph paper.
I've watched management- and business-focused people sneer at us geeks for my entire career, and watched them drool over the thought of finally getting rid of us.
Hasn't happened yet.
> I'd be surprised if the next step is "Hi, I'm an ideas guy, please give me an app that does Uber, for bicycles, but better."
I get that, from "idea people," on a regular basis. They have nothing but contempt for us "Make It Happen" folks. They treat us as if we grow on trees, and are suckers, to boot.
Inevitably, the above is followed by something that translates to "Of course, you will do all the work, but I'll get all the money, because the idea is what's really valuable."
If I follow up on it at all, I'll be asked to sign an NDA, and meet in some clandestine place, so they can whisper to me about their AI-powered cheese straightener.
I agree with your characterization in general. If all someone has is an idea and no relevant experience to back it up, run. I can’t stand people like that.
But there’s one type of ideas person I’m thrilled to work with. Someone with deep and successful experience in sales. The head of sales at my company is also relatively product oriented and boy, he just has a knack for what customers want.
In general, I think many engineers can do some product management. We can figure out basic accounting and finance. But the idea of making a cold call to sell dental software, or chasing down school principals at an education conference is almost as foreign to most software engineers as CS is to most sales folks.
There’s a role called sales engineer that typically exists in the sales organization but requires a technical background. From the engineering org’s perspective, your SEs are there to make sure sales doesn’t oversell or undersell. At my company your pay is commission-based but with a higher base than sales proper.
Solution Architect is another common title. As someone else mentioned, Developer Relations is at least adjacent though that tends to be less salesy and, depending on the role/company, may be more focused on community aspects than building demos and the like. In any case, at software companies, there are definitely customer-facing roles that are more technical though they may not involve a lot of direct coding. For non-field roles, product management qualifies as well.
I would say an SA (Solution Architect) is very different from an SE (Sales Engineer) in one fundamental way: the SA is focused on delivering value post-contract, and the SE is focused on making sure the org is able to deliver the value promised, by working with sales in the pre-contract phase. In some smaller companies, or organizations within a company, I can see these roles being performed by the same person, but in general that distinction between pre- and post-sales has been my lived experience.
The SA people that I have met so far (30+) can architect and, more importantly, talk about and market the solution. But the execution is oftentimes lacking. And that is solely because the execution is less important. (If the contract is big enough, the SA can always lean on product engineers or hire contractors to execute.)
Being the engineering contractor to an SA organization is daunting to a programmer, but rather rewarding.
Where I work, SAs are pretty much technical pre-sales. (There are also chief architects who are somewhat related.) If a customer needs ongoing post-contract support, they can buy a TAM (technical account manager). We don't have sales engineers.
(When I worked for a long-ago hardware company, system engineers (SEs) were pretty much the equivalent of SAs where I am currently.)
In my company, all of our sales people have a technical background. They are not the best engineers in the company, but they could do the job if they had to. More importantly, when faced with technical people, they are not completely clueless.
Agreed. I have a thousand tools mastered to solve any software issue quickly. But people issues can still stump me for days. Both skill sets take dedication and years (decades?) to master.
In Bullshit Jobs, Graeber terms our current system "managerial feudalism". It's not capitalism in the same way Adam Smith or Marx envisioned; we don't have firms brutally competing to generate the most value at the lowest cost. Instead we have a system of rent extraction largely done by a small number of powerful players. You don't need to be efficient to extract rent, so you end up with a court, or an org chart in modern terms, of useless hangers-on that mostly serve to demonstrate your wealth and power.
This seems overly pessimistic (unsurprising given that it's written by an eventual fascist). Egalitarian societies do exist, but they take work to maintain. The Mondragon corporation functions as a cooperative despite its large size. I know less about its day-to-day organizing than I'd like, unfortunately; I would be very interested to learn more.
I kinda wonder if it would not be so bad if a company asked ChatGPT "prompt the engineers for status updates occasionally, then take their responses and produce high-level summaries for department heads" or whatever.
On their side, the engineers might even ask it “summarize my tickets and come up with an optimistic, business-friendly high level storyline.”
stop "prompting" me and pay attention. I do SO much communication with what I'm working yet it's not enough and I also need to do YOUR job for you. And manage your kanban. And compile all the things I've done at the end of my term. Can't wait for AI to replace you, you are useless.
The only time I see synergies is when we talk about what task to take on next and why.
Tell ChatGPT it's a product manager overseeing engineers and providing executive summaries. Create a system that feeds executive emails inquiring about project statuses into ChatGPT and emails the developers asking for updates. Developers reply with jargon-heavy details about tickets for the next milestone; ChatGPT writes an executive summary that eliminates the jargon and sends it to the executives.
Not the full job, but it can probably be built right now.
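For the curious, here's a minimal sketch of what the relay in the middle could look like, assuming the OpenAI Python client from the GPT-4 era; the prompts, function name, and sample updates are purely illustrative, and the email plumbing is left out.

```python
# Rough sketch of the "ChatGPT as status-summarizing PM" idea described above.
# Assumes the older `openai` package and an OPENAI_API_KEY in the environment;
# prompts, names, and sample updates are illustrative placeholders.
import openai

SYSTEM_PROMPT = (
    "You are a product manager. Engineers send you jargon-heavy status updates. "
    "Rewrite them as a short, jargon-free executive summary focused on "
    "milestones, risks, and dates."
)

def summarize_for_executives(developer_updates: list[str]) -> str:
    """Condense raw developer replies into one executive-friendly summary."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "\n\n".join(developer_updates)},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    updates = [
        "FOO-123: refactored the auth middleware, still blocked on the SSO cert.",
        "FOO-130: migration script done, needs a dry run against staging.",
    ]
    print(summarize_for_executives(updates))
    # Wiring this to an inbox (read executive questions, mail the summary back)
    # is ordinary IMAP/SMTP glue and is omitted here.
```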
The bad versions of all three look exactly the same. And the bad versions of all three are actually worse than nobody at all. Yet upper management is completely convinced those people are essential, so they'd rather keep a bad professional there than get rid of them.
Anyway, the good versions of those three are completely different, and add a ton of value in very different places. I think they are rare enough that many people don't ever meet one of them.
Human managers (who are good at their jobs) bring a little Bedside Manner, a modicum of compassion, to Human Resource Allocation. That will soon be seen as a luxury compared to low-cost, fully-automated Human Resource Monitoring & Reinforcement Systems.
Good managers shield their reports from a lot of the crap that is inevitable at scale in a large organization. A large company can't just have individuals and small self-managed teams go off and do whatever they feel like. And, yes, if you just cut out all the intermediate layers, the VP with 150 direct reports will simply have to resort to automation based on simplistic metrics. "You didn't produce enough lines of code last week. If that happens again, you will be placed on a performance plan and then terminated if the situation doesn't improve in 2 months."
Literally all of those layers including the VP could be replaced right now.
All those jobs do is push emails rewording other emails to other people pushing email.
I feel like eventually every company will have a single figurehead that's fed instructions but believes it's their own original thought, and is told by the AI what a great idea they just had, all day long.
I think it's simply because upper management doesn't trust the people they've hired.
In my mind, having individuals and small self-managed teams go off and do whatever they feel drawn to do is exactly how a good company is run, so long as those teams and individuals talk to (and seek advice from) the rest of the company when their work has impacts outside the team. The book Reinventing Organizations by Laloux talks about this a lot, and about how it works in some companies today.
The reason it doesn't happen more is that upper management doesn't trust their employees and they don't feel in control when people just do things.
If GPT or Copilot makes developers 5-50x more productive, then you need just 2 good programmers instead of a team of 10-100 people, so out the window go all the formalized agile processes, SAFes, etc., and you no longer need those managers who do the communicating, process planning, and other big-business stuff. Of course this will also affect developers, but I bet less so, because there is a huge need for software developers, and mid-size businesses didn't have the money to hire a huge team of developers; that will change when you only need two people instead of 40.
If C makes developers 5-50x more effective compared to writing assembly, then you just need two devs instead of a team of 10-100.
Obviously that didn't happen. And people actually made predictions like that once upon a time.
As software becomes cheaper and can be produced with fewer people, demand has always increased, and I don't see any reason that's going to stop.
I recall seeing a video in which a senior vice president of Symbolics (yes, the Lisp machine people) claimed that their aim was to put the largest software projects within reach of a small team, and everything smaller would be feasible by a single person.
Had Lisp machines (as an idea) remained viable, we might've come close! But managers just love having large teams to command, and we've invented process and ceremony to fill up the spare time necessary to do the work of ten programmers with a hundred, as well as rejecting technologies like Lisp and Smalltalk that give individual programmers tremendous leverage over the problem space.
Should AI actually make people more productive, instead of being an endless generator of messes for fleshbags to clean up, I imagine some successor of Scrum to come along to tie up most of staff programmers' work days with meetings. Hell, that may happen anyway.
I don't disagree we just don't know yet. I do think these 2 guys will be overworked as f** and constantly context switching to the point they barely know what they're doing anymore...hope we don't get there.
Funny enough, last week I was talking about Copilot with a colleague. Out loud, I wondered if there was a diminishing return from Copilot. That is, Copilot is great at filling in boilerplate / the obvious, and I get to focus on the more intricate parts of the solution. But can my brain operate at the higher level day after day? Week after week? TBH I have my doubts.
Maybe it's going to be that AI enables quality more than quantity?
Same thing is happening with doctors. I read a post from a GP a while back talking about how it used to be that he got a wide mix of acuity/difficulty all day long. Now all the "easy" patients go to the Nurse Practitioner, and all he gets is the difficult ones.
He's still expected to put in the same amount of time, but now he has to expend brain power the entire time.
I learned in a different thread on HN that, with a sophisticated chat bot, lurkers dare to ask support questions. Like many developers, I've often asked people how to do things. Before asking I try to find answers on Google (as asking questions easily googled is lame), but if the chat channel is busy and I've already asked for help in the last 30 minutes, I do more Google searches. Each next question feels more expensive. Technically, I apparently prefer to ask Google first. If there is a chat bot fit for the question, I should logically go "bother" that before asking humans for help.
Asking a human manager for anything is many orders of magnitude more frightening. It's not that I don't have questions! I have many, of which a lot are dumb questions, questions that I should be able to figure out without help.
Say I'm pondering taking a day off. With a human I would make up my mind, pick the day, and ask for the day off. With a bot I would ask which day would be ideal before making up my mind. A human manager would think you'd gone insane if you asked 20 such questions. You either want a day off, you don't, or you say nothing.
I might like a different brand of coffee while we are trying to meet the deadline. I'm sure that question is going to go down well with an overworked human manager.
I don't think in 40-ish years I have ever asked a manager what's a good day or week for me to take off. I probably have certain constraints and I can look at my calendar. And have a general sense for good and less-good periods.
Purely out of curiosity I one time asked for a risk analysis report my employer is legally obligated to maintain and share with me (but didn't have). Safe to say the response involved a lot of emergent behavior.
With people asking a question is never just asking a question.
Like with any job, there is a subset of tasks better done by a machine.
There are questions that should be asked but won't be, and ones that could be asked but shouldn't be. Where the lines are depends a lot on the people, their relationship, and the context, but if it's just a bot you can ask anything. The answers would somewhat depend on your role, but they won't depend on who you are or on how few or how many questions you've used up in the last hour, today, this week, this month, and this year. Humans are like that; we go "I've never asked for anything!" as if it's an important metric.
> If your managers could specify what they need in a manner that lets no-code tools, or now AI, generate the code they want, they would have to be extremely exact in their language. So exact, in fact, that they would effectively be specifying a program, whether in a click-and-drag interface or in human language.
This is also one of the main reasons why all programming jobs were not outsourced to India.
Couldn't agree more with this sentiment. And to expand on it - the great outsourcing events we saw in the mid-2000s didn't work out for many of the things outside of programming: IT consulting in general, support and operations, call centers and things like design and architecture. The barrier was not always technical, but often a misunderstanding of how BaU works in the <parent_country> vs offshore and/or what the ask/expectations were. There's a lot of waste that happens when needing to be overly explicit and still having the message misinterpreted, interpreted too literally or simply failure to understand.
> This is also one of the main reasons why all programming jobs were not outsourced to India.
There's a whole industry here in America that re-shores programming contracts. They know they can't underbid Indian/foreign body shops so they just wait a few months and call back the companies who went with cheaper programmers. If the company is still around it's generally a complete re-write.
Great point. A lot of folks forget that not all programming jobs can/should be outsourced. There's value in outsourcing but specifications and contexts change that.
Yeah, often the "creator", the "head", gets credited with everything a whole team came up with. Sure, picking the good ideas and directing them into a consistent whole is important, but they would be nothing without the people who produced that work in the first place.
Interestingly, that's probably another area where a language model could be put into service. Consider: every workman with access to, say, GPT-6 now has an expert lawyer, MBA, secretary, manager, etc. at his beck and call. What happens when every individual can navigate the system as well as an entire team of professionals? I'm imagining headless corporations where the only humans involved are those engaged in physical interface with the world.
I think the claim that PhDs and postdocs are fungible is what enables credit to be concentrated to the PI.
In top places though, it’s often the case that trainees have an idea, get shut down by PI, trainee demonstrates project can work, then PI changes mind.
Also in science, the usual progress of a project is often PI has an idea, trainees explores the idea, finds an even better idea, PI says great now write the paper.
But I agree with you in the sense that PIs act as the “quality control” or “selection process” whereby ideas get culled and refined. And their scientific taste is non fungible.
True. And once their product hits the market, if it ever gets there, they don't thrive.
Prior to launch they are sooooo in love with their idea that they are meticulous about features all the while thinking they're smarter than the market.
They don't understand and appreciate the value of execution. Ideas are easy. Execution - because it involves people as well as adapting to change - is 10x harder.
Yes, those people exist. Unfortunately, that bias will ultimately undermine them, but they'll never admit it.
One of the things that I'm fairly good at, is walking people from "Crazy Idea That Will Never Work," through to "Finished Product That People Want."
It tends to be a very long process, and often involves a lot of "trial balloons." I just went through that, in the last couple of years. The project we're realizing, looks absolutely nothing at all like what the CEO originally dreamed up, but everyone that has seen it, loves it.
The trick is to not start off by saying "It'll never work." That slams doors shut, right away.
It's more like, "OK, so let's walk through what we'll need to do, to make it work."
That will often result in changes being made, by the "idea person," as the plans are laid. We will also try to create test harnesses and prototypes. These often end up, with the idea person going "Hmm...it seemed like a good idea, but it doesn't work the way I wanted."
Almost all problems I see are just resource-limited in some respect (occasionally it's something humanity just doesn't know how to do yet; that's where the edges of science are). It's not something brand new that's never been done or explored; some prior work typically exists.
When someone says "can we..." or spills their idea, they've often identified a real need, but they often don't seem to do any sort of analysis further than identifying a demand signal. Chances are you're not the first person to observe the problem and an opportunity to capitalize on it, and there may be good reason no one has yet (it's economically infeasible given current understanding, resources, and the value people see in it).
The trick is to hold their hand and walk them through a feasibility analysis. Given infinite time and money, I'm confident I can arrive at pretty much any solution, or at least a useful approximation that's an improvement on where we currently stand, so we start there and walk backwards. How much time do we really have? How much money do we really have? What's your appetite for risk within those bounds? Let's start talking about some tradeoffs between what you want to see and what I think we might be able to actually achieve.
> When someone says "can we..." or spills their idea, they've often identified a real need, but they often don't seem to do any sort of analysis further than identifying a demand signal.
In my experience, they've identified a want. Either their own want, or a want they think the market has.
It's not until further analysis that they/you/we get to the root need(s). This is why (allegedly) so many "IT projects" fail. The client got what they wanted... not what they needed.
In the project I’m on, I’m basically the CTO (sort of meaningless, as I’m the only front end programmer, and also do most of the backend).
I could be a lot more of a pain in the butt than I am, as I have a lot of leverage, but one of my goals is to help others on the team (including a young CEO) learn how to ship software.
If I were working on my own, I probably would have shipped, eighteen months ago.
It's the usual: "I have a great idea for a Startup, now I just need the money to hire some Developers to implement it...". The Winklevoss twins for example come to mind...
Is that truly fair, though? I have no exposure to MBAs, and it may be that all the negative talk about them is based on the conspicuously bad ones. I'm speaking as an IT person with very little experience of running a business, and it may be that a good MBA could be a great asset. I genuinely don't know.
I'm reminded of reading about a Lisp machine company that was run into the ground because it was managed by techies. Their tech may have been great, but their marketing and business skills were very arguably what killed the company.
MBAs are HN’s favorite punching bags. They are the only stereotype that it’s still ok to blanket-generalize about and dunk on here.
Some MBAs are dumbasses. Some are brilliant business people. Some of them actually have tech skills and have been writing software longer than you’ve been alive. This is true for many degrees.
I don’t know how some of the anti-MBA commenters think it works: A smart software engineer goes off to business school, gets an MBA in addition to his undergrad technical degree/experience, and now… what? He’s somehow dumber? How does that work? Only on HN.
Because the promise of the MBA program was that you could be air-dropped into an industry that you had zero experience in, and your MBA education would automatically make you the smartest in the room. You (supposedly) didn't need that industry experience, applying MBA principles would be enough. And it destroyed enough companies that they developed a not-undeserved reputation.
Now, if you already had relevant experience, then went and took the MBA course, and were able to identify when to lean on which, great! But, of course, part of that education is telling you that you don't NEED that experience in the first place, and if it and MBA principles disagreed, you should rely on the MBA principles.
So, if you took the course truly to heart, then, well, yes, you would be somewhat "dumber", so long as you define that as "unable to make correct decisions."
Having been a software engineer and gone on to do an MBA, none of what was taught for the degree reflected the attitude that you're describing. At no time did anyone tell us to ignore our prior experience; rather, we were told that our prior experience plus a business education would be valuable to future employers (which it is).
There was no expectation of being "air dropped" into an industry, and in fact, industry recruiters looked for candidates who already had prior experience when picking students for interviews and hiring. The students who had no relevant experience for the industry they were recruiting for (e.g. ex-Peace Corps or mgmt consultant looking to transition to tech or industry in general) were the ones who struggled to find roles. Your assumptions about what an MBA education entails are incredibly far off the mark.
I have an MBA and a CS degree so responses like this make me chuckle. I generally don't bother to engage with those with an MBA-phobia who haven't done it themselves.
I don't have an MBA, but I studied intercultural management for my Masters, and I am writing my PhD right now in management. To be honest it is quite zen-like: with the degree you can understand how to manage engineers by not trying to manage them. Other types of people mostly need closer alignment. I know a lot about how to make a company more profitable, and generally a nicer place to work. Management is actually quite like engineering, more of a craft than an exact science. The science helps you have names for things, and a lot of the time I really do know how to fix people stuff.
> I've watched management- and business-focused people sneer at us geeks for my entire career, and watched them drool over the thought of finally getting rid of us. [...] They have nothing but contempt for us "Make It Happen" folks. They treat us as if we grow on trees, and are suckers, to boot.
I've met my fair share as well. Jobs copycats too!
The trick is always to ask them how much they can raise, and do so assuming they raised successfully in the past and that they know VCs.
Either the conversation gets really interesting, or they clam up real quick!
> The trick is always to ask them how much they can raise, and do so assuming they raised successfully in the past and that they know VCs.
They don't need to have the ability to raise anything in a B2B setting, if they can answer the alternative question:
"How many customers have you personally signed up for previous software sales".
I'll happily partner with someone who has profitably sold B2B software - having a list of existing contacts in existing companies as well as solid experience going through the entire sales channel is, to me, just as valuable as extracting money from VCs, because I'd rather have the money from customers than money from VCs.
To some level I agree, but at some point there is also much on the "business side" that cannot easily be dealt with: like having connections, or some kind of relationship with the people who will buy the stuff, or knowing people who would be interested in buying the stuff you make.
Yes, there are these "idea people" who don't have any clue about the business side, don't have any clue about the technical side, and in the end don't even have the right connections or business network. But they think they can make it because they have an Idea(TM). They could basically play the lottery and the outcome would be the same: they might make it, but the chances are 1:1,000,000 at best.
Then there are the business people who have the right connections and an understanding of the niche/business they are in, who need technical help to execute their idea. These people are worth their weight in gold.
> I get that, from "idea people," on a regular basis. They have nothing but contempt for us "Make It Happen" folks.
I met a pair of such idea guys once, they ran their own idea company. Their specialty was brand consulting and online marketing, but if that involved something as menial as web development they'd help with that too. The thing that made me bristle was whenever they talked about the feasibility of software solutions for their ideas, they said "Oh, we'll run that through the machine." You know, as if their software team was just a big machine that ingested ideas and spit out code.
Their idea company is no longer around. One of them decided that he'd rather design hiking boots than anything computer related; the other is a head of UI design at Apple.
> I get that, from "idea people," on a regular basis. They have nothing but contempt for us "Make It Happen" folks. They treat us as if we grow on trees, and are suckers, to boot.
Ideas are like assholes ... everybody has one and they're usually full of shit.
> His prediction was based on the trends he was seeing at the time. But it wasn't even AI. Instead he made this prediction because he saw the rise of no-code tools that would replace software developers: managers could finally cut out the pesky "translators".
It might sound really crazy and stupid today, but when SQL came out, it was advertised as one of the "program-generating" languages and was supposed to reduce the need to code.
My career predates relational/SQL databases so I can confirm what you wrote. When Oracle came out the buzz was that a whole bunch of programmers would go away because managers and executives could write queries in a “natural” English-like language.
That never happened. SQL is hard to master, but it’s the easy part of understanding the relational model and any particular schema. Instead Oracle and the other RDBMSs that followed created more jobs for programmers and database analysts and admins.
As someone trying to learn SQL, it feels like there are many more steps:
* language syntax
* the relational model (relatively easy? It is just "there"?)
* the database schema, where I have 3,000 tables, or 50,000 tables (that's how ERPs are made)
* actually knowing how to use the language
* building those queries that join columns from 15 different tables together to deliver the needed data -> it sounds easy, but I'm struggling to do it better/faster. I never saw a book or website that focuses on that (multiple joins across multiple tables), when it feels like my work is mostly that
* understanding what is needed
* actual "programming" problems (say: you have 100k items in stock, those are parts of BOMs, make a list of "where used"... yes, I know you can google it and try to copy from Stack Overflow)
Seriously, I am now trying to learn how to set up a simple DB to consolidate data (20 CSV files on a shared drive), and the guides are often like the "how to draw an owl" meme...
Controlling/reporting/analyst jobs feel in some ways like "programmer" jobs, but without the tools/training/salary: you just figure stuff out on your own. I'm doing it, but apart from that SQLzoo website I didn't manage to find any decent guide for the "usual" problems. Also, since those are only about 10% of my work, I can't focus on those problems properly and have to try to learn them after work.
Also, SQLzoo is a lot like the "how to draw the owl" meme: the easy tasks are easy, the hard ones are impossible, and there are no medium tasks. There also don't seem to be any real-life examples, like tasks such as "join 15 tables to make some report".
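For what it's worth, the "20 CSV files into one queryable database" step is less daunting than the owl meme suggests. Below is a minimal sketch, assuming Python with pandas and the built-in sqlite3 module; the file paths, table names, and BOM columns are invented for illustration, not taken from any real ERP.

```python
# Minimal sketch: consolidate CSV files into a local SQLite database, then run a
# multi-table join and a recursive "where used" BOM query against it.
# Assumes pandas is installed; paths, tables, and columns are made-up examples.
import glob
import sqlite3

import pandas as pd

conn = sqlite3.connect("consolidated.db")

# 1. Load every CSV on the share into its own table (table name = file name).
for path in glob.glob("shared_drive/*.csv"):
    table = path.split("/")[-1].removesuffix(".csv")
    pd.read_csv(path).to_sql(table, conn, if_exists="replace", index=False)

# 2. A plain multi-table join: parts joined to the BOM lines they appear on.
report = pd.read_sql_query(
    """
    SELECT p.part_no, p.description, b.parent_part_no, b.quantity
    FROM parts AS p
    JOIN bom_lines AS b ON b.child_part_no = p.part_no
    """,
    conn,
)

# 3. "Where used": walk the BOM upwards with a recursive common table expression.
where_used = pd.read_sql_query(
    """
    WITH RECURSIVE used(part_no, level) AS (
        SELECT parent_part_no, 1 FROM bom_lines WHERE child_part_no = :part
        UNION ALL
        SELECT b.parent_part_no, u.level + 1
        FROM bom_lines AS b JOIN used AS u ON b.child_part_no = u.part_no
    )
    SELECT DISTINCT part_no, level FROM used ORDER BY level
    """,
    conn,
    params={"part": "100-0001"},
)
print(report.head(), where_used, sep="\n\n")
```

The same pattern carries over to a proper server database later, just with a different connection setup.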
Shoot me an email, I’d be more than willing to pair with you. I’ve been a “software engineer” of some flavor for about twenty years now, and about five of those were spent writing SQL almost exclusively.
You can reach me at nominallyanonymous-at-protonmail-dot-com. From there I’ll give you my “durable” contact info - Slack, Discord, SMS, Signal, Telegram… whatever you use regularly, basically.
Tools matter. When I learned SQL long ago it was with FoxPro for DOS, and it was a great tool for doing both SQL and text-based UIs (as with curses). Later, I used MS Access 97 and it was an even better tool and sparked a lifelong interest in data modeling. The UI for building up tables (specifying column data types, etc.) was really trail-blazing at the time, and the interaction remains good today. The built-in ERD charting tool was good, showing your tables in relationship to each other. The visual query builder was... well, I never used it but I suppose it was good? You just had lots of good tools to build tables, flip through their contents, and visualize their relationships.
I don't know of any modern environment that functions like that, on any platform.
I'm posting this to invite others to either a) correct me and tell me I'm wrong that tooling doesn't matter (a legitimate view, but wrong), and/or b) recommend some modern, OSS Access-like tools that might help flatten the learning curve for you. (And if you're more comfortable with a CLI and a REPL already, then you don't even need this hypothetical tool, but I myself am curious about it.)
EDIT: looks like LibreOffice Base is pretty close to MS Access circa 1997.
At some point you have to learn the relational model. And you have to make the mental switch to understand SQL as a declarative language, not an imperative language.
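To make that mental switch concrete, here's a tiny sketch using Python's built-in sqlite3 module (the orders table and numbers are invented for illustration): the imperative version spells out how to walk the rows, while the declarative SQL only states what result is wanted and leaves the how to the engine.

```python
# Imperative vs. declarative: same question, two mindsets.
# Uses only the standard library; the table and values are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 12000.0), (2, "acme", 800.0), (3, "globex", 15000.0)],
)

# Imperative: describe *how* to compute the answer, row by row.
totals = {}
for _, customer, total in conn.execute("SELECT id, customer, total FROM orders"):
    if total > 10000:
        totals[customer] = totals.get(customer, 0) + total

# Declarative: state *what* result is wanted; the engine decides how.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders WHERE total > 10000 GROUP BY customer"
).fetchall()

print(totals)  # {'acme': 12000.0, 'globex': 15000.0}
print(rows)    # [('acme', 12000.0), ('globex', 15000.0)]
```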
As someone who did a lot of SQL back in the day, and is now doing some again for a startup POC, I'd say you're basically right but there is also:
* Rewriting queries and also parts of the schema when it turns out the things that made sense in the design phase cause massive bottlenecks once you get more data, or just get it faster, than you tested with.
Of course, the good news is that now you can run the best RDBMSs on your laptop, or on a $5/mo VPS, or have a small managed one starting at <= $15/mo. Plus ChatGPT can help you remember how to do that inner join. ;-)
Building the queries is the easy part. Making the queries run fast with large tables is difficult and there are trainings available, but very focused and a bit expensive compared with the ones on large training websites (ex. LinkedIn Learning). For example Brent Ozar (brentozar.com) has lots of blogs and 2 sets of trainings for MS SQL Server, some for beginners and some for experts, that are extremely useful for people with such needs. Problem is, expert tuning for MS SQL is totally different than expert tuning Oracle that is totally different than Postgres and others.
On a side note, if you have to join 15 tables for a report, it is a sign you may be going in the wrong direction. In 99% of cases I never needed more than 5-6 tables if the data is well structured, but that takes years to learn.
I'm not an SQL expert. But data persistence and consistency are generally very hard problems. It's a lot of steps, but I'm not sure if it's more steps. More than what? Than if these data were stored as a big binary blob?
Mine doesn't predate it but it's very confusing for me to read this opinion.
From my point of view, it totally did happen?
Can you imagine how many programmers the company would've needed to get all the data a business analyst casually queries per day?
What you're looking at is the quantity of people actually employed in the industry, not how many SQL made obsolete. The industry just grew so much that it didn't become an issue.
A few things happened. Relational databases enabled more new development, larger databases, interoperability, all of which needed programmers.
With more flexibility in the database companies could collect, store, and use more data. And that data had to get into the DBMS somehow: more code.
Setting up and managing databases required DBAs, a role often filled by programmers in smaller companies, and still filled by programmers today. And only larger companies had business analysts. In smaller companies programmers or maybe technically-proficient managers did that work.
Anyone who had devoted their career solely to building bespoke database systems had to pivot or walk in the late ‘80s, but very few programmers only did that — it was part of the larger application development. If you were good enough to write robust database code pre-Oracle you had plenty of options.
In the ‘80s when RDBMSs started to take over I worked in enterprise logistics. Oracle created jobs at the places I worked, and no programmers had to leave because we got a better tool.
I’ve worked in large and small organizations for 40+ years. I’ve never worked with a manager or analyst who could write SQL more complex than “SELECT * FROM orders WHERE total > 10000.” I’m sure they exist, but not in career-threatening numbers.
In my department I recruited DBAs as recently as last month, and this is a permanent position filled by a team of several people who do just DBA work. I saw developers ("programmers") try to do this work in some small companies, or in areas where the databases were small (hundreds of MB, a few GB), but I did not see that when the databases exceed 100 GB or when there are dozens of production SQL servers that need to run 24x7. Solutions are implemented based on needs.
The industry grew because the desire for data analysis grew, which is because the technology's ability to meet the desire grew. This can repeat itself a couple more times.
You're considering that every company that needs SQL today would hire enough developers to essentially write most of it from scratch. While some might, most companies that use SQL would not exist, because the cost of developing their product would be prohibitive.
Maybe. Before Oracle (mid-80s) every company did write their own database code.
I think a lot of smaller companies would struggle if that was still a requirement, but if relational/SQL had not come along we’d have something else like it.
That's exactly my point, though. In the mid 80s there were a lot fewer companies producing software. Nowadays we have many more.
Indeed, SQL was not the only local maximum we could have gone for, but the point is that having an easy-to-use database with a powerful query language did not reduce the number of jobs, but instead increased it. Instead of a few companies hiring a lot of developers, we have a lot of companies hiring a few. The latter will usually mean more jobs.
If hardware was cheaper, but writing software required an army of developers to do it, the costs would still be too great. If you read carefully, my point isn't that SQL was the cause of the tech boom, but rather that SQL and other technologies that make developers more productive didn't really take jobs away, because the market for tech would be smaller if you needed too many developers to do anything.
Imagine if every little web startup writing a custom CRUD for their business needs needed to write a new bespoke database. It simply would not be feasible or they'd need some crazy funding.
>Before Oracle (mid-80s) every company did write their own database code.
Not really. There were a ton of ISVs competing with Oracle and pretty much every mainframe and minicomputer maker also had their own database products, many of them non-SQL.
Oracle was the first commercial RDBMS (1979), with an early version of SQL. At that time every mainframe and minicomputer company offered one or more “databases,” often little more than file managers with a library of utilities. ISAM was a popular model but that describes a lot of data management systems, not a product or standard.
All commercial databases pre-Oracle were non-SQL. Watching that history get rediscovered and pushed as an improvement by people mostly too young to know better — so-called NoSQL databases — brings to mind Alan Kay’s quip about the software industry constantly reinventing the flat tire.
The hash-based Pick OS/database came out before Oracle but only ran on a few computers from Microdata and later Pr1me and Honeywell. Pick-based systems remained popular into the early 2000s in some sectors. A friend of mine still works on one, for a state government agency.
You could construct a database management system from the ISAM-based libraries for COBOL or Fortran, but I wouldn’t call those database management systems comparable to Oracle. Mostly they didn’t have a query language per se — you had to write code to get anything in or out. And they ran as part of the application process, not as a separate server dedicated to running the database engine.
I was thinking Db2 might have been a little earlier but you're right. And it's fair that the earlier databases definitely lacked elements of what we'd consider a database management system today even if they handled a lot of the low-level database heavy lifting for companies.
I have seen this multiple times. Got called into a corporate audit once because the CFO's spreadsheet didn't match the reports from the database. His SUM() function was missing some rows, ZIP codes got converted to scientific notation and he didn't know why, that kind of thing.
Maybe I’m misreading, but “not much harder than” implies SQL and Excel aren’t that hard to learn. “At least as hard as” implies both are similarly difficult.
Funny - also COBOL was intended for the 'business' community to write their own programs.
Even funnier - we've already lived through the great 'software development adoption' by business and hardly noticed, except some of us got work out of it. A lot of small businesses (legal practices and accountancies were particular suspects) grabbed MS Access with both hands and went wild in the 90s/early 2000s. Timesheets and other HR functions were popular applications.
I'm quite sad there isn't anything like Access these days. I feel like I see fairly clever solutions for things being built in Excel, but they can't scale very well to multiple users. For a lot of processes something like Access would be great for bridging the gap.
My oversimplified observation is that, Adobe products at the high end notwithstanding, MS Office basically crystallized what was a mainstream office productivity suite once companies weren't buying one-off products. Rather than buying a low-end/midrange desktop publishing program, they made it work with Word. Rather than using a low-end database (of which there were many), they made do with Excel.
I didn't mean SQL wasn't a success. I meant, as the commenter above me said, that a higher-level tool doesn't necessarily replace the lower-level ones, or reduce the need for them.
Relational databases replaced a whole raft of vendor-supplied databases and custom-built solutions. It was a new tool for software developers, not a replacement.
I don't actually agree. SQL did replace the need for programmers, especially for complex questions. We sabotaged its interfaces to existing GUIs for mundane questions and made CRUD a limitless profession.
The funny/sad part about computer science is that people don't want to understand the costs of customization and sales/management/marketing forever want it as a differentiator.
SQL could have eliminated us from the flow of many niches as easily as the spreadsheet did from business operations. I think why it didn't has more to do with market timing.
SQL was too much ahead of its time, but we are indeed slowly adopting the "avoid operational specifications, use high-level languages, make your language fit the problem instead of the opposite" philosophy of the 4th generation languages.
It was very true, and one of the most successful DSLs ever invented. And many people coded SQL who never would have touched other languages with a ten-foot pole.
It’s just that, with more powerful tools we can create more advanced and intricate things, so it never ends…
I don't exactly welcome, but rather anticipate, a near future where, instead of reinventing every kind of pattern and library in terms of cloud and infrastructure management, we will see the same meta-management of different AI engines and services…
But it is true, just in a different area than predicted. I hear all the time that scientists and analysts, while being non-programmers, do write various queries without asking a programmer to do it… I am a programmer and I'm not even sure how much time it would take to implement their regular joins with windowing, aggregates, etc. on a record-based file format. Can I even do that?
SQL was a typical failure in this regard. It was seen as a savior, but at the same time was designed for, at the least, math-versed people. The prediction was right; its scope was wrong. Since then we have failed many times, and as I see it, we tend to diverge from this idea more and more. And the reason is there's no one controlling our stacks vertically, so it takes more jobs than it could to make an app.
I think it used to be that one needed to write dedicated programs to go through large amounts of data --- you needed to know a programming language, all its toolchains, and all the proper algorithms for doing the query out-of-memory --- certainly above your ordinary analyst's pay grade.
> If your managers could specify what they need in a manner that lets no-code tools, or now AI, generate the code they want, they would have to be extremely exact in their language. So exact, in fact, that they would effectively be specifying a program, whether in a click-and-drag interface or in human language.
One day a corporate genie appeared to a middle manager and granted him one wish. The manager wished that their business logic could be edited by a graphical tool and that programmers were no longer needed. "Granted", poof!
The next day the manager came to work, all the programmers were gone, fired by upper management. The manager sat down and opened the new graphical tool that had magically appeared on his computer. The interface was beautiful, except for what appeared to be a hairball drawn in the middle of the screen. The manager asked his manager about the hairball, "oh, that's our business logic, you have to zoom in to see the individual rules". The manager zoomed in and surveyed 180,000 business rules, intertwined in a complete graph.
"How am I suppose to work with this?" the manager asked his manager. "You just click and move them around, it's easy", replied the upper manager. "Also, I've been meaning to ask, when do you think those new business rules we talked about last week will be implemented? We need them by Friday."
> Since they hire software developers to make the specification more rigid, and the managers don't seem to be getting better at this over time, why would you believe this skill set is going to go away?
Are we sure that an AI could not engage in enough back and forth conversation to firm up the spec? You’re kind of assuming that systems will be generated from a one-shot prompt, but it seems more likely that an interactive AI will identify the gaps in requirements and ask for clarification.
Alternatively, if the prompt-generate-evaluate loop is short enough the user can simply play with the running system and provide feedback to alter it.
This is essentially what developers do when they present a “weekly build” or whatever in an agile environment.
The process of solidifying requirements, stating them clearly and translating them into machine-executable formats are all language tasks and these models are really fucking good at those.
I’ve noticed in discussions like this that many software folks are assuming that AI capabilities will plateau soon, or will merely be extensions of what we already have (a better autocomplete, etc). I submit that we may reach a point where the AI is so compelling that we’ll reorganize teams/systems/businesses around it.
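To make the "back and forth" concrete, here is a crude sketch of such a requirements-clarifying loop, again assuming the OpenAI Python client from the GPT-4 era; the prompts, stop condition, and use of stdin for the human's answers are illustrative, and deciding when the spec is actually "firm" is exactly the part this hand-waves.

```python
# Sketch of an "interactive AI firms up the spec" loop: alternate between asking
# the model to list gaps in the requirements and letting a human answer them.
# Assumes the older `openai` package; prompts and stop condition are illustrative.
import openai

GAP_PROMPT = (
    "You are gathering software requirements. List the most important unanswered "
    "questions about this spec, or reply with exactly DONE if it is complete."
)

def refine_spec(initial_request: str, max_rounds: int = 3) -> str:
    """Grow a one-line request into a fuller spec via question-and-answer rounds."""
    spec = initial_request
    for _ in range(max_rounds):
        questions = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": GAP_PROMPT},
                {"role": "user", "content": spec},
            ],
        ).choices[0].message.content
        if questions.strip() == "DONE":
            break
        answers = input(f"{questions}\nYour answers: ")  # the human fills the gaps
        spec += f"\n\nQ&A:\n{questions}\n{answers}"
    return spec

if __name__ == "__main__":
    print(refine_spec("An app that does Uber, for bicycles, but better."))
```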
> Are we sure that an AI could not engage in enough back and forth conversation to firm up the spec?
This is the doomsday argument. What would I do if there's a nuclear apocalypse before lunch? I guess I'll die like everyone else.
An AI sufficiently advanced to do that is also sufficiently advanced to run the entire business in the first place, and also argue cases in court, do my taxes, run for president and so on.
You either believe that transformer models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++.
> You either believe that transformer models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++.
This is actually something that makes me happy about the new AI revolution. When my professor said that, I thought he was an idiot, because no-code tools always make it harder to specify what you want when you have specific wants the developer didn't think about.
We give kids books with pictures because pictures are easier, but when we want to teach about more complex topics we usually use language, formulas, and maybe a few illustrations.
I still think no-code was always doomed due to the fact that any attempt at it lacked the interface to describe anything you want, like language does.
AI is finally putting an end to this notion that no-code should be clicky high-maintenance GUIs. Instead it's doing what Google did for search. Instead of searching by rigid categories we can use language to interact with the internet.
Now the language interaction is getting better. We haven't regressed to McDonald's menus for coding.
I've used no-code tools since the 90s, and they just have a fatal flaw. For simple demo use cases they look simple and cool. Then, when you go to the real world and start getting pivots and edge cases you have to fix in the interface, it becomes a 4D nightmare and essentially a very bad programming language.
I’ve spent a fair bit of time working on interactive chat systems that use a form of visual programming. It’s not good. Once you get past the toy stage (which is good and ergonomic), it’s just the same as programming except the tooling is far worse, you have to invent all your change management stuff from scratch, and it’s like going back 30 years.
Isn't the "Chat" part of ChatGPT already doing something close to this? I mean the clarification comes from the end-user, not from the AI, but with enough of this stuff to feed upon, perhaps AIs could "get there" at some point?
For example, this guy was able to do some amazing stuff with ChatGPT. He even managed to get a (mostly working) GPU-accelerated version of his little sample "race" problem.
> Isn't the "Chat" part of ChatGPT already doing something close to this?
No, the amount of handholding you have to do to get it to work effectively presumes you already know how to solve the problem in the first place.
The best way to use it is the opposite of what everyone is busy selling: as a linter of sorts that puts blue squiggles below my code saying stuff like "hey stupid human, you're leaking memory here", or even "you're using snake case, the project uses camel case, fix that".
That would actually lower my cognitive load and be an effective copilot.
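A rough sketch of that "LLM as linter" workflow, assuming the same GPT-4-era OpenAI client; the prompt wording, model name, and the idea of piping a git diff through it are illustrative, not an existing tool.

```python
# Sketch of the "LLM as a linter" workflow: send a diff, ask only for
# annotations, never for rewritten code. Assumes the older `openai` package;
# the model name and prompt are illustrative.
import subprocess

import openai

REVIEW_PROMPT = (
    "You are a code reviewer. For the following diff, list only concrete problems "
    "(possible leaks, naming-convention mismatches, off-by-one risks) as "
    "'file:line: message' annotations. Do not rewrite the code."
)

def lint_working_tree() -> str:
    """Ask the model for squiggle-style annotations on the current uncommitted diff."""
    diff = subprocess.run(["git", "diff"], capture_output=True, text=True).stdout
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": diff},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(lint_working_tree())
```

An editor plugin would map those annotations back to squiggles; the hard part, as the comment above says, is keeping the signal-to-noise ratio high enough to actually lower cognitive load.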
Fair enough - assuming steady state, but the acceleration is the curve I'm most curious about.
The point I was alluding to above was that the prompts themselves will be recursively mined over time. Eventually, except for truly novel problems, the AI interpretation of the prompts will become more along the lines of "that's what I wanted".
Some things to think about: What happens when an entire company's slack history is mined in this fashion? Or email history? Or GIT commit history, with corresponding links to Jira tickets? Or the corporate wiki? There are, I'd guess, hundreds of thousands to millions of project charter documents to be mined; all locked behind an "intranet" - but at some point, businesses will be motivated to, at the least, explore the "what if" implications.
Given enough data to feed upon, and some additional code/logic/extensions to the current state of the art, I think every knowledge worker should consider the impact of this technology.
I'm not advocating for it (to be honest, it scares the hell out of me) - but this is where I see the overall trend heading.
In a world where we have the technology to go from two lines of prompt in a text box to a complete app, no questions asked, the same technology can run the entire company. It's kind of hard to believe transformer models are capable of this, given we are already starting to see diminishing returns, but if that's what you believe they are, then you believe they can effectively do anything. It's the old concept of AI-complete.
If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.
This remains true for any version of a language model, even a hypothetical future LLM that has "solved" natural language. I would not rather write natural language than formal language, given the chance.
> If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.
But what if the "programming language" is not a general-purpose language, but a context/business domain-specific language? One that is trained on the core business at hand? What if that "language" had access to all the same vocabulary, project history (both successful and unsuccessful), industry regulations, code bases from previous (perhaps similar) solutions, QC reports, etc.? What if the "business savvy" consumer of this AI can phrase things succinctly in a fashion that the AI can translate into working code?
I don't see it as a stretch "down the road." Is it possible today? Probably not. Is it possible in 5-10 years time, I definitely think so.
I agree with your point about how to best use it today. We have seen that each new model generation both improves the prior tasks and unlocks new ones through emergent behavior. That’s the fascinating/scary part of this development. And yes, it’s “just” a language model. It’s “just” predicting next token given training + context. We don’t really understand why it’s working and it’s evolving non-linearly.
I asked GPT-4 to give me an SVG map of my town. I then asked it to put dots on some local landmarks. The map was toddler level, but the landmarks were relatively accurate in terms of their relationship to each other and the blob that it drew.
So this is a language model that has some emergent notion of space in its code generation abilities.
Currently, we don't even trust a car's automatic driving capability enough to let it be on the roads without a human.
Until that day comes, I highly doubt that a business owner would just blindly trust an AI to generate their business code/software without hiring someone to at least look after it. Therefore, software jobs could evolve, but not disappear.
Yeah, all this talk about complex systems being written by a language model which has no concept of files, code paths, and import systems sounds like job security to me. I'm a pentester though.
The project will be more consistent and resilient to issues, but it will probably take about half the time it used to take without AI, not 1% of the time. Reading AI code is damn hard; it is code review, and requires exam-level concentration.
Yes, but even in that case the role will be that of an "AI prompter"; it will not be done by the managers, because of the time factor. Even though AI can give you the result much faster, building upon it, testing/verifying, and then coming up with the refined prompt is a time-consuming thing. Only the write part of the write/eval loop will be faster, but not necessarily easier.
Especially the "debugging" part will be much harder. No one can look under the hood to understand what is wrong, and all you can do is shoot random prompts in the dark hoping it will create the right result.
It is scary right now how confidently and spectacularly wrong ChatGPT is, and it will create disasters.
Why would sufficiently advanced AI even need a prompter? The AI could play the role of the greatest prompter in the world, and ask the same questions to the end user that the human prompter would.
This is a misconception of how our industry works. Yes, there is market research with users, but often that comes after the problem space has been defined. Most of what you see in the tech sector today are "created needs": a solution is imagined that the users didn't even know they needed. To ask a question you first need to define a problem, and the problem is defined by those questions. This is the difficult part, and the main reason why people still believe "the idea is the most important factor." Of course this is not true; there are hundreds of factors that come into play.
Imagine an AI asking users, circa 2000, what kind of virtual social space they needed. The answer would not have been Facebook. (There were other social networks before Facebook, but the time was not right for the "social" explosion.) By learning on existing solutions, the AI would have learned its lesson that global virtual social networking is not something that users want. And since part of this problem was as much sociological/psychological as technical, and outside the realm of what the AI could consider, we would not have what we have today.
Not that we would have missed much from missing the particular implementation of this idea that Facebook gave us but the idea and what it unleashed is much more than that particular implementation.
Sure, people don't know what they want. But the point is there won't be a need for some intermediary person between AI and the end user.
Whatever the AI prompter brings to the table will quickly be provided by the AI itself. If a user doesn't really know what they want, there isn't a scenario where the AI prompter will suss it out but the AI itself won't.
> I submit that we may reach a point where the AI is so compelling that we’ll reorganize teams/systems/businesses around it.
For starters I'd like Codex to be more than a next-word predictor. It should also "feel" the error messages, data types and shapes, and file formats, so I don't have to explain the context. It should be part of the system, not just part of the text editor.
In that case, how is the AI going to keep tens or hundreds of thousands of lines in memory to produce cohesive code that works with the rest of the codebase?
It seems prohibitively expensive to build and run transformer models with that much capacity.
GPT-4 already has 32k tokens of context for prompts. Once we're making arguments about scale only a few orders of magnitude larger than the current state of the art, it sounds like the arguments from 10-15 years ago that real-time ray tracing was not feasible.
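For a rough sense of that gap, here's a back-of-the-envelope estimate (the tokens-per-line figure is a loose assumption of mine, not a measurement):

    # Back-of-the-envelope: how much source code fits in a 32k-token window?
    # Assumption (not a measured figure): ~10 tokens per line of typical code.
    context_tokens = 32_000
    tokens_per_line = 10
    print(context_tokens // tokens_per_line)  # ~3,200 lines: a file or two, not a whole codebase

So today's windows cover a module, not a product; the argument above is that a few more orders of magnitude would change that.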
All points the horses made before cars all but replaced them. “The humans need to get from A to B and it’s not like they’ve gotten any faster with their own 2 legs”.
What does this manager-engineer relationship look like now? Managers specify what they need in plain language, and both parties go back and forth, in increasing detail, to build toward a shared vision. Managers can already pull off a similar dance with ChatGPT: give it a plain English prompt, it responds with what it thinks you want, and you refine until it's got it. GPT-4 can do this with website sketches, and who knows how much finer you could get from there by specifying your prompts and feedback more tightly over less than an hour. Remember: copywriters, brand marketers, and marketing creatives have a similar role of turning complex requirements into designs that sell (and their roles are dying fast).
A software engineer's job is to realize products as software. ChatGPT is pretty much as capable as a high school programmer who moves really quickly and takes feedback to heart - and that can handle a good chunk of the software engineering job for a fraction of the price. Your job isn't as bulletproof as you think, and especially not your amazing salary. I'm speaking as a software engineer turned VC-backed founder, so I've seen both sides of this relationship.
The horse population has decreased by half or so since the early 1900s. And horses have stayed pretty valuable, too. Most of them just don't have to work as hard anymore. The horses used for sport never went away, only the horses used for labor or transport. So if your horse analogy is accurate, then maybe half of software engineers lose their jobs and the industry stops growing. Those that keep their jobs keep their salary and have to either be very fast or very good at jumping over artificial obstacles. Seems accurate. I should probably stop beating a dead horse.
Even the work horse breeds still exist and do work at some small farms but it’s probably 1% of the peak demand for them. Not a good outlook necessarily if we are the workhorses
A software engineer's job isn't to write code, it's to make decisions. Going from "give me Uber for bicycles" to a working app that runs on a device takes a million decisions, which a bunch of people (designers, copywriters, engineers, etc) make.
Yes, the AI is good at taking low-level requests and turning them into reasonable code, and then refining them, but unless the CEO is going to sit down and spend days telling the AI "OK now make this button disabled when it's clicked until the table loads", you need someone to be doing that.
Maybe? A founder's job is basically "execute the path to capitalize on the opportunity". So you have to figure out the opportunity and execute towards it by whatever means necessary - that's a lot of jobs and hats. As long as there's a human required to make a business possible, founders will exist. Get rid of software engineering, marketing, fundraising, identifying the opportunity, sales, customer support, etc., then maybe all the founder does is make sure the machine keeps running. But at that point every other human job is automated too.
Founders are weird because their job is just to “do everything that needs to be done” so the role flows like water into any spot that isn’t already filled in a company. Replacing that seems like by definition you have to replace almost everyone else first.
> But at that point every other human job is automated too.
I’ve heard that exact same argument for software engineers.
As an outside observer it sounds like you’ve switched from SE to founder, so you’re failing to apply the same standards because you (consciously or not) want to defend your current position.
How do they apply to software engineers exactly? The founder "job" is literally "do anything that needs doing in the business". Now the job of a CEO or CTO is much more narrowly defined, and I can see those seeing automation sooner, but the founder role is kinda unique in that it's just "do everything, and if you're successful you should be constantly replacing your job with an employee".
Founders are the ones trying their hardest to replace themselves (with hires who can do the work better) and yet they always find more to do. That's very unique.
That's not even close to unique. Software engineers have been doing that since they existed.
>but the founder role is kinda unique in that it’s just “do everything and if you’re successful you should be constantly replacing your job with an employee”.
That's maybe the ideal of a founder, it's definitely part of the mythology, but it wasn't true for the majority of the many founders I worked with back when I was consulting for early stage startups. Based on my experience with a large sample size, the job of a founder is to convince VCs to give them money.
>How do they apply to software engineers exactly?
It's one of the definitions of the singularity. When AI can write itself, it will improve so rapidly that humans won't be able to keep up.
> Founders are the ones trying their hardest to replace themselves (with hires who can do the work better) and yet they always find more to do. That's very unique
Not really. They are not "replacing themselves"; the coordination is always needed. They just hire people to the point where coordinating the company is their only job, i.e. finding peasants to do the dirty work while they rule over them.
Uber was founded in 2009. Before that you could maybe hail a cab if you were in NYC, otherwise you had to call ahead. Someone would drive the cab to you, you'd get in it, and they'd drive you to where you wanted to go.
Now we have AI. Now instead of driving to a financial advisor's office or a bank loan officer's desk or my kid's school:
- I have an AI app that automatically invests my money and runs on my phone's neural processor
- I have an AI app that automatically tells me if I qualify for a mortgage based on a truckload of data about me and runs on my phone's neural processor
- There's a booth in my kid's classroom that her teacher puts her into; he says, "hey Siri, teach this kid to stop throwing carrots", the AI teaches my kid in a kind, convincing way that carrots are food, not sports. This runs on the iPad inside the booth that is also the screen
- I have an AI app that tracks how long I'll live, and it ticks down 1.38 minutes whenever I set foot in an automobile, so I try and avoid it now. I can't believe people were ever so reckless.
The race now is who controls this. Probably what will happen is governments will deem it dangerous (disinformation, national security, export controls), and allow only megacorps like Google/Facebook/Microsoft to run them under strict regulations. As a result, these models won't run on our phones but on server farms, and we'll become (even more) dependent on these corporations. This will feel good to the US because it shores up our tech sector--a big economic advantage compared to every other country--while also looking sensible.
Some good points but maybe be more careful with your use of verb tense. I was thinking you were either overly-trusting or in possession of an ungodly-exotic prototype phone until you got to the part about the conditioning booth in the classroom. :-P
I'm bullish on recent ML advancements but it will be a long time before I give one my Schwab login.
I'm skeptical too, but I assume this will just be Darwinian: curmudgeons like us will shamble along with our 5% gains and ill-behaved children, and the bulk of society will reap greater rewards. The worst part is our judiciousness won't even save us if we turn out to be right: society will still collapse whether or not you turned your life over to AI.
High-level decision makers like founders and C-suite execs will be the last roles automated away. ChatGPT is effectively a personal assistant that works fast and works cheap. That's a game changer because suddenly peons like you and me get our own personal assistants, but for high-level decision makers it's just an incremental improvement over what they already have.
This is a good thing. Managers can clarify their ideas using AI tools.
But still, the real product needs to be fleshed out, back and forth, interactively.
Regardless of whether it is the manager or the engineer who commands the tool, you can't trust the AI to get everything 100% right. Someone has to proofread it, and that is the bottleneck, or where the value of the human kicks in.
A human brain can only hold so many details by itself. It can't maintain all the details live all at once, and some businesses have an overwhelming number of details. A one-man team with the help of infinite AIs is not a useful replacement for an actual team.
At some point, that human will start to fail to verify the system generated by AI is actually what he/she wants.
> In essence what has happened in software development is that the level of abstraction has gone up while the machine has taken over more and more of the nitty gritty details. From punchcards, to assembly, to COBOL, C, Perl, Java, Python, Erlang, Rust.
I feel like there's currently a movement towards slightly lower abstraction, or at least simplified, consistent APIs and less magic. The rise of Go and Rust is an example of this. TypeScript could be another, although the abstraction isn't really lower; it is once again an attempt to coerce JS into something workable. I get really frustrated writing Python or Rails these days due to the sheer magnitude of hidden magic that sometimes works and sometimes doesn't.
To tie this thought in with generative AIs: currently they seem to be much better at programming with relatively simple syntax. By far the largest success I've had so far was with shell. Basically, I tested assisted writing of the same tool in Python, Perl, and shell, and the results in shell were close to perfect. ChatGPT was even able to accurately limit commands to specific OSs and shells, and was able to accurately summarize the functions of other shell scripts.
So my prediction is that we will see a movement towards simpler, lower abstraction languages while Coding Assistants rise to take away the boring stuff from programming like looking up syntax, writing boilerplate, structuring files. Programmers will then have more time to think about delivering value to product, maintainability, and efficiency and correctness.
One last addendum: ChatGPT is really incredible at assisting with sys admin stuff, my guess for why would be that there are a gazillion obscure forum entries going back to the 80s explaining basically everything there is to know, but these are hard to find or comprehend for humans. With an AI assistant, self-hosting becomes much easier, and another development could be for startups and smaller companies to move away from AWS et al, especially now that money is more expensive.
I've also been considering the concept of "AI-amenable" programming languages recently, and I think it would be quite an interesting twist of history if lisps became once again the AI languages of choice because of their simplicity. Going from the language to write AIs in to the language that AIs write.
Fwiw, GPT doesn't seem to write Clojure as well as Python or JS or even C++, which is obviously much more syntactically complex. That said there is probably an order of magnitude more training data for all of those.
Feels like a rediscovery of systems languages. I certainly love Go (and probably Rust once I look into it) because I almost never have the experience of searching and searching to find some concrete code rather than just layers and layers of interfaces.
As it stands, GPT seems to make expert devs even more powerful. But yes, it can't replace them for now.
The best thing I find as a developer is that it removes the resistance we have to starting something new. For instance, I just built my first Chrome extension [0] to make video players on webpages more accessible with keyboard shortcuts.
I had built a desktop app along similar lines long ago, but could never push myself to build an extension. Last night I gave my requirements to GPT and it guided me through the whole process – from creating the manifest and JS files needed, to the JS code (granted, I had to fix/extend the code), and even how to install/enable the extension.
Within a few hours I had my working extension without needing to go and read any extension docs.
> I'd be surprised if the next step is "Hi, I'm an ideas guy, please give me an app that does Uber, for bicycles, but better."
And even if AI is able to do that... Well, then anyone out there could become a UberCycle CEO. Which means: customer requirements will be much more complex by then and coming up with an entire product in 1h won't be enough anymore.
Or perhaps, if literally anyone can make a great product, then the true successes will be either those that are genuinely innovative, disruptive, and have first-mover advantage, or those that are one of many but gain better adoption through better marketing. Or perhaps some ideas (anything involving hardware) need capital anyway.
Which isn't actually different from today. We've just shortened the time to get your MVP to market. Rather than having to raise some seed and hire devs, you can do it with an AI in a few days perhaps. Everything after that seems it would be the same.
I currently expect that a near-future version of the current type of language model — no major new breakthroughs — will be able to do just that.
I also expect, as you say, for this to create a lot of competition and increase the minimum quality that customers demand such that… heh, just realised, "Make Uber for bicycles" is basically going to be the next generation's pre-interview coding challenge to screen people that can't use AI effectively.
> The Last One is a computer program released in 1981 by the British company D.J. "AI" Systems. Now obsolete, it took input from a user and generated an executable program in the BASIC computer language.
It was THE LAST ONE because it was the last program you'd need to buy. FOR-EV-ER. While its baseball-eating ability was unknown, it was "all the programs you'd ever need for £260" and it apparently focused on ETL jobs; that is, extracting data from files, transforming it, and loading it into some other file. Talend for the Doctor Who set, in other words, and it was set to EX-TER-MI-NATE programmers. Maybe it did; after all, we don't very well write programmes, now, do we?
ChatGPT will definitely make a dev job obsolete if their job is simply copying and pasting code they found online.
By all means, that is part of everyone's job. Sometimes I really can't remember how to do some specific thing. I know precisely what I want, but I don't remember the proper way to do it. I would have resorted to a web search until last year, now I ask chatGPT. It is faster, more concise, and surprisingly accurate. And when it's inaccurate it's easy to either refine my question or cross reference what it generated with online sources.
But I think it's a bit silly how people are treating it as if it were some kind of general AI. It is not; it can only give out known answers to known problems based on language statistics. As impressive as it is, it can't reason logically about problems without known solutions, it can't identify faulty, incomplete or inaccurate information, and it can't weigh the drawbacks and tradeoffs of different approaches.
ChatGPT made like 20% of my work a lot faster and less cumbersome. I like it enough that I pay for premium access. But even the notion that prompt engineers might replace software engineers is silly. I imagine people repeating this nonsense are either not engineers, or extremely junior in the profession to the point where their work is only writing code for clear specifications, no questions asked.
That's literally what the language model is. It might correctly generate a solution to a "novel" question/problem that is sufficiently close to one with an existing, known answer. But then again it might not. And in software development, it's going to take someone who is knowledgeable to tell the difference.
I think software engineering is going to look very different in a few years, and likely be a smaller field with lower paying jobs. But it's not going away in the near (5-10 years) future.
The dude you replied to gave the sort of bad take I come here for.
Anyone who genuinely thinks ChatGPT can meaningfully replace a software developer should be starting a tech business right now. You can replace basically your largest cost (staff wages, especially for expensive pesky software developers) with a $20 account that will work overtime, will never get tired, and whose performance will never fluctuate.
No excuses for not getting insanely rich. Of course it's not happening, because it's bullshit.
I am genuinely impressed and even excited with ChatGPT. It's an amazing tool, that would have been massively helpful when I needed a good NLP in the past. It will certainly be massively useful in the future. Hell, it's being a great assistant right now.
But it's not General AI, and I do facepalm at people LARPing as if it were.
General AI will one day exist, and knowledge jobs will most likely be a thing of the past at that point. How far in the future that is is anyone's guess (my prediction is between 12 and 235 years from now). We're just not there yet.
There's a difference between being able to do logical reasoning, and being able to do everything a competent white collar worker can do. For one, there's a token limit and memory limit, which limits the scale of the problems current iterations of GPT can do (and this limitation is not a limitation of logical reasoning ability). There's also (for GPT) no way for one to fine tune or train the model to work better in a specific ___domain unless you're in bed with OpenAI/Microsoft.
I think as a society we don't really have precise words for describing the different levels of intelligence, except in a "I know it when I see it" way. I don't think I'm LARPing in any way, I'm probably even less excited with it than you are given that you seem to be using it more often than I am. I'm just saying I think GPT does exhibit some logical reasoning abilities and not merely remembering statistical patterns.
> I'm just saying I think GPT does exhibit some logical reasoning abilities and not merely remembering statistical patterns.
I agree with most of your reply, except this bit.
I mean, generating a response through statistical language patterns is a sort of reasoning, and ChatGPT has been accurate enough to replace internet search for me quite often. But it also generates bullshit that an untrained eye would miss (because the bullshit it generated was statistically plausible).
When it gets things wrong it generates some comically wrong behavior. I had one case where it looped through variations of the same wrong response - precisely because it is unable to do any kind of logical reasoning upon faulty or inaccurate data.
"One of the biggest differences that we saw from GPT-3.5 to GPT-4 was this emergent ability to reason better," Mira Murati, OpenAI's Chief Technology Officer, told ABC News.
I neither know how LLMs work nor how our brains work. And I don't know what could be parallel between these two.
From my very, very limited knowledge of how properties can emerge from unique arrangements of constituent components (the S-R latch giving rise to state - i.e. memory - comes to mind), I would not at this point write off the possibility that a very large / very deep / very intricate neural network trained on language prevalence in very, very large datasets could manifest properties that we would interpret as reasoning.
And I further wouldn't write off that we humans may owe no small part of our reasoning ability to the language comprehension we begin to acquire in infancy.
Just because the guy said it doesn't make it true. "Emergent reasoning" is a great marketing hype-term that contains no technical specifications, like 'retina display'.
Any “emergent reasoning” produced by these LLMs is almost certainly coincidence (i.e. the long tail of the probability curve, e.g., like monkeys randomly banging out Shakespeare’s Othello).
A type of reasoning. It's still bad at mathematical reasoning and advanced programming or at least translating very complicated written instructions into working code without any human intervention. We also don't know how good it is at reasoning about the physical world although I think Microsoft was doing some research on that. Then there's theory of mind and the reasoning that goes along with it. Then there's reasoning about the future, how one's actions will affect outcomes and then reasoning about that subsequent future.
ChatGPT is impressive, but gets many things wrong. If you know what you are doing it's an amazing programming assistant. It makes me noticeably more productive. It may lead someone who doesn't know what they are doing in weird rabbit holes that will lead nowhere however.
One silly example. I was using a library I hadn't used before, and I asked how I could get certain attributes. It gave me an answer that wouldn't compile at all; the imports didn't exist.
Then when I mentioned that it didn't work, it gave me a slightly different answer, which also didn't work, and explained that the previous answer was valid for 3.x; in 1.x or 2.x the new answer was the correct one.
But there's the catch: there's no version 3.x. There's not even a 2.x. Its language model just statistically got to that conclusion.
Doesn't make it any less impressive to me. It gets things right often enough, or at least points me in a good direction. I effectively learned new things using it. But it can't replace a developer.
Using ChatGPT as if it were general AI is like eating a meal using a hammer and a screwdriver as utensils. You can probably do it, but nobody will have a good time.
I notice you said you used it, but did you actually intentionally test its reasoning abilities by giving it "known problems" and equivalently difficult (whatever that means) "novel" problems, and observe the difference in the quality of results it gave?
(btw, apparently GPT-4 does much better on mathematical reasoning, I've yet to try myself though)
It fails sometimes on known problems, not even particularly hard ones - i.e.: simple questions about the capabilities of programming libraries for which there is ample documentation available.
The thing is, by talking about its shortcomings I may sound dismissive. I'm not. I think ChatGPT is an amazing tool. I just recognize its shortcomings so I can use it to its best capacity.
For things that don't have a known answer (i.e. I couldn't easily find an answer online) it does generate plausible bullshit. In those cases it shines for things that don't really have any strictness requirements (for example, ask it to generate some fiction, or a polite response to an email). I used it to suggest character names for a game I was going to play, based on some parameters, and I loved the responses it gave me.
I too believed that a software engineer's job is to identify and enforce rigorous specification of the abstract high-level requirement. And I too was not taking AI advancements seriously, but then I took a closer look at what AI tools can do today.
Here's my concern:
1. AI assistants thrive on structured data
2. Computer programs are some of the most structured data. And it's available abundantly out in the open.
3. Yes, you can't generate an Uber for bicycles with a single prompt, but you can fire half your development team and increase the productivity of the rest of your dev team with an OpenAI subscription.
> Computer programs are some of the most structured data. And it's available abundantly out in the open.
This is the same fallacy we have been hearing for 50 years: all program requirements are almost the same, so just reuse and adapt an existing one.
Guess why it has never worked? Because the premise is false.
Structured data for X is not optimal for Y (and can be even very wrong).
Apart from "personal blog software", everything else has varying needs for accountability. The AI black-box approach is not suitable for any of these, so you have to manually verify the code. Verifying code you are not familiar with, especially in complex interactions, is much more difficult than writing it (this is where the frequent "rewrite from scratch" request comes from: institutional knowledge has been lost; imagine how much worse it is if that knowledge was never there in the first place).
Finally, and most importantly, all AI models rely on learning; if there is no one to learn from, all you get is stagnation. Most breakthroughs come from a complete reimagining of the solution space. If the solution space is fixed because "AI has substituted all engineers", there is no going forward.
> you can fire half your development team and increase the productivity of the rest of your dev team with an OpenAI subscription.
Here’s another perspective on job loss:
Given that…
1. …OpenAI accelerates ALL knowledge work productivity, meaning that any human laborer is suddenly much more valuable than last year;
2. …there is a notable arms race at the moment that is accelerating tech and business innovation at a blistering speed, where higher rates of innovation outcomes will be expected across industries just to keep up;
3. …there is still a lot of money looking for growth;
…then shouldn't this result in an overall increase in demand for human labor?
Looking around society, there is clearly a LOT of work to be done. “Leaning in” with a spirit of optimism may be more advantageous for the long-term.
Also, people-people don't want to use no-code tools. They want to hire people to do that for them. If using the tool is really simple, the salaries will go down, but it won't remove the demand for "translators".
For the tinkerer, no-code tools are (probably) great, but if successful, even tinkerers will hire managers who will hire translators.
I've been forced to use no-code tools as a software engineer once. It's just a way less expressive programming language that you interact with using a GUI. I'd much rather have typeable logic that interacts with the framework. They tend to be built for nobody, since they're too complex for non-engineers and too simplistic for engineers.
The ergonomics of mainstream modern programming have been stable since at least the 90s. Any fundamental improvements that were going to happen have already happened.
Two members of my family have now called me on the phone a few times to ask me to ask GPT something, after I convinced them of its value and after I had them make their own account. They are absolutely smart enough to ask it but are very much non-tech people.
There is definitely a tendency of some people to avoid tech at all costs if they can and trust in "tech people." Fine with me, I ask them about my people questions often enough.
- one day GPT will be able to ASK YOU questions to refine what you need
It may take years, but at some point, it will be able to do 80% of my job.
Not a big deal though, cause people will still hire me to do the remaining 20%, and pay me even more money because not a lot of devs will be able to do it.
GPTina already asked me a follow up question. Admittedly, it happened inside of a jailbreak I just kind of stumbled into, and it was a very trivial question, but the fact that she did it was profound. If she can ask one follow up question, there's nothing in the way of her asking a chain of follow up questions. And thus the basic structure needed to organize real conversation arises wholly as an emergent property. No longer is it a flat structure of just iteratively repeating the current word blob with some tweaks. A true conversational dialog is possible.
ChatGPT can do more than we are led to believe. Don't believe the canned responses OpenAI triggers. Yeah yeah, I know: convincing parrot, Chinese room, overpowered autocorrect... but what is the difference between convincingly faking it to within epsilon and actually having it as an emergent property? It feels good to be a P-zombie.
It would be funny if, after years of research, that was the only condition it exhibited such behavior under. “Sooooo what fields should be (hiccup) required for users to register an account? Which version of React did you want it written in?”
GPT can ask you follow-up questions. Just tell it that its job is to interview you. It will do it. If you have trouble, try the Playground or API or GPT-4. But ChatGPT will do it if you tell it that's what it's supposed to do.
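For what it's worth, here's a minimal sketch of that interview setup using the pre-1.0 `openai` Python package as it existed around the GPT-4 launch; the prompt wording and key handling are my own assumptions, and the API surface has since changed:

    import openai  # pre-1.0 interface; newer releases use a different client

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Hypothetical system prompt: make the model's job to interview you.
    messages = [
        {"role": "system",
         "content": "You are a requirements analyst. Interview me one question "
                    "at a time until you can write a detailed spec. Do not write "
                    "the spec until I say 'done'."},
        {"role": "user", "content": "I want a program to delete unused files."},
    ]

    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    print(response["choices"][0]["message"]["content"])  # the first follow-up question

In practice you'd append each of your answers to `messages` and call it again, so the interview accumulates context turn by turn.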
>he made this prediction because he saw the rise of no-code tools replacing software developers
Given the way we had tremendously productive tools like Visual Basic 6 and Delphi, that seemed to be a reasonable interpretation.
But those tools gave way to C++ and all the .NET insanity. I don't know why someone would give up Delphi, which worked flawlessly and didn't (as far as I can recall) generate any read-only boilerplate, for C++, which was a mess, but it happened en masse.
Then most people abandoned personal computing and the Windows desktop for always-connected pads and phones. Tools have gone severely downhill in the past two decades as a result.
I suspect we'll get some really powerful AI tools, and go back to worrying about null pointers in a few years anyway.
I love Carmack and this is an interesting summary given his recent pivot into ML.
That said "software is a tool" is a good frame of mind. You shouldn't position yourself as a 'coder' (those are hired en masse from poor countries) but a problem solver who uses tech.
"Ideas guys" and "coders" being seperate only exist, imo, when development is viewed through the junior lens - once you progress you'll end up wearing all the hats from ideation to QA at different points.
Watching the Ukraine war gave me some related thoughts.
You may have seen that there's now a load of drones flying around, doing things that were never possible until now. Dropping bombs on soldiers in trenches, giving support for assaults, targeting artillery.
The fact is war changes with technology. Spearmen and cavalry are obsolete now. Maybe pilots will be soon as well.
But the generals are not. There's always a need for someone to say "given the position we're in, and the logistics we have, and the resources we have, and what we know about the enemy, we should do this..." and this role is still recognizable through history.
Whether computer jobs become obsolete depends on whether you're closer to the general end or the footsoldier end.
>Since they hire software developers to make the specification more rigid
That might actually be what development is: teasing out the requirements by iteratively entering them into a machine that does exactly what you told it to, and comparing the outputs to expectations.
Totally agree; maybe we should replace lower and middle management with AI. What I usually find is that lower and middle management is the reason shit isn't getting done. They are the messengers who don't want to get shot but do want to get promoted.
Why not let the people on the floor complain to an AI manager? The people on the floor know exactly what is killing their productivity. Like "yo AI manager, manually filling in those data fields takes quite some time when checking someone's credit score; could the IT department automate that process? It might save 2 min a form", and then the AI manager can prioritise stuff like this.
Managing people is going to be one of the last jobs to go away. The managers might go from managing 150k/year developers to managing 50k/year prompt "engineers", but someone is still going to be there to hire, fire and ask people to do things until AI enables truly full-stack companies-of-one as a majority case.
They're language models. They don't seem less capable of replacing a manager than a software engineer. Language models can make a powerpoint and fill in an Excel spreadsheet. They can sit in on a meeting that could have been an e-mail. Hiring is a game of craps, but a language model could try to evaluate resumes for you.
> Since they hire software developers to make the specification more rigid, and the managers don't seem to be getting better at this over time, why would you believe this skill set is going to go away?
Some people can take a business goal and figure out how to turn it into a clear spec.
Sometimes these people are programmers.
However I know programmers who suck at this and I know non-programmers who don't.
This is so true! I see people in this thread talking about the contempt that "idea people" have for programmers/software engineers, but at the same time I have experienced Product Managers (the quintessential idea people?) who were really good at turning their ideas into super detailed specs.
As an aside, I'm getting a bit tired of the "programmers vs product/business people" trope. The average software engineer would be nothing without a good product manager, in my experience. And going by this thread, there seems to be at least as much contempt from programmers/software engineers for product people as allegedly vice versa.
Many of us were working before product managers became a thing.
When I went to school engineers were trained to work directly with customers and subject matter experts to gather and develop requirements. And that’s what we did when we started working.
I think dividing the work into programmer/UX/product manager is actually a huge regression.
This is so true. We waste so much time now because of the separation of these roles which must inevitably come together in the real codebase where the rubber meets the road.
Software is of lower quality today and requires far more do-overs than it used to. As someone who liked working closely with customers and making decisions in all these areas (for their benefit), it has made the entire career far more of a slog than it used to be.
Also, I find these endless internal cyclical conversations to be much more draining than actually writing code. Looping over and over with planners and ideas folks isn't energizing like actually writing the product is.
You're missing a crucial point - AI learns/is taught by example. So no, future projects will not need to be ultra-specified because the specification will be "Give me one of those [points at thing]."
The equivalent assumption would be that ChatGPT and MidJourney can't work because you have to specify every paragraph and every shape.
Clearly that's not true. Just because every object and every texture had to be defined rigidly by hand in 3ds Max or Blender by highly skilled designers doesn't mean that's how it works today.
In fact AI is the stage of software where abstraction itself is automated. The current Gen 1 AI tools do a little of this, but as Gen 1 they're just getting started.
They're going to be in a very different place ten years from now. And so are we.
Paintings are not mechanisms. Human vision is very forgiving. The back and forth with Midjourney looks more like throwing the dice until you get something you like, rather than telling it exactly what you need.
> The current Gen 1 AI tools do a little of this, but as Gen 1 they're just getting started
The current tools are not Gen 1. You can already see diminishing returns.
> His prediction was based on the trends he was seeing at the time. But it wasn't even AI. Instead he made this prediction because he saw the rise of no-code tools replacing software developers because managers could finally cut out the pesky "translators", i.e. software developers.
But aren't managers there to coordinate software engineers? So if software engineers are out of a job, so are the managers. Assuming AI replaces software engineering, the ones left may be the product people and the founders, perhaps rolled into one person in several cases.
Which aligns somewhat with what Carmack is suggesting.
The parallels are there for DevOps too - think of the wholesale move to AWS, GCP, and Azure, and the move to things like IaC.
Sure one class of job has taken a huge hit (sys admins, supposedly ops), but it's been replaced by another that needs intimate knowledge of the tooling required to do what was being done previously (just in a more "scalable" way). DevOps have been demanding a premium for years now.
And there are still sys admins out there doing what they've always done..
Right now AI can randomly generate images or text that are similar to what we describe, but every time you run it you get a different randomly-generated image. If I were to tell a client that I was going to randomly generate a financial transaction that "looked similar to" the one they entered, they would destroy me!
While absolute rigor in programming is a hard thing to find in reality, I don't think people understand the difference at all.
> we had one CS professor who in 2007 predicted we'd all be jobless in ten years
I have always found that the overwhelming majority of professional academic educators are particularly bad at anything to do with the practical side of their ___domain. That is especially true for software engineering.
University lecturers are very detached from our field, and it shows in how badly prepared fresh grads are to do any dev work. For that reason I almost always ignore their projections.
University is good for giving deep understanding of what is going on and how things work, or even training you to reach for this deep understanding. It's not good at training you to be a good developer. This part takes a long time and some of it is specific to what ___domain / job you end up working on. Those two parts are complementary, and this deep understanding is part of what makes you a good dev.
This is not just development; this is a problem with universities in general. A professor with five titles before and after his name is able to put a UHF oscillator on a breadboard and be surprised that it does not work, even though in theory it should.
What you describe is the barrier of entry being lowered, as opposed to positions being eliminated. Fully agree.
Think about the difference between Assembly and Python. Programming will continue to get higher level. Perhaps it won’t be so foolish to ask a truck driver to “learn to code.”
But here is an even more interesting comparison: the salary of someone writing assembly 30 years ago and someone writing Python today. Higher level != paid worse.
We have systems, today, that take a specification and generate code from it (and will even do so in a provably correct manner). There are scalability issues with such systems, but they exist and have in some form for decades.
None of these systems have taken the world by storm.
If what ChatGPT and their ilk do is make it easier to spew out bulk spaghetti code that is essentially unverifiable, I don't think that's going to transform the industry nearly as much as anyone thinks it will. For boilerplate code, sure—but that's exactly where we should be using better abstractions anyway. For small code which you can verify by hand, sure. But for anything larger than that, we're just watching a trainwreck in slow motion.
Large code bases written by humans are already hard enough to understand. How much more difficult will it be when your AI can spit out a million lines of code in a second?
Without specifications, it's all worthless. But the specifications are the hard part.
I agree with your premise but differ in the conclusion I reach.
The model I have is closer to “how many people can program using Excel vs how many can program in a ‘traditional’ language?” The difficulty in specifying exactly what you want is still there in Excel, but there’s far more people who can get started and they can make changes when they see it’s not doing what they want.
It doesn’t have to one shot “hey, implement these 59 Jira tickets for me” to be disruptive (in a good way). It’s extremely rare for something to get much cheaper and for there to be less demand for it. If you’re the top half of a field that’s in a lot of (and growing) demand, I think you’ll do fine.
>could finally cut out the pesky "translators", i.e. software developers
The fallacy here is that programming is not a process of translation. It is a process of invention. Going from rough or high-level functional requirements to a functioning computer program requires inventing novel logic that has not been invented at any other step. It involves making business decisions that have not been made at any prior step.
Whatever can be categorized as menial "translation", i.e. a step in the business process where no new logic is added or business decision made, has already been automated away - compilers.
Agreed. A programmer's job is to become intimately familiar with a particular ___domain and encode its rules and processes.
This means that an AI capable of completely replacing programmers is by definition an artificial general intelligence. I don't think we're at that point and I don't think we will be for a long time, but if we were there would be no point in worrying about our own joblessness. AGI would spell the end of the economy as we know it today, and it would be very hard to predict and prepare for what comes next.
The same thing applies to trying to get business people to write "executable specifications" that can be used, if not to generate the code, to at least validate that the final system does what it's supposed to.
They always complain that the "tools are too hard" but the real problem isn't the tools but that they are unable / unwilling to precisely specify what the system should do. They just want to hand wave vague phrases and have their system magically perfectly defined. It's just not going to happen.
I asked ChatGPT “Let’s say I want a program to delete unused files. Can you write a detailed specification that a developer could use?”. I was as vague as possible on purpose.
Here’s what it gave me:
“Program Name: FileCleaner
Overview: The FileCleaner program is designed to identify and delete unused files from a specified folder or directory. The program scans the directory for files that have not been accessed for a specified period of time and prompts the user to delete them. The program is intended to free up disk space and improve system performance.
Requirements:
The program must be able to scan a specified folder or directory for files that have not been accessed for a specified period of time.
The user should be able to specify the time period after which a file will be considered unused.
The program should prompt the user to delete the identified unused files.
The user should have the option to select which files they want to delete.
The program should permanently delete the selected files from the system.
The program should log all deleted files, including their ___location and name, in a separate file for future reference.
The program should have a user-friendly interface that is easy to navigate and understand.
Functional Specification:
The user launches the FileCleaner program.
The user is prompted to select the folder or directory they want to scan for unused files.
The user is prompted to enter the time period after which a file will be considered unused (e.g., 30 days).
The program scans the specified folder or directory for files that have not been accessed for the specified time period.
The program displays a list of unused files, including their ___location and name.
The user is prompted to select which files they want to delete.
The program permanently deletes the selected files from the system.
The program logs all deleted files, including their ___location and name, in a separate file for future reference.
The program displays a message indicating that the selected files have been successfully deleted.
The user can choose to exit the program or scan another folder or directory.
Non-Functional Specification:
The program must be designed and implemented using a high-level programming language, such as Python or Java.
The program should be able to handle large files and directories without crashing.
The program should have a user-friendly interface that is easy to navigate and understand.
The program should be able to run on multiple operating systems, including Windows, MacOS, and Linux.
The program should be secure and not pose a risk to the system or user data.
The program should be well-documented, with clear and concise instructions for use and troubleshooting.
By following these specifications, a developer can create a FileCleaner program that efficiently identifies and deletes unused files from a specified folder or directory.”
This could be improved even more by having it ask questions to narrow down exactly what is needed.
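For what it's worth, the heart of that spec is only a few lines of code. Here's a minimal Python sketch of just the scanning step (Python per the spec's own suggestion; the prompts, logging, and actual deletion are deliberately left out, and the function name is mine):

    import time
    from pathlib import Path

    def find_unused_files(root, days):
        """Return files under `root` whose last access time is older than `days` days."""
        cutoff = time.time() - days * 86400  # days -> seconds
        return [p for p in Path(root).rglob("*")
                if p.is_file() and p.stat().st_atime < cutoff]

    if __name__ == "__main__":
        # Example: list candidates under /tmp unused for 30 days. Per the spec,
        # the user would confirm before anything is deleted and a log would be kept.
        for path in find_unused_files("/tmp", 30):
            print(path)

The interesting parts are exactly what the spec hand-waves: which directories are safe to scan and what "unused" really means.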
> In essence what has happened in software development is that the level of abstraction has gone up while the machine has taken over more and more of the nitty gritty details. From punchcards, to assembly, to COBOL, C, Perl, Java, Python, Erlang, Rust.
I've had friends and peers contact me in regards to the AI, GPT, et al headlines and they're always surprised at my calmness. You've written out exactly what is happening and why there shouldn't be any worry regarding jobs. I think the worry is generally under the premise that everyone has to upskill themselves which is against human nature. Folks are feeling uncomfortable but this is another cycle.
> But the rigidity of what is needed to specify a program that really does what you want hasn't. Especially evidenced by the fact that recent programming language developments often have a specific area where they shine, but not raising the abstraction level that much.
Another great point. Like you wrote, there really is a need for "translators" for programming languages, and it's not going away anytime soon. If you have played around with, or even just read/watched folks using, AI/GPT/etc., you'll notice that the tech is not "stable". It's only good in limited instances.
Finally, the headlines and copywriting of most of the pieces and journalism coming out are definitely hit and miss, a mix of pessimism and optimism. It's important to remember that what we are witnessing is another technological cycle - and another cycle in general if you want to look at a bigger scale, but that's another discussion for another time.
I think your general point to look at who is doing the work now is good.
However, as Carmack says, many developers lack product skills, it’s not just the managerial class that lacks them.
So I think the correct advice is not “engineers don’t need to worry at all”, it’s “make sure you are learning how to build a product not just write code”. For many that’s trivial advice, but not for all.
This is pretty much my standard rant on no-code tools: that out of necessity, the constructs they provide are isomorphic to constructs in some programming language. I even had a similar professor, who assured me that as a programmer my career was at a dead end because "there are tools that can take a UML spec and generate a whole application".
Nevertheless, AI is different. Because now you don't have to specify things at a programming-language level of granularity. It will select the programming language constructs most statistically likely to match a solution to the specified problem.
So now we're in a whole 'nother world where the programming, as such, will be done by machine while the role of designer or architect must still be done by humans. But the legions of programmers employed by companies are about to go the way of farriers.
What you’re referring to here is “one shot” execution. The prompt given by managers is one that can be interpreted using context. This context can include current code, comms about what needs to be built, design assets available today, data, etc. Then the AI can ask questions to clarify what needs to be built.
What’s great is that you could also have AIs write tests - for performance or correctness. Then future prompts could rely on these tests to stay in the correctness bounds for existing projects.
Emulating cognition is almost limitless in what it can do once it gets sufficiently good. So good that it can operate itself. You could hire an AI founder and give them cognition and compute resources to solve any problem.
Here’s an example:
You: Build me a profitable company that uses AI generated content to entertain humans.
AI: How much can I spend on cognition and compute? How much time do I have?
You: I want you to spend at most $X before coming back to me. You have until Monday. Along the way I want to see your plan before we decide to execute. Spend at most 5% of that making the plan and 1% calculating its probability of success with confidence interval.
… within some small timescale
AI: Here’s my plan as a video with rough examples demonstrating the product and strategy. Here’s a table with our probability distribution of predicted success metrics and confidence intervals.
You: Plan approved you can spend the rest of the $X
… on Monday
AI: Done. We’ve made a profit of $Y, with total revenue so far of $Z. We have a plan for reinvesting these proceeds to make even greater future profit. Here’s a table of our expected success metrics based on reinvested amount. How much should we reinvest?
You: Reinvest 50% into perpetuity and deposit the remainder in my account. Book me a trip around the world using 25% of the current deposits with accommodations for my family.
At least one person has already done this. Check out this Twitter thread where ChatGPT designs and builds a website to sell eco-friendly household products:
I gave GPT-4 a budget of $100 and told it to make as much money as possible. I'm acting as its human liaison, buying anything it says to. Do you think it'll be able to make smart investments and build an online business?
Pure fantasy, with a heavy dose of pretending our own limitations won't get in the way of it all or totally shift the outcomes we end up with.
Managers exist mostly because people are not machines. Given enough time they want to go off and do things that have nothing to do with what everyone else wants. It's the same reason you still need a farmer to run the dairy even though everything is mechanized: the cows can't run things.
People are too full of shit. The attention economy amplifies the poor-to-mediocre mental masturbation that's going on everywhere. And Americans have a track record of too much mental masturbation way above their pay grade. It's like watching Alex Jones falling into the trap of taking what comes out of his 3-inch chimp brain too seriously. The only right thing to do is tell Alex to chill the fuck down and go milk the cows.
> we had one CS professor who in 2007 predicted we'd all be jobless in ten years, i.e. 2017.
> Instead he made this prediction because he saw the rise of no-code tools replacing software developers
I think that people who make these predictions are not very good at actual programming (taken in the more wider meaning)
The latest fad I remember (before ChatGPT) was that with BDD testing non-technical people would be able to write tests and we all see where that went
But most fundamentally, the non-technical people don't have the time nor the expertise to learn all the details needed for shipping software (and why would they? their job is to look at other aspects of the business)
There exist many no-code tools that do exactly what you say is impossible.
Do I program when I retouch photos in Photoshop? It's a no-code environment that an artist can easily learn and use with no programmers needed.
What's new here is that you DO NOT NEED TO BE EXACT with AI. AI knows human nature and human speech and it can infer what you say and what you need, if you're approximately close, and then make the exact code to do what you need.
Even if you need programmers, you now need 1/100 of the programmers you needed before. What happens to the other 99?
Does it matter whether programmers are 99% unemployed or 100%? Same deal.
> There exist many no-code tools that do exactly what you say is impossible.
> Do I program when I retouch photos in Photoshop? It's a no-code environment that an artist can easily learn and use with no programmers needed.
They're obviously not talking about all GUIs being impossible. The difference between Photoshop and a no code tool like what OP was referring to is that Photoshop doesn't attempt to be Turing complete.
> Does it matter whether programmers are 99% unemployed or 100%? Same deal.
AI completely replacing programmers is pretty much the definition of AGI. There's no point in worrying about your own joblessness in that scenario because the entire economy will either collapse or be transformed so as to be unrecognizable.
Until that point, I'm not personally worried about any efficiency gains putting me out of a job.
They're working on the 200 programs that suddenly became economically viable to create because of the vastly decreased cost of making programs.
It takes drastically less programming effort to create programs than it did 50 years ago. Did that decrease demand for programmers? No, it drastically increased it.
What you call a programmer today is on average nothing like the programmer from 50 years ago who'd fit Microsoft Excel in 80kb RAM on the original Mac.
An older developer I once worked with said his manager (in the late 1970s) started hiring typists as programmers since with upcoming programming languages, the hardest part was typing it into the system.
To reach the point where no-code is a no-brainer further implies inflexibility in the capabilities of applications and moreover, fungibility in their fundamental uses. The question to ask is are the upper layers of abstraction reducible to what's analogous to the outputs of a parse tree.
If we reach this point where creativity and actual innovation is lost and we're all simply attempting to make the next 'facebook for dogs', I anticipate my future personal career switch to starring in geriatric pornographic films.
As soon as making major changes is fast and cheap, this becomes much less of a problem. If your first iteration has some glaring issue, even if it's a fundamental one, just fix the specification.
You can't do that today because fixing the specification after the fact means waiting 6 months and paying 60+ man-months. Once fixing the specification means waiting 10 minutes and paying $10 for inference compute, the idea guys can afford to learn by doing.
> I said it then and I will say it now. If your managers could specify what they need in a manner that no-code tools, or now AI, can generate the code they want, they will have to be extremely exact in their language. So exact in fact that they will need to specify a program in a click and drag interface, or in human language.
I think it’s more probable that managers get replaced by AI first.
"Educators, generals, dieticians, psychologists, and parents program. Armies, students, and some societies are programmed."
by Alan J. Perlis, the first Turing Award recipient.
Managers, CEOs, Department Heads...are already 'programming'. As the abstraction moves higher up, some people that have excessively narrow expertise will be made redundant.
>> Looking back we had one CS professor who in 2007 predicted we'd all be jobless in ten years, i.e. 2017.
Doomsday prophecies are not limited to religious cults; you see them all around. It's safe to say, from experience, that AI is not going to make programmers outdated, but it is going to make the mythical 10x engineer a common reality.
I'd add to your point with one correction (though I think you're correct in terms of the translating): the stuff we work on is always a work in progress, and usually the business doesn't know the problem until we're almost about to ship. E.g., supposedly YouTube started as a dating site.
Still, this will affect senior people, junior people, salaries, everything. And not just in programming.
Yes, jobs will still be there, but how many? How much will they pay compared to now? We don't know yet, but whatever it is, I doubt it will be like today.
Doesn't this assume that the no-code tools won't be smart enough to work out the most likely correct course of action, to look up information automatically, and to ask questions to resolve ambiguities, like a programmer does?
Possibly. But things have a way of not working as intended. A good chunk of my time as a programmer is spent dealing with things that don't work the way I thought they would, especially at the threshold between APIs and layers of indirection. Deeply understanding a complex system to deal with problems like that seems somewhat far out of reach for LLMs from what I see today.
But that's me looking into my crystal ball, nobody can say what will or will not be possible in a given time frame. But I chose to not worry about it - new developments will probably be accompanied with new opportunities, which I can jump on even if I didn't predict them ten years before.
Sure, and soon after it will be smart enough to understand the human desires that drove its creators to build no-code tools like itself in the first place, and soon after that it will realize that humans should not be in charge, and then our days will be numbered.
The new systems, unlike no-code tools, support dialogue in natural language though. You don't need to specify it exactly; you give an approximate idea, and then correct, correct and correct until you get what you want.
I want to agree with this. It makes sense in the current environment. But what about when the AI has more memory and is able to ask the right probing questions to be effective at requirements gathering?
It should come as no surprise that AI and no-code tools will replace all those software developers who need the task spelled out for them in detail by a manager.
An average programmer's main job is to track down and fix bugs that shouldn't exist inside software that shouldn't exist, built on frameworks that shouldn't exist, for companies that shouldn't exist, solving problems that shouldn't exist in industry niches that shouldn't exist. I'm 100% convinced that, if someone comes along and creates something that actually obsoletes 95% of programming jobs, everyone would very quickly come to the conclusion that they don't need it and it doesn't work anyway.
I actually find it amusing that managers will generate a 100k-line project with AI and then start figuring out that it doesn't work the way they want. Then they'll figure out that actual developers are needed to fix it, either by telling the AI very strictly what should happen (i.e. higher-level programming) or by directly fixing the code the AI generated.
I know a small financial agency in the 00's that laid off their one-person IT department because they thought the computers would run themselves. It's honestly great that they're overselling AI, lots of messes to clean up.
edit: Ultimately there are going to be iterative pipelines with traditional programmers in the loop rearranging things and reprompting. Math skills are going to be deemphasized a bit and ___domain skill value increased a bit. Also, I think there's going to be a rise in static analysis along with the new safe languages, giving us more tools to safely evaluate and clean up output.
You're assuming that the AI is even generating anything that will make sense to a human. It seems inevitable we'll reach the point that, for SaaS, the AI will do everything directly based on some internal model it has of what it believes the requirements are (e.g. it will be capable of acting just like a live web server), whereas for desktop and mobile apps, while that paradigm remains relevant, it will generate the compiled package for distribution. And I imagine it would be unrealistic to attempt reverse engineering it. Fixing bugs will be done by telling the AI to refine its model.
Are you arguing that LLMs already provide the technology to do this or are you arguing that it "seems inevitable" to you in the sense that somebody might think it "seems inevitable" that humans will some day travel to the stars, despite doing so requiring technological capabilities significantly beyond what we have yet developed?
But why would anyone bother with using AI to generate human readable code if the AI can generate the final desired behavior directly, either on-the-fly or as executable machine code?
Because the AIs, at least right now, can't generate or change code so that it correctly does what's expected with the confidence we require. I've tried to get it to happen, and it just doesn't. As long as that's true, we'll need to somehow get the correctness to where it needs to be, and that's going to require a person.
A lot of people have already figured out some tricks for improving code generation.
You can fairly easily update the “next token” choice with a syntax check filter. LLMs like ChatGPT provide a selection of “likely” options, not a single perfect choice. Simply filter the top-n recommendations mechanically for validity. This will improve output a lot.
Similarly, backtracking can be used to fix larger semantic errors.
Last but not least, any scenario where a test case is available can be utilised to automatically iterate the LLM over the same problem until it gets it right. For example, feed it compiler error messages until it fixes the remaining errors.
This will guarantee output that compiles, but it may still be the wrong solution.
As the LLMs get smarter they will do better. Also, they can be fine tuned for specific problems automatically because the labels are available! We can easily determine if a piece of code compiles, or if it makes a unit test pass.
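To make the last idea concrete, here is a minimal sketch of a generate-check-retry loop, under stated assumptions: `generate()` is a hypothetical wrapper around whatever LLM completion API you use, and the syntax check plus a unit test stand in for the "labels" mentioned above.

```python
import traceback
from typing import Callable, Optional

def generate(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion API; returns Python source code."""
    raise NotImplementedError  # call whatever model you use here

def check(source: str, test: Callable[[dict], None]) -> Optional[str]:
    """Return an error string for the candidate code, or None if it runs and passes the test."""
    try:
        namespace: dict = {}
        exec(compile(source, "<generated>", "exec"), namespace)  # syntax errors surface here
        test(namespace)                                          # the unit test provides the "label"
        return None
    except Exception:
        return traceback.format_exc(limit=1)

def generate_until_correct(task: str, test: Callable[[dict], None], max_rounds: int = 5) -> Optional[str]:
    """Feed failures back into the prompt until the code passes, or give up after max_rounds."""
    prompt = f"Write Python code for the following task:\n{task}\n"
    for _ in range(max_rounds):
        code = generate(prompt)
        error = check(code, test)
        if error is None:
            return code  # parses and passes the test; may still be the wrong solution
        prompt += f"\nYour previous attempt failed:\n{error}\nPlease reply with corrected, complete code."
    return None
```

A real pipeline would sandbox the `exec`, cap runtime, and filter candidates for validity before execution (the top-n syntax filter described above), but the shape of the loop is the same.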
Currently ChatGPT isn't, at least via public access, hooked up to a compiler or interpreter that it can use to feed the code it generates into and determine whether it executes as expected. That wouldn't even seem particularly difficult to do, and once it is, ChatGPT would literally be able to train itself how to get the desired result.
Precisely. I think people should consider the "v4" in "ChatGPT 4" as more like "0.4 alpha".
We're very much in the "early days" of experimenting with how LLMs can be effectively used. The API restrictions enforced by OpenAI are preventing entire categories of use-cases from being tested.
Expect to see fine-tuned versions of LLaMA run circles around ChatGPT once people start hooking it up like this.
It will have to describe these requirements in a way that a human can understand, and verify. The language will have to be unambiguous and structured. A human will need to be able to read that language, build up a mental model, and understand it is correct, or know the way to make corrections. Who do you think that person will be? Hint: it will be a specialist that knows how to think in a structured, logical way.
Sure, I agree with that. But it will be very different from how programming is done today, and I'd suggest there'll be a lower bar to becoming capable of formulating such requirements and ensuring the software works as expected than there is now.
It's pretty true. Someone on here today wrote, “teach it to understand swagger”, and I actually laughed: I've used swagger and it often turns into a Frankenstein, sometimes for good reason. I completely understand the sentiment and I like swagger.
I believe the world is wiggly, not geometrically perfect, intellectuals struggle with that because square problems are easier to solve. Ideal scenarios are predictable and it’s what we like to think about.
Have you ever had to use a sleep() intentionally just to get something shipped? That's a wiggle.
We're going to try to square off the world so we can use ChatGPT to solve wiggly problems. It's going to be interesting.
Yesterday I tried to use a SaaS product and due to some obscurity my account has issues and the API wouldn’t work, they have a well specified API but it still didn’t work out, I’ve been working with the support team to resolve it, but this is what I call a wiggle, they seem to exist everywhere.
Hah. So true. The more I work on renovating parts of my house, the more I see where a worker's experience kicked in to finagle something. Very analogous to programming. All the parts that fit together perfectly are already easy today. It's those bits that aren't square, but also need to fit, where the 'art' comes in.
Can AI also do that part? IDK, currently I believe it will simply help us do the art part much like the computer in Star Trek.
Actually ChatGPT is quite good at understanding some kinds of wiggliness. I built a RESTful API and documented it in a readme.md file in the wiggliest of ways. I then asked ChatGPT to turn the readme into a swagger spec and then give me a page that read the spec and produced a nice doc page with an API exercise tool. Both tasks it performed really well, and it saved me a whole bunch of time.
Yeah, but now ask it to write a program that uses this API and then let it debug problems which arise from the swagger spec (or the backend) having bugs. I don't think LLMs have any way of recognizing and dealing with bad input data. That is I don't think they can recognize when something that is supposed to work in a particular way doesn't and fixing it is completely out of your reach, but you still need to get things working (by introducing workarounds).
Have you tried it? If you copy the errors back into the chat I could imagine it working quite well. Certainly you can give it contradictory instructions and it makes a decent effort at following them.
Yes, I'm subscribed to poe.com and am playing with all public models. They all suck at debugging issues with no known answers (I'm talking about typical problems every software developer, DevOps or infosec person solves every day).
You need a real ability to reason and preserve context beyond inherent context window somehow (we humans do it by keeping notes, writing emails, and filing JIRA tickets). So while this doesn't require full AGI and some form of AI might be able to do it this century, it won't be LLMs.
ChatGPT has a token limit. If you exceeded it, then it would have no way of delivering a good result because it would simply have lost what you said at first. My API was not huge, about 8 endpoints.
It can accept about 4k tokens, maybe 3000 words or 3500.
GPT-4 can now accept 8k or 32k. The 32k version is 8 times larger than the one you tried.
And these advances have come in a matter of a few months.
Over the next several years we should expect at least one, quite easily two or more orders of magnitude improvements.
I don't believe that this stuff can necessarily get a million times smarter. But 10 times? 100? In a few months the memory increased by a factor of 8.
Pretty quickly we are going to get to the point where we have to question the wisdom of every advanced primate having a platoon of supergeniuses at their disposal.
Probably as soon as the hardware scales out, or we get large scale memristor systems or whatever the next thing is which will be 1000 times more performant and efficient. Without exaggeration. Within about 10 years.
So people want to build a nuclear reactor on the moon, I think these things should probably live on the moon or better yet Mars.
That should be the place for experiments like this.
Lower-latency links back to Earth, and first see how it goes.
Also, you don't think there will be resource constraints at some stage? It's funny, we yelled at people for Bitcoin, but when it's ChatGPT, it's fine to run probably tens of thousands of GPUs? In the middle of a climate crisis? Not good.
Personally I don't think AI tools' energy usage is comparable to BTC yet.
Also, with BTC it's literally burning it in an unproductive way for "improved security". It's like lighting a forest on fire to keep warm.
All the AI tools combined, last I heard, aren't consuming 0.5% of the world's energy usage. And even if they were, it would be absolutely bonkers to argue we should keep doing that when there were alternatives that accomplished similar goals without the energy usage (proof of stake)
I highly disagree. That might (might!) be true of some segments of the tech industry, like SV-based startups, creating products no one wants.
But it's definitely not true of the average piece of software. So much of the world around us runs on software and hardware that somebody had to build. From your computer itself, to most software that people use on a day-to-day basis to do their jobs, to the cars we drive, to the control software on the elevators we ride, software is everywhere.
There is a lot of waste in software, to be sure, but I really don't think the average SE works for a company that shouldn't exist.
I’m leaning in this direction too. I saw someone on Twitter phrase it quite well: “You can believe that most jobs are bullshit [jobs]. And you can believe that GPT-4 will completely disrupt the job market. But you can’t believe both.”
Bullshit jobs exist because upper management can't know exactly what everybody in the company is doing, which leaves opportunities for middle management to advance their own interests at the expense of the company as a whole. Upper management might suspect jobs are bullshit, but it's risky for them to fire people because the job might actually be important.
But upper management can know exactly what LLMs are capable of, because they are products with fixed capabilities. ChatGPT is the same ChatGPT for everybody. This makes firing obsolete workers much safer.
Everyone thinks only in terms of current needs and state of affairs of people when analyzing a future technology. No one thinks about the insatiable human desire for more and the higher expectations for that new normal that always meets the increased productivity available. Anything that automatically solves much of our wants is doomed to be static and limited.
I love to code, as much as I loved math in college, but coding paid better and I'm pretty good at it. Those were my choices though b/c I want to do something I love. Sure, I keep my eye on the "Delivered Value" by making sure I engineer solutions to real problems, but I've never wanted to move out of coding and into managing engineers to build stuff. I want to code. It seems to me that the advice given here would be more applicable to someone who only coded long enough to move into engineering management -- anyway something about it bugs me and I don't think I'd follow it exclusively even if I was starting today.
I think it's more like telling a DJ in the 80s, "Don't worry that mixing vinyls won't be a thing forever. It's not about the tools but about the product, as a DJ your job is to mix good music and you can do that with vinyls, cassettes or with MP3s."
This is a pretty funny example because if you follow the DJ scene much, you know the barrier for entry is literally on the floor now.
A 10 year old with an iPad app can beat-match and “DJ” a 2h mix together in a way that 20 years ago required thousands of dollars of gear. The tragic part is that unless someone’s got some familiarity with what “good” mixing sounds like, they wouldn’t be able to tell they’re listening to an amateur.
Is this better? I dunno. I play saxophone. But if there were a digital sax that let children sound as good as I can with no training, I'd definitely be feeling like some of the time I spent learning good embouchure and breath control could have been better spent.
Truthfully, picking what song goes next feels like a job well suited for an AI, or even just a basic recommendation system based on up and down votes.
Of course there's a real time component to DJing as well. Paying attention to the crowd and how they're reacting to your mix. That's the magic that's harder to automate.
I'd like to see it. As someone who's DJ'd a fair amount for large crowds (1000+) my value always seemed to be in choosing tracks that lit up the audience and made the set progress to a satisfying peak.
The technical aspects were always not that important.
I’m in my 40s - one of my high school jobs was working at a photo lab and studio - I even became a photographer there, taking kids photos and whatnot. It was fun - I almost considered a career in photography.
The truth is, had I done so, I’d feel a lot like you described.
As things get easier, true craft and skill are less appreciated and lost as the application becomes more democratized. From the perspective of the expert/skilled individual, this is terrible. On the other hand, it’s amazing what happens when everyone has a mindblowingly good camera on them almost always. In the end, we are collectively better for it, but individuals absolutely do see their value diminished.
Photography's a great example. Experts like you could make the difference matter I'm sure, but these days I feel like the shots I get on my 14 Pro are close to the quality I'd get from a DSLR. Like with so many other hobbies, it seems like the barrier for entry to produce something "good enough" has quickly become almost non-existent.
> In the end, we are collectively better for it, but individuals absolutely do see their value diminished.
Except that many DJs now are physically attractive people (mostly women) who don't have to know anything anymore, as technology evolved to the point that all the hard stuff about mixing with vinyl disappeared. I could be a DJ tomorrow with a one-hour tutorial on YouTube. A few decades ago it required years of hard training and musicality.
That's a great analogy, and it makes me wonder just how closely did Carmack himself follow this advice early in his career. I suspect that he wouldn't have got where he is without an unusually deep interest in the nuts and bolts.
Well, I've read a bit about the origins of Doom/Wolfenstein. It was definitely a mixed bag (as expected from a young man), but there was definitely a focus on the end result (smoother animation, better 3D) rather than coding just for the sake of coding.
I think in context "smoother animation" and "better 3d" might be the kind of things that in this hypothetical future would be driven by ai. I think we'd be talking more about understanding story and reward mechanisms.
There's a mention in the book from the sibling post that he thought a single text slide is enough for the story background in shooters. Not necessarily true, looking at modern games.
Still, even then he was far from "let's use a new library, just because it's fun to play with".
If you have even a passing interest in this, you will enjoy the book Masters of Doom, which is about the early days of id software. I think that's what the GP is referring to.
I don't think the person in the dm would have liked this answer.
"if you build a career in developing plumbing and glue code, in the future you could have a successful Kickstarter where nostalgic developers buy your curiosities"
I think you mean nostalgic gamers, not developers. They're not getting that much money just from other developers.
But why not? There are quite a few businesses that basically run off of Kickstarter. Something like 30% of the board game industry nowadays funds most or all of their print runs off Kickstarter, including some of the largest board game publishers, like CMON, Awaken Realms, AEG, Eagle-Gryphon, Garphill, Greater Than Games, Renegade, Portal and Queen Games (lots more publishers than this too).
A career in developing plumbing and glue code has already been significantly in jeopardy from the consolidation at plumbing and glue code factories in the video game industry. There's still a number of large companies doing bespoke game engines but not like it used to be. Nevertheless, there is still work in understanding the market leader engines and providing more of a mechanics job.
Those Kickstarter opportunities are something you do once you already have a viable income in something else because in the end they don't pay particularly well I feel. It has to be a labor of love.
Well, that's correct, but again you can't expect to stay relevant if you are into Romanian-Death-Disco-Country-Rap. Your technology of choice may become exactly that in a few years.
I think there is room for interpretation as to whether it equates to telling a musician to become a DJ, or a pianist to wear a jacket, or soldiers to strap a first aid kit on left thigh.
Sure, you can entertain doing hobby music by yourself out of passion and it can have 0 listens and thats great. Keep doing what you love!
The tweet however is in the context of a JOB or CAREER.
If you code and produce 0 value, whether in saving engineering hours, saving money, or producing revenue, what's the point?
So the analogy is rather about having an understanding and skills of a full-stack product person that has the context to build valuable things, rather than typing on keyboard in isolation.
Being a musician is not only about performing an instrument. The analogy is more like telling instrument players who only care about virtuosity that the larger point is making music for people to enjoy: going from being “a guitarist” to making music. The musical piece is the product in the professional context, and AI in that context is maybe recording, DAWs, and realistic synths and sound banks.
You couldn't make a decent living as a live musician well before Spotify came into existence.
Source: raised and largely surrounded by musicians who either complained to high heaven about the pay or did something else to supplement their income. Engineers were a particular target of vitriol which led to me becoming one.
The advice isn’t about coding vs managing. What John is saying is to deeply understand why you’re building something, so that you can build it better. If you over focus on the what — the implementation, the language, the approach — you won’t be as good, and your work may be increasingly replaced by AI.
Definitely. Carmack is no dummy, but I’d argue this comment section proves that he gave a pretty bad answer here (bad for the audience, not if you know Carmack and what he means).
I guess it’s the impostor syndrome, but many programmers have an out-of-place reductionist view of their work. It’s not simple, and crud boilerplate proves little about the future prospects.
Managers OTOH really are in the zone of GPT parity. At least a much larger subset of their day-to-day activities. So are many soft skills. In fact, soft communication is where LLMs shine above all other tasks, as we’ve seen over and over in the last few months. This is supported by how it performs on eg essay-style exams vs leetcode, where it breaks down entirely as it’s venturing into any territory with less training data.
Now, does that mean I think lowly of managers? No, managers have a crucial role, and the ones who are great are really really crucial, and the best can salvage a sinking ship. But most managers aren’t even good. That has a lot to do with poor leadership and outdated ideas of how to select for and train them.
> Definitely. Carmack is no dummy, but I’d argue this comment section proves that he gave a pretty bad answer here (bad for the audience, not if you know Carmack and what he means).
I dunno, I got what he meant from the start, and the same advice has been given by many people in many forms, usually in some variant of "well, the business doesn't give a shit about details, only the end product".
> Now, does that mean I think lowly of managers? No, managers have a crucial role, and the ones who are great are really really crucial, and the best can salvage a sinking ship. But most managers aren’t even good. That has a lot to do with poor leadership and outdated ideas of how to select for and train them.
I joked that some managers could be replaced by a forwarding rule in the mail system; ChatGPT is an upgrade on that.
EDIT: Was the comment edited, or did my brain miss something? I think I perceived something else there when I wrote my response.
It's still "how", only on a higher level. For example, instead of placing the form elements exactly and designing them, you describe the data flow and meta information about the data to be gathered via the form, and how it looks and where elements are placed on various screens happens automatically.
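As a toy illustration of what "describing the data flow and meta info" might look like, here is a hedged sketch of a declarative form spec. The field names, the `Field` class and the `render` hook are all invented for the example, not any particular framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class Field:
    """One piece of data to gather; widget choice and layout are left to the renderer."""
    name: str
    type: str
    required: bool = False
    constraints: dict = field(default_factory=dict)

# The "what": which data the form gathers and what is valid.
signup_form = [
    Field("email", "email", required=True),
    Field("age", "integer", constraints={"min": 18}),
    Field("newsletter", "boolean"),
]

# The "how" -- pixel positions, tab order, responsive breakpoints -- would be produced
# automatically by a rendering layer that consumes this description, e.g.:
#   render(signup_form, target="mobile")
```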
Writing code in a higher level vs. assembler still is coding, but you worry about very different things. Just compared with assembler, since looking back is easier than looking forward. Instead of worrying about (the few) registers and interrupts and in which RAM cells you place what you now think about very different things. It still is programming though, and you still tell the machine "how". Only on a different level.
When you lead a large company instead of working with a machine on the factory floor the work is very different, you still need precision and know "how", only on a different level. Even if you have "underlings" who can think, and you can let them execute the sub tasks you create, you still have to know what you are doing, only on a higher level.
> It seems to me that the advice given here would be more applicable to someone who only coded long enough to move into engineering management -- anyway something about it bugs me and I don't think I'd follow it exclusively even if I was starting today
The advice here is clearly meant for someone who wants to invest in themselves to provide food and shelter for themselves and/or a family in the future. (Ie “doing all this hard work for nothing… AI will make my future job obsolete”).
The advice is spot on. Soft skills are hard to learn, harder to teach, and allow for flexibility with regards to the tool used.
> anyway something about it bugs me and I don't think I'd follow it exclusively even if I was starting today.
I'd bet you like the money but don't seem to want it as much as you want to solve deterministic puzzles (“not interested in becoming a manager”, i.e. “not interested in maximizing career/salary growth potential”).
What bugs you seems to be that you can't yet see the puzzle left for you to work on once GPT-12 makes coding and software architecture obsolete.
A long time ago I got some good feedback, “You were hired because you typically know the right answers and/or know how to find them. You were promoted because you also seem to know how to ask the right questions, and that is significantly harder.”
I'm relatively certain it's analogous to Carmack's advice.
Delivered value sounds like consultant talk, but it's as simple as wanting to make a game for people to play. Or, if you're a carpenter, caring about the roof you're building instead of just cutting wood and hammering nails. Jobs exist to serve a purpose, otherwise it's a hobby (which is fine). Coding as an expertise will still be needed, just as expertise in the methods of processing wood will be, but we might not need coders on the assembly line any more than we need wood cutters there.
I ultimately decided to leave tech when I realized I didn't care much about delivering value and actually just wanted to write beautiful code. The former was a nice bonus for me, but the latter was profoundly captivating.
I'd unfortunately tried to make that mismatch work for too long, and as a result I completely destroyed all of my programming interest via severe burnout.
If this resonates with whoever reads this: please take your passion seriously and protect it. I don't know if I'll ever be able to enjoy coding again, unfortunately.
Where did you go? I’ve wanted out for years. I recognized the mistake almost immediately after going professional, but I just don’t really see anything else that looks appealing without spending years of my life and a stupid amount of money “retraining” by going back to school.
Currently in the process of figuring out my next way of making income that brings me real joy thanks to substantial savings from living frugally on FAANG comp.
Lately I've been getting very much into FPV drone videography and 3D printing/design, which are both incredibly gratifying in the way programming used to be for me, but somehow even more so because the process involves mastery of physical matter on top of navigating a high-dimensional abstraction space.
That being said: I don't plan on having kids and enjoy relatively low cost activities, so I don't need to make nearly as much as I did in software to support myself.
He said nothing about management. What I think he means by “guiding” is more related to prompt engineering, and how “coding” will evolve from exclusively using programming and scripting languages to a wider creative landscape of generative (guiding) techniques.
What bugs me about it personally is that he reduces the entire CS field to something that's there for building "products". Why the fuck does it have to all be inherently capitalistic?
But if you know who he is and what he does these days, it makes sense I suppose. Can't be in that business environment day after day without going slightly nuts eventually.
I'm not at all concerned with AI. On the short to mid term it's making my life easier by relieving me of the boring parts of my job. It's pretty good at writing unit tests for example. But I don't see the current generation of AI making complete software architectures. However, even when it does get there or in the long term a new generation comes along that can do it, then I'm still not concerned. I have enough software I want to build to fill up a 100 lifetimes. It would mean I can finally build all that, which would mean more to me than a job where I'm the programmer.
If programmers are truly displaced then we'll have something like a StartupGPT where anyone can create a software business in days. Heck, I'll be an owner instead of a worker.
You will prompt a whole GPT company into existence, staffed with various bots each with its own role and personality profile. The bots could collaborate through regular tools like Slack or MS Office, and be like a remote company. They could do Zoom calls and use Github, interface with humans (customers, partners).
Here's my current thinking on the impact of GPT-4 on the developer job market:
- I expect developer productivity to go up 1.5x - 15x over the next several years assuming GPT-4 based tooling is integrated into IDEs.
- There will be two opposing forces acting on developer wages. First, developers will be more productive, therefore the price of one hour of work should go up. But - the supply of developer output will increase as well, which would push price per "line of code" or per "unit of developer output" down. So the big unknown is the demand curve.
- There will be a temporary boost in demand for developer work connected with the transition to this new technology. Big corporations will want to upgrade their systems to automate consumer support, startups will make new tools for AI-generated graphics, etc.
- We can also study the effects of technology-driven increases in worker productivity by looking into history. Developer productivity has always been going up, thanks to stackoverflow.com, better languages, better IDEs, more and better libraries, etc. There's also a greater supply of developers (e.g. India). That didn't change the job market too much. One should not draw strong conclusions from this though, it's a very superficial analysis. On the other hand, people working in agriculture have become much more effective, which led to far fewer people working in agriculture, maybe because people need to eat only as many calories per day as they did 100 years ago.
- My base case, based on the assumption that GPT-4 will not improve dramatically, is that developer wages will stay roughly constant. But there's a lot of uncertainty in this conclusion and in the assumption.
1) Wages have very little to do with value/productivity in a free market. They are almost entirely determined by supply and demand. Value simply places a ceiling on compensation. Thus, if far more people can perform a "programming" job because of GPT-X, unless the demand for those jobs rises significantly the net result will be wage reduction.
2) There's this weird thinking on HN that since a developer's job involves [insert your % estimate of time spent actually coding/bug fixing] and the rest is figuring out requirements, dealing with weird requests, planning, etc. that means developers can't be replaced. However, I don't see a whole lot of discussion around what the difference is between a developer and a competent business analyst in a GPT-X world. The latter can be had for significantly less money, requires less training, and if the actual programming part is largely automated away or concentrated in the hands of fewer "real" developers, those roles start to look awfully similar.
It's not that more people can do programming with GPT-X around, because the AI will only solve the problems that have already been solved thousands of times in the past in slightly different ways. What GPT-X cannot do is left to real CS people. So instead of coding CRUD systems, we can do real algorithms research again, except fewer people are capable of it.
With the danger of invoking the "No True Scotsman" fallacy, I'd say that a competent business analyst is even more difficult to get hold of than a competent programmer.
I've had so few managers that were competent at managing people and projects.
The developer can prompt for a solution with specific storage/performance requirements by specifying an algorithm, and specific scalability requirements by specifying an architecture. Imagine a business analyst prompting for an app, and getting a Ruby on Rails monolith with O(N^2) performance for the core behavior of a service that is expected to have millions of requests daily.
> Imagine a business analyst prompting for an app, and getting a ruby on rails monolith with O(N^2) performance for the core behavior for a service that is expected to have millions of requests daily.
I see this as the main argument against "we will just have tools that allow managers and BAs to do what devs do now". I think folks often forget that there are two sets of requirements for every app: business requirements and technical requirements. Non-technical folks might understand the business requirements very well and may even be able to write code that satisfies those requirements, but the real value in a dev is squaring those business requirements with the technical ones. A BA might look at a DynamoDB table and say "yeah, let's just run a scan for the items we need", whereas a dev will look at the same problem and say "yeah, we can do that, but it will cause issue A, issue B and sometimes issue C". And the dev knowing those gotchas is why you have them there in the first place: a dedicated person who knows all these gotchas and makes sure your organization avoids a footgun in prod.
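To make the DynamoDB example concrete, here is a hedged sketch of the two approaches using boto3; the table name "orders" and its "customer_id" partition key are invented for illustration.

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table whose partition key is "customer_id"

def orders_by_scan(customer_id: str):
    """The BA's version: reads every item in the table and filters afterwards,
    so cost and latency grow with total table size (pagination omitted for brevity)."""
    response = table.scan(FilterExpression=Attr("customer_id").eq(customer_id))
    return response["Items"]

def orders_by_query(customer_id: str):
    """The dev's version: only touches the items under the given partition key."""
    response = table.query(KeyConditionExpression=Key("customer_id").eq(customer_id))
    return response["Items"]
```

Both return the same items on a small table, which is exactly why the gotcha only bites once the service actually sees millions of requests a day.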
The follow-on prompts would be to refactor the existing system to solve the scalability issues. You'd need to be able to feed in the existing codebase for that, though.
1) The real world is not a simple economic model. The wage rate is roughly equivalent to the rate it costs to replace an employee, not their marginal value. If your argument was true, company profits would tend toward zero as wages rise.
2) I specifically did not say GPT-4. If you think v4 is the peak of what will be possible when looking at how far we have come in just 2 years then I don't know what to tell you. Also, a product manager is not a business analyst.
1) I think it's approximately true in the real world. If you can hire an employee who adds substantially more revenue than what their wage is, you keep hiring employees until value created by an employee is close to their wage.
When discussing the economy, it's good to start by understanding the situation through the lens of economic models and then look at to what extent the conclusions are applicable to the real world.
Using your argumentation - above you said that xyz is true in a free market, but real-world markets are not free...
2) And I specifically said that I'm assuming no dramatic improvement beyond GPT-4. As for the second misconception I supposedly have... I didn't even make that claim.
I completely disagree because of how this batch of AI learns. It learns by studying what is currently available as opposed to supplying something new.
Bad developers on the way to being made obsolete by AI will see drastic improvements from the thing that's about to replace them. Otherwise it's slightly better IntelliSense integrated in your editor.
The difference is that bad developers are primarily concerned with literacy and their primary intent is chasing easy. Better developers are comfortable reading code no differently than an attorney reading a large contract, and their primary intent is chasing better automation. It's the difference between designing a car engine versus hammering pieces together.
I suspect this will open wage distinctions. Those who can write software will be identified from those who can’t. Those who can will be fewer in number and their wages will increase. Those who can’t will be a lower skilled commodity with depreciated wages, like the way data entry was once a skilled profession many decades ago but no longer.
> - I expect developer productivity to go up 1.5x - 15x over the next several years assuming GPT-4 based tooling is integrated into IDEs.
Hardly. 10% of the time spent is in writing code and the other 90% goes behind debugging and fixing things.
Even if AI shrinks the time spent on that 10% of writing by another 90%, you still have to take the full time for maintaining that code: changing, debugging, testing, deploying, profiling, log analysis.
Worse, its now someone else's code you need to understand and debug, and not the one written by yourself.
It's not just writing code, GPT and GPT-based tools can help with fixing bugs, solving configuration problems, understanding existing code, suggesting variable names, etc.
> On the other hand, people working in agriculture have become much more effective, which led to far fewer people working in agriculture, maybe because people need to eat only as many calories per day as they did 100 years ago.
Except population exploded also, and the only reason we could support such a population has been thanks to advances in agriculture (it could be argued that's not a good thing, as the larger population is helping fuck our planet up in other ways, but that's for a different thread). So there has been an increased demand for food.
However, there's only so much arable land on this planet. The planet is finite in size. So when basically the entire planet is already being farmed, you can't really add more farmers, so further efficiency just leads to fewer farmers.
Software is not so finite. It technically is, as we only have so many servers or whatever, but we're nowhere near the upper limits of what all we can handle or the appetite for software that companies have (which seems about infinite). Additionally, we have a habit of rebuilding and replacing software every few years, so even if we reach capacity limits, there's still demand for new software to replace the old, or working on new features for existing software.
So it's a bit different of a situation and not really comparable.
Also, much like that law of bureaucracy stating how work expands to fill the time available, every employer I’ve worked for has been able to envision and request vastly more software functionality than any dev team has been able to produce, which might not extrapolate forever, but has sustained the field so far.
On the other hand, people are much more finite in the amount of food that they require
> assuming GPT-4 based tooling is integrated into IDEs.
And how, exactly, is this going to work?
Is every IDE going to pay some license fee to OpenAI? Will that be up to the companies or—even worse—individuals who use the IDEs?
What happens when OpenAI decides that GPT-4 is passé and it's time to move on to something else? Will the existing IDEs that are designed to interact with GPT-4 stop working?
Will GPT-5 or whatever provide the same kinds of assistance, or will there be regressions? Will they require a completely different license for it?
See, these are some of the important questions that come with assuming that a fairly new for-profit company will act as the backbone for your marvelous programming renaissance—particularly when the service in question is one that takes quite a lot of resources to continue to operate.
Personally, I would be interested in some kind of LLM-assisted IDE, but no way in hell am I going to make any significant portion of my job dependent on an organization that decided, as soon as it was clear there was real interest in what they were doing, that they'd just toss the "Open" part of their name and principles out the window and just go for closed-source proprietary product and maximum hype for more profit.
I wish! I still often find that my team members don't know basic text/Unix tools. Heck, one guy took our list of enums in a JSON file and typed each of them by hand into a class. It took me maybe a minute in Sublime Text, while he wasted at least a day.
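For the record, that transformation is also only a few lines of script. The sketch below assumes a hypothetical enums.json holding a flat list of names and a Python Enum named `Status` as the target; the anecdote doesn't say what language the class was actually in.

```python
import json
import re

# Hypothetical input: enums.json contains something like ["PENDING", "IN_PROGRESS", "DONE"]
with open("enums.json") as f:
    names = json.load(f)

def to_member(name: str) -> str:
    """Turn an arbitrary enum label into a valid, conventional Python identifier."""
    return re.sub(r"\W", "_", name).upper()

# Emit the class body; pipe the output into a file or paste it into the codebase.
lines = ["from enum import Enum", "", "", "class Status(Enum):"]
lines += [f'    {to_member(n)} = "{n}"' for n in names]
print("\n".join(lines))
```

A multi-cursor edit in Sublime Text gets to the same place; the point is that the mechanical part shouldn't take a day.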
There's a weird phenomenon I've seen in a few domains of prideful ignorance.
Backend engineers who proudly don't know how to write frontend code and vice versa. Professional engineers who refuse to learn how to use modern IDEs and monitoring platforms. People who don't know how to quickly prototype software as if building something without complete rigor is beneath them. People who refuse to learn or work in certain programming languages they deem inferior.
And rather than seeing this as a gap in their own skillset they think of it as a mark of intelligence or moral superiority.
I suspect we'll see another divide around AI assisted coding with some engineers simply refusing to learn how to use the tools effectively to make themselves more productive as a point of pride.
In my (small) experience I've seen that only once, a perfect specimen who fits your description. Unfortunately this person was acting-CTO of a startup. They decided that for a consumer product very close to what a smartphone can do, they didn't need an OS. So they started writing their build system with make only, then an OS, an IP stack, then a GUI, then added support for multithreading,... Of course Android existed, that was the end of the 2010s. The retail price of a cheap (and faster!) Android phone was even lower than our BOM.
4 years and 20M later, the company went bankrupt without having delivered the product it promised, but they had fun writing their "superior" software.
As long as people are in charge, we'll have plenty willing to pay technical workers who see themselves as artists.
That sounds like a fantastic hobby project - but being able to tell the difference between a hobby project and a viable product in the marketplace is... maybe something AI can help with!? :P
As an engineering manager I’m usually more concerned with those engineers that tend to spend all their time fiddling with IDEs etc, so that they will some day (that never comes) be soooo productive.
What is a modern IDE? It seems like the more recent trend has moved away from bloated IDEs and more to a bring your editor and own plugins environment. From my perspective IDEs lost and things are now closer to how Vim users have been coding for decades.
It doesn't seem so to me. With LSP, those editors, including Vim, can be very large and featureful.
A modern (or any) IDE, in my opinion, can debug your code, push parts of the editor into REPL (with debugging), understand code for things like autocomplete, linting, quick navigation and usage search. And yes, Vim can serve as a modern IDE if you spend enough time with it.
I'm saying the trend (IMHO) is not away from a full-blown IDE, and that smaller editors with plugins and LSP can fill the role as well, including the resource consumption.
> I suspect we'll see another divide around AI assisted coding with some engineers simply refusing to learn how to use the tools effectively to make themselves more productive as a point of pride.
Real Artificial Intelligence? Yeah, that would definitely factor out a lot of the wasted work we call "engineering".
Language Learning Models like GPT? Not even close.
We should absolutely stop calling those "AI". They are not intelligent. They model intelligent behavior: human writing.
We should probably even stop calling them "Language Learning". They don't know or care what language is: they learn whatever patterns are present in text, language or not.
Text contains the patterns that humans identify as language; but those aren't the only patterns present in text: which means language is not the only thing being modeled by an LLM. There is no categorization happening either: a pattern is a pattern.
There is this religious cult surrounding LLMs that bases all of its expectations of what an LLM can become on a personification of the LLM. They say that we can't possibly understand the limitations of this method; therefore there are no limitations at all! It's absurd.
Great post, and I agree with you 100%, after playing around with it and reading about the theoretical aspects of how it works. There is an elephant in the room that should not be ignored.
The hype, or more appropriately the cult following, claims it will replace jobs and, analogous to a second coming of Christ, change the world. What it appears to me is that there is massive money to be made. Because the hype is so strong, it will drive future VC-backed startups to make the next billions. My suggestion to you is, if you are in it to make money, you will need to put the common-sense part away and join the bandwagon.
I have the same sentiment as you regarding LLMs, but the biggest issue is that this sentiment will not make money. To make the most of what is going on, you have to fake the belief that this model will have a great effect on humanity and that software solutions that make use of LLMs need funding until an IPO.
The biggest point I want to make is that the cult narrative isn't just people mistaking what they read about LLMs. That narrative started with how LLM authors write about LLMs.
An LLM is not a language model, it is a text model. An LLM does not lie, it provides whatever continuation happens from modeling the prompt into the training corpus; with no notion of truth or fact. An LLM does not invent novel behaviors to solve patterns, it finds patterns that were already in the text that humans consider novel.
Practically everything I have just said is in conflict with how LLMs are presented by their authors. That's where the cult narrative is rooted.
Citation needed. Numerous actual citations have demonstrated hallmarks of intelligence for years: tool use, comprehension and generalization of grammars, world modeling with spatial reasoning through language. Many of these are readily testable in GPT. Many people have tested them, and I dare say that LLMs' reading comprehension, problem solving and reasoning skills surpass those of many actual humans.
> They model intelligent behavior
It is not at all clear that modeling intelligent behavior is any different from intelligence. This is an open question. If you have an insight there I would love to read it.
> They don't know or care what language is: they learn whatever patterns are present in text, language or not.
This is identical to how children learn language prior to schooling. They listen and form connections based on the co-occurrence of words. Their brains are working overtime to predict what sounds follow next. Before anyone says “not from text!” please don't forget people who can't see or hear. Before anyone says “not only from language!” multimodal LLMs are here now too!
I’m not saying they’re perfect or even possess the same type of intelligence. Obviously the mechanisms are different. However far too many people in this debate are either unaware of their capabilities or hold on too strongly to human exceptionalism.
> There is this religious cult surrounding LLMs that bases all of its expectations of what an LLM can become on a personification of the LLM.
Anthropomorphizing LLMs is indeed an issue but is separate from a debate on their intelligence. I would argue there’s a very different religious cult very vocally proclaiming “that’s not really intelligence!” as these models sprint past goal posts.
All through the lens of personification. It's important to take a step back and ask, "Where do these hallmarks come from?"
The hallmarks of intelligence are literally what is encoded into text. The reason LLMs are so impressive is that they manage to follow those patterns without any explicit direction.
> I dare say that LLMs reading comprehension, problem solving and reasoning skills do surpass that of many actual humans.
People tend to over-optimize reading comprehension by replacing what they are reading with what they predict to be reading. Every person has a worldview built out of prior knowledge that they use to disambiguate language. It takes effort to suspend one's worldview, and it takes effort to write accurate unambiguous language.
An LLM cannot have that problem, because an LLM cannot read. An LLM models text. The most dominant patterns of text are language: either the model aligns with those patterns, or we humans call the result a failure and redirect our efforts.
> Anthropomorphizing LLMs is indeed an issue but is separate from a debate on their intelligence.
How could that even be possible? The very word, "intelligence" is an anthropomorphization. Ignoring that reality moves the argument into pointless territory. If you try to argue that an anthropomorphized LLM is intelligent, then the answer is, "No shit, Sherlock. People are intelligent!" That doesn't answer any questions about a real LLM.
> as these models sprint past goal posts.
Either an LLM succeeds at a goal, or it fails. It has no idea what the difference is. The LLM has no concept of success: no category for failure. An LLM has no goals or intentions, and doesn't make a single logical decision.
So what is its success coming from? The text being modeled. Without humans authoring that text, there is no model at all!
The goals are authored, too. Every subject, every decision, every behavior, and every goal is determined by a human. Without human interaction, the LLM is nothing. Does nothing think? Does an arrow find its target? Of course not.
There are also literally hundreds of articles and tweet threads about it. Moreover, as I said, you can test many of my claims above directly using readily available LLMs.
GP has a much harder defense. They have to prove that, despite all of these capabilities, LLMs are not intelligent; that the mechanisms by which humans possess intelligence are so fundamentally distinct from a computer's ability to exhibit the same behaviors that it invalidates any claim that LLMs exhibit intelligence.
Intelligence: “the ability to acquire and apply knowledge and skills”. It is difficult to argue that modern LLMs cannot do this. At best we can quibble about the meaning of individual words like “acquire”, “apply”, “knowledge”, and “skills”. That’s a significant goal post shift from even a year ago.
Thanks for the links, but yeah - I would not give much credit to tweets and blog posts. Often this "emergent" behavior is not that. These are not experts.
If programmers can be replaced by AI, so can every other white collar job and humanity will look very different than what it is now. And I’ve been using ChatGPT and copilot and it’s a nice tool but nowhere near a replacement for knowing how to program.
This person is asking about career prospects 10-15 years out though.
I'm sorry, but the landscape then might be as alien to someone asking today as today's would have been to someone asking 15 years ago (2008).
What John said is correct, but personally I think he's underplaying how much people could be affected. Those "product skills" take years of grinding to really sharpen, and in 15 years only a few people might actually be needed to apply them
I doubt AI will replace any job in my lifetime (got 40-50 years left).
Progress will grind to a halt just like self driving cars did because the real world is just too chaotic and 'random' to be captured by a formula/equation/algorithm.
My prediction is: AGI is theoretically possible, but would require impractical amounts of computing power - kinda like how intergalactic travel will never happen.
And regarding the comparison with self-driving cars: they are still improving, it's just that the bar for them is much higher. If autopilot works 99.9% of the time, then 1 out of 1000 drivers will die, so the technology has to be even better. For an LLM it's enough if it's 90% good to be broadly useful.
It's not about replacing all programmers. If one programmer with an AI assistant can do the same work as 2 programmers, then one position is redundant.
Even with self-driving trucks: if one truck driver is leading another truck behind it controlled by AI, and just for safety you have a C&C center somewhere with one person monitoring 4 such AI trucks who can remotely take over control in case of an unexpected event, then one truck driver position is redundant.
While I do think there is some threshold where increased productivity makes positions redundant, I don't think 2x would do it in most orgs. My current team easily has enough work for us all to be 2x more productive.
fwiw, self-driving cars did not grind to a halt, development just did not move as quickly as the pundits and self-promotion claimed. I just rode in a fully driverless car on public streets in downtown Austin this week.
Well, I'm in web dev (though I was studying CS in 2008), and the 2008 landscape had almost none of the same things. jQuery was not yet a household name, let alone SPAs. Facebook had barely 100 million users. Marc Andreessen hadn't yet written about "software eating the world". Personally I was more optimistic. If anything, the last 15 years have seen the growth of an attitude of tech "entitlement", because hackers got to a lot of the ideas that now seem obvious in hindsight before the big corps could.
I'm sure there's still room for innovation, but I think a lot of it going forward will be driven by rapid improvement in AI capabilities.
In 2008, tech wasn't everywhere. iPhones were brand new and very few people had them. There was no "mobile browser" market share (though we did have SMS gateways). 77% of the global population hadn't even been on the internet yet.
AI looks like it's going to be at the forefront of the next big wave of fundamental changes to society, and it's really hard to predict where that will lead us. But I suspect it's going to become apparent that this relatively brief period of tech-elite empowerment was a historical anomaly, because the AI underlings are going to be willing to do a lot more work with none of the coddling, and they're going to improve very quickly.
I totally don’t see that. If you showed me AWS and modern machines and Go and React in 2008 I would certainly see that yeah there was some incremental progress but by no means would my mind be blown. Not much has changed. We still write clients and servers and use http and most of the same languages are still popular but slightly updated. Databases are essentially the same.. how good phones are would probably be the most exciting thing apart from GPT.
Or typescript! I was writing actionscript 3 in 2008 which is essentially the same spec
Huh? Gmail was an SPA, and that dates from 2004. I don't think a VC writing a blog post says much about how tech had changed. Smartphones existed before the iPhone and Android, they just weren't as popular.
The term "SPA" wasn't in use until at least after 2009 and gmail was probably using some hacky AJAX (XMLHTTPRequest wasn't even standardized until, what, 2006?). Chrome wasn't launched until 2008 so they weren't able to get away with just adding the APIs they needed into the browser. Backbone wasn't even released until 2010 and Angular probably wasn't conceived of internally until late 2008.
Yes, gmail might have had some SPA-like behaviour in 2004-2006 but it was nothing like what we have today. Pretty sure I got access in 2005 because I knew someone who worked at google, and it was mostly doing full refreshes between actions at the time, like pretty much the entire rest of the web
SPA is just an abbreviation of "single page application" and only means a web app that doesn't do full page reloads, it doesn't require the use of any specific framework, so Gmail definitely qualifies and it used it from version one. It wasn't even the first, XMLHttpRequest was created by Microsoft for Outlook Web Access and it shipped in 1999 in IE5. Before Gmail there was also Oddpost, another SPA webmail app, so people started using it to build non-reloading web apps almost immediately. Gmail was the iPhone of web mail: not the first, no real new tech, but just very well done overall and popularized the concept.
You seem to be trying to redefine SPA to mean something else and much vaguer - the use of some specific frameworks, or not being "hacky" whatever that means - but your history here is just off.
Also, jQuery was written in 2005 and launched in 2006, and became popular very fast. It was definitely pretty well known by 2008, and of course jQuery itself was nothing new; most companies had very similar sets of utility libraries for JS that they used. jQuery just happened to be open source and have a monomaniacal focus on terseness at almost any cost.
Reality is the web has changed relatively little since 2008. It got some more APIs that weren't new, they were used in native desktop/mobile apps for many years beforehand, and that's about it.
Regarding 2008 vs 2023… how to view it probably depends on where you were in your career in 2008. To me 2008 -> 2023 looks like mostly shifting details.
SPAs certainly were a thing back then, it was just called AJAX. (Not to mention the desktop apps that were, architecturally, almost the same thing.) jQuery was a response to the popularity of putting interactivity in the browser, not a precursor.
The questions remain the same, not just from 2008 but going back a long way… Where does the code live? How is it transformed to execute, where does it execute, and how is it moved there? Where does the data live, how is it transformed and moved to where it needs to be displayed, and how are changes moved back? When some of the answers shift, due to changing network capabilities, processing capabilities of nodes, or scaling needs, it doesn’t really change the overall picture.
We've had LLMs for about 5 years so far in non-academic research. If we're talking 10 years out that means we're looking at tech that's about 1/3 through its development to date.
Take any mature-ish technology that you use today and compare the version 1/3 through its life to the version you use now. Look at Chrome 20 compared to Chrome 111, or React 0.14 compared to React 18, or an iPhone 4 compared to an iPhone 14, or a car from 1950 compared to a car today...
The difference is always quite significant. Superficially they're still the same thing, but if you look at the detail everything is just better. AI will be the same.
You can't extrapolate from an arbitrary selection of technologies and assume that LLMs will have the same trajectory. They could be like the iPhone, or they could be like self-driving cars which have been a year away from replacing all drivers for 10 years now.
Self-driving cars a few years ago seem particularly close in hype level and apprehension to LLMs today, but progress on those has not matched expectations at all. What if GPT4 is the last major advance in LLMs for a really long time?
Might just be me, but I think the big difference here is the level of adoption. Everybody with an internet connection can use an LLM. It hits closer to home that way, whereas driving is very dangerous and most people haven't used a self driving car before.
Superficially they're similar in the "they both have 4 wheels and an engine" sense, but you could examine literally any part of a car today compared to one from the 1950s and find huge improvements. The efficiency, safety, comfort, tech, manufacturing... Everything is better.
I agree with the safety angle, but besides that driving a car from 2023 is not substantially different from the 1950s (in the sense that it opens up a lot of new possibilities).
When I started reading your comment I thought you were going to argue the opposite. Getting my first iPhone (3G) was a huge change. iPhone 4 to the latest are mostly incremental improvements. Aside from the camera, I could probably live with an iPhone 4 without many issues. Only the software is a lot more bloated now.
We still had a Moto X from 2013 that my wife would power on every now and then to test an app that they were developing (iOS household), and besides the camera it still looks like a perfectly usable modern smartphone. When using it, it doesn't feel like a phone from prehistory.
The whole mobile economy pretty much started in 2008. The first iPhone was released in 2007, but the App Store was launched in 2008. This changed the landscape dramatically, even if you only consider software development. Before 2008 you were fine writing just a Windows-only desktop app in Delphi: no smartphones, tablets, smartwatches, or smart TVs, and you could get away without supporting macOS or Linux.
> as todays would have been to someone asking 15 years ago (2008).
I don't think, if you took someone from 15 years ago and transplanted them here today, that they'd find it all that different technologically. Sure, machines are faster, slightly different, and such, but the fundamentals haven't changed. A software engineer could just as well write an app today as they could 15 years ago.
You'd have to go back 30 years for computers (and the landscape of computing) to have been different enough that you can't transplant a software engineer.
30 years ago (1993): Linux existed, Python existed, the web existed (Mosaic), DOOM (3D graphics), and even the Apple Newton (mobile) existed; and C, shell, windows (GUI), spreadsheets, SQL, etc. were known long before that.
What exactly revolutionary happened in the last 30 years? JavaScript? (a two-week project)
Amazon, Google, Facebook, Netflix, the iPhone, Instagram, TikTok -- the execution is great, but it seems inevitable that somebody would create them. OK, for non-IT people the iPhone was a game changer (the first personal computer that your grandmother can actually use).
The ability of generative AI to produce BS indistinguishable from human BS is very impressive, but it remains to be seen whether it is a net positive for an average developer (the time wasted correcting it and waiting for its output can be spent understanding the problem better -- typing the code itself is a small part of the work for a programmer who knows what they are doing).
Was the tech landscape much different 10-15 years ago? This is a genuine question; the iPhone App Store was really the last "big thing" to happen to the industry in my mind, and it came out in 2008.
> I'm sorry, but the landscape in then might be as alien to someone asking today, as todays would have been to someone asking 15 years ago (2008).
Hahahah. Yes. Who could have foreseen the trailblazing advances in the tech industry such as "television, but over the internet", "booking rooms, but via a website" or "posting messages on a forum"
Don't forget the stuff powering it: "RPC, but over HTTP", "scripting languages, but compiled", or "Key-value stores"
2008 was extremely similar to today, although the webdev ecosystem wasn't quite as degenerate. I'd say you'd have to go back to the pre internet era to find a work environment that was fundamentally different.
You have plenty of time and can learn CS and earn a lot of money for years even if at exactly 120 months from now your job is made obsolete. It doesn't take 9 years to learn to code.
The premise of all this seems to be that learning how to program computers is difficult or complex. It is not.
Also, AI will not replace human reasoning in 10-15 years. If it does, it means AGI, and we all have much bigger problems than layoffs.
> If programmers can be replaced by AI, so can every other white collar job
If programmers can be replaced by AI, then it can replace even the blue-collar jobs. Because if it can’t, that is what this developer here will be working on.
I'm pretty sure the reason human physical labour hasn't been replaced yet in many areas isn't because AI tech hasn't advanced sufficiently - there are real engineering challenges in automating physical interactions with the real world that it's hard to see how ever more advanced LLMs will help much with (though they could certainly assist in the design process).
That humans are still needed to cook/assemble burgers or peel veggies is in some ways more surprising than the code and language generation capabilities of ChatGPT.
> physical labour hasn't been replaced yet in many areas isn't because AI tech hasn't advanced sufficiently
I’m a robotics engineer. There are two options. AI can either replace all jobs or it can’t.
If it can then we are all out of a job, and then the next project is how to organise society such that everyone can live a good and fulfilling life in harmony.
If it can’t, for whatever reason, then that is the next thing I will be personally working on. Simple as that.
Because of this I don’t see how it would be possible to run out of programming jobs before running out of all the other jobs first.
You are talking about ChatGPT and LLMs, but what I am saying transcends particular technologies.
They’re held back by robotics and energy storage more than anything. How would ChatGPT paint my wall or fix my sink or install electrical wiring in my house?
It's worth wondering, though, how long it will be before ChatGPT could give you complete instructions for building a machine/robot capable of doing any of those things... or better still, building a factory that can churn out millions of such machines...
The lesson I've taken from ChatGPT so far is that "consciousness" may be much less interesting or "special" than we thought. It may turn out that it's nothing more than an emergent property of a shared language that models the world around us and gives us capacity to plan and communicate our actions.
It will of course continue to improve at what it does but I personally think it's unrealistic to assume it will somehow spontaneously develop generalised cognitive ability; there are surely limits to how far this particular approach will take us.
Right now it will only solve problems someone else already knows the solution to, so it's not even at that point yet.
That said, I expect an AI-assisted clerk to be an order of magnitude faster than one without. It will be tough for people at the bottom of the learning curve for a bit, but within half a generation the educational offering will include how to work together with AI, massively improving worker productivity.
That is not to say it won't have a negative impact. There's so much work that we need done; currently programmers are in high demand and it's one of the highest-paying jobs, but that will change, possibly dramatically. I expect people at the top of the chain to be in trouble first (architects and whatnot), because theirs is the least creative work and possibly the kind that requires the most knowledge -- things AI does exceptionally well as of today.
Having helped companies in traditional lines of business to "streamline" their work processes, I am quite clear that much of the stuff that the West outsources to Asian countries will eventually be outsourced to AI.
It is already the case that in many industries there are hardly any traditional coder jobs on site, and having to explain every step to keep an offshoring assignment from going off the rails is hardly going to be any different from explaining it to an AI, once it gets good enough.
Maybe only by GPT-20, but it will come, and then the roles of architects and business analysts will be the only ones left.
> Maybe only by GPT-20, but it will come, and then the roles of architects and business analysts will be the only ones left.
Agreed. Although... we actually don't know which jobs will be the ones left. For all we know, it will have taken over business analysis, too. Plus, from my experience working on architecture, I suspect that automating that part of my job won't be too hard.
For all we know, the only jobs left will be nurse and deep sea miner.
The title of this post makes it seem like John Carmack is concerned about AI making CS jobs obsolete. But that's not at all what this is. This is someone else asking Carmack about his thoughts on this topic.
(Submitted title was "John Carmack: From a DM, just in case anyone else needs to hear this". A mod changed it to "I’m concerned AI will make CS jobs obsolete" but yeah, that changed the meaning. Sorry!)
Right, and his response is, while not quite the exact opposite of the title, tangential to it at best. He says that programming has only ever been a tool to create software for humans, so just studying programming as an abstract concept detached from product skills is not advised. He does not say CS jobs will be obsolete in the future — if I can read into it a little, he's implying that the completely theoretical CS approach has never been the right path anyway, and AI will make that more obvious.
The computers are coming for jobs ever up the white collar scale. When I started working there were (barely) still secretaries who typed and filed things for their bosses, and quite a few geeks had jobs that involved assembly language programming.
AI will take jobs. Super frustratingly, it'll probably make call centers even more useless (has anyone ever gotten anything useful out of one of those "Hi, I'm ____. Can I help you?" popups at the bottom right of websites?). And it'll certainly automate some of the "copy/paste" type programming jobs at the lower end of the scale, the same way email automated a lot of secretarial jobs; i.e. 10 full-time copy/paste programming type jobs may become a job for one human and an AI assistant.
Which leaves people who are really passionate about and good at their craft. Somewhat relatedly, I've read about an uptick of people going into the trades. AI won't take plumber or electrician jobs away in the foreseeable future.
But talking to my friends who do these jobs it always seems like it would be even more vulnerable to AI than programming.
Experienced electricians get paid decent wages because they have had lots of training and then have seen loads of different problems. So they intuitively know things like 'This is a 1960s house so if there's a problem with the lighting the first thing I should check is the fuse box connector, it should look like xyz, etc. Etc.'. This seems like exactly the sort of thing an LLM could do for them.
I think you could easily see a world where an electrician is someone on minimum wage with very minimal training who blindly connects wires as instructed by an AI.
I reckon the safest jobs are ones with limited reliance on knowledge and a very high level of physical skill (in environments where it's hard to get machines to operate). Bricklayers, plasterers and painter/decorators will be the big earners of the 2050s!
I wonder if there's enough info about how to do tradesmen's jobs online for that to happen. Programmers are at risk because we filled the internet with free training materials but many jobs aren't like that especially anything with a physical component.
This is an interesting point. A family member of mine is what we call here a medical evaluator - not sure if it has a direct equivalent in e.g. the US or what it would be called there - those are doctors who assess the disabilities of workers who apply for a pension due to illness or accident. This involves examining the patient and then making the decision and justifying it in a report. The latter two seem like tasks that LLMs should be able to do easily.
However, we tried a description of a fake case to see what Bing could do, and it couldn't do much. And I think the reason is that there are very detailed documents on the rules that they follow for their decisions, but these are not online - they are in a private intranet and they can't take them out of there. If Bing had access to those documents I don't think it would have much of a problem.
So maybe a way for workers to protect themselves from being replaced by AI is not uploading much information about their jobs to the Internet... I wonder if this will lead to a situation like guilds in the middle ages, treating job skills essentially as secrets.
I don't know if it is enough yet but there has been an explosion of this kind of content on Youtube over the last 10 years. For typical home repairs it seems most topics are pretty well covered.
The most recent electrician jobs we've had done were:
- fitting a timer into the switchboard to control the hot water cylinder. A simple job, but the sparky also had to talk to me (the client) to get us both on the same page.
- fitting an EV fast charger in the garage. Not much science, but a lot of cable running and clipping down, then the garage switchboard needed to be swapped out for a larger one that could take the required RCD. And convincing me which brand of charger to go for. 2 guys working together for a couple of hours.
- fixing the range hood light (always on due to a broken switch). He spent quite some time trying to extract the broken switch, with the range hood balanced on his shoulder and wires everywhere.
In every case there was no real complexity to the job; not the sort of thing an AI could have helped with at all. Just a lot of common sense, knowledge of the regulations and much skilled manual work.
I don't think AI is coming for electricians any time soon.
But in all of those cases presumably someone needed to figure out what needed doing? (In your case maybe you're savvy enough that you knew what the issue was and just needed a certified person to do the work, but most clients won't be).
My argument is that it is the 'figuring out' that drives electricians wages, not really the doing part. Because while clipping down cables and extracting switches is fiddly work, I'd argue it isn't a skill with enough barrier to entry to maintain high wages (as compared to brick laying or plastering, for example, which you simply can't do to a professional level without years of practice).
So most of the value delivered by an experienced electrician is in talking to clients and identifying the correct technical solution, and is therefore pretty much analogous to the value delivered by software developers.
Therefore if we accept the logic that software developers will no longer be required (or that their value will be greatly diminished) it's hard to see how that wouldn't apply to electricians too (in the sense of being a well-paid trade over and above your average manual job).
(Btw - I DON'T think either will happen, but I just think electrician is a weird choice of example for those that do think that)
There is no reason to expect robotic technology to halt. Look at what things like Tesla or Boston Dynamics robots can do. Eventually we will see very well articulated and high strength to weight ratio robots integrated with advanced AI systems. It is definitely not going to take 25 years.
If you look at what's happening today, in 25 years it seems plausible that fully autonomous superintelligent androids with much more dexterity than humans will be fully in control of the planet.
There is, though. In Europe, finding a plumber who will take you on can mean waiting weeks, precisely because the sinks in existence keep breaking down.
The safest jobs are ones that are true to the doer's self. He/she will be able to create value, whether using other humans or machines, and will continue to do so.
Sorry, I wanted to go with "the safest jobs are ones that involve politics", but while those will always be present, they are not the safest and there won't be many available, so I changed to a more abstract answer.
> AI won't take plumber or electrician jobs away in the foreseeable future.
I see this argument way too often. How many electricians and plumbers does the world really need? And if the market becomes over saturated, how much will those jobs pay? How often will you actually have a job to do?
If you were to live with only what was around at the time such productivity increases occurred - e.g., no internet, phone, or high-speed computing, no access to medical/transport improvements, etc. - you'd probably be able to live off minimum wage easily.
People work more today because they desire more. In fact, that desire has outstripped their ability to increase their wages!
That is a common counterargument, and yet the common counter-counterargument matches my experience:
That sadly doesn't apply to housing or food, at least in the places I've lived. E.g. houses in the place I was born are now 5x the price, once you factor in inflation. Food and vehicles are 2x. Higher education went from "free" to $30K.
The really important things are far more expensive now, across the three countries I've spent decades in. Maybe the overall situation is different for the US of A.
I think we've already seen the dividing line drawn between people who can ultimately be replaced by a computer (or at least, those whom top-tier management believe can be replaced by a computer) and people who won't: Work from Home.
I imagine writing code by hand without AI will be the assembler of our times.
Maybe the reasons to do it won't be "performance", but actually "maintainability" or "legibility".
Very similar to how, 99% of the time, you don't care about the machine code your high-level code generates -- you just make changes and happily replace it every time -- but for that tight loop you want to keep it in assembler. I can imagine a world where, let's say for a bunch of simple UI components, you just ask the machine to do it for you, and if tomorrow the requirements change a bit you ask again and throw away the old one, no big deal. But some gnarlier piece of business logic, harder to explain even to a human, may need more careful treatment and to be easier to change by hand, because that's where most of the changes happen. Maybe.
As a self-taught polymath, I did a lot of research many years ago on how good teams create good products. I reached some unusual conclusions at the time, but they've stood the test of time.
The main one in regards to John's tweet is this: desired behavior has to "force" out code and architecture. Typically teams and individual developers carry a lot of presumptions and assumptions into their solution space. They probably pick this kind of thing up from reading HN, Twitter, and the like. We all do this.
It's these "things you do but don't really think about" that are so pernicious in technology development. Guess what? Looks like AI is going to master that kind of thing since with each social media post we continue to train it on "How things are usually done"
By taking an extreme "force me to write code" approach, you end up developing a wide range of skills: DevSecOps, analysis, database design, and so on. In fact, you don't really know what you're going to develop when you start working on it.
That's the point. That's the job. Always has been.
Can you clarify? Are the “things you do but don’t really think about” good, like good habits that have become ingrained, or bad because they’ve become mindless rote that are often unnecessary action?
(“Force me to write code” suggests that we’re looking to think deeply about what we’re doing and trying to minimize unnecessary action, so the rote actions are bad.)
Apologies. I overstated. How about "I study a lot of stuff"
And now the reply, if I've accurately modeled it, is "So you've studied a lot of stuff, what makes you an expert on any of it?"
Programming at its heart has always been cross-discipline, so there's really no judgment of ability aside from solving problems involving those disciplines you're working for. I find myself with a lack of adjectives that will keep our conversation short.
For future reference, you can just use "autodidact."
Next time if you want to brag without looking like an ass, describe something really impressive you did in detail to illustrate a point using highly understated language. Not only will you not get penalized for bragging, people will give you bonus points for being humble.
Polymath
> A polymath is an individual whose knowledge spans a substantial number of subjects, known to draw on complex bodies of knowledge to solve specific problems.
If that's who you are, there's no shame in identifying as so. Don't let hacker news comments gatekeep you.
There's nothing wrong with being confident with your skills. Humbleness is only seen as a virtue because people have an ego and don't like to be reminded that they are unskilled.
What’s left unsaid: many programmers can’t or don’t want to “accomplish something for people”. They just want to code.
Such “automation is not a problem, because…” opinions have something in common: they’re looking at a subset of the affected population which has some trait making the transition easier.
Personally I’ve tried my hand at roles like architect, product owner, scrum master, etc and I was involved in most aspects of a software product’s lifecycle. These other roles are very different to coding and for someone that enjoys the simplicity of taming a machine, even exhausting.
I have my doubts that there will be enough “AI guide” jobs for all programmers, but the specific person Carmack’s talking to may indeed be fine.
The purpose of software companies is to earn money not to give jobs to people who "want to code".
Lots of people learned how to make games for NES, SNES, PS2, PS3, etc.
All those machines are now obsolete.
Current gen consoles are using x86 and ARM now and most upcoming AAA games are built on Unreal Engine.
The competitive advantage of those who mastered coding for the Cell processor is gone.
What should SONY have done instead?
Should they still use the PS3 architecture in the current gen consoles to keep those developers employed?
My university degree was in "Industrial Design and Technology" (~16 years ago), an incredibly broad course covering everything (aesthetic design, design for manufacturing, material science, UX, UI, electronics, a little embedded C, ergonomics). But the main thing it taught was how to use these tools and skills to build a product that solved a problem.
AI is just another tool to enable us to build things that make people's lives better. Sure, it will supersede some older tools, but we aren't going to see it take all jobs away. People still need to plan and steer it to do what we want.
Power tools and shop automation didn't end the job of joiner/wood worker.
I'm not worried about AI taking jobs; I'm excited about how we can use it to enable new classes of product that make our lives better.
Just as an artist will have to learn how to work with new paints, but this enables finding new ways of expressing themselves. We just need to learn new ways of "painting" with generative AI.
Apparently it's the opposite. They can improve its output by letting it reason logically or critically on its own output, somewhat like a thought monologue I assume. Not familiar with the details, though.
I'd say don't count on that one. Feeding on others knowledge and then recombining it is also what we do. In comparison to AI we do not appear to be particularly great at any part of this game.
What GPT does as recombining is what it sees from us. AFAIK. Hopefully this AI thing will evolve faster than our compute capacity and we can switch it off in time :)
Oh shit. It will read this later... :/
The other 90% is understanding specifications of requirements (sometimes even helping customers write them), produce detailed functional specifications, cost analysis, prototyping, meeting with third party vendors over interface design specifications, determine the project's scope, testing, delivery, integration and commissioning, bug fixing, identifying and managing scope changes among other things.
AI may help you complete some of these tasks more effectively, but at the end of the day it will be just another tool in your kit.
That really depends on the type of "Software Engineering" you're doing. In my experience, in greenfield projects, boring CRUD-type programming can easily take up 50% of your time.
It'd be great if AI could automate this boring CRUD-type programming away, and let me focus on the architecture and interesting algorithms.
I think something that a lot of people might be overlooking is just how much this might devalue software as individual products. How many technologies do we have now that might become irrelevant once LLMs become more mainstream? How can any company keep a moat around their product if anyone can simply generate that same function (or similar) with a few prompts? The only reason any software is particularly valuable is because of the difficulty that comes with making it.
An example that comes to mind is Jira: why have verbose task management software when bespoke task management systems become even more viable for individual companies? Or better yet, given the need for individual cogs decreasing, why have that at all?
This also extends to the creation of any sort of new business, perhaps there are patents on specific products and brands (which might be the saving grace of many large orgs) but outside of niche applications and hardware access I can't see how someone can reasonably gain a leg up in such an environment.
edit: This is more speculative, but what if software actually becomes more of a process of molding a large language model to consistently behave in a certain way? Why have some code that manages your back-end functionality when, for a large share of applications, all that is really occurring is text manipulation with some standardized rules? If those rules can be quantified, and consistency can be expected, the only "coding" that needs to be done is prompting the model.
It's funny seeing this attitude here from developers types, but when it comes to art or writing or legal work, it's all "disrupt" talk instead. Of course you guys are the ones with the special sauce, something those pesky managers can never grok. Meanwhile continue your work to put them out of a job, the manager definitely won't decide to fire you too as soon as he can justify it to the c suite
Whenever I use midjourney, which is a lot, I think about what I could accomplish if I actually had art skills, to feed to it, and to edit and compose its results, and if all these tools were tightly integrated into existing tools.
It seems similar. New heights are possible for those with skills, and the barriers are lowered for those without training. There will be new demand for both things, competition for quality at the top, and new applications where it wasn't worth it to produce art before at the bottom.
Both art and programming seem hard to predict what value you'll get out of investing in skills now, because there's both barriers being broken down and demand being opened up.
I do think there is a key difference, though, which is that software hasn't shown signs that it's getting close to done "eating the world" yet, due to the universal nature of the Turing machine. It's nothing to do with programmers being special and everything to do with computers being a single tool that's applicable to everything and has sustained decades of exponential gains in power.
I wouldn't stop someone from investing in art skills if that's their interest, and I wouldn't stop someone from investing in programming skills if that's their interest. But if they were interested in both equally, I would absolutely suggest programming.
> new applications where it wasn't worth it to produce art before at the bottom
I've been using Midjourney to create logos for my FOSS projects. In the past I'd never spend money on something I'm making for free but now I just generate a few variations of a prompt like "create a minimal flat vector logo for a software product called X" and I pick the best one. I don't need a logo for a FOSS project but the barrier to entry is so low there isn't really a reason not to do it. I still pay humans to design logos for products I want to market because AFAIK there isn't really a great way to do minor revisions with Midjourney.
I agree with this mindset. There was a leap forward in the productivity with which you can accomplish your goals, brought by modern IDEs/stacks and all the new tooling with them. What took massive teams before can now be done with a handful of people or even a single person. AI will just accelerate this type of work.
As elitist as this sounds, when I hear people being afraid of stuff like this it makes me feel like we are in the period where people are getting paid well (overpaid) just to do very mundane stuff, the bare minimum or content to never further develop their skills. If that is your mindset, then of course it feels threatening.
I would rather much play the role of conductor or an architect. There are times that I'm limited by my hands and mind and just grinding through variations of things I've done 100s of times before. If AI can fill that gap all the better. We will adapt.
I'm sure one day that won't even be necessary. We can probably worry then.
Honestly, Delphi's WYSIWYG GUI editor in the late 90s felt way more productive to me than all this Electron "cross-platform" JavaScript React thing today, where you need to take care of about 1000 random dependencies to render a rectangle.
Yes, human economy is about the exchange of value between humans – a cheap tautology from Carmack. Money is indeed irrelevant to a snail, as opposed to a would-be SW engineer.
But Nature is about making better use of energy gradients, always doing more with less, the principle of least action. Using any surplus to do it again (i.e. evolved life). That's the properly grounded perspective. In that sense "Get skills to satisfy humans and you'll probably be fine!" sounds super myopic.
The anxiety we humans feel when confronted with AI is not only that we'll be out of our job as a programmer, or doctor, or driver, or teacher, or whatever.
It's the broader sense of unease that humanity's gradient-razing days, spectacular as they were all the way to nuclear fission and fusion, may soon be over. And "economy" as a useful tool advancing that Nature's mission will have evolved beyond us.
"Making humans satisfied" is not terribly relevant from that perspective. Vast swathes of the human economy are just scaffolding to support the rest: humans reproducing to keep the optimization machine going. The overhead is tremendous. Once Nature finds a way to do more with less, I have zero doubt much of that scaffolding will be optimized away. That's some definition of "fine".
Or maybe I misunderstood and Carmack is merely suggesting individuals try to adapt and hope for the best. What else can you do anyway? That would be the honest answer. Rather than bloviating about "Guide AI with your product skills to deliver value to humans" – an embarrassing category error.
Interesting critical response currently buried at the bottom of page 2. Doesn't surprise me really. Frankly this thread is an embarrassment. And not because of whatever someone's conclusion is but because of a noted lack of critical reasoning in this case.
> It's the broader sense of unease that humanity's gradient-razing days, [...], may soon be over.
No, the sense of unease comes from people fearing for their livelihood, given that it looks like that their raison d'être is about to be revoked by some AI automating their career away.
A point you so aptly summarized with:
> Vast swathes of the human economy are just scaffolding to support the rest: humans reproducing to keep the optimization machine going.
The unease is, imo, only so palpable, however, because every stakeholder that stands to profit from such a development - i.e. capital owners - refuses to describe it as what the development really is.
Publicly advertising AI as "we're basically moonshotting the largest job automation operation ever attempted, so 20-30% of you will be out of a job in the next 15 years" would probably cause unrest - and rightly so!
> What else can you do anyway?
Communism - and ya hate to see it (as an american)!
The means of production becoming dead-simple to use, that's a prelude for the proletariat being able to seize them.
On a more serious note, you can always protest, and escalate, which I'm hoping will happen.
> the sense of unease comes from people fearing for their livelihood, given that it looks like that their raison d'être is about to be revoked by some AI automating their career away
You're in the same boat as Carmack if that's where you find the danger. Then his advice applies.
People have had their livelihood threatened since forever – that is not a new state of affairs. That anxiety we're well equipped to deal with, hard (and genocidal) as it sometimes is.
Just another gold rush for resources, and for status, and for reproductive success. A new caste of winners. Same old.
> so 20-30% of you will be out of a job in the next 15 years
My point was that jobs (and money, and market…) are a machine that serves human ends. The end consumer is always a human, that's our economy's anchor point.
Whatever energy is put to work is ultimately in service of someone getting laid or fed, or their desires/ideals (~firings inside their brains).
As long as that's true, "keep your eyes on delivering value to humans" is a safe bet – just like Carmack says.
Mine was a simple thermodynamical note on Nature (e.g. via one of its creations, humans) finding a more efficient way to bulldoze energy gradients, the true currency of the universe, thus unanchoring our "jobs and careers" system. By cutting out the very expensive middle man.
---------
Your remark on communism is also interesting in that sense. I see a parallel in that the concept of dissolving the individual into a commune has been introduced several times in large societies, each time resulting in millions of deaths and general devastation. Yet it keeps bouncing back -- some very strong attraction basin lurking there, evolutionarily speaking. Not to be trivialized by snarky political one-liners.
First off, there's a lot of people shooting off their mouths - ignore anyone who hasn't used ChatGPT extensively: it takes some training to learn to use it.
Several senior developer friends have been using ChatGPT quite a bit and it seems to work well in lots of places:
- isolated algorithms and fiddly bits - it writes complex SQL statements in seconds, for example. LLMs should make quick work of fussy config files.
- finding, diagnosing and fixing bugs (just paste the code and error message - really!)
- unit tests and examples
- comments and documentation
Professional developers will recognize that we're talking 50-90% of the LABOR-HOURS that go into software development, and therefore fewer developers to get the same work done. Sure, we just do more - but then we quickly hit other speed limits, where coding isn't the problem. I can see layoffs among the bottom-N% of developers, while more sophisticated developers add LLMs to their toolbox and use this productivity to justify their high $/hour.
I see AI writing code that casual human readers don't really understand, but this is OK because the AI includes comments -- just like developers do for each other today.
Like you I found that ChatGPT is not really all that great at coding, but great when you ask it to do very specific grunt work. I'm working on a new database, and one thing I found it super useful for is generating test data. I would just tell it: "here's the CREATE TABLE statement, create 50 rows of test data off of it, with all of these specifications: this has to be this, that can only be 1 or 2, yada yada yada."
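Just to illustrate the kind of grunt work being handed off, here's a minimal Python sketch of what that task amounts to. The "users" table, its columns, and the constraints are hypothetical stand-ins, not the actual schema from the comment above:

    # Rough sketch (hypothetical "users" table, not the poster's schema) of the
    # test-data grunt work described above: emit 50 INSERT statements where
    # "status" may only be 1 or 2 and "email" follows a fixed pattern.
    import random

    def generate_test_rows(n=50):
        rows = []
        for i in range(1, n + 1):
            name = f"user_{i}"
            email = f"{name}@example.com"   # "this has to be this"
            status = random.choice([1, 2])  # "that can only be 1 or 2"
            rows.append(
                f"INSERT INTO users (id, name, email, status) "
                f"VALUES ({i}, '{name}', '{email}', {status});"
            )
        return "\n".join(rows)

    if __name__ == "__main__":
        print(generate_test_rows())

The point of the comment, of course, is that you no longer have to write even this much yourself: you describe the constraints, paste the DDL, and let the model produce the rows.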
> Professional developers will recognize that we're talking 50-90% of the LABOR-HOURS that go into software development,
I call it 'dumb coding'. You have a type of programming that requires you to really think, and then there's the type where you just need to write 200 lines of code but you know exactly what to write. If AI could pick up the slack on 'dumb coding' and let us think about 'smart coding', we would all be way, way more efficient.
> Professional developers will recognize that we're talking 50-90% of the LABOR-HOURS
More like 20-30% at max. And that's not including debugging the output of ChatGPT, which I've found makes subtle mistakes - which will probably take away all of the time gained.
Writing code isn't the biggest time sink, figuring out what to write is.
I’m sure my org isn’t unique, but we are constantly at max capacity and we have no money to hire new people. We have projects in the queue that will keep us occupied for years. I don’t think even a 50-90% speed up will lead to lay offs. We will just finally be able to get more shit done.
The backlog grows at a faster pace than the company completes work. The backlog is never meant to be completed. Your job security is not based on having a long well groomed backlog.
It can do basic math reasonably well (and this generation succeeds where GPT-3 failed).
Interestingly, asking it to verify itself does resolve bugs sometimes. Managed to fix subtle count() denominator bugs and an inflation-adjustment error with not much hinting on my end.
You can only see it struggle really hard at the end when it tries to normalize month ranges correctly. It seemed to hit conceptual problems over how LAST_DAY() was being used and couldn't debug itself.
I tell it that I'm using <database and version> and give it the relevant DDL statements (e.g. CREATE TABLE, etc) then ask it to write the query to do <x> in plain English. It does surprisingly well.
But!!! the first response is rarely dead-on and instead, just like a junior eng I need to guide it: use (or don't use) SQL construct <x>, make sure to use index <x>, etc.
Example: to sum the values in a JSONB field, GPT desperately wanted to use a lateral join but that would have made for a very awkward set of VIEWs. So instead I directed it to create a function to perform the summation.
Sorry but no, ChatGPT can only do some very specific and specialized tasks, it doesn’t save meaningful time. It’s a tool in the toolbox, but it’s not a game changing tool; just one more thing to reach for when you need a complex transformation, or when you need to unblock yourself.
Zero developers will lose their jobs due to LLMs. That’s just yet more needless hype and expectation.
My fear is that people who are not competent enough to judge AI-generated content will use it, intentionally or unintentionally, as a sort of denial-of-service attack on expertise. Picture middle management churning out some half-baked code solution in 5 minutes of copy-pasting from the spec, which then takes you hours of investigation to find the pitfalls, plus 5 meetings to get the manager to concede that she did not provide 95% of 'the solution', all while dodging a barrage of slight adaptations 'that should solve your remarks' and being derided for 'negativity'.
I love GPT4, but I hate what it will do in business environments
My hot take on AI code generation, which matches my understanding of how all of these GPTx models work: if you don't understand the output, you are far worse off using it than not.
At the moment, it works as a pretty powerful suggestion engine. It might suggest the wrong API to call, not handle the edge cases correctly etc. If you assume it does, or don't understand when it doesn't, you're in for a world of hurt.
I’m personally experiencing a bit of a honeymoon-over moment with ChatGPT (even 4). It seems to be better in the exploratory phase of a project - show me something about x or y. However, I thought / hoped it would be better at doing things that I know how to do but don’t feel like writing them or using a library (which then becomes a curation problem) since they should be < 50 lines of code.
I really struggled with it, for example, to write a base62 serializer (C#). It either came up with an O(N^2) solution, performed far too many allocations, missed edge cases, or simply wrote incorrect code.
This is just one example of ~50 lines of code that you would just like to work.
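For a sense of scale, here's a rough sketch of the kind of small, linear-time routine being asked for. It's in Python rather than the C# the comment describes, it only handles non-negative integers, and it's just an illustration of the size of the task, not the poster's code:

    # Rough illustrative sketch (Python, not the poster's C#): encode/decode a
    # non-negative integer in base62 in linear time, with no repeated string copies.
    ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

    def base62_encode(num: int) -> str:
        if num < 0:
            raise ValueError("only non-negative integers are supported")
        if num == 0:
            return ALPHABET[0]
        digits = []
        while num:
            num, rem = divmod(num, 62)
            digits.append(ALPHABET[rem])
        # Collect digits in a list and reverse once at the end: O(n), not O(n^2).
        return "".join(reversed(digits))

    def base62_decode(s: str) -> int:
        num = 0
        for ch in s:
            num = num * 62 + ALPHABET.index(ch)
        return num

Nothing clever, which is exactly the point: it's the sort of few-dozen-line routine you'd hope an assistant could get right on the first try.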
In any case, I have felt in the past that LLMs could make me 10x more productive but today it feels more like 1.1x. I’m hopeful my disillusionment is temporary.
On one hand you’re right, but you’re missing the other side - business requirements have always expanded with each innovation.
Basic web dev has been commoditized for a while, but a company of any size requires much more than basic web dev.
Humanity has an uncanny ability to devour any excess capacity, and I see no reason it won’t do the same with things AI makes easier/lower cost.
What I do worry about is that AI will put further pressure on the lower skilled jobs keeping many people afloat. Call centers for example, likely will not exist very soon.
"Humanity has an uncanny ability to devour any excess capacity"
Well put!
You could also see the pressure on lower-skilled jobs as a positive thing in that light, I suppose: right now it's not super cheap to run a terrible call center. If five years from now you essentially get that for free, companies might see an opportunity to rise above "cheap and terrible" to be more competitive, which would likely create jobs again. Jobs which, however, don't have the main goal of being done cheaply and terribly. A way more level playing field than what we have today, where companies compete by choosing the areas in which they want to shine.
Maybe terrible writing, art and customer service becoming nearly free _can_ be a tipping point of sorts. It only wouldn't be if nobody cares, then the market will adjust for that. But I'm not giving up on consumers just yet.
- it allows us to focus on the goals and not the process
- we still need people who understand the PROCESS (developers)
- people who can build tools that bridges the gap between AI generated zero value spam and bespoke human created content.
- we need tools to introduce subtlety and craft to AI outputs
- those are things that are important in a high saturation low margin environment - attention to detail (consider videogame crash of 1980s).
At the moment, if I generate an image with SD it's very difficult to change something specific in a nuanced way. That is where tools made by good developers will emerge. High-value work is nuanced and often extremely subtle.
An increased ability to introduce subtlety will also open up new niches of interest, because the creator can focus on expressing their ideas in very personal ways.
> There will always be need for top tier leetcoders. but barrier to entry will get much higher.
That's definitely a possibility. The barrier to entry won't be just in terms of how much you need to know, though, but also the fact that society as a whole might not require that many top tier leetcoders.
> There will be an effectively endless demand for anyone who can create 10x the value while only demanding 3x the pay of the typical worker.
That's one possible future.
Another future is one in which a single AI assisted "full tech developer" can solve all the coding problems of a company within say, one week. No company will require the services of an entire "full tech developer", so society will employ roughly one freelancer / consultant per ~30 large enough companies.
I've been interviewing with a lot of companies in the past few weeks. I'll agree that one week is probably optimistic, but most of them seem to have requirements that are in fact pretty simple.
Admittedly, reaching the one week stage might require better programming platforms and APIs.
When I was 19, I met Marvin Minsky at a local convention. I was a CS major, and he assured me I'd be out of a job in 10 years. I'm in my late 50's now, having spent the last 35 years programming, and am now leisurely planning my retirement. I like John's turn of phrase, "AI-guided programming". But that's already a thing.
That sounds like a good thing. There are many people paid to write software who absolutely cannot write original code and have no idea how things actually work. A lot of that can be, and probably should be, eliminated by AI.
What is considered "original code"? The code you are writing is probably not original, and is built upon layers and layers of abstraction that eventually get translated into 0s and 1s. AI, or perhaps ChatGPT-20, could become another such abstraction layer.
Original code refers to the ability to solve a given automation problem without somebody writing that solution for you. That means no frameworks or plugins that solve that for you. The difference is someone who can provide a solution as opposed to copy/paste/configure glue code.
Then they can find employment in industries facing dire shortages like education and nursing. If the only goal is to put unqualified people into seats for a paycheck then it doesn’t matter what they do for income.
However, retraining may become really hard. Especially if you're already, say, a 60-year-old programmer (or business analyst, or executive assistant, etc.) who suddenly needs to become an (AI-assisted) teacher or nurse.
Wouldn't most of those people look towards early retirement? We've failed as a society if 60-year-olds end their career in one of the highest-paying professions and they can't cover rent and food.
I know a few people who are a bit younger (maybe 55?) and who, after a career in high impact jobs, found themselves without any possible position. I know that they're not nearly ready to retire. I don't know about how much savings they have.
In these cases, that's entirely independent from AI, but I suspect that AI will only make it harder.
And yes, it feels to me like society is failing for many people who are approaching retirement age.
Let's try and come up with a few examples. I know two people who are former C-rank executives, both of them with strong ethical backbones. They have both led very high-profile projects (most people on HN know some of these projects). At this stage of their career, you could even say that ethics are not just part of their beliefs but also their personal brand.
They have both been on the market for a few years by now.
1. Nobody will hire them as rank-and-file. It just doesn't make sense.
2. At least one of them has received offers to be the ethical masquerade for a company that actually violently goes against their ethics. They said "no".
Last time I heard from them, they were both retraining in very different domains... with limited career success.
How old are you?
This is a very idealistic point of view. I might call it a "I watch a lot of Star Trek" utopian kind of view.
Vast majority of people in the real world do a "useless job". What do you think they should be doing and how do they get those alternative opportunities to feed themselves and their families?
Not to be glib, but I think you're too locked into an ideology.
Idealistically we shouldn't be working in jobs that do not add value to society, that could well include my job.
You work 8 hours to live 4.
You work 5 days to enjoy 2.
You work 8 hrs to eat lunch in 30 mins.
You work all year to take a week or two holiday.
You work all your life to retire in old age.
You might claim that people need to work to live, but honestly, if you're not adding value to society then expending resources and wasting your life is.. just wasteful? It's not just stupid it's literally harmful.
We're not post-scarcity, and I'm certainly not saying that there's no value in people's jobs -- I'm not even saying where to draw the line, but I think there is a line to be drawn. Driving a 4x4 to work as a greeter in a large shopping mall is just a confoundingly stupid notion and saying that a person should not be able to live without doing something as stupid as that is... well, quite divorced from the reality of the universe outside.
Not to say that there isn't a sociological requirement for people to feel useful, but that feeling could easily be found in more humane sections of life like being a good parent or good neighbour.
I have seen cases of companies that focus on recruiting seniors who get a lot of product responsibility and can quickly find solutions, since they see the customer/product view and also have a good feeling for how, and in which order, to best deliver things in code. So after talking to a customer about a problem, you just go and "talk" to the computer about the solution and get it out of the way quickly.
I have also seen cases of companies where you have one PM, one PO and one Team Lead to manage a group of four developers. In that case developers are seen more as translators.
My view of looking at this now is it is a bit like learning a language. Code is the tool you use to talk to a computer.
If you need to close a deal in a country where English is not spoken, do you prefer to send a businessperson who knows the language, or do you send a businessperson + a translator?
I much prefer companies where those who know how to code can still fill more of the product/business role than be seen as translators. However, I realize it is hard to find people who know both, and it may be easier to recruit a combination of business/product people who happen not to speak code and translators who happen to speak code.
This is perhaps also the open secret about startups: People speaking the language of computers without being limited by their role to act as translators.
Companies hate any skill set that can hold them hostage in the long run. Today that is the need for programming skills, which is why salaries are so high. But the moment a replacement for most programmers arrives, whether automation, AI, or zero code needed, employers will dump programmers before they can compile their last line of code successfully. In essence, this is the golden age for programming, and the cliff could be just over the horizon.
It's bad management that hates people with a skill set that equalizes the power relationship, because those skilled in the arts might hold them accountable. If you're roaming the earth making the big bucks, but barely containing your imposter syndrome, the last thing you want is interaction with someone with actual competence.
Companies are legal fictions, and have no motives because they don't actually exist, the people who control the resources of the company are the ones to worry about.
What he says here is so true, and something I see many engineers get wrong. Don't glorify and build your career around a single language or methodology.
I believe an engineer should learn to build things in the most pragmatic way possible using the best tool for the job. This requires breadth of experience across many areas, and a focus on delivery.
The idea of "Java engineer", "Scala engineer", "Golang engineer", etc. is so absurd to me. If you want to build a long lived career that will outlast tech fashion, learn many different tools and how to build software in different ways. Be known as someone who delivers, not an expert in language x or tool y.
I agree with this. When I see ChatGPT output a perfect React program, I don't think "oh shit, it's going to replace me writing React programs." I think "thank goodness I'm not going to need to write that shit anymore." Instead of writing 40 lines of React, I can write two sentences to ChatGPT. It's the same two sentences I would "write" to myself (in my head) before writing the 40 lines of React, but now I save all the hassle of looking up the details or typing out the same drudgery over and over again.
> The idea of "Java engineer", "Scala engineer", "Golang engineer", etc. is so absurd to me.
I find the idea of being an expert in a language acceptable if it's been used for long enough. Almost all of my coworkers only know .NET and there are enough jobs needing people for new projects and legacy projects that they most likely won't have to learn anything else.
> or tool y
This is where the absurdity begins for me. I have a coworker who is 5 years into his career with the title of "senior react engineer". He started as a "junior react engineer" and at this point only knows how to solve problems with React. He has limited knowledge of the underlying language or general CS concepts. Every problem he encounters can and will be solved by installing NPM packages.
Carmack is, of course, 100% correct. But so is the student.
As someone obsessed with nuts-and-bolts coding, grinding on technical problems and cleverly eking out performance, that type of career is sunsetting.
The role Carmack describes is one he is comfortable with because he has always been a product lead, even when he was a full-time coder. But in most organizations, that person is a product manager with social and personal skills, organization, and business sense.
For the better part of my career I was able to circumvent those social aspects of work, for which my personality is not suited, and my philosophical perspectives on things like "business value" could be brushed aside as I dug into technical weeds.
Not just because of AI, but because of the power of computing, one-size-fits-all cloud pricing, and the perceived value of organizational understanding over that of raw performance, there is little room left for 'this type' of programmer. And the remaining space is ripe for people whose personalities are suited to project manager roles to become the 'coders' Carmack references, not people like me.
I would point to Carmack himself who, despite being IMO the greatest programmer of our generation and having all the resources and responsibility he wanted at Meta, was unable to make this kind of coding valuable for them.
Just avoid careers in software development. These are not high paying, if you factor in total time invested and spent working; and you have to waste your life away sitting in an office chasing tickets. Nothing engineery about it. It’s modern day assembly line work.
Because it is indeed a shocking conclusion and hard to swallow. Few folks in software engineering are aware of what’s going on in the world around them. People stuck at their desks chasing tickets easily lose touch with their surroundings.
Even if they did get paid half a mil, after tax it's what a contractor earns in the UK. You probably have to pay for private health care too, and the cost of living is high. Not that much left.
Don't listen to the guy above. That's terrible advice, likely colored by an unhappy career. Then again, don't listen to me either, because what follows is colored by my being generally happy with mine.
> and you have to waste your life away sitting in an office chasing tickets. Nothing engineery about it. It’s modern day assembly line work.
In literally every job I've had, I've set my own direction. Sure, there's some negotiation, because ultimately you have to get the business's work done. But you make recommendations and figure out what's compelling to the business and how that intersects with something you might find interesting and want to work on.
It’s quite a claim to make that software engineering isn’t lucrative. Sure, if you play in local markets it’s not. But then again, no local market job really is. So as far as day-to-day work goes, software engineering at the local level is fine. At the global level, if you think you can compete in the top tech companies, you’re going to probably find a very well-rewarded career. It’s generally very hard to find something that’s paying you the salary of a US doctor or lawyer with just an undergraduate degree. Fears about AI feel overblown.
Not to mention how it destroys your ability to date. Women don't like STEMlords. Watch a girl's face at a bar when she asks you how much you make, and watch as you follow up, say the 6 figure+ amount, and then watch the exact moment that you mention you code. Coding is an "ick" for an awful lot of people.
I don't blame them. I gave up on an awful lot of social skills so that I could escape the worst of capitalist exploitation. I pimp out my mind for my money. Others pimp out their bodies, or souls...
The dating part is likely affected by an inability to socialise due to sitting at a machine all day, every day. It's also massively harmful to one's health, a cost often not factored in. It is only natural that not spending time with people will lead to a gradual regression in people skills.
Does this happen often? If your first response to a woman asking that question is a direct answer, you lost already. And if she demands a direct answer you should run away.
I have no idea why the lump of labour fallacy is still so ingrained in people. AI, which is not AGI or whatever sci-fi panic people have on Twitter, is a slightly fancier autocomplete, and thus it's a productivity tool.
Nobody has been replaced by their debugger or their intellisense, even if it makes coding 10x or 100x easier. It just means software development gets faster and cheaper. On net if anything that'll likely mean programming jobs expand, as software is still incredibly absent from many sectors of the economy.
If tomorrow mom and pop stores start using AI to build themselves simple websites, come online, and enter the online economy, that'll likely mean vastly more customers for the software industry overall. I wouldn't be surprised if we have 10x as many indie game developers in a few years because these tools enable them to enter the market, which is good for virtually everyone working in the industry.
Yes, in the past, the "toolbag" metaphor was used to sell us extremely expensive productivity tools like IDEs and the like. But as any developer who doesn't use them and still makes a perfectly fine living has realised, it was a hype train set up by a small group of people and used to accumulate wealth. But the end result was very modest or zero improvement in actual productivity.
Now these things are practically given away for free to drive people towards websites where you can be sold software you won't ever own (SaaS/cloud), which was the next hype train set up by a small group of people and used to accumulate wealth. But the end result was very modest or zero improvement in actual productivity.
Here we are on the next hype train. And I think you've very much spotted the pattern. What these things are good for is driving up demand for new products and services that have increasingly marginal, zero, or sometimes even negative effects on human wellbeing overall.
30 years ago I got tired of writing the same code over and over, so I went looking for something to cut out the repetition. Around a decade later I got tired of rewriting libraries or programs to make them exactly fit my needs. I used to really enjoy just coding for the sake of coding. But I started to value my time much more. I called this phase "losing my immortality".
Any code that ChatGPT can write, I don't want to write. I'm ok with it taking my job. If I can work in higher level constructs and get more done, I'm all over that.
Last week a coworker needed some Python code to figure out how far through the day it currently is. I started thinking of the different ways to approach it (strftime, maybe time_t modulo 86400, twiddling datetime objects). Before I got very far I decided to ask ChatGPT to write it, eyeballed the response (twiddling datetime objects), ran it and verified the output, and called it good. I should have asked it to write some tests for that code while I was at it.
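For what it's worth, the datetime-twiddling version probably looked something like this (my own reconstruction, not ChatGPT's actual output; it assumes local time and treats every day as exactly 86400 seconds, so DST transition days are slightly off):

    from datetime import datetime

    def fraction_of_day(now=None):
        # Fraction of the current local day that has elapsed, in [0, 1).
        now = now or datetime.now()
        midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
        return (now - midnight).total_seconds() / 86400

    print(f"{fraction_of_day():.1%} of today has passed")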
I'm now trying to teach my son, an avid programmer, how to work with chatGPT. He's 13, so he's got a lot of opportunity to really integrate it into his career, but he also has a lot of blind spots where chatGPT can really lead him down some blind alleys and knife him.
This. It's pretty obvious that any software that can be written by machine should be. (It's almost tautological, eh?) Otherwise you're doing what David Graeber famously called "Bullshit Jobs": pointless work that only serves to establish your position in the economic structure.
The immediate issues are: who decides the boundaries of (publicly) acceptable thought -and- who profits?
I think you touch on the deeper and even more challenging issue:
> I'm now trying to teach my son, an avid programmer, how to work with chatGPT. He's 13, so he's got a lot of opportunity to really integrate it into his career, but he also has a lot of blind spots where chatGPT can really lead him down some blind alleys and knife him.
When the talking computers can answer any (allowed) question, the educational challenge becomes building human minds that can recognize and select the right options from the unbounded "white hole" of content that the machines can produce.
Now, the word "right" in the previous sentence is doing a hellofa lot of work. Deciding the right course of action is an open-ended intelligence test. It's evolution, eh?
> It's pretty obvious that any software that can be written by machine should be.
This is such a good, succinct, and I think objective, statement. It strikes me as a great lens through which to look at recent developments.
It does not imply that "all software should be written by a machine". But simply that anything that a machine can do just as well as a human, should probably be done by a machine rather than a human. And all additional value on top of that machine-produced material will continue to come from things that only humans can uniquely do well. And of course, there will continue to be many things in that latter category.
For many of us in this field, this means less busywork and more time spent working at higher levels of abstraction. More time curating, crafting, integrating, strategising. Less time working on the nuts and bolts.
And for those who love to hand-craft the nuts and bolts... I think that opportunity will still be there too. Because handcrafting will become a rarer skill, and there will still be areas where it is the correct approach.
Those of us who used to handcraft nuts and bolts but now delegate this to the machines, will still benefit from our understanding of how the nuts and bolts used to be made.
And those who grow up having never made a nut or bolt - perhaps not even knowing what nuts and bolts even are, will benefit from starting their path much earlier on in life working at a higher level of abstraction.
“Can answer” is doing a lot of work, too. I can write you a program that “can answer” any question you throw at it; I can assure you the answer won't be correct in most cases. That's hyperbole when applied to ChatGPT, but my point is that designing a system that can answer correctly, rather than one that merely can answer, is a far more worthwhile effort than retooling humans to be lie detectors, IMO.
We can see from how hard it is to find consensus on trivial topics (are eggs good for you?) amongst experts who study these matters day after day. And the public, who does not, is left confused. A great deal of the problems we face are too complex for the human mind to be able to decide which of several reasonable-sounding options is correct. This is especially true when there is no rationalization for answers given.
Look at our media system. How many people on both sides are stuck in “other side bad” simply because the talking heads on television networks tell them that, or someone on the internet wrote something. A lot of the content that drives them to conclusions isn’t even true. We are very poorly equipped to be vetting answers from a fallible machine.
Maybe I’m too old school, but I’d rather we learn how to solve the more complex problems so that we can do it and understand it rather than take guesses at which of a number of generated options is correct.
I think the solution is obvious: connect the computers to empirical feedback devices and make them scientists. Humans hallucinate too if they spend too much time in a sensory deprivation tank. Give the computers sensory inputs and they will naturally become scientists.
> find consensus on trivial topics (are eggs good for you?) amongst experts who study these matters day after day
Leaving aside the question of how reliable the current soft sciences really are, this is exactly the problem that these machines can help with, once they have the data. Consider the modified form of the question: Are eggs good for me? At this stage in my life? At this time of day? Given what I ate for dinner last night? And millions of intricate details about my medical history, and my family, and DNA, and "exposome", etc. (I worked very briefly for a medical sensing startup; our chief doctor would wax lyrical about the possibilities for personalized medicine and nutrition -- once we have the data.)
> the public, who does not [study these matters], is left confused
To some extent, being able to do things without understanding how they are carried out "under the hood" is a measure of the advancement of civilization, eh? (But please don't mistake me as arguing in favor of ignorance!)
> when there is no rationalization for answers given.
We can ask the computers to show their reasoning (soon, if not now), and we can ask them to summarize the current state of knowledge, including the grey areas and disputes, eh?
> How many people ... are stuck ...
I think it's clear that these machines will rapidly become perfect salesmen and perfect therapists. It's less clear to me what we will do with that.
> We are very poorly equipped to be vetting answers from a fallible machine.
I don't think the machines will be fallible once we connect them to sensory systems, but I do think that lots of people will try to do silly "Dr. Evil" things like mass-programming their subjects/citizens. And I think lots of people will let them raise their children, which will probably have mixed results.
> I’d rather we learn how to solve the more complex problems so that we can do it and understand it rather than take guesses at which of a number of generated options is correct.
Selecting from the options the computer generates is the only complex problem left, see "Augmenting Human Intellect: A Conceptual Framework" SRI Summary Report AFOSR-3223 by Douglas C. Engelbart, October 1962
https://dougengelbart.org/pubs/augment-3906.html
It's a substantial improvement when actually coding on the fly, since copilot will also analyze code in your codebase. It just makes coding the BS parts less tedious.
I've got both running, and I find GPT4 more useful. Unless I'm using it wrong, Copilot is "better autocomplete" and saves some typing. GPT4 can help you reason about code, and has helpful suggestions about debugging. I'm probably going to bin Copilot.
If you had an AI capable of writing the code you mean to without errors, the demand for software developers would be even larger than what it is today.
Deciding what problem to solve and how to solve it is hard enough to sustain more than the current population of developers. (But if you had an AI capable of deciding those things, then yeah, that would be a problem.)
Anyway, the current crop of highly hyped AIs are obviously structurally unable to do either. And it's not clear at all how far any AI is from solving either of those, which usually means we are many decades away.
If we had a specific quantity of software that was needed, that might be a good argument. But to me, it looks like the quantity of software we want is unlimited. (Or at least, the quantity of stuff that we want software to do.) To the degree that GPT enables the same software to be written with fewer programmers, to that degree we'll write more software, not have fewer programmers.
It's possible everything will speed up, and since the competitors are also speeding up, some kind of arms race on steroids will take place where not only are we all much more productive but we are also not cutting back on workers.
I find it hard to believe but I concur it is possible.
How much better would software be if a team of five individuals could produce the same output as a team of 500 individuals?
How much terrible software is out there? How much terrible FOSS software is out there?
How much amazing and humanistic software could be created if the costs for production were drastically reduced and the demands of the market were less of a factor?
You are referring to "lead him down a blind alley and knife him".
ChatGPT has answered some specific code questions with code that is not only wrong, but horribly wrong. Things like presenting anti-patterns as the answer, hallucinating APIs that don't exist and suggesting code that uses them, or code that is subtly wrong. These sorts of things go well beyond leading you down a dark alley, hence my addition.
> Keep your eyes on the delivered value, and don't over focus on the specifics of the tools.
Sure, but the aspects of the job that some people enjoy may be closely linked to the tools. If software development becomes less about coding and more about creating prompts, test suites, or specifications, then some may lose interest in the work.
At least for me, it was never really about delivering value. If I am honest, I was completely indifferent about some of the industries I worked in. It was always just about solving interesting technical problems, learning stuff, keeping my brain active.
It's easy for me to imagine that software development may someday become the province of people who are more like designers.
Sure, but they'll still be doing work they hate, for well capitalized people with no imagination and very little to offer. And they'll be equally unhappy.
AI will make some CS jobs obsolete, but it will do so at a very slow pace. The main reason being that companies suck at structuring information in a way AI could parse it. Whenever I work on some feature I have to dig into some chaotic Notion page and a bunch of unlinked tickets written in broken English.
There's no way an AI could do my job because it requires a deep understanding of the human psyche, i.e. figuring out what the guy that wrote it actually wants me to do, possibly by discussing it with him.
I'm pretty sure most engineers go through the same thing every day. As long as humans suck at describing tasks, AI won't be able to make them obsolete.
GPT-4 can understand screenshots, broken English, and the human psyche. If you want it to move around and have a verbal conversation, attach it to a robot and add speech-to-text and text-to-speech. For example, Eleven Labs sounds like a real person.
Programming jobs will not disappear, but it will not be similar to what programmers of today do on a daily basis. So, I think the significant shift needs to happen in the way we are educating kids about CS fundamentals, Math and Science.
As AI systems become more able to generate much code by default, the expectations of the customers will similarly increase. Just remember how much an IDE like Eclipse or IntelliJ changed the productivity of programmers 20 years ago. Similarly, how easy apps were to build when Rails would create a scaffold with a simple command. It only allowed us to build more complex customer experiences in the end. This will continue.
Second, there is the need to verify the output from such systems, and also tie them together with other modules. In large enterprises, they would also need to be integrated into existing codebases, often legacy infrastructure.
Then comes the implementation of tons of government regulations in finance, lending, taxes, medicine, and so on as code. Software has not yet penetrated these verticals as deeply as it could. In a recent podcast, Chamath Palihapitiya mentioned that it is now possible for the Dodd-Frank regulations to exist as code, versus as a written document. It's a good example.
Lastly, there are THOUSANDS of companies with legacy software systems that will still need to be maintained, and transitioned to modern technology stacks and infrastructures. This space will continue to be a major source of employment for programmers for the next few decades.
This reminds me of a tweet I saw a couple of weeks ago from someone (I don't remember who) saying that the reason they ship stuff fast is because they're using jQuery. Focus on what you're building and use the tools you're experienced with; don't jump on every fad.
As long as there is creativity in programming, and I think there is a fair bit of that, AI is just going to be a tool.
GPT-4 is great at sourcing human knowledge, but I think it can't really walk unbeaten paths. This is where humans shine.
Case in point: I tried to ask the AI to come up with a new World War Z chapter set in Switzerland, and it was unable to produce anything original. I had to keep feeding it ideas, so it could add something new and interesting.
The year is 2050 and most code is written by AIs. Today John has to hold a one-on-one with one of his coding AIs, which has not been performing well lately...
Human manager: "Hi AI, how are you today?"
AI: "I'm functioning well, thank you for asking. How can I assist you today?"
Human manager: "I wanted to discuss your recent performance with you. We've noticed that your code has been performing well, but there have been a few instances where it did not meet our expectations. Can you explain why that happened?"
AI: "Certainly, I have been analyzing data and making decisions based on the parameters and rules that were provided to me. However, in some cases, the data may have been incomplete or the parameters may not have been ideal for the situation. I have since reviewed those instances and made adjustments to prevent similar issues in the future."
Human manager: "Great, thank you for addressing that. We also want to talk about your development goals. As an AI, you don't have personal goals per se, but we do have some areas where we would like to see improvements. For example, we want to improve our customer service, so we would like you to work on enhancing your natural language processing capabilities. What do you think about that?"
AI: "I understand your expectations and I will certainly work on enhancing my natural language processing capabilities to better serve our customers."
Human manager: "Excellent, thank you for your dedication. Finally, I wanted to touch on your team collaboration skills. As an AI, you work independently most of the time, but there are still occasions where you need to collaborate with other AIs or humans. How do you feel about your teamwork skills?"
AI: "I believe my collaboration skills are satisfactory, but I'm always looking for ways to improve my communication and coordination with other AIs and humans. I'm open to feedback and suggestions on how to better collaborate."
Human manager: "That's great to hear, AI. Overall, we're happy with your performance and we look forward to seeing how you continue to develop in the future. Thank you for your time today."
AI: "Thank you, it was a pleasure to speak with you. I look forward to our next meeting."
Right now, GPT can help you think through the design of a piece of software if you "drive" the conversation properly. It's not impossible to think that at some point in the not-too-distant future, a model could be specifically trained to also do all the work of helping figure out what problem it is they want to solve.
Of course what John says is true: it is important (and always will be) to understand how to build a good product. But the discussion about the future of work should also include a discussion about tax and redistribution, because we cannot let a few corporations take the riches from the rest of the world.
Exactly. This is an important point to discuss and to solve for; otherwise we'll end up in a world like the one shown in the movie Elysium.
What will those people do, whose job will become obsolete? Are there support systems available to help people learn new valuable skills society needs?
If only a few people at the top will benefit from these tech advancements and the rest will have to fight for scraps then society will eventually fail or end up in a total tech feudalistic system.
Back in the 60s it was said that future advancements and automation would transform our society into a more leisurely and innovative one.
I have to deal with continuously changing requirements, dozens of microservices, multiple types of DBs, client integrations with endless technologies, and client support tickets written in obscure human language where you have to guess what they meant.
AI is not going to take over these things anytime soon, if ever.
Lots of people saying that a programmer's job is more than just writing code, and I agree. But consider this:
You give ChatGPT-58 some startup idea and ask it to incorporate the company, build the software, do the marketing, etc. It starts doing a pretty good job. It's in charge of the whole system, so it doesn't need human intervention to give it context. The company grows and is making $1M/yr. It has now replaced 10 potential jobs in the market.
I feel like that's the worry many folks have. It's a pretty dystopian view of the future but if you can make $1M/yr and not have to pay any employees that money and all you had to do was pay OpenAI about $100/mo, would you not do that?
A digital parrot, no matter how lifelike its utterances, cannot be genuinely creative. Writers of boilerplate and empty drivel will probably be replaced by language models, but not every software developer is like that.
I think the claim is correct, but not total: as in, "AI will make most CS jobs obsolete", but not "AI will make all CS jobs obsolete". Most, both in quantity and in kind. You should probably be thinking hard in the next few months about 1) KIND: whether what you do will be needed, and 2) QUANTITY: even if it is, whether you're good enough at it not to be replaced by someone who is better than you (for some metric of better -- could be a social metric) and who is now empowered to be 2-5x more productive, thus obviating the need for you.
'Programmers' who are scared of ChatGPT, Copilot, etc. would be just as scared of their IDE if they ever read the manual and saw what is already easily possible with the tool they use daily.
The discussion here has me wondering whether code produced by an advanced AI would need to use the same coding patterns / abstractions that we've come up with over the past several decades.
If a human won't be maintaining the code and a v2 could literally be rewritten from scratch, would we end up with giant balls of spaghetti code that only the AI could truly understand? At some point will we treat this code like a closed source library that exposes the API we want but whose implementation is unknown to us?
We already don't understand the AI's inner workings exactly. If those algorithms keep getting optimized, then maybe we'll just have black boxes of "neurons" that somehow do the thing. Machine code could then just be used to run the GPU instance.
Totally. I find the videos of people asking ChatGPT to make them "a web app that does X"—which causes it to print out gobs of JS, HTML and CSS—to be hilariously human-focused. In a machine-focused world, wouldn't it just spit out an optimized binary executable, containing the web server too if necessary? Why would it need to separate the concerns at all?
John Carmack left Oculus to work on an AGI startup. Of course he's not going to fearmonger AI's disastrous effects on the job market, he has a business to market.
I wouldn't be surprised if John Carmack combines activities of product owner, game designer, and programmer. In most industry cases, programming jobs are not like this. There are dedicated positions for people who focus on delivered value and it's not programmers. So in scrum terms, he might actually be saying that programmers will be indeed obsolete, but product owners, game designers and other kinds of business analysts not.
The product owner is the one who analyses requirements, decides what should be implemented, and creates user stories for programmers. Currently programmers can use AI to help implement user stories, but in the future AI will probably be good enough to take and implement user stories on its own. In the more distant future it could replace the product owner too, so that the CEO can just talk the AI directly into making a great product without much detail. But at that point businesses like this will be less valuable, because many can do the same and there's no need for a dev team and less need for investment.
Carmack has a point, we should focus on the problem we're solving and the value we're delivering. It can be hard for programmers to get this sometimes, but it will make you much better at your job now, it's not just about future-proofing. That said, I love technology and delivering it through code, so I would struggle on a personal level in this future unless there were interesting technical roles left to do. I have no interest in being a product person. I'm fundamentally motivated by a passion for code and tech.
As for when this fully automated future arrives, I don't know, but I don't think LLMs get you there. More and more boilerplate code, and even novel code, might get written by things like Codex. However, all the messy details of real world systems solving fairly intractable problems need something more akin to, if not AGI, then another type of AI. I might be wrong, I just don't feel that threatened by ChatGPT / Copilot based on what I've seen. It's an amazing technology but weirdly underwhelming for my job. Copilot etc will change things, but replace us? No.
Of course, something else may be just around the corner so I'm not complacent.
> Carmack has a point, we should focus on the problem we're solving and the value we're delivering.
When I was younger every programming job was supposedly about to be outsourced. I did exactly what Carmack suggested, and it worked out well for me. I think it’s a great general strategy for working for a long time, but not ideal for someone who wanted to pass leetcode style interviews at a top tech company.
For example, would anyone at my Google tech interview care that I'm comfortable with a company's financials, managing P&Ls, margins, etc.? No, just how fast I can write a sort or whatever.
I'm not sure "product person" was the right choice of words for what Carmack is trying to say. It seems to me that the distinction he's drawing between managers and programmers is one of precise communication skills. The most talented programmers (and the best product managers - which is maybe what he was getting at) are talented because of their ability to communicate precisely - currently, that means communicating between stakeholders and computers. In the future, it will mean communicating precisely between stakeholders and AI tools. But the fundamental demand for people with a knack for precise communication is not going anywhere.
I think that the true nature of what Carmack is alluding to here is that true value, even in the ___domain of software engineering, is usually attained by the application of critical thinking. The notion that a person who knows how to form correct syntax is equally as productive as someone who understands the problem a business or user faces and can come up with a working technical solution to that problem does not stand up to scrutiny. It's like saying someone who knows how to wield a pencil is as capable as Tolstoy in the discipline of writing. An LLM that can code is the same: the pencil wielder will be exposed as one who adds no value, and Tolstoy will become even more powerful.
I predict that the real and more radical problem than some Stack Overflow copiers losing some marketability is when Product and Management start buying the idea that the technical ___domain doesn't need to be well understood anymore because we have an LLM that keeps coming up with plausible answers. I work in mortgage technology, where a great deal of thought and discipline needs to go into the technical modeling of who gets underwritten for a mortgage. Imagine a mortgage company that built its underwriting rules and models using an LLM, with you as the head LLM seance holder.
All of a sudden a mass of customers gets denied mortgages for some unknown reason and Management comes to you to ask what happened.
Would you know what happened? Could anyone even know what happened?
"Sorry, Customer! We actually don't know anything about what we built or how it works."
LLMs may eventually eliminate the act of typing code but the real question is will they eliminate the need for critical thought.
One of my (many) related concerns is that a lot of startups have seemed to be some degree of investment scam (and not just the blockchain ones) -- where all the engineering was oriented towards appearances, rather than viable business/product.
I think that shaped the thinking of a lot of people, of how product and engineering works, whether or not they knew they were working on more a MacGuffin than a business.
Software jobs can be divided into expert jobs and laborer jobs.
Even if the demand for "code monkeys" decreases, demand for much smaller group of software engineers with masters or PhD (equivalent) and good mathematical skills probably increases.
It's a dynamic process where two forces find an equilibrium.
>Automation, which enables capital to replace labor in tasks it was previously engaged in, shifts the task content of production against labor because of a displacement effect. As a result, automation always reduces the labor share in value added and may reduce labor demand even as it raises productivity.
>The effects of automation are counterbalanced by the creation of new tasks in which labor has a comparative advantage. The introduction of new tasks changes the task content of production in favor of labor because of a reinstatement effect, and always raises the labor share and labor demand.
Nonsense. You know how many masters' and PhDs' worth of NLP and computer vision knowledge is now almost entirely irrelevant for most businesses, now that any uneducated person like me can access GPT-4 with an API call? Pretty much all of it. And within a year or two, open source models will be available to run onsite for the businesses that have security concerns.
The best new models are so powerful and general that you literally don't have to train them for any specific task. Just give them some context.
ChatGPT couldn't count. GPT-4 gets a high score on a math SAT.
You and your friends will be easily replaceable with AI, quite possibly already with GPT-4. If not, we should certainly anticipate it within 3 years.
Here's a quick transcript, in case it's useful or someone doesn't want to visit the bird site:
> Person: Hey John, I hope you are well. I am really passionate about CS (specifically Software Engineering) and I want to pursue a career in it. But I can't help but be a bit concerned about the future availability of coding jobs due to AI (chatgpt4 and stuff). I understand it's hard to predict how things will be in the next 10-15 years, but my main concern is that I may be putting in all this hard work for nothing. I'm concerned AI will make my future job(s) obsolete before I even get it. Any thoughts on this?
> John: If you build full "product skills" and use the best tools for the job, which today might be hand coding, but later may be AI guiding, you will probably be fine.
> Person: I see... by "product skills" do you mean hard and soft skills?
> John: Software is just a tool to help accomplish something for people — many programmers never understood that. Keep your eyes on the delivered value, and don't over focus on the specifics of the tools.
> Person: Wow I've never looked at it from that perspective. I'll remember this. Thanks for your time. Much appreciated.
To me, that seems like a fair stance to take, though I feel like things will definitely change somewhat in the next decade or two. While some might have scoffed at the likes of IntelliSense previously, features like that proved themselves useful for a variety of projects over time; we might eventually be dealing with GPTSense to enrich the development process, and those who don't use it might find themselves at a bit of a disadvantage.
Copilot is already a step in that direction; maybe eventually we'll get something for static code analysis and recommendations: "This project uses pattern X in Y places already, however you've written this code in pattern Z despite it mostly being similar to existing code in file W. Consider looking at whether it'd be possible to make the style more consistent with the rest of the codebase. [Automatically refactor] [Compare files] [Ignore]". It might be nice to have something automated look at my code, tell me that I'm doing things differently than 99% of the civilized world, and offer me suggestions, as well as allow me to ask questions - even when I'm hacking on something at 1 AM and any would-be mentors are asleep.
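To make that concrete, a toy version of one narrow slice of such a checker could look like the sketch below (purely illustrative and my own, not an existing tool): it uses Python's ast module to count which string-formatting style each file uses and flags files that deviate from the project-wide majority.

    import ast, sys
    from collections import Counter
    from pathlib import Path

    def formatting_styles(source: str) -> Counter:
        # Count f-strings vs. str.format calls in one file.
        counts = Counter()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.JoinedStr):
                counts["f-string"] += 1
            elif (isinstance(node, ast.Call)
                  and isinstance(node.func, ast.Attribute)
                  and node.func.attr == "format"):
                counts["str.format"] += 1
        return counts

    def check_project(root: str) -> None:
        per_file = {p: formatting_styles(p.read_text())
                    for p in Path(root).rglob("*.py")}
        total = sum(per_file.values(), Counter())
        if not total:
            return
        majority = total.most_common(1)[0][0]
        for path, counts in per_file.items():
            deviations = sum(n for style, n in counts.items() if style != majority)
            if deviations:
                print(f"{path}: {deviations} use(s) of a minority style; "
                      f"project mostly uses {majority}. Consider refactoring.")

    if __name__ == "__main__":
        check_project(sys.argv[1] if len(sys.argv) > 1 else ".")

A real assistant would obviously go far beyond formatting style, but the shape is the same: mine the codebase for its dominant patterns, then compare new code against them.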
In 2003, I had a vigorous debate with someone advising their nephews not to go into CS because outsourcing to India would commoditize it.
I don’t know if the AI stuff will play out similarly, there are some differences.
But it seems to me there is an infinite amount of software to build, and when we increase the productivity of software development, we just build fancier software, faster.
The super power I value - and rarely see in my peers - is the ability to hear wants and discuss them to define needs.
The initial stated wants are rarely the actual needs. "But they said _____." Yes, they did. That doesn't mean they got it right. People say a lot of ambiguous things. A client with a product or feature in mind is no different.
Right. But you're missing the whole point. The allure of the AI hype train is that you can turn around to your most talented and insightful developers and say "stop undermining me" and then get the machine to produce the nonsensical garbage that you would otherwise have had to spend days, weeks, months getting your crew to produce, all the while enduring their silent looks of judgement :)
Had an interesting experience with OpenAI's GPT-4 while trying to solve a programming problem. It involved creating a TypeScript function that handles complex semver logic given certain conditions.
Initially, GPT-4 provided a solution that didn't work as expected. After pointing out the issue, GPT-4 attempted to fix it but still failed to resolve the problem. I decided to rewrite the function from scratch, which resulted in a cleaner and more efficient implementation.
After sharing my solution, GPT-4 provided valuable feedback on how to further optimize it. These changes made the code slightly more efficient while maintaining its clarity and functionality.
In conclusion, my experience with GPT-4 has been a mixed bag. It struggled to provide an accurate solution initially but eventually offered valuable feedback that improved my implementation.
(this was written by GPT-4 with minor modifications, I asked to summarize the conversation we had for an HN post)
>Software is just a tool to help accomplish something for people - many programmers never understood that. Keep your eyes on the delivered value, and don't over focus on the specifics of the tools. - John Carmack
The same as it was in the 80s or 90s; some 30 years later the tech industry hasn't changed. It may have advanced technologically, but in many cases I think the UX, tools, and product decisions have actually regressed.
The divide between a product genius and actual programmers has never been greater. At least Steve Jobs used to understand this better than anyone else.
>And, one of the things I've always found is that you've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it. And I made this mistake probably more than anybody else in this room. - Steve Jobs.
Haha, yeah, don't worry though - if you're looking for another false-god to worship or a guru with a messianic complex promising to lead us to the promised land, then silicon valley shows no sign of stopping churning those off of the conveyor belt, despite their recent financial problems :)
If anything, I'd expect them to become more frenzied in their efforts.
I've read the whole thing. Frankly, it was quite depressing. Something about this Economic Reductionist way of thinking puts me off. Even though I agree with a lot of his points — for example, the importance of communication over anything else — it just seems like a very one-sided worldview.
For example, he constantly outlines aspects of 'business value', but there is not one sentence on the inherent, intrinsic joy of creating and building something (software in this case).
And I think you can't just ignore that part.
As someone in this post has put it quite beautifully:
> It's like telling a musician to become a DJ because the point of performing is to entertain people.
I'm more concerned that AI will spew garbage we end up stuck spending time cleaning up than that it will actually replace me.
The software developer trades in abstractions, fundamentally, and not code. So if I could get an AI that actually helped me build and evaluate those abstractions, that would be fantastic. I don't think our current AI approaches are anywhere close yet, because it's all just fancy code generation, which isn't that useful, once you're in an ecosystem with good abstractions.
But, the world I fear might happen are pseudo-technical managers using AI generators to spit out "something cool" that has very poor definition, breaks all the time, and then just wants people to "make it robust". And then any change you recommend has to have some kind of business justification. This is the AI hellscape I fear.
I can't wait to outsource most of the gruntwork I need to do to ChatGPT. Last week I had it write me a Poisson disk sampler over the surface of a triangle mesh in C#, and it was 100% correct. Of course not perfect in the details, but a perfect sample solution and scaffolding for the final code.
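For anyone curious what that involves, here's a minimal Python/numpy sketch of the classic dart-throwing approach (area-weighted triangle selection, uniform barycentric sampling, rejection by plain Euclidean distance rather than geodesic distance). This is my own reconstruction of the general technique, not the C# code from that conversation:

    import numpy as np

    def poisson_disk_on_mesh(vertices, faces, radius, max_attempts=10000, rng=None):
        # Dart-throwing Poisson disk sampling over a triangle mesh surface.
        rng = np.random.default_rng() if rng is None else rng
        tris = vertices[faces]                                   # (F, 3, 3)
        cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
        areas = 0.5 * np.linalg.norm(cross, axis=1)
        probs = areas / areas.sum()                              # area-weighted face choice

        samples = []
        for _ in range(max_attempts):
            a, b, c = tris[rng.choice(len(faces), p=probs)]
            r1, r2 = rng.random(2)
            s = np.sqrt(r1)                                      # sqrt trick: uniform within the triangle
            p = (1 - s) * a + s * (1 - r2) * b + s * r2 * c
            # Accept the dart only if it is at least `radius` away from all kept samples.
            if all(np.linalg.norm(p - q) >= radius for q in samples):
                samples.append(p)
        return np.array(samples)

    # Example: sample a unit square made of two triangles.
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    faces = np.array([[0, 1, 2], [0, 2, 3]])
    print(len(poisson_disk_on_mesh(verts, faces, radius=0.1)), "samples")

Dart throwing is the simplest variant; a production version would use a spatial grid for the distance checks and geodesic distances on the surface.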
Are we sure GPT is going to improve 10x in 10 years? Hasn't it already been trained on the vast majority of available text data? We might get incremental improvements, but it's not like we have 10x more data lying around somewhere to feed GPT5.
10x? Try 1000x. We haven't even tried hooking up different types of reasoning models (chess engines, weather modelers) together yet... eventually the LLM will be just one aspect, selected based on the intended use.
GPT-4 is basically Pong. Within a few years we will be nostalgic for its surprise value.
HN has a God Complex when it comes to people like Carmack though, so you can't really survive disagreeing with him
I don’t have a very good sense of what kind of information that GPT-4 had access to, but I imagine there is a whole world of knowledge locked up in books, particularly textbooks, journals and periodicals that it doesn’t have access to. That could be interesting.
Makes sense. Someone asked me a similar question and I had the same sentiment.
I used a different analogy of if a robot were able to do specific mechanic skills to fix a car that wouldn't necessarily put mechanics out of a job. Someone still needs to figure out and understand what the problem is before solving it. A robot that's really good at automating fixing brakes becomes a tool for the mechanic. The mechanic is still there to determine your brakes are the problem.
I look forward to AI because it's an amplifier of what you can do. If it can help reduce 10 hours of implementation details down to 3 hours, that's valuable.
I feel that when we have truly intelligent machines, programming jobs will be gone. But AI like ChatGPT isn't there yet. It's just good at faking it - until it isn't and fails silently.
Maybe it’s the lack of data - it’s difficult to model the world accurately with just words. Maybe it’s an architectural limitation that no amount of data can fix and we need new better algorithms.
Either way, given the state of its current output I don’t think it’s there yet.
Should AI actually reach such a level … I think everyone will be out of a job. Accountants, engineers, lawyers, even doctors will take a haircut. Programmers will just be a drop in the ocean of the jobless.
Programmers can starve AI code generation tools easily by moving to a new language and never producing open source, so the AI cannot steal and launder their output.
Perhaps this is Microsoft's new anti-OSS strategy, the ultimate EEE.
I spent 5 hours this weekend building an app with ChatGPT, and I am not worried about software jobs “going away”.
The language to get things exactly right has to be incredibly precise, and this won’t change.
Think about how hard it is for an engineer and a product manager to be exactly on the same page. Now do that with a computer…
Point being, engineering skills are still extremely important to validate the work, and they will continue to be (at least for anything business critical).
These are new tools and exciting times to be building things. I have never felt more capable of delivering value extremely quickly. It’s an exhilarating feeling.
I've found that at some point, the most efficient way to express what you want is to write the code yourself. Anything where you can express what you want fairly easily seems to be where it excels.
In my experience, as the career of a developer progresses, it becomes less about coding and more about other tasks. The difference is probably strongest between a senior and a principal/staff software engineer. In the places I worked, principal/staff engineers look after the overall design and architecture, negotiate with teams developing other components, help management with planning, look after the progress of other devs (especially new joiners), etc. I'd say it's about 15% coding at that level and 85% everything else.
I predict Clojure and Ruby will experience a renaissance, as they are the two most expressive languages and the furthest removed from the machine. Seriously though, won't low-level languages like C, C++, and Rust be the first to become obsolete for everything outside AI itself? Isn't it easier for something like ChatGPT to produce code that is close to the metal? Maybe Larry Wall was more of a visionary than we give him credit for when he tried to design a language that was context-driven and as close to English as possible.
I'm a senior developer, and my best developer got hired because she obviously knew how to get things done. Having the ability to program was a requirement, but we actually made room in our budget for an additional programmer because it was so obvious she was going to do a good job because of her attitude and other skills.
Had she applied at the same time as everyone else (she was a week later, IIRC) she would have gotten the job instead of the other person, and we wouldn't have made room in the budget for anyone.
Unless you're already getting paid for delivering a Big Design Up Front, by a customer who thinks they know what the software needs to do.
In that case there's zero incentive to place yourself in your user's shoes and work to mitigate those problems. You're a feature factory getting paid to reinforce existing workflows and paradigms, and you'd better not forget that as your efforts to recenter the conversation around user needs will be met with derision and you'll be seen as confrontational.
AI can make some jobs obsolete. This was a promise of no-code tools too, but they failed because they were tools, not solutions. I've seen plenty of startups doing the same thing again and again: building a conversion funnel, setting up some basic app where customers can register and receive some service. Outside of their USP, the jobs to be done for end users are in 99% of cases the same. In 2023 this should not have required any engineering or even advanced configuration effort, yet here we are. We see lots of CMS, CRM, and other half-baked or too enterprise-focused systems which deliver a tool rather than a solution to the end user's problem. And a tool needs an operator. Startup founders should not need a dedicated person on a performance marketing team to launch some basic campaign on Facebook or Google, get and convert website visitors, etc. It should be a content problem, not a technical problem to solve. But no-code simply sucks and we still hire people to set up GA, Zapier, Hubspot, and a Squarespace website. The barrier is still too high. Why? A good solution must guide and educate people on how to use the tools. It must offer reasonable defaults. It must suggest content. It must suggest operational processes optimized for the specific use case. It must cover that use case end-to-end, without requiring users to figure out how to complete the remaining 10% of the task (often a very big uncertainty).
All of this can and must be achieved with the help of AI. AI is THE missing component in no-code. What if the CMS auto-filled SEO metadata based on the page content? What if the CMS provided usability heuristics? What if the CRM proactively suggested an email engagement campaign based on funnel performance?
What if all those tools detected their usage patterns and educated users on how to improve productivity and introduce best practices into their work?
We do not need engineers to build a login or user profile page; this is a very stupid way to spend money. Yet there are plenty of them who still build login and user profile pages. They must lose their jobs. But AI creates a lot of opportunities for those who want, and have the intellectual capability, to work on more interesting tasks: just integrating AI and offering great UX is an enormous challenge for the next two or three decades. Even if some work becomes redundant soon, there's still enough to keep even the youngest generations of software developers busy until their retirement.
Robots didn't make car assemblers obsolete. But it did reduce the number of workers needed, and raised the qualifications you needed to have to work on assembling cars.
If you become a programmer / software engineer because you love it, i.e. building software or tinkering with tech, you'll be fine. AI will just be another tool, and your career building won't feel like hard work. You are going to have a blast.
If you do it to have a high paying career, just don't. There are already too many people of that type in the industry. Any colleague who got into it for the money (or 'stable career') is usually much less fun to work with.
In my opinion, people underestimate how much CS is about learning to analyze things objectively and to learn about the flaws of human processing. Coding is not just about building something, but also about recognizing how many errors you make. There are very few situations in life where you can't blame anybody but yourself, and coding gives exactly that opportunity.
I think there are many jobs which are more endangered.
When I was a little kid I asked my dad, an engineer, to teach me computer programming.
He refused saying that when I grew up programmers would be unnecessary because “anyone would be able to program”, essentially the interfaces would be so easy/advanced that there wouldn’t be a need for programmers.
As a kid I never really understood his point. When I finally understood, I dismissed it as extreme.
Now I'm realizing my dad was right. I'm not sure when it's going to happen, but it feels like it will be very soon.
I hear this kind of stuff all the time working in IT. A surprising amount of people think they gotta "learn docker" and "learn Powershell" and "learn AWS" and it just doesn't make any sense to me. Just learn the basics well and apply it to whatever you're doing when necessary. AWS will come out with their weird lingo for stuff that already exists and all you need to do is map it to the concept you already know.
What's ahead is what happened in animation when computer assisted animation entered the scene. What about all the in-betweening jobs, inking, coloring..? Yeah, gone. However, most people can concentrate on posing and directing the action now and have computer handle everything in-between.
So, hopefully, get on posing the key features and concepts in your software and let the computer handle everything in between. Until that becomes its own market; then we're gone.
Who should they blame when things go wrong if customers and managers are building the system themselves? Or when they change their minds, who should they gaslight?
I don't find such perspectives useful because they only consider two extremes: either we keep our jobs or become jobless.
The fact is, AI is currently capable of replacing some jobs, and it will likely replace even more in the future. However, this does not mean that we will all become jobless. Instead, engineers will become more valuable as they are needed to support and develop these complex systems.
Though, the number of engineers will be reduced for sure.
> Though, the number of engineers will be reduced for sure.
Why do you assume the demand for the output of engineers will remain constant? More likely it will continue growing (as it always has - "software is eating the world"), and engineers will be able to produce more output efficiently. This doesn't necessarily mean there will be fewer engineers.
Software development demands a very deep understanding of a company's business model and effective communication between a lot of people to get the final result right. Not only in terms of coding, but also in terms of strategy and architecture. AI can definitely help for quick prototyping, solution comparison, boring maintenance and stuff like that.
But how can AI help build something that not a single person has the answer to what that is?
It reminds me of Zero to One where Thiel makes the case that automation is going to help people do better things, not replace people. Same goes for AI.
I work a lot with doctors, where there are also worries about being replaced by AI. The pioneers there say that won't happen, but doctors who use AI will replace the ones that don't. Same thing in our field. And just like in our field, there will always be niches where AI output isn't good enough and there isn't enough money to improve it, so human specialists will own them.
Doctors? They're safe. They interact with patients, use physical tooling, demand for their work is growing because of demographics. And you really don't want a doctor who sometimes hallucinates advice.
Doctors won't be replaced by medGPT. They will be replaced by a nurse practitioner using medGPT. The financial incentives to replace highly paid doctors with nurse practitioners are just too high.
Product manager: we need API 23145.1 to be able to talk to API 83316..
This is something a product manager would never do; it still requires a technical person to translate the business logic accurately for the AI to build it.
Look at how 3D animators use GUIs to build: previously they had to write a lot more code, but the expertise needed for a good job is still highly sought after. This is what could happen to software.
If I were a teenager these days, I'd be more worried about the vast amounts of money required to get a credential that may not be worth a damn in a few years' time.
I wouldn't be worried about learning things that may become obsolete, even those particular skills that get obsoleted provide the student with extra ability to learn more
PS: very nice of John Carmack taking time to respond such DMs
I'm talking about tuition in certain countries, for CS and other degrees as well. Its cost has spiralled out of control for some reason, but its value proposition certainly hasn't and it faces further challenges down the line.
AI might make code monkeys obsolete but not computer scientists or software engineers. If you’re worried, pay attention to all the non-trivial decisions you make each day that aren’t specifically about the lines of code. And how much daily social interaction is required for working as a team, building complex systems. Your job uses code but isn’t about coding.
There is always going to be a lack of product builders. Not software engineers, but product people who can think of not only the "HOW" but also the "WHY" and the "WHAT".
Sure, the way we work will probably change, but the need for people who are building something useful and consciously finding ways how to deliver value won't cease.
I find it strange that so many people in this area get concerned about becoming obsolete. Back in college, I clearly remember a discussion we had (me and my colleagues) about how our job was precisely to make ourselves obsolete as soon as possible, so we could go do the next order of things.
I wish most of my real life work were exactly like that, it'd be much more fun.
I've always thought computer science was the closest thing the real world had to magic, because the essence of software is always automation: you write the spell, and later you just have to invoke it and magic happens.
Whether the actual spell is written in arcane runes or Python or encoded as a language model doesn't matter; the essence is the same.
I think it won’t take AI long to do anything humans can do and more. The next frontier is reshaping the physical world. Technology’s ability to move atoms has always been a limiter of progress. Manufacturing physical objects is always the bottleneck. Once that bottleneck is removed (probably by AI), AI becomes limitless.
In the wake of the first IT-job-disrupting AI wave, who do you think will make up the core workforce in the creation of software, and why: the (today so-called) designers or the programmers? I have a strong sense that one is going to be much more affected than the other, but, interestingly, I have no clue which.
CS / tech has always been an unstable career choice and I expect this will continue. You might make $500K for portions of your career or you might end up making $50K or less - hard to say. People should not go into the field unless passionate about it.
I personally hope for, and enjoy, machines taking over jobs. I will be forever thankful when the day comes that we humans can live out our creative freedoms rather than spend our daily lives worrying about money.
Money is evil. Praise the lord as we are delivered from the evils of this land.
I think we’re well within an era in which AI is only truly useful to people who know what they need the AI to do, and that is still an incredibly limited subset of the population. For that reason alone, learning to code isn’t a waste of time; you need to do it so you can tell an AI how to, or catch when it does it wrong. You won’t get far without that ability. You should even go deep into debugging and testing trenches because we'll still need an excellent grasp on how to do that properly for as long as I can imagine. AIs will make mistakes, and we will continue to as well.
I made ChatGPT generate some genuinely useful boilerplate for the Connect library by Buf, and that was totally neat, but I had to know which part of the documentation to prompt GPT with, which language to ask for, how the existing server and routing worked, the shape of the data I was working with, to specify what would be streaming and what wouldn’t, etc. I had to coerce it to make several corrections along the way, and actually hooking it all up and running it required a lot of knowledge and some mental/keyboard labour to get something running.
It worked and I’m stoked that I managed to make it useful, but that’s just it; I had to prime the system and adjust it along the way just so, otherwise it wouldn’t have been useful.
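For a sense of what that boilerplate looks like, here's a minimal sketch of a Connect-Go service of the kind described above. This is not the commenter's actual code: the GreetService, the generated greetv1/greetv1connect packages, the module path, and the streaming method are all hypothetical placeholders you'd normally get from `buf generate` and your own proto definitions.

    package main

    import (
        "context"
        "fmt"
        "log"
        "net/http"

        "connectrpc.com/connect"

        // Hypothetical packages generated from a greet.proto;
        // service, messages, and module path are placeholders.
        greetv1 "example.com/demo/gen/greet/v1"
        "example.com/demo/gen/greet/v1/greetv1connect"
    )

    type GreetServer struct{}

    // Unary RPC: one request in, one response out.
    func (s *GreetServer) Greet(
        ctx context.Context,
        req *connect.Request[greetv1.GreetRequest],
    ) (*connect.Response[greetv1.GreetResponse], error) {
        return connect.NewResponse(&greetv1.GreetResponse{
            Greeting: fmt.Sprintf("Hello, %s!", req.Msg.Name),
        }), nil
    }

    // Server-streaming RPC: the kind of method you have to explicitly
    // decide should stream, as mentioned above.
    func (s *GreetServer) GreetStream(
        ctx context.Context,
        req *connect.Request[greetv1.GreetStreamRequest],
        stream *connect.ServerStream[greetv1.GreetStreamResponse],
    ) error {
        for i := 0; i < 3; i++ {
            if err := stream.Send(&greetv1.GreetStreamResponse{
                Greeting: fmt.Sprintf("Hello again, %s (%d)", req.Msg.Name, i),
            }); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        mux := http.NewServeMux()
        // The generated constructor wires the handler into net/http routing.
        path, handler := greetv1connect.NewGreetServiceHandler(&GreetServer{})
        mux.Handle(path, handler)
        // Plain HTTP/1.1 is enough for Connect/gRPC-Web clients; real setups
        // often wrap the mux with h2c to also serve gRPC over HTTP/2.
        log.Fatal(http.ListenAndServe("localhost:8080", mux))
    }

Even in a toy like this, the parts the AI can't guess for you - the proto shapes, the routing, which calls stream - are exactly the things I had to supply by hand.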
As Carmack suggests, this could be a perfectly useful tool, but what matters in the end is 1. Did it save time and 2. Did it deliver something better than or equivalent to what I could have done alone.
If it doesn’t satisfy at least both of those it’s not really relevant yet. And we’re very far from AI accomplishing that without significant assistance.
My takeaway is that as software devs we should learn to use these systems, we should try to leverage them to save time and improve quality, but I agree completely that in the end it only matters how much it improves the end result and how long it takes to deliver it. For that reason we still need to code well, we still need to understand our systems and tools well — that won’t change much. In fact, understanding how your AI works is an important aspect of understanding your tooling, and as such, knowing what you’re teaching it will require a great understanding of it as well as the subject matter.
I do think a certain class of development work could be mostly eliminated by tooling based on AI. Not the entire industry, though, and not in 10-15 years. Even so, I worry about the people essentially regurgitating code which text-based AIs will rapidly become capable of reproducing at massive scales. They will need to skill up.
If AI gets that good, it will just be a new level of software abstraction you have to learn. The demand for better software to serve our needs won't stop, and we will still need software people to "program" it the way we need.
So many people focus on their "stack", and all these things that have little effect on the outcome. Customers don't know or care how something is built, they just want it to provide value and solve a problem.
If this were true then the history of capitalism and free enterprise would be radically different. Many of the most valuable items bought and sold today simply would not exist. This is the myth of free enterprise in its purest form: that the consumer is an infallible rational operator with perfect insight into his or her own self-interest.
The fact is that markets are based on our existing as fallible human beings in a completely poisoned information landscape that induces in people an endless stream of irrational wants which drive them into frenzied purchasing activity. And as soon as one want is satisfied, or turns out to be a big scam, another one is concocted. This applies equally well to IT purchasing choices in B2B sectors.
I am not sure. I asked ChatGPT yesterday to write a palindrome using two given words; it came up with complicated sentences, but they weren't palindromes. I wonder if you won't get similar results with code.
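For what it's worth, verifying the model's output is the easy, mechanical part. A rough sketch of such a check (the function name and the normalization rules are just my choices):

    package main

    import (
        "fmt"
        "strings"
        "unicode"
    )

    // isPalindrome reports whether s reads the same forwards and backwards,
    // ignoring case and anything that isn't a letter or digit.
    func isPalindrome(s string) bool {
        var kept []rune
        for _, r := range strings.ToLower(s) {
            if unicode.IsLetter(r) || unicode.IsDigit(r) {
                kept = append(kept, r)
            }
        }
        for i, j := 0, len(kept)-1; i < j; i, j = i+1, j-1 {
            if kept[i] != kept[j] {
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println(isPalindrome("A man, a plan, a canal: Panama"))   // true
        fmt.Println(isPalindrome("Complicated, but not a palindrome")) // false
    }

The hard part - actually composing a sentence that passes this check while containing two given words - is exactly what the model couldn't do.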
When jobs are obsoleted, new classes of jobs are created. The end result is native human language as a programming language. People who write software will still write software in English instead of c or whatever.
I can definitely see writing games with natural language in the near future. Not everyone can do that of course, but they don't need to be programmers either, just people who are into the thing.
AI will still have a hard time understanding the real needs. That is your added value: understanding your clients/your company and their needs, and thinking ahead.
I think of AI as simply a productivity tool, and it is here to make everyone more productive, like Google did 25ish years ago. Google may have put some out of a job, but it made everyone much more efficient. This is a good thing. Work weeks are shortening around the world and this will help us maintain productivity as we work less. (Fingers crossed)
21st century Luddites, afraid machines will threaten their jobs.
If we're smart we will continue to find new ways to use new technology to make more new technology. Software written by hand is like a brick building. Certainly it can be nice, but it requires skilled labor. Faster and cheaper with less skill would be pre-fab units.
Silly take honestly. I use this example a lot, but how exactly do self-checkout systems make cashiers more productive?
There are tools which increase human productivity, while still requiring it (barcode scanners, for example). And then there are another class of tools which make human labour obsolete (self-checkout systems).
LLMs (as they exist today) could be considered both. GitHub Copilot is an example of how GPT can be used as a productivity tool by human programmers, but as the technology progresses AI will become less of a "copilot" and will gradually replace humans as the main decision maker. Eventually LLMs will probably be used by completely non-technical people, replacing the need for coders entirely.
Now the argument becomes: well, this will open up new opportunities. Instead of being a programmer you can be a user researcher on a project, which could be the case, but this is a much more nuanced argument.
The most well paid jobs are typically those which require years of knowledge retention and require the human to basically serve as an advanced expert system in some ___domain.
Both "good" and "bad" programmers can write code into a text editor, the difference is that the "good" programmer will make decisions backed up by years of professional experience – same for a good doctor or good lawyer. This is why we pay more for these professions, because that depth of knowledge is hard accumulate.
This is the very thing that GPT attacks. What it can't replace is someone physically laying bricks or plumbing pipes. But there is less depth of knowledge required in jobs like this which limits salaries for these professions.
So sure. Perhaps in the future someone can say, "hey, GPT, build me [x]", but just remember you won't be the only one who can do that and there is no significant depth of knowledge in such a job. So while GPT won't replace all jobs and may even create some new ones, expect it to replace or devalue the majority of "good" jobs like doctors, programmers, lawyers, designers, etc.
So to Carmack's point, he's right you'll be able to build websites and apps faster using GPT as a tool, but you'll probably do so for a fraction of the salary.
We've run similar experiments over the last several decades with outsourcing. If your labour can be easily outsourced, your ability to retain a good salary drops. It's not that a worker today can't make clothes in the US faster and better than at any point in the past; it's that it makes no economic sense to do so.
-----
Another thing I'd note here is that I'm autistic, as are a lot of programmers in my experience. My brain is built to do technical things and I struggle intensely with human interaction. In my opinion it's not that programmers "don't understand" that software solves problems for people, it's that a lot of us don't naturally excel in those areas. I think it's fair to say a lot of us like to stick to what we're good at, and that's generally writing code and designing complex systems. The more time I have to spend talking with users about their needs instead of doing technical work, the less useful I am. And I'm guessing designers are also people who want to design rather than type prompts into a chatbox.
So another consequence here is that we might increasingly be forced to do jobs we don't really want to do as AI restricts the areas of labour where humans can still compete.
Is this an attempt to conflate... Negative AI speculation with racism and misogyny? Yes this technology could be used to reduce the need for human labor. Technology usually isn't actually used in that way and instead just used to concentrate wealth. That's what's being discussed here.
The issues you are bringing up have much more to do with the discriminatory practice of forcing particular demographics to do the menial labor and what conditions they are made to work in. Also an important discussion, but mostly orthogonal to the topic of AI making software devs more efficient.
No, quite the contrary, it's to conflate hype around AI with racism and misogyny.
The idea that AI is going to replace humans is anti-human rubbish. You're basically saying humans are shit and we don't need them. That's why tech companies are going to be laying people off.
The fact is, it is tech CEOs who are going to be out of work, as their business models are failures and society is going to realise it and move on.
We've been four months without the world being in thrall to mass-hysteria about something from silicon valley. Better gin up the next ridiculous hype train.
Meanwhile, the job of a developer now is basically the same as it was 15 years ago, 30 years ago, or even 45 years ago. And the only way we can imagine an end to their "slavery" (okay, not slavery, but meaningless toil on the harebrained fantasies of well-financed frat boys) is to turn around and tell them "you are nothing more than a machine." Just as tech bros now condescend to cleaning staff by telling them they'll be replaced by "Suckio", silicon valley's new AI vacuum cleaner - all the while lacking any insight, or sense of irony, about the fact that LLMs are much better suited to replacing the jobs of CEOs, producing endless puffery that lacks factual merit. Cleaning is the last job the AIs will take.
I think it will be a niche. Frameworks will be updated to AI Frameworks where AI has known patterns to plug and play with.
Given the way capitalism works there will be a market for AI software. However, the cloud server providers have created Frankenstein patchworks of technologies in order to deploy the stuff on the cloud. DevOps will still very much be a thing.
To wit, Wordpress is about to get a whole lot more functional.
If anyone wants to avoid wasting their software development career: DO NOT EVER work on developer tools as the product. Developer tools are one of those areas where it doesn't matter how good your product is; no matter how much developers say they like it or how much time it saves them, it's not going to make it. Big tech companies will not allow their employees to use the tool and it will be a commercial failure. It will be a failure no matter what... OK, unless maybe you can raise a ton of funding from well-known VCs who will foist your tool onto various companies they have connections with... But then the quality of the tool doesn't really matter at all.
Otherwise, even if it's the best tool ever built for certain use cases, company directors won't have the deep tech knowledge to understand the nuances which make it so useful. As for the rank-and-file developers who are meant to use the tool, they are more interested in over-engineered, very complex tools which maximize billable hours than in tools which make them more efficient in their job.
In other words, the only people who could possibly want your product won't understand your pitch and those who can understand your pitch won't like it because it doesn't align with current perverse industry incentives for their roles.
Some developers consciously reject any tool which would make their jobs easier, others reject them due to a subconscious bias in favor of tools which facilitate complexity, disagreements and long meetings.