How to protect your career from a ChatGPT Future (wearedevelopers.com)
39 points by KainBoorde on Feb 25, 2023 | 74 comments



The ratio of dumb stuff to reasonable stuff being said about ChatGPT is dizzying; I truly hope it has reached its peak and this kind of hype and poorly-thought-out analysis dies down soon.

This whole article could be summarized as "How to protect your career? Be good at your job!"

The 'software dev' section is particularly off-putting. It starts off with a code-snippet that is ridiculously wrong: it defines 'typos' as any string matching this regex: '(\w+)\s+\w+'. That's right, if your text contains a word followed by whitespace followed by a word, that's a typo. The code could only appear correct to someone with no knowledge of the ___domain and no experience coding. The fact that this was published on "wearedevelopers.com" is a bit sad.
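For anyone who wants to see just how wrong that is, here's a minimal sketch (assuming the pattern is applied with Python's re module, which is what the article's snippet implies) of what it actually flags:

  import re

  # The article's "typo detector", as described above: any word, whitespace, another word.
  typo_pattern = re.compile(r"(\w+)\s+\w+")

  clean_text = "This sentence has no typos whatsoever."
  hits = [m.group(0) for m in typo_pattern.finditer(clean_text)]
  print(hits)  # ['This sentence', 'has no', 'typos whatsoever'] -- perfectly clean text is "full of typos"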

The article then goes on to deliver this piece of wisdom: "Coding will soon become no more than a means to an end. Which it always was." OK, if it always was, then why is this news, or a disruptive change? Also, is anybody involved with writing production software convinced that coding is somehow not a means to an end? What else would it be?

I don't have much knowledge of the other careers mentioned in the article, but I suspect the analysis and advice there is about as flawed. The suggestion that teachers should have ChatGPT grade papers while they spend more time "face-to-face with struggling students" seems particularly vapid and shortsighted. How would they even know who is really struggling if they're not reading the students' work but delegating it to a notoriously unreliable tool?


>> The 'software dev' section is particularly off-putting. It starts off with a code-snippet that is ridiculously wrong: it defines 'typos' as any string matching this regex: '(\w+)\s+\w+'. That's right, if your text contains a word followed by whitespace followed by a word, that's a typo. The code could only appear correct to someone with no knowledge of the ___domain and no experience coding. The fact that this was published on "wearedevelopers.com" is a bit sad.

If you want this hype cycle to come to its inevitable end sooner rather than later, maybe it's a good idea to keep quiet about such errors.

Anyway I'm still curious to see what happens when enough of that garbage code makes it into github for the next iteration of Codex or some similar model, to be trained (or fine-tuned anyway). If we keep pointing out the errors in code generated by LLMs, we risk never finding out.


No point in getting dizzy offering feedback. The article was probably written with ChatGPT.


I just don't buy this happening for software engineers. I feel AI will always need human supervision, with a human guiding an AI that lacks the wider context a human can gather. I have really enjoyed doing some coding recently with GPT on some front-end stuff, which is where I suck (I am a backend / systems engineer). GPT taught me a lot, but it soon got itself into a lot of trouble, and had I not intervened a bug-ridden project would have been made. I realise the tech will improve, but I just can't see it faring well on its own without a human holding its hand and feeding it the context that a machine cannot determine on its own. You can see this in where it is strong: writing unit tests. Testing is a perfect fit; it can read the various types, functions and methods, determine what the outcome should be, and then write a test to check those outcomes are realised.
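To make that concrete, here's the sort of mechanical test generation it handles well; slugify is just a hypothetical toy function, not anything from my project:

  # slugify is a made-up example; given a small pure function like this,
  # GPT is good at enumerating the obvious cases as pytest-style tests.
  def slugify(title: str) -> str:
      return "-".join(title.lower().split())

  def test_slugify_basic():
      assert slugify("Hello World") == "hello-world"

  def test_slugify_collapses_whitespace():
      assert slugify("  Hello   World  ") == "hello-world"

  def test_slugify_empty_string():
      assert slugify("") == ""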

Rather than replacing software engineers, I see it being more of an enhancer.

Many engineers will be able to upgrade their productivity and find a lot of use from being able to bounce back and forth with GPT. This was my experience this week, I worked 'with the AI' and every time I got something working it felt like a shared high five moment.

Us being out of work? Nah, this sounds like the same sentiment from when calculators, and then computers, were supposedly going to make accounting a job of the past.


I think that this is a naive take. Current AI capabilities like ChatGPT can be compared to Pascal and C. We will get the Python and Java equivalent when we abstract things with hyperparameters and scale up the number of params at some point.


It's a really weird analogy because I think the only critical difference between C and Java is that the latter has garbage collection. What's the equivalent of garbage collection / memory management in AI...?


Are you getting confused between C and C++ (or C#)? Apart from Java being based on C syntax it otherwise bears very little relation to C.


Are you implying Java or Python is somehow better than C?


I think it's more about barrier to entry and ease of use


which is why it's not a good analogy: you make a powerful tool like "programming language" easier to use, you get more people with the power to write programs. You make an unreliable and brittle tool like ChatGPT easier to use, you get more crap content on the Internet.

Not saying that there won't be good use-cases for Large Language Models, just saying that the analogy doesn't work: these things will be useful if used by competent people in very specific use-cases. The way ChatGPT is used now is just to generate a lot of hype and middlebrow content: with that in mind, I think its low barrier of entry is actually a negative (it causes things like this: https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt...)


I started writing in Python this January, after spending years in Java, C, C++ and PHP. (In academia, Pascal.)

I would say quite explicitly that Python is a lot better than C, at least as long as you value pure concentration on the business logic of whatever you are implementing. Its libraries are helpful, huge and well documented.

I am overjoyed by the fact that I don't have to either a) reinvent the wheel myself (e.g. writing a JSON parser from scratch) or b) rely on some OSS library with a bus factor of 1, whenever I just need to store or read data.
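For example, the standard library already covers the boring store-and-read case (a trivial sketch):

  import json

  # No hand-rolled parser and no third-party dependency needed just to persist a record.
  record = {"name": "probe-01", "interval_s": 30}
  encoded = json.dumps(record)
  print(json.loads(encoded)["interval_s"])  # 30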


The Python honeymoon period lasts a long time. For me it lasted until the monolith I was working on had grown to ~500k lines of code. At that scale the smallest changes would cause (seemingly) unrelated breakage, despite many thousands of tests saying all was good. Python’s “anything is anything” philosophy made me miss the predictability of C and Java. Granted, that was before type annotations had taken off so maybe things are better nowadays.


In that case, blessed is me, because my Python projects are small-ish mutually independent modules ("probes") that only communicate using a specific JSON-based protocol over HTTPS.


Type annotations... help. They're not great, but they're better than nothing.


PyCharm handles them really well and I am happy that I started with Python 3.11.


I should elaborate that when I write "better than nothing", I'm someone with a background in strongly-typed programming languages (OCaml, Haskell, Rust) and even more strongly-typed programming languages (Idris, Coq, Twelf).

It's unlikely that Python type annotations will ever be able to achieve something comparable to what many of these languages had during the 90s. They're useful, but if you need strong static safety guarantees, Python is the wrong tool for the task.
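A tiny illustration of what I mean, assuming you run a checker like mypy over it: the annotation lets the tool flag a possible None, but nothing stops plain CPython from running the code until it blows up at runtime.

  from typing import Optional

  def find_user(user_id: int) -> Optional[str]:
      users = {1: "alice", 2: "bob"}
      return users.get(user_id)

  name = find_user(3)
  print(name.upper())  # mypy flags the possible None here; CPython raises AttributeError at runtime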


well done on totally missing the point; all that will happen then is we get bugs at a different abstraction level.


It won't be a problem for the best of us. But for the rest it might be.

It means fewer can do even more.

It might cause simply more software to be created or it might cause a shortage on the market. Or bit of both.


If AI makes engineers more productive, then it follows that companies will hire fewer engineers for the same work. So AI is making some engineers obsolete.


On the flipside, it might create more demand for some engineers (or skills in general) as the bottlenecks move "upward" on the skills ladder (if we define that ladder as "how hard is it for AI").

E.g., faster and cheaper compute created more demand for software developers as that demand was no longer capped by the compute bottlenecks. Similarly, faster and cheaper "basic" software development might create more demand for software architects, and so on.


High level languages have been making programmers more productive for decades. But that hasn't resulted in fewer programmers, but more programs.


> If AI makes engineers more productive, then it follows that companies will hire fewer engineers for the same work.

History says otherwise. When I started (late 90s I guess) we looked at starting a company. It was going to require things like big iron, rented DC space, teams of people to manage, for what was something that is now pretty simple. Fast forward and the same idea could be done (and has been done) by a couple people using a cheap VPS.

Every productivity enhancer that has happened over my career has not led to fewer jobs, but more. The ability for humanity to consume any excess resource is unmatched. I'm sure there will be some bumps along the way, but any company who thinks they can stay static and replace everyone with AI will get beat out by the companies using AI as a tool to continue to innovate.


> hire fewer engineers for the same work

or will take on more projects

The field has been evolving for a long time, with ever more automation and frameworks that make programmers more productive. So far that has increased workload and opportunities.

The real issue will arise if humans are suddenly removed from the loop completely and AI scales independently.


No, you're historically wrong.

If AI makes engineers more productive, then it follows that the application of engineering becomes economically viable on whole new sets of problems. So AI is making engineers even more sought after.

That said, I'm so unimpressed with AI at the moment that I'm starting to think it's just a bubble, but even if it isn't, it's not going to lessen the need for engineers in the least.


> then it follows that the application of engineering become economically viable on whole new sets of problems.

Agree.

The demand curve for highly skilled SWEs is incredibly robust but has been supply limited. Insofar as AI effectively upskills workers who were just below the demand cliff, this is a boon to workers.

The big shift I see is towards Quality. Implementation has gotten cheaper relative to problem specification and verification.


I don't see why these are mutually exclusive. The point is that those new "engineers" will not necessarily be the same people. Those "good" engineers will enjoy improved productivity and will be more sought after, while those "not so good" engineers will be replaced by AI. As always, any innovation just seems to increase inequality.


On the contrary, AI will _also_ create jobs for people who as of now would be unqualified for engineering. People who have enough of the right type of thinking, but can't learn to code well enough to be productive, can now be AI babysitters and integrators.


Many things have made engineers more productive in my working lifetime, yet there are still more of these pesky software engineers than ever!


ChatGPT has proven a productivity booster for me. I had to port some TypeScript backend code to Go, and with ChatGPT it felt like I had access to an experienced Go developer to ask questions of.

For juniors it is a massive booster as they have someone available all the time allowing them to explore further before asking for help.


> accounting a job of the past

sure. accounting stays.

so will software development.

but would you hire the boomer with the abacus, or the whizz-kid who can wrangle the AI to deliver you a full software project in a few days?


If the whizz-kid and his fancy AI get you wrong results that may bankrupt you or land you in jail, well, you may have been better off with the abacus. But as always the temptation is too high; for some, hope is indeed a strategy.


AI is in its infancy. I've seen drastic improvement just in the last four years. Accurate code will be reliably produced, and soon.


The boomer, every time, they've seen things, and they know stuff.


Your belief isn't required. After decades of paying a king's ransom to twitchy weirdos who sandbag deadlines, blow through project budgets like a freight train through tissue paper, and whose final product has a nasty tendency to cause as many problems as it solves, business managers are thoroughly incentivised to explore alternatives.


This take is very yikes. Your assumption that all coders are twitchy weirdos reflects back on your own distorted assessment of a whole industry, probably based on a few personal anecdotes. But if you hope AI will make things smoother for you, you're in for a rude awakening.


Idgaf if you think it's yikes; it's based on 25 years of working in and near the industry as a developer and listening to what's said when folks think there are only suits in the room. If you think D- and C-level managers/execs love their software guys you are deeply delusional.


Outsourcing all over again


What is up with the typos-in-CV code? It’s going to find every space between words?

Seems the author did not check the AI’s output…


It seems like a perfect example of why LLMs shouldn't be used for coding without serious supervision.


Yeah, I don't get it either.

And even if it had generated \s{2,}, well, would multiple consecutive whitespace characters be a typo? You need a dictionary to identify typos.
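A minimal sketch of the dictionary-based approach (toy word list; a real checker would load a proper dictionary or use a library such as pyspellchecker):

  # Toy dictionary check: anything not in the known-word list is a candidate typo.
  KNOWN_WORDS = {"this", "is", "my", "cv", "i", "am", "a", "an", "experienced",
                 "developer", "have", "years", "of", "experience"}

  def possible_typos(text: str) -> list[str]:
      words = [w.strip(".,!?").lower() for w in text.split()]
      return [w for w in words if w and not w.isdigit() and w not in KNOWN_WORDS]

  print(possible_typos("I am a experianced developer."))  # ['experianced']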


I still think the greatest danger of GPT is its abundantly demonstrated ability to turn off and/or get past people's critical thinking capabilities, more so than its ability to replace people's jobs. I have yet to see a code snippet past 4 lines that particularly impresses me. Is that legitimate progress in the field? Yes. But I'm not quite ready to throw in my towel.

I do somewhat fear my career turning into cleaning up ChatGPT-created messes. Of course I've already cleaned up my fair share of previous coder's messes. But ChatGPT messes have the potential to be a real nightmare even beyond that.

Everything beyond 4 or 5 lines has contained bugs, be they subtle or blatant (as this one is). But what's really concerning is that either A: the author doesn't notice before writing about how amazing ChatGPT is at code, or B: the author doesn't even think to check. Both are terrible, but each in its own way. And it is this reaction that concerns me more than the quality of ChatGPT itself.


Maybe you have stellar junior engineers, but I find reviewing GPT code as very similar to reviewing purely human code. We just need to get much better at verification…


AI art generators were shitty for decades. But now they got so good that many artists are afraid to lose their jobs.

ChatGPT and other chatbots will improve too.


My comment is specific to GPT, which is why I said that and not AI. GPT is not the whole of AI.

But the fact that people are already back to full-blown 1970s-style "the computer said it! it must be true!" with GPT-based technologies does not bode well for our ability to assess future technologies either. The possibility of future AI technology being just an even-more obfuscated and convincing error generator at this point can not be dismissed, if people are already failing this badly at evaluating what is still an embryonic technology.

I'm sure people will accuse me of being a Luddite about AI in the future, and I'm sure they won't appreciate "Look, it's not that I don't appreciate the technology, what I'm not trusting is you and your assessment, since you have a proven track record of being unable to assess AI technology reliably." I've got all sorts of friends to make in the future with that.


>> ChatGPT and other chatbots will improve too.

I've seen this and similar predictions repeated so many times. What are they based on? Who says those systems "will improve"? How will they "improve"? Why is everyone so certain?


To do a really good job I think you need an LLM. So the relevant Python code would be an API call to ChatGPT (a sketch of such a call follows the transcript below). I just tested it.

Me: Tell me if the following sentence contains any typos or grammar errors:

"This is my CV.I am a experianced developer. I have 5 years of experiance."

ChatGPT:

Yes, there are typos and grammar errors in the sentence. Here is the corrected sentence:

"This is my CV. I am an experienced developer. I have 5 years of experience."

The corrections made are as follows:

1. "experianced" should be "experienced".

2. "experiance" should be "experience".

3. The first sentence is incomplete and should be separated from the second sentence with a period

Point #3 correctly calls out the error (missing space) but describes it incorrectly. Also it corrected "a" to "an" (good for a grammar checker) but didn't list it as a mistake.
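For reference, wiring the same check up as an actual API call might look roughly like this (a minimal sketch; the openai Python client and the model name are assumptions, adjust to whatever you have access to):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  cv_text = 'This is my CV.I am a experianced developer. I have 5 years of experiance.'

  response = client.chat.completions.create(
      model="gpt-3.5-turbo",  # assumed model name
      messages=[{
          "role": "user",
          "content": f'Tell me if the following sentence contains any typos or grammar errors:\n\n"{cv_text}"',
      }],
  )
  print(response.choices[0].message.content)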


That's the future, getting rejected for using spaces.


Maybe the whole thing is sarcasm


Kinda wish it was. There's this horrible magical thinking that's common around this stuff, all my semi-technical colleagues are wetting themselves. It's kinda scary how much people trust this stuff.


You can become more specialised; but that will only take you so far. Coding will soon become no more than a means to an end. Which it always was. The only way out for developers is through expanding their vision beyond that of the tools made available to them. Why does something need to be coded? What does the end customer expect? Did they vocalise their need properly? Contextual knowledge can hardly be reproduced and is the best way to beat a ChatGPT-like tool.

This is the content I clicked for, and I feel like it's pretty vague and poorly articulated, but there's a seed of truth here. Coding is a highly technical profession and ChatGPT does seem to trivialize it somewhat. With ChatGPT automating away some of the developers, you will still need some developers around to understand and review the AI's output. The developers who remain will be the ones who are high-functioning: the ones who can communicate with both technical and non-technical staff, understand the business logic, and fill in the blanks of what the customer needs.

In this future, ChatGPT replaces some technical staff positions. So how does that work? In my experience with the tool, getting it to write anything multidimensional--like a board game--requires hours of back-and-forth articulating and re-articulating the interface, the business rules, and how those things interact. In this sense, it's like those developers who throw up their hands and complain about the requirements documentation, while other developers stay in constant contact with the stakeholders during the development process to deliver the business needs. I don't think the latter need to worry about job security.


Insightful comment, but you are assuming that the capabilities of this type of technology will remain static. Maybe that was also a premise of the article.

But we know that these systems will not stop improving. Just in recent weeks there have been scientific papers documenting LLMs with improved capabilities, as well as clear reports of more capable models in the business pipeline, and papers describing major upgrades to the models, such as adding a visual modality to the data.

But beyond the last few weeks, there is an exponential trend in the capabilities of the hardware and systems. And a clear track record of creating new paradigms when there is a roadblock.

In the next several years we will see multimodal large models that have much better world understanding grounded in spatial/visual information. Cognitive architectures that can continuously work on a problem with much better context, connected to external systems to automatically debug or interact with user interfaces in a feedback loop. Entirely new compute platforms like memristors or similar that increase efficiency and model size by several orders of magnitude. All of this is well under way.


Will be fun to watch people argue with these bots if they don't get the correct result.

Not to mention how answers change if the model changes.

Tried it multiple times on real problems I needed to solve and all answers were nicely written with code examples and explanation but completely wrong.

And it got worse after I tried to correct it.

From my point of view it's useful if you ask for something that could be copy-pasted from Stack Overflow, but not for new solutions that weren't already written down somewhere.


I don't like articles/subjects like this. How about: How to embrace ChatGPT in the future to make you more effective?

New things come and help: Industrialization, transportation, communication, and now what - assistentization?

And so what if someone loses their job to ChatGPT? Not a shortage of jobs out there.


  New things come and
Increase economic inequality.

  And so what if someone loses their job to ChatGPT? Not a shortage of jobs out there.
It gets increasingly harder to qualify for these new jobs, and those working the "easier" jobs will once again become poorer. This process is unsustainable.


The real danger is not ChatGPT and its capabilities, but decision-makers who are under illusions about its capabilities.


I remember my father telling me that computers would never be able to outplay humans at chess because they were really just calculators and couldn't think.

That argument has failed to halt the march of the machines. It has been made many times. I don't think it will hold here, the likely near future is that a lot of jobs are about to go the way of the horse-drawn carriage. Humans suddenly aren't competitive at a bunch of writing tasks.


>> I remember my father telling me that computers would never be able to outplay humans at chess because they were really just calculators and couldn't think.

The important thing to ask here -and no offence is meant- is who was your father and what did he know about artificial intelligence research and computer chess.

For the record, John McCarthy, who was a guy famously cognizant about both AI research and computer chess, was on the opposite side of your dad and thought that computers would be able to beat humans at chess by 1978 [1]. Come to think of it, many other CS and AI research pioneers, not least Alan Turing and Claude Shannon, had similar expectations, that were only off by a few decades at most.

Then again it's useful to remember what McCarthy had to say when Deep Blue beat Garry Kasparov, finally establishing computer primacy in chess, in 1997. Quoting from an article that McCarthy wrote at the time:

In 1965 the Russian mathematician Alexander Kronrod said, "Chess is the Drosophila of artificial intelligence." However, computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have _some_ science, but mainly we would have very fast fruit flies. [2].

That's because McCarthy, like Turing, and Shannon, and all the other greats of CS and AI research who were interested in chess, were interested in chess as an example of how humans think differently than other animals and computers; not as a sterile fight between man and machine for who can compute the most calculations, the fastest. Unfortunately, the sterile fight is all we ever got.

So maybe the machines "march", but maybe they're not really marching to where people think they're marching.

_____________

[1] https://en.wikipedia.org/wiki/David_Levy_(chess_player)

[2] http://www-formal.stanford.edu/jmc/newborn/newborn.html


Machines still can’t really think and are indeed calculators or very high on that scale. They’re good at following instructions which thinking humans conceived. To think otherwise is a folly


You can already ask computers to draw you pretty pictures. Why pay a human artist for that?


Power tools were thought to get rid of tradesmen, but they actually increased demand. The craft changed, though, from limited-output hand-crafted structures to something more process-oriented, with things like a shared code of standards to build to.

If AI can do a good amount of the heavy lifting it stands to reason we can standardize a lot as well as expect productivity to rocket for workers. I’d expect more demand.


Historically this has been the case with technology, but also the rate of technological change has been a lot slower than the pace at which humans can learn new things. If it takes 2 years to train a human in something new and 2 months (or 2 days) to train an AI, that's a situation without historical precedent.


Sure but I think our use cases and expectations will change too. We won’t just pause and keep making the same things. We’ll be able to build more novel things with AI as a helper and accelerator. I think the 80/20 law will remain unbroken in that AI will get us 80% of the way there but we’ll have 20% that’s completely novel.


This article is so bad, I stopped reading it after one minute.

Plain wrong things:

> Today’s most talked-about AIs are generative — far superior to their heavily supervised predecessors.

No they are not. Generative AI is one kind of AI. Just a model being generative does not make it superior. Plain GANs from earlier years are not "superior" to the cutting-edge image recognition or speech recognition or semantic segmentation models.

> they temporarily made the tech bros forget about Web3 and crypto, which is saying something

Bottom feeders shifted away from cryptocurrencies because enough people realized their pyramid-scheme nature and the price collapsed. No newer idiots could be lured in. And the plain stealing by "exchanges" merely accelerated that.

> back when a neural network was still called a Boltzmann machine

Boltzmann Machines and deep neural nets are different things.

I just wish fools and dummies would stop being loudmouths.


The outlook for devs hypothesised in this article is pretty bleak.

I'm sure it is just coincidental it is on a site (dev recruitment) that will benefit from a new humility in that trade.

https://www.wearedevelopers.com/


I think the real issue is companies will start reducing compensation and salaries by using AI in combination, all whilst still expecting people to work the same hours.

Vox broke down an article on AI main investment is replacing 'middle white collar' jobs https://www.vox.com/recode/2019/11/20/20964487/white-collar-...

We as software engineers have been using automation for years to improve our work, but that never really meant we had to work longer for less pay


Well, the attack surface also increases with AI tools. So if people start using them, we can always start attacking them too. Feed in adversarial prompts and let these systems loose. It is only useful as long as it is sandboxed. Anything beyond that is a cat-and-mouse game between the adversary and the AI corp. Therefore, as developers, we are always gonna be safe.


I can see a future where, say, an annual ESG report is compulsory for every business in the EU, and both the writing and the checking activity are outsourced to ChatGPT-like robots.

The final limiting hurdle to explosive growth of feel-good bureaucracy - need for some actual humans to work on the papers - is slowly eroding away...


The fact that there is a “How to protect your career?” for “Cringe Twitter Threads” is hilarious.


"ChatGPT will change everything!"

Everything?!

This kind of hyperbole is rampant in writing on these technologies and seeing it should instantly clue you into the fact that the rest of the writing is not objective. And sure enough, such is the case here.


In the long run, the only two professions AI cannot replace will be the ones related to sex and religion/spirituality. Time to figure out how to become a guru.


AI is the story of augmenting human intelligence rather than replacing it. The sooner we realise this, the sooner we can make good use of it.


Unimpressive post; we need a better and more honest treatise on this topic. The impending replacement is real.


How does AI change optimal dev team size? Can we reduce team sizes, and get things done in parallel?


I work in a factory and my job would be hard to automate away. I'm not worried



