
> Since they hire software developers to make the specification more rigid, and the managers don't seem to be getting better at this over time, why would you believe this skill set is going to go away?

Are we sure that an AI could not engage in enough back and forth conversation to firm up the spec? You’re kind of assuming that systems will be generated from a one-shot prompt, but it seems more likely that an interactive AI will identify the gaps in requirements and ask for clarification.

Alternatively, if the prompt-generate-evaluate loop is short enough the user can simply play with the running system and provide feedback to alter it.

This is essentially what developers do when they present a “weekly build” or whatever in an agile environment.

Solidifying requirements, stating them clearly, and translating them into machine-executable formats are all language tasks, and these models are really fucking good at those.

I’ve noticed in discussions like this that many software folks are assuming that AI capabilities will plateau soon, or will merely be extensions of what we already have (a better autocomplete, etc). I submit that we may reach a point where the AI is so compelling that we’ll reorganize teams/systems/businesses around it.




> Are we sure that an AI could not engage in enough back and forth conversation to firm up the spec?

This is the doomsday argument. What would I do if there's a nuclear apocalypse before lunch? I guess I'll die like everyone else.

An AI sufficiently advanced to do that is also sufficiently advanced to run the entire business in the first place, and also argue cases in court, do my taxes, run for president and so on.

You either believe that transformer models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++.


> You either believe that transformer models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++

This is actually something that makes me happy about the new AI revolution. When my professor said that, I thought he was an idiot, because no-code tools always make it harder to specify what you want when you have specific wants the developer didn't think about.

We give kids books with pictures because pictures are easier, but when we want to teach about more complex topics we usually use language, formulas, and maybe a few illustrations.

I still think no-code was always doomed, because every attempt at it lacked an interface that can describe anything you want, the way language can.

AI is finally putting an end to the notion that no-code has to mean clicky, high-maintenance GUIs. Instead it's doing what Google did for search: instead of searching by rigid categories, we can use language to interact with the internet.

Now the language interaction is getting better. We haven't regressed to McDonald's menus for coding.


I’ve used no-code tools since the 90s, and they all share a fatal flaw. For simple demo use cases they look simple and cool. Then you hit the real world, start getting pivots and edge cases you have to fix in the interface, and it becomes a 4D nightmare and essentially a very bad programming language.


I’ve spent a fair bit of time working on interactive chat systems that use a form of visual programming. It’s not good. Once you get past the toy stage (which is good and ergonomic), it’s just the same as programming except the tooling is far worse, you have to invent all your change management stuff from scratch, and it’s like going back 30 years.


What about coding in two languages, one textual and one visual?

Or a single language that has both visual and textual components?

Or a single language where each component can be viewed in textual or visual form (and edited in whichever form makes the most sense)?


Isn't the "Chat" part of ChatGPT already doing something close to this? I mean the clarification comes from the end-user, not from the AI, but with enough of this stuff to feed upon, perhaps AIs could "get there" at some point?

For example, this guy was able to do some amazing stuff with ChatGPT. He even managed to get a (mostly working) GPU-accelerated version of his little sample "race" problem.

See: https://youtu.be/pspsSn_nGzo


> Isn't the "Chat" part of ChatGPT already doing something close to this?

No, the amount of handholding you have to do to get it to work effectively presumes you already know how to solve the problem in the first place.

The best way to use it is the opposite of what everyone is busy selling: as a linter of sorts that puts blue squiggles below my code saying stuff like "hey stupid human, you're leaking memory here", or even "you're using snake case, the project uses camel case, fix that".

That would actually lower my cognitive load and be an effective copilot.
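
To make that concrete, here is a rough sketch of the kind of "AI linter" pass I mean, assuming the public OpenAI chat-completions endpoint; the prompt, the lint categories, and the output format are purely illustrative, not an existing tool:

    # Hypothetical "AI linter": send a source file to a chat model and ask for
    # line-numbered warnings instead of generated code. The endpoint is the
    # standard OpenAI chat-completions API; everything else here is made up
    # for illustration.
    import os
    import sys
    import requests

    def lint(path):
        with open(path) as f:
            source = f.read()
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-4",
                "messages": [
                    {"role": "system",
                     "content": "You are a code reviewer. Reply only with lines of "
                                "the form '<line>: <warning>' for problems such as "
                                "memory leaks or naming-convention mismatches. "
                                "Do not rewrite the code."},
                    {"role": "user", "content": source},
                ],
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["choices"][0]["message"]["content"])

    if __name__ == "__main__":
        lint(sys.argv[1])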


Fair enough, assuming a steady state, but the acceleration of the curve is what I'm most curious about.

The point I was alluding to above was that the prompts themselves will be recursively mined over time. Eventually, except for truly novel problems, the AI's interpretation of the prompts will land closer and closer to "that's what I wanted".

Some things to think about: What happens when an entire company's Slack history is mined in this fashion? Or email history? Or Git commit history, with corresponding links to Jira tickets? Or the corporate wiki? There are, I'd guess, hundreds of thousands to millions of project charter documents to be mined, all locked behind an "intranet" - but at some point, businesses will be motivated to, at the least, explore the "what if" implications.

Given enough data to feed upon, and some additional code/logic/extensions to the current state of the art, I think every knowledge worker should consider the impact of this technology.

I'm not advocating for it (to be honest, it scares the hell out of me) - but this is where I see the overall trend heading.


This is the doomsday scenario again, though.

In a world where we have the technology to go from two lines of prompt in a textbox to a complete app, no questions asked, that same technology can run the entire company. It's kind of hard to believe transformer models are capable of this, given we are already starting to see diminishing returns, but if that's what you believe, then you believe they can effectively do anything. It's the old concept of AI-completeness.

If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.

This remains true for any version of a language model, even a hypothetical future LLM that has "solved" natural language. Given the chance, I would still rather write formal language than natural language.


> If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.

But what if the "programming language" is not a general-purpose language, but a context- or business-___domain-specific one? One that is trained on the core business at hand? What if that "language" had access to all the same vocabulary, project history (both successful and unsuccessful), industry regulations, code bases from previous (perhaps similar) solutions, QC reports, etc.? What if the "business savvy" consumer of this AI can phrase things succinctly in a fashion that the AI can translate into working code?

I don't see it as a stretch "down the road." Is it possible today? Probably not. Is it possible in 5-10 years' time? I definitely think so.


I agree with your point about how to best use it today. We have seen that each new model generation both improves on prior tasks and unlocks new ones through emergent behavior. That’s the fascinating/scary part of this development. And yes, it’s “just” a language model. It’s “just” predicting the next token given training + context. We don’t really understand why it works, and it’s evolving non-linearly.

I asked GPT-4 to give me an SVG map of my town. I then asked it to put dots on some local landmarks. The map was toddler level, but the landmarks were relatively accurate in terms of their relationship to each other and the blob that it drew.

So this is a language model that has some emergent notion of space in its code generation abilities.


This is far from the doomsday argument, but maybe it's the "AI can do everything that has significant economic value today" argument.


Currently, we don't even trust a car's self-driving capability enough to let it on the roads without a human.

Until that day comes, I highly doubt a business owner would just blindly trust an AI to generate their business code/software without hiring someone to at least look after it. Therefore, software jobs could evolve, but not disappear.


Yeah, all this talk about complex systems being written by a language model that has no concept of files, code paths, and import systems sounds like job security to me. I'm a pentester, though.


I'm ... less optimistic about how well people can place their trust. Cars, at least, have concrete failure criteria and consequences for them.


The project will be more consistent and resilient to issues, but it will probably take about half the time it used to take without AI, not 1% of the time. Reading AI code is damn hard: it is code review, and it requires exam-level concentration.


Yes, but even in that case the role will be that of an "AI prompter"; it will not be done by the managers, because of the time factor. Even though AI can give you the result much faster, building upon it, testing/verifying it, and then coming up with the refined prompt is time-consuming. Only the write part of the write/eval loop will be faster, but not necessarily easier.

Especially the "debugging" part will be much harder. No one can look under the hood to understand what is wrong, and all you can do is shoot random prompts in the dark hoping it will create the right result.

It is scary how confidently and spectacularly wrong ChatGPT is right now, and it will create disasters.


Why would sufficiently advanced AI even need a prompter? The AI could play the role of the greatest prompter in the world, and ask the same questions to the end user that the human prompter would.


This is a misconception of how our industry works. Yes, there is market research with users, but it often comes after the problem space has been defined. Much of what you see in the tech sector today consists of "created needs": someone imagined a solution that users didn't even know they needed. To ask a question you first need to define a problem, and the problem is defined by those questions. This is the difficult part, and the main reason people still believe "the idea is the most important factor". Of course that is not true; there are hundreds of factors that come into play.

Imagine an AI, circa 2000, asking users what kind of virtual social space they needed. The answer would not have been Facebook. (There were other social networks before Facebook, but the time was not right for the "social" explosion.) By learning from existing solutions, the AI would have concluded that global virtual social networking is not something users want. And part of the problem was sociological/psychological, so far outside the realm of what the AI could consider, that we would not have what we have today.

Not that we would have missed much by missing the particular implementation of this idea that Facebook gave us, but the idea and what it unleashed are much more than that particular implementation.


Sure, people don't know what they want. But the point is there won't be a need for some intermediary person between AI and the end user.

Whatever the AI prompter brings to the table will quickly be provided by the AI itself. If a user doesn't really know what they want, there isn't a scenario where the AI prompter will suss it out but the AI itself won't.


> I submit that we may reach a point where the AI is so compelling that we’ll reorganize teams/systems/businesses around it.

Sounds like me getting reorganized out of a job, though... what does it mean to reorganize everyone around the AI if it does everything better than us?


> I submit that we may reach a point where the AI is so compelling that we’ll reorganize teams/systems/businesses around it.

For starters, I'd like Codex to be more than a next-word predictor: it should also "feel" the error messages, data types and shapes, and file formats, so I don't have to explain the context. It should be part of the system, not just part of the text editor.


You can do that prompt / play with it / feedback thing right now with my GPT+Stable-Diffusion powered website. https://aidev.codes

I am in the process of adding VMs in which the AI will be able to write software and fix compilation and other problems automatically.


In that case, how is the AI going to keep tens or hundreds of thousands of lines in memory to produce cohesive code that works with the rest of the codebase?

It seems prohibitively expensive to build and run transformer models with that much capacity.


GPT-4 already has 32k tokens of context for prompts. Once the argument is only about scaling a few orders of magnitude beyond the current state of the art, it sounds a lot like the arguments from 10-15 years ago that real-time ray tracing was not feasible.
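
As a rough back-of-envelope (the ~10 tokens per line of code is only a ballpark assumption, not a measurement):

    # Back-of-envelope: how much code fits in a 32k-token context window?
    # The tokens-per-line figure below is a rough assumption for illustration.
    TOKENS_PER_LINE = 10
    CONTEXT_TOKENS = 32_000

    print(CONTEXT_TOKENS // TOKENS_PER_LINE)           # ~3,200 lines fit today

    # A 300k-line codebase would need roughly this factor more context:
    print(300_000 * TOKENS_PER_LINE / CONTEXT_TOKENS)  # ~94x, i.e. about two orders of magnitude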



