Hey, Steve here from tldraw. This is a toy project with a horrible security pattern, sorry. If you want to run it locally or check out the source, it's here: https://github.com/tldraw/draw-a-ui. You can also see a bunch of other examples at https://twitter.com/tldraw.
Happy to answer any questions about tldraw/this project. It's definitely not putting anyone out of work, but it's a blast to play with. Here's a more complicated example of what you can get it to do: https://twitter.com/tldraw/status/1725083976392437894
Hey Steve! Super exciting project and congrats on the launch! I tried to use my OpenAI key and am getting an error around "you exceeded your current quota, check billing". Does the standard $20 a month OpenAI Pro subscription work for this, or are there additional permissions needed?
Edit: found the answer on the github readme
"To use your own API key, you need to have access to usage tier 1. Check out your current tier, and how to increase it in the OpenAI settings."
Wow! That flow chart is really a killer point and probably should be a first-class concept for a tool like this. Really starts to give someone the right levers for making something useful vs. a toy.
That last example is fun, it's amazing how you can give it feedback like that. When you select the generated app + your text, what's it actually receiving as input? Is it receiving its previously generated code plus your new text?
Yes! If you have a previously generated shape selected, it incorporates the code from that file into the prompt for the next one. i.e. "here's what we had before, here are the notes that came back"
```
You are an expert tailwind developer. A user will provide you with a
low-fidelity wireframe of an application and you will return
a single html file that uses tailwind to create the website.
They may also provide you with the html of a previous design that they want you to iterate from.
Carry out any changes they request from you.
In the wireframe, the previous design's html will appear as a white rectangle.
Use creative license to make the application more fleshed out.
if you need to insert an image, use a colored fill rectangle as a placeholder. Respond only with the html file.
```
(not sure why the "creative license" is referenced here, or why it helps. Creative Commons?)
and for each generation the user prompt is:
```
[IMAGE_LINK]
Turn this into a single html file using tailwind.
```
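Putting the pieces together, the request presumably ends up as a system message plus a multi-part user message. A hypothetical sketch of the assembly (the function name, field wiring, and the exact "previous design" wording are my guesses, not the actual draw-a-ui source; the message shapes follow the OpenAI chat API):

```javascript
// Hypothetical sketch of assembling the prompt for one generation.
function buildMessages({ imageDataUrl, previousHtml }) {
  const userContent = [
    { type: 'image_url', image_url: { url: imageDataUrl } },
    { type: 'text', text: 'Turn this into a single html file using tailwind.' },
  ];
  if (previousHtml) {
    // "here's what we had before" - the selected shape's generated code
    userContent.push({
      type: 'text',
      text: 'Here is the html of the previous design to iterate from:\n' + previousHtml,
    });
  }
  return [
    { role: 'system', content: 'You are an expert tailwind developer. ...' },
    { role: 'user', content: userContent },
  ];
}

const messages = buildMessages({
  imageDataUrl: 'data:image/png;base64,SCREENSHOT',
  previousHtml: '<main>previous design</main>',
});
```

With a previous shape selected, the user message carries three parts (screenshot, instruction, prior HTML); without one, just the first two.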
"Creative license", in plain English, means essentially "don't worry too much about constraints, do what you feel is best." I don't think it has anything to do with software licenses.
If you add a DOM node somewhere, it’s first removed from where it was because it can only exist in one place. You need to clone the node if that’s not what you want.
Incidentally, here’s a briefer spelling of that function (skipping the superfluous Array.from(), using a for loop instead of forEach, and using .append() instead of .appendChild(), cumulatively reducing 8 years of browser support to 5½+ years, which is no meaningful difference; and although I’ve declared Array.from() superfluous, note that this is only the case because querySelectorAll returns a non-live NodeList—you couldn’t do this with childNodes since it’d be being mutated during iteration so you’d miss half the items due to how it all works):
```
function moveSelectedItems(fromList, toList) {
  for (const item of fromList.querySelectorAll('input[type="checkbox"]:checked')) {
    item.checked = false; // Uncheck the item
    toList.append(item.closest('li')); // Move the entire list item
  }
}
```
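The single-parent rule can be sketched with a toy model (plain objects standing in for DOM nodes; this is NOT real DOM code, just an illustration of the move-vs-clone behavior):

```javascript
// Toy model of the DOM's single-parent rule: appending a node first
// detaches it from its current parent, which is why moving without
// cloning "steals" the element.
class ToyNode {
  constructor(name) {
    this.name = name;
    this.parent = null;
    this.children = [];
  }
  append(child) {
    // Like the real DOM: detach from the old parent first.
    if (child.parent) {
      const siblings = child.parent.children;
      siblings.splice(siblings.indexOf(child), 1);
    }
    child.parent = this;
    this.children.push(child);
  }
  clone() {
    // Like cloneNode(): the copy starts with no parent.
    return new ToyNode(this.name);
  }
}

const left = new ToyNode('left-list');
const right = new ToyNode('right-list');
const item = new ToyNode('li');

left.append(item);
right.append(item);        // moves: left is now empty
const leftAfterMove = left.children.length;

left.append(item.clone()); // copy instead: both lists keep one child
```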
Yeah, to the extent that "I need to lie down," it's actually due to the features I didn't even know existed. In that followup with the accessibility corrections, I had no idea you could even do those things...
I checked in a screen reader. Nothing is announced when pressing the buttons. This is a problem. It should say something like “Checkbox B moved to the left”. Without any sort of announcement, the user has no idea if pressing the button did anything.
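For what it's worth, the usual fix is an ARIA live region. A minimal sketch (assuming a plain HTML page; the id, class, helper name, and wording are all made up):

```html
<!-- Visually hidden live region; screen readers announce text written into it. -->
<div id="move-status" aria-live="polite" class="sr-only"></div>

<script>
  // Hypothetical helper: call this after moving an item between lists.
  function announceMove(itemLabel, destination) {
    document.getElementById('move-status').textContent =
      itemLabel + ' moved to the ' + destination + ' list';
  }
</script>
```

aria-live="polite" waits for the screen reader to finish what it's currently saying; "assertive" is only for announcements that must interrupt.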
Thank you very much for your insightful review and valuable feedback regarding the accessibility of our recent list transfer feature. Your suggestions were instrumental in enhancing the user experience, especially for those relying on screen readers. I have incorporated your recommendations into the updated code [0], ensuring better accessibility and usability. I'm also including a link to our conversation [1] for further context and transparency. Your thoroughness and attention to detail are greatly appreciated, and I look forward to your continued guidance and expertise in future projects.
Such recent demos show both how impressively ML/AI has advanced recently, and how unimpressively repetitive and unoriginal tasks keep being reimplemented by millions of developers worldwide. Since most UI screens can be accurately described in one or two paragraphs, it's no wonder they can be represented in much detail in a relatively small embedding vector.
While I agree at a high level, it's also important to understand that most of these demos are being carefully cherry-picked. If you are just seeing the viral demos on social media, you're going to think AI is further along than it actually is for more complex tasks. People who are non-technical and not using AI in anger to get real work done are going to be the most susceptible to this.
Those in the weeds are generally going to have a more nuanced view of the benefits and challenges--i.e. that it's incredibly useful but also very fallible and requires careful hand-holding to get production-ready results.
I say all that as an AI optimist. The value is real and the most impressive demos are glimpses of where we're heading. But it's going to take some time before the median result catches up to the hype.
I think the examples by naysayers of what AI can’t do are often cherry-picked.
The way has been shown with the web. Now many people who would have been paying a designer are using things similar to Wix. A lot of people don’t need top of the line custom work, and most custom work isn’t top of the line. I’ve seen AI frequently, but a bit unpredictably, hit the high notes. https://www.joelonsoftware.com/2005/07/25/hitting-the-high-n...
You know, that’s a great point! Both the negative and positive outliers can be cherry-picked to misrepresent what working with AI is really like in practice. Some want to pretend it’s much further along than it actually is, while some want to pretend it’s all just a flash-in-the-pan. The truth lies, predictably, in between: it’s useful and it’s real, but it’s not consistent or reliable… yet.
A lot of methods of expressing software ideas are also very inefficient. The actual interesting part, the entropy, is very small. In the demo it’s literally two sliders controlling two CSS attributes which is not a lot of bits of entropy in a UI specification. With an appropriate UI specification language, that would be, what, three lines of code? Needing to manage Web UI boilerplate is where the difficulty is.
I have been thinking about that a lot recently. Where I work, we spend a very small fraction of our time on building things that are unique to our business. Maybe we are doing something very wrong, but I am under the impression that most of the code that gets written anywhere is extremely low-entropy.
This low-entropy, repetitive coding is not limited to the user interfaces. We do tend to describe the same structures and logic over and over again in front-ends, backends, and databases.
I am currently building an open-source project that tries to make the definition of applications from database structure to business logic to user interfaces, much more declarative and compact. If you are interested, you can try it on https://sql.ophir.dev
> we spend a very small fraction of our time on building things that are unique to our business.
I usually see this in places/cultures that value code-beautification projects rather than delivering value to the customer. Sometimes, they even want to do the latter, but actually do the former.
If you work somewhere that focuses on delivering value, the devs constantly complain about technical debt that will never, ever get fixed. That's the only sucky part.
"Code beautification" is not the point, but a tool. Smaller and clearer code could be faster to write, and harder to make mistakes using. This is why e.g. Rails can be so good at producing certain kinds of apps very quickly and in very few LOCs.
The problem is, of course, that simplicity follows complexity, not the other way around. Because of that, it's mostly "trivial" and "repetitive" tasks that receive polished tools for easy and compact expression. Anything new and non-trivial usually grows ugly and uncouth for quite some time.
Code Beautification is a waste of time and ossifies a code base. It's a form of optimization that actually makes the code harder to change and the business less agile. It is pretty though.
Beauty for beauty's sake is art, not business.
Usually simple, lean, and logical is also beautiful; beauty is not a random quality. But sometimes too simple and too lean is not flexible enough; then that's a case of a wrong abstraction, or of premature optimization.
This reminds me of the introductory scene of the amazing game The Stanley Parable:
Stanley worked for a company in a big building where he was Employee #427.
Employee #427's job was simple: he sat at his desk in Room 427 and he pushed buttons on a keyboard.
Orders came to him through a monitor on his desk telling him what buttons to push, how long to push them, and in what order.
This is what Employee #427 did every day of every month of every year, and although others may have considered it soul rending,
Stanley relished every moment that the orders came in, as though he had been made exactly for this job.
It was a sobering moment when I realized that accurately described most of my job too.
That dialogue is hard to fully appreciate without the audio, and that button didn't work for me, so I'm linking a yt. However, I highly recommend that anyone who hasn't played this game should just go play it, even if you're not especially keen on games. It's absolutely brilliant, and consumed as a played game is absolutely the greatest way to absorb its delightful prodding (or anti-prodding (or anti-anti-prodding)) message.
Even just the free demo! The demo is incredible as a standalone experience, and does not have identical dialogue / experience as the full game. It is in fact a meta-experience of the game.
My email signature sometimes says "software done right is always solving novel problems." I wish people always had the time and mandate to find the simple generalization of their sequence of one-offs.
Hopefully this sounds like helpful feedback instead of annoying nitpicking:
I found that the substantial "flash of unstyled content" when the page first loads to be very jarring/unappetizing.
I think it would be worthwhile to investigate one of the approaches that can mitigate this FOUC effect.
The worst I'm able to get when manually disabling the cache and simulating a slow 3G connection is this: a blank page first, then text in the browser's font, then the text re-renders with the right font, then the icons load. The user should never see completely unstyled content.
The site uses "font-display: fallback" so this happens only on slow network connections. If the font loads fast enough, then the fallback never appears.
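For context, font-display: fallback looks roughly like this in the @font-face rule (family name and path here are made up):

```css
@font-face {
  font-family: "SiteFont";
  src: url("/fonts/sitefont.woff2") format("woff2");
  /* fallback: a very short block period (~100ms) where text is invisible,
     then the fallback font is shown; if the web font arrives within the
     short swap period (~3s) it replaces the fallback, otherwise the
     fallback font is kept for the page's lifetime. */
  font-display: fallback;
}
```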
The browser's font switching to the right font is exactly what we are talking about. I'm in the Netherlands, so maybe I am just far away from your servers, but there's a 200-300ms window (essentially while reading the title; at least that's what I was doing when it switched on me) before the font switches.
You can probably just use the system font for the title, and nobody would ever notice the issue.
IME the real "problem" (maybe just interpersonally) is when you have a product person, or a designer, who insists that something needs to be pixel-perfect and subtly different when it objectively does not. Maybe even when it is a grey area but the juice isn't worth the squeeze.
Case in point I'm in the middle of, let's just call it what it is, an argument, because a new page in a new section of our app has a different design for form elements, specifically single check boxes (think TOS, affirmations, etc). Well we have shared components obviously, so we're reusing the component. No sense taking half a day or more to reskin this one checkbox on one page.
The amount of grief and manhours wasted discussing this checkbox would astound you. Thousands of dollars in payroll over multiple meetings so far with no end in sight because this particular check box just has to be slightly different. The ticket which has been feature-complete for over a week has no chance of being merged in November.
I swear the art form here is to provide the product people and designers with a tool/framework where they make the choices and they suffer the constraints, then they have to blame themselves when they can't do it.
There is the old story about Steve Jobs and the Mac calculator app, where they made a calculator toolbox to build one in order to prevent him demanding seemingly arbitrary changes all the time.
Exactly. Look not at the requirements but at the likely space for requirements over time and build an engine that can be tuned/tweaked/configed to meet all the likely requirements. Makes testing a little harder but makes dev and product happy.
The usability article [1] from yesterday suggested a similar problem - design-oriented people want to put their graphical mark on their GUIs, and it comes at a significant usability cost. GUIs are most usable when they are consistent - everyone uses exactly the same UI elements, with the same exact color schemes and shapes and sizes, to mean the same things every time. And that’s…not what people do anymore.
I blame Microsoft for having five different UI toolkits across three languages. Web browsers aren't that cross-platform, but the web is the GUI toolkit that is, so use that!
Most users don't care about any of that, either. And I would argue that i18n and l10n belong outside your language's framework (and obviously outside CSS).
This is 100% incorrect. Users do care about software being built for them, in a language they can understand and use, and they very much want it to be usable and accessible to them. You'll have to provide me with some citation showing otherwise for me to take that argument seriously.
> I would argue that i18n and l10n belong outside your language's framework (and obviously outside CSS).
Obviously Frameworks and CSS disagree with this assertion. Considering you've presented no argument though, I don't see why you would think that, for example, it wouldn't be important to style your site differently for different languages. Are Americans routinely reading rtl?
Edit: It just occurred to me you think this way because most everything you create is specifically created for you and those like you. You care, you just don't realize you care until it's taken away.
Accessibility is legally mandated in many cases and if you ever want to sell your app to government or enterprise customers, it’s likely to be a requirement.
Of course the way that text is written, displayed, rendered, formatted, etc is all highly language dependent and culture dependent.
"stop"
"STOP"
“Stop”
"STAHP"
These all mean different things even though it's just one language. To suggest that the way symbols are presented doesn't carry its own symbolic communication is willful ignorance at best, and at worst a kind of arrogant imposing that "my way of seeing things is the only correct way."
In a thread about the "bits of information transfer successfully encoded by AI on a UI implementation," I would expect an experienced engineer to notice the bandwidth of communication (or lack thereof) being demonstrated.
If we're taking that route, most people don't care about any specific thing, so we can just skip all this development effort and play video games all day.
You've put into words what has bothered me about the relentless boosterism of B2C AI solutions on the backs of hundreds of these "Hello World" type demos. "OMG, programming is dead!"
The strange thing is that on HN, the limits of these approaches re: extensibility and maintenance are easily recognized when talking about traditional no-code platforms. But somehow with AI, these problems are now fixed, and we won't have to worry about unspooling spaghetti spat out from a black box.
I'm far from an AI booster, but I notice that the "AI-powered coding" goal posts keep moving as soon as something is accomplished. Ten years ago, if you showed someone a text box where you could type "Write me a C++ Hello World program" and it actually would do it, it would have been considered wizardry. Then, it became "Well, sure... it can only write Hello World--what good is that?" Now, it's "Well, sure... it can do basic slider controls, who needs such a toy app?" What's next? "Oh, the full first person shooter game it conjured up from a prompt is generic and uncreative. Who would want to play that?"
I mean who cares about hypothetical benchmarks - when I try to apply these tools they fail more often than not.
The only tool I can say I'm reliably using is Copilot: its context-aware completions work very nicely, and it's easy to get a feel for when it will be useful and when it won't, so it improves productivity.
Copilot's chat interface is terrible. I guess it tries to be stingy about context tokens; it's always so much hassle to get it to do anything useful that it takes more explaining than just doing a Google search and reading.
ChatGPT hallucinates so much, mostly at the times you want it to hallucinate the least.
I've tried using Midjourney and DALL-E to generate placeholder art and memes; they're worse than ChatGPT.
I've built stuff on top of the API, and it's very inconsistent and falls apart at totally random, unexpected places (not to mention it's nondeterministic).
I want these tools to work, but they are just so inconsistent, and they introduce such strange failure cases that I'm not used to, that it's more trouble than it's worth over existing workflows.
I agree, largely. ChatGPT cannot program for me (it fails on anything that is a bit more complicated than very common problems). It cannot write a good algorithm for me (unless the algorithm is a very standard one, and if so, why do I need it to write it?). It gives incorrect information a lot of the time. The docs it is trained on are outdated: many of the methods it suggests no longer exist, or the API endpoint has been deprecated. It doesn't seem to have a "sense of time", meaning it seems just as likely to suggest data from 10 years ago as data from 1 year ago.
But there are times when something is behaving very strangely and ChatGPT helps me work through it. Examples are complex SQL queries, quickly combining existing queries, oddities in CSS, etc.
It may not give super-accurate answers, but at least it gives something that I can work with or work on.
I think this makes it a valuable tool, but it's not going to replace developers. Well, sure, some people will just use ChatGPT rather than hire a programmer; I've worked with business owners who learned to code rather than hire programmers. I don't expect different results in either of these cases: the code will be very flawed, have huge security issues, and not be maintainable.
I don't think the goalposts have changed. Websites (not webapps) are one of the easiest things to automate because the patterns used are ultra-predictable. All the way back in 2017, there was a semi-vaporware startup called theGrid.io that was going to automate that using prompts. It went dark in 2020.
GPT 3.5 should be able to handle this functionality easily on the basis of these demos yet a full-fledged product that does this has yet to make an appearance. Squarespace, Wordpress page builders should be all over this, yet they're not. Neither are any "disruptors" like Webflow. Maybe they know something that Hello World prototyper does not?
I think it's totally reasonable to move the goalposts when it turns out the original goalposts didn't actually get you anything more than a "gee-whiz" demo.
If those old goalposts actually helped solve engineering and product needs then there would be huge praise for the achievements.
Do you really care about extensibility and maintenance when you could just generate an entirely new component when you want more features or need to upgrade the library?
If the team that came up with the prototype assumes that responsibility, then no, I don't. I'm sure an AI will understand how to refactor the codebase on an adhoc basis without creating any problems down the line.
Not a matter of refactoring. AI should ideally regenerate the code from the spec each time the requirements change. "Programming" won't go away, but it will refer to working with specification languages, not implementation languages.
In the best of all possible worlds, you'd have to deal with the C++ or JavaScript code about as often you have to dig into the x86 or ARM assembly code now.
The second you need to do something the AI can’t do, you’ll be trudging through garbage and breaking tons of things in regression (causing either errors or visual bugs).
In your scenario, AI will likely produce code that _it_ determines to be maintainable, or if it’s rebuilt each time as you suggest, then it doesn’t need to be maintainable or readable at all.
Good code is written for teams, not individuals. It’s written for future you and future people who you will never meet.
You'll find that argument holds up well at first, but it won't age well at all. None of the "AI can't do X" arguments will.
The most popular programming language in 2030 -- 2035 at the latest -- will be English. Few people will GAF whether the underlying generated code is readable or styled for human understanding, any more than they care about the compiler's machine language today. Some will, of course, but it'll be a rarefied, specialized career practiced only by gurus on mountaintops, as assembly programming is now.
It just exposes how stupid our software stacks are. Everything is driven by "well, maybe they'd want to customize this and that and that and three hundred irrelevant details". When in reality all we ever wanted was to separate the functionality from its presentation. Which, incidentally, was exactly the Web we had in 1996 with just bare HTML. Things have gone horribly, horribly wrong in such a mind-bogglingly stupid way and nobody is better for it. Oh but those dropshadows and inner flexboxes or something.
> how unimpressively repetitive and unoriginal tasks keep being reimplemented by millions of developers worldwide
This extends far beyond just developers. This is a majority of all office work, from data entry to accounting to creative work. Most office work is just doing the same thing over and over again, often times with different people repeating what are essentially the same tasks, just at different companies.
Well billions of human beings are living remarkably repetitive and unoriginal life. Doing few categories of work, living in few categories of home, and eating few varieties of food. Same for entertainment, transport and so on. Most people are just running in rather tight loops.
Perhaps to start with, one way to have vastly more creativity/diversity of things people do is to have vastly fewer humans on earth.
HTML supports a lot of UI widgets now, but everybody keeps reimplementing their own. And because nobody is using them they aren't improving, so nobody is using them, so they aren't improving.
It’s because they display—and behave—wildly differently in different browsers. This has been the story of tons of HTML5 “widgets”.
I know this isn't quite a widget, but when something as simple as input type="number" was introduced, I was excited because it could be used to call up the mobile keyboard for numeric input (say, for a zip code, which is a common use case). But unfortunately, it can also be changed via your mouse's scroll wheel, accidentally, so I'm stuck using input type="tel" for everything. (Do you want to scroll to your zip code?)
The problem with these things is you can’t just push out improvements and fixes. Everything has to be backwards-compatible and I feel like they’re never nearly good enough at the beginning.
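A common workaround, for anyone curious (a sketch; I haven't verified it against every browser), is to keep type="number" but blur the input on wheel events, so scrolling past it can't change the value:

```html
<input type="number" id="zip" name="zip" inputmode="numeric" />
<script>
  // Drop focus on wheel so scrolling can't accidentally change the value.
  document.getElementById('zip').addEventListener('wheel', (event) => {
    event.target.blur();
  });
</script>
```

inputmode="numeric" also brings up the numeric mobile keyboard on its own, even on a plain text input.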
In that vein, I wonder if you use these embeddings and compression to work out a new sort of programming language which could extremely concisely represent these concepts. Probably the actual result would be so complex as to be worthless, like code golf languages but worse, but maybe it could help us come up with an interesting new programming language paradigm.
I think the parent's comment isn't saying that UIs should be wildly different from each other, but more so highlighting that this is a prime use case for an easy abstraction over the top.
UIs should be repetitive, but we keep reinventing them anyway to satisfy the vanity of product owners/designers/managers/clients/..., and we've built entire industries around making UIs slightly more different than the last time.
In my experience it's because every time you give a page or product spec to a designer they give you something slightly different based on however they happen to be feeling that day. Combine that with whatever product people do, and you get basically random UIs for each task.
I suspect a large part is that the visual nature of the topic simply lends itself to endless bike shedding and fashion cycles; maturity and stability are going to be anathemata in these contexts.
But it also doesn't help that input/output form factors keep changing. We just about understood what did and didn't work with terminals when desktop GUIs sprang up; by the time those started maturing, we had to figure out how to adapt to PDAs and pen inputs, then came smartphones and tablets, then 3D for a while, then VR, and in parallel to that, increasingly exotic "classic" form factors like folding devices with fluid screen sizes, ...
> Is that because we still don't understand things well enough to design a decent foundation?
Nope. Don't let the conmen fool you that the incremental changes in HCI justify all the new coats of paint. Form and function changes are happening in parallel.
Fashion has to change, that's just the nature of fashion. Fashion will also spin yarn to justify itself. That's also just the nature of fashion. Don't get me wrong, I want all my software to be palatable to modern tastes. But entropy always increases with time.
I would say that in the long term/big picture we have figured out a pretty standard UI for computers. In the 1980s you probably had to read a pretty hefty book for any given model of computer to be useful. Now, we can sit down in front of just about any computer and trust that the Mouse and Keyboard will exist, and act like we expect (i.e. the left button on the mouse is for selecting, there are arrows on the keyboard which can control a cursor, etc), and that the GUI software will be substantially similar regardless of manufacturer (windowed programs, a bar of some sort on the lower edge of the screen for controlling the OS, a bar at the top of the screen for controlling the active window, etc.)
The parts of the UI toolkits that change are the least consequential. Round or square corners, buttons, skeumorphism, etc...
OK but if it's that common it shouldn't take more time to create the final result than the time it took to draw it and render it. We don't want to spend time at such a low level abstraction, we want to be describing higher-level behavior.
> it shouldn't take more time to create the final result than the time it took to draw it and render it
Ideally it should be faster. Ideally it should be slower to draw and render it than it is to create the final result.
My controversial hot take on UI is that I don't think graphical tools like Figma encourage good UX habits. Unless you're doing something really creative, (opinion me) you will get better results if your UI starts in a text editor. You want a slider? What you want is:
```
<input type="range" id="volume" name="volume" min="0" max="11" />
<!-- And then you get more boilerplate double-binding the input and wiring it up to whatever component it controls -->
```
Or better, in a way that your UI designer can understand:
```
Volume (0-11 slider) => Music Volume
```
When we say that these kinds of tasks contain too much boilerplate, it's more about the amount of code required to actually wire these things up, and about the defaults and caveats of the systems that we're using to build them.
And when we say that this kind of boilerplate should be eliminated rather than plugged into an AI, what we're talking about is trying to get rid of the stuff that makes people feel like "well, I just need to draw my interface, it's too much work writing it out or programming it." Because web authorship isn't actually there yet; it isn't efficient and easy to do this boilerplate from scratch.
But in a way, visual representations of high-level behavior are themselves an inefficient way to describe behavior. It's lossy, it doesn't always represent multiple states well, people forget to handle other setups or states. You need to draw boxes and move them around and if you want to reposition anything you have to move everything else around it? Nah, it's a slider from 0-11, it should be double-bound to some kind of variable. And I don't want to think about boxes, I want to think about what the control is and what it does. When I start building UIs, the first thing I do is I make a markdown list that just lists the controls. I don't start by drawing.
This is viewed as a kind of programmer-centric way of thinking about design, but I don't think it is; I think it results in better designs across the board. Drawing shapes should be a step that comes much later in the design process. It should still happen: you want to do these kinds of visual tweaks to make sure things line up well and to think about presentation, the same way you want to do a visual pass when typesetting a book. It's not that it's not important, but the visual position of every element is not the part of the design that's most difficult; figuring out what to show the user, and when, and how to represent it is the difficult part. And in the same way that you wouldn't write a book and start out thinking about the page breaks, it doesn't make sense to think about the positioning of every control before you've figured out what your controls even are.
It's kind of a failure of modern UI/UX toolkits that people are so hungry for visual design. It's backwards, we treat the behavior of controls as an implementation detail and the positioning of controls as the primary design step. It's the opposite, how a control behaves is important, and how it looks is an implementation detail that we may need to change or polish in the future depending on whether our current app-wide default styles work well for the control or not. But that's because people are so used to feeling disconnected from the implementation and are so used to the implementation being a repetitive chore.
I welcome these tools, but when you need to be completely accurate with something, you have to drop the nice graphical tool and actually edit code... and then it's a mess. Either the code is super complex to humans, or going back to graphical mode breaks everything.
I just see this as a tool to help make UI designers (and maybe POs) look smart and competent, but the real work is going to go to the programmers, just as it does today.
UI designers will be able to give a "demo" but how will this basic functionality translate to the rest of the app? It won't.
> how will this basic functionality translate to the rest of the app? It won't.
It certainly will lead to fun and productive conversations like “it’s already working right there! Why is it going to take so long to get it into the app?!? Can’t you just download it?”
Since well before GPT there has been an argument for making very early prototypes/mocks more obviously lo-fi, such that their visual polish is proportional to how functional they are under the hood.
Beautiful, seemingly “working” UI-only prototypes have a way of setting unrealistic expectations even with clear communication, leading to a higher probability of proto-duction.
A decade ago when I worked at an agency, we had a policy of having two branches of any given project: an internal one, and a version with "gray-boxes.css" applied. It's too difficult for humans to separate visually polished from functional and finished.
I hate it when designers present what looks like polished and complete UIs in Figma. To the execs it looks like a finished product which frequently creates unreasonable expectations that interfere with what should be the iterative nature of UI design.
I don’t know how many times I’ve come across a project where a terrible system design was foisted on a group of engineers because of the constraints required by a Figma design that was blessed by some exec or other.
Granted there are worse problems in these orgs, but easy high-fidelity mock-ups vs. wireframes has made it worse.
I hate it when engineers complain, moan, and drag their feet on implementations that are hard but worth it. Because really, under the surface, they just don't like doing something new, even if it has clearly defined user and business value.
I managed a 250 person engineering team at FANG. The number of times I had to explain to a jr or mid level engineer what our fundamental business was, was astounding. They would constantly argue tickets on a premise that was entirely self contained inside their code -- and be entirely oblivious to what the product was accomplishing outside in the real world with real users. Now this wasn't always the case, and frankly I ran into it much less there than I had at smaller companies. But it still happened a lot.
And these engineers would always boast the most about how they knew best about X or Y.
Maybe that was my fault as a leader -- but I couldn't fix the incurious.
The disconnect is that both sides are valid. User value is important, but stability, opportunity cost, and ROI also provide user value.
It's not valid to say that the realities of the current code base are irrelevant just as much as it's invalid to say clean code always trumps features.
That’s actually not true, and this is the mistaken idea engineers have. It’s not a negotiation. User value that translates to business goals always trumps internal code, or you don’t have a business to fund that code. You find a way to do it if you need to fund the business. If not, you are sunk.
I agree with you to some extent, but if you follow this line of thinking consistently, then things of no apparent "User Value" (say, a code base not riddled with technical debt) never get any budget or attention, in favor of shoehorning in more shiny stuff of obvious value.
What this then leads to is borderline unmaintainable code, because the project managers, software architects, etc. rarely touch the IDE anymore and the more junior people can't or won't articulate the issues they are having (e.g. a feature made tests 10x slower, a hastily added API often times out locally, etc.)
This then leads to people jumping ship every 1-2 years, low productivity and bad implementations. All in the name of precious User Value!
As a developer I actually agree. Too often we let “straightforward” or “fits best with the backend design” cause a knee jerk reaction to reject designs that are better for the end-user. I’ve been working to catch myself and say “yeah this won’t be easy but in the end it’s going to fit better with the user’s mental model of what they are doing”.
The example I think of is if you have 2-3 related entities a developer might like a simple CRUD for each thing where you have to create the parent object before you can create the child (in a relationship). However sometimes the child is the obvious first thing a user wants to create so it’s important to build a UI that lets them create the parent on the fly or even have no parent until later.
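A minimal sketch of that parent-on-the-fly pattern (entity names like `Project` and `Task` are hypothetical, not from the thread): the schema simply allows a parentless child, and the UI can create or attach the parent later.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Project:
    name: str


@dataclass
class Task:
    title: str
    # Allowing a parentless task is the schema decision that makes
    # the "create the child first" flow possible in the UI.
    project: Optional[Project] = None


def create_task(title: str, project_name: Optional[str] = None) -> Task:
    """Create a task immediately; create its parent on the fly only if named."""
    project = Project(project_name) if project_name else None
    return Task(title, project)


def attach_to_project(task: Task, project: Project) -> Task:
    """Later, the user (or the UI) can assign the parent that was deferred."""
    task.project = project
    return task
```

The developer-convenient version would make `project` mandatory; the user-friendly version above costs one `Optional` and a later attach step.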
As a younger dev I dug in my heels too often on things like that, and I think it’s made me much better as a dev to approach designs with fresh eyes and think about how I’d want to experience the UI/UX, instead of bringing preconceived notions based on how we built the existing code/UI.
One thing I will point out is I love working with designers who are open to “what if we did X instead of Y? X will take me 1 hour but the Y in your design will take 1 week.” Sometimes the “Y” is worth it, and sometimes the designer thinks “X” is just fine (or even likes it better). Nothing is worse than strict designs passed down that need to be pixel-perfect with no wiggle room.
I generally agree with you, and it's how I operate, but then you see Apple, which strictly requires engineers to follow design/marketing requirements to the pixel -- and they are the most successful consumer business on the planet.
I was discussing with a client how to integrate our software with his.
He sent me a screenshot of the main form.
I put the screenshot into ChatGPT and said “make a react form like this in bootstrap”.
Made some adjustments, added my software, a few hours later showed the client who was knocked out to see a proof of concept of our systems integrated so quickly.
When doing web development I often take a screenshot of a problem with css layout, upload to ChatGPT and ask how to fix it.
But the thing is, creating a basic form page -is- simple. What is so amazing is that after ~25 years of web development, we have continued to make it more complicated than it should be, by continuously coming up with new web frameworks that are brilliant for all kinds of fancy use cases but overcomplicated for the more common and basic tasks.
Yes, there have been exceptions every now and then, but most web devs don't like them. They don't look fancy on your cv and face it, who wants to stick to building web forms for the rest of their career?
The software world is much more flexible than the real world (yeah, I know, you're talking to Sherlock here).
Go to a civil engineer and ask them to create a building that tilts extremely to one side and requires concrete reinforced with titanium whatever... they will laugh at you.
Do the equivalent to a software engineer or product manager and they will hurry to invent yet another framework to satisfy your request, without giving a second thought about long-term consequences.
My point is that the real world has the laws of physics keeping things in check. With software, we don't have such obvious hard limits and each situation is case-by-case with lots of variables... it gets messy.
I don't think this is about flexibility. It's more about culture. Software engineering culture does not value simplicity, and actually seems to value complexity. Other engineering cultures place a lot more value on simplicity, and its associated values of reliability and risk mitigation.
A civil engineer can build you that building, but they will think "what a ridiculous architect." With a software engineering mindset, they would happily build that building for you, and invent a new type of concrete mixed with titanium flakes to do it. And then they will go on a decades-long campaign about how titanium-flake concrete is the Next Big Thing and that anyone using normal concrete is a simpleton.
The culture is derived from the economics though. Software engineers who value simplicity don't get promoted, or their software is not as successful, because the other engineers who produce complex software produce results that users are attracted to and the cost of that complexity is not so great as to completely ruin it (at least not at first).
On top of that, a little bit of software can be used by a lot of people, but a little bit of building usually can't, so unless you're Gaudí building the Sagrada Família, nobody really wants to spend a couple centuries building a complex building.
Clearly, you haven't worked on German automobiles.
There is no one so willing to do things the hard way as a German car engineer trying to implement something that has existed for decades, and functions perfectly in standardized form.
Basic "fill out the fields then submit" forms are simple. But many use cases these days want app-like intakes or onboardings, then complain about complexity or timelines when they fail to realize the requirements spawned 45,000 branching paths.
Yes, the customer will be impressed by your speed the first time. The second time, he will expect it. The third time, when requirements have grown enough to be beyond what ChatGPT can deal with, he'll be angry that timescales have exploded.
This comment is so relevant... people underestimate software complexity. In software services, it's often not the algorithms that are hard to implement or maintain, as they usually have very clearly defined input and output requirements and you can test them; it's the ever-changing and regularly patched-up business logic that evolves the system into a complex and fragile service needing careful maintenance.
Can't imagine an AI system taking 'charge' there as a subtle mistake by the AI system would then need a human intervention to 'fix' it which by then would be close to impossible.
At that point you also can't make use of AI beyond a co-pilot role, as anything complex would need careful, line-by-line inspection by a software dev.
I don't know if "do everything manually and slowly so that you never have to set realistic expectations" is a great way to do business. Being worse at your job on purpose isn't typically a great strategy.
It sounds to me like the commenter has found a subset of his work which can be sped up significantly with chatGPT and is using that to continue conversations with a potential client. Pretty cool if you ask me.
> I don't know if "do everything manually and slowly so that you never have to set realistic expectations" is a great way to do business.
It’s called Expectation Management. It’s how successful business is done.
Marketing 101: Under promise, over deliver.
P.S. Apple is quite good at this IMHO. I notice they often go very quickly from product announcement to product availability. My theory for why this is good is that there's less time in between for people's imagination to run wild and fill in the information gaps with their own ideas that the actual product might not satisfy. Expectations remain grounded when customers receive the product, vs. a product getting hyped to the moon such that it's practically impossible for it to ever live up to the hype.
I'm not replying about the concept of under-promise, over-deliver, but more like... why would you avoid a tool that helps you do something quickly, just because you don't want to explain to your client that some steps aren't as fast to do as others?
Right now with the increased productivity, it leads to customer engagements that previously wouldn't have been possible for me. E.g. now it's feasible from a cost standpoint to let me as a freelancer build internal tooling that previously would have been too expensive in total. It's also easier to deliver initial MVP milestones for projects at a price that's in a much more comfortable range for smaller companies.
Yeah, that advantage may go away, but just like good "googling" was/is a skill that can set you apart from your peers, proper usage of LLMs is a skill as well that needs to be learned (and that many won't).
It's a tool. Use the tool. I don't care if the house is built using an old-fashioned hammer or a nail gun. I do care if you used nails when you should've used something else, or used the wrong nails.
> When doing web development I often take a screenshot of a problem with css layout, upload to ChatGPT and ask how to fix it.
Oh wow, that’s a neat idea that I hadn’t thought of before. I’m decent enough at CSS that I can normally fix it in dev tools then port/copy the styles to the code but I’ll have to remember that trick.
I haven’t used ChatGPT as much for code as I have CLI piping or bash scripts to munge data quickly. Things I wouldn’t have checked (like for debugging or proving a hypothesis) become almost easy when I can give ChatGPT the output of a command and ask it for bash to format/collate/sort/extract what I want out of it. I can do it manually but I’m slow at that process and have to google or use man pages to remember flags/args/etc. For code I mostly just use GH Copilot.
Creating the front-end of a Bootstrap form with no backend logic isn't particularly impressive; it's something that would be taught halfway through any bootcamp course. Source: me, a person who built BS sites for years until I finally got around to learning Flexbox and CSS Grid properly.
Yeah I didn't mean to be pedantic about that, because actually I don't think there is a ChatGPT API, there's no reasonable way something like this even could use it I think? And the only reason to really would be to use the free one (not have to ask for a key) which would surely be against the terms, and doesn't do any computer vision or anything as presumably needed by this demo.
In other contexts though I think it can be ambiguous and I can understand why people get irritated/pedantic about it - AIUI it's the same model but different training/parameters? And ChatGPT only gives you the 'user' prompt essentially, with the 'system' one already being 'you are a chatbot called ChatGPT [...]' or whatever.
Call me an unbeliever, but I don’t believe in the future of no code solutions. You will still have to align that button at smaller device resolutions, leave extra space so it looks nice in another language, and other requirements. Maybe it’ll enable us to use even more abstracted languages to build apps faster at most. This only works for extremely basic and common things like tic tac toe and not original works.
A huge part of the problem with LLM-based no-code is that the output is non-deterministic, so all you can check into version control is the output.
Imagine what happens when you have dozens of barely technical people all adding features by sketching them and clicking “make it real”. Each one is producing hundreds of lines of code. At the end of the day someone is responsible for understanding the output because since the output is non-deterministic, that’s all we have.
> Imagine what happens when you have dozens of barely technical people all adding features by sketching them and clicking “make it real”. Each one is producing hundreds of lines of code. At the end of the day someone is responsible for understanding the output because since the output is non-deterministic, that’s all we have.
Joking reply: Have you seen modern software development?
Joking-but-not-really reply: I wonder if someone could train a "bad AI code to human-maintainable code" AI.
This reminds me of what I’ve been saying to friends… we will either see a lot more layoffs of us software engineers, or another big boom, because the technology is moving way faster than normal humans can learn. Non-tech people will just hire software engineers to do it for them.
From the link: Sometimes, determinism may be impacted due to necessary changes OpenAI makes to model configurations on our end. To help you keep track of these changes, we expose the system_fingerprint field. If this value is different, you may see different outputs due to changes we've made on our systems.
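To make the quoted docs concrete, here's a hedged sketch of requesting best-effort determinism and tracking the fingerprint, assuming the v1 OpenAI Python client (which exposes a `seed` parameter and a `system_fingerprint` field); the model name and helper names are illustrative, not from the thread.

```python
import hashlib


def cache_key(prompt: str, seed: int, system_fingerprint: str) -> str:
    """A cached generation is only trustworthy if the prompt, the seed,
    AND the backend configuration (system_fingerprint) all match."""
    raw = f"{system_fingerprint}:{seed}:{prompt}"
    return hashlib.sha256(raw.encode()).hexdigest()


def reproducible_completion(client, prompt: str, seed: int = 1234):
    """Request a best-effort reproducible completion (assumes a
    v1-style OpenAI client passed in by the caller)."""
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        seed=seed,        # best-effort determinism, per the quoted docs
        temperature=0,
    )
    # If system_fingerprint differs between two runs, identical output
    # is no longer expected even with the same seed and prompt.
    return resp.choices[0].message.content, resp.system_fingerprint
```

The point of the key function: when the fingerprint changes on OpenAI's side, every cached generation keyed on the old fingerprint becomes stale, which is exactly the versioning headache being discussed.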
Generally because although you can easily generate new versions, it is difficult to generate a new version that is different from the old version in a specific way, without also being different in 100 other ways that you didn't want.
It's like a revision control system where when you submit a commit that changes one line, which it faithfully records, it also records a change in dozens of other lines in the file. (Which leads you down the merry road of Stable Diffusion where you can "inpaint" that one line, but now it's not able to adjust the rest of things to accommodate that change because you told it not to...)
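That "one intended change plus dozens of incidental ones" problem is easy to quantify: diff two regenerated versions of the same file and count the changed lines. A quick illustrative sketch using Python's `difflib`:

```python
import difflib


def changed_line_count(old: str, new: str) -> int:
    """Count added/removed lines between two versions of a generated file.
    With a hand-edited file, a one-line fix diffs as ~2 lines (one '-',
    one '+'); with a wholesale regeneration, this count balloons even
    when only one change was requested."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(),
                                lineterm="")
    return sum(
        1 for line in diff
        if line.startswith(("+", "-"))
        and not line.startswith(("+++", "---"))  # skip file headers
    )
```

Running this over successive "make real" outputs would make the churn visible, which is what a normal revision history gives you for free.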
Artisans who can sing on key will continue to make very interesting things.
But also in parallel a whole new wave of people that couldn’t or didn’t want to learn to sing on key will make entirely new genres of music and also pop for the masses with far less effort.
Even the artisans will use it sparingly to enhance and perfect and speed up some of their workflows
Unlike software, a song doesn’t need to be maintained to keep working. A song doesn’t need to grow in complexity as more people use it, and a song can’t contain subtle bugs that steal listeners’ credit cards or damage other songs.
Of course metaphors have limits, but the limits of this particular metaphor hide all of the flaws of this technology.
"Entirely new genres" of anything is unlikely from ML models trained on existing work. Someone using AI to implement an entirely new idea will be frustrated as the AI keeps gravitating toward convention.
I think a lot of these demos aren't necessarily trying to push forward a no-code purist approach, but rather showing how you can get basically a live wireframe going in no time at all. I think tools like canva, figma, etc will be all over this stuff and really improve high-fidelity wireframes/demos
Look at the success of ComfyUI in the Generative AI world and node-based editing in the graphics world (i.e. blender). "No code" works, but it has to be tailored towards experts who want to actual write code sometimes, not billed as making it possible for suits to write software.
Squarespace, Wix etc have already taken the bottom of the market, and if they hadn’t, Indian outsourcing would have anyway.
This is the logical progression of those same concepts. If I were a product manager at a website builder, I’d be all over integrating ai builders like this. It will never work for barely defined complex business tasks, but it might do fine to create a cost estimator for a photography business, for example.
I see this as a useful tool for creating interactive demos and prototypes which allow to quickly iterate over ideas. Could keep the feedback loop with clients short and allow to minimise miscommunication. I could see for example Figma implementing this.
Why do you think it can only work for trivial tasks? That's just denial.
Business process workflow software is quite popular. There are many applications where people do something similar with drag and drop/interactive widget editors and they can have complex forms, parent-child, state transitions, etc.
Using something like GPT Vision means you can skip the widget drag and drop and use more freeform drawing tools or freehand sketching.
Notion is probably the most popular example today but there are more complex ones going back forever.
Because so far it has only worked on trivial tasks? That isn't denial: it's stating fact.
Business process workflow software is popular, but has massive downsides. This software consumes massive amounts of resources: one dev becomes one dev plus a business person plus another person plus overhead.
> Using something like GPT Vision means you can skip the widget drag and drop and use more freeform drawing tools or freehand sketching.
To do... what? Where does the UI go? Where does the data go? Where is it stored? How is it accessed? How is security? How are backups? Version control? Etc..
I feel old now. I'm fairly sure that we could do this almost as fast with VB or Delphi a couple of decades ago, but with somewhat more deterministic results, instead of having the tool infer things from the label names. We had this, and then we shoved everything into the browser and forgot that we could do it without burning huge amounts of compute on some generative AI model.
I have kind of the same opinion. Maybe more provocative. When people show me figma to HTML with AI I tell them “have you ever heard of Dreamweaver?”
OK, Dreamweaver's code was ugly and unusable, while AI-generated code is not too bad (sometimes). But still, I also feel we were kinda already close to where we're at today.
Not sure about Dreamweaver, but the fact that no FrontPage equivalent exists in this day and age, while armies of developers grapple with HTML, React, and GraphQL (e.g. Gatsby) to generate what are essentially websites, is very surprising.
I used to look down on it but when I heard it handles payment it blew my mind and I decided I'd rather use it myself if I ever make a shop instead of building it myself. To quote a random guy "I trust their code more than what I wrote at 2am."
I am a SWE. When I needed a website for a small business I was running, I used squarespace too. If you just need a static public facing site it's absolutely great.
Same here. I used to scoff at their prices “I could do that for way cheaper”, and I could/can… if my time is worthless which it’s not.
Now for things that aren’t my core business (day job or side project) I’m much more likely to reach for a paid off-the-shelf solution.
After months of putting off building a marketing website for my side project I just paid for a tool to build/host it so all I had to do was plug in my info. Yes I could have hosted it for pennies on S3 with CloudFront in front of it but instead I setup a cname, paid like $20 for the year, and let this other company handle the responsive design (template) for me.
For some weird reason, "Squarespace" keeps reminding me of a ___location-based social network from the early 2000's. Did I imagine that? Or did the ___domain and trademark get bought?
There are countless no-code tools to build landing pages and static site content.
For web apps, the level of custom logic makes it unavoidable to just code. Since web developers tend to be coders, they would code static sites too, and use what they are most comfortable with, usually React.
My friend works on a low code system marketed to small governments. (Basically just CRUD builder optimized for bureaucracy.) There's definitely a niche for it, it just hasn't caught on much for some reason.
Of course it exists, it’s just not produced by Microsoft and therefore relatively few people use it.
Webflow has been around for a while and I’m sure they implemented AI already (I didn’t even check). Other React-based tools surely also exist, but have an even smaller user base.
I used Dreamweaver significantly back in the day, and use Figma professionally today. I see what you're saying, but there's no question about them being equivalent in functionality, except insofar as I do not consider generated code from either of them particularly useful. In that respect they are the same, but in every other way Figma is 100 times better.
Completely agree. I was trying to find the current parallel of the design-to-code process. But there is no doubt Dreamweaver is hardly comparable to Figma (at least the version I knew).
As others mentioned the closest descendant of dreamweaver today is most likely Webflow.
I hadn't seen that comparison, but Webflow is really cool! I had to make a company website using it a few years ago, because it needed to be maintained by people with no coding skill at all. I found it really awesome. Also, their Youtube tutorials were about the most impressive tutorials I've ever seen, so well produced and fun that I would recommend just watching them as entertainment.
Are you sure? I remember one of Dreamweaver's competitive advantages was that it produced much cleaner HTML than its competitors. FrontPage was the big offender, with awful code.
To be honest I can’t remember when I last used Dreamweaver, but it sure wasn’t towards the end of its life. So it could very well be that it got better after my last experience. What I remember was way too many tags and almost unreadable classes or repeated styles. It was not possible to take it over directly without quite some rewriting.
Toward the end of its life, Dreamweaver's code was quite readable. Webflow today is kind of an equivalent, and also puts out relatively readable HTML. I think most of the reputation comes from people looking at its early output.
I once spent a week as an intern building a really clean and well done java swing UI options panel for a portion of our team's app. I was so proud of myself and thought it was so clear and concise and great. I told my manager it was all finished.
Well, no one else on the team was that familiar with java swing so they couldn't work with it, so one of my coworkers had to spend an hour rebuilding the panel in the UI builder that the rest of the app worked in. It produced a perfectly functional UI that had all the features requested and could be maintained by anyone on the team. Sure, the raw java file was twice as long, but who cares? It gets compiled.
I was enlightened that day. "Elegant" is worthless in most cases. Five years later that entire app was rebuilt as a web app, and nobody gave a shit whether one of its option panels was artisanally hand-crafted with care and love or spit out by an actually really good and regularized UI builder.
We are data plumbers. Nobody cares if the pipes you laid out are arranged to look like the mona lisa, and it's probably worse for the customer and maintenance that way anyway.
It has nothing to do with the browser, it has to do with how opinionated your framework is.
VB was highly opinionated; it made native windows-style UIs and nothing else. Nobody was taking arbitrary mockups from designers, with a wildly custom look-and-feel, and replicating them down to the pixel in VB.
Today, most product GUIs are part of the brand. For better or worse, every company wants a distinctive look and feel and set of UI behaviors. This requires the tools used to build them to be much more complicated.
You could replicate VB in a browser. Many people have done roughly that. But nobody uses them to build products, because their company doesn't want boring/generic UIs.
I get your point, but it only applies to the most simple of these examples. This can do all kinds of stuff, for example look further into that thread and you’ll see it implementing tic tac toe. The tool works by basically sending a screenshot of your diagram to GPT4 and saying “implement this”.
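For illustration, a hedged sketch of what that "screenshot plus 'implement this'" request might look like as an OpenAI vision chat payload. The model name, prompt text, and helper name here are assumptions for the sketch, not tldraw's actual code.

```python
import base64


def build_make_real_request(png_bytes: bytes, system_prompt: str) -> dict:
    """Pair a wireframe screenshot with an instruction to return a single
    HTML file. The payload shape follows the OpenAI vision chat format:
    a user message whose content mixes text parts and image_url parts."""
    data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode()
    return {
        "model": "gpt-4-vision-preview",  # illustrative model name
        "max_tokens": 4096,
        "messages": [
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Turn this wireframe into a single HTML file."},
                    {"type": "image_url",
                     "image_url": {"url": data_url}},
                ],
            },
        ],
    }
```

Everything interesting happens model-side; the client really is just "here is a picture, write the code," which is why it works only for apps small enough to specify in one screenshot.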
Which in turn only works for simple stuff that GPT-4 can implement from a single screenshot, and is, as OP pointed out, non-deterministic. The code OP is referring to would have been written to integrate with the rest of the project, to connect to data stores and services that the screenshot has no way of knowing about. People built huge applications in VB and similar tools.
Don't get me wrong, these new tools are cool, and I imagine they'll be great for prototyping small things quickly!
Human beings build stuff non deterministically too. The key is just to give the tool more information along with the screenshot, for example showing it your current codebase so it writes code using the same conventions.
... and our opinions about how badly this is going to pan out are based on decades of experience with how badly that panned out.
Nobody that has ever had to maintain software is going to look at this and be impressed. Without a reproducible and predictable set of transforms from the source artifact to the product artifact(s), maintenance of software generated in this fashion will be impossible. The concept of a "localized" change doesn't exist; you have to assume that any and every change risks breaking the entire product.
This is, of course, just another version of folks looking at something that mimics a (structured, evolved) human activity and assuming that it is in fact reproducing that activity, rather than just copying some subset of the visible consequences of that activity.
I don’t know why you’d assume the AI behind this wouldn’t end up with access to both the source and product artifacts, implementing changes in the most constrained/localized way possible.
Recall that this is an LLM; it doesn't "think", it's just cutting and pasting "likely" scraps of things from its training set. At this point, the vast majority of those scraps were human-generated, so a) many of them work, and b) copying them in this fashion is plagiarism ("the offense of taking passages from another's compositions, and publishing them, either word for word or in substance, as one's own")
Once upon a time, there was a fresh new operating system, the next step, being demo'ed in the Graphics pavillion of Comdex. Some huge nerd stood there, dragging things around with a mouse, 'proving' that you didn't have to be a programmer to build applications.
A decade after that, something floated around for a year or two called Visix Vibe, which gave the same thing, but for this relatively new language (at the time), Java.
Every few years, maybe 4-5 or a decade or so, someone gets the itch to make all the complexity fade away. Eventually though, they build an OS.
You still see this using just about any major native UI toolkit. They may not be RAD-fast, but all kinds of elements and interactions that require custom and usually-janky JavaScript to implement on the Web are available “for free”. And actually work correctly, interact with the rest of the toolkit the way you’d expect, and i18n and a11y and all that work correctly and consistently.
There’s no reason HTML can’t do a lot more than it does out-of-the-box, saving crazy numbers of developer-hours and piles of user frustration every year. It just doesn’t.
I think we'd all be gladly sitting atop high abstractions like that if not for the fact that brands pay the bills, and brands want custom experiences. Website and app design often begins with "what does a button look like," and design tools focus not on prebuilt widgets but on paths and fills, because the last thing a brand wants is to speak with different tones in different media. Their software has to feel like their stores and advertisements in order to create a brand-consistent customer experience. Design systems are a hot topic today because what brands got with this design method is a different voice in each of their sites and apps, because they're all independently custom.
This is what kills Horizon Worlds, which is Facebook's earnest answer to the problem of making VR authoring much easier than "making a video game"
I want to make a 3-d world based on photographs (even stereo) and visual art and to do that I need to import JPG or PNG images. No can do. They have a short list of textures they supply, but you cannot import media assets like images, audio, video, geometry, point clouds, whatever, ...
McDonald's would insist on putting a Coca-Cola logo on the cups and so would every other brand.
> Website and app design often begins with "what does a button look like,"
Yup. Been a long time since I've used an un-customised standard button outside my personal projects, no matter how much R&D has been spent by Apple on working out what it means for a UI to be good.
There was a reason why we moved away from this. Everything is code, the UIs for creating such interfaces are just abstractions. The code generated by those no-code approaches to interface design was horrible and would break down quite fast in larger projects. Maintainability for such interfaces was non-existent.
I think just regretting an old feature long gone without proper context is dangerous. If we are to bring back those approaches we have to keep in mind exactly why they went away in the first place.
> Everything is code, the UIs for creating such interfaces are just abstractions. The code generated by those no-code approaches to interface design...
I'm not sure you're replying to the same thing. I never used VB much, but Delphi was not and is not no-code. In fact it emphasised using libraries a lot ('components' in its terms are classes provided by libraries) and the UI had a text description, which was streamed and created at runtime.
Delphi today could indeed create something doing this just as fast, and you wouldn't draw a trackbar and hope it's recognised in the image by the AI... you'd drop an actual trackbar.
Delphi's visual editor allows you to position, link together, and configure visual and non-visual components at design time; it automatically serializes all the components into a text file, and at runtime your program deserializes these components from the resource embedded in the program's binary. It also allows you to create handlers for events like OnButtonClicked or OnDBConnectionOpened, where the usual arbitrarily complex programming happens.
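As a toy illustration of that streaming pattern (a Python sketch rather than Object Pascal, with made-up class and handler names, not the real DFM format): the designer serializes the component tree to text, and at runtime the program rebuilds the components and attaches the named event handlers.

```python
import json

# Minimal stand-in for a visual component; real Delphi components
# stream many more published properties than this.
class Button:
    def __init__(self, name, caption, on_click=None):
        self.name, self.caption, self.on_click = name, caption, on_click

def serialize(components):
    # Design time: dump each component's type and properties to text.
    return json.dumps([{"type": type(c).__name__,
                        "name": c.name, "caption": c.caption}
                       for c in components])

def deserialize(text, handlers):
    # Runtime: rebuild each component from the streamed text and wire
    # up its handler by name, analogous to OnButtonClicked above.
    registry = {"Button": Button}
    return [registry[d["type"]](d["name"], d["caption"],
                                handlers.get(d["name"] + ".OnClick"))
            for d in json.loads(text)]
```

The event handlers themselves stay in ordinary code; only the layout and configuration live in the streamed text, which is what made the visual editor and hand-written logic coexist.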
There are still tools to do it. I think the reason they are not more popular with programmers is that it's basically some stupid subconscious thing where we are afraid of being accused of being users or beginners. Not something we realize consciously, but anything that doesn't involve complex colorful text naturally gets kind of peer pressured out because it looks like a "beginner tool".
You are missing a huge caveat of those tools. They tend to have limits in functionality.
It isn't uncommon that you prototype with a tool and it is super fast but before you can launch for real you need to rewrite everything.
At that point it is difficult to know if the prototype was valuable. Certainly, quickly visualizing is good, but a prototyping tool that is non-functional is even faster to use.
Having a tool that allows easy drag and drop without any friction on the generated code (including difficulty of using that code) while also having all the powers of HTML would be really cool.
Such a tool wouldn't be a beginner tool, but any that fail this and can't really go all the way to a final product get discarded as "I am going to have to rewrite anyway".
Less of an "I am too good for that", more of a "not a useful abstraction level" when considered holistically.
That's a good rationalization for doing everything manually, but I don't believe it really holds up. I think it comes down to some social psychological effects that, as I said, are subconscious. So people don't realize it's affecting their decision making.
When they decide not to use those tools they will use rationales like you said rather than admitting that they felt peer pressure.
Some people want to be good software developers, not just good website builders.
If you want to use low code website builders, feel free. If that suits your work style and the projects you're building, great.
But you will never develop the skills you need to actually build software. A person who spends their life using website builders instead of writing software will never be able to build their own website builder, for example.
Some of us actually like to have the skills to build the tools ourselves.
If you want to call that peer pressure, then sure. It's peer pressure to elevate your own experience and attain mastery, instead of settling for only ever using tools that other people built for you.
You have perfectly illustrated the peer pressure I am talking about.
By the way, I have been programming for 38 years on many platforms and have built my own drag and drop UI editors and frameworks. I don't use these types of tools today because they are not popular and because of psychological factors as I said. But I still think that it would be more logical if programmers used them more often. And the times that I used them in the past they did increase my productivity.
The types of tools I am talking about often require editing code to customize functionality. They are not no-code tools.
If you hire an architect to design a house, then make some adjustments to the blueprint before it gets built, it doesn't make you an architect. In fact if you give your adjusted design to builders and have it made without consulting an architect first, you have a good chance of unknowingly violating some building code somewhere.
Similarly if you use a code generating tool written by a software engineer and then adjust the code output, it doesn't make you a software engineer.
Yes, software engineers can use those tools, but they're limiting their growth as engineers if they rely too heavily on those tools.
If that's peer pressure, then I am unapologetic about it. I'm not hiring people who can't build their own code to work as software engineers.
Sadly I, like all programmers, have been peer pressured or something into avoiding tools like that. But I know they exist. I also made one several years ago. (No one was interested).
But I think if you search for "RAD", "Rapid Application Development", or graphical component-based development, you'll find quite a lot.
I think that there are several plugins for WordPress that have similar functionality, although less code integration.
When we went to the browser, we took 30 years of UI development knowledge and UI/UX principles and flushed them down the toilet.
Only very recently have we started to gain composability in browser UIs through things like React, and it's a sad facsimile of the widget composability we had in WYSIWYG UI development on PCs in the late 1980s and early 1990s.
The web is a shit UI platform, but that's because it wasn't designed to be one. UIs were shoehorned into a hypertext system designed for viewing documents.
The web is a much better UI platform than what's available in linux desktop and whatever took 30 years of UX development in Linux wasn't flushed down the toilet (emacs, vi, GNU, ...). MS Windows users miss their Delphi/VB software.
Keep in mind this could equally be a picture of a napkin.
Ignoring that though, it's fair to make this point for the current level of capabilities, but look at the trajectory… in a year or two, when the training set has lots of examples of people using this, it should be pretty competitive.
I think this is kind of missing the point. The fact that the underlying implementation is not a hand-coded-deterministic one is the interesting thing about this demo. This is clearly going to be useful by making people more efficient and only going to get better with time.
Now, instead of looking in some cryptic symbols which you don't understand, you just mash "regenerate" button until it works. Can anything be more simple? Of course, with newer models you need to hit that button even less, it's progress, isn't it?
My laptop from multiple years ago (6GB VRAM) can run local models with decent performance. It's my understanding that the major energy cost is from training the models, which you do somewhat infrequently, and that generation isn't nearly as costly.
Someone who knows more, please correct me if I'm wrong!
HN has this weird nostalgic thing for VB and Delphi, but I feel that most people who express it either haven't actually used these tools or don't remember them well. There's no way to do responsive interfaces with them. If you want anything but your OS's native controls, you're in a world of hell too. Even localisation is a problem. No unit tests, no integration tests. WYSIWYG editors mean that form files are changed automatically, and it's often not realistic to review the diffs. And the languages themselves, while adequate for their time, don't even have support for functions as first-class objects, so compared to modern alternatives they're almost unusable and require an enormous amount of boilerplate.
Oh sweet summer child (if you are going to call us weirdly nostalgic).
There were tons of methods - from extremely manual code-based detection of window-resize events with subsequent calculating and scaling of the contained controls, to a myriad of third-party libraries/components that would provide an automatically resizable host container for other controls.
Localization was not a problem - VB6 supported 'resource files' like every other Win32 app. Unit and integration tests were possible - but uglier with visual forms - typically requiring third-party products, and/or low-level Win32-API integrations. But in reality - you would abstract all of your business logic into classes/modules/units with very little within the UI, so that those would be unit-tested, instead of the UI.
Now - don't get me wrong, VB was limited in many, many ways, and I am not nostalgic for it - but it was more capable than the picture you are painting. Delphi even more so, as it had easy and direct access to the entire Win32 API and could handle threading and pointers as well.
Isn’t the whole point of all this so I don’t need to use a web UI anymore? I can just tell the computer what I want and it does it, then I can go about my life?
A nerd can dream…I can stay in the command line, work on the fun hard stuff, just explain the UI to a transformer and then push straight to production, sight unseen.
Alternatively, ITT: people who have been through the design-to-dev handoff process and understand the comparative advantages of both teams.
Tools like this can be good for indie developers, the ones who in the past may have had to learn a bit of dev/design to release something. The division of labour in larger teams is different. The product manager may have a user research background instead of a software one. The designer may be good with semi-complete prototypes in Framer, but the responsibility for delivering production code may still rest with the dev team.
I am still erring on the side of skepticism around AI "taking all jobs with computers", but I have to admit that seeing the progress has made me doubt my position a bit. Even if it doesn't take ALL jobs, only relatively low-level ones, that is still an enormous amount of work that will at the very least change drastically.
What really shook me is that GPT-4 can spit out quite solid code for various things. I know there is a lot more to software development than just writing code but if you had asked me 3 years ago if AI would be able to code AT ALL within 10 years I would have said "no chance" with 100% certainty. Had to accept I was very wrong about that and don't have the technical background to really assess how far/fast this stuff can go.
Writing was on the wall over a year ago, shocked how many were in complete denial then. Even more surprising now.
People don't even seem to grasp that the next gen of these tools won't be rolling the dice once: it'll be rolling it 1,000 times and then you pick the one that nailed it. The generation after that will roll 10,000 times and pick the one that nailed it without your input at all.
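The "roll the dice 1,000 times and keep the winner" idea is just best-of-n sampling against an automated verifier. A minimal sketch, where `generate` and `score` are placeholders for an LLM call and some automated check (both hypothetical here):

```python
def best_of_n(generate, score, n):
    """Sample n candidates and keep the one the verifier scores highest.

    generate: zero-argument callable producing one candidate (e.g. one
              LLM completion); score: callable rating a candidate.
    """
    return max((generate() for _ in range(n)), key=score)
```

The hard part, of course, is the `score` function: without a reliable automated verifier, best-of-n just picks the candidate that best games the metric.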
I see it - the majority of knowledge work in the next 20 years will be done better by computers than by humans. This will destroy the middle class world-wide.
```
You are an expert web developer who specializes in tailwind css.
A user will provide you with a low-fidelity wireframe of an application.
You will return a single html file that uses HTML, tailwind css, and JavaScript to create a high fidelity website.
Include any extra CSS and JavaScript in the html file.
If you have any images, load them from Unsplash or use solid colored rectangles.
The user will provide you with notes in blue or red text, arrows, or drawings.
The user may also include images of other websites as style references. Transfer the styles as best as you can, matching fonts / colors / layouts.
They may also provide you with the html of a previous design that they want you to iterate from.
Carry out any changes they request from you.
In the wireframe, the previous design's html will appear as a white rectangle.
Use creative license to make the application more fleshed out.
Use JavaScript modules and unpkg to import any necessary dependencies.
```
What's beyond me is how simply predicting the next token produces this kind of magical comprehension in a model that otherwise has no sense of, or awareness about, itself.
I think the point isn't something for AI to do the hard complicated parts for us, but help us with the boring repetitive ones that pop up all the time.
Making forms UI is one of those.
The challenging part here is creating the design language (or style guide) and implementing the business logic once the form gets validated (which AI could generate code for, too).
Often it's really hard to integrate something like this vs. writing it yourself.
An example I recently had was, I was handed a UI kit from a product team. Ok, fine. We only wanted to use about 10% of the kit - basic interface elements. OK.
It took me a long time massaging that into what I wanted, because it was a bit of a mess, honestly. And I'm still dealing with random issues that pop up in their CSS.
It takes me a few minutes to create a form: but if what I get back is some insanity that I can't read, and each time I get a new form it gives me different flavors of that, well, it's going to add up and be a huge pain to maintain.
Nothing is truly perfect. Maybe in 5-10 years we'll have massive 3D printers that can build you any house without needing a big construction crew. I doubt it, but if you asked me 5 years ago about AI being capable of replacing me, I'd have laughed in your face... Alas, here we are.
Not replacing, augmenting. Try to find ways to use the new tools to become more productive than other engineers. Help evolve integration and deployment frameworks to handle the snippets from LLMs in the context of larger projects with a minimum of glue.
I wouldn't be surprised to see some old ideas come back to life like encapsulation concepts from enterprise software development. But this time we let LLMs deal with the boilerplate code needed to use and connect them.
I think it's a good idea that knowledge workers learn the fundamentals of at least one trade. I am interested in HVAC and building automation.
You can also master a trade and become a knowledge worker. Here in Denmark it's possible to mix pre-university/gymnasium (equivalent to year 10-12 in the US) with a trade. Combined it takes 4.5-5 years.
Can someone with an API key try making a rectangle labeled "URL", and a bigger rectangle underneath it, and then see if it is smart enough to make a simple browser out of that?
FWIW, I'd guess that the 2 sliders with labels like those, and a square or other shape, matches very closely some GUI and pedagogical graphics toolkit tutorials upon which the LLM was trained (in which a slider rotates the shape).
I mean, cool, but surely if this is a technical feat it speaks more to the complexity of our tooling and platforms than it does to the impressiveness of AI. What I'm trying to say is that all of this is pretty primitive if you build the right tooling to express those ideas trivially. Even a 6-year-old could create noughts and crosses if the paradigm they were using allowed them to express the game rules in a way that was natural to them. So yes, while I think this is cool, I don't get how it warrants the hype and hysteria. It makes me sad that this minor technical accomplishment seems impressive because the web is an unintuitive medium for expressing logic entangled with UI.
For the recent Galactic puzzle hunt competition [0] there was a problem that involved generating 5x5 star battle [1] grids that had a number of unique solutions in the range of [1, 14]. We initially tried to get chatGPT to write a python script to do this, and couldn't get it to produce anything functional. Conceptually it's not a hard problem, and can be solved in ~50 lines of python or so. Interestingly, chatGPT can describe in natural language the basic approach that you should use (DFS with backtracking). Anyway, here's one prompt I used for the generation portion. Is there something one can do to make LLMs more likely to produce functional code output?
```
Write a python iterator to generate all 5x5 grids of integers that obey the following criteria:
1. the grid contains only numbers 1-5 inclusive
2. each number is included at least once
3. Each number 1-5 forms a continuous connecting region within the grid where two cells are considered connected if they share an edge.
For example the following would be a valid grid subject to these rules:
[[1,5,3,3,3],
[1,5,3,3,3],
[1,5,3,3,3],
[1,5,3,3,4],
[1,5,2,3,3]]
But the following would not be a valid grid because the `1` in the top right corner is not connected to the 1s along the left edge:
[[1,5,3,3,1],
[1,5,3,3,3],
[1,5,3,3,3],
[1,5,3,3,4],
[1,5,2,3,3]]
```
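For what it's worth, here is one way the "DFS with backtracking" approach can go for criteria 1-3 above: fill the grid row by row, and prune whenever some value's region gets stranded above the current row, since it can never reconnect through future cells. This is a sketch of one possible solution, not the puzzle hunt's actual code, and it leaves out the unique-solution-count filter from the original problem.

```python
from itertools import product

N, VALS = 5, range(1, 6)

def components(cells):
    """Split a collection of (row, col) cells into edge-connected components."""
    cells, comps = set(cells), []
    while cells:
        stack, comp = [cells.pop()], set()
        while stack:
            r, c = stack.pop()
            comp.add((r, c))
            for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if n in cells:
                    cells.remove(n)
                    stack.append(n)
        comps.append(comp)
    return comps

def valid(grid):
    """Criteria 1-3: values 1-5 only, each present, each region connected."""
    cells = lambda v: [(r, c) for r in range(N) for c in range(N)
                       if grid[r][c] == v]
    return (all(grid[r][c] in VALS for r in range(N) for c in range(N))
            and all(len(components(cells(v))) == 1 for v in VALS))

def row_ok(grid, r):
    """Prune after filling row r: a component that no longer touches row r
    can never grow, so it must already be that value's only component."""
    for v in VALS:
        comps = components((i, j) for i in range(r + 1) for j in range(N)
                           if grid[i][j] == v)
        if len(comps) > 1 and any(all(i < r for i, _ in c) for c in comps):
            return False
    return True

def grids(r=0, grid=None):
    """Yield every valid grid via depth-first search over rows."""
    grid = [] if grid is None else grid
    if r == N:
        if valid(grid):
            yield [row[:] for row in grid]
        return
    for row in product(VALS, repeat=N):
        grid.append(list(row))
        if row_ok(grid, r):
            yield from grids(r + 1, grid)
        grid.pop()
```

The prune is sound because a stranded component has no cell in row r, so any path connecting it to later cells would have to pass through a same-valued row-r neighbor that would already belong to the component.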
So does this mean in the future GUIs will be custom and on demand depending on the situational context or personal preference?
Did we get every spaceship control room wrong? Where the Star Trek bridge would simply morph into whatever gui objects were necessary? (I can’t imagine them going away entirely and EVERYONE talking to the ships computer as it would be audio chaos and annoying a/f so I guess we’ll always need a nice quiet user interface.)
LCARS, the control panel UI in TNG/DS9/VOY/LD was always supposed to be quickly reconfigurable to the task at hand. But the prop implementation was usually sheets of translucent acrylic under glass, which isn't that easy to update.
That is pretty compelling. Probably another layer you use by default but can swipe to the gui search layer or full control. As usual, nerds will default to full control.
Anyone remember IBM's Rational Rose? You fed it a UML diagram of what you wanted to do, and it generated C++ stubs. That was two decades ago or more. I tried it once, and that was it. You still had to do the "last 10%" which is the most important 10% in software.
These tools are definitely more "magical", but these are essentially an iteration of what we've already had.
Code generation from UML was all the rage for a while too, until it wasn't. People realized its limitations at some point. Sort of like ORMs - if you are not policing SQL generation like a hawk, you are going to end up with an awful non-performant system.
Ultimately it is a productivity and prototyping tool - it will not do the hardest parts and integrations for you, at least not in the way you may want exactly.
Oh god, you're making my PTSD kick in. What this will all lead to? Clients going in making UI that appears to work fine and then asking developers to wire it in to the greater eco-system that will necessarily surround it, telling you "Look! It's almost done!" and then you're going to have PTSD too.
That seems a lot more complicated than just coding it. How would you go about drawing a box shadow on the sliders? What if you wanted a background image behind it? How would the model know that it's a box-shadow, or that the background is an image?
I feel like for that level of granularity, you'd spend more time figuring out how to style it than just writing it in code, since you'll need to start using descriptors on things, which is literally just coding again.
"just coding it" is a lot harder if you know nothing about code. What's cool here is that someone with literally 0 knowledge of programming can use this. Before this there was simply no way for them to do something like this, it was straight up not possible without the barrier of learning to code first. Does this do it all? Will this replace programmers? No, the skill ceiling remains as high as it ever was, but the skill floor for making something like this has been raised.
Looks like drawing a prospective UI, then having OpenAI multimodal AI turn it into functioning UI. Pretty cool demo. Probably needs some automated test cases, documentation writing etc if it's going to have even a hope of being maintainable. I wonder how much of the GPT-4 coding process described here can be automated as part of an application: https://gwern.net/tla
It takes seconds to pick a button and place it somewhere. Is it really so much better to let an AI guess that a green rectangle is supposed to be a button?
The point of this demo is to experiment with new ways of interacting with an LLM. I'm very tired of typing into text boxes, when a quick scribble, or "back-of-the-envelope" drawing would communicate my thoughts better.
It worked a lot better than I expected! If you give it a try, let me know how it goes for you! And please feel free to check out the source code: https://github.com/tldraw/draw-a-ui/
Seems pretty loose with the input prompt. Good thing stakeholders are notoriously forgiving and would never ask to push pixels.
I do see the potential as a professional tool if it came in the form of a "fix up" button in a WYSIWYG editor. It would be great if you could haphazardly slap together a UI and have a button that unifies the margins and spacing style without taking too many liberties.
I think it would make sense to learn web development and use AI alongside it. Seems like no matter what you end up doing working alongside AI tools will be a part of the job so the experience with it will be useful.
If AI advances to the point where large swathes of workers, even among developers, are put out of work, then worrying about your own job is a little beside the point; it will be a large and apparent social issue then. I am still skeptical this will happen, but if you had asked me 3 years ago whether AI could do what it is already capable of today, I would have said with 100% certainty there is no chance, so I'm holding back my opinion at this point.
It's best to approach learning from a place of curiosity and never be 'done'.
However, as an answer to your question: If you're looking for a skill to learn once in a short time and apply that for a decade; then web front-end coding is among the worst options.
Hi, coder/founder of 25 years here. I’m genuinely conflicted as to whether or not we will get to a point where front end web development loses its utility. It’s one thing to have an AI assist in coding some simple UX/UI. But an entirely different thing to stitch all those together into a fully working web application. Particularly when it’s a very complex app with multiple pages, client/server architecture, DB, etc.
Perhaps I’m just being shortsighted here. I can sort of see how AI tech would evolve to achieve this. You would need an AI assistant able to persist the entire context of your application across months/years in a stable way to act as your ongoing “web developer“. Will that be feasible?
Hey, this is Steve from tldraw, I was up late last night putting this together.
I've added a note next to the input with more info here, but basically: the vision API is so new that it's immediately rate limited on any site like this, and because OpenAI doesn't have a way of authorizing a site to use their own API keys (they should!), this was the best we could do. We don't store the API key or send it to our own servers; it just goes to OpenAI via a fetch request.
Putting an API key into a random text input is obviously a bad idea and I hope this doesn't normalize that. However, you can read the source code (https://github.com/tldraw/draw-a-ui) and come to your own conclusions—or else just run it locally instead.
Every scribe had their own style and flourish. Every scribe was an artisan. Discerning patrons favored particular scribes for their uniqueness and quality.
Somehow, someway, the hivemind mostly settled on today's 'A'. Something good enough.
"This works by just taking the current canvas SVG, converting it to a PNG, and sending that png to gpt-4-vision with instructions to return a single html file with tailwind."
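That request can be sketched roughly as follows. This is a hypothetical Python illustration, not the project's actual TypeScript; the payload shape follows the gpt-4-vision-preview chat completions format, with the canvas PNG sent inline as a base64 data URL, and the prompt text abbreviated.

```python
import base64

def build_vision_request(png_bytes, system_prompt,
                         model="gpt-4-vision-preview"):
    # Encode the rendered canvas PNG as a data URL and attach it as an
    # image_url content part alongside the system prompt.
    data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode()
    return {
        "model": model,
        "max_tokens": 4096,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": [
                {"type": "image_url", "image_url": {"url": data_url}},
            ]},
        ],
    }
```

The browser then POSTs this JSON to the OpenAI chat completions endpoint with the user-supplied key in the Authorization header, which is why the key never needs to touch tldraw's own servers.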
You still need to host it somewhere, deploy new features and then keep it running, generating code isn't enough and besides, back end is higher risk, so there needs to be a human there for insurance purposes to verify that access controls are properly defined and to rectify any urgent issues that arise. Same reason they still want human pilots inside planes even though most of the flight is automated.
Of course but analogous things can be said about front end. Pretty sure tldraw would fail spectacularly at anything slightly more complex like interactive world maps or parallax animations with dynamic break points.
Point being I'm not sure which of the two is safer from ChatGPT.
Earlier in the year I wanted to understand LLMs and GenAI so I tried to push their limits to see what broke and what didn't.
One of my projects was to build a blog/site from scratch. I had 0 web dev/design background.
My experience was starkly less optimistic, and I am curious if anyone else has tried something similar.
-----
First off, I must state a deep respect for people who build and design websites while dealing with clients.
I had assumed that ChatGPT would be very useful in helping me pick up and build things. However, I had to jettison ChatGPT fairly soon. I just couldn't trust the output of the model. It would suggest things that wouldn't work, then link to sites that didn't exist.
I switched to teaching myself. I had to watch hours of videos, learn CSS, Astro, and several other things from scratch. Definitely not the LLM experience I was expecting.
Code from Figma was great - but if I wanted an actual responsive site, I had to write the CSS myself, because boilerplate CSS had all sorts of odds and ends.
Getting an image to come out as I liked from Midjourney was fun - but it was also a massive time sink.
I had hoped to be able to get complex tasks done entirely with assistance from the LLM. In the end it helped maybe 20-30%. Its greatest use was to clarify concepts instead of me having to wade through specification docs.
When I went back to the videos of people using ChatGPT to build a website in under 30 minutes, it's always someone who knows the ___domain extensively.
I did get a site up and running after ~1-2 weeks of work including the necessary ritual sacrifices.
edit: writing in a rush, grammar and text are messed up.
edit: GPT 4, copilot and midjourney. AFAIK I had no half measures.
It's a bit of a game changer if you already understand things, so you can point it to the right tasks and also clock that it's screwed up just from glancing at the code.
I haven't tried Cursor yet; I just get it to write or refactor functions bit by bit. Maybe that's better for monolithic tasks?
I should probably write a post about this, but comments are easier.
0) LLMs introduce the challenge of automated semantic verification.
1) LLM work should be broken up by usecase. These use cases lie on a semantic complexity continuum.
2) Tasks like classification are simple to verify (precision, recall, etc.). Tasks that require semantic complexity, like summarization, are on the other end.
3) Anything on the high semantic complexity end of the scale needs expert human review.
4) Chained or complex calls greatly complicate verification.
Which is why you are getting use out of it. There is a human (you) in the loop, and the calls are not complex.
As long as a human is reviewing the output of the LLM every time, it's great. This is the vast majority of "generative" output.
It's when you have things like agents, or chained calls, that things go awry. It's why proofs of concept are easy, but production is hard.
For the record, many people have called this out, including people at OpenAI, and AI ops was the largest subgroup in YC's fall batch.
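Point 2 above is worth making concrete: for classification, verification is a mechanical comparison of the LLM's labels against a gold set, with no expert review in the loop. A minimal sketch:

```python
def precision_recall(predicted, gold):
    """Mechanically verify a binary classifier's labels against gold labels.

    predicted, gold: equal-length sequences of 0/1 labels.
    """
    tp = sum(p == g == 1 for p, g in zip(predicted, gold))  # true positives
    fp = sum(p == 1 and g == 0 for p, g in zip(predicted, gold))
    fn = sum(p == 0 and g == 1 for p, g in zip(predicted, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

No equivalent one-liner exists for summarization: judging whether a summary is faithful is itself a high-semantic-complexity task, which is the continuum the parent comment describes.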
I've been in the same boat. Earlier this year I wanted to up my game as well with an improved website and blog to go along with it, not even sure which frameworks to use.
Using ChatGPT has been a hit or miss experience, helping me maybe 30% of the time as well. But when it did help, it helped me massively, especially for quickly setting up Wordpress PHP designs and accompanying CSS. I can honestly say I couldn't have done my web and blog redesign if it weren't for ChatGPT. Not because I wouldn't have been able to get the knowledge without it– more so because I wouldn't have had the patience to figure all of this out in my spare time.
Using ChatGPT has certainly been a more fun experience than browsing documentation, but I did have to do the latter about half the time anyway.
I have been running some toy experiments for a month-ish, using GPT to generate code to do something, and did generate some web frameworks and pipelines. I did not have it generate a database, although that may be something to try next.
I wanted it to generate circuit diagrams and molecules.
> I had hoped to be able to get complex tasks done entirely with assistance from the LLM. In the end it helped maybe 20-30%.
Ok but if you would do the same task again, how much would it help you?
> When I went back to the videos of people using chatgpt to build a website in under 30 minutes - its always someone who knows the ___domain extensively.
Was GPT-4 out then? I self-taught to build a web app before LLMs; for the second one, I had LLMs and they helped me so much, in the areas you mentioned: clarifying concepts and spotting obvious bugs due to inexperience.
I’m confused - what are you specifically having problems with?
I’m not logged in and the site loaded fine on mobile and then again on desktop. The video played immediately, I didn’t see any ads. Are you hoping to be able to comment without an account? That’s usually not how things work.
I’m guessing this is another case of Elon induced nerd grumpiness?
The cookie terror footer? The additional giant sign up bar? The fact that after a few seconds it will just hide everything behind a giant black pop-up about enabling notifications?
Also, it has stopped showing either the post being replied to (unless RT-replied) or any of the replies, so you just have one item with no context available.
Not who you're replying to, but: I'm on iOS using the HACK app, and the built-in Safari web view wouldn't play the videos, and same in Chrome. So now I've given up.
Also, if I clicked into the tweet (xeet?) in the web view and hit "open in system browser", I just got an X "something went wrong" page. And if I opened the first video that failed to play, closed it, and clicked the second video, the first would attempt to load again instead, until I reloaded the whole page and clicked the second… that still failed to play.
> but it looks the same to me as it always has. It's never been that user friendly for those not logged in.
You used to be able to see context, not just a single tweet. If that doesn't make a difference to you in cases like this I don't know what to tell you.
Using nitter is the part which avoids engagement-ing with xitter.
Not having any context is the part which makes links to xitter useless, most of the time you get a link to a reply but without context it’s difficult to impossible to understand what it’s about, and the link is functionally useless.
I know this doesn't help you in the particular circumstances you mentioned, but: for that I would place the blame on the person who provided the link, for not giving sufficient context to understand.
> for that I would place the blame on the person who provided the link, for not giving sufficient context to understand.
I would not. First, I would assume logged-in users do get context, so they likely are not even aware of the issue; and second, that is what links are for. If you have to quote everything you're linking to in full, then the web is broken.
It loaded faster than a YouTube video would have for me. Since I don't have an X login, there wasn't even a temptation to get drawn into any other pointless content. I like it.
Nope, it didn't until I hit refresh for the 5th time; somehow Twitter/X has been broken for me in that it will always display the "signup" page, even for non-NSFW content.
I need it to load the content when I click, it's pretty simple.
Friendly fyi... the moderator allows Twitter links and previously explained the reasoning:
>We're not going to ban Twitter because, like it or not, it's the source of some of the most intellectually interesting material that gets posted here. -- from https://news.ycombinator.com/item?id=30430760
The problem is that the content is not accessible unless you're logged in, with some recent changes. And even worse, it doesn't tell you what it's hiding, so you don't really know that you're only seeing parts of it. Makes it really confusing when linked to a twitter post.
That's fine but why should it be on me? It should just be HN policy to not accept twitter as article links. They're effectively paywalled. If people must, they should be submitting nitter etc links.
From the HN FAQ: "It's ok to post stories from sites with paywalls that have workarounds." Instead of 30 comments on the state of twitter, one nitter link would've sufficed.
Right, but the letter "X" may be pronounced "Eggs" and people may not even notice that you're not pronouncing it in a cool "Dimension X" scifi way with reverb and stuff.
Double Yew, Eggs, Why, Zee / Zed
Eccentric billionaire Elan Mosk, while cosplaying Howard Huges, Laid an egg that "he is going to make a wooden rocket that can land on the fondue oceans of the moon"
[edit: added the zed for friends who count on their hands starting at the thumb not pointer finger]
At this point I’m assuming this is broken on purpose (for visitors not logged in).
I wonder if people still on there realize that their posts are essentially not visible to unregistered users anymore without jumping through major hoops.
Downdetector is completely unscientific and based on user reports. It could also be that way fewer people use the website now, so nobody is making reports to downdetector.
Don't worry, in 5 years it will just be another layer in the JS dev stack, so then not only will the versions of npm, the dependencies, docker, and 3 API keys have to match, but you'll also need the correct commit of the 20GB LLM so the build doesn't fail. All for the sake of simplicity and increased productivity, of course.
We’ll have dependency files for LLMs in our mixture of experts config and we’ll get paged at 2am to update a version of an LLM because there’s a new social engineering CVE making it vulnerable to disclosing secrets.
These little demos of toy UIs are cute but humans can still hold the context of a whole codebase in their heads.
We're not quite at the level (yet) of feeding a whole codebase to an LLM and having it add features or make changes while understanding the big picture of the problem being solved, being consistent with the design principles and coding style of the overall existing codebase. And I'm not even talking about creating complex UIs where performance matters.
Another issue I've run into a lot is the staleness of the knowledge the LLM was trained on: a lot of libraries and frameworks get really frequent, often breaking, changes, and LLMs have a cutoff date.
Try using ChatGPT for something like Godot's GDScript: it will always try to use old Godot v3 style scripting, because that's what it's been taught, and the whole documentation for Godot v4 is not small enough to just fit into context.
Maybe this would be a better fit for some agent-type workflow, where the model can decide what to look up in the documentation and then retrieve it, but it also needs to know and decide what to look up and how. There is still a lot to figure out.
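To make the idea concrete, here's a minimal sketch of that kind of lookup-then-answer loop. Everything here is hypothetical (`search_docs`, `answer_with_lookup`, and the stand-in `fake_llm` are invented names, and a real setup would call an actual LLM API against a real documentation index); it just shows the two-step shape: ask the model what to look up, retrieve it, then answer with the fresh docs in context.

```python
def search_docs(query, docs):
    """Naive retrieval: return the text of doc sections whose title matches the query."""
    return [text for title, text in docs if query.lower() in title.lower()]

def answer_with_lookup(question, docs, llm):
    # Step 1: ask the model what it needs to look up.
    lookup_query = llm(
        "What documentation topic should be consulted to answer: "
        f"{question}? Reply with a short search phrase."
    )
    # Step 2: retrieve matching sections and put them in the context.
    context = "\n\n".join(search_docs(lookup_query, docs))
    # Step 3: answer with up-to-date docs in context, sidestepping the training cutoff.
    return llm(f"Using this up-to-date documentation:\n{context}\n\nAnswer: {question}")

# Toy stand-in for an LLM so the sketch runs end to end.
def fake_llm(prompt):
    if prompt.startswith("What documentation"):
        return "signals"
    return "Use Signal.connect() as described in the retrieved docs."

docs = [
    ("Signals (Godot 4)", "In Godot 4, connect with my_signal.connect(callable)."),
    ("Tweens (Godot 4)", "Use create_tween() instead of the Tween node."),
]

print(answer_with_lookup("How do I connect a signal in Godot 4?", docs, fake_llm))
```

The hard part the comment points at is step 1: with a toy keyword match this is trivial, but the model has to reliably decide what to look up and how to phrase the query, which is exactly what still needs figuring out.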
> We're not quite at the level (yet) of feeding a whole codebase to an LLM and making it add features or make changes while understanding the big picture of the problem being solved
We are one step away from that. All we need is a more advanced form of fine tuning.
If there is one good thing that's going to come out of EVEN the fairly clumsy LLMs, it's that we can probably forget about this js frontend framework crap and focus on work instead of figuring out what the taste-du-jour is. It won't matter; the LLM will translate it.
Probably not. Instead we’ll be stuck using the old frameworks forever, because LLMs won’t be able to learn anything new without years of stack overflow examples.
Already I bet stack overflow usage has gone down because LLMs can help fix bugs.
I think the documentation, open source code, and code examples are more important training sets.
And in my experience the code is often only maybe 95% correct, so there will be a greater premium on expert developers who can spot and fix bugs (with the aid of LLMs, since stack overflow will no longer have any answers once everyone has moved to LLMs).
More like - let's give ourselves better tools. The people who hire us won't be able to build this stuff themselves no matter how good an AI you give them. The only developers who will lose their job are ones who fail to embrace AI.
Really? Then show me a video that doesn't look like what I described, if you want to be taken seriously. That demo is, what, 5 clicks with a decent gui builder?