I use them all the time. Recently I got ChatGPT to read a 50-100 page interface spec for a protocol (mostly structured prose as opposed to a typed interface spec) and skeleton out Go client and server stubs. That probably would have taken me a day of copy/paste/check, but with a few round trips I got acceptable results in under an hour.
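To give a feel for the shape of it, the stubs looked roughly like this (everything below is invented for illustration, not from the real spec):

```go
// Hypothetical sketch of the kind of stub that came out; the message
// names and fields are made up, not taken from the actual spec.
package protocol

import (
	"context"
	"net"
)

// LoginRequest and LoginResponse stand in for messages the spec
// describes in prose.
type LoginRequest struct {
	User  string
	Token string
}

type LoginResponse struct {
	SessionID string
}

// Client is the kind of skeleton the model roughed out; method bodies
// were left as TODOs for me to fill in against the wire format.
type Client struct {
	conn net.Conn
}

func (c *Client) Login(ctx context.Context, req LoginRequest) (LoginResponse, error) {
	// TODO: encode req per the spec's wire format and write it to c.conn.
	return LoginResponse{}, nil
}

// Server mirrors the same operations on the handling side.
type Server interface {
	HandleLogin(ctx context.Context, req LoginRequest) (LoginResponse, error)
}
```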
Then I wanted to get some particular features out of a Rust server that implements the protocol. I don't actually know Rust and the syntax is not superficially accessible to me, but with an LLM I was able to one-shot translate a number of structs and functions and get a high-level idea of how things were working. I was able to port the features I wanted over pretty quickly. This was not entirely straightforward because async was involved on the Rust side, but I wanted a more synchronous approach in the Go client.
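The translation mostly came down to turning async Rust methods into plain blocking Go calls, roughly like this (names are made up, just to show the shape):

```go
// Illustrative only: the Rust side had methods along the lines of
//   async fn fetch_status(&self, id: u64) -> Result<Status, Error>
// running on an async runtime. On the Go side I flattened that into a
// plain blocking call with a deadline.
package protocol

import (
	"context"
	"time"
)

type Status struct {
	Code   int
	Detail string
}

// FetchStatus blocks until the server answers or the timeout expires,
// instead of handing back a future the caller has to await.
func FetchStatus(ctx context.Context, id uint64) (Status, error) {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	// TODO: write the request over the connection and decode the reply;
	// the sync wrapper hides the Rust server's async plumbing from callers.
	_ = ctx
	return Status{}, nil
}
```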
I also use it a lot to write out test stubs / skeletons. I paste in a bunch of interfaces or functions I want tests for and usually a pretty good starting point pops out. If it isn't what I wanted, I can refine it interactively.
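A typical starting point is a table-driven skeleton, roughly like this (ParseHeader is just a placeholder for whatever I pasted in):

```go
// Hypothetical example of the scaffold that usually pops out; ParseHeader
// stands in for the real function or interface under test.
package protocol

import (
	"errors"
	"testing"
)

// ParseHeader is a placeholder for the real code under test.
func ParseHeader(b []byte) (int, error) {
	if len(b) == 0 {
		return 0, errors.New("empty header")
	}
	return int(b[0]), nil
}

func TestParseHeader(t *testing.T) {
	cases := []struct {
		name    string
		input   []byte
		wantErr bool
	}{
		{name: "empty input", input: nil, wantErr: true},
		{name: "valid header", input: []byte{0x01, 0x00}, wantErr: false},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			_, err := ParseHeader(tc.input)
			if (err != nil) != tc.wantErr {
				t.Fatalf("ParseHeader(%v) error = %v, wantErr %v", tc.input, err, tc.wantErr)
			}
		})
	}
}
```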
Overall I am very glad these tools exist now. Right now they're not doing anything for me I couldn't do myself but they help me maintain velocity and avoid getting stuck in gumption traps.
As someone also in a software career… spot on. I'd estimate GPT4 saves me 30-60 minutes every day, and I feel confident that's not an exaggeration. It's gotten to the point where I write scripts to automate things I previously would've done manually, because the turnaround with GPT4 on simple tasks means I spend about half the time building an automated solution, even for small one-off tasks that would normally take too long to be worth automating.
Not really. The technology is interesting, but the scene/subculture itself is incredibly off-putting to me. It greatly dampens whatever enthusiasm I may have had.
Yup, they're like raw logic gates that aren't applied to anything at the moment.
I think the true application is fairly abstract. Like when we invented the cloud: at first it was just storage space or a shared computer, but it eventually enabled things like social media and ride-sharing tools.
I think LLMs are not a language tool - they're an attention tool. They're a way to purchase attention for cheap. Donkeys were that for physical labor, then engines, then even better engines. But we've never had anything like that for human-like attention until a few years ago. It was only affordable last year, and it's only human-level this year. There are probably a lot of problems you can solve by getting a bunch of silicon to pay attention for the equivalent of human-hours.
And there's probably something emergent once people get used to not having to pay so much attention for long periods of time. They can start assembling things faster, there might be fewer mistakes, and so on.
Yes, actually. I think there are useful applications that haven't been developed yet, and we may yet see surprising capabilities that aren't apparent today.
I'm quite cynical about the amount of bullshit, hype, and misdirected investment that will accompany the LLMs we've already seen. Whatever ferment and growth in the underlying technology we get from this point will feed foaming waves of toxic "we're here to make money from this without ever understanding it" involvement and activity that may well suffocate useful research.
It's very much like a cow pie. Not so objectionable at first, but then the first movers take advantage of it and create an Atmosphere, which nurtures and draws Buzz and Activity and furor that is probably best avoided unless one is directly benefiting from it. After a bit that activity dies down, there's less of the objectionable presence of profligate resource users, and we can more easily get at the factors within the pile that benefit growth.
The analogy works for lots of the tech hype cycles we've had for the last while.
Yes. I think even if the tech hits a wall in advancement, there's a lot of applications enabled by it that haven't been built yet.
Also, I'm not sure if you're grouping multi-modal models into your question--those definitely have some great incoming use cases as well, specifically the ones that can describe images.
I'll be even more excited when I can have GPT4-level quality on my local machine, open source, so it can't be taken away.
The big win will be when phone voice assistants get a good implementation. I predict LLMs will be a critical part of the first good assistant.
I hope that's soon. My dad is old, and he's going to need a voice interface to be able to keep up much longer. He can't make a mental model of how his Google TV ecosystem works, for instance, but he could sure tell an LLM what game he wants to watch.
Let's actually use the tech for what it's good at before we dismiss it.
Too early. Right now there are a lot of gold diggers using AI/LLMs to make a quick buck. Let the dust settle over the next 3-5 years and then we'll see. Having said that, I do think it's an amazing technology with some real-world applications. I'm just not sold on the hype train that's currently running.
But I use Perplexity.ai instead of search, and I use it to write the first pass of my code now (and I'm a 40-year programming veteran), which I then modify in Cursor IDE with its built-in ChatGPT.