With multimodal GPT-4 I really feel like there are suddenly way more things I'm confident about attempting (e.g. anything bottlenecked by graphic design), which in turn should lead to more things I'll be confident about in the future. Can a hyperproductive person verify whether the returns here actually look geometric rather than linear?
Wealth isn't determined by the amount of money in existence; it's determined by the amount of goods and services in existence. If goods and services increase while the money supply stays constant, every dollar can buy more (e.g. with a fixed $100 in circulation, doubling output from 100 widgets to 200 halves the price of a widget from $1 to $0.50). In practice that's called "deflation", and it's something the people who control the money supply try to avoid by printing more money.
Yes, I know, which is why I used that phrasing. The production of real things made of atoms can't be ramped up arbitrarily quickly.
For example, every doubling of oil production after WW1 took more than a decade to accomplish. And oil men were pretty well known for risk-taking boldness.
The physical impossibility of real resource inputs to human civilization doubling on an even faster timeline is so obvious that I skipped pointing it out and got straight to the point: it's incredibly unlikely for even money to double quickly enough.
Big difference between "can do this" and "will do this." More than 99% of people are consumers, not creators. And the field only narrows the farther you go in a specialty.
I do agree that the comment you're replying to is asking for too much -- "verify" and "geometric" are very strong words. But I also agree with the implication that LLMs are making it a lot easier for a person with grit to get farther, which might cross that person's personal threshold (using "threshold" as used in TFA).
I'm still surprised by how many of my tech/design/startup friends don't want to use, or don't like using, GPT-4 to get work done, or even for small stuff like cooking and planning. We'll be ok.
Not really. In the 90s a lot of people were dismissive of the web but 10 years later everyone was a convert. It's just the beginning of this next revolution.
I find it hard to understand how using a tool to do the heavy lifting actually imparts the skills onto you directly. Why should there be multiplicative returns when you haven't actually learned anything?
> I find it hard to understand how using a tool to do the heavy lifting actually imparts the skills onto you directly.
My primary skill is as a software engineer who builds complex systems that solve difficult real world problems. That overarching skill encompasses a wide body of sub skills, from UX to doing user case studies to writing blog posts about my work. The actual writing of the code is a non-trivial portion of that, but ideally the code falls out naturally from a correctly planned approach to the problem ___domain.
Futzing around with some broken API that has a bunch of "gotchas" that can only be gleaned from reading a dozen blog posts on the topic (because the official documentation sucks) is a huge time sink that is not related to any of my core competencies.
I've previously spent days going through annoying stupid code doing work that an AI can do in hours.
> Why should there be multiplicative returns when you haven't actually learned anything?
I do fear this for the generation of coders coming up now. The best way to learn is to build it from scratch once, which already fewer and fewer people have a chance to do, and AI is only going to make it worse.
It's probably not about skill but productivity. Many founders have no skills apart from getting others to do the actual work (I am not being sarcastic, that is a skill too) so for them automated agents are just another road to productivity. Not everyone is driven by curiosity or wants to learn, some people only want to 'ship'. So in that sense there can be multiplicative returns without the person becoming more technically skilled.
At least for me, I've used ChatGPT for areas where I don't really have much experience and wouldn't get much out of learning to do better, especially where 100% correctness isn't necessary; writing job reqs/descriptions has been the best use so far. I use ChatGPT as the idea mill, take the best-sounding ideas out of it, and manipulate them a tiny bit for my purposes. In that sense it makes me more "productive", because I don't need to spend so much time on a task like that and can spend more time on a task where I have more expertise, i.e. programming.
The geometric returns go to the AI company whose tool you are using. As a user of the tool, unless you have some other advantage or moat, it's a bit like a factory worker seeing new machines put in and wondering if their wages will increase with their increased productivity. Unless you own the means, as a user of AI you will always be a renter.
I still maintain that Alan Macdonald is the best expositor of the subject I've seen. His books are almost entirely self-contained. Here's a sampling: http://www.faculty.luther.edu/~macdonal/GA&GC.pdf
"Alphabetical privilege", nice. As someone who's surname starts with AB I would also call it "alphabetical curse".
E.g. back when I was young, teachers would often describe what to do and then call each pupil up alphabetically to do it. I was always first, and most of the time I hadn't actually listened to the instructions...
I think you are right (although I now wish I had not abandoned my old account). However, I have always felt that karma systems are too biased towards old users who have had more time to accumulate karma. If only the karma system had some "aging" of results built in, like the tennis ranking system.
And I am only partially saying that because I have a new account here.
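For what it's worth, here's a minimal sketch of what "aging" could look like, assuming a simple exponential half-life on each karma event (real systems like the ATP tennis rankings use a rolling 52-week window instead; every name and number below is made up for illustration):

    import time

    HALF_LIFE_DAYS = 180  # assumption: a karma point loses half its weight every ~6 months

    def decayed_karma(events, now=None):
        # events: list of (unix_timestamp, points) tuples for one account
        if now is None:
            now = time.time()
        total = 0.0
        for ts, points in events:
            age_days = (now - ts) / 86400
            # recent points count fully; old points fade toward zero
            total += points * 0.5 ** (age_days / HALF_LIFE_DAYS)
        return total

Under a scheme like this, a long-dormant account with huge lifetime karma would rank below a newer account that earns steadily.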
Mind giving a concrete example? I read through the entire thing but couldn't make heads or tails of it. Is the point that your user stories should touch upon every aspect of your app, while still being incremental?
I think the idea is that for a slice of an elephant to be "elephant shaped," it has a bit of all of the key parts of an elephant - a bit of the trunk, a bit of a heart, stubs for four legs, whatever else makes an elephant an elephant. But what do the elephant's organs map to? I agree that the information on Elephant Carpaccio that I've been able to find doesn't really answer this.
My best guess is that it maps to the stages of a user story: "get input from the user," "do some business logic," "show output to the user." So even the first slice is a working prototype in some superficial sense. The elephant's organs might be app components (UI, database, etc.), but in the first slice you don't have a complete UI (maybe just text input), you don't have a production database (maybe just an in-memory dictionary), and you don't have robust business logic. You have the whole stack, but each part of the stack is incomplete. That's what I think makes it a vertical slice.
A horizontal slice (what not to do) would be one complete elephant organ. Maybe that's a production transactional database. So in the first slice you have a complete database or you've written the final business logic, but none of the other things that you would need in a mockup/prototype/MVP or an integration test.
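To make that concrete, here's a minimal first-slice sketch in Python. The pricing exercise is in the spirit of the Elephant Carpaccio kata, but the app, names, and numbers are my own illustrative assumptions: every layer is present, and every layer is deliberately thin.

    # First "carpaccio slice" of a toy order-pricing app:
    # the whole stack exists, but each layer is paper-thin.

    orders = {}  # "database" layer, first slice: an in-memory dict

    def compute_total(quantity, unit_price):
        # business-logic layer, first slice: no discounts or taxes yet
        return quantity * unit_price

    def main():
        # UI layer, first slice: plain text prompts instead of a real interface
        quantity = int(input("Quantity: "))
        unit_price = float(input("Unit price: "))
        total = compute_total(quantity, unit_price)
        orders[len(orders) + 1] = total  # "persist" the order in memory
        print(f"Order total: ${total:.2f}")  # show output to the user

    if __name__ == "__main__":
        main()

Later slices thicken each layer in place (discount tiers in the logic, a real database behind the dict, a proper UI in front of the prompts) instead of finishing any single layer first, which is the horizontal approach described above.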
With all the drama surrounding Reddit at the moment, it's nice to look back to a time when things were so much simpler, and most of the things we had to worry about were just tech.
On the contrary, I wouldn't be using Capacities if you didn't have your entire development roadmap on your own product, so thanks.