Please, anyone willing to answer, explain as if you were speaking to a golden retriever:
How important is a "lead" in this space?
I have been comparing ChatGPT and Bard a lot over the last two days because this is all very fascinating, and from my datapoint-of-one perspective, it feels like ChatGPT is way, way ahead.
Then I start to wonder: is this because ChatGPT captivated our attention so much over the last few months that OpenAI has been able to improve the product faster, and thus the flywheel starts spinning rapidly, giving them even more of an advantage as more people use the service and supply more data to train on?
I.e., does Bard (and any forthcoming competitor) fall way behind because it missed a slight head start that then rapidly spirals into a durable competitive advantage?
I don't think OpenAI's lead is holding Google back. Companies are held back by their own mentalities, and newer companies have fewer restrictions.
As an example, Google supposedly had better text-to-image models than DALL-E/Midjourney/etc, but didn't release them because they "reinforced harmful stereotypes".
It's also probably holding back on LLMs because it doesn't want to harm its search cash cow, just as Kodak didn't want to hurt its film cash cow by developing digital cameras (which it invented).
Of course, now it is forced to productize LLMs anyway. (Kodak was actually the largest manufacturer of digital cameras by unit volume when it went bankrupt, but that didn't help it.)
I think large companies have a huge disadvantage in this area due to reputational risk that startups don't really have to worry about to the same extent.
Imo there is less of a flywheel than it might seem. It's not like Google search, where a click on the 7th result is concrete feedback. ChatGPT gets far less concrete feedback (not none, but much less), and the amount of text already available for training is enormous.
Additionally, the model doesn't learn from experience. If you use GPT-4 for 10 years, it will behave the same in the 10th year as in the first. It's not getting better as you use it. OpenAI could improve their models based on the feedback they get, but I believe they claim they aren't doing this, and I doubt they'd lie about that.
Sounds like you don't ever submit feedback. It's been there from day one and asks you to write why you didn't like the response. It would not shock me if they use GPT itself to read the feedback and produce a corrected response for the next version's training data. They could also log any time someone expresses negative sentiment in the chat session and use that as a training signal.
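To picture that second idea, here is a minimal, purely hypothetical sketch of what logging such signals could look like. The class names, the keyword heuristic, and the whole pipeline are my own assumptions for illustration, not anything OpenAI has described.

```python
# Hypothetical sketch: collect thumbs-down feedback and negative-sentiment
# follow-ups as candidate training examples. Everything here is illustrative.
from dataclasses import dataclass
from typing import List

# Crude stand-in for a real sentiment classifier (which could itself be an LLM).
NEGATIVE_MARKERS = ("that's wrong", "not what i asked", "this is incorrect", "useless")

@dataclass
class ChatTurn:
    user_message: str
    model_response: str
    thumbs_down: bool = False
    feedback_text: str = ""

@dataclass
class TrainingCandidate:
    prompt: str
    bad_response: str
    reason: str

def collect_training_candidates(turns: List[ChatTurn]) -> List[TrainingCandidate]:
    """Flag turns with explicit thumbs-down feedback or an implicit complaint."""
    candidates = []
    for prev, curr in zip(turns, turns[1:]):
        # Implicit signal: the *next* user message complains about the previous answer.
        if any(marker in curr.user_message.lower() for marker in NEGATIVE_MARKERS):
            candidates.append(TrainingCandidate(prev.user_message, prev.model_response,
                                                "negative sentiment in follow-up"))
    for turn in turns:
        # Explicit signal: the built-in thumbs-down plus free-text feedback form.
        if turn.thumbs_down:
            candidates.append(TrainingCandidate(turn.user_message, turn.model_response,
                                                turn.feedback_text or "thumbs down"))
    return candidates

if __name__ == "__main__":
    history = [
        ChatTurn("What year did Kodak go bankrupt?", "Kodak went bankrupt in 2010."),
        ChatTurn("That's wrong, it was 2012.", "You're right, it filed in January 2012.",
                 thumbs_down=True, feedback_text="first answer had the wrong year"),
    ]
    for c in collect_training_candidates(history):
        print(c.reason, "->", c.prompt)
```

Flagged prompts like these could then be rewritten (by humans or by the model itself) into corrected responses for the next round of fine-tuning.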
Bard launched to a US-only audience. If becoming the engine inside other applications is what propels an LLM ahead of the others, I fail to see how a US-only launch helps with opening markets and winning new customers.
As a counterexample, I noticed Sweden's Klarna among the partners that OpenAI revealed when announcing their plugin API today.
It depends on the height of the S-curve, that is, how long the exponential uptrend continues. If not long, competitors will converge quickly. But if it continues for a long time, then a player that is always a few years behind might as well give up. Still, following a different path might lead to a bigger step change that lets a competitor catch up.
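As a toy illustration of that argument (made-up parameters, nothing measured): put two players on the same logistic capability curve, with the follower a couple of years behind, and compare how the gap evolves when the curve saturates early versus when its exponential phase lasts much longer.

```python
# Illustrative sketch of the S-curve argument: two players on the same logistic
# capability curve, the follower starting `lead` years behind the leader.
# Curve parameters are invented purely to show the shape of the argument.
import math

def logistic(t: float, ceiling: float, growth: float, midpoint: float) -> float:
    """Capability at time t on an S-curve that saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-growth * (t - midpoint)))

def gap_over_time(ceiling: float, growth: float, midpoint: float, lead: float = 2.0):
    """Print leader vs. follower, where the leader is `lead` years further along."""
    for year in range(0, 21, 5):
        leader = logistic(year + lead, ceiling, growth, midpoint)
        follower = logistic(year, ceiling, growth, midpoint)
        print(f"  year {year:2d}: leader {leader:7.1f}, follower {follower:7.1f}, "
              f"gap {leader - follower:7.1f}")

print("Short S-curve (saturates early) -> the gap closes quickly:")
gap_over_time(ceiling=100, growth=1.0, midpoint=3)

print("Tall S-curve (long exponential phase) -> the gap keeps widening for years:")
gap_over_time(ceiling=10_000, growth=0.5, midpoint=15)
```

In the first case both players hit the ceiling within a few years and the head start stops mattering; in the second, the absolute gap keeps growing for most of the run and only narrows once the leader finally approaches saturation.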