
This is bizarre, wasn't Google the one who claimed the name and did it first?



Gemini was also "use us through this weird interface and also you can't if you're in the EU"; that, plus being far behind OpenAI and Anthropic for the past year, meant they failed to gain mindshare, partly because of their own choices.


Honestly I don't get why everybody is saying Gemini is far behind. For me, Gemini Flash Thinking Experimental performs far, far better than o3-mini.


There's a lot of mental inertia combined with an extremely fast moving market. Google was behind in the AI race in 2023 and a good chunk of 2024. But they largely caught up with Gemini 1.5, especially the 002 release version. Now with Gemini 2 they are every bit as much of a frontier model player as OpenAI and Anthropic, and even ahead of them in a few areas. 2025 will be an interesting year for AI.


Arguably Google is ahead. They have many non-LLM efforts (Waymo, DeepMind, etc.), and they have their own hardware, so they're not as reliant on Nvidia.


Demis Hassabis isn't very promotional. The other guys make more noise.


Seconding this. I get really great results from Flash 2.0 and even Pro 1.5 for some things compared to OpenAI models.

And their 2.0 Thinking model is great for other things. When my task matters, I default to Gemini.


I find the problem with Gemini is the rate limits. Really restrictive.


I can tell you why I just stopped using Gemini yesterday.

I was interested in getting simple summary data on the outcome of the recent US election and asked for an approximate breakdown of voting choices as a function of voters' age brackets.

Gemini adamantly refused to provide these data. I asked the question four different ways. You would think voting outcomes were right up there with Tiananmen Square.

ChatGPT and Claude were happy to give me approximate breakdowns.

What I found interesting is that the patterns of voting by age are not all that different from Nixon-Humphrey-Wallace in 1968.


Gemini's guardrails are unnecessarily strict. As you mentioned, there's a topical restriction on election-related content, and another where it outright refuses to process images containing anything resembling a face. I initially thought Copilot was bad in this regard—it also censors election-related questions to some extent, but not as aggressively as Gemini. However, Gemini's defensiveness on certain topics is almost comical. That said, I still find it to be quite a capable model overall.


It was far behind. That's what I kept hearing on the Internet until maybe a couple weeks ago, and it didn't seem like a controversial view. Not that I cared much - I couldn't access it anyway because I am in the EU, which is my main point here: it seems that they've improved recently, but at that point, hardly anyone here paid it any attention.

Now, as we can finally access it, Google has a chance to get back into the race.


It varies a lot for me. One day it takes scattered documents, pasted in, and produces a flawless summary I can use to organize it all. The next, it barely manages a paragraph for detailed input. It does seem like Google is quick to respond to feedback. I never seem to run into the same problem twice.


> It does seem like Google is quick to respond to feedback.

I'm puzzled as to how that would work, when people talk about quick changes in model behavior. What exactly is being adjusted? The model has already been trained. I would think it's just randomness.


Magic

And fine-tuning.

Choose your fighter...

High level overview: https://www.datacamp.com/tutorial/fine-tuning-large-language...

More detail: https://www.turing.com/resources/finetuning-large-language-m...

Nice charts: https://blogs.oracle.com/ai-and-datascience/post/finetuning-...
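
For a concrete picture of what fine-tuning means mechanically, here's a minimal LoRA sketch using the Hugging Face transformers and peft libraries. Everything in it (the model, the data file, the hyperparameters) is a placeholder; nobody outside Google knows what their actual pipeline looks like.

    # Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
    # Model, data file, and hyperparameters are placeholders, not Google's setup.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "gpt2"  # stand-in; any causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Wrap the base model so only small adapter matrices get trained.
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"))

    # Feedback-derived training examples would go here, one per line.
    data = load_dataset("text", data_files={"train": "feedback.txt"})
    tokenized = data.map(lambda x: tokenizer(x["text"], truncation=True),
                         remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

The relevant point for the question above: a pass like this adjusts weights cheaply via small adapters rather than retraining from scratch, so behavior can shift between full model releases.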

The big platforms also seem to employ an intermediate step where they rewrite your prompt. I've downloaded my ChatGPT data and found substantial changes from what I wrote, usually for the better. Changes to how it rewrites your prompt change the results.
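
Nobody outside these companies knows what the rewriter actually is, but mechanically it could be as simple as an extra model call before the real one. A hypothetical sketch (the model names and rewrite instructions are made up):

    # Hypothetical middleware that rewrites a prompt before answering it.
    # Not OpenAI's actual pipeline; this only shows the shape of the idea.
    from openai import OpenAI

    client = OpenAI()

    REWRITE_INSTRUCTIONS = (
        "Rewrite the user's request to be clearer and more specific. "
        "Preserve the intent; do not add new requirements."
    )

    def answer(user_prompt: str) -> str:
        # Step 1: a cheap model call cleans up the raw prompt.
        rewritten = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": REWRITE_INSTRUCTIONS},
                      {"role": "user", "content": user_prompt}],
        ).choices[0].message.content

        # Step 2: the main model sees the rewritten prompt, not the original.
        return client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": rewritten}],
        ).choices[0].message.content

Tweaking the rewrite instructions changes all downstream answers, which would explain exported data showing a different prompt than what was typed.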


System prompts have a huge impact on output. Prompts for ChatGPT/etc are around a thousand words, with examples of what to do and what not to do. Minor adjustments there can make a big difference.
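
As a toy illustration of how much a system prompt steers output, here's the same question asked under two system prompts via the OpenAI Python SDK (the model name and prompts are just examples; the real deployed prompts aren't public):

    # Toy demonstration: the same question under two system prompts.
    # Model name and prompts are illustrative; the real ones aren't public.
    from openai import OpenAI

    client = OpenAI()

    def ask(system_prompt: str, question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "system", "content": system_prompt},
                      {"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    q = "Summarize the main tradeoffs of microservices."
    print(ask("You are concise. Answer in at most three sentences.", q))
    print(ask("You are thorough. Walk through each point step by step.", q))

With identical user input, the two calls produce very differently shaped answers, which is why small edits to a deployed system prompt can look like the model itself changed.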


I've found this as well. On a good day Gemini is superb. But otherwise, awful. Really weird.


o3-mini is still behind o1 pro; it didn't impress me.

I think the people who think anybody is close to OpenAI don't have a Pro subscription.


The $200 version? It's interesting that it exists, but for normal users it may as well... not. I mean, Pro is effectively not a consumer product, and I'd just exclude it from comparisons of available models until you can pay for a single query.


Its speed makes it better for me to iterate … o1 pro is just too slow, or not yet good enough to be worth waiting 5 minutes for…


o3-mini isn't meant to compete with o1, or o1 pro mode.



