Hacker News

What part do you think is going to become obsolete? Because math isn't about "working out the math", it's about finding the relations between seemingly unrelated things to bust open a problem. Short of AGI, there is no amount of neural net that's going to realize that a seemingly impossible probabilistic problem is actually equivalent to a projection of an easy-to-work-with 4D geometry. "Doing the math" is what we have computers for, and the better they get, the easier the tedious parts of the job become, but "doing math" is still very much a human game.



> What part do you think is going to become obsolete?

Thank you for the question.

I guess what I'm saying is:

Will LLMs (or whatever comes after them) be _so_ good and _so_ pervasive that we will simply be able to say, "Hey ChatGPT-9000, I'd like to see if the xyz conjecture is correct." And then ChatGPT-9000 just does the work without us contributing beyond asking a question.

Or will the technology be limited in some way, such that we can still use ChatGPT-9000 as a tool for our own intellectual augmentation, and could still contribute to research even without it?

Hopefully, my comment clarifies my original post.

Also, writing this stuff has helped me think about it more. I don't have any grand insight, but the more I write, the more I lean toward the outcome that these machines will allow us to augment our research.


As amazing as they may seem, they're still just autocompletes; that's inherent to what an LLM is. So unless we come up with a completely new kind of technology, I don't see "test this conjecture for me" becoming more real than the computer-assisted proof tooling we already have.
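For concreteness, here's roughly what that existing tooling looks like: in a proof assistant like Lean, the human supplies the idea and the machine only verifies each step (a minimal sketch; the theorem name `add_comm_example` is made up, but `Nat.add_comm` is a real library lemma):

```lean
-- A machine-checked proof in Lean 4: the human picks the lemma,
-- the tool verifies that it actually closes the goal.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The checker will reject any proof that doesn't hold up, but it never goes looking for the insight on its own.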





