
> we have no way of knowing in advance what the capabilities of current AI systems will be if we are able to scale them by 10x, 100x, 1000x, and more.

Scaling experiments are routinely performed (the results are not encouraging). To say we know nothing about this is wrong.


I had the same thought. I mean, were they actually using ordinary floating-point numbers to represent amounts in their ledger? This sets off so many alarm bells.
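For anyone who hasn't been bitten by this, here is a tiny C sketch (the "ledger" is made up, obviously) of why binary floats are a poor fit for money compared with integer cents:

    #include <stdio.h>

    int main(void) {
        double balance_dollars = 0.0;   /* hypothetical ledger balance as a double   */
        long long balance_cents = 0;    /* the same balance tracked in integer cents */

        for (int i = 0; i < 1000; i++) {
            balance_dollars += 0.10;    /* a thousand ten-cent postings */
            balance_cents   += 10;
        }

        /* 0.10 has no exact binary representation, so the double drifts;
           this prints something close to, but not exactly, 100.00 */
        printf("double ledger : %.12f\n", balance_dollars);
        printf("integer cents : %lld (= %.2f)\n", balance_cents, balance_cents / 100.0);
        return 0;
    }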


In some circles, there is the irritating tendency to believe that technology can solve every problem. Experts are eschewed because innovation is valued above all else.


Um, is it okay to admit, as an "experienced" programmer, that I often resort to print statements? I mean, compilers are just so darn fast these days.

Another trick, for rare circumstances: code up whatever complicated logic is needed to detect the bug condition and issue a print statement, then use the debugger to break on that print statement.
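Something like this, as a made-up C sketch (the function names and the "invariant" are hypothetical):

    /* Minimal sketch of the trick above: the detection logic runs at full
       speed inside the program, and gdb only stops once the rare condition
       has already been isolated. */
    #include <stdio.h>

    static int looks_corrupted(const int *ring, int n) {
        /* stand-in for "whatever complicated logic is needed";
           pretend the invariant is that ring entries are never negative */
        for (int i = 0; i < n; i++)
            if (ring[i] < 0) return 1;
        return 0;
    }

    void check_ring(const int *ring, int n) {
        if (looks_corrupted(ring, n))
            fprintf(stderr, "ring corrupted\n");  /* set a gdb breakpoint on this line */
    }

    int main(void) {
        int ring[4] = { 1, 2, -7, 3 };            /* contrived data that trips the check */
        check_ring(ring, 4);
        return 0;
    }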


The syntax is clunky, but watchpoints can do what you want.

https://ftp.gnu.org/old-gnu/Manuals/gdb/html_mono/gdb.html#S...
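For example (frame_count is just a made-up variable name here): watch stops when the value is written, rwatch on reads, awatch on any access, and a trailing "if" makes the watchpoint conditional:

    (gdb) watch frame_count
    (gdb) watch frame_count if frame_count > 1000
    (gdb) rwatch frame_count
    (gdb) awatch frame_count

One caveat: if gdb can't cover the expression with a hardware watchpoint, it falls back to a software watchpoint and single-steps the program, which can be painfully slow.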


Coming from VMS at the time, I was confused why there was no decent full-screen interface to gdb. DDD was such a disappointment in this regard.


Deep learning is the very opposite of generalization.


It's not that simple:

> Intuitively, an overparameterized model will generalize well if the model’s representations capture the essential information necessary for the best model in the model class to perform well.

https://iclr-blogposts.github.io/2024/blog/double-descent-de...


The improvements in transformer implementations (e.g. "Flash Attention") have saved gobs of money on training and inference, likely more than the salaries of the researchers who developed them.


I hear what you are saying, but "innovation" is also often used to excuse some rather badly engineered concepts.


> Tesla are all making rapid progress on functionality

The lack of progress in self-driving suggests that Tesla has a serious problem with scaling. The investment in enormous compute resources is another red flag (if you run out of ideas, just use brute force). This points to a fundamental flaw in the model architecture.

