
Your run-of-the-mill computer program also "operates in enormous parameter spaces that are impossible to meaningfully test for all possible adversarial inputs and degraded outputs".
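To put a rough number on that: even a pure function over two 64-bit integer arguments has an input space no test suite can enumerate. A back-of-the-envelope sketch in Python (the throughput figure is just an assumption for illustration):

    # Hypothetical example: a pure function taking two 64-bit integers.
    # Exhaustively testing every input pair is already out of reach.
    inputs_per_arg = 2 ** 64
    total_cases = inputs_per_arg ** 2      # 2**128 combinations
    tests_per_second = 10 ** 9             # assume a generous 1e9 tests/sec
    seconds_per_year = 60 * 60 * 24 * 365
    years = total_cases / (tests_per_second * seconds_per_year)
    print(f"{total_cases:.3e} cases, roughly {years:.3e} years to exhaust")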



This is hardly similar, as the state of a typical computer program can be meaningfully inspected, which allows both useful insight into adversarial test setups and the design of comprehensive formal tests.


Right, if you consider the internal state, it is hardly similar. You talked about black-box testing and QA, though. Black-box testing by definition treats the internal state as irrelevant, and QA mostly treats the software it tests as a black box; in other words, the tests are "superficial", as you call it.


Black-box testing in typical software is less superficial, however, because the tester can make inferences and predictions about which inputs will affect the results. When you're testing a drawing program, for example, you may not know how the rectangle tool actually works internally, but you can make some pretty educated guesses about what types of inputs and program state affect its behavior. If the whole thing is controlled by a neural network with a staggering amount of internal state, the connections you can draw are much, much more tenuous, and consequently confidence in your test methodology is much harder to come by.
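To make the "educated guesses" point concrete, here's a rough sketch of a black-box check for a hypothetical rectangle tool (draw_rectangle and shape.bounds() are made-up names, not any real drawing program's API): the tester never inspects the internals, only the prediction that the drawn bounds follow directly from the two corner inputs.

    # Black-box property check for a hypothetical rectangle tool.
    # We never look inside draw_rectangle(); we only rely on the educated
    # guess that its output bounds are determined by the two corners.
    import random

    def normalized_bounds(x1, y1, x2, y2):
        # What a tester can predict from the outside: the bounding box
        # should span exactly the two corners, regardless of their order.
        return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

    def check_rectangle_tool(draw_rectangle, trials=1000):
        for _ in range(trials):
            x1, y1, x2, y2 = (random.randint(-10_000, 10_000) for _ in range(4))
            shape = draw_rectangle(x1, y1, x2, y2)   # treated as a black box
            assert shape.bounds() == normalized_bounds(x1, y1, x2, y2)

With a neural network in the loop, there is no comparably crisp prediction to assert against, which is the point being made above.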



