Regarding everyone's latency concerns, as someone who has done low-latency audio processing on Android -- in their defense, I'd bet almost anything the demo is meant only to demonstrate the math behind this. Depending on the platform (Android, cough), low-latency audio processing can be almost a dark art in itself. And hey, look, they're doing this on Android.
My guess is that they decided to release the demo earlier instead of spending days/weeks getting up to speed with low-latency audio processing in the Android JNI.
It's an academic demo/press release. Not a software release for production/market.
I'm curious, besides doing things in C/C++, are there any "magic tricks" to doing low-latency audio processing on Android? Looking at the chart here [1] it still seems to be a good bit behind iOS.
In the Java layer? About the only thing you can do is make sure you're using the device's native sampling rate: typically 48 kHz on phones and 44.1 kHz on tablets. Non-native sampling induces a rather large latency hit, because the framework has to resample. The buffering and buffer-size stuff, unfortunately, is really only accessible from native code, IIRC.
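For what it's worth, here's a minimal sketch of how you'd query those values from the Java layer (API 17+). The class and method names here are my own invention, but AudioManager.getProperty and the two property constants are the real framework API -- and note that PROPERTY_OUTPUT_FRAMES_PER_BUFFER does expose the optimal buffer size in Java, even if the actual low-latency output path still wants to be native:

    import android.content.Context;
    import android.media.AudioManager;
    import android.util.Log;

    public class NativeAudioParams {
        // Query the device's preferred (native) output sample rate and
        // buffer size. Feeding AudioTrack a non-native rate forces the
        // framework to resample, which is where the latency hit comes from.
        public static void logNativeParams(Context context) {
            AudioManager am =
                    (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

            // Available since API 17; returns e.g. "48000" on most phones.
            String sampleRate =
                    am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);

            // Also a Java-side query (API 17+): the optimal frames-per-buffer
            // count for the device's low-latency output path.
            String framesPerBuffer =
                    am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);

            Log.d("AudioParams", "native rate=" + sampleRate
                    + " Hz, frames/buffer=" + framesPerBuffer);
        }
    }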
To be completely honest, it's been a long time since I've messed with audio stuff on the Java side, so I'm not sure if or how much things have changed for the better.
A lot of production input devices for stuff like waving hand gestures and pen input also have unacceptably high latency. Getting latency down is hard, and makes a huge difference to usability.