If it were that simple, people would already be importing them and selling them, black market or not. But high-end hearing aids take multiple in-person visits with experts: in-depth hearing tests, then impressions for a perfect fit, then hand adjustments to tune the fit, and return visits to tune the sound and fit.
The technology is probably not the most expensive part of high-end hearing aids. It's the service.
My eye doctor puts me in front of a bunch of lenses and then says "better... flip or worse?" over and over and we end up with a set of lenses. Without access to a trained doctor, at first glance (no pun intended) it seems like you could do a 95%+ job of fitting a hearing aid's sound profile using a better/worse UI on a phone app and some prerecorded sounds/speech samples. I have zero experience with hearing loss, though.
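To make that concrete, here's a rough sketch of what such a better/worse loop might look like, written in Python for readability rather than as a real phone app. The band list, the step sizes and the play_sample stub are all made up for illustration; a real implementation would apply the gains to actual recorded speech and worry about matching loudness between trials.

    BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]   # typical audiogram frequencies

    def play_sample(gains_db):
        # Placeholder: a real app would apply these per-band gains to a
        # prerecorded speech sample and play it back at a fixed overall level.
        print("  playing sample with gains:",
              {f"{band} Hz": f"{gain:+.0f} dB" for band, gain in gains_db.items()})

    def fit_profile():
        gains = {band: 0.0 for band in BANDS_HZ}
        for band in BANDS_HZ:
            step = 8.0                            # start coarse, then refine
            while step >= 2.0:
                for direction in (+1, -1):        # try louder, then quieter
                    trial = dict(gains)
                    trial[band] += direction * step
                    play_sample(trial)
                    answer = input(f"{band} Hz {direction * step:+.0f} dB: better or worse? ")
                    if answer.strip().lower().startswith("b"):
                        gains = trial             # keep the change the listener preferred
                        break
                step /= 2                         # smaller steps, like "flip... 1 or 2?"
        return gains

    if __name__ == "__main__":
        print("final profile:", fit_profile())

The specific numbers don't matter; the point is the shape of the interaction: a handful of forced-choice trials per band instead of a clinic visit.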
I wear a special type of contact lens because it gives me much better vision than any glasses ever could. My doctor starts with "better or worse", and that gives her a starting point. Then we measure and fit, and of course the weight and shape of the lens affect the shape of the eye, which changes the lens requirements, so it usually takes several fittings and trials to get a good fit.
Glasses are more like a hearing aid that just makes everything louder. Which will work in most cases. But many people need or want more.
Yeah, let me tell you about something that people in the Quality Assurance industry might call an "indicator": the US Army doesn't do much more than what you're describing for hearing aids.
Well, you see, sound has this magic quality: the more you jack it up, the more you hear. Ever been to a concert by a warm, homey Ninja Tune artist such as Coldcut or Amon Tobin? They turn out to be eye-popping industrial hardcore on a club's 10,000-watt system.
The problem is, it's not good to put too much sound into the ears.
The person you're replying to didn't say anything about jacking up the sound. He said, "you could do a 95%+ job of fitting a hearing aid's sound profile using a better/worse UI on a phone app and some prerecorded sounds/speech samples". You'd train it--at a constant sound level--by listening to sample audio recordings and repeatedly saying "better" or "worse". The same way you need to train a speech-to-text dictation system by repeating a bunch of phrases, or a fingerprint scanner by repeatedly swiping your finger, or voice recognition by repeating some words. It sounds like a totally reasonable way to eliminate high-priced fitting sessions.
Hearing testing for the most part isn't about the patient giving better/worse feedback. It's about determining the quietest levels at which you can hear each frequency, and how much loudness you need to accurately identify sounds and speech. The tester has to assess whether the patient's performance is better or not, because it's blind testing. The patient's feedback is in the form of either signalling "I heard that", or repeating the words that were just played.
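For what it's worth, the threshold-hunting part of that is mechanical enough to sketch. Below is a toy, simplified Hughson-Westlake-style staircase in Python (drop 10 dB after a response, come back up 5 dB after a miss, accept a level heard twice); the simulated patient and the exact numbers are assumptions for illustration, not a clinical protocol.

    import random

    def patient_hears(level_db, true_threshold_db):
        # Stand-in for the patient pressing the "I heard that" button;
        # responses near threshold are noisy, as they are in real testing.
        return level_db + random.gauss(0, 2) >= true_threshold_db

    def find_threshold(true_threshold_db, start_db=40):
        level = start_db
        times_heard = {}
        while True:
            if patient_hears(level, true_threshold_db):
                times_heard[level] = times_heard.get(level, 0) + 1
                if times_heard[level] >= 2:       # same level confirmed twice
                    return level
                level -= 10                       # heard it: try quieter
            else:
                level += 5                        # missed it: come back up

    if __name__ == "__main__":
        for freq_hz, true_thr in [(500, 25), (1000, 35), (4000, 55)]:
            est = find_threshold(true_thr)
            print(f"{freq_hz} Hz: estimated threshold {est} dB HL (simulated true value {true_thr})")

That only covers the pure-tone threshold part, of course; scoring the repeated-words test is a separate problem.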
Because a person tuning an aid for themselves is rather likely, IMO, to simply choose more amplification, since they can hear better that way. Which will then backfire when they lose even more hearing. I've seen braggart reports of people with severe hearing loss whose aids let them hear speech from a greater distance than people with healthy hearing, but is that good in the long term?
Firstly, people already have to be told not to play music too loud on headphones. And secondly, when you buy an aid, the audiologist tells you that you won't be comfortable with it for some time until you get used to it, even though the aid is supposedly tuned to the exact profile of your hearing loss. People aren't good at getting used to uncomfortable things without adjusting them to their short-term liking.
On top of that, audio engineers, musicians and graphic artists know that it's difficult to make fine adjustments to audio or graphics for long, because the senses tire and get "burned out" after a while and you no longer see or hear the same way (sometimes even five to ten minutes is enough). Novices are likely to be unfamiliar with these effects, to have less stamina for them, and to be unable to counteract them without overcompensating.
Agreed, for that last 5% of fine-tuning frequency bands etc. you need a professional, but if you were going to roll out $12 hearing aids to the low-income areas of the world, a billion people at a time, an Android app on a cell phone could probably give a lot of people their hearing back. Volume levels could be hard-limited in software, and people who need to exceed the preset limits could be referred to a senior technician, in that 5% of cases or whatever it might be.
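The hard limit itself is the easy part; something like the sketch below (Python rather than real firmware, and with made-up dB numbers rather than actual safety limits) is all it takes to cap both the requested gain and the final output level.

    import numpy as np

    MAX_GAIN_DB = 25.0    # assumed cap on user-selectable amplification
    MAX_OUTPUT = 0.5      # assumed ceiling on output amplitude (full scale = 1.0)

    def apply_limited_gain(samples, requested_gain_db):
        gain_db = min(requested_gain_db, MAX_GAIN_DB)      # clamp the request
        out = samples * 10 ** (gain_db / 20)
        return np.clip(out, -MAX_OUTPUT, MAX_OUTPUT)       # hard output limit

    if __name__ == "__main__":
        t = np.arange(0, 0.01, 1 / 44100)
        tone = 0.2 * np.sin(2 * np.pi * 1000 * t)          # quiet 1 kHz test tone
        limited = apply_limited_gain(tone, requested_gain_db=60.0)  # asks for far too much
        print("peak after limiting:", round(float(np.abs(limited).max()), 3))

A real device would presumably use a proper per-band compressor/limiter rather than hard clipping, since clipping distorts, but the safety property is the same: the user simply can't push the output past the ceiling.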
Being able to hear even every other word would improve the quality of life of many people. Probably one word in three is the threshold of "good enough" at which the aids suddenly become worth the hassle. My grandparents now hate family gatherings because they might understand only one in five sentences spoken directly to them, between their hearing problems and the quality of the hearing aids they're able to afford with their insurance.
I'm not saying it can't be automated, just pointing out that the testing process is pretty different from what you seemed to be describing. And I left out some of the steps that aren't easy to automate: checking for fit, deciding what style of hearing aid would work best, visual inspection of the ear canal.