I had Clearview AI try to hire me back when nobody in the wider public knew about them, and that was pretty much how I felt about it at the time when I turned the offer down. I'm glad to see it being talked about now.
I like to call it inspector gadgetarianism. Cops LOVE this shit. They do all sorts of practices like denying that you can easily pass lie detection testing, penile plethysmography (like if they want to determine whether a potential recruit is a pervert, they hook his meat up to this machine and show him shit like dudes copulating with chickens), and that time the US military got those ridiculous fake bomb detectors that did literally nothing.
Using an alt-account for this as I don't normally post on non-technical or potentially controversial topics.
I'm working on a product today that includes a PPG sensor on an innocuous part of the body (e.g. the wrist). While we expose many of the sensors to developers via an SDK API, the PPG signal is heavily processed (downsampled, in a sense) down to minimal information such as heart rate. This is due to concerns about application developers being able to use the raw PPG to make health inferences and/or use it as a way to identify and track individual users of our device.
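To make the "downsampled to heart rate" idea concrete, here is a minimal sketch (not our actual pipeline; every name and parameter is hypothetical) of reducing a raw PPG window to a single heart-rate number before anything crosses the SDK boundary:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_ppg(raw_ppg: np.ndarray, fs: float) -> float:
    """Reduce a raw PPG window to a single heart-rate value (BPM).

    In this hypothetical SDK, this number is the only thing exposed;
    the raw samples never cross the API boundary.
    """
    # Band-pass around plausible pulse frequencies (~0.7-3.5 Hz, i.e. 42-210 BPM).
    b, a = butter(2, [0.7, 3.5], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw_ppg)

    # Pulse peaks must be at least ~0.3 s apart (caps detectable rate at ~200 BPM).
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs))
    if len(peaks) < 2:
        return float("nan")

    # Mean inter-beat interval -> beats per minute.
    ibi_seconds = np.diff(peaks) / fs
    return 60.0 / float(np.mean(ibi_seconds))

# What an application developer would see through the SDK: a number, not a waveform.
# window = device.read_ppg_window()   # hypothetical internal call, not exposed
# print(heart_rate_from_ppg(window, fs=100.0))   # e.g. 72.4
```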
One of our demos was to a defense company and an engineer there was appalled at all the sensors we were trying to utilize. He said something to the effect of "No way I'm wearing that. You could do a lot of fucked-up shit with that. I would know, because I'm a fucked-up guy and my job is making that fucked-up shit."
He followed up with: "Could you use that PPG sensor to see if I have an erection?" and the question was drowned out by laughter across the room.
One of our data scientists who works closely with the PPG sensor models told me in private later: "I actually thought that was a really good question. I thought about it for a while, and I think the answer is yes -- we could." I don't know if he's right -- the data we get from the PPG sensor is pretty dirty, especially whenever the wearer is moving around. But our signal processing pales in comparison to what Apple is able to achieve, so maybe someone better could. Or maybe not; it's not something we're spending engineering time to determine!
I mean I did have a good chuckle at your story but... I gotta ask, what are the actual useful applications for what you are building?
I have to step back from a ton of opportunities (like I said, Clearview AI rang some alarm bells) because they weren't the right thing to do. Some guy yesterday tried to rope me into one of those land banking scams and I actually sent him goatse and told him to delete my number; that's how far away I want people with less-than-good ideas from me. You can't even pay me enough to violate my principles.
In this case the PPG sensor is pretty peripheral to the value proposition, which is why we can reduce it to just heart rate and still satisfy the needs of our customers (application developers). There are a number of other biometrics used which are more salient for what we're looking at, which is generally related to training people on new skills or relearning existing skills differently than they were originally learned.
Most of the applications so far are non-defense and pretty non-controversial.
I'd genuinely be happy to talk about it with you offline, but would rather not publicly associate the product with that story, because the product is pretty cool overall, and I'm a major technology skeptic in general, so it takes me a lot to decide something is potentially valuable and not evil.
Also, the main misunderstanding there was about the point: "No, our API does not expose that raw PPG sensor data, so no application developer could make that inference."
In terms of stepping back... I agree. I've done that on previous projects: left companies or forced gross changes in architecture to satisfy my conscience.
It really helps this time that I'm working with a lot of people who aren't afraid to share stories like this one and talk about them candidly, as well as share links on public internal Teams groups about all kinds of privacy concerns over biometrics (there's no fear of creating a paper trail which shows "we knew or should have known"). As we are building the "platform" rather than the applications, everyone genuinely seems to feel it's important to limit the harm that third-party developers can cause. And for now at least, the product is actually quite hobbled by erring towards safety.
That said, when you ask "what are the useful applications", that's a very difficult question for me to answer right now. Mainly I think the platform has a ton of potential, but each third-party application probably needs a lot more psychology-doctorate-level research to properly apply the platform's capabilities to that application's specific situation. And in lieu of that, developers have to either get lucky or have incredibly good intuition about the domains and people that they're creating applications for.
So maybe Apple should call their touch sensing and haptic feedback technology the "Faptic Engine", since the Apple Watch can detect when and how vigorously you're masturbating? (Apple Watch users: Do you take it off, or use the other hand, or is there "an app for that"?)
>A combination of “tap” and “haptic feedback,” taptic engine is a name Apple created for its technology that provides tactile sensations in the form of vibrations to users of Apple devices like the Apple Watch, iPhones, iPads, and MacBook laptops.
>Apple debuted its taptic engine technology along with the Force Touch feature in the 2015 editions of Apple MacBook notebooks and the initial Apple Watch, and the two features work together to provide a user with more input control and feedback.
>Apple has moved the Taptic Engine to its third device — the iPhone 7. The new technology replaces the older linear actuator, and will ultimately bring a world of force feedback sensations to the user, as developers implement the technology in their own apps.
>The Taptic Engine is Apple's implementation of haptic user interface feedback. Using a linear actuator, devices like the iPhone 7 can reproduce the sensation of motion or generate new and distinct tactile experiences. In some cases, audio feedback from onboard speakers completes the illusion.
Yes, in our case -- and in the general case -- PPG can be assumed to stand for photoplethysmogram. Also, I'm not aware of any examples where "photoplethysmogram" was used as "penile (photo)-plethysmography"; generally what I've read on "penile plethysmography" describes more direct measurements of the penis itself, not optical measurements of blood flow elsewhere on the body.
Because of that, I don't think it's reasonable to call Apple's "touch sensing and haptic feedback technology the 'Faptic Engine'" at this time! But it is a funny pun!
We merely entertained the idea that it might be possible to use machine learning to infer erection status from a PPG sensor somewhere else.
Oh yeah, not that one, but the history of this fraud goes BACK some years. Check this little piece of inspector gadgetarianist nonsense that goes all the way back to 1996.
I don't really know what the people who make these are thinking; it seems like a really bad idea to me to scam the only people who can themselves legally go after and detain you.
That particular device wasn't. But other very similar devices have been purchased by the US military.
> The Navy's counterterrorism technology task force tested Sniffex and concluded "The Sniffex handheld explosives detector does not work." Despite this, the US military bought eight for $50,000.
>Does the technique of clenching your butt hole actually work to beat a polygraph? (As described in my favorite "The Americans" season 2 episode 7, "Arpanet".) Does it also help to visualize someone you love at the same time? ;)
If it actually works, the photo of Charles Wayne Humble in the article looks like he could lie through his teeth while passing a polygraph exam with flying colors!
After consulting with Arkady and Oleg, and with the promise of coaching from Oleg, Nina tells Stan that she will take the FBI's polygraph test. Oleg suggests a few techniques including that she visualize him in the room as well as clenching her anus.
The Americans Season 2 Episode 7 "Arpanet" Review: "I like when I learn something from an episode, and now I know ..." "If you're having to do a polygraph, squeeze your anus."
>ESQ: The show featured Nina learning to beat and eventually beating a polygraph. How easy is it to do that?
>PE: We have a number of real-world instances. The Aldrich Ames case. He went through the polygraph twice, after he went to work for the Soviets. Administering a polygraph is an art, not a science. That's why it's not admitted in court. People have claimed to have had training to beat the polygraph. Everything from tightening your sphincter to breathing a certain way, and so forth.
>ESQ: Speaking of sphincters, the trick she's told is "squeeze your anus." Is that a thing?
>PE: I can't confirm. [Laughs] I took several polygraphs. Taking them is a standard thing in the intelligence life.
>Or if you deliberately clench your anal sphincter, that's enough to change results. And yes, that actually works; that's why some lie detectors now feature ass-clenching detection technology.
Not going to lie, all I can think of reading your comment is the book "How to Good-bye Depression: If You Constrict Anus 100 Times Everyday. Malarkey? or Effective Way?"
That seems like a decoy. Make people rely on it and then miss something else. Why couldn't they just bite their tongue if you just need an overriding stimulus?
Moreover, that assumes a premise where a lie detector can actually detect lies in the first place.
I have a friend who's terrified of this new phrenology. She's missing some muscles in her face, such that she blinks weirdly. She can only mostly close her eyes, thanks to an operation in her childhood that took a muscle from her butt and placed it diagonally on her forehead. At best, her blinks are asymmetrical. At night, she has to sleep with dark cloth over her eyes because she can never fully close them.
Her terror revolves around being profiled as being overly nervous at airports. Of course, that worry makes her actually nervous at airports.
This article is talking about two very different things:
1. Using video/audio of a person to make statements about their personality.
2. Using video/audio of a person to identify them.
The former seems as though it’s largely pseudoscience and should be avoided on the basis that it simply does not work.
The latter may be inaccurate, biased and problematic, but it does not seem to qualify as pseudoscience. I would imagine such systems will continue to improve in accuracy. Do others really consider facial recognition and voice recognition (as the title suggests) to be like phrenology?
> Do others really consider facial recognition and voice recognition (as the title suggests) to be like phrenology?
I won't jump to that conclusion but I'm skeptical. Phrenology seemed like it worked (to some) until real rigor and statistically significant sample sizes put it to rest. I suspect that may happen to facial and voice recognition too - if limited to a small, known group size or heavily controlled circumstances it works well, but the second you apply it to nontrivial real world populations over time it loses most of its predictive power. Obviously even in limited applications the technology is far more useful than phrenology ever was, just going by the number of people using Face ID.
At the end of the day, the predictions facial/voice recognition algorithms make are far more specific and obviously testable than phrenology ever was, but we don't even have conclusive evidence that humans can precisely identify individuals using only sight or sound without a mental context that would approach a general AI. Even parents, for example, can have trouble differentiating identical twins without contextual clues like personality traits, schedule, or preferred fashion.
IMHO it all relates to what you mean by "precisely identify" - if we're talking about automating some process that's currently done by humans, then it does not need to be able to distinguish identical twins, it only needs to be roughly comparable to what the humans can do. And the "benchmark human" is not a close relative using contextual clues, it's some bored, overworked clerk who'd otherwise be looking at the same pictures.
I think whether or not these methods are pseudoscience is irrelevant. Why?
Even if phrenology had worked most of the time, it would still be a terrible idea. Even if phrenologists could classify criminals with 99% accuracy from the shape of their skull, they'd still be screwing over a ton of innocent people when their methods were applied to large populations. Putting somebody in prison because of the shape of their skull is a terrible proposition, even if you get it right more often than not.
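To put rough numbers on that base-rate problem (illustrative figures only, not from the article):

```python
# Illustrative base-rate arithmetic, not real data.
population = 1_000_000
prevalence = 0.001            # assume 0.1% of people are actual offenders
sensitivity = 0.99            # the test flags 99% of actual offenders
specificity = 0.99            # the test clears 99% of innocent people

offenders = population * prevalence
innocents = population - offenders

true_positives = offenders * sensitivity            # 990
false_positives = innocents * (1 - specificity)     # 9,990

precision = true_positives / (true_positives + false_positives)
print(f"Flagged innocents: {false_positives:,.0f}")
print(f"Chance a flagged person is actually an offender: {precision:.1%}")  # ~9%
```

Even at "99% accuracy", most of the people flagged are innocent, simply because innocent people vastly outnumber offenders.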
That's sad, but we already do lots of statistical analysis in good or bad ways. It would only help to shed more light on the bad ways if we digitize it further. It's unclear to me whether what you assume about that digital racism software isn't already there (e.g. detectives who prioritize suspects by skull shape, or HR departments doing the same). The fact that we cannot see it doesn't make it less terrible.
Edit: I'm basing this on the premise that, working with a lot of subjects, a human inevitably creates a structure in their brain similar to what we discuss (professional deformation is a thing). I bet that it's often even worse than $subj because a human mind tends to simplify its job for energy consumption reasons.
Even if the accuracy and error rate surpass humans, every death due to an AV failing is one death too many. However, most people seem to be happy with a failure rate lower than my grandma's...
Human eyewitness testimony is also imperfect. How often do you see someone or hear a voice that you think you recognize only to find out it was someone else?
What you are arguing is that voice and face recognition are to be accepted as absolute fact, which is not how they ought to be treated. They shouldn't be considered any more reliable than a human, and any use otherwise should be discouraged. Don't throw the proverbial baby out with the bathwater.
Human eyewitness relates to an event. "Did you see this person on the night of January 14" etc. Phrenology is based on immutable characteristics of a person's physiology. Predicting criminality based on any immutable feature seems categorically wrong to me. If facial and voice recognition are used in relation to specific events, that's one thing, but using them to predict some sort of innate propensity for doing crime sounds like pure bathwater.
You might want to check out the bit in the article mentioning the study that claimed you could figure out a person's sexual orientation from their photo.
There are also many well-known algorithms that attempt to predict how nervous or angry a person is from a photo.
Edit: as for criticisms of image recognition systems themselves, they have different false positive rates for different skin colors.
Suppose for argument's sake that it is fairly accurate.
What would then be the argument that we shouldn't use this information to, say, set insurance premiums, when we already happily accept the use of other prejudicial information such as age and gender to do so?
> we already happily accept the use of other prejudicial information
False premise, I do not happily accept this. I begrudgingly accept it, only because I don't think I have the power to change it. But by speaking out against the automation of prejudice in other domains before those systems have become mainstream, I hope I might make a difference.
I think the cost of car insurance should be a function of your past driving history and the price of your car (if the insurance policy covers damage to your car.) Pulling race, gender, sex or age into the equation might make insurance companies more profitable, but I do not like it.
In the (theoretical, unrealistic) limit of perfect competition, supposing that the profit margin is the same in both cases, wouldn't the average purchaser of car insurance be better off if these things were taken into account than if they weren't?
Ok, maybe you don’t think averages are the thing to care about.
Uh,
Ok so if you draw the demand curves (not straight lines) for car insurance for both types of people (those born with and without street racing symbols on their irises)
And we consider the cases where the insurance companies can set a price that depends on the type,
uhh,
Well,
Hm, ok yeah I guess those with racing-eyes get a worse deal in the case that they can be discriminated against. (Here I am using “discriminated against” in what is meant to be a way that doesn’t make a value judgment)
But, the point of insurance is not to produce equal outcomes between people. The point of insurance is to reduce the variance in outcomes for each person with as small as possible a worsening of the average outcome for that person.
If what you are doing is trying to make outcomes equal between groups, what you are doing is no longer just insurance, but a subsidy.
Is it really more efficient to have the prices be required to be the same regardless of racing-eyes, than it would be to just directly tax those with non-racing-eyes to subsidize those with racing-eyes?
Profit margins would actually be bigger in the case of more profiling. Absent profiling, insurers open themselves up to much more negative selection (e.g. old people being more likely to purchase cheap health insurance that's subsidised by the young), which has the effect of increasing premiums for the customer, decreasing margins for the insurer and chasing away the customers that are the least likely to need to make a claim.
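A toy illustration of that adverse-selection spiral, with made-up numbers:

```python
# Made-up numbers: expected annual claim cost per customer for two risk groups.
low_risk_cost, high_risk_cost = 200.0, 2000.0
low_risk_n, high_risk_n = 900, 100

# Pooled (non-profiled) break-even premium across everyone.
pooled = (low_risk_cost * low_risk_n + high_risk_cost * high_risk_n) / (low_risk_n + high_risk_n)
print(f"Pooled premium: {pooled:.0f}")          # 380

# Suppose low-risk customers only buy if the premium is <= 1.5x their own expected cost.
if pooled > 1.5 * low_risk_cost:
    low_risk_n = 0                               # they drop out of the pool

# The remaining pool is now mostly high-risk, so the break-even premium jumps.
pooled = (low_risk_cost * low_risk_n + high_risk_cost * high_risk_n) / (low_risk_n + high_risk_n)
print(f"Premium after low-risk customers leave: {pooled:.0f}")  # 2000
```

The thresholds and costs here are invented; the point is only the mechanism: uniform pricing drives away the cheapest-to-insure customers first, which pushes the pooled price up further.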
Who's "we"? An outcome of EU gender equality law was that car insurance premiums may no longer depend on gender even though it is correlated with risk.
> A mere photo can be used [to] guess political preferences fairly well
Really? Could you provide me with a link? It seems counter-intuitive. I look pretty much like my barber, but he's right-wing tradcon and I'm oldskool left-wing.
It’s fairly straightforward to guess someone’s age, gender, race and maybe some cultural markers from a photograph.
Those demographic factors are correlated with people’s political views: old white men tend to be Republican, folks with rainbow hair tend to skew liberal, etc.
On a group level, this seems to let you predict political views, but there's no mechanism that lets it work for an individual.
"The highest predictive power was afforded by head orientation (58%), followed by emotional expression (57%). Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust. "
The system isn't telling you anything about the person themselves; it's picking up on artifacts about how different subcultures present themselves in a public profile.
It's your speculation that the underlying reason these features work is because individuals are adopting the behavioral norms of a subculture. This may or may not be the case.
Also, some dimensions of personality are correlated with political belief, so even if these features weren't useful beyond predicting political belief (which itself is an unsubstantiated/speculative claim), there should still be some transitive predictive utility there.
>Speaker recognition is the identification of a person from characteristics of voices. It is used to answer the question "Who is speaking?" The term voice recognition can refer to speaker recognition or speech recognition. Speaker verification (also called speaker authentication) contrasts with identification, and speaker recognition differs from speaker diarisation (recognizing when the same speaker is speaking).
>Recognizing the speaker can simplify the task of translating speech in systems that have been trained on specific voices or it can be used to authenticate or verify the identity of a speaker as part of a security process. Speaker recognition has a history dating back some four decades as of 2019 and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy and learned behavioral patterns.
It's actually been used in criminal cases:
>Speaker recognition may also be used in criminal investigations, such as those of the 2014 executions of, amongst others, James Foley and Steven Sotloff.
>An investigation is under way to establish whether the man dubbed Jihadi John is behind a second murder after a British-accented man was shown in the video depicting the killing of Steven Sotloff.
>Security sources said that although there were similarities between the voice on the film that emerged on Tuesday and that depicting the murder of James Foley a fortnight ago, the figure is largely hidden in black clothing.
>A man with a British accent was seen in an Islamic State video posted last month in which the American journalist James Foley was beheaded. He was dubbed Jihadi John after one former hostage spoke about three Britons, collectively know as the Beatles, who were among their captors, one of whom went by the name of John. In both videos, the speaker has what appears to be a London accent.
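As a very crude illustration of the "acoustic features of speech" idea quoted above (real forensic speaker recognition uses far more sophisticated speaker embeddings and statistical models, so treat this only as a sketch; the file names are hypothetical):

```python
import numpy as np
import librosa

def mfcc_signature(path: str) -> np.ndarray:
    """Average MFCCs over time to get a rough per-recording 'signature'."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # shape: (20, n_frames)
    return mfcc.mean(axis=1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical files: a clip of a known speaker and an unknown clip.
known = mfcc_signature("known_speaker.wav")
unknown = mfcc_signature("unknown_clip.wav")

# Higher similarity *suggests* (but in no way proves) the same speaker.
print(f"similarity: {cosine_similarity(known, unknown):.3f}")
```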
Some theory of social identity builds a construct like this: the more socially important (for whatever reasons) the person is, the more detail and currency there is in the ID or profile, by many measures. It would apply here like this: a lot of detail in identifying a television personality going through your security gates, and as a side effect a lot of pressure on a person who happens to look a lot like that personality; but ordinary people of ordinary means would have both less detail overall to ID them, and more errors in that, overlapping with others.
Thereby you would get many effects, like people who fit certain demographic and cultural slots at a given place and time getting a lot of false positives through no fault of most of them. Other examples are possible...
For anyone else not sure what phrenology is, the article says:
> In the early 19th century, some scientists became convinced that they could predict someone's personality and behaviour based simply on the shape of their head.
> Known as phrenology, this pseudo-science accelerated notions of racism, intellectual superiority, and caused many to suffer just because of what they looked like—some people were even imprisoned because the contours of their skulls suggested "criminality."
The problem with phrenology as a policy-making tool isn't that it's largely bullshit (though it was), but that it was, on its principles, unfair. The same goes for this tech. Accuracy isn't the issue, even if this one's also largely pseudoscience.
I'd use a slightly different metric for unfairness: it's unfair to use factors that people have no control over to make predictions about things that they do have control over.
If we're running airport security - putting aside the security theatre aspect - do we really want to search a 90 year old female pensioner as often as a 21 year old man, when the former is significantly (as in, 100x thereabouts) less likely to pose a threat to a flight? Isn't that just a waste of resources?
Or consider health insurance. Do we really want young people paying the same premiums as 80 year olds?
Perhaps you think there's a negative societal byproduct of making a certain group of people feel marginalized/targeted. Does wanting to reduce this justify clear resource wastage, such as in the airport security example above?
IMO the main issue is that you're punishing (or at least inconveniencing) the person who's predicted to be bad. The 21 year old man did nothing wrong by being a 21 year old man, and just because he's super suspicious doesn't mean he's actually dangerous. And you have to imagine what happens when you are the one who seems super suspicious.
Similarly, 80 year olds and especially people with pre-existing conditions, they did nothing wrong, so it's not fair to make them pay more.
Possible solutions are 1) monitor the suspicious individuals without inconveniencing them (e.g. monitor the 21-year-old via a security camera), 2) distribute the inconvenience when possible (e.g. raise everyone's premiums a small amount to cover those who are more likely to be sick), or 3) offer some sort of compensation to people who have to be targeted.
Granting the security theater, if your searches aren't random you introduce a trivial way to bypass the searches. A bad actor simply has to con the contraband onto an unsuspecting low-risk passenger to convert his chances to a much higher success rate.
With regard to health insurance, we should properly note that health insurance doesn't exist. It is not insurance but some kind of health care subscription. I am of course speaking as a US citizen. A more useful analogy might be life insurance.
The negative societal byproduct of marginalizing certain groups is clearly not justified; it is the resource wastage.
"A bad actor simply has to con the contraband onto an unsuspecting low-risk passenger"
While it's true that this will happen, this isn't a valid argument for exactly equal search frequency across demographics.
Lone wolf attacks are almost exclusively young or middle aged men. That isn't changing in response to venue or airport search policy.
Young people might be easier to convert into drug smugglers due to financial need, higher risk appetite, less family responsibilities and a higher chance of having gang affiliations. Even if smugglers recruit more old people in response to a skew towards searching young people more frequently, such a policy may still maximize the number of successful searches.
>This isn't a valid argument for exactly equal resource allocation across demographics.
It is if your goal is to intercept contraband. Previous contraband seizure statistics will be skewed by survivor bias. Currently, in airports, everybody and every bag passes through an x-ray or metal detector. Airport or venue search policy is unable to affect demographics.
This kind of reasoning results in a spiral of paranoia and ineffective solutions (security theater) which is much worse than a waste of time and resources. This collective suspicion breeds the kind of resentment and distrust which I would argue contribute to the very problems they are purported to solve.
Some airports select a small subset of people and swab their bag manually for drug residue. If people are sampled randomly for swabbing, yet drug smugglers have a demographic skew to e.g. young people, then you've got a suboptimal policy in terms of maximizing the proportion of all searches that find illicit drugs.
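A quick back-of-the-envelope version of that claim, with invented prevalence numbers (this only addresses expected hit rate, not the fairness or exploitability concerns raised elsewhere in the thread):

```python
# Invented prevalences: fraction of each group carrying contraband.
groups = {
    "young":  {"share_of_passengers": 0.3, "prevalence": 0.020},
    "other":  {"share_of_passengers": 0.7, "prevalence": 0.002},
}
searches = 1000  # fixed search budget

def expected_hits(allocation: dict) -> float:
    """allocation maps group -> fraction of the search budget spent on it."""
    return sum(searches * allocation[g] * groups[g]["prevalence"] for g in groups)

# Random sampling: searches land on groups in proportion to passenger share.
random_alloc = {g: groups[g]["share_of_passengers"] for g in groups}
# Skewed sampling: most of the budget goes to the higher-prevalence group.
skewed_alloc = {"young": 0.8, "other": 0.2}

print(expected_hits(random_alloc))  # 0.3*0.02*1000 + 0.7*0.002*1000 = 7.4
print(expected_hits(skewed_alloc))  # 0.8*0.02*1000 + 0.2*0.002*1000 = 16.4
```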
> Previous contraband seizure statistics will be skewed by survivor bias.
What do you mean by this? Are you suggesting that it's difficult to design an appropriate skew in search frequencies due to a lack of high quality data about the demographics of drug smugglers?
> This kind of reasoning results in ... ineffective solutions (security theater)
I disagree. Peak ineffectiveness and theater is searching 90 year-old women as often as 21 year-old men, when nearly 100% of suicide attacks and mass shootings are perpetrated by young men and 0% by old women.
Searching 21 year-old men may still be theater to an extent, but it is less theatrical than searching 90 year-old women with equal frequency.
Your math is wrong. You cannot determine who to search in this way beforehand because you think a certain group might have more contraband or crime.
Aside from the trivial bypass you would introduce, that will additionally cause the survivor bias in your search statistics.
Adjusting some proportion of searches toward one group or another in this way ignores the types of people in the group. What works for NYC->Las Vegas will be absurd, according to your logic, when applied to NYC->Florida retirement communities. Is it drugs or bombs? The results will also look different in both scenarios. The correct solution is screening.
I have news for you about drug smuggling: it's the 21 year olds who are caught. You should probably just let them have their MDMA.
It is all destined to be absurd, however; in the movie theater hypothetical, searching some percentage of 21 year old men is uselessly ineffective. What is the ratio of movie theater attendees to movie theater shooters? It's on the order of homeopathy. The logic and methodology in this is so tragically flawed.
> This kind of reasoning results in ... ineffective solutions (security theater)
> Does wanting to reduce this justify clear resource wastage, such as in the airport security example above?
Hold on, your argument can't put aside the "security theatre aspect" while also using it as justification for policies. If it really is theatre, this wouldn't be doing anything more effectively except getting more people invested in the show going on.
Now your argument hinges on it not being security theater. You have to make the case that airport security is so vital and effective that we need to profile people to get that slight edge.
Recall the Pan Am flight that exploded over Scotland. Explosives were concealed in a radio.
Who is to say that grandma isn’t packing such a device?
The threat of a 9/11 style hijacking is largely neutralized by reinforced cockpit doors. But any person can be a mule, knowingly or unknowingly, for contraband of various types.
> do we really want to search a 90 year old female pensioner as often as a 21 year old man, when the former is significantly (as in, 100x thereabouts) less likely to pose a threat to a flight?
Terrorist organizations aren't stupid, and they have members of all ages. If we made it our policy not to search 90-year-olds anymore, then they'd suddenly end up committing most of the attacks on flights.
> Or consider health insurance. Do we really want young people paying the same premiums as 80 year olds?
In this example, what are you saying that the 80-year-olds have control over?
"If we made it our policy not to search 90-year-olds anymore"
Even if that isn't the optimal policy (and I didn't suggest that it was) due to its exploitability, it doesn't then follow that having the same search probability for 90 year old women and 21 year old men is optimal.
Perhaps searching young men with a 3x higher probability compared to old women is optimal.
What we can say with confidence, when considering only the immediate objective of flight safety (or venue security), is that an exactly equal search probability is definitely not optimal, due to nearly all instances of lone shooters and so on being young or middle aged men.
"If we made it our policy not to search 90-year-olds anymore, then they'd suddenly end up committing most of the attacks on flights."
And they'd skew the statistics accordingly, so that the policy would include them again. That's how it already works, but based on the intuition of particular security workers and/or the general guidelines they receive.
> Or consider health insurance. Do we really want young people paying the same premiums as 80 year olds?
Health insurance is a perfect example of intentionally restricting factors that may be considered to make a more fair system. Forget age. Most systems don't require a sick person to pay more than a healthy person even if the sick person is nearly guaranteed to take out more than they contribute.
> Perhaps you think there's a negative societal byproduct of making a certain group of people feel marginalized/targeted. Does wanting to reduce this justify clear resource wastage, such as in the airport security example above?
The extra security is the resource wastage. The flip side of "we can do less for this group" is "we can do more for that group".
Also, your comment here heavily implies you think that resource waste is obviously more important than making large groups of people feel marginalized. To be frank, even through the most sociopathic economic lens, I do think that exhibiting behaviors that perpetuate racist/religious/political/ageist/sexist/sexual-orientation-biased ideas has significantly higher negative externality costs to society. Especially those that implicitly convey that an outsider group is dangerous.
Why does having no control over a factor have anything to do with it? Is it fair to predict criminality based on haircut but not forehead size? In both cases, you are using a non-causal factor to make a conclusion based on some generalization you've seen in a population. That's what's not fair.
Suppose we lived in an alternate universe where personality characteristics could be inferred (with statistical confidence) from skull bumps (i.e. a world where phrenology worked): in such a world, you could argue that it would be unfair to disadvantage an individual based on his specific skull bumps. Those bumps aren't dispositive in individual cases. At the same time, it'd be wrong (again, in our pretend universe) to ban noticing the statistical connection between skull bump and personality, because that'd just be a fact about how our alternate universe worked, and censoring facts is always wrong.
Back in our world: I don't believe in demonizing technologies because of what people might do with them. Facial recognition works, at least for identification of individuals. I cannot get on board with mandating denial of this fact out of fear of bad policy based on it. It is never, ever, under any circumstances whatsoever, acceptable to ban people from noticing true facts about how the world works.
> I don't believe in demonizing technologies because of what people might do with them.
What about demonizing technologies because of what people do with them, all the time, in positions of power?
> Facial recognition works, at least for identification of individuals. I cannot get on board with mandating denial of this fact for the sake of fear of bad policy based on this fact.
That's not really what this article is about, the article is just poorly titled.
> It is never, ever, under any circumstances whatsoever, acceptable to ban people from noticing true facts about how the world works.
That's fine. In this case you should have no issue that I've noticed people who say stuff like this tend to harbor supremacist ideals.
I’m struggling a bit with the layout of the arguments in this article. In particular, it seems to conflate several related, but different things.
- Face/voice recognition, which is just statistical modeling and threshold testing at the end of the day.
- (Mis)use of face/voice recognition to test for things that these technologies aren’t going to accurately measure
- Equating the former to phrenology, when it would be more accurate to equate the latter to phrenology.
Overall, I agree with the basic premise that face/voice recognition being misused to apply prejudice is a valid concern.
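For what it's worth, the "statistical modeling and threshold testing" in the first bullet really is this simple at the decision end; a toy sketch, assuming some upstream model has already produced feature embeddings:

```python
import numpy as np

def same_person(embedding_a: np.ndarray, embedding_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Identification reduces to: is the distance between two feature
    vectors below a tuned threshold? The threshold trades false accepts
    against false rejects; it says nothing about personality."""
    a = embedding_a / np.linalg.norm(embedding_a)
    b = embedding_b / np.linalg.norm(embedding_b)
    return float(np.linalg.norm(a - b)) < threshold

# Hypothetical embeddings produced upstream by some face/voice model.
enrolled = np.random.rand(128)
probe = enrolled + np.random.normal(scale=0.01, size=128)
print(same_person(enrolled, probe))   # likely True for a near-identical probe
```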
>"However, he warns that attempts to assess someone's personality from their voice—or even their facial expressions—is fraught with ethical and accuracy issues. "Even though there are physiological signals, it's quite possible that the way we interpret them is very culturally biased," he said."
A relevant article on this which talks about some of the evidence for cultural specific interpretations balanced against existing evidence for their universality: Do Feelings Look the Same in Every Human Face? (https://greatergood.berkeley.edu/article/item/do_feelings_lo...)
A decade ago I left a startup I helped found that did deposition video editing because (among other things) our timestamp provider was offering micro tremor stress (lie) detection in their API, and people were VERY interested in using it. Even if you buy into that bullshit, in this instance the source audio was 32kbps mono. There was nothing keeping us from underlining text red, yellow, and green. It wouldn't be admissible as evidence (like a polygraph), but you best believe that shit would be used for awful shit. I left; they never did that, but I bet it's going on somewhere. I get the desire for tools like this, but it's really just a way around doing better work to determine factors. The only thing it's really for is coercion.
They don't actually mention any company using this for personality assessment. That would be a useful detail to include. They mention Clearview AI, but only in the context of facial recognition, not personality.
> At the self-described “heart” of the company’s monitoring software is Monitor AI, a “powerful artificial intelligence engine” that collects facial detection data [...] to identify “patterns and anomalies associated with cheating.”
> "... people who have some sort of facial disfigurement have special challenges; they might get flagged because their face has an unexpected geometry.”
This is literally phrenology with a bunch of linear algebra instead of calipers.
Wow, that's awful. There might be some very general common characteristics of body language and facial movements that very roughly correlate to certain general situations (panic comes to mind), but those are still going to vary incredibly widely over the population based on so many factors, not the least of which are culture, peer groups, and general personal quirks. A person with a history of panic attacks, for example, may have learned to cope well enough not to completely lose it in a meeting at work. The list of other examples would be endless.
And how do you get a reliable data set for this to begin with? Most schools allow for a formal appeal or inquiry process precisely because anything but very blatant cheating can be a murky area. Even if you had videos and a tagged set of the outcomes of inquiries like that, you still wouldn't have intercoder ratings for reliability, and therefore no verified data set. I know some methods that can be used against untagged data, but at the very least you actually have to know whether there was cheating or not.
And talk about black box-- this isn't even something that human review of specific incidents could validate. If a human couldn't reliably come to the correct conclusion based on available data, an AI using pattern matching with a questionable data set sure won't.
When the article mentioned companies using it, I figured it was just early stages of testing it, or used for some low-stakes marketing/advertisement targeting. That would still be bad, but this is worlds worse. Life-ruiningly worse.
My guess is they're trying to classify behaviors like glancing off in some direction repeatedly to look at a cheat sheet or a phone, the same things human proctors watch for.
Presumably the system can fail to accurately estimate the gaze direction for some people because they are too different from the design assumptions made. I don't really think that's the same as phrenology. It's more akin to BMI being a poor estimate of obesity for bodybuilders.
Facial recognition works pretty well, as anyone who works in the field can tell you. So it's already different from phrenology. It's even more different from phrenology in that facial and voice recognition are search and surveying tools, not conclusion or judgement tools.
The first portion of the audio (comprising all I heard) is just some guy kvetching about Alexa to a clueless but still pompous interviewer. As for "biometrics" per se, it seems to be a marginally small addition to all the other information channels that are available to the surveillance industry. Not momentous in and of itself, and the concern seems to stem from questionable categorical distinctions (e.g. what you say vs. how you say it; wrong for others to consider the latter, not the former). And, regarding phrenology: its previous incarnation was discredited as a pseudoscience on the basis of its vagueness and potential inaccuracy. Now that it can be implemented using deep learning, with its predictions' accuracy quantified and vetted empirically, that criticism no longer applies.
> I like to call it inspector gadgetarianism. Cops LOVE this shit. They do all sorts of practices like denying that you can easily pass lie detection testing [...] and that time the US military got those ridiculous fake bomb detectors that did literally nothing.
It's a miscarriage of justice but that's what you get when you have an organization that does stuff like not even letting you in if they think you're too smart https://abcnews.go.com/US/court-oks-barring-high-iqs-cops/st...