People don’t think as much as they should about the possibilities of a machine that could tell whether people were lying. Partly this is because many people erroneously believe that such machines already exist. Partly it is because fictional depictions of universes in which lying is impossible are unpopular, lying being of key importance to many tropes in fiction. We should think about it, though, because there’s every chance that rapidly improving machine learning and brain-scanning capabilities will meet in the middle sometime in the next four decades or so to give us a genuine lie detector.
If we had a machine that could determine whether people were lying or telling the truth right now, I can only assume it would be a disaster. The supreme court would probably rule that employers could use it on their employees, or some such awful thing. There would be no hope of stopping the three-letter agencies from using it. It would possibly be what I call a critical social-technological point: locking us into a hierarchical society with little possibility of resistance. Awful to contemplate.
However, I can’t help but think that a different kind of society, a kinder, wiser society (not infinitely kind or wise, but kinder and wiser than ours), would gain enormously from the existence of a working lie detector.
Asymmetric information: markets and bureaucrats
As Stiglitz has shown, one of the most common problems in a variety of markets is asymmetric information. Huge problems with asymmetric information affect sectors as diverse as health, education, real estate and second-hand car sales. How should a consumer choose between hospitals without being a doctor themselves? How many second-hand cars are grossly over- or undervalued? A way of guaranteeing honesty could fix this.
On the other hand, if our hypothetical kinder and wiser society is using non-market-based solutions, either for a particular sector or in general, the assurance of truthfulness would still be invaluable. Honesty in high-level decision-making and in interactions between bureaucrats on topics like quotas, targets and safety standards would transform governance.
Crime and the possibility of redemption
One of the reasons (publicly acknowledged) redemption is so rare in our society is that the recognition of redemption requires trust, and the criminal is generally seen as untrustworthy. A mechanism that would allow us to verify the truthfulness of a criminal’s claim to remorse would be invaluable. Unfortunately, being genuinely remorseful is not always enough to guarantee you won’t do it again, but it’s a start. Maybe this would also shake society’s obsession with retribution.
If you’re like me and you suspect the false conviction rate is a lot higher than that comfortable phrase “beyond reasonable doubt” would imply, the reduction in false convictions will also be welcome. The true conviction rate would also rise, as the only way to raise questions about the credibility of a witness who had undergone a screening would be to claim they were mistaken, which simply wouldn’t be plausible in a lot of cases. A lot of types of crime that are intrinsically difficult to prove, e.g. abuse by the police, would be brought further into the open. One downside here is that certain kinds of perpetrators might decide their only option was to kill victims and potential witnesses.
The possibility of an honest democracy
We would get politicians who truly, really believe in what they say. I can see that there is a cynic’s case to be made that this is a bad thing, but I have enough faith in the voters to think not. Deliberately cynical manoeuvres like gerrymandering would be impossible, or would carry enormous political costs.
A lot of people will respond to the above claim about gerrymandering and other cynical manoeuvres with “oh, but we already know they do these things and it has no effect, so a lie detector wouldn’t change anything”. I disagree. There is a fundamental difference between knowledge founded on inference and knowledge founded on admission or a leak or something like that. It may not be rational, but humans treat the two utterly differently. This is why there is a huge difference between saying something and dog-whistling it.
Interpersonal and social honesty as a building block for interpersonal and social empathy
This may sound bizarre or even paranoid, but there’s something quite frightening about the privacy of experience. Part of what forms that heavy barrier is the reality that someone could always be lying about their experiences. Even if the possibility seems remote, it’s there, and unlike other kinds of statements, there’s no way to check whether someone is honestly conveying what they feel. On a personal level, knowing that you were hearing the whole story from someone would allow a new depth of emotional intimacy.
At the social level, however, is where this effect would be strongest. People often talk about what it’s like to be in such and such social position, and such talk can literally change the world. One of the mental barriers people put up against this is telling themselves that these people are lying or not telling the full truth. A working lie detector could do away with this layer of protective armour against empathy, though of course others would remain. Much richer social conversations might be possible when the facts of experience weren’t under dispute.
4 thoughts on “The good things a working lie-detector could do”
Fascinating thought experiment. My pet theory is that the arms race between lie-detectors and liars has been part and parcel of the evolution of humans over the past million years. So the cynical part of me thinks that new forms of dishonesty will emerge to circumvent the best-in-class lie detection. For example, there could be regimes where lie-detectors aren’t allowed, where all the rich people will congregate, similar to tax havens. From those havens, they’ll project power onto the rest of the world. Lie-detection treaties between regimes could wind up being huge political chess pieces.
A little note on the tech: my understanding of ML is that at best it will perform as well as human lie detection. However, there are probably no bounds on brain scans.
Another avenue for subversion is how people temporarily but genuinely believe something, only to switch positions later. (Casanovas are a good example of convincingly demonstrating “true love” long enough to bed someone.) On second thought, this may be the biggest issue with honesty, and is probably the cutting edge in the arms race, i.e. the temporary “true believer”.
The real latest tech in the arms race is the honest hypocrite, who doesn’t need to change their opinion: the politician who genuinely believes they’re working for the best interests of their constituents, but whenever the best is in question, or two groups of constituents are in conflict, invariably picks the choice that benefits them the most, for example. Asking about actions rather than intent, with questions like “have you ever taken a bribe?”, would be somewhat better, but I predict we’d be amazed at the number and variety of transactions which people don’t *think of* as bribes. An example that’s been observed even without lie detectors: very few people will answer “yes” to “have you ever raped anyone?” on anonymous questionnaires, yet several times as many will answer “yes” to questions like “have you ever had sex with an unconscious person?” on the very same questionnaires.
I could imagine the use of a working lie-detector having some fairly horrifying effects on human rationality. Oppressive regimes could use it to crush any genuine idealists working against the system, but someplace like the old Soviet Union would be unable to do anything about bureaucrats who simply follow their incentives in making bad choices. I could imagine such a country executing more and more potential traitors as it becomes obvious to everyone with eyes that the regime is terrible for everyone, until finally it can’t bear any more and is overthrown… replaced by a new government made up entirely of the most purebred hypocrites, who changed their genuine loyalties at the very last minute, without a single person competent enough to actually judge the state of affairs and figure out what’s best for the country.
Democracies with robust norms of free speech would probably be safer, and if they passed laws limiting mandatory lie-detector tests to set circumstances, they could even see a net benefit from the removal of corrupt politicians. I could imagine a few of the most intelligent and clear-sighted politicians being caught up if they had committed a few minor corrupt acts to get things done, while significantly worse politicians who successfully blinkered themselves to the consequences of their own actions remained in office, but I’d be surprised if the former group got too far in our system anyway.
I agree this is all absolutely possible. This thought experiment is, to be honest, more of a fantasy about how a lie detector might help if things were just a bit better than they are than a prediction of how things would really go.
There’s an interesting (somewhat dated) sci-fi novel about this exact scenario: ‘The Truth Machine’ by James Halperin.