Your Face Says You're a Criminal

December 7, 2016

Could a computer one day be the judge in a courtroom?

A new machine learning paper by two Chinese researchers suggests the possibility, though their study is highly controversial.

Xiaolin Wu and Xi Zhang, professors at Shanghai Jiao Tong University in China, investigated whether machines could determine who is a criminal by analyzing people’s facial features.

In their paper, Automated Inference on Criminality using Face Images, they state that their investigation was successful, and that they have even discovered a new law governing “the normality for faces of non-criminals.”

How their research was conducted

Wu and Zhang fed facial images of 1,856 people into a machine learning algorithm. The images were not mugshots, though half of the people were convicted criminals, and all were Chinese males “between the ages of 18 and 55, with no facial hair, scars, or other markings.”

They then tested whether analysis of facial features could infer criminality, controlling for race, gender, age, and facial expression.

According to the researchers, the experiment was a success. “We have demonstrated that via supervised machine learning, data-driven face classifiers are able to make reliable inference on criminality,” they concluded.

In other words, they claim certain facial patterns can indicate who is a criminal and who is not.

Isn’t this controversial?

Yes. And many have taken to the Internet to express their views. Twitter user Stephen Mayhew wrote, “This paper is the exact reason why we need to think about ethics in AI.”

The problem lies in the human element of machine learning. For a computer to perform a task like this, it must first be taught by humans: in this study, each training image had to be labeled “criminal” or “non-criminal” by people, and the algorithm learns whatever patterns distinguish those labeled groups. That opens up a can of worms in terms of bias, because the classifier cannot tell whether a pattern reflects criminality or merely how the labeled photos were collected.
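To make this concrete, here is a toy sketch of supervised classification. It is hypothetical and deliberately simple (a nearest-centroid classifier on made-up two-number “feature vectors,” not the authors’ actual pipeline or data): the point is that the model faithfully reproduces whatever correlation the human-supplied labels happen to carry.

```python
# Toy supervised classifier (hypothetical illustration, NOT the study's method).
# Each sample is a made-up feature vector standing in for measured face features.

def train_centroids(samples, labels):
    """'Training' here just averages the feature vectors for each label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(x, centroids[label]))
    return min(centroids, key=dist)

# Fabricated training data: the labels happen to correlate with the features.
# The model will learn that correlation regardless of whether it has anything
# to do with criminality -- this is exactly where labeling bias sneaks in.
X = [[0.2, 1.0], [0.3, 0.9], [0.8, 0.1], [0.9, 0.2]]
y = ["non-criminal", "non-criminal", "criminal", "criminal"]

model = train_centroids(X, y)
print(predict(model, [0.25, 0.95]))  # prints "non-criminal"
print(predict(model, [0.85, 0.15]))  # prints "criminal"
```

A new face is simply assigned to whichever labeled group it most resembles, so any sampling artifact in the labeled photos becomes part of the “criminal” pattern.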

The researchers were adamant that human bias was not a factor in the study, because “any subtle human factors” were excluded from the analysis.

Another problem is misclassification. The Chinese duo did admit in the paper that the algorithm made mistakes, identifying innocent people as criminals and vice versa. This raises the question of validity. No matter how accurate the numbers are at separating the lawbreaking from the law-abiding, shouldn’t character be judged by humans? What if the machine makes a detrimental error in a criminal case?

But we’re not at that point…yet, at least. Whether this research will ever be taken seriously is still up in the air. As one critic of the study pointed out on Hacker News, “I agree it’s an entirely valid area of study…but to do it you need experts in criminology, physiology and machine learning, not just a couple of people who can follow the Keras instructions for how to use a neural net for classification.”

For now, this is just the beginning. As our appetite for technology grows and machines get better at learning to think like us, there could come a day when computers judge a criminal by their face…with 100% accuracy.

Disclaimer: The above is solely intended for informational purposes and in no way constitutes legal advice or specific recommendations.