DoD AI Drones That Can Recognize Faces Create an Ethical Minefield

The U.S. is developing artificially intelligent military drones that can use facial recognition technology to identify targets, and many people have raised concerns about the ethics involved.

New Scientist reported that the U.S. Department of Defense (DoD) has an $800,000 contract with Seattle-based company RealNetworks to create these autonomous drones. The tech will be able to use machine learning to identify faces without human input.

According to the contract, these drones will be used by special operations abroad and for gathering intelligence.

A file photo of an MQ-9 Reaper remotely piloted aircraft (RPA) during a training mission at Creech Air Force Base in Indian Springs, Nevada. Artificially intelligent military drones that can use facial recognition technology to... Getty

Nicholas Davis, an industry professor of emerging technology at the University of Technology Sydney, told Newsweek: "There are innumerable ethical implications, from the way such devices might redistribute power or threaten groups within a society, to the ways in which they threaten established international humanitarian law in conflict zones."

Tracking people of interest has been going on for a very long time; artificial intelligence (AI) facial recognition simply adds an extra layer of technology.

Facial recognition is already used in many countries, including China and the United Arab Emirates. Others, including Libya, are also developing drones that can detect human faces, according to a U.N. report.

There are major concerns that facial recognition technology, combined with AI, could be used to target specific people, perhaps before they have even committed a crime.

Mike Ryder, a marketing lecturer at Lancaster University in the U.K. who researches robotics, AI, war, ethics and drones, told Newsweek: "The question here really is not so much around the use of technology to track an individual, but rather the decision-making process that lists them as a 'person of note' to track in the first place. Why are they being targeted? What evidence is there that they pose a potential threat?

"This is really the key ethical dilemma at the heart of modern drone operations overseas as the U.S. and its allies use drones to pre-emptively strike (i.e. assassinate, kill or murder) targets that are perceived to pose a potential threat at some point in the future," Ryder said.

"It's the pre-emptive nature of these operations that is the really problematic part here. These strikes work on the assumption that the person being monitored has the potential to carry out a terror attack in the future, even though they may never have engaged directly in terror activities up to that point."

There are also issues with the facial recognition technology itself, whose limited sophistication could lead to entirely the wrong person being targeted.

Stock image of a woman using facial recognition to unlock her phone. There are concerns that the technology, when used in drones, could target the wrong person. iStock / Getty Images Plus

"Facial recognition technology (FRT) remains experimental technology," Edward Santow, an industry professor of responsible technology at the University of Technology Sydney, told Newsweek. "Its accuracy can be poor when seeking to identify people from a large database, or in imperfect lighting and other such conditions–as would be almost always the case in a conflict situation. There is a serious risk that, using FRT to target lethal autonomous weapons, results in errors causing unlawful death and injury."

The possibility of errors like these may be increased when the targeted person is not white.

"We know that, despite considerable efforts, such facial recognition technology remains biased," Toby Walsh, a professor of AI at the University of New South Wales in Sydney, told Newsweek. "It works less well on people of color. When your smartphone doesn't open because the facial recognition fails, the harms are slight. But errors with such drones would be fatal."

This also raises the issue of AI making decisions without a human having the final say.

"Do we want autonomous machines making (what should be) ethical decisions that take the lives of human beings?" Lily Hamourtziadou, a senior lecturer in criminology and security studies at Birmingham City University in the U.K., told Newsweek.

"Remote killing in many ways is easy killing: a kind of virtual, video-game killing. This in itself is morally problematic. Moreover, when a killing is attributed to a machine, there is lack of accountability and justice, and violence is used with impunity," Hamourtziadou said.

Despite its moral drawbacks, the technology does offer some benefits for waging war more effectively, which may mean that fewer innocent people are killed in the process.

"Unmanned systems were developed primarily to preserve military lives, to protect 'our army', but arguably they can save lives on both sides, mitigating actual physical harm, including civilian," Hamourtziadou said.

"As a method of remote warfare, they provide psychological and moral buffering to those who operate them, as they are distanced from the killings they carry out. Drones are said to be effective, powerful weapons that help ensure control, survival and security. The armed 'hunter-killer' drone is said to be a major step forward in humanitarian technology. Nobody dies, except the enemy," she explained.

"Drone warfare enables us to avoid total war, like the one we are seeing currently in Ukraine," Hamourtziadou said.

Other benefits she described include drones' ability to stay in flight for up to 20 hours, twice as long as a human can operate before needing sleep or food. Additionally, unmanned systems don't have the same limitations as the human body, so they can operate in dangerous environments, such as spaces filled with biological or chemical weapons.

Either way, this burgeoning technology is a sign of the times, as both AI and facial recognition are becoming increasingly common in our daily lives.

"What I'm more concerned about here is the use of facial-recognition tech more broadly, and how it can be used and shared by the likes of Google and Facebook to build up a picture of our activities based on photographic record," Ryder said.

"We already know that our smartphones and apps are tracking us as we go about our daily lives. However, with the use of facial recognition, this level of tracking goes up yet another level as these companies can build more accurate profiles about us. They can also potentially use our image data to build better profiles of other people who may appear in our photos as well," he said.

Do you have a tip on a science story that Newsweek should be covering? Do you have a question about facial recognition? Let us know via science@newsweek.com


About the writer


Jess Thomson is a Newsweek Science Reporter based in London, U.K. Her focus is reporting on science, technology and healthcare.
