TSA faces ethical limits in use of AI

Artificial intelligence has become a disruptive force in society. Terms such as machine learning, deep learning and neural networks have become commonplace in mainstream media, eliciting visions of innovations that have the potential to change our lives.

At its core, AI attempts to mimic the capabilities of the human brain. Whether it's computer vision, which focuses on how computers recognize and interpret images, or natural language processing, which focuses on how computers recognize and interpret written text, the list of possibilities for AI use continues to grow.

Take, for example, aviation security. Many people will pass through security checkpoints at airports while traveling during the holiday season. The Transportation Security Administration will process as many as 2.5 million people at airport checkpoints on some of the peak holiday travel days.

The TSA’s responsibility is to protect the nation’s air system from malicious activity. Airport security involves many layers. Screening, for instance, uses various technologies to meet several objectives, such as validating a person’s identity and detecting any items that pose a threat, which a traveler may attempt to bring onto a flight.

The output of screening devices must be read and interpreted by TSA officers, and humans make mistakes. As such, the TSA is working to use AI to improve the detection process and reduce the impact of human error. However, the hope for AI in airport security is more far-reaching. Employing AI to determine intent from behavior, appearance and speech could have enormous practical impact and benefits. AI systems that could measure human intent would simplify airport security operations, effectively reducing the need for threat item detection.

The TSA already applies a version of this risk-based approach by offering expedited screening lanes to travelers who enroll in TSA PreCheck.

With such a system, screening would be limited to a small subset of travelers, with most people passing through security checkpoints with little or no physical screening.

There are several challenges in designing and implementing such an AI system for aviation security. The first is creating the models and algorithms that process data and produce the required insights. Another is how AI systems make decisions, and the inevitable false alarms and false clearances that come with them. Even the most skilled and knowledgeable humans make such errors. No AI system will be completely immune to them, though in that case the source of the errors will be the design and implementation of the models and algorithms.

A third issue is privacy. If an AI system can capture traveler intent, does that cross a line? Would it be considered an invasion of personal space, even in service of a positive end? That is why the TSA PreCheck program is voluntary, not mandatory: Participants must subject themselves to background vetting to qualify.

Perhaps most critically, the ethics surrounding the design of AI systems must be addressed. How an AI system incorporates ethics in its creation and implementation affects how it is received, perceived and adopted.

This challenge perhaps provides the greatest headwinds for AI advancements in our nation.

We are not likely to find an AI system in place at airports anytime soon that will measure human intent. However, the thought that it may be possible is what makes AI the disrupter and game-changer that demands everyone’s attention.

Indeed, the AI genie is out of the bottle, and where it takes us is a story that continues to be written.
