Artificial intelligence is being used to spot potentially dangerous travellers and detect smuggling – but can this technology be trusted?

International travel is increasing at a rapid rate. A record 1.4 billion tourists visited other countries in 2017, and that number is expected to reach 1.8 billion by 2030. This swelling number of globetrotters also means growing queues at passport control. The vast majority of people detained by border agents don’t present a threat, which slows down the already lengthy process of crossing an international border. Border crossing agents have a tough job: they have to make hundreds of judgement calls every hour about whether someone should be allowed to enter a country. With the looming threat of terrorist attacks, people trafficking and smuggling, there is a lot of pressure to get it right.

Although they may have some additional intelligence on their computer system, when border guards examine most travellers they’re relying on their own hunches and experience. And for many border control officers, that experience may not amount to much – it’s a position with a high turnover rate; border guards in the US quit at double the rate of officers in other law enforcement roles.

Anyone who has been stopped from entering a country at immigration, even briefly, will know what an upsetting and stressful experience it can be. Staring into the hard eyes of a border guard as they examine your passport is always a nerve-wracking experience. But there could soon be another, unseen border agent with a hand in these decisions – one that cannot be reasoned with or softened with a smile.

A number of governments around the world are now funding research on systems powered by artificial intelligence that can help to assess travellers at border crossings.

One of these is being developed by US technology firm Unisys, a company that began working with US Customs and Border Protection after the 9/11 terrorist attacks in 2001 to develop technology for identifying dangerous passengers long before they board a flight. Its threat assessment system, called LineSight, slurps up data about travellers from different government agencies and other sources to give each traveller a mathematical risk evaluation.

The company has since expanded the system’s capability to look for other types of travellers or cargo that might be of concern to border officials. John Kendall, director of the border and national security program at Unisys, uses an example of two fictional travellers to illustrate how LineSight works.

Romain and Sandra are ticketed passengers who have valid passports and valid visas. They would pass through most security systems unquestioned, but LineSight’s algorithm picks up something fishy about Romain’s travel patterns – she’s visited the country several times over the past few years with a number of children who had different last names, a pattern that predictive analytics associates with human trafficking.

“Romain also purchased her ticket using a credit card from a bank associated with a sex trafficking ring in Eastern Europe,” says Kendall. LineSight is able to obtain this information from the airline Romain is flying with and cross-check it with law enforcement databases.

“All of this information can be gathered and sent to a customs official before Romain and Sandra check in for their flight,” adds Kendall. “We collect data from multiple sources. Different governments collect different information, whether it’s from their own databases, from travel agencies. It’s not neat.”

The system can take a similar approach to analysing cargo shipments, helping to pull together relevant information that might identify potential cases of smuggling. The power of Unisys’s AI approach is its ability to ingest and assess a huge amount of data in a very short time – it takes just two seconds for LineSight to process all of the relevant data and complete a threat assessment.
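
Unisys has not published LineSight’s internals, so the following is only a minimal sketch of the general idea: pull whatever indicators exist for a traveller out of several independent feeds and surface them together as a single assessment. All the names, feeds and the two-flag risk tier here are illustrative assumptions, not the company’s method.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    traveller: str
    flags: list[str]

    @property
    def risk(self) -> str:
        # Simplistic tiering for illustration; LineSight's real scoring is not public.
        return "high" if len(self.flags) >= 2 else "low"

def gather_flags(traveller: str, sources: list[dict]) -> Assessment:
    """Pull every matching indicator for a traveller from every available feed."""
    flags = [f for source in sources for f in source.get(traveller, [])]
    return Assessment(traveller, flags)

# Hypothetical feeds standing in for airline, visa and law-enforcement databases.
airline_db = {"Romain": ["card_linked_to_flagged_bank"]}
history_db = {"Romain": ["repeat_travel_with_unrelated_minors"]}

assessment = gather_flags("Romain", [airline_db, history_db])
print(assessment.risk)  # "high" - both indicators surface before check-in
```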

But there are concerns about using AI to analyse data in this way. Algorithms trained to recognise patterns or behaviour with historic data sets can reflect the biases that exist in that information. Algorithms trained on data from the US legal system, for example, were found to replicate an unfair bias against black defendants, who were incorrectly identified as likely to reoffend at nearly twice the rate of white defendants. The algorithm was replicating the human bias that existed in the US justice system.

Erica Posey of the Brennan Center for Justice fears similar biases could creep into algorithms used to make immigration decisions. “Any predictive algorithm trained on existing data sets about who has been prevented from traveling in the past will almost certainly rely heavily on proxies to replicate past patterns,” she says.

According to Kendall, Unisys hopes to deal with this by allowing its algorithm to learn from its mistakes.

“If they stop somebody, and it turns out there was nothing wrong, that automatically updates the algorithm,” he says. “So every time we do an assessment the algorithm gets smarter. It’s not based on intuition, it’s not based on my bias – it’s based on the full population of travellers that come through.”
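
Kendall’s description suggests a feedback loop in which the outcome of every stop is folded back into the model. Below is a minimal sketch of that idea, assuming a simple per-indicator hit-rate tally – the data structures and update rule are illustrative guesses, since Unisys has not published how LineSight actually updates.

```python
from collections import defaultdict

# Hypothetical outcome log: for each risk indicator, how often did stopping a
# traveller on that indicator turn out to involve a genuine threat? This only
# sketches the idea of an assessment that "gets smarter" as outcomes accumulate.
outcomes = defaultdict(lambda: {"stops": 0, "genuine": 0})

def record_outcome(flags: list[str], was_genuine_threat: bool) -> None:
    """Log the result of a stop against every indicator that triggered it."""
    for flag in flags:
        outcomes[flag]["stops"] += 1
        outcomes[flag]["genuine"] += int(was_genuine_threat)

def hit_rate(flag: str) -> float | None:
    """Share of stops on this indicator that found a real threat."""
    o = outcomes[flag]
    return o["genuine"] / o["stops"] if o["stops"] else None

# A traveller flagged for an unusual payment card is stopped and then cleared:
record_outcome(["card_linked_to_flagged_bank"], was_genuine_threat=False)
print(hit_rate("card_linked_to_flagged_bank"))  # 0.0 - a weaker signal next time
```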

The company also says LineSight doesn’t assign one piece of data more weight than another, instead presenting all the relevant information to border and customs officers.

But there are other teams that are looking to go even further by allowing machines to make judgements about whether travellers can be trusted. Human border officers make decisions about this based on a person’s body language and the way they answer their questions. There are some who hope that artificial intelligence might be better at picking up signs of deception.

Aaron Elkins, a computer scientist at San Diego State University, points out that humans are typically only able to spot deception in other people 54% of the time. By comparison, AI-powered machine vision systems have been able to achieve an accuracy of over 80% in multiple studies. Infrared cameras that can pick up on changes in blood flow and pattern recognition systems capable of detecting subtle tics have both been used.

Elkins himself is one of the inventors behind Avatar (Automated Virtual Agent for Truth Assessments in Real Time), a screening system that could soon be working with real-life border agents. Avatar uses a display that features a virtual border agent that asks travellers questions while the machine scrutinises the subject’s posture, eye movements, and changes in their voice.
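
Public descriptions of Avatar don’t explain how those channels are combined into a verdict. A toy sketch of one plausible fusion step follows – the per-channel scores, the averaging rule and the threshold are all assumptions for illustration, not the published system.

```python
# Toy fusion of per-channel anomaly scores, assuming each sensor pipeline
# (posture tracking, eye tracking, vocal analysis) emits a normalised 0-1
# score per answer. Only loosely modelled on public descriptions of Avatar.

def fuse_scores(posture: float, eye_movement: float, voice: float) -> float:
    """Average the three channels into one per-answer deception score."""
    return (posture + eye_movement + voice) / 3

def flag_for_human_review(answer_scores: list[float], threshold: float = 0.6) -> bool:
    """Refer the traveller to a human agent if any answer looks anomalous."""
    return any(score > threshold for score in answer_scores)

# Two answers: the first looks calm, the second shows raised scores everywhere.
scores = [fuse_scores(0.2, 0.3, 0.1), fuse_scores(0.7, 0.8, 0.6)]
print(flag_for_human_review(scores))  # True - the second answer exceeds 0.6
```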

After experiments involving tens of thousands of subjects lying in a laboratory setting, the Avatar team believes it has managed to teach the system to pick up on the physical manifestations of deception.

Another system, called iBorder Ctrl, is to be tested at three land border crossings in Hungary, Greece and Latvia. It too features an automated interviewer that will interrogate travellers, and it has been trained on videos of people either telling the truth or lying.

Keeley Crockett, an expert in computational intelligence at Manchester Metropolitan University in the UK and one of those developing iBorder Ctrl, says the system looks for micro-gestures – subtle nonverbal facial cues that include blushing as well as slight backward and forward movements. Crockett has high hopes for the first phase of field tests, saying the team expects the system to “obtain 85% accuracy” in the field.

“Until we have completed this [phase of testing], we will not know for sure,” she cautions.

But there is an ongoing debate about whether such AI “lie detectors” actually work.

Vera Wilde, a lie detection researcher and vocal critic of the iBorder Ctrl technology, points out that science has yet to prove a definitive link between our outward behaviour and deception – which is precisely why polygraph tests are generally not admissible in court.

“There is no unique ‘lie response’ to detect,” she says.

Even if such a link could be established with scientific certainty, the use of such technology at a border crossing raises tricky legal questions. Judith Edersheim, co-director of the Massachusetts General Hospital Center for Law, Brain and Behavior (CLBB), has suggested that lie-detection technology could constitute an illegal search and seizure.

“Compulsory screening is a seizure of your thoughts, a search of your mind,” she says. This would require a warrant in the US. And there could be similar problems in Europe too. Article 22 of the General Data Protection Regulation protects EU citizens against decisions based solely on automated processing, including profiling. Can iBorder Ctrl ever be transparent enough to prove it hasn’t used some element of profiling?

It’s important to note that, at this stage, travellers testing out iBorder Ctrl will be volunteers and will still face a human border agent before they enter the countries where it is being tested. The system will give the human border officers a risk assessment score determined by iBorder Ctrl’s AI.

And it seems likely that AI will never completely replace humans when it comes to border control. The Unisys, Avatar, and iBorder Ctrl teams all agree that no matter how sophisticated the technology becomes, they’ll still rely heavily on humans to interpret the information.

But a reliance on machines to make judgements about a traveller’s right to enter a country still raises significant concerns among human rights and privacy advocates. If a traveller is determined to be high risk, will a border agency provide them with information about why?

“We need transparency as to how the algorithm itself is developed and implemented, how different types of data will be weighted in algorithmic calculations, how human decision-makers are trained to interpret AI conclusions, and how the system is audited,” says Posey. “And fundamentally, we also need transparency as to the impact on individuals and the system as a whole.”

Kendall, however, believes AI may be an essential tool in dealing with the challenges facing international borders. “It’s a complex set of threats,” he says. “The threats we face today will be different from the threats in a couple of years’ time.”

The success of AI border guards will depend not only on their ability to stay one step ahead of those who pose these threats, but also on whether they can make travelling easier for the 1.8 billion of us who want to see a bit more of the world.