A legal challenge was heard today in Europe’s Court of Justice in relation to a controversial EU-funded research project using artificial intelligence for facial “lie detection” with the aim of speeding up immigration checks.
The transparency lawsuit against the EU’s Research Executive Agency (REA), which oversees the bloc’s funding programs, was filed in March 2019 by Patrick Breyer, MEP of the Pirate Party Germany and a civil liberties activist — who has successfully sued the Commission before over a refusal to disclose documents.
He’s seeking the release of documents on the ethical evaluation, legal admissibility, marketing and results of the project. And he’s hoping to establish the principle that publicly funded research must comply with EU fundamental rights — and to help avoid public money being wasted on AI “snake oil” in the process.
“The EU keeps having dangerous surveillance and control technology developed, and will even fund weapons research in the future. I hope for a landmark ruling that will allow public scrutiny and debate on unethical publicly funded research in the service of private profit interests,” said Breyer in a statement following today’s hearing. “With my transparency lawsuit, I want the court to rule once and for all that taxpayers, scientists, media and Members of Parliament have a right to information on publicly funded research — especially in the case of pseudoscientific and Orwellian technology such as the ‘iBorderCtrl video lie detector’.”
The court has yet to set a decision date on the case but Breyer said the judges questioned the agency “intensively and critically for over an hour” — and revealed that documents relating to the AI technology involved, which have not been publicly disclosed but had been reviewed by the judges, contain information such as “ethnic characteristics”, raising plenty of questions.
The presiding judge went on to query whether it wouldn’t be in the interests of the EU research agency to demonstrate that it has nothing to hide by publishing more information about the controversial iBorderCtrl project, per Breyer.
AI ‘lie detection’
The research in question is controversial because the notion of an accurate lie detector machine remains science fiction, and with good reason: There’s no evidence of a “universal psychological signal” for deceit.
Yet this AI-fuelled commercial R&D “experiment” to build a video lie detector — which entailed testers being asked to respond to questions put to them by a virtual border guard as a webcam scanned their facial expressions and the system sought to detect what an official EC summary of the project describes as “biomarkers of deceit” in an effort to score the truthfulness of their facial expressions (yes, really) — scored over €4.5 million/$5.4 million in EU research funding under the bloc’s Horizon 2020 scheme.
The iBorderCtrl project ran between September 2016 and August 2019, with the funding spread between 13 private or for-profit entities across a number of Member States (including the U.K., Poland, Greece and Hungary).
Public research reports the Commission said would be published last year, per a written response to Breyer’s questions challenging the lack of transparency, do not appear to have seen the light of day yet.
Back in 2019 The Intercept was able to test the iBorderCtrl system for itself. The video lie detector falsely accused its reporter of lying — judging that she had given four false answers out of 16 and giving her an overall score of 48. A policeman who assessed the results told the publication that such a score would have triggered a suggestion from the system that she be subject to further checks (though she was not, as the system was never run for real during border tests).
The Intercept said it had to file a data access request — a right that’s established in EU law — in order to obtain a copy of the reporter’s results. Its report quoted Ray Bull, a professor of criminal investigation at the University of Derby, who described the iBorderCtrl project as “not credible” — given the lack of evidence that monitoring microgestures on people’s faces is an accurate way to measure lying.
“They are deceiving themselves into thinking it will ever be substantially effective and they are wasting a lot of money. The technology is based on a fundamental misunderstanding of what humans do when being truthful and deceptive,” Bull told the publication.
The notion that AI can automagically predict human traits if you just pump in enough data is distressingly common — just look at recent attempts to revive phrenology by applying machine learning to glean “personality traits” from face shape. So a face-scanning AI “lie detector” sits in a long and ignoble anti-scientific “tradition”.
In the 21st century it’s frankly incredible that millions of euros of public money are being funnelled into rehashing terrible old ideas — before you even consider the ethical and legal blindspots inherent in the EU funding research that runs counter to fundamental rights set out in the EU’s charter. When you consider all the bad decisions involved in letting this fly it looks head-hangingly shameful.
The granting of funds to such a dubious application of AI also appears to ignore all the (good) research that has been done showing how data-driven technologies risk scaling bias and discrimination.
We can’t know for sure, though, because only very limited information has been released about how the consortia behind iBorderCtrl assessed ethics considerations in their experimental application — which is a core part of the legal complaint.
The challenge in front of the European Court of Justice in Luxembourg poses some very awkward questions for the Commission: Should the EU be pouring taxpayer cash into pseudoscientific “research”? Shouldn’t it be trying to fund actual science? And why does its flagship research program — the jewel in the EU crown — have so little public oversight?
The fact that a video lie detector made it through the EU’s “ethics self-assessment” process, meanwhile, suggests the claimed “ethics checks” aren’t worth a second glance.
“The decision on whether to accept [an R&D] application or not is taken by the REA after Member States representatives have taken a decision. So there is no public scrutiny, there is no involvement of parliament or NGOs. There is no [independent] ethics body that will screen all of those projects. The whole system is set up very badly,” says Breyer.
“Their argument is basically that the purpose of this R&D is not to contribute to science or to do something for public good or to contribute to EU policies but the purpose of these programs really is to support the industry — to develop stuff to sell. So it’s really supposed to be an economical program, the way it has been devised. And I think we really actually need a discussion about whether this is right, whether this should be so.”
“The EU’s about to regulate AI and here it is actually funding unethical and unlawful technologies,” he adds.
No external ethics oversight
Not only does it look hypocritical for the EU to be funding rights-hostile research but — critics contend — it’s a waste of public money that could be spent on genuinely useful research (be it for a security purpose or, more broadly, for the public good; and for furthering those ‘European values’ EU lawmakers love to refer to).
“What we need to know and understand is that research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that actually wastes money for other programs that would be really important and useful,” argues Breyer.
“For example in the security program you could maybe do some good in terms of police protective gear. Or maybe in terms of informing the population in terms of crime prevention. So you could do a lot of good if these means were used properly — and not on this dubious technology that will hopefully never be used.”
The latest incarnation of the EU’s flagship research and innovation program, which takes over from Horizon 2020, has a budget of ~€95.5BN for the 2021-2027 period. And driving digital transformation and developments in AI are among the EU’s stated research funding priorities. So the pot of money available for ‘experimental’ AI looks massive.
But who will be making sure that money isn’t wasted on algorithmic snake oil — and dangerous algorithmic snake oil in instances where the R&D runs so clearly counter to the EU’s own charter of fundamental human rights?
The European Commission declined multiple requests to make spokespeople available to talk about these issues but it did send some on-the-record points (below), as well as some background information regarding access to documents, which is a key part of the legal complaint.
The Commission’s on-the-record statements on ‘ethics in research’ began with the claim that “ethics is given the highest priority in EU funded research”.
“All research and innovation activities carried out under Horizon 2020 must comply with ethical principles and relevant national, EU and international law, including the Charter of Fundamental Rights and the European Convention on Human Rights,” it also told us, adding: “All proposals undergo a specific ethics evaluation which verifies and contractually obliges the compliance of the research project with ethical rules and standards.”
It did not elaborate on how a ‘video lie detector’ could possibly comply with EU fundamental rights — such as the right to dignity, privacy, equality and non-discrimination.
And it’s worth noting that the European Data Protection Supervisor (EDPS) has raised concerns about misalignment between EU-funded scientific research and data protection law, writing in a preliminary opinion last year: “We recommend intensifying dialogue between data protection authorities and ethical review boards for a common understanding of which activities qualify as genuine research, EU codes of conduct for scientific research, closer alignment between EU research framework programmes and data protection standards, and the beginning of a debate on the circumstances in which access by researchers to data held by private companies can be based on public interest”.
On the iBorderCtrl project specifically the Commission told us that the project appointed an ethics advisor to oversee the implementation of the ethical aspects of research “in compliance with the initial ethics requirement”. “The advisor works in ways to ensure autonomy and independence from the consortium,” it claimed, without disclosing who the project’s (self-appointed) ethics advisor is.
“Ethics aspects are constantly monitored by the Commission/REA during the execution of the project through the revision of relevant deliverables and carefully analysed in cooperation with external independent experts during the technical review meetings linked to the end of the reporting periods,” it went on, adding that: “A satisfactory ethics check was conducted in March 2019.”
It did not provide any further details about this self-regulatory “ethics check”.
“The way it works so far is basically some expert group that the Commission sets up via a call for tender,” says Breyer, discussing how the EU’s research program is structured. “It’s dominated by industry experts, it doesn’t have any members of parliament in there, it only has — I think — one civil society representative in it, so that’s falsely composed right from the start. Then it goes to the Research Executive Agency and the actual decision is taken by representatives of the Member States.
“The call [for research proposals] itself doesn’t sound so bad if you look it up — it’s very general — so the problem really was the specific proposal that they proposed in response to it. And these are not screened by independent experts, as far as I understand it. The issue of ethics is dealt with by self assessment. So basically the applicant is supposed to indicate whether there is a high ethical risk involved in the project or not. And only if they indicate so will experts — selected by the REA — do an ethics assessment.
“We don’t know who’s been selected, we don’t know their opinions — it’s also being kept secret — and if it turns out later that a project is unethical it’s not possible to revoke the grant.”
The hypocrisy charge comes in sharply here because the Commission is in the process of shaping risk-based rules for the application of AI. And EU lawmakers have been saying for years that artificial intelligence technologies need ‘guardrails’ to make sure they’re applied in line with regional values and rights.
Commission EVP Margrethe Vestager has talked about the need for rules to ensure artificial intelligence is “used ethically” and can “support human decisions and not undermine them”, for example.
Yet EU institutions are simultaneously splashing public funds on AI research that would clearly be unlawful if implemented in the region, and which civil society critics decry as obviously unethical given the lack of scientific basis underpinning ‘lie detection’.
In an FAQ section of the iBorderCtrl website, the commercial consortium behind the project concedes that real-world deployment of some of the technologies involved would not be covered by the existing EU legal framework — adding that this means “they could not be implemented without a democratic political decision establishing a legal basis”.
Or, put another way, such a system would be illegal to actually use for border checks in Europe without a change in the law. Yet European taxpayer funding was nonetheless ploughed in.
A spokesman for the EDPS declined to comment on Breyer’s case specifically but he confirmed that its preliminary opinion on scientific research and data protection is still relevant.
He also pointed to further related work which addresses a recent Commission push to encourage pan-EU health data sharing for research purposes — where the EDPS advises that data protection safeguards should be defined “at the outset” and also that a “thought through” legal basis should be established ahead of research taking place.
“The EDPS recommends paying special attention to the ethical use of data within the [health data sharing] framework, for which he suggests taking into account existing ethics committees and their role in the context of national legislation,” the EU’s chief data supervisor writes, adding that he’s “convinced that the success of the [health data sharing plan] will depend on the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights”.
tl;dr: Legal and ethical use of data must be the DNA of research efforts — not a check-box afterthought.
In addition to a lack of independent ethics oversight of research projects that gain EU funding, there is — currently and worryingly for supposedly commercially minded research — no way for outsiders to independently verify (or, well, falsify) the technology involved.
In the case of the iBorderCtrl tech no meaningful data on the outcomes of the project has been made public and requests for data sought under freedom of information law have been blocked on commercial interest grounds.
Breyer has been trying without success to obtain information about the results of the project since it finished in 2019. The Guardian reported in detail on his fight back in December.
Under the legal framework wrapping EU research he says there’s only a very limited requirement to publish information on project outcomes — and only long after the fact. His hope is thus that the Court of Justice will agree ‘commercial interests’ can’t be used to over-broadly deny disclosure of information in the public interest.
“They basically argue there is no obligation to examine whether a project actually works so they have the right to fund research that doesn’t work,” he tells TechCrunch. “They also argue that basically it’s sufficient to exclude access if any publication of the information would damage the ability to sell the technology — and that’s an extremely wide interpretation of commercially sensitive information.
“What I would accept is excluding information that really contains business secrets like source code of software programs or internal calculations or the like. But that certainly shouldn’t cover, for example, if a project is labelled as unethical. It’s not a business secret but obviously it will harm their ability to sell it — but obviously that interpretation is just outrageously wide.”
“I’m hoping that this [legal action] will be a precedent to clarify that information on such unethical — and also unlawful if it were actually used or deployed — technologies, that the public right to know takes precedence over the commercial interests to sell the technology,” he adds. “They are saying we won’t release the information because doing so will diminish the chances of selling the technology. And so when I saw this then I said well it’s definitely worth going to court over because they will be treating all requests the same.”
Civil society organizations have also been thwarted in attempts to get detailed information about the iBorderCtrl project. The Intercept reported in 2019 that researchers at the Milan-based Hermes Center for Transparency and Digital Human Rights used freedom of information laws to obtain internal documents about the iBorderCtrl system, for example, but the hundreds of pages they got back were heavily redacted — with many completely blacked out.
“I’ve heard from [journalists] who have tried in vain to find out about other dubious research projects that they are massively withholding information. Even stuff like the ethics report or the legal assessment — that’s all stuff that doesn’t contain any commercial secrets, as such,” Breyer continues. “It doesn’t contain any source code, nor any sensitive information — they haven’t even released these partially.
“I find it outrageous that an EU authority [the REA] will actually say we don’t care what the interest is in this because as soon as it could diminish sales then we will withhold the information. I don’t think that’s acceptable, both in terms of taxpayers’ interests in knowing about what their money is being used for but also in terms of the scientific interest in being able to test/to verify these experiments on the so called ‘deception detection’ — which is very contested if it really works. And in order to verify or falsify it scientists of course need to have access to the specifics about these trials.
“Also democratically speaking if ever the legislator wants to decide on the introduction of such a system or even on the framing of these research programs we basically need to know the details — for example what was the number of false positives? How well does it really work? Does it have a discriminatory effect because it works less well on certain groups of people, as facial recognition technology does? That’s all stuff that we really urgently need to know.”
Regarding access to documents related to EU-funded research, the Commission referred us to Regulation No. 1049/2001 — which it said “lays down the general principles and limits” — though it added that “each case is analysed carefully and individually”.
However the Commission’s interpretation of the Horizon program’s regulations appears to entirely exclude the application of freedom of information rules — at least in the iBorderCtrl project’s case.
Per Breyer, they limit public disclosure to a summary of the research findings — that can be published some three or four years after the completion of the project.
“You’ll see an essay of five or six pages in some scientific magazine about this project and of course you can’t use it to verify or falsify the technology,” he says. “You can’t see what exactly they’ve been doing — who they’ve been talking to. So this summary is pretty useless scientifically and to the public and democratically and it takes ages. So I hope that in the future we will get more insight and hopefully a public debate.”
The EU research program’s legal framework is secondary legislation. So Breyer’s argument is that a blanket clause about protecting ‘commercial interests’ should not be able to trump fundamental EU rights to transparency. But of course it will be up to the court to decide.
“I think I stand some good chance especially since transparency and access to information is actually a fundamental right in the EU — it’s in the EU charter of fundamental rights. And this Horizon legislation is only secondary legislation — they can’t deviate from the primary law. And they need to be interpreted in line with it,” he adds. “So I think the court will hopefully say that this is applicable and they will do some balancing in the context of the freedom of information which also protects commercial information but subject to prevailing public interests. So I think they will find a good compromise and hopefully better insight and more transparency.
“Maybe they’ll blacken out some parts of the document, redact some of it but certainly I hope that in principle we will get access to that. And thereby also make sure that in the future the Commission and the REA will have to hand over most of the stuff that’s been requested on this research. Because there’s a lot of dubious projects out there.”
A better system of research project oversight could start with the committee that decides on funding applications: rather than being made up mostly of industry and EU Member State representatives (who of course will always want EU cash to come to their region), it should also include parliamentary representatives, more civil society representatives and scientists, per Breyer.
“It should have independent participants and those should be the majority,” he says. “That would make sense to steer the research activities in the direction of public good, of compliance with our values, of useful research — because what we need to know and understand is research that will never be used because it doesn’t work or it’s unethical or it’s illegal, that wastes money for other programs that would be really important and useful.”
He also points to a new EU research program being set up that’s focused on defence — under the same structure, lacking proper public scrutiny of funding decisions or information disclosure, noting: “They want to do this for defence as well. So that will be even about lethal technologies.”
To date the only disclosures around iBorderCtrl have been a few parts of the technical specifications of its system and some of a communications report, per Breyer, who notes that both were “heavily redacted”.
“They don’t say for example which border agencies they have introduced this system to, they don’t say which politicians they’ve been talking to,” he says. “The interesting thing actually is that part of this funding is also presenting the technology to border authorities in the EU and politicians. Which is very interesting because the Commission keeps saying look this is only research; it doesn’t matter really. But in actual fact they are already using the project to promote the technology and the sales of it. And even if this is never used at EU borders funding the development will mean that it could be used by other governments — it could be sold to China and Saudi Arabia and the like.
“And also the deception detection technology — the company that is marketing it [a Manchester-based company called Silent Talker Ltd] — is also offering it to insurance companies, or to be used on job interviews, or maybe if you apply for a loan at a bank. So this idea that an AI system would be able to detect lies risks being used in the private sector very broadly and since I’m saying that it doesn’t work at all and it’s basically a lottery lots of people risk having disadvantages from this dubious technology.”
“It’s quite outrageous that nobody prevents the EU from funding such ‘voodoo’ technology,” he adds.
The Commission told us that “The Intelligent Portable Border Control System” (aka iBorderCtrl) “explored new ideas on increasing efficiency, convenience and security of land border crossing”, and like all security research projects it was “aimed at testing new ideas and technologies to address security challenges”.
“iBorderCtrl was not expected to deliver ready-made technologies or products. Not all research projects lead to the development of technologies with real-world applications. Once research projects are over, it is up to Member States to decide whether they want to further research and/or develop solutions studied by the project,” it also said.
It also pointed out that specific application of any future technology “will always have to respect EU and national law and safeguards, including on fundamental rights and the EU rules on the protection of personal data”.
However Breyer also cries foul over the Commission seeking to deflect public attention by claiming ‘it’s only R&D’ or that it’s not deciding on the use of any particular technology. “Of course factually it creates pressure on the legislator to agree to something that has been developed if it turns out to be useful or to work,” he argues. “And also even if it’s not used by the EU itself it will be sold somewhere else — and so I think the lack of scrutiny and ethical assessment of this research is really scandalous. Especially as they have repeatedly developed and researched surveillance technologies — including mass surveillance of public spaces.”
“They have projects on bulk data collection and processing of Internet data. The security program is very problematic because they do research into interferences with fundamental rights — with the right to privacy,” he goes on. “There are no limitations really in the program to rule out unethical methods of mass surveillance or the like. And not only are there no material limitations but also there is no institutional set-up to be able to exclude such projects right from the beginning. And then even once the programs have been devised and started they will even refuse to disclose access to them. And that’s really outrageous and as I said I hope the court will do some proper balancing and provide for more insight and then we can basically trigger a public debate on the design of these research schemes.”
Pointing again to the Commission’s plan to set up a defence R&D fund under the same industry-centric decision-making structure — with a “similarly deficient ethics appraisal mechanism” — he notes that while there are some limits on EU research being able to fund autonomous weapons, other areas could make bids for taxpayer cash — such as weapons of mass destruction and nuclear weapons.
“So this will be hugely problematic and will have the same issue of transparency, all the more of course,” he adds.
On transparency generally, the Commission told us it “always encourages projects to publicise as much as possible their results”. While, for iBorderCtrl specifically, it said more information about the project is available on the CORDIS website and the dedicated project website.
If you take the time to browse to the ‘publications’ page of the iBorderCtrl website you’ll find a number of “deliverables” — including an “ethics advisor”; the “ethic’s advisor’s first report”; an “ethics of profiling, the risk of stigmatization of individuals and mitigation plan”; and an “EU wide legal and ethical review report” — all of which are listed as “confidential”.