The International Air Transport Association (IATA) is claiming a win following a biometrics proof-of-concept (PoC) which, according to a release, involved “two passengers using different digital wallets and travel credentials on a round-trip between Hong Kong and Tokyo.”
The two-day PoC involved partnerships with Cathay Pacific, Hong Kong International Airport, Narita International Airport, Branchspace, Facephi, NEC, Neoke, Northern Block and SICPA. Airport elements were conducted in a live environment, building on a 2023 PoC carried out in a test environment.
IATA and its partners then pursued the industry's standard goal for airport biometrics: seamless progression through airport processes and checkpoints — including bag drop, security, immigration and boarding — using biometric authentication instead of physical travel documents.
“A seamless fully digital travel experience powered by digital identity and biometrics has moved from theory to proven reality,” says Nick Careen, senior vice president for operations, safety, and security for IATA. “The challenge now is to make this more efficient travel experience available to all travelers. There is good reason for optimism. With One ID standards already in place,” he says, “the industry could be ready for this in the very near future.”
Central Bank Digital Currency (CBDC) Projects Are Foundering in Five-Eye Nations. What Gives?
As we warned in May 2022, a financial revolution is quietly sweeping the world (or at least trying to) that has the potential to reconfigure the very nature of money, making it programmable, far more surveillable and centrally controlled. To quote Washington DC-based blogger and analyst NS Lyons, “if not deliberately and carefully constrained in advance by law,… CBDCs have the potential to become even more than a technocratic central planner’s dream. They could represent the single greatest expansion of totalitarian power in history.”
At the time of writing that post, around 90 countries and currency unions were in the process of exploring a CBDC, according to the Atlantic Council’s CBDC tracker. Today, just two and a half years later, that number has increased to 134, representing 98% of global GDP. Around 66 of those countries are in the advanced stage of exploration — development, pilot, or launch.
But they do not include the U.S. In fact, the U.S. is not just trailing most countries on CBDC development; it could soon become the first country to explicitly ban the central bank from issuing a CBDC, to the undisguised horror of certain think tanks.
In May, the U.S. House of Representatives passed HR 5403, also known as the “CBDC Anti-Surveillance State Act.” The bill, first introduced in September 2023 and sponsored by House Majority Whip Tom Emmer, with companion legislation from Senator Ted Cruz, proposes amendments to the Federal Reserve Act to prohibit the U.S. Federal Reserve from issuing a CBDC. It also seeks to protect the right to financial privacy and prevent the U.S. government from “weaponizing their financial system against their own citizens.”
Legal Scholars Developing Guidance for Biometrics Legislation
Two law institutes, one from Europe and one from the U.S., launched a new collaborative project focusing on the ethical and legal implications of collecting and using biometric data.
Initiated by the Philadelphia-based American Law Institute and the Vienna-headquartered European Law Institute, the project's main task is to define a legal framework aimed at regulators working in different democratic countries.
The move comes at a crucial time for the regulation of artificial intelligence and biometric data on both sides of the Atlantic. This year, the European Union finally enacted its Artificial Intelligence Act (AI Act), while U.S. agencies have been developing AI guidelines and debating uses such as facial recognition. The project, titled Principles for the Governance of Biometrics, has four initial goals.
Deaths Linked to Chatbots Show We Must Urgently Revisit What Counts as ‘High-Risk’ AI
Last week, the tragic news broke that U.S. teenager Sewell Seltzer III took his own life after forming a deep emotional attachment to an artificial intelligence (AI) chatbot on the Character.AI website.
As his relationship with the companion AI became increasingly intense, the 14-year-old began withdrawing from family and friends and getting into trouble at school.
In a lawsuit filed against Character.AI by the boy’s mother, chat transcripts show intimate and often highly sexual conversations between Sewell and the chatbot Dany, modeled on the Game of Thrones character Daenerys Targaryen. They discussed crime and suicide, and the chatbot used phrases such as “that’s not a reason not to go through with it”.
In a statement to CNN, Character.AI has stated they “take the safety of our users very seriously” and have introduced “numerous new safety measures over the past six months”. In a separate statement on the company’s website, they outline additional safety measures for users under the age of 18. (In their current terms of service, the age restriction is 16 for European Union citizens and 13 elsewhere in the world.)
However, these tragedies starkly illustrate the dangers of rapidly developing and widely available AI systems anyone can converse and interact with. We urgently need regulation to protect people from potentially dangerous, irresponsibly designed AI systems.
Largest U.S. Healthcare Data Breach Exposes Medical Records of 100 Million Customers
A staggering new update has confirmed that February’s UnitedHealth data breach impacted over 100 million Americans, making it the largest healthcare data breach in U.S. history. But the fallout extends beyond individual patients — it touches entire families, elevating the scope and scale of this unprecedented attack.
The revised figure was disclosed on Oct. 24 by the U.S. Department of Health and Human Services Office for Civil Rights, which updated its data breach portal to reflect the full breadth of the breach. While the attack occurred in February, this latest report is the first official accounting of its widespread impact.
As reported by Forbes, the breach didn’t just expose data; it crippled critical services to hospitals, clinics, and medical practices nationwide, causing widespread operational chaos across the healthcare network.
UnitedHealth reportedly paid the ALPHV/BlackCat ransomware group $22 million in a desperate bid to recover the stolen data and halt further exposure. But the hackers reneged on the deal, pocketing the payout while keeping the data — leaving tens of millions of Americans’ information exposed on the dark web.