Sandra Finley

Nov 282025
 

A  RELATED POSTING:

2017-05-25 Chrystia Freeland – Larry Summers – Dominic Barton connection. “Inclusive Capitalism Initiative” as “re-branding”. Summers bad news (corrupt), Canadian financial matters, advisor to Liberals.

 

TO ALL  CANADIANS:    an update on the Liberals   – –  Larry Summers and corruption

Thank-you very much,  Mr. Matt Taibbi, for the recent information  below re Summers.    He’s been on my radar for a long time.   Bad News for Canada.

The Federal Liberal Govt (Canada)  brought Chrystia Freeland, Larry Summers and Dominic Barton up from the U.S. to help them  win the Election 10 years ago.

I figured we Canadians were doomed:  Larry Summers was sleaze.   It didn’t take too much research.    We did NOT need MORE corruption in Ottawa.

Thank-you.  Thank-you!

From CNBC:

Former Treasury Secretary Larry Summers said Monday that he was stepping back from all public commitments amid fallout from the release of emails between him and the notorious sex offender Jeffrey Epstein.

“I am deeply ashamed of my actions and recognize the pain they have caused. I take full responsibility for my misguided decision to continue communicating with Mr. Epstein,” Summers said in a statement obtained by CNBC.

“While continuing to fulfill my teaching obligations, I will be stepping back from public commitments as one part of my broader effort,” said Summers, a former president of Harvard University…

Larry Summers is a rare perfect 10 on the celebrity-repugnance scale. He’s everything most normal people can’t stand about the current crop of “elites”: an arrogant Davos fixture whose toad face always looks pleased with his legacy of disastrous policy decisions, and who personifies his class’s habit of lavishing exalted academic titles on intellectual mediocrity.

Now he’s pioneered a new ritual, auto-cancelation. “I will be stepping back from public commitments” is cancel-culture seppuku, a way to give the mob a win before it gets going. That’s always a questionable tactic, but especially with the Jeffrey Epstein story, which is fast acquiring a familiar shape: a factually diffuse moral mania used as a disciplinary weapon by a media sector hungry for pelts.

The exchanges between Summers and Epstein are head-poundingly banal, like 99.9% of the documents in the just-released “trove” of Epstein-related documents. Summers is guilty of knowing Epstein and having pseudo-intellectual discussions with him about a mistress nicknamed “peril,” whom Summers feared might stray. Epstein compared the possibility to finding life on other planets, and tried to cheer Summers up by flattering the ex-Treasury Secretary’s fascination with Bayesian statistics:

Odds are limited to binary outcomes… since you are immobile. Do some homework… I concede your point on pessimism but would under bayesian rules. Feel comfortable. As humans are biased toward bad outcome avoidance… She is never ever going to find another Larry summers. Probability ZERO

It went on. “Send peril flowers,” Epstein advised. The two men briefly discussed whether Ehud Barak would be Prime Minister (this was July, 2019). Then Summers wrote, “At cape w mother brothers kids and nephews nieces. Bit of an Ibsen play.” Then it was “better than [sic] checov.” When the two pals from there plunged into a Google-aided discussion of Lady With Lapdog, I shut the computer off.

Congress voted yesterday to compel the Department of Justice to release in “searchable and downloadable format” all files related to Epstein within thirty days. The House vote was 427-1. Though Ro Khanna, Thomas Massie, and Mike Johnson all voiced concerns, only Louisiana Republican Clay Higgins actually voted no, saying that “this type of broad reveal of criminal investigative files, released to a rabid media, will absolutely result in innocent people being hurt.”

Predictably now Higgins is being teed up as a cancellation target, as social media fills with Meet the Lone Dickhead Who Voted Against Releasing the Epstein Filestype stories, each of which lists every maybe-naughty thing Higgins has ever done. This format, seen a lot in Russiagate with dissenters like Devin Nunes and Tulsi Gabbard, usually comes as prelude to a flood of Stubborn Ownthinker Faces Calls To Step Aside pieces, another fun part of the cancelation ritual.

For the record I’m very much in favor of releasing any Epstein files. The country deserves to know whatever there is to know about this mess, and if it exposes systemic wrongdoings, those need fixing. However, it’s extremely suspicious that a story that was deader than Epstein himself for years is suddenly the Most Important Thing now that Trump is back in the White House, especially since a lot of the techniques used to drive a media panic in the first Trump term are back. The fact that some of Trump’s top officials stoked public outrage about this subject en route to higher office does change the karmic equation this time, however.

Between Epstein’s beyond-suspicious death, multiple prosecutions for sex crimes, inexplicable $600 million fortune, and breathtaking Rolodex of powerful friends, there’s a lot to be curious about. But the public’s fascination with Epstein is based on the notion that he was not only operating an organized blackmail ring, but doing so on behalf of intelligence agencies, probably Israeli. That story simply isn’t there yet, and a lot of people who should know better, myself included, have assumed it is. It could be true, which is why releasing documents is a good idea. As of now, though, it’s closer to Russiagate, in which confirmable facts are overshadowed by a mountain range of inference:

Upgrade to paid

Continue reading this post for free in the Substack app

Claim my free post

Or upgrade your subscription. Upgrade to paid

Like
Comment
Restack

© 2025 Matt Taibbi
548 Market Street PMB 72296, San Francisco, CA 94104

Nov 232025
 

2024-01-10 Repeal Bill 36   the Health Professions & Occupations Act  (HPOA).  First class tyranny. (Civitas #1)

2025-02-17 Mark Carney, hopeful to be prime minister. All the Power he sees. Sedition.

2025-03-19 Mark Carney, PrimeMinister-Elect, Tells us exactly who he is. I’m an “elitist and a globalist” but “that’s exactly what we (Canadians)  need”

 

2023-12-06 Bill 36 (B.C.) is the Health Professions & Occupations Act. First class tyranny. But we can stop it, by pitching in to help the Canadian Society for Science & Ethics in Medicine. Just spread the word.

UPDATE:   I don’t know The status of the Society:

Nov 22, 2025:  The link I have for  the Canadian Society for Science & Ethics in Medicine comes up as “Page Not Found” on my computer.

 

2025-02-16 What is sedition? How are Charter Rights activated?

My Conclusion:

The Government may rescind the rights of an individual.  However,

  1. The Statistics Act does not give the Government the authority to do that.  StatsCan cannot just declare that this is so.
  2. In order to override the Charter Right of an individual, the Government has to pass the “Oakes Test“.

If StatsCan wishes to take away Canadians’ Charter Right to Privacy of Personal Information, it would have to make an application to the Court to do so, supplying the Court with the arguments to satisfy the Oakes Test.   It has not done that.   So the Charter Right stands.

2025-01-02 Dr Wm Makis on Man in America. Re: virus and ANTI-PARASITIC (ivermectin). Question re ANTI-FUNGAL (my experience) and (Deduction?)   – – the health of my immune system.    (Myco-toxins and TB)

For Your Selection FEBRUARY, 2025

2024-05-14 The WHO. You should know this. Swiss lawyer Philipp Kruse spells it out. The power consolidation for a global health dictatorship. (Not under my watch!)

2024-04-26 Poll finds BC Conservatives lead NDP for first time. Western Standard, by Jonathan Bradley.    (I think this is clearly a reaction to the covid debacle.)

Nov 222025
 

NOTE   It is not a mistake if  you are mixed up:   There are currently many more agencies of the Federal Govt,  and for example,  what used to be  in Ag and Ag Food,  is now in Health.

RESPONSIBILITY  IS  HEALTH CANADA:  Minister Marjorie Michel   (Liberal MP for  Papineau,  Justin Trudeau’s former riding)

      •  the killing of  3 to 400  healthy ostriches    (CFIA   Cdn Food Inspection Agency)
      • the secrecy around modified meat  (lab-grown and clonedNOT LABELLED).

RESPONSIBILITY IS AGRICULTURE AND  AGRI-FOOD CANADA:   Minister Heath MacDonald (Liberal MP for Malpeque, PEI)

      • I recall that the PMRA (Pest Mgmt Regulatory Agency) was  Health Canada and WATER was, at one time, under Agri-Food while Ag Canada . . . ??   Well, today, it’s  . . .

Who is responsible for WATER  today?    You may become even MORE confused!     You might think that

  • CANADA WATER MASTER PLAN  would answer a few questions.

It’s under  British Land.   “Canada Water” is described as “a new district for Central London”,  (Since about 2018, I think.)

SEE:    https://www.southwark.gov.uk/planning-environment-and-building-control/current-and-future-development/canada-water-masterplan

Food and Water and Health and Export.

When I witness the wrongness of the CFIA  vis-a-vis  ostriches,  and now the wrongness of the CFIA  and their secrecy around THEIR plans for the meat market in Canada,  I must say – – thank goodness that the agenda for Alberta (and Saskatchewan) Rising:  A Principled Vision for a Sovereign and Free Nation   (a divorce from Ottawa)   is well underway.    I hope every one of us is alive, well, and empowered to do action.

With many thanks to Tamara Ugolini (Rebel News).

Tamara is a great young journalist.   Vincent Breton (the alert citizen)  is to be applauded for his role in this.   You’ll meet him in the video.   He is an essential element,  the same as Katie Pasitney is for the ostriches.

Without accurate product labeling,  you CANNOT  know if,  or what,  health problems are created by what you are eating and drinking.

We lost the hard-fought battle in Canada to have GMO (a.k.a.  GE (genetically-engineered)) food products labelled.  Maybe this is a last kick-at-the-can.

I hope we can together get rid of this autocratic  hubris.   An ideal opportunity is at hand.   The ostriches are our beacon.

Sandra

– – – – – – – – – – – –

CEO exposes Health Canada’s cloned meat plot and forces policy walk back

DuBreton’s Vincent Breton blew the whistle on a covert policy shift that would have let lab-engineered animals bypass ‘novel food’ labels entirely, ultimately forcing Health Canada into a temporary retreat.

https://www.rebelnews.com/ceo_exposes_health_canada_s_secret_plan_to_deregulate_gene_edited_and_cloned_meat_forcing_a_policy_walk_back

 

 

Nov 192025
 

The previous Selection (October) – – about 3 of you have seen, Because distribution could not be completed:   

For Your Selection October 2025.     (https://sandrafinley.ca/blog/?p=32035)

 

– – – – – – – – – – – – –

For Your Selection NOVEMBER 20  2025

– – – – – – – – – –  – – –

  1.     2025-11-18 Katie Pasitney (ostriches) interviewed on the High-Wire, Del Bigtree. This will work.

2.     2025-11-07 Ostriches – – all were shot dead last nite by CFIA order.

https://sandrafinley.ca/blog/?p=32083

 

https://www.rebelnews.com/save_the_ostriches

– – – – – – – – – – – – – –

 

3.    2025-11-12    5:00 Radio News tells its large audience that the CFIA actions  around the slaughter of ostriches is false. (a Jimmy Pattison radio station, FM 99.9)

https://sandrafinley.ca/blog/?p=32095

I speak with people who think the ostrich story is not true.

Please help:   it’s as easy as starting a conversation about the ostriches.  “Do you know about . . .”

 

It works.  If people everywhere had not been doing that,  this would not have happened:

Nov 18th – I stumbled across a Parliamentary Standing Committee  on Agriculture and Agri-Food.

The CFIA  was hauled in front of the Committee to answer questions.  Towards the end,  some hard questions were asked.    The Committee Meeting was video taped, obviously.

Maybe the Liberals and the CFIA are starting to realize that they are not omnipotent.

– – – – – – – – – – – – –

4.     2025-11-07 Katie Pasitney re ostriches. Nadine Wellwood re Alberta Rising: A Principled Vision for a Sovereign Nation

https://sandrafinley.ca/blog/?p=32090

– – – – – – – – – – – – – 

 

5.     2025-11-18 EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn. WIth thanks to Reclaim The Net.

6.       2025-11-15 CRYPTO-GRAM, November 15, 2025, Bruce Schneier          https://sandrafinley.ca/blog/?p=32110

 The November newsletter has a discussion on AI.

 I sent a thank-you note to Bruce.  He is one of the stalwarts,  for years sending out information to help keep the public informed.

But I worry.     I might send you instead to the preceding  EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks,

I think we are in a transitional time when, if we don’t have strategies for neutralizing  “TREACHERIES”,  we are not going to beat the tyrants.    Bruce does everything very well,  and God Bless Him.   But not everyone is as principled.

– – – – – – – – – –  – – –

2.   2025-11-14   Lavigne Show:  Silencing Detective Grus – Trailer for the film.    And guest: Donald Best

https://sandrafinley.ca/blog/?p=32105

– – – – – – – – – –  – – –

3.    https://druthers.ca/

2025-10  The  November  issue is out.   

– – – – – – – – – –  – – –

5.    2025-11-07    Well done. Interview of Donna Laframboise, journalist. By Melanie Bennet on “Disrupted”.

https://sandrafinley.ca/blog/?p=32092

 

– – – – – – – – – –  – – – 

2025-11-03 Two Brits, become Canadians,  launched a court case against Justin Trudeau and his crew – – you should know the story. It’s bloody important.

https://sandrafinley.ca/blog/?p=32079

– – – – – – – – – –  – – –

2025-10-26 Cartel Canada | W5’s Avery Haines investigates a Canadian meth pipeline

https://sandrafinley.ca/blog/?p=32073

– – – – – – – – – –  – – –

2025-10-26 Something is wrong or fishy here. (Canada’s Meth Superlab exposed, near Kamloops.?)

https://sandrafinley.ca/blog/?p=32071

– – – – – – – – – –  – – – 

2025-10-26 World on Mute, by Lisa Miron

https://sandrafinley.ca/blog/?p=32068

Nov 192025
 

There are 2 parts to the posting:

2025-11- re censorship and ostriches

Part  1.    Censorship  in relation to the ostrich story.

 

Part  2.    Censorship  in relation to the recent (new) court challenge by Shaun Rickard and fellow-litigant.

Subject: Court case re Covid restrictions

EXCERPT:

Greetings Sandra. FYI An interesting interview.  What a national tragedy, the impossibility to even have your day in court.

Deny, delay, ignore, demonize  ……

On a similar note, let’s pray that the CFIA flawed  policy doesn’t prevail and the ostriches survive.

 

THIS IS THE OTHER LINK THAT DIDN’T WORK   ( a form of censorship.)

https://x.com/shaunrickard67/status/1985021998045519924/mediaviewer

As happens,  I did know about this court case.   The name  (shaun rickard) is part of the URL.

2025-11-03 Two Brits (now Canadian) launched a court case against Justin Trudeau and his crew – – you should know the story. It’s bloody important.

 

Nov 192025
 

The   original email thread starts at the bottom.  If you go there,  read up from bottom.

SUMMARY

  • Nov 5  –  A friend Mujo sent me a link to an “interesting posting”.   Which didn’t work.  I tried it on a couple different days. The message displayed said:

 Hmm…this page doesn’t exist. Try searching for something else.

I had no clue what the email was about.

Nov 7  –  I replied to Mujo:

The ostriches are dead, as you will know  . . .

I told him the link he had sent to me didn’t work and asked:

  – –  Does the link still work for you?

Nov 13 –  from Mujo.

Hmmmm no this won’t load now.  “something went wrong…retry” is the message

Must have been a “dangerous” post….

 

Nov 13 –  from Sandra

Happy I got this one up.    Katie Patsitney (Ostriches) on The Highwire.   Today.     Katie did a good job.

I included it with this:

2025-11-12 5:00 Radio News tells its large audience that the CFIA news about the slaughter of ostriches is false. (a Jimmy Pattison radio station)

Nov 13  – From Mujo

“oops, that page can’t be found”. When I try to view the high wire interview.

 

= = = = = = = = = = = = =

THE ORIGINAL EMAIL THREAD  – –  read from bottom up.

From: Sandra
Sent: November 14, 2025 7:31 PM
To:  Mujo
Subject: RE: Court case re Covid restrictions

Thanks Mujo.

Damn.

 

5.    From: Mujo
Sent: November 13, 2025 10:06 PM
To: Sandra
Subject: Re: Court case re Covid restrictions

“oops, that page can’t be found”. When I try to view the high wire interview.

Sent from my iPhone

 

4.    On Nov 13, 2025, at 10:48 PM, Sandra wrote:

A very dangerous post, yes – – must have been VERY dangerous!!

 

Happy I got this one (following)  up.    Katie Patsitney (Ostriches) on The Highwire.   Today.     Katie did a good job.

I included it with this:

2025-11-12 5:00 Radio News tells its large audience that the CFIA news about the slaughter of ostriches is false. (a Jimmy Pattison radio station)

 

3.    From: Mujo
Sent: November 13, 2025 12:21 PM
To: Sandra
Subject: Re: Court case re Covid restrictions

 

Hmmmm no this won’t load now.  “something went wrong…retry” is the message

 

Must have been a “dangerous” post…..

Sent from my iPhone

 

2.       On Nov 7, 2025, at 11:42 PM, Sandra wrote:

Hi Mujo,

The ostriches are dead, as you will know.   Hard to believe.

I went twice over the last days to the posting you sent.

The message received did not change.

Hmm…this page doesn’t exist. Try searching for something else.

 

– –  Does the link still work for you?

/Sandra

 

1.     —–Original Message—–
From: Mujo
Sent: November 5, 2025 11:03 PM
To: Sandra
Subject: Court case re Covid restrictions

Greetings Sandra. FYI An interesting interview.  What a national tragedy, the impossibility to even have your day in court.

Deny, delay, ignore, demonize  ……

On a similar note, let’s pray that the CFIA flawed  policy doesn’t prevail and the ostriches survive .

Mujo    https://x.com/shaunrickard67/status/1985021998045519924/mediaviewer

Sent from my iPhone

Nov 182025
 

Bless you,  Katie.

– – – – – – – –

INSTRUCTIONS,  to get to the video:

RE    Katie Pasitney (ostriches)    on the HighWire (Del Bigtree)

 

  1.     This link  might work:

    Episode 450: BROKEN FAITH

    This Episode 450 starts with the Ostrich story.

  2.     Or,   try this link:       COMPLY OR DIE: A FAMILY FARM UNDER FIRE

3.    Or,  put  this  into your search engine.        PASITNEY INTERVIEWED ON HIGH WIRE

 

Nov 182025
 

The article,  EU’s Weakened “Chat Control” Bill Still Poses Major problems   is last in the report below.

I think it is very important.   Some people are sleep-walking.

 

RELATED:

 

This work depends on the support of readers like you. Threats to free speech and privacy are growing, and too few people hear about them until it is too late. If you value this work, please consider upgrading to a paid subscription or gifting one to someone who should be following these issues. If you can’t contribute financially, sharing our work or forwarding this email helps ensure these stories reach the people who need to see them.
Become a Supporter

 

NEW LAWSUIT

 

 

Lawsuit Claims Google Secretly Used Gemini AI to Scan Private Gmail and Chat Data

 

When Google flipped a digital switch in October 2025, few users noticed anything unusual.

 

Gmail loaded as usual, Chat messages zipped across screens, and Meet calls continued without interruption.

 

Yet, according to a new class action lawsuit, something significant had changed beneath the surface.

 

We obtained a copy of the lawsuit for you here.

 

Plaintiffs claim that Google silently activated its artificial intelligence system, Gemini, across its communication platforms, turning private conversations into raw material for machine analysis.

 

The lawsuit, filed by Thomas Thele and Melo Porter, describes a scenario that reads like a breach of trust.

 

It accuses Google of enabling Gemini to “access and exploit the entire recorded history of its users’ private communications, including literally every email and attachment sent and received.”

 

The filing argues that the company’s conduct “violates its users’ reasonable expectations of privacy.”

 

Until early October, Gemini’s data processing was supposedly available only to those who opted in.

 

Then, the plaintiffs claim, Google “turned it on for everyone by default,” allowing the system to mine the contents of emails, attachments, and conversations across Gmail, Chat, and Meet.

 

The complaint points to a particular line in Google’s settings, “When you turn this setting on, you agree,” as misleading, since the feature “had already been switched on.”

 

This, according to the filing, represents a deliberate misdirection designed to create the illusion of consent where none existed.

 

There is a certain irony woven through the outrage. For all the noise about privacy, most users long ago accepted the quiet trade that powers Google’s empire.

 

They search, share, and store their digital lives inside Google’s ecosystem, knowing the company thrives on data.

 

The lawsuit may sound shocking, but for many, it simply exposes what has been implicit all along: if you live in Google’s world, privacy has already been priced into the convenience.

 

Thele warns that Gemini’s access could expose “financial information and records, employment information and records, religious affiliations and activities, political affiliations and activities, medical care and records, the identities of his family, friends, and other contacts, social habits and activities, eating habits, shopping habits, exercise habits, [and] the extent to which he is involved in the activities of his children.”

 

In other words, the system’s reach, if the allegations prove true, could extend into nearly every aspect of a user’s personal life.

 

The plaintiffs argue that Gemini’s analytical capabilities allow Google to “cross-reference and conduct unlimited analysis toward unmerited, improper, and monetizable insights” about users’ private relationships and behaviors.

 

The complaint brands the company’s actions as “deceptive and unethical,” claiming Google “surreptitiously turned on this AI tracking ‘feature’ without informing or obtaining the consent of Plaintiffs and Class Members.” Such conduct, it says, is “highly offensive” and “defies social norms.”

 

The case invokes a formidable set of statutes, including the California Invasion of Privacy Act, the California Computer Data Access and Fraud Act, the Stored Communications Act, and California’s constitutional right to privacy.

 

Google is yet to comment on the filing.

 

 

Reclaim The Net is reader-supported. Consider becoming a paid subscriber.

 

Become a Supporter

 

 

RAIDED

 

 

Germany Turns an X Post Into a Police Raid at Dawn

 

The story starts with a tweet that barely registered on the internet. A few hundred views, a handful of likes, and the kind of blunt libertarian framing that is common on X every hour of every day.

 

Yet in Germany, that tiny post triggered a 6am police raid, a forced phone handover, biometric collection, and a warning that the author was now under surveillance.

 

The thing to understand is that this story only makes sense once you see the sequence of events in order.

 

The story goes like this:

  • A man in Germany, known publicly only as Damian N., posts a short comment on X, calling government-funded workers “parasites.”
  • The post is tiny. At the time he was raided, it had roughly a hundred views. Even now, it has only a few hundred.
  • Despite the post’s obscurity, police arrive at Damian’s home at six in the morning.
  • He says they did not show him the warrant and did not leave documentation of what they seized.
  • Police pressured him to unlock his phone, confiscated it, took photos, fingerprints, and other biometric data, and even requested a blood sample for DNA.
  • One officer reportedly warned him to “think about what you post in the future” and said he is now “under surveillance.”
  • The entire action was justified under Section 130 of the German Criminal Code, which is meant to prohibit inciting hatred against protected groups.
  • Government employees are not such a group, which makes the legal theory tenuous at best.
  • Damian’s lawyer says the identification procedures and possibly the raid itself were illegal.

That is the sequence. A low-visibility political insult becomes a criminal investigation involving home searches, device seizure, and biometric collection.

 

 

The thing to understand is that this is not about one man’s post. It is about a bureaucracy that treats speech as something to manage and a set of enforcement structures that expand to fill the space they are given.

 

Start with the enforcement context. Germany has built a sprawling ecosystem around “online hate”: specialized prosecutor units, NGO tip lines, and automated scanning for taboo keywords.

The model is compliance first and legal theory second.

 

Once you create an apparatus like this, it behaves the way bureaucracies behave. It looks for work. It justifies resources by producing cases. A tiny X post with inflammatory language becomes a target because it contains the right keyword, not because it has societal impact.

 

Police behavior fits the same pattern. Confiscating phones is strategically useful because it imposes real pain without requiring a conviction.

 

Even prosecutors have said that losing a smartphone is often worse than the fine.

 

Early-morning raids create psychological pressure. Collecting biometrics raises the stakes further. None of this is about public safety. It is about creating friction for saying the wrong thing.

 

The legal mismatch is the tell. Section 130 protects groups defined by national, racial, religious, or ethnic identity.

 

There is also the privacy angle, which becomes impossible to ignore. Device access, biometrics, DNA requests: these are investigative tools built for serious crimes.

 

Deploying them against minor online speech means the line between public-safety policing and opinion policing has already been crossed. Once a state normalizes surveillance as a response to expression, the hard part becomes restoring restraint.

 

 

It is a deterrence strategy, not a justice strategy. And it reinforces why free speech and strong privacy protections matter. Without them, minor speech becomes an invitation for major intrusion.

 

The counterintuitive part is that the smallness of the post makes a raid more likely, not less.

 

High-profile content generates scrutiny and political costs. Low-profile content discovered through automated or NGO-driven monitoring is frictionless to act on. Unless people are reading Reclaim The Net, most people never hear of these smaller cases.

 

Looking ahead, the pressure will only increase. As more speech moves to global platforms that are harder to influence, local governments will lean more heavily on domestic law enforcement as their lever of control.

 

That means more investigations that hinge on broad interpretations of old statutes and more friction between individual rights and bureaucratic incentives.

 

This is particularly true in Germany and places like the UK, where the government doesn’t seem to feel any shame about raiding its citizens over online posts.

 

GET YOURS

 

 

Get Yours: Do Not Comply T-Shirt

 

Getting merchandise for yourself or as a gift helps support the mission to defend free speech and digital privacy.

 

It also helps raise awareness every time you wear or use it.

 

Your merch purchase goes directly toward sustaining our work and growing our reach.

 

It’s a simple, effective way to support. Get yours now.

 

Also available as a t-shirt or hoodie.

 

Shop Now

 

 

BIG WIN

 

 

Italian Court Orders Google to Restore Banned Catholic Blog

 

Google has been compelled by the Tribunale di Imperia to restore Messainlatino.it, a major Italian Catholic website that, as you may remember, the company had abruptly taken down from its Blogger platform in July.

 

The ruling, issued against Google Ireland Limited, the firm’s European branch, also requires payment of approximately €7,000 (about $8,100) in court costs.

 

The blog’s editor, Luigi Casalini, filed legal action after Google deleted the site without warning, claiming a violation of its “hate speech” rules.

 

The company’s notification consisted of a short, generic email and provided no explanation or chance to appeal.

 

For Casalini, whose publication had accumulated over 22,000 articles since 2008 and reached around one million monthly readers, the removal appeared to be less a matter of policy enforcement and more an attempt to silence dissenting religious opinion.

 

Messainlatino.it was well known for covering issues surrounding traditional Catholic liturgy and had been cited by major outlets.

 

Following Google’s action, questions were raised in both the European Parliament and Italy’s Chamber of Deputies.

 

Legislators noted that the deletion “raises serious questions about the respect for freedom of expression, speech and religion” as guaranteed by Article 11 of the EU Charter of Fundamental Rights and Article 10 of the European Convention on Human Rights.

 

They also pointed to the Digital Services Act (DSA), which, despite being a censorship law, obliges platforms to apply their moderation policies with “due regard” for fundamental rights.

 

Casalini’s legal case focused on that provision. He argued that Google’s decision breached Article 14 of the DSA, which calls for a balance between policy enforcement and the user’s right to free expression.

 

As Casalini stated to LifeSiteNews, “Google acted in this way in violation of the Digital Services Act.”

 

Google responded through five lawyers based in Milan. The company claimed that an interview with Bishop Joseph Strickland, who opposed the ordination of women as deacons, violated its hate speech policy.

 

When the defense team countered that the post merely reported the bishop’s words and contained no discriminatory content, Google’s attorneys maintained in court documents that “it does not matter the source, more or less authoritative (bishop, Pontiff) of the post, if it violates the Policy.”

 

Judge De Sanctis of the Imperia Court dismissed Google’s reasoning. The court found that the company had failed to justify the deletion and had breached European laws ensuring fair access to digital services.

 

The ruling ordered the immediate reinstatement of the blog and described Google’s conduct as incompatible with the principles of freedom of expression recognized by EU law.

 

The decision highlights a central flaw within the Digital Services Act. Although the law formally instructs platforms to consider free expression, it still empowers them to remove speech unilaterally under the guise of compliance.

 

The result is a system where large corporations can suppress lawful viewpoints with minimal oversight.

 

By ruling in favor of Messainlatino.it, the Italian court affirmed that private digital companies are not above the law when they interfere with constitutionally protected speech.

 

The case may now serve as a precedent for future disputes over online censorship in Europe, reminding regulators and corporations alike that freedom of expression must remain the foundation of the digital public space.

 

STILL DANGEROUS

 

 

EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn

 

On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.

 

The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.

 

This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.

 

But for researchers closely following the legislation, the revised proposal is anything but a retreat.

 

“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM,” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.

 

We obtained a copy of the letter for you here.

 

The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.

 

The criticism is focussed on the reliance on artificial intelligence to parse private messages for illicit content. While policymakers tout AI as a technical fix to an emotionally charged problem, researchers say the technology is simply not ready for such a task.

 

“Current AI technology is far from being precise enough to undertake these tasks with guarantees for the necessary level of accuracy,” the experts warn.

 

False positives, they say, are not theoretical. They are a near-certainty. AI-based tools struggle with nuance and ambiguity, especially in areas like text-based grooming detection, where the intent is often buried under layers of context.

 

“False positives seem inevitable, both because of the inherent limitations of AI technologies and because the behaviors the regulation targets are ambiguous and deeply context-dependent.”

 

These aren’t just minor errors. Flagging benign conversations, such as chats between teenagers or with trusted adults, could trigger law enforcement investigations or platform bans. At scale, this becomes more than a privacy risk. It becomes a systemic failure.

 

“Extending the scope of targeted formats will further increase the very high number of false positives – incurring an unacceptable increase of the cost of human labor for additional verification and the corresponding privacy violations.”

 

The critics argue that such systems could flood investigators with noise, actually reducing their ability to find real cases of abuse.

 

“Expanding the scope of detection only opens the door to surveil and examine a larger part of conversations, without any guarantee of better protection – and with a high risk of diminishing overall protection by flooding investigators with false accusations that prevent them from investigating the real cases.”

 

Alongside message scanning, the proposal mandates age verification for users of encrypted messaging platforms and app stores deemed to pose a “high risk” to children. It’s a seemingly common-sense measure, but one that technology experts say is riddled with problems.

 

“Age assessment cannot be performed in a privacy-preserving way with current technology due to reliance on biometric, behavioural or contextual information (e.g., browsing history),” the letter states, pointing to contradictions between the proposed text and the EU’s own privacy standards.

 

There are also concerns about bias and exclusion. AI-powered age detection tools have been shown to produce higher error rates for marginalized groups and often rely on profiling methods that undermine fundamental rights.

 

“AI-driven age inference techniques are known to have high error rates and to be biased for certain minorities.”

 

Even more traditional verification methods raise red flags. Asking users to upload a passport or ID introduces a host of new risks. It’s not just disproportionate, the researchers argue. It’s dangerous.

 

“Presenting full documents (e.g., a passport scan) obviously brings security and privacy risks and it is disproportionate as it reveals much more information than the age.”

 

The deeper issue, however, is one of equity. Many people, especially vulnerable populations, simply do not have easy access to government-issued IDs. Mandating proof of age, even for basic communication tools, threatens to lock these users out of essential digital spaces.

 

“There is a substantial fraction of the population who might not have easy access to documents that afford such a proof. These users, despite being adults in their full right of using services, would be deprived of essential services (even some as important as talking to a doctor).

 

This is not a technological problem, and therefore no technology can address it in a satisfactory manner.”

 

The broader concern isn’t just the functionality of the tools or the viability of the rules. It’s the principle. Encryption has long been a bedrock of digital security, relied upon by activists, journalists, medical professionals, and everyday citizens alike. But once a private message can be scanned, even “voluntarily” by a service provider, that foundational guarantee is broken.

 

“Any communication in which results of a scan are reported, even if the scan is voluntary, can no longer be considered secure or private, and cannot be the backbone of a healthy digital society,” the letter declares.

 

This line is particularly important. It cuts through the legal jargon and technical ambiguity. If messaging platforms are allowed to opt in to content scanning, the pressure to conform, whether political, social, or economic, will be immense. Eventually, “voluntary” becomes the norm. And encryption becomes meaningless.

 

***

 

Interestingly, the European Parliament has charted a different course. Its version of the regulation sidesteps the more intrusive measures, focusing instead on targeted investigations involving identified suspects. It also avoids universal age verification requirements.

 

The divergence sets up a legislative standoff between Parliament and the Council, with the European Commission playing mediator.

 

Unless the Council’s draft sees significant revision, two contentious features, voluntary message scanning and mandatory age verification, will dominate the trilogue negotiations in the months ahead.

 

The academics, for their part, are urging caution before the November 19 vote. Their message is clear: proceed slowly, if at all.

 

“Even if deployed voluntarily, on-device detection technologies cannot be considered a reasonable tool to mitigate risks, as there is no proven benefit, while the potential for harm and abuse is enormous.”

 

“We conclude that age assessment presents an inherent disproportionate risk of serious privacy violation and discrimination, without guarantees of effectiveness.”

 

“The benefits do not outweigh the risks.”

 

In a climate where public trust in technology is already fragile, the Council’s proposal flirts with the edge of overreach. The tools being proposed carry real dangers. The benefits, if they exist, remain unproven.

 

Europe has often led the way on digital rights and privacy. On November 19, it will reveal whether that leadership still holds.

 

Reclaim The Net is funded by the community. Keep us going and get extra benefits by becoming a supporter today. Thank you.

 

Become a Supporter

 

Thanks for reading,

Reclaim The Net

Nov 152025
 

Crypto-Gram
November 15, 2025

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

If these links don’t work in your email client, try reading this issue of Crypto-Gram on the web.

  1. Apple’s Bug Bounty Program
  2. Cryptocurrency ATMs
  3. A Surprising Amount of Satellite Traffic Is Unencrypted
  4. Agentic AI’s OODA Loop Problem
  5. A Cybersecurity Merit Badge
  6. Failures in Face Recognition
  7. Serious F5 Breach
  8. Part Four of The Kryptos Sculpture
  9. First Wap: A Surveillance Computer You’ve Never Heard Of
  10. Louvre Jewel Heist
  11. Social Engineering People’s Credit Card Details
  12. Signal’s Post-Quantum Cryptographic Implementation
  13. The AI-Designed Bioweapon Arms Race
  14. Will AI Strengthen or Undermine Democracy?
  15. AI Summarization Optimization
  16. Cybercriminals Targeting Payroll Sites
  17. Scientists Need a Positive Vision for AI
  18. Rigged Poker Games
  19. Faking Receipts with AI
  20. New Attacks Against Secure Enclaves
  21. Prompt Injection in AI Browsers
  22. On Hacking Back
  23. Book Review: The Business of Secrets
  24. The Role of Humans in an AI-Powered World
  25. Upcoming Speaking Engagements

** *** ***** ******* *********** *************

Apple’s Bug Bounty Program

[2025.10.15] Apple is now offering a $2M bounty for a zero-click exploit. According to the Apple website:

Today we’re announcing the next major chapter for Apple Security Bounty, featuring the industry’s highest rewards, expanded research categories, and a flag system for researchers to objectively demonstrate vulnerabilities and obtain accelerated awards.

  1. We’re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks. This is an unprecedented amount in the industry and the largest payout offered by any bounty program we’re aware of and our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of $5 million. We’re also doubling or significantly increasing rewards in many other categories to encourage more intensive research. This includes $100,000 for a complete Gatekeeper bypass, and $1 million for broad unauthorized iCloud access, as no successful exploit has been demonstrated to date in either category.
  2. Our bounty categories are expanding to cover even more attack surfaces. Notably, we’re rewarding one-click WebKit sandbox escapes with up to $300,000, and wireless proximity exploits over any radio with up to $1 million.
  3. We’re introducing Target Flags, a new way for researchers to objectively demonstrate exploitability for some of our top bounty categories, including remote code execution and Transparency, Consent, and Control (TCC) bypasses and to help determine eligibility for a specific award. Researchers who submit reports with Target Flags will qualify for accelerated awards, which are processed immediately after the research is received and verified, even before a fix becomes available.

** *** ***** ******* *********** *************

Cryptocurrency ATMs

[2025.10.16] CNN has a great piece about how cryptocurrency ATMs are used to scam people out of their money. The fees are usurious, and they’re a common place for scammers to send victims to buy cryptocurrency for them. The companies behind the ATMs, at best, do not care about the harm they cause; the profits are just too good.

** *** ***** ******* *********** *************

A Surprising Amount of Satellite Traffic Is Unencrypted

[2025.10.17] Here’s the summary:

We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice calls and SMS, and consumer Internet traffic from in-flight wifi and mobile networks. This data can be passively observed by anyone with a few hundred dollars of consumer-grade hardware. There are thousands of geostationary satellite transponders globally, and data from a single transponder may be visible from an area as large as 40% of the surface of the earth.

Full paper. News article.

** *** ***** ******* *********** *************

Agentic AI’s OODA Loop Problem

[2025.10.20] The OODA loop — for observe, orient, decide, act — is a framework to understand decision-making in adversarial situations. We apply the same framework to artificial intelligence agents, who have to make their decisions with untrustworthy observations and orientation. To solve this problem, we need new systems of input, processing, and output integrity.

Many decades ago, U.S. Air Force Colonel John Boyd introduced the concept of the “OODA loop,” for Observe, Orient, Decide, and Act. These are the four steps of real-time continuous decision-making. Boyd developed it for fighter pilots, but it’s long been applied in artificial intelligence (AI) and robotics. An AI agent, like a pilot, executes the loop over and over, accomplishing its goals iteratively within an ever-changing environment. This is Anthropic’s definition: “Agents are models using tools in a loop.”1

OODA Loops for Agentic AI

Traditional OODA analysis assumes trusted inputs and outputs, in the same way that classical AI assumed trusted sensors, controlled environments, and physical boundaries. This no longer holds true. AI agents don’t just execute OODA loops; they embed untrusted actors within them. Web-enabled large language models (LLMs) can query adversary-controlled sources mid-loop. Systems that allow AI to use large corpora of content, such as retrieval-augmented generation (https://en.wikipedia.org/wiki/Retrieval-augmented_generation), can ingest poisoned documents. Tool-calling application programming interfaces can execute untrusted code. Modern AI sensors can encompass the entire Internet; their environments are inherently adversarial. That means that fixing AI hallucination is insufficient because even if the AI accurately interprets its inputs and produces corresponding output, it can be fully corrupt.

In 2022, Simon Willison identified a new class of attacks against AI systems: “prompt injection.”2 Prompt injection is possible because an AI mixes untrusted inputs with trusted instructions and then confuses one for the other. Willison’s insight was that this isn’t just a filtering problem; it’s architectural. There is no privilege separation, and there is no separation between the data and control paths. The very mechanism that makes modern AI powerful — treating all inputs uniformly — is what makes it vulnerable. The security challenges we face today are structural consequences of using AI for everything.

  1. Insecurities can have far-reaching effects. A single poisoned piece of training data can affect millions of downstream applications. In this environment, security debt accrues like technical debt.
  2. AI security has a temporal asymmetry. The temporal disconnect between training and deployment creates unauditable vulnerabilities. Attackers can poison a model’s training data and then deploy an exploit years later. Integrity violations are frozen in the model. Models aren’t aware of previous compromises since each inference starts fresh and is equally vulnerable.
  3. AI increasingly maintains state — in the form of chat history and key-value caches. These states accumulate compromises. Every iteration is potentially malicious, and cache poisoning persists across interactions.
  4. Agents compound the risks. Pretrained OODA loops running in one or a dozen AI agents inherit all of these upstream compromises. Model Context Protocol (MCP) and similar systems that allow AI to use tools create their own vulnerabilities that interact with each other. Each tool has its own OODA loop, which nests, interleaves, and races. Tool descriptions become injection vectors. Models can’t verify tool semantics, only syntax. “Submit SQL query” might mean “exfiltrate database” because an agent can be corrupted in prompts, training data, or tool definitions to do what the attacker wants. The abstraction layer itself can be adversarial.

For example, an attacker might want AI agents to leak all the secret keys that the AI knows to the attacker, who might have a collector running in bulletproof hosting in a poorly regulated jurisdiction. They could plant coded instructions in easily scraped web content, waiting for the next AI training set to include it. Once that happens, they can activate the behavior through the front door: tricking AI agents (think a lowly chatbot or an analytics engine or a coding bot or anything in between) that are increasingly taking their own actions, in an OODA loop, using untrustworthy input from a third-party user. This compromise persists in the conversation history and cached responses, spreading to multiple future interactions and even to other AI agents. All this requires us to reconsider risks to the agentic AI OODA loop, from top to bottom.

  • Observe: The risks include adversarial examples, prompt injection, and sensor spoofing. A sticker fools computer vision, a string fools an LLM. The observation layer lacks authentication and integrity.
  • Orient: The risks include training data poisoning, context manipulation, and semantic backdoors. The model’s worldview — its orientation — can be influenced by attackers months before deployment. Encoded behavior activates on trigger phrases.
  • Decide: The risks include logic corruption via fine-tuning attacks, reward hacking, and objective misalignment. The decision process itself becomes the payload. Models can be manipulated to trust malicious sources preferentially.
  • Act: The risks include output manipulation, tool confusion, and action hijacking. MCP and similar protocols multiply attack surfaces. Each tool call trusts prior stages implicitly.

AI gives the old phrase “inside your adversary’s OODA loop” new meaning. For Boyd’s fighter pilots, it meant that you were operating faster than your adversary, able to act on current data while they were still on the previous iteration. With agentic AI, adversaries aren’t just metaphorically inside; they’re literally providing the observations and manipulating the output. We want adversaries inside our loop because that’s where the data are. AI’s OODA loops must observe untrusted sources to be useful. The competitive advantage, accessing web-scale information, is identical to the attack surface. The speed of your OODA loop is irrelevant when the adversary controls your sensors and actuators.

Worse, speed can itself be a vulnerability. The faster the loop, the less time for verification. Millisecond decisions result in millisecond compromises.

The Source of the Problem

The fundamental problem is that AI must compress reality into model-legible forms. In this setting, adversaries can exploit the compression. They don’t have to attack the territory; they can attack the map. Models lack local contextual knowledge. They process symbols, not meaning. A human sees a suspicious URL; an AI sees valid syntax. And that semantic gap becomes a security gap.

Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? Double the attack surface. Security requires boundaries, but LLMs dissolve boundaries. More generally, existing mechanisms to improve models won’t help protect against attack. Fine-tuning preserves backdoors. Reinforcement learning with human feedback adds human preferences without removing model biases. Each training phase compounds prior compromises.

This is Ken Thompson’s “trusting trust” attack all over again.3 Poisoned states generate poisoned outputs, which poison future states. Try to summarize the conversation history? The summary includes the injection. Clear the cache to remove the poison? Lose all context. Keep the cache for continuity? Keep the contamination. Stateful systems can’t forget attacks, and so memory becomes a liability. Adversaries can craft inputs that corrupt future outputs.

This is the agentic AI security trilemma. Fast, smart, secure; pick any two. Fast and smart — you can’t verify your inputs. Smart and secure — you check everything, slowly, because AI itself can’t be used for this. Secure and fast — you’re stuck with models with intentionally limited capabilities.

This trilemma isn’t unique to AI. Some autoimmune disorders are examples of molecular mimicry — when biological recognition systems fail to distinguish self from nonself. The mechanism designed for protection becomes the pathology as T cells attack healthy tissue or fail to attack pathogens and bad cells. AI exhibits the same kind of recognition failure. No digital immunological markers separate trusted instructions from hostile input. The model’s core capability, following instructions in natural language, is inseparable from its vulnerability. Or like oncogenes, the normal function and the malignant behavior share identical machinery.

Prompt injection is semantic mimicry: adversarial instructions that resemble legitimate prompts, which trigger self-compromise. The immune system can’t add better recognition without rejecting legitimate cells. AI can’t filter malicious prompts without rejecting legitimate instructions. Immune systems can’t verify their own recognition mechanisms, and AI systems can’t verify their own integrity because the verification system uses the same corrupted mechanisms.

In security, we often assume that foreign/hostile code looks different from legitimate instructions, and we use signatures, patterns, and statistical anomaly detection to detect it. But getting inside someone’s AI OODA loop uses the system’s native language. The attack is indistinguishable from normal operation because it is normal operation. The vulnerability isn’t a defect — it’s the feature working correctly.

Where to Go Next?

The shift to an AI-saturated world has been dizzying. Seemingly overnight, we have AI in every technology product, with promises of even more — and agents as well. So where does that leave us with respect to security?

Physical constraints protected Boyd’s fighter pilots. Radar returns couldn’t lie about physics; fooling them, through stealth or jamming, constituted some of the most successful attacks against such systems that are still in use today. Observations were authenticated by their presence. Tampering meant physical access. But semantic observations have no physics. When every AI observation is potentially corrupted, integrity violations span the stack. Text can claim anything, and images can show impossibilities. In training, we face poisoned datasets and backdoored models. In inference, we face adversarial inputs and prompt injection. During operation, we face a contaminated context and persistent compromise. We need semantic integrity: verifying not just data but interpretation, not just content but context, not just information but understanding. We can add checksums, signatures, and audit logs. But how do you checksum a thought? How do you sign semantics? How do you audit attention?

Computer security has evolved over the decades. We addressed availability despite failures through replication and decentralization. We addressed confidentiality despite breaches using authenticated encryption. Now we need to address integrity despite corruption.4

Trustworthy AI agents require integrity because we can’t build reliable systems on unreliable foundations. The question isn’t whether we can add integrity to AI but whether the architecture permits integrity at all.

AI OODA loops and integrity aren’t fundamentally opposed, but today’s AI agents observe the Internet, orient via statistics, decide probabilistically, and act without verification. We built a system that trusts everything, and now we hope for a semantic firewall to keep it safe. The adversary isn’t inside the loop by accident; it’s there by architecture. Web-scale AI means web-scale integrity failure. Every capability corrupts.

Integrity isn’t a feature you add; it’s an architecture you choose. So far, we have built AI systems where “fast” and “smart” preclude “secure.” We optimized for capability over verification, for accessing web-scale data over ensuring trust. AI agents will be even more powerful — and increasingly autonomous. And without integrity, they will also be dangerous.

References

  1. 1. S. Willison, Simon Willison’s Weblog, May 22, 2025. [Online]. Available: https://simonwillison.net/2025/May/22/tools-in-a-loop/
  2. 2. S. Willison, “Prompt injection attacks against GPT-3,” Simon Willison’s Weblog, Sep. 12, 2022. [Online]. Available: https://simonwillison.net/2022/Sep/12/prompt-injection/
  3. 3. K. Thompson, “Reflections on trusting trust,” ACM, vol. 27, no. 8, Aug. 1984. [Online]. Available: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf
  4. 4. B. Schneier, “The age of integrity,” IEEE Security & Privacy, vol. 23, no. 3, p. 96, May/Jun. 2025. [Online]. Available: https://www.computer.org/csdl/magazine/sp/2025/03/11038984/27COaJtjDOM

This essay was written with Barath Raghavan, and originally appeared in IEEE Security & Privacy.

** *** ***** ******* *********** *************

A Cybersecurity Merit Badge

[2025.10.21] Scouting America (formerly known as Boy Scouts) has a new badge in cybersecurity. There’s an image in the article; it looks good.

I want one.

** *** ***** ******* *********** *************

Failures in Face Recognition

[2025.10.22] Interesting article on people with nonstandard faces and how facial recognition systems fail for them.

Some of those living with facial differences tell WIRED they have undergone multiple surgeries and experienced stigma for their entire lives, which is now being echoed by the technology they are forced to interact with. They say they haven’t been able to access public services due to facial verification services failing, while others have struggled to access financial services. Social media filters and face-unlocking systems on phones often won’t work, they say.

It’s easy to blame the tech, but the real issue are the engineers who only considered a narrow spectrum of potential faces. That needs to change. But also, we need easy-to-access backup systems when the primary ones fail.

** *** ***** ******* *********** *************

Serious F5 Breach

[2025.10.23] This is bad:

F5, a Seattle-based maker of networking software, disclosed the breach on Wednesday. F5 said a “sophisticated” threat group working for an undisclosed nation-state government had surreptitiously and persistently dwelled in its network over a “long-term.” Security researchers who have responded to similar intrusions in the past took the language to mean the hackers were inside the F5 network for years.

During that time, F5 said, the hackers took control of the network segment the company uses to create and distribute updates for BIG IP, a line of server appliances that F5 says is used by 48 of the world’s top 50 corporations. Wednesday’s disclosure went on to say the threat group downloaded proprietary BIG-IP source code information about vulnerabilities that had been privately discovered but not yet patched. The hackers also obtained configuration settings that some customers used inside their networks.

Control of the build system and access to the source code, customer configurations, and documentation of unpatched vulnerabilities has the potential to give the hackers unprecedented knowledge of weaknesses and the ability to exploit them in supply-chain attacks on thousands of networks, many of which are sensitive. The theft of customer configurations and other data further raises the risk that sensitive credentials can be abused, F5 and outside security experts said.

F5 announcement.

** *** ***** ******* *********** *************

Part Four of The Kryptos Sculpture

[2025.10.24] Two people found the solution. They used the power of research, not cryptanalysis, finding clues amongst the Sanborn papers at the Smithsonian’s Archives of American Art.

This comes as an awkward time, as Sanborn is auctioning off the solution. There were legal threats — I don’t understand their basis — and the solvers are not publishing their solution.

** *** ***** ******* *********** *************

First Wap: A Surveillance Computer You’ve Never Heard Of

[2025.10.27] Mother Jones has a long article on surveillance arms manufacturers, their wares, and how they avoid export control laws:

Operating from their base in Jakarta, where permissive export laws have allowed their surveillance business to flourish, First Wap’s European founders and executives have quietly built a phone-tracking empire, with a footprint extending from the Vatican to the Middle East to Silicon Valley.

It calls its proprietary system Altamides, which it describes in promotional materials as “a unified platform to covertly locate the whereabouts of single or multiple suspects in real-time, to detect movement patterns, and to detect whether suspects are in close vicinity with each other.”

Altamides leaves no trace on the phones it targets, unlike spyware such as Pegasus. Nor does it require a target to click on a malicious link or show any of the telltale signs (such as overheating or a short battery life) of remote monitoring.

Its secret is shrewd use of the antiquated telecom language Signaling System No. 7, known as SS7, that phone carriers use to route calls and text messages. Any entity with SS7 access can send queries requesting information about which cell tower a phone subscriber is nearest to, an essential first step to sending a text message or making a call to that subscriber. But First Wap’s technology uses SS7 to zero in on phone numbers and trace the location of their users.

Much more in this Lighthouse Reports analysis.

** *** ***** ******* *********** *************

Louvre Jewel Heist

[2025.10.27] I assume I don’t have to explain last week’s Louvre jewel heist. I love a good caper, and have (like many others) eagerly followed the details. An electric ladder to a second-floor window, an angle grinder to get into the room and the display cases, security guards there more to protect patrons than valuables — seven minutes, in and out.

There were security lapses:

The Louvre, it turns out — at least certain nooks of the ancient former palace — is something like an anopticon: a place where no one is observed. The world now knows what the four thieves (two burglars and two accomplices) realized as recently as last week: The museum’s Apollo Gallery, which housed the stolen items, was monitored by a single outdoor camera angled away from its only exterior point of entry, a balcony. In other words, a free-roaming Roomba could have provided the world’s most famous museum with more information about the interior of this space. There is no surveillance footage of the break-in.

Professional jewelry thieves were not impressed with the four. Here’s Larry Lawton:

“I robbed 25, 30 jewelry stores — 20 million, 18 million, something like that,” Mr. Lawton said. “Did you know that I never dropped a ring or an earring, no less, a crown worth 20 million?”

He thinks that they had a co-conspirator on the inside.

Museums, especially smaller ones, are good targets for theft because they rarely secure what they hold to its true value. They can’t; it would be prohibitively expensive. This makes them an attractive target.

We might find out soon. It looks like some people have been arrested

Not being out of the country — out of the EU — by now was sloppy. Leaving DNA evidence was sloppy. I can hope the criminals were sloppy enough not to have disassembled the jewelry by now, but I doubt it. They were probably taken apart within hours of the theft.

The whole thing is sad, really. Unlike stolen paintings, those jewels have no value in their original form. They need to be taken apart and sold in pieces. But then their value drops considerably — so the end result is that most of the worth of those items disappears. It would have been much better to pay the thieves not to rob the Louvre.

** *** ***** ******* *********** *************

Social Engineering People’s Credit Card Details

[2025.10.28] Good Wall Street Journal article on criminal gangs that scam people out of their credit card information:

Your highway toll payment is now past due, one text warns. You have U.S. Postal Service fees to pay, another threatens. You owe the New York City Department of Finance for unpaid traffic violations.

The texts are ploys to get unsuspecting victims to fork over their credit-card details. The gangs behind the scams take advantage of this information to buy iPhones, gift cards, clothing and cosmetics.

Criminal organizations operating out of China, which investigators blame for the toll and postage messages, have used them to make more than $1 billion over the last three years, according to the Department of Homeland Security.

[…]

Making the fraud possible: an ingenious trick allowing criminals to install stolen card numbers in Google and Apple Wallets in Asia, then share the cards with the people in the U.S. making purchases half a world away.

** *** ***** ******* *********** *************

Signal’s Post-Quantum Cryptographic Implementation

[2025.10.29] Signal has just rolled out its quantum-safe cryptographic implementation.

Ars Technica has a really good article with details:

Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.

The Signal engineers have given this third ratchet the formal name: Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the Usenix 25 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or…, the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don’t care about quantum computers.

Also read this post on X.

** *** ***** ******* *********** *************

The AI-Designed Bioweapon Arms Race

[2025.10.30] Interesting article about the arms race between AI systems that invent/design new biological pathogens, and AI systems that detect them before they’re created:

The team started with a basic test: use AI tools to design variants of the toxin ricin, then test them against the software that is used to screen DNA orders. The results of the test suggested there was a risk of dangerous protein variants slipping past existing screening software, so the situation was treated like the equivalent of a zero-day vulnerability.

[…]

Details of that original test are being made available today as part of a much larger analysis that extends the approach to a large range of toxic proteins. Starting with 72 toxins, the researchers used three open source AI packages to generate a total of about 75,000 potential protein variants.

And this is where things get a little complicated. Many of the AI-designed protein variants are going to end up being non-functional, either subtly or catastrophically failing to fold up into the correct configuration to create an active toxin.

[…]

In any case, DNA sequences encoding all 75,000 designs were fed into the software that screens DNA orders for potential threats. One thing that was very clear is that there were huge variations in the ability of the four screening programs to flag these variant designs as threatening. Two of them seemed to do a pretty good job, one was mixed, and another let most of them through. Three of the software packages were updated in response to this performance, which significantly improved their ability to pick out variants.

There was also a clear trend in all four screening packages: The closer the variant was to the original structurally, the more likely the package (both before and after the patches) was to be able to flag it as a threat. In all cases, there was also a cluster of variant designs that were unlikely to fold into a similar structure, and these generally weren’t flagged as threats.

The research is all preliminary, and there are a lot of ways in which the experiment diverges from reality. But I am not optimistic about this particular arms race. I think that the ability of AI systems to create something deadly will advance faster than the ability of AI systems to detect its components.

** *** ***** ******* *********** *************

Will AI Strengthen or Undermine Democracy?

[2025.10.31] Listen to the Audio on NextBigIdeaClub.com

Below, co-authors Bruce Schneier and Nathan E. Sanders share five key insights from their new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.

What’s the big idea?

AI can be used both for and against the public interest within democracies. It is already being used in the governing of nations around the world, and there is no escaping its continued use in the future by leaders, policy makers, and legal enforcers. How we wire AI into democracy today will determine if it becomes a tool of oppression or empowerment.

1. AI’s global democratic impact is already profound.

It’s been just a few years since ChatGPT stormed into view and AI’s influence has already permeated every democratic process in governments around the world:

  • In 2022, an artist collective in Denmark founded the world’s first political party committed to an AI-generated policy platform.
  • Also in 2022, South Korean politicians running for the presidency were the first to use AI avatars to communicate with voters en masse.
  • In 2023, a Brazilian municipal legislator passed the first enacted law written by AI.
  • In 2024, a U.S. federal court judge started using AI to interpret the plain meaning of words in U.S. law.
  • Also in 2024, the Biden administration disclosed more than two thousand discrete use cases for AI across the agencies of the U.S. federal government.

The examples illustrate the diverse uses of AI across citizenship, politics, legislation, the judiciary, and executive administration.

Not all of these uses will create lasting change. Some of these will be one-offs. Some are inherently small in scale. Some were publicity stunts. But each use case speaks to a shifting balance of supply and demand that AI will increasingly mediate.

Legislators need assistance drafting bills and have limited staff resources, especially at the local and state level. Historically, they have looked to lobbyists and interest groups for help. Increasingly, it’s just as easy for them to use an AI tool.

2. The first places AI will be used are where there is the least public oversight.

Many of the use cases for AI in governance and politics have vocal objectors. Some make us uncomfortable, especially in the hands of authoritarians or ideological extremists.

In some cases, politics will be a regulating force to prevent dangerous uses of AI. Massachusetts has banned the use of AI face recognition in law enforcement because of real concerns voiced by the public about their tendency to encode systems of racial bias.

Some of the uses we think might be most impactful are unlikely to be adopted fast because of legitimate concern about their potential to make mistakes, introduce bias, or subvert human agency. AIs could be assistive tools for citizens, acting as their voting proxies to help us weigh in on larger numbers of more complex ballot initiatives, but we know that many will object to anything that verges on AIs being given a vote.

But AI will continue to be rapidly adopted in some aspects of democracy, regardless of how the public feels. People within democracies, even those in government jobs, often have great independence. They don’t have to ask anyone if it’s ok to use AI, and they will use it if they see that it benefits them. The Brazilian city councilor who used AI to draft a bill did not ask for anyone’s permission. The U.S. federal judge who used AI to help him interpret law did not have to check with anyone first. And the Trump administration seems to be using AI for everything from drafting tariff policies to writing public health reports — with some obvious drawbacks.

It’s likely that even the thousands of disclosed AI uses in government are only the tip of the iceberg. These are just the applications that governments have seen fit to share; the ones they think are the best vetted, most likely to persist, or maybe the least controversial to disclose.

3. Elites and authoritarians will use AI to concentrate power.

Many Westerners point to China as a cautionary tale of how AI could empower autocracy, but the reality is that AI provides structural advantages to entrenched power in democratic governments, too. The nature of automation is that it gives those at the top of a power structure more control over the actions taken at its lower levels.

It’s famously hard for newly elected leaders to exert their will over the many layers of human bureaucracies. The civil service is large, unwieldy, and messy. But it’s trivial for an executive to change the parameters and instructions of an AI model being used to automate the systems of government.

The dynamic of AI effectuating concentration of power extends beyond government agencies. Over the past five years, Ohio has undertaken a project to do a wholesale revision of its administrative code using AI. The leaders of that project framed it in terms of efficiency and good governance: deleting millions of words of outdated, unnecessary, or redundant language. The same technology could be applied to advance more ideological ends, like purging all statutory language that places burdens on business, neglects to hold businesses accountable, protects some class of people, or fails to protect others.

Whether you like or despise automating the enactment of those policies will depend on whether you stand with or are opposed to those in power, and that’s the point. AI gives any faction with power the potential to exert more control over the levers of government.

4. Organizers will find ways to use AI to distribute power instead.

We don’t have to resign ourselves to a world where AI makes the rich richer and the elite more powerful. This is a technology that can also be wielded by outsiders to help level the playing field.

In politics, AI gives upstart and local candidates access to skills and the ability to do work on a scale that used to only be available to well-funded campaigns. In the 2024 cycle, Congressional candidates running against incumbents like Glenn Cook in Georgia and Shamaine Daniels in Pennsylvania used AI to help themselves be everywhere all at once. They used AI to make personalized robocalls to voters, write frequent blog posts, and even generate podcasts in the candidate’s voice. In Japan, a candidate for Governor of Tokyo used an AI avatar to respond to more than eight thousand online questions from voters.

Outside of public politics, labor organizers are also leveraging AI to build power. The Worker’s Lab is a U.S. nonprofit developing assistive technologies for labor unions, like AI-enabled apps that help service workers report workplace safety violations. The 2023 Writers’ Guild of America strike serves as a blueprint for organizers. They won concessions from Hollywood studios that protect their members against being displaced by AI while also winning them guarantees for being able to use AI as assistive tools to their own benefit.

5. The ultimate democratic impact of AI depends on us.

If you are excited about AI and see the potential for it to make life, and maybe even democracy, better around the world, recognize that there are a lot of people who don’t feel the same way.

If you are disturbed about the ways you see AI being used and worried about the future that leads to, recognize that the trajectory we’re on now is not the only one available.

The technology of AI itself does not pose an inherent threat to citizens, workers, and the public interest. Like other democratic technologies — voting processes, legislative districts, judicial review — its impacts will depend on how it’s developed, who controls it, and how it’s used.

Constituents of democracies should do four things:

  • Reform the technology ecosystem to be more trustworthy, so that AI is developed with more transparency, more guardrails around exploitative use of data, and public oversight.
  • Resist inappropriate uses of AI in government and politics, like facial recognition technologies that automate surveillance and encode inequity.
  • Responsibly use AI in government where it can help improve outcomes, like making government more accessible to people through translation and speeding up administrative decision processes.
  • Renovate the systems of government vulnerable to the disruptive potential of AI’s superhuman capabilities, like political advertising rules that never anticipated deepfakes.

These four Rs are how we can rewire our democracy in a way that applies AI to truly benefit the public interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Next Big Idea Club.

EDITED TO ADD (11/6): This essay was republished by Fast Company.

** *** ***** ******* *********** *************

AI Summarization Optimization

[2025.11.03] These days, the most important meeting attendee isn’t a person: It’s the AI notetaker.

This system assigns action items and determines the importance of what is said. If it becomes necessary to revisit the facts of the meeting, its summary is treated as impartial evidence.

But clever meeting attendees can manipulate this system’s record by speaking more to what the underlying AI weights for summarization and importance than to their colleagues. As a result, you can expect some meeting attendees to use language more likely to be captured in summaries, timing their interventions strategically, repeating key points, and employing formulaic phrasing that AI models are more likely to pick up on. Welcome to the world of AI summarization optimization (AISO).

Optimizing for algorithmic manipulation

AI summarization optimization has a well-known precursor: SEO.

Search-engine optimization is as old as the World Wide Web. The idea is straightforward: Search engines scour the internet digesting every possible page, with the goal of serving the best results to every possible query. The objective for a content creator, company, or cause is to optimize for the algorithm search engines have developed to determine their webpage rankings for those queries. That requires writing for two audiences at once: human readers and the search-engine crawlers indexing content. Techniques to do this effectively are passed around like trade secrets, and a $75 billion industry offers SEO services to organizations of all sizes.

More recently, researchers have documented techniques for influencing AI responses, including large-language model optimization (LLMO) and generative engine optimization (GEO). Tricks include content optimization — adding citations and statistics — and adversarial approaches: using specially crafted text sequences. These techniques often target sources that LLMs heavily reference, such as Reddit, which is claimed to be cited in 40% of AI-generated responses. The effectiveness and real-world applicability of these methods remains limited and largely experimental, although there is substantial evidence that countries such as Russia are actively pursuing this.

AI summarization optimization follows the same logic on a smaller scale. Human participants in a meeting may want a certain fact highlighted in the record, or their perspective to be reflected as the authoritative one. Rather than persuading colleagues directly, they adapt their speech for the notetaker that will later define the “official” summary. For example:

  • “The main factor in last quarter’s delay was supply chain disruption.”
  • “The key outcome was overwhelmingly positive client feedback.”
  • “Our takeaway here is in alignment moving forward.”
  • “What matters here is the efficiency gains, not the temporary cost overrun.”

The techniques are subtle. They employ high-signal phrases such as “key takeaway” and “action item,” keep statements short and clear, and repeat them when possible. They also use contrastive framing (“this, not that”), and speak early in the meeting or at transition points.

Once spoken words are transcribed, they enter the model’s input. Cue phrases — and even transcription errors — can steer what makes it into the summary. In many tools, the output format itself is also a signal: Summarizers often offer sections such as “Key Takeaways” or “Action Items,” so language that mirrors those headings is more likely to be included. In effect, well-chosen phrases function as implicit markers that guide the AI toward inclusion.

Research confirms this. Early AI summarization research showed that models trained to reconstruct summary-style sentences systematically overweigh such content. Models over-rely on early-position content in news. And models often overweigh statements at the start or end of a transcript, underweighting the middle. Recent work further confirms vulnerability to phrasing-based manipulation: models cannot reliably distinguish embedded instructions from ordinary content, especially when phrasing mimics salient cues.

How to combat AISO

If AISO becomes common, three forms of defense will emerge. First, meeting participants will exert social pressure on one another. When researchers secretly deployed AI bots in Reddit’s r/changemyview community, users and moderators responded with strong backlash calling it “psychological manipulation.” Anyone using obvious AI-gaming phrases may face similar disapproval.

Second, organizations will start governing meeting behavior using AI: risk assessments and access restrictions before the meetings even start, detection of AISO techniques in meetings, and validation and auditing after the meetings.

Third, AI summarizers will have their own technical countermeasures. For example, the AI security company CloudSEK recommends content sanitization to strip suspicious inputs, prompt filtering to detect meta-instructions and excessive repetition, context window balancing to weight repeated content less heavily, and user warnings showing content provenance.

Broader defenses could draw from security and AI safety research: preprocessing content to detect dangerous patterns, consensus approaches requiring consistency thresholds, self-reflection techniques to detect manipulative content, and human oversight protocols for critical decisions. Meeting-specific systems could implement additional defenses: tagging inputs by provenance, weighting content by speaker role or centrality with sentence-level importance scoring, and discounting high-signal phrases while favoring consensus over fervor.

Reshaping human behavior

AI summarization optimization is a small, subtle shift, but it illustrates how the adoption of AI is reshaping human behavior in unexpected ways. The potential implications are quietly profound.

Meetings — humanity’s most fundamental collaborative ritual — are being silently reengineered by those who understand the algorithm’s preferences. The articulate are gaining an invisible advantage over the wise. Adversarial thinking is becoming routine, embedded in the most ordinary workplace rituals, and, as AI becomes embedded in organizational life, strategic interactions with AI notetakers and summarizers may soon be a necessary executive skill for navigating corporate culture.

AI summarization optimization illustrates how quickly humans adapt communication strategies to new technologies. As AI becomes more embedded in workplace communication, recognizing these emerging patterns may prove increasingly important.

This essay was written with Gadi Evron, and originally appeared in CSO.

** *** ***** ******* *********** *************

Cybercriminals Targeting Payroll Sites

[2025.11.04] Microsoft is warning of a scam involving online payroll systems. Criminals use social engineering to steal people’s credentials, and then divert direct deposits into accounts that they control. Sometimes they do other things to make it harder for the victim to realize what is happening.

I feel like this kind of thing is happening everywhere, with everything. As we move more of our personal and professional lives online, we enable criminals to subvert the very systems we rely on.

** *** ***** ******* *********** *************

Scientists Need a Positive Vision for AI

[2025.11.05] For many in the research community, it’s gotten harder to be optimistic about the impacts of artificial intelligence.

As authoritarianism is rising around the world, AI-generated “slop” is overwhelming legitimate media, while AI-generated deepfakes are spreading misinformation and parroting extremist messages. AI is making warfare more precise and deadly amidst intransigent conflicts. AI companies are exploiting people in the global South who work as data labelers, and profiting from content creators worldwide by using their work without license or compensation. The industry is also affecting an already-roiling climate with its enormous energy demands.

Meanwhile, particularly in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.

This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.

The Academy’s View of AI

A Pew study in April found that 56 percent of AI experts (authors and presenters of AI-related conference papers) predict that AI will have positive effects on society. But that optimism doesn’t extend to the scientific community at large. A 2023 survey of 232 scientists by the Center for Science, Technology and Environmental Policy Studies at Arizona State University found more concern than excitement about the use of generative AI in daily life — by nearly a three to one ratio.

We have encountered this sentiment repeatedly. Our careers of diverse applied work have brought us in contact with many research communities: privacy, cybersecurity, physical sciences, drug discovery, public health, public interest technology, and democratic innovation. In all of these fields, we’ve found strong negative sentiment about the impacts of AI. The feeling is so palpable that we’ve often been asked to represent the voice of the AI optimist, even though we spend most of our time writing about the need to reform the structures of AI development.

We understand why these audiences see AI as a destructive force, but this negativity engenders a different concern: that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.

Elements of a Positive Vision for AI

Many have argued that turning the tide of climate action requires clearly articulating a path towards positive outcomes. In the same way, while scientists and technologists should anticipate, warn against, and help mitigate the potential harms of AI, they should also highlight the ways the technology can be harnessed for good, galvanizing public action towards those ends.

There are myriad ways to leverage and reshape AI to improve peoples’ lives, distribute rather than concentrate power, and even strengthen democratic processes. Many examples have arisen from the scientific community and deserve to be celebrated.

Some examples: AI is eliminating communication barriers across languages, including under-resourced contexts like marginalized sign languages and indigenous African languages. It is helping policymakers incorporate the viewpoints of many constituents through AI-assisted deliberations and legislative engagement. Large language models can scale individual dialogs to address climatechange skepticism, spreading accurate information at a critical moment. National labs are building AI foundation models to accelerate scientific research. And throughout the fields of medicine and biology, machine learning is solving scientific problems like the prediction of protein structure in aid of drug discovery, which was recognized with a Nobel Prize in 2024.

While each of these applications is nascent and surely imperfect, they all demonstrate that AI can be wielded to advance the public interest. Scientists should embrace, champion, and expand on such efforts.

A Call to Action for Scientists

In our new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, we describe four key actions for policymakers committed to steering AI toward the public good.

These apply to scientists as well. Researchers should work to reform the AI industry to be more ethical, equitable, and trustworthy. We must collectively develop ethical norms for research that advance and applies AI, and should use and draw attention to AI developers who adhere to those norms.

Second, we should resist harmful uses of AI by documenting the negative applications of AI and casting a light on inappropriate uses.

Third, we should responsibly use AI to make society and peoples’ lives better, exploiting its capabilities to help the communities they serve.

And finally, we must advocate for the renovation of institutions to prepare them for the impacts of AI; universities, professional societies, and democratic organizations are all vulnerable to disruption.

Scientists have a special privilege and responsibility: We are close to the technology itself and therefore well positioned to influence its trajectory. We must work to create an AI-infused world that we want to live in. Technology, as the historian Melvin Kranzberg observed, “is neither good nor bad; nor is it neutral.” Whether the AI we build is detrimental or beneficial to society depends on the choices we make today. But we cannot create a positive future without a vision of what it looks like.

This essay was written with Nathan E. Sanders, and originally appeared in IEEE Spectrum.

** *** ***** ******* *********** *************

Rigged Poker Games

[2025.11.06] The Department of Justice has indicted thirty-one people over the high-tech rigging of high-stakes poker games.

In a typical legitimate poker game, a dealer uses a shuffling machine to shuffle the cards randomly before dealing them to all the players in a particular order. As set forth in the indictment, the rigged games used altered shuffling machines that contained hidden technology allowing the machines to read all the cards in the deck. Because the cards were always dealt in a particular order to the players at the table, the machines could determine which player would have the winning hand. This information was transmitted to an off-site member of the conspiracy, who then transmitted that information via cellphone back to a member of the conspiracy who was playing at the table, referred to as the “Quarterback” or “Driver.” The Quarterback then secretly signaled this information (usually by prearranged signals like touching certain chips or other items on the table) to other co-conspirators playing at the table, who were also participants in the scheme. Collectively, the Quarterback and other players in on the scheme (i.e., the cheating team) used this information to win poker games against unwitting victims, who sometimes lost tens or hundreds of thousands of dollars at a time. The defendants used other cheating technology as well, such as a chip tray analyzer (essentially, a poker chip tray that also secretly read all cards using hidden cameras), an x-ray table that could read cards face down on the table, and special contact lenses or eyeglasses that could read pre-marked cards.

News articles.

** *** ***** ******* *********** *************

Faking Receipts with AI

[2025.11.07] Over the past few decades, it’s become easier and easier to create fake receipts. Decades ago, it required special paper and printers — I remember a company in the UK advertising its services to people trying to cover up their affairs. Then, receipts became computerized, and faking them required some artistic skills to make the page look realistic.

Now, AI can do it all:

Several receipts shown to the FT by expense management platforms demonstrated the realistic nature of the images, which included wrinkles in paper, detailed itemization that matched real-life menus, and signatures.

[…]

The rise in these more realistic copies has led companies to turn to AI to help detect fake receipts, as most are too convincing to be found by human reviewers.

The software works by scanning receipts to check the metadata of the image to discover whether an AI platform created it. However, this can be easily removed by users taking a photo or a screenshot of the picture.

To combat this, it also considers other contextual information by examining details such as repetition in server names and times and broader information about the employee’s trip.

Yet another AI-powered security arms race.

** *** ***** ******* *********** *************

New Attacks Against Secure Enclaves

[2025.11.10] Encryption can protect data at rest and data in transit, but does nothing for data in use. What we have are secure enclaves. I’ve written about this before:

Almost all cloud services have to perform some computation on our data. Even the simplest storage provider has code to copy bytes from an internal storage system and deliver them to the user. End-to-end encryption is sufficient in such a narrow context. But often we want our cloud providers to be able to perform computation on our raw data: search, analysis, AI model training or fine-tuning, and more. Without expensive, esoteric techniques, such as secure multiparty computation protocols or homomorphic encryption techniques that can perform calculations on encrypted data, cloud servers require access to the unencrypted data to do anything useful.

Fortunately, the last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.

Secure enclaves are critical in our modern cloud-based computing architectures. And, of course, they have vulnerabilities:

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and TDX/SDX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month — which worked only against CPUs using DDR4 memory — TEE.fail works against DDR5, allowing them to work against the latest TEEs.

Yes, these attacks require physical access. But that’s exactly the threat model secure enclaves are supposed to secure against.

** *** ***** ******* *********** *************

Prompt Injection in AI Browsers

[2025.11.11] This is why AIs are not ready to be personal assistants:

A new attack called ‘CometJacking’ exploits URL parameters to pass to Perplexity’s Comet AI browser hidden instructions that allow access to sensitive data from connected services, like email and calendar.

In a realistic scenario, no credentials or user interaction are required and a threat actor can leverage the attack by simply exposing a maliciously crafted URL to targeted users.

[…]

CometJacking is a prompt-injection attack where the query string processed by the Comet AI browser contains malicious instructions added using the ‘collection’ parameter of the URL.

LayerX researchers say that the prompt tells the agent to consult its memory and connected services instead of searching the web. As the AI tool is connected to various services, an attacker leveraging the CometJacking method could exfiltrate available data.

In their tests, the connected services and accessible data include Google Calendar invites and Gmail messages and the malicious prompt included instructions to encode the sensitive data in base64 and then exfiltrate them to an external endpoint.

According to the researchers, Comet followed the instructions and delivered the information to an external system controlled by the attacker, evading Perplexity’s checks.

I wrote previously:

Prompt injection isn’t just a minor security problem we need to deal with. It’s a fundamental property of current LLM technology. The systems have no ability to separate trusted commands from untrusted data, and there are an infinite number of prompt injection attacks with no way to block them as a class. We need some new fundamental science of LLMs before we can solve this.

** *** ***** ******* *********** *************

On Hacking Back

[2025.11.12] Former DoJ attorney John Carlin writes about hackback, which he defines thus: “A hack back is a type of cyber response that incorporates a counterattack designed to proactively engage with, disable, or collect evidence about an attacker. Although hack backs can take on various forms, they are — by definition — not passive defensive measures.”

His conclusion:

As the law currently stands, specific forms of purely defense measures are authorized so long as they affect only the victim’s system or data.

At the other end of the spectrum, offensive measures that involve accessing or otherwise causing damage or loss to the hacker’s systems are likely prohibited, absent government oversight or authorization. And even then parties should proceed with caution in light of the heightened risks of misattribution, collateral damage, and retaliation.

As for the broad range of other hack back tactics that fall in the middle of active defense and offensive measures, private parties should continue to engage in these tactics only with government oversight or authorization. These measures exist within a legal gray area and would likely benefit from amendments to the CFAA and CISA that clarify and carve out the parameters of authorization for specific self-defense measures. But in the absence of amendments or clarification on the scope of those laws, private actors can seek governmental authorization through an array of channels, whether they be partnering with law enforcement or seeking authorization to engage in more offensive tactics from the courts in connection with private litigation.

** *** ***** ******* *********** *************

Book Review: The Business of Secrets

[2025.11.13] The Business of Secrets: Adventures in Selling Encryption Around the World by Fred Kinch (May 24, 2024)

From the vantage point of today, it’s surreal reading about the commercial cryptography business in the 1970s. Nobody knew anything. The manufacturers didn’t know whether the cryptography they sold was any good. The customers didn’t know whether the crypto they bought was any good. Everyone pretended to know, thought they knew, or knew better than to even try to know.

The Business of Secrets is the self-published memoirs of Fred Kinch. He was founder and vice president of — mostly sales — at a US cryptographic hardware company called Datotek, from company’s founding in 1969 until 1982. It’s mostly a disjointed collection of stories about the difficulties of selling to governments worldwide, along with descriptions of the highs and (mostly) lows of foreign airlines, foreign hotels, and foreign travel in general. But it’s also about encryption.

Datotek sold cryptographic equipment in the era after rotor machines and before modern academic cryptography. The company initially marketed computer-file encryption, but pivoted to link encryption — low-speed data, voice, fax — because that’s what the market wanted.

These were the years where the NSA hired anyone promising in the field, and routinely classified — and thereby blocked — publication of academic mathematics papers of those they didn’t hire. They controlled the fielding of strong cryptography by aggressively using the International Traffic in Arms regulation. Kinch talks about the difficulties in getting an expert license for Datotek’s products; he didn’t know that the only reason he ever got that license was because the NSA was able to break his company’s stuff. He had no idea that his largest competitor, the Swiss company Crypto AG, was owned and controlled by the CIA and its West German equivalent. “Wouldn’t that have made our life easier if we had known that back in the 1970s?” Yes, it would. But no one knew.

Glimmers of the clandestine world peek out of the book. Countries like France ask detailed tech questions, borrow or buy a couple of units for “evaluation,” and then disappear again. Did they break the encryption? Did they just want to see what their adversaries were using? No one at Datotek knew.

Kinch “carried the key generator logic diagrams and schematics” with him — even today, it’s good practice not to rely on their secrecy for security — but the details seem laughably insecure: four linear shift registers of 29, 23, 13, and 7 bits, variable stepping, and a small nonlinear final transformation. The NSA probably used this as a challenge to its new hires. But Datotek didn’t know that, at the time.

Kinch writes: “The strength of the cryptography had to be accepted on trust and only on trust.” Yes, but it’s so, so weird to read about it in practice. Kinch demonstrated the security of his telephone encryptors by hooking a pair of them up and having people listen to the encrypted voice. It’s rather like demonstrating the safety of a food additive by showing that someone doesn’t immediately fall over dead after eating it. (In one absolutely bizarre anecdote, an Argentine sergeant with a “hearing defect” could understand the scrambled analog voice. Datotek fixed its security, but only offered the upgrade to the Argentines, because no one else complained. As I said, no one knew anything.)

In his postscript, he writes that even if the NSA could break Datotek’s products, they were “vastly superior to what [his customers] had used previously.” Given that the previous devices were electromechanical rotor machines, and that his primary competition was a CIA-run operation, he’s probably right. But even today, we know nothing about any other country’s cryptanalytic capabilities during those decades.

A lot of this book has a “you had to be there” vibe. And it’s mostly tone-deaf. There is no real acknowledgment of the human-rights-abusing countries on Datotek’s customer list, and how their products might have assisted those governments. But it’s a fascinating artifact of an era before commercial cryptography went mainstream, before academic cryptography became approved for US classified data, before those of us outside the triple fences of the NSA understood the mathematics of cryptography.

This book review originally appeared in AFIO.

** *** ***** ******* *********** *************

The Role of Humans in an AI-Powered World

[2025.11.14] As AI capabilities grow, we must delineate the roles that should remain exclusively human. The line seems to be between fact-based decisions and judgment-based decisions.

For example, in a medical context, if an AI was demonstrably better at reading a test result and diagnosing cancer than a human, you would take the AI in a second. You want the more accurate tool. But justice is harder because justice is inherently a human quality in a way that “Is this tumor cancerous?” is not. That’s a fact-based question. “What’s the right thing to do here?” is a human-based question.

Chess provides a useful analogy for this evolution. For most of history, humans were best. Then, in the 1990s, Deep Blue beat the best human. For a while after that, a good human paired with a good computer could beat either one alone. But a few years ago, that changed again, and now the best computer simply wins. There will be an intermediate period for many applications where the human-AI combination is optimal, but eventually, for fact-based tasks, the best AI will likely surpass both.

The enduring role for humans lies in making judgments, especially when values come into conflict. What is the proper immigration policy? There is no single “right” answer; it’s a matter of feelings, values, and what we as a society hold dear. A lot of societal governance is about resolving conflicts between people’s rights — my right to play my music versus your right to have quiet. There’s no factual answer there. We can imagine machines will help; perhaps once we humans figure out the rules, the machines can do the implementing and kick the hard cases back to us. But the fundamental value judgments will likely remain our domain.

This essay originally appeared in IVY.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2025.11.14] This is a current list of where and when I am scheduled to speak:

  • My coauthor Nathan E. Sanders and I are speaking at the Rayburn House Office Building in Washington, DC at noon ET on November 17, 2025. The event is hosted by the POPVOX Foundation and the topic is “AI and Congress: Practical Steps to Govern and Prepare.”
  • I’m speaking on “Integrity and Trustworthy AI” at North Hennepin Community College in Brooklyn Park, Minnesota, USA, on Friday, November 21, 2025, at 2:00 PM CT. The event is cohosted by the college and The Twin Cities IEEE Computer Society.
  • Nathan E. Sanders and I will be speaking at the MIT Museum in Cambridge, Massachusetts, USA, on December 1, 2025, at 6:00 pm ET.
  • Nathan E. Sanders and I will be speaking at a virtual event hosted by City Lights on the Zoom platform, on December 3, 2025, at 6:00 PM PT.
  • I’m speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, on February 5, 2026. Details to come.

The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books — including his latest, A Hacker’s Mind — as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Copyright © 2025 by Bruce Schneier