Nov 18, 2025
 

The article “EU’s Weakened ‘Chat Control’ Bill Still Poses Major Privacy and Surveillance Risks” is the last item in the report below.

I think it is very important. Some people are sleepwalking.

 


 

This work depends on the support of readers like you. Threats to free speech and privacy are growing, and too few people hear about them until it is too late. If you value this work, please consider upgrading to a paid subscription or gifting one to someone who should be following these issues. If you can’t contribute financially, sharing our work or forwarding this email helps ensure these stories reach the people who need to see them.
Become a Supporter

 

NEW LAWSUIT

 

 

Lawsuit Claims Google Secretly Used Gemini AI to Scan Private Gmail and Chat Data

 

When Google flipped a digital switch in October 2025, few users noticed anything unusual.

 

Gmail loaded as usual, Chat messages zipped across screens, and Meet calls continued without interruption.

 

Yet, according to a new class action lawsuit, something significant had changed beneath the surface.

 

We obtained a copy of the lawsuit for you here.

 

Plaintiffs claim that Google silently activated its artificial intelligence system, Gemini, across its communication platforms, turning private conversations into raw material for machine analysis.

 

The lawsuit, filed by Thomas Thele and Melo Porter, describes a scenario that reads like a breach of trust.

 

It accuses Google of enabling Gemini to “access and exploit the entire recorded history of its users’ private communications, including literally every email and attachment sent and received.”

 

The filing argues that the company’s conduct “violates its users’ reasonable expectations of privacy.”

 

Until early October, Gemini’s data processing was supposedly available only to those who opted in.

 

Then, the plaintiffs claim, Google “turned it on for everyone by default,” allowing the system to mine the contents of emails, attachments, and conversations across Gmail, Chat, and Meet.

 

The complaint points to a particular line in Google’s settings, “When you turn this setting on, you agree,” as misleading, since the feature “had already been switched on.”

 

This, according to the filing, represents a deliberate misdirection designed to create the illusion of consent where none existed.

 

There is a certain irony woven through the outrage. For all the noise about privacy, most users long ago accepted the quiet trade that powers Google’s empire.

 

They search, share, and store their digital lives inside Google’s ecosystem, knowing the company thrives on data.

 

The lawsuit may sound shocking, but for many, it simply exposes what has been implicit all along: if you live in Google’s world, privacy has already been priced into the convenience.

 

Thele warns that Gemini’s access could expose “financial information and records, employment information and records, religious affiliations and activities, political affiliations and activities, medical care and records, the identities of his family, friends, and other contacts, social habits and activities, eating habits, shopping habits, exercise habits, [and] the extent to which he is involved in the activities of his children.”

 

In other words, the system’s reach, if the allegations prove true, could extend into nearly every aspect of a user’s personal life.

 

The plaintiffs argue that Gemini’s analytical capabilities allow Google to “cross-reference and conduct unlimited analysis toward unmerited, improper, and monetizable insights” about users’ private relationships and behaviors.

 

The complaint brands the company’s actions as “deceptive and unethical,” claiming Google “surreptitiously turned on this AI tracking ‘feature’ without informing or obtaining the consent of Plaintiffs and Class Members.” Such conduct, it says, is “highly offensive” and “defies social norms.”

 

The case invokes a formidable set of statutes, including the California Invasion of Privacy Act, the California Computer Data Access and Fraud Act, the Stored Communications Act, and California’s constitutional right to privacy.

 

Google has yet to comment on the filing.

 

 

Reclaim The Net is reader-supported. Consider becoming a paid subscriber.

 

Become a Supporter

 

 

RAIDED

 

 

Germany Turns an X Post Into a Police Raid at Dawn

 

The story starts with a tweet that barely registered on the internet. A few hundred views, a handful of likes, and the kind of blunt libertarian framing that is common on X every hour of every day.

 

Yet in Germany, that tiny post triggered a 6am police raid, a forced phone handover, biometric collection, and a warning that the author was now under surveillance.

 

This story only makes sense once you see the sequence of events in order. It goes like this:

  • A man in Germany, known publicly only as Damian N., posts a short comment on X, calling government-funded workers “parasites.”
  • The post is tiny. At the time he was raided, it had roughly a hundred views. Even now, it has only a few hundred.
  • Despite the post’s obscurity, police arrive at Damian’s home at six in the morning.
  • He says they did not show him the warrant and did not leave documentation of what they seized.
  • Police pressured him to unlock his phone, confiscated it, took photos, fingerprints, and other biometric data, and even requested a blood sample for DNA.
  • One officer reportedly warned him to “think about what you post in the future” and said he is now “under surveillance.”
  • The entire action was justified under Section 130 of the German Criminal Code, which is meant to prohibit inciting hatred against protected groups.
  • Government employees are not such a group, which makes the legal theory tenuous at best.
  • Damian’s lawyer says the identification procedures and possibly the raid itself were illegal.

That is the sequence. A low-visibility political insult becomes a criminal investigation involving home searches, device seizure, and biometric collection.

 

 

The thing to understand is that this is not about one man’s post. It is about a bureaucracy that treats speech as something to manage and a set of enforcement structures that expand to fill the space they are given.

 

Start with the enforcement context. Germany has built a sprawling ecosystem around “online hate”: specialized prosecutor units, NGO tip lines, and automated scanning for taboo keywords. The model is compliance first and legal theory second.

 

Once you create an apparatus like this, it behaves the way bureaucracies behave. It looks for work. It justifies resources by producing cases. A tiny X post with inflammatory language becomes a target because it contains the right keyword, not because it has societal impact.

 

Police behavior fits the same pattern. Confiscating phones is strategically useful because it imposes real pain without requiring a conviction.

 

Even prosecutors have said that losing a smartphone is often worse than the fine.

 

Early-morning raids create psychological pressure. Collecting biometrics raises the stakes further. None of this is about public safety. It is about creating friction for saying the wrong thing.

 

The legal mismatch is the tell. Section 130 protects groups defined by national, racial, religious, or ethnic identity; government-funded workers are none of these.

 

There is also the privacy angle, which becomes impossible to ignore. Device access, biometrics, DNA requests: these are investigative tools built for serious crimes.

 

Deploying them against minor online speech means the line between public-safety policing and opinion policing has already been crossed. Once a state normalizes surveillance as a response to expression, the hard part becomes restoring restraint.

 

 

It is a deterrence strategy, not a justice strategy. And it reinforces why free speech and strong privacy protections matter. Without them, minor speech becomes an invitation for major intrusion.

 

The counterintuitive part is that the smallness of the post makes a raid more likely, not less.

 

High-profile content generates scrutiny and political costs. Low-profile content discovered through automated or NGO-driven monitoring is frictionless to act on. Unless they follow outlets like Reclaim The Net, most people never hear of these smaller cases.

 

Looking ahead, the pressure will only increase. As more speech moves to global platforms that are harder to influence, local governments will lean more heavily on domestic law enforcement as their lever of control.

 

That means more investigations that hinge on broad interpretations of old statutes and more friction between individual rights and bureaucratic incentives.

 

This is particularly true in Germany and places like the UK, where the government doesn’t seem to feel any shame about raiding its citizens over online posts.

 

GET YOURS

 

 

Get Yours: Do Not Comply T-Shirt

 

Getting merchandise for yourself or as a gift helps support the mission to defend free speech and digital privacy.

 

It also helps raise awareness every time you wear or use it.

 

Your merch purchase goes directly toward sustaining our work and growing our reach.

 

It’s a simple, effective way to show your support. Get yours now.

 

Also available as a hoodie.

 

Shop Now

 

 

BIG WIN

 

 

Italian Court Orders Google to Restore Banned Catholic Blog

 

Google has been compelled by the Tribunale di Imperia to restore Messainlatino.it, a major Italian Catholic website that, as you may remember, the company had abruptly taken down from its Blogger platform in July.

 

The ruling, issued against Google Ireland Limited, the firm’s European branch, also requires payment of approximately €7,000 (about $8,100) in court costs.

 

The blog’s editor, Luigi Casalini, filed legal action after Google deleted the site without warning, claiming a violation of its “hate speech” rules.

 

The company’s notification consisted of a short, generic email and provided no explanation or chance to appeal.

 

For Casalini, whose publication had accumulated over 22,000 articles since 2008 and reached around one million monthly readers, the removal appeared to be less a matter of policy enforcement and more an attempt to silence dissenting religious opinion.

 

Messainlatino.it was well known for covering issues surrounding traditional Catholic liturgy and had been cited by major outlets.

 

Following Google’s action, questions were raised in both the European Parliament and Italy’s Chamber of Deputies.

 

Legislators noted that the deletion “raises serious questions about the respect for freedom of expression, speech and religion” as guaranteed by Article 11 of the EU Charter of Fundamental Rights and Article 10 of the European Convention on Human Rights.

 

They also pointed to the Digital Services Act (DSA), which, despite being a censorship law, obliges platforms to apply their moderation policies with “due regard” for fundamental rights.

 

Casalini’s legal case focused on that provision. He argued that Google’s decision breached Article 14 of the DSA, which calls for a balance between policy enforcement and the user’s right to free expression.

 

As Casalini stated to LifeSiteNews, “Google acted in this way in violation of the Digital Services Act.”

 

Google responded through five lawyers based in Milan. The company claimed that an interview with Bishop Joseph Strickland, who opposed the ordination of women as deacons, violated its hate speech policy.

 

When the defense team countered that the post merely reported the bishop’s words and contained no discriminatory content, Google’s attorneys maintained in court documents that “it does not matter the source, more or less authoritative (bishop, Pontiff) of the post, if it violates the Policy.”

 

Judge De Sanctis of the Imperia Court dismissed Google’s reasoning. The court found that the company had failed to justify the deletion and had breached European laws ensuring fair access to digital services.

 

The ruling ordered the immediate reinstatement of the blog and described Google’s conduct as incompatible with the principles of freedom of expression recognized by EU law.

 

The decision highlights a central flaw within the Digital Services Act. Although the law formally instructs platforms to consider free expression, it still empowers them to remove speech unilaterally under the guise of compliance.

 

The result is a system where large corporations can suppress lawful viewpoints with minimal oversight.

 

By ruling in favor of Messainlatino.it, the Italian court affirmed that private digital companies are not above the law when they interfere with constitutionally protected speech.

 

The case may now serve as a precedent for future disputes over online censorship in Europe, reminding regulators and corporations alike that freedom of expression must remain the foundation of the digital public space.

 

STILL DANGEROUS

 

 

EU’s Weakened “Chat Control” Bill Still Poses Major Privacy and Surveillance Risks, Academics Warn

 

On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.

 

The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.

 

This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.

 

But for researchers closely following the legislation, the revised proposal is anything but a retreat.

 

“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM,” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.

 

We obtained a copy of the letter for you here.

 

The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.

 

The criticism is focused on the reliance on artificial intelligence to parse private messages for illicit content. While policymakers tout AI as a technical fix to an emotionally charged problem, researchers say the technology is simply not ready for such a task.

 

“Current AI technology is far from being precise enough to undertake these tasks with guarantees for the necessary level of accuracy,” the experts warn.

 

False positives, they say, are not theoretical. They are a near-certainty. AI-based tools struggle with nuance and ambiguity, especially in areas like text-based grooming detection, where the intent is often buried under layers of context.

 

“False positives seem inevitable, both because of the inherent limitations of AI technologies and because the behaviors the regulation targets are ambiguous and deeply context-dependent.”

 

These aren’t just minor errors. Flagging benign conversations, such as chats between teenagers or with trusted adults, could trigger law enforcement investigations or platform bans. At scale, this becomes more than a privacy risk. It becomes a systemic failure.

 

“Extending the scope of targeted formats will further increase the very high number of false positives – incurring an unacceptable increase of the cost of human labor for additional verification and the corresponding privacy violations.”

 

The critics argue that such systems could flood investigators with noise, actually reducing their ability to find real cases of abuse.

 

“Expanding the scope of detection only opens the door to surveil and examine a larger part of conversations, without any guarantee of better protection – and with a high risk of diminishing overall protection by flooding investigators with false accusations that prevent them from investigating the real cases.”
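 

The letter does not put numbers on this, but the underlying arithmetic is easy to sketch. Below is a minimal Python back-of-the-envelope base-rate calculation; every input here – message volume, prevalence of illicit content, classifier accuracy – is a hypothetical assumption chosen for illustration, not a figure from the academics or the Council.

    # Back-of-the-envelope base-rate arithmetic. Every number below is a
    # hypothetical assumption for illustration only.
    messages_per_day = 1_000_000_000  # assumed volume of messages scanned daily
    prevalence = 1e-6                 # assumed fraction that is genuinely illicit
    sensitivity = 0.90                # assumed true-positive rate of the detector
    false_positive_rate = 0.001      # assumed 0.1% false-positive rate (optimistic)

    true_hits = messages_per_day * prevalence * sensitivity
    false_alarms = messages_per_day * (1 - prevalence) * false_positive_rate
    precision = true_hits / (true_hits + false_alarms)

    print(f"True detections per day: {true_hits:,.0f}")      # ~900
    print(f"False alarms per day:    {false_alarms:,.0f}")   # ~1,000,000
    print(f"Share of flags that are real: {precision:.2%}")  # ~0.09%

Even under these deliberately generous assumptions, flagged messages are overwhelmingly false alarms – on the order of a thousand false flags for every genuine case – which is exactly the flooding effect the researchers describe.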

 

Alongside message scanning, the proposal mandates age verification for users of encrypted messaging platforms and app stores deemed to pose a “high risk” to children. It’s a seemingly common-sense measure, but one that technology experts say is riddled with problems.

 

“Age assessment cannot be performed in a privacy-preserving way with current technology due to reliance on biometric, behavioural or contextual information (e.g., browsing history),” the letter states, pointing to contradictions between the proposed text and the EU’s own privacy standards.

 

There are also concerns about bias and exclusion. AI-powered age detection tools have been shown to produce higher error rates for marginalized groups and often rely on profiling methods that undermine fundamental rights.

 

“AI-driven age inference techniques are known to have high error rates and to be biased for certain minorities.”

 

Even more traditional verification methods raise red flags. Asking users to upload a passport or ID introduces a host of new risks. It’s not just disproportionate, the researchers argue. It’s dangerous.

 

“Presenting full documents (e.g., a passport scan) obviously brings security and privacy risks and it is disproportionate as it reveals much more information than the age.”

 

The deeper issue, however, is one of equity. Many people, especially vulnerable populations, simply do not have easy access to government-issued IDs. Mandating proof of age, even for basic communication tools, threatens to lock these users out of essential digital spaces.

 

“There is a substantial fraction of the population who might not have easy access to documents that afford such a proof. These users, despite being adults in their full right of using services, would be deprived of essential services (even some as important as talking to a doctor). This is not a technological problem, and therefore no technology can address it in a satisfactory manner.”

 

The broader concern isn’t just the functionality of the tools or the viability of the rules. It’s the principle. Encryption has long been a bedrock of digital security, relied upon by activists, journalists, medical professionals, and everyday citizens alike. But once a private message can be scanned, even “voluntarily” by a service provider, that foundational guarantee is broken.

 

“Any communication in which results of a scan are reported, even if the scan is voluntary, can no longer be considered secure or private, and cannot be the backbone of a healthy digital society,” the letter declares.

 

This line is particularly important. It cuts through the legal jargon and technical ambiguity. If messaging platforms are allowed to opt in to content scanning, the pressure to conform, whether political, social, or economic, will be immense. Eventually, “voluntary” becomes the norm. And encryption becomes meaningless.

 

***

 

Interestingly, the European Parliament has charted a different course. Its version of the regulation sidesteps the more intrusive measures, focusing instead on targeted investigations involving identified suspects. It also avoids universal age verification requirements.

 

The divergence sets up a legislative standoff between Parliament and the Council, with the European Commission playing mediator.

 

Unless the Council’s draft sees significant revision, two contentious features, voluntary message scanning and mandatory age verification, will dominate the trilogue negotiations in the months ahead.

 

The academics, for their part, are urging caution before the November 19 vote. Their message is clear: proceed slowly, if at all.

 

“Even if deployed voluntarily, on-device detection technologies cannot be considered a reasonable tool to mitigate risks, as there is no proven benefit, while the potential for harm and abuse is enormous.”

 

“We conclude that age assessment presents an inherent disproportionate risk of serious privacy violation and discrimination, without guarantees of effectiveness.”

 

“The benefits do not outweigh the risks.”

 

In a climate where public trust in technology is already fragile, the Council’s proposal flirts with the edge of overreach. The tools being proposed carry real dangers. The benefits, if they exist, remain unproven.

 

Europe has often led the way on digital rights and privacy. On November 19, it will reveal whether that leadership still holds.

 

Reclaim The Net is funded by the community. Keep us going and get extra benefits by becoming a supporter today. Thank you.

 

Become a Supporter

 

Thanks for reading,

Reclaim The Net
