The World Economic Forum (WEF) is gearing up for its Davos meeting, set to take place January 20-24, and the group has now released the Global Risks Report 2025.
The report is based on “insights” from the Global Risks Perception Survey, which takes into account the opinions of 900 “global leaders” across business, government, academia, and civil society, the WEF said.
The report reflects the unrelenting drive, still present in many corners of the world and among political elites, to push what they consider “disinformation” to the top of the global risk agenda.
And so the WEF paper lists “armed conflict, environment, and disinformation” as this year’s “top threats.” That framing, as the authors note, leaves economic risks with “less immediate prominence.”
Meanwhile, “mis/disinformation” is once again ranked near the top of the threat list – for the second year in a row. This reads like another instance of taking an alarmist approach to “disinformation” (which then comes in handy when pushing all sorts of controversial policies affecting online speech, security, and technology development).
The WEF report elevates “disinformation” to a “persistent threat to societal cohesion and governance by eroding trust” – and even “exacerbating divisions within and between nations” and “complicating” ways to cooperate on ending international crises.
And when AI is thrown into the mix in its “adverse” form, “disinformation” is said to underpin rising geopolitical tensions.
The way the report frames the issue, disinformation seems to be just about the only thing standing in the way of world peace.
While creating high drama around “disinformation” is one piece of the puzzle, the WEF also looks at long-term threats, such as to the environment. This, according to the document, will be dominant over the next decade, and this is the language the group uses: “(…) led by extreme weather events, biodiversity loss and ecosystem collapse.”
With the threats presented like this, the “solutions” are also very much in line with the WEF mission: promote ever more globalization, even as many countries may be leaning toward what the group disapprovingly calls “turning inward.”
Instead, the WEF wants them to essentially double down on globalization, allegedly as the only way to “prevent a downward spiral of instability.”
One of the goals the WEF promotes – and is also one of the five overall topics of this year’s Davos meeting – is “rebuilding trust.”
Now, if only this group would focus more on explaining how that trust was lost. |
Barely half a day after TikTok went offline across the United States, the widely popular video-sharing platform is beginning to come back online. This swift reversal follows a statement from TikTok announcing its efforts to restore service, facilitated by new assurances from the Trump administration. |
“In agreement with our service providers, TikTok is in the process of restoring service,” the company confirmed. “We thank President Trump for providing the necessary clarity and assurance to our service providers that they will face no penalties providing TikTok to over 170 million Americans and allowing over 7 million small businesses to thrive.”
TikTok’s abrupt shutdown came as a law targeting its operations in the US was set to take effect. The legislation, passed under President Joe Biden’s administration, required TikTok’s Chinese parent company, ByteDance, to sell the app or face a nationwide ban. It also prohibited American companies from offering services essential to the app’s distribution or maintenance. As uncertainty loomed, TikTok ceased functioning late Saturday night and disappeared from the Apple and Google Play app stores.
In a dramatic turn of events, President-elect Donald Trump addressed the issue Sunday morning, promising executive action to delay the ban. He stated his intention to ensure TikTok’s return and suggested the importance of the app being operational for Americans to enjoy his Inauguration Day celebrations.
“Americans deserve to see our exciting Inauguration on Monday,” Trump wrote, adding that his executive order would confirm no legal repercussions for companies that facilitated TikTok’s operations before his intervention. |
These reassurances appeared to be sufficient for TikTok and its partners, as users began regaining access to the app shortly after the announcement. While some devices experienced restored functionality, TikTok’s absence from major app stores persisted as of early Sunday afternoon.
Trump also floated an idea for a resolution to the app’s future in the United States, suggesting a joint venture that would grant the US a 50% ownership stake. TikTok has expressed willingness to collaborate, stating it is committed to working with the Trump administration on a long-term solution to ensure the app’s continued presence in the country.
In an NBC interview, Trump confirmed he is considering granting TikTok a 90-day extension to comply with the divestment requirement, a decision he plans to announce imminently. “The 90-day extension is something that will be most likely done because it’s appropriate,” Trump remarked. “It’s a very big situation.”
As political wrangling continues, TikTok remains at the center of a contentious debate over free speech, economic interests, and national security. |
Jonathan Greenblatt, CEO of the Anti-Defamation League (ADL), used his appearance in the Knesset to emphasize what he described as the urgent need to combat antisemitism in the digital space, presenting it as a critical front alongside more traditional challenges faced by the Jewish community.
However, his remarks leaned heavily into the idea of targeting online platforms and reshaping how information flows, sparking concerns about potential calls for increased censorship in the name of addressing this issue.
In remarks delivered to the Knesset’s Committee for Immigration, Absorption, and Diaspora Affairs, Greenblatt outlined the rise in antisemitic incidents over the past 15 months and insisted that Israel and Jewish organizations worldwide must prioritize combating this trend online.
“Capturing TikTok might seem less meaningful than holding on to Mount Hermon. Libelous tweets certainly might seem less deadly than missiles from Yemen. But this is urgent because the next war will be decided based on how Israel and its allies perform online as much as offline,” he stated.
Greenblatt framed this digital battleground as essential to shaping perceptions and influencing outcomes globally, arguing that strategies must be reimagined. He called for creativity and innovation but stopped short of specifying what such efforts might entail. “It won’t be solved by the government just throwing money at the problem. It won’t be solved by the IDF spokesperson’s unit issuing updated talking points or suddenly using TikTok,” he said. While he spoke about the importance of “fresh thinking” and experimentation, his focus on online platforms raises questions about whether his approach could lead to demands for further restrictions on speech.
Greenblatt called for creativity in targeting online discourse, comparing it to Israel’s storied military ingenuity. “We need the kind of genius that manufactured Apollo Gold Pagers and infiltrated Hezbollah for over a decade to prepare for this battle. We need the kind of courage that executed Operation Deep Layer inside Syria and destroyed Iranian missile manufacturing capabilities to undertake this mission,” he declared.
The ADL’s role in monitoring and flagging what it defines as harmful online content has long been a point of contention, with many arguing that its initiatives curb free expression. Greenblatt’s comments appeared to lean into this dynamic, asserting that Israel and its allies need to prioritize their digital presence to effectively counter antisemitism. “The next war will be decided based on how Israel and its allies perform online,” he insisted.
Setting those conventional approaches aside, Greenblatt’s emphasis on the digital sphere stood out, particularly his assertion that governments and organizations must treat online spaces as strategic battlegrounds. While he avoided proposing concrete policies, his rhetoric suggests an inclination toward pressing tech companies and governments to take more aggressive steps in controlling the narrative on social media platforms.
Critics might view this as a potential push for censorship disguised as combating antisemitism, raising broader concerns about the implications for free speech. By framing the online fight as a war of strategic importance, Greenblatt’s remarks echo a growing trend where controlling information becomes as central as addressing physical threats — a shift that warrants careful scrutiny. |
Facial recognition company Clearview AI has suffered a legal setback in Canada, where the Supreme Court of British Columbia decided to throw out the company’s petition aimed at cancelling an Information and Privacy Commissioner’s order.
The order aims to prevent Clearview AI from collecting facial biometric data for biometric comparison in the province without the targeted individuals’ consent.
We obtained a copy of the order for you here.
The controversial company markets itself as “an investigative platform” that helps law enforcement identify suspects, witnesses, and victims.
Privacy advocates critical of Clearview AI’s activities, however, see it as a major component in the burgeoning facial surveillance industry, stressing in particular the need to obtain consent – via opt-ins – before people’s facial biometrics can be collected.
And Clearview AI is said to be subjecting billions of people to this without their consent. From there, the implications for privacy, free speech, and even data security are evident.
The British Columbia Commissioner appears to have been thinking along the same lines when issuing the order, which bans Clearview from selling its clients biometric facial arrays taken from non-consenting individuals.
In addition, the order instructs Clearview to “make best efforts” to stop its current practice – the collection, use, and disclosure of personal data – and also to delete this type of information already in the company’s possession.
Right now, there is no time limit to how long Clearview can retain the data, which it collects from the internet using an automated “image crawler.”
Clearview moved to have the order dismissed as “unreasonable,” arguing on the one hand that it is unable to tell whether an image of a person’s face belongs to a Canadian, while also claiming that no Canadian law is broken since this biometric information is publicly available online.
The legal battle, however, revealed that images of the faces of British Columbia residents, children included, appear in Clearview’s database of more than three billion photos of Canadians – while the total figure is over 50 billion.
The court also found the Commissioner’s order to be very reasonable indeed – including in its rejection of “Clearview’s bald assertion” that, in British Columbia, “it simply could not do” what it already does in the US state of Illinois to comply with the Biometric Information Privacy Act (BIPA). |
The United Kingdom is set to launch digital driving licenses this year, marking a significant step toward integrating technology into public services.
It is likely no coincidence that, at the same time, the country is preparing to implement stringent online age verification systems under its new censorship law, the Online Safety Act. While these initiatives aim to modernize services and protect users, their convergence raises critical questions about privacy, surveillance, and the future of digital identity in the UK.
Digital Driving Licenses: Convenience or a Gateway to Surveillance?
The Labour government has announced plans to introduce voluntary digital driving licenses, which will be accessible via a government app rather than existing platforms like Google or Apple Wallets. These digital licenses promise convenience, allowing users to present identification for voting, purchasing alcohol, or even boarding domestic flights. Physical licenses will remain available, and the government insists the digital option will not be mandatory. (Yet.)
However, critics argue that these so-called “voluntary” systems often become de facto mandatory over time, as more services require digital verification.
While the government touts advanced security measures such as biometrics and multi-factor authentication, these systems are not immune to breaches or misuse. The concentration of sensitive data in one app heightens risks of hacking and unauthorized access. Moreover, the integration of services such as tax payments and benefits claims could lead to a surveillance ecosystem where citizens are increasingly tracked and monitored.
Privacy advocates have expressed concerns that the normalization of digital IDs could gradually erode personal freedoms. For example, the ability to hide addresses might seem beneficial in certain contexts, but it also highlights the intrusive nature of these systems, which store more information than is typically required for identification. This level of data centralization poses significant risks to individual autonomy and privacy.
Online Age Verification: A Prelude to Widespread Digital ID?
Under Ofcom’s new guidelines, websites hosting adult content must introduce robust age verification systems by July 2025. These measures include intrusive technologies like photo ID verification and facial age estimation to ensure minors cannot access harmful content. While the initiative aims to safeguard children, critics fear it could erode online anonymity and set a precedent for broader surveillance measures.
Age verification systems risk creating a digital footprint for users, linking their identity to specific online activities. The Online Safety Act’s requirements for platforms to assess their accessibility to minors could pave the way for digital IDs becoming a universal requirement for accessing the internet. Such a shift could fundamentally alter how individuals interact online, turning the digital realm into a tightly controlled and monitored space.
Privacy advocates also warn of mission creep—the tendency for systems designed for one purpose to be expanded for others. Age verification tools could easily be repurposed to enforce broader controls, such as tracking users’ online behavior or restricting access to dissenting content. This not only threatens online anonymity but also raises concerns about free expression and the chilling effect of constant surveillance.
The simultaneous rollout of digital driving licenses and online age verification systems suggests a broader push toward integrating digital identity systems into daily life. While the government emphasizes convenience, these initiatives could blur the lines between voluntary and mandatory participation.
For example, the digital driving license app could be expanded to include age verification features for online services, effectively linking users’ offline and online identities. Such integration raises concerns about data privacy and the potential for misuse, particularly if these systems are later tied to other government databases or used for broader surveillance purposes. |