As part of the proposed Pandemic Treaty, the WHO has said for the first time that it intends to prioritize restricting civil liberties. While the agency has been pushing in this direction for years, it had never before admitted it so directly.
During a meeting last week, a co-chair of a World Health Organization (WHO) working group on international law amendments that would increase the agency's powers took the power grab a step further by urging members to prioritize “actions that may restrict individual liberties.”
The co-chair of the WHO’s working group on amendments to the International Health Regulations (2005), Dr. Abdullah Assiri, made the comments during a strategic roundtable at the seventy-sixth World Health Assembly (an annual meeting of the WHO’s decision-making body).
During the strategic roundtable, WHO members discussed the international pandemic treaty and amendments to the International Health Regulations — two instruments that will collectively expand the WHO’s powers to target “misinformation,” increase its surveillance powers, and push global vaccine passports.
Assiri provided an update on the WHO’s progress with the IHR amendments before suggesting that individual liberties should be curtailed by this unelected health agency.
“The world, however, requires different legal mandates, such as the pandemic treaty, to navigate through a particular pandemic, should one occur, and it will,” Assiri said. “Prioritizing actions that may restrict individual liberties, mandating and sharing of information, knowledge, and resources, and most importantly, providing funds for pandemic control efforts are all necessary during a pandemic. The means to carry out these actions are simply not…currently at hand.”
Watch the video here.
While the sweeping new powers contained within the pandemic treaty and amendments to the IHR do curb individual liberties, such as privacy and free speech, WHO officials have previously refrained from admitting this directly.
Assiri’s comments are the latest of many examples of the WHO continuing to demand more power, despite the unelected health agency already gaining significant influence during the pandemic.
Since 2020, the WHO has partnered with YouTube, Facebook, and Wikipedia and had a direct impact on the speech rules on these platforms. Google renewed its partnership with the WHO last month.
Yet despite already having major influence over online speech, the WHO pushed for even more power in 2021 (when work began on the international pandemic treaty) and 2022 (when the Biden administration proposed amendments to the IHR).
The WHO’s power grab has faced pushback from politicians in the US, Canada, UK, Australia, and Europe. However, it continues to move forward, and the WHO plans to finalize the pandemic treaty and IHR amendments by May 2024.
If adopted, the pandemic treaty will apply to the WHO’s 194 member states (which represent 98% of the world’s countries) and the IHR amendments will apply to 196 countries. Both instruments are legally binding under international law.
Politicians planning to regulate AI have pointed to the rise of AI-generated memes to justify the idea that AI companies should have to obtain a government license before they can operate.
We break down their plans today.
Watch our video report on YT here.
Watch our video report on Rumble here.
Microsoft’s president and vice chairman Brad Smith has made some bold statements about how he expects the US government to regulate artificial intelligence in the next year. He made the remarks in an interview on CBS’ “Face the Nation.”
Smith wants it to be “unlawful” to remove the AI metadata that Microsoft and others are developing. Microsoft is also building technology to detect when the watermark has been removed.
Watch the video here.
“We’ll need a system that we and so many others have been working to develop that protects content, that puts a watermark on it, so that if somebody alters it, if somebody removes the watermark, if they do that to try to deceive or defraud someone, first of all, they’re doing something that the law makes unlawful. We may need some new law to do that,” Smith said.
“But second, we can then use the power of AI to detect when that happens. So that means a news organization like CBS would have video that somehow could be identified. And I would guess and hope that CBS will be absolutely at the forefront of this.”
The president of the Big Tech behemoth explained that metadata within a file will identify when an image or video is AI-generated, and said that it should be illegal to remove it.
“You embed what we call metadata. It’s part of the file. If it’s removed, we’re able to detect it. If there’s an altered version, we in effect create a hash. Think of it like the fingerprint of something and then can look for that fingerprint across the internet.”
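Smith doesn’t specify the hashing scheme, but the “fingerprint” idea he describes can be illustrated with an ordinary cryptographic hash: any change to a file’s bytes, including stripping embedded metadata, yields a completely different digest. A minimal sketch (the file contents here are placeholders, not real media):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that acts as a 'fingerprint' of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical file contents, for illustration only.
original = b"example video bytes with embedded provenance metadata"
tampered = b"example video bytes with the metadata stripped out"

# Any alteration to the file produces a different fingerprint.
print(fingerprint(original) == fingerprint(tampered))  # False
```

In practice, scanning “across the internet” for altered copies would likely rely on perceptual hashes rather than cryptographic ones, since a cryptographic digest changes even after a harmless re-encode; the sketch above only shows the basic fingerprint-comparison idea.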
Smith said he wants the US to increase the pace of AI regulation.
“I was in Japan just three weeks ago, and they have a national A.I. strategy. The government has adopted it,” Smith said. “The world is moving forward. Let’s make sure that the United States at least keeps pace with the rest of the world.”
On Friday, the European Union’s Internal Markets Commissioner, Thierry Breton, confirmed that Twitter had ditched the EU’s code of practice on disinformation. He warned that the platform cannot “hide” from obligations to censor content.
“You can run but you can’t hide,” Breton threatened in a tweet.
“Beyond voluntary commitments, fighting disinformation will be a legal obligation under DSA as of August 25,” Breton continued. “Our teams will be ready for enforcement.”
Breton was referring to the censorship law, the controversial Digital Services Act (DSA), a new set of rules for social media platforms operating in Europe, which require them to actively police content or risk fines of up to 6% of global turnover.
The current code of practice, which is voluntary, includes obligations for social media platforms to stop the monetization of “disinformation,” monitor political advertising, and allow third parties to access their algorithms.
In February, Twitter did not submit a report on its implementation of the code. It was the only major platform to fail to do so.
Unlike the code of practice, the DSA is legally binding, and large platforms, including Twitter, Facebook, Instagram, YouTube, TikTok, Pinterest, Snapchat, and LinkedIn, will have to comply with it if they want to operate in Europe.
In previous tweets, Breton has indicated that he will hold Twitter owner Elon Musk to account for the platform’s failures to comply with the content rules in the EU.
CEO of global media organization News Corp, Robert Thomson, said that he had discovered that staff at advertising agencies were allowing personal “political prejudices” to guide their work. He made the remarks at the International News Media Association (INMA) World Congress of News Media in New York.
“I asked the chief executive of one of the world’s largest companies why he had an ad ban against the New York Post … (with around 158 million monthly uniques),” he said, according to The Australian.
“The chief exec said he was completely unaware of any such ban – so he checked, and to his genuine and annoyed surprise, a hyper-politicized agency flunkey had a Post prohibition.
“The medium may be the message but unless we are more assertive and there is more transparency, certain advertising agencies will indulge their worst instincts, ad nauseam.”
Thomson said that the frustration with the Global Disinformation Index, a firm funded by the US and UK that provides blacklists of conservative websites to advertisers, was justifiable.
“These arrogant armchair amateurs have undue influence on ad spend by agencies and companies,” he said.
“No masthead is immune to sudden, capricious changes in algorithmic ranking that can affect your ad revenue.”
The Dallas Independent School District is rolling out AI-equipped cameras to spy on students, violating privacy under the pretense of keeping them safe.
The school district partnered with a company called Davista to use AI to monitor each student and notify the administration if a student deviates from their “baseline” behavior.
The press release announcing the technology stated: “This initiative will utilize Davista’s Heimdall platform, a breakthrough technology that empowers organizations to identify risk and take action before the projected risk becomes a consequential event or incident.
“Davista’s student safety and support platform enables comprehensive analysis and review of student data through software, minimizing inherent human biases and disparities by objectively assessing data points and reducing assumptions and cognitive fatigue. Leveraging existing data within the school, the technology pays attention to students’ participation, performance, and behavioral patterns. This process establishes a baseline for each student, derived from their past information, allowing real-time analysis of any deviations from their personal baseline.”
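The press release doesn’t disclose how Davista’s Heimdall platform actually works, but a common way to operationalize “deviation from a personal baseline” is a per-student anomaly score, such as a z-score of the latest observation against that student’s own history. Everything in this sketch — the metric, the numbers, and the flagging threshold — is hypothetical:

```python
from statistics import mean, stdev

def deviation_score(history: list[float], latest: float) -> float:
    """Z-score of the latest observation against a student's own history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0  # no variation in history, so no measurable deviation
    return (latest - mu) / sigma

# Hypothetical weekly participation counts for one student.
history = [12, 14, 13, 15, 12, 14]
latest = 4  # a sharp drop from the student's usual range

score = deviation_score(history, latest)
if abs(score) > 2:  # flag anything beyond two standard deviations
    print(f"flagged: deviation score {score:.1f}")
```

A system like this flags only *change relative to the individual*, which is why the press release emphasizes each student’s “personal baseline” rather than comparing students against one another.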
Thanks for reading,
Reclaim The Net