
From: Bruce Schneier
Sent: November 15, 2018

Crypto-Gram
November 15, 2018

by Bruce Schneier
CTO, IBM Resilient
schneier   AT   schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

Read this issue on the web

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:

  1. How DNA Databases Violate Everyone’s Privacy
  2. Privacy for Tigers
  3. Government Perspective on Supply Chain Security
  4. West Virginia Using Internet Voting
  5. Are the Police Using Smart-Home IoT Devices to Spy on People?
  6. On Disguise
  7. China’s Hacking of the Border Gateway Protocol
  8. Android Ad-Fraud Scheme
  9. Detecting Fake Videos
  10. Security Vulnerability in Internet-Connected Construction Cranes
  11. More on the Supermicro Spying Story
  12. Cell Phone Security and Heads of State
  13. ID Systems Throughout the 50 States
  14. Was the Triton Malware Attack Russian in Origin?
  15. Buying Used Voting Machines on eBay
  16. How to Punish Cybercriminals
  17. Troy Hunt on Passwords
  18. Security of Solid-State-Drive Encryption
  19. Consumer Reports Reviews Wireless Home-Security Cameras
  20. iOS 12.1 Vulnerability
  21. Privacy and Security of Data at Universities
  22. The Pentagon Is Publishing Foreign Nation-State Malware
  23. Hiding Secret Messages in Fingerprints
  24. New IoT Security Regulations
  25. Oracle and “Responsible Disclosure”
  26. More Spectre/Meltdown-Like Attacks
  27. Upcoming Speaking Engagements

** *** ***** ******* *********** *************

How DNA Databases Violate Everyone’s Privacy

[2018.10.15] If you’re an American of European descent, there’s a 60% chance you can be uniquely identified by public information in DNA databases. This is not information that you have made public; this is information your relatives have made public.

Research paper:

“Identity inference of genomic data using long-range familial searches.”

Abstract: Consumer genomics databases have reached the scale of millions of individuals. Recently, law enforcement authorities have exploited some of these databases to identify suspects via distant familial relatives. Using genomic data of 1.28 million individuals tested with consumer genomics, we investigated the power of this technique. We project that about 60% of the searches for individuals of European-descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers. Moreover, the technique could implicate nearly any US-individual of European-descent in the near future. We demonstrate that the technique can also identify research participants of a public sequencing project. Based on these results, we propose a potential mitigation strategy and policy implications to human subject research.

A good news article.
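
For intuition, here’s a toy back-of-the-envelope model of why even a modest database gives such broad coverage. The relative count and coverage figures below are illustrative assumptions, not the paper’s numbers, and this naive independence model won’t reproduce the paper’s results:

    def match_probability(db_coverage, num_relatives):
        """P(at least one of your third-cousin-or-closer relatives is in
        the database), treating each relative's presence as independent
        with probability equal to the database's population coverage."""
        return 1 - (1 - db_coverage) ** num_relatives

    # Assume a few hundred relatives at third-cousin range or closer (an
    # illustrative guess) and databases covering 0.1% to 2% of the population.
    for coverage in (0.001, 0.005, 0.01, 0.02):
        print(f"coverage {coverage:.1%}: match probability "
              f"{match_probability(coverage, 500):.0%}")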

** *** ***** ******* *********** *************

Privacy for Tigers

[2018.10.16] Ross Anderson has some new work:

As mobile phone masts went up across the world’s jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos — and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

Video here.

** *** ***** ******* *********** *************

Government Perspective on Supply Chain Security

[2018.10.18] This is an interesting interview with a former NSA employee about supply chain security. I consider this to be an insurmountable problem right now.

** *** ***** ******* *********** *************

West Virginia Using Internet Voting

[2018.10.19] This is crazy (and dangerous). West Virginia is allowing people to vote via a smartphone app. Even crazier, the app uses blockchain — presumably because they have no idea what the security issues with voting actually are.

** *** ***** ******* *********** *************

Are the Police Using Smart-Home IoT Devices to Spy on People?

[2018.10.22] IoT devices are surveillance devices, and manufacturers generally use them to collect data on their customers. Surveillance is still the business model of the Internet, and this data is used against the customers’ interests: either by the device manufacturer or by some third party the manufacturer sells the data to. Of course, this data can be used by the police as well; the purpose depends on the country.

None of this is new, and much of it was discussed in my book Data and Goliath. What is common is for Internet companies to publish “transparency reports” that give at least general information about how police are using that data. IoT companies don’t publish those reports.

TechCrunch asked a bunch of companies about this, and basically found that no one is talking.

Boing Boing post.

** *** ***** ******* *********** *************

On Disguise

[2018.10.23] The former CIA Chief of Disguise has a fascinating video about her work.

** *** ***** ******* *********** *************

China’s Hacking of the Border Gateway Protocol

[2018.10.24] This is a long — and somewhat technical — paper by Chris C. Demchak and Yuval Shavitt about China’s repeated hacking of the Internet Border Gateway Protocol (BGP): “China’s Maxim — Leave No Access Point Unexploited: The Hidden Story of China Telecom’s BGP Hijacking.”

BGP hacking is how large intelligence agencies manipulate Internet routing to make certain traffic easier to intercept. The NSA calls it “network shaping” or “traffic shaping.” Here’s a document from the Snowden archives outlining how the technique works with Yemen.
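
The core enabler is that BGP routers believe what they are told and prefer more-specific routes. Here is a minimal toy model of longest-prefix-match forwarding, using documentation addresses and private-use AS numbers rather than any real incident:

    import ipaddress

    # Minimal model of BGP-style forwarding: traffic follows the most
    # specific (longest) matching prefix, regardless of who announced it.
    routes = {ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)"}

    def next_hop(dst, table):
        matches = [net for net in table if ipaddress.ip_address(dst) in net]
        return table[max(matches, key=lambda net: net.prefixlen)] if matches else None

    print(next_hop("203.0.113.10", routes))   # -> legitimate origin

    # A hijacker announces two more-specific /25s covering the same space;
    # every router preferring the longest match now sends traffic its way.
    routes[ipaddress.ip_network("203.0.113.0/25")] = "AS64666 (hijacker)"
    routes[ipaddress.ip_network("203.0.113.128/25")] = "AS64666 (hijacker)"

    print(next_hop("203.0.113.10", routes))   # -> hijacker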

EDITED TO ADD (10/27): Boing Boing post.

** *** ***** ******* *********** *************

Android Ad-Fraud Scheme

[2018.10.25] BuzzFeed is reporting on a scheme where fraudsters buy legitimate Android apps, track users’ behavior in order to mimic it in a way that evades bot detectors, and then use bots to perpetuate an ad-fraud scheme.

After being provided with a list of the apps and websites connected to the scheme, Google investigated and found that dozens of the apps used its mobile advertising network. Its independent analysis confirmed the presence of a botnet driving traffic to websites and apps in the scheme. Google has removed more than 30 apps from the Play store, and terminated multiple publisher accounts with its ad networks. Google said that prior to being contacted by BuzzFeed News it had previously removed 10 apps in the scheme and blocked many of the websites. It continues to investigate, and published a blog post to detail its findings.

The company estimates this operation stole close to $10 million from advertisers who used Google’s ad network to place ads in the affected websites and apps. It said the vast majority of ads being placed in these apps and websites came via other major ad networks.

Lots of details in both the BuzzFeed and the Google links.

The Internet advertising industry is rife with fraud, at all levels. This is just one scheme among many.

** *** ***** ******* *********** *************

Detecting Fake Videos

[2018.10.26] This story nicely illustrates the arms race between technologies to create fake videos and technologies to detect fake videos:

These fakes, while convincing if you watch a few seconds on a phone screen, aren’t perfect (yet). They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake’s guts, Lyu realized that the images that the program learned from didn’t include many with closed eyes (after all, you wouldn’t keep a selfie where you were blinking, would you?). “This becomes a bias,” he says. The neural network doesn’t get blinking. Programs also might miss other “physiological signals intrinsic to human beings,” says Lyu’s paper on the phenomenon, such as breathing at a normal rate, or having a pulse. (Autonomic signs of constant existential distress are not listed.) While this research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience, and so any software trained on those images may be found lacking.

Lyu’s blinking revelation revealed a lot of fakes. But a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally. The fake content creators had evolved.

I don’t know who will win this arms race, if there ever will be a winner. But the problem with fake videos goes deeper: they affect people even if they are later told that they are fake, and there always will be people that will believe they are real, despite any evidence to the contrary.
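
The blink heuristic itself is simple, even though Lyu’s actual detector is a neural network. Here is a crude sketch using the standard eye-aspect-ratio blink test; it assumes some face-landmark library has already produced six eye landmarks per frame, and the thresholds are illustrative, not taken from Lyu’s paper:

    from math import dist

    def eye_aspect_ratio(eye):
        """eye: six (x, y) landmarks; the ratio drops sharply on eye closure."""
        p1, p2, p3, p4, p5, p6 = eye
        return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4))

    def blink_count(ear_series, closed_thresh=0.21):
        """Count closed-to-open transitions in a sequence of per-frame ratios."""
        blinks, closed = 0, False
        for ear in ear_series:
            if ear < closed_thresh:
                closed = True
            elif closed:
                blinks, closed = blinks + 1, False
        return blinks

    def looks_fake(ear_series, fps, min_blinks_per_minute=4):
        """Humans blink roughly 15-20 times a minute; far fewer is suspicious."""
        minutes = len(ear_series) / fps / 60
        return blink_count(ear_series) / max(minutes, 1e-9) < min_blinks_per_minute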

** *** ***** ******* *********** *************

Security Vulnerability in Internet-Connected Construction Cranes

[2018.10.29] This seems bad:

The F25 software was found to contain a capture replay vulnerability — basically an attacker would be able to eavesdrop on radio transmissions between the crane and the controller, and then send their own spoofed commands over the air to seize control of the crane.

“These devices use fixed codes that are reproducible by sniffing and re-transmission,” US-CERT explained.

“This can lead to unauthorized replay of a command, spoofing of an arbitrary message, or keeping the controlled load in a permanent ‘stop’ state.”

Here’s the CERT advisory.
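
To see why fixed codes are so dangerous, here is a conceptual sketch (not the actual F25 protocol) contrasting a fixed code with the standard challenge-response fix:

    import hashlib, hmac, os

    # Vulnerable pattern: the receiver accepts a fixed over-the-air code.
    FIXED_CODE = b"\x5a\xc3\x01"              # same bytes every transmission

    def crane_accepts_fixed(frame):
        return frame == FIXED_CODE            # sniff once, replay forever

    sniffed = FIXED_CODE                      # attacker records one frame...
    assert crane_accepts_fixed(sniffed)       # ...and replays it successfully

    # Hardened pattern: the crane issues a fresh nonce, and the controller
    # must return a keyed MAC over it, so recorded frames are useless later.
    KEY = os.urandom(16)                      # shared secret set at pairing

    def controller_respond(nonce, command):
        return command, hmac.new(KEY, nonce + command, hashlib.sha256).digest()

    def crane_accepts(nonce, command, tag):
        expected = hmac.new(KEY, nonce + command, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)

    nonce = os.urandom(12)                    # fresh per command
    cmd, tag = controller_respond(nonce, b"STOP")
    assert crane_accepts(nonce, cmd, tag)               # live command accepted
    assert not crane_accepts(os.urandom(12), cmd, tag)  # replayed tag fails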

** *** ***** ******* *********** *************

More on the Supermicro Spying Story

[2018.10.29] I’ve blogged twice about the Bloomberg story that China bugged Supermicro networking equipment destined for the US. We still don’t know if the story is true, although I am increasingly skeptical because no corroborating evidence has emerged.

We don’t know anything more, but this is the most comprehensive rebuttal of the story I have read.

** *** ***** ******* *********** *************

Cell Phone Security and Heads of State

[2018.10.30] Earlier this week, the New York Times reported that the Russians and the Chinese were eavesdropping on President Donald Trump’s personal cell phone and using the information gleaned to better influence his behavior. This should surprise no one. Security experts have been talking about the potential security vulnerabilities in Trump’s cell phone use since he became president. And President Barack Obama bristled at — but acquiesced to — the security rules prohibiting him from using a “regular” cell phone throughout his presidency.

Three broader questions obviously emerge from the story. Who else is listening in on Trump’s cell phone calls? What about the cell phones of other world leaders and senior government officials? And — most personal of all — what about my cell phone calls?

There are two basic places to eavesdrop on pretty much any communications system: at the end points and during transmission. This means that a cell phone attacker can either compromise one of the two phones or eavesdrop on the cellular network. Both approaches have their benefits and drawbacks. The NSA seems to prefer bulk eavesdropping on the planet’s major communications links and then picking out individuals of interest. In 2016, WikiLeaks published a series of classified documents listing “target selectors”: phone numbers the NSA searches for and records. These included senior government officials of Germany — among them Chancellor Angela Merkel — France, Japan, and other countries.

Other countries don’t have the same worldwide reach that the NSA has, and must use other methods to intercept cell phone calls. We don’t know details of which countries do what, but we know a lot about the vulnerabilities. Insecurities in the phone network itself are so easily exploited that 60 Minutes eavesdropped on a US congressman’s phone live on camera in 2016. Back in 2005, unknown attackers targeted the cell phones of many Greek politicians by hacking the country’s phone network and turning on an already-installed eavesdropping capability. The NSA even implanted eavesdropping capabilities in networking equipment destined for the Syrian Telephone Company.

Alternatively, an attacker could intercept the radio signals between a cell phone and a tower. Encryption ranges from very weak to possibly strong, depending on which flavor the system uses. Don’t think the attacker has to put his eavesdropping antenna on the White House lawn; the Russian Embassy is close enough.

The other way to eavesdrop on a cell phone is by hacking the phone itself. This is the technique favored by countries with less sophisticated intelligence capabilities. In 2017, the public-interest forensics group Citizen Lab uncovered an extensive eavesdropping campaign against Mexican lawyers, journalists, and opposition politicians — presumably run by the government. Just last month, the same group found eavesdropping capabilities in products from the Israeli cyberweapons manufacturer NSO Group operating in Algeria, Bangladesh, Greece, India, Kazakhstan, Latvia, South Africa — 45 countries in all.

These attacks generally involve downloading malware onto a smartphone that then records calls, text messages, and other user activities, and forwards them to some central controller. Here, it matters which phone is being targeted. iPhones are harder to hack, which is reflected in the prices companies pay for new exploit capabilities. In 2016, the vulnerability broker Zerodium offered $1.5 million for an unknown iOS exploit and only $200K for a similar Android exploit. Earlier this year, a new Dubai start-up announced even higher prices. These vulnerabilities are resold to governments and cyberweapons manufacturers.

Some of the price difference is due to the ways the two operating systems are designed and used. Apple has much more control over the software on an iPhone than Google does on an Android phone. Also, Android phones are generally designed, built, and sold by third parties, which means they are much less likely to get timely security updates. This is changing. Google now has its own phone — Pixel — that gets security updates quickly and regularly, and Google is now trying to pressure Android-phone manufacturers to update their phones more regularly. (President Trump reportedly uses an iPhone.)

Another way to hack a cell phone is to install a backdoor during the design process. This is a real fear; earlier this year, US intelligence officials warned that phones made by the Chinese companies ZTE and Huawei might be compromised by that government, and the Pentagon ordered stores on military bases to stop selling them. This is why China’s recommendation that if Trump wanted security, he should use a Huawei phone, was an amusing bit of trolling.

Given the wealth of insecurities and the array of eavesdropping techniques, it’s safe to say that lots of countries are spying on the phones of both foreign officials and their own citizens. Many of these techniques are within the capabilities of criminal groups, terrorist organizations, and hackers. If I were guessing, I’d say that the major international powers like China and Russia are using the more passive interception techniques to spy on Trump, and that the smaller countries are too scared of getting caught to try to plant malware on his phone.

It’s safe to say that President Trump is not the only one being targeted; so are members of Congress, judges, and other senior officials — especially because no one is trying to tell any of them to stop using their cell phones (although cell phones still are not allowed on either the House or the Senate floor).

As for the rest of us, it depends on how interesting we are. It’s easy to imagine a criminal group eavesdropping on a CEO’s phone to gain an advantage in the stock market, or a country doing the same thing for an advantage in a trade negotiation. We’ve seen governments use these tools against dissidents, reporters, and other political enemies. The Chinese and Russian governments are already targeting the US power grid; it makes sense for them to target the phones of those in charge of that grid.

Unfortunately, there’s not much you can do to improve the security of your cell phone. Unlike computer networks, for which you can buy antivirus software, network firewalls, and the like, your phone is largely controlled by others. You’re at the mercy of the company that makes your phone, the company that provides your cellular service, and the communications protocols developed when none of this was a problem. If one of those companies doesn’t want to bother with security, you’re vulnerable.

This is why the current debate about phone privacy, with the FBI on one side wanting the ability to eavesdrop on communications and unlock devices, and users on the other side wanting secure devices, is so important. Yes, there are security benefits to the FBI being able to use this information to help solve crimes, but there are far greater benefits to the phones and networks being so secure that all the potential eavesdroppers — including the FBI — can’t access them. We can give law enforcement other forensics tools, but we must keep foreign governments, criminal groups, terrorists, and everyone else out of everyone’s phones. The president may be taking heat for his love of his insecure phone, but each of us is using just as insecure a phone. And for a surprising number of us, making those phones more private is a matter of national security.

This essay previously appeared in the Atlantic.

EDITED TO ADD: Steven Bellovin and Susan Landau have a good essay on the same topic, as does Wired. Slashdot post.

** *** ***** ******* *********** *************

ID Systems Throughout the 50 States

[2018.10.31] Jim Harper at CATO has a good survey of state ID systems in the US.

** *** ***** ******* *********** *************

Was the Triton Malware Attack Russian in Origin?

[2018.10.31] The conventional story is that Iran targeted Saudi Arabia with Triton in 2017. New research from FireEye indicates that it might have been Russia.

I don’t know. FireEye likes to attribute all sorts of things to Russia, but the evidence here looks pretty good.

** *** ***** ******* *********** *************

Buying Used Voting Machines on eBay

[2018.11.01] This is not surprising:

This year, I bought two more machines to see if security had improved. To my dismay, I discovered that the newer model machines — those that were used in the 2016 election — are running Windows CE and have USB ports, along with other components, that make them even easier to exploit than the older ones. Our voting machines, billed as “next generation,” and still in use today, are worse than they were before — dispersed, disorganized, and susceptible to manipulation.

Cory Doctorow’s comment is correct:

Voting machines are terrible in every way: the companies that make them lie like crazy about their security, insist on insecure designs, and produce machines that are so insecure that it’s easier to hack a voting machine than it is to use it to vote.

I blame both the secrecy of the industry and the ignorance of most voting officials. And it’s not getting better.

** *** ***** ******* *********** *************

How to Punish Cybercriminals

[2018.11.02] Interesting policy paper by Third Way: “To Catch a Hacker: Toward a comprehensive strategy to identify, pursue, and punish malicious cyber actors”:

In this paper, we argue that the United States currently lacks a comprehensive overarching strategic approach to identify, stop and punish cyberattackers. We show that:

  • There is a burgeoning cybercrime wave: A rising and often unseen crime wave is mushrooming in America. There are approximately 300,000 reported malicious cyber incidents per year, including up to 194,000 that could credibly be called individual or system-wide breaches or attempted breaches. This is likely a vast undercount since many victims don’t report break-ins to begin with. Attacks cost the US economy anywhere from $57 billion to $109 billion annually and these costs are increasing.
  • There is a stunning cyber enforcement gap: Our analysis of publicly available data shows that cybercriminals can operate with near impunity compared to their real-world counterparts. We estimate that cyber enforcement efforts are so scattered that less than 1% of malicious cyber incidents see an enforcement action taken against the attackers.
  • There is no comprehensive US cyber enforcement strategy aimed at the human attacker: Despite the recent release of a National Cyber Strategy, the United States still lacks a comprehensive strategic approach to how it identifies, pursues, and punishes malicious human cyberattackers and the organizations and countries often behind them. We believe that the United States is as far from this human attacker strategy as the nation was toward a strategic approach to countering terrorism in the weeks and months before 9/11.

In order to close the cyber enforcement gap, we argue for a comprehensive enforcement strategy that makes a fundamental rebalance in US cybersecurity policies: from a heavy focus on building better cyber defenses against intrusion to also waging a more robust effort at going after human attackers. We call for ten US policy actions that could form the contours of a comprehensive enforcement strategy to better identify, pursue and bring to justice malicious cyber actors that include building up law enforcement, enhancing diplomatic efforts, and developing a measurable strategic plan to do so.

** *** ***** ******* *********** *************

Troy Hunt on Passwords

[2018.11.05] Troy Hunt has a good essay about why passwords are here to stay, despite all their security problems:

This is why passwords aren’t going anywhere in the foreseeable future and why [insert thing here] isn’t going to kill them. No amount of focusing on how bad passwords are or how many accounts have been breached or what it costs when people can’t access their accounts is going to change that. Nor will the technical prowess of [insert thing here] change the discussion because it simply can’t compete with passwords on that one metric organisations are so focused on: usability. Sure, there’ll be edge cases and certainly there remain scenarios where higher-friction can be justified due to either the nature of the asset being protected or the demographic of the audience, but you’re not about to see your everyday e-commerce, social media or even banking sites changing en masse.

He rightly points out that biometric authentication systems — like Apple’s Face ID and fingerprint authentication — augment passwords rather than replace them. And I want to add that good two-factor systems, like Duo, also augment passwords rather than replace them.
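
For reference, the most widely deployed second factor is the time-based one-time password of RFC 6238. Duo itself is primarily push-based, but the TOTP mechanism is small enough to sketch in full:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6, t=None):
        """RFC 6238: HMAC the time-step counter, then dynamically truncate."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if t is None else t) // period)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59.
    secret = base64.b32encode(b"12345678901234567890").decode()
    assert totp(secret, t=59) == "287082"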

Hacker News thread.

** *** ***** ******* *********** *************

Security of Solid-State-Drive Encryption

[2018.11.06] Interesting research: “Self-encrypting deception: weaknesses in the encryption of solid state drives (SSDs)”:

Abstract: We have analyzed the hardware full-disk encryption of several SSDs by reverse engineering their firmware. In theory, the security guarantees offered by hardware encryption are similar to or better than software implementations. In reality, we found that many hardware implementations have critical security weaknesses, for many models allowing for complete recovery of the data without knowledge of any secret. BitLocker, the encryption software built into Microsoft Windows, will rely exclusively on hardware full-disk encryption if the SSD advertises support for it. Thus, for these drives, data protected by BitLocker is also compromised. This challenges the view that hardware encryption is preferable over software encryption. We conclude that one should not rely solely on hardware encryption offered by SSDs.
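
The core finding, heavily simplified: on several drives the password merely gates access to a key the firmware already holds, rather than cryptographically wrapping that key. Here is a conceptual sketch of the difference, with a toy XOR wrap standing in for a real key-wrap algorithm:

    import hashlib, os

    DEK = os.urandom(32)      # data-encryption key, generated in the factory

    # Broken pattern found on some drives: the password only gates access.
    # The DEK exists independently of it, so anyone who can reach the
    # firmware (e.g., via a debug port) can skip the check entirely.
    pw_hash = hashlib.sha256(b"hunter2").digest()

    def unlock_broken(password):
        return DEK if hashlib.sha256(password).digest() == pw_hash else None

    # Sound pattern: what is stored is the DEK wrapped under a
    # password-derived key, so there is no check to bypass.
    def wrap(data, password, salt):
        # Toy XOR wrap for illustration; real designs use AES key wrap.
        kek = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
        return bytes(a ^ b for a, b in zip(data, kek))

    salt = os.urandom(16)
    wrapped = wrap(DEK, b"hunter2", salt)     # only this blob is stored

    def unlock_sound(password):
        return wrap(wrapped, password, salt)  # wrong password yields garbage

    assert unlock_sound(b"hunter2") == DEK
    assert unlock_sound(b"wrong-pw") != DEK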

EDITED TO ADD: The NSA is known to attack firmware of SSDs.

EDITED TO ADD (11/13): CERT advisory. And older research.

** *** ***** ******* *********** *************

Consumer Reports Reviews Wireless Home-Security Cameras

[2018.11.07] Consumer Reports is starting to evaluate the security of IoT devices. As part of that, it’s reviewing wireless home-security cameras.

It found significant security vulnerabilities in D-Link cameras:

In contrast, D-Link doesn’t store video from the DCS-2630L in the cloud. Instead, the camera has its own, onboard web server, which can deliver video to the user in different ways.

Users can view the video using an app, mydlink Lite. The video is encrypted, and it travels from the camera through D-Link’s corporate servers, and ultimately to the user’s phone. Users can also access the same encrypted video feed through a company web page, mydlink.com. Those are both secure methods of accessing the video.

But the D-Link camera also lets you bypass the D-Link corporate servers and access the video directly through a web browser on a laptop or other device. If you do this, the web server on the camera doesn’t encrypt the video.

If you set up this kind of remote access, the camera and unencrypted video are open to the web. They could be discovered by anyone who finds or guesses the camera’s IP address — and if you haven’t set a strong password, a hacker might find it easy to gain access.

The real news is that Consumer Reports is able to put pressure on device manufacturers:

In response to a Consumer Reports query, D-Link said that security would be tightened through updates this fall. Consumer Reports will evaluate those updates once they are available.

This is the sort of sustained pressure we need on IoT device manufacturers.

Boing Boing link.

EDITED TO ADD (11/13): In related news, the US Federal Trade Commission is suing D-Link because their routers are so insecure. The lawsuit was filed in January 2017.

** *** ***** ******* *********** *************

iOS 12.1 Vulnerability

[2018.11.08] This is just to point out that computer security is really hard:

Almost as soon as Apple released iOS 12.1 on Tuesday, a Spanish security researcher discovered a bug that exploits group Facetime calls to give anyone access to an iPhone user’s contact information with no need for a passcode.

[…]

A bad actor would need physical access to the phone that they are targeting and would have a few options for viewing the victim’s contact information. They would need to either call the phone from another iPhone or have the phone call itself. Once the call connects they would need to:

  • Select the Facetime icon
  • Select “Add Person”
  • Select the plus icon
  • Scroll through the contacts and use 3D Touch on a name to view all contact information that’s stored.

Making the phone call itself without entering a passcode can be accomplished by either telling Siri the phone number or, if they don’t know the number, they can say “call my phone.” We tested this with both the owner’s voice and a stranger’s voice; in both cases, Siri initiated the call.

** *** ***** ******* *********** *************

Privacy and Security of Data at Universities

[2018.11.09] Interesting paper: “Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier,” by Christine Borgman:

Abstract: As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of “grey data” about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This Article explores the competing values inherent in data stewardship and makes recommendations for practice by drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.

** *** ***** ******* *********** *************

The Pentagon Is Publishing Foreign Nation-State Malware

[2018.11.09] This is a new thing:

The Pentagon has suddenly started uploading malware samples from APTs and other nation-state sources to the website VirusTotal, which is essentially a malware zoo that’s used by security pros and antivirus/malware detection engines to gain a better understanding of the threat landscape.

This feels like an example of the US’s new strategy of actively harassing foreign government actors. By making their malware public, the US is forcing them to continually find and use new vulnerabilities.
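
Samples uploaded this way are retrievable like any other. Here is a minimal lookup against the VirusTotal v3 API; the hash below is a placeholder, not a real uploaded sample, and you need your own API key:

    import os
    import requests

    API_KEY = os.environ["VT_API_KEY"]     # your own VirusTotal API key
    sha256 = "0" * 64                      # placeholder hash

    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # Per-engine verdict counts for the file: malicious, suspicious, etc.
    print(resp.json()["data"]["attributes"]["last_analysis_stats"])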

EDITED TO ADD (11/13): This is another good article. And here is some background on the malware.

** *** ***** ******* *********** *************

Hiding Secret Messages in Fingerprints

[2018.11.12] This is a fun steganographic application: hiding a message in a fingerprint image.

Can’t see any real use for it, but that’s okay.
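
The paper hides data in the fingerprint features themselves. For comparison, here is the classic least-significant-bit approach, which works on any image, fingerprint or otherwise:

    from PIL import Image    # Pillow

    def embed(cover_path, message, out_path):
        """Hide message in the low bit of each pixel; save losslessly (PNG)."""
        img = Image.open(cover_path).convert("L")
        bits = "".join(f"{b:08b}" for b in message.encode() + b"\x00")
        pixels = list(img.getdata())
        assert len(bits) <= len(pixels), "message too long for cover image"
        stego = [(p & ~1) | int(bit) for p, bit in zip(pixels, bits)]
        img.putdata(stego + pixels[len(stego):])
        img.save(out_path)

    def extract(stego_path):
        """Read low bits eight at a time until the zero terminator."""
        pixels = list(Image.open(stego_path).getdata())
        message = bytearray()
        for i in range(0, len(pixels) - 7, 8):
            byte = int("".join(str(p & 1) for p in pixels[i:i + 8]), 2)
            if byte == 0:
                break
            message.append(byte)
        return message.decode()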

** *** ***** ******* *********** *************

New IoT Security Regulations

[2018.11.13] Due to ever-evolving technological advances, manufacturers are connecting consumer goods — from toys to light bulbs to major appliances — to the Internet at breakneck speeds. This is the Internet of Things, and it’s a security nightmare.

The Internet of Things fuses products with communications technology to make daily life more effortless. Think Amazon’s Alexa, which not only answers questions and plays music but allows you to control your home’s lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the Internet.

But like nearly all innovation, there are risks involved. And for products born out of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner — cars, pacemakers, thermostats — the risks include loss of life and property.

By developing more advanced security features and building them into these products, hacks can be avoided. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating the Internet of Things, or “IoT” devices sold in the state — and the effects will soon be felt worldwide.

California’s new SB 327 law, which will take effect in January 2020, requires all “connected devices” to have a “reasonable security feature.” The good news is that the term “connected devices” is broadly defined to include just about everything connected to the Internet. The not-so-good news is that “reasonable security” remains defined such that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California’s attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.

There’s just one specific in the law that’s not subject to the attorney general’s interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it’s just one of dozens of awful “security” measures commonly found in IoT devices.
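
The law considers a security feature reasonable if the preprogrammed password is unique to each device manufactured, or if the device forces the user to set new credentials before first use. The first option looks something like this at provisioning time (an illustrative sketch; the law mandates no particular scheme):

    import hashlib, secrets, string

    ALPHABET = string.ascii_letters + string.digits

    def provision_device(serial):
        """One unique factory password per unit: printed on that unit's
        label, stored only as a salted hash in its firmware image."""
        password = "".join(secrets.choice(ALPHABET) for _ in range(16))
        salt = secrets.token_bytes(16)
        pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return {"serial": serial, "label_password": password,
                "salt": salt.hex(), "pw_hash": pw_hash.hex()}

    print(provision_device("CAM-000123"))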

This law is not a panacea. But we have to start somewhere, and it is a start.

Though the legislation covers only the state of California, its effects will reach much further. All of us — in the United States or elsewhere — are likely to benefit because of the way software is written and sold.

Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.

But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won’t make sense to have two versions: one for California and another for everywhere else. It’s much easier to maintain the single, more secure version and sell it everywhere.

The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you’ve read and agreed to the website’s privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR — people physically in the European Union, and EU citizens wherever they are — and those who are not. It’s easier to extend the protection to everyone.

Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the Internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.

Insecurity is profitable only if you can get away with it worldwide. Once you can’t, you might as well make a virtue out of necessity. So everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just like everyone will benefit from the portion of GDPR compliance that involves data security.

Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don’t have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, getting devices out on the market quickly and additional features over security.

But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply, and effectively as possible. This means more security innovation, because now there’s a market for new ideas and new products. We’ve seen this pattern again and again in safety and security engineering, and we’ll see it with the Internet of Things as well.

IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it’s heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.

This essay previously appeared on CNN.com.

** *** ***** ******* *********** *************

Oracle and “Responsible Disclosure”

[2018.11.14] I’ve been writing about “responsible disclosure” for over a decade; here’s an essay from 2007. Basically, it’s a tacit agreement between researchers and software vendors. Researchers agree to withhold their work until software companies fix the vulnerabilities, and software vendors agree not to harass researchers and fix the vulnerabilities quickly.

When that agreement breaks down, things go bad quickly. This story is about a researcher who published an Oracle zero-day because Oracle has a history of harassing researchers and ignoring vulnerabilities.

Software vendors might not like responsible disclosure, but it’s the best solution we have. Making it illegal to publish vulnerabilities without the vendor’s consent means that they won’t get fixed quickly — and everyone will be less secure. It also means less security research.

This will become even more critical with software that affects the world in a direct physical manner, like cars and airplanes. Responsible disclosure makes us safer, but it only works if software vendors take the vulnerabilities seriously and fix them quickly. Without any regulations that enforce that, the threat of disclosure is the only incentive we can impose on software vendors.

** *** ***** ******* *********** *************

More Spectre/Meltdown-Like Attacks

[2018.11.14] Back in January, we learned about a class of vulnerabilities against microprocessors that leverages various performance and efficiency shortcuts for attack. I wrote that the first two attacks would be just the start:

It shouldn’t be surprising that microprocessor designers have been building insecure hardware for 20 years. What’s surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren’t thinking about security. They didn’t have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

We saw several variants over the year. And now researchers have discovered seven more.

Researchers say they’ve discovered the seven new CPU attacks while performing “a sound and extensible systematization of transient execution attacks” — a catch-all term the research team used to describe attacks on the various internal mechanisms that a CPU uses to process data, such as the speculative execution process, the CPU’s internal caches, and other internal execution stages.

The research team says they’ve successfully demonstrated all seven attacks with proof-of-concept code. Experiments to confirm six other Meltdown attacks did not succeed, according to a graph published by the researchers.
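
For readers who haven’t followed the details, here is a toy simulation of the Spectre-v1 pattern. Nothing here touches real hardware; a Python set stands in for cache state. The point is only the shape of the attack: a squashed speculative read still leaves an observable footprint.

    memory = bytes([10, 20, 30, 40]) + b"S"   # four public bytes, then a secret
    BOUND = 4                                 # victim should never read past this
    cache = set()                             # which "lines" got warmed

    def victim(x):
        # A mispredicting CPU runs the body before the bounds check resolves;
        # the architectural result is discarded, but the fetch it triggered
        # (modeled by cache.add) is not.
        value = memory[x]                     # transient out-of-bounds read
        cache.add(value)                      # dependent access warms a line
        return memory[x] if x < BOUND else None

    victim(4)                                 # returns None: the check "worked"...
    print(chr(cache.pop()))                   # ...but cache state reveals 'S'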

Microprocessor designers have spent the year rethinking the security of their architectures. My guess is that they have a lot more rethinking to do.

** *** ***** ******* *********** *************

Upcoming Speaking Engagements

[2018.11.14] This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books — including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World — as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.

Copyright © 2018 by Bruce Schneier.

** *** ***** ******* *********** *************

Mailing list hosting graciously provided by MailChimp. Sent without web bugs or link tracking.

Bruce Schneier · Harvard Kennedy School · 1 Brattle Square · Cambridge, MA 02138 · USA
