Both Meltdown and Spectre are “timing-channel attacks”. They subvert a computer’s security mechanisms by analysing the time taken to perform various operations.
Intel’s statement on Jan. 3 described these hardware flaws as “methods that, when used for malicious purposes, have the potential to improperly gather sensitive data from computing devices that are operating as designed”.
Gernot Heiser describes them another way.
“Remove the spin. This means our hardware operates according to a contract we defined. It’s your problem the contract doesn’t work for you,” Heiser told ZDNet.
Heiser is a Scientia Professor and the John Lions Chair of Operating Systems at the University of New South Wales, and leader of the Trustworthy Systems Group at Data61. In what he describes as "exquisite timing", just two months before news of Meltdown and Spectre broke, a brief paper he'd written was accepted by the journal IEEE Design & Test. Titled For safety's sake: we need a new hardware-software contract!, it will be published in April.
That contract is currently something called the instruction set architecture (ISA).
“The ISA describes the functional interface of the hardware to software. Specifically, it describes all you need to know for writing a functionally correct program,” Heiser wrote. Write software according to the rules, and the vendor “promises” that the hardware will execute it correctly.
Safety and security require more than just functional correctness, however. They must also account for time. That’s not part of the ISA.
“Hard real-time systems, where failure to complete an action by a deadline is disastrous, used to be small control programs running on simple microcontrollers without internal protection. This model has reached its use-by date, with even critical systems becoming complex and rich in functionality. This means that modern real-time systems are increasingly mixed-criticality systems (MCS), where functions of different criticality co-exist on the same processor. A core property of an MCS is that the ability of a critical task to meet its deadlines must not depend on the correct behavior of less critical components,” Heiser wrote.
“If the safety story was not bad enough, the security situation is worse. One defense against timing-channel attacks, especially for crypto algorithms, is constant-time implementations, where execution time is independent of inputs. However, these are only possible if the implementer understands exactly what the hardware does, and in general they do not have sufficient information about the hardware. The result is frequently that ‘constant-time’ implementations are not constant-time at all, as we have recently demonstrated on the supposedly constant-time implementation of TLS in OpenSSL 1.0.1e.”
Heiser's paper was a by-product of research conducted for the formally verified seL4 microkernel. seL4 is a proven-correct secure operating system that's already being used in Qualcomm modem chips, amongst others, as well as by Apple for the iOS secure enclave. The US Defense Advanced Research Projects Agency (DARPA) is using it in experiments with Boeing on an autonomous drone helicopter, and in autonomous trucks that are already driving the streets of Detroit.
Timing issues were critical to the development of the recently released MCS branch of seL4, which Heiser discussed in his presentation to the linux.conf.au open-source software conference in Sydney on Jan. 26. Part of that project involved an entirely new architecture for the kernel's thread scheduling system, which is claimed to be 10 times faster than the Linux scheduler.
But the complete verification of that branch is impossible without all the hardware details.
Heiser’s call for a new contract echoes a research paper published more than two decades ago.
The US National Security Agency (NSA) commissioned research which was published in 1994 under the title An Analysis of the Intel 80x86 Security Architecture and Implementations.
Not only did the researchers find the potential for timing channel and other attacks, as well as hardware implementation errors, they also issued a warning about increasing hardware complexity, and called for more transparency from the hardware vendors.
The researchers noted the “imbalance of scrutiny” between hardware and software, and that the imbalance was “increasingly difficult to justify” as hardware became more complex.
Here in 2018, concerns over closed processor hardware are not limited to the lack of timing information, or implementation errors. There’s also the possibility that malicious systems could be built into the hardware or firmware itself.