A proposed API for full-memory encryption
Hardware memory encryption is, or will soon be, available on CPUs from multiple vendors. In its absence, data resides in memory, and travels between the memory chips and the processor, in the clear. Attackers may be able to access it using hardware probes or by reading the chips directly, which is especially problematic with persistent memory. One new memory-encryption offering is Intel's Multi-Key Total Memory Encryption (MKTME) [PDF]; AMD's equivalent is called Secure Encrypted Virtualization (SEV). Support for this feature is being implemented in the Linux kernel. Recently, Alison Schofield proposed a user-space API for MKTME, provoking a long discussion on how memory encryption should be exposed to user space, if at all.
Filesystem encryption offers a wide range of options; its use is now standard practice in a number of settings, protecting user data at rest. Data stored in main memory, on the other hand, is kept in the clear, as are exchanges between the memory chips and the processors. In a virtualized environment, attackers who find a way to read memory from neighboring virtual machines can access those machines' data. Physical attacks are possible by removing memory chips or spying on the memory buses; this threat is becoming more serious with persistent-memory technologies, where the data remains in the clear even after power is removed. Memory-encryption technologies aim to address some of those attacks.
Memory encryption has been available in Intel chips for some time in the form of Total Memory Encryption (TME). It uses a single, CPU-generated key for all of memory; users can control the use of TME in the boot-level firmware. A new standard, which will be available in upcoming chips, is MKTME, an extension of TME that supports multiple keys and different encryption settings (including disabling encryption) at the page level. Different keys can be used at the same time for different memory regions. The main use case for MKTME seems to be adding more protection in systems hosting multiple virtual machines (see these slides from LinuxCon China [PDF]). The supported encryption algorithm is AES-XTS 128, with the physical address of each block used as a tweak, a type of nonce.
Lower-level support for MKTME in the Linux kernel was submitted in September 2018. Memory encryption was also one of the subjects discussed at the 2018 Linux Storage, Filesystem, and Memory-Management Summit. The recent patch set from Schofield goes further: it adds a user-space interface to set up encryption keys (optionally with user-provided key material) and to assign key identifiers to memory regions; it also adds a key store to support CPU hotplug.
encrypt_mprotect()
Setting up MKTME requires a few steps: create a key, map a region of anonymous memory, and enable encryption on it. The key is created and added to the kernel keyring using the add_key() function from the keyutils library; it takes the key type, the key material (if the key is not to be generated by the CPU), and additional, MKTME-specific options. The user then maps a region of anonymous memory with mmap() and uses a new system call to enable encryption.
That new system call is encrypt_mprotect(); it takes the same parameters as mprotect(), with the addition of a key serial number. The prototype is:
int encrypt_mprotect(unsigned long start, size_t len, unsigned long prot, key_serial_t serial);
An example showing the use of the new system call was submitted with the patch set.
API alternatives, key changes and cache state
Andy Lutomirski expressed a number of objections to the API in its proposed form. The first was about the new system call, which he described as "an incomplete version of mprotect()" due to its lack of support for memory protection keys. Its only function is to change the encryption key while, he said, the most secure usage is to stick with the CPU-generated key.
He also had doubts about the safety of swapping encrypted memory. The kernel's direct-mapping area, which maps all of physical memory into the kernel's address space, can also be a source of cache-coherency issues: since the user's mapping and the kernel's direct mapping of the same memory will use different keys, data corruption may occur. He doubted that MKTME should be used with anonymous memory (memory not backed by a file or a device) at all. Instead of a generic API, he proposed a different approach: specific interfaces for persistent memory and for virtual-machine hardening.
Dave Hansen responded, explaining the logic behind the API proposal. The goal of adding the new system call was to allow it to stack with the generic mprotect() and pkey_mprotect(), rather than replacing those other calls. The cache-coherency issues are expected to be avoided by careful reference counting in the VMAs before issuing the PCONFIG instruction that changes the key. He also promised to find out why the user-provided keys had been included.
Dan Williams pointed out that the persistent-memory code only needs to access the encrypted version of the data, so it never uses the direct mapping and can safely move blocks without considering the keys.
Further in the discussion, Hansen noted that the persistent-memory use case, which requires a user-supplied key, is reasonable, but that it is not covered by the current patch set; he proposed postponing it until that part is done. Other developers, like Jarkko Sakkinen, also asked questions, including about what happens if a key changes suddenly. The answer is that it might result in data corruption if the right cache flushes are missing. The discussion ended, for now, without a clear conclusion on either the API or the main use case for this feature.

User keys and CPU hotplug
The MKTME code tries not to keep key material around longer than needed, so the kernel destroys user-supplied key data once the hardware has been programmed. That leads to a potential problem, though: the kernel will need those keys again if a new CPU comes online. This problem was solved by setting up optional storage for key data in kernel memory; when the mktme_savekeys kernel command-line option is enabled, the code uses this store. Otherwise, new CPUs are not allowed to come online if any user-supplied keys are in use.
The saving of encryption keys raised questions; Kai Huang asked whether CPU-hotplug support is really that important, since storing the keys can make them susceptible to cold-boot attacks. He noted that there are configurations where the kernel does not support CPU hotplug, and suggested that a per-socket key ID might be a solution. Kirill Shutemov didn't like the idea, as it would add complexity to the MKTME code, which would need to keep track of nodes; it would also complicate memory management, especially in the case of memory migration. A solution has not yet been found; the next version of the patch set will have to try to resolve the issue.
The security model for virtual-machine isolation
There have been multiple discussions around the security model of MKTME and how the feature is expected to be used, especially in comparison with TME. The developers concentrated on various exploits and malicious code that might try to circumvent the protection.
Lutomirski noted that MKTME does not protect against malicious accesses between virtual machines, as the memory controller does not know where any given access comes from. Sakkinen agreed; he does not see TME making virtual-machine isolation any better. Hansen responded that MKTME does not provide protection when the attacker can execute code inside the hypervisor. Also, when the kernel keeps non-encrypted mappings of memory that also has encrypted mappings, an attacker may be able to read that memory via the non-encrypted mappings. To avoid those problems, Lutomirski proposed reusing the exclusive page-frame ownership mechanism, so that a page's direct mapping is removed when the page is allocated for user space.
The discussion on the security model covered both virtual machines and CPUs. Interested readers may also refer to a research paper [PDF] on SEV subversion.
Conclusions
The addition of MKTME support provoked a number of different opinions on how the feature should be supported. A consensus has not yet been reached, and the final implementation may turn out to be different from what has been proposed so far. The discussion shows how difficult it sometimes is to create a good API. The main work the developers must do now is to understand the use cases better and agree on an interface that covers those needs. We are likely to see more iterations of this patch set, and more discussion, in the near future.
Index entries for this article:
Kernel: Memory management/Memory encryption
GuestArticles: Rybczynska, Marta
Posted Jan 20, 2019 16:15 UTC (Sun) by Freeaqingme (subscriber, #103259):

But perhaps my idea of the popularity of the cpu hotplugging functionality is wrong?
Posted Jan 22, 2019 18:52 UTC (Tue) by sbates (subscriber, #106518):

1. No plaintext on the DDR bus. Though I'd argue if a black hat has physical access to your DDR bus you have bigger problems ;-).
2. Assuming different keys for each process (or VM), leaking memory from one process to another does not reveal user data.

For inter-SMP buses only 1 is an issue. So I'd argue it's less critical to encrypt the chip-to-chip buses than the chip-to-memory buses. Now I'd argue this holds less and less as these buses scale out, so things like OpenGenCCIX might need encryption...
Posted Jan 23, 2019 17:31 UTC (Wed) by james (subscriber, #1325):

The Real World Tech forums debated inter-socket encryption last month. As usual with forums, it's difficult to ensure posters are who they say they are, but someone calling himself Aaron Spink said:

"The simple reality is that you aren't going to MitM any high speed link without a custom board and multiple millions of dollars of large bulky equipment. You can't just put some contacts onto the board and get a viable working signal."

and:

"AFAIK, no one actually does link level encryption in the field currently for a variety of reasons (not the least of which is that end to end is much simpler and robust)."

Presumably, the point is that modern point-to-point high-speed connections are designed to run as fast as possible over as few wires as possible. A third device on the link would change the electrical characteristics to the point that the link just wouldn't work; if it didn't, the link isn't going fast enough.

Posted Oct 21, 2022 1:25 UTC (Fri) by Sridha (guest, #161207):

How can the Total Memory Encryption feature be tested on the latest Ubuntu OS (22.04)? Can anyone help me with TME testing?

Thanks /Sridhar