@chinmaydd
Created May 29, 2018 18:29
Panoply Paper Review
[*] Explain the fundamental technique of the paper in your own words.
- defines an abstraction called a "micron", with a focus on a low TCB
- provides inter-micron flow integrity and data confidentiality
- exposes a rich set of POSIX APIs to the enclaved code
- handles system calls by delegating-rather-than-emulating, adding custom checks on the values the OS returns
- enables fork/exec for applications which need multi-processing and multi-threading capabilities
- provides a highly flexible compiler infrastructure for security architects to modify existing applications
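The delegate-rather-than-emulate step above can be sketched as follows. This is a minimal Python sketch, not Panoply's actual C code; `ocall_read` is a hypothetical stand-in for the real SGX OCALL boundary:

```python
def ocall_read(fd, count):
    """Untrusted OS side: may lie about how much it read."""
    data = b"hello"[:count]          # simulated file contents
    return len(data), data

def shim_read(fd, count):
    """Trusted in-enclave shim: delegate to the OS, then sanity-check."""
    ret, data = ocall_read(fd, count)
    # A malicious OS could claim more bytes than the caller's buffer
    # holds, setting up an overflow; reject any out-of-range reply.
    if ret < 0 or ret > count or ret != len(data):
        return -1, b""
    return ret, data

n, data = shim_read(0, 4)
assert 0 <= n <= 4 and data == b"hell"   # OS reply was sanitized
```

The point is that the shim contains only validation logic, not a reimplementation of `read`, which is what keeps the in-enclave TCB small.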
[*] What problem is the paper claiming to solve?
Panoply claims to solve the problem of providing a rich set of POSIX abstractions (and hence bridging the gap with SGX-native abstractions) while maintaining a low TCB for Linux applications using hardware-isolated (Intel SGX) enclaves.
[*] Does it actually achieve a solution to it? If not, then what sub-problem does it solve?
Panoply achieves a partial solution to the problem. The solution is partial because it requires that code (which aims to use the promised features) be compiled using the infrastructure provided by the authors.
For closed-source software vendors, it would take a great deal of effort to convince them to adopt this research and integrate it with their products. This includes, but is not limited to, desktop password managers, payment applications and web servers. By agreeing to use such software, the user is already vesting a lot of trust in the vendor's ability to protect his secrets. However, there is ongoing research on verifying such applications through automated reverse engineering (taint and data-flow analyses tracking the lifetime of sensitive secrets), which might uncover the complexity of the TCB embedded in the enclave.
It is not possible for a user to run legacy applications inside an enclave and also leverage Panoply features. This hinders off-the-shelf integration and hence the overall adoption of the technology.
(The research focuses on achieving a lower TCB and does not make any claims about optimizing performance in comparison to existing approaches. Hence related issues are not mentioned in this section)
[*] Is the problem claimed to be solved new to the field?
Partially. The problem claimed to be solved can be split into two different components:
- Providing a rich set of POSIX abstractions to enclaved code
- Maintaining a low TCB for a "shielding module" which acts as an intermediary between the untrusted underlying OS and the enclaved code
Sub-problem 1 is not new to the research domain: prior work on protecting applications with SGX-provided isolation (including Haven and Graphene) addresses it. Although sub-problem 2 is not the main focus of those previous approaches, they do take into account the need for a minimal (and hence eventually verifiable) TCB for a trusted execution environment such as Intel SGX.
[*] If not, what is the key novelty of the paper?
Panoply defines the concept of a "micron" which possesses the following features:
- light-weight (in terms of TCB code size)
- delegate-rather-than-emulate (delegates the implementation of OS abstractions to the underlying OS)
- a shim library which acts as a "shielding module" to protect against malicious-OS attacks
- (by virtue of the features above) support for multi-processing, dynamic multi-threading and event management
The novel "micron" abstraction aims to solve both the sub-problems mentioned earlier.
[*] What do you like about this approach?
- The research tries to solve the problem of reducing bloat inside the enclaves, rightly identifies the sources which introduce it, and works towards reducing it while maintaining equivalent API richness.
- Library OSes emulate OS abstractions inside the enclave to protect against a malicious OS. This approach instead delegates much of that functionality to the OS and implements a module that performs custom checks on the return values of the majority of system calls (exploiting the fact that they return either an integer or an error value).
- The authors note that applications can be broken down into modules, partitioned so that each requires only a subset of OS features. Combined with the delegate-rather-than-emulate approach, each module then needs only a portion of the underlying native abstractions exposed across the enclave interface. This allows us to do away with unnecessary (library) code and hence reduce the TCB.
- Panoply intercepts calls to the glibc API through the modified driver interface, which keeps the glibc code outside the enclave, reducing the TCB even further.
- The programmer is also given safety and integrity guarantees by the infrastructure when designing micron architectures and their interactions, through inter-micron flow-integrity checks and encrypted communication channels between microns.
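The inter-micron guarantees in the last point can be sketched roughly as follows, assuming a shared symmetric key per micron pair (the key names and setup are placeholders; in reality key establishment would go through SGX attestation). Messages are authenticated with an HMAC and sequence-numbered, so a malicious OS relaying them between microns can neither tamper with nor replay them:

```python
import hashlib
import hmac

KEY = b"per-pair key from attestation"   # placeholder value

def micron_send(seq, payload):
    """Sender micron: prepend a sequence number, append an HMAC tag."""
    msg = seq.to_bytes(8, "big") + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()
    return msg + tag

def micron_recv(expected_seq, wire):
    """Receiver micron: verify the tag, then the sequence number."""
    msg, tag = wire[:-32], wire[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
        raise ValueError("tampered message")          # integrity check
    seq = int.from_bytes(msg[:8], "big")
    if seq != expected_seq:
        raise ValueError("replayed or dropped message")  # flow check
    return msg[8:]

wire = micron_send(1, b"open /etc/passwd")
assert micron_recv(1, wire) == b"open /etc/passwd"
```

Confidentiality would additionally require encrypting the payload; the sketch shows only the integrity and flow-ordering side of the design.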
[*] What would you have done differently? In what ways would your solution be better?
The approach taken by the authors to solve the problem (of requiring a low TCB) is, in my opinion, extremely aggressive.
Panoply sacrifices performance while trying to cut down on the TCB. Application code usually contains "hot-spots" which are executed more frequently than the rest. A possibly better way to approach the problem would be to optimize such enclaved hot-spots: implement the OS abstractions they need inside the enclave and delegate the others to the OS as required. Users could annotate the code they consider performance-critical, and the infrastructure would use that information to design the per-micron shim. This would improve performance by taking the positives from both worlds (library OSes and Panoply).
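The annotation idea above could look something like the sketch below (my own illustration, not anything from the paper; the `hotspot` decorator and both functions are hypothetical). Annotated abstractions stay emulated in-enclave to avoid boundary crossings, while everything else is delegated:

```python
IN_ENCLAVE = set()   # abstractions the toolchain should emulate in-enclave

def hotspot(fn):
    """Annotation: keep this abstraction's implementation in-enclave."""
    IN_ENCLAVE.add(fn.__name__)
    return fn

@hotspot
def gettimeofday():
    # Frequently called: emulated in-enclave to avoid OCALL transitions.
    return 1_527_617_340

def stat(path):
    # Rarely called: delegated across the enclave boundary as usual.
    return {"delegated": True, "path": path}

assert "gettimeofday" in IN_ENCLAVE
assert "stat" not in IN_ENCLAVE
```

A compiler pass would consume these annotations when generating the per-micron shim, instead of the decision being made uniformly for the whole application.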
[*] Does the technique sufficiently demonstrate success in its evaluation? Where will this technique fail or why is the evaluation not sufficient (if so)?
The evaluation is fairly sufficient for understanding the trade-off between performance and TCB size that Panoply achieves.
As discussed in the paper, the technique incurs a major loss in performance when the number of I/O operations (with the OS) across the enclave boundary supersedes the computation being carried out inside it. This is especially true for database-style applications.
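A back-of-the-envelope model makes the I/O-bound failure mode concrete (the cycle counts below are assumptions for illustration, not numbers from the paper): every delegated call pays an enclave-crossing cost, so once crossings dominate in-enclave compute, total runtime is governed by the boundary, not the work.

```python
CROSSING_CYCLES = 8_000        # assumed cost of one OCALL round trip
COMPUTE_CYCLES_PER_OP = 500    # assumed in-enclave work per operation

def runtime(ops, io_fraction):
    """Total cycles for `ops` operations, a fraction of which cross the boundary."""
    io_ops = int(ops * io_fraction)
    return ops * COMPUTE_CYCLES_PER_OP + io_ops * CROSSING_CYCLES

cpu_bound = runtime(1_000_000, 0.01)   # 1% of ops touch the OS
io_bound = runtime(1_000_000, 0.90)    # 90% of ops touch the OS
assert io_bound > 5 * cpu_bound        # crossings dominate for I/O-heavy code
```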
The applications in this evaluation are intelligently partitioned; as can be seen from the inter-micron API column, there is very little dependency between the microns. For legacy applications, such a partitioning might not be possible. They might require a high degree of interaction, which would result in a further performance hit due to the encryption and decryption of shared data (or a common shared memory). Such an application could have been considered to improve overall micron feature-testing coverage.
[*] Describe at least three technical differences between Panoply and Graphene-SGX that you learned from the paper. Ensure that you explain it in details.
- emulation vs delegation
- (need_to_read)
[*] If you were the author of this paper, suggest three novel ideas for future papers that you would work as follow-on work (Feel free to think out-of-the-box, and suggest whatever you like.) -- state each idea in brief? Explain on any one such idea in depth: Motivate the problem clearly with an example, Sketch a high-level insight that will solve the problem, and suggest clear target case-studies to put in the "evaluation" section of that next paper.