at an arbitrary time during execution. It can even deal with crashes that happen during recovery, when the file system attempts to reconstruct a consistent state from whatever it finds on the persistent medium after a crash; in the worst case no progress is made (a situation that could arise due to a crash triggered by a fault in the file system itself).

The authors show how this logic can be applied to actual file systems, handling difficulties such as read and write operations that are asynchronous to the execution of the file-system code. They then use the formalism to specify and implement an actual, fully featured file system, and to produce a machine-checked and largely automated proof that this file system is indeed crash-safe.

This is truly remarkable work, combining the development of a new formalism with its application in real systems to prove very complex properties. It is exactly this kind of property for which formal proofs are most powerful, as anything as complex and interconnected as crash safety is notoriously difficult to get right in an implementation. Experience shows it is almost never right unless formally proved.

Proof automation is very important to making such techniques scale to real-world, high-performance systems. In this paper, the authors have taken a first step toward reducing the manual effort, and their work has already triggered further progress in automating such proofs. Other work has demonstrated the feasibility of automating some of the underlying infrastructure, which this work assumes correct without proof. All this means that complete operating systems, with full proofs of functional correctness, may be closer than we thought only two years ago.

FILE SYSTEMS HAVE been a standard part of computer systems for over 50 years. We routinely (and mostly unthinkingly) use them to store and retrieve persistent data. But how persistent are they really?

This question is far trickier to answer than it may seem at first, for several reasons. Persistent storage media are orders of magnitude slower than non-persistent memories, such as processor registers or random-access memory. They are also organized differently: persistent media are read and written in blocks whose size is on the order of kilobytes but can be up to megabytes. Bridging the speed and granularity gaps requires complex logic for buffering data, implementing efficient lookups, and reordering slow operations to obtain the best performance. Some of this is performed …
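To make the crash-safety property concrete, here is a minimal executable sketch, not FSCQ's actual design: a toy write-ahead log over a simulated disk. The harness crashes the system before every possible block write and checks that recovery always restores an all-or-nothing state; the names used (Disk, commit, recover, and the block ids) are hypothetical illustrations.

```python
# Toy model of crash-safe multi-block updates via write-ahead logging.
# Hypothetical sketch only; it is not the verified system's design.

class Crash(Exception):
    """Simulated power failure at an arbitrary point during execution."""

class Disk:
    """Persistent state as a block map; can be made to crash after N writes."""
    def __init__(self):
        self.blocks = {}         # block id -> contents
        self.fail_after = None   # crash before the (N+1)-th write, if set

    def write(self, blk, data):
        if self.fail_after is not None:
            if self.fail_after == 0:
                raise Crash()
            self.fail_after -= 1
        self.blocks[blk] = data

def commit(disk, updates):
    """Apply a multi-block update atomically with respect to crashes."""
    disk.write("log", dict(updates))    # 1. persist the intended updates
    disk.write("log_committed", True)   # 2. single-block commit point
    for blk, data in updates.items():   # 3. apply updates in place
        disk.write(blk, data)
    disk.write("log_committed", False)  # 4. retire the log

def recover(disk):
    """Idempotent recovery: safe to rerun if recovery itself crashes."""
    if disk.blocks.get("log_committed"):
        for blk, data in disk.blocks["log"].items():
            disk.write(blk, data)       # replay the committed updates
        disk.write("log_committed", False)

# Crash at every possible write and check the crash-safety invariant:
# after recovery, either all of the update is visible or none of it.
for point in range(8):
    disk = Disk()
    disk.blocks.update({"inode": "old-inode", "data": "old-data"})
    disk.fail_after = point
    try:
        commit(disk, {"inode": "new-inode", "data": "new-data"})
    except Crash:
        disk.fail_after = None
        recover(disk)
    state = (disk.blocks["inode"], disk.blocks["data"])
    assert state in {("old-inode", "old-data"),
                     ("new-inode", "new-data")}, state
```

The single block write of the commit record is what makes the update atomic with respect to crashes: assuming one block write is itself atomic, every crash lands cleanly on either side of the commit point, and because recovery is idempotent it also tolerates crashes that occur during recovery itself, mirroring the worst case discussed above.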