NAND flash memory has become the default storage component in embedded systems. One of the key technologies for flash management is the address mapping scheme between logical and physical addresses, which deals with the inability to perform in-place updates in flash memory. A demand-based page-level mapping cache is often applied to meet the cache size constraints and performance requirements of embedded storage systems. However, recent studies have shown that the management overhead of mapping cache schemes is sensitive to host I/O patterns, especially when the mapping cache is small. This paper presents a novel I/O scheduling scheme, called MAP+, to alleviate this problem. The proposed approach reorders I/O requests to improve performance from two angles: prioritizing requests that will hit in the mapping cache, and grouping requests with related logical addresses into large batches. Batches of requests are then reordered to further optimize request waiting time. Experimental results show that MAP+ improved upon traditional I/O schedulers by 48% and 18% in terms of read and write latencies, respectively.
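The abstract only outlines MAP+ at a high level, so the following is a minimal, hypothetical Python sketch of the two reordering ideas it names: favoring requests whose mapping entry is already cached, and batching requests that share a mapping-table page so a single cache miss serves the whole batch. All identifiers, the translation-page granularity, and the shortest-batch-first heuristic are assumptions for illustration, not the paper's actual algorithm.

```python
from collections import namedtuple

# Hypothetical request record: lpn = logical page number, arrival = arrival time.
Request = namedtuple("Request", ["lpn", "arrival", "is_write"])


def reorder_requests(pending, cached_translation_pages, lpns_per_translation_page=512):
    """Reorder pending I/O requests in the spirit of MAP+ (simplified sketch).

    1. Requests whose translation page is already in the mapping cache are
       served first (cache-hit priority).
    2. Remaining requests are grouped into batches that fall in the same
       translation page, so one mapping-cache miss covers the whole batch.
    3. Batches are issued shortest-first (an assumed heuristic) to reduce
       average waiting time.
    """
    hits, misses = [], []
    for req in pending:
        tpage = req.lpn // lpns_per_translation_page
        (hits if tpage in cached_translation_pages else misses).append(req)

    # Group cache-missing requests by the translation page their LPN falls in.
    batches = {}
    for req in misses:
        key = req.lpn // lpns_per_translation_page
        batches.setdefault(key, []).append(req)

    # Issue smaller batches first, breaking ties by earliest arrival time.
    ordered_batches = sorted(
        batches.values(),
        key=lambda b: (len(b), min(r.arrival for r in b)),
    )

    schedule = sorted(hits, key=lambda r: r.arrival)
    for batch in ordered_batches:
        schedule.extend(sorted(batch, key=lambda r: r.arrival))
    return schedule


# Example: requests 0 and 1030 map to cached translation page 0 and are served
# first; the rest are batched by translation page and issued shortest-first.
if __name__ == "__main__":
    reqs = [Request(0, 1, False), Request(4096, 2, True),
            Request(4100, 3, False), Request(1030, 4, False)]
    print(reorder_requests(reqs, cached_translation_pages={0, 2}))
```

The sketch only captures the batching and prioritization intuition; the paper's actual batch reordering and waiting-time optimization are not specified in the abstract.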