A few years ago, solid-state drives (SSDs) were not a popular storage solution for most users, owing to their steep prices and severely limited capacities: many were too small to hold the operating system alongside even the basic programs. At the time, anyone who could get a 32GB SSD and use it in a hybrid system counted as lucky, while everyone else made do in the traditional hard drive (HDD) space.
Diversity Of Solid State Drives
A solid-state drive (SSD), also known as a solid-state disk even though it contains no actual disk or drive motor to spin one, is a solid-state storage device that uses integrated-circuit assemblies as memory to store data persistently. Solid-state drives use either NAND flash or SDRAM (non-volatile and volatile storage, respectively). NAND flash is so called because of the NAND-gate technology it uses and is common in USB flash drives and many types of memory card. NAND-flash-based drives are persistent and can therefore effectively mimic a hard disk drive. Synchronous dynamic random-access memory (SDRAM) is volatile and requires a separate power source if it is to operate independently of a computer. SSD technology primarily uses electronic interfaces compatible with traditional block input/output (I/O) hard disk drives, which permits simple replacement in common applications. Additionally, new I/O interfaces, like SATA Express, have been designed to address the specific requirements of SSD technology.

Now, when you want an external storage device, you can dispense with the old-fashioned external hard drives and go for the best external SSD instead. It's easy to say "SSDs make my computer fast," but understanding why they make your computer fast requires a look at the places inside a computer where data gets stored. These locations can collectively be referred to as the "memory hierarchy," and they are described in great detail in the classic Ars article "Understanding CPU Caching and Performance."

Using current chips as a basis, researchers set out to gauge the state of flash technology [PDF] overall. They found that latency and data errors increased as drive size increased, and that these issues worsened to the point of making a drive too unstable somewhere around 16TB, a capacity the researchers say we will reach sometime in the middle of the next decade.
SSDs have no moving (mechanical) components. This distinguishes them from traditional electromechanical magnetic disks such as hard disk drives (HDDs) or floppy disks, which contain spinning platters and movable read/write heads. Compared with electromechanical disks, SSDs are typically more resistant to physical shock, run silently, and have lower access times and latency. With the emergence of mSATA SSDs for thin laptops and compact systems, the diversity of SSDs advanced one more step toward the mainstream.

Solid-state drives may be preferred over traditional disk drives for a number of reasons, the first being speed of operation. Because hard disk drives need to be spinning for the head to read sectors of the platter, sometimes we have to wait for spin-up time. Once the disk is spinning, the head must seek to the correct place on the disk, and from there the disk must spin just enough so that the correct data passes under the head. If data is spread over different parts of the disk (fragmented), this operation is repeated until all the data has been read or written. While each individual operation takes only fractions of a second, the sum of them may not, and it is often the case that reads from and writes to the hard disk are the bottleneck in a system.

The primary measure of speed we're concerned with here is access latency: the amount of time it takes for a request to traverse the wires from the CPU to a given storage tier. It's an axiom of the memory hierarchy that as one walks down the tiers from top to bottom, the storage in each tier becomes larger, slower, and cheaper.

Making matters worse for flash, the speed advantage that SSDs now enjoy, a common reason to choose the technology over traditional hard drives, is expected to disappear: by 2024, the study says, latency will increase by as much as 2.5 times over current rates.
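The mechanical delays described above can be estimated with simple arithmetic. Here is a small sketch; the 7200 RPM spindle speed and 9 ms average seek time are typical consumer-HDD ballpark figures assumed for illustration, not values from any specific drive:

```python
RPM = 7200            # typical consumer HDD spindle speed (assumed)
AVG_SEEK_MS = 9.0     # typical average seek time in ms (assumed)

def rotational_latency_ms(rpm):
    """Average rotational latency: half of one full revolution."""
    ms_per_rev = 60_000 / rpm   # milliseconds per revolution
    return ms_per_rev / 2

def avg_access_ms(rpm, seek_ms):
    """Average delay before the head can start reading a random sector."""
    return seek_ms + rotational_latency_ms(rpm)

print(f"rotational latency: {rotational_latency_ms(RPM):.2f} ms")      # ~4.17 ms
print(f"average access:     {avg_access_ms(RPM, AVG_SEEK_MS):.2f} ms") # ~13.17 ms
```

At roughly 13 ms per random access, a fragmented read that touches a few hundred scattered sectors already costs seconds, which is exactly why mechanical latency dominates HDD performance.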
Latency plays a tremendous role in the effective speed of a given piece of storage, because latency is dead time: time the CPU spends waiting for a piece of data is time the CPU isn't actively working on that piece of data. However, while the price of SSDs has continued to decline over time, consumer-grade SSDs are still roughly six to seven times more expensive per unit of storage than consumer-grade HDDs.
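To put "dead time" in concrete terms, this back-of-the-envelope sketch converts the access latency of each storage tier into CPU clock cycles spent idle. The latency figures and the 3 GHz clock are rough, commonly cited ballpark values assumed for illustration:

```python
CLOCK_HZ = 3e9  # assume a 3 GHz CPU

# Rough ballpark access latencies in seconds (illustrative assumptions)
tiers = {
    "register":   0.3e-9,   # on the order of a single cycle
    "DRAM":       100e-9,   # ~100 ns
    "SSD (NAND)": 100e-6,   # ~100 microseconds
    "HDD seek":   10e-3,    # ~10 ms seek plus rotation
}

def wasted_cycles(latency_s, clock_hz=CLOCK_HZ):
    """CPU cycles spent waiting for a single access to complete."""
    return round(latency_s * clock_hz)

for name, lat in tiers.items():
    print(f"{name:12s} ~{wasted_cycles(lat):>12,} cycles of dead time")
```

The jump from hundreds of wasted cycles (DRAM) to tens of millions (an HDD seek) is the whole story of why storage latency, not raw bandwidth, so often dominates perceived speed.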
Comparison With Traditional HDD
Because the information on a solid-state drive can be accessed electronically rather than mechanically, there is essentially no seek latency when data is transferred. As of 2014, most SSDs use NAND-based flash memory, which retains data without power. For applications requiring fast access but not necessarily data persistence after power loss, SSDs may instead be constructed from random-access memory (RAM); such devices may employ separate power sources, such as batteries, to maintain data after power loss. Because there is no relationship between spatial locality and retrieval speed, there is no degradation of performance when data is fragmented. Fitting a laptop with an SSD is accordingly a common way to upgrade the performance of portable computers nowadays.
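The fragmentation claim can be probed with a simple micro-benchmark sketch: read the same 4 KiB blocks from a scratch file first in order, then at shuffled offsets. On an SSD the two patterns should take roughly the same time, while on an HDD the shuffled pattern is far slower. Note that the operating system's page cache will mask the difference on small files, so treat this as an illustration of the method, not a rigorous benchmark:

```python
import os
import random
import tempfile
import time

BLOCK = 4096
BLOCKS = 256  # 1 MiB scratch file

def make_scratch():
    """Create a temporary file filled with random bytes; return its path."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(BLOCK * BLOCKS))
    return path

def timed_reads(path, offsets):
    """Read one BLOCK at each offset; return elapsed seconds."""
    with open(path, "rb") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return time.perf_counter() - start

path = make_scratch()
sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = random.sample(sequential, len(sequential))  # same blocks, random order
t_seq = timed_reads(path, sequential)
t_rnd = timed_reads(path, shuffled)
print(f"sequential: {t_seq * 1e3:.2f} ms, random: {t_rnd * 1e3:.2f} ms")
os.remove(path)
```

For a meaningful measurement the file should be much larger than RAM, or the cache should be bypassed (for example with `O_DIRECT` on Linux); the sketch above only shows the access-pattern setup.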
At the very top of the hierarchy are the tiny chunks of working space inside a CPU where the CPU stores things it's actively manipulating; these are called registers. They are small, only a few hundred bytes total, and as far as memory goes they have the equivalent of a Park Avenue address. Registers have the lowest latency of any segment of the entire memory hierarchy: the electrical paths from the parts of the CPU doing the work to the registers themselves are unfathomably tiny, never even leaving the core portion of the CPU's die. Getting data in and out of a register takes essentially no time at all.

Flash's scaling limit is definitely a roadblock that looks unavoidable, but there are plenty of technologies in the works that could take the place of flash storage. One possibility is 3D memory, a technology that has been around for the better part of the last decade. 3D seems to be the future in memory, and there are several companies currently working to make it a reality.
One consequence of the increased speed of reading fragmented data is a much decreased application start-up time: SanDisk, for instance, claims to have achieved Windows Vista start-up times of around 30 seconds for a laptop fitted with its SSD SATA 5000 2.5.