By now, most IT professionals have dipped their toes in the solid-state drive (SSD) vs. hard disk drive (HDD) waters, but may not have had time to sort out the different types of SSDs used in their machines.
A look at fairly recent commercial products illustrates the SSD's ubiquity.
The SSD is clearly mainstream in some respects, but it is far from fully displacing the hard drive. So is there a best practice for deploying SSDs? Or is it a purely economic decision?
According to Tech Times, the cost of SSDs still keeps some companies from making a full switchover. In other words, the decision hinges on a crossover point, and that point has to be recalculated as prices and workloads change. SSD vs. HDD? In the history of computing, this is not a new question.
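To make the crossover idea concrete before turning to that history, here is a minimal sketch in Python. Every figure in it (price per gigabyte, per-drive IOPS, per-drive capacity) is an assumption chosen for illustration rather than a market quote; the point is simply that the cheaper tier depends on whether capacity or IOPS dominates the purchase.

```python
import math

# All figures below are illustrative assumptions, not market quotes.
SSD_PRICE_PER_GB = 0.50    # assumed $/GB for the SSD tier
HDD_PRICE_PER_GB = 0.04    # assumed $/GB for the HDD tier
SSD_IOPS = 75_000          # assumed random IOPS per SSD
HDD_IOPS = 150             # assumed random IOPS per HDD
DRIVE_CAPACITY_GB = 2_000  # assumed per-drive capacity, both tiers

def tier_cost(price_per_gb, iops_per_drive, capacity_gb, required_iops):
    """Buy enough drives to meet capacity OR IOPS, whichever needs more."""
    drives = max(math.ceil(capacity_gb / DRIVE_CAPACITY_GB),
                 math.ceil(required_iops / iops_per_drive))
    return drives * DRIVE_CAPACITY_GB * price_per_gb

def cheaper_tier(capacity_gb, required_iops):
    ssd = tier_cost(SSD_PRICE_PER_GB, SSD_IOPS, capacity_gb, required_iops)
    hdd = tier_cost(HDD_PRICE_PER_GB, HDD_IOPS, capacity_gb, required_iops)
    return ("SSD" if ssd <= hdd else "HDD", ssd, hdd)

# 10 TB at 20,000 random IOPS: HDD wins on $/GB, but the IOPS need
# forces so many spindles that SSD ends up cheaper overall.
print(cheaper_tier(10_000, 20_000))  # ('SSD', 5000.0, 10720.0)

# The same 10 TB at a light 500 IOPS flips the answer back to HDD.
print(cheaper_tier(10_000, 500))     # ('HDD', 5000.0, 400.0)
```

Rerunning a calculation like this as prices and workloads shift is what keeps the crossover point "dynamic."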
From the beginning, there have been memory storage tiers: fast memory and slow memory. Cost, capacity and speed are traded off. In 1965, the CDC 6600 computer system featured central memory, extended core storage, fixed HDDs, moving-head HDDs and tape. Managing storage tiers has always been critical to building hardware, creating applications and managing systems. Few of today's IT managers, though, have had to schedule time to mount tape drives or spin down massive CDC 9760 disk packs.
Fast forward (to stay with the tape metaphor) a few decades to 2011. In that year, Wired introduced its readers to Gordon, "the world's first flash supercomputer." Named after Flash Gordon, the supercomputer installed at the San Diego Supercomputer Center (SDSC) used 300 terabytes of flash memory (initially Intel 710 series drives). Gordon foreshadowed even wider adoption of flash memory. SDSC applications lead Bob Sinkovits told Wired why they bought into SSDs: "For data-intensive applications, though, the biggest advantage is much lower latency."
It was an architectural goal that would have been familiar to the CDC 6600 design team.
Whether their ideas are truly new, or a sort of Back to the Future exercise, engineers are busy studying how best to use devices like SSDs. In a 2013 IEEE Computer article introducing a collection of papers on coming memory innovations, Atwood, Chae and Shim explain that as demand for scalable memory systems increases, "...memory technology becomes both a solution and a bottleneck, spurring the industry to redefine how these systems use memory. One of the best examples of this is the emergence of solid-state drives (SSDs) across the range of computing devices."
The changes some anticipate are major. The title of a paper in that issue by Swanson and Caulfield is typical of this vision: "Refactor, Reduce, Recycle: Restructuring the I/O Stack for the Future of Storage." Still more recently, SSDs are a big part of the drive toward software-defined storage, as InfoWorld reports.
Admins and DIY advocates would do well to hone their matchmaking skills. When SSDs are appropriately matched to the application, the results can bring smiles to users, and even CFOs who balk at the upfront price can live with the results.
What sort of matchmaking analysis does that involve?
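Here is one minimal sketch, again in Python and again built entirely on assumptions: the latency threshold, random-I/O cutoff and working-set figure below are rough rules of thumb for illustration, not vendor guidance.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    random_fraction: float  # share of I/O that is random, 0.0 to 1.0
    p99_latency_ms: float   # the application's tail-latency target
    working_set_gb: float   # hot data the application touches often

def recommend(w: Workload) -> str:
    """Rule-of-thumb matchmaker; every threshold here is an assumption."""
    if w.p99_latency_ms < 1.0 or w.random_fraction > 0.7:
        return "SSD (latency-sensitive or random-I/O heavy)"
    if w.working_set_gb <= 500:
        return "hybrid (hot set on an SSD cache, bulk on HDD)"
    return "HDD (sequential, latency-tolerant, capacity-bound)"

# An OLTP database: mostly random I/O with a sub-millisecond target.
print(recommend(Workload(0.9, 0.5, 200)))

# A backup archive: large, sequential and latency-tolerant.
print(recommend(Workload(0.1, 50.0, 40_000)))
```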
Your mileage may vary, but that's kind of the point. It's been that way with memory staging right from the beginning of computing.