Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs (2nd Edition)
Storage virtualization will fulfill its promise when products are able to fully automate data placement. Virtualization is the key to the concept of a data storage utility, in which computing resources are simply plugged into a network and the network itself is able to identify the type of data being generated, the performance and availability requirements typical of that data type, its quality of service requirements, the security policies that should be enforced, and the appropriate archival methods that should be applied, and then to provision additional capacity automatically as the volume of data grows.

Post-production video editing, for example, has unique requirements compared with on-line transaction processing or tape backup. Video streams require high performance, both from the SAN transport and from disk storage. Data written to the longer outer tracks of disk media requires less head movement and repositioning, and so it can be delivered more efficiently than data written to inner tracks. Because video production represents a substantial investment of money and human resources, re-creating lost or corrupted image streams is prohibitively expensive. High availability via synchronous mirroring and point-in-time snapshots for immediate tape archiving are therefore appropriate. Video editing may also require file sharing so that multiple editing workstations can manipulate the same data streams. The editing facilities themselves may be geographically separated, requiring both high-performance SAN/WAN links and data encryption over untrusted network segments. Finally, the sheer volume of data generated by digitized video requires scalable storage capacity that can seamlessly accommodate the influx of new data.

With current technology, fulfilling the requirements of particular data types is a labor-intensive and often tedious process. Each step requires manual configuration of unique products and careful crafting of processes to ensure interoperability between applications and the supporting infrastructure. The level of automated intelligence required for application-aware, policy-based virtualization is significantly more sophisticated than that required for storage pooling or tape virtualization, but it is achievable. As with any technology, the starting point is to break down individual manual processes into their constituent parts and define the means to replicate them artificially.

The ideal data storage utility must also accommodate heterogeneous deployments of applications, operating systems, servers, interfaces, interconnections, and storage targets. Although customers are not always happy with the rainbow coalition of storage products in their networks, so far they have not been willing to return to the era of monolithic, single-vendor solutions with its accompanying vendor lock-in. Making the data storage utility a reality will therefore require much closer multivendor cooperation. As always, the onus falls on customers to make sure their vendors are sufficiently motivated to take the technology forward.
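To make the idea of policy-based provisioning concrete, the following is a minimal sketch, in Python, of how the data-type requirements described above might be captured and acted on automatically. The class names, policy attributes, threshold values, and the provision() routine are illustrative assumptions rather than any vendor's actual interface.

    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        """Hypothetical policy record: attributes a storage utility would enforce per data type."""
        performance_tier: str        # e.g., outer-track or high-RPM placement for streaming workloads
        synchronous_mirroring: bool  # high availability for data that is expensive to re-create
        snapshot_interval_min: int   # point-in-time snapshots feeding tape archiving (0 = none)
        encrypt_over_wan: bool       # encryption across untrusted SAN/WAN segments
        growth_threshold: float      # fraction of capacity used that triggers auto-provisioning

    # Illustrative policy catalog keyed by data type; the values are assumptions, not recommendations.
    POLICIES = {
        "video_editing": StoragePolicy("high", True, 60, True, 0.80),
        "oltp":          StoragePolicy("high", True, 15, False, 0.70),
        "tape_backup":   StoragePolicy("standard", False, 0, False, 0.90),
    }

    def provision(data_type: str, used_gb: float, allocated_gb: float) -> float:
        """Return the new allocation, growing capacity when usage crosses the policy threshold."""
        policy = POLICIES[data_type]
        if used_gb / allocated_gb >= policy.growth_threshold:
            # Automatically extend the virtualized volume; the 1.5x growth factor is an arbitrary example.
            return allocated_gb * 1.5
        return allocated_gb

For example, provision("video_editing", 850.0, 1000.0) returns an expanded allocation of 1500.0 GB, because usage has crossed the 80 percent threshold defined for that data type. A real data storage utility would make the same kind of decision, but across heterogeneous multivendor devices and without manual configuration of each step.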