According to HDS, the future of
information management is wholly dependent on new storage management systems.
Over the past few decades, rapid
advancements in disk drive technology have created a year-over-year price
erosion of about 10 to 15 percent in disk media costs. The affordable pricing
certainly made it easier for many enterprises to buy new disks rather than
maintain old ones.
However, due to manufacturing and supply
shortages following the catastrophic floods in Thailand last year, hard disk
prices have now increased for the first time, by as much as 5 to 15 percent.
Gartner predicts that disk prices will continue to rise this year (by as much
as 20 percent), and IDC anticipates the shortage will continue to affect
the industry into 2013.
Whether this price surge is just a short-term
anomaly or a long-term reality as manufacturers rebuild their production
facilities and invest in next-generation disk technologies, one thing is
certain: it is creating new cost burdens for data centres.
Indeed, even when disk prices were falling,
IT managers faced a long list of cost challenges due to the rapid growth
of data and the need to acquire new storage devices. They also had to
struggle constantly to maintain rapidly growing storage islands made up of
multi-vendor hardware and software, control the demand for expanding floor
space, and reduce power and cooling costs.
Capitalise storage assets
Although storage costs are spiralling
upwards for many organisations, the scale of a company's storage, more often
than not, doesn't match its actual consumption. In most cases, businesses use
only about 30 percent of their storage capacity, while the remainder sits
idle. What are the implications of this phenomenon? For a start, it means
there's still plenty of room for growth and better resource utilisation within
existing storage capacity.
More importantly, it means organisations
needn't suffer from rising prices. Instead of acquiring expensive new media,
they can focus on capitalising their existing storage assets to enhance
storage utilisation, enable higher Capacity Efficiency (CE) and achieve a lower
Total Cost of Ownership (TCO).
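To put rough numbers on that headroom, consider a hypothetical estate at the
30-percent utilisation figure cited above (all other figures in this sketch
are invented for illustration):

    # Hypothetical illustration: how much growth a 30%-utilised estate
    # can absorb before any new disk purchase is needed.
    raw_capacity_tb = 500        # assumed installed capacity
    utilisation = 0.30           # typical figure cited above
    annual_growth_tb = 60        # assumed yearly data growth

    used_tb = raw_capacity_tb * utilisation          # 150 TB in use
    idle_tb = raw_capacity_tb - used_tb              # 350 TB sitting idle
    years_of_headroom = idle_tb / annual_growth_tb   # ~5.8 years

    print(f"Idle capacity: {idle_tb:.0f} TB")
    print(f"Growth absorbed without new purchases: {years_of_headroom:.1f} years")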
Achieve better CE
Generating a high level of CE requires
maximising storage in two key aspects: allocation efficiency and utilisation
efficiency. The former involves eliminating the waste of over-allocation, while
the latter is about using the available storage space efficiently, so as to
reduce costs and increase performance and availability.
Intuitive allocation
Over-allocating disk resources is a
common practice among IT personnel. They usually allocate capacity beyond
users' requests and keep 10 to 15 copies of all stored data to ensure
server storage levels do not unexpectedly run out. This over-allocation
can be eliminated by using thin provisioning, where virtual space is presented
for the requested allocation and physical capacity is provisioned only as it is
actually used. This approach also works with file system APIs, such as those
for VMFS and Symantec file systems, which can notify the storage system when
files are deleted so that the freed allocation can be reassigned elsewhere. By
eliminating that unused space, the capacity and time needed to make copies are
reduced. Copies can be reduced further with copy-on-write, so that only new
changes are replicated.
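A minimal sketch of the thin-provisioning idea, in Python, may help make this
concrete. The class and method names are invented for illustration (real
arrays implement this in firmware): virtual capacity is promised up front,
physical pages are consumed only on first write, and pages freed by the file
system are returned to the shared pool.

    # Sketch of thin provisioning: virtual size is promised up front,
    # but physical pages are drawn from a shared pool only on first write.

    class ThinPool:
        def __init__(self, physical_pages):
            self.free_pages = physical_pages

        def take_page(self):
            if self.free_pages == 0:
                raise RuntimeError("pool exhausted: add physical media")
            self.free_pages -= 1

        def give_back(self):
            self.free_pages += 1

    class ThinVolume:
        def __init__(self, pool, virtual_pages):
            self.pool = pool
            self.virtual_pages = virtual_pages  # capacity promised to the host
            self.mapped = set()                 # pages actually backed by disk

        def write(self, page_no):
            if page_no >= self.virtual_pages:
                raise IndexError("write beyond provisioned size")
            if page_no not in self.mapped:      # allocate on first write only
                self.pool.take_page()
                self.mapped.add(page_no)

        def unmap(self, page_no):
            # Called when the file system reports a deletion (VMFS-style reclaim).
            if page_no in self.mapped:
                self.mapped.remove(page_no)
                self.pool.give_back()

    pool = ThinPool(physical_pages=1000)
    vol = ThinVolume(pool, virtual_pages=10000)  # 10x over-subscription
    vol.write(0); vol.write(7)
    print(pool.free_pages)   # 998: only written pages consume disk
    vol.unmap(7)
    print(pool.free_pages)   # 999: deleted space returns to the pool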
Better utilisation
Data should be placed on the
appropriate tier of storage based on frequency of access, business value and
the cost of purchasing storage space. This can be achieved by using an
automated tiering system that operates on policies triggered by specific times
or events. There are two types of automated tiering, namely volume level and
page level. However, utilisation efficiency can only be achieved by tiering
data at the page level. This is because volume-level tiering needs to move
whole volumes between tiers, which requires significantly more space.
Page-level tiering, meanwhile, focuses on moving only hot pages between
tiers, and these typically occupy only about 5 to 10 percent of the total volume.
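The page-level policy described above can be sketched as a periodic job that
promotes frequently accessed ("hot") pages to faster media and demotes cold
ones. The tier names and thresholds below are assumptions for illustration,
not HDS policy values.

    # Sketch of policy-driven page-level tiering: a periodic job promotes
    # hot pages to fast media and demotes cold pages to cheap media.

    TIERS = ["ssd", "sas", "sata"]  # fast/expensive -> slow/cheap

    def retier(pages, hot_threshold=100, cold_threshold=5):
        """pages: list of dicts with 'tier' and 'accesses' per policy window."""
        for page in pages:
            idx = TIERS.index(page["tier"])
            if page["accesses"] >= hot_threshold and idx > 0:
                page["tier"] = TIERS[idx - 1]   # promote one tier up
            elif page["accesses"] <= cold_threshold and idx < len(TIERS) - 1:
                page["tier"] = TIERS[idx + 1]   # demote one tier down
            page["accesses"] = 0                # reset for the next window

    pages = [
        {"tier": "sas", "accesses": 240},  # hot page: promoted to ssd
        {"tier": "sas", "accesses": 2},    # cold page: demoted to sata
        {"tier": "sata", "accesses": 50},  # warm page: stays put
    ]
    retier(pages)
    print([p["tier"] for p in pages])      # ['ssd', 'sata', 'sata']

Only the small hot fraction of pages ever moves, which is what keeps the space
overhead at the 5- to 10-percent level noted above.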
Tweaked storage virtualisation
By far the most important solution to
enable CE is the deployment and management of virtualised storage. It extends
storage efficiency tools, such as automated tiering and thin provisioning, to
existing storage systems that do not have those capabilities.
With storage virtualisation, you can
consolidate all data types - files, content and block storage, for both
structured and unstructured data - from internal, external and multivendor
storage onto a single storage platform. As all storage assets become a single
pool of resources, automated functions are extended across the entire storage
infrastructure. This makes it easy to reclaim capacity and maximise
utilisation, and even to re-purpose existing assets to extend their useful
life. In doing so, it significantly enhances your IT agility and capacity
efficiency to meet unstructured data growth.
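Conceptually, the virtualisation layer is a mapping from one virtual pool onto
heterogeneous back-end arrays. The toy sketch below illustrates the idea; the
array names and sizes are invented.

    # Toy sketch of storage virtualisation: one virtual pool fronts several
    # multi-vendor back-end arrays, so capacity can be allocated, reclaimed
    # and re-purposed without the host knowing where blocks actually live.

    class VirtualPool:
        def __init__(self):
            self.backends = {}  # array name -> free TB

        def add_backend(self, name, free_tb):
            self.backends[name] = free_tb

        def total_free(self):
            return sum(self.backends.values())

        def allocate(self, tb):
            # Naive placement: fill from the back end with the most headroom.
            name = max(self.backends, key=self.backends.get)
            if self.backends[name] < tb:
                raise RuntimeError("no single back end can satisfy the request")
            self.backends[name] -= tb
            return name  # the host never sees this detail

    pool = VirtualPool()
    pool.add_backend("hds_vsp", 120)
    pool.add_backend("vendor_b_midrange", 80)
    pool.add_backend("legacy_array", 40)
    print(pool.total_free())   # 240 TB presented as one pool
    print(pool.allocate(30))   # placed on 'hds_vsp' transparently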
Storage virtualisation also offers the
benefits of automated tiering and competitive multi-vendor storage purchasing
strategies. With the freedom to move data within a virtualised environment,
external storage effectively becomes a commoditised product. This allows
businesses to assign storage systems from different price ranges to different
tiers of storage, giving them the maximum return on their investments as well
as the ability to choose the lowest bid where appropriate. More competitively
priced media can be assigned to mid- and low-tier storage, while high-end disk
is used for high-tier storage.
Having gained such abilities, organisations
no longer need to worry about upcoming hard disk shortages and price increases.
Ultimately, they can flexibly design their long-term storage purchasing strategies
according to their specific needs and actual consumption patterns.
Independent storage controllers
maximise performance with lower costs
To optimise storage performance with the
highest efficiency, the ideal storage infrastructure separates disk
capacity from an independent storage controller. This is because dynamic
page-level tiering requires the handling of more metadata and, as such, higher
processing power within the storage system. By implementing separate
pools of processors to support this expanding function, the efficiency of data
mobility can be maximised without impacting basic I/O performance and
throughput.
Another advantage of separating disk
capacity from the storage system controllers is the freedom it creates to
manage storage media. Disk capacity no longer needs to be refreshed at the same
pace as the storage system controllers, which are normally kept current with
systems technologies on a three-year cycle. Businesses can thus prolong the
life of existing disk capacity to a five- to seven-year cycle, according to
their specific needs. Since storage media still accounts for the bulk of the
cost of a new storage system, this longer depreciation cycle will
significantly reduce capital costs. Additionally, organisations can enjoy
multi-vendor purchasing strategies for external storage, thus getting much more
competitive prices on mid- and low-tier storage.
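To see why the decoupled refresh cycle matters financially, consider a
hypothetical comparison of annualised media cost (the purchase price below is
invented for illustration):

    # Annualised media cost when disk capacity is refreshed with the
    # controller (3-year cycle) versus independently (5- to 7-year cycle).
    media_cost = 300_000  # assumed purchase price of the disk capacity, USD

    for cycle_years in (3, 5, 7):
        annualised = media_cost / cycle_years
        print(f"{cycle_years}-year refresh: ${annualised:,.0f} per year")

    # 3-year refresh: $100,000 per year
    # 5-year refresh: $60,000 per year
    # 7-year refresh: $42,857 per year

Stretching the media refresh from three years to five or seven cuts the
annualised capital cost of the same capacity by roughly 40 to 57 percent.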
Storage infrastructure maximises CE with
minimised spending
Rather than buying additional disks to
tackle Big Data in this period of rising storage costs, IT decision makers
should choose more cost-efficient alternatives that maximise capacity
efficiency through storage virtualisation. The right storage solution will
help to simplify infrastructure, ensure Quality of Service, reduce risk, and
align the right storage tier to the right application, thereby reducing
operational and capital expenditures.
It is now critical that companies not only
free up the capacity they need today from existing storage assets but also
position their business for sustainable growth long into the future.