5 Steps to Keeping Your Storage Sanity

Organizations of all sizes across many industries are looking for ways to control their exploding data storage growth.

Why?

The digital growth of forms, images, and video; the increased use of messaging and social networking applications; cloud-born utilities; and the growth of the business itself.

Much of this important business information needs to be stored for longer periods of time — sometimes forever — thanks to numerous regulations requiring data protection.

With deeper demands on data, evolving business requirements, and a constant flow of business-critical information, backup and maintenance windows are disappearing. Finding time to schedule a traditional backup window or take systems offline is not an option for many 24x7 digital businesses.

In addition to initiatives to control data growth, chief information officers (CIOs) are being asked to deliver IT agility and more data security, all the while reducing costs.

This mandate almost contradicts what the industry is asking IT to do: drive innovation to improve business growth.

A typical CIO of any organization is being measured daily and asked:

  • How is IT facilitating business innovation to make more money?
    • Processing more business transactions per second
    • Improving business insights and decision making
    • Enabling faster time/turnaround with supply chain effectiveness
  • How is IT helping to save money?
    • With far fewer devices, tools, utilities, resources
    • By lowering energy and floor space costs
    • By reducing administrative and operational costs
  • How is IT helping the business gain a competitive edge?
    • Positioning IT as a strategic, efficient solutions provider
    • Helping the business gain competitive advantage through insights
    • Creating a better customer experience (e.g., self-service)

IT organizations struggle today in two key areas: costs and agility/innovation.

Data management and controlling exponential growth are at the center of these two initiatives.

There are many emerging technologies and products available to help organizations tame the data storage beast.

Most IT departments have heard of flash, hyperconvergence, and analytics toolsets, but they also need to maintain the legacy skill sets and knowledge base for the current solutions that still function and are heavily used today.

In a time of understaffed IT departments with limited budgets, the rapid progress in storage and data technologies can be daunting.

Many organizations struggle with challenges such as:

  • How to understand and track all the modern technologies that impact storage management and backup
  • How to decide which technologies and products make sense for an organization’s business requirements and IT infrastructure
  • How and when to introduce emerging technologies and address backup data issues without generating excessive capital expenditure (CAPEX) and operational expenditure (OPEX) costs or disrupting operations
  • How to cope with skills gaps, given the time and expense of training IT staff to work with these modern technologies
  • How and when to commit to cloud-based solutions (public, private or hybrid)

Addressing these issues can be extremely challenging given the blistering pace of data growth and the complexity of the solutions that may be used to address it.

However, there is a viable, sane strategy that organizations can follow to make the job easier.

How to Keep Your Storage Sanity

There are five steps to help you develop a successful storage strategy.

Step 1: Understand Your Data

Many organizations have expensive, legacy, high-performance storage systems that consume more budget than required.

Why?

Because, at the time of implementation, many organizations did not fully understand how their data consumption would affect the capacity of the current infrastructure.

In many cases, only a portion of organizational data is mission-critical enough to be stored in the highest-performing storage. Many organizations store a much higher percentage of data this way than necessary.

Before addressing strategies for managing storage, backup, compliance, and disaster recovery, it makes sense first to understand your data.

Classifying and understanding your data is a necessary step when reviewing your environment. 

Ask these questions:

  • What types of data are critically important to the business?
  • Where does such data currently reside (storage, server, other)?
  • How is it used and stored in your organization?
  • What applications generate the data?
  • Are the applications cloud-ready? (Can data move seamlessly through the ecosystem with no impact to application accessibility, performance or location?)
  • Where does the data go (workflow, movement, migration, replication)?
  • Where is it stored long-term and for what duration?
  • How is it protected?

Business units can also provide crucial insight into future growth affecting storage, such as acquisitions, new initiatives and new lines of business.
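
To make this tangible, here is a minimal, illustrative Python sketch of how a data-profiling pass might bucket files on a share by type and last-modified age. The mount point, age buckets, and thresholds are assumptions for illustration, not a definitive classification scheme.

```python
#!/usr/bin/env python3
"""Minimal data-profiling sketch: walk a file share and summarize
capacity by file type and last-modified age bucket. Paths and bucket
definitions are illustrative assumptions only."""
import os
import time
from collections import defaultdict

SHARE_ROOT = "/mnt/corp-share"                       # hypothetical mount point
AGE_BUCKETS = [(30, "hot"), (365, "warm"), (float("inf"), "cold")]  # days

def age_bucket(days_old):
    """Return the first bucket label whose day limit covers this file."""
    for limit, label in AGE_BUCKETS:
        if days_old <= limit:
            return label

summary = defaultdict(int)                           # (extension, bucket) -> bytes
now = time.time()

for dirpath, _, filenames in os.walk(SHARE_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                                 # skip unreadable files
        days_old = (now - st.st_mtime) / 86400
        ext = os.path.splitext(name)[1].lower() or "<none>"
        summary[(ext, age_bucket(days_old))] += st.st_size

# Print the largest capacity consumers first.
for (ext, bucket), size in sorted(summary.items(), key=lambda kv: -kv[1]):
    print(f"{ext:10s} {bucket:5s} {size / 1e9:8.2f} GB")
```

Even a rough report like this often shows how much "cold" data is sitting on premium storage.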

Step 2: Understand Your Data Infrastructure

As you audit your organization’s data, you should take the opportunity to gain a deeper understanding of the storage and other infrastructure used to store it.

Here are additional questions to help understand the infrastructure:

  • What storage and server hardware types, applications, and products are used?
  • What is the age of the data (is there a data-profiling utility to help gauge and track it)?
  • Which systems are under- and overutilized?
  • What performance, management, virtualization, and other features are being used?
  • What features offered by the hardware manufacturer are not currently being used?
  • Are these systems fully integrated, and do they work well together?

Do the same for all your backup and disaster recovery hardware, software, and cloud services, if applicable.
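
As a starting point, even a simple script can flag under- and overutilized volumes. The sketch below checks utilization across a set of mount points; the mount list and the 20/80 percent thresholds are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Quick utilization check across mounted volumes. The mount list and
the 20%/80% thresholds are illustrative assumptions."""
import shutil

MOUNTS = ["/", "/mnt/tier1", "/mnt/tier2", "/mnt/backup"]   # hypothetical mounts
LOW, HIGH = 0.20, 0.80                                       # utilization thresholds

for mount in MOUNTS:
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        print(f"{mount:12s} not mounted, skipping")
        continue
    used_pct = usage.used / usage.total
    if used_pct < LOW:
        status = "UNDERUTILIZED"
    elif used_pct > HIGH:
        status = "OVERUTILIZED"
    else:
        status = "ok"
    print(f"{mount:12s} {used_pct:6.1%} used  {status}")
```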

Step 3: Create and Update Policies through Empirical Data

Analytics tools have quickly become a mainstay among newer technologies. As embedded toolsets or compatible add-on utilities, analytics help decipher workflow management as well as data placement. This is invaluable when architecting, designing, or enhancing data center infrastructure for the automation and orchestration levels needed to support next-generation business requirements.

Through analytics tools, data can be reviewed, measured, managed and categorized for more efficient infrastructure usage. 

Once you understand your data and its infrastructure, it’s time to take a fresh look at your backup and retention policies, given the knowledge you’ve gained of the relative importance of different data sources and compliance requirements.
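
For example, a policy review might start from empirical access measurements and map each dataset to a backup frequency, retention period, and storage tier. The sketch below is illustrative only; the datasets, thresholds, and policy classes are assumptions, not prescriptions.

```python
#!/usr/bin/env python3
"""Sketch of turning empirical access data into a backup/retention policy.
Datasets, thresholds, and policy classes are illustrative assumptions."""

# Hypothetical measurements pulled from an analytics tool:
# (dataset, reads per day, days since last write)
observations = [
    ("orders_db",          5_200,   0),
    ("hr_archive",             3, 410),
    ("marketing_media",       40,  90),
]

def policy_for(reads_per_day, days_since_write):
    """Map observed activity to a backup, retention, and tier class."""
    if reads_per_day > 1_000 or days_since_write < 7:
        return {"backup": "hourly snapshots", "retention": "35 days", "tier": "flash"}
    if days_since_write < 180:
        return {"backup": "nightly", "retention": "1 year", "tier": "hybrid"}
    return {"backup": "weekly", "retention": "7 years", "tier": "object/archive"}

for name, reads, age in observations:
    print(name, policy_for(reads, age))
```

The point is that the policy is derived from measured behavior rather than assumptions made at implementation time.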

Step 4: Prepare Your Data Lake and Create Your Software-Defined Strategies

Once you understand which data requires the highest-performing storage, backup, and disaster recovery for the business applications and databases, start architecting a storage data lake on a hyperconverged infrastructure (HCI) or scale-out architecture that will be compatible with the integration of a software-defined storage (SDS) strategy.

Many SDS solutions can already detect, or are being configured to detect, when data should be moved from one place to another via data mobility engines and predefined algorithmic criteria. Data mobility engines and processes are embedded in many solutions today and are heavily leveraged to automatically place data on the appropriate platform, tier, and/or location based on age and usage metadata. These “places” may include legacy, slower-spinning, hard-disk designs and higher-cost storage locations that are being replaced by cheaper, more agile solutions.

They can also be used as a migration utility in a self-detecting, relocatable environment, reducing the need for outages, downtime, manual execution, and less efficient means of transport.
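
To illustrate the idea, the following sketch shows a toy tiering rule that derives a target tier from age and usage metadata and produces a migration plan. The tier names and thresholds are assumptions; production SDS engines implement vendor-specific algorithms and move the data automatically.

```python
#!/usr/bin/env python3
"""Toy data-mobility rule: pick a target tier from age and usage metadata.
Tier names and thresholds are assumptions, not any vendor's algorithm."""
from dataclasses import dataclass

@dataclass
class DatasetStats:
    name: str
    days_since_access: int
    reads_last_30d: int
    current_tier: str

def target_tier(stats: DatasetStats) -> str:
    """Decide where the data should live based on recent activity."""
    if stats.reads_last_30d > 10_000:
        return "flash"
    if stats.days_since_access <= 90:
        return "capacity"          # e.g., a hybrid or scale-out pool
    return "archive"               # e.g., object store or cloud cold tier

def migration_plan(datasets):
    """List datasets whose current tier no longer matches the rule."""
    return [(d.name, d.current_tier, target_tier(d))
            for d in datasets if target_tier(d) != d.current_tier]

datasets = [
    DatasetStats("vdi_images", 1, 250_000, "capacity"),
    DatasetStats("2014_projects", 700, 2, "flash"),
]
for name, src, dst in migration_plan(datasets):
    print(f"move {name}: {src} -> {dst}")
```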

Step 5: Create an Information and Data Lifecycle Management Process as You Prepare for Internet of Things (IoT)

The most effective next step, in our experience working with our clients, is to reduce the sheer volume of data with the standard technology enhancements available in most storage systems today. These enhancements include deduplication, compression, and flash options that double or triple capacity and performance.
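
For a sense of how deduplication shrinks stored volume, the sketch below chunks a file, hashes each chunk, and keeps only unique chunks. Real arrays do this inline or post-process at much larger scale; the fixed chunk size and the input path here are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Minimal illustration of block-level deduplication: chunk input files,
hash each chunk, and store each unique chunk only once. Concept sketch
only; real storage systems do this inline or post-process."""
import hashlib

CHUNK_SIZE = 4096                     # assumed fixed chunk size

def dedup_store(paths):
    """Return (logical bytes written, physical bytes actually stored)."""
    store = {}                        # sha256 digest -> chunk bytes
    logical = physical = 0
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):   # requires Python 3.8+
                logical += len(chunk)
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in store:
                    store[digest] = chunk
                    physical += len(chunk)
    return logical, physical

logical, physical = dedup_store(["/var/log/syslog"])   # hypothetical input file
ratio = logical / physical if physical else 1.0
print(f"logical {logical} B, physical {physical} B, dedup ratio {ratio:.2f}:1")
```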

Beyond the commonly embedded toolsets mentioned above, asset management, a configuration management database (CMDB), and data attribute tagging can be integrated with a chargeback (or show-back) mechanism. Such a mechanism tracks the cost of your data, helping you address, or at the very least measure and identify, its CAPEX/OPEX impact today.
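
A show-back calculation can be as simple as multiplying each group's consumed capacity by a cost per gigabyte per tier, as in the sketch below. The rates and consumption figures are purely illustrative assumptions.

```python
#!/usr/bin/env python3
"""Show-back sketch: price each group's consumed capacity by tier.
Rates and consumption figures are illustrative assumptions only."""

COST_PER_GB_MONTH = {"flash": 0.25, "capacity": 0.08, "archive": 0.01}   # assumed rates

# Hypothetical consumption pulled from a CMDB / asset tags: (dept, tier, GB)
consumption = [
    ("finance",   "flash",     2_000),
    ("finance",   "archive",  40_000),
    ("marketing", "capacity",  8_000),
]

bill = {}
for dept, tier, gb in consumption:
    bill[dept] = bill.get(dept, 0.0) + gb * COST_PER_GB_MONTH[tier]

for dept, monthly_cost in sorted(bill.items()):
    print(f"{dept:10s} ${monthly_cost:,.2f} per month")
```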

Having less data to store, thanks to these technological advancements and tools, immediately translates into lower equipment costs.

However, policies and procedures will need to be defined to support the infrastructure being leveraged.

There are also many software advancements that support operational activities in five clicks or fewer. In recent years, operational efficiency has been the main goal in simplifying repetitive tasks so that more functions can be managed with fewer keystrokes, or clicks. These are the building blocks for automating and orchestrating daily data center functions.

Deployments, changes, and many other manual activities become faster and less error-prone. Savings can also be found in network bandwidth, data center real estate, power, cooling, and management resources, and these savings become dramatic when the sheer volume of storage is significantly reduced.

With the size of data reduced, archival data is more accessible and backup window requirements shrink.

A major financial exchange saw spending on storage technology drop by 80 percent with deduplication, from $10 million to $2 million annually, over five years. In addition, the cost of meeting measurable service-level agreements (SLAs) and delivering uptime improvements decreased, even as the company's data requirements continued to grow at a fast pace.

An Eye on Cost and Complexity

In evaluating technologies for the data lake, SDS, IoT, and other purposes, always keep an eye on cost, complexity, implementation effort, full integration, and the actual expected return on investment from these strategies.

There are many reasons why early adopters start small and prudently, test-bedding and proving the concept in a controlled ecosystem prior to applying the technology and standards enterprise-wide.

There are equally valid reasons why other organizations choose to move more slowly, allowing the early adopters to pave the way. Either approach can prove costly, given the pace of technology and the need for business value as well as economic growth.

Move too fast? It may set you back irreparably. Move too slowly with a bit-by-bit incremental approach?  You may have no choice but to over-invest at a most inopportune time. 

Choose not to move at all? Your IT ecosystem may become too outdated to make any move to the next generation of technological advancement profitable to your business. 

