Driving Short-Term Efficiencies through Virtualization

AUTHORS: Zachary Mettee and Gary Brown

Server virtualization has become the go-to solution for optimizing underutilized servers. It has also sprouted hooks into almost every other data center discipline, including storage, networking, security, Information Technology Infrastructure Library (ITIL) practices, and the data center facility itself.

According to research firm Gartner, “virtualization is the highest-impact issue changing infrastructure and operations through 2012. It will change how you manage, how and what you buy, how you deploy, how you plan and how you charge.”

In simple terms, virtualization is the partitioning of one physical server so that it functions as many virtual servers, which helps avoid physical “server sprawl” within the data center. Revolutionary for the technology industry, virtualization works not only for tried-and-true UNIX platforms, but especially for the x86 platform. By optimizing data center resources through consolidation, organizations are realizing cost savings now that will continue over the long term.

Consider some of the short-term benefits that virtualization can provide to a business within six months. It:

  • Defers unnecessary hardware purchases

  • Extends the life of current data center assets (instead of incurring the cost of physically expanding or building a new data center)

  • Decreases short-term maintenance costs

  • Reduces or avoids power and cooling costs

  • Makes better use of underutilized server assets (if a server is operating at only 10 percent capacity, consolidation can push its utilization much closer to 100 percent)

  • Takes advantage of software capabilities for storage and networking throughout the entire data center

Another way to think about the savings that virtualization provides is to consider server consolidation. Imagine that you have 10 servers in your data center, all running at 10 percent capacity. You decide to use virtualization to consolidate those 10 workloads onto one physical server running them as virtual machines. Through consolidation, you are now saving roughly 90 percent of your original operating costs: the power, cooling and maintenance for the nine physical servers you retire. Running one server near full capacity is far more efficient and cost effective. In working with our clients, Forsythe has seen significant savings from server consolidation, with payback of less than four months when 50 percent of the servers are consolidated at a 12:1 consolidation ratio or higher.
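To make the consolidation math concrete, the short sketch below runs the same 10-to-1 example with assumed dollar figures; the per-server operating cost and the one-time migration cost are illustrative assumptions, not actual client data.

    # Illustrative consolidation arithmetic; all dollar figures are assumptions.
    servers_before = 10
    servers_after = 1                   # all ten workloads become VMs on one host
    retired_servers = servers_before - servers_after

    annual_cost_per_server = 3000       # assumed power + cooling + maintenance
    annual_savings = retired_servers * annual_cost_per_server

    migration_cost = 9000               # assumed one-time licensing and migration effort
    payback_months = migration_cost / (annual_savings / 12)

    print(f"Retired servers: {retired_servers}")
    print(f"Annual operating savings: ${annual_savings:,}")
    print(f"Payback period: {payback_months:.1f} months")

With these assumptions, the nine retired servers save $27,000 a year and the project pays for itself in about four months, in line with the payback figures cited above.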

Especially as companies face resource constraints, virtualization is a tool that will likely sit at the top of CIOs’ agendas.

Virtualization Matures, Benefits Multiply

Virtualization began gaining popularity for open-systems servers in the ’80s and ’90s. Many cutting-edge technology shops eased into early adoption of virtualization, which provided the ability to multi-purpose servers.

The next evolution of virtualization, building it into the overall data center architecture, can deliver even more savings through easier management and better control of and integration with networked storage. In short, the benefits are magnified as the percentage of virtualization increases.

Integrating Virtualization

Integrating virtualization is the key to continued savings and data center optimization. But doing so also means rewriting the run-book on which the data center is built and operated. Virtualization changes not only how servers are provisioned and managed, but also how the environment is architected. The key is to understand performance metrics for servers and applications so that they are neither under- nor over-sized. It’s also important to make full use of server, storage and network resources.

To do this, technology teams must look at their entire data center operation holistically. Instead of thinking only about servers, consider how storage, security and the network are affected. Backup times can be decreased, less data needs to be replicated, virtual machines and applications become more portable (no longer tied to like-for-like hardware) for disaster recovery, and the overall business continuity and disaster recovery (BC/DR) plan changes. The intrinsic features of virtualization translate into risk avoidance.

While some virtualization initiatives, such as the virtual desktop, may require an up-front investment, the downstream savings from updating or upgrading one operating system image instead of hundreds or thousands quickly recoup the time and money.

All operating systems are affected by this technology, as IT staff will also consider moving beyond virtualizing test and development servers into the production environment.

Keeping all these things in mind, the end goal is the same. Look at all IT resources and find the allocation-versus-utilization threshold that you’ve determined is acceptable. Then determine how to better use those resources to maximize capacity without negatively affecting other parts of the data center. In short, optimize and reduce costs through consolidation and virtualization.

Step 1: Complete Performance Profile

Completing a performance profile analysis and metrics assessment is the logical first step toward integrating virtualization. It is a data-driven process that is customized and unique to each organization.

The IT staff must first conduct data collection to clearly categorize the types of operating systems and applications that are running; what resources they are consuming; their interdependencies; and what can, cannot, should and should not be virtualized.

There are myriad tools available to complete the profile, such as inventory discovery tools. But the most important thing is to complete the profile. This data becomes the IT staff’s baseline moving forward.
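As a simple illustration of what that baseline can look like, the sketch below aggregates hypothetical utilization samples into per-server averages, peaks and consolidation-candidate flags; the server names, sample values and the 20 percent cutoff are assumptions made for the example, not output from any particular discovery tool.

    # Turn raw utilization samples into a baseline profile.
    # Server names, samples and thresholds are illustrative assumptions.
    from statistics import mean

    cpu_samples = {
        "hr-app-01": [8, 12, 9, 11, 40],     # mostly idle, with an occasional peak
        "web-01":    [55, 60, 70, 65, 80],   # busy web front end
        "legacy-db": [5, 4, 6, 5, 7],        # nearly idle
    }

    baseline = {}
    for server, samples in cpu_samples.items():
        baseline[server] = {
            "avg_cpu": round(mean(samples), 1),
            "peak_cpu": max(samples),
            "candidate": max(samples) < 20,   # assumed cutoff for easy consolidation wins
        }

    for server, profile in baseline.items():
        print(server, profile)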

To complete the performance profile, you’ll also have to choose a virtualization platform. It’s a good idea to use the virtualization software that is already integrated into the operating system. If you have to choose your own platform for Windows or x86, VMware is the most mature and intuitive option, and it typically works seamlessly in most data centers. There are other options, but be sure to research success rates thoroughly.

Step 2: Plan Time to Migrate

Once the performance profile is complete, the team must plan time to migrate from the physical environment to the virtual platform. The migration must be a highly choreographed event. Often, groups of information are migrated one at a time: human resources systems on Saturday, accounting data on Sunday, and so on. Studying and architecting around business peaks and valleys is essential. Retailers, for example, would not choose to migrate during December, since it is a busy month and migration can slow system performance.

Many servers are typically redeployed in a “trickle in/out” fashion. You stand up one physical server as a virtualization host, migrate physical machines onto it as virtual machines, and then rebuild the freed-up physical servers as additional virtualization hosts, repeating the cycle until the migration is complete.
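The sketch below walks through that trickle in/out cycle with made-up server names and an assumed per-host capacity, simply to show how freed-up servers feed back into the pool of virtualization hosts.

    # Simplified "trickle in/out" migration: a seed host absorbs workloads as VMs,
    # and freed-up physical servers are rebuilt as additional hosts.
    # Server names and the per-host VM capacity are illustrative assumptions.
    physical_servers = [f"srv-{i:02d}" for i in range(1, 11)]   # ten legacy servers
    vms_per_host = 5                                            # assumed capacity

    hosts = ["seed-host"]
    migrated = []
    wave = 1

    while physical_servers:
        capacity = len(hosts) * vms_per_host - len(migrated)
        batch, physical_servers = physical_servers[:capacity], physical_servers[capacity:]
        migrated.extend(batch)
        print(f"Wave {wave}: migrated {batch}")
        # Repurpose one freed-up server per wave as a new virtualization host (assumption).
        hosts.append(f"{batch[0]}-as-host")
        wave += 1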

Step 3: Decide Consolidation Ratio

Once an organization has completed the performance profile and has a migration timeline, the technology team will look at the results and decide on the target consolidation ratio it should be working toward.

For example, if a company wants to consolidate its servers at a ratio of 30:1, it must reconcile that goal with the metrics to ensure that it is realistic. Factors to consider include the current storage and server footprint, business continuity and disaster recovery (BC/DR) requirements, application growth and network requirements. Recommending an exact ratio is difficult, as each organization will need to determine its own comfort level based on risk, application performance requirements and capacity.

A critical part of the performance analysis is to examine the peaks and valleys of each resource’s utilization and architect to absorb the peaks while leaving room for growth. Some organizations choose very aggressive ratios, knowing that a virtualized application may need to be moved when it requires more resources, whether storage, CPU or memory. Others leave plenty of headroom; there is no right or wrong answer. The flexibility you architect into the virtual infrastructure will allow you to shrink and grow when appropriate. Flexibility is the key goal.
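As a rough illustration of that sizing exercise, the sketch below derives a per-host consolidation ratio from assumed host capacity, assumed peak demand per workload and a 25 percent headroom allowance; every number in it is an assumption for the example, not a recommendation.

    # Rough consolidation-ratio sizing with headroom; all figures are assumptions.
    host_cpu_ghz   = 2.4 * 16          # assumed 16 cores at 2.4 GHz
    host_memory_gb = 128
    headroom       = 0.25              # keep 25% free for growth and peak absorption

    usable_cpu    = host_cpu_ghz * (1 - headroom)
    usable_memory = host_memory_gb * (1 - headroom)

    # Peak demand per candidate workload, taken from the performance profile (assumed).
    peak_cpu_per_vm_ghz   = 2.0
    peak_memory_per_vm_gb = 6.0

    ratio_by_cpu    = usable_cpu // peak_cpu_per_vm_ghz
    ratio_by_memory = usable_memory // peak_memory_per_vm_gb
    consolidation_ratio = int(min(ratio_by_cpu, ratio_by_memory))

    print(f"Target consolidation ratio: {consolidation_ratio}:1")

Whichever resource runs out first, CPU or memory, caps the ratio; with these numbers the host is CPU-bound at roughly 14:1.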

In the end, the ratio involves the technology team “stacking” applications. Some will be stacked (housed within the same physical server), others will be migrated from physical to virtual, and still others from virtual to virtual (rebalancing).

Based on performance expectations or the importance of an application to the rest of the environment, you can use the metrics to determine the best ratio. If an application requires heavy resource utilization, you may still consider virtualizing it, since you will benefit from the virtualization software’s portability, backup, DR and storage-consumption advantages. But you may decide to keep one operating system and application dedicated to a single physical server that is virtualized.

Step 4: Set a Policy

It’s necessary to set policies around virtualization, such as “All Windows-based applications and operating systems will be virtualized and have capacity set to match the application requirements during peak workloads.” This is a best practice, and applying it to all the servers, physical or virtual, will help eliminate server sprawl and its accompanying data center impact.

Step 5: Set Up the New Environment

Upon completion of the above steps, the technology team is now ready to architect the new optimal environment. Some of the changes the technology team will be working toward may include:

  • Standardizing on a hardware and virtualization platform

  • Replacing rack-mount (“U”) servers with blade chassis

  • Replacing existing servers with higher-capacity (multi-core/multi-CPU, higher-memory) configurations

  • Consolidating several instances of the same application on a single server

  • Combining several different applications on a single server

  • Reducing the number of OS instances across the entire server infrastructure

  • Determining network bandwidth and the storage protocol based on application performance requirements

Step 6: Maintain, Maintain, Maintain

After integrating virtualization into a data center plan, ongoing maintenance is a must. A virtual environment gives you more room to move applications and operating systems around, which can add complexity. Once in the virtual environment, it’s important to keep rebalancing for optimization.
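A minimal sketch of that rebalancing idea is below: if a host runs hotter than an assumed threshold, move its smallest virtual machine to the least-loaded host. The host names, per-VM loads and the 80 percent threshold are illustrative assumptions, not the output of any particular management tool.

    # Greedy rebalancing sketch; names, loads and the threshold are assumptions.
    hosts = {
        "host-a": {"vm1": 30, "vm2": 35, "vm3": 25},   # VM -> share of host CPU (%)
        "host-b": {"vm4": 10, "vm5": 15},
        "host-c": {"vm6": 20},
    }
    threshold = 80   # assumed point at which a host is "too hot"

    def load(host):
        return sum(hosts[host].values())

    for hot in [h for h in hosts if load(h) > threshold]:
        vm, cost = min(hosts[hot].items(), key=lambda item: item[1])
        target = min((h for h in hosts if h != hot), key=load)
        hosts[target][vm] = hosts[hot].pop(vm)
        print(f"Moved {vm} ({cost}%) from {hot} to {target}")

Real placement engines weigh memory, storage and affinity rules as well, but the principle is the same: keep measuring and keep rebalancing.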

Prepare for Some Hurdles

Even keeping the short- and long-term benefits of virtualization in mind, there are some hurdles:

Maintenance

Maintenance in a virtual environment can be complex due to interdependencies and relocation issues. Keep it simple: designate servers to house required or 24x7 applications, and set up appropriate maintenance.

Additional Costs Upfront

For some companies, integrating the storage network or LAN may mean additional up-front costs. But over the long term, with deduplication and better backup policies, license costs, as well as tape and tape-handling costs, can decrease.

Outsourced Environments

Heavily outsourced environments can be more complex to analyze and manage, but virtualization may provide the flexibility and portability you need to move applications in or out.

Security

There are still questions around the security impact of virtualization. The operating system and application, whether housed physically or virtually, are still vulnerable. If the technology team addresses all of these variables up front, there will still be significant soft and hard savings. It is simply essential to follow the full planning cycle.

Virtualization Continues to Evolve

While we currently watch many businesses integrate virtualization into their data centers, we are also keenly aware of the next evolution of virtualization. Eventually, automated tools will mature within the virtualized environment. You can already power virtual machines up and down based on policy, and move applications when resource requirements increase or decrease. Auto-provisioning, while a blessing for keeping the “gold” image (a standard operating system complete with patches, software and so on) ready to deploy, can increase virtual machine sprawl.
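As a purely illustrative example of the kind of policy logic behind powering virtual machines up and down, the sketch below flags machines that have stayed idle across a monitoring window; the virtual machine names, samples and the 5 percent idle threshold are assumptions, not the behavior of any specific product.

    # Hypothetical idle-VM policy check; all names, samples and thresholds are assumptions.
    recent_cpu = {
        "build-vm": [2, 1, 3, 2, 1],        # CPU % over the last five intervals
        "web-vm":   [40, 55, 35, 60, 50],
    }
    idle_threshold = 5

    for vm, samples in recent_cpu.items():
        if max(samples) < idle_threshold:
            print(f"{vm}: idle across the window; candidate for powering down")
        else:
            print(f"{vm}: active; leave running")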

Another positive development that continues to evolve because of virtualization is server management. Technology engineers are fast-tracking tools that will interact seamlessly with existing enterprise management suites. Server sprawl is making this issue a priority because of change management and service-oriented architectures.

For now, based on performance metrics, focus on what makes sense for your organization. The main reason to adopt virtualization is to reduce costs, but it is also a decision tied to the bigger picture. Virtualizing now will likely let you avoid a larger capital outlay while also organizing, consolidating and optimizing data center operations: the right combination to support your business now and beyond.
