Solving the Operational Challenges of the Modern Data Center
THE STATE OF DATA CENTER MODERNIZATION
Most organizations are pushing IT to modernize
the data center, with the eventual goal of making it
more cloud-like: self-service, and able to adjust
automatically to user and application demands on
the fly.
As it stands today, the typical data center has
scaled beyond what any one person can fully
comprehend, a problem exacerbated by virtualization.
Virtualization has both helped and hurt IT in its
effort to tame the data center. It allows IT to quickly
deploy compute for a new application by abstracting
the application from the server hardware. The problem
is that deploying a virtual machine still requires specific
steps and knowledge regarding networking and
storage. That same abstraction also makes it
difficult to determine the cause of any performance
problem or inconsistency.
To avoid getting caught off-guard, IT will
typically over-provision compute, networking and
storage resources. But even with seemingly more than
enough resources, a single user request or application
process can trigger a noisy neighbor situation. Noisy
neighbors occur when a virtual machine suddenly
increases its processor consumption or storage IO
so much that it starves other VMs of those
resources. The result is a ripple effect of inconsistent
performance throughout the enterprise, with no real
indication of the original request that caused the
problem in the first place. Without that information, IT
is forced, once again, to throw even more hardware
at the problem, wasting even more IT budget.
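The noisy-neighbor pattern described above can be illustrated with a minimal sketch: flag any VM whose latest storage IO sample spikes well above its own recent baseline. The VM names, sample data, and spike threshold here are invented for illustration, not taken from any particular monitoring product.

```python
# Hypothetical noisy-neighbor check: compare each VM's latest
# per-interval IO sample against its historical average.
from statistics import mean

def noisy_neighbors(samples, spike_factor=3.0):
    """samples: {vm_name: [per-interval IO ops]}. Returns the VMs
    whose latest sample exceeds spike_factor x their prior average."""
    flagged = []
    for vm, history in samples.items():
        if len(history) < 2:
            continue
        baseline = mean(history[:-1])
        if baseline > 0 and history[-1] > spike_factor * baseline:
            flagged.append(vm)
    return flagged

iops = {
    "vm-app01": [120, 130, 125, 118],   # steady consumer
    "vm-batch7": [90, 100, 95, 2400],   # sudden spike: the noisy neighbor
}
print(noisy_neighbors(iops))  # ['vm-batch7']
```

In practice this is exactly the signal that is hard to obtain: each silo sees only its own counters, so the spike is visible but the request that triggered it is not.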
SILOED MANAGEMENT FOR AN ENTERPRISE PROBLEM
Instead of spending time on modernization efforts,
IT is drowning in a sea of day-to-day tasks and
interrupt-driven fire-fighting drills. To cope, IT teams
have enlisted the aid of multiple management tools
to provide insight into the performance problems
they are encountering. The problem is that most of
these solutions are siloed. While they can often provide
excellent detail on the particular component they
monitor, the tools can't correlate that information
across the other infrastructure components and have
no understanding of the applications being affected.
The lack of end-to-end management is especially
a challenge in the highly virtualized data center
of today. And, if not corrected, it will only get worse
as organizations move from highly virtualized
environments to highly containerized ones.
The reality is that most virtualization management
tools only manage down to the physical server, and
most infrastructure tools only manage up to it.
IT lacks a solution that can cross that chasm and
provide high-quality monitoring and analysis of the
entire infrastructure.
Without an end-to-end view of the environment,
IT must take a brute-force route to reducing
performance inconsistencies by doing what it has
always done: throw hardware at the problem and
try to manually correlate data across systems.
As a result, data centers end up with too many
servers that are too large, networks with too much
bandwidth, and all-flash storage systems that sit idle
most of the time, in addition to the significant staff
time wasted troubleshooting problems.
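The manual correlation mentioned above usually amounts to joining exports from separate hypervisor, network, and storage tools on a shared timestamp. A minimal sketch of that join follows; the metric names and values are invented for illustration.

```python
# Hypothetical cross-silo correlation: merge metric streams from
# independent tools into a single per-timestamp view.
from collections import defaultdict

def correlate(*metric_streams):
    """Each stream is a list of (timestamp, metric_name, value) tuples
    exported by one tool. Returns {timestamp: {metric_name: value}} so
    one moment in time can be inspected across every silo at once."""
    merged = defaultdict(dict)
    for stream in metric_streams:
        for ts, name, value in stream:
            merged[ts][name] = value
    return dict(merged)

hypervisor = [(1000, "vm_cpu_pct", 92), (1060, "vm_cpu_pct", 45)]
storage    = [(1000, "array_latency_ms", 38), (1060, "array_latency_ms", 4)]

view = correlate(hypervisor, storage)
print(view[1000])  # {'vm_cpu_pct': 92, 'array_latency_ms': 38}
```

Even this trivial join assumes synchronized clocks and matching sample intervals, which is precisely why doing it by hand across many tools consumes so much staff time.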
Understanding how the storage architecture responds
to application requests, especially when under load from
multiple applications, is key to delivering predictable,
consistent performance to users.