Monday 31 December 2012

Happy New Year from everyone at Metron

Here’s hoping that everyone has had an enjoyable holiday season and is looking forward to the start of a new year.

I’m not a great one for New Year resolutions, so I won’t offer any of those.  There are things that I know I should be getting around to – my conscience regularly tells me I need to update my LinkedIn profile, but this is just part of an on-going ‘to do’ list, whatever the time of year.

I’ll leave others to make the big predictions about what the major themes in IT will be – we’ve probably all seen plenty of those anyway.  Cloud is getting bigger and bigger, to the point where it isn’t something new or special; it’s just how we do things. Software as a Service is more and more a part of day-to-day life – many things that used to be internal to Metron, such as email and CRM, are now externally hosted.  The same is happening elsewhere, with some organizations now putting what are considered critical applications in the Cloud.
Big data is going to be, well, BIG, so storage will continue to spread faster than anything.  Note to self: must learn what comes after ‘petabyte’, as this already seems to be a common capacity term.  Are exabytes, zettabytes, yottabytes and so on real terms, or is someone having fun at my expense?  Time will tell, all too quickly.  For those of us involved in capacity management, we need to find out sooner rather than later.
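For what it’s worth, exabytes, zettabytes and yottabytes are indeed real, standardized terms – each a factor of 1,000 up from the last. A throwaway Python sketch of the decimal scale (the binary prefixes, pebibyte and friends, are a separate story):

```python
# SI decimal byte prefixes: each step up is a factor of 1,000.
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

def bytes_for(prefix: str) -> int:
    """Return the number of bytes for an SI prefix, e.g. 'peta' -> 10**15."""
    return 1000 ** (PREFIXES.index(prefix) + 1)

# One exabyte really is a thousand petabytes.
assert bytes_for("exa") == 1000 * bytes_for("peta")
```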

I guess our area at Metron is capacity management, and so I should be most concerned with that.  Here are three things I would like to see in the New Year.  They are randomly picked personal items – there are many others I could have selected.  I’d be interested to hear your alternatives – at least for the capacity management items:

- As we still battle to come out of the recession, I’d like to see every cent valued by organizations.  Too often I hear that ‘we don’t need capacity management because servers/virtual servers/Cloud resources are cheap’ or ‘we don’t need capacity management because we’ve bought enough resource to see us through the next 3 years’ – the latter a genuine quote to me from a Deputy CIO.  Nothing is cheap if you buy more than you need.  The true cost is not the unit cost of the item; it’s the value that money could have delivered to your business if used elsewhere.


- I’d like to see the Computer Measurement Group (www.cmg.org) resurgent.  Capacity management should be seen as vital to cost-effective IT service delivery; too often it is not.  CMG offers a superb forum for the exchange of ideas, and free education to help capacity management achieve its potential for your organization.  Restrictions on travel and an increasing internal focus by management in response to the recession have seen attendance at conferences such as CMG diminish because of their ‘cost’, rather than being considered for their ‘value’.  Developments in the IT world such as virtualization and Cloud will make it ever easier to spend money ineffectively on IT infrastructure – capacity management offers a route to avoid that, and participating in CMG can help any organization realize the benefit.


- I’d like to see the New York Jets and Sheffield Wednesday Football Club conquer all before them… Oh well, you can’t have everything.

By all means send me your capacity management wishes for the New Year and I will pass them on through our blog.  In the meantime, have a happy, healthy and successful 2013.

Andrew Smith
Chief Sales & Marketing Officer


Thursday 27 December 2012

vSphere 5 versus Hyper-V SP1 Performance Showdown

It's time to stop guessing and start testing.

vSphere 5 is the most popular x86-virtualization platform, and exciting enhancements keep coming with each new release.

Hyper-V from Microsoft is also a popular solution for server virtualization on the x86 platform, and it has become even more so with the addition of advanced features in SP1. The fact that Hyper-V is included with Windows also makes it attractive from a cost perspective.

Understanding the performance aspects of these virtual environments is important to ensuring that you get maximum benefit from your virtualization investments.

The usual way to test performance between platforms is through benchmarking. The ideal benchmark would incorporate real production workloads, which in most cases is not feasible. An alternative is to utilize generic benchmarks that approximate production workloads.

I recently carried out a general performance comparison between vSphere and Hyper-V across all major components, using identical generic benchmarking tools. I also examined virtualization-specific performance metrics that are available in both environments.

On January 3, 2013 (8am PT, 9am MT, 10am CT, 11am ET, 4pm UK, 5pm CET) I'll run through the results with you in a webinar.

As a basic understanding of hypervisor architecture is important when evaluating performance data, I'll also look at both architectures, relate them to the available metrics, and discuss important differences in architecture and terminology.

I'll share my benchmark results from both environments along with conclusions that I've formed from my findings.

Join me as I compare two of the most popular x86-virtualization platforms in use.

  • Architecture review
  • Metrics available
  • Challenges of benchmarking virtual environments
  • Testing environment and benchmarks
  • Methods and objectives
  • Results and conclusions

Register for this Webinar

Look forward to speaking to you then.

Dale Feiste
Consultant

Monday 24 December 2012

Merry Christmas from all at Metron


Happy holidays to everyone who has had contact with Metron throughout the year:  clients, prospects, partners, analysts, suppliers.  It has been another fun, enjoyable and as ever, challenging year.  More than ever before, Metron’s business seems to be encompassing a wider geographical sphere covering a more varied range of cultural and religious areas.  Whether you celebrate Christmas or not, may I wish a happy and healthy holiday season to everyone.

I guess Metron’s wider geographical coverage is symptomatic of how large scale IT infrastructure environments are developing.  More and more of the businesses we work with are treating capacity management as a centralized global function spanning data centers in many countries.  It is common now to have capacity management carried out on one continent, managed from another for applications used on a third.  The need for rapid deployment, standard processes and common format for reporting becomes ever more important in such circumstances. 

Our response during the year has been to formalize the capacity management beliefs and practices we have built up over the last 25 years into our 360 Capacity Management strategy.  In such a disparate global environment, those delivering capacity management need to bring all views of capacity together, to reduce the resources required for capacity reporting and to help their users understand capacity issues better by presenting varied information in an easy-to-understand, common format.

360 Capacity Management offers just this.  Over those 25 years Metron has developed comprehensive support for capacity management of all key server environments as a platform for enterprise capacity management.  For many years the CustomDB element of athene® has offered the capability to extend athene®’s core facilities to other areas.  Now, with Integrator, the successor to CustomDB, this has moved further forward: Metron and clients can quickly and easily create Capture Packs – connectors to any capacity data you have available.  Metron provides and supports an ever-growing library of Capture Packs covering disk, network, application, end-to-end and alternative server data sources.  Combined with client-provided data unique to their own business, all this data within the athene® CMIS is then available for capacity reporting and prediction using athene®’s application functionality: comprehensive enterprise-wide capacity reporting for physical, virtual and Cloud environments from component, service and business perspectives.

It’s been fun formalizing this 360 Capacity Management strategy over the last year and working closely with individual organizations to develop the existing Capture Pack library.  I look forward to seeing this collaborative approach continue and extend in the year ahead.  Once again many thanks to those who have been involved and best wishes for the holiday season to you all.

 
Andrew Smith
Chief Sales & Marketing Officer
 

Wednesday 19 December 2012

Performance counters for Network and Storage - Top performance and capacity tips for Hyper-V (6 of 7)


As I mentioned on Monday, today I'll be taking a look at the Hyper-V performance counters for Network and Storage.
 
From the perspective of the network, the following objects provide all the required data about the physical interfaces and the performance when virtualized.


  • Network Interface
  • Hyper-V Virtual Switch
  • Hyper-V Legacy Network Adapter
  • Hyper-V Virtual Network Adapter

 

As stated previously, the network side of things becomes more and more important as larger numbers of, and larger, virtual machines undertake the work.

The Network Interface counter is still valid and is a very useful way of understanding how the network ports are being used.
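As an illustration of putting that counter to work (the sample values below are made up, and this is a hand-rolled sketch rather than any vendor API), NIC utilization can be derived from the Network Interface object's Bytes Total/sec and Current Bandwidth counters:

```python
def nic_utilization(bytes_total_per_sec: float, current_bandwidth_bps: float) -> float:
    """Estimate NIC utilization (%) from the Windows 'Network Interface' object.

    Bytes Total/sec counts both directions combined; Current Bandwidth is in
    bits/sec, so convert bytes to bits before comparing.
    """
    return 100.0 * (bytes_total_per_sec * 8) / current_bandwidth_bps

# Hypothetical sample: 75 MB/s through a 10 Gbit adapter.
print(round(nic_utilization(75e6, 10e9), 1))  # -> 6.0 (% utilized)
```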

Finally, the storage counters provide a useful level of information relating to the disks, from both the physical and virtual perspectives.

 
  • Physical Disk
  • Hyper-V Virtual IDE Controller
  • Hyper-V Virtual Storage Device


As with any OS-level performance monitoring, the usual rules apply when dealing with SAN-based storage.  The technology employed by SANs to manage the disks and improve performance (e.g. caching and buffering) remains transparent to the OS, so be wary of how the data is interpreted.  Response times are usually a reliable indicator; disk utilization less so.
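A minimal sketch of that advice in practice (the 20 ms threshold and disk names are purely illustrative, not a Metron recommendation): flag SAN-backed disks on response time and deliberately ignore utilization:

```python
def disks_to_investigate(samples, response_ms_threshold=20.0):
    """Flag disks whose average I/O response time breaches a threshold.

    `samples` maps disk name -> average response time in ms (e.g. the
    Physical Disk 'Avg. Disk sec/Transfer' counter * 1000). Utilization is
    ignored on purpose: SAN caching and buffering make it misleading at
    the OS level, whereas response time reflects what applications feel.
    """
    return [disk for disk, resp_ms in samples.items()
            if resp_ms > response_ms_threshold]

# Hypothetical samples: only the slow LUN is flagged.
print(disks_to_investigate({"C:": 4.2, "E: (SAN LUN)": 35.0}))
```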
 
I'll conclude on Friday with my top tips for managing Hyper-V capacity.
 
Rob Ford
Principal Consultant
 
 

Tuesday 18 December 2012

Key Metrics for Effective Storage Performance and Capacity Reporting

Doing capacity management for storage can be difficult with the many complex and varied technologies being used.

Given all of the options available for data storage strategy, a clear understanding of the architecture is important in identifying performance and capacity concerns. A technician looking at metrics on a server is often seeing only the tip of a storage iceberg. However, the host view is important when looking at measured I/O response.

If response times are severely impacted on a busy server, then end users of the hosted applications will also be impacted. High response times at the OS level typically originate somewhere in the backend storage infrastructure.

Tracking and reporting on key metrics at the host and backend storage controllers can prevent these incidents from occurring, and having the right tools in place can be the difference between prevention and firefighting.

I'll be taking a closer look at key metrics for storage performance at my webinar on Wednesday covering:

  • Storage architecture
  • Virtualization
  • Key metrics for the host and backend storage environment
  • Reporting on what is most important

Register and come along: http://www.metron-athene.com/services/training/webinars/webinar-summaries.html

    Dale Feiste
    Consultant

    Monday 17 December 2012

    Capturing performance data - Hyper-V performance counters for CPU and Memory (5 of 7)

    Moving on to capturing performance data: the main sources of information are the Hyper-V performance counters as seen from the root partition. There are 21 functioning counters that provide around 600 metrics in total, and vendor products should interrogate these remotely via WMI.

    Sadly, Perfmon metrics within each guest partition may not be reliable for CPU and the like, due to processor clock skew, so be careful how this data is used. However, certain other metrics can be used, and these can be seen via SCVMM.

    “In-guest” monitoring is very light (as it is in VMware), so process-level metrics aren’t captured; an additional in-guest agent will be required for those.

    Now for the performance counters.

    For the CPU, the following objects can provide a useful source of information:
     

    • Hyper-V Hypervisor Logical Processor
    • Hyper-V Hypervisor Root Virtual Processor
    • Hyper-V Hypervisor Virtual Processor
    • Processor

     
    One important fact to be aware of: if you are monitoring from the root partition, the Processor counter isn’t a Hyper-V counter and will give you the wrong numbers. To accurately monitor how the physical hardware, CPUs and guests are being utilized, you will need to use the Hyper-V Hypervisor counters. The Processor counter isn’t virtualized and isn’t aware that it is running in a virtualized environment.
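To illustrate why the hypervisor counters are the ones to use: the Hyper-V Hypervisor Logical Processor object exposes its run-time counters as cumulative 100ns tick counts, so utilization comes from the delta between two samples rather than from the root partition's own Processor view. A hedged sketch with synthetic sample values (the function name and numbers are mine, not a Microsoft API):

```python
def logical_processor_busy_pct(run_time_100ns_delta: int,
                               elapsed_100ns_delta: int) -> float:
    """Percent busy for one logical processor over a sample interval.

    Raw perf counters of this type are cumulative 100ns tick counts, so
    utilization is the ratio of run-time ticks consumed to wall-clock
    ticks elapsed between two samples.
    """
    return 100.0 * run_time_100ns_delta / elapsed_100ns_delta

# Synthetic 60s interval (600,000,000 ticks of 100ns), of which 45s was busy.
print(round(logical_processor_busy_pct(450_000_000, 600_000_000), 1))  # -> 75.0
```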

    For the memory, the following objects are recommended:
     

    • Hyper-V Hypervisor Partition
    • Hyper-V Hypervisor Root Partition
    • Hyper-V Dynamic Memory Balancer
    • Hyper-V Dynamic Memory VM
    • Memory

     
    These provide a good overview of how memory is being consumed at the partition level and, via the Dynamic Memory counters, at the hypervisor level.  The “Memory” performance object also provides some guidance on host-level consumption.
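As a rough illustration of combining the Dynamic Memory counters (synthetic numbers; "pressure" here is modelled simply as demand over allocation, broadly in the spirit of the Hyper-V Current Pressure counter rather than a faithful reimplementation):

```python
def memory_pressure_pct(demand_mb: float, allocated_mb: float) -> float:
    """Approximate memory pressure: guest demand vs. currently allocated memory.

    Values near or above 100 suggest the Dynamic Memory balancer is
    struggling to keep up with what the guest wants.
    """
    return 100.0 * demand_mb / allocated_mb

# Hypothetical guests: the second is under memory pressure, the first is not.
for vm, demand, alloc in [("vm1", 2048, 4096), ("vm2", 3900, 3584)]:
    print(vm, round(memory_pressure_pct(demand, alloc)))
```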

    On Wednesday I'll look at the performance counters for Network and Storage.

    Rob Ford
    Principal Consultant

     

    Wednesday 12 December 2012

    Metrics and Monitoring - Top performance and capacity tips for Hyper-V (4 of 7)


    The main focus today is what metrics you should be looking at and the options for capturing the data.

    Let’s look at the options for capturing the data first.  Hyper-V Manager is an ‘out of the box’ tool which is more of a management GUI, similar to vCenter in some respects but not as polished or developed. It has no real performance or capacity features, providing only a little information and not to the same level as vCenter.

    System Center Operations Manager is Microsoft’s management monitoring tool and provides a central source of monitoring for Hyper-V. It is driven by Hyper-V management packs and can provide a useful source of information. The main issue with SCOM is that the metrics are minimal and don’t always capture the right level of data for capacity management. Although there is some basic trending, there are no modeling capabilities and no real control over the inbuilt aggregation: data tends to be captured by the operational end of the tool and then automatically aggregated as it’s put into the data warehouse.

    It does provide some monitoring level support so you can see the Host, Guest, potentially cluster level and some of the application metrics.

    System Center Virtual Machine Manager (SCVMM) allows for multiple host management and multiple hypervisor management. The template and library management allows automatic deployment from templates.  It has integrated P2V conversions, a modicum of virtual machine performance monitoring (from SCOM) and allows you to drive live migration events.  Interestingly it allows you to manage your VMware estate as well, via vCenter.

    Whilst it does provide some metrics, they tend to be fairly high level, and it is more of a monitoring and alerting tool than a capacity management tool.

    Moving on to capturing performance data: the main sources of information are the Hyper-V performance counters as seen from the root partition. There are 21 functioning counters that provide around 600 metrics in total, and vendor products should interrogate these remotely via WMI.

    On Friday I'll be looking at these performance counters.

    Rob Ford
    Principal Consultant

     

     

    Monday 10 December 2012

    Comparison between Hyper-V and VMware - Top performance and capacity tips for Hyper-V (3 of 7)


    Today we’ll take a look at how Hyper-V 2012 compares with VMware. The table below shows the differences.


    Raw Device Mapping (RDM) effectively allows you to connect a physical disk directly to a virtual machine.  vSphere 5.1 supports this up to 64TB, whilst Microsoft suggests that Hyper-V 2012 can do 256TB plus, limited only by the size of the physical disk – so there is scope to grow.

    Both vSphere 5.1 and Hyper-V 2012 offer similar support for guests, although interestingly this has only been provided by VMware in its recent upgrade to 5.1, and it’s the first time that VMware has had to upgrade its software to ‘catch up’ with Hyper-V.

    Currently Hyper-V supports larger cluster sizes and is potentially cheaper, although this appears to be dependent on which website you’re looking at.

    If your organization is buying Datacenter licenses then Microsoft can certainly work out more cost-effective: a Datacenter license allows you to enable the Hyper-V role, and all of the guests running on that server are then automatically licensed.  The same cannot be said for VMware, where you buy the VMware software and then your Microsoft software on top.

    Hyper-V comes with enterprise functionality available as standard, whereas with VMware you tend to need the ‘higher level’ versions to unlock things like vMotion etc.  It’s a similar story when looking at the management layer: Hyper-V can be managed with the out-of-the-box tool, whereas VMware requires the purchase of vCenter.

    With regards to the performance comparison, there are very few independent sources that have performed an objective comparison and the likelihood is that each will perform better in different circumstances.

    That said and whilst Hyper-V 2012 is still quite new, early test results suggest that there is not much between the two platforms with regards to CPU and memory.

    Early reports suggest:

     
    • Improved IO throughput with Hyper-V
    • Comparable CPU loading
    • Improved memory utilization with Hyper-V

    It’s still too early to tell whether there are significant benefits to be had by choosing one over the other.
     
    On Wednesday I'll be sharing my thoughts on what metrics you should be monitoring and the options for capturing data. In the meantime join our community and get access to our white papers, podcasts and free downloads.
     
    Rob Ford
    Principal Consultant

    Friday 7 December 2012

    Updates to functionality - Top performance and capacity tips for Hyper-V (2 of 7)


    There are numerous updates to functionality.
     
    The last reasonable contender was Hyper-V 2008.  The following tables summarise the differences and what they mean.
     
     

    Hyper-V 2012 has increased guest and cluster support bringing these up to serious production levels.
     
     

    As you can see from the table, Windows 2008 R2 provided live migration but relied on the servers being built as Windows cluster boxes, so it didn’t really provide the functionality and flexibility that VMware gave you; being single-instance, you could also only run one live migration event at a time.  These changes mean that we’re now looking at far closer parity with VMware.

    We now have the option to migrate child partitions between Windows servers that aren’t clustered and, combined with live storage migration, to migrate between servers that aren’t running on shared storage as well. This provides a good deal of flexibility.

    One of the key differences in 2012 is the addition of SR-IOV support, which allows a guest full access to the physical network adapters. Given the size of some of the virtual machines that could, in theory, be created, the next bottlenecks will undoubtedly be in shared networking. SR-IOV support is a key facilitator for virtual machines of this size: it allows complete access to the network adapter, so the required network bandwidth will be available to cope with the volume of work the guests will have to do.

    Dynamic memory now has improved management. Memory reclamation is included, which allows you to balloon guests when required and also to allow a guest to start up even if its minimum memory is not available. In a lot of respects this is a step forward over what VMware has available, as Hyper-V will allow you to ‘dig in’ to what the box is doing and balloon in an intelligent way, freeing up memory and making other resources available.
     

    Guest NUMA support extends the hardware based functionality into the realms of the guest; again, key given the potential guest resource allocation.

    Smart paging is intelligent memory management that allows you to bridge the gaps between minimum and start up memory if physical resource is low. This is more dynamic in terms of how it is managing its memory and is an improvement over VMware.

    Runtime memory configuration allows you to change the dynamic memory allocation when the virtual machine is running, which is a big operational step forward when managing heavily utilized environments.

    Resource metering allows you to track how key resources are used over time. It is not quite as good as it sounds: it is geared predominantly around chargeback more than anything else, particularly on the network side, but it does persist through live migration.
     
    On Monday I'll be making some comparisons between Hyper-V and VMware.
     
    Rob Ford
    Principal Consultant


    Wednesday 5 December 2012

    Top performance and capacity tips for Hyper-V

    My blog series will look at the changes in Windows/Hyper-V 2012 and what they mean from the perspective of the business and of managing capacity. It concentrates on the following areas:

     
    • Technology recap
    • Updates to the functionality
    • Comparison between VMware and Hyper-V
    • Metrics and Monitoring
    • Top tips for managing Hyper-V capacity


    Technology recap

    What is Hyper-V?

    Even though Hyper-V has been out there for some time, it’s still not widely adopted.  It’s similar in design to Xen: it’s classed as a type 1 hypervisor, but has a managing (parent) partition, unlike VMware, where multiple guests run directly on the hypervisor.

     It was first released in June 2008 and the latest versions are available by enabling a role within Windows 2012 or via the core version.  The key elements are:

     
    • The hypervisor (around 100k in size)
    • Parent or root partition (the first and controlling guest)
    • Child partitions
    • Two versions: Full and Server Core

     
    The main difference between the two versions is the available functionality.  The Server Core version purely allows for Hyper-V, whereas the full Windows 2012 installation allows all of the usual Windows roles to be enabled, e.g. IIS, AD, FTP, etc.

    This is an architecture diagram from MSDN which gives you an idea of how the architecture hangs together.

     


     
    On the left you can see the root partition, which effectively manages the child partitions and allows you to create them.
    The key differentiator here is between enlightened and unenlightened child partitions: when a partition is enlightened it can use integration services (‘VMware Tools type’ components) and delivers better all-round performance.

    You will need to bear in mind that the child partitions communicate with the hypervisor via the root partition from the I/O perspective, so if the root partition is very busy, performance and capacity may be impacted.
     
    On Friday I'll be looking at the updates to functionality.
    In the meantime why not join our community and get free access to our papers, podcasts and downloads http://www.metron-athene.com/_downloads/index.html
     
    Rob Ford
    Principal Consultant

     

     

    Friday 30 November 2012

    Top 5 Don’ts for VMware

    As promised today I’ll be dealing with the TOP 5 Don’ts for VMware.


    DON’T


    1) CPU overcommit (unless overall ESX host usage is <50%).  Why?  I’m sure most of you have heard of CPU Ready Time.  CPU Ready Time is the time (in msecs) that a guest’s vCPUs spend waiting to run on the ESX host’s physical CPUs.  This wait time can occur due to the co-scheduling constraints of operating systems and higher CPU scheduling demand when guest vCPUs are overcommitted against pCPUs.  The likelihood is that if all the ESX hosts in your environment have a lower average CPU usage demand, then overcommitting vCPUs to pCPUs is unlikely to cause any significant rise in CPU Ready Time or impact on guest performance.
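The standard way to make sense of the Ready Time summation value is to normalize it against the sample interval and vCPU count; a small sketch (the sample numbers and the ~5% rule of thumb are illustrative, not a hard limit):

```python
def cpu_ready_pct(ready_ms: float, interval_s: float, num_vcpus: int) -> float:
    """CPU Ready %, normalized per vCPU.

    ready_ms is the Ready Time summation value in milliseconds for the
    sample interval; a common rule of thumb is to investigate sustained
    values above roughly 5%.
    """
    return 100.0 * ready_ms / (interval_s * 1000.0 * num_vcpus)

# Hypothetical: 4,000 ms of ready time over a 20s sample on a 2-vCPU guest.
print(round(cpu_ready_pct(4000, 20, 2), 1))  # -> 10.0 (worth investigating)
```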


    2) Overcommit virtual memory to the point of heavy memory reclamation on the ESX host.  Why?  Memory overcommitment is supported within your vSphere environment by a combination of Transparent Page Sharing, memory reclamation (ballooning and memory compression) and vSwp files (swapping).  When memory reclamation takes place it incurs some memory management overhead and, if DRS is enabled in automatic mode, an increase in the number of vMotion migrations. Performance at this point can degrade due to the increased overhead required to manage these operations.
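A back-of-envelope check for how far a host is overcommitted can be sketched as below (illustrative only; where reclamation actually kicks in depends on TPS savings, active memory and the host's ballooning thresholds):

```python
def overcommit_ratio(guest_mem_mb, host_mem_mb):
    """Ratio of total configured guest memory to physical host memory.

    >1.0 means overcommitted; well above 1.0 combined with high active
    memory is where ballooning, compression and swapping start to cost
    you performance.
    """
    return sum(guest_mem_mb) / host_mem_mb

# Hypothetical host: 96 GB physical, four guests configured with 32 GB each.
ratio = overcommit_ratio([32768, 32768, 32768, 32768], 98304)
print(round(ratio, 2))  # -> 1.33 (overcommitted by a third)
```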


    3) Set CPU or memory limits (unless absolutely necessary).  Why?  Do you really need to apply a restriction on usage to a guest, or to a set of guests in a Resource Pool?  By limiting usage you may unwittingly restrict the performance of a guest.  In addition, maintaining these limits incurs overhead, especially for memory, where the limits are enforced by memory reclamation.  A better approach is to perform some proactive monitoring to identify usage patterns and peaks, then adjust the amount of CPU (MHz) and memory (MB) allocated to your guest virtual machine.  Where necessary, guarantee resources by applying reservations.


    4) Use vSMP virtual machines when running single-threaded workloads.  Why?  vSMP virtual machines have more than one vCPU assigned.  A single-threaded workload running on your guest will not take advantage of those “extra” executable threads, so the extra CPU cycles used to schedule those vCPUs are wasted.


    5) Use 64-bit operating systems unless you are running 64-bit applications.  Why?  64-bit virtual machines require more memory overhead than 32-bit ones.  Compare the benchmarks of 32/64-bit applications to determine whether it is necessary to use the 64-bit version.


    If you want more information on performance and capacity management of VMware visit our website and sign up to be part of our community, being a community member provides you with free access to our library of white papers and podcasts. http://www.metron-athene.com/_downloads/index.html or visit our capacity management channel on YouTube http://www.youtube.com/user/35Metron?blend=1&ob=0

    I'm at CMG Las Vegas next week and hope to meet up with some of you there.

    Jamie Baker
    Principal Consultant





    Wednesday 28 November 2012

    VMware - Top 5 Do's

    I’ve put together a quick list of the Top 5 Do’s and Don’ts for VMware which I hope you’ll find useful.

    Today I’m starting with the Top 5 Do’s

    DO


    1) Select the correct operating system when creating your virtual machine. Why? The operating system type determines the optimal monitor mode and the optimal devices to use, such as the SCSI controller and the network adapter. It also specifies the correct version of VMware Tools to install.


    2) Install VMware Tools on your virtual machine. Why? VMware Tools installs the balloon driver (vmmemctl.sys), which is used for virtual memory reclamation when an ESX host becomes imbalanced on memory usage. It also installs optimized drivers and can enable Guest-to-Host clock synchronization to prevent guest clock drift (Windows only).


    3) Keep vSwp files in their default location (with the VM disk files). Why? vSwp files are used to support overcommitted guest virtual memory on an ESX host. When a virtual machine is created, the vSwp file is created with its size set to the amount of memory granted to the virtual machine. Within a clustered environment the files should be located on the shared VMFS datastore on a FC SAN/iSCSI NAS, because of vMotion and the ability to migrate VM worlds between hosts. If the vSwp files were stored on a local (ESX) datastore, then when the associated guest is vMotioned to another host the corresponding vSwp file would have to be copied to that host, which can impact performance.
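The sizing behaviour described above can be sketched as follows (a simplified model of the general vSphere rule that the swap file covers whatever part of the configured memory is not protected by a reservation; the numbers are hypothetical):

```python
def vswp_size_mb(configured_mb: int, reservation_mb: int = 0) -> int:
    """Size of a guest's vSwp file: configured memory minus any reservation.

    With no reservation the host must be able to swap the guest's full
    allocation, which is why the file matches the memory given to the VM.
    """
    return configured_mb - reservation_mb

# Hypothetical 8 GB guest with a 2 GB reservation -> 6 GB (6144 MB) swap file.
print(vswp_size_mb(8192, 2048))
```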


    4) Disable any unused guest CD or USB devices. Why? CPU cycles are used to maintain these connections, so you are effectively wasting resources.


    5) Select a guest operating system that uses fewer “ticks”. Why? To keep time, most operating systems count periodic timer interrupts, or “ticks”. Counting these ticks can be a real-time issue, as ticks may not always be delivered on time, and if a tick is lost, time falls behind. When this happens, ticks are backlogged and the system then delivers ticks faster to catch up. You can mitigate these issues by using guest operating systems that use fewer ticks, e.g. Windows (66Hz to 100Hz) rather than Linux (250Hz). It is also recommended to use NTP for Guest-to-Host clock synchronization; see VMware KB 1006427.


    On Friday I’ll go through the Top 5 Don’ts.
    If you want more detailed information on my Top 10 performance metrics to identify bottlenecks in VMware take a look at my video http://www.youtube.com/watch?v=Gf90Kn_ZVdc&feature=plcp

    Jamie Baker
    Principal Consultant

    Friday 23 November 2012

    Getting your VMware memory allocation wrong can cost you more than just money

    VMware have made some interesting changes with regard to the licensing of their vRAM technology. The previous licensing model enforced RAM restrictions and limitations on users of vSphere 5.

    Now the previous vRAM licensing limits have gone and VMware have returned to a per-CPU licensing charge for the product.

    So, following on from VMware’s U-turn on vRAM licensing, does this mean that memory reporting and allocation have become less important?
    No – in fact it’s as important as it’s ever been.  Faster CPUs and better information around CPU fragmentation have shifted focus away from CPU performance and onto virtual memory allocation and performance.

    Getting the most out of your VMware environment is therefore a prerequisite in these cash-strapped days.  Getting your memory allocation wrong can cost you more than just money: it can affect service performance, and the subsequent knock-on effects can significantly impact the whole enterprise.
    From my experience, the common questions around virtual memory are:

    • What’s the difference between Active, Consumed and Granted Memory within a VMware environment?

    • How much virtual memory should you allocate to a virtual machine, and how do you get it just right?

    • What are the benefits and disadvantages of using Memory Limits and Resource Pools?

    I’ll be answering these questions and many more in my presentation at CMG, Las Vegas December 3 -7.
    I plan to take you on what I hope will be a memorable journey.

    Explaining in detail how virtual memory is used and why VMware supports memory over-allocation, I’ll help you understand how you can identify your over provisioned VMs, what memory metrics to monitor and how to interpret them.
    Finally, I’ll also be providing you with some best practice guidelines on Virtual Memory and some interesting information on using Memory Limits within your VMware Environment.

    If you’re going to CMG, Las Vegas make sure you register for this session and for those of you who’ll miss it I’ll be writing it up as a paper and blog series on my return.

    Jamie Baker
    Principal Consultant

    Friday 16 November 2012

    5 APM and Capacity Planning Imperatives for a Virtualized World


    Having been involved in Capacity Planning for the better part of two decades, I've watched the environments we manage become more and more complicated even as companies decide to devote less and less staff to such an important function.

     Back in the 90s, we'd frequently make decisions based on utilization of servers and mainframes and would upgrade them when we hit certain thresholds.  We spent no time whatsoever worrying about how optimized the applications were and how well the infrastructure was planned.  Most services ran on the mainframe and those that didn't were very simple client-server applications that were relatively easy to manage.

    Today's data center is much different and much more complex.  Virtualized applications, centralized storage, and Cloud Computing make the task of Capacity Planning quite complex as cost savings can be realized when physical resources don't have a lot of excess headroom.  That means, however, that we need the skills and tools that allow us to understand applications from the end user to the data center and know where to optimize those applications in order to get the most bang for the buck.
     
    IT operations and capacity planners now must understand and optimize their applications and infrastructure from the end user to the data center.

    Metron and Correlsense recognize that Capacity Planning and application performance management (APM) are both key functions that are vital to the smooth operation of the modern data center and these functions must work together to optimize applications and services.  Metron's athene® and Correlsense's Sharepath integrate to bring together important APM and Capacity Planning data in one centralized location for the use of many different groups in the data center.

    We'll be discussing what you need to know about capacity management when operating in both physical and virtual environments, how performance monitoring in virtual environments relates to your capacity management goals and what's unique about capacity and performance management for virtualized applications at our joint webinar on November 20.

    Why not join us, register now   http://www.metron-athene.com/training/webinars/correlsense.html

    Rich Fronheiser
    SVP, Strategic Marketing