How to Resolve a Complex IT Problem in Just a Few Clicks with eG Enterprise



One of the most common yet most difficult tasks for any admin is troubleshooting end user “slow-time” issues. Application, database, network and server unresponsiveness, or “slow-time,” negatively affects enterprise performance and end user productivity ten times more often than downtime, and it can originate from just about anywhere within the enterprise.

Misconfiguration due to human error, missing drivers, intermittent memory faults, network IP cache errors, unbalanced workloads and constrained virtual resources can all be the root-cause of slow-time, or they can merely be symptoms of it. The key to resolving such issues is getting to the root-cause quickly, before they spread to other systems and bring productivity to a standstill.

In the following walkthrough, I detail how such a scenario can be resolved quickly and easily, before end users notice, with the help of eG Enterprise. This example focuses primarily on Citrix and VMware, but eG Enterprise can help IT departments maintain maximum productivity across millions of combinations of enterprise components.

eG Enterprise is a 100% web-based solution, making it possible for anyone in IT, from the CIO and IT managers to admins and helpdesk specialists, to proactively monitor their environment anytime, anywhere, on any device.

[Screenshot: eG alarms window]

When a “slow-time” error occurs, eG Enterprise automatically sends an alarm to the appropriate admin so they can take action immediately. The solution correlates and color codes minor, major and critical alerts and displays them in a layer model with the most critical alert at the top.
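
Conceptually, this prioritization amounts to sorting correlated alerts by severity. The few lines below are a generic illustration of that idea, not eG Enterprise code; the alarm texts are invented for the example.

```python
# Generic illustration of severity-ranked alert ordering -- not product code.
SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2}

alarms = [
    ("minor", "Web server response time elevated"),
    ("critical", "System console CPU usage at 100%"),
    ("major", "XenApp logon failures increasing"),
]

# Most critical alert first, as in the eG alarms window.
for severity, description in sorted(alarms, key=lambda a: SEVERITY_RANK[a[0]]):
    print(f"[{severity.upper():8}] {description}")
```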

[Screenshot: eG alarms window details]

According to the alert, virtual CPU usage in the VMware ESX system console is high. The system console is the bootstrap operating system of ESX and should be using only about 2% of the CPU allocated to it. Hovering over the description, we can see the usage has suddenly increased to 100%; left alone, this would surely affect Citrix performance and generate a large number of support calls to IT from end users.

[Screenshot: eG Detailed Diagnosis window]

Fortunately, eG Enterprise's patented detailed diagnosis technology makes identifying the root-cause a breeze. Scrolling to the right and using the magnifying glass icon reveals the root-cause within the Detailed Diagnosis window. The window displays the top 10 processes consuming virtual CPU resources; on the right, those processes are identified as SAMBA backups.
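
For readers curious what such a diagnosis boils down to, here is a minimal, illustrative Python sketch (not eG Enterprise code) that lists the top 10 processes on a host by CPU usage, the same kind of evidence the Detailed Diagnosis window surfaces automatically. It assumes the third-party psutil package is installed.

```python
# Illustrative sketch only -- not eG Enterprise code.
# Lists the top 10 processes by CPU usage, the kind of evidence
# the Detailed Diagnosis window surfaces automatically.
import time
import psutil

# Prime the per-process CPU counters, then sample over a short interval.
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(None)
    except psutil.Error:
        pass
time.sleep(1.0)

snapshot = []
for proc in psutil.process_iter(['pid', 'name']):
    try:
        snapshot.append((proc.cpu_percent(None), proc.info['pid'], proc.info['name']))
    except psutil.Error:
        continue  # process exited between iterations

for cpu, pid, name in sorted(snapshot, reverse=True)[:10]:
    print(f"{cpu:6.1f}%  pid={pid:<6}  {name}")
```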

[Screenshot: eG Fix Feedback window]

The root-cause is simply this: the VMware admin is performing a normal backup, but it is taking place before the end of the workday, potentially affecting Citrix users when they attempt to log on and access applications. The best solution is to contact the VMware admin, explain the situation and agree to either reschedule the backups or adjust virtual resources.
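
As an illustration of what the agreed fix might look like, rescheduling a cron-driven backup job is usually a one-line change. The job name, schedule and script path below are entirely hypothetical; the point is simply moving the job out of business hours.

```
# Hypothetical crontab entries -- job name, times and paths are illustrative only.
# Before: the SAMBA backup kicks off at 3:30 PM, overlapping the workday.
30 15 * * 1-5  /usr/local/bin/samba_backup.sh

# After: the same job rescheduled to 11:30 PM, after business hours.
30 23 * * 1-5  /usr/local/bin/samba_backup.sh
```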

In this case, the Citrix and ESX admins agree to reschedule the backups, and the Citrix admin then uses the Fix Feedback feature within eG Enterprise to document the event and the agreed solution and to save the record. Resolving the issue took just a few clicks and a quick phone call between admins.

eG Enterprise provides universal insight across platforms and domains, whether they exist in the cloud, the data center or in virtual space. It is for this reason that the Citrix admin had the visibility they needed to identify the root-cause as a virtual resource constraint within the VMs that support Citrix.

The following is a more detailed look at the Citrix admin's view of the eG Enterprise Universal Insight dashboard, as well as the methodology and technology behind the solution.

[Screenshot: eG Universal Insight dashboard]

The color codes of eG Enterprise are familiar: green is “Normal,” yellow is a “Minor” alert, orange is a “Major” alert and red is a “Critical” alert requiring immediate attention.

The dashboard provides universal insight into the 12 different components that comprise the two Citrix services being monitored. The Component Type panel lists the details for each of the 12 components.

Clicking on the listed services in the middle of the Infrastructure Health panel on the left reveals which of the two services is generating alerts.

The Measure-at-a-Glance panel at the bottom left lists the measurements and tests conducted for each of the 12 components that comprise the two Citrix services being monitored. Details include CPU utilization, free memory, active Citrix sessions and more.

The bell icon at the top right of the window links to the alarm details viewed previously.

[Screenshot: eG Infomart services window]

After clicking on the Services panel, eG Enterprise presents icons for the two different Citrix services. Infomart appears to be the service experiencing major issues; clicking on the Infomart icon opens the list of web transactions for the Infomart service.

[Screenshot: eG Infomart transactions window]

The list indicates that Application Access and User Logons are experiencing errors. The Citrix admin may click on the Topology tab, or on the transactions themselves, to view a service topology graph for the Infomart service.

[Screenshot: eG Citrix topology window]

Navigating the Infomart service topology from left to right: end users connect to Infomart through a network node and then a web server that is experiencing minor errors; the requests then reach a Citrix Zone Data Collector, which sends them to one of the Citrix XenApp servers, which is experiencing major errors. The XenApp server then accesses the appropriate file, print or database server on the back end.

Based on the color codes, eG Enterprise indicates that the primary focus should be on the Citrix XenApp servers. Clicking on them presents the Citrix admin with either a physical or a virtual topology for Citrix XenApp, depending on the supporting host.

[Screenshot: eG virtual Citrix XenApp topology window]

The virtual topology indicates that the VMware ESX virtual machines hosting the Citrix Zone Data Collector, the web server and Citrix XenApp are experiencing critical errors; that is where the focus needs to be. Clicking on that area of the virtual service topology reveals the elements that support those virtual machines.

[Screenshot: eG layer model window]

This is the eG Enterprise layer model for the Citrix XenApp service topology. On the right are all of the elements that support the virtual machines, as well as the various tests that correspond to the selected layer. As an example, within the OS layer for the virtual hypervisor are measurements and tests for the System Console, CPU, Disk Space and more; the information in the right panel changes according to which layer is selected.

[Screenshot: eG Detailed Diagnosis window]

eG Enterprise has already identified that virtual CPU resources within the system console are constrained. Using the magnifying glass icon on the far right opens the same Detailed Diagnosis window previously accessed from the Alarms window, and the same SAMBA backup processes are visible, confirming the earlier diagnosis.

Regardless of the path an admin chooses, identifying the root-cause of a complex IT problem takes just a few clicks with eG Enterprise.

[Screenshot: eG infrastructure health reporting window]

Finally, I will cover some of the reporting benefits that eG Enterprise provides. By returning to the Universal Insight dashboard and selecting the Reporter tab, anyone in IT can pull performance reports for the infrastructure.

Reports are available by Function, Component, Service or Segment; two of the more important are Operational KPI and Capacity Planning. Easy access to comprehensive reporting makes it possible to maintain business continuity, predict peak needs and ensure readiness for emerging technologies, all while keeping costs down and increasing productivity.

The eG Enterprise methodology is simple, the technology is powerful and the universal insight is comprehensive.

For a free trial, to schedule a live demo or to obtain more information about eG Enterprise, send a request to info@eginnovations.com or visit our website at www.eginnovations.com.

Reduce Downtime and Slow-time in the Borderless Enterprise


The era of the borderless enterprise is here!

Ensuring on-demand availability of apps and data, and seamless collaboration via the cloud, across the web, virtual networks, servers and storage, requires complete visibility across platforms, domains, time zones and geographies.

When apps and databases are unavailable, or slow and unresponsive, the downstream impact can be truly devastating: customer loyalty suffers, industry reputations are damaged, strategic partnerships dissolve, legal liabilities increase, stock values decrease and financial markets collapse.

Do you know how many man-hours the Fortune 500 alone lose to downtime and slow-time each year?

Do you know the estimated compensation cost of downtime and slow-time compared to annual GDP?

Are you a CIO or IT Executive and want to know more?

Within our most recent Enterprise Solution Brief, “Reducing Downtime and Slow-time in the Borderless Enterprise,” you will discover the following:

  • 3 key benefits of eG Enterprise Universal Insight within the Borderless Enterprise
  • The REAL Cost of Downtime and Slow-time
  • Why “Universal Insight is Required”
  • How eG Enterprise Universal Insight is “Ready for the Future of the Borderless Enterprise”
  • How eG Enterprise Universal Insight is a “Force Multiplier for Enterprise IT”

Read the brief, then let us show you the numerous ways that eG Enterprise Universal Insight can help you reduce downtime and slow-time, enhance IT service performance, increase operational efficiency and ensure IT effectiveness.

Email info@eginnovations.com

Call 866.526.6700

Enhance Healthcare IT and Improve Patient Care


FACT: Preventable medical errors are the 3rd leading cause of death in the United States, right behind heart disease and cancer. It doesn’t seem possible, but according to “Death by Medical Mistakes Hits Record” in Healthcare IT News it’s true, and the related financial costs may be as high as $1 trillion.

Certainly many of the deaths are due to situations Healthcare IT has no influence over, but some are due to preventable problems such as inaccurate or missing records, or systems that are completely down or experiencing application or database slow-time when doctors, clinicians and patients are in critical need.

According to another Healthcare IT News article, “The True Nature of Recovery: 5 Ways to Mitigate Downtime, Data Loss,” there are a number of things that Healthcare CIOs and IT managers can do to mitigate the risks to both patients and the bottom line of healthcare organizations. Though the article focuses on disaster recovery, it is quick to point out that knowing your systems, avoiding points of failure and ensuring that you have the data necessary to right-size, avoid cost overruns and prepare for emerging technologies are highly important.

What Healthcare IT organizations need is a solution that helps reduce IT complexity, increases performance, tracks app adoption, offers role-based reporting and grants CIOs, managers and admins “universal insight” across the solution stack as well as all other interdependent physical and virtual layers; and they need the ability to do it all from a single interface instead of multiple silo-centric tools.

Within our most recent Enterprise Solution Brief, written specifically for Healthcare IT, you will discover:

  • 4 key benefits that improve Healthcare IT efficiency and result in enhanced patient care
  • What factors are driving “Healthcare IT Transformation”
  • How to “Enable Successful Healthcare IT Transformation”
  • How eG Enterprise Universal Insight is “Ready for the Future of Healthcare IT”
  • How eG Enterprise Universal Insight is a “Force Multiplier for Healthcare IT”

Read the brief then let us show you the many ways that eG Enterprise Universal Insight can enhance your Healthcare IT department, improve patient care and save lives.

Email info@eginnovations.com

Call 866.526.6700

In 2015 CIOs Will Need Universal Insight Across the Enterprise


Silo-centric monitoring and management tools are fine for supporting a confined infrastructure with very limited interdependencies, but for CIOs seeking to prevent drains on CAPEX and ensure they are contributing to OPINC, they aren’t enough.

Resolving IT pain and avoiding unscheduled downtime is the traditional focus of APM and NPM solutions, and it will continue to be important in 2015, but emerging technologies like cloud, mobility and XaaS are reinventing the role of the modern CIO.

Those who virtualized and consolidated their data center servers over the last ten years are now extending virtualization across the enterprise to include networking, storage, cloud and mobility deployments.

With the new era of the borderless enterprise upon them, CIOs are keenly aware of the need to research and adopt solutions that resolve traditional IT pain while giving them universal insight across the enterprise, so they can ensure emerging technologies deliver greater value to the business.

Below are some of the predictions for 2015 that will shape the need for CIOs to have universal insight across the enterprise…

Enable workspace flexibility. IT executives are being challenged to adopt and integrate mobile solutions at a blinding pace, but 70% of end user devices cannot pass basic compliance and security tests, so introducing foreign devices onto the corporate network poses serious risks. CIOs will need to ensure availability while managing access, device and user compliance, and security; having universal insight across user and device profiles, approved and blacklisted apps, databases and domains will be critical to success.

XaaS becomes the new IT stack for hybrid cloud. The era of everything as a service has arrived. The development of virtual cloud and mobility apps is driving IT innovation and the consolidation of intellectual property at a blinding pace. To maintain market share and demonstrate thought leadership, traditional product-centric companies will accelerate efforts to bring new XaaS offerings to market. The rollout and adoption of new service offerings like vDaaS, DRaaS, IaaS, MWaaS, PaaS and WPaaS will grow and mature in 2015 and throughout the remainder of the decade.

The borderless enterprise explosion will usher in a new era of compliance and security. The gaps and interdependencies between cloud, mobility, virtual and shared infrastructures, social media platforms and SaaS will inspire a renewed focus on compliance and security, the way email, malware and network security have before. APM providers will be faced with some interesting choices: acquire or develop additional internal compliance and security expertise, partner with an existing security provider, or remain focused on their existing silo niche. CIOs will be left deciding whether to go all-in with a security provider, purchase silo-centric solutions that provide limited compliance and security visibility, or evaluate and choose an APM/NPM solution that meets most of their needs now as APM/NPM compliance and security maturity continues to grow.

Containerization remains a test and development play, for now. “Game changer,” “disruptive” and “death knell” are all phrases tossed about when containerization solutions are discussed as an alternative to traditional virtualization, but the reality is much less dramatic. Container solutions are well suited to accelerating Linux app portability and reducing associated overhead. Google and Facebook have deployed containers very successfully, but their demands for rapid deployment and scale are different from those of most corporate customers. Until container solutions are cross-compatible and offer mature management options and enhanced security capabilities, expect containers to remain a solution for Linux test and development environments while traditional VMs meet the majority of data center production demands.

Data is the new natural resource, almost as important as air and water. Ensuring on-demand availability of data and seamless collaboration via the cloud, across virtual networks, servers and storage, will require that CIOs have complete visibility across platforms, domains, time zones and geographies in 2015. Ensuring maximum uptime, preventing downtime and performing root-cause analysis are now IT table stakes; having universal insight across the enterprise, providing a seamless user experience, preventing slow-time and increasing productivity via the borderless enterprise are the new business goals CIOs must be aligned with and focused on.

Ensure IT effectiveness and business alignment. CIOs must align IT initiatives with desired business outcomes for productivity, growth and profit. APM historical performance reports provide the empirical data CIOs need to balance workloads, right-size the enterprise and eliminate cost overruns, so capacity planning meets the business needs of today while preparing for the emerging technologies of tomorrow.

Deliver a positive user experience through enhanced service performance. End users judge their experience relative to their ability to be productive and complete an end goal. Whether end users are employees and partners seeking to work seamlessly between the office and a mobile device as they move across domains, or customers accessing a web cart, they all expect apps and databases to be available, accessible and responsive. CIOs will rely heavily on APM solutions to provide KPIs for user logons, average response time, page loads, app adoption, abandonment rates and other metrics to ensure that end users are happy and productive.

Expanded use of KPI metrics. What started with call center, helpdesk and customer service metrics is expanding rapidly. APM solutions that can be adapted to collect KPIs for industry and role specific applications are influencing the decision making of CEOs, CFOs and other executives. APM solutions will be used to measure and determine the viability of pilot programs, industrial expansion and even the purchase of competing intellectual properties.

Improve operational efficiency. Reliance on command line interfaces and technology trees is functional but outdated. CIOs will arm and empower IT managers, admins and specialists with APM solutions that are customizable, intuitive, integrate easily with existing NOC tools and provide a unified view of the enterprise. The end goal will be to accelerate time to resolution, eliminate guesswork, reduce dependency on multiple silo-centric tools with limited visibility and mitigate the impact that natural attrition has on tribal knowledge.

For CIOs to achieve these goals and remain future-ready, they will need universal insight across the enterprise and timely, correlated information that enables data-driven decision making. 2015 will be a very interesting year as CIO thought leaders seek to improve the end user experience and enhance productivity within the enterprise.

About eG Innovations

eG Innovations is dedicated to helping businesses across the globe transform IT service delivery into a competitive advantage and a center for productivity, growth and profit. Many of the world’s largest businesses use eG Enterprise Universal Insight technology to enhance IT service performance, increase operational efficiency and ensure IT effectiveness. Visit www.eginnovations.com for more information.

Outages Bring Cloud Monitoring in Focus!


[Image: The outage of Amazon's cloud service impacted many businesses hosted in its cloud data center]

Over the last week, Amazon’s cloud service had a serious outage that caused many popular web businesses to go offline for several hours and resulted in significant loss of business.

All of a sudden, many in the press (and users as well!) are beginning to realize that applications hosted in the cloud are actually hosted on servers in data centers and are hence prone to the same kinds of problems as servers in their own data center. Just because you have not had to purchase a server, provision it, provide power and space, etc., does not mean that the server is failure-proof. As this article indicates, failures can happen for any number of reasons: a hardware failure, a network outage, an application coding error, etc. Even a configuration error inadvertently made by an administrator can cause catastrophic failures.


Many have gone overboard, predicting the end of cloud computing! If you look at the service contracts from these cloud service providers, they have not guaranteed that the infrastructure will be 100% failure-proof. With cloud computing, as with everything else, you get what you pay for. Not every business that used Amazon suffered during this outage. The outage was limited to Amazon's east coast (Northern Virginia) data center, and for enterprises that had paid for Amazon Web Services’ redundant cloud architecture, it was business as usual. Netflix, the popular movie rental site, was one such business.

"You get what you pay for" applies to monitoring tools as well!

Outages like the one Amazon had bring cloud monitoring tools into focus. The saying “you get what you pay for” applies to monitoring tools as well. If you are looking to be alerted once a problem happens, a simple low-cost up/down monitoring tool suffices. On the other hand, if you want to be like Netflix and be proactive, detecting problems before they become revenue impacting, you need a monitoring tool that can alert you to abnormal situations in advance, well before users notice the problem. More sophisticated cloud monitoring tools can also help you rapidly triage where the root-cause of a problem lies: Is it in the cloud data center? Is it in your application? Is it in the infrastructure services (DNS, Active Directory, etc.)?
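
To make the distinction concrete, here is a minimal Python sketch (illustrative only, not any vendor's implementation) of the difference between up/down checking and proactive alerting: instead of firing only when a check fails outright, it learns a rolling baseline for a metric and flags values that drift several standard deviations away from it. The window size, threshold and sample values are all assumptions.

```python
# Illustrative sketch only -- one simple way to flag abnormal metric
# values before they become outright failures, using a rolling baseline.
from collections import deque
from statistics import mean, stdev

class BaselineAlerter:
    def __init__(self, window=60, threshold_sigmas=3.0):
        self.history = deque(maxlen=window)   # recent samples
        self.threshold_sigmas = threshold_sigmas

    def observe(self, value):
        """Return True if `value` deviates abnormally from the baseline."""
        abnormal = False
        if len(self.history) >= 10:           # need enough samples first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold_sigmas * sigma:
                abnormal = True
        self.history.append(value)
        return abnormal

# Example: response times hover near 200 ms, then begin degrading.
alerter = BaselineAlerter()
for ms in [200, 205, 198, 202, 201, 199, 203, 200, 197, 204, 450]:
    if alerter.observe(ms):
        print(f"ALERT: response time {ms} ms is abnormal")  # fires at 450
```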


Monitoring tools provide insurance cover for your infrastructure. Just as Netflix would have assessed the cost of redundancy versus the benefit of having its business up during the outage, you should assess the return on investment from a monitoring tool.

There are several ways to assess the ROI from a monitoring tool (a worked example follows the list):

  1. By the number of times the monitoring tool can help you avert a problem by proactively alerting you and enabling you to take action before users notice the issue;
  2. By the time the monitoring tool saves by helping you to pinpoint the root-cause of a problem;
  3. By the amount of time that the monitoring tool saves for your key IT personnel by allowing your first level support teams to handle user complaints;
  4. By the savings that the tool provides by enabling you to optimize your infrastructure and to get more out of your existing investment, without having to buy additional hardware or to use additional cloud services.
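
As a purely hypothetical worked example of the first two items, suppose a tool averts ten outage-hours a year and shortens triage on fifty incidents. Every figure below is made up solely to show the arithmetic, not a benchmark.

```python
# Hypothetical ROI arithmetic -- every figure below is an assumption
# invented for illustration.
averted_outage_hours = 10          # problems caught before users noticed
cost_per_outage_hour = 5_000       # estimated revenue/productivity loss ($)

incidents_per_year = 50
triage_hours_saved_each = 2        # faster root-cause pinpointing
loaded_admin_rate = 75             # $/hour for senior IT staff

tool_cost = 20_000                 # annual license + upkeep ($)

savings = (averted_outage_hours * cost_per_outage_hour
           + incidents_per_year * triage_hours_saved_each * loaded_admin_rate)
print(f"Annual savings: ${savings:,}")                  # $57,500
print(f"ROI: {(savings - tool_cost) / tool_cost:.0%}")  # 188%
```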

Related links:

The top requirements for a Cloud-Ready Monitoring Solution: Click here >>>

Agility – Not Cost – is Driving Private Cloud Computing


According to Gartner, agility (not cost) is the most important factor motivating private cloud computing deployments. If you have used a public cloud service provider and experienced a cloud-based service first hand, this is not surprising. Having the ability to get a server you need in minutes, with the applications you need pre-installed, is a huge change from the traditional way of ordering a server and waiting for days for the server OS and applications to be installed.

The term “cloud computing” has been one of the most overused terms of the last year. To understand what cloud computing can deliver, let me take a simple analogy. Remember the days when you had to physically visit a bank to check your balance or to transfer money between accounts? Internet and mobile banking have made it significantly easier for us to do the same today. The result: great convenience for you, the consumer. At the same time, it also introduces operational efficiency for the bank, which can have fewer tellers. Cloud computing, with its self-service interface, promises to do the same. Users will benefit from the increased agility and the convenience of creating the servers they need. The IT operations team will also benefit from having to deal with fewer routine enquiries.

The challenge with private clouds is getting the IT team's buy-in.

The challenge, though, is that not many IT operations teams may believe so! According to Gartner, early adopters of private cloud computing indicated that technology is one of the least important challenges in building private cloud solutions. Much tougher issues are the cultural, process, business model and political ones! This is not surprising, because IT administrators are likely to believe their role is being undermined by allowing users to self-provision what they need. Gartner’s recommendation is therefore to focus first on the fundamental concept of turning IT into a service provider that delivers through a service catalog, leverages automation and is probably paid by usage.

Management Technologies will Play a Central Role in Fulfilling the Promise of Cloud Computing and Virtualization Technologies


2011 is almost here, and it promises to be an exciting and challenging year! Here are my top 10 predictions in the monitoring and management space for 2011.

Virtualization and cloud computing have garnered a lot of attention recently. While virtualization has been successfully used for server applications, its usage for desktops is still in its early stages. Cloud computing is being tested for different enterprise applications, but has yet to gain complete acceptance in the enterprise. 2011 will be the year that these technologies become mainstream.

A key factor determining the success of these technologies will be the total cost of ownership (TCO). The lower the TCO, the greater the chance of adoption. By proactively alerting administrators to problems, pointing to bottleneck areas and suggesting means of optimizing the infrastructure, management technologies will play a central role in ensuring that these technologies are successful. With this in mind, I make the following predictions for 2011:

1. Virtualization will go mainstream in production environments. Very few organizations will not have at least one virtualized server hosting VMs. Enterprises will focus on getting the maximum out of their existing investments and will look to increase the VM density – i.e., the number of VMs for each physical server. In order to do so, administrators will need to understand the workload on each VM and which workloads are complementary (e.g., memory intensive vs. CPU intensive), so IT can use a mix and match of VMs with different workloads to maximize usage of the physical servers. Management tools will provide the metrics that will form the basis for such optimizations.
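
As a toy illustration of this mix-and-match idea, the sketch below pairs CPU-hungry VMs with memory-hungry ones so that neither resource on a host is oversubscribed. It assumes each VM can be summarized by a CPU and a memory demand normalized to host capacity; the VM names and numbers are invented.

```python
# Toy sketch of complementary-workload placement: pair CPU-intensive VMs
# with memory-intensive ones so neither host resource saturates.
# Demands are fractions of one host's capacity (illustrative numbers).
vms = [
    ("build-server", 0.70, 0.20),   # (name, cpu, mem) -- CPU heavy
    ("cache-node",   0.15, 0.75),   # memory heavy
    ("web-frontend", 0.60, 0.25),
    ("database",     0.20, 0.70),
]

hosts = []  # each host tracks its VMs and running resource totals
for name, cpu, mem in sorted(vms, key=lambda v: v[1] - v[2], reverse=True):
    for host in hosts:
        if host["cpu"] + cpu <= 1.0 and host["mem"] + mem <= 1.0:
            host["vms"].append(name)
            host["cpu"] += cpu
            host["mem"] += mem
            break
    else:  # no existing host fits; start a new one
        hosts.append({"vms": [name], "cpu": cpu, "mem": mem})

for i, host in enumerate(hosts, 1):
    print(f"host {i}: {host['vms']}  cpu={host['cpu']:.2f} mem={host['mem']:.2f}")
```

With these numbers, each of the two resulting hosts ends up holding one CPU-heavy and one memory-heavy VM, which is exactly the kind of packing that raises VM density without starving either resource.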

2. Multiple virtualization platforms in an organization will become a reality. Over the last year, different vendors have come up with virtualization platforms that offer lower-cost alternatives to the market leader, VMware. Expect enterprises to use a mix of virtualization technologies, with the most critical applications hosted on the platforms offering the best reliability and scalability, while less critical applications may be hosted on lower-cost platforms. Enterprises will look for management tools that can support all of these virtualization platforms from a single console.

3. Enterprises will realize that they cannot effectively manage virtual environments as silos. As key applications move to virtual infrastructures, enterprises will realize that misconfiguration or problems in the virtual infrastructure can also affect the performance of business services running throughout the infrastructure. Further, because virtual machines share the common resources of the physical server, a single malfunctioning virtual machine (or application) can impact the performance seen by all the other virtual machines (and the applications running on them). If virtualization is managed as an independent silo, enterprise service desks will have no visibility into issues in the virtual infrastructure and, as a result, could end up spending endless hours troubleshooting a problem that was caused at the virtualization tier. Enterprise service desks will need management systems that can correlate the performance of business services with that of the virtual infrastructure and help them quickly translate a service performance problem into an actionable event at the operational layer.

4. Virtual desktop deployments will finally happen. VDI deployments in 2010 have mostly been proofs of concept; relatively few large-scale production deployments of VDI have occurred. Many VDI deployments run into performance problems, so IT ends up throwing more hardware at the problem, which in turn makes the entire project prohibitively expensive. Lack of visibility into VDI also results from organizations trying to use the same tools for VDI that they have used for server virtualization management. In 2011, enterprises will realize that desktop virtualization is very different from server virtualization, and that management tools for VDI need to be tailored to the unique challenges that a virtual desktop infrastructure poses. Having the right management solution in place will also give VDI administrators visibility into every tier of the infrastructure, thereby allowing them to determine why a performance slowdown is happening and how they can re-engineer the infrastructure for optimal performance.

5. Traditional server-based computing will get more attention as organizations realize that VDI has specific use cases and will not be a fit for others. For some time now, enterprise architects have been advocating the use of virtual desktops for almost every remote access requirement. As they focus on cost implications of VDI, enterprise architects will begin to evaluate which requirements really need the flexibility and security advantages that VDI offers over traditional server-based computing. As a result, we expect server-based computing deployments to have a resurgence. For managing these diverse remote access technologies, enterprises will look for solutions that can handle both VDI and server-based computing environments equally well and offer consistent metrics and reporting across these different environments.

6. Cloud computing will gain momentum. Agility will be a key reason why enterprises will look at cloud technologies. With cloud computing, enterprise users will have access to systems on demand, rather than having to wait for weeks or months for enterprise IT teams to procure, install and deliver the systems. Initially, as with virtualization, less critical applications including testing, training and other scratch-and-build environments will move to the public cloud. Internal IT teams will continue to work with public clouds, and ultimately a hybrid cloud model will evolve in the enterprise. Monitoring and management technologies will need to evolve to manage business services that span one or more cloud providers, where the service owner will not have complete visibility into the cloud infrastructure that their service is using.

7. Enterprises will move towards greater automation. For all the talk about automation, very few production environments make extensive use of this powerful functionality. For cloud providers, automation will be a must as they seek to make their environments agile. Dynamic provisioning, automated load balancing and on-demand power on/power off of VMs based on user workloads will all start to happen in the data center.
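
As a sketch of what such automation can look like, the loop below powers VMs on and off as average user workload crosses thresholds. The `Hypervisor` object and its methods (`running_vms`, `standby_vms`, `power_on`, `power_off`, `utilization`) are entirely hypothetical stand-ins for whatever provisioning API a given platform exposes, and the thresholds are invented for illustration.

```python
# Hypothetical autoscaling loop -- the hypervisor interface and its
# methods are illustrative stand-ins, not a real platform API.
import time

SCALE_UP_AT = 0.80    # average utilization that triggers a power-on
SCALE_DOWN_AT = 0.30  # average utilization that allows a power-off
MIN_RUNNING = 2       # always keep a baseline of capacity

def autoscale(hypervisor):
    while True:
        running = hypervisor.running_vms()
        if running:
            load = sum(vm.utilization() for vm in running) / len(running)
            if load > SCALE_UP_AT and hypervisor.standby_vms():
                hypervisor.power_on(hypervisor.standby_vms()[0])
            elif load < SCALE_DOWN_AT and len(running) > MIN_RUNNING:
                hypervisor.power_off(running[-1])
        time.sleep(60)  # re-evaluate every minute
```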

8. “Do more with less” will continue to be the paradigm driving IT operations. Administrators will look for tools that can save them at least a few hours of toil each day through proactive monitoring, accurate root-cause diagnosis and pinpointing of bottleneck areas. Cost will be an important criterion for tool selection and, as hardware becomes cheaper, management tool vendors will be forced to move away from pricing per CPU, core, socket or application managed.

9. Enterprises will continue to look to consolidate monitoring tools. Enterprises have already begun to realize that having specialized tools for each and every need is wasteful and actually disruptive. Every new tool introduced carries a cost and adds requirements for operator training, tool certification, validation, etc. In 2011, we expect enterprises to look for multi-faceted tools that can cover needs in multiple areas. Tools that can span the physical and virtual worlds, offer both active and passive monitoring capabilities, and support both performance and configuration management will be in high demand. Consolidation of monitoring tools will result in tangible operational savings and actually work better than a larger number of dedicated element managers.

10. ROI will be the driver for any IT initiative. In the monitoring space, tools will be measured not by the number of metrics they collect but by how well they help solve real-world problems. IT staff will look for solutions that excel at proactively monitoring and issuing alerts before a problem happens, and at helping customers be more productive and efficient (e.g., by reducing the time an expert has to spend on a trouble call).

Reposted from VMBlog – http://vmblog.com/archive/2010/12/09/eg-innovations-management-technologies-will-play-a-central-role-in-fulfilling-the-promise-of-cloud-computing-and-virtualization-technologies.aspx