Guest post by Sam Bocetta!

Sam is a freelance journalist specializing in U.S. diplomacy and national security, with emphases on technology trends in cyberwarfare, cyberdefense, and cryptography.


Another day, another major data breach. This time, it’s from Capital One, a major credit card company based in McLean, Virginia. Although the hacker in this case was found and arrested in record time, the breach could have been detected before some 100+ million customers had their data accessed.

The breach was caused, according to the best information we have at the moment, by an insider (a former employee of the company’s cloud hosting provider) who obtained AWS IAM credentials for a Capital One S3 bucket. That’s an embarrassing mistake for a huge, multinational financial services company, so it’s no surprise that all the companies involved in the breach are blaming each other. The fact remains, though, that someone messed up, and that the breach could easily have been avoided.

What Happened With Capital One?

On July 19, 2019, a former employee of Amazon Web Services accessed credit card applications submitted to the company between 2005 and earlier this year. The database contained names, addresses, and other personal information of 106 million customers in the US and Canada.

While Capital One claims there’s no evidence that the breach was for financial gain or that the information was disseminated, there is some evidence that the hacker, Paige Thompson of Seattle, toyed with the idea of selling the information on dark web forums. It has also been reported that she may have breached more than 30 organizations.

The database was accessed through a configuration vulnerability that was discovered by an outside cybersecurity firm on July 19. Capital One released an apology to customers and offered those affected free credit monitoring and identity protection for one year. While no customers have been harmed financially so far, Capital One is expected to lose between $100 million and $150 million in breach mitigation costs.

In some ways, the Capital One breach is a typical example of how data breaches occur in 2019. Two features of the hack have become depressingly familiar in recent years: it appears to have been motivated by a disgruntled insider who still had access to critical systems, and it could have been prevented had Capital One been following basic security precautions.

How the Capital One Breach Could Have Been Detected Sooner

Capital One is among the first credit card companies to move fully to a public cloud-based business model. They hired Amazon Web Services, one of the oldest cloud computing companies, to manage their platform. Amazon states that there was no flaw on its end: a misconfigured firewall in front of the web application was to blame.

Thompson was able to access data that Capital One had stored on servers maintained by their cloud provider. These servers are protected by firewalls that automatically detect and shut down any incoming connection from a non-trusted source. That’s what should have happened in this case, had someone not forgotten to configure the firewall properly.

Though Capital One was quick to point the finger of blame at AWS, Amazon just as quickly denied the charge: “The perpetrator gained access through a misconfiguration of the web application and not the underlying cloud-based infrastructure,” an Amazon spokesperson said in a statement.

Cybersecurity experts agree. Several experts told the Houston Chronicle that the mistake is far more likely to have occurred within Capital One. They also noted that had the servers undergone proper penetration testing, the vulnerability would have easily been detected far in advance of the breach occurring.

The incident also points to some deeper issues. More and more companies are now using cloud-based storage solutions, because of the increased speed and scalability that these provide. However, as more companies are involved in maintaining the same system, it becomes difficult to assess the responsibility (and blame) of each one. Instead, each company relies on the other to keep data safe, and blames the other when something goes wrong.

Fortunately, the solution to this is pretty simple: every company should have a robust performance monitoring system in place.

A robust monitoring system gives teams an overview of what is happening within the data center, whether that is an AWS account, an Azure cloud, or an on-premises data center.

In this case, IT system administrators could have configured the monitoring system with thresholds that distinguish normal from abnormal activity. For example, the hacker in the Capital One scenario downloaded terabytes of data, which means a lot of data transfer activity. In IT terms, this should have shown up as spikes in the Network In and Network Out metrics. Proper thresholds on data transfer activity could have alerted the administrators and reduced the impact of the hacker’s actions.
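To make that concrete, here is a minimal sketch of such a threshold alert on AWS, using boto3 and CloudWatch. This is an illustration of the idea, not Capital One’s actual configuration; the instance ID, the 50 GB threshold, and the SNS topic ARN are hypothetical placeholders you would tune to your own traffic baseline:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm when one instance pushes out more than ~50 GB in five minutes.
    # The threshold is a hypothetical baseline; tune it to your workload.
    cloudwatch.put_metric_alarm(
        AlarmName="abnormal-network-out",
        Namespace="AWS/EC2",
        MetricName="NetworkOut",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Sum",
        Period=300,                    # five-minute windows
        EvaluationPeriods=1,
        Threshold=50 * 1024**3,        # bytes per window
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
    )

The same pattern applies to Network In, and an equivalent alert can be configured in any monitoring platform that supports thresholds on data transfer metrics.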

The Advantages of a Performance Monitoring Service

Cyber crime is something that all of us need to worry about, whether we’re individuals, e-merchants, or security professionals. Customer databases are especially attractive targets for hackers because they often contain account numbers and personally identifiable information that fetch a nice bounty when sold on the dark web.

However, as networks get more complex, access control and log monitoring are not enough. In the case of Capital One, access was gained by someone with some level of privilege and knowledge of how to get into their customer databases. This means that business owners must be aware of the security protocols that are in place on every system that they use, from cloud storage providers to web hosts, from their email marketing tools to their social media accounts.

That means prevention is not enough.

One of the surest ways to protect your website and reputation is a system of comprehensive performance monitoring. With that oversight in place, you can determine whether any area has been infiltrated by running a forensic analysis that detects inconsistencies between the log record and physical storage.

Practiced cyber criminals can still bypass log audits by erasing all evidence from the SQL query history, but their activity still leaves traces in the disk storage record and in RAM. Attempting to access the OS to tamper with those records is too risky for all but the most reckless hackers.

The Bottom Line

Had Capital One put in place quality performance monitoring tools, it’s unlikely that the recent breach would have occurred. Consumers and business owners are disheartened whenever information about a huge hack hits the news. It reinforces fears of identity theft and financial ruin. We worry because if governments and huge corporations can’t protect data, how can individuals and SMBs?

We can’t afford to take data integrity for granted. The first step toward more comprehensive cyber security is knowing where breaches are possible. The second is using available tools and monitoring systems diligently and consistently.

As an IT professional, your job is to ensure continual systems availability and to mitigate risk. Monitoring your IT infrastructure is an essential part of your overall IT strategy, yet many companies either don’t have an effective system in place or are using outdated tools that only provide part of the picture.

The risks of not monitoring your system, or of using outdated tools, definitely outweigh the time and cost required to upgrade your systems measurement resources. Failure to monitor, or monitoring with outdated systems, can lead to unnecessary downtime, reduced security, lost profits, and a major blow to your company’s street cred. Here are the top five reasons why IT performance monitoring is critical:

Prevent Unnecessary Downtime

No one wants to experience downtime. Your network needs to do more than simply work as expected. It is imperative that it is working at all times. By integrating newer IT performance monitoring tools, you’ll ensure that every aspect of your IT infrastructure is stable and functioning as it should. Alerting functions provide up-to-the-minute information about performance issues that could cost your company hundreds of thousands of dollars in unplanned downtime.

Security Breach Mitigation

Just turn on the evening news and you’ll hear about hackers, phishing schemes, and other malicious attempts to extract customer and credit card data from companies. By incorporating IT performance monitoring software, you’ll have the protection you need to mitigate the risk of experiencing a “Day at the Breach” by proactively identifying weak points in your security setup. IT performance monitoring tools will automatically alert you to atypical system activity, giving you the power to respond to potential threats and stop the bad guys in their tracks before it’s too late.

Manage Expectations

It’s easier than ever to manage internal and external expectations with an IT performance monitoring system in place. With just a few clicks of your mouse, you can provide your staff with the tools they need to report on what’s working and what isn’t. You’ll also be able to ensure that you’re providing a reliable customer experience.

A Picture is Worth a Thousand Words

Data visualization turns obscure data into easily understandable visuals and provides a quick way to convey your message. When the data is presented visually, the IT team can more effectively recognize patterns, identify data outliers and analyze data over time. Elements and patterns that were once too obscure to notice on a spreadsheet will pop off the page when delivered in a visual manner. Data visualization also allows members of the c-suite and other decision-makers to quickly identify trends and patterns to understand how one variable affects other areas of the company.

Strengthen Your Company’s Reputation

Your company’s brand is at stake. Now we realize that many of you might be saying “branding…schmanding…what does our internal IT infrastructure have to do with my company’s brand anyway?” Well, take our word for it…it does. As more and more customers interact with your brand online, ensuring that your systems are safe, secure, and always working is imperative to repeat business, and an IT performance monitoring solution is a key element in delivering an exceptional customer experience.

A Bonus 6th Reason IT Performance Monitoring is Essential…
Your Competition is Using IT Performance Monitoring Software to Deliver More Value Internally and Externally

More and more companies are realizing that they can gain a competitive edge by leveraging the data that results from IT performance monitoring. Your competition is implementing IT performance monitoring to easily capture, monitor and visualize data streams to improve quality and reduce the costs of operations to remain competitive.

Successful companies are leveraging advanced data and analytics to ensure system-wide performance. Whether the goal is to improve customer experiences, catch product flaws before repairs or replacements are needed, or increase safety, these systems provide IT and OT professionals in industries such as manufacturing, financial services, and telecommunications with previously untapped views into how their businesses operate.

Get More Information About How IT Performance Monitoring Helps Your Business Become Better, Faster and Stronger

Companies like Sightline Systems are helping customers achieve business transformation with IT performance monitoring. The newest release of Sightline’s award-winning platform for managing continuous streams of time series data has broken new barriers, collecting observations in real time at millisecond resolution. This breakthrough gives users access to data that was previously unavailable.

Sightline EDM helps users easily capture, monitor, and visualize data streams from their IT environments. Older legacy systems have long provided visibility into operations, but the data was frequently summarized because of the sheer volume produced. The state-of-the-art Sightline EDM software removes these barriers: it stores millisecond-level data in real time and preserves it for future analysis and planning tasks.

For more information about how IT performance monitoring can help your company reduce downtime, optimize performance, and achieve real results, contact Sightline Systems.

Now that we’ve entered the era of self-driving vehicles and asset sharing (think Uber and Lyft), you might wonder why, or if, trains still exist. Turns out, the old Iron Horse still plays a critical role in our transportation system, carrying between 16% and 18% of our freight. Why? Because trains are nine times more powerful and efficient than trucks, able to carry significantly larger loads in a single haul. Add to that the simplicity of maintaining a single locomotive engine, and you have quite a few reasons to use rail to transport freight.

The same applies to the mainframe in the mobile-first, cloud computing era of information technology. Big Iron is still the most efficient, reliable, and secure way to store large volumes of data and process tremendous numbers of transactions all the while simplifying maintenance.

Mainframes aren’t hip or sexy, but they are still critical to many enterprises that handle large-scale transaction volumes. As consumers rely more and more on their smartphones to conduct transactions, banks and hospitals rely on mainframes to process those transactions. Not just because they’re secure, but because the modern mainframe combines fast data access with scalable, sub-second transactional capability. Most of us don’t think “mainframe” when we think “mobile banking,” but maybe we should.

What’s a mainframe?

From a hardware perspective, today’s mainframes are powerful but not necessarily as large as their old nickname Big Iron implies. Because they continue to be designed for redundancy and resiliency — mainframes almost never go down — they’ve maintained their legendary reliability. As a result, they still shine anywhere computing power, large I/O requirements, and massive transaction processing are required. So, like trains, mainframes aren’t going away.

Consider the data center, full of small, inexpensive computers networked together. Each computer hosts multiple virtual machines that handle resource allocation, and the entire collection is managed and reported on to create a tightly integrated system that looks and functions a lot like a mainframe.

Of course, in the data center, each device must be configured, integrated and managed to ensure the appropriate level of security and performance. System administration costs often exceed the hardware purchase price.

Are mainframes on anyone’s technology roadmap?

Today, most organizations understand the need to provide familiar interfaces and mobile options to both customers and employees, so mobile and cloud computing are part of their technology roadmap. Industries that require processing power, security, and reliability typically have mainframes on their technology roadmap as well, often in a hybrid cloud model.

Payment processing, trading, and reservation systems all place unusually high demands on IT infrastructure 24x7x365. Industries that rely on these activities process billions of transactions per day, support thousands of concurrent users, and require millisecond response times. General purpose hardware and operating systems are typically unable to support such demands, so mainframes are a must for many businesses within the travel, finance, banking, and healthcare verticals.

Including mainframes on your technology roadmap doesn’t necessarily mean replacing existing mainframe hardware, but it does typically include modernizing it through a variety of software tools.

How do you monitor and maintain a diverse environment that includes mainframes?

Of course, the more complex and diverse your operating environment, the more difficult it becomes to maintain, let alone use to gain insights into your business.

Sightline Enterprise Data Manager (EDM) combines data from countless devices, sensors, servers, and mainframes to create a “single pane” view into the state of your digital health. EDM provides real-time anomaly detection, forecasting, capacity planning, and root cause analysis, enabling you to monitor and control your IT environment. Its highly interactive, visual tools are used to achieve results in minutes, accelerating discovery and investigation within any environment.

EDM, through a variety of Power Agents, is compatible with Unisys ClearPath OS 2200, ClearPath MCP, and Stratus VOS systems. Power Agents reside on the host infrastructure, collect and report performance data from all key components of the system, and enable IT teams to monitor the entire system in real time, proactively predicting performance issues and preventing unplanned downtime and data loss.

While you might not recognize our name, Sightline Systems has been helping clients maintain their IT infrastructure for over 20 years. We serve blue chip customers in industries as diverse as energy, finance, and telecom as well as manufacturing, retail, and travel.

If you’re struggling to monitor a diverse and growing network of systems, do yourself a favor: let Sightline Systems do the hard work for you.

In Linux, calculating available memory is not always straightforward, because Linux treats memory resources differently than other operating systems. As a result, many Linux server monitoring tools do not calculate the true value of this metric correctly, given what Linux is doing with memory behind the scenes. An admin might see that a Linux system has zero memory available when in fact plenty of memory is free.

Linux, by design, uses RAM for disk caching to speed up the system. This means that the Mem % Free metric will consistently be low (maybe 5%) when, in actuality, the system is only using 50% of the RAM.

It is possible in Sightline EDM to accurately monitor Linux memory usage and generate alerts when the amount of real memory gets too low, as opposed to when the default Mem % Free metric only appears to be too low.

Currently, this needs to be done using an expression, which lets you build and define your own metrics from existing ones. We will create two expressions in order to monitor real Linux memory usage.

  1. Mem Real Free GByte: We will create an expression called “Mem Real Free GByte”. It adds the following three metrics together: “Mem Buffers GByte” + “Mem Cached GByte” + “Mem Free GByte“. The sum counts buffer and cache memory as free, since Linux can reclaim it on demand, effectively taking the cache and buffers out of the memory calculation.
  2. Mem Real pct Free: We will then create an expression called “Mem Real pct Free.” This metric provides a percentage value which can be used to create accurate Linux memory usage alerts across systems. This is because a percentage will be accurate whether the system has 4 GB or 40 GB. This is done by the following calculation: (“Mem Real Free GByte” / “Mem Total GByte“)  * 100. This takes the free GBytes and divides by the total GBytes the system has, and then multiplies the result by 100 to get a percentage. For example, if a 16 GB system has a “Mem Real Free GByte” value of 10 GB, then the calculation would be  (10/16) * 100, which equals 62.5%.

By using these expressions, it is possible to create meaningful alerts based on real memory instead of the default Mem % Free across a wide range of Linux systems.
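As a point of reference, here is a minimal Python sketch of the same arithmetic performed outside of EDM, reading directly from /proc/meminfo (whose MemFree, Buffers, and Cached fields correspond to the EDM metrics above). It illustrates the calculation only; it is not how EDM implements expressions:

    def meminfo_kb():
        # Parse /proc/meminfo into a dict of {field_name: kilobytes}.
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                info[key] = int(value.split()[0])  # values are reported in kB
        return info

    mem = meminfo_kb()
    # "Real free" counts buffers and cache as free, since Linux reclaims them on demand.
    real_free_kb = mem["MemFree"] + mem["Buffers"] + mem["Cached"]
    real_pct_free = real_free_kb / mem["MemTotal"] * 100
    print(f"Mem Real Free: {real_free_kb / 1024**2:.1f} GB ({real_pct_free:.1f}% free)")

A 16 GB system with 10 GB really free reports 62.5%, matching the worked example above.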

The screenshot below shows the default Linux memory metric, Mem % Free, in the lower blue line hovering around 1% free, in comparison with the expression created for Mem Real pct Free, which shows the upper orange line around 36% free. Although the blue line appears to indicate that the Linux system is out of memory, that memory is actually being used for disk caching, whereas the orange line shows real memory around 36% free, which is a much better metric for creating performance alerts.

[Screenshot: Sightline EDM graph comparing the default Mem % Free metric (blue) with the Mem Real pct Free expression (orange)]

The difference can also be seen at the end of the graph, when an application begins using real memory, causing the orange line to dip down to 5%. The blue line does not reflect this change, however, because the system simply decreases the amount of memory used for disk caching and increases the amount of memory available to applications, so the two shifts cancel each other out. In this way, it is possible to set up alerts that accurately monitor Linux memory usage in Sightline EDM’s IT infrastructure monitoring system.

The Industrial Internet of Things (IIoT) is changing the landscape of the U.S. manufacturing industry.  Companies that understand the patterns and trends and position themselves to prepare for the impending advances will most certainly gain a competitive edge in the global marketplace. 

Companies no longer have the luxury of being anything but data-driven. Data used to be something to simply maintain and manage, but now it’s a valuable asset that companies use to gain a competitive edge. With change happening so rapidly, how are manufacturers preparing to take advantage of the massive amounts of data that are available, and more importantly, how are they using that data to really take advantage of the power that IIoT delivers?

When posed the question of how their companies are preparing for IIoT, many manufacturing leaders think of it as something far off in the distance and don’t really understand the full impact that is coming. Many see it as a fad, or as something that may only affect the way they handle day-to-day operations in the long run. As we peel back the layers of IIoT, however, we see strong potential for a shift that will change the entire manner in which manufacturing companies operate, similar to what the industry saw when it first implemented automation and began using IT and other electronics. As a result, manufacturing leaders are seeking to develop formal and informal IIoT strategies that will position their companies to take advantage of new opportunities to streamline efficiencies, reduce downtime, and stimulate profitability sooner rather than later.


Where We’ve Been and Where We’re Going

A review of manufacturing history reveals four distinct industrial revolutions, spanning from the initial mechanical production facilities to mass production, to the use of electronics and IT, and now to IIoT and systems integration. The fourth industrial revolution, or Industry 4.0, will allow manufacturers to leverage IIoT to collect vast amounts of sensor and network data, apply advanced analytics, and further utilize new technology such as robots and 3-D printing to improve quality and output.

While some progressive manufacturers see where the industry is headed, many are only at the starting gate of the next wave of innovation fueled by IIoT applications and solutions. According to a recent study by Smart Industry, many manufacturers are focused on learning and benchmarking to formulate winning strategies. Many will be using the findings to reduce operational costs, optimize asset utilization, improve worker productivity, enhance workplace safety, enhance the customer experience and create new business models and revenue streams.


IIoT Will Produce More “A-Ha” Moments for Manufacturers

The best way for the manufacturing industry to capitalize on IIoT is by gathering more data from sensors and systems and utilizing it to make business-driven decisions. While that is no easy task, manufacturers that add advanced analytics solutions now will most certainly have more “a-ha” moments as they produce insights previously clouded by uncertainty or unattainable due to limited resources and time.

These advanced platforms will enable manufacturers to gather the right data at the right time, data that can be leveraged to make well-informed and, most importantly, proactive business decisions. These tools will provide more insight and will give manufacturers a major engine for identifying and creating new products, services, and profit centers, all while simultaneously improving production efficiency, reducing costs, preventing downtime, ensuring quality, and enhancing their overall ability to strategically plan business operations.


Manufacturers Aren’t Really Sure Where to Begin

Data is being collected by sensors, PLCs, and more; indeed, some manufacturers are so overwhelmed with data that they aren’t sure where to start. With so much data readily available, many manufacturers are wondering how to begin implementing IIoT technologies in a thoughtful manner. Many are taking a very close look at the data they want to collect and how they will use the information to streamline efficiencies, realize opportunities, and produce a sizeable return on investment (ROI).

Manufacturers are concerned by a host of obstacles to adopting IIoT in their companies, the most notable being cybersecurity. Cybersecurity concerns, lack of overall IIoT knowledge internally, legacy products without obvious IIoT connectivity, and lack of senior management support and commitment, just to name a few, are among the most pressing issues that keep manufacturers up at night. To wrap their arms around these challenges, proactive manufacturers will need to gain a better understanding of how to leverage advanced analytics.

The traditional manufacturing business model is quite reactive: it relies on management to be the primary driver of change, on production driven by a sales forecast, and on system improvements only when something is perceived to be “broken.” As the manufacturing landscape advances due to IIoT, manufacturers must take a more holistic view of the entire company to better understand how one part of the operation affects the others, in order to seize enormous opportunities for improvement and proactively gain a competitive edge.

As manufacturers begin to take this more holistic approach, many are working with internal teams, suppliers, and consultants to decide which data is most valuable to collect, what systems require enhancement, how the data will help them realize opportunities, and how to gauge the full impact of IIoT changes within and outside the company. The two most critical issues are data management and cybersecurity; these will be critical challenges for any company to address, as they affect future competitiveness.

IIoT is most definitely changing the landscape of the manufacturing industry as we know it. Manufacturers that read the trends, understand data patterns and begin to lay the foundation now to proactively take advantage of the technological advances will be poised to remain viable in the global marketplace throughout the decades to come.

Creating an Expression-Based Alert in Sightline EDM

1. Log into Sightline EDM
2. Add an Alert Group

  1. Navigate to the Alerts Tab
  2. Navigate to the Alert Groups Tab
  3. Click the “Add Alert Group” button OR edit an existing Alert Group.
  4. Fill in the fields and click Save
3. Click the "Add Metric" button
4. Select a connection that will be used to create the expression
5. Click the "Create Expression" button

6. Find the desired metric and click the insert button

  1. Note: You can select a specific CPU, disk, or storage device, or you can select the metric name to monitor ALL of them.
  2. You can click the help icon to read the metric’s description and make sure it is the one you want to monitor.
  3. Repeat for each item you want to monitor in this compound expression

7. Add in the necessary operators (see the EDM documentation for a more detailed description of the available operators)
8. Add an Expression Name
9. Click "Save Expression"

10. Find and click your newly created expression under the "---All Expressions ---" group
11. Click the "Select Metric" button

12. Set the Alert Value to .999
13. Fill in any information for alert emails, traps, sounds, etc. See the EDM documentation for more information about configuring alerts.
14. Click the Finish Button
15. Assign the newly created alert group to a connection.

Installing and Updating the Power Agent on Windows Systems


Creating a Response File

We have had several inquiries about silent installs and updates for the Sightline Power Agents. Silent installs and updates for Windows Power Agents are accomplished by creating a response file, documented here.

VMware released vCenter 6.0 in April 2015. Like many IT professionals, we were interested in seeing what changes were made. After we upgraded to vCenter 6.0, we discovered that while it was more locked down, its shell could still give us more access.

While Sightline can monitor vCenter, ESX hosts, and VMs agentlessly, our Power Agents offer a lot more data about what’s going on inside VMs (mainly process-level information), including inside the vCenter appliance. In fact, the Power Agents included with Enterprise Data Manager provide you with the real-time data you need to make smarter, more cost-effective decisions. EDM is an award-winning platform for managing the continuous stream of time series data being produced, and it will help you:

  • Monitor systems
  • Analyze trends and patterns
  • Diagnose costly issues quickly
  • Reduce cost
  • Conduct root cause analysis
  • Automate capacity planning

Below are the steps you can take to access the appliance and add a port exception to its built-in firewall.

VMware, of course, provides instructions on how to manipulate the firewall. But those instructions only cover adding an IP or IP range to the list of systems allowed to communicate with vCenter.

In short, it doesn’t allow you to open a port. That was a problem, since our Power Agent uses port 1645 to communicate and send detailed performance data back to our analytics engine. We needed to open that port, and that proved to be harder than we thought.

Adding a Port to vCenter:
1) First, you’ll need console access. The console presents a familiar screen for admins who have accessed ESX server consoles before, but it is new to the vCenter 6.0 appliance.

[Screenshot: vCenter 6.0 appliance console screen]

2) Here, you’ll want to navigate to a hidden screen by pressing ALT+F1. Then, you’ll get this login screen:

[Screenshot: appliance login screen]

3) Here, log in with admin credentials and you’ll get a list of help commands.

4) Now, run the following two commands:

   shell.set --enabled True
   shell

After running them, you’ll get a standard Linux-style prompt.

There is a warning that the pi shell is only for advanced troubleshooting. As such, continue at your own risk.

5) Navigate to /etc/vmware/appliance

This is where you can add custom firewall port changes, in services.conf.

[Screenshot: services.conf in /etc/vmware/appliance]

6) WARNING: Initially, we tried to add a new group to the JSON in services.conf, and we ended up losing SSH access to the VM. It seems that VMware has a hardcoded limit of four rules; adding a fifth seems to bump the first one out.
7) To get around this, we just added our rule to the existing ssh rule. Run “vi services.conf”.

8) We added a comma, and then the new rule entry (shown in red in the original screenshot); a hypothetical sketch of such an entry follows below.

[Screenshot: services.conf with the added rule highlighted in red]
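Since the original screenshot can’t be reproduced here, the following is a hypothetical sketch of the kind of JSON rule entry we mean, not the exact edit. The field names follow VMware’s appliance firewall rule format as we understand it and may differ between versions, so treat this as illustrative only:

    {
        "direction": "inbound",
        "protocol": "tcp",
        "porttype": "dst",
        "port": "1645",
        "portoffset": 0
    }

Appending an entry like this inside the existing ssh rule block, rather than as a new top-level group, keeps the total number of groups under the apparent four-rule limit.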

9) Then, reload the vSphere vCenter 6.0 appliance firewall rules by executing:
/usr/lib/applmgmt/networking/bin/firewall-reload
or simply reboot the VM.

After we rebooted, we could access our performance monitoring tool on port 1645.

The New Sightline Systems Site. Earlier this summer we launched our new website. It’s clearer, more responsive, works on mobile devices and offers a cleaner, more modern look. You’ll also notice that we’ve added a blog and an area for past editions of Sightline Highlights.
Visit the new Sightline Systems Site

EDM 4.2 Released. EDM 4.2 expands its list of purpose-built features with automated and dynamic threshold settings for any monitored device, unlimited coverage to capture data from SNMP, and a customer-requested chart feature that gives IT administrators a quick way to prioritize infrastructure concerns.
Read Sightline EDM 4.2 Delivers Dynamic Alerts, Rapid Visualization and Quick Chart Feature

We Made the Most Promising Red Hat Solution Providers 2015 List. CIO Review’s annual top Red Hat Solution Providers list selects Sightline Systems for its dedication to providing the right tools that help companies transform server and application performance data into actionable business goals.
Read More About the Award

We’re Hiring. Think that you or a friend has the right stuff to join our team of experts? We’re hiring for new positions in our Fairfax, Virginia office: Chief Architect, Mid-Level Java Software Engineer, Senior Sales Representative, Solution Architect, and UI/UX Developer.
Need a New Job?

Virtual Appliances Made Easy With Oracle Linux. John Park, our systems administrator extraordinaire, recently created a virtual appliance that will help future customers add EDM to their infrastructures more easily. We were so impressed that we asked him to blog about it… Oracle was too… the company added the blog to its latest Oracle Linux newsletter.
Read About Easy Virtual Appliance Creation

Ask John
Questions? Comments? Suggestions? Ask John! If we use your input in a future newsletter, we’ll send you a $10 Amazon gift card.


Sightline in the vCenter Environment. The Sightline Performance Management Solution is a perfect complement to your organization’s vCenter implementation. When used together, Sightline can save you time and money, while providing the peace of mind that your systems are running smoothly.

Read about Sightline in the vCenter Environment

Monitoring VMware vCenter. Being able to correctly predict what your infrastructure is going to do in the future can be a truly monumental task. Using the capacity planning functionality within Sightline, you can easily create supporting documentation that provides the information needed to accurately determine what your infrastructure should look like in the weeks, months, and years to come.

Read about Capacity Planning with EDM

Using the VM Count metric from vCenter. We’re often asked about the metrics we collect, whether we collect certain information, or how we would represent a specific situation. An interesting metric collected from vCenter is VM Count. Is there a metric you’re interested in? Let us know, and we’ll investigate!

Read about VM Count

Your environment under a single umbrella — Sightline. As your organization evolves, so does the IT environment. It is critical that the IT team can intelligently monitor all physical, virtual, and cloud components, with accurate information available to correct situations as they occur. Once collected, that information can be used to assist with migrating business-critical applications to different platforms and to ensure that the environment is correctly sized to maximize efficiency and limit spending. Sightline provides this and more, all under the same umbrella.

Read about the Sightline Unified Monitoring Solution

Data visualization in Sightline EDM: who you are influences what you want to see. Visualization of data sounds like an easy proposition: collect data and display it. But there’s a lot more to it than that! What data should be displayed? How much data? And in what format? With the depth of data available from Sightline data collection agents, and the versatility of Sightline’s display interface, we can show you whatever you need to see.

Read about Data Visualization in EDM

Maximize performance monitoring of your VMware environment using Sightline. Sightline offers in-depth monitoring of your entire VMware environment. We use two different strategies to provide you with the best overall view of your virtual environment. First, we look at the vCenter servers to provide basic monitoring of the virtual space; then, we utilize Power Agents and Interface Agents to provide in-depth data collection for your mission-critical systems and applications. You decide just how deep you want to go into each instance in the environment.

Read more about Sightline in your VMware environment
