Guest post by Sam Bocetta!

Sam is a freelance journalist specializing in U.S. diplomacy and national security, with emphases on technology trends in cyberwarfare, cyberdefense, and cryptography.

Another day, another major data breach. This time, it’s Capital One, a major credit card company based in McLean, Virginia. Although the hacker in this case was found and arrested in record time, the breach could have been detected before some 100+ million customers had their data accessed.

According to the best information we have at the moment, the breach was caused by an insider (a former employee of the cloud hosting company) who obtained the AWS IAM credentials for Capital One’s S3 buckets. That’s an embarrassing mistake for a huge, multinational financial services company, so it’s no surprise that all the companies involved are blaming each other. The fact remains, though, that someone messed up, and that the breach could easily have been avoided.

What Happened With Capital One?

On July 19, 2019, a former employee of Amazon Web Services accessed credit card applications submitted to the bank between 2005 and early 2019. The database contained names, addresses, and other personal information of 106 million customers in the US and Canada.

While the company claims there’s no evidence that the breach was for financial gain or to disseminate the information, there is some evidence that the hacker, Paige Thompson of Seattle, toyed with the idea of selling the information on several dark web forums. It has also been reported that she may have breached more than 30 organizations.

The database was accessed due to a configuration vulnerability which was discovered by an outside cyber security firm on July 19. Capital One released an apology to customers and offered free credit monitoring and identity protection for one year to those affected. While no customers have been harmed financially so far, Capital One is expected to lose between $100 million and $150 million in breach mitigation costs.

In some ways, the Capital One breach is a typical example of how data breaches occur in 2019. Two features of the hack have become depressingly familiar in recent years: it seems to have been motivated by a disgruntled employee who still had access to critical systems, and it could have been prevented had Capital One been following basic security precautions.

How the Capital One Breach Could Have Been Detected Sooner

Capital One is among the first credit card companies to move fully to a public cloud-based business model. They hired Amazon Web Services, one of the oldest cloud computing companies, to manage their platform. Amazon states that there was no flaw on its end; a misconfigured firewall on the web application side of the equation was to blame.

Thompson was able to access data that Capital One had stored on servers maintained by their cloud provider. These servers are protected by firewalls that automatically detect and shut down any incoming connection from a non-trusted source. That’s what should have happened in this case, had someone not forgotten to configure the firewall properly.

Though Capital One was quick to point the finger of blame at AWS, Amazon just as quickly denied the charge: “The perpetrator gained access through a misconfiguration of the web application and not the underlying cloud-based infrastructure,” an Amazon spokesperson said in a statement.

Cybersecurity experts agree. Several experts told the Houston Chronicle that the mistake is far more likely to have occurred within Capital One. They also noted that had the servers undergone proper penetration testing, the vulnerability would have easily been detected far in advance of the breach occurring.

The incident also points to some deeper issues. More and more companies are now using cloud-based storage solutions, because of the increased speed and scalability that these provide. However, as more companies are involved in maintaining the same system, it becomes difficult to assess the responsibility (and blame) of each one. Instead, each company relies on the other to keep data safe, and blames the other when something goes wrong.

Fortunately, the solution to this is pretty simple: all companies should have in place a robust performance management system.

A robust monitoring system provides teams with an overview of what is happening within the data center, be it an AWS account, an Azure cloud, or an on-premises data center.

In this case, IT system administrators could configure the monitoring system with alerts that distinguish normal thresholds from abnormal ones. For example, the hacker in the Capital One scenario downloaded terabytes of data, which means a lot of data transfer activity. In IT terms, this should have shown up as spikes in the Network In and Network Out metrics. Proper thresholds on data transfer activity could have alerted the administrators and reduced the impact of the hacker’s actions.
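As a rough illustration, an egress-spike check might look like the sketch below. The metric names, baseline, and multiplier are hypothetical, not any particular product’s API; a real system would learn the baseline from historical data.

```python
# Hypothetical sketch: flagging abnormal outbound-transfer spikes.
NORMAL_EGRESS_GB_PER_HOUR = 5      # assumed baseline, learned from history in practice
ABNORMAL_MULTIPLIER = 10           # 10x baseline is treated as suspicious

def check_network_out(samples_gb):
    """Return (hour, GB) pairs whose outbound transfer exceeds the abnormal threshold."""
    threshold = NORMAL_EGRESS_GB_PER_HOUR * ABNORMAL_MULTIPLIER
    return [(hour, gb) for hour, gb in enumerate(samples_gb) if gb > threshold]

# Hourly Network Out readings in GB; hour 3 models a bulk download of data.
readings = [2, 3, 4, 900, 3, 2]
print(check_network_out(readings))  # [(3, 900)]
```

A real deployment would page an administrator instead of printing, but the principle is the same: a multi-terabyte download stands out sharply against a sane baseline.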

The Advantages of a Performance Monitoring Service

Cyber crime is something that all of us need to worry about, whether we’re individuals, eMerchants, or security professionals. Customer databases are especially attractive targets for hackers because they often contain account numbers and personal identification information that fetch a nice bounty when sold on the Dark Web.

However, as networks get more complex, access control and log monitoring are not enough. In the case of Capital One, access was gained by someone with some level of privilege and knowledge of how to get into their customer databases. This means that business owners must be aware of the security protocols that are in place on every system that they use, from cloud storage providers to web hosts, from their email marketing tools to their social media accounts.

That means prevention is not enough.

One of the surest ways to protect your website and reputation is through a system of comprehensive performance monitoring. With such oversight in place, forensic analysis can detect inconsistencies between the log record and physical storage, revealing whether any area has been infiltrated.

Practiced cyber criminals can still bypass log audits by erasing all evidence from the history of SQL queries, but their activity still leaves traces on disk storage and in RAM. Attempting to access the OS to tamper with those records is too risky for all but the most reckless hackers.

The Bottom Line

Had Capital One put in place quality performance monitoring tools, it’s unlikely that the recent breach would have occurred. Consumers and business owners are disheartened whenever information about a huge hack hits the news. It reinforces fears of identity theft and financial ruin. We worry because if governments and huge corporations can’t protect data, how can individuals and SMBs?

We can’t afford to take data integrity for granted. The first step toward more comprehensive cyber security is knowing where breaches are possible. The second is using available tools and monitoring systems diligently and consistently.

As an IT professional, your job is to ensure continual systems availability and to mitigate risk. Monitoring your IT infrastructure is an essential part of your overall IT strategy, yet many companies either don’t have an effective system in place or are using outdated tools that only provide part of the picture.

The risks of not monitoring your system, or of using outdated tools, far outweigh the time and cost required to modernize your systems measurement resources. Failure to monitor, or monitoring with outdated systems, can lead to unnecessary downtime, reduced security, lost profits, and a major blow to your company’s street cred. Here are the top five reasons why IT performance monitoring is critical:

Prevent Unnecessary Downtime

No one wants to experience downtime. Your network needs to do more than simply work as expected. It is imperative that it is working at all times. By integrating newer IT performance monitoring tools, you’ll ensure that every aspect of your IT infrastructure is stable and functioning as it should. Alerting functions provide up-to-the-minute information about performance issues that could cost your company hundreds of thousands of dollars in unplanned downtime.

Security Breach Mitigation

Just turn on the evening news and you’ll hear about hackers, phishing schemes and other malicious attempts to extract customer and credit card data from companies. By incorporating IT performance monitoring software, you’ll have the protection you need to mitigate the risk of experiencing a “Day at the Breach” by proactively identifying weak points in your security setup. IT performance monitoring tools will automatically alert you to atypical system activity, giving you the power to respond to potential threats and stop the bad guys in their tracks before it’s too late.

Manage Expectations

It’s easier than ever to manage internal and external expectations with an IT performance monitoring system in place. With just a few clicks of your mouse, you can provide your staff with the tools they need to report on what’s working and what isn’t. You’ll also be able to ensure that you’re providing a reliable customer experience.

A Picture is Worth a Thousand Words

Data visualization turns obscure data into easily understandable visuals and provides a quick way to convey your message. When the data is presented visually, the IT team can more effectively recognize patterns, identify data outliers and analyze data over time. Elements and patterns that were once too obscure to notice on a spreadsheet will pop off the page when delivered in a visual manner. Data visualization also allows members of the c-suite and other decision-makers to quickly identify trends and patterns to understand how one variable affects other areas of the company.

Strengthen Your Company’s Reputation

Your company’s brand is at stake. Now we realize that many of you might be saying “branding…schmanding…what does our internal IT infrastructure have to do with my company’s brand anyway?” Well, take our word for it…it does. As more and more customers interact with your brand online, ensuring that your systems are safe, secure and always working is imperative to repeat business, and an IT performance monitoring solution is a key element in delivering an exceptional customer experience.

A Bonus 6th Reason IT Performance Monitoring is Essential…
Your Competition is Using IT Performance Monitoring Software to Deliver More Value Internally and Externally

More and more companies are realizing that they can gain a competitive edge by leveraging the data that results from IT performance monitoring. Your competition is implementing IT performance monitoring to easily capture, monitor and visualize data streams to improve quality and reduce the costs of operations to remain competitive.

Successful companies are leveraging the advanced data and analytics to ensure system-wide performance. Whether they seek to improve customer experiences, catch product flaws before repairs or replacements are needed, or increase safety, these systems also provide IT and OT professionals in many industries such as manufacturing, financial services, and telecommunications with previously untapped views into how their businesses operate.

Get More Information About How IT Performance Monitoring Helps Your Business Become Better, Faster and Stronger

Companies like Sightline Systems are helping customers achieve business transformation with IT performance monitoring. The newest release of Sightline’s award-winning platform for managing continuous streams of time series data has broken new barriers, collecting real-time data at millisecond resolution. This breakthrough technology provides users with access to data that was previously unavailable.

Sightline EDM helps users easily capture, monitor and visualize data streams from their IT environments. Older, legacy systems have provided visibility into operations for many years, but the data was frequently summarized due to the sheer volume produced. The state-of-the-art Sightline EDM software has removed these barriers: it can store millisecond-level data in real time and preserve the data for future analysis and planning tasks.

For more information about how IT performance can help your company reduce downtime, optimize performance and achieve real results, contact Sightline Systems.

Now that we’ve entered the era of self-driving vehicles and asset sharing (think Uber and Lyft), you might wonder why – or if — trains still exist. Turns out, the old Iron Horse still plays a critical role in our transportation system, carrying between 16% and 18% of our freight. Why? Because trains are nine times more powerful and efficient than trucks, able to carry significantly larger loads in a single haul. Add to that the simplicity of maintaining a single locomotive engine, and you have quite a few reasons to use rail to transport freight.

The same applies to the mainframe in the mobile-first, cloud computing era of information technology. Big Iron is still the most efficient, reliable, and secure way to store large volumes of data and process tremendous numbers of transactions all the while simplifying maintenance.

Mainframes aren’t hip or sexy, but they are still critical to many enterprises that handle large-scale transaction volumes. As consumers rely more and more on their smartphones to conduct transactions, banks and hospitals rely on mainframes to process those transactions. Not just because they’re secure, but because the modern mainframe combines fast data access with scalable, sub-second transactional capability. Most of us don’t think “mainframe” when we think “mobile banking,” but maybe we should.

What’s a mainframe?

From a hardware perspective, today’s mainframes are powerful but not necessarily as large as their old nickname Big Iron implies. Because they continue to be designed for redundancy and resiliency — mainframes almost never go down — they’ve maintained their legendary reliability. As a result, they still shine anywhere computing power, large I/O requirements, and massive transaction processing are required. So, like trains, mainframes aren’t going away.

Consider the data center, full of small, inexpensive computers networked together. Each computer hosts multiple virtual machines that handle resource allocation, and the entire collection is managed and reported on to create a tightly integrated system that looks and functions a lot like a mainframe.

Of course, in the data center, each device must be configured, integrated and managed to ensure the appropriate level of security and performance. System administration costs often exceed the hardware purchase price.

Are mainframes on anyone’s technology roadmap?

Today, most organizations understand the need to provide familiar interfaces and mobile options to both customers and employees, so mobile and cloud computing are part of their technology roadmap. Industries that require processing power, security, and reliability typically have mainframes on their technology roadmap as well, often in a hybrid cloud model.

Payment processing, trading, and reservation systems all place unusually high demands on IT infrastructure 24x7x365. Industries that rely on these activities process billions of transactions per day, support thousands of concurrent users, and require millisecond response times. General purpose hardware and operating systems are typically unable to support such demands, so mainframes are a must for many businesses within the travel, finance, banking, and healthcare verticals.

Including mainframes on your technology roadmap doesn’t necessarily mean replacing existing mainframe hardware, but it does typically include modernizing it through a variety of software tools.

How do you monitor and maintain a diverse environment that includes mainframes?

Of course, the more complex and diverse your operating environment, the more difficult it becomes to maintain, let alone use to gain insights into your business.

Sightline’s Enterprise Data Manager (EDM) combines data from countless devices, sensors, servers, and mainframes to create a “single pane” view into the state of your digital health. EDM provides real-time anomaly detection, forecasting, capacity planning, and root cause analysis, enabling you to monitor and control your IT environment. Its highly interactive, visual tools are used to achieve results in minutes, accelerating discovery and investigation within any environment.

EDM, through a variety of Power Agents, is compatible with Unisys ClearPath OS 2200, ClearPath MCP and Stratus VOS systems. Power Agents reside on the host infrastructure and collect and report performance data from all key components of the system, enabling IT teams to monitor the entire system in real time, proactively predict performance issues, and prevent unplanned downtime and data loss.

While you might not recognize our name, Sightline Systems has been helping clients maintain their IT infrastructure for over 20 years. We serve blue chip customers in industries as diverse as energy, finance, and telecom as well as manufacturing, retail, and travel.

If you’re struggling to monitor a diverse and growing network of systems, do yourself a favor: let Sightline Systems do the hard work for you.

In Linux, calculating available memory is not always straightforward, because Linux treats memory resources differently than other operating systems do. As a result, many Linux server monitoring tools do not calculate the true value correctly, because of what Linux is doing with memory behind the scenes. A Linux admin might see zero (0) memory available when in fact plenty of memory is available.

Linux, by design, uses RAM for disk caching to speed up the system. This means that the Mem % Free metric will consistently be low (maybe 5%), when in actuality the system may only be using 50% of the RAM.
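To see why the default metric misleads, consider a minimal sketch that parses the standard fields from /proc/meminfo (MemTotal, MemFree, Buffers, Cached). The sample values here are made up to match the 5%-free scenario above:

```python
# Minimal sketch: why "Mem % Free" misleads on Linux.
# Most "used" memory is really reclaimable disk cache.

def parse_meminfo(text):
    """Parse /proc/meminfo-style lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        info[key.strip()] = int(rest.strip().split()[0])  # value in kB
    return info

# Illustrative snapshot (on a real system, read the file /proc/meminfo).
sample = """MemTotal:        8192000 kB
MemFree:          409600 kB
Buffers:          491520 kB
Cached:          3276800 kB"""

mem = parse_meminfo(sample)
naive_pct_free = mem["MemFree"] / mem["MemTotal"] * 100
real_free_kb = mem["MemFree"] + mem["Buffers"] + mem["Cached"]
real_pct_free = real_free_kb / mem["MemTotal"] * 100
print(round(naive_pct_free, 1))  # 5.0  -> looks nearly out of memory
print(round(real_pct_free, 1))   # 51.0 -> about half the RAM is really available
```

The naive figure suggests the box is almost out of memory, while counting reclaimable buffers and cache shows roughly half of it is actually available.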

It is possible in Sightline EDM to accurately monitor Linux memory usage and generate alerts when the amount of real memory gets too low, as opposed to when the default Mem % Free metric only appears to be too low.

Currently, this needs to be done using an expression, which lets you build and define your own metrics from existing ones. We will create two expressions in order to monitor real Linux memory usage.

  1. Mem Real Free GByte: We will create an expression called “Mem Real Free GByte”. It adds the following three metrics together: “Mem Buffers GByte” + “Mem Cached GByte” + “Mem Free GByte“. The result is a metric that counts buffer and cache memory as free, removing them from the memory calculation since Linux can reclaim them at any time.
  2. Mem Real pct Free: We will then create an expression called “Mem Real pct Free.” This metric provides a percentage value which can be used to create accurate Linux memory usage alerts across systems, because a percentage is comparable whether the system has 4 GB or 40 GB. It is calculated as (“Mem Real Free GByte” / “Mem Total GByte“) * 100: take the free GBytes, divide by the total GBytes the system has, and multiply the result by 100 to get a percentage. For example, if a 16 GB system has a “Mem Real Free GByte” value of 10 GB, the calculation would be (10/16) * 100, which equals 62.5%.
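The two expressions above can be sketched in plain code; the function names and sample metric values are illustrative, not EDM syntax:

```python
def mem_real_free_gbyte(buffers_gb, cached_gb, free_gb):
    """Compute "Mem Real Free GByte": free memory plus reclaimable buffers and cache."""
    return buffers_gb + cached_gb + free_gb

def mem_real_pct_free(real_free_gb, total_gb):
    """Compute "Mem Real pct Free": percentage of total memory that is really free."""
    return (real_free_gb / total_gb) * 100

# The article's worked example: a 16 GB system with 10 GB really free.
real_free = mem_real_free_gbyte(buffers_gb=1.5, cached_gb=8.0, free_gb=0.5)
print(real_free)                          # 10.0
print(mem_real_pct_free(real_free, 16))   # 62.5
```

An alert on “Mem Real pct Free &lt; 10”, say, would then fire only when reclaimable memory is genuinely running out.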

By using these expressions, it is possible to create meaningful alerts based on real memory instead of the default Mem % Free across a wide range of Unix systems.

The screenshot below shows the default Linux memory metric, Mem % Free, in the lower blue line hovering around 1% free, in comparison with the expression created for Mem Real pct Free, which shows the upper orange line around 36% free. Although the blue line appears to indicate that the Linux system is out of memory, that memory is actually being used for disk caching, whereas the orange line shows real memory around 36% free, which is a much better metric for creating performance alerts.


The difference can also be seen at the end of the graph, when an application begins using real memory, causing the orange line to dip down to 5%. The blue line does not reflect this change, however, because the system simply decreases the amount of memory used for disk caching and increases the amount available to applications, and the two changes effectively cancel each other out. In this way, it is possible to set up alerts that accurately monitor Linux memory usage in Sightline EDM’s IT infrastructure monitoring system.

The Industrial Internet of Things (IIoT) is changing the landscape of the U.S. manufacturing industry.  Companies that understand the patterns and trends and position themselves to prepare for the impending advances will most certainly gain a competitive edge in the global marketplace. 

Companies no longer have the luxury of being anything but data-driven. Data used to be something to simply maintain and manage, but now it’s a valuable asset that companies use to gain a competitive edge. With change happening so rapidly, how are manufacturers preparing to take advantage of the massive amounts of data that are available and, more importantly, how are they using that data to really take advantage of the power that IIoT delivers?

When asked how their companies are preparing for IIoT, many manufacturing leaders think of it as something far off in the distance and don’t really grasp the full impact that is coming. Many see it as a fad, or something that may only affect the way they handle day-to-day operations in the long run. As we peel back the layers of IIoT, however, we see a strong potential for a shift that will change the entire manner in which manufacturing companies operate, similar to what the industry saw when it first implemented automation and began using IT and other electronics. As a result, manufacturing leaders are seeking to develop formal and informal IIoT strategies that will position their companies to take advantage of new opportunities to streamline efficiencies, reduce downtime and stimulate profitability sooner rather than later.

Where We’ve Been and Where We’re Going

If one reviews the history of manufacturing, there are four distinct manufacturing industrial revolutions spanning from the initial mechanical production facilities to mass production to use of electronics and IT to IIoT and systems integration. The fourth industrial revolution, or Industry 4.0, will allow manufacturers to leverage the Industrial Internet of Things (IIoT) to collect vast amounts of sensor and network data, apply advanced analytics and further utilize new technology such as robots and 3-D printing to improve quality and output.

While some progressive manufacturers see where the industry is headed, many are only at the starting gate of the next wave of innovation fueled by IIoT applications and solutions. According to a recent study by Smart Industry, many manufacturers are focused on learning and benchmarking to formulate winning strategies. Many will be using the findings to reduce operational costs, optimize asset utilization, improve worker productivity, enhance workplace safety, enhance the customer experience and create new business models and revenue streams.

IIoT Will Produce More “A-Ha” Moments for Manufacturers

The best way for the manufacturing industry to capitalize on IIoT is by gathering more data from sensors and systems and using it to make business-driven decisions. While that is no easy task, by adding advanced analytics solutions now, manufacturers will most certainly have more “a-ha” moments as they produce insights previously clouded by uncertainty or unattainable due to limited resources and time.

The advanced platforms will enable manufacturers to gather the right data, at the right time which can be leveraged to make well-informed, and most importantly, proactive business decisions. These tools will provide more insight and will enable manufacturers to develop a major engine to identify and create new products, services and profit centers all while simultaneously improving production efficiency, reducing costs, preventing downtime, ensuring quality and enhancing their overall ability to strategically plan business operations.

Manufacturers Aren’t Really Sure Where to Begin

Data is being collected by sensors, PLCs and more; to the point, in fact, that some manufacturers are overwhelmed with data and aren’t really sure where to start. With so much data readily available, many manufacturers are wondering how to implement IIoT technologies in a thoughtful manner. Many are taking a very close look at the data they want to collect and how they will use the information to streamline efficiencies, realize opportunities and produce a sizeable return on investment (ROI).

Manufacturers are concerned by a host of obstacles for adopting IIoT in their companies, with the most notable being cybersecurity. Cybersecurity concerns, lack of overall IIoT knowledge internally, legacy products that do not have obvious IIoT connectivity and lack of senior management support and commitment, just to name a few, are among the most pressing issues that keep manufacturers up at night. In order to wrap their arms around these challenges, proactive manufacturers will need to gain a better understanding of how to leverage advanced analytics. The traditional manufacturing business model is quite reactive and relies on management to be the primary driver of change, production that is driven by a sales forecast, and system improvements if, and only if, it is perceived to be “broken.”  As the manufacturing landscape advances due to IIoT, manufacturers must begin to take a more holistic view of the entire company to better understand how one part of the operation affects other parts in order to take advantage of enormous opportunities for improvement and to proactively gain the competitive edge.

As manufacturers begin to take a more holistic approach, many are working with internal teams, suppliers and consultants to decide which data is most valuable to collect, what systems require enhancement, how the data will help them realize opportunities, and how to gauge the full impact of IIoT changes within and outside the company. The two most critical issues are data management and cybersecurity; these will be critical challenges for a company to address, as they affect future competitiveness.

IIoT is most definitely changing the landscape of the manufacturing industry as we know it. Manufacturers that read the trends, understand data patterns and begin to lay the foundation now to proactively take advantage of the technological advances will be poised to remain viable in the global marketplace throughout the decades to come.

Sightline Systems Included in Prestigious ITOA Predictions List

FAIRFAX, VA – March 6, 2016 – We are proud to announce that Sightline Systems’ own Brandon Witte is mentioned in the ITOA Landscape’s 2016 ITOA Predictions list. The list features quotes from the ITOA50 (the Who’s Who of IT Operations Analytics leaders) and ITOA analysts, providing insightful predictions about just how IT Operations Analytics (ITOA) will affect organizations in 2016.

According to Mr. Witte, “This year, we are going to see predictive analytics take center stage for infrastructure monitoring. Rather than being in a reactionary mode, IT teams will now be able to access more data-points and real-time analytics than ever before to help them understand what is happening now and to accurately forecast what will happen in their businesses over time. Predictive analytics will help businesses improve efficiency and up-time which will stimulate long-term growth. Data centers will now be able to closely monitor everything from primary servers to PDUs and HVAC systems to cameras and electronic door locks to identify issues before they become too costly.”

To read other quotes from the 2016 ITOA Predictions List, visit

Compound Expressions Improve Your IT Monitoring

While today’s monitoring solutions help IT teams view their IT infrastructures with little to no tweaking, using an IT monitoring solution that allows for compound expressions can be very useful in improving reliability.

So what’s a compound expression? A compound expression is a series of simple expressions joined by arithmetic operators. It’s a way of combining a series of thresholds that, when violated all at the same time, could indicate a bigger problem. Once these compound expressions are created, teams can apply them to historical data to see when problems began, or combine them with their IT monitoring solution’s alerting capabilities to notify IT teams when certain conditions are met.


  1. A server’s temperature increases, the additional heat affects the CPU, and website response time degrades as a result. A compound expression could be useful here to monitor CPU, temperature and web server response time together. By combining an alert with this expression, IT admins can be notified in advance, before the problem becomes too serious: Temperature > 80 degrees AND CPU Time Busy > 90% AND Response Time > 5 seconds
  2. An engineer or IT admin notices problems with a particular server, and wants to pinpoint the time of day when the server performs worst, to see whether a particular event may be the cause. A compound expression monitoring CPU, memory and disk usage would help: CPU Time Busy > 90% AND Memory Usage > 75% AND Disk Busy > 75%
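The second expression above could be evaluated against historical samples roughly like this; the metric names mirror the example, and the data points are made up:

```python
# Hypothetical sketch: applying a compound expression to historical samples.

def compound_violation(sample):
    """Example 2's expression: CPU > 90% AND memory > 75% AND disk > 75%."""
    return (sample["cpu_busy_pct"] > 90
            and sample["mem_used_pct"] > 75
            and sample["disk_busy_pct"] > 75)

history = [
    {"time": "09:00", "cpu_busy_pct": 85, "mem_used_pct": 70, "disk_busy_pct": 40},
    {"time": "14:00", "cpu_busy_pct": 95, "mem_used_pct": 80, "disk_busy_pct": 90},
    {"time": "18:00", "cpu_busy_pct": 92, "mem_used_pct": 60, "disk_busy_pct": 85},
]

violations = [s["time"] for s in history if compound_violation(s)]
print(violations)  # ['14:00'] -- only when all three thresholds trip at once
```

Note that 18:00 does not fire even though CPU and disk are hot: the point of a compound expression is that a single high metric on its own is not a violation.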

Using an IT monitoring and analysis solution such as Sightline EDM, this expression can be used to perform root cause analysis. Once the expression has been created and data has been collected, the data can be plotted over a time period to find any violations.  Correlation can then be used with root cause analysis to find related events (across different systems). If you’re already an EDM user, I’ve even written a short guide on Creating Compound Expression Alerts in Sightline EDM.

Compound expressions, even simple ones like those above, can drastically improve your IT monitoring solution. It’s up to you to be creative and build expressions that monitor your IT environment and solve problems when, or even before, they happen.

Increase Productivity by Classifying Monitored Alerts

Few people think about the types of alerts they receive. Yet, having the right types of alerts configured can result in significant cost-savings, improved infrastructure performance and increased end-user productivity. By planning and categorizing the right alerts for your company’s infrastructure operations, you can be confident that your IT operations are running smoothly and efficiently.

Alerts can be categorized into three classifications:

Performance Alerts – These are simple alerts used to make sure day-to-day operations are running smoothly. They cover things that may need immediate attention, such as a sudden spike in CPU utilization that’s giving your end-users a bad experience, or a large memory increase indicating that something needs to be investigated in the very near future.

Capacity Alerts – Classic alerts that warn when storage is full, CPUs are at full utilization, RAM is exhausted, or similar limits are reached. Capacity alerts have evolved to include triggers that fire before capacity thresholds are exceeded. These triggers alert IT teams when things should be fixed or fine-tuned in order to reduce operating costs or increase productivity.

Dynamic Alerts – A modern, smarter approach to notification is the dynamic alert, which can shift its threshold parameters based on day-to-day or week-to-week usage patterns. Say that on a Monday morning, CPU utilization across your server farm sits at 80% as workers arrive after the weekend, ready to tackle their assignments. A dynamic alert would be smart enough to capture typical Monday CPU utilization and classify that level as “normal”. Yet it might still be triggered if CPU use on a Monday hits 99% or drops to 30%, a clear sign that something is wrong. By analyzing trends by time of day over longer periods, we can create intelligent alerts and catch issues that would otherwise go unnoticed.
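The dynamic alert idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not how any particular product computes its baselines: it derives a “normal” band for a time slot from that slot’s own history (mean plus or minus a few standard deviations) instead of using a fixed threshold, and the sample readings are made up:

```python
# Hypothetical dynamic alert: the threshold comes from history for the
# same time slot, so a "busy Monday" reading is not flagged as abnormal.
from statistics import mean, stdev

def dynamic_alert(history, current, band=3.0):
    """Flag `current` if it falls outside mean +/- band * stdev of
    `history`, the past readings for this weekday/hour slot."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(current - mu) > band * sigma

# Typical Monday-9am CPU readings hover around 80% (made-up data).
monday_9am_history = [78, 82, 80, 79, 81, 80]

print(dynamic_alert(monday_9am_history, 80))  # normal Monday load: False
print(dynamic_alert(monday_9am_history, 99))  # spike well above normal: True
print(dynamic_alert(monday_9am_history, 30))  # suspicious drop: True
```

The same 80% reading that would trip a naive fixed 75% threshold is treated as normal here, while both the spike and the drop are flagged, matching the Monday-morning scenario described above.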

Now that you know the three types of alerts, review your current alerts and see where switching an alert to a different type might reduce the number of alerts you receive while improving the fidelity of those that need immediate attention. Good luck.

Meet Sightline Assure

As a monitoring and analytics solution provider, we’ve spent 12 years answering the needs of current and future customers. While our EDM product has consistently delivered a high level of performance for IT teams from banks to telecommunications to airlines, it is a powerful enterprise-grade solution that some businesses don’t need yet. Still, we believe that every business, regardless of size, needs an IT monitoring solution that helps keep its systems running.

So we decided to simplify.

Sightline Assure is a new product that distills the power of EDM into a simplified, easy-to-read solution for companies that need to see and understand their entire server stack essentials, from hardware to operating systems (Windows, Linux and VMware) to virtual machines.

Accuracy was essential. Assure is built on the same EDM platform technology to deliver high-fidelity data monitoring to the end user. With 98% of EDM customers choosing to stick with EDM over the past 10 years, using its platform as the foundation not only made Assure easier to develop, it also lets us update Assure with better technologies as they become available.

Portability was essential. While EDM is web-based, Assure needed an interface that’s not only accessible from a laptop but also readable from a smartphone or tablet. We tested Assure on a wide variety of devices, and we’re happy to report that it’s up to the task.

Simplicity was essential. Assure needed to boil down the 700+ metrics it receives into knowledge and notification – it needed to convey immediate context to any user. So Assure features only three easy-to-understand colors: grey for good, yellow for warning, and red for more immediate concerns. When end users drill deeper into Assure, they learn more about issues: when they began and how long they’ve lasted. It’s simple enough to learn in just a few minutes and doesn’t require a long-winded manual or long hours of training.

Pricing was essential. With Assure, we wanted to create a solution for any company looking to increase its IT systems’ operational stability, so we priced Assure at a level that even a very small company can easily afford.

We’d like to thank the Sightline Assure team for bringing their expertise to a solution for an industry that can significantly improve its operations as it upgrades to Industry 4.0.


Meet the Sightline Assure Team

So when will we release Sightline Assure? Today. Take a moment to look at the Assure product page to learn more, watch a video about how it works, and feel free to reach out to ask about purchasing Assure for your company.

East of Data Center Alley: Our Trip to Data Center World


A good deal of the Internet traffic that flows to government sites and commercial sites like Amazon, LinkedIn and Facebook goes through “Data Center Alley,” which is just minutes to the west of our Fairfax, VA-based headquarters.

Given that we offer enterprise-grade IT monitoring and analysis solutions that can work with data centers, we ventured to Data Center World a few weeks ago at National Harbor, MD to investigate how we could help.

We were welcomed into the data center industry as attendees were open to our questions and curious about what Sightline Systems could do.

While we talked about how we monitor and analyze the infrastructure that makes IT work inside businesses, other exhibitors showed us cooling, wiring, racks and power products that help fuel the IT infrastructures that we support.

Sightline Systems CEO Brandon Witte was also on hand to talk to business leaders, tell them about our solutions, and hand out a $250 Amazon gift card to our raffle winner, Rahman Khaalis.

Attending the show not only helped us understand the work and planning that goes into data center development and management, it also gave us a new appreciation for the massive windowless buildings we see on our daily commute.