Augmented Analytics with SAP Analytics Cloud

Augmented Analytics

In 2017, Gartner coined the term ‘augmented analytics’ and claimed it would be the future of data analytics. They predicted it would be a dominant driver of new purchases of analytics and business intelligence as well as data science and machine learning platforms, and of embedded analytics.

Here is the why and how.

Most organizations depend on data to back up their decision-making and strategy. They collect data on virtually every process and event; analyzing and effectively managing this breadth of data is challenging, yet essential for mining it for business insights.

Traditional business intelligence tools have given way to a new generation of business intelligence tools – Augmented Analytics technology.

Augmented Analytics is an approach to data analytics that employs machine learning (ML) and natural language processing (NLP) to automate and improve data access and data quality, uncover hidden patterns and correlations in data, pinpoint what’s driving results, predict future outcomes, and suggest actions to maximize desirable outcomes and minimize undesirable ones.

Augmented Analytics is designed to conduct analyses and generate business insights automatically, with little to no supervision, and can be used without the assistance of a business analyst or data scientist. However, the focus of Augmented Analytics remains on its assistive role: the technology does not replace humans but supports them.

Evolution of Analytics

Business Intelligence (BI) and analytics have evolved with the increasing demand for data-driven decision-making, moving from traditional static reporting to self-service business intelligence and analytics.

Despite the advances in self-service analytics with agile discovery, many businesses demand assistance to uncover insights in data.

The next generation of BI and analytics products is augmented with artificial intelligence (AI): ML automates complex analytics processes, while NLP makes it easier for users without knowledge of data science or query languages to obtain insights.


Augmented analytics offers starting-point suggestions and guidance to users. It also empowers businesses to leverage more of their data to make better decisions, compared to traditional and self-service business intelligence.

SAP Analytics Cloud

SAP Analytics Cloud (SAC) is an analytical solution that features all the analytics functionalities like business intelligence, augmented analytics, predictive analytics, enterprise planning, and application building in one intuitive user interface. It is empowered with ML and built-in AI that helps discover in-depth insights, simplify access to critical information and enable adequate decision-making.


Augmented SAP Analytics Cloud

The augmented analytics capabilities offered by SAP Analytics Cloud empower business intelligence to reap the benefits of AI and ML.

SAP Analytics Cloud enables users to interact with the system in natural language to gather automatic insights, while Predictive Scenarios offer an accessible entry into predictive analytics, using past data to foresee the future.

Let’s look at the analytics features and capabilities offered by SAP Analytics Cloud.


Search to Insight – Query search in Natural Language

The Search to Insight feature enables query search in natural language using conversational AI and NLP. No knowledge of query languages like SQL, R, or Python is required. Asking questions, just like in a search engine or digital personal assistant, fetches insightful answers represented as visualizations or numeric values, tailored to the question type.

Search to Insight provides auto-complete suggestions to match words or phrases in questions for measures and dimensions in the data and includes auto spell-check.
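
To make this concrete, here is a minimal Python sketch of how a natural-language question could be mapped to measures and dimensions in a data model. The token-matching rules and model names are hypothetical illustrations, not SAC’s actual implementation:

    import re

    # Hypothetical mapping of question tokens to model measures/dimensions
    measures = {"revenue": "Total Revenue", "quantity": "Quantity Sold"}
    dimensions = {"region": "Region", "product": "Product", "year": "Year"}

    def parse_question(question):
        tokens = re.findall(r"\w+", question.lower())
        measure = next((measures[t] for t in tokens if t in measures), None)
        dims = [dimensions[t] for t in tokens if t in dimensions]
        return {"measure": measure, "group_by": dims}

    print(parse_question("What is the revenue by region and product?"))
    # {'measure': 'Total Revenue', 'group_by': ['Region', 'Product']}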


Smart Insights – Instant explanations

The Smart Insights feature facilitates digging deeper into the data points. It analyzes the underlying dataset and runs various statistical algorithms to offer insights based on the current user context.

It helps users understand the top contributors to a specific data point without having to manually pivot or slice and dice the data. When a data point is selected, ML calculations run on information of the same nature as the selected data point. For example, if the selected data point is ‘Total Revenue’, the top contributors are based on ‘Total Revenue’. It analyzes the dimensions in the selected data and looks for members within those dimensions that influence the selected value.
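
The ‘top contributors’ idea can be approximated with a simple group-and-rank over each dimension of the selected measure. A minimal pandas sketch on hypothetical data (SAC’s actual statistical algorithms are more elaborate):

    import pandas as pd

    # Hypothetical dataset behind a 'Total Revenue' data point
    df = pd.DataFrame({
        "Region":  ["APAC", "APAC", "EMEA", "EMEA", "NA"],
        "Product": ["P1",   "P2",   "P1",   "P2",   "P1"],
        "Revenue": [120,    80,     200,    50,     150],
    })

    # For each dimension, rank members by their share of total revenue
    total = df["Revenue"].sum()
    for dim in ["Region", "Product"]:
        contrib = (df.groupby(dim)["Revenue"].sum() / total).sort_values(ascending=False)
        print(f"Top contributors by {dim}:\n{contrib}\n")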

Smart Discovery – Easily reveal insights

The Smart Discovery feature identifies hidden patterns and statistically relevant relationships in the data to discover how business factors influence performance. It helps in understanding the business drivers behind core KPIs.

Based on the selection of measure or dimension, smart discovery automatically generates interactive story pages as below –

Overview: It explains the data distribution, summary of trends, and the detected patterns for the target dimension or measure.

Key Influencers: It explains the influence of dimensions on the value of the target measure in the context of the selected model, using classification and regression techniques. Classification identifies dimensions that segregate results into distinct groups, while regression identifies relationships between data points to predict future results.

Unexpected Values: It displays the details about outliers, where the actual values differ greatly from what the predictive model would expect. If an actual value diverges from the regression line it is categorized as unexpected.

Simulation: The simulation facilitates ‘what-if’ analysis; users can change the values of measures and dimensions to see whether the target measure is predicted to change positively, negatively, or not at all (see the sketch below).
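
A what-if simulation is essentially a query against a fitted model: change an input and read off the predicted change in the target. A minimal sketch using scikit-learn linear regression on hypothetical data:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical training data: [marketing_spend, discount_pct] -> revenue
    X = np.array([[10, 5], [20, 5], [20, 10], [30, 10], [40, 15]])
    y = np.array([100, 180, 160, 240, 300])

    model = LinearRegression().fit(X, y)

    baseline = model.predict([[20, 5]])[0]
    whatif   = model.predict([[25, 5]])[0]   # simulate +5 marketing spend
    print(f"Predicted change in revenue: {whatif - baseline:+.1f}")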

Smart Predict – Answers the toughest questions

The Smart Predict feature predicts the likelihood of different outcomes based on historical data, using techniques such as data mining, statistics, machine learning, and artificial intelligence.

Smart Predict, also referred to as predictive forecasting, considers different values, trends, cycles, and/or fluctuations in the data to make predictions that can be leveraged to aid business planning processes.

Smart Predict provides three different predictive scenario options:

Classification: It can be used to generate predictions for a binary event. For example, whether individual customers would be likely to buy the target product or not.

Time Series: It can be used to forecast values over a set period. For example, forecasting the sales of product by month or week, using historical data.

Regression: It can be used to predict values and explore key values behind them. For example, predicting the price of an imported product based on projected duties or shipping charges.
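
To give a flavor of the Time Series scenario, here is a minimal trend-based forecasting sketch on hypothetical monthly sales figures; Smart Predict’s actual models also account for cycles and fluctuations:

    import numpy as np

    # Hypothetical monthly sales history
    sales = np.array([100, 104, 110, 115, 118, 125, 131, 134, 140, 147])

    # Fit a linear trend and forecast the next 3 months
    months = np.arange(len(sales))
    slope, intercept = np.polyfit(months, sales, 1)
    future = np.arange(len(sales), len(sales) + 3)
    forecast = slope * future + intercept
    print(forecast.round(1))  # ~[150.7 155.9 161.0]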

In the modern world of business intelligence, SAP Analytics Cloud’s ML technology augments the analytics process from insight to action, and helps avoid agenda-driven, biased decision-making by revealing the accurate patterns that drive the business.

About the Author –

MF Kashif

Kashif is an SAP BusinessObjects consultant and a business analytics enthusiast. He believes that the “ultimate goal is not about winning, but to reach within the depth of capabilities and to compete against yourself to be better than what you are today.”

Why is AIOps an Industrial Benchmark for Organizations to Scale in this Economy?

Business Environment Overview

In this pandemic economy, the topmost priorities for most companies are to ensure that operational costs and business processes are optimized and streamlined. Organizations must be more proactive than ever and identify gaps that need to be acted upon as early as possible.

The industry has been striving for efficiency and effectiveness in its operations day in and day out. As a reliability check to ensure operational standards, many organizations consider the following levers:

  1. High Application Availability & Reliability
  2. Optimized Performance Tuning & Monitoring
  3. Operational gains & Cost Optimization
  4. Generation of Actionable Insights for Efficiency
  5. Workforce Productivity Improvement

Organizations that have prioritized the above levers in their daily operations require dedicated teams to analyze different silos and implement solutions that deliver results. Running projects of this complexity affects the scalability and monitoring of these systems. This is where AIOps platforms come in, providing customized solutions for the growing needs of organizations of any size.

Deep Dive into AIOps

Artificial Intelligence for IT Operations (AIOps) is a platform that provides multiple layers of functionality leveraging machine learning and analytics. Gartner defines AIOps as a combination of big data and machine learning capabilities that empower IT functions, enabling scalability and robustness across the entire ecosystem.

These systems transform the existing landscape to analyze and correlate historical and real-time data to provide actionable intelligence in an automated fashion.

AIOps platforms are designed to handle large volumes of data. The tools offer various data collection methods, integration of multiple data sources, and generate visual analytical intelligence. These tools are centralized and flexible across directly and indirectly coupled IT operations for data insights.

The platform aims to bring an organization’s infrastructure monitoring, application performance monitoring, and IT systems management process under a single roof to enable big data analytics that give correlation and causality insights across all domains. These functionalities open different avenues for system engineers to proactively determine how to optimize application performance, quickly find the potential root causes, and design preventive steps to avoid issues from ever happening.

AIOps has transformed the culture of IT war rooms from reactive firefighting to proactive prevention.

Industrial Inclination to Transformation

The pandemic economy has challenged the traditional way companies choose their transformation strategies. Machine learning powered automation for creating an autonomous IT environment is no longer a luxury. The usage of mathematical and logical algorithms to derive solutions and forecasts for issues has a direct correlation with the overall customer experience. In this pandemic economy, customer attrition has a serious impact on annual recurring revenue. Hence, organizations must reposition their strategies to be more customer-centric in everything they do. Providing customers with best-in-class service, coupled with continuous availability and enhanced reliability, has become an industry standard.

As reliability and scalability are crucial factors for any company’s growth, cloud technologies have seen a growing demand. This shift of demand for cloud premises for core businesses has made AIOps platforms more accessible and easier to integrate. With the handshake between analytics and automation, AIOps has become a transformative technology investment that any organization can make.

As organizations scale in size, so do the workforce and the complexity of their processes. The increase in size often burdens organizations with time-pressed teams, high delivery pressure, and reactive housekeeping strategies. An organization must be ready to meet present and future demands with systems and processes that scale seamlessly. This is why AIOps platforms serve as a multilayered functional solution that integrates with existing systems to manage and automate tasks with efficiency and effectiveness. When scaling results in process complexity, AIOps platforms convert that complexity into effort savings and productivity enhancements.

Across the industry, many organizations have implemented AIOps platforms as transformative solutions to help them embrace their present and future demand. Various studies have been conducted by different research groups that have quantified the effort savings and productivity improvements.

The AIOps Organizational Vision

As the digital transformation race has been in full throttle during the pandemic, AIOps platforms have also evolved. The industry had earlier ventured into traditional event correlation and operations analytics tools that helped organizations reduce incidents and the overall MTTR. AIOps is relatively new in the market, as Gartner coined the phrase only in 2016. Today, AIOps has attracted attention from multiple industries evaluating its feasibility of implementation and the return on investment from the overall transformation. Google Trends shows a significant increase in user searches for AIOps over the last couple of years.


To take a well-informed decision on including AIOps in the organization’s vision for growth, we must analyze the following:

  1. Understanding the feasibility and concerns for its future adoption
  2. Classification of business processes and use cases for AIOps intervention
  3. Quantification of operational gains from incident management using the functional AIOps tools

AIOps is truly envisioned to provide tools that transform system engineers into reliability engineers, driving systems that trend towards zero incidents.

Because above all, Zero is the New Normal.

About the Author –

Ashish Joseph

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music and food.

Customize Business Outcomes with ZIF™

Zero Incident Framework™ (ZIF) is the only AIOps platform that is powered with true machine learning algorithms with the capability to self-learn and adapt to today’s modern IT infrastructure.

ZIF’s goal has always been to deliver the right business outcomes for the stakeholders. Return on investment can be measured based on the outcomes the platform has delivered. Users get to choose what business outcomes are expected from the platform and the respective features are deployed in the enterprise to deliver the chosen outcome.

Single Pane of Action – Unified View across IT Enterprise

The biggest challenge IT Operations teams have been trying to tackle over the years is getting a bird’s eye view of what is happening across their IT landscape. The more complex the enterprise becomes, the harder it is for the IT Operations team to understand what is happening across it. ZIF solves this issue with ease.


The capability to ingest data from any source monitoring or ITSM tool has helped IT organizations gain a real-time view of what is happening across their landscape. ZIF’s unified view saves IT engineers enormous time that would otherwise be spent traversing multiple monitoring tools.

ZIF can integrate with 100+ tools to ingest (static/dynamic) data in real-time via ZIF Universal Connector. This is a low code component of ZIF and dataflows within the connector can also be templatized for reuse. 


Intelligence – Reduction in MTTR – Correlation of Alerts/Events

Approximately 80% of IT engineers’ time is lost in identifying the problem statement for an incident, which has been costing enterprises billions of dollars. ZIF, with the help of Artificial Intelligence, can reduce the mean time to identify the probable root cause of an incident to seconds. The high-performance correlation engine that runs under the hood of the platform processes millions of patterns that the platform has learned from historical data, correlates the sequences happening in real-time, and creates cases. These cases are then assigned to IT engineers along with the probable root cause for them to fix the issue. This increases the productivity of IT engineers, resulting in better revenue for organizations.


Intelligence – Predictive Analytics

AIOps platforms are incomplete without the Predictive Analytics capability. ZIF has adopted unsupervised machine learning algorithms to perform predictive analytics on the utilization data that is ingested into the platform. These algorithms can learn trends and understand the symptoms of an incident by analyzing tons of data that the platform had consumed over a period. Based on the analysis, the platform generates opportunity cards that help IT engineers take proactive measures on the forecasted incident. These opportunity cards are generated a minimum of 60 minutes in advance which gives the engineers a lead time to fix an issue before it strikes the landscape.
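
To illustrate the idea of an opportunity card, the sketch below projects a utilization trend against a threshold and raises a card only when there is at least a 60-minute lead time. The data, threshold, and linear trend are illustrative assumptions; ZIF’s unsupervised algorithms are far more sophisticated:

    import numpy as np

    # Hypothetical CPU utilization samples, one per minute
    cpu = np.array([52.0, 52.4, 52.9, 53.1, 53.6, 54.0, 54.3, 54.8, 55.1, 55.5])

    # Fit a linear trend and project forward to the alert threshold
    t = np.arange(len(cpu))
    slope, intercept = np.polyfit(t, cpu, 1)
    threshold = 90.0

    if slope > 0:
        minutes_to_breach = (threshold - cpu[-1]) / slope
        # Raise an opportunity card only if there is enough lead time to act
        if minutes_to_breach >= 60:
            print(f"Opportunity card: {threshold}% CPU forecast in ~{minutes_to_breach:.0f} min")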

Visibility – Auto-Discovery of IT Assets & Applications

ZIF’s agentless discovery is a seamless discovery component that helps identify all the IP assets available in an enterprise. Beyond discovering assets, the component also plots a physical topology and logical map for better consumption by IT engineers. This gives a very detailed view of every asset in the IT landscape. The logical topology gives in-depth insights into workload metrics that can be utilized for deep analytics.


Visibility – Cloud Monitoring


In today’s digital transformation journey, cloud is inevitable. To have better control over cloud-orchestrated applications, enterprises must depend on the monitoring tools provided by cloud providers. The lack of insights often leads to the unavailability of applications for end-users. More than monitoring, insights that help enterprises take better-informed decisions are the need of the hour.

ZIF’s cloud monitoring components can monitor any cloud instance. Data generated by the providers’ native monitoring tools is ingested into ZIF for further analysis. ZIF can connect to Azure, AWS & Google Cloud to derive data-driven insights.

Optimization – Remediation – Autonomous IT Operations

ZIF does not stop at just providing insights. The platform deploys the right automation bot to remediate the incident.

ZIF has 250+ automation bots that can be deployed to fast-track the resolution process by a minimum of 90%. Faster resolutions result in increased uptime of applications and better revenue for the enterprise.

Sample ZIF bots:

  • Service Restart / VM Restart
  • Disk Space Clean-up
  • IIS Monitoring App Pool
  • Dynamic Resource Allocation
  • Process Monitoring & Remediation
  • DL & Security Group Management
  • Windows Event Log Monitoring
  • Automated phishing control based on threat score
  • Service request automation like password reset, DL mapping, etc.

For more information on ZIF, please visit www.zif.ai

About the Author –

Anoop Aravindakshan

An evangelist of Zero Incident Framework™, Anoop has been part of the product engineering team for long and has recently forayed into product marketing. He has over 14 years of experience in Information Technology across various verticals, including Banking, Healthcare, Aerospace, Manufacturing, CRM, Gaming, and Mobile.

Addressing Web Application Performance Issues

With the use of hybrid technologies and distributed components, applications are becoming increasingly complex. Irrespective of this complexity, it is important to ensure end-users get an excellent experience using the application. Hence, it is essential to monitor the performance of an application to provide greater satisfaction to the end-user.

External factors

When the web applications face performance issues, here are some questions you need to ask:

  • Does the application always face performance issues or just during a specific period?
  • Whether a particular user or group of users face the issue or is the problem omnipresent for all the users?
  • Are you treating your production environment as a real production environment, or have you loaded it with applications, services, and background processes running without proper consideration?
  • Was there any recent release to any of the application stack like Web, Middle Tier, API, DB, etc., and how was the performance before this release?
  • Have there been any hardware or software upgrades recently?

Action items on the ground

Answering the above set of questions should bring you closer to the root cause. If not, given below are some steps you can take to troubleshoot the performance issue:

  • Look at the number of incoming requests, is the application facing unusual load?
  • Identify how many requests are taking longer than usual, say more than 5000 milliseconds, to serve a request or a web page.
  • Is the load getting generated by a specific or group of users – is someone trying to create intentional load?
  • Look at the web pages/methods/functions in the source code which are taking more time. Check the logs of the web server, this can be identified provided the application does that level of custom logging.
  • Identify whether any 3rd party links or APIs used in the application are causing slowness.
  • Check whether the database queries are taking more time.
  • Identify whether the problem is related to a certain browser.
  • Check if the server side or client side is facing any uncaught exceptions which are impacting the performance.
  • Check the performance of the CPU, Memory, and Disk of the server(s) in which the application is hosted.
  • Check the sibling processes which are consuming more Memory/CPU/Disk in all servers and take appropriate action depending on whether those background processes need to be in that server or can be moved somewhere or can be removed totally.
  • Look at the web server performance to fine tune the Cache, Session time out, Pool size, and Queue-length.
  • Check for deadlock, buffer hit ratio, IO Busy, etc. to fine tune the performance.
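
Several of these checks can be scripted for repeatability. For instance, here is a minimal sketch that scans a web server access log for slow requests; the log format, file name, and 5000 ms threshold are assumptions to be adapted to your environment:

    import re
    from collections import Counter

    SLOW_MS = 5000
    # Assumed log line format: "<ip> <url> <response_time_ms>"
    pattern = re.compile(r"^(\S+) (\S+) (\d+)$")

    slow_by_url = Counter()
    with open("access.log") as log:
        for line in log:
            m = pattern.match(line.strip())
            if m and int(m.group(3)) > SLOW_MS:
                slow_by_url[m.group(2)] += 1

    for url, count in slow_by_url.most_common(10):
        print(f"{count:5d} slow requests  {url}")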

Challenges 

  • Doing all these steps exactly when there is a performance issue may not be practical every time. By the time you collect some of this data, you may lose important data for the remaining items, unless historical data is collected and stored for reference.
  • Even if the data is collected, correlating it to arrive at the exact root cause is not an easy task.
  • You need to be tech-savvy across all layers to know what parameters to collect and how to collect them.

And the list of challenges goes on…

Think of an ideal situation where you have the metrics for all the action items described above right in front of you. Is there such a magic bullet available? Yes: Zero Incident Framework™ Application Performance Monitoring (ZIF APM). It gives you the above details at your fingertips, making troubleshooting a simple task.

ZIF APM has more to offer than a regular APM. The APM Engine has built-in AI features. It monitors the application across all layers, starting from the end-user, web application, web server, API layers, and databases, down to the underlying infrastructure including the OS and its performance factors, irrespective of whether these layers are hosted on cloud, on-premise, or both. It also applies AI to monitoring, mapping, and tracing, and analyzes patterns to provide observability and insights. Given below is a typical representation of a distributed application and its components. The rest of the section covers how ZIF APM provides such deep insights.

ZIF APM

Once the APM Engine is installed and run on portfolio servers, the built-in AI engine does the following automatically:

  1. Monitors the performance of the application (Web) layer, Service Layer, API, and Middle Tier, and maps the insights from User <–> Web <–> API <–> Database for each application – no need to manually link Application 1 in Web Server A with API 1 in Middle Tier B, and so on.
  2. Traces the end-to-end user transaction journey for all transactions with Unique ID.
  3. Monitors the performance of the 3rd party calls (e.g. web service, API calls, etc.), no need to map them.
  4. Monitors the End User Experience through RUM (Real User Monitoring) without any end-user agent.

<A reference screenshot of how APM maps the user transaction journey across different nodes. The screenshot also gives the Method level performance insights>

Why choose ZIF APM? Key Features and Benefits

  1. All-in-One – Provides the complete insight of the underlying Web Server, API server, DB server related infrastructure metrics like CPU, Memory, Disk, and others.
  2. End-user experience (RUM) – Captures performance issues and anomalies faced by end-user at the browser side.
  3. Anomaly detection – Offers deeper insights on the exceptions faced by the application, including the line number in the source code where the issue occurred.
  4. Code-level insights – Gives details about which method and function calls within the source code are taking more time or slowing down the application.
  5. 3rd Party and DB Layer visibility – Provides the details about 3rd party APIs or Database calls and Queries which are delaying the web application response.
  6. AHI – The Application Health Index is a scorecard based on A) End User Experience, B) Application Anomalies, C) Server Performance, and D) Database Performance factors, as applicable in the given environment or application. The weightage and number of components A, B, C, D are variables. For instance, if ‘Web Server Performance’ or ‘Network Performance’ needs to be brought in as a new variable ‘E’, the weightages are adjusted and recalculated against 100% (see the sketch after this list).
  7. Pattern Analysis – Analyzes unusual spikes through pattern matching and alerts are provided.
  8. GTrace – Provides the transaction journey of the user transaction and the layers it is passing through and where the transaction slows down, by capturing the performance of each transaction of all users.
  9. JVM and CLR – Provides the Performance of the underlying operating system, Web server, and run time (JVM, CLR).
  10. LOG Monitoring – Provides deeper insight on the application logs.
  11. Problem isolation – ZIF APM helps in problem isolation by comparing the performance with another user in the same location at the same time.
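
As a back-of-the-envelope illustration of the AHI weighting described above, a weighted, renormalized scorecard could be computed as in the following sketch; the component names, weights, and scores are illustrative, not ZIF’s actual formula:

    # Illustrative AHI computation: weighted average of component scores (0-100).
    # When a component is added or removed, weights are renormalized to 100%.
    def health_index(scores: dict, weights: dict) -> float:
        total_weight = sum(weights[c] for c in scores)
        return sum(scores[c] * weights[c] for c in scores) / total_weight

    weights = {"EndUser": 40, "Anomalies": 20, "Server": 20, "Database": 20}
    scores  = {"EndUser": 85, "Anomalies": 70, "Server": 90, "Database": 95}
    print(f"AHI = {health_index(scores, weights):.1f}")  # weighted score out of 100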

Visit www.zif.ai for more details.

About the Author –

Suresh Kumar Ramasamy

Suresh heads the Monitor component of ZIF at GAVS. He has 20 years of experience in Native Applications, Web, Cloud, and Hybrid platforms from Engineering to Product Management. He has designed & hosted the monitoring solutions. He has been instrumental in conglomerating components to structure the Environment Performance Management suite of ZIF Monitor. Suresh enjoys playing badminton with his children. He is passionate about gardening, especially medicinal plants.

Ensure Service Availability and Reliability with ZIF

To survive in the current climate, most enterprises have already embarked on their digital transformation journeys. This is leading to uncertainty in the way applications and services supporting the applications are being monitored and managed. Inadequate information is leading to downtime in service availability for end-users eventually resulting in unhappy users and revenue loss.

Zero Incident Framework™ has been architected to address the IT Ops issues of today and tomorrow.

Leveraging the power of Artificial Intelligence on telemetry data ingested in real-time, ZIF can provide insights and resolve forecasted issues – resulting in application services being available when end-users need them.

Business Value delivered to customers from ZIF

  • Minimum 40% reduction in capital expenses and a minimum 50% reduction in IT operational cost
  • Faster resolution by 60% (MTTR)
  • Service availability of 99.99%
  • ZIF bots to increase productivity by a minimum of 80%
  • Improved user experience, measured by the User Experience Index (UEI)

ICEBERG STATE IN ITOps

Many IT operations are in an ‘ICEBERG’ state even today. Do not be surprised if your organization is one of them. The issues and incidents that surface at the top are the ones known to the team; the unknown issues remain uncovered.

Therefore, enterprises have started to embrace artificial intelligence to help them identify and track the unknown issues within their complex IT landscapes.

OBSERVABILITY USING ZIF

ZIF, architected and developed on the premise of observability, not only helps with visibility but also enables discovering deeper insights, thus freeing up more time for more strategic initiatives. This becomes critical to the overall success of Site Reliability Engineering (SRE) in enterprises.

Externalizing the internal state of systems, services, and application to the maximum, helps in complete observability.

Monitoring Vs. Observability?


Pillars of Observability – Events | Metrics | Traces

Ensure SERVICE RELIABILITY

“Reliability is defined as the probability that an application, system, or service will perform its intended function adequately for a specified period or will operate in a defined environment without failure.”

ZIF has mastered the art of predicting device, application & service failure, or performance degradation. This unique proposition from ZIF gives IT engineers the edge on service reliability of all applications, systems, or services that they are responsible for. ZIF’s auto-remediation bots can resolve predicted issues to make sure the intended function performs as and when expected by users.

SERVICE AVAILABILITY

Availability is measured as the percentage of time your service or system or application is available.

A small variation in the availability percentage has to be addressed on priority. 99.999% availability allows only 5.26 minutes of downtime a year, whereas 99% availability allows a downtime of 3.65 days a year.
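
These downtime figures follow directly from the availability percentage, as the quick sketch below shows:

    # Downtime per year implied by an availability percentage
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600
    for availability in (99.0, 99.9, 99.99, 99.999):
        downtime_min = (1 - availability / 100) * MINUTES_PER_YEAR
        print(f"{availability}% availability -> {downtime_min:,.2f} minutes/year")
    # 99.0%   -> 5,256.00 minutes/year (~3.65 days)
    # 99.99%  -> 52.56 minutes/year
    # 99.999% -> 5.26 minutes/year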

ZIF helps IT engineers achieve the agreed-upon availability of an application or system by learning its usage from the metrics collected from the environment. Collecting the right metrics helps in getting the right availability. With the help of unsupervised algorithms, patterns are learned that reveal when the application or system is needed the most, enabling prediction of any potential downtime. With above 95% prediction accuracy, ZIF can achieve 99.99% availability for applications and devices, which allows 52.56 minutes of downtime a year.

ZIF’s goal has always been to deliver the right business outcomes for the stakeholders. Users have the privilege to choose what business outcomes are expected from the platform and the respective features are deployed in the enterprise to deliver the chosen outcome.

About the Author

Anoop Aravindakshan

An evangelist of Zero Incident Framework™, Anoop has been part of the product engineering team for long and has recently forayed into product marketing. He has over 14 years of experience in Information Technology across various verticals, including Banking, Healthcare, Aerospace, Manufacturing, CRM, Gaming, and Mobile.

Algorithmic Alert Correlation

Today’s always-on businesses and 24×7 uptime demands have necessitated IT monitoring to go into overdrive. While constant monitoring is a good thing, the downside is that the flood of alerts generated can quickly get overwhelming. Constantly having to deal with thousands of alerts each day causes alert fatigue, and impacts the overall efficiency of the monitoring process.

Hence, chalking out an optimal strategy for alert generation & management becomes critical. Pattern-based thresholding is an important first step, since it tunes thresholds continuously, to adapt to what ‘normal’ is, for the real-time environment. Threshold accuracy eliminates false positives and prevents alerts from getting fired incorrectly. Selective alert suppression during routine IT Ops maintenance activities like backups, patches, or upgrades, is another. While there are many other strategies to keep alert numbers under control, a key process in alert management is the grouping of alerts, known as alert correlation. It groups similar alerts under one actionable incident, thereby reducing the number of alerts to be handled individually.

But, how is alert ‘similarity’ determined? One way to do this is through similarity definitions, in the context of that IT landscape. A definition, for instance, would group together alerts generated from applications on the same host, or connectivity issues from the same data center. This implies that similarity definitions depend on the physical and logical relationships in the environment – in other words – the topology map. Topology mappers detect dependencies between applications, processes, networks, infrastructure, etc., and construct an enterprise blueprint that is used for alert correlation.

But what about related alerts generated by entities that are neither physically nor logically linked? To give a hypothetical example, let’s say application A accesses a server S which is responding slowly, and so A triggers alert A1. This slow communication of A with S eats up host bandwidth, and hence affects another application B in the same host. Due to this, if a third application C from another host calls B, alert A2 is fired by C due to the delayed response from B.  Now, although we see the link between alerts A1 & A2, they are neither physically nor logically related, so how can they be correlated? In reality, such situations could imply thousands of individual alerts that cannot be combined.

Algorithmic Alert Correlation

This is one of the many challenges in IT operations that we have been trying to solve at GAVS. The correlation engine of our AIOps Platform ZIF uses algorithmic alert correlation to find a solution for this problem. We are working on two unsupervised machine learning algorithms that are fundamentally different in their approach – one based on pattern recognition and the other based on spatial clustering. Both algorithms can function with or without a topology map, and work around what is supplied and available. The pattern learning algorithm derives associations based on learnings from historic patterns of alert relationships. The spatial clustering algorithm works on the principle of similarity based on multiple features of alerts, including problem similarity derived by applying Natural Language Processing (NLP), and relationships, among several others. Tuning parameters enable customization of algorithmic behavior to meet specific demands, without requiring modifications to the core algorithms. Time is also another important dimension factored into these algorithms, since the clustering of alerts generated over an extended period of time will not give meaningful results.
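
As a highly simplified illustration of the spatial-clustering approach, the sketch below clusters alerts using text similarity (a stand-in for NLP-derived problem similarity) and time proximity as features. The alert data, feature choices, and DBSCAN parameters are toy assumptions, not ZIF’s actual algorithms:

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import DBSCAN

    alerts = [
        {"id": "A1", "msg": "slow response from server S",           "ts": 100},
        {"id": "A2", "msg": "delayed response calling service B",    "ts": 160},
        {"id": "A3", "msg": "disk usage above 90 percent on host H", "ts": 4000},
    ]

    # Text feature: TF-IDF vectors of the alert messages
    text = TfidfVectorizer().fit_transform([a["msg"] for a in alerts]).toarray()

    # Time feature: timestamps scaled so alerts minutes apart can cluster
    time_feat = np.array([[a["ts"] / 300.0] for a in alerts])

    features = np.hstack([text, time_feat])
    labels = DBSCAN(eps=1.4, min_samples=1).fit_predict(features)  # eps tuned for this toy data
    for a, label in zip(alerts, labels):
        print(a["id"], "-> cluster", label)  # A1 and A2 group together; A3 stands alone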

Traditional alert correlation has not been able to scale up to handle the volume and complexity of alerts generated by the modern-day hybrid and dynamic IT infrastructure. We have reached a point where our ITOps needs have surpassed the limits of human capabilities, and so, supplementing our intelligence with Artificial Intelligence and Machine Learning has now become indispensable.

About the Authors –

Padmapriya Sridhar

Priya is part of the Marketing team at GAVS. She is passionate about Technology, Indian Classical Arts, Travel, and Yoga. She aspires to become a Yoga Instructor someday!

Gireesh Sreedhar KP

Gireesh is a part of the projects run in collaboration with IIT Madras for developing AI solutions and algorithms. His interest includes Data Science, Machine Learning, Financial markets, and Geo-politics. He believes that he is competing against himself to become better than who he was yesterday. He aspires to become a well-recognized subject matter expert in the field of Artificial Intelligence.

Cloud Adoption, Challenges, and Solution Through Monitoring, AI & Automation

Cloud Adoption

Cloud computing is the delivery of computing services, including servers, databases, storage, networking, and others, over the internet. Public, private & hybrid clouds are different ways of deploying cloud computing.

  • In a public cloud, the cloud resources are owned by a 3rd-party cloud service provider
  • A private cloud consists of computing resources used exclusively by one business or organization
  • A hybrid cloud provides the best of both worlds, combining on-premise infrastructure and private cloud with public cloud

Microsoft, Google, Amazon, Oracle, IBM, and others are providing cloud platform to users to host and experience practical business solution. The worldwide public cloud services market is forecast to grow 17% in 2020 to total $266.4 billion and $354.6 billion in 2022, up from $227.8 billion in 2019, per Gartner, Inc.

There are various types of Instances, workloads & options available as part of cloud ecosystem, i.e. IaaS, PaaS, SaaS, Multi-cloud, Serverless.

Challenges

When large and medium enterprises decide to move their IT environment from on-premise to the cloud, they typically move some or most of their on-premise workloads to the cloud and keep the rest under their control on-premise. There are various factors that impact the decision; to name a few:

  1. ROI vs Cost of Cloud Instance, Operation cost
  2. Architecture dependency of the application, i.e. whether it is monolithic or multi-tier or polyglot or hybrid cloud
  3. Requirement and need for elasticity and scalability
  4. Availability of right solution from the cloud provider
  5. Security of some key data

Once these hurdles are crossed and the IT environment is cloud-enabled, the challenge becomes ensuring the monitoring of that cloud-enabled IT environment. Here are some of the business and IT challenges:

1. How to ensure the various workloads & Instances are working as expected?

While the cloud provider may give high availability & uptime depending on the tier we choose, it is important that our IT team monitors the environment, as in the case of IaaS, and to some extent PaaS as well.

2. How to ensure the Instances are optimally used in terms of compute and storage?

Cloud providers expose most of the metrics around instances, though they may not provide all the metrics we need to make decisions in every scenario.

The disadvantages of this model are cost, latency, and complexity. For example, Azure Log Analytics involves cost for every MB/GB of data that is stored, and there is latency in getting the right metrics at the right time; if there is a latency or delay, you may not get the right result.

3. How to ensure the Application or the components of a single solution that are spread across on-premise and Cloud environment is working as expected?

Some cloud providers give tools for integrating the metrics from on-premise to cloud environment to have a shared view.

The disadvantage of this model is that it is not possible to bring all sorts of data together to get insights directly. That is, observability is always in question, and the ownership of achieving observability lies with the IT team that handles the data.

4. How to ensure the Multi-Cloud + On-Premise environment is effectively monitored & utilized to ensure the best End-user experience?

Multi-cloud environment – With the rapid growth of microservices architecture and container-based cloud-enabled models, it is quite natural that an enterprise may choose the best from different cloud providers like Azure, AWS, Google, and others.

There is little support from cloud providers in this space. In fact, some cloud providers do not support this scenario at all.

5. How to get a single panel of view for troubleshooting & root cause analysis?

Especially when problems occur in the application, database, middle tier, network, or 3rd-party layers spread across a multi-cluster, multi-cloud, elastic environment, it is very important to get a unified view of the entire environment.

Zero Incident Framework™ (ZIF) provides a single platform for cloud monitoring.

ZIF has Discovery, Monitoring, Prediction & Remediation capabilities that seamlessly fit a cloud-enabled solution. ZIF provides a unified dashboard with insights across all layers of IT infrastructure distributed across on-premise hosts, cloud instances & containers.

The core features & benefits of ZIF for cloud monitoring are:

1. Discovery & Topology

  • Discovers and provides dynamic mapping of resources across all layers.
  • Provides real-time mapping of applications and its dependent layers irrespective of whether the components live on-premise, or on cloud or containerized in cloud.
  • Dynamically built topology of all layers which helps in taking effective decisions.

2. Observability across Multi-Cloud, Hybrid-Cloud & On-Premise tiers

  • It is not just about collecting metrics; it is very important to analyze the monitored data and provide meaningful insights.
  • When the IT infrastructure is spread across multiple cloud platform like Azure, AWS, Google Cloud, and others, it is important to get a unified view of your entire environment along with the on-premise servers.
  • The health of each layer is represented in topology format; this helps in understanding the impact and taking necessary actions.

3. Prediction driven decision for resource optimization

  • The prediction engine analyzes the metrics of cloud resources and predicts resource usage. This helps the resource owner take proactive action rather than being reactive.
  • Provides meaningful insights and alerts in terms of the surge in the load, the growth in number of VMs, containers, and the usage of resource across other workloads.
  • Validate elasticity & scalability decisions through real-time metrics.

4. Container & Microservice support

  • Understand the resource utilization of your containers that are hosted in Cloud & On-Premise.
  • Know the bottlenecks around the Microservices and tune your environment for the spikes in load.
  • Provides full support for monitoring applications distributed across your local host & containers in cloud in a multi-cluster setup.

5. Root cause analysis made simple

  • Quick root cause analysis by analyzing the various causes captured by ZIF Monitor, instead of going through layer by layer. This saves time, allowing focus on solving and arresting the problem instead of spending effort on identifying the root cause.
  • Provides insights across your workload including the impact due to 3rd party layers as well.

6. Automation

  • Irrespective of whether the workload or instance is on-premise, on Azure, AWS, or another provider, the ZIF automation module can automate anything from basic to complex activities

7. Ensure End User Experience

  • Helps improve the experience of end-users who are served by workloads running in the cloud.
  • ZIF tracing helps trace each and every request of each and every user; thereby it is quite natural for ZIF to unearth performance bottlenecks across all layers, which in turn helps address the problem and improve the user experience

Cloud and Container Platform Support

ZIF seamlessly integrates with the following cloud & container environments:

  • Microsoft Azure
  • AWS
  • Google Cloud
  • Grafana Cloud
  • Docker
  • Kubernetes

About the Author

Suresh Kumar Ramasamy


Suresh heads the Monitor component of ZIF at GAVS. He has 20 years of experience in Native Applications, Web, Cloud, and Hybrid platforms from Engineering to Product Management. He has designed & hosted the monitoring solutions. He has been instrumental in conglomerating components to structure the Environment Performance Management suite of ZIF Monitor.

Suresh enjoys playing badminton with his children. He is passionate about gardening, especially medicinal plants.

Generative Adversarial Networks (GAN)

In my previous article (zif.ai/inverse-reinforcement-learning/), I had introduced Inverse Reinforcement Learning and explained how it differs from Reinforcement Learning. In this article, let’s explore Generative Adversarial Networks or GAN; both GAN and reinforcement learning help us understand how deep learning is trying to imitate human thinking.

With access to greater hardware power, neural networks have made great progress. We use them to recognize images and voice at levels comparable to humans, sometimes with even better accuracy. Even so, we are far from automating human tasks with machines: a tremendous amount of information is out there, and to a large extent it is easily accessible in the digital world of bits. The tricky part is developing models and algorithms that can analyze and understand this humongous amount of data.

GAN, in a way, comes close to achieving the above goal with what we call automation; we will see the use cases of GANs later in this article.

This technique is very new to the Machine Learning (ML) world. GAN is a deep learning, unsupervised machine learning technique proposed by Ian Goodfellow and a few other researchers, including Yoshua Bengio, in 2014. Yann LeCun, one of the most prominent researchers in the deep learning area, described it as “the most interesting idea in the last 10 years in Machine Learning”.

What is Generative Adversarial Network (GAN)?

A GAN is a machine learning model in which two neural networks compete to become more accurate in their predictions. GANs typically run unsupervised and learn through a competitive zero-sum game framework.

The logic of GANs lies in the rivalry between the two neural nets. It mimics the idea of rivalry between a picture forger and an art detective who repeatedly try to outwit one another. Both networks are trained on the same data set.

A generative adversarial network (GAN) has two parts:

  • The generator (the artist) learns to generate plausible data. The generated instances become negative training examples for the discriminator.
  • The discriminator (the critic) learns to distinguish the generator’s fake data from real data. The discriminator penalizes the generator for producing implausible results.

GAN can be compared with Reinforcement Learning, where the generator is receiving a reward signal from the discriminator letting it know whether the generated data is accurate or not.


During training, the generator tries to become better at generating real-looking images, while the discriminator trains to become better at classifying those images as real or fake. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.


Here are the steps a GAN takes:

  • The input to the generator is random noise, from which it returns an image.
  • The output image of the generator is fed as input to the discriminator along with a stream of images taken from the actual dataset.
  • Both real and fake images are given to the discriminator which returns probabilities, a number between 0 and 1, 1 meaning a prediction of authenticity and 0 meaning fake.

So, you have a double feedback loop in the architecture of GAN:

  • We have a feedback loop with the discriminator having ground truth of the images from actual training dataset
  • The generator is, in turn, in a feedback loop along with the discriminator.

Most GANs today are at least loosely based on the DCGAN architecture (Radford et al., 2015). DCGAN stands for “deep convolutional GAN.” Though GANs were both deep and convolutional prior to DCGANs, the name DCGAN is useful to refer to this specific style of architecture.
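
For intuition, here is a minimal GAN training loop in PyTorch that learns a one-dimensional Gaussian instead of images; the architectures and hyperparameters are toy assumptions, far simpler than a real DCGAN:

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # real data: N(3, 0.5)
        fake = G(torch.randn(64, 8))            # generator output from random noise

        # Train discriminator: real -> 1, fake -> 0
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        # Train generator: fool the discriminator (fake -> 1)
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

    # Mean of generated samples should drift toward 3.0 as training progresses
    print(G(torch.randn(256, 8)).mean().item())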

Applications of GAN

Now that we know what GAN is and how it works, it is time to dive into the interesting applications of GANs that are commonly used in the industry right now.

<Image: GAN-generated human faces>

Can you guess what’s common among all the faces in this image?

None of these people are real! These faces were generated by GANs; exciting and at the same time scary, right? We will focus on the ethical applications of GANs in this article.

GANs for Image Editing

Using GANs, appearances can be drastically changed by reconstructing the images.

GANs for Security

GANs have been able to address the concern of ‘adversarial attacks’.

These adversarial attacks use a variety of techniques to fool deep learning architectures. GANs make existing deep learning models more robust to these techniques by creating more such fake examples and training the model to identify them.

Generating Data with GANs

The availability of data is a necessity in certain domains, especially where training data is needed to build learning algorithms; the healthcare industry comes to mind here. GANs shine again, as they can be used to generate synthetic data for supervision.

GANs for 3D Object Generation

GANs are quite popular in the gaming industry. Game designers work countless hours recreating 3D avatars and backgrounds to give them a realistic feel, and it certainly takes a lot of effort to create 3D models by imagination. With the incredible power of GANs, the entire process can be automated!

GANs are one of the few successful techniques in unsupervised machine learning, and they are evolving quickly, improving our ability to perform generative tasks. Since most of the successful applications of GANs have been in the domain of computer vision, the generative model surely has a lot of potential, but it is not without drawbacks.

About the Author –

Naresh B

Naresh is a part of Location Zero at GAVS as an AI/ML solutions developer. His focus is on solving problems leveraging AI/ML.
He strongly believes in making success as a habit rather than considering it as a destination.
In his free time, he likes to spend time with his pet dogs and likes sketching and gardening.

Lambda (λ), Kappa (κ) and Zeta (ζ) – The tale of three musketeers (Part-2)

In my previous article https://bit.ly/2T7DO9r, we saw the brief introduction and terminologies of Lambda Architecture. Let’s jump on to its various implementation patterns in the enterprises.

Lambda data processing architecture can be implemented in three ways:

  1. Generic Lambda (λ) Architecture
  2. Unified Lambda (λ) Architecture
  3. Multi-Agent Lambda (λ) Architecture (MALA)

Generic Lambda (λ) Architecture

The three layers of Generic Lambda (λ):

  • Batch Layer – The master data is managed here, and the batch views are precomputed.
  • Speed Layer – This layer serves recent data only and increments the real-time views.
  • Serving Layer – This layer is responsible for indexing and exposing the views so that they can be queried.

How does Generic Lambda (λ) Architecture work?

The newly collected or ingested data is sent simultaneously to both the batch and speed/streaming layers for processing. The batch layer handles two vital tasks:

1) Managing the master data set (the Data Lake), which is immutable, append-only raw data.

2) Precomputing the batch views on business-relevant aggregations and metrics.

The computation from Batch Layer is fed into Serving Layer which indexes the batch views, for a low latency query.

In the Speed layer or Streaming layer, the views are transient in nature, since only new data is considered to compensate for the high latency of the writes.

The serving layer can act as the presentation/reporting layer, handling both batch and real-time reporting. At the presentation side, queries are answered by merging both batch and real-time views.

Generic Lambda (λ) is technology-agnostic

The data pipeline can be broken down into layers with clear delineation of roles and responsibilities, and at each layer, we can choose from several technologies. For instance, in the speed layer, any of Apache Storm, Apache Spark Streaming, or Spring XD (eXtreme Data) could be employed.

Speed Layer Components

The following are some of the stream processing frameworks well suited for the speed layer.

Apache Storm

Apache Storm is an open-source, distributed, and advanced Big Data processing engine that processes the real-time streaming data at an unprecedented speed, way faster than Apache Hadoop. What Hadoop does for batch processing, Apache Storm does for unbounded streams of data in a reliable manner.

  • Apache Storm can process over a million tuples per second per node.
  • It is integrated with Hadoop to harness higher throughputs.
  • It is easy to implement and can be integrated with any programming language.

Apache Spark Streaming

Spark Streaming was added to Apache Spark in 2013, an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. Data is ingested from varied sources like Apache Kafka, Flume, Kinesis and can be processed using complex algorithms expressed with high-level functions like map, reduce, join, and window. The processed data can be pushed out to filesystems, databases, and live dashboards. Spark’s machine learning and graph processing algorithms can be applied to data streams.
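
Here is a minimal word-count sketch of the idea, using Spark’s newer Structured Streaming API (it assumes a local Spark installation; the socket source is for demos only):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Read a stream of lines from a local socket (e.g. `nc -lk 9999`)
    lines = spark.readStream.format("socket") \
        .option("host", "localhost").option("port", 9999).load()

    # Split lines into words and maintain a running count per word
    words = lines.select(explode(split(lines.value, " ")).alias("word"))
    counts = words.groupBy("word").count()

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()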

Spring XD (eXtreme Data)

Spring XD (eXtreme Data) is a unified, distributed, and extensible service for data ingestion, real-time analytics, batch processing, and data export.

The Spring Data team, via Spring XD, has provided support for NoSQL datastores and has also simplified the development experience with Hadoop. Spring XD is built on the fundamental blocks of Apache Hadoop. It also uses various pre-existing Spring technologies: Spring Data supports the NoSQL/Hadoop work, Spring Batch supports workflow orchestration with job state management and retry/restart capabilities, and Spring Integration manages the event-driven data ingestion stream processing and the various enterprise application integration patterns. Spring Reactor provides a simplified API for developing asynchronous applications using the LMAX Disruptor.

Batch layer Components

Similarly, in the batch layer, frameworks like Apache Pig, Apache Hadoop MapReduce, and Apache Spark can be employed. The processing frameworks commonly used in the batch layer are outlined below.

Apache Hadoop MapReduce

Hadoop MapReduce is a paradigm and software framework for writing applications that process large amounts of data on large clusters of commodity hardware in a parallel, reliable, fault-tolerant manner. MapReduce programs written in various languages like Java, Ruby, Python, and C++ can be run on the Apache Hadoop platform.

Apache Pig

Apache Pig is an abstraction over MapReduce and a tool/platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. To address the problem of programs generating series of Map and Reduce stages in MapReduce, Apache Pig creates an abstraction over them. The most noticeable property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

Apache Spark

Apache Spark is one of the largest open source projects in big data processing. It is a blazing-fast cluster computing technology, designed for fast computation. It builds on the Hadoop MapReduce model and extends it to efficiently support more types of computations, including interactive queries and stream processing. The main feature of Spark is its in-memory cluster computing, which increases the processing speed of an application.

Serving Layer Components

Merge/low-latency database technologies like Druid, Apache HBase, ElephantDB, Apache Solr, Elasticsearch, Azure Cosmos DB, MongoDB, or VoltDB can be employed for the serving-layer output.

Limitations of Generic Lambda (λ) architecture

  1. Write everything twice – In the generic Lambda architecture, data must be written twice: it is sent to both the speed layer and the batch layer as it is created. Any logic is duplicated and implemented twice. The batch layer takes a while to produce results, so the speed layer does the same work to answer questions about in-flight events and recent activities.

  2. Two execution paths – There are always two separate execution paths for streaming and batch. It is a maintenance nightmare, dealing with a plethora of frameworks, components, and clusters.

  3. Two programming models – An undesirable effect of Generic Lambda is that the codebases tend to diverge, since the code that executes in the batch world works on a large but finite data set, while the real-time stream processing system works on an infinite event stream.

  4. Diverse skill sets – Just to manage the platform, more developers with diverse skill sets are needed than can focus on core business problems.

Conclusion

It is evident that Generic Lambda (λ) fits best for systems that have fast data, i.e. a high velocity of data, and a Data Lake, i.e. systems that involve complex processing of both historical (re-computational) and real-time (incremental) aggregated views with nearly unlimited memory capacity and data storage space. Use cases like log ingestion (syslogs, application logs, weblogs), which are one-way data pipelines, are some of the areas where Generic Lambda (λ) shines brightest. But having known the limitations of Generic Lambda (λ), there is a constant search to address them. Let’s continue exploring solutions in the next part.

Happy Learning!

About the Author:

Bargunan Somasundaram


Bargunan is a Big Data Engineer and a programming enthusiast. His passion is to share his knowledge by writing his experiences about them. He believes “Gaining knowledge is the first step to wisdom and sharing it is the first step to humanity.”

Automating IT ecosystems with ZIF Remediate

Alwinking N Rajamani


Zero Incident Framework™ (ZIF) is an AIOps-based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™. ZIF comprises 5 modules, as outlined below.

This article’s focus is on the Remediate function of ZIF. Most ITSM teams envision a future of ticketless ITSM, driven by AI and Automation.

Remediate, being a key module of ZIF, has more than 500 connectors to various ITSM, monitoring, security and incident management tools, storage/backup tools, and others. A few of the connectors that enable quick automation building are referenced below.

Key Features of Remediate

  • Truly agentless software
  • 300+ readily available templates – an intuitive workflow/activity-based tool for process automation from a rich repository of pre-coded activities/templates
  • No coding or programming required to create/deploy automated workflows; easy drag & drop to sequence activities for workflow design
  • Workflow execution scheduling for a pre-determined time, or triggering from events/notifications via email or SMS alerts
  • Can be installed on-premise or on the cloud, on physical or virtual servers
  • Self-service portal for end-users/admins/help-desk to handle tasks & remediation automatically
  • Fully automated service management life cycle, from incident creation to resolution and automatic closure
  • Integration packs for all leading ITSM tools

Key features for futuristic Automation Solutions

Although the COVID pandemic has landed us in unprecedented times, we have been able to continue supporting our customers and enabling their IT operations with ZIF Remediate.

  • Self-learning capability to deliver Predictive/Prescriptive actionable alerts.
  • Access to multiple data sources and types – events, metrics, thresholds, logs, event triggers e.g. mail or SMS.
  • Support for a wide range of automation
    • Interactive Automation – Web, SMS, and email
    • Non-interactive automation – Silent based on events/trigger points
  • Supporting a wide range of advanced heuristics.

Benefits of AIOPS driven Automation

  • Faster MTTR
  • Instant identification of threats and appropriate responses
  • Faster delivery of IT services
  • Quality services leading to Employee and Customer satisfaction
  • Fulfillment and Alignment of IT services to business performance

Interactive and Non-interactive automation

Through our automation journey so far, we have understood that the best automation empowers humans, rather than replacing them. By implementing ZIF Remediate, organizations can empower their people to focus their attention on critical thinking and value-added activities and let our platform handle mundane tasks by bringing data-driven insights for decision making.

  • Interactive Automation – Web portal, Chatbot and SMS based
  • Non-interactive automations – Event or trigger driven automation

Decision-driven Automations

ZIF Remediate has unique, interactive automation capabilities that many automation tools do not offer. Need approvals built into an automated change management process that involves sensitive aspects of your environment? Need numerous decision points that demand expert approval or oversight? We have the solution for you. Take the example of phishing automation: a domain or IP is blocked based on insights derived by mimicking an SOC engineer’s actions – parsing the observables (URLs, suspicious links, or attachments in a phishing mail) and having those observables validated against threat response tools, VirusTotal, and others.

Here are some of the key benefits realized by our customers, which include one of the largest manufacturing organizations, a financial services company, a large PR firm, and healthcare organizations, among others:

  • Reduction of MTTR by 30% across various service requests.
  • Reduction of 40% of incidents/tickets, thus enabling productivity improvements.
  • Ticket triaging process automation resulting in a reduction of time taken by 50%.
  • Reclaiming TBs of storage space every week through snapshot monitoring and approval-driven model for a large virtualized environment.
  • Eliminating manual threat analysis by Phishing Automation, leading to man-hours being redirected towards more critical work.
  • Reduction of potential P1 outages by 40% through self-healing automations.

For more detailed information on ZIF Remediate, or to request a demo please visit https://zif.ai/products/remediate/

About the Author:

Alwin leads the Product Engineering for ZIF Remediate and zIrrus. He has over 20 years of IT experience spanning across Program & Portfolio Management for large customer accounts of various business verticals.

In his free time, Alwin loves going for long drives, travelling to scenic locales, doing social work, and reading & meditating on the Bible.