Cloud Adoption, Challenges, and Solution Through Monitoring, AI & Automation

Cloud Adoption

Cloud computing is the delivery of computing services, including servers, databases, storage, networking, and more, over the internet. Public, private, and hybrid clouds are the different ways of deploying cloud computing.

  • In a public cloud, the cloud resources are owned and operated by a third-party cloud service provider.
  • A private cloud consists of computing resources used exclusively by one business or organization.
  • A hybrid cloud provides the best of both worlds by combining on-premise infrastructure or a private cloud with a public cloud.

Microsoft, Google, Amazon, Oracle, IBM, and others provide cloud platforms on which users can host and run practical business solutions. Per Gartner, Inc., the worldwide public cloud services market is forecast to grow 17% in 2020 to total $266.4 billion, up from $227.8 billion in 2019, and to reach $354.6 billion in 2022.

The cloud ecosystem offers various types of instances, workloads, and options, e.g. IaaS, PaaS, SaaS, multi-cloud, and serverless.

Challenges

When a medium, large, or very large enterprise decides to move its IT environment from on-premise to the cloud, it typically moves some or most of its on-premise workloads to the cloud and keeps the rest under its own control on-premise. Various factors impact this decision; to name a few:

  1. ROI vs. the cost of cloud instances and operations
  2. Architecture of the application and its dependencies, i.e. whether it is monolithic, multi-tier, polyglot, or hybrid cloud
  3. Requirement and need for elasticity and scalability
  4. Availability of the right solution from the cloud provider
  5. Security of key data

Once these hurdles are crossed and the IT environment is cloud-enabled, the next challenge is ensuring effective monitoring of the cloud-enabled IT environment. Here are some of the business and IT challenges.

1. How to ensure the various workloads & Instances are working as expected?

While the cloud provider may offer high availability and uptime depending on the tier we choose, it is important that our IT team monitors the environment, particularly in the case of IaaS and, to some extent, PaaS as well.

2. How to ensure the Instances are optimally used in terms of compute and storage?

Cloud providers expose most of the metrics around instances, though they may not provide all the metrics we need to make decisions in every scenario.

The disadvantages of this model are cost, latency, and complexity. For example, Log Analytics in Azure involves a cost for every GB of data that is stored, and there is latency in getting the right metrics at the right time; if the metrics are delayed, you may not get the right result.
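
Providers also expose these metrics programmatically. Purely as an illustration (not part of ZIF), here is a minimal sketch of pulling average CPU utilization for a single EC2 instance from AWS CloudWatch using boto3; the region and instance ID are hypothetical, and Azure Monitor or Google Cloud Monitoring offer equivalent APIs.

```python
# Minimal sketch: fetch average CPU utilization for one EC2 instance
# from AWS CloudWatch. Region and instance ID are illustrative only.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical ID
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                 # 5-minute granularity
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "%")
```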

3. How to ensure the application, or the components of a single solution spread across on-premise and cloud environments, is working as expected?

Some cloud providers offer tools for integrating on-premise metrics into the cloud environment to provide a shared view.

The disadvantage with this model is that it is not possible to bring all sorts of data together to get insights directly; that is, observability is always in question. The ownership of achieving observability lies with the IT team that handles the data.

4. How to ensure the Multi-Cloud + On-Premise environment is effectively monitored & utilized to ensure the best End-user experience?

Multi-cloud environment – with the rapid growth of microservices architecture and container-based cloud models, it is quite natural for an enterprise to choose the best from different cloud providers like Azure, AWS, Google Cloud, and others.

There is little support from cloud providers in this space. In fact, some cloud providers do not even support this scenario.

5. How to get a single pane of glass for troubleshooting & root cause analysis?

Especially when problems occur across application, database, middle-tier, network, and 3rd-party layers spread over a multi-cluster, multi-cloud, elastic environment, it is very important to get a unified view of the entire environment.

ZIF (Zero Incident Framework™) provides a single platform for cloud monitoring.

ZIF has Discovery, Monitoring, Prediction & Remediation capabilities that seamlessly fit a cloud-enabled solution. ZIF provides a unified dashboard with insights across all layers of IT infrastructure distributed across on-premise hosts, cloud instances & containers.

The core features & benefits of ZIF for cloud monitoring are:

1. Discovery & Topology

  • Discovers and provides dynamic mapping of resources across all layers.
  • Provides real-time mapping of applications and their dependent layers, irrespective of whether the components live on-premise, in the cloud, or in containers in the cloud.
  • A dynamically built topology of all layers helps in making effective decisions.

2. Observability across Multi-Cloud, Hybrid-Cloud & On-Premise tiers

  • It is not just about collecting metrics; it is very important to analyze the monitored data and provide meaningful insights.
  • When the IT infrastructure is spread across multiple cloud platforms like Azure, AWS, Google Cloud, and others, it is important to get a unified view of your entire environment along with the on-premise servers.
  • The health of each layer is represented in a topology format; this helps to understand the impact and take necessary actions.

3. Prediction driven decision for resource optimization

  • The prediction engine analyses the metrics of cloud resources and predicts resource usage. This helps the resource owner take proactive action rather than being reactive.
  • Provides meaningful insights and alerts on surges in load, growth in the number of VMs and containers, and resource usage across other workloads.
  • Validates elasticity & scalability decisions through real-time metrics.

4. Container & Microservice support

  • Understand the resource utilization of your containers that are hosted in Cloud & On-Premise.
  • Know the bottlenecks around the Microservices and tune your environment for the spikes in load.
  • Provides full support for monitoring applications distributed across your local host & containers in cloud in a multi-cluster setup.

5. Root cause analysis made simple

  • Quick root cause analysis by analysing the various causes captured by ZIF Monitor, instead of going through the stack layer by layer. This frees up time to focus on solving and containing the problem rather than on identifying its root cause.
  • Provides insights across your workload including the impact due to 3rd party layers as well.

6. Automation

  • Irrespective of whether the workload or instance is on-premise, on Azure, AWS, or another provider, the ZIF automation module can automate activities ranging from basic to complex.

7. Ensure End User Experience

  • Helps to improve the experience of end users who are served by workloads from the cloud.
  • ZIF tracing follows each and every request of each and every user, so it is quite natural for ZIF to unearth performance bottlenecks across all layers, which in turn helps address problems and thereby improve the user experience.

Cloud and Container Platform Support

ZIF seamlessly integrates with the following cloud & container environments:

  • Microsoft Azure
  • AWS
  • Google Cloud
  • Grafana Cloud
  • Docker
  • Kubernetes

About the Author


Suresh Kumar Ramasamy


Suresh heads the Monitor component of ZIF at GAVS. He has 20 years of experience in Native Applications, Web, Cloud, and Hybrid platforms from Engineering to Product Management. He has designed & hosted the monitoring solutions. He has been instrumental in conglomerating components to structure the Environment Performance Management suite of ZIF Monitor.

Suresh enjoys playing badminton with his children. He is passionate about gardening, especially medicinal plants.

Generative Adversarial Networks (GAN)

In my previous article (zif.ai/inverse-reinforcement-learning/), I had introduced Inverse Reinforcement Learning and explained how it differs from Reinforcement Learning. In this article, let’s explore Generative Adversarial Networks or GAN; both GAN and reinforcement learning help us understand how deep learning is trying to imitate human thinking.

With access to greater hardware power, neural networks have made great progress. We use them to recognize images and voice at levels comparable to humans, sometimes with even better accuracy. Even so, we are still far from automating many human tasks with machines, because a tremendous amount of information is out there and, to a large extent, easily accessible in the digital world of bits. The tricky part is to develop models and algorithms that can analyze and understand this humongous amount of data.

GANs, in a way, come close to achieving this goal of automation; we will see the use cases of GANs later in this article.

This technique is relatively new to the Machine Learning (ML) world. GAN is a deep learning, unsupervised machine learning technique proposed by Ian Goodfellow and a few other researchers, including Yoshua Bengio, in 2014. Yann LeCun, one of the most prominent researchers in the deep learning area, described it as “the most interesting idea in the last 10 years in Machine Learning”.

What is Generative Adversarial Network (GAN)?

A GAN is a machine learning model in which two neural networks compete to become more accurate in their predictions. GANs typically run unsupervised and learn through a zero-sum (adversarial) game framework.

The logic of GANs lies in the rivalry between the two neural nets. It mimics the rivalry between a picture forger and an art detective who repeatedly try to outwit one another. Both networks are trained on the same data set.

A generative adversarial network (GAN) has two parts:

  • The generator (the artist) learns to generate plausible data. The generated instances become negative training examples for the discriminator.
  • The discriminator (the critic) learns to distinguish the generator’s fake data from real data. The discriminator penalizes the generator for producing implausible results.

A GAN can be compared with reinforcement learning, where the generator receives a reward signal from the discriminator letting it know whether the generated data is realistic or not.


During training, the generator tries to become better at generating real-looking images, while the discriminator trains to better classify those images as fake. The process reaches equilibrium when the discriminator can no longer distinguish real images from fakes.


Here are the steps a GAN takes:

  • The generator takes random numbers as input and returns an image.
  • The generated image is fed into the discriminator alongside a stream of images taken from the actual dataset.
  • The discriminator takes in both real and fake images and returns probabilities, numbers between 0 and 1, with 1 representing a prediction of authenticity and 0 representing fake.

So, you have a double feedback loop in the architecture of GAN:

  • We have a feedback loop between the discriminator and the ground truth of the images from the actual training dataset.
  • The generator is, in turn, in a feedback loop along with the discriminator.
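
To make this adversarial loop concrete, here is a minimal sketch in PyTorch that trains a toy generator to mimic samples from a 1-D Gaussian; the network sizes, hyperparameters, and target distribution are purely illustrative, not a production GAN.

```python
# Minimal GAN sketch in PyTorch: a generator learns to mimic samples from a
# 1-D Gaussian. Network sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0          # "real" data: N(4, 1.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: push real samples toward 1 and fake samples toward 0
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 for fakes
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print("generated mean:", generator(torch.randn(1000, latent_dim)).mean().item())
```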

Most GANs today are at least loosely based on the DCGAN architecture (Radford et al., 2015). DCGAN stands for “deep convolutional GAN.” Though GANs were both deep and convolutional prior to DCGAN, the name DCGAN is useful to refer to this specific style of architecture.

Applications of GAN

Now that we know what GAN is and how it works, it is time to dive into the interesting applications of GANs that are commonly used in the industry right now.

[Image: a grid of photorealistic human faces generated by a GAN]

Can you guess what’s common among all the faces in this image?

None of these people are real! These faces were generated by GANs; exciting and scary at the same time, right? We will focus on the ethical applications of GANs in this article.

GANs for Image Editing

Using GANs, appearances can be drastically changed by reconstructing the images.

GANs for Security

GANs have been able to address the concern of ‘adversarial attacks’.

These adversarial attacks use a variety of techniques to fool deep learning architectures. GANs make existing deep learning models more robust to such techniques by creating more of these adversarial examples and training the models to identify them.

Generating Data with GANs

The availability of training data is a necessity in many domains for modelling learning algorithms; the healthcare industry comes to mind here. GANs shine again, as they can be used to generate synthetic data for supervised learning.

GANs for 3D Object Generation

GANs are quite popular in the gaming industry. Game designers work countless hours recreating 3D avatars and backgrounds to give them a realistic feel, and it certainly takes a lot of effort to create 3D models from imagination. With the incredible power of GANs, much of this process can be automated.

GANs are one of the few successful techniques in unsupervised machine learning; they are evolving quickly and improving our ability to perform generative tasks. Since most of the successful applications of GANs have been in the domain of computer vision, the generative model surely has a lot of potential, but it is not without drawbacks.

About the Author –

Naresh B

Naresh is a part of Location Zero at GAVS as an AI/ML solutions developer. His focus is on solving problems leveraging AI/ML.
He strongly believes in making success as a habit rather than considering it as a destination.
In his free time, he likes to spend time with his pet dogs and likes sketching and gardening.

Lambda (λ), Kappa (κ) and Zeta (ζ) – The tale of three musketeers (Part-2)

In my previous article (https://bit.ly/2T7DO9r), we saw a brief introduction to Lambda Architecture and its terminology. Let’s jump into its various implementation patterns in enterprises.

Lambda data processing architecture can be implemented in three ways:

  1. Generic Lambda (λ) Architecture
  2. Unified Lambda (λ) Architecture
  3. Multi-Agent Lambda (λ) Architecture (MALA)

Generic Lambda λ Architecture

The three layers of the Generic Lambda (λ) are:

  • Batch Layer – The master data is managed here, and the batch views are precomputed.
  • Speed Layer – This layer serves only recent data and increments the real-time views.
  • Serving Layer – This layer is responsible for indexing and exposing the views so that they can be queried.

How does the Generic Lambda (λ) Architecture work?

The newly collected or ingested data is sent simultaneously to both the Batch and Speed/Streaming layers for processing. The batch layer, often called the ‘Data Lake’, handles two vital tasks:

1) Managing the master dataset (the Data Lake), which is an immutable, append-only set of raw data.

2) Precomputing the batch views on business-relevant aggregations and metrics.

The computation from the Batch Layer is fed into the Serving Layer, which indexes the batch views for low-latency queries.

In the Speed (or Streaming) layer, the views are transient in nature, since only new data is considered; this compensates for the high latency of the batch layer’s writes.

The serving layer can act as the presentation/reporting layer, handling both batch reporting and real-time reporting. On the presentation side, queries are answered by merging both batch and real-time views.
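
As a minimal sketch in plain Python of this merge-at-query-time idea (the view structures and keys below are purely illustrative):

```python
# Illustrative merge of batch and speed-layer views at query time.
# Both views map a key (e.g. a page URL) to an aggregate (e.g. a hit count).
batch_view = {"/home": 10_250, "/pricing": 3_140}   # precomputed over all historical data
realtime_view = {"/home": 37, "/checkout": 5}       # incremental, covers data since the last batch run

def query(key: str) -> int:
    """Answer = batch aggregate + real-time delta for the same key."""
    return batch_view.get(key, 0) + realtime_view.get(key, 0)

print(query("/home"))      # 10287
print(query("/checkout"))  # 5
```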

Generic Lambda (λ) is technology agnostic

The data pipeline can be broken down into layers with a clear delineation of roles and responsibilities, and at each layer we can choose from several technologies. For instance, in the speed layer, any of Apache Storm, Apache Spark Streaming, or Spring XD (eXtreme Data) could be employed.

Speed Layer Components

The following are some of the stream processing frameworks that are well suited for the speed layer.

Apache Storm

Apache Storm is an open-source, distributed Big Data processing engine that processes real-time streaming data at unprecedented speed, way faster than Apache Hadoop. What Hadoop does for batch processing, Apache Storm does for unbounded streams of data, in a reliable manner.

  • Apache Storm can process over a million jobs on a node in a fraction of a second.
  • It is integrated with Hadoop to harness higher throughputs.
  • It is easy to implement and can be integrated with any programming language.

Apache Spark Streaming

Spark Streaming was added to Apache Spark in 2013 as an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from varied sources like Apache Kafka, Flume, and Kinesis, and processed using complex algorithms expressed with high-level functions like map, reduce, join, and window. The processed data can be pushed out to filesystems, databases, and live dashboards. Spark’s machine learning and graph processing algorithms can also be applied to data streams.
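
To give a flavor of the API, here is the classic word-count sketch using PySpark’s DStream interface, reading from a local socket; the host, port, and batch interval are illustrative.

```python
# Classic Spark Streaming (DStream) word count over a socket stream.
# Host, port and batch interval are illustrative.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="WordCountSketch")
ssc = StreamingContext(sc, 5)                        # 5-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()                                      # print each batch's counts

ssc.start()
ssc.awaitTermination()
```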

Spring XD (eXtreme Data)

Spring XD (eXtreme Data) is a unified, distributed, and extensible service for data ingestion, real-time analytics, batch processing, and data export.

The Spring Data team, via Spring XD, has provided support for NoSQL datastores and has also simplified the development experience with Hadoop. Spring XD is built on the fundamental blocks of Apache Hadoop and uses various pre-existing Spring technologies: Spring Data supports the NoSQL/Hadoop work, Spring Batch is employed for workflow orchestration with job state management and retry/restart capabilities, and Spring Integration manages the event-driven data ingestion stream processing and the various Enterprise Application Integration patterns. Spring Reactor provides a simplified API for developing asynchronous applications using the LMAX Disruptor.

Batch layer Components

Similarly, in the Batch Layer, frameworks like Apache Pig, Apache Hadoop MapReduce, and Apache Spark can be employed. The processing frameworks commonly used in the batch layer are outlined below.

Apache Hadoop MapReduce

Hadoop MapReduce is a paradigm and software framework for writing applications that process large amounts of data on large clusters of commodity hardware in a parallel, reliable, fault-tolerant manner. MapReduce programs written in various languages, such as Java, Ruby, Python, and C++, can be run on the Apache Hadoop platform.
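
For instance, via Hadoop Streaming, such a job can even be expressed as small scripts; a word-count sketch with a Python mapper and reducer might look like the following (normally two separate files passed to the hadoop-streaming jar).

```python
# Word-count sketch for Hadoop Streaming; the two sections below would
# normally live in separate files (mapper.py and reducer.py).
import sys

# mapper.py: emit "word<TAB>1" for every word read from stdin
def mapper():
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

# reducer.py: input arrives sorted by key; sum the counts per word
def reducer():
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    # run as "python wordcount.py map" or "python wordcount.py reduce"
    mapper() if sys.argv[1] == "map" else reducer()
```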

Apache Pig

Apache Pig is an abstraction over MapReduce and a tool/platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. To address the problem of programs generating series of Map and Reduce stages in MapReduce, Apache Pig creates an abstraction over them. The most noticeable property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

Apache Spark

Apache Spark is among the largest open source projects in Big Data processing. It is a blazing-fast cluster computing technology, designed for fast computation. It builds on the Hadoop MapReduce model and extends it to efficiently support more types of computation, including interactive queries and stream processing. The main feature of Spark is its in-memory cluster computing, which increases the processing speed of an application.
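
As a small sketch of the kind of in-memory batch computation described above, the snippet below aggregates a hypothetical web-log dataset into a compact batch view with PySpark; the input path and column names are illustrative.

```python
# Batch aggregation sketch in PySpark: count events per URL from raw logs.
# The input path and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("BatchViewSketch").getOrCreate()

logs = spark.read.json("hdfs:///data/master/weblogs/")      # immutable master dataset
batch_view = (logs.groupBy("url")
                  .agg(F.count("*").alias("hits"),
                       F.countDistinct("user_id").alias("unique_users")))

# Persist the precomputed view for the serving layer to index
batch_view.write.mode("overwrite").parquet("hdfs:///views/page_stats/")
spark.stop()
```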

Serving Layer Components

Merge/low-latency databases like Druid, Apache HBase, ElephantDB, Apache Solr, Elasticsearch, Azure Cosmos DB, MongoDB, and VoltDB can be employed to serve the batch and speed layer outputs.

Limitations of the Generic Lambda (λ) architecture

  1. Write everything twice
     In the generic lambda architecture, the data must be written twice, i.e. data is sent to both the speed layer and the batch layer as it is created. Any logic is duplicated and implemented twice. The batch layer takes a while to produce results, so the speed layer does the same work so that it can answer questions about in-flight events and recent activities.

  2. Two execution paths
     There are always two separate execution paths for streaming and batch. It is a maintenance nightmare, dealing with a plethora of frameworks, components, and clusters.

  3. Two programming models
     Typically, an undesirable effect of Generic Lambda is that the codebases tend to diverge, since the code that executes in the batch world works on a large but finite data set, while the real-time stream processing system works on an infinite event stream.

  4. Diverse skill sets
     More developers with diverse skill sets are needed just to manage the platform, instead of focusing on core business problems.

Conclusion

It is evident that the Generic Lambda (λ) fits best for systems that have fast data, i.e. a high velocity of data, and a Data Lake, i.e. systems that involve complex processing of both historical (re-computational) and real-time (incremental) aggregated views, with nearly unlimited memory capacity and data storage space. Use cases like log ingestion (syslogs, application logs, weblogs), which are one-way data pipelines, are some of the areas where the Generic Lambda shines the brightest. But, given the limitations of the Generic Lambda, there is a constant search for ways to address them. Let’s continue exploring solutions in the next part.

Happy Learning!

About the Author:

Bargunan Somasundaram


Bargunan is a Big Data Engineer and a programming enthusiast. His passion is to share his knowledge by writing his experiences about them. He believes “Gaining knowledge is the first step to wisdom and sharing it is the first step to humanity.”

Automating IT ecosystems with ZIF Remediate

Alwinking N Rajamani


Zero Incident Framework™ (ZIF) is an AIOps-based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™. ZIF comprises 5 modules: Discover, Monitor, Analyze, Predict, and Remediate.

This article’s focus is on the Remediate function of ZIF. Most ITSM teams envision a future of ticketless ITSM, driven by AI and Automation.

Remediate, being a key module of ZIF, has more than 500 connectors to various ITSM, monitoring, security and incident management, storage/backup, and other tools. These connectors enable quick automation building.

Key Features of Remediate

  • Truly Agent-less software.
  • 300+ readily available templates – intuitive workflow/activity-based tool for process automation from a rich repository of pre-coded activities/templates.
  • No coding or programming required to create/deploy automated workflows. Easy drag & drop to sequence activities for workflow design.
  • Workflow execution can be scheduled for a pre-determined time or triggered from events/notifications via email or SMS alerts.
  • Can be installed on-premise or on the cloud, on physical or virtual servers.
  • Self-service portal for end-users/admins/help-desk to handle tasks & remediation automatically.
  • Fully automated service management life cycle from incident creation to resolution and automatic closure
  • Has integration packs for all leading ITSM tools

Key features for futuristic Automation Solutions

Although the COVID pandemic has landed us in unprecedented times, we have been able to continue supporting our customers and enabled their IT operations with ZIF Remediate.

  • Self-learning capability to deliver Predictive/Prescriptive actionable alerts.
  • Access to multiple data sources and types – events, metrics, thresholds, logs, event triggers e.g. mail or SMS.
  • Support for a wide range of automation
    • Interactive Automation – Web, SMS, and email
    • Non-interactive automation – Silent based on events/trigger points
  • Supporting a wide range of advanced heuristics.

Benefits of AIOps-driven Automation

  • Faster MTTR
  • Instant identification of threats and appropriate responses
  • Faster delivery of IT services
  • Quality services leading to Employee and Customer satisfaction
  • Fulfillment and Alignment of IT services to business performance

Interactive and Non-interactive automation

Through our automation journey so far, we have understood that the best automation empowers humans, rather than replacing them. By implementing ZIF Remediate, organizations can empower their people to focus their attention on critical thinking and value-added activities and let our platform handle mundane tasks by bringing data-driven insights for decision making.

  • Interactive Automation – Web portal, Chatbot and SMS based
  • Non-interactive automations – Event or trigger driven automation

Involved decision driven Automations

ZIF Remediate has unique interactive automation capabilities, whereas many automation tools do not allow interactive decision making. Need approvals built into an automated change management process that involves sensitive aspects of your environment? Need numerous decision points that demand expert approval or oversight? We have the solution for you. Take the example of phishing automation: a domain or IP is blocked based on insights derived by mimicking an SOC engineer’s actions, i.e. parsing the observables (URLs, suspicious links or attachments in a phishing mail) and validating those observables against threat response tools, VirusTotal, and others.
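
Purely to illustrate the ‘parsing the observables’ step (this is not ZIF’s internal implementation), a sketch of pulling URL observables out of a suspicious email with Python’s standard library could look like this; the reputation lookup is left as a stub for a real threat-intelligence service.

```python
# Sketch: extract URL observables from a raw email for downstream threat checks.
# The lookup function is a stub; a real pipeline would call VirusTotal or a
# similar threat-intelligence service here.
import re
from email import message_from_string

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def extract_observables(raw_email: str) -> set[str]:
    msg = message_from_string(raw_email)
    urls = set()
    for part in msg.walk():
        if part.get_content_type() in ("text/plain", "text/html"):
            payload = part.get_payload(decode=True) or b""
            urls.update(URL_PATTERN.findall(payload.decode(errors="ignore")))
    return urls

def check_reputation(url: str) -> str:
    """Stub: replace with a real threat-intelligence lookup."""
    return "unknown"

for url in extract_observables(open("suspicious.eml").read()):
    print(url, "->", check_reputation(url))
```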

Here are some of the key benefits realized by our customers, which include one of the largest manufacturing organizations, a financial services company, a large PR firm, health care organizations, and others.

  • Reduction of MTTR by 30% across various service requests.
  • Reduction of 40% of incidents/tickets, thus enabling productivity improvements.
  • Ticket triaging process automation resulting in a reduction of time taken by 50%.
  • Reclaiming TBs of storage space every week through snapshot monitoring and approval-driven model for a large virtualized environment.
  • Eliminating manual threat analysis by Phishing Automation, leading to man-hours being redirected towards more critical work.
  • Reduction of potential P1 outages by 40% through self-healing automations.

For more detailed information on ZIF Remediate, or to request a demo please visit https://zif.ai/products/remediate/

About the Author:

Alwin leads the Product Engineering for ZIF Remediate and zIrrus. He has over 20 years of IT experience spanning across Program & Portfolio Management for large customer accounts of various business verticals.

In his free time, Alwin loves going for long drives, travelling to scenic locales, doing social work and reading & meditating the Bible.

Lambda (λ), Kappa (κ) and Zeta (ζ) – The Tale of 3 AIOps Musketeers (PART-1)

Bargunan Somasundaram


Architecture inspires people, no wonder so many famous writers, artists, politicians, and designers have such profound and fascinating observations about architecture. Whether embracing minimalism or adoring resplendence, everyone has experiences and tastes that shape the way they interact with the world. The Greek architectural beauties have captured the imagination of many. The crown jewel of their architecture is the “post and lintel” which was used for their grand, large, open-air structures that could accommodate 20,000 spectators.

Greeks are also famous for their alphabet. When Greek architecture and the Greek alphabet are merged, the state-of-the-art, overarching Big Data processing architectures are produced: Lambda (λ), Kappa (κ), and Zeta (ζ).

Big Data Architectural patterns

The evolution of the technologies in Big Data in the last decade has presented a history of battles with growing data volume. An increasing number of systems are being built to handle the Volume, Velocity, Variety, Veracity, Validity, and Volatility of Big Data and help gain new insights and make better business decisions. A well-designed big data architecture must handle the 6 V’s of Big Data, save your company money, and help predict future trends.

Lambda (λ) Architecture

The Lambda Architecture (λ) is an emerging paradigm in Big Data computing. The name lambda architecture is derived from a functional point of view of data processing, i.e. all data processing is understood as the application of a function to all data.

Lambda architecture is popular for its data processing technique of handling huge amounts of data by taking advantage of both a batch layer and a speed/stream-processing layer. This specific approach attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of batch data, while simultaneously using real-time stream processing to provide views of online data. The outputs from both batch and speed layers can be merged before the presentation.

The efficiency of this architecture becomes evident in the form of increased throughput, reduced latency, and negligible errors, thus resulting in a linearly scalable architecture that scales out rather than scaling up.

Basic tenets of Lambda Architecture

The Lambda Architecture achieves high scalability and low latency due to the following principles,

  • Immutability of Data
  • Data Denormalization
  • Precomputed Views

Immutability of Data

Big Data immutability is based on principles similar to immutability in programming data structures, the goal being the same: do not change the data in place; instead, create new data. The data cannot be altered or deleted. This rule can be defined for eternity or for a specified time period.

Immutable data is fundamentally simpler than mutable data. The idea is not to change the data in place, i.e. no updating or deleting of records, only creating new ones; this can be time-bound or forever. Thus, write operations only add new data units. In CRUD parlance, only CR (Create & Read) and no UD (Update & Delete).

This approach makes data handling highly scalable because it is very easy to distribute and replicate data. This immutable model turns data aggregation into a kind of logging system. With attributes like a “data creation timestamp”, the old and the most recent versions can be distinguished. Apache Kafka, an append-only distributed log system, is a great example of an immutable data store.

As a drawback, even more data is generated, and answering queries becomes more difficult. For example, to find the current owner of a brand, the record for that brand with the latest timestamp must be found.

In the mutable data model, it is no longer possible to find out that the brand Jaguar was once owned by Ford. This is different when using an immutable data model which is achieved by adding a timestamp to each data record.

Now it is possible to get both bits of information: the fact that Jaguar is now owned by Tata Motors (latest timestamp) and the fact it was formerly owned by Ford. It is also much easier to recover from errors because the old information is not deleted.
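
The brand-ownership example can be sketched as an append-only record set in a few lines of Python: nothing is updated in place, and the ‘current’ answer is simply the fact with the latest timestamp (the dates below are illustrative).

```python
# Immutable, append-only facts: each ownership change is a new record.
from datetime import date

facts = [
    {"brand": "Jaguar", "owner": "Ford",        "as_of": date(1990, 1, 1)},   # dates illustrative
    {"brand": "Jaguar", "owner": "Tata Motors", "as_of": date(2008, 6, 2)},
]

def current_owner(brand: str) -> str:
    """The latest-timestamp record wins; older facts remain available as history."""
    history = [f for f in facts if f["brand"] == brand]
    return max(history, key=lambda f: f["as_of"])["owner"]

print(current_owner("Jaguar"))              # Tata Motors
print([f["owner"] for f in facts])          # full history: ['Ford', 'Tata Motors']
```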

Data Denormalization

Traditional database systems are known for their storage efficiency and data integrity, which are possible thanks to the normalization process (1NF, 2NF, 3NF, BCNF, 4NF, and 5NF). With an efficient normalization strategy, data redundancy is eliminated: the same data need not be saved in multiple places (tables), and any updates (partial or full) need not be applied in multiple places (tables). But this makes traditional databases poor at scaling their read performance, since data from multiple places (tables) must be brought together by complex and costly join operations.

For the sake of performance, Big Data systems accept denormalization and duplication of data as a fact of life, with a data schema such that the data stored is equivalent to what would be obtained after performing joins on normalized tables.

In this way, the knowledge about the schema is not necessary, and joins can be avoided, and the query results are faster. This also motivates the query-driven data modeling methodology. Albeit the data exists in multiple places after denormalization, the consistency of the data is ensured via strong consistency, timeline consistency, and eventual consistency models in the event of partial or full updates. This is often acceptable, especially when denormalized representations are used as precomputed views.

Precomputed Views

To give fast and consistent answers to queries on huge amounts of data, precomputed views are prepared both in the batch layer and in the speed layer. In the batch layer, these are constructed by applying a batch function to all the data. This leads to a transformation of the data into a more compact form suitable for answering a pre-defined set of queries. This idea is essentially the same as what is done in data warehousing.

Layers of Lambda

The Lambda Architecture solves the problem of computing arbitrary functions on arbitrary data in real-time by decomposing the problem into three layers,

  1. Batch Layer or Cold Path
  2. Speed Layer or Hot path
  3. Serving Layer

Batch layer or Cold path

The nub of the λ is the master dataset. The master dataset is the source of truth in Lambda Architecture.  The Master dataset must hold the following three properties,

  1. Data is raw.
  2. Data is immutable.
  3. Data is eternally true.

This gives the Lambda architecture the ability to reconstruct the application from the master data even if the whole serving-layer data set is lost. The batch layer pre-computes results using a distributed processing system that can handle very large quantities of data. The batch layer aims at perfect accuracy by being able to process all available data when generating views.

The batch layer prefers re-computation algorithms over incremental algorithms. The problem with incremental algorithms is the failure to address the challenges faced by human mistakes. The re-computational nature of the batch layer creates simple batch views as the complexity is addressed during precomputation. Additionally, the responsibility of the batch layer is to historically process the data with high accuracy. Machine learning algorithms take time to train the model and give better results over time. Such naturally exhaustive and time-consuming tasks are processed inside the batch layer.

The problem with the batch layer is high latency. The batch jobs must be run over the entire master dataset. These jobs can process data that can be relatively old as they cannot keep up with the inflow of stream data. This is a serious limitation for real-time data processing. To overcome this limitation, the speed layer is very significant.

Frameworks and solutions such as Hadoop MapReduce, Spark Core, Spark SQL, GraphX, and MLlib are widely adopted big-data tools for batch mode. Batch schedulers include Apache Oozie, Spring Batch, and Unix crontab, which invoke the processing at periodic intervals.

Speed layer or Streaming layer or Hot path

The real-time data processing is realized in the speed layer. The speed layer achieves up-to-date query results and compensates for the high latency of the batch layer.

To create real-time views of the most recent data, this layer sacrifices throughput and decreases latency substantially. The real-time views are generated immediately after the data is received, but they are not as complete or precise as those of the batch layer. In contrast to the re-computation approach of the batch layer, the speed layer adopts incremental computational algorithms. Since the data handled here is not complete, there is less data and therefore less computation. The incremental computation is more complex, but the data handled in the speed layer is vastly smaller and the views are transient.

Most operations on streams are windowed operations that work on slices of time, such as moving averages of a stock price every hour, top products sold this week, fraud attempts in banking, etc. Popular choices for stream-processing tools include Apache Kafka, Apache Flume, Apache Storm, Spark Streaming, Apache Flink, Amazon Kinesis, etc.
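
A toy sketch of one such windowed operation, a time-based moving average over an event stream, is shown below using a simple in-memory window; a real deployment would use the windowing primitives of Storm, Spark Streaming, Flink, or similar.

```python
# Toy time-windowed moving average over a stream of (timestamp, price) events.
from collections import deque

WINDOW_SECONDS = 3600          # one-hour sliding window

window: deque[tuple[float, float]] = deque()

def on_event(timestamp: float, price: float) -> float:
    """Add the new event, evict expired ones, return the windowed average."""
    window.append((timestamp, price))
    while window and window[0][0] < timestamp - WINDOW_SECONDS:
        window.popleft()
    return sum(p for _, p in window) / len(window)

# Simulated stream: one price tick every 20 minutes
for i, price in enumerate([101.0, 102.5, 99.8, 100.4, 103.1]):
    print(round(on_event(i * 1200, price), 2))
```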

Serving Layer

The output from both the batch and speed layers is stored in the serving layer. Pre-computed batch views are indexed in this layer for faster retrieval. All on-demand queries from the reporting or presentation layer are served by merging the batch and real-time views and returning a result.

Query = λ (Complete data) = λ (live streaming data) * λ (Stored data)

The must-haves of the serving layer are,

  • Batch writable

The batch views for a serving layer are produced from scratch. When a new version of a view becomes available, it must be possible to completely swap out the older version with the updated view.

  • Scalable

A serving layer database must be capable of handling views of arbitrary size. As with the distributed filesystems and batch computation framework previously discussed, this requires it to be distributed across multiple machines.

  • Random reads

A serving layer database must support random reads, with indexes providing direct access to small portions of the view. This requirement is necessary to have low latency on queries.

  • Fault-tolerant

Because a serving layer database is distributed, it must be tolerant of machine failures.

This is how Lambda Architecture λ handles humongous amounts of data with low latency queries in a fault-tolerant manner. Let’s see the various implementation of lambda architecture and its applications in the next part.

To be continued…

About the Author:

Bargunan is a Big Data Engineer and a programming enthusiast. His passion is to share his knowledge by writing his experiences about them. He believes “Gaining knowledge is the first step to wisdom and sharing it is the first step to humanity.”

Monitoring for Success

Do you know if your end users are happy?

(In the context of users of Applications (desktop, web or cloud-based), Services, Servers and components of IT environment, directly or indirectly.)

The question may sound trivial, but it has a significant impact on the success of a company. The user experience is a journey, from the time they use the application or service, till after they complete the interaction. Experience can be determined based on factors like Speed, Performance, Flawlessness, Ease of use, Security, Resolution time, among others. Hence, monitoring the ‘Wow’ & ‘Woe’ moments of the users is vital.

Monitor is a component of GAVS’ AIOps Platform, Zero Incident Framework™ (ZIF). One of the key objectives of the Monitor platform is to measure and improve end-user experience. This component monitors, in real time, all the layers involved in the user experience (including but not limited to application, database, server, APIs, end-points, and network devices). Ultimately, this helps to drive the environment towards Zero Incidents.

This figure shows the capability of ZIF monitoring, which cuts across all layers from the end user to storage, and how it is linked to the other components of the platform.

Key Features of ZIF Monitor are,

  • Unified solution for all IT environment monitoring needs: The platform covers the end-to-end monitoring of an IT landscape. The key focus is to ensure all verticals of IT are brought under thorough monitoring. The deeper the monitoring, the closer an organization is to attaining a Zero Incident Enterprise™.
  • Agents with self-intelligence: The intelligent agents capture various health parameters about the environment. When the target environment is already running under low resource, the agent will not task it with more load. It will collect the health-related metrics and communicate through the telemetry channel efficiently and effectively. The intelligence is applied in terms of parameters to be collected, the period of collection and many more.
  • Depth of monitoring: The core strength of Monitor is it comes with a list of performance counters which are defined by SMEs across all layers of the IT environment. This is a key differentiator; the monitoring parameters can be dynamically configured for the target environment. Parameters can be added or removed on a need basis.
  • Agent & Agentless (Remote): Customers can choose between agent-based and agentless options. The remote solution is called the Centralized Remote Monitoring Solution (CRMS). Each monitoring parameter can be remotely controlled and defined from the CRMS, and even the agents running in the target environment can be controlled from the server console.
  • Compliance: Monitor plays a key role in the compliance of the environment. Compliance ranges from ensuring the availability of necessary services and processes in the target environment to defining the standard of what application, make, version, provider, size, etc. are allowed in the target environment.
  • Auto discovery: Monitor can auto-discover the newer elements (servers, endpoints, databases, devices, etc.) that are getting added to the environment. It can automatically add those newer elements into the purview of monitoring.
  • Auto scale: Centralized Remote Monitoring Solution (CRMS) can auto-scale on its own when newer elements are added for monitoring through auto-discovery. The auto scale includes various aspects, like load on channel, load on individual polling engine, and load on each agentless solution.
  • Real-time user & Synthetic Monitoring: Real-time user monitoring observes the environment while the user is active. Synthetic monitoring works through simulated techniques: it doesn’t wait for the user to make a transaction or use the system; instead, it simulates the scenario and provides insights for proactive decisions. A simplistic sketch of such a synthetic probe appears after this list.
  • Availability & status of devices connected: Monitor also includes the monitoring of availability and control of USB and COM port devices that are connected.
  • Black box monitoring: It is not always possible to instrument the application to get insights. Hence, the black box technique is used: the application is treated as a black box and is monitored in terms of its interaction with the kernel & OS through performance counters.
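
As promised above, here is a simplistic illustration of the synthetic-monitoring idea (not the ZIF implementation itself): a scripted probe periodically simulates a user request and records its latency and status. The URL, interval, and latency threshold are placeholders.

```python
# Simplistic synthetic probe: simulate a user request, record latency and status.
# URL, interval and latency threshold are placeholders.
import time
import requests

URL = "https://example.com/login"
LATENCY_THRESHOLD_SECONDS = 2.0

def synthetic_check(url: str) -> dict:
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=10)
        latency = time.monotonic() - start
        healthy = response.ok and latency <= LATENCY_THRESHOLD_SECONDS
        return {"url": url, "status": response.status_code,
                "latency_s": round(latency, 3), "healthy": healthy}
    except requests.RequestException as err:
        return {"url": url, "status": None, "latency_s": None,
                "healthy": False, "error": str(err)}

while True:                      # in practice, a scheduler would drive this
    print(synthetic_check(URL))
    time.sleep(60)
```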

A high-level overview of Monitor’s components:

  • Agents, Agentless: These are the means through which monitoring is done in the target environment, such as user devices, servers, network devices, load balancers, virtualized environments, API layers, databases, replications, storage devices, etc.
  • ZIF Telemetry Channel: The performance telemetry collected from the sources is passed through this channel to the big data platform.
  • Telemetry Data: Refers to the performance data and other metrics collected from all over the environment.
  • Telemetry Database: This is the big data platform in which the telemetry data from all sources is captured and stored.
  • Intelligence Engine: This parses the telemetry data in near real time and raises notifications based on rule-based thresholds as well as dynamic thresholds (a toy sketch of dynamic thresholding follows this list).
  • Dashboard & Alerting Mechanism: These are the means through which the results of monitoring are conveyed, as metrics in dashboards as well as notifications.
  • Integration with Analyze, Predict & Remediate components: The monitoring module communicates the telemetry to the Analyze & Predict components of the ZIF platform, which use the data for analysis and apply Machine Learning for prediction. Both the Monitor & Predict components communicate with the Remediate platform to trigger remediation.
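
To make the dynamic-threshold idea concrete, here is a minimal sketch (not the ZIF engine itself) that flags a metric sample as anomalous when it drifts more than a few standard deviations away from its recent rolling baseline; the window size and multiplier are illustrative.

```python
# Minimal dynamic-threshold sketch: flag samples that drift far from a rolling baseline.
from collections import deque
from statistics import mean, stdev

class DynamicThreshold:
    def __init__(self, window: int = 60, k: float = 3.0):
        self.history = deque(maxlen=window)   # recent samples form the baseline
        self.k = k                            # how many std-devs count as anomalous

    def is_anomalous(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:           # need enough history to judge
            baseline, spread = mean(self.history), stdev(self.history)
            anomalous = abs(value - baseline) > self.k * max(spread, 1e-9)
        self.history.append(value)
        return anomalous

detector = DynamicThreshold()
for cpu in [22, 25, 24, 23, 26, 24, 25, 23, 22, 24, 25, 91]:   # last sample spikes
    if detector.is_anomalous(cpu):
        print("alert: cpu =", cpu)
```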

The Monitor component works in tandem with the Analyze, Predict, and Remediate components of the ZIF platform to achieve an incident-free IT environment. Implementing ZIF is the right step towards driving an enterprise to Zero Incidents. ZIF is the only platform in the industry that comes from a single product platform owner who owns the end-to-end IP of the solution, with products developed from scratch.

For more detailed information on GAVS’ Monitor, or to request a demo please visit zif.ai/products/monitor/

(To be continued…)

About the Author

Suresh Kumar Ramasamy


Suresh heads the Monitor component of ZIF at GAVS. He has 20 years of experience in Native Applications, Web, Cloud and Hybrid platforms from Engineering to Product Management. He has designed & hosted the monitoring solutions. He has been instrumental in conglomerating components to structure the Environment Performance Management suite of ZIF Monitor.

Suresh enjoys playing badminton with his children. He is passionate about gardening, especially medicinal plants.


Optimizing ITOps for Digital Transformation

The key focus of Digital Transformation is removing procedural bottlenecks and bending the curve on productivity. As the Chief Insights Officer of Forbes Media says, Digital Transformation is now “essential for corporate survival”.

Emerging technologies are enabling dramatic innovations in IT infrastructure and operations. It is no longer just about hardware, software, data centers, the cloud or the service desk; it is about backing business strategies. So, here are some reasons why companies should think about redesigning their IT services to embrace digital disruption.

DevOps for Agility

As companies move away from the traditional Waterfall model of software development and adopt Agile methodologies, IT infrastructure and operations also need to become agile and malleable. Agility has become indispensable to staying competitive in this era of dynamism and constant change. What started off as a set of software development methodologies has now permeated all aspects of an organization, ITOps being one of them. Development, QA, and IT teams need to come out of their silos and work in tandem for constant productive collaboration, in what is termed DevOps.

Shorter development & deployment cycles have necessitated overall ITOps efficiency and, among other things, on-demand, self-service IT environment provisioning. Provisioning needs to be automated and built into the CI/CD pipeline.

Downtime Mitigation

With agility being the org-wide mantra, predictable IT uptime becomes a mandate. Outages incur a very high cost and adversely affect the pace of innovation. The average cost of unplanned application downtime for Fortune 1000 companies is anywhere between $1.25 billion and $2.5 billion, says a report by DevOps.com. It further goes on to say that infrastructure failure can cost the bottom line $100,000/hr, and the cost of critical application failure is $500,000 to $1 million/hr.

ITOps must stay ahead of the game by eliminating outdated legacy systems, tools, technologies, and workflows. End-to-end automation is key. IT needs to modernize its stack by zeroing in on tools for discovery of the complete IT landscape, monitoring of devices, analytics for noise reduction and event correlation, and AI-based tools for RCA, incident prediction, and auto-remediation. All of this intelligent automation enables a proactive response, rather than a reactive response after the fact, when the damage has already been done.

Moving away from the shadows

Shadow IT, the use of technology outside the IT purview, is becoming a tacitly approved aspect of most modern enterprises. It is a result of the proliferation of technology and the cloud offering easy access to applications and storage. Users of Shadow IT systems bypass the IT approval and provisioning process to use unauthorized technology, without the consent of the IT department. There are huge security and compliance risks waiting to happen if this sprawling syndrome is not reined in. To bring Shadow IT under control, the IT department must first know about it. This is where automated discovery tools bring in a lot of value, by automating the process of application discovery and topology mapping.

Moving towards Hybrid IT

Hybrid IT means the use of an optimal, cost-effective mix of public & private clouds and on-premise systems that enables an infrastructure that is dynamic, on-demand, scalable, and composable. IT spend on datacenters is seeing a downward trend. Most organizations are thinking beyond traditional datacenters to options in the cloud. Colocation is an important consideration since it delivers better availability, energy and time savings, and scalability, and reduces the impact of network latency. Organizations are keeping only mission-critical processes that require close monitoring & control on-premise.

Edge computing

Gartner defines edge computing as solutions that facilitate data processing at or near the source of data generation. With huge volumes of data being churned out at rapid rates, for instance by monitoring or IoT devices, it is highly inefficient to stream all this data to a centralized datacenter or cloud for processing. Organizations now understand the value in a decentralized approach to address modern digital infrastructure needs. Edge computing serves as the decentralized extension of the datacenter/cloud and addresses the need for localized computing power.

CyberSecurity

Cyber attacks are on the rise, and securing networks and protecting data are posing big challenges. With Hybrid IT, IoT, Edge computing, etc., the extension of the IT footprint beyond secure enterprise boundaries has increased the number of attack target points manifold. IT teams need to be well versed in the nuances of security set-up in different cloud vendor environments. There is a lot of ambiguity in the ownership of data integrity, in the wake of data being spread across on-premise and cloud environments, shared workstations, and virtual machines. With Hybrid IT deployments, a comprehensive security plan, regardless of the data’s location, has gained paramount importance.

Upskilling IT Teams

With blurring lines between Dev and IT, there is increasing demand for IT professionals equipped with a broad range of cross-functional skills in addition to core IT competencies. With constant emergence of new technologies, there is usually not much clarity on the exact skillsets required by the IT team in an organization. More than expertise in one specific area, IT teams need to be open to continuous learning to adapt to changing IT environments, to close the skills gap and support their organization’s Digital Transformation goals.


Out of the trenches to AIOps – the Peacekeeper

The last thing an IT team wants to hear is ‘there is an issue’, which usually has them rushing to ‘battle zones’ to try and resolve it – ‘problem with the apps?’, ‘is it the network?’ – desperately trying to kill the problem while it grows larger within the enterprise. No credits for crumbling SLAs; the fire-fighting continues long and hard sometimes.

IT Operations are most times battling heavy volumes of alerts, having to deal with hundreds of incident tickets that come from the environment, from the performance of its apps and infrastructure. They are constantly overwhelmed trying to manage and respond to every alert in order to avoid the threat of outages and heavy losses.

The number of components within the infrastructure keeps increasing; today a stack can have more than 10,000 metrics, and that sort of complexity increases the potential points of failure. With the speedier change cycles enabled by DevOps, cloud computing, and so on, there really is very little time to take control or take action. Under such circumstances, AIOps is fast emerging as a powerful solution to deal with this constant battle, with the efficiency that AI and ML can bring. We are looking more and more into unsupervised methods and processes to read data and make it coherent, to ‘see the unknown unknowns’, and to remediate or bring problems into focus before they impact customers. Adopting AI into IT Operations provides increased visibility into operations through Machine Learning, a subsequent reduction in incidents and false alarms, and the advantage of predictive warnings that can do away with outages. It means insights are implemented through automation tools, saving the time and effort of the concerned teams.

With AIOps gathering and processing data, very little or almost no manual intervention is required: algorithms help automate, due diligence gets done, and rich business insights are provided. AIOps becomes the much sought-after solution to the multitudinous problems in complex IT enterprises.

“The global AIOps platform market is expected to generate a revenue of US$ 20,428 million, with a CAGR of 36.2%, by 2025,” reports Coherent Market Insights.

Gartner recommends that AIOps is adopted in phases. Early adopters typically start by applying machine learning to monitoring, operations and infrastructure data, before progressing to using deep neural networks for service and help desk automation.

The greatest strength with AIOps is that it can find all the potential risks and outages that may happen in the environment which can’t be done or anticipated by humans, and these operations can be conducted with greater consistency and time to value.

The complexity of an IT enterprise is huge, and this makes it an ideal scenario for ML, Data Science, and Artificial Intelligence to help with solutions based on specific machine learning algorithms, since it is impossible for humans to reduce such problems to simple instructions and remediations. AIOps becomes the real answer to tackling critical issues, and at the same time it eliminates the false positives that usually make up a large percentage of the ‘events’ reflected in monitoring tools.

Gartner predicted that by this year about 25% of enterprises globally would implement an AIOps platform. That obviously means increasing complexity and huge data volumes, but also deeper insights and more intelligence within the environment. Experts say this implies that AI is going to reach all the way from the device or environment to the customer.

ChatOps

AIOps is fast paced; it is believed that in the next decade the majority of large enterprises will take to ‘multi-system automations’ and will host digital colleagues – we are going to have virtual engineers to attend to queries and tasks. IT service desks are going to be ‘manned’ by digital colleagues, who will take care of frequent and mundane tasks with minimal or no human intervention. It is predicted that this year will see the emergence of ChatOps, where enterprises introduce “AI-based digital colleagues into chat-based IT Operations”, and digital colleagues will make a major impact on how IT operations function.

Establishing digital service desk bots brings speed and agility into the service. Reports say that actions which hitherto took up to 20 steps can now be accomplished with just one phrase and a couple of clarifications from the digital colleague. This saves human labor hours and channels those skills to more important areas, with mundane and frequent tasks such as password resets, catalogue requests, access requests and so forth being taken care of by digital colleagues. They can be entrusted with all incoming requests, and those which cannot be processed by them are automatically escalated to the right human engineers. Even L3 & L4 issues are expected to be resolved by digital colleagues, with workflows created by them and approved by human engineers. AI is going to keep recommending better and deeper automations, and we are going to see the true power of human/machine collaboration.

Humans will collaborate more and more with digital colleagues; change requests get created on a simple command, with resolutions delivered within minutes or assigned to human colleagues. Algorithms are expected to integrate operations more and more. Life with AI is going to make tasks easier, such as identifying and inviting the right people into root cause analysis sessions and holding post-resolution meetings to ensure continuous learning.

With AIOps, IT operations are going to reconstruct most tasks with AI and automation. It is reported that 38.4% of organizations take a minimum resolution time of 30 minutes on incidents, and adopting AIOps is definitely the key to reducing that. We may be looking at a future where we have the luxury of an autonomous data center, and human resources in IT can truly spend their time on strategic decisions and business growth, work on innovation, and become more visible contributors to the organization’s growth.

Reference
https://www.coherentmarketinsights.com/market-insight/aiops-platform-market-2073


The future of AIOps

AIOps or Artificial Intelligence based IT operations is the buzzword that’s capturing the CXO’s interest in organizations worldwide. Why? Because data explosion is here, and the traditional tools and processes are unable to completely handle its creation, storage, analysis and management. Likewise, humans are unable to thoroughly analyze this data to obtain any meaningful insights. IT teams also face the challenging task of providing speed, security and reliability in an increasingly mobile and connected world.

Add to this the complex, manual and siloed processes that legacy IT solutions impose on organizations. As a result, IT productivity remains low due to the inability to find the exact root cause of incidents, and business leaders don’t have a 360-degree view of all their IT and business services across the organization.

AIOps is the Future for IT Operations

AIOps platforms are the foundation on which organizations will build their future endeavors. Advanced machine learning and analytics are the building blocks for enhancing IT operations through a proactive approach to the service desk, monitoring and automation. Using effective data collection methods and real-time analytics, AIOps provides insights that inform business decisions.

Successful AIOps implementations depend on key performance indicators (KPIs) whose impact can be seen in performance variation, service degradation, revenue, customer satisfaction and brand image.

All of these impact the organization’s services, including but not limited to supply chain, online and digital channels. One way in which AIOps can deliver predictive and proactive IT is by increasing MTBF (mean time between failures) while decreasing MTTD (mean time to detect), MTTR (mean time to resolve) and MTTI (mean time to investigate).
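
As a simple illustration, the sketch below computes MTTD, MTTR and MTBF from a handful of hypothetical incident records; the timestamps and field names are invented for the example:

```python
from datetime import datetime

# Hypothetical incident records: when the fault occurred, when it was
# detected, and when it was resolved.
incidents = [
    {"occurred": datetime(2020, 3, 1, 9, 0),
     "detected": datetime(2020, 3, 1, 9, 12),
     "resolved": datetime(2020, 3, 1, 9, 55)},
    {"occurred": datetime(2020, 3, 4, 14, 0),
     "detected": datetime(2020, 3, 4, 14, 5),
     "resolved": datetime(2020, 3, 4, 14, 40)},
]

def mean_minutes(deltas):
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mttd = mean_minutes([i["detected"] - i["occurred"] for i in incidents])
mttr = mean_minutes([i["resolved"] - i["occurred"] for i in incidents])
# MTBF: average gap between successive failures.
gaps = [b["occurred"] - a["occurred"] for a, b in zip(incidents, incidents[1:])]
mtbf_hours = mean_minutes(gaps) / 60

print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min, MTBF: {mtbf_hours:.1f} h")
```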

The future of AIOps is already taking shape in the use cases mentioned below. These only scratch the surface, with scope for many more use cases to be added in the future.

Capacity planning

Enterprise workloads are moving to the cloud, with providers such as AWS, Google and Azure offering various configurations for running them. The complexity increases as architects add new configurations involving parameters such as disk types, memory, network and storage resources.

Through recommendations, AIOps can reduce the guesswork in aligning the actual usage of network, storage and memory resources with the right configurations of servers and VMs.
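
A minimal sketch of such a recommendation, assuming a hypothetical instance catalogue and observed peak usage figures, could look like this:

```python
# Hypothetical instance catalogue: (name, vCPUs, memory_GB, hourly_cost).
CATALOGUE = [
    ("small",  2,  4, 0.05),
    ("medium", 4,  8, 0.10),
    ("large",  8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]

def recommend_instance(peak_cpu_cores, peak_memory_gb, headroom=1.2):
    """Pick the cheapest catalogue entry that covers observed peak
    usage plus a safety headroom."""
    need_cpu = peak_cpu_cores * headroom
    need_mem = peak_memory_gb * headroom
    candidates = [c for c in CATALOGUE if c[1] >= need_cpu and c[2] >= need_mem]
    if not candidates:
        return None  # nothing in the catalogue fits; flag for manual review
    return min(candidates, key=lambda c: c[3])

# Observed peaks from monitoring: 3.1 cores and 6.5 GB of memory.
print(recommend_instance(3.1, 6.5))  # -> ('medium', 4, 8, 0.1)
```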

Optimal resource utilization

Enterprises are leveraging cloud elasticity so that their applications scale in or scale out automatically. With AIOps, IT administrators can rely on predictive scaling to take cloud auto-scaling to the next level: based on historical data, the workload monitors itself and automatically determines the resources it requires.
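
For illustration, here is a simplified sketch of predictive scaling, assuming hypothetical hourly request history and a hypothetical per-instance capacity; real implementations would use far richer forecasting models:

```python
import math

# Hypothetical hourly request counts for the last four weeks, keyed by
# (weekday, hour). In practice these would come from the monitoring store.
history = {
    (0, 9):  [1200, 1350, 1280, 1400],   # Mondays at 09:00
    (0, 10): [2100, 2300, 2250, 2400],   # Mondays at 10:00
}

REQUESTS_PER_INSTANCE = 500  # assumed capacity of a single instance

def predicted_instances(weekday, hour, safety=1.3):
    """Forecast next week's load as the average of the same slot in
    previous weeks, then size the fleet with a safety margin."""
    samples = history.get((weekday, hour), [])
    if not samples:
        return 1  # no history: fall back to a single instance
    forecast = sum(samples) / len(samples)
    return max(1, math.ceil(forecast * safety / REQUESTS_PER_INSTANCE))

print(predicted_instances(0, 10))  # scale out ahead of the Monday 10:00 peak
```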

Data store management

AIOps can also be utilized to monitor the network and storage resources that affect applications in operation. When performance degradation is detected, the admin gets notified. By using AI for both network and storage management, mundane tasks such as reconfiguration and recalibration can be automated. Through predictive analytics, storage capacity can be adjusted automatically by adding new volumes proactively.
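
A rough sketch of the proactive-volume idea, using a naive linear growth projection over hypothetical usage samples and a hypothetical volume size, might look like this:

```python
import math

# Hypothetical daily used-capacity samples for a volume group, in GB.
usage_gb = [820, 830, 845, 855, 870, 885, 900]
PROVISIONED_GB = 1000
VOLUME_SIZE_GB = 250

def volumes_to_add(usage, provisioned, horizon_days=14):
    """Fit a simple linear growth rate and add volumes before the
    projected usage crosses the provisioned capacity."""
    daily_growth = (usage[-1] - usage[0]) / (len(usage) - 1)
    projected = usage[-1] + daily_growth * horizon_days
    shortfall = projected - provisioned
    if shortfall <= 0:
        return 0
    return math.ceil(shortfall / VOLUME_SIZE_GB)

print(volumes_to_add(usage_gb, PROVISIONED_GB))  # -> 1 volume needed proactively
```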

Anomaly detection

Anomaly detection is one of the most important applications of AIOps, as it can prevent potential outages and disruptions organizations would otherwise face. Because anomalies can occur in any part of the technology stack, pinpointing them in real time using advanced analytics and machine learning is crucial. AIOps can accurately detect the actual source of a problem, which helps IT teams perform efficient root cause analysis almost in real time.
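
As one possible illustration (not the approach of any specific AIOps product), the sketch below uses scikit-learn's IsolationForest on synthetic CPU, latency and error-rate metrics to flag anomalous minutes:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-minute metrics: [cpu_percent, latency_ms, error_rate].
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(40, 5, 500),     # CPU hovers around 40%
    rng.normal(120, 15, 500),   # latency around 120 ms
    rng.normal(0.5, 0.2, 500),  # error rate around 0.5%
])
incident = np.array([[95.0, 800.0, 9.0]])  # a spike worth flagging
samples = np.vstack([normal, incident])

# Unsupervised model learns what "normal" looks like and scores outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(samples)
labels = model.predict(samples)       # -1 marks an anomaly
print(np.where(labels == -1)[0])      # indices of anomalous minutes
```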

Threat detection & analysis

Along with anomaly detection, AIOps will play a critical role in enhancing the security of IT infrastructure. Security systems can use ML algorithms and AI’s self-learning capabilities to help IT teams detect data breaches and violations. By correlating various internal sources such as log files, network logs and event logs with external information on malicious IPs and domains, AI can detect anomalies and risk events through analysis. Advanced machine learning algorithms can identify unexpected, potentially unauthorized and malicious activity within the infrastructure.
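
For illustration, the following sketch correlates hypothetical connection logs with a hypothetical threat-intelligence list and flags risk events; real systems would combine many more signals:

```python
# Hypothetical threat-intelligence feed of known malicious IPs.
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}

# Hypothetical outbound connection entries parsed from network logs.
connections = [
    {"src": "10.0.0.5", "dst": "93.184.216.34", "bytes": 4_200},
    {"src": "10.0.0.8", "dst": "203.0.113.7", "bytes": 9_800_000},
    {"src": "10.0.0.8", "dst": "203.0.113.7", "bytes": 12_500_000},
]

def risk_events(conns, volume_threshold=5_000_000):
    """Flag connections to known-bad destinations, and also flag
    unusually large transfers as potential exfiltration."""
    events = []
    for c in conns:
        reasons = []
        if c["dst"] in MALICIOUS_IPS:
            reasons.append("destination on threat-intel list")
        if c["bytes"] > volume_threshold:
            reasons.append("unusually large transfer")
        if reasons:
            events.append({**c, "reasons": reasons})
    return events

for event in risk_events(connections):
    print(event)
```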

Although still early in deployment, companies are taking advantage of AI and machine learning to improve tech support and manage infrastructure.  AIOps, the convergence of AI and IT ops, will change the face of infrastructure management.


AIOps – IT Infrastructure Services for the Digital Age

The IT infrastructure services landscape is undergoing a significant shift, driven by digitalization. As focus shifts from cost efficiency to digital enablement, organizations need to re-imagine the IT infrastructure services model to deliver the necessary back-end agility, flexibility, and fluidity. Automation, analytics, and Artificial Intelligence (AI) – which together comprise the “codifying elements” for driving AIOps – help drive this desired level of adaptability within IT infrastructure services. Intelligent automation, leveraging analytics and ML, embeds powerful, real-time business and user context and autonomy into IT infrastructure services. It has made inroads in enterprises over the last two to three years, backed by a rapid proliferation and maturation of solutions in the market.

Artificial Intelligence for IT Operations (AIOps) – Everest Group 2018 report on IT infrastructure

Benefits of codification of IT infrastructure services

Progressive leverage of analytics and AI, to drive an AIOps strategy, enables the introduction of a broader and more complex set of operational use cases into IT infrastructure services automation. As adoption levels scale and processes become orchestrated, the benefits potentially expand beyond cost savings to offer exponential value around user experience enrichment, services agility and availability, and operations resilience. Intelligent automation helps maximize value from IT infrastructure services by:

  1. Improving the end-user experience through contextual and personalized support
  2. Driving faster resolution of known/identified incidents leveraging existing knowledge, intelligent diagnosis, and reusable, automated workflows
  3. Avoiding potential incidents and improving business systems performance through contextual learning (i.e., based on relationships among systems), proactive health monitoring and anomaly detection, and preemptive healing

Although the benefits of intelligent automation are manifold, enterprises are yet to realize commensurate advantage from investments in infrastructure services codification. Siloed adoption, lack of well-defined change management processes, and poor governance are some of the key barriers to achieving the expected value.  The design should involve an optimal level of human effort/intervention targeted primarily at training, governing, and enhancing the system, rather than executing routine, voluminous tasks.  A phased adoption of automation, analytics, and AI within IT infrastructure services has the potential to offer exponential business value. However, to realize the full potential of codification, enterprises need to embrace a lean operating model, underpinned by a technology-agnostic platform. The platform should embed the codifying elements within a tightly integrated infrastructure services ecosystem with end-to-end workflow orchestration and resolution.

The market today has a wide choice of AIOps solutions, but the onus is on enterprises to select the right set of tools / technologies that align with their overall codification strategy.

Click here to read the complete whitepaper by Everest Group
