Growing Importance of Business Service Reliability

Business services are a set of business activities delivered to an outside party, such as a customer or a partner. Successful delivery of business services often depends on one or more IT services. For example, an IT service supporting the "order to cash" business process could be a "supply chain service". The supply chain service could be delivered by an application such as SAP, with the customer of that service being an employee in finance/accounting who uses the application to perform customer-facing activities such as accounts receivable, or the collection of cash from an outside party. A business service is not simply the application that the end-user sees – it is the entire chain that supports the delivery of the service, including physical and virtualized servers, databases, middleware, storage, and networks. A failure in any of these can affect the service – and so it is crucial that IT organizations have an integrated, accurate, and up-to-date view of these components and of how they work together to provide the service.

The technologies for Social Networking, Mobile Applications, Analytics, Cloud (SMAC), and Artificial Intelligence (AI) are redefining businesses and the services they provide. Their widespread usage is changing the business landscape and raising the bar for reliability and availability to levels that were unimaginable even a few years ago.

Availability versus Reliability

At first glance, it might seem that if a service has high availability then it should also have high reliability. However, this is not necessarily the case. Availability and reliability have different meanings, serve different purposes, and require different strategies to maintain desired standards of service levels. Reliability is the measure of how long a business service performs its intended function, whereas availability is the measure of the percentage of time a business service is operable. For example, a business service may be available 90% of the time, but reliable only 75% of the time from a performance standpoint. Service reliability can be seen as:

  • Probability of success
  • Durability
  • Dependability
  • Quality over time
  • Availability to perform a function

Merely having a service available isn't sufficient. When a business service is available, it should actually serve its intended purpose under varying and unexpected conditions. One way to measure this performance is to evaluate the reliability of the service that is available to consume. The performance of a business service is now rated not by its availability, but by how consistently reliable it is. Take the example of mobile services – four bars of signal strength on your smartphone do not guarantee the quality of the call you are on or about to make. Organizations need to measure how well the service fulfills the necessary business performance needs.
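To make the distinction concrete, here is a minimal, generic sketch (not tied to ZIF or any particular tool) that computes availability from MTBF/MTTR and reliability as the probability of surviving a given window, assuming exponentially distributed failures; the numbers are purely illustrative.

import math

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the service is operable."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def reliability(mission_hours: float, mtbf_hours: float) -> float:
    """Probability the service runs the full window without failure,
    assuming an exponential failure distribution."""
    return math.exp(-mission_hours / mtbf_hours)

# Illustrative numbers only: a service that fails every 200 hours on average
# and takes 2 hours to restore is ~99% available, yet has only ~89% odds of
# getting through a 24-hour window without any failure.
print(f"Availability: {availability(200, 2):.2%}")
print(f"Reliability over 24h: {reliability(24, 200):.2%}")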

Recognizing the importance of reliability, Google initiated Site Reliability Engineering (SRE) practices with a mission to protect, provide for, and progress the software and systems behind all of Google’s public services — Google Search, Ads, Gmail, Android, YouTube, and App Engine, to name just a few — with an ever-watchful eye on their availability, latency, performance, and capacity.

Zero Incident Framework™ (ZIF)

GAVS Technologies developed an AIOps based TechOps platform – Zero Incident Framework™ (ZIF) – that enables proactive detection and remediation of incidents. The ZIF Platform is available in two versions for our customers to evaluate and experience the power of AI-driven Business Service Reliability:

ZIF Business Xpress: ZIF Business Xpress has been engineered for enterprises to evaluate AIOps before adoption. 10 to 40 devices can be connected to ZIF Business Xpress to experiment with the value proposition.

ZIF Business: Targeted for enterprise-wide adoption.

For more details, please visit https://zif.ai

About the Author:

Sri Chaganty


Sri is a serial entrepreneur with over 30 years' experience delivering creative, client-centric, value-driven solutions for bootstrapped and venture-backed startups.

Automating IT ecosystems with ZIF Remediate

Alwinking N Rajamani



Zero Incident Framework™ (ZIF) is an AIOps based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™. ZIF comprises five modules, as outlined below.

This article’s focus is on the Remediate function of ZIF. Most ITSM teams envision a future of ticketless ITSM, driven by AI and Automation.

Remediate, a key module of ZIF, has more than 500 connectors to various ITSM, monitoring, security and incident management, storage/backup, and other tools. A few of the connectors that enable quick automation building are referenced below.

Key Features of Remediate

  • Truly Agent-less software.
  • 300+ readily available templates – intuitive workflow/activity-based tool for process automation from a rich repository of pre-coded activities/templates.
  • No coding or programming required to create/deploy automated workflows. Easy drag & drop to sequence activities for workflow design.
  • Workflow execution scheduling for pre-determined time or triggering from events/notifications via email or SMS alerts.
  • Can be installed on-premise or on the cloud, on physical or virtual servers
  • Self Service portal for end-users/admins/help-desk to handle tasks & remediation automatically
  • Fully automated service management life cycle from incident creation to resolution and automatic closure
  • Has integration packs for all leading ITSM tools

Key features for futuristic Automation Solutions

Although the COVID pandemic has landed us in unprecedented times, we have been able to continue supporting our customers and enabled their IT operations with ZIF Remediate.

  • Self-learning capability to deliver Predictive/Prescriptive actionable alerts.
  • Access to multiple data sources and types – events, metrics, thresholds, logs, event triggers e.g. mail or SMS.
  • Support for a wide range of automation
    • Interactive Automation – Web, SMS, and email
    • Non-interactive automation – Silent based on events/trigger points
  • Supporting a wide range of advanced heuristics.

Benefits of AIOps-driven Automation

  • Faster MTTR
  • Instant identification of threats and appropriate responses
  • Faster delivery of IT services
  • Quality services leading to Employee and Customer satisfaction
  • Fulfillment and Alignment of IT services to business performance

Interactive and Non-interactive automation

Through our automation journey so far, we have understood that the best automation empowers humans, rather than replacing them. By implementing ZIF Remediate, organizations can empower their people to focus their attention on critical thinking and value-added activities and let our platform handle mundane tasks by bringing data-driven insights for decision making.

  • Interactive Automation – Web portal, Chatbot and SMS based
  • Non-interactive automations – Event or trigger driven automation

Decision-driven Automations

ZIF Remediate offers unique interactive automation capabilities, whereas many automation tools do not allow interactive decision making. Need approvals built into an automated change management process that involves sensitive aspects of your environment? Need numerous decision points that demand expert approval or oversight? We have the solution for you. Take phishing automation as an example: a domain or IP is blocked based on insights derived by mimicking an SOC engineer's actions – parsing the observables (URLs, suspicious links or attachments in a phishing mail) and validating those observables against threat response tools, VirusTotal, and others.
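As an illustration only (this is not ZIF Remediate's actual implementation), the sketch below mimics the first step of that SOC workflow: extracting URL observables from a phishing mail and handing them to a threat-lookup step. The check_reputation function is a hypothetical stub standing in for whichever threat intelligence service is integrated.

import re
from email import message_from_string

URL_PATTERN = re.compile(r'https?://[^\s"\'<>]+')

def extract_observables(raw_mail: str) -> list[str]:
    """Pull URL observables out of a plain-text phishing mail."""
    msg = message_from_string(raw_mail)
    body = msg.get_payload()
    return sorted(set(URL_PATTERN.findall(body if isinstance(body, str) else "")))

def check_reputation(url: str) -> bool:
    # Hypothetical stub - in practice this would query the integrated threat
    # intelligence / threat response service and return True if malicious.
    return False

def triage(raw_mail: str) -> list[str]:
    """Return the observables that should be proposed for blocking."""
    return [url for url in extract_observables(raw_mail) if check_reputation(url)]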

Some of the key benefits realized by our customers – which include one of the largest manufacturing organizations, a financial services company, a large PR firm, healthcare organizations, and others – are listed below.

  • Reduction of MTTR by 30% across various service requests.
  • Reduction of 40% of incidents/tickets, thus enabling productivity improvements.
  • Ticket triaging process automation resulting in a reduction of time taken by 50%.
  • Reclaiming TBs of storage space every week through snapshot monitoring and approval-driven model for a large virtualized environment.
  • Eliminating manual threat analysis by Phishing Automation, leading to man-hours being redirected towards more critical work.
  • Reduction of potential P1 outages by 40% through self-healing automations.

For more detailed information on ZIF Remediate, or to request a demo please visit https://zif.ai/products/remediate/

About the Author:

Alwin leads the Product Engineering for ZIF Remediate and zIrrus. He has over 20 years of IT experience spanning across Program & Portfolio Management for large customer accounts of various business verticals.

In his free time, Alwin loves going for long drives, travelling to scenic locales, doing social work, and reading & meditating on the Bible.

Modern IT Infrastructure

Infrastructure today has grown beyond the physical confines of the traditional data center, has spread its wings to the cloud, and is increasingly distributed, virtual, and abstract. With the cloud gaining wide acceptance, most enterprises have their workloads spread across data centers, colocations, multi-cloud, and edge locations. On-premise infrastructure is also being replaced by Hyperconverged Infrastructure (HCI), where software-defined, virtualized compute, storage, and network are combined in one single system, greatly simplifying IT operations. Infrastructure is also becoming increasingly elastic; it scales and shrinks on demand and doesn't have to be provisioned upfront.

Let’s look at a few interesting technologies that are steering the modern IT landscape.

Containers and Serverless

Traditional application deployment on physical servers comes with the overhead of managing the infrastructure, middleware, development tools, and everything in between. Application developers would rather have this grunt work handled by someone else, so they can focus on just their applications. This is where containers and serverless technologies come into the picture. Both are cloud-based offerings that provide different levels of abstraction, hiding the layers behind the front end from the developer. They are typically used to deploy smaller components of monolithic applications, microservices, and functions.

A container is like an all-in-one box containing the app and all its dependencies like libraries, executables & config files. The containerized application is highly portable, will run anywhere the container runtime is installed, and behaves the same regardless of the OS or hardware it is deployed on. Containers give developers great flexibility and control since they cater to specific application requirements like the OS and software versions. The flip side is that there is still a need for manual maintenance of the runtime environment – security patches, software updates, etc. Secondly, the flexibility it affords translates into higher operational costs, since it lacks agility in scaling.

Serverless technologies provide much greater abstraction of the OS and infrastructure. ‘Serverless’ though, does not imply that there are no servers, it just means application developers do not have to worry about the underlying OS, the server environment, or the infra that their applications will be deployed on. Serverless is event-driven and is based on the premise that the application is split into functions that get executed based on events. The developer only needs to deploy function code and define the event(s) that will trigger them! The rest of the magic is done by the cloud service provider (with the help of third parties). 

The biggest advantage of serverless is that consumers are billed only for the running time of the function instances or the number of times the function gets executed, depending on the provider. Since it has zero administrative overhead, it guarantees rapid iterative deployment and faster time to market. Since the architecture is intrinsically auto-scaling, it is a perfect fit for applications with undefinable usage patterns. The other side of the coin is that developers need to deal with a black box back-end environment, so, holistic testing, debugging of the application becomes a challenge. Vendor lock-in is a real problem since the consumer is restricted by the technology stack supported by the vendor. Since serverless best practices dictate light, isolated functions with limited scope, building complex applications can get difficult. Function as a Service (FaaS) is a subset of serverless computing.
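For a feel of the programming model, here is a minimal AWS-Lambda-style Python handler sketch; the function is deployed on its own, and the provider invokes it whenever the configured event fires. The event shape used here is only an assumption for illustration.

import json

def handler(event, context):
    """Entry point invoked by the FaaS runtime for each triggering event.

    'event' carries the trigger payload (its shape depends on the event source);
    'context' exposes runtime metadata such as the remaining execution time.
    """
    # Illustrative: pull records out of the (assumed) event payload.
    records = event.get("Records", [])
    processed = [record.get("body", "") for record in records]

    # The return value is handed back to the caller / event source.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(processed)}),
    }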

Internet of Things (IoT)

IoT is about connecting everyday things – beyond just computing devices or smartphones – to the internet. It is possible to convert practically anything into an IoT device, with a computer chip installation & internet access, and have it communicate independently with the internet – without any human intervention. But why would we want everyday things like for instance a watch or a light bulb, to become IoT devices? It’s in a bid to bridge the chasm between the physical and digital worlds and make the environment around us more intelligent, communicative, and responsive to our needs.

IoT’s use cases are just about everywhere; in personal devices, self-driving cars, smart homes, smart workspaces, smart cities, and industries across all verticals. For instance, live data from sensors in products while in use, gives good visibility into their operations on the ground, helps remediate issues proactively & aids improvements in design/manufacturing processes.

The Industrial Internet of Things (IIoT) is the use of IoT data in business, in tandem with Big Data, AI, Analytics, Cloud, and High-speed networks, with the primary goal of finding efficient business models to improve productivity & optimize expenditure. The need for real-time response to sensor data and advanced analytics to power insights has increased the demand for 5G networks for speed, cloud technologies for storage and computing, edge computing to reduce latency, and hyper-scale data centers for rapid scaling.

IoT devices extend an organization's infrastructure landscape, and the likelihood that IT staff may not even be aware of all the IoT devices in it is a security nightmare that could open up corporate networks & sensitive data to attacks. Global standards and regulations for IoT device security are in the works. Until then, it is up to the enterprise security team to safeguard against IoT-related vulnerabilities.

Hyperscaling

The ability of infrastructure to rapidly scale out on a massive level is called hyperscaling.

Unprecedented needs for high-power computing and on-demand massive scalability have given rise to a new breed of hyperscale computing architectures, where traditional elements are replaced by hyper-converged, software-defined infrastructure with a high degree of virtualization. These hyperscale environments are characterized by high-density server racks, with software designed and specifically built for scale-out environments. Since high density implies heavy power consumption, heating problems need to be handled by specialized cooling solutions like liquid cooling. Hyperscale data center operators usually look for renewable energy options to save on power & cooling.

Today, there are several hundred hyperscale data centers in the world, with the dominant players being Microsoft, Google, Apple, Amazon & Facebook.

Edge Computing

Edge computing, as the name indicates, means moving data processing away from distant servers or the cloud, closer to the source of data. This reduces latency and the network bandwidth used for back & forth communication between the data source and the server. The edge, also called the network edge, refers to where the data source connects to the internet. The explosive growth of IoT and applications like self-driving cars, virtual reality, and smart cities that require real-time computing and analytics is paving the way for edge computing. Most cloud providers now provide geographically distributed edge servers. As with IoT devices, data at the edge can be a ticking security time bomb necessitating appropriate security mechanisms.

The evolution of IT technologies continuously raises the bar for the IT team. IT personnel have been forced to move beyond legacy practices and mindsets & constantly up-skill themselves to be able to ride the wave. For customers pampered by sophisticated technologies, round-the-clock availability of systems and immersive experiences have become baseline expectations. With more & more digitalization, there is increasing reliance on IT infrastructure and hence less tolerance for outages. The responsibility of maintaining a high-performing IT infrastructure with near-zero downtime falls on the shoulders of the IT operations team.

This has underscored the importance of AI in IT operations, since IT needs have now surpassed human capabilities. GAVS' AI-powered platform for IT operations, ZIF, caters to the entire ITOps spectrum, right from automated discovery of the landscape and monitoring, to predictive and prescriptive analytics that proactively drive the organization towards zero incidents. For more details, please visit https://zif.ai

About the Author:

Padmapriya Sridhar

Priya is part of the Marketing team at GAVS. She is passionate about Technology, Indian Classical Arts, Travel, and Yoga. She aspires to become a Yoga Instructor someday!

Prediction for Business Service Assurance

Artificial Intelligence for IT operations or AIOps has exploded over the past few years. As more and more enterprises set about their digital transformation journeys, AIOps becomes imperative to keep their businesses running smoothly. 

AIOps uses several technologies like Machine Learning and Big Data to automate the identification and resolution of common Information Technology (IT) problems. The systems, services, and applications in a large enterprise produce volumes of log and performance data. AIOps uses this data to monitor the assets and gain visibility into the behaviour and dependencies among these assets.

According to a Gartner publication, the adoption of AIOps by large enterprises would rise to 30% by 2023.

ZIF – The ideal AIOps platform of choice

Zero Incident Framework™ (ZIF) is an AIOps based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™.

ZIF comprises five modules, as outlined below.

At the heart of ZIF, lies its Analyze and Predict (A&P) modules which are powered by Artificial Intelligence and Machine Learning techniques. From the business perspective, the primary goal of A&P would be 100% availability of applications and business processes.

Let us understand more about the Predict module of ZIF.

Predictive Analytics is one of the main USPs of the ZIF platform. ZIF encompasses Supervised, Unsupervised, and Reinforcement Learning algorithms for the realization of various business use cases (as shown below).

How does the Predict Module of ZIF work?

Through its data ingestion capabilities, the ZIF platform can receive and process all types of data (both structured and unstructured) from various tools in the enterprise. The types of data can relate to alerts, events, logs, performance of devices, relations of devices, workload topologies, network topologies, etc. By analyzing all this data, the platform predicts the anomalies that can occur in the environment. These anomalies are presented as 'Opportunity Cards' so that suitable action can be taken ahead of time to prevent undesired incidents from occurring. Since this is 'proactive' and not 'reactive', it brings about a paradigm shift in any organization's endeavour to achieve 100% availability of their enterprise systems and platforms. Predictions are done at multiple levels – application level, business process level, device level, etc.
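ZIF's actual models are proprietary; purely as a stand-in to illustrate the idea of surfacing anomalies ahead of time as 'Opportunity Cards', the sketch below flags metric samples that drift several standard deviations away from a rolling baseline.

from statistics import mean, stdev

def opportunity_cards(samples, window=30, threshold=3.0):
    """Flag samples that deviate strongly from the trailing baseline.

    'samples' is a list of (timestamp, value) pairs for one device metric.
    Returns a list of dicts, each loosely resembling an 'Opportunity Card'.
    """
    cards = []
    for i in range(window, len(samples)):
        baseline = [value for _, value in samples[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        timestamp, value = samples[i]
        if sigma and abs(value - mu) / sigma >= threshold:
            cards.append({
                "timestamp": timestamp,
                "metric_value": value,
                "deviation_sigmas": round(abs(value - mu) / sigma, 1),
            })
    return cards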

Sub-functions of Prediction Module

How does the Predict module manifest to enterprise users of the platform?

The Predict module categorizes Opportunity Cards into three swim lanes; a minimal sketch of this categorization follows the list below.

  1. Warning swim lane – Opportunity Cards that have an “Expected Time of Impact” (ETI) beyond 60 minutes.
  2. Critical swim lane – Opportunity Cards that have an ETI within 60 minutes.
  3. Processed / Lost – Opportunity Cards that have been processed or lost without taking any action.
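Here is that sketch; the card's field names ('eti', 'processed') are assumptions made for illustration, and the 60-minute boundary comes from the list above.

from datetime import datetime, timedelta
from typing import Optional

def swim_lane(card: dict, now: Optional[datetime] = None) -> str:
    """Place an Opportunity Card into one of the three swim lanes.

    'card' is assumed to carry an 'eti' (Expected Time of Impact) datetime
    and a boolean 'processed' flag - the field names are illustrative.
    """
    now = now or datetime.utcnow()
    if card.get("processed") or card["eti"] <= now:
        return "Processed / Lost"
    return "Critical" if card["eti"] - now <= timedelta(minutes=60) else "Warning"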

A few of the enterprises that have realized the power of ZIF's Prediction Module:

  • A manufacturing giant in the US
  • A large non-profit mental health and social service provider in New York
  • A large mortgage loan service provider in the US
  • Two of the largest private sector banks in India

For more detailed information on ZIF Predict, or to request a demo, please visit https://zif.ai/products/predict/

References: https://www.gartner.com/smarterwithgartner/how-to-get-started-with-aiops/

About the Author:

Vasudevan Gopalan

Vasu heads Engineering function for A&P. He is a Digital Transformation leader with ~20 years of IT industry experience spanning across Product Engineering, Portfolio Delivery, Large Program Management etc. Vasu has designed and delivered Open Systems, Core Banking, Web / Mobile Applications etc.

Outside of his professional role, Vasu enjoys playing badminton and focusses on fitness routines.

Discover, Monitor, Analyze & Predict COVID-19

"Uber, the world's largest taxi company, owns no vehicles. Facebook, the world's most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. Netflix, the world's largest movie house, owns no cinemas. And Airbnb, the world's largest accommodation provider, owns no real estate. Something interesting is happening."

– Tom Goodwin, an executive at the French media group Havas.

This new breed of companies is the fastest growing in history because they own the customer interface layer. That is the platform where all the value and profit is. "Platform business" is a more holistic term for this model, for which data is the fuel; Big Data & AI/ML technologies are the harbinger of new waves of productivity growth and innovation.

With Big Data and AI/ML making a big difference in the area of public health, let's see how they are helping us tackle the global emergency of the coronavirus disease, formally known as COVID-19.

“With rapidly spreading disease, a two-week lag is an eternity.”

DISCOVERING/ DETECTING

Chinese technology giant Alibaba has developed an AI system for detecting COVID-19 in CT scans of patients' chests, with 96% accuracy in distinguishing it from viral pneumonia cases. It takes only 20 seconds for the AI to decide, whereas humans generally take about 15 minutes to diagnose the illness, as there can be upwards of 300 images to evaluate. The system was trained on images and data from 5,000 confirmed coronavirus cases and has been tested in hospitals throughout China. Per a report, at least 100 healthcare facilities are currently employing Alibaba's AI to detect COVID-19.

Ping An Insurance (Group) Company of China, Ltd (Ping An) aims to address the issue of lack of radiologists by introducing the COVID-19 smart image-reading system. This image-reading system can read the huge volumes of CT scans in epidemic areas.

Ping An Smart Healthcare uses clinical data to train the AI model of the COVID-19 smart image-reading system. The AI analysis engine conducts a comparative analysis of multiple CT scan images of the same patient and measures the changes in lesions. It helps in tracking the development of the disease, evaluation of the treatment, and prognosis of patients. Ultimately, it assists doctors to diagnose, triage, and evaluate COVID-19 patients swiftly and effectively.

Ping An Smart Healthcare's COVID-19 smart image-reading system also supports AI image-reading remotely by medical professionals outside the epidemic areas. Since its launch, the smart image-reading system has provided services to more than 1,500 medical institutions. More than 5,000 patients have received smart image-reading services for free.

The more solutions the better. At least when it comes to helping overwhelmed doctors provide better diagnoses and, thus, better outcomes.

MONITORING

  • AI based Temperature monitoring & scanning

In Beijing, China, subway passengers are being screened for symptoms of coronavirus, but not by health authorities. Instead, artificial intelligence is in charge.

Two Chinese AI giants, Megvii and Baidu, have introduced temperature-scanning. They have implemented scanners to detect body temperature and send alerts to company workers if a person’s body temperature is high enough to constitute a fever.

Megvii's AI system detects body temperatures for up to 15 people per second and at distances of up to 16 feet. It monitors as many as 16 checkpoints in a single station. The system integrates body detection, face detection, and dual sensing via infrared cameras and visible light. The system can accurately detect and flag high body temperature even when people are wearing masks, hats, or covering their faces with other items. Megvii's system also sends alerts to an on-site staff member.

Baidu, one of the largest search-engine companies in China, screens subway passengers at the Qinghe station with infrared scanners. It also uses a facial-recognition system, taking photographs of passengers' faces. If the Baidu system detects a body temperature of at least 99 degrees Fahrenheit, it sends an alert to a staff member for another screening. The technology can scan the temperatures of more than 200 people per minute.

  • AI based Social Media Monitoring

An international team is using machine learning to scour social media posts, news reports, data from official public health channels, and information supplied by doctors for warning signs of the virus across geographies. The program looks for social media posts that mention specific symptoms, like respiratory problems and fever, from a geographic area where doctors have reported potential cases. Natural language processing is used to parse the text posted on social media, for example, to distinguish between someone discussing the news and someone complaining about how they feel.

The approach has proven capable of spotting a coronavirus needle in a haystack of big data. This technique could help experts learn how the virus behaves. It may be possible to determine the age, gender, and location of those most at risk quicker than using official medical sources.

PREDICTING

Data from hospitals, airports, and other public locations are being used to predict disease spread and risk. Hospitals can also use the data to plan for the impact of an outbreak on their operations.

Kalman Filter

The Kalman filter was pioneered by Rudolf Emil Kalman in 1960, originally designed and developed to solve the navigation problem in the Apollo Project. Since then, it has been applied to numerous cases such as guidance, navigation, and control of vehicles, object tracking in computer vision, trajectory optimization, time series analysis in signal processing, econometrics, and more.

The Kalman filter is a recursive algorithm that uses a series of measurements observed over time, containing statistical noise, to produce estimates of unknown variables.

The Kalman filter can be used for the one-day prediction, while a linear model is used for the long-term forecast, with its main features being Kalman predictors, the infection rate relative to the population, time-dependent features, and weather history and forecasts.

The one-day Kalman prediction is very accurate and powerful, while a longer-period prediction is more challenging but provides a future trend. Long-term prediction does not guarantee full accuracy but provides a fair estimation following the recent trend. The model should be re-run daily to gain better results.
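The snippet below is a generic one-dimensional Kalman filter sketch for smoothing a noisy daily case-count series; it is not the implementation from the repository linked below, and the noise parameters and sample data are illustrative. Under the constant-level model assumed here, the last filtered estimate doubles as a naive one-day-ahead prediction.

import numpy as np

def kalman_1d(observations, process_var=1.0, obs_var=10.0):
    """Filter a noisy 1-D series and return the per-step state estimates.

    A constant-level model is assumed: the next state equals the current
    state plus process noise.
    """
    x, p = observations[0], 1.0            # initial state estimate and variance
    estimates = []
    for z in observations:
        p = p + process_var                # predict: uncertainty grows
        k = p / (p + obs_var)              # Kalman gain
        x = x + k * (z - x)                # update with the new observation
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Illustrative daily counts with noise; the final estimate serves as the
# one-day-ahead prediction under the constant-level assumption.
daily_cases = np.array([100, 120, 160, 150, 210, 260, 240, 310], dtype=float)
filtered = kalman_1d(daily_cases)
print(f"One-day-ahead estimate: {filtered[-1]:.0f}")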

GitHub Link: https://github.com/Rank23/COVID19

ANALYZING

The Center for Systems Science and Engineering at Johns Hopkins University has developed an interactive, web-based dashboard that tracks the status of COVID-19 around the world. The resource provides a visualization of the location and number of confirmed COVID-19 cases, deaths and recoveries for all affected countries.

The primary data source for the tool is DXY, a Chinese platform that aggregates local media and government reports to provide COVID-19 cumulative case totals in near real-time at the province level in China and country level otherwise. Additional data comes from Twitter feeds, online news services and direct communication sent through the dashboard. Johns Hopkins then confirms the case numbers with regional and local health departments. This kind of Data analytics platform plays a pivotal role in addressing the coronavirus outbreak.

All data from the dashboard is also freely available in the following GitHub repository.

GitHub Link: https://bit.ly/2Wmmbp8

Mobile version: https://bit.ly/2WjyK4d

Web version: https://bit.ly/2xLyT6v

Conclusion

One of AI's core strengths when working on identifying and limiting the effects of virus outbreaks is its relentless nature. AI systems never tire, can sift through enormous amounts of data, and identify possible correlations and causations that humans can't.

However, there are limits to AI's ability to both identify virus outbreaks and predict how they will spread. Perhaps the best-known example comes from the neighboring field of big data analytics. At its launch, Google Flu Trends was heralded as a great leap forward in identifying and estimating the spread of the flu – until it overestimated the 2013 flu season by a whopping 140 percent and was quietly put to rest. Poor data quality was identified as one of the main reasons Google Flu Trends failed. Unreliable or faulty data can wreak havoc on the prediction power of AI.


About the Author:

Bargunan Somasundaram


Bargunan is a Big Data Engineer and a programming enthusiast. His passion is to share his knowledge by writing his experiences about them. He believes “Gaining knowledge is the first step to wisdom and sharing it is the first step to humanity.”

AI in Healthcare

The Healthcare Industry is going through a quiet revolution. Factors like disease trends, doctor demographics, regulatory policies, environment, technology etc. are forcing the industry to turn to emerging technologies like AI, to help adapt to the pace of change. Here, we take a look at some key use cases of AI in Healthcare.

Medical Imaging

The application of Machine Learning (ML) in Medical Imaging is showing highly encouraging results. ML is a subset of AI, where algorithms and models are used to help machines imitate the cognitive functions of the human brain and to also self-learn from their experiences.

AI can be gainfully used in the different stages of medical imaging – in acquisition, image reconstruction, processing, interpretation, storage, data mining & beyond. The performance of ML computational models improves tremendously as they get exposed to more & more data, and this foundation on colossal amounts of data enables them to gradually outperform humans at interpretation. They begin to detect anomalies not perceptible to the human eye & not discernible to the human brain!

What goes hand-in-hand with data is noise. Noise creates artifacts in images and reduces their quality, leading to inaccurate diagnosis. AI systems work through the clutter and aid noise reduction, leading to better precision in diagnosis, prognosis, staging, segmentation and treatment.

At the forefront of this use case is radiogenomics – correlating cancer imaging features and gene expression. Needless to say, this will play a pivotal role in cancer research.

Drug Discovery

Drug Discovery is an arduous process that takes several years from the start of research to obtaining approval to market. Research involves laboring through copious amounts of medical literature to identify the dynamics between genes, molecular targets, pathways, and candidate compounds. Sifting through all of this complex data to arrive at conclusions is an enormous challenge. When this voluminous data is fed to ML computational models, relationships are reliably established. AI powered by domain knowledge is slashing the time & cost involved in new drug development.

Cybersecurity in Healthcare

Data security is of paramount importance to healthcare providers, who need to ensure confidentiality, integrity, and availability of patient data. With cyberattacks increasing in number and complexity, these formidable threats are giving security teams sleepless nights! The main strength of AI is its ability to curate massive quantities of data – here, threat intelligence – nullify the noise, provide instant insights & self-learn in the process. The predictive & prescriptive capabilities of these computational models drastically reduce response time.

Virtual Health assistants

Virtual health assistants like chatbots give patients 24/7 access to critical information, in addition to offering services like scheduling health check-ups or setting up appointments. AI-based platforms for wearable health devices and health apps come armed with loads of features to monitor health signs, daily activities, diet, sleep patterns etc. and provide alerts for immediate action or suggest personalized plans to enable healthy lifestyles.

AI for Healthcare IT Infrastructure

Healthcare IT Infrastructure running critical applications that enable patient care is the heart of a Healthcare provider. With dynamically changing IT landscapes that are distributed, hybrid & on-demand, IT Operations teams are finding it hard to keep up. Artificial Intelligence for IT Ops (AIOps) is poised to fundamentally transform the Healthcare Industry. It is powering Healthcare Providers across the globe, who are adopting it to Automate, Predict, Remediate & Prevent Incidents in their IT Infrastructure. GAVS' Zero Incident Framework™ (ZIF) – an AIOps platform – is a pure-play AI platform based on unsupervised Machine Learning and comes with the full suite of tools an IT Infrastructure team would need. Please watch this video to learn more.


Disaster Recovery for Modern Digital IT

A Disaster Recovery strategy includes policies, tools and processes for recovery of data and restoration of systems in the event of a disruption. The cause of disruption could be natural, like earthquakes/floods, or man-made like power outages, hardware failures, terror attacks or cybercrimes. The aim of Disaster Recovery (DR) is to enable rapid recovery from the disaster to minimize data loss, extent of damage, and disruption to business. DR is often confused with Business Continuity Planning (BCP). While BCP ensures restoration of the entire business, DR is a subset of that, with focus on IT infrastructure, applications and data.

IT disasters come at the cost of lost revenue, tarnished brand image, lowered customer confidence and even legal issues relating to data privacy and compliance. The impact can be so debilitating that some companies never fully recover from it. With the average cost of IT downtime running to thousands of dollars per minute, it goes without saying that an enterprise-grade disaster recovery strategy is a must-have.

Why do companies neglect this need?

In spite of the obvious consequences of a disaster, many organizations shy away from investing in a DR strategy due to the associated expenditure. Without a clear ROI in sight, these organizations decide to risk the vulnerability to catastrophic disruptions. They instead make do with just data backup plans or secure only some of the most critical elements of their IT landscape.

Why is Disaster Recovery different today?

The ripple effects of modern digital infrastructure have forced an evolution in DR strategies. Traditional Disaster Recovery methods are being overhauled to cater to the new hybrid IT infrastructure environment. Some influencing factors:

  • The modern IT Landscape

    • Infrastructure – Today's IT environment is distributed between on-premise, colocation facilities, public/private cloud, as-a-service offerings, and edge locations. Traditional data centers are losing their prominence and are having to share their monopoly with these modern technologies. This trend has significant advantages such as reduced CapEx in establishing data centers, reduced latency because of data being closer to the user, and high dynamic scalability.

    • Data – Adding to the complexity of modern digital infrastructure is the exponential growth in data from varied sources and of disparate types like big data, mobile data, streaming content, data from cloud, social media, edge locations, and IoT, to name a few.

  • Applications – The need for agility has triggered the shift away from monolith applications towards microservices that typically use containers to provide their execution environment. Containers are ephemeral and so scale, shrink, disappear or move between nodes based on demand.
  • While innovation in IT helps digital transformation in unimaginable ways, it also makes it that much harder for IT teams to formulate a disaster recovery strategy for today’s IT landscape that is distributed, mobile, elastic and transient.
  • Cybercrimes are becoming increasingly prevalent and are a big threat to organizations. Modern technologies fuel increasing sophistication in malware and ransomware. As their complexity increases, they are becoming harder to even detect while they lie low and do their harm quietly inside the environment. By the time they are detected, the damage is done and it's too late. DR strategies are also constantly challenged by the lucrative underworld of ransomware.

Solution Strategies for Disaster Recovery

  • On-Premise DR: This is the traditional option that translates to heavy upfront investments towards the facility, securing the facility, infrastructure including network connectivity/firewalls/load balancers, resources to scale as needed, manpower, test drills, ongoing management and maintenance, software licensing costs, periodic upgrades for ongoing compatibility with the production environment, and much more.

A comprehensive DR strategy involves piecing together several pieces of a complex puzzle. Due to the staggering costs and time involved in provisioning and managing infra for the duplicate storage and compute, companies are asking themselves if it is really worth the investment, and are starting to explore more OpEx based solutions. And, they are discovering that the cloud may be the answer to this challenge of evolving infra, offering cost-effective top-notch resiliency.

  • Cloud-based DR: The easy availability of public cloud infrastructure & services, with affordable monthly subscription plans and pay-per-use rates, has caused an organic switch to the cloud for storage, infra, and as-a-Service (aaS) needs. To complement this, replication techniques have also evolved to enable cloud replication. With backup on the cloud, the recovery environment needs to be paid for only when used in the event of a disaster!

Since maintaining the DR site is the vendor’s responsibility, it reduces the complexity in managing the DR site and the associated operating expenses as well. Most DR requirements are intrinsically built into cloud solutions: redundancy, advanced networks, bandwidth, scalability, security & compliance. These can be availed on demand, as necessitated by the environment and recovery objectives. These features have made it feasible for even small businesses to acquire DR capabilities.

Disaster Recovery-as-a-Service (DRaaS), which is fast gaining popularity, is a DR offering on the cloud, where the vendor manages the replication, failover and failback mechanisms as needed for recovery, based on an SLA-driven service contract.

On the flip side, as cloud adoption becomes more and more prevalent, there are also signs of a reverse drain back to on-premise! Over time, customers are noticing that they are bombarded by hefty cloud usage bills, way more than what they had bargained for. There is a steep learning curve in assimilating the nuances of new cloud technologies and the innumerable options they offer. It is critical for organizations to clearly evaluate their needs, narrow down on reliable vendors with mature offerings, understand their feature set and billing nitty-gritties, and finalize the best fit for their recovery goals. So, it is Cloud, but with Caution!

  • Integrating DR with the Application: Frank Jablonski, VP of Global Marketing, SIOS Technology Corp, predicts that applications will soon have Disaster Recovery architected into their core, as a value-add. Cloud-native implementations will leverage the resiliency features of the cloud to deliver this value.

The Proactive Approach

Needless to say, investing in a proactive approach for disaster prevention will help mitigate the chances of a disaster in the first place. One sure-fire way to optimize IT infrastructure performance, prevent certain types of disasters, and enhance business service continuity is to use AI-augmented ITOps platforms to manage the IT environment. GAVS' AIOps platform, Zero Incident Framework™ (ZIF), has modules powered by Advanced Machine Learning to Discover, Monitor, Analyze, Predict, and Remediate, helping organizations drive towards a Zero Incident Enterprise™. For more information, please visit the ZIF website.


Analyze

Have you heard of AIOps?

Artificial intelligence for IT operations (AIOps) is an umbrella term for the application of Big Data Analytics, Machine Learning (ML) and other Artificial Intelligence (AI) technologies to automate the identification and resolution of common Information Technology (IT) problems. The systems, services and applications in a large enterprise produce immense volumes of log and performance data. AIOps uses this data to monitor the assets and gain visibility into the working behaviour and dependencies between these assets.

According to a Gartner study, the adoption of AIOps by large enterprises would rise to 30% by 2023.

ZIF – The ideal AIOps platform of choice

Zero Incident Framework™ (ZIF) is an AIOps based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™.

ZIF comprises five modules, as outlined below.

At the heart of ZIF, lies its Analyze and Predict (A&P) modules which are powered by Artificial Intelligence and Machine Learning techniques. From the business perspective, the primary goal of A&P would be 100% availability of applications and business processes.

Come, let us understand more about the Analyze function of ZIF.

With Analyze having a Big Data platform under its hood, volumes of raw monitoring data, both structured and unstructured, can be ingested and grouped to build linkages and identify failure patterns.

Data Ingestion and Correlation of Diverse Data

The module processes a wide range of data from varied data sources to break siloes while providing insights, exposing anomalies and highlighting risks across the IT landscape. It increases productivity and efficiency through actionable insights.

  • 100+ connectors for leading tools, environments and devices
  • Correlation and aggregation methods uncover patterns and relationships in the data

Noise Nullification

Eliminates duplicate incidents, false positives and any alerts that are insignificant. This also helps reduce the Mean-Time-To-Resolution and event-to-incident ratio.

  • Deep learning algorithms isolate events that have the potential to become incidents along with their potential criticality
  • Correlation and aggregation methods group alerts and incidents that are related and need a common remediation
  • Reinforcement learning techniques are applied to find and eliminate false positives and duplicates

Event Correlation

Data from various sources is ingested into ZIF in real time, either by push or pull mechanisms. As the data is ingested, labelling algorithms are run to label the data based on identifiers. The labelled data is passed through the correlation engine, where unsupervised algorithms are run to mine the patterns. Sub-sequence mining algorithms help in identifying unique patterns from the data.

Unique patterns identified are clustered using clustering algorithms to form cases. Every case that is generated is marked by a unique case id. As part of the clustering process, seasonality aspects are checked from historical transactions to derive higher accuracy of correlation.

Correlation is done based on pattern recognition, helping to eliminate the need for a relational CMDB in the enterprise. The accuracy of the correlation increases as patterns reoccur. The algorithms can also unlearn patterns based on feedback from actions taken on correlations. As these are unsupervised algorithms, the patterns are learnt with zero human intervention.
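As a heavily simplified stand-in for those unsupervised correlation algorithms (ZIF's actual sub-sequence mining and clustering are far more involved), the sketch below groups labelled alerts into cases whenever they share a pattern label and arrive within a common time window.

from collections import defaultdict
from datetime import timedelta

def correlate(alerts, window=timedelta(minutes=15)):
    """Group alerts into cases by (pattern label, time window).

    'alerts' is a list of dicts with a 'label' (e.g. a pattern identifier
    assigned by a labelling step) and a 'timestamp' (datetime).
    Returns a mapping of case id -> list of alerts.
    """
    cases, case_id = {}, 0
    by_label = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        by_label[alert["label"]].append(alert)

    for label, group in by_label.items():
        current, start = [], None
        for alert in group:
            if start is None or alert["timestamp"] - start <= window:
                current.append(alert)
                start = start or alert["timestamp"]
            else:
                case_id += 1
                cases[f"case-{case_id}"] = current
                current, start = [alert], alert["timestamp"]
        if current:
            case_id += 1
            cases[f"case-{case_id}"] = current
    return cases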

Accelerated Root Cause Analysis (RCA)

The Analyze module helps in identifying the root causes of incidents even when they occur in different silos. A combination of correlation algorithms and unsupervised deep learning techniques aids in accurately nailing down the root causes of incidents/problems. Learnings from historical incidents are also applied to find root causes in real time. The platform retraces user journeys step-by-step to identify the exact point where an error occurs.

Customer Success Story – How ZIF’s A&P transformed IT Operations of a Manufacturing Giant

  • Seamless end-to-end monitoring – OS, DB, Applications, Networks
  • Helped achieve more than 50% noise reduction in 6 months
  • Reduced P1 incidents by ~30% through dynamic and deep monitoring
  • Achieved declining trend of MTTR and an increasing trend of Availability
  • Resulted in optimizing command centre/operations head count by ~50%
  • Resulted in ~80% reduction in operations TCO

For more detailed information on GAVS’ Analyze, or to request a demo please visit zif.ai/products/analyze

References: www.gartner.com/smarterwithgartner/how-to-get-started-with-aiops

ABOUT THE AUTHOR

Vasudevan Gopalan


Vasu heads Engineering function for A&P. He is a Digital Transformation leader with ~20 years of IT industry experience spanning across Product Engineering, Portfolio Delivery, Large Program Management etc. Vasu has designed and delivered Open Systems, Core Banking, Web / Mobile Applications etc.

Outside of his professional role, Vasu enjoys playing badminton and focusses on fitness routines.


Monitoring Microservices and Containers

Monitoring applications and infrastructure is a critical part of IT Operations. Among other things, monitoring provides alerts on failures, alerts on deteriorations that could potentially lead to failures, and performance data that can be analysed to gain insights. AI-led IT Ops Platforms like ZIF use such data from their monitoring component to deliver pattern recognition-based predictions and proactive remediation, leading to improved availability, system performance and hence better user experience.

The shift away from monolith applications towards microservices has posed a formidable challenge for monitoring tools. Let’s first take a quick look at what microservices are, to understand better the complications in monitoring them.

Monoliths vs Microservices

A single application (monolith) is split into a number of modular services called microservices, each of which typically caters to one capability of the application. These microservices are loosely coupled, can communicate with each other, and can be deployed independently.

Quite likely the trigger for this architecture was the need for agility. Since microservices are stand-alone modules, they can follow their own build/deploy cycles enabling rapid scaling and deployments. They usually have a small codebase which aids easy maintainability and quick recovery from issues. The modularity of these microservices gives complete autonomy over the design, implementation and technology stack used to build them.
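To make this concrete, a single-capability microservice can be as small as the sketch below; Flask is used purely for illustration, and the endpoints and data are hypothetical. The service owns one function, exposes a health endpoint for the platform, and can be built, deployed, and scaled independently of the rest of the application.

from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory store; a real service would own its own data store.
REWARDS = {"emp-001": 120, "emp-002": 340}

@app.route("/health")
def health():
    """Liveness endpoint used by the orchestrator / monitoring layer."""
    return jsonify(status="ok")

@app.route("/rewards/<employee_id>")
def rewards(employee_id):
    """The single business capability this microservice owns."""
    return jsonify(employee_id=employee_id, points=REWARDS.get(employee_id, 0))

if __name__ == "__main__":
    app.run(port=8080)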

Microservices run inside containers that provide their execution environment. Although microservices could also be run in virtual machines (VMs), containers are preferred since they are comparatively lightweight as they share the host's operating system, unlike VMs. Docker and CoreOS Rkt are a couple of commonly used container solutions while Kubernetes, Docker Swarm, and Apache Mesos are popular container orchestration platforms. The image below depicts microservices for hiring, performance appraisal, rewards & recognition, payroll, analytics and the like linked together to deliver the HR function.

Challenges in Monitoring Microservices and Containers

Since all good things come at a cost, you are probably wondering what it is here… well, the flip side to this evolutionary architecture is increased complexity! These are some contributing factors:

Exponential increase in the number of objects: With each application replaced by multiple microservices, 360-degree visibility and observability into all the services, their interdependencies, their containers/VMs, communication channels, workflows and the like can become very elusive. When one service goes down, the environment gets flooded with notifications not just from the service that is down, but from all services dependent on it as well. Sifting through this cascade of alerts, eliminating noise and zeroing in on the crux of the problem becomes a nightmare.

Shared Responsibility: Since processes are fragmented and the responsibility for their execution, like for instance a customer ordering a product online, is shared amongst the services, basic assumptions of traditional monitoring methods are challenged. The lack of a simple linear path, the need to collate data from different services for each process, inability to map a client request to a single transaction because of the number of services involved make performance tracking that much more difficult.

Design Differences: Due to the design/implementation autonomy that microservices enjoy, they could come with huge design differences and be implemented using different technology stacks. They might use open source or third-party software that makes it difficult to instrument their code, which in turn affects their monitoring.

Elasticity and Transience: Elastic landscapes where infrastructure scales or collapses based on demand, instances appear & disappear dynamically, have changed the game for monitoring tools. They need to be updated to handle elastic environments, be container-aware and stay in-step with the provisioning layer. A couple of interesting aspects to handle are: recognizing the difference between an instance that is down versus an instance that is no longer available; data of instances that are no longer alive continue to have value for analysis of operational efficiency or past performance.

Mobility: This is another dimension of dynamic infra where objects don’t necessarily stay in the same place, they might be moved between data centers or clouds for better load balancing, maintenance needs or outages. The monitoring layer needs to arm itself with new strategies to handle moving targets.

Resource Abstraction: Microservices deployed in containers do not have a direct relationship with their host or the underlying operating system. This abstraction is what helps seamless migration between hosts but comes at the expense of complicating monitoring.

Communication over the network: The many moving parts of distributed applications rely completely on network communication. Consequently, the increase in network traffic puts a heavy strain on network resources necessitating intensive network monitoring and a focused effort to maintain network health.

What needs to be measured

This is a high-level laundry list of what needs to be done/measured while monitoring microservices and their containers.

Auto-discovery of containers and microservices:

As we’ve seen, monitoring microservices in a containerized world is a whole new ball game. In the highly distributed, dynamic infra environment where ephemeral containers scale, shrink and move between nodes on demand, traditional monitoring methods using agents to get information will not work. The monitoring system needs to automatically discover and track the creation/destruction of containers and explore services running in them.

Microservices:

  • Availability and performance of individual services
  • Host and infrastructure metrics
  • Microservice metrics
  • APIs and API transactions
    • Ensure API transactions are available and stable
    • Isolate problematic transactions and endpoints
  • Dependency mapping and correlation
  • Features relating to traditional APM

Containers:

  • Detailed information relating to each container
    • Health of clusters, master and slave nodes
  • Number of clusters
  • Nodes per cluster
  • Containers per cluster
    • Performance of core Docker engine
    • Performance of container instances

Things to consider while adapting to the new IT landscape

Granularity and Aggregation: With the increase in the number of objects in the system, it is important to first understand the performance target of what's being measured – for instance, if a service targets 99% uptime (yearly), polling it every minute would be overkill. Based on this, data granularity needs to be set prudently for each aspect measured, and can be aggregated where appropriate. This is to prevent data inundation that could overwhelm the monitoring module and drive up costs associated with data collection, storage, and management.

Monitor Containers: The USP of containers is the abstraction they provide to microservices, encapsulating and shielding them from the details of the host or operating system. While this makes microservices portable, it makes them hard to reach for monitoring. Two recommended solutions for this are to instrument the microservice code to generate stats and/or traces for all actions (can be used for distributed tracing) and secondly to get all container activity information through host operating system instrumentation.    
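One concrete way to surface container-level information, shown purely for illustration alongside the two approaches above, is to query the container runtime's API directly; the sketch below uses the Docker SDK for Python and assumes a local Docker daemon is reachable.

import docker  # pip install docker

def container_snapshot():
    """Collect a one-shot memory/status snapshot for every running container."""
    client = docker.from_env()
    snapshot = []
    for container in client.containers.list():
        stats = container.stats(stream=False)   # single stats sample, not a stream
        memory_bytes = stats["memory_stats"].get("usage", 0)
        snapshot.append({
            "name": container.name,
            "image": container.image.tags,
            "status": container.status,
            "memory_bytes": memory_bytes,
        })
    return snapshot

if __name__ == "__main__":
    for entry in container_snapshot():
        print(entry)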

Track Services through the Container Orchestration Platform: While we could obtain container-level data from the host kernel, it wouldn’t give us holistic information about the service since there could be several containers that constitute a service. Container-native monitoring solutions could use metadata from the container orchestration platform by drilling into appropriate layers of the platform to obtain service-level metrics. 

Adapt to dynamic IT landscapes: As mentioned earlier, today’s IT landscape is dynamically provisioned, elastic and characterized by mobile and transient objects. Monitoring systems themselves need to be elastic and deployable across multiple locations to cater to distributed systems and leverage native monitoring solutions for private clouds.

API Monitoring: Monitoring APIs can provide a wealth of information in the black box world of containers. Tracking API calls from the different entities – microservices, container solution, container orchestration platform, provisioning system, host kernel can help extract meaningful information and make sense of the fickle environment.

Watch this space for more on Monitoring and other IT Ops topics. You can find our blog on Monitoring for Success here, which gives an overview of the Monitor component of GAVS' AIOps Platform, Zero Incident Framework™ (ZIF). You can Request a Demo or Watch how ZIF works here.

About the Author:

Sivaprakash Krishnan


Bio – Siva is a long-timer at GAVS and has been with the company for close to 15 years. He started his career as a developer and is now an architect with a strong technology background in Java, Big Data, DevOps, Cloud Computing, Containers and Microservices. He has successfully designed & created a stable Monitoring Platform for ZIF, and has designed & driven cloud assessment and migration, enterprise BRMS, and IoT based solutions for many of our customers. He is currently focused on building ZIF 4.0, a new gen business-oriented TechOps platform.

Padmapriya Sridhar


Bio – Priya is part of the Marketing team at GAVS. She is passionate about Technology, Indian Classical Arts, Travel and Yoga. She aspires to become a Yoga Instructor some day!

The Chatty Bots!

Chatbots can be loosely defined as software that simulates human conversation. They are widely used as textbots or voicebots on social media, on websites to provide the initial engagement with visitors, within customer service/IT operations teams to provide round-the-clock tier 1 support, and – as we’ll see later in the blog – for various other organizational needs in integration with enterprise tools/systems. Their prevalence can be attributed to how easy it has become to get a basic chatbot up and running quickly, using the intuitive drag-and-drop interfaces of chatbot build tools. There are also many cloud-based free or low-cost AI platforms for building bots using the provided APIs. Most of these platforms also come with industry-specific content, add-on tools for analytics, and more.

Rule-based chatbots can hold basic conversations with scripted ‘if/then’ responses for commonly raised issues/FAQs, and redirect appropriately for queries beyond their scope. They use keyword matches to pull relevant information from their datastore. Culturally, as we begin to accept and trust bots to solve problems and extend support, as companies begin to see value in these digital resources, and with heavy investments in AI technologies, chatbots are gaining traction and becoming more sophisticated. AI-led chatbots are far more complex than their rule-based counterparts and provide dynamically tailored, contextual responses based on the conversation and interaction history. Natural Language Processing capabilities give these chatbots the human-like skill to comprehend nuances of language and gauge the intent behind what is explicitly stated.
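
To make the rule-based idea concrete, here is a toy sketch of keyword matching against a small set of scripted responses, with a fallback redirect for out-of-scope queries. The rules and responses are invented for illustration only.

```python
# A toy rule-based chatbot: keyword matching against scripted responses,
# with a redirect for queries beyond its scope. Rules are illustrative.
import re

RULES = {
    ("password", "reset"): "You can reset your password at the self-service portal.",
    ("vpn",): "Please check the VPN setup guide on the intranet.",
    ("ticket", "status"): "Share your ticket number and I will look it up.",
}
FALLBACK = "I am not sure about that. Let me connect you to a support engineer."

def reply(message: str) -> str:
    words = set(re.findall(r"[a-z0-9]+", message.lower()))
    for keywords, response in RULES.items():
        if set(keywords) <= words:      # all rule keywords present in the message
            return response
    return FALLBACK

print(reply("How do I reset my password?"))      # scripted response
print(reply("My laptop screen is flickering"))   # falls back / redirects
```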

The Artificial Neural Network (ANN) for Natural Language Processing (NLP)

An ANN is an attempt at a tech equivalent of the human brain! You can find our blog on ANNs and Deep Learning here.

Traditional AI models are incapable of handling highly cognitive tasks like image recognition, image classification, natural language processing, speech recognition, text-to-speech conversion, tone analysis and the like. There has been a lot of success with Deep Learning approaches for such cerebral use cases. For NLP, handling the inherent complexities of language – sentiment, ambiguity, insinuation – necessitates deeper networks and a lot of training with enormous amounts of data. Each computational layer of the network progressively extracts finer and more abstract details from the inputs, essentially adding value to the learnings of the previous layers. With each training iteration, the network adapts, auto-corrects and fine-tunes its weights using optimization algorithms, until it reaches a maturity level where it is almost always correct in spite of input vagaries. The USP of a deep network is that, armed with this knowledge gained from training, it is able to extract correlations and meaning even from unlabeled and unstructured data.

Different types of neural networks are suited to different use cases. Recurrent Neural Networks (RNNs) are good for sequential data like text documents, audio and natural language. RNNs have a feedback mechanism where each neuron’s output is fed back as weighted input, along with other inputs. This gives them ‘memory’, meaning they remember their earlier inputs, but over time those inputs get diluted by the arrival of new data. A variant of the RNN helps solve this problem: Long Short-Term Memory (LSTM) models have neurons (nodes) with gated cells that can regulate whether to ‘remember’ or ‘forget’ previous inputs, giving more control over what needs to be remembered for a long time versus what can be forgotten. For example, it helps to ‘remember’ while parsing through a text document because the words and sentences are most likely related, but ‘forgetting’ is better when moving from one text document to the next, since they are most likely unrelated.
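
For a concrete feel of how an LSTM might sit inside a chatbot pipeline, the sketch below defines a small LSTM-based intent classifier with tf.keras. The vocabulary size, sequence length, layer sizes and number of intent classes are illustrative placeholders, and the model is shown untrained.

```python
# A minimal sketch of an LSTM-based text classifier (e.g. intent detection
# for a chatbot) using tf.keras. All hyperparameters are assumed values.
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed vocabulary size
MAX_LEN = 50         # assumed maximum tokens per utterance
NUM_INTENTS = 8      # assumed number of intent classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,), dtype="int32"),   # padded token ids
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),         # word embeddings
    tf.keras.layers.LSTM(128),                         # gated cells decide what to keep/forget
    tf.keras.layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would follow with model.fit(padded_token_ids, intent_labels, ...)
```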

The Chatbot Evolution

In the 2019 Gartner CIO Survey, CIOs identified chatbots as the main AI-based application used in their enterprises. “There has been a more than 160% increase in client interest around implementing chatbots and associated technologies in 2018 from previous years”, says Van Baker, VP Analyst at Gartner.

Personal and business communication always morphs into the quickest, easiest and most convenient mode of the time: handwritten letters gave way to emails, phone calls, SMS, and now mere status updates on social media. Mr. Baker goes on to say that the growing number of millennials in the workplace, with their demand for instant, digital connections, will have a large impact on how quickly organizations adopt the technology.

Due to these evolutionary trends, more organizations than we might think have taken a leap of faith and added these bots to their workforce. It is quite interesting to see how chatbots are being put to innovative use, either stand-alone or integrated with other enterprise systems.

Chatbots in the Enterprise

Customer service and IT service management (ITSM) are the use cases through which chatbots gained entry into the enterprise. Proactive, personalized user engagement, consistency and ease of interaction, round-the-clock availability, and timely resolution of issues have lent themselves to operational efficiency, cost effectiveness and enhanced user experience. Chatbots integrated into ITSM help streamline service, automate workflow management, reduce MTTR, and provide always-on services. They also make it easier to scale during peak usage times, since they reduce the need for customers to speak with human staff and the need to augment human resources to handle the extra load. ChatOps is the use of chatbots within a group collaboration tool, where they run between the tool and the user’s applications to automate tasks like providing relevant data/reports, scheduling meetings, and emailing. They also ease collaboration between siloed teams and processes – for instance, in a DevOps environment they double up as the monitoring and diagnostic tool for the IT landscape.

In E-commerce, chatbots can boost sales by taking the customer through a linear shopping experience from item search through purchase. The bot can make purchase suggestions based on customer preferences gleaned from product search patterns and order history.

In Healthcare, they seamlessly connect healthcare providers, consumers and information, easing access for everyone. These bot assistants come in different forms catering to specific needs: a personal health coach, a companion bot providing much-needed conversational support for patients with Alzheimer’s, a confidant and therapist for those suffering from depression, a symptom-checker offering an initial diagnosis based on symptoms and enabling remote text or video consultation with a doctor as required, and so on.

Analytics provide insights, but often not fast enough for the CXO. Decision-making becomes quicker when executives can query a chatbot to get answers, rather than drilling through a dashboard. Imagine getting immediate responses to requests like: “Which region in the US has had the most sales during Thanksgiving? Send out a congratulatory note to the leadership in that region. Which region has had the poorest sales? Schedule a meeting with the team there. Email me other related reports for this region.” As can be seen here, chatbots work in tandem with other enterprise tools like analytics, calendar and email to make such fascinating forays possible.
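
One plausible way such integrations hang together is an intent router that hands recognized intents to tool-specific handlers. The sketch below is purely illustrative: every handler is a stub standing in for a real analytics, calendar or email integration.

```python
# A minimal sketch of routing recognized chatbot intents to enterprise tool
# integrations. All handlers are hypothetical stubs, not real tool APIs.
def query_sales_by_region(period: str) -> dict:
    return {"Northeast": 1.2e6, "Midwest": 0.8e6}        # stubbed analytics result

def schedule_meeting(team: str) -> str:
    return f"Meeting scheduled with the {team} team."     # stubbed calendar action

def send_email(to: str, subject: str) -> str:
    return f"Email '{subject}' sent to {to}."             # stubbed email action

def handle(intent: str, entities: dict) -> str:
    if intent == "top_sales_region":
        sales = query_sales_by_region(entities.get("period", "Thanksgiving"))
        best = max(sales, key=sales.get)
        return f"{best} had the highest sales."
    if intent == "schedule_meeting":
        return schedule_meeting(entities["team"])
    if intent == "send_email":
        return send_email(entities["to"], entities["subject"])
    return "Sorry, I can't help with that yet."

print(handle("top_sales_region", {"period": "Thanksgiving"}))
```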

Chatbots can handle the mundane tasks of Employee Onboarding, such as verifying mandatory documents, getting required forms filled, directing new hires to online training and ensuring its completion.

When integrated with IoT devices, they can help in Inventory Management by sending out notifications when it’s time to restock a product, tracking shipment of new orders and alerting on arrival.

Chatbots can offer Financial Advice by recommending investment options based on transaction history, current investments or amounts idling in savings accounts, alerting customers to the market impact on their current portfolio, and much more.

As is evident now, the possibilities of such domain-specific chatbots are endless, and what we have seen is just a sampling of their use cases!

Choosing the Right Solution

The chatbot vendor market is crowded, making it hard for buyers to fathom where to even begin. The first step is an in-depth evaluation of the company’s unique needs, constraints, main use cases and enterprise readiness. The next big step is to decide between off-the-shelf and in-house solutions. An in-house build will be an exact fit to needs, but it might be difficult to get long-term management buy-in to invest in related AI technologies, compute power, storage, ongoing maintenance and a capable data science team. Off-the-shelf solutions need a lot of scrutiny to gauge whether the providers are specialists who can deliver enterprise-grade chatbots. Some important considerations:

The solution should (be):

Platform & Device Agnostic so it can be built once and deployed anywhere

Have good Integration Capabilities with tools, applications and systems in the enterprise

Robust with solid security and compliance features

Versatile to handle varied use cases

Adaptable to support future scaling

Extensible to enable additional capabilities as the solution matures, and to leverage innovation to provide advanced features such as multi-language support, face recognition, integration with VR, Blockchains, IoT devices

Have a Personality! Bots with a personality add a human-touch that can be quite a differentiator. Incorporation of soft features such as natural conversational style, tone, emotion, and a dash of humor can give an edge over the competition.

About the Author:

Priya is part of the Marketing team at GAVS. She is passionate about Technology, Indian Classical Arts, Travel and Yoga. She aspires to become a Yoga Instructor some day!