Growing Importance of Business Service Reliability

Business services are a set of business activities delivered to an outside party, such as a customer or a partner. Successful delivery of business services often depends on one or more IT services. For example, an IT service supporting the “order to cash” business process could be a “supply chain service”. The supply chain service could be delivered by an application such as SAP, with the customer of that service being an employee in finance/accounting who uses the application to perform customer-facing activities such as accounts receivable, or the collection of cash from an outside party. A business service is not simply the application that the end-user sees – it is the entire chain that supports the delivery of the service, including physical and virtualized servers, databases, middleware, storage, and networks. A failure in any of these can affect the service – and so it is crucial that IT organizations have an integrated, accurate, and up-to-date view of these components and of how they work together to provide the service.

The technologies for Social Networking, Mobile Applications, Analytics, Cloud (SMAC), and Artificial Intelligence (AI) are redefining businesses and the services they provide. Their widespread usage is changing the business landscape, raising reliability and availability to levels that were unimaginable even a few years ago.

Availability versus Reliability

At first glance, it might seem that if a service has high availability then it should also have high reliability. However, this is not necessarily the case. Availability and reliability have different meanings, serve different purposes, and require different strategies to maintain desired standards of service levels. Reliability is the measure of how long a business service performs its intended function, whereas availability is the measure of the percentage of time a business service is operable. For example, a business service may be available 90% of the time, but reliable only 75% of the time from a performance standpoint (a worked example follows the list below). Service reliability can be seen as:

  • Probability of success
  • Durability
  • Dependability
  • Quality over time
  • Ability to perform a function
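To make the distinction concrete, here is a minimal sketch in Python, assuming simplified definitions where availability counts any time the service is operable and reliability counts only the time it actually meets its performance target:

```python
# A minimal sketch contrasting availability and reliability,
# assuming simplified, illustrative definitions.

def availability(total_hours: float, downtime_hours: float) -> float:
    """Fraction of time the service was operable at all."""
    return (total_hours - downtime_hours) / total_hours

def reliability(total_hours: float, downtime_hours: float,
                degraded_hours: float) -> float:
    """Fraction of time the service actually met its performance target."""
    healthy_hours = total_hours - downtime_hours - degraded_hours
    return healthy_hours / total_hours

hours = 720.0  # a 30-day month
print(f"Availability: {availability(hours, 72.0):.0%}")        # 90%
print(f"Reliability:  {reliability(hours, 72.0, 108.0):.0%}")  # 75%
```

A service can thus report a healthy 90% availability while users experience only 75% reliability, which is exactly the gap this section is concerned with.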

Merely having a service available isn’t sufficient. When a business service is available, it should actually serve the intended purpose under varying and unexpected conditions. One way to measure this performance is to evaluate the reliability of the service that is available to consume. The performance of a business service is now rated not by its availability, but by how consistently reliable it is. Take the example of mobile services – four bars of signal strength on your smartphone do not guarantee the quality of the call you are on or about to make. Organizations need to measure how well the service fulfills the necessary business performance needs.

Recognizing the importance of reliability, Google initiated Site Reliability Engineering (SRE) practices with a mission to protect, provide for, and progress the software and systems behind all of Google’s public services — Google Search, Ads, Gmail, Android, YouTube, and App Engine, to name just a few — with an ever-watchful eye on their availability, latency, performance, and capacity.

Zero Incident Framework™ (ZIF)

GAVS Technologies developed an AIOps-based TechOps platform – Zero Incident Framework™ (ZIF) – that enables proactive detection and remediation of incidents. The ZIF platform is available in two versions for our customers to evaluate and experience the power of AI-driven Business Service Reliability:

ZIF Business Xpress: ZIF Business Xpress has been engineered for enterprises to evaluate AIOps before adoption. 10 to 40 devices can be connected to ZIF Business Xpress to experiment with the value proposition.

ZIF Business: Targeted for enterprise-wide adoption.

For more details, please visit https://zif.ai

About the Author:

Sri Chaganty


Sri is a Serial Entrepreneur with over 30 years’ experience delivering creative, client-centric, value-driven solutions for bootstrapped and venture-backed startups.

Automating IT ecosystems with ZIF Remediate

Alwinking N Rajamani



Zero Incident Framework™ (ZIF) is an AIOps-based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™. ZIF comprises five modules: Discover, Monitor, Analyze, Predict, and Remediate.

Most ITSM teams envision a future of ticketless ITSM, driven by AI and automation. This article focuses on the Remediate module of ZIF.

Remediate, a key module of ZIF, has more than 500 connectors to various ITSM, monitoring, security, incident management, and storage/backup tools, among others. A few of these connectors are referenced below; they enable quick automation building.

Key Features of Remediate

  • Truly Agent-less software.
  • 300+ readily available templates – intuitive workflow/activity-based tool for process automation from a rich repository of pre-coded activities/templates.
  • No coding or programming required to create/deploy automated workflows. Easy drag & drop to sequence activities for workflow design.
  • Workflow execution scheduling for pre-determined time or triggering from events/notifications via email or SMS alerts.
  • Can be installed on-premise or on the cloud, on physical or virtual servers
  • Self-service portal for end-users/admins/help-desk to handle tasks & remediation automatically
  • Fully automated service management life cycle from incident creation to resolution and automatic closure
  • Has integration packs for all leading ITSM tools

Key Features for Futuristic Automation Solutions

Although the COVID pandemic has landed us in unprecedented times, we have been able to continue supporting our customers and enable their IT operations with ZIF Remediate.

  • Self-learning capability to deliver Predictive/Prescriptive actionable alerts.
  • Access to multiple data sources and types – events, metrics, thresholds, logs, and event triggers, e.g., mail or SMS.
  • Support for a wide range of automation
    • Interactive Automation – Web, SMS, and email
    • Non-interactive automation – Silent based on events/trigger points
  • Supporting a wide range of advanced heuristics.

Benefits of AIOps-driven Automation

  • Faster MTTR
  • Instant identification of threats and appropriate responses
  • Faster delivery of IT services
  • Quality services leading to Employee and Customer satisfaction
  • Fulfillment and Alignment of IT services to business performance

Interactive and Non-interactive automation

Through our automation journey so far, we have understood that the best automation empowers humans, rather than replacing them. By implementing ZIF Remediate, organizations can empower their people to focus their attention on critical thinking and value-added activities and let our platform handle mundane tasks by bringing data-driven insights for decision making.

  • Interactive Automation – Web portal, Chatbot and SMS based
  • Non-interactive automations – Event or trigger driven automation

Decision-driven Automations

ZIF Remediate has unique interactive automation capabilities; many automation tools do not allow interactive decision-making. Need approvals built into an automated change management process that involves sensitive aspects of your environment? Need numerous decision points that demand expert approval or oversight? We have the solution for you. Take the example of phishing automation: a domain or IP is blocked based on insights derived by mimicking a SOC engineer’s actions – parsing the observables (i.e., URLs, suspicious links, or attachments in a phishing mail) and validating those observables against threat response tools, VirusTotal, and others.
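Here is a minimal, illustrative sketch of that triage flow in Python. The helper names and the reputation lookup are assumptions for illustration, not ZIF connectors or a real threat-intel API:

```python
# A minimal sketch of phishing triage (illustrative only; the
# reputation lookup stands in for a real threat-intel query).
import re

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_observables(mail_body: str) -> list:
    """Pull candidate URLs out of a suspected phishing mail."""
    return URL_RE.findall(mail_body)

def triage(mail_body: str, reputation_lookup) -> list:
    """Return the observables that threat intelligence flags as malicious.

    'reputation_lookup' is supplied by the caller and abstracts a
    threat-response tool query (e.g. a VirusTotal-style service).
    """
    return [url for url in extract_observables(mail_body)
            if reputation_lookup(url) == "malicious"]

# Example: a stubbed lookup that flags one known-bad domain.
flagged = triage("Click https://evil.example/login now!",
                 lambda url: "malicious" if "evil.example" in url else "clean")
print(flagged)  # ['https://evil.example/login']
```

In an interactive automation, the flagged observables would be routed to an approver before the block is applied, rather than being blocked silently.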

Some of the key benefits realized by our customers (which include one of the largest manufacturing organizations, a financial services company, a large PR firm, and healthcare organizations, among others):

  • Reduction of MTTR by 30% across various service requests.
  • Reduction of 40% of incidents/tickets, thus enabling productivity improvements.
  • Ticket triaging process automation resulting in a reduction of time taken by 50%.
  • Reclaiming TBs of storage space every week through snapshot monitoring and approval-driven model for a large virtualized environment.
  • Eliminating manual threat analysis by Phishing Automation, leading to man-hours being redirected towards more critical work.
  • Reduction of potential P1 outages by 40% through self-healing automations.

For more detailed information on ZIF Remediate, or to request a demo, please visit https://zif.ai/products/remediate/

About the Author:

Alwin leads Product Engineering for ZIF Remediate and zIrrus. He has over 20 years of IT experience spanning Program & Portfolio Management for large customer accounts across various business verticals.

In his free time, Alwin loves going for long drives, travelling to scenic locales, doing social work, and reading & meditating on the Bible.

Assess Your Organization’s Maturity in Adopting AIOps

Artificial Intelligence for IT operations (AIOps) is adopted by organizations to deliver tangible Business Outcomes. These business outcomes have a direct impact on companies’ revenue and customer satisfaction.

A survey from AIOps Exchange 2019 reports that 84% of the business owners surveyed confirmed that they are actively evaluating AIOps for adoption in their organizations.

So, is AIOps just automation? Absolutely NOT!!

Artificial Intelligence for IT operations implies the implementation of truly autonomous Artificial Intelligence in ITOps, which needs to be adopted as an organization-wide strategy. Organizations will have to assess their existing landscape and processes, and decide where to start. That is the only way to achieve a true implementation of AIOps.

Every organization trying to evaluate AIOps as a strategy should read through this article to understand their current maturity, and then move forward to reach the pinnacle of Artificial Intelligence in IT Operations.

The primary success factor in adopting AIOps is derived from the business outcomes the organization is trying to achieve by implementing AIOps – that is the only way to calculate ROI.

There are four levels of maturity in AIOps adoption. Based on our experience in developing an AIOps platform and implementing it across multiple industries, we have arrived at these four levels. Assessing an organization against each of these levels helps in achieving the goal of true Artificial Intelligence in IT Operations.

Level 1: Knee-jerk

Events and logs are generated in silos and collected from various applications and devices in the infrastructure. These are used to generate alerts that are sent to command centres for escalation as per the defined SOPs (standard operating procedures). The engineering teams work in silos, unaware of the business impact these alerts could potentially create. Here, operations are very reactive, which could cost the organization millions of dollars.

Level 2: Unified

All events, logs, and alerts have been integrated into one central location, and the ITSM process has been unified. This helps in breaking silos, and engineering teams are better prepared to tackle business impacts. SOPs have been adjusted to the unified process, but incident management is still reactive.

Level 3: Intelligent

Machine Learning algorithms (either supervised or unsupervised) have been implemented on the unified data to derive insights. Baseline metrics are calibrated and used as a reference for future events; with more data, the baselines get richer. The IT operations team can correlate incidents/events with business impacts by leveraging AI & ML. If the Mean Time To Resolve (MTTR) of incidents has been reduced by automated identification of the root cause, then the organization has attained Level 3 maturity in AIOps.
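As an illustration of what baseline-driven insight can look like, here is a minimal sketch of rolling-baseline anomaly detection on a unified metric stream. This is an illustrative example of the general technique, not the ZIF implementation:

```python
# A minimal sketch of baseline-driven anomaly detection
# (illustrative; not the ZIF implementation).
import random

import numpy as np

def detect_anomalies(values, window=30, z_threshold=3.0):
    """Flag indices whose value deviates strongly from a rolling baseline."""
    values = np.asarray(values, dtype=float)
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]        # trailing window as baseline
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(values[i] - mu) / sigma > z_threshold:
            anomalies.append(i)                # strong outlier vs. baseline
    return anomalies

# Example: a steady metric with one injected spike at index 40.
series = [100.0 + random.gauss(0, 2) for _ in range(50)]
series[40] += 60.0
print(detect_anomalies(series))  # typically [40]
```

The baselines get richer as more data accumulates, which is what lifts this level beyond simple static thresholds.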

Level 4: Predictive & Autonomous

The pinnacle of AIOps is Level 4. If incidents and performance degradation of applications can be predicted by leveraging Artificial Intelligence, it implies improved application availability. Autonomous remediation bots can be triggered spontaneously based on the predictive insights, to fix incidents that are prone to happen in the enterprise. Level 4 is a paradigm shift in IT operations – moving operations entirely from being reactive to becoming proactive.

Conclusion:

As IT operations teams move up each level, the essential goal to keep in mind is the long-term strategy to be attained by adopting AIOps. Artificial Intelligence has matured over the past few decades, and it is up to AIOps platforms to embrace it effectively. While choosing an AIOps platform, measure the maturity of the platform’s artificial intelligence.

About the Author:

Anoop Aravindakshan is a Principal Consultant Manager at GAVS Technologies.


An evangelist of the Zero Incident Framework™, Anoop has been a part of the product engineering team for a long time and has recently forayed into product marketing. He has over 14 years of experience in Information Technology across various verticals, which include Banking, Healthcare, Aerospace, Manufacturing, CRM, Gaming, and Mobile.

Modern IT Infrastructure

Infrastructure today has grown beyond the physical confines of the traditional data center, has spread its wings to the cloud, and is increasingly distributed, virtual, and abstract. With the cloud gaining wide acceptance, most enterprises have their workloads spread across data centers, colocations, multi-cloud, and edge locations. On-premise infrastructure is also being replaced by Hyperconverged Infrastructure (HCI), where software-defined, virtualized compute, storage, and network are in one single system, greatly simplifying IT operations. Infrastructure is also becoming increasingly elastic; it scales & shrinks on demand and doesn’t have to be provisioned upfront.

Let’s look at a few interesting technologies that are steering the modern IT landscape.

Containers and Serverless

Traditional application deployment on physical servers comes with the overhead of managing the infrastructure, middleware, development tools, and everything in between. Application developers would rather have this grunt work be handled by someone else, so they could focus on just their applications. This is where containers and serverless technologies come into the picture. Both are cloud-based offerings and provide different levels of abstraction, in a way that hides the layers beyond the front end from the developer. They typically deploy smaller components of monolithic applications, microservices, and functions.

A container is like an all-in-one box, containing the app and all its dependencies like libraries, executables & config files. The containerized application is highly portable, will run anywhere the container runtime is installed, and will behave the same regardless of the OS or hardware it is deployed on. Containers give developers great flexibility and control since they cater to specific application requirements like the OS and software versions. The flip side is that there is still a need for manual maintenance of the runtime environment, like security patches, software updates, etc. Secondly, the flexibility it affords translates into high operational costs, since it lacks agility in scaling.

Serverless technologies provide a much greater abstraction of the OS and infrastructure. ‘Serverless’, though, does not imply that there are no servers; it just means application developers do not have to worry about the underlying OS, the server environment, or the infra that their applications will be deployed on. Serverless is event-driven and is based on the premise that the application is split into functions that get executed based on events. The developer only needs to deploy function code and define the event(s) that will trigger them! The rest of the magic is done by the cloud service provider (with the help of third parties).
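As a concrete illustration, here is a minimal sketch of an event-driven function in Python, written in the style of an AWS Lambda handler responding to an object-storage upload event. The event shape follows the common S3 notification format, but this is a sketch under those assumptions, not a definitive implementation:

```python
# A minimal sketch of a serverless function (AWS Lambda style).
# The platform invokes handler() once per event; no server is managed.
import json

def handler(event, context):
    """Triggered by an object-storage upload notification.

    'event' carries the trigger payload; 'context' is runtime
    metadata supplied by the provider.
    """
    # Collect the object keys referenced by the triggering event.
    keys = [record["s3"]["object"]["key"]
            for record in event.get("Records", [])]
    # ... application logic for each uploaded object would go here ...
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}
```

The developer deploys just this function and wires it to the upload event; scaling, patching, and the runtime itself are the provider's responsibility.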

The biggest advantage of serverless is that consumers are billed only for the running time of the function instances or the number of times the function gets executed, depending on the provider. Since it has zero administrative overhead, it enables rapid iterative deployment and faster time to market. Since the architecture is intrinsically auto-scaling, it is a perfect fit for applications with undefinable usage patterns. The other side of the coin is that developers need to deal with a black-box back-end environment, so holistic testing and debugging of the application become a challenge. Vendor lock-in is a real problem, since the consumer is restricted by the technology stack supported by the vendor. Since serverless best practices dictate light, isolated functions with limited scope, building complex applications can get difficult. Function as a Service (FaaS) is a subset of serverless computing.

Internet of Things (IoT)

IoT is about connecting everyday things – beyond just computing devices or smartphones – to the internet. It is possible to convert practically anything into an IoT device by installing a computer chip and providing internet access, and have it communicate independently with the internet, without any human intervention. But why would we want everyday things, like a watch or a light bulb, to become IoT devices? It is a bid to bridge the chasm between the physical and digital worlds and make the environment around us more intelligent, communicative, and responsive to our needs.

IoT’s use cases are just about everywhere: in personal devices, self-driving cars, smart homes, smart workspaces, smart cities, and industries across all verticals. For instance, live data from sensors in products while in use gives good visibility into their operations on the ground, helps remediate issues proactively, & aids improvements in design/manufacturing processes.

The Industrial Internet of Things (IIoT) is the use of IoT data in business, in tandem with Big Data, AI, Analytics, Cloud, and High-speed networks, with the primary goal of finding efficient business models to improve productivity & optimize expenditure. The need for real-time response to sensor data and advanced analytics to power insights has increased the demand for 5G networks for speed, cloud technologies for storage and computing, edge computing to reduce latency, and hyper-scale data centers for rapid scaling.

With IoT devices extending an organization’s infrastructure landscape, and the likelihood that IT staff may not even be aware of all the IoT devices in it, IoT is a security nightmare that could open corporate networks & sensitive data to attacks. Global standards and regulations for IoT device security are in the works. Until then, it is up to the enterprise security team to safeguard against IoT-related vulnerabilities.

Hyperscaling

The ability of infrastructure to rapidly scale out on a massive level is called hyperscaling.

Unprecedented needs for high-power computing and on-demand massive scalability have given rise to a new breed of hyperscale computing architectures, where traditional elements are replaced by hyper-converged, software-defined infrastructure with a high degree of virtualization. These hyperscale environments are characterized by high-density server racks, with software designed and built specifically for scale-out environments. Since high density implies heavy power consumption, heating problems need to be handled by specialized cooling solutions like liquid cooling. Hyperscale data centre operators usually look for renewable energy options to save on power & cooling.

Today, there are several hundred hyperscale data centers in the world, with the dominant players being Microsoft, Google, Apple, Amazon & Facebook.

Edge Computing

Edge computing, as the name indicates, means moving data processing away from distant servers or the cloud, closer to the source of data. This is to reduce latency and the network bandwidth used for back & forth communication between the data source and the server. The edge, also called the network edge, refers to where the data source connects to the internet. The explosive growth of IoT and applications like self-driving cars, virtual reality, and smart cities, which require real-time computing and analytics, is paving the way for edge computing. Most cloud providers now provide geographically distributed edge servers. As with IoT devices, data at the edge can be a ticking security time bomb, necessitating appropriate security mechanisms.

The evolution of IT technologies continuously raises the bar for the IT team. IT personnel have been forced to move beyond legacy practices and mindsets & constantly up-skill themselves to be able to ride the wave. For customers pampered by sophisticated technologies, round-the-clock availability of systems and immersive experiences have become baseline expectations. With more & more digitalization, there is increasing reliance on IT infrastructure and hence less tolerance for outages. The responsibility of maintaining a high-performing IT infrastructure with near-zero downtime falls on the shoulders of the IT operations team.

This has underscored the importance of AI in IT operations, since IT needs have now surpassed human capabilities. GAVS’ AI-powered platform for IT operations, ZIF, caters to the entire ITOps spectrum, right from automated discovery of the landscape and monitoring, to predictive and prescriptive analytics that proactively drive the organization towards zero incidents. For more details, please visit https://zif.ai

About the Author:

Padmapriya Sridhar

Priya is part of the Marketing team at GAVS. She is passionate about Technology, Indian Classical Arts, Travel, and Yoga. She aspires to become a Yoga Instructor someday!

Prediction for Business Service Assurance

Artificial Intelligence for IT operations or AIOps has exploded over the past few years. As more and more enterprises set about their digital transformation journeys, AIOps becomes imperative to keep their businesses running smoothly. 

AIOps uses several technologies like Machine Learning and Big Data to automate the identification and resolution of common Information Technology (IT) problems. The systems, services, and applications in a large enterprise produce volumes of log and performance data. AIOps uses this data to monitor the assets and gain visibility into the behaviour and dependencies among these assets.

According to a Gartner publication, the adoption of AIOps by large enterprises would rise to 30% by 2023.

ZIF – The ideal AIOps platform of choice

Zero Incident Framework™ (ZIF) is an AIOps-based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™.

ZIF comprises five modules: Discover, Monitor, Analyze, Predict, and Remediate.

At the heart of ZIF lie its Analyze and Predict (A&P) modules, which are powered by Artificial Intelligence and Machine Learning techniques. From the business perspective, the primary goal of A&P is 100% availability of applications and business processes.

Let us understand more about the Predict module of ZIF.

Predictive analytics is one of the main USPs of the ZIF platform. ZIF encompasses Supervised, Unsupervised, and Reinforcement Learning algorithms for the realization of various business use cases.

How does the Predict Module of ZIF work?

Through its data ingestion capabilities, the ZIF platform can receive and process all types of data (both structured and unstructured) from various tools in the enterprise. The data can relate to alerts, events, logs, performance of devices, relations of devices, workload topologies, network topologies, etc. By analyzing all this data, the platform predicts the anomalies that can occur in the environment. These anomalies are presented as ‘Opportunity Cards’ so that suitable action can be taken ahead of time to prevent undesired incidents. Since this is ‘proactive’ and not ‘reactive’, it brings about a paradigm shift in any organization’s endeavour to achieve 100% availability of their enterprise systems and platforms. Predictions are done at multiple levels – application level, business process level, device level, etc.

Sub-functions of Prediction Module

How does the Predict module manifest to enterprise users of the platform?

The Predict module categorizes the opportunity cards into three swim lanes (a minimal sketch of this triage rule follows the list).

  1. Warning swim lane – Opportunity Cards that have an “Expected Time of Impact” (ETI) beyond 60 minutes.
  2. Critical swim lane – Opportunity Cards that have an ETI within 60 minutes.
  3. Processed / Lost – Opportunity Cards that have been processed or lost without taking any action.
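Here is a minimal, illustrative sketch of that triage rule in Python; the field names are assumptions for illustration, not the ZIF schema:

```python
# A minimal, illustrative sketch of the swim-lane triage rule above.
from datetime import datetime, timedelta

def swim_lane(card: dict, now: datetime) -> str:
    """Assign an Opportunity Card to one of the three swim lanes."""
    if card.get("processed") or card.get("lost"):
        return "Processed / Lost"
    minutes_to_impact = (card["eti"] - now) / timedelta(minutes=1)
    if minutes_to_impact <= 60:
        return "Critical"      # Expected Time of Impact within 60 minutes
    return "Warning"           # ETI beyond 60 minutes

now = datetime(2020, 6, 1, 12, 0)
card = {"eti": now + timedelta(minutes=45), "processed": False, "lost": False}
print(swim_lane(card, now))  # Critical
```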

A few of the enterprises that have realized the power of ZIF’s Predict module:

  • A manufacturing giant in the US
  • A large non-profit mental health and social service provider in New York
  • A large mortgage loan service provider in the US
  • Two of the largest private sector banks in India

For more detailed information on ZIF Predict, or to request a demo, please visit https://zif.ai/products/predict/

References: https://www.gartner.com/smarterwithgartner/how-to-get-started-with-aiops/

About the Author:

Vasudevan Gopalan

Vasu heads the Engineering function for A&P. He is a Digital Transformation leader with ~20 years of IT industry experience spanning Product Engineering, Portfolio Delivery, Large Program Management, etc. Vasu has designed and delivered Open Systems, Core Banking, and Web / Mobile Applications, among others.

Outside of his professional role, Vasu enjoys playing badminton and focuses on fitness routines.

Discover, Monitor, Analyze & Predict COVID-19

“Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. Netflix, the world’s largest movie house, owns no cinemas. And Airbnb, the world’s largest accommodation provider, owns no real estate. Something interesting is happening.”

– Tom Goodwin, an executive at the French media group Havas.

This new breed of companies is the fastest growing in history because they own the customer interface layer, the platform where all the value and profit is. “Platform business” is a more wholesome term for this model, for which data is the fuel; Big Data & AI/ML technologies are the harbingers of new waves of productivity growth and innovation.

With Big Data and AI/ML making a big difference in the area of public health, let’s see how they are helping us tackle the global emergency of the coronavirus, formally known as COVID-19.

“With rapidly spreading disease, a two-week lag is an eternity.”

DISCOVERING/ DETECTING

Chinese technology giant Alibaba has developed an AI system for detecting COVID-19 in CT scans of patients’ chests with 96% accuracy in distinguishing it from viral pneumonia cases. It only takes 20 seconds for the AI to decide, whereas humans generally take about 15 minutes to diagnose the illness, as there can be upwards of 300 images to evaluate. The system was trained on images and data from 5,000 confirmed coronavirus cases and has been tested in hospitals throughout China. Per a report, at least 100 healthcare facilities are currently employing Alibaba’s AI to detect COVID-19.

Ping An Insurance (Group) Company of China, Ltd (Ping An) aims to address the shortage of radiologists by introducing the COVID-19 smart image-reading system, which can read the huge volumes of CT scans generated in epidemic areas.

Ping An Smart Healthcare uses clinical data to train the AI model of the COVID-19 smart image-reading system. The AI analysis engine conducts a comparative analysis of multiple CT scan images of the same patient and measures the changes in lesions. It helps in tracking the development of the disease, evaluating the treatment, and assessing the prognosis of patients. Ultimately, it assists doctors in diagnosing, triaging, and evaluating COVID-19 patients swiftly and effectively.

Ping An Smart Healthcare’s COVID-19 smart image-reading system also supports remote AI image-reading by medical professionals outside the epidemic areas. Since its launch, the smart image-reading system has provided services to more than 1,500 medical institutions. More than 5,000 patients have received smart image-reading services for free.

The more solutions the better. At least when it comes to helping overwhelmed doctors provide better diagnoses and, thus, better outcomes.

MONITORING

  • AI-based temperature monitoring & scanning

In Beijing, China, subway passengers are being screened for symptoms of coronavirus, but not by health authorities. Instead, artificial intelligence is in charge.

Two Chinese AI giants, Megvii and Baidu, have introduced temperature scanning. They have implemented scanners that detect body temperature and send alerts to company workers if a person’s body temperature is high enough to constitute a fever.

Megvii’s AI system detects body temperatures for up to 15 people per second and from up to 16 feet away. It monitors as many as 16 checkpoints in a single station. The system integrates body detection, face detection, and dual sensing via infrared cameras and visible light. The system can accurately detect and flag high body temperature even when people are wearing masks, hats, or covering their faces with other items. Megvii’s system also sends alerts to an on-site staff member.

Baidu, one of the largest search-engine companies in China, screens subway passengers at the Qinghe station with infrared scanners. It also uses a facial-recognition system, taking photographs of passengers’ faces. If the Baidu system detects a body temperature of at least 99 degrees Fahrenheit, it sends an alert to a staff member for another screening. The technology can scan the temperatures of more than 200 people per minute.

  • AI-based social media monitoring

An international team is using machine learning to scour through social media posts, news reports, data from official public health channels, and information supplied by doctors for warning signs of the virus across geographies. The program looks for social media posts that mention specific symptoms, like respiratory problems and fever, from a geographic area where doctors have reported potential cases. Natural language processing is used to parse the text posted on social media, for example, to distinguish between someone discussing the news and someone complaining about how they feel.
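A minimal sketch of the filtering idea, reduced to keyword matching for illustration (the real systems use full natural language processing models, not keyword lists):

```python
# A minimal, illustrative sketch of symptom-post filtering.
# Keyword matching stands in for real NLP models.
SYMPTOMS = {"fever", "cough", "shortness of breath", "respiratory"}
NEWS_MARKERS = {"breaking", "reported", "according to", "officials"}

def looks_like_symptom_report(post: str) -> bool:
    """Heuristic: a symptom mention that doesn't read like news coverage."""
    text = post.lower()
    mentions_symptom = any(s in text for s in SYMPTOMS)
    reads_like_news = any(m in text for m in NEWS_MARKERS)
    return mentions_symptom and not reads_like_news

print(looks_like_symptom_report("Day three of this fever and cough..."))  # True
print(looks_like_symptom_report("Officials reported new fever cases."))   # False
```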

The approach has proven capable of spotting a coronavirus needle in a haystack of big data. This technique could help experts learn how the virus behaves. It may be possible to determine the age, gender, and location of those most at risk more quickly than by using official medical sources.

PREDICTING

Data from hospitals, airports, and other public locations are being used to predict disease spread and risk. Hospitals can also use the data to plan for the impact of an outbreak on their operations.

Kalman Filter

The Kalman filter was pioneered by Rudolf Emil Kalman in 1960 and was famously applied to solve the navigation problem in the Apollo Project. Since then, it has been applied to numerous cases such as guidance, navigation, and control of vehicles, object tracking in computer vision, trajectory optimization, time-series analysis in signal processing, econometrics, and more.

The Kalman filter is a recursive algorithm that uses a time series of measurements containing statistical noise to produce estimates of unknown variables.

For one-day-ahead prediction, the Kalman filter can be used on its own, while for the long-term forecast a linear model is used whose main features are Kalman predictors, the infection rate relative to the population, time-dependent features, and weather history and forecasts.

The one-day Kalman prediction is very accurate and powerful, while a longer-period prediction is more challenging but provides a future trend. Long-term prediction does not guarantee full accuracy but provides a fair estimation following the recent trend. The model should be re-run daily to obtain better results.
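To make the mechanics concrete, here is a minimal, illustrative scalar Kalman filter with a random-walk state model. It is a sketch of the general technique, not the code from the repository linked below; the noise parameters q and r are tuning assumptions:

```python
# A minimal sketch of a 1D Kalman filter over a noisy daily series
# (random-walk state model; q and r are tuning assumptions).
def kalman_1d(measurements, q=1e-3, r=1.0):
    """Recursively estimate the underlying level of a noisy series."""
    x = measurements[0]   # initial state estimate
    p = 1.0               # initial estimate uncertainty
    estimates = []
    for z in measurements:
        p = p + q                 # predict: uncertainty grows by process noise
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x = x + k * (z - x)       # update: blend prediction with measurement z
        p = (1.0 - k) * p         # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

daily_cases = [100, 130, 160, 155, 190, 240, 260]
smoothed = kalman_1d(daily_cases, q=0.5, r=4.0)
print(round(smoothed[-1], 1))  # under a random walk, also the one-day forecast
```

Under the random-walk model, the one-day-ahead prediction is simply the latest filtered state, which is why the short-horizon forecast is accurate while longer horizons need the richer linear model described above.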

GitHub Link: https://github.com/Rank23/COVID19

ANALYZING

The Center for Systems Science and Engineering at Johns Hopkins University has developed an interactive, web-based dashboard that tracks the status of COVID-19 around the world. The resource provides a visualization of the location and number of confirmed COVID-19 cases, deaths and recoveries for all affected countries.

The primary data source for the tool is DXY, a Chinese platform that aggregates local media and government reports to provide COVID-19 cumulative case totals in near real-time at the province level in China and at the country level otherwise. Additional data comes from Twitter feeds, online news services, and direct communication sent through the dashboard. Johns Hopkins then confirms the case numbers with regional and local health departments. This kind of data analytics platform plays a pivotal role in addressing the coronavirus outbreak.

All data from the dashboard is also freely available in the following GitHub repository.

GitHub Link:https://bit.ly/2Wmmbp8

Mobile version: https://bit.ly/2WjyK4d

Web version: https://bit.ly/2xLyT6v

Conclusion

One of AI’s core strengths when working on identifying and limiting the effects of virus outbreaks is its incredibly persistent nature. AI systems never tire, can sift through enormous amounts of data, and can identify possible correlations and causations that humans can’t.

However, there are limits to AI’s ability to both identify virus outbreaks and predict how they will spread. Perhaps the best-known example comes from the neighboring field of big data analytics. At its launch, Google Flu Trends was heralded as a great leap forward in identifying and estimating the spread of the flu, until it underestimated the 2013 flu season by a whopping 140 percent and was quietly put to rest. Poor data quality was identified as one of the main reasons Google Flu Trends failed. Unreliable or faulty data can wreak havoc on the prediction power of AI.


About the Author:

Bargunan Somasundaram


Bargunan is a Big Data Engineer and a programming enthusiast. He is passionate about sharing his knowledge through writing about his experiences. He believes, “Gaining knowledge is the first step to wisdom and sharing it is the first step to humanity.”

GAVS’ commitment during COVID-19

MARCH 23, 2020

Dear Client leaders & Partners,

I do hope all of you, your families, and colleagues are keeping good health as we wade through this existential crisis of COVID-19.

This is a time for shared vulnerabilities, and in all humility, we want to thank you for your business and continued trust. For us, the well-being of our employees and the continuity of clients’ operations are the key focus.

I am especially inspired by my GAVS colleagues who are supporting some of the healthcare providers in NYC. The GAVS leaders truly believe that they are integral members of these institutions and it is incumbent upon them to support our Healthcare clients during these trying times.

We would like to confirm that 100% of our client operations are continuing without any interruptions, and 100% of our offshore employees are successfully executing their responsibilities remotely using GAVS ZDesk and Skype, and collaborating through the online Azure ALM Agile Portal. GAVS ZIF customers are 100% supported 24x7 through a ROTA schedule, with a fallback mechanism as backup.

Most of GAVS’ Customer Success Managers, Client Representative Leaders, and Corporate Leaders have reached out to you with the GAVS Business Continuity Plan and the approach we have adopted to address the present crisis. We have put communication, governance, and rigor in place for client support and monitoring.

GAVS is also reaching out to communities and hospitals as part of our Corporate Social Responsibility.

We have received approvals from the local police authorities in Chennai to support the movement of our leaders to and from the GAVS facility, and we have, through the US-India Strategic Partnership Forum, applied for GAVS to be considered an Essential Service Provider in India.

I have always maintained that GAVS is an IT service concierge to all our clients, and we, individually as leaders and members of GAVS, are committed to our clients. We shall also ensure that our employees are safe.

Thank you, 

Sumit Ganguli
GAVS Technologies


Heroes of GAVS | BronxCare


“Every day we witness these heroic acts: one example out of many this week was our own Kishore going into our ICU to move a computer without full PPE (we have a PPE shortage). The GAVS technicians who come into our hospital every day are, like our doctors and healthcare workers, the true heroes of our time.” – Ivan Durbak, CIO, BronxCare

“I am especially inspired by my GAVS colleagues who are supporting some of the healthcare providers in NYC. The GAVS leaders truly believe that they are integral members of these institutions and it is incumbent upon them to support our Healthcare clients during these trying times. We thank the Doctors, Nurses and Medical Professionals of BronxCare and we are privileged to be associated with them. We would like to confirm that 100% of our client operations are continuing without any interruptions and 100% of our offshore employees are successfully executing their responsibilities remotely using GAVS ZDesk, and other tools.” – Sumit Ganguli, CEO

The Hands that rock the cradle also crack the code

It was an unguarded moment for my church-going, straight-laced handyman & landscaper (“I am not sure if I am ready to trust a woman leader”), and finally the loss of the first woman Presidential candidate in the US, that led me to ruminate about women and leadership, and to indulge in my favorite “time suck” activities: googling and perusing Wikipedia.

I had known about this, but I was fascinated to reconfirm that the first programmer in the world was a woman, and the daughter of the famed poet Lord Byron, no less. Augusta Ada King-Noel, Countess of Lovelace, née Byron, was born in 1815 and was the only legitimate child of Lord Byron and his wife Annabella. A month after Ada was born, Byron separated from his wife and forever left England. Ada’s mother remained bitter towards Lord Byron and promoted Ada’s interest in mathematics and logic in an effort to prevent her from developing what she saw as her father’s insanity.

Ada grew up being trained and tutored by famous mathematicians and scientists. She established relationships with various scientists and authors, like Charles Dickens. Ada described her approach as “poetical science” and herself as an “Analyst & Metaphysician”.

As a teenager, Ada’s prodigious mathematical talents led her to the British mathematician Charles Babbage, who became her mentor. By then Babbage had become very famous and had come to be known as ‘the father of computers’. Babbage was reputed to have developed the Analytical Engine. Between 1842 and 1843, Ada translated an article on the Analytical Engine, which she supplemented with an elaborate set of notes, simply called Notes. These notes contain what many consider to be the first computer program – that is, an algorithm designed to be carried out by a machine. As a result, she is often regarded as the first computer programmer. Ada died at the young age of 36.

As an ode to her, the programming language used in the defense industry has been named Ada. And to celebrate our first programmer, the second Tuesday of October has been named Ada Lovelace Day (ALD). ALD celebrates the achievements of women in Science, Technology, Engineering, and Math (STEM). It aims to increase the profile of women in STEM and, in doing so, create new role models who will encourage more girls into STEM careers and support women already working in STEM.

Most of us applauded Benedict Cumberbatch’s turn as Alan Turing in the movie The Imitation Game. We got to know about the contribution that Alan Turing and his code-breaking team at Bletchley Park made in cracking the German Enigma code, and how the broken code let the Allies know in advance when the Germans were about to attack, so that preemptive strikes could be conducted. In the movie, Keira Knightley played the role of Joan Clarke. Joan was an English code-breaker at Bletchley Park during World War II. She was appointed a Member of the Order of the British Empire (MBE) in 1947, because of the important part she played in decoding the famed German Enigma code along with Alan Turing and the team.

Joan Clarke attended Cambridge University on a scholarship, and there she gained a double first in mathematics. But the irony of it all was that she was denied a full degree, as until 1948, Cambridge awarded degrees only to men. The head of the code-breakers group, Hugh Alexander, described her as “one of the best in the section”, yet she had initially been given the job title of typist, as women were not allowed to be cryptanalysts. Clarke became deputy head of Hut 8 in 1944. She was paid less than the men, and in later years she believed that she was prevented from progressing further because of her gender.

In World War II, the US Army was tasked with the Herculean job of calculating ballistic trajectories. The problem was that each equation took 30 hours to complete, and the Army needed thousands of them. So the Army started to recruit every mathematician it could find. It placed ads in newspapers, first in Philadelphia, then in New York City, then out west in places like Missouri, seeking women “computers” who could hand-compute the equations using mechanical desktop calculators. The selected applicants were stationed at the University of Pennsylvania in Philly. At the height of this program, the US Army employed more than 100 women calculators. One of the last women to join the team was a farm girl named Jean Jennings. To support the project, the US Army funded an experimental project to automate the trajectory calculations. Engineers John Presper Eckert and John W. Mauchly, who are often termed the inventors of the mainframe computer, began designing the Electronic Numerical Integrator and Computer, or ENIAC as it was called. That experiment paid off: the 80-foot-long, 8-foot-tall black metal behemoth, which contained hundreds of wires, 18,000 vacuum tubes, 40 8-foot cables, and 3,000 switches, would become the first all-electronic computer, the ENIAC.

When the ENIAC was nearing completion in the spring of 1945, the US Army randomly selected six women out of the 100 or so workers and tasked them with programming the ENIAC. The engineers handed the women the logistical diagrams of ENIAC’s 40 panels, and the women learned from there. They had no programming languages or compilers. Their job was to program the ENIAC to perform the firing-table equations they knew so well.

The six women—Francis “Betty” Snyder Holberton, Betty “Jean” Jennings Bartik, Kathleen McNulty Mauchly Antonelli, Marlyn Wescoff Meltzer, Ruth Lichterman Teitelbaum, and Frances Bilas Spence—had no documentation and no schematics to work with.

There was no language and no operating system. The women had to figure out what the computer was, how to interface with it, and then break down a complicated mathematical problem into very small steps that the ENIAC could perform. They physically hand-wired the machine, using switches, cables, and digit trays to route data and program pulses. It was a very complicated and arduous task. The ballistic calculations went from taking 30 hours to complete by hand to taking mere seconds on the ENIAC.

Unfortunately, the ENIAC was not completed in time, and hence could not be used during World War II. But six months after the end of the war, on February 14, 1946, the ENIAC was announced as a modern marvel in the US. There was praise and publicity for the Moore School of Electrical Engineering at the University of Pennsylvania, and Eckert and Mauchly were heralded as geniuses. However, none of the key programmers, all of them women, were introduced at the event. Some of the women appeared in photographs later, but everyone assumed they were just models, perfunctorily placed to embellish the photographs.

After the war, the government ran a campaign asking women to leave their jobs at the factories and the farms so returning soldiers could have their old jobs back. Most women did, leaving their careers in the 1940s and 1950s to become homemakers. Unfortunately, none of the returning soldiers knew how to program the ENIAC.

All of these women programmers had gone to college at a time when most men in the country didn’t even go to college. So the Army strongly encouraged them to stay, and for the most part they did, becoming the first professional programmers, the first teachers of modern programming, and the inventors of tools that paved the way for modern software.

The Army opened the ENIAC up to perform other types of non-military calculations after the war and Betty Holberton and Jean Jennings converted it to a stored-program machine. Betty went on to invent the first sort routine and help design the first commercial computers, the UNIVAC and the BINAC, alongside Jean. These were the first mainframe computers in the world.

Today the Indian IT industry is at $160 B, contributes about 7.7% of the Indian GDP, and directly employs approximately 2.5 million people, a very high percentage of them women. Ginni Rometty and Meg Whitman have served as the CEOs of IBM and HP, while Sheryl Sandberg is the COO of Facebook. They, along with Padmasree Warrior, ex-CTO of CISCO, have been able to crack the glass ceiling. India boasts of women in senior leadership at leading IT companies like Facebook, IBM, Capgemini, HP, Intel, etc. At our company, GAVS, we are making an effort to put in place policies, practices, and a culture that attract, retain, and nurture women leaders in IT. The IT industry can definitely be a major change agent in employing a large segment of women in India and can be a transformative force for a new, vibrant India. We must already have our own Indian Ada, Joan, Jean, and Betty, and they are working at ISRO in Bangalore and Sriharikota, and at the nuclear plants in Tarapur.

About the Author:

Sumit Ganguli


AI in Healthcare

The Healthcare Industry is going through a quiet revolution. Factors like disease trends, doctor demographics, regulatory policies, the environment, and technology are forcing the industry to turn to emerging technologies like AI to help adapt to the pace of change. Here, we take a look at some key use cases of AI in Healthcare.

Medical Imaging

The application of Machine Learning (ML) in Medical Imaging is showing highly encouraging results. ML is a subset of AI, where algorithms and models are used to help machines imitate the cognitive functions of the human brain and to also self-learn from their experiences.

AI can be gainfully used in the different stages of medical imaging: in acquisition, image reconstruction, processing, interpretation, storage, data mining, & beyond. The performance of ML computational models improves tremendously as they get exposed to more & more data, and this foundation on colossal amounts of data enables them to gradually outperform humans at interpretation. They begin to detect anomalies not perceptible to the human eye & not discernible to the human brain!

What goes hand-in-hand with data is noise. Noise creates artifacts in images and reduces their quality, leading to inaccurate diagnosis. AI systems work through the clutter and aid noise reduction, leading to better precision in diagnosis, prognosis, staging, segmentation, and treatment.

At the forefront of this use case is radiogenomics: correlating cancer imaging features with gene expression. Needless to say, this will play a pivotal role in cancer research.

Drug Discovery

Drug discovery is an arduous process that takes several years from the start of research to obtaining approval to market. Research involves laboring through copious amounts of medical literature to identify the dynamics between genes, molecular targets, pathways, and candidate compounds. Sifting through all of this complex data to arrive at conclusions is an enormous challenge. When this voluminous data is fed to ML computational models, relationships are reliably established. AI powered by domain knowledge is slashing the time & cost involved in new drug development.

Cybersecurity in Healthcare

Data security is of paramount importance to Healthcare providers, who need to ensure the confidentiality, integrity, and availability of patient data. With cyberattacks increasing in number and complexity, these formidable threats are giving security teams sleepless nights! The main strength of AI is its ability to curate massive quantities of data (here, threat intelligence), nullify the noise, provide instant insights, & self-learn in the process. The predictive & prescriptive capabilities of these computational models drastically reduce response time.

Virtual Health assistants

Virtual health assistants like chatbots give patients 24/7 access to critical information, in addition to offering services like scheduling health check-ups or setting up appointments. AI-based platforms for wearable health devices and health apps come armed with loads of features to monitor health signs, daily activities, diet, sleep patterns, etc., and provide alerts for immediate action or suggest personalized plans to enable healthy lifestyles.

AI for Healthcare IT Infrastructure

Healthcare IT infrastructure running the critical applications that enable patient care is the heart of a Healthcare provider. With dynamically changing IT landscapes that are distributed, hybrid, & on-demand, IT operations teams are finding it hard to keep up. Artificial Intelligence for IT Ops (AIOps) is poised to fundamentally transform the Healthcare Industry. It is powering Healthcare providers across the globe, who are adopting it to automate, predict, remediate, & prevent incidents in their IT infrastructure. GAVS’ Zero Incident Framework™ (ZIF), an AIOps platform, is a pure-play AI platform based on unsupervised Machine Learning and comes with the full suite of tools an IT infrastructure team would need. Please watch this video to learn more.


Disaster Recovery for Modern Digital IT

A Disaster Recovery strategy includes the policies, tools, and processes for recovery of data and restoration of systems in the event of a disruption. The cause of disruption could be natural, like earthquakes or floods, or man-made, like power outages, hardware failures, terror attacks, or cybercrimes. The aim of Disaster Recovery (DR) is to enable rapid recovery from the disaster to minimize data loss, the extent of damage, and disruption to business. DR is often confused with Business Continuity Planning (BCP). While BCP ensures restoration of the entire business, DR is a subset of that, with a focus on IT infrastructure, applications, and data.

IT disasters come at the cost of lost revenue, tarnished brand image, lowered customer confidence and even legal issues relating to data privacy and compliance. The impact can be so debilitating that some companies never fully recover from it. With the average cost of IT downtime running to thousands of dollars per minute, it goes without saying that an enterprise-grade disaster recovery strategy is a must-have.

Why do companies neglect this need?

In spite of the obvious consequences of a disaster, many organizations shy away from investing in a DR strategy due to the associated expenditure. Without a clear ROI in sight, these organizations choose to risk the vulnerability to catastrophic disruptions. They instead make do with just data backup plans, or secure only some of the most critical elements of their IT landscape.

Why is Disaster Recovery different today?

The ripple effects of modern digital infrastructure have forced an evolution in DR strategies. Traditional Disaster Recovery methods are being overhauled to cater to the new hybrid IT infrastructure environment. Some influencing factors:

  • The modern IT Landscape

o Infrastructure – Today’s IT environment is distributed between on-premise, colocation facilities, public/private cloud, as-a-service offerings and edge locations. Traditional data centres are losing their prominence and are having to share their monopoly with these modern technologies. This trend has significant advantages such as reduced CapEx in establishing data centers, reduced latency because of data being closer to the user, and high dynamic scalability.

o Data – Adding to the complexity of modern digital infrastructure is the exponential growth in data from varied sources and of disparate types like big data, mobile data, streaming content, data from cloud, social media, edge locations, IoT, to name a few.

o Applications – The need for agility has triggered the shift away from monolith applications towards microservices that typically use containers to provide their execution environment. Containers are ephemeral and so scale, shrink, disappear, or move between nodes based on demand.

While innovation in IT helps digital transformation in unimaginable ways, it also makes it that much harder for IT teams to formulate a disaster recovery strategy for today’s IT landscape that is distributed, mobile, elastic, and transient.

  • Cybercrimes are becoming increasingly prevalent and are a big threat to organizations. Modern technologies fuel increasing sophistication in malware and ransomware. As their complexity increases, they are becoming harder to even detect while they lie low and quietly do their harm inside the environment. By the time they are detected, the damage is done and it’s too late. DR strategies are also constantly challenged by the lucrative underworld of ransomware.

Solution Strategies for Disaster Recovery

  • On-Premise DR: This is the traditional option, which translates to heavy upfront investments towards the facility, securing the facility, infrastructure including network connectivity/firewalls/load balancers, resources to scale as needed, manpower, test drills, ongoing management and maintenance, software licensing costs, periodic upgrades for ongoing compatibility with the production environment, and much more.

A comprehensive DR strategy involves putting together the pieces of a complex puzzle. Due to the staggering costs and time involved in provisioning and managing infra for the duplicate storage and compute, companies are asking themselves if it is really worth the investment, and are starting to explore more OpEx-based solutions. And they are discovering that the cloud may be the answer to this challenge of evolving infra, offering cost-effective, top-notch resiliency.

  • Cloud-based DR: The easy availability of public cloud infrastructure & services, with affordable monthly subscription plans and pay-per-use rates, has caused an organic switch to the cloud for storage, infra, and as-a-Service (aaS) needs. To complement this, replication techniques have also evolved to enable cloud replication. With backup on the cloud, the recovery environment needs to be paid for only when used in the event of a disaster!

Since maintaining the DR site is the vendor’s responsibility, it reduces the complexity in managing the DR site and the associated operating expenses as well. Most DR requirements are intrinsically built into cloud solutions: redundancy, advanced networks, bandwidth, scalability, security & compliance. These can be availed on demand, as necessitated by the environment and recovery objectives. These features have made it feasible for even small businesses to acquire DR capabilities.

Disaster Recovery-as-a-Service (DRaaS), which is fast gaining popularity, is a DR offering on the cloud, where the vendor manages the replication, failover, and failback mechanisms as needed for recovery, based on an SLA-driven service contract.

On the flip side, as cloud adoption becomes more and more prevalent, there are also signs of a reverse drain back to on-premise! Over time, customers are noticing that they are bombarded by hefty cloud usage bills, way more than what they had bargained for. There is a steep learning curve in assimilating the nuances of new cloud technologies and the innumerable options they offer. It is critical for organizations to clearly evaluate their needs, narrow down on reliable vendors with mature offerings, understand their feature sets and billing nitty-gritties, and finalize the best fit for their recovery goals. So, it is Cloud, but with Caution!

  • Integrating DR with the Application: Frank Jablonski, VP of Global Marketing at SIOS Technology Corp, predicts that applications will soon have Disaster Recovery architected into their core, as a value-add. Cloud-native implementations will leverage the resiliency features of the cloud to deliver this value.

The Proactive Approach

Needless to say, investing in a proactive approach to disaster prevention will help mitigate the chances of a disaster in the first place. One sure-fire way to optimize IT infrastructure performance, prevent certain types of disasters, and enhance business service continuity is to use AI-augmented ITOps platforms to manage the IT environment. GAVS’ AIOps platform, Zero Incident Framework™ (ZIF), has modules powered by Advanced Machine Learning to Discover, Monitor, Analyze, Predict, and Remediate, helping organizations drive towards a Zero Incident Enterprise™. For more information, please visit the ZIF website.
