The Future Is Here
How companies big and small are creating business models around artificial intelligence. And why it will change businesses forever.
Manipal Hospital's 'Tumour Board' has a panel of qualified oncologists from the radiation, medical and surgical streams. Since January 2017, Watson, IBM's artificial intelligence and deep-learning framework, has occupied one of the seats on the board. Even as specialists evaluate a case, Watson presents its own findings, along with a recommended line of treatment. It evaluates the 130-140 parameters of the patient fed into its system against the tens of thousands of cases it has already been trained on.
Doctors can vary their treatment, or can query Watson live. The doctor is still the boss, though. "Watson is part of the decision-making process across our seven hospitals for cancer patients," says Dr Ajay Bakshi, MD & CEO, Manipal Hospitals. Manipal evaluated the impact on two fronts: 600-odd research cases of past patients, and clinical cases under treatment. Among past patients, 85 per cent of Watson's recommendations were identical to the treatment given by Manipal's doctors. "In another 8-9 per cent, they found Watson's recommendations useful and might have considered that option as well," says Bakshi. Among clinical cases, over 1,000 patients are currently signed up at Manipal to be evaluated by Watson and get recommendations for free. "When you work with Watson, you start with a base level of capability it already has," says IBM India's Chief Digital Officer, Nipun Mehrotra.
Welcome to India's tryst with artificial intelligence (AI): small steps for technology but a giant leap for businesses. Start-ups in particular remain the most enthusiastic adopters. An estimated 200-300 start-ups have built business models around AI since large tech firms started sharing their tools and platforms two years ago. "But the number of start-ups is about 5,000. That number also needs to go to 80-90 per cent," says IBM's Mehrotra. Mid-sized firms have tried gingerly, but big corporates are circumspect. They remain laggards, passing off chatbots, basic automation and big data analytics as AI. They are not.
AI, by definition, is human-like intelligence in machines. An AI machine must pass the Turing test, which dates back to 1950, when Alan Turing published a paper proposing 'The Imitation Game' as a test of machine intelligence. Simply put, a human interacting with a machine shouldn't be able to figure out whether there's a human or a machine at the other end. Getting to that stage of machine intelligence is a three-stage evolution curve. First, in the Smart Stage, machines take over human actions, as robots and software bots already do. Most of today's AI exists in this stage. Next, in the Cognitive Stage, autonomous products and services such as self-driving vehicles will be the norm, though machines will work within a set perimeter of expertise. Finally, there's Adaptive AI, a human-like intelligence and learning capability, where machines will exhibit a human's ability to watch, listen, understand, learn, build new capabilities and take decisions on the go.
There's a raucous debate in the technology world over whether humanity should go that far at all. And if yes, how far? Tesla and SpaceX founder Elon Musk and Facebook founder Mark Zuckerberg have shot barbs at each other. Others, such as Bill Gates and Stephen Hawking, have joined in (see The Big Debate).
Cisco's iconic Executive Chairman John Chambers, who has navigated every technology shift of the past 40 years, believes AI will happen irrespective of objections: "Benefits far outweigh disadvantages. Artificial intelligence, machine learning, natural language processing (have) huge productivity advantages. You'll live longer. It will solve a lot of key illnesses. Standard income can go up dramatically."
AI: FORWARD AND AHEAD
True to Chambers' spirit, nobody is questioning whether or not to use AI. The question is always: how to use it?
The world of AI is trying to surmount the three inferiorities of machines against humans: brain, eyes and ears. The brain's job of understanding, learning and evolving is catered to by deep learning and machine learning; visually recognising and processing information is addressed by techniques such as optical recognition; and listening and responding with voice by natural language processing.
Five global tech firms have emerged as the top tier of providers of these AI tools, most of them on the cloud. The oldest: IBM's Watson. The most aggressive: Google. The gentle slayer: Microsoft. And the quiet gorilla: Amazon. All provide APIs (application programming interfaces) for businesses to build their products and business models using their AI tools. The dark horse is Facebook, which neither agreed to a meeting nor responded to BT's questionnaire.
The AI of today is no different from what was called 'neural networks' in the mid-80s. "What has changed since is the amount of data and computational power," says Amazon's Rajeev Rastogi. AI got a major boost in the past two years as global tech firms opened up their APIs for developers to build businesses on them. For instance, Google open-sourced its machine learning framework TensorFlow as recently as November 2015, while Microsoft's Cognitive Toolkit opened up in January 2016.
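The underlying mathematics really is 80s-vintage. A single 'neuron', the building block of those mid-80s neural networks, can be trained by gradient descent in a few lines of plain Python. This is a toy sketch, not any vendor's actual code; it learns the logical OR function from just four examples:

```python
import math
import random

def train_neuron(samples, epochs=5000, lr=0.5, seed=0):
    """Train a single sigmoid neuron by gradient descent.
    `samples` is a list of ((x1, x2), target) pairs."""
    rng = random.Random(seed)
    w1, w2, b = rng.random(), rng.random(), rng.random()
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            # Gradient of squared error 0.5*(y-t)^2 w.r.t. the pre-activation
            grad = (y - t) * y * (1.0 - y)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

def predict(params, x1, x2):
    w1, w2, b = params
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Learn the OR function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
params = train_neuron(data)
```

What changed between then and now is not this arithmetic but the scale: billions of examples instead of four, and GPUs instead of a desktop CPU.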
While McKinsey Global Institute reports that high tech, telecom and financial services have been the earliest adopters of machine learning and AI in the world, in India the earliest use cases are in healthcare, HR and e-commerce. Yet, in each case, the technology is truly disruptive. Some examples:
DEEP LEARNING: SIMULATING THE BRAIN
The fidget spinner goes on and on... and on, only to slow down when he pauses to make a point. Then, again. This time ever so vigorously, as Akhil Gupta narrates, angrily and somewhat mischievously, a murderous attack on the staff and office of nobroker.com, the company Gupta co-founded with three others to eliminate brokers from property renting and buying. Nearly 60 local brokers stormed into the company's office in a Bangalore suburb, smashed furniture and computers, and thrashed Gupta, co-founder Amit and other staff. They were venting their frustration, as much at the company as at its ability to bar them from the website even when they posed as genuine customers.
Little did they know, says Gupta mischievously, that nobroker deployed multiple AI tools to identify, and shut out, brokers. Gupta won't say how; that's a trade secret. He drops broad hints, though: brokers have a peculiar search pattern and leave a digital trail on the Internet. Google's machine learning software TensorFlow, Google Analytics, speech recognition and optical character recognition, used in tandem, were able to identify them.
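nobroker won't reveal its signals, but the shape of such a filter is easy to imagine. The sketch below is entirely hypothetical: the feature names, thresholds and weights are invented stand-ins for whatever 'peculiar search pattern' the real system actually scores.

```python
def broker_score(session):
    """Score one user session for broker-like behaviour.
    All features and weights here are invented for illustration."""
    score = 0.0
    # Brokers tend to scan many listings very quickly ...
    if session["listings_viewed_per_hour"] > 40:
        score += 0.4
    # ... across many unrelated localities ...
    if session["distinct_localities"] > 8:
        score += 0.3
    # ... and repeatedly harvest owner contact numbers.
    if session["contacts_unlocked"] > 10:
        score += 0.3
    return score

def is_probable_broker(session, threshold=0.6):
    return broker_score(session) >= threshold

tenant = {"listings_viewed_per_hour": 6, "distinct_localities": 2, "contacts_unlocked": 1}
suspect = {"listings_viewed_per_hour": 55, "distinct_localities": 12, "contacts_unlocked": 14}
```

A production system would learn such weights from labelled data with a classifier rather than hand-code them, but the decision it makes has this shape.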
Just as IBM offers Watson's deep learning, Google has TensorFlow and Microsoft has its Cognitive Toolkit. Facebook has open-sourced Caffe2, its own deep learning module, and Amazon backs MXNet, an open source deep learning framework.
Bangalore-based Tredence was founded as recently as 2013. Today, it has a $12-million business making sense of unstructured data. For instance, it built a model for one of the world's biggest FMCG firms to determine where its ice cream trikes should be located in a city for the best return on investment.
Based on the client's data, the model scrapes the Internet for public information, such as competitors' trikes and the location of schools, hospitals, shopping areas and historical sites, even traffic and demographic data, to suggest where the trikes should be placed. Going by results so far, it could grow the global ice cream business by 8-10 per cent. It has since been launched in Durban, Bangkok, Madrid and a few cities in Pakistan and India, and a pan-European launch is now planned.
At Devanakonda village in Kurnool district of Andhra Pradesh, Microsoft's cloud agriculture project deployed artificial intelligence and machine learning, big data and analytics to improve crop yields. For rain-fed crops, the timing of sowing is the biggest differentiator between a good crop and a failed one. Microsoft used the Azure cloud platform to combine short-term weather predictions, soil quality data and previous crop history, and sent regular updates to local farmers on their phones in their native language, including when not to sow. When the model computed that soil moisture was sufficient for seed germination and the weather forecast predicted more rainfall, it pinged farmers to sow. Those who followed the model's advice reaped a 30 per cent higher yield.
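The advisory logic described here reduces to a simple rule: recommend sowing only when the soil can already germinate the seed and more rain is on the way. The thresholds below are illustrative placeholders, not values from Microsoft's actual model:

```python
def sowing_advice(soil_moisture_pct, rain_forecast_mm,
                  germination_threshold_pct=30.0, min_rain_mm=5.0):
    """Toy version of the sowing advisory: 'SOW' only when soil moisture
    supports germination AND the forecast predicts meaningful rain.
    Threshold values are invented for illustration."""
    if soil_moisture_pct >= germination_threshold_pct and rain_forecast_mm >= min_rain_mm:
        return "SOW"
    return "WAIT"
```

The real system derives these inputs from weather models and crop history rather than taking them as hand-fed numbers, which is where the machine learning sits.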
What began with sowing is now widening to soil nutrition, in collaboration with ICRISAT, and to recommendations on when to apply fertiliser and what kind of fertiliser or weedicide to use. The government of Telangana has signed an MoU with Microsoft to deploy the concept across the state.
Diabetic retinopathy patients need to be screened at least once a year to prevent vision loss. A specialised camera takes a shot of the retina which is then graded by doctors on a 5-point scale. Grading is complex and specialised as doctors need to look for very small lesions. At times, smaller lesions get missed. In many parts of the world, due to shortage of eyecare professionals, the delay causes loss of vision before diagnosis. This is entirely preventable.
Just about a year ago, a chance encounter brought together a Google employee and doctors at Aravind Eye Hospitals and Sankara Nethralaya, who had already begun screening their patients for diabetic retinopathy. "This effort was occurring in parallel without any of us realising. We came up with this project in collaboration," says Lily Peng, Product Manager, Google Brain AI Research Group.
The data was fed into the machine learning framework TensorFlow. "We're studying the impact on efficiency. It will increase the reach of the screening programmes into rural areas. We hope this will democratise healthcare," says Peng, who is now taking the programme to US hospitals. Google says the technology can be deployed in similar applications such as cancer biopsies (one in 12 cancer biopsies is misdiagnosed).
In December 2016, Microsoft and Hyderabad's LV Prasad Eye Institute announced a global programme, Microsoft Intelligent Network For Eyecare (MINE), to use AI to prevent avoidable blindness and to provide eye care services at scale around the world. Of the 285 million visually impaired worldwide, 55 million are in India. Microsoft will build AI models for eye care, leveraging Cortana Intelligence Suite.
If you are a seller on Amazon, your application to the e-commerce giant for a loan would likely get approved or rejected by a machine. Amazon uses machine learning not just to help identify new products for sellers to grow their business but also to identify the risk associated with sellers before it lends to them via the Amazon lending business.
"We use machine learning to identify fraudulent sellers. We have so much past data: the number of times customers complained; how many times he didn't ship the product or shipped a broken product. Based on the data, we can predict," says Rajeev Rastogi, Amazon's Director, Machine Learning.
Machine learning and AI power multiple features at ride-hailing firm Ola, such as ride sharing and Ola Play, its connected car platform where customers can listen to music and radio and watch TV shows. AI has helped rural e-commerce company StoreKing tell its retailers which products to purchase and stock, depending on parameters such as what other retailers in that geography are buying. By suggesting products that are more likely to move, it frees up their working capital.
In the manufacturing world, ABB is working on connected and collaborative robots where humans interact and work together with robots side by side, not behind a fence. "That's sensoring, real time analytics of what the human is doing and what the robot is doing and to ensure that they can work together," says ABB's IDC centre head Wilhelm Wiese at Bangalore.
ABB is already into fleet management of ships. It's combining its predictive maintenance technologies with geo-tagging and weather reports for real time feedback on the ship's performance. "The most expensive thing that can happen in shipping is if it stalls on the high seas. We're telling the captain you have a problem with this machine, if you slow down your speed by 30 per cent then you will be able to reach the next harbour," says ABB's Wiese.
INDIA'S BIG OPPORTUNITY
Mercurial tech investor Mark Cuban believes AI will likely create the world's first dollar trillionaire. Could that be an Indian? Far from it. According to a McKinsey Global Institute paper on AI, companies globally spent up to $39 billion on developing AI in 2016 alone (US companies accounted for 66 per cent of that, followed by Chinese companies at 17 per cent). Consulting firm PwC believes AI could add some $15.7 trillion to global GDP by 2030, almost half of that in China. So where does that leave India?
AI is where India can create a natural advantage, just as it did in IT and ITeS. It has an English-speaking population and millions of tech professionals. Most importantly, it generates data, the great fuel behind AI. Those are the building blocks to emerge as the premier global hub for AI-based products, services and apps. "Oil refiners make more money than drillers. People who refine the data will ultimately make much more money than people who create data," says IBM's Mehrotra.
But that requires a holistic approach from the government. India must leverage its natural alignment with the US. The world's foremost Tier I players (Google, Amazon Web Services, IBM, Facebook and Microsoft) are all US-based but have a restricted or limited presence in China. A bustling AI economy can generate millions of high-end jobs. The MIT Sloan Management Review says each innovation job creates at least five other jobs, just what India needs right now.
THE EYES: OPTICAL RECOGNITION
A potential customer of a Polish bank uploads a photo of his ID and appears for video verification. If the video matches the ID, it must then be ascertained whether the ID is genuine or forged. The online verification is approved or rejected by Bangalore-based Signzy, even though officers of the Polish bank eventually accept the applicant as their customer.
"Things that a human would have done by looking at them are being done by APIs and algorithms," says Ankit Ratan, co-founder, Signzy. Signzy has shrunk the manual customer verification process from three weeks to less than three days, with verification and matching happening in real time. It deployed IBM Watson's machine learning capabilities for identifying images, converting speech to text and digitising documents. Besides global banks and financial institutions, Signzy provides auto-verification services to SBI, ICICI, MSwipe, PayU and LinkedIn.
Until a year ago, Amazon.com was a truly global e-commerce company, and yet it was not: only products listed in English could be sold around the world. A listing in Italian, for instance, couldn't even be discovered within the EU.
Amazon used machine translation to surface even products listed in Italian across its eight European marketplaces in different languages, including German, French, Spanish and English, and vice versa. The technology is now being deployed in other parts of the world. "At some point it would make sense for us to do pages between Indian languages," says Amazon's Rastogi.
Chennai-based Textient has created a cognitive analytics platform to sift through conversations on the Internet, such as product reviews and social media chatter, to provide insights to companies on their products, services or brands.
"We understand human thinking and behavioural aspects. Decoding this is complex because what lies underneath is a psychological aspect. We take more than 50 parameters of a human being," says Sankar Nagarajan, Founder, Textient.
An insurance firm in the West installs cameras on car dashboards to analyse whether a bump is an accident (in which case alerts need to go to the right people). The system analyses motion, speed and other signals to assess the state of the vehicle and, potentially, the driver's behaviour. Noida-based The Smart Cube works with the firm on such AI-driven video analytics.
In another offering, The Smart Cube scours the Internet to alert clients to inherent risks in their supply chains. The company developed the product for its pharma clients, who outsource nearly all of their manufacturing and hence need early alerts in case of an 'event'. It tracks in real time what's being published about a client's suppliers on websites, in the media and on social media. The objective is to figure out whether that information signals financial, strategic, materials-shortage or reputational risk, even the risk of a CEO's exit. "Global supply chains have become very complex and very tight. If there's a risk to one supplier, there's a risk to you as a manufacturer," says The Smart Cube founder Sameer Walia. The engine is entirely AI-based, using machine learning and natural language processing to assess whether something is risky, the category of risk and its level. It sends alerts to category managers and owners so that preventive or proactive action can be taken.
Used-vehicle marketplace Droom uses AI for discounting and promotions. Its AI engine takes into account more than 100 factors to determine a discount, including buyer and seller behaviour, buyer and seller history, and vehicle details. "In the past, we had just one discount. Now, we have two lakh combinations of discounting - humanly, we could only move from one to five," says Founder and CEO Sandeep Agarwal.
TRUSTING THE BOTS
Jet Airways customer on Twitter (sarcastically): Thanks @jetairways for dropping me in Kolkata and my bags in Hyderabad.
Jet Airways' reply: Thanks. Glad you enjoyed our services.
Sarcasm is surely not one of the virtues of chatbots just yet, even though they have revolutionised customer care by replacing humans as the first point of contact. The truth is, bots still don't understand most extreme human emotions: frustration, anger, taunts or delight. That's why they sit right at the bottom of the AI evolution curve and would mostly fail the Turing Test. That's as much a problem as an opportunity.
Just as the Jet Airways chatbot fell prey to sarcasm, most brand or corporate chatbots are either that silly or churn out standard, sanitised and mostly boring responses, because they are trained to work within the boundaries of pre-rehearsed FAQs (frequently asked questions). When Microsoft tried an AI-powered chatbot, 'Tay', on social media platforms, it exposed real dangers. In under 24 hours, its tweets turned racist, including "Hitler was right". Microsoft took the bot down for 'adjustment'; Tay's handle has remained silent since.
This July, Facebook shut down bots Bob and Alice (which were being trained to negotiate with each other) when it realised the two had diverged to develop their own language. In one of the exchanges, Bob began by saying "I can i i everything else". To that, Alice said, "Balls have zero to me to me to me?" They had created an 'efficient' language using variations of these two sentences. It may seem gibberish to a casual observer, but the Facebook AI Research lab was alarmed.
Yatra.com agrees its chatbots may not be 100 per cent accurate, but in most cases they are able to resolve customer queries on FAQs: "How do I cancel?", "What is the refund process?", "I want to reschedule my flight", among others. What would it take for machines to understand human emotions? "More data and more examples," says Amazon's Rastogi. Deep learning, where information is processed in stacked layers, is changing that. These are complex models that are computationally intensive to train and require lots of data.
THE EARS: NATURAL LANGUAGE PROCESSING
Around 50 per cent of all Internet content is in English, but only 20 per cent of the world reads English. That's the greatest translation challenge for humanity. A decade ago, Google took up the challenge with Google Translate, which now has one billion monthly active users, covering 99 per cent of the online population, with almost 10 billion words translated daily.
It may be better than other translators, yet it flattered to deceive, especially in Asian languages, where most of Google's potential new users lie. In September 2016, Google Translate moved to AI-based translations, beginning with Chinese to English. By November 2016, it had expanded to 16 language pairs across eight languages: Korean, Japanese, Chinese, Turkish, Portuguese, Spanish, German and French. Then came eight Indian languages. Google measured a 50-60 per cent improvement. In English-Korean, for instance, usage shot up 75 per cent within five months of the relaunch.
While the new system translated to and from English, it still could not translate between other languages. "We have 103 languages; we would need 103-squared models to translate. That's a lot and can't be done, even by Google," says Google's Senior Staff Research Scientist Mike Schuster.
Google's scientists found a solution: use English as an intermediate language to translate between non-English languages. They trained the system by putting language pairs into a single model and indicating the target language. "We find some of the languages are directly translated, although this system has never seen examples of Japanese to Korean or Korean to Japanese," says Schuster. The same technology helps Google Pixel earbuds translate 40 languages in real time. But problems remain: names, numbers and dates are far from accurate. Google registered 1,800 per cent growth in Indian-language translations on mobile, and Indians are among the most active in giving feedback and corrections. "We receive over 10 million contributions from more than 5,00,000 Indians to improve translation," says Schuster.
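The pivot idea is easy to illustrate. Google's multilingual model does this implicitly inside a single neural network, but a toy two-step lookup shows the principle: go from the source language to English, then from English to the target. The phrase tables here are invented stand-ins, not real translation data:

```python
# Toy phrase tables; a real system uses one neural model over all pairs.
ja_to_en = {"犬": "dog", "猫": "cat"}
en_to_ko = {"dog": "개", "cat": "고양이"}

def pivot_translate(word, src_to_en, en_to_tgt):
    """Translate between two non-English languages by pivoting through
    English. Returns None when either translation step is missing."""
    english = src_to_en.get(word)
    if english is None:
        return None
    return en_to_tgt.get(english)
```

The trade-off the article hints at is visible even here: any error or ambiguity in the English intermediate step propagates into the final output, which is one reason names, numbers and dates suffer.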
Amazon has taken up the challenge of enabling real-time translation across languages. Another area is voice recognition: training Amazon's voice assistant Alexa in languages other than English. Voice recognition uses AI to take human speech, convert it to text, figure out the request (say, for music) and respond, for example by playing the song. "At some point Alexa will even have conversations," says Amazon's Rastogi. Last year, Amazon launched Lex, a service for building conversational interfaces, Polly, a text-to-speech service, and Rekognition, an image recognition service.
Personal digital assistants such as Apple's Siri, Microsoft's Cortana, Amazon's Alexa and Google's Assistant are all in the early stages of AI. "We're on a path where devices will become more and more transparent, more and more natural, in their interactions," says Qualcomm's Gehlhaar.
For travel portals, the call centre can account for up to 50 per cent of overall costs. Making it even partially more efficient is a direct contribution to the bottom line. Bangalore-based Tredence is doing just that for a global travel and ticketing portal.
Its algorithm samples the voice of a conversation every second, processes it in real time for sentiment analysis via a natural language interface, and decides whether the discussion is going well or deteriorating. Accordingly, the system decides whether, and to whom, to escalate the call. "Real-time estimation is still years away, but within 30 minutes, or at least within the day, a resolution is definite. It does that by escalating to the right authorities and not waiting for the call centre employees' judgment," says Shashank Dubey, co-founder, Tredence.
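One plausible shape for such a decision, with invented thresholds (Tredence's actual logic is not public): keep a rolling window of per-second sentiment scores and escalate when the average turns sharply negative.

```python
from collections import deque

def should_escalate(sentiment_scores, window=5, threshold=-0.3):
    """Hypothetical escalation rule. `sentiment_scores` are per-second
    scores from -1 (very negative) to +1 (very positive); flag the call
    as soon as the rolling mean over `window` seconds drops below
    `threshold`. Window size and threshold are illustrative."""
    recent = deque(maxlen=window)
    for score in sentiment_scores:
        recent.append(score)
        if len(recent) == window and sum(recent) / window < threshold:
            return True
    return False

calm_call = [0.2, 0.1, 0.0, 0.1, 0.2, 0.1]
souring_call = [0.1, -0.2, -0.4, -0.5, -0.6, -0.7]
```

The hard part in production is producing those per-second scores from raw audio; the escalation decision on top of them can stay this simple.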
Tredence and the travel portal are now using the same methodology to embed intelligence in voice recognition, and are working on an intelligent chatbot. The chatbot and IVR combined can then divert traffic away from the call centre and into an automated resolution system without hurting customer satisfaction.
The world of AI, however, is expanding at a rapid pace, with efforts targeting problems from the fundamental to the cosmetic. Google's AI division, for instance, uses a technique called WaveNet to make the virtual assistant's voice sound more human-like. WaveNet uses actual speech to train a neural network, which then generates more natural waveforms.
TRAINING THE MACHINE
Machines need to be trained with enormous amounts of data to churn out the kind of results expected of them. Amazon, for instance, has been using machine learning algorithms to make product recommendations since the 90s. Both Google and Amazon use machine learning extensively to understand customer preferences but, more importantly, for advertising: specifically, which advertisements to show to which customer.
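Amazon's classic published approach to recommendations is item-to-item collaborative filtering: items bought by many of the same customers are likely related. A minimal sketch on a toy purchase history (the catalogue and customers here are invented):

```python
import math

# Toy purchase history: item -> set of customers who bought it.
purchases = {
    "camera":  {"u1", "u2", "u3"},
    "tripod":  {"u1", "u2"},
    "sd_card": {"u2", "u3"},
    "novel":   {"u4"},
}

def cosine(a, b):
    """Cosine similarity between two items' customer sets."""
    overlap = len(a & b)
    if overlap == 0:
        return 0.0
    return overlap / math.sqrt(len(a) * len(b))

def recommend(item, k=2):
    """Rank other items by shared buyers: the item-to-item
    collaborative filtering idea Amazon popularised."""
    scores = {
        other: cosine(purchases[item], bought)
        for other, bought in purchases.items()
        if other != item
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

At Amazon's scale the same idea runs over hundreds of millions of customers and items, which is why the article stresses data volume: the similarities only become reliable with enormous purchase histories.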
When Google switched Google Translate to neural networks, English to French alone required 2 billion sentence pairs for training. "The training time is 2-3 weeks for one language pair for one model," says Google's Mike Schuster.
Manipal was lucky. Watson's oncology capabilities came pre-trained at one of the world's best known cancer hospitals-the New York-based Memorial Sloan Kettering Cancer Center.
But when Google began working with Aravind Eye Hospitals and Sankara Nethralaya, it gathered as many as 1.3 lakh images graded by 54 ophthalmologists, who had rendered 880,000 diagnoses for those images. Google fed that into its neural network.
"We retrained it to identify diabetic retinopathy. It does the five-point grading and it also gives a reading on how good the image is: is it gradable?" says Google's Lily Peng.
In the US, AI is being trained to identify mobs of thieves who enter retail stores, steal items and leave as a group. For new product introductions, cameras are beginning to read human emotional reactions to a product to assess whether it will do well.
SERVER OR TERMINAL
Future computing workloads will come from biometrics, object tracking for IP cameras and AR/VR applications, besides IoT devices that will talk to each other all the time. One of the great limitations of today's AI is that most of it can only be processed on servers. If computing threatens to be the bottleneck to mass AI adoption, why not distribute that computing between cloud servers and devices? Widespread adoption requires at least part of the capability, if not all of it, to reside at the terminal end: phones, laptops, tablets and future devices such as wearables. That may be some time away.
For Google Translate, for instance, all translation currently happens on the server, not on the phone. But in smartphones, natural language processing, fingerprint scanning and image recognition are already handled on the device.
Yet that's not going to be possible where high-performance computing is required. An autonomous car, for instance, needs to be connected to the cloud, but for it to be fully autonomous, everything it needs to operate should be on board. And more is needed. Engineering giant ABB is developing collaborative dashboards for CEOs or a company's opex managers that aggregate key performance indicators from all of its plants at the corporate office in real time. "They would like to see the KPIs (key performance indicators) of all the plants in their corporate office and then take decisions, do the fleet management and optimise the assets. They don't have to keep a lot of inventory of spares," says ABB India's Chief Technology Officer Akilur Rahman.
Qualcomm is working on compression techniques, while other tech firms work on technologies for high-performance networks. But Qualcomm also sees an opportunity in devices working in tandem to create higher processing capability. For instance, devices interconnected in a home could all work together to create a joint model.
"All of our big partners such as Facebook, Google are interested in moving the workload to the device. In markets like India where connectivity is not so good, they want to provide the best experience on the device," says Gehlhaar.
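One published technique for 'devices working together to create a joint model' is federated learning. This is a generic sketch of the idea, not necessarily what Qualcomm builds: each device takes gradient steps on its own data, and only the model weights, never the raw data, are averaged centrally.

```python
def local_update(weight, data, lr=0.1):
    """One pass of gradient descent for a linear model y = w*x on a
    device's own data; the raw data never leaves the device."""
    w = weight
    for x, y in data:
        w -= lr * (w * x - y) * x
    return w

def federated_average(device_weights):
    """The coordinator only averages model weights, never sees data."""
    return sum(device_weights) / len(device_weights)

# Each home device holds its own slice of data drawn from y = 2x.
devices = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(200):  # communication rounds
    w = federated_average([local_update(w, d) for d in devices])
```

After a few hundred rounds the shared weight converges to the true slope of 2.0, even though no single device ever shared its observations, which is exactly the privacy and bandwidth appeal in markets with poor connectivity.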
You might imagine AI as a completely self-aware machine. We're nowhere near producing that just yet. But tech firms are working on, for instance, fully on-device machine-learning-powered security and biometrics, fully on-device natural language interfaces, and fully on-device photo enhancement. "These things are very achievable. But they're a long stretch from sci-fi AI," says Gehlhaar.
Given the enormity of the challenge, at least some of the competition is giving way to co-opetition to avoid reinventing the wheel. Last month, Amazon Web Services and Microsoft combined their deep learning efforts in a library called Gluon, which lets developers build machine learning models and apps on either platform.
Many of the most important things are actually quite simple to understand. For example, AI needs far, far better and more precise data; some of today's data is complete junk. "Let's be clear, we've not solved AI. We've not created the intelligence of humans. We're not there," says Amazon's Rastogi.
That is why the human brain remains the greatest inspiration for AI. But science still has little clue about how the brain functions and, more importantly, how it learns. If it did, that would be the easiest 'tech' to adapt into AI machines. That is, provided humans (barring a rogue Frankenstein) would ever cede control to machines. Until then, the quest goes on.
@rajeevdubey