Malicious intent
Although possible job losses from automation are discussed on an everyday basis, there are more catastrophic threats that are yet to get the spotlight.

Tech companies are high on Artificial Intelligence at the moment. The applications being researched and developed are so potentially beneficial and transformative that it is easy to sweep the downsides under the carpet. Although possible job losses from automation are discussed on an everyday basis, there are more catastrophic threats that are yet to get the spotlight.

For example, AI can be used by criminals, terrorists, rogue states or anyone with malicious intent to wreak havoc on an unimaginable scale. How the balance between attackers and defenders will play out is not easy to predict, but a group of 26 security experts from institutions and universities such as the Future of Humanity Institute, the University of Oxford and the Centre for the Study of Existential Risk has studied the landscape of threats from the potential malicious use of AI and produced a 100-page report titled 'The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation'. It is easily found with a search and is recommended reading for any company developing AI applications and solutions.


The experts see threats in three domains. The first is the digital domain, in which AI is expected to automate cyberattacks, giving them unprecedented scale and efficiency. It will also enable new types of attacks that exploit human vulnerabilities through interfaces such as voice, leverage software vulnerabilities, and poison entire banks of data.


The second domain is physical security, which cyber-physical attacks will threaten. Using AI, it will be possible to launch attacks with swarms of micro-drones, for example, and to bring autonomous systems to their knees - such as causing self-driving vehicles to crash. The third domain is political and is an expansion of a threat that already exists. It includes automating mass persuasion - what is thought to have happened with Russian influence operations around the US elections - and deception with fake news and fake videos. Social manipulation, which is already quite evident, privacy invasion, surveillance and the use of big data not just to understand but to influence behaviour will also become rampant.


The accelerated use of AI will expand existing threats, alter the character of ongoing ones and introduce entirely novel ones. The experts behind the report strongly recommend that malicious use cases be considered when developing AI applications, and that policymakers work closely with technical researchers to investigate, mitigate or prevent potential catastrophes.


Predictive AI


Mini Labs



For years, there has been a race to miniaturise medical equipment so it can easily be carried to where it is needed most. In India, that means far-flung locations where resources are meagre. Now, Siemens Healthineers India has tied up with a start-up, Jana Care, to release what is being touted as the first ever smartphone-based diagnostic system.


Aina, as the system is called, is a portable lab of sorts that can run blood tests on the spot. It is a dongle-like module that plugs into a smartphone. It takes a test strip, of the type used in glucometers, and needs only a drop of blood, drawn with a lancet and needle - again, the kind diabetics will be familiar with. Once the sample is in, the system analyses it and reports HbA1c, lipid profile and haemoglobin levels. The set-up shows almost the same accuracy as regular lab tests, according to Siemens. The whole idea of this miniaturisation is to make testing available at point-of-care locations in the absence of full-fledged labs.

 
