The rise of AI needs to be controlled, report warns

A new study looks at the potential security weak points that could emerge as AI continues to advance, if the industry doesn't take the right steps to secure its creations.

Much ink, digital and otherwise, has been spilled on the vast range of extraordinary opportunities enabled by artificial intelligence (AI) and machine learning (ML).

Now, a team of 26 AI experts, including researchers from the universities of Oxford, Cambridge and Yale, from OpenAI, and from the Electronic Frontier Foundation, has zeroed in on how AI could be misused for nefarious ends – an aspect that the experts believe has received too little attention in academia.

What their report, called The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, lays out is not some sort of AI takeover in the distant future.

Rather, it outlines attacks that could materialize a few years down the road and that involve the co-opting of AI technologies by malicious actors – unless adequate defenses are developed.

The scenarios consider only AI technologies that are already available, or are plausible within the next five years, focusing particularly on those leveraging ML.

In the wrong hands, according to the experts, AI may change the threat landscape in three essential ways – by expanding existing threats, ushering in new threats, and altering the typical character of threats as we know them.

The nearly 100-page document identifies three security domains – digital, physical, and political – in which the potential malicious use of AI is especially relevant.

Let us dwell on how AI and ML may be misused in the digital realm.

Cyberattacks made even easier


Thanks to the scalability and efficiency of AI systems, labor-intensive cyberattacks could become both easier and more effective to carry out. This includes attacks involving social engineering, such as spearphishing, which the report highlights as an example of an existing threat that may expand.

By automating non-trivial tasks that attackers need to perform prior to launching these targeted operations, AI could enable more adversaries, and with less effort, to conduct them.

“Victims’ online information is used to automatically generate custom malicious websites/emails/links they would be likely to click on, sent from addresses that impersonate their real contacts, using a writing style that mimics those contacts,” reads a possible scenario outlined in the report.

Attackers might be capable of sophisticated spearphishing en masse, “in a manner that is currently infeasible, and therefore become less discriminate in their choice of target”, according to the report. Realistic chatbots mimicking a friend could add new layers to these threats, too.

“An explosion of network penetrations, personal data theft, and an epidemic of intelligent computer viruses” may ensue in the wake of AI-powered attacks.

In addition, the report envisages that AI could automate and accelerate both the discovery of software vulnerabilities and the subsequent creation of malicious code to exploit those flaws.

Also within the realm of possibility are denial-of-service attacks involving a “massive crowd of autonomous agents”, according to the researchers.

Using large datasets, potential victims of financially motivated campaigns could be identified more efficiently and ‘by the truckload’. Their online behavior would then be used to estimate their personal wealth and gauge their willingness to pay up, ultimately increasing the effectiveness of ransomware campaigns.

Vulnerabilities in AI systems themselves could also be ripe for exploitation, for example through adversarial examples and data poisoning.

“Data poisoning attacks are used to surreptitiously maim or create backdoors in consumer machine learning models,” reads another scenario in the report.
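To make the data-poisoning idea concrete, here is a minimal, purely illustrative sketch (not taken from the report) using Python and scikit-learn. It trains a simple classifier on a synthetic dataset, then flips a fraction of the training labels to show how a tampered training set can quietly degrade a model’s accuracy on clean data. The dataset, model choice, and the 20% flip rate are arbitrary assumptions made for the demonstration.

```python
# Toy illustration of training-data poisoning: flipping a fraction of labels
# in the training set degrades the resulting model's accuracy on clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for any "consumer" ML task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Fit a simple classifier on (possibly poisoned) labels; score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean training labels: ", round(train_and_score(y_train), 3))

# Poisoned run: an attacker who can tamper with 20% of the training labels
# flips them, silently degrading the deployed model.
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
print("20% of labels poisoned:", round(train_and_score(poisoned), 3))
```

Real-world poisoning attacks tend to be far subtler than wholesale label flipping, but the effect – a model that silently underperforms or misbehaves – is the concern the report raises.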

The time to act is now

The report also suggests a set of broad interventions to lessen the threats associated with the malicious use of AI.

The experts urge policy-makers to work closely with technical researchers, computer scientists and the cybersecurity community to investigate, understand and prepare for possible malicious uses of AI.

The report also calls for expanding the range of stakeholders involved in the prevention and mitigation of risks.

In addition, engineers and researchers should be more mindful of the potential for misuse and engage relevant stakeholders in order to prevent malicious applications of AI.

“The challenge is daunting and the stakes are high,” reads the report.

Editor’s note: contributed blogs like this are part of ChannelBuzz.ca’s annual sponsorship program. Find out more here. This content originally appeared on ESET’s We Live Security blog.