Artificial Intelligence: What are 4 major cyber threats for 2024?
AI is one of the most powerful innovations of the decade, if not the most powerful. Yet with that power comes the risk of abuse.
Whenever a new, disruptive technology is introduced to society, wrongdoers will find any way it can be abused for their own nefarious gain. The threat of AI is thus not inherent to the technology itself, but an unintended consequence of bad actors using it to wreak havoc and cause harm. If these cyber threats posed by the misuse of AI go unaddressed, they will undermine the technology's legitimate, beneficial uses.
1. AI-powered phishing attacks
One of the most obvious harmful use cases of AI is the improvement of phishing schemes. Phishing scammers, who impersonate a trusted source to convince a victim to share personal information, are using generative AI to make their messages more convincing.
While generative AI is designed for purposes like drafting emails or powering customer service chatbots, a scammer can feed a model a library of written material from the person they hope to imitate and generate a convincing impersonation. This makes it much harder to distinguish legitimate messages from fraudulent ones.
2. Deepfakes
Generative AI can also be abused by scammers to create fraudulent images, audio, and video clips known as “deepfakes.” Deepfakes have been in the news recently because of their use for destructive purposes, including reputational damage, blackmail, the spread of misinformation, and the manipulation of elections and financial markets. The technology has become so advanced that it is now exceedingly difficult to distinguish genuine content from doctored content.
3. Automated cyber attacks
Another capability of AI that wrongdoers have leveraged to cause significant harm is its capacity for advanced data analytics. While this capability can substantially improve companies' efficiency and productivity, it can just as easily boost the efficiency of bad actors -- hackers included. Hackers can program an AI model to constantly probe networks for vulnerabilities, increasing the volume of their attacks and making them more difficult to detect and respond to.
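To make the mechanics concrete, automated probing at its core is just a loop over hosts and ports run at machine speed -- the same pattern defenders use to audit their own networks. The sketch below is a minimal, hypothetical illustration using only Python's standard library; the host and port lists are placeholder assumptions, and real AI-assisted tooling layers models on top of this kind of loop to prioritize targets.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hosts and ports are illustrative placeholders (RFC 5737 TEST-NET
# addresses); only scan networks you own or are authorized to test.
HOSTS = ["192.0.2.10", "192.0.2.11"]
COMMON_PORTS = [22, 80, 443, 3389]

def check_port(host: str, port: int) -> tuple[str, int, bool]:
    """Attempt a TCP connection; an open port marks a service worth reviewing."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            # connect_ex returns 0 when the connection succeeds
            return host, port, sock.connect_ex((host, port)) == 0
    except OSError:
        return host, port, False

# Checking many host/port pairs concurrently is what makes automation
# so much faster than manual probing.
with ThreadPoolExecutor(max_workers=32) as pool:
    tasks = [pool.submit(check_port, h, p) for h in HOSTS for p in COMMON_PORTS]
    for task in tasks:
        host, port, is_open = task.result()
        if is_open:
            print(f"{host}:{port} is open -- review whether this service should be exposed")
```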
4. Attacks on supply chains and critical infrastructure
However, an even more significant threat arises when these automated attacks target critical infrastructure or supply chains. Virtually everything in our world -- from shipping routes, traffic lights, and air traffic control to power grids, telecommunications systems, and financial markets -- runs on computers. Should a hacker gain control of one of these networks through an automated attack, the damage, both financial and in terms of loss of life, could be catastrophic.
Fighting back against the abuse of AI
Thankfully, these AI-enabled cyber threats need not go unchecked, because many of the tools bad actors use to cause harm can be repurposed to serve a cybersecurity function. The same models that hackers train to identify vulnerabilities, for instance, can be used by network owners to discover weaknesses that need to be repaired. AI models are also being developed to analyze text, images, and audio and determine whether they are legitimate or AI-generated frauds.
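As a rough illustration of what such a defensive model looks like in miniature, the sketch below trains a toy text classifier to score a message as likely fraudulent. Everything here is an illustrative assumption -- the four sample messages, their labels, and the simple TF-IDF-plus-logistic-regression setup built with scikit-learn -- standing in for the far larger models alluded to above, not describing any specific product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, purely illustrative: label 1 = known phishing text,
# label 0 = legitimate text. A production system would train on
# thousands of vetted samples.
messages = [
    "Urgent: your account is locked, verify your password now",
    "Your invoice is overdue, click here to avoid penalties",
    "Team meeting moved to 3pm, agenda attached",
    "Thanks for your order, your receipt is below",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression: a deliberately simple
# stand-in for larger fraud-detection models.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

suspect = "Verify your password immediately or your account will be locked"
score = model.predict_proba([suspect])[0][1]  # probability of the "fraud" class
print(f"Estimated probability this message is fraudulent: {score:.2f}")
```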
We also have a powerful tool to fight these harmful use cases: education. By staying informed about the cyber threats posed by AI abusers, we can avoid falling victim to them. We must follow robust cybersecurity practices, including strong passwords and access control, and do our due diligence when handling suspicious messages to determine whether they are scams or authentic.
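Even individual due diligence can be made systematic. The sketch below encodes a few common-sense checks for suspicious messages as a hypothetical helper function; the keyword list, function name, and sample inputs are all illustrative assumptions rather than a real filtering tool, and genuine email security products combine far more signals.

```python
import re

# Illustrative heuristics only; real tools also check sender
# authentication, URL reputation, attachments, and more.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def suspicion_flags(message: str, sender_domain: str, link_domains: list[str]) -> list[str]:
    """Return human-readable reasons a message deserves extra scrutiny."""
    flags = []
    lowered = message.lower()
    if any(word in lowered for word in URGENCY_WORDS):
        flags.append("uses high-pressure or urgent language")
    if any(domain != sender_domain for domain in link_domains):
        flags.append("contains links pointing outside the sender's domain")
    if re.search(r"\b(password|ssn|credit card)\b", lowered):
        flags.append("asks for sensitive information")
    return flags

# Hypothetical message: all three flags fire, signaling a likely scam.
print(suspicion_flags(
    "Urgent: verify your password now",
    sender_domain="example.com",
    link_domains=["login-example.net"],
))
```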
Artificial intelligence is poised to change the world, but whether that change is for the better or the worse depends on whose hands the technology falls into and how they use it. To build a world where AI makes things better, we must first gain a clearer understanding of how the technology is being used to cause harm; that understanding is the first step in mitigating these potentially dangerous cyber threats.
Ed Watal is an AI thought leader and technology investor. One of his key projects is BigParser, an ethical AI platform and data commons. He is also the founder of Intellibus, an Inc. 5000 “Top 100 Fastest Growing Software Firm” in the USA, and the lead faculty of AI Masterclass -- a joint operation between NYU SPS and Intellibus. Forbes Books is collaborating with Ed on a book about our AI future. Board members and C-level executives at the world's largest financial institutions rely on him for strategic transformational advice. Ed has been featured on Fox News, QR Calgary Radio, and Medical Device News.