How the rush to regulate AI could bring new cyber security challenges


Since the arrival of generative AI, its potential to exacerbate privacy and cyber security challenges has become a major concern. As a result, government bodies and industry experts are hotly debating how to regulate the AI industry.

So, where are we heading, and how is the crossover between AI and cyber security likely to play out? The lessons learnt from efforts to regulate the cyber security market over the past few decades suggest that achieving anything similar for AI is a daunting prospect. However, change is essential if we are to create a regulatory framework that guards against AI's negative potential without also blocking the positive uses AI is already delivering.

Part of the challenge is that the existing compliance environment is already complex and growing more so. For UK multinational companies, for example, the work required to meet regulations such as GDPR, PSN, DORA, and NIS, to name a few, is significant. That is before client or government requirements for adherence to information standards such as ISO 27001, ISO 22301, ISO 9001, and Cyber Essentials.

You can add to this the rules put in place by individual companies – technology vendors and their customers, for example, conducting cyber security audits on each other. In each case, organizations have specific and sometimes unique questions they want to ask, some requiring proof and evidence. As a result, the overall compliance task becomes even more nuanced and complex – a challenge that, currently, is only likely to grow.

It goes without saying that these rules and regulations are extremely important to ensure minimum performance standards and to protect the rights of individuals and businesses alike. However, the lack of international coordination and uniformity of approach risks making the compliance task untenable.

New rules at home and abroad

Take the EU’s Artificial Intelligence Act, adopted in March this year and designed to ensure “safety and compliance with fundamental rights, while boosting innovation.” It covers a wide range of important cyber security points: limitations on law enforcement’s use of biometric identification systems; bans on social scoring and on AI used to manipulate or exploit user vulnerabilities; and the right of consumers to lodge complaints and receive meaningful explanations.

Compliance breaches could result in significant fines: up to €35 million or 7 percent of global annual turnover for banned AI applications; €15 million or 3 percent of turnover for violations of other obligations under the AI Act; and €7.5 million or 1.5 percent of turnover for supplying incorrect information.
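
By way of illustration, the short Python sketch below shows how those tiers translate into a maximum exposure figure for a given company. It assumes the applicable cap is whichever is higher of the fixed amount and the turnover percentage; the tier names and the €2 billion turnover figure are purely hypothetical.

# A minimal sketch of the fine tiers described above (figures as per the article).
# Assumption: the applicable cap is the higher of the fixed amount and the
# percentage of global annual turnover; tier names are illustrative only.
FINE_TIERS = {
    "banned_ai_applications": (35_000_000, 0.07),    # EUR 35m or 7% of turnover
    "ai_act_obligation_breach": (15_000_000, 0.03),  # EUR 15m or 3% of turnover
    "incorrect_information": (7_500_000, 0.015),     # EUR 7.5m or 1.5% of turnover
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine for a tier, taking the higher of the two caps."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Hypothetical example: a company with EUR 2bn global annual turnover.
for tier in FINE_TIERS:
    print(f"{tier}: up to EUR {max_fine(tier, 2_000_000_000):,.0f}")

For a company of that hypothetical size, the turnover-based percentage would be the binding cap in every tier, which is why the percentages matter at least as much as the headline euro figures.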

In addition, the Act seeks to address the cyber security risks faced by developers of AI systems. Article 15 states that “high-risk AI systems shall be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities.”

While the Act also applies to UK organizations trading in the EU, there are moves to enact additional legislation here that would further localize regulations. In February, the UK government published its response to a White Paper consultation intended to shape the direction of AI regulation in this country – cyber security included. Subject to the outcome of the election, it remains to be seen how this progresses, but whoever is in power, further regulation is inevitable. Elsewhere, legislators are busy preparing their own approaches to how AI should be governed, and from the US and Canada to China, Japan and India, new rules are arriving as part of a rapidly evolving environment.

Regulatory challenges

As these various local and regional laws come into force, the level of complexity rises for organizations building, using, or securing AI technologies. The practical difficulties are considerable, not least because AI's decision-making processes are opaque, making it difficult to explain or audit how decisions have been reached – explainability that is already a requirement in some regulatory environments.

Some people are also concerned that strict AI regulations could stifle innovation, particularly for smaller companies and open-source initiatives, while larger stakeholders may support regulation as a way to limit competition. There has also been speculation that, in these circumstances, AI startups may relocate to countries with fewer regulatory requirements, potentially leading to a ‘race to the bottom’ in regulatory standards, with all the security risks that could bring.

Add to this the fact that AI is highly resource-intensive, raising concerns about sustainability and energy consumption and creating the potential for further regulatory oversight, and it can feel like the list goes on and on. Ultimately, however, one of the most important requirements for effectively regulating AI is that governments should, wherever possible, cooperate to develop unified and consistent regulations. Existing privacy laws and considerations vary by region, for instance, but core security principles should remain the same.

If these issues aren’t addressed, it is more likely that we’ll see organizations breaking the rules on a regular basis and, just as worrying, gaps appearing in AI-related cyber security that threat actors will be all too ready to exploit.

Richard Starnes is CISO at Six Degrees.

