menu
Regulating AI Gets Real

The chair of the Australian Securities and Investment Commission (ASIC) delivered a keynote address at the University of Technology Sydney Human Technology Institute Shaping Our Future Symposium. The focus of this address was on the current and future state of Artificial Intelligence (AI) regulation and governance. This is a subject which is gaining increased interest as corporations faces the challenges of controlling this rapidly growing technology.

What is AI?

Put simply, AI is a field of computer science which develops and harnesses the intelligence of machines and software. One of the overarching objectives of AI is to simulate human intelligence processes by machines, particularly computers, to permit those machines to perform tasks traditionally performed by humans. This includes reasoning, problem-solving, perception and even creativity.

How Can Corporations Control AI?

Corporations continuously seek new and innovative ways of meeting their corporate objectives and serving their shareholders – whether that is innovation in product lines, embracing new technologies or saving costs. However, this requires a balance. All participants in the corporate and financial system have a duty to balance the innovation which comes with something like AI, with the responsible, safe and ethical use of emerging technologies. ASIC has stressed the fact that existing obligations around good governance and the provision of financial services, don’t change merely because of the introduction of new technology.

“Existing laws likely do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur.”

A key question is how the current regulatory framework can prevent AI-facilitated harms?

Current Regulation of AI

It should be noted that some form of current regulation of AI exists, meaning, business and individual parties who develop and use AI are already subject to several Australian laws. These include laws concerning:

  • Corporate governance
  • Privacy
  • Intellectual Property (IP)
  • Online safety
  • Anti-discrimination

Current director obligations under the Corporations Act are not specific duties, rather they are principle-based and apply broadly. Which means that as companies increasingly deploy AI, this is something which directors must pay special attention to, as part of their directors’ duties.

What this all means is that a directors’ responsibility towards good governance does not change, merely because technology changes and advances.

Is Current Regulation Sufficient for AI?

2024 is being referred to as “the year AI grows up” and much is being made of how quickly AI will expand this year. The potential benefits to businesses and the economy are enormous, with an estimated additional $170 billion to $600 billion a year to Australia’s GDP by the year 2030. However, with this exponential growth, comes risk. The danger is that, as a society, we develop a blind reliance on AI, without sufficient human oversight.

To put the rapid growth into perspective, when ChatGPT launched on 30 November 2022 it took just 2 months to get to 100 million users. Compare that with the launch of the World Wide Web in 1991 which took 7 years to reach the same user base. The key question is how regulation can adapt to such rapidity. It is important to note that considered and deliberative regulatory organisation takes time. The risk of malpractice is exacerbated by the availability of vast sets of client data and the use of tools such as AI and machine learning which permit quick iteration and micro-targeting of specific client sets.

ASIC recognises that business practices which mislead or deceive clients (where purposefully or accidentally) have always existing. However, this risk of malpractice is exacerbated by the availability of vast sets of client data and the use of tools such as AI and machine learning which permit quick iteration and micro-targeting of specific client sets.

Particular concerns with the regulation of AI include questions concerning transparency and accountibility, in order to protect clients of AI from harm. There is a need for oversight and clarity when it comes to the responsible and ethical use of AI. It is important that AI technologies are used for the purposes for which they are intended and not for some ulterior motive such as promoting particular financial product lines – actions which could ultimately cause harm to clients.

Where to from here?

Some suggestions for a more in-depth regulation of AI include:

  • Red teaming – the use of ethical hackers, authorised by a company, to find flaws within its AI system
  • AI constitutions – the suggestion that AI can be better understood if it essentially has an in-built constitution which it must follow
  • AI risk assessment – the requirement to complete such a risk assessment before implementing any form of AI

However, none of these approaches are fully secure and indeed, some are in their infancy and need to be adequately tested within the paraments of real commercial environments.

ASIC’s conclusion is that questions of transparency, accountability and rapidity deserve considered attention. They can’t be answered quickly or off-hand. They must be addressed to ensure that the advancement of AI means an advancement for all.

The information contained on this website and in this article is general in nature and does not take into account your personal situation. You should consider whether the information is appropriate to your needs, and where appropriate, seek professional advice from a financial adviser. Taxation, legal and other matters referred to on this website and in this article are of a general nature only and are based on our interpretation of laws existing at the time and should not be relied upon in place of appropriate professional advice. Those laws may change from time to time.

View Comments