OpenAI’s ChatGPT and the generative AI products of Meta and Google should be designated “high risk” under dedicated artificial intelligence laws that could strictly regulate or even ban the most risky AI technologies.

That’s the bipartisan recommendation of a special parliamentary inquiry into the rapidly growing tech, which has also levelled an extraordinary accusation at the tech giants that they have committed “unprecedented theft” from creative workers in Australia.

The senators said that if Amazon, Meta and Google’s use of copyrighted content without permission or compensation was not already unlawful, “it should be”.

The committee recommended work begin “urgently” to develop a mechanism for creators to be paid if their work is used to train commercial AI models.

The findings set the stage for the federal government to introduce overarching legislation that could explicitly prohibit certain uses of AI, and a comprehensive framework to cover its use in healthcare, the office, online, or any other part of society.


Ed Husic is developing the government’s response to the rapid rise in popularity of AI. (ABC News: Nick Haggarty)

The government established the parliamentary committee to consider whether it should respond to the rise of AI with “whole-of-economy” legislation, tweaks to existing laws, or a lightest-touch approach of regulations developed in partnership with the industry.

The committee opted for the strongest response.

The inquiry’s chair, Labor senator Tony Sheldon, said AI presented a great opportunity for Australia, but that if companies wanted to operate in this country they should not be able to exploit Australians.

“Artificial intelligence has incredible potential to significantly improve productivity, wealth and wellbeing, but it also creates new risks and challenges to our rights and freedoms that governments around the world need to address,” Senator Sheldon said.

“We need new standalone AI laws to rein in big tech and put strong protections in place for high-risk AI uses while existing laws should be amended as necessary.

“General-purpose AI models must be treated as high-risk by default, with mandated transparency, testing, and accountability requirements. If these companies want to operate AI products in Australia, those products should create value, rather than just strip mine us of data and revenue.”

AI a high risk to democracy, workplace rights

The committee specifically recommended that tools like OpenAI’s ChatGPT, known as large language models, be “explicitly” included on a list of high-risk AI uses, as well as AI tools used in the workplace to surveil workers or track their output.

“In doing so, these developers will be held to higher testing, transparency and accountability requirements than many lower-risk, lower-impact uses of AI,” it said.

It noted that AI-generated content emanating from Russia was employed in an attempt to disrupt and influence the recent United States presidential election, saying AI’s potential to “harm democracy” was perhaps the most significant risk it posed.

It also said the risk of discrimination, bias and errors by AI algorithms was widely recognised, and that there was global concern about a lack of transparency. But Amazon, Google and Meta, the owner of Facebook and Instagram, had been uncooperative and refused to directly answer questions at the inquiry.

Senators said their interactions with AI developers “only intensified” their concerns about how the models were operating.


Senators said the response of Meta and other large technology companies at the inquiry only caused them more concern. (ABC News: Adam Kennedy)

An AI act could set mandatory guardrails by identifying what kinds of technologies are high-risk or low-risk, such as an AI tool used in surgery or one used in an online chess game, as well as specifically listing particular products where necessary.

Similar legislation introduced in Europe sets a “risk” framework to ban social scoring tools like those used in China, and potentially real-time facial recognition tools like the system Bunnings was recently found to have used in breach of privacy law.

The committee found that trust in AI was lower in Australia than in other countries, leading to lower adoption rates here, and that strong safeguards could give the public confidence that the industry can grow safely.

Senators agreed that a “risk-based” approach, curbing the most significant risks of AI without unnecessary intervention in low-risk tools, could allow the multi-billion-dollar industry to develop safely without being stifled.

AI companies committed ‘unprecedented theft’ from creatives

The committee also determined multinational tech companies operating in Australia had committed “unprecedented theft” from creative workers.

It said developers of AI products should be forced to be transparent about the use of copyrighted works in their training datasets, and that the use of that work be appropriately licensed and paid for.

A mechanism to ensure fair remuneration to creators whose work is used should also be developed in consultation with the creative industry.

The inquiry heard a “significant body of evidence” that AI was already impacting creative industries in Australia, and while that could deliver some productivity gains, stakeholders almost unanimously expressed “grave” concerns about the impact of AI on their jobs and the quality of their work.

It said while AI systems in the US were able to take advantage of copyrighted materials, the committee heard that use likely amounts to a breach of copyright under Australia’s more stringent copyright laws.

“There is no part of the workforce more acutely and urgently at risk of the impacts of unregulated AI disruption than the more than one million people working in the creative industries and related supply chains,” the committee said.

“If the widespread theft of tens of thousands of Australians’ creative works by big multinational tech companies, without authorisation or remuneration, is not already unlawful, then it should be.”

It said the notion put forward by Google, Amazon and Meta that their “theft” of Australian content was for the greater good because it ensured Australian culture was represented in AI output was “farcical”.