The Australian publishing industry embraces the possibilities presented by AI technologies to enhance workflow efficiencies, improve research outcomes, support people with a disability, and more. 

However, while offering potentially transformative opportunities and efficiencies, AI technologies also present serious legal and ethical challenges. In representing Australia’s publishing sector, the Australian Publishers Association (APA) recognises that Australia’s creative and knowledge sectors are particularly vulnerable to these challenges. 

The APA therefore urges our governments to ensure that any legislative or policy developments in relation to AI have regard to the following core principles, outlined in detail below: 

  1. Policies must be underpinned by a clearly defined ethical framework 
  2. Transparency is key
  3. Ensure appropriate incentives and protections for creators and rights-holders
  4. Policy settings must be balanced 

Publishing and AI

Australian publishers play a vital role in the nation’s intellectual and creative ecosystem – we drive the creation of works that shape our culture, education, and science, acting as conduits for global knowledge. We nurture new literary voices, develop quality educational materials tailored to local needs, and advance world-class contributions in the fields of scientific, medical, and academic research. Australia’s thriving and diverse publishing sector has to date been underpinned by balanced legal and policy frameworks that value intellectual property and provide appropriate incentives to support the creation of new works. This has been to the benefit of our society overall.

Left unregulated, the rapid development and deployment of AI technologies threatens to disrupt this to our detriment. AI technologies rely on – and will continue to rely on – huge corpora of ‘data’, including copyright-protected published works such as books, articles, and essays, for training and development. It is undisputed that some of the most significant AI tools to date (including ChatGPT) have been ‘trained’ on content without acknowledgment or permission from the original creators.

This unfettered appropriation of published works is not only fundamentally unfair, but also deeply damaging to Australia’s creative and knowledge sectors. Poorly regulated AI development risks distorting the publishing value chain, displacing our creative workforce, and destroying our creative ecosystem of bookstores, literary festivals, and more.

The APA welcomes the Australian Government’s recognition that a cohesive, responsible, and sustainable regulatory framework must be developed in relation to AI technologies. It is imperative that AI regulation in Australia strikes the correct balance so that both AI development and cultural industries can flourish. This necessitates that Australian publishers, authors, and other creators are supported, not displaced to the detriment of our nation's unique cultural landscape and an authentic Australian voice. 

1. Policies must be underpinned by a clearly defined ethical framework

Any development of AI and associated technologies must be done in service of humanity – as such, AI developers and users must be required to adhere to a clear and robust ethical framework that places human rights and dignity at its core. 

We endorse the federal government’s AI Ethics Principles:

  • Human, social, and environmental well-being 
  • Human-centred values 
  • Fairness 
  • Privacy
  • Reliability and Safety
  • Transparency and Explainability 
  • Accountability

We note that this is currently only a voluntary framework, and call for it to be made compulsory nationwide.

2. Transparency is key

Transparency is crucial in ensuring the safe, ethical and fair development of AI technologies. The nature of AI training practices and generative AI outputs carries risk of significant harm – not only by undermining the livelihoods of creators and displacing our creative workforce, but also by creating deep uncertainty amongst consumers and content users. Lack of transparency in AI development also poses a threat to the integrity of science and research, potentially undermining the corpus of human knowledge by embedding societal biases or misinformation. 

Strict transparency parameters must be put in place to guide both AI developers and users, at minimum across the following areas: 

  • Inputs – AI developers must be required to disclose copyrighted materials used for AI training, and for what purpose. There is global precedent for this – the Draft EU AI Act, for instance, requires foundation model AI developers to publish summaries of copyrighted data used for training. This is important both to avoid serious and well-documented issues of bias, and to appropriately acknowledge the use of original content.
  • Outputs – Generative AI, with its capacity to rapidly produce huge volumes of text and images, poses particularly significant transparency issues for content users and general consumers. It is essential for educational, research, and cultural institutions, as well as general consumers, to be able to easily identify generative AI works. AI developers and users must be required to declare when a work is wholly or partially AI-generated.

3. Ensure appropriate incentives and protections for creators and rightsholders

Australia’s creative and knowledge ecosystem relies on a robust legislative and policy framework that appropriately acknowledges creators and rightsholders for their labour and provides incentives for the creation of further works. These crucial mechanisms form the foundation of Australian cultural life and must not be overlooked in pursuit of AI development. 

The APA calls upon governments to ensure that AI policies:

  • Uphold the integrity of Australia’s existing intellectual property (IP) framework, including copyright law, moral rights, and licensing models. The APA opposes the introduction of new exceptions and limitations, including for Text and Data Mining (TDM) or other AI uses, which would allow the use of copyright-protected content from Australian creators, researchers, and copyright owners without permission, acknowledgement, or remuneration.
  • Respect human authorship – AI tools, generative or otherwise, may present opportunities to enhance human expression but must not be used to undermine human creativity, which is essential to a resilient, democratic, and modern society.
  • Honour the government’s existing commitment to support Australia’s cultural industries under Revive: the National Cultural Policy.

4. Policy settings must be balanced 

The potential impact and implications of AI technology are extraordinarily wide-reaching, while the highly technical nature of AI development means that its centre of power is extraordinarily concentrated. AI development that can impact millions of Australians is in fact controlled by a handful of commercial organisations, most of which are based overseas.

It is imperative that any regulations, policies, standards, and guidelines be created in close consultation with key stakeholders across a broad cross-section of the community, including the publishers, authors, and other rightsholders who have already been – and are likely to continue to be – significantly impacted by AI developments.

This content was last updated on 21 November 2023