
Using AI: Has Your Board Done Its Homework?


Published: August 23, 2025

Read Time: 7 minutes


Artificial intelligence (AI) is in its infancy. Yet, its impacts are profound and evolving.

Not-for-profits and charities (NFPs) have been slow to take up AI, reflecting a neutral or cautious approach to its use. This reluctance can be attributed to a hyperawareness of the costs and risks associated with AI as an emerging tool and, perhaps, a gap in understanding of how informed and strategic AI use can strengthen operations and bolster outcomes for organisations.1

In February 2025, Australia signed the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet (Statement). The signatories to the Statement are all committed to AI governance that is ethical, safe, secure, human-centric, diversity-driven and human rights based.2 The focus of the Government on AI in this way invites NFPs and other sectors to do their homework on how it can be best integrated into their organisations. This article provides a launchpad for boards of NFPs to consider how responsible and effective use of AI can benefit their organisations.

1. Opportunities presented by AI in the NFP Sector

The ability of AI to emulate human intelligence and automate tasks such as visual perception, speech recognition, translation and problem-solving, presents significant opportunities for the adoption of AI in the NFP sector.3

NFPs can engage with AI to boost learning outcomes, improve fundraising capability, fast-track regulatory reporting and increase efficiency of their workforce.4 Examples of the benefits include:

  • Improved Employee Experience and Efficiency: AI can reduce the time and cost spent by employees on administrative tasks (such as data collection and customer enquiries) and enable a shift in focus to innovation and other value-add tasks.

  • Increased Productivity and Economic Gains: It has been reported that use of generative AI could add between $45 billion and $115 billion to Australia's GDP through productivity improvements, new jobs, products, services and businesses. The biggest opportunities for economic gain are in healthcare, manufacturing, retail, professional and financial services.

  • New Products and Services: AI can help NFPs identify trends, predict demand, design, prototype and test new products and services.

  • Quality Improvements and Reduction in Error: AI can undertake mechanical or repetitive tasks with minimal errors.5 While upfront investment costs may be high, using AI in this way allows for long-term cost reduction, higher quality and fewer errors.

These opportunities, and others, can be harnessed through good governance based on strong ethical principles.

2. Risks associated with AI

Risks associated with AI are a driving factor of the slow uptake of AI by the NFP sector. However, many of the potential harms from AI system misuse or failure are foreseeable and capable of mitigation. Commonly understood risks include:

  • Bias against vulnerable communities: AI systems learn from large volumes of available data, and research has shown that their risks disproportionately affect vulnerable and marginalised communities because of systemic biases in the data used to train AI models. The under-representation of women and people of colour in that data can reduce the accuracy of AI systems such as facial recognition technologies or computer-aided diagnosis systems.

  • Dehumanisation: Much of the NFP sector relies on human interaction. Increasing reliance on AI may obscure the nuanced individual circumstances of each beneficiary or oversimplify complex issues, leading to inappropriate or ineffective support.

  • Misinformation: The rise of generative AI in politics and pop culture paints a stark picture of the misinformation AI is capable of spreading. Organisations should educate themselves on managing AI-driven misinformation and how it may affect decisions around hiring and governance.

  • Explainability: As AI systems become more complex, it will become harder to explain how AI systems generate their output and why this output is adopted by an organisation. Organisations must conduct their own research and analysis, particularly where decisions have legal or other significant effect, to be able to justify the reasons for adopting AI system output.6

To maximise the benefits of AI, organisations will need to factor it into their risk management frameworks so that the AI risks that apply to them are appropriately identified and mitigated.

3. AI Governance

Australia is developing its regulatory approach to AI. In the second half of 2024, it released a paper outlining options to mandate guardrails for those developing and deploying AI in high-risk settings,7 drawing on developments in other jurisdictions.

UNESCO has offered significant global leadership in AI governance, establishing a Global AI Ethics and Governance Observatory and publishing the first global standard on AI ethics in November 2021. This standard applies to all 194 member states of UNESCO (of which Australia is a founding member).

UNESCO recommends that AI adopters:

  • Develop an ethical approach to governance in which AI governance mechanisms are inclusive, transparent, multidisciplinary, multilateral and multistakeholder.8

  • Adopt a data policy whereby organisations put in place processes that ensure the consistent evaluation of the quality of training data for AI systems, proper data security and protection measures, and good feedback mechanisms to learn from mistakes.9

  • Assess gender, culture and environment impacts and the potential for AI systems to:

    • contribute to gender equality and ensure that the safety of women and girls is not violated by the use of AI;
    • embed culture into AI systems, to the extent possible; and
    • analyse the direct and indirect environmental impact of AI systems.10

As AI governance standards mature, NFPs should, in their use of AI, comply with existing governance and regulatory obligations. Charities must comply with the ACNC Governance Standards, particularly Governance Standard 5.11 Directors and committee members of NFPs that are not registered charities should consider their obligations of care and diligence and obligations to act in the best interests of their organisations in accordance with applicable corporations and incorporated associations legal frameworks.

Below are practical suggestions to empower boards to implement good AI governance:

  • Formally appoint an individual to lead technology governance in the organisation. This individual should be accountable to the board for all AI decisions, and their decisions, together with the justifications for them, should be documented in an AI register.12

  • Review board and organisational policies to ensure that they deal with AI issues, including documenting the nature and frequency of AI reporting.13

  • Invest in training at all levels and remain cognisant of how AI impacts the workforce.14

  • Introduce an organisational AI use policy, which captures privacy, data governance, cyber security and procurement. Consider how the Statement and other overarching AI best practice frameworks can be incorporated into such policy.15

  • Adopt an AI risk appetite, management and compliance framework.16

4. Key takeaways

The rapid evolution of AI presents opportunities, risks and ethical concerns alike. By implementing robust AI governance, boards of NFPs can confidently increase their uptake of AI.


This article was originally published in the Better Boards Conference Magazine 2025.

This article is not legal advice. If you need specific advice on the topics discussed, please contact the author.




References


  1. Institute of Community Directors Australia, Not-for-profits getting smart about artificial intelligence: our survey, 7 February 2024

  2. Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, 11 February 2025

  3. Australian Signals Directorate, Engaging with Artificial Intelligence, 24 January 2024

  4. Australian Charities and Not-for-profits Commission, Guides, Charities and Artificial Intelligence

  5. Australian Institute of Company Directors & Human Technology Institute, A Director’s Introduction to AI, pp 12-13

  6. As above n5, pp 15-17

  7. Department of Industry, Science and Resources, Safe and Responsible AI in Australia, September 2024, p 8

  8. UNESCO, Recommendation on the Ethics of Artificial Intelligence, November 2021, p 27

  9. As above n8, p 29

  10. As above n8, p 33

  11. As above n4

  12. Australian Institute of Company Directors & Human Technology Institute, Governance of AI, p 19

  13. As above n12, p 21

  14. As above n12, p 24

  15. As above n12, p 26

  16. As above n12, p 27

Author

Partner
Mills Oakley

At the time of writing, Vera heads up the Sydney Not-for-Profit, Human Rights & Social Impact team at Mills Oakley. Acting for numerous charities, religious and not-for-profit organisations, Vera has 30 years’ experience in the legal profession.

In her work, Vera is well recognised for her expertise in assisting clients with governance and fundraising issues, restructuring and mergers and regularly advises on constitutions and ACNC/ATO endorsements. Vera has written several academic works, including a chapter within ‘Charity Law’ (2012, 2016 and 2018) published by Thompson Reuters.

Vera sits on numerous charity boards, associations and committees including the ACNC Professional User Group, the Community and Consumer Consultative Group, Cemeteries and Crematoria NSW, Everyday Justice and CatholicCare, Diocese of Parramatta.
