Navigating generative AI and the Australian Privacy Act: A Microsoft guide
Clayton Noble, Head of Legal, Microsoft Australia and New Zealand
The rise of generative AI is unlocking new possibilities for innovation, enabling private and public sector organisations to enhance services, optimise operations and drive change across industry sectors. However, these exciting opportunities come with responsibilities, such as maintaining compliance with the Australian Privacy Act 1988 (Cth) (Privacy Act).
At Microsoft, we understand the need to balance technological progress with the trust and privacy obligations essential to our customers. That’s why we’ve developed a comprehensive guide to help organisations harness the power of generative AI while adhering to their legal responsibilities under the Privacy Act.
Our guide aims to empower organisations to integrate technologies like Microsoft 365 Copilot and Azure OpenAI Service with confidence.
While generative AI introduces new capabilities, the Australian Privacy Principles – which are the cornerstone of the privacy protection framework in the Privacy Act – apply just as they would in any other context, such as when using cloud services. This means organisations should be able to approach the procurement and use of Microsoft’s generative AI technologies with the same confidence as they do our longer-standing cloud solutions.
The guide outlines key obligations under the Privacy Act, including those relating to transparency, security, individuals’ rights and international data transfers. It also discusses how our AI solutions align with these requirements.
Importantly, the guide emphasises that Microsoft does not use customer data, including prompts or outputs from generative AI tools, to train its foundation models without customer permission. We believe customer data remains the property of the customer.
One of the key takeaways from the guide is Microsoft’s unwavering commitment to responsible AI. This centres on six principles that align with the objectives of the Privacy Act: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability.
These principles guide every stage of our AI development and deployment, ensuring that our technologies are not only compliant but also trustworthy. We offer a variety of resources and tools to support organisations in deploying AI responsibly, helping them meet their regulatory obligations while driving innovation.
Our new privacy guide also highlights Microsoft’s privacy commitments, including those in our Data Protection Addendum. These commitments extend to our AI products, like Microsoft 365 Copilot and Azure OpenAI Service.
Organisations can rest assured that their data is safeguarded by robust governance practices that comply with the Privacy Act, the European Union’s General Data Protection Regulation and other privacy laws relevant to the provision of our AI products. Microsoft’s AI solutions are built on what we believe is the most trusted cloud platform available today, giving organisations peace of mind as they explore generative AI’s potential.
As AI and data protection regulations continue to evolve, Microsoft remains committed to supporting our customers on their generative AI journey. Our guide not only addresses current requirements but also looks ahead to future developments in AI governance and data privacy, equipping organisations for the changes to come.
For organisations looking to unlock generative AI’s potential while staying compliant with the Privacy Act, Microsoft’s new guide provides the insights needed to move forward confidently. Explore the complete guide to learn how Microsoft can help you responsibly integrate AI into your operations and ensure your organisation is prepared for the future.