There is a common assumption in Australian businesses that AI governance is someone else’s problem: a big-company problem, a tech-company problem, a “we’ll deal with it when it becomes a problem” problem.
The Australian Privacy Act does not agree with that assumption. And it has been updated specifically to account for AI-powered tools.
This is the plain-English version of what the legislation actually requires, and what most small and medium businesses are currently missing.
The threshold is lower than you think
The Privacy Act applies to businesses with annual turnover above $3 million, to health service providers of any size, and to any business that trades in personal information. If your business uses a CRM with customer names and contact details, runs automated email sequences, uses AI tools that process customer queries, or stores client data in cloud platforms, you are almost certainly in scope. And the AI tools you’re using are processing personal information under your responsibility, not the tool vendor’s.
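Those thresholds reduce to a single decision rule. The sketch below is illustrative only: the function name and inputs are hypothetical, and it is not legal advice.

```python
# Illustrative only: the Privacy Act scope thresholds described above,
# expressed as a single decision rule. Not legal advice.

def likely_in_scope(annual_turnover_aud: float,
                    health_service_provider: bool,
                    trades_in_personal_info: bool) -> bool:
    """Rough heuristic for whether the Privacy Act likely applies."""
    return (annual_turnover_aud > 3_000_000
            or health_service_provider    # in scope at any size
            or trades_in_personal_info)   # in scope at any size

# A $1.2M business that trades in personal information is still in scope:
print(likely_in_scope(1_200_000, False, True))  # True
```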
The three things most businesses get wrong
Assuming the vendor is responsible for compliance. When you put customer data into an AI tool (whether that’s a chatbot, a reporting platform, or a CRM with AI features), you are the data controller and the vendor is the processor, to borrow the GDPR terms; under the Privacy Act, the obligations are yours. If the AI tool sends your customer data to a server in the US for processing, you are responsible for ensuring that overseas disclosure is compliant. Most SaaS AI tool agreements contain a clause along the lines of “by using this service, you agree that data may be processed in our infrastructure globally.” Most businesses click through without reading it.
No record of what AI tools are in use. ABISA’s compliance audits almost always start with a simple question: “Can you give us a list of every AI-powered tool your business currently uses?” Most businesses can’t. The honest answer is usually: “Probably whatever the team installed.” Shadow AI (tools installed by individual team members without IT or management awareness) is the most common source of compliance exposure in small and medium businesses.
No breach response plan. Under the Notifiable Data Breaches scheme (part of the Privacy Act), a business that suspects an eligible data breach must assess it within 30 days, and once it has reasonable grounds to believe the breach is eligible, it must notify affected individuals and the OAIC as soon as practicable. Most businesses have no plan for this scenario. The first time they think about it is when it happens.
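The timeline arithmetic is simple, but it belongs in the plan rather than being worked out mid-incident. A minimal sketch, using a hypothetical suspicion date:

```python
# A minimal sketch of the NDB assessment window described above:
# 30 days from first suspecting a breach to completing the assessment.
from datetime import date, timedelta

ASSESSMENT_WINDOW = timedelta(days=30)

suspected_on = date(2025, 3, 3)  # hypothetical: day a breach is first suspected
assess_by = suspected_on + ASSESSMENT_WINDOW

print(f"Assessment must be complete by {assess_by}")  # 2025-04-02
```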
What a defensible AI use policy actually looks like
It doesn’t have to be complicated. A defensible AI use policy covers four things:
An inventory of AI tools in use: what tools, who approved them, what data they process, where that data is stored and processed (a sketch of one such entry follows this list).
Data minimisation: do the AI tools you’re using need access to all the data they currently have access to? Often the answer is no, and access can be restricted without losing functionality.
Staff training: not a one-hour eLearning module, but a clear written guide on what staff can and can’t put into AI tools, with examples specific to your business context.
A breach response plan: who decides if a breach is notifiable, who notifies the OAIC, who contacts affected individuals, and what the timeline is.
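The inventory is the easiest item to make concrete. Below is a minimal sketch of what one entry could record, matching the fields listed above; the class and field names are illustrative, not a prescribed format.

```python
# A minimal sketch of one AI tool inventory entry, matching the fields
# listed above. Class and field names are illustrative.
from dataclasses import dataclass

@dataclass
class AIToolEntry:
    tool: str                  # e.g. "Support chatbot"
    approved_by: str           # who signed off on its use
    data_processed: list[str]  # categories of personal information
    stored_in: str             # where the vendor stores the data
    processed_in: str          # where processing/inference happens

inventory = [
    AIToolEntry("Support chatbot", "Operations lead",
                ["customer names", "contact details", "query text"],
                stored_in="AU", processed_in="US"),
]
```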
An ABISA compliance engagement produces all four. It also produces a board-ready summary, useful if you’re ever asked by a client, a partner, or an auditor to demonstrate your AI governance posture.
The honest picture
Most Australian businesses using AI tools are operating in a grey area that is slowly becoming less grey. The OAIC is increasing enforcement activity. Larger clients and government contracts are starting to require AI governance documentation as part of procurement. The cost of getting ahead of this is low. The cost of being behind it is not.
If you’re not sure where your business stands, the AI Readiness Check includes a compliance section. It’s free, it takes 10 minutes, and the output is a plain-English summary of where your gaps are: no legal jargon, no obligation.
