The latest OAIC guidance provides a comprehensive framework to help organisations meet their obligations under the Privacy Act 1988.
Key Practices for Ensuring Compliance and Reducing Risk:
1. Privacy by Design
Developers must take reasonable steps to embed privacy protections into the design of AI systems. A Privacy Impact Assessment (PIA) should be conducted to identify privacy risks and offer solutions to manage, mitigate, or eliminate those risks.
Developers should provide disclaimers that outline the model’s capabilities and its limitations, especially when the model relies on outdated or incomplete datasets.
2. Data Collection and Minimisation
Data minimisation is a core principle of privacy law: collect only the personal data that is reasonably necessary for developing the AI model, and limit the scope of collection accordingly.
Public data is not exempt from privacy obligations.
When using third-party datasets, ensure that proper contracts are in place and obtain assurances that the data was collected in compliance with the Privacy Act.
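In practice, data minimisation can be enforced at ingestion time. The sketch below is a minimal, hypothetical illustration: records are projected onto an allow-list of fields deemed reasonably necessary for model development. The field names are assumptions for illustration, not taken from the OAIC guidance.

```python
# Hypothetical sketch: enforce data minimisation by keeping only fields
# on an allow-list; everything else is dropped before storage.
NECESSARY_FIELDS = {"age_band", "postcode_region", "interaction_text"}

def minimise(record: dict) -> dict:
    """Project a record onto the allow-listed, necessary fields."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {
    "full_name": "Jane Citizen",  # not necessary for the model -> dropped
    "age_band": "25-34",
    "interaction_text": "Order enquiry about delivery times",
}
print(minimise(raw))
```

The allow-list makes the "reasonably necessary" decision explicit and reviewable, rather than leaving it implicit in whatever the pipeline happens to collect.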
3. Sensitive Information Handling
Consent is mandatory when collecting sensitive information, such as images or audio recordings that may reveal personal details. AI systems must avoid scraping this type of data from third-party websites without valid consent.
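A consent requirement like this can be made concrete as an ingestion gate. The following is a hypothetical sketch; the record schema and flag names are assumptions for illustration only.

```python
# Hypothetical sketch: admit sensitive records (e.g. images, audio that
# reveal personal details) only when valid consent has been recorded.
def can_ingest(record: dict) -> bool:
    """Gate sensitive data on an explicit consent flag."""
    if record.get("contains_sensitive_info"):
        return record.get("consent_obtained") is True
    return True  # non-sensitive data follows the ordinary collection rules
```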
4. Accuracy and Testing
Developers must take reasonable steps to ensure the accuracy of the personal data used in AI models. This includes using high-quality datasets, conducting rigorous testing, and continuously assessing the model’s outputs for errors or inaccuracies.
Developers must communicate AI limitations clearly to users and provide appropriate safeguards.
Regular updates and fine-tuning of AI models are necessary to maintain data accuracy over time. AI systems should have a mechanism to correct errors in training data and adjust outputs when new information becomes available.
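One simple way to realise a correction mechanism is a corrections log that is applied to training records before each fine-tuning run. This is a hypothetical sketch; the record identifiers and fields are illustrative assumptions.

```python
# Hypothetical sketch: a corrections log mapping (record_id, field) to the
# corrected value, applied to training data before the next fine-tuning run.
corrections = {("user_17", "postcode"): "3000"}  # illustrative entry

def apply_corrections(record_id: str, record: dict) -> dict:
    """Return a copy of the record with any logged corrections applied."""
    fixed = dict(record)
    for (rid, field), value in corrections.items():
        if rid == record_id and field in fixed:
            fixed[field] = value
    return fixed
```

Keeping corrections in an auditable log, rather than silently editing the dataset, also helps demonstrate the "reasonable steps" taken to maintain accuracy.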
5. Transparency and User Consent
Developers must update their privacy policies and notifications to clearly explain how AI systems collect, use, and disclose data.
Notice obligations must be fulfilled, particularly when collecting data through web scraping or using third-party data.
6. Security and Data Protection
Developers must implement security measures to protect training datasets from data breaches and unauthorised access.
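One common technical safeguard is pseudonymising direct identifiers before a dataset is stored, so a breach does not expose the raw values. The sketch below uses a keyed HMAC from the Python standard library; the key handling and field choice are assumptions for illustration, and in practice the key would live in a secrets manager, not in source code.

```python
import hashlib
import hmac

# Hypothetical sketch: replace direct identifiers with a keyed digest so the
# stored dataset no longer contains the raw identifier.
PSEUDONYM_KEY = b"example-key-keep-in-a-secrets-manager"  # assumption

def pseudonymise(identifier: str) -> str:
    """Map an identifier to a deterministic HMAC-SHA256 hex digest."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()
```

Because the digest is deterministic under one key, records belonging to the same person can still be linked for training purposes without storing the identifier itself.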
7. Ongoing Monitoring and Compliance
Developers should continually monitor the use of personal data in AI models, especially as the models evolve or new use cases arise.
8. Ethical Considerations
AI systems can introduce ethical risks such as bias and discrimination.
Special attention must be given to vulnerable groups and children, as AI models could disproportionately impact these populations.
Author: Sebastian Burgemejster