
Clifford Chance

Artificial intelligence

Talking Tech

Facing a reckoning: Australian privacy regulator comments on facial recognition tech in recent ruling

Artificial Intelligence | Data Privacy | 31 December 2024

Between 2018 and 2021, Australian retail chain Bunnings automatically monitored CCTV footage and processed imagery of individuals' faces. Facial vectors extracted from this imagery were then matched against a database of vectors of the faces of known perpetrators who had carried out violent attacks against staff. Where a match was detected, it was recorded. Where there was no match, the vectors were promptly deleted.
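For readers unfamiliar with how such systems operate, the match-or-delete logic described above can be sketched in a few lines of code. This is a purely hypothetical illustration: the function names, the use of cosine similarity and the threshold value are assumptions for explanatory purposes, not details of Bunnings' actual system.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two facial feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_face(vector, watchlist, threshold=0.9):
    """Compare an extracted facial vector against a watchlist of known vectors.

    Returns the index of the best match scoring above `threshold`, or None.
    On a None result, a system of the kind described would be expected to
    discard `vector` immediately rather than retain it.
    """
    best_idx, best_score = None, threshold
    for i, known in enumerate(watchlist):
        score = cosine_similarity(vector, known)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

The point the sketch makes concrete is that the system must first hold every visitor's vector, however briefly, in order to run the comparison at all; a finding central to the Commissioner's reasoning on "collection".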

Following a two-year investigation, this use of facial recognition technology has been ruled unlawful (Determination) by Australia’s Privacy Commissioner (Commissioner).

Relevant requirements under Australian privacy law

The Privacy Act 1988 (Cth) (Privacy Act), including the Australian Privacy Principles (APPs):

  • (consent) prohibits the collection of "sensitive information" without an individual's consent. There are a limited number of exemptions to this consent requirement, including "permitted general situations" (APP 3.4);
  • (transparency): requires that, at or before the time it collects personal information about an individual, an organisation subject to the APPs (APP entity) takes reasonable steps to tell them certain things (e.g. in a privacy policy) (APPs 1.3, 1.4 and 5.1); and
  • (data governance): requires that an APP entity takes reasonable steps to implement practices, procedures and systems that will ensure compliance with the APPs (APP 1.2).

"Sensitive information" includes biometric information, biometric templates and "information or an opinion about an individual's … criminal record" (section 6(1)).

Among the list of "permitted general situations" are (section 16A(1)):

  • where "it is unreasonable or impracticable to obtain the individual's consent … and the [APP entity] reasonably believes that the collection, use or disclosure is necessary to lessen or prevent a serious threat to the life, health or safety of any individual, or to public health or safety"; and
  • where "the [APP entity] has reason to suspect that unlawful activity, or misconduct of a serious nature, that relates to the [APP entity]'s functions or activities has been, is being or may be engaged in … and the [APP entity] reasonably believes that the collection, use or disclosure is necessary in order for the [APP entity] to take appropriate action in relation to the matter".

Other Privacy Act requirements related to this context, though not raised by the Commissioner in the Determination, are that:

  • (lawfulness and fairness) an APP entity must collect personal information only by lawful and fair means (APP 3.5); and
  • (purpose limitation) once personal information about an individual is collected, it can generally only be used or disclosed for the notified purposes for which it was collected. Otherwise, the APP entity must obtain the individual's consent to use or disclosure for any new purpose (APP 6.1).

Takeaways from the Determination

On 19 November 2024, the Office of the Australian Information Commissioner (OAIC) published the Determination, finding that Bunnings had breached the Privacy Act by:

  • collecting sensitive information about the perpetrators, and about all individuals who entered the stores (including customers, staff, visitors and contractors), without consent;
  • failing to take reasonable steps to notify individuals of the facts, circumstances and purposes of the collection of their personal information and the consequences of not collecting their personal information;
  • failing to take reasonable steps to implement practices, procedures and systems to comply with the APPs; and
  • failing to include in its privacy policy information about the kinds of personal information collected and held, and why.

Bunnings had argued that it did not "collect" (for the purposes of the Privacy Act) the personal information of non-matched individuals and, even if it did: (i) such collection did not require consent because a "permitted general situation" applied; and (ii) it took steps that were reasonable in the circumstances to notify individuals of the collection of their personal information, including by way of signage at the entrances of, and posters located within, the stores. It also argued that it had implemented reasonable practices, procedures and systems to comply with the APPs, including considering privacy from the outset, seeking legal advice, building in a mechanism for the immediate deletion of non-matched individuals' personal information (such that the information was held for only 0.00417 seconds), and limiting access to the system to a small number of staff, all of whom had received special training.

Interpreting Privacy Act obligations requires regard to what is reasonable in the circumstances. In making her Determination, the Commissioner considered the relevant circumstances to include:

  • Bunnings' status as a major retail business;
  • the (probable) "hundreds of thousands of individuals" affected by the use of the facial recognition technology;
  • the amount of personal information collected and used, given the programme involved the ongoing, automated collection and use of information;
  • the length of time personal information was held (in this case, 0.00417 seconds, in relation to non-matched individuals);
  • the type of personal information held (i.e. sensitive information);
  • the format of the personal information collected, being numerical data points that could be read only with the aid of sophisticated technology;
  • the consequences of the collection, which for: (i) the public at large was the collection of their sensitive information without their knowledge or consent; and (ii) for matched individuals was the prospect of being subjected to adverse treatment (e.g. by security guards), regardless of their behaviour at the time of collection; and
  • the nature of the technology, which involved the covert collection of biometric information with the potential to adversely affect individuals' rights and interests.

Bunnings avoided a fine because it had good intentions and cooperated with the OAIC through the investigation, but was required to: (i) cease the use of the facial recognition technology; and (ii) publish a prominent statement on its website detailing its failures and explaining how individuals could submit complaints.

Collection of sensitive information

The Commissioner dismissed Bunnings' assertion that it had not collected the personal information of store visitors at large: collection is collection, even if the information is deleted 0.00417 seconds later, since information must first have been collected before it can be deleted from electronic storage. She also adopted broad interpretations of "biometric information" and "biometric templates", noting that the Privacy Act protects information in these categories as "sensitive" information because an individual's physiological and behavioural attributes cannot normally be changed and are persistent and unique to the individual.

Consent and the "permitted general situations"

The OAIC therefore concluded that the facial vectors and criminal records of the (problematic) individuals included in Bunnings’ database were sensitive information and, in the absence of a valid exception, that informed, voluntary, current and specific consent should have been obtained and was not.

The Commissioner adopted a narrow interpretation of the "permitted general situations" exception, supporting her decision-making with a balancing test weighing "the privacy impacts resulting from the collection of sensitive information against the benefits gained by the use of the [facial recognition technology] system", and also considering the suitability of the facial recognition system and whether less intrusive alternatives were available. Her reasoning here is reminiscent of the approach taken by European authorities and courts in their analysis of the GDPR's "legitimate interests" lawful basis for the processing of personal data. The Commissioner was satisfied that Bunnings had reason to suspect unlawful activity, but not that this large-scale use of the facial recognition technology was necessary in the circumstances. Similarly, she was satisfied that the use of the facial recognition technology could lessen or prevent serious threat situations but, crucially, held that the impact on the privacy of individuals outweighed the benefits that were or could be realised by the use of the system. The "permitted general situations" exception was therefore not available to Bunnings. It needed consent and had not obtained it.

Transparency

The Commissioner explained that it was reasonable to take steps to notify individuals about the use of the facial recognition technology because, in particular, the information being collected was sensitive and the consequences for an individual of the collection (e.g. being ejected from the store or other adverse treatment, including in the event of 'false positive' matches) "weighed in favour of greater notification". Simple signage that mentioned "video surveillance" without explicitly referring to facial recognition "did not sufficiently notify individuals that their sensitive information was being collected". Instead, Bunnings ought to have included information on the signage as to the circumstances and purpose of the collection (i.e. data matching and to prevent criminal activity in stores) and the consequences for the individual if the personal information was not collected, and could then have directed the reader to more detailed information about the facial recognition system which could have been published on Bunnings’ website.

Data governance

The Commissioner considered reasonable steps to implement practices, procedures and systems to ensure compliance with the APPs to include:

  • conducting a privacy impact assessment (PIA);
  • implementing policies and procedures addressing how the facial recognition technology works, circumstances in which it can be used, controls on staff access to the system and underlying database, processes for adding individuals to the database, processes for assessing positive matches (including false positives), training requirements and details of how and by whom the efficacy of the system would be periodically reviewed;
  • staff training; and
  • periodic reviews and reporting on the efficacy of the system, the implementation and effectiveness of written policies governing its use and any emerging issues relating to the use of the system.

Bunnings had adopted 'minimum standards' for the use of facial recognition (developed by its corporate parent), but only did so years after the technology was deployed. The Commissioner pointed to deficiencies in Bunnings’ operationalisation and documentation of the minimum standards.

Privacy versus public safety

It is noteworthy that the fundamental features of the Privacy Act, though mostly principles-based, can leave little room for adaptation. Even where there was room for discretion in the application of the "permitted general situations" exception, including in deciding whether the use of facial recognition technology was proportionate in the circumstances, the Commissioner opted to prioritise individuals' privacy.

Does this reflect public sentiment? Recent research suggests that most Australians do not support the use of facial recognition technology to track shoppers in retail settings. However, the same survey suggests that a slim majority support the use of such technology by retail outlets for "identifying shoplifters and anti-social patrons", while three in four support its use by police for identifying criminal suspects. Public opinion appears to turn on the purpose for which facial recognition technology is used and who is using it.

Regulation around the world

Many countries regulate facial recognition technology via their data protection laws, and specific regulation of biometric data exists or is emerging in other jurisdictions.

In the EU and the UK, the (mutually similar) EU GDPR and UK GDPR are more prescriptive than the Privacy Act but share its common basic principles. Any processing (including collection) of facial imagery or other personal data must be "lawful", including satisfaction of one of a series of specified lawful bases, one of which is the "legitimate interests" lawful basis discussed briefly above. It must also be fair and transparent, pursue a specific, explicit and legitimate purpose and comply with the EU/UK GDPR's data minimisation, accuracy, storage limitation, security and (extensive) accountability principles (for example, various forms of processing require the conduct of a "data protection impact assessment", which is broadly equivalent to a PIA under the Privacy Act). Biometric data is, in addition, called out as a "special category" of personal data. The processing of special category data is prohibited unless one of a series of narrow conditions, some of which are fleshed out in national laws on a country-by-country basis, is met. Criminal offence data can only be processed where specifically permitted by national law.

In the EU, the laws of some Member States include conditions allowing processing of biometric and/or criminal offence data for purposes related to the prevention and detection of crime, but typically subject to tight conditions.

In the UK, the Data Protection Act 2018, in place alongside the UK GDPR, does allow the processing of special category and criminal offence data as necessary for the prevention or detection of crime in certain circumstances, provided all the other requirements of the UK GDPR are met. The UK Information Commissioner's Office (UK ICO) takes the view that facial recognition programmes of the kind implemented by Bunnings may be possible in the UK in certain circumstances, subject to stringent requirements to protect the interests of data subjects. The UK ICO has emphasised the importance of evaluating necessity and proportionality on a case-by-case basis when conducting an assessment of whether a lawful basis is available for data processing in connection with live facial recognition, with many of the factors considered in the Determination likely to also be among the relevant factors for consideration in such assessments.

The EU's new AI Act classifies many forms of biometric systems as "high-risk". This means that, as the Act's provisions take effect, they will have to comply with certain new obligations, including undergoing conformity assessments before being placed on the market or put into service in the EU and complying with safety, risk and quality management, data governance, human oversight, accuracy, robustness and cybersecurity requirements. Additional rules will apply when it comes to the use of AI systems for post-remote biometric identification. Also, the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes will soon be prohibited, subject to limited exceptions and specific conditions. At the EU Member State level, certain countries may already have national laws or proposals on facial recognition, biometric identification and algorithmic surveillance. Looking forward, Member State legislation and initiatives will need to be consistent with, and comply with, the AI Act's provisions.

In the United States, there is no single legal framework that regulates biometric data. At the federal level, biometric privacy is subject to oversight by the Federal Trade Commission and sector-specific laws, like the Health Insurance Portability and Accountability Act. At the state level, it is subject to biometric-specific laws in Illinois (the Biometric Information Privacy Act), Texas (the Capture and Use of Biometric Identifier Act) and Washington (the Biometric Privacy Protection Act), comprehensive data privacy laws (e.g. the California Consumer Privacy Act, as amended by the California Privacy Rights Act), general data breach notification statutes and sector-specific laws (e.g. New York City's Admin. Code Secs 22-1201 through 1205).

Pointers for organisations operating in Australia

The Determination was accompanied by new guidance on assessing privacy risks before deploying facial recognition technology (FRT Guidance). Regulator guidance is given less weight in Australia than in the EU and UK, but the FRT Guidance sheds light on the OAIC's views on necessity and proportionality (APP 3), consent and transparency (APPs 3 and 5), accuracy, bias and discrimination (APP 10), and governance, accountability and ongoing assurance (APP 1). The FRT Guidance reminds organisations that biometric templates and biometric information are sensitive information for the purposes of the Privacy Act and underscores the importance of conducting a PIA to ensure privacy is embedded into any implementation of facial recognition technology.

The Privacy and Other Legislation Amendment Act 2024 (Cth), which received royal assent on 10 December 2024, creates a new statutory tort for serious invasions of privacy. While a high threshold needs to be met, and defences are available, organisations should observe additional caution when assessing risks and putting controls in place given the level of privacy intrusion associated with facial recognition technology. The Act also expands the OAIC's powers of enforcement and investigation, with new tiers of civil penalties and the ability to issue infringement notices, and introduces an additional requirement that privacy policies must explain substantially automated decisions which significantly affect individuals' rights or interests.

Takeaways for global businesses

The Determination is instructive for organisations around the world that use or are considering using facial recognition technology in publicly accessible spaces. Organisations should:

  • undertake a thorough PIA to identify and mitigate potential privacy risks – and to identify and address all applicable obligations – including ensuring careful consideration of privacy by design, necessity and proportionality, (where relevant) consent, transparency, accuracy, bias and accountability;
  • consider the ongoing suitability of a proposed technological solution to a problem and whether there are, or emerge, less intrusive alternative methods to achieve the same outcome;
  • develop detailed written policies and procedures governing the use of facial recognition technology and ensure appropriate dissemination and implementation across groups of companies where relevant;
  • ensure transparency requirements under all applicable privacy and data protection laws are met; and
  • consider what technical and organisational measures might be appropriate, or "reasonable" in the circumstances, including staff training, access controls and periodic reviews of the efficacy of the facial recognition system in achieving its objectives.

As technical capabilities continue to develop, new laws emerge and regulatory guidance evolves in relation to existing requirements, organisations will need to continually horizon-scan to ensure ongoing compliance with applicable laws and alignment with public sentiment.