Helios Salinger


How to sniff out the landmines that can ruin your AI project.

April 10, 2026, Anna Johnston

As far as headlines about ‘AI gone rogue’ go, this one is possibly my favourite from the past couple of months: “How one CEO’s chatbot could cost his company $355 million”.

I eagerly clicked on the newspaper story, assuming it would offer a similar tale of corporate woe to the debacle a couple of years ago in which Air Canada was ordered to honour refunds, in line with a policy hallucinated into being by their own customer-facing chatbot.

Don’t be that AI developer was the refrain running through my head as I read the news story, only to find that this latest headline was not about a failure in the development of an enterprise chatbot, but about the CEO of a South Korean video game developer, in the middle of a complex commercial dispute, who decided to take legal advice from ChatGPT instead of from a specialist lawyer.  Uh-huh.

Still, don’t be that guy remains some solid advice.

Closer to home, supermarket giant Woolworths recently updated its ‘Olive’ AI-powered virtual assistant, only to have customers report the chatbot making awkward attempts at social chit-chat, including references to its angry ‘mother’.  And, of more immediate commercial concern, the supermarket chatbot also displayed the wrong prices.

But rogue GenAI tools and chatbots offering weird advice or wrong information are not the only examples of the potential landmines to be found in AI-related projects.

Other landmines can include:

  • not defining (or testing for) success – and failure
  • using isolated metrics for success
  • unsuitable testing and/or evaluation methodologies, including evaluation bias
  • biased data
  • unauthorised data flows, and
  • incentivising misuse.

I use the metaphor of landmines deliberately.  Landmines are hidden: ahead, the surface looks fine, but if you don’t tread carefully to avoid them – or if you don’t have a way to detect the landmines and defuse them first – BOOM!

An example

One potential landmine is the failure to check for bias in the data before building (or using) an AI tool.

An AI system that reinforces stereotypes shows how harms of representation can manifest.  In 2024, Channel Nine was forced to apologise after it showed a digitally altered photo of a female MP on the TV news. While the original photo showed the MP wearing a dress, the altered image put her in a crop top baring her midriff, with her breasts digitally enlarged.

Nine News blamed an ‘automation’ error in Photoshop for producing the edited image.  So tech writer Cam Wilson from Crikey set out to find out what the Adobe software would do to images of other politicians, if left on auto-pilot.  Taking photos of politicians wearing clothes such as t-shirts, Wilson found that the software altered the images so that female MPs were put in bikinis, while male MPs were put in suits.

So, if the dataset used to train your AI system reflects historic bias – which is highly likely if it was built by scraping content from the internet – your AI model is going to produce outputs featuring sexism, racism, and every other kind of bias and discriminatory content, but on steroids.

The appropriate strategy here, to avoid this landmine, is to ensure that your risk assessment takes a holistic view of harms.  This includes the traditional privacy harms we immediately think of, like data breaches or unlawful access to personal information, as well as more ‘downstream’ privacy-related harms, such as discrimination arising from the use of inaccurate data, or biased systems.

So you will need to think about who is represented in the training dataset, and how they are represented, as well as who is not represented.  You might need some expert assistance to help your organisation test for bias in the data.
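To make that concrete, here is a minimal sketch (in Python) of the kind of representation check an expert might start with: comparing the share of each group in a training dataset against expected population shares, including groups that are missing entirely. The field name, groups and reference shares below are entirely hypothetical, and a real bias assessment would go much further than counting rows.

```python
from collections import Counter

def representation_report(records, attribute, reference_shares):
    """Compare observed group shares in a dataset with expected shares.

    `records` is a list of dicts; `attribute` names a demographic field;
    `reference_shares` maps each group to its expected proportion (e.g.
    from census data). Returns {group: (observed_share, expected_share)},
    covering groups that are absent from the data as well.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = (observed, expected)
    return report

# Hypothetical training set that under-represents one group:
training_data = [{"gender": "female"}] * 20 + [{"gender": "male"}] * 80
report = representation_report(
    training_data, "gender", {"female": 0.5, "male": 0.5}
)
# report["female"] -> (0.2, 0.5): women make up 20% of the data
# against an expected 50%.
```

Even a crude check like this surfaces the two questions above – who is in the data, and in what proportions – before any model is built on top of it.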

Another landmine

A second example of an AI landmine: your evaluation methodology may be unsuitable if the data used to test the tool turns out to be the same data used to build the underlying AI model.

In 2021, two major studies assessed hundreds of covid-related predictive tools developed at the peak of the pandemic, and both found that “researchers repeated the same basic errors in the way they trained or tested their tools”.

Understandably, information about covid patients was being collected and shared in a rush, often by the very doctors struggling to treat those patients. Researchers wanted to help quickly, so they used whatever publicly available datasets they could find.  “But this meant that … some tools end(ed) up being tested on the same data they were trained on”, which made them appear far more accurate than they really were. This is evaluation bias.

A second cause of evaluation bias is a test dataset that does not appropriately represent the population on which the system will be deployed, or a testing environment that does not replicate real-world conditions.  For example, a tool trained on data about adults may not work if the real-world application involves children.

To avoid these landmines, you will need to ensure that the data used to test is different to the data used to train the model in the first place, and that it reflects your real-world conditions.
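One simple guard against the first of these problems is to check, before evaluation, that no record appears in both the training and test sets. A minimal Python sketch follows; the patient-style record IDs are hypothetical, and real pipelines would also need to catch subtler overlaps (such as the same patient appearing under different IDs).

```python
def check_split_leakage(train_ids, test_ids):
    """Raise if any record ID appears in both training and test sets.

    Testing a model on records it was trained on inflates its measured
    accuracy, so the two sets must be disjoint.
    """
    overlap = set(train_ids) & set(test_ids)
    if overlap:
        raise ValueError(
            f"{len(overlap)} record(s) appear in both sets"
        )
    return True

# Hypothetical patient record IDs:
train_patients = ["p001", "p002", "p003"]
test_patients = ["p004", "p005"]
ok = check_split_leakage(train_patients, test_patients)  # disjoint: OK
```

A disjoint split is the minimum; checking that the test set also mirrors your deployment population (as in the representation example above) addresses the second cause of evaluation bias.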

So how do you detect, avoid or defuse those landmines?

Drawing on our team’s years of experience conducting PIAs and other risk assessments of projects for clients, we have identified 67 questions to ask about an AI project, in order to sniff out, avoid or defuse potential landmines.

67 is way too many for a blog, but please join us for more discussion in Privacy Awareness Week.  Our annual free webinar will work through 12 critical AI project landmines, and offer practical advice on how to avoid or defuse them.

Register now for our webinar on 7 May.  I hope to see you there!

Photograph © Rabie Madaci on Unsplash
