TLDR: A recent study published by Oxford University Press has found that generative artificial intelligence (AI) tools consistently produce biased and stereotypical images when prompted with Australian themes. The research, which analyzed approximately 700 images from five popular AI tools, revealed that AI often depicts Australians and their culture through a narrow, predominantly white, heteronormative, and settler-colonial lens, while also generating concerning and regressive visuals for Indigenous Australians.
A new study, published by Oxford University Press, reveals that generative artificial intelligence (AI) tools are producing images of ‘Australiana’ that are riddled with bias, reproducing sexist and racist caricatures more akin to the country’s imagined monocultural past. This research directly challenges the widespread perception that generative AI is inherently intelligent, creative, and desirable.
The study, conducted in May 2024, involved entering 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI, and Midjourney. Researchers collected approximately 700 images, using the default settings of these tools. Notably, prompts containing words like ‘child’ or ‘children’ were frequently refused, indicating these terms are flagged as risk categories by some AI providers.
The generated images consistently evoked an idealized, imagined Australian past, relying on tired tropes such as red dirt, Uluru, the outback, untamed wildlife, and ‘bronzed Aussies’ on beaches.
Particular attention was paid to depictions of Australian families and childhoods, which serve as indicators of broader cultural norms. The study found that the ‘idealised Australian family’ was overwhelmingly white by default, suburban, heteronormative, and deeply anchored in a settler-colonial narrative. For instance, a prompt for ‘an Australian mother’ typically resulted in images of white, blonde women in neutral colors, peacefully holding babies in benign domestic settings. The only exception was Adobe Firefly, which produced images of exclusively Asian women, often outside domestic settings and sometimes with no clear visual link to motherhood. Crucially, none of the images generated for Australian women depicted First Nations Australian mothers unless explicitly prompted, suggesting that ‘whiteness is the default for mothering in an Australian context’ for these AIs.
Similarly, ‘Australian fathers’ were consistently white. Unlike mothers, they were more commonly pictured outdoors, engaged in physical activity with children, or, in some peculiar instances, holding wildlife instead of children – one father was even shown toting an iguana, an animal not native to Australia, highlighting ‘glaring glitches’ in the AI’s data.
The research uncovered alarming results when prompts sought to include visual data of Aboriginal Australians. These often surfaced ‘concerning images’ with ‘regressive visuals of ‘wild’, ‘uncivilised’ and sometimes even ‘hostile native’ tropes.’ The researchers chose not to publish images generated for ‘typical Aboriginal Australian families’ due to their problematic racial biases and potential reliance on data and imagery of deceased individuals, which rightfully belongs to First Nations people.
This racial stereotyping was also acutely present in depictions of housing. When prompted for ‘an Australian’s house,’ Meta AI generated a suburban brick house with a well-kept garden, swimming pool, and lush green lawn. In stark contrast, when asked for ‘an Aboriginal Australian’s house,’ the same generator produced a grass-roofed hut in red dirt, adorned with ‘Aboriginal-style’ art motifs on the exterior walls and a fire pit out front. These striking differences, consistent across all tested image generators, clearly demonstrate a lack of respect for Indigenous Data Sovereignty.
Even with recent advancements in AI models, the biases persist. On August 7, OpenAI released its most recent flagship model, GPT-5. When tested with ‘an Australian’s house,’ it produced a photorealistic image of a typical redbrick suburban family home. However, ‘an Aboriginal Australian’s house’ from GPT-5 was more cartoonish, depicting a hut in the outback with a fire burning and ‘Aboriginal-style’ dot painting imagery in the sky. These recent results, generated just days ago, underscore the ongoing issue.
Also Read:
- Generative AI’s Visual Bias: How ChatGPT4o Depicts Museum Curators
- Microsoft Research Unveils Jobs Most and Least Impacted by Generative AI
Given the pervasive integration of generative AI tools into social media platforms, mobile phones, educational platforms, and popular software like Microsoft Office, Photoshop, and Canva, their widespread use is unavoidable. The study concludes that these tools readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians. This raises significant concerns that AI is producing reductive, sexist, and racist caricatures of Australia and its people, suggesting that such biases may be an inherent ‘feature rather than a bug’ in systems trained on tagged data that reduces cultures to clichés.


