TL;DR: A video from Will Smith’s recent European tour has drawn heavy online criticism, with viewers accusing the star of using AI to generate fake concert crowds. While initial reactions assumed entirely fabricated audiences, further investigation points to a more nuanced situation: Smith’s team likely used AI to animate real audience photos, and YouTube’s own experimental image enhancement on Shorts further distorted the footage, producing ‘AI slop’ hallmarks such as uncanny faces and garbled signs.
A minute-long video promoting Will Smith’s recent European tour has ignited a firestorm of controversy online, as viewers widely accuse the actor and musician of employing artificial intelligence to create fabricated concert crowds. The video, featuring Smith performing his new song ‘You Can Make It,’ showcases audience members with distorted facial features, blurred limbs, and seemingly nonsensical signs, leading many to believe the crowds were entirely AI-generated.
Initial reactions across platforms like YouTube, Reddit, and X (formerly Twitter) were overwhelmingly negative, with commenters expressing disappointment and even embarrassment at the presumed use of generative AI. One user remarked, ‘No other explanation for releasing a crowd of AI abominations holding up misspelled signs about how he saved their lives.’ Another pointed to a sign that appeared to read ‘Lov U Fr6sh Crince’ instead of ‘Lov U Fresh Prince,’ highlighting the visual inconsistencies.
However, a deeper dive into the footage, particularly by Waxy.org blogger Andy Baio, suggests a more complex reality. Baio’s analysis indicates that while AI manipulation is indeed present, the crowds themselves are not fake. Instead, Smith’s team appears to have used generative video models, such as Runway or Google’s Veo 3, to animate professionally shot still photographs of real audiences from concerts on the tour, including the Positiv Festival in France and the Gurtenfestival and Paléo festivals in Switzerland.
For instance, a couple holding a ‘You Can Make It’ sign, initially cited by critics as a prime example of AI distortion, was confirmed to be real through other photos on Smith’s Instagram. The apparent AI artifacts (uncanny features, smeared faces, extra fingers and limbs, garbled signage) are attributed to two layers of manipulation.
The first layer involves Smith’s team using AI image-to-video tools to bring static photos to life for the concert montage. The second, and arguably more significant, factor is YouTube’s own unannounced ‘experiment’ with post-processing Shorts videos. For the past two months, YouTube has been quietly applying image enhancement technology to unblur and denoise videos, often resulting in an ‘uncanny’ or ‘smeary’ appearance. This effect is noticeably absent or less pronounced in the same video uploaded to Instagram and Facebook.
YouTube’s Creator Liaison, Rene Ritchie, clarified that this enhancement uses ‘traditional machine learning’ akin to computational photography on smartphones, not ‘generative AI’ or upscaling. However, critics argue that the distinction is blurry, given the lack of a strict technical definition for generative AI and the significant visual impact of these enhancements. YouTube has since announced plans to introduce an opt-out feature for this Shorts filter.
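To make the distinction concrete, here is a toy sketch of the kind of ‘traditional’ enhancement Ritchie’s computational-photography comparison evokes: an unsharp-mask sharpen on a grayscale image. This is emphatically not YouTube’s actual pipeline, just an illustrative classical filter. The key property is that every output pixel is a deterministic function of nearby input pixels; nothing is synthesized, unlike a generative model, which can invent plausible but false detail.

```python
# Hypothetical illustration of non-generative enhancement (unsharp masking).
# Not YouTube's pipeline -- a classical filter, shown for contrast with
# generative AI. Grayscale image = list of rows of float pixel values.

def box_blur(img):
    """3x3 box blur with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
            out[y][x] = total / 9.0
    return out

def unsharp_mask(img, amount=0.5):
    """Sharpen by adding back the difference between image and its blur."""
    blurred = box_blur(img)
    h, w = len(img), len(img[0])
    return [
        [min(max(img[y][x] + amount * (img[y][x] - blurred[y][x]), 0.0), 255.0)
         for x in range(w)]
        for y in range(h)
    ]

# A hard vertical edge: dark (50) on the left, bright (200) on the right.
edge = [[50.0] * 4 + [200.0] * 4 for _ in range(8)]
sharpened = unsharp_mask(edge, amount=0.5)
# Pixels flanking the edge overshoot (darker dark, brighter bright),
# creating the "halo" that can read as a smeary, over-processed look.
```

Filters like this can exaggerate edges and smear fine detail (faces, small text on signs) without ever generating new content, which is why the line between ‘enhancement’ and ‘generation’ matters in this debate, and why critics still find the distinction slippery in practice.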
This incident highlights the increasing difficulty in discerning real from AI-generated content and raises ethical questions about the use of such technologies in mainstream media, even when starting with authentic source material. The debate extends beyond mere deception to issues of aesthetic taste and the potential exploitation of genuine human emotion through artificial means, as seen in other recent controversies involving AI-generated content in entertainment and journalism.


