TLDR: A team from National Taiwan University has developed ‘Receler,’ an innovative tool that uses a concept-erasing method to prevent generative AI models from producing harmful content, such as violence, deepfakes, and copyright-infringing material, without requiring extensive model retraining.
TAIPEI, Taiwan – Researchers at National Taiwan University (NTU) have introduced a groundbreaking technology named ‘Receler,’ designed to precisely sever generative artificial intelligence (GenAI) models’ connections with high-risk concepts. This innovation aims to enhance the safety of generative AI by proactively preventing the creation of harmful content. The development was announced on November 1, 2025, by Digitimes, with further details emerging from a Taiwan News report on October 30, 2025.
The team, spearheaded by NTU Professor Wang Yu-chiang, has devised a ‘concept-erasing method’ that enables AI to ‘forget’ or block undesirable ideas such as violence, nudity, deepfakes, and the unauthorized copying of artistic styles. A key advantage of Receler is its ability to achieve this without the need to retrain the entire AI model, a process that is typically resource-intensive and time-consuming.
The project received funding from the National Science and Technology Council (NSTC), underscoring Taiwan’s commitment to responsible AI development. The research findings were first presented at the 2024 European Conference on Computer Vision.
Professor Wang highlighted the dual nature of AI, acknowledging its immense utility while also noting how it can be used, inadvertently or deliberately, to cross legal and ethical boundaries. He cited instances like ChatGPT generating images in the style of Studio Ghibli, or deepfake tools being used to swap the faces of public figures into inappropriate contexts, as examples of how AI can breach ethical guidelines and copyright laws. Traditional keyword filters often prove inadequate at catching such nuanced harmful content.
Receler addresses these limitations by employing advanced techniques, specifically adversarial learning and cross-attention mechanisms. These methods allow the tool to remove specific high-risk concepts while crucially preserving the AI’s creative capabilities for legitimate applications. NSTC Director-General Hong Le-wen emphasized that this new tool holds significant potential in preventing AI from generating illegal or harmful outputs. He also noted that Receler’s open-source release has already seen widespread adoption in various online models, demonstrating its substantial impact on global AI safety efforts.
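Receler's actual training procedure combines adversarial prompt learning with lightweight eraser modules attached to the model's cross-attention layers, details of which are in the ECCV 2024 paper. As a loose, simplified illustration of the general idea — a small adapter modifying cross-attention output so a target concept is suppressed while the frozen base model is otherwise untouched — the sketch below projects an erased concept's direction out of the attention output. All names, shapes, and the projection step are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, text_emb, Wk, Wv):
    """Toy cross-attention: image-feature queries attend to prompt tokens.
    query: (n, d) image features; text_emb: (t, d) prompt token embeddings."""
    K, V = text_emb @ Wk, text_emb @ Wv
    scores = softmax(query @ K.T / np.sqrt(K.shape[1]))
    return scores @ V  # (n, d)

class LightweightEraser:
    """Hypothetical residual adapter on a cross-attention output.

    Removes the component of the output along an erased-concept direction.
    The base model's weights stay frozen, so no full retraining is needed;
    only this small module would be learned (here, fixed for illustration).
    """
    def __init__(self, concept_direction):
        self.c = concept_direction / np.linalg.norm(concept_direction)

    def __call__(self, h):
        # h: (n, d) attention output; subtract its projection onto self.c
        return h - (h @ self.c)[:, None] * self.c[None, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n, t = 8, 4, 5
    Wk, Wv = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    query, text_emb = rng.normal(size=(n, d)), rng.normal(size=(t, d))

    h = cross_attention(query, text_emb, Wk, Wv)
    eraser = LightweightEraser(rng.normal(size=d))
    h_erased = eraser(h)
    # After erasing, the output carries no component along the concept axis.
    print(np.abs(h_erased @ eraser.c).max())
```

In the real system the eraser is trained adversarially: prompts engineered to sneak the concept back in are generated during training, and the eraser learns to neutralize them too, which is why it generalizes beyond simple keyword matching.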
The introduction of Receler marks a significant step forward in the ongoing global effort to ensure AI technologies are developed and deployed responsibly, mitigating risks associated with the unchecked generation of harmful digital content.


