Virginia Tech Researchers Awarded $600,000 NSF Grant to Enhance Cybersecurity with Generative AI

TLDR: A new $600,000 grant from the National Science Foundation has been awarded to Virginia Tech computer scientist Bimal Viswanath and his team to research how artificial intelligence, specifically generative AI, can bolster cybersecurity systems. The project aims to address the critical shortage of real-world data for training AI cyber defense tools by creating high-quality synthetic data to detect emerging threats like ransomware and phishing.

BLACKSBURG, VA – A significant two-year, $600,000 grant from the National Science Foundation (NSF) Security, Privacy, and Trust in Cyberspace Medium program has been awarded to Virginia Tech computer scientist Bimal Viswanath and his collaborative research team. The funding will support a pioneering project focused on leveraging artificial intelligence to enhance the security of digital systems, with research slated to commence in October 2025.

Dr. Viswanath, an associate professor of computer science, will lead the initiative, which includes a dedicated team of graduate and undergraduate students: Xavier Pleimling, Sifat M. Abdullah, Cameron Mraz, Rudra Patel, and Brianna Detter. The core objective of their research is to explore how generative AI, the same technology capable of creating deceptive content, can be repurposed to build more robust cyber defenses.

Cybersecurity systems today face an escalating array of threats, impacting everything from global banking and business operations to national security infrastructure. As cybercriminals and adversarial nations increasingly deploy AI tools to circumvent existing security measures, deploying protective AI systems is the logical countermeasure. However, a significant hurdle exists: the scarcity of high-quality, real-world data needed to train advanced AI cyber defense algorithms. Current threat-detection tools often rely on small, biased, or incomplete datasets, limiting their accuracy and leaving critical vulnerabilities in digital defenses.

Viswanath’s team proposes an innovative solution to this data problem: utilizing generative AI to create realistic, yet artificially generated, examples of cyber threats. This synthetic data could fill crucial gaps, enabling machine learning models to more effectively detect novel forms of ransomware, sophisticated phishing attempts, and other evolving cyberattacks.
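To make the idea concrete, here is a minimal, self-contained sketch of synthetic-data augmentation for threat detection. It is not the team's actual method: the simple template generator, the toy token-based detector, and all names (`synthesize_phishing`, `suspicious_tokens`, the example URLs) are illustrative stand-ins, with the templates playing the role a generative AI model would fill in practice.

```python
from itertools import product

# Small "real" phishing dataset -- deliberately too sparse to cover
# the space of lure patterns a detector might encounter.
REAL_PHISHING = [
    "http://secure-login.paypa1.com/verify",
    "http://account-update.bank-example.net/confirm",
]

def synthesize_phishing():
    """Stand-in generator: enumerate template-based synthetic phishing
    URLs. In the project's framing, a generative AI model would produce
    these realistic-but-artificial examples instead."""
    brands = ["paypa1", "bank-example", "micros0ft"]
    actions = ["verify", "confirm", "update", "reset"]
    return [
        f"http://{a}-{b}.com/{path}"
        for a, b, path in product(actions, brands, actions)
    ]

def suspicious_tokens(urls):
    """Toy 'training': collect hostname/path tokens seen in known
    phishing URLs."""
    tokens = set()
    for url in urls:
        for part in url.replace("http://", "").replace("/", ".").split("."):
            if part:
                tokens.add(part)
    return tokens

def flags(tokens, url):
    """Toy detector: flag a URL if it contains any learned token."""
    return any(t in url for t in tokens if len(t) > 4)

# Detector trained on real data alone vs. real + synthetic data.
baseline = suspicious_tokens(REAL_PHISHING)
augmented = suspicious_tokens(REAL_PHISHING + synthesize_phishing())

# A previously unseen phishing URL with a pattern absent from the
# real dataset: the baseline misses it, the augmented detector does not.
unseen = "http://reset-micros0ft.com/update"
print("baseline flags unseen:", flags(baseline, unseen))
print("augmented flags unseen:", flags(augmented, unseen))
```

The point of the sketch is the gap it closes: the synthetic examples expose the detector to brand/action combinations the small real dataset never contained, which is the same role high-quality generated data would play for production-scale machine learning models.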

“If we can generate this high-quality synthetic data, we can make existing security tools smarter without even changing their core design,” stated Dr. Viswanath. He emphasized a shift in perspective, noting, “We’re asking how generative AI can be used to improve security, not just harm it. It’s about fighting fire with fire.”

Dr. Viswanath has previously focused on the potential harms and threats posed by generative AI. This project marks a strategic pivot, aiming to demonstrate AI’s potential as a powerful ally in cybersecurity. “Most conversations about AI and security focus on the dangers,” he explained. “This project is about showing how AI can be part of the solution.” The research is expected to not only advance cybersecurity capabilities but also provide valuable insights and training opportunities for students in AI and security disciplines.

Dev Sundaram
https://blogs.edgentiq.com
Dev Sundaram is an investigative tech journalist with a nose for exclusives and leaks. With stints in cybersecurity and enterprise AI reporting, Dev thrives on breaking big stories—product launches, funding rounds, regulatory shifts—and giving them context. He believes journalism should push the AI industry toward transparency and accountability, especially as generative AI becomes mainstream. You can reach him at: [email protected]
