TL;DR: A qualitative review examines the rapid adoption of Generative AI (GenAI) in scientific practice, covering its applications in research, scientific writing, medical practice, and education. While GenAI offers significant benefits, such as enhanced research methodologies and streamlined communication, the review also highlights major concerns, including sustainability, equity, transparency, accuracy, and ethics, and emphasizes the ongoing uncertainties and the need for future research on its governance and long-term impact.
Generative Artificial Intelligence, or GenAI, is rapidly transforming the landscape of scientific research and practice. A recent qualitative review of literature, titled “Generative AI in Science: Applications, Challenges, and Emerging Questions”, delves into how these powerful AI tools are being adopted across various scientific fields, exploring both their promising applications and the significant concerns they raise.
What is GenAI and Why is it Important?
GenAI represents a major leap forward from earlier AI models. Unlike previous systems, which were constrained by the size of their training data, GenAI models are trained on vast datasets and can analyze images, text, and video. They can create new content, such as text, images, or music, based on patterns learned from existing information. A key driver of GenAI’s broad applicability is its integration with Large Language Models (LLMs), which are trained to predict the likelihood of text sequences. The public release of OpenAI’s ChatGPT in November 2022 significantly boosted the capabilities and accessibility of GenAI, leading to an exponential increase in scientific interest from 2023 onwards.
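To make the phrase “predict the likelihood of text sequences” concrete, here is a deliberately tiny sketch: a bigram word model that estimates how likely one word is to follow another, then scores whole sequences. Real LLMs use neural networks over billions of tokens, but the training objective illustrated below (assigning probabilities to next tokens, and thus to sequences) is the same idea in miniature. The corpus and function names are invented for illustration only.

```python
from collections import Counter, defaultdict

# A toy training corpus (illustrative only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_prob(prev: str, nxt: str) -> float:
    """P(next word | previous word), estimated from bigram counts."""
    total = sum(bigrams[prev].values())
    return bigrams[prev][nxt] / total if total else 0.0

def sequence_prob(words: list[str]) -> float:
    """Likelihood of a whole word sequence under the bigram model."""
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        p *= next_word_prob(prev, nxt)
    return p

# A word order seen in training scores higher than a scrambled one.
print(sequence_prob(["the", "cat", "sat"]))  # 0.25
print(sequence_prob(["cat", "the", "sat"]))  # 0.0
```

An LLM generates text by repeatedly sampling a likely next token under exactly this kind of conditional distribution, just learned by a far more expressive model.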
Applications Across Science
The review highlights GenAI’s diverse applications in science:
In Research: GenAI can act as an advanced ‘search engine’ for literature reviews, helping researchers select relevant articles and draft reviews, freeing up time for methodology and the research itself. It can also assist with summarizing and analyzing research, identifying potential research questions, and even translating statistical analyses into plain language. Deeper in the scientific process, GenAI is being used in de novo drug design, testing biological compounds, and developing new protein structures, potentially accelerating research outcomes.
In Scientific Writing and Publication: GenAI tools are increasingly used to assist with the editorial process, identifying grammatical errors, overcoming writer’s block, and translating papers. There’s a growing expectation that this technology could eventually design experiments, write manuscripts, and even aid in peer review. However, this raises significant debates about authorship, scientific integrity, and the detection of AI-generated content.
In Medical Practice: GenAI shows promise in clinical settings, such as speeding up pre-authorization processes by scanning patient histories. It can generate clinical notes, daily summaries, and discharge summaries, saving doctors time and improving accuracy. ChatGPT’s ability to summarize information in both technical and plain language can improve communication both among clinical staff and with patients. It can also assist doctors in assessing patients by reviewing health records and aid radiologists with image diagnostics.
In Education and Training: GenAI offers the potential for more personalized learning experiences through chatbots and adaptive systems. It can provide lessons tailored to individual skill levels, help with academic timetabling, create assessments, and even assist in marking, particularly in medical fields.
Challenges and Concerns
Despite the numerous benefits, the rapid adoption of GenAI in science is accompanied by significant concerns:
Sustainability and Equity: Training and running GenAI models require substantial energy, raising environmental concerns. There are also worries about exacerbating disparities between high-income and low-income countries, as privileged academics may leverage LLMs in ways that further advance their research while widening the gap.
Transparency and Accuracy: GenAI models are often described as ‘black box’ technologies, meaning users cannot fully trace how outputs are generated. Accuracy is also a concern, as ‘hallucinations’ can produce fabricated or inaccurate information. The potential for misuse and manipulation also grows as algorithms become more sophisticated.
Scientific Integrity and Authorship: The use of GenAI in scientific writing raises questions about plagiarism, authorship rights, and the ability of publishers to detect AI-generated content. While GenAI can assist, it cannot take responsibility for the legitimacy and truthfulness of the work, meaning it cannot be an author. Over-reliance on these tools could also lead to a lack of original human thought.
Legal and Ethical Implications: In medical practice, there’s a lack of governing laws for GenAI use, leading to concerns over patient data privacy and accountability for misdiagnosis. Across all fields, issues of ethical data use, copyright, and the equitable and sustainable rollout of these technologies remain largely unaddressed.
The Path Forward
The paper concludes that while GenAI’s emergence has sparked considerable debate and its use in science is rapidly increasing, its full long-term implications remain unclear. Science itself is still grappling with the technology’s boundaries and impact. Key issues like ethics, trust, sustainability, responsibility, equity, and governance are raised, but a clear consensus on how to resolve them is yet to emerge. Despite these uncertainties, the review anticipates continued growth in GenAI applications within science and scientific practice, underscoring the need for ongoing research into these critical questions.


