TLDR: DisasterMobLLM is a new AI framework that uses Large Language Models (LLMs) and cross-city data to predict how people move during natural disasters. Unlike traditional models that struggle in emergencies, DisasterMobLLM focuses on understanding people’s “intentions” (like staying put or moving) and refines these predictions using LLMs, significantly improving accuracy and the ability to predict immobility, which is crucial for emergency response.
The increasing vulnerability of cities to natural disasters, driven by rapid urbanization and climate change, highlights a critical need: accurately predicting how people move during these emergencies. Such predictions are vital for tasks like issuing early warnings, allocating rescue resources, and planning humanitarian aid. However, most existing models for human mobility prediction are designed for normal, everyday scenarios. They often fail when disaster strikes because people’s movement patterns shift dramatically under such stressful conditions.
Understanding the Challenge of Disaster Mobility
Traditional mobility prediction models, even sophisticated ones, see a significant drop in performance when applied to disaster scenarios. This is because the spatial distribution of people and the likelihood of them staying put (immobility) change drastically during a crisis. For instance, heavy rainfall can lead to many people staying at home or in shelters, a pattern very different from daily commutes. Developing specialized models for disasters is also challenging due to the sheer diversity of disaster situations and the resulting scarcity of specific mobility data for a particular disaster in a given city.
While transferring knowledge from similar disasters in other cities seems promising, simple transfer learning techniques haven’t been fully effective. This is largely because the physical layouts and points of interest (POIs) in different cities vary greatly, making it hard to directly apply mobility patterns from one city to another.
Introducing DisasterMobLLM: A New Approach
To address these complex issues, researchers have introduced DisasterMobLLM, a novel framework that leverages the power of Large Language Models (LLMs) and cross-city learning to predict human mobility in disaster scenarios. The core idea is to understand how disasters influence human movement at a deeper, more abstract level – the ‘intention’ level – rather than just focusing on exact locations. This allows the model to transfer common knowledge about disaster impacts across different cities and situations.
How DisasterMobLLM Works: A Three-Step Process
DisasterMobLLM integrates into existing deep mobility prediction methods through three main modules:
Predicting Mobility Intentions
First, the framework uses a ‘RAG-Enhanced Intention Predictor’. Instead of directly predicting the next location, it predicts the next ‘intention’ of a person, such as whether they intend to travel, go to work, or stay still. This is done by converting raw movement data into ‘travel features’ and then mapping these features into a shared ‘intention space’. A special component called ‘Intention-CLIP’ helps LLMs understand these intentions by aligning them with language concepts. To overcome data scarcity in a target city during a disaster, the system also retrieves similar historical movement patterns from other cities or from the target city during normal times, using them as external knowledge.
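The retrieval step can be illustrated with a minimal sketch. This is not the paper's implementation: the travel features, the memory of cross-city patterns, and the majority-vote prediction are all simplified stand-ins for the learned intention predictor, and every name here is hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar(query_feat, memory, k=2):
    """Return the k memory entries whose travel features are most similar
    to the query -- the external knowledge retrieved from other cities or
    from the target city during normal times."""
    ranked = sorted(memory, key=lambda m: cosine(query_feat, m["features"]),
                    reverse=True)
    return ranked[:k]

def predict_intention(query_feat, memory, k=2):
    """Majority vote over the retrieved neighbours' intentions, standing
    in for the RAG-enhanced intention predictor."""
    votes = {}
    for n in retrieve_similar(query_feat, memory, k):
        votes[n["intention"]] = votes.get(n["intention"], 0) + 1
    return max(votes, key=votes.get)

# Toy cross-city memory of travel-feature vectors and their intentions.
memory = [
    {"features": [0.9, 0.1, 0.0], "intention": "stay_still"},
    {"features": [0.8, 0.2, 0.1], "intention": "stay_still"},
    {"features": [0.1, 0.9, 0.3], "intention": "go_to_work"},
]

print(predict_intention([0.85, 0.15, 0.05], memory))  # stay_still
```

In the actual framework the features live in a shared intention space aligned with language via Intention-CLIP; here plain cosine similarity over raw vectors conveys the same retrieval idea.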
Refining Intentions with Large Language Models
The initial intention prediction is then refined by an ‘LLM-based Intention Refiner’. This module fine-tunes an LLM to understand how different disaster levels affect human mobility patterns. It uses an ‘intention-incorporated prompt’ that includes the predicted intention and a ‘Chain-of-Thought’ reasoning process. This guides the LLM to think step-by-step: Is the initial intention correct? If not, should the person stay still? If not, which other intention is most likely? Additionally, a ‘disaster-level-aware soft prompt’ helps the LLM adapt its reasoning based on the severity of the disaster.
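One way to picture the intention-incorporated prompt is as a template that embeds the initial prediction and the three-step reasoning chain described above. This is a hedged sketch, not the paper's actual prompt: the wording, field names, and trajectory summary format are illustrative assumptions (the real system also uses a learned soft prompt rather than a plain-text disaster level).

```python
def build_refiner_prompt(predicted_intention, disaster_level, trajectory_summary):
    """Assemble an intention-incorporated prompt with an explicit
    Chain-of-Thought instruction, conditioned on disaster severity."""
    return (
        f"Disaster level: {disaster_level}.\n"
        f"Recent mobility: {trajectory_summary}\n"
        f"Initial predicted intention: {predicted_intention}.\n"
        "Reason step by step:\n"
        "1. Is the initial intention plausible given the disaster level?\n"
        "2. If not, is the person likely to stay still?\n"
        "3. If neither, which alternative intention is most likely?\n"
        "Answer with the final intention."
    )

prompt = build_refiner_prompt("go_to_work", "severe",
                              "home -> station -> home")
print(prompt)
```

The fine-tuned LLM would consume this prompt and return a refined intention, for example downgrading ‘go_to_work’ to ‘stay_still’ under a severe disaster.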
Mapping Intentions to Locations
Finally, an ‘Intention-Modulated Location Predictor’ takes the refined intention and uses it to predict the exact next location. This module integrates the intention information with existing deep-learning mobility prediction models, effectively modulating their predictions based on the learned intentions. This allows the framework to combine the strengths of established mobility models with the LLM’s understanding of disaster-induced behavioral shifts.
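A minimal way to sketch this modulation is to add an intention-to-location affinity bias to the base model's logits before normalising. This is an assumption about the fusion mechanism for illustration only; the paper integrates intentions into deep mobility models, and the location names, logits, and affinity values below are made up.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def modulated_location_probs(base_logits, intention_affinity):
    """Bias the base predictor's logits with an intention-location
    affinity vector, then renormalise -- one simple form of letting the
    refined intention modulate an existing location predictor."""
    fused = [b + a for b, a in zip(base_logits, intention_affinity)]
    return softmax(fused)

# The base model (trained on normal days) favours "office", but the
# refined intention "stay_still" shifts probability mass toward "home".
locations = ["home", "office", "shelter"]
base_logits = [0.5, 2.0, 0.1]
stay_still_affinity = [3.0, -2.0, 1.0]

probs = modulated_location_probs(base_logits, stay_still_affinity)
print(locations[probs.index(max(probs))])  # home
```

The design point is that the base model's spatial knowledge is kept intact, while the intention only re-weights its output toward disaster-consistent behavior.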
Key Innovations and Performance
DisasterMobLLM’s key innovations lie in its use of LLM-adapted intention embeddings, its ability to transfer cross-city knowledge at the intention level, and its explicit modeling of immobility during disasters. Extensive experiments have shown remarkable improvements. Compared to leading baseline methods, DisasterMobLLM achieved a 32.8% improvement in prediction accuracy (Acc@1) and a 35.0% improvement in the F1-score for predicting immobility. This demonstrates its effectiveness in compensating for the performance degradation seen in traditional models during disaster scenarios.
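The two reported metrics are standard and easy to state precisely: Acc@1 is the fraction of trajectories whose top-ranked predicted location is correct, and the immobility F1 treats "staying still" as the positive class. A small self-contained computation (with made-up toy labels, not the paper's data):

```python
def acc_at_1(y_true, y_pred):
    """Fraction of samples where the top-1 prediction matches the truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def immobility_f1(y_true, y_pred, still="stay"):
    """F1-score with the immobility label treated as the positive class."""
    tp = sum(t == still and p == still for t, p in zip(y_true, y_pred))
    fp = sum(t != still and p == still for t, p in zip(y_true, y_pred))
    fn = sum(t == still and p != still for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy example: 5 users, did they stay or move during the disaster?
y_true = ["stay", "stay", "move", "stay", "move"]
y_pred = ["stay", "move", "move", "stay", "stay"]

print(acc_at_1(y_true, y_pred))      # 0.6
print(immobility_f1(y_true, y_pred))
```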
Why This Approach Matters
The success of DisasterMobLLM stems from its ability to account for the unique impact of disasters on human mobility patterns and to explicitly model the crucial aspect of immobility. By harnessing the power of LLMs to understand how disasters affect human behavior and by transferring knowledge at the intention level rather than directly transferring entire mobility trajectories, the framework achieves more accurate and reliable predictions. This advancement is crucial for enhancing emergency response, improving resource allocation, and ultimately, saving lives in an increasingly disaster-prone world.
For more in-depth information, you can read the full research paper here: Predicting Human Mobility in Disasters via LLM-Enhanced Cross-City Learning.


