
Uncovering Depression During COVID-19: A Multimodal AI Approach to Social Media Analysis

TLDR: This research introduces a novel multimodal framework (MFEL) for detecting depression among social media users, with a specific focus on the COVID-19 pandemic. It combines textual content, user-specific details, and image analysis, uniquely harnessing URLs in tweets for extrinsic context and extracting text from images via OCR. A new COVID-19 dataset was created for evaluation. The model, which also uses a Visual Neural Network (VNN) for image embeddings, significantly outperforms existing methods, demonstrating the power of multimodal information in identifying mental health issues during crises.

The global COVID-19 pandemic brought unprecedented challenges, not least a significant rise in mental health issues such as anxiety, stress, and depression. Detecting these conditions can be difficult due to a lack of awareness or reluctance to seek professional help. However, social media platforms have emerged as a rich source of data where individuals often express their emotions and thoughts.

A recent research paper, “A Multimodal Framework for Depression Detection during Covid-19 via Harvesting Social Media: A Novel Dataset and Method”, by Ashutosh Anshul, Gumpili Sai Pranav, Mohammad Zia Ur Rehman, and Nagendra Kumar, introduces an innovative approach to identify depression among social media users during the pandemic. The study addresses the limitations of existing methods, which often struggle with short social media posts (data sparsity) and overlook the diverse ways people express themselves online, including images and external links.

A Comprehensive Multimodal Approach

The researchers propose a novel multimodal framework called Multimodal Feature-based Ensemble Learning (MFEL). Rather than analyzing text alone, MFEL combines several types of information from social media: the textual content of tweets, user-specific details, and the images users post. The core idea is to assemble a more complete picture of a user’s emotional state.
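To make the fusion idea concrete, the rough sketch below concatenates per-modality feature vectors and feeds them to a generic voting ensemble, assuming scikit-learn; the base learners, fusion strategy, and helper names are assumptions for illustration, not the paper’s actual MFEL configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

def fuse_modalities(text_feats, user_feats, visual_feats):
    """Concatenate per-user feature vectors from each modality (illustrative fusion only)."""
    return np.concatenate([text_feats, user_feats, visual_feats], axis=1)

# A generic soft-voting ensemble over the fused features; the paper's exact
# base learners and combination scheme are described in the original work.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("gb", GradientBoostingClassifier()),
    ],
    voting="soft",
)
# ensemble.fit(fuse_modalities(X_text, X_user, X_visual), y_labels)
```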

Key Innovations for Deeper Insights

One of the significant contributions of this work is the extraction of ‘extrinsic features.’ This involves harnessing URLs present in tweets. When a user shares a link, the framework extracts the title of the linked webpage. For instance, if a depressed user links to an article titled ‘Self-harm alternatives – Stay strong’ or ‘National charity helping people with Anxiety – Anxiety UK,’ this provides crucial context about their mental state that a simple text analysis might miss.
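As a rough illustration of how such page titles could be harvested, the sketch below pulls URLs out of a tweet and fetches each page’s <title> tag, assuming the requests and BeautifulSoup libraries; the helper name and error handling are illustrative, not taken from the paper.

```python
import re
import requests
from bs4 import BeautifulSoup

URL_PATTERN = re.compile(r"https?://\S+")

def extract_url_titles(tweet_text: str, timeout: float = 5.0) -> list[str]:
    """Fetch the <title> of every URL mentioned in a tweet (illustrative helper)."""
    titles = []
    for url in URL_PATTERN.findall(tweet_text):
        try:
            resp = requests.get(url, timeout=timeout)
            soup = BeautifulSoup(resp.text, "html.parser")
            if soup.title and soup.title.string:
                titles.append(soup.title.string.strip())
        except requests.RequestException:
            continue  # skip unreachable or dead links
    return titles

# A shared link's title becomes extra context for the classifier, e.g.
# extract_url_titles("Feeling low lately https://example.com/article")
```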

Another innovative aspect is the detailed analysis of images posted in tweets. The framework employs two methods, both sketched in code just after this list:

  • Optical Character Recognition (OCR): Many images shared on social media contain text. The model uses OCR to extract this embedded text, which can offer valuable insights, especially when the tweet’s main text is short or absent. Images with text like ‘I’m tired of fighting’ or ‘Suicide is not an option’ directly indicate mental distress.
  • Visual Encoding: Beyond text, the visual characteristics of images themselves can be telling. Depressed users might post images with distinct differences in brightness, contrast, or content compared to non-depressed users. The researchers developed a Deep Learning model called the Visual Neural Network (VNN), based on ResNet50, to generate embeddings (numerical representations) of these images, capturing subtle visual cues related to depression.
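
Two short sketches give a feel for these steps. First, the OCR pass: assuming pytesseract and Pillow are available, extracting embedded text from a tweet image might look like the following (the helper name is illustrative, not the paper’s exact pipeline).

```python
from PIL import Image
import pytesseract  # requires a local Tesseract installation

def ocr_tweet_image(path: str) -> str:
    """Extract any text embedded in a tweet image (illustrative sketch only)."""
    return pytesseract.image_to_string(Image.open(path)).strip()

# e.g. an image reading "I'm tired of fighting" becomes analyzable text
# print(ocr_tweet_image("tweet_image.jpg"))
```

Second, the visual encoding: the paper’s VNN is built on ResNet50, and a rough approximation is to use a pretrained ResNet50 backbone from torchvision as a feature extractor, as below; the actual VNN architecture and training procedure are described in the paper itself.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained ResNet50 with its classification head removed, used as an image encoder.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_embedding(path: str) -> torch.Tensor:
    """Return a 2048-dimensional visual embedding for one tweet image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0)
```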

In addition to these, the MFEL framework extracts several ‘intrinsic features’ from the tweet content and user profiles. These include topic-based features (identifying topics related to anxiety or depression), emotional features (analyzing emotion intensity and emoji sentiments), depression-specific keywords, and user-specific features (like tweet count, follower count, and user description, as depressed users often show reduced social activity).
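A toy sketch of a few such intrinsic features appears below; the keyword list, function name, and specific statistics are illustrative stand-ins, since the paper’s lexicons and feature definitions are considerably richer.

```python
import re

# Illustrative depression-related keyword list; the paper uses its own curated lexicons.
DEPRESSION_KEYWORDS = {"depressed", "hopeless", "worthless", "anxiety", "insomnia"}

def intrinsic_features(tweets: list[str], follower_count: int, tweet_count: int) -> dict:
    """Toy intrinsic features: keyword hits plus basic user-activity statistics."""
    tokens = re.findall(r"[a-z']+", " ".join(tweets).lower())
    keyword_hits = sum(tok in DEPRESSION_KEYWORDS for tok in tokens)
    return {
        "keyword_hits": keyword_hits,
        "keyword_ratio": keyword_hits / max(len(tokens), 1),
        "follower_count": follower_count,  # depressed users often show reduced social activity
        "tweet_count": tweet_count,
    }
```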

A New Dataset for COVID-19 Depression

To specifically address depression during the pandemic, the team curated a novel COVID-19 dataset. This dataset comprises Twitter user profiles and tweets posted between May 2021 and January 2022, covering the period of the second and third waves of the pandemic. Users were labeled as depressed or non-depressed based on self-reported diagnoses (e.g., phrases like “I’m diagnosed with depression”) and manual verification. The reliability of this dataset was confirmed with a Krippendorff’s alpha of 0.88, indicating strong agreement among annotators.
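As an illustration of that self-report step, a simple pattern match such as the one below could flag candidate users before manual verification; the regular expression and function name are assumptions for illustration, not the authors’ actual phrase list.

```python
import re

# Illustrative self-report pattern; the paper's exact phrase list is not reproduced here.
SELF_REPORT = re.compile(
    r"\bi('m| am| was| have been) (just )?diagnosed with (clinical )?depression\b",
    re.IGNORECASE,
)

def candidate_depressed_user(tweets: list[str]) -> bool:
    """Flag a user for manual verification if any tweet contains a self-reported diagnosis."""
    return any(SELF_REPORT.search(t) for t in tweets)

# print(candidate_depressed_user(["I'm diagnosed with depression and the lockdown makes it worse"]))
```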

Superior Performance and Future Impact

The MFEL model was evaluated on both the publicly available Tsinghua Dataset and the newly created COVID-19 dataset. It demonstrated superior performance, outperforming existing state-of-the-art methods by 2%-8% in accuracy on the benchmark dataset and achieving promising results on the COVID-19 dataset with an accuracy of 91.7% and an F1-score of 91.9%.

The analysis also highlighted the significant impact of each modality, particularly the visual and emotional features, in enhancing detection accuracy. The ability to identify COVID-19 specific contexts, such as discussions around ‘lockdown,’ ‘vaccination,’ and ‘anxiety’ in topic models, further underscores the model’s relevance to pandemic-related mental health.

This research underscores the critical role of considering multiple modalities and external data sources in accurately detecting depression. By leveraging a comprehensive combination of textual, visual, and user-specific features, this framework offers deeper insights into users’ mental health, paving the way for earlier detection and more effective intervention efforts in a public health crisis.

Meera Iyer
https://blogs.edgentiq.com
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach her at: [email protected]
