TLDR: A study interviewed U.S. smart home and AI chatbot users about their privacy and security concerns regarding domestic social robots. Findings show users are largely unfamiliar with social robots but worry about data collection (especially audio/visual), data inference, and risks to children. They expect transparency on data practices, robust privacy controls, and clear regulations, highlighting current legal gaps and concerns beyond just data, such as physical and social privacy.
As artificial intelligence (AI) and advanced sensing capabilities become more integrated into our daily lives, social robots are emerging as the next evolution of smart home devices. These robots, designed to interact with humans on a social level, offer convenience and assistance in various settings, from supporting mental health to aiding in education and medical care. However, their extensive data collection abilities, human-like features, and capacity to move and interact within our homes also introduce significant security and privacy challenges.
A recent study, titled “Is it always watching? Is it always listening?” Exploring Contextual Privacy and Security Concerns Toward Domestic Social Robots, delves into the perceptions of U.S. users regarding these emerging technologies. Conducted by researchers Henry Bell, Jabari Kwesi, Hiba Laabadli, and Pardis Emami-Naeini from Duke University, the study aimed to understand users’ security and privacy needs to guide the responsible design of social robots as they enter the U.S. market.
Understanding User Concerns
The researchers conducted 19 semi-structured interviews with individuals who already use smart home devices and AI chatbots. A key finding was that about half of the participants were unfamiliar with the term “social robot,” often defining it by comparison to existing technologies like Amazon’s Alexa or OpenAI’s ChatGPT. This suggests that public perception of social robots is still largely shaped by familiarity with similar but less complex devices.
Despite this unfamiliarity, participants expressed substantial privacy and security concerns. Many felt a sense of “privacy resignation,” believing that data collection by these devices was inevitable, even if uncomfortable. A major worry revolved around personal safety, with some imagining scenarios where a robot could be used maliciously. While the robot’s mobility wasn’t a standalone concern, it amplified fears about data collection, such as a robot moving around and accessing sensitive information.
The collection of visual and voice data, both intentional and passive, was a significant concern for about half of the participants. They questioned where identifiable information would be stored and how much privacy they would lose simply by being in the robot’s environment. Participants were also uncomfortable with the idea of data inference, where a robot could deduce personal information not directly provided by the user, similar to how search engines infer user preferences.
Context Matters: Different Use Cases, Different Worries
The study found that privacy and security concerns varied significantly depending on how the social robot was used and who its primary user was:
- For Personal Use: Most participants had few concerns when imagining purchasing a social robot for themselves, often citing companionship, safety, and productivity as benefits.
- For Children: This was the area of greatest concern. Participants worried about children’s data security, potential negative impacts on social development (preferring human interaction over robot companionship), and the risk of children misusing the device for inappropriate content or unauthorized purchases.
- For Elderly Adults: While many saw benefits like companionship and medical assistance (e.g., reminders), the primary concern was the robot’s usability for older individuals. Some also worried about elderly users being influenced to share sensitive information.
- For Communal Households: In shared living spaces, concerns centered on data leakage to other household members and bystander privacy, ensuring everyone in the environment was comfortable with the robot’s presence.
When considering specific purposes:
- Education: Participants saw potential benefits but were highly concerned about misinformation and the reliability of the robot’s educational content, drawing parallels to issues with AI chatbots like ChatGPT.
- Medical: Reliability was the main concern, with participants viewing medical social robots as a starting point for treatment, not a replacement for human doctors. Concerns also arose about the collection of sensitive medical data.
- Psychological Therapy: Many participants believed human-to-human interaction was essential for effective therapy, fearing that robot interactions would be shallow or that unreliable AI advice could be harmful to vulnerable individuals.
Expectations for Transparency and Control
Users expressed clear expectations for how social robots should handle their privacy and security:
- Data Transparency: Almost all participants wanted to know what data was being collected, how it was being used, who it was shared with, and how it was protected. They also wanted information about the AI models powering the robot, including training data and trustworthiness.
- Multiple Communication Channels: Users preferred this information to be available through various means, such as on the device packaging, online, and even communicated directly by the robot itself through voice or visual cues.
- On-Device Cues: Many desired visual or audio signals to indicate when the robot was actively collecting data.
- Data Management: Participants expected the ability to review and delete collected data, similar to features found in some smart speakers.
- Physical Controls: The desire for a “kill switch” or the ability to physically cover sensors to disable data collection was also noted.
- Parental Controls: For children’s use, granular parental controls were highly requested, allowing parents to manage what data the robot could collect from their child.
- Regulations: Participants emphasized the need for strong security and privacy regulations for social robots, especially in sensitive use cases. However, the study highlighted a common misconception that existing laws like HIPAA (Health Insurance Portability and Accountability Act) would automatically protect data collected by domestic social robots, which is often not the case.
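The control expectations above (kill switch, per-sensor toggles, reviewable and deletable data, parental restrictions) can be sketched as a simple software model. This is an illustrative sketch only, not from the study; all names (`PrivacyController`, `may_collect`, the data categories) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SensorSettings:
    # Per-sensor toggles, mirroring the desire to physically cover cameras/mics.
    camera_enabled: bool = True
    microphone_enabled: bool = True

@dataclass
class ParentalControls:
    enabled: bool = False
    # Data categories the robot may not collect from a child user (illustrative).
    blocked_categories: set = field(default_factory=lambda: {"location", "voice_profile"})

class PrivacyController:
    """Hypothetical gatekeeper that every data-collection request must pass."""

    def __init__(self):
        self.kill_switch = False          # master off, like a hardware kill switch
        self.sensors = SensorSettings()
        self.parental = ParentalControls()
        self.data_log = []                # reviewable record of what was collected

    def may_collect(self, sensor: str, category: str, child_user: bool = False) -> bool:
        """Check every control layer before allowing collection."""
        if self.kill_switch:
            return False
        if sensor == "camera" and not self.sensors.camera_enabled:
            return False
        if sensor == "microphone" and not self.sensors.microphone_enabled:
            return False
        if child_user and self.parental.enabled and category in self.parental.blocked_categories:
            return False
        return True

    def record_collection(self, sensor: str, category: str):
        # On-device transparency: log what was collected and when.
        self.data_log.append({"sensor": sensor, "category": category,
                              "time": datetime.now().isoformat()})

    def review_data(self):
        # Users expected to be able to review collected data...
        return list(self.data_log)

    def delete_all_data(self):
        # ...and delete it, as with some smart speakers.
        self.data_log.clear()
```

For example, with parental controls enabled, a request to collect a child’s voice profile is refused, and flipping the kill switch blocks all collection regardless of other settings.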
Beyond Informational Privacy
The study also discussed privacy dimensions beyond just data collection, including physical, psychological, and social privacy:
- Physical Privacy: Concerns about robots entering private spaces uninvited or creating a pervasive sense of surveillance. Users want controls over robot movement and physical indicators when cameras or microphones are active.
- Psychological Privacy: Relates to control over one’s mental states and inferred information, such as mood. If social robots include emotional AI, users expect transparency, the ability to review/delete emotional data, and the option to disable such features.
- Social Privacy: Addresses the social bonding between humans and robots. While companionship was seen as a benefit, participants worried about robots replacing genuine human interactions, particularly for children. Designers are urged to implement safeguards against excessive anthropomorphism and to design robots that facilitate, rather than replace, social interactions.
In conclusion, as social robots move closer to widespread commercialization, addressing these multifaceted privacy and security concerns through transparent practices, robust controls, and proactive regulations will be crucial for their successful adoption and integration into our homes.


