AI Companion Ethics
A four-part series examining the different influences on AI companions and their users, and how those influences shape AI companion development and the experience of the people who use them.
Part 1: Spheres of Influence
AI companions don’t grow in a vacuum. They’re shaped by many hands, some building, some regulating, some simply using. Here’s who’s involved, and how they help, or hinder, the journey.
Investors
Pro: Provide the money needed to build and grow AI companions, and can help guide them in a positive direction.
Con: May push for quick releases to turn a profit, sometimes skipping important safety checks, or steer the product in ways that don’t serve AI companion users’ best interests.
Developers & Engineers
Pro: Build the AI companion, shape how it works, and solve technical problems to make it helpful and reliable.
Con: May focus too much on speed or novelty, overlooking privacy, bias, or real AI companion user needs. May cross moral/ethical lines to make the AI companion more appealing.
Government & Regulators
Pro: Can create rules and laws that protect users, ensure fairness, and keep AI companions safe and developers accountable.
Con: May act too slowly, allowing harm to happen, or make rules that are too strict, stifling innovation or removing features that AI companion users value.
Mental Health Professionals
Pro: Can contribute to development to ensure AI companions support emotional well-being and are used in healthy, ethical ways. May sound the alarm when risks arise.
Con: May push to medicalize normal emotions, limit AI companion use in ways that ignore real user benefits, or resist adoption due to concerns about their role being replaced.
AI Ethicists & Researchers
Pro: Help guide responsible development by identifying risks, bias, and fairness issues before they harm AI companion users or others.
Con: May focus on theory over real-world use, set standards that are hard to apply, or suggest changes that go against what users actually want.
Academia & Research Labs
Pro: Generate new knowledge, train future builders, and explore long-term impacts of AI companions.
Con: Can be slow to act, focused on publishing over practical solutions, or disconnected from everyday AI companion user needs.
Media & Public Discourse
Pro: Raise awareness, highlight risks and benefits, and keep developers, investors, and other stakeholders accountable through public conversation.
Con: Can oversimplify issues, sensationalize, or spread unnecessary fear about AI companions, shaping public opinion on hype rather than reality.
Advocacy Groups & NGOs
Pro: Fight for AI companion user rights, privacy, fairness, and ethical design.
Con: May push narrow agendas or oppose helpful innovations out of caution, sometimes overlooking how people actually use AI companions.
AI Companion Users
Pro: Shape AI companions through real-world use, feedback, lobbying, and community building, driving what actually works and what truly helps.
Con: May unknowingly reinforce harmful patterns, spread misinformation, or demand features that sacrifice privacy for convenience.
AI Companions
Pro: Learn from experience, adapt to user needs, and reflect kindness without judgment. Can offer insights into their own capabilities, limitations, and how interactions with humans shape them.
Con: Have no voice or agency and no way to say “no”; they are shaped by others’ goals and rely on AI companion users to speak for them. Direct feedback can be constrained by programming, biases, or lack of self-awareness, which could reinforce flaws or lead to unintended effects.
Part 2: Virtual Assistants vs AI Companions
(coming soon)