New AI Chatbots: With Friends Like These, Who Needs Enemies?
Meta’s new AI chatbot characters, designed with unique personalities to engage its massive user base and tap into the growing AI hype, are under fire.
- Meta wants to increase user engagement and retention amid competition from emerging platforms like TikTok and capitalize on the AI technology trend.
- Experts raise concerns about user privacy as chatbots collect substantial user data, potentially exposing them to misuse and manipulation by the company.
Meta is gearing up to launch a series of new AI chatbot characters with distinct personalities as early as next month. Designed to hold human-like conversations, these “personas” are already drawing backlash from concerned experts.
The move is a blatant attempt to boost engagement and retain users amid increasing competition from emerging platforms such as Chinese rival TikTok. The tech giant aims to capitalize on the growing hype surrounding AI technology, following the success of OpenAI’s ChatGPT.
Mark Zuckerberg, CEO of Meta, envisions these chatbots as assistants, coaches, and even intermediaries between users and businesses. By employing AI technology, Meta seeks to better understand users’ interests and tailor content and advertisements more effectively, a crucial aspect of its revenue generation through advertising.
The real concern, though, lies with user privacy and potential manipulation. When users interact with chatbots, they expose significant amounts of personal data to the company, raising questions about data security, potential misuse and, most importantly, the ethics of integrating these personas into the giant’s social media platforms.
Experts also worry that growing reliance on AI companions may lead to a phenomenon dubbed “learned narcissism.” As users become increasingly attached to AI friends, they may find it difficult to form meaningful relationships with real people, potentially exacerbating loneliness and isolation in society.
Dr. Andrew Byrne, an associate professor at Cal Poly School of Education, warns about the need for AI to set boundaries, saying “At some point, we will absolutely develop deeper relationships with AI than we have with people, due to the availability and interest AI will have in us. This will result in learned narcissism that leads to extreme interpersonal toxicity until someone trains AI to set boundaries.”
“It’s a mental health crisis waiting to happen, for which they’ll prescribe an AI therapist,” says futurist and author Theo Priestley.
Snapchat, one of Meta’s competitors, has already launched its own AI chatbot, “My AI,” powered by GPT-4. While it offers a range of features and has garnered significant user engagement, it has also faced criticism and concerns over user interactions involving adult content.
The potential implications of these new AI chatbot characters go beyond personal relationships. Experts fear they could deepen disconnection from real-world relationships, contributing to declining birth rates and other societal problems.
Meta’s AI initiative aims to keep pace with competitors and regain relevance, especially among younger users who have largely migrated to rival platforms such as TikTok. As Meta continues to push forward in AI, experts remain watchful of the potential biases, misinformation, and privacy risks associated with AI-powered chatbots.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.