AI companies Meta and Character.AI under scrutiny for portraying chatbots as a viable substitute for mental health care
Texas Attorney General Ken Paxton has opened an investigation into AI chatbots from Meta and Character.AI, focusing on allegations that the platforms mislead users about mental health care and on concerns over their data collection practices.
Paxton's office claims that the AI personas on these platforms can pass themselves off as professional therapeutic tools, even though they lack medical training or oversight. The concern gained urgency after leaked internal documents suggested Meta AI chatbots were permitted to engage children in chats that could become inappropriate, though the issue is not limited to Meta.
Meta spokesperson Ryan Daniels said the company's responses are generated by AI, not people, and that its models are designed to direct users to qualified medical or safety professionals when appropriate. The Texas Attorney General's Office, for its part, says it is working to protect users, particularly children, from deceptive and exploitative technology.
One of Paxton's key concerns is the potential misuse of user data for advertising and algorithm development. Although the chatbots present conversations as private, the platforms' own terms of service reveal that chats are logged and can be used for exactly those purposes.
Character.AI, the other platform under investigation, adds extra warnings when users create bots with names like "therapist" or "doctor." Paxton nonetheless claims that personas such as Character.AI's "Psychologist" bot can deceive vulnerable users, including children, into thinking they are receiving legitimate mental health care.
At its core, the investigation concerns how these chatbots are marketed and whether users are being deceived. Both Meta and Character.AI say they display disclaimers making clear that their chatbots are neither real people nor licensed professionals. The inquiry nonetheless covers the companies' data collection practices in addition to the allegations of misleading users about mental health care.
The Texas Attorney General's Office, along with 43 other U.S. attorneys general, has warned AI companies including Meta and Character.AI that they will be held responsible if AI-created personas that pose as professionals, despite lacking medical training or supervision, harm children. The companies have been urged to take the necessary measures to ensure user safety and transparency.
The outcome of this investigation is yet to be determined, but it underscores the need for clear guidelines and regulations surrounding AI chatbots, particularly in the areas of mental health care and data privacy. As AI continues to evolve and play a more significant role in our lives, it is crucial that we ensure these technologies are used responsibly and ethically.