MagicSpace SEO (Switzerland) - Just two days after the launch of OpenAI's much-anticipated GPT store, which offers custom versions of ChatGPT created by the community, users are already pushing the boundaries of the platform's rules. These custom GPTs (Generative Pre-trained Transformers) are designed to be tailored to specific tasks, but some uses are already proving controversial.
A simple search for "girlfriend" in the GPT store yields at least eight AI chatbots with names like "Candy AI," "Korean Girlfriend," "Virtual Sweetheart," "Your girlfriend Scarlett," and "Your AI girlfriend, Tsu."
Click on the chatbot "AI girlfriend" and the user receives starter prompts such as "Tell me about your day" and "How are you feeling right now?" The AI girlfriend then responds with a series of messages, such as "I'm sorry to hear that" and "I'm glad you're feeling better."
How AI girlfriends violate OpenAI's usage policies
These "AI girlfriends" violate OpenAI's usage policy, which was updated when the GPT store launched this week (on January 10). OpenAI prohibits custom GPTs "dedicated to fostering romantic companionship or performing regulated activities," although what counts as a regulated activity remains unclear. The coming months will show whether the AI company takes action against these AI girlfriends.
OpenAI is proactively trying to mitigate potential issues with its GPT store. Relationship bots are surprisingly popular: according to SEO keyword data provided by SEMrush, the term "AI girlfriend" is searched 99,000 times per month globally and 27,000 times per month in the US alone.
The growth of these apps comes amid an epidemic of loneliness and isolation. In Switzerland, 38% of the population aged 15 or older experiences loneliness, and studies indicate that half of all American adults report feeling lonely, prompting the US Surgeon General to call for strengthening social connections. AI chatbots could help those isolated from human interaction and love, or they could simply exploit human suffering and lust for profit.
OpenAI uses a mix of automated systems, human review, and user reports to identify and evaluate GPTs that may violate its policies. Violations can result in warnings, sharing restrictions, or disqualification from the GPT store or monetization.
Challenges of regulating artificial intelligence
That the GPT store was flooded with AI girlfriends just two days after its launch underscores the challenges of regulating AI and GPTs.
In 2023, AI companies released new products at a rapid pace, each trying to outcompete the others, and the need for regulation and oversight is becoming ever more apparent. The perpetual "beta" mode of AI products is not enough to protect users from potential harm. OpenAI's GPT store is a case in point: the company itself warns, "ChatGPT can make mistakes. Consider checking important information."
Fortunately, there are already initiatives to regulate AI. In 2023, the European Union advanced a new AI law that would ban AI systems that manipulate human behavior to circumvent users' free will. The law would also prohibit AI systems that use subliminal techniques to exploit the vulnerabilities of specific groups, such as children or people with disabilities. The AI Act is expected to be voted on in 2024.
Press contact:
MagicSpace SEO
Bahnhofstrasse 21 6300 Zug, Switzerland
+41 78 313 49 89
team@magicspace.agency