AI has recently been reshaping society: it is transforming the job market by automating routine tasks and processes; being leveraged in healthcare for a range of applications, including medical diagnosis, drug discovery, personalized medicine, and telemedicine; and used extensively on social media platforms for content recommendation, targeted advertising, and detecting inappropriate content. Unsurprisingly, some of our most-used apps have begun incorporating the so-called “robot” into their features.
Snapchat, for instance, recently added a chatbot, available to Snapchatters on the “Chats” page. The AI responds to all kinds of messages, from friendly conversation to homework help. As Snapchat itself states: “In a chat conversation, My AI can answer a burning trivia question, offer advice on the perfect gift for your BFF’s birthday, help plan a hiking trip for a long weekend, or suggest what to make for dinner.”
The feature is powered by ChatGPT, the viral AI chatbot, and, like that platform, it can make suggestions, answer questions, and converse with users. In Snapchat’s version, users can change the chatbot’s name, create a custom Bitmoji avatar for it, and bring it into conversations with friends.

Despite the convenience of having artificial intelligence assist with your daily tasks and questions, the tool can unfortunately mislead you in some of its responses. Because Snapchat’s AI is a constantly developing feature, you should verify the chatbot’s answers before acting on any advice, and you should never share confidential or sensitive information with it. Some Snapchat users are already voicing their dissatisfaction, leaving negative reviews in the app store and criticizing the feature on social media. They raise concerns about privacy, unsettling interactions, and the inability to remove the feature from their chat feed without a premium subscription.
With these personal touches, interacting with Snapchat’s automated chatbot may feel less transactional than visiting the ChatGPT website, and it may be less obvious that you’re conversing with a machine. ChatGPT, which relies on massive amounts of data collected from the internet, has previously been criticized for spreading incorrect information, responding to users in inappropriate ways, and enabling students to cheat. Snapchat’s use of the technology risks exacerbating some of these existing concerns and creating new ones. As CNN recently reported: “Alexandra Hamlet, a clinical psychologist in New York City, said the parents of some of her patients have expressed concern about how their teenager could interact with Snapchat’s tool. There’s also concern around chatbots giving advice and about mental health because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.”
If a teenager is feeling down and lacks the motivation to feel better, they might deliberately turn to a chatbot they know will make them feel even worse. Over time, this can erode their sense of self-worth, even though they know they are conversing with a bot; in an emotional state, it is difficult to think that logically. For now, it falls to parents to initiate important conversations with their teenagers about healthy ways to interact with AI, especially as these tools become more prevalent in popular apps and services.
Featured image generated by the AI tool Midjourney with the prompt “Anxious narcissus aiming his face at the giant cellphone screen, modern design”