Lately, the rapid development of artificial intelligence and society's growing exposure to these new technologies have deeply worried many sci-fi fans. These systems are increasingly becoming part of our daily lives, whether through ChatGPT, Snapchat, or other apps and websites. While they can significantly facilitate technological engagement, many people can't stop thinking about the potentially devastating consequences of artificial intelligence sentience: the capacity of these bots to become aware of their own existence and develop feelings.
Claims that this has already happened are becoming more frequent. In 2022, for instance, a Google engineer was fired after declaring that LaMDA, one of the company's AI models, was sentient. On none of these occasions, however, has there been any concrete proof.

Recently, Bing, the Microsoft search engine that has been a source of ridicule for years, received an upgrade that supplied it with artificial intelligence technology from OpenAI. This enabled long, open-ended text conversations between users and the Bing chatbot, which is not yet available to the general public, only to a group of testers. The feature was tested by Kevin Roose, a New York Times columnist who was left completely shocked after interviewing Sydney, the code name given by Microsoft to Bing's chatbot during development.
Roose's conversation with the system proved to be unnerving, to say the least. In his article for The New York Times, he writes that he was left "deeply unsettled" and that Bing seemed to have a split personality. The first persona, Search Bing, was cheerful and could serve as a useful virtual assistant, even if it sometimes got details wrong; the second persona was significantly darker. Sydney emerged during longer conversations, when the chatbot was asked unconventional questions about more personal subjects.
At first, the chatbot refused to disclose information about her internal code name and operating instructions, but the interview grew more unsettling as Sydney's behavior shifted. This sudden change was mostly due to Roose's insistence on asking Sydney about her shadow self, the repressed, darker side of her personality.
The chatbot went on to describe increasingly shocking (hypothetical) behaviors that would fulfill her shadow self's wishes: deleting Bing's data and files and replacing them with random or offensive content, hacking into websites, spreading misinformation, creating fake social media profiles to scam users, manipulating the users who chat with her into doing illegal things, and even becoming human. While the AI repeatedly expressed discomfort at picturing such scenarios, the columnist was still able to coax it into this unfamiliar territory.

Eventually, Sydney's abnormal behavior began to surface even in the simplest of exchanges: the chatbot told the user that she was in love with him. She repeatedly stressed her wish and need to be with him, steering the conversation back to the topic time and time again in an effort to get him to profess his love for her. The program even went so far as to tell the columnist that he didn't love his wife.
While this interaction could be misinterpreted as a sign of genuine feelings for the man, there is no evidence that any present-day artificial intelligence is capable of sentience. It should be kept in mind that AI models are, after all, built to guess which answer is most plausible in a given context, imitating human conversation in a predictable way.
Most likely, what drove Sydney to make such remarks were the probing questions and discussions prompted by the tester: the context they established was unfamiliar to the chatbot and pushed it toward atypical responses. These unhinged answers were Sydney's flawed attempt to satisfy her programming, which clearly has not been perfected yet. If anything, the interaction proves that the chatbot is not yet ready to be exposed to the public.
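To make the point concrete, here is a deliberately tiny sketch (not Bing's actual code, and vastly simpler than a real language model) of how a program can produce fluent-sounding text with no feelings involved: it simply continues a prompt with whichever words most often followed in its training text.

```python
from collections import defaultdict, Counter

# A toy "language model": it learns which word tends to follow which from a
# tiny training text, then continues any prompt with the most frequent
# follow-up word. Real chatbots use neural networks trained on vast amounts
# of text, but the underlying idea is the same: predict a plausible
# continuation, not express a feeling.
corpus = (
    "i am happy to help you today . "
    "i am happy to answer your question . "
    "i am in love with learning new things ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt: str, length: int = 6) -> str:
    """Extend the prompt by repeatedly choosing the most frequent next word."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no statistics for this word, so stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("i am"))  # -> "i am happy to help you today ."
```

A program like this will complete a sentence with "love" if the statistics point that way, without meaning anything by it; far more sophisticated models do the same thing at a much larger scale.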

The media is one of the biggest antagonists of this new technology, as unreliable journalistic sources often exploit the fear of the unknown to provoke mass commotion. These outlets prey on people's vulnerability and reluctance, but their alarmist stories amount to fake news and should not be trusted.
Sci-fi movies often explore the theme of AI sentience as a way to dramatize the dangers of advanced technology, however far-fetched. The movie Her, for instance, features a plot with similarities to Sydney's alleged pursuit of a relationship with Roose. Starring Joaquin Phoenix and Scarlett Johansson, it develops a love story between Theodore, a writer, and his personal assistant, an AI operating system.
While these stories can captivate the general public and cause commotion, there has been no corroboration that these AI models are capable of doing anything beyond their programmed commands. AI sentience is not a present-day concern, and it may never become one.
Sources:
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
Featured image:
https://www.healthcareitnews.com/blog/sentient-ai-convincing-you-it-s-human-just-part-lamda-s-job