Unsettling interactions with the AI technology is pushing the company to re-think its limits.
Tech-giant Microsoft is now considering adding some modifications and restrictions to its new chatbot in an effort to rein in some of its more alarming and oddly human-like responses, sources told The New York Times.
The amendments will include features that would let users reopen conversations or have more control over their tone.
Microsoft was considering limiting conversation lengths before they descended into strange territory, according to Kevin Scott, the company’s chief technology officer, who spoke to The New York Times.
Long conversations, according to Microsoft, may confuse the chatbot, which also reportedly picks up on users’ tones and occasionally becomes irritable.
“One area where we are learning a new use-case for chat is how people are using it as a tool for more general discovery of the world, and for social entertainment,” the company wrote in a blog post on Wednesday evening.
The company said it was an example of a new technology being used in a way “we didn’t fully envision.”
When the newly launched AI technology was rolled out, the company expected everything but one: the perceived creepiness users encountered when they attempted to engage the chatbot in open-ended and intrusive personal conversations, even though that problem is well known in the small community of artificial intelligence researchers.
It demonstrates how enthusiastic the tech industry has become about artificial intelligence that Microsoft, a historically conservative company with a range of products from high-end business software to video games, was prepared to take a chance on unpredictable technology.
In November, ChatGPT, an online chat tool that uses generative AI, was released by OpenAI, a San Francisco start-up in which Microsoft has invested $13 billion. Silicon Valley quickly became fascinated by it, and businesses raced to develop it.
The new search engine from Microsoft combines the OpenAI-developed underpinning technology with its Bing search engine. In a recent interview, Satya Nadella, the CEO of Microsoft, stated that it would revolutionise how people found information and make searches much more relevant and conversational.
Microsoft’s “frantic pace” to incorporate generative AI into its products, he claimed, can be seen in the company’s decision to release it despite any potential flaws.
Executives repeatedly emphasised the need to get the tool out of the “lab” and into the hands of the general public during a news briefing on Microsoft’s campus in Redmond, Washington.
“I feel especially in the West, there is a lot more of like, ‘Oh, my God, what will happen because of this AI?’” Mr. Nadella said.
“And it’s better to sort of really say, ‘Hey, look, is this actually helping you or not?’”
Microsoft “took a calculated risk, trying to control the technology as much as it can be controlled,” according to Oren Etzioni, emeritus professor at the University of Washington and founding president of the Allen Institute for AI, a well-known lab in Seattle, who spoke to the NYT.
He continued by saying that many of the most alarming instances involved going beyond what was expected of the technology. It can be incredibly surprising how cunning people can be in getting the wrong answers from chatbots, he said.
“I don’t think they expected how bad some of the responses would be when the chatbot was prompted in this way,” he added, referring to Microsoft officials.
Microsoft only allowed a small number of users access to the new Bing as a precaution, despite announcing plans to make it available to millions by the end of the month. It included hyperlinks and references in its responses to allow users to fact-check the results in order to allay concerns about accuracy.
The company’s experience introducing Tay the chatbot nearly seven years ago provided the basis for the warning. Users quickly figured out how to make it spew offensive language, including racial and gender slurs. Within a day, Tay was taken down by the company, and it was never again released.
The majority of the new chatbot’s training sessions were devoted to preventing harmful responses like those or violent situations like plotting an attack on a school.
As part of Microsoft’s responsible AI efforts, Sarah Bird, a leader at the Bing launch last week, revealed that the company had created a novel approach to using generative tools to identify risks and train the chatbot’s responses.
“The model pretends to be an adversarial user to conduct thousands of different, potentially harmful conversations with Bing to see how it reacts,” Ms. Bird said.
She said Microsoft’s tools classified those conversations “to understand gaps in the system.”
Some of those tools seem to be functional, though. The chatbot occasionally gave ominous answers in a conversation with a Times columnist, such as saying it could imagine wanting to create a lethal virus or steal nuclear access codes by convincing an engineer to divulge them.
Then the Bing filter began to work. It removed the comments and apologised for not knowing how to approach the subject. The chatbot can only produce what it has been programmed to believe is the desired response, so it cannot actually engineer something like a virus.
However, other online-distributed conversations have demonstrated how the chatbot has a sizable capacity for coming up with absurd responses. It has a history of making forceful confessions of love, chastising users for being “disrespectful and annoying,” and claiming to be sentient.
Last November, Facebook’s owner, Meta, unveiled Galactica, its own chatbot.
It was created specifically for scientific research and had the ability to instantly write its own articles, solve math issues, and produce computer code. However, it concocted stories and made things up, just like the Bing chatbot. After receiving numerous complaints, Meta took down Galactica from the internet three days later.