April 16, 2024


Meta’s AI research labs have created a new state-of-the-art chatbot and are allowing members of the public to speak to the system in order to collect feedback on its capabilities.

The bot is called BlenderBot 3 and is accessible on the web. (Though, right now, it appears only US residents can access it.) BlenderBot 3 can hold general conversations, Meta says, but can also answer the kinds of questions you might ask a digital assistant, “from talking about healthy food recipes to finding kid-friendly amenities around town.”

The bot is a prototype built on top of Meta’s previous work with what are known as large language models, or LLMs: powerful but flawed text-generation systems, of which OpenAI’s GPT-3 is the most widely known example. Like all LLMs, BlenderBot is first trained on huge text datasets, which it mines for statistical patterns in order to generate language. Such systems have proven extremely versatile and have been put to a wide range of uses, from generating code for developers to helping authors write their next bestseller. However, these models also have serious flaws: they reproduce the biases in their training data and often invent answers to users’ questions (a big problem if they’re going to be useful as digital assistants).
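As a rough illustration of what “mining statistical patterns to generate language” looks like in practice, the snippet below samples a continuation from a small, publicly hosted language model using the Hugging Face transformers library. GPT-2 is used purely as a stand-in here; BlenderBot 3 is a far larger model that is fine-tuned for dialogue, and this is not Meta’s code.

```python
# Minimal sketch: sampling text from a small pretrained language model.
# GPT-2 is only a stand-in; BlenderBot 3 is a much larger dialogue model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A healthy dinner recipe for a weeknight is"
result = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)

# The model continues the prompt with whatever wording is statistically likely
# given its training data -- plausible-sounding, but not guaranteed to be true.
print(result[0]["generated_text"])
```

The last point in the comment is exactly the “invented answers” problem described above: the sampler optimizes for likely-sounding text, not factual accuracy.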

This last issue is something Meta specifically wants to test with BlenderBot. A key feature of the chatbot is that it can search the web in order to talk about specific topics. More importantly, users can then click on its answers to see where it got its information. BlenderBot 3, in other words, can cite its sources.
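Meta does not detail its retrieval pipeline here, but the general pattern of a search-augmented reply that keeps track of its sources can be sketched as follows. Everything in this snippet is a hypothetical placeholder (the stubbed search and generation functions are not BlenderBot’s real API); it only shows how source URLs can travel along with the generated answer.

```python
# Hypothetical sketch: a search-augmented reply that carries its sources with it.
# search_web() and generate_reply() are stand-ins, not BlenderBot's actual API.
from dataclasses import dataclass


@dataclass
class Document:
    url: str
    snippet: str


@dataclass
class CitedReply:
    text: str
    sources: list[str]  # URLs the reply was conditioned on


def search_web(query: str, top_k: int = 3) -> list[Document]:
    # Stand-in for a real web search call.
    return [Document(url="https://example.com/result", snippet="...")][:top_k]


def generate_reply(question: str, docs: list[Document]) -> str:
    # Stand-in for the language model, which would condition on the retrieved text.
    context = " ".join(d.snippet for d in docs)
    return f"(model reply to {question!r}, grounded in: {context})"


def answer_with_sources(question: str) -> CitedReply:
    docs = search_web(question)
    return CitedReply(text=generate_reply(question, docs),
                      sources=[d.url for d in docs])


print(answer_with_sources("kid-friendly amenities around town").sources)
```

Keeping the source list alongside the reply is what lets a user click through and check where an answer came from.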

By releasing the chatbot to the general public, Meta wants to collect feedback on the various problems facing large language models. Users who chat with BlenderBot will be able to flag any suspicious responses from the system, and Meta says it has worked hard to “minimize the bot’s use of foul language, insults, and culturally insensitive comments.” Users will have to opt in to having their data collected, and if they do, their conversations and comments will be saved and later published by Meta for use by the general AI research community.

“We’re committed to publicly releasing all the data we collect in the demo in hopes that we can improve our conversational AI,” Kurt Shuster, a research engineer at Meta who helped build BlenderBot 3, told The Verge.

An example of a BlenderBot 3 chat on the web. Users can provide comments and reactions to specific answers. Image: Meta

Releasing prototype AI chatbots to the public has historically been a risky move for tech companies. In 2016, Microsoft launched a chatbot named Tay on Twitter that learned from its interactions with the public. Somewhat predictably, Twitter users soon directed Tay to regurgitate a series of racist, anti-Semitic and misogynistic statements. In response, Microsoft took the bot offline less than 24 hours later.

Meta says the world of artificial intelligence has changed a lot since the Tay debacle, and that BlenderBot has all kinds of safety rails in place that should keep Meta from repeating Microsoft’s mistakes.

Importantly, says Mary Williamson, head of engineering research at Facebook AI Research (FAIR), while Tay was designed to learn in real time from user interactions, BlenderBot is a static model. This means it can remember what users say within a conversation (and will retain that information via browser cookies if a user exits the program and returns later), but this data will only be used to improve the system further.
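One way to picture the distinction Williamson draws: the model’s weights are frozen at deployment, while short-term conversational memory lives in per-session state outside the model (cookie-backed in the web demo). The sketch below is purely illustrative and not Meta’s implementation.

```python
# Illustrative only: frozen model weights vs. per-session conversation memory.
class StaticChatbot:
    def __init__(self, weights):
        self.weights = weights  # fixed at deployment; never updated by user chats

    def reply(self, history: list[str], message: str) -> str:
        # The reply can depend on everything said so far in this session...
        context = " | ".join(history + [message])
        return f"(reply conditioned on: {context!r})"


# Per-user memory is kept outside the model, e.g. keyed by a session cookie.
sessions: dict[str, list[str]] = {}


def handle_message(session_id: str, message: str, bot: StaticChatbot) -> str:
    history = sessions.setdefault(session_id, [])
    reply = bot.reply(history, message)
    history.extend([message, reply])  # remembered for this user's session only
    return reply                      # ...while bot.weights stay untouched
```

Because nothing users type ever flows back into the weights during the demo, a coordinated campaign like the one that derailed Tay cannot rewrite the bot’s behavior on the fly.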

“It’s just my personal opinion, but that [Tay] episode is relatively unfortunate, because it created this chatbot winter where every institution was afraid to put chatbots out publicly for research,” Williamson tells The Verge.

Williamson says most chatbots in use today are narrow and task-oriented. Consider customer service bots, for example, which often just present users with a pre-programmed dialogue tree, narrowing down their query before handing them off to a human agent who can actually get the job done. The real prize is building a system that can hold a conversation as free-ranging and natural as a human’s, and Meta says the only way to achieve that is to let bots have free-ranging and natural conversations.
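A pre-programmed dialogue tree of the kind Williamson describes can be as simple as a nested lookup table, where every turn narrows a fixed menu of options until the user is handed off. The categories below are invented for illustration.

```python
# Toy customer-service dialogue tree: each turn narrows a fixed set of choices.
DIALOG_TREE = {
    "prompt": "Do you need help with 'billing' or 'shipping'?",
    "billing": {
        "prompt": "Is this about a 'refund' or an 'invoice'?",
        "refund": {"prompt": "Connecting you to a billing agent..."},
        "invoice": {"prompt": "Connecting you to accounts..."},
    },
    "shipping": {
        "prompt": "Is your order 'late' or 'damaged'?",
        "late": {"prompt": "Connecting you to logistics..."},
        "damaged": {"prompt": "Connecting you to returns..."},
    },
}


def walk(tree: dict, choices: list[str]) -> str:
    """Follow the user's choices down the tree until a leaf (human hand-off)."""
    node = tree
    for choice in choices:
        node = node[choice]
    return node["prompt"]


print(walk(DIALOG_TREE, ["billing", "refund"]))  # "Connecting you to a billing agent..."
```

Such a bot can only ever say what was scripted in advance, which is exactly the narrowness an open-ended system like BlenderBot is meant to move beyond.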

“This lack of tolerance for bots saying unhelpful things, in a broad sense, is unfortunate,” says Williamson. “And what we’re trying to do is release it very responsibly and promote research.”

In addition to putting BlenderBot 3 online, Meta is also publishing the underlying code, training dataset, and smaller model variants. Researchers can request access to the largest model, which has 175 billion parameters, through a form here.
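For a rough, hands-on sense of what working with a small, openly hosted BlenderBot-family checkpoint looks like, the snippet below loads the earlier blenderbot-400M-distill weights through the Hugging Face transformers library. This is a predecessor model, not BlenderBot 3 itself, whose checkpoints Meta distributes separately.

```python
# Rough illustration: chatting with an earlier, smaller open BlenderBot checkpoint.
# This is a predecessor model, not the BlenderBot 3 release described above.
from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

inputs = tokenizer("What's a good weeknight dinner?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```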


