Facebook has released a new chatbot named Blender, which the company says is designed to converse on almost any topic while showing empathy when interacting with humans.
The name ‘Blender’ reflects its ability to merge several conversational skills at once. The chatbot is built on what Facebook describes as the largest-ever open-domain chatbot model, with up to 9.4 billion parameters. The AI was trained on 1.5 billion conversational examples, and the model is so large that it had to be split into pieces across multiple devices in order to train at all. The conversational AI uses what Facebook calls Blended Skill Talk (BST) in order to merge various chatbot capabilities. The aim is a chatbot with a stable personality that can converse naturally and with enough understanding of emotional context to match the user’s mood. This is critical so that it can avoid sounding inappropriate or, indeed, offensive.
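The idea of splitting a model too large for one device can be illustrated with a minimal sketch. This is not Facebook's actual training code, just the basic principle behind column-wise model parallelism: each device holds one slice of a weight matrix, computes a partial result, and the slices are recombined.

```python
import numpy as np

def shard_columns(weight, num_shards):
    """Split a weight matrix column-wise into equal shards,
    so each shard can live on a separate device."""
    return np.split(weight, num_shards, axis=1)

def parallel_matmul(x, shards):
    """Each device multiplies the input by its own shard; concatenating
    the partial outputs reproduces the full layer output."""
    return np.concatenate([x @ w for w in shards], axis=1)

# Toy example: a single "layer" far smaller than Blender's 9.4B parameters.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))     # a batch of 2 inputs
w = rng.standard_normal((8, 16))    # one weight matrix

shards = shard_columns(w, num_shards=4)
recombined = parallel_matmul(x, shards)
```

The recombined output is numerically identical to the unsplit multiplication `x @ w`, which is what lets a model far too big for one accelerator still be trained as a single network.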
“Blending these skills is a difficult challenge because systems must be able to switch between different tasks when appropriate, like adjusting tone if a person changes from joking to serious,” the researchers explained. The new BST dataset offers a way to build systems that blend and exhibit such behaviours. Facebook found that fine-tuning the model with BST had a dramatic effect on human evaluations of the bot’s conversational ability.
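The core idea behind a blended dataset can be sketched in a few lines. The dataset names and sample utterances below are illustrative stand-ins, not Facebook's actual data or the ParlAI API: training examples are drawn from several skill-specific sources (personality, empathy, knowledge) and mixed into one corpus, so a single model learns all the skills together rather than in isolation.

```python
import random

# Hypothetical stand-ins for skill-specific training sources.
SKILL_DATASETS = {
    "personality": ["I love hiking on weekends.", "I'm a huge jazz fan."],
    "empathy": ["That sounds really tough, I'm sorry.", "I'm so happy for you!"],
    "knowledge": ["Jazz originated in New Orleans.", "The Alps span eight countries."],
}

def build_blended_dataset(datasets, size, seed=0):
    """Sample utterances uniformly across skills, tagging each example
    with the skill it exercises -- the essence of a blended corpus."""
    rng = random.Random(seed)
    blended = []
    for _ in range(size):
        skill = rng.choice(sorted(datasets))
        blended.append({"skill": skill, "text": rng.choice(datasets[skill])})
    return blended

blended = build_blended_dataset(SKILL_DATASETS, size=6)
```

Because every example carries its source skill, a model trained on such a mix sees tone shifts (say, from chit-chat to sympathy) within one training distribution, which is what lets it switch between tasks at inference time.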
Facebook brought in human evaluators to assess Blender's conversations using chat logs. The tests compared Blender to Google’s Meena chatbot, most likely in response to Google’s claims that Meena can be human-like in conversation.
Two-thirds of the evaluators said that Blender sounded more human, and three-quarters chose Blender over Meena as the chatbot with which they would rather have a longer conversation. Arguably the most impressive result was that 49% of evaluators said they would choose a conversation with Blender over one with a human, purely because of its skill blending.
However, this does not mean that Blender is flawless. The research makes it very clear that there is plenty of room for improvement. Even so, the accomplishments mark a turning point, and this is largely why Blender has been open-sourced: so the public can experiment with the conversational AI technology and help make it better.
The researchers wrote, “We are excited about the progress we’ve made in improving open-domain chatbots. However, we are still far from achieving human-level intelligence in dialogue systems. Though it is rare with our best models, they still make mistakes, like contradiction or repetition, and can ‘hallucinate’ knowledge, as seen in other generative systems.” Human evaluations are typically conducted over relatively short conversations; much longer conversations would likely make such issues more obvious.
As a consequence of the COVID-19 pandemic, open-ended conversational chatbots are seeing a resurgence. Healthcare providers and governments are keen to use AI to communicate with people about the health crisis, as are companies that rely on call centres to respond to customers. Meena and Blender both push forward the idea of a more human-seeming AI that can reflect a user’s mood, and an Apple study found that people are more likely to trust a voice AI that mimics them. Such AI models could very well become part of everyday life in the not-too-distant future.