When Google released its new AI chatbot, Bard, this week, it had the advantage of having already watched how people responded to OpenAI’s ChatGPT and Microsoft’s new Bing chatbot.
Over the past few months, people who have tried ChatGPT and Bing have described strange interactions with the AI chatbots. Their conversational responses can give the impression of intelligence, but they often fabricate information, predicting which words sound plausible rather than retrieving facts.
The Bard experience acknowledges this. If you got off the waitlist and were able to try it, you likely saw a disclaimer stating that “Bard may give wrong or inappropriate answers,” along with a note that it will improve as more people use it and report problems.
In a brief test by Insider, Bard quickly answered trivia questions about the solar system and made cautious predictions about the March Madness basketball tournament. (“The odds are that Houston will win the national title. They have a team that is strong and has much experience,” Bard said.)
But it balked at getting personal. Asked about access to a user’s Gmail account, Bard told a reporter that it couldn’t look at “personal information, such as your name.”
This week, Google reassured users in a tweet that Bard “is not trained on Gmail data.”
Some users sharing their first impressions said they were underwhelmed. Google, for its part, has signaled that it will move slowly, improving the technology over time.
“It’s still early days for this technology, even though we’re at an important turning point and excited about how people feel about generative AI,” James Manyika, Google’s SVP of technology and society, wrote in a document about Bard.