In March, Google released an artificial intelligence chatbot called Bard. It was Google’s answer to OpenAI’s hugely popular ChatGPT.

But Bard used less sophisticated AI than ChatGPT. It appeared to be less capable and less interactive. Within a few weeks, Google revamped the tool with advanced technology, but ChatGPT remained the chatbot that caught the public’s attention.

On Tuesday, Google unveiled plans to outdo ChatGPT by connecting Bard to its most popular consumer services like Gmail, Docs, and YouTube. With the new features, Google took a step towards adding Bard to the company’s vast suite of online products.

Bard has not received as much attention as ChatGPT, though it has become a close contender among chatbots. ChatGPT drew nearly 1.5 billion desktop and mobile web visits in August, more than three times the traffic of Google’s AI tool and other competitors, according to the data analysis firm SimilarWeb.

Nevertheless, Jack Krawczyk, Google’s product lead for Bard, said in an interview that Google was aware of the issues limiting its chatbot’s appeal. “It’s neat and new, but it doesn’t really fit into my personal life,” Mr. Krawczyk said users had told the company.

What Google calls Bard extensions follows OpenAI’s announcement of ChatGPT plug-ins in March, which allow the chatbot to retrieve updated information and tap into third-party services from other companies, including Expedia, Instacart, and OpenTable.

With the latest update, Google will try to replicate some of its search engine’s capabilities by connecting Bard to Flights, Hotels, and Maps, so users can research travel and transportation. And Bard can come closer to being a personalized assistant, letting users ask which emails they have missed and what the most important points of a document are.

AI chatbots are widely known for presenting falsehoods alongside accurate information, errors known as “hallucinations,” which leave users with no way to tell what is true and what is not.

Google believes it has taken a step toward addressing those issues by revamping the “Google It” button featured on Bard’s website, which previously let users run a Google search on a question they had asked the chatbot.

Now, the button will double-check Bard’s answers. When Google is confident in a claim and can support it with evidence, it will highlight the text in green and link to a webpage that supports the information. When Google cannot find facts to support a claim, the text is highlighted in orange.

“We’re really committed to making Bard more trustworthy by not only showing confidence in our responses, but also admitting when we make mistakes,” Mr. Krawczyk said.

Various tech companies have spent billions of dollars developing the so-called large language models that underlie Bard and other chatbots, systems that require vast amounts of data to learn. This has raised concerns about how companies like Google are using consumers’ information.

The company has tried to address concerns about how Bard will use this information.

“We are committed to protecting your personal information,” Yury Pinsky, Bard’s director of product management, wrote in a blog post. “If you choose to use the Workspace extensions, your content from Gmail, Docs, and Drive is not seen by human reviewers, used by Bard to show you ads, or used to train the Bard model.”

Mr. Krawczyk said Bard would maintain users’ privacy, though he declined to comment on how other Google services are using this type of data.

Google also updated Bard’s underlying AI, Pathways Language Model 2. It expanded to more than 40 languages a feature that allows users to upload images. And Google is letting users share Bard conversations with one another, so they can see the responses and ask the chatbot additional questions on the topic.

Even though people in more than 200 countries and territories are able to use Bard, Google is still calling the tool an “experiment” rather than a full product.

“These are still early days for this technology,” Mr. Krawczyk said. “It has profound capabilities, but it needs to be well understood by the people who are using it.”