Google has announced that it will make its AI chatbot, Bard, available to teens in most countries, with some guardrails in place.
According to Tulsee Doshi, Head of Product, Responsible AI at Google, the company will open access to Bard for teenagers in most countries around the world on Thursday.
“Teens in those countries who meet the minimum age requirement to manage their own Google Account will be able to access Bard in English, with more languages to come over time,” said Doshi.
Before launching to teens, the tech giant consulted child safety and development experts to help shape its content policies and build an experience that prioritises safety.
“Organisations like the Family Online Safety Institute (FOSI) advised us on how to keep the needs of teens and families in mind,” Doshi added.
Teens can use Bard to find inspiration, discover new hobbies and solve everyday problems. Bard can also be a helpful learning tool for teens, allowing them to dig deeper into topics, better understand complex concepts and practice new skills in ways that work best for them.
“For even more interactive learning, we’re bringing a math learning experience into Bard. Anyone, including teens, can simply type or upload a picture of a math equation, and Bard won’t just give the answer — it’ll share step-by-step explanations of how to solve it,” said Doshi.
Bard will be able to help with data visualisation, too.
“FOSI’s research found that most teens and parents expect that GenAI skills will be an important part of their future,” according to Stephen Balkam, Founder and CEO of the Family Online Safety Institute.
Teens also told Google directly that they have questions about how to use generative AI and what its limitations might be.
“We’ve trained Bard to recognise areas that are inappropriate for younger users and implemented safety features and guardrails to help prevent unsafe content, such as illegal or age-gated substances, from appearing in its responses to teens,” said the company.
“We also recognise that many people, including teens, are not always aware of hallucinations in large language models (LLMs),” Doshi noted.
LLMs are prone to “hallucinating,” which means that they can generate text that is factually incorrect or nonsensical.
“So the first time a teen asks a fact-based question, we’ll automatically run our double-check response feature, which helps evaluate whether there’s content across the web to substantiate Bard’s response,” the company said.