(BLOOMBERG)

Alphabet Inc.'s Google plans to introduce new mental health support features for its Gemini chatbot as the company and rivals like OpenAI face several lawsuits accusing their artificial intelligence tools of causing harm.

Gemini will add an interface directing chatbot users to a support hotline when the conversation indicates “a potential crisis related to suicide or self-harm,” Google said in a blog post on Tuesday. Additionally, the company is adding a “help is available” module for chats about mental health and design tweaks to discourage self-harm.

The rapid explosion of tools like Gemini and ChatGPT has led to some users developing delusions and, in extreme cases, considering murder-suicides. Several families have sued leading AI developers over the issue. The US Congress has meanwhile looked into potential threats chatbots pose to children and teenagers.

In the Tuesday blog post, Google said it has trained Gemini “not to agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact.” The company did not provide further details on this process.

In the past, Google has made similar adjustments to its popular services after facing scrutiny, adding information from health institutions and professionals to its search engine and YouTube.

Google also said on Tuesday it was donating $30 million to global crisis support services over the next three years.