Google CEO Advocates for AI Regulation Amid Rising Concerns
Sundar Pichai Calls for Global Regulatory Framework to Prevent Harmful AI Deployment
Google’s CEO, Sundar Pichai, recently expressed concern that artificial intelligence (AI) could cause serious harm if not properly regulated, saying the issue keeps him awake at night. In an interview on CBS’s 60 Minutes, Pichai called for a global regulatory framework for AI, similar to the treaties that govern the use of nuclear arms.

The rapid development of AI, particularly the race to build ever more advanced systems, has raised concerns about safety and the risk of the technology getting out of control. Last month, thousands of AI experts, researchers, and supporters, including Twitter owner Elon Musk, signed an open letter calling for a pause of at least six months in the creation of “giant” AIs.

Google’s parent company, Alphabet, recently launched Bard, an AI-powered chatbot, in response to ChatGPT, the chatbot developed by US tech firm OpenAI. Both are built on large language model (LLM) technology: models trained on vast amounts of data from the internet that can generate plausible responses to user prompts in many formats, including poems, academic essays, and software code.

Pichai also warned about the dangers of AI-generated disinformation, noting that AI could be used to create convincing fake videos of people saying things they never actually said. Such technology could have serious societal consequences, undermining trust and spreading false information.

Despite these concerns, Pichai said the version of the technology currently available through Bard is safe, and that Google is taking a responsible approach by keeping more advanced versions under testing so they can be properly vetted before public release.