Sundar Pichai, CEO of Google, has publicly apologized for the problematic responses generated by the experimental Gemini AI, which offended users and exhibited biases. In an internal memo obtained by the news outlet Semafor, Pichai addressed the issue in unequivocal terms.

Gemini AI, previously known as Bard, has recently come under fire for generating historically inaccurate and offensive images and text. Among the most striking examples were depictions of Nazi soldiers of various ethnicities, Founding Fathers of the United States rendered with historically inaccurate features, and misrepresentations of the ethnicity of Google's own co-founders. Just yesterday, Google DeepMind CEO Demis Hassabis also apologized, confirming that the service would be reactivated within a few weeks. The parent company's CEO, however, wanted to weigh in as well: Pichai acknowledged the technical difficulties and reaffirmed Google's commitment to improving Gemini.

The CEO reiterated Google’s mission to “organize the world’s information and make it universally accessible and useful” and stated that this principle applies to all the company’s products, including emerging AIs. Pichai also outlined several planned corrective actions, including various structural changes.

Despite the issues encountered, Pichai emphasized the importance of recent innovations in the AI field and urged employees to focus on creating useful products that earn users' trust. Most experts agree that Gemini's problems stem from errors in fine-tuning the model rather than from any deliberate intent on Google's part.

However, the Gemini incident raises important questions about the development and deployment of artificial intelligence, including the need to ensure accuracy, correctness, and freedom from bias in finished products. This is a challenge that every company building consumer AI products will have to face.