At its I/O conference, Google announced major news for its search engine and its artificial intelligence efforts. In particular, with the launch of the Gemini 1.5 Flash and 1.5 Pro models, Google aims to optimize workflows and improve AI-powered applications, offering advanced tools to developers worldwide. The new Gemini 1.5 Flash and 1.5 Pro models represent a significant step forward for Google AI. Available in public preview in over 200 countries and territories, including the EU, the UK and Switzerland, these models are designed to perform high-frequency tasks efficiently. In particular, the 1.5 Pro model introduces a 2-million-token context window, offering unprecedented capacity to manage and analyze large volumes of data in real time.
The Gemini API now supports parallel function calling and video frame extraction, further improving processing capabilities. With the introduction of context caching next month, frequently used context files can be cached at reduced cost, streamlining workflows in complex scenarios such as content creation, document analysis and the synthesis of research materials. Additionally, Google AI Edge enables the deployment of AI in edge environments, including mobile and the web: with TensorFlow Lite, developers can bring PyTorch models directly to mobile users, making on-device AI integration easier. To spur innovation, Google has also launched a Gemini API developer competition, with a customized 1981 DeLorean electric car among the prizes. The initiative aims to surface innovative applications that push the boundaries of AI.
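As a rough sketch of what parallel function calling looks like at the REST level, the snippet below builds a `generateContent` request body declaring two tools the model may call in a single response. The field names follow the public `generativelanguage.googleapis.com` v1beta schema; the function declarations themselves (`set_brightness`, `play_playlist`) are purely illustrative, not part of any real API.

```python
import json

# Hypothetical tool declarations: with parallel function calling, the
# model may return several functionCall parts in a single response.
request_body = {
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "Dim the lights and start my jazz playlist."}],
        }
    ],
    "tools": [
        {
            "function_declarations": [
                {
                    "name": "set_brightness",  # illustrative function
                    "description": "Set the room light brightness (0-100).",
                    "parameters": {
                        "type": "object",
                        "properties": {"level": {"type": "integer"}},
                        "required": ["level"],
                    },
                },
                {
                    "name": "play_playlist",  # illustrative function
                    "description": "Start playing a named playlist.",
                    "parameters": {
                        "type": "object",
                        "properties": {"name": {"type": "string"}},
                        "required": ["name"],
                    },
                },
            ]
        }
    ],
}

# This JSON would be POSTed to
# https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=API_KEY
payload = json.dumps(request_body)
print(json.loads(payload)["tools"][0]["function_declarations"][0]["name"])
```

In a real client, the response's `functionCall` parts would then be executed locally and their results sent back to the model in a follow-up turn.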
The Android development platform now benefits from Gemini in Android Studio, making it easier to build high-quality applications. Gemini Nano, Google’s most efficient model for on-device tasks, is now available on Pixel 8 Pro and Samsung Galaxy S24 series devices, with support coming to more devices. Gemini Nano will be able to recognize not only text but also images and other elements of web pages. Additionally, Gemini Nano’s integration into desktop Chrome promises to bring AI capabilities directly into the browser, with benefits for scalability and privacy. Google also released the code for Project Gameface, its hands-free gaming mouse controlled by facial expressions, as open source for Android developers on Tuesday. Developers can now integrate this accessibility feature into their apps, letting users control the cursor with facial gestures or head movements: for example, opening the mouth to move the cursor, or raising the eyebrows to click and drag.
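Project Gameface is built on face-landmark tracking, which produces per-gesture "blendshape" scores each frame. The sketch below shows how such scores could be mapped to the cursor actions described above; the blendshape names (`jawOpen`, `browInnerUp`) follow the ARKit-style naming used by common face-tracking libraries, but the thresholds and action mapping are assumptions for illustration, not Gameface's actual configuration.

```python
# Map face blendshape scores (0.0-1.0, as produced by face-tracking
# libraries such as MediaPipe) to cursor actions. Thresholds and the
# action mapping below are illustrative assumptions.
GESTURE_THRESHOLDS = {
    "jawOpen": 0.4,      # open mouth  -> move the cursor
    "browInnerUp": 0.5,  # raise brows -> click and drag
}

GESTURE_ACTIONS = {
    "jawOpen": "move_cursor",
    "browInnerUp": "click_and_drag",
}

def active_actions(blendshapes: dict) -> list:
    """Return the cursor actions triggered by the current face frame."""
    return [
        GESTURE_ACTIONS[name]
        for name, threshold in GESTURE_THRESHOLDS.items()
        if blendshapes.get(name, 0.0) >= threshold
    ]

# One simulated frame: mouth wide open, eyebrows neutral.
frame = {"jawOpen": 0.8, "browInnerUp": 0.1}
print(active_actions(frame))  # -> ['move_cursor']
```

A real integration would feed live tracker output into a loop like this and let users tune the thresholds, since gesture sensitivity varies from person to person.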