Gemini has more human-like interaction capabilities
In a notable move, Google has announced a series of groundbreaking new features for Gemini Live, taking this AI assistant a big step forward in user interaction. Soon, Gemini Live will not only listen and respond but also be able to directly point on the device screen or through the camera, making communication more intuitive and effective than ever.
This feature allows users to point their phone camera at an object and ask the assistant to identify it, for example, to find a specific tool in a toolbox. Gemini Live will quickly highlight the desired object on the screen, saving time and effort. According to Google, the new feature will be integrated into Pixel 10 devices launching on August 28, before expanding to other Android and iOS devices in the following weeks.

In addition to visual interaction, Google is also letting Gemini connect more deeply with core phone applications such as Messages, Phone, and Clock. This means you can ask the AI assistant to handle multiple tasks seamlessly. For example, while navigating, you can interrupt Gemini and say: "This road is fine. Now send a message to Alex saying I'll be 10 minutes late." Gemini will immediately draft and send the message for you.
Additionally, Google introduced an improved audio model for Gemini Live, giving the assistant a more natural, human-like voice with flexible adjustments to rhythm, intonation, and pitch. Gemini can even change its tone depending on the conversation topic, or use a distinctive voice when telling a story from the perspective of a character or historical figure. All these upgrades show that artificial intelligence is increasingly becoming an indispensable part of daily life, helping people communicate with technology more naturally.