
The next phase of mobile artificial intelligence is shifting from answering questions to actually getting things done. Recent findings in the Google app beta (v17.4) reveal that the company is developing a feature called “screen automation”—codenamed “Bonobo”—that aims to let Gemini interact directly with other apps on your phone to complete multi-step processes on your behalf.
The concept is straightforward: instead of opening a ride-sharing app yourself, typing in an address, and confirming a vehicle, you would simply tell Gemini to “book a ride to the office.” Google Gemini’s screen automation would then navigate the app, select the right options, and prepare the order.
At first, this experimental Labs feature (spotted by 9to5Google) will probably only work with a small number of compatible apps for common tasks like ordering food or booking transportation.
Google Gemini to control Android apps via new Screen Automation feature
The idea of a hands-free smartphone experience is certainly enticing. However, Google is being upfront about the feature’s capabilities and limitations. Early code strings include warnings that “Gemini can make mistakes” and emphasize that users remain responsible for any actions the AI takes. Accordingly, the interface will let you supervise the process in real time and take over manually if the AI drifts off course.
Gemini needs to understand how an app looks visually for this feature to work properly. But as we already know, app UIs can change from time to time. This is probably why Google Gemini’s screen automation depends on the work done in Android 16 QPR3, which ensures the operating system can handle the difficult job of letting an AI “see” and “touch” the screen the way a person would.
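For context, here is a minimal Kotlin sketch of how software can already “see” and “touch” another app’s screen on Android today, using the public AccessibilityService API. This is an illustration under stated assumptions, not Google’s actual implementation: Google has not documented how Gemini’s screen automation works under the hood, and Android 16 QPR3 may introduce a different, dedicated mechanism. The AgentService class name and the “Confirm” button label are hypothetical.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical agent service; any real deployment must be declared in the
// app manifest with the BIND_ACCESSIBILITY_SERVICE permission and enabled
// by the user in system settings.
class AgentService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // "See": walk the foreground window's node tree, much as a person
        // would scan the screen for a button to press.
        val root = rootInActiveWindow ?: return
        findClickableNodeByText(root, "Confirm")?.performAction(
            // "Touch": perform the tap on the user's behalf.
            AccessibilityNodeInfo.ACTION_CLICK
        )
    }

    override fun onInterrupt() {
        // Invoked when the system interrupts feedback from this service;
        // nothing to clean up in this sketch.
    }

    // Depth-first search for the first clickable node whose visible text
    // matches the given label (case-insensitive).
    private fun findClickableNodeByText(
        node: AccessibilityNodeInfo,
        label: String
    ): AccessibilityNodeInfo? {
        if (node.isClickable &&
            node.text?.toString().equals(label, ignoreCase = true)
        ) {
            return node
        }
        for (i in 0 until node.childCount) {
            val child = node.getChild(i) ?: continue
            findClickableNodeByText(child, label)?.let { return it }
        }
        return null
    }
}
```

Matching on visible text like this is exactly what breaks when an app’s UI changes, which helps explain why deeper OS-level support would be needed for a robust agent.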
The privacy trade-off
As with most advanced AI features, there are important privacy considerations to keep in mind. To improve the service, trained reviewers at Google may examine screenshots of how Gemini interacts with your apps. Google also advises against using automation for sensitive tasks: current recommendations warn users not to enter login or payment information into Gemini chats and not to rely on the feature in emergencies.
For now, the safest way to use these new agents is for routine, non-sensitive chores where a small error wouldn’t cause a major headache.
The potential transition from a passive assistant to an active agent is a significant milestone for Android. We have already seen similar “Auto Browse” features in Chrome that fill out forms automatically. Bringing this logic to the entire OS feels like the next logical step. Whether you find this prospect exciting or slightly invasive depends on your comfort level with AI autonomy.