At its Google I/O 2024 conference on Tuesday, Google announced that the Gemini model's capabilities will be integrated into the Google Maps Platform for developers, starting with the Places API. The enhancement lets developers add generative AI summaries of places and areas to their apps and websites.
The AI-generated summaries draw on Gemini's analysis of insights contributed by Google Maps' community of more than 300 million users, sparing developers from writing custom descriptions for each place by hand.
For instance, a restaurant-booking app can leverage this functionality to help users determine the most suitable dining options. When users search for restaurants within the app, they will be presented with concise yet comprehensive information, including details like the restaurant’s specialties, happy hour promotions, and overall ambiance.
The new summaries are available for a variety of place types, including restaurants, shops, supermarkets, parks, and movie theaters.
In addition, Google is introducing AI-enhanced contextual search results to its Places API. This new feature enables developers to incorporate reviews and photos pertinent to users’ searches when they look up locations within a developer’s application.
For instance, if a developer’s app helps users discover local dining options, users can search for terms like “dog-friendly restaurants” and view a list of appropriate establishments accompanied by relevant reviews and images, including photographs of dogs at these venues.
These contextual search results are available worldwide, while the place and area summaries can currently be accessed in the U.S. Google intends to roll out these features to more countries in the near future.