The long-awaited Google I/O presentation, where the company unveiled new gadgets and showcased its new AI developments, is over, and here are the highlights.
While the annual conference is usually aimed at developers, this time Google devoted most of the presentation to its work in the field of artificial intelligence.
What was shown at the event?
Duet AI for Workspace
In March, Google introduced a number of artificial intelligence features for its Workspace suite of applications, aiming to compete with Microsoft in the rapid adoption of such tools. At Google I/O, the company unveiled a collection of these tools called Duet AI; although the features themselves are not yet available to the public, the company has already demonstrated them in action.
Duet AI consists of a number of generative AI tools for Google apps aimed at automating workflows. These include help with writing documents in Docs and Gmail, image creation for Slides, automatic summaries of Meet meetings, and more.
However, at Google I/O most of the attention went to the email-writing assistant, which will be available in mobile Gmail under the name "Help me write". The feature is already in closed testing, and users can apply to join the beta. All of the features will be released as experiments through Workspace Labs.
The key features in the Google Workspace ecosystem are the ones described above: writing assistance in Docs and Gmail, image generation in Slides, and automatic meeting summaries in Meet.
All of these features are under development and will launch in the coming months. This is how Google intends to enter the AI race against Microsoft.
Implementing Neural Networks in Google Photos
Before the end of the year, Google Photos will get a new generative AI editing tool called Magic Editor. Google showed a couple of examples of how it works, and both look pretty impressive.
The first is a photo of a person in front of a waterfall: the neural network moved the person off to the side, removed the people in the background, and made the sky brighter and bluer, all at the touch of a button.
In another photo, Magic Editor moved a child on a bench closer to the center of the frame, generating new sections of the bench and the balloons on the left to fill the empty space; the sky was brightened here as well. The feature is expected to be available later this year.
Improved search and PaLM 2
At the presentation, Google also showed its search engine in action with the chatbot Bard integrated into it. Bard will become an important part of the company's search engine, making it smarter and more responsive to a user's specific query.
Google says the chatbot relies on its most advanced large language models (LLMs) to date, including the new general-purpose PaLM 2 model and the Multitask Unified Model (MUM), which Google uses to understand different types of media.
As an example, the model was asked to pick a Bluetooth speaker for a party; the AI analyzed store prices and selected the most suitable options.
At the same time, Google says that Bard is still in development, so for now the company is keeping it out of sensitive areas such as health and finance to avoid potential risks.
Notably, Bard can not only respond to text queries but also back its answers with sources. In addition, it can generate images (using the Adobe Firefly neural network) and recognize them.
Bard is reported to understand 40 languages. It also knows 20 programming languages and can explain code submitted to it in any of the 40 supported languages.
As Google notes, Bard has now become available to all users.
New Immersive View for routes in Maps
The company demonstrated advanced 3D maps, which will soon be available in 15 cities. On them, users will be able to see the weather and even road traffic in real time.
Google Pixel 7A
The key product is the Google Pixel 7A, which offers a number of upgrades over its predecessor. One of the most notable is a display upgraded to a 90 Hz refresh rate.
The camera has been improved as well, and wireless charging support has been added, a first for the Pixel A line.
The device runs on the Tensor G2 chip and costs $499, $50 more than the Pixel 6A.
Google Pixel Fold
Rumors about the first foldable smartphone from Google have circulated for a long time, and the company has finally made it official. The Pixel Fold is Google's first foldable device, and it opens like a book.
The device is equipped with two OLED displays: a 5.8-inch external screen and a 7.6-inch internal screen with a 120 Hz refresh rate.
The device will cost $1,799.
Google Pixel Tablet
Google is returning to the tablet market with a new device called the Pixel Tablet. It will cost $499, and the gadget itself is already available for pre-order.
The tablet was designed specifically for the tasks tablets are typically used for: watching videos and playing games. Unlike with previous devices, Google made no claims about how revolutionary it is; instead, the company simply set out to build a good tablet that handles the basics well.
Android 14
The new version of Android will focus on artificial intelligence. Google has announced new AI-based features that will appear in Android 14.
One such feature is Magic Compose, which will be available in the Messages app on Android and can rewrite a message in different tones.
In addition, Google plans to launch a feature exclusive to the Pixel that allows users to customize the wallpaper of their devices using generative artificial intelligence.
Instead of choosing from a preset set of wallpapers, users will be able to describe the image they would like to see, and the device will generate a corresponding image using a text-to-image diffusion model developed by Google.
Such wallpapers will have a "live" effect: the image shifts as the smartphone is rotated in the user's hands. A similar feature already exists on Pixel smartphones, but it currently works only with predefined images; AI will soon let users create their own isometric wallpapers of this kind.
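Google has not published the model or API behind this wallpaper feature, but the underlying technique, text-to-image diffusion, is well known. Below is a minimal sketch using the open-source Stable Diffusion model via the Hugging Face diffusers library as a stand-in; the model name, prompt, and portrait resolution are illustrative assumptions, not details of Google's implementation.

    # Illustrative only: Stable Diffusion stands in for Google's
    # unpublished text-to-image wallpaper model.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load an open-source text-to-image diffusion pipeline (assumed stand-in model).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # The user describes the wallpaper instead of picking from a preset list.
    prompt = "isometric mountain village at dusk, pastel colors, minimalist phone wallpaper"

    # Portrait aspect ratio roughly matching a phone screen (illustrative choice).
    image = pipe(prompt, height=768, width=512).images[0]
    image.save("wallpaper.png")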