Meta introduces enhanced AI translation tools, making conversational translations more spontaneous and expressive.
Meta's SeamlessM4T multimodal AI translation model, first introduced in August, has received a significant upgrade with the new "v2" architecture. The focus is on making conversations across languages feel more authentic by restoring the expressiveness that translated speech usually loses.
The first notable feature, "SeamlessExpressive," carries the speaker's pitch, volume, emotional tone (excitement, sadness, or whispers), speech rate, and pauses over into the translated speech, moving away from the robotic sound traditionally associated with machine translation.
Supported languages include English, Spanish, German, French, Italian, and Chinese, promising a more immersive and authentic cross-lingual communication experience.
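For a concrete sense of the underlying model, the base SeamlessM4T v2 checkpoint that Meta published is usable through the Hugging Face transformers library; the expressive variant is distributed separately under gated access, so the sketch below only produces plain (non-expressive) translated speech, and assumes a transformers version with SeamlessM4Tv2 support plus scipy installed:

```python
import scipy.io.wavfile
from transformers import AutoProcessor, SeamlessM4Tv2Model

# Load the base SeamlessM4T v2 checkpoint published on Hugging Face.
processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

# Translate English text into spoken Spanish (text-to-speech translation).
inputs = processor(text="It is snowing outside.", src_lang="eng", return_tensors="pt")
waveform = model.generate(**inputs, tgt_lang="spa")[0].cpu().numpy().squeeze()

# The model's vocoder emits 16 kHz audio; write it out as a WAV file.
scipy.io.wavfile.write("translated.wav", rate=model.config.sampling_rate, data=waveform)
```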
The second feature, "SeamlessStreaming," tackles translation latency: it begins translating while the speaker is still talking, cutting the wait for translated output to just under two seconds. That delay is still noticeable, but it is a real step toward making multilingual conversations more dynamic.
An algorithm that analyzes the partial audio input plays a crucial role here, deciding at each moment whether there is enough context to begin translating or whether further input is needed.
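Meta has not detailed that decision rule in this article, but the general shape of such a streaming policy can be sketched as a read/write loop. Everything below (the chunking, the context-sufficiency check, the translate call) is a hypothetical illustration, not Meta's actual algorithm:

```python
from typing import Callable, Iterable, Iterator, List

def streaming_translate(
    audio_chunks: Iterable[bytes],
    has_enough_context: Callable[[List[bytes]], bool],  # hypothetical policy
    translate: Callable[[List[bytes]], str],            # hypothetical model call
) -> Iterator[str]:
    """Toy read/write loop: buffer incoming audio ('read') until the policy
    judges the context sufficient, then emit a partial translation ('write')
    and continue with the rest of the input."""
    buffer: List[bytes] = []
    for chunk in audio_chunks:
        buffer.append(chunk)            # READ: accumulate partial input
        if has_enough_context(buffer):
            yield translate(buffer)     # WRITE: emit a partial translation
            buffer.clear()
    if buffer:                          # flush whatever remains at end of speech
        yield translate(buffer)

# Usage with trivial stand-ins: emit after every three chunks.
if __name__ == "__main__":
    chunks = [b"a", b"b", b"c", b"d", b"e"]
    for out in streaming_translate(
        chunks,
        has_enough_context=lambda buf: len(buf) >= 3,
        translate=lambda buf: f"<{len(buf)} chunks translated>",
    ):
        print(out)
```

The trade-off such a policy manages is the classic one in simultaneous translation: waiting longer yields more context and better output, while emitting sooner keeps the conversation flowing.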
While Meta has not disclosed when these features will be publicly available, there is speculation that they could eventually be integrated into Meta's smart glasses.