7 Mar, 2023
1 min read

Google is developing a range of AI technologies, among them the Universal Speech Model, as part of its effort to build a model capable of understanding the 1,000 most widely spoken languages worldwide.

While Microsoft and Google compete over whose AI chatbot is superior, machine learning and language models have many other applications. Google, in particular, is making progress toward its goal of an AI language model that supports up to 1,000 languages. This comes alongside its plan to showcase more than 20 AI-powered products at its annual I/O event this year. In a recent update, Google shared more details about the Universal Speech Model (USM), which it describes as a "critical first step" toward that language-model goal.

USM is a system of state-of-the-art speech models with 2 billion parameters that have been trained on 12 million hours of speech and 28 billion sentences in over 300 languages. YouTube already uses USM to generate closed captions, and the system also supports automatic speech recognition (ASR) for detecting and translating languages like English, Mandarin, Amharic, and Cebuano.

Currently, USM supports over 100 languages, but it will serve as a foundation for building an even more comprehensive system. Meta is working on a similar AI translation tool, though it is still at an early stage. One potential application for this technology is augmented-reality glasses that detect speech and provide real-time translations. However, Google's garbled rendering of Arabic text during its I/O event last year illustrates how easily such systems can go wrong.