28 Feb, 2023
2 min read

Full-stack optimizations allow the image generation process to run entirely offline on the phone.

Stable Diffusion is a cutting-edge deep learning model that can transform text prompts into artificial images with a distinct, sometimes unsettling quality. The network is typically hosted in the cloud, but it can also be installed on high-powered PCs for offline use. Now, with further refinements, the model can be run efficiently on Android smartphones as well.

Qualcomm has successfully adapted the image creation capabilities of Stable Diffusion to a single Android smartphone powered by a Snapdragon 8 Gen 2 SoC. The achievement is significant: according to the San Diego-based company, it opens up the possibility of AI applications running on edge devices without any need for an internet connection.

Qualcomm's corporate blog explains that Stable Diffusion is a foundation model with a vast neural network that has been trained on an enormous amount of data. The text-to-image generative AI model contains an astounding one billion parameters, and until now, it has mostly been confined to the cloud or to traditional x86 computers equipped with the latest GPUs.

Qualcomm AI Research used "full-stack AI optimizations" to deploy Stable Diffusion on an Android smartphone for the first time at the performance levels the company describes. Full-stack AI means tailoring the application, neural network model, algorithms, software, and hardware to the specific requirements of the task at hand, though some compromises were necessary to achieve this feat.

Qualcomm first had to convert Stable Diffusion's single-precision floating-point (FP32) weights to the lower-precision INT8 data type. This was done with the company's AI Model Efficiency Toolkit (AIMET) for post-training quantization, which improved performance and reduced power consumption while maintaining model accuracy, without the need for expensive retraining.
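The core idea behind post-training quantization can be illustrated with a minimal sketch: map the observed range of FP32 weights onto the 256 levels of INT8 using a scale and zero point, then reconstruct approximate FP32 values at inference time. This is a generic affine-quantization example in NumPy, not Qualcomm's AIMET implementation, and the function names are illustrative.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine post-training quantization of FP32 weights to INT8.

    Maps the observed [min, max] range onto [-128, 127] with a scale
    and integer zero point (a generic scheme, for illustration only).
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0          # 256 INT8 levels span the range
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate FP32 values from the INT8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

# Toy layer: quantize, dequantize, and inspect the error introduced.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(4, 4)).astype(np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
max_err = float(np.abs(w - w_hat).max())
```

The INT8 tensor needs a quarter of the storage of FP32, and the reconstruction error is bounded by one quantization step (`scale`), which is why accuracy can survive the conversion for well-conditioned weight distributions.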

The outcome of this full-stack optimization was the ability to run Stable Diffusion on a smartphone, generating a 512 x 512 pixel image in under 15 seconds for 20 inference steps. According to Qualcomm, this is the fastest inference time reported on a smartphone and comparable to cloud latency. Additionally, user input for the textual prompt remains entirely unrestricted.
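A quick back-of-the-envelope check puts the reported figure in perspective: 15 seconds for 20 denoising steps implies the per-step budget below. The even split across steps is a simplification, since the article does not break out the time spent in the text encoder or image decoder.

```python
# Figures from the article: a 512 x 512 image in under 15 seconds
# at 20 inference (denoising) steps.
total_budget_s = 15.0
steps = 20

# Upper bound on the time available per denoising step, assuming
# (as a simplification) that the whole run is 20 equal steps.
per_step_s = total_budget_s / steps
```

That is at most 750 ms per pass through a roughly billion-parameter network on a phone SoC, which is what the quantization and hardware tailoring described above make possible.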
