
14 Feb, 2024

Nvidia is introducing "Chat with RTX," a tool designed to empower users of GeForce RTX 30 and 40 Series GPUs to run AI-driven chatbots offline on Windows PCs.

The tool, built on generative AI models similar to OpenAI's ChatGPT, lets users customize an AI model by connecting it to local documents, files, and notes for querying. By simply typing a question, the user can ask the chatbot to extract relevant information from the connected files and return contextually accurate answers.
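The core idea of "ask your own files" can be illustrated with a minimal sketch. Note this is not Nvidia's implementation: Chat with RTX pairs retrieval with a local LLM (such as Mistral or Llama 2), while the toy function below simply ranks plain-text passages by keyword overlap with the query and returns the best-matching snippet as the "answer".

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Lowercase alphanumeric tokens only.
    return re.findall(r"[a-z0-9]+", text.lower())

def best_snippet(query: str, documents: dict[str, str]) -> tuple[str, str]:
    """Return (doc_name, paragraph) with the highest keyword overlap.

    `documents` maps a file name to its plain-text contents; paragraphs
    are separated by blank lines. A real system would embed the text and
    feed the retrieved passage to a language model instead.
    """
    query_tokens = Counter(tokenize(query))
    best_name, best_para, best_score = "", "", 0
    for name, text in documents.items():
        for para in text.split("\n\n"):
            # Overlap = sum of per-token minimum counts.
            score = sum((Counter(tokenize(para)) & query_tokens).values())
            if score > best_score:
                best_name, best_para, best_score = name, para.strip(), score
    return best_name, best_para
```

For example, asking about "install size 50GB" against a notes file containing an installation paragraph would surface that paragraph rather than unrelated text. Chat with RTX layers a full language model on top of this retrieval step, which is why its answers read as prose rather than raw excerpts.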

While "Chat with RTX" defaults to Mistral's open-source model, it also supports other text-based models such as Meta's Llama 2. However, users should be prepared for significant storage requirements, ranging from 50GB to 100GB, depending on the selected model(s).

Currently, the tool supports various file formats, including text, PDF, .doc, .docx, and .xml, and can even load transcriptions of YouTube videos via playlist URLs. Limitations remain, however: Chat with RTX lacks contextual memory, and its responses can vary depending on factors such as question phrasing and dataset size.

"Chat with RTX" represents a step towards democratizing local AI model deployment, aligning with a growing trend of affordable offline AI device usage highlighted by the World Economic Forum.
