That was an annoying read. It doesn't say what this actually is.
It's not a new LLM. Chat with RTX is software for running LLM inference (i.e., using existing models) locally, accelerated by RTX GPUs. Several other projects do the same thing, though they may not be as optimized for NVIDIA's hardware.
Go directly to NVIDIA to avoid the clickbait.
Chat with RTX uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration to bring generative AI capabilities to local, GeForce-powered Windows PCs. Users can quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers.
Source: https://blogs.nvidia.com/blog/chat-with-rtx-available-now/
Download page: https://www.nvidia.com/en-us/ai-on-rtx/chat-with-rtx-generative-ai/
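To make the RAG description above concrete, here's a minimal sketch of the idea: retrieve the most relevant local documents, then stuff them into the prompt so the model answers from your own files. Everything here is illustrative, not NVIDIA's API; a real system like Chat with RTX uses vector embeddings and a GPU-accelerated LLM instead of this toy word-overlap retriever.

```python
# Toy retrieval-augmented generation (RAG) pipeline.
# Retrieval here is a naive word-overlap score; real systems embed
# documents as vectors and search by similarity.

def score(query, doc):
    """Count words the query and document share (toy relevance metric)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=1):
    """Return the k local documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend retrieved context so the LLM can answer from local files."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Stand-ins for files on your PC.
docs = [
    "The quarterly report shows revenue grew 12 percent.",
    "Meeting notes: the launch is scheduled for March.",
]

prompt = build_prompt("When is the launch scheduled?", docs)
print(prompt)  # This prompt would then be sent to Mistral, Llama 2, etc.
```

The point is that the model itself is unchanged; the "chat with your files" part is just retrieval plus prompt construction, with the RTX card accelerating the inference step.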