![G-Assist Plug-in Builder]()
NVIDIA is redefining the PC experience with the rollout of Project G-Assist, an AI assistant designed to tune, control, and optimize GeForce RTX systems using natural language voice and text commands. Available as an experimental feature in the NVIDIA app, G-Assist leverages a specially tuned Small Language Model (SLM) that runs locally on RTX GPUs, offering real-time, private, and highly responsive PC management without the need for cloud connectivity.
Simplifying Complex PC Tasks
Modern PCs boast immense power, but their complexity, spanning countless hardware and software configurations, can be daunting. G-Assist addresses this by enabling users to perform a wide range of actions with simple commands: monitoring performance metrics, adjusting graphics settings, optimizing power efficiency, overclocking GPUs, and even controlling lighting and fan speeds on supported peripherals from brands like Logitech G, Corsair, MSI, and Nanoleaf.
The assistant can answer questions about system hardware and NVIDIA software, provide diagnostics, and recommend fixes for bottlenecks and performance issues. It can also chart and export key statistics such as FPS, latency, GPU utilization, and temperatures, empowering users to make data-driven decisions about their PC setups.
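Once exported, such a performance log can be analyzed with a few lines of Python. This is a minimal sketch assuming a CSV export; the column names and values below are illustrative, not the exact format G-Assist produces.

```python
import csv
import io

# Illustrative sample of an exported performance log. Column names are
# assumptions for this sketch, not G-Assist's actual export schema.
sample_log = """timestamp,fps,latency_ms,gpu_util_pct,gpu_temp_c
0,142,11.2,97,64
1,138,11.9,98,65
2,145,10.8,96,65
"""

def summarize(csv_text):
    """Return the average of each numeric column in a performance log."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    fields = [f for f in rows[0] if f != "timestamp"]
    return {f: sum(float(r[f]) for r in rows) / len(rows) for f in fields}

stats = summarize(sample_log)
print(f"avg fps: {stats['fps']:.1f}")  # avg fps: 141.7
```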
Custom Plug-Ins: Expanding What AI Can Do on Your PC
A standout feature of Project G-Assist is its support for custom plug-ins, opening the door for developers and enthusiasts to extend its capabilities far beyond the default functions. With the new ChatGPT-based G-Assist Plug-in Builder, users can create and integrate new commands, connect to external tools, and build AI workflows tailored to their specific needs—all using simple JSON definitions and Python logic.
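The JSON side of a plug-in is a manifest describing the commands it exposes. The sketch below shows roughly what such a definition might look like; the field names here are assumptions for illustration, and the authoritative schema lives in NVIDIA's sample plug-ins on GitHub.

```python
import json

# Hypothetical plug-in manifest: a JSON file listing the functions the
# plug-in offers so the assistant knows when to invoke them. Field names
# are illustrative, not NVIDIA's exact schema.
manifest = {
    "manifestVersion": 1,
    "executable": "plugin.exe",
    "functions": [
        {
            "name": "get_weather",
            "description": "Fetch the current weather for a city.",
            "parameters": {
                "city": {"type": "string", "description": "City name"}
            },
        }
    ],
}

manifest_json = json.dumps(manifest, indent=2)
print(manifest_json)
```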
Plug-ins can control music (e.g., Spotify), connect to large language models (such as Google Gemini for advanced conversations and web searches), check stock prices, fetch weather data, or even interact with streaming platforms like Twitch. Developers simply generate the necessary configuration files and drop them into the G-Assist directory, where the assistant automatically loads and interprets them.
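On the Python side, a plug-in boils down to a dispatcher that maps an incoming command to a handler and returns a structured reply. The message shape below is an assumption for illustration (NVIDIA's sample plug-ins define the real protocol), and the stock handler is a placeholder rather than a live API call.

```python
import json

def get_stock_price(params):
    # Placeholder handler; a real plug-in would query a finance API here.
    return {"symbol": params["symbol"], "price": "unavailable (demo)"}

# Registry mapping function names (from the manifest) to Python handlers.
HANDLERS = {"get_stock_price": get_stock_price}

def handle_message(raw):
    """Dispatch one JSON command message to its handler and return JSON.

    The {"function": ..., "params": ...} shape is an illustrative
    assumption, not G-Assist's exact wire format.
    """
    msg = json.loads(raw)
    handler = HANDLERS.get(msg["function"])
    if handler is None:
        return json.dumps({"success": False, "message": "unknown function"})
    return json.dumps({"success": True, "data": handler(msg.get("params", {}))})

reply = handle_message('{"function": "get_stock_price", "params": {"symbol": "NVDA"}}')
print(reply)
```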
Harnessing Free APIs and Community Innovation
NVIDIA’s GitHub repository is central to this ecosystem, providing sample plug-ins, step-by-step guides, and documentation for building, sharing, and loading new functionalities. Developers can submit their plug-ins for review and potential inclusion, fostering a community-driven approach to expanding what G-Assist can do.
Hundreds of free, developer-friendly APIs are available to further extend G-Assist’s reach, covering domains from entertainment and productivity to smart home and hardware integration. Resources like publicapis.dev and free-apis.github.io offer searchable indices to inspire new plug-in ideas.
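As a concrete example of wiring a free API into a handler, the sketch below targets Open-Meteo, a real weather API that requires no key. To keep the example runnable offline, the response parsing uses a canned sample rather than a live request; in a plug-in you would fetch the built URL with `urllib.request` or `requests`.

```python
import json
from urllib.parse import urlencode

def build_weather_url(lat, lon):
    """Build an Open-Meteo current-weather request URL (no API key needed)."""
    base = "https://api.open-meteo.com/v1/forecast"
    query = urlencode({"latitude": lat, "longitude": lon, "current_weather": "true"})
    return f"{base}?{query}"

def parse_current_temp(response_text):
    """Extract the current temperature from an Open-Meteo JSON response."""
    return json.loads(response_text)["current_weather"]["temperature"]

# Canned sample response so the sketch runs without network access.
sample_response = '{"current_weather": {"temperature": 21.4, "windspeed": 9.3}}'
print(build_weather_url(37.77, -122.42))
print(parse_current_temp(sample_response))  # 21.4
```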
Built for Performance, Privacy, and Flexibility
Unlike cloud-based AI assistants, G-Assist runs entirely on the user’s GeForce RTX hardware, ensuring fast, private inference and offline operation. The underlying SLM, now based on an 8-billion parameter Llama Instruct model, is compact enough to run locally while delivering impressive language understanding and command execution.
G-Assist is compatible with a wide range of RTX 30, 40, and 50 series desktop GPUs, requiring a minimum of 12GB VRAM for full functionality. Users can install G-Assist from the NVIDIA app’s Discover section and activate it via the overlay or by pressing Alt+G.
Shaping the Future of AI on the PC
![G-assist]()
Project G-Assist is more than just a system optimizer—it's a platform for innovation, inviting the community to help shape the next generation of AI-powered PC experiences. As new commands, plug-ins, and capabilities are added, users and developers alike can look forward to a continually evolving assistant that adapts to their workflows, games, and creative pursuits.