Nvidia announced a software tool, NeMo Guardrails, to help ensure that AI chatbots are safer and more accurate.

The open-source toolkit lets developers add guardrails to any large language model, including ChatGPT, preventing the bots from generating toxic or inappropriate content or connecting to unsafe apps.

  • The tool is designed for software developers, including non-AI experts, to create guardrails for LLMs using only a few lines of code.
  • The limits it can impose include "topical guardrails" to prevent LLMs from issuing replies about certain subjects, along with security and safety restrictions.
  • For example, a company could use it to prevent its customer service chatbot from answering questions about HR or the weather.
  • According to Nvidia, it can help chatbots stay on topic and remain "within the domains of a company's expertise."
  • NeMo Guardrails will be integrated into Nvidia's existing NeMo framework for developing generative AI models, which is available through its AI Enterprise software platform and AI Foundations service.
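To illustrate the "few lines of code" claim above: NeMo Guardrails expresses rails in a configuration language called Colang. The snippet below is an illustrative sketch of a topical guardrail like the weather example, not code from the article; the specific message texts and flow name are assumptions.

```colang
# Illustrative Colang sketch of a "topical guardrail"
# (names and wording are hypothetical, not from the article)

define user ask about weather
  "What's the weather today?"
  "Will it rain tomorrow?"

define bot deflect off topic
  "Sorry, I can only answer questions about our products and services."

define flow weather guardrail
  user ask about weather
  bot deflect off topic
```

A developer would bundle rail definitions like this with a model configuration and load them through the toolkit's Python API, so the guardrail logic stays separate from the application code.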
