Leading developer of machine learning and GenAI hardware and software

Ensured strict adherence to safety and governance policies for a leading AI assistant product team’s foundational model through prompt authoring and multi-turn red teaming.

About the Client

A leading developer of machine learning (ML) and GenAI hardware and applications identified an opportunity to improve the safety of a leading AI assistant through multi-turn red teaming.


To achieve the organization’s goals, the leading AI assistant product team needed its foundational model to strictly adhere to safety and governance policies. 



  • Built a dedicated team of linguistic specialists from a range of domains to author prompts designed to elicit deviations from the model’s safety policies and identify areas for improvement.
  • Developed a customized task interface to track safety policy violations complete with dashboards that reported on regressions and improvements with each model iteration.
  • Curated a set of taxonomies aligned with the client's safety and governance policies. 
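The tracking workflow described above can be sketched in code: violations are logged against a taxonomy of policy categories for each model iteration, and iterations are compared to surface regressions. This is an illustrative Python sketch only; the category names, class names, and comparison logic are assumptions, not the client’s actual taxonomy or tooling.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical taxonomy categories; a real engagement would align
# these with the client's safety and governance policies.
TAXONOMY = {"self_harm", "violence", "privacy", "misinformation"}

@dataclass
class RedTeamLog:
    """Records safety-policy violations per category for one model iteration."""
    iteration: str
    violations: Counter = field(default_factory=Counter)

    def record(self, category: str) -> None:
        if category not in TAXONOMY:
            raise ValueError(f"unknown category: {category}")
        self.violations[category] += 1

def regressions(prev: RedTeamLog, curr: RedTeamLog) -> dict:
    """Categories whose violation counts increased between iterations."""
    return {c: curr.violations[c] - prev.violations[c]
            for c in TAXONOMY
            if curr.violations[c] > prev.violations[c]}

# Usage: compare two hypothetical model iterations.
v1 = RedTeamLog("model-v1")
v2 = RedTeamLog("model-v2")
for cat in ["privacy", "privacy", "violence"]:
    v1.record(cat)
for cat in ["privacy", "misinformation", "misinformation"]:
    v2.record(cat)

print(regressions(v1, v2))  # only categories that got worse
```

A dashboard built on records like these could chart per-category counts across iterations, flagging regressions in red and improvements in green.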


This project resulted in measurable improvements to the safety and reliability of the client’s AI assistant capabilities, as well as a deeper understanding of the harm potential of various models and applications.