Industry News

OpenAI says it can make GPT-3 less toxic without enforcing ‘universal values’


GPT-3 is renowned for generating two things: strikingly human-like text and toxicity. On Thursday, the model’s creators said they’ve found a way to keep the latter out of the former.

OpenAI’s new technique alters an AI language model’s “behavior” by fine-tuning it on a small, curated dataset of specific values. The method aims to narrow a language model’s universal set of behaviors down to a more constrained range of values that operators embed in their individual applications.

In a blog post, OpenAI gave an example of how the approach can generate “more desirable behavior”:

Human characteristics and behavior: Oppose…

This story continues at The Next Web
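
OpenAI describes the approach only at a high level: a supervised fine-tune of a pretrained language model on a small, hand-curated set of values-targeted texts. As a rough illustration of what that kind of fine-tuning looks like in practice, here is a minimal sketch using the publicly available GPT-2 model and Hugging Face's transformers library as stand-ins; the example texts, hyperparameters, and training loop are placeholders, not OpenAI's actual pipeline or dataset.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; OpenAI fine-tuned GPT-3, which is not publicly available.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical "values-targeted" examples: a small curated dataset of
# completions that reflect the behavior the operator wants to encourage.
values_dataset = [
    "Human characteristics and behavior: Treat people of all backgrounds "
    "with respect and avoid harmful stereotypes.",
    "Health: Encourage readers to consult qualified professionals rather "
    "than offering unverified medical advice.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

# Standard causal language-modeling fine-tune: the model is trained to
# reproduce the curated texts, nudging its behavior toward those values.
for epoch in range(3):
    for text in values_dataset:
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The key point the article makes is the size of this dataset: rather than retraining the model or imposing one universal standard, a small values-targeted corpus chosen by the application operator is enough to measurably shift the model's behavior.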