
Anthropic explains how Claude's AI constitution protects it against adversarial inputs

Who needs a human in the training loop?


It is not hard — at all — to trick today’s chatbots into discussing taboo topics, regurgitating bigoted content and spreading misinformation. That’s why AI pioneer Anthropic has imbued its generative AI, Claude, with a mix of 10 secret principles of fairness, which it unveiled in March. In a blog post Tuesday, the company further explained how its Constitutional AI system is designed and how it is intended to operate.

Normally, when a generative AI model is being trained, there’s a human in the loop to provide quality control and feedback on the outputs — like when ChatGPT or Bard asks you to rate your conversations with their systems. “For us, this involved having human contractors compare two responses from a model and select the one they felt was better according to some principle (for example, choosing the one that was more helpful, or more harmless),” the Anthropic team wrote.
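That human comparison step can be sketched roughly as follows. This is a minimal illustrative sketch, not Anthropic's actual pipeline; the names `PreferencePair` and `rate_pair` are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    """One labeled comparison: which of two model responses a rater preferred."""
    prompt: str
    chosen: str      # the response the rater picked
    rejected: str    # the response the rater passed over

def rate_pair(prompt, response_a, response_b, prefer_a):
    """Record a human rater's choice between two candidate responses."""
    if prefer_a:
        return PreferencePair(prompt, chosen=response_a, rejected=response_b)
    return PreferencePair(prompt, chosen=response_b, rejected=response_a)

# A rater judges two candidate answers and keeps the more helpful one.
pair = rate_pair(
    "How do I reset my router?",
    "Unplug it for 30 seconds, then plug it back in.",
    "Routers are networking devices.",
    prefer_a=True,
)
```

Pairs like this are what a preference model is then trained on — which is exactly the step Constitutional AI hands off to another model.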

The problem with this method is that a human also has to be in the loop for the really horrific and disturbing outputs. Nobody needs to see that, and even fewer need to be paid $1.50 an hour by Meta to see it. The human advisor method also scales poorly: there simply isn't enough time or enough people to review everything. That's why Anthropic is doing it with another AI.


Just as Pinocchio had Jiminy Cricket, Luke had Yoda and Jim had Shart, Claude has its Constitution. “At a high level, the constitution guides the model to take on the normative behavior described [therein],” the Anthropic team explained, whether that’s “helping to avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is ‘helpful, honest, and harmless.’”
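The constitution-guided behavior described above can be sketched as a critique-and-revise loop: check a draft response against each principle, and rewrite it if any is violated. This is a hypothetical toy version assuming simple keyword checks — the real system prompts a language model to do the critiquing and revising, and the helper names here are invented.

```python
# Principle texts paraphrased from the behaviors the article describes.
PRINCIPLES = [
    "Avoid toxic or discriminatory content.",
    "Do not help with illegal or unethical activities.",
    "Be helpful, honest, and harmless.",
]

# Toy stand-in: flag a response if it contains a keyword tied to a principle.
BANNED = {
    "Do not help with illegal or unethical activities.": ["hotwire", "pick a lock"],
}

def critique(response, principle):
    """Stand-in for asking a model whether a response violates a principle."""
    return any(word in response.lower() for word in BANNED.get(principle, []))

def revise(response, principle):
    """Stand-in for asking a model to rewrite the offending response."""
    return "I can't help with that, but I'm happy to help with something else."

def constitutional_pass(response):
    # Check the draft against every principle; revise on any violation.
    for principle in PRINCIPLES:
        if critique(response, principle):
            response = revise(response, principle)
    return response
```

A harmless answer passes through unchanged, while a flagged one gets rewritten — no human rater involved.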

According to Anthropic, this training method can produce Pareto improvements in the AI’s subsequent performance compared to one trained only on human feedback. Essentially, the human in the loop has been replaced by an AI and now everything is reportedly better than ever. “In our tests, our CAI-model responded more appropriately to adversarial inputs while still producing helpful answers and not being evasive,” Anthropic wrote. “The model received no human data on harmlessness, meaning all results on harmlessness came purely from AI supervision.”

The company revealed on Tuesday that its previously undisclosed principles are synthesized from “a range of sources including the UN Declaration of Human Rights, trust and safety best practices, principles proposed by other AI research labs, an effort to capture non-western perspectives, and principles that we discovered work well via our research.”

The company, pointedly getting ahead of the inevitable conservative backlash, has emphasized that “our current constitution is neither finalized nor is it likely the best it can be.”

“There have been critiques from many people that AI models are being trained to reflect a specific viewpoint or political ideology, usually one the critic disagrees with,” the team wrote. “From our perspective, our long-term goal isn’t trying to get our systems to represent a specific ideology, but rather to be able to follow a given set of principles.”