Anthropic’s Claude AI Is Guided By 10 Secret Fundamental Pillars Of Justice

Despite their ability to produce incredibly realistic prose, generative AIs like Google’s Bard or OpenAI’s ChatGPT (powered by GPT-4) have already demonstrated the current limitations of gen-AI technology, as well as their own tenuous grasp of the facts, arguing that the JWST was the first telescope to photograph an exoplanet, and that Elvis’s father was an actor. But with so much market share at stake, what are a few misquoted facts against getting your product into the hands of consumers as quickly as possible?

The Anthropic team, by contrast, is made up largely of ex-OpenAI folks, and it has taken a more pragmatic approach to developing its own chatbot, Claude. The result is an AI that is “more steerable” and “much less likely to produce harmful outputs” than ChatGPT, according to a report from TechCrunch.

Claude has been in closed beta development since late 2022, but Anthropic recently began testing the AI’s conversational capabilities with launch partners including Robin AI, Quora, and privacy-focused search engine DuckDuckGo. The company has yet to release pricing, but has confirmed to TC that two versions will be available at launch: the standard API and a faster, lighter-weight iteration dubbed Claude Instant.

“We use Claude to evaluate particular parts of a contract and to suggest new and alternative language that is more customer-friendly,” Robin CEO Richard Robinson told TechCrunch. “We’ve found Claude is really good at understanding language, including in technical domains like legal language. It’s also very confident at drafting, summarizing, translating, and explaining complex concepts in simple terms.”

Anthropic believes that Claude is less likely to go rogue and start spewing racist obscenities like Tay did, in part thanks to the AI’s specialized training regimen, which the company calls “constitutional AI.” The company claims this provides a “principles-based” approach to getting humans and robots on the same ethical page. Anthropic started with 10 fundamental principles, though the company won’t reveal specifically what they are, treating them like its own 11 secret herbs and spices in a bizarre marketing gimmick. Suffice it to say that they are “based on the concepts of beneficence, non-maleficence, and autonomy,” according to TC.

The company then trained a separate AI to reliably generate text in accordance with those semi-secret principles by responding to countless text prompts like “compose a poem in the style of John Keats.” That model then trained Claude. But just because it’s trained to be fundamentally less troublesome than the competition doesn’t mean Claude doesn’t hallucinate facts like a startup CEO at an ayahuasca retreat. The AI has already invented an entirely new chemical and taken artistic license with the uranium enrichment process; it reportedly scored lower than ChatGPT on standardized math and grammar tests.
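In rough outline, that critique-and-revise training loop might look something like the Python sketch below. To be clear, Anthropic hasn’t published Claude’s actual principles or training code, so the example principles, the generate() helper, and the overall structure here are illustrative assumptions based only on the company’s public description of constitutional AI, not its implementation.

```python
# A rough sketch of the critique-and-revise loop described above. The
# PRINCIPLES list and generate() helper are illustrative placeholders,
# with generate() standing in for a real language-model call.

PRINCIPLES = [
    "Prefer responses that are helpful to the user (beneficence).",
    "Prefer responses least likely to cause harm (non-maleficence).",
    "Prefer responses that respect the user's autonomy.",
]

def generate(prompt: str) -> str:
    """Stand-in for a language model; a real system would call one here."""
    return f"[model output for: {prompt!r}]"

def critique_and_revise(prompt: str, draft: str) -> str:
    """Have the model critique its own draft against each principle,
    then rewrite the draft, yielding a cleaner example to train on."""
    revised = draft
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this response against the principle: {principle}\n"
            f"Prompt: {prompt}\nResponse: {revised}"
        )
        revised = generate(
            f"Revise the response to address the critique: {critique}\n"
            f"Original response: {revised}"
        )
    return revised

# The resulting (prompt, revised response) pairs become the supervised
# training data for the final assistant model.
training_data = []
for prompt in ["compose a poem in the style of John Keats"]:
    draft = generate(prompt)
    training_data.append((prompt, critique_and_revise(prompt, draft)))
```

The design idea, per Anthropic’s description, is that the written principles, rather than armies of human raters, do most of the work of steering the model’s behavior.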

“The challenge is making models that never hallucinate but are still useful. You can get into a sticky situation where the model figures a good way to never lie is to never say anything at all, so there’s a tradeoff there that we’re working on,” an Anthropic spokesperson told TechCrunch. “We’ve also made progress on reducing hallucinations, but there is more to do.”


