Anthropic debuts Claude 3 model series with ‘near-human’ capabilities in some areas
Anthropic PBC today introduced a new family of large language models, the Claude 3 series, that it says can outperform GPT-4 and Google LLC’s Gemini Ultra.
The series includes three models that vary in their sophistication and price. The most advanced LLM, Claude 3 Opus, is touted as having “near-human levels of comprehension and fluency on complex tasks.” It’s joined by two other models, Claude 3 Sonnet and Claude 3 Haiku, that trade off some response quality for a reduction in inference cost.
All three models feature significant improvements over Anthropic’s previous flagship LLM. Compared with Claude 2.1, they are less likely to generate biased answers or reject harmless prompts that don’t breach the company’s terms of service. Another major difference is that the Claude 3 series isn’t limited to processing text: Users may also input photos, technical diagrams and other visual assets.
A prompt entered into a Claude 3 model may contain up to 200,000 tokens, units of data that each hold a few letters or numbers. According to Anthropic, all three models can theoretically ingest much larger prompts with 1 million tokens or more. The company said it “may make this available to select customers who need enhanced processing power.”
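To put that context window in perspective, a common rule of thumb holds that one token covers roughly four characters of English text. That figure is only a heuristic — it is not Anthropic's actual tokenizer — but it gives a ballpark sense of scale, as in this sketch:

```python
# Rough sketch: estimate how much text fits in a context window.
# The 4-characters-per-token figure is a common heuristic, not
# Anthropic's tokenizer, so treat the results as approximations.

CHARS_PER_TOKEN = 4  # heuristic average for English text

def approx_chars(token_limit: int) -> int:
    """Approximate character capacity of a context window."""
    return token_limit * CHARS_PER_TOKEN

def fits_in_window(text: str, token_limit: int = 200_000) -> bool:
    """Ballpark check for whether a prompt fits in the window."""
    return len(text) // CHARS_PER_TOKEN <= token_limit

print(approx_chars(200_000))  # roughly 800,000 characters
```

By this estimate, 200,000 tokens works out to several hundred pages of text in a single prompt.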
The Claude 3 series is headlined by Opus, an LLM that can answer complex questions twice as accurately as Claude 2.1. Anthropic claims this accuracy boost enables it to outperform GPT-4 and Gemini Ultra across several popular artificial intelligence benchmarks.
One of the benchmarks the company tested, GSM8K, comprises a large number of grade school math problems. Anthropic says that Claude 3 Opus answered 95% of the questions correctly, while Gemini Ultra and GPT-4 scored 94.4% and 92%, respectively. Opus also demonstrated a slight edge over its rivals on two other benchmark tests, MMLU and GPQA, which evaluate AI models' knowledge of topics such as physics.
The other models in the new Claude 3 series have more limited reasoning capabilities, but will be available to customers at a lower price. They also generate responses more quickly.
Anthropic says that the fastest and most affordable Claude 3 model, Haiku, can read a research paper containing 10,000 tokens' worth of information in less than three seconds. Customers also have access to a third model, Sonnet, which is positioned as a midrange option between Haiku and Opus. It's not as quick as the former model, but it offers higher response quality and can still generate responses about twice as fast as Anthropic's previous flagship LLM.
Sonnet and Opus are available today through an application programming interface, as well as Anthropic’s free Claude.ai chatbot. Haiku, in turn, is set to roll out “soon.” Further down the road, Anthropic plans to enhance the Claude 3 series with additional features such as the ability to take actions in third-party applications.
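For developers, a request to the new models through Anthropic's Messages API takes the shape sketched below. The payload format follows the API's published message schema; the model identifier shown is illustrative and should be checked against Anthropic's current model list.

```python
# Sketch of a request to a Claude 3 model via Anthropic's Messages API.
# The model ID below is illustrative; confirm it against Anthropic's docs.

def build_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble a Messages API request body."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("claude-3-opus-20240229", "Summarize the attached report.")

# With the anthropic SDK installed and ANTHROPIC_API_KEY set, the same
# payload maps directly onto the official client call:
#   from anthropic import Anthropic
#   message = Anthropic().messages.create(**payload)
#   print(message.content[0].text)
```

The live call is left commented out since it requires an API key and network access.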
The company is also bringing the LLM family to Amazon Web Services Inc.’s public cloud. Sonnet is available today via Amazon Bedrock, a managed service that provides access to foundation models from AWS and other companies. Opus and Haiku are set to follow suit soon.
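On AWS, the same models are reached through Bedrock's `invoke_model` operation rather than Anthropic's own endpoint. A minimal sketch, assuming boto3 and AWS credentials are configured — the model ID shown is illustrative and should be confirmed in the Bedrock console:

```python
# Sketch of invoking Claude 3 Sonnet via Amazon Bedrock. The request
# body follows Bedrock's Anthropic message schema; the model ID in the
# commented call is illustrative.
import json

def build_bedrock_body(prompt: str, max_tokens: int = 1024) -> str:
    """Serialize a Bedrock request body for an Anthropic model."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_bedrock_body("Summarize this quarterly filing.")

# With boto3 configured, the call looks like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0", body=body)
#   print(json.loads(resp["body"].read())["content"][0]["text"])
```

The design mirrors the direct Anthropic API: only the transport and model identifier change, while the message body stays in the same format.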
Anthropic’s new AI benchmark records may soon be challenged by rivals. In November, OpenAI disclosed that it had begun developing a successor to GPT-4 with more advanced capabilities. More recently, Google detailed a new iteration of Gemini that promises to provide “dramatic improvements” over the current version and has demonstrated the ability to process prompts with up to 10 million tokens.
Image: Anthropic