In a significant development for the AI industry, Anthropic has unveiled Claude Pro, a paid version of its highly capable AI chatbot Claude 2, and it's poised to provide fierce competition for ChatGPT. The new offering has been setting standards with its remarkable context window: while ChatGPT Plus manages a context of 32K tokens, Claude Pro handles a massive 100K tokens, meaning it can accept longer and more complex prompts than its rival.

Anthropic had been weighing a paid subscription model for some time, keenly surveying market dynamics and user preferences. The company polled users of its free service to gauge their willingness to pay $25 per month for a subscription—a strategy aimed at aligning its offering with user expectations.

According to the official announcement, Claude Pro promises "at least 5 times more usage compared to the free version of Claude," a feature aimed squarely at power users. The usage allowance resets every 8 hours, letting subscribers start new conversations without hindrance and ensuring a seamless experience.


ChatGPT Plus also caps usage and has been expanding the limit over time. Its latest update put the bar at around 50 messages every 3 hours for GPT-4, its more capable model.

Anthropic, founded by former OpenAI researchers, recently raised over $400 million in a funding round led by Google. In contrast, OpenAI is backed mainly by Microsoft and has reached a valuation of nearly $30 billion.

Claude Pro is designed to natively handle extensive conversations, even those involving large attachments. Anthropic's announcement advises users to "start new conversations for new topics," a guideline aimed at optimizing Claude's performance and avoiding unnecessary re-uploads of files, which in turn conserves the message allowance and speeds up responses.


As previously reported by Decrypt, Anthropic's innovative "Constitutional" training method for Claude differs from OpenAI's Reinforcement Learning from Human Feedback (RLHF). OpenAI's method relies on human annotators reviewing the model's inputs and outputs, which makes it more prone to human bias. Anthropic's method instead gives the model a "constitution," a set of general rules that guide the AI to favor good interactions over bad ones, allowing it to self-improve without human intervention by detecting bad behavior and adapting its conduct.
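The critique-and-revise loop behind that approach can be caricatured in a few lines. This is a deliberately toy sketch, not Anthropic's actual pipeline: in the real method the model itself critiques and rewrites its own drafts against the constitution, whereas here the rule checks and the `revise` step are hand-written stand-ins.

```python
# Toy illustration of a constitutional critique-and-revise loop.
# All rules and functions below are illustrative assumptions, not Anthropic's code.

CONSTITUTION = [
    ("avoid insults", lambda text: "idiot" not in text.lower()),
    ("avoid absolute medical claims", lambda text: "guaranteed cure" not in text.lower()),
]

def critique(draft):
    """Return the names of constitutional principles the draft violates."""
    return [name for name, check in CONSTITUTION if not check(draft)]

def revise(draft):
    """Stand-in for a model rewrite: simply redact the offending phrases."""
    for phrase in ("idiot", "guaranteed cure"):
        draft = draft.replace(phrase, "[revised]")
    return draft

def constitutional_pass(draft, max_rounds=3):
    """Iteratively critique and revise until the draft satisfies every rule."""
    for _ in range(max_rounds):
        if not critique(draft):
            break
        draft = revise(draft)
    return draft

print(constitutional_pass("You idiot, this is a guaranteed cure."))
# -> You [revised], this is a [revised].
```

The point of the loop is that no human reviewer appears anywhere in it: the "constitution" alone decides when a draft needs another revision round.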
