AI · Industry

Anthropic Launches Claude Managed Agents — Enterprise AI Agents Go from Prototype to Production in Days Instead of Months

Mubboo Editorial Team

April 10, 2026 · 4 min read

Anthropic launched Claude Managed Agents on April 9, a public beta that gives developers a hosted platform for building and running autonomous AI agents without setting up their own infrastructure. Notion, Rakuten, Asana, and Sentry are already using it in production. The cost: standard Claude API token pricing plus $0.08 per session hour of active runtime, with web searches priced at $10 per 1,000 queries.

The product is a suite of composable APIs that handle the infrastructure problems most teams spend months solving before writing any agent logic: sandboxed code execution, checkpointing and state persistence, credential management, scoped permissions, and end-to-end tracing. Anthropic describes it as moving teams "from prototype to launch in days rather than months."

What the platform handles — and what developers still own

Before Managed Agents, building a production AI agent meant assembling a stack of custom infrastructure. Sandboxing so the agent can't break things. State management so it remembers what it was doing after a network interruption. Credential vaults so it can access internal tools without exposing secrets. Permission systems so it can read a spreadsheet but not delete a database. Logging so you can trace what it did and why.
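The state-management piece above is the kind of plumbing teams previously built themselves. A minimal sketch of step-level checkpointing — illustrative only, not Anthropic's API; the file path and step names are hypothetical:

```python
# Illustrative checkpointing sketch (not Anthropic's API): persist progress
# after each step so an interrupted agent run can resume where it left off.
import json
import os
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "agent_checkpoint.json")

def load_checkpoint():
    """Return saved progress, or a fresh state if no checkpoint exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"completed": []}

def save_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def run_pipeline(steps):
    """Run (name, fn) steps in order, skipping any already completed."""
    state = load_checkpoint()
    for name, fn in steps:
        if name in state["completed"]:
            continue  # finished before the interruption; don't redo it
        fn()
        state["completed"].append(name)
        save_checkpoint(state)  # persist after every step
    return state["completed"]
```

A rerun of `run_pipeline` after a crash skips completed steps instead of repeating them — the behavior Managed Agents now provides out of the box.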

Anthropic now runs all of that. Developers define the agent's task, connect its tools, and set guardrails. The agent can run autonomously for extended periods, persist through disconnections, and pick up where it left off. On internal benchmarks, Anthropic reports a 10-point improvement in task success rates on structured file generation compared to standard prompting — a gain that comes from the platform's ability to let agents iterate against success criteria rather than attempting tasks in a single pass.
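The "iterate against success criteria" pattern can be sketched generically. This is not Anthropic's API; `toy_generate` and `toy_criteria` are hypothetical stand-ins for a model call and a programmatic check (say, validating a generated file against a schema):

```python
# Illustrative sketch: retry against an explicit success criterion instead
# of accepting a single-pass attempt, feeding failure details back in.
def run_with_retries(task, generate, meets_criteria, max_attempts=5):
    feedback = None
    for attempt in range(1, max_attempts + 1):
        result = generate(task, feedback)
        ok, feedback = meets_criteria(result)
        if ok:
            return result, attempt
    raise RuntimeError(f"no passing result in {max_attempts} attempts")

# Toy demo: each attempt produces one more required field; the criterion
# demands all three, so success arrives on the third iteration.
REQUIRED = {"name", "price", "sku"}

def toy_generate(task, feedback):
    toy_generate.calls += 1
    return set(sorted(REQUIRED)[: toy_generate.calls])
toy_generate.calls = 0

def toy_criteria(result):
    missing = REQUIRED - result
    return (not missing, f"missing fields: {sorted(missing)}")

result, attempts = run_with_retries("make a product row", toy_generate, toy_criteria)
```

The single-pass equivalent would fail on the first incomplete result; the loop converts the success criterion into a feedback signal, which is the mechanism behind the reported 10-point gain on structured file generation.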

Research previews — not yet generally available — include multi-agent coordination, where an agent can spin up subordinate agents for complex tasks, and persistent memory across interactions.

Four companies, four different agent patterns

Rakuten stood up enterprise AI agents across product, sales, marketing, finance, and HR — each deployment taking roughly a week. The agents plug into Slack and Microsoft Teams, accepting natural language requests and returning deliverables like spreadsheets and slide decks. A week per department, across five departments, is the kind of rollout speed that previously required dedicated platform engineering teams working for quarters.

Sentry paired its existing Seer debugging agent with a Claude-powered agent that writes code fixes and opens pull requests. Indragie Karunaratne, Senior Director of Engineering for AI/ML at Sentry, said the team "chose Claude Managed Agents because it gives us a secure, fully managed agent runtime, allowing us to focus on building a seamless developer experience."

Notion is running Custom Agents in private alpha, where engineers ship code while knowledge workers generate presentations and websites. The system handles dozens of parallel tasks within a single workspace. Asana built "AI Teammates" that work alongside humans inside project management workflows, picking up tasks and drafting deliverables. The team reports adding advanced features "dramatically faster" than with previous approaches.

Each adoption pattern is distinct: enterprise operations at Rakuten, developer tooling at Sentry, knowledge work at Notion, project management at Asana. The common thread is that none of these companies built their own agent infrastructure.

Anthropic's first infrastructure-as-a-service play

Managed Agents runs exclusively on Anthropic's infrastructure. It is not available through AWS Bedrock or Google Vertex AI, the two cloud platforms that currently distribute Claude's standard API. That exclusivity is a strategic choice: if agents become the primary interface between enterprises and AI, the company hosting the agent runtime captures the relationship — not the cloud provider reselling API access.

The pricing model supports that reading. At $0.08 per session hour plus standard token rates, Anthropic is pricing for volume adoption rather than premium margins. A Rakuten-style deployment — one agent per department across five departments, active through business hours — would run on the order of tens of dollars a month in session fees, trivial next to the engineering salaries those agents are augmenting.
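A back-of-the-envelope version of that estimate, using assumed figures (one agent per department, eight active hours per weekday, 22 business days a month; token and web-search costs excluded):

```python
# Session-hour cost estimate for a hypothetical five-department deployment.
# Only the $0.08/hour rate comes from Anthropic's published pricing; the
# usage assumptions below are illustrative, and token costs are excluded.
SESSION_RATE = 0.08   # dollars per active session hour

departments = 5
hours_per_day = 8     # assumed active hours per weekday
business_days = 22    # assumed business days per month

session_hours = departments * hours_per_day * business_days
monthly_session_cost = session_hours * SESSION_RATE

print(f"{session_hours} session hours ≈ ${monthly_session_cost:.2f}/month")
```

Under those assumptions the runtime fee lands around $70 a month — a rounding error against even one engineer's salary, which is the point of the pricing.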

This positions Anthropic more directly against OpenAI and Microsoft's enterprise AI offerings, in a market where the competitive advantage shifts from model quality alone to the full stack: model, runtime, tooling, and security.

Mubboo's Take

Claude Managed Agents is the clearest indication yet that AI is moving from conversation to execution. When Rakuten can stand up an enterprise AI agent in a week that handles sales, marketing, and finance tasks in Slack, the speed of AI deployment has crossed a threshold. For platforms like Mubboo that build on Claude for content production and operational management, the trajectory is straightforward: what starts as a managed coding agent today becomes a managed product comparison agent or customer service agent tomorrow. The infrastructure for AI agents that do things — not just say things — is now available as a service.

Mubboo Editorial Team

The Mubboo Editorial Team covers the latest in AI, consumer technology, e-commerce, and travel.
