
5 Facts About AI Coding Agents from Comprehensive Benchmarking

AI coding agents are becoming more capable, but evaluating them is harder than it looks. Most benchmarks focus on a single dimension of agent capabilities; for instance, the popular SWE-Bench benchmark only covers fixing issues in open-source Python repositories. Real-world software engineering involves fixing bugs, of course, but it is far more multifaceted: in any single week a software developer may also debug complex issues, build a new greenfield script or app, improve test coverage, fix bugs in a frontend repo, or research unfamiliar APIs – the list goes on.

The OpenHands Index addresses this by building a much broader benchmark evaluating language models across five distinct categories: Issue Resolution (fixing bugs), Greenfield development (building new applications), Frontend development (UI tasks requiring visual understanding), Testing (generating tests to reproduce bugs), and Information Gathering (research and documentation tasks). This diversity matters because no single benchmark can capture the full range of what developers actually need from AI assistants.

We’ve evaluated many models to date, including commercial APIs and open-weights models, across five benchmark categories. All results, including complete agent trajectories, are published openly on the site. Here are five key findings.

1. Open Models Achieve Near-Top Performance at an Order of Magnitude Lower Cost

The most expensive models don’t always deliver proportionally better results. Across all five benchmark categories, the performance spread between models is often narrower than their cost differences.

Top-tier commercial models achieve average scores in the 55–65% range across all categories. Meanwhile, more economical options, including some open-weights models, achieve 45–55% at a fraction of the per-task cost. For a typical development workflow involving hundreds of agent invocations per month, this cost difference compounds quickly.
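
To make the compounding concrete, here is a back-of-the-envelope sketch in Python. The per-task costs and invocation count are illustrative assumptions, not figures from the Index; substitute the actual per-task costs reported for the models you are comparing.

    # Back-of-the-envelope monthly cost comparison (all numbers are illustrative assumptions).
    premium_cost_per_task = 2.00     # assumed $/invocation for a top-tier commercial model
    economical_cost_per_task = 0.20  # assumed $/invocation for an open-weights alternative
    invocations_per_month = 500      # "hundreds of agent invocations per month"

    premium_monthly = premium_cost_per_task * invocations_per_month
    economical_monthly = economical_cost_per_task * invocations_per_month

    print(f"Premium model:    ${premium_monthly:,.2f}/month")
    print(f"Economical model: ${economical_monthly:,.2f}/month")
    print(f"Difference:       ${premium_monthly - economical_monthly:,.2f}/month")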

The takeaway: Teams can start with the most capable models to establish the feasibility of incorporating AI agents into their workflow; if cost then becomes a concern, there are plenty of competitive options available at a fraction of the price.

2. Locally Deployable Models Now Compete with Commercial APIs

Related to the above, the gap between open-weights and commercial models has narrowed significantly. In our latest evaluations, several open-weights models achieved average scores within a few percentage points of leading commercial offerings across all benchmark categories.

In addition to cost, this matters for organizations with specific requirements around data privacy, on-premises deployment, or customization. Open-weights models can be fine-tuned for specific codebases, integrated with internal tooling, and deployed on dedicated hardware—options not available with API-only services.

The takeaway: Open-weights alternatives are now viable for production use cases, not just experimentation.

3. No Single Model Dominates All Categories

Performance varies substantially across task types. A model that leads in bug fixing (SWE-Bench) may rank mid-pack for greenfield development (Commit0) or information gathering (GAIA).

In our evaluations, the top performer in issue resolution scored only 56% on application building tasks. Conversely, the leader in information gathering achieved 80% on that benchmark but ranked fourth on bug fixing.

The takeaway: Model selection should be driven by your team’s actual task distribution. The OpenHands Index can serve as an initial guide about what models to take a look at, and then you can do “vibe checks”, systematic evaluations, or A/B testing with the top contenders.
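
One simple way to operationalize this is to weight each model's per-category scores by your team's task mix. The sketch below does exactly that; the scores and weights are placeholder values rather than actual Index numbers, and the two model names are hypothetical.

    # Weight per-category benchmark scores by how often your team performs each task type.
    # All numbers are placeholders; substitute real scores from the OpenHands Index.
    task_mix = {  # fraction of your team's agent work per category (sums to 1.0)
        "issue_resolution": 0.40,
        "greenfield": 0.15,
        "frontend": 0.20,
        "testing": 0.15,
        "info_gathering": 0.10,
    }

    model_scores = {  # per-category accuracy (0-1) for two hypothetical models
        "model_a": {"issue_resolution": 0.65, "greenfield": 0.56, "frontend": 0.36,
                    "testing": 0.60, "info_gathering": 0.70},
        "model_b": {"issue_resolution": 0.58, "greenfield": 0.60, "frontend": 0.40,
                    "testing": 0.55, "info_gathering": 0.80},
    }

    for model, scores in model_scores.items():
        weighted = sum(task_mix[cat] * scores[cat] for cat in task_mix)
        print(f"{model}: weighted score {weighted:.1%}")

A model that tops a single leaderboard may lose to another once the weights reflect what your team actually does day to day.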

4. Multimodal Tasks Remain Challenging

Frontend development tasks, where agents must interpret screenshots, mockups, and visual requirements, show the widest performance variance across models.

On SWE-Bench Multimodal, scores range from 22% to 42%, with most models clustering in the 27–36% range. Even top-performing models struggle with tasks requiring visual understanding combined with code generation.

The takeaway: Multimodal capabilities are still maturing. Teams working heavily on frontend development should expect more iteration cycles when using AI agents.

5. Transparent Benchmarking Catches Issues That Aggregate Scores Miss

Comprehensive evaluation reveals failure modes invisible in single-number scores. By publishing full agent trajectories, we’ve identified cases where models achieved correct outcomes through unintended shortcuts.

One recent example: analysis of our Commit0 (application building) results revealed that some models were retrieving code from git history rather than implementing it from scratch. After identifying this behavior through trajectory analysis, we updated the benchmark methodology, and several models’ scores dropped by 10–30 percentage points.
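
Trajectory analysis of this kind can be partly automated. The sketch below flags agent actions that touch git history; it assumes a simple trajectory format of one shell action per line, which is an illustration rather than the actual format published in the results repository.

    import re

    # Patterns that suggest an agent is recovering existing code from git history
    # instead of implementing it, e.g. checking out or displaying old file versions.
    SUSPICIOUS = [
        r"\bgit\s+checkout\b",
        r"\bgit\s+log\b",
        r"\bgit\s+show\b",
        r"\bgit\s+stash\b",
    ]

    def flag_git_history_access(trajectory_lines):
        """Return (line number, action) pairs in an agent trajectory that touch git history."""
        hits = []
        for i, line in enumerate(trajectory_lines, start=1):
            if any(re.search(pat, line) for pat in SUSPICIOUS):
                hits.append((i, line.strip()))
        return hits

    # Usage with a hypothetical trajectory file, one agent action per line:
    # with open("trajectory.txt") as f:
    #     for lineno, action in flag_git_history_access(f):
    #         print(f"line {lineno}: {action}")

Flagged trajectories still need human review; a git command is not proof of a shortcut, only a prompt to look closer.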

The takeaway: Transparent, reproducible benchmarks enable continuous improvement. Single-number leaderboards can obscure important details about how models actually perform.

Methodology

The OpenHands Index evaluates models across five benchmark categories:

  • SWE-Bench Verified – Fixing real GitHub issues from Python repositories
  • Commit0 – Building applications from specifications
  • SWE-Bench Multimodal – Frontend tasks requiring visual understanding
  • SWT-Bench – Generating tests to reproduce bugs
  • GAIA – Information gathering and research tasks

Each model runs in a sandboxed environment with access to standard developer tools. We measure accuracy (task completion rate), cost per task, and average runtime. All evaluation code is open source at github.com/OpenHands/benchmarks, and complete results—including agent trajectories—are published at github.com/OpenHands/openhands-index-results.
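
For reference, the three reported metrics can be computed from per-task records roughly as follows; the field names below are assumptions for illustration, not the published schema (see the results repository for the actual format).

    from statistics import mean

    # Hypothetical per-task records for one model; field names are illustrative only.
    tasks = [
        {"resolved": True,  "cost_usd": 0.42, "runtime_s": 310},
        {"resolved": False, "cost_usd": 0.61, "runtime_s": 540},
        {"resolved": True,  "cost_usd": 0.35, "runtime_s": 275},
    ]

    accuracy = mean(1.0 if t["resolved"] else 0.0 for t in tasks)  # task completion rate
    cost_per_task = mean(t["cost_usd"] for t in tasks)
    avg_runtime = mean(t["runtime_s"] for t in tasks)

    print(f"accuracy: {accuracy:.1%}, cost/task: ${cost_per_task:.2f}, runtime: {avg_runtime:.0f}s")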

Explore the full results at index.openhands.dev.



from DevOps.com https://ift.tt/mwn78fR
