Apple is bringing agentic coding to Xcode. On Tuesday, the company announced the release of Xcode 26.3, which will allow developers to use agentic tools, including Anthropic’s Claude Agent and OpenAI’s Codex, directly in Apple’s official app development suite.
The Xcode 26.3 Release Candidate is available to all Apple Developers today from the developer website and will hit the App Store a bit later.
This latest update comes on the heels of the Xcode 26 release last year, which first introduced support for ChatGPT and Claude within Apple’s integrated development environment (IDE) used by those building apps for iPhone, iPad, Mac, Apple Watch, and Apple’s other hardware platforms.
The integration of agentic coding tools allows AI models to tap into more of Xcode’s features to carry out their tasks and handle more complex automation.
The models will also have access to Apple’s current developer documentation to ensure they use the latest APIs and follow the best practices as they build.
At launch, the agents can help developers explore their project and understand its structure and metadata, then build the project, run tests to check for errors, and fix any that turn up.

To prepare for this launch, Apple said it worked closely with both Anthropic and OpenAI to design the new experience. Specifically, the company said it did a lot of work to optimize token usage and tool calling, so the agents would run efficiently in Xcode.
Xcode leverages the Model Context Protocol (MCP) to expose its capabilities to the agents and connect them with its tools. That means Xcode can now work with any outside MCP-compatible agent for things like project discovery, code changes, file management, previews and snippets, and access to the latest documentation.
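Apple hasn’t published the exact tool schema Xcode exposes, but MCP itself is a JSON-RPC-based protocol in which an agent invokes a server’s tools with a “tools/call” request. As a rough sketch only, the Swift snippet below shows what such a request might look like on the wire; the tool name “build_project” and its arguments are hypothetical placeholders, not Xcode’s actual API.

```swift
import Foundation

// Minimal sketch of an MCP "tools/call" request an agent might send to an
// MCP server such as Xcode. The tool name and arguments are hypothetical.
struct MCPToolCallRequest: Encodable {
    let jsonrpc = "2.0"
    let id: Int
    let method = "tools/call"          // standard MCP method for invoking a tool
    let params: Params

    struct Params: Encodable {
        let name: String               // which tool to run
        let arguments: [String: String]
    }
}

let request = MCPToolCallRequest(
    id: 1,
    params: .init(
        name: "build_project",         // hypothetical tool name for illustration
        arguments: ["scheme": "MyApp", "configuration": "Debug"]
    )
)

let encoder = JSONEncoder()
encoder.outputFormatting = .prettyPrinted
if let data = try? encoder.encode(request),
   let json = String(data: data, encoding: .utf8) {
    print(json)                        // the JSON-RPC payload sent to the server
}
```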
Developers who want to try the agentic coding feature should first download the agents they want to use from Xcode’s settings. They can then connect their accounts with the AI providers by signing in or adding an API key. A drop-down menu within the app lets developers choose which version of a model they want to use (e.g. GPT-5.2-codex vs. GPT-5.1-mini).
In a prompt box on the left side of the screen, developers can tell the agent what sort of project they want to build or what changes they want made to the code, using natural language commands. For instance, they could direct Xcode to add a feature to their app that uses one of Apple’s provided frameworks and describe how it should appear and function.
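As a purely hypothetical illustration, a prompt along the lines of “add a bar chart of weekly step counts using Swift Charts” might lead the agent to generate a SwiftUI view something like the one below; the data model and names here are invented for the sketch, not output from Xcode’s agents.

```swift
import SwiftUI
import Charts

// Hypothetical example of the kind of view an agent might add when asked to
// build a chart with Apple's Swift Charts framework.
struct WeeklyStepsChart: View {
    struct DailySteps: Identifiable {
        let id = UUID()
        let day: String
        let steps: Int
    }

    // Sample data invented for this sketch.
    let data: [DailySteps] = [
        .init(day: "Mon", steps: 4200),
        .init(day: "Tue", steps: 6800),
        .init(day: "Wed", steps: 5300),
    ]

    var body: some View {
        Chart(data) { entry in
            BarMark(
                x: .value("Day", entry.day),
                y: .value("Steps", entry.steps)
            )
        }
        .padding()
    }
}
```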

As the agent starts working, it breaks down tasks into smaller steps, so it’s easy to see what’s happening and how the code is changing. It will also look for the documentation it needs before it begins coding. The changes are highlighted visually within the code, and the project transcript on the side of the screen allows developers to learn what’s happening under the hood.
This transparency could particularly help new developers who are learning to code, Apple believes. To that end, the company is hosting a “code-along” workshop on Thursday on its developer site, where users can watch and learn how to use agentic coding tools as they code along in real time with their own copy of Xcode.
At the end of its process, the AI agent verifies that the code it created works as expected. Armed with those test results, the agent can iterate further on the project to fix errors or other problems if need be. (Apple noted that asking the agent to think through its approach before writing code can sometimes improve results, since it forces the agent to do some planning up front.)
Plus, if developers are not happy with the results, they can easily revert their code to an earlier state at any point, as Xcode creates a milestone each time the agent makes a change.