
Your REST API Isn’t Agent-Ready—Here’s How MCP Transforms It

Varun Srinivas
22 Aug 2025 · 5 min read

"APIs are how software talks. The problem is, most of them are speaking the wrong language." — Andrej Karpathy

Modern AI agents powered by large language models (LLMs) are changing how we build and interact with software. Yet many of our existing systems, especially traditional REST APIs, are not "agent-ready."

In other words, they are not designed for AI agents to understand or use effectively. To bridge this gap, we need a new approach that speaks the language of LLMs. This is where Context Engineering and the Model Context Protocol (MCP) come into play.

In this article, we will explore why current APIs fall short for AI agents and how MCP can transform your REST API into an AI-friendly interface. Along the way, we will discuss Karpathy's vision of Software 3.0 and how MCP is a practical step toward that future.

The New Language of Software: From Code to Prompts

Software is undergoing a paradigm shift. Andrej Karpathy famously categorized software development into three eras, each with its own "language" and way of programming computers:

  • Software 1.0: Code you write (explicit human-written instructions running on computers). This is classic programming, deterministic, precise, and under full human control.

  • Software 2.0: Code you train (machine-learned models, that is, neural network weights). Here, instead of writing rules, developers provide data and let the system learn. The "code" is a trained model's parameters.

  • Software 3.0: Code you prompt. In this new age, prompts written in natural language become the programs, and LLMs act as the execution engine. As Karpathy put it, "your prompts are now programs that program the LLM… written in English." In other words, natural language is the new programming language.

We are firmly entering the Software 3.0 era, where LLMs are the new computers, or a new kind of operating system. Instead of calling functions or writing API calls step by step, developers and even end users describe what they want in a prompt, and the AI figures out how to implement it.

This is powerful, but it also demands a rethinking of how our software systems expose functionality. If English (or, more generally, natural language plus structured hints) is the "API" for LLMs, how do our traditional tools fit in?

Why Traditional REST APIs Aren't Ready for AI Agents

Conventional REST APIs are designed for consumption by human developers or programs that have been explicitly coded to use them. A REST API typically has endpoints, requires specific parameters, and returns JSON or XML data.

This works great when a human programmer knows the API's documentation and can write code to call the right endpoints in the right sequence. But an AI agent is not a human programmer – it's an automated reasoning system that needs to decide on actions on the fly based on a goal or prompt.

The result? Miscommunication. A vanilla REST API speaks a language of fixed endpoints and rigid request/response formats. It has no built-in way to explain itself to an AI.

An LLM agent interfacing with a raw API would face several challenges:

Lack of self-description

APIs don't usually advertise to a machine what operations are available or when to use them. A human can read the docs, but an LLM doesn't innately know the API's capabilities – unless you literally paste the docs into its prompt, which is a fragile workaround at best (sketched in code after this list).

Procedural usage expectations

Most APIs expect the caller to know exactly which calls to make and in what order. An AI agent would have to infer this procedure. Without clear guidance, the agent might call the wrong endpoint, pass wrong parameters, or not call the API at all. In some cases, it might even hallucinate an answer instead of using the system.

Rigid input/output

APIs speak JSON (or XML). While LLMs can parse and produce JSON, they don't inherently know what JSON to send unless instructed. From the agent's perspective, a raw API is a black box that yields some data if used correctly, but figuring out correct usage is non-trivial.
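To make the first of these gaps concrete, here is roughly what the docs-in-the-prompt workaround looks like today. This is a minimal sketch using the OpenAI Python SDK; the endpoints in api_docs and the model name are invented placeholders, not a real API:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The entire API contract lives in one prompt string (hypothetical endpoints).
api_docs = """
GET /orders/{id}/status  -> returns {"status": "shipped" | "pending"}
POST /tickets            -> accepts {"subject": string, "body": string}
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # If the docs drift, get truncated, or overflow the context window,
        # the agent silently degrades: nothing machine-checks this contract.
        {"role": "system", "content": f"You can call this REST API:\n{api_docs}"},
        {"role": "user", "content": "Is order 42 shipped?"},
    ],
)
print(response.choices[0].message.content)  # free text, not a verified API call

The model may answer plausibly, but nothing guarantees it targets the right endpoint with the right parameters – which is exactly the gap the rest of this article is about.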

In short, most APIs today are "speaking the wrong language" for LLMs. They were designed for precise calls by deterministic code, not for flexible, context-driven usage by an AI reasoning in natural language.

To truly unlock the potential of AI agents in our systems, something needs to change in how APIs present themselves. As Karpathy and others have suggested, we should start "building for agents" rather than for traditional programs. This is where Context Engineering and MCP come in.

Context Engineering: Feeding AI the Right Information (at the Right Time)

Before diving into MCP, it's important to understand Context Engineering – a new discipline emerging in the era of LLMs. If prompt engineering is about crafting the right query for an LLM, context engineering is about ensuring the LLM has all the relevant information and tools available to accomplish a task.

"Context engineering is the delicate art and science of filling the context window with just the right information for the next step." — Andrej Karpathy

In practice, context engineering means dynamically providing the AI with what it needs when it needs it: relevant data, documents, or even functions it can call. It's not just stuffing the prompt with a big blob of text. It's about structured, curated, and timely context.

As one analysis put it, we're moving toward a world where "context is dynamic, structured, and personalized. It's not just what the agent knows – it's how and when it knows it."

In other words, to get reliable results from an AI agent, you often must pre-engineer the situation: fetch the necessary data, provide tools for interactivity, and lay out a "workspace" for the AI to operate in.

Think of an AI agent trying to solve a user's query. If the agent were a human, you'd hand it a folder of relevant documents, a phone to call an expert, or a key to the archive room, depending on the task.

Context engineering is the analogous process for AI – giving the LLM the info and tools it needs at runtime. This might involve retrieval (searching a vector database or knowledge base for facts), providing an API key to an external service, or specifying functions it can call (like a calculator or a weather API).

Crucially, adding tools and APIs to an AI's repertoire is part of context engineering. Instead of hoping the model "just knows" something or can solve it alone, we equip it with external capabilities.

That is why OpenAI added function calling to GPT-4, why frameworks like LangChain and LlamaIndex provide tool integration, and why MCP was created. All of these are methods to engineer context for the model by expanding what it can see and do beyond its base training.
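As a concrete instance, here is roughly what equipping a model with a tool looks like via OpenAI-style function calling. A minimal sketch; the get_weather tool and its schema are invented for illustration:

from openai import OpenAI

client = OpenAI()

# Describing a callable tool to the model is context engineering in action:
# we expand what the model can *do*, not just what it knows.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Pune?"}],
    tools=tools,
)

# Instead of guessing, the model emits a structured call for us to execute.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)  # get_weather {"city": "Pune"}

MCP generalizes this idea: rather than each application hand-declaring tools for each vendor, the tools are published by a server in one standard protocol.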

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is a new open standard, originally introduced by Anthropic in late 2024. It aims to standardize how AI models access external data, tools, and APIs.

You can think of MCP as a layer that sits on top of your existing data or services and communicates with AI agents in a language they understand. In essence, MCP lets you expose the capabilities of a system in a structured, machine-readable way that an LLM can interpret and use.

Here are the key points about MCP:

MCP is a specification, not a library

It is akin to what REST or GraphQL is for web services, but designed for AI agents. It defines a standard format for describing "tools" (functions/operations) that an AI model can call, along with their inputs and outputs.

Any AI agent that understands MCP can connect to any MCP-compliant server. This decouples the AI models from custom integrations – build an MCP server once, and many different agents (Claude, GPT-4, etc.) can use it. No more one-off plugin implementations for each AI vendor.

It's about capabilities and context

An MCP server essentially tells the model "Here are the things you can do, and here's how to use them." It provides metadata and structured descriptions of available tools. The model can then plan actions using those tools instead of guessing.

For example, if you have a database, an MCP server might expose a queryDatabase tool with a description and a schema for inputs/outputs. The model sees that and knows it can call queryDatabase when it needs info from your DB, instead of trying to recall stale training data (see the sketch after these key points).

Two-way and secure by design

MCP enables two-way connections between AI and data sources. It's not just the AI reading data; it can also take actions (with proper guardrails). Importantly, you define exactly what the AI can do – each tool is explicitly programmed by you, so there's no unbounded access.

The model can only invoke the tools you expose, with the inputs you allow, and you can enforce auth, permissions, and validations just like any API. "MCP is about surfacing capabilities, not giving blanket access," as one guide puts it. This addresses the safety aspect: AI agents stay within the lanes you've defined.
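To ground the queryDatabase example above: with the official MCP Python SDK, exposing a tool is mostly a matter of decorating a function, and the input schema is derived from its signature. A minimal sketch – the database file, table contents, and read-only policy are placeholder assumptions:

import sqlite3

# FastMCP ships with the official MCP Python SDK ("pip install mcp").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-server")

@mcp.tool()
def queryDatabase(sql: str) -> list[dict]:
    """Run a read-only SQL query against the CRM database and return rows."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("Only SELECT queries are allowed")  # an explicit guardrail
    conn = sqlite3.connect("crm.db")  # placeholder database file
    conn.row_factory = sqlite3.Row
    try:
        return [dict(row) for row in conn.execute(sql).fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # publishes the tool manifest and serves tool calls (stdio by default)

The docstring becomes the tool's description in the manifest – exactly the metadata the model plans with.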

In short, MCP acts as a universal translator and toolbox provider between AI agents and your system. Instead of custom logic to integrate each API with each AI, you get a standard interface. And instead of the AI floundering or hallucinating, it has a clear menu of actions it can take on your system.

MCP vs. REST: Speaking the AI's Language

You might be thinking, "We already have a REST API. Do we really need MCP?" It helps to contrast what an MCP server offers compared to a traditional API.

Self-describing "Tool Manifest"

An MCP server exposes a list of tools in a machine-readable manifest (typically JSON) describing each tool's name, purpose, inputs, and outputs. Think of it as an automatic API discovery document for AI agents.

In REST, we have things like OpenAPI/Swagger, but those are aimed at developers; MCP's manifest is aimed at the model itself. The LLM (or the AI runtime hosting it) can read this manifest and understand what it can do without needing a human to hard-code calls.

For example, the manifest might say:

Tool: "getUserProfile",Description: "Fetches a user's profile by ID from the CRM", Inputs: {"user_id": "string"}, Outputs: {"name": "string", "email": "string", ...}

Armed with this, the agent can reason like: "The user asked for their account status, I should call getUserProfile with the user's ID." In a traditional API, the model wouldn't know such a function exists unless it was explicitly described in the prompt. MCP gives the model a structured map of the API's capabilities upfront.

Dynamic decision-making vs. hard-coded calls

In REST, clients (apps) decide when and what API calls to make. In MCP, the agent (model) decides, based on the task and the manifest, which "tool" to use and when. As the Vercel team succinctly put it: "An API is made for apps to call directly… It's built for humans/programs that know what they're doing. An MCP server is made for models… providing structured descriptions of what tools do and when to use them. Unlike APIs that need hard-coded calls, models use this context to figure out what to do."

In other words, MCP shifts some of the planning logic into the AI's domain – the agent orchestrates the sequence of actions, guided by the metadata. This is a fundamental difference in philosophy: we're empowering the AI to be an autonomous API client.

Unified interface for many models

By implementing MCP, you make your service universally accessible to any AI system that speaks MCP. For instance, Anthropic's Claude, OpenAI models (via certain tool-use frameworks), or open-source agent frameworks can all plug into the same MCP server.

"Write the integration once. Any MCP-compatible agent can use it. No custom logic for each model. No vendor lock-in." This is a big deal for developers: instead of writing separate plugins or adapters for each AI platform (which was the norm in 2023's "plugin" craze), you have one standard. It's akin to how writing a web service in compliance with HTTP+JSON standards means any programming language can consume it; here, any AI agent can consume your MCP-described service.

To illustrate how MCP actually looks under the hood, here's a simplified example of a tool manifest entry (conceptually):

{
  "tools": [
    {
      "name": "searchInventory",
      "description": "Search the product inventory by keyword and return matching items.",
      "input_schema": {
        "type": "object",
        "properties": {
          "query": { "type": "string", "description": "Search term for product names or descriptions" }
        },
        "required": ["query"]
      },
      "output_schema": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "product_id": { "type": "string" },
            "name": { "type": "string" },
            "price": { "type": "number" }
          }
        }
      }
    }
  ]
}

In a running MCP server, the above information would be provided to the LLM (usually via an initial handshake or on first connection). The AI doesn't need to guess how to get inventory data – it knows there's a "searchInventory" tool and exactly how to use it.

So if a user asks, "Do we have any red running shoes in stock?", the model can decide to call searchInventory with {"query": "red running shoes"}. The MCP server might internally translate that to a database query or an internal API call, then return real results. The model sees the JSON result and can incorporate that into its answer: "Yes, we have 12 different red running shoes available. The cheapest is …" – grounded on actual data rather than a guess.
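From the agent side, the flow looks roughly like this with the MCP Python SDK's client API. A sketch under stated assumptions: inventory_server.py is a placeholder for a server exposing searchInventory over stdio:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch and connect to an MCP server over stdio (placeholder script name).
    server = StdioServerParameters(command="python", args=["inventory_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # the handshake mentioned above
            manifest = await session.list_tools()
            print([tool.name for tool in manifest.tools])  # e.g. ['searchInventory']

            # The LLM decides to call the tool; the runtime executes it.
            result = await session.call_tool(
                "searchInventory", {"query": "red running shoes"}
            )
            print(result.content)  # real inventory data, not a guess

asyncio.run(main())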

This approach dramatically reduces hallucination and errors. The model no longer says, "I think we probably have some in stock" – it knows by asking your system.

As the MCP FAQ notes, "Models continue to rely on their own trained knowledge and reasoning, but now they have access to specialized tools from MCP servers to fill in the gaps… If it reaches limits in its understanding, it can call real functions, get real data, and stay within guardrails you define instead of fabricating answers."

How MCP Transforms Your API (and Your Architecture)

Implementing an MCP server for your existing API or data source is essentially an act of context engineering your system for AI. You're creating a new AI-facing façade that packages up context (data) and actions in the right format for an LLM to use.

At Coditas, we've been building MCP servers and have seen firsthand how it changes the game:

From Endpoints to "Tools"

We take traditional API endpoints or operations and register them as MCP tools. For example, an internal REST endpoint GET /orders/{id}/status might become a tool called getOrderStatus(order_id). We provide a description like "Retrieve the current status of an order by order ID" and define the input/output schema. Suddenly, an AI agent can discover and use this functionality without a human pointing it to the specific HTTP path.
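A sketch of what that wrapping can look like, again with the MCP Python SDK – the internal base URL and response shape are hypothetical, and we assume the requests package is available:

import requests

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-server")

INTERNAL_API = "https://internal.example.com"  # placeholder base URL

@mcp.tool()
def getOrderStatus(order_id: str) -> dict:
    """Retrieve the current status of an order by order ID."""
    # A thin facade over the existing REST endpoint: the agent discovers and
    # calls getOrderStatus; the HTTP details stay our implementation concern.
    resp = requests.get(f"{INTERNAL_API}/orders/{order_id}/status", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()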

Contextual Responses vs. Raw Data

We often add an interpretation layer or formatting within the MCP server. Instead of dumping a raw JSON blob as a response, the MCP server might trim it or format it, knowing that a model will read it. Some MCP implementations even let you include prompt templates or transformations on outputs, so the data is presented in a way that's most useful for the model's next step. In essence, the API is no longer just a data pipe – it's part of the AI's context.
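For instance, a shaping step inside the server might look like the following (field names invented for illustration):

def format_order_for_model(raw: dict) -> str:
    """Condense a raw API payload into the few fields the model needs.

    A short, labeled summary costs far fewer context tokens than a full
    JSON dump and is easier for the model to read reliably.
    """
    return (
        f"Order {raw['order_id']}: {raw['status']} "
        f"(updated {raw['updated_at']}, carrier: {raw.get('carrier', 'n/a')})"
    )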

Controlled Autonomy

By defining what the AI agent can do (and nothing beyond that), MCP gives us confidence to let the agent operate autonomously within its defined limits. For instance, you might allow it to create a support ticket via a tool, but perhaps not delete one (unless specifically authorized). Each tool is an explicit permission.

This granularity means you can safely automate more tasks. We've found that business leaders (CTOs/CIOs) appreciate this control – it's not a rogue AI hacking into your system, it's an AI invoking well-defined functions just like any other client, with full audit logs and permission checks.
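In code, the permission boundary is simply what you register. A sketch – the priority policy and ticketing backend are placeholders:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("support-server")

ALLOWED_PRIORITIES = {"low", "normal", "high"}  # hypothetical policy

@mcp.tool()
def createSupportTicket(subject: str, body: str, priority: str = "normal") -> dict:
    """Create a support ticket. Deletion is deliberately not exposed."""
    if priority not in ALLOWED_PRIORITIES:
        raise ValueError(f"priority must be one of {sorted(ALLOWED_PRIORITIES)}")
    # ... call the real ticketing backend here, behind your own auth ...
    return {"ticket_id": "TCK-0001", "status": "open"}  # placeholder response

# No deleteTicket tool is registered, so no agent can ever invoke deletion:
# the permission boundary is exactly the set of tools you choose to expose.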

A useful analogy

As described in the MCP documentation, the AI agent is the chef, and the MCP server is the kitchen. The chef (agent) decides what to make (plans the task), but can only use the tools and ingredients available in the kitchen (your MCP-exposed functions and data).

The kitchen sets boundaries – the chef can fry, bake, and mix using the appliances provided, but if a capability isn't in the kitchen, the chef can't magically conjure it. This keeps the agent's behavior safe, predictable, and within scope while still allowing creativity within those bounds.

A step toward LLM-native architecture

From an architectural standpoint, adding an MCP layer is a step toward making your whole stack "LLM-native." Karpathy, in his talk, highlighted the need for "paving the roads for agents" in our software infrastructure.

Instead of UIs or APIs built only for human consumption, we also build interfaces for AI consumption. Companies like OpenAI and Anthropic predict a future where a significant portion of usage comes from AI agents rather than direct human interaction – OpenAI even projects that agent use-cases will eclipse direct chat usage of LLMs by 2030.

Think about that: your customers might increasingly interact with your services via an AI intermediary. MCP is one answer to, "How do we prepare for that AI-driven world?" It provides the roads and bridges for agents to navigate your systems.

Opinionated Perspective: Embrace the Shift or Be Left Behind

It's rare to see a new protocol with the potential to reshape how software components interact. Skeptics might say, "Isn't this just function calling or plugging in an SDK? We've had integrations for years." But in our opinion, MCP represents a genuine shift.

It's a recognition that Software 3.0 applications need a different kind of connectivity – one centered on contextual understanding rather than rigid commands.

By focusing on what the model can do instead of how a developer must call it, MCP flips the integration model on its head. It takes a strong stance on standardization (which is good – the last thing we need is a dozen incompatible "AI plugin" formats).

It's also ambitious: it wants to be the universal connector between AI and the world's software, much like HTTP is the universal connector of client-server on the web.

Why this matters now

From a developer's perspective, adopting MCP early can provide a competitive edge. It can make your platform AI-friendly with relatively little effort (there are SDKs and examples in multiple languages to get started quickly). Instead of writing custom prompt instructions for each API call or building one-off integrations, you describe your capabilities once.

If you've ever written an OpenAPI spec, this will feel familiar – but now you're writing it for an AI audience. Engineers who grasp context engineering will be in high demand, as they can design systems that augment AI effectively. As one VC firm insight noted, "Tools and platforms will emerge that help teams design, store, recall, test, and deploy context modules the same way they ship code." In many ways, building an MCP server is creating a context module for your data/service.

The product advantage

From a CxO or product leader perspective, making your product agent-ready opens up new possibilities. Imagine an AI agent that can use your SaaS product just like a power user would – but without a human in the loop. By exposing MCP tools, you enable automation and integrations driven by plain-language commands.

Your end-users might not even know MCP is involved, but they'll notice your app can suddenly do smarter, more proactive things. For example, instead of a user manually clicking through a dashboard, they could ask an AI, "Generate the monthly report and email it to the team," and behind the scenes, your MCP interface lets the AI fetch data and trigger report generation.

As Vercel's blog put it, "Instead of chatbots giving vague answers, users get precise responses backed by real data. They'll be able to ask an AI to do things on their behalf… and it actually executes them."

That's a better user experience, and it differentiates products in a crowded market.

Conclusion: From APIs to AI-PIs (AI-Programmatic Interfaces)

It's time to acknowledge that yesterday's interfaces aren't always up to tomorrow's challenges. REST APIs aren't going away, but their role is evolving. In the era of Software 3.0, we need to augment our APIs with AI-ready context and semantics. The Model Context Protocol offers a powerful way to do just that. It takes the stable, battle-tested concept of an API and makes it intelligible to an LLM agent.

By implementing MCP (or similar context-engineering strategies), you're essentially creating an AI Programmatic Interface for your software – one that speaks in terms of goals and functions, not just low-level endpoints. You are telling the AI, "Here are the building blocks, go build something for the user, and I'll ensure each block works as advertised."

The future of software likely belongs to those who embrace agents as first-class users. It's a provocative idea to design systems not just for human users or developers, but also for AI colleagues. It requires a shift in mindset – thinking about exposing intent safely rather than just data. But as we've argued, the shift is happening regardless. MCP is one practical, opinionated way to ride that wave.

In summary, making your REST API agent-ready is about speaking the AI's language. Context engineering fills the AI's world with the right information and tools, and MCP provides the syntax and grammar for those tools. Together, they transform a mute API into a conversational partner for the AI.

The question isn't "will this be useful?" – it's "how soon will others do this, and will I be ahead of the curve or playing catch-up?"

The software landscape is changing (again). By adopting protocols like MCP, you ensure that your APIs aren't left speaking only to humans, but are ready to collaborate with the intelligent agents that are increasingly becoming part of the team.

In the end, an agent-ready API is one that can carry on a dialogue with AI – and in that dialogue, truly get things done.
