Why AI Engineers Can’t Stop Talking About MCP


Those working in AI have likely come across the term Model Context Protocol (MCP) quite often. Several enterprises are already adopting it and building MCP servers, as it expands the capabilities of AI systems by enabling them to interact with diverse data sources and external tools in real time.


Even Google chief Sundar Pichai recently posted on X, “To MCP or not to MCP, that’s the question.” The post sparked discussion, and Glean CEO Arvind Jain responded that Glean uses MCP because enterprise agents need open standards and shared context. He also shared a demo of Glean’s integration with OpenAI’s Agents SDK using MCP.

Notably, OpenAI recently announced that MCP is now integrated with OpenAI Agents SDK, allowing developers to connect their MCP servers directly to agents. OpenAI is also working on bringing MCP support to the OpenAI API and the ChatGPT desktop app, with more updates expected in the coming months.

What is MCP? In simple terms, it is an open standard that allows AI models, particularly LLMs, to interact with external systems, tools, and data sources in a consistent and structured way. Instead of being limited to the knowledge they were trained on, models using MCP can perform real-time actions like querying a database, calling an API, reading files, or executing workflows. It is like a USB-C port for AI applications—a universal connector for AI systems.
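Under the hood, MCP messages are JSON-RPC 2.0: a client first discovers which tools a server offers, then invokes one by name. A minimal sketch of the two core requests is below; the method names (`tools/list`, `tools/call`) come from the MCP specification, while the `query_db` tool and its arguments are invented for illustration.

```python
import json

# 1. The client asks the server which tools it advertises.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# 2. The client invokes one of the advertised tools by name.
#    "query_db" and its arguments are hypothetical, not a real server's tool.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_db",
        "arguments": {"sql": "SELECT 1"},
    },
}

# Serialise the call exactly as it would travel over the wire.
wire = json.dumps(call_request)
print(wire)
```

Because every server speaks this same message shape, one client implementation can talk to any of them, which is what the USB-C analogy is getting at.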

With MCP, developers can connect their LLMs with apps, databases and tools. 

For instance, without MCP, developers would be stuck using a dozen different proprietary cables for their laptops instead of a universal USB-C port. Every time an API changed, they would be forced to rewrite their integration code. MCP standardises how models access and share context.

Several GitHub repositories now collect long lists of available MCP servers. When Anthropic launched MCP, it shared pre-built servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.

Notably, companies are building servers for those who don’t want to build their own. For instance, last month, AI startup Composio announced that it is building the largest source of fully managed MCP servers, complete with authentication support. “You can now integrate apps with your Cursor AI, WindSurf AI, or Anthropic’s Claude desktop without dealing with infrastructure, reliability, or authentication complexities,” Karan Vaidya, co-founder of Composio, said.

Similarly, Cloudflare recently announced that it is partnering with Auth0 and Stytch as authorisation partners for MCP. This will make it easier for developers and development teams of all sizes to delegate permissions to agents, dramatically simplifying agent deployment. This comes after its recent announcement that made it easier to build and deploy remote MCP servers to Cloudflare.

Meanwhile, ElevenLabs introduced an MCP server that allows users to give Claude and Cursor access to the entire ElevenLabs AI audio platform via simple text prompts. Users can even spin up voice agents to perform outbound calls, like ordering pizza. Moreover, ElevenLabs’ growth officer, Luke Harries, built a WhatsApp MCP server, which can now send and receive images, videos, and voice notes.

On the other hand, Microsoft has rolled out an Agent mode in Visual Studio Code (VS Code) to all users, offering a new autonomous coding assistant that supports multi-step tasks and integrates with MCP. This update positions Agent mode as a pair programming assistant that can analyse codebases, propose file edits, run terminal commands, and iterate through errors to complete coding tasks.

AWS has also jumped on the bandwagon, offering MCP support across its platform. This includes Bedrock Agents integration via the Inline Agents API, open-source MCP servers for code assistants, guides for running MCP infrastructure, and upcoming support in the Amazon Q Developer CLI.

MCP has received positive feedback from Latent Space and Andreessen Horowitz (a16z). However, LangChain shared a more balanced view. Nuno Campos, the creator of LangGraph, said MCP needs to be easier to implement, less complex, and better at managing server quality and scale.

How Is It Different From RAG & APIs? 

RAG focuses on augmenting an LLM’s response by retrieving and incorporating external information at query time. MCP addresses the broader challenge of connecting AI models to external tools, enabling not only data retrieval but also the execution of actions.

“In some way, RAG can also be seen as a tool, which means that it’s possible to build MCP servers on top of RAG services or solutions,” Elvis Saravia, co-founder at Dair.ai, wrote on X.  “In other words, MCP doesn’t replace RAG; it complements it, as is the case with other innovations like long-context LLM and large reasoning models.” 
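Saravia’s point can be sketched in a few lines: a retrieval step is just another callable a server can expose as a tool. The toy in-memory corpus, the `tool` registry, and the `retrieve` function below are all invented stand-ins, not a real MCP SDK.

```python
# Hypothetical sketch: exposing a toy RAG retriever as a "tool" the way an
# MCP server would. The corpus and registry are stand-ins, not real APIs.
CORPUS = {
    "mcp": "MCP is an open standard for connecting LLMs to tools.",
    "rag": "RAG augments an LLM's answer with retrieved documents.",
}

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def retrieve(query: str) -> str:
    """Return the first document whose key appears in the query, if any."""
    for key, doc in CORPUS.items():
        if key in query.lower():
            return doc
    return "no match"

# An agent would dispatch a tools/call by name, just like any other tool:
result = TOOLS["retrieve"]("What is RAG?")
print(result)  # → "RAG augments an LLM's answer with retrieved documents."
```

In this framing, RAG sits behind the same tool interface as a database query or an API call, which is why MCP complements rather than replaces it.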

AI + MCP > AI + API

In an X thread, Santiago Valdarrama, founder of Tideily, explained that MCP is not just another API lookalike. An API exposes its functionality through a set of fixed, predefined endpoints, such as products, orders, or invoices.

If one changes the number of parameters for such an endpoint or adds new capabilities to the API, every client will also need to be modified.

However, while discussing MCP, Valdarrama said, “Let’s say you change the number of parameters required by one of the tools in your server. Contrary to the API world, with MCP, you won’t break any clients using your server. They will adapt dynamically to the changes!”
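Valdarrama’s claim rests on discovery: an MCP client fetches each tool’s input schema at runtime rather than compiling against fixed endpoints. A rough illustration, in which the `get_weather` tool and its schema are invented:

```python
# Hypothetical sketch of schema-driven adaptation. A server advertises a
# tool with a JSON Schema; the client builds its call from that schema
# instead of hard-coding parameter names.

def list_tools():
    """Pretend server response: tool metadata including an input schema."""
    return [{
        "name": "get_weather",  # invented tool
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]

def build_arguments(schema, available):
    """Fill in only the parameters the schema currently asks for."""
    return {k: available[k] for k in schema["properties"] if k in available}

tool = list_tools()[0]
args = build_arguments(tool["inputSchema"], {"city": "Berlin", "units": "C"})
print(args)  # → {'city': 'Berlin'}
```

If the server later adds a `units` property to the schema, this same client code would start passing it without being rewritten, which is the dynamic adaptation Valdarrama describes.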

Challenges with MCP

In a LinkedIn post, Dharmesh Shah, founder of HubSpot, said he loves the idea of MCP but pointed out a few important considerations. The first is authentication: determining who has access to which capabilities. Then comes trust: deciding which MCP servers are reliable enough to use.

He added that provisioning is another challenge, as MCP servers are often just GitHub repositories that users need to self-host. Finally, there’s security. Given how LLMs interact with tools exposed by MCP servers, new risks emerge. That’s why it’s crucial to use known clients with trusted servers.
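Shah’s advice to pair known clients with trusted servers can be approximated by the simplest possible gate: an explicit allowlist checked before any connection. The server identifiers below are invented examples.

```python
# Hypothetical allowlist gate for MCP server connections.
# The identifiers are invented examples, not real repositories.
TRUSTED_SERVERS = {
    "example.com/filesystem-server",
    "example.com/slack-server",
}

def connect(server_id: str) -> str:
    """Refuse to talk to servers that are not explicitly trusted."""
    if server_id not in TRUSTED_SERVERS:
        raise PermissionError(f"untrusted MCP server: {server_id}")
    return f"connected to {server_id}"

print(connect("example.com/slack-server"))
```

An allowlist obviously does not address prompt injection or a trusted server turning malicious, but it is a cheap first line of defence against the provisioning problem Shah describes.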

Similarly, Michael Hunger, head of product innovation and developer product strategy at Neo4j, said in a blog post that MCP is still in its early days, and many challenges need to be addressed, especially around security, observability, and discovery, before MCP can be integrated into dependable AI systems.

Indeed, MCP is already subject to attacks, and companies are building security platforms for agentic AI and MCP systems.

Guy Goldenberg, a Wiz software engineer, identified severe vulnerabilities in MCP servers. These vulnerabilities, he said, could allow attackers to bypass protections, gain access to system files, and execute commands.

Meanwhile, Anthropic acknowledged the need to improve the security features. Justin Spahr-Summers, a member of the technical staff at Anthropic, said on a Hacker News thread, “Although MCP is powerful, and we hope it’ll really unlock a lot of potential, there are still risks like prompt injection and misconfigured or malicious servers that could cause a lot of damage if left unchecked.”

However, Shah believes that there’s a billion-dollar startup idea waiting to be built with MCP. He pointed out that finding the right MCP servers and plugging them into something like ChatGPT is currently messy and intimidating. 

The idea, according to him, is to build a centralised network of MCP servers that makes it frictionless to get started and delivers a fast time to joy. “I’d call it MCP.net.”

He mentioned that it could be thought of as the ‘Hugging Face of MCP’—a way to discover and connect to MCP servers.
