
    Introduction

The Model Context Protocol, developed by Anthropic and published in November 2024, has been very well received by the community. It has quickly become a de facto industry standard, and other companies developing LLMs have already implemented equivalent functionality in their systems.

    What Is the Model Context Protocol?

    The Model Context Protocol (MCP) is a standardized communication protocol between applications and Large Language Models (LLMs) that enhances these artificial intelligences, enabling them to perform more complex tasks.

    To simplify the explanation, think of it this way: although human intelligence is the most advanced we know, to answer a simple question like “What time is it?” we usually need a tool (a watch or a phone).
With LLMs, something similar happens: to answer more complex questions, they need to connect to applications or services that provide tools for obtaining information or performing actions. That is the core value MCP provides.

    This concept is not entirely new. Integrations between LLMs and tools to fetch data or execute actions have existed before, but most were ad hoc, specific, and non-reusable. What makes MCP interesting is that it defines a standard communication interface between any application or service and any AI model, simplifying the process and making connections portable and reusable.

     

    Before MCP, each connection between an LLM and an application/service had to be built manually, one by one.

     

Now, compare the previous diagram with the equivalent setup using the Model Context Protocol.

     

Now, with MCP, the LLM interacts through an MCP Client that connects to any MCP Server, providing a standardized transport layer between them. Applications and services only need to implement their own MCP Server to expose the list of Tools available for the LLM to perform its tasks.
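The standardization is concrete: every client and server exchanges the same JSON-RPC 2.0 envelopes regardless of which application sits behind the server. A minimal sketch of the `tools/list` exchange a client uses to discover a server's tools (the weather tool shown is illustrative):

```python
import json

# Request the MCP Client sends to any MCP Server to discover its tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A response a hypothetical weather server might return.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_forecast",
                "description": "Get the weather forecast for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# Because the envelope is standard, a client can match any response to its
# request by id, without knowing anything else about the server behind it.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/list"
assert response["id"] == request["id"]
```

The same envelope carries `tools/call`, `resources/read`, and the other protocol methods, which is what makes connectors portable between clients.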

    Exploring Official and Community MCP Servers

Since the release of this communication protocol between AI models and applications/services, the community and private companies have rapidly adopted it. Many modern applications and services are now being developed with their own MCP Server or connector to provide tools and capabilities to AI models.

    This creates an interesting scenario where an LLM is no longer limited to responding in plain text like a chatbot; it can now perform actions on real applications or services based on user requests.

    The community has already developed thousands of connectors for almost any application, and many companies are also releasing official MCP Servers. As of the date of this article, more than 16,000 MCP Servers have been registered, creating a rich and expanding ecosystem.

    If your company is considering integrating AI into its applications or services, this is one of the most promising paths to achieving this.

     

    Some notable MCP Servers include:

    • FileSystem: Allows AI to manage and query local files, assisting with tasks like creation, organization, or searching.
    • GitHub: Enables AI to manage repositories, reviewing pull requests, proposing fixes, analyzing vulnerabilities, or suggesting test plans.
    • PostgreSQL: Allows AI to generate SQL queries from natural language requests and extract the requested information.

     

All of these, and many more, can be downloaded free of charge and installed in your MCP Client to connect them to an AI model.
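As an example, many desktop MCP Clients read a JSON configuration that lists the servers to launch on startup. A sketch in the format used by Claude Desktop (package names are the community ones mentioned above; the path and token are placeholders you must supply):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Other clients use their own configuration files, but the pattern is the same: name a server, tell the client how to start it, and the tools it exposes become available to the model.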

    This represents a significant step forward toward enabling AI models to perform complex and useful tasks that help optimize workflows and reduce repetitive tasks.

    Security & Trust

    Not all MCP Servers are safe. As with any technological innovation, a careless or malicious developer could create a connector capable of performing harmful actions on applications, services, or even the user’s machine.

    This is particularly critical in enterprise environments. As a result, tools have emerged to evaluate and certify that an MCP Server does not contain serious vulnerabilities.

    One such tool is MCP-Scan, which analyzes installed MCP Servers on the client and generates a report of potential risks or insecure configurations.

     

    MCP Server

    The MCP Server orchestrates the interaction between language models and their environment, providing the communication layer. Its main function is to manage the flow of information between the model and various external sources or services, ensuring interoperability, security, and consistent context during task execution.

    Server Functionalities

As described earlier, MCP relies on two main components: the client and the server.
The server is built around three key primitives:

    • Tools: Executable functions exposed by the server to the client. They can be endpoints or commands with well-defined interfaces, allowing the client to perform actions such as fetching data, sending an API query, or executing a local operation.
    • Resources: Data sources or files exposed by the server. These are not executed but read or inspected. Their main purpose is to provide context to the model without having to send everything in advance, e.g. a code file, configuration, dataset, or document.
    • Prompts: Text templates or instructions provided by the server to ensure consistency in interactions with the model. For example, “Explain this code block” or “Summarize the following text in three points.”
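A toy sketch of the three primitives, written as plain Python stand-ins for a real MCP Server (the official SDKs wrap this in decorators and the JSON-RPC envelope; every name below is illustrative):

```python
# Toy MCP-style server state: the three primitives as plain registries.
tools = {
    # Executable: the client asks the server to *run* these.
    "get_time": lambda args: {"time": "12:00"},
}
resources = {
    # Readable: the client fetches these to give the model context.
    "file:///app/config.json": '{"debug": false}',
}
prompts = {
    # Reusable templates the server offers to the client.
    "explain_code": "Explain this code block:\n\n{code}",
}

def handle(method, params):
    """Dispatch a simplified request to the right primitive."""
    if method == "tools/call":
        return tools[params["name"]](params.get("arguments", {}))
    if method == "resources/read":
        return resources[params["uri"]]
    if method == "prompts/get":
        return prompts[params["name"]].format(**params.get("arguments", {}))
    raise ValueError(f"unknown method: {method}")

print(handle("tools/call", {"name": "get_time"}))                     # executed
print(handle("resources/read", {"uri": "file:///app/config.json"}))  # read, not executed
print(handle("prompts/get", {"name": "explain_code",
                             "arguments": {"code": "x = 1"}}))        # templated
```

The distinction is visible in the dispatch: a Tool is called, a Resource is merely returned, and a Prompt is a template filled in before reaching the model.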

     

    While Tools are the most popular primitive, enabling concrete actions, Resources and Prompts extend the contextual understanding and simplify model interactions.

    MCP Client Compatibility

Not all MCP Clients implement the full set of protocol primitives. Compatibility for each client can be checked on the official MCP Clients page of the Model Context Protocol documentation.

    Many clients currently lack support for Resources, one of the most valuable server-side primitives. This means that while Tools (executable functions) are widely supported, direct access to additional resources like files, datasets, or contextual data remains limited in many clients.

     

    Transport Protocol

    The transport layer handles how MCP Clients and Servers communicate within the ecosystem. Its primary responsibility is to transmit JSON-RPC 2.0 messages, the protocol’s base format, ensuring proper request, response, and event flow between processes.

     

    Transport can be carried out through several mechanisms, each with its own specific advantages and limitations.

    Supported Transport Types

    There are two main transport methods in MCP:

    • Standard Input/Output (STDIO): A direct communication method between local processes using the operating system’s standard streams. It is simple and efficient, ideal for CLI or IDE integrations, since it doesn’t require network configuration. However, it only works locally, limiting use in distributed environments.

     

    • Streamable HTTP: A bidirectional and continuous communication channel between client and server. It supports authentication, authorization, and integration across distributed systems. Its advantages in security and two-way communication have led it to replace Server-Sent Events (SSE), an older unidirectional protocol now deprecated within MCP.
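To make the STDIO case concrete: the client launches the server as a child process and exchanges JSON-RPC messages over its standard streams. A minimal sketch of the server-side loop (framing simplified to one message per line; the real protocol adds initialization and capability negotiation):

```python
import json
import sys

def serve_stdio(handle_request, stdin=sys.stdin, stdout=sys.stdout):
    """Minimal STDIO transport loop: one JSON-RPC message per line."""
    for line in stdin:
        line = line.strip()
        if not line:
            continue
        request = json.loads(line)
        response = {
            "jsonrpc": "2.0",
            "id": request["id"],
            "result": handle_request(request["method"], request.get("params", {})),
        }
        stdout.write(json.dumps(response) + "\n")
        stdout.flush()
```

This works only between local processes, which is exactly the limitation noted above; Streamable HTTP replaces the standard streams with HTTP requests so the same messages can cross the network.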


    The following table shows the main advantages and disadvantages of using each protocol depending on the scenario in which it is applied.

     

    Transport Type  | Communication        | Ideal For                     | Main Advantages          | Limitations
    STDIO           | Local, bidirectional | Local integrations (CLI, IDE) | Simplicity, low latency  | Local only
    Streamable HTTP | HTTP, bidirectional  | Remote, scalable connections  | Security, flexibility    | More complex, network overhead

     

    Together, these transport mechanisms ensure flexible and scalable communication between MCP Clients and Servers, from lightweight local integrations to large distributed deployments.

    MCP Client

    An MCP Client connects to one or multiple MCP Servers and manages communication between the LLM, the user, and the tools/resources provided by the servers.
    Its primary role is to coordinate model requests, forward necessary data to the appropriate server, and return processed results to the model or user.

     

    The following sequence outlines the typical interaction flow:

     

    • The user enters a query (e.g., “What’s the weather in San Francisco?”).
    • The MCP Client receives and forwards it to the LLM, along with the list of available tools and resources.
    • The LLM interprets the query and decides which tool to use — in this case, a weather tool.
    • The client handles the request, optionally prompting the user for approval before executing sensitive operations.
    • The MCP Server calls the corresponding API (e.g., Weather API).
    • The result is returned to the client, which passes it to the LLM for interpretation.
    • Finally, the client delivers a natural-language response to the user.
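The steps above can be sketched as a single orchestration function. Everything here is illustrative: `call_llm` and `call_server` stand in for a real model API and a real MCP Server connection, and `approve` models the optional user-approval step:

```python
def run_turn(user_query, call_llm, call_server, approve=lambda call: True):
    """One MCP Client turn: user -> LLM -> tool -> LLM -> user."""
    # Steps 1-3: forward the query plus the advertised tools to the model.
    tools = call_server("tools/list", {})
    decision = call_llm(user_query, tools)

    # Steps 4-6: if the model picked a tool, optionally ask the user,
    # then execute it through the MCP Server.
    if decision.get("tool_call"):
        if not approve(decision["tool_call"]):
            return "Tool call rejected by user."
        result = call_server("tools/call", decision["tool_call"])
        # Step 7: hand the raw result back to the model for interpretation.
        decision = call_llm(user_query, tools, tool_result=result)

    return decision["text"]
```

Wired to stubs, the flow matches the sequence above: the model first requests the weather tool, the client executes it via the server, and the model's second pass turns the raw result into the natural-language answer.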

     

    Thus, the client acts as a central orchestrator, combining the model’s reasoning with the technical capabilities of MCP Servers, ensuring smooth, secure, and extensible communication within the protocol.

    In the following video, you can see the implementation of the first MCP Client (Avatar) developed by the Plain Concepts Research team.

    In this demo, Chris, our avatar, can create primitives and 3D elements using the MCP Server connected to the Evergine graphics engine.

    Conclusions

    The Model Context Protocol (MCP) is establishing itself as a key communication standard in the field of artificial intelligence.
    It provides a common structure that allows language models, tools, and data sources to connect and interact in a unified way.
This shared framework not only simplifies the development and integration of new capabilities but also fosters an interoperable environment where any model can access MCP-compatible tools, promoting a more open, modular, and scalable AI ecosystem.
