Monday, March 9, 2026

The Architecture of Agency: Why the Model Context Protocol is Replacing the 'Skills' Paradigm

In the rapidly evolving landscape of generative AI, a fundamental architectural shift is underway. For the past year, developers have relied on 'skills'—bespoke, hard-coded bridges between Large Language Models (LLMs) and external data. However, the emergence of the Model Context Protocol (MCP) by Anthropic marks the end of this fragmented era. This briefing explores the transition from artisanal skill-building to a standardised, universal protocol for machine intelligence, and what this means for Singapore’s ambition to become the world’s premier 'Intelligent Island'.

The Fragmented Atelier of Early AI

Walking through the sun-drenched corridors of a fintech hub on Robinson Road, one overhears a recurring lament among Chief Technology Officers. The excitement of the initial ChatGPT 'aha' moment has given way to the gruelling reality of integration. For much of 2023 and 2024, the industry operated like a collection of high-end boutiques, each crafting bespoke 'skills' to allow their AI agents to talk to their databases, their Slack channels, or their proprietary CRM systems.

In this 'skills-based' era, if you wanted an AI agent to check a shipping manifest in the Port of Singapore, you had to write a specific function—a skill—that translated the model’s intent into a precise API call. It was manual, brittle, and notoriously difficult to scale. Every new tool required a new bridge. The result was a digital archipelago: isolated islands of data connected by shaky, hand-built causeways.

But as Singapore intensifies its National AI Strategy 2.0, the limitations of this artisanal approach have become a bottleneck. The city-state’s vision of a seamless, AI-integrated economy requires something more robust than a collection of custom scripts. It requires a standard. Enter the Model Context Protocol (MCP).

Understanding the 'Skills' Bottleneck

To appreciate the shift, one must first understand the limitations of the status quo. The 'skills' architecture—often referred to as function calling or tool use—is essentially a dictionary provided to the LLM. You tell the model: "If the user asks for X, call function Y with parameters Z."
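A dependency-free sketch makes that 'dictionary' concrete. Everything below (the `SKILLS` registry, `check_manifest`, the `dispatch` helper) is hypothetical, standing in for whatever framework-specific function-calling schema a team actually uses:

```python
# Illustrative sketch of the 'skills' pattern. All names here (SKILLS,
# check_manifest, dispatch) are invented for this briefing.

def check_manifest(vessel_id: str) -> dict:
    """A bespoke 'skill': one hard-coded bridge to one backend."""
    # A real skill would call a specific API here; stubbed for illustration.
    return {"vessel": vessel_id, "containers": 412}

# The 'dictionary' handed to the model: intent -> function + parameter schema.
SKILLS = {
    "check_manifest": {
        "fn": check_manifest,
        "params": {"vessel_id": "string"},
    },
}

def dispatch(intent: str, **params) -> dict:
    """Translate the model's stated intent into the matching hard-coded call."""
    skill = SKILLS[intent]        # fails if the intent was never registered
    return skill["fn"](**params)  # drifts out of date if the backend API moves

print(dispatch("check_manifest", vessel_id="MV-ORCHID"))
```

Every new backend repeats this pattern by hand, which is exactly where the defects below come from.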

While effective for simple tasks, this approach suffers from three primary defects:

1. The Burden of Maintenance

Every time an external API changes, the 'skill' breaks. For a multinational firm operating out of Marina Bay, maintaining hundreds of bespoke skills across different departments becomes a resource-heavy endeavour that distracts from core innovation.

2. Contextual Isolation

Skills are often 'blind'. The model can call a function to fetch data, but it doesn't truly understand the underlying structure of the data source. It is like asking a waiter to describe a dish they haven't tasted; they can relay the name, but the nuance is lost.

3. Vendor Lock-in

Skills are frequently tied to specific frameworks. A skill built for an OpenAI-based agent might not easily port to a Llama 3 implementation or a Claude-powered system. In a world where model performance fluctuates monthly, this lack of portability is a strategic risk.

The MCP Revolution: A Universal Data Bus

The Model Context Protocol represents a philosophical shift from teaching an agent how to act (skills) to connecting an agent to a world of data (MCP). Developed as an open standard, MCP acts as a universal connector—think of it as the USB-C port for the AI age.

Instead of a developer writing a specific skill for every database, a company can deploy an MCP Server. This server sits in front of the data source and speaks a standardised language. Any MCP-compatible 'Client' (the AI agent) can then plug into that server and immediately understand what data is available, how to query it, and what actions it can perform.

The Host-Server Relationship

At the heart of MCP is a clean separation of concerns. The Host (the AI application, such as Claude Desktop or a bespoke enterprise IDE) connects to a Server. The Server provides three main things:

  • Resources: Static or dynamic data (like a README file or a live database table).

  • Prompts: Pre-defined templates that help the model understand how to interact with the data.

  • Tools: Executable functions that can change the state of the world (like sending an email or updating a Jira ticket).
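The three primitives can be modelled in a few lines of plain Python. This is not the real MCP SDK; the class and method names (`McpServer`, `list_capabilities`) are invented for illustration, but the shape of the separation is faithful to the bullet points above:

```python
from dataclasses import dataclass, field

# Dependency-free model of the three MCP server primitives.
# NOT the real MCP SDK; names are illustrative only.
@dataclass
class McpServer:
    resources: dict = field(default_factory=dict)  # data the client may read
    prompts: dict = field(default_factory=dict)    # interaction templates
    tools: dict = field(default_factory=dict)      # state-changing functions

    def list_capabilities(self) -> dict:
        # The crucial difference from a bespoke skill: any client can
        # discover everything on offer through one standardised call.
        return {
            "resources": sorted(self.resources),
            "prompts": sorted(self.prompts),
            "tools": sorted(self.tools),
        }

server = McpServer()
server.resources["readme"] = "Warehouse WMS, schema v3"
server.prompts["query_stock"] = "Given SKU {sku}, report stock on hand."
server.tools["update_ticket"] = lambda ticket_id: f"updated {ticket_id}"

print(server.list_capabilities())
```

Note the division of labour: resources and prompts only inform the model, while tools are the single, auditable place where the world can be changed.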

This architecture mirrors the early days of the World Wide Web. Before HTTP, connecting to different computers was a proprietary nightmare. HTTP provided the protocol that allowed any browser to talk to any server. MCP is doing the same for the relationship between intelligence and information.

The Singapore Lens: Infrastructure for a Smart Nation

Singapore is uniquely positioned to lead the adoption of MCP. Unlike larger, more fragmented geographies, the city-state’s 'Smart Nation' initiative has already laid the groundwork for high-quality, structured data through platforms like Singpass and the various Open Government Products (OGP).

A Vignette from the CBD

Consider a logistics manager at a warehouse in Jurong. Under the old 'skills' regime, integrating an AI assistant into the warehouse management system, the local weather forecast, and the PSA Singapore port schedules would have required three separate, expensive development projects.

With MCP, the PSA could provide a public MCP Server. The logistics company’s AI agent—regardless of which model it uses—simply 'subscribes' to the PSA server. There is no custom code to write. The protocol handles the handshake. This isn't just a technical upgrade; it's a massive reduction in the 'friction of doing business'—a metric Singapore monitors with obsessive precision.

Government Policy and the Sandbox

The Infocomm Media Development Authority (IMDA) has long championed the 'sandbox' approach to tech regulation. MCP fits perfectly into this ethos. Because MCP allows for local servers (the data doesn't have to leave the company's firewall to be understood by the protocol), it addresses one of the primary concerns of the Singaporean regulator: data sovereignty.

Local banks, such as DBS or UOB, can build internal MCP servers that allow their AI analysts to query sensitive financial data securely. The LLM remains the 'reasoning engine' in the cloud, while the data stays securely in the Lion City, connected only through the thin, controlled pipe of the Model Context Protocol.
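As a concrete illustration of such a local server, hosts like Claude Desktop are configured through a JSON file (conventionally `claude_desktop_config.json`) whose `mcpServers` map lists the servers to launch. The server name and command below are hypothetical, and the exact keys can vary between host versions:

```json
{
  "mcpServers": {
    "warehouse": {
      "command": "python",
      "args": ["warehouse_server.py"]
    }
  }
}
```

The host launches the listed process on the local machine and talks to it over standard input/output, so the underlying data never crosses the firewall; only the model's queries and the server's structured answers travel through that thin, controlled pipe.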

Moving from 'Doing' to 'Knowing'

The most profound shift from Skills to MCP is the move from action-orientation to context-orientation. Skills are about doing: "Go get this file." MCP is about knowing: "Here is the structure of my entire project; reason across it."

In the 'Skills' era, the AI was a sophisticated remote control. In the 'MCP' era, the AI becomes a collaborator with a seat at the table. For a creative agency in Tiong Bahru, this means an AI that doesn't just 'generate a logo' (a skill) but an AI that understands the client's brand history, the current market trends in Southeast Asia, and the technical constraints of the printing press, because it is connected to all those data sources via a single protocol.

The Developer Experience (DX)

For the developers at the National University of Singapore (NUS) or the Nanyang Technological University (NTU), MCP represents a liberation from 'plumbing'. Currently, an estimated 60-70% of AI development time is spent on data ingestion and API mapping. By adopting MCP, this time can be redirected toward fine-tuning the reasoning capabilities of the agents or designing better user experiences.

The Economic Implications for the Region

Singapore has always thrived as a 'middleman'—a hub where the world's trade routes converge. In the AI economy, the 'trade routes' are data streams. By championing a standardised protocol like MCP, Singapore can position itself as the 'MCP Hub' of Asia.

Imagine a future where a regional HQ in Singapore manages AI agents that coordinate supply chains across Vietnam, Indonesia, and Malaysia. If all these entities use the Model Context Protocol, the interoperability would be seamless. The 'Silicon Island' would not just be producing chips or code, but maintaining the very standards that allow the global AI economy to function.

Risks and Considerations

Of course, no architectural shift is without its perils. The move to a universal protocol requires a level of openness that some legacy vendors may find threatening.

  1. Security: While MCP allows for local data hosting, the protocol itself must be hardened against injection attacks. If an agent can 'discover' all the tools in a server, it must be strictly governed by permissions.

  2. Standards War: While Anthropic has open-sourced MCP, other giants like OpenAI or Google may push their own standards. Singapore’s role as a neutral, pro-business actor will be vital in navigating these 'protocol wars'.

  3. The Talent Gap: Transitioning from traditional software engineering to 'agentic' engineering requires a mindset shift. The focus moves from 'writing code' to 'designing contexts'.

The Future: Toward an Agentic Society

As we look toward the end of the decade, the distinction between a 'user' and an 'agent' will blur. We will have personal agents that manage our schedules, our health (integrated with the HealthHub SG app), and our investments.

The 'Skills' approach would lead to a cluttered, unmanageable digital life—a hundred different apps that don't talk to each other. The MCP approach leads to a cohesive digital ecosystem. It is the difference between a city of disconnected kampongs and the integrated, high-functioning metropolis that Singapore is today.

In the boardrooms of Temasek and GIC, the conversation is shifting. It is no longer about if AI will change the world, but how the infrastructure will support it. The Model Context Protocol is the first piece of that infrastructure that feels truly permanent. It is the foundation upon which the next generation of intelligent enterprise will be built.

Key Practical Takeaways

  • Audit Your Integrations: Businesses should review their current AI 'skills' and identify where bespoke code can be replaced by the Model Context Protocol to reduce technical debt.

  • Adopt an 'MCP-First' Mentality: When selecting new software vendors, prioritise those who offer MCP servers. This ensures your data is immediately 'AI-ready' without further integration costs.

  • Invest in Context, Not Just Action: Shift development resources away from building individual 'tools' and toward building rich, data-dense MCP 'resources' that provide models with the full picture.

  • Focus on Data Sovereignty: Leverage MCP’s architecture to keep sensitive data on-premises or within local cloud regions (like AWS Singapore), using the protocol to provide context to cloud-based LLMs safely.

  • Upskill for the Protocol Era: Train engineering teams in Singapore to move beyond API mapping and toward the design of standardised prompts and resource templates within the MCP framework.

Frequently Asked Questions

How does MCP differ from a standard REST API?

While a REST API requires the developer to write specific code to handle every request and response, MCP provides a standardised 'handshake'. An MCP-compatible agent can automatically discover the capabilities of an MCP server without the developer needing to write custom integration code for every new tool.
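The contrast can be sketched with two hypothetical clients: a REST integration that must know every path in advance, and a protocol-style client that learns a server's capabilities through a single discovery call. `PortServer`, `ProtocolClient`, and their methods are invented names, not part of any real SDK:

```python
# Hypothetical contrast: bespoke REST integration vs. protocol-style discovery.

def rest_integration(base_url: str) -> dict:
    # With plain REST, the developer must already know every path and payload
    # shape; each new endpoint means new hand-written client code like this.
    return {f"GET {base_url}/manifests": "hand-written parser required"}

class PortServer:
    """Stands in for any server that answers the standard handshake."""
    def capabilities(self) -> list:
        return ["berth.schedule", "manifest.read", "tide.table"]

class ProtocolClient:
    """Generic client: works against any server exposing capabilities()."""
    def __init__(self, server):
        self.server = server

    def discover(self) -> list:
        # One generic call replaces N bespoke integrations.
        return sorted(self.server.capabilities())

client = ProtocolClient(PortServer())
print(client.discover())
```

The generic client never changes when the server adds a capability; the bespoke REST integration changes every time.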

Is MCP restricted to Anthropic’s Claude models?

No. While Anthropic initiated the protocol and released it as an open standard, MCP is designed to be model-agnostic. Any AI model (from OpenAI, Google, or open-source providers like Meta) can implement the 'Client' side of the protocol to connect to any MCP Server.

What is the immediate benefit for a Singapore-based SME?

For an SME, the primary benefit is cost and speed. Instead of hiring expensive consultants to build custom AI integrations for their accounting or inventory software, they can use off-the-shelf MCP servers. This allows them to deploy sophisticated AI agents in days rather than months, keeping them competitive in a high-cost environment.
