On December 9, 2025, the Linux Foundation announced the formation of the Agentic AI Foundation (AAIF) — a new open-source governance body co-founded by Anthropic, OpenAI, and Block. Its inaugural projects: Anthropic’s Model Context Protocol, Block’s goose agent framework, and OpenAI’s AGENTS.md standard. The announcement marked a turning point. MCP, launched just thirteen months earlier as an internal Anthropic tool, had become infrastructure-grade technology — the kind of protocol that gets donated to neutral foundations precisely because the industry cannot afford to let it be owned by anyone.
For enterprise clients evaluating how their AI stacks connect to external data — including observation data — this matters.
What MCP Is and Why It Spread So Fast
Model Context Protocol is an open standard that defines how AI agents connect to external tools, data sources, and services. Before MCP, every integration between an AI system and an external data source required a custom connector, so N agents reaching M data sources could require up to N × M bespoke connections. BCG described the problem succinctly: without a common protocol, integration effort rises quadratically as AI spreads through an organisation. With MCP, each agent implements one client and each data source exposes one server, so the effort rises only linearly.
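The scaling argument can be made concrete with a quick back-of-the-envelope comparison. The agent and data-source counts below are illustrative, not drawn from the BCG analysis:

```python
# Illustrative integration-count comparison (hypothetical numbers).
agents = 12    # AI agents deployed across the organisation
sources = 25   # data sources those agents need to reach

# Without a shared protocol: in the worst case, every agent needs
# its own bespoke connector to every data source it touches.
bespoke_connectors = agents * sources

# With MCP: each agent implements one client, each source exposes
# one server; integration effort grows with the sum, not the product.
mcp_integrations = agents + sources

print(bespoke_connectors)  # 300
print(mcp_integrations)    # 37
```

Adding a thirteenth agent under the bespoke model means up to 25 new connectors; under MCP it means exactly one new client.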
That efficiency argument, combined with the simplicity of the protocol itself, drove adoption at a pace that few open standards achieve. MCP SDK downloads grew from around 100,000 in November 2024 to 8 million by April 2025. By the end of 2025, over 10,000 MCP servers had been published, monthly SDK downloads had reached 97 million, and the protocol had been adopted natively by Claude, ChatGPT, Microsoft Copilot, GitHub Copilot, AWS Bedrock, Google Gemini, and Azure AI Agent Service. In April 2026, the first MCP Dev Summit North America drew approximately 1,200 attendees to New York City.
This is not a niche developer protocol anymore. It is the plumbing of the agentic AI era.
What This Means for Enterprise AI Strategy
The implication for enterprise AI strategy is straightforward: if your AI stack supports MCP — and at this point, most serious AI platforms do — you can connect your agents to any MCP-compatible data source without custom integration work. The question shifts from “how do we connect our AI to this data?” to “what data sources should our AI be able to reach?”
That is a better question. It forces a focus on the value of the data rather than the mechanics of access. And for enterprise clients whose AI pipelines depend on high-quality, real-world observation data, it means the integration barrier between their agents and their observation programs is effectively removed.
Sentinel Watch’s MCP Server
Sentinel Watch operates a live MCP server hosted at app.sentinel-watch.org. Each enterprise client receives their own dedicated, scoped MCP connection — not a shared endpoint, but a connection provisioned specifically for their programs, their data, and their AI stack.
What this means in practice: an enterprise client whose AI agent supports MCP — whether that is Claude, a GPT-based system, a LangChain pipeline, or any other MCP-compatible framework — can connect that agent directly to their Sentinel Watch observation data. The agent can query observation records, filter by event type or geography, retrieve quality-reviewed datasets, and trigger observation program actions, all through the standard MCP tool interface. No custom integration. No data transfer pipeline to build. No manual export and import cycle between systems.
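Under the hood, that standard MCP tool interface is JSON-RPC 2.0: the agent discovers available tools via a `tools/list` request and invokes one via `tools/call`. A minimal sketch of what such a request looks like on the wire, where the tool name `query_observations` and its arguments are hypothetical illustrations rather than Sentinel Watch's actual tool schema:

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 "tools/call" request.
# The tool name and arguments below are hypothetical; a real client
# learns the server's actual tool schema from a "tools/list" response.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_observations",   # hypothetical tool name
        "arguments": {                  # tool-specific arguments
            "event_type": "wildlife_sighting",
            "region": "pacific-northwest",
            "quality_reviewed": True,
        },
    },
}

print(json.dumps(request, indent=2))
```

Because every MCP-compatible agent framework emits and consumes these same message shapes, the same observation query works unchanged whether the calling agent is Claude, a GPT-based system, or a LangChain pipeline.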
As MCP becomes the universal standard for how AI agents interact with external data, having a dedicated MCP connection to your observation data source is not a technical nicety. It is the difference between observation data that lives in a silo and observation data that is a live, queryable input to your AI pipeline.
The Broader Significance of AAIF Governance
The Linux Foundation’s stewardship of MCP through the AAIF matters beyond symbolic governance. The Linux Foundation has a track record of stewarding the protocols that became the invisible infrastructure of the modern internet. When critical standards move under neutral governance, they become stable enough for enterprises to build on with confidence — not dependent on any single vendor’s roadmap, not subject to unilateral change, not at risk of being deprecated when commercial incentives shift.
MCP is following that path. For enterprise clients making infrastructure decisions about how their AI agents connect to data, that stability is meaningful. The protocol they integrate against today will not be a proprietary dead end in two years. It will be the foundation that every major AI platform is building on.
Sentinel Watch’s MCP server is built on that foundation. Enterprise clients who integrate with us through MCP are not adopting a proprietary interface. They are using the same standard their AI stack already speaks.
References
- Linux Foundation. (2025). Linux Foundation Announces the Formation of the Agentic AI Foundation (AAIF). linuxfoundation.org
- Model Context Protocol Blog. (2025). One Year of MCP: November 2025 Spec Release. blog.modelcontextprotocol.io
- Wikipedia. (2026). Model Context Protocol. wikipedia.org
- Thoughtworks. (2025). The Model Context Protocol’s Impact on 2025. thoughtworks.com