TLDR: Microsoft has launched a public preview enabling Azure Logic Apps (Standard) to function as Model Context Protocol (MCP) servers. This innovation allows developers to expose Logic Apps workflows, powered by over 1,400 connectors, as discoverable and callable ‘agent tools’ for Large Language Models (LLMs) and AI agents, streamlining enterprise integration and AI agent development.
Microsoft has announced the public preview of a significant new capability for Azure Logic Apps (Standard), allowing these applications to operate as Model Context Protocol (MCP) servers. This development transforms existing Logic Apps connectors and workflows into callable agent tools, making them accessible to Large Language Models (LLMs), AI agents, and MCP clients.
The Model Context Protocol (MCP) is an open standard designed to facilitate secure, discoverable, and structured interaction between AI components and external systems. It defines how tools are described, executed, and authenticated, enabling AI agents to perform real-world tasks such as querying databases, sending emails, interacting with APIs, or triggering complex business workflows. This capability is poised to significantly reduce development overhead, enhance reusability, and provide a streamlined path for integrating diverse enterprise systems, offering both scalability and adaptability for complex scenarios.
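In practice, an MCP client discovers a server's tools with a `tools/list` request and invokes one with `tools/call`, both carried as JSON-RPC 2.0 messages. A minimal invocation might look like the following sketch, where the tool name and arguments are illustrative rather than taken from any specific Logic Apps workflow:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "send_order_confirmation",
    "arguments": {
      "orderId": "12345",
      "recipient": "ops@contoso.com"
    }
  }
}
```

The server responds with a structured result (or an error object), which the LLM or agent can then incorporate into its reasoning.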
According to Kent Weare, a principal program manager at Microsoft, this dynamic composition of tools within Logic Apps empowers developers to “rapidly construct agents that are both scalable and adaptable to complex enterprise scenarios.” The integration leverages Azure Logic Apps’ extensive library of over 1,400 connectors, allowing organizations to front existing workflows and a vast catalog of cloud and on-premises systems through MCP, effectively turning them into sophisticated agent tools.
Technically, each tool must be implemented as a workflow that begins with an HTTP Request trigger and ends with a Response action. Authentication is handled via OAuth 2.0, with Easy Auth enforcing client, identity, and tenant restrictions. While streamable HTTP transport works out of the box, Server-Sent Events (SSE) require virtual network (VNET) integration and specific host.json settings. The MCP APIs are enabled by adding extensions.workflow.McpServerEndpoints.enable=true to the host.json configuration.
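Concretely, the dotted setting above maps to the following host.json fragment (a sketch based on the path given in the announcement):

```json
{
  "version": "2.0",
  "extensions": {
    "workflow": {
      "McpServerEndpoints": {
        "enable": true
      }
    }
  }
}
```

A tool workflow itself pairs the HTTP Request trigger with a Response action. A skeletal workflow definition might look like the sketch below; the trigger schema and response body are placeholder assumptions, not taken from the preview documentation:

```json
{
  "definition": {
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
          "schema": {
            "type": "object",
            "properties": {
              "orderId": { "type": "string" }
            }
          }
        }
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "inputs": {
          "statusCode": 200,
          "body": "@triggerBody()"
        },
        "runAfter": {}
      }
    }
  }
}
```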
Furthermore, Microsoft is providing an API Center registration path (currently in preview) where MCP servers can be created and registered, allowing selected managed connector actions to become cataloged and governed tools. This preview path has notable limitations, however: developers must start from an empty Standard logic app resource, each MCP server supports only one connector, each tool exposes only one action, and only managed connectors are supported; built-in service-provider and custom connectors are excluded.
Also Read:
- GitHub Launches MCP Registry to Streamline AI Tool Discovery and Integration
- Universal Tool Calling Protocol (UTCP): Streamlining AI Agent Integration with External Tools
This initiative underscores Microsoft’s commitment to fostering a robust ecosystem for AI agent development, providing enterprise-grade tools for operationalizing AI capabilities within existing business processes. Sneha Daggubati, a senior cloud architect at Microsoft, also commented on this development, highlighting its potential.