A Tale of Two Conferences - Microsoft Build vs Google I/O - The Agentic Parts
Does Yahoo Pipes make a comeback?
Over the last two weeks, Google and Microsoft both held their developer conferences, and of course AI agents were front and center at both. AI agents hold great potential for aspiring entrepreneurs taking aim at building a multi-billion-dollar company. They also carry the most revenue potential for Microsoft and Google, as agents increase data center utilization and thereby improve the ROI of those investments for both companies.
Obviously they push this narrative.
But how did their offerings compare?
Let’s start with Microsoft, the undisputed Enterprise champion.
Microsoft
In his keynote, Satya Nadella made clear that Microsoft sees AI agents as a replacement for the app layer.
“We envision a world in which agents operate across individual, organizational, team and end-to-end business contexts. This emerging vision of the internet is an open agentic web, where AI agents make decisions and perform tasks on behalf of users or organizations.” (source)
What I found most interesting at the Microsoft conference was largely related to integrations into their existing solution stack.
For example, Windows AI Foundry. Here, Microsoft aims to offer a unified platform supporting the AI developer lifecycle from training to inference. The benefit is that together with pre-built code agents, co-pilots, custom agent building blocks, and multi-agent capabilities, enterprises can build upon their existing IT investments across teams to increase productivity and reliability while keeping costs in one place. I'd conclude that Teams, as a natural "chat" interface, might evolve into an even more important part of the corporate productivity workflow.
Since Microsoft is also bringing new focus to Semantic Kernel and AutoGen by converging them into a single, developer-focused SDK, and is adding support for Google's Agent-to-Agent (A2A) protocol and Anthropic's Model Context Protocol (MCP), we can see standards in the agent space emerging. For MCP especially, Microsoft aims to integrate it into a variety of services, including but not limited to GitHub, Copilot Studio, Dynamics 365, Azure AI Foundry, Semantic Kernel, and Windows 11.
Copilot Tuning is another interesting idea, in my opinion, as it allows devs to fine-tune models and build agents using their own company data, workflows, and processes in a simple, low-code way. These agents can then perform highly accurate, domain-specific tasks securely from within the Microsoft 365 service boundary. For enterprise clients, this is exactly what I expect from Microsoft.
One of the challenges for enterprises (remember the "Recall" disaster) is effective memory management, a topic I have also tackled repeatedly. Among the challenges Microsoft mentioned were managing context, fragmentation of memory, retrieval precision, and the legal question of who owns the corporate knowledge created by employees. In other words, Recall 2.0.
For precision at least, Microsoft introduced Structured RAG, a solution that extracts information from each conversation and structures it. I understand it as meta-information about conversations (essentially a key-value index), allowing for more efficient and precise retrieval.
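To make the idea concrete, here is a minimal sketch of my reading of Structured RAG, not Microsoft's implementation: instead of searching raw transcripts, a (here, toy) extractor pulls structured fields from each conversation, and retrieval runs over those fields. The entity and topic lists are assumptions for illustration; in practice an LLM would do the extraction.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationRecord:
    conv_id: str
    text: str
    # Structured metadata extracted from the conversation.
    entities: set = field(default_factory=set)
    topics: set = field(default_factory=set)

def extract_metadata(conv_id: str, text: str) -> ConversationRecord:
    """Toy extractor: a real system would use an LLM to pull entities/topics."""
    words = {w.strip(".,").lower() for w in text.split()}
    known_entities = {"azure", "teams", "copilot"}      # assumed entity list
    known_topics = {"billing", "deployment", "memory"}  # assumed topic list
    return ConversationRecord(
        conv_id=conv_id,
        text=text,
        entities=words & known_entities,
        topics=words & known_topics,
    )

def retrieve(index: list, entity: str) -> list:
    """Precise lookup over the structured fields, not fuzzy text search."""
    return [r.conv_id for r in index if entity in r.entities]

index = [
    extract_metadata("c1", "We discussed Azure billing for Teams."),
    extract_metadata("c2", "Copilot deployment notes."),
]
print(retrieve(index, "azure"))  # ['c1']
```

The point of the structure is that retrieval becomes an exact match over extracted fields, which is where the precision gain would come from.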
They also introduced TypeAgent, a repository that explores an architecture for building a single personal agent with a natural language interface. I suppose the goal of the TypeAgent team is to solve one major integration problem: how can work be done safely and efficiently when combining stochastic systems like language models with traditional software components?
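One common answer to that integration problem, and my own illustrative sketch rather than TypeAgent's actual code, is to let the stochastic model only propose actions as data, and have deterministic software validate them against a typed schema before anything executes. The `SendMail` action and its JSON shape below are hypothetical:

```python
import json
from dataclasses import dataclass

@dataclass
class SendMail:
    to: str
    subject: str

def parse_action(raw: str) -> SendMail:
    """Reject any model output that does not match the expected typed action."""
    data = json.loads(raw)
    if set(data) != {"type", "to", "subject"} or data["type"] != "send_mail":
        raise ValueError(f"unsupported action: {raw!r}")
    if "@" not in data["to"]:
        raise ValueError("'to' must be an email address")
    return SendMail(to=data["to"], subject=data["subject"])

# A well-formed model response passes; a hallucinated one is rejected
# before it ever reaches the traditional software layer.
action = parse_action('{"type": "send_mail", "to": "a@b.com", "subject": "hi"}')
print(action)

try:
    parse_action('{"type": "rm_rf", "to": "/", "subject": ""}')
except ValueError as e:
    print("rejected:", e)
```

The stochastic part stays quarantined to generation; everything downstream of `parse_action` behaves like ordinary, testable software.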
Another interesting concept introduced was NLWeb. NLWeb is open source and might play a role for the agentic web similar to what HTML played for the web of 1997. So if MCP is HTTP, then NLWeb is HTML. Easy enough, I guess. The interesting twist is that every NLWeb endpoint will also be an MCP server.
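My mental model of that twist, sketched below under my own assumptions (this is not the NLWeb spec): a site exposes one handler that answers natural-language questions over its content, and that same callable can be registered as an MCP tool, so humans and agents hit a single code path. The keyword match stands in for an LLM.

```python
# Assumed site content; the lookup is a naive stand-in for an LLM over it.
SITE_CONTENT = {
    "shipping": "Orders ship within 2 business days.",
    "returns": "Returns are accepted within 30 days.",
}

def ask(question: str) -> str:
    """One natural-language entry point for the whole site."""
    q = question.lower()
    for topic, answer in SITE_CONTENT.items():
        if topic in q:
            return answer
    return "Sorry, I don't know."

# The same `ask` callable could then be exposed as an MCP tool, making the
# NLWeb endpoint double as an MCP server for visiting agents.
print(ask("What is your returns policy?"))  # Returns are accepted within 30 days.
```

That dual exposure is presumably why the HTML analogy works: one artifact serves both the human-facing and the agent-facing web.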
Model Router, finally, addresses one of the problems of integrating genAI and agents effectively into enterprise workflows, i.e., selecting the right model for the right task. I had always done this manually, but Model Router aims to do it automatically. This would allow for a higher degree of dynamism in operational workflows, though I am not 100% certain it is actually needed.
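Conceptually, the manual version of this is simple enough that it is worth seeing why automating it is attractive. A hedged sketch, with placeholder model names and a toy heuristic (a real router would use a learned classifier, and these are not Azure's model names):

```python
# Placeholder model tiers; not an actual model catalog.
ROUTES = {
    "code": "large-code-model",
    "chat": "small-chat-model",
    "reasoning": "large-reasoning-model",
}

def classify(prompt: str) -> str:
    """Toy heuristic standing in for a learned task classifier."""
    p = prompt.lower()
    if "def " in p or "function" in p:
        return "code"
    if "prove" in p or "step by step" in p:
        return "reasoning"
    return "chat"

def route(prompt: str) -> str:
    """Pick a model per request instead of hard-wiring one per workflow."""
    return ROUTES[classify(prompt)]

print(route("Write a Python function to parse dates"))  # large-code-model
print(route("What's the weather like?"))                # small-chat-model
```

The manual approach hard-codes one model per workflow; per-request routing is what buys the extra dynamism (and, presumably, cost savings on easy requests).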
Now for Google.
Google I/O 2025
Google is working off the assumption that ADK + A2A will be their new GCP stack. At Google's I/O 2025, agents were also a focal point. While Microsoft's vision was deeply integrated into the enterprise developer ecosystem, Google's pitch was aimed more at the general web and consumer layers, with ambitious intentions but less depth in execution. In many cases, it felt like demo magic. At least Google also provided GitHub repos, so practitioners like me can try things out.
Let’s start with WebAI. From Jason Mayes’s presentation, I gather that Google aims to build a new “front page” for the internet, where agents handle your portfolio updates, relevant email, and so on, from one page.
I still remember that Google succeeded because it was very simple: it just focused on “search.” Now, WebAI is Google’s initiative to make the web more agent-friendly by turning standard web services into agent-interoperable environments. It supports agent input and output routing, essentially letting LLM-based agents interact natively with web content and APIs. Maybe we can think of it as a successor to Google Assistant, where the assistant communicates with every site or app that exposes interfaces to agents via structured semantics and endpoints. It sounds like the Google equivalent of Microsoft’s NLWeb.
Together with Agentspace, it is promoted as a “game changer.” Agentspace debuted earlier at another Google Cloud event; now, though, it integrates with the agent stack as a sort of agent dashboard, offering a portal-like interface to manage micro-errands across services like Calendar, Gmail, Docs, Drive, and Search. The evolution announced at I/O is multi-agent orchestration: different task-specific agents (e.g., one for scheduling, one for research) can collaborate within Agentspace. In theory, this enables seamless productivity, but practical examples are still limited to scripted demos.
Feels like AOL again. (source)
Or maybe Yahoo Pipes.
ADK (Agent Development Kit) is designed to help developers create agents that are context-aware, browser-native, and composable. I still haven’t gotten around to making a code clinic for it just yet. ADK lets developers define tasks, goals, and workflows that agents can pick up and iterate on, using user-specific memory and contextual signals from Workspace, Gmail, Chrome, and Search history. Given the recent MCP vulnerability, managing and understanding permissions will be a critical value driver.
A2A (Agent-to-Agent communication) was introduced as a foundational capability that enables agents to dynamically delegate tasks to other agents in a secure, structured manner. In that sense, it solves a similar problem to AutoGen. For instance, a research agent working in Gmail could pass a query to another agent that specializes in summarizing documents stored in Drive, which then returns the result for delivery to the user.
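That delegation pattern can be sketched in a few lines. This is my toy illustration of the idea, not Google's protocol: agents advertise capabilities, and an agent forwards a subtask to whichever peer claims it. Agent names and handlers are hypothetical.

```python
class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # capability name -> handler
        self.peers = []                   # other agents it can delegate to

    def handle(self, task, payload):
        if task in self.capabilities:
            return self.capabilities[task](payload)
        for peer in self.peers:           # delegate to a capable peer
            if task in peer.capabilities:
                return peer.handle(task, payload)
        raise LookupError(f"no agent can handle {task!r}")

summarizer = Agent("drive-summarizer", {"summarize": lambda t: t[:20] + "..."})
researcher = Agent("gmail-research", {"search": lambda q: f"results for {q}"})
researcher.peers.append(summarizer)

# The research agent delegates summarization it cannot do itself.
print(researcher.handle("summarize", "A very long document about quarterly results"))
```

A real A2A exchange would add discovery, authentication, and structured task schemas on top; the core idea is just this routing of work to whichever agent advertises the capability.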
Ads and the agentic web. I think we all understand that Google’s ad-driven business model will be fundamentally disrupted by agents, so the agent has to become the ad. If a user delegates all discovery and navigation to an AI agent, traditional web ads lose visibility. Google hinted at embedding ad slots into agent flows, a kind of sponsored recommendation engine inside multi-agent systems. This will likely happen.
Verdict
I was more impressed by Microsoft’s offering than by Google’s. Microsoft is executing on a coherent vision across products, with robust SDKs, enterprise-grade capabilities, and early market readiness. Google, meanwhile, is still leaning heavily on demo magic: lots of potential, but limited hands-on capabilities outside developer previews.
That said, Google's foundational pieces (WebAI + ADK + A2A) point toward a bold long-term bet: that the entire web becomes navigable and actionable via agents. It’s a compelling vision for the agentic web, but it remains early-stage, more promise than product for now.
I think my main problem is this: if we really understand agents as a replacement for the app layer, then we need to make sure we truly understand why the app layer needs replacing and what the value proposition of the replacement is.
Let’s see what Apple is up to in early June.