Using Google's Agent2Agent Protocol to Source Trending Topics
Built on Google Cloud, Vertex AI API, and the Agent2Agent protocol
One of the problems I have in the quickly evolving investment and autonomous cognitive agent space is keeping up to date with trending topics and relevant new ideas and approaches. Having this knowledge will always be a technical advantage, as it gives you more time to assess whether an idea is worthwhile or not. One way to solve this is to track Arxiv. Another is to doom scroll on social media. Mainstream media seems to be occupied with other things these days. Neither of those options sounds like much fun. So…how about we let an agent do this work for us?
Fortunately, Google wants us to make their cloud investments worthwhile, so they provided this quickstart notebook, which solves exactly that problem.
How convenient is that?
A quick reminder for my paying users, if you want to use this tutorial and run into issues, reach out in Chat, and I can see where I can help.
Setup and configurations
If you are an avid reader of my Substack, I already know you have a solid understanding of async Python. However, unlike other SDK/cloud agent integrations, Google’s setup requires a couple of additional steps that are not covered in the “quickstart”.
First, you need to have a Google Cloud Project with the right credentials, especially for the Vertex AI API enabled. Do this first; you will need the project number later.
My dashboard looks like this; you can see the project on the left:
Note down the project details. In case you need to → set up your own project.
Then you need to set up, as usual, your virtual environment with Python 3.11+.
For this purpose, I created an “A2A” folder.
mkdir A2A
cd A2A
And then create and activate your virtual environment like this.
python3.12 -m venv venv
source venv/bin/activate
Note that I run several Python binaries side by side on my Ubuntu machine, so I have to reference the correct 3.11+ Python binary accordingly.
Google Cloud SDK
While we are still in the CLI: to run Gemini through the Vertex AI API, the A2A agents need access to your Google Cloud account (including billing), and this SDK facilitates that.
You can download it like this.
# Download the tar file
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz
# Unpack the tar file
tar -xf google-cloud-cli-linux-x86_64.tar.gz
Once this is done, you will need to run the installation shell script.
./google-cloud-sdk/install.sh
If Google asks whether it should “modify profile to update your $PATH and enable shell command completion?”, in most cases you should say “yes”. It will then ask which rc file to modify; confirm that the suggested .bashrc is the one your shell actually uses.
After the basic setup is done, you can initialize the connection to the cloud services.
./google-cloud-sdk/bin/gcloud init
The script will ask you for your Google Cloud account email and Project ID, so that’s why it made sense to do this first.
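Beyond gcloud init, the notebook's Vertex AI calls need Application Default Credentials and the Vertex AI API enabled. These are standard gcloud commands; the project ID below is a placeholder you should replace with your own:

```shell
# Point the SDK at your project (replace with your own project ID)
gcloud config set project YOUR_PROJECT_ID

# Create Application Default Credentials so the Vertex AI API calls can authenticate
gcloud auth application-default login

# Enable the Vertex AI API for the project (no-op if it is already enabled)
gcloud services enable aiplatform.googleapis.com
```

This is a configuration fragment, not something you can dry-run; the second command opens a browser window for the OAuth flow.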
Later, when you run the script, it will begin by installing these packages. Note the Google Agent Development Kit (google-adk) and the A2A SDK (a2a-sdk).
%pip install --upgrade -q sympy google-genai google-adk a2a-sdk python-dotenv aiohttp uvicorn requests mermaid-python nest-asyncio
The Agent Architecture
As shown above, in this notebook we will be using three distinct agents.
Trending Topics Agent
This agent uses Google Search to identify which topics are trending.
When asked about trends:
1. Search for "trending topics today" or similar queries
2. Extract the top 3 trending topics
3. Return them in a JSON format
If you look at the prompt, especially interesting is that the response must match the format seen below. This is similar to the approaches I recommended here and here.
{
  "topic": "Topic name",
  "description": "Brief description (1-2 sentences)",
  "reason": "Why it's trending"
},
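Since downstream agents consume this JSON, it pays to validate it before passing it on. Here is a minimal sketch of such a check; the parse_trends helper is hypothetical (not part of the notebook), and it assumes the agent returns a JSON list of objects in the format above:

```python
import json

# Keys the prompt instructs the agent to return for every trend entry.
REQUIRED_KEYS = {"topic", "description", "reason"}

def parse_trends(raw: str) -> list[dict]:
    """Parse the agent's reply and check each entry has the expected keys."""
    trends = json.loads(raw)
    for entry in trends:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"Trend entry missing keys: {missing}")
    return trends

# Example reply in the prompt's format (illustrative content)
reply = '''[
  {"topic": "Agent2Agent", "description": "A protocol for agent interop.", "reason": "New Google release"}
]'''
print(len(parse_trends(reply)))  # → 1
```

A check like this turns a silently malformed reply into an immediate, debuggable error.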
Trend Analyzer Agent
Once it receives topics from the Trending Topics Agent, this agent gathers and analyzes quantitative signals such as search volume trends, sentiment scores, and historical popularity. It ranks or scores the trends using statistical models, helping to distinguish between short-lived hype and meaningful shifts.
In the script, the initialization of the Agent looks like this. You will observe that Google uses the Agent object from the Agent Development Kit we installed earlier. I wanted to use this as an example of how standardized most agent setups have become: the instruction acts like the system prompt, and tools, as usual, are passed as an array.
Similar to Smolagents, an agent can be a “manager” agent, being responsible for orchestrating “worker drones”.
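Since the screenshot doesn't reproduce well here, this is roughly what that initialization looks like. Treat it as a hedged sketch following the public ADK quickstart, not the notebook's exact code; the model name in particular is an assumption, and it requires google-adk and working cloud credentials to run:

```python
# Hedged sketch of an ADK worker-agent setup (requires `pip install google-adk`).
from google.adk.agents import Agent
from google.adk.tools import google_search

trending_agent = Agent(
    name="trending_topics_agent",
    model="gemini-2.0-flash",  # assumed model; the notebook may pin a different one
    description="Finds currently trending topics via Google Search.",
    instruction=(
        "When asked about trends, search for 'trending topics today', "
        "extract the top 3, and return them as JSON."
    ),
    tools=[google_search],  # tools are, as usual, an array
)
```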
Host Agent
The Host Agent finally coordinates the workflow, making sure each agent runs in the correct sequence and passes information smoothly between them. It handles input/output, manages exceptions, and can optionally enrich the pipeline by adding prompts or parameters based on user goals or context (e.g., focusing on trends in autonomous agents).
The screenshot is a bit abbreviated, but I want to point out that the Host Agent orchestrates the workers through two tools rather than referencing the agents directly. Both of these tools are implemented in the A2AToolClient class.
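To make the structure concrete, here is a schematic skeleton of such a client with two orchestration tools. The method bodies are stand-ins I wrote from the description above, not the notebook's implementation (the real class speaks A2A over HTTP), and the add_remote_agent helper is an assumption:

```python
# Schematic only: two tools the host agent can call to orchestrate workers.
class A2AToolClient:
    def __init__(self) -> None:
        self._remote_agents: dict[str, str] = {}  # url -> human-readable name

    def add_remote_agent(self, url: str, name: str) -> None:
        """Register a worker agent's server URL (hypothetical helper)."""
        self._remote_agents[url] = name

    def list_remote_agents(self) -> list[str]:
        """Tool 1: let the host agent discover which worker agents exist."""
        return [f"{name} @ {url}" for url, name in self._remote_agents.items()]

    async def create_task(self, url: str, message: str) -> str:
        """Tool 2: hand a message to a remote agent (real version POSTs JSON-RPC)."""
        raise NotImplementedError("sketch only")

client = A2AToolClient()
client.add_remote_agent("http://localhost:10020", "Trending Agent")
print(client.list_remote_agents())
```

The point is the shape: the host agent never imports the worker agents, it only holds URLs and tool methods.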
Recall that Agent2Agent is also a client-server model similar to MCP, with the distinction that with MCP you serve tools, while with A2A you serve full agents. Of course, in general, an MCP tool can also be an agent.
Overall, A2A’s client-server architecture is based on the following paradigm, directly from the horse’s mouth:
A2A Client (Client Agent): An application, service, or another AI agent that acts on behalf of the user to request actions or information from a remote agent. The client initiates communication using the A2A protocol.
A2A Server (Remote Agent): An AI agent or agentic system that exposes an HTTP endpoint implementing the A2A protocol. It receives requests from clients, processes tasks, and returns results or status updates. The remote agent operates as an "opaque" system from the client's perspective, meaning the client doesn't need to know its internal workings, memory, or tools.

A server's information is stored in an “Agent Card”, a metadata JSON. It includes essential information about the agent’s identity, such as its name, description, service endpoint URL, and version, as well as its supported A2A capabilities like streaming or push notifications, the skills it offers, default input/output modalities, and any authentication requirements.
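An Agent Card for the Trending Topics Agent could look roughly like this. The field names follow the A2A specification; the values are illustrative, not taken from the notebook:

```json
{
  "name": "Trending Topics Agent",
  "description": "Searches Google for currently trending topics.",
  "url": "http://localhost:10020",
  "version": "1.0.0",
  "capabilities": { "streaming": true, "pushNotifications": false },
  "defaultInputModes": ["text"],
  "defaultOutputModes": ["text"],
  "skills": [
    {
      "id": "find_trends",
      "name": "Find trending topics",
      "description": "Returns the top three trending topics as JSON."
    }
  ]
}
```

Per the spec, a server publishes this card at a well-known path so that clients can discover it before sending any tasks.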
Given this information, it becomes clear how the orchestrator interacts with the agents.
Serving Agents
One of the main benefits of running A2A is that, similar to MCP, agents are discoverable and standardized in their communication protocol. One aspect I particularly liked about A2A is that it makes it quite straightforward to run multiple agent servers locally.
Like this:
trending_thread = run_agent_in_background(create_trending_agent_server, 10020, "Trending Agent")
analyzer_thread = run_agent_in_background(create_analyzer_agent_server, 10021, "Analyzer Agent")
host_thread = run_agent_in_background(create_host_agent_server, 10022, "Host Agent")
I won’t go into too much detail on the implementation of the “run_agent” function, other than that it leverages uvicorn, a lightweight ASGI server for asynchronous Python web applications that operates, you guessed it, without blocking I/O.
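The underlying pattern is simple: start a server on a daemon thread so the notebook's main thread stays free. Here is a stdlib-only illustration of that pattern; the notebook's version wraps uvicorn and a real agent app instead of http.server, so consider this a sketch of the idea, not the actual function:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class _PingHandler(BaseHTTPRequestHandler):
    """Stand-in for an agent app: answers every GET with 'ok'."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the notebook output quiet

def run_agent_in_background(port: int, name: str) -> threading.Thread:
    """Start a server on its own daemon thread so the main thread is not blocked."""
    server = HTTPServer(("localhost", port), _PingHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True, name=name)
    thread.start()
    return thread

demo_thread = run_agent_in_background(10099, "Demo Agent")
```

Because the threads are daemons, they die with the notebook kernel, so you do not have to tear the servers down by hand.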
The statements above then start the servers, making them discoverable.
You should see an output like this once you run it:
Creating Tasks
Of course, these agents are not worth much if they can’t do any actual work for us. You put them to work by calling the “create_task” function with the agent server URL and a message.
trending_topics = await a2a_client.create_task("http://localhost:10020", "What's trending in autonomous agents today?")
print(trending_topics)
Although the string above looks like a prompt, it is not (yet) one: it is a message that is handed to the agent server in the request payload.
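Under the hood, that message travels as a JSON-RPC request. The sketch below shows roughly what such a payload looks like; the method name and message shape follow the A2A specification, but treat the exact field names as assumptions, since the SDK builds this for you:

```python
import json
import uuid

def build_send_payload(text: str) -> dict:
    """Hedged sketch of the JSON-RPC payload a client sends to an A2A server."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",  # A2A's method for delivering a message
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

payload = build_send_payload("What's trending in autonomous agents today?")
print(json.dumps(payload, indent=2)[:120])
```

Only once this payload reaches the server does the message get injected into the remote agent's context and effectively become a prompt.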
Once executed, the agent then runs the instructions provided in the message and returns a valid JSON. You can run each “worker” agent separately or the orchestrator agent. In my case, it looked like this:
In Conclusion
What I liked about A2A is that, similar to MCP, it formalizes a clean, extensible interface for agent communication. It sure does feel like overkill in some parts, though. Adopting JSON-RPC as its backbone was the right decision, allowing A2A agents to interact without making assumptions about each other’s internal design, which makes cross-agent collaboration predictable and robust. This should have a noticeable impact on reliability.
Also, standardized metadata in the form of Agent Cards and URL endpoints is a useful step towards an agentic web. Using this, agent swarms can be orchestrated sequentially or in parallel, supporting structured workflows across distributed systems. I think this is essential infrastructure for building systems where agents are autonomous participants in a shared process.
It’s quite useful, and I only spent 13 cents.