Unlike traditional AI models that respond to single prompts (like ChatGPT's basic Q&A mode), AI agents can plan, reason, and execute multi-step tasks by interacting with tools, data sources, APIs, and even other agents.
Sounds abstract? That's because it is. While most might agree with this definition or expectation of what agentic AI can do, it's so theoretical that many AI agents available today wouldn't make the grade.
As my colleague Sean Falconer noted recently, AI agents are in a "pre-standardization phase." While we'd broadly agree on what they should or could do, today's AI agents lack the interoperability they'll need to not just do something, but actually do work that matters.
Think about how many data systems you or your applications need to access every day, such as Salesforce, wiki pages, or other CRMs. If those systems aren't currently integrated or they lack compatible data models, you've just added more work to your schedule (or lost time spent waiting). Without standardized communication for AI agents, we're just building a new kind of data silo.
No matter how the industry changes, having the expertise to turn the potential of AI research into production systems and business outcomes will set you apart. I'll break down three open protocols that are emerging in the agent ecosystem and explain how they can help you build useful AI agents, that is, agents that are viable, sustainable solutions for complex, real-world problems.
The current state of AI agent development
Before we get into AI protocols, let's review a practical example. Imagine we're interested in learning more about business revenue. We could ask the agent a simple question by using this prompt:
Give me a prediction for Q3 revenue for our cloud product.
From a software engineering perspective, the agentic program uses its AI models to interpret this input and autonomously build a plan of execution toward the desired goal. How it accomplishes that goal depends entirely on the list of tools it has access to.
When our agent wakes up, it will first search for the tools under its /tools directory. This directory will have guiding files to assess what's within its capabilities. For example:
/tools/list
/Planner
/GenSQL
/ExecSQL
/Judge
You can also look at it based on this diagram:

Confluent
The main agent receiving the prompt acts as a controller. The controller has discovery and management capabilities and is responsible for communicating directly with its tools and other agents. This works in five basic steps, sketched in code after the list:
- The controller calls on the planning agent.
- The planning agent returns an execution plan.
- The judge reviews the execution plan.
- The controller uses GenSQL and ExecSQL to execute the plan.
- The judge reviews the final plan and provides feedback to determine whether the plan needs to be revised and rerun.
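To make that flow concrete, here is a minimal sketch of the controller loop in Go. The Tool interface, the tool names, and the "approved" check are assumptions made for illustration; they are not part of any particular framework.

package controllerdemo

import (
	"context"
	"fmt"
	"strings"
)

// Tool is a hypothetical interface for anything the controller can call:
// the planner, the judge, or the SQL generation and execution tools.
type Tool interface {
	Call(ctx context.Context, input string) (string, error)
}

// Controller holds the tools it has discovered, keyed by name.
type Controller struct {
	tools map[string]Tool // "planner", "judge", "gensql", "execsql"
}

// Run walks through the five steps above, retrying if the judge rejects the plan or the result.
func (c *Controller) Run(ctx context.Context, prompt string) (string, error) {
	for attempt := 0; attempt < 3; attempt++ {
		// Steps 1 and 2: the planning agent returns an execution plan.
		plan, err := c.tools["planner"].Call(ctx, prompt)
		if err != nil {
			return "", err
		}
		// Step 3: the judge reviews the execution plan.
		if verdict, err := c.tools["judge"].Call(ctx, plan); err != nil || !strings.Contains(verdict, "approved") {
			continue // ask for a revised plan
		}
		// Step 4: generate and execute SQL according to the plan.
		query, err := c.tools["gensql"].Call(ctx, plan)
		if err != nil {
			return "", err
		}
		result, err := c.tools["execsql"].Call(ctx, query)
		if err != nil {
			return "", err
		}
		// Step 5: the judge reviews the outcome and decides whether to rerun.
		if verdict, err := c.tools["judge"].Call(ctx, result); err == nil && strings.Contains(verdict, "approved") {
			return result, nil
		}
	}
	return "", fmt.Errorf("no approved result after several attempts")
}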
As you can imagine, there are a number of events and messages flowing between the controller and the rest of the agents. This is what we'll refer to as AI agent communication.
Budding protocols for AI agent communication
A battle is raging in the industry over the best way to standardize agent communication. How can we make it easier for AI agents to access tools or data, communicate with other agents, or process human interactions?
Today, we have Model Context Protocol (MCP), Agent2Agent (A2A) protocol, and Agent Communication Protocol (ACP). Let's take a look at how these AI agent communication protocols work.
Model Context Protocol
Model Context Protocol (MCP), created by Anthropic, was designed to standardize how AI agents and models manage, share, and utilize context across tasks, tools, and multi-step reasoning. Its client-server architecture treats AI applications as clients that request information from the server, which provides access to external resources.
Let's assume all the data is stored in Apache Kafka topics. We can build a dedicated Kafka MCP server, and Claude, Anthropic's AI model, can act as our MCP client.
In this example on GitHub, authored by Athavan Kanapuli, Akan asks Claude to connect to his Kafka broker and list all the topics it contains. With MCP, Akan's client application doesn't need to know how to access the Kafka broker. Behind the scenes, his client sends the request to the server, which takes care of translating the request and running the relevant Kafka function.
In Akan's case, there were no available topics. The client then asks if Akan would like to create a topic with a dedicated number of partitions and replication. Just as with Akan's first request, the client doesn't require access to information on how to create or configure Kafka topics and partitions. From here, Akan asks the agent to create a "countries" topic and later describe the Kafka topic.
For this to work, you need to define what the server can do. In Athavan Kanapuli's project, the code is in the handler.go file. This file holds the list of functions the server can handle and execute on. Here is the CreateTopic example:
// CreateTopic creates a new Kafka topic
// Optional parameters that can be passed via FuncArgs are:
// - NumPartitions: number of partitions for the topic
// - ReplicationFactor: replication factor for the topic
func (k *KafkaHandler) CreateTopic(ctx context.Context, req Request) (*mcp_golang.ToolResponse, error) {
	if err := ctx.Err(); err != nil {
		return nil, err
	}
	if err := k.Client.CreateTopic(req.Topic, req.NumPartitions, req.ReplicationFactor); err != nil {
		return nil, err
	}
	return mcp_golang.NewToolResponse(mcp_golang.NewTextContent(fmt.Sprintf("Topic %s is created", req.Topic))), nil
}
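Under the hood, MCP messages are JSON-RPC 2.0 requests. When Claude decides to invoke a handler like the one above, the client sends a tools/call request naming the tool and its arguments. Here is a rough Go sketch of what that request could look like; the tool name "create_topic" and the argument keys are assumptions made for illustration, and the transport (stdio or HTTP) is omitted.

package mcpdemo

import "encoding/json"

// toolCallParams and toolCallRequest mirror the JSON-RPC 2.0 "tools/call"
// message an MCP client sends when the model chooses to invoke a tool.
type toolCallParams struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}

type toolCallRequest struct {
	JSONRPC string         `json:"jsonrpc"`
	ID      int            `json:"id"`
	Method  string         `json:"method"`
	Params  toolCallParams `json:"params"`
}

// encodeCreateTopicCall builds the request that would reach a CreateTopic
// handler. The tool name and argument keys are illustrative assumptions;
// the real names come from how the server registers its tools.
func encodeCreateTopicCall() ([]byte, error) {
	req := toolCallRequest{
		JSONRPC: "2.0",
		ID:      1,
		Method:  "tools/call",
		Params: toolCallParams{
			Name: "create_topic",
			Arguments: map[string]any{
				"Topic":             "countries",
				"NumPartitions":     3,
				"ReplicationFactor": 1,
			},
		},
	}
	return json.Marshal(req)
}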
While this example uses Apache Kafka, a widely adopted open-source technology, Anthropic generalizes the method and defines hosts. Hosts are the large language model (LLM) applications that initiate connections. Each host can have multiple clients, as described in Anthropic's MCP architecture diagram:

Anthropic
An MCP server for a database can have all of the database functionality exposed through a similar handler. However, if you want to get more sophisticated, you can define prompt templates dedicated to your service.
For example, in a healthcare database, you could have dedicated functions for patient health data. This simplifies the experience and provides prompt guardrails to protect sensitive and private patient information while ensuring accurate results. There is much more to learn, and you can dive deeper into MCP here.
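As a sketch of that idea, here is roughly how a service-specific prompt template could be described in Go, following the general shape an MCP server returns from a prompts/list request. The patient-summary prompt and its argument are hypothetical.

package mcppromptdemo

// promptArgument and promptDefinition follow the general shape an MCP server
// returns from a "prompts/list" request. The healthcare prompt below is
// hypothetical and only illustrates how a guarded, service-specific template
// might be described.
type promptArgument struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	Required    bool   `json:"required"`
}

type promptDefinition struct {
	Name        string           `json:"name"`
	Description string           `json:"description"`
	Arguments   []promptArgument `json:"arguments"`
}

// patientSummaryPrompt constrains the model to a vetted question shape instead
// of letting it compose arbitrary queries over sensitive patient data.
var patientSummaryPrompt = promptDefinition{
	Name:        "patient-summary",
	Description: "Summarize recent lab results for one patient, returning no identifiers beyond the supplied ID.",
	Arguments: []promptArgument{
		{Name: "patient_id", Description: "Internal patient identifier", Required: true},
	},
}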
Agent2Agent protocol
The Agent2Agent (A2A) protocol, introduced by Google, allows AI agents to communicate, collaborate, and coordinate directly with one another to solve complex tasks without framework or vendor lock-in. A2A is related to Google's Agent Development Kit (ADK) but is a distinct component and not part of the ADK package.
A2A results in opaque communication between agentic applications. That means interacting agents don't have to expose or coordinate their internal architecture or logic to exchange information. This gives different teams and organizations the freedom to build and connect agents without adding new constraints.
In practice, A2A requires that agents be described by metadata in identity files known as agent cards. A2A clients send requests as structured messages for A2A servers to consume, with real-time updates for long-running tasks. You can explore the core concepts in Google's A2A GitHub repo.
One helpful example of A2A is this healthcare use case, where a provider's agents use the A2A protocol to communicate with another provider in a different region. The agents must ensure data encryption, authorization (OAuth/JWT), and asynchronous transfer of structured health data with Kafka.
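To give a feel for the discovery step, here is a small Go sketch that fetches and decodes another provider's agent card. It assumes the card is served at the well-known path /.well-known/agent.json and models only a subset of the card's fields.

package a2ademo

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// agentCard models a small subset of the metadata an A2A agent publishes;
// the full schema defines more fields, such as capabilities and authentication requirements.
type agentCard struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	URL         string `json:"url"`
	Version     string `json:"version"`
	Skills      []struct {
		ID          string `json:"id"`
		Description string `json:"description"`
	} `json:"skills"`
}

// discoverAgent fetches another provider's agent card so a client can decide
// whether that agent offers the skill it needs before sending it a task.
func discoverAgent(baseURL string) (*agentCard, error) {
	resp, err := http.Get(baseURL + "/.well-known/agent.json")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	var card agentCard
	if err := json.NewDecoder(resp.Body).Decode(&card); err != nil {
		return nil, err
	}
	return &card, nil
}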
Again, check out the A2A GitHub repo if you'd like to learn more.
Agent Communication Protocol
The Agent Communication Protocol (ACP), created by IBM, is an open protocol for communication between AI agents, applications, and humans. According to IBM:
In ACP, an agent is a software service that communicates through multimodal messages, primarily driven by natural language. The protocol is agnostic to how agents function internally, specifying only the minimum assumptions necessary for smooth interoperability.
If you take a look at the core concepts outlined in the ACP GitHub repo, you'll find that ACP and A2A are similar. Both were created to eliminate agent vendor lock-in, speed up development, and use metadata to make it easy to discover community-built agents regardless of the implementation details. There is one important difference: ACP enables communication for agents by leveraging IBM's BeeAI open-source framework, whereas A2A helps agents from different frameworks communicate.
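As a very rough sketch of what talking to an ACP agent could look like, the Go snippet below posts a natural-language message to a hypothetical local ACP server. The /runs endpoint, the field names, and the "revenue-analyst" agent are all assumptions made for illustration; consult the ACP spec for the exact schema.

package acpdemo

import (
	"bytes"
	"encoding/json"
	"net/http"
)

// messagePart and message approximate ACP's multimodal message shape; the
// exact field names and endpoint here are assumptions, not the official schema.
type messagePart struct {
	Content     string `json:"content"`
	ContentType string `json:"content_type"`
}

type message struct {
	Parts []messagePart `json:"parts"`
}

type runRequest struct {
	AgentName string    `json:"agent_name"`
	Input     []message `json:"input"`
}

// askAgent posts a natural-language question to a hypothetical ACP server
// running locally and returns the raw HTTP response for the caller to inspect.
func askAgent(question string) (*http.Response, error) {
	payload := runRequest{
		AgentName: "revenue-analyst",
		Input: []message{
			{Parts: []messagePart{{Content: question, ContentType: "text/plain"}}},
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	return http.Post("http://localhost:8000/runs", "application/json", bytes.NewReader(body))
}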
Let's take a deeper look at the BeeAI framework to understand its dependencies. As of now, the BeeAI project has three core components:
- BeeAI platform – To discover, run, and compose AI agents;
- BeeAI framework – For building agents in Python or TypeScript;
- Agent Communication Protocol – For agent-to-agent communication.
What's next in agentic AI?
At a high level, each of these communication protocols tackles a slightly different challenge for building autonomous AI agents:
- MCP from Anthropic connects agents to tools and data.
- A2A from Google standardizes agent-to-agent collaboration.
- ACP from IBM focuses on BeeAI agent collaboration.
If you're interested in seeing MCP in action, check out this demo on querying Kafka topics with natural language. Both Google and IBM released their agent communication protocols only recently, in response to Anthropic's successful MCP project. I'm eager to continue this learning journey with you and see how their adoption and evolution progress.
As the world of agentic AI continues to grow, I recommend that you prioritize learning and adopting protocols, tools, and approaches that save you time and effort. The more adaptable and sustainable your AI agents are, the more you can focus on refining them to solve problems with real-world impact.
Adi Polak is director of advocacy and developer experience engineering at Confluent.
—
Generative AI Insights provides a venue for technology leaders, including vendors and other outside contributors, to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld's technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.