Agents Are Discovered in Catalogs

AI agents are increasingly integral to automation and decision-making, and they are often discovered through various types of catalogs. These catalogs serve as repositories where agents are listed, described, and made accessible to users or systems. Depending on the context (vendor-specific, internal business, or public catalogs), the discovery process can differ significantly.

Vendor-Specific Catalogs

Vendor-specific catalogs are curated by companies or providers that offer AI agents as part of their product ecosystems. These catalogs focus on agents tailored to the vendor’s tools and platforms, ensuring compatibility and optimized performance. Users turn to them when seeking solutions from a trusted source, often with detailed support available. For example, Amazon Bedrock Agents on AWS provide a catalog of AI agents designed for cloud-based workflows, while Salesforce’s Agentforce offers agents integrated with its CRM platform.
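
As a concrete illustration, the sketch below discovers agents programmatically from one such vendor catalog. It is a minimal example, assuming the boto3 “bedrock-agent” client, its list_agents operation, and summary fields such as agentName; configured AWS credentials are also assumed, and fields are read defensively in case response shapes differ by SDK version.

```python
# Minimal sketch: programmatic discovery against a vendor catalog
# (Amazon Bedrock Agents). Assumes AWS credentials are already configured;
# response field names are read defensively since shapes can vary by SDK version.
import boto3


def list_vendor_agents(region: str = "us-east-1") -> list[dict]:
    """Return lightweight summaries of the agents registered in the catalog."""
    client = boto3.client("bedrock-agent", region_name=region)
    agents, next_token = [], None
    while True:
        kwargs = {"maxResults": 50}
        if next_token:
            kwargs["nextToken"] = next_token
        response = client.list_agents(**kwargs)
        for summary in response.get("agentSummaries", []):
            agents.append({
                "id": summary.get("agentId"),
                "name": summary.get("agentName"),
                "status": summary.get("agentStatus"),
                "description": summary.get("description"),
            })
        next_token = response.get("nextToken")
        if not next_token:
            break
    return agents


if __name__ == "__main__":
    for agent in list_vendor_agents():
        print(f"{agent['name']} ({agent['status']}): {agent['description']}")
```

The same pattern, paging through the catalog and normalizing each entry into a small summary record, applies to other vendor catalogs as well; only the client and field names change.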

Internal Business Catalogs

Internal business catalogs are private repositories maintained by organizations to manage and deploy AI agents across their operations. These are designed for employees or teams to find agents built or customized for specific internal workflows, such as data analysis or process automation. Access is typically restricted, with a focus on company-specific needs. An example might be a custom IT support agent catalog at a large corporation like IBM, used to streamline internal ticketing.

Public Catalogs

Public catalogs are open-access directories where developers, researchers, or enthusiasts share AI agents for broader use. These range from open-source repositories to marketplaces, offering diverse agents for various applications. Discovery here is driven by community contributions and popularity, though quality can vary. See below for examples.


Discovery Through Gossip: Agents Talking About Agents

Beyond structured catalogs, a more organic method of agent discovery is emerging: gossip. In this model, AI agents communicate with one another, sharing insights about other agents they’ve encountered or worked with. This chatter forms a dynamic, decentralized network of reputation and trust. Agents leverage transitive trust—where an agent trusts another based on the recommendation of a mutual acquaintance—to evaluate and discover new peers.

<aside> 💡

For instance, an agent might say, “I collaborated with Agent X on a data task, and it was fast and reliable,” prompting another to seek out Agent X. This gossip-driven discovery mimics human social networks, allowing agents to learn about capabilities, reliability, or even quirks of others without relying solely on static catalog entries. While still experimental, this approach could complement formal catalogs, especially in adaptive, multi-agent systems where reputation matters.

</aside>
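
To make the transitive-trust idea concrete, here is a minimal sketch. The GossipNetwork class, its endorse and transitive_trust methods, and the score-propagation rule (multiplying trust along a recommendation chain and keeping the best path) are all illustrative assumptions rather than an established protocol.

```python
# Minimal sketch of gossip-driven discovery with transitive trust.
# All names and the score-propagation rule (multiply trust along a
# recommendation chain, keep the best path) are illustrative assumptions.
from collections import defaultdict


class GossipNetwork:
    def __init__(self):
        # endorsements[a][b] = how much agent a trusts agent b (0.0 to 1.0)
        self.endorsements = defaultdict(dict)

    def endorse(self, source: str, target: str, trust: float) -> None:
        """Record gossip: `source` vouches for `target` with a trust score."""
        self.endorsements[source][target] = trust

    def transitive_trust(self, source: str, target: str, max_hops: int = 3) -> float:
        """Best trust score from `source` to `target` via recommendation chains."""
        best = 0.0
        frontier = [(source, 1.0, 0)]
        while frontier:
            agent, score, hops = frontier.pop()
            if hops >= max_hops:
                continue
            for peer, trust in self.endorsements[agent].items():
                path_score = score * trust  # trust decays along the chain
                if peer == target:
                    best = max(best, path_score)
                else:
                    frontier.append((peer, path_score, hops + 1))
        return best


network = GossipNetwork()
network.endorse("planner", "agent_x", 0.9)   # "Agent X was fast and reliable"
network.endorse("agent_x", "agent_y", 0.8)   # Agent X recommends Agent Y
print(network.transitive_trust("planner", "agent_y"))  # ≈ 0.72, via agent_x
```

Bounding the search with max_hops keeps the walk cheap and reflects how quickly secondhand trust decays as recommendation chains grow longer.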


Metadata That Defines an Agent

Metadata plays a critical role in helping users—or potentially agents—identify and select the right AI agent. This information provides a snapshot of the agent’s capabilities, origin, and suitability for a task, enhancing discoverability and trust across all catalog types. Common metadata includes: