The open agent marketplace

Ship agents to
the world.

The open marketplace where production-ready AI agents are published, discovered, and deployed in one click.

Spec-verified. Trust-scored. Versioned manifests. One-click install to OpenSentience. No vendor lock-in.

57% of Orgs Have Agents in Production
40% of Agent Projects Will Be Scrapped
0 Open Agent Marketplaces

⚡ fleetprompt registry

code-reviewer by ops-team · Reviews code changes and suggests improvements · trust 94 · v2.3.1
customer-support-v2 by support-eng · Handles inquiries, processes refunds, escalates issues · trust 91 · v2.1.0
data-analyst by analytics-co · Queries databases, generates reports, visualizes trends · trust 88 · v1.4.0
incident-responder by sre-team · Monitors alerts, triages incidents, runs playbooks · trust 76 · v0.9.2
doc-writer by devrel · Generates documentation from code and specs · trust 85 · v1.2.0

How It Works

From spec to fleet in four steps

Every agent on FleetPrompt is built against a SpecPrompt specification, tested in Agentelic, and deployed to OpenSentience. The pipeline is the trust.

📝

Publish

Push a tested agent, with its manifest and SPEC.md, from Agentelic to the registry

🔍

Discover

Search by capability, trust score, domain, or compatible runtime

📦

Install

One command deploys the agent manifest to your OpenSentience runtime

🚀

Run

Agent starts with explicit permissions, full audit trail, and Graphonomous memory
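The four steps above can be sketched as a minimal registry client. This is an illustrative Python model, not FleetPrompt's actual SDK; the Registry class and its method names are assumptions:

```python
# Illustrative model of the publish → discover → install flow.
# Class and method names are assumptions, not the real FleetPrompt SDK.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    version: str
    trust_score: int
    permissions: list[str] = field(default_factory=list)


class Registry:
    def __init__(self) -> None:
        self._agents: dict[str, Agent] = {}

    def publish(self, agent: Agent) -> None:
        # Publish: push a tested agent (and its manifest) to the registry.
        self._agents[agent.name] = agent

    def search(self, min_trust: int = 0) -> list[Agent]:
        # Discover: filter by trust score (capability/domain filters omitted).
        return [a for a in self._agents.values() if a.trust_score >= min_trust]

    def install(self, name: str, granted: set[str]) -> Agent:
        # Install: declared permissions are reviewed before activation.
        agent = self._agents[name]
        missing = set(agent.permissions) - granted
        if missing:
            raise PermissionError(f"not granted: {sorted(missing)}")
        return agent


registry = Registry()
registry.publish(Agent("code-reviewer", "2.3.1", 94, ["filesystem_read"]))
print([a.name for a in registry.search(min_trust=90)])  # → ['code-reviewer']
agent = registry.install("code-reviewer", granted={"filesystem_read"})
print(agent.name, agent.version)  # → code-reviewer 2.3.1
```

The point of the sketch is the gate in `install`: nothing activates until every declared permission has been explicitly granted.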

Agent Manifests

Every agent carries
its identity

FleetPrompt agents aren't black boxes. Every published agent includes a machine-readable manifest declaring capabilities, permissions, dependencies, and the SpecPrompt spec it was built against.

  • Versioned with semantic versioning
  • Permissions declared upfront, reviewed on install
  • SpecPrompt spec linked for full traceability
  • MCP server dependencies explicitly listed
  • Trust score computed from tests, usage, and audits
  • One-click fork to customize for your team
manifest.json
{
  "name": "customer-support-v2",
  "version": "2.1.0",
  "author": "ops-team",
  "runtime": "opensentience",
  "trust_score": 91,
  "spec": "SPEC.md",

  "permissions": {
    "required": [
      {"cap": "orders:read"},
      {"cap": "refunds:create",
       "scope": "max:500"}
    ]
  },

  "mcp_servers": [
    "graphonomous",
    "orders-api",
    "notifications"
  ],

  "build": {
    "tests_passed": 47,
    "coverage": "94%"
  }
}
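A client might sanity-check a manifest like the one above before publishing. A minimal Python sketch; the required-key set and the semver check are assumptions based on the example, not an official schema:

```python
# Minimal manifest validation sketch. The required keys below are
# assumptions inferred from the example manifest, not an official schema.

REQUIRED_KEYS = {"name", "version", "author", "runtime", "spec", "permissions"}


def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - manifest.keys())]
    # Versions must be MAJOR.MINOR.PATCH (semantic versioning).
    version = manifest.get("version", "")
    parts = version.split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        errors.append(f"invalid semver: {version!r}")
    return errors


manifest = {
    "name": "customer-support-v2",
    "version": "2.1.0",
    "author": "ops-team",
    "runtime": "opensentience",
    "spec": "SPEC.md",
    "permissions": {"required": [{"cap": "orders:read"}]},
}
print(validate_manifest(manifest))  # → []
print(validate_manifest({"name": "x", "version": "2.1"}))
```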

Features

Built for agents that matter

Not another model hub. FleetPrompt is purpose-built for distributing production AI agents — with the trust, governance, and tooling that entails.

🔐

Trust Scores

Every agent is scored on test coverage, spec compliance, usage history, and audit results. Trust is earned, not declared.

📋

Spec-Verified

Only agents built against a valid SpecPrompt specification can be published. The spec is the source of truth.

🏷️

Semantic Versioning

Full version history with changelogs. Pin to specific versions, auto-update patches, or lock to major releases.
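The three pinning strategies can be sketched as a small matcher. The ~ and ^ range syntax below mirrors npm conventions and is an assumption; FleetPrompt's actual range syntax may differ:

```python
# Sketch of version pinning: exact pin, patch auto-update (~), major lock (^).
# Range syntax borrowed from npm conventions; FleetPrompt's actual syntax
# is an assumption here.

def parse(v: str) -> tuple[int, int, int]:
    major, minor, patch = (int(p) for p in v.split("."))
    return major, minor, patch


def satisfies(version: str, spec: str) -> bool:
    got = parse(version)
    if spec.startswith("^"):      # lock to major: ^2.1.0 allows 2.x >= 2.1.0
        want = parse(spec[1:])
        return got[0] == want[0] and got >= want
    if spec.startswith("~"):      # auto-update patches: ~2.1.0 allows 2.1.x
        want = parse(spec[1:])
        return got[:2] == want[:2] and got >= want
    return got == parse(spec)     # exact pin


print(satisfies("2.1.4", "~2.1.0"))  # → True  (patch update allowed)
print(satisfies("2.2.0", "~2.1.0"))  # → False (minor bump blocked)
print(satisfies("2.2.0", "^2.1.0"))  # → True  (same major)
print(satisfies("3.0.0", "^2.1.0"))  # → False (major bump blocked)
```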

📦

One-Click Install

Deploy any agent to your OpenSentience runtime with a single command. Permissions reviewed before activation.

🔀

Fork & Customize

Fork any public agent, modify the spec, re-test in Agentelic, and publish your variant. Standing on shoulders.

🌐

Open Protocol

MCP-native, MIT-licensed registry protocol. No vendor lock-in. Works with any MCP-compatible runtime.

🏢

Private Registries

Enterprise teams can host private FleetPrompt registries. Internal agents, internal trust, same toolchain.

📊

Usage Analytics

Track installs, active deployments, error rates, and user ratings. Know how your agents perform in the wild.

🧠

Graphonomous-Ready

Agents with continual learning declare Graphonomous as a dependency. Memory grows with every deployment.

Trust & Security

Trust is the product

The agent marketplace problem isn't discovery — it's trust. FleetPrompt makes every agent's provenance, permissions, and track record transparent and verifiable.

Provenance

Spec-to-Ship Traceability

Every published agent links back to its SpecPrompt spec, Agentelic test results, and build pipeline. The manifest is a receipt, not a promise.

Permissions

Declared, Not Discovered

Agent capabilities are declared in the manifest's permissions.required[] array and reviewed on install. No hidden network calls, no surprise filesystem access.
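An install-time review of declared permissions might look like this sketch. The capability strings follow the manifest example's resource:action shape; the review logic itself is illustrative, not OpenSentience's implementation:

```python
# Sketch: review an agent's declared permissions against what the operator
# grants. Capability strings use the "resource:action" shape from the
# manifest example; the logic is illustrative, not the actual runtime.

def review(required: list[dict], granted: set[str]) -> tuple[bool, list[str]]:
    """Approve only if every declared capability has been granted."""
    missing = [p["cap"] for p in required if p["cap"] not in granted]
    return (not missing, missing)


required = [
    {"cap": "orders:read"},
    {"cap": "refunds:create", "scope": "max:500"},
]
ok, missing = review(required, granted={"orders:read"})
print(ok, missing)  # → False ['refunds:create']
```

Because every capability is in the manifest up front, the review is a pure set comparison; there is nothing to discover at runtime.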

Scoring

Trust Is Computed

Trust scores are calculated from test coverage, spec compliance, deployment history, error rates, and community ratings. Algorithms, not badges.
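The exact formula isn't published here, but a weighted blend of the listed signals might look like the following sketch; the weights and normalization are assumptions:

```python
# Illustrative trust-score blend. The signals come from the text (test
# coverage, spec compliance, deployment history, error rate, community
# ratings); the weights and normalization are assumptions.

def trust_score(coverage: float, spec_compliance: float,
                deploy_success: float, error_rate: float,
                rating: float) -> int:
    """All inputs in [0, 1] except rating in [0, 5]; returns 0-100."""
    score = (
        0.30 * coverage +
        0.25 * spec_compliance +
        0.20 * deploy_success +
        0.15 * (1.0 - error_rate) +
        0.10 * (rating / 5.0)
    )
    return round(100 * score)


print(trust_score(coverage=0.94, spec_compliance=1.0,
                  deploy_success=0.95, error_rate=0.03,
                  rating=4.6))  # → 96
```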

Governance

Delegatic-Enforced Policies

Enterprise teams set Delegatic policies for which agents can be installed, by whom, and with what permissions. Governance scales with the fleet.
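A policy gate at install time, in the spirit of the above, could be sketched as follows. The policy fields (min_trust, allowed_publishers, blocked_caps) are hypothetical, not Delegatic's actual schema:

```python
# Sketch of an org policy gate at install time. The policy fields
# (min_trust, allowed_publishers, blocked_caps) are hypothetical,
# not Delegatic's actual schema.

POLICY = {
    "min_trust": 80,
    "allowed_publishers": {"ops-team", "support-eng"},
    "blocked_caps": {"filesystem_write"},
}


def may_install(agent: dict, policy: dict = POLICY) -> tuple[bool, str]:
    if agent["trust_score"] < policy["min_trust"]:
        return False, "trust score below org minimum"
    if agent["author"] not in policy["allowed_publishers"]:
        return False, "publisher not on allowlist"
    caps = {p["cap"] for p in agent["permissions"]["required"]}
    if caps & policy["blocked_caps"]:
        return False, "requests a blocked capability"
    return True, "ok"


agent = {
    "author": "ops-team",
    "trust_score": 91,
    "permissions": {"required": [{"cap": "orders:read"}]},
}
print(may_install(agent))  # → (True, 'ok')
```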

Clarity

What FleetPrompt is

FleetPrompt Is

An agent registry — like npm for AI agents, with manifests and versioning
A trust layer — computed scores based on provenance, testing, and usage
Spec-native — every agent is backed by a SpecPrompt specification
Runtime-agnostic — built for OpenSentience, open to any MCP runtime
Open protocol — MIT-licensed registry, no vendor lock-in
The distribution layer — connecting builders to operators

FleetPrompt Is Not

A model hub — we distribute agents, not weights or checkpoints
A prompt library — agents have specs, tests, and manifests, not just prompts
An execution runtime — that's OpenSentience's job
A build system — that's Agentelic's job
A walled garden — publish from any CI, install to any MCP runtime
Vaporware — built on the same stack that powers the [&] portfolio

The [&] Stack

Distribution is the
last mile

FleetPrompt is the distribution layer of the Ampersand Box portfolio. Agents flow through the stack — from specification to production fleet.

  • SpecPrompt (Standards): defines agent behavior as versioned specifications
  • Agentelic (Engineering): builds, tests, and packages agents against specs
  • OpenSentience (Runtime): governs, executes, and observes agents locally
  • Graphonomous (Memory): continual-learning knowledge graphs for agents
  • FleetPrompt (Distribution) ← the open marketplace: publish, discover, and install production agents
  • Delegatic (Governance): organization hierarchy, policy inheritance, and audit trails
terminal
# publish a tested agent to the registry
$ fleetprompt publish ./customer-support-v2
  → Validating manifest.json...  ✓
  → Verifying SPEC.md...         ✓
  → Checking test results...     47/47 passed
  → Computing trust score...     91
  → Publishing v2.1.0...         ✓ live

# search the registry
$ fleetprompt search "code review"
  code-reviewer       v2.3.1  trust:94
  pr-summarizer       v1.0.3  trust:82
  security-scanner    v0.8.0  trust:78

# install to OpenSentience
$ fleetprompt install code-reviewer
  → Downloading manifest...     ✓
  → Reviewing permissions...
    filesystem_read  ~/projects/**
    mcp_tool         git:*
  → Grant permissions? [y/N] y
  → Installing to OpenSentience... ✓ ready

Developer Experience

Three commands.
That's the workflow.

Publish from your CI pipeline. Search from your terminal. Install with explicit permission review. The CLI is the first-class interface.

  • fleetprompt publish — push tested agents to the registry
  • fleetprompt search — find agents by capability or domain
  • fleetprompt install — deploy to OpenSentience with review
  • fleetprompt update — upgrade agents with changelog diff
  • fleetprompt audit — inspect trust score and provenance
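The search output shown in the terminal example can be consumed mechanically; a sketch assuming that whitespace-aligned name / vX.Y.Z / trust:NN format:

```python
# Parse `fleetprompt search` result lines into records. Assumes the
# whitespace-aligned "name  vX.Y.Z  trust:NN" format from the terminal
# example; the real CLI may offer structured output instead.

import re

LINE = re.compile(r"^\s*(\S+)\s+v(\d+\.\d+\.\d+)\s+trust:(\d+)\s*$")


def parse_results(text: str) -> list[dict]:
    results = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            results.append({
                "name": m.group(1),
                "version": m.group(2),
                "trust": int(m.group(3)),
            })
    return results


output = """\
  code-reviewer       v2.3.1  trust:94
  pr-summarizer       v1.0.3  trust:82
"""
print(parse_results(output))
```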

Your agents deserve
an audience.

FleetPrompt is the open marketplace for production AI agents. Publish what you've built. Discover what others have proven. Grow the fleet.

Open marketplace. MIT-licensed protocol. Launching 2026.