The open marketplace where production-ready AI agents are published, discovered, and deployed in one click.
Spec-verified. Trust-scored. Versioned manifests. One-click install to OpenSentience. No vendor lock-in.
How It Works
Every agent on FleetPrompt is built against a SpecPrompt specification, tested in Agentelic, and deployed to OpenSentience. The pipeline is the trust.
Push a tested agent with its SPEC.md manifest from Agentelic to the registry
Search by capability, trust score, domain, or compatible runtime
One command deploys the agent manifest to your OpenSentience runtime
Agent starts with explicit permissions, full audit trail, and Graphonomous memory
Agent Manifests
FleetPrompt agents aren't black boxes. Every published agent includes a machine-readable manifest declaring capabilities, permissions, dependencies, and the SpecPrompt spec it was built against.
{
  "name": "customer-support-v2",
  "version": "2.1.0",
  "author": "ops-team",
  "runtime": "opensentience",
  "trust_score": 91,
  "spec": "SPEC.md",
  "permissions": {
    "required": [
      {"cap": "orders:read"},
      {"cap": "refunds:create", "scope": "max:500"}
    ]
  },
  "mcp_servers": [
    "graphonomous",
    "orders-api",
    "notifications"
  ],
  // Built and tested in Agentelic
  "build": {
    "tests_passed": 47,
    "coverage": "94%"
  }
}
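A consuming runtime can sanity-check a manifest like the one above before install. A minimal sketch in Python, assuming the manifest shape shown here; the field names and `validate_manifest` helper are illustrative, not a published FleetPrompt schema:

```python
import json

# Fields an installer relies on (illustrative subset, not an official schema).
REQUIRED_FIELDS = {"name", "version", "runtime", "spec", "permissions"}

def validate_manifest(raw: str) -> dict:
    """Parse a manifest and check the fields an installer depends on."""
    manifest = json.loads(raw)
    missing = REQUIRED_FIELDS - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing fields: {sorted(missing)}")
    # Every required permission must at least declare a capability name.
    for perm in manifest["permissions"].get("required", []):
        if "cap" not in perm:
            raise ValueError(f"permission entry missing 'cap': {perm}")
    return manifest

manifest = validate_manifest(json.dumps({
    "name": "customer-support-v2",
    "version": "2.1.0",
    "runtime": "opensentience",
    "spec": "SPEC.md",
    "permissions": {"required": [{"cap": "orders:read"},
                                 {"cap": "refunds:create", "scope": "max:500"}]},
}))
print(manifest["name"])  # → customer-support-v2
```

Note that strict JSON parsers reject comments, so a comment line like the one in the snippet above would need to be stripped before parsing a real manifest.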
Features
Not another model hub. FleetPrompt is purpose-built for distributing production AI agents — with the trust, governance, and tooling that production demands.
Every agent is scored on test coverage, spec compliance, usage history, and audit results. Trust is earned, not declared.
Only agents built against a valid SpecPrompt specification can be published. The spec is the source of truth.
Full version history with changelogs. Pin to specific versions, auto-update patches, or lock to major releases.
Deploy any agent to your OpenSentience runtime with a single command. Permissions reviewed before activation.
Fork any public agent, modify the spec, re-test in Agentelic, and publish your variant. Standing on shoulders.
MCP-native, MIT-licensed registry protocol. No vendor lock-in. Works with any MCP-compatible runtime.
Enterprise teams can host private FleetPrompt registries. Internal agents, internal trust, same toolchain.
Track installs, active deployments, error rates, and user ratings. Know how your agents perform in the wild.
Agents with continual learning declare Graphonomous as a dependency. Memory grows with every deployment.
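The version-pinning behavior described above (pin to a specific version, auto-update patches, or lock to a major release) can be sketched as a simple matcher. The pin syntax here is illustrative, not FleetPrompt's actual notation:

```python
def matches_pin(version: str, pin: str) -> bool:
    """Check a semver-style version against a pin.

    Pin styles (hypothetical, for illustration):
      "2.1.0"  exact version
      "2.1.x"  auto-update patches within 2.1
      "2.x"    any release within major version 2
    """
    for vpart, ppart in zip(version.split("."), pin.split(".")):
        if ppart != "x" and vpart != ppart:
            return False
    return True

assert matches_pin("2.1.3", "2.1.x")      # patch updates allowed
assert matches_pin("2.4.0", "2.x")        # locked to major version 2
assert not matches_pin("3.0.0", "2.x")    # major bump rejected
assert matches_pin("2.1.0", "2.1.0")      # exact pin
```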
Trust & Security
The agent marketplace problem isn't discovery — it's trust. FleetPrompt makes every agent's provenance, permissions, and track record transparent and verifiable.
Every published agent links back to its SpecPrompt spec, Agentelic test results, and build pipeline. The manifest is a receipt, not a promise.
Agent capabilities are declared in the manifest's permissions.required[] array and reviewed on install. No hidden network calls, no surprise filesystem access.
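Install-time permission review amounts to comparing what the manifest declares against what the operator grants. A minimal sketch; the `review_permissions` helper and data shapes are illustrative, mirroring the manifest's permissions.required[] entries:

```python
def review_permissions(required: list[dict], granted: set[str]) -> list[str]:
    """Return the declared capabilities the operator has not yet granted.

    `required` mirrors the manifest's permissions.required[] entries;
    `granted` is the set of capability strings approved at install time.
    """
    return [perm["cap"] for perm in required if perm["cap"] not in granted]

required = [{"cap": "orders:read"},
            {"cap": "refunds:create", "scope": "max:500"}]

missing = review_permissions(required, granted={"orders:read"})
print(missing)  # → ['refunds:create']
```

An installer would block activation until `missing` is empty — every capability is either explicitly granted or the install stops.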
Trust scores are calculated from test coverage, spec compliance, deployment history, error rates, and community ratings. Algorithms, not badges.
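A score built from those factors might look like the weighted blend below. This is a hypothetical sketch; FleetPrompt's actual weights and formula are not shown here:

```python
def trust_score(coverage: float, spec_compliance: float,
                deploy_success: float, error_rate: float,
                rating: float) -> int:
    """Illustrative trust-score blend (weights are made up).

    All inputs in [0, 1] except `rating`, which is a 0-5 community rating.
    """
    factors = {
        "coverage": (coverage, 0.25),
        "spec_compliance": (spec_compliance, 0.25),
        "deploy_success": (deploy_success, 0.20),
        "low_error_rate": (1.0 - error_rate, 0.15),
        "community_rating": (rating / 5.0, 0.15),
    }
    score = sum(value * weight for value, weight in factors.values())
    return round(score * 100)

print(trust_score(0.94, 1.0, 0.9, 0.02, 4.6))  # → 95
```

The point of an algorithmic score is that any of these inputs (a failed deployment, a spec drift) moves the number automatically; no one hand-assigns a badge.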
Enterprise teams set Delegatic policies for which agents can be installed, by whom, and with what permissions. Governance scales with the fleet.
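A Delegatic-style install policy could be evaluated along these lines. The policy shape and `can_install` helper are hypothetical, sketched from the governance dimensions named above (who installs, which agents, with what permissions):

```python
def can_install(agent: dict, user_role: str, policy: dict) -> tuple[bool, str]:
    """Evaluate an install request against a hypothetical install policy."""
    if user_role not in policy["installer_roles"]:
        return False, "role not allowed to install agents"
    if agent["trust_score"] < policy["min_trust_score"]:
        return False, "trust score below policy minimum"
    declared = {p["cap"] for p in agent["permissions"]["required"]}
    forbidden = declared - set(policy["allowed_caps"])
    if forbidden:
        return False, f"caps not allowed by policy: {sorted(forbidden)}"
    return True, "ok"

policy = {
    "installer_roles": {"platform-admin"},
    "min_trust_score": 85,
    "allowed_caps": {"orders:read", "refunds:create"},
}
agent = {"trust_score": 91,
         "permissions": {"required": [{"cap": "orders:read"}]}}

print(can_install(agent, "platform-admin", policy))  # → (True, 'ok')
```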
The [&] Stack
FleetPrompt is the distribution layer of the Ampersand Box portfolio. Agents flow through the stack — from specification to production fleet.
# publish a tested agent to the registry
$ fleetprompt publish ./customer-support-v2
→ Validating manifest.json... ✓
→ Verifying SPEC.md... ✓
→ Checking test results... 47/47 passed
→ Computing trust score... 91
→ Publishing v2.1.0... ✓ live

# search the registry
$ fleetprompt search "code review"
code-reviewer      v2.3.1  trust:94
pr-summarizer      v1.0.3  trust:82
security-scanner   v0.8.0  trust:78

# install to OpenSentience
$ fleetprompt install code-reviewer
→ Downloading manifest... ✓
→ Reviewing permissions...
    filesystem_read  ~/projects/**
    mcp_tool         git:*
→ Grant permissions? [y/N] y
→ Installing to OpenSentience... ✓ ready
Developer Experience
Publish from your CI pipeline. Search from your terminal. Install with explicit permission review. The CLI is the first-class interface.
fleetprompt publish
— push tested agents to the registry
fleetprompt search
— find agents by capability or domain
fleetprompt install
— deploy to OpenSentience with review
fleetprompt update
— upgrade agents with changelog diff
fleetprompt audit
— inspect trust score and provenance
FleetPrompt is the open marketplace for production AI agents. Publish what you've built. Discover what others have proven. Grow the fleet.
Open marketplace. MIT-licensed protocol. Launching 2026.