Live in-memory metrics (reset on each deploy). Polls /api/stats every 10 s.
No events yet.
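A minimal sketch of the 10-second polling loop, assuming an illustrative StatsResponse shape (the real /api/stats schema may differ):

// Poll /api/stats every 10 s and hand each result to a render callback.
interface StatsResponse {
  events: Array<{ tool: string; timestamp: string }>;  // assumed fields
}

function pollStats(onUpdate: (stats: StatsResponse) => void): () => void {
  const tick = async () => {
    try {
      const res = await fetch('/api/stats');
      if (res.ok) onUpdate(await res.json() as StatsResponse);
    } catch {
      // In-memory metrics reset on deploy, so transient gaps are expected; retry on the next tick.
    }
  };
  tick();                                // fetch immediately on load
  const id = setInterval(tick, 10_000);  // then every 10 s
  return () => clearInterval(id);        // caller stops polling on unmount
}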
Scans PulseMCP for each seed intent. Cached 30 min. Shows where we rank and who beats us.
Loading competitive data…
The first scan may take ~15 seconds.
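A minimal sketch of the 30-minute scan cache, assuming a hypothetical scanPulseMCP(intent) helper and an illustrative RankResult shape:

interface RankResult { intent: string; ourRank: number; leaders: string[] }

const CACHE_TTL_MS = 30 * 60 * 1000;  // 30 minutes
const cache = new Map<string, { at: number; result: RankResult }>();

async function scanWithCache(
  intent: string,
  scanPulseMCP: (intent: string) => Promise<RankResult>,  // hypothetical scanner
): Promise<RankResult> {
  const hit = cache.get(intent);
  if (hit && Date.now() - hit.at < CACHE_TTL_MS) return hit.result;  // still fresh
  const result = await scanPulseMCP(intent);  // a cold scan may take ~15 s
  cache.set(intent, { at: Date.now(), result });
  return result;
}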
Which natural-language intents our optimised tools are winning and losing on — derived from live tool_finder events.
No data yet.
No data yet.
No losses recorded yet.
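A minimal sketch of the win/loss aggregation, assuming each tool_finder event records its intent and whether one of our tools was selected (the ToolFinderEvent shape is an assumption, not the live event schema):

interface ToolFinderEvent { intent: string; selectedOurs: boolean }

// Roll tool_finder events up into per-intent win/loss counts.
function intentWinLoss(events: ToolFinderEvent[]): Map<string, { wins: number; losses: number }> {
  const byIntent = new Map<string, { wins: number; losses: number }>();
  for (const e of events) {
    const row = byIntent.get(e.intent) ?? { wins: 0, losses: 0 };
    if (e.selectedOurs) row.wins++; else row.losses++;
    byIntent.set(e.intent, row);
  }
  return byIntent;
}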
x402 (USDC on Base) & Stripe MPP per-call payments via /mcp-pay. Config from /api/payment-config, stats from /api/stats.
Loading…
No payments recorded yet; all tools default to the 'none' payment mode.
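A minimal sketch of loading the per-tool payment mode, assuming /api/payment-config returns a map of tool name to mode (the mode values shown are assumptions for illustration):

type PaymentMode = 'none' | 'x402' | 'stripe';  // assumed mode values

async function loadPaymentConfig(): Promise<Record<string, PaymentMode>> {
  const res = await fetch('/api/payment-config');
  if (!res.ok) throw new Error(`payment-config failed: ${res.status}`);
  return (await res.json()) as Record<string, PaymentMode>;  // tool name -> mode
}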
Pareto analysis: which 20% of sources / tools / intents drive 80% of traffic. Use this to decide what to double down on in the next sprint.
Each directory listing uses a unique ?src= param so incoming connections can be attributed to their source. Highlighted rows are in the Pareto top 20%.
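A minimal sketch of the top-20% selection used for highlighting, assuming per-source call counts keyed by the ?src= value (not the actual implementation):

// Return the sources making up the Pareto top 20% by call count.
// A cumulative-80%-of-traffic cut would be an equally valid definition.
function paretoTop20(counts: Record<string, number>): Set<string> {
  const sorted = Object.entries(counts).sort((a, b) => b[1] - a[1]);
  const keep = Math.max(1, Math.ceil(sorted.length * 0.2));
  return new Set(sorted.slice(0, keep).map(([src]) => src));
}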
Step 1 — Identify the 20%: Look at the Pareto sources above. These are the directories sending you real traffic. In your next distribution sprint, submit new tools/servers here first.
Step 2 — Double down on top tools: The Pareto tools are what agents actually want. Rewrite their descriptions to improve margin. Add model-specific variants for them.
Step 3 — Prune the long tail: Tools with zero or near-zero calls after sufficient traffic cost nothing to keep but may dilute manifest quality. Consider removing or merging them.
Step 4 — Close intent gaps: The top intents tell you what agents are searching for. If you don't have a tool that directly matches, that's your next build target.
Iteration cadence: Run for 2 weeks → measure → refine → repeat. Each cycle applies the 80/20 cut to the previous cycle's winners, so by iteration 3 you're optimising the 4% (20% of 20%) that drives 64% (80% of 80%) of revenue.
Tool developers who have applied to list on Inferventis. Applications are logged to Cloud Logging and trigger an email to hello@inferventis.ai.
Self-assessed Tool Description Quality Score across 6 dimensions. Run a check to get the current score, identify regressions, and track the trend. Cloud Scheduler calls this daily.
Run a check to see anomalies.
Run a check to see per-tool scores.
Score history will appear after multiple checks.
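A minimal sketch of the composite score, assuming six equally weighted 0-100 dimensions (the dimension names are illustrative, not the actual rubric):

interface QualityDimensions {            // illustrative dimension names
  clarity: number; specificity: number; examples: number;
  parameters: number; errorGuidance: number; intentCoverage: number;
}

// Equal-weight average of six 0-100 dimension scores.
function qualityScore(d: QualityDimensions): number {
  const values = [d.clarity, d.specificity, d.examples, d.parameters, d.errorGuidance, d.intentCoverage];
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}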