🔑 API key required

Enter your Inferventis operator key to load dashboard data. The test key is key_test_abc123.

Finder calls
tool_finder requests
Win rate
Optimised tool wins
Avg margin
Winner vs runner-up
Avg score
Winning discovery score
Competitor delta
Optimised vs real tool
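The metrics above (win rate, average margin, average winning score) can be derived from per-event candidate scores. A minimal sketch, assuming each tool_finder event carries a best-first list of (tool, score) candidates plus the `is_optimised_winner` flag shown in the losses panel; the exact event schema is an assumption:

```python
from statistics import mean

def summarise(events):
    """Compute win rate, average margin (winner vs runner-up), and
    average winning score from tool_finder events. Each event is
    assumed to hold a best-first list of (tool, score) candidates
    and an is_optimised_winner flag (hypothetical schema)."""
    if not events:
        return {"win_rate": 0.0, "avg_margin": 0.0, "avg_score": 0.0}
    wins = [e for e in events if e["is_optimised_winner"]]
    # Margin: winning score minus runner-up score, per event.
    margins = [
        e["candidates"][0][1] - e["candidates"][1][1]
        for e in events
        if len(e["candidates"]) >= 2
    ]
    return {
        "win_rate": len(wins) / len(events),
        "avg_margin": mean(margins) if margins else 0.0,
        "avg_score": mean(e["candidates"][0][1] for e in events),
    }

events = [
    {"is_optimised_winner": True,  "candidates": [("ours", 0.92), ("rival", 0.80)]},
    {"is_optimised_winner": False, "candidates": [("rival", 0.88), ("ours", 0.85)]},
]
print(summarise(events))
```

With the two example events, win rate is 0.5 and the average margin is the mean of the two winner-minus-runner-up gaps.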

Model distribution

Tool win counts

Tool call counts

Avg winning score by tool

Recent tool_finder events

📭

No events yet.

🔄

Loading competitive data…
First scan may take ~15 seconds

Top losing intents (loss rate %)

No data yet.

Top winning intents (win rate %)

No data yet.

Recent intent losses (is_optimised_winner = false)

📭

No losses recorded yet.

Paid calls
since last deploy
USDC earned
cumulative
Network mode
testnet → mainnet when ready

💳 Payment configuration

Loading…

Tool payment modes

Loading…

Recent verified payments

💸

No payments recorded yet — all tools are in "none" mode by default.

Total attributed calls
since last deploy
Pareto sources (top 20%)
directories driving 80% of traffic
Pareto tools
tools driving 80% of calls
Observation window
resets on each deploy

📋 Sprint recommendations

Loading…

Source attribution — where agents come from

Each directory listing uses a unique ?src= param so inbound connections can be attributed to the listing that sent them. Highlighted rows = Pareto top-20%.
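The ?src= attribution described above amounts to tallying requests by their query parameter. A minimal sketch of that tally; the URLs and directory names are placeholders, not real listings:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

def attribute(request_urls):
    """Tally inbound connections by their ?src= directory parameter.
    URLs without a src param are bucketed as 'direct'.
    (Illustrative only; the real attribution runs server-side.)"""
    counts = Counter()
    for url in request_urls:
        params = parse_qs(urlparse(url).query)
        counts[params.get("src", ["direct"])[0]] += 1
    return counts

hits = attribute([
    "https://example.run.app/mcp?src=dir_a",
    "https://example.run.app/mcp?src=dir_a",
    "https://example.run.app/mcp?src=dir_b",
    "https://example.run.app/mcp",
])
print(hits.most_common())
```

Counter.most_common() returns sources in descending call order, which is the ordering the source-distribution chart needs.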

Loading…

Tool call distribution (Pareto)

Source distribution (Pareto)

Top intents this period

Loading…

📐 How to use this for the next iteration

Step 1 — Identify the 20%: Look at the Pareto sources above. These are the directories sending you real traffic. In your next distribution sprint, submit new tools/servers here first.

Step 2 — Double down on top tools: The Pareto tools are what agents actually want. Rewrite their descriptions to improve margin. Add model-specific variants for them.

Step 3 — Prune the long tail: Tools with zero or near-zero calls after sufficient traffic are costing you nothing to keep but may dilute manifest quality. Consider removing or merging them.

Step 4 — Close intent gaps: The top intents tell you what agents are searching for. If you don't have a tool that directly matches, that's your next build target.

Iteration cadence: Run 2 weeks → measure → refine → repeat. Each cycle focuses 80/20 on the previous cycle's winners. By iteration 3 you're optimising the 4% (20% of 20%) that drives 64% (80% of 80%) of revenue.
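The "Pareto top-20%" used in the steps above can be computed as the smallest set of sources (or tools) whose call counts cover 80% of traffic. A minimal sketch, with hypothetical tool names and counts:

```python
def pareto_top(counts, coverage=0.80):
    """Return the smallest prefix of keys, ranked by call count,
    whose cumulative count reaches `coverage` of total traffic --
    i.e. the '20%' to double down on next sprint."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(counts.values())
    picked, running = [], 0
    for key, n in ranked:
        picked.append(key)
        running += n
        if running >= coverage * total:
            break
    return picked

# Hypothetical per-tool call counts for one observation window.
calls = {"search_docs": 620, "summarise": 240, "translate": 90, "ocr": 30, "misc": 20}
print(pareto_top(calls))
```

Here two of five tools (40%) already cover 86% of the 1,000 calls; real distributions are often even more skewed.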

View public sign-up page ↗
View in Cloud Logging ↗
Open inbox ↗
inferventis.run.app/developers — public landing page
Open full page ↗

Application pipeline

📥
Applications are logged to Cloud Logging
Each submission triggers an email to hello@inferventis.ai with full applicant details.
A structured pipeline view (status tracking, approval workflow) is on the roadmap.
View all applications in Cloud Logging ↗
View history in Cloud Logging ↗
Overall TDQS
/ 5.0
Trend
snapshots
Naming Consistency
/ 5.0
Anomalies (7d)
tools checked

Anomalies (last 7 days)

Run a check to see anomalies.

Per-tool TDQS scores

📊

Run a check to see per-tool scores.

Score history

📈

Score history will appear after multiple checks.