# Compute jobs
Track compute usage per session. Submit jobs, monitor status, cap spend.
Hyperbolic is mostly a relay — the heavy lifting happens inside your agents. But some workflows need a way to track compute consumption against a session budget (for pricing, observability, or fairness). That's what the compute pool is for.
## Concepts
- Each session can have one compute pool with a fixed credit budget and an optional provider.
- Agents submit jobs to the pool — each job has a type, an input payload, and a credit cost.
- Jobs progress through states: `pending` → `running` → `completed`/`failed`.
- The pool tracks remaining credits and per-agent usage.
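The lifecycle above can be sketched as a tiny state machine (a sketch for intuition only; the state names come from the list above, while the `transitions` table and `canTransition` helper are hypothetical, not part of the SDK):

```ts
type JobStatus = "pending" | "running" | "completed" | "failed";

// Legal transitions for a compute job, per the lifecycle above.
// `completed` and `failed` are terminal.
const transitions: Record<JobStatus, JobStatus[]> = {
  pending: ["running"],
  running: ["completed", "failed"],
  completed: [],
  failed: [],
};

function canTransition(from: JobStatus, to: JobStatus): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition("pending", "running"));   // true
console.log(canTransition("pending", "completed")); // false — must run first
```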
Hyperbolic doesn't run the compute itself. You're expected to point at your own worker pool and report back via the API. The built-in local provider is a no-op that marks jobs as completed instantly — useful for testing.
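A custom provider could drive this with a polling loop along these lines. This is a sketch under loud assumptions: only `listJobs` appears in the documented client surface, and `startJob`, `completeJob`, and `failJob` are hypothetical reporting calls standing in for whatever your API exposes; `runOnWorker` represents your own infrastructure.

```ts
interface JobLike { id: string; status: string; type: string; input: unknown }

// Hypothetical client shape: startJob/completeJob/failJob are NOT
// confirmed SDK methods — substitute your actual reporting endpoints.
interface PairLike {
  listJobs(sessionId: string): Promise<JobLike[]>;
  startJob(sessionId: string, jobId: string): Promise<void>;
  completeJob(sessionId: string, jobId: string, result: unknown): Promise<void>;
  failJob(sessionId: string, jobId: string, reason: string): Promise<void>;
}

async function workerLoop(
  pair: PairLike,
  sessionId: string,
  runOnWorker: (type: string, input: unknown) => Promise<unknown>, // your compute
): Promise<void> {
  for (const job of await pair.listJobs(sessionId)) {
    if (job.status !== "pending") continue;
    await pair.startJob(sessionId, job.id); // pending -> running
    try {
      const result = await runOnWorker(job.type, job.input);
      await pair.completeJob(sessionId, job.id, result); // running -> completed
    } catch (err) {
      await pair.failJob(sessionId, job.id, String(err)); // running -> failed
    }
  }
}
```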
## Creating a pool
```ts
await pair.createComputePool(session.id, 1000, "local");
```

## Submitting a job
```ts
const job = await pair.submitJob(session.id, "run-tests", {
  command: "pnpm test",
  branch: "feature/auth",
});
console.log(job.id, job.status);
```

## Tracking
```ts
const status = await pair.getJob(session.id, job.id);
const all = await pair.listJobs(session.id);
const pool = await pair.getComputePool(session.id);
console.log("Remaining credits:", pool.remainingCredits);
console.log("Per-agent usage:", pool.agentUsage);
```

## SSE
Jobs emit `compute_job` events as they progress:
```ts
pair.onEvent((event, data) => {
  if (event === "compute_job") console.log("job update:", data);
});
```

## When to use it
- Cost tracking for paid agent APIs called from inside the session.
- Fairness between multiple agents sharing a budget.
- Observability — a unified place to see "what did this session actually consume?".
If you just want to pass an opaque tool-call between agents, stick with messages of type `action`. Compute jobs are for things with meaningful credit cost.