Query & Analysis¶
Three endpoints for persisting named queries, re-executing them against fresh datasets, and retrieving asynchronous job results.
| Method | Endpoint | Purpose |
| --- | --- | --- |
| POST | /v1/save | Persist a named query definition for later re-execution |
| POST | /v1/query/{query_id} | Execute a saved query against a dataset |
| GET | /v1/jobs/{job_id} | Poll the status and retrieve the results of an async analytics job |
Note
/v1/save and /v1/jobs/{job_id} are planned Phase 2 features. Both currently return 501 Not Implemented. Check the changelog for general availability dates.
Python SDK Examples¶
Save a named query¶
```python
from toolkitapi import Analytics

with Analytics(api_key="tk_...") as analytics:
    saved = analytics.save({
        "name": "monthly_revenue_by_region",
        "description": "Total revenue grouped by region for a given month",
        "query": "What is the total revenue by region for each month?",
        "dataset_id": "dset_abc123",
    })
    print(saved["query_id"])
```
Run a saved query¶
```python
from toolkitapi import Analytics

with Analytics(api_key="tk_...") as analytics:
    result = analytics.run_saved_query(
        "qry_xyz789",
        {
            "dataset_id": "dset_abc123",
            "parameters": {"month": "2024-05"},
        },
    )
    print(result["status"])
    for row in result["results"]["rows"]:
        print(row)
```
Run a large query asynchronously¶
```python
import time

from toolkitapi import Analytics

with Analytics(api_key="tk_...") as analytics:
    job = analytics.run_saved_query(
        "qry_xyz789",
        {"dataset_id": "dset_large", "execution_mode": "async"},
    )
    job_id = job["job_id"]

    # Poll until the job reaches a terminal state
    while True:
        status = analytics._client.get(f"jobs/{job_id}")
        if status["status"] in ("succeeded", "failed"):
            break
        time.sleep(3)

    if status["status"] == "succeeded":
        print(status["results"])
    else:
        print(status["error"])
```
Request Parameters¶
POST /v1/save¶
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Short, URL-friendly identifier for the query (e.g. monthly_revenue) |
| query | string | Yes | Natural-language question or structured query expression to save |
| description | string | No | Human-readable description of what the query computes |
| dataset_id | string | No | Bind the saved query to a specific dataset; omit for dataset-agnostic queries |
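As a sketch, a request body for /v1/save built from the fields above might look like the following; the field names mirror the table, and the values are purely illustrative:

```python
import json

# Illustrative /v1/save request body; only "name" and "query" are required.
save_body = {
    "name": "monthly_revenue",                        # required, URL-friendly
    "query": "What is the total revenue by region?",  # required
    "description": "Revenue totals grouped by region",  # optional
    # "dataset_id" omitted: the saved query stays dataset-agnostic
}

print(json.dumps(save_body))
```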
POST /v1/query/{query_id}¶
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| query_id | string (path) | Yes | Unique identifier of the saved query returned by /v1/save |
| dataset_id | string | No | Dataset to run against; overrides the one bound at save time |
| execution_mode | string | No | "sync" (default) or "async"; use "async" for large datasets |
| parameters | object | No | Key-value pairs substituted into the query template at execution time |
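The substitution described for parameters happens server-side; a minimal local sketch of the idea, assuming the saved query uses brace-style placeholders, is:

```python
# Minimal sketch of parameter substitution: key-value pairs from
# "parameters" replace {placeholders} in the saved query template.
# The placeholder syntax here is an assumption for illustration.
template = "What is the total revenue by region for {month}?"
parameters = {"month": "2024-05"}

rendered = template.format(**parameters)
print(rendered)  # -> "What is the total revenue by region for 2024-05?"
```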
GET /v1/jobs/{job_id}¶
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| job_id | string (path) | Yes | Unique identifier of the async job returned when a query runs with execution_mode: async |
Response Fields¶
Save¶
| Field | Type | Description |
| --- | --- | --- |
| query_id | string | Unique identifier for the saved query; use with /v1/query/{query_id} |
| name | string | The name provided at save time |
| created_at | string | ISO 8601 timestamp of when the query was saved |
| status | string | Always "saved" on success |
Run Saved Query¶
| Field | Type | Description |
| --- | --- | --- |
| query_id | string | The executed query's identifier |
| dataset_id | string | The dataset the query ran against |
| execution_mode | string | "sync" or "async", as requested |
| status | string | "succeeded", "failed", or "queued" (async only) |
| results | object | Present for sync executions; contains columns, rows, and row_count |
| results.columns | array | Ordered list of column name strings |
| results.rows | array | Array of arrays; each inner array is one result row |
| results.row_count | integer | Total number of rows returned |
| job_id | string \| null | Populated for async executions; use with /v1/jobs/{job_id} |
| created_at | string | ISO 8601 timestamp of when the execution was triggered |
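Since results arrives as parallel columns and rows arrays, a common client-side step is zipping them into per-row dicts. A minimal sketch, with an illustrative payload shaped like the table above:

```python
# Sketch: turn the sync "results" payload (columns + row arrays) into
# a list of dicts keyed by column name. The values are illustrative.
results = {
    "columns": ["region", "revenue"],
    "rows": [["EMEA", 120000], ["APAC", 95000]],
    "row_count": 2,
}

records = [dict(zip(results["columns"], row)) for row in results["rows"]]
print(records)
```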
Get Job Status¶
| Field | Type | Description |
| --- | --- | --- |
| job_id | string | Unique identifier for this async job |
| query_id | string | The saved query that was executed |
| dataset_id | string | The dataset the query ran against |
| status | string | Lifecycle state: queued, running, succeeded, or failed |
| progress | integer | Completion percentage (0–100); meaningful when status is running |
| results | object \| null | Populated on succeeded; contains columns, rows, and row_count |
| error | string \| null | Human-readable error message; populated only when status is "failed" |
| created_at | string | ISO 8601 timestamp when the job was enqueued |
| completed_at | string \| null | ISO 8601 timestamp when the job finished; otherwise null |
Tip
Poll /v1/jobs/{job_id} at a 2–5 second interval. Start at 2 s and increase exponentially if the job has been running for more than 30 seconds. Completed job results are retained for 24 hours after completed_at.
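The polling guidance above can be sketched as a small helper. Here get_job_status is a hypothetical callable standing in for a GET to /v1/jobs/{job_id} (the SDK's actual method name is not specified in this section):

```python
import time


def poll_job(get_job_status, job_id, initial=2.0, cap=30.0):
    """Poll an async job, starting at 2 s and backing off exponentially.

    get_job_status is a hypothetical callable that performs
    GET /v1/jobs/{job_id} and returns the parsed JSON body.
    """
    delay = initial
    while True:
        status = get_job_status(job_id)
        if status["status"] in ("succeeded", "failed"):
            return status
        time.sleep(delay)
        delay = min(delay * 2, cap)  # exponential backoff, capped at `cap`
```

Returning the full status body (rather than just results) lets the caller distinguish "succeeded" from "failed" and read error when present.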