API Recipes

Pre-built workflows that chain multiple Toolkit API endpoints together. Copy the code, swap in your API key, and run.

Toolkits: Email, DNS, Geo

Lead Enrichment Pipeline

Validate an email address, check the domain's mail infrastructure, then geolocate the mail server to build a rich lead profile.

1. Validate the email address

Email Toolkit — POST /v1/validate

Returns syntax validity, deliverability score, disposable/role-based flags.

2. Look up MX records for the domain

DNS Toolkit — GET /v1/mx?domain=example.com

Returns the mail exchange servers and their priorities.

3. Geolocate the primary mail server

Geo Toolkit — GET /v1/ip-lookup?ip={mx_ip}

Returns country, city, ISP, and coordinates for the mail server IP.

Python:

```python
import httpx

API_KEY = "your-api-key"
HEADERS = {"X-API-Key": API_KEY}

email = "[email protected]"
domain = email.split("@")[1]

# Step 1: Validate email
r1 = httpx.post("https://email.toolkitapi.io/v1/validate",
                headers=HEADERS, json={"email": email})
validation = r1.json()

# Step 2: Get MX records for the domain
r2 = httpx.get(f"https://dns.toolkitapi.io/v1/mx?domain={domain}",
               headers=HEADERS)
mx_records = r2.json()["data"]["records"]
if not mx_records:
    raise ValueError(f"No MX records found for {domain}")

# Step 3: Resolve the primary mail server to an IP, then geolocate it
# (/v1/ip-lookup takes an IP, so resolve the MX hostname first)
primary_mx = mx_records[0]["exchange"]
r3 = httpx.get(f"https://dns.toolkitapi.io/v1/a?domain={primary_mx}",
               headers=HEADERS)
mx_ip = r3.json()["data"]["records"][0]
r4 = httpx.get(f"https://geo.toolkitapi.io/v1/ip-lookup?ip={mx_ip}",
               headers=HEADERS)
location = r4.json()

print(f"Email valid: {validation['data']['is_valid']}")
print(f"Mail server: {primary_mx}")
print(f"Server location: {location['data']['city']}, {location['data']['country']}")
```

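The validation payload can drive lead scoring beyond a simple valid/invalid check. A minimal sketch of a gating helper (the `score` and `is_disposable` field names are assumptions; only `is_valid` appears in the recipe above, so check the Email Toolkit response schema for the exact keys):

```python
def is_good_lead(validation: dict, min_score: float = 0.8) -> bool:
    """Gate leads on validity, deliverability score, and disposable flag.

    Field names "score" and "is_disposable" are hypothetical; only
    "is_valid" is shown in the recipe above.
    """
    data = validation["data"]
    if not data["is_valid"]:
        return False
    if data.get("is_disposable", False):
        return False
    return data.get("score", 0.0) >= min_score
```

Drop this between steps 1 and 2 to skip the DNS and geo calls for leads that will never convert.
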
Toolkits: DNS

Domain Intelligence Report

Pull DNS records, WHOIS registration data, and SSL certificate details for a domain in parallel to build a comprehensive intelligence report.

1. Fetch core DNS records

DNS Toolkit — GET /v1/a, /v1/mx, /v1/ns, /v1/txt

2. Retrieve WHOIS registration data

DNS Toolkit — GET /v1/whois?domain=example.com

3. Check SSL certificate details

DNS Toolkit — GET /v1/ssl?domain=example.com

Python:

```python
import httpx
import asyncio

API_KEY = "your-api-key"
HEADERS = {"X-API-Key": API_KEY}
BASE = "https://dns.toolkitapi.io/v1"
domain = "example.com"

async def domain_report():
    async with httpx.AsyncClient(headers=HEADERS) as client:
        # Run all lookups in parallel
        a, mx, ns, txt, whois, ssl = await asyncio.gather(
            client.get(f"{BASE}/a?domain={domain}"),
            client.get(f"{BASE}/mx?domain={domain}"),
            client.get(f"{BASE}/ns?domain={domain}"),
            client.get(f"{BASE}/txt?domain={domain}"),
            client.get(f"{BASE}/whois?domain={domain}"),
            client.get(f"{BASE}/ssl?domain={domain}"),
        )
    return {
        "dns": {"a": a.json(), "mx": mx.json(), "ns": ns.json(), "txt": txt.json()},
        "whois": whois.json(),
        "ssl": ssl.json(),
    }

report = asyncio.run(domain_report())
print(f"IPs: {report['dns']['a']['data']['records']}")
print(f"Registrar: {report['whois']['data']['registrar']}")
print(f"SSL issuer: {report['ssl']['data']['issuer']}")
```

Toolkits: SEO, Scrape, Image

Website Audit Workflow

Run an SEO audit, scrape the page content for analysis, and capture a visual screenshot — all from a single URL.

1. Run full SEO audit

SEO Toolkit — POST /v1/audit

Returns meta tags, headings, Open Graph, structured data, and page speed metrics.

2. Scrape page content as Markdown

Scrape Toolkit — POST /v1/scrape

Returns clean Markdown text suitable for LLM ingestion or content analysis.

3. Generate a visual screenshot

Image Toolkit — POST /v1/screenshot

Returns a full-page PNG screenshot of the URL.

Python:

```python
import httpx

API_KEY = "your-api-key"
HEADERS = {"X-API-Key": API_KEY}
url = "https://example.com"

# Step 1: SEO audit
r1 = httpx.post("https://seo.toolkitapi.io/v1/audit",
                headers=HEADERS, json={"url": url}, timeout=30)
audit = r1.json()

# Step 2: Scrape page content
r2 = httpx.post("https://scrape.toolkitapi.io/v1/scrape",
                headers=HEADERS, json={"url": url}, timeout=30)
content = r2.json()

# Step 3: Screenshot
r3 = httpx.post("https://image.toolkitapi.io/v1/screenshot",
                headers=HEADERS, json={"url": url}, timeout=30)
# Save the screenshot
with open("screenshot.png", "wb") as f:
    f.write(r3.content)

print(f"Title: {audit['data']['meta']['title']}")
print(f"SEO score: {audit['data']['score']}")
print(f"Content length: {len(content['data']['markdown'])} chars")
print("Screenshot saved to screenshot.png")
```

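The audit payload is easy to turn into actionable warnings. A sketch built on the `meta` fields printed above (the `description` key and the 60-character title guideline are assumptions, not part of the documented response):

```python
def audit_warnings(audit: dict) -> list[str]:
    """Flag common on-page SEO problems from an audit payload.

    Only data -> meta -> title appears in the recipe above; the
    "description" key is a hypothetical field name.
    """
    meta = audit["data"]["meta"]
    warnings = []
    title = meta.get("title") or ""
    if not title:
        warnings.append("missing <title>")
    elif len(title) > 60:
        warnings.append("title exceeds 60 characters")
    if not meta.get("description"):
        warnings.append("missing meta description")
    return warnings
```

Feed the warnings list into your report alongside the screenshot for a reviewer-friendly audit.
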
Toolkits: Convert, PDF, Image

Document Processing Pipeline

Convert Markdown documentation to HTML, generate a PDF, and create a thumbnail preview image — a common publishing workflow.

1. Convert Markdown to HTML

Convert Toolkit — POST /v1/markdown-to-html

2. Generate PDF from the HTML

PDF Toolkit — POST /v1/from-html

3. Create a thumbnail of the first page

Image Toolkit — POST /v1/resize

Python:

```python
import httpx

API_KEY = "your-api-key"
HEADERS = {"X-API-Key": API_KEY}

markdown = """# Project Report
## Summary
This quarter we shipped 3 major features...
"""

# Step 1: Markdown → HTML
r1 = httpx.post("https://convert.toolkitapi.io/v1/markdown-to-html",
                headers=HEADERS, json={"markdown": markdown})
html = r1.json()["data"]["html"]

# Step 2: HTML → PDF
r2 = httpx.post("https://pdf.toolkitapi.io/v1/from-html",
                headers=HEADERS, json={"html": html}, timeout=30)
with open("report.pdf", "wb") as f:
    f.write(r2.content)

# Step 3: Create a thumbnail. /v1/resize fetches an image by URL, so
# this assumes the first PDF page has already been exported as an image
# and hosted somewhere reachable; replace with your own image URL.
page_image_url = "https://example.com/report-page-1.png"
r3 = httpx.post("https://image.toolkitapi.io/v1/resize",
                headers=HEADERS,
                json={"url": page_image_url, "width": 300})
with open("thumbnail.png", "wb") as f:
    f.write(r3.content)

print("Pipeline complete: report.pdf + thumbnail.png")
```

Toolkits: Email, Auth

Secure User Onboarding

Validate a new user's email, hash their password securely, and generate a TOTP secret for two-factor authentication — all in one flow.

1. Validate the user's email

Email Toolkit — POST /v1/validate

Reject disposable or invalid addresses before creating the account.

2. Hash the password with bcrypt

Auth Toolkit — POST /v1/hash/bcrypt

Returns a salted bcrypt hash ready to store in your database.

3. Generate a TOTP secret for 2FA

Auth Toolkit — POST /v1/totp/generate

Returns a secret key and QR code URI for authenticator apps.

Python:

```python
import httpx

API_KEY = "your-api-key"
HEADERS = {"X-API-Key": API_KEY}

email = "[email protected]"
password = "s3cure-p@ssw0rd"

# Step 1: Validate email first
r1 = httpx.post("https://email.toolkitapi.io/v1/validate",
                headers=HEADERS, json={"email": email})
if not r1.json()["data"]["is_valid"]:
    raise ValueError("Invalid email address")

# Step 2: Hash password
r2 = httpx.post("https://auth.toolkitapi.io/v1/hash/bcrypt",
                headers=HEADERS, json={"text": password})
password_hash = r2.json()["data"]["hash"]

# Step 3: Generate TOTP secret for 2FA
r3 = httpx.post("https://auth.toolkitapi.io/v1/totp/generate",
                headers=HEADERS, json={"issuer": "MyApp", "account": email})
totp = r3.json()["data"]

# Store in your database:
user = {
    "email": email,
    "password_hash": password_hash,
    "totp_secret": totp["secret"],
    "totp_qr_uri": totp["uri"],
}
print(f"User created: {email}")
print(f"2FA QR URI: {totp['uri']}")
```

Toolkits: Scrape, Dev, Convert

Content Pipeline for LLMs

Scrape a web page into structured data, extract and format the metadata as JSON, then convert to YAML for your LLM training config.

1. Scrape page and extract metadata

Scrape Toolkit — POST /v1/metadata

Returns title, description, Open Graph tags, and structured data.

2. Pretty-print the JSON

Dev Toolkit — POST /v1/format/json

Formats and validates the JSON structure for inspection.

3. Convert to YAML for config files

Convert Toolkit — POST /v1/json-to-yaml

Outputs clean YAML ready for LLM training configs or documentation.

Python:

```python
import httpx
import json

API_KEY = "your-api-key"
HEADERS = {"X-API-Key": API_KEY}

url = "https://example.com/blog/post-1"

# Step 1: Extract structured metadata
r1 = httpx.post("https://scrape.toolkitapi.io/v1/metadata",
                headers=HEADERS, json={"url": url}, timeout=30)
metadata = r1.json()["data"]

# Step 2: Format the JSON (validate + pretty-print)
r2 = httpx.post("https://dev.toolkitapi.io/v1/format/json",
                headers=HEADERS, json={"input": json.dumps(metadata)})
formatted = r2.json()["data"]["formatted"]

# Step 3: Convert JSON to YAML
r3 = httpx.post("https://convert.toolkitapi.io/v1/json-to-yaml",
                headers=HEADERS, json={"json": json.dumps(metadata)})
yaml_output = r3.json()["data"]["yaml"]

print("--- YAML output ---")
print(yaml_output)
```

Build your own workflows

Every endpoint is a standard REST call. Mix and match toolkits to build exactly the pipeline your project needs.