LLM Tools

Seven endpoints apply Gemini LLMs to common text-intelligence tasks, each available through a simple REST call with no LLM SDK required. Supported models: gemini-2.0-flash, gemini-2.0-flash-lite, and gemini-1.5-flash.

Endpoint                 Purpose
POST /v1/llm-extract     Extract structured data from text via JSON Schema
POST /v1/llm-classify    Classify text into one or more categories
POST /v1/llm-summarize   Summarize text (paragraph, bullets, or one-line)
POST /v1/llm-sentiment   Detect sentiment and emotional tone
POST /v1/llm-translate   Translate text to any language
POST /v1/llm-rewrite     Rewrite text in a different tone or style
POST /v1/llm-entities    Extract named entities (persons, orgs, locations, dates)
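
Because every endpoint is a plain HTTPS POST, you can call them without the SDK at all. The sketch below uses only the standard library; the base URL and Bearer-token header are assumptions for illustration, so check your account settings for the actual values.

```python
import json
import urllib.request

API_KEY = "tk_..."                        # your API key
BASE_URL = "https://api.toolkitapi.com"   # assumed base URL -- verify in your dashboard

def call_endpoint(path: str, payload: dict) -> dict:
    """POST a JSON payload to an LLM endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",   # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# e.g. call_endpoint("/v1/llm-classify", {"text": "...", "categories": ["billing", "shipping"]})
```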

Python SDK Examples

Extract structured data from text

Use llm-extract when you need to pull specific fields out of unstructured text and get back a typed JSON object.

from toolkitapi import DevTools

invoice_text = """
Invoice #1042 from Acme Corp dated 15 June 2024.
Total amount due: $4,250.00
Due date: July 15, 2024
Contact: [email protected]
"""

schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "vendor": {"type": "string"},
        "total": {"type": "number"},
        "due_date": {"type": "string"},
        "contact_email": {"type": "string"},
    },
}

with DevTools(api_key="tk_...") as dt:
    result = dt.llm_extract(invoice_text, schema=schema)
    print(result["data"])
    # {
    #   "invoice_number": "1042",
    #   "vendor": "Acme Corp",
    #   "total": 4250.00,
    #   "due_date": "2024-07-15",
    #   "contact_email": "[email protected]"
    # }

Classify text into categories

from toolkitapi import DevTools

with DevTools(api_key="tk_...") as dt:
    result = dt.llm_classify(
        text="My order arrived damaged and customer support hasn't responded in 3 days.",
        categories=["billing", "shipping", "product_quality", "customer_support", "returns"],
    )
    print(result["category"])     # "customer_support"
    print(result["confidence"])   # 0.91

    # Multi-label: allow multiple matching categories
    result = dt.llm_classify(
        text="The package was crushed and the invoice was wrong.",
        categories=["billing", "shipping", "product_quality", "customer_support"],
        multi_label=True,
    )
    print(result["categories"])   # ["product_quality", "billing"]
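
In a ticket-routing pipeline you will usually want to gate on the confidence field before acting on the predicted category. A minimal sketch, operating on the response shape shown above (the function name and threshold are illustrative, not part of the API):

```python
def route_ticket(result: dict, threshold: float = 0.75) -> str:
    """Return the predicted category when the model is confident enough;
    otherwise fall back to a manual-review queue."""
    if result.get("confidence", 0.0) >= threshold:
        return result["category"]
    return "manual_review"

print(route_ticket({"category": "customer_support", "confidence": 0.91}))
# customer_support
print(route_ticket({"category": "billing", "confidence": 0.42}))
# manual_review
```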

Summarize text

from toolkitapi import DevTools

article = """
Artificial intelligence is transforming industries at an unprecedented pace.
From healthcare diagnostics to financial modelling, AI systems are now capable
of performing tasks that previously required years of human expertise...
[...long article text...]
"""

with DevTools(api_key="tk_...") as dt:
    # Paragraph summary
    result = dt.llm_summarize(article, style="paragraph", max_length=80)
    print(result["summary"])

    # Bullet points
    result = dt.llm_summarize(article, style="bullets", max_length=100)
    print(result["summary"])

    # One-liner
    result = dt.llm_summarize(article, style="one_line")
    print(result["summary"])

Detect sentiment and emotions

from toolkitapi import DevTools

reviews = [
    "Absolutely love this product! Best purchase I've made all year.",
    "It's okay, nothing special. Works as described but feels cheap.",
    "Terrible quality. Broke after two days. Complete waste of money.",
]

with DevTools(api_key="tk_...") as dt:
    for review in reviews:
        result = dt.llm_sentiment(review)
        print(result["sentiment"])     # "positive" / "neutral" / "negative"
        print(result["score"])         # -1.0 to 1.0
        print(result["emotions"])      # {"joy": 0.8, "anger": 0.0, ...}
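
When scoring reviews in bulk, it is often the aggregate rather than any single result that matters. A small helper, assuming only the sentiment/score response shape shown above:

```python
from statistics import mean

def summarize_sentiment(results: list[dict]) -> dict:
    """Aggregate per-review sentiment results into a mean score and label counts."""
    counts: dict[str, int] = {}
    for r in results:
        counts[r["sentiment"]] = counts.get(r["sentiment"], 0) + 1
    return {"mean_score": mean(r["score"] for r in results), "counts": counts}

sample = [
    {"sentiment": "positive", "score": 0.9},
    {"sentiment": "neutral", "score": 0.1},
    {"sentiment": "negative", "score": -0.8},
]
print(summarize_sentiment(sample))
```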

Translate text

from toolkitapi import DevTools

with DevTools(api_key="tk_...") as dt:
    # Translate to Spanish (formal tone)
    result = dt.llm_translate(
        text="Welcome to our platform. Please review our terms and conditions.",
        target_language="Spanish",
        tone="formal",
    )
    print(result["translated"])
    print(result["detected_language"])   # "en"

    # Translate to Japanese (auto-detect source)
    result = dt.llm_translate(
        text="Hello! How are you doing today?",
        target_language="Japanese",
    )
    print(result["translated"])

Rewrite text in a different tone

from toolkitapi import DevTools

original = "We regret to inform you that your request has been denied due to insufficient documentation."

with DevTools(api_key="tk_...") as dt:
    # Make it friendlier
    result = dt.llm_rewrite(original, tone="friendly")
    print(result["rewritten"])

    # Make it more concise
    result = dt.llm_rewrite(original, tone="concise")
    print(result["rewritten"])

    # Custom instructions
    result = dt.llm_rewrite(
        original,
        instructions="Rewrite as a short SMS message under 160 characters.",
    )
    print(result["rewritten"])

Extract named entities

from toolkitapi import DevTools

text = """
On Monday, Apple CEO Tim Cook met with European Commission President
Ursula von der Leyen in Brussels to discuss AI regulation proposals.
The meeting lasted approximately two hours.
"""

with DevTools(api_key="tk_...") as dt:
    # All entity types
    result = dt.llm_entities(text)
    print(result["entities"])
    # [
    #   {"text": "Apple", "type": "organization", "start": ...},
    #   {"text": "Tim Cook", "type": "person", ...},
    #   {"text": "Brussels", "type": "location", ...},
    #   ...
    # ]

    # Specific entity types only
    result = dt.llm_entities(text, entity_types=["person", "organization"])
    print(result["entities"])
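
The entity list comes back flat; for downstream use it is often handier grouped by type. A sketch that assumes only the entity-object shape shown above:

```python
from collections import defaultdict

def group_entities(entities: list[dict]) -> dict:
    """Group a flat entity list by its type field."""
    grouped = defaultdict(list)
    for ent in entities:
        grouped[ent["type"]].append(ent["text"])
    return dict(grouped)

sample = [
    {"text": "Apple", "type": "organization"},
    {"text": "Tim Cook", "type": "person"},
    {"text": "Brussels", "type": "location"},
]
print(group_entities(sample))
# {'organization': ['Apple'], 'person': ['Tim Cook'], 'location': ['Brussels']}
```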

Choosing a Model

All LLM endpoints accept an optional model parameter:

Model                   Speed      Quality   Best for
gemini-2.0-flash-lite   Fastest    Good      High-volume classification, simple extraction
gemini-2.0-flash        Fast       Better    Most tasks (default)
gemini-1.5-flash        Moderate   High      Complex extraction, nuanced summarization

with DevTools(api_key="tk_...") as dt:
    result = dt.llm_summarize(
        long_doc,
        style="bullets",
        model="gemini-1.5-flash",   # use higher quality for important content
    )

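If you run mixed workloads, one option is to pick the model per call based on the table above. The task names in this helper are illustrative, not part of the API:

```python
def pick_model(task: str) -> str:
    """Map a workload label to a model tier, following the table above."""
    light = {"classification", "simple_extraction"}
    heavy = {"complex_extraction", "nuanced_summarization"}
    if task in light:
        return "gemini-2.0-flash-lite"
    if task in heavy:
        return "gemini-1.5-flash"
    return "gemini-2.0-flash"   # sensible default for most tasks

print(pick_model("classification"))          # gemini-2.0-flash-lite
print(pick_model("nuanced_summarization"))   # gemini-1.5-flash
```
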
Tip

For structured extraction pipelines, use llm-extract with a well-defined JSON Schema. The more specific your schema, the more reliable the output. Combine with json-schema-validate to verify the result before using it downstream.
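
As a rough illustration of that pattern, the stand-in below type-checks extracted fields against the schema's declared types using only the standard library. It covers only a sliver of JSON Schema; the json-schema-validate endpoint should be preferred for real validation.

```python
# Minimal stand-in for schema validation (illustration only).
TYPE_MAP = {
    "string": str,
    "number": (int, float),
    "object": dict,
    "array": list,
    "boolean": bool,
}

def check_types(data: dict, schema: dict) -> list[str]:
    """Return the fields whose values don't match the schema's declared type."""
    errors = []
    for field, spec in schema.get("properties", {}).items():
        expected = TYPE_MAP.get(spec.get("type"))
        if field in data and expected and not isinstance(data[field], expected):
            errors.append(field)
    return errors

schema = {
    "type": "object",
    "properties": {"total": {"type": "number"}, "vendor": {"type": "string"}},
}
print(check_types({"total": 4250.0, "vendor": "Acme Corp"}, schema))   # []
print(check_types({"total": "4250", "vendor": "Acme Corp"}, schema))   # ['total']
```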