The Advanced Playbook
You've mastered the basics. Now it's time for the patterns that make people ask, "Wait, you built that with YAML?" Multi-provider orchestration, conditional logic, self-improving loops, and more.
What You'll Master
- Using different AI models for different tasks
- Building tools that adapt to their input
- Self-critiquing workflows that iterate to perfection
- Calling external tools from within your SmartTools
Right Model, Right Job
Different AI models have different strengths. A smart tool uses them strategically:
| Tier | Models | Best for |
| --- | --- | --- |
| Fast & Cheap | Haiku, Grok-mini | Extraction, formatting, simple tasks |
| Balanced | Sonnet, GPT-4o-mini | Analysis, writing, code review |
| Maximum Power | Opus, GPT-4, DeepSeek | Complex reasoning, synthesis |
```yaml
name: smart-analyzer
version: "1.0.0"
description: Uses the right model for each task
steps:
  # Fast model extracts structure
  - type: prompt
    provider: opencode-grok
    prompt: |
      Extract all key facts, dates, and names from this text.
      Return as a bullet list, nothing else.

      {input}
    output_var: facts

  # Powerful model does the thinking
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Based on these extracted facts, provide:
      1. A summary of what happened
      2. Key insights or patterns
      3. Questions that remain unanswered

      Facts:
      {facts}
    output_var: analysis

output: "{analysis}"
```
Pro Tip: Cost Optimization
Use fast models for extraction (structured data from messy text) and powerful models for synthesis (insights from structured data). This split alone can cut costs by as much as 80% with little to no loss in quality.
Tools That Adapt to Their Input
The best tools don't assume what they're getting. They figure it out:
```yaml
name: universal-parser
version: "1.0.0"
description: Handles JSON, CSV, or plain text
steps:
  # Detect format
  - type: code
    code: |
      import json

      text = input.strip()
      if text.startswith('{') or text.startswith('['):
          try:
              json.loads(text)
              format_type = "json"
          except json.JSONDecodeError:
              format_type = "text"
      elif ',' in text and '\n' in text:
          format_type = "csv"
      else:
          format_type = "text"

      format_instructions = {
          "json": "Parse this JSON and describe its structure.",
          "csv": "Analyze this CSV data and summarize the columns.",
          "text": "Summarize the key points of this text."
      }
      instruction = format_instructions[format_type]
    output_var: format_type, instruction

  # Process accordingly
  - type: prompt
    provider: claude
    prompt: |
      This input is {format_type} format.
      {instruction}

      Input:
      {input}
    output_var: result

output: "[{format_type}] {result}"
```
The Self-Improving Loop
Want better quality? Make your tool critique itself:
```yaml
name: perfect-summary
version: "1.0.0"
description: Summarizes, then improves itself
steps:
  # First attempt
  - type: prompt
    provider: claude-haiku
    prompt: |
      Write a 3-sentence summary of this text:

      {input}
    output_var: draft

  # Self-critique
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Rate this summary from 1-10 and list specific improvements:

      Original text:
      {input}

      Summary:
      {draft}

      Be harsh but constructive.
    output_var: critique

  # Final polish
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Rewrite this summary addressing the feedback.
      Keep it to 3 sentences.

      Original summary: {draft}
      Feedback: {critique}
    output_var: final

output: "{final}"
```
Warning: Know When to Stop
More iterations don't always mean better output. Two or three passes are usually the sweet spot; beyond that, you're paying for diminishing returns.
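If you do want a third pass, the simplest approach with this step syntax is to unroll it explicitly rather than loop. Here is a minimal sketch of the extra steps you would append to perfect-summary above, pointing the final output at the new variable (critique_2 and final_2 are illustrative names):

```yaml
  # Optional third pass: one more critique of the revised summary
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Rate this revised summary from 1-10 and list any remaining improvements:

      Original text:
      {input}

      Summary:
      {final}
    output_var: critique_2

  # ...and one last rewrite, then stop
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Rewrite this summary addressing the feedback.
      Keep it to 3 sentences.

      Original summary: {final}
      Feedback: {critique_2}
    output_var: final_2

output: "{final_2}"
```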
Dynamic Prompt Building
Let Python construct your prompts on the fly:
```yaml
name: multi-task
version: "1.0.0"
description: One tool, many abilities
arguments:
  - flag: --task
    variable: task
    default: "summarize"
    description: "Task: summarize, explain, critique, expand, translate"
  - flag: --style
    variable: style
    default: "professional"
steps:
  - type: code
    code: |
      prompts = {
          "summarize": f"Summarize in a {style} tone",
          "explain": f"Explain for a beginner, {style} style",
          "critique": f"Provide {style} constructive criticism",
          "expand": f"Expand with more detail, keep {style}",
          "translate": f"Translate, maintaining {style} register"
      }
      instruction = prompts.get(task, prompts["summarize"])

      # Add context based on input length
      length = len(input)
      if length > 5000:
          instruction += ". Focus on the most important parts."
      elif length < 100:
          instruction += ". Be thorough despite the short input."
    output_var: instruction

  - type: prompt
    provider: claude
    prompt: |
      {instruction}

      {input}
    output_var: result

output: "{result}"
```
Calling External Tools
SmartTools can wrap any command-line tool:
````yaml
name: lint-explain
version: "1.0.0"
description: Runs pylint and explains the results
steps:
  # Run the linter
  - type: code
    code: |
      import subprocess
      import tempfile
      import os

      # Write code to temp file
      with tempfile.NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:
          f.write(input)
          temp_path = f.name

      try:
          result = subprocess.run(
              ['pylint', '--output-format=text', temp_path],
              capture_output=True,
              text=True,
              timeout=30
          )
          lint_output = result.stdout or result.stderr or "No issues found!"
      except FileNotFoundError:
          lint_output = "ERROR: pylint not installed"
      except subprocess.TimeoutExpired:
          lint_output = "ERROR: Linting timed out"
      finally:
          os.unlink(temp_path)
    output_var: lint_output

  # Explain in plain English
  - type: prompt
    provider: claude
    prompt: |
      Explain these linting results to a Python beginner.
      For each issue, explain WHY it's a problem and HOW to fix it.

      Results:
      {lint_output}
    output_var: explanation

output: |
  ## Lint Results
  ```
  {lint_output}
  ```

  ## Explanation
  {explanation}
````
Try It: Build a Research Assistant
Boss Level Exercise
Create a tool that:
- Detects whether input is a question or a topic
- Uses a fast model to generate 3 research angles
- Uses a powerful model to explore the best angle
- Adds a code step to format with headers and timestamps
See the solution
```yaml
name: research
version: "1.0.0"
description: Deep research on any topic
steps:
  # Detect input type
  - type: code
    code: |
      text = input.strip()
      is_question = text.endswith('?') or text.lower().startswith(('what', 'how', 'why', 'when', 'who', 'where'))
      input_type = "question" if is_question else "topic"
    output_var: input_type

  # Generate angles
  - type: prompt
    provider: opencode-grok
    prompt: |
      This is a {input_type}: {input}
      Suggest 3 interesting angles to explore this.
      Return as a numbered list.
    output_var: angles

  # Deep dive
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Research request: {input}

      Possible angles:
      {angles}

      Pick the most interesting angle and provide:
      1. Background context
      2. Key facts and insights
      3. Different perspectives
      4. Remaining questions
    output_var: research

  # Format nicely
  - type: code
    code: |
      from datetime import datetime

      timestamp = datetime.now().strftime("%Y-%m-%d %H:%M")
      formatted = f'''
      # Research Report
      Generated: {timestamp}
      Query: {input}

      {research}

      ---
      *Angles considered: {angles}*
      '''
    output_var: formatted

output: "{formatted}"
```
Performance Secrets
Speed Tricks
- Use Haiku/Grok for extraction
- Combine related tasks in one prompt
- Skip AI when Python can do it (sketched below)
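"Skip AI when Python can do it" can be taken literally: a tool doesn't need a prompt step at all. A minimal sketch using the same step syntax as the examples above (the tool name and fields are illustrative):

```yaml
name: quick-stats
version: "1.0.0"
description: Word, line, and character counts with no AI call
steps:
  - type: code
    code: |
      # Pure Python: counting never needs a model
      words = len(input.split())
      lines = len(input.splitlines())
      chars = len(input)
      stats = f"{words} words, {lines} lines, {chars} characters"
    output_var: stats

output: "{stats}"
```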
Cost Tricks
- Powerful models only for synthesis
- Truncate long inputs with code first (sketched below)
- Cache repeated operations
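The truncation trick is just a code step placed in front of the expensive prompt. A minimal sketch, assuming the same step syntax as the examples above (the tool name and the 8,000-character cutoff are arbitrary placeholders):

```yaml
name: cheap-summary
version: "1.0.0"
description: Trims long input before the expensive model sees it
steps:
  # Trim the input in plain Python
  - type: code
    code: |
      MAX_CHARS = 8000  # arbitrary cutoff for this sketch
      if len(input) > MAX_CHARS:
          trimmed = input[:MAX_CHARS] + "\n[...truncated...]"
      else:
          trimmed = input
    output_var: trimmed

  # Only the trimmed text reaches the powerful model
  - type: prompt
    provider: claude-sonnet
    prompt: |
      Summarize the key points of this text:

      {trimmed}
    output_var: summary

output: "{summary}"
```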
What's Next?
You've learned the advanced patterns. Now go parallel:
- Parallel Orchestration - Run multiple tools simultaneously
- Publishing Tools - Share your creations with the world