AI Features
NORA integrates with three major AI providers — Google Gemini, OpenAI, and Anthropic — through four specialized node types. This guide explains how to configure AI providers, choose models, and use each AI-powered node effectively.
Setting Up AI Providers
Before using any AI node, you need at least one API key configured.
Getting API Keys
| Provider | Where to Get a Key | Key Format |
|---|---|---|
| Google Gemini | Google AI Studio | AIza... |
| OpenAI | OpenAI API Keys | sk-... |
| Anthropic | Anthropic Console | sk-ant-... |
Saving Keys
- Open Settings → System tab
- Scroll to AI Provider Keys
- Paste your key into the appropriate field
- Click Save Settings
Each key shows its status: green “✓ Key saved” or amber “⚠️ Key not set”. You only need keys for providers you plan to use.
Keys are stored locally in ~/.nora/config/settings.json and are sent only to the respective AI provider’s API.
Choosing a Model
Every AI node lets you select a provider and model. Here’s how to choose.
⚠️ Cost Estimates — Use as Rough Guidance Only
The cost indicators below ($ to $$$$) and token pricing shown throughout NORA are approximate estimates based on publicly available pricing at the time of writing. Actual costs may vary significantly based on your API tier, usage volume, promotional credits, and provider pricing changes. Always verify current pricing directly with your AI provider before relying on these estimates for budgeting or billing purposes. NORA’s cost tracking is intended as a convenience feature, not an authoritative billing system.
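To make the arithmetic behind such estimates concrete, here is a minimal sketch of linear token pricing. The model names and per-million-token rates below are made-up placeholders, not real prices:

```python
# Sketch: deriving a per-call USD estimate from token counts.
# All rates here are ILLUSTRATIVE PLACEHOLDERS, not real pricing.
ILLUSTRATIVE_PRICING = {
    # model: (USD per 1M input tokens, USD per 1M output tokens)
    "example-flash-model": (0.10, 0.40),
    "example-pro-model": (1.25, 10.00),
}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Linear token pricing: tokens * rate / 1,000,000 per direction."""
    in_rate, out_rate = ILLUSTRATIVE_PRICING[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```

At these placeholder rates, 1M input tokens on the flash model would estimate $0.10; your provider's invoice is always the source of truth.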
Google Gemini Models
| Model | Best For | Speed | Cost |
|---|---|---|---|
| gemini-2.5-flash ⭐ | General tasks, best price/performance (recommended) | ★★★★★ | $ |
| gemini-2.5-flash-lite | Fastest/cheapest option | ★★★★★ | $ |
| gemini-2.5-pro | Most advanced, complex analysis | ★★★ | $$$ |
| gemini-3-flash-preview | Preview — frontier-class, fast | ★★★★ | $$ |
| gemini-3.1-pro-preview | Preview — advanced reasoning | ★★★ | $$$ |
| gemini-2.0-flash | ⚠️ Deprecated — migrate to 2.5 | ★★★★ | $ |
| gemini-1.5-flash | Legacy | ★★★★ | $ |
| gemini-1.5-pro | Legacy | ★★★ | $$ |
OpenAI Models
| Model | Best For | Speed | Cost |
|---|---|---|---|
| gpt-5.4-mini ⭐ | Fast, strong mini (recommended) | ★★★★★ | $ |
| gpt-5.4 | Flagship reasoning and coding | ★★★★ | $$$ |
| gpt-5.4-nano | Cheapest GPT-5.4-class | ★★★★★ | $ |
| gpt-4o-mini | Legacy — fast general tasks | ★★★★ | $ |
| gpt-4o | Legacy — high-quality | ★★★★ | $$$ |
| gpt-4.1 | Legacy — strong general purpose | ★★★ | $$$ |
| gpt-4.1-mini | Legacy — fast with quality | ★★★★ | $$ |
| o4-mini | Reasoning tasks, cost-effective | ★★★ | $$ |
| o3 | Advanced reasoning | ★★ | $$$$ |
Anthropic Models
| Model | Best For | Speed | Cost |
|---|---|---|---|
| Claude Opus 4.6 | Most intelligent — agents and coding | ★★ | $$$$ |
| Claude Sonnet 4.6 | Best balance of speed and intelligence | ★★★★ | $$$ |
| Claude 4.5 Haiku ⭐ | Fastest/cheapest Claude (recommended for cost) | ★★★★★ | $ |
| Claude Sonnet 4 | Legacy | ★★★★ | $$$ |
| Claude Opus 4 | Legacy | ★★ | $$$$ |
Custom Models
All AI nodes support a Custom model option. Select “Custom” from the model dropdown, then type any model identifier. This lets you use new models that haven’t been added to NORA’s dropdown yet.
AI Router Node
The AI Router classifies text from a file and routes the workflow based on the classification. Think of it as an intelligent switch — it reads a document, determines what category it belongs to, and sends the workflow down the matching path.
How It Works
- The node reads the most recent file from a configured input folder
- The file’s content is sent to the AI provider with a classification prompt
- The AI returns a category (one of your predefined options)
- The workflow continues along the edge whose label matches that category
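The file-selection part of the steps above can be sketched in plain Python. This is a conceptual illustration, not NORA's source; the helper name and the semicolon pattern splitting are assumptions modeled on the File Pattern field:

```python
import glob
import os

def pick_latest_file(folder: str, patterns: str = "*.txt;*.md", sort_by: str = "modified"):
    """Step 1 of the Router: choose the newest file matching any
    semicolon-separated pattern. Assumption: 'modified' compares mtime,
    anything else compares ctime."""
    candidates = []
    for pattern in patterns.split(";"):
        candidates.extend(glob.glob(os.path.join(folder, pattern)))
    key = os.path.getmtime if sort_by == "modified" else os.path.getctime
    return max(candidates, key=key) if candidates else None
# Steps 2-4: the chosen file's content goes to the provider with a
# classification prompt, and the returned category selects the matching edge.
```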
Configuration
| Field | Description |
|---|---|
| AI Provider | Gemini (default), OpenAI, or Anthropic |
| AI Model | Select from the provider’s model list |
| Input Folder Path | Folder to scan for files |
| File Pattern | Which files to consider (e.g., *.txt;*.html;*.md) |
| Sort By | Modified or Created — determines which file is “latest” |
| Categories | Comma-separated list of route labels |
| Custom Prompt | Override the default classification prompt |
| API Key | Optional per-node key (overrides global Settings key) |
Setting Up Categories
- Enter your category names in the Categories field, separated by commas
- Example: urgent, follow-up, archive, spam
- Connect outgoing edges from the Router node to downstream nodes
- Label each edge with one of your category names (click the edge in Edit Mode → Edit Label)
- Add an edge labeled other as a catch-all for unrecognized responses
The AI will classify each file into exactly one category. If its choice doesn’t match any edge label, it falls back to other.
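NORA's docs don't spell out the exact string-matching rule, so the sketch below assumes case- and whitespace-insensitive comparison before falling back to the other edge:

```python
def resolve_route(ai_reply: str, edge_labels: list[str]) -> str:
    """Map the model's raw category string onto an edge label.
    Assumption: matching ignores case and surrounding whitespace."""
    normalized = ai_reply.strip().lower()
    for label in edge_labels:
        if label.strip().lower() == normalized:
            return label
    return "other"  # catch-all edge for unrecognized responses
```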
CSV Export
Enable CSV export to save classification results to a file:
– Toggle CSV Export on
– Set the Output Directory for the CSV file
– Each classification appends a row with: filename, category, summary, extracted fields
Output Display
After processing, the node shows:
– 🏷️ Category — the AI’s classification
– 📝 Summary — brief description of the content
– 📄 File — which file was classified
– 💰 Cost — API cost in USD with token counts
– 🧩 Fields — any structured data the AI extracted
Debug Mode (🐛)
Click the 🐛 button in the node header to toggle Debug Mode. When active:
– A 🐛 Raw Response panel appears below the output, showing the full JSON object returned by the AI API
– The panel is scrollable and has a one-click Copy button
– Use this to inspect classification confidence, token usage metadata, or model-specific response fields
Debug mode is persisted on the node — it stays ON across workflow reloads until you toggle it off.
AI Agent Node
The AI Agent reads context files and calls a tool or provides a final answer. Unlike the Router (which just classifies), the Agent can take action by calling one of your defined tools.
How It Works
- The node loads context files from a Memory Folder
- The file content is sent to the AI with your prompt and available tool list
- The AI either:
- Calls a tool — the workflow routes along the edge labeled with that tool’s name
- Returns a final answer — the workflow routes along the “success” edge
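The tool-call-or-final-answer decision above can be pictured as a small routing function. The response shape here is a simplified stand-in, not any provider's actual schema:

```python
def agent_route(response: dict, tool_names: list[str]) -> str:
    """Pick the outgoing edge for an AI Agent result (illustrative only).
    A recognized tool call routes along the edge named for that tool;
    anything else is treated as a final answer on the 'success' edge."""
    tool = response.get("tool_call")  # simplified response shape (assumption)
    if tool in tool_names:
        return tool
    return "success"
```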
Configuration
| Field | Description |
|---|---|
| AI Provider | Gemini, OpenAI, or Anthropic |
| AI Model | Select from the provider’s model list |
| Memory Folder Path | Folder of context files for the AI to read |
| File Pattern | Which files to load (e.g., *.txt;*.md;*.json) |
| Max Memory Files | How many files to load (default: 1) |
| Tools | Comma-separated list of tool names |
| Custom Prompt | Instructions for the agent |
| Conversation Mode | Single-turn (one response) or Multi-turn (back-and-forth) |
| Max Turns | For multi-turn mode (default: 5) |
Tools
Define tool names as a comma-separated list. The AI decides which tool to call based on the context and your prompt. Each tool name becomes an output handle on the right side of the node:
| Handle | Color | When Used |
|---|---|---|
| Tool name | Green | AI chose to call this tool |
| Success | Blue | AI gave a final answer |
| Error | Red | An error occurred |
Multi-Turn Mode
In multi-turn mode, the agent can make multiple calls to the AI (up to your max turns limit). The turn counter displays on the node during execution. Use this when the task requires back-and-forth reasoning.
Output Display
After execution, the node shows:
– 🔧 Tool Called or ✅ Final Answer
– 💭 Reasoning — the AI’s explanation for its decision
– 📝 Response content
– 📚 Memory Used — which context files were loaded
– 💰 Cost with token breakdown
Debug Mode (🐛)
Click the 🐛 button in the node header to toggle Debug Mode. When active:
– A 🐛 Raw Response panel appears after execution, showing the complete JSON response from the AI API
– The panel is scrollable (max height 140px) and has a one-click Copy button
– Use this to inspect tool call arguments, raw model output, finish reason, or trace unexpected routing decisions
Debug mode is persisted on the node — it stays ON across workflow reloads until you toggle it off.
AI Autonomous Agent Node
The Autonomous Agent is NORA’s most powerful AI node. It independently executes multiple tools in a loop to complete a complex task — planning what to do, executing tools, evaluating results, and continuing until the task is done.
How It Works
```
Your Request → AI Plans → Executes Tool → Evaluates Result →
  → Need more tools? → Executes next tool → Evaluates → ...
  → Task complete?   → Returns summary
  → Need info from you? → Asks question → Waits → Resumes
```
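The loop above, together with the documented iteration and budget limits, can be sketched as follows. plan_step and execute_tool are hypothetical callbacks standing in for the LLM planning call and the tool runner; this is not NORA's implementation:

```python
def run_autonomous(plan_step, execute_tool, max_iterations=10, budget_usd=10.00):
    """Illustrative plan/execute/evaluate loop with safety limits."""
    spent = 0.0
    for iteration in range(1, max_iterations + 1):
        decision = plan_step(iteration)           # AI plans the next action
        spent += decision.get("cost", 0.0)
        if spent > budget_usd:
            return ("error", f"budget exceeded at iteration {iteration}")
        if decision["action"] == "complete":      # task done: return summary
            return ("complete", decision["summary"])
        if decision["action"] == "ask_user":      # needs info: pause for input
            return ("needs_input", decision["question"])
        execute_tool(decision["tool"], decision.get("params", {}))
    return ("partial", "max iterations reached")  # mirrors the Partial handle
```

The four return values line up with the node's Complete, Partial, Needs Input, and Error output handles.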
Configuration
| Field | Description |
|---|---|
| AI Provider | Gemini, OpenAI, or Anthropic |
| AI Model | Select from the provider’s model list |
| Goal Prompt | System-level instructions defining the agent’s role |
| User Request | The task for the agent to accomplish |
| Default Working Dir | Base directory for file operations |
| Tools | Tool definitions — from the Tool Library or defined inline |
| Max Iterations | Maximum tool-call cycles (default: 10) |
| Timeout Minutes | Overall time limit (default: 30) |
| Budget Limit ($) | Maximum AI API spend (default: $10.00) |
| Memory Folder | Optional context files |
Built-in Tools
Every Autonomous Agent has these tools available automatically (no configuration needed):
| Tool | What It Does |
|---|---|
| read_file | Read the contents of any file (up to 50,000 characters) |
| write_file | Create or overwrite a file with specified content (auto-creates parent directories) |
| list_directory | List files and folders in a directory (recursive option available) |
| file_exists | Check if a file or directory exists at the given path |
| run_command | Run shell commands and capture stdout/stderr (with timeout control) |
These are the same core tools available to Custom Script Agents, so the Autonomous Agent can read, write, and run commands just like a Custom Script Agent.
You can add more tools from the Tool Library or define them inline with a name, command, and parameters. You can also drag script files directly from Windows File Explorer onto the agent node to add them as tools instantly.
Tool Discovery (Dynamic Tool Resolution)
Enable Tool Discovery to let the agent find and use tools from your Tool Library at runtime — without pre-configuring them on the node.
| Setting | Description |
|---|---|
| Allow Tool Discovery | Checkbox in the edit panel under Safety Limits |
When enabled:
1. The agent receives a special search_tools built-in that searches your Tool Library by keyword
2. If the agent calls an unknown tool, NORA attempts lazy resolution — searching the Tool Library for a matching tool name
3. Resolved tools are cached for the session, so subsequent calls are fast
This is ideal for general-purpose agents that need flexibility. Instead of pre-defining 20 tools, enable Tool Discovery and let the agent find what it needs.
Example: An agent with Tool Discovery enabled receives the task “Generate a thumbnail for this video”. It searches for “thumbnail” or “image”, discovers ffmpeg-thumbnail in your Tool Library, and uses it — all without you pre-configuring that tool on the node.
Live Progress
The Autonomous Agent streams progress in real time:
- A status bar shows: running status, current iteration (e.g., “Iteration 3/10”), and cumulative cost
- A conversation panel displays the agent’s thinking, tool calls, and results as chat bubbles
- An execution log tracks every step with timestamps, tool names, success/failure status, and duration
Interactive Chat
You can communicate with the agent while it’s running:
- The agent may pause and ask you a question (the status shows “💬 Waiting for input”)
- Type your response in the chat panel and press Enter
- The agent resumes with your input
Safety Controls
| Control | Default | Description |
|---|---|---|
| Max Iterations | 10 | Hard limit on tool-call cycles |
| Budget Limit | $10.00 | Stops when cumulative AI cost exceeds this amount |
| Timeout | 30 min | Overall execution time limit |
| Max Tokens | 16,384 | Maximum output tokens per LLM response. Claude supports up to 65,536; GPT-4 up to 16,384; Gemini up to 8,192 |
| Cancel Button | — | Stop the agent manually at any time |
Tip: The Max Tokens setting controls how much the AI can output per response. A higher limit allows the agent to write larger files or produce comprehensive documents in a single response. If you see “Response was truncated” errors, increase this value up to 65,536 for Claude.
Output Routing
| Handle | Color | When Used |
|---|---|---|
| Complete | Green | Task finished successfully |
| Partial | Yellow | Max iterations reached before completion |
| Needs Input | Blue | Agent paused waiting for user input |
| Error | Red | An error occurred |
Execution Log
Click the log section to expand a detailed record of the agent’s actions:
```
▶ Agent started
[1] 🔧 list_directory — "Checking available files"
[1] ✓ Found 12 files (23ms)
[2] 🔧 read_file — "Reading config.json"
[2] ✓ Read 2,450 characters (15ms)
[3] 🔧 write_file — "Creating summary"
[3] ✓ Wrote 890 characters (8ms)
✅ Agent completed — "Summary generated successfully"
```
Each entry shows the iteration number, tool name, reasoning, result, and duration. Failed tool calls show error details that can be expanded.
Debug Mode (🐛)
Click the 🐛 button in the node header to toggle Debug Mode. When active:
– Tool results are no longer truncated — the full result text is shown for every tool call in the execution log (instead of the default 200-character preview)
– debugInfo fields — if a tool result includes a debugInfo object, it is rendered inline in the log entry (JSON formatted, blue text)
– Useful for diagnosing why a tool call returned unexpected output or tracing data flow through multi-step tool chains
Debug mode is persisted on the node — it stays ON across workflow reloads until you toggle it off.
Upstream Context
If a prior node in the workflow passes context data (via aiToolParams), the Autonomous Agent receives it automatically. The node shows “📥 Has upstream context from prior node” when this data is present.
Session Persistence & Resumability
Autonomous Agent sessions are persistent — if execution is interrupted (browser refresh, app restart, or manual pause), the agent can resume from where it left off:
- Conversation history is preserved across restarts
- The agent shows “Resume” instead of “Run” when a prior session exists
- Click Clear Session to reset and start fresh
- Session state includes: current iteration, pending questions, partial results, and cost tracking
This allows you to safely pause long-running agents, review their progress, and resume without losing work.
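As a rough mental model, resumable state boils down to a serialize/restore pair like the sketch below. The state fields and file layout are illustrative, not NORA's actual on-disk format:

```python
import json
import os

def save_session(path: str, state: dict) -> None:
    """Persist resumable agent state (illustrative shape and location)."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(state, f)

def load_session(path: str):
    """Return prior session state, or None when starting fresh."""
    if not os.path.exists(path):
        return None          # no prior session: the node shows "Run"
    with open(path, encoding="utf-8") as f:
        return json.load(f)  # prior session exists: the node shows "Resume"
```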
Custom Script Agent Node
The Custom Script Agent executes your own Python or Node.js scripts that follow a JSON communication protocol. Your script is the decision-maker — it can use any LLM provider, call tools, ask the user questions, report cost, and choose which route to take.
Templates included. NORA ships starter templates in the custom-script-templates/ folder. Copy one, replace the placeholder logic with your LLM calls, and go.
How It Differs
| Feature | AI Nodes (Router/Agent/Autonomous) | Custom Script Agent |
|---|---|---|
| Logic source | AI provider (LLM) managed by NORA | Your script — any LLM, any logic |
| API keys | Configured in Settings | Your script manages its own keys (.env file) |
| Communication | API calls | JSON stdin/stdout protocol |
| Routing | Predefined handles | Custom route labels you define |
| AI usage | Built-in to the node | Optional — your script can call LLMs internally, or not use AI at all |
Configuration
| Field | Description | Default |
|---|---|---|
| Script Path | Absolute path to your .py or .js script file | (required) |
| Script Type | Auto-detect (from extension), Python, or Node.js | Auto-detect |
| Working Directory | The directory your script runs in. Affects relative file paths | Script’s folder |
| Route Labels | Comma-separated labels that become output edges (e.g., approve, reject, review) | (empty — just success/error) |
| User Request | The task or prompt to send to your script | (empty) |
| Memory / Input Folder Path | Folder containing context files to pass to the script automatically | (empty) |
| File Pattern | Semi-colon separated glob patterns for memory files | *.txt;*.md;*.json;*.csv |
| Sort By | How to sort memory files: Last Modified or Created Time | Last Modified |
| Max Memory Files | Maximum number of matching files to include as context | 5 |
| Max Runtime (minutes) | Script will be force-terminated after this duration (max 120) | 30 |
| Tools | Tools from the Tool Library your script can invoke at runtime | (none) |
Quick-Open Buttons
The script path badge on the node has two buttons:
– ✎ (purple) — Opens the script in the dashboard’s inline Monaco editor
– ↗ (green) — Opens the script in your configured external editor (VS Code, Notepad++, etc.)
Agent Folder Quick Access
If you’ve configured an Agent Folder (the recommended setup), you’ll see it displayed on the node with:
– Open — Opens the folder in File Explorer
– Copy — Copies the folder path to clipboard
Chat Panel Controls
| Control | When Visible | Action |
|---|---|---|
| Stop ⏹ | While running | Immediately terminates the script process (kills entire process tree on Windows) |
| Send 📤 | Always | Sends chat input to the script |
| Rerun | When saved task exists | Re-runs the original task from scratch |
| Clear | When anything to clear | Resets chat, execution log, and status to idle |
| Copy | When messages exist | Copies conversation history to clipboard |
API Key Management
Scripts manage their own API keys — the dashboard does NOT handle this for Custom Script Agents. This gives you full flexibility to use any provider.
Option 1: .env file (recommended) — Create a .env file in the same folder as your script:
```
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
OPENAI_API_KEY=sk-your-key-here
GOOGLE_API_KEY=AIza-your-key-here
```
The included templates have a built-in .env parser that loads automatically on script start — no need to install python-dotenv or dotenv.
Option 2: System environment variables — Set them via PowerShell, CMD, or system settings. Your script inherits all environment variables from the dashboard process.
JSON Protocol
Your script communicates with NORA through JSON messages on stdin (incoming) and stdout (outgoing). One JSON object per line. Always flush output.
Messages your script receives (stdin):
| Action | When | Contains |
|---|---|---|
| start | Immediately after launch | config object with route labels, tools, memory file contents, conversation history, user request, working directory |
| tool_result | After executing a requested tool | id, success, output, durationMs |
| user_message | When the user responds to a question | content |
| stop | When the user clicks Cancel | — |
Messages your script sends (stdout, one JSON per line):
| Event | Purpose | Key Fields |
|---|---|---|
| log | Write to the execution log panel | message, level (info/warn/error/debug) |
| thinking | Show progress steps in the UI | iteration, message |
| tool_request | Ask NORA to execute a configured tool | id, tool, params |
| message | Display a chat message | role, content |
| needs_input | Pause and ask the user a question | question |
| cost | Report token usage and cost | inputTokens, outputTokens, model, usd |
| complete | Finish execution and route to an edge | route, output |
| error | Signal a failure | message |
Non-JSON stdout is automatically captured as debug-level log entries. Stderr output appears as warn-level logs.
The Start Config Object
When your script receives the start message, config contains everything the node knows:
```python
config = start_msg["config"]
route_labels = config["routeLabels"]                  # ["approve", "reject", "review"]
tools = config["tools"]                               # [{"name": "write_file", ...}]
memory_files = config["memoryFiles"]                  # [{"name": "doc.md", "content": "..."}]
conversation_history = config["conversationHistory"]  # previous messages (multi-turn)
user_request = config["userRequest"]                  # "Evaluate this application"
working_dir = config["workingDir"]                    # "C:\\agents\\my-agent"
```
Route Labels
Define custom route labels (e.g., approve, reject, review). Each label becomes an output handle on the node: route_approve, route_reject, route_review. The built-in success and error handles are always available.
Your script’s complete event specifies which route to take:
```python
if credit_score >= 700:
    emit_complete("approve", "Score meets threshold")
elif credit_score < 500:
    emit_complete("reject", "Score below minimum")
else:
    emit_complete("review", "Borderline — needs manual review")
```
Asking the User for Input
Your script can pause mid-execution and ask the user a question:
```python
emit_needs_input("The credit score is 680 (borderline). Should I approve anyway?")
response = read_message()  # Blocks until user responds
if response["action"] == "user_message":
    user_answer = response["content"]
    # Continue based on their answer...
elif response["action"] == "stop":
    sys.exit(0)  # User cancelled
```
The node shows a “waiting for input” state with the question displayed, and the chat input activates for the user to type a response.
Using Tools
Tools configured on your node can be executed by your script. The workflow handles tool execution — your script just requests it and waits for the result:
```python
result = request_tool("write_file", {
    "path": "output/report.md",
    "content": "# Analysis Report\n..."
})
if result["success"]:
    emit_log(f"File written in {result['durationMs']}ms")
else:
    emit_log(f"Tool failed: {result['output']}", level="error")
```
Multi-Turn Conversations
The Custom Script Agent supports multi-turn chat. Users can type follow-up messages in the chat area on the node. On each follow-up, the script receives the full conversationHistory so it can respond in context. When the user clicks Re-run (↻), the conversation resets.
Interactive Conversations (Asking Follow-Up Questions)
Scripts can ask the user follow-up questions using the needs_input event:
```python
emit("needs_input", question="What color would you like the report?")
sys.exit(0)  # Exit cleanly - a new process will continue the conversation
```
How it works:
1. Script sends needs_input with a question
2. Script should exit cleanly (the frontend will spawn a fresh process when user responds)
3. User types response and hits Send
4. NORA spawns a new process with the full conversationHistory including the user’s answer
5. Script checks len(conversationHistory) > 1 to detect it’s a continuation, then reads prior Q&A from history
This conversation-based (stateless) approach means each turn runs in a fresh process — no complex session management needed. See interactive_agent.py and interactive_agent.js templates for working examples.
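The continuation check in step 5 can be written as a few lines, using the conversationHistory and userRequest fields from the start config:

```python
def handle_start(config: dict):
    """Detect fresh run vs continuation in the stateless multi-turn pattern."""
    history = config.get("conversationHistory", [])
    if len(history) > 1:
        # Continuation: the user's reply arrives as the last history entry
        return ("continue", history[-1]["content"])
    return ("fresh", config.get("userRequest", ""))
```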
Included Templates
NORA ships starter templates in the custom-script-templates/ folder:
| File | Language | Description |
|---|---|---|
| agent_template.py | Python | Full-featured template with helper functions and commented examples for OpenAI, Ollama, and Anthropic |
| agent_template.js | Node.js | Same features, JavaScript version with async/await support and message queue |
| agent_anthropic_claude.py | Python | Ready-to-use Claude routing agent with built-in cost tracking and .env key loading |
| interactive_agent.py | Python | Interactive conversation template — demonstrates multi-turn Q&A with needs_input |
| interactive_agent.js | Node.js | Same interactive pattern in JavaScript |
All templates include:
– Complete communication helper functions (emit_log, emit_thinking, emit_complete, read_message, request_tool, etc.)
– Built-in .env file parser (no external dependency needed)
– Error handling boilerplate
– Commented LLM provider examples you can uncomment immediately
Python Template (excerpt)
```python
#!/usr/bin/env python3
import json, sys, time

def emit(event_type, **data):
    """Send an event to the workflow UI."""
    print(json.dumps({"event": event_type, **data}), flush=True)

def emit_log(message, level="info"):
    emit("log", message=message, level=level)

def emit_thinking(iteration, message):
    emit("thinking", iteration=iteration, message=message)

def emit_complete(route, output):
    emit("complete", route=route, output=output)

def emit_error(message):
    emit("error", message=message)

def read_message():
    line = sys.stdin.readline()
    if not line:
        return None
    return json.loads(line.strip())

def request_tool(tool_name, params):
    """Request tool execution and wait for result."""
    req_id = f"req_{tool_name}_{int(time.time() * 1000)}"
    emit("tool_request", id=req_id, tool=tool_name, params=params)
    result = read_message()
    if result and result.get("action") == "tool_result":
        return {"success": result.get("success", False),
                "output": result.get("output", ""),
                "durationMs": result.get("durationMs", 0)}
    return {"success": False, "output": "No response from tool", "durationMs": 0}

def main():
    start_msg = read_message()
    if not start_msg or start_msg.get("action") != "start":
        emit_error("Expected start message")
        return
    config = start_msg.get("config", {})
    route_labels = config.get("routeLabels", [])
    tools = config.get("tools", [])
    memory_files = config.get("memoryFiles", [])
    user_request = config.get("userRequest", "")
    emit_log(f"Received request: {user_request[:100]}...")
    emit_log(f"Available routes: {', '.join(route_labels)}")
    emit_thinking(iteration=1, message="Analyzing request...")
    # =============================================
    # YOUR AGENT LOGIC HERE — use any LLM provider
    # =============================================
    emit_complete(route="success", output="Analysis complete")

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        emit_log("Agent interrupted", level="warn")
        sys.exit(0)
    except Exception as e:
        emit_error(f"Agent crashed: {str(e)}")
        sys.exit(1)
```
Node.js Template (excerpt)
```javascript
#!/usr/bin/env node
const readline = require('readline');

function emit(eventType, data = {}) {
  console.log(JSON.stringify({ event: eventType, ...data }));
}
function emitLog(message, level = 'info') { emit('log', { message, level }); }
function emitThinking(iteration, message) { emit('thinking', { iteration, message }); }
function emitComplete(route, output) { emit('complete', { route, output }); }
function emitError(message) { emit('error', { message }); }

const rl = readline.createInterface({ input: process.stdin, terminal: false });
const messageQueue = [];
let messageResolver = null;
rl.on('line', (line) => {
  try {
    const msg = JSON.parse(line);
    if (messageResolver) { const r = messageResolver; messageResolver = null; r(msg); }
    else { messageQueue.push(msg); }
  } catch (e) { emitLog(`Failed to parse: ${e.message}`, 'error'); }
});

function readMessage() {
  return new Promise((resolve) => {
    if (messageQueue.length > 0) resolve(messageQueue.shift());
    else messageResolver = resolve;
  });
}

async function requestTool(toolName, params) {
  const reqId = `req_${toolName}_${Date.now()}`;
  emit('tool_request', { id: reqId, tool: toolName, params });
  const result = await readMessage();
  if (result && result.action === 'tool_result') {
    return { success: result.success || false, output: result.output || '', durationMs: result.durationMs || 0 };
  }
  return { success: false, output: 'No response from tool', durationMs: 0 };
}

async function main() {
  const startMsg = await readMessage();
  if (!startMsg || startMsg.action !== 'start') { emitError('Expected start message'); return; }
  const config = startMsg.config || {};
  const userRequest = config.userRequest || '';
  emitLog(`Received request: ${userRequest.substring(0, 100)}...`);
  emitThinking(1, 'Analyzing request...');
  // =============================================
  // YOUR AGENT LOGIC HERE — use any LLM provider
  // =============================================
  emitComplete('success', 'Analysis complete');
  rl.close();
}

main().catch(err => { emitError(`Agent crashed: ${err.message}`); process.exit(1); });
```
Compatible LLM Providers
Because Custom Script Agents manage their own API calls, you can use any LLM provider:
| Provider | Setup |
|---|---|
| OpenAI | pip install openai — uses OPENAI_API_KEY |
| Anthropic Claude | pip install anthropic — uses ANTHROPIC_API_KEY |
| Google Gemini | pip install google-generativeai — uses GOOGLE_API_KEY |
| Ollama (local, free) | HTTP calls to localhost:11434 — no key needed |
| OpenRouter | Uses OpenAI SDK with base_url="https://openrouter.ai/api/v1" |
| LM Studio (local) | Uses OpenAI SDK with base_url="http://localhost:1234/v1" |
| Azure OpenAI | pip install openai — uses AzureOpenAI client |
| Any OpenAI-compatible | Uses OpenAI SDK with a custom base_url (vLLM, TGI, LocalAI, etc.) |
Debug Mode
Enable Debug Mode on the Custom Script Agent node to see detailed diagnostics:
- Green boxes — Successful operations (tool calls, messages received)
- Red boxes — Errors and failures
- Full JSON payloads visible for troubleshooting protocol issues
- Separate stderr/stdout display for script debugging
Debug mode is invaluable when developing new scripts or troubleshooting communication issues.
Built-in Tools
Every Custom Script Agent has these tools available automatically — no configuration needed:
| Tool | What It Does | Parameters |
|---|---|---|
| read_file | Read the contents of a file | {"path": "file.txt"} |
| write_file | Write content to a file (creates directories as needed) | {"path": "out.txt", "content": "data"} |
| list_directory | List files and folders in a directory | {"path": "."} |
| file_exists | Check whether a file exists | {"path": "file.txt"} |
| search_tools | Search the Tool Library for available tools | {"query": "search keyword"} |
| run_command | Execute a shell command in the working directory | {"command": "pip install requests", "timeout": 30000} |
run_command uses the system shell (cmd.exe on Windows) with a configurable timeout (default 30s, max 120s). Output is combined stdout + stderr, capped at 1 MB.
You can add more tools from the Tool Library or define them via the node’s Tools configuration.
Important: These are the ONLY tools available to Custom Script Agents. If a generated script references tools like search_web, browse_url, or fetch_url, those calls will fail at runtime with “Tool not found.” For web access, use the Self-Tooling Pattern below.
Self-Tooling Pattern
When an agent needs a capability not covered by built-in tools (like fetching a web page, parsing HTML, or calling an external API), it can create and run its own helper scripts on the fly:
- Write a helper script to disk using write_file
- Execute it via run_command
- Read the output from the tool result’s stdout, then continue reasoning
Example: Fetching a web page
```python
# Step 1: Agent writes a URL fetcher script
request_tool("write_file", {
    "path": "_fetch_url.py",
    "content": "import urllib.request, sys, re\n"
               "url = sys.argv[1]\n"
               "req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})\n"
               "html = urllib.request.urlopen(req, timeout=15).read().decode('utf-8', errors='replace')\n"
               "text = re.sub(r'<[^>]+>', ' ', html)\n"
               "print(re.sub(r'\\s+', ' ', text).strip()[:8000])"
})

# Step 2: Agent runs it with the target URL
result = request_tool("run_command", {
    "command": "python _fetch_url.py https://example.com/blog/",
    "timeout": 20000
})

# Step 3: result["output"] contains the page text — agent continues reasoning
```
This pattern works for any language or library available on the system (requests, BeautifulSoup, pandas, etc.). The agent can pip install packages via run_command if needed.
Directory Safety
The Custom Script Agent’s working directory is validated at startup. If the resolved directory is inside a protected system location (Electron install directory, Windows system folders, Program Files), NORA automatically falls back to Documents\NORA\csa-workdir and logs a warning.
File operations through built-in tools (read_file, write_file) also block:
– Path traversal attempts (../ sequences)
– Writes to protected system directories
Process ID (PID) Badge
After the script launches, the node displays the OS process ID of the spawned subprocess below the description. Click the badge to copy the PID to clipboard.
The PID is useful for:
– Manually inspecting the process with Task Manager or tasklist
– Running kill <pid> or taskkill /PID <pid> from a terminal
– Attaching a debugger to a running script
The badge resets automatically when you click Run again.
Dynamic Tool Discovery
Custom Script Agents support runtime tool discovery with intelligent matching:
- Fuzzy matching — Typos like read_fle automatically match read_file
- Levenshtein distance — Finds the closest tool name even with significant typos
- search_tools function — Query available tools programmatically from your script:
# Request a tool search
emit("tool_request", id="search_1", tool="search_tools", params={"query": "file"})
result = read_message() # Returns matching tools: read_file, write_file, delete_file...
This is especially useful when building agents that need to discover capabilities at runtime.
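To give a rough idea of how fuzzy matching works, here is a sketch using the standard library’s difflib (an illustration only; NORA’s matcher is Levenshtein-based and may behave differently):

```python
from difflib import get_close_matches

TOOLS = ["read_file", "write_file", "delete_file", "run_command", "search_tools"]

def resolve_tool(name):
    """Return the closest registered tool name, or None if nothing is close."""
    if name in TOOLS:
        return name
    matches = get_close_matches(name, TOOLS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(resolve_tool("read_fle"))  # typo resolves to read_file
```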
Timeout, Cancellation & Shutdown
- Timeout: Default 30 minutes, configurable up to 120 minutes. When timeout triggers, the script receives SIGTERM, then SIGKILL after 5 seconds.
- Cancellation: When the user clicks Stop, the script receives {"action": "stop"} on stdin. Clean up and exit within ~1 second.
- Exit behavior: If your script exits with code 0 without emitting complete, the node routes to success with empty output. A non-zero exit code routes to error.
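A script’s stdin handling for the stop message might look like this (a sketch; only the {"action": "stop"} shape is documented above):

```python
import json

def handle_control_message(line):
    """Return False when a stop request arrives, True to keep running."""
    try:
        msg = json.loads(line)
    except json.JSONDecodeError:
        return True  # ignore malformed input
    if msg.get("action") == "stop":
        # Clean up (temp files, child processes) and exit within ~1 second
        return False
    return True

print(handle_control_message('{"action": "stop"}'))  # -> False
```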
Testing Scripts Standalone
You can test outside the dashboard by piping a start message:
echo '{"action":"start","config":{"routeLabels":["approve","reject"],"tools":[],"memoryFiles":[],"conversationHistory":[],"userRequest":"Test request","workingDir":"."}}' | python my_agent.py
AI Cost Tracking
NORA tracks the cost of every AI API call and displays it across the interface.
Where Costs Appear
| Location | What’s Shown |
|---|---|
| On the node | Cost in USD + token counts after each execution |
| Execution History | Per-run cost column + grand total across all runs |
| Email Notifications | Cost summary in notification emails |
| Autonomous Agent status bar | Running cumulative cost during execution |
How Cost Is Calculated
Input Cost = (prompt tokens / 1,000,000) × model input price
Output Cost = (completion tokens / 1,000,000) × model output price
Total Cost = Input Cost + Output Cost
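In code, the calculation reduces to a few lines (rates taken from the gemini-2.5-flash row in the pricing reference below; treat the numbers as estimates):

```python
# (input, output) USD per 1M tokens -- example rate from the pricing reference
PRICES = {"gemini-2.5-flash": (0.30, 2.50)}

def estimate_cost(model, prompt_tokens, completion_tokens):
    input_price, output_price = PRICES[model]
    input_cost = (prompt_tokens / 1_000_000) * input_price
    output_cost = (completion_tokens / 1_000_000) * output_price
    return input_cost + output_cost

print(estimate_cost("gemini-2.5-flash", 10_000, 2_000))  # ~= 0.008 USD
```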
Pricing Reference (per 1M tokens)
Google Gemini:
| Model | Input | Output |
|---|---|---|
| gemini-2.0-flash | $0.10 | $0.40 |
| gemini-2.0-flash-lite | $0.075 | $0.30 |
| gemini-2.5-flash | $0.30 | $2.50 |
| gemini-2.5-pro | $1.25 | $10.00 |
OpenAI:
| Model | Input | Output |
|---|---|---|
| gpt-4o-mini | $0.15 | $0.60 |
| gpt-4o | $2.50 | $10.00 |
| gpt-4.1-mini | $0.40 | $1.60 |
| gpt-4.1 | $2.00 | $8.00 |
Anthropic:
| Model | Input | Output |
|---|---|---|
| Claude 4.5 Haiku | $1.00 | $5.00 |
| Claude 4.5 Sonnet | $3.00 | $15.00 |
| Claude 4.5 Opus | $5.00 | $25.00 |
Pricing last verified: December 2025. For custom or unlisted models, NORA shows token counts but marks cost as “pricing unavailable.”
Agent Session Logs
Autonomous Agent and Custom Script Agent sessions are logged to disk for debugging and review.
What’s Logged
Each session creates a folder at ~/.nora/agent_logs/{sessionId}/ containing:
| File | Contents |
|---|---|
| metadata.json | Session ID, node ID, node title, agent type, start time |
| conversation.json | Full conversation history with timestamps |
| execution-log.json | Every tool call with parameters, results, success/failure, cost, and duration |
| summary.json | Final status, total cost, iteration count, outcome |
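Because each session writes a summary.json, totaling spend across sessions is a short script away (a sketch; the totalCost field name is an assumption, so check your own summary files):

```python
import json
from pathlib import Path

def total_agent_cost(log_root):
    """Sum the recorded cost across all session summaries under log_root."""
    total = 0.0
    for summary in Path(log_root).glob("*/summary.json"):
        data = json.loads(summary.read_text())
        total += data.get("totalCost", 0.0)  # field name assumed, not confirmed
    return total
```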
Log Cleanup
Agent logs older than 30 days are automatically cleaned up. This matches the execution history log retention setting.
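The retention policy amounts to deleting session folders past a cutoff, roughly like this (an illustrative sketch, not NORA’s actual cleanup code):

```python
import shutil
import time
from pathlib import Path

def cleanup_old_logs(log_root, max_age_days=30):
    """Delete session folders whose last modification is older than the cutoff."""
    cutoff = time.time() - max_age_days * 86400
    for session in Path(log_root).iterdir():
        if session.is_dir() and session.stat().st_mtime < cutoff:
            shutil.rmtree(session)
```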
Tips for Using AI Nodes
Start with Cheaper Models
Use gemini-2.0-flash or gpt-4o-mini while developing your workflows. Switch to more capable (and expensive) models once your prompts and tools are working correctly.
Write Clear Prompts
The Custom Prompt field is the most important configuration. Be specific about:
– What the AI should do
– What format you want the output in
– What categories/tools are available and when to use each one
Set Appropriate Limits
For Autonomous Agents:
– Start with low max iterations (3–5) until you trust the agent’s behavior
– Set a budget limit to prevent runaway costs during development
– Use the timeout as a safety net
Test with One File First
For AI Router nodes, put a single test file in the input folder and run the node manually. Verify the classification is correct before running the full workflow.
Monitor Costs
Check Settings → Execution History periodically to review your cumulative AI costs. The grand total at the bottom shows spending across all recorded executions.
What’s Next?
- Tool Library — Create tools for AI agents to use
- Settings & Configuration — Configure AI keys and other settings
- Node Types Reference — Detailed reference for all node types
- Reference — AI model pricing, troubleshooting, and glossary