Build AI Agents with Your Own Scripts (Custom Script Agent)
Most AI agent frameworks lock you into a specific LLM provider, a specific language, and a specific way of managing conversations. If the framework doesn’t support your model or your architecture, you’re out of luck.
NORA’s Custom Script Agent node takes a different approach: you write the agent logic. NORA handles everything else.
How It Works
The Custom Script Agent node runs a Python or Node.js script that communicates with NORA through a JSON protocol over stdin/stdout. Your script writes structured JSON messages to stdout; NORA reads them, acts on them, and writes results back on stdin.
The workflow is straightforward:
- Your script starts and receives context (input data, conversation history, available tools)
- Your script calls whatever LLM API it wants — OpenAI, Anthropic, Gemini, Ollama, a local model, anything
- When the LLM decides to use a tool, your script sends a tool-call request to NORA via stdout
- NORA executes the tool and returns the result via stdin
- Your script feeds the result back to the LLM and continues the loop
- When finished, your script outputs a final result and exits
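The loop above can be sketched in Python. This is a minimal illustration, not the documented protocol: the message field names (`type`, `tool`, `args`, `output`) and the one-JSON-object-per-line framing are assumptions, and `call_llm` is a placeholder for your own provider call.

```python
import json
import sys

def tool_call_message(tool: str, args: dict) -> dict:
    """Build a tool-call request for NORA (field names are illustrative)."""
    return {"type": "tool_call", "tool": tool, "args": args}

def final_message(output: str) -> dict:
    """Build the final-result message that ends the run."""
    return {"type": "final", "output": output}

def send(msg: dict) -> None:
    """Write one JSON message per line to NORA via stdout."""
    sys.stdout.write(json.dumps(msg) + "\n")
    sys.stdout.flush()

def recv() -> dict:
    """Read one JSON message per line from NORA via stdin."""
    return json.loads(sys.stdin.readline())

def run(call_llm) -> None:
    ctx = recv()                              # context: input, history, tools
    history = list(ctx.get("history", []))
    while True:
        reply = call_llm(history)             # your code -- any provider
        if reply.get("tool"):
            send(tool_call_message(reply["tool"], reply["args"]))
            history.append({"role": "tool", "content": recv()})
        else:
            send(final_message(reply["content"]))
            return
```

Everything NORA-specific lives in `send`/`recv`; the rest of the script is ordinary Python you control.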
Your script manages its own API keys directly. NORA never stores or touches them. This means you can use any provider, any model, any authentication method — including local models running on your own hardware through Ollama or similar.
Use Any LLM Provider
Because the script makes the API calls directly, the Custom Script Agent works with:
- OpenAI — GPT-5.5, GPT-5.4, GPT-5.4-mini, GPT-5.4-nano
- Anthropic — Claude Opus 4.7, Sonnet 4.6, Haiku 4.5
- Google — Gemini 2.5 Flash/Flash-Lite/Pro, plus Gemini 3 preview variants
- Ollama — Any local model (Llama, Mistral, Phi, etc.)
- Any other provider with an HTTP API
This is the key difference from locked-in agent frameworks. In many scripts, switching from OpenAI to a local Ollama model is as small as changing one client configuration line — not migrating to a different platform.
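One way to make that switch a one-line change is to target OpenAI-compatible chat-completions endpoints, which Ollama also exposes locally. A hedged sketch — the base URLs follow each provider's public conventions, but the model names are examples and none of this is mandated by NORA:

```python
import json
import urllib.request

# Example provider table: both endpoints speak the OpenAI-compatible
# chat-completions format. Model names are illustrative placeholders.
PROVIDERS = {
    "openai": {"url": "https://api.openai.com/v1/chat/completions",
               "model": "gpt-5.4-mini"},
    "ollama": {"url": "http://localhost:11434/v1/chat/completions",
               "model": "llama3"},
}

def build_request(provider: str, messages: list,
                  api_key: str = "") -> urllib.request.Request:
    """Build an HTTP request; swapping providers means changing one key."""
    cfg = PROVIDERS[provider]
    body = json.dumps({"model": cfg["model"], "messages": messages}).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:  # your script holds the key -- NORA never sees it
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(cfg["url"], data=body, headers=headers)
```

The agent loop never changes; only the entry in `PROVIDERS` does.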
Dynamic Tool Discovery
Toggle on Dynamic Tool Discovery and the agent can search and use any tool in your Tool Library at runtime. NORA uses Levenshtein fuzzy matching so the agent doesn’t need to know exact tool names — a close match is enough.
When discovery is enabled:
- The agent gets access to a built-in `search_tools` function
- It can search your Tool Library by name or description
- NORA resolves the best match and sends back the tool’s full parameter schema
- The agent calls the tool with the correct parameters
- NORA executes it and returns the result
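The fuzzy-matching step works on classic Levenshtein edit distance. A sketch of how a close-but-inexact query could resolve to a library tool (the matching details inside NORA may differ, e.g. it may also weigh descriptions):

```python
def levenshtein(a: str, b: str) -> int:
    """Dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def resolve_tool(query: str, tool_names: list) -> str:
    """Return the library tool whose name is closest to the query."""
    return min(tool_names, key=lambda n: levenshtein(query.lower(), n.lower()))
```

So a slightly misspelled request like `web_serch` still resolves to `web_search` — the agent never needs the exact name.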
This means you can build agents that dynamically choose from your entire tool collection without hardcoding tool names into the script.
User-Defined Output Routes
Standard nodes have fixed outputs. The Custom Script Agent lets you define your own output labels — approve, reject, review, escalate, or whatever fits your workflow.
Your script decides which route to take based on the agent’s reasoning. NORA routes execution accordingly. This makes it possible to build decision-making agents that feed into different downstream workflows without any conditional logic nodes.
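From the script's side, picking a route could look like tagging the final message with a label that matches one of your configured outputs. The field name `route` and the validation below are assumptions for illustration, not the documented message shape:

```python
import json

# User-defined output labels -- whatever you configured on the node.
ROUTES = {"approve", "reject", "review", "escalate"}

def route_message(route: str, payload: dict) -> str:
    """Emit a final result tagged with a user-defined output route."""
    if route not in ROUTES:
        raise ValueError(f"unknown route: {route}")
    return json.dumps({"type": "final", "route": route, "output": payload})
```

NORA then continues execution down whichever downstream branch carries that label.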
Real-Time Visibility
Every agent execution streams step-by-step progress through SSE (Server-Sent Events):
- Each tool call and its result appears in the live execution log
- Token costs are tracked per API call, with a running USD total updated in real time
- The full conversation history is visible as it builds
- A debug-mode toggle shows the raw JSON protocol messages between script and NORA
No black-box execution. Every step is visible and logged.
What You Need to Build an Agent
A Custom Script Agent script is a standard Python or Node.js file. It reads JSON from stdin and writes JSON to stdout. The protocol is documented and minimal — typically under 50 lines of boilerplate before you’re writing actual agent logic.
NORA provides:
- Conversation history persistence — the agent remembers prior turns across iterations
- Loop iteration memory — context carries forward through multi-step reasoning
- Rich parameter schemas — tools send full type and description info so the LLM knows how to call them
- Cost tracking — per-call USD totals recorded in execution history
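To make "rich parameter schemas" concrete, here is a hypothetical tool entry as NORA might send it, and a helper that adapts it to the OpenAI-style function-calling format. The field names are illustrative, not the documented protocol:

```python
# Hypothetical tool entry: full type and description info lets the LLM
# construct a valid call without guessing.
tool_schema = {
    "name": "fetch_url",
    "description": "Download a web page and return its text content.",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Absolute URL to fetch"},
            "timeout": {"type": "integer", "description": "Seconds", "default": 10},
        },
        "required": ["url"],
    },
}

def to_openai_tool(schema: dict) -> dict:
    """Wrap the schema in the OpenAI-style function-calling tool format."""
    return {"type": "function", "function": schema}
```

Since the schema is already JSON-Schema-shaped, adapting it to other providers' tool formats is a similarly thin wrapper.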
When to Use Custom Script Agent vs. Built-In AI Nodes
NORA has three AI agent node types. Here’s when to use each:
| Node | Best For |
|---|---|
| AI Agent Node | Single/multi-turn LLM calls using NORA’s built-in provider connections. Quick setup, no script needed. |
| AI Autonomous Agent Node | Autonomous multi-step tasks with built-in file system tools. NORA manages the agent loop. |
| Custom Script Agent | Full control over agent logic, any LLM provider, custom architectures, local models. |
Use the Custom Script Agent when you need control over the agent loop itself — custom prompting strategies, multi-model pipelines, provider switching, or architectures that don’t fit the standard request-response pattern.
Getting Started
Download NORA at software.reibuys.com/nora. Install on Windows 10 or later. A paid license key is required to unlock full functionality. Drag a Custom Script Agent node onto the canvas, write your agent script in the built-in Monaco editor, connect it to tools from the Tool Library, and run it.
One-time purchase — no subscription. 30-day money-back guarantee.