Tool Library
The Tool Library lets you package reusable scripts and commands into versioned tools that AI agents can call during workflow execution. Instead of re-configuring the same script across multiple workflows, create a tool once and reference it anywhere.
What Is a Tool?
A tool is a packaged command-line script (Python, Node.js, PowerShell, batch, or any executable) with:
- Name and description — so AI agents understand what the tool does
- Command — the actual command to execute
- Default parameters — preset values that get passed to the script
- Version — semantic versioning for tracking changes
Tools are used by AI Autonomous Agent and Custom Script Agent nodes. When an AI agent decides to call a tool, NORA executes the tool’s command and passes parameters via a JSON file.
Opening the Tool Library
Access the Tool Library drawer in any of these ways:
- Press Ctrl+Shift+L (Edit Mode only)
- Click the 🧰 TOOL LIBRARY tab on the right edge of the screen
- Click “+ Add from Tool Library” inside an AI Autonomous Agent or Custom Script Agent node’s edit form
The Tool Library drawer slides in from the right side of the screen.
Browsing Tools
The drawer shows all installed tools, grouped by Tool ID (which supports hierarchical names like data-quality/validate-csv).
Each tool group displays:
- Tool name and ID
- Description (if provided)
- Open button — opens the tool’s folder in your file explorer
- Delete button — removes all versions of the tool

Under each group, individual version cards show:

- Version number (e.g., 0.1.0)
- Creation date
- Origin info — which node it was created from
- Interactive badge (if the tool requires terminal input)
Creating a Tool
You create tools by packaging existing canvas nodes.
From a Canvas Node
- Select an AI Autonomous Agent or Custom Script Agent node on the canvas
- Open the Tool Library drawer
- In the “Add selected” section, click to expand the form
- Fill in:
| Field | Description | Example |
|---|---|---|
| Tool ID | Unique identifier (supports `/` for hierarchy) | `data-processing/csv-cleaner` |
| Version | Version number | `0.1.0` |
| Name | Human-readable name | CSV Cleaner |
| Description | What the tool does | Cleans and validates CSV files |
- Click Create tool
What Gets Packaged
When you create a tool from a node:
- The command is captured and rewritten to use relative paths
- If the command references a script file (.py, .js, .ps1, etc.), the script is copied into the tool package
- If the command includes `--config "path/to/config.json"`, the config file is read and parameter definitions (paramDefs) are extracted automatically (see Parameter Definitions)
- If Bundle config is checked, the config file itself is also copied into the tool package for portability
- If your command does NOT include `--config`, both `paramDefs` and `entry.configFile` will be null in tool.json. This is expected — these fields are only populated from a `--config` JSON file. See Scripts Without --config for how to add parameter definitions manually
- CLI arguments that duplicate config keys are stripped from the command (e.g., `--format csv` is removed because `format` is already in the config)
- The node’s default parameters are saved
- A complete node template is saved for future insertion
Bundle Config Option
When the tool creation form detects --config in your command, a “Bundle config file” checkbox appears:
| Setting | Behavior |
|---|---|
| Unchecked (default) | Config stays at its original absolute path on your machine. paramDefs are still extracted, but the tool’s command references the external file |
| Checked | Config is copied into the tool’s files/ directory. The command is rewritten to --config "files/config.json". The config FILE travels with the tool |
Recommended: check “Bundle config”. Bundling ensures the config file is always found when the tool runs. Without bundling, the command points to a hardcoded absolute path that breaks if the file moves or the tool is shared.
Note: Bundling copies the config file as-is. If your config contains machine-specific values (like `"input_dir": "C:/Users/Me/data"`), those values are still in the bundled copy. NORA extracts those paths into `paramDefs` with `null` defaults so the AI knows to provide a value at runtime — but the bundled config still has the original values as fallback. For truly portable tools, use relative paths or require the agent to supply absolute paths.
Tool Storage
Tools are stored at:
```
~/.nora/tools/{tool-id}/{version}/
├── tool.json        # Tool manifest (includes paramDefs)
├── template.json    # Node template for canvas insertion
└── files/           # Bundled script files
    ├── my_script.py
    └── config.json  # Only present if "Bundle config" was checked
```
Adding Tools to Nodes
Adding to an AI Agent Node
When you have an AI Autonomous Agent or Custom Script Agent node selected on the canvas:
- Open the Tool Library drawer
- Each version card shows an “Add” button and a checkbox
- Click Add on individual tools, or:
- Check multiple tools → click “Add Selected (N)” in the header
- Click “Add All (N)” to add your entire library at once
Duplicate tools (same ID and version already on the node) are automatically skipped.
Recommendation: Keep 5–15 focused tools per agent. Large tool sets may reduce AI accuracy and increase response time and cost.
Inserting as a Canvas Node
When no AI Agent node is selected (or you’re in browse mode):
- Click Insert on a version card
- A new node is created on the canvas with the tool’s saved template
Version Management
Each tool can have multiple versions installed simultaneously.
Creating a New Version
- Modify your script or command on the canvas node
- Open the Tool Library drawer
- Create a new tool with the same Tool ID but a different version number
Both versions coexist in the library.
Switching Versions on a Node
When you select a node that references a tool (toolRef), the drawer’s Selected Tool Node panel appears at the top:
- The current `toolId@version` is shown
- A version dropdown lists all installed versions
- Select a different version
- Click “Update selected node”
The node’s reference is updated to point to the new version.
Deleting Versions
- Delete one version: Click the Delete button on a specific version card
- Delete all versions: Click the Delete button on the tool group header
Both actions require confirmation.
Source Sync
When you create a tool from a canvas node that references a script file, the Tool Library tracks the original source path. This enables automatic synchronization when you update your scripts outside of NORA.
Sync Status Indicators
Each version card shows a sync status badge:
| Badge | Meaning |
|---|---|
| ✓ In Sync (green) | The tool’s bundled script matches the original source file |
| ⚠️ Out of Sync (amber) | The source file has been modified since the tool was created |
| ? Source Not Found (gray) | The original source file no longer exists at the tracked path |
Syncing Changes
When a tool shows Out of Sync:
- Click the Sync button on the version card
- The updated script is copied from the source into the tool package
- The status changes back to In Sync
Accessing the Original Source Quickly
For tools with source tracking enabled, each version card shows a compact source line:
- Click `Source: ...` to open the tracked original file in your configured external editor
- Click `Copy path` to copy the full absolute source path to your clipboard
This is useful when you want to inspect, edit, or share the exact script path before syncing.
Updating Nodes Using a Tool
After syncing, you may want to update all nodes that reference this tool:
- The drawer shows “Update N nodes” if any canvas nodes use this tool
- Click to update all affected nodes with the new tool configuration
- Nodes are updated in-place without losing their connections
Safety Guards
- Sync is blocked during workflow execution — You cannot sync or update tools while a workflow is running that uses them
- Backup on sync — The previous version of bundled scripts is preserved before overwriting
Disabling Source Tracking
If you prefer manual version management:
1. Edit the tool.json file directly
2. Remove or clear the sourcePath property
3. The tool will no longer show sync status
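After step 2, the relevant part of tool.json might look like this (a minimal sketch — the exact manifest layout may differ):

```json
{
  "toolId": "my-tool-id",
  "version": "0.1.0",
  "sourcePath": null
}
```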
Installing Tools from Folders
You can install tools that were created on another machine or shared with you.
Using the Install Tool Form
At the bottom of the Tool Library drawer you’ll find the Install tool section:
- Source folder — Enter the path to a folder containing a `tool.json` file, or click Choose… to browse with a folder picker (desktop app only; in-browser users paste the path manually)
- Overwrite — Check “Overwrite if this version already exists” if you want to replace an existing installation
- Click Install
What Happens When You Install
When you click Install, NORA:
- Validates the source folder — Checks that the path exists, is a directory, and contains a `tool.json` file
- Reads `tool.json` — Extracts the `toolId` and `version` fields (both are required; install fails without them)
- Determines the destination — Maps `toolId` + `version` to `~/.nora/tools/<toolId>/<version>/`
- Checks for conflicts — If that version is already installed:
  - Without overwrite: install fails with “Tool already installed: `toolId@version`”
  - With overwrite: the existing version folder is deleted first
- Copies everything — The entire source folder is copied recursively into the destination, including `tool.json`, `template.json`, and the `files/` directory with your bundled scripts
- The tool appears in the library — It’s immediately available to add to agent nodes
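The validation and copy steps above can be sketched in Python — a hypothetical reconstruction of the described behavior, not NORA’s actual code (`install_tool` and its signature are illustrative):

```python
import json
import shutil
from pathlib import Path

def install_tool(source: Path, tools_root: Path, overwrite: bool = False) -> Path:
    """Sketch of the install steps described above (assumed behavior)."""
    manifest = source / "tool.json"
    if not source.is_dir() or not manifest.is_file():
        raise ValueError("Source folder must exist and contain tool.json")
    meta = json.loads(manifest.read_text(encoding="utf-8"))
    tool_id, version = meta.get("toolId"), meta.get("version")
    if not tool_id or not version:
        raise ValueError("tool.json must define toolId and version")
    dest = tools_root / tool_id / version
    if dest.exists():
        if not overwrite:
            raise FileExistsError(f"Tool already installed: {tool_id}@{version}")
        shutil.rmtree(dest)            # overwrite: delete the existing version first
    shutil.copytree(source, dest)      # copy everything recursively
    return dest
```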
What’s Required in the Source Folder
At minimum, the source folder must contain a valid tool.json with these fields:
```json
{
  "toolId": "my-tool-id",
  "version": "0.1.0"
}
```
A typical shared tool folder looks like:
```
my-tool/
├── tool.json        # Required — tool manifest (toolId, version, paramDefs, etc.)
├── template.json    # Optional — node template for canvas insertion
└── files/           # Optional — bundled script files
    ├── my_script.py
    └── config.json
```
Note: Source tracking won’t work after install if the original script paths don’t exist on the receiving machine. The sync badge will show “? Source Not Found” — this is normal. The bundled scripts in `files/` still work independently.
Sharing Tools
To share a tool with someone:
1. Open the tool’s folder (click Open in the drawer)
2. Copy the entire version folder (e.g., ~/.nora/tools/my-tool/0.1.0/)
3. Send the folder to the other person
4. They install it using the Install Tool feature
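On macOS or Linux, steps 2–3 might look like the following shell sketch (paths and archive name are illustrative; the `mkdir -p` only makes the sketch runnable anywhere — on your machine the tool folder already exists):

```shell
TOOL_SRC="$HOME/.nora/tools/my-tool/0.1.0"
mkdir -p "$TOOL_SRC"                          # normally already present in your library

cp -r "$TOOL_SRC" ./my-tool-0.1.0             # step 2: copy the entire version folder
tar -czf my-tool-0.1.0.tar.gz my-tool-0.1.0   # step 3: archive it to send
```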
Customize on Insert
When the “Customize on insert” checkbox is enabled in the drawer:
- Adding a tool to a node creates a local copy in the workflow’s directory
- The copy lives at `<workflow-folder>/.nora/tools/<tool-id>/<version>/`
- You can modify the local copy’s scripts without affecting the global library version
- Useful when you need workflow-specific tweaks to a standard tool
This option is only available after the workflow has been saved to a file.
How Tools Run
When an AI agent calls a tool during workflow execution:
- The agent provides parameters as a JSON object
- NORA merges parameters: config file → paramDefs/defaultParams → agent values (agent wins — see Parameter Merge Order)
- The merged parameters are written to a temporary JSON file
- The `AI_TOOL_PARAMS_PATH` environment variable is set to that file path
- The tool’s command is executed
- Your script reads parameters from the file at `AI_TOOL_PARAMS_PATH`
- Output is captured from stdout; errors from stderr
Reading Parameters in Your Script
Python:
```python
import os, json

params_path = os.environ.get('AI_TOOL_PARAMS_PATH')
if params_path:
    with open(params_path) as f:
        params = json.load(f)
```
Node.js:
```javascript
const fs = require('fs');

const paramsPath = process.env.AI_TOOL_PARAMS_PATH;
if (paramsPath) {
  const params = JSON.parse(fs.readFileSync(paramsPath, 'utf-8'));
}
```
Exit Codes
- Exit 0 = success (stdout is captured as the result)
- Exit 1 (or any non-zero) = failure (stderr is captured as the error)
⚠️ Error Handling (CRITICAL)
NORA captures stderr to report errors back to the agent. If your script fails silently or writes errors only to a log file, the AI agent receives no feedback and cannot retry or diagnose the problem.
✅ DO THIS — Print errors to stderr:
Python:
```python
import sys, json

try:
    result = process_data(params)
    print(json.dumps({"success": True, "result": result}))
except Exception as e:
    print(f"Error: {e}", file=sys.stderr)
    sys.exit(1)
```
Node.js:
```javascript
try {
  const result = await processData(params);
  console.log(JSON.stringify({ success: true, result }));
} catch (error) {
  console.error(`Error: ${error.message}`);
  process.exit(1);
}
```
❌ DON’T DO THIS — Writing only to log files:
```python
# BAD: NORA never sees this error!
try:
    result = process_data(params)
except Exception as e:
    with open("error.log", "a") as f:
        f.write(f"Error: {e}\n")  # Silent failure
    sys.exit(1)
```
What Happens When Tools Fail
When a tool fails (non-zero exit), NORA:
- Preserves the temporary config file — The JSON file with merged parameters is NOT deleted, so you can inspect it
- Shows a reproduction command — The exact command to run manually for debugging
- Reports stderr to the agent — The AI sees your error message and can respond appropriately
Example error output:
```
Error: API rate limit exceeded

--- Debug Info ---
Config file preserved at: C:\Users\You\.nora\.workflow-temp\tool_my_script_abc123.json
To reproduce manually:
cd "C:\projects\my-tool" && python my_script.py --config "C:\Users\You\.nora\.workflow-temp\tool_my_script_abc123.json"
```
Inline Tool Definitions
You don’t have to use the Tool Library for every tool. AI Agent nodes also support inline tool definitions directly in the node’s Tools JSON:
```json
[
  {
    "name": "my-script",
    "description": "Processes data files",
    "command": "python process.py",
    "workingDir": "C:/projects/scripts",
    "defaultParams": { "format": "csv", "verbose": true }
  }
]
```
Inline tools are self-contained — they don’t reference the library and won’t benefit from version management. Use inline definitions for one-off tools you don’t plan to reuse.
Tool Organization
Tool IDs support hierarchical naming with / separators:
data-processing/csv-cleaner
data-processing/json-validator
web-scraping/page-downloader
web-scraping/link-extractor
notifications/send-slack
This creates a logical grouping in the Tool Library drawer. The tools sort alphabetically by their full ID.
Tips
Start Simple
Create tools from working nodes. Get your script running correctly on the canvas first, then package it into the library.
Write Good Descriptions
AI agents use your tool’s name and description to decide when to call it. Clear, specific descriptions lead to better tool selection by the AI.
Parameter Definitions (paramDefs)
When you create a tool from a canvas node, NORA automatically extracts parameter definitions (paramDefs) from your script’s --config JSON file. These definitions tell the AI agent what parameters exist, their types, default values, and descriptions.
Where the Config File Comes From
The config file is a regular JSON file that you create and maintain alongside your script. You reference it in the node’s Command field using the --config argument:
```
python my_script.py --config "C:\projects\my-tool\config.json"
```
The config file is your file — NORA doesn’t generate it. You write it as the companion configuration for your script, containing all the parameters your script expects. A typical workflow:
- Write your script — e.g., `my_script.py` — designed to read parameters from a JSON file
- Create a config.json — populate it with your working parameter values
- Test on the canvas — paste the command with `--config` into a Script node, verify it works
- Package into Tool Library — NORA reads the config, extracts paramDefs, and optionally bundles the file
Key point: The config.json is the single source of truth for your tool’s parameters. When you add `_note_` descriptions (see below), you’re documenting your tool’s interface right where the values live.
How paramDefs Are Extracted
When your command includes --config "path/to/config.json", NORA reads that file and builds paramDefs:
Your config.json:
```json
{
  "_note_input_dir": "Directory containing files to process",
  "input_dir": "C:/Users/Me/data",
  "_note_output_format": "Output format: csv, json, or xml",
  "output_format": "csv",
  "_note_verbose": "Enable detailed logging",
  "verbose": true,
  "max_retries": 3
}
```
Extracted paramDefs (stored in tool.json):
```json
{
  "paramDefs": {
    "input_dir": {
      "type": "string",
      "description": "Directory containing files to process",
      "default": null
    },
    "output_format": {
      "type": "string",
      "description": "Output format: csv, json, or xml",
      "default": "csv"
    },
    "verbose": {
      "type": "boolean",
      "description": "Enable detailed logging",
      "default": true
    },
    "max_retries": {
      "type": "number",
      "description": "",
      "default": 3
    }
  }
}
```
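The extraction above can be sketched as a small Python function — a hypothetical reconstruction of the rules described on this page (skip `_note_` keys, infer types, nullify absolute paths), not NORA’s actual code:

```python
from pathlib import PureWindowsPath

def extract_param_defs(config: dict) -> dict:
    """Build paramDefs from a config dict per the rules described above (a sketch)."""
    defs = {}
    for key, value in config.items():
        if key.startswith("_note_"):
            continue  # _note_ keys are descriptions, not parameters
        description = config.get(f"_note_{key}", "")
        if isinstance(value, bool):            # check bool before number (bool is an int subtype)
            ptype = "boolean"
        elif isinstance(value, (int, float)):
            ptype = "number"
        elif isinstance(value, list):
            ptype = "array"
        else:
            ptype = "string"                   # strings and null both map to string
        # Nullify machine-specific absolute paths (Windows "C:/..." or POSIX "/...")
        if isinstance(value, str) and (PureWindowsPath(value).is_absolute() or value.startswith("/")):
            value = None
        defs[key] = {"type": ptype, "description": description, "default": value}
    return defs
```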
Scripts Without --config
If your script does not use the --config convention (e.g., it reads AI_TOOL_PARAMS_PATH directly, or takes no parameters at all), the following fields in tool.json will be null after import:
```json
{
  "entry": {
    "configFile": null
  },
  "paramDefs": null
}
```
This is normal. These fields are only auto-populated when the tool’s command contains a --config argument pointing to a JSON file.
Adding paramDefs Manually
If you want AI agents to know your tool’s parameters (recommended), you can edit tool.json directly after creating the tool:
- Open the tool’s folder (click Open in the Tool Library drawer)
- Edit `tool.json`
- Replace `"paramDefs": null` with your parameter definitions:
```json
{
  "paramDefs": {
    "url": {
      "type": "string",
      "description": "The target URL to post to",
      "default": null
    },
    "title": {
      "type": "string",
      "description": "Post title",
      "default": ""
    },
    "publish": {
      "type": "boolean",
      "description": "Publish immediately (true) or save as draft (false)",
      "default": false
    }
  }
}
```
Once paramDefs are present, AI agents will see the correct parameter names, types, and descriptions — preventing them from inventing wrong parameter names.
Tip: If your script reads parameters from `AI_TOOL_PARAMS_PATH`, adding paramDefs is the best way to document what your script expects. Without them, the AI agent has no way to know which parameters to pass.
The _note_ Convention
To provide descriptions for your parameters, add a companion key prefixed with _note_:
| Config Key | Description Key |
|---|---|
| `input_dir` | `_note_input_dir` |
| `output_format` | `_note_output_format` |
| `api_key` | `_note_api_key` |
The _note_ keys are not passed to your script — they’re only used to generate descriptions for the AI agent.
Type Inference
NORA infers parameter types from the JSON values:
| JSON Value | Inferred Type |
|---|---|
| `"hello"` | string |
| `123` or `3.14` | number |
| `true` or `false` | boolean |
| `[1, 2, 3]` | array |
| `null` | string (default) |
Absolute Paths Are Nullified
Important: When extracting defaults, NORA automatically sets absolute paths to null:
```
// Your config
{ "input_dir": "C:/Users/Me/Documents/data" }

// Extracted paramDef
{ "input_dir": { "type": "string", "default": null } }
```
This prevents machine-specific paths from being hardcoded into your portable tool. The AI agent (or you) must provide the actual path at runtime.
⚠️ paramDefs Defaults Persist Unless Agent Overrides
When a tool is imported, paramDefs is extracted from your config file. These values become the defaults that persist unless the agent explicitly passes a different value.
There are two scenarios:
1. Null defaults (absolute paths get nullified):

   `{ "output_dir": { "default": null } }`

   Script receives `null` unless the agent provides a path.

2. Placeholder defaults (non-path values stay as-is):

   `{ "site_url": { "default": "https://yoursite.com" } }`

   Script receives `"https://yoursite.com"` unless the agent provides a different URL or an empty string to suppress it.
The placeholder problem is more dangerous because the script may actually try to use a fake URL/credential, whereas a null will typically cause an obvious error or fallback.
Best practice:
- Before importing, remove placeholder values from your config that could cause problems
- After importing, edit paramDefs in tool.json to set placeholder values to null
- Ensure agent instructions list ALL parameters that need empty string overrides
Parameter Merge Order
When a tool runs, NORA merges parameters from multiple sources. Understanding this order is critical — placeholders in paramDefs persist unless the agent explicitly overrides them.
Merge Order (Last Wins)
```
Config file values  →  paramDefs/defaultParams  →  Agent values
     (lowest)                 (middle)               (highest)
```
Agent values win only for parameters the agent actually passes. If the agent doesn’t pass a parameter, the paramDefs value is used.
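In code terms, the merge behaves like successive dict updates where later sources win key-by-key (a sketch of the described behavior; the values are illustrative):

```python
def merge_params(config_values: dict, defaults: dict, agent_values: dict) -> dict:
    """Config file < paramDefs/defaultParams < agent values (last wins per key)."""
    return {**config_values, **defaults, **agent_values}

merged = merge_params(
    {"site_url": "https://yoursite.com", "timeout": 30},  # from the config file
    {"timeout": 60},                                      # from paramDefs/defaultParams
    {"site_url": "https://example.org"},                  # passed by the agent
)
```

Here `site_url` comes from the agent and `timeout` from the defaults; a key the agent never passes keeps its default value.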
The Placeholder Problem
When you import a tool, NORA extracts paramDefs from your config file — including placeholder values:
```json
{
  "site_url": { "default": "https://yoursite.com" },
  "username": { "default": "your_wp_username" },
  "credentials_file": { "default": null }
}
```
If the agent only passes some parameters:
`{ "credentials_file": "E:/creds/wp_credentials.txt" }`
What the script receives:
```
{
  "site_url": "https://yoursite.com",                 // ← PLACEHOLDER (agent didn't override)
  "username": "your_wp_username",                     // ← PLACEHOLDER (agent didn't override)
  "credentials_file": "E:/creds/wp_credentials.txt"   // ✅ Agent value
}
```
The script then tries to connect to yoursite.com instead of reading credentials from the file.
Why This Happens
The agent follows its instructions, which might say “use credentials_file instead of hardcoding credentials.” The agent correctly passes credentials_file — but doesn’t know to also pass empty strings for site_url, username, and app_password to suppress the placeholders.
The Fix: Pass Empty Strings to Suppress Placeholders
If your script treats empty string as “not provided,” the agent can suppress placeholders by passing "":
```json
{
  "site_url": "",
  "username": "",
  "app_password": "",
  "credentials_file": "E:/creds/wp_credentials.txt"
}
```
For this to work:
1. Your script must treat "" the same as None/missing
2. Agent instructions must explicitly list which parameters need empty string overrides
3. The agent must actually follow those instructions
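Requirement 1 can look like this in a Python script (a sketch; the parameter names follow the example above, and `load_credentials` is a hypothetical helper):

```python
def load_credentials(params: dict) -> dict:
    """Treat "" exactly like a missing value so agents can suppress placeholders."""
    site_url = params.get("site_url") or None          # "" and None both become None
    username = params.get("username") or None
    credentials_file = params.get("credentials_file") or None
    if credentials_file:
        # Prefer the credentials file over inline values when it is provided
        return {"source": "file", "path": credentials_file}
    return {"source": "inline", "site_url": site_url, "username": username}
```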
Best Practice: Avoid Placeholders in paramDefs
The safest approach is to ensure paramDefs doesn’t contain placeholder values that could cause problems:
| Parameter Type | Recommended paramDefs Value |
|---|---|
| Paths (machine-specific) | null — automatically nullified |
| Credentials | null or remove from paramDefs |
| URLs with placeholder | null or remove |
| Safe defaults (format, timeout) | Keep the default value |
If your config has `"site_url": "https://yoursite.com"`, consider one of the following:
- Removing it from the config before import
- Editing paramDefs in tool.json after import to set it to null
- Ensuring agent instructions explicitly pass "" to suppress it
- Relying on the AI to provide values — well-described parameters help the AI know what to pass
Setting Sensible Defaults
Where to Set Defaults
| Location | Best For |
|---|---|
| Config JSON file | Initial extraction — your working values become paramDefs |
| defaultParams | Overriding paramDef defaults per-tool or per-workflow |
| Inline tool definition | One-off tools with simple, static defaults |
Best Practices
1. Start with your working config
Get your script running with a config.json that has all your real values. When you create the tool, those become the paramDefs defaults.
2. Use _note_ descriptions liberally
The AI makes better decisions when it understands what each parameter does:
```json
{
  "_note_quality": "Image quality: 'low' (fast), 'medium' (balanced), 'high' (slow but best)",
  "quality": "medium"
}
```
3. Nullify secrets and machine-specific values
Don’t let API keys, passwords, or local paths get baked into defaults:
```json
{
  "api_key": null,
  "input_dir": null,
  "output_format": "json"
}
```
4. Provide reasonable operational defaults
For retry counts, timeouts, batch sizes — pick values that work for most cases:
```json
{
  "max_retries": 3,
  "timeout_seconds": 60,
  "batch_size": 100
}
```
5. Edit tool.json directly for post-creation tweaks
After creating a tool, you can manually edit ~/.nora/tools/<tool-id>/<version>/tool.json to adjust paramDefs or defaultParams without re-creating the tool.
Avoiding Conflicts
defaultParams vs paramDefs
If the same key exists in both defaultParams and paramDefs, defaultParams wins. To avoid confusion:
- Use paramDefs for the “schema” (type, description, baseline default)
- Use defaultParams only when you need to override paramDefs for a specific use case
Parameter Name Collisions
When your script accepts both CLI args and config file params, NORA strips CLI args that match paramDef keys from the command template. This prevents double-specification:
```shell
# Original command (on canvas)
python script.py --config "config.json" --format csv --verbose

# After tool creation (CLI args stripped, values in config)
python script.py --config "files/config.json"
```
Type Coercion
Your script should handle type coercion gracefully. The AI might pass:
- `"true"` instead of `true`
- `"123"` instead of `123`
Always validate and coerce in your script:
```python
verbose = str(params.get('verbose', 'false')).lower() == 'true'
max_retries = int(params.get('max_retries', 3))
```
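A slightly more defensive version of the same idea, packaged as reusable helpers (a sketch to adapt to your own script):

```python
def coerce_bool(value, default=False):
    """Accept true/"true"/"1"/"yes" (any case) as truthy; None and "" use the default."""
    if value is None or value == "":
        return default
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("true", "1", "yes")

def coerce_int(value, default=0):
    """Coerce numeric strings like "123"; fall back to the default on bad input."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default
```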
Version When You Change
When you modify a tool’s script or default parameters, create a new version rather than overwriting. This lets you roll back if something breaks, and existing workflows continue using the version they reference.
Keep Tool Sets Focused
When assigning tools to an Autonomous Agent, only include tools relevant to that agent’s task. A focused set of 5–15 tools gives better AI accuracy than a huge library of 50+ tools.
What’s Next?
- AI Features — How AI agents use tools during execution
- Settings & Configuration — Configure AI keys and app settings
- Running Workflows — Execute workflows that use tools
- Reference — API endpoints, troubleshooting, and glossary