AI Agent Workflow Automation: How AI Agents Execute Multi-Step Tasks
Most “AI automation” is just an API call with a prompt. Send text in, get text out. That works for summarization and translation, but it falls apart the moment a task requires more than one step.
Real work looks like this: read a folder of files, decide which ones need processing, transform them, write the results somewhere, and handle errors along the way. That requires an agent — an LLM that can use tools, read the results, and decide what to do next.
NORA ships with three levels of AI agent nodes, each designed for a different complexity tier. All three run locally on a Windows desktop. No cloud infrastructure. No container orchestration. No API gateway.
What Makes an Agent Different from an API Call
A standard LLM API call is stateless. You send a prompt, you get a response. If you need multiple steps, you write the glue code yourself — parse the output, call the next API, handle failures, manage state.
An agent adds two things on top of the LLM:
- Tool use — the LLM can request that specific tools be executed (read a file, run a command, call an API)
- Decision loops — after each tool result comes back, the LLM evaluates it and decides the next action
This loop — think, act, observe, repeat — is what turns a language model into something that can complete multi-step tasks without human intervention at each step.
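In rough Python, the loop looks like the sketch below. This is illustrative only, not NORA's internal code; `ask_llm` and `run_tool` are mocks standing in for a real LLM call and real tool execution.

```python
# Illustrative think-act-observe loop (not NORA's internal code).
# ask_llm() and run_tool() are mocks standing in for a real LLM call
# and real tool execution.

def ask_llm(history):
    # "Think": finish once a tool result is in the history, otherwise pick a tool.
    if any(m["role"] == "tool" for m in history):
        return {"type": "final", "answer": "done"}
    return {"type": "tool_call", "tool": "list_directory", "args": {"path": "."}}

def run_tool(name, args):
    # "Act": pretend to execute the requested tool.
    return f"executed {name} with {args}"

def agent_loop(goal, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = ask_llm(history)                              # think
        if decision["type"] == "final":
            return decision["answer"]                            # task complete
        result = run_tool(decision["tool"], decision["args"])    # act
        history.append({"role": "tool", "content": result})      # observe, repeat
    return "stopped: step limit reached"

print(agent_loop("summarize the files in this folder"))
```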
NORA implements this pattern at three levels of complexity.
Level 1: AI Agent Node — LLM with Tool Routing
The AI Agent Node connects an LLM to downstream workflow nodes. The LLM receives a prompt, decides which tool to invoke, and NORA routes execution to the corresponding downstream node.
Key capabilities:
- Single-turn or multi-turn dialogue. Configure max conversation turns for iterative refinement.
- Tool integration. Each downstream connection is a tool the LLM can invoke. The LLM decides which one based on the input.
- Memory folders. Point the node at a directory of context files (`.txt`, `.md`, `.json`, `.html`). The LLM reads these as background knowledge before making decisions.
- Tool parameters from upstream nodes. Pass dynamic values from earlier workflow steps into tool calls.
- Conversation history persistence. Multi-turn sessions retain context across turns.
- Email summaries with token cost. Get a Gmail notification with what the agent did and what it cost.
When to use it: Tasks where the LLM needs to choose between a small set of known actions — classify a document and route it, triage an incoming request, decide whether to approve or escalate.
Three AI providers available: OpenAI (GPT-5.5, GPT-5.4, mini, nano), Anthropic (Claude Opus 4.7, Sonnet 4.6, Haiku 4.5), and Google (Gemini 2.5 Flash/Flash-Lite/Pro plus Gemini 3 preview variants). Configure per node with your own API keys.
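Under the hood, this kind of routing rests on standard LLM function calling. The sketch below shows the general mechanism with the OpenAI Python SDK; it is not NORA's code (the node wires this up for you), and the tool names and model name are placeholders.

```python
# General tool-routing mechanism via LLM function calling, shown with the
# OpenAI Python SDK. Not NORA's code: the node wires this up for you, and the
# tool names and model here are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [
    {"type": "function", "function": {
        "name": "approve_request",
        "description": "Approve a routine request",
        "parameters": {"type": "object",
                       "properties": {"id": {"type": "string"}},
                       "required": ["id"]}}},
    {"type": "function", "function": {
        "name": "escalate_request",
        "description": "Escalate a request that needs human review",
        "parameters": {"type": "object",
                       "properties": {"id": {"type": "string"}},
                       "required": ["id"]}}},
]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model the node is configured for
    messages=[{"role": "user", "content": "Request 42: reset my password."}],
    tools=tools,
)

# Whatever tool the model asked for is the routing decision.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```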
Level 2: AI Autonomous Agent — Full Autonomy with Safety Limits
The AI Autonomous Agent Node is a self-directing agent. Give it a goal, and it figures out the steps. The LLM decides which tools to call, executes them, reads the results, decides the next step, and repeats until the task is complete.
Built-in tools (no configuration needed):
- `read_file`: read any file from the working directory
- `write_file`: create or overwrite files
- `list_directory`: enumerate folder contents
- `file_exists`: check before reading
- `run_command`: execute shell commands
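For intuition, here are rough Python equivalents of those five tools. This is a conceptual sketch, not NORA's implementation.

```python
# Rough Python equivalents of the five built-in tools (conceptual only,
# not NORA's implementation).
import os
import subprocess

def dispatch(tool: str, **kw) -> str:
    if tool == "read_file":
        with open(kw["path"], encoding="utf-8") as f:
            return f.read()
    if tool == "write_file":
        with open(kw["path"], "w", encoding="utf-8") as f:
            f.write(kw["content"])
        return "ok"
    if tool == "list_directory":
        return "\n".join(os.listdir(kw["path"]))
    if tool == "file_exists":
        return str(os.path.exists(kw["path"]))
    if tool == "run_command":
        out = subprocess.run(kw["command"], shell=True,
                             capture_output=True, text=True)
        return out.stdout + out.stderr
    raise ValueError(f"unknown tool: {tool}")

print(dispatch("file_exists", path="report.csv"))
```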
Three ways to add more tools:
- Connect downstream workflow nodes as tools
- Load tools from the Tool Library (`~/.nora/tools/`)
- Use the built-in file system tools as-is
Safety controls:
- Max iterations: Default 10, configurable up to 100. The agent stops after this many tool-call cycles regardless of completion state.
- Timeout: Default 30 minutes, configurable up to 120 minutes.
- Budget limit: Default $10 (USD), configurable to any dollar cap on API costs. The agent stops when the budget is reached.
- Output routing: The agent’s final state routes to one of four paths (`complete`, `partial`, `needs-input`, or `error`), and downstream nodes handle each case differently.
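To see how the limits interact, here is a sketch of a loop that enforces all three and ends with a route label. It is illustrative only; in NORA these are node settings, and `step` is a hypothetical placeholder for one think-act-observe cycle.

```python
# How the three safety limits might interact (illustrative; in NORA these are
# node settings, not code). step() is a hypothetical placeholder for one
# think-act-observe cycle and returns (done, cost_in_usd).
import time

def run_with_limits(step, max_iterations=10, timeout_minutes=30, budget_usd=10.0):
    started = time.monotonic()
    spent = 0.0
    for i in range(max_iterations):
        if time.monotonic() - started > timeout_minutes * 60:
            return "partial"        # out of time
        if spent >= budget_usd:
            return "partial"        # out of budget
        done, cost = step(i)
        spent += cost
        if done:
            return "complete"
    return "partial"                # iteration cap reached

# Example: each cycle costs $0.02 and the task finishes on the third cycle.
print(run_with_limits(lambda i: (i == 2, 0.02)))
```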
SSE streaming: Watch the agent’s reasoning and tool calls in real time via the dashboard.
Upstream context merging: The agent automatically receives output from upstream nodes as additional context, so it builds on work already done in the workflow.
When to use it: Tasks with unpredictable step counts — “analyze these 50 CSV files and write a summary report,” “refactor all Python files in this directory to use type hints,” “read the error logs and fix the configuration.”
Level 3: Custom Script Agent — Write Your Own Agent Logic
The Custom Script Agent Node lets you write the agent loop yourself in Python or Node.js. Your script communicates with NORA through a JSON stdin/stdout protocol.
Your script receives instructions via stdin, calls its own LLM (with its own API keys), and sends tool_call requests back to NORA when it needs to execute tools. NORA runs the tools and returns results to the script’s stdin. The script repeats until done.
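A minimal Python skeleton of that exchange might look like the following. The JSON field names (`type`, `tool`, `args`, `route`, and so on) are assumptions for illustration only; consult the node's documentation for the actual message schema.

```python
# Skeleton of a Custom Script Agent in Python. The JSON field names below are
# assumptions for illustration only; the real stdin/stdout message schema is
# defined by the node's documentation.
import json
import sys

def send(msg: dict) -> None:
    # One JSON message per line on stdout, flushed so NORA sees it immediately.
    sys.stdout.write(json.dumps(msg) + "\n")
    sys.stdout.flush()

def main() -> None:
    for line in sys.stdin:                    # each line: one JSON message from NORA
        msg = json.loads(line)
        if msg.get("type") == "instruction":
            # Your own LLM call (with your own API keys) would go here to pick a tool.
            send({"type": "tool_call", "tool": "read_file",
                  "args": {"path": "input.txt"}})
        elif msg.get("type") == "tool_result":
            # Inspect the result, then either request another tool or finish
            # with a route label for downstream branching.
            send({"type": "done", "route": "complete",
                  "output": msg.get("result", "")})

if __name__ == "__main__":
    main()
```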
Key capabilities:
- Any language model. Your script manages its own API keys and can call any provider — not limited to the three built into NORA.
- Dynamic Tool Discovery. Your script can query NORA’s Tool Library with fuzzy search to find available tools at runtime.
- Dynamic routes. The script decides its own output route labels, creating flexible downstream branching.
- SSE streaming. Stream progress updates back to the NORA dashboard in real time.
- Debug mode. Verbose logging of every JSON message between the script and NORA.
When to use it: When you need full control over the agent loop — custom retry logic, specialized prompting strategies, integration with APIs that aren’t in the Tool Library, or agent architectures that don’t fit the built-in autonomous pattern.
Cost Tracking Across All Agent Types
Every AI node in NORA tracks costs in real time. Per-call token counts, per-provider pricing rates, per-workflow totals, and per-tool breakdowns are all recorded in execution history.
The Autonomous Agent’s budget limit makes this actionable — set a $2.00 cap on an experimental workflow and it stops before overspending.
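The arithmetic behind the tracking is straightforward: token counts multiplied by the provider's per-token rate, summed across calls. A quick sketch, with placeholder rates rather than actual provider pricing:

```python
# The arithmetic behind cost tracking (rates below are placeholders, not
# actual provider pricing).
def call_cost(prompt_tokens, completion_tokens, input_rate_per_1k, output_rate_per_1k):
    return (prompt_tokens / 1000) * input_rate_per_1k \
         + (completion_tokens / 1000) * output_rate_per_1k

# A workflow total is just the sum over its calls.
calls = [(1200, 300), (800, 150), (2500, 600)]   # (prompt, completion) token counts
total = sum(call_cost(p, c, 0.005, 0.015) for p, c in calls)
print(f"workflow total: ${total:.4f}")
```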
Practical Use Cases
Document processing pipeline. An AI Autonomous Agent reads a folder of mixed-format files, classifies each one, extracts key data, and writes structured output to a results directory. If a file can’t be parsed, the agent routes to the error output for manual review.
Code generation with file I/O. Give an Autonomous Agent a spec file and tell it to generate the implementation. The agent reads the spec, writes code files, runs tests via run_command, reads test output, fixes failures, and repeats until tests pass — all within a configurable iteration and budget limit.
Data analysis with tool chains. A Custom Script Agent receives a dataset path, calls a Python analysis library through NORA’s tool system, reads the statistical output, decides which visualizations to generate, and produces a final report — using whatever LLM provider your script is configured for.
Getting Started
Download NORA at software.reibuys.com/nora. Install the .exe on Windows 10 or later. A paid license key is required. Drag an AI Agent Node onto the canvas, connect your API key for OpenAI, Anthropic, or Google, wire up downstream tools, and run it.
One-time purchase. No subscription. 30-day money-back guarantee.
| Feature | AI Agent Node | AI Autonomous Agent | Custom Script Agent |
|---|---|---|---|
| LLM decides tool calls | Yes | Yes | Yes (your script) |
| Multi-turn conversation | Yes | Yes | Your implementation |
| Built-in file system tools | No | Yes (5 tools) | Via Tool Library |
| Tool Library access | Via connections | Yes | Yes + fuzzy search |
| Budget limit | No | Yes (USD cap) | Your implementation |
| Max iterations | Max turns config | 100 cap | Your implementation |
| Timeout | N/A | 120 min max | Your implementation |
| Custom agent logic | No | No | Full control |
| SSE streaming | No | Yes | Yes |
| Output routing | Based on LLM choice | 4 routes | Dynamic routes |
Get NORA at software.reibuys.com/nora. One-time purchase. 30-day money-back guarantee.