This repository contains the netrun project, a flow-based development (FBD) runtime system.
The project is split into two main components:
netrun-sim is a Rust library that simulates the flow of packets through a network of interconnected nodes. It does not execute actual node logic or manage packet data—instead, it tracks packet locations, validates flow conditions, and manages the lifecycle of node executions (called "epochs").
This separation of concerns allows the actual execution and data storage to be implemented independently of the flow mechanics.
netrun is a pure Python package built on top of netrun-sim. It provides:
- Flow-based network execution via the `Net` class (integrates with netrun-sim)
- Configuration system (`NetConfig`, `NodeConfig`, etc.) with JSON/TOML support
- RPC (Remote Procedure Call) communication primitives
- Worker pool management (threads, processes, remote)
- High-level execution orchestration via `ExecutionManager`
- Node factories for creating nodes from functions or broadcast patterns
- CLI for validation, inspection, and config conversion
- Tools for template resolution, action execution, and recipe management
See netrun/PROJECT_SPEC.md for the full specification.
Important: The netrun package uses nblite for literate programming. Before writing any code for netrun, you must read netrun/NBLITE_INSTRUCTIONS.md carefully. Key points:
- Source code lives in `.pct.py` files (percent notebooks), not in the exported Python modules
- Never edit files in `src/netrun/` directly - they are auto-generated
- After editing `.pct.py` files, run `nbl export --reverse` then `nbl export`
- nblite does NOT auto-generate `__init__.py` files. You must create `__init__.py` files manually in `src/` for any package that needs re-exports. These manual `__init__.py` files are not overwritten by `nbl export`.
netrun-ui is a visual editor for creating and editing netrun flow configurations. The frontend is built with SvelteKit (Svelte 5) and SvelteFlow. The backend is a FastAPI Python app (netrun_ui_backend/) that handles file I/O, factory resolution, config validation, and action execution.
Reference Documentation (read these for context when working on netrun-ui):
- `netrun-ui/README.md` - Project overview, architecture, and development setup
- `netrun-ui/BACKEND_README.md` - Backend CLI usage and installation
Important: netrun-ui/netrun_ui_backend/vis_assets/ is intentionally NOT gitignored (so hatchling includes it in the wheel), but the built assets inside it must NEVER be committed to git. They are generated at build time by npm run build:app in netrun-ui-vis/. The CI release workflow creates them before building the wheel.
Each subproject may have a _dev_notes/ directory containing implementation plans and development documentation. These are critical for maintaining context across sessions.
- Always check for existing plans: Before starting work on a subproject, check its `_dev_notes/` directory for the latest plan (files are numbered with prefixes like `00_`, `01_`, etc.)
- Follow the current plan: If there's an active plan for your task, follow it step by step
- Create new plans for major features: When starting a major new feature, create a new plan file in `_dev_notes/` with the next number prefix (e.g., `01_feature_name.md`)
- Plans should include: Overview, approach/strategy, implementation steps, files to modify, verification steps
Example structure:

```
netrun-ui/
├── _dev_notes/
│   ├── 00_multi_tab_support_plan.md
│   └── 01_next_feature_plan.md
```
```
repo/
├── CLAUDE.md                      # This file
├── README.md                      # Project README
├── netrun-sim/                    # Simulation engine (Rust + Python bindings)
│   ├── Cargo.toml                 # Rust workspace root
│   ├── core/                      # Rust library
│   │   ├── Cargo.toml
│   │   ├── src/
│   │   │   ├── lib.rs             # Module exports
│   │   │   ├── _utils.rs          # Utility functions
│   │   │   ├── graph.rs           # Graph topology types
│   │   │   ├── graph_tests.rs     # Graph tests (separate file)
│   │   │   ├── net.rs             # Network runtime state
│   │   │   ├── net_tests.rs       # Net tests (separate file)
│   │   │   └── test_fixtures.rs   # Test helpers (feature-gated)
│   │   ├── tests/                 # Integration tests
│   │   └── examples/              # Rust examples
│   └── python/                    # Python bindings (PyO3)
│       ├── Cargo.toml             # PyO3 crate
│       ├── pyproject.toml         # Maturin config
│       ├── src/                   # Rust binding code
│       ├── python/                # Python package
│       │   └── netrun_sim/
│       └── examples/              # Python examples
├── netrun/                        # Runtime (pure Python, nblite project)
│   ├── PROJECT_SPEC.md            # Full specification
│   ├── NBLITE_INSTRUCTIONS.md     # How to write code (READ THIS FIRST)
│   ├── nblite.toml                # nblite configuration
│   ├── nbs/                       # Jupyter notebooks (.ipynb)
│   │   ├── netrun/                # Source notebooks
│   │   └── tests/                 # Test notebooks
│   ├── pts/                       # Percent notebooks (.pct.py) - EDIT THESE
│   │   ├── netrun/                # Source percent notebooks
│   │   └── tests/                 # Test percent notebooks
│   └── src/                       # Auto-generated code (DO NOT EDIT)
│       ├── netrun/                # Generated Python package
│       └── tests/                 # Generated test files
└── netrun-ui/                     # Visual editor (SvelteKit + FastAPI)
    ├── src/                       # Frontend source (Svelte 5, SvelteFlow)
    └── netrun_ui_backend/         # Python backend (FastAPI)
```
All major modules are fully implemented:
- Net Module (`netrun.net`) - Flow-based execution with netrun-sim integration
- Packets (`netrun.packets`) - Packet value storage (`PacketStore`, `LazyPacketValueSpec`)
- Storage (`netrun.storage`) - Serialization, compression, backends, caching, file storage
- RPC Layer (`netrun.rpc`) - Bidirectional message-passing channels
- Pool Layer (`netrun.pool`) - Worker pool management
- ExecutionManager (`netrun.execution_manager`) - High-level execution orchestration
- Node Factories (`netrun.node_factories`) - Function and broadcast factories
- CLI (`netrun.cli`) - Validation, inspection, conversion commands
- Tools (`netrun.tools`) - Template resolution, action execution, recipes
- Core (`netrun.core`) - Convenience re-exports of Net, NetConfig, etc.
- `_base` - Timestamp generation
- `hashing` - Hash computation utilities
- `pickling` - Pickling method handling
- `var_ref` - `VarRef`, `EnvVar` (alias), and `VarResolvableModel` for variable reference resolution
- `PacketStore` - Thread-safe storage for packet values
- `LazyPacketValueSpec` - Lazy value specification for deferred evaluation (func_import_path, args, kwargs)
Storage layer for packet data persistence, caching, and file storage:
- `config` - `StorageConfig`, `NodeStorageConfig`, `CacheConfig`, `NodeCacheConfig`, `NodeFileStorageConfig`, backend configs (`LocalBackendConfig`, `S3BackendConfig`, `SSHBackendConfig`, `RcloneBackendConfig`, `GCSBackendConfig`)
- `_serialization` - Serialization methods
- `_compression` - Compression methods
- `_backends` - Storage backend implementations
- `_retrieval` - Packet retrieval utilities
- `_cache` - Epoch caching logic
- `_file_storage` - File-based storage
The RPC layer provides bidirectional (key, data) message passing between components.
Base Classes (`netrun.rpc.base`):

- `RPCChannel` - Protocol for async bidirectional message passing
- `SyncRPCChannel` - Protocol for sync bidirectional message passing
- Exceptions: `RPCError`, `ChannelClosed`, `ChannelBroken`, `RecvTimeout`
Implementations:
| Module | Classes | Use Case |
|---|---|---|
| `rpc.aio` | `AsyncChannel` | Async task communication via `asyncio.Queue` |
| `rpc.thread` | `ThreadChannel`, `SyncThreadChannel` | Thread communication via `queue.Queue` |
| `rpc.multiprocess` | `ProcessChannel`, `SyncProcessChannel` | Process communication via `multiprocessing.Queue` |
| `rpc.remote` | `WebSocketChannel` | Network communication via WebSockets |
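The channel contract is just bidirectional `(key, data)` message passing. The sketch below illustrates that idea with the stdlib only; it is a toy with invented names (`ToyChannel`, `toy_channel_pair`), not the actual `netrun.rpc` API.

```python
import queue
import threading

class ToyChannel:
    """Toy bidirectional (key, data) channel; each side holds one endpoint."""
    def __init__(self, inbox: queue.Queue, outbox: queue.Queue):
        self._inbox = inbox
        self._outbox = outbox

    def send(self, key, data):
        self._outbox.put((key, data))

    def recv(self, timeout=None):
        # raises queue.Empty if nothing arrives within the timeout
        return self._inbox.get(timeout=timeout)

def toy_channel_pair():
    a_to_b, b_to_a = queue.Queue(), queue.Queue()
    return ToyChannel(b_to_a, a_to_b), ToyChannel(a_to_b, b_to_a)

parent, child = toy_channel_pair()

def echo_worker(chan):
    key, data = chan.recv()      # blocks until the parent sends
    chan.send("echo", data)      # reply on the same channel

t = threading.Thread(target=echo_worker, args=(child,))
t.start()
parent.send("task", {"x": 1})
key, data = parent.recv(timeout=5)
t.join()
# key == "echo", data == {"x": 1}
```

The real implementations in the table above follow the same pattern, swapping the transport (`asyncio.Queue`, `multiprocessing.Queue`, WebSockets) behind the same send/recv surface.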
The Pool layer manages collections of workers that process messages.
Base Classes (`netrun.pool.base`):

- `Pool` - Protocol for worker pools
- `WorkerMessage` - Message from a worker (worker_id, key, data)
- `WorkerFn` - Worker function type: `Callable[[SyncRPCChannel, int], None]`
- Exceptions: `PoolError`, `PoolNotStarted`, `PoolAlreadyStarted`, `WorkerException`, `WorkerCrashed`, `WorkerTimeout`
Implementations:
| Module | Class | Description |
|---|---|---|
| `pool.thread` | `ThreadPool` | Multiple worker threads in same process |
| `pool.multiprocess` | `MultiprocessPool` | Multiple subprocesses, each with worker threads |
| `pool.remote` | `RemotePoolServer`, `RemotePoolClient` | Network-based pool hosting via WebSockets |
| `pool.aio` | `SingleWorkerPool` | Single async coroutine in main event loop |
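The pool model can be sketched with the stdlib: one inbox per worker, one shared outbox tagged with worker IDs. This toy mirrors the `WorkerMessage` triple, but all class names here are invented, and where the real `WorkerFn` receives a `SyncRPCChannel`, the toy pool itself plays that role.

```python
import queue
import threading
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToyWorkerMessage:
    worker_id: int
    key: str
    data: Any

class ToyThreadPool:
    """Toy thread pool: per-worker inboxes, one shared outbox."""
    def __init__(self, worker_fn: Callable, num_workers: int):
        self._inboxes = [queue.Queue() for _ in range(num_workers)]
        self._outbox = queue.Queue()
        self._threads = [
            threading.Thread(target=worker_fn, args=(self, i), daemon=True)
            for i in range(num_workers)
        ]

    # channel-like surface handed to worker_fn
    def worker_recv(self, worker_id):
        return self._inboxes[worker_id].get()

    def worker_send(self, worker_id, key, data):
        self._outbox.put(ToyWorkerMessage(worker_id, key, data))

    def start(self):
        for t in self._threads:
            t.start()

    def send(self, worker_id, key, data):
        self._inboxes[worker_id].put((key, data))

    def recv(self, timeout=None):
        return self._outbox.get(timeout=timeout)

def worker_fn(pool, worker_id):
    # process one task then exit (a real worker would loop until told to stop)
    key, data = pool.worker_recv(worker_id)
    pool.worker_send(worker_id, "done", data["x"] * 2)

pool = ToyThreadPool(worker_fn, num_workers=2)
pool.start()
pool.send(worker_id=1, key="task", data={"x": 21})
msg = pool.recv(timeout=5)
# msg.worker_id == 1, msg.key == "done", msg.data == 42
```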
Common Pool API:

```python
pool = ThreadPool(worker_fn, num_workers=4)
await pool.start()
await pool.send(worker_id=0, key="task", data={"x": 1})
msg = await pool.recv(timeout=5.0)  # Returns WorkerMessage
await pool.broadcast(key="config", data={...})  # Send to all workers
result = await pool.try_recv()  # Returns None if no message
await pool.close()

# Or use as context manager:
async with ThreadPool(worker_fn, num_workers=4) as pool:
    ...
```

MultiprocessPool Features:
- stdout/stderr capture with timestamps
- Output buffering with configurable flush intervals
- `flush_stdout(process_idx)` / `flush_all_stdout()` methods
RemotePool Usage:

```python
# Server side
server = RemotePoolServer()
server.register_worker("my_worker", worker_fn)
async with server.serve_background("0.0.0.0", 8765):
    ...

# Client side
async with RemotePoolClient("ws://localhost:8765") as client:
    await client.create_pool("my_worker", num_processes=2)
    await client.send(0, "task", data)
    msg = await client.recv()
```

High-level orchestration for executing functions across different pool types.
Key Classes:
- `ExecutionManager` - Main orchestrator
- `JobResult` - Result from job execution (timestamps, result, print buffer)
- `RunAllocationMethod` - Worker selection strategy (`ROUND_ROBIN`, `RANDOM`, `LEAST_BUSY`)
Usage:
```python
manager = ExecutionManager({
    "thread_pool": (ThreadPool, {"num_workers": 4}),
    "process_pool": (MultiprocessPool, {"num_processes": 2, "threads_per_process": 2}),
})
async with manager:
    # Send a function to workers
    await manager.send_function_to_pool("thread_pool", "my_func", my_function)

    # Run the function
    result = await manager.run(
        pool_id="thread_pool",
        worker_id=0,
        func_import_path_or_key="my_func",
        send_channel=False,
        func_args=(1, 2),
        func_kwargs={"x": 3},
    )
    print(result.result)        # Function return value
    print(result.print_buffer)  # Captured print statements
```

ExecutionManager Protocol Keys:
- `RUN` - Execute a function
- `SEND_FUNCTION` - Register a function by key
- `UP_RUN_STARTED` - Confirmation that the function started
- `UP_RUN_RESPONSE` - Return result
- `UP_PRINT_BUFFER` - Captured print statements
The Net module provides flow-based network execution by integrating with netrun-sim.
Key Classes:
- `Net` - Main orchestrator: manages pools, executes epochs, routes packets
- `NetConfig` - Top-level configuration (pools, graph, output queues, storage)
- `NodeConfig` - Node definition (ports, salvo conditions, factory, execution config)
- `NodeExecutionConfig` - Execution settings (pools, retries, rate limiting, type checking)
- `GraphConfig` - Graph topology (nodes, edges, output queues)
- `PortConfig` - Port definition with optional type annotation
- `EdgeConfig` - Connection between ports (`source_node`, `source_port`, `target_node`, `target_port`)
- `SalvoConditionConfig` - Rules for triggering epochs or sending output
- `OutputQueueConfig` - Collects packets from unconnected output ports
- `NodeExecutionContext` - Context passed to node functions (consume/create packets, print, etc.)
- `TargetInputSalvo` - Target-based execution support
Net Lifecycle:
```python
# Async usage
async with Net(config) as net:
    net.inject_data("Source", "in", [value1, value2])
    await net.run_until_blocked()
    results = net.flush_output_queue("results")

# Sync usage
with Net(config) as net:
    net.inject_data("Source", "in", [value1, value2])
    net.run_until_blocked_sync()
    results = net.flush_output_queue("results")
```

Lifecycle methods: `init()` / `close()` (async), `init_sync()` / `close_sync()` (sync). The context managers call these automatically.
Sub-objects:
- `net.cache` — Cache inspection and management (`net.cache.stats()`, `net.cache.entries(node)`, `net.cache.clear()`, etc.)
- `net.logs` — Log query and printing (`net.logs.print_all()`, `net.logs.for_node(name)`, `net.logs.for_epoch(id)`, etc.)
Epoch Lifecycle Callbacks: Register callbacks to observe epoch starts/ends in real time:
```python
# Net-level: fires for ALL nodes
remove = net.on_epoch_start(lambda node_name, epoch_id: print(f"{node_name} started"))
remove = net.on_epoch_end(lambda node_name, epoch_id, record: print(f"{node_name} ended"))
remove()  # deregister

# Node-scoped: fires only for THAT node
remove = net.on_epoch_start(callback, node="fetch")
remove = net.on_epoch_end(callback, node="fetch")
```

Both sync and async callbacks are supported. `on_epoch_end` receives the `EpochLog` (with `was_cancelled`, `ended_at`, `started_at`, `logs`, etc.).
Features: signals, controls, pause/resume, rate limiting, retries, type checking, print capture, output queues, caching, file storage, epoch lifecycle callbacks.
Command-line interface (Typer-based) for working with netrun configs:
- `validate` - Validate a config file
- `structure` - Output graph topology as JSON
- `convert` - Convert between config formats (JSON/TOML)
- `factory-info` - Show factory parameters and ports
- `info` - Show net information
- `nodes` - List all nodes
- `node` - Show specific node details
Utilities for action execution and recipe management:
- `ActionConfig`, `ActionContext`, `ActionResult` - Action models
- `RecipeConfig` - Recipe configuration
- `execute_action()`, `execute_command()` - Execute shell commands with environment
- Template variable resolution
Convenience re-exports of the most commonly used classes:
`Net`, `NetConfig`, `NodeConfig`, `NodeExecutionConfig`, `PoolConfig`, `PortConfig`, `EdgeConfig`, `GraphConfig`, `SalvoConditionConfig`, `OutputQueueConfig`, `TargetInputSalvo`, `EnvVar`.
Node variables are typed key-value pairs accessible to nodes via `ctx.vars`. They support a two-level inheritance model:

- Net-level (`NetConfig.node_vars`): Global defaults applied to all nodes
- Node-level (`NodeExecutionConfig.node_vars`): Per-node overrides that take precedence
`NodeVariable.value` is optional (`str | int | float | bool | VarRef | None`, default `None`). Variables without values are placeholders that can be filled in via `NetConfig.from_file()` overrides:
```python
NetConfig.from_file(
    "netrun.json",
    global_node_vars={"run_name": "my_run"},    # fills net-level placeholders
    node_vars={"my_node": {"model": "gpt-4"}},  # fills node-level placeholders
)
```

Calling `resolve_value()` on a variable with `value=None` raises `ValueError` — unless `optional=True` (see below). The UI frontend must handle `variable.value` being undefined (absent from JSON) gracefully.
`NodeVariable.optional: bool = False` allows a variable to legitimately have no value. When `optional=True` and `value=None`, `resolve_value()` returns `None` instead of raising, and `options` validation is skipped. If `value` is set, it resolves normally (including `options` validation). For inherited vars, `optional` is inherited from the global variable (it cannot be explicitly set together with `inherit=True`).
`NodeVariable.inherit: bool = False` enables node-level vars to inherit from net-level vars of the same name. When `inherit=True` on a node-level var:

- Requires a net-level var of the same name (errors if missing at resolve time)
- `type`, `options`, and `optional` must not be explicitly set (they are inherited from the global var)
- `value` may optionally be set to override just the value; type/options still come from the global var
- If `value` is not set, the global value is used entirely
Net-level vars must never have `inherit=True` (this is an error at resolve time).
The inherit validation uses `model_fields_set` but only rejects a field if its value differs from the default, to allow serialization round-trips where `model_dump()` includes `type: "str"`.
- Model definition: `pts/netrun/06_net/00_config/01_nodes.pct.py` (class `NodeVariable`)
- Variable merge logic (graph resolve): `pts/netrun/06_net/00_config/02_graph.pct.py`
- Variable merge logic (preprocessor): `pts/netrun/06_net/01_net/00_context.pct.py` (`create_net_func_preprocessor`)
- `from_file` overrides: `pts/netrun/06_net/00_config/03_net_config.pct.py` (`_set_node_vars_in_data`)
- UI store: `netrun-ui/src/lib/stores/variablesStore.ts`
- UI components: `netrun-ui/src/lib/components/NodeVariablesSection.svelte`, `AllNodeVariablesSection.svelte`
Node factories provide a way to create NodeConfig objects dynamically. This enables reusable node templates, function-based nodes, and configuration-driven node creation.
A factory module must export exactly two functions:
`get_node_config(_net_config=None, *, **factory_args) -> NodeConfig`

- `_net_config` is a mandatory first parameter, always injected by the system (never by users)
- The `*` makes all factory args keyword-only
- Returns the graph structure (ports, salvo conditions, metadata)
- Must NOT include `execution_config` (it will be stripped/ignored)
- Cannot return closures or unpickleable objects
- Subgraph factories may return `SubgraphConfig` instead of `NodeConfig`
`get_node_funcs(_net_config=None, *, **factory_args) -> tuple[exec_func, init_func, close_func, on_failure_func]`

- `_net_config` is injected the same way as above
- Returns the execution functions as a 4-tuple
- Typically `(exec_func, None, None, None)` — (exec, init, close, on_failure)
- Can capture `factory_args` in closures (functions are resolved on each worker)
- Not needed for subgraph factories
The `_net_config` parameter provides access to the full `NetConfig`, enabling factories to resolve relative file paths against `project_root_path`. The underscore prefix signals that it is internal and not user-facing. It is filtered out of CLI `factory-info` output and UI parameter displays.
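A factory module following this two-function protocol might look like the sketch below. This is a shape-only illustration: plain dicts stand in for `NodeConfig`, and the `multiplier` argument, port names, and dict keys are invented.

```python
# Hypothetical factory module sketch; dicts stand in for NodeConfig.
def get_node_config(_net_config=None, *, multiplier: int = 2):
    # Graph structure only -- no execution_config, no closures.
    return {
        "input_ports": ["in"],
        "output_ports": ["out"],
        "metadata": {"multiplier": multiplier},
    }

def get_node_funcs(_net_config=None, *, multiplier: int = 2):
    # factory_args may be captured in closures; functions are
    # resolved on each worker, so the closure is rebuilt there.
    def exec_func(value):
        return value * multiplier

    return (exec_func, None, None, None)  # (exec, init, close, on_failure)

# The system injects _net_config; users pass only keyword factory args.
config = get_node_config(multiplier=3)
exec_func, init_func, close_func, on_failure_func = get_node_funcs(multiplier=3)
# exec_func(10) == 30
```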
```
1. CONFIG CREATION
   NodeConfig(factory="module.path", factory_args={...})
   → Factory and args stored, execution_config is None

2. RESOLUTION (in Net.__init__)
   NodeConfig.resolve(net_config=...) calls get_node_config(_net_config=net_config, **factory_args)
   → Returns config with ports/salvos, execution_config still None
   → Factory and factory_args preserved for worker-level resolution

3. WORKER EXECUTION
   NetFuncPreprocessor._resolve_factory(node_name)
   → Imports factory module on worker
   → Reconstructs NetConfig from serialized net_config_data
   → Calls get_node_funcs(_net_config=net_config, **factory_args)
   → Gets actual exec_func, caches for reuse
```
```toml
[[graph.nodes]]
factory = "netrun.node_factories.function"

[graph.nodes.factory_args]
func = "mymodule.my_function"

[graph.nodes.execution_config]
pools = ["main"]
type_checking_enabled = true  # Settings go here, NOT in factory_args
```

Important: Execution settings like `type_checking_enabled`, `pools`, etc. are configured in `execution_config`, NOT in `factory_args`. The factory only provides graph structure and execution functions.
- `netrun.node_factories.function` - Creates nodes from regular Python functions
  - factory_args: `func` (callable or import path string)
  - Parses the function signature for input/output ports
  - Handles special parameters (`ctx`, `print`)
- `netrun.node_factories.broadcast` - Fan-out replication node
  - factory_args: configurable input/output ports
  - `CopyMode` enum: `none`, `shallow`, `deep`
  - Replicates input packets to multiple output ports
- Factory protocol implementation: `pts/netrun/06_net/00_config/01_nodes.pct.py`
- Function factory: `pts/netrun/07_node_factories/00_from_function.pct.py`
- Broadcast factory: `pts/netrun/07_node_factories/01_broadcast.pct.py`
- Factory resolution in Net: `pts/netrun/06_net/01_net/02_net.pct.py`
- Example usage: `sample_projects/` (see `07_node_factories`, `10_controls_and_signals`)
- Edit `.pct.py` files in `pts/netrun/` or `pts/tests/`
- Export to notebooks: `nbl export --reverse`
- Export to Python modules: `nbl export`
Never edit files in src/ directly - they are auto-generated.
```bash
cd netrun

# Run all tests
uv run pytest src/tests/

# Run specific test modules
uv run pytest src/tests/pool/test_thread.py -v
uv run pytest src/tests/execution_manager/ -v

# Run with output
uv run pytest src/tests/pool/test_multiprocess.py -v -s
```

```bash
cd netrun
uv sync  # Install dependencies
```

CRITICAL — STOP AND ASK if something feels wrong: If at any point — while implementing a feature, writing a test, building a sample project, or doing anything else — you find yourself writing code that works around a bug, feels like a hack, or is not in the spirit of the package, you MUST stop immediately and ask the user for instructions. Do NOT silently work around the problem. Do NOT introduce tech debt. Do NOT write "temporary" fixes. Instead, describe what you encountered, why the current approach feels wrong, and ask how to proceed. This applies to ALL work, not just core library code — tests, examples, and sample projects are equally important.
- No hacks or workarounds: If you find yourself writing code like `time.sleep(0.01)` to "get different timestamps" or similar workarounds, STOP and discuss with the user. There's likely a better design that captures the data properly at the source.
- Capture data at the source: Timestamps, metadata, and context should be captured when events occur, not approximated later. For example, `ctx.print()` should capture the timestamp when called, not when the buffer is flushed.
- Ask before implementing workarounds: If the current design doesn't support what you need, propose a design change rather than working around it.
The netrun-sim library simulates packet flow through a network. It is designed to be used by external code (like netrun) that:
- Defines the graph topology (nodes, ports, edges)
- Handles actual node execution logic
- Manages packet data/payloads
- Responds to network events
The Graph represents the static topology of the network:
- Nodes (`Node`): Processing units with input and output ports
- Ports (`Port`): Connection points on nodes, either input or output
  - Each port has a `slots_spec` defining capacity (`Infinite` or `Finite(n)`)
- Edges (`Edge`): Connections between the output ports of one node and the input ports of another
- Salvo Conditions (`SalvoCondition`): Rules that define when packets can trigger an epoch or be sent
The NetSim represents the runtime state of a network:
- Packets (`Packet`): Units that flow through the network
  - Identified by `PacketID` (ULID)
  - Have a `location` tracking where they are
- Epochs (`Epoch`): Execution instances of a node
  - A single node can have multiple simultaneous epochs
  - Lifecycle: `Startable` → `Running` → `Finished`
- Salvos (`Salvo`): Collections of packets that enter or exit a node together
Packets can be in one of five locations:
```rust
enum PacketLocation {
    Node(EpochID),                 // Inside a running/startable epoch
    InputPort(NodeName, PortName), // Waiting at a node's input port
    OutputPort(EpochID, PortName), // Loaded into an epoch's output port
    Edge(Edge),                    // In transit between nodes
    OutsideNet,                    // External to the network
}
```

Salvo conditions define when packets can trigger actions:
- Input Salvo Conditions: Define when packets at input ports can trigger a new epoch
- Output Salvo Conditions: Define when packets at output ports can be sent out
Each condition has:
- `term`: A boolean expression over port states (empty, full, equals N, etc.)
- `ports`: Which ports' packets are included when the condition triggers
- `max_salvos`: Maximum number of times this condition can trigger (must be 1 for input salvos)
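The first-match semantics can be sketched like this. The representation is invented for illustration (real terms are structured expressions, not lambdas, and the condition names here are hypothetical):

```python
# Toy sketch of first-match salvo-condition checking.
# A term is a predicate over per-port packet counts; conditions are
# checked in order, and the first satisfied one wins.
conditions = [
    {"name": "both_full", "term": lambda p: p["a"] >= 1 and p["b"] >= 1, "ports": ["a", "b"]},
    {"name": "a_only",    "term": lambda p: p["a"] >= 1,                 "ports": ["a"]},
]

def first_triggered(port_counts):
    for cond in conditions:
        if cond["term"](port_counts):
            # these ports' packets would form the salvo for the new epoch
            return cond["name"], cond["ports"]
    return None  # no condition satisfied; nothing triggers

# first_triggered({"a": 1, "b": 0}) == ("a_only", ["a"])
# first_triggered({"a": 1, "b": 2}) == ("both_full", ["a", "b"])
```

Because conditions are checked in order, placing the more specific condition first (as above) keeps triggering deterministic.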
When `RunStep` is called (via `do_action(NetAction::RunStep)` or the convenience method `net.run_step()`), the network automatically:

1. Moves packets from edges to input ports
   - Checks if the destination port has available slots
   - Respects port capacity limits
2. Checks input salvo conditions
   - After each packet arrives at an input port, checks all input salvo conditions
   - First satisfied condition wins (checked in order)
   - Creates a `Startable` epoch with the packets from the specified ports
3. Repeats until blocked
   - Blocked = no packets can move (either no packets on edges, or all destinations are full)
The convenience method `net.run_until_blocked()` repeatedly calls `RunStep` until no more progress can be made.
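The run-until-blocked behavior is a fixed-point loop, sketched below with an invented toy data model (packets as strings, edges as `(source_port, target_port)` tuples, ports as lists with finite capacity); the real simulator also checks salvo conditions at each step.

```python
# Toy fixed-point sketch of run_until_blocked: packets sit on edges, and
# each pass moves them into input ports that still have free slots.
def run_until_blocked(edges, ports, capacity):
    moved_any = True
    while moved_any:              # repeat RunStep until no packet can move
        moved_any = False
        for edge, waiting in edges.items():
            target = edge[1]      # edge = (source_port, target_port)
            while waiting and len(ports[target]) < capacity[target]:
                ports[target].append(waiting.pop(0))
                moved_any = True
    # blocked: nothing left on edges, or every destination is full

edges = {("src.out", "dst.in"): ["p1", "p2", "p3"]}
ports = {"dst.in": []}
capacity = {"dst.in": 2}  # like Finite(2)
run_until_blocked(edges, ports, capacity)
# ports["dst.in"] == ["p1", "p2"]; "p3" stays on the edge (destination full)
```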
Important: All mutations to the NetSim state must go through `do_action(NetAction)`. This ensures:

- All operations return the list of `NetEvent`s that transpired
- External code can track exactly what operations have been performed
- A consistent event-driven architecture
External code controls the network through actions:
| Action | Description |
|---|---|
| `RunStep` | Run automatic packet flow until no progress can be made |
| `CreatePacket(Option<EpochID>)` | Create a new packet (inside an epoch or outside the net) |
| `ConsumePacket(PacketID)` | Remove a packet from the network |
| `DestroyPacket(PacketID)` | Destroy a packet (abnormal removal, e.g., due to error) |
| `StartEpoch(EpochID)` | Transition a Startable epoch to Running |
| `FinishEpoch(EpochID)` | Complete a Running epoch (must be empty of packets) |
| `CancelEpoch(EpochID)` | Cancel an epoch and destroy its packets |
| `CreateEpoch(NodeName, Salvo)` | Manually create an epoch with specified packets |
| `LoadPacketIntoOutputPort(PacketID, PortName)` | Move a packet from inside an epoch to its output port |
| `SendOutputSalvo(EpochID, SalvoConditionName)` | Send packets from output ports onto edges |
| `TransportPacketToLocation(PacketID, PacketLocation)` | Move a packet to any location (with restrictions on running epochs) |
Actions produce events that track what happened:
- `PacketCreated`, `PacketConsumed`, `PacketDestroyed`, `PacketMoved`, `PacketOrphaned`
- `EpochCreated`, `EpochStarted`, `EpochFinished`, `EpochCancelled`
- `InputSalvoTriggered`, `OutputSalvoTriggered`
Actions can be reversed via `net.undo_action(action, events)`, enabling undo/redo workflows. This requires both the original action and the events it produced.
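The undo idea rests on events recording enough state to reverse each action. The sketch below is a toy with invented shapes (plain dicts and string locations; the real API takes `NetAction`s and produces typed `NetEvent`s):

```python
# Toy sketch of action/event undo: a move action produces an event that
# records the old and new locations, which is enough to reverse it.
locations = {"p1": "OutsideNet"}

def do_move(packet_id, new_loc):
    event = {"type": "PacketMoved", "packet": packet_id,
             "from": locations[packet_id], "to": new_loc}
    locations[packet_id] = new_loc
    return [event]  # actions return the events that transpired

def undo_move(events):
    for event in reversed(events):          # replay events in reverse order
        locations[event["packet"]] = event["from"]

events = do_move("p1", "InputPort(dst, in)")
assert locations["p1"] == "InputPort(dst, in)"
undo_move(events)
# locations["p1"] == "OutsideNet" again
```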
- Define the graph: Create nodes with ports and salvo conditions, connect with edges
- Create a NetSim: Initialize runtime state from the graph
- Inject packets: Create packets and place them on edges or in input ports
- Run the network: Call `run_step()` or `run_until_blocked()` to move packets and trigger epochs
- Handle startable epochs: External code decides when to start each epoch
- Simulate node execution: External code "runs" the node logic
- Output results: Load packets into output ports and send output salvos
- Finish epochs: Mark epochs as finished when done
- Repeat: Continue running the network until processing is complete
1. Packets arrive on edges
2. `run_until_blocked()` moves them to input ports
3. Input salvo condition satisfied → Epoch created (`Startable`)
4. External code calls `StartEpoch` → Epoch now `Running`
5. External code "executes" the node (outside this library)
6. External code creates output packets, loads them into output ports
7. External code calls `SendOutputSalvo` → Packets move to edges
8. External code calls `FinishEpoch` → Epoch complete
9. `run_until_blocked()` continues the flow to the next nodes
- Separation of concerns: The library handles flow mechanics; external code handles execution
- Event-driven: All state changes produce events for observability
- Explicit control: External code explicitly starts epochs and sends salvos
- Deterministic: Salvo conditions are checked in order; first match wins
- Action-based mutations: All `NetSim` state changes must go through `do_action(NetAction)` to ensure event tracking and auditability
```bash
cd netrun-sim
cargo build -p netrun-sim
cargo test -p netrun-sim
cargo run -p netrun-sim --example linear_flow
```

```bash
cd netrun-sim/python
uv venv .venv && uv sync
uv run maturin develop
uv run python examples/linear_flow.py
```

See netrun-sim/python/README.md for full Python documentation.