Architecture
Technical architecture and implementation details of the CoW Protocol Watch Tower.

High-level architecture

The Watch Tower follows an event-driven architecture with three main components:

- Event Monitor - Listens for blockchain events and indexes new conditional orders
- Registry/Storage - Maintains state of all active conditional orders using LevelDB
- Order Poller - Continuously evaluates conditional orders and submits to OrderBook API
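As a rough sketch of how the three components fit together (the interface names here are illustrative, not the project's actual API):

```typescript
// Illustrative sketch of the three components; the real watch tower's
// types differ, but the data flow is the same.
interface ChainEvent {
  owner: string;
  params: string; // ABI-encoded conditional order parameters
}

interface EventMonitor {
  // Returns indexed events for a given block
  pollBlock(blockNumber: number): ChainEvent[];
}

interface OrderStore {
  add(owner: string, order: ChainEvent): void;
  allOrders(): ChainEvent[];
}

interface OrderPoller {
  // Evaluates each conditional order; returns orders ready for submission
  evaluate(orders: ChainEvent[]): ChainEvent[];
}

// The event-driven loop: the monitor feeds the store, the poller drains it.
function processBlock(
  block: number,
  monitor: EventMonitor,
  store: OrderStore,
  poller: OrderPoller
): ChainEvent[] {
  for (const ev of monitor.pollBlock(block)) {
    store.add(ev.owner, ev);
  }
  return poller.evaluate(store.allOrders());
}
```
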
Event monitoring flow
The watch tower monitors the ComposableCoW contract for two critical events:

ConditionalOrderCreated event
Emitted when a single conditional order is created by a user:

- Watch tower detects the event via `eth_getLogs` RPC calls
- Decodes the event data to extract the owner address and order parameters
- Validates the order against filter policies (if configured)
- Stores the order in the registry indexed by owner
- Increments metrics for tracking
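The handling steps above can be sketched as follows (the filter-policy and registry shapes here are assumptions for illustration, not the watch tower's exact types):

```typescript
// Hedged sketch of handling a ConditionalOrderCreated event.
type FilterAction = "ACCEPT" | "DROP";

interface OrderParams {
  handler: string;     // address of the order-type handler contract
  salt: string;        // bytes32
  staticInput: string; // bytes
}

interface CreatedEvent {
  owner: string;
  params: OrderParams;
}

type FilterPolicy = (owner: string, params: OrderParams) => FilterAction;
type OrdersByOwner = Map<string, Set<OrderParams>>;

function onConditionalOrderCreated(
  event: CreatedEvent,
  registry: OrdersByOwner,
  policy?: FilterPolicy
): boolean {
  // 1. Apply the filter policy, if one is configured
  if (policy && policy(event.owner, event.params) === "DROP") return false;
  // 2. Index the order by owner
  const orders = registry.get(event.owner) ?? new Set<OrderParams>();
  orders.add(event.params);
  registry.set(event.owner, orders);
  // 3. Metrics counters would be incremented here
  return true;
}
```
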
The watch tower uses the event topic hash to efficiently filter logs: `keccak256("ConditionalOrderCreated(address,(address,bytes32,bytes))")`.

MerkleRootSet event
Emitted when a batch of conditional orders (a merkle tree) is set for a safe:

- Detects the merkle root event
- Flushes any existing orders for the owner with different merkle roots
- Decodes the proof data containing multiple orders
- Extracts each order’s parameters and merkle path
- Stores all orders with their merkle proofs
Merkle roots allow users to efficiently set multiple conditional orders in a single transaction, reducing gas costs. The watch tower reconstructs each individual order from the merkle proof.
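A sketch of the MerkleRootSet bookkeeping described above (the data shapes are illustrative, not the project's exact schema):

```typescript
// Orders reconstructed from the proof data emitted with the merkle root.
interface ProofEntry {
  merkleRoot: string;
  proof: string[]; // merkle path for this order
  params: string;  // ABI-encoded order parameters
}

interface OwnedOrder {
  merkleRoot: string | null; // null for single (non-batched) orders
  params: string;
  proof?: string[];
}

// When a new root is set for an owner, flush orders that belong to a stale
// root, then store each order reconstructed from the proof data.
function onMerkleRootSet(
  owner: string,
  newRoot: string,
  entries: ProofEntry[],
  registry: Map<string, OwnedOrder[]>
): void {
  const kept = (registry.get(owner) ?? []).filter(
    (o) => o.merkleRoot === null || o.merkleRoot === newRoot
  );
  for (const e of entries) {
    kept.push({ merkleRoot: newRoot, params: e.params, proof: e.proof });
  }
  registry.set(owner, kept);
}
```
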
Registry and storage architecture
The watch tower uses LevelDB as its persistent storage layer, chosen for its atomic batch writes and simplicity as a key-value store.

Database implementation
Storage schema
The watch tower maintains the following keys in LevelDB:

| Key | Description | Data Structure |
|---|---|---|
| `LAST_PROCESSED_BLOCK_{chainId}` | Last block processed by the watch tower | `{number, timestamp, hash}` |
| `CONDITIONAL_ORDER_REGISTRY_{chainId}` | Map of all active conditional orders by owner | `Map<Owner, Set<ConditionalOrder>>` |
| `CONDITIONAL_ORDER_REGISTRY_VERSION_{chainId}` | Schema version for migrations | `number` |
| `LAST_NOTIFIED_ERROR_{chainId}` | Timestamp of last error notification | `Date` (ISO string) |
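Per-chain key construction can be sketched directly from the table above (helper names are illustrative):

```typescript
// Build the per-chain LevelDB keys listed in the storage schema.
const lastProcessedBlockKey = (chainId: number) =>
  `LAST_PROCESSED_BLOCK_${chainId}`;
const registryKey = (chainId: number) =>
  `CONDITIONAL_ORDER_REGISTRY_${chainId}`;
const registryVersionKey = (chainId: number) =>
  `CONDITIONAL_ORDER_REGISTRY_VERSION_${chainId}`;
const lastNotifiedErrorKey = (chainId: number) =>
  `LAST_NOTIFIED_ERROR_${chainId}`;

// Value stored under LAST_PROCESSED_BLOCK_{chainId}
interface LastProcessedBlock {
  number: number;
  timestamp: number;
  hash: string;
}

const encodeBlock = (b: LastProcessedBlock) => JSON.stringify(b);
```
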
Registry data model
The `Registry` class manages the in-memory and persistent state. Each `ConditionalOrder` contains the order's on-chain parameters, its merkle proof (if any), and the discrete orders already submitted for it.
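A hedged sketch of these data structures (field names are illustrative approximations, not the project's exact schema):

```typescript
// Approximate shape of the registry data model.
type OrderUid = string;
type OrderStatus = "SUBMITTED" | "FILLED" | "EXPIRED" | "CANCELLED";

interface ConditionalOrder {
  id: string; // unique id for this conditional order
  params: {   // on-chain ConditionalOrderParams
    handler: string;
    salt: string;
    staticInput: string;
  };
  proof: string[] | null;             // merkle proof, or null for single orders
  orders: Map<OrderUid, OrderStatus>; // discrete orders already submitted
}

class Registry {
  // In-memory view, persisted to LevelDB when each block is processed
  readonly ownerOrders = new Map<string, Set<ConditionalOrder>>();

  numOrders(): number {
    let n = 0;
    for (const set of this.ownerOrders.values()) n += set.size;
    return n;
  }
}
```
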
Atomic writes
All database writes are batched for atomicity. If a write fails, the watch tower throws an error and exits. On restart, it re-processes from the last successfully indexed block, ensuring eventual consistency with the blockchain.
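A dependency-free sketch of the all-or-nothing batching semantics (the real watch tower gets this from LevelDB's batch API; the in-memory store here is only for illustration):

```typescript
// All-or-nothing batch over an in-memory store: either every operation
// applies, or a failure leaves the store untouched.
type Op =
  | { type: "put"; key: string; value: string }
  | { type: "del"; key: string };

function applyBatch(store: Map<string, string>, ops: Op[]): void {
  // Stage against a copy so a failing op aborts the whole batch
  const staged = new Map(store);
  for (const op of ops) {
    if (op.type === "put") {
      staged.set(op.key, op.value);
    } else if (!staged.delete(op.key)) {
      throw new Error(`missing key: ${op.key}`);
    }
  }
  // Commit: swap in the staged state in one step
  store.clear();
  for (const [k, v] of staged) store.set(k, v);
}
```
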
Order polling and submission
After indexing conditional orders, the watch tower continuously polls them to check whether execution conditions are met.

Polling process
For each block processed, the watch tower:

- Iterates through all registered owners and their conditional orders
- Calls the order's `poll` method from the Composable SDK
- Evaluates the returned `PollResult`
- Submits discrete orders to the OrderBook API if conditions are satisfied
By default, the watch tower polls using the current processing block (not latest), since it indexes every block. This ensures consistent state and prevents issues with block reorgs.
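The polling pass can be sketched like this (the `PollResult` variants and `Pollable` interface are simplified stand-ins for the Composable SDK's actual types):

```typescript
// Simplified poll result; the SDK's real result type has more variants.
type PollResult =
  | { result: "SUCCESS"; order: { uid: string } }
  | { result: "TRY_NEXT_BLOCK" }
  | { result: "DONT_TRY_AGAIN"; reason: string };

interface Pollable {
  poll(blockNumber: number): PollResult;
}

function pollAll(
  registry: Map<string, Pollable[]>,
  blockNumber: number,
  submit: (uid: string) => void
): string[] {
  const submitted: string[] = [];
  for (const [, orders] of registry) {
    for (const order of orders) {
      const res = order.poll(blockNumber);
      if (res.result === "SUCCESS") {
        // Conditions met: post the discrete order to the OrderBook API
        submit(res.order.uid);
        submitted.push(res.order.uid);
      }
      // TRY_NEXT_BLOCK: re-evaluate on the next processed block
      // DONT_TRY_AGAIN: the order can be dropped from the registry
    }
  }
  return submitted;
}
```
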
Posting to OrderBook API
When a conditional order's conditions are met, the watch tower:

- Extracts the discrete order parameters from the `PollResult`
- Signs the order (if required by the order type)
- Posts to the CoW Protocol OrderBook API using the SDK
- Records the submitted order UID in the registry
- Updates metrics
Block processing flow
The watch tower processes blocks in two phases:

Phase 1: Warm-up (sync)
When starting, the watch tower syncs from the last processed block to the current chain tip:

- Paging - Fetches blocks in chunks (default 5000) to avoid RPC limits
- Ordered processing - Processes blocks sequentially to maintain state consistency
- Resume capability - Starts from last processed block on restart
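The paged sync boils down to splitting the historical range into chunks; a minimal sketch:

```typescript
// Split a historical block range into pages of at most pageSize blocks.
// pageSize <= 0 means "fetch the whole range in one request".
function pageRanges(
  from: number,
  to: number,
  pageSize: number
): Array<[number, number]> {
  if (pageSize <= 0) return [[from, to]];
  const pages: Array<[number, number]> = [];
  for (let start = from; start <= to; start += pageSize) {
    pages.push([start, Math.min(start + pageSize - 1, to)]);
  }
  return pages;
}
```

Each returned pair would be passed as the `fromBlock`/`toBlock` bounds of an event query.
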
The `pageSize` option defaults to 5000 blocks (Infura's limit). If using your own RPC node, set `pageSize: 0` to fetch all blocks in one request for faster syncing.

Phase 2: Real-time monitoring
Once synced, the watch tower subscribes to new blocks and processes them as they arrive. When it detects a chain reorganization, it:

- Increments the reorg counter metric
- Records the reorg depth
- Re-processes the reorganized block(s)
- Updates the registry with the canonical chain state
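One way to detect a reorganization is to compare the incoming block's parent hash with the stored hash of the last processed block (a sketch; field names are illustrative):

```typescript
// Minimal reorg check: the stored hash comes from the persisted
// LAST_PROCESSED_BLOCK_{chainId} record.
interface BlockRef {
  number: number;
  hash: string;
  parentHash: string;
}

function isReorg(last: BlockRef | null, incoming: BlockRef): boolean {
  if (!last) return false; // nothing indexed yet
  // A direct child whose parentHash doesn't match our stored hash means
  // the chain we indexed is no longer canonical.
  return incoming.number === last.number + 1 && incoming.parentHash !== last.hash;
}
```
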
Block processing pipeline
For each block, the watch tower executes:

- Process new order events - Index any new conditional orders
- Poll existing orders - Check all registered orders for execution conditions
- Submit discrete orders - Post eligible orders to OrderBook API
- Persist state - Atomically write registry and last processed block
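The four steps, including the `processEveryNumBlocks` throttle, can be sketched as (function and option names here follow the text above; the step callbacks are hypothetical):

```typescript
// Throttle: poll only every N blocks (default 1 = every block).
function shouldPoll(blockNumber: number, processEveryNumBlocks: number): boolean {
  return blockNumber % Math.max(1, processEveryNumBlocks) === 0;
}

async function runPipeline(
  blockNumber: number,
  opts: { processEveryNumBlocks: number },
  steps: {
    indexNewOrders: (block: number) => Promise<void>;
    pollOrders: (block: number) => Promise<void>; // poll + submit
    persistState: (block: number) => Promise<void>;
  }
): Promise<void> {
  await steps.indexNewOrders(blockNumber); // 1. index new order events
  if (shouldPoll(blockNumber, opts.processEveryNumBlocks)) {
    await steps.pollOrders(blockNumber);   // 2-3. poll and submit
  }
  await steps.persistState(blockNumber);   // 4. atomic persist
}
```
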
The `processEveryNumBlocks` option allows you to reduce RPC calls by only polling orders every N blocks. The default is 1 (poll every block).

Multi-chain support
The watch tower can monitor multiple chains simultaneously using parallel chain contexts. Each chain gets its own:

- Independent provider connection (HTTP or WebSocket)
- Separate registry namespace in LevelDB
- Dedicated OrderBook API instance
- Isolated metrics by chain ID
Monitoring and observability
The watch tower exposes comprehensive monitoring capabilities:

API endpoints

By default on port 8080:

- `GET /` - Root endpoint (returns "Moooo!")
- `GET /api/version` - Version and build information
- `GET /config` - Current configuration
- `GET /api/dump/:chainId` - Dump registry for a chain
- `GET /health` - Health check for all chains
- `GET /metrics` - Prometheus metrics
Prometheus metrics
Key metrics are exposed at `/metrics` in Prometheus format.

Logging

Structured logging with configurable levels via the `LOG_LEVEL` environment variable.
Health checks
The watch tower maintains a health status for each chain.

Watchdog
A watchdog thread monitors for stalled chains. If running in Kubernetes, the watch tower sets sync status to UNKNOWN instead of exiting, allowing the pod to remain running for debugging. Outside Kubernetes, it exits immediately to trigger a restart.
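The decision logic described above can be sketched as a pure function (names and the threshold parameter are illustrative):

```typescript
// Watchdog sketch: decide what to do when a chain hasn't processed a
// block within the stall threshold.
type SyncStatus = "SYNCED" | "UNKNOWN";

function watchdogAction(
  lastBlockAtMs: number,
  nowMs: number,
  stallThresholdMs: number,
  inKubernetes: boolean
): { status: SyncStatus; exit: boolean } {
  const stalled = nowMs - lastBlockAtMs > stallThresholdMs;
  if (!stalled) return { status: "SYNCED", exit: false };
  // In Kubernetes: keep the pod alive for debugging.
  // Otherwise: exit so the supervisor restarts the process.
  return { status: "UNKNOWN", exit: !inKubernetes };
}
```
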
Error handling and resilience
Atomic operations
All state changes are atomic. If any operation fails during block processing:

- The error is logged and metrics updated
- Database writes are aborted (batch not committed)
- Watch tower exits with error code
- On restart, re-processes from last successful block
Retry logic
OrderBook API calls include exponential backoff.

Filter policies
Optional filter policies allow dropping problematic orders before indexing.

Performance considerations
RPC optimization
- Paging - Configurable `pageSize` to batch historical event queries
- Block skipping - `processEveryNumBlocks` option to reduce polling frequency
- Address filtering - Optional `owners` config to only monitor specific addresses
- Connection type - WebSocket providers reduce latency vs HTTP polling
Storage optimization
- Expired orders - Automatically removed from registry to conserve space
- Cancelled orders - Detected and removed during processing
- JSON encoding - Uses custom serializers for Map/Set types
Concurrency
- Multiple chains processed in parallel
- Events within a block processed sequentially for consistency
- Metrics and logging are thread-safe
For production deployments, use a WebSocket RPC connection for lower latency and reduced overhead compared to HTTP polling.