docs/ACTION_PLAN.md (new file, +68 lines)
# Action Plan: Create `ChainhookAggregatorDO`

This plan outlines the steps to create a new Durable Object, `ChainhookAggregatorDO`, for managing and relaying blockchain events from an external chainhook service. It follows modern Cloudflare best practices, including RPC-style communication and SQLite-backed storage, and aligns with the project's existing architecture as described in `docs/START.md`.

## Phase 1: Scaffolding and Configuration

1. **Create New Durable Object File**:

- Create `src/durable-objects/chainhook-aggregator-do.ts`.
- Define the `ChainhookAggregatorDO` class extending `DurableObject`.
- Add a constructor and placeholder RPC methods (`handleEvent`, `getStatus`) and an `alarm()` handler to establish the basic structure.
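The scaffolding above might look like the following sketch. The `StorageLike` and `CtxLike` interfaces are stand-ins for Cloudflare's `DurableObjectStorage` and `DurableObjectState` types so the sketch stays self-contained; the real class would extend `DurableObject` from `cloudflare:workers` and take `(ctx, env)` in its constructor.

```typescript
// Minimal stand-ins for the Cloudflare runtime types (assumption: the real
// class extends DurableObject and receives a DurableObjectState as ctx).
interface StorageLike {
  get<T>(key: string): Promise<T | undefined>;
  put(key: string, value: unknown): Promise<void>;
}

interface CtxLike {
  storage: StorageLike;
  blockConcurrencyWhile(fn: () => Promise<void>): void;
}

export class ChainhookAggregatorDO {
  chainhookId: string | null = null;
  lastBlockHash: string | null = null;

  constructor(private ctx: CtxLike) {
    // Load persisted state before any RPC call is served (see Phase 2, step 4).
    ctx.blockConcurrencyWhile(async () => {
      this.chainhookId = (await ctx.storage.get<string>("chainhook_id")) ?? null;
      this.lastBlockHash = (await ctx.storage.get<string>("last_block_hash")) ?? null;
    });
  }

  // Placeholder RPC method: just log the raw body for now; real logic in Phase 2.
  async handleEvent(rawBody: string): Promise<void> {
    console.log(`raw chainhook payload: ${rawBody}`);
  }

  // Placeholder RPC method: expose in-memory state for debugging.
  getStatus() {
    return { chainhookId: this.chainhookId, lastBlockHash: this.lastBlockHash };
  }

  // Placeholder alarm handler for the periodic health check.
  async alarm(): Promise<void> {}
}
```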

2. **Update Worker Configuration**:

- In `worker-configuration.d.ts`, update the `Env` interface to include:
```typescript
CHAINHOOK_AGGREGATOR_DO: DurableObjectNamespace<ChainhookAggregatorDO>;
HIRO_PLATFORM_API_KEY: string; // For the external service
RELAY_WORKER_URL: string; // The endpoint for the relay logic
```
- In `wrangler.toml` (or `.jsonc`):
- Add a new durable object binding for `CHAINHOOK_AGGREGATOR_DO`.
- Add a migration for `ChainhookAggregatorDO` using `new_sqlite_classes` to enable the SQLite backend.
- Add secrets for `HIRO_PLATFORM_API_KEY` and `RELAY_WORKER_URL`.
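A sketch of the `wrangler.toml` additions, assuming this is the project's first migration tag (adjust `tag` to the next unused one). Secrets are not written into the file:

```toml
[[durable_objects.bindings]]
name = "CHAINHOOK_AGGREGATOR_DO"
class_name = "ChainhookAggregatorDO"

[[migrations]]
tag = "v1"  # placeholder; use the next unused migration tag
new_sqlite_classes = ["ChainhookAggregatorDO"]
```

The secrets would then be set out of band with `npx wrangler secret put HIRO_PLATFORM_API_KEY` and `npx wrangler secret put RELAY_WORKER_URL`.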

3. **Update Entrypoint for Routing (`src/index.ts`)**:
- Export the new `ChainhookAggregatorDO` class.
- Add a new route like `/chainhook-event/[do-name]` to the main `fetch` handler. This will be the public endpoint the external chainhook service calls.
- This route handler will:
1. Get the DO stub via `env.CHAINHOOK_AGGREGATOR_DO.idFromName('[do-name]')`.
2. Call an RPC method on the stub (e.g., `await stub.handleEvent(request)`), forwarding the request.
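The routing step could be sketched as below. The `DOStub`/`DONamespace`/`Env` interfaces are simplified stand-ins for the generated Cloudflare types so the sketch is self-contained:

```typescript
// Stand-in types for the DO namespace binding (assumption for this sketch;
// the real worker would use the generated Env and DurableObjectNamespace).
interface DOStub { handleEvent(body: string): Promise<Response>; }
interface DONamespace {
  idFromName(name: string): string;
  get(id: string): DOStub;
}
interface Env { CHAINHOOK_AGGREGATOR_DO: DONamespace; }

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const match = url.pathname.match(/^\/chainhook-event\/([^/]+)$/);
    if (match && request.method === "POST") {
      // Same name -> same DO instance, so each chainhook gets one aggregator.
      const id = env.CHAINHOOK_AGGREGATOR_DO.idFromName(match[1]);
      const stub = env.CHAINHOOK_AGGREGATOR_DO.get(id);
      // RPC-style call; the DO logs, persists, and forwards the payload.
      return stub.handleEvent(await request.text());
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```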

## Phase 2: Durable Object Core Logic

4. **Implement the `ChainhookAggregatorDO` Constructor**:

- Initialize services like `Logger`.
- Use `ctx.blockConcurrencyWhile()` to load the DO's state (e.g., `chainhook_id`, `last_block_hash`) from `this.ctx.storage`. If state doesn't exist, trigger the initial chainhook creation logic and set the first alarm.

5. **Implement State Management**:

- Define class properties for the DO's state (`chainhook_id`, `last_activity_timestamp`, etc.).
- Use `this.ctx.storage.put()` to persist state to durable storage after it's modified.

6. **Implement RPC Method: `handleEvent(request)`**:

- This method receives the webhook payload from the main worker.
- **Log the raw request body.** This fulfills the requirement to capture the payload structure for future typing.
- Extract the `block_hash` and update `last_activity_timestamp` in storage.
- Forward the payload to the `RELAY_WORKER_URL` using `fetch()`.
- Log the result of the forwarding action.
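Step 6 could be sketched like this. The payload shape is an assumption (the raw body is logged precisely because the real structure hasn't been typed yet), and the forwarding call is injected so the logic runs outside the Workers runtime:

```typescript
// `forward` stands in for fetch(env.RELAY_WORKER_URL, ...) in the real DO.
type Forward = (url: string, body: string) => Promise<{ ok: boolean; status: number }>;

async function handleEvent(
  rawBody: string,
  relayUrl: string,
  storage: Map<string, unknown>, // stand-in for this.ctx.storage
  forward: Forward,
): Promise<boolean> {
  // Log the raw request body to capture the payload structure for future typing.
  console.log(`raw chainhook payload: ${rawBody}`);

  // Assumed shape: a top-level block_hash field (to be confirmed from the logs).
  const payload = JSON.parse(rawBody) as { block_hash?: string };
  if (payload.block_hash) storage.set("last_block_hash", payload.block_hash);
  storage.set("last_activity_timestamp", Date.now());

  const result = await forward(relayUrl, rawBody);
  console.log(
    result.ok
      ? `Forwarded block ${payload.block_hash} to RelayWorker`
      : `Failed to forward block ${payload.block_hash}: ${result.status}`,
  );
  return result.ok;
}
```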

7. **Implement RPC Method: `getStatus()`**:
- Create this method to return the DO's current state from memory for debugging purposes, as planned in `START.md`.

## Phase 3: Lifecycle Management and External API Interaction

8. **Implement `alarm()` Handler**:

   - A scheduled alarm periodically triggers this method.
- Inside, implement the health check: call the Hiro Platform API to get the status of the managed `chainhook_id`.
- If the hook is unhealthy or stale, log the issue and trigger a recreation method.
- Finally, call `this.ctx.storage.setAlarm()` to schedule the next health check.
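The health-check loop might look like this sketch, with the Hiro status call and recreation step injected as callbacks (the exact Platform API calls aren't pinned down in this plan) and a hypothetical five-minute default interval:

```typescript
// Stand-ins: checkHook wraps the Hiro status call, scheduleNext wraps
// this.ctx.storage.setAlarm(Date.now() + delayMs).
async function runHealthCheck(
  state: { chainhookId: string | null },
  checkHook: (id: string) => Promise<{ healthy: boolean }>,
  recreate: () => Promise<string>,
  scheduleNext: (delayMs: number) => void,
  intervalMs = 5 * 60_000, // hypothetical 5-minute cadence
): Promise<void> {
  if (state.chainhookId === null) {
    // No hook yet (first run or lost state): create one.
    state.chainhookId = await recreate();
  } else {
    const status = await checkHook(state.chainhookId);
    if (!status.healthy) {
      console.log(`chainhook ${state.chainhookId} unhealthy, recreating`);
      state.chainhookId = await recreate();
    }
  }
  // Always reschedule, even after a failure, so the loop never dies.
  scheduleNext(intervalMs);
}
```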

9. **Implement Chainhook Creation/Recreation Logic**:
- Create a private method (e.g., `_recreateChainhook()`).
- This method will make an authenticated API call to the Hiro Platform to create a new chainhook, providing the public URL for the webhook (`/chainhook-event/[do-name]`).
- It will store the new `chainhook_id` in `this.ctx.storage`.
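A sketch of the recreation flow. The Hiro Platform URL and request body below are hypothetical placeholders, not the documented API; only the flow itself (authenticated POST, persist the returned id) comes from this plan, and the HTTP call is injected to keep the sketch runnable:

```typescript
// `post` stands in for an authenticated fetch; `save` for ctx.storage.put.
type HttpPost = (
  url: string,
  headers: Record<string, string>,
  body: string,
) => Promise<{ uuid: string }>;

async function recreateChainhook(
  apiKey: string,
  webhookUrl: string, // the public /chainhook-event/[do-name] URL
  post: HttpPost,
  save: (id: string) => Promise<void>,
): Promise<string> {
  const created = await post(
    "https://api.platform.hiro.so/v1/chainhooks", // hypothetical endpoint
    { Authorization: `Bearer ${apiKey}` },        // hypothetical auth scheme
    JSON.stringify({ then_that: { http_post: { url: webhookUrl } } }), // hypothetical body
  );
  await save(created.uuid); // persist the new chainhook_id
  return created.uuid;
}
```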
docs/START.md (new file, +151 lines)
# Chainhook Relay Infrastructure (MVP)

## Overview

This system ensures reliable delivery of block-level webhook events from an external blockchain event source ("Chainhook"). Using Cloudflare Durable Objects and Workers, the architecture provides:

- **Resilience to duplication**
- **Robust handling of chainhook failure**
- **Efficient, scalable event delivery**
- **Future-ready fan-out support to multiple destinations**

## Core Goals

- Receive **every anchored block** event from the chainhook
- Prevent duplicate forwarding via global deduplication (using `block_hash`)
- Deliver payloads to at least **one downstream service**, with future fan-out support
- Handle chainhook creation, health monitoring, and re-creation in a modular way
- Scale horizontally as new API keys and use cases are added

## Architecture Summary

This will be a new DO specifically for handling chainhooks.

External Chainhook Service → ChainhookDO (per Hiro Platform API key) → RelayWorker → Destination Webhook(s)

- ChainhookDO handles:
- Chainhook lifecycle (auth, create, monitor, recreate)
- Receiving webhook payloads from created chainhook
- Forwarding payloads to RelayWorker for processing
- RelayWorker handles:
- Deduplication via KV
- Forwarding to destination(s)
- KV Store handles:
- Global deduplication keyed by `block_hash`

## Components

### 1. Durable Object: `ChainhookDO`

#### Responsibilities

- **Initialize and manage** the chainhook (via external API)
- **Receive POSTs** from the chainhook
- **Forward payloads** to `RelayWorker`
- **Monitor health** and recreate chainhook if it's stale or failed

#### State Stored (per DO instance)

- `chainhook_id`
- Last known `block_hash`
- Last activity timestamp

#### Endpoints

- `POST /event` – handles incoming block payloads
- `GET /status` – returns internal DO state (for debugging)

#### Periodic Logic (`alarm()` or scheduler)

- Check chainhook status via external API
- Compare expected vs. actual block delivery timing
- Recreate hook if needed

---

### 2. Cloudflare Worker: `RelayWorker`

#### Responsibilities

- Receive block payload from DO
- Extract `block_hash`
- Check for deduplication in KV store
- If new:
  - Store hash in KV
  - Forward payload to downstream endpoint
  - Log event and stats in KV
- If duplicate:
  - Drop payload silently
  - Log event and stats in KV
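The decision path above can be sketched as follows, with a `Map`-backed stand-in for the `KV_BLOCKS` namespace and the downstream `fetch(DESTINATION_URL, ...)` injected as a callback:

```typescript
// Stand-in for a Cloudflare KV namespace binding.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

async function relayBlock(
  blockHash: string,
  payload: string,
  kvBlocks: KVLike,
  deliver: (payload: string) => Promise<boolean>, // stands in for the downstream POST
): Promise<"forwarded" | "duplicate" | "failed"> {
  console.log(`Received block: ${blockHash}`);
  const key = `blk_${blockHash}`;
  if (await kvBlocks.get(key)) {
    // Duplicate: drop silently (still logged for stats).
    console.log(`Duplicate block: ${blockHash}`);
    return "duplicate";
  }
  // Store the hash first, as specified above, then forward.
  await kvBlocks.put(key, JSON.stringify({ status: "delivered", ts: Date.now() }));
  const ok = await deliver(payload);
  if (!ok) console.log(`Failed to deliver block ${blockHash}`);
  return ok ? "forwarded" : "failed";
}
```

Note that storing the hash before forwarding means a block whose delivery fails will be treated as a duplicate if re-sent; the retry/queueing item under Future Enhancements would address that.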

#### KV Schema

Namespace: `KV_BLOCKS`

- **Key**: block hash (e.g., `blk_0xabc123...`)
- **Value**: typed object including a `"delivered"` status, a timestamp, and other helpful info
- **TTL**: infinite; entries can be overwritten if an update is ever needed, though none is expected

Namespace: `KV_LOGS`

- **Key**: ISO 8601 timestamp plus a unique suffix (lexicographically sortable like `YYYYMMDD`, but collision-free)
- **Value**: typed object representing the outcome (e.g. `SUCCESS`, `ERROR`) with detail where appropriate
- **TTL**: infinite; entries can be bundled up and archived to R2 in a later phase
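One possible shape for these typed KV values; the field names here are assumptions (the schema above only asks for a `"delivered"` marker, a timestamp, and helpful info), and `logKey` illustrates the sortable-but-unique key idea:

```typescript
// Value stored in KV_BLOCKS under `blk_<block_hash>` (field names assumed).
export interface BlockRecord {
  status: "delivered";
  deliveredAt: string; // ISO 8601 timestamp
  destination: string; // where the payload was forwarded
}

export type LogOutcome = "SUCCESS" | "ERROR";

// Value stored in KV_LOGS (field names assumed).
export interface LogEntry {
  outcome: LogOutcome;
  blockHash?: string;
  detail?: string; // populated for ERROR entries
}

// KV_LOGS key: ISO timestamp plus a short random suffix, so entries sort
// chronologically and two logs in the same millisecond never collide.
export function logKey(now: Date = new Date()): string {
  return `${now.toISOString()}_${Math.random().toString(36).slice(2, 8)}`;
}
```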

#### Environment Bindings

- `KV_BLOCKS` – KV namespace for deduplication of blocks
- `KV_LOGS` - KV namespace for any logged messages
- `DESTINATION_URL` – initial destination for payloads (delivered via POST)

## Logging

Define a consistent log object structure, and export TypeScript types for every shape so they are easy to reference.

We will use a downstream UI, separate from the main project here, to read and interpret the data from KV.

### `ChainhookDO`

| Event | Log Message |
| ---------------- | ----------------------------------------------- |
| Startup | `"DO started for API key: {key}"` |
| Hook creation | `"Created chainhook: {id}"` |
| Incoming webhook | `"Received block: {block_hash}"` |
| Forwarded | `"Forwarded block {block_hash} to RelayWorker"` |
| Health check | `"Checking chainhook health"` |
| Recreation | `"Recreated chainhook for {key}"` |
| Error | `"Error handling block {block_hash}: {error}"` |

### `RelayWorker`

| Event | Log Message |
| -------------- | -------------------------------------------------- |
| Incoming | `"Received block: {block_hash}"` |
| Duplicate | `"Duplicate block: {block_hash}"` |
| Forwarded | `"Forwarded block {block_hash} to {destination}"` |
| Failed forward | `"Failed to deliver block {block_hash}: {status}"` |
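A small helper could produce exactly the message strings in the table above, keeping the format consistent across the worker:

```typescript
// One variant per row of the RelayWorker log table.
type RelayEvent =
  | { kind: "incoming"; blockHash: string }
  | { kind: "duplicate"; blockHash: string }
  | { kind: "forwarded"; blockHash: string; destination: string }
  | { kind: "failed"; blockHash: string; status: number };

function relayLogMessage(e: RelayEvent): string {
  switch (e.kind) {
    case "incoming":
      return `Received block: ${e.blockHash}`;
    case "duplicate":
      return `Duplicate block: ${e.blockHash}`;
    case "forwarded":
      return `Forwarded block ${e.blockHash} to ${e.destination}`;
    case "failed":
      return `Failed to deliver block ${e.blockHash}: ${e.status}`;
  }
}
```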

## Future Enhancements

- Fan-out to multiple destinations
- Retry and queueing for failed deliveries
- Chain reorg detection and rollback handling
- Signature verification of incoming payloads
- Dashboard for chainhook status and logs

## Next Steps

This document forms the foundation for the implementation task plan.

Tasks will include:

- [ ] DO scaffold with fetch + alarm
- [ ] Chainhook API integration
- [ ] Worker with KV dedup and forward logic
- [ ] Logging utility functions
- [ ] Deployment scripts + testing

## Notes

- Each block has a globally unique `block_hash`, making it ideal for use as the KV deduplication key.
- No payload filtering is done at this stage — every anchored block is delivered.