From fca6fcb9ced23688de36d1d87040fbb1297b3aca Mon Sep 17 00:00:00 2001 From: Jason Schrader Date: Tue, 29 Jul 2025 17:47:45 -0700 Subject: [PATCH 1/8] feat: fill in starting doc for design --- docs/START.md | 174 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 174 insertions(+) create mode 100644 docs/START.md diff --git a/docs/START.md b/docs/START.md new file mode 100644 index 0000000..18074dc --- /dev/null +++ b/docs/START.md @@ -0,0 +1,174 @@ +# Chainhook Relay Infrastructure (MVP) + +## Overview + +This system ensures reliable delivery of block-level webhook events from an external blockchain event source ("Chainhook"). Using Cloudflare Durable Objects and Workers, the architecture provides: + +* **Resilience to duplication** +* **Robust handling of chainhook failure** +* **Efficient, scalable event delivery** +* **Future-ready fan-out support to multiple destinations** + +--- + +## Core Goals + +* Receive **every anchored block** event from the chainhook +* Prevent duplicate forwarding via global deduplication (using `block_hash`) +* Deliver payloads to at least **one downstream service**, with future fan-out support +* Handle chainhook creation, health monitoring, and re-creation in a modular way +* Scale horizontally as new API keys and use cases are added + +--- + +## Architecture Summary + +External Chainhook Service → ChainhookDO (per API key) → RelayWorker → Destination Webhook(s) + +* ChainhookDO handles: + + * Chainhook lifecycle (create, monitor, recreate) + * Receiving webhook payloads + * Forwarding to Worker +* RelayWorker handles: + + * Deduplication via KV + * Forwarding to destinations +* KV Store handles: + + * Global deduplication keyed by `block_hash` + +--- + +## Components + +### 1. 
Durable Object: `ChainhookDO` + +#### Responsibilities + +* **Initialize and manage** the chainhook (via external API) +* **Receive POSTs** from the chainhook +* **Forward payloads** to `RelayWorker` +* **Monitor health** and recreate chainhook if it's stale or failed + +#### State Stored (per DO instance) + +* `chainhook_id` +* Last known `block_hash` (optional) +* Last activity timestamp + +#### Endpoints + +* `POST /event` – handles incoming block payloads +* `GET /status` – returns internal DO state (for debugging) + +#### Periodic Logic (`alarm()` or scheduler) + +* Check chainhook status via external API +* Compare expected vs. actual block delivery timing +* Recreate hook if needed + +--- + +### 2. Cloudflare Worker: `RelayWorker` + +#### Responsibilities + +* Receive block payload from DO +* Extract `block_hash` +* Check for deduplication in KV store +* If new: + + * Store hash in KV + * Forward payload to downstream endpoint +* If duplicate: + + * Drop silently (or log) + +#### KV Schema + +* **Key**: block hash (e.g., `blk_0xabc123...`) +* **Value**: `"delivered"` or timestamp (optional) +* **TTL**: \~24–48 hours (adjust for expected reorgs) + +#### Environment Bindings + +* `KV_BLOCKS` – KV namespace for deduplication +* `DESTINATION_URL` – initial destination for payloads + +--- + +### 3. KV Store: `KV_BLOCKS` + +Used globally to deduplicate block events across all DOs and Workers. 
+ +| Key | Value | TTL | +| ------------ | ------------- | -------- | +| `block_hash` | `"delivered"` | 1–2 days | + +--- + +## Logging + +### `ChainhookDO` + +| Event | Log Message | +| ---------------- | ----------------------------------------------- | +| Startup | `"DO started for API key: {key}"` | +| Hook creation | `"Created chainhook: {id}"` | +| Incoming webhook | `"Received block: {block_hash}"` | +| Forwarded | `"Forwarded block {block_hash} to RelayWorker"` | +| Health check | `"Checking chainhook health"` | +| Recreation | `"Recreated chainhook for {key}"` | +| Error | `"Error handling block {block_hash}: {error}"` | + +### `RelayWorker` + +| Event | Log Message | +| -------------- | -------------------------------------------------- | +| Incoming | `"Received block: {block_hash}"` | +| Duplicate | `"Duplicate block: {block_hash}"` | +| Forwarded | `"Forwarded block {block_hash} to {destination}"` | +| Failed forward | `"Failed to deliver block {block_hash}: {status}"` | + +--- + +## Scaling Plan + +| Component | Scales How? | Notes | +| ------------- | ------------------------------ | -------------------------------------- | +| `ChainhookDO` | Horizontally, 1 per API key | Clean sharding model | +| `RelayWorker` | Cloudflare handles concurrency | Stateless, fast | +| `KV Store` | Global, distributed | Efficient for deduplication | +| Destinations | Must scale with traffic | Fan-out and retry queues planned later | + +--- + +## Future Enhancements + +* Fan-out to multiple destinations +* Retry and queueing for failed deliveries +* Chain reorg detection and rollback handling +* Signature verification of incoming payloads +* Dashboard for chainhook status and logs + +--- + +## Next Steps + +This document forms the foundation for the implementation task plan. 
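As a concrete starting point for the Worker task below, the dedup-and-forward core described under Components might be sketched like this (a sketch only: `KVLike` narrows the KV binding to the two calls used, and the payload shape and `forward` callback are assumptions from this plan, not a fixed API):

```typescript
// Sketch of RelayWorker's dedup-and-forward path. KVLike stands in for
// the KV_BLOCKS binding (only get/put are modeled here).
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

type RelayResult = "forwarded" | "duplicate";

async function relayBlock(
  payload: { block_hash: string },
  kv: KVLike,
  forward: (payload: unknown) => Promise<void>,
): Promise<RelayResult> {
  const key = `blk_${payload.block_hash}`;
  // Global dedup check: KV_BLOCKS is shared across all DOs and Workers.
  if ((await kv.get(key)) !== null) {
    return "duplicate"; // drop silently; a log entry could be written here
  }
  // Record the hash, then forward to the destination.
  await kv.put(key, JSON.stringify({ status: "delivered", at: new Date().toISOString() }));
  await forward(payload);
  return "forwarded";
}
```

Note the ordering tradeoff: writing the hash before forwarding means a failed delivery is not retried as-is, which is one reason retry and queueing appear under Future Enhancements.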
+ +Tasks will include: + +* [ ] DO scaffold with fetch + alarm +* [ ] Chainhook API integration +* [ ] Worker with KV dedup and forward logic +* [ ] Logging utility functions +* [ ] Deployment scripts + testing + +--- + +## Notes + +* Each block has a globally unique `block_hash`, making it ideal for use as the KV deduplication key. +* No payload filtering is done at this stage — every anchored block is delivered. From 1c339559222ff96412afbf4b18ee0f0703f477ca Mon Sep 17 00:00:00 2001 From: Jason Schrader Date: Wed, 30 Jul 2025 11:56:00 -0700 Subject: [PATCH 2/8] fix: add some manual human-led revisions --- docs/START.md | 161 ++++++++++++++++++++++---------------------------- 1 file changed, 69 insertions(+), 92 deletions(-) diff --git a/docs/START.md b/docs/START.md index 18074dc..042e492 100644 --- a/docs/START.md +++ b/docs/START.md @@ -4,41 +4,34 @@ This system ensures reliable delivery of block-level webhook events from an external blockchain event source ("Chainhook"). Using Cloudflare Durable Objects and Workers, the architecture provides: -* **Resilience to duplication** -* **Robust handling of chainhook failure** -* **Efficient, scalable event delivery** -* **Future-ready fan-out support to multiple destinations** - ---- +- **Resilience to duplication** +- **Robust handling of chainhook failure** +- **Efficient, scalable event delivery** +- **Future-ready fan-out support to multiple destinations** ## Core Goals -* Receive **every anchored block** event from the chainhook -* Prevent duplicate forwarding via global deduplication (using `block_hash`) -* Deliver payloads to at least **one downstream service**, with future fan-out support -* Handle chainhook creation, health monitoring, and re-creation in a modular way -* Scale horizontally as new API keys and use cases are added - ---- +- Receive **every anchored block** event from the chainhook +- Prevent duplicate forwarding via global deduplication (using `block_hash`) +- Deliver payloads to at least **one 
downstream service**, with future fan-out support +- Handle chainhook creation, health monitoring, and re-creation in a modular way +- Scale horizontally as new API keys and use cases are added ## Architecture Summary -External Chainhook Service → ChainhookDO (per API key) → RelayWorker → Destination Webhook(s) +This will be a new DO specifically for handling chainhooks. -* ChainhookDO handles: +External Chainhook Service → ChainhookDO (per Hiro Platform API key) → RelayWorker → Destination Webhook(s) - * Chainhook lifecycle (create, monitor, recreate) - * Receiving webhook payloads - * Forwarding to Worker -* RelayWorker handles: - - * Deduplication via KV - * Forwarding to destinations -* KV Store handles: - - * Global deduplication keyed by `block_hash` - ---- +- ChainhookDO handles: + - Chainhook lifecycle (auth, create, monitor, recreate) + - Receiving webhook payloads from created chainhook + - Forwarding payloads to RelayWorker for processing +- RelayWorker handles: + - Deduplication via KV + - Forwarding to destination(s) +- KV Store handles: + - Global deduplication keyed by `block_hash` ## Components @@ -46,27 +39,27 @@ External Chainhook Service → ChainhookDO (per API key) → RelayWorker → Des #### Responsibilities -* **Initialize and manage** the chainhook (via external API) -* **Receive POSTs** from the chainhook -* **Forward payloads** to `RelayWorker` -* **Monitor health** and recreate chainhook if it's stale or failed +- **Initialize and manage** the chainhook (via external API) +- **Receive POSTs** from the chainhook +- **Forward payloads** to `RelayWorker` +- **Monitor health** and recreate chainhook if it's stale or failed #### State Stored (per DO instance) -* `chainhook_id` -* Last known `block_hash` (optional) -* Last activity timestamp +- `chainhook_id` +- Last known `block_hash` +- Last activity timestamp #### Endpoints -* `POST /event` – handles incoming block payloads -* `GET /status` – returns internal DO state (for debugging) +- `POST 
/event` – handles incoming block payloads +- `GET /status` – returns internal DO state (for debugging) #### Periodic Logic (`alarm()` or scheduler) -* Check chainhook status via external API -* Compare expected vs. actual block delivery timing -* Recreate hook if needed +- Check chainhook status via external API +- Compare expected vs. actual block delivery timing +- Recreate hook if needed --- @@ -74,41 +67,42 @@ External Chainhook Service → ChainhookDO (per API key) → RelayWorker → Des #### Responsibilities -* Receive block payload from DO -* Extract `block_hash` -* Check for deduplication in KV store -* If new: - - * Store hash in KV - * Forward payload to downstream endpoint -* If duplicate: - - * Drop silently (or log) +- Receive block payload from DO +- Extract `block_hash` +- Check for deduplication in KV store +- If new: + - Store hash in KV + - Forward payload to downstream endpoint + - log event and stats in KV +- If duplicate: + - Drop payload silently + - log event and stats in KV #### KV Schema -* **Key**: block hash (e.g., `blk_0xabc123...`) -* **Value**: `"delivered"` or timestamp (optional) -* **TTL**: \~24–48 hours (adjust for expected reorgs) +Namespace: `KV_BLOCKS` -#### Environment Bindings +- **Key**: block hash (e.g., `blk_0xabc123...`) +- **Value**: typed object that includes `"delivered"`, timestamp, helpful info +- **TTL**: Infinite, if we need to update can overwrite but not expecting to -* `KV_BLOCKS` – KV namespace for deduplication -* `DESTINATION_URL` – initial destination for payloads +Namespace: `KV_LOGS` ---- +- **Key**: ISO timestamp, something that auto sorts itself like YYYYMMDD but more unique +- **Value**: typed object that represents a possible outcome e.g. SUCCESS, ERROR with detail where appropriate +- **TTL**: Infinite, can bundle up and store in R2 in later phase -### 3. KV Store: `KV_BLOCKS` +#### Environment Bindings -Used globally to deduplicate block events across all DOs and Workers. 
+- `KV_BLOCKS` – KV namespace for deduplication of blocks +- `KV_LOGS` - KV namespace for any logged messages +- `DESTINATION_URL` – initial destination for payloads (delivered via POST) -| Key | Value | TTL | -| ------------ | ------------- | -------- | -| `block_hash` | `"delivered"` | 1–2 days | +## Logging ---- +Create a consistent object structure and make sure everything has exported TypeScript types for easy reference. -## Logging +We will use a downstream UI to read and interpret the data from KV separate to the main project here. ### `ChainhookDO` @@ -131,28 +125,13 @@ Used globally to deduplicate block events across all DOs and Workers. | Forwarded | `"Forwarded block {block_hash} to {destination}"` | | Failed forward | `"Failed to deliver block {block_hash}: {status}"` | ---- - -## Scaling Plan - -| Component | Scales How? | Notes | -| ------------- | ------------------------------ | -------------------------------------- | -| `ChainhookDO` | Horizontally, 1 per API key | Clean sharding model | -| `RelayWorker` | Cloudflare handles concurrency | Stateless, fast | -| `KV Store` | Global, distributed | Efficient for deduplication | -| Destinations | Must scale with traffic | Fan-out and retry queues planned later | - ---- - ## Future Enhancements -* Fan-out to multiple destinations -* Retry and queueing for failed deliveries -* Chain reorg detection and rollback handling -* Signature verification of incoming payloads -* Dashboard for chainhook status and logs - ---- +- Fan-out to multiple destinations +- Retry and queueing for failed deliveries +- Chain reorg detection and rollback handling +- Signature verification of incoming payloads +- Dashboard for chainhook status and logs ## Next Steps @@ -160,15 +139,13 @@ This document forms the foundation for the implementation task plan. 
Tasks will include: -* [ ] DO scaffold with fetch + alarm -* [ ] Chainhook API integration -* [ ] Worker with KV dedup and forward logic -* [ ] Logging utility functions -* [ ] Deployment scripts + testing - ---- +- [ ] DO scaffold with fetch + alarm +- [ ] Chainhook API integration +- [ ] Worker with KV dedup and forward logic +- [ ] Logging utility functions +- [ ] Deployment scripts + testing ## Notes -* Each block has a globally unique `block_hash`, making it ideal for use as the KV deduplication key. -* No payload filtering is done at this stage — every anchored block is delivered. +- Each block has a globally unique `block_hash`, making it ideal for use as the KV deduplication key. +- No payload filtering is done at this stage — every anchored block is delivered. From eff357fdd8b9d55a3967532ab19439b9f345d1e2 Mon Sep 17 00:00:00 2001 From: Jason Schrader Date: Wed, 30 Jul 2025 19:20:27 -0700 Subject: [PATCH 3/8] fix: restore original state for migration --- wrangler.toml | 1 + 1 file changed, 1 insertion(+) diff --git a/wrangler.toml b/wrangler.toml index 8adda6d..7f32666 100644 --- a/wrangler.toml +++ b/wrangler.toml @@ -25,6 +25,7 @@ routes = [] # fixing a deploy error from old code [[env.preview.migrations]] tag = "20250417" +new_classes = ["ChainhooksDO"] [[env.preview.migrations]] tag = "20250530" From e1d66cd0e4c6588f5f31b35e7d404332336864a9 Mon Sep 17 00:00:00 2001 From: Jason Schrader Date: Wed, 30 Jul 2025 19:27:56 -0700 Subject: [PATCH 4/8] fix: add missing migration then delete it possible deploy issue with preview URLs, on the other hand if generated per PR maybe can remove? 
--- wrangler.toml | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/wrangler.toml b/wrangler.toml index 7f32666..e123cdf 100644 --- a/wrangler.toml +++ b/wrangler.toml @@ -22,7 +22,6 @@ new_classes = ["ContractCallsDO"] routes = [] -# fixing a deploy error from old code [[env.preview.migrations]] tag = "20250417" new_classes = ["ChainhooksDO"] @@ -31,6 +30,14 @@ new_classes = ["ChainhooksDO"] tag = "20250530" deleted_classes = ["ChainhooksDO"] +[[env.preview.migrations]] +tag = "20250611" +new_classes = ["StacksAccountDO"] + +[[env.preview.migrations]] +tag = "20250730" +deleted_classes = ["StacksAccountDO"] + [[env.preview.kv_namespaces]] binding = "AIBTCDEV_CACHE_KV" id = "beb302875cfa41eb86fb24eeb3b9373a" From 457e94c73a08de61c51f59515f862b2f13c5c57e Mon Sep 17 00:00:00 2001 From: Jason Schrader Date: Wed, 30 Jul 2025 19:43:15 -0700 Subject: [PATCH 5/8] docs: capture CF docs related to plan --- ...practices-access-durable-object-storage.md | 305 ++++++++++++++++++ ...lare-docs-best-practices-error-handling.md | 99 ++++++ ...lare-docs-best-practices-invoke-methods.md | 297 +++++++++++++++++ ...flare-docs-best-practices-use-websocket.md | 305 ++++++++++++++++++ ...lare-docs-lifecycle-of-a-durable-object.md | 78 +++++ ...loudflare-docs-what-are-durable-objects.md | 124 +++++++ ...cloudflare-example-build-a-rate-limiter.md | 290 +++++++++++++++++ 7 files changed, 1498 insertions(+) create mode 100644 docs/context/cloudflare-docs-best-practices-access-durable-object-storage.md create mode 100644 docs/context/cloudflare-docs-best-practices-error-handling.md create mode 100644 docs/context/cloudflare-docs-best-practices-invoke-methods.md create mode 100644 docs/context/cloudflare-docs-best-practices-use-websocket.md create mode 100644 docs/context/cloudflare-docs-lifecycle-of-a-durable-object.md create mode 100644 docs/context/cloudflare-docs-what-are-durable-objects.md create mode 100644 docs/context/cloudflare-example-build-a-rate-limiter.md 
diff --git a/docs/context/cloudflare-docs-best-practices-access-durable-object-storage.md b/docs/context/cloudflare-docs-best-practices-access-durable-object-storage.md new file mode 100644 index 0000000..65aaf03 --- /dev/null +++ b/docs/context/cloudflare-docs-best-practices-access-durable-object-storage.md @@ -0,0 +1,305 @@ +--- +title: Access Durable Objects Storage · Cloudflare Durable Objects docs +description: Durable Objects are a powerful compute API that provides a compute + with storage building block. Each Durable Object has its own private, + transactional, and strongly consistent storage. Durable Objects Storage API + provides access to a Durable Object's attached storage. +lastUpdated: 2025-05-21T09:44:01.000Z +chatbotDeprioritize: false +source_url: + html: https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/ + md: https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/index.md +--- + +Durable Objects are a powerful compute API that provides a compute with storage building block. Each Durable Object has its own private, transactional, and strongly consistent storage. Durable Objects Storage API provides access to a Durable Object's attached storage. + +A Durable Object's [in-memory state](https://developers.cloudflare.com/durable-objects/reference/in-memory-state/) is preserved as long as the Durable Object is not evicted from memory. Inactive Durable Objects with no incoming request traffic can be evicted. There are normal operations like [code deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/) that trigger Durable Objects to restart and lose their in-memory state. For these reasons, you should use Storage API to persist state durably on disk that needs to survive eviction or restart of Durable Objects. 
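For the relay design in START.md, this is the reason ChainhookDO's `chainhook_id`, last `block_hash`, and activity timestamp belong in durable storage rather than only in memory. A minimal sketch of that write-through pattern (the `DurableStorage` interface stands in for `ctx.storage` and models only `put`/`get`; the field names come from this project's plan, not Cloudflare's API):

```typescript
// Sketch: write-through persistence for ChainhookDO state, so values
// survive eviction or restart. DurableStorage mimics the ctx.storage
// put(entries)/get(keys) shapes used in the examples below.
interface DurableStorage {
  put(entries: Record<string, unknown>): Promise<void>;
  get(keys: string[]): Promise<Map<string, unknown>>;
}

class ChainhookState {
  constructor(private storage: DurableStorage) {}

  async recordBlock(blockHash: string): Promise<void> {
    // Persist on every delivery; an in-memory copy alone is lost on eviction.
    await this.storage.put({
      last_block_hash: blockHash,
      last_activity: Date.now(),
    });
  }

  async status(): Promise<Record<string, unknown>> {
    // Reads come from storage, so state is correct even on a cold start.
    const entries = await this.storage.get([
      "chainhook_id",
      "last_block_hash",
      "last_activity",
    ]);
    return Object.fromEntries(entries);
  }
}
```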
+ +## Access storage + +Recommended SQLite-backed Durable Objects + +Cloudflare recommends all new Durable Object namespaces use the [SQLite storage backend](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class). These Durable Objects can continue to use storage [key-value API](https://developers.cloudflare.com/durable-objects/api/storage-api/#kv-api). + +Additionally, SQLite-backed Durable Objects allow you to store more types of data (such as tables), and offers Point In Time Recovery API which can restore a Durable Object's embedded SQLite database contents (both SQL data and key-value data) to any point in the past 30 days. + +The [key-value storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#create-durable-object-class-with-key-value-storage) remains for backwards compatibility, and a migration path from KV storage backend to SQLite storage backend for existing Durable Object namespaces will be available in the future. + +Storage billing on SQLite-backed Durable Objects + +Storage billing is not yet enabled for Durable Object classes using the SQLite storage backend. SQLite-backed Durable Objects will incur [charges for requests and duration](https://developers.cloudflare.com/durable-objects/platform/pricing/#compute-billing). Storage billing for SQLite-backed Durable Objects will be enabled at a later date with advance notice with the [shared pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sqlite-storage-backend). + +[Storage API methods](https://developers.cloudflare.com/durable-objects/api/storage-api/#methods) are available on `ctx.storage` parameter passed to the Durable Object constructor. Storage API has several methods, including SQL, point-in-time recovery (PITR), key-value (KV), and alarm APIs. + +Only Durable Object classes with a SQLite storage backend can access SQL API. 
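One of those methods, the alarm API, is what the ChainhookDO health check in this plan would hang off; the staleness decision inside `alarm()` is plain code (a sketch — the 30-minute threshold is an assumption for illustration, not something the Chainhook service specifies):

```typescript
// Sketch: inside alarm(), compare the stored last-activity timestamp
// against a staleness threshold to decide whether to recreate the hook.
const STALE_AFTER_MS = 30 * 60 * 1000; // assumed tunable threshold

function shouldRecreateHook(
  lastActivityMs: number | undefined,
  nowMs: number,
  staleAfterMs: number = STALE_AFTER_MS,
): boolean {
  // No recorded activity yet: treat as stale so the first alarm
  // verifies (or creates) the chainhook.
  if (lastActivityMs === undefined) return true;
  return nowMs - lastActivityMs > staleAfterMs;
}
```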
+ +### Create SQLite-backed Durable Object class + +Use `new_sqlite_classes` on the migration in your Worker's Wrangler file: + +- wrangler.jsonc + + ```jsonc + { + "migrations": [ + { + "tag": "v1", + "new_sqlite_classes": ["MyDurableObject"] + } + ] + } + ``` + +- wrangler.toml + + ```toml + [[migrations]] + tag = "v1" # Should be unique for each entry + new_sqlite_classes = ["MyDurableObject"] # Array of new classes + ``` + +[SQL API](https://developers.cloudflare.com/durable-objects/api/storage-api/#exec) is available on `ctx.storage.sql` parameter passed to the Durable Object constructor. + +SQLite-backed Durable Objects also offer [point-in-time recovery API](https://developers.cloudflare.com/durable-objects/api/storage-api/#pitr-point-in-time-recovery-api), which uses bookmarks to allow you to restore a Durable Object's embedded SQLite database to any point in time in the past 30 days. + +### Initialize instance variables from storage + +A common pattern is to initialize a Durable Object from [persistent storage](https://developers.cloudflare.com/durable-objects/api/storage-api/) and set instance variables the first time it is accessed. Since future accesses are routed to the same Durable Object, it is then possible to return any initialized values without making further calls to persistent storage. + +```ts +import { DurableObject } from 'cloudflare:workers'; + +export class Counter extends DurableObject { + value: number; + + constructor(ctx: DurableObjectState, env: Env) { + super(ctx, env); + + // `blockConcurrencyWhile()` ensures no requests are delivered until + // initialization completes. + ctx.blockConcurrencyWhile(async () => { + // After initialization, future reads do not need to access storage. + this.value = (await ctx.storage.get('value')) || 0; + }); + } + + async getCounterValue() { + return this.value; + } +} +``` + +### Remove a Durable Object's storage + +A Durable Object fully ceases to exist if, when it shuts down, its storage is empty. 
If you never write to a Durable Object's storage at all (including setting alarms), then storage remains empty, and so the Durable Object will no longer exist once it shuts down. + +However if you ever write using [Storage API](https://developers.cloudflare.com/durable-objects/api/storage-api/), including setting alarms, then you must explicitly call [`storage.deleteAll()`](https://developers.cloudflare.com/durable-objects/api/storage-api/#deleteall) to empty storage and [`storage.deleteAlarm()`](https://developers.cloudflare.com/durable-objects/api/storage-api/#deletealarm) if you've configured an alarm. It is not sufficient to simply delete the specific data that you wrote, such as deleting a key or dropping a table, as some metadata may remain. The only way to remove all storage is to call `deleteAll()`. Calling `deleteAll()` ensures that a Durable Object will not be billed for storage. + +```ts +export class MyDurableObject extends DurableObject { + constructor(ctx: DurableObjectState, env: Env) { + super(ctx, env); + } + + // Clears Durable Object storage + async clearDo(): Promise { + // If you've configured a Durable Object alarm + await this.ctx.storage.deleteAlarm(); + + // This will delete all the storage associated with this Durable Object instance + // This will also delete the Durable Object instance itself + await this.ctx.storage.deleteAll(); + } +} +``` + +## SQL API Examples + +[SQL API](https://developers.cloudflare.com/durable-objects/api/storage-api/#exec) examples below use the following SQL schema: + +```ts +import { DurableObject } from 'cloudflare:workers'; + +export class MyDurableObject extends DurableObject { + sql: SqlStorage; + constructor(ctx: DurableObjectState, env: Env) { + super(ctx, env); + this.sql = ctx.storage.sql; + + this.sql.exec(`CREATE TABLE IF NOT EXISTS artist( + artistid INTEGER PRIMARY KEY, + artistname TEXT + );INSERT INTO artist (artistid, artistname) VALUES + (123, 'Alice'), + (456, 'Bob'), + (789, 'Charlie');`); + 
} +} +``` + +Iterate over query results as row objects: + +```ts +let cursor = this.sql.exec('SELECT * FROM artist;'); + +for (let row of cursor) { + // Iterate over row object and do something +} +``` + +Convert query results to an array of row objects: + +```ts +// Return array of row objects: [{"artistid":123,"artistname":"Alice"},{"artistid":456,"artistname":"Bob"},{"artistid":789,"artistname":"Charlie"}] +let resultsArray1 = this.sql.exec('SELECT * FROM artist;').toArray(); +// OR +let resultsArray2 = Array.from(this.sql.exec('SELECT * FROM artist;')); +// OR +let resultsArray3 = [...this.sql.exec('SELECT * FROM artist;')]; // JavaScript spread syntax +``` + +Convert query results to an array of row values arrays: + +```ts +// Returns [[123,"Alice"],[456,"Bob"],[789,"Charlie"]] +let cursor = this.sql.exec('SELECT * FROM artist;'); +let resultsArray = cursor.raw().toArray(); + +// Returns ["artistid","artistname"] +let columnNameArray = this.sql.exec('SELECT * FROM artist;').columnNames.toArray(); +``` + +Get first row object of query results: + +```ts +// Returns {"artistid":123,"artistname":"Alice"} +let firstRow = this.sql.exec('SELECT * FROM artist ORDER BY artistname DESC;').toArray()[0]; +``` + +Check if query results have exactly one row: + +```ts +// returns error +this.sql.exec('SELECT * FROM artist ORDER BY artistname ASC;').one(); + +// returns { artistid: 123, artistname: 'Alice' } +let oneRow = this.sql.exec('SELECT * FROM artist WHERE artistname = ?;', 'Alice').one(); +``` + +Returned cursor behavior: + +```ts +let cursor = this.sql.exec('SELECT * FROM artist ORDER BY artistname ASC;'); +let result = cursor.next(); +if (!result.done) { + console.log(result.value); // prints { artistid: 123, artistname: 'Alice' } +} else { + // query returned zero results +} + +let remainingRows = cursor.toArray(); +console.log(remainingRows); // prints [{ artistid: 456, artistname: 'Bob' },{ artistid: 789, artistname: 'Charlie' }] +``` + +Returned cursor and 
`raw()` iterator iterate over the same query results: + +```ts +let cursor = this.sql.exec('SELECT * FROM artist ORDER BY artistname ASC;'); +let result = cursor.raw().next(); + +if (!result.done) { + console.log(result.value); // prints [ 123, 'Alice' ] +} else { + // query returned zero results +} + +console.log(cursor.toArray()); // prints [{ artistid: 456, artistname: 'Bob' },{ artistid: 789, artistname: 'Charlie' }] +``` + +`sql.exec().rowsRead()`: + +```ts +let cursor = this.sql.exec('SELECT * FROM artist;'); +cursor.next(); +console.log(cursor.rowsRead); // prints 1 + +cursor.toArray(); // consumes remaining cursor +console.log(cursor.rowsRead); // prints 3 +``` + +## TypeScript and query results + +You can use TypeScript [type parameters](https://www.typescriptlang.org/docs/handbook/2/generics.html#working-with-generic-type-variables) to provide a type for your results, allowing you to benefit from type hints and checks when iterating over the results of a query. + +Warning + +Providing a type parameter does _not_ validate that the query result matches your type definition. In TypeScript, properties (fields) that do not exist in your result type will be silently dropped. + +Your type must conform to the shape of a TypeScript [Record](https://www.typescriptlang.org/docs/handbook/utility-types.html#recordkeys-type) type representing the name (`string`) of the column and the type of the column. The column type must be a valid `SqlStorageValue`: one of `ArrayBuffer | string | number | null`. 
+ +For example, + +```ts +type User = { + id: string; + name: string; + email_address: string; + version: number; +}; +``` + +This type can then be passed as the type parameter to a `sql.exec()` call: + +```ts +// The type parameter is passed between angle brackets before the function argument: +const result = this.ctx.storage.sql.exec('SELECT id, name, email_address, version FROM users WHERE id = ?', user_id).one(); +// result will now have a type of "User" + +// Alternatively, if you are iterating over results using a cursor +let cursor = this.sql.exec('SELECT id, name, email_address, version FROM users WHERE id = ?', user_id); +for (let row of cursor) { + // Each row object will be of type User +} + +// Or, if you are using raw() to convert results into an array, define an array type: +type UserRow = [id: string, name: string, email_address: string, version: number]; + +// ... and then pass it as the type argument to the raw() method: +let cursor = sql.exec('SELECT id, name, email_address, version FROM users WHERE id = ?', user_id).raw(); + +for (let row of cursor) { + // row is of type User +} +``` + +You can represent the shape of any result type you wish, including more complex types. If you are performing a `JOIN` across multiple tables, you can compose a type that reflects the results of your queries. + +## Indexes in SQLite + +Creating indexes for your most queried tables and filtered columns reduces how much data is scanned and improves query performance at the same time. If you have a read-heavy workload (most common), this can be particularly advantageous. Writing to columns referenced in an index will add at least one (1) additional row written to account for updating the index, but this is typically offset by the reduction in rows read due to the benefits of an index. + +## SQL in Durable Objects vs D1 + +Cloudflare Workers offers a SQLite-backed serverless database product - [D1](https://developers.cloudflare.com/d1/). 
How should you compare [SQLite in Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/) and D1? + +**D1 is a managed database product.** + +D1 fits into a familiar architecture for developers, where application servers communicate with a database over the network. Application servers are typically Workers; however, D1 also supports external, non-Worker access via an [HTTP API](https://developers.cloudflare.com/api/resources/d1/subresources/database/methods/query/), which helps unlock [third-party tooling](https://developers.cloudflare.com/d1/reference/community-projects/#_top) support for D1. + +D1 aims for a "batteries included" feature set, including the above HTTP API, [database schema management](https://developers.cloudflare.com/d1/reference/migrations/#_top), [data import/export](https://developers.cloudflare.com/d1/best-practices/import-export-data/), and [database query insights](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-insights). + +With D1, your application code and SQL database queries are not colocated which can impact application performance. If performance is a concern with D1, Workers has [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/#_top) to dynamically run your Worker in the best location to reduce total Worker request latency, considering everything your Worker talks to, including D1. + +**SQLite in Durable Objects is a lower-level compute with storage building block for distributed systems.** + +By design, Durable Objects are accessed with Workers-only. + +Durable Objects require a bit more effort, but in return, give you more flexibility and control. 
With Durable Objects, you must implement two pieces of code that run in different places: a front-end Worker which routes incoming requests from the Internet to a unique Durable Object, and the Durable Object itself, which runs on the same machine as the SQLite database. You get to choose what runs where, and it may be that your application benefits from running some application business logic right next to the database. + +With SQLite in Durable Objects, you may also need to build some of your own database tooling that comes out-of-the-box with D1. + +SQL query pricing and limits are intended to be identical between D1 ([pricing](https://developers.cloudflare.com/d1/platform/pricing/), [limits](https://developers.cloudflare.com/d1/platform/limits/)) and SQLite in Durable Objects ([pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sql-storage-billing), [limits](https://developers.cloudflare.com/durable-objects/platform/limits/)). + +## Related resources + +- [Zero-latency SQLite storage in every Durable Object blog post](https://blog.cloudflare.com/sqlite-in-durable-objects) diff --git a/docs/context/cloudflare-docs-best-practices-error-handling.md b/docs/context/cloudflare-docs-best-practices-error-handling.md new file mode 100644 index 0000000..0bb485c --- /dev/null +++ b/docs/context/cloudflare-docs-best-practices-error-handling.md @@ -0,0 +1,99 @@ +--- +title: Error handling · Cloudflare Durable Objects docs +description: Any uncaught exceptions thrown by a Durable Object or thrown by + Durable Objects' infrastructure (such as overloads or network errors) will be + propagated to the callsite of the client. Catching these exceptions allows you + to retry creating the DurableObjectStub and sending requests. 
lastUpdated: 2025-04-06T14:39:24.000Z
chatbotDeprioritize: false
source_url:
  html: https://developers.cloudflare.com/durable-objects/best-practices/error-handling/
  md: https://developers.cloudflare.com/durable-objects/best-practices/error-handling/index.md
---

Any uncaught exceptions thrown by a Durable Object or thrown by Durable Objects' infrastructure (such as overloads or network errors) will be propagated to the callsite of the client. Catching these exceptions allows you to retry creating the [`DurableObjectStub`](https://developers.cloudflare.com/durable-objects/api/stub) and sending requests.

JavaScript Errors with the property `.retryable` set to `True` should be retried only if requests to the Durable Object are idempotent, that is, if they can be applied multiple times without changing the response. If requests are not idempotent, then you will need to decide what is best for your application.

JavaScript Errors with the property `.overloaded` set to `True` should not be retried. If a Durable Object is overloaded, then retrying will worsen the overload and increase the overall error rate.

It is strongly recommended to retry requests following the exponential backoff algorithm in production code when the error properties indicate that it is safe to do so.

## How exceptions are thrown

Durable Objects can throw exceptions in one of two ways:

- An exception can be thrown within the user code which implements a Durable Object class. The resulting exception will have a `.remote` property set to `True` in this case.
- An exception can be generated by Durable Objects' infrastructure. Some sources of infrastructure exceptions include: transient internal errors, sending too many requests to a single Durable Object, and too many requests being queued due to slow or excessive I/O (external API calls or storage operations) within an individual Durable Object.
Some infrastructure exceptions may also have the `.remote` property set to `True` -- for example, when the Durable Object exceeds its memory or CPU limits.

Refer to [Troubleshooting](https://developers.cloudflare.com/durable-objects/observability/troubleshooting/) to review the types of errors returned by a Durable Object and/or Durable Objects infrastructure and how to prevent them.

## Example

This example demonstrates retrying requests using the recommended exponential backoff algorithm.

```ts
import { DurableObject } from 'cloudflare:workers';

export interface Env {
  ErrorThrowingObject: DurableObjectNamespace;
}

export default {
  async fetch(request, env, ctx) {
    let userId = new URL(request.url).searchParams.get('userId') || '';
    const id = env.ErrorThrowingObject.idFromName(userId);

    // Retry behavior can be adjusted to fit your application.
    let maxAttempts = 3;
    let baseBackoffMs = 100;
    let maxBackoffMs = 20000;

    let attempt = 0;
    while (true) {
      // Try sending the request
      try {
        // Create a Durable Object stub for each attempt, because certain types of
        // errors will break the Durable Object stub.
        const doStub = env.ErrorThrowingObject.get(id);
        const resp = await doStub.fetch('http://your-do/');

        // Forward the Durable Object's Response directly to the caller.
        return resp;
      } catch (e: any) {
        if (!e.retryable) {
          // Failure was not a transient internal error, so don't retry.
          break;
        }
      }
      let backoffMs = Math.min(maxBackoffMs, baseBackoffMs * Math.random() * Math.pow(2, attempt));
      attempt += 1;
      if (attempt >= maxAttempts) {
        // Reached max attempts, so don't retry.
+ break; + } + await scheduler.wait(backoffMs); + } + return new Response('server error', { status: 500 }); + }, +} satisfies ExportedHandler; + +export class ErrorThrowingObject extends DurableObject { + constructor(state: DurableObjectState, env: Env) { + super(state, env); + + // Any exceptions that are raised in your constructor will also set the + // .remote property to True + throw new Error('no good'); + } + + async fetch(req: Request) { + // Generate an uncaught exception + // A .remote property will be added to the exception propagated to the caller + // and will be set to True + throw new Error('example error'); + + // We never reach this + return Response.json({}); + } +} +``` diff --git a/docs/context/cloudflare-docs-best-practices-invoke-methods.md b/docs/context/cloudflare-docs-best-practices-invoke-methods.md new file mode 100644 index 0000000..3a2817a --- /dev/null +++ b/docs/context/cloudflare-docs-best-practices-invoke-methods.md @@ -0,0 +1,297 @@ +--- +title: Invoke methods · Cloudflare Durable Objects docs +description: All new projects and existing projects with a compatibility date + greater than or equal to 2024-04-03 should prefer to invoke Remote Procedure + Call (RPC) methods defined on a Durable Object class. Legacy projects can + continue to invoke the fetch handler on the Durable Object class indefinitely. 
+lastUpdated: 2025-04-06T14:39:24.000Z +chatbotDeprioritize: false +source_url: + html: https://developers.cloudflare.com/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/ + md: https://developers.cloudflare.com/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/index.md +--- + +## Invoking methods on a Durable Object + +All new projects and existing projects with a compatibility date greater than or equal to [`2024-04-03`](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#durable-object-stubs-and-service-bindings-support-rpc) should prefer to invoke [Remote Procedure Call (RPC)](https://developers.cloudflare.com/workers/runtime-apis/rpc/) methods defined on a Durable Object class. Legacy projects can continue to invoke the `fetch` handler on the Durable Object class indefinitely. + +### Invoke RPC methods + +By writing a Durable Object class which inherits from the built-in type `DurableObject`, public methods on the Durable Objects class are exposed as [RPC methods](https://developers.cloudflare.com/workers/runtime-apis/rpc/), which you can call using a [DurableObjectStub](https://developers.cloudflare.com/durable-objects/api/stub) from a Worker. + +All RPC calls are [asynchronous](https://developers.cloudflare.com/workers/runtime-apis/rpc/lifecycle/), accept and return [serializable types](https://developers.cloudflare.com/workers/runtime-apis/rpc/), and [propagate exceptions](https://developers.cloudflare.com/workers/runtime-apis/rpc/error-handling/) to the caller without a stack trace. Refer to [Workers RPC](https://developers.cloudflare.com/workers/runtime-apis/rpc/) for complete details. 
- JavaScript

  ```js
  import { DurableObject } from 'cloudflare:workers';

  // Durable Object
  export class MyDurableObject extends DurableObject {
    constructor(ctx, env) {
      super(ctx, env);
    }

    async sayHello() {
      return 'Hello, World!';
    }
  }

  // Worker
  export default {
    async fetch(request, env) {
      // Every unique ID refers to an individual instance of the Durable Object class
      const id = env.MY_DURABLE_OBJECT.idFromName('foo');

      // A stub is a client used to invoke methods on the Durable Object
      const stub = env.MY_DURABLE_OBJECT.get(id);

      // Methods on the Durable Object are invoked via the stub
      const rpcResponse = await stub.sayHello();

      return new Response(rpcResponse);
    },
  };
  ```

- TypeScript

  ```ts
  import { DurableObject } from 'cloudflare:workers';

  export interface Env {
    MY_DURABLE_OBJECT: DurableObjectNamespace<MyDurableObject>;
  }

  // Durable Object
  export class MyDurableObject extends DurableObject {
    constructor(ctx: DurableObjectState, env: Env) {
      super(ctx, env);
    }

    async sayHello(): Promise<string> {
      return 'Hello, World!';
    }
  }

  // Worker
  export default {
    async fetch(request, env) {
      // Every unique ID refers to an individual instance of the Durable Object class
      const id = env.MY_DURABLE_OBJECT.idFromName('foo');

      // A stub is a client used to invoke methods on the Durable Object
      const stub = env.MY_DURABLE_OBJECT.get(id);

      // Methods on the Durable Object are invoked via the stub
      const rpcResponse = await stub.sayHello();

      return new Response(rpcResponse);
    },
  } satisfies ExportedHandler<Env>;
  ```

Note

With RPC, the `DurableObject` superclass defines `ctx` and `env` as class properties. What was previously called `state` is now called `ctx` when you extend the `DurableObject` class. The name `ctx` is adopted rather than `state` for the `DurableObjectState` interface to be consistent between `DurableObject` and `WorkerEntrypoint` objects.
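A note on the serializable-types requirement above: a quick way to sanity-check that a payload will survive an RPC boundary is a structured-clone round trip. This is a sketch, not part of the Workers API; `isRpcSerializable` is a hypothetical helper, and `structuredClone` only approximates the Workers RPC rules (which additionally allow types such as stubs and streams):

```ts
// Hypothetical helper: structured clone is a rough stand-in for the
// serialization used at an RPC boundary. Functions and other
// non-cloneable values cause structuredClone to throw.
function isRpcSerializable(value: unknown): boolean {
  try {
    structuredClone(value);
    return true;
  } catch {
    return false;
  }
}

console.log(isRpcSerializable({ id: 1, tags: ['a', 'b'] })); // true
console.log(isRpcSerializable({ handler: () => {} })); // false
```

Plain data objects, arrays, Maps, Sets, and Dates pass; anything carrying a function will not.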
Refer to [Build a Counter](https://developers.cloudflare.com/durable-objects/examples/build-a-counter/) for a complete example.

### Invoking the `fetch` handler

If your project is stuck on a compatibility date before [`2024-04-03`](https://developers.cloudflare.com/workers/configuration/compatibility-flags/#durable-object-stubs-and-service-bindings-support-rpc), or has the need to send a [`Request`](https://developers.cloudflare.com/workers/runtime-apis/request/) object and return a `Response` object, then you should send requests to a Durable Object via the fetch handler.

- JavaScript

  ```js
  import { DurableObject } from 'cloudflare:workers';

  // Durable Object
  export class MyDurableObject extends DurableObject {
    constructor(ctx, env) {
      super(ctx, env);
    }

    async fetch(request) {
      return new Response('Hello, World!');
    }
  }

  // Worker
  export default {
    async fetch(request, env) {
      // Every unique ID refers to an individual instance of the Durable Object class
      const id = env.MY_DURABLE_OBJECT.idFromName('foo');

      // A stub is a client used to invoke methods on the Durable Object
      const stub = env.MY_DURABLE_OBJECT.get(id);

      // Methods on the Durable Object are invoked via the stub
      const response = await stub.fetch(request);

      return response;
    },
  };
  ```

- TypeScript

  ```ts
  import { DurableObject } from 'cloudflare:workers';

  export interface Env {
    MY_DURABLE_OBJECT: DurableObjectNamespace;
  }

  // Durable Object
  export class MyDurableObject extends DurableObject {
    constructor(ctx: DurableObjectState, env: Env) {
      super(ctx, env);
    }

    async fetch(request: Request): Promise<Response> {
      return new Response('Hello, World!');
    }
  }

  // Worker
  export default {
    async fetch(request, env) {
      // Every unique ID refers to an individual instance of the Durable Object class
      const id = env.MY_DURABLE_OBJECT.idFromName('foo');

      // A stub is a client used to invoke methods on the Durable Object
      const stub = env.MY_DURABLE_OBJECT.get(id);

      // Methods on the Durable Object are invoked via the stub
      const response = await stub.fetch(request);

      return response;
    },
  } satisfies ExportedHandler<Env>;
  ```

The `URL` associated with the [`Request`](https://developers.cloudflare.com/workers/runtime-apis/request/) object passed to the `fetch()` handler of your Durable Object must be a well-formed URL, but does not have to be a publicly-resolvable hostname.

Without RPC, customers frequently constructed requests which corresponded to private methods on the Durable Object and dispatched them from the `fetch` handler. RPC is obviously more ergonomic in this example.

- JavaScript

  ```js
  import { DurableObject } from 'cloudflare:workers';

  // Durable Object
  export class MyDurableObject extends DurableObject {
    constructor(ctx, env) {
      super(ctx, env);
    }

    hello(name) {
      return new Response(`Hello, ${name}!`);
    }

    goodbye(name) {
      return new Response(`Goodbye, ${name}!`);
    }

    async fetch(request) {
      const url = new URL(request.url);
      let name = url.searchParams.get("name");
      if (!name) {
        name = "World";
      }

      switch (url.pathname) {
        case "/hello":
          return this.hello(name);
        case "/goodbye":
          return this.goodbye(name);
        default:
          return new Response("Bad Request", { status: 400 });
      }
    }
  }

  // Worker
  export default {
    async fetch(_request, env, _ctx) {
      // Every unique ID refers to an individual instance of the Durable Object class
      const id = env.MY_DURABLE_OBJECT.idFromName("foo");

      // A stub is a client used to invoke methods on the Durable Object
      const stub = env.MY_DURABLE_OBJECT.get(id);

      // Invoke the fetch handler on the Durable Object stub
      let response = await stub.fetch("http://do/hello?name=World");

      return response;
    },
  };
  ```

- TypeScript

  ```ts
  import { DurableObject } from 'cloudflare:workers';

  export interface Env {
    MY_DURABLE_OBJECT: DurableObjectNamespace;
  }

  // Durable Object
  export class MyDurableObject extends DurableObject {
    constructor(ctx: DurableObjectState, env: Env) {
      super(ctx, env);
    }

    private hello(name: string) {
      return new Response(`Hello, ${name}!`);
    }

    private goodbye(name: string) {
      return new Response(`Goodbye, ${name}!`);
    }

    async fetch(request: Request): Promise<Response> {
      const url = new URL(request.url);
      let name = url.searchParams.get('name');
      if (!name) {
        name = 'World';
      }

      switch (url.pathname) {
        case '/hello':
          return this.hello(name);
        case '/goodbye':
          return this.goodbye(name);
        default:
          return new Response('Bad Request', { status: 400 });
      }
    }
  }

  // Worker
  export default {
    async fetch(_request, env, _ctx) {
      // Every unique ID refers to an individual instance of the Durable Object class
      const id = env.MY_DURABLE_OBJECT.idFromName('foo');

      // A stub is a client used to invoke methods on the Durable Object
      const stub = env.MY_DURABLE_OBJECT.get(id);

      // Invoke the fetch handler on the Durable Object stub
      let response = await stub.fetch('http://do/hello?name=World');

      return response;
    },
  } satisfies ExportedHandler<Env>;
  ```
diff --git a/docs/context/cloudflare-docs-best-practices-use-websocket.md b/docs/context/cloudflare-docs-best-practices-use-websocket.md
new file mode 100644
index 0000000..65aaf03
--- /dev/null
+++ b/docs/context/cloudflare-docs-best-practices-use-websocket.md
@@ -0,0 +1,305 @@
---
title: Access Durable Objects Storage · Cloudflare Durable Objects docs
description: Durable Objects are a powerful compute API that provides a compute
  with storage building block. Each Durable Object has its own private,
  transactional, and strongly consistent storage. Durable Objects Storage API
  provides access to a Durable Object's attached storage.
+lastUpdated: 2025-05-21T09:44:01.000Z +chatbotDeprioritize: false +source_url: + html: https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/ + md: https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/index.md +--- + +Durable Objects are a powerful compute API that provides a compute with storage building block. Each Durable Object has its own private, transactional, and strongly consistent storage. Durable Objects Storage API provides access to a Durable Object's attached storage. + +A Durable Object's [in-memory state](https://developers.cloudflare.com/durable-objects/reference/in-memory-state/) is preserved as long as the Durable Object is not evicted from memory. Inactive Durable Objects with no incoming request traffic can be evicted. There are normal operations like [code deployments](https://developers.cloudflare.com/workers/configuration/versions-and-deployments/) that trigger Durable Objects to restart and lose their in-memory state. For these reasons, you should use Storage API to persist state durably on disk that needs to survive eviction or restart of Durable Objects. + +## Access storage + +Recommended SQLite-backed Durable Objects + +Cloudflare recommends all new Durable Object namespaces use the [SQLite storage backend](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/#create-sqlite-backed-durable-object-class). These Durable Objects can continue to use storage [key-value API](https://developers.cloudflare.com/durable-objects/api/storage-api/#kv-api). + +Additionally, SQLite-backed Durable Objects allow you to store more types of data (such as tables), and offers Point In Time Recovery API which can restore a Durable Object's embedded SQLite database contents (both SQL data and key-value data) to any point in the past 30 days. 
+ +The [key-value storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#create-durable-object-class-with-key-value-storage) remains for backwards compatibility, and a migration path from KV storage backend to SQLite storage backend for existing Durable Object namespaces will be available in the future. + +Storage billing on SQLite-backed Durable Objects + +Storage billing is not yet enabled for Durable Object classes using the SQLite storage backend. SQLite-backed Durable Objects will incur [charges for requests and duration](https://developers.cloudflare.com/durable-objects/platform/pricing/#compute-billing). Storage billing for SQLite-backed Durable Objects will be enabled at a later date with advance notice with the [shared pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sqlite-storage-backend). + +[Storage API methods](https://developers.cloudflare.com/durable-objects/api/storage-api/#methods) are available on `ctx.storage` parameter passed to the Durable Object constructor. Storage API has several methods, including SQL, point-in-time recovery (PITR), key-value (KV), and alarm APIs. + +Only Durable Object classes with a SQLite storage backend can access SQL API. + +### Create SQLite-backed Durable Object class + +Use `new_sqlite_classes` on the migration in your Worker's Wrangler file: + +- wrangler.jsonc + + ```jsonc + { + "migrations": [ + { + "tag": "v1", + "new_sqlite_classes": ["MyDurableObject"] + } + ] + } + ``` + +- wrangler.toml + + ```toml + [[migrations]] + tag = "v1" # Should be unique for each entry + new_sqlite_classes = ["MyDurableObject"] # Array of new classes + ``` + +[SQL API](https://developers.cloudflare.com/durable-objects/api/storage-api/#exec) is available on `ctx.storage.sql` parameter passed to the Durable Object constructor. 
+ +SQLite-backed Durable Objects also offer [point-in-time recovery API](https://developers.cloudflare.com/durable-objects/api/storage-api/#pitr-point-in-time-recovery-api), which uses bookmarks to allow you to restore a Durable Object's embedded SQLite database to any point in time in the past 30 days. + +### Initialize instance variables from storage + +A common pattern is to initialize a Durable Object from [persistent storage](https://developers.cloudflare.com/durable-objects/api/storage-api/) and set instance variables the first time it is accessed. Since future accesses are routed to the same Durable Object, it is then possible to return any initialized values without making further calls to persistent storage. + +```ts +import { DurableObject } from 'cloudflare:workers'; + +export class Counter extends DurableObject { + value: number; + + constructor(ctx: DurableObjectState, env: Env) { + super(ctx, env); + + // `blockConcurrencyWhile()` ensures no requests are delivered until + // initialization completes. + ctx.blockConcurrencyWhile(async () => { + // After initialization, future reads do not need to access storage. + this.value = (await ctx.storage.get('value')) || 0; + }); + } + + async getCounterValue() { + return this.value; + } +} +``` + +### Remove a Durable Object's storage + +A Durable Object fully ceases to exist if, when it shuts down, its storage is empty. If you never write to a Durable Object's storage at all (including setting alarms), then storage remains empty, and so the Durable Object will no longer exist once it shuts down. 
However, if you ever write using the [Storage API](https://developers.cloudflare.com/durable-objects/api/storage-api/), including setting alarms, then you must explicitly call [`storage.deleteAll()`](https://developers.cloudflare.com/durable-objects/api/storage-api/#deleteall) to empty storage and [`storage.deleteAlarm()`](https://developers.cloudflare.com/durable-objects/api/storage-api/#deletealarm) if you've configured an alarm. It is not sufficient to simply delete the specific data that you wrote, such as deleting a key or dropping a table, as some metadata may remain. The only way to remove all storage is to call `deleteAll()`. Calling `deleteAll()` ensures that a Durable Object will not be billed for storage.

```ts
import { DurableObject } from 'cloudflare:workers';

export class MyDurableObject extends DurableObject {
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
  }

  // Clears Durable Object storage
  async clearDo(): Promise<void> {
    // If you've configured a Durable Object alarm
    await this.ctx.storage.deleteAlarm();

    // This will delete all the storage associated with this Durable Object instance
    // This will also delete the Durable Object instance itself
    await this.ctx.storage.deleteAll();
  }
}
```

## SQL API Examples

[SQL API](https://developers.cloudflare.com/durable-objects/api/storage-api/#exec) examples below use the following SQL schema:

```ts
import { DurableObject } from 'cloudflare:workers';

export class MyDurableObject extends DurableObject {
  sql: SqlStorage;
  constructor(ctx: DurableObjectState, env: Env) {
    super(ctx, env);
    this.sql = ctx.storage.sql;

    this.sql.exec(`CREATE TABLE IF NOT EXISTS artist(
      artistid INTEGER PRIMARY KEY,
      artistname TEXT
    );INSERT INTO artist (artistid, artistname) VALUES
      (123, 'Alice'),
      (456, 'Bob'),
      (789, 'Charlie');`);
  }
}
```

Iterate over query results as row objects:

```ts
let cursor = this.sql.exec('SELECT * FROM artist;');

for (let row of cursor) {
  // Iterate over row object
and do something +} +``` + +Convert query results to an array of row objects: + +```ts +// Return array of row objects: [{"artistid":123,"artistname":"Alice"},{"artistid":456,"artistname":"Bob"},{"artistid":789,"artistname":"Charlie"}] +let resultsArray1 = this.sql.exec('SELECT * FROM artist;').toArray(); +// OR +let resultsArray2 = Array.from(this.sql.exec('SELECT * FROM artist;')); +// OR +let resultsArray3 = [...this.sql.exec('SELECT * FROM artist;')]; // JavaScript spread syntax +``` + +Convert query results to an array of row values arrays: + +```ts +// Returns [[123,"Alice"],[456,"Bob"],[789,"Charlie"]] +let cursor = this.sql.exec('SELECT * FROM artist;'); +let resultsArray = cursor.raw().toArray(); + +// Returns ["artistid","artistname"] +let columnNameArray = this.sql.exec('SELECT * FROM artist;').columnNames.toArray(); +``` + +Get first row object of query results: + +```ts +// Returns {"artistid":123,"artistname":"Alice"} +let firstRow = this.sql.exec('SELECT * FROM artist ORDER BY artistname DESC;').toArray()[0]; +``` + +Check if query results have exactly one row: + +```ts +// returns error +this.sql.exec('SELECT * FROM artist ORDER BY artistname ASC;').one(); + +// returns { artistid: 123, artistname: 'Alice' } +let oneRow = this.sql.exec('SELECT * FROM artist WHERE artistname = ?;', 'Alice').one(); +``` + +Returned cursor behavior: + +```ts +let cursor = this.sql.exec('SELECT * FROM artist ORDER BY artistname ASC;'); +let result = cursor.next(); +if (!result.done) { + console.log(result.value); // prints { artistid: 123, artistname: 'Alice' } +} else { + // query returned zero results +} + +let remainingRows = cursor.toArray(); +console.log(remainingRows); // prints [{ artistid: 456, artistname: 'Bob' },{ artistid: 789, artistname: 'Charlie' }] +``` + +Returned cursor and `raw()` iterator iterate over the same query results: + +```ts +let cursor = this.sql.exec('SELECT * FROM artist ORDER BY artistname ASC;'); +let result = cursor.raw().next(); + +if 
(!result.done) { + console.log(result.value); // prints [ 123, 'Alice' ] +} else { + // query returned zero results +} + +console.log(cursor.toArray()); // prints [{ artistid: 456, artistname: 'Bob' },{ artistid: 789, artistname: 'Charlie' }] +``` + +`sql.exec().rowsRead()`: + +```ts +let cursor = this.sql.exec('SELECT * FROM artist;'); +cursor.next(); +console.log(cursor.rowsRead); // prints 1 + +cursor.toArray(); // consumes remaining cursor +console.log(cursor.rowsRead); // prints 3 +``` + +## TypeScript and query results + +You can use TypeScript [type parameters](https://www.typescriptlang.org/docs/handbook/2/generics.html#working-with-generic-type-variables) to provide a type for your results, allowing you to benefit from type hints and checks when iterating over the results of a query. + +Warning + +Providing a type parameter does _not_ validate that the query result matches your type definition. In TypeScript, properties (fields) that do not exist in your result type will be silently dropped. + +Your type must conform to the shape of a TypeScript [Record](https://www.typescriptlang.org/docs/handbook/utility-types.html#recordkeys-type) type representing the name (`string`) of the column and the type of the column. The column type must be a valid `SqlStorageValue`: one of `ArrayBuffer | string | number | null`. 
For example,

```ts
type User = {
  id: string;
  name: string;
  email_address: string;
  version: number;
};
```

This type can then be passed as the type parameter to a `sql.exec()` call:

```ts
// The type parameter is passed between angle brackets before the function argument:
const result = this.ctx.storage.sql.exec<User>('SELECT id, name, email_address, version FROM users WHERE id = ?', user_id).one();
// result will now have a type of "User"

// Alternatively, if you are iterating over results using a cursor
let cursor = this.sql.exec<User>('SELECT id, name, email_address, version FROM users WHERE id = ?', user_id);
for (let row of cursor) {
  // Each row object will be of type User
}

// Or, if you are using raw() to convert results into an array, define an array type:
type UserRow = [id: string, name: string, email_address: string, version: number];

// ... and then pass it as the type argument to the raw() method:
let rawCursor = sql.exec('SELECT id, name, email_address, version FROM users WHERE id = ?', user_id).raw<UserRow>();

for (let row of rawCursor) {
  // row is of type UserRow
}
```

You can represent the shape of any result type you wish, including more complex types. If you are performing a `JOIN` across multiple tables, you can compose a type that reflects the results of your queries.

## Indexes in SQLite

Creating indexes for your most queried tables and filtered columns reduces how much data is scanned and improves query performance at the same time. If you have a read-heavy workload (most common), this can be particularly advantageous. Writing to columns referenced in an index will add at least one (1) additional row written to account for updating the index, but this is typically offset by the reduction in rows read due to the benefits of an index.

## SQL in Durable Objects vs D1

Cloudflare Workers offers a SQLite-backed serverless database product - [D1](https://developers.cloudflare.com/d1/).
How should you compare [SQLite in Durable Objects](https://developers.cloudflare.com/durable-objects/best-practices/access-durable-objects-storage/) and D1?

**D1 is a managed database product.**

D1 fits into a familiar architecture for developers, where application servers communicate with a database over the network. Application servers are typically Workers; however, D1 also supports external, non-Worker access via an [HTTP API](https://developers.cloudflare.com/api/resources/d1/subresources/database/methods/query/), which helps unlock [third-party tooling](https://developers.cloudflare.com/d1/reference/community-projects/#_top) support for D1.

D1 aims for a "batteries included" feature set, including the above HTTP API, [database schema management](https://developers.cloudflare.com/d1/reference/migrations/#_top), [data import/export](https://developers.cloudflare.com/d1/best-practices/import-export-data/), and [database query insights](https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-insights).

With D1, your application code and SQL database queries are not colocated, which can impact application performance. If performance is a concern with D1, Workers has [Smart Placement](https://developers.cloudflare.com/workers/configuration/smart-placement/#_top) to dynamically run your Worker in the best location to reduce total Worker request latency, considering everything your Worker talks to, including D1.

**SQLite in Durable Objects is a lower-level compute-with-storage building block for distributed systems.**

By design, Durable Objects can be accessed only from Workers.

Durable Objects require a bit more effort, but in return give you more flexibility and control.
With Durable Objects, you must implement two pieces of code that run in different places: a front-end Worker which routes incoming requests from the Internet to a unique Durable Object, and the Durable Object itself, which runs on the same machine as the SQLite database. You get to choose what runs where, and it may be that your application benefits from running some application business logic right next to the database. + +With SQLite in Durable Objects, you may also need to build some of your own database tooling that comes out-of-the-box with D1. + +SQL query pricing and limits are intended to be identical between D1 ([pricing](https://developers.cloudflare.com/d1/platform/pricing/), [limits](https://developers.cloudflare.com/d1/platform/limits/)) and SQLite in Durable Objects ([pricing](https://developers.cloudflare.com/durable-objects/platform/pricing/#sql-storage-billing), [limits](https://developers.cloudflare.com/durable-objects/platform/limits/)). + +## Related resources + +- [Zero-latency SQLite storage in every Durable Object blog post](https://blog.cloudflare.com/sqlite-in-durable-objects) diff --git a/docs/context/cloudflare-docs-lifecycle-of-a-durable-object.md b/docs/context/cloudflare-docs-lifecycle-of-a-durable-object.md new file mode 100644 index 0000000..a85e2ab --- /dev/null +++ b/docs/context/cloudflare-docs-lifecycle-of-a-durable-object.md @@ -0,0 +1,78 @@ +--- +title: Lifecycle of a Durable Object · Cloudflare Durable Objects docs +description: This section describes the lifecycle of a Durable Object. +lastUpdated: 2025-07-30T15:17:17.000Z +chatbotDeprioritize: false +source_url: + html: https://developers.cloudflare.com/durable-objects/concepts/durable-object-lifecycle/ + md: https://developers.cloudflare.com/durable-objects/concepts/durable-object-lifecycle/index.md +--- + +This section describes the lifecycle of a [Durable Object](https://developers.cloudflare.com/durable-objects/concepts/what-are-durable-objects/). 
+ +To use a Durable Object you need to create a [Durable Object Stub](https://developers.cloudflare.com/durable-objects/api/stub/). In its simplest form, this looks like the following snippet: + +```ts +// Assume a DurableObjectNamespace binding MY_DURABLE_OBJECT +// Every unique ID refers to an individual instance of the Durable Object class +const id = env.MY_DURABLE_OBJECT.idFromName('foo'); +const stub = env.MY_DURABLE_OBJECT.get(id); +``` + +Once we have the Durable Object Stub, we can now invoke methods on the Durable Object. Note that the above two lines do not yet send any request to the remote Durable Object. + +The following line invokes the `sayHello()` method (which is an [RPC method](https://developers.cloudflare.com/durable-objects/best-practices/create-durable-object-stubs-and-send-requests/#invoke-rpc-methods)) of the Durable Object class bound to the `MY_DURABLE_OBJECT` binding: + +```ts +// All invoked methods need to be awaited. +const rpcResponse = await stub.sayHello(); +``` + +At this point, the caller sends a request to the Durable Object identified by the stub. The lifecycle of the Durable Object begins. + +## Durable Object Lifecycle state transitions + +A Durable Object can be in one of the following states at any moment: + +| State | Description | +| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Active, in-memory** | The Durable Object runs, in memory, and handles incoming requests. | +| **Idle, in-memory non-hibernateable** | The Durable Object waits for the next incoming request/event, but does not satisfy the criteria for hibernation. | +| **Idle, in-memory hibernateable** | The Durable Object waits for the next incoming request/event and satisfies the criteria for hibernation. 
It is up to the runtime to decide when to hibernate the Durable Object. Currently, it is after 10 seconds of inactivity while in this state. | +| **Hibernated** | The Durable Object is removed from memory. Hibernated WebSocket connections stay connected. | +| **Inactive** | The Durable Object is completely removed from the host process and might need to cold start. This is the initial state of all Durable Objects. | + +This is how a Durable Object transitions among these states (each state is in a rounded rectangle). + +![Lifecycle of a Durable Object](https://developers.cloudflare.com/_astro/lifecycle-of-a-do.BreLW03C_ct5dt.webp) + +Assuming a Durable Object does not run, the first incoming request or event (like an alarm) will execute the `constructor()` of the Durable Object class, then run the corresponding function invoked. + +At this point the Durable Object is in the **active in-memory state**. + +If it continuously receives requests or events within 10 seconds of each other, the Durable Object will remain in this state. + +After 10 seconds of no incoming request or events, the runtime can now hibernate the Durable Object. Hibernation will only occur if **all** of the below are true: + +- No `setTimeout`/`setInterval` scheduled callbacks are set. +- No in-progress `fetch()` waiting for a remote request exists. +- No WebSocket standard API is used. +- No request/event is still being processed. + +If all conditions are met, the Durable Object will transition into a **hibernated** state. + +Warning + +When hibernated, the in-memory state is discarded, so ensure you persist all important information in the Durable Object's storage. + +If any of the above conditions are false, the Durable Object remains in-memory, in the **idle, in-memory, non-hibernateable** state. + +In case of an incoming request or event while in the **hibernated** state, the `constructor()` will run again, and the corresponding function invoked will run. 
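The warning above is the crux of hibernation-safe design: anything not written to storage is gone when the constructor runs again. A minimal sketch of the pattern, in plain JavaScript with a `Map` standing in for the (asynchronous) `ctx.storage` API and a hypothetical `CounterDO` class:

```javascript
// Sketch only: a Map simulates durable storage, which survives hibernation,
// while instance fields simulate in-memory state, which does not.
class CounterDO {
  constructor(storage) {
    this.storage = storage; // survives hibernation
    // Restore persisted state; a real DO would `await ctx.storage.get("count")`.
    this.count = storage.get("count") ?? 0; // in-memory, lost on hibernation
  }

  increment() {
    this.count += 1;
    // Persist immediately so the value survives hibernation.
    this.storage.set("count", this.count);
    return this.count;
  }
}

const storage = new Map();
let obj = new CounterDO(storage);
obj.increment();
obj.increment();

// Simulate hibernation: the instance (and its memory) is discarded...
obj = null;
// ...and the next event re-runs the constructor against the same storage.
obj = new CounterDO(storage);
console.log(obj.increment()); // 3: count restored from storage, then incremented
```

In a real Durable Object the restore would use `await ctx.storage.get(...)`, typically inside `blockConcurrencyWhile()` so no request observes the unrestored state.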
+
+While in the **idle, in-memory, non-hibernateable** state, after 70-140 seconds of inactivity (no incoming requests or events), the Durable Object will be evicted entirely from memory and potentially from the Cloudflare host and transition to the **inactive** state.
+
+Objects in the **hibernated** state keep their WebSocket clients connected, and the runtime decides if and when to move the object to a different host, thus restarting the lifecycle.
+
+The next incoming request or event starts the cycle again.
+
+As explained in [When does a Durable Object incur duration charges?](https://developers.cloudflare.com/durable-objects/platform/pricing/#when-does-a-durable-object-incur-duration-charges), a Durable Object incurs charges only when it is **actively running in-memory**, or when it is **idle in-memory and non-hibernateable**.
diff --git a/docs/context/cloudflare-docs-what-are-durable-objects.md b/docs/context/cloudflare-docs-what-are-durable-objects.md
new file mode 100644
index 0000000..ad119f9
--- /dev/null
+++ b/docs/context/cloudflare-docs-what-are-durable-objects.md
@@ -0,0 +1,124 @@
+---
+title: What are Durable Objects? · Cloudflare Durable Objects docs
+description: 'A Durable Object is a special kind of Cloudflare Worker which
+  uniquely combines compute with storage. Like a Worker, a Durable Object is
+  automatically provisioned geographically close to where it is first requested,
+  starts up quickly when needed, and shuts down when idle. You can have millions
+  of them around the world. However, unlike regular Workers:'
+lastUpdated: 2025-07-30T08:17:23.000Z
+chatbotDeprioritize: false
+source_url:
+  html: https://developers.cloudflare.com/durable-objects/concepts/what-are-durable-objects/
+  md: https://developers.cloudflare.com/durable-objects/concepts/what-are-durable-objects/index.md
+---
+
+A Durable Object is a special kind of [Cloudflare Worker](https://developers.cloudflare.com/workers/) which uniquely combines compute with storage.
Like a Worker, a Durable Object is automatically provisioned geographically close to where it is first requested, starts up quickly when needed, and shuts down when idle. You can have millions of them around the world. However, unlike regular Workers: + +- Each Durable Object has a **globally-unique name**, which allows you to send requests to a specific object from anywhere in the world. Thus, a Durable Object can be used to coordinate between multiple clients who need to work together. +- Each Durable Object has some **durable storage** attached. Since this storage lives together with the object, it is strongly consistent yet fast to access. + +Therefore, Durable Objects enable **stateful** serverless applications. + +## Durable Objects highlights + +Durable Objects have properties that make them a great fit for distributed stateful scalable applications. + +**Serverless compute, zero infrastructure management** + +- Durable Objects are built on-top of the Workers runtime, so they support exactly the same code (JavaScript and WASM), and similar memory and CPU limits. +- Each Durable Object is [implicitly created on first access](https://developers.cloudflare.com/durable-objects/api/namespace/#get). User applications are not concerned with their lifecycle, creating them or destroying them. Durable Objects migrate among healthy servers, and therefore applications never have to worry about managing them. +- Each Durable Object stays alive as long as requests are being processed, and remains alive for several seconds after being idle before hibernating, allowing applications to [exploit in-memory caching](https://developers.cloudflare.com/durable-objects/reference/in-memory-state/) while handling many consecutive requests and boosting their performance. 
+ +**Storage colocated with compute** + +- Each Durable Object has its own [durable, transactional, and strongly consistent storage](https://developers.cloudflare.com/durable-objects/api/storage-api/) (up to 10 GB[1](#user-content-fn-1)), persisted across requests, and accessible only within that object. + +**Single-threaded concurrency** + +- Each [Durable Object instance has an identifier](https://developers.cloudflare.com/durable-objects/api/id/), either randomly-generated or user-generated, which allows you to globally address which Durable Object should handle a specific action or request. +- Durable Objects are single-threaded and cooperatively multi-tasked, just like code running in a web browser. For more details on how safety and correctness are achieved, refer to the blog post ["Durable Objects: Easy, Fast, Correct — Choose three"](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/). + +**Elastic horizontal scaling across Cloudflare's global network** + +- Durable Objects can be spread around the world, and you can [optionally influence where each instance should be located](https://developers.cloudflare.com/durable-objects/reference/data-location/#provide-a-location-hint). Durable Objects are not yet available in every Cloudflare data center; refer to the [where.durableobjects.live](https://where.durableobjects.live/) project for live locations. +- Each Durable Object type (or ["Namespace binding"](https://developers.cloudflare.com/durable-objects/api/namespace/) in Cloudflare terms) corresponds to a JavaScript class implementing the actual logic. There is no hard limit on how many Durable Objects can be created for each namespace. +- Durable Objects scale elastically as your application creates millions of objects. There is no need for applications to manage infrastructure or plan ahead for capacity. 
+ +## Durable Objects features + +### In-memory state + +Each Durable Object has its own [in-memory state](https://developers.cloudflare.com/durable-objects/reference/in-memory-state/). Applications can use this in-memory state to optimize the performance of their applications by keeping important information in-memory, thereby avoiding the need to access the durable storage at all. + +Useful cases for in-memory state include batching and aggregating information before persisting it to storage, or for immediately rejecting/handling incoming requests meeting certain criteria, and more. + +In-memory state is reset when the Durable Object hibernates after being idle for some time. Therefore, it is important to persist any in-memory data to the durable storage if that data will be needed at a later time when the Durable Object receives another request. + +### Storage API + +The [Durable Object Storage API](https://developers.cloudflare.com/durable-objects/api/storage-api/) allows Durable Objects to access fast, transactional, and strongly consistent storage. A Durable Object's attached storage is private to its unique instance and cannot be accessed by other objects. + +There are two flavors of the storage API, a [key-value (KV) API](https://developers.cloudflare.com/durable-objects/api/storage-api/#kv-api) and an [SQL API](https://developers.cloudflare.com/durable-objects/api/storage-api/#sql-api). + +When using the [new SQLite in Durable Objects storage backend](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/#enable-sqlite-storage-backend-on-new-durable-object-class-migration), you have access to both the APIs. However, if you use the previous storage backend you only have access to the key-value API. + +### Alarms API + +Durable Objects provide an [Alarms API](https://developers.cloudflare.com/durable-objects/api/alarms/) which allows you to schedule the Durable Object to be woken up at a time in the future. 
This is useful when you want to do certain work periodically, or at some specific point in time, without having to manually manage infrastructure such as job scheduling runners on your own. + +You can combine Alarms with in-memory state and the durable storage API to build batch and aggregation applications such as queues, workflows, or advanced data pipelines. + +### WebSockets + +WebSockets are long-lived TCP connections that enable bi-directional, real-time communication between client and server. Because WebSocket sessions are long-lived, applications commonly use Durable Objects to accept either the client or server connection. + +Because Durable Objects provide a single-point-of-coordination between Cloudflare Workers, a single Durable Object instance can be used in parallel with WebSockets to coordinate between multiple clients, such as participants in a chat room or a multiplayer game. + +Durable Objects support the [WebSocket Standard API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/#websocket-standard-api), as well as the [WebSockets Hibernation API](https://developers.cloudflare.com/durable-objects/best-practices/websockets/#websocket-hibernation-api) which extends the Web Standard WebSocket API to reduce costs by not incurring billing charges during periods of inactivity. + +### RPC + +Durable Objects support Workers [Remote-Procedure-Call (RPC)](https://developers.cloudflare.com/workers/runtime-apis/rpc/) which allows applications to use JavaScript-native methods and objects to communicate between Workers and Durable Objects. + +Using RPC for communication makes application development easier and simpler to reason about, and more efficient. + +## Actor programming model + +Another way to describe and think about Durable Objects is through the lens of the [Actor programming model](https://en.wikipedia.org/wiki/Actor_model). 
There are several popular examples of the Actor model supported at the programming language level through runtimes or library frameworks, like [Erlang](https://www.erlang.org/), [Elixir](https://elixir-lang.org/), [Akka](https://akka.io/), or [Microsoft Orleans for .NET](https://learn.microsoft.com/en-us/dotnet/orleans/overview). + +The Actor model simplifies a lot of problems in distributed systems by abstracting away the communication between actors using RPC calls (or message sending) that could be implemented on-top of any transport protocol, and it avoids most of the concurrency pitfalls you get when doing concurrency through shared memory such as race conditions when multiple processes/threads access the same data in-memory. + +Each Durable Object instance can be seen as an Actor instance, receiving messages (incoming HTTP/RPC requests), executing some logic in its own single-threaded context using its attached durable storage or in-memory state, and finally sending messages to the outside world (outgoing HTTP/RPC requests or responses), even to another Durable Object instance. + +Each Durable Object has certain capabilities in terms of [how much work it can do](https://developers.cloudflare.com/durable-objects/platform/limits/#how-much-work-can-a-single-durable-object-do), which should influence the application's [architecture to fully take advantage of the platform](https://developers.cloudflare.com/reference-architecture/diagrams/storage/durable-object-control-data-plane-pattern/). + +Durable Objects are natively integrated into Cloudflare's infrastructure, giving you the ultimate serverless platform to build distributed stateful applications exploiting the entirety of Cloudflare's network. + +## Durable Objects in Cloudflare + +Many of Cloudflare's products use Durable Objects. Some of our technical blog posts showcase real-world applications and use-cases where Durable Objects make building applications easier and simpler. 
+
+These blog posts may also serve as inspiration on how to architect scalable applications using Durable Objects, and how to integrate them with the rest of the Cloudflare Developer Platform.
+
+- [Durable Objects aren't just durable, they're fast: a 10x speedup for Cloudflare Queues](https://blog.cloudflare.com/how-we-built-cloudflare-queues/)
+- [Behind the scenes with Stream Live, Cloudflare's live streaming service](https://blog.cloudflare.com/behind-the-scenes-with-stream-live-cloudflares-live-streaming-service/)
+- [DO it again: how we used Durable Objects to add WebSockets support and authentication to AI Gateway](https://blog.cloudflare.com/do-it-again/)
+- [Workers Builds: integrated CI/CD built on the Workers platform](https://blog.cloudflare.com/workers-builds-integrated-ci-cd-built-on-the-workers-platform/)
+- [Build durable applications on Cloudflare Workers: you write the Workflows, we take care of the rest](https://blog.cloudflare.com/building-workflows-durable-execution-on-workers/)
+- [Building D1: a Global Database](https://blog.cloudflare.com/building-d1-a-global-database/)
+- [Billions and billions (of logs): scaling AI Gateway with the Cloudflare Developer Platform](https://blog.cloudflare.com/billions-and-billions-of-logs-scaling-ai-gateway-with-the-cloudflare/)
+- [Indexing millions of HTTP requests using Durable Objects](https://blog.cloudflare.com/r2-rayid-retrieval/)
+
+Finally, the following blog posts may help you learn some of the technical implementation aspects of Durable Objects, and how they work.
+ +- [Durable Objects: Easy, Fast, Correct — Choose three](https://blog.cloudflare.com/durable-objects-easy-fast-correct-choose-three/) +- [Zero-latency SQLite storage in every Durable Object](https://blog.cloudflare.com/sqlite-in-durable-objects/) +- [Workers Durable Objects Beta: A New Approach to Stateful Serverless](https://blog.cloudflare.com/introducing-workers-durable-objects/) + +## Get started + +Get started now by following the ["Get started" guide](https://developers.cloudflare.com/durable-objects/get-started/) to create your first application using Durable Objects. + +## Footnotes + +1. Storage per Durable Object with SQLite is currently 1 GB. This will be raised to 10 GB for general availability. [↩](#user-content-fnref-1) diff --git a/docs/context/cloudflare-example-build-a-rate-limiter.md b/docs/context/cloudflare-example-build-a-rate-limiter.md new file mode 100644 index 0000000..9cb7ed9 --- /dev/null +++ b/docs/context/cloudflare-example-build-a-rate-limiter.md @@ -0,0 +1,290 @@ +--- +title: Build a rate limiter · Cloudflare Durable Objects docs +description: Build a rate limiter using Durable Objects and Workers. +lastUpdated: 2025-04-15T13:47:35.000Z +chatbotDeprioritize: false +source_url: + html: https://developers.cloudflare.com/durable-objects/examples/build-a-rate-limiter/ + md: https://developers.cloudflare.com/durable-objects/examples/build-a-rate-limiter/index.md +--- + +This example shows how to build a rate limiter using Durable Objects and Workers that can be used to protect upstream resources, including third-party APIs that your application relies on and/or services that may be costly for you to invoke. + +This example also discusses some decisions that need to be made when designing a system, such as a rate limiter, with Durable Objects. + +The Worker creates a `RateLimiter` Durable Object on a per IP basis to protect upstream resources. 
IP based rate limiting can be effective without negatively impacting latency because any given IP will remain within a small geographic area colocated with the `RateLimiter` Durable Object. Furthermore, throughput is also improved because each IP gets its own Durable Object. + +It might seem simpler to implement a global rate limiter, `const id = env.RATE_LIMITER.idFromName("global");`, which can provide better guarantees on the request rate to the upstream resource. However: + +- This would require all requests globally to make a sub-request to a single Durable Object. +- Implementing a global rate limiter would add additional latency for requests not colocated with the Durable Object, and global throughput would be capped to the throughput of a single Durable Object. +- A single Durable Object that all requests rely on is typically considered an anti-pattern. Durable Objects work best when they are scoped to a user, room, service and/or the specific subset of your application that requires global co-ordination. + +Note + +If you do not need unique or custom rate-limiting capabilities, refer to [Rate limiting rules](https://developers.cloudflare.com/waf/rate-limiting-rules/) that are part of Cloudflare's Web Application Firewall (WAF) product. + +The Durable Object uses a token bucket algorithm to implement rate limiting. The naive idea is that each request requires a token to complete, and the tokens are replenished according to the reciprocal of the desired number of requests per second. As an example, a 1000 requests per second rate limit will have a token replenished every millisecond (as specified by milliseconds_per_request) up to a given capacity limit. + +This example uses Durable Object's [Alarms API](https://developers.cloudflare.com/durable-objects/api/alarms) to schedule the Durable Object to be woken up at a time in the future. 
+ +- When the alarm's scheduled time comes, the `alarm()` handler method is called, and in this case, the alarm will add a token to the "Bucket". +- The implementation is made more efficient by adding tokens in bulk (as specified by milliseconds_for_updates) and preventing the alarm handler from being invoked every millisecond. More frequent invocations of Durable Objects will lead to higher invocation and duration charges. + +The first implementation of a rate limiter is below: + +- JavaScript + + ```js + import { DurableObject } from 'cloudflare:workers'; + + // Worker + export default { + async fetch(request, env, _ctx) { + // Determine the IP address of the client + const ip = request.headers.get('CF-Connecting-IP'); + if (ip === null) { + return new Response('Could not determine client IP', { status: 400 }); + } + + // Obtain an identifier for a Durable Object based on the client's IP address + const id = env.RATE_LIMITER.idFromName(ip); + + try { + const stub = env.RATE_LIMITER.get(id); + const milliseconds_to_next_request = await stub.getMillisecondsToNextRequest(); + if (milliseconds_to_next_request > 0) { + // Alternatively one could sleep for the necessary length of time + return new Response('Rate limit exceeded', { status: 429 }); + } + } catch (error) { + return new Response('Could not connect to rate limiter', { status: 502 }); + } + + // TODO: Implement me + return new Response('Call some upstream resource...'); + }, + }; + + // Durable Object + export class RateLimiter extends DurableObject { + static milliseconds_per_request = 1; + static milliseconds_for_updates = 5000; + static capacity = 10000; + + constructor(ctx, env) { + super(ctx, env); + this.tokens = RateLimiter.capacity; + } + + async getMillisecondsToNextRequest() { + this.checkAndSetAlarm(); + + let milliseconds_to_next_request = RateLimiter.milliseconds_per_request; + if (this.tokens > 0) { + this.tokens -= 1; + milliseconds_to_next_request = 0; + } + + return 
milliseconds_to_next_request; + } + + async checkAndSetAlarm() { + let currentAlarm = await this.ctx.storage.getAlarm(); + if (currentAlarm == null) { + this.ctx.storage.setAlarm(Date.now() + RateLimiter.milliseconds_for_updates * RateLimiter.milliseconds_per_request); + } + } + + async alarm() { + if (this.tokens < RateLimiter.capacity) { + this.tokens = Math.min(RateLimiter.capacity, this.tokens + RateLimiter.milliseconds_for_updates); + this.checkAndSetAlarm(); + } + } + } + ``` + +- TypeScript + + ```ts + import { DurableObject } from 'cloudflare:workers'; + + export interface Env { + RATE_LIMITER: DurableObjectNamespace; + } + + // Worker + export default { + async fetch(request, env, _ctx): Promise { + // Determine the IP address of the client + const ip = request.headers.get('CF-Connecting-IP'); + if (ip === null) { + return new Response('Could not determine client IP', { status: 400 }); + } + + // Obtain an identifier for a Durable Object based on the client's IP address + const id = env.RATE_LIMITER.idFromName(ip); + + try { + const stub = env.RATE_LIMITER.get(id); + const milliseconds_to_next_request = await stub.getMillisecondsToNextRequest(); + if (milliseconds_to_next_request > 0) { + // Alternatively one could sleep for the necessary length of time + return new Response('Rate limit exceeded', { status: 429 }); + } + } catch (error) { + return new Response('Could not connect to rate limiter', { status: 502 }); + } + + // TODO: Implement me + return new Response('Call some upstream resource...'); + }, + } satisfies ExportedHandler; + + // Durable Object + export class RateLimiter extends DurableObject { + static readonly milliseconds_per_request = 1; + static readonly milliseconds_for_updates = 5000; + static readonly capacity = 10000; + + tokens: number; + + constructor(ctx: DurableObjectState, env: Env) { + super(ctx, env); + this.tokens = RateLimiter.capacity; + } + + async getMillisecondsToNextRequest(): Promise { + this.checkAndSetAlarm(); + + let 
milliseconds_to_next_request = RateLimiter.milliseconds_per_request; + if (this.tokens > 0) { + this.tokens -= 1; + milliseconds_to_next_request = 0; + } + + return milliseconds_to_next_request; + } + + private async checkAndSetAlarm() { + let currentAlarm = await this.ctx.storage.getAlarm(); + if (currentAlarm == null) { + this.ctx.storage.setAlarm(Date.now() + RateLimiter.milliseconds_for_updates * RateLimiter.milliseconds_per_request); + } + } + + async alarm() { + if (this.tokens < RateLimiter.capacity) { + this.tokens = Math.min(RateLimiter.capacity, this.tokens + RateLimiter.milliseconds_for_updates); + this.checkAndSetAlarm(); + } + } + } + ``` + +While the token bucket algorithm is popular for implementing rate limiting and uses Durable Object features, there is a simpler approach: + +- JavaScript + + ```js + import { DurableObject } from 'cloudflare:workers'; + + // Durable Object + export class RateLimiter extends DurableObject { + static milliseconds_per_request = 1; + static milliseconds_for_grace_period = 5000; + + constructor(ctx, env) { + super(ctx, env); + this.nextAllowedTime = 0; + } + + async getMillisecondsToNextRequest() { + const now = Date.now(); + + this.nextAllowedTime = Math.max(now, this.nextAllowedTime); + this.nextAllowedTime += RateLimiter.milliseconds_per_request; + + const value = Math.max(0, this.nextAllowedTime - now - RateLimiter.milliseconds_for_grace_period); + return value; + } + } + ``` + +- TypeScript + + ```ts + import { DurableObject } from 'cloudflare:workers'; + + // Durable Object + export class RateLimiter extends DurableObject { + static milliseconds_per_request = 1; + static milliseconds_for_grace_period = 5000; + + nextAllowedTime: number; + + constructor(ctx: DurableObjectState, env: Env) { + super(ctx, env); + this.nextAllowedTime = 0; + } + + async getMillisecondsToNextRequest(): Promise { + const now = Date.now(); + + this.nextAllowedTime = Math.max(now, this.nextAllowedTime); + this.nextAllowedTime += 
RateLimiter.milliseconds_per_request; + + const value = Math.max(0, this.nextAllowedTime - now - RateLimiter.milliseconds_for_grace_period); + return value; + } + } + ``` + +Finally, configure your Wrangler file to include a Durable Object [binding](https://developers.cloudflare.com/durable-objects/get-started/#4-configure-durable-object-bindings) and [migration](https://developers.cloudflare.com/durable-objects/reference/durable-objects-migrations/) based on the namespace and class name chosen previously. + +- wrangler.jsonc + + ```jsonc + { + "name": "my-counter", + "main": "src/index.ts", + "durable_objects": { + "bindings": [ + { + "name": "RATE_LIMITER", + "class_name": "RateLimiter" + } + ] + }, + "migrations": [ + { + "tag": "v1", + "new_sqlite_classes": ["RateLimiter"] + } + ] + } + ``` + +- wrangler.toml + + ```toml + name = "my-counter" + main = "src/index.ts" + + + [[durable_objects.bindings]] + name = "RATE_LIMITER" + class_name = "RateLimiter" + + + [[migrations]] + tag = "v1" + new_sqlite_classes = ["RateLimiter"] + ``` + +### Related resources + +- Learn more about Durable Object's [Alarms API](https://developers.cloudflare.com/durable-objects/api/alarms) and how to configure alarms. +- [Understand how to troubleshoot](https://developers.cloudflare.com/durable-objects/observability/troubleshooting/) common errors related with Durable Objects. +- Review how [Durable Objects are priced](https://developers.cloudflare.com/durable-objects/platform/pricing/), including pricing examples. 
From d2ace6d1bfa5f54e37f5388ec1e4fb076f2674bd Mon Sep 17 00:00:00 2001
From: Jason Schrader
Date: Wed, 30 Jul 2025 20:00:11 -0700
Subject: [PATCH 6/8] docs: capture hiro platform docs related to plan

---
 ...iro-docs-chainhooks-endpoint-references.md | 312 ++++++++++++++++++
 .../hiro-docs-platform-api-basic-usage.md     |  42 +++
 .../hiro-docs-platform-api-overview.md        |   7 +
 3 files changed, 361 insertions(+)
 create mode 100644 docs/context/hiro-docs-chainhooks-endpoint-references.md
 create mode 100644 docs/context/hiro-docs-platform-api-basic-usage.md
 create mode 100644 docs/context/hiro-docs-platform-api-overview.md

diff --git a/docs/context/hiro-docs-chainhooks-endpoint-references.md b/docs/context/hiro-docs-chainhooks-endpoint-references.md
new file mode 100644
index 0000000..e182543
--- /dev/null
+++ b/docs/context/hiro-docs-chainhooks-endpoint-references.md
@@ -0,0 +1,312 @@
+# Get all chainhooks
+
+Get all of your chainhooks through the Hiro Platform
+
+## Endpoint
+
+```
+GET /v1/ext/{apiKey}/chainhooks
+```
+
+## Parameters
+
+### Path Parameters
+
+| Name   | Type   | Required | Description  |
+| ------ | ------ | -------- | ------------ |
+| apiKey | string | Yes      | Hiro API key |
+
+## Request Example
+
+```bash
+curl -X GET "https://api.platform.hiro.so/v1/ext/example/chainhooks"
+```
+
+## Response
+
+### 200 OK
+
+Default Response
+
+```json
+[{}]
+```
+
+## Authentication
+
+This endpoint does not require authentication.
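As a sketch of how a relay or management script might consume this endpoint: the URL follows the pattern documented above, while `listChainhooks` and the injected `fetchImpl` are illustrative names, not part of any official SDK.

```javascript
// Illustrative client for the "get all chainhooks" endpoint. `fetchImpl`
// is injected so the helper can be exercised without real network access.
const BASE_URL = "https://api.platform.hiro.so";

async function listChainhooks(apiKey, fetchImpl = fetch) {
  const res = await fetchImpl(`${BASE_URL}/v1/ext/${apiKey}/chainhooks`, {
    headers: { Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`chainhook list failed: ${res.status}`);
  return res.json();
}

// Stubbed fetch returning the documented 200 body ([{}]):
const fakeFetch = async (url) => {
  if (!url.endsWith("/v1/ext/example/chainhooks")) {
    throw new Error(`unexpected URL: ${url}`);
  }
  return { ok: true, status: 200, json: async () => [{}] };
};

listChainhooks("example", fakeFetch).then((hooks) => {
  console.log(hooks.length); // 1
});
```

Injecting the fetch implementation keeps the helper trivially unit-testable; in a Worker the global `fetch` would be used as the default.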
+
+# Get a specific chainhook
+
+Get a specific chainhook through the Hiro Platform
+
+## Endpoint
+
+```
+GET /v1/ext/{apiKey}/chainhooks/{chainhookUuid}
+```
+
+## Parameters
+
+### Path Parameters
+
+| Name          | Type   | Required | Description    |
+| ------------- | ------ | -------- | -------------- |
+| apiKey        | string | Yes      | Hiro API key   |
+| chainhookUuid | string | Yes      | Chainhook UUID |
+
+## Request Example
+
+```bash
+curl -X GET "https://api.platform.hiro.so/v1/ext/0941f307fd270ace19a5bfed67fbd3bc/chainhooks/aa3626dc-2090-49cd-8f1e-8f9994393aed"
+```
+
+## Response
+
+### 200 OK
+
+Default Response
+
+```json
+{}
+```
+
+### 404 Not Found
+
+Default Response
+
+## Authentication
+
+This endpoint does not require authentication.
+
+# Get a chainhook status
+
+Retrieve the status of a specific chainhook through the Hiro Platform
+
+## Endpoint
+
+```
+GET /v1/ext/{apiKey}/chainhooks/{chainhookUuid}/status
+```
+
+## Parameters
+
+### Path Parameters
+
+| Name          | Type   | Required | Description    |
+| ------------- | ------ | -------- | -------------- |
+| apiKey        | string | Yes      | Hiro API key   |
+| chainhookUuid | string | Yes      | Chainhook UUID |
+
+## Request Example
+
+```bash
+curl -X GET "https://api.platform.hiro.so/v1/ext/0941f307fd270ace19a5bfed67fbd3bc/chainhooks/aa3626dc-2090-49cd-8f1e-8f9994393aed/status"
+```
+
+## Response
+
+### 200 OK
+
+Successfully retrieved chainhook status
+
+```json
+{
+  "status": {
+    "info": {
+      "expired_at_block_height": 1,
+      "last_evaluated_block_height": 1,
+      "last_occurrence": 1,
+      "number_of_blocks_evaluated": 1,
+      "number_of_times_triggered": 1
+    },
+    "type": "string"
+  },
+  "enabled": true
+}
+```
+
+### 404 Not Found
+
+Chainhook not found
+
+## Authentication
+
+This endpoint does not require authentication.
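The status payload above is what a periodic health monitor would inspect when deciding whether a chainhook needs to be recreated. The sketch below reduces it to a single staleness verdict; the one-hour threshold and the reading of `last_occurrence` as a Unix timestamp in seconds are assumptions for illustration.

```javascript
// Sketch of a health check over the documented status response. Field names
// (`enabled`, `status.info.last_occurrence`) come from the 200 example above.
function isChainhookStale(statusResponse, nowSecs, maxAgeSecs = 3600) {
  if (!statusResponse.enabled) return true; // disabled hooks need recreation
  const last = statusResponse.status?.info?.last_occurrence;
  if (typeof last !== "number") return true; // never triggered: treat as stale
  return nowSecs - last > maxAgeSecs;
}

const status = {
  status: {
    info: {
      expired_at_block_height: 1,
      last_evaluated_block_height: 1,
      last_occurrence: 1000,
      number_of_blocks_evaluated: 1,
      number_of_times_triggered: 1,
    },
    type: "string",
  },
  enabled: true,
};

console.log(isChainhookStale(status, 1000 + 60));   // false: triggered a minute ago
console.log(isChainhookStale(status, 1000 + 7200)); // true: silent for two hours
```

Keeping the verdict in a pure function means the caller (for example, an `alarm()` handler) only has to fetch the status and act on a boolean.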
+
+# Create a chainhook
+
+Create a chainhook through the Hiro Platform
+
+## Endpoint
+
+```
+POST /v1/ext/{apiKey}/chainhooks
+```
+
+## Parameters
+
+### Path Parameters
+
+| Name   | Type   | Required | Description  |
+| ------ | ------ | -------- | ------------ |
+| apiKey | string | Yes      | Hiro API key |
+
+## Request Body
+
+Chainhook predicate configuration
+
+**Required**: Yes
+
+### Content Type: `application/json`
+
+```json
+{}
+```
+
+## Request Example
+
+```bash
+curl -X POST "https://api.platform.hiro.so/v1/ext/0941f307fd270ace19a5bfed67fbd3bc/chainhooks"
+```
+
+## Response
+
+### 200 OK
+
+Default Response
+
+```json
+{
+  "status": "string",
+  "chainhookUuid": "string"
+}
+```
+
+## Authentication
+
+This endpoint does not require authentication.
+
+# Update a chainhook
+
+Update a chainhook through the Hiro Platform
+
+## Endpoint
+
+```
+PUT /v1/ext/{apiKey}/chainhooks/{chainhookUuid}
+```
+
+## Parameters
+
+### Path Parameters
+
+| Name          | Type   | Required | Description    |
+| ------------- | ------ | -------- | -------------- |
+| apiKey        | string | Yes      | Hiro API key   |
+| chainhookUuid | string | Yes      | Chainhook UUID |
+
+## Request Body
+
+Chainhook predicate configuration
+
+**Required**: No
+
+### Content Type: `application/json`
+
+```json
+{}
+```
+
+## Request Example
+
+```bash
+curl -X PUT "https://api.platform.hiro.so/v1/ext/0941f307fd270ace19a5bfed67fbd3bc/chainhooks/aa3626dc-2090-49cd-8f1e-8f9994393aed"
+```
+
+## Response
+
+### 200 OK
+
+Default Response
+
+```json
+{
+  "status": "string",
+  "chainhookUuid": "string"
+}
+```
+
+### 404 Not Found
+
+Default Response
+
+### 500 Internal Server Error
+
+Default Response
+
+### Error Responses
+
+| Status | Description      |
+| ------ | ---------------- |
+| 404    | Default Response |
+| 500    | Default Response |
+
+## Authentication
+
+This endpoint does not require authentication.
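The request body is only described as "Chainhook predicate configuration" here, so the builder below is a hedged sketch: the `if_this`/`then_that` shape follows the general Chainhook predicate format and should be verified against the Chainhook documentation, and every name and URL in it is illustrative.

```javascript
// Hypothetical builder for the POST body. The predicate shape
// (chain/networks/if_this/then_that) is an assumption based on the
// general Chainhook predicate format, not on this endpoint reference.
function buildBlockPredicate({ name, network, deliveryUrl, authHeader }) {
  return {
    name,
    chain: "stacks",
    version: 1,
    networks: {
      [network]: {
        // Fire on every new block above the given height.
        if_this: { scope: "block_height", higher_than: 0 },
        then_that: {
          http_post: { url: deliveryUrl, authorization_header: authHeader },
        },
      },
    },
  };
}

const body = buildBlockPredicate({
  name: "relay-blocks",
  network: "mainnet",
  deliveryUrl: "https://example.com/chainhook-event/relay", // placeholder
  authHeader: "Bearer secret",
});

console.log(body.networks.mainnet.then_that.http_post.url);
```

The returned object would be serialized with `JSON.stringify` and POSTed to `/v1/ext/{apiKey}/chainhooks` as shown in the request example above.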
+
+# Delete a chainhook
+
+Delete a chainhook through the Hiro Platform
+
+## Endpoint
+
+```
+DELETE /v1/ext/{apiKey}/chainhooks/{chainhookUuid}
+```
+
+## Parameters
+
+### Path Parameters
+
+| Name          | Type   | Required | Description    |
+| ------------- | ------ | -------- | -------------- |
+| apiKey        | string | Yes      | Hiro API key   |
+| chainhookUuid | string | Yes      | Chainhook UUID |
+
+## Request Example
+
+```bash
+curl -X DELETE "https://api.platform.hiro.so/v1/ext/0941f307fd270ace19a5bfed67fbd3bc/chainhooks/aa3626dc-2090-49cd-8f1e-8f9994393aed"
+```
+
+## Response
+
+### 200 OK
+
+Default Response
+
+```json
+{
+  "status": "string",
+  "chainhookUuid": "string",
+  "message": "string"
+}
+```
+
+## Authentication
+
+This endpoint does not require authentication.
diff --git a/docs/context/hiro-docs-platform-api-basic-usage.md b/docs/context/hiro-docs-platform-api-basic-usage.md
new file mode 100644
index 0000000..c2330c8
--- /dev/null
+++ b/docs/context/hiro-docs-platform-api-basic-usage.md
@@ -0,0 +1,42 @@
+## Usage
+
+The Platform API is built on REST principles, enforcing HTTPS for all requests to ensure data security, integrity, and privacy.
+
+### Base URL
+
+```console
+https://api.platform.hiro.so
+```
+
+### Making requests
+
+To make a request to the Platform API, you can paste the curl command below in your terminal.
+
+```terminal
+$ curl -L 'https://api.platform.hiro.so/v1/ext/{apiKey}/chainhooks' \
+  -H 'Accept: application/json'
+```
+
+### Authentication
+
+The Platform API uses path-based authentication. Replace `{apiKey}` in the URL path with your Platform API key.
+
+```terminal
+$ curl -L 'https://api.platform.hiro.so/v1/ext/{apiKey}/chainhooks' \
+  -H 'Accept: application/json' \
+  -H 'Content-Type: application/json'
+```
+
+## Response codes
+
+The Platform API uses standard HTTP response codes to indicate request success or failure.
+
+| Code  | Description                            |
+| ----- | -------------------------------------- |
+| `200` | Successful request                     |
+| `400` | Check that the parameters were correct |
+| `401` | Missing API key                        |
+| `403` | Invalid API key                        |
+| `404` | Resource not found                     |
+| `429` | Rate limit exceeded                    |
+| `5xx` | Server errors                          |
diff --git a/docs/context/hiro-docs-platform-api-overview.md b/docs/context/hiro-docs-platform-api-overview.md
new file mode 100644
index 0000000..2773e81
--- /dev/null
+++ b/docs/context/hiro-docs-platform-api-overview.md
@@ -0,0 +1,7 @@
+# Platform API
+
+Programmatic access to devnet and chainhook management via REST.
+
+## Overview
+
+The Platform API provides programmatic control over Hiro Platform services through a REST interface. Built for automation and CI/CD integration, it enables you to create and manage chainhooks, control platform-hosted devnets, and interact with blockchain services directly from your application code.

From e842ac9f5bed23c32ebcd06b577de8a6a16797f9 Mon Sep 17 00:00:00 2001
From: Jason Schrader
Date: Wed, 30 Jul 2025 20:15:01 -0700
Subject: [PATCH 7/8] docs: Add ACTION_PLAN.md for ChainhookAggregatorDO implementation

Co-authored-by: aider (openrouter/google/gemini-2.5-pro)
---
 docs/ACTION_PLAN.md | 62 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)
 create mode 100644 docs/ACTION_PLAN.md

diff --git a/docs/ACTION_PLAN.md b/docs/ACTION_PLAN.md
new file mode 100644
index 0000000..dcc2644
--- /dev/null
+++ b/docs/ACTION_PLAN.md
@@ -0,0 +1,62 @@
+# Action Plan: Create `ChainhookAggregatorDO`
+
+This plan outlines the steps to create a new Durable Object, `ChainhookAggregatorDO`, for managing and relaying blockchain events from an external chainhook service. It follows modern Cloudflare best practices, including RPC-style communication and SQLite-backed storage, and aligns with the project's existing architecture as described in `docs/START.md`.
+
+## Phase 1: Scaffolding and Configuration
+
+1. **Create New Durable Object File**:
+   * Create `src/durable-objects/chainhook-aggregator-do.ts`.
+   * Define the `ChainhookAggregatorDO` class extending `DurableObject`.
+   * Add a constructor and placeholder RPC methods (`handleEvent`, `getStatus`) and an `alarm()` handler to establish the basic structure.
+
+2. **Update Worker Configuration**:
+   * In `worker-configuration.d.ts`, update the `Env` interface to include:
+     ```typescript
+     CHAINHOOK_AGGREGATOR_DO: DurableObjectNamespace;
+     HIRO_CHAINHOOK_API_KEY: string; // For the external service
+     RELAY_WORKER_URL: string; // The endpoint for the relay logic
+     ```
+   * In `wrangler.toml` (or `.jsonc`):
+     * Add a new durable object binding for `CHAINHOOK_AGGREGATOR_DO`.
+     * Add a migration for `ChainhookAggregatorDO` using `new_sqlite_classes` to enable the SQLite backend.
+     * Add secrets for `HIRO_CHAINHOOK_API_KEY` and `RELAY_WORKER_URL`.
+
+3. **Update Entrypoint for Routing (`src/index.ts`)**:
+   * Export the new `ChainhookAggregatorDO` class.
+   * Add a new route like `/chainhook-event/[do-name]` to the main `fetch` handler. This will be the public endpoint the external chainhook service calls.
+   * This route handler will:
+     1. Get the DO stub via `env.CHAINHOOK_AGGREGATOR_DO.idFromName('[do-name]')`.
+     2. Call an RPC method on the stub (e.g., `await stub.handleEvent(request)`), forwarding the request.
+
+## Phase 2: Durable Object Core Logic
+
+4. **Implement the `ChainhookAggregatorDO` Constructor**:
+   * Initialize services like `Logger`.
+   * Use `ctx.blockConcurrencyWhile()` to load the DO's state (e.g., `chainhook_id`, `last_block_hash`) from `this.ctx.storage`. If state doesn't exist, trigger the initial chainhook creation logic and set the first alarm.
+
+5. **Implement State Management**:
+   * Define class properties for the DO's state (`chainhook_id`, `last_activity_timestamp`, etc.).
+   * Use `this.ctx.storage.put()` to persist state to durable storage after it's modified.
+
+6. **Implement RPC Method: `handleEvent(request)`**:
+   * This method receives the webhook payload from the main worker.
+   * **Log the raw request body.** This fulfills the requirement to capture the payload structure for future typing.
+   * Extract the `block_hash` and update `last_activity_timestamp` in storage.
+   * Forward the payload to the `RELAY_WORKER_URL` using `fetch()`.
+   * Log the result of the forwarding action.
+
+7. **Implement RPC Method: `getStatus()`**:
+   * Create this method to return the DO's current state from memory for debugging purposes, as planned in `START.md`.
+
+## Phase 3: Lifecycle Management and External API Interaction
+
+8. **Implement `alarm()` Handler**:
+   * The alarm will periodically trigger this method.
+   * Inside, implement the health check: call the Hiro Platform API to get the status of the managed `chainhook_id`.
+   * If the hook is unhealthy or stale, log the issue and trigger a recreation method.
+   * Finally, call `this.ctx.storage.setAlarm()` to schedule the next health check.
+
+9. **Implement Chainhook Creation/Recreation Logic**:
+   * Create a private method (e.g., `_recreateChainhook()`).
+   * This method will make an authenticated API call to the Hiro Platform to create a new chainhook, providing the public URL for the webhook (`/chainhook-event/[do-name]`).
+   * It will store the new `chainhook_id` in `this.ctx.storage`.

From eac465822a87f55c4bb9a249a77c5f501b89b94e Mon Sep 17 00:00:00 2001
From: Jason Schrader
Date: Wed, 30 Jul 2025 20:18:38 -0700
Subject: [PATCH 8/8] fix: formatting, name change

---
 docs/ACTION_PLAN.md | 76 ++++++++++++++++++++++++---------------------
 1 file changed, 41 insertions(+), 35 deletions(-)

diff --git a/docs/ACTION_PLAN.md b/docs/ACTION_PLAN.md
index dcc2644..007ce01 100644
--- a/docs/ACTION_PLAN.md
+++ b/docs/ACTION_PLAN.md
@@ -5,58 +5,64 @@ This plan outlines the steps to create a new Durable Object, `ChainhookAggregato
 
 ## Phase 1: Scaffolding and Configuration
 
 1. **Create New Durable Object File**:
-   * Create `src/durable-objects/chainhook-aggregator-do.ts`.
-   * Define the `ChainhookAggregatorDO` class extending `DurableObject`.
-   * Add a constructor and placeholder RPC methods (`handleEvent`, `getStatus`) and an `alarm()` handler to establish the basic structure.
+
+   - Create `src/durable-objects/chainhook-aggregator-do.ts`.
+   - Define the `ChainhookAggregatorDO` class extending `DurableObject`.
+   - Add a constructor and placeholder RPC methods (`handleEvent`, `getStatus`) and an `alarm()` handler to establish the basic structure.
 
 2. **Update Worker Configuration**:
-   * In `worker-configuration.d.ts`, update the `Env` interface to include:
-     ```typescript
-     CHAINHOOK_AGGREGATOR_DO: DurableObjectNamespace;
-     HIRO_CHAINHOOK_API_KEY: string; // For the external service
-     RELAY_WORKER_URL: string; // The endpoint for the relay logic
-     ```
-   * In `wrangler.toml` (or `.jsonc`):
-     * Add a new durable object binding for `CHAINHOOK_AGGREGATOR_DO`.
-     * Add a migration for `ChainhookAggregatorDO` using `new_sqlite_classes` to enable the SQLite backend.
-     * Add secrets for `HIRO_CHAINHOOK_API_KEY` and `RELAY_WORKER_URL`.
+
+   - In `worker-configuration.d.ts`, update the `Env` interface to include:
+     ```typescript
+     CHAINHOOK_AGGREGATOR_DO: DurableObjectNamespace;
+     HIRO_PLATFORM_API_KEY: string; // For the external service
+     RELAY_WORKER_URL: string; // The endpoint for the relay logic
+     ```
+   - In `wrangler.toml` (or `.jsonc`):
+     - Add a new durable object binding for `CHAINHOOK_AGGREGATOR_DO`.
+     - Add a migration for `ChainhookAggregatorDO` using `new_sqlite_classes` to enable the SQLite backend.
+     - Add secrets for `HIRO_PLATFORM_API_KEY` and `RELAY_WORKER_URL`.
 
 3. **Update Entrypoint for Routing (`src/index.ts`)**:
-   * Export the new `ChainhookAggregatorDO` class.
-   * Add a new route like `/chainhook-event/[do-name]` to the main `fetch` handler. This will be the public endpoint the external chainhook service calls.
-   * This route handler will:
-     1. Get the DO stub via `env.CHAINHOOK_AGGREGATOR_DO.idFromName('[do-name]')`.
-     2. Call an RPC method on the stub (e.g., `await stub.handleEvent(request)`), forwarding the request.
+   - Export the new `ChainhookAggregatorDO` class.
+   - Add a new route like `/chainhook-event/[do-name]` to the main `fetch` handler. This will be the public endpoint the external chainhook service calls.
+   - This route handler will:
+     1. Get the DO stub via `env.CHAINHOOK_AGGREGATOR_DO.idFromName('[do-name]')`.
+     2. Call an RPC method on the stub (e.g., `await stub.handleEvent(request)`), forwarding the request.
 
 ## Phase 2: Durable Object Core Logic
 
 4. **Implement the `ChainhookAggregatorDO` Constructor**:
-   * Initialize services like `Logger`.
-   * Use `ctx.blockConcurrencyWhile()` to load the DO's state (e.g., `chainhook_id`, `last_block_hash`) from `this.ctx.storage`. If state doesn't exist, trigger the initial chainhook creation logic and set the first alarm.
+
+   - Initialize services like `Logger`.
+   - Use `ctx.blockConcurrencyWhile()` to load the DO's state (e.g., `chainhook_id`, `last_block_hash`) from `this.ctx.storage`. If state doesn't exist, trigger the initial chainhook creation logic and set the first alarm.
 
 5. **Implement State Management**:
-   * Define class properties for the DO's state (`chainhook_id`, `last_activity_timestamp`, etc.).
-   * Use `this.ctx.storage.put()` to persist state to durable storage after it's modified.
+
+   - Define class properties for the DO's state (`chainhook_id`, `last_activity_timestamp`, etc.).
+   - Use `this.ctx.storage.put()` to persist state to durable storage after it's modified.
 
 6. **Implement RPC Method: `handleEvent(request)`**:
-   * This method receives the webhook payload from the main worker.
-   * **Log the raw request body.** This fulfills the requirement to capture the payload structure for future typing.
-   * Extract the `block_hash` and update `last_activity_timestamp` in storage.
-   * Forward the payload to the `RELAY_WORKER_URL` using `fetch()`.
-   * Log the result of the forwarding action.
+
+   - This method receives the webhook payload from the main worker.
+   - **Log the raw request body.** This fulfills the requirement to capture the payload structure for future typing.
+   - Extract the `block_hash` and update `last_activity_timestamp` in storage.
+   - Forward the payload to the `RELAY_WORKER_URL` using `fetch()`.
+   - Log the result of the forwarding action.
 
 7. **Implement RPC Method: `getStatus()`**:
-   * Create this method to return the DO's current state from memory for debugging purposes, as planned in `START.md`.
+   - Create this method to return the DO's current state from memory for debugging purposes, as planned in `START.md`.
 
 ## Phase 3: Lifecycle Management and External API Interaction
 
 8. **Implement `alarm()` Handler**:
-   * The alarm will periodically trigger this method.
-   * Inside, implement the health check: call the Hiro Platform API to get the status of the managed `chainhook_id`.
-   * If the hook is unhealthy or stale, log the issue and trigger a recreation method.
-   * Finally, call `this.ctx.storage.setAlarm()` to schedule the next health check.
+
+   - The alarm will periodically trigger this method.
+   - Inside, implement the health check: call the Hiro Platform API to get the status of the managed `chainhook_id`.
+   - If the hook is unhealthy or stale, log the issue and trigger a recreation method.
+   - Finally, call `this.ctx.storage.setAlarm()` to schedule the next health check.
 
 9. **Implement Chainhook Creation/Recreation Logic**:
-   * Create a private method (e.g., `_recreateChainhook()`).
-   * This method will make an authenticated API call to the Hiro Platform to create a new chainhook, providing the public URL for the webhook (`/chainhook-event/[do-name]`).
-   * It will store the new `chainhook_id` in `this.ctx.storage`.
+   - Create a private method (e.g., `_recreateChainhook()`).
+   - This method will make an authenticated API call to the Hiro Platform to create a new chainhook, providing the public URL for the webhook (`/chainhook-event/[do-name]`).
+   - It will store the new `chainhook_id` in `this.ctx.storage`.