Workshop: Build serverless edge applications with Cloudflare Workers and Fauna
In this workshop, you will learn how to build a distributed serverless application using Cloudflare Workers and Fauna. Traditional databases often present challenges for modern, edge-compute systems due to limitations in scalability, latency, and flexibility. Edge-compute systems require low latency and high availability, but conventional databases are often centralized, leading to increased response times and potential bottlenecks. They also require complex infrastructure to handle scaling, consistency, and performance at a global level.
To address these challenges, a distributed serverless architecture such as the combination of Cloudflare Workers and Fauna is a good fit. This approach runs application code on a globally distributed network of servers, ensuring low-latency responses regardless of user location. Fauna adds true serverless scalability, multi-region support, and strong consistency, making it well suited to modern applications that demand high availability and predictable performance.
Why Cloudflare Workers and Fauna?
Cloudflare Workers are serverless functions that run on Cloudflare’s edge network. They can be written in JavaScript, TypeScript, Rust, or Python and let you build serverless applications that run close to your users, reducing latency and improving performance.
Fauna is a globally distributed, low-latency, strongly consistent serverless database delivered as an API. It is designed to work well with serverless functions like Cloudflare Workers and provides a powerful, flexible data platform for building modern applications. Because Fauna is globally distributed, your data is always close to your users, which further reduces latency and improves performance.
By combining Cloudflare Workers and Fauna, you can build serverless applications that are fast, reliable, and scalable, with low operational overhead.
Prerequisites
To follow along with this workshop, you need:
- Node.js and npm installed.
- A Cloudflare account.
- A Fauna account.
- The Fauna CLI installed and configured with an endpoint for your account.
Creating the Cloudflare Worker
- Install Cloudflare Wrangler:

  npm install -g wrangler@latest

  Ensure Cloudflare Wrangler is v3.87 or higher.

- Create a new Cloudflare Worker project:

  npm create cloudflare -- my-fauna-worker
  cd my-fauna-worker

  When running npm create cloudflare, you’re prompted with several questions. When asked which example or template to use, choose "Hello World". For the language, choose "TypeScript". When asked whether you want to deploy your application, select "No".

- Open the newly created project in your favorite code editor. The generated src/index.ts looks roughly like the sketch below.
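For reference, the generated src/index.ts is a single fetch handler. The exact contents depend on your Wrangler version, but the Hello World template looks roughly like this sketch:

export interface Env {}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // The template Worker simply returns a static response.
    return new Response('Hello World!');
  },
};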
Create a Fauna Database
Next, create a Fauna database. You can create a new database from the Fauna dashboard or using the Fauna CLI. For this workshop, we will create a new database using the Fauna CLI.
- Create a new Fauna database:

  fauna create-database mydb

- Initialize a new Fauna project directory in the Cloudflare Worker project:

  fauna project init

  When prompted, enter:

  - schema for the schema directory used to store schema files. If the directory doesn’t exist, the command creates it.
  - A default environment name, such as dev. See Environments for more info.
  - A default endpoint to use for Fauna CLI commands. Enter the endpoint name you set up while installing the Fauna CLI.
  - An existing default database for the project. In this case, select mydb.

- Deploy the app to register it with Cloudflare Workers:

  wrangler deploy
Integrating Fauna with Cloudflare Workers
You can integrate Fauna using the Cloudflare dashboard or the Wrangler CLI. For this workshop, use the Cloudflare dashboard.
- Open the Cloudflare dashboard and navigate to the Workers & Pages section.
- Select the my-fauna-worker Worker you created earlier.
- Select the Integrations tab.
- Under Fauna, select Add Integration and authenticate with your existing Fauna account.
- When prompted, select the Fauna database you created earlier.
- Select a database security role. For this workshop, you can select the server role. For a production deployment, you should first create a custom role that limits the Worker’s access to only the collections and actions it needs.
Accessing data from Fauna in Cloudflare Workers
You can use the Fauna driver to access data from Fauna in your Cloudflare Workers. Because Fauna is delivered as an API, the driver is a lightweight wrapper around that API that makes it easy to interact with Fauna databases from your Workers.
- Install the Fauna driver in your Cloudflare Worker project:

  npm install fauna

- Replace the Hello World template script with the following code in src/index.ts:

  import { Client, fql } from "fauna";

  export interface Env {
    FAUNA_SECRET: string;
  }

  export default {
    async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
      // Make a query to Fauna
      const client = new Client({ secret: env.FAUNA_SECRET });
      try {
        const result = await client.query(fql`
          Product.all()
        `);
        return new Response(JSON.stringify(result.data));
      } catch (error) {
        return new Response("An error occurred", { status: 500 });
      }
    },
  };
Define data relationships with FSL
We will define a one-to-many relationship between two collections using FSL. In the sample application, we will have Product and Category collections. Each product belongs to a category.
- Go back to the root project directory, then into the schema directory:

  cd ../schema

- Create two files, product.fsl and category.fsl, in the schema directory.

- Add the following code to the category.fsl file:

  collection Category {
    name: String
    description: String
    compute products: Set<Product> = (category => Product.byCategory(category))

    unique [.name]

    index byName {
      terms [.name]
    }
  }

- Add the following code to the product.fsl file:

  collection Product {
    name: String
    description: String
    // Use an Integer to represent cents.
    // This avoids floating-point precision issues.
    price: Int
    category: Ref<Category>?
    stock: Int

    // Use a unique constraint to ensure no two products have the same name.
    unique [.name]

    check stockIsValid (product => product.stock >= 0)
    check priceIsValid (product => product.price > 0)

    index byCategory {
      terms [.category]
    }

    index sortedByCategory {
      values [.category]
    }

    index byName {
      terms [.name]
    }

    index sortedByPriceLowToHigh {
      values [.price, .name, .description, .stock]
    }
  }

- Go back to the root project directory:

  cd ..

- Run the following command to push the schema to Fauna:

  fauna schema push

  When prompted, accept and stage the schema.

- Check the status of the staged schema:

  fauna schema status

- When the status is ready, commit the staged schema to the database:

  fauna schema commit

  The commit applies the staged schema to the database.
Document-relational model
One of Fauna’s key strengths is its flexibility to support both document and relational data patterns, making it suitable for a wide range of use cases.
In the example above, we demonstrated how to define relationships using FSL (Fauna Schema Language). You can think of the Product and Category collections as representing a typical relational model (one-to-many), where products are linked to categories.
What makes Fauna unique is its capability to perform relational-like joins within a document-based system.
For example, in the Product collection, the category field is a reference to a Category document. You can query all the products in a specific category by filtering on the category field in the Product collection:
// Get all products in the Electronics category
Product.where(.category == Category.byName("Electronics").first())
To optimize this query, we created the byCategory() index in the Product collection. You can use the byCategory() index to query all products in a specific category:
Product.byCategory(Category.byName("Electronics").first())
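You can run the same index query from your Worker through the Fauna driver. Here is a minimal sketch (it assumes the schema above has been committed and reuses the Client setup from the earlier Worker code); it uses projection to return each product together with its category’s name:

// Sketch: query the byCategory index from a Worker and project fields.
// Assumes `client` is the Fauna Client created from env.FAUNA_SECRET above.
const electronics = await client.query(fql`
  Product.byCategory(Category.byName("Electronics").first()) {
    name,
    price,
    category {
      name
    }
  }
`);
// electronics.data holds the first page of matching products.
return new Response(JSON.stringify(electronics.data));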
Fauna gives you SQL-like relational capabilities while maintaining the flexibility of a document database.
Learn more about data relationships in Fauna.
Adding REST endpoints
We will add REST endpoints to our Cloudflare Worker to interact with the Fauna database.
Add Fauna secret to wrangler for local development
First, create a new Fauna secret and add it to your wrangler.toml file so that you can run and test the Worker locally. The Fauna integration you added earlier supplies FAUNA_SECRET to the deployed Worker; this wrangler.toml variable is only used for local development.
- Create a new Fauna secret:

  fauna create-key --environment='' mydb server

- Add the secret to your wrangler.toml file:

  [vars]
  # Fauna variables
  FAUNA_SECRET = "<Your-Generated-Secret>"

- Replace the code in src/index.ts with the following:
import { Client, fql, FaunaError } from 'fauna';

export interface Env {
  FAUNA_SECRET: string;
}

interface RequestBody {
  operation: 'create' | 'update' | 'delete';
  id?: string; // Only required for "update" and "delete" operations
  fields?: Record<string, any>; // Data fields for "create" and "update" operations
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Extract the method from the request
    const { method } = request;
    switch (method) {
      case 'GET':
        return getAllProducts(env);
      case 'POST':
        const body = (await request.json()) as RequestBody;
        return createNewProduct(body, env);
      default:
        return new Response('Method Not Allowed', { status: 405 });
    }
  },
};
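Note that this handler routes on the HTTP method only; it does not inspect the URL path, so requests to / and /products are handled identically. If you want to restrict it to the /products path used in the examples below, one possible refinement (not part of the workshop code) is a path check at the top of fetch:

// Hypothetical path check: only handle requests to /products.
const { pathname } = new URL(request.url);
if (pathname !== '/products') {
  return new Response('Not Found', { status: 404 });
}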
Next up, let’s implement the getAllProducts and createNewProduct functions to get all products and create a new product, respectively.
GET products endpoint
Implement the following code at the bottom of src/index.ts to get all products from Fauna:
async function getAllProducts(env: Env): Promise<Response> {
  // Custom GET logic here (e.g., fetching data from Fauna)
  const client = new Client({ secret: env.FAUNA_SECRET });
  try {
    const result = await client.query(fql`
      Product.all()
    `);
    return new Response(JSON.stringify(result.data));
  } catch (error) {
    if (error instanceof FaunaError) {
      return new Response(error.message, { status: 500 });
    }
    return new Response('An error occurred', { status: 500 });
  }
}
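Product.all() returns results one page at a time, so getAllProducts only returns the first page. If your catalog grows beyond a single page, you can walk the set with FQL’s pageSize() and Set.paginate(). The following is a rough sketch (the page size of 20 is arbitrary, and client is the Fauna Client created above):

// Sketch: paging through all products using the `after` cursor.
const products: unknown[] = [];

// Fetch the first page, asking for up to 20 documents per page.
const firstPage = await client.query<{ data: unknown[]; after?: string }>(fql`
  Product.all().pageSize(20)
`);
products.push(...firstPage.data.data);

// Follow the `after` cursor until Fauna reports no more pages.
let after = firstPage.data.after;
while (after) {
  const nextPage = await client.query<{ data: unknown[]; after?: string }>(fql`
    Set.paginate(${after})
  `);
  products.push(...nextPage.data.data);
  after = nextPage.data.after;
}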
POST products endpoint
Implement the following code at the bottom of src/index.ts to create a new product in Fauna:
async function createNewProduct(body: any, env: Env): Promise<Response> {
  const { name, price, description, category, stock } = body;

  // Check for missing fields explicitly so that legitimate zero values (e.g., stock: 0) are not rejected.
  if (name === undefined || price === undefined || description === undefined || category === undefined || stock === undefined) {
    return new Response('Missing required fields', { status: 400 });
  }

  const client = new Client({ secret: env.FAUNA_SECRET });
  try {
    // Custom POST logic here (e.g., storing data to Fauna)
    const result = await client.query(fql`
      // Get the category by name. We can use .first() here because we know that the category
      // name is unique.
      let category = Category.byName(${category}).first()
      // If the category does not exist, abort the query.
      if (category == null) abort("Category does not exist.")
      // Create the product with the given values.
      let args = { name: ${name}, price: ${price}, stock: ${stock}, description: ${description}, category: category }
      let product: Any = Product.create(args)
      // Use projection to only return the fields you need.
      product {
        id,
        name,
        price,
        description,
        stock,
        category {
          id,
          name,
          description
        }
      }
    `);
    return new Response(JSON.stringify(result.data));
  } catch (error) {
    console.error(error);
    return new Response('An error occurred', { status: 500 });
  }
}
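When the query calls abort() (for example, because the category does not exist), the driver surfaces it as an error on the client. If you want to return a 400 for that case instead of a generic 500, one possible refinement is to catch the driver’s AbortError separately. A sketch of an alternative catch block for createNewProduct (assuming AbortError exposes the value passed to abort()):

import { AbortError } from 'fauna';

// Sketch: turn a query abort (e.g., "Category does not exist.") into a 400 response.
try {
  // ... same Product.create() query as above ...
} catch (error) {
  if (error instanceof AbortError) {
    // `error.abort` holds the value passed to abort() in the query.
    return new Response(String(error.abort), { status: 400 });
  }
  console.error(error);
  return new Response('An error occurred', { status: 500 });
}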
Test the application
Add records to Fauna
Before we can test the Cloudflare Worker, we need to add some documents to the Fauna database.
- Run the following command to connect to the Fauna shell from your terminal:

  fauna shell

- Run the following command to enter editor mode in the Fauna shell:

  .editor

- Create a new category document. Write the following code in the editor and press Ctrl+D to execute the code:

  Category.create({
    name: "Electronics",
    description: "Electronic products"
  })

- Create a new product document. Write the following code in the editor and press Ctrl+D to execute the code:

  Product.create({
    name: "Laptop",
    description: "A laptop computer",
    price: 500,
    stock: 10,
    category: Category.byName("Electronics").first()
  })

- Run the following command to exit the Fauna shell:

  .exit
Run the Cloudflare Worker locally
- Run the following command to start the Cloudflare Worker locally:

  wrangler dev

- Send a GET request to the /products endpoint to get all products:

  curl http://localhost:8787/products

- Send a POST request to the /products endpoint to create a new product:

  curl -X POST http://localhost:8787/products \
    -d '{
      "name": "Smartphone",
      "description": "A smartphone",
      "price": 300,
      "stock": 20,
      "category": "Electronics"
    }'
Now that you can read and write documents to Fauna, let’s build in the ability to do dynamic queries.
Deploy the Cloudflare Worker
- Deploy the Cloudflare Worker:

  wrangler deploy
Test the deployed Worker by sending a GET request to it, using your Worker’s public workers.dev URL instead of localhost.
You can find the full source code for this workshop in the accompanying GitHub repository.