Migrate off Fauna

The Fauna service will end at 12:00pm PT on Friday, May 30, 2025, by which time you must have exported the data from your Fauna databases. Until then, we are committed to keeping the service operational and meeting our SLAs while you work on your migration.

For more information on the service wind down, see our announcement and the Fauna Service End-of-Life FAQ.

This article covers:

  • Export data from Fauna

  • Export FSL schema

  • Migration tips

Export data from Fauna

We recommend you use snapshot exports to export data from Fauna collections. For live exports, you can combine a snapshot export with event feeds or event streams to capture real-time changes.

For smaller collections, you can also use FQL queries to export your data. See Export data using FQL queries.

Snapshot export

A snapshot export lets you create a point-in-time snapshot of document data from a database or specific user-defined collections. The exported data is stored as JSON files in an AWS S3 bucket you specify.

You can create an export using the Fauna CLI's fauna export create s3 command:

# Export the 'Product' and 'Category' collections in
# the 'us/my_db' database. Store the export
# in the 'fauna_exports/my_db/2099-12-31'
# path of the 'doc-example-bucket' S3 bucket. Format
# document data using the 'simple' data format.
# Replace 'doc-example-bucket' with your bucket.
fauna export create s3 \
  --database us/my_db \
  --collection Product Category \
  --bucket doc-example-bucket \
  --path fauna_exports/my_db/2099-12-31 \
  --format simple

For more information, see Snapshot export.
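
Exports run asynchronously. The `fauna export create s3` output includes the export's ID and state. Assuming your CLI version includes the `fauna export get` subcommand, you can poll an export by ID until it completes. The ID below is a placeholder:

# Check the status of an export. Replace the ID with
# the `id` from the `fauna export create s3` output.
fauna export get 123456789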

Capture real-time changes with event feeds or event streams

To export Fauna data to a live system, you can combine a snapshot export with an event feed or event stream for change data capture (CDC).

After performing an initial snapshot export, you use the event feed or event stream to replay any changes made to the exported data and sync it to the new system.
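
For example, here’s a minimal sketch of the CDC loop using the JavaScript driver’s event feeds. It assumes the snapshot export has already been loaded into the target system; the sync logic in each case is a placeholder, and in practice you’d start the feed at the snapshot’s timestamp (see the driver docs for the feed’s start options).

import { Client, fql } from "fauna";

// Route queries to a specific database
// using the authentication secret in
// the `FAUNA_SECRET` environment variable.
const client = new Client();

// Request an event feed for changes to the `Product` collection.
const feed = client.feed(fql`Product.all().eventSource()`);

// Iterate through events and apply each change
// to the target system.
for await (const event of feed.flatten()) {
  switch (event.type) {
    case "add":
      // Insert event.data into the target system.
      break;
    case "update":
      // Update the matching record in the target system.
      break;
    case "remove":
      // Delete the matching record from the target system.
      break;
  }
}

client.close();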

Each Fauna client driver's documentation includes event feed and event stream examples:

  • JavaScript driver

  • Python driver

  • Go driver

  • C# driver

  • JVM driver

Export data using FQL queries

For smaller collections, you can retrieve the collection’s documents as a Set using an FQL query. You can then use a script to write the Set’s document data to a JSON file or send it to another system.

Fauna automatically paginates Sets with 16 or more items. You can paginate through Sets using one of the following methods:

Driver pagination

The Fauna client drivers include methods for automatically iterating through paginated Sets.

In the following examples, you provide a list of collections to export. For each collection, the example uses the driver’s pagination method to retrieve all documents in the collection and save them to a JSON file.

JavaScript

import { Client, fql, FaunaError } from "fauna";
import fs from "fs";

// Route queries to a specific database
// using the authentication secret in
// the `FAUNA_SECRET` environment variable.
const client = new Client();

// Specify the collections to export.
// You can retrieve a list of user-defined collections
// using a `Collection.all()` query.
const collectionsToExport = ["Product", "Category"];

// Loop through the collections.
for (const collectionName of collectionsToExport) {
  try {
    // Compose a query using an FQL template string.
    // The query returns a Set containing all documents
    // in the collection.
    const query = fql`
      let collection = Collection(${collectionName})
      collection.all()`;

    // Run the query.
    const pages = client.paginate(query);

    // Iterate through the resulting document Set.
    const documents = [];
    for await (const page of pages.flatten()) {
      documents.push(page);
    }

    // Convert the 'documents' array to a JSON string.
    const jsonData = JSON.stringify(documents, null, 2);

    // Write the JSON string to a file named `<collectionName>.json`.
    fs.writeFileSync(`${collectionName}.json`, jsonData, "utf-8");

    console.log(
      `${collectionName} collection data written to ${collectionName}.json`
    );
  } catch (error) {
    if (error instanceof FaunaError) {
      console.error(`Error exporting ${collectionName}:`, error);
    } else {
      console.error(
        `An unexpected error occurred for ${collectionName}:`,
        error
      );
    }
  }
}

// Close the Fauna client.
client.close();
Python

from fauna.client import Client
from fauna import fql, Page
from fauna.errors import FaunaError
import os
import json
from datetime import datetime

# Route queries to a specific database
# using the authentication secret in
# the `FAUNA_SECRET` environment variable.
fauna_secret = os.getenv("FAUNA_SECRET")
client = Client(secret=fauna_secret, additional_headers={"x-format": "simple"})

# Specify the collections to export.
# You can retrieve a list of user-defined collections
# using a `Collection.all()` query.
collections_to_export = ["Product", "Category"]

# Loop through the collections.
for collection_name in collections_to_export:
    try:
        # Compose a query using an FQL template string.
        # The query returns a Set containing all documents
        # in the collection.
        query = fql(
            """
            let collection = Collection(${collection_name})
            collection.all()
        """,
            collection_name=collection_name,
        )

        # Run the query. Get the initial page of results.
        initial_response = client.query(query)

        documents = []
        cursor = None

        if "after" in initial_response.data:
            # Add first page documents
            documents.extend(initial_response.data["data"])
            cursor = initial_response.data.get("after")

            # While there's a cursor, fetch next pages
            while cursor is not None:
                next_query = fql("Set.paginate(${cursor})", cursor=cursor)
                next_response = client.query(next_query)

                if "data" in next_response.data:
                    documents.extend(next_response.data["data"])
                    cursor = next_response.data.get("after")
                else:
                    cursor = None

        else:
            # If no Page object, just add the single response
            documents.append(initial_response.data)

        # Convert the 'documents' list to a JSON string.
        json_data = json.dumps(documents, indent=2)

        # Write the JSON string to a file named `<collection_name>.json`.
        with open(f"{collection_name}.json", "w", encoding="utf-8") as file:
            file.write(json_data)

        print(f"{collection_name} collection data written to {collection_name}.json")
    except FaunaError as error:
        print(f"Error exporting {collection_name}:", error)
    except Exception as error:
        print(f"An unexpected error occurred for {collection_name}:", error)

# Close the Fauna client.
client.close()
Go

package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/fauna/fauna-go/v3"
)

func main() {
	// Route queries to a specific database
	// using the authentication secret in
	// the `FAUNA_SECRET` environment variable.
	client, err := fauna.NewDefaultClient()
	if err != nil {
		panic(fmt.Sprintf("Failed to create Fauna client: %v", err))
	}

	// Specify the collections to export.
	// You can retrieve a list of user-defined collections
	// using a `Collection.all()` query.
	collectionsToExport := []string{"Product", "Category"}

	// Loop through the collections.
	for _, collectionName := range collectionsToExport {
		fmt.Printf("Exporting collection: %s\n", collectionName)

		// Compose a query using an FQL template string.
		// The query returns a Set containing all documents
		// in the collection.
		query, err := fauna.FQL(fmt.Sprintf(`
			let collection = Collection("%s")
			collection.all()
		`, collectionName), nil)

		if err != nil {
			fmt.Printf("Error creating query for %s: %v\n", collectionName, err)
			continue
		}

		// Run the query.
		paginator := client.Paginate(query)

		// Initialize an array to collect all documents.
		var documents []interface{}

		// Iterate through the resulting document Set.
		for {
			page, err := paginator.Next()
			if err != nil {
				fmt.Printf("Error fetching page for %s: %v\n", collectionName, err)
				break
			}

			var pageItems []interface{}
			err = page.Unmarshal(&pageItems)
			if err != nil {
				fmt.Printf("Error unmarshaling page for %s: %v\n", collectionName, err)
				break
			}

			// Add page items to 'documents' array.
			documents = append(documents, pageItems...)

			// Check if there are more pages.
			if !paginator.HasNext() {
				break
			}
		}

		// Convert the 'documents' array to a JSON string.
		jsonData, err := json.MarshalIndent(documents, "", "  ")
		if err != nil {
			fmt.Printf("Error marshaling JSON for %s: %v\n", collectionName, err)
			continue
		}

		// Write the JSON string to a file named `<collectionName>.json`.
		err = os.WriteFile(fmt.Sprintf("%s.json", collectionName), jsonData, 0644)
		if err != nil {
			fmt.Printf("Error writing file for %s: %v\n", collectionName, err)
			continue
		}

		fmt.Printf("%s collection data written to %s.json\n", collectionName, collectionName)
	}
}
.NET/C#

using Fauna;
using Fauna.Exceptions;
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;
using static Fauna.Query;

class Program
{
  static async Task Main(string[] args)
  {
    // Route queries to a specific database
    // using the authentication secret in
    // the `FAUNA_SECRET` environment variable.
    var client = new Client();

    try
    {
      // Specify the collections to export.
      // You can retrieve a list of user-defined collections
      // using a `Collection.all()` query.
      var collectionsToExport = new List<string> { "Product", "Category" };

      // Loop through the collections.
      foreach (var collectionName in collectionsToExport)
      {
        try
        {
          // Compose a query using an FQL template string.
          // The query returns a Set containing all documents
          // in the collection.
          var query = FQL($@"
                        let collection = Collection({collectionName})
                        collection.all()");

          // Run the query.
          var pages = client.PaginateAsync<Dictionary<string, object>>(query);

          // Initialize a list to collect all documents.
          var documents = new List<Dictionary<string, object>>();

          // Iterate through the resulting document Set.
          await foreach (var page in pages)
          {
            foreach (var doc in page.Data)
            {
              documents.Add(doc);
            }
          }

          // Convert the documents list to a JSON string.
          var jsonData = JsonSerializer.Serialize(documents, new JsonSerializerOptions
          {
            WriteIndented = true
          });

          // Write the JSON string to a file named `<collectionName>.json`.
          File.WriteAllText($"{collectionName}.json", jsonData, System.Text.Encoding.UTF8);

          Console.WriteLine($"{collectionName} collection data written to {collectionName}.json");
        }
        catch (FaunaException error)
        {
          Console.Error.WriteLine($"Error exporting {collectionName}: {error}");
        }
      }
    }
    catch (Exception error)
    {
      Console.Error.WriteLine($"An unexpected error occurred: {error}");
    }
  }
}
Java

package app;

import com.fauna.client.Fauna;
import com.fauna.client.FaunaClient;
import com.fauna.client.PageIterator;
import com.fauna.exception.FaunaException;
import com.fauna.query.builder.Query;
import com.fauna.types.Document;
import com.fauna.types.Page;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import static com.fauna.query.builder.Query.fql;

public class App {

    public static void main(String[] args) {
        try {
            // Route queries to a specific database
            // using the authentication secret in
            // the `FAUNA_SECRET` environment variable.
            FaunaClient client = Fauna.client();

            // Specify the collections to export.
            // You can retrieve a list of user-defined collections
            // using a `Collection.all()` query.
            List<String> collectionsToExport = List.of("Product", "Category");

            // Loop through the collections.
            for (String collectionName : collectionsToExport) {
                // Compose a query using an FQL template string.
                // The query returns a Set containing all documents
                // in the collection.
                Query query = fql("""
                    let collection = Collection(${collectionName})
                    collection.all()
                    """,
                    Map.of("collectionName", collectionName)
                );

                // Run the query and paginate through all results.
                PageIterator<Document> pageIterator = client.paginate(query, Document.class);

                // Initialize a list to collect all documents.
                List<Map<String, Object>> documents = new ArrayList<>();

                // Iterate through the resulting document Set.
                while (pageIterator.hasNext()) {
                    Page<Document> page = pageIterator.next();
                    for (Document document : page.getData()) {
                        // Create a new map to include the document's ID, collection, timestamp, and data.
                        Map<String, Object> docMap = new HashMap<>();
                        docMap.put("id", document.getId());
                        docMap.put("collection", document.getCollection().toString()); // Assuming Module has a proper toString()
                        docMap.put("timestamp", document.getTs().toString());
                        docMap.putAll(document.getData());

                        // Add the map to the documents list.
                        documents.add(docMap);
                    }
                }

                // Convert the 'documents' list to a JSON string.
                ObjectMapper objectMapper = new ObjectMapper();
                objectMapper.enable(SerializationFeature.INDENT_OUTPUT);

                // Write the JSON string to a file named `<collectionName>.json`.
                File outputFile = new File(collectionName + ".json");
                objectMapper.writeValue(outputFile, documents);

                System.out.printf("%s collection data written to %s.json%n", collectionName, collectionName);
            }
        } catch (FaunaException e) {
            System.err.println("Fauna error occurred: " + e.getMessage());
            e.printStackTrace();
        } catch (IOException e) {
            System.err.println("Error writing JSON file: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

Paginate with ranged searches

If you’re using the Fauna Core HTTP API, you can use the collection.all() method and index range searches to export data using a series of FQL queries.

Fauna implements collection.all() as a built-in collection index. The index uses the ascending document id as its only index value. You’ll use these IDs as a filter for the range searches.

The following steps outline the algorithm. For an example implementation, see Example: Range searches using Bash.

  1. Use set.first() and set.last() to get the first and last document ID from collection.all(). You’ll use these as the bounds of your range searches.

    let collSet = Product.all()
    let firstDoc = collSet.first()
    let lastDoc = collSet.last()
    
    {
      firstDocId: firstDoc!.id,
      lastDocId: lastDoc!.id
    }
    // Example result:
    {
      firstDocId: "1",    // First document ID for the collection
      lastDocId: "999"    // Last document ID for the collection
    }
  2. To get the first page of results, run the following FQL query. Adjust take() to change the page size of the Set result.

    // Get an initial page of `Product`
    // collection documents.
    Product.all({ from: "1", to: "999" })
      .take(100)  // Adjust `take()` to change page size.
      .toArray()  // Optionally convert the resulting Set
                  // to an array for easier JSON serialization.
    // Example result:
    [
      {
        id: "1",
        name: "single lime",
        price: 35
      },
      ...
      {
        id: "100",
        name: "pizza",
        price: 499
      }
    ]
  3. To get the next page of results:

    • Update from in all() to the id of the last document from the previous results.

    • Take page size + 1 documents.

    • Drop the first item, which is the last document from the previous results.

    // Gets the next page of `Product` collection documents,
    // starting with the `id` of the last document
    // from the previous results.
    Product.all({ from: "100", to: "999" })
      .take(100 + 1)  // Page size + 1
      .drop(1)        // Drop the first document
      .toArray()
    // Example result:
    [
      {
        id: "101",
        name: "organic limes",
        price: 499
      },
      ...
      {
        id: "200",
        name: "giraffe pinata",
        price: 2799
      }
    ]
  4. Repeat the previous step until the query returns an empty result:

    // The last item from the previous
    // result is the same as the upper bound.
    Product.all({ from: "999", to: "999" })
      .take(100 + 1)
      .drop(1)
      .toArray()
    // Returns an empty result.
    []

Example: Range searches using Bash

The following Bash script shows how you can use range searches to export Fauna data. The script uses the Fauna CLI and jq. FQL data is encoded to JSON using the simple data format.

#!/bin/bash

set -e

# Specify the collections to export.
COLLECTIONS=("Product" "Category")
OUTPUT_DIR="./exports"
PAGE_SIZE=100

# Check if FAUNA_SECRET environment variable is set.
if [ -z "$FAUNA_SECRET" ]; then
  echo "Error: FAUNA_SECRET environment variable is not set"
  exit 1
fi

# Create the output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"

# Loop through each collection and export its documents.
for COLLECTION in "${COLLECTIONS[@]}"; do
  echo "Starting export of collection: $COLLECTION"
  OUTPUT_FILE="$OUTPUT_DIR/${COLLECTION}.json"
  echo "Results will be saved to: $OUTPUT_FILE"

  # Get the collection's doc ID boundaries.
  echo "Determining the collection's doc ID boundaries..."
  BOUNDS_QUERY=$(cat << EOF
let collSet = $COLLECTION.all()
let firstDoc = collSet.first()
let lastDoc = collSet.last()

{
  firstDocId: firstDoc!.id,
  lastDocId: lastDoc!.id
}
EOF
)

  BOUNDS=$(fauna query "$BOUNDS_QUERY" --secret "$FAUNA_SECRET" --json)
  FIRST_ID=$(echo "$BOUNDS" | jq -r '.firstDocId')
  LAST_ID=$(echo "$BOUNDS" | jq -r '.lastDocId')

  echo "Collection boundaries: First doc ID = $FIRST_ID, Last doc ID = $LAST_ID"

  # Initialize an empty array in the output file.
  echo "[]" > "$OUTPUT_FILE"

  # Start pagination from the first doc ID.
  CURRENT_FROM="$FIRST_ID"
  TOTAL_DOCS=0

  # Fetch pages until we get an empty result.
  while true; do
    echo "Fetching docs, starting with ID $CURRENT_FROM..."

    TAKE_COUNT=$((PAGE_SIZE + 1))

    # Build the FQL query.
    if [ "$CURRENT_FROM" = "$FIRST_ID" ]; then
      QUERY="$COLLECTION.all({ from: \"$CURRENT_FROM\", to: \"$LAST_ID\" }).take($PAGE_SIZE).toArray()"
    else
      QUERY="$COLLECTION.all({ from: \"$CURRENT_FROM\", to: \"$LAST_ID\" }).take($TAKE_COUNT).drop(1).toArray()"
    fi

    # Run the query and save the results to a temp file.
    TEMP_FILE=$(mktemp)
    fauna query "$QUERY" --secret "$FAUNA_SECRET" --json > "$TEMP_FILE"

    # Break if we get an empty result.
    RESULT_SIZE=$(jq -r 'length' "$TEMP_FILE")

    if [ "$RESULT_SIZE" -eq 0 ]; then
      echo "Reached the end of the collection."
      break
    fi

    # Append documents to the output file using jq.
    jq -s 'add' "$OUTPUT_FILE" "$TEMP_FILE" > "${OUTPUT_FILE}.tmp" && mv "${OUTPUT_FILE}.tmp" "$OUTPUT_FILE"

    # Get the ID of the last document.
    CURRENT_FROM=$(jq -r '.[-1].id' "$TEMP_FILE")

    # Update the total doc count.
    DOCS_IN_PAGE=$RESULT_SIZE
    TOTAL_DOCS=$((TOTAL_DOCS + DOCS_IN_PAGE))
    echo "Exported $DOCS_IN_PAGE documents (total: $TOTAL_DOCS)"

    # Clean up the temp file.
    rm "$TEMP_FILE"
  done

  echo "Export complete. $TOTAL_DOCS documents exported to $OUTPUT_FILE"
  echo "-------------------------------------"
done

echo "All collections exported successfully!"

Export FSL schema

You can use the Fauna CLI's fauna schema pull command to pull a database’s schema into a local directory:

# Pull the 'us/my_db' database's active schema
# to a local directory.
fauna schema pull \
  --database us/my_db \
  --dir /path/to/local/dir \
  --active

Migration tips

This section provides answers to specific questions related to migrating off Fauna.

How do I flatten or transform my Fauna data for export?

When using snapshot export, documents are structured according to their schema and formatted based on the data format specified when creating the export. We recommend that you transform your data as JSON after export. For example, you can use jq or a JSON manipulation library in your preferred programming language.
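
For example, here’s a jq sketch that flattens a nested field in the Product.json file produced by the FQL query examples above; the 'address' field is hypothetical:

# Flatten a hypothetical nested 'address' field into a
# top-level 'city' key for every document in the array.
jq 'map(. + {city: .address.city} | del(.address))' Product.json > Product-flat.json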

If you’re exporting a small collection using FQL queries, you can use projection or mapping to transform the data before export.
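
For example, this FQL projection returns only the listed fields from each document in the Set; the field names are hypothetical:

// Project each `Product` document to a subset of fields.
Product.all() { id, name, price }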

How do I export system collections?

An export of your database’s FSL schema will include FSL representations of the following system collections: Collection, Function, Role, and AccessProvider.

In most cases, you don’t need to export other system collections, such as Key or Token. If you need to export these collections, you can use an FQL query export method.
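
For example, reusing the `fauna query` pattern from the Bash script above, you can dump a small system collection to a JSON file. Note that key secrets are only returned at creation time, so they aren’t included in the export:

# Export `Key` documents as JSON.
fauna query "Key.all().toArray()" --secret "$FAUNA_SECRET" --json > keys.json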

How do I translate my FQL queries to other query languages?

The following guides translate common Fauna Query Language (FQL) queries to other popular query languages:

How do I migrate application logic currently stored in UDFs?

In most cases, you’ll need to convert this logic into application code using an ORM or similar tool for your database.

In PostgreSQL, you can run statements in a transaction with a serializable isolation level for equivalent transaction guarantees. For example:

-- Run the update in a serializable transaction.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
COMMIT;

-- If the transaction fails with a serialization error
-- (SQLSTATE 40001), roll back and retry it from
-- application code.
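
Serializable transactions can fail under write contention, so application code typically retries them. Here’s a minimal sketch, assuming Node.js with the node-postgres (pg) package:

import pg from "pg";

// Connection settings come from the standard
// PG* environment variables.
const pool = new pg.Pool();

// Run `fn` in a serializable transaction, retrying on
// serialization failures (SQLSTATE 40001).
async function withSerializableRetry(fn, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const client = await pool.connect();
    try {
      await client.query("BEGIN ISOLATION LEVEL SERIALIZABLE");
      const result = await fn(client);
      await client.query("COMMIT");
      return result;
    } catch (err) {
      await client.query("ROLLBACK");
      // Rethrow anything that isn't a serialization failure.
      if (err.code !== "40001") throw err;
    } finally {
      client.release();
    }
  }
  throw new Error("Transaction failed after retries");
}

await withSerializableRetry((client) =>
  client.query("UPDATE accounts SET balance = balance - 100 WHERE account_id = 1")
);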