Migrate off Fauna
The Fauna service will end at 12:00pm PT on Friday, May 30, 2025. You must export the data from your Fauna databases before that time. Until then, we are committed to keeping the service operational and meeting our SLAs while you work on your migration.
For more information on the service wind down, see our announcement and the Fauna Service End-of-Life FAQ.
This article covers:

- How to export data from Fauna
- How to export Fauna database schema
- Other migration tips
Export data from Fauna
We recommend you use snapshot exports to export data from Fauna collections. For live exports, you can combine a snapshot export with event feeds or event streams to capture real-time changes.
For smaller collections, you can also use FQL queries to export your data. See Export data using FQL queries.
Snapshot export
A snapshot export lets you create a point-in-time snapshot of document data from a database or specific user-defined collections. The exported data is stored as JSON files in an AWS S3 bucket you specify.
You can create an export using the Fauna CLI's `fauna export create s3` command:
```bash
# Export the 'Product' and 'Category' collections in
# the 'us/my_db' database. Store the export
# in the 'fauna_exports/my_db/2099-12-31' path of the
# 'doc-example-bucket' S3 bucket. Replace the bucket
# name with your own. Format document data using the
# 'simple' data format.
fauna export create s3 \
  --database us/my_db \
  --collection Product Category \
  --bucket doc-example-bucket \
  --path fauna_exports/my_db/2099-12-31 \
  --format simple
```
For more information, see Snapshot export.
Capture real-time changes with event feeds or event streams
To export Fauna data to a live system, you can combine a snapshot export with an event feed or event stream for change data capture (CDC).
After performing an initial snapshot export, you use the event feed or event stream to replay any changes made to the exported data and sync it to the new system.
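For example, here is a minimal sketch of consuming an event feed with the JavaScript driver. It assumes the `fauna` npm package's `Client.feed()` method and FQL's `eventSource()`; to resume from a snapshot export, you would pass the export's timestamp via the feed options (for example, `start_ts`). Check the driver docs for your version, and adapt the event handling to your target system.

```typescript
import { Client, fql } from "fauna";

// The driver reads the FAUNA_SECRET environment variable by default.
const client = new Client();

// Open an event feed on the `Product` collection.
const feed = client.feed(fql`Product.all().eventSource()`);

// Each page contains a batch of change events.
for await (const page of feed) {
  for (const event of page.events) {
    switch (event.type) {
      case "add":
      case "update":
        // Upsert event.data into the new system (illustrative).
        console.log(`${event.type}:`, event.data);
        break;
      case "remove":
        // Delete the document from the new system (illustrative).
        console.log("remove:", event.data);
        break;
    }
  }
}

client.close();
```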
The following table lists event feed and event stream examples for the Fauna client drivers.

Client driver | Event feeds | Event streams |
---|---|---|
JavaScript driver | Event feeds example | Event streams example |
Python driver | Event feeds example | Event streams example |
Go driver | Event feeds example | Event streams example |
C# driver | Event feeds example | Event streams example |
JVM driver | Event feeds example | Event streams example |
Export data using FQL queries
For smaller collections, you can retrieve the collection’s documents as a Set using an FQL query. You can then use a script to write the Set’s document data to a JSON file or send it to another system.
Fauna automatically paginates Sets with 16 or more items. You can paginate through Sets using one of the following methods:

- Driver pagination (Recommended)
- Ranged searches
Driver pagination
The Fauna client drivers include methods for automatically iterating through paginated Sets.
In the following examples, you provide a list of collections to export. For each collection, the example uses the driver’s pagination method to retrieve all documents in the collection and save them to a JSON file.
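For example, here is a minimal sketch using the JavaScript driver. It assumes the `fauna` npm package's `paginate()` and `flatten()` methods; the other drivers offer similar pagination helpers.

```typescript
import { writeFileSync } from "fs";
import { Client, fql } from "fauna";

// The driver reads the FAUNA_SECRET environment variable by default.
const client = new Client();

// Collections to export. Adjust for your database.
const collections = ["Product", "Category"];

for (const name of collections) {
  const docs: unknown[] = [];
  // `paginate()` fetches the Set page by page;
  // `flatten()` iterates over individual documents.
  const pages = client.paginate(fql`Collection(${name}).all()`);
  for await (const doc of pages.flatten()) {
    docs.push(doc);
  }
  // Write the collection's documents to a JSON file.
  writeFileSync(`${name}.json`, JSON.stringify(docs, null, 2));
}

client.close();
```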
Paginate with ranged searches
If you’re using the Fauna Core HTTP API, you can use the `collection.all()` method and index range searches to export data using a series of FQL queries.

Fauna implements `collection.all()` as a built-in collection index. The index uses the ascending document `id` as its only index value. You’ll use these IDs as a filter for the range searches.
The following steps outline the algorithm. For an example implementation, see Example: Range searches using Bash.
- Use `set.first()` and `set.last()` to get the first and last document ID from `collection.all()`. You’ll use these as the bounds of your range searches.

  ```fql
  let collSet = Product.all()
  let firstDoc = collSet.first()
  let lastDoc = collSet.last()
  {
    firstDocId: firstDoc!.id,
    lastDocId: lastDoc!.id
  }
  ```

  ```
  {
    firstDocId: "1",  // First document ID for the collection
    lastDocId: "999"  // Last document ID for the collection
  }
  ```
- To get the first page of results, run the following FQL query. Adjust `take()` to change the page size of the Set result.

  ```fql
  // Get an initial page of `Product`
  // collection documents.
  Product.all({ from: "1", to: "999" })
    .take(100)  // Adjust `take()` to change page size.
    .toArray()  // Optionally convert the resulting Set
                // to an array for easier JSON serialization.
  ```

  ```
  [
    { id: "1", name: "single lime", price: 35 },
    ...
    { id: "100", name: "pizza", price: 499 }
  ]
  ```
- To get the next page of results:

  - Update `from` in `all()` to the `id` of the last document from the previous results.
  - Take page size + 1 documents.
  - Drop the first item, which is the last document from the previous results.

  ```fql
  // Gets the next page of `Product` collection documents,
  // starting with the `id` of the last document
  // from the previous results.
  Product.all({ from: "100", to: "999" })
    .take(100 + 1)  // Page size + 1
    .drop(1)        // Drop the first document
    .toArray()
  ```

  ```
  [
    { id: "101", name: "organic limes", price: 499 },
    ...
    { id: "200", name: "giraffe pinata", price: 2799 }
  ]
  ```
- Repeat the previous step until the query returns an empty result:

  ```fql
  // The last item from the previous
  // result is the same as the upper bound.
  Product.all({ from: "999", to: "999" })
    .take(100 + 1)
    .drop(1)
    .toArray()
  ```

  ```
  // Returns an empty result.
  []
  ```
Example: Range searches using Bash
The following Bash script shows how you can use range searches to export Fauna data. The script uses the Fauna CLI and jq. FQL data is encoded to JSON using the simple data format.
```bash
#!/bin/bash
set -e

# Specify the collections to export.
COLLECTIONS=("Product" "Category")
OUTPUT_DIR="./exports"
PAGE_SIZE=100

# Check if the FAUNA_SECRET environment variable is set.
if [ -z "$FAUNA_SECRET" ]; then
  echo "Error: FAUNA_SECRET environment variable is not set"
  exit 1
fi

# Create the output directory if it doesn't exist.
mkdir -p "$OUTPUT_DIR"

# Loop through each collection and export its documents.
for COLLECTION in "${COLLECTIONS[@]}"; do
  echo "Starting export of collection: $COLLECTION"
  OUTPUT_FILE="$OUTPUT_DIR/${COLLECTION}.json"
  echo "Results will be saved to: $OUTPUT_FILE"

  # Get the collection's doc boundaries.
  echo "Determining the collection doc boundaries..."
  BOUNDS_QUERY=$(cat << EOF
let collSet = $COLLECTION.all()
let firstDoc = collSet.first()
let lastDoc = collSet.last()
{
  firstDocId: firstDoc!.id,
  lastDocId: lastDoc!.id
}
EOF
)
  BOUNDS=$(fauna query "$BOUNDS_QUERY" --secret "$FAUNA_SECRET" --json)
  FIRST_ID=$(echo "$BOUNDS" | jq -r '.firstDocId')
  LAST_ID=$(echo "$BOUNDS" | jq -r '.lastDocId')
  echo "Collection boundaries: First doc ID = $FIRST_ID, Last doc ID = $LAST_ID"

  # Initialize an empty array in the output file.
  echo "[]" > "$OUTPUT_FILE"

  # Start pagination at the first doc ID.
  CURRENT_FROM="$FIRST_ID"
  TOTAL_DOCS=0

  # Fetch pages until we get an empty result.
  while true; do
    echo "Fetching docs, starting with ID $CURRENT_FROM..."
    TAKE_COUNT=$((PAGE_SIZE + 1))

    # Build the FQL query. The first page takes PAGE_SIZE docs;
    # later pages take one extra doc and drop the overlapping one.
    if [ "$CURRENT_FROM" = "$FIRST_ID" ]; then
      QUERY="$COLLECTION.all({ from: \"$CURRENT_FROM\", to: \"$LAST_ID\" }).take($PAGE_SIZE).toArray()"
    else
      QUERY="$COLLECTION.all({ from: \"$CURRENT_FROM\", to: \"$LAST_ID\" }).take($TAKE_COUNT).drop(1).toArray()"
    fi

    # Run the query and save the results to a temp file.
    TEMP_FILE=$(mktemp)
    fauna query "$QUERY" --secret "$FAUNA_SECRET" --json > "$TEMP_FILE"

    # Break if we get an empty result.
    RESULT_SIZE=$(jq -r 'length' "$TEMP_FILE")
    if [ "$RESULT_SIZE" -eq 0 ]; then
      echo "Reached the end of the collection."
      rm "$TEMP_FILE"
      break
    fi

    # Append documents to the output file using jq.
    jq -s 'add' "$OUTPUT_FILE" "$TEMP_FILE" > "${OUTPUT_FILE}.tmp" && mv "${OUTPUT_FILE}.tmp" "$OUTPUT_FILE"

    # Get the ID of the last document.
    CURRENT_FROM=$(jq -r '.[-1].id' "$TEMP_FILE")

    # Update the total doc count.
    TOTAL_DOCS=$((TOTAL_DOCS + RESULT_SIZE))
    echo "Exported $RESULT_SIZE documents (total: $TOTAL_DOCS)"

    # Clean up the temp file.
    rm "$TEMP_FILE"
  done

  echo "Export complete. $TOTAL_DOCS documents exported to $OUTPUT_FILE"
  echo "-------------------------------------"
done

echo "All collections exported successfully!"
```
Export FSL schema
You can use the Fauna CLI's `fauna schema pull` command to pull a database’s schema into a local directory:
```bash
# Pull the 'us/my_db' database's active schema
# to a local directory.
fauna schema pull \
  --database us/my_db \
  --dir /path/to/local/dir \
  --active
```
Migration tips
This section provides answers to specific questions related to migrating off Fauna.
How do I flatten or transform my Fauna data for export?
When using snapshot export, documents are structured according to their schema and formatted based on the data format specified when creating the export. We recommend transforming the data after export, when it is available as plain JSON. For example, you can use jq or a JSON manipulation library in your preferred programming language.
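As a sketch, the following Node.js script flattens exported documents. The file name and the nested `metadata` field are illustrative assumptions, not part of the export format:

```typescript
import { readFileSync, writeFileSync } from "fs";

// Read a snapshot-exported JSON file (illustrative file name).
const docs = JSON.parse(readFileSync("Product.json", "utf8"));

// Flatten each document: lift fields from a hypothetical
// nested `metadata` object to the top level.
const flattened = docs.map((doc: Record<string, unknown>) => {
  const { metadata, ...rest } = doc as { metadata?: object };
  return { ...rest, ...metadata };
});

writeFileSync("Product.flat.json", JSON.stringify(flattened, null, 2));
```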
If you’re exporting a small collection using FQL queries, you can use projection or mapping to transform the data before export.
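For example, you can use FQL projection to keep only the fields you need while paginating with the JavaScript driver. This is a sketch; the field names are illustrative:

```typescript
import { Client, fql } from "fauna";

// The driver reads the FAUNA_SECRET environment variable by default.
const client = new Client();

// Project each `Product` document down to selected fields.
const pages = client.paginate(fql`Product.all() { id, name, price }`);

for await (const doc of pages.flatten()) {
  console.log(doc); // Write to a file or send to another system instead.
}

client.close();
```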
How do I export system collections?
An export of your database’s FSL schema includes FSL representations of schema-related system collections, such as collection, function, role, and access provider definitions.
In most cases, you don’t need to export other system collections, such as `Key` or `Token`. If you need to export these collections, you can use an FQL query export method.
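System collections can be queried much like user-defined ones. A sketch with the JavaScript driver, assuming the `paginate()` method shown earlier:

```typescript
import { Client, fql } from "fauna";

// The driver reads the FAUNA_SECRET environment variable by default.
const client = new Client();

// Paginate the `Key` system collection like any other collection.
// Note: Fauna does not return key secrets, only key metadata.
const pages = client.paginate(fql`Key.all()`);

for await (const key of pages.flatten()) {
  console.log(key);
}

client.close();
```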
How do I translate my FQL queries to other query languages?
The following guides translate common Fauna Query Language (FQL) queries to other popular query languages:
How do I migrate application logic currently stored in UDFs?
In most cases, you’ll need to convert this logic into application code using an ORM or similar tool for your database.
In PostgreSQL, you can use session transactions with a serializable isolation level for equivalent transaction guarantees. For example:

```sql
-- Run the statements in a single serializable transaction.
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
COMMIT;
-- If the transaction fails with a serialization_failure
-- (SQLSTATE 40001), roll back and retry from application code.
```
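PostgreSQL expects clients to retry serialization failures. A retry loop in application code might look like this sketch, using the Node.js `pg` library; the connection settings, table, and amounts are illustrative:

```typescript
import { Pool } from "pg";

// Connection settings come from the standard PG* environment variables.
const pool = new Pool();

// Run a serializable transaction, retrying on serialization
// failures (SQLSTATE 40001).
async function transfer(maxAttempts = 5): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const client = await pool.connect();
    try {
      await client.query("BEGIN ISOLATION LEVEL SERIALIZABLE");
      await client.query(
        "UPDATE accounts SET balance = balance - 100 WHERE account_id = 1"
      );
      await client.query("COMMIT");
      return;
    } catch (err) {
      await client.query("ROLLBACK");
      const code = (err as { code?: string }).code;
      // Rethrow anything that isn't a serialization failure,
      // or if we've run out of attempts.
      if (code !== "40001" || attempt === maxAttempts) throw err;
    } finally {
      client.release();
    }
  }
}
```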