Schema
A schema controls a database’s structure and behavior.
Fauna Schema Language
In Fauna, you define schema using Fauna Schema Language (FSL). You use FSL to create and update schema for:
- Access providers for authentication
- Collections, including document types
Collectively, these constitute the database schema.
Manage schema as .fsl files
You can create and manage schema using the Fauna CLI, the Fauna Dashboard, or FQL queries. The Fauna CLI lets you manage schema as .fsl files. Using .fsl files lets you:
- Store .fsl schema files alongside your application code
- Pull and push schema to your Fauna database from a local directory
- Place database schema under version control
- Deploy schema with CI/CD pipelines
- Change your production schema as your app evolves using progressive schema enforcement and zero-downtime migrations
For more information, see Manage schema as .fsl files.
FQL schema methods
Fauna stores each schema as an FQL document in a related system collection.
You can use methods for these system collections to programmatically create and manage schema using FQL queries.
| FSL schema | FQL system collection |
|---|---|
| Access provider schema | AccessProvider |
| Collection schema | Collection |
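For example, the following queries are a minimal FQL sketch of working with collection schema through the Collection system collection; the Product collection name is illustrative:

// Create a `Product` collection by creating a document in the
// `Collection` system collection.
Collection.create({ name: "Product" })

// Retrieve the schema document for the `Product` collection.
Collection.byName("Product")

// List the schema documents for all collections in the database.
Collection.all()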
Collection schema
Reference: FSL collection schema
A collection schema defines the structure and behavior of a collection and its documents. It can include:
- A document type definition that controls what fields are accepted in a collection’s documents. The document type definition consists of:
  - Field definitions that define document fields
  - A wildcard constraint that allows or disallows arbitrary ad hoc fields in documents
- A migrations block for handling changes to the document type
- Index definitions for efficient querying
- Unique constraints to ensure fields contain unique values
- Check constraints for data validation
You create and manage collection schema in FSL:
collection Product {
  // Field definitions.
  // Define the structure of the collection's documents.
  name: String?
  description: String?
  price: Int = 0
  stock: Int = 0
  creationTime: Time = Time.now()
  creationTimeEpoch: Int?
  typeConflicts: { *: Any }?

  // Wildcard constraint.
  // Allows or disallows arbitrary ad hoc fields.
  *: Any

  // Migrations block.
  // Used for schema migrations.
  // Instructs Fauna how to handle updates to a collection's
  // field definitions and wildcard constraint.
  // Contains imperative migration statements.
  migrations {
    add .typeConflicts
    add .stock
    add_wildcard
    backfill .stock = 0
    drop .internalDesc
    move_conflicts .typeConflicts
    move .desc -> .description
    split .creationTime -> .creationTime, .creationTimeEpoch
  }

  // Index definition.
  // You use indexes to filter and sort documents
  // in a performant way.
  index byName {
    terms [.name]
    values [desc(.stock), desc(mva(.categories))]
  }

  // Unique constraint.
  // Ensures a field value or combination of field values
  // is unique for each document in the collection.
  // Supports multivalue attribute (`mva`) fields, such as Arrays.
  unique [.name, .description, mva(.categories)]

  // Check constraint.
  // Ensures a field value meets provided criteria
  // before writes. Written as FQL predicate functions.
  check posStock ((doc) => doc.stock >= 0)

  // Computed field.
  // A document field that derives its value from a
  // user-defined, read-only FQL function that runs on every read.
  compute InventoryValue: Number = (.stock * .price)

  // Controls whether you can write to the `ttl` field for collection
  // documents. If the collection schema doesn't contain field
  // definitions, `document_ttls` defaults to `true`. Otherwise,
  // `document_ttls` defaults to `false`.
  document_ttls true

  // Sets the default `ttl` for documents in days from their creation
  // timestamp. You can override the default `ttl` during document
  // creation.
  ttl_days 5

  // Controls document history retention.
  history_days 3
}
Document type definitions
A collection’s schema can include a document type definition. The definition controls what fields are accepted in a collection’s documents. You define a document’s type using field definitions and a wildcard constraint, covered below.
Field definitions
Reference: FSL collection schema: Field definitions
Field definitions define fields for a collection’s documents. A field definition consists of:
- A field name
- Accepted data types for the field’s values
- An optional default value
You can use field definitions to:
- Ensure each document in a collection contains a specific field
- Limit a field’s values to specific types
- Set a default value for documents missing a field
- Enumerate accepted values
collection Product {
  // `name` is optional (nullable).
  // Accepts `String` or `null` values.
  name: String? // Equivalent to `name: String | Null`

  // `price` is optional (nullable).
  // Accepts `Int` or `null` values.
  price: Int?

  // `stock` is non-nullable.
  // Accepts only `Int` values.
  // If missing, defaults to `0`.
  stock: Int = 0

  // `creationTime` is non-nullable.
  // Accepts only `Time` or `Number` values.
  // If missing, defaults to the current time.
  creationTime: Time | Number = Time.now()

  // `category` is non-nullable.
  // Accepts only the enumerated "grocery",
  // "pharmacy", or "home goods" values.
  // If missing, defaults to "grocery".
  category: "grocery" | "pharmacy" | "home goods" = "grocery"
}
Wildcard constraint
Reference: Wildcard constraints
An ad hoc field is an arbitrary document field that doesn’t have a field definition.
You can use a collection schema’s wildcard constraint to allow or disallow ad hoc fields in the collection’s documents.
collection Product {
  name: String? // Equivalent to `name: String | Null`
  ...

  // Wildcard constraint.
  // This example accepts ad hoc fields of any type.
  *: Any
}
Computed fields
Reference: FSL collection schema: Computed field definitions
Computed fields derive their field value from a provided function. They let you create new fields based on existing data or calculations.
You can use a computed field to:
- Combine or transform other field values
- Assign a value based on an if ... else expression
- Assign a value based on one or more ranges
Computed fields aren’t part of the original document or persistently stored. Instead, the field’s value is computed on each read.
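As a minimal sketch, the following schema reuses the illustrative Product fields from earlier examples to define one computed field that combines other field values and another that assigns a value from an if ... else expression:

collection Product {
  price: Int = 0
  stock: Int = 0

  // Combines other field values, as in the earlier example.
  compute inventoryValue: Number = (.stock * .price)

  // Assigns a value based on an `if ... else` expression.
  compute stockStatus: String = (doc => if (doc.stock > 0) "in stock" else "sold out")
}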
Document type enforcement
Fauna rejects attempts to write documents that don’t conform to a collection’s field definitions and wildcard constraint.
You can use the collection’s field definitions and wildcard constraint to adjust how strictly you enforce a predefined structure on collection documents:
| Strategy | Description | Field definitions | Wildcard constraint |
|---|---|---|---|
| Schemaless | Accepts ad hoc fields of any type. No fields are predefined. | No field definitions | No wildcard constraint |
| Permissive | Accepts ad hoc fields and predefined fields. Fields must conform to the structure of their definitions. | One or more field definitions | A wildcard constraint |
| Strict | Only accepts predefined fields. | One or more field definitions | No wildcard constraint |
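For example, the following sketches outline a permissive and a strict version of the same collection; the field definitions are illustrative:

// Permissive: predefined fields plus a wildcard constraint.
// Documents can also contain ad hoc fields of any type.
collection Product {
  name: String?
  price: Int = 0

  *: Any
}

// Strict: predefined fields and no wildcard constraint.
// Documents can only contain the predefined fields.
collection Product {
  name: String?
  price: Int = 0
}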
Schemaless by default
If a collection has no field definitions, it’s schemaless by default. It implicitly accepts ad hoc fields of any type.
Progressive enforcement
Using permissive document types is often helpful earlier in an application’s development. Allowing ad hoc fields lets you add fields as needed.
As your data evolves, you can use zero-downtime migrations to add field definitions for ad hoc fields and normalize field values. This lets you move from a permissive document type to a strict one (or the reverse).
Tutorial: Progressively enforce a document type
Zero-downtime schema migrations
A schema migration is an update to a collection schema’s field definitions or wildcard constraint. Schema migrations require no downtime or locks on your database.
Migrations block
Reference: FSL collection schema: Migrations block
To handle migrations, you include a migrations block in the collection schema. The block contains one or more imperative migration statements.
The statements instruct Fauna on how to migrate from the collection’s current field definitions and wildcard constraint to the new ones.
collection Product {
  ...
  *: Any

  migrations {
    // Applied 2099-05-06
    add .typeConflicts
    add .stock
    move_conflicts .typeConflicts
    backfill .stock = 0
    drop .internalDesc
    move .desc -> .description
    split .creationTime -> .creationTime, .creationTimeEpoch

    // Applied 2099-05-20
    // Make `price` a required field.
    split .price -> .price, .tempPrice
    drop .tempPrice
    backfill .price = 1
  }
}
Run a schema migration
A typical schema migration involves the following steps:
- Update the field definitions and wildcard constraint in the collection schema.

- Add one or more related migration statements to the collection schema’s migrations block. Include comments to group and annotate statements related to the same migration.

  Fauna runs each new migration statement sequentially from top to bottom. Fauna ignores unchanged migration statements from previous migrations.

- Commit the updated collection schema to Fauna with a staged schema change.

  You can’t use a staged schema change to delete or rename schema. Instead, delete or rename the schema in a separate unstaged schema change.
To run a staged schema change using the CLI:
- Use fauna schema push to stage the schema changes. fauna schema push stages schema changes by default:

    fauna schema push

  A database can have one staged schema change at a time. You can update staged schema using fauna schema push.

  When a database has staged schema, any access or updates done using FQL’s schema commands on related system collections interact with the staged schema, not the database’s active schema. For example, when schema changes are staged, Collection.all() returns Collection documents for the staged collection schema, not the database’s Collection documents.

  If a database has staged schema, you can’t edit the database’s active schema using FQL, the Dashboard, or an unstaged schema change. You must first abandon the staged schema change.

- Use fauna schema status to check the status of the staged schema:

    fauna schema status

  Possible statuses:

  | Staged status | Description |
  |---|---|
  | pending | Changes are being processed. New indexes are still being built. |
  | ready | All indexes have been built. Changes are ready to commit. |
  | failed | There was an error during the staging process. |

- When the status is ready, use fauna schema commit to apply the staged schema to the database:

    fauna schema commit

  You can only commit staged schema with a status of ready.

  If you no longer wish to apply the staged schema or if the status is failed, use fauna schema abandon to unstage the schema:

    fauna schema abandon

- Once committed, changes from the migration are immediately visible in any subsequent queries.
Migration errors
When you submit a collection schema, Fauna checks the schema’s field definitions and migration statements for potential conflicts.
If a change could conflict with the collection’s data, Fauna rejects the schema with an error message. The check doesn’t require a read or scan of the collection’s documents.
Index definitions
An index stores, or covers, specific document field values for quick retrieval. Using indexes can significantly improve query performance and reduce costs, especially for large datasets.
See Indexes
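As a sketch, the following index covers name as its lookup term and returns covered stock values in descending order. In FQL queries, an index defined with terms is typically called as a collection method, such as Product.byName("limes"); the field and value here are illustrative.

collection Product {
  name: String?
  stock: Int = 0

  // Covers `name` as the index term and orders results
  // by `stock`, descending.
  index byName {
    terms [.name]
    values [desc(.stock)]
  }
}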
Unique constraints
Reference: FSL collection schema: Unique constraint definitions
Unique constraints ensure a field value or combination of field values is unique for each document in a collection. Fauna rejects document writes that don’t meet the constraint.
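For example, the following sketch, using an illustrative Customer collection, rejects any write that would result in two documents with the same email value:

collection Customer {
  email: String?

  // Each document's `email` value must be unique
  // across the collection.
  unique [.email]
}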
Check constraints
Reference: FSL collection schema: Check constraint definitions
Check constraints ensure field values meet a pre-defined rule. For example, you can check that field values are in an allowed range.
You define a check constraint as a read-only FQL predicate. Fauna only allows document writes if the predicate evaluates to true.
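For example, the following sketch reuses the posStock constraint from the earlier schema and adds an illustrative price range check:

collection Product {
  price: Int = 0
  stock: Int = 0

  // Only allow writes where `stock` is zero or greater.
  check posStock ((doc) => doc.stock >= 0)

  // Only allow writes where `price` is between 1 and 100000.
  check priceRange ((doc) => doc.price >= 1 && doc.price <= 100000)
}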
Document time-to-live (TTL)
A document can include an optional ttl (time-to-live) field that contains the document’s expiration timestamp. After the ttl timestamp passes, Fauna permanently deletes the document.

You can use a collection schema’s ttl_days field to set a default ttl for collection documents. See Set a default ttl.

You can use a collection schema’s document_ttls field to control whether you can write to the ttl field for collection documents. See Enable or disable ttl writes.
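For example, the following sketch, using an illustrative Message collection, enables ttl writes and expires documents 30 days after creation by default:

collection Message {
  // Allow writes to the `ttl` field for collection documents.
  document_ttls true

  // By default, set each document's `ttl` to 30 days after its
  // creation timestamp.
  ttl_days 30
}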
See Document time-to-live (TTL)
Document history
Fauna stores snapshots of each document’s history. Fauna creates these snapshots each time the document receives a write.
You can use a collection schema’s history_days field to set how many days of document history Fauna retains for a collection’s documents.
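For example, the following sketch retains seven days of document history for an illustrative Order collection:

collection Order {
  // Retain seven days of document history.
  history_days 7
}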
See Document history
Protected mode
Protected mode is a database setting that prohibits destructive operations on a database’s collections.
When you create a database in the Fauna Dashboard, you select one of the following Protected Mode options:
| Option | Description |
|---|---|
| Disabled | Default. No operations are prohibited. |
| Enabled | Prohibits destructive operations. See Prohibited operations. |
| Inherit | Sets the database’s Protected Mode setting to the nearest ancestor’s Protected Mode setting that is not Inherit. This option is only available for child databases. |
Prohibited operations
When Protected Mode is Enabled, Fauna prohibits destructive operations on the database’s collections. These include deletes of collection resources, changes to certain schema definitions, and decreases to certain schema field values. Some prohibited operations have exceptions.
Database schema validation
When you write to a schema, Fauna parses and validates the entire database schema in a single transaction.
Concurrent schema writes in the same database can cause contended transactions, even if the changes affect different resources. To avoid errors, perform schema changes sequentially instead.
Schema version
Fauna maintains a schema_version for the database schema that’s returned in Query HTTP API responses. Fauna increments this version when you write to any schema for the database.
The schema_version acts as a comparative value to help clients determine the minimum schema version used for query execution.
Considerations
Keep the following in mind when working with the schema_version:

- The schema_version is cached and may change without schema modifications.
- The schema_version is not permanently persisted in Fauna.
- You should only use the schema_version to verify that a query request ran against a specific minimum schema version. Do not rely on schema_version being consistent across requests.
Client drivers
Fauna’s client drivers include the schema_version values in a query info or query response class. This class is used for both successful query responses and errors:
- JavaScript driver: QueryInfo
- Python driver: QueryInfo
- Go driver: QueryInfo
- .NET/C# driver: QueryResponse
- JVM driver: QueryResponse