Migrations block

Schema migrations are in beta. By performing a schema migration, you automatically opt in to the beta.

To register for the beta and sign up for production support, go to https://go.fauna.com/schemabetaproductionsupport.

A migrations block instructs Fauna how to handle updates to a collection’s field definitions or top-level wildcard constraint.

This process, called a schema migration, lets you change the structure of a collection’s documents. For a tutorial, see Progressively enforce a document type.

You include a migrations block in a collection schema. A collection schema can only contain one migrations block. The block must include one or more migration statements:

collection Product {
  ...
  migrations {
    // Applied 2099-05-06
    add .typeConflicts
    add .stock
    move_conflicts .typeConflicts
    backfill .stock = 0
    drop .internalDesc
    move .desc -> .description
    split .creationTime -> .creationTime, .creationTimeEpoch

    // Applied 2099-05-20
    // Make `price` a required field.
    split .price -> .price, .tempPrice
    drop .tempPrice
    backfill .price = 1

    // Applied 2099-06-01
    // Re-add wildcard
    add_wildcard
  }
}

Syntax

migrations {
  [add <field> . . .]
  [add_wildcard . . .]
  [backfill <field> = <value> . . .]
  [drop <field> . . .]
  [move <origField> -> <newField> . . .]
  [move_conflicts <field> . . .]
  [move_wildcard <field> . . .]
  [split <origField> -> <splitField>, <splitField>[, <splitField> . . .] . . .]
}

Migration statements

The migrations block supports the following migration statements:

add

Adds a field definition. For examples, see Add a nullable field and Add and backfill a non-nullable field.

Requires a <field> accessor. Supports dot notation and bracket notation.

If the schema accepted ad hoc fields before migration, a move_conflicts statement must follow the add statement. If the field is present in existing documents, Fauna assigns non-conforming values to the move_conflicts statement’s catch-all field.
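
As a minimal sketch (the `stock` field is illustrative), an add statement for a schema that previously accepted ad hoc fields might look like:

collection Product {
  // Illustrative new field definition.
  stock: Int?
  // Catch-all field for existing, non-conforming `stock` values.
  typeConflicts: { *: Any }?

  // Keeps accepting documents with ad hoc fields.
  *: Any

  migrations {
    add .typeConflicts
    add .stock
    // Required because the schema accepted ad hoc fields before the migration.
    move_conflicts .typeConflicts
  }
}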

add_wildcard

Adds a top-level wildcard constraint. For an example, see Add a top-level wildcard constraint.

An add_wildcard statement is not required when you first add field definitions to a collection schema.

backfill

Backfills a new field with a value. For examples, see Add and backfill a non-nullable field.

Requires a <field> accessor and a field <value>. The accessor supports dot notation and bracket notation.

A backfill statement is required for any migration that could result in an empty non-nullable field.

The backfill operation only affects existing documents where the field is missing. It does not affect documents added after the migration.

The field value can be an FQL expression. The expression can have no effect other than to compute the backfill value.

Fauna evaluates the expression at schema update time.

You can use a document as a backfill value. See Backfill using a document.
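
For example, given a `backfill .stock = 0` statement, an existing document that's missing the field gains the backfilled value, while a document that already contains a conforming `stock` value is unchanged (other field values are illustrative):

// Before migration:
{
  ...
  // The document has no `stock` field.
  ...
}

// After migration:
{
  ...
  // `stock` is backfilled with `0`.
  stock: 0,
  ...
}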

drop

Removes an existing field and its values. For an example, see Drop a field.

Requires a <field> accessor. Supports dot notation and bracket notation.

move

Renames an existing field. For an example, see Rename a field.

Requires <origField> and <newField> accessors. The accessors support dot notation and bracket notation.

move_conflicts

Assigns non-conforming values for fields in previous add migration statements to a catch-all field. For examples, see Add a nullable field and Add and backfill a non-nullable field.

The move_conflicts statement only affects existing documents. It does not affect documents added after the migration.

Requires a <field> accessor for the catch-all field. The accessor supports dot notation and bracket notation.

The catch-all field’s type must be { *: Any }?. The statement nests non-conforming values in the catch-all field using the original field name as a property key.

If the catch-all field already contains a nested field with the same key, Fauna prepends the new key with an underscore (_).

move_wildcard

Assigns fields without a field definition to a catch-all field. Required to remove a top-level wildcard constraint. For an example, see Remove a top-level wildcard constraint.

Requires a <field> accessor for the catch-all field. The accessor supports dot notation and bracket notation.

The catch-all field’s type must be { *: Any }?. The statement nests values in the catch-all field using the original field name as a property key.

If the catch-all field already contains a nested field with the same key, Fauna prepends the new key with an underscore (_).

split

Splits an existing field into multiple fields based on data type. For examples, see Split a field.

Requires an <origField> accessor and two or more <splitField> accessors. The <origField> can be one of the <splitField> accessors.

The <origField> field’s values are assigned to the first <splitField> field with a matching type. Fields are checked from left to right. For an example, see Match values to split fields.

Run a schema migration

A typical schema migration involves three steps:

  1. Update the field definitions and wildcard constraint in the collection schema.

  2. Add one or more related migration statements to the collection schema’s migrations block. Include comments to group and annotate statements related to the same migration.

  3. Submit the updated collection schema to Fauna using the Fauna Dashboard or the Fauna CLI's schema push command.

Fauna runs each new migration statement sequentially from top to bottom. Fauna ignores unchanged migration statements from previous migrations.

Changes from the migration are immediately visible in any subsequent queries.
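
For example, after a migration that adds and backfills a `stock` field, a query run immediately afterward returns documents with the new field (the document shown is illustrative):

Product.all().first()

{
  id: "<DOCUMENT_ID>",
  coll: Product,
  ts: Time("2099-07-19T18:48:58.985Z"),
  stock: 0
}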

Migration errors

When you submit a collection schema, Fauna checks the schema’s field definitions and migration statements for potential conflicts.

If a change could conflict with the collection’s data, Fauna rejects the schema with an error message. The check doesn’t require a read or scan of the collection’s documents.
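
For example, the following schema adds a non-nullable `stock` field but includes no backfill statement for it (the field names are illustrative). Because the migration could leave existing documents with an empty non-nullable `stock` field, Fauna rejects the schema:

collection Product {
  stock: Int
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .stock
    move_conflicts .typeConflicts
    // Missing: `backfill .stock = <value>`.
    // Without it, existing documents could have an empty
    // non-nullable `stock` field, so Fauna rejects the schema.
  }
}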

Previous migration statements

For documentation purposes, you can retain migration statements from previous schema migrations in a collection schema. This lets you apply the same changes to other databases. For example, you could copy migration statements used for a staging database to run a similar migration on a production database.

Use caution when copying migration statements that depend on default field values across databases. These migrations can produce different results on different databases based on:

  • The state of documents at migration time

  • Previously applied migrations

Migrations for empty collections

If a collection has never contained a document, you can change its field definitions and top-level wildcard constraint without a migrations block. If the collection schema includes a migrations block, Fauna ignores it.
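
As a sketch, assuming a `Product` collection that has never contained a document, you could submit the second schema directly after the first, with no migrations block:

// Original schema:
collection Product {
  name: String?
}

// Updated schema. No migrations block is needed because
// the collection has never contained a document.
collection Product {
  name: String
  price: Int
}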

Limitations

A migration statement’s field accessors can’t reference nested fields, including:

  • Properties of an object field

  • Elements of an array field
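
For example, the following statement wouldn't be allowed because its accessor references a property of a hypothetical `metadata` object field:

migrations {
  // Not allowed: the accessor references a nested field.
  drop .metadata.internal
}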

Examples

Add a nullable field

Starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration adds a nullable field to the collection. Nullable fields aren’t required in new collection documents.

collection Product {
  // Adds the `description` field.
  // Accepts `String` or `null` values.
  description: String?
  // Adds the `typeConflicts` field as a catch-all field for
  // existing `description` values that aren't `String` or `null`.
  // Because `typeConflicts` is used in a `move_conflicts` statement,
  // it must have a type of `{ *: Any }?`.
  // If the schema didn't accept ad hoc fields before
  // the migration, a catch-all field isn't needed.
  typeConflicts: { *: Any }?

  // The schema now includes field definitions.
  // Adds an explicit wildcard constraint to continue
  // accepting documents with ad hoc fields.
  *: Any

  migrations {
    // Adds the `typeConflicts` field.
    add .typeConflicts
    // Adds the `description` field.
    add .description
    // Nests non-conforming `description` and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    // If the schema didn't accept ad hoc fields before the
    // migration, a `move_conflicts` statement isn't needed.
    move_conflicts .typeConflicts
  }
}

How a catch-all field works

The previous migration uses a move_conflicts statement to reassign non-conforming description field values to the typeConflicts catch-all field.

The following examples show how the migration would affect existing documents that contain a description field.

The catch-all field for a move_wildcard statement works similarly.

Migrate a document with no changes

The migration does not affect existing documents that contain a field value of an accepted type.

{
  ...
  // `description` contains an accepted data type.
  // The field stays the same throughout the migration.
  description: "Conventional Hass, 4ct bag",
  ...
}

Similarly, a move_wildcard statement does not affect existing fields that conform to a field definition.

Migrate a non-conforming field value

If an existing document contains a description field with a non-conforming value, the migration nests the value in the typeConflicts catch-all field.

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  ...
}

// After migration:
{
  ...
  // The `description` field is nested in
  // the `typeConflicts` catch-all field.
  typeConflicts: {
    description: 5
  }
  ...
}

The catch-all field already exists as an object

If the document already contains the catch-all field as an object, the migration uses the existing field.

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  // The `typeConflicts` catch-all field already exists as an object.
  typeConflicts: {
    backordered: "yes"
  }
  ...
}

// After migration:
{
  ...
  // The `description` field is nested in
  // the existing `typeConflicts` catch-all field.
  typeConflicts: {
    description: 5,
    backordered: "yes"
  }
  ...
}

The catch-all field already exists with non-conforming values

If you add the catch-all field in the same migration and an existing document contains a non-conforming value for the catch-all field itself, Fauna nests that value inside the catch-all field.

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  // The `typeConflicts` catch-all field already exists but isn't an object.
  // The field contains an unaccepted type.
  typeConflicts: true
  ...
}

// After migration:
{
  ...
  // The existing `typeConflicts` field value doesn't conform
  // to the new `typeConflicts` field definition. The migration
  // nests the existing, non-conforming `typeConflicts` field
  // value in itself.
  typeConflicts: {
    description: 5,
    typeConflicts: true
  }
  ...
}

The catch-all field already contains the field key

If the catch-all field already contains a nested field with the same key, Fauna prepends the new key with an underscore (_).

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  // The `typeConflicts` catch-all field already contains a nested
  // `description` field.
  typeConflicts: {
    description: "Conventional Hass, 4ct bag"
  }
  ...
}

// After migration:
{
  ...
  typeConflicts: {
    description: "Conventional Hass, 4ct bag",
    // The new key is prepended with an underscore.
    _description: 5
  }
  ...
}

Add and backfill a non-nullable field

Starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration adds a non-nullable field to the collection. Non-nullable fields must include a backfill statement for existing documents.

collection Product {
  // Adds the `stock` field.
  stock: Int
  // Adds the `typeConflicts` field as a catch-all field for
  // existing `stock` values that aren't `Int`.
  // Because `typeConflicts` is used in a `move_conflicts` statement,
  // it must have a type of `{ *: Any }?`.
  // If the schema didn't accept ad hoc fields before
  // the migration, a catch-all field isn't needed.
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    // Adds the `typeConflicts` field.
    add .typeConflicts
    // Adds the `stock` field.
    add .stock
    // Nests non-conforming `stock` and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    // If the schema didn't accept ad hoc fields before the
    // migration, a `move_conflicts` statement isn't needed.
    move_conflicts .typeConflicts
    // Set `stock` to `0` for existing documents
    // with a `null` (missing) or non-conforming `stock` value.
    backfill .stock = 0
  }
}

For examples of how the migration’s move_conflicts statement reassigns non-conforming field values, see How a catch-all field works.

Backfill using today’s date

Use Date.today() to use today’s date as a backfill value:

collection Product {
  // Adds the `creationDate` field.
  creationDate: Date
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .creationDate
    move_conflicts .typeConflicts
    // Set `creationDate` to today for existing documents.
    backfill .creationDate = Date.today()
  }
}

Fauna evaluates the expression at schema update time.

Backfill using the current time

Use Time.now() to use the current time as a backfill value:

collection Product {
  // Adds the `creationTime` field.
  creationTime: Time
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .creationTime
    move_conflicts .typeConflicts
    // Set `creationTime` to now for existing documents.
    backfill .creationTime = Time.now()
  }
}

Fauna evaluates the expression at schema update time.

Backfill using an ID

Use newId() to use a unique ID as a backfill value. You must cast the ID to a String using toString():

collection Product {
  // Adds the `productId` field.
  productId: String = newId().toString()
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .productId
    move_conflicts .typeConflicts
    // Set `productId` to an ID for existing documents.
    backfill .productId = newId().toString()
  }
}

Fauna uses the same ID value to backfill existing documents. The backfilled ID is not unique among documents.
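
For example, after the migration, two existing documents would contain the same backfilled ID (the ID shown is illustrative):

// First existing document after migration:
{
  ...
  productId: "412655814375440455",
  ...
}

// Second existing document after migration:
{
  ...
  // Same backfilled ID as the first document.
  productId: "412655814375440455",
  ...
}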

Backfill using a document

You can use a document as a backfill value:

collection Product {
  // Adds the `category` field.
  category: Ref<Category>
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .category
    move_conflicts .typeConflicts
    // Set `category` to a `Category` collection document.
    // Replace `400684606016192545` with a `Category` document ID.
    backfill .category = Category("400684606016192545")
  }
}

Fauna doesn’t guarantee the document exists. You can’t fetch the document using an FQL expression.
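
As a sketch, an existing document after the migration contains the reference, whether or not a matching Category document exists:

{
  ...
  // Backfilled reference. Fauna doesn't verify that a
  // `Category` document with this ID exists.
  category: Category("400684606016192545"),
  ...
}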

Add multiple fields with the same catch-all field

Multiple fields can use the same move_conflicts statement during a migration.

For example, starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration adds multiple fields. If the fields are present in existing documents, Fauna nests any non-conforming values in the catch-all field specified by the next move_conflicts statement:

collection Product {
  description: String?
  price: Int

  typeConflicts: { *: Any }?
  *: Any

  migrations {
    add .typeConflicts
    add .description
    add .price
    // Nests non-conforming `description`, `price`, and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    move_conflicts .typeConflicts
    backfill .price = 1
  }
}

Add multiple fields with different catch-all fields

A migration can include multiple move_conflicts statements. This lets you use different catch-all fields for different fields.

For example, starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration includes multiple move_conflicts statements:

collection Product {
  description: String?
  price: Int
  stock: Int?


  typeConflicts: { *: Any }?
  stockTypeConflicts: { *: Any }?
  *: Any

  migrations {
    add .typeConflicts
    add .description
    add .price
    // Nests non-conforming `description`, `price`, and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    move_conflicts .typeConflicts
    backfill .price = 1

    add .stockTypeConflicts
    add .stock
    // Nests non-conforming `stock` and `stockTypeConflicts`
    // field values in the `stockTypeConflicts` catch-all field.
    move_conflicts .stockTypeConflicts
  }
}

Drop a field

Starting with the following collection schema:

collection Product {
  price: Int = 0
  internalDesc: String?
}

The following migration removes the internalDesc field and its values from the collection’s documents:

collection Product {
  price: Int = 0
  // Removed the `internalDesc` field.

  migrations {
    drop .internalDesc
  }
}

Drop a document reference field

You can’t delete a collection that’s referenced by a field definition or other schema. To delete a collection and drop any related document reference fields for the collection:

  1. Run migrations to drop any field definitions that reference the collection. For example, starting with the following collection schema:

    collection Product {
      name: String
      // Accepts `Category` collection documents and `null`.
      category: Ref<Category>?
    }

    The following migration removes the document reference field:

    collection Product {
      name: String
      // Removed the `category` field.
    
      migrations {
        drop .category
      }
    }
  2. Remove references to the collection in any other schema. For example, remove references to the collection from any role schema.

  3. Remove the collection schema for the collection you want to delete.

  4. Save your changes in the Fauna Dashboard or push the schema changes using the Fauna CLI's schema push command.

Rename a field

Starting with the following collection schema:

collection Product {
  desc: String?
}

The following migration renames the desc field to description:

collection Product {
  // Renamed `desc` to `description`.
  description: String?

  migrations {
    move .desc -> .description
  }
}

Split a field

Starting with the following collection schema:

collection Product {
  creationTime: Time | Number?
}

The following migration reassigns creationTime field values in the following order:

  1. Time values to creationTime

  2. Number values to creationTimeEpoch

collection Product {
  // `creationTime` accepts `Time` values.
  creationTime: Time?
  // `creationTimeEpoch` accepts `Number` values.
  creationTimeEpoch: Number?

  migrations {
    split .creationTime -> .creationTime, .creationTimeEpoch
  }
}

Match values to split fields

split assigns field values to the first field with a matching type. Keep this in mind when using superset types, such as Number or Any.

For example, starting with the following collection schema:

collection Product {
  creationTime: Time | Number?
}

The following migration would reassign creationTime field values in the following order:

  1. Time values to creationTime

  2. Number values, including Int values, to creationTimeNum

  3. Int values to creationTimeInt

collection Product {
  // `creationTime` accepts `Time` values.
  creationTime: Time?
  // `creationTimeNum` accepts any `Number` value, including `Int` values.
  creationTimeNum: Number?
  // `creationTimeInt` accepts `Int` values.
  creationTimeInt: Int?

  migrations {
    split .creationTime -> .creationTime, .creationTimeNum, .creationTimeInt
  }
}

Because creationTimeNum precedes creationTimeInt, split would never assign a value to the creationTimeInt field.

Instead, you can reorder the split statement as follows:

collection Product {
  // `creationTime` accepts `Time` values.
  creationTime: Time?
  // `creationTimeInt` accepts `Int` values.
  creationTimeInt: Int?
  // `creationTimeNum` accepts any other `Number` value.
  creationTimeNum: Number?

  migrations {
    split .creationTime -> .creationTime, .creationTimeInt, .creationTimeNum
  }
}

Now, split assigns any creationTime values with an Int type to creationTimeInt. Any remaining Number values are assigned to creationTimeNum.
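
For example, an existing document with an Int creationTime value would migrate as follows (the value shown is illustrative):

// Before migration:
{
  ...
  // `creationTime` contains an `Int` value.
  creationTime: 1714070400,
  ...
}

// After migration:
{
  ...
  // The `Int` value matches `creationTimeInt`, the first
  // `split` field with a matching type.
  creationTimeInt: 1714070400,
  ...
}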

Narrow a field’s accepted types

You can use migration statements to narrow a field’s accepted data types, including Null. For example, you can convert a nullable field to a non-nullable field.

Starting with the following collection schema:

collection Product {
  // Accepts `String` and `null` values.
  description: String?
  price: Int?
}

The following migration:

  • Uses split to reassign null values for the description field to a temporary tmp field.

  • Drops the tmp field and its values.

  • Backfills any description values that were previously null with the "default" string.

collection Product {
  // Accepts `String` values only.
  description: String
  price: Int?

  migrations {
    split .description -> .description, .tmp
    drop .tmp
    backfill .description = "default"
  }
}

Because it follows a split statement, backfill only affects documents where the description field value was null and split to tmp.
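
For example, an existing document with a null description would migrate as follows (the price field is illustrative):

// Before migration:
{
  ...
  // `description` is `null`.
  description: null,
  price: 10,
  ...
}

// After migration:
{
  ...
  // The `null` value was split to `tmp`, `tmp` was dropped,
  // and `description` was backfilled.
  description: "default",
  price: 10,
  ...
}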

Add a top-level wildcard constraint

Starting with the following collection schema:

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?
}

The following migration adds a top-level wildcard constraint. Once added, the collection accepts documents with ad hoc fields.

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?

  *: Any

  migrations {
    add_wildcard
  }
}

Remove a top-level wildcard constraint

Starting with the following collection schema:

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?

  *: Any
}

The following migration removes the collection’s top-level wildcard constraint. Once removed, the collection no longer accepts documents with ad hoc fields.

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?

  // Removes the `*: Any` wildcard constraint.

  // Adds the `typeConflicts` field as a catch-all field for
  // existing ad hoc fields that don't
  // have a field definition.
  typeConflicts: { *: Any }?

  migrations {
    add .typeConflicts
    move_conflicts .typeConflicts
    // Nests existing ad hoc field values without a field definition
    // in the `typeConflicts` catch-all field.
    move_wildcard .typeConflicts
  }
}

The move_wildcard statement’s catch-all field works similarly to a move_conflicts statement’s catch-all field. See How a catch-all field works.

Copy a migration that depends on default values

The following example shows how migration statements that depend on default values can produce different results when copied to a database in a different state. It uses two example databases: Dev and Staging.

  1. In the Dev database, create a Product collection with the following collection schema:

    collection Product {
      stock: Int = 0
    }
  2. Create a Product document with no fields:

    Product.create({})
  3. Migrate the collection schema to add a price field with a default value of 0:

    collection Product {
      stock: Int = 0
      // Accepts `Int` and `String` values.
      // Defaults to `0`.
      price: Int | String = 0
    
      migrations {
        // Migration #1 (Current)
        add .price
      }
    }

    The document you previously created now has a price of 0:

    {
      id: "<DOCUMENT_ID>",
      coll: Product,
      ts: Time("2099-07-19T18:48:58.985Z"),
      stock: 0,
      price: 0
    }
  4. Migrate the schema to split price field values based on data type:

    collection Product {
      stock: Int = 0
      // Adds the `priceInt` field.
      // `priceInt` defaults to `1`.
      // `price` previously defaulted to `0`.
      priceInt: Int = 1
      // Adds the `priceStr` field.
      priceStr: String = ""
    
      migrations {
        // Migration #1 (Previous)
        // Already run. Fauna ignores
        // previously run migration statements.
        add .price
    
        // Migration #2 (Current)
        // Splits `price` field values.
        // `Int` values are assigned to `priceInt`.
        // `String` values are assigned to `priceStr`.
        split .price -> .priceInt, .priceStr
      }
    }

    The document you previously created now has a priceInt of 0:

    {
      id: "<DOCUMENT_ID>",
      coll: Product,
      ts: Time("2099-07-19T18:48:58.985Z"),
      stock: 0,
      priceInt: 0,
      priceStr: ""
    }

    The document’s price was previously 0. The split statement reassigned the price value to priceInt. The priceInt field’s default value is not applied.

  5. In a Staging database, create a Product collection with the same initial schema:

    collection Product {
      stock: Int = 0
    }
  6. Create a Product document in the Staging database:

    Product.create({})
  7. In the Staging database, run a migration on the Product collection schema that combines the two previous migrations:

    collection Product {
      stock: Int = 0
      // `priceInt` defaults to `1`.
      priceInt: Int = 1
      priceStr: String = ""
    
      migrations {
        add .price
    
        split .price -> .priceInt, .priceStr
      }
    }

    In the Staging database, the document has a priceInt value of 1:

    {
      id: "<DOCUMENT_ID>",
      coll: Product,
      ts: Time("2099-07-19T18:48:58.985Z"),
      stock: 0,
      // `priceInt` field
      priceInt: 1,
      priceStr: ""
    }

    Because the document didn’t previously contain a price field, the split statement didn’t affect the document. Instead, the document uses the default priceInt value.
