FSL collection schema: Migrations block

A migrations block instructs Fauna how to handle updates to a collection’s field definitions or top-level wildcard constraint.

This process, called a schema migration, lets you change the structure of a collection’s documents. For a tutorial, see Progressively enforce a document type.

You define a migrations block as part of an FSL collection schema. A collection schema can contain only one migrations block. The block must include one or more migration statements:

collection Product {
  ...
  migrations {
    // Applied 2099-05-06
    add .typeConflicts
    add .stock
    move_conflicts .typeConflicts
    backfill .stock = 0
    drop .internalDesc
    move .desc -> .description
    split .creationTime -> .creationTime, .creationTimeEpoch

    // Applied 2099-05-20
    // Make `price` a required field.
    split .price -> .price, .tempPrice
    drop .tempPrice
    backfill .price = 1

    // Applied 2099-06-01
    // Re-add wildcard
    add_wildcard
  }
}

You can create and manage schema using the Fauna CLI, the Fauna Dashboard, or the Fauna Core HTTP API.

Fauna stores each collection schema as an FQL document in the Collection system collection. The Collection document’s migrations field contains FQL versions of the collection’s migrations block.
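
For example, assuming a Product collection exists, a query similar to the following returns its Collection document:

// Returns the `Collection` document for the `Product` collection.
// The document's `migrations` field contains FQL versions of the
// collection's `migrations` block.
Collection.byName("Product")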

FSL syntax

migrations {
  [add <field> . . .]
  [add_wildcard . . .]
  [backfill <field> = <value> . . .]
  [drop <field> . . .]
  [move <origField> -> <newField> . . .]
  [move_conflicts <field> . . .]
  [move_wildcard <field> . . .]
  [split <origField> -> <splitField>, <splitField>[, <splitField> . . .] . . .]
}

Migration statements

A migrations block supports the following migration statements:

add

Adds a field definition. For examples, see Add a nullable field and Add and backfill a non-nullable field.

Requires a <field> accessor. Supports dot notation and bracket notation.

If the schema accepted ad hoc fields before the migration, a move_conflicts statement must follow the add statement. If the field is present in existing documents, Fauna assigns non-conforming values to the move_conflicts statement’s catch-all field.

add_wildcard

Adds a top-level wildcard constraint. For an example, see Add a top-level wildcard constraint.

An add_wildcard statement is not required when you first add field definitions to a collection schema.

backfill

Backfills a new field with a value. For examples, see Add and backfill a non-nullable field.

Requires a <field> accessor and a field <value>. The accessor supports dot notation and bracket notation.

A backfill statement is required for any migration that could result in an empty non-nullable field.

The backfill operation only affects existing documents where the field is missing. It does not affect documents added after the migration.

The field value can be an FQL expression with no side effects. Fauna evaluates the expression at schema update time.

Document references for certain system collections are supported. References to named system collection documents are not supported. See Backfill using a document reference.

drop

Removes an existing field and its values. For an example, see Drop a field.

Requires a <field> accessor. Supports dot notation and bracket notation.

move

Moves or renames an existing field. For examples, see Rename a field and Move a nested field.

Requires <origField> and <newField> accessors. The accessors support dot notation and bracket notation.

move_conflicts

Assigns non-conforming values for fields in preceding add migration statements to a catch-all field. For examples, see Add a nullable field, Add and backfill a non-nullable field, and How a catch-all field works.

The move_conflicts statement only affects existing documents. It does not affect documents added after the migration.

Requires a <field> accessor for the catch-all field. The accessor supports dot notation and bracket notation. You can’t use move_conflicts to handle conflicts within an object’s nested fields.

The catch-all field’s type must be { *: Any }?. The catch-all field can be nested in an object. The statement nests non-conforming values in the catch-all field using the original field name as a property key.

If the catch-all field already contains a nested field with the same key, Fauna prepends the new key with an underscore (_).

move_wildcard

Assigns top-level fields without a field definition to a catch-all field. Required to remove a top-level wildcard constraint. For an example, see Remove a top-level wildcard constraint.

Requires a <field> accessor for the catch-all field. The accessor supports dot notation and bracket notation. You can’t use move_wildcard to handle conflicts within an object’s nested fields.

The catch-all field’s type must be { *: Any }?. The catch-all field can be nested in an object. The statement nests values in the catch-all field using the original field name as a property key.

If the catch-all field already contains a nested field with the same key, Fauna prepends the new key with an underscore (_).

split

Splits an existing field into multiple fields based on data type. For examples, see Split a field.

Requires an <origField> accessor and two or more <splitField> accessors. The <origField> can be one of the <splitField> accessors.

The <origField> field’s values are assigned to the first <splitField> field with a matching type. Fields are checked from left to right. For an example, see Match values to split fields.

Run a schema migration

A typical schema migration involves the following steps:

  1. Update the field definitions and wildcard constraint in the collection schema.

  2. Add one or more related migration statements to the collection schema’s migrations block. Include comments to group and annotate statements related to the same migration.

    Fauna runs each new migration statement sequentially from top to bottom. Fauna ignores unchanged migration statements from previous migrations.

  3. Commit the updated collection schema to Fauna with a staged schema change.

    A staged schema change lets you change one or more collection schemas without downtime caused by index builds.

    You can’t use a staged schema change to delete or rename schema. Instead, delete or rename the schema in a separate unstaged schema change.

    To run a staged schema change using the CLI:

    1. Use fauna schema push to stage the schema changes. fauna schema push stages schema changes by default:

      fauna schema push

      A database can have one staged schema change at a time. You can update staged schema using fauna schema push.

      When a database has staged schema, any access or updates done using FQL’s schema commands on related system collections interact with the staged schema, not the database’s active schema.

      For example, when schema changes are staged, Collection.all() returns Collection documents for the staged collection schemas, not for the database’s active schema.

      If a database has staged schema, you can’t edit the database’s active schema using FQL, the Dashboard, or an unstaged schema change. You must first abandon the staged schema change.

    2. Use fauna schema status to check the status of the staged schema:

      fauna schema status

      Possible statuses:

      • pending: Changes are being processed. New indexes are still being built.

      • ready: All indexes have been built. Changes are ready to commit.

      • failed: There was an error during the staging process.

    3. When the status is ready, use fauna schema commit to apply the staged schema to the database:

      fauna schema commit

      You can only commit staged schema with a status of ready.

      If you no longer wish to apply the staged schema or if the status is failed, use fauna schema abandon to unstage the schema:

      fauna schema abandon

Once committed, changes from the migration are immediately visible in any subsequent queries.
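
For example, after committing the Product migration shown earlier, a query like the following would immediately reflect the backfilled stock values. The query is illustrative; adjust field names to your schema:

// Existing documents include the backfilled `stock` field
// as soon as the staged schema is committed.
Product.where(.stock == 0).count()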

Migration errors

When you submit a collection schema, Fauna checks the schema’s field definitions and migration statements for potential conflicts.

If a change could conflict with the collection’s data, Fauna rejects the schema with an error message. The check doesn’t require a read or scan of the collection’s documents.
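
For example, a schema change like the following sketch could be rejected because it adds a non-nullable field without a backfill statement for existing documents:

collection Product {
  name: String
  stock: Int

  migrations {
    // Rejected if the collection contains documents:
    // `stock` is non-nullable, but the migration has no
    // `backfill` statement for existing documents.
    add .stock
  }
}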

Previous migration statements

For documentation purposes, you can retain migration statements from previous schema migrations in a collection schema. This lets you apply the same changes to other databases. For example, you could copy migration statements used for a staging database to run a similar migration on a production database.

Use caution when copying migration statements that depend on default field values across databases. These migrations can produce different results on different databases based on:

  • The state of documents at migration time

  • Previously applied migrations

Migrations for empty collections

If a collection has never contained a document, you can change its field definitions and top-level wildcard constraint without a migrations block. If the collection schema includes a migrations block, Fauna ignores it.
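
For example, if a Product collection has never contained a document, you could add a non-nullable field directly. A minimal sketch:

collection Product {
  name: String
  // Added without a `migrations` block. The collection
  // has never contained a document, so no migration
  // statements are needed.
  price: Int
}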

Limitations

A migration statement’s field accessors can’t reference fields nested in an Array of objects.
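
For example, a hypothetical schema like the following sketch couldn’t use a migration statement to target a field nested in an Array of objects. The Order collection and its fields are illustrative:

collection Order {
  // `items` is an Array of objects.
  items: Array<{ qty: Int }>?

  migrations {
    // Not supported: accessors can't reference the nested
    // `qty` field inside the `items` Array of objects.
  }
}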

Examples

Add a nullable field

Starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration adds a nullable field to the collection. Nullable fields aren’t required in new collection documents.

collection Product {
  // Adds the `description` field.
  // Accepts `String` or `null` values.
  description: String?
  // Adds the `typeConflicts` field as a catch-all field for
  // existing `description` values that aren't `String` or `null`.
  // Because `typeConflicts` is used in a `move_conflicts` statement,
  // it must have a type of `{ *: Any }?`.
  // If the schema didn't accept ad hoc fields before
  // the migration, a catch-all field isn't needed.
  typeConflicts: { *: Any }?

  // The schema now includes field definitions.
  // Adds an explicit wildcard constraint to continue
  // accepting documents with ad hoc fields.
  *: Any

  migrations {
    // Adds the `typeConflicts` field.
    add .typeConflicts
    // Adds the `description` field.
    add .description
    // Nests non-conforming `description` and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    // If the schema didn't accept ad hoc fields before the
    // migration, a `move_conflicts` statement isn't needed.
    move_conflicts .typeConflicts
  }
}

How a catch-all field works

The previous migration uses a move_conflicts statement to reassign non-conforming description field values to the typeConflicts catch-all field.

The following examples show how the migration would affect existing documents that contain a description field.

The catch-all field for a move_wildcard statement works similarly.

Migrate a document with no changes

The migration does not affect existing documents that contain a field value of an accepted type.

{
  ...
  // `description` contains an accepted data type.
  // The field stays the same throughout the migration.
  description: "Conventional Hass, 4ct bag",
  ...
}

Similarly, a move_wildcard statement does not affect existing fields that conform to a field definition.

Migrate a non-conforming field value

If an existing document contains a description field with a non-conforming value, the migration nests the value in the typeConflicts catch-all field.

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  ...
}

// After migration:
{
  ...
  // The `description` field is nested in
  // the `typeConflicts` catch-all field.
  typeConflicts: {
    description: 5
  }
  ...
}

The catch-all field already exists as an object

If the document already contains the catch-all field as an object, the migration uses the existing field.

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  // The `typeConflicts` catch-all field already exists as an object.
  typeConflicts: {
    backordered: "yes"
  }
  ...
}

// After migration:
{
  ...
  // The `description` field is nested in
  // the existing `typeConflicts` catch-all field.
  typeConflicts: {
    description: 5,
    backordered: "yes"
  }
  ...
}

The catch-all field already exists with non-conforming values

If you add the catch-all field in the same migration and the field already contains a non-conforming value, Fauna nests the existing value in the catch-all field itself.

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  // The `typeConflicts` catch-all field already exists but isn't an object.
  // The field contains an unaccepted type.
  typeConflicts: true
  ...
}

// After migration:
{
  ...
  // The existing `typeConflicts` field value doesn't conform
  // to the new `typeConflicts` field definition. The migration
  // nests the existing, non-conforming `typeConflicts` field
  // value in itself.
  typeConflicts: {
    description: 5,
    typeConflicts: true
  }
  ...
}

The catch-all field already contains the field key

If the catch-all field already contains a nested field with the same key, Fauna prepends the new key with an underscore (_).

// Before migration:
{
  ...
  // `description` contains an unaccepted type.
  description: 5,
  // The `typeConflicts` catch-all field already contains a nested
  // `description` field.
  typeConflicts: {
    description: "Conventional Hass, 4ct bag"
  }
  ...
}

// After migration:
{
  ...
  typeConflicts: {
    description: "Conventional Hass, 4ct bag",
    // The new key is prepended with an underscore.
    _description: 5
  }
  ...
}

Add and backfill a non-nullable field

Starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration adds a non-nullable field to the collection. Adding a non-nullable field requires a backfill statement for existing documents.

collection Product {
  // Adds the `stock` field.
  stock: Int
  // Adds the `typeConflicts` field as a catch-all field for
  // existing `stock` values that aren't `Int`.
  // Because `typeConflicts` is used in a `move_conflicts` statement,
  // it must have a type of `{ *: Any }?`.
  // If the schema didn't accept ad hoc fields before
  // the migration, a catch-all field isn't needed.
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    // Adds the `typeConflicts` field.
    add .typeConflicts
    // Adds the `stock` field.
    add .stock
    // Nests non-conforming `stock` and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    // If the schema didn't accept ad hoc fields before the
    // migration, a `move_conflicts` statement isn't needed.
    move_conflicts .typeConflicts
    // Set `stock` to `0` for existing documents
    // with a `null` (missing) or non-conforming `stock` value.
    backfill .stock = 0
  }
}

For examples of how the migration’s move_conflicts statement reassigns non-conforming field values, see How a catch-all field works.

Backfill using today’s date

Use Date.today() to use today’s date as a backfill value:

collection Product {
  // Adds the `creationDate` field.
  creationDate: Date
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .creationDate
    move_conflicts .typeConflicts
    // Set `creationDate` to today for existing documents.
    backfill .creationDate = Date.today()
  }
}

Fauna evaluates the expression at schema update time.

Backfill using the current time

Use Time.now() to use the current time as a backfill value:

collection Product {
  // Adds the `creationTime` field.
  creationTime: Time
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .creationTime
    move_conflicts .typeConflicts
    // Set `creationTime` to now for existing documents.
    backfill .creationTime = Time.now()
  }
}

Fauna evaluates the expression at schema update time.

Backfill using an ID

Use newId() to use a unique ID as a backfill value. You must cast the ID to a String using toString():

collection Product {
  // Adds the `productId` field.
  productId: String = newId().toString()
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .productId
    move_conflicts .typeConflicts
    // Set `productId` to an ID for existing documents.
    backfill .productId = newId().toString()
  }
}

Fauna uses the same ID value to backfill existing documents. The backfilled ID is not unique among documents.
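
For example, two existing documents might look like the following after the migration. The IDs shown are illustrative:

// Both existing documents receive the same backfilled ID:
{
  id: "111",
  ...
  productId: "412000000000000001"
}

{
  id: "222",
  ...
  productId: "412000000000000001"
}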

Backfill using a document reference

You can use a document reference as a backfill value:

collection Product {
  // Adds the `category` field.
  category: Ref<Category>
  typeConflicts: { *: Any }?

  *: Any

  migrations {
    add .typeConflicts
    add .category
    move_conflicts .typeConflicts
    // Set `category` to a `Category` collection document.
    // Replace `400684606016192545` with a `Category` document ID.
    backfill .category = Category("400684606016192545")
  }
}

Fauna doesn’t guarantee the document exists. You can’t fetch the document using an FQL expression.

Document references for certain system collections are supported. References to named system collection documents are not supported.

Add multiple fields with the same catch-all field

Multiple fields can use the same move_conflicts statement during a migration.

For example, starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration adds multiple fields. If the fields are present in existing documents with non-conforming values, the migration nests those values in the catch-all field specified by the next move_conflicts statement:

collection Product {
  description: String?
  price: Int

  typeConflicts: { *: Any }?
  *: Any

  migrations {
    add .typeConflicts
    add .description
    add .price
    // Nests non-conforming `description`, `price`, and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    move_conflicts .typeConflicts
    backfill .price = 1
  }
}

Add multiple fields with different catch-all fields

A migration can include multiple move_conflicts statements. This lets you use different catch-all fields for different fields.

For example, starting with the following collection schema:

collection Product {
  // Contains no field definitions.
  // Accepts ad hoc fields of any type.
  // Has an implicit wildcard constraint of
  // `*: Any`.
}

The following migration includes multiple move_conflicts statements:

collection Product {
  description: String?
  price: Int
  stock: Int?

  typeConflicts: { *: Any }?
  stockTypeConflicts: { *: Any }?
  *: Any

  migrations {
    add .typeConflicts
    add .description
    add .price
    // Nests non-conforming `description`, `price`, and `typeConflicts`
    // field values in the `typeConflicts` catch-all field.
    move_conflicts .typeConflicts
    backfill .price = 1

    add .stockTypeConflicts
    add .stock
    // Nests non-conforming `stock` and `stockTypeConflicts`
    // field values in the `stockTypeConflicts` catch-all field.
    move_conflicts .stockTypeConflicts
  }
}

Drop a field

Starting with the following collection schema:

collection Product {
  price: Int = 0
  internalDesc: String?
}

The following migration removes the internalDesc field and its values from the collection’s documents:

collection Product {
  price: Int = 0
  // Removed the `internalDesc` field.

  migrations {
    drop .internalDesc
  }
}

Drop a document reference field

You can’t delete a collection that’s referenced by a field definition or other schema. To delete a collection and drop any related document reference fields for the collection:

  1. Run migrations to drop any field definitions that reference the collection. For example, starting with the following collection schema:

    collection Product {
      name: String
      // Accepts a reference to a `Category` collection document or `null`.
      category: Ref<Category>?
    }

    The following migration removes the document reference field:

    collection Product {
      name: String
      // Removed the `category` field.
    
      migrations {
        drop .category
      }
    }
  2. Remove references to the collection in any other schema. For example, remove references to the collection from any role schema.

  3. Remove the collection schema for the collection you want to delete.

  4. Commit your changes to Fauna using a staged schema change.

Rename a field

Starting with the following collection schema:

collection Product {
  desc: String?
}

The following migration renames the desc field to description:

collection Product {
  // Renamed `desc` to `description`.
  description: String?

  migrations {
    move .desc -> .description
  }
}

Split a field

Starting with the following collection schema:

collection Product {
  creationTime: Time | Number?
}

The following migration reassigns creationTime field values in the following order:

  1. Time values to creationTime

  2. Number values to creationTimeEpoch

collection Product {
  // `creationTime` accepts `Time` values.
  creationTime: Time?
  // `creationTimeEpoch` accepts `Number` values.
  creationTimeEpoch: Number?

  migrations {
    split .creationTime -> .creationTime, .creationTimeEpoch
  }
}

Match values to split fields

split assigns field values to the first field with a matching type. Keep this in mind when using superset types, such as Number or Any.

For example, starting with the following collection schema:

collection Product {
  creationTime: Time | Number?
}

The following migration would reassign creationTime field values in the following order:

  1. Time values to creationTime

  2. Number values, including Int values, to creationTimeNum

  3. Int values to creationTimeInt

collection Product {
  // `creationTime` accepts `Time` values.
  creationTime: Time?
  // `creationTimeNum` accepts any `Number` value, including `Int` values.
  creationTimeNum: Number?
  // `creationTimeInt` accepts `Int` values.
  creationTimeInt: Int?

  migrations {
    split .creationTime -> .creationTime, .creationTimeNum, .creationTimeInt
  }
}

Because creationTimeNum precedes creationTimeInt, split would never assign a value to the creationTimeInt field.

Instead, you can reorder the split statement as follows:

collection Product {
  // `creationTime` accepts `Time` values.
  creationTime: Time?
  // `creationTimeInt` accepts `Int` values.
  creationTimeInt: Int?
  // `creationTimeNum` accepts any other `Number` value.
  creationTimeNum: Number?

  migrations {
    split .creationTime -> .creationTime, .creationTimeInt, .creationTimeNum
  }
}

Now, split assigns any creationTime values with an Int type to creationTimeInt. Any remaining Number values are assigned to creationTimeNum.

Narrow a field’s accepted types

You can use migration statements to narrow a field’s accepted data types, including Null. For example, you can convert a nullable field to a non-nullable field.

Starting with the following collection schema:

collection Product {
  // Accepts `String` and `null` values.
  description: String?
  price: Int?
}

The following migration:

  • Uses split to reassign null values for the description field to a temporary tmp field.

  • Drops the tmp field and its values.

  • Backfills any description values that were previously null with the "default" string.

collection Product {
  // Accepts `String` values only.
  description: String
  price: Int?

  migrations {
    split .description -> .description, .tmp
    drop .tmp
    backfill .description = "default"
  }
}

Because it follows a split statement, backfill only affects documents where the description field value was null and split to tmp.

Add a top-level wildcard constraint

Starting with the following collection schema:

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?
}

The following migration adds a top-level wildcard constraint. Once added, the collection accepts documents with ad hoc fields.

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?

  *: Any

  migrations {
    add_wildcard
  }
}

Remove a top-level wildcard constraint

Starting with the following collection schema:

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?

  *: Any
}

The following migration removes the collection’s top-level wildcard constraint. Once removed, the collection no longer accepts documents with ad hoc fields.

collection Product {
  name: String?
  description: String?
  price: Int?
  stock: Int?

  // Removes the `*: Any` wildcard constraint.

  // Adds the `typeConflicts` field as a catch-all field for
  // existing ad hoc fields that don't
  // have a field definition.
  typeConflicts: { *: Any }?

  migrations {
    add .typeConflicts
    move_conflicts .typeConflicts
    // Nests existing ad hoc field values without a field definition
    // in the `typeConflicts` catch-all field.
    move_wildcard .typeConflicts
  }
}

The move_wildcard statement’s catch-all field works similarly to a move_conflicts statement’s catch-all field. See How a catch-all field works.

Migrate nested fields

A nested field is a field within an object. For example:

collection Product {
  // `metadata` is an object field.
  metadata: {
    // `name` is a nested field
    // in the `metadata` object.
    "name": String?
  }
}

For more information, see Objects in the field definition docs.

Access nested fields in migration statements

A top-level field name must be a valid identifier. A nested field name can be any valid string, including an identifier.

You can access identifier field names in a migration statement using dot notation:

collection Product {
  metadata: {
    name: String?
    internalDesc: String?
  }

  migrations {
    // Uses dot notation to add the
    // nested `internalDesc` field.
    add .metadata.internalDesc
  }
}

You can access non-identifier field names in a migration statement using bracket notation:

collection Product {
  metadata: {
    name: String?
    "internal description": String?
  }

  migrations {
    // Uses bracket notation to add the
    // nested `internal description` field.
    add .metadata["internal description"]
  }
}

Move a nested field

Starting with the following collection schema:

collection Product {
  metadata: {
    name: String
    internalDesc: String
  }
}

The following migration moves the nested name field from the metadata object to the top level:

collection Product {
  name: String
  metadata: {
    internalDesc: String
  }

  migrations {
    move .metadata.name -> .name
  }
}

Add and backfill an object

If you add a field definition for an object, you must include add statements for any fields in the object. You must also include backfill statements for any non-nullable fields in the object.

Starting with the following collection schema:

collection Customer {
  name: String
  email: String
}

The following migration adds a field definition for a non-nullable address object:

collection Customer {
  name: String
  email: String
  address: {
    street: String
    city: String
  }

  migrations {
    // The following statements are implicit:
    // add .address
    // backfill .address = {}

    // Adds the nested `address.street` field
    add .address.street
    // Adds the nested `address.city` field
    add .address.city

    // Set `address.street` to `unknown street`
    // for existing documents.
    backfill .address.street = "unknown street"
    // Set `address.city` to `unknown city`
    // for existing documents.
    backfill .address.city = "unknown city"
  }
}

Add and backfill a non-nullable nested field

If you add a non-nullable field to an existing object, you must include a backfill statement. Starting with the following collection schema:

collection Customer {
  address: {
    street: String
    city: String
    state: String
    postalCode: String
  }
}

The following migration adds a non-nullable country field to the address object:

collection Customer {
  address: {
    street: String
    city: String
    state: String
    postalCode: String
    country: String
  }

  migrations {
    // Adds the nested `country` field to the `address` object.
    add .address.country
    // Set `address.country` to `US` for existing documents.
    backfill .address.country = "US"
  }
}

Add a nested wildcard constraint

An add_wildcard statement isn’t required to add a wildcard constraint to an object. Starting with the following collection schema:

collection Product {
  metadata: {
    name: String
  }
}

The following migration adds a wildcard constraint to the metadata object:

collection Product {
  metadata: {
    name: String
    *: Any
  }
}

Documents added after the migration can contain ad hoc fields in the metadata object.
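
For example, a document created after the migration could include ad hoc nested fields. The origin field below is illustrative:

Product.create({
  metadata: {
    name: "key limes",
    // Ad hoc field accepted by the nested
    // wildcard constraint.
    origin: "FL"
  }
})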

Nested field migrations with wildcard constraints

You can’t run migrations on a nested field that has a neighboring wildcard constraint.

Starting with the following collection schema:

collection Product {
  name: String
  metadata: {
    *: Any
  }
}

Create a Product document with a nested productUpc field in the metadata object:

Product.create({
  name: "key limes",
  metadata: {
    productUpc: "00123456789012"
  }
})

The following migration is disallowed and returns an error:

collection Product {
  name: String
  metadata: {
    productUpc: Int?
    *: Any
  }

  migrations {
    // Error! `metadata` contains a
    // wildcard constraint.
    // You can't run migrations on the
    // `metadata.productUpc` field.
    add .metadata.productUpc
  }
}

The add_wildcard, remove_wildcard, and move_wildcard migration statements only support top-level wildcard constraints, not nested wildcard constraints. These statements let you safely handle conflicts between ad hoc and defined fields.

Remove a nested wildcard constraint

A move_wildcard statement isn’t required to remove a wildcard constraint from an object. Starting with the following collection schema:

collection Product {
  metadata: {
    name: String
    *: Any
  }
}

The following migration removes the wildcard constraint from the metadata object:

collection Product {
  metadata: {
    name: String
    // Removes the nested `*: Any` wildcard constraint.
  }

  migrations {
    // Reassigns non-conforming `metadata` objects
    // to the `tmp` field.
    split .metadata -> .metadata, .tmp
    // Backfills documents whose `metadata` objects
    // were reassigned.
    backfill .metadata = { name: "" }
    // Removes the `tmp` field.
    drop .tmp
  }
}

Copy a migration that depends on default values

The following example shows how migration statements that depend on default values can produce different results when copied to a database in a different state. It uses two example databases: Dev and Staging.

  1. In the Dev database, create a Product collection with the following collection schema:

    collection Product {
      stock: Int = 0
    }
  2. Create a Product document with no fields:

    Product.create({})
  3. Migrate the collection schema to add a price field with a default value of 0:

    collection Product {
      stock: Int = 0
      // Accepts `Int` and `String` values.
      // Defaults to `0`.
      price: Int | String = 0
    
      migrations {
        // Migration #1 (Current)
        add .price
      }
    }

    The document you previously created now has a price of 0:

    {
      id: "111",
      coll: Product,
      ts: Time("2099-07-19T18:48:58.985Z"),
      stock: 0,
      price: 0
    }
  4. Migrate the schema to split price field values based on data type:

    collection Product {
      stock: Int = 0
      // Adds the `priceInt` field.
      // `priceInt` defaults to `1`.
      // `price` previously defaulted to `0`.
      priceInt: Int = 1
      // Adds the `priceStr` field.
      priceStr: String = ""
    
      migrations {
        // Migration #1 (Previous)
        // Already run. Fauna ignores
        // previously run migration statements.
        add .price
    
        // Migration #2 (Current)
        // Splits `price` field values.
        // `Int` values are assigned to `priceInt`.
        // `String` values are assigned to `priceStr`.
        split .price -> .priceInt, .priceStr
      }
    }

    The document you previously created now has a priceInt of 0:

    {
      id: "111",
      coll: Product,
      ts: Time("2099-07-19T18:48:58.985Z"),
      stock: 0,
      priceInt: 0,
      priceStr: ""
    }

    The document’s price was previously 0. The split statement reassigned the price value to priceInt. The priceInt field’s default value is not applied.

  5. In a Staging database, create a Product collection with the same initial schema:

    collection Product {
      stock: Int = 0
    }
  6. Create a Product document in the Staging database:

    Product.create({})
  7. In the Staging database, run a migration on the Product collection schema that combines the two previous migrations:

    collection Product {
      stock: Int = 0
      // `priceInt` defaults to `1`.
      priceInt: Int = 1
      priceStr: String = ""
    
      migrations {
        add .price
    
        split .price -> .priceInt, .priceStr
      }
    }

    In the Staging database, the document has a priceInt value of 1:

    {
      id: "111",
      coll: Product,
      ts: Time("2099-07-19T18:48:58.985Z"),
      stock: 0,
      // `priceInt` field
      priceInt: 1,
      priceStr: ""
    }

    Because the document didn’t previously contain a price field, the split statement didn’t affect the document. Instead, the document uses the default priceInt value.
