Billing

Fauna billing is primarily based on the resources that your queries use. Fauna provides a generous free tier, and you are billed only if you exceed the free tier’s limits. You can also choose to purchase higher tiers that provide predictable pricing or support. See the pricing page for more information.

This page describes how resource usage is counted.

On November 19, 2020, Fauna, Inc. introduced new billing plans. Users with accounts created prior to this date remain on their existing plans until February 1, 2021, or move to a new plan when they update their billing settings. Users who sign up on or after this date are automatically assigned to a new billing plan.

This page describes the way billing works both before and after the transition date. In the sections below, the pre-transition behavior is labeled "Legacy billing" and the post-transition behavior is labeled "Current billing".

Definitions

Document

A document is any record stored within a Fauna database, which includes user-provided documents and Fauna schema documents, such as those describing databases, collections, indexes, keys, user-defined functions, roles, etc.

Query

An expression composed of one or more FQL functions intended to achieve, or return, a specific result. Fauna executes each query as an all-or-nothing transaction.
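For example, using the JavaScript driver, the following is a single query composed of several FQL functions (the letters collection and the secret are placeholders, not part of this page). Because the query is one transaction, either both Create calls succeed or neither does:

const faunadb = require('faunadb')
const q = faunadb.query

// Placeholder secret; use a secret for your own database.
const client = new faunadb.Client({ secret: 'YOUR_FAUNA_SECRET' })

client.query(
  q.Do(
    q.Create(q.Collection('letters'), { data: { letter: 'a' } }),
    q.Create(q.Collection('letters'), { data: { letter: 'b' } })
  )
)
.then((ret) => console.log(ret))
.catch((err) => console.error(err))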

Resources

For billing purposes, use of the following resources is counted:

Read operations

Legacy billing:

  • One read operation is counted when any document is read from storage.

    When a query involves a distinct document multiple times, the document is only read once, not once per instance in the query.

  • One read operation is counted when a page of tuples is fetched from an index.

    When a query involves a distinct page from an index multiple times, the index page is only read once, not once per use in the query.

  • Read operations are always counted, whether the query fails or not.

  • If a query has to be retried due to conflicts with concurrent queries that are writing to the same documents, each retry incurs read operations again.

  • Read operations are not counted for the portions of a query involved in write operations. For example, if you Create a document, the result is incidentally read after the write completes, but this does not accrue read operations.

See the x-read-ops response header in the Per query metrics section.

Current billing:

  • One read operation is counted when up to 4KB of any document is read from storage. A 20KB document requires 5 read operations, and a 4.1KB document requires 2 read operations (see the sketch after this list).

    When a query involves a distinct document multiple times, the document is only read once, not once per instance in the query.

  • One read operation is counted when up to 4KB of tuples is fetched from an index.

    Additionally, one read operation per index partition is counted. An index with no terms defined has 8 partitions, so 7 additional read operations are counted above the number required to read a page from the index.

    When a query involves a distinct page from an index multiple times, the index page is only read once, not once per use in the query.

  • Read operations for authentication or identity checks are counted according to the size of the token or key; one read operation is counted when up to 4KB is read.

  • Read operations are always counted, whether the query fails or not.

  • If a query has to be retried due to conflicts with concurrent queries that are writing to the same documents, each retry incurs read operations again.

  • Read operations are not counted for the portions of a query involved in write operations. For example, if you Create a document, the result is incidentally read after the write completes, but this does not accrue read operations.

See the x-byte-read-ops response header in the Per query metrics section.
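As a rough sketch of the current-billing read rules above (assuming 1KB means 1,024 bytes, which this page does not state explicitly), read operations can be estimated from document and index page sizes:

const documentReadOps = (bytes) => Math.ceil(bytes / 4096)

// Index page reads add one operation per partition beyond the first.
const indexReadOps = (pageBytes, partitions) =>
  Math.ceil(pageBytes / 4096) + (partitions - 1)

console.log(documentReadOps(20 * 1024))   // 20KB document => 5
console.log(documentReadOps(4.1 * 1024))  // 4.1KB document => 2
console.log(indexReadOps(2 * 1024, 8))    // small page, index with no terms (8 partitions) => 8

The authoritative count for any query is reported in the x-byte-read-ops response header.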


Write operations

Legacy billing:

  • One write operation is counted when any document is written to storage.

  • Index writes, for documents that are created, updated, or deleted, do not incur write operations.

  • For queries that fail for any reason, write operations are not counted.

See the x-write-ops response header in the Per query metrics section.

Current billing:

  • One write operation is counted when up to 1KB of any document is written to storage. A 20KB document requires 20 write operations, and a 1.1KB document requires 2 write operations (see the sketch after this list).

  • Index writes, for documents that are created, updated, or deleted, do incur write operations: one write operation is counted when up to 1KB of document data is indexed.

  • For queries that fail for any reason, write operations are not counted.

See the x-byte-write-ops response header in the Per query metrics section.

For both legacy and current billing, write operations are counted for any FQL function that writes documents to storage.
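Similarly, a minimal sketch of the current-billing write rules, under the same 1KB = 1,024 bytes assumption:

const documentWriteOps = (bytes) => Math.ceil(bytes / 1024)

// Each covering index adds byte-based write operations for the indexed data.
const indexWriteOps = (indexedBytes) => Math.ceil(indexedBytes / 1024)

console.log(documentWriteOps(20 * 1024))   // 20KB document => 20
console.log(documentWriteOps(1.1 * 1024))  // 1.1KB document => 2
console.log(indexWriteOps(0.3 * 1024))     // 0.3KB of indexed data => 1

The authoritative count for any query is reported in the x-byte-write-ops response header.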

Compute operations

Legacy billing:

  • Compute operations are neither counted nor billed.

Current billing:

  • One compute operation is counted per fifty function calls, or portion thereof (see the sketch below).

    User-defined functions might call many functions with each invocation; all function calls are counted. Compute operations might grow rapidly when using functions such as Map and Reduce, or when calling a UDF recursively.

See the x-compute-ops response header in the Per query metrics section.
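For example, a sketch of how compute operations scale with the number of function calls under current billing:

const computeOps = (functionCalls) => Math.ceil(functionCalls / 50)

console.log(computeOps(1))   // 1 call   => 1 compute operation
console.log(computeOps(50))  // 50 calls => 1 compute operation
console.log(computeOps(51))  // 51 calls => 2 compute operations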

Streaming operations

Legacy billing:

  • Streaming operations are neither counted nor billed.

Current billing:

  • While streaming is in preview, streaming operations are neither counted nor billed.

    When streaming moves to production, one streaming operation is counted for each minute that a stream is active.

Storage

  • Documents are stored on disk, and the amount of space occupied is charged monthly.

  • Indexes are also stored on disk, and contribute to the storage that is charged monthly. The size of indexes varies with the size of the data that is indexed.


Storage reporting is a continuous process, where the storage occupied in each database is determined approximately once per week. The billed amount for storage is determined by taking an average of the weekly storage reports in a calendar month.

There can be some inaccuracy in storage reporting due to replica topology changes. When this occurs, the reported storage is less than the actual, resulting in lower billing.

One non-obvious contributor to storage is that Fauna stores all revisions to a document separately: each update contributes to the storage total. Deleting unused documents directly reduces required storage. Setting a document’s ttl field, or a collection’s history_days or ttl_days fields, can indirectly reduce storage.

Removal is handled by a background task, so once a document (including collections, databases, indexes, keys, roles, and tokens) "expires" due to the setting in the ttl field, it could be some time (hours or days) before the removal occurs. There is no guarantee that removal actually occurs.

As of version 3.0.0, the ttl field is honored on read — a document that should have been removed behaves as if it has been removed. However, until removal actually occurs due to background task processing, you can continue to access the history of the document, provided you have its reference, via the Events function.
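For example, using the JavaScript driver, history and expiration settings could be adjusted like this (the letters collection, document ID, secret, and the 30-day values are placeholders):

const faunadb = require('faunadb')
const q = faunadb.query
const client = new faunadb.Client({ secret: 'YOUR_FAUNA_SECRET' })

client.query(
  q.Do(
    // Retain no document history; remove documents 30 days after their most recent write.
    q.Update(q.Collection('letters'), { history_days: 0, ttl_days: 30 }),
    // Expire one specific document 30 days from now.
    q.Update(
      q.Ref(q.Collection('letters'), '1234567890'),
      { ttl: q.TimeAdd(q.Now(), 30, 'days') }
    )
  )
)
.then((ret) => console.log(ret))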

Data transfer

Legacy billing:

The amount of data transfer required to provide query responses is charged monthly.

If the client executing queries is co-located within the same cloud provider and region as a Fauna replica, data transfer is not billed for those queries.

See the x-query-bytes-out header in the Per query metrics section. Note that it does not tell you whether the data transfer was billed or not.

Current billing:

Data transfer is, effectively, included in the cost calculation for read, write, and compute operations. There is no separate billing for data transfer.

Per query metrics

Fauna FQL queries are performed over HTTP connections, and responses include headers that indicate the resources used in the current query.

For example, for the following FQL query performed with the JavaScript driver:

const faunadb = require('faunadb')
const q = faunadb.query

// Replace the secret with one for your own database.
const client = new faunadb.Client({ secret: 'YOUR_FAUNA_SECRET' })

client.query(
  q.Map(
    q.Paginate(q.Match(q.Index('all_letters'))),
    q.Lambda("X", q.Get(q.Var("X")))
  )
)
.then((ret) => console.log(ret))

The following response headers were included with the query result:

{
  'alt-svc': 'clear',
  'content-length': '4459',
  'content-type': 'application/json;charset=utf-8',
  date: 'Tue, 17 Nov 2020 22:57:46 GMT',
  via: '1.1 google',
  'x-byte-read-ops': '34',
  'x-byte-write-ops': '0',
  'x-compute-ops': '2',
  'x-faunadb-build': '20.11.00.rc8-01f9c94',
  'x-query-bytes-in': '120',
  'x-query-bytes-out': '4459',
  'x-query-time': '7',
  'x-read-ops': '27',
  'x-storage-bytes-read': '3047',
  'x-storage-bytes-write': '0',
  'x-txn-retries': '0',
  'x-txn-time': '1605653866258457',
  'x-write-ops': '0'
}

The query reads all 26 documents from a collection containing the letters of the English alphabet, plus one page from the all_letters index, which is why x-read-ops is 27.

You can use this information to accurately determine the resource cost of running your queries, especially if your application(s) execute them frequently.
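One way to capture these metrics in an application is the JavaScript driver's observer option, which receives each request's result, including its response headers. The following is a sketch, with the secret and index name as placeholders:

const faunadb = require('faunadb')
const q = faunadb.query

// Log billable operation counts for every request this client makes.
const client = new faunadb.Client({
  secret: 'YOUR_FAUNA_SECRET',
  observer: (requestResult) => {
    const headers = requestResult.responseHeaders
    console.log('byte read ops: ', headers['x-byte-read-ops'])
    console.log('byte write ops:', headers['x-byte-write-ops'])
    console.log('compute ops:   ', headers['x-compute-ops'])
  },
})

client.query(q.Paginate(q.Match(q.Index('all_letters'))))
  .then((page) => console.log(page))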

The Web Shell provides this information as a tooltip for each query result (hover your pointer over the i in the white circle):

Billable operations in the Web Shell

The Fauna GraphQL API does not currently provide per-query billing headers. You would have to correlate your API usage with the reporting available in the Fauna Dashboard. Unfortunately, the reporting there is not real-time, lagging behind query usage by several hours (at least).
