Streaming is a feature where client code can subscribe to a document stored in a Fauna database and any changes to that document are immediately streamed to the client as event notifications. The primary intended use case is for immediate user interface updates based on activity in your Fauna database.

Streaming is a much better alternative to the standard approach of polling, where client code repeatedly issues queries to the database at regular intervals to discover document updates. With pay-as-you-go pricing, polling is the much more expensive option, and your code only becomes aware of changes when query results are returned.


A sequence diagram demonstrating the communications during database polling


A sequence diagram demonstrating the communications during database streaming

The polling diagram demonstrates that the client has to execute many more queries in order to discover when a document has been updated. For a streaming client, the subscription happens once and events are automatically broadcast to the client whenever the subscribed document is updated.

There is a cost in compute operations to hold a stream open. See Billing for details.
Streaming works with HTTP/1.x, but HTTP/2 is much more efficient, so use HTTP/2 if your client environment supports it.


Fauna streaming uses a protocol inspired by HTTP Server-Sent Events (SSE). Fauna streams events for a subscribed document to the client, keeping the connection open (where possible) to minimize transmission delays. Client event handling (in supported drivers) is similar to WebSockets; however, streams are unidirectional: the client cannot send events to the server via the stream.

Similar to the SSE protocol, events are communicated over a text-based channel. Each event is formatted as a single-line JSON object. Unlike SSE, Fauna adds an additional line ending, \r\n, to delimit payloads, which helps with JSON parsing when network middleware splits the event payload into multiple packets.
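The framing described above can be sketched on the client side: buffer incoming network chunks and emit one parsed object per `\r\n`-terminated line. This is a minimal illustration of the delimiting scheme, not driver code; `makeEventParser` is a hypothetical helper name.

```javascript
// Hypothetical sketch of client-side framing: network middleware may split
// an event payload across chunks, so buffer the text and emit one parsed
// object per "\r\n"-terminated line.
function makeEventParser(onEvent) {
  let buffer = "";
  return function feed(chunk) {
    buffer += chunk;
    let idx;
    while ((idx = buffer.indexOf("\r\n")) !== -1) {
      const line = buffer.slice(0, idx).trim();
      buffer = buffer.slice(idx + 2);
      if (line.length > 0) {
        onEvent(JSON.parse(line)); // each event is a single-line JSON object
      }
    }
  };
}
```

Feeding the parser a payload split mid-object produces no event until the `\r\n` delimiter arrives, at which point the complete JSON line is parsed.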

Here is an example event, with the JSON formatted for easy identification of the structure:

    "type": "version",
    "txn": 1614043435980000,
    "event": {
        "action": "update",
        "document": {
            "data": {
                "score": 12
            "ref": {
                "@ref": {
                    "collection": {
                        "@ref": {
                            "collection": {
                                "@ref": {
                                    "id": "collections"
                            "id": "Status"
                    "id": "1"
            "ts": 1614043435980000

The outermost structure is a "metadata" wrapper for the event, which contains the fields:

  • type: the type of event payload. One of:

    • start: An event marking the start of the stream. Use the txn field as the stream’s starting timestamp.

    • version: An event containing information about a given document.

    • error: An event in response to an error with the stream.

    • history_rewrite: An event containing information about a historical change, such as when the subscribed document’s history is revised.

  • txn: the timestamp of the transaction emitting the event.

  • event: an object describing the particular event.

The event object contains the fields:

  • action: the type of event. One of:

    • create: Occurs when a document is created.

    • update: Occurs when an existing document is updated.

    • delete: Occurs when an existing document is deleted.

    • add: Occurs when a document is added to a set.

    • remove: Occurs when a document is removed from a set.

  • document: An object containing the subscribed document’s details. For update events, only the modified fields are included.

    The document's ts field is the document’s timestamp expressed as a Long. It is often the same as the event wrapper’s txn field, but it is not guaranteed to be identical.
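Because `update` events include only the modified fields, a client that maintains a local copy of the document needs to merge each event into it rather than replace it outright. The following is a minimal sketch of that folding logic; `applyEvent` is a hypothetical helper name, and it assumes the local copy has the same shape as the event's `document` object.

```javascript
// Hypothetical sketch: fold an incoming event into a local copy of the
// subscribed document. "update" events carry only the modified fields,
// so their data is merged; "create" carries the full document.
function applyEvent(localDoc, event) {
  switch (event.action) {
    case "create":
      return event.document; // full document on creation
    case "update":
      return {
        ...localDoc,
        ...event.document,
        // merge only the changed fields into the existing data
        data: { ...localDoc.data, ...event.document.data },
      };
    case "delete":
      return null; // the document no longer exists
    default:
      return localDoc;
  }
}
```

For example, applying an `update` event whose `document.data` contains only `score` leaves the other fields of the local copy intact while refreshing `ts`.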

The methods for responding to events differ in each driver that supports streaming.


  • Avoid running a query to fetch a document and then establishing a stream. Multiple events may have modified the document prior to stream startup, which can lead to inaccurate representation of the document data in your application.

    For the JavaScript driver, you can use the document helper, which takes care of this problem for you.
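If you do fetch the document yourself before subscribing, one way to avoid acting on stale data is to discard any event whose transaction timestamp is not newer than the snapshot you fetched. This is a minimal sketch of that guard, not driver code; `makeReconciler` is a hypothetical helper name.

```javascript
// Hypothetical sketch: given the timestamp of a fetched snapshot, deliver
// only events that are newer than anything already seen, so events that
// occurred before (or during) the fetch are not applied twice.
function makeReconciler(snapshotTs, onFreshEvent) {
  let lastSeenTxn = snapshotTs;
  return function handle(event) {
    if (event.txn <= lastSeenTxn) {
      return; // stale: already reflected in the fetched snapshot
    }
    lastSeenTxn = event.txn;
    onFreshEvent(event);
  };
}
```

With a snapshot taken at timestamp 100, an event with `txn: 90` is dropped as stale, while an event with `txn: 150` is delivered and advances the high-water mark.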


For the initial release of streaming, the following limitations exist:

  • Active stream count:

    • Only 100 simultaneous streams per browser. Browsers manage the number of concurrent HTTP/2 streams using a hard-coded limit; no matter how many windows or tabs are open, you cannot exceed 100 simultaneous streams.

    • Using a driver that supports streaming (currently, the JavaScript and JVM drivers), you can have more than 100 streams active at once by creating additional connection objects: each connection supports up to 100 streams.

    • There may be other limits based on each host language’s HTTP/2 implementation, but we have not encountered those yet.

  • Node.js clients are not currently supported.

    Node.js' HTTP/2 implementation has an issue that currently prevents stream disconnections (and possibly other error conditions) from being reported correctly — no error event is triggered in these situations, so your client would wait for stream events that can never arrive.

  • A stream can be established only to a single user-created document. It is not currently possible to stream a schema document, such as a collection, index, or set.

  • A stream only reports events for changes to the fields and values within a document’s data field.

  • No support for GraphQL subscriptions is available.

  • Driver support is currently limited. See the JavaScript and JVM driver pages for example client code.
