Isolation levels refer to how a database maintains the integrity of transactions. When a transaction is processed, the isolation level in effect determines whether, or how, the transaction might be affected by other concurrent database operations.
The following table summarizes the isolation level that applies to a read-only transaction, based on the indexes it uses:

Indexes used                        | Isolation level
None (documents only)               | Serializable
Only serialized or unique indexes   | Serializable
Indexes without serialization       | Snapshot isolation
Read-only transactions are, by default, serializable for reads that don't involve indexes. Reads can opt in to strict serializability by using the linearized endpoint, or by including a no-op write in the transaction. You linearize an endpoint by appending /linearized to the URL, for example: https://db.fauna.com/linearized
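The endpoint transformation is just string concatenation. The following Python sketch illustrates it; the helper name is illustrative, not part of any Fauna client library:

```python
def linearized_url(base_url: str) -> str:
    """Return the linearized variant of a Fauna endpoint URL.

    Appending /linearized opts the request's reads in to strict
    serializability. (Helper name is illustrative, not a Fauna API.)
    """
    return base_url.rstrip("/") + "/linearized"

# The default endpoint becomes the linearized endpoint:
print(linearized_url("https://db.fauna.com"))
# → https://db.fauna.com/linearized
```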
Serializability guarantees that the execution of a set of transactions on multiple items is equivalent to some serial-ordered execution of those transactions. Serializability is regarded as the gold standard of isolation. A system that guarantees serializability can process transactions concurrently but guarantees that the final result is equivalent to processing the transactions serially, one after the other, with no concurrency. This powerful guarantee has withstood the test of time, enabling resilient and bug-free applications to be built on top of it.
What makes serializable isolation so powerful is that the application developer doesn’t have to reason about concurrency at all. The developer has to focus only on the correctness of individual transactions in isolation. As long as each individual transaction doesn’t violate the semantics of an application, the developer is ensured that running many of them concurrently also doesn’t violate the semantics of the application.
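To make the guarantee concrete, consider two concurrent transfer transactions. In this minimal Python sketch (a toy in-memory balance map, with a single global lock as the simplest way to force serializable execution), the concurrent outcome always matches one of the two serial orders:

```python
import threading

balance = {"a": 100, "b": 0}
lock = threading.Lock()

def transfer(src: str, dst: str, amount: int) -> None:
    # One coarse lock makes every transaction run atomically,
    # so any concurrent schedule is equivalent to a serial one.
    with lock:
        if balance[src] >= amount:
            balance[src] -= amount
            balance[dst] += amount

t1 = threading.Thread(target=transfer, args=("a", "b", 60))
t2 = threading.Thread(target=transfer, args=("a", "b", 60))
t1.start(); t2.start()
t1.join(); t2.join()

# Whichever thread ran first, only one transfer succeeds:
# the result equals either serial order of the two transactions.
print(balance)  # → {'a': 40, 'b': 60}
```

The developer only had to make `transfer` correct in isolation (the balance check); serializable execution did the rest.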
Although serializability works fine in the context of a single database server, it isn't as strong a guarantee as it seems when it comes to a distributed, global database.
The first limitation of serializability is that it doesn't constrain how the equivalent serial order of transactions is chosen. Given a set of concurrently executing transactions, the system guarantees that they are processed equivalently to some serial order, but it doesn't guarantee which serial order. As a result, two replicas that are given an identical set of concurrent transactions to process can end up with different final states, because they chose different, equally valid serial orders. Therefore, database replication that guarantees only serializability can't work by replicating the input and having each replica process it independently. Instead, one replica must process the workload first, and a detailed set of state changes generated by that initial processing is replicated, which increases the amount of data that must be sent over the network and causes the other replicas to lag behind the first.
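The divergence problem can be seen with two non-commuting transactions. In this Python sketch (a toy key-value state, not Fauna's data model), both orders are valid serial executions, yet they leave replicas in different final states:

```python
def set_x(state):      # transaction T1: overwrite x
    state["x"] = 1

def double_x(state):   # transaction T2: double x
    state["x"] *= 2

# Replica 1 happens to serialize T1 before T2.
replica1 = {"x": 3}
set_x(replica1); double_x(replica1)

# Replica 2 happens to serialize T2 before T1.
replica2 = {"x": 3}
double_x(replica2); set_x(replica2)

# Both replicas are serializable, yet they have diverged:
print(replica1, replica2)  # → {'x': 2} {'x': 1}
```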
The second, related limitation of serializability is that the chosen serial order of transactions doesn't have to be related to the order in which transactions are submitted to the system. A transaction Y submitted after transaction X completes might be processed in an equivalent serial order with Y before X.
Strict serializability adds an extra constraint on top of serializability. If transaction Y starts after transaction X completes, then X and Y aren't, by definition, concurrent. A system that guarantees strict serializability guarantees that:

The final state is equivalent to processing the transactions in some serial order.
X must be before Y in that serial order.
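The real-time constraint can be expressed as a simple check. The following Python sketch (an illustrative checker, not part of any database) verifies that a chosen serial order respects the commit/start ordering of non-concurrent transactions:

```python
def is_strictly_serializable(order, times):
    """Check the strict-serializability ordering constraint.

    order: transaction names in the chosen serial order.
    times: name -> (start, end) real-time interval of each transaction.
    If A ends before B starts (so A and B are not concurrent),
    A must precede B in the serial order.
    """
    position = {name: i for i, name in enumerate(order)}
    for a, (_, a_end) in times.items():
        for b, (b_start, _) in times.items():
            if a_end < b_start and position[a] > position[b]:
                return False
    return True

times = {"X": (0, 1), "Y": (2, 3)}  # Y starts after X completes
print(is_strictly_serializable(["X", "Y"], times))  # → True
print(is_strictly_serializable(["Y", "X"], times))  # → False: reverses real-time order
```

Plain serializability would accept both orders; strict serializability rejects the second.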
For an application that applies transactions serially, these guarantees seem obvious: when Y comes after X, X is before Y in the serial order. However, strict serializability is hard to attain in distributed databases, where many architectural factors influence whether it can be maintained, or is even possible.
Fauna indexes maintain a virtual snapshot of participating documents. When an index is used in a transaction, the documents associated with the index are evaluated, guaranteeing isolation for read and write queries.
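A drastically simplified way to picture the snapshot behavior (a toy deep copy of an in-memory document map, not Fauna's actual mechanism):

```python
import copy

database = {"docs": {1: {"name": "original"}}}

# At transaction start, index reads are evaluated against a snapshot
# of the participating documents (modeled here as a deep copy).
snapshot = copy.deepcopy(database)

# A concurrent write lands in the live database...
database["docs"][1]["name"] = "updated"

# ...but the in-flight transaction still sees the snapshot:
print(snapshot["docs"][1]["name"])  # → original
print(database["docs"][1]["name"])  # → updated
```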