1 Abstract
In this article, we introduce a framework for formally describing trust impairments. Observable trust impairments, rooted in typically unobservable private intentions, introduce real-world noise into an otherwise trustworthy distributed system; a formal description of that noise can then be used in the algorithms and protocols which provably protect the real-world distributed system from these sources of noise.
A conspicuous difference between this framework and much of the prior art is that it does not assume that an actor with a private intention is a trusted third party (TTP), i.e., an entity that, by assumption, is never involved in any observable trust impairment. The deliberate avoidance of this weakening assumption makes the protocols and tools developed within this framework particularly suited to emerging verticals where the assumption of a TTP or a root-of-trust is either invalid or ill-justified.
This article makes liberal use of concepts from undergraduate-level mathematics and computer science.
2 System model
We start off by describing the overall system, then describe some special subcomponents and relevant assumptions.
2.1 Overall system
We consider a distributed system \(\mathsf{D}\) embedded in a real-world environment \(\mathsf{W}\). The distributed system consists of \(n\) actors within the set \(\mathsf{A} = \{A_0, A_1, \ldots, A_{n-1}\}.\) The private intention of the actor \(A_i \in \mathsf{A}\) is the set \(\mathsf{I}_i\) of statements expressed in a formal language, such as LEAN (de Moura and Ullrich [2021]), Rust (Matsakis and Klock [2014]), or a similarly expressive language, and the union of all private intentions in \(\mathsf{D}\) is the set of statements \(\bigcup_i \mathsf{I}_i = \mathsf{S} = \{s_0, s_1, \ldots, s_{{|\mathsf{S}|}-1}\}.\)
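To make these definitions concrete, the following Rust sketch models actors and their private intentions. The representation choices (`String` for statements, `usize` for actor indices) are illustrative assumptions, not part of the framework.

```rust
use std::collections::HashSet;

/// A statement in a formal language, modeled here as an opaque string.
type Statement = String;

/// An actor A_i together with its private intention I_i, a set of statements.
struct Actor {
    id: usize,
    intention: HashSet<Statement>,
}

/// S: the union of all private intentions held by actors in D.
fn union_of_intentions(actors: &[Actor]) -> HashSet<Statement> {
    actors
        .iter()
        .flat_map(|a| a.intention.iter().cloned())
        .collect()
}
```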
The set \(\mathsf{S}\) may not be completely known to any actor in \(\mathsf{D}\), even though it contains private intentions whose downstream effects are observable to that actor.
All actors have access to a computer \(\mathsf{C}\), a special actor with no private intention outside the distributed system \(\mathsf{D}\). This computer provides a runtime environment \(\mathsf{E}\) for executing computer programs. \(\mathsf{C}\) is not owned by any one actor, but is maintained and verifiably protected by all actors. The specifics of how this is done are out of the scope of this article, and may be shared in later articles. The computer \(\mathsf{C}\) can execute one or more immutable computer programs \(\mathsf{L}_i, i \in \{0, 1, \ldots, n-1\}\), called trust anchors, with which actors in the distributed system \(\mathsf{D}\) interact via message passing. The computer program \(\mathsf{L}_i\) is owned by actor \(A_i\), but may be invoked by any actor. Invoking a computer program starts one or more processes on the computer \(\mathsf{C}\); these processes, along with relevant metadata, may be observed by any actor via message-passing with \(\mathsf{C}\).
The design and construction of the computer \(\mathsf{C}\) for use at Internet scale is non-trivial, and is not discussed in this article. A decentralized computing platform, such as Solana (Yakovenko [2018]), may be considered an early implementation of such a computer, albeit one which relies on majority consensus algorithms (e.g., Proof-of-Stake based on Proof-of-History) for ensuring the consistency of shared state. For this article, it may be assumed that the multiuser runtime environment \(\mathsf{E}\) is just UNIX or a UNIX-like system, such as Linux.
The actors may send messages to one another or to a computer program deployed on the special actor \(\mathsf{C}\); messages to computer programs are relayed via \(\mathsf{C}.\) Each message carries the following metadata, all expressed in a formally checkable language:
- identity of the sender actor,
- identity of the receiver actor,
- instructions, and
- context.
It is assumed that the identities of all actors within \(\mathsf{D}\) are unique at all times, even though they may be transient. In practice, the identities may be implemented using asymmetric-key cryptography. Instructions and context are both sets of statements, introduced next.
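As a minimal sketch, the message metadata above might be modeled in Rust as follows; representing an identity as public-key bytes is an assumption, in line with the note on asymmetric-key cryptography.

```rust
use std::collections::HashSet;

type Statement = String;

/// A unique, possibly transient, actor identity. Representing it as
/// public-key bytes is an illustrative assumption.
#[derive(Clone, PartialEq, Eq, Hash)]
struct ActorId(Vec<u8>);

/// The four metadata fields carried by every message.
struct Message {
    sender: ActorId,
    receiver: ActorId,
    instructions: HashSet<Statement>,
    context: HashSet<Statement>,
}
```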
2.2 Statements, context, protocol, and local action
Statements are a central concept in \(\mathsf{D}.\) Statements, expressed using the semantics of a formal language such as LEAN (de Moura and Ullrich [2021]) or Rust (Matsakis and Klock [2014]), are used to express not only intentions and the concurrent computation problem(s) being solved, but also computer programs, instructions for \(\mathsf{C}\), and messages exchanged among actors. A set of statements is referred to as a context \(\mathsf{Ctx}\), and the set of statements maintained locally by the actor \(A_i\) is called the local context \(\mathsf{Ctx}_i\). The local context may include private state, such as the intention, as well as non-private state, such as the problem being solved. New statements may be added to the local context via local actions (sketched in code after this list), each of which is either:
- local computation acting on one or more statements in \(\mathsf{Ctx}\), or
- message-passing with one or more actors.
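A minimal Rust sketch of a local context and the two kinds of local action, under the same toy representation of statements as strings:

```rust
use std::collections::HashSet;

type Statement = String;

/// The local context Ctx_i maintained by actor A_i.
struct LocalContext {
    statements: HashSet<Statement>,
}

/// The two kinds of local action listed above.
enum LocalAction {
    /// A statement produced by local computation on the context.
    LocalComputation(Statement),
    /// Statements received from other actors via message-passing.
    MessagePassing(Vec<Statement>),
}

impl LocalContext {
    /// Apply a local action by adding its resulting statements to Ctx_i.
    fn apply(&mut self, action: LocalAction) {
        match action {
            LocalAction::LocalComputation(s) => {
                self.statements.insert(s);
            }
            LocalAction::MessagePassing(shared) => {
                self.statements.extend(shared);
            }
        }
    }
}
```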
The messages exchanged with other actors form the shared context \(\mathsf{SharedCtx}.\) Message-passing is typically done using a finite number of messages exchanged according to a protocol \(\mathsf{P}\), also expressed in a formal language. This protocol is not private; it is known to and adopted by all actors in \(\mathsf{D} \cup \{\mathsf{C}\}.\) The protocol \(\mathsf{P}\) is affected by \(\mathsf{S}\), the set of possible private intentions that may be held by actors in \(\mathsf{D}.\)
In this series of articles, we assume that every actor only has a single local context, i.e., actor \(A_i\) only maintains \(\mathsf{Ctx}_i\), which may change as the local time \(t_i\) increases monotonically.
2.3 Context extension and reduction functions
A context extension function \(\mathsf{CtxExt}(\cdot)\) is a member of a class of functions which operate on the context \(\mathsf{Ctx}\) and return a modified context, obtained by adding statements to \(\mathsf{Ctx}\); each added statement may be either:
- logically derived from the statements already in \(\mathsf{Ctx}\), or
- not logically related to any statement already in \(\mathsf{Ctx}.\) Such statements may be generated, e.g., using generative statistical tools like AI.
A context extension of \(\mathsf{Ctx}_i\) is a local action performed at the actor \(A_i\), and hence is fully trustworthy: actor \(A_i\) can verify that the extension introduced no additional logical contradictions into the context.
A set of theorems logically derived from a set of axioms may be considered to have been obtained by applying a context extension function to the set of axioms.
A context reduction function \(\mathsf{CtxRed}(\cdot)\) is a member of a class of functions which operate on the context \(\mathsf{Ctx}\) and return a new context, obtained by removing one or more statements from the original context. A context reduction is said to be lossless if the original context \(\mathsf{Ctx}\) can be recovered by applying a context extension function to the context returned by the context reduction function.
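A minimal sketch of the two function classes, with logical derivation abstracted away (the extension here simply adds explicitly supplied statements); the lossless check verifies one candidate extension, not all possible ones:

```rust
use std::collections::HashSet;

type Statement = String;
type Ctx = HashSet<Statement>;

/// A context extension: returns Ctx plus the supplied statements.
fn ctx_ext(ctx: &Ctx, additions: &Ctx) -> Ctx {
    ctx.union(additions).cloned().collect()
}

/// A context reduction: returns Ctx minus the removed statements.
fn ctx_red(ctx: &Ctx, removed: &Ctx) -> Ctx {
    ctx.difference(removed).cloned().collect()
}

/// A reduction is lossless if some extension of the reduced context
/// recovers the original; here we check a single candidate extension.
fn is_lossless(original: &Ctx, reduced: &Ctx, recovered_by: &Ctx) -> bool {
    ctx_ext(reduced, recovered_by) == *original
}
```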
2.4 Absolute and local time
In \(\mathsf{D}\), we assume that there is no single absolute time; each actor has only a local time, which may differ across actors. The local time at actor \(A_i\) is \(t_i\), and the local time at the computer \(\mathsf{C}\) is \(t_{\mathsf{C}}.\) Actors may share their local times with other actors via message-passing.
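The framework leaves the realization of local time open. One standard realization, shown here as an assumption rather than a requirement, is a Lamport-style logical clock: each actor advances its clock on local actions and merges the sender's timestamp on message receipt, keeping \(t_i\) monotone.

```rust
/// A Lamport-style logical clock for the local time t_i at actor A_i.
struct LocalClock {
    t: u64,
}

impl LocalClock {
    /// Local actions advance local time monotonically.
    fn tick(&mut self) -> u64 {
        self.t += 1;
        self.t
    }

    /// On receiving a message stamped with the sender's local time,
    /// advance past both clocks so that causal order is preserved.
    fn receive(&mut self, sender_time: u64) -> u64 {
        self.t = self.t.max(sender_time) + 1;
        self.t
    }
}
```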
3 Trust impairment
A trust impairment occurs when any actor discovers a logical contradiction within its local context.
Trust impairments, when detected, may arise either from a local action (e.g., a faulty computation) or from a shared context received in a message from another actor. In this series of articles, we assume that local computations are never faulty, so that a detected trust impairment always originates from a message received from another actor, describing a private intention/state not previously known to the receiving actor.
If a logical contradiction is detected locally by actor \(A_i\), possible local actions by actor \(A_i\) include (sketched in code after this list):
- removal of statements from the context \(\mathsf{Ctx}_i\) until the contradiction is removed,
- not adding the shared context (or modifications based on the instructions in a message) to the context \(\mathsf{Ctx}_i\),
- describing the logical contradiction and adding it to the local context, or
- sharing the logical contradiction as a shared context with another actor, without revealing the local context beyond what is necessary to describe the contradiction.
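These actions might be represented as follows; the variants mirror the list above, and the field types are illustrative assumptions.

```rust
type Statement = String;

/// Possible responses when actor A_i detects a logical contradiction,
/// mirroring the (non-exhaustive) list above.
enum ImpairmentResponse {
    /// Remove statements from Ctx_i until the contradiction disappears.
    RemoveStatements(Vec<Statement>),
    /// Refuse to merge the offending shared context into Ctx_i.
    RejectSharedContext,
    /// Add a description of the contradiction to the local context.
    RecordContradiction(Statement),
    /// Share a minimal description of the contradiction with another actor.
    ShareContradiction {
        receiver: usize,
        description: Statement,
    },
}
```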
This list of actions is not exhaustive. Some comments follow on resolving trust impairments once they are detected. Since trust impairments originate, fundamentally, from private intentions held by actors within \(\mathsf{D}\), we say that a trust impairment is resolved when the following condition is met.
A trust impairment detected by actor \(A_i\) is said to have been resolved when the actor adds one or more private intentions to its local context which resolve the logical contradiction, as verifiable by a formal proof checker available locally to the actor.
The private intentions which help resolve logical contradictions may be found either via local computation or via message-passing with one or more actors. We emphasize, without proof, that resolving logical contradictions is in one-to-one correspondence with actors becoming aware of private intentions they were not aware of before discovering those contradictions.
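A sketch of resolution under a toy proof checker (a real system would invoke a formal checker such as LEAN's; the `s`/`not s` clash rule below is purely illustrative). It combines a context reduction over the implicated statements with an extension by the newly learned private intentions, then re-checks:

```rust
use std::collections::HashSet;

type Statement = String;
type Ctx = HashSet<Statement>;

/// Stand-in for a local formal proof checker. Toy rule: a context is
/// contradictory if it contains both `s` and `not s` for some `s`.
fn has_contradiction(ctx: &Ctx) -> bool {
    ctx.iter().any(|s| ctx.contains(&format!("not {s}")))
}

/// Drop the statements implicated in the contradiction, add the newly
/// learned private intentions, and re-check. Returns the repaired
/// context if the contradiction is resolved.
fn resolve(ctx: &Ctx, implicated: &Ctx, learned_intentions: &Ctx) -> Option<Ctx> {
    let reduced: Ctx = ctx.difference(implicated).cloned().collect();
    let extended: Ctx = reduced.union(learned_intentions).cloned().collect();
    if has_contradiction(&extended) {
        None
    } else {
        Some(extended)
    }
}
```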
4 Discussions
We first discuss, at a high level, a couple of algorithms for resolving trust impairments as described in this article.
4.1 A trivial brute-force algorithm, no privacy
Whenever a trust impairment is detected, actors in \(\mathsf{D}\) may share their local contexts with other actors. This makes the private state/context part of a public global state/context.
Each actor with access to the “global” context can then resolve the contradiction conclusively by computationally evaluating the explanatory private intentions, via the use of formal proof-checkers.
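In code, the pooling step of this baseline is a single union over all local contexts (proof-checking the result is delegated to each actor's formal checker):

```rust
use std::collections::HashSet;

type Statement = String;
type Ctx = HashSet<Statement>;

/// Pool every actor's local context into one public global context, which
/// any actor may then proof-check. All privacy is given up by design.
fn global_context(local_contexts: &[Ctx]) -> Ctx {
    local_contexts.iter().flatten().cloned().collect()
}
```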
4.2 An algorithm with more privacy
If the actors in \(\mathsf{D}\) wish to retain private state, or if there are communication constraints, the trivial algorithm above is not suitable. The following algorithm may be adopted instead.
When trust impairments arise, the observed logical contradictions may be shared with other actors. Each actor may then generate a finite list of conjectures about private intentions that could explain the trust impairments, and perform one or more local actions to detect any logical contradiction between conjectures in the list and its local context (which includes context shared by other actors). Conjectures which do not contradict the local context can then be shared with other actors via message-passing.
Since the local context of each actor may differ, conjectures which do not introduce logical contradictions at actor \(A_i\) might introduce a logical contradiction at a different actor \(A_j\). However, sharing conjectures about private intentions, even if they contradict the local context of a different actor, may still be valuable, because they might lead to verifiable consensus on weaker conjectures about private intentions within \(\mathsf{S}\), shared awareness of which can then inform revisions to the public protocol \(\mathsf{P}.\)
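A sketch of the conjecture-filtering step, reusing the toy `s`/`not s` contradiction rule from the earlier sketch in place of a real proof checker:

```rust
use std::collections::HashSet;

type Statement = String;
type Ctx = HashSet<Statement>;

/// Toy stand-in for a formal proof checker, as in the earlier sketch.
fn has_contradiction(ctx: &Ctx) -> bool {
    ctx.iter().any(|s| ctx.contains(&format!("not {s}")))
}

/// Keep only the conjectured private intentions that do not contradict
/// this actor's local context; the survivors may be shared with peers.
fn consistent_conjectures(local: &Ctx, conjectures: Vec<Statement>) -> Vec<Statement> {
    conjectures
        .into_iter()
        .filter(|c| {
            let mut trial = local.clone();
            trial.insert(c.clone());
            !has_contradiction(&trial)
        })
        .collect()
}
```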
We discuss next the role of encryption and various authentication protocols in the context of the framework introduced in this article.
4.3 The role of encryption and authentication protocols
The basic framework in this article should be compatible with most modern encryption and authentication protocol frameworks. While being able to formally describe trust impairments has a symbiotic relationship with the latter frameworks, describing and resolving trust impairments is a distinct problem.
In particular, the use of encryption or standard authentication protocols may not resolve trust impairments as defined within the framework introduced in this article. Similarly, not using encryption or standard authentication protocols may not compromise trust.
5 Summary
In this article, we introduced a simple framework to describe trust impairments observable by a local actor within a distributed system. We also discussed what it means to resolve trust impairments. Future articles in this series dive deeper into the computational detection, description, and remediation of the trust impairments described in this article.
6 Changelog
This document was first published on October 16, 2025.
7 Feedback
The author of this article would love to hear your feedback at hello@milaneum.io. In your message, please consider including the name of the article, the listed identity of the Milaneal author of this article, and the semantic version.