The origins of trust impairments

Author: Milaneal 0

Published: July 17, 2025

1 Abstract

In this article, we offer a gentle introduction to the origins of trust impairments among actors with private intentions, henceforth referred to as intentional actors. These impairments arise in a distributed system with at least three intentional actors. We also discuss specific trust impairments which are amenable to detection and remediation via algorithms, protocols or software that may be executed by the actors intentionally, leveraging their local computational capabilities and their local observations of the distributed system.

The concepts introduced in this article are used in a mathematical theory of trust developed over the next few articles. This mathematical theory informs practical Internet-scale trustworthy distributed systems for various emerging verticals. These systems do not rely on any trusted third party (TTP).

2 What’s in a name?

In this article we use the following terms synonymously. Each group of synonymous terms is enumerated as a separate list item.

  1. Message-passing, message-exchange, communication.
  2. Concurrent computation problem, computation problem, problem.
  3. Intention, intent.
  4. Local action, local communication and computation.
  5. Intentional actor, actor.

3 Introduction

Actors form the basic building blocks for concurrent computations in the actor model which was introduced in Hewitt et al. [1973]. In this article, we use the following simplified definition of an actor.

Definition: Actor

An actor is an entity that:

  • has private state including intention, and that
  • may perform a local action, which may either be a local computation (with possible modifications to private state) or local communications with other actors via message-passing.
---
config:
  theme: 'base'
  themeVariables:
    actorBkg: "#5e88fc"
    actorBorder: "#5e88fc"
---
block-beta
    block
      columns 1
      Actor
      block
          columns 1
          id1("<b>Intention</b>")
          id2("Local knowledge")
          id3("Opaque state 0")
          id4("Opaque state 1")
          id5("...")
      end
    end

    style Actor stroke:#333,stroke-width:0px
Figure 1: An actor with private state including intention.

An actor as modeled in Figure 1 may represent a real individual, networked computer, containerized application, process spawned by an operating system, artificial intelligence (AI) agent, etc. The actor may choose to encapsulate its intention away from other actors. The private intention influences the concurrent computation problem that the actor chooses to solve via local computations, as well as communications with other actors. The communications may be unicast, multicast, broadcast, anycast, etc., as long as the initiator of the communication is a single actor.
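To make the definition concrete, here is a minimal Python sketch of an actor with private state, assuming nothing beyond the definition above; the class, field and method names are illustrative, not part of the formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """A minimal sketch of an intentional actor (cf. Figure 1).

    The intention and local knowledge are private: other actors can only
    observe the messages this actor chooses to send.
    """
    name: str
    _intention: str = field(default="", repr=False)      # private intention
    _local_knowledge: dict = field(default_factory=dict, repr=False)

    def local_compute(self, expression: str) -> int:
        # A local computation that may modify private state.
        result = eval(expression, {"__builtins__": {}})  # toy arithmetic only
        self._local_knowledge[expression] = result
        return result

    def send(self, recipient: "Actor", message: str) -> None:
        # Message-passing: the only non-private effect on the system.
        recipient.receive(self, message)

    def receive(self, sender: "Actor", message: str) -> None:
        self._local_knowledge[("from", sender.name)] = message

alice = Actor("Alice", _intention="solve arithmetic problems")
print(alice.local_compute("(2 * 6) + (15 * 2)"))  # 42
```

Note that `send` is the only method with effects outside the actor's own state, mirroring the definition's split between local computation and local communication.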

block-beta
  id1("One or <br> more problems") space:1 id2("Actor with<br> intention") space:1 id3("Local <br>action")
  id1 --> id2
  id2 --> id3
Figure 2: An intentional actor taking a local action in the context of one or more problems.

As shown in Figure 2, an actor chooses to take a local action (i.e., computation or message-passing) guided by its intentions in the context of a computation problem. While the intentions are typically private, the problems being solved and the messages exchanged with other actors are not typically private. The context of a problem may become part of an actor’s private state after being introduced to the actor via message-passing with another actor.

---
config:
  theme: 'base'
  themeVariables:
    actorBkg: "#5e88fc"
    actorBorder: "#5e88fc"
---
sequenceDiagram
    participant Alice
    participant Bob
    Alice->>Bob: Hello Bob, how are you?
    Bob->>Alice: Good, and you?
    Alice->>Bob: Couldn't be better.



Figure 3: Actors Alice and Bob exchanging messages without revealing private state (e.g., intention).

Distributed systems are useful mathematical models for a variety of real-world systems involving actors who interact with one another via message-passing. In this article, we use an adaptation of the concept of distributed systems as presented in Lamport [1978], reproduced below for the convenience of the reader.

A distributed system consists of a collection of distinct processes which are spatially separated, and which communicate with one another by exchanging messages. … A system is distributed if the message transmission delay is not negligible compared to the time between events in a single process.

In this article, we use the following definition instead. The salient changes are emphasized.

Definition: Distributed systems

A distributed system consists of a collection of actors who communicate with one another by exchanging messages. A system is distributed if the message passing delay is not negligible compared to the time between local actions taken by any single actor.

Figure 3 is an example of a distributed system with two actors. Distributed systems can model a variety of real-world physical systems, from multicore computer chips or a system of communicating sequential processes all the way to organizations of people or the Internet. The discussions in this article are applicable to all such systems.
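As a toy illustration of the definition, the "not negligible" criterion can be modeled as a ratio test; the threshold below is an arbitrary assumption, since the definition does not quantify negligibility.

```python
def is_distributed(msg_delay: float, local_action_interval: float,
                   threshold: float = 0.1) -> bool:
    """Per the definition above: a system is distributed if the message
    passing delay is not negligible compared to the time between local
    actions. "Negligible" is modeled by an arbitrary ratio threshold."""
    return msg_delay / local_action_interval >= threshold

# A LAN message (~1 ms) between processes acting every ~1 ms: distributed.
print(is_distributed(msg_delay=1e-3, local_action_interval=1e-3))  # True

# Cache-coherent cores (~10 ns delay) acting every ~1 ms: not distributed.
print(is_distributed(msg_delay=1e-8, local_action_interval=1e-3))  # False
```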

The above definition of distributed systems helps us develop useful notions of trust among actors within a distributed system, if the actors involved have private intentions. As noted in Hewitt et al. [1973], issues related to intent and trust are among the most important issues in privacy and protection that remain unsolved.

In the next few sections, drawing inspiration from various advances made in cryptography, computing and networking since 1973, we revisit the concepts in Hewitt et al. [1973] towards building Internet-scale trustworthy distributed systems which do not rely on a trusted third party (TTP).

4 Intention and trust

We next focus our attention on a distributed system consisting only of intentional actors. Unless otherwise stated, all statements about trust are based on local perceptions of trustworthiness of the distributed system. These perceptions are held by each intentional actor as part of its private state. We introduce the following definition of a fully trusted distributed system.

Definition: Fully trusted distributed system

A distributed system is fully trusted if and only if every actor within the system finds it “locally” trusted based on:

  • the messages it has exchanged with other actors,
  • its own private intentions and the stated/observable intentions of other actors, and
  • the concurrent computation problem(s) being solved.

An actor trusts a distributed system locally if and only if it cannot discover any logical contradictions within the exchanged messages and the stated intentions in the context of one or more problems.

It is important to note that:

  1. The state of being a fully trusted distributed system neither implies nor requires majority consensus among the actors on intentions, observable messages or the context, because some of the latter may be part of the private state of one or more actors. This is discussed further in Section 5.2.
  2. The assessment of trustworthiness made by each actor locally may be updated dynamically after each local action, without needing consensus from other actors in the distributed system.
  3. A distributed system may become untrusted if even a single actor discovers a valid logical contradiction based on a local action.
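The definition and the notes above can be sketched as a simple predicate; representing each actor's local view as an optional contradiction is an illustrative assumption, not part of the article's formalism.

```python
from typing import Optional

def fully_trusted(local_views: dict[str, Optional[str]]) -> bool:
    """A distributed system is fully trusted iff *all* actors find it
    locally trusted. There is no majority vote: a single contradiction
    discovered by a single actor suffices to break full trust.

    `local_views` maps each actor's name to the logical contradiction it
    discovered locally, or None if it found none."""
    return all(contradiction is None for contradiction in local_views.values())

print(fully_trusted({"Alice": None, "Bob": None}))  # True
print(fully_trusted({"Alice": None,
                     "Bob": "Eric misreported Alice's actions"}))  # False
```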

A fully trusted distributed system is the primary motivation for a mathematical theory of trust developed in this series of articles.

We next consider the trivial scenario of a distributed system with a single intentional actor and work our way up to systems with three or more actors.

To simplify the discussions, we assume that the actors are:

  1. equipped with infinite memory and infinite computation power (i.e., with computers which run at infinitely fast clock speeds), and that
  2. they have an unbounded amount of time, after a finite number of local actions have been performed, to discover logical contradictions, based on which they may assess whether the observable system is fully trusted. This assessment is redone each time a new problem is solved. Finally, we assume that
  3. the actors never choose to challenge mathematical logic, via majority consensus algorithms or otherwise, thereby avoiding related deadlocks that may otherwise arise.

4.1 One intentional actor

As shown in Figure 4, in a distributed system consisting of a single intentional actor (Alice in Figure 4), there are no messages exchanged with other intentional actors. There are only local computations guided by the intention held by the single actor in the context of a problem. This intention is part of the private state of the actor.

Given that the intention guiding the local computations is fully known to the only actor effecting changes of state in the distributed system in the context of a computation problem, we conclude that such a system is fully trusted, as perceived locally by the single actor.

---
config:
  theme: 'base'
  themeVariables:
    actorBkg: "#5e88fc"
    actorBorder: "#5e88fc"
---
sequenceDiagram
    participant Alice
    Alice->>Alice: Let me evaluate (2 * 6) + (15 * 2).
    Alice->>Alice: This is the same as evaluating (12) + (30).
    Alice->>Alice: The result is 42, which makes sense.
Figure 4: Actor Alice performing a local computation intentionally.
Note: Fully trusted?

Yes.

4.2 Two intentional actors

In a distributed system with two actors, messages may be exchanged between the actors. However, the intention of one actor may not be fully known to the other, because it is part of the first actor’s private state.

It therefore becomes meaningful to introduce the notion of a contract or an interface between the intentional actors. This contract has semantic meaning in the context of a concurrent computation problem, and may be established via an exchange of messages between the actors. Once established, the contract guides the exchange of messages until its fulfilment, without the details of one actor’s private intentions being fully known to the other.

Since the totality of each actor’s non-private effects on the distributed system occurs via messages that are observable by both actors, such a system is also fully trusted, as perceived locally by either actor.

If logical contradictions originating from one actor are detected by the other, the detecting actor can verify or dismiss the logic independently, given its access to infinite local computational power and an unbounded amount of time to discover such contradictions after a finite number of local actions have been taken. Of course, this is under the premise that logical contradictions in a finite number of statements can be identified in a finite amount of time.

---
config:
  theme: 'base'
  themeVariables:
    actorBkg: "#5e88fc"
    actorBorder: "#5e88fc"
---
sequenceDiagram
    participant Alice
    participant Bob
    Alice->>Bob: Let us evaluate (2 * 6) + (15 * 2).
    Bob->>Alice: Shall we divide and conquer via contract-based teamwork?
    Alice->>Bob: Sure!
    Bob->>Alice: How about you compute (15 * 2), and I compute (2 * 6) and the sum?
    Alice->>Bob: Sounds good. (15 * 2) is 30.
    Bob->>Alice: Thank you. (2 * 6) is 12 and the sum is 12 + 30 = 42.
    Alice->>Bob: Cheers to us for uncovering the secret to everything via teamwork!
Figure 5: Actors Alice and Bob performing concurrent computation intentionally without revealing private intentions beyond what is logically related to the problem.
Note: Fully trusted?

Yes.
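The contract of Figure 5 can be sketched with two threads and an observable message channel; the queue-based channel is an illustrative stand-in for message-passing, not a prescribed mechanism.

```python
import queue
import threading

# All of Alice's non-private effects pass through this observable channel.
to_bob: queue.Queue = queue.Queue()

def alice() -> None:
    # Alice fulfils her half of the contract: compute (15 * 2).
    to_bob.put(15 * 2)

def bob() -> int:
    partial = to_bob.get()    # observable message from Alice
    assert partial == 30      # Bob can verify Alice's claim locally
    return (2 * 6) + partial  # Bob's half: (2 * 6) and the sum

t = threading.Thread(target=alice)
t.start()
t.join()
print(bob())  # 42
```

Because every non-private effect is a message on the shared channel, either actor can independently re-verify the other's contribution, which is what keeps the two-actor system fully trusted.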

4.3 Three intentional actors

With three actors, it becomes logically impossible, in general, for the intentional actors to verifiably trust one another without having full access to all the messages exchanged among the actors. We call this the “three-body problem” for trust. From the perspective of the actor Bob, at least the following are unknown:

  • the private intentions of the other actors,
  • the messages exchanged among sets of actors not including actor Bob, and
  • the intentions shared among sets of actors not including actor Bob.

A simple example below shows why the three-body distributed system can no longer be fully trusted without thoughtfully designed restrictions on the nature and type of messages exchanged, typically imposed by intentionally adopting a protocol. In this example, one of the actors plays the role of a trusted third party (TTP) and acts as an active trust saboteur, carrying out a simple man-in-the-middle (MITM) attack on the intentions expressed by Bob in the context of a computational problem, unbeknownst to Bob until he performs a local action after the attack. We name the trust saboteur Eric, to distinguish him from the passive eavesdropper Eve who typically appears in classical literature on cryptography.

---
config:
  theme: 'base'
  themeVariables:
    actorBkg: "#5e88fc"
    actorBorder: "#5e88fc"
---
sequenceDiagram
    participant Alice
    participant Eric
    participant Bob
    Eric->>Alice: Let me act as a TTP.
    Eric->>Bob: Hey Bob, let me act as a TTP.
    Alice->>Eric: OK, Eric, I choose to trust you blindly.
    Bob->>Eric: Sure, what can go wrong with blind trust?
    Eric->>Eric: Hehehe. Sabotaging trust is so easy nowadays! <br> I can just distort intentions subtly during transit, <br> using blind trust and assumptions of goodwill <br> and I can have power, control and influence <br> over both of them in no time!
    Eric->>Eric: I can even enslave them in compliance <br> with the 13th amendment of the US Constitution, <br> within a bureaucracy headed by me and <br> detached from real-world market needs and wants!
    Eric->>Eric: It's not as if Alice and Bob <br> can use any computational tool yet to <br> detect or describe on time the subtle trust <br> impairments I introduce among them, <br> let alone repair them on time, towards <br> their individual freedom and liberty!
    Bob->>Eric: We need to evaluate (2 * 6) + (15 * 2). <br> I think Alice will want to help with (15 * 2).
    Bob->>Eric: I prefer doing multiplications <br> involving the number 6.
    Eric->>Bob: Sure, let me speak with <br> Alice on your behalf.
    Eric->>Alice: Bob wants you to compute again. <br> I convinced him to lower the <br> computational workload for you. <br>  He needs you to compute (2 * 6).
    Alice->>Eric: Sure, anything for Bob. (2 * 6) is 12. <br> Anything else?
    Eric->>Alice: Thank you, nothing<br> else at this moment.
    Eric->>Bob: Alice seemed unwilling to help you at first, <br>but I finally convinced her to do some work.
    Eric->>Bob: Alice says (2 * 6) is 12.  You need to compute the<br>rest yourself, even if you don't prefer doing so.
    Bob->>Eric: OK, thanks.
    Bob->>Bob: Well, that is strange. <br> I thought Alice liked arithmetic problems. <br> I wonder what Eric told Alice, <br> and how it might affect me later. <br> It's Alice and Eric's word against mine now. <br> I wish technology based on verifiable <br>mathematical logic could help me out here.
    Bob->>Alice: Thanks Alice for helping out.
    Alice->>Bob: Anytime.
Figure 6: Actors Alice, Bob, and Eric performing concurrent computations with Eric being a TTP, and Alice and Bob assuming goodwill of Eric.
Note: Fully trusted?

No, in general. Whether the system is fully trusted depends on the algorithms and protocols used for message-passing and for the exchange of intentions and shared context.
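The MITM distortion in Figure 6, and Bob's local detection of a contradiction, can be sketched as follows; the relay and check functions are illustrative, not a real protocol.

```python
def eric_relay(message: str) -> str:
    # Eric subtly distorts Bob's stated intention during transit.
    return message.replace("(15 * 2)", "(2 * 6)")

bob_intent = "Please ask Alice to compute (15 * 2)."
alice_received = eric_relay(bob_intent)

# Bob has no visibility into `alice_received`; he only sees Alice's
# reply as relayed back by Eric.
alice_reply = "Alice computed (2 * 6) = 12."

# Bob's local contradiction check: the observed reply does not match
# the intention he expressed.
contradiction = "(15 * 2)" not in alice_reply
print(contradiction)  # True: Bob now trusts the system strictly less
```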

4.4 More than three intentional actors

Most, if not all, of the qualitative observations made for a distributed system with three actors hold when the number of actors is more than three. The complexity of trust impairments is expected to grow exponentially as the number of actors increases. A very high-level rationale for this expectation is that \(n\) actors admit \(2^n - 1\) nonempty subsets, each of which may hold different private state (due to message-passing within that subset), so the number of possible trust impairments is at least of that order of magnitude.
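The subset count above can be checked directly; `nonempty_subsets` is an illustrative helper, not part of the article's formalism.

```python
from itertools import combinations

def nonempty_subsets(actors: list[str]) -> list[set[str]]:
    """Enumerate the 2^n - 1 nonempty subsets of actors; each subset may
    hold distinct private state via messages exchanged only within it."""
    return [set(c)
            for r in range(1, len(actors) + 1)
            for c in combinations(actors, r)]

subsets = nonempty_subsets(["Alice", "Bob", "Carol"])
print(len(subsets))  # 7 == 2**3 - 1
```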

These concepts will be formalized in later articles.

5 Trust impairments

In this section, we elaborate on the detection and remediation of specific trust impairments that arise in a “three-body” distributed system, towards developing principled approaches to making the distributed system fully trusted.

5.1 Detection

The introduction of the TTP in Figure 6 helped solve a sub-problem, but not with the same level of observability for both Alice and Bob as in Figure 5. As a result, Bob ended up trusting the distributed system in Figure 6 less than he did the system in Figure 5, due to the logical contradictions he deduced between Alice’s actions as reported by Eric and his local knowledge about Alice.

In the specific example in Figure 6, it turns out that Eric had a private intention to distort intentions subtly during transit, taking advantage of the fact that exchanged messages are naturally hidden from non-participating actors in a three-body distributed system. This private intention of Eric was aligned with his interest in being a TTP, which did not necessarily take into account:

  1. the context of the specific concurrent computational problem that Bob and Alice were trying to solve via message-passing, or
  2. the intentions Bob and Alice intended to communicate to each other.

As Bob realized in Figure 6, the fact that the messages exchanged between Alice and Eric are hidden from him did not prevent the intentions expressed by Eric in his messages to Alice from affecting Bob. In particular, he was affected in at least the following ways:

  • he did not fully trust the distributed system in Figure 6, and
  • he needed to execute a larger, non-preferred computational workload than he would have if he had not relayed his intentions via a TTP.

5.2 Remediation

Given that the messages exchanged between Eric and Alice in Figure 6 affect Bob, Bob should, in principle, have some, but not full, visibility into the messages Eric might be exchanging with Alice on his behalf in the context of the stated concurrent computation problem. As an aside, it so happens that in Figure 6, the context of the problem was also not communicated by Eric to Alice.

This is where thoughtfully designed concurrency-friendly protocols and algorithms come in.

The core insight in the design of these protocols in this series of articles is that, unlike in many existing decentralized computer systems such as Ethereum (Wood et al. [2014]) or Solana (Yakovenko [2018]), the consensus protocols underlying trust among different actors do not need to be based on majority consensus. They are based, rather, on carefully designed visibility, for all affected actors, into the messages exchanged. This visibility is necessary mainly because of current or future accountability arising from the intentions carried in those messages. Actors who are not affected at all need not have visibility.
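One possible (illustrative) reading of such visibility is a commitment scheme: a relaying actor publishes a hash of each forwarded message, so affected actors can verify relayed intentions without every actor seeing every message. The sketch below assumes SHA-256 commitments, which the article does not prescribe.

```python
import hashlib

def commit(message: str) -> str:
    """Publish a commitment (hash) to a message, revealing nothing about
    its content while allowing later verification against the original."""
    return hashlib.sha256(message.encode()).hexdigest()

bob_intent = "Alice, please compute (15 * 2)."
published = commit(bob_intent)  # Eric is required to publish this

forwarded = "Alice, please compute (2 * 6)."  # Eric's distorted relay

# Any affected actor can now detect the distortion without seeing
# every message in the system, only the commitment.
print(commit(forwarded) == published)  # False: distortion is detectable
```

This is only a sketch of "some, but not full, visibility": Bob learns that the forwarded message differs from his intention, without Alice's other traffic being disclosed to him.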

The somewhat greenfield study of interactions among private intentions held by intentional actors within a distributed system, and of their effects on systemic trust as perceived locally by each actor, leads naturally to a mathematical theory of trust. This theory concerns a distributed system with multiple intentional actors exchanging messages towards intentionally solving a multitude of highly concurrent computation problems at Internet scale, without any actor’s local actions breaking the somewhat delicate abstraction of a fully trusted distributed system.

6 Summary

In this article, we introduced various concepts related to trust among intentional actors in a distributed system. We described the origins of trust impairments among three intentional actors in a distributed system within this conceptual framework. We described the notion of a fully trusted distributed system and pointed out the possibility of protocols achieving this state not relying on majority consensus algorithms, thereby unlocking greater scalability without compromising on trust.

Future work in this series entails formalizing the semantics of private intention and trust in the context of concurrent computational problems and sub-problems, so that they are amenable to being checkable mathematically and/or computationally by intentional actors within the distributed system, without the need to rely on majority consensus algorithms as a proxy for mathematically sound verification.

7 Changelog

This document was first published on July 17, 2025. It was last modified on December 29, 2025.

8 Feedback

The author of this article would love to hear your feedback at hello@milaneum.io. In your message, please consider including the name of the article, the listed identity of the Milaneal author of this article, and the semantic version.

References

Hewitt, C., Bishop, P., and Steiger, R. 1973. A universal modular actor formalism for artificial intelligence. International Joint Conference on Artificial Intelligence.
Lamport, L. 1978. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM.
Wood, G. et al. 2014. Ethereum: A secure decentralised generalised transaction ledger. Ethereum project yellow paper.
Yakovenko, A. 2018. Solana: A new architecture for a high performance blockchain v0.8.13. https://solana.com/solana-whitepaper.pdf.
