
Trust is perhaps the most powerful aspect of human relationships. In his book 📖 The Thin Book of Trust [1], Charles Feltman proposes a framework for reasoning and talking about trust. The choice to trust consists of four distinct assessments about how someone is likely to act: care, sincerity, reliability, and competence.
For example, someone may be competent, able to do the work, but not reliable. Rather than simply labeling that person as untrustworthy, you can reason about why you don't feel comfortable trusting them.

Trust in Teams

Not too long ago, when I led many engineers and engineering managers, one thing I told them when they first joined was to work on being worthy of their teammates' trust. I also encouraged them to work on trusting their teammates, and, when that trust wasn't there, to address it using these ideas. When it comes to teams and collaborative 👥 effort, being able to trust one's peers is powerful and important.

Trustless Security

On the other hand, when you go into the security 🔐 universe, one of the first things you learn is "don't trust, verify." That's also why we use cryptography to secure Bitcoin and many other important human technologies. The point is that we are now living in a world where we treat our peers as adversaries because we have been disappointed so many times by trusting people.
So, we build censorship-resistant technology and trustless [2] protocols so that we don't need to trust others to be honest; instead, we incentivize them to behave honestly. We develop zero-knowledge [3] architectures where we protect ourselves from bad actors by not knowing anything, and the list goes on.
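
To make the contrast concrete, here is a minimal sketch of what "don't trust, verify" looks like in practice: instead of trusting that a message came from a peer, we check a digital signature against their public key. This is an illustrative example only, assuming the third-party Python `cryptography` package; the message and variable names are made up.

```python
# "Don't trust, verify": check a digital signature instead of taking a
# message on faith. Requires the `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The peer generates a key pair and shares only the public half with us.
peer_private_key = Ed25519PrivateKey.generate()
peer_public_key = peer_private_key.public_key()

# The peer signs a message; we receive the message and the signature.
message = b"I will deliver the goods by Friday."
signature = peer_private_key.sign(message)

# We verify rather than trust: a forged or altered message fails this check.
try:
    peer_public_key.verify(signature, message)
    print("Signature checks out: the message is authentic and untampered.")
except InvalidSignature:
    print("Verification failed: do not act on this message.")
```

The design point is that honesty is enforced mathematically rather than assumed socially: anyone can run the verification, and no amount of persuasion substitutes for a valid signature.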

Moving Forward

This brings me to the contradiction in question. How do we move forward as a species and decide when to pursue and expect full trust, versus when to assume we are all threats and devote our energy to staying safe?

Footnotes