Dear HTTP service, how can I trust you not to break my heart?

Willem Schots
12 Mar, 2026
~4 min. read

When you send your most intimate data to an AI platform, how do you know it's not being shared with third parties? Or that it won't be used to train the next generation of models?

More generally, how do you know that any web service will fulfill its social, contractual and legal obligations?

From the outside you might be able to see that an endpoint uses TLS, but you have no guarantees about what happens beyond that.

For example, if the service were to forward your data in plaintext to an unauthorized third party you’d have no way of knowing.

[Sequence diagram: a user sends sensitive data to a service, and the service forwards the data to a third party]
The client wouldn't know if the service shared data with a third party.

A service might have its source code available for review, so you could verify whether your data is handled appropriately.

But how do you know that the published code actually corresponds to the system that is deployed and handling your requests?

This is an issue for the service operator as well. How do they know their development tooling isn’t actually compromising deployments?

[Sequence diagram: a developer approves a build, but the CI system deploys a compromised build]
How would the developer know if a deployment is compromised?

It’s a matter of trust.

But trust-based accountability only goes so far, and with the developments in the world… we need something better.

Verify first, trust later

Last year I was asked by the fine folk at Confident Security to help out on a project. This project turned into Open Private Cloud Compute (OpenPCC): An open source toolkit for building verifiably private systems.

It's inspired by Apple's PCC, but targets commodity hardware/software and SaaS-style deployments.

In this article I'll only discuss the core design at a high level; this doesn't describe the entire system and skips important concepts. I'll go into more detail in later articles, but if you want to dive in now, go read the OpenPCC whitepaper.

At the time of writing, roughly 90% of the functionality is there. We're sanding down rough edges, and 2-3 components are still using temporary implementations.

OpenPCC essentially provides an HTTP client with an important limitation: it will only send requests to services that it has verified beforehand.

Verification consists of rules that ensure a service is running the expected hardware and software.

If we model this process in a sequence diagram, it looks somewhat like this:

[Sequence diagram: the client verifies evidence from the service, and encrypts its data so that the evidence must be valid when the service decrypts the request to handle it]

As you can see, after a service is verified, the client will send encrypted requests to that service.

This encryption is crucial to protect the user data from third-parties outside of the verified service boundaries.

Additionally, and this makes OpenPCC stand out among similar solutions, the encryption ties the request to the verified evidence.

If the system state no longer matches the evidence at the time of decryption, the service won't be able to decrypt the request.

This binding process is implemented using a Trusted Platform Module (TPM): a (virtual) chip dedicated to cryptographic operations, present on most modern systems. It allows binding cryptographic keys to system state.

The TPM is a standalone system designed to be resistant to attacks from (system) users.

This doesn't mean TPMs are immune to attacks, though.

Anonymity

While hardware- or hypervisor-level attacks are considered out of scope, OpenPCC does go to great lengths to prevent such attacks from targeting specific users. This "non-targetability" is one of OpenPCC's primary design goals.

From the perspective of the service, incoming OpenPCC requests are fully anonymous.

With a small number of users, this might require artificial traffic to mask genuine traffic.

In a multi-node deployment, the compromise of a single node won’t allow an attacker (or malicious service operator) to target a specific user, as they can’t selectively route traffic to that node.

Transparency

During operation, the OpenPCC client requires externally sourced data, most notably build manifests and several public keys.

The OpenPCC client requires this data to be published to a public append-only log, a so-called transparency log. Anyone in the world can track changes to this data.

This means that a service operator won’t be able to, for example, create user-identifying keys without leaving a public trace of their malicious behavior.

Wrapping up

There’s a lot more to cover (and build), and I’ll be doing that in future articles.

If you’re curious how OpenPCC fits together, the whitepaper walks through the full design, and the repository has the code.

Career choice: Learn skills to mitigate the upcoming AI privacy disaster*

Join 800+ devs reading my newsletter

*Everyone and their mother is sending sensitive data to AI systems with little concern for their privacy. If you read the fine print, vendors and platforms actually offer very few guarantees. It's a matter of time before it goes wrong.

From March 2026 onwards, I'll be writing about development of verifiably-secure services using OpenPCC.

Willem Schots

Hello! I'm the Willem behind willem.dev

I created this website to help new Go developers. I hope it brings you some value! :)

You can follow me on Bluesky, Twitter/X or LinkedIn.

Thanks for reading!