
Disrupting the entire internet

3 simple ideas for HTTP protocols that can pave the way for a frictionless, convenient, private internet

The internet currently sucks

I wrote about why current authentication and identification protocols suck in a post earlier this year. This post introduces 3 ideas that offer a much better path to fixing the internet.

Last month I challenged myself to find a way to rid the internet of those annoying browser cookie warnings. The best way would be to get rid of cookies altogether because then those warnings wouldn’t be needed anymore, right?

I tackled the issue, and after sleeping on my thoughts for a few days, I’ve come to the conclusion that my initial scepticism was wrong. An internet without cookies is indeed possible. And it can also be far better than the internet of today.

Hating on cookies

Cookies use browser-based storage, and EU law now requires explicit user consent for them on a domain-by-domain basis. Thanks, EU, for your well-intentioned but poorly implemented legal policies. Now the web has become a wasteland of cookie permission request roadblocks.

But more insidious than UX annoyances is the fact that cookies can result in your internet browsing behaviour being tracked, logged, and analyzed. If you’re interested, here’s how third-party cookies work.

Fingerprinting

More bad news… Even in situations where you have cookies disabled or you’re browsing in an incognito mode window, you are still identifiable.

How?

Every page visit is accompanied by a payload of HTTP headers. Information like your timezone, user-agent, installed fonts, and list of browser plugins (some of it sent in headers, some gleaned via JavaScript), plus a significant list of other identifiers, can be used to uniquely identify or de-anonymise you.

Not even using a VPN can obscure this information.
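To make the fingerprinting risk concrete, here is a minimal sketch of how a server could derive a stable identifier just from the headers a browser volunteers. The specific header values are illustrative, not taken from any real browser, and real fingerprinting combines many more signals:

```python
# Sketch: deriving a stable fingerprint from request headers alone.
# The header values below are illustrative, not from a real browser.
import hashlib

def fingerprint(headers: dict) -> str:
    """Combine identifying headers into a single stable hash."""
    keys = ["User-Agent", "Accept-Language", "Accept-Encoding", "DNT"]
    material = "|".join(headers.get(k, "") for k in keys)
    return hashlib.sha256(material.encode()).hexdigest()[:16]

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "Accept-Language": "en-GB,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
    "DNT": "1",
}
print(fingerprint(headers))  # same browser, same hash, VPN or not
```

The point is that the hash is deterministic: a VPN changes your IP address but not these headers, so the fingerprint survives the hop.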

So it’s all doom and gloom?

Not with a bit of imagination and the collaboration of a lot of smart people. Hopefully you’re one of the smart people that can help turn today’s internet on its head.

Introducing HTTP-DRUID

Here’s a crazy idea.

Instead of relying on existing devices such as domain-set cookies, sessions could rely on a similar device generated by the browser itself.

Rather than attaching cookie headers to requests, the browser would generate a unique hash per domain on the initial visit. For this first, and every subsequent, request, the browser would inject an HTTP header for the responding server to parse and identify a visitor, much as servers already do with regular cookies.

So how would this work?

A Domain Restricted Unique Identification Device would be just another HTTP header that is both immutable and user-revocable. Optimally (and similar to how the uBlock Origin plugin works), a browser would natively provide the ability for users to:

  • include an HTTP-DRUID header for all requests globally (unique per domain)
  • exclude HTTP-DRUID headers for all requests globally
  • include or exclude an HTTP-DRUID header based on a whitelist or blacklist of domains
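The per-domain, revocable behaviour described above could be sketched as follows. Everything here is hypothetical: the `HTTP-DRUID` header is the article’s proposal, and no real browser exposes an API like this. The idea is simply to hash a locally stored random secret together with the domain, so identifiers are stable per domain but unlinkable across domains:

```python
# Hypothetical sketch of a browser minting one immutable, revocable
# identifier per domain. Nothing here is a real browser API.
import hashlib
import secrets

class DruidStore:
    def __init__(self):
        self.master = secrets.token_bytes(32)  # never leaves the browser
        self.revoked = set()

    def druid_for(self, domain: str):
        """Value for the (hypothetical) HTTP-DRUID header, or None."""
        if domain in self.revoked:
            return None  # user revoked this domain: send no header
        return hashlib.sha256(self.master + domain.encode()).hexdigest()

    def revoke(self, domain: str):
        self.revoked.add(domain)

store = DruidStore()
a = store.druid_for("example.com")
assert a == store.druid_for("example.com")   # stable across visits
assert a != store.druid_for("example.org")   # unlinkable across domains
```

Deriving the identifier from a master secret means revocation is just a local bookkeeping entry, and two domains can never correlate their DRUIDs without colluding on something else.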

The path to this migration requires submitting a DRUID standard to the IETF as an RFC. Once a standard is well defined and scrutinised, there would be a framework for web developers to build DRUID-compatible authentication services.

Benefits to HTTP-DRUID

DRUID strings would be frictionless: they would require no setup or acceptance by users because the client generates them automatically. Since they’d be HTTP headers, they would simply form part of web requests. This means they should (IANAL) fall outside the EU cookie law, so any website that adopts the protocol could scrap its cookie warning banner.

Another benefit is that account registration and session management on websites would be significantly simplified, ushering in a new era of usability. Imagine (almost) never having to remember a password or login details again; wouldn’t that be amazing?

Implementation

Because users with HTTP-DRUID enabled browsers would be uniquely identifiable to the websites they visit, password-based authentication would no longer be needed.

In fact, at the lowest-friction end of the usability spectrum, any website could instantly convert any new visitor into a registered user without any action required on their part. Websites can already do that with cookies, but if a user’s cookie is deleted it becomes impossible for them to log back in, because they have no credentials to use. And of course this method would not be recommended as-is, because DDoS “registration attacks” would cripple the database any website relies on.

A single registration button protected by a CAPTCHA would overcome this problem, making for a truly “single-click” registration process. (Erm, reCAPTCHA itself uses cookies, so something a bit different would be needed here.) Because the identifier is sent automagically with each HTTP request, users would remain authenticated across browsing sessions unless they voluntarily revoke their DRUID for a specific website.
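Server-side, single-click registration keyed on the DRUID header could look like the sketch below. The header name and the in-memory “database” are assumptions for illustration; a real service would persist accounts and rate-limit creation:

```python
# Sketch of server-side registration keyed on the proposed
# HTTP-DRUID header. The dict stands in for a real database.
users = {}  # druid -> account record

def register_or_login(headers: dict) -> dict:
    """First sight of a DRUID creates the account; later requests
    with the same header are implicitly authenticated."""
    druid = headers.get("HTTP-DRUID")
    if druid is None:
        raise ValueError("no DRUID header; fall back to classic auth")
    return users.setdefault(druid, {"druid": druid, "email": None})

acct1 = register_or_login({"HTTP-DRUID": "abc123"})
acct2 = register_or_login({"HTTP-DRUID": "abc123"})
assert acct1 is acct2  # same browser, same account, no password
```

Registration and login collapse into the same code path, which is exactly the usability win the proposal is after.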

Additional information required to complete an account profile can be requested incrementally after registration (this should already be the norm if usability is a factor in your website design). An email address can and should be attached to the user’s account once registered, so that the account can later be linked to additional browser installations across the user’s devices.

Combining the above with the Slack-esque magic login link technique, passwords would no longer be required for most web services. An initial HTTP request from a secondary browser to a magic URL would let the web service append the new DRUID to the user’s list of account DRUIDs in the service’s database.
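The magic-link device-linking step could be sketched like this. The account structure, URL shape, and token handling are all illustrative assumptions; the essential move is that visiting the single-use link from a second browser appends that browser’s DRUID to the account:

```python
# Sketch of magic-link device linking: a second browser visits a
# single-use URL and its DRUID joins the account's DRUID list.
# Account records and the URL shape are made up for the example.
import secrets

accounts = {"user-a": {"druids": ["druid-laptop"]}}
pending_links = {}  # token -> account id

def issue_magic_link(account_id: str) -> str:
    token = secrets.token_urlsafe(16)
    pending_links[token] = account_id
    return f"https://example.com/magic/{token}"  # emailed to the user

def visit_magic_link(token: str, new_druid: str):
    account_id = pending_links.pop(token)  # single use: pop, don't get
    accounts[account_id]["druids"].append(new_druid)

url = issue_magic_link("user-a")
token = url.rsplit("/", 1)[1]
visit_magic_link(token, "druid-phone")
assert accounts["user-a"]["druids"] == ["druid-laptop", "druid-phone"]
```

Because the token is popped on first use, a leaked link in an inbox can’t be replayed to attach an attacker’s DRUID later.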

What about the fingerprinting problem?

Here’s a second, crazier idea.

A potential solution to counteract the “leakiness” of identifiable web requests is to split every request into a cluster of requests. Instead of providing a boatload of HTTP information in one go, a browser could make several requests, each carrying only a single header (or a limited set of headers).

The first request would include the user-agent and timezone, the second the browser fonts, the third the Accept and encoding headers, and so on.

The browser would then combine all responses once the burst of requests is complete, and ultimately render or update the page. This burst technique would reintroduce the possibility of completely anonymous browsing by supplying a broken stream of requests from which a responding server is unable to identify the visitor.
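The splitting step is the easy half of the idea and can be sketched in a few lines. How a server would meaningfully answer single-header sub-requests, and how the browser would reassemble them, is left open here, as it is in the proposal itself:

```python
# Sketch of the BURST split: one logical request becomes several
# physical requests, each carrying exactly one identifying header.
FULL_HEADERS = {
    "User-Agent": "Mozilla/5.0 ...",
    "Accept-Language": "en-GB,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
}

def burst(url: str, headers: dict) -> list:
    """Return one (url, single-header dict) pair per header."""
    return [(url, {name: value}) for name, value in headers.items()]

for url, hdrs in burst("https://example.com/page", FULL_HEADERS):
    assert len(hdrs) == 1  # no single request carries the full profile
```

The privacy claim rests entirely on the server being unable to correlate the sub-requests, which is why IP address, timing, and TLS session reuse would all need the same treatment.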

HTTP-BURST

This Broken Unidentifiable Request Style Transfer could be implemented at the browser level. Alternatively, a “polyfill” could be made available as a browser extension that either strips HTTP headers before making a burst request, or fakes header information in a round-robin style, where each burst request carries only one valid, real header.

Of course this BURST technique can only provide anonymised browsing for web users that have cookie-less sessions enabled and are visiting websites that do not require authentication.

HTTP-SWARM

This is where SWARM comes in. Please put away your judgemental looks, because this is the most ridiculous idea of them all. The above 2 are probably somewhat viable; this one is just a throw-away idea.

The anonymity provided by HTTP-BURST web browsing is obviously not compatible with any web service where authentication is required. Even with HTTP header obscurity, a session acts as an identifier of who is currently making web requests.

But pseudo-anonymity might still be possible.

Given the existence of a trustless, decentralised, and logless throughput proxy, browsing habits can be polluted to the point where user profiling can be severely restricted.

This is where the Secure Wallet Authentication Request Mechanism comes in.

Whether through HTTP-DRUID headers or session-based authentication, a proxy service could bundle any HTTP GET request with an arbitrary number of other HTTP GET requests to the same domain. This kind of anonymity-by-pollution would make profiling of any user who channels their requests through the proxy unreliable.

SWARM could pollute a request in one of two ways. The simpler: bundle it with several other requests from the same user to different endpoints, discarding all but the requested one.

The more elaborate mechanism would require the proxy to hold the DRUID or session string for a collection of users in a secure wallet residing on each node in the network. For every request from user A, the proxy would make a batch of requests using the authentication devices of users B, C, D, etc., to either the same endpoint or a range of different endpoints on the same target domain.

Because every request is a valid request, the web service would not be able to identify which of the endpoints was the one intended for the request maker. The remainder of the requests would simply be discarded by the proxy before returning the response to the request maker.
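The second, wallet-based mechanism can be sketched as below. The wallet contents, fetch callback, and endpoint are all made up for the example; the essential property is that every request in the batch is a valid authenticated request, and the proxy keeps only the caller’s response:

```python
# Sketch of the SWARM proxy: user A's GET is bundled with decoy GETs
# made with other users' credentials, then all but A's response is
# discarded. Wallet contents and endpoints are invented for this demo.
import random

wallet = {"A": "druid-a", "B": "druid-b", "C": "druid-c"}

def swarm_fetch(user: str, path: str, fetch) -> str:
    """fetch(path, druid) -> body; only the caller's response is kept."""
    batch = [(u, path) for u in wallet]  # same endpoint, every identity
    random.shuffle(batch)                # no positional giveaway
    result = None
    for u, p in batch:
        body = fetch(p, wallet[u])       # every request is a valid request
        if u == user:
            result = body                # keep the caller's response
    return result                        # decoy responses are discarded

fake_fetch = lambda path, druid: f"{path} as {druid}"
assert swarm_fetch("A", "/feed", fake_fetch) == "/feed as druid-a"
```

From the target server’s perspective, users A, B, and C all fetched `/feed`, and it has no way to tell which request carried real intent.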

Why only GET requests?

Other HTTP methods would not be covered by this technique, as identification can obviously be derived from which user account is creating, updating, or deleting a resource. This protocol would also not be recommended for any kind of service where critical personal information is used, e.g. banking services, tax filing services, visa applications, etc.

The feasibility of this approach requires that each and every node in the decentralised proxy network runs completely immutable code and self-destructs on any unauthorised access.

Preferably, a mass-revoke request would be made at the time of self-destruction to alert every other node that potentially-compromised DRUID or session identifiers are no longer safe.

Whether a proxy network like this could be funded via a donation-style method like Wikipedia, or via cryptocurrencies, or some other form is beyond the scope of the introduction of the idea.

Major challenges

For HTTP-DRUID, the key challenge would be to formalize and approve a standard with the IETF that webmasters would then be able to implement.

A second, equally large challenge would be persuading any website to eschew cookies and stateful authentication in favour of this technique. It’s not an insurmountable task, but from an incentives-based perspective, retargeting and data-mining run directly counter to this idea.

A non-challenge is user uptake. Because DRUID is frictionless, a tech un-savvy internet user wouldn’t need to learn anything to use the new protocol.

Conclusion

The scope of this post was to introduce 3 crazy ideas to pave the way for a better internet. Stateless internet sessions, broken-down HTTP requests, and proxy-bundled requests can lead the way to a more convenient, secure, and private internet experience.

Technical details have been omitted because I do not possess the technical expertise to make recommendations on how to implement these ideas. This post merely serves as a talking point for smarter people to take these ideas and develop them further, or for me to be wildly ridiculed.

Finally, my attention span is pretty short and this short post is self-contained and includes every aspect of my thinking around these ideas. I will not be able to answer any questions or provide any additional input. If you have any questions around implementation or development of these ideas, I am certain you’re already more skilled than I am to address your own questions.

Good luck to any pioneers and I look forward to a better internet.

Tip jar

Monero: 4GdoN7NCTi8a5gZug7PrwZNKjvHFmKeV11L6pNJPgj5QNEHsN6eeX3DaAQFwZ1ufD4LYCZKArktt113W7QjWvQ7CW9NKr5o7UHwBsW31ti
