In the face of disasters both sudden and creeping brought about by climate change, I’ve been struck by the connections between the difficult future that we are facing, the tenets and technologies of decentralization, and the need for resilience in our communities and communications. There is a convincing argument to be made that our warming world demands the local-first and interconnected systems that our distributed community is developing.

Disaster Response

This past summer, Pacific Gas and Electric, which provides power to residents of California, announced that it was going to disconnect large portions of the California power grid in certain weather conditions in an effort to lower the risk of starting wildfires. PG&E tried to keep its customers informed of the latest developments through its website, but of course those without power could not access the internet (even if they still had a charged laptop or phone), and those who still had power were hitting the website so hard that it went down. By making itself the single source of information, PG&E both caused the failure of its own means of communication and deprived its customers of important news once its servers failed.

I don’t mean to pick on PG&E here; it’s simply an example of a larger pattern: the more popular and vital a given piece of information is, the more people are going to try to access it, and the more difficult it then becomes to obtain. Modern cloud companies usually try to solve this on the server end by allocating bigger network connections and more CPUs, but those solutions aren’t available to everyone, and they can take time and expertise (not to mention money) to implement. That ends up being a luxury that can’t always be relied on, particularly for minority populations in a disaster situation.

There are a couple of decentralized protocols that could have been useful to PG&E had they been in place during the blackouts this summer. Both of them, Dat and IPFS, are slightly different takes on the same concept: authenticatable peer-to-peer distribution of data. They work by breaking larger pieces of data into small chunks, each with a specific ID, called a “hash,” which is derived from the data inside the chunk using a specific algorithm. This process is called “hashing,” and it’s an extremely useful technique that plays a part in a huge number of decentralized tools. People looking to retrieve a certain piece of information (such as a copy of a website) can use any device to request the chunks they need from other peers on the network, and can verify what they receive by hashing each chunk and comparing the result to the ID they asked for. Using such a system, once a few people download the PG&E web page on ongoing outages (for example), others can start retrieving the information directly from those peers, saving the official server from being overwhelmed.
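Neither Dat nor IPFS is quite this simple in practice (both add Merkle trees, signatures, and smarter chunking on top), but a minimal Python sketch of the content-addressing idea might look like the following; the chunk size and function names here are illustrative choices, not anything taken from either protocol:

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # 256 KiB; an arbitrary size chosen for this sketch


def chunk_and_hash(data: bytes) -> dict[str, bytes]:
    """Split data into fixed-size chunks, each keyed by the SHA-256 hash of its contents."""
    chunks = {}
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        chunks[hashlib.sha256(chunk).hexdigest()] = chunk
    return chunks


def verify_chunk(chunk_id: str, chunk: bytes) -> bool:
    """Re-hash a chunk received from an untrusted peer and compare it to the ID we requested."""
    return hashlib.sha256(chunk).hexdigest() == chunk_id
```

Because the ID is derived from the content itself, it doesn’t matter which peer a chunk came from: if it hashes to the ID you asked for, it’s the data you wanted, and if it doesn’t, you throw it away and ask someone else.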

Another benefit of this type of peer-to-peer dissemination of information is that it doesn’t have to rely on the internet’s infrastructure. In a situation where the electrical grid is down, internet access is going to be spotty or non-existent as well. And, of course, the longer the situation persists, the greater the impact. Communication via the traditional internet is going to be unreliable. Fortunately, the Dat and IPFS protocols have a solution.

Both Dat and IPFS can operate without access to the global internet, sharing data between whatever peers they can find on the local network. In combination with an established local area network or an ad-hoc mesh network constructed between members of a community, these protocols can move information even when the centralized infrastructure fails. Decentralized power generation and distribution, such as from solar panels, local hydro generation, or wind generators, can keep nodes online and distributing data in a crisis.
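The real protocols handle peer discovery and data transfer with purpose-built machinery (multicast DNS on the local network, distributed hash tables, and their own replication protocols), but the shape of the idea, sketched here with a made-up send-a-hash, read-the-bytes wire format and hypothetical helper names, is roughly this:

```python
import hashlib
import socket


def request_chunk(peer_addr: str, chunk_id: str, port: int = 9876) -> bytes | None:
    """Ask one peer on the local network for a chunk: send the hash, read bytes until
    the peer closes the connection. The wire format is invented for this sketch."""
    try:
        with socket.create_connection((peer_addr, port), timeout=2) as conn:
            conn.sendall(chunk_id.encode() + b"\n")
            chunk = conn.makefile("rb").read()
    except OSError:
        return None  # peer unreachable, perhaps because the wider network is down
    # Content addressing lets us verify the data no matter who sent it.
    if hashlib.sha256(chunk).hexdigest() != chunk_id:
        return None
    return chunk


def fill_missing_chunks(wanted: list[str], local_peers: list[str]) -> dict[str, bytes]:
    """Try every reachable local peer for each chunk we still need; no central server involved."""
    recovered = {}
    for chunk_id in wanted:
        for peer in local_peers:
            chunk = request_chunk(peer, chunk_id)
            if chunk is not None:
                recovered[chunk_id] = chunk
                break
    return recovered
```

As long as at least one device within radio or cable reach ever held a copy of the data, it can keep circulating this way after the uplink to the wider internet disappears.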

These ideas have already started to crop up in real-world disaster situations. For example, we saw their use in Puerto Rico after Hurricane Maria destroyed most of the island’s infrastructure in September of 2017. While the relief efforts in the hurricane’s aftermath did not use software like Dat or IPFS, mesh networking was used to connect puertorriqueños to each other to exchange vital information.

The Slow Challenge of Rising Seas

It’s easy to think of the internet as existing in some vague “other place” that is unrelated to our physical existence. The truth, however, is that the servers, routers, and cables that make up the substrate upon which our “shared hallucination” operates are very much made of atoms. And we have recently started to grapple with the fact that a lot of that infrastructure is poorly prepared to face the trials of climate change. It is particularly susceptible to the drastic effects of storm surge from strengthened hurricanes and typhoons, as well as to gradual but consistent sea level rise. (Other climate impacts, such as wildfires, matter too, but I’m focusing here on sea levels.)

A paper presented at the 2018 convening of the Applied Networking Research Workshop by Durairajan, Barford, and Barford explored the potential impacts of climate change on the internet. “Lights Out: Climate Change Risk to Internet Infrastructure” took existing models of sea level rise and compared them to physical points of internet infrastructure gathered from the [Internet Atlas](link here). In short, a lot of the machines that move bits from point A to point B around the world are located in places that are likely to be underwater within just the next 15 years.

We already saw the leading edge of this future in 2012, when Hurricane Sandy hit New York City and caused around $70 billion worth of damage. In addition to the obvious impacts of power outages due to high winds, flooding created problems for internet connectivity. The storm caused outages at no fewer than eight major New York-based data centers and Internet Exchange Points, with problems ranging from a simple lack of fuel for emergency backup generators to flooded basements taking critical electrical systems offline. Major websites such as Gawker and Huffington Post went down, and one internet backbone carrier advised people to expect “possible routing issues to and from the U.S.” One analysis of Border Gateway Protocol (BGP) announcements estimated that around 2,500 routes were lost during the storm; around 1,000 of those came back quickly, and the rest returned over the course of three days.

Coastal areas are perfect places to build data centers and network exchange points. Submarine cables make landfall on the coast, and the tyranny of the speed of light means that networks want to connect to each other physically close to those landing sites. Some of the biggest Internet Exchange Points (IXPs) are therefore clustered near the ocean: Manhattan, Amsterdam, Miami, Sydney, Manila, Rio de Janeiro, Okinawa, and Singapore are just a few examples.

What does all of this have to do with decentralization? First of all, the longer the route your data needs to travel to reach its destination, the more likely it is to be dropped or delayed along the way by the forces of sea level rise. Two people in Lagos exchanging messages with each other will have a lot more luck if their packets stay within the city rather than taking a junket to San Jose, California and back. Local-first applications in general aim to function properly even when access to the broader internet is spotty or completely gone, and to resume sharing data with the larger network opportunistically once they can.
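A toy version of that pattern, with a placeholder endpoint standing in for whichever peer or server the application would eventually sync with, could be as small as the sketch below; real local-first software generally replicates between peers (often with CRDTs) rather than posting to a single upstream URL:

```python
import json
import urllib.request
from pathlib import Path

OUTBOX = Path("outbox.jsonl")               # local, append-only log of unsynced updates
SYNC_URL = "https://example.invalid/sync"   # placeholder for a peer or server endpoint


def record_update(update: dict) -> None:
    """Writing locally always succeeds; the network is treated as an optimization."""
    with OUTBOX.open("a") as f:
        f.write(json.dumps(update) + "\n")


def try_sync() -> bool:
    """Opportunistically push pending updates whenever a connection happens to exist."""
    if not OUTBOX.exists():
        return True
    request = urllib.request.Request(
        SYNC_URL,
        data=OUTBOX.read_bytes(),
        method="POST",
        headers={"Content-Type": "application/x-ndjson"},
    )
    try:
        urllib.request.urlopen(request, timeout=5)
    except OSError:
        return False  # still offline; keep the data and try again later
    OUTBOX.unlink()  # delivered, so the local queue can be cleared
    return True
```

The user keeps working against the local copy either way; connectivity only determines when the rest of the network finds out.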

Secondly, even if you live in the same city as a massive IXP or data center, it doesn’t do you any good if its routers are underwater or its backup generators have run out of fuel. People with solar panels, peer-to-peer or mesh radio devices, and decentralized protocols will still be able to communicate with others in their local area.