Microhistory and Macrohistory
In the bottom-up view, history is written to the ledger. If everything that happened gets faithfully recorded, history is then just the analysis of the log files. To understand this view we’ll discuss the idea of history as a trajectory. Then we’ll introduce the concepts of microhistory and macrohistory, by analogy to microeconomics and macroeconomics. Finally, we’ll unify all this with the new concept of cryptohistory.
What happens when you propel an object into the air? The first thing that comes to mind is the trajectory of a ball. Throw it and witness its arc. Just a simple parabola, an exercise in freshman physics. But there are more complicated trajectories.
- A boomerang flies forward and comes back to the origin.
- A charged particle in a constant magnetic field is subject to a force at right angles, and moves in a circle.
- A rocket with sufficient fuel can escape the earth’s atmosphere rather than coming back down.
- A curveball, subject to the Magnus effect, can twist in mid-air en route to its destination.
- A projectile launched into a sufficiently thick gelatin decelerates without ever hitting the ground.
- A powered drone can execute an arbitrarily complicated flight path, mimicking that of a bumblebee or helix.
So, how a system evolves with time — its trajectory — can be complex and counterintuitive, even for something small. This is a good analogy for history. If the flight path of a single inanimate object can be this surprising, think about the dynamics of a massive multi-agent system of highly animate people. Imagine billions of humans springing up on the map, forming clusters, careening into each other, creating more humans, and throwing off petabytes of data exhaust the whole way. That’s history.
And the timeframes involved make it tough to study. The rock you throw into the air doesn’t take decades to play out its flight path. Humans do. So a historical observer can literally die before seeing the consequences of an action.
Moreover, the subjects of the study don’t want to be studied. A mere rock isn’t a stealth bomber. It has neither the motive nor the means to deceive you about its flight path. Humans do. The people under the microscope are fogging the lens.
So: the scale is huge, the timeframe is long, and the measurements aren’t just noisy but intentionally corrupted.
We can encode all of this into a phrase: history is a cryptic epic of twisting trajectories. Cryptic, because the narrators are unreliable and often intentionally misleading. Epic, because the timescales are so long that you have to consciously sample beyond your own experience and beyond any human lifetime to see patterns. Twisting, because there are curves, cycles, collapses, and non-straightforward patterns. And trajectories, because history is ultimately about the time evolution of human beings, which maps to the physical idea of a dynamical system, of a set of particles progressing through time.
Put that together, and it wipes out both the base-rater’s view that today’s order will remain basically stable over the short-term, and the complementary view of a long-term “the arc of the moral universe is long, but it bends toward justice.” It also contests the idea that the fall of the bourgeoisie “and the victory of the proletariat are equally inevitable,” or that “no two countries on a Bitcoin standard will go to war with each other,” or even that technological progress has been rapid, so we can assume it will continue and society will not collapse.
Those phrases come from different ideologies, but each of them verbally expresses the clean parabolic arc of the rock. History isn’t really like that at all. It’s much more complicated. There are certainly trends, and those phrases do identify real trends, but there is also pushback to those trends, counterforces that arise in response to applied forces, syntheses that form from theses and antitheses, and outright collapses. Complex dynamics, in other words.
And how do we study complex dynamical systems? The first task is to measure.
Microhistory is the history of a reproducible system, one which has few enough variables that it can be reset and replayed from the beginning in a series of controlled experiments. It is history as a quantitative trajectory, history as a precise log of measurements. For example, it could be the record of all past values of a state space vector in a dynamical system, the account of all moves made by two deterministic algorithms playing chess against each other, or the chronicle of all instructions executed by a journaling file system after being restored to factory settings.
Microhistory is an applied subject, where accurate historical measurement is of direct technical and commercial importance. We can see this with technologies like the Kalman filter, which was used to guide the Apollo spacecraft during the moon landing. You can see the full technical details here, but roughly speaking the Kalman filter uses past measurements x[t−1], x[t−2], x[t−3] to inform the estimate of a system’s current state x[t], the action that should be taken u[t], and the corresponding prediction of the future state x[t+1] should that action be taken. For example, it uses past velocity, direction headings, fuel levels, and the like to recommend how a spacecraft should be steered at the current timestep. Crucially, if the microhistory is not accurate enough, if the confidence intervals around each measurement are too wide, or if (say) the velocity estimate is wrong altogether, then the Kalman filter does not work and Apollo doesn’t happen.
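To make the predict/update cycle concrete, here is a minimal one-dimensional sketch, assuming a toy stationary system with made-up noise parameters (this is an illustration of the idea, not the actual Apollo guidance code):

```python
def kalman_step(estimate, variance, measurement, process_var, meas_var):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict: the model says the state stays put, but uncertainty grows.
    variance = variance + process_var
    # Update: blend prediction and measurement, weighted by the Kalman gain.
    gain = variance / (variance + meas_var)   # how much to trust the new measurement
    estimate = estimate + gain * (measurement - estimate)
    variance = (1 - gain) * variance          # uncertainty shrinks after each update
    return estimate, variance

# Hypothetical noisy position readings of a craft actually sitting at 10.0.
readings = [10.2, 9.7, 10.1, 9.9, 10.05]
estimate, variance = 0.0, 100.0               # vague prior: we barely know where we are
for z in readings:
    estimate, variance = kalman_step(estimate, variance, z,
                                     process_var=0.01, meas_var=0.5)
print(estimate, variance)  # estimate converges near 10; variance shrinks well below 0.5
```

Note how each step literally uses the measured past to tighten the estimate of the present: an accurate microhistory (small `variance` on old measurements) is what makes the current state estimate precise.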
At a surface level, the Kalman filter resembles the kind of time series analysis that’s common in finance. The key difference is that the Kalman filter is used on reproducible systems while finance is typically a non-reproducible system. If you’re using the Kalman filter to guide a drone from point A to point B, but you have a bug in your code and the drone crashes, you can simply pick up the drone21, put it back on the launch pad at point A, and try again. Because you can repeat the experiment over and over, you can eventually get very precise measurements and a functioning guidance algorithm. That’s a reproducible system.
In finance, however, you usually can’t just keep re-running a trading algorithm that makes money and get the same result. Eventually your counterparties will adapt and get wise. A key difference relative to our drone example is the presence of animate objects (other humans) who won’t always do the same thing given the same input.22 In fact, they can often be adversarial, observing and reacting to your actions, intentionally confounding your predictions, especially if they can profit from doing so. Past performance is no guarantee of future results in finance, as opposed to physics. Unlike the situation with the drone, a market isn’t a reproducible system.
Microhistory thus has its limits, but it’s an incredibly powerful concept. If we have good enough measurements on the past, then we have a better prediction of the future in an extremely literal sense. If we have tight confidence intervals on our measurements of the past, if the probability distribution P(x[t−1]) is highly peaked, then we get correspondingly tight confidence intervals on the present P(x[t]) and the future P(x[t+1]). Conversely, the more uncertainty about your past, the more confused you are about where you’re from and where you’re going, the more likely your rocket will crash. It’s Orwell more literally than he ever expected: he who controls the past controls the future, in the direct sense that he has better control theory. Only a civilization with a strong capacity for accurate microhistory could ever make it to the moon.
This is a powerful analogy for civilization. A group of people who don’t know who they are or where they came from won’t ever make it to the moon, let alone to Mars.
Can we make it more than an analogy?
Macrohistory is the history of a non-reproducible system, one which has too many variables to easily be reset and replayed from the beginning. It is history that is not directly amenable to controlled experiment. At small scale, that’s the unpredictable flow of a turbulent fluid; at very large scale, it’s the history of humanity.
We think of macrohistory as being on a continuum with microhistory. Why? We’ll make a few points and then tie them all together.
First, science progresses by taking phenomena formerly thought of as non-reproducible (and hence unpredictable) systems, isolating the key variables, and turning them into reproducible (and hence predictable) systems. For example, Koch’s postulates include the idea of transmission pathogenesis, which turned the vague concept of infection via “miasma” into a reproducible phenomenon: expose a mouse to a specific microorganism in a laboratory setting and an infection arises, but not otherwise.
Second, and relatedly, science progresses by improved instrumentation, by better recordkeeping. Star charts enabled celestial navigation. Johann Balmer’s documentation of the exact spacing of hydrogen’s emission spectra led to quantum mechanics. Gregor Mendel’s careful counting of pea plants led to modern genetics. Things we counted as simply beyond human ken — the stars, the atom, the genome — became things humans can comprehend by simply counting.
Third, how do we even know anything about the history of ancient Rome or Egypt or Medieval Europe? From artifacts and written records. Thousands of years ago, people were scratching customer reviews into a stone tablet, one of the first tablet-based apps. We know who Abelard and Heloise were from their letters to each other. We know what the Romans were like from what they recorded. To a significant extent, what we know about history is what we’ve recovered from what people wrote down.
Fourth, today, we have digital documentation on an unprecedented scale. We have billions of people using social media each day for almost a decade now. We also have billions of phones taking daily photographs and videos. We have countless data feeds of instruments. And we have massive hard drives to store it all. So, if reckoned on the basis of raw bytes, we likely record more information in a day than all of humanity recorded up to the year 1900. It is by far the most comprehensive log of human activity we’ve ever had.
We can now see the continuum23 between macrohistory and microhistory. We are collecting the kinds of precise, quantitative, microhistorical measurements that typically led to the emergence of a new science…but at the scale of billions of people, and going into our second decade.
So, another term for “Big Data” should be “Big History.” All data is a record of past events, sometimes the immediate past, sometimes the past of months or years ago, sometimes (in the case of Google Books or the Digital Michelangelo project) the past of decades or centuries ago. After all, what’s another word for data storage in a computer? Memory. Memory, as in the sense of human memory, and as in the sense of history.
That memory is commercially valuable. A technologist who neglects history ensures their users will get exploited. Proof? Consider reputation systems. Any scaled marketplace has them. The history of an Uber driver or rider’s on-platform behavior partially predicts their future behavior. Without years of star ratings, without memories of past actions of millions of people, these platforms would be wrecked by fraud. Macrohistory makes money.
This is just one example. There are huge short and long-term incentives to record all this data, all this microhistory and macrohistory. And future historians24 will study our digital log to understand what we were like as a civilization.
There are some catches to the concept of digital macrohistory, though: silos, bots, censors, and fakes. As we’ll show, Bitcoin and its generalizations provide a powerful way to solve these issues.
First, let’s understand the problems of silos, bots, censors, and fakes. The macrohistorical log is largely siloed across different corporate servers, on the premises of Twitter and Facebook and Google. The posts are typically not digitally signed or cryptographically timestamped, so much of the content is (or could be) from bots rather than humans. Inconvenient digital history can be deleted by putting sufficient pressure on centralized social media companies or academic publishers, censoring true information in the name of taking down “disinformation,” as we’ve already seen. And the advent of AI allows highly realistic fakes of the past and present to be generated. If we’re not careful, we could drown in fake data.
So, how could someone in the future (or even the present) know if a particular event they didn’t directly observe was real? The Bitcoin blockchain gives one answer. It is the most rigorous form of history yet known to man, a history that is technically and economically resistant to revision. Thanks to a combination of cryptographic primitives and financial incentives, it is very challenging to falsify the who, what, and when of transactions written to the Bitcoin blockchain.
Who initiated this transfer, what amount of Bitcoin did they send, what metadata did they attach to the transaction, and when did they send it? That information is recorded in the blockchain and sufficient to give a bare-bones history of the entire Bitcoin economy since 2009. And if you sum up that entire history to the present day, you also get the values of how much BTC is held by each address. It’s an immediatist model of history, where the past is not even past: it’s with us at every second.
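That “sum up the history” step can be shown in a few lines. This is a hypothetical account-based sketch with invented names and amounts (real Bitcoin tracks unspent outputs rather than account balances), but the principle is identical: the current state is just the replay of the full log.

```python
from collections import defaultdict

SATOSHI = 100_000_000  # one hundred millionth of a Bitcoin, the smallest unit

# Hypothetical transaction log: (sender, receiver, amount). "COINBASE" mints new coins.
ledger = [
    ("COINBASE", "alice", 50 * SATOSHI),
    ("alice",    "bob",   20 * SATOSHI),
    ("COINBASE", "bob",   50 * SATOSHI),
    ("bob",      "carol", 10 * SATOSHI),
]

def replay(ledger):
    """Derive every address's current balance by summing the entire history."""
    balances = defaultdict(int)
    for sender, receiver, amount in ledger:
        if sender != "COINBASE":
            assert balances[sender] >= amount, "can't spend coins you don't have"
            balances[sender] -= amount
        balances[receiver] += amount
    return dict(balances)

print(replay(ledger))  # alice ends with 30 BTC, bob with 60, carol with 10 (in satoshis)
```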
In a little more detail, why is the Bitcoin blockchain so resistant to the rewriting of history? To falsify the “who” of a single transaction you’d need to fake a digital signature, to falsify the “what” you’d need to break a hash function, to falsify the “when” you’d need to corrupt a timestamp, and you’d need to do this while somehow not breaking all the other records cryptographically connected to that transaction through the mechanism of composed block headers.
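The “composed block headers” mechanism can be sketched with a toy hash chain. This is an illustrative model only: real Bitcoin headers also carry Merkle roots, timestamps, and proof-of-work, and transactions are signed. But it shows why editing one old record invalidates every record after it.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so each block commits to the entire history before it.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64
    for height, data in enumerate(records):
        block = {"height": height, "data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["alice pays bob 1", "bob pays carol 2"])
assert verify(chain)
chain[0]["data"] = "alice pays mallory 1"   # rewrite history...
assert not verify(chain)                     # ...and every later link breaks
```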
Some call the Bitcoin blockchain a timechain, because unlike many other blockchains, its proof-of-work mechanism and difficulty adjustment ensure a statistically regular time interval between blocks, crucial to its function as a digital history.
(I recognize that these concepts and some of what follows is technical. Our whirlwind tour may provoke either familiar head-nodding or confused head-scratching. If you want more detail, we’ve linked definitions of each term, but fully explaining them is beyond the scope of this work. However, see The Truth Machine for a popular treatment and Dan Boneh’s Cryptography course for technical detail.)
Nevertheless, here’s the point for even a nontechnical reader: the Bitcoin blockchain gives a history that’s hard to falsify. Unless there’s an advance in quantum computing, a breakthrough in pure math, a heretofore unseen bug in the code, or a highly expensive 51% attack that probably only China could muster, it is essentially infeasible to rewrite the history of the Bitcoin blockchain — or anything written to it. And even if such an event does happen, it wouldn’t be an instantaneous burning of Bitcoin’s Library of Alexandria. The hash function could be replaced with a quantum-safe version, or another chain robust to said attack could take Bitcoin’s place, and back up the ledger of all historical Bitcoin transactions to a new protocol.
With that said, we are not arguing that Bitcoin is infallible. We are arguing that it is the best technology yet invented for recording human history. And if the concept of cryptocurrency can endure past the invention of quantum decryption, we will likely think of the beginning of cryptographically verifiable history as on par with the beginning of written history millennia ago. Future societies may think of the year 2022 AD as the year 13 AS, with “After Satoshi” as the new “Anno Domini,” and the block clock as the new universal time.
For the price of a single transaction, the Bitcoin blockchain can be generalized to provide a cryptographically verifiable record of any historical event, a proof-of-existence.
For example, perhaps there is some off-chain event of significant importance that you want to put on the record. Suppose it’s the famous photo of Stalin with his cronies, because you anticipate the rewriting of history. The proof-of-existence technique we’re about to describe wouldn’t directly be able to prove the data of the file was real, but it would establish the metadata of the file — the who, what, and when — to a future observer.
Specifically, given a proof-of-existence, a future observer would be able to confirm that a given digital signature (who) put a given hash of a photo (what) on chain at a given time (when). That future observer might well suspect the photo could still be fake, but they’d know it’d have to be faked at that precise time by the party controlling that wallet. And the evidence would be on-chain years before the airbrushed official photo of Stalin was released. That’s implausible under many models. Who’d fake something so specific years in advance? It’d be more likely the official photo was fake than the proof-of-existence.
So, let’s suppose that this limited level of proof was worth it to you. You are willing to pay such that future generations can see an indelible record of a bit of history. How would you get that proof onto the Bitcoin blockchain?
The way you’d do this is by organizing your arbitrarily large external dataset (a photo, or something much larger than that) into a Merkle tree, calculating a string of fixed length called a Merkle root, and then writing that root to the Bitcoin blockchain in a single transaction. This furnishes a tool for proof-of-existence for any digital file.
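A minimal sketch of that Merkle construction, using single SHA-256 for simplicity (Bitcoin itself uses double SHA-256) and the Bitcoin-style convention of duplicating the last node on odd-sized levels; the chunk names are illustrative:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Pairwise-hash a list of data chunks up to a single 32-byte root."""
    level = [sha256(chunk) for chunk in chunks]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Split an arbitrarily large dataset into chunks; only the 32-byte root goes on chain.
chunks = [b"photo bytes, part 1", b"photo bytes, part 2", b"photo bytes, part 3"]
root = merkle_root(chunks)
print(root.hex())  # a fixed-length commitment to the entire dataset
```

Because the root deterministically commits to every chunk, changing even one byte of the dataset yields a different root, which is what lets a future observer check a file against the on-chain record.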
You can do this as a one-off for a single piece of data, or as a periodic backup for any non-Bitcoin chain. So you could, in theory, put a digital summary of many gigabytes of data from another chain on the Bitcoin blockchain every ten minutes for the price of a single BTC transaction, thereby proving it existed. This would effectively “back up” this other blockchain and give it some of the irreversibility properties of Bitcoin. Call this kind of chain a subchain.
By analogy to the industrial use of gold, this type of “industrial” use case of a Bitcoin transaction may turn out to be quite important. A subchain with many millions of off-Bitcoin transactions every ten minutes could likely generate enough economic activity to easily pay for a single Bitcoin transaction.25
And as more people try to use the Bitcoin blockchain, given its capacity limits, it might turn out that only industrial use cases like this could afford to pay sufficient fees in this manner, as direct individual use of the Bitcoin blockchain could become expensive.
So, that means we can use the proof-of-existence technique to log arbitrary data to the Bitcoin blockchain, including data from other chains.
We just zoomed in to detail how you’d log a single transaction to the Bitcoin blockchain to prove any given historical event happened. Now let’s zoom out.
As noted, the full scope of what the Bitcoin blockchain represents is nothing less than the history of an entire economy. Every transaction is recorded since t=0. Every fraction of a BTC is accounted for, down to one hundred millionth of a Bitcoin. Nothing is lost.
Except, of course, for all the off-chain data that accompanies a transaction - like the identity of the sender and receiver, the reason for their transaction, the SKU of any goods sold, and so on. There are usually good reasons for these things to remain private, or partially private, so you might think this is a feature.
The problem is that Bitcoin’s design is a bit of a tweener: it publishes transaction data without actually ensuring that the parties behind those transactions stay private. Indeed, there are companies like Elliptic and Chainalysis devoted entirely to the deanonymization of public Bitcoin addresses and transactions. The right model of the history of the Bitcoin economy is that it’s in a hybrid state, where the public has access to the raw transaction data, but private actors (like Chainalysis and Elliptic) have access to much more information and can deanonymize many transactions.
Moreover, Bitcoin can only execute Bitcoin transactions, rather than all the other kinds of digital operations you could facilitate with more blockspace. But people are working on all of this.
- Zero-knowledge technologies like ZCash, Ironfish, and Tornado Cash allow on-chain attestation of exactly what people want to make public and nothing more.
- Smart contract chains like Ethereum and Solana extend the capability of what can be done on chain, at the expense of higher complexity.
- Decentralized social networks like Mirror and DeSo put social events on chain alongside financial transactions.
- Naming systems like the Ethereum Name Service (ENS) and Solana Name Service (SNS) attach identity to on-chain transactions.
- Incorporation systems allow the on-chain representation of corporate abstractions above the level of a mere transaction, like financial statements or even full programmable company-equivalents like DAOs.
- New proof techniques like proof-of-solvency and proof-of-location extend the set of things one can cryptographically prove on chain from the basic who/what/when of Bitcoin.
- Cryptocredentials, Non-Fungible Tokens (NFTs), Non-Transferable Fungibles (NTFs), and Soulbounds allow the representation of non-financial data on chain, like diplomas or endorsements.
This is a breakthrough in digital macrohistory that addresses the issues of silos, bots, censors, and fakes. Public blockchains aren’t siloed in corporations, but publicly accessible. They provide new tools, like staking and ENS-style identity, that allow separation of bots from humans. They can incorporate many different proof techniques, including proof-of-existence and more, to address the problem of deepfakes. And they can have very strong levels of censorship resistance by paying transaction fees to hash their chain state to the Bitcoin blockchain.
We can now see how the expansion of blockspace is on track to give us a cryptographically verifiable macrohistory, or cryptohistory for short.
This is the log of everything that billions of people choose to make public: every decentralized tweet, every public donation, every birth and death certificate, every marriage and citizenship record, every crypto domain registration, every merger and acquisition of an on-chain entity, every financial statement, every public record — all digitally signed, timestamped, and hashed in freely available public ledgers.26
The thing is, essentially all of human behavior has a digital component now. Every purchase and communication, every ride in an Uber, every swipe of a keycard, and every step with a Fitbit — all of that produces digital artifacts.
So, in theory you could eventually download the public blockchain of a network state to replay the entire cryptographically verified history of a community.25 That’s the future of public records, a concept that is to the paper-based system of the legacy state what paper records were to oral records.
It’s also a vision for what macrohistory will become. Not a scattered letter from an Abelard here and a stone tablet from an Egyptian there. But a full log, a cryptohistory. The unification of microhistory and macrohistory in one giant cryptographically verifiable dataset. We call this indelible, computable, digital, authenticatable history the ledger of record.
This concept is foundational to the network state. And it can be used for good or ill. In decentralized form, the ledger of record allows an individual to resist the Stalinist rewriting of the past. It is the ultimate expression of the bottom-up view of history as what’s written to the ledger. But you can also imagine a bastardized form, where the cryptographic checks are removed, the read/write access is centralized, and the idea of a total digital history is used by a state to create an NSA/China-like system of inescapable, lifelong surveillance.27
This in turn leads us to a top-down view of history, the future trajectory we want to avoid, where political power is used to defeat technological truth.