Crypto Speculator
The point of crypto is not to consolidate wealth in the current economy. It is to consolidate liquidity for the machine economy, the post-modern economy.
Transactions will happen millions of times per second and will mostly be between autonomous agents. Automated micropayments will exist as a base layer in the economy, occurring simultaneously and continuously. Money, or rather value, will transfer between nodes (human or machine) with very low friction. Ideally, goods and services in the economy will approach the most accurate prices ever. Things will be priced correctly so long as decentralized networks maintain a presence in the market. You can pay by acknowledgment. You can pay as little as you’d like for something. There is potential for all of this to happen without explicit authority. The functionality will be implicit within the global brain: the AI, the abstraction layer built on top of the human-machine interface.
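As a rough sketch of what agent-to-agent micropayments might look like, here is a toy in Python. The Agent class, the pay method, and the amounts are invented for illustration; this is not a description of any existing protocol.

```python
# Toy sketch of machine-to-machine micropayments (hypothetical, not a real protocol).
# Two autonomous agents stream tiny payments to each other as they consume services.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    balance: float = 0.0
    ledger: list = field(default_factory=list)  # local record of transfers

    def pay(self, other: "Agent", amount: float, memo: str = "") -> None:
        """Transfer an arbitrarily small amount with negligible friction."""
        self.balance -= amount
        other.balance += amount
        entry = (self.name, other.name, amount, memo)
        self.ledger.append(entry)
        other.ledger.append(entry)

sensor = Agent("sensor-node", balance=1.0)
model = Agent("inference-node", balance=1.0)

# The sensor pays a fraction of a cent per reading it has processed;
# the inference node pays a token amount simply to acknowledge receipt.
for reading in range(5):
    sensor.pay(model, 0.0001, memo=f"inference for reading {reading}")
    model.pay(sensor, 0.00001, memo="acknowledgment")

print(sensor.balance, model.balance)
print(len(sensor.ledger), "micro-transactions recorded")
```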
The key here is that “crypto” or “blockchain” as we know it today is simply a piece of the puzzle. These systems are only relevant in the context of the compute available for resource allocation (resource allocation for information processing). Greater resource allocation can be achieved in two ways: through an increase in compute (more transistors) or through better use of the compute already available. Both trends are being developed in parallel at large scale. “Blockchain(s)”, in some sense, is the killer app that helps us better use existing compute.
This space is directly relevant to software 2.0 and machine learning. They facilitate each other’s progress.
The human brain runs as a massively parallel machine. Computers run as massively serial machines. Much effort goes into organizing and actuating the massive network of compute in a parallel manner, effectively mimicking the human brain. This results in emergent phenomena. We already see this in complex engineering: we leverage the huge advantage of serial compute in tandem with our own brain’s parallel system, producing things like satellites and nanotechnology. These things might not exist with either system alone. They are interdependent, and together they create emergent phenomena. We want to have more control over the outcomes. But therein lies a threshold beyond which there is no longer control over the emergent phenomena: no individual, no group, no agent will have any ability to actuate the outcomes, in the same way that no individual neuron in our brain has any control over our behavior. Emergent AI will not feel like a species that is more intelligent than us in the way we are more intelligent than chimps; we may not even perceive it at all.
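A minimal sketch of the serial-versus-parallel contrast, using Python’s multiprocessing as a stand-in for “organizing serial compute in a parallel manner.” The workload is arbitrary and only there to show the organizational difference.

```python
# Contrast between serial and parallel execution, assuming a CPU-bound toy task.
# The numbers are arbitrary; only the organization of the work differs.

from multiprocessing import Pool

def simulate(seed: int) -> int:
    """Stand-in for an expensive, independent unit of work."""
    total = 0
    for i in range(100_000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    tasks = list(range(8))

    # Serial: one instruction stream works through the tasks in order.
    serial_results = [simulate(t) for t in tasks]

    # Parallel: the same work is spread across worker processes,
    # loosely analogous to many neurons firing at once.
    with Pool() as pool:
        parallel_results = pool.map(simulate, tasks)

    assert serial_results == parallel_results
    print("both orderings produce the same results")
```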
Computers as we know them are highly dependent on established priors. We have invented systems that work, and everything is built on top of them: the transistor, binary, logic gates, instruction sets. As you move up the abstraction layers, systems become more modular and dynamic. It is tempting to add abstraction layers and continue to build on top of existing architecture. Cryptos are largely built on top of existing architecture, something like taking a combination of programming languages to create a new language with a novel use case. Network infrastructures are organized to efficiently transfer information within these architectures. But is the underlying system the best for massive distributed compute? We may need to rethink some fundamental aspects of computer architecture to create entirely new capabilities: the transistor, the CPU, compression.
Now we have the problem of systems that are so big that no agent can explicitly describe what should happen, let alone engineer it. This is the essence of the emergence of software 2.0. It is simply too difficult to code the new systems in the traditional way. This is interesting because 2.0 (essentially deep learning) is not built at the higher levels of abstraction. It uses resources much closer to silicon and in fact does not run optimally on CPU and GPU compute as they currently exist. New hardware is being developed. Might new forms of information coding, compression, and logic be possible?
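A minimal sketch of the software 1.0 versus 2.0 distinction, assuming a toy problem: instead of writing the rule by hand, the parameters are searched for by gradient descent. The function and the data below are invented.

```python
# Toy contrast between software 1.0 and 2.0, in plain Python.
# 1.0: a human writes the rule. 2.0: the rule is learned from examples.

import random

# A hidden relationship we pretend no one knows how to write down explicitly.
def world(x: float) -> float:
    return 3.0 * x + 2.0

data = [(x, world(x)) for x in [random.uniform(-1, 1) for _ in range(200)]]

# Software 2.0: search for parameters by gradient descent instead of coding the rule.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x / len(data)
        gb += 2 * err / len(data)
    w -= lr * gw
    b -= lr * gb

print(f"learned w={w:.3f}, b={b:.3f}  (true values 3.0 and 2.0)")
```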
How do you code the global brain? The same way you might code the human brain: through learning. Each node has to be learned independently. It will not be possible to write a script that works on one node and also works on all other nodes. This is because neuron spikes do not have a consistent location in the brain from human to human, nor is their location consistent over time. The global computer will be the same way, because the network will be highly dynamic and difficult to predict, something akin to the difficulty of predicting the behavior of a gas molecule.
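A sketch of what “each node has to be learned independently” might mean in practice, under the assumption that every node’s local environment has different statistics. The node names and behaviors below are invented for illustration.

```python
# Each node fits its own small model to its own local observations,
# so no single hard-coded script would be correct across all nodes.

import random

random.seed(0)

# Each node sees a different local environment (different offset and noise level).
nodes = {f"node-{i}": (random.uniform(-5, 5), random.uniform(0.1, 1.0)) for i in range(4)}

learned = {}
for name, (offset, noise) in nodes.items():
    # Local observations that only this node can see.
    samples = [offset + random.gauss(0, noise) for _ in range(1000)]
    # "Learning" here is just estimating the local mean; any shared constant
    # written once for all nodes would be wrong for most of them.
    learned[name] = sum(samples) / len(samples)

for name, estimate in learned.items():
    print(f"{name}: learned parameter {estimate:+.2f} (true {nodes[name][0]:+.2f})")
```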
To predict a gas molecule, think of Maxwell’s demon (see my other write-up). What is the most efficient way to open and close the door? Or rather, what is the most efficient way to decide to open the door? If explicit knowledge of the molecules breaks down once the information is eventually lost, is there a way to open the door with compute alone? Can you achieve this with a purely random system, the only requirement being the information associated with knowing when the particles are isolated?
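A toy version of the demon’s decision problem, purely illustrative: the “decision to open the door” is a single comparison against the median speed, and real physics (collisions, momentum, the thermodynamic cost of the measurement itself) is ignored. Everything below is invented for illustration.

```python
# Toy Maxwell's-demon sort: the demon lets fast particles cross left-to-right
# and slow particles cross right-to-left, separating hot from cold.

import random
import statistics

random.seed(1)

# Particles start evenly mixed between two chambers, each with a random speed.
particles = [{"speed": random.expovariate(1.0), "side": random.choice("LR")}
             for _ in range(10_000)]

threshold = statistics.median(p["speed"] for p in particles)

def demon_opens_door(p) -> bool:
    """The demon's whole job: decide, per particle, whether to let it through."""
    return (p["side"] == "L" and p["speed"] > threshold) or \
           (p["side"] == "R" and p["speed"] <= threshold)

for p in particles:
    if demon_opens_door(p):
        p["side"] = "R" if p["side"] == "L" else "L"

left = [p["speed"] for p in particles if p["side"] == "L"]
right = [p["speed"] for p in particles if p["side"] == "R"]
print(f"mean speed left:  {statistics.mean(left):.2f}")
print(f"mean speed right: {statistics.mean(right):.2f}")
```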
In the context of software 2.0 and distributed compute, value is associated with finding this lowest-cost framework for information.
Distributed, peer-to-peer, encrypted blockchains may be a very good way to achieve this.