Guest Post — Decrypting the Future: Programmable Cryptography And Its Role in Modern Tech
Guest Post By Felix Xu, Co-Founder and CEO of ARPA Network and Bella Protocol
As cryptographic technologies continue to advance and create new uses in our lives, the processes they carry out become increasingly complex. While a tremendous amount can be done with simple cryptographic primitives, it is what they can achieve when combined that is the most exciting.
Even more impressive is the idea that some cryptographic protocols can be designed to express arbitrary computations, much as a hardware description language does, granting them the power to tackle general-purpose challenges. This idea, fittingly called “programmable cryptography,” promises to make more complicated actions possible by, to paraphrase Brian Gu, turning the mathematical problem of designing new protocols into the programming problem of combining existing ones.
To determine how programmable cryptography can be most useful in our daily lives, we need to first understand the different layers of cryptographic application from high-level goals to low-level algorithms. Below are three considerations for doing this.
Understanding The Basics – Programmable Cryptography’s Simple Cipher Origin
As the reliance on data in our daily lives grows, new and improved methods of safeguarding it are continuously needed. It is truly staggering to think of how much information is processed online these days. More immediate to most people is how much more time they spend interacting with data now than they did even a few years ago. All of this information they produce, engage with, review, and send is at risk of being spied on, stolen, or manipulated if it is not properly protected.
This is why there is always a need for cryptography. This is why new and improved methods of keeping data private continue to be developed.
Like many other disciplines, cryptography is based on simple concepts that are scaled up as the task becomes more interesting. These concepts, referred to as “cryptographic primitives,” are often basic but can be combined to build something complex.
For example, consider one of the oldest codes, the Caesar cipher. Named after its most famous user, Julius Caesar, it produces ciphertext by shifting each letter of the original message three positions back in the alphabet. The word “the,” for instance, would be written “qeb”: each letter is replaced by the one three spots before it.
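To make the shift concrete, here is a minimal Python sketch of the cipher (the function name is illustrative, and the three-position shift matches the example above):

```python
def caesar_encrypt(text, shift=3):
    """Shift each letter back by `shift` positions, wrapping around the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            result.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return "".join(result)

print(caesar_encrypt("the"))  # qeb
```

Decryption is the same operation with the shift reversed, which is part of why the scheme is so easy to break: there are only 25 shifts to try.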
This code is simple, and on its own it is easily broken. But it can be combined with other techniques to make something stronger.
To take another example, the Vigenère cipher encodes a message using several different Caesar ciphers at once. In this system, the message is combined with a repeating key that indicates how far to shift each letter, and the shift changes from letter to letter. With the key “lemon,” the “L” tells you to shift the first letter of the message eleven places (counting “A” as a shift of zero), the “E” tells you to shift the second letter four places, and so on.
So, “apple” becomes “ltbzr.” Without access to the key, it becomes much more difficult to decode the message, though a brute-force search can still recover it given enough time. By combining existing tools in a new way, the level of security increases dramatically.
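The keyed shifting can be sketched in a few lines of Python, using the common convention that the key letter “a” means a shift of zero:

```python
def vigenere_encrypt(message, key):
    """Apply a different Caesar shift to each letter, driven by the repeating key."""
    # Sketch only: assumes the message contains nothing but letters.
    out = []
    for i, ch in enumerate(message.lower()):
        shift = ord(key.lower()[i % len(key)]) - ord("a")  # 'a' -> 0, 'b' -> 1, ...
        out.append(chr((ord(ch) - ord("a") + shift) % 26 + ord("a")))
    return "".join(out)

print(vigenere_encrypt("apple", "lemon"))  # ltbzr
```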
As you can probably guess, it is often much, much easier to combine existing ciphers such as these in new, more complex ways than it is to invent an entirely new system. Caesar died a long time ago, and we are still using his codebook.
Much of modern cryptographic technology rests on a similar foundation. Hiring a cryptographer to design a new primitive and prove it secure is time-consuming, and success is not guaranteed. By contrast, cryptographic primitives such as RSA (Rivest–Shamir–Adleman), AES (Advanced Encryption Standard), and digital signature schemes are known to work and can readily be applied to a wide range of problems. For instance, RSA is widely used for secure data transmission, while AES is a standard for encrypting sensitive data. Combined, they can provide innovative functionality and solve more complex problems than any of them could alone.
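For a small, concrete taste of composition, Python's standard library already combines a bare hash function with a secret key to produce something new, a message-authentication code. The key and message below are made up for illustration:

```python
import hashlib
import hmac

key = b"shared-secret"            # made-up key for illustration
message = b"transfer 100 tokens"  # made-up message

# HMAC composes a plain hash (SHA-256) with a secret key, yielding a capability
# the hash alone lacks: only a key holder can produce a valid tag.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True
```

A tampered message produces a completely different tag, so the composition buys integrity and authenticity from a primitive that, by itself, only fingerprints data.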
While combining simple methods together is a great way to make more complex systems, there are limitations to it. Each of these primitives is designed to be good at a particular task, and it is not uncommon that mistakes are made when combining them that leave their weaknesses exposed.
Increasing Privacy with Mid-Level Protocols
Mid-level protocols target more advanced features and functionality. Homomorphic encryption, for example, allows encrypted data to be processed without decrypting it first. Working implementations exist today, though the technology is still in its early phases; yet the concept has many obvious applications. Consider how often sensitive but useful data, such as medical records, is stolen from organizations that need access to it to help you. What if it were possible to compute on your encrypted medical information without ever decoding it?
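A toy way to see the homomorphic idea is textbook (unpadded) RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. The tiny parameters below are for illustration only and are completely insecure:

```python
# Textbook (unpadded) RSA with tiny, insecure parameters -- illustration only.
p, q = 61, 53
n = p * q                # modulus, 3233
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 6
# Multiplying ciphertexts multiplies the hidden plaintexts underneath:
product_ct = (enc(m1) * enc(m2)) % n
print(dec(product_ct))  # 42
```

Fully homomorphic schemes go much further, supporting both addition and multiplication and hence arbitrary computation on ciphertexts.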
Another approach is Multi-Party Computation (MPC), a tool that lets several parties jointly compute a common output while keeping each party's input hidden. It is often illustrated by Yao's “Millionaires' Problem.”
Imagine that there are two millionaires who want to learn which of them has more money without revealing their net worth. Using MPC, they can add their encrypted net worth to a program designed to compare the values and determine which one of them entered a larger value – all while not being able to see either of their inputs.
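A minimal sketch of the secret-sharing building block behind many MPC protocols: each party splits its input into random-looking shares, and servers compute on shares without ever seeing the inputs. (Comparing values, as the millionaires want, needs additional machinery; the sketch below computes a sum, and the net-worth figures are made up.)

```python
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo this prime

def share(value):
    """Split a value into two random-looking shares that sum to it mod PRIME."""
    r = secrets.randbelow(PRIME)
    return r, (value - r) % PRIME

# Each millionaire splits a (secret, made-up) net worth into shares.
a1, a2 = share(5_000_000)
b1, b2 = share(3_000_000)

# Two non-colluding servers each hold one share per party and add locally;
# neither server alone learns anything about the inputs.
s1 = (a1 + b1) % PRIME
s2 = (a2 + b2) % PRIME

# Recombining reveals only the agreed-upon output (here, the sum).
print((s1 + s2) % PRIME)  # 8000000
```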
Zero-Knowledge Proofs (ZKPs) are better known. They allow a prover to convince another party, called the verifier, that a statement is true without revealing anything beyond the fact that it is true. Typically, they provide this service to a single user: a person asks for a proof and receives one. There are a number of ZKP constructions, including zk-SNARKs and zk-STARKs, each with its own advantages and disadvantages.
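As an illustration of the prover/verifier idea, here is a toy Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with the Fiat–Shamir heuristic. The parameters are tiny and insecure, chosen only so the arithmetic is visible:

```python
import hashlib
import secrets

# Toy proof of knowledge of x with y = g^x (mod p); tiny, insecure parameters.
p, q, g = 467, 233, 4     # q divides p - 1; g generates the order-q subgroup

x = secrets.randbelow(q)  # the prover's secret
y = pow(g, x, p)          # the public statement: "I know log_g(y)"

# Prover: commit to randomness, derive the challenge by hashing (Fiat-Shamir),
# then respond without ever revealing x.
r = secrets.randbelow(q)
t = pow(g, r, p)
c = int(hashlib.sha256(str((t, y)).encode()).hexdigest(), 16) % q
s = (r + c * x) % q

# Verifier: checks g^s == t * y^c (mod p) using only public values (y, t, s).
print(pow(g, s, p) == (t * pow(y, c, p)) % p)  # True
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c, yet the transcript (t, c, s) reveals nothing useful about x on its own.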
As research on these advanced protocols has progressed, the focus has expanded toward developing general-purpose cryptographic protocols. These initiatives aim to prove that it’s feasible for cryptography to enable universal computation to be done securely and privately. Initially, these endeavors were purely theoretical, prioritizing feasibility over practical implementation efficiency. However, as research has deepened, cryptographers have shifted their attention toward making these concepts practically applicable. They enhance, combine, and invent new protocols and components. Often, the ultimate protocol ends up being a hybrid, leveraging the strengths of multiple approaches. For example, homomorphic encryption utilizes zero-knowledge proofs for range proofs to ensure calculations remain within a valid range. Meanwhile, MPC protocols might incorporate elements of homomorphism for executing non-linear operations.
The Future of Programmable Cryptography
Among the plethora of experimental protocols, some have edged close enough to practical utility that they are paving the way for real-world development, functioning much like compilers: they interpret high-level languages and convert them into circuits that the underlying protocols can process. Achieving this compiler-like capability, complete with support for Turing-complete computation, marks the advent of what we call programmable cryptography.
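To suggest what “compiling to circuits” means, here is a hypothetical sketch: a tiny function lowered by hand to a flat list of arithmetic gates, then evaluated. The gate format and names are invented for illustration and do not correspond to any real toolchain:

```python
# A tiny function f(x) = x*x + 1 lowered by hand to a flat list of gates over
# numbered wires, then evaluated. Gate names and format are hypothetical.

def compile_square_plus_one():
    # (op, in_a, in_b, out_wire); wire 0 carries the input x.
    return [
        ("mul", 0, 0, 1),       # wire1 = x * x
        ("const", 1, None, 2),  # wire2 = the literal 1
        ("add", 1, 2, 3),       # wire3 = wire1 + wire2
    ]

def evaluate(circuit, x):
    wires = {0: x}
    for op, a, b, out in circuit:
        if op == "mul":
            wires[out] = wires[a] * wires[b]
        elif op == "add":
            wires[out] = wires[a] + wires[b]
        elif op == "const":
            wires[out] = a  # for const gates, `a` holds the literal value
    return wires[out]       # the last gate's output is the result

print(evaluate(compile_square_plus_one(), 5))  # 26
```

Real compilers in this space perform the same lowering automatically, and the resulting gate list is what a ZKP, MPC, or homomorphic-encryption back end actually executes or proves statements about.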
Programmable cryptography is still a new concept, but one that offers the chance to make very complicated problems much simpler without the expense of creating a brand-new system for one application. This possibility alone will likely drive a great deal of interest in the field.
Perhaps the most encouraging aspect of all is that society is still in an early stage of exploring this technology's uses. Zero-knowledge proofs were devised in the 1980s, but efficient proofs for general computation only became practical around 2012. There may be many combinations of mechanisms that nobody has dreamed of yet. The next world-shaking idea could arrive tomorrow. We may not even be able to guess what it will do.