# Automated theorem provers and their role in cryptography

*By Morten Rotvold Solberg and Kristian Gjøsteen*

In the 1970s, Fields Medal winner Paul Cohen predicted that mathematicians would one day be replaced by computers, implying that all of mathematics – including the writing of proofs – could be automated (Sriraman, 2017). While arguably bold, Cohen’s prediction was not completely unjustified. Indeed, computer scientists developed the first so-called *automated theorem provers* – computer programs designed to prove mathematical theorems – as early as the 1950s.

Theorem provers have been under continuous development ever since, sparking both joy and controversy. Today we mainly distinguish between two kinds of theorem provers. The first is the fully automated theorem prover, which can write a proof of a mathematical conjecture on its own, given a certain set of axioms (or hypotheses). The other is the so-called interactive theorem prover (or proof assistant), which does not produce a proof on its own, but checks an already existing proof line by line to verify (or refute) its validity.
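To make the second kind concrete, here is a small illustration in the Lean proof assistant (Lean 4 syntax; the theorem name is our own choice). We state the familiar fact that 0 + n = n for natural numbers and give a proof by induction. Lean checks each step and rejects the proof if any step is invalid:

```lean
-- A statement about natural numbers, proved by induction on n.
-- Lean verifies every step; an invalid step would be rejected.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0 by definition
  | succ k ih => rw [Nat.add_succ, ih]  -- inductive step uses the hypothesis ih
```

The point is not that this particular theorem is hard, but that the proof is written in a fully formal language, so the computer can check it mechanically.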

Common to all theorem provers is that they speak the formal language of mathematical logic. This might be constrained to classical first-order logic, but it might also include higher-order or non-classical logics. Such formality is a strength in the sense that it leaves no room for ambiguity, which is what makes automated theorem proving possible in the first place. On the other hand, it can be very difficult for human beings to understand. You may even argue that in the time it takes to translate a conjecture into a language the computer understands, and to interpret the proof given by the computer, you could have solved the problem and written the proof yourself.

One of the main functions of a mathematical proof (in addition to showing that a statement *is* true) is to explain *why* the statement is true. Considering this together with the difficulty of interpreting computer-generated proofs, we mathematicians are not particularly worried that we will lose our jobs to computers any time soon.

However, there are situations where erroneous proofs may have potentially catastrophic consequences and where theorem provers and proof assistants have proven themselves to be valuable resources. To see why, we now move into the field of *cryptography*.

## What is cryptography?

Cryptography has been around for millennia. In a broad sense, the field tries to solve the problem of *how to communicate securely*. Traditionally, cryptography has been used by, for example, diplomats, who encrypted their messages so that spies who got hold of a message could not understand what it said. Nowadays, everyone uses cryptography all the time, often without even being aware of it. Among other things, we use cryptography when signing in to our Twitter accounts, when sending chat messages and when making online bank transactions. Techniques from cryptography can even be (and have been) employed in, for example, online auctions and large-scale political elections.

Although different cryptographic systems and protocols are used for different purposes, all protocols have one thing in common: they need to be *secure*. It could have catastrophic consequences if anyone could pretend to be your bank when you deposit money, or if ballots in an election could be tampered with without anyone noticing.

## Analysing the security of cryptographic protocols

So, any protocol we use needs to be secure, but how do we know that it is? Traditionally, the security of a cryptographic protocol has been analysed simply by trying to break it. If enough time passed without anyone breaking the protocol, it could be considered secure. Of course, this would not mean that there are no attacks, only that no attack has been found (yet). Or, worse, that someone found an attack but “forgot” to tell anyone.

This is clearly not a satisfactory approach to convincing others that your protocol is secure. Thus, nowadays, in addition to the proposed protocol, it is common to present a *security proof* – a mathematical argument showing that your protocol is indeed secure.

A security proof often reduces the security of the protocol to some underlying hard mathematical problem. One example is the well-known RSA public key encryption algorithm, whose security relies on the hardness of factoring large integers. The statement is that if there exists an algorithm able to break the security of the protocol, then it can be turned into an algorithm for solving the underlying mathematical problem. As the underlying problem is considered very hard to solve efficiently, the conclusion is that an efficient algorithm breaking the security of the protocol cannot exist.
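The connection between RSA and factoring can be seen in a toy "textbook" RSA sketch (tiny primes, no padding, and therefore completely insecure – this is purely an illustration): the secret key can be recomputed by anyone who factors the public modulus.

```python
# Toy "textbook" RSA with tiny primes, purely to illustrate why the
# security of RSA rests on factoring: whoever factors n recovers the key.
# NOT secure - real RSA uses enormous primes and careful padding.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def modinv(a, m):
    # Modular inverse of a modulo m (requires gcd(a, m) == 1).
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Key generation from two secret primes p and q.
p, q = 61, 53
n = p * q                   # public modulus, here 3233
phi = (p - 1) * (q - 1)     # computable only if you know p and q
e = 17                      # public exponent
d = modinv(e, phi)          # secret exponent

# Encrypt and decrypt a small message m < n.
m = 42
c = pow(m, e, n)            # ciphertext
assert pow(c, d, n) == m    # decryption recovers m

# The attacker's side of the story: factoring n = 3233 into 61 * 53
# lets anyone recompute phi, then d, and hence decrypt everything.
d_attacker = modinv(e, (61 - 1) * (53 - 1))
assert pow(c, d_attacker, n) == m
```

Note that the reduction in the security proof runs the other way: it shows how a hypothetical RSA-breaking algorithm could be used to attack the underlying hard problem.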

## The issues of security proofs

Although this approach gives a convincing argument that the proposed protocol is secure, there are still a couple of obstacles. One is that to even think about writing such a proof, it is crucial to have a clear definition of *what security means*. While the cryptographic community has arrived at more or less standard security definitions for public key encryption and digital signatures, this is not the case for electronic voting, for example. And coming up with good security definitions has proven to be a highly non-trivial task (as demonstrated by e.g. Bernhard et al. (2015)).

Once you have managed to adequately define what security means, you must write a security proof. Although techniques – such as game-hopping (Shoup, 2004) – have been developed to make writing (and reading!) security proofs easier, security proofs still tend to be highly complicated creatures, and they are inherently error-prone. Their complicated nature also makes security proofs difficult to audit, giving low assurance that the claimed proofs are actually correct.

And what happens if there is an error in a security proof? It might of course still be the case that the claim is true, even if the argument is erroneous. But in the worst case, a flawed security proof might mean that the protocol is completely insecure.

## Proof assistants to the rescue

And this is, of course, where proof assistants come into play. Lately, it has become common practice to not only give a security proof for a newly proposed protocol, but also to provide a *machine-checked proof*, increasing assurance that the security proof is valid.

Several proof assistants have emerged over the past decades. A notable example is *EasyCrypt* (https://www.github.com/easycrypt/easycrypt), a proof assistant designed for verifying cryptographic security proofs.

Since its birth around a decade ago, EasyCrypt has seen extensive use for basic cryptographic primitives like public key encryption and digital signature schemes. Lately, it has also proven its worth for more complicated protocols, such as TLS (which underpins the widely used HTTPS protocol for securing communication over the internet) and electronic voting protocols. Notable examples of the latter include machine-verified security proofs for Helios (Cortier et al., 2017) and Belenios (Cortier et al., 2018), two electronic voting protocols that have already seen widespread use, and a machine-verified security proof for Selene, a voting mechanism designed to make elections both private and verifiable in a user-friendly way (Dragan et al., 2022).

Circling back to Cohen’s prediction, we do not believe that our jobs will be taken over by computers any time soon. But we do see that automated theorem provers and proof assistants have become valuable resources for increasing assurance that the cryptographic protocols we use are indeed secure. However, the increased assurance does not arise solely from the computers, but from the interaction between humans defining security and writing the security proofs, and the computers verifying that the proofs are correct.

# References

Bernhard, D., Cortier, V., Galindo, D., Pereira, O., & Warinschi, B. (2015). SoK: A Comprehensive Analysis of Game-Based Ballot Privacy Definitions. *2015 IEEE Symposium on Security and Privacy* (pp. 499-516). San Jose, CA, USA: IEEE.

Cortier, V., Dragan, C., Dupressoir, F., & Warinschi, B. (2018). Machine-Checked Proofs for Electronic Voting: Privacy and Verifiability for Belenios. *2018 IEEE 31st Computer Security Foundations Symposium* (pp. 298-312). Oxford, UK: IEEE.

Cortier, V., Dragan, C., Dupressoir, F., Schmidt, B., Strub, P.-Y., & Warinschi, B. (2017). Machine-Checked Proofs of Privacy for Electronic Voting Protocols. *38th IEEE Symposium on Security and Privacy* (pp. 993-1008). San Jose, CA, USA: IEEE.

Dragan, C., Dupressoir, F., Estaji, E., Gjøsteen, K., Haines, T., Ryan, P. Y., . . . Solberg, M. R. (2022). Machine-Checked Proofs of Privacy Against Malicious Boards for Selene & Co. *2022 IEEE 35th Computer Security Foundations Symposium* (pp. 335-347). Haifa, Israel: IEEE.

Shoup, V. (2004). Sequences of Games: A Tool for Taming Complexity in Security Proofs. Cryptology ePrint Archive, Paper 2004/332, https://eprint.iacr.org/2004/332.

Sriraman, B. (Ed.). (2017). *Humanizing Mathematics and its Philosophy – Essays Celebrating the 90th Birthday of Reuben Hersh.* Springer International Publishing AG.
