Contextualizing Cryptography
What Is Cryptography For?
A few years ago, I received an email from a then-PhD student named Alishah Chator. I had just given my Crypto for the People talk at CRYPTO 2020, and Alishah wrote to say something I had felt throughout my career:
While I enjoy...academic research, there...are so many hoops to jump through for the sake of personal advancement rather than actual real-world impact. On reflection, I am not sure if I can justify academia as a path for myself unless I know that there is a meaningful service I am providing…
I received a lot of emails after that talk. The reason this one stood out was that I could really relate to it. It expressed exactly how I have felt throughout my career in cryptography and computer science. But, with the exception of some of my own PhD students, this was the first time someone else had expressed it as well.
Alishah and I met and talked, and during our discussion, we decided that a useful first step towards addressing this malaise would be to find and gather other folks who might feel similarly. We reached out to Leah Rosenbloom and Lucy Qin, both of whom I knew had always felt this way, and together we cobbled together the Community-Driven Cryptography Project. Eventually, this led to the ReCAP workshop (RE-imagining Cryptography And Privacy), organized by Alishah, Leah, and Lucy. From that point on, they did all the work, and they should really be given credit for their effort. If you attended ReCAP or watched the talks online and got something from it, please send them a thank-you email. Remember that at the time they organized the workshop they were PhD students who were also trying to finish their theses, defend, publish, and find postdocs.
When I was later asked to give a talk at ReCAP reflecting on the original Crypto for the People talk, I kept coming back to that common feeling of malaise. It was the thing that brought us together in the first place, and I believe it is what brought many of the attendees of ReCAP together. What I wanted to understand was: why do we feel this way?
Three Goals of Research
After thinking about it, I think the issue comes from the combination of two things. The first is that cryptography research is decontextualized. It is decontextualized from morality, as Rogaway argues in his paper The Moral Character of Cryptographic Work, but also from applications, from society and from our experiences as human beings. The second is that most of us never discuss what we think research is actually for.
People have a variety of reasons for wanting to do research and for choosing the problems they work on. But for the purpose of this discussion, I will identify three broad goals that tend to drive research:
research for problems: the goal is to find and solve challenging, well-defined problems. The driving factor is intellectual challenge and work is evaluated by the difficulty of the problem being solved (think Erdős).
research for understanding: the goal is to understand some aspect of the world more deeply. The driving factor is explanatory power and research is the natural byproduct of that pursuit (think Shannon).
research for impact: the goal is to change the world in some particular way. The driving factor is real-world effect and work is evaluated by the difference it makes (think Dennis Ritchie).
Any given researcher might pursue several of these goals at once or shift between them over the course of a career. This is a simplification, but it is a useful one.
These are not just different emphases. They lead to different technical choices, different evaluation criteria, and different papers. A researcher driven by impact might spend months understanding the operational environment of a system before writing a line of code or a theorem statement. A researcher driven by problems might never need to do that at all.
The Silent Shift
What surprises me is that this is rarely, if ever, discussed explicitly. Isn’t one’s view on the fundamental purpose of research really important? When was the last time you had a conversation with your colleagues about what you and they think the purpose of research is? When have you discussed this in a PhD-level course or seminar? When was the last time you discussed this with your advisor? My guess is that for most people reading this, the answer to all of these questions is never.
One reason for this is that research-for-problems is the dominant goal in computer science. It’s not hard to imagine why, given how most educational systems are structured and the type of students who tend to do well in these systems.
But there is a less obvious reason. Even if your goals were originally oriented toward understanding or impact, at some point they likely shifted toward problem-solving. This could be because of an advisor or, more likely, just peer pressure. This shift happened silently. It wasn’t the result of an explicit conversation or debate. After carefully studying the papers accepted into top conferences in your field, and after listening to keynotes and distinguished talks, you probably had an internal monologue that went something like:
I should work on “real” research; on “deep” and “important” problems!
where “real,” “deep,” and “important” are essentially defined by who else worked on the problem and tried to solve it.
Decontextualized Research
What do I mean by decontextualized research? I mean research that has been stripped of the context that motivated it: research that says nothing about who it is for, what system it will live in, what power dynamics it engages with, or what happens to real people when it works or when it fails. A decontextualized problem is one where all of that has been removed, and what you are left with is a clean mathematical or computational challenge. Contextualized research, by contrast, keeps that information in the frame. It treats the people, the systems, and the power dynamics not as a sentence in the introduction but as constraints that shape the technical work itself.
This is not an abstract philosophical discussion. How we think about this determines who we hire, fund, promote and ultimately what our research contributes to the world. My claim is that research-for-problems is disproportionately dominant in computer science, either by nature or by peer pressure, and that decontextualization is its natural companion.
If your primary goal is problem-solving, then decontextualized research is perfectly fine. It is a feature, not a bug, because what you want is a clean, well-defined, and challenging problem you can tackle.
But if your goals are oriented toward understanding, or especially toward impact, then decontextualized research rubs you the wrong way. It leaves you unsatisfied, and you might even wonder why you are doing this work and whether you made a mistake in doing research in the first place. If your goals were originally about understanding or impact but gradually shifted toward problem-solving, you might even start to believe you are not cut out for research. Without wanting to speak for him---though I guess that is exactly what I am doing---I think this is the frame of mind Alishah was in when he emailed me. I think you can “hear” it in his email.
Let me clarify what I am not saying. I’m not saying research-for-problems is wrong or less important. I’m not saying every paper needs a policy section or a societal impact statement. I’m saying that when problem-solving is the only recognized goal, people whose work is driven by understanding or impact learn to hide what actually motivates them. And when that happens, the field loses something it doesn’t know how to measure.
What Contextualization Looks Like
Contextualization is not a vague call to “think about society”. It has concrete consequences for how research is done.
Consider adversarial models. These are not purely abstract constructs. They encode assumptions about who the adversary is, what resources they have, and what they are trying to accomplish, and all of these assumptions depend on context. At ReCAP 2024, Daniel Kahn Gillmor gave a talk on digital credential systems that illustrated this well: he highlighted which party these systems assume is the adversary and showed how that assumption reveals the power dynamics at play.
More broadly, context is what connects a field to the reasons it exists. The Cypherpunks were motivated by a contextualized vision of privacy and individual autonomy, and that vision led to systems like Tor and Signal, which have had enormous real-world impact. If cryptography is, as Rogaway argues, a field that rearranges power, then understanding the contexts in which that rearrangement happens is not optional. It is fundamental to doing the work well.
And this dynamic is not unique to cryptography. Machine learning was for many years a relatively self-contained field driven largely by its own internal problems. But the recent wave of large-scale deployment forced a recontextualization as questions of fairness, accountability and societal impact became impossible to ignore. Contextualization will happen eventually, one way or another. The question is whether a field chooses to engage with it proactively and on its own terms.
What This Means
ReCAP is one attempt to engage proactively. It is a space where researchers can present work that takes seriously the social, political, and human dimensions of cryptography and privacy, where the goals of understanding and impact are valued alongside problem-solving, and where the conversations that rarely happen in mainstream venues can take place openly.
But the deeper point is not about any single workshop. It is about what happens when a field’s dominant reward structure is misaligned with the goals of a fraction of its researchers. Those researchers don’t leave because they lack talent. They leave---or worse, stay and slowly stop caring---because the thing that drew them to research in the first place has no name in the field’s vocabulary. The taxonomy I’ve sketched here is an attempt to give it a name. Not so that we can rank the goals but so that we can have conversations about them that we don’t usually have.