Hello and welcome to my portfolio. I am currently a fifth year PhD student in Computer Science at Cornell Tech, where I am advised by Rafael Pass and Joe Halpern. Previously, I completed my M.S. in Computer Science at the University of Washington under Anna Karlin’s supervision.
My primary research interest is solving incentive problems in blockchain systems. For example, I’ve worked on pricing lending pool loans fairly and on analyzing transaction fee mechanisms in terms of their social welfare. I’m comfortable both with proof-heavy approaches and with using simulations and empirical data to analyze mathematical models. Currently, much of my work blends incentive analysis with consensus, including my projects on timing games and multiple concurrent proposers (see below).
More broadly, I’m interested in the intersection of economics and computer science. I enjoy creating compelling, realistic mathematical models of behavior and rigorously investigating their outcomes. My Master’s thesis considered a model of how agents procrastinate when faced with a difficult task, and showed how competition can alleviate the harms of procrastination. I’ve also worked elsewhere in theoretical computer science, investigating strong privacy definitions that protect correlated data, as well as coordination games on large social networks. In the past, I’ve worked on tech policy.
In addition to my CS interests, I’m also interested in epistemology and the philosophy of statistics. My main goal is to provide an “ethics-first” approach to epistemology, grounding concepts like knowledge, evidence, and explanation in terms of obligations. As an example, we defined “evidence” in terms of what you have a duty to disclose. We showed that this definition is equivalent to several existing theories of statistical evidence, even in settings with only weak, qualitative requirements on rationality. This suggests that existing theories of statistical evidence may have foundational links to ethics.
M.S. in Computer Science, 2020
University of Washington
B.S. in Computer Engineering, 2019
University of Washington
Collective Investment Algorithms (CoinAlgs) are increasingly popular systems that deploy shared trading strategies for investor communities. Their goal is to democratize sophisticated – often AI-based – investing tools. We identify and demonstrate a fundamental profitability-fairness tradeoff in CoinAlgs that we call the CoinAlg Bind: CoinAlgs cannot ensure economic fairness without losing profit to arbitrage. We present a formal model of CoinAlgs, with definitions of privacy (incomplete algorithm disclosure) and economic fairness (value extraction by an adversarial insider).
We prove two complementary results that together demonstrate the CoinAlg Bind. First, privacy in a CoinAlg is a precondition for insider attacks on economic fairness. Conversely, in a game-theoretic model, lack of privacy, i.e., transparency, enables arbitrageurs to erode the profitability of a CoinAlg. Using data from Uniswap, a decentralized exchange, we empirically study both sides of the CoinAlg Bind. We quantify the impact of arbitrage against transparent CoinAlgs. We show the risks posed by a private CoinAlg: even low-bandwidth covert-channel information leakage enables unfair value extraction.
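The transparency side of the tradeoff can be illustrated with a toy constant-product pool in the style of Uniswap v2 ($x \cdot y = k$ with a 0.3% fee): if an arbitrageur can predict a transparent CoinAlg’s next trade, a front-run/back-run sandwich shifts value from the CoinAlg to the attacker. This is a minimal sketch with illustrative numbers, not the paper’s model:

```python
def swap_x_for_y(x_res, y_res, dx, fee=0.003):
    """Constant-product swap (x * y = k): sell dx of X, receive dy of Y."""
    dx_eff = dx * (1 - fee)
    dy = y_res * dx_eff / (x_res + dx_eff)
    return x_res + dx, y_res - dy, dy

def swap_y_for_x(x_res, y_res, dy, fee=0.003):
    """Constant-product swap in the other direction: sell dy of Y for X."""
    dy_eff = dy * (1 - fee)
    dx = x_res * dy_eff / (y_res + dy_eff)
    return x_res - dx, y_res + dy, dx

# Pool reserves and the CoinAlg's predictable next trade (illustrative numbers).
X0, Y0 = 1000.0, 1000.0
TRADE = 50.0  # the CoinAlg will sell 50 X

# Honest execution, no arbitrageur present.
_, _, honest_out = swap_x_for_y(X0, Y0, TRADE)

# Sandwich: the arbitrageur front-runs the predicted trade, lets the
# CoinAlg execute at a worse price, then immediately trades back.
x1, y1, atk_y = swap_x_for_y(X0, Y0, TRADE)       # attacker front-run
x2, y2, victim_out = swap_x_for_y(x1, y1, TRADE)  # CoinAlg trades at worse price
_, _, atk_back = swap_y_for_x(x2, y2, atk_y)      # attacker back-run

print(f"CoinAlg receives {victim_out:.2f} Y instead of {honest_out:.2f} Y")
print(f"Attacker profit: {atk_back - TRADE:.2f} X")
```

Even with fees, the attacker profits and the CoinAlg’s execution degrades, which is the profitability erosion that transparency enables.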
Block proposers can extract more MEV by proposing later within their window, as more transactions appear and markets diverge. This gives rise to timing games, where block proposers try to wait as long as possible before proposing. On Ethereum, this results in a small number of missed blocks. But in a responsive protocol, the round durations are dynamic, with the new round beginning as soon as quorum is achieved on the current block. Timing games are much more problematic here, causing everyone to wait as long as possible to propose, thus defeating responsiveness.
We develop a model of timing games that reveals a prisoner’s dilemma structure. Cooperation (proposing promptly) is in the validators’ collective interest, since the overall system runs faster, increasing their rate of reward. But individual incentives push validators to delay their proposals selfishly. This motivates our voting approach, which allows validators to coordinate on the better equilibrium.
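To make the dilemma concrete, here is a minimal two-validator payoff sketch; the reward-rate numbers are purely illustrative and not derived from our model:

```python
# Payoff matrix for (row, col) strategies. "prompt" = propose immediately,
# "delay" = wait to capture extra MEV. Numbers are illustrative reward rates.
payoffs = {
    ("prompt", "prompt"): (3, 3),  # system runs fast: high reward rate for both
    ("prompt", "delay"):  (1, 4),  # the delayer captures extra MEV
    ("delay",  "prompt"): (4, 1),
    ("delay",  "delay"):  (2, 2),  # everyone waits: the chain slows down
}

def best_response(opponent):
    """Row player's best response to a fixed opponent strategy."""
    return max(["prompt", "delay"], key=lambda s: payoffs[(s, opponent)][0])

# Delaying is a dominant strategy against either opponent behavior...
assert best_response("prompt") == "delay"
assert best_response("delay") == "delay"
# ...yet mutual delay is Pareto-dominated by mutual promptness.
assert payoffs[("prompt", "prompt")][0] > payoffs[("delay", "delay")][0]
```

This is exactly the prisoner’s dilemma shape: the individually rational outcome (both delay) is worse for everyone than the cooperative one, which is why a coordination device like voting helps.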
We also analyze latency fairness. Ideally, everyone’s average rewards should be directly proportional to their stake. In reality, having lower latency to the rest of the network means you can potentially wait longer, or include more transactions in the same time period. This problem exists even with static block rewards, but does get worse for our dynamic block rewards. However, our analysis of real-world data suggests that the fairness degradation is very small.
Lending pool loans are overcollateralized, and if the value of the collateral dips too low, the entire loan is liquidated: the borrower can no longer repay the loan to claim their collateral, which is instead sold off at a discount. Borrowers have the option to “top up” their loans at any time by depositing more collateral, which they retrieve when they repay the loan.
We first use the famous Black-Scholes model, which assumes that the price of the asset (i.e., the crypto) follows a geometric Brownian motion, that the market is frictionless, and that trading is continuous. If we simplify lending pools by ignoring borrowers’ ability to top up their loans, we can replicate lending pool loans via (down-and-out) barrier options, which are options that are called off when the price of the asset dips too low. This gives us a simple model for pricing the interest rates and collateralization parameters of lending pools.
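Valuing such a down-and-out claim is easy to check with a short Monte Carlo sketch under the same GBM assumption: simulate collateral paths, and pay out a discounted recovery whenever the price crosses the barrier. All parameters below (barrier level, haircut, volatility) are illustrative, not taken from the paper:

```python
import math
import random

def loan_value_mc(s0=1.0, barrier=0.8, discount=0.9, mu=0.0, sigma=0.5,
                  T=1.0, steps=100, n_paths=5000, seed=0):
    """Monte Carlo value of the lender's claim on GBM collateral.

    If the price ever falls to `barrier`, the position is liquidated and
    the claim recovers barrier * discount (a fire-sale haircut); otherwise
    the collateral is worth S_T at maturity.
    """
    rng = random.Random(seed)
    dt = T / steps
    total = 0.0
    for _ in range(n_paths):
        s = s0
        knocked_out = False
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)
            # Exact GBM step: S *= exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
            s *= math.exp((mu - 0.5 * sigma**2) * dt
                          + sigma * math.sqrt(dt) * z)
            if s <= barrier:
                knocked_out = True
                break
        total += barrier * discount if knocked_out else s
    return total / n_paths

value = loan_value_mc()
print(f"Expected collateral value: {value:.3f}")
```

With zero drift the undiscounted collateral is worth $s_0$ on average, so the gap between $s_0$ and the simulated value isolates the expected liquidation haircut, which the interest rate must compensate for.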
However, modeling top-ups requires us to deviate from the Black-Scholes model. In particular, top-ups are trivial if there is no discount factor. So, we augment the Black-Scholes model to allow for discounting, and implement simulations to price interest rates with top-ups. To do so, we consider loans that borrowers are allowed to top up a finite number of times. We show that such loans can be viewed as nested barrier options: topping up $k$ times is a barrier option with a rebate equal to the value of being allowed to top up only $k-1$ times. We show, via simulations, that the values of these limited top-up options converge quickly.
Finally, we study lending pools in practice, and show the discount rates induced by their interest rates, under our model.
We build on a model originally proposed by Jon Kleinberg and Sigal Oren [KO14]. In their model, tasks are represented by directed acyclic graphs with a designated start node $s$ and end node $t$. The agent traverses a shortest $s\to t$ path with one twist: when evaluating the cost of a path, they multiply the cost of the first edge by their bias parameter, $b$. They traverse the first step of this biased path, and then recompute the best path. Existing results show that these biased agents can take exponentially more costly paths through a given graph.
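These re-planning dynamics are simple to simulate: at each node, the biased agent picks the successor minimizing $b$ times the immediate edge cost plus the unbiased shortest-path cost of the remainder, takes one step, and repeats. A minimal sketch on a tiny hypothetical graph (not an example from the paper):

```python
import heapq

def shortest_costs(graph, t):
    """Unbiased shortest-path cost from every node to t (Dijkstra, reversed)."""
    rev = {}
    for u, edges in graph.items():
        for v, w in edges:
            rev.setdefault(v, []).append((u, w))
    dist = {t: 0.0}
    pq = [(0.0, t)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in rev.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def biased_traversal_cost(graph, s, t, b):
    """Total cost paid by an agent with bias b who re-plans at every node."""
    dist = shortest_costs(graph, t)
    cost, u = 0.0, s
    while u != t:
        # Perceived cost of stepping to v: first edge inflated by b,
        # remainder evaluated without bias.
        v, w = min(graph[u], key=lambda e: b * e[1] + dist[e[0]])
        cost += w
        u = v
    return cost

# A "cheap now, expensive later" detour versus a steady direct path.
graph = {
    "s": [("a", 1.0), ("t", 4.0)],
    "a": [("t", 8.0)],
    "t": [],
}
print(biased_traversal_cost(graph, "s", "t", b=1.0))  # unbiased agent pays 4.0
print(biased_traversal_cost(graph, "s", "t", b=3.0))  # biased agent pays 9.0
```

Even on this three-node graph, the biased agent takes the seemingly easy first edge and ends up paying more than twice the optimal cost; chaining such gadgets yields the exponential blowups mentioned above.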
In our model, the task designer can chunk edges into smaller pieces. We show that, for edges along the shortest path, the optimal way to chunk an edge is to make initial pieces easier and later pieces progressively harder. For edges not along the shortest path, optimal chunking is significantly more complex, but we provide an efficient algorithm. We also show that with a linear number of chunks on each edge, the biased agent’s cost drops from exponentially worse to within a constant factor of the true cheapest path. Finally, we show how to optimally chunk task graphs for multiple types of agents simultaneously.
We build on a model originally proposed by Jon Kleinberg and Sigal Oren [KO14]. In their model, tasks are represented by directed acyclic graphs with a designated start node $s$ and end node $t$. The agent traverses a shortest $s\to t$ path with one twist: when evaluating the cost of a path, they multiply the cost of the first edge by their bias parameter, $b$. They traverse the first step of this biased path, and then recompute the best path. Existing results show that these biased agents can take exponentially more costly paths through a given graph.
We augment this model by having two agents compete to finish the task first, with the winner getting a reward. We show that, for any graph, a very small amount of reward convinces biased agents to behave optimally, even when their natural behavior would have exponentially high cost. The amount of reward needed to guarantee optimal behavior with competition is also significantly less (in general) than several non-competitive reward schemes in the existing literature.
Though DP is a popular choice in many settings, we argue that DP either does not apply or provides insufficient guarantees when the database is correlated (and this correlation structure is easily inferable). Unfortunately, databases almost always store correlated data (examples include location data, medical data, power grids, and social networks), and the correlation models are often easily learned from historical data, and so should be assumed to be public knowledge. We thus investigate a stronger notion of privacy, BDP, which offers strong guarantees even when adversaries know the correlation structure.
We provide optimal mechanisms (in terms of noise-privacy tradeoffs) for achieving BDP on Markov chain data. Our mechanism is non-interactive, since it outputs a sanitized database, and local, since it doesn’t require a centralized data curator. We also experiment on real-world heart-rate data, demonstrating that our mechanism is robust to moderately varying correlation models.
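A minimal sketch of the local, non-interactive release pattern: each entry of the correlated sequence is perturbed independently before it leaves the device, so no central curator ever sees the raw data. For concreteness the sketch uses plain per-entry Laplace noise; the BDP calibration in our work would use a larger noise scale chosen to absorb the leakage from the known Markov correlation. All names and parameters here are illustrative:

```python
import math
import random

def sanitize_sequence(xs, scale, seed=None):
    """Locally perturb each reading with independent Laplace(0, scale) noise.

    For plain DP on values in [lo, hi], `scale` would be (hi - lo) / epsilon;
    a BDP-style guarantee against a known Markov correlation model requires
    a larger scale (the calibration is the subject of the paper, not shown).
    """
    rng = random.Random(seed)
    out = []
    for x in xs:
        # Sample Laplace noise via the inverse-CDF method.
        u = rng.random() - 0.5
        noise = -scale * (1 if u > 0 else -1) * math.log(1 - 2 * abs(u))
        out.append(x + noise)
    return out

readings = [72, 74, 75, 73, 71]  # e.g. heart-rate samples
noisy = sanitize_sequence(readings, scale=5.0, seed=0)
print(noisy)
```

Because the noise is zero-mean, aggregate statistics over many sanitized readings remain close to their true values, which is what makes the sanitized database useful downstream.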