Hello and welcome to my portfolio. I am currently a third year PhD student in Computer Science at Cornell University, where I am working with Joe Halpern. Previously, I completed my M.S. in Computer Science at the University of Washington under Anna Karlin’s supervision.
My primary research interests lie at the intersection of economics and computer science. I enjoy creating compelling and realistic mathematical models of behavior, and rigorously investigating the outcomes of these models. My Master’s thesis considered a model of how agents might procrastinate when faced with a difficult task, and showed how competition can alleviate the harms of procrastination. Recently, I started work on pricing problems in cryptocurrencies, focusing on how best to set interest rates for lending pools. I’ve also worked more broadly in theoretical computer science, investigating strong privacy definitions to protect correlated data and coordination games on large social networks. In the past, I’ve done research in Tech Policy/HCI and NLP.
In addition to my CS interests, I’m also interested in epistemology and the philosophy of statistics. I’m currently designing a new, social theory of statistical evidence, Robust Bayesianism (RB), that extends likelihood-based methodologies to broader settings. RB is mathematically interesting because it’s compatible with weak, qualitative requirements on rationality. I’m also working on connecting statistical definitions of evidence to more commonplace ethical definitions of evidence, with the goal of providing an “ethics-first” approach to evidence, and to epistemology more generally.
M.S. in Computer Science, 2020
University of Washington
B.S. in Computer Engineering, 2019
University of Washington
Lending pool loans are overcollateralized, and if the value of the collateral dips too low, the entire loan is liquidated: the borrower cannot repay the loan to claim their collateral, which is instead sold off at a discount. Borrowers have the option to “top up” their loans at any time by depositing more collateral, which they retrieve when they repay the loan.
We first use the famous Black-Scholes model, which assumes that the price of the asset (i.e., the crypto) obeys geometric Brownian motion, that the market is frictionless, and that trades are continuous. If we simplify lending pools by ignoring the ability of borrowers to top up their loans, we can replicate lending pool loans via (down-and-out) barrier options, which are options that are called off when the price of the asset dips too low. This yields a simple model for pricing the interest rates and collateralization parameters of lending pools.
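As a rough illustration of the barrier-option view (not the paper’s pricing formula), here is a Monte Carlo sketch: collateral follows geometric Brownian motion, and any path that touches the barrier is knocked out (liquidated), contributing zero payoff. All parameter names and values are illustrative.

```python
import numpy as np

def barrier_claim_value(s0, barrier, strike, mu, sigma, T,
                        n_steps=252, n_paths=20000, seed=0):
    """Monte Carlo value of a down-and-out claim under geometric Brownian motion.

    If the collateral price ever dips below `barrier`, the position is knocked
    out (the loan is liquidated) and pays nothing; otherwise the holder
    receives max(S_T - strike, 0) at maturity. Discounting is ignored, per the
    frictionless simplification.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate log-price increments for all paths at once.
    inc = (mu - 0.5 * sigma**2) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = s0 * np.exp(np.cumsum(inc, axis=1))
    alive = paths.min(axis=1) > barrier   # paths that never hit the barrier
    payoff = np.maximum(paths[:, -1] - strike, 0.0) * alive
    return payoff.mean()
```

Closed forms exist for down-and-out barrier options under Black-Scholes; the simulation is shown only because it extends naturally to the top-up variants below.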
However, modeling top-ups requires us to deviate from the Black-Scholes model. In particular, top-ups are trivial if there is no discount factor. So, we augment the Black-Scholes model to allow for discounting, and implement simulations to price interest rates with top-ups. To do so, we consider loans which borrowers are allowed to top up a finite number of times. We show that such loans can be considered nested barrier options, such that topping up $k$ times is a barrier option with a rebate equivalent to the value of being allowed to top up only $k-1$ times. We show, via simulations, that the values of these limited top-up options converge quickly in $k$.
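The nesting can be illustrated with a simple simulation (a hypothetical sketch, not the paper’s pricing procedure): each time a path hits the barrier, a remaining top-up rescues the loan, modeled here as the extra collateral pushing the liquidation barrier down by an assumed factor `reset`.

```python
import numpy as np

def liquidation_prob(s0, barrier, mu, sigma, T, k_topups,
                     reset=1.25, n_steps=252, n_paths=20000, seed=0):
    """Estimate the probability a loan is ultimately liquidated when the
    borrower may top up at most `k_topups` times.

    Each barrier hit consumes one top-up and lowers the barrier by `reset`
    (illustrative stand-in for depositing more collateral); with no top-ups
    left, the loan is liquidated.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    inc = (mu - 0.5 * sigma**2) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = s0 * np.exp(np.cumsum(inc, axis=1))
    liquidated = 0
    for path in paths:
        b, k = barrier, k_topups
        for s in path:
            if s <= b:
                if k > 0:
                    k -= 1
                    b /= reset   # top-up: extra collateral lowers the barrier
                else:
                    liquidated += 1
                    break
    return liquidated / n_paths
```

Plotting this probability against $k$ for a volatile asset shows the same qualitative behavior as the nested-option values: each additional allowed top-up helps less than the last.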
Finally, we study lending pools in practice, and show the discount rates induced by their interest rates, under our model.
We build on a model originally proposed by Jon Kleinberg and Sigal Oren [KO14]. In their model, tasks are represented by directed, acyclic graphs with designated start node $s$ and end node $t$. The agent traverses a shortest $s\to t$ path with one twist: when evaluating the cost of a path, they multiply the cost of the first edge by their bias parameter, $b$. They traverse the first step of this biased path, and then recompute the best path. Existing results show that these biased agents can take exponentially more costly paths through a given graph.
In our model, the task designer can chunk edges into smaller pieces. We show that, for edges along the shortest path, the optimal way to chunk an edge is to make initial pieces easier and later pieces progressively harder. For edges not along the shortest path, optimal chunking is significantly more complex, but we provide an efficient algorithm. We also show that with a linear number of chunks on each edge, the biased agent’s cost can be brought from exponentially worse than optimal to within a constant factor of the true cheapest path. Finally, we show how to optimally chunk task graphs for multiple types of agents simultaneously.
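The biased traversal itself is easy to simulate. The sketch below (graph encoding and names are my own, but the dynamics follow the [KO14] description above) computes the true cost a present-biased agent ends up paying: at each node it inflates the first edge’s cost by $b$, plans a full path, takes one step, and re-plans.

```python
import heapq

def shortest_dist(graph, src, t):
    """Dijkstra distance from src to t; graph maps node -> [(neighbor, cost)]."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float('inf')):
            continue
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')

def biased_traversal_cost(graph, s, t, b):
    """True total cost paid by a present-biased agent with bias b.

    At each node the agent picks the first edge minimizing
    b * cost(edge) + (unbiased shortest distance from the edge's endpoint),
    pays the edge's true cost, and re-plans from the new node.
    """
    total, u = 0.0, s
    while u != t:
        nxt, c = min(graph[u],
                     key=lambda vc: b * vc[1] + shortest_dist(graph, vc[0], t))
        total += c
        u = nxt
    return total

# Toy task graph: the direct edge costs 8, but a biased agent defers it.
g = {'s': [('t', 8.0), ('a', 1.0)], 'a': [('t', 8.0)], 't': []}
```

On this toy graph an unbiased agent ($b=1$) pays 8 by taking the direct edge, while an agent with $b=2$ first takes the cheap edge to `a` (perceived cost $2\cdot 1 + 8 = 10 < 16$) and then must pay 8 anyway, for a total of 9.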
We build on a model originally proposed by Jon Kleinberg and Sigal Oren [KO14]. In their model, tasks are represented by directed, acyclic graphs with designated start node $s$ and end node $t$. The agent traverses a shortest $s\to t$ path with one twist: when evaluating the cost of a path, they multiply the cost of the first edge by their bias parameter, $b$. They traverse the first step of this biased path, and then recompute the best path. Existing results show that these biased agents can take exponentially more costly paths through a given graph.
We augment this model by having two agents compete to finish the task first, with the winner getting a reward. We show that, for any graph, a very small amount of reward convinces biased agents to behave optimally, even when their natural behavior would have exponentially high cost. The amount of reward needed to guarantee optimal behavior with competition is also significantly less (in general) than several non-competitive reward schemes in the existing literature.
Though differential privacy (DP) is a popular choice in many settings, we argue that DP either does not apply or provides insufficient guarantees when the database is correlated (and this correlation structure is easily inferable). Unfortunately, databases almost always store correlated data (examples include location data, medical data, power grids, and social networks), and the correlation models are often easily learned from historical data, and so should be assumed to be public knowledge. We thus investigate a stronger notion of privacy, Bayesian differential privacy (BDP), which offers strong guarantees even when adversaries know the correlation structure.
We provide optimal mechanisms (in terms of noise-privacy tradeoffs) for achieving BDP with Markov chain data. Our mechanism is non-interactive, since it outputs a sanitized database, and local, since it doesn’t require a centralized data curator. We also experiment on real-world heart rate data, demonstrating that our mechanism is robust to moderate misspecification of the correlation model.
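For intuition about what “non-interactive and local” means here, a baseline local sanitizer is $k$-ary randomized response: each entry of the sequence is released truthfully with some probability and replaced by a random other state otherwise, with no trusted curator ever seeing the raw data. This sketch is only that generic baseline, not the optimal BDP mechanism from our paper, and the function names are my own.

```python
import math
import random

def sanitize_local(states, alphabet, eps):
    """Baseline local sanitizer (k-ary randomized response).

    Each database entry is kept with probability e^eps / (e^eps + k - 1) and
    otherwise replaced by a uniformly random *different* state. The output is a
    full sanitized sequence (non-interactive), and each entry is perturbed
    independently on the user's side (local).
    """
    k = len(alphabet)
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    out = []
    for s in states:
        if random.random() < p_true:
            out.append(s)
        else:
            out.append(random.choice([a for a in alphabet if a != s]))
    return out
```

Against correlated data, the noise level this baseline needs for a BDP-style guarantee degrades with the strength of the Markov correlation, which is exactly the tradeoff our optimal mechanisms improve.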