Game Theory & Computation Seminar Series

About

The Game Theory and Computation Seminar is a weekly theory seminar series highlighting recent advances in economic theory and computer science. Topics include algorithmic game theory, market design, microeconomic theory, social network theory, and other areas related to game theory and computation.

The agenda typically consists of a prepared presentation, followed by a period of questions and discussion. The seminar is aimed at an audience of theory researchers and students. We welcome members of the local academic community to attend.

The seminar is currently on hiatus.  There are no upcoming talks scheduled in this series.  Please subscribe to the mailing list to be notified about future events.

If you have any questions or concerns, please send us an email.

Mailing List

To subscribe to the mailing list, send a blank message to this email address.
To unsubscribe, send a blank message to this email address.

Past Speakers

Past Speakers – 2015/2016

SPECIAL EVENT: Reverse AGT workshop on Applied Econometrics

Workshop | Friday, Feb 26, 2016 | 2:30 PM – 5:30 PM | Harvard University, Maxwell Dworkin 119

Past Speakers – 2014/2015

SPECIAL EVENT: Reverse AGT workshop on Property Rights and the Efficiency of Bargaining

Workshop | Friday, May 8, 2015 | 2:00 PM – 5:00 PM | Harvard University, Maxwell Dworkin 119

Revenue maximization in interdependent value settings

Shuchi Chawla, University of Wisconsin-Madison | Friday, April 10, 2015 | 2:00 PM – 3:00 PM | Harvard University, Littauer Center, 3rd Floor Lounge

Description

A fundamental assumption underlying much of mechanism design is that buyers know their values for the products they are interested in. We consider settings where agents receive signals related to their values from a joint distribution, and their estimated values are functions of their own as well as others’ signals. We study revenue maximization in such settings and show that a variant of the VCG mechanism with admission control gives a constant approximation to the optimal expected revenue. Our results do not require any assumptions on the signal distributions; however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.

This is joint work with Hu Fu and Anna Karlin.
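
For readers outside mechanism design, a rough formalization of the interdependent-value model and the single-crossing condition mentioned above (my notation, not the speakers’):

\[
v_i \;=\; v_i(s_1,\dots,s_n), \qquad (s_1,\dots,s_n) \sim \text{a joint signal distribution},
\]
\[
\text{single crossing:}\qquad \frac{\partial v_i}{\partial s_i} \;\ge\; \frac{\partial v_j}{\partial s_i} \quad \text{for all } j \neq i,
\]

that is, each agent’s own signal moves her own estimated value at least as strongly as it moves anyone else’s.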

Biography

Shuchi Chawla is an associate professor of computer sciences at the University of Wisconsin-Madison. Her research interests lie in the design and analysis of algorithms. Her recent work has focused on algorithmic problems arising in economics and networking. Shuchi received her Ph.D. from CMU in 2006. She has held visiting positions at the University of Washington and Microsoft Research. She is the recipient of an NSF CAREER Award and a Sloan Foundation Fellowship.

Practical Revenue-Maximizing Mechanism Design for Selling Online Ads

Hamid Nazerzadeh, University of Southern California | Friday, March 6, 2015 | 2:00 PM – 3:00 PM | Harvard University, Littauer Center, 3rd Floor Lounge

Description

Auctions are the essential tool for allocation and pricing of goods in environments with heterogeneous buyers. However, unless restrictive assumptions are made, revenue-optimal auctions can be quite complicated and even computationally intractable. In this talk, I will discuss some of the challenges in maximizing revenue in the context of online advertising auctions, as well as recent results on designing practical and near-optimal mechanisms for selling online ads.

Based on joint work with Vahab Mirrokni.

Biography

Hamid Nazerzadeh is an assistant professor in the Data Sciences and Operations department at the USC Marshall School of Business. He obtained his Ph.D. in Operations Research from Stanford University and his B.Sc. from Sharif University of Technology, and has worked at Microsoft, Yahoo!, and Google research labs. His research focuses on mechanism design and optimization algorithms and their applications in operations and monetization of online markets. He is the recipient of the Yahoo! Ph.D. Student Fellowship Award (2007), an Honorable Mention in the George Dantzig Dissertation Award (2009), a Google Faculty Research Award (2013), the Marshall Dean’s Award for Research Excellence (2014), and the INFORMS Revenue Management and Pricing Section Prize (2014).

SPECIAL EVENT: Reverse AGT workshop on Competition in Selection Markets

Workshop | Friday, February 27, 2015 | 2:00 PM – 5:00 PM | Harvard University, Maxwell Dworkin 119

On the Hardness of Signaling

Shaddin Dughmi, University of Southern California | Friday, January 30, 2015 | 2:00 PM – 3:30 PM | Harvard University, Littauer Center, 3rd Floor Lounge

Description

There has been a recent surge of interest in the role of information in strategic interactions. Much of this work seeks to understand how the realized equilibrium of a game is influenced by uncertainty in the environment and the information available to players in the game. Lurking beneath this literature is a fundamental, yet largely unexplored, algorithmic question: how should a “market maker” who is privy to additional information, and equipped with a specified objective, inform the players in the game? This is an informational analogue of the mechanism design question, and views the information structure of a game as a mathematical object to be designed, rather than an exogenous variable.

We initiate a complexity-theoretic examination of the design of optimal information structures in general Bayesian games, a task often referred to as signaling. We focus on one of the simplest instantiations of the signaling question: Bayesian zero-sum games, and a principal who must choose an information structure maximizing the equilibrium payoff of one of the players. In this setting, we show that optimal signaling is computationally intractable, and in some cases hard to approximate, assuming that it is hard to recover a planted clique from an Erdős–Rényi random graph. This is despite the fact that equilibria in these games are computable in polynomial time, and therefore suggests that the hardness of optimal signaling is a distinct phenomenon from the hardness of equilibrium computation. En route to our results, and necessitated by the non-local nature of information structures, we prove an “amplification lemma” for the planted clique problem which may be of independent interest.
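
Schematically (my notation, not the speaker’s), the signaling problem described above is: the principal commits to a map from states of nature to distributions over signals, and chooses that map to maximize her objective at the equilibrium induced by the resulting posterior beliefs,

\[
\varphi : \Theta \to \Delta(\Sigma), \qquad
\max_{\varphi}\; \mathbb{E}_{\theta \sim \mu,\ \sigma \sim \varphi(\theta)}
\bigl[\, f\bigl(\mathrm{EQ}(\mu\mid\sigma)\bigr) \bigr],
\]

where \mu is the common prior, \mu\mid\sigma the posterior after observing signal \sigma, \mathrm{EQ}(\cdot) the equilibrium of the (here zero-sum) game played under that posterior, and f the principal’s objective, e.g., the equilibrium payoff of one designated player.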

Biography

Shaddin Dughmi is an Assistant Professor in the Department of Computer Science at USC, where he is a member of the Theory Group. He received a B.S. in computer science, summa cum laude, from Cornell University in 2004, and a PhD in computer science from Stanford University in 2011. He is a recipient of the NSF CAREER award, the Arthur L. Samuel best doctoral thesis award, and the ACM EC best student paper award.

Do-Not-Track and the Economics of Third-Party Advertising

Georgios Zervas, Boston University | Wednesday, December 17, 2014 (note the unusual day) | 4:00 PM – 5:00 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

Retailers regularly target users with online ads based on their web browsing activity, benefiting both the retailers, who can better reach potential customers, and content providers, who can increase ad revenue by displaying more effective ads. The effectiveness of such ads relies on third-party brokers that maintain detailed user information, prompting legislation such as do-not-track that would limit or ban the practice. We gauge the economic costs of such privacy policies by analyzing the anonymized web browsing histories of 14 million individuals. We find that only 3% of retail sessions are currently initiated by ads capable of incorporating third-party information, a number that holds across market segments, for online-only retailers, and under permissive click-attribution assumptions. Third-party capable advertising is shown by 12% of content providers, accounting for 32% of their page views; this reliance is concentrated in online publishing (e.g., news outlets) where the rate is 91%. We estimate that most of the top 10,000 content providers could generate comparable revenue by switching to a “freemium” model, in which loyal site visitors are charged $2 (or less) per month. We conclude that do-not-track legislation would impact, but not fundamentally fracture, the Internet economy.

Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2505643

Biography

Georgios Zervas is an assistant professor of Marketing at Boston University School of Management. Before joining BU in 2013 he was a Simons postdoctoral fellow at Yale, and an affiliate at the Center for Research on Computation and Society at Harvard. He received his PhD in Computer Science in 2011 from Boston University. He is broadly interested in understanding the strategic interactions of firms and consumers participating in internet markets using large-scale data collection and econometric analysis.

Two-sided matching via balanced exchange: Theory and applications

Utku Ünver, Boston College | Monday, December 1, 2014 | 4:00 PM – 5:30 PM | Harvard University, Littauer Center, 3rd Floor Lounge

Description

We introduce a new class of matching problems that mimic two-sided exchange programs. These include, for example, the tuition exchange programs used by colleges in the US to help the dependents of their eligible faculty use tuition benefits offered at other participating institutions. Each participating institution (i.e., firm) has to maintain a balance between the numbers of exported and imported students (i.e., workers), because a negative balance — exports exceeding imports — is generally penalized by suspension from the program. On the other hand, these programs function through decentralized markets that make it difficult to sustain overall balance, especially since workers and firms have preferences over each other.
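
For concreteness, the balance requirement discussed above can be written as follows (my notation):

\[
e_f \;=\; \#\{\text{workers imported by firm } f\} \;-\; \#\{\text{workers exported by firm } f\},
\]

so a negative balance e_f < 0 means that exports exceed imports (the case penalized by suspension), and a market outcome is balanced when e_f = 0 for every participating firm.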

We show that stable equilibria discourage net-exporting firms from exchange. We introduce the two-sided top-trading-cycles (2S-TTC) mechanism, which is balanced-efficient, worker-strategy-proof, individually rational, and respects priority bylaws regarding worker eligibility. We prove that 2S-TTC is the unique mechanism fulfilling these objectives. Moreover, it encourages exchange, since full participation is a dominant strategy for firms.

Joint work with Umut Mert Dur.

Biography

Utku Ünver, a Professor of Economics at Boston College, is an economic theorist with research interests in market design, mechanism design, and game theory with emphasis on the theory and practice of matching markets and allocation/exchange of discrete resources. His recent research includes (but is not limited to) the design of kidney exchange clearinghouses, improving recommendation systems used in adoption of children in foster care, matching in two-sided markets such as tuition exchange in college admissions, and school choice mechanisms. He has also worked on implementing kidney exchanges and improving adoption schemes in real life with policy workers.

SPECIAL EVENT: Reverse AGT workshop on Optimal Taxation

Workshop | Monday, November 24, 2014 | 1:00 PM – 4:00 PM | Harvard University, 20 University Road, Room 646

Bidding Games, Bargaining and Efficient Allocations

Reshef Meir, Harvard University | Monday, November 10, 2014 | 4:00 PM – 5:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

Bidding games are extensive-form two-player games in which players bid for the right to move next. Zero-sum bidding games have been studied in the literature, mainly as variations of popular recreational games such as Chess and Tic-Tac-Toe.

We study for the first time bidding games with general utility functions, showing that there always exists a natural pure subgame perfect Nash equilibrium with the following desirable properties: (a) a Pareto-efficient outcome is reached for any initial budget; (b) any Pareto-efficient outcome is attained for some initial budget allocation; and (c) each agent’s utility is weakly monotone in her budget.

We show how this result can be applied to design a simple and efficient bargaining mechanism for allocating items among two agents with arbitrary valuation functions. Time permitting, we will discuss the computational complexity aspects and some possible extensions of the mechanism to randomized settings.

Joint work with Gil Kalai and Moshe Tennenholtz.

Biography

Reshef is a post-doctoral fellow at Harvard’s Center for Research on Computation and Society. He received his PhD from the Hebrew University of Jerusalem; his dissertation received an honorable mention for the Victor Lesser Distinguished Dissertation Award (IFAAMAS), and he won the Michael B. Maschler Prize for research in game theory (shared with Omer Tamuz). His main research areas are Computational Game Theory, Mechanism Design, and Bounded Rationality.

Incentivizing Exploration

Bobby Kleinberg, Cornell University | Monday, November 3, 2014 | 4:00 PM – 5:30 PM | Harvard University, Littauer Center, 3rd Floor Lounge

Description

An important recent theme in the development of on-line social systems is the potential of crowdsourced effort to solve large problems — defining tasks in which many people can each contribute a small amount of time to the overall goal. In some cases the arrangement is based on a direct compensation scheme, in which a (low) rate is paid for each unit of work performed. But in many settings one only has access to a crowd “in the wild”, as they go about their everyday activities. Examples include product recommendations, social news readers, and scientific activities ranging from crowdsourced “citizen science” to the funding of research by national agencies.

In all of these domains, there is a problem of misaligned incentives: the designer’s goal is to carry out exploration (of the space of news stories, products, bird habitats, etc.) as efficiently as possible, but for reasons of scale the designer must implement the exploration via a crowd composed of members who each derive their own utility from participating in the exploration. We model this as a multi-armed bandit problem in which selfish, myopic agents pull arms with publicly observable outcomes, and a principal seeking to maximize the cumulative time-discounted reward may influence the agents by offering monetary (or other) rewards contingent on choosing particular actions. Our main result is a full characterization of the trade-off between the expected payments the principal must make and the total reward that can be achieved.

Joint work with Peter Frazier, David Kempe, and Jon Kleinberg.
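
The following toy simulation (my own construction; the paper’s model has discounting and a richer payment scheme) is meant only to make the misaligned-incentives setting above concrete: myopic agents pick the arm with the best observed average plus whatever payment the principal offers, and the principal subsidizes the least-explored arm.

import random

def simulate(true_means, rounds=200, bonus=0.05, seed=0):
    """Myopic agents plus a principal paying a small bonus on the least-pulled arm."""
    rng = random.Random(seed)
    pulls = [1] * len(true_means)                       # pretend one prior pull per arm
    sums = [1.0 if rng.random() < m else 0.0 for m in true_means]
    total_reward = total_payment = 0.0
    for _ in range(rounds):
        offers = [0.0] * len(true_means)
        offers[pulls.index(min(pulls))] = bonus         # subsidy on the least-explored arm
        # a myopic agent maximizes observed average reward plus offered payment
        arm = max(range(len(true_means)), key=lambda i: sums[i] / pulls[i] + offers[i])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        pulls[arm] += 1
        sums[arm] += reward
        total_reward += reward
        total_payment += offers[arm]
    return total_reward, total_payment, pulls

print(simulate([0.3, 0.5, 0.7]))   # (cumulative reward, total payments, pulls per arm)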

Biography

Robert Kleinberg is an Associate Professor of Computer Science at Cornell University. His research studies the design and analysis of algorithms, and their relations to economics, learning theory, and networks. Prior to receiving his doctorate from MIT in 2005, Kleinberg spent three years at Akamai Technologies, where he assisted in designing the world’s largest Internet Content Delivery Network. He is the recipient of a Microsoft Research Faculty Fellowship, an Alfred P. Sloan Foundation Fellowship, and an NSF CAREER Award.

Simple Mechanisms with Simple Strategies

Vasilis Syrgkanis, Microsoft Research | Monday, October 27, 2014 | 4:00 PM – 5:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

We introduce single-bid auctions as a new format for combinatorial auctions. In single-bid auctions, each bidder submits a single bid for the right to buy items at a fixed price. Contrary to other simple auction formats, such as simultaneous or sequential single-item auctions, bidders can implement no-regret learning strategies in polynomial time. Price of anarchy bounds for correlated equilibria concepts in single-bid auctions therefore have more bite than their counterparts in auctions where learning is not known to be computationally tractable. To this end, we show that for any subadditive valuations the social welfare at equilibrium is a Θ(log m)-approximation to the optimal social welfare, where m is the number of items. We also provide tighter approximation results for several subclasses. Our welfare guarantees hold for Nash equilibria and no-regret learning outcomes in both Bayesian and complete information settings via the smooth-mechanism framework. Of independent interest, our techniques show that in a combinatorial auction setting, efficiency guarantees of a mechanism via smoothness for a very restricted class of cardinality valuations extend, with a small degradation, to subadditive valuations, the largest complement-free class of valuations.

Joint work with Nikhil Devanur, Jamie Morgenstern, Matt Weinberg.
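
As a concrete reading of the format described above (my own sketch; the paper’s exact allocation rule may differ in details), each bidder names one number, bidders are processed in decreasing order of their bids, and each buys her utility-maximizing set of the remaining items at a per-item price equal to her own bid:

from itertools import chain, combinations

def powerset(items):
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def single_bid_auction(bids, valuations, items):
    """bids: bidder -> single number; valuations: bidder -> function(frozenset) -> value."""
    remaining, allocation, payments = set(items), {}, {}
    for bidder in sorted(bids, key=bids.get, reverse=True):   # highest single bid goes first
        price = bids[bidder]
        best_set, best_util = frozenset(), 0.0
        for subset in powerset(remaining):                    # brute force is fine for a toy example
            s = frozenset(subset)
            util = valuations[bidder](s) - price * len(s)
            if util > best_util:
                best_set, best_util = s, util
        allocation[bidder], payments[bidder] = best_set, price * len(best_set)
        remaining -= best_set
    return allocation, payments

vals = {"alice": lambda S: 10 * ("a" in S) + 4 * ("b" in S),
        "bob":   lambda S: 7 * ("a" in S) + 6 * ("b" in S)}
print(single_bid_auction({"alice": 5, "bob": 3}, vals, {"a", "b"}))

The point of the format is that each bidder’s strategy space is one-dimensional, which is what makes no-regret learning over it computationally tractable, as the abstract notes.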

Biography

Vasilis is a Postdoctoral researcher at MSR NYC. He recently received his PhD in Computer Science from Cornell University under the supervision of Prof. Eva Tardos. His research interests include algorithms, game theory, auction theory, mechanism design, crowdsourcing and computational complexity. Driven by electronic market applications such as ad auctions, his research focuses on the design and analysis of approximately efficient mechanisms with guaranteed good properties, even when players participate in many mechanisms simultaneously or sequentially, and even if they use learning algorithms to identify how to play the game and have incomplete information about the competition. More broadly, he is interested in quantifying the inefficiency of systems with strategic users.

Social Learning with Costly Search and Social Advertising

Mallesh Pai, University of Pennsylvania | Monday, October 6, 2014 | 4:00 PM – 5:30 PM | Harvard University, Littauer Center

Description

We study a sequential social learning model where agents may privately acquire information by costly search. Search costs of agents are private. We show that asymptotic learning occurs if and only if search costs are not bounded away from zero. We explicitly characterize all equilibria for the case of two actions, and show that the probability of late moving agents taking the suboptimal action vanishes at a linear rate. Social welfare converges to the social optimum as the discount rate converges to one if and only if search costs are not bounded away from zero.

We use this analysis to study agents who are active on a (reduced form) online social network, and must select among competing products of unknown quality. We study the impact of advertising by duopolist firms, and the incentives of the underlying social network. We consider display advertising, which is standard firm to consumer advertising, and social advertising, in which agents who purchased that firm’s product are highlighted to their friends. We show that in equilibrium, the heterogeneous firms spend the same amount on advertising. Social advertising may be more lucrative than standard display advertising. However, both forms of advertising have no effect on consumer welfare. A social network motivated by advertising revenues may limit the amount of information agents see about actions by other agents, since this will increase advertising revenue. This reduces consumer welfare relative to the first best, since early movers’ purchases are informative about relative quality.

This talk is based on two papers, “Social Learning With Costly Search” and “Do Online Social Networks Increase Welfare?”, both joint with Manuel Mueller-Frank.

Biography

Mallesh M. Pai has been an Assistant Professor of Economics at the University of Pennsylvania since 2011. He is also an affiliate of the Warren Center for Network and Data Sciences. His research interests include mechanism design/auction theory, the economics of privacy, social networks/social learning, and statistical decision theory.

He has a PhD in Managerial Economics and Strategy from the Kellogg School of Management, Northwestern University, and a Bachelor’s degree in Computer Science and Engineering from the Indian Institute of Technology, Delhi.

Risk Dynamics in Trade Networks

Rafael Frongillo, Harvard | Monday, September 29, 2014 | 4:00 PM – 5:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

We present a new framework to model interactions among agents which seek to negotiate to minimize their risk with respect to some future outcome. We quantify this risk using the concept of risk measures from finance, and introduce a class of trade dynamics which allow agents to trade contracts contingent upon the future outcome. We then show that these trade dynamics exactly correspond to a variant of randomized coordinate descent. By extending the analysis of these coordinate descent methods to account for our more organic setting, we are able to show convergence rates for very general trade dynamics, showing that the market or network converges to a unique steady state. Applying these results to prediction markets, we expand on recent results by adding convergence rates and general aggregation properties.

Joint work with Mark Reid (ANU & NICTA)
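
For intuition about the correspondence claimed above, here is a bare-bones randomized coordinate descent loop (my own sketch, not the paper’s dynamics): at each step a single randomly chosen coordinate, the analogue of one pair of agents adjusting one contract, is updated to reduce a convex objective.

import random

def coordinate_descent(grad, x, step=0.1, iters=1000, seed=0):
    """Minimize a smooth convex function given its gradient, one random coordinate at a time."""
    rng = random.Random(seed)
    for _ in range(iters):
        i = rng.randrange(len(x))
        x[i] -= step * grad(x)[i]          # update only coordinate i
    return x

# example: minimize f(x) = sum_i (x_i - i)^2, whose gradient coordinates are 2 * (x_i - i)
print(coordinate_descent(lambda x: [2 * (xi - i) for i, xi in enumerate(x)], [0.0, 0.0, 0.0]))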

Biography

Rafael Frongillo is a computer scientist working on theoretical problems at the interface between machine learning and economics, primarily focusing on problems such as crowdsourcing or prediction markets which involve the exchange of information for money. Before coming to Harvard, he was a postdoc at Microsoft Research in New York City, and he received his Ph.D. at UC Berkeley, advised by Christos Papadimitriou and supported by the NDSEG Fellowship.

On the Efficiency of the Walrasian Mechanism

Moshe Babaioff, Microsoft Research | Monday, September 15, 2014 | 4:00 PM – 5:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

Central results in economics guarantee the existence of efficient equilibria for various classes of markets. An underlying assumption in early work is that agents are price-takers, i.e., agents honestly report their true demand in response to prices. A line of research initiated by Hurwicz (1972) is devoted to studying market performance when agents are strategic about their demands. This is captured by the Walrasian mechanism, which proceeds by collecting reported demands and then finding clearing prices for the reported market.

In practice, agents commonly reduce their demand, leading to inefficient outcomes. We ask how inefficient the equilibria can be. We show that the welfare of every pure Nash equilibrium of the Walrasian mechanism is at least one quarter of the optimal welfare, when players have gross substitute valuations and do not overbid. Our result shows that approximate efficiency is guaranteed regardless of the size of the market.

Our results extend to Bayes-Nash equilibria and outcomes of no regret learning via the smooth mechanism framework. We also extend our bounds to any mechanism that maximizes welfare with respect to the declared valuations and never charges agents more than their bids. We also relax the no-overbidding assumption, and present bounds that are parameterized by the extent to which agents are willing to overbid.

Joint work with Brendan Lucier, Noam Nisan and Renato Paes Leme

Biography

Moshe Babaioff is a Researcher at Microsoft Research. His research interests lie in the intersection of Computer Science and Economics, and he studies problems on the border of Computer Science Theory, Game Theory, and Microeconomic Theory. Much of his research focuses on the theoretical foundations of electronic markets. Prior to joining Microsoft Research in 2007, he spent two years as a post-doctoral scholar at the University of California, Berkeley. He received his PhD in Computer Science from the Hebrew University of Jerusalem in 2005, where he was advised by Professor Noam Nisan. He holds an M.Sc. in Computer Science and a B.A. in Mathematics and Computer Science, also from the Hebrew University.

Past Speakers – 2013/2014

Optimization of submodular functions under submodular constraints

Yaron Singer, Harvard | Wednesday, April 30, 2014 | 2:30 PM – 3:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

Many applications in data mining, machine learning, and social network analysis can be formulated as optimization of a submodular function under submodular constraints. Surprisingly, very little is known about the formal guarantees achievable for these problems. In this talk, we’ll introduce a new technique for bi-criteria optimization that can be applied to a broad range of interesting problems.
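
As a reminder (the standard definition, not anything specific to the talk), a set function f : 2^V → R is submodular if it exhibits diminishing marginal returns:

\[
f(S \cup \{x\}) - f(S) \;\ge\; f(T \cup \{x\}) - f(T)
\qquad \text{for all } S \subseteq T \subseteq V \text{ and } x \in V \setminus T .
\]

In the problems above, both the objective being optimized and the constraint are functions of this form.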

Biography

Yaron Singer is an Assistant Professor of Computer Science at Harvard University. He was previously a postdoctoral researcher at Google Research and obtained his PhD from UC Berkeley. He is the recipient of the 2012 Best Student Paper Award at the ACM conference on Web Search and Data Mining (WSDM), the 2010 Facebook Fellowship, the 2009 Microsoft Research Fellowship, and several entrepreneurial awards for work on advertising in social networks.

A Simple and Approximately Optimal Mechanism for Additive Buyers

Matt Weinberg, MIT | Wednesday, April 23, 2014 | 2:30 PM – 3:30 PM | Harvard University, Littauer Center

Description

Consider a monopolist selling n heterogeneous items to a single additive buyer, whose value for each item is drawn independently. If the seller aims to maximize revenue, it’s known that even in this simple setting the optimal mechanism may be quite complex, requiring randomization or menus of infinite size.

Hart and Nisan recently initiated a study of two very simple pricing schemes for this setting: item pricing (which sets a price on each item) and bundle pricing (which sets a single price on the grand bundle of all items). They further showed that neither scheme can obtain better than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for all instances, the better of item and bundle pricing obtains a constant-factor (<7.5) approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations correlated across items.
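
A small Monte Carlo illustration (my own toy example, not taken from the paper) of the two simple schemes being compared, for an additive buyer with two i.i.d. uniform [0,1] item values:

import random

def price_revenue(price, values):
    """Expected revenue of posting `price` against a sample of buyer values."""
    return price * sum(v >= price for v in values) / len(values)

rng = random.Random(1)
n_items, n_samples = 2, 20_000
item_values = [[rng.random() for _ in range(n_samples)] for _ in range(n_items)]
bundle_values = [sum(vs) for vs in zip(*item_values)]      # additive buyer: bundle value = sum

item_grid = [i / 50 for i in range(1, 50)]                 # candidate item prices in (0, 1)
bundle_grid = [i / 50 for i in range(1, 100)]              # candidate bundle prices in (0, 2)
srev = sum(max(price_revenue(p, vals) for p in item_grid) for vals in item_values)
brev = max(price_revenue(p, bundle_values) for p in bundle_grid)
print(f"best item pricing ~ {srev:.3f}, best grand-bundle pricing ~ {brev:.3f}")

On this particular distribution the grand bundle does a bit better (about 0.54 versus 0.50), but for other value distributions the comparison flips; the result above is that the better of the two is never far from optimal.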

Biography

Matt is a PhD candidate at the Massachusetts Institute of Technology, where he is advised by Costis Daskalakis. His research lies mostly at the intersection of Computer Science and Economics, including Algorithmic Game Theory, Online Algorithms, and Applied Probability. He obtained his B.A. in Math from Cornell University.

Removing Arbitrage from Wagering Mechanisms

Yiling Chen, Harvard | Wednesday, April 9, 2014 | 2:30 PM – 3:30 PM | Harvard University, Littauer Center

Description

We observe that a family of one-shot wagering mechanisms, the Weighted Score Wagering Mechanisms, admit arbitrage. Participants can extract a guaranteed positive payoff by betting on any prediction within a certain range, profiting whether their prediction is right or wrong. As a result, rewards don’t necessarily accrue to the most informed and accurate participants. In essence, participants leave free money on the table when they “agree to disagree”. Our observation suggests that when participants have immutable beliefs, it may be possible to design alternative mechanisms in which the center can make a profit without sacrificing incentive properties such as individual rationality, incentive compatibility, and sybilproofness. We introduce a new family of wagering mechanisms called No-Arbitrage Wagering Mechanisms that retain many of the positive properties of Weighted Score Wagering Mechanisms, but with the arbitrage opportunity removed. We show several structural results about the class of mechanisms that satisfy no-arbitrage in conjunction with other properties, and provide specific examples of No-Arbitrage Wagering Mechanisms with special properties that would be of particular interest.

This talk is based on joint work with Nikhil Devanur, David Pennock, and Jennifer Wortman Vaughan.
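
For orientation, the weighted score wagering mechanisms referred to above are usually presented in roughly the following form (my from-memory paraphrase, possibly off in normalization): each agent i places a wager m_i and a probabilistic prediction p_i, a proper scoring rule s with range [0,1] scores the predictions against the realized outcome x, and agent i’s net payoff is proportional to her wager times the amount by which her score beats the wager-weighted average score,

\[
\text{net payoff of agent } i \;\approx\; m_i\Bigl( s(p_i, x) \;-\; \frac{\sum_j m_j\, s(p_j, x)}{\sum_j m_j} \Bigr).
\]

The arbitrage observation above is that an agent can pick a report for which this quantity is strictly positive no matter which outcome x is realized.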

Biography

Yiling Chen is the John L. Loeb Associate Professor of Natural Sciences and Associate Professor of Computer Science at Harvard University. She received her Ph.D. in Information Sciences and Technology from the Pennsylvania State University. Prior to working at Harvard, she spent two years at the Microeconomic and Social Systems group of Yahoo! Research in New York City. Her current research focuses on topics in the intersection of computer science and economics. She is interested in designing and analyzing social computing systems according to both computational and economic objectives. Chen received an ACM EC Outstanding Paper Award, an AAMAS Best Paper Award, and an NSF CAREER Award, and was selected by IEEE Intelligent Systems as one of “AI’s 10 to Watch” in 2011.

Gradual Bidding in eBay-Like Auctions

Yuhta Ishii, Harvard | Wednesday, April 2, 2014 | 2:30 PM – 3:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

This paper shows that in online auctions like eBay, if bidders can only place bids at random times, then many different equilibria arise besides truthful bidding, despite the option to leave proxy bids. These equilibria can involve gradual bidding, periods of inactivity, and waiting to start bidding towards the end of the auction – bidding behaviors common on eBay. Bidders in such equilibria implicitly collude to keep the increase of the winning price slow over the duration of the auction. In a common value environment, we characterize a class of equilibria that include the one in which bidding at any price is maximally delayed, and all bids minimally increment the price. The seller’s revenue can be a small fraction of what could be obtained at a sealed-bid second-price auction. With many bidders, we show that this equilibrium has the feature that bidders are passive until near the end of the auction, and then they start bidding incrementally.

Biography

Yuhta Ishii is a sixth-year PhD Candidate in the Department of Economics at Harvard University. His research interests lie broadly in game theory, economic theory, and market design. In particular, much of his work studies the ways in which new information (or the lack of new information) affects the decisions of agents in dynamic environments. Yuhta will spend the upcoming year at the Department of Economics at Yale University as a Postdoctoral Associate, and will join the Department of Economics at the Instituto Tecnológico Autónomo de México (ITAM) as an assistant professor in 2015.

Revenue Guarantees for Auction Equilibria

Darrell Hoy, Northwestern | Wednesday, March 26, 2014 | 2:30 PM – 3:30 PM | Harvard University, Littauer Center

Description

Revenue approximation results for auctions have been limited to settings in which analytically solving for equilibrium is feasible: in particular, truthful auctions or symmetric settings of non-truthful auctions. We use a price-of-anarchy-style approach similar to the smooth-mechanisms framework of Syrgkanis and Tardos [2013] to derive revenue approximation bounds in Bayes-Nash equilibrium without needing to analytically characterize equilibrium.

Our central result is that the first-price auction with monopoly reserves in asymmetric settings is a 3.16-approximation to the revenue of the optimal auction; with duplicates instead of reserves, it is a 4.75-approximation.

In addition, we present a framework for reducing revenue approximation in many other settings to this simple single-item, first-price auction setting. This allows the generalization of our approximations to matroid feasibility constraints, all-pay auctions, position auctions, and the simultaneous composition of auctions with unit-demand, single-valued agents.

Joint work with Jason Hartline and Sam Taggart.

Biography

Darrell Hoy is a fourth-year PhD Candidate from Northwestern University, advised by Prof. Jason Hartline. His work lies at the intersection of computer science and economics, and has focused on the effect of complications like risk-aversion and asymmetry on simple games and auctions. Darrell has interned with eBay Research Labs, previously worked in finance, and ran a website to help cyclists find good roads to ride. Darrell will be at Harvard and MSR-NE throughout 2014.

The Truth Behind the Myth of the Folk Theorem

Lior Seeman, Cornell | Wednesday, March 19, 2014 | 2:30 PM – 3:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

The complexity of finding a Nash equilibrium (NE) is a fundamental question at the interface of game theory and computer science. We focus on the complexity of finding an epsilon-NE in repeated games. Earlier work by Borgs et al. [2010] suggests that this problem is computationally intractable. But if we take seriously the importance of efficiently finding an NE, it must be because we have computationally-bounded players in mind.

We show that if players are computationally bounded (modeled as polynomial-time Turing machines), and we make some standard cryptographic hardness assumptions (the existence of public-key encryption), then there exists a polynomial-time algorithm for finding an epsilon-NE in repeated games.

We additionally define and study an appropriate notion of a subgame-perfect equilibrium for computationally-bounded players, and show how to efficiently find such an equilibrium in repeated games as well (again, making standard cryptographic hardness assumptions). Our algorithm works not only for games with a finite number of players, but also for constant-degree graphical games.

Joint work with Joe Halpern and Rafael Pass.

Biography

Lior Seeman is a fourth-year PhD student in the Computer Science Department at Cornell University, advised by Joe Halpern and Rafael Pass. His research lies at the intersection of computer science, economics, social science and cognitive science. He is interested in using insights and techniques from one discipline to better understand and analyze concepts originating in other disciplines. More specifically, he uses ideas from computational complexity to model people’s bounded rationality, and uses that to better understand their decision-making processes and social interactions.

Welfare and Revenue Guarantees for Competitive Bundling Equilibrium

Inbal Talgam-Cohen, Stanford | Wednesday, March 12, 2014 | 2:30 PM – 3:30 PM | Harvard University, Littauer Center

Description

The economic notion of a competitive equilibrium aims to capture the steady state of combinatorial markets: prices create equality between demand and supply, thus stabilizing the market; moreover, prices provide the common information that drives an efficient allocation out of disparate individual decisions. It is well-known however that a competitive equilibrium is not guaranteed to exist when valuations are not gross substitutes. We study an alternative notion of equilibrium that accommodates bundles of goods, as found in many real-life markets (as a simple example consider a six-pack of soda). Our goal is to better understand the stabilizing role of bundles alongside prices, and the welfare and revenue properties of equilibria with bundles. We establish non-trivial welfare and revenue guarantees for competitive equilibrium with bundling in different classes of markets. In studying these properties we address the open question raised by Feldman-Gravin-Lucier in their recent STOC’13 paper.

Joint work with Shahar Dobzinski, Michal Feldman and Omri Weinstein.
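
For orientation (the standard textbook definition, in my phrasing), a competitive equilibrium of a market with item set M, valuations v_i, and item prices p is an allocation (S_1, …, S_n) together with prices such that every agent receives a bundle she demands and the market clears:

\[
S_i \;\in\; \arg\max_{S \subseteq M}\Bigl( v_i(S) - \sum_{j \in S} p_j \Bigr) \ \ \text{for every agent } i,
\qquad p_j = 0 \ \text{ for every unallocated item } j .
\]

The notion studied in the talk relaxes this by allowing bundles of goods to be priced and traded as indivisible units.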

Biography

Inbal Talgam-Cohen is a fourth-year computer science Ph.D. student and the Hsieh Family Interdisciplinary Graduate Fellow at Stanford University, advised by Prof. Tim Roughgarden. Her research interests lie in the intersection between computer science and economics, in particular mechanism design questions related to information and robustness. She’s been a research intern at MSR and Yahoo! and is editor-in-chief of ACM’s student magazine XRDS. She holds a Master’s degree in computer science from the Weizmann Institute of Science and a Bachelor’s degree in computer science and law from Tel-Aviv University in Israel, where she served as a legal clerk for the Honorable Justice Hayut.

Strategic Behavior in Unbalanced Matching Markets

Ran Shorrer, Harvard | Wednesday, Feb 19, 2014 | 2:30 PM – 3:30 PM | Microsoft Research, One Memorial Drive (1st Floor)

Description

In this work we explore how the balance of agents on the two sides of a matching market impacts their potential for strategic manipulation. Coles and Shorrer previously showed that in large, balanced, uniform markets using the Men-Proposing Deferred Acceptance Algorithm, each woman’s best response to truthful behavior by all other agents is to truncate her list substantially. In fact, the optimal degree of truncation for such a woman goes to 100% of her list as the market size grows large. Recent findings of Ashlagi et al. demonstrate that in unbalanced random markets, the change in expected payoffs is small when one reverses which side of the market “proposes,” suggesting there is little potential gain from manipulation.

Inspired by these findings, we study the implications of imbalance on strategic behavior in the incomplete information setting. We show that the “long” side has significantly reduced incentives for manipulation in this setting, but that the same doesn’t always apply to the “short” side. We also show that risk aversion and correlation in preferences affect the extent of optimal manipulation.

Joint work with Peter Coles and Yannai Gonczarowski
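
To make the truncation strategy above concrete, here is a compact men-proposing deferred acceptance routine (standard Gale–Shapley; the code and the tiny example are mine) in which one woman manipulates by submitting a shortened preference list:

def deferred_acceptance(men_prefs, women_prefs):
    """men_prefs / women_prefs: name -> ordered list of acceptable partners."""
    rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}
    match = {}                                   # woman -> man
    while free:
        m = free.pop()
        if next_choice[m] >= len(men_prefs[m]):
            continue                             # m has exhausted his list, stays single
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if m not in rank[w]:
            free.append(m)                       # w finds m unacceptable (e.g., truncated away)
        elif w not in match:
            match[w] = m
        elif rank[w][m] < rank[w][match[w]]:
            free.append(match[w])                # w trades up, her old partner proposes again
            match[w] = m
        else:
            free.append(m)
    return match

men = {"m1": ["w1", "w2"], "m2": ["w2", "w1"]}
women_truthful = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
women_truncated = {"w1": ["m2"], "w2": ["m1", "m2"]}       # w1 truncates after her favorite
print(deferred_acceptance(men, women_truthful))            # w1 ends up with m1
print(deferred_acceptance(men, women_truncated))           # w1 ends up with m2

In this two-by-two instance truncation moves w1 from her less-preferred to her more-preferred stable partner; the talk asks how much of this kind of leverage survives when the two sides of the market are unbalanced.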

Biography

Ran Shorrer is an economics PhD candidate at Harvard University and Harvard Business School. Before that he completed a master’s degree at the Center for the Study of Rationality at the Hebrew University. His research focuses on market design and economic theory.

Optimal Allocation of Public Services Without Monetary Transfers or Costly Signals

Itai Ashlagi, MIT | Wednesday, Dec 4, 2013 | 2:00 PM – 3:30 PM

Description

We study the social planner’s problem of designing a mechanism to optimally allocate a set of public services to a set of heterogeneous agents with private utilities, without the ability to differentiate agents by charging prices or requiring costly effort. This is motivated by the 2012-2013 Boston school choice reform, in which social planners had to design a school choice lottery to allocate public school seats among families from various neighborhoods, in order to balance efficiency, equity, and busing costs. We consider two types of mechanisms: cardinal mechanisms, which can use any information, and ordinal mechanisms, which can only use agents’ rankings of preferences but not preference intensities. We show that under assumptions of “soft capacity limits” and “Pareto optimality of interim allocation rules,” any valid interim cardinal allocation rule is “type-specific-pricing,” in which agents are given equal budgets of “virtual money” and buy the optimal probabilistic bundle of services given type-specific prices for each service. Under similar assumptions, any valid ordinal allocation rule is “lottery-plus-cutoff,” in which agents are given i.i.d. lottery numbers and services post type-specific lottery-cutoffs; agents get their most preferred service for which they do not exceed the cutoff. Given additional assumptions on the public objective function and allowing for linear budget constraints, we present an algorithm to efficiently compute the optimal ordinal mechanism. We apply this to real data from Boston and for each of the main plans proposed during the 2012-2013 reform, we compute a corresponding “optimal” plan that uses the same transportation budget but optimizes for efficiency and equity. We compare the plans and discuss potential policy insights.

Joint work with Peng Shi
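
A minimal sketch (my own illustration, with made-up names and numbers) of the “lottery-plus-cutoff” rule described above: each agent draws an i.i.d. lottery number, each service posts a cutoff for each agent type, and every agent receives her most-preferred service whose cutoff her number does not exceed.

import random

def lottery_plus_cutoff(agents, cutoffs, seed=0):
    """agents: name -> (type, ordered service preferences); cutoffs: (service, type) -> cutoff in [0, 1]."""
    rng = random.Random(seed)
    assignment = {}
    for name, (atype, prefs) in agents.items():
        number = rng.random()                    # i.i.d. lottery draw
        assignment[name] = next((s for s in prefs if number <= cutoffs[(s, atype)]), None)
    return assignment

agents = {"fam1": ("north", ["schoolA", "schoolB"]),
          "fam2": ("south", ["schoolA", "schoolB"])}
cutoffs = {("schoolA", "north"): 0.3, ("schoolA", "south"): 0.7,
           ("schoolB", "north"): 1.0, ("schoolB", "south"): 1.0}
print(lottery_plus_cutoff(agents, cutoffs))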

Biography

Itai Ashlagi is an Assistant Professor of Operations Management at the MIT Sloan School of Management. He is interested in game theory, mechanism design, and market design. He is the recipient of the Outstanding Paper Award at the 2009 ACM Conference on Electronic Commerce. He received the Terasaki award for novel developments in kidney exchange and the Services and Needs best paper award at INFORMS 2013. Before coming to MIT he spent two years as a postdoctoral researcher at Harvard Business School. Itai holds a BA in mathematics and computer science from Haifa University, and an MSc and PhD in operations research from the Technion-Israel Institute of Technology.

The Computational Hardness of Pricing Compound Options

Mark Braverman, Princeton | Monday, Nov 25, 2013 | 2:00 PM – 3:30 PM

Description

It is generally assumed that you can make a financial asset out of any underlying event or combination thereof, and then sell a security. We show that while this is theoretically true from the financial engineering perspective, compound securities might be intractable to price. Even given no information asymmetries, or adversarial sellers, it might be computationally intractable to put a value on these, and the associated computational complexity might afford an advantage to the party with more compute power. We prove that the problem of pricing an option on a single security with unbounded compounding is PSPACE hard, even when the behavior of the underlying security is computationally (poly-time) tractable. We also prove some hardness results in a more realistic bounded-compounding model, when the underlying asset is given by an oracle.

Joint work with Kanika Pasricha

Biography

Mark Braverman is an assistant professor of computer science at Princeton University, where he has been on the faculty since 2011. He received his PhD in 2008 from the University of Toronto under the supervision of Stephen Cook. Prior to joining Princeton he spent two years as a postdoc at Microsoft Research New England and a year as a faculty member at the University of Toronto. Braverman’s interests include complexity theory, information theory, the theory of computation in dynamical systems, algorithms, and applications of computer science and game theory in healthcare and medicine.

His recent work developed and expanded the connections between information theory and computational complexity, by extending the reach of Shannon’s information theory to interactive computation. He is a recipient of the Sloan fellowship, the NSF CAREER award, the Turing Centenary Fellowship, and the Packard Fellowship in Science and Engineering.

An Optimization Approach to Mechanism Design

Costis Daskalakis, MIT | Wednesday, Nov 13, 2013 | 2:00 PM – 3:30 PM

Description

I will present an optimization framework based on optimal transport theory characterizing the structure of revenue-optimal mechanisms in single-bidder multi-item settings. Our framework provides closed-form descriptions of mechanisms, generalizes Myerson’s celebrated single-item auction, and exhibits simple settings with very rich structure in the optimal mechanism. This part of the talk is based on work with Alan Deckelbaum and Christos Tzamos.

For the second part of the talk, which will be algorithmic, I will revisit the question posed by Nisan and Ronen in the birth of algorithmic mechanism design: How much harder is optimizing an objective over inputs that are furnished by rational agents compared to when the inputs are known? I will present a computationally efficient reduction from mechanism design (i.e. optimizing an arbitrary objective over rational inputs) to algorithm design (i.e. optimizing the same objective over known inputs) in general Bayesian settings. I will discuss applications of this framework to designing optimal mechanisms for max-min fairness and makespan. This part of the talk is based on work with Yang Cai and Matt Weinberg.

No knowledge of mechanism design will be assumed.

Biography

Constantinos Daskalakis is the X-Window Consortium Associate Professor of Computer Science at MIT. He holds a diploma in electrical and computer engineering from the National Technical University of Athens, and a Ph.D. in electrical engineering and computer sciences from UC Berkeley. His research interests lie in theoretical computer science and applied probability, with a focus on the computational aspects of the Internet, online markets, and social networks. Daskalakis has been honored with the 2007 Microsoft Graduate Research Fellowship, the 2008 ACM Doctoral Dissertation Award, the Game Theory and Computer Science Prize from the Game Theory Society, the 2010 Sloan Fellowship in Computer Science, the 2011 SIAM Outstanding Paper Prize, the 2011 Ruth and Joel Spira Award for Distinguished Teaching, and the 2012 Microsoft Research Faculty Fellowship.

The Power of Local Information in Social Networks

Michael Brautbar, MIT | Wednesday, Nov 6, 2013 | 2:00 PM – 3:30 PM

Description

We study the power of local information algorithms for optimization problems on social networks. We focus on sequential algorithms for which the network topology is initially unknown and is revealed only within a local neighborhood of vertices that have been irrevocably added to the output set. The distinguishing feature of this setting is that locality is necessitated by constraints on the network information visible to the algorithm, rather than being desirable for reasons of efficiency or parallelizability.

We study a range of problems under this model of algorithms with local information. We first consider the case in which the underlying graph is a preferential attachment network. We show that one can find the node of maximum degree in the network in a polylogarithmic number of steps, using an opportunistic algorithm that repeatedly queries the visible node of maximum degree. This addresses a decade-old open question of Bollobas and Riordan. In contrast, local information algorithms require a linear number of queries to solve the problem on arbitrary networks.
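
A rough sketch of the opportunistic algorithm just described (my interpretation of the local-information model, written against networkx; details surely differ from the paper): starting from an arbitrary node of a preferential-attachment graph, repeatedly query the highest-degree node visible so far, which reveals its neighborhood.

import networkx as nx

def opportunistic_max_degree(G, start, budget):
    """Query up to `budget` nodes, always picking the visible node of maximum degree."""
    visible, queried = set(G[start]) | {start}, set()
    for _ in range(budget):
        candidates = visible - queried
        if not candidates:
            break
        v = max(candidates, key=G.degree)        # jump to the visible node of max degree
        queried.add(v)
        visible |= set(G[v])                     # its neighborhood becomes visible
    return max(queried, key=G.degree)

G = nx.barabasi_albert_graph(10_000, 3, seed=0)  # preferential-attachment graph
found = opportunistic_max_degree(G, start=0, budget=50)
print(G.degree(found), "vs true maximum degree", max(dict(G.degree).values()))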

Motivated by problems faced by advertisers in online networks, we also consider network coverage problems such as finding a minimum dominating set. For this optimization problem we show that, if each node added to the output set reveals sufficient information about the set’s neighborhood, then it is possible to design randomized algorithms for general networks that nearly match the best approximations possible even with full access to the graph structure. We show that this level of visibility is necessary.

We conclude that a network provider’s decision of how much structure to make visible to its users can have a significant effect on a user’s ability to interact strategically with the network.

Biography

Michael (Mickey) Brautbar is currently a postdoctoral associate at MIT, affiliated with LIDS and hosted by Daron Acemoglu and Asu Ozdaglar. His main research interests lie in the theory of social and economic networks and in the algorithmic aspects of game theory. Before coming to MIT, he completed his PhD in computer and information science at the University of Pennsylvania, where he was advised by Michael Kearns. In the past he obtained B.Sc. and M.Sc. degrees in computer science (both summa cum laude) from the Hebrew University of Jerusalem.

The Demise of Walk Zones in Boston: Priorities vs. Precedence in School Choice

Scott Kominers, Harvard | Wednesday, Oct 30, 2013 | 2:00 PM – 3:30 PM

Description

School choice plans in many cities grant students higher priority for some (but not all) seats at their neighborhood schools. This paper demonstrates how the precedence order, i.e. the order in which different types of seats are filled by applicants, has quantitative effects on distributional objectives comparable to priorities in the deferred acceptance algorithm. While Boston’s school choice plan gives priority to neighborhood applicants for half of each school’s seats, the intended effect of this policy is lost because of the precedence order. Despite widely held impressions about the importance of neighborhood priority, the outcome of Boston’s implementation of a 50-50 school split is nearly identical to a system without neighborhood priority. We formally establish that either increasing the number of neighborhood priority seats or lowering the precedence order positions of neighborhood seats at a school has the same effect: an increase in the number of neighborhood students assigned to the school. We then show that in Boston a reversal of precedence with no change in priorities covers almost three-quarters of the range between 0% and 100% neighborhood priority. Therefore, decisions about precedence are inseparable from decisions about priorities. Transparency about these issues—in particular, how precedence unintentionally undermined neighborhood priority—led to the abandonment of neighborhood priority in Boston in 2013.

Biography

Scott Duke Kominers is a Junior Fellow at the Harvard Society of Fellows, a Research Scientist at the Harvard Program for Evolutionary Dynamics, and an Associate of the Harvard Center for Research on Computation and Society. From 2011 to 2013, he was the inaugural Research Scholar at the Becker Friedman Institute for Research in Economics at the University of Chicago.

Kominers received his A.B. in Mathematics and Ph.D. in Business Economics from Harvard University, in 2009 and 2011, respectively. His research focuses on market design and its interactions with law and computer science. His specific research interests include matching theory, eminent domain, law and economics, and quadratic form representation theory.

Optimal Competitive Auctions

Nick Gravin, MSR New England | Wednesday, Oct 23, 2013 | 2:00 PM – 3:30 PM

Description

In an effort to make progress on worst-case mechanism design, a great deal of work has been devoted to the following simple scenario: a single-round, sealed-bid auction for an item in unlimited supply (usually referred to as a digital good). A competitive auction must be truthful (i.e., it encourages buyers to bid truthfully), and on every input it must generate profit within a constant factor of a certain benchmark. The profit of the optimal single sale price is commonly used as a benchmark in the context of digital goods auctions.

I’ll present a few examples of competitive auctions as well as our work in progress with Ning Chen and Pinyan Lu. The optimal competitive ratio is known to be at least 2.42 due to Goldberg, Hartline, Karlin, and Saks (2004), who also conjectured that this lower bound must be tight. I will describe necessary and sufficient conditions for a benchmark to admit a truthful auction whose revenue is at least the benchmark’s value on every input. As a corollary we will see that the conjectured competitive ratio is indeed optimal. As another application we will see that the optimal auction has a competitive ratio of 1.25 for another economically meaningful benchmark: the revenue of the best Vickrey auction with limited supply.
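
For readers new to this literature, the single-price benchmark mentioned above is standardly taken to be the revenue of the best fixed price that sells to at least two bidders (my summary of the usual definition):

\[
\mathcal{F}^{(2)}(b) \;=\; \max_{2 \le i \le n} \; i \cdot b_{(i)},
\]

where b_{(1)} \ge b_{(2)} \ge \dots \ge b_{(n)} are the bids in decreasing order; a truthful auction is c-competitive if its expected profit is at least \mathcal{F}^{(2)}(b)/c on every bid vector b.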

Biography

Nick Gravin is a Postdoc with us at Microsoft Research New England. Nick has two PhD degrees: one from the Steklov Institute of Mathematics in Saint Petersburg and another from Nanyang Technological University in Singapore. His research interests are twofold. In Mathematics he works in graph theory and in convex and discrete geometry. In Theoretical Computer Science he is particularly interested in Algorithmic Mechanism Design and equilibrium computation.

Emissaries: How Bridges Affect Network Centrality

Ben Golub, Harvard | Wednesday, Oct 16, 2013 | 2:00 PM – 3:30 PM

Description

Consider two finite-state irreducible Markov chains. Each chain has a stationary distribution, which can be interpreted as measuring the ‘importance’ or ‘centrality’ of various states in a corresponding weighted directed graph of transitions. I’ll also mention some quite different applications in which stationary distributions (or, equivalently, eigenvector centrality measures) describe the ‘importance’ of nodes in an economic network.

Suppose we connect our two Markov chains by introducing a bridge — new transitions between two emissary states, one from each chain. How much of the stationary mass does each constituent chain get after the merger? We give an explicit formula in terms of the properties of the bridge: how central the emissaries are in their own chains, and how much weight they place on each other. Arbitrarily “small” bridges can have arbitrarily large effects on the post-merger stationary distribution, and a chain keeps more of the mass if its emissary was less central pre-merger. The proof, which is short enough to give in full, uses the characterization of the stationary distribution in terms of expected visits during sojourns in the Markov chain.

Applications include understanding why the small details of the present legislative standoff matter a lot for the world, and understanding how ‘social power’ is redistributed after two societies come into contact.
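
A small numerical illustration (my own example, not from the talk) of the merger question above: take two little chains, connect one emissary state in each by a bridge of tiny weight, and see how the stationary mass splits between the two halves.

import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P (left eigenvector for eigenvalue 1)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

A = np.array([[0.5, 0.5], [0.5, 0.5]])   # chain A; its emissary is state 0 of the merged chain
B = np.array([[0.1, 0.9], [0.5, 0.5]])   # chain B; its emissary (merged state 2) is less central within B
eps = 0.01                               # bridge weight: the emissaries place weight eps on each other
P = np.zeros((4, 4))
P[:2, :2], P[2:, 2:] = A, B
P[0, 2] = P[2, 0] = eps
P /= P.sum(axis=1, keepdims=True)        # renormalize rows after adding the bridge

pi = stationary(P)
print("mass kept by chain A:", round(pi[:2].sum(), 3), " chain B:", round(pi[2:].sum(), 3))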

Biography

Benjamin Golub is currently a Junior Fellow at the Harvard Society of Fellows, and will begin an appointment as an Assistant Professor at the Harvard Department of Economics in July 2015. He received a Ph.D. in Economics from the Stanford Graduate School of Business in 2012, and a B.S. in Mathematics from the California Institute of Technology in 2007. Ben’s research focuses on microeconomic theory, and in particular, social and economic networks: how these networks form when agents invest strategically in relationships, how information is transmitted through them, and how they mediate processes such as group cooperation. Applications of his research include measuring social segregation and understanding its consequences for polarization of beliefs or behaviors.

Two(!) Good To Be True

Sergiu Hart, Hebrew University of Jerusalem | Monday, Oct 7, 2013 | 2:00 PM – 3:30 PM

Description

How to sell goods optimally? While the mechanism-design literature has solved this problem neatly when there is only one good, the multiple goods case turns out to be extremely difficult, mathematically and conceptually. Much of what is true for one good does not extend to multiple goods. We will try to explain the difficulties, show what can go wrong, and then present some universal approximation results. The talk is essentially self-contained; no background in mechanism design is necessary.
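
As a point of reference for the one-good case mentioned above (a standard fact, worked here only for orientation): with a single buyer whose value is drawn from a distribution F, the optimal mechanism is simply a posted price,

\[
\mathrm{Rev}^{*}(F) \;=\; \max_{p \ge 0} \; p\,\bigl(1 - F(p)\bigr),
\qquad\text{e.g., } F = U[0,1]:\ p^{*} = \tfrac{1}{2},\ \ \mathrm{Rev}^{*} = \tfrac{1}{4},
\]

and it is precisely this kind of clean characterization that fails to carry over once there are two or more goods.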

Biography

Sergiu Hart is the Kusiel-Vorreuter University Professor, Professor of Mathematics, and Professor of Economics at the Hebrew University of Jerusalem. From 1991 to 1998 he was the founding director of the highly reputed Center for the Study of Rationality. Besides game theory and economic theory, he has contributions in mathematics, computer science, probability and statistics. He is known for studies of strategic foundations of cooperation; strategic use of information in long-term interactions (“repeated games”); adaptive and evolutionary dynamics, particularly with boundedly rational agents; perfect economic competition and its relations to models of fair distribution; and riskiness. He is a Fellow of the Econometric Society and a Member of the Israel Academy of Sciences and Humanities. In 1998 he received the Rothschild Prize. He served as president of the Israel Mathematical Union in 2005–2006, and as president of the Game Theory Society in 2008–2010.

On the Value of using Group Discounts under Price Competition

Reshef Meir, Harvard | Wednesday, Oct 2, 2013 | 2:00 PM – 3:30 PM

Description

The increasing use of group discounts has provided opportunities for buying groups with diverse preferences to coordinate their behavior in order to exploit the best offers from multiple vendors. We analyze this problem from the viewpoint of the vendors, asking under what conditions a vendor should adopt a volume-based price schedule rather than posting a fixed price, either as a monopolist or when competing with other vendors. When vendors have uncertainty about buyers’ valuations specified by a known distribution, we show that a vendor is always better off posting a fixed price, provided that buyers’ types are i.i.d. and that other vendors also use fixed prices. We also show that these assumptions cannot be relaxed: if buyers are not i.i.d., or other vendors post discount schedules, then posting a schedule may yield higher profit for the vendor. We provide similar results under a distribution-free uncertainty model, where vendors minimize their maximum regret over all type realizations.

Biography

Reshef Meir is a post-doctoral fellow at Harvard SEAS and at the Center for Research on Computation and Society. He was a PhD student at the Hebrew University of Jerusalem and an intern at MSR Israel. He is mainly interested in the design and analysis of mechanisms that encourage self-interested agents to cooperate.

Predictability and Power in Legislative Bargaining

Nageeb Ali, University of California-San Diego | Wednesday, Sept 25, 2013 | 2:00 PM – 3:30 PM

Description

This paper examines the effect of the predictability of recognition processes on the concentration of political power in legislative bargaining. For a broad class of legislative bargaining games, we identify a mild predictability condition on the recognition rule, requiring an ability to rule out some minimum number of legislators as the next proposer, under which Markovian equilibria deliver all economic surplus to the first proposer. That result holds irrespective of whether legislators are patient or impatient. When legislators can be nearly certain that the next proposer belongs to a class of the requisite size, the first proposer receives nearly all of the surplus.

Biography

Nageeb Ali is an assistant professor of economics at UCSD, and is visiting Microsoft Research for the month of September as a Consulting Researcher. His focus is microeconomic theory, and his interests range from multilateral bargaining to studying self-control problems to the role of networks in contract enforcement.

Repeated Population Games of Incomplete Information I: Compressed Nash Equilibrium

Ehud Kalai, MSR New England | Wednesday, Sept 18, 2013 | 2:00 PM – 3:30 PM

Description

In equilibria of large (many-player) Bayesian games, stability properties fail when player types are interdependent. But when the game is played repeatedly, the players “learn to be independent” and stability is restored.

To conduct robust tractable analysis of such games, we develop a new equilibrium concept called compressed equilibrium. Defined for finite population games, compressed equilibrium is significantly easier to compute, is independent of the number of players and repetitions of the game, and it becomes an ε-Nash equilibrium as the number of players increases.

Biography

Ehud Kalai works in game theory and its interface with economics, computer science, and other areas. He is a Principal Researcher at Microsoft Research New England and the James J. O’Connor Distinguished Professor of Decision and Game Sciences at Northwestern University, where he is the director of the Center for Game Theory and Economic Behavior. He holds an AB (UC Berkeley, 1967), a PhD (Cornell, 1972), and an Honorary Doctorate (University of Paris, 2010) in mathematics. Kalai is the founding Editor of Games and Economic Behavior, and a co-founder and past President of the Game Theory Society. He is a Fellow of the American Academy of Arts and Sciences and of the Econometric Society.