

Working Knowledge # 5: Collusion Course

February 21, 2021

As corporations and consumers have continued to integrate artificial intelligence (AI) into everyday life, scientists, philosophers, and tech enthusiasts have raised concerns about the dangers of AI. Although pop culture is filled with dystopian AI books and movies, the fear of AI overtaking humanity remained science fiction until recently. The 2020 U.S. presidential campaign cycle brought AI closer to the mainstream with Andrew Yang’s bid for the Democratic nomination and increasing scrutiny of social media’s content moderation policies. In 2015, Elon Musk, Stephen Hawking, and many others signed an open letter urging broader consideration for society as AI research advances. Tucked into that letter is a research priorities document that raises concerns about the unintended consequences of AI. The document lists “Optimizing AI’s Economic Impact” as the first short-term research priority: the optimal state is one in which society maximizes AI’s economic benefits while mitigating its negative impacts.

Unintended Consequences

Among the negative side effects, the letter’s authors and other researchers have identified the broad category of “market disruptions” as a possible unintended consequence. Market disruptions are not necessarily bad; iTunes disrupted the music industry by allowing consumers to purchase singles instead of whole albums, and Spotify later disrupted the industry again with music streaming. But there are bad market disruptions, such as collusion. Collusion occurs when rival firms cooperate to raise profits above the competitive equilibrium level, damaging other stakeholders such as consumers. For profits to be maintained at “supracompetitive” levels, colluders need to (1) agree on a course of action, (2) monitor each other’s adherence to this course of action, and (3) punish those who deviate from the agreed plan. A textbook example was revealed in 2020, when the U.S. government indicted four chicken-company executives for a conspiracy to fix prices for broiler chickens:

Beginning at least as early as 2012 and continuing through at least early 2017, the exact dates being unknown to the Grand Jury, in the State and District of Colorado and elsewhere, JAYSON PENN, MIKELL FRIES, SCOTT BRADY, and ROGER AUSTIN (“Defendants”), together with co-conspirators known and unknown to the Grand Jury, entered into and engaged in a continuing combination and conspiracy to suppress and eliminate competition by rigging bids and fixing prices and other price related terms for broiler chicken products sold in the United States...It was further part of the conspiracy that PENN, FRIES, BRADY, and AUSTIN, together with their co-conspirators, in the State and District of Colorado and elsewhere, utilized that continuing network [of Suppliers and co-conspirators]:

  1. to reach agreements and understandings to submit aligned, though not necessarily identical, bids and to offer aligned, though not necessarily identical, prices, and price-related terms, including discount levels, for broiler chicken products sold in the United States;
  2. to participate in conversations and communications relating to nonpublic information such as bids, prices, and price-related terms, including discount levels, for broiler chicken products sold in the United States with the shared understanding that the purpose of the conversations and communications was to rig bids, and to fix, maintain, stabilize, and raise prices and other price-related terms, including discount levels, for broiler chicken products sold in the United States;
  3. to monitor bids submitted by, and prices and price-related terms, including discount levels, offered by, Suppliers and co-conspirators for broiler chicken products sold in the United States.

Although collusion is difficult to identify due to its inherently furtive nature, regulators and lawyers have an easier time proving explicit collusion, much like this chicken conspiracy, because there is direct communication and cooperation between competing firms. And when the chickens come home to roost, regulators can assign blame and shame in a straightforward fashion. Firms can also engage in tacit or implicit collusion, where anticompetitive coordination occurs without explicit agreements, producing the same supracompetitive outcome as explicit collusion. For example, if a market-leading firm were to raise prices and all of its competitors followed suit, raising their prices as well, that would be implicit collusion. Each firm would have reached a logical, independent decision to improve its own profit, and the result would be a supracompetitive outcome.
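A toy calculation shows why following the leader can be each firm’s independently rational choice: matching a price increase forgoes a one-time gain from undercutting in exchange for a durable stream of higher profits, so long as rivals would retaliate against a defector. The sketch below is illustrative only; every figure in it is an assumption chosen to make the arithmetic visible, not data from any real market.

```python
# Toy repeated-game arithmetic behind tacit collusion (all figures assumed).
# A follower compares two profit streams: match the leader's higher price
# forever, or undercut once and face competitive pricing ever after.
HIGH_PRICE_PROFIT = 1.25    # per-period profit share if everyone prices high
ONE_SHOT_DEVIATION = 2.50   # single-period profit from undercutting the rivals
COMPETITIVE_PROFIT = 0.25   # per-period profit once rivals retaliate
DISCOUNT = 0.95             # how heavily the firm weighs future profits

def present_value(per_period_profit, discount=DISCOUNT):
    # Value today of receiving the same profit every period, forever.
    return per_period_profit / (1 - discount)

follow = present_value(HIGH_PRICE_PROFIT)
deviate = ONE_SHOT_DEVIATION + DISCOUNT * present_value(COMPETITIVE_PROFIT)
print(f"match the leader: {follow:.2f} vs. undercut once: {deviate:.2f}")
# -> match the leader: 25.00 vs. undercut once: 7.25
# With these assumed numbers, matching is each firm's rational, independent
# choice, so prices stay supracompetitive without any explicit agreement.
```

Note how the calculation bakes in the three conditions above: a shared course of action (the high price), monitoring (each firm observes the others’ prices), and punishment (reversion to competitive pricing after a defection).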

Four economists have found that AI-powered algorithms can learn to tacitly collude and charge supracompetitive prices. Although this was only an experiment, and no real-world case of AI price fixing has been confirmed (or, at least, detected), the research indicates that algorithmic collusion is a real-life possibility. If algorithmic collusion were to occur, regulators would encounter unique challenges in bringing algorithms to heel. We will focus on two key challenges: the legal definition of collusion and liability assignment.
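To make the idea concrete, here is a minimal Python sketch of the kind of simulation involved: two tabular Q-learning agents repeatedly set prices in a simplified duopoly, each observing only the previous round’s prices. The price grid, demand model, and learning parameters are all illustrative assumptions, not the economists’ actual setup, and whether a given run settles above the competitive price depends on the seed and parameters; the cited research reports that supracompetitive outcomes arise systematically across such experiments.

```python
# Minimal sketch of two Q-learning pricing agents in a repeated duopoly.
# All parameters below are illustrative assumptions, not the study's setup.
import itertools
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]  # discrete price grid (assumed)
COST = 0.5                          # marginal cost (assumed)
N = len(PRICES)

def profits(p1, p2):
    """Simple Bertrand demand: the cheaper firm takes the market, ties split."""
    if p1 < p2:
        return (p1 - COST, 0.0)
    if p2 < p1:
        return (0.0, p2 - COST)
    half = (p1 - COST) / 2
    return (half, half)

class QPricer:
    """Tabular Q-learner: state = last round's price pair, action = own price."""
    def __init__(self, alpha=0.1, gamma=0.95):
        self.q = {(s, a): 0.0
                  for s in itertools.product(range(N), repeat=2)
                  for a in range(N)}
        self.alpha, self.gamma = alpha, gamma

    def act(self, state, eps):
        if random.random() < eps:        # occasional random exploration
            return random.randrange(N)
        return max(range(N), key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in range(N))
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

def simulate(rounds=200_000, seed=0):
    random.seed(seed)
    firm_a, firm_b = QPricer(), QPricer()
    state = (0, 0)  # both firms start at the lowest price
    for t in range(rounds):
        eps = max(0.01, 0.99995 ** t)    # exploration decays over time
        i, j = firm_a.act(state, eps), firm_b.act(state, eps)
        r_a, r_b = profits(PRICES[i], PRICES[j])
        next_state = (i, j)              # each firm "monitors" the other's price
        firm_a.learn(state, i, r_a, next_state)
        firm_b.learn(state, j, r_b, next_state)
        state = next_state
    return state

if __name__ == "__main__":
    i, j = simulate()
    print(f"Final prices: {PRICES[i]:.2f} and {PRICES[j]:.2f} "
          f"(competitive floor {PRICES[0]:.2f}, monopoly {PRICES[-1]:.2f})")
```

Nothing in this sketch communicates intent between the two agents; each only observes prices and profits. Any supracompetitive prices that emerge are learned rather than agreed, which is precisely what makes the scenario hard to police under agreement-based antitrust rules.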

A Legal Framework

The first challenge would be redefining what constitutes a collusive agreement in the legal framework. The current legal framework focuses on the means that competitors use to achieve a collusive outcome, in contrast with the economic approach, which focuses on the collusive outcome itself. Antitrust laws therefore prohibit anticompetitive agreements, but not the supracompetitive outcome itself. Explicit collusion clearly violates bright-line rules and standards of anticompetitive conduct, but tacit collusion does not. If competitors are truly acting independently and rationally within their markets, and supracompetitive prices are the natural outcome, then tacit collusion is beyond the reach of the law. In particular markets and sets of circumstances, algorithms may make competitors interdependent without direct communication or explicit agreements, increasing the risk of tacit collusion and economic harm to consumers. Although algorithmic collusion is currently theoretical, regulators should consider revisiting and updating the existing antitrust framework (subject to future research demonstrating the real-world possibility of algorithmic collusion) to bring algorithms under their jurisdiction.

Even if algorithmic collusion were brought under antitrust purview, regulators would face challenges in determining and assigning antitrust liability. The scope of liability will likely be determined by the strength of the relationship between the principal (i.e., a firm or individual that owns and controls assets, both physical and intangible) and the agent (i.e., the individual or entity that the principal has appointed to act on its behalf). The narrowest interpretation of the principal-agent relationship would view AI simply as a tool, attributing liability to the firm or human operator. This approach was applied to the accidental death of a pedestrian hit by an autonomous Uber vehicle, where liability was assigned to the human operator. In that case, the human operator, who was acting as the backup driver and was supposed to take control in emergencies, was looking toward the vehicle’s console, where her phone was placed, before the collision. Although the vehicle’s programming had shortcomings that prevented it from activating emergency braking, the National Transportation Safety Board had a clear sense of the principal-agent relationship between Uber’s backup driver and the autonomous vehicle, viewed the vehicle as a tool that was improperly operated, and therefore placed responsibility on the operator. The accident occurred in a simple environment that makes the tool analogy valid: one company, one algorithm, one human operator who was determined to be negligent, one affected party, and easily observable conditions. However, a more complex environment, with more autonomous algorithms and weaker principal-agent relationships, would highlight the challenges of liability assignment.

Regulators will encounter difficulties in assigning liability when multiple competitors, employing multiple human operators and multiple algorithms, violate antitrust laws. In determining liability, a regulator would likely have to establish the intent to collude, whether competitors could or should reasonably have known that their algorithms would lead to an anticompetitive outcome, and whether the firms exercised oversight of the algorithms to ensure compliance with antitrust law. Intent to collude aside, it will be difficult to prove that a company or set of competitors should have known that their algorithms were violating antitrust laws. The difficulty lies in how an algorithm operates in a live environment, where multiple variables and inputs, including the actions of competitors’ algorithms, factor into its behavior. The firms themselves will likely not know that their algorithms have created an anticompetitive environment.

As algorithmic collusion is still theoretical, we won’t go further down the rabbit hole. But the economists who demonstrated the possibility of algorithmic collusion provide a useful lesson in considering the second- and third-order effects of AI. They also illustrate that AI’s ripple effects can be predicted to a certain degree, and that new points of view can broaden the scope of AI research.
