
Act 2 Enforcement for Antitrust and Algorithms

If algorithms can learn and develop means of achieving business efficiencies beyond the initial parameters set by their programmers, regulators will need to employ creative means of enforcing antitrust law against the resulting violations. The lack of clarity on whether agreements under section 1 of the Sherman Antitrust Act have been made, and on which parties were involved in making them, cuts in favor of finding liability under section 2.

Pricing algorithms have generally been used to procompetitive ends. With the ability to process huge quantities of data efficiently and respond to consumers almost immediately, more and more businesses are adopting pricing algorithms. The uptick in algorithm adoption and, consequently, in data accessibility has created price transparency, which benefits consumers, who generally like to compare prices before purchasing. In a world where firms adopt proprietary pricing algorithms, anticompetitive effects are not immediately apparent.

Of course, things aren’t that simple. The development of artificial intelligence has created a world where algorithms are not solely the product of human creation. Where algorithms were previously limited by the parameters outlined by their programmers, artificial intelligence and neural networks have opened the door for algorithms to dynamically derive targeted outcomes. This integration of artificial intelligence, neural networks, and traditional algorithmic coding raises the risk of antitrust liability for well-meaning firms. Pricing algorithms may integrate with competing algorithms and produce price fixing absent any human agreement. The development of these intelligent algorithms has firms concerned about their exposure to antitrust liability in situations where they are passive participants in cartels formed by their own pricing algorithms.
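To see how that could happen without any human agreement, consider a deliberately simplified sketch in Python. The costs, the monopoly benchmark, and the follow-the-rival rule below are illustrative assumptions, not a description of any real firm’s software; the point is only that two independently coded bots, each observing nothing but the other’s public price, can ratchet a market upward on their own.

```python
# Hypothetical sketch: two independently written pricing bots, each able to
# see only the rival's public price, ratchet a market up to a monopoly price
# with no human agreement. All numbers and rules here are assumptions chosen
# for illustration.

COST = 10.0            # assumed marginal cost for both firms
MONOPOLY_PRICE = 25.0  # assumed price a single monopolist would charge

def reprice(own: float, rival: float) -> float:
    """One firm's unilateral rule: match upward moves, probe higher when
    matched, and never price below cost when undercut."""
    if rival > own:
        return min(rival, MONOPOLY_PRICE)      # follow the rival upward
    if rival == own:
        return min(own + 1.0, MONOPOLY_PRICE)  # test a slightly higher price
    return max(rival, COST)                    # match downward, floor at cost

def simulate(rounds: int = 20) -> list[tuple[float, float]]:
    a, b = 12.0, 11.0  # arbitrary competitive starting prices
    history = []
    for _ in range(rounds):
        a = reprice(a, b)  # firm A reprices after observing B's price
        b = reprice(b, a)  # firm B reprices after observing A's new price
        history.append((a, b))
    return history

if __name__ == "__main__":
    for rnd, (a, b) in enumerate(simulate(), start=1):
        print(f"round {rnd:2d}: firm A = {a:5.2f}, firm B = {b:5.2f}")
```

Run for twenty rounds, both bots settle at the assumed monopoly price within fifteen rounds and stay there. Neither programmer wrote an agreement; each wrote a unilateral rule that happens to be mutually reinforcing, which is exactly the enforcement gap described above.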

The European Commissioner for Competition, Margrethe Vestager, has noted that the onus is on the businesses that implement algorithms to ensure that no antitrust or competition violations arise, even when a firm “may not always know exactly how an automated system will use its algorithms to make decisions.” Vestager’s comments imply that algorithms should be treated as employees or agents of the firm for compliance purposes. They also seem to imply that algorithms can be limited in their capacity to collude.

Though the Poster Case did not present complex questions about whether an agreement to restrict trade existed, the use of algorithms can frustrate the ability of antitrust enforcers to determine when, and whether, agreements are being made. This obscuring of restrictive agreements complicates Commissioner Vestager’s hardline stance on holding businesses responsible for robotic “agreements.” Rather than seeking out constructive agreements between firms and algorithms, good-faith enforcement of antitrust law may force regulators to pursue harms to competition under section 2 of the Sherman Act.

If we accept the premise that dynamic algorithms can target business efficiencies without regard for legal parameters, and that algorithmic integration across firms is possible, it seems likely that a network of smart algorithms will form price agreements resulting in price stagnation. In a world of algorithmic uniformity, regulators may find success with a theory of algorithmic monopolization. Algorithms interacting with and learning from each other may integrate to the point where there is no practical distinction between Firm A’s algorithm and Firm B’s algorithm. Where one de facto algorithm operates in a given product market, there could be a plausible monopoly in price setting and product marketing.
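That uniformity can arise even without direct integration, as a second hypothetical sketch shows: two firms whose engineers never communicate, each optimizing against the same publicly observable demand data, can end up with pricing functions that are computationally indistinguishable. The linear demand curve and both firms’ methods below are assumptions chosen for illustration.

```python
# Hypothetical sketch of a "de facto single algorithm": Firm A and Firm B
# implement pricing differently, but both optimize against the same publicly
# observable demand curve (q = 30 - p, an assumption for illustration) and
# so converge on identical behavior.

import numpy as np

COST = 10.0  # assumed common marginal cost

def firm_a_price() -> float:
    """Firm A: brute-force grid search over candidate prices."""
    grid = np.linspace(COST, 30.0, 2001)
    profits = (grid - COST) * np.maximum(0.0, 30.0 - grid)
    return float(grid[np.argmax(profits)])

def firm_b_price() -> float:
    """Firm B: closed-form optimum of profit (p - c)(30 - p)."""
    return (30.0 + COST) / 2.0

if __name__ == "__main__":
    # Both firms arrive at the same price despite sharing no code.
    print(f"{firm_a_price():.2f}, {firm_b_price():.2f}")  # both print 20.00
```

Different code, identical conduct: if no input can distinguish Firm A’s pricing from Firm B’s, treating the pair as a single price-setting entity becomes at least conceptually coherent.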

Following the Supreme Court’s analysis of section 2 violations, integrated algorithms would certainly exist as monopolists exercising their market power to maintain control of the market. See United States v. Grinnell Corp., 384 U.S. 563, 570-71 (1966) (stating that the offense of monopolization contains both an element of monopoly power in the relevant market and the willful acquisition or maintenance of that power).

Strict enforcement of antitrust law in this area may lead corporations to remove artificial intelligence from their pricing algorithms and constrain the software on the front end. Antitrust law usually seeks to avoid over-deterring conduct that could produce beneficial outcomes for consumers. However, the business efficiencies offered by dynamic algorithms may carry too much risk of consumer harm. To preserve competition, we may need to broadly restrict the use of artificial intelligence and software integration.*
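As a closing illustration, constraining the software on the front end might look like a hard cap applied before any model output reaches the market. The wrapper below is a minimal sketch under assumed numbers (the unit cost and markup ceiling are invented for illustration), not a compliance program or a legal safe harbor.

```python
# Hypothetical front-end constraint: whatever the adaptive pricing model
# proposes is clamped into a pre-vetted band before it goes live. The
# cost figure and markup cap are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PriceGuardrail:
    unit_cost: float
    max_markup: float = 0.5  # hard ceiling at cost * (1 + max_markup)

    def approve(self, proposed_price: float) -> float:
        """Clamp the model's proposal, regardless of rival behavior."""
        ceiling = self.unit_cost * (1.0 + self.max_markup)
        return min(max(proposed_price, self.unit_cost), ceiling)

guard = PriceGuardrail(unit_cost=10.0)
assert guard.approve(25.0) == 15.0  # supra-competitive proposal is capped
assert guard.approve(7.0) == 10.0   # below-cost proposal is floored
```

The tradeoff is visible in the code itself: the cap discards any efficiency the model finds above the ceiling, which is the over-deterrence concern in miniature.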

*Jordan Wampler is an associate editor on the Michigan Technology Law Review.
