MTTLR | Michigan Telecommunications and Technology Law Review

Recent Articles

Association for Molecular Pathology v. Myriad Genetics: A Critical Reassessment

By  Jorge L. Contreras
Article, Fall 2020
Read More

From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation

By  David Nersessian & Ruben Mancha
Article, Fall 2020
Read More

Will the 'Legal Singularity' Hollow Out Law's Normative Core?

By  Robert F. Weber
Article, Fall 2020
Read More

Antitrust Overreach: Undoing Cooperative Standardization in the Digital Economy

By  Jonathan M. Barnett
Article, Spring 2019
Read More

Bank On We The People: Why and How Public Engagement Is Relevant to Biobanking

By  Chao-Tien Chang
Article, Spring 2019
Read More

Recent Notes

The Contribution of EU Law to the Regulation of Online Speech

By  Luc von Danwitz
Note, Fall 2020
Read More

The Unified Patent Court and Patent Trolls In Europe

By  Jonathon I. Tietz
Note, Spring 2019
Read More

Blog Posts

Political neutrality in content moderation compels private speech

Much of online life today takes place on social media platforms, which have become venues for communicating ideas of every kind. Platforms establish community guidelines and moderate content for a variety of reasons. After case law suggested that platforms could become liable for user content when they moderated “bad” content, Congress passed Section 230. That provision shields platforms from liability for content provided by users while allowing good-faith moderation without revoking the shield. It has let platforms write their own terms of service and define what kind of content users may post: if content violates the terms of use or is otherwise objectionable, a platform can remove it without fear of being treated as the publisher of the content on its site, rather than leaving all content untouched to avoid incurring liability. Recently, Section 230 has come under fire, specifically because it protects moderation that is not politically neutral on some of the largest internet platforms. Several bills have been introduced to address this and mandate neutrality in content moderation. The problem with this approach is that it would compel social media platforms to host content they do not want to host, and forcing a private company to do so violates its First Amendment rights. The First Amendment protects freedom of speech in the U.S., but Section 230 provides enhanced protections. Congress conferred a benefit on internet platforms in the form of liability protections, which allow platforms to operate without fear of overwhelming lawsuits over user-posted content. It also allows platforms the freedom... read more

Uncovering the Burial of Transformative Trademark & Copyright Measures in Congress’ 2021 Stimulus Package: Protections to Come for Content Creators

The recently passed stimulus package quietly incorporates consequential changes to American intellectual property law via the advent of the Trademark Modernization Act of 2020 (“the TMA”), the Copyright Alternative in Small-Claims Enforcement Act of 2020 (the “CASE Act”), and the Protecting Lawful Streaming Act (the “PLSA”).

On December 21, 2020, about eight months into the sudden and persistent COVID-19 pandemic, Congress swiftly passed the Consolidated Appropriations Act, 2021 (“the Act”), a long-awaited bill focused on providing another round of pandemic relief and economic stimulus and avoiding a government shutdown. Six days later, then-President Donald Trump signed the Act into law.

Buried within $900 billion in stimulus provisions and a $1.4 trillion federal agency funding deal, the Act includes provisions that amend trademark and copyright law and thus affect creators in the booming digital economy. The TMA, the CASE Act, and the PLSA will offer trademark and copyright owners, and thereby many content creators, meaningful benefits, including (i) making it easier for trademark owners to obtain injunctive relief; (ii) creating a small-claims forum for copyright infringement disputes; and (iii) making unlawful streaming of copyrighted material a felony.

The Trademark Modernization Act (TMA) of 2020: Resolving a Circuit Split

The TMA, among other initiatives to expand and fortify the accuracy and integrity of the federal trademark register, settles a long-standing circuit split: whether the Supreme Court’s ruling in eBay, Inc. v. MercExchange LLC, 547 U.S. 388 (2006) (holding that irreparable harm could not be presumed in a patent infringement lawsuit) applies to trademark infringement. Historically, to obtain preliminary injunctive relief, the movant has had the burden... read more

Trans-Atlantic Data Transfers After Schrems II

In July 2020, the European Court of Justice issued Schrems II, an opinion finding the EU–US Privacy Shield insufficient to guarantee compliance with EU data protection law. The decision marked the second time the ECJ had invalidated a data privacy adequacy decision between the EU and the US, once again upending an arrangement meant to safeguard trans-Atlantic data transfers without compromising US national security activities. Consequently, US companies that house or process EU data outside of the EU are now exposed to serious liability when they send data across the Atlantic, something many companies do in the regular course of business. Schrems II left open a potential means of escaping liability through Standard Contractual Clauses (SCCs), but the ECJ seemed poised to invalidate that mechanism the next time it comes under its scrutiny. The decision arises out of the acutely protective approach the EU takes to data privacy. In the EU, “[p]rivacy rights are given the status of a fundamental right,” enshrined in the EU Charter of Fundamental Rights and formally guaranteed to all EU citizens under the 2009 Lisbon Treaty. In addition to the general privacy protections it provides, the Charter specifically establishes a “right to the protection of personal data concerning him or her.” That right includes a guarantee that an EU citizen’s data will be processed fairly and only for “specified purposes.” According to the EU supervisory data authority, the right to be “in control of information about yourself…plays a pivotal role” within the notion of dignity enshrined in the Charter. Against this historical backdrop, the EU enacted the GDPR, which came into effect in... read more

AI v. Lawyers: Will AI Take My Legal Job?

Artificial Intelligence (AI) is changing the global workforce, generating fears that it will put masses of people out of work. Indeed, some job loss is likely as computers, intelligent machines, and robots take over certain tasks done by humans. For example, passenger cars and semi-trailer trucks will be able to drive themselves in the future, which means there won’t be a need for quite as many drivers. Elon Musk, the co-founder and CEO of Tesla and SpaceX, predicts that so many jobs will be replaced by intelligent machines and robots that eventually “people will have less work to do and ultimately will be sustained by payments from the government.” The World Economic Forum concluded in a recent report that “a new generation of smart machines, fueled by rapid advances in artificial intelligence (AI) and robotics, could potentially replace a large proportion of existing human jobs.”

All of this raises the question of whether lawyers and even judges will eventually be replaced with algorithms. As one observer noted, “The law is in many ways particularly conducive to the application of AI and machine learning.” For example, legal rulings in a common law system involve deriving axioms from precedent, applying those axioms to the particular facts at hand, and reaching conclusions accordingly. Similarly, AI systems learn to make decisions based on training data and apply the inferred rules to new situations. A growing number of companies are building machine-learning models that assess a host of factors, from the body of relevant precedent and the venue to a case’s particular fact pattern, to predict the outcomes of pending cases.... read more

Deep-Fake, Real Pain: The Implications of Computer Morphing on Child Pornography

The proliferation of “deep-fake” internet videos—in which a person in an existing video is replaced with the likeness of another—has called into question our most basic method for perceiving the world: using our own eyes. While the definition of deep-fake shifts as the technology develops, the term generally refers to the use of machine learning to replace the face of one individual with that of another. Troublingly, deep-fakes have changed the landscape of digital pornography. Advances in computer morphing software have produced a new category of child pornography: “morphed” child pornography, in which a child’s face is virtually superimposed onto the body of an adult performing sexually explicit acts. Today, this rapidly changing technology has created an unresolved legal question: is “morphed” child pornography protected under the First Amendment? In February 2020, the Fifth Circuit Court of Appeals weighed in on the debate in United States v. Mecham. When Clifford Mecham Jr. took his computer to a technician for repairs, the technician discovered thousands of images depicting the nude bodies of adults with the faces of children superimposed. Once notified, the Corpus Christi Police Department seized several hard drives, revealing over 30,000 pornographic photos and videos containing “morphed” child pornography. The Fifth Circuit affirmed Mecham’s conviction but remanded his case to reduce his sentence, holding that the sentencing enhancement for “sadistic or masochistic conduct” does not apply to morphed child pornography because there are no depictions of “contemporaneous infliction of pain.” While child pornography is not protected under the First Amendment, virtual child pornography, sexually explicit images created with adults who look like minors or created solely by... read more

Patent Trolls Show Immunity to Antitrust: Patent Trolls Unscathed by Antitrust Claims from Tech-Sector Companies

Patent trolls have become a force to be reckoned with for tech-sector companies in the United States, and those companies’ recent failure to use antitrust law against patent trolls suggests that force will persist. Patent trolls have long been a thorn in the side of tech-sector companies. The term “patent troll” is the pejorative pop-culture label for the group of firms also known as non-practicing entities, patent assertion entities, and patent holding companies. These entities buy patents not to use the patented technology but to sue companies for patent infringement. Patent trolls accounted for around 85% of patent litigation against tech-sector companies in 2018. Moreover, compared with the first four months of 2018, the first four months of 2020 saw a 30% increase in patent litigation brought by patent trolls. At a high level, antitrust law appears to be a proper tool for wrangling patent trolls. Antitrust law cracks down on anticompetitive agreements and monopolies for the sake of promoting consumer welfare. Patents are effectively legal monopolies over a claimed invention, and patent trolls use these legal monopolies to instigate frivolous patent infringement lawsuits against companies. Such lawsuits increase litigation and licensing costs for companies, which can then pass those costs on to downstream consumers through higher product prices. In an attempt to go on the offensive, tech-sector companies have brought antitrust claims against patent trolls. These claims have operated on one of two theories. In Intellectual Ventures I LLC v. Capital One from 2017, Capital One counterclaimed for antitrust remedies on the basis of a patent troll suing... read more

View More Recent Articles