MTTLR | Michigan Telecommunications and Technology Law Review

Recent Articles

Association for Molecular Pathology v. Myriad Genetics: A Critical Reassessment

By Jorge L. Contreras
Article, Fall 2020

From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation

By David Nersessian & Ruben Mancha
Article, Fall 2020

Will the 'Legal Singularity' Hollow Out Law's Normative Core?

By Robert F. Weber
Article, Fall 2020

Antitrust Overreach: Undoing Cooperative Standardization in the Digital Economy

By Jonathan M. Barnett
Article, Spring 2019

Bank On We The People: Why and How Public Engagement Is Relevant to Biobanking

By Chao-Tien Chang
Article, Spring 2019

Recent Notes

The Contribution of EU Law to the Regulation of Online Speech

By Luc von Danwitz
Note, Fall 2020

The Unified Patent Court and Patent Trolls in Europe

By Jonathon I. Tietz
Note, Spring 2019

Blog Posts

Big Data: Transitioning Away From the White Male Norm

As the capacity to generate and use digital information increases, big data has permeated many industries. Its use in medicine is poised to make major impacts on clinical practice, and big health data can improve both the quality and the efficiency of healthcare. But there is a need to understand how big data will affect populations that already face disparities and inequities in medicine – women and people of color. In medicine, the white male is generally the default, and this default often affects how women and people of color are diagnosed and treated. Women may go undiagnosed and untreated because they have an “exclusively female disease” or a disease that occurs more frequently in women than in men, or they may be misdiagnosed because their symptoms don’t manifest in the same way they do in men. For people of color, differences in race may affect the efficacy of drugs and medical devices. Patients in both groups may ultimately have to be sicker, or wait longer, to qualify for the same treatment as a white man. Big data may help overcome these disparities by recognizing patterns in the treatment of women and people of color. Data generated during the course of care can be used to measure quality, develop hypotheses, and compare the effectiveness of different treatments. Artificial intelligence (AI) technology can find patterns in massive data sets, and those patterns may reveal gender and racial differences that affect diagnosis and treatment. Through the use of big data in precision medicine, for example, the identification of “biological variation…

California’s Prop 22: A Cautionary Tale

Even before COVID-19 hit last year, food delivery apps such as Caviar and Postmates had gained popularity as a convenient and relatively quick way to order food without the hassle of long lines or even needing to leave home. After the pandemic led to shelter-in-place orders and temporarily closed indoor dining in several states, there was an even greater demand for these food delivery services, which provided a safe alternative to going out to eat or walking inside a restaurant to pick up a carry-out order. And even though these delivery apps are run by large corporations, the technology made it easier for diners to support local restaurants at a time when their patronage was even more impactful. According to MarketWatch, in the six months between April 2019 and September 2019, four of the major delivery app services – DoorDash, Uber, Grubhub, and Postmates – collectively brought in $2.5 billion in revenue. In the same period the following year, which covered the early days of the pandemic, revenue for these four companies more than doubled to a combined $5.5 billion. As the popularity of these apps grew, however, so did the criticism and controversies surrounding their business practices. All eyes have been on California for the past few years with regard to the laws surrounding this relatively new type of employment and how to classify the “gig” workers employed by rideshare and food delivery companies. Originally, companies like Uber and DoorDash were able to cut operating costs by hiring independent contractors rather than full-time employees. By doing so, these companies were able to deny their workers minimum wage…

Political Neutrality in Content Moderation Compels Private Speech

Much of online life today takes place on social media platforms, which have become venues for communicating ideas of every kind. Platforms establish community guidelines and moderate content for a variety of reasons. Congress saw a problem in early case law under which platforms became liable for user content once they moderated “bad” content, so it passed Section 230. That section protects platforms from liability for content provided by users while also permitting good-faith moderation without revoking that protection. This protection has allowed platforms to write their own terms of service and define what type of content a user can post. If content violates the terms of use or is otherwise objectionable, platforms can remove it without fear of becoming liable as publishers of the content on their sites, rather than leaving all content untouched out of fear of incurring liability. Recently, this section has come under fire, specifically because Section 230 protects moderation that is not politically neutral on some of the biggest internet platforms. Several bills have been introduced to address this and mandate neutrality in content moderation. The problem with this approach is that it would compel social media platforms to host content that they do not want to host, and forcing a private company to do so violates its First Amendment rights. The First Amendment protects freedom of speech in the U.S., but Section 230 provides enhanced protections. Congress conferred a benefit on internet platforms in the form of liability protections, which allow platforms to operate without fear of overwhelming lawsuits over user-posted content. It also allows platforms the freedom…

Uncovering the Burial of Transformative Trademark & Copyright Measures in Congress’ 2021 Stimulus Package: Protections to Come for Content Creators

The recently passed stimulus package quietly incorporates consequential changes to American intellectual property law via the Trademark Modernization Act of 2020 (“the TMA”), the Copyright Alternative in Small-Claims Enforcement Act of 2020 (the “CASE Act”), and the Protecting Lawful Streaming Act (the “PLSA”).

On December 21, 2020, about eight months into the sudden and persistent COVID-19 pandemic, Congress swiftly passed the Consolidated Appropriations Act, 2021 (“the Act”), a long-awaited bill focused on providing another round of pandemic relief and economic stimulus and on avoiding a government shutdown. Six days later, then-President Donald Trump signed the Act into law.

Buried within $900 billion in stimulus provisions and a $1.4 trillion federal agency funding deal, the Act includes provisions that amend trademark and copyright law and thus impact creators in the booming digital economy. The TMA, the CASE Act, and the PLSA will offer trademark and copyright owners – and thereby many content creators – meaningful benefits, including (i) making it easier for trademark owners to obtain injunctive relief; (ii) creating a small-claims forum for copyright infringement disputes; and (iii) making the unlawful streaming of copyrighted material a felony.

The Trademark Modernization Act (TMA) of 2020: Resolving a Circuit Split

The TMA, among other initiatives to expand and fortify the accuracy and integrity of the federal trademark register, settles a long-standing circuit split over whether the Supreme Court’s ruling in eBay, Inc. v. MercExchange LLC, 547 U.S. 388 (2006) (holding that irreparable harm cannot be presumed in a patent infringement lawsuit) applies to trademark infringement. Historically, to obtain preliminary injunctive (PLI) relief, the movant has the burden…

Trans-Atlantic Data Transfers After Schrems II

In July 2020, the European Court of Justice released Schrems II, an opinion finding the EU/US Privacy Shield insufficient to guarantee compliance with EU data protection law. The decision marked the second time the ECJ had invalidated a data privacy adequacy decision between the EU and US, sabotaging once more an enterprise meant to safeguard trans-Atlantic data transfers without compromising US national security activities. Consequently, US companies that house or process EU data outside of the EU are now exposed to serious liability when they send data across the Atlantic, something many companies do in the regular course of business. Schrems II left open a potential means of escaping liability through Standard Contractual Clauses (SCCs), but the ECJ seemed poised to invalidate that mechanism the next time it comes under its scrutiny. The decision arises out of the acutely conservative approach the EU takes to data privacy. In the EU, “[p]rivacy rights are given the status of a fundamental right,” enshrined in the EU Charter of Fundamental Rights and formally guaranteed to all EU citizens under the 2009 Lisbon Treaty. In addition to the general privacy protections it provides, the Charter specifically establishes a “right to the protection of personal data concerning him or her.” That right includes a guarantee that an EU citizen’s data will be processed fairly and only for “specified purposes.” According to the EU supervisory data authority, the right to be “in control of information about yourself…plays a pivotal role” within the notion of dignity enshrined in the Charter. Against this historical backdrop, the EU adopted the GDPR, which came into effect in…

AI v. Lawyers: Will AI Take My Legal Job?

Artificial Intelligence (AI) is changing the global workforce, generating fears that it will put masses of people out of work. Indeed, some job loss is likely as computers, intelligent machines, and robots take over certain tasks now done by humans. For example, passenger cars and semi-trailer trucks will eventually be able to drive themselves, which means there won’t be a need for quite as many drivers. Elon Musk, the co-founder and CEO of Tesla and SpaceX, predicts that so many jobs will be replaced by intelligent machines and robots that eventually “people will have less work to do and ultimately will be sustained by payments from the government.” The World Economic Forum concluded in a recent report that “a new generation of smart machines, fueled by rapid advances in artificial intelligence (AI) and robotics, could potentially replace a large proportion of existing human jobs.”

All of this raises the question whether lawyers and even judges will eventually be replaced by algorithms. As one observer noted, “The law is in many ways particularly conducive to the application of AI and machine learning.” For example, legal rulings in a common law system involve deriving axioms from precedent, applying those axioms to the particular facts at hand, and reaching conclusions accordingly. Similarly, AI systems learn to make decisions from training data and apply the inferred rules to new situations. A growing number of companies are building machine learning models that assess a host of factors – from the corpus of relevant precedent and the venue to a case’s particular fact pattern – to predict the outcomes of pending cases…
