MTTLR | Michigan Telecommunications and Technology Law Review

Recent Articles

How Can I Tell if My Algorithm Was Reasonable?

By Karni A. Chagal-Feferkorn, Article, Spring 2021

Taking It With You: Platform Barriers to Entry and the Limits of Data Portability

By Gabriel Nicholas, Article, Spring 2021

Pushing Back on Stricter Copyright ISP Liability Rules

By Pamela Samuelson, Article, Spring 2021

Association for Molecular Pathology v. Myriad Genetics: A Critical Reassessment

By Jorge L. Contreras, Article, Fall 2020

From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation

By David Nersessian & Ruben Mancha, Article, Fall 2020

Recent Notes

Mitochondrial Replacement Therapy: Let the Science Decide

By Sabrina K. Glavota, Note, Spring 2021

The Contribution of EU Law to the Regulation of Online Speech

By Luc von Danwitz, Note, Fall 2020

Blog Posts

From Third-Party Data to First-Party Data: Is FLoC right for the future?

Advertisers often use third-party cookies to track users’ activities across websites and show them relevant ads. While these cookies generate advertising revenue for websites, they are frequently criticized for the amount of data they collect and the lack of privacy they afford users. That data can be used to build a detailed profile of an individual, and it is often sold to various companies for marketing or other purposes, all without the user’s explicit knowledge or consent. Issues with third-party cookies afflict even reputable news organizations, which create privacy risks through advertising on controversial articles while simultaneously reporting on privacy violations by government agencies such as the NSA. Fortunately, since 2019 the European Union has required that users give informed consent to non-essential cookies; users are assumed to have opted out unless they opt in. Websites must provide this consent option through banners displayed at the top or bottom of a page, which over time have grown to include additional disclosure information. A European court has held that a pre-checked box is insufficient consent: the user must check the box themselves. Privacy laws similar to the EU’s have also been passed in Canada and Brazil. Unfortunately, these banner alerts are often ineffective because users simply click past them without reading the website’s cookie policy, which can run many pages. In some cases, users view these alerts as pop-ups and a nuisance rather than as informative or important...

Big Data: Transitioning Away From the White Male Norm

As the capacity to generate and use digital information increases, big data has permeated many industries, and its use in medicine is poised to make major impacts on clinical practice. Big health data can improve both the quality and the efficiency of healthcare. But there is a need to understand how big data will affect populations that already face disparities and inequalities in medicine: women and people of color. In medicine, the white male is generally the default, and that default often affects how women and people of color are diagnosed and treated. Women may go undiagnosed and untreated because they have “exclusively female disease” or diseases that occur more frequently in women than in men, or they are misdiagnosed because their symptoms do not manifest the same way they do in men. For people of color, differences in race may affect the efficacy of drugs and medical devices. Members of both populations may ultimately have to be sicker, or wait longer, to qualify for the same treatment as a white man. Big data may help overcome these disparities by recognizing patterns in the treatment of women and people of color. Data generated in the course of care can be used to measure quality, develop hypotheses, and compare the effectiveness of different treatments. Artificial intelligence (AI) technology can take massive data sets and find patterns, and those patterns may reveal gender and racial differences that affect diagnosis and treatment. Through the use of big data in precision medicine, for example, the identification of “biological variation...

California’s Prop 22: A Cautionary Tale

Even before COVID-19 hit last year, food delivery apps such as Caviar and Postmates had gained popularity as a convenient and relatively quick way to order food without the hassle of long lines or even leaving home. After the pandemic led to shelter-in-place orders and temporarily closed indoor dining in several states, demand grew even greater for food delivery services that offered a safe alternative to going out to eat or walking inside a restaurant to pick up a carry-out order. And even though these delivery apps are run by large corporations, the technology made it easier for diners to support local restaurants at a time when their patronage was even more impactful. According to MarketWatch, in the six months between April 2019 and September 2019, four of the major delivery app services (DoorDash, Uber, Grubhub, and Postmates) collectively brought in $2.5 billion in revenue. In the same period the following year, which covered the early days of the pandemic, their combined revenue more than doubled to $5.5 billion. As the popularity of these apps grew, however, so did the criticism and controversies surrounding their business practices. All eyes have been on California for the past few years with regard to the laws surrounding this relatively new type of employment and how to classify “gig” workers employed by rideshare and food delivery companies. Originally, companies like Uber and DoorDash were able to cut operating costs by hiring independent contractors rather than full-time employees. By doing so, these companies were able to deny their workers minimum wage...

Political neutrality in content moderation compels private speech

Much of online life today takes place on social media platforms, which have become venues for communicating ideas of all types. Platforms establish community guidelines and moderate content for a variety of reasons. Congress saw a problem in early case law, which exposed platforms to liability for user content whenever they moderated “bad” content, so it passed Section 230. That provision protects platforms from liability for content provided by users while allowing good-faith moderation without revoking that protection. It has allowed platforms to write their own terms of service and define what type of content a user may post. If content violates the terms of use or is otherwise objectionable, platforms can remove it without fear of becoming liable as publishers of content on their sites, rather than leaving all content untouched out of fear of incurring liability. Recently, Section 230 has come under fire, specifically because it protects moderation that is not politically neutral on some of the biggest internet platforms. Several bills have been introduced to address this and mandate neutrality in content moderation. The problem with this approach is that it would compel social media platforms to host content that they do not want to host, and forcing a private company to do so violates its First Amendment rights. The First Amendment protects freedom of speech in the U.S., but Section 230 provides enhanced protections: Congress conferred a benefit on internet platforms in the form of liability protections. These protections allow platforms to operate without fear of overwhelming lawsuits over user-posted content. It also allows platforms the freedom...

Uncovering the Burial of Transformative Trademark & Copyright Measures in Congress’ 2021 Stimulus Package: Protections to Come for Content Creators

The recently passed stimulus package quietly incorporates consequential changes to American intellectual property law through the Trademark Modernization Act of 2020 (“the TMA”), the Copyright Alternative in Small-Claims Enforcement Act of 2020 (the “CASE Act”), and the Protecting Lawful Streaming Act (the “PLSA”).

On December 21, 2020, about eight months into the sudden and persistent COVID-19 pandemic, Congress swiftly passed the Consolidated Appropriations Act, 2021 (“the Act”), a long-awaited bill focused on providing another round of pandemic relief and economic stimulus and avoiding a government shutdown. Six days later, then-President Donald Trump signed the Act into law.

Buried within $900 billion in stimulus provisions and a $1.4 trillion federal agency funding deal, the Act includes provisions that amend trademark and copyright laws and thus affect creators in the booming digital economy. The TMA, the CASE Act, and the PLSA will offer trademark and copyright owners, and thereby many content creators, meaningful benefits, including (i) making it easier for trademark owners to obtain injunctive relief; (ii) creating a small-claims court for copyright infringement disputes; and (iii) making the unlawful streaming of copyrighted material a felony.

The Trademark Modernization Act (TMA) of 2020: Resolving a Circuit Split

The TMA, among other initiatives to expand and fortify the accuracy and integrity of the federal trademark register, settles a long-standing circuit split: whether the Supreme Court’s ruling in eBay, Inc. v. MercExchange LLC, 547 U.S. 388 (2006) (holding that irreparable harm could not be presumed in a patent infringement lawsuit) applies to trademark infringement. Historically, to obtain preliminary injunctive (PLI) relief, the movant has the burden...

Trans-Atlantic Data Transfers After Schrems II

In July 2020, the European Court of Justice released Schrems II, an opinion finding the EU/US Privacy Shield insufficient to guarantee compliance with EU data protection laws. The decision marked the second time the ECJ had invalidated a data privacy adequacy decision between the EU and the US, sabotaging once more an enterprise meant to safeguard trans-Atlantic data transfers without compromising US national security activities. Consequently, US companies that house or process EU data outside the EU are now exposed to serious liability when they send data across the Atlantic, something many companies do in the regular course of business. Schrems II left open a potential means of escaping liability through Standard Contractual Clauses (SCCs), but the ECJ seemed poised to invalidate that mechanism the next time it comes under the court’s scrutiny. The decision arises out of the acutely conservative approach the EU takes to data privacy. In the EU, “[p]rivacy rights are given the status of a fundamental right,” enshrined in the EU Charter of Fundamental Rights and formally guaranteed to all EU citizens under the 2009 Lisbon Treaty. In addition to the Charter’s general privacy protections, the Charter specifically establishes a “right to the protection of personal data concerning him or her.” That right includes a guarantee that an EU citizen’s data will be processed fairly and only for “specified purposes.” According to the EU supervisory data authority, the right to be “in control of information about yourself…plays a pivotal role” within the notion of dignity enshrined in the Charter. Against this historical backdrop, the European Commission passed the GDPR, which came into effect in...
