MTTLR | Michigan Telecommunications and Technology Law Review

Recent Articles

Content Moderation Remedies

By Eric Goldman | Article, Fall 2021

An Empirical Study: Willful Infringement & Enhanced Damages in Patent Law After Halo

By Karen E. Sandrik | Article, Fall 2021

Individuals as Gatekeepers Against Data Misuse

By Ying Hu | Article, Fall 2021

How Can I Tell if My Algorithm Was Reasonable?

By Karni A. Chagal-Feferkorn | Article, Spring 2021

Taking It With You: Platform Barriers to Entry and the Limits of Data Portability

By Gabriel Nicholas | Article, Spring 2021

Recent Notes

NHTSA Up in the Clouds: The Formal Recall Process & Over-the-Air Software Updates

By Emma Himes | Note, Fall 2021

Arms Control 2.0: Updating the Cyberweapon Arms Control Framework

By Evan Mulbry | Note, Fall 2021

Blog Posts

Discrimination by Proxy: How AI Uses Big Data to Discriminate

Countless state and federal regulations and statutes—not to mention the U.S. Constitution—prohibit discrimination against protected groups. Absent new regulatory and statutory approaches, however, AI systems may slip discrimination past current laws through “proxy discrimination.” Today’s AI systems and algorithms can dredge oceans of big data for statistical proxies for protected characteristics and build algorithms that disparately impact protected groups. Systems with these capabilities already exist in health and automobile insurance, lending, and criminal justice, among other fields. In Proxy Discrimination in the Age of Artificial Intelligence and Big Data, Anya E.R. Prince and Daniel Schwarcz address this particularly “pernicious” phenomenon. They argue that current anti-discrimination regimes, which simply deny AI systems the ability to use protected characteristics or their most intuitive proxies, will fail in the face of increasingly sophisticated AI systems. They also provide a coherent definition of proxy discrimination by AI: the use of a variable whose statistical significance for prediction “derives from its correlation with membership in a suspect class.” For instance, consider a hiring algorithm for a job in which a person’s height is relevant to performance, but where the algorithm has no access to height data. In attempting to account for height, the algorithm might discover the correlation between height and sex, and then correlations between sex and other data. That is proxy discrimination: the statistical significance of the other data derives from its correlation with sex, a protected class. Prince and Schwarcz begin with the pre-AI history of proxy discrimination, i.e., human actors intentionally using proxies to discriminate. This discussion is interesting...
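To make the height-and-sex example concrete, here is a minimal, hypothetical sketch (ours, not drawn from Prince and Schwarcz’s article) of how a model that never sees height or sex can still produce a disparate impact through a single sex-correlated feature:

```python
# Illustrative only: proxy discrimination with synthetic data.
# The model is denied both height (the job-relevant trait) and sex
# (the protected characteristic), yet recovers height through a
# feature whose only predictive value is its correlation with sex.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

sex = rng.integers(0, 2, n)               # protected characteristic (0 or 1)
height = rng.normal(170 + 8 * sex, 6, n)  # job-relevant trait, correlated with sex

# An innocuous-looking variable mined from "big data" that happens to
# track sex (e.g., a shopping or browsing pattern).
proxy = sex + rng.normal(0, 0.5, n)

# Ground truth: performance depends on height alone.
performs_well = (height + rng.normal(0, 4, n) > 174).astype(int)

# The hiring model sees neither height nor sex -- only the proxy.
model = LogisticRegression().fit(proxy.reshape(-1, 1), performs_well)
selected = model.predict(proxy.reshape(-1, 1))

# Selection rates diverge by sex even though sex was never an input.
print("selection rate, sex=0:", selected[sex == 0].mean())
print("selection rate, sex=1:", selected[sex == 1].mean())
```

Because the proxy’s statistical significance derives entirely from its correlation with sex, the model’s selection rates split along sex lines, which is precisely the pattern Prince and Schwarcz define as proxy discrimination.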

Copyright Beyond Borders: Moral Rights & the Implications of Fahmy v. Jay-Z

Short of recognizing the validity of an “international copyright,” American intellectual property law generally purports to offer protection to foreign literary and artistic works under a number of international conventions to which the United States is a signatory, the earliest dating to the late 1880s. However, as emerging trends in the quickly globalizing music industry challenge the notion that “exclusive” tonal genres and scènes à faire can be fixed within geographic and cultural borders, courts are likely to face more complex questions about international copyright protections for non-American artists when derivative works created in the U.S. appropriate elements of an underlying work beyond the scope of fair use or the public domain. A 2019 decision by the U.S. Court of Appeals for the Ninth Circuit highlights a particularly novel issue at the intersection of legal protection for foreign works and American copyright law’s departure from much of the rest of the world with respect to recognizing a “moral right” in musical and other non-visual works. In Fahmy v. Jay-Z, Osama Ahmed Fahmy, nephew and heir of the famous Egyptian composer Baligh Hamdy, appealed the dismissal of his lawsuit alleging that the rapper Jay-Z’s 1999 single “Big Pimpin’” infringed the copyright in Hamdy’s arrangement for the popular 1957 film track “Khosara.” The five-minute record by Jay-Z and Timbaland sampled a significant and distinctive portion of the introductory flute melody from Hamdy’s composition, which loops in the background of “Big Pimpin’” for the song’s entire duration. The Ninth Circuit ultimately held that Fahmy lacked standing to bring the action...

Online Harassment and Doxing on Social Media

Online harassment has been around as long as the internet, but in recent years it has been on the rise, due in part to the popularity of and easy access to social media websites. A particularly dangerous form of online harassment is doxing: the malicious practice of revealing someone’s personal information without their consent. The term, short for “dropping documents,” has its origins in hacker feuds of the early 1990s. Doxing is done to retaliate against or harass someone by outing them online – usually by exposing personal information such as a home address, place of work, or phone number – information that, while available on the internet, one may not want exposed to the world.

The issue with doxing – like any form of harassment – is that it can range from a trivial annoyance to threats that endanger someone’s emotional, economic, and even physical safety. While many state and federal laws punish harassment and stalking, doxing may or may not fall within their current provisions. This raises the question: how should doxing be handled?

Some states have already enacted anti-doxing legislation. California Penal Code § 653.2 makes it a misdemeanor, punishable by up to one year in county jail and/or a fine of not more than $1,000, to use an electronic communication device with the intent to place another person in reasonable fear for their safety. Under this statute, there is no requirement that “actual incitement or actual production of the enumerated unlawful effects” follow a doxing event; rather...

Privacy Risks with Using XR in Education

Online learning has become widespread and normalized during the pandemic. In a survey of about 3,500 full-time college students conducted from September to October 2020, 72% of students were concerned about remaining engaged while learning remotely. Extended Reality (XR) technologies, including Augmented Reality (AR) and Virtual Reality (VR), can improve student engagement and success in online education. Augmented Reality, as its name suggests, augments a user’s surroundings by placing digital elements in a live view, commonly through the user’s smartphone camera. Virtual Reality, on the other hand, replaces the real world entirely: the user wears a headset and headphones that simulate an immersive experience. Though XR technologies have not yet been widely adopted in education, their use can benefit disciplines ranging from medicine to foreign languages. Among various legal uncertainties, universities that seek to provide XR in education should be aware of the privacy risks associated with these technologies.

Privacy Concerns with Computed Data

XR technologies comprise displays and sensors that must collect heavy data streams in order to provide the user with an immersive experience. That data can include a user’s location and biographic, biometric, and demographic information. More intrusive forms of collection include gaze-tracking, a feature likely to be essential to deeply immersive XR experiences – for example, rendering more sharply the elements of the virtual world where users are actively looking. The data that XR devices collect can be broadly sorted into four categories: observable, observed, computed, and associated. Observable data is information that third parties can observe and replicate, such as digital communications between users. In contrast,...

Sovereign Digital Currency – A New Economic Foundation for Native American Tribes?

Before European settlers explored America, an estimated 18 million indigenous people called North America home. Decades of war, disease, discrimination, removal, and termination rapidly reduced that number to only 5.2 million today. There are now about 574 federally recognized and 63 state-recognized Native American tribes in the United States. After losing vast amounts of land and being forcibly relocated to reservations, modern Native Americans fight for sovereignty and for cultural and economic survival. Today, 1 in 3 Native Americans lives in poverty, with a median income of $23,000 a year.

In 2009, a new kind of economic technology emerged—cryptocurrency. A cryptocurrency is a digital or virtual currency secured by cryptography, which makes it nearly impossible to counterfeit or double-spend. Many cryptocurrencies are decentralized networks based on blockchain technology—a distributed ledger enforced by a disparate network of computers. A defining feature of cryptocurrencies is that they are generally not issued by any central authority, rendering them theoretically immune to government interference or manipulation. Bitcoin, whose software and mining were first made available to the public in 2009, was the first such digital currency. As of March 2022, there are over 18,000 cryptocurrencies in existence throughout the world.

Native American tribal leaders caught wind of the successes of cryptocurrency and joined in on the business of creating, mining, and selling digital currency for two main reasons: (1) to empower Native Americans through a modern declaration of sovereign status and independence and (2) to boost economic activity and general income in Indian Country. In 2014, the Oglala Lakota Pine Ridge Indian Reservation in South Dakota adopted MazaCoin as...
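For readers curious about the mechanics behind that tamper resistance, here is a minimal, illustrative sketch (ours, not drawn from MazaCoin or any tribal project) of the hash-chaining at the heart of a blockchain ledger:

```python
# Illustrative only: why a blockchain ledger resists counterfeiting.
# Each block commits to its predecessor's hash, so altering any historical
# entry invalidates every later block. Real cryptocurrencies add network
# consensus, proof-of-work, and digital signatures on top of this.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's full contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev_hash": prev_hash, "transactions": transactions}

# A tiny three-block chain.
genesis = make_block("0" * 64, [{"from": "mint", "to": "alice", "amount": 50}])
b1 = make_block(block_hash(genesis), [{"from": "alice", "to": "bob", "amount": 10}])
b2 = make_block(block_hash(b1), [{"from": "bob", "to": "carol", "amount": 5}])
chain = [genesis, b1, b2]

def valid(chain: list) -> bool:
    # Every block must point at the hash of the block before it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(valid(chain))   # True
chain[0]["transactions"][0]["amount"] = 5_000_000   # rewrite history
print(valid(chain))   # False: every downstream link now breaks
```

Since rewriting any past entry breaks every later block held by every other computer on the network, the ledger needs no central issuer to remain trustworthy.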

Automating Healthcare: Current Challenges That Must Be Addressed

Artificial intelligence has the potential to improve health care systems worldwide. For example, AI can optimize workflow in hospitals, provide more accurate diagnoses, and bring better medical treatments to patients. However, medical AI also creates challenges that we, as a society, need to face. This article does not attempt a comprehensive listing but focuses on three important obstacles: safety, transparency, and privacy.

Regulating Safety

It is of utmost importance that the use of AI be safe and effective. How do we ensure that AI trained in a particular setting will be reliable once deployed? As a real example, IBM Watson for Oncology uses AI algorithms to assess patients’ medical records and help physicians explore cancer treatment options. It has come under fire, however, for not delivering on expectations and for providing “unsafe and incorrect” recommendations. The problem appears to have been in the training, which was based not on an analysis of historical patient records but on only a few “synthetic cases.” This highlights the need for datasets that are reliable and valid: software is only as good as the data it is trained on, so the collection and curation of data can be the most critical part of deploying effective AI. Algorithms also need further refinement and continuous updating; this separates AI from other medical devices, which lack the ability to continuously learn.

In terms of legal regulation, some medical AI-based products must undergo review by the FDA. A medical device is defined under section 201(h) of the Federal Food, Drug, and Cosmetic Act. While some software functions do not qualify (see Section...

View More Recent Articles

Archive