MTTLR | Michigan Telecommunications and Technology Law Review

Recent Articles

Association for Molecular Pathology v. Myriad Genetics: A Critical Reassessment

By Jorge L. Contreras
Article, Fall 2020

From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation

By David Nersessian & Ruben Mancha
Article, Fall 2020

Will the 'Legal Singularity' Hollow Out Law's Normative Core?

By Robert F. Weber
Article, Fall 2020

Antitrust Overreach: Undoing Cooperative Standardization in the Digital Economy

By Jonathan M. Barnett
Article, Spring 2019

Bank On We The People: Why and How Public Engagement Is Relevant to Biobanking

By Chao-Tien Chang
Article, Spring 2019

Recent Notes

The Contribution of EU Law to the Regulation of Online Speech

By Luc von Danwitz
Note, Fall 2020

The Unified Patent Court and Patent Trolls in Europe

By Jonathon I. Tietz
Note, Spring 2019

Blog Posts

Apple vs. Facebook: The Demand of Growing Data Ethics

In January, WhatsApp announced a new privacy policy that allows the messaging service to share user data with its parent company, Facebook. The policy was met with public outcry and sent many users flocking to rival services such as Signal. The backlash led WhatsApp to postpone the update, and the company recently clarified that the change concerns how people interact with businesses and how users will be asked to review its privacy terms. Previously, users saw a full-screen message prompting them to accept policy changes. With the new update, users will see a small banner near the top of the screen asking them to review the company’s privacy policy, with the option to download a more detailed PDF of the changes. Under the new policy, customers who interact with businesses could have their data collected and shared with Facebook and its affiliated companies, meaning that customer transactions and customer-service chats could be used for targeted advertising. Facebook’s change to the WhatsApp privacy policy adds fuel to its ongoing fight over data privacy with another of the largest tech companies, Apple. In 2014, Apple’s chief executive Tim Cook criticized companies like Facebook, saying, “If they’re making money mainly by collecting gobs of personal data, I think you have a right to be worried.” Apple has also backed its words with action: it is introducing a new App Tracking Transparency feature, to be enabled automatically in iOS in early spring, which requires every iOS app developer to explicitly request user permission to track and share...

Limitations on AI in the Legal Market

In the last 50 years, society has achieved a level of sophistication sufficient to set the stage for an explosion in AI development. As AI continues to evolve, it will become cheaper and more user friendly. Cheaper, easier-to-use AI will give more firms an incentive to invest, and as more firms invest, AI use will become the norm. In many ways, the rapid development of AI can look like an ominous cloud to those with careers in the legal market. For some, like paralegals and research assistants, AI could mean a career death sentence. Although AI is indeed poised to fundamentally alter the legal profession, it also has critical shortcomings. AI’s two core flaws should give those working in the legal market faith that they are not replaceable.

Impartiality and Bias

AI programs excel in the realm of fact. From chess-playing software to self-driving cars, AI has demonstrated an ability to perform factual tasks as well as, if not better than, humans. That is to say, in scenarios with clear-cut rights and wrongs, AI is on pace to outperform human capabilities. It is reasonable to conclude that AI is trending toward becoming a master of fact. However, even if AI is appropriately limited to the realm of fact, its ability to analyze facts has serious deficiencies. Just as bias can infiltrate and cloud human judgment, bias can also infiltrate and corrupt AI functionality. The two main ways that bias can hinder AI programs are algorithmic bias and data bias. First, algorithmic bias is the idea that the algorithms underlying...

Waive or Enforce? The Debate over Intellectual Property Issues in Covid-19 Vaccines

In December 2020, the long-awaited coronavirus vaccines began slowly rolling out across the world. The vaccines give people some hope of taming the virus, but the logistical hurdles they face are worrisome. The daunting task of manufacturing, delivering, and administering massive quantities of vaccines on a global scale has highlighted many intellectual property issues in the drug industry. A contentious debate has recently emerged over how intellectual property rules will influence the availability of the Covid-19 vaccines. At the World Trade Organization’s October meeting, South Africa, India, and many other developing countries proposed that intellectual property rules be waived as applied to the vaccines. Specifically, these countries’ basic position is that the exceptional circumstances created by the pandemic warrant the “exemption of member countries from enforcing some patent, trade secrets or pharmaceutical monopolies” under the organization’s trade-related intellectual property agreements. This would allow drug companies in developing countries to manufacture generic versions of the Covid vaccines. Wealthier countries, namely the United States, the European Union, Britain, Norway, Switzerland, Japan, Canada, Australia, and Brazil, opposed the proposal, suggesting that a waiver would upend the “incentives for innovation and competition.” This disagreement raises a big question: will the waiver subvert the purposes of intellectual property law by disincentivizing innovation, or will it lead to a win-win situation by massively increasing access to and affordability of the Covid vaccines while still allowing investors and the pharmaceutical industry a sufficient return on their research investment? As part of coming up with a...

Intellectual Property Considerations for Protecting Autonomous Vehicle Technology

Autonomous vehicle technology has progressed significantly in the past decade, and a growing number of automotive and electronics organizations are working to create self-driving vehicles. While the race to autonomy is heating up, so is the race to own IP rights and protect technological advancements in this domain. This blog post will discuss the different types of intellectual property that automotive and technology companies are using to protect their technological advancements in the field of autonomous vehicles. First, it is important to understand what exactly autonomous vehicles are. Autonomous vehicles are cars capable of sensing their environment and operating without human involvement. There are currently six levels of driving automation, ranging from level zero, fully manual, to level five, fully autonomous. Level five has not yet been achieved, but many automotive and technology companies are racing to be the first with a fully autonomous car. To do this, “autonomous vehicles rely on sensors, actuators, complex algorithms, machine-learning systems and powerful processors to execute software.” Considering all of the technology and development that goes into producing an autonomous vehicle, it is not surprising that companies want to protect their intellectual property. In fact, in the last several years, automakers and their suppliers have significantly increased the number of patent applications filed in the United States and abroad. However, since autonomous vehicles will require automakers and suppliers to develop technology outside the scope of their traditional product development, patents may not provide substantial protection for these inventions. Instead, trade secret protection may be the more appropriate form of intellectual property protection for autonomous vehicle technology. Companies must therefore decide which type of...

Law Enforcement’s Newest Witness, Alexa

On July 12, 2019, Adam Reechard Crespo and his girlfriend, Silvia Galva, got into an argument at Crespo’s home in Hallandale Beach, Florida. What happened next remains unclear, but it ended with Galva stabbed through the chest. Crespo said he pulled the blade from Galva’s chest and tried to stop the bleeding, but it was too late. Galva died, leaving police to rely solely on the stories told by Crespo and by Galva’s friend, who said she overheard the fight. That is, until police realized there may have been a silent “witness” of sorts. Crespo had an Amazon Echo, commonly known as Alexa, in his home. The device was not actively in use at the time of the crime, but police believed it may have heard something that could shed light on the otherwise private final moments of Galva’s life. One month after the alleged crime, police obtained a warrant for the Echo’s recordings and ultimately received them. Crespo was charged with murder. The Amazon Echo is a voice-activated AI virtual assistant that will tell you the weather, read you the news, or play your favorite song, among other things. But beyond its intended uses, the Echo has proved useful to law enforcement officers, offering a rare inside look into the crucial moments before a crime was committed. The Amazon Echo made headlines for its role as a potential key witness in the investigations of a 2015 suspected murder in Arkansas and a 2017 New Hampshire double homicide. In each of these cases, the conversation inevitably turned to privacy concerns as questions...

Zooming in on Children’s Online Privacy

An era of remote learning raises questions about children’s data privacy. As COVID-19 spread through the United States this spring, school districts across the country scrambled to find a way to teach students remotely. Many turned to Zoom, the videoconferencing platform that has rapidly become a household name. But as usage of Zoom skyrocketed, the platform’s data privacy policies came under heightened scrutiny. Just a few weeks after Zoom CEO Eric Yuan gave K-12 schools in the U.S. free accounts in mid-March, the New York attorney general and Senators Ed Markey and Elizabeth Warren sent letters to the company requesting more information about its privacy and security measures. Both were particularly concerned about how Zoom handled children’s personal data now that so many minors were using the service for their education. Children’s online privacy in the United States is governed by the Children’s Online Privacy Protection Act (COPPA). Passed in 1998, COPPA is intended to protect the privacy of children under 13 by giving their parents control over the kind of information that is collected about them online. COPPA applies to web services that are either aimed at children under 13 or have “actual knowledge” that they collect and store personal information from children under 13. Personal information includes data such as a child’s name, contact information, screen names, photos and videos, and geolocation information. To comply with COPPA, covered websites must publish their privacy policies, provide notice to parents and obtain their consent before collecting personal information from children, and give parents the opportunity to review and delete the information and opt out of further collection. The...
