
Blog

Political Neutrality in Content Moderation Compels Private Speech

Much of online life today takes place on social media platforms, which have become venues for communicating ideas of all kinds. Platforms establish community guidelines and moderate content for a variety of reasons. When case law established that platforms could become liable as publishers of user content if they moderated “bad” content, Congress saw a problem and passed Section 230. Section 230 protects platforms from liability for content provided by users while allowing good faith moderation without revoking that protection. This protection has allowed platforms to write their own terms of service and define what type of content a user may post. If content violates the terms of use or is otherwise objectionable, platforms can remove it without fear of becoming liable as publishers of the content on their sites, rather than leaving all content untouched to avoid incurring liability. Recently the section has come under fire, specifically because it protects moderation that is not politically neutral on some of the biggest internet platforms. Several bills have been introduced to address this and to mandate neutrality in content moderation. The problem with this approach is that it would compel social media platforms to host content they do not want to host. Forcing a private company to do so violates its First Amendment rights. The First Amendment protects freedom of speech in the U.S., but Section 230 provides enhanced protections. Congress conferred a benefit on internet platforms in the form of liability protections. These protections allow platforms to operate without fear of overwhelming lawsuits over user-posted content. It also allows platforms the freedom...

Uncovering the Burial of Transformative Trademark & Copyright Measures in Congress’ 2021 Stimulus Package: Protections to Come for Content Creators

The recently passed stimulus package quietly incorporates consequential changes to American intellectual property law via the Trademark Modernization Act of 2020 (“the TMA”), the Copyright Alternative in Small-Claims Enforcement Act of 2020 (the “CASE Act”), and the Protecting Lawful Streaming Act (the “PLSA”). On December 21, 2020, about eight months into the sudden and persistent COVID-19 pandemic, Congress swiftly passed the Consolidated Appropriations Act, 2021 (“the Act”), a long-awaited bill focused on providing another round of pandemic relief and economic stimulus and on avoiding a government shutdown. Six days later, then-President Donald Trump signed the Act into law. Buried within $900 billion in stimulus provisions and a $1.4 trillion federal agency funding deal, the Act includes provisions that amend trademark and copyright law and thus affect creators in the booming digital economy. The TMA, the CASE Act, and the PLSA will offer trademark and copyright owners, and thereby many content creators, meaningful benefits, including (i) making it easier for trademark owners to obtain injunctive relief; (ii) creating a small-claims tribunal for copyright infringement disputes; and (iii) making unlawful streaming of copyrighted material a felony.

The Trademark Modernization Act (TMA) of 2020: Resolving a Circuit Split

The TMA, among other initiatives to expand and fortify the accuracy and integrity of the federal trademark register, settles a long-standing circuit split: whether the Supreme Court’s ruling in eBay, Inc. v. MercExchange LLC, 547 U.S. 388 (2006) (holding that irreparable harm could not be presumed in a patent infringement lawsuit) applies to trademark infringement. Historically, to obtain preliminary injunctive (PLI) relief, the movant has the burden...

Trans-Atlantic Data Transfers After Schrems II

In July 2020, the European Court of Justice released Schrems II, an opinion finding the EU/US Privacy Shield insufficient to guarantee compliance with EU data protection law. The decision marked the second time the ECJ had invalidated a data privacy adequacy decision between the EU and US, once more upending a framework meant to safeguard trans-Atlantic data transfers without compromising US national security activities. Consequently, US companies that house or process EU data outside of the EU are now exposed to serious liability when they send data across the Atlantic, something many companies do in the regular course of business. Schrems II left open a potential means of escaping liability through Standard Contractual Clauses (SCCs), but the ECJ seemed poised to invalidate that mechanism the next time it comes under the court’s scrutiny. The decision arises out of the acutely protective approach the EU takes to data privacy. In the EU, “[p]rivacy rights are given the status of a fundamental right,” enshrined in the EU Charter of Fundamental Rights and formally guaranteed to all EU citizens under the 2009 Lisbon Treaty. In addition to the general privacy protections it provides, the Charter specifically establishes a “right to the protection of personal data concerning him or her.” That right includes a guarantee that an EU citizen’s data will be processed fairly and only for “specified purposes.” According to the EU’s data protection supervisory authority, the right to be “in control of information about yourself…plays a pivotal role” within the notion of dignity enshrined in the Charter. Against this historical backdrop, the EU enacted the GDPR, which came into effect in...

AI v. Lawyers: Will AI Take My Legal Job?

Artificial Intelligence (AI) is changing the global workforce, generating fears that it will put masses of people out of work. Indeed, some job loss is likely as computers, intelligent machines, and robots take over certain tasks done by humans. For example, passenger cars and semi-trailer trucks will be able to drive themselves in the future, which means there won’t be a need for quite as many drivers. Elon Musk, the co-founder and CEO of Tesla and SpaceX, predicts that so many jobs will be replaced by intelligent machines and robots in the future that eventually “people will have less work to do and ultimately will be sustained by payments from the government.” The World Economic Forum concluded in a recent report that “a new generation of smart machines, fueled by rapid advances in artificial intelligence (AI) and robotics, could potentially replace a large proportion of existing human jobs.” All of this raises the question of whether lawyers and even judges will eventually be replaced with algorithms. As one observer noted, “The law is in many ways particularly conducive to the application of AI and machine learning.” For example, legal rulings in a common law system involve deriving axioms from precedent, applying those axioms to the particular facts at hand, and reaching conclusions accordingly. Similarly, AI systems learn to make decisions based on training data and apply the inferred rules to new situations. A growing number of companies are building machine learning models that assess a host of factors, from the corpus of relevant precedent and the venue to a case’s particular fact pattern, to predict the outcomes of pending cases...
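For a concrete sense of what such a model does, here is a minimal, purely illustrative sketch: the features, the data, and the label rule are all invented for demonstration, and real products train on large corpora of docketed cases with far richer features.

```python
# Toy sketch of an outcome-prediction model; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-case features (invented encoding):
# [similarity to favorable precedent, venue win rate, strength of facts]
X = rng.random((500, 3))
# Invented label rule: stronger precedent/venue/facts -> more likely to win.
y = (X @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.standard_normal(500) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Estimated win probability for a new case's encoded fact pattern:
print(model.predict_proba([[0.8, 0.6, 0.7]]))
```

The point of the sketch is the workflow, not the numbers: the model infers rules from past outcomes and applies them to a new fact pattern, just as the excerpt describes.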

Deep-Fake, Real Pain: The Implications of Computer Morphing on Child Pornography

The proliferation of “deep-fake” internet videos, in which a person in an existing video is replaced with the likeness of another, has called into question our most basic method for perceiving the world: using our own eyes. While the definition of a deep-fake shifts as the technology develops, the term generally refers to the use of machine learning to replace the face of one individual with that of another. Troublingly, deep-fakes have changed the landscape of digital pornography. Advances in computer morphing software have produced a new category of child pornography: “morphed” child pornography, in which a child’s face is virtually superimposed onto the body of an adult performing sexually explicit acts. Today, this rapidly changing technology has created an unresolved legal question: is “morphed” child pornography protected under the First Amendment? In February 2020, the Fifth Circuit Court of Appeals weighed in on the debate in United States v. Mecham. When Clifford Mecham Jr. took his computer to a technician for repairs, the technician discovered thousands of images depicting the nude bodies of adults with the faces of children superimposed. Once notified, the Corpus Christi Police Department seized several hard drives, revealing over 30,000 pornographic photos and videos containing “morphed” child pornography. The Fifth Circuit affirmed Mecham’s conviction but remanded his case to reduce his sentence, holding that the sentencing enhancement for “sadistic or masochistic conduct” does not apply to morphed child pornography because there is no depiction of “contemporaneous infliction of pain.” While child pornography is not protected under the First Amendment, virtual child pornography, sexually explicit images created with adults who look like minors or created solely by...

Patent Trolls Show Immunity to Antitrust: Patent Trolls Unscathed by Antitrust Claims from Tech-Sector Companies

Patent trolls have become a force to be reckoned with for tech-sector companies in the United States, and those companies’ recent failure to use antitrust law against patent trolls suggests that force will persist. Patent trolls have been quite the thorn in the side of tech-sector companies. The term “patent troll” is the pejorative pop-culture title for the group of firms also known as non-practicing entities, patent assertion entities, and patent holding companies. These entities buy patents not to practice the patented technology but to sue companies for patent infringement. Patent trolls accounted for around 85% of patent litigation against tech-sector companies in 2018. Moreover, compared to the first four months of 2018, the first four months of 2020 saw a 30% increase in patent litigation brought by patent trolls. At a high level, antitrust law appears to be a proper tool for wrangling patent trolls. Antitrust law cracks down on anticompetitive agreements and monopolies for the sake of promoting consumer welfare. Patents are effectively legal monopolies over a claimed invention, and patent trolls use these legal monopolies to instigate frivolous patent infringement lawsuits against companies. Such lawsuits increase litigation and licensing costs for companies, which can then push those costs, via increased product prices, onto downstream consumers. In an attempt to go on the offensive, tech-sector companies have brought antitrust claims against patent trolls. The antitrust claims have operated on one of two theories. In Intellectual Ventures I LLC v. Capital One, from 2017, Capital One counterclaimed for antitrust remedies on the basis of a patent troll suing...

Apple vs. Facebook: The Growing Demand for Data Ethics

In January, WhatsApp announced a new privacy policy that allows the messaging service to share user data with its parent company Facebook. The policy was met with public outcry and sent many users flocking to rivals such as Signal. The backlash led WhatsApp to postpone the update, and the company recently clarified that the update concerns how people interact with businesses and how users will be asked to review its privacy terms. Previously, users saw a full-screen message prompting them to accept policy changes. With the new update, users will see a small banner near the top of their screen asking them to review the company’s privacy policy, with the option to download a more detailed PDF of the update. Under the new policy, customers interacting with businesses could have their data collected and shared with Facebook and its companies, meaning that customer transactions and customer service chats could be used for targeted advertising. Facebook’s change to the WhatsApp privacy policy adds fuel to its existing war over data privacy with another of the largest tech companies, Apple. In 2014, Apple’s chief executive Tim Cook criticized companies like Facebook, saying, “If they’re making money mainly by collecting gobs of personal data, I think you have a right to be worried.” Apple has also turned words into action: it is introducing a new App Tracking Transparency feature, to be automatically enabled in iOS in early spring, which requires every iOS app developer to explicitly request user permission to track and share...

Limitations on AI in the Legal Market

In the last 50 years, society has achieved a level of sophistication sufficient to set the stage for an explosion in AI development. As AI continues to evolve, it will become cheaper and more user-friendly. Cheaper, easier-to-use AI will give more firms an incentive to invest, and as more firms invest, AI use will become the norm. In many ways, the rapid development of AI can look like an ominous cloud to those with careers in the legal market. For some, like paralegals and research assistants, AI could mean a career death sentence. Although AI is indeed poised to alter the legal profession fundamentally, it also has critical shortcomings. AI’s two core flaws should give those working in the legal market faith that they are not replaceable.

Impartiality and Bias

AI programs excel in the realm of fact. From chess-playing software to self-driving cars, AI has demonstrated an ability to perform factual tasks as well as, if not better than, humans. That is to say, in scenarios with clear-cut rights and wrongs, AI is on pace to outperform human capabilities. It is reasonable to conclude that AI is trending toward becoming a master of fact. However, even when AI is appropriately limited to the realm of fact, its ability to analyze facts has serious deficiencies. Just as bias can infiltrate and cloud human judgment, bias can also infiltrate and corrupt AI functionality. The two main ways that bias can hinder AI programs are called algorithmic bias and data bias. First, algorithmic bias is the idea that the algorithms underlying...
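As a concrete, entirely invented illustration of the data-bias problem, the sketch below trains a simple model on skewed historical outcomes; the learning algorithm itself is neutral, yet the model reproduces the skew, assigning different odds to identical qualifications.

```python
# Minimal data-bias sketch with synthetic data: historical outcomes are
# skewed against group 1, and a model trained on them inherits the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)        # a protected attribute (0 or 1)
skill = rng.random(n)                # qualifications, equal across groups
# Biased historical labels: group 1 received favorable outcomes less
# often at the same skill level.
outcome = (skill - 0.2 * group + 0.05 * rng.standard_normal(n) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, outcome)

# Identical skill, different group membership -> different predicted odds.
print(model.predict_proba([[0.6, 0], [0.6, 1]]))
```

Nothing in the code discriminates on purpose; the disparity comes entirely from the labels the model was given, which is precisely why biased training data is so difficult to guard against.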

Waive or Enforce? The Debate over Intellectual Property Issues in Covid-19 Vaccines

In December 2020, the long-awaited coronavirus vaccines began slowly rolling out across the world. The vaccines give people some hope of taming the virus, but the logistical hurdles remain worrisome. The daunting task of manufacturing, delivering, and administering massive quantities of vaccines on a global scale has highlighted many intellectual property issues in the drug industry. A contentious debate has recently emerged, centering on how intellectual property rules will influence the availability of the Covid-19 vaccines. At the World Trade Organization’s October meeting, South Africa, India, and many other developing countries proposed that the application of intellectual property rules to the vaccines be waived. Specifically, the basic position of these countries is that the exceptional circumstances created by the pandemic warrant the “exemption of member countries from enforcing some patent, trade secrets or pharmaceutical monopolies” under the organization’s trade-related intellectual property agreements. This would allow drug companies in developing countries to manufacture generic versions of the Covid vaccines. Wealthier countries, namely the United States, the European Union, Britain, Norway, Switzerland, Japan, Canada, Australia, and Brazil, opposed the proposal, suggesting that a waiver would upend the “incentives for innovation and competition.” The disagreement raises a big question: will the waiver subvert the purposes of intellectual property law by disincentivizing innovation, or will it lead to a win-win outcome by massively increasing access to and affordability of the Covid vaccines while still allowing investors and the pharmaceutical industry a sufficient return on research investment? As part of coming up with a...

Intellectual Property Considerations for Protecting Autonomous Vehicle Technology

Autonomous vehicle technology has progressed significantly in the past decade, and a growing number of automotive and electronics companies are working to create self-driving vehicles. As the race to autonomy heats up, so does the race to own IP rights and protect technological advancements in this domain. This post will discuss the different types of intellectual property that automotive and technology companies are using to protect their technological advancements in the field of autonomous vehicles. First, it is important to understand what exactly autonomous vehicles are. Autonomous vehicles are cars capable of sensing their environment and operating without human involvement. There are currently six levels of driving automation, ranging from level zero, fully manual, to level five, fully autonomous. Level five has not yet been achieved, but many automotive and technology companies are racing to be the first with a fully autonomous car. To do this, “autonomous vehicles rely on sensors, actuators, complex algorithms, machine-learning systems and powerful processors to execute software.” Considering all the technology and development that goes into producing an autonomous vehicle, it is not surprising that companies want to protect their intellectual property. In fact, in the last several years automakers and their suppliers have significantly increased the number of patent applications filed in the United States and abroad. However, because autonomous vehicles require automakers and suppliers to develop technology outside the scope of their traditional product development, patents may not provide substantial protection for these inventions. Instead, trade secret protection may be the more appropriate form of intellectual property protection for autonomous vehicle technology. Companies must therefore decide which type of...

Law Enforcement’s Newest Witness, Alexa

On July 12, 2019, Adam Reechard Crespo and his girlfriend, Silvia Galva, got into an argument at Crespo’s home in Hallandale Beach, Florida. What happened next remains unclear, but it ended with Galva stabbed through the chest. Crespo said he pulled the blade from Galva’s chest and tried to stop the bleeding, but it was too late. Galva died, leaving police to rely solely on the stories told by Crespo and by a friend of Galva’s who said she overheard the fight. That is, until police realized there may have been a silent “witness” of sorts. Crespo had an Amazon Echo, commonly known as Alexa, in his home. The device was not actively in use at the time of the crime, but police believed it might have heard something that could shed light on the otherwise private final moments of Galva’s life. One month after the alleged crime, police obtained a warrant for the device’s recordings and ultimately received them. Crespo was charged with murder. The Amazon Echo is a voice-activated AI virtual assistant that will tell you the weather, read you the news, or play your favorite song, among other things. But beyond its intended uses, the Echo has proved useful to law enforcement officers, offering a rare inside look into the crucial moments before a crime was committed. The Amazon Echo made headlines for its role as a potential key witness in the investigations of a 2015 suspected murder in Arkansas and a 2017 New Hampshire double homicide. In each of these cases, the conversation inevitably turned to privacy concerns as questions...

Zooming in on Children’s Online Privacy

An era of remote learning raises questions about children’s data privacy. As COVID-19 spread through the United States this spring, school districts across the country scrambled to find a way to teach students remotely. Many turned to Zoom, the videoconferencing platform that has rapidly become a household name. But as Zoom usage skyrocketed, the platform’s data privacy policies came under heightened scrutiny. Just a few weeks after Zoom CEO Eric Yuan gave K-12 schools in the U.S. free accounts in mid-March, the New York attorney general and Senators Ed Markey and Elizabeth Warren sent letters to the company requesting more information about its privacy and security measures. Both parties were particularly concerned about how Zoom handled children’s personal data now that so many minors were using the service for their education. Children’s online privacy in the United States is governed by the Children’s Online Privacy Protection Act (COPPA). Passed in 1998, COPPA is intended to protect the privacy of children under 13 by giving their parents control over the kind of information that is collected about them online. COPPA applies to web services that are either aimed at children under 13 or have “actual knowledge” that they collect and store personal information from children under 13. Personal information includes data like a child’s name, contact information, screen names, photos and videos, and geolocation information. To comply with COPPA, covered websites must publish their privacy policies, provide notice to parents and obtain their consent before collecting personal information from children, and give parents the opportunity to review and delete the information and opt out of further collection. The...

Anti-Discrimination Laws and Algorithmic Discrimination

Machine algorithms can discriminate. More accurately, machine algorithms can produce discriminatory outcomes. It seems counterintuitive that dispassionately objective machines can make biased choices, but it is important to remember that machines are not completely autonomous decision-makers. Ultimately, they follow instructions written by humans to perform tasks with data provided by humans, and there are many ways discrimination and bias can creep in during this process. The training data fed to a machine algorithm may contain inherent biases, and the algorithm may then focus on factors in the data that are discriminatory toward certain groups. For example, the natural language processing algorithm “word2vec” learns word associations from a large corpus of text. After finding a strong pattern of males being associated with programming and females being associated with homemaking in the large text datasets fed to it, the algorithm came up with the analogy: “Man is to Computer Programmer as Woman is to Homemaker.” Such stereotypical determinations are among the many discriminatory outcomes algorithms can produce. Out of fear that decision-making algorithms would produce such discriminatory effects, the European Union (EU) included Article 22 in the General Data Protection Regulation, which gives people “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Although what constitutes “solely automated processing” is debatable, the EU’s concern about algorithmic discrimination is evident. In the United States (U.S.), instead of passing laws that specifically target algorithmic discrimination, such concerns are handled largely under regular anti-discrimination laws,...
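To make the analogy mechanics concrete, here is a short, illustrative sketch of the vector arithmetic behind results like the one above. It assumes the gensim library and its hosted pretrained Google News word2vec vectors, which are not necessarily the setup the original researchers used; the model is large and downloads on first use.

```python
# Illustrative sketch: word2vec analogy arithmetic of the form
# "man : computer_programmer :: woman : ?"
# Assumes gensim and its downloadable pretrained vectors; the token
# naming ("computer_programmer") follows that model's convention of
# joining phrases with underscores.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large download on first use

# most_similar computes vec("computer_programmer") - vec("man") + vec("woman")
# and returns the nearest words to that point in the embedding space.
print(vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
))
```

Because the vectors distill co-occurrence patterns in the training corpus, whatever stereotypes the corpus contains surface directly in results like these.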

Data in the Post-Pandemic Era: Zoom Video’s Security and Censorship Controversies

As the use of Zoom video conferencing has skyrocketed since the start of the coronavirus pandemic, the company’s security infrastructure and its alleged interference in virtual events on the platform have come under fire multiple times since global quarantines began in March 2020. With millions of Americans now using Zoom and other videoconferencing tools daily, any data breach may provide unprecedented access to otherwise confidential conversations, including those of U.S. government and private sector professionals who use the app for their work. Furthermore, censorship of certain virtual gatherings may place dangerously restrictive limits on communication and social organizing at a time when the pandemic demands that most of the population conduct its daily business virtually. Most recently, the U.S. Department of Justice has charged former China-based Zoom executive Xinjiang Jin, also known as “Julien Jin,” with conspiracy to commit interstate harassment and unlawful conspiracy to transfer a means of identification after his alleged participation in a scheme to assist the People’s Republic of China in blocking virtual commemorations of the Tiananmen Square massacre in May and June 2020. News of this potential attempt to censor Chinese dissidents should remind users that routing their communications through this (and other) videoconferencing apps has created new, special pandemic-era censorship concerns. Zoom has released a blog post and S.E.C. filing on its website acknowledging the charge and investigation, reaffirming its “support [for] the U.S. Government to protect American interests from foreign influence,” its dedication “to the free and open exchange of ideas,” and its ongoing, “aggressiv[e]” actions to “anticipate and combat…data security challenges.” Furthermore, the blog post details subpoenas received...

Tracking COVID-19 on College Campuses: False Starts, Missteps, and Considerations for the Future

As colleges and universities reopened campuses to students last fall, a number of schools across the United States turned to location tracking apps, wearable technology, and other surveillance tools in the hope that they would facilitate contact tracing and potentially mitigate the spread of COVID-19 in residence halls and in-person classes. These efforts to monitor student health and track student activity have been met with skepticism from students and privacy advocates, who cite concerns about the invasive nature of such tools and the risk that the data they generate may be misused by unauthorized parties. In Michigan, Oakland University announced in August that it would require students living in residence halls to wear a BioButton, a coin-sized device that would monitor physiological data, such as skin temperature and heart rate, as well as physical proximity to others wearing BioButtons. Administrators had hoped this would allow the university to pinpoint early-stage cases among the student body. The university soon withdrew the policy, however, after significant backlash from students, who, citing privacy and transparency issues, petitioned the school to make usage optional. Albion College, a private liberal arts college in Michigan, issued a similar requirement for students to install the Aura app on their phones before they could come to campus. As a contact-tracing app, Aura would record students’ real-time location using phone GPS services and alert students when they had been in close proximity to someone who had tested positive for the virus. Albion had intended the Aura app to work in tandem with what some considered to be...

Privacy, a Group Effort – Approaches to International Data Privacy Agreements

The modern digital age has made the world smaller and faster, with information and data transferred in an instant across any and all physical borders. While this digital highway is an essential pillar of our Internet age, it is not without its problems. One such area of concern lies with data protection and privacy enforcement laws.

Privacy in the Golden State

Are you a resident of California? Or are you a business owner whose business reaches consumers in California? If your answer to either of these questions is “yes,” then you should familiarize yourself with the California Consumer Privacy Act (“CCPA”).

The CRISPR War Drags On: How the Fight to Patent CRISPR-Cas9 Creates Uncertainty in the Biotechnology Sphere

On September 10, 2018, the Federal Circuit Court of Appeals (“Federal Circuit”) affirmed the ruling of the United States Patent Trial and Appeal Board (“the Board”) in Regents of the University of California v. Broad Institute, finding that there was no interference-in-fact between competing patents claiming methods of using CRISPR-Cas9 to modify cellular DNA. Rather than settling the patentability issue, however, exhaustive litigation has continued, as both parties seek to protect the results of costly research.

Posts on the MTLR Blog are editorial opinion pieces written by student-editors of the Michigan Technology Law Review. The opinions expressed in these editorial posts are not espoused or endorsed by the University of Michigan or its Law School. To view scholarly Articles and Notes published by the Michigan Technology Law Review, please visit the MTLR home page.