
Blog

Zooming in on Children’s Online Privacy

An era of remote learning raises questions about children’s data privacy. As COVID-19 spread through the United States this spring, school districts across the country scrambled to find ways to teach students remotely. Many turned to Zoom, the videoconferencing platform that has rapidly become a household name. But as Zoom usage skyrocketed, the platform’s data privacy practices came under heightened scrutiny. Just a few weeks after Zoom CEO Eric Yuan gave K-12 schools in the U.S. free accounts in mid-March, the New York attorney general and Senators Ed Markey and Elizabeth Warren sent letters to the company requesting more information about its privacy and security measures. Both were particularly concerned about how Zoom handled children’s personal data now that so many minors were using the service for their education.

Children’s online privacy in the United States is governed by the Children’s Online Privacy Protection Act (COPPA). Passed in 1998, COPPA is intended to protect the privacy of children under 13 by giving their parents control over the kind of information that is collected about them online. COPPA applies to web services that are either aimed at children under 13 or have “actual knowledge” that they collect and store personal information from children under 13. Personal information includes data like a child’s name, contact information, screen names, photos and videos, and geolocation information. To comply with COPPA, covered websites must publish their privacy policies, provide notice to parents and obtain their consent before collecting personal information from children, and give parents the opportunity to review and delete the information and to opt out of further collection. The...

Anti-Discrimination Laws and Algorithmic Discrimination

Machine algorithms can discriminate. More accurately, machine algorithms can produce discriminatory outcomes. It seems counterintuitive that dispassionately objective machines can make biased choices, but it is important to remember that machines are not completely autonomous decision-makers. Ultimately, they follow instructions written by humans to perform tasks on data provided by humans, and discrimination and bias can creep in at many points in that process. The training data fed to a machine algorithm may contain inherent biases, and the algorithm may then latch onto factors in the data that are discriminatory toward certain groups. For example, the natural language processing algorithm “word2vec” learns word associations from a large corpus of text. After finding a strong pattern in its training text of men being associated with programming and women with homemaking, the algorithm produced the analogy: “Man is to Computer Programmer as Woman is to Homemaker.” Such stereotypical determinations are among the many discriminatory outcomes algorithms can produce.

Concerned that decision-making algorithms could produce such discriminatory effects, the European Union (EU) included Article 22 in the General Data Protection Regulation, which gives people “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Although what constitutes “solely automated processing” is debatable, the EU’s concern about algorithmic discrimination is evident. In the United States (U.S.), instead of passing laws that specifically target algorithmic discrimination, such concerns are handled largely under regular anti-discrimination laws,...
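The word2vec analogy described above can be reproduced in a few lines of vector arithmetic. The sketch below is illustrative only: it assumes the pretrained Google News embedding distributed through the gensim library (a multi-gigabyte download), and the exact neighbors returned depend on the model version and corpus.

```python
# Minimal sketch of the word-vector analogy test described above, using
# gensim's pretrained Google News word2vec model (a large download).
import gensim.downloader as api

# 300-dimensional vectors trained on the Google News corpus.
model = api.load("word2vec-google-news-300")

# Analogy arithmetic: vec("computer_programmer") - vec("man") + vec("woman").
# The nearest words answer "man is to computer programmer as woman is to ___".
for word, score in model.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
):
    print(f"{word}\t{score:.3f}")
```

When researchers ran this kind of query against the Google News embedding, “homemaker” was reported among the nearest answers, which is precisely the stereotyped association the post describes; the bias comes from the training text, not from any instruction to discriminate.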

Data in the Post-Pandemic Era: Zoom Video’s Security and Censorship Controversies

Since the start of the coronavirus pandemic and the global quarantines that began in March 2020, the use of Zoom Video Conferencing has skyrocketed, and the company’s security infrastructure and alleged interference in virtual events held over the platform have repeatedly come under fire. With millions of Americans now using Zoom and other videoconferencing tools daily, any data breach may provide unprecedented access to otherwise confidential conversations between users, including U.S. government and private sector professionals who use the app for their work. Furthermore, censorship of certain virtual gatherings may place dangerously restrictive limits on communication and social organizing while the pandemic requires most of the population to conduct its daily business virtually. Most recently, the U.S. Department of Justice charged former China-based Zoom executive Xinjiang Jin, also known as “Julien Jin,” with conspiracy to commit interstate harassment and unlawful conspiracy to transfer a means of identification for his alleged participation in a scheme to assist the People’s Republic of China in blocking virtual commemorations of the Tiananmen Square massacre in May and June 2020. News of this potential attempt to censor Chinese dissidents should remind users that routing their communications through this (and other) videoconferencing apps has created new, pandemic-era censorship concerns. Zoom has released a blog post and S.E.C. filing on its website acknowledging the charge and investigation, reaffirming its “support [for] the U.S. Government to protect American interests from foreign influence,” its dedication “to the free and open exchange of ideas,” and its ongoing, “aggressiv[e]” actions to “anticipate and combat…data security challenges.” The blog post also details subpoenas received...

Tracking COVID-19 on College Campuses: False Starts, Missteps, and Considerations for the Future

As colleges and universities reopened campuses to students last fall, a number of schools across the United States turned to location tracking apps, wearable technology, and other surveillance tools in the hope that they would facilitate contact tracing and potentially mitigate the spread of COVID-19 in residence halls and in-person classes. These efforts to monitor student health and track student activity have been met with skepticism from students and privacy advocates, who cite concerns about the invasive nature of such tools and the risk that the data they generate may be misused by unauthorized parties.

In Michigan, Oakland University announced in August that it would require students living in residence halls to wear a BioButton, a coin-sized device that would monitor physiological data, such as skin temperature and heart rate, as well as physical proximity to others wearing BioButtons. Administrators hoped this would allow the university to pinpoint early-stage cases among the student body. The university soon withdrew the policy, however, after receiving significant backlash from students, who, citing privacy and transparency issues, petitioned the school to make usage optional.

Albion College, a private liberal arts college in Michigan, issued a similar requirement for students to install the Aura app on their phones before they could come on campus. As a contact-tracing app, Aura would record students’ real-time location using phone GPS and alert them when they had been in close proximity to someone who had tested positive for the virus. Albion had intended for the Aura app to work in tandem with what some considered to be...
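The core mechanism behind a GPS-based contact-tracing app like Aura can be sketched in a few lines: compare timestamped location fixes from different phones and flag any pair that falls within a small distance and time window. The haversine distance function, the 2-meter threshold, and the data layout below are illustrative assumptions, not Aura’s actual implementation.

```python
# Illustrative sketch of GPS-based proximity flagging, the general technique
# contact-tracing apps rely on. Thresholds and data layout are assumptions.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def close_contact(fix_a, fix_b, max_meters=2.0, max_seconds=900):
    """Flag two timestamped fixes as a contact if near in both space and time."""
    (t1, lat1, lon1), (t2, lat2, lon2) = fix_a, fix_b
    return abs(t1 - t2) <= max_seconds and haversine_m(lat1, lon1, lat2, lon2) <= max_meters

# Two students' fixes: (unix_time, latitude, longitude)
a = (1_600_000_000, 42.28080, -83.74300)
b = (1_600_000_300, 42.28081, -83.74301)
print(close_contact(a, b))  # True: roughly 1.4 meters and 5 minutes apart
```

The sketch also makes the privacy objection concrete: for the comparison to work at all, a continuous trail of students’ raw locations must be collected and stored somewhere, and consumer phone GPS is typically accurate only to a few meters, inviting both false alarms and missed contacts.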

Privacy, a Group Effort – Approaches to International Data Privacy Agreements

The digital era has made the world smaller and faster, with information and data transferred in an instant across any and all physical borders. While this digital highway is an essential pillar of the Internet age, it is not without its problems. One such area of concern is data protection and privacy enforcement law.

Privacy in the Golden State

Are you a resident of California? Or are you a business owner whose business reaches consumers in California? If your answer to either of these questions is “yes,” then you should familiarize yourself with the California Consumer Privacy Act (“CCPA”).

The CRISPR War Drags On: How the Fight to Patent CRISPR-Cas9 Creates Uncertainty in the Biotechnology Sphere

On September 10, 2018, the United States Court of Appeals for the Federal Circuit (“Federal Circuit”) affirmed the ruling of the Patent Trial and Appeal Board (“the Board”) in Regents of the University of California v. Broad Institute, finding that there was no interference-in-fact between competing patents that claimed methods of using CRISPR-Cas9 to modify cellular DNA. Rather than settling the patentability issue, however, the decision has spurred continued, exhaustive litigation, as both parties seek to protect the results of costly research.

The Rise and Fall of a Patent Boomtown

Plano, Texas, was once home to the third-oldest Apple store ever built. This Dallas suburb’s median household income of $92,121 is 55% above the national average. Apple’s eventual construction of 500 locations worldwide was in some ways a result of its early success in Plano.

The Future of Autonomous Vehicles and the Fourth Amendment

Level 4 autonomous vehicles, which do not require human interaction in most circumstances, are predicted to be on the road as soon as 2021. Experts believe that as autonomous vehicles grow in popularity and availability, the prevalence of car ownership will dramatically decrease.

Why the “Right to be Forgotten” Won’t Make it to the United States

In 2018, the General Data Protection Regulation (GDPR) took effect across the European Union. The GDPR grants individuals a “right to erasure,” the ability to request that their personal data be removed from the Internet. But the European Union’s top court recently limited the regulation’s reach, ruling that search engine operators are not required to de-reference subjects globally. Thus, the potential spillover question, namely whether a U.S. court ought to enforce a European de-referencing order, is unlikely to push a cascading privacy-rights debate into American discourse.

Don’t Bury your Bitcoin! Estate Planning for Cryptocurrencies

From the transferability of social media or email accounts to the maintenance of online accounts linked to a client’s virtual assets, estate planning issues regarding digital assets have existed for some time. But now that blockchain-based assets such as cryptocurrencies are more commonplace, there is an increased need to plan for the disposition of these digital assets. Estate planning for cryptocurrencies raises unique concerns, and the blockchain technology behind them may also offer potential solutions.
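Because a cryptocurrency is controlled entirely by whoever holds its private key or seed phrase, one technique sometimes suggested for estate plans is splitting that recovery secret among multiple custodians so that no single person can move the funds alone. The sketch below uses simple XOR-based 2-of-2 secret sharing as a stand-in; real plans more often rely on multisignature wallets or Shamir’s secret sharing, and every name and value here is illustrative.

```python
# Hedged sketch of secret splitting for a crypto estate plan: neither the
# executor's share nor the attorney's share alone reveals the seed phrase,
# but combined they restore it. XOR-based 2-of-2 sharing is used only for
# simplicity; multisig wallets or Shamir's scheme are likelier in practice.
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two shares; both are required to reconstruct it."""
    share_a = secrets.token_bytes(len(secret))            # random one-time pad
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))
    return share_a, share_b

def recover_secret(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares back together to recover the original secret."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

seed = b"abandon ability able about ..."  # placeholder, not a real seed phrase
executor_share, attorney_share = split_secret(seed)
assert recover_secret(executor_share, attorney_share) == seed
print("Either share alone is random noise; together they restore the seed.")
```

The design choice matters for the post’s point: unlike a bank account, there is no institution to petition if the key is lost, so the estate plan itself must guarantee both availability (heirs can recover the secret) and secrecy (no custodian can act alone).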

Posts on the MTLR Blog are editorial opinion pieces written by student-editors of the Michigan Technology Law Review. The opinions expressed in these editorial posts are not espoused or endorsed by the University of Michigan or its Law School. To view scholarly Articles and Notes published by the Michigan Technology Law Review, please visit the MTLR home page.