Blog

The Future of Net Neutrality

After years of struggling with what the federal government’s role should be in regulating the “free internet,” the FCC voted to enforce net neutrality rules under Title II of the Communications Act. Under the new Rules, major Internet Service Providers (ISPs) like Verizon, AT&T and Comcast are prohibited from slowing down applications or services, accepting fees for preferential treatment, or blocking lawful content. In a nutshell, the Rules place ISPs under the same strict regulatory framework that governs telecommunication networks to ensure that all Internet traffic that runs through these providers is treated equally. While the Rules have been praised by the Obama Administration and the FCC Chairman as “necessary to protect Internet openness against new tactics that would close the Internet,” there has been swift backlash from opponents. USTelecom, a consortium of ISPs that had filed a suit against the FCC before the Rules went public, re-filed its suit just minutes after the Rules were published in the Federal Register earlier today. USTelecom claims that the FCC used the incorrect approach to implementing net neutrality standards and argues that the reclassification of broadband Internet access as a public utility is “arbitrary, capricious, and an abuse of discretion.” Another snag in the implementation of the FCC’s Rules comes from Congressional support of the ISP lobby. Representative Doug Collins, a Georgia Republican, introduced a bill that would allow Congress to use an expedited legislative process to review new federal agency regulations. The measure would need only a simple majority to pass, instead of the usual 60 votes needed to overcome a filibuster. Essentially, this bill is a quick-stop...

Twitter and Cyber-bullying

Twitter has recently announced that it will be rolling out a new “quality filter” that is designed to “remove all Tweets from your notification timeline that contain threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts.” The “quality filter” is available only to verified users, since they have the most followers and are therefore susceptible to the most abuse, but Twitter has also implemented other anti-harassment tools, such as a feature that makes it easier to report abuse to law enforcement. In essence, the quality filter and other recent features are designed to prevent instances of cyber-bullying and protect user safety. Cyber-bullying has become increasingly common as Internet users are shielded by anonymity on the Web, and it is especially prevalent on Twitter. According to data from the Pew Research Center, Twitter users face many forms of harassment, including death threats, threats of sexual abuse, and stalking, and the victims of this abuse are disproportionately women. There have been several recent high-profile cases of cyber-bullying involving Twitter, including #gamergate, the harassment of Robin Williams’s daughter after his death, and Ashley Judd’s decision to press charges against trolls. These high-profile incidents have been speculatively identified as the impetus for Twitter’s implementation of anti-harassment blocking tools, including the “quality filter”. Twitter initially positioned itself as the “free speech wing of the free speech party”, which meant that it took a neutral view of message content. That “neutral view” has seemingly made the company more tolerant of abuse and harassment on its platform relative to other social media sites. For instance, Twitter is notoriously criticized for...

The Danger of “Just & Reasonable” Net Neutrality Rules: The Potential Toothlessness of the FCC’s New Rules

On February 26, 2015, proponents of the open Internet celebrated the Federal Communications Commission’s vote to reclassify broadband Internet as a public utility and approve new net neutrality rules. The goal of the FCC’s vote is to protect Net neutrality by requiring Internet service providers (ISPs) to treat all Internet traffic equally. Although increasing regulatory oversight of the “last mile” of the Internet is certainly a step in the right direction toward a truly open Internet, this is not a clear victory for Net neutrality advocates. On March 12, 2015, the FCC released a declaratory ruling and order that contained the FCC’s newly adopted Net neutrality rules. Because the FCC voted to reclassify broadband Internet as a public utility, all ISPs are now subject to regulation under Title II of the Communications Act of 1934. This effectively places ISPs under the same strict regulations as telephone networks. Accordingly, the document outlines strict rules for Internet providers that are designed to preserve an open Internet. The rules help ensure Net neutrality by explicitly prohibiting ISPs from: 1) blocking legal content, 2) throttling, and 3) creating Internet fast lanes (accepting fees for priority treatment). While these are all great things, Net neutrality advocates should hold off on celebrating with the top-shelf champagne because the new rules include a standard of review that could greatly undermine their robustness. The rules require ISPs’ conduct to be “just and reasonable.” This gives the FCC the power to decide on a case-by-case basis whether an ISP has overstepped its bounds or to excuse its actions as “just and reasonable.” The FCC itself admits that the terms just and reasonable are broad, “inviting the...

Glancing at the USPTO Enhanced Patent Quality Initiative

The United States Patent & Trademark Office (USPTO) recently began an enhanced patent quality initiative. Over the past few years, the USPTO has significantly reduced patent application backlog and pendency and is now turning its attention to patent quality. The USPTO is better positioned to address patent quality than ever before, since the America Invents Act (AIA) allows the USPTO to set its own fees and retain the fees it collects. Previously, the USPTO was required to share a portion of its fees with other government entities. With the ability to charge higher fees and keep the fees it collects, it is possible to imagine significant progress towards improved patent quality. Currently, a large part of the problem is that patent examiners work in an environment where quantity is often emphasized over quality. The patent examiner count system awards points to examiners for processing patent applications. With a new emphasis on quality and more resources at its disposal, the USPTO has the opportunity to change this environment. The USPTO has been seeking public input and guidance to direct its continued efforts towards enhancing patent quality. Its stated focus is on “improving patent operations and procedures to provide the best possible work products, to enhance the customer experience, and to improve existing quality metrics.” Just recently, on March 25 and 26, 2015, the USPTO held a Quality Summit with the public to discuss its outlined proposals. The USPTO has outlined six proposals: Requests for Quality Review, which would allow applicants to request a review if they receive very low quality office actions; Automated Pre-Examination Search, which would involve searching for new tools to find better search results...

Autonomous Cars: The Legality of Cars on Autopilot

Mercedes, BMW, Infiniti, Honda, and Volvo have produced cars that can operate in a semi-autopilot mode in certain situations. Google has even produced bubble-like experimental self-driving cars that completely take the human driver out of the equation. Recently, the chief executive officer of Tesla, Elon Musk, announced that the company would introduce cars with an autopilot mode into the U.S. market this summer. Tesla’s anticipated product would not remove human participation completely, as the Google self-driving car does, but it would be the first commercially available, largely autonomous vehicle. Tesla’s car would have technology that would allow drivers to transfer control to autopilot on “major roads” such as highways. The only thing required to obtain this technology is a software update to Tesla’s current Model S sedans. This is hugely exciting news for a lot of people; not having to pay attention during the commute to and from work would allow an extra hour or so for people to be productive or get some rest. However, there are serious legal questions regarding autonomous vehicles that have yet to be answered. For example, who will be liable if the car strikes a pedestrian while on autopilot? Will it be the driver, as the owner of the car, who maintains the ultimate ability to control the vehicle? Will it be the manufacturer or programmer who developed the software that failed to detect the pedestrian? There simply are not laws covering these scenarios in most states, let alone cohesive federal laws. At most, there are a few states that have passed laws declaring the legality of autonomous vehicles mainly for testing...

Will the “Blurred Lines” Verdict Fuel Excessive Litigation?

In the past two months, three major pop artists have paid royalties to older musicians because new pop songs sounded too much like older hits: Sam Smith paid Tom Petty for the similarities between “Stay With Me” and “I Won’t Back Down,” and Pharrell Williams and Robin Thicke paid the family of Marvin Gaye for the similarities between “Blurred Lines” and “Got to Give It Up.” Concerning the Smith-Petty dispute, a mashup of the two songs seems to show strong similarities. Although Smith’s representatives and co-writers acknowledged the “undeniable similarities” of the two songs, they claimed that they were “not previously familiar with . . . ‘I Won’t Back Down’” and that all similarities between the songs were “complete coincidence.” The two artists settled the dispute outside of court. Tom Petty does not seem to think that Sam Smith and his co-writers infringed on purpose: “The word lawsuit was never even said and was never my intention . . . all my years of songwriting have shown me these things can happen . . . a musical accident no more no less.” On the other hand, the Williams/Thicke and Gaye dispute was much more venomous and personal. In a federal trial in the Central District of California, a jury awarded damages of nearly $7.4 million after entertainment lawyer Richard Busch succeeded in branding Pharrell and Thicke as “liars who went beyond trying to emulate the sound of Gaye’s . . . music and copied . . . Got to Give It Up outright.” With tears in her eyes, Marvin Gaye’s daughter Nona told reporters that the verdict made...

Increased Use of StingRay Devices May Raise More than Just Privacy Concerns

On February 22, 2015, the Washington Post ran an article about the arrest of Florida man Tadrae McKenzie. The facts of the case were relatively unremarkable: Mr. McKenzie was arrested on March 6, 2013 by the Tallahassee Police Department and charged with robbery with a deadly weapon, a first-degree felony. If convicted, Mr. McKenzie would have faced a prison sentence of up to 30 years. However, luckily for Mr. McKenzie, this was not to be. Before his trial began, the state of Florida offered him a plea bargain under which he agreed to plead guilty to a lesser charge (a second-degree misdemeanor) and serve six months’ probation. On its face, this seems like a routine story of a small-time criminal who got a lucky break from the criminal justice system. So why did it attract the attention of a national newspaper like the Washington Post? The answer lies in the reason behind Florida’s plea agreement offer to Mr. McKenzie. If this case had gone to trial, the state of Florida would have been forced to disclose to Mr. McKenzie and the public information about a surveillance device known as a “StingRay” (sometimes called an “IMSI-catcher”).[1] So what is a StingRay? To explain this, the Post’s article included a helpful infographic. Essentially, StingRays take advantage of a security flaw in older 2G cell signals to gain access to data from nearby cell phones. Unlike the newer 3G and 4G standards, 2G does not require cell phone towers to authenticate themselves to the phones with which they communicate. To gain access to nearby cell phones, a StingRay blocks 3G and...

Is There a Role for International Law in Privacy and Technology?

Recently an increasingly large spotlight has been shone on the realm of technology, big data, and privacy. Certainly we live in a world that is becoming more and more dependent upon technology. Additionally, we live in a world where business and personal lives are becoming increasingly globalized, and the lines between national and international can be hazy at best. This can be especially true in areas like technology that allow communication and transactions to occur in real time across borders. With technological and global expansion comes the risk associated with data breaches. This has become apparent with events such as the Edward Snowden debacle and numerous data breaches at large, multinational corporations. As a necessary corollary, the public and businesses alike become entangled in a struggle to protect their information and privacy. In cases like these, people turn to the law for guidance and relief. Thus, it is worth asking what role International Law will play in all of this. At the MTTLR Symposium on Saturday, February 21, 2015, the international law panelists discussed their perspectives on International Law and its relation to privacy and technology. At its most basic level, International Law was discussed as a tool to aid in the protection of private rights. By private rights I mean the rights of, for example, individual citizens or individual corporate entities. In my opinion, International Law is wholly inadequate to deal with the technology and privacy issues facing the world and its citizens today with respect to protecting individual rights. My rationale is threefold and is rooted in the main limitations of International Law generally. First, under International Law, private...

Big Data and the Fall of Personally Identifiable Information

There has been no shortage of “Big Data” based start-ups in the last decade, and that trend shows no sign of slowing down. As computing power and sophistication continue to increase, the ability to process large sets of information has led to increasingly pointed insights about the sources of this data. Take Target, for example. When you pay for something at Target using a credit card, not only do you exchange your credit for physical goods, you also open a file. Target records your credit card number, attaches it to a virtual file, and begins to fill that file with all sorts of information. Your purchase history is recorded: what you buy, when you buy it, and how much you buy. Every time you respond to a survey, call the customer help line, or send them an email, Target is aware. Anytime you interact with Target, the data and metadata that characterize that interaction are parsed carefully and stored as Target’s institutional knowledge. But it doesn’t end there. As diligent as Target may be in monitoring your interactions, there will inevitably be holes. But fear not! Instead of settling for an inadequate picture of who you are, Target can just buy the rest of it from the other people you do business with. “Target can buy data about your ethnicity, job history, the magazines you read, if you’ve ever declared bankruptcy or got divorced, the year you bought (or lost) your house, where you went to college, what kinds of topics you talk about online, whether you prefer certain brands of coffee, paper towels, cereal or applesauce, your political leanings,...

Bullish on Anti-Bullying Apps

A 2013 Youth Risk Behavior Surveillance Survey found that 15% of American high school students reported being electronically bullied. The increasing prevalence of this behavior—and the potentially tragic outcomes—have made “cyberbullying” a buzzword in recent years and have sparked legislation and policy changes in many states. The difficulty in enforcement stems from the inability of school administrators to reach beyond school grounds and monitor what is happening in students’ homes on their personal computers. This has led to increased liability for school systems and several large lawsuits by students against school districts. Some cyberbullying laws, while well-intentioned, have been ruled unconstitutional because they infringe on student speech. The laws have also faced issues because they frequently apply only to students in the purview of public school boards—exempting private school students and parents like Lori Drew, who created a fictitious MySpace account posing as a 16-year-old boy and sent her 13-year-old neighbor Megan Meier increasingly negative messages on the online forum. About a month and a half after friending “Josh” on MySpace, Megan hanged herself. Instead of waiting for clear and effective legislation, some school districts are turning to the free market for a solution. Stop!t is a mobile app that allows students to screenshot or photograph interactions with cyberbullies and send the pictures anonymously to administrators. The same anonymity that lets some cyberbullies thrive could be the key to increased reporting and cyberbullying prevention. One principal at a school that implemented Stop!t said that within the first year of adopting the app, the school received 75 percent fewer bullying reports. However, there is still room for the...

How the SEC Really Feels About High Frequency Trading

For fans of Michael Lewis’s Flash Boys, the SEC would like you to know that things are going splendidly on the high-frequency crackdown front. In January 2015 alone, the agency brought three high-frequency trading (HFT) suits against different sharks in the securities market. One such shark is high-frequency trader Aleksandr Milrud. Milrud layered trades for approximately two years starting in January 2013. Around the globe, Milrud’s recruits used HFT to fraudulently inflate and deflate stock prices, profiting by buying and selling at the altered prices. To clear up any lingering confusion on the part of the SEC’s confidential broker informant, Milrud actually referred to the artificial price pressure as “the dirty work.” Milrud further explained that he usually wired his illicit profits to an offshore bank account and later met with an individual who would give him a suitcase full of cash. The SEC’s complaint confirms that the agency believes “Milrud’s layering scheme was very lucrative. In the course of soliciting the [confidential informant’s] participation in his scheme, Milrud stated that one of his trading groups generated profits of approximately one million dollars per month.” Indeed, the complaint later outlines two examples of Milrud’s profiteering activities: Exhibit 1 involved an order that resulted in a $72.28 profit for the trader. Exhibit 2 clocked in a bit more conservatively at $60.74 worth of illegal profits. Milrud even “directed a wire transfer of $5,000 to a bank account located in New Jersey. The purpose of the transfer was to fund a trading account . . . so that Milrud’s traders could use the account to engage in layering.”...

Poorly Stated Policy: The Ongoing Saga of Samsung’s SmartTVs

On February 5th, Shane Harris at the Daily Beast reported on a questionable provision in the Samsung Privacy Policy–SmartTV Supplement: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.” This clause sparked its own share of outrage, and comparisons to George Orwell’s 1984, online:

“Samsung’s Smart TV privacy policy sounds like an Orwellian nightmare” – The Verge

“Careful what you say around your TV. It may be listening. And blabbing.” – The Daily Beast

“Left: Samsung SmartTV privacy policy… Right: 1984” – Parker Higgins, EFF Activist on Twitter

Rather than trying to sweep this bad publicity under the rug, or defending itself without making any changes, the technology company amended its policy for clarity and revealed more about how the system works. In the blog post discussing the issue, titled “Samsung Smart TVs Do Not Monitor Living Room Conversations”, the company explained that the voice recognition system would only be triggered by one of two events: the user pressing a button on their television remote, or the user stating one of several predetermined commands. In the latter event, voice data is apparently not transmitted. They also identified the third party: Nuance Communications, Inc. Finally, they guaranteed that it would be possible to turn off the voice recognition system entirely, if you desired. While public reaction to this newest revision has been decidedly more muted than the original revelation, I think Samsung deserves some recognition for the behavior they have...
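Samsung’s clarification boils down to a simple routing rule: free-form speech captured after a remote-button press goes to the third-party recognizer, while the handful of predetermined commands are handled on the television itself and never transmitted. The sketch below is a purely illustrative rendering of that described behavior, with hypothetical class, method, and command names; it is not based on Samsung’s or Nuance’s actual software.

import java.util.Set;

// Illustrative sketch of the two voice-recognition triggers Samsung describes.
// All names are hypothetical and do not reflect any real Samsung or Nuance code.
public class VoiceCaptureRouter {

    // Predetermined commands handled locally on the television.
    private static final Set<String> LOCAL_COMMANDS =
            Set.of("channel up", "channel down", "volume up", "volume down");

    // The two events that can activate voice capture.
    public enum Trigger { REMOTE_BUTTON_PRESS, SPOKEN_COMMAND }

    // Routes captured audio: free-form dictation after a button press is sent
    // to the third-party recognizer; predetermined commands are interpreted on
    // the device and no voice data is transmitted.
    public String route(Trigger trigger, String capturedSpeech) {
        if (trigger == Trigger.REMOTE_BUTTON_PRESS) {
            // Button-activated dictation: audio leaves the device.
            return sendToThirdPartyRecognizer(capturedSpeech);
        }
        if (LOCAL_COMMANDS.contains(capturedSpeech.toLowerCase())) {
            // Predetermined command: handled locally, nothing transmitted.
            return "handled locally: " + capturedSpeech;
        }
        // Anything else is ignored rather than transmitted.
        return "ignored";
    }

    private String sendToThirdPartyRecognizer(String speech) {
        // Placeholder for a network call to the speech-to-text provider.
        return "transmitted for recognition: " + speech;
    }

    public static void main(String[] args) {
        VoiceCaptureRouter router = new VoiceCaptureRouter();
        System.out.println(router.route(Trigger.SPOKEN_COMMAND, "volume up"));
        System.out.println(router.route(Trigger.REMOTE_BUTTON_PRESS, "find comedies from the 1990s"));
    }
}

On this reading, the privacy question is simply which branch a given utterance falls into, which is why the amended policy’s description of the predetermined commands matters.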

Obama Administration to Weigh in on Google v. Oracle Java Dispute

Last month, the Supreme Court invited input from the Department of Justice regarding the ongoing Java dispute between Google and Oracle, asking for advice on whether the Court should hear the case. According to the Court’s memo, U.S. Solicitor General Donald Verrilli, Jr. “is invited to file a brief in this case expressing the views of the United States.” Technology Analyst Al Hilwa calls this a “true 2015 nail-biter for the industry” because “[t]his is a judgment on what might constitute fair use in the context of software.” The dispute between Google and Oracle began in 2010, when Oracle sued Google seeking $1 billion in damages on the claim that Google had used Oracle Java software to design the operating system for the Android smartphone. Google wrote its own version of Java when it implemented the Android OS, but in order to allow software developers to write their own programs for Android, Google relied on Java Application Programming Interfaces (“APIs”). These APIs are “specifications that allow programs to communicate with each other,” even though they may be written by different people. Oracle alleged that Google copied 37 packages of prewritten Java programs when it should have licensed them or written entirely new code. Google responded with the argument that such code is not copyrightable under §102(b) of the Copyright Act, which withholds copyright protection from “any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in [an original work of authorship].” Google also argued that the copied elements were “a key part of allowing...
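For readers who have not worked with Java, the “declaring code versus implementing code” distinction at the heart of the dispute can be seen in a few lines. The snippet below is a hypothetical stand-in rather than actual Java platform source: the package, class, and method signature are the kind of declaration that existing developer code calls into, while the method body is the implementation that can be rewritten independently.

// Hypothetical illustration of the declaring/implementing distinction at issue
// in Oracle v. Google; this is not actual Java platform source code.
package com.example.mathutils;   // a made-up package, standing in for a java.* package

public final class SimpleMath {

    // Declaring code: the package name, class name, method name, parameter
    // types, and return type. A developer's call such as SimpleMath.max(3, 7)
    // depends only on this declaration.
    public static int max(int a, int b) {
        // Implementing code: the logic behind the declaration. Google wrote
        // its own implementations rather than copying this part.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        // Code written against the declaration keeps working no matter how
        // the body above is implemented.
        System.out.println(SimpleMath.max(3, 7)); // prints 7
    }
}

Google’s argument, as described above, is that such declarations fall outside copyright under §102(b) because they function as a method of operation on which developers’ existing programs depend.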

FCC Aims to Flex Muscle to Remove State Barriers to Municipal Internet

On June 10, 2014, FCC Chairman Tom Wheeler published an op-ed championing municipality-funded broadband. Noting Chattanooga, Tennessee’s past as a 19th century railroad boom town, he juxtaposed the city’s history with its recent decision to fund its own gigabit-per-second infrastructure: “Chattanooga’s investment has not only helped ensure that all its citizens have Internet access, it’s made this mid-size city in the Tennessee Valley a hub for the high-tech jobs people usually associate with Silicon Valley. Amazon has cited Chattanooga’s world-leading networks as a reason for locating a distribution center in the area, as has Volkswagen when it chose Chattanooga as its headquarters for North American manufacturing. Chattanooga is also emerging as an incubator for tech start-ups. Mayor Berke told me people have begun calling Chattanooga ‘Gig City’ – a big change for a city famous for its choo-choos.” Mr. Wheeler then delivered his punchline: “I believe it is in the best interests of consumers and competition that the FCC exercises its power to preempt state laws that ban or restrict competition from community broadband. Given the opportunity, we will do so.” Fast-forwarding to the present, Chairman Wheeler announced on Monday that he is circulating a proposed Order to his fellow FCC commissioners that would preempt state laws that stymie municipality-sponsored broadband projects, relying on the Commission’s authority under Section 706 of the Communications Act. The announcement comes a few weeks after President Obama himself pushed for increased support of community internet, with the White House publishing a detailed policy report extolling its virtues. Proponents applaud the move as facilitating the growth of high-speed internet in communities where major...

Net Neutrality: A Brief Overview Prior to FCC Vote on Feb. 26

Net neutrality is the concept that broadband network providers should be completely detached from the information that is sent over their networks. Some large internet providers want to get rid of net neutrality, the current state of affairs, and replace it with a prioritized internet that would create a series of “internet fast lanes” available at a price premium over “internet slow lanes.” In very simple terms, this means that if one has the money, one will have a fast internet connection. If one does not have the money, one will have a relatively slow internet connection. Advocates with various agendas offer different rationalizations for removing net neutrality. The most obvious support comes from internet service providers, who stand to profit by offering various internet packages not only to consumers who visit websites but also to the companies, businesses, and individuals who run them. Companies don’t often come right out and state that they are lobbying the government for a piece of legislation that will generate more profit for companies in that field, but instead come up with another, more altruistic rationalization. One example is Verizon’s claim that net neutrality harms disabled people, such as visually-impaired users seeking faster internet access. Support also comes from a libertarian camp that wants to encourage deregulation and minimal government interference in free market capitalism. The third main support for removing net neutrality takes the form of national security and preventing access to sites with undesirable or dangerous content. The poster child in the industry for net neutrality is Netflix. In 2014,...

The Fight for Faster Internet

The past few days have been lively for the FCC, with a vote to redefine what counts as ‘broadband’ internet access and rumors of regulation that would limit states’ ability to curtail municipal broadband programs. These changes follow a recent statement by the Executive Office of the President concerning the availability of ‘fast enough’ internet access for rural populations across the U.S. The statement illustrated a sharp divide between the availability of internet access at certain speeds in rural, as opposed to urban, communities: 51% of the rural population lacks access to 25 Mbps internet service, while 94% of the urban population has such access. Prior to the FCC’s redefinition of ‘broadband’ internet access as 25 Mbps or greater, the standard was only 4 Mbps or greater. At that level, the divide is much smaller: 95% of rural communities and 99.9% of urban communities have access that meets this threshold. Naturally, many internet service providers (ISPs) are not happy with the new definition. While protesting that the new standard is far more than ‘most customers’ will ever need, ISPs continually push customers towards faster (and more expensive) packages. On the Comcast website, the first tier of internet access that is advertised as sufficient for HD streaming or online gaming is 50 Mbps. The Executive Statement claims that only 47% of rural communities have access to these speeds. One promising method for bringing faster internet access to these underserved populations is municipal broadband. Over 350 municipalities are listed in the report as providing some sort of broadband internet access, but many are limited by state laws that...

The Right to be Forgotten

This past May, the Court of Justice of the European Union approved “the right to be forgotten” in a case brought by Mario Costeja against a newspaper and Google, a move which fundamentally changed our notions of Internet privacy. More than a decade earlier, the newspaper had published two notices about an auction of Costeja’s property to pay off debts, and links to the notices were still appearing in the search results when Googling his name. Costeja brought suit in an effort to remove the links from the search results. The court said the links could be removed if they were found to be “inadequate, irrelevant or no longer relevant.” Under the right to be forgotten, only searches that include a person’s name will trigger the removal of search results, which means that the articles or websites can still show up in the results if the search uses a different keyword. The European Union’s right to be forgotten has spurred much concern among free speech campaigners, who claim the ruling unjustly limits what can be published online. Privacy advocates, however, are praising the ruling for allowing people some exercise of power over what content appears about them online. This new right creates a process for people to remove links to embarrassing, outdated, and otherwise unwanted content from Google and other search engines’ results. Courts are directed to balance the public’s interest in access to the information in question against the privacy interests of the person affected by the content. As of now, the ruling applies only to Google’s local European sites, such as Google.de in Germany, Google.fr in France,...

Freedom of Speech in a Digital Age: Ramifications for Hyperbolic Rhetoric and Free Debate

The ability to speak without fear of governmental repercussions is a crucial element in the ability of states to serve as laboratories for democracy. Without free debate, the voices of “we the people” become muffled and our local and federal governments are rendered inadequate representatives of our evolving needs. The issue of freedom of speech in our digitized world ought to be at the forefront of our constitutional concerns. So many of our daily interactions occur online. Individuals read articles from news sites and voice their grievances via their Twitter and Facebook accounts. Such grievances often give rise to heated debates, sometimes over inane issues (like whether the trend of naming children after inanimate objects should somehow be a violation of free speech), and, most importantly, over social issues that need awareness and action. But how does Freedom of Speech really work in our modern era, where people often update their statuses or make posts that are easily taken out of context and read without the writer’s intent in mind? What happens if an individual, angry and hurt by a politician’s repeated failure to address an issue she considers of paramount importance, takes to her Twitter account and posts: “God, I’m going to KILL Politician X for overlooking the safety of our local mothers and children!” In the United States, it is a federal crime to communicate a true threat of violence against another person. Clearly, such speech is not protected by the First Amendment. But is our hypothetical distraught citizen’s Twitter post just hyperbole, as is much of what’s found on the internet, or is it a true threat of violence? What...

Will federal legislation make consumers’ private information safer?

JP Morgan’s computers were penetrated by hackers in the early summer of 2014, exposing the personal information of the firm’s customers, but the firm did not disclose the breach until late in the summer.[1] Over 76 million customers’ contact information—phone numbers and email addresses—was stolen.[2] The Connecticut and Illinois Attorneys General began scrutinizing JP Morgan’s delayed notification to its customers that their contact information had been obtained by hackers, taking issue with the fact that JP Morgan “only revealed…limited details” about the extent of the breach.[3] Both attorneys general are assessing whether JP Morgan complied with their states’ privacy laws—mainly their data breach notification laws. Given the size of JP Morgan and the 76 million customers whose information was breached, it is safe to assume that residents of Connecticut and Illinois were not the only ones whose personal information was compromised. Data breaches have become a big issue not only for JP Morgan, but for many other companies. The same hackers who breached JP Morgan’s security wall attempted to get customer data from Deutsche Bank, Bank of America, Fidelity, and other financial institutions.[4] Hackers also breached Target’s and Home Depot’s customer credit information, taking 40 million credit card numbers from Target and 56 million from Home Depot.[5] Data breaches and data loss seem to be inevitable, whether through someone working internally for an organization—à la Edward Snowden—or through hackers—as in the case of JP Morgan, Home Depot, and Target. Regardless of how data is lost, there is a need to evaluate the best approach to notifying a consumer when someone else obtains a consumer’s...

Drone Regulation is Up in the Air

Drones, those unmanned aerial systems that have long been a source of international controversy, have also created interest in the commercial market. Transportation, security, agriculture, and oil and gas exploration are just a few of the sectors that could benefit from drone use. Yet there are valid objections that have stalled the legal use of drones. These worries normally center on the threat of collisions with manned airplanes and the potential invasion of privacy. Recently, the U.S. National Transportation Safety Board ruled that drones are “aircraft” under the regulatory scheme of the Federal Aviation Administration (FAA). The FAA has published policy statements banning the commercial use of drones. Officials have approved commercial drone flights on a case-by-case basis, which has led to only a small handful of legal drone operators. The rest of the industry has disregarded the policy statements, treating them as simple recommendations and not legally enforceable regulations. The FAA has issued cease-and-desist letters to these drone operators and, in one peculiar circumstance, a $10,000 fine. Yet such actions have been struck down numerous times by the courts. This fight between regulator and industry is a common one but particularly potent in the case of drones. Delays by the FAA have led to its demonization by the drone industry’s trade group, the Association for Unmanned Vehicle Systems International. The president of the association has stated that each day lost to delays in integration will lead to $27 million “in lost economic impact.” However, as alarming as this is, it is important to make sure that when sweeping regulation is enacted it puts a premium on safety, while not destroying...

Gas, Electric, Water, and…Internet?

In the midst of the battle for the future of the Internet, President Barack Obama has made his allegiance clear. Obama released a statement on November 10th urging the FCC to adopt new regulations that would treat the Internet like a utility in order to preserve a “free and open internet.” The President’s plan endorses an idea that has become popularly known as “net neutrality.” Proponents of net neutrality claim that it would prevent Internet service providers (ISPs) from picking winners and losers online, a practice they claim would effectively destroy the open Internet. In his recent statement, Obama outlined several bright-line rules which would prevent ISPs from blocking customer access to content, prohibit throttling, increase transparency, and forbid paid prioritization. In order for the FCC to accomplish these goals, President Obama advised that the Commission must adopt the strictest rules possible, which would require broadband service to be treated as a public utility. Opponents of President Obama’s plan argue that treating the Internet like a utility would slow innovation and raise costs, equating the potential FCC regulations to “micromanagement.” Many who oppose the plan argue that the move would increase bureaucracy and cause inefficiency; rather than adding the Internet to the list of government-controlled infrastructure, they believe that the open market is the best method of meeting consumer needs. Classifying the Internet as a utility would entail treating ISPs as common carriers, which are governed by Title II of the Communications Act of 1934. Currently, ISPs are classified as information services. Section 706 of the 1996 Telecommunications Act, which governs the FCC’s oversight of broadband services provided by ISPs, grants...

Apple Pay and MCX: Antitrust Minefield or Misfire?

On October 20, Apple launched an NFC-based payment service called Apple Pay. In a nutshell, this service allows customers to store credit card, debit card, and brand loyalty card information in their iPhones and complete transactions without using the physical card or giving any third party access to the card number. Within 72 hours, the service already had more than one million card activations. One of the benefits of using NFC for this service is that many merchants (as many as 220,000 nationwide) already have the hardware in place to enable Apple Pay with no active support required on their part. This led to Apple Pay being accepted and used at various merchants who had not anticipated supporting it. Within one week of Apple Pay’s implementation, however, CVS and Rite-Aid turned off the hardware that enabled Apple Pay and similar services, like Google Wallet. The reason? Those merchants are members of a group called MCX, which has been planning to release a similar payment solution called CurrentC early in 2015. Members could face steep fines for failing to boycott a competing mobile payment method. Aside from customer backlash, this has raised concerns of anti-competitive behavior, in the form of a private antitrust investigation against MCX. Although CVS and Rite-Aid’s actions “raise an antitrust smell,” it remains unlikely that an antitrust suit will bring about change. Proponents of the claims state that what MCX has done is create a ‘horizontal boycott,’ which is illegal. At first glance, this seems to be a slam dunk, but in order to prove a boycott, the potential plaintiffs would need to produce hard evidence of...

Killer Robots on the Horizon for Weapons Technology

With advances in technology and artificial intelligence, the development of fully autonomous weapons has come closer to reality. Lethal Autonomous Weapons Systems (LAWS), more commonly known as “killer robots,” are different from the drones utilized by the military today in that they will be capable of selecting and engaging targets independently. The concern over the potential ramifications of LAWS has led to an international discussion riddled with moral undertones; the prospect of giving a robot the “choice” and power to kill seems wrong to human rights groups. The effect LAWS could have on military operations is astounding: with autonomous robots in play, there is a reasonable possibility of completely removed and emotionless combat. The fear is that these killer robots will be able to make the ultimate decision regarding who lives and dies. The use of LAWS will undoubtedly affect international relations and pose a serious challenge for international law. There is a debate about whether killer robots will even be permitted in warfare, due to possible compliance issues with the international obligations set forth in the UN Charter and the Geneva Conventions. As of now, LAWS are not being utilized in the field since they are not yet completely operational, but the United Nations is seeking to anticipate the potential issues and address the problem before the situation spirals and a race to the bottom ensues. The United Nations met in May to consider the potential social and legal implications of killer robots, and one major legal issue is liability. There is some ambiguity regarding who will be held liable if a robot “commits” a war crime...