Rachel Foster | MTTLR

Should Smart Personal Assistants Ever Report Your Conversations?

The rise of smart home devices and personal assistants such as Amazon’s Echo, Google’s Home, and Apple’s Siri has been accompanied by growing wariness about these devices listening to private conversations. These fears are not unfounded: according to the Echo’s privacy policy, the device “processes and retains audio, interactions, and other data in the cloud.” Warnings about the privacy implications of the Echo unwrapped on Christmas morning can be found all over the Internet, and privacy horror stories about smart home device recordings abound. Some of the patents that companies have filed are illustrative. Amazon holds a patent on technology that would recommend cough drops and soup to users who sound sick while speaking to their Echo. Echo recordings of a criminal defendant are being used to prosecute him. One Echo device secretly recorded a couple’s conversation and sent the clip to someone in their contacts. These examples are enough to make anyone worry about what the device they bring into their home might be hearing and how that information might be used. Users are concerned not just that companies are retaining the data collected by the devices to market their own products more effectively, but also that the data is being sold to third parties. Further, the threat of the government accessing this information adds a dystopian Big Brother element. However, the overwhelming backlash against the sharing of data from smart personal assistants sits uneasily with the reaction to a recent news story. Earlier this month, a 13-year-old Indiana boy was detained by police after he posted a screenshot of a...

Machines May Not Be the Solution to Tech Recruiting’s Gender Bias

The tech industry is currently under scrutiny for gender discrimination and a gender employment gap. While women make up more than half of the U.S. workforce, they hold less than 20% of U.S. tech jobs. High-profile women at technology companies have come forward to tell their stories of sexual harassment at work, and others have spoken out about the often-toxic atmosphere for women in technology workplaces. Perhaps this is why, as Reuters reported, Amazon began testing an AI tool to help streamline the recruiting process. After all, if humans are clearly biased in hiring, making the process more objective by turning it over to machines could be the answer. Unfortunately, Amazon discontinued the experimental tool after discovering that it showed bias against women. The technology rated candidates on a scale of one to five stars across a variety of factors; it was designed to take in a large pool of candidates and output the top few options. However, Amazon discovered that the system taught itself to prefer male candidates: it penalized resumes that included the word “women’s,” as in “women’s tennis team member,” and downgraded graduates of two all-women’s colleges. Further, it gave preference to so-called “masculine language”: words such as “executed” or “captured.” Amazon tried altering the program to fix these problems, but the issue is bigger than these two instances. The computer model learned from patterns in resumes submitted to the company over a 10-year period, and men have dominated the industry since its inception. The program learned its biases from the humans who had done the job before it, and these biases could present in...
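
The mechanism behind this failure can be made concrete. The following Python sketch is an illustration only, not Amazon’s actual system (whose details are not public): it trains a toy text classifier on hypothetical “historical” hiring decisions that skew toward resumes using certain words, then inspects the learned weights. All of the resumes, labels, and tokens here are invented for demonstration.

```python
# Illustrative sketch only: a toy screening model trained on hypothetical,
# biased "historical" hiring outcomes. Not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past resumes and outcomes; the skew mirrors a
# male-dominated applicant pool like the one described above.
resumes = [
    "executed product launch and captured new market share",     # hired
    "executed platform migration, led infrastructure team",      # hired
    "captain of women's tennis team, managed club budget",       # not hired
    "founded women's coding club, organized campus hackathons",  # not hired
]
hired = [1, 1, 0, 0]  # biased labels the model will learn from

# Bag-of-words features (note: the default tokenizer splits
# "women's" into the token "women")
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)

model = LogisticRegression().fit(X, hired)

# Inspect learned weights: tokens correlated with past rejections,
# such as "women", end up with negative coefficients.
for token, idx in sorted(vectorizer.vocabulary_.items()):
    print(f"{token:12s} weight = {model.coef_[0][idx]:+.3f}")
```

Running this prints a negative weight for “women” and positive weights for words like “executed,” even though the bias lives entirely in the labels rather than the words themselves. That is why patching individual terms, as Amazon reportedly tried, does not cure the underlying problem: the model will simply find other proxies in the same biased training data.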