Over the past few years, big tech companies like Facebook and Amazon have come under fire for discriminatory artificial intelligence. Now, U.S. lawmakers are introducing a bill that would require tech companies to check their algorithms for bias. Drafted by Sens. Cory Booker and Ron Wyden, the Algorithmic Accountability Act of 2019 calls for the Federal Trade Commission to require companies collecting and sharing data for the purpose of algorithms to conduct impact assessments on their privacy and AI tools. The bill notes that algorithms can contribute to and amplify “unfair, biased, or discriminatory decisions” that impact consumers. For now, the bill is aimed at big tech companies and data brokers. It would apply only to companies that are valued at more than $50 million or that have access to more than 1 million consumers’ data. “Computers are increasingly involved in the most important decisions affecting Americans’ lives — whether or not someone can buy a home, get a job or even go to...
Facebook has had a lengthy list of privacy and advertising scandals within the last year. Recently, the U.S. Department of Housing and Urban Development charged Facebook with housing discrimination in its ads. HUD alleged that Facebook’s ad platform “discriminated in the terms, conditions, or privileges of the sale or rental of dwellings because of race, color, religion, sex, familial status, national origin or disability.” Now, a recent report from Cornell University shows that Facebook’s ads can discriminate against groups even when advertisers don’t want them to. “Advertising platforms can play an independent, central role in creating skewed, and potentially discriminatory, outcomes,” the report said. The report found that the lower the daily budget an ad had, the fewer women saw it. The content of an ad can also skew the types and number of people who see it. Researchers used public voter records in one test, resulting in the post being delivered to specific audiences, even...
Using automation to tackle gender bias in the workplace has proven to be a difficult task for many companies. In some cases, trying to solve the issue has amplified the problem. For example, Amazon had to get rid of one of its AI human resources tools because it discriminated against women applicants. And last year, Google Translate had to update its algorithm to stop the tool from returning only masculine translations for gender-neutral terms. Now, Slack’s new plug-in, #BiasCorrect, is changing the way people speak about women at work. #BiasCorrect was launched by Catalyst, a non-profit dedicated to making workplaces more women-friendly. The plug-in uses automation to suggest alternative terms and phrases when Slack users describe women coworkers using words with negative connotations. For example, if someone types “she’s so aggressive,” #BiasCorrect will offer terms like “passionate” or “focused” to point out the user’s unconscious bias. Since the launch of the plug-in, women across...
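To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of the kind of word-level substitution a #BiasCorrect-style plug-in performs; the term list, the suggested alternatives, and the suggest_alternatives function are illustrative assumptions, not Catalyst’s actual implementation.

```python
# Hypothetical sketch of a #BiasCorrect-style suggestion step.
# The term list and suggestions are illustrative assumptions,
# not Catalyst's actual data or code.
BIAS_SUGGESTIONS = {
    "aggressive": ["passionate", "focused"],
    "bossy": ["assertive", "decisive"],
}

def suggest_alternatives(message: str) -> dict:
    """Map each flagged word in a message to suggested alternatives."""
    flagged = {}
    for word in message.lower().split():
        cleaned = word.strip(".,!?'\"")  # drop surrounding punctuation
        if cleaned in BIAS_SUGGESTIONS:
            flagged[cleaned] = BIAS_SUGGESTIONS[cleaned]
    return flagged

print(suggest_alternatives("She is so aggressive in meetings."))
# {'aggressive': ['passionate', 'focused']}
```

A real plug-in would hook a check like this into Slack’s message events and post the suggestions back to the user, but the core idea is the same: match flagged wording and surface alternatives.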
Google announced a revamp of Translate to offer gender-specific interpretations. Before the update, only masculine translations existed for gender-neutral words. “Google Translate learns from hundreds of millions of already-translated examples from the web,” Google Translate Product Manager James Kuczmarski said in a blog post. “Historically, it has provided only one translation for a query, even if the translation could have either a feminine or masculine form.” Google Translate plans to extend gender-specific translations to more languages. Google Translate will launch the update on its iOS and Android apps later in the year. It is also set to address gender bias in features like query auto-complete. Tech companies have struggled with identifying and fixing gender bias in machine learning and artificial intelligence technologies. Last year, Wired reported on a Virginia computer science professor finding gender bias in some of the machine-learning software that he...
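As a rough illustration of what offering both a feminine and a masculine form means in practice, the sketch below returns every gendered rendering for a gender-neutral Turkish sentence; the lookup table and the translate function are hypothetical stand-ins and do not reflect Google Translate’s API or internals.

```python
# Hypothetical illustration of gender-specific translations.
# Turkish "o" is gender-neutral, so an English rendering must pick a pronoun;
# the table and function below are assumptions for illustration only.
GENDERED_TRANSLATIONS = {
    ("tr", "en", "o bir doktor"): {
        "feminine": "she is a doctor",
        "masculine": "he is a doctor",
    },
}

def translate(source_lang: str, target_lang: str, text: str) -> dict:
    """Return every gendered variant rather than a single arbitrary form."""
    key = (source_lang, target_lang, text.lower())
    return GENDERED_TRANSLATIONS.get(key, {"default": text})

for gender, rendering in translate("tr", "en", "O bir doktor").items():
    print(f"{gender}: {rendering}")
# feminine: she is a doctor
# masculine: he is a doctor
```

Returning all variants, instead of silently choosing one, is the behavior change the update describes.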