Algorithms are ubiquitous and can determine success and failure in various aspects of life.
Winners of algorithmic decisions (job offers, credit card offers) gain benefits; losers face discrimination (no interviews, higher insurance rates).
These underlying algorithms often lack transparency and an avenue for appeal.
Algorithms are constructed using past data and a definition of success, which raises the issue:
What if the algorithms are flawed?
The reliance on historical data can embed past biases into future predictions.
Individuals utilize algorithms daily, albeit informally.
Example: Preparing a family meal involves personal data (ingredients on hand, time, ambition) and a personal definition of success (e.g., the children eat vegetables).
The cook's algorithm can differ vastly from what the children would choose (e.g., maximizing sweets).
This reflects that algorithms represent subjective perspectives rather than objective truths.
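A minimal sketch of this idea, using made-up meal data, shows how the same inputs produce different "best" answers depending on whose definition of success the algorithm encodes (the meals, fields, and scoring functions here are all hypothetical illustrations, not anything from the talk):

```python
# Hypothetical "family meal" algorithm: same data, two definitions of success.
meals = [
    {"name": "stir-fry",  "vegetables": 3, "sugar": 0, "prep_minutes": 25},
    {"name": "pasta",     "vegetables": 1, "sugar": 1, "prep_minutes": 20},
    {"name": "ice cream", "vegetables": 0, "sugar": 5, "prep_minutes": 2},
]

def cook_success(meal):
    # The cook's opinion: success means vegetables get eaten, not sugar.
    return meal["vegetables"] - meal["sugar"]

def kid_success(meal):
    # The children's opinion: success means maximum sweets.
    return meal["sugar"]

print(max(meals, key=cook_success)["name"])  # -> stir-fry
print(max(meals, key=kid_success)["name"])   # -> ice cream
```

Swapping the success function changes the output entirely; the opinion baked into the objective, not the data alone, decides the result.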
Common belief is that algorithms are objective, scientific, and reliable.
This misconception works as a marketing tactic: it intimidates people into trusting algorithms without questioning them.
Blind faith in algorithms can lead to adverse effects, particularly when they operate without scrutiny.
Kiri Soares, a NYC high school principal, attempted to understand an algorithm called the value-added model used to assess her teachers.
She was told the formula was math she wouldn't understand; that lack of transparency perpetuated systemic unfairness.
The New York Post obtained individual teachers' scores through a freedom-of-information request and published them, which shamed teachers rather than clarifying how the scores were computed.
Statistician Gary Rubinstein analyzed the published data and found that teachers scored in consecutive years received wildly different results, revealing variability far too large for the model to be used to judge individuals.
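A rough way to see the kind of inconsistency Rubinstein reported is to correlate the scores the same teachers received in two consecutive years. The sketch below uses synthetic, made-up scores (not his actual data) simply to show what near-random results look like under such a check:

```python
import numpy as np

# Hypothetical value-added scores for the same 500 teachers in two consecutive
# years. If the model measured something stable about a teacher, the two years
# should correlate strongly; scores scattered at random suggest the metric is
# too noisy to judge individuals.
rng = np.random.default_rng(0)
year_1 = rng.uniform(0, 100, size=500)   # scores out of 100
year_2 = rng.uniform(0, 100, size=500)   # essentially unrelated to year_1

r = np.corrcoef(year_1, year_2)[0, 1]
print(f"year-over-year correlation: {r:.2f}")  # close to 0 for random scores
```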
Example: a hypothetical algorithmic hiring process at Fox News raises concerns:
Using 21 years of past applications to profile success (e.g., defining success as staying four years and being promoted) could filter out women, since historical hiring and promotion favored men.
Such algorithms reinforce existing inequities in hiring rather than correcting them.
The reliance on flawed data can amplify biases rooted in societal inequality.
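To illustrate the mechanism, here is a sketch on entirely synthetic data (not any real hiring records; the variable names, coefficients, and threshold are invented for the example): if past promotions depended partly on being male, a model trained on those labels learns gender as a strong predictor of "success."

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic applicants: "skill" is gender-neutral, but the historical label
# "promoted" was influenced by gender as well as skill.
rng = np.random.default_rng(1)
n = 2000
is_male = rng.integers(0, 2, size=n)        # 1 = male applicant
skill = rng.normal(0, 1, size=n)            # actual ability

# Historical outcome: promotion depended on skill AND on being male.
promoted = (skill + 2.0 * is_male + rng.normal(0, 1, size=n)) > 1.5

# Train on the biased history; the model picks up gender as a predictor.
X = np.column_stack([is_male, skill])
model = LogisticRegression().fit(X, promoted)

print("weight on gender:", round(model.coef_[0][0], 2))  # large positive weight
print("weight on skill: ", round(model.coef_[0][1], 2))
```

The model does exactly what it was asked to do: it reproduces the pattern in the historical labels, which here includes the bias.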
The resulting algorithms can perpetuate racial segregation and discrimination in policing and justice:
Example: ProPublica found that a recidivism risk algorithm wrongly labeled black defendants as future criminals at roughly twice the rate of white defendants.
Algorithms can serve as tools of systemic bias, labeled by the speaker as "weapons of math destruction."
Privately developed algorithms often lack accountability and perpetuate profit-driven inequalities.
There’s a risk of embedding bias through the selected data for algorithm training and the definition of success.
Experimental evidence (e.g., résumé studies comparing white-sounding and black-sounding names) shows bias in hiring practices.
Algorithms can be audited for fairness, leading to improved equity.
Steps for auditing include:
Data Integrity Check: Assess biases in data collection and ensure fairness across categories.
Definition of Success: Re-examine criteria for success beyond traditional views, using blind evaluations as an example.
Accuracy Consideration: Assess the error rates of algorithms and their disproportionate impacts on different groups (a sketch of this check follows the list).
Consider Long-term Effects: Evaluate feedback loops and potential unintended consequences, such as engagement bias on social media platforms.
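As a concrete version of the accuracy step above, the following sketch compares false positive rates across two groups. The predictions and error rates are hypothetical, injected only to mimic the kind of disparity ProPublica reported, not the output of any real system:

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    # Share of actual negatives that were wrongly flagged as positive.
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

rng = np.random.default_rng(2)
group = np.array(["A"] * 500 + ["B"] * 500)
y_true = rng.integers(0, 2, size=1000)   # actual outcome (1 = reoffended)
y_pred = y_true.copy()

# Inject a group-specific error pattern: non-reoffenders in group B get
# flagged as high risk 30% of the time, in group A only 5% of the time.
for g, err in [("A", 0.05), ("B", 0.30)]:
    mask = (group == g) & (y_true == 0)
    flips = rng.random(mask.sum()) < err
    y_pred[np.where(mask)[0][flips]] = 1

# The audit: overall accuracy can look similar while one group bears far
# more wrongful "high risk" labels than the other.
for g in ["A", "B"]:
    m = group == g
    print(g, "false positive rate:",
          round(false_positive_rate(y_true[m], y_pred[m]), 2))
```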
Data scientists must shift from being mere providers of calculations to mediators of ethical discussions.
Society must recognize algorithms as tools requiring transparency and accountability.
Acknowledging the political implications of algorithms is crucial to bringing about the changes needed for societal fairness and equity.