Detailed Study Notes on Human Classification and AI Bias

Examination of the Morton Cranial Collection

  • The room is filled with approximately five hundred human skulls, collected during the early 1800s.

  • Each skull is varnished, with numerical markings in black ink on the frontal bone.

  • Specific areas of the skulls are marked according to phrenology, indicating various human qualities such as "Benevolence" and "Veneration."

  • Some skulls are labeled with descriptors such as "Dutchman," "Peruvian of the Inca Race," or "Lunatic."

  • The collection was curated by Samuel Morton, an American craniologist, who was a physician, natural historian, and a member of the Academy of Natural Sciences of Philadelphia.

  • Morton collected skulls globally through trades with scientists and skull hunters who, at times, robbed graves to obtain specimens.

  • By 1851, Morton amassed over a thousand skulls, holding the title for the largest collection of its kind at that time.

  • Much of Morton’s collection is now stored at the Physical Anthropology Section of the Penn Museum in Philadelphia.

Morton's Methodology and Views

  • Unlike classical phrenologists, Morton did not believe that character could be discerned solely from head shape.

  • Morton's objective was to classify and rank human races based on physical skull characteristics.

  • He categorized humanity into five races: African, Native American, Caucasian, Malay, and Mongolian, which reflects a colonialist perspective prevalent in his era.

  • Morton’s theory of polygenism posited that the human races originated separately, a viewpoint supported by various scholars of his time and used to justify colonial and racist ideologies.

The Impact and Misuse of Craniometry

  • Morton’s craniometry—a method for measuring skull size—aimed to reveal human differences and was used to support racial hierarchies.

  • Many skulls in his collection were from individuals born in Africa who died enslaved in the Americas.

  • Morton measured cranial capacity by filling skulls with lead shot and recording the volume. He claimed that variations in skull size indicated different levels of intelligence among races, with his results suggesting that Caucasians had the largest skulls and Black individuals the smallest.

  • His findings were accepted as legitimate scientific data supporting racial superiority theories, thus legitimizing slavery and racial segregation.

Critique of Morton's Work

  • Stephen Jay Gould critically analyzed Morton’s research, arguing that his results reflected unconscious bias rather than conscious fraud. Gould wrote: "Morton's summaries are a patchwork of fudging and finagling in the clear interest of controlling a priori convictions."

  • Contemporary re-evaluations of Morton’s measurements indicate he made errors, including improper sample selection and ignoring the fact that larger individuals typically have larger brains.

  • Modern assessments demonstrate no significant skull volume differences among races, contradicting Morton's assertions.
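
The body-size confound noted in the critique above can be made concrete with a small simulation. Everything below is invented for illustration (the scaling factor, group sizes, and noise levels are arbitrary assumptions, not anatomical data): two simulated populations share the same brain-to-body scaling, yet a naive comparison of raw volumes reports a large "group difference" that vanishes once body size is accounted for.

```python
import random

random.seed(0)

def sample_group(mean_body_kg, n=1000):
    """Simulate (body mass, cranial volume) pairs with a shared scaling law."""
    people = []
    for _ in range(n):
        body = random.gauss(mean_body_kg, 8.0)        # body mass, kg (invented)
        volume = 19.0 * body + random.gauss(0, 60.0)  # same scaling for BOTH groups
        people.append((body, volume))
    return people

group_a = sample_group(75.0)  # larger average body size
group_b = sample_group(65.0)  # smaller average body size

def mean(xs):
    return sum(xs) / len(xs)

raw_a = mean([v for _, v in group_a])
raw_b = mean([v for _, v in group_b])
ratio_a = mean([v / b for b, v in group_a])
ratio_b = mean([v / b for b, v in group_b])

# Raw volumes differ substantially; per-kg volumes are nearly identical,
# because the only real difference between the groups is body size.
print(f"raw volume difference:    {raw_a - raw_b:.1f}")
print(f"per-kg volume difference: {ratio_a - ratio_b:.3f}")
```

The point of the sketch is methodological, not anatomical: comparing raw group means without controlling for a known covariate manufactures a difference that the underlying model does not contain.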

The Legacy of Race Science

  • Morton's work represented the zenith of biological determinism in the 19th century, grounded in flawed assumptions about race and intelligence.

  • The implications of Morton’s taxonomy extended beyond science into socio-political realms, entrenching systems of oppression and racial violence.

  • Cornel West remarks on how dominant metaphors and categories from historic race science not only legitimize white supremacy but also shape specific political narratives about race.

Modern Implications in Artificial Intelligence

  • The legacy of Morton's classification practices foreshadows problems with measurement and classification systems in contemporary artificial intelligence (AI).

  • Classification methods informed by historical biases can perpetuate the same inequalities and encroach upon racially oppressed communities under the guise of technical objectivity.

  • The chapter emphasizes that issues of AI bias often revolve less around numerical errors than around the foundational ideologies that shape the classification methods used.

  • Failing to scrutinize AI classification methods—including their social, political, and economic implications—may lead models to replicate discriminatory outcomes.
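
One way such oversights become visible is through a simple disparity audit of a model's outputs. The sketch below is minimal and hypothetical: the group names, toy predictions, and the "four-fifths" threshold are illustrative assumptions, not a prescribed fairness standard.

```python
from collections import defaultdict

def positive_rates(predictions):
    """predictions: iterable of (group, label) pairs with label in {0, 1}.
    Returns the positive-prediction rate per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

# Toy classifier outputs skewed against group "B" (invented data).
toy = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

rates = positive_rates(toy)
print(rates)

# One common (and contested) rule of thumb: flag a disparity when the
# ratio of the lowest to the highest group rate falls below 0.8.
disparity = min(rates.values()) / max(rates.values())
print(f"disparity ratio: {disparity:.2f}")
```

An audit like this only surfaces a numerical symptom; per the chapter's argument, it cannot by itself address the categories and ideologies that produced the skew.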

Classification as Power

  • As noted by Geoffrey Bowker and Susan Leigh Star, classification serves as a powerful technological and social method that tends to render invisible the processes behind it while maintaining status quo hierarchies.

  • The chapter also stresses that the dynamics of classification are critical to understanding machine learning systems, as classifications encode assumptions about social order and the allocation of resources into AI outputs.

  • Deep dynamics of inequality are reflected in patterns of data used to train these systems and the organizations and practices behind them.

Case Study: Amazon's Hiring Practices

  • In 2014, Amazon attempted to automate its hiring process, creating models trained on resumes of previous employees, which reflected gender biases due to a male-dominated workforce.

  • The algorithm learned to favor language more common on men's resumes and penalized resumes containing terms such as "women's" (as in "women's chess club captain"), disproportionately downgrading female candidates.

  • This case highlighted how machine learning can reinforce existing societal biases rather than eradicate them, indicating the need for more comprehensive approaches to addressing bias in AI systems.
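
The failure mode in the Amazon case can be illustrated with a toy sketch: a word-weighting model fit to gender-skewed historical outcomes learns to penalize terms correlated with women, even though those terms say nothing about competence. This is not Amazon's actual system; the resumes, vocabulary, and scoring rule below are invented for illustration.

```python
from collections import Counter

# Invented "historical" data: hired resumes skew toward one vocabulary,
# rejected resumes happen to contain the word "women's".
hired = [
    "executed captured managed software team",
    "executed built captured systems",
    "managed software executed",
]
rejected = [
    "women's chess club captain software",
    "women's college software team",
    "built software women's outreach",
]

def word_weights(hired_docs, rejected_docs):
    """Ratio-style weight: > 1 if a word is more frequent in hired resumes,
    < 1 if more frequent in rejected ones (add-one smoothing)."""
    pos = Counter(w for d in hired_docs for w in d.split())
    neg = Counter(w for d in rejected_docs for w in d.split())
    vocab = set(pos) | set(neg)
    return {w: (pos[w] + 1) / (neg[w] + 1) for w in vocab}

weights = word_weights(hired, rejected)

def score(resume):
    """Average word weight; unseen words are neutral (1.0)."""
    words = resume.split()
    return sum(weights.get(w, 1.0) for w in words) / len(words)

# "women's" never appears in the hired set, so adding it drags the score
# down, despite being irrelevant to job performance.
print(score("executed managed software"))
print(score("executed managed software women's"))
```

The bias here is entirely inherited from the training outcomes: the model has no concept of gender, only of words that correlate with past decisions, which is precisely how such systems reinforce rather than eradicate societal biases.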

Classification Machines: Ethical Considerations

  • AI classification practices often fail to acknowledge the inherent power and political implications in their categorizations.

  • Historical criticisms of classification emphasize how such systems can perpetuate harmful stereotypes and social grouping based on rigid and often flawed binary frameworks.

  • There is an urgent call to shift away from merely optimizing AI systems toward understanding the sociological and ethical principles informing their design, highlighting how data extraction and knowledge construction can undermine fairness.
