Elliott Ash, Swiss Federal Institute of Technology (ETH)
Daniel Chen, Toulouse School of Economics
Arianna Ornaghi, University of Warwick
This paper provides a quantitative analysis of implicit language associations among judges and legislators, using recent machine learning tools designed to assess semantic biases in text corpora. Our measure proxies for implicit associations by looking at the relative co-occurrence of attribute words (e.g. positive versus negative, career versus family) with gender identifiers (man versus woman). Using the universe of published opinions in U.S. Circuit Courts, we document that judicial language associates men more strongly than women with positive (versus negative) attributes and with career (versus family) words. Judges displaying higher language bias against women tend to be older, male, and Protestant. Having daughters and greater exposure to female judges on a court both reduce bias. Finally, language bias predicts conservative votes on women's rights issues. A preliminary analysis of political language, based on U.S. Congressmen's speeches, shows similar results.
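The co-occurrence measure described above can be illustrated with a WEAT-style association score: how much more similar male identifiers are, on average, to one attribute set (career) than to another (family), relative to female identifiers. This is a minimal sketch only; the toy vectors, word lists, and helper names below are hypothetical, and the paper's actual embedding model and vocabulary are not specified in the abstract.

```python
# Sketch of a WEAT-style gender-bias score on word vectors.
# All vectors and word lists below are illustrative, not from the paper.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, A, B):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def bias_score(male, female, career, family):
    """Differential association of male vs. female identifiers with
    career vs. family attributes; positive values indicate a
    male-career / female-family leaning in the embedding space."""
    return (sum(association(m, career, family) for m in male) / len(male)
            - sum(association(f, career, family) for f in female) / len(female))

# Hypothetical toy 3-d embeddings standing in for trained vectors.
male = [(0.9, 0.1, 0.0), (0.8, 0.2, 0.1)]    # e.g. "man", "he"
female = [(0.1, 0.9, 0.0), (0.2, 0.8, 0.1)]  # e.g. "woman", "she"
career = [(0.85, 0.15, 0.05)]                # e.g. "career"
family = [(0.15, 0.85, 0.05)]                # e.g. "family"

print(bias_score(male, female, career, family))
```

In practice the vectors would come from embeddings trained on the opinion corpus, and a positive score on the career/family (or positive/negative) attribute pair corresponds to the male-leaning associations the abstract reports.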
Presented in Session 264. Gender, Race and the Criminal Justice System