Algorithmic Fairness

In domains ranging from automatic face detection to automated parole decisions, machine learning algorithms have been found to be systematically biased, favoring one demographic group over another.

Our work focuses on:
(1) Auditing algorithms that affect human lives. We have audited bias in applications including visual gender bias in Wikipedia biographies [3], image search results for professional images [7], sounds used by household devices [5], face matching algorithms [11], pupil detection algorithms [4], toxicity/cyberbullying detection algorithms [1, 8, 10], misinformation detection algorithms [2], and sentiment detection algorithms [9].

(2) Designing new algorithms that are measurably less biased. We have been designing algorithms that reimagine how bias should be quantified and what corrective actions can be taken. This includes probabilistically fusing decisions from different modalities (e.g., text, images) or from different black-box algorithms [8, 9] to ensure both fairness and accuracy, as sketched below. We have also worked on identifying the right optimization parameters within an algorithm to balance fairness and accuracy [10]. Active projects in this space aim to reify time [1] and networks in the definitions of fairness.
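To make the fusion idea concrete, here is a minimal sketch in Python. It is not the published method from [8, 9]: it simply grid-searches the weight of a convex combination of two hypothetical black-box scores, rating each candidate weight by accuracy minus a penalty on the demographic-parity gap. All data and names (score_a, score_b, group, lam) are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two hypothetical black-box models score the same items,
# and each item belongs to group 0 or group 1. Everything here is
# illustrative, not data or models from the published papers.
n = 1000
group = rng.integers(0, 2, size=n)     # protected attribute
y_true = rng.integers(0, 2, size=n)    # ground-truth labels
# Simulated scores from two opaque classifiers (e.g., a text model and an
# image model), each carrying a different amount of group-dependent bias.
score_a = np.clip(0.6 * y_true + 0.2 * group + rng.normal(0, 0.25, n), 0, 1)
score_b = np.clip(0.6 * y_true - 0.1 * group + rng.normal(0, 0.25, n), 0, 1)

def evaluate(weight, threshold=0.5, lam=1.0):
    """Rate one fusion weight: accuracy minus a penalty (lam) on the
    demographic-parity gap, i.e., the difference in positive-prediction
    rates between the two groups."""
    fused = weight * score_a + (1 - weight) * score_b
    pred = (fused >= threshold).astype(int)
    accuracy = (pred == y_true).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy - lam * gap, accuracy, gap

# Grid-search the convex fusion weight for the best fairness/accuracy trade-off.
best = max((evaluate(w) + (w,) for w in np.linspace(0, 1, 101)),
           key=lambda t: t[0])
print(f"weight={best[3]:.2f}  accuracy={best[1]:.3f}  parity gap={best[2]:.3f}")
```

The published fusion is probabilistic and uses richer fairness criteria; this sketch only illustrates the accuracy-versus-parity trade-off that such fusion navigates.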

Related Publications

  1. Almuzaini, A. A., Bhatt, C. A., Pennock, D. M., & Singh, V. K. (2022, June). ABCinML: Anticipatory Bias Correction in Machine Learning Applications. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1552-1560).
  2. Park, J., Ellezhuthil, R., Arunachalam, R., Feldman, L., & Singh, V. (2022). Fairness in Misinformation Detection Algorithms. In Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media. Retrieved from https://doi.org/10.36190
  3. Beytia, P., Agarwal, P., Redi, M., & Singh, V. K. (2022). Visual Gender Biases in Wikipedia: A Systematic Evaluation across the Ten Most Spoken Languages. To be published in the Proceedings of the International AAAI Conference on Web and Social Media (ICWSM).
  4. Kulkarni, O. N., Patil, V., Singh, V. K., & Atrey, P. K. (2021). Accuracy and Fairness in Pupil Detection Algorithm. In 2021 IEEE Seventh International Conference on Multimedia Big Data (BigMM) (pp. 17-24). IEEE. 
  5. Roy, J., Bhatt, C., Chayko, M., & Singh, V. K. (2021). Gendered Sounds in Household Devices: Results from an Online Search Case Study. Proceedings of the Association for Information Science and Technology, 58(1), 824-826.
  6. Singh, V. K., André, E., Boll, S., Hildebrandt, M., & Shamma, D. A. (2020). Legal and ethical challenges in multimedia research. IEEE MultiMedia, 27(2), 46-54.
  7. Singh, V., Chayko, M., Inamdar, R., & Floegel, D. (2020). Female Librarians and Male Computer Programmers: Gender Bias in Occupational Images on Digital Media Platforms. Journal of the Association for Information Science and Technology, 71(11), 1281-1294.
  8. Alasadi, J., Ramanathan, A., Atrey, P. & Singh, V. K. (2020). A Fairness-Aware Fusion Framework for Multimodal Cyberbullying Detection. In Proceedings of the IEEE International Conference on Multimedia Big Data. 
  9. Abdulaziz, A., & Singh, V. K. (2020). Balancing Fairness and Accuracy in Sentiment Detection Using Multiple Black-box Models. In Proceedings of the 2nd ACM International Workshop on Fairness, Accountability, and Transparency, and Ethics in MultiMedia.
  10. Singh, V., & Hofenbitzer, C. (2019). Fairness across network positions in cyberbullying detection algorithms. In 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 557-559). IEEE.
  11. Alasadi, J., Al Hilli, A., & Singh, V. (2019). Toward Fairness in Face Matching Algorithms. In Proceedings of the 1st International Workshop on Fairness, Accountability, and Transparency in MultiMedia (pp. 19-25).

Funding and Support

We gratefully acknowledge the support from the National Science Foundation for this work.

1. EAGER: SaTC: Early-Stage Interdisciplinary Collaboration: Fair and Accurate Information Quality Assessment Algorithm

2. RAPID: Countering Language Biases in COVID-19 Search Auto-Completes

Coverage

Media coverage for the gender bias in professional images paper [7]: Yahoo Lifestyle, American Libraries Magazine, ACM Tech News, Hindustan Times, Daily Targum.

2020: Rutgers Today: Online Autocompletes Are More Likely to Yield COVID-19 Misinformation in Spanish than in English.