Fairness in AI Seminar Series

Machine learning (ML) and artificial intelligence (AI) systems are ubiquitous in decision-making and are extensively used in contexts that directly affect our everyday lives. These systems are trained on previously collected datasets, which unfortunately tend to encode implicit social biases. As a result, there are many examples in which the outcomes of ML and AI systems perpetuate existing underlying biases. It is therefore imperative to design such systems with issues of fairness explicitly taken into account. This fascinating research direction has recently attracted considerable interest from the scientific community.


Sponsored by the University of Maryland Center for Machine Learning and Capital One, the purpose of the Fairness in AI seminar series is to explore the topic of fairness in AI, ML and theoretical computer science by having researchers present their cutting-edge work on this family of problems. The talks will take place virtually on Mondays from 11 a.m. to noon. Recordings will be posted here for later viewing.

Talk announcements and Zoom links will be distributed through the Center for Machine Learning mailing list, which you can sign up for here. If you are interested in giving a talk, please contact Leonidas Tsepenekas.

Spring 2021 Schedule

Mondays 11 a.m.–noon

March 8

Fairness in AI and in Algorithms

Aravind Srinivasan, Professor of Computer Science, UMD

With the increasing role played by AI and algorithms in our lives, it is now well recognized that fairness should be a key aspect of such AI systems, in order to avoid automated decision-making that may be (inadvertently) biased. After surveying some of the approaches considered in this general area, we will discuss some of our work on fairness, particularly in unsupervised learning. The talk will assume only a general exposure to computer science and AI.

March 29

Just, Equitable and Efficient Algorithmic Allocation of Scarce Societal Resources

Sanmay Das, Professor of Computer Science, George Mason University

Demand for resources that are collectively controlled or regulated by society, like social services or organs for transplantation, typically far outstrips supply. How should these scarce resources be allocated? Any approach to this question requires insights from computer science, economics, and beyond; we must define objectives (foregrounding equity and distributive justice in addition to efficiency), predict outcomes (taking causal considerations into account), and optimize allocations, while carefully considering agent preferences and incentives. Motivated by the real-world problem of provision of services to homeless households, I will discuss our approach to thinking through how algorithmic approaches and computational thinking can help.

April 5

Technical Challenges for Training Fair Neural Networks

Vedant Nanda, Ph.D. candidate in Computer Science, UMD and MPI-SWS joint program

As machine learning algorithms have been widely deployed across applications, many concerns have been raised over the fairness of their predictions, especially in high-stakes settings (such as facial recognition and medical imaging). In response to these concerns, the community has proposed and formalized various notions of fairness as well as methods for rectifying unfair behavior. While fairness constraints have been studied extensively for classical models, the effectiveness of methods for imposing fairness on deep neural networks is unclear. In this work, we observe that these large models overfit to fairness objectives, producing a range of unintended and undesirable consequences. We conduct our experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
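To give a concrete sense of what "imposing fairness" on a neural network can look like, here is a minimal, illustrative sketch (not the speaker's method) that adds a demographic-parity-style penalty to an ordinary training loss in PyTorch. The model, optimizer, batch tensors, and the lambda_fair weight are hypothetical placeholders.

```python
# Illustrative sketch only: a fairness penalty added to a standard loss.
# Assumes binary classification with a binary protected attribute `group`,
# and that every batch contains members of both groups.
import torch
import torch.nn as nn

def fairness_penalty(logits, group):
    """Absolute gap in mean positive-class score between the two groups."""
    probs = torch.sigmoid(logits).squeeze(-1)
    gap = probs[group == 0].mean() - probs[group == 1].mean()
    return gap.abs()

def train_step(model, optimizer, x, y, group, lambda_fair=1.0):
    """One training step: task loss plus a weighted fairness penalty."""
    optimizer.zero_grad()
    logits = model(x)
    task_loss = nn.functional.binary_cross_entropy_with_logits(
        logits.squeeze(-1), y.float())
    loss = task_loss + lambda_fair * fairness_penalty(logits, group)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Penalties of this general kind are one example of the fairness objectives that, per the abstract, large models can overfit to.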

April 12

Fairness, Semi-Supervised Learning, and More: A General Framework for Clustering with Stochastic Pairwise Constraints

Leonidas Tsepenekas, Ph.D. candidate in Computer Science, UMD

Metric clustering is fundamental in areas ranging from Combinatorial Optimization and Data Mining to Machine Learning and Operations Research. However, in a variety of situations we may have additional requirements or knowledge, distinct from the underlying metric, regarding which pairs of points should be clustered together. To capture and analyze such scenarios, we introduce a novel family of stochastic pairwise constraints, which we incorporate into several essential clustering objectives (radius/median/means). Moreover, we demonstrate that these constraints can succinctly model an intriguing collection of applications, including, among others, Individual Fairness. Our main result is a general framework that yields approximation algorithms with provable guarantees for important clustering objectives, while at the same time producing solutions that respect the stochastic pairwise constraints.
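As a rough illustration only, the sketch below audits a randomized clustering against one plausible reading of a stochastic pairwise constraint: a requirement that points i and j end up in the same cluster with probability at least p_ij. The constraint form, function names, and sampling approach are assumptions made for illustration, not the formulation or algorithm from the talk.

```python
# Illustrative sketch only: estimate how often a randomized clustering
# violates constraints of the assumed form (i, j, p_ij).
import random

def sample_assignment(soft_assignment):
    """Draw one hard clustering from per-point cluster distributions."""
    return [random.choices(range(len(dist)), weights=dist)[0]
            for dist in soft_assignment]

def violation_rate(soft_assignment, constraints, trials=1000):
    """Fraction of constraints (i, j, p_ij) whose empirical co-clustering
    probability falls below the required threshold p_ij."""
    together = {(i, j): 0 for i, j, _ in constraints}
    for _ in range(trials):
        labels = sample_assignment(soft_assignment)
        for i, j, _ in constraints:
            together[(i, j)] += labels[i] == labels[j]
    violated = sum(1 for i, j, p in constraints
                   if together[(i, j)] / trials < p)
    return violated / len(constraints)

# Toy usage: 3 points, 2 clusters, one constraint asking points 0 and 1
# to be co-clustered with probability at least 0.5.
soft = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
print(violation_rate(soft, [(0, 1, 0.5)]))
```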

April 19

Probabilistic Fair Clustering

Seyed Abdulaziz Esmaeili, Ph.D. candidate in Computer Science, UMD

In clustering problems, a central decision-maker is given a complete metric graph over vertices and must provide a clustering of vertices that minimizes some objective function. In fair clustering problems, vertices are endowed with a color (e.g., membership in a group), and the features of a valid clustering might also include the representation of colors in that clustering. Prior work in fair clustering assumes complete knowledge of group membership. In this work, we generalize prior work by assuming imperfect knowledge of group membership through probabilistic assignments. We present clustering algorithms in this more general setting with approximation ratio guarantees. We also address the problem of "metric membership," where different groups have a notion of order and distance. Experiments are conducted using our proposed algorithms, as well as baselines, to validate our approach and to surface nuanced concerns that arise when group membership is not known deterministically.
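As a small illustration of the probabilistic setting, the sketch below checks one natural fairness condition when group membership is known only probabilistically: the expected fraction of a color in each cluster should stay within lower and upper bounds. The bounds, helper names, and toy data are assumptions made for illustration, not the talk's exact model or algorithms.

```python
# Illustrative sketch only: expected per-cluster color fractions under
# probabilistic group membership, checked against bounds [alpha, beta].
from collections import defaultdict

def expected_color_fractions(labels, color_probs):
    """labels[v] = cluster of point v; color_probs[v] = P(point v has color 1)."""
    sizes, color_mass = defaultdict(int), defaultdict(float)
    for v, c in enumerate(labels):
        sizes[c] += 1
        color_mass[c] += color_probs[v]
    return {c: color_mass[c] / sizes[c] for c in sizes}

def is_probabilistically_fair(labels, color_probs, alpha, beta):
    """Every cluster's expected color-1 fraction must lie in [alpha, beta]."""
    return all(alpha <= frac <= beta
               for frac in expected_color_fractions(labels, color_probs).values())

# Toy example: two clusters, each point carries color 1 with some probability.
labels = [0, 0, 1, 1]
color_probs = [0.9, 0.2, 0.6, 0.4]
print(is_probabilistically_fair(labels, color_probs, alpha=0.3, beta=0.7))
```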