How AI randomization can improve scarce resource management strategies
Investigation: How Randomization in AI Allocation Strategies Can Lead to More Equitable Outcomes
To ensure fairness in model predictions during deployment, users typically focus on minimizing bias. This often involves strategies such as altering the decision-making features of the model or adjusting the calibration of its output scores.
Researchers from MIT and Northeastern University challenge the adequacy of current fairness methods in tackling structural injustices and underlying uncertainties. Their new paper on the arXiv preprint server reveals that systematically randomizing model decisions can improve fairness in certain scenarios.
For instance, if several companies employ the same machine-learning model to rank job candidates deterministically, without incorporating randomization, there is a risk that a highly qualified individual will consistently be ranked at the bottom for every opportunity. Introducing randomization into the model's decision-making process could mitigate the risk of consistently denying a deserving individual or group access to valuable resources, such as job interviews.
Their study demonstrated that randomization is especially effective in scenarios where a model's decisions are marked by uncertainty or where a particular group consistently faces negative decisions.
The researchers propose a framework for integrating a controlled level of randomization into a model's decisions through a weighted lottery system. This adaptable approach allows users to customize the degree of randomization to their specific needs, enhancing fairness without compromising the model's efficiency or accuracy.
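As a rough illustration of this idea, the sketch below implements one plausible weighted lottery in Python; the function name, the softmax weighting, and the `randomization` knob are our own assumptions for illustration, not the authors' published scheme.

```python
import numpy as np

def weighted_lottery(scores, k, randomization=1.0, rng=None):
    """Pick k of n candidates by a weighted lottery.

    Illustrative sketch only (not the paper's exact method):
    randomization=0 recovers deterministic top-k selection, while
    larger values flatten the weights toward a uniform draw.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    if randomization == 0:
        return np.argsort(scores)[::-1][:k]  # deterministic top-k
    # Softmax weights; the randomization level acts as a temperature.
    weights = np.exp(scores / randomization)
    weights /= weights.sum()
    return rng.choice(len(scores), size=k, replace=False, p=weights)

# Example: allocate 2 interview slots among 5 ranked candidates.
print(weighted_lottery([0.9, 0.85, 0.6, 0.4, 0.2], k=2, randomization=0.5))
```

With a low `randomization` value the allocation stays close to the score ranking, while lower-ranked candidates retain a small but nonzero chance of selection.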
Shomik Jain, a graduate student at the Institute for Data, Systems, and Society (IDSS) and lead author of the paper, questions whether social allocations of scarce resources should be determined solely by scores or rankings. He notes that as algorithms scale and increasingly influence decision-making, the inherent uncertainties in these scores can become more pronounced. The research suggests that achieving fairness may necessitate the introduction of some form of randomization.
The paper, co-authored by Shomik Jain; Kathleen Creel, Assistant Professor of Philosophy and Computer Science at Northeastern University; and senior author Ashia Wilson, Lister Brothers Career Development Professor in Electrical Engineering and Computer Science and principal investigator at the Laboratory for Information and Decision Systems (LIDS), will be presented at the International Conference on Machine Learning (ICML 2024), scheduled for July 21-27 in Vienna, Austria.
Evaluating assertions
This research builds on earlier investigations into the drawbacks of scaling deterministic systems. Previous studies demonstrated that deterministic resource allocation via machine-learning models can magnify inequalities present in training data, thus reinforcing systemic bias and inequity.
Wilson highlights that randomization, a key concept in statistics, proves to be highly effective in meeting fairness requirements from both systemic and individual perspectives.
This research examines how randomization can improve fairness, with an analytical approach based on the ideas of philosopher John Broome. Broome's advocacy for using lotteries to fairly allocate limited resources and honor individual claims informed their exploration.
According to Wilson, an individual's entitlement to scarce resources like a kidney transplant can be justified by factors such as merit, deservingness, or necessity. For instance, the intrinsic right to life may support one's claim to a kidney transplant.
According to Jain, acknowledging that individuals have varying claims to scarce resources implies that fairness requires respecting each claim. He raises the question of whether consistently favoring those with stronger claims truly constitutes fairness.
Deterministic allocation methods may lead to systemic exclusion or amplify existing inequalities, particularly when receiving an allocation enhances an individual's chances of future allocations. Moreover, since machine-learning models are prone to errors, a deterministic approach could perpetuate these errors.
While randomization can address these issues, it does not imply that every decision made by a model should be subject to uniform randomization.
Methodical randomization
The researchers employed a weighted lottery to modulate the extent of randomization according to the uncertainty inherent in the model's decision-making process. Decisions characterized by higher uncertainty were subjected to greater levels of randomization.
"In kidney allocation, planning typically revolves around estimated lifespan, which carries significant uncertainty. When two patients are merely five years apart in projected lifespan, measuring the difference becomes challenging. We aim to use this level of uncertainty to adjust the randomization process," says Wilson.
The researchers employed statistical uncertainty quantification techniques to assess the appropriate level of randomization required in various scenarios. Their findings indicate that well-calibrated randomization can enhance fairness for individuals while preserving the model's overall utility and effectiveness.
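Under the same caveats, one way to couple the lottery to uncertainty is to let an uncertainty estimate set the softmax temperature; the standard-error inputs and the helper below are hypothetical, since the paper's specific uncertainty quantification is not reproduced here.

```python
import numpy as np

def uncertainty_scaled_lottery(scores, uncertainties, k, rng=None):
    """Weighted lottery whose randomness grows with score uncertainty.

    Hypothetical sketch: the mean per-candidate uncertainty sets the
    softmax temperature, so confident scores give near-deterministic
    top-k picks while noisy scores approach a uniform draw.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    temperature = max(float(np.mean(uncertainties)), 1e-8)
    weights = np.exp(scores / temperature)
    weights /= weights.sum()
    return rng.choice(len(scores), size=k, replace=False, p=weights)

# Two patients about five projected life-years apart, with wide error bars:
scores = np.array([20.0, 15.0])        # estimated life-years gained
uncertainties = np.array([8.0, 8.0])   # large uncertainty softens the gap
print(uncertainty_scaled_lottery(scores, uncertainties, k=1))
```

In this toy example the wide error bars bring the two patients' selection probabilities much closer to even, mirroring the kidney-allocation scenario Wilson describes, whereas tight error bars would let the higher projected lifespan dominate.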
According to Wilson, finding a balance between maximizing overall utility and honoring the rights of individuals receiving limited resources is essential, though the trade-off is generally quite modest.
The researchers point out that while randomization can be advantageous, there are cases, particularly in criminal justice, where it may fail to improve fairness and could have adverse effects on individuals.
However, the researchers suggest that randomization could enhance fairness in areas such as college admissions and intend to investigate additional use cases in future studies. They also plan to examine how randomization might influence factors like competition and pricing, as well as its potential to enhance the robustness of machine-learning models.
Wilson remarks, "We hope our paper represents an initial step in demonstrating the potential benefits of randomization. We present randomization as a tool, but the extent to which it is implemented will be determined by the stakeholders involved in the allocation process. Additionally, the decision-making process itself poses a separate research question."
Further detail: Jain, Shomik, et al. "Scarce Resource Allocations That Rely on Machine Learning Should Be Randomized." arXiv preprint, 2024.
Labels: AI Allocation Strategies