Beware of biased vaccine distribution algorithms

As the logistical challenge of distributing vaccines to more than 300 million Americans looms, institutions are furiously developing algorithms to facilitate deployment. The promise: this technology will allow us to efficiently allocate a limited number of doses – to the highest-priority groups and without human error.

Vaccine distribution algorithms have already been deployed in many places. In December 2020, researchers at Stanford University implemented a system to prioritize individuals within its community of more than 20,000 people. Around the same time, the US Department of Health and Human Services revealed it had teamed up with data analytics company Palantir to leverage “Tiberius,” an algorithm for efficiently assigning doses to the hardest-hit areas. And at the state level, Arizona, South Carolina, Tennessee, and at least four others are developing proprietary technologies to manage vaccine deployment.

But it could all turn out badly.

Imagine a vaccine delivery algorithm that undersupplies predominantly black counties. Or one that puts female patients at a disadvantage compared to male ones. Or another that favors the richest 1%.

These possibilities seem bizarre, even dystopian. But they could become reality within the next few months.

At the heart of these frightening prospects is the issue of algorithmic bias. Rule-based systems, like the one that powers the Stanford algorithm, can deliver discriminatory results if programmers fail to capture all the relevant variables in sufficient detail. Machine learning systems, such as the one most likely behind Palantir’s algorithm, may seem to escape this problem because they learn from data with minimal human input. But in their case, bias arises when the training data under-represents certain demographics – by class, gender, or race. The algorithms then replicate the bias encoded in the data.

Already, there are warning signs that vaccine delivery algorithms may allocate doses unfairly. The Stanford system, for example, caused a debacle when it selected only seven of 1,300 medical residents to receive doses. The researchers concluded that the error stemmed from the programmers’ failure to account for the residents’ actual exposure to the virus.
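To see how such a rule-based failure can arise, consider a minimal, purely hypothetical scoring sketch (not Stanford’s actual algorithm): if exposure is inferred solely from a fixed unit assignment, staff who rotate between units – as residents do – silently score zero.

```python
# Hypothetical rule-based priority scorer (NOT the real Stanford algorithm).
# Exposure is inferred only from a fixed unit assignment, so rotating
# residents, who have no single assigned unit, end up with no exposure
# points at all.

HIGH_EXPOSURE_UNITS = {"ICU", "ER", "COVID ward"}

def priority_score(person: dict) -> int:
    score = 0
    # Age-based points.
    if person.get("age", 0) >= 65:
        score += 2
    # Exposure points keyed ONLY to a fixed unit assignment -- the missing
    # variable is actual patient contact, which residents have plenty of.
    if person.get("assigned_unit") in HIGH_EXPOSURE_UNITS:
        score += 3
    return score

attending = {"age": 58, "assigned_unit": "ICU"}  # scores 3
resident = {"age": 29, "assigned_unit": None}    # rotates daily, scores 0
print(priority_score(attending), priority_score(resident))
```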

The university then had to issue a blanket apology and revise its deployment plans; some doses were even allocated by hand. In addition, a number of studies have highlighted biases in machine learning systems, giving reason to doubt their effectiveness.

In 2018, a test by the American Civil Liberties Union (ACLU) revealed that Amazon’s machine learning system falsely identified 28 members of Congress as criminals. A disproportionate number of those falsely flagged were people of color, including six members of the Congressional Black Caucus, among them the late civil rights leader Rep. John Lewis. There is a risk that algorithms deployed with the intention of distributing vaccines ethically will, ironically, end up distributing them unethically.

This problem points to a central tension. Technology is probably the best solution we have today for massive logistical challenges such as delivering groceries, meals, and packages. But at the same time, algorithms are not yet advanced enough to make ethical decisions. Their failures could be very damaging, especially to vulnerable populations. So how do we resolve this tension, not only in the context of vaccine distribution, but more broadly?

One way is to fix the technology. Rule-based systems can be thoroughly evaluated prior to deployment. Machine learning systems are harder to fix, but research suggests that improving the quality of training datasets reduces bias in the results. Even then, creating perfectly unbiased datasets could be a bit like putting a band-aid on a terminal illness. These sets often contain millions or even billions of samples.

Humans would need to sort through and label them all appropriately for this approach to succeed – which is both impractical and unscalable. The central limitation, as pioneering computer scientist Judea Pearl has pointedly insisted, is that machine learning is statistical in nature and powered by reams of data.
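Still, simple representation checks give a sense of what “improving the data” means in practice. Here is an illustrative sketch – the group labels, population shares, and tolerance threshold are assumptions made up for the example, not a real audit – that flags any demographic group whose share of the training data drifts too far from its share of the population being served.

```python
# Illustrative data-quality audit: compare each group's share of the training
# data against its share of the population being served. The labels, shares,
# and tolerance are assumptions made up for this example.
from collections import Counter

def representation_gaps(records, population_shares, tolerance=0.05):
    """Return groups whose share of the data deviates from the population."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = counts.get(group, 0) / total
        if abs(data_share - pop_share) > tolerance:
            gaps[group] = round(data_share - pop_share, 3)
    return gaps

records = [{"group": "A"}] * 800 + [{"group": "B"}] * 200
print(representation_gaps(records, {"A": 0.6, "B": 0.4}))
# {'A': 0.2, 'B': -0.2} -> group B is under-represented by 20 points
```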

But this limitation also suggests avenues for improvement. Psychologist Gary Marcus has proposed one: a “hybrid” approach, which combines statistical elements with elements of deductive reasoning. Under this idea, programmers would explicitly teach algorithms concepts such as “race” and “gender” and encode rules preventing disparities in results along these categories. The algorithms would then learn from reams of relevant data. In this way, hybrid approaches would offer the best of both worlds: they would retain the advantages of statistical methods while providing a clear handle on problems such as bias.
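One possible reading of that hybrid recipe – my own illustrative assumption, not Marcus’s specification – is sketched below: a statistical model ranks people by risk, while an explicit, hand-written rule rejects any allocation whose selection rates diverge too far across a protected category.

```python
# Sketch of a "hybrid" allocation: a learned risk score does the ranking,
# and an explicit, human-written parity rule constrains the outcome.
# All names and thresholds here are illustrative assumptions.
from collections import defaultdict

def selection_rates(people, selected_ids, category):
    """Fraction of each group (by the given category) that was selected."""
    totals, picked = defaultdict(int), defaultdict(int)
    for p in people:
        totals[p[category]] += 1
        if p["id"] in selected_ids:
            picked[p[category]] += 1
    return {group: picked[group] / totals[group] for group in totals}

def allocate(people, risk_score, n_doses, category, max_gap=0.1):
    """Pick the n highest-risk people, but fail loudly if the explicit
    parity rule for the given protected category is violated."""
    ranked = sorted(people, key=risk_score, reverse=True)
    chosen = {p["id"] for p in ranked[:n_doses]}
    rates = selection_rates(people, chosen, category)
    if max(rates.values()) - min(rates.values()) > max_gap:
        raise ValueError(f"Allocation violates the parity rule: {rates}")
    return chosen
```

In a real deployment, a violated check would trigger review and rebalancing rather than an exception, but the structure is the point: the statistical ranking and the deductive constraint remain separate, inspectable pieces.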

A third solution to the tension is to designate areas where algorithmic approaches should, for now, remain experimental rather than decisive. It may be acceptable to use algorithms for well-defined tasks where the risk of failure is low (such as operating transport networks, logistics platforms, and energy distribution systems), but not yet for those that involve messy ethical dilemmas, such as vaccine distribution.

This view still leaves open the possibility that, in the future, a more advanced system may be given such a task, while protecting us from immediate potential harm.

As technology becomes more powerful and ubiquitous, it will pay to be on our guard against the damage algorithms might cause. We’ll need the proper remedies in place – or we may end up letting the machines assign them.

Published March 18, 2021 – 21:18 UTC

