Computer-generated maps do little to provide fairness in redistricting

When I was a graduate student at the University of North Carolina Charlotte, I investigated partisan gerrymandering and whether mathematical models produce fair congressional districts. As it turned out, it was a fool's errand. I computed all kinds of maps from political data, but the idea of fairness eluded me: I could not determine it through algorithms.

Perhaps it was my skill as a mathematician, or the realization that value propositions are defined before the computational process begins, but I could not see fairness materializing from my analysis. That is, I could not argue that my algorithm necessarily produced a fair result. Accordingly, I find it interesting when individuals use similar models to argue that congressional district maps diverging from a mathematical output are evidence of bias.

The North Carolina trial over the new congressional district maps is taking place this week, and experts in mathematics and political science have given testimony as to the efficacy of the new maps. So far, most experts base their opinions on algorithms used to create congressional district maps.

What I find interesting about their testimonies is that their opinions suggest that the congressional district maps created by the General Assembly are inherently biased. However, nothing that the experts have said thus far indicates that the new maps are biased. This is because the experts have not demonstrated that the result of their analyses is a stand-in for fairness such that a deviation from it would necessarily be indicative of bias.

The plaintiff in the trial must argue that because the General Assembly's maps did not conform to the experts' analyses, the maps must be biased. But what is the standard of fairness that informed those analyses? From what I have observed, there is no standard of fairness informing the experts' opinions, and if their maps do not represent fairness, then how does the plaintiff measure bias?

The experts' maps are not representative of fairness because they are the product of unsupervised models or the average of several outcomes. The types of algorithms used to create the congressional district maps referenced by the experts can only be suggestive. They essentially amount to Monte Carlo simulations, whereby the average of all computed results is generally treated as the best possible outcome, an outcome that follows from mathematics, not from fairness. Alternatively, they use unsupervised clustering algorithms based on data features like population, demographics, and voting records, and layer the results over a map. If the output of the model is the product of an unsupervised process, how do you ensure the result represents fairness? While using algorithms to draw congressional district maps is an attractive idea, the leading computational methods in political map-making do little to establish fairness in and of themselves.
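To make the point concrete, here is a minimal sketch, in Python, of the Monte Carlo reasoning described above. Everything in it is invented for illustration: the toy precincts, the vote shares, and the seat-count metric are assumptions, not anyone's actual method. Real redistricting ensembles sample contiguous plans over precinct geography; this sketch only shows the logical shape of the argument, that the ensemble average is a property of the sampler, not a definition of fairness.

```python
import random

random.seed(0)

PRECINCTS = 40       # toy precincts, each with a party-A vote share
NUM_DISTRICTS = 4    # toy districts of equal size

# Invented vote-share data for illustration only.
votes = [random.uniform(0.3, 0.7) for _ in range(PRECINCTS)]

def seats_won(assignment):
    """Count districts where party A's mean vote share exceeds 0.5."""
    district_shares = [[] for _ in range(NUM_DISTRICTS)]
    for precinct, district in enumerate(assignment):
        district_shares[district].append(votes[precinct])
    return sum(1 for shares in district_shares
               if sum(shares) / len(shares) > 0.5)

def random_plan():
    """Randomly assign precincts to equal-size districts."""
    plan = [i % NUM_DISTRICTS for i in range(PRECINCTS)]
    random.shuffle(plan)
    return plan

# Build an ensemble of simulated plans and record the seat outcomes.
ensemble = [seats_won(random_plan()) for _ in range(1000)]
average_seats = sum(ensemble) / len(ensemble)

# The average describes what this sampler tends to produce; it says
# nothing, by itself, about whether any individual plan is "fair".
print(f"ensemble mean seats for party A: {average_seats:.2f}")
```

A plan whose seat count deviates from `average_seats` deviates from the sampler, and only from the sampler; calling that deviation "bias" requires a prior argument that the sampler's typical output is fair.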

I say this with trepidation, as I am a data scientist, and I think the algorithms used to develop political maps can be useful supporting tools under the right circumstances. However, I am not so blinded by academic ambition or partisan interest as to miss the fact that mathematical theory is not a substitute for fairness. The truth is that machine learning cannot tell us what is fair in congressional map-making.

The concept of fairness comes before we draw the maps. For example, we believe it is fair to respect local political boundaries, e.g., a county’s borders. Therefore, a congressional district that is fairly drawn minimizes splitting up counties. Consequently, we define a parameter in the algorithm to keep political boundaries materially whole. Here the model follows from a value proposition that is generally perceived as fair. However, the model itself does not define fairness.

Joshua Peters is a philosopher and social critic from Raleigh, NC. His academic background is in western philosophy, STEM, and financial analysis. Joshua studied at North Carolina State University (BS) and UNC Charlotte (MS). He is a graduate of the E.A. Morris Fellowship for Emerging Leaders.