This piece appeared first at The Hill on September 4, 2024.
One Way AI Could Make Elections Fairer
Electoral Boundaries Could Be Chosen More Fairly
It is election season. In addition to the usual shenanigans, this time we have new election drama driven by AI-fueled social media that tailors news feeds to our individual interests.
AI has proven troublesome for elections, amplifying confirmation bias and even spreading fake news. But in the right hands, could AI make elections fairer? Here’s one way: AI-infused maps could provide a more objective and fairer approach to redrawing electoral district boundaries.
On the surface, it is easy to argue that mapping election districts should be an apolitical representation of the people. Election districts should serve the people, enabling individuals to be represented fairly in government.
But redistricting, which happens every 10 years, is a legal and legislative process that is politically charged, expensive and prone to bias.
In 1812, the term “gerrymandering” was first used in response to the redrawing of Massachusetts state senate districts under Gov. Elbridge Gerry. One oddly shaped district amplified the effect of supportive votes and diluted the effect of opposition votes. Through gerrymandering, parties spread unfavorable votes across many districts or concentrate them in one district, whichever they find most advantageous.
Two hundred years later, gerrymandering is still a thing.
AI makes a prediction using both data and a set of rules established through training. In this case, the AI would be trained to predict which electoral district each location in a state should be assigned to. The data would be the social, economic and demographic characteristics of a place. The rules would be the criteria that provide guardrails for how electoral districts should be designed.
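To make that concrete, here is a minimal, hypothetical sketch in Python of what “data plus rules” could look like. The places, district centers and weights are invented for illustration; a real system would work from census and GIS data and from far richer criteria agreed on in advance.

```python
from dataclasses import dataclass
import math

# Hypothetical toy data: each "place" is a location with a population count.
# A real system would use census blocks with social, economic and
# demographic attributes drawn from GIS data.
@dataclass
class Place:
    x: float          # map coordinate of the place's centroid
    y: float          # map coordinate of the place's centroid
    population: int

def assign_districts(places, centers, balance_weight=0.001):
    """Assign each place to a district, trading off compactness (distance
    to a district center) against equal population across districts.
    A toy heuristic for illustration, not an actual redistricting algorithm."""
    totals = [0] * len(centers)               # running population per district
    assignments = []
    for place in sorted(places, key=lambda p: -p.population):
        best_i, best_score = 0, math.inf
        for i, (cx, cy) in enumerate(centers):
            distance = math.hypot(place.x - cx, place.y - cy)   # compactness rule
            penalty = totals[i] * balance_weight                # equal-population rule
            if distance + penalty < best_score:
                best_i, best_score = i, distance + penalty
        totals[best_i] += place.population
        assignments.append((place, best_i))
    return assignments

# Four made-up places split between two made-up district centers.
places = [Place(0, 0, 1200), Place(1, 0, 900), Place(5, 5, 1100), Place(6, 5, 1000)]
centers = [(0.5, 0.0), (5.5, 5.0)]
for place, district in assign_districts(places, centers):
    print(f"({place.x}, {place.y}) -> district {district}")
```

The point is not this particular heuristic but that the rules (here, compactness and population balance) are written down explicitly, where anyone can inspect and debate them.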
Current definitions leave a lot to interpretation. Electoral districts must be contiguous, geographically compact and represent groups of people with similar political, economic and social interests. With so much clarity, what could go wrong?
Rather than advocating for boundaries that serve particular groups, what if groups had to agree on fair rules and then let AI decide the boundaries? What are the most important aspects of a community in terms of governance choice? Is it income, education or race? I don’t know the answer, but the conversation seems worth having.
Even discussing the rules will be difficult, as people will have different ideas about which characteristics of a district matter most and should receive priority. However, bipartisan dialogue will benefit from debating rules rather than championing specific lines that are more directly tied to election outcomes. AI can handle multiple rules and competing priorities.
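As a hypothetical illustration of how competing priorities could be handled, the sketch below turns a values question (how much does compactness matter relative to keeping communities together?) into weights that can be negotiated. The criteria, scores and weights are all made up for the example.

```python
# Hypothetical example: score a candidate district map against several
# agreed-upon criteria, each scored 0 (worst) to 1 (best) by prior analysis.
def score_plan(plan_metrics, weights):
    """Combine per-criterion scores into a single weighted score."""
    return sum(weights[criterion] * value for criterion, value in plan_metrics.items())

# Made-up scores a GIS analysis might produce for one candidate map.
plan_metrics = {"population_equality": 0.95, "compactness": 0.70, "community_cohesion": 0.80}

# Two hypothetical sets of priorities that negotiators might agree on.
weights_a = {"population_equality": 0.5, "compactness": 0.3, "community_cohesion": 0.2}
weights_b = {"population_equality": 0.4, "compactness": 0.2, "community_cohesion": 0.4}

print(score_plan(plan_metrics, weights_a))   # approximately 0.845
print(score_plan(plan_metrics, weights_b))   # approximately 0.84
```

Disagreement then happens where it belongs, over the weights and criteria, not over where a specific line falls.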
An AI approach to delineating election boundaries could work because the necessary maps and data already exist. Geographic information systems (GIS), a digital mapping technology, already hold massive amounts of data on people and places, and those data are heavily used for elections and redistricting. GIS analysts (think modern cartographers) are skilled at collecting, compiling and analyzing exactly the data an AI approach would need.
To be sure, even AI is biased. AI systems are built on data generated by people and trained with rules that reflect the values of the people who build them. However, conversations about rules and consequences will be more effective when separated from the weeds of drawing a boundary.
The quality of the map data will be the most critical determinant of how well an AI approach to election boundary delineation could work. By far the most important dataset will be the Census, which, though imperfect, will help AI draw maps with less bias and error.
AI-infused maps offer one approach to making electoral boundary mapping more objective. That doesn’t mean we throw out common sense or expert opinion.
It will matter who builds the AI. For the results to be unbiased, we need diverse participation in the development and deployment of AI rules. But conversations focused on rules mean we can collectively set goals and rules that are fair and sensible and then let the chips fall where they may.
We have certainly seen examples of how AI can interfere with democracy. If used with the right intention, it may also be able to guide us toward it.
Trisalyn Nelson is the Jack and Laura Dangermond Chair of Geography at the University of California Santa Barbara. She is also the director of the Center for Spatial Studies and Data Science and a Public Voices Fellow of The OpEd Project.