Quick Thoughts on Anticipating the Unanticipated in AI

The negative impact of AI has been the subject of vociferous debate. Most of these deleterious impacts are anticipated based on observations of current instantiations of AI. We can predict the proliferation of deepfakes because we are already seeing them, and we can anticipate that as the technology gets better and spreads wider, these fakes will indeed proliferate. We can anticipate that some jobs will be replaced and others augmented by AI because we see it happening. We can predict that data privacy and ownership will be growing concerns. We can predict that multi-modal LLMs will likely reduce hallucinations, improve training, and enhance outputs. These trends are evident. However, while we can extrapolate, it is hard to see discontinuities. What is missing in our discourse are the unanticipated consequences of AI’s impact on society. They are missing precisely because they are unanticipated. However, given the high stakes, they need to be part of the conversation.
Let’s take the case of social media. We got social media regulation incredibly wrong. Social media companies adopted an advertising model that generally rewards eyeballs. Fostering engagement on your platform is a completely rational corporate decision, given the strong engagement-to-dollar relationship under an advertising model. So far, so good. However, while extremely lucrative for social media companies, this objective function had the unanticipated consequence of increasing the noise-to-signal ratio and promoting extreme content. Social media companies buttressed their engagement model with engagement-promoting tools (like the “thumbs up” and algorithmically curated feeds) that created unhealthy benchmarks of normalcy and even unhealthier competition. Again, decisions that are rational for a company extended into negative impacts on society, manifested in polarization, disinformation, and mental health issues (particularly among young adults). Post-hoc efforts to rectify these issues (like fact-checking) turn out to be band-aids: marginally effective solutions that merely tinker with the problem’s symptoms.
Now, we are in the process of making decisions regarding the deployment of AI. How do we prevent these issues? To regulate, paradoxically, these unanticipated consequences need to be anticipated. It is a wicked problem with unknown unknowns. One way to begin to think about this wicked problem is to study the path dependencies that today’s choices create. Just as AI trained on biased data can reinforce biases, what about AI trained on AI-generated data? What are the implications as the vast corpus of data becomes increasingly AI-generated? As AI is integrated into everything, does that lead to reduced human empathy? Is there a loss of human skills, and does it matter? What happens if we use AI to automate companies into increasingly algorithm-driven enterprises with few employees? What are the societal implications of following an advertising model in AI (as social media did), where each company’s AI bot is interspersed with ads…or, as the prevailing trend suggests, is the subscription model better? If we train personal assistants on personal data, how will that affect human relationships? If we can curate information, what are the implications of digitally curating people? What are the societal impacts of having large foundation models controlled by big technology companies (or the government)? Or will the natural proclivity to innovate at the individual level democratize AI through smaller open-source models? When can locally optimal solutions in AI become globally optimal nightmares?
These are tough questions to address, but actively engaging in this discourse is necessary. Ultimately, it seems that a major portion of the cornucopia of inevitable AI profits needs to be allocated (through tax policy or regulation) to our massive private and public research enterprise and to innovative minds that can study AI’s impact on society. Specifically, what are the longer-run unanticipated consequences of the decisions we make on AI deployment today? Doing this might just prevent the kinds of issues we are experiencing with social media, except with far more profound implications. Failing to do this could push a society already tottering over the edge…and we just might run out of band-aids.
Varun Grover

George and Boyce Billingsley Endowed Chair and Distinguished Professor, Walton College of Business, University of Arkansas