Quick Thoughts on Being Intelligent in an AI World
Human knowledge is largely built on learning (cognition) through trial-and-error engagement and experiments with the world. AI-based knowledge relies on patterns in data. As a researcher with a positivist bent, I have often impressed upon my students that there is an “objective” mesh of causal relationships between constructs that can help us understand and explain the world. Our job as researchers is to identify these constructs and relationships, test them, and continue to refine them to build our bridge to knowledge. Correspondingly, there is “AI-Knowledge”: generative AI holds a representation of a massive corpus of words (in LLMs) and can establish predictive relationships among them. Given that human knowledge, in its various manifestations, trains and builds AI-based knowledge, the two forms of knowledge run in parallel; as human knowledge evolves and AI models are retrained on it, both are likely to advance in tandem.
Most AI-based knowledge is dormant until activated through our engagement at the point of need. An important question is how we (humans) intelligently engage with this knowledge. While intelligence is a multifaceted concept, it is useful to think of it more as a verb than a noun, describing how we engage with knowledge. What do we mean when we say someone is knowledgeable or intelligent in this AI world?
One way to think of this is in terms of passive and active intelligence. If we are mindlessly scrolling on our smartphones (e.g., through social media feeds) and responding to algorithmic cues, as many of us tend to do, then we are engaged in passive intelligence: accessing information while also training the models. Passive intelligence can be useful because it extends our knowledge, albeit somewhat superficially, but it does not require much cognitive effort on our part. The major benefit of passive intelligence goes to the machine. Active intelligence, on the other hand, involves cognitively engaging with AI as a tool in order to leverage it better. For instance, it involves investing effort in formulating the problem well to extract value from the AI, asking the right questions, probing through conversational interactions, and thinking critically when evaluating the outputs. Active intelligence leads to better alignment between inputs (what is being sought) and outputs (what the AI provides). It leads to deeper and more sustainable understanding, and it benefits humans at least as much as, if not more than, machines.
Critics of generative AI argue that human knowledge (and intelligence) is far superior since it includes emotions and semantics, while AI just uses algorithms to regurgitate data and has no real memory. I would point out, however, that AI-based knowledge simulates emotion and meaning (i.e., rather than interpreting a word in its context as a human would, AI simulates meaning through its network of associations with other words). Simulated knowledge can pass the Turing Test. So, as AI-based knowledge continues to advance, we need to place particular importance on how we engage with it. If we delegate too much to the AI, the likelihood of negative outcomes is higher than if we engage with it mindfully. There will be many situational contingencies where passive intelligence is appropriate. However, active intelligence practiced with AI helps us retain more agency over human and societal growth, keeping that human edge.
Varun Grover
George and Boyce Billingsley Endowed Chair and Distinguished Professor, Walton College of Business at the University of Arkansas