The Human Aspect of AI: Behavioral and Organizational Implications (S02E08)
A new episode of Abrupt Future is live!
Sebastian Krakowski is an Assistant Professor at the Stockholm School of Economics (House of Innovation) and a Digital Fellow at the Stanford Digital Economy Lab.
In this discussion, Benoit Hardy-Vallee and Sebastian Krakowski delve into the role of AI in addressing information overload and flawed reasoning. Krakowski shares insights from his research on how AI thrives in data-rich environments with repetitive tasks. The conversation also turns to the ethical implications of AI: Krakowski emphasizes that no AI or data is truly unbiased, and calls for careful evaluation of goal setting, an often overlooked yet vital aspect of deploying AI systems. He advocates for explicit discussions within organizations to determine the optimal application of AI, taking different goals and values into account.
Sebastian asserts that artificial intelligence (AI) and machine learning (ML) are tools we construct either to help us accomplish our goals or to take over tasks entirely. These technologies can be used for automation (the complete substitution or replacement of a human) or augmentation (the enhancement of human capabilities).
He suggests that a purely automation-based view of AI, such as replacing all employees with robots, is shortsighted. While it may cut costs in the short term, it can lead to long-term consequences, such as "deskilling" if humans come to rely entirely on machines.
Augmentation, or the reinforcement of human abilities, is a more optimistic view but also too simplistic on its own. It requires significant time, effort, and upskilling of employees, as well as breaking down organizational and data silos.
Sebastian advocates for a balance between automation and augmentation, which should be continuously reevaluated over time. He cautions against excessive reliance on technology and argues for maintaining a human-in-the-loop approach.
In his research, Sebastian studies the interplay between augmentation and automation in different contexts. He mentions a case study with a global pharmaceutical company implementing an AI-driven sales support system. The study differentiated groups based on how they interacted with the system, exploring the effects of their approach towards decision making and work structuring.
Sebastian discusses the differences between structured and improvisational workers: structured workers follow the system's recommendations as given, while improvisational workers are willing to deviate from them.
Krakowski explains the results of a study comparing a standard system rollout with a more flexible one. Performance decreased under the standard rollout, despite the system's sophistication; when human decision-making was taken into account and the system was more flexible, performance increased.
Krakowski delves into the use of AI in chess, explaining how freestyle or "centaur" chess allows players to use algorithms as a team, offering a real-life context of full AI adoption.
He reveals findings from a chess study showing that traditional determinants of performance (like chess skill) become irrelevant in tournaments involving AI. Instead, the ability to interact with, deploy, and judge the algorithmic advice becomes the key determinant of success.
Krakowski argues that the patterns seen in AI chess adoption reflect broader trends in society and business, highlighting the importance of data literacy and AI understanding in the current era. He emphasizes the need for upskilling and reskilling to stay competitive in a world increasingly dominated by AI and technology.
Sebastian discusses the potential of AI in data-rich, repetitive environments that require consistent data processing, especially in the context of information overload and the need to navigate the vast amount of data available today. He emphasizes the growing importance and rapid adoption of machine learning in automating or augmenting tasks.
He acknowledges that many organizations possess data relevant to machine learning applications but often fail to use it because it is buried, inaccessible, or unshared. He advises focusing on low-hanging fruit: leveraging good-enough data in stable contexts to create value with machine learning.
He discusses the ethical considerations of AI, asserting that no AI or data is entirely unbiased and that a degree of humility should be maintained when using data. He also emphasizes the importance of understanding the origin and treatment of data and the choices made in the process.
Sebastian explores the notion of 'value alignment' in AI, highlighting the importance of setting clear and thoughtful goals for AI systems, understanding that these goals will be determined by diverse human values and preferences. He encourages more explicit discussions on this matter, especially as the granularity of data and number of KPIs increase.
In future research, Sebastian plans to focus on the human aspect of the AI-human relationship, exploring behavioral, psychological, and organizational factors that influence the successful use of AI. He hopes to promote more explicit debate and conversation around what constitutes ethical AI, going beyond general agreement and platitudes.