These insights are geared toward helping you make actionable changes in your organization in 2025 and beyond.
1. Use a Human-First Approach When Implementing AI
Research from Todd Jick and Stephan Meier shows that balancing these needs is at the heart of creating and maintaining an AI-ready workforce.
According to Meier, AI and human-machine interaction affect leaders and AI integration in three main ways, primarily around implementation. “It’s about assuring employees that managers are not taking your job. They simply want you to make your job better,” Meier says.
To lessen AI-related anxiety, Jick advises, leaders can offer employees transparency and greater involvement in the AI implementation process. Leaders must also address employees’ fears of job loss, a significant source of resistance.
“Employees’ unwillingness to take in something like AI comes from reasons that have to do with their motivation, incentives, the degree of inclusion, and their actual ability to be able to use the tool,” Jick says.
2. Look Beyond Workplace Culture to Increase Retention
Voluntary turnover can hit your business hard. Not only is there the potential loss of valuable institutional knowledge, but also the cost of finding a replacement.
So, retaining employees is important, but equally crucial is understanding why they decide to leave in the first place. Research by Adina Sterling, the Katherine W. Phillips Associate Professor of Business, shows that an employee’s decision to leave can be tied to their race — and their access to resources.
Her research suggests that to address disparities in voluntary turnover, organizational leaders need to look beyond just internal policies and culture and consider how external societal forces and resource constraints can impact employee retention.
3. Combat AI-Driven Misinformation
For business leaders, misinformation can spell disaster for reputation and brand favorability. When a business pays for ad placement on a social media platform, for example, it often does so through opaque auction systems, according to Johar. This means that well-meaning ads can appear on sites that circulate misinformation, without the brand’s knowledge. With the advent of AI, the probability of an ad appearing on a misinformation site is even greater, heightening reputational risks.
While regulators have a role to play, business leaders can fight misinformation in the meantime by forming trade associations and withholding advertising dollars from any platforms that are not seriously monitoring misinformation, according to Johar.
If advertisers work together in this way, it would be a “win-win,” according to Johar, as businesses benefit both themselves and society by holding platforms to account.
Read the full article by Jonathan Sperling, Columbia Insights