With 2026 shaping up to be another consequential year for artificial intelligence, some MIT faculty members and researchers recently shared what they’re paying attention to when it comes to AI and work.
Here’s what they’re keeping tabs on.
The human-LLM accuracy gap
Rama Ramakrishnan, professor of the practice, AI/machine learning, MIT Sloan
“I will be paying attention to the human-large language model accuracy gap.
“The automation of knowledge work using LLMs is the key focus of many enterprise generative AI pilots. For certain tasks, the accuracy of the LLM may not be ‘good enough,’ and it may be tempting to conclude that the task is not a good fit for automation using LLMs. But rather than comparing the LLM’s accuracy to the best possible (i.e., 100%), it is better to compare it to the accuracy of humans doing the work right now and to track the changing human-LLM accuracy gap for that task. Maybe humans achieve 95% accuracy and the LLM achieves ‘only’ 90%.
“The key thing to remember is that as frontier LLMs get more capable, their accuracy will continue to improve, while human accuracy will likely be unchanged. So it is quite possible that LLM accuracy surpasses human accuracy in 2026 for many enterprise tasks.
“What are these tasks? How much business value do they represent? How much employment is at risk? These are some of the questions on my radar for 2026.”
Guardrails for AI
Barbara Wixom, principal research scientist, MIT Center for Information Systems Research
“My colleague Nick van der Meulen and I are deep in research regarding the guardrails that companies need to establish to deploy AI solutions effectively and safely without compromising compliance, values, ethics, and innovation. That’s a tough balance — the old governance playbooks are not working for AI, largely because of the pace of change. We will continue to be on the lookout for emergent practices that help organizations adapt their governance so that AI solutions can reach scale and be sustained over time.”
What happens when humans outsource creativity to AI?
Roberto Rigobon, professor of applied economics, MIT Sloan
“Plasticity is the brain’s ability to change its structure and function throughout life in response to experience. As a result, when we stop solving differential equations, we forget how. When we stop doing calculus, we forget how. And when we stop using our brains to remember phone numbers and directions, we forget them. When the phone substitutes for a sense of direction, we forget it and become more dependent on Google Maps.
“For directions, I think it is OK if we forget. But what if we start using AI to replace experimentation, or creation, or what-if thought processes? Do we really want to forget these activities? What about entrepreneurship? What about art? What about music? I do think that the creativity — the authentic creativity — that humans have displayed through centuries is infinitely better than what any AI entity can do. The AI will try more things, with higher variance, and likely produce worse outcomes than those of individuals building upon each other.
“So how AI is implemented is a first-order concern. I have written a new paper with Isabella Loaiza exactly on this topic, and I believe we need to think harder about it.”
Understanding the inner workings of AI models
Melissa Webster, senior lecturer in managerial communication, MIT Sloan
“Looking ahead to 2026, I’m paying attention to mechanistic interpretability research for what it shows about the inner workings of AI models, both for our understanding and for potentially increasing safety and alignment.
“As Chris Olah of Anthropic says, the [generative AI] models are effectively grown through training rather than explicitly built or programmed, leading to an unusually opaque technology.
“Mechanistic interpretability aims to reveal how specific neural networks function and what actually leads to the outputs we see. For AI at work, I’m eager to see how this helps us as users make better-grounded decisions, and I’m hopeful about its broader societal impact.”
Scaling AI solutions
George Westerman, senior lecturer in information technology, MIT Sloan
“This year will mark a shift in enterprises from experimenting with generative AI and agents to finding viable solutions that create real value at scale.
“With the hype around generative AI and agents, it’s essential to focus on the right question: What problem are you trying to solve? The answer will require finding the right combination of techniques — AI, traditional IT, and human — for each task in the solution.”
The LLM-ification of data
Harang Ju, digital fellow at the MIT Initiative on the Digital Economy and assistant professor, Johns Hopkins University
“I expect to see the LLM-ification of data as the primary trend playing out in 2026 and beyond. By ‘LLM-ification,’ I mean data sources in companies and in personal databases (your Apple Notes, for example) becoming easily accessible to LLM-based agents, rather than being accessible only to humans through existing user interfaces.”