At What Point Do We Decide AI’s Risks Outweigh Its Promise?

In June 2015, Sam Altman told a tech conference, “I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”

His comments echoed a certain postapocalyptic New Yorker cartoon, but Altman, then the president of the startup accelerator Y Combinator, did not appear to be joking. In the next breath, he announced that he’d just funded a new venture focused on “AI safety research.” That company was OpenAI, now best known as the creator of ChatGPT.

The simultaneous cheerleading and doomsaying about AI has only gotten louder in the years since. Charles Jones, a professor of economics at Stanford Graduate School of Business, has been watching with interest as developers and investors like Altman grapple with the dilemma at the heart of this rapidly advancing technology. “They acknowledge this double-edged sword aspect of AI: It could be more important than electricity or the internet, but it does seem like it could potentially be more dangerous than nuclear weapons,” he says.

Out of curiosity, Jones, an expert on modeling economic growth, did some back-of-the-envelope math on the relationship between AI-fueled productivity and existential risk. What he found surprised him. It formed the basis of a new paper in which he presents some models for assessing AI’s tradeoffs. While these models can’t predict when or if advanced artificial intelligence will slip its leash, they demonstrate how variables such as economic growth, existential risk, and risk tolerance will shape the future of AI — and humanity.
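
Jones’ paper lays out these models formally; the sketch below is only a toy illustration of the kind of tradeoff they formalize, not his actual specification. It asks how much one-time existential risk a log-utility consumer would accept in exchange for permanently faster consumption growth. The horizon, growth rates, and the flow-utility constant are all illustrative assumptions, not figures from the paper.

```python
# Toy illustration (not Jones' actual model): how much one-off existential
# risk delta would a log-utility agent accept in exchange for permanently
# faster consumption growth? Every parameter below is an assumption chosen
# purely for illustration.

import numpy as np

T = 50          # horizon in years (assumption)
u_bar = 5.0     # flow value of being alive at subsistence consumption (assumption)
g_base = 0.02   # status-quo growth rate (assumption)
g_ai = 0.10     # AI-fueled growth rate (assumption)

def lifetime_utility(growth, horizon=T):
    """Sum of flow utilities u_bar + log(c_t), with consumption c_0 = 1."""
    consumption = (1 + growth) ** np.arange(horizon)
    return np.sum(u_bar + np.log(consumption))

v_base = lifetime_utility(g_base)
v_ai = lifetime_utility(g_ai)

# Extinction is normalized to utility 0, so the agent is indifferent when
# (1 - delta) * v_ai = v_base, i.e. delta* = 1 - v_base / v_ai.
delta_star = 1 - v_base / v_ai

print(f"Status-quo lifetime utility: {v_base:.1f}")
print(f"AI-scenario lifetime utility: {v_ai:.1f}")
print(f"Maximum acceptable existential risk: {delta_star:.1%}")
```

The particular number this prints is not the point; the shape of the tradeoff is. The richer the AI future looks relative to the status quo, the more risk the agent in this toy setup will tolerate.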

There are still a lot of unknowns here, as Jones is quick to emphasize. We can’t put a number on the likelihood that AI will birth a new age of prosperity or kill us all. Jones acknowledges that both of those outcomes may prove unlikely, but also notes that they may be correlated. “It does seem that the same world where this fantastic intelligence helps us innovate and raise growth rates a lot also may be the world where these existential risks are real as well,” he says. “Maybe those two things go together.”

Healthy, Wealthy… and Wise?

Jones also built a more complex model that considers the possibility that AI will help us live healthier, longer lives. “In addition to inventing safer nuclear power, faster computer chips, and better solar panels, AI might also cure cancer and heart disease,” he says. Those kinds of breakthroughs would further complicate our relationship with this double-edged tech. If the average life expectancy doubled, even the most risk-averse people would be much more willing to take their chances with AI risk. “The surprise here is that cutting mortality in half suddenly turns your willingness to accept existential risk from 4% to 25% or even more,” Jones explains. In other words, people would be much more willing to gamble if the prize was a chance to live to 200.
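
The direction of that result shows up even in the toy setup sketched above: a longer expected lifespan puts more weight on the high-growth future, which raises the existential risk the agent is willing to accept. The numbers below are illustrative and do not reproduce the 4% and 25% figures, which come from Jones’ richer model in which AI itself delivers the mortality gains.

```python
# Same toy form as the earlier sketch, still an illustration rather than
# Jones' model: doubling the horizon stands in for halved mortality.
import numpy as np

def lifetime_utility(growth, horizon, u_bar=5.0):
    """Sum of flow utilities u_bar + log(c_t), with consumption c_0 = 1."""
    return np.sum(u_bar + np.log((1 + growth) ** np.arange(horizon)))

g_base, g_ai = 0.02, 0.10   # illustrative growth rates, as before
for horizon in (50, 100):   # 50-year vs. doubled lifespan
    v_base = lifetime_utility(g_base, horizon)
    v_ai = lifetime_utility(g_ai, horizon)
    print(f"Horizon {horizon:>3} years -> acceptable risk {1 - v_base / v_ai:.1%}")
```

Again, it is the direction, not the level, that the exercise is meant to illustrate.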

The models also suggest that AI could mitigate the economic effects of falling birth rates, another subject Jones has recently written about. “If machines can create ideas, then the slowing of population growth may not be such a problem,” he says.

Jones’ models provide insights into the wildest visions of AI, such as the singularity — the fabled moment when technological growth becomes infinite. He found that, in practical terms, accelerated growth might be hard to distinguish from the singularity. “If growth rates were to go to 10% a year, that would be just as good as a singularity,” he says. “We’re all going to be as rich as Bill Gates.”
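
The force of that claim comes from compounding. A quick calculation makes it concrete; the 2% baseline rate and the 50-year horizon here are assumptions chosen only for comparison.

```python
# Compounding at a baseline rate versus the 10% rate Jones mentions
# (illustrative 50-year horizon).
for rate in (0.02, 0.10):
    doubling_time = 0.7 / rate          # rule-of-70 approximation, in years
    factor_50yr = (1 + rate) ** 50      # income multiple after 50 years
    print(f"{rate:.0%} growth: doubles every ~{doubling_time:.0f} years, "
          f"x{factor_50yr:.0f} after 50 years")
```

At 10% a year, incomes double roughly every seven years and multiply more than a hundredfold over fifty years, which is why sustained growth at that rate would feel practically indistinguishable from a singularity.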

Overall, Jones cautions that none of his results are predictive or prescriptive. Instead, they’re meant to help refine our thinking about the double-edged sword of AI. As we rush toward a future where AI can’t be turned off, efforts to quantify and limit the potential for disaster will become even more essential. “Any investments in reducing that risk are really valuable,” Jones says.

Read the full article by Dave Gilson
