How a professional risk manager views threats posed by AI

Panorama

Aaron Brown, Bloomberg
03 January, 2024, 01:55 pm
Last modified: 03 January, 2024, 03:54 pm

A slew of official documents designed to keep artificial intelligence from turning on humans were released in 2023. All were lacking.

TBS Sketch of Aaron Brown

Runaway artificial intelligence has been a science fiction staple since the 1909 publication of E M Forster's The Machine Stops, and it rose to widespread, serious attention in 2023. 

The National Institute of Standards and Technology released its AI Risk Management Framework in January 2023. Other documents followed, including the Biden administration's 30 October executive order on Safe, Secure, and Trustworthy Artificial Intelligence and, two days later, the Bletchley Declaration on AI safety, signed by 28 countries and the European Union.

As a professional risk manager, I found all these documents lacking. I see more appreciation for risk principles in fiction. In 1939, author Isaac Asimov got tired of reading stories about intelligent machines turning on their creators. 

He insisted that people smart enough to build intelligent robots wouldn't be stupid enough to omit moral controls — basic overrides deep in the fundamental circuitry of all intelligent machines. 

Asimov's First Law is: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Regardless of the AI's goals, it is forbidden to violate this law.
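For readers who think in code, here is a minimal sketch of such a hard override (my own illustration, not anything from Asimov or the 2023 documents; names like `Action` and `harms_human` are hypothetical). The safety check filters the AI's options before its objective is ever consulted, so no objective score can buy its way past the constraint.

```python
# A hypothetical sketch of an Asimov-style hard override: the First Law
# check runs before the objective is consulted and cannot be traded
# off against it. All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool          # would this action injure a human?
    prevents_human_harm: bool  # does this action stop harm from occurring?
    objective_score: float     # how well the action serves the AI's goal

def choose_action(candidates: list[Action]) -> Action:
    # First Law, first clause: never select an action that injures a human.
    permitted = [a for a in candidates if not a.harms_human]
    # Second clause ("through inaction"): if any permitted action prevents
    # harm, actions that fail to prevent it are forbidden too.
    if any(a.prevents_human_harm for a in permitted):
        permitted = [a for a in permitted if a.prevents_human_harm]
    if not permitted:
        raise RuntimeError("no First Law-compliant action available")
    # Only now does the programmed objective get a vote: optimize
    # within the constraint, never over it.
    return max(permitted, key=lambda a: a.objective_score)
```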

Or consider Arthur C Clarke's famous HAL 9000 computer in the 1968 film 2001: A Space Odyssey. HAL malfunctions not due to a computer bug, but because it computes correctly that the human astronauts are reducing the chance of mission success — its programmed objective. 

Clarke's solution was to ensure manual overrides outside the knowledge and control of the AI. That's how Dave Bowman can outmaneuver HAL, using physical door interlocks and disabling HAL's AI circuitry.
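Clarke's pattern can be sketched the same way. In this equally hypothetical illustration, the override lives in a supervisory layer the planner cannot see, so no amount of clever optimization can route around it:

```python
# A hypothetical sketch of a Clarke-style override: the interlock is
# read by a supervisory layer the AI can neither observe nor disable.

def ai_plan_step(state: dict) -> str:
    # The AI optimizes its programmed objective; the interlock below
    # is outside its knowledge and control.
    return "pursue_mission"

def physical_interlock_engaged() -> bool:
    # Stand-in for a hardware switch wired outside software control;
    # in the film, Bowman pulls HAL's circuit modules by hand.
    return True

def supervisor_step(state: dict) -> str:
    if physical_interlock_engaged():
        return "halt"  # the human override wins unconditionally
    return ai_plan_step(state)

print(supervisor_step({}))  # "halt"
```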

While there are objections to both these approaches, they pass the first risk management test. They imagine a bad future state and identify what people then would want you to do now. 

In contrast, the 2023 official documents imagine bad future paths and resolve that we won't take them. The problem is that there are infinitely many future paths, most of which we cannot imagine. 

There is a relatively small number of plausible bad future states. In finance, a bad future state is having cash obligations you cannot meet. There are many ways to get there, and we always promise not to take those paths. Promises are nice, but risk management teaches us to focus on things we can do today to make that future state survivable.
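To make "survivable" concrete, consider a toy liquidity calculation with invented figures. The discipline is path-independent: we size the buffer against the bad state itself, not against any particular route into it.

```python
# Toy liquidity reverse stress test with invented numbers. The question
# is not which path leads to the shortfall, but what buffer held today
# makes the bad state survivable regardless of path.

obligations_due = 100.0  # cash we must pay out in the stressed state
liquid_assets = 60.0     # cash we can raise immediately
fire_sale_value = 25.0   # illiquid assets after a stressed haircut

shortfall = obligations_due - (liquid_assets + fire_sale_value)
if shortfall > 0:
    print(f"Hold at least {shortfall:.0f} more in reserve today to "
          f"survive this state, whatever path leads to it.")
```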

There is no shortage of things that could end human existence: asteroid impact, environmental collapse, pandemic, global thermonuclear war. These are all blind dangers. They do not seek to hurt humans and so there is some possibility that some humans survive.

Two dangers are essentially different — attack by malevolent intelligent aliens, and attack by intelligences we build ourselves. An intelligent enemy hiding until it acquires strength and position to attack, with plans to break through any defenses and to continue its campaign until total victory is attained, is a different kind of worry than a blind catastrophe.

The dangers of computer control are well known. Software bugs can result in inappropriate actions with sometimes fatal consequences. While this is a serious issue, it is a blind risk. AI poses a fundamentally different danger, closer to a malevolent human than to a malfunctioning machine. 

With AI and machine learning, the human gives the computer objectives rather than instructions. Sometimes these are programmed explicitly; other times the computer is told to infer them from training sets. AI algorithms are tools the computer — not the human — uses to attain the objectives. The danger from a thoughtlessly specified objective is not blind or random.

This differs from a dumb computer programme, where a human spells out the programme's desired response to all inputs. Sometimes the programmer makes errors that are not caught in testing. 

The worst errors are usually unexpected interactions with other programmes rather than individual programme bugs. When software bugs or computer malfunctions do occur, they lead to random results. Most of the time the consequences are limited to the system the computer is designed to control.
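That contrast between instructed and objective-driven software fits in a few lines. In the hypothetical sketch below, the first function is a dumb program whose response to every input a human wrote down; the second is handed only an objective and picks its own rule, so whatever the optimizer finds, intended or not, is what runs.

```python
# Instructions versus objectives, in miniature. The names and numbers
# are invented for illustration.

# A dumb program: the human spells out the response to every input.
def thermostat(temp_c: float) -> str:
    if temp_c < 18.0:
        return "heat_on"
    return "heat_off"

# A learned program: the human supplies only an objective (minimize
# errors on a training set) and the machine chooses the rule itself.
def fit_threshold(training: list[tuple[float, int]]) -> float:
    # training pairs: (temperature, 1 if heating was wanted, else 0)
    def errors(threshold: float) -> int:
        return sum((1 if temp < threshold else 0) != wanted
                   for temp, wanted in training)
    # The optimizer pursues the stated objective literally: whatever
    # threshold scores best is what we get, intended or not.
    return min((temp for temp, _ in training), key=errors)

data = [(15.0, 1), (17.0, 1), (19.0, 0), (22.0, 0)]
print(thermostat(16.0))     # rule written by a human: "heat_on"
print(fit_threshold(data))  # rule chosen by the machine: 19.0
```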

This is another key risk distinction between dumb and smart programmes. The conventional computer controlling a nuclear power plant might cause a meltdown in the plant, but it can't fire nuclear missiles, crash the stock market or burn your house down by turning your empty microwave on. 

But malevolent intelligence could be an emergent phenomenon that arises from the interaction of many AI implementations, controlling almost everything.

Human intelligence, for example, probably emerged from individual algorithms that evolved for vision, muscle control, regulation of bodily functions and other tasks. 

All those tasks were beneficial to humans. But out of that emergent consciousness, large groups of humans chose to cooperate in complex, specialised tasks to build nuclear weapons capable of wiping out all life on Earth. This was not the only terrible, life-destroying idea that emerged from human intelligence—think genocide, torture, divine right of kings, holy war and slavery. 

The fact that individual AI routines today lack the sophistication and power necessary to destroy humanity, and mostly have benign goals, is no reason to think emergent AI intelligence will be nicer than people are.

My hope for 2024 is that we will conduct serious reverse stress tests for AI. We would invite diverse groups of people — not just officials and experts — and have them assume some specific bad state. 

Maybe it's 2050 and Skynet has killed all other humans. (I often show disaster movies to prepare groups for reverse stress tests; it helps set the mood and makes people more creative. It's Hollywood's great contribution to risk management.) 

You're the last survivors, hiding out until Terminators find and terminate you. Discuss what you wish people had done in 2024, not to prevent this state from happening, but to give you some means of survival in 2050.

Aaron Brown is a Bloomberg Opinion columnist covering cryptocurrencies and finance.

Disclaimer: This article first appeared on Bloomberg, and is published by special syndication arrangement.
