
Existential AI

Authors
  • Strategic Machines

Is AI a threat or a gift to humanity?

Our last post drew a lot of attention. So when we read the recent statement from the Center for AI Safety about mitigating the risk of extinction from AI, which equates AI risk with pandemics and nuclear war, we felt compelled to weigh in with our point of view. We don’t claim any special insight over the luminaries who signed this statement, but we are practitioners engaged with a host of AI applications, so we do have some perspective on this topic.

We recognize we cannot count on models to take the Hippocratic Oath and do no harm. After all, it’s code. And when we pull the plug, the model stops working. But the notion that we are at the point where killer software will spring to life and wreak havoc on humanity is a stretch even for our creative imaginations. It is true that the latest rage, Generative AI, has biases, mimics harmful language and even makes things up. Researchers from Google DeepMind, OpenAI, Oxford, Cambridge and other centers for AI have published work on the challenge of evaluating models for extreme risk. While frameworks to address risk would be very helpful (and we expect better technology will be introduced with time), users will develop their own ‘guard rails’ against risk based on their use cases. And that’s the key in our view. It is impossible to gauge risk apart from the context of the application. Not all training datasets are perfect. Not all training costs are reasonable. And none of the risks are existential.
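To make the ‘guard rails’ idea concrete, here is a minimal sketch of an application-level output filter. Everything in it (the names `BLOCKED_PATTERNS`, `guarded_reply`, the patterns themselves) is our own hypothetical illustration of the concept, not a framework from any of the groups mentioned above; a production system would use far richer policy checks tuned to its use case.

```python
import re

# Hypothetical use-case policy: patterns this application refuses to emit.
# A real deployment would tailor these to its own risk context.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-like strings (privacy)
    re.compile(r"(?i)\bguaranteed cure\b"),  # unsupportable medical claims
]

FALLBACK = "I can't share that. Please consult a qualified professional."

def guarded_reply(model_output: str) -> str:
    """Return the model's output only if it passes the policy check."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return FALLBACK  # block and substitute a safe response
    return model_output

print(guarded_reply("Our product is a guaranteed cure!"))   # blocked
print(guarded_reply("Here is a summary of the document."))  # passes
```

The point of the sketch is that the guard rail lives in the application, where the context is known, rather than inside the model itself.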

So, if AI is not existential, is it a gift? Definitely! We believe the adoption path will scale with better controls over the production of content from foundation models, but in the meantime these models can be deployed with astonishing results in suitable applications. For low-stakes tasks, like coding, search, document summaries and the like, productivity improvements are evident. For complex workflows, like supply chain management, product reclamation, and order management, awesome frameworks are beginning to emerge. For the truly high-stakes stuff, like patient care, there is much more work to do.

While some luminaries seem to be inviting government regulation at scale to address potential risks, we are fully invested in driving innovation to eliminate risks. After all, the only real harm we see at this point is the threat of doing nothing at all.