Super-AI – for better or worse?

12 MARCH 2024

Is a friendly super-artificial intelligence (AI) desirable? A Charles Sturt University philosopher considers whether the development of such super-AI, which might limit our freedom and treat us like ‘pets’, is preferable to no super-AI at all.

  • Humanity is at a crossroads, facing the existential threat posed by the creation of a super-intelligent artificial intelligence (AI)
  • A Charles Sturt University philosopher ponders humanity’s theoretical options and responses
  • A super-intelligent AI, so technologically advanced it could interfere in our choices if it wanted to, might not give humanity a choice

The question ‘Do we want a friendly super-AI to exist?’ is posed in a recently published journal article by Morgan Luck, Associate Professor in Philosophy in the Charles Sturt School of Social Work and Arts and Research Acceleration Fellow in the Charles Sturt Artificial Intelligence and Cyber Futures Institute.

The article, ‘Freedom, AI and God: why being dominated by a friendly super-AI may not be so bad’, is published online in AI and Society (Springer, February 2024).

Professor Luck notes that, as humanity hastens toward the existential threat posed by the creation of a super-intelligent AI, one response is to design that AI to be friendly to us.

“I am not claiming that the development of a friendly super-AI would be good; it may well be very bad,” Professor Luck said.

“Rather, I am claiming that, if it is bad, it would not be because it would result in us having less freedom, provided our freedom is reduced by the right kind of agent.

“That is, an agent that would seek to optimise our freedom, to give us as much freedom as possible, but not so much it could not interfere for the right reasons.”

Drawing on the work of contemporary philosophers, Professor Luck examines the nature of a ‘republican’ theory of freedom, and the God-like qualities that a super-intelligent AI might manifest; that is, an agent which is far more benevolent, powerful and intelligent than us.

A ‘republican’ theory of freedom posits that even if a super-AI were to be friendly, it would still dominate us, and this dominance would in turn diminish our freedom.

“Most of us hold freedom to be an important good, and as freedom is also a necessary condition for other goods such as justice, democracy and sovereignty, there is considerable reason to desire freedom,” Professor Luck said.

He arrives at the conclusion that we should not want a super-AI that never acts against our interests; instead, we should want a super-AI that never acts against our interests for the wrong reasons.

“Perhaps we should not want to be completely free,” he suggests.

“If the only way to stop some group (such as a terrorist organisation, or rogue nation) from causing some moral atrocity is by occasionally, or even systematically, interfering in their choices in ways that they cannot control, then arguably they should be dominated.

“However, in doing so, their freedom is diminished.

“But perhaps this is not such a bad result when what is dominating is properly orientated, as God should be. Such a dominator would be orientated to give us as much freedom as possible, but not so much that it could not interfere if it was the right thing to do; that is, it would seek to optimise our freedom.”

Professor Luck said it is worth noting that the same argument could be made about any suitably intelligent agent.

For example, consider an alien civilisation so technologically advanced that it could interfere in our choices if it wanted to, but so benevolent and intelligent that it would not, unless doing so was the right thing to do.

“Should we want there to be no such benevolent civilisations out there?” he said. “Such a desire seems suspect, and it strikes me as a particularly egregious instance of anthropocentricity.

“The real question we should be focusing on here is whether a friendly super-AI would be the right kind of agent to permissibly dominate us, one that would try to optimise our freedom.

“I do not feel particularly confident that it would be, but I hopefully would not begrudge its existence if it were, despite the loss of freedom that might result.”


Media Note:

To arrange interviews with Associate Professor Morgan Luck who is based in Wagga Wagga, contact Bruce Andrews at Charles Sturt Media on mobile 0418 669 362 or via news@csu.edu.au

