
AGI AND THE ECHOES OF THE MANHATTAN PROJECT
10 December 2024 – by Andrew Dolan
Once again, parallels are being drawn in some quarters between the development of AGI and the Manhattan Project, the name given to the wartime programme that developed the atomic bomb.
Max Tegmark, the Chief Executive Officer (CEO) of the Future of Life Institute, noted that a recent report to the US Congress by the US-China Economic and Security Review Commission called for Congress to ‘establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability’. Tegmark reminds us that there is plenty of expert advice suggesting that such a development is fraught with risk.
Those calling for such an initiative would no doubt have Russia’s President Putin’s 2017 words in mind, when he described the pursuit of such a capability as an ‘Arms Race’ in which the victor would claim the spoils of global leadership. With global security undergoing a tectonic shift in light of the conflicts in Ukraine and the Middle East, and possibly in Southeast Asia, it is not surprising that the Security Review Commission identified AGI as a possible ‘force multiplier’.
Of course, the references to the Manhattan Project are not new. Indeed, we have written about this in the past. What has changed in the interim that has led Tegmark and others to raise the alarm again?
Perhaps part of the explanation lies with the claims of AGI advocates, who suggest that workable and safe AGI is within humanity’s grasp. These same advocates successfully resisted calls, made over a year ago, for all research and development on AGI to be suspended until such time as the development and utilisation of such systems could be made safe, a concern raised by some aspects of ChatGPT’s development.
For those of the same opinion as Tegmark, the concerns are not intended to be obstructive but are rather based on various aspects of human safety. These commentators fear that humans could, within short order, lose control of some of the machine intelligence being developed, failing to embed suitable alignment arrangements and safety ‘guardrails’ into the new systems and, perhaps by inference, growing too close to the so-called military-industrial complex. It is awkward, to say the least, that several of the more advanced AI powers that attended the Bletchley Summit are concurrently developing strategies and technical action plans for integrating various AI-enabled weapons and associated systems into national security structures, influenced no doubt by the very real risks and challenges emerging from the state of world affairs.
To some extent, the references to a new Manhattan Project are apt but, at the same time, slightly misaligned, depending on your vantage point. Irrespective of the varied perspectives, the analogy serves a purpose, namely, to provide a frame of reference through which one form of AGI development can be viewed. I would suggest that, unlike the original Manhattan Project, and despite the covert nature of much of today’s commercial AI research and development, there remains some scope for key developers to influence the process. Although global security concerns are focusing minds, there might still be occasions when the staff of leading tech or data firms resist too much involvement with military authorities – as Google’s staff did over ‘Project Maven’.
For what it is worth, my own view is that the deepening involvement of Silicon Valley tech and data companies in AI, the move towards AGI and possible military applications will persist and flourish. Just look at the recent tie-up between OpenAI and defence start-up Anduril for evidence of this.
Yet such a departure, for sound economic and national security reasons no doubt, will bring with it a new range of considerations unfamiliar to AI developers but arguably quite familiar to the scientists associated all those years ago with the atomic project. Security will become ever more important, and the AI development community will inevitably be subject to intrusive vetting and background checking. Developers will have to recognise that they are arms manufacturers of a sort, and a premium will be placed on saying little about the product and even less about its potential use. Indeed, one could argue that new AGI applications plainly meant for commercial use, being equally open to military use, might because of these dual-use implications not become as freely available as one hopes. Under such circumstances, can we realistically expect freedom of access to certain AI or AGI code? Can we also realistically expect open and international AI or AGI development in an age of self-empowered individuals or non-state actors whose use of such a capability might be malicious? Silicon Valley might have to reposition its ethical base in ways that had not previously been considered and find itself part of a national regulatory framework for a certain section of national or international counter-proliferation. I am not sure that such a possibility is fully appreciated by many ‘frontier’ AI pioneers.
Tegmark’s warning on the race to develop AGI also dips into concerns about national energy systems, critical network infrastructure and public health. Each of these might be considered a critical artery of societal functioning, but equally they represent obvious examples of societal security. The disruption, degradation or destruction of such systems by highly empowered machine intelligence is, in his eyes, a form of existential risk and urgently demands recognition by those bent on being first in the race to AGI.
This is arguably unfair on AGI developers. Frankly, one could justifiably argue that such existential risk exists now from a range of modern weapon systems, from nuclear and electromagnetic pulse weapons to drones, cyber attacks and biological or chemical materials. The world is not short of destructive capabilities, but Tegmark is not wrong to express genuine concerns. He is far from alone in this.
If nothing else, the raising of concerns should encourage further debate on societal risk from AI. If we seem incapable of restraining AI or AGI development, stopping for example at ‘Narrow AI’, then the best we might hope for is an ability to make new applications either safe or controllable. All sorts of ‘guardrails’, ‘kill switches’ and ‘black boxes’ make their traditional appearance, but I have yet to be convinced we are anywhere near failsafe systems.
Furthermore, the world is far from united on this potentially monumental breakthrough and even less so on its potential use. We should not forget that one of the key drivers of the Manhattan Project was the perceived arms race with Germany, and although it eventually became clear that the risk had evaporated, the process of developing the weapon did not abate. Do we know for sure what other powers might do in terms of development and application? Are we back to the old concept of ‘the sum of all fears’ as our authorities try to find a suitable modus vivendi with AGI?

As much as we can herald the possibility of an AGI breakthrough soon – with or without adequate safeguards – we might have to recognise that a new arrangement for those developing machine intelligence has already arrived. A retrospective look at the Manhattan Project will only take us so far, and it would be prudent for authorities globally to begin thinking, sooner rather than later, about what such a development might mean for societal security and public policy.