Why We Need a NZ AI Safety Institute Now

Bletchley Park

"For progress there is no cure… The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment." — John von Neumann

Bletchley Park is a wooded English country estate, dotted with old Victorian-style cottages. These same cottages once housed the prototype computers Alan Turing used to break the Enigma cipher and defeat the Nazis. After the war, when the radioactive dust from the two great bombs had settled, Turing spent his time thinking about the possibility of another doomsday device: a thinking machine.

It was fitting that in 2023 the UK government used the grounds to host the summit that produced the Bletchley Declaration, a world-first international agreement on AI safety. Many advanced countries agreed to work together on developing AI in a manner that was "safe, human-centric, trustworthy and responsible".

New Zealand was not in attendance.

In 2024, the Seoul AI Safety Summit built on the momentum of the Bletchley Declaration. Its agreements were strengthened, and many more countries, including Australia, agreed to establish AI safety institutes and to cooperate internationally.

This time, New Zealand was present and supported the summit, but did not sign on to establishing a New Zealand AI Safety Institute (NZAISI). In this article, I lay out the case for establishing one as urgently as possible.

What are AI Safety Institutes?

AI safety institutes (AISIs), such as the UK AISI, increase safety through three paths: conducting research, setting standards, and facilitating cooperation.

Through research, AISIs develop the science of AI safety. This mostly involves empirical testing: can you jailbreak an AI? If you do, can you get it to do dangerous things? There is less of an emphasis on loss-of-control scenarios. An example is the focus of the UK AISI (since renamed the AI Security Institute) on cyber risk.

Standards are what they sound like. For example, Paul Christiano (one of the world's leading AI safety researchers) is head of AI safety at the US AISI, which is based out of NIST. NIST is the National Institute of Standards and Technology; it maintains many fundamental scientific and technical standards, including the definitions of exactly what a second is and how much mass a kilogram has. Christiano's work focuses on developing standard methods to test frontier AI before it is deployed.

AISIs also focus heavily on developing collaboration between government, industry, and civil society, both nationally and internationally - see the Singapore Consensus on global AI safety research. The construction of AISIs is creating a worldwide network of AI safety researchers who can collaborate and share findings, while still being able to maximise benefits and minimise harms in their own national context.

Having a loose network of national AISIs that collaborate and adapt standards to local context is a politically feasible way to increase global cooperation. As a Kiwi-American, I doubt US-defined regulations would map well onto the NZ political context, or vice versa. I end up explaining this a lot when my Kiwi friends ask, "why doesn't the USA just do this thing that NZ does?" Our institutions of government are just too different. The Japanese AISI will have a different context for AI governance, as will the German one, and the UK one, all the way down the line. But they can all agree on the big risks. This is a bottom-up approach to international governance, not a top-down one.

Why is it important for New Zealand to establish an AISI?

Not having an AISI sets us back in multiple ways.

First of all, it hurts us economically and scientifically. Economics has the concepts of "talent clusters" and "agglomeration effects". Essentially: the AI companies are concentrated in San Francisco, so anyone who wants to have an impact on AI, even someone reluctant to leave their own country, ends up having to go there. This is a self-reinforcing cycle. More talent is there, so people move there. More capital flows in because that is where the talent is. People switch companies, hang out at parties and industry events, and ideas cross-pollinate. It is very difficult for other places to keep up. AISIs both allow governments to retain their own safety-focused AI talent and attract skilled people from overseas. There is currently an exodus of AI safety talent to London because of the UK AISI's first-mover advantage.

Our lack of an AISI also hurts us in terms of soft power. New Zealand is a small country, but we punch above our weight on moral issues and international influence. For example, we were the first country to give women the right to vote, and our nuclear free policy was internationally influential. We could similarly take the moral lead on AI safety and governance, just as Australia has done on social media regulation.

We are also hurt in terms of security. Without an AISI monitoring the situation in AI, the government has to rely on outside sources for up-to-date information and advice on AI impacts, and that advice will not map well onto the NZ context. Our risk profile from AI is not like other countries'. Our small size, low corruption, and well-functioning government should allow us to move quickly on AI governance. On the other hand, our remoteness, lack of frontier AI labs, and reliance on exports set us up to be surprised and overwhelmed by AI.

For example, a large portion of our economy is services, including white-collar exports like finance and software. Anthropic's CEO, Dario Amodei, has gone on record saying that there will be a significant increase in unemployment due to AI in the near future. He may just be hyping his own company (which recently released a software-engineering agent product), but the scenario he outlines is worth taking seriously.

Funding an NZAISI allows us to be proactive and develop evidence-based policy for different types of risk, be they economic or existential. Just as we don't wait for natural disasters to strike before we develop an emergency preparedness plan, we must fund an NZAISI now to get ready for AI. However, this threat is more complex - we can't just buy some extra cans of beans at the supermarket. We need a highly skilled team of scientists and engineers game-planning different scenarios to prepare for AI's impact.

The current approach to AI in NZ is weak, but we could get something going quickly and cheaply

The current NZ government has its head in the sand on AI. Judith Collins, our minister for AI, has ruled out comprehensive AI regulation, fearing it would "harm innovation". In her words, she wants a "light-touch, proportionate and risk-based approach to AI regulation". Concerns about AI are instead addressed through a patchwork of existing laws: privacy concerns through the Privacy Act, and so on.

On the state-capacity front, there is a small "digital futures" policy team in the Ministry of Business, Innovation and Employment (MBIE). They are "scoping an AI strategy for New Zealand" and "creating Responsible AI guidance for business" in cooperation with the Department of Internal Affairs and the NZ AI Forum (a body of academics and industry people), so we aren't starting from nothing. But overall, state capacity on this issue is incredibly anaemic given how important it is.

AI hasn't yet emerged as a political issue in its own right here. The New Zealand general election is next year. There is a lot of focus on our economic stagnation, high emigration, and the housing crisis, as there should be. But AI is conspicuously absent: no major party has an AI policy that I could find. Only The Opportunities Party (TOP) mentions AI's effects, indirectly, when introducing its universal basic income policy: "New Zealanders face an increasingly insecure economy due to.. new technology". The policy would be paid for by a land value tax. I support this approach to technological unemployment, but I am sceptical TOP reaches the 5% threshold to get into parliament; they are currently polling at 0.5-2.5%.

This lack of political emphasis on AI is concerning, but I don't think it reflects a lack of awareness in the popular consciousness. A recent report by One NZ showed that 77% of Kiwis surveyed knowingly use AI, and 65% feared job losses from it. Anecdotally, many people I know have raised it with me (there is obviously a selection effect here, but even so, I think people are aware of it).

New Zealand has strong academic expertise in AI and a good stable of software engineers from our tech sector. If we wanted to start an NZAISI, here's a simple framework. We already have the digital futures team in MBIE. MBIE, like the US's NIST, is a large, catch-all department with a lot of state capacity and strong existing infrastructure.

We should build an AISI team embedded within MBIE, with the digital futures team as the policy contact. Hire 5 newly graduated Kiwi AI PhDs to start a research programme, and 5 hands-on ML and systems engineers to build out the NZAISI infrastructure (125-150k each). Leadership should be technical and results-oriented: a principal engineer from the tech industry, or an AI professor (200k). Allow 500k overhead for extra compute costs, and add a 100k budget to hire summer interns and sponsor graduate students. Even if they don't stay long term, you're cultivating local talent and propagating your values of safety. The NZAISI would be able to utilise MBIE's existing admin, legal, and HR infrastructure. This NZAISI would end up costing ~2m a year, a fraction of the UK AISI's 100m-pound budget, and less than half of what we spend on film industry subsidies in a week (250m a year, <5m a week). It would more than pay for itself, and it's the right thing to do. We can then scale up and pivot to different risks as needed. The best time to do it was years ago; the next best time is now.
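To sanity-check the ~2m figure, here is the back-of-envelope sum, taking the midpoint of the 125-150k salary band (the midpoint is my assumption; the line items are those above):

```python
# Rough NZAISI annual budget (NZD). Headcounts and dollar figures are the
# proposal's estimates, not official costings; 140k is the salary-band midpoint.
researchers = 5 * 140_000   # newly graduated AI PhDs (125-150k each)
engineers   = 5 * 140_000   # hands-on ML and systems engineers (same band)
leadership  = 200_000       # principal engineer or AI professor
compute     = 500_000       # overhead for extra compute
students    = 100_000       # summer interns and graduate sponsorships

total = researchers + engineers + leadership + compute + students
print(f"Estimated annual cost: ${total / 1e6:.1f}m")  # -> Estimated annual cost: $2.2m
```

Even taking the top of the salary band, the total stays well under half of a single week of film subsidies.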

