
ChatGPT maker quietly changes rules to allow the US military to incorporate its technology

Experts have previously voiced fears that AI could escalate conflicts around the world thanks to 'slaughterbots' which can kill without any human intervention.

The rule change, which occurred after Wednesday of last week, removed a sentence stating that the company would not permit use of its models for 'activity that has high risk of physical harm, including: weapons development, military and warfare.'

An OpenAI spokesperson said that the company, which is in talks to raise money at a valuation of $100 billion, is working with the Department of Defense on cybersecurity tools built to protect open-source software.

The spokesperson said: 'Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.

'There are, however, national security use cases that align with our mission.

'For example, we are already working with the Defense Advanced Research Projects Agency (DARPA) to spur the creation of new cybersecurity tools to secure open-source software that critical infrastructure and industry depend on.

'It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.'