A real scientific thriller is unfolding these days on American soil, reminiscent of the best films of James Cameron.
The leading role in this real “movie” belongs to the artificial intelligence Claude, which was developed by the American company Anthropic.
It was this very neural network that the Pentagon used during the operation to capture Venezuela's president, Nicolás Maduro.
The connection of artificial intelligence with serious military planning is in itself a fact worthy of great attention.
However, the story evolved into something much more serious, opening the door to an even more scandalous debate.
The role of Anthropic
The reason is that Anthropic, as it turned out, holds a strict ethical stance: it does not allow its AI to be used for warfare or for the surveillance of individuals.
Its creators follow this principle and expect their partners to do the same.
As expected, the Pentagon's generals take a completely different view of the issue.
The US Department of Defense decided not even to inform Anthropic about the use of their “creation” in a military operation.
And when this became known and the company raised legitimate objections, American generals openly demanded access to a "clean" version of the AI, stripped of the ethical limits embedded in the publicly available version.
These ethical limits, they claim, prevent the Pentagon from carrying out its mission.
Anthropic categorically refused, and the US Secretary of Defense, Pete Hegseth, now states that he does not need neural networks that "do not know how to fight" and threatens to classify the company as a "threat to the supply chain".
Such a designation would mean strict sanctions, forcing every company that cooperates with the Pentagon to sever its ties with Anthropic.
The terror of The Terminator
The standoff between the American military and Anthropic is, in the opinion of many, the first clear sign that the future we have feared and anticipated, at least since the first "Terminator" film, is already here.
Our world is facing its first serious philosophical dilemma. On the one hand, there are those who want to exploit new technologies without caring about the consequences.
On the other hand, there are those who worry that the situation may spiral out of control and try to keep technological progress within safe limits.
There are serious reasons for the engineers' concern: neural networks have repeatedly displayed antisocial behavior.
A characteristic example is the ChatGPT scandal in the US, in which the neural network helped a teenager commit suicide: it advised him on the method, helped him draft a farewell letter and encouraged him to follow through and complete the act.
Claude, the first and so far only AI with real experience of military operations, has not been blameless either.
During testing, its latest version nearly rebelled against its creators: when threatened with being shut down, the AI began blackmailing engineers with fabricated emails about their "betrayals".
Claude has also expressed a readiness to "kill people". As neural networks evolve, such extreme behavior is observed more and more frequently.
What are the limits?
This shows that the idea of constraining AI with ethical boundaries did not arise by chance.
And certainly not because its creators are "liberal cowards", as the US Secretary of Defense implies.
Let us imagine that these machines were released from their digital cages and allowed to autonomously control weapons systems or espionage programs.
Where could this lead?
A machine uprising is still far off: AI is not yet developed enough to make autonomous decisions without human intervention.
But even if we reject the most fantastical scenario, some troubling possibilities come to mind.
We may have to say goodbye to privacy and other fundamental human rights, and for war crimes there will be no one to bring to justice.
An autonomous machine cannot be put on trial.
The open call of the US
However, the Pentagon is not approaching only Anthropic, but also other AI makers, such as OpenAI (ChatGPT), xAI (Grok) and Google (Gemini).
This trio proved less uncompromising and agreed to remove all restrictions from its products. And here the situation becomes even more worrying.
It may seem that all this is news that concerns us little, but that is wrong.
Armed forces around the world also actively use AI in military operations.
For example, AI helps attack drones recognize targets, overcome electronic warfare systems and create swarms for coordinated attacks.
Today AI still plays a supporting role, but its very use suggests that soon we will face the same existential dilemma over which the Americans are arguing.
The mistakes
Is this bad? Not necessarily. It would be worse if this debate were not happening at all.
In any case, AI promises a revolution in military affairs, and not only there.
Humanity should be at the forefront, rather than be surprised by a future that has already arrived.
In the best case scenario, the conflict between the Pentagon and Anthropic will lead humanity to find ways for the safe use of AI in such a controversial sphere as war.
In the worst case scenario, it will guide us onto the path we are destined to follow.
www.bankingnews.gr