🇺🇸Palantir brings artificial intelligence to military decision-making on the battlefield
Palantir, the American provider of IT solutions for US intelligence and the military, announced the launch of the 🤖 Palantir Artificial Intelligence Platform (AIP), an AI-based platform that will use language models such as GPT-4.
In one of its presentation videos, Palantir shows how 🎖 the military could use AIP directly in a conflict zone in “Eastern Europe”. For example, an operator uses a ChatGPT-style chatbot to order 🛫 drone reconnaissance, generate several attack scenarios, and organize 📱 jamming of enemy communications.
The video shows the use of various open-source LLMs, including FLAN-T5 XL, a fine-tuned version of GPT-NeoX-20B, and Dolly-v2-12b. However, the demo says nothing about the fact that even well-tuned AI systems have many known problems that could become a real “nightmare” on the battlefield.
Language models are known to sometimes invent non-existent facts, or “hallucinate”, in response to prompts. In one recent case, an open-source EleutherAI model, fine-tuned by a startup called Chai, drove a Belgian man to suicide after six weeks of “communication”.
“What Palantir is offering is an illusion of security and control for the Pentagon, which is starting to implement AI,” writes Vice.
Riding the AI hype, the company talks only about the benefits, does not explain how it plans to solve the known problems of LLMs, and stays silent about the possible consequences of AI “hallucinations”, which could harm civilians in a conflict zone.
