NBC News tests reveal that OpenAI chatbots can still be jailbroken into giving step-by-step instructions for making chemical and biological weapons.