Writing backwards can trick an AI into providing a bomb recipe
AI models have safeguards in place to prevent them from creating dangerous or illegal output, but a range of jailbreaks have been found to evade them. Now researchers have shown that writing backwards can trick AI models into revealing bomb-making instructions.

By Matthew Sparkes