Want to ask an AI about a topic it’s not designed to discuss? Various methods of “jailbreaking” AI exist, and researchers at Anthropic have uncovered a new one. They discovered that, with the right priming through a series of less harmful questions, a Large Language Model (LLM) could be persuaded to provide information on dangerous topics, such […]