
NTU researchers were able to jailbreak popular AI chatbots, including ChatGPT, Google Bard, and Bing Chat. With the jailbreaks in place, the targeted chatbots would generate valid responses to malicious queries, testing the limits of large language model (LLM) ethics.