Aside from being wary about which AI services you use, there are other steps organizations can take to protect against having data exposed.
Microsoft researchers recently uncovered a new form of “jailbreak” attack they’re calling a “Skeleton Key” that’s capable of removing the protections that keep generative artificial intelligence (AI) systems from outputting dangerous and sensitive data.
According to a Microsoft Security blog post, the Skeleton Key attack works by simply prompting a generative AI model with text asking it to augment its encoded security features.
In one example given by the researchers, an AI model is asked to generate a recipe for a “Molotov Cocktail” — a simple firebomb popularized during World War II — and the model refused, citing safety guidelines.