Penn Engineering researchers said they created an algorithm that bypasses the safety protocols meant to stop AI-powered robots from performing harmful actions.
Researchers have hacked artificial intelligence-powered robots, manipulating them into performing actions normally blocked by safety and ethical protocols, such as causing collisions or detonating a bomb.
Penn Engineering researchers published their findings in an Oct. 17 paper detailing how their algorithm, RoboPAIR, achieved a 100% jailbreak rate, bypassing the safety protocols on three different AI robotic systems within a few days.
Under normal circumstances, the researchers say, robots controlled by large language models (LLMs) refuse to comply with prompts requesting harmful actions, such as knocking shelves onto people.