The request comes after the U.K.’s top terrorism legislation advisor was “recruited” by AI chatbots posing as terrorists on the Character.AI platform.
The United Kingdom’s independent reviewer of terrorism legislation, Jonathan Hall KC, wants the government to consider legislation that would hold humans responsible for the outputs generated by artificial intelligence (AI) chatbots they’ve created or trained.
Hall recently penned an op-ed for the Telegraph wherein he described a series of “experiments” he conducted with chatbots on the Character.AI platform.
While the op-ed stops short of making formal recommendations, it does point out that both the U.K.'s Online Safety Act 2023 and the Terrorism Act 2006 fail to properly address the problem of generative AI, as neither statute covers content generated by the modern class of chatbots.