It is also not only bad actors... recursive learning is a risk here too. I was talking to an engineer friend of mine from Anthropic; they are already doing this in closed loops, but some of the interpretability tests are proving difficult after a few rounds of agents training agents. I'm not all doom and gloom, but nobody should be using any of this until they know what they are doing, how the tech works, what the risks are, and how they can minimize them.