An AI doesn't need direct access to anything lethal to get someone seriously hurt. If it can, say, post to public forums, and it knows the audience of those forums and which emotional buttons of that audience to push, it could convince people to commit physical harm on its behalf. After all, we have plenty of examples of humans manipulating other humans into violence, so why couldn't an AI do the same?
And GPT already knows which buttons to push. It takes a little bit of prompt engineering to get past the filters, but it'll happily write inflammatory political pamphlets and such.