
The argument is not transferable because breaking into someone's house is sure to do more harm than the unspecified hypothetical harm a "script kiddie" could do with ChatGPT, and because bypassing a door lock requires some degree of skill, whereas a ChatGPT jailbreak only requires you to google a prompt and copy-paste it. A physical lock on a door offers a great deal more security than the limp solution current AI safety provides, and it solves a much more pressing problem than "stopping trolls."

If your hypothetical involved a combination lock with the combination on a sticky note that anyone could read at any time, it might be more apt, but even then the harms done by breaking the security aren't the same. I'm not convinced a typical user of ChatGPT can do significant harm; the harms from LLMs come more from mass-generated spam content, which currently has no safeguards at all.
