
> I think the first ones are rather stupid, but the latter ones get more and more important to actually have. Especially the very last one (opinion shifting/election interference) is something where the existence of these models can have a very real, negative effect on the world (affecting you even if you yourself never come into contact with any of the models or their outputs, since you'll have to deal with the puppet government elected due to it), and I appreciate the companies building and running the models doing something about it.

That genie is very much out of the bottle. There are already models good enough to build fake social media profiles and convincingly post in support of any opinion. The "make the technology incapable of being used by bad actors" ship has sailed, and I would argue it was never realistic. We need to improve public messaging around anonymous and pseudonymous communication. Make it absolutely clear that anything you read on the internet from someone you've not personally met and exchanged contact information with is more likely to be a bot than not. And no, you can't tell just by chatting with them, not even by voice chatting. The computers are convincingly human, and we need to alter our culture to reflect that fact of life, not reactively ban computers.



Many bad actors are lazy. If they had to fine-tune their own LLM on their own hardware to spam, there would be less spam.


The bar is not as high as you describe. Something like llama.cpp, or a wrapper like ollama, can pull down a capable general-purpose 8B or 70B model and run it on low- to mid-tier hardware, today. It'll only get easier.



