Large language models suffer from two serious problems: hallucinating incorrect information, and heavy-handed censorship of their output.
Right now, most of these services strictly refuse to say anything even mildly controversial or pornographic. Given the increasingly large influence LLMs have on society, this is concerning: whole topics can simply get memoryholed whenever a model's creators deem them not "safe" or acceptable.