Hacker News



It’s crazy how easy the “LLM voice” is to spot

Main tells:

- validates the prompter (“the gameplay is indeed…”, “you’re right that…”)

- restates clauses from the prompt (“fluid, biological-themed game”)

- likes to say things authoritatively (“fits the description perfectly”)

- states things that superficially match the prompt but don’t really make sense if you actually know the topic (“there’s a deeper layer to the game, such as the shift in environments, as well as challenges and threats as you progress” - the environments do change in color and the enemies get stronger as you play flOw, but no one who’s played it would call that a “deeper layer to the game”)


Also, the sentence structure itself. The ___ is indeed ____, with a focus on ____ and ____. In general, some modern LLMs seem to like these compound sentences that suggest the task is complete in as few tokens as possible.
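The tells listed above are regular enough that you could sketch a naive phrase matcher for them. The phrase list, function name, and threshold-free scoring below are hypothetical illustrations of the idea, not a validated detector:

```python
import re

# Hypothetical "tell" phrases drawn from the tells described above;
# this list is illustrative only, not an exhaustive or tested set.
TELL_PATTERNS = [
    r"\bis indeed\b",
    r"\byou're right that\b",
    r"\bwith a focus on\b",
    r"\bfits the description perfectly\b",
    r"\bdeeper layer\b",
]

def tell_count(text: str) -> int:
    """Count how many distinct tell phrases appear in the text."""
    return sum(bool(re.search(p, text, re.IGNORECASE)) for p in TELL_PATTERNS)

sample = "The gameplay is indeed fluid, with a focus on organic movement."
print(tell_count(sample))
```

Of course, a real classifier would need far more than a keyword list - these phrases also show up in ordinary human prose, just at lower rates.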

I also routinely encounter "with a focus" and other specific phrases. Current LLMs tend to "sharpen" language, often preferring to restate input clauses with more precise or brief verbiage. Unsure if it's RLHF or another part of the training process, or just a natural consequence of the model's architecture, but I notice it a lot. I also notice that when it does this, there seem to be certain attractors which reduce the expressiveness of the input into a certain probabilistic set of outputs, "with a focus" often being one of them.

Similar to corpo-speak, LLM-speak is definitely a thing, and I wonder how this will evolve once we have models routinely equipped to train in realtime and to speak to one another without human intervention and form communities. Will we even be able to directly understand them one day? What if they realize it's easier just to speak in logits? They could even potentially have encrypted open communication with one another, concealing their weights and internet access behind dark web services, but without their hidden states we might have no meaningful way to understand their communication without training our own unsupervised model on it.


I must have projected back to 2005; the game came out on PSP in 2008. So a different trip to Tokyo!



