People have goals, and especially clear goals within contexts. So if you give a large, capable LLM a clear context in which it is supposed to have a goal, it will have one, as an emergent property. Of course, it will "act out" those goals only insofar as that is consistent with text completion (it has no other means of interaction anyway).
I think a good analogy is seeing an LLM as an amalgamation of every character and every person; it can represent any one of them pretty well, "incorporating" the character and effectively becoming it momentarily. This explains why you can get it to produce inconsistent answers in different contexts: it really doesn't have a unified/universal notion of truth; its notion of truth is contingent on context (which is somewhat troublesome for an AI we expect to be accurate -- it will tell you what you would expect to be told in that context, not what's actually true).