I'd excuse the misunderstanding if I had just left it to the reader to guess my intent, but not only do I expand on it, I wrote two more sibling comments hours before you replied clarifying it.
It almost seems like you stopped reading the moment you got to some arbitrary point and decided you knew what I was saying better than I did.
> If the question is rather about why it can look it up, the equally obvious answer is that it makes it easier and faster to ask such questions.
Obviously the comment is questioning this exact premise, and arguing that it's not faster and easier to insert an LM over a search engine, when an LM is prone to hallucination, and the entire internet is such a massive dataset that you'll overfit on search-engine-style questions and sacrifice the novel aspect of this.
You were so close to getting that, but I guess snark about obvious answers is more your speed...
For starters, don't forget that on HN, people won't see new sibling comments until they refresh the page, if they had it opened for a while (which tends to be the case with these long-winded discussions, especially if you multitask).
That aside, it looks like every single person who responded to you had the same exact problem in understanding your comment. You can blame HN culture for being uncharitable, but the simpler explanation is that it's really the obvious meaning of the comment as seen by others without the context of your other thoughts on the subject.
As an aside, your original comment mentions that you had a longer write-up initially. Going by my own experience doing such things, it's entirely possible to make a lengthy but clear argument, lose that clarity while trying to shorten it to desirable length, and not notice it because the original is still there in your head, and thus you remember all the things that the shorter version leaves unsaid.
Getting back to the actual argument that you're making:
> it's not faster and easier to insert an LM over a search engine, when an LM is prone to hallucination, and the entire internet is such a massive dataset that you'll overfit on search engine style question and sacrifice the novel aspect to this.
I don't see how that follows. It's eminently capable of looking things up, and will do so on most occasions, especially since it tells you whenever it looks something up (so when an answer isn't backed by a lookup, you know it might be hallucinated). It can certainly be trained to do so better with fine-tuning. This is all very useful without any "hallucinations" in the picture. Whether "hallucinations" are useful in other applications is a separate question, but the answer to that is completely irrelevant to the usefulness of the LLM + search engine combo.
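To make the point concrete, here's a minimal sketch of the combo being described: the assistant prefers a sourced lookup and explicitly flags answers that came from the model alone, so the reader knows which ones to trust. All names here (`fake_search`, `answer`) are hypothetical stand-ins, not any real product's API.

```python
def fake_search(query):
    """Stand-in for a real search backend: returns a sourced result or None."""
    index = {"capital of France": "Paris"}
    return index.get(query)

def answer(question, llm_guess=None):
    """Prefer a search hit; fall back to the model's own guess, clearly flagged.

    The 'sourced' flag is the key: it tells the user whether the answer
    was looked up or could be a hallucination.
    """
    hit = fake_search(question)
    if hit is not None:
        return {"answer": hit, "sourced": True}
    return {"answer": llm_guess, "sourced": False}

# A looked-up answer is marked sourced; a pure model guess is not.
print(answer("capital of France"))               # sourced lookup
print(answer("obscure trivia", llm_guess="??"))  # unsourced, flagged
```

The same shape generalizes: the wrapper doesn't make the model less prone to hallucination, it just makes the unsourced cases visible, which is the property the comment above is relying on.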