
Lots of folks on here are saying they only want to converse with other humans, for various reasons.

But here's the funny thing. I'm pretty sure the frontier models are now smarter than I am, more eloquent, and definitely more knowledgeable, especially the paid versions with built-in search/research capability. I'm also fairly certain that the number of original thoughts in a given discourse on the Internet is fairly small, I know that's certainly the case for me.

So whither humans now?

If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?




Nothing is stopping you from pasting an HN link into your chatbot of choice for an "informed" discussion.

The rest of us want the benefit of lived experience and genuine curiosity in discussions. LLMs are fundamentally incapable of both.


This reminds me of conversations around plagiarism that come up when working with students: that question of "this other person expressed this idea better than I can, why can't I just use their writing"?

Because I want to know what you think, because putting our thoughts into words and sharing them is an important part of thinking, because we'll lose these skills if we don't use them, because in thinking for yourself you might come up with something interesting that nobody has ever thought before.

Of course, writers are allowed to reference and use other people's writing: with proper attribution. I don't have a problem with people sharing quality AI-generated content when it's labelled as such. The issue is that most people writing AI comments don't do this, which is itself probably the strongest indictment of the practice.


That's hardly fair? Most forum users, even on HN, rarely provide sources for data/insights that they reference. I haven't seen that at work either most of the time.

One could argue that it should be, but it's just not the same standard to which students, papers, and Wikipedia contributors are held :)


> If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?

Good news then, you're currently on a forum! So we all agree that humans > AI, regardless of your thought on the intelligence behind it.


> Good news then, you're currently on a forum! So we all agree that humans > AI

I made the post to specifically disagree with that notion: I think that excluding top-quality AI output from the discussion will reduce the overall quality of forums, because it's now the case that top-tier LLMs > average human.

How do we assess top-quality output? The moderation tools for that already exist. Doesn't scale well? I'm guessing the days when AI can do it cheaper and faster are nigh.


Would you hang out with a friend over coffee or something who, rather than conversing with you, recorded your side of the conversation directly into an LLM and then played you back the result? Seems like a good way to kill a relationship.

A significant part of my friends and family conversations already involve referencing LLMs for scoping, explanations, deeper dives, insights etc. And it's not just me, they use LLMs more than I do. It helps move discussions along. Where before conversation would get bogged down in disputes, now we cover more ground.

If it helps, my friends and family tend to have at least a master's, and the majority have PhDs.

> Would you hang out with a friend over coffee or something who, rather than conversing with you, recorded your side of the conversation directly into an LLM and then played you back the result?

I think the difference is that you're imagining the LLM replaces the conversationalist, but as I said above, my lived experience is that the LLM provides grounding to the discussion, effectively having replaced internet search as a better, faster, broader, smarter library. It doesn't kill the conversation, it makes it better.


> If it helps, my friends and family tend to have at least a master's, and the majority have PhDs.

Those aren't super rare these days, and I don't know why arbitrary credentials would matter for this purpose. Incidentally, the notion that they would matter in conversation at all kind of speaks to the type of engagement you might be having with them, which may indeed be different from what I care about.

Personally, I don't find people all that engaging the more inclined they are to go looking up answers; to me it signals a discomfort with uncertainty, and a bit of ego, when comfort with not knowing is necessary for a fun conversation. If someone has an answer because of their experience, great; otherwise it's ok to not know in the moment and continue on.

In one case, I had a friendship kind of fizzle out because we'd be hanging out and I'd express some curiosity that I'd hope he'd build on with his own experience or his own sense of wonder. But because he only cared about authoritative facts, he'd google the answer and get frustrated that I only cared about his opinion on what the answer might be. The actual fact was incidental, and this conflict regularly led to an impasse where I'd clarify that I don't care what the internet says, etc. And I'm fine with that, because he wasn't really interested in thought exercises.

A concrete hypothetical mundane example might be posing "How do you think the Iran war might impact gas prices here?" and they'd just look up the history and trends, and then kind of stop there. Dull, I want a human response, speculate and build on it, let yourself be wrong.


> Those aren't super rare these days, I don't know why arbitrary credentials would matter for this purpose

It's an indicator that that demographic isn't opposed to using AI as a conversational tool and finds it useful for that purpose - an instant, "smarter" library, if you will.

> The actual fact was incidental, and this conflict regularly led to impasse where I'd clarify I don't care what the internet says etc.. and I'm fine with that because he wasn't really interested in thought exercises.

Thought exercises are better, imho, when they're grounded in facts. Why wouldn't you care what the facts are? Can one have the same level of discourse about space with someone who isn't aware that the Earth is round and thinks it is flat?

> A concrete hypothetical mundane example might be posing "How do you think the Iran war might impact gas prices here?" and they'd just look up the history and trends, and then kind of stop there. Dull, I want a human response, speculate and build on it, let yourself be wrong.

Color me confused. Are you looking for a panic or doomsday response, or what? What does "human response" even mean? A human looked at the history and trends; that's that human's response to the question!

Looking up the history and trends, and building on those facts could be a deeper dive into the wonders of economics, an exploration of the interconnected-ness and dependence of the various parts of the economy on oil and gas (fertilizer, plastics, and their downstream industries), where the fractionating plants are, where they get their raw materials from, how tied into futures contracts those are, who's got long-term contracts insulating them from the impact, what's that % of folks insulated for 3 months, 6 months, 12 months etc. etc.

I have to say, asking me to speculate and build on a topic that I know nothing about would invite a 'lookup' response from me as well; that's just (imho) a critical-thinker style. Once the lookup is done, as a questioner, may I suggest asking probing questions to move the conversation forward - that's what I do.

Just out of curiosity, are you a D&D player, or a Fantasy or adjacent creative? I'm wondering what sort of nature would want to elicit an ungrounded speculative response, and I can imagine an enjoyer or creator of fantasy looking for a creative, speculative, thought exercise with a real world question as a starting point.


Ultimately the example was just the quickest hypothetical I came up with in the moment to illustrate the point, but it could have been better. It was usually just an off-the-cuff prompt where facts are potentially available but the factual answer is inconsequential.

A real example was much more trivial, or casually philosophical. I'd ask what they meant when they'd use the word "dynamic" in a specific context, because we seemed to use it slightly differently, and he just googled the definition as if I didn't know it, then got frustrated when I clarified "No, I was asking what you meant, not what the definition is". It's a personal and/or possibly nuanced question (like what I meant by "human response" above) rather than an objective concrete thing.

In another case, they were talking about their interest in learning to sail, possibly joining a sailing club, as we happened to walk by a marina on our way to an unrelated activity we both shared an interest in. Trying to build on their topic, I said something like "I wonder what's involved in establishing a marina; do you think their zoning controls are similar to every other commercial venture, or is the shoreline controlled differently?". The factual answer isn't important, we're not going to open a marina, but I was hoping he'd just roll with it and speculate. Instead, he bailed immediately, as though I expected him to know the exact numbers, and the conversation stalled. I clarified and said I didn't expect him to know; that's why it's interesting. If he did know, then that's cool too, we could carry on, but there was no aspect of curiosity expressed.

> Just out of curiosity, are you a D&D player, or a Fantasy or adjacent creative? I'm wondering what sort of nature would want to elicit an ungrounded speculative response, and I can imagine an enjoyer or creator of fantasy looking for a creative, speculative, thought exercise with a real world question as a starting point.

Nope, software engineer, interested in creative fields but not so much D&D or fantasy roleplaying, just fantasy video games and TV, but it's not something I think about in terms of character building or lore, just entertainment.


Thanks for clarifying. Yes, it sounds like someone without a very strong imagination or curiosity. That's no fun.


