
I feel like people are using AI in the wrong way.

Current LLMs are best used to generate a string of text that is statistically most likely to form a coherent sentence. So from the user's perspective, they're most useful as an alternative to a manual search engine: a way to find quick answers to a simple question, such as "how much baking soda is needed to bake X units of Y bread" or "how to print 'Hello World' 10 times in a loop in X programming language". Beyond this use case, the results can be unreliable, and that is to be expected.
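The second kind of question is a good illustration: the answer is short, appears in countless tutorials, and a model will almost always get it right (Python here as an arbitrary choice of language):

```python
# Print "Hello World" ten times in a loop -- the kind of
# well-trodden snippet an LLM reliably reproduces.
for i in range(10):
    print("Hello World")
```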

Sure, it can also generate long code and even an entire fine-looking project, but it generates it by following a statistical template; that's it.

That's why "the easy part" is easy: the easy problem you're trying to solve has likely already been solved by someone else on GitHub, so the template is already there. But the hard, domain-specific problem is less likely to have a publicly available solution.



>I feel like people are using AI in the wrong way.

I think people struggle to comprehend the mechanisms that let them talk to computers as if they were human. So far in computing, we have always been able to trace the red string back to its origin, deterministically.

LLMs break that, and we, especially us programmers, have a hard time with it. We want to say "it's just statistics", but there is no intuitive way to jump from "it's statistics" to what we are doing with LLMs in coding now.

>That's why "the easy part" is easy: the easy problem you're trying to solve has likely already been solved by someone else on GitHub, so the template is already there.

I think the idea that LLMs "just copy" is a misunderstanding. The training data is atomized, and a combination of those atoms can be as unique coming from an LLM as from a human.

In 2026 there is no doubt that LLMs can generate new, unique code by any definition that matters. Saying LLMs "just copy" is as true as saying any human writer just copies words already written by others: strictly speaking true, but also irrelevant.


Well said. It also causes a lot of bitterness among engineers; not being able to follow the red string is maddening to some. That frustration can also keep them from finding good prompting strategies that would directly ease a lot of the pain, much as it's far harder to teach my mother how to do something on her phone if she's already frustrated with it.


Which is great because then I can use my domain expertise to add value, rather than writing REST boilerplate code.
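The "REST boilerplate" in question is exactly the kind of pattern that exists in thousands of public repos. A minimal sketch, using Flask (the route and payload here are hypothetical, just to illustrate the shape):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# A routine read-only endpoint: the structure is near-identical
# across countless projects, which is why models reproduce it well.
@app.route("/api/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    # Hypothetical lookup; a real app would query a database here.
    return jsonify({"id": item_id, "name": f"item-{item_id}"})
```

None of this code carries any domain knowledge; it's pure plumbing, which is the commenter's point.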


Having to write boilerplate code is a sign that libraries are just not up to the level they should be. That can be solved the regular old way.


Come on, this shows a fundamental lack of understanding and experience on your side.


I think you severely overestimate your understanding of how these systems work. We've been beating the dead horse of "next-character approximation" for the last five years in these comments. A global maximum would have been reached long ago if that were all there was to it.

Play around with some frontier models, you’ll be pleasantly surprised.


Did I miss a fundamental shift in how LLMs work?

Until they change that fundamental piece, they are literally that: programs that use math to determine the most likely next token.
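Mechanically that is true: at each step the model maps raw scores (logits) to a probability distribution and picks a token. A toy sketch of greedy next-token selection, with a made-up four-word vocabulary and illustrative numbers (not from any real model):

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities (max-shifted for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits for some context like "Hello".
vocab = ["World", "there", "!", "cat"]
logits = [3.2, 1.1, 0.4, -2.0]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding: take the argmax
print(next_token)  # -> World
```

Real systems usually sample from the distribution (with temperature, top-p, etc.) instead of always taking the argmax, but the "most likely next token" framing above is the core loop.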


This point is irrelevant when discussing capabilities. It's like saying that your brain is literally just a bunch of atoms following a set of physics laws. Absolutely true but not particularly helpful. Complex systems have emergent properties.


The problem, I think, is that current LLMs may not be complex enough to respond to every stimulus.

Current LLM systems are more like a simulation of stimulation: a conclusion rather than an exploration.



