Hacker News

Surprised to see HN being bearish on this.

I have 10 years of experience. I am a reasonable engineer. I can tell you that about half of the hype on Twitter is real. It is a real blessing for small teams.

We have 100k DAU for a consumer CRUD app. We built and maintain everything in-house with 3 engineers. This would have taken at least 10 engineers 3 or 4 years ago.

We don't have a bug list. We are not "vibe coding", 2 of us understand almost all of the codebase. We have processes to make sure the core integrity of the codebase doesn't go for a toss.

No one has touched the editor in months.

Even the product folks can raise a PR for small config changes from Slack.
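A flow like that can be mostly plumbing. Here is a minimal sketch of what a Slack-to-PR bot for config changes might look like; the slash-command format, dotted config keys, and branch naming are all invented for illustration, not the commenter's actual setup.

```python
# Hypothetical Slack-to-PR flow for small config changes.
# A real bot would receive the slash command over HTTP and call the
# GitHub API; the pure-logic pieces below are the testable core.
import json


def parse_config_command(text):
    """Parse a Slack command body like 'set feature.max_retries 5'."""
    parts = text.split()
    if len(parts) != 3 or parts[0] != "set":
        raise ValueError("expected: set <key> <value>")
    _, key, value = parts
    return key, value


def apply_change(config, key, value):
    """Return a new config dict with a dotted key updated."""
    updated = json.loads(json.dumps(config))  # cheap deep copy
    node = updated
    *path, leaf = key.split(".")
    for part in path:
        node = node.setdefault(part, {})
    node[leaf] = value
    return updated


def pr_metadata(key, value):
    """Build the PR title/branch/body a bot would submit for review."""
    return {
        "title": f"config: set {key} = {value}",
        "branch": f"config/{key.replace('.', '-')}",
        "body": "Opened from Slack; requires normal review before merge.",
    }
```

The point of keeping it as a PR rather than a direct write is that the existing review process still gates every change, so non-engineers get a safe on-ramp.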

Velocity is through the roof, and code quality is as good as, if not better than, what we wrote by hand.

We refactor A LOT more than before because we can afford to.

I love it.




HN is in denial, which is understandable

AI is already better at understanding code than 99.99% of humans, and the more I use it the more I believe this is true. It can draw connections between dots far more quickly and accurately than a human ever could.

At the very least, AI is going to be a must, even as just a co-supervisor on your project.

What's in doubt right now is whether AI can manage a codebase fully autonomously without bringing it down, which I doubt it can at the moment. Be it 4.6 or 5.4, they almost always add code instead of removing it; the sheer complexity will explode at some point.

But that is my assessment for models TODAY, and who knows where they will end up in 6 months. AI is entering the recursive self-improvement phase; that roadmap is lying in front of our eyes, and what it can and would unlock is truly, truly unpredictable.

I am both intrigued and scared.


> AI is already better at understanding code than 99.99% of humans

not to nitpick here, but AI does not understand code. AI (LLMs) are token predictors, or at best sophisticated pattern matchers over a huge search space...


The RAG models are very competent at programming. I am worried about my job as a SWE in the near future, but didn't the MIT paper from about a week ago pretty much confirm that width-scaling the model has stopped (or is about to stop) giving any measurable increase in quality, because the training data no longer overfills the model?

Any authentic training data from the pre-LLM era is assumed to have been used in training already, and synthetic or generated data yields worse-performing models, so increasing the amount of training data seems to be a dead end as well?

What is the next vector of training? Maybe data curation? Remove the low-quality entries and accept a smaller but more accurate data set?

I think the AI companies are starting to sweat a little, considering the promises they have made, their inability to deliver and turn a profit in their current state, and the slowing improvements.

Interesting times! Either we are all out of jobs or a massive market crash is imminent, awesome...


Different architectures, different RL training loops, maybe memory modules [1][2] as part of the architecture, a focus on efficiency, the giant troves of data we're generating by using claude code/gemini-cli/opencode... there's lots of research still to be done.

[1] https://research.google/blog/titans-miras-helping-ai-have-lo... [2] https://github.com/deepseek-ai/Engram


Link? Genuinely curious to check it out.

100k DAU - you'll lose 98% within 6-9 months once a 1-2 person team clones it and sells it for 10% of what you are charging

That was always a possibility even before AI; what's hard to clone is how they got those users

possibility yes - reality often no, due to the cost that would have to be incurred to make this happen. the "how they got those users" is the easy part if your offering is the same(ish) at a fraction of the cost.

not if LTV is already only a little higher than CAC and all the marketing channels are already saturated
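The unit-economics argument above can be shown with a few lines of arithmetic. All numbers here are made up for illustration: if lifetime value (LTV) only barely exceeds customer acquisition cost (CAC), a clone charging 10% of the price has no room to acquire users profitably in the same saturated channels.

```python
# Toy unit-economics check for the LTV vs. CAC argument.
# Figures are invented; the point is the margin structure.
def viable(ltv, cac):
    """A business model works only if a customer is worth more
    than it costs to acquire."""
    return ltv > cac


incumbent_ltv = 60.0   # thin but positive margin
cac = 50.0             # roughly fixed in saturated channels
clone_ltv = incumbent_ltv * 0.10  # clone charges 10% of the price
```

With these numbers `viable(incumbent_ltv, cac)` holds while `viable(clone_ltv, cac)` does not: undercutting on price alone destroys the clone's own economics before it hurts the incumbent.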

I don't disagree, but in this day and age figuring out who the customers are is fairly trivial with "AI", and then a simple marketing campaign will at least point the right eyes at the right place. Unless there is a large moat, switching will happen (it is already happening across the industry, which is why we are hearing about the death of SaaS and all that jazz...)

Most developers are still in denial. Many are afraid of job loss, or their corporations are forcing AI without clear scopes and proper implementation, which results in a mess. Small teams building small-to-medium products are productive as hell with AI.


