Hacker News

I don't disagree that aspects of that will be automated, but two things will remain: intent and judgement.

Building AI systems will be about determining the right thing to build and ensuring your AI system fully understands it. For example, I have a trading bot, and I spent a lot of time refining the optimization statement for the AI. If you give it the wrong goal, or there's any ambiguity, it can go down the wrong path.
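To make the ambiguity point concrete, here's a minimal sketch (all names hypothetical, not from the comment above) of the difference between a loose goal and an explicit optimization statement with constraints, expressed as a plain data object an agent could be handed:

```python
# Illustrative only: "maximize profit" vs. an explicit, constrained objective.
from dataclasses import dataclass

@dataclass(frozen=True)
class Objective:
    metric: str            # what the agent should maximize
    max_drawdown: float    # hard risk constraint, as a fraction of equity
    horizon_days: int      # evaluation window

# "Maximize total return" alone is ambiguous: unbounded leverage maximizes it.
ambiguous = Objective(metric="total_return", max_drawdown=1.0, horizon_days=30)

# A tighter statement bounds the paths the agent can take to the goal.
explicit = Objective(metric="sharpe_ratio", max_drawdown=0.10, horizon_days=90)

def satisfies(obj: Objective, realized_drawdown: float) -> bool:
    """Judge an outcome against the stated risk constraint."""
    return realized_drawdown <= obj.max_drawdown

print(satisfies(explicit, 0.25))  # a 25% drawdown violates the 10% limit
```

The point isn't the specific fields; it's that the constraints have to be stated, because any unstated one is a path the optimizer is free to exploit.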

On the back end, I then judge the outcomes. As an engineer I can tell whether the work it did actually accomplished the outcomes I wanted. In the future, people will be applying that judgement in every field out there.




You're trusting AI to trade with your real money?

I mean, real algo trading shops use "AI" to do it all the time, they just don't use LLMs. While I'm not the GP I think the idea they're trying to express is that the nuts and bolts of structuring programs is going away. The engineer of today, according to this claim and similar to Karpathy's Software 3.0 idea, structures their work in terms of blocks of intelligence and uses these blocks to construct programs. Nothing stopping Claude Code or another LLM coding harness from generating the scaffolding for a time-series model and then letting the author refactor the model and its hyperparameters as needed to achieve fit.
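As a hedged sketch of what that generated scaffolding might look like (a toy forecaster I'm inventing for illustration, not anyone's actual setup): a moving-average model with a single hyperparameter, plus a tiny grid search the author could then refactor to achieve fit.

```python
# Illustrative scaffolding: a moving-average forecaster with one
# hyperparameter (window) and a minimal in-sample grid search.

def forecast(series, window):
    """Predict each next value as the mean of the previous `window` values."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series))]

def mse(series, window):
    """Mean squared error of the forecaster on the given series."""
    preds = forecast(series, window)
    actual = series[window:]
    return sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(preds)

def fit_window(series, candidates=(2, 3, 5, 8)):
    """Pick the window with the lowest in-sample error."""
    return min(candidates, key=lambda w: mse(series, w))

data = [1.0, 1.1, 0.9, 1.2, 1.0, 1.3, 1.1, 1.4, 1.2, 1.5]
best = fit_window(data)
print(best, round(mse(data, best), 4))
```

An LLM harness can plausibly emit this shape of code; the judgement calls (which model family, which candidates, in-sample vs. out-of-sample evaluation) are where the human still earns their keep.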

Though I don't know of any algo trading shop that relies purely on algorithms, since market regimes change frequently and the alpha of any new edge quickly gets competed away.

(And personally I'm a believer in the jagged intelligence theory of LLMs, where there are some tasks LLMs are great at and other tasks they'll continue being idiotic at for a while, and I think there's plenty of work left for nuts-and-bolts program writers to do.)


My trading agent builds its own models, does backtesting, and builds tools for real-time analysis and trading. I wrote zero of the code; I haven't even seen it. The only thing I make sure of is that it's continuously self-improving (since I haven't been able to figure out how to automate that yet).
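For readers unfamiliar with the term, a backtest is just replaying historical prices through a strategy and tracking the resulting equity. A minimal sketch (toy signal and data of my own invention, not the commenter's agent):

```python
# Illustrative backtest: apply a signal bar by bar and compound returns.

def backtest(prices, signal):
    """signal(history) -> position in {-1, 0, +1} held over the next bar."""
    equity = [1.0]
    for i in range(1, len(prices)):
        pos = signal(prices[:i])
        ret = (prices[i] - prices[i - 1]) / prices[i - 1]
        equity.append(equity[-1] * (1.0 + pos * ret))
    return equity

def momentum(history):
    """Toy signal: go long if the last move was up, stay flat otherwise."""
    if len(history) < 2:
        return 0
    return 1 if history[-1] > history[-2] else 0

curve = backtest([100, 101, 102, 101, 103], momentum)
print(round(curve[-1], 4))
```

The real difficulty isn't this loop; it's avoiding lookahead bias, transaction costs, and overfitting to the replayed past, which is exactly the outcome-judging work described upthread.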

Not a lot of money, because I haven't built enough confidence yet, but yes, it's the ultimate test of whether it can do economically useful work.

If an LLM could trade profitably, why wouldn't its creators use it themselves rather than release it? It'd be by far the most profitable thing they could do.

How technical do you need to be with your optimization statements and outcome checking? Isn't that moat shrinking as AI keeps getting better?

Another way of saying this is that most line engineers will be moving into management, but managing AIs instead of people.


