Hacker News



And if you make someone 3x faster at producing a report that 100 people have to read, but it now takes 10% longer to read and understand, you’ve lost overall value.


You are forgetting that they are now going to use AI to summarize it back.


This is one of my major concerns about people trying to use these tools for 'efficiency'. The only plausible value in somebody writing a huge report and somebody else reading it is information transfer. LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high, and you will be worse off reading the summary than if you skimmed the first and last pages. In fact, you will be worse off than if you did nothing at all.

Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.

Relatedly, I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!


Yep. The other way it can have no net impact is if it saves thousands of hours of report drafting and reading but misses the one salient fact buried in the observations that could actually save the company money, whilst completely nailing the fluff.


> LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high

I could go either way on the future of this, but if you take the argument that we're still early days, this may not hold. They're notoriously bad at this so far.

We could still be in the PC DOS 3.X era in this timeline. Wait until we hit the Windows 3.1, or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.


Personally, I strongly doubt it. Since the nature of LLMs gives them no real grasp of semantic content or context, I believe it is inherently a tool unsuited for this task. As far as I can tell, it's a limitation of the technology itself, not of the amount of power behind it.

Either way, being able to generate or compress loads of text very quickly with no understanding of the contents simply is not the bottleneck of information transfer between human beings.


Yeah, definitely more skeptical for communication pipelines.

But for coding, the latest models are able to read my codebase for context, understand my question, and implement a solution with nuance, using existing structures and paradigms. It hasn't missed since January.

One of them even said: "As an embedded engineer, you will appreciate that ...". I had never told it that was my title, it is nowhere in my soul.md or codebase. It just inferred that I, the user, was one. Based on the arm toolchain and code.

It was a bit creepy, tbh. They can definitely infer context to some degree.


> We could still be in the PC DOS 3.X era in this timeline. Wait until we hit the Windows 3.1, or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.

While we're speculating, here's mine: we're in the Windows 7 phase of AI.

IOW, everything from this point on might be better tech, but is going to be worse in practice.


I would like to see the day when the context size is in gigabytes or tens of billions of tokens, not RAG or whatever, actual context.


Context size helps some things but generally speaking, it just slows everything down. Instead of huge contexts, what we need is actual reasoning.

I predict that in the next two to five years we're going to see a breakthrough in AI that doesn't involve LLMs but makes them 10x more effective at reasoning and completely eliminates the hallucination problem.

We currently have "high thinking" models that double and triple-check their own output and we call that "reasoning" but that's not really what it's doing. It's just passing its own output through itself a few times and hoping that it catches mistakes. It kind of works, but it's very slow and takes a lot more resources.

What we need instead is a reasoning model that can be called upon to perform logic-based tests on LLM output or even better, before the output is generated (if that's even possible—not sure if it is).

My guess is that it'll end up something like a "logic-trained" model instead of a "shitloads of raw data trained" model. Imagine a couple terabytes of truth statements like, "rabbits are mammals" and "mammals have mammary glands." Then, whenever the LLM wants to generate output suggesting someone put rocks on pizza, it fails the internal truth check, "rocks are not edible by humans" or even better, "rocks are not suitable as a pizza topping" which it had placed into the training data set as a result of regression testing.

Over time, such a "logic model" would grow and grow—just like a human mind—until it did a pretty good job at reasoning.
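The "logic model" gate described above can be sketched as a toy filter. Everything here is invented for illustration: the fact store, the triple format, and the rules; a real system would need vastly more facts and actual inference, not exact-match lookups.

```python
# Hypothetical store of truth statements, as (subject, relation, object) triples.
FACTS = {
    ("rabbit", "is", "mammal"),
    ("mammal", "has", "mammary glands"),
    ("rock", "is_not", "edible by humans"),
    ("rock", "is_not", "pizza topping"),
}

def violates_facts(claim):
    """A positive 'is' claim fails if the store holds its explicit negation."""
    subject, relation, obj = claim
    return relation == "is" and (subject, "is_not", obj) in FACTS

def truth_gate(candidate_claims):
    """Filter candidate output claims, dropping any that contradict the store."""
    return [c for c in candidate_claims if not violates_facts(c)]

candidates = [
    ("rock", "is", "pizza topping"),  # contradicts a stored negation: rejected
    ("rabbit", "is", "mammal"),       # consistent with the store: passes
]
passed = truth_gate(candidates)
```

The hard part the comment glosses over is that a lookup table of negations is not reasoning; "rocks are not a pizza topping" only blocks that exact triple, not the infinitely many paraphrases an LLM can emit.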


Upvoted, as it basically 99% matches my own thinking. Very well said. But I, personally, would not predict a breakthrough in this direction in the next 2-5 years, as there is no pathway from current LLM tech to "true reasoning". In my mental model, an LLM operates in "raster space", with "linguistic tokens" being the "rasterization units". For "true reasoning", an AI entity has to operate fluently in "vector space", so to speak. An LLM can somewhat simulate "reasoning" to a limited degree, and even that it only does with brute force: massive CPU/GPU/RAM resources, enormous amounts of training data, and giant working contexts. And still, that "simulation" is incomplete and unverifiable.

I would argue that the research needed to enable such "vector operation" is nowhere near the stage to come to fruition in the next decade. So, my prediction is, maybe, 20-50 years for this to happen, if not more.


Wasn’t this idea the basic premise of Coq? Why didn’t it work?


> I would like to see the day when the context size is in gigabytes or tens of billions of tokens, not RAG or whatever, actual context.

Might not make a difference. I believe we are already at the point of negative returns - doubling context from 800k tokens to 1600k tokens loses a larger percentage of context than halving it from 800k tokens to 400k tokens.


First impressions are everything. It's going to be hard to claw back good will without a complete branding change. But... where do you go from 'AI'???


There are many things that used to be called AI, but as their shortcomings became known we started dropping them from the AI bucket and referring to them by more specific names: expert systems, machine learning, etc. Decades later, plenty of people never learned this, and those things don't come to mind with "AI", so LLMs were able to take over the term.

Given time I could see this happening again.


> LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high…

I keep seeing this statement in threads about AI, and maybe it’s just from you, but high SNR is a good thing.

See https://en.wikipedia.org/wiki/Signal-to-noise_ratio

I think the rest of your post is very valid. It’s the mental equivalent of this article https://news.ycombinator.com/item?id=47049088
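Since the thread keeps tripping over the ratio's direction, a quick numeric sketch of the standard dB formula (the power values are arbitrary):

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: positive when signal dominates noise."""
    return 10 * math.log10(signal_power / noise_power)

# High SNR (good): strong signal, weak noise.
good = snr_db(100.0, 1.0)   # 20.0 dB
# A high noise-to-signal ratio is the same situation inverted: a negative SNR.
bad = snr_db(1.0, 100.0)    # -20.0 dB
```

So "noise-to-signal ratio is unacceptably high" and "SNR is unacceptably low" describe the same bad situation, which is the point the reply below makes.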


Except they didn't say signal-to-noise, they said noise-to-signal. And if the NSR is unacceptably high, that means the SNR is unacceptably low.

Two inverses do make a right, it seems.


Hah, you got me there! I'll try to keep the ratio flipped correctly next time ;)

I read that article recently, so the similarities might not be entirely coincidental. 'JPEG of thought' is gonna stay in my vocabulary for a while.


Hehe, yeah there's some terms that just are linguistically unintuitive.

"Skill floor" is another one. People generally interpret that one as "must be at least this tall to ride", but it actually means "amount of effort that translates to result". Something that has a high skill floor (if you write "high floor of skill" it makes more sense) means that with very little input you can gain a lot of result. Whereas a low skill floor means something behaves more linearly, where very little input only gains very little result.

Even though it's just the antonym, "skill ceiling" is much more intuitive in that regard.


Are you sure about skill floor? I've only ever heard it used to describe the skill required to get into something, and skill ceiling describes the highest level of mastery. I've never heard your interpretation, and it doesn't make sense to me.


Yes, I am very sure. And it isn't that difficult to understand: it is skill input graphed against effectiveness output. A higher floor just means that with 1 skill, you are guaranteed at least X (say, 20) effectiveness output.

https://imgur.com/tOHltkx

The confusion comes from people using "skill floor" for "learning curve" instead of "effectiveness".

But this is a thing where definitions have shifted over time. Like jealousy. People use "jealousy" when they really mean "envy", but correcting someone on it will usually just get you scorn and ridicule, because like I mentioned, language is fluid.


If the skill floor is high and therefore "effectiveness" is the same for a wide range of skill levels, isn't that the same as having a high barrier to entry? It seems that any activity or game where it takes a lot of skill before you can differentiate yourself from other players would be described that way.


No, a high skill floor is the opposite. It means that anyone can pick up the thing and immediately do decently.

To put it simply, think assault rifle vs. sniper rifle. Anyone can use the AR, spray and pray, and do pretty okay. You can't do that with the sniper rifle. So the AR has a high skill floor (minimum effectiveness) whereas the sniper rifle has a low skill floor (low minimum effectiveness). But the AR also has a low skill ceiling, to the point where you can put in endless amounts of skill and see no improvement in effectiveness. The sniper, being an infinite-range OHKO, can scale to the end given aim skill and map knowledge.

Another example would be Reinhardt in Overwatch. You can tell a noob to "look in that direction and deploy shield" and they will contribute to the team. You can't put a noob on Widowmaker and have them contribute (as) significantly.
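The model being described (skill in, effectiveness out, clamped between a floor and a ceiling) can be put in numbers. The curve shapes and values below are invented purely to illustrate the terminology:

```python
def effectiveness(skill: float, floor: float, ceiling: float, slope: float) -> float:
    """Map skill (0-100) to effectiveness, clamped between floor and ceiling."""
    return max(floor, min(ceiling, floor + slope * skill))

# "Assault rifle": high floor (decent output at zero skill), low ceiling.
ar = [effectiveness(s, floor=40, ceiling=60, slope=0.5) for s in (0, 50, 100)]
# "Sniper rifle": low floor, high ceiling (keeps scaling with skill).
sniper = [effectiveness(s, floor=5, ceiling=95, slope=1.0) for s in (0, 50, 100)]
```

At zero skill the AR already outperforms the sniper (the high floor); at max skill the sniper is far ahead (the high ceiling).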


I've also never heard that use of "skill floor" before. The "floor/ceiling" descriptors imply min/max constraints.


It reminds me of that Apple ad where a guy just rocks up to a meeting completely unprepared and spits out an AI summary to all his coworkers. Great job Apple, thanks for proving Graeber right all along.


> Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!

That is true, but the same was true with Google. You can see why some people want to go back to the "read the book" era, where you didn't have Google to query everything and had to ask the real questions.


One thing AI should eliminate is the "proof of work" report. Sometimes the long report is not meant to be read, but used as proof that somebody has thoroughly thought through various things (captured by, for instance, required sections).

When AI is doing that, it loses all value as a proof of work (just as it does for a school report).

"My AI writes for your AI to read" is low value. But there is probably still some value in "my AI takes these notes and makes them into a concise, readable doc".


> Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.

i may put this into my email signature with your permission, this is a whip-smart sentence.

and it is true. i used AI to "curate information" for me when i was heads-down deep in learning mode, about sound and music.

there was enough all-important info being omitted that i soon realized i was developing a textbook case of superficial, incomplete knowledge.

i stopped using AI and did it all over again through books and learning by doing. in retrospect, i'm glad to have had that experience because it taught me something about knowledge and learning.

mostly, that something boils down to RTFM. a good manual or technical book written by an expert doesn't have a lot of fluff. what exactly are you expecting the AI to do? zip the rar file? it will do something, and it might look great, but lossless compression it will not be.

P.S. not a prompt skill issue. i was up to date on cutting edge prompting techniques and using multiple frontier models. i was developing an app using local models and audio analysis AI-powered libraries. in other words i was up to my neck immersed in AI.

after i grokked as much of the underlying tech as i could from reading the theory, given my limited math knowledge, i realized the "skill issue" invectives don't hold water. if things break exactly in the way they're expected to break as per their design, it's a little too on the nose. even appealing to your impostor syndrome won't work.

P.P.S. it's interesting how a lot of the slogans of the AI party are weaponizing trauma triggers or appealing to character weaknesses.

"hop on the train, commit fully, or you'll be left behind" > fear of abandonment trigger

"pah, skill issue. my prompts on the other hand...i'm afraid i can't share them as this IP is making me millions of passive income as we speak (i know you won't probe further cause asking a person about their finances is impolite)" > imposter syndrome inducer par excellence, also FOMO -- thinking to yourself "how long can the gold rush last? this person is raking it in!! what am i doing? the miserable sod i am"

1. outlandish claims (Claude writes ALL the code) no one can seem to reproduce, and indeed everyone non-affiliated is having a very different experience

2. some of the darkest patterns you've seen in marketing are the key tenets of the gospel

3. it's probably a duck.

i've been 100% clear on the grift since October '25. Steve Eisman of "The Big Short" was just hopping onto the hype train back then. i thought... oh. how much analysis does this guru of analysts really do? now Steve sings of AI panic and blood in the streets.

these things really make you think, about what an economy even is. it sure doesn't seem to have a lot to do with supply and demand, products and services, and all those archaisms.


So what we now have is a very expensive and energy-intensive method for inflating data in a lossy manner. Incredible.


Remarkably it has only cost a few trillion dollars to get here!


don't forget the insane costs to stay here


This reminds me of that "telephone" kids game.

https://en.wikipedia.org/wiki/Telephone_game


So a circular economy in which you add mistakes


For all the technology we develop, we rarely invest in processes. Once in a blue moon some country decides to revamp its bureaucracy, when it should really be a continuous effort (in the private sector too).

OTOH, what happens continuously is that technology is used to automate bureaucracy and even allows it to grow some complexity.


An economy of the LLMs, by the LLMs, for the LLMs, shall not perish from the Earth.


Rather poignant, actually. By replacing people with LLMs, you've just made the economy as a whole something which can be owned.


See, this is an opportunity. Company provides AI tool, monitors for cases where AI output is being fed as AI input. In such cases, flag the entire process for elimination.


Maybe the take is that those reports that people took a day to write were read by nobody in the first place, and now those reports are being written faster and more of them are being produced, but still nobody reads them. Thus productivity doesn't change. The solution is to get rid of all the people who write and process reports and empower the people who actually produce stuff to do it better.


The managerial class are like cats and closed doors.

Of course they don't read the reports; who has time to read them? But don't even think about not sending the report: they like to have the option of reading it if they choose to do so.

A closed door removes agency from a cat, an absent report removes agency from a manager.


> The solution is to get rid of all the people who write and process reports and empower the people who actually produce stuff to do it better.

That’s the solution if you’re the business owner.

That’s definitely not the solution if you’re a manager in charge of this useless activity; in fact, you should increase the amount of reports being written as much as humanly possible. More underlings under you = more power and prestige.

This is the principal-agent problem writ large. As the comment mentioned above, also see Graeber’s Bullshit Jobs essay and book.


> Thus productivity doesn't change.

Indeed, productivity has decreased, because now there’s more output that is waste and you are paying to generate that excess waste.


What happens if (and I suspect this to be increasingly the case now) you make someone 3x faster at producing a report that nobody reads and those people now use LLMs to not read the report whereas they were not reading it in person before?

Then everyone saves time, which they can spend producing more things which other people will not read and/or not reading the things that other people produce (using llms)?

Productivity through the roof.


Now you know why GDP is higher than ever and people are poorer than ever.


Mmm I can’t wait to get home and grill up some Productivity for dinner. We’ll have so much Productivity and no jobs. Hopefully our billionaire overlords deign to feed us.


> Hopefully our billionaire overlords deign to feed us.

Eat the Rich


And like the article says, early computerization produced way more output than anybody could handle. In my opinion, we realized the true benefits of IT when ordinary users were able to produce for themselves exactly the computations they needed. That is, when spreadsheets became widespread. LLMs haven’t had their spreadsheet moment yet; their outputs are largely directed outward, as if more noise meant more productivity.


Not necessarily. You could have 100 FTE on reports instead of 300 FTE in a large company like a bank. That means 200 people who'd normally go into reporting jobs over the next decade will go into something else, producing something else on top of the reports that continue to be produced. The sum of this is more production.

Looking at job numbers that seems to be happening. A lot less employment needed, freeing up people to do other things.


> Looking at job numbers

That is a wild take given the recession we're basically in from bad US policy like tariffs.


I’m not in favor of these tariffs. At all. However, it seems that they haven’t had such an impact yet on the economy, at least regarding consumer prices. You’d expect much larger inflation given the tariffs IIUC.

My current understanding of the general consensus is that many companies have been eating the tariffs in the hope that SCOTUS will strike them down. If they are upheld, prices will likely rise significantly.


Actually job numbers are depressed (hiring recession) and GDP numbers are still way up, both precisely due to the AI investment. More output with fewer people.

Wild take to cite a recession when last quarter growth was 4.4%.


"The economy" is not GDP.


Firstly, nobody said 'the economy' so I don't know why you're even putting it in quotation marks.

Secondly, GDP is the best measure of output / value-add we have, and it's significantly up, despite jobs being down.

Output going up with fewer people means more productivity. That's the point that was being made.

Recessions are measured in economics by tracking GDP, which the person I replied to said we're in. We're not.

Whatever concept of "the economy" you had in mind to bring more nuance and refinement to the discussion (which is possible and welcome, and which you haven't bothered to add) doesn't refute the basics above.


It is for the wealthy. And nobody cares what the economy is like for everyone else.


> The real gains from AI show up when it changes what work gets done, not just how fast existing work happens.

Sadly AI is only capable of doing work that has already been done, thousands of times.


This is the natural result when the value of businesses is not strongly related to their actual output.


The most hyped use cases for AI/LLM make me wonder, "why are we doing this activity to begin with? We could just not."


What a load of nonsense, they won't be producing a report in a third of the time only to have no-one read it. They'll spend the same amount of time and produce a report three times the length, which will then go unread.


Stewart Butterfield calls these "hyper-realistic work-like activities".


Not a phase. I’d argue that 90% of modern jobs are bullshit, there to keep the cattle occupied and the economy rolling.


You know, that would almost be fine if everyone could afford a home and food and some pleasures.


Your claim and the claims that all white collar jobs are going to disappear in 12-18 months cannot both be true. I guess we will see.


It's possible to automate the pointless stuff without realising it's pointless.


Made me think of this.

https://imgur.com/T4DAGG8


Imgur is banned in the UK.

I recommend https://catbox.moe/ which can even use remote links, so pasting the imgur link there works too.

https://files.catbox.moe/4dhvok.jpeg


> Imgur is banned in the UK.

It's the other way round: Imgur banned UK access so that they wouldn't have to worry about the UK's stupid, authoritarian Online "Safety" Act.


The question is: did the fake numbers make any difference? Were the management decisions based on them better or worse?


I think they can both be true. Perhaps the innovation of AI is not that it automates important work, but that it forces people to question whether the work has already been automated or is even necessary.


Well, if a lot of it is bullshit that can also be done more efficiently with AI, then 99% of white collar roles could be eliminated by the 1% using AI, and essentially both were very close to true.


Jobs you don’t notice or understand often look pointless. HR on the surface seems unimportant, but you’d notice if the company stopped having health insurance or sending your taxes to the IRS etc etc.

In the end, when jobs are done right they seem to disappear. We notice crappy software or a poorly done HVAC system, not clean carpets.


This just highlights the absurdity of having your employer responsible for your health insurance and managing your taxes for you.

These should be handled by the government, equally for all.


Moving some function to the government doesn’t eliminate the need for it. Something would still need to tell the government what you’re paid unless you’re advocating for anarchy or communism.

Also, part of that etc is doing payroll so there’s some reason for you to show up at work every day.


> These should be handled by the government, equally for all.

This is certainly possible, but it's called communism.


No. Private insurance could still be an option.


> HR on the surface seems unimportant, but you’d notice if the company stopped having health insurance or sending your taxes to the IRS etc etc.

That's not why companies have HR; sure, it's a nice side-effect, but it's not the reason for HR.

HR exists primarily to protect the company from the employees.


I emailed HR and asked what to do to best ask for leave in case of a future event (serious illness with a family member, I just wanted to be one step ahead and make sure I did everything right even in the state of grief).

HR wouldn't tell me what would be the best and most correct course of action; the only thing they said was that it was my responsibility as an employee to find out. Well, what did they think I was doing?


Side effect seems like an odd way to describe what’s going on when these functions are required for a company to operate.

Companies don’t survive if nobody is paid to show up every day or if they keep paying every single ex employee that ever worked for the company. It’s harder to attract new employees if you don’t offer competitive salaries or benefits. HR is a tiny part of most companies, but without that work being done the company would absolutely fail.

Similarly a specific ratio of flight attendants to passengers are required by the FAA in case of an emergency. Airlines use them for other stuff but they wouldn’t have nearly as many if the job was just passing out food.


> HR on the surface seems unimportant, but you’d notice if the company stopped having health insurance or sending your taxes to the IRS etc etc.

Interesting how the very example you give for "oh this job isn't really bullshit" ultimately ends up being useless for the business itself, and exists only as a result of regulation.

No, health insurance being provided by employers, or tax withholding aren't useful things for anyone, except for the state who now offloads its costs onto private businesses.


"Only exists as a result of regulation"? That criterion would invalidate probably a majority of modern work, and just about every legal profession.


i agree.


> Not a phase. I’d argue that 90% of modern jobs are bullshit, there to keep the cattle occupied and the economy rolling.

Cattle? You actually think that about other people?


It seems more like they're implying it's those at the top think that about other people.


Nope, the entire statement betrays a combination of ignorance and arrogance that is best explained by them seeing most everyone else as beneath them.


Hard miss. GP is right, and your assumptions say more about you than about me. :^)


My observation is about what your assumptions say about you, and that's not a miss.

Nobody really understands a job they haven't done themselves, and "arguing" that 90% of them are "bullshit" has no other possible explanation than a combination of ignorance (you don't understand the jobs well enough to judge whether they are useful) and arrogance (you think you can make that judgement better than the 90% of people doing those jobs).


> Nobody really understands a job they haven't done themselves, and "arguing" that 90% of them are "bullshit" has no other possible explanation than a combination of ignorance (you don't understand the jobs well enough to judge whether they are useful) and arrogance (you think you can make that judgement better than the 90% of people doing those jobs).

That's fine if you disagree, I'm not aiming to be the authority on bullshit jobs.

This doesn't change the fact that you and I are cattle for corpo/neo-feudals.


> Hard miss. GP is right, and your assumptions say more about you than about me. :^)

No. If that's the case, your statement was unclear: since you didn't specify who else thinks those people were cattle, the implication is that you think it. Especially since you prefaced your statement with "I’d argue."

And the interpretation...

> It seems more like they're implying it's those at the top think that about other people.

...beggars belief. What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI-bros advocacy of UBI, which (for the record) has gone nowhere.

I had half a mind to point that out in my original comment, but didn't get around to it.


> No. If that's the case, your statement was unclear: since you didn't specify who else thinks those people were cattle, the implication is that you think it. Especially since you prefaced your statement with "I’d argue."

I never said it was clear? Two commenters got it right, two wrong, so it wasn’t THAT unobvious.

> What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI-bros advocacy of UBI, which (for the record) has gone nowhere.

Tech bros selling “no more software engineers” to cost optimizers, dictatorships in the US, Russia, and China pressing their heels down on our freedoms, Europe cracking down on encryption, the Dutch trying to tax unrealized (!) gains; do I really need to continue?


>> What indication has "the top" given to show they have that kind of foresight and control? The closest is the AI-bros advocacy of UBI, which (for the record) has gone nowhere.

> Tech bros selling “no more software engineers” to cost optimizers, dictatorships in US, Russia, China pressing with their heels on our freedoms, Europe cracking down on encryption, Dutch trying to tax unrealized (!) gains, do I really need to continue?

All those things are non sequiturs, though, some directly contradicting the statement I was responding to, as you claim it should be interpreted. If "90% of modern jobs are bullshit to keep cattle occupied" that implies "the top" deliberately engineered (or at least maintains) an economy where 90% jobs are bullshit (unnecessary). But that's obviously not the case, as the priority of "the top" is to gather more money to themselves in the short to medium term, and they very frequently cut jobs to accomplish that. "Tech bros selling “no more software engineers” to cost optimizers," is a new iteration of that. If "the top" was really trying "to keep cattle occupied" they wouldn't be cutting jobs left and right.

We don't live in a command economy, there's no group of people with an incentive to create "bullshit" jobs "to keep cattle occupied."


I think what he meant was that the top 1% ruling class is keeping those bullshit jobs around to keep the poor people (their cattle) occupied so they won't have time and energy to think and revolt.


Or for everyone in the chain of command to have people to rule over, a common want for many in leadership positions. It cuts at least two ways: you want to control people, and your value to your peers is the amount of people or resources you control.


It is a bullshit argument. The 1% is seeking to fire as many people as possible, and with pleasure.

We don't matter to them, one way or the other. They don't see us as a threat, just as bugs.


If push comes to shove hopefully those bugs will remember how to bite.


And the fact that you can make it 3x faster substantially increases the chances that nobody will read it in the first place.


I suspect that we are going to see managers say, "Hey, this request is BS. I'm just going to get ChatGPT to do it" while employees say, "Hey, this response is BS, I'm just going to get ChatGPT to do it" and then we'll just have ChatGPT talking to itself. Eventually someone will notice and fire them both.

"What would you say you do here?" --Office Space


> This is an underrated take. If you make someone 3x faster at producing a report nobody reads, you've improved nothing

In the private market, are there really so many companies delivering reports no one reads? Why would management keep at it then? The goal is to maximize profits. Now, sure, there are pockets of inefficiency even in the private sector, but surely not that much; whatever the companies are doing, someone is buying it from them, otherwise they fail. That's capitalism. Yes, there are perhaps 20% of employees who don't pull their weight, but it's not the majority.


I don't know what to tell you aside from "just go and work at a large private company and see".

I'm not smart enough to understand the macroeconomics or incentive structures that lead to this happening, but I've seen many 100+ person teams whose output is something you could reasonably expect from a 5-person team.


Sorry, I meant to say the private sector; not sure it changes the argument, though, since you seem to believe inefficiencies are all over the place: in public companies, private ones, etc. I've worked in tech all my life, and in general if you were grossly inefficient you'd get fired. Now, tech may be a high-efficiency / low-bullshit industry, but I'm assuming that in general, if you are truly shit at your job, you'd get fired no matter the industry.


Many of these companies are fairly close to the mechanisms of credit creation. That distortion can make a market work very counterintuitively.


> In the private market are there really so many companies delivering reports no one reads ? Why would management keep at it then ?

In finance, you have to produce truly astounding amounts of regulatory reports that won't be read... until there is a crash, or a lawsuit, or an investigation etc. And then they better have been right!


Got it, that's a fair point - you're saying many companies deal with heaps of regulations, and expediting that isn't really adding to productivity. I agree with you here. But even if 50% of what a company does is shit no one cares about, surely there's the other 50% that actually matters, no? Otherwise how does the company survive financially?


>In the private market are there really so many companies delivering reports no one reads?

Just this month the hospital in my municipality submitted an application to put in a new concrete pad for a new generator beside the old one, which, per the application, they intend to retire/remove and replace with a storage shed on its pad once the new one is operational.

Full-page intro about how the hospital is saving the world, such a great thing for the community, and all manner of vapid buzzword bullshit. Dozens of pages rehashing bullshit about the environmental conditions, water flows downhill, etc., etc. (i.e. basically reiterating stuff from when they built the facility).

God knows how many people and hours it took to compile it (we'll ignore the labor wasted in the public sector circulating and reading it).

All for a project that 50 years ago wouldn't have required 1/100th of the labor expenditure just to be kicked off. All that labor, squandered on nothing that makes anyone any richer. No goods made. No services rendered.


Why should hospitals be for-profit organizations? Sounds like all the wrong incentives.


>Why should hospitals be for-profit organizations? Sounds like all the wrong incentives.

You're conflating private ownership with the organization's nominal financial structure. It has nothing to do with the structural model of the organization and everything to do with resources wasted on TPS reports. This waste has to come from somewhere. Something is necessarily being forgone, whether that's profit, reinvestment in the organization, or a competitive edge that benefits the customer (e.g. lower cost, or higher quality at the same cost). The same is true for a for-profit company, or any other organization.

FWIW, the hospital is technically a nonprofit, as is typical for hospitals. And I assure you, they still have all the wrong incentives despite this.


The cost is it taking America a billion dollars to build what China can build for 50 million. That's ultimately where the waste accumulates.


Best description of America's biggest long term problem I've read. This shit is exactly the reason we can't build anything in America anymore.


The implication is that companies in a private market can't possibly be hugely inefficient for irrational reasons that can ultimately be self-harming.

An interesting take.


They can be irrational and ineffective. Nevertheless, if LLMs were useful, they would still earn more than before.

If they don't, then regardless of their effectiveness, it means LLMs are not useful for them.


I used the term "private market" when I actually meant the private sector - all labor that isn't government owned: public companies, private companies, etc. So yes, in a reasonably functioning capitalist market (which the U.S. still is in my eyes) I expect gross inefficiencies not to be prevalent.


> So yes - in a reasonably functioning capitalist market (which the U.S still is in my eyes) I expect gross inefficiencies to not be prevalent.

I am not sure that is true, though. Assume for a moment that Google wasted 50% of their profits. Truly, a huge inefficiency. However, would that make it likely some other corp could take their search/ad market share from them? I doubt it, given the abyss of a moat.


True. Therefore, what?

One could say: True, therefore search is not a reasonably functioning capitalist market.

Yeah, I know, this can turn into "no true capitalist market". Still, it seems reasonable to say that many markets work in a certain kind of way (with lots of competition), and search is not one of those markets.


The parent was referring to the whole US as "market". In that sense the numerous exceptions and non-functioning markets invalidate the statement, IMHO.


The goal might be to maximize profits, but that only means that managers want to make sure everyone further down the chain is doing whatever they identify to be the best way to accomplish that. How do you do that? Reports.



