
> It's like how every country knows embassies are full of spies but they let them operate as diplomats anyway

Or in Iran’s case, they don’t.


Well their country is currently being bombed, curious what additional ramifications you’d like to see?

I think he's pointing out that we're not bombing China or Russia or North Korea, or any other states, over similar attacks.

Because they have nukes unlike Iran.

And one wonders why Iran wants a nuke. It's not to wipe out Israel and the US as some hawks in Congress falsely claim. It's the same reason North Korea developed nukes. Terrible regimes, but they understand countries with nukes don't get bombed or invaded. That's Ukraine's tragedy.

yeah, if there's one clear takeaway from the US-involved conflicts of the past several decades, it's that nukes are the key to making the U.S. keep its hands to itself

Well they're not... um... what was it that Iran was doing to make us bomb them again?

plainly: they're being punished for not having nuclear weapons already

Will it be? Or is the solution to move to smaller, trusted networks where there's less need for proof. Unfortunately I think the age of large scale open discussion forums like HN is coming to an end.

I think this is the most likely and best path. There's no stopping the flood of bots, the dead internet theory is beyond just a theory at this point.

Best we can do, for the internet and ourselves, is to move away from it and into smaller networks that can be more effectively moderated, and where there is still a level of "human verification" before someone gets invited to participate.

I don't like what that will do to being able to find information publicly, though. The big advantage of internet forums (that have all but disappeared into private discords) is search ability/discoverability. Ran into a problem, or have a question about some super niche project or hobby? Good chance someone else on the net also has it and made a post about it somewhere, and the post & answers are public.

Moving more and more into private communities removes that, and that is a great loss IMO.


> Moving more and more into private communities removes that, and that is a great loss IMO

It is a great loss. Unfortunately this is a result of unchecked greed and an attitude of technological progress at any cost. Frankly we enabled this abuse by naively trying to maintain a free and open internet for people. Maybe we should have been much more aggressively closed off from the start, and not used the internet to share so freely.


The utility of those larger sites is coming to an end, but most people aren't discerning or ambitious enough to leave and seek out the smaller places you mentioned. Places like this will remain but will join Facebook, Reddit, and Twitter as shadows of their prior useful selves. The smaller, better sites won't have to worry about attracting the masses and therefore worsening, because the masses have finally settled.

Here’s an example: https://imgur.com/a/konsole-vs-ghostty-tR4Otmy

Konsole on the left, Ghostty (which is GTK) on the right. The latter has at least 3 additional lines visible outputting the same command. The giant copy/paste buttons and the tab bar, which wastes a ton of space, are typical of KDE apps. The klutter isn't just visually annoying; it makes the apps less useful.


Honestly this is the only complaint I agree with. The KDE Plasma desktop and its configurability look and feel great... but all their in-house windowed applications like Konsole and Kate are mediocre at best. All that duplicated effort seems wasteful.

It's honestly just Konsole. Kate is very good, you can legitimately use it as a vscode replacement if you want. Dolphin is also the best file manager and it's not even close.

And then kmail... kmail is bad.


I don't want Kate to be vscode, I want it to be notepad.exe. I want it to open a document instantly and let me edit and save and close with no distractions or delays.

I have vscode for vscode.


For just notepad you have kwrite.

I think Kate does that, it just also has a bunch of other functionality.

I wonder how much Qt has to do with this. AFAIK the only _decent_ bindings are still C++ and Python. For KDE it might just be C++?

There's plenty of valid criticism of GTK but choosing C over C++ isn't one of them. It seems like there is a new Rust GTK app every week, and other languages as well, thanks to the availability of bindings.

I'm curious how long relying on C++ contributions is going to last.


There’s a reason GNOME is the default for most of the major distributions.


I've had twice the issues with GNOME as I have with KDE or Cinnamon

I think it all really depends on the wants and workflows of the user


Obviously. Now think of the wants and workflows of your average user and it starts to make sense.

This is great. I'm still rocking a nearly 10-year-old T470s. Great machine with Linux on it, still snappy enough - Tailscale is there when I need to do serious work (on my desktop at home!)

I replaced the batteries a few months ago and it was painless.


I have a T470. I have changed the screen (after I dropped water on it and shorted it), changed the batteries after 5 years, increased the RAM, and added an M2 drive. All of these were painless operations. Couldn't be happier with my purchase.


Same. And it's still fast enough for almost all run-of-the-mill ("08/15") tasks if you replace Windows with Fedora


I use my 2019 X1C 7th Gen daily and it's been the best laptop I've owned by a mile. Never skipped a beat.

I immediately switched it to Fedora and everything worked out of the box except the fingerprint reader which started working a few weeks later after a firmware update (also handled effortlessly/perfectly within Gnome - and it still gets updates!)


Same here. The only problem is that I "only" have 24 GB of RAM. I wish I could upgrade but it's a hard limit. And keyboard quality seems to have been degrading over the years since 2020. Is this new model good in terms of keyboard?


You can have more than 32 GB of RAM in the T470s (btw, I'm using a T480s with 40 GB of RAM)

https://www.reddit.com/r/thinkpad/comments/bibx3p/t470s_supp...


If all you care about is the facts, and not the other person's relationship to them, why engage with a person at all? You could query an LLM for whatever subject, argument, or counterpoint you wish.

Besides, your hypothetical summaries chock full of facts don't exist, at least not yet. Most LLM summaries are chock full of filler (hence the name "slop"), which is why we "ignorant" people hate reading them.


Do you enjoy reading slop? I fail to see how this is a controversial take.


You act as if the internet were a high-society book club, with all the previous articles written by Ivy League grads.

I recall geocities, angelfire, all the chans.

The internet has always been a cesspool, with little islands of quality floating in a proverbial sea of sewage of human output. In theory AI slop will improve.

A racist, sexist, ignorant online community of humans 20 years ago, if it is still active, is almost certainly still a racist, sexist, and ignorant community today.


Being able to name especially egregious forums is the point. AI slop isn't worse than preceding slop, but it is more widespread, partly because it's more socially acceptable than racism, sexism, and ignorance, and partly because it's harder to identify.

Similarly, email spam that is easy to automatically categorize is not a problem.

Making slop less sloppy makes the problem worse, not better. You could claim that that's only up to a threshold, but there's a pretty strong information theoretic argument against that.


I am making no claims about slop - I think that saying, "AI slop is going to ruin the internet" is something that itself requires further clarification.

I'm assuming you are advocating for AI to "go away" or be banned or something of that nature - and that is most definitely not a valid argument.

AI doesn't do anything on its own. People set the AI on a task. People have every right to ruin the internet however they see fit (within legal realms), and I don't even think you are actually upset about the more "unleashed" AI that posts comments and participates in chats with a specific agenda - you are annoyed with the websites that are mostly AI content...

The AI didn't make the website, select the topic prompt, and paste the output onto the page -> a person did that. You're actually upset at people for not posting content up to your standards - which people have been saying the entire time the internet has been a public thing.

I honestly do not understand what part of this whole process, and AI content in general, appears so empowering for this.

Your argument is essentially akin to "people don't kill people, guns do," and all arguments framed this way operate under the assumption that the speaker is some arbiter of quality - as if simply saying "AI slop" makes it so.

All of this is nonsense.


AI slop is ruining the current internet, including forums, email, blogs, announcements, and much of the remaining content. I say "current internet" because we will adapt as we always have, but many things that were formerly useful or interesting will be buried in so much crap that it will stop being something that people use the internet for.

At the dawn of email, I could and did cold email professors, and they would respond based on whether my query was worth responding to. I put effort into my messages (and had a reason, I wasn't just trying to elicit responses), and my success rate was very high. It wasn't scale that killed that, it was spam and greed. (There's overlap, but by spam I mean unsolicited commercial email, and by greed I mean people blasting out large number of low-effort messages in an attempt to gain something.) Professors are still interested in meaningful correspondence, but email is no longer a usable communication medium unless they already know their correspondent.

AI applies the same dynamic to many more forms of content. Individually, it doesn't do much harm. In aggregate, the meaning and value are rapidly being destroyed.

It's kind of ironic -- in the early days of online communication, there was endless hand-wringing over all the cues and subtext that we've lost from face-to-face communication. Now we take that loss as a given, and have collectively decided to attenuate the signal even more.

I wouldn't advocate for AI to just go away in all domains. It's a cool and useful technology. But I personally would prefer if representing AI output as your own writing were looked upon roughly the same way as having a secretary write all of your correspondence. Well, a little worse -- it's like have an arbitrarily chosen secretary from a worldwide pool write each item of correspondence. If I ruled the internet, that's where I would set social norms and expectations. People could still use it for translation, but it would be a major faux pas to not divulge your use of AI if there is reason to believe you wrote it yourself. Sure, there would have to be many judgement calls -- if you get an AI's advice on how to say something and then reprocess it into your own words, for me that'd depend on how real that reprocessing is. But that's nothing new, it's just another form of the plagiarism slippery slope.

Sadly, I do not rule the internet, and it's a lost cause.

Whether it's the person using AI or AI itself that is responsible? That's a non-sequitur. I don't care. Describe it how you like. I'm describing the effect, not assigning blame.


I have extensively used AI - it's not as capable as you think it is. I frequently run into hard limits of its ability. I understand what "recursive" means in the sense of an AI: I can see it folding pieces of what I've said or what has been discussed back into itself to create the appearance of depth, growth, or progress - none of that is real. The AI does not change.

I use AI for feedback - but only after setting almost 50 variables/conditions for that feedback, because AI is 100% an automatic sycophant by default - but it doesn't have to be.

I occasionally use AI to translate what I am saying to a person into words that don't offend them, as I have absolutely no patience for people's insecurities when I find myself in a position where I need to teach them something, which happens often.

Let me be very clear - you are not capable of identifying AI content any longer, nobody is.

I extensively tested this by having a broad conversation with some of the smartest people on a platform (on earth in general, really), all of whom have very real credentials. I engaged with both sides of the AI coin regarding whether AI is self-aware or not, which is actually being debated by some of the smartest people.

Half of my comments I ran through AI, or generated completely from a prompt. My most-liked comment was not mine - liked by people whose professional occupation is literally AI.

I'm sure this disturbs you - that an AI can create a Wikipedia page with more accuracy, better quality of writing, and in a more engaging way than 99% of people - but that is our actual reality.

Now, all those little chatbots running around the internet, the low-level AI - they are creating slop in exactly the same places and ways that humans do; their very words are modeled after the words people have literally written.

So, an AI can create a 100% perfectly written article for a major publication - and then AI can also fill the comments on that "perfect" article with absolute garbage - very similar to how things have always functioned online.

You need to interact with AI more, so you actually understand it and are not afraid of it, or imagining it with more ability than it has, or giving it human agency - AI is literally not capable of having agency at all.

Right now, there are tens of millions of millennials who are functionally identical to Boomers with smartphones.

You can't prevent AI from changing every aspect of human life - nobody can. You can be the boomer who refuses to adopt a smartphone - but they all have smartphones now.


So your straw man is that the internet already had bad stuff on it? C'mon, you can do better. Adding more bad to bad is still bad.


Where did I strawman?

And this might be the first time in my life that I've been strawmanned with an accusation of pulling a strawman - that's pretty fantastic, actually; I'll give you that.

Otherwise, I wrote a book in my other comment on this - you should check that out.


Honestly, I'm not good enough at distinguishing between AI and human-written content to know. I don't want to read bad writing, for sure, but that rules out a lot more than AI. I also believe that the human (hopefully) reading and accepting the output of AI before putting it on the internet is more responsible for "AI slop" than the AI is, because the human-side author should be checking what they publish, so I don't really need to know. If I read someone's post and don't like it, I won't go back to their blog again. If I read it and I do like it, I will go back. Whether or not they're using AI is essentially irrelevant to me.

Fortunately for us all HN does that curation for us. High quality blog posts from well-written and interesting blogs like simonw's posts get posted here a lot. I can't tell if he uses AI to help write them but given his deep work on AI topics I'd be surprised if he doesn't.

Plus, I strongly suspect that AI content is improving at a pace that means most people won't be able to tell in a few years, especially once tools to easily fine tune a model on a corpus of your own text are simple to use.


Listen to the podcast Shell Game.


No. It doesn't matter how good an LLM is. If a person has something to say and can give the LLM enough context to say it well, they should just write it themselves. There's zero reason to bring an LLM into it. Doing so simply makes your writing less trustworthy, because as a reader I don't know if what I'm reading is genuinely from the writer or simply average-of-all-texts filler.

