Hacker News | ksec's comments

I am surprised it got updated without a redesign, because the AirPods Max was perhaps one of the worst Apple products in recent history: big, bulky, heavy, and not comfortable.

People say consumers will blindly follow Apple, but surprisingly you see far more Sony XM-series headphones around.


Not that I disagree with you on how poor the UK Government is, but why is the MOD one of the "wokest organisations on the planet"?

If you are talking about DEI, then it isn't just the MOD; the whole of HMG is part of the same policy and initiative.


The headline doesn't seem accurate, as it implies he is against Liquid. It may be better to use: Shopify/liquid: Performance: 53% faster parse+render, 61% fewer allocations.

- The autoresearch pattern: is this like brute-force AI trial and error towards performance improvements for the agent?

- 53% faster combined parse+render time, 61% fewer object allocations [1]

Surprisingly he got the object-allocation figure right, but it isn't 53% faster, as that would imply 1.53x the speed. In reality it is a 53% reduction (roughly half) in rendering time, which is closer to 2x faster.
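The arithmetic behind that distinction is easy to get backwards, so here is a minimal sketch (the function name is mine, just for illustration):

```python
def speedup_from_reduction(reduction: float) -> float:
    """Convert a fractional reduction in runtime into a speedup factor.

    A 53% *reduction* in time means the new run takes 47% as long,
    i.e. a ~2.13x speedup, not the 1.53x that "53% faster" suggests.
    """
    return 1.0 / (1.0 - reduction)

print(round(speedup_from_reduction(0.53), 2))  # 2.13
```

A 50% reduction is exactly 2x; "X% faster" and "X% less time" only coincide for small X.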

At the scale of Shopify, I wonder how much saving a 2x faster renderer would bring. I am thinking it could hit a million dollars per year.

>With GC consuming 74% of total CPU time

I guess there is still plenty of room for improvement.

[1] https://github.com/Shopify/liquid/pull/2056


I am wondering why the sudden turn, and whether it has anything to do with Apple, given Apple has always allowed Hong Kong to use its international AI tools and does not rely on Mainland China's technology.

Or has the US government decided to relax the rules a bit? Something similar happened with South Korea, which announced a week ago that it can finally use Google Maps.


>I don't know where this fascination with getting everyone to download your app comes from.

So they could do exactly what they are doing on the web, and maybe even more, but with native code so it feels much faster.

I have got to the point of wondering why all the tracking companies and ad networks can't just share and use the same library.

But on web page bloat: let's not forget apps are insanely large as well, 300-700MB for banking, travel or other shopping apps. Even if you cut 100MB of L10n they are still large, again because of tracking and other things.


You do get a new battery with a keyboard replacement, so it is not that bad, although it may still be better to buy a new MacBook considering the cost.

>Which is weird....

It isn't weird at all. I would be surprised if it had ever succeeded in the first place.

The cost was way too high. Intel didn't share the tech with anyone other than Micron, and Micron wasn't committed to it either; since unused capacity at the fab was paid for by Intel regardless, they didn't care. There was no long-term solution or strategy to bring cost down; neither Intel nor Micron had a vision for it. No one wanted another Intel-only tech lock-in. And despite the high price, it barely made any profit per unit compared to NAND and DRAM, which were at the time making historically high profits. Once the NAND and DRAM cycle went down again, Optane's cost/performance wasn't as attractive. Samsung even made some form of SLC NAND that performed similarly to Optane but cheaper, and even they ended up stopping its development due to lack of interest.


A ways back, I wrote a sort of database that was memory-mapped-file backed (a mistake, but I didn’t know that at the time), and I would have paid top dollar for even a few GB of NVDIMMs that could be put in an ordinary server and could be somewhat straightforwardly mounted as a DAX filesystem. I even tried to do some of the kernel work. But the hardware and firmware was such a mess that it was basically a lost cause. And none of the tech ever seemed to turn into an actual purchasable product. I’m a bit suspicious that Intel never found product-market fit in part because they never had a credible product on the NVDIMM side.

Somewhere I still have some actual battery-backed DIMMs (DRAM plus FPGA interposer plus awkward little supercapacitor bundle) in a drawer. They were not made by Intel, but Intel was clearly using them as a stepping stone toward the broader NVDIMM ecosystem. They worked on exactly one SuperMicro board, kind of, and not at all if you booted using UEFI. Rebooting without doing the magic handshake over SMBUS [0] first took something like 15 minutes, which was not good for those nines of availability.

[0] You can find my SMBUS host driver for exactly this purpose on the LKML archives. It was never merged, in part, because no one could ever get all the teams involved in the Xeon memory controller to reach any sort of agreement as to who owned the bus or how the OS was supposed to communicate without, say, defeating platform thermal management or causing the refresh interval to get out of sync with the DIMM temperature, thus causing corruption.

I’m suspicious that everything involved in Optane development was like this.


If you're thinking about reliability in terms of 9s you should probably not be depending on a single machine. Reboots could take hours and be fine if your architecture is set up for reliability.

Things go down and need to come back up. Take a look at almost any major cloud incident in the last 15 years — some combination of goofs results in a bunch of services going down. Then it takes many minutes or hours to come back up. Sure, one can (and should) try to design for fewer failures that result in waiting for things to come up, but one should also design for faster bringup.

For the NVDIMM in question, the whole point is for durability after a reboot or power loss. A fifteen minute cycle in which it would write its contents out to nonvolatile storage and then read it back, during which the firmware is too inept to notice that all the data is already loaded is a bug that should have been fixed. (In fairness, this was a pre-production model.)

Even ignoring the reboot time, SMBUS was needed to properly identify the device and to access its health-check data.

As for my database, it’s not intended for active-active HA setups — it’s intended for use on a small number of high quality machines. It has had zero meaningful data loss incidents in its entire time in production, and it’s extremely fast. But it has some other properties I don’t like and that would cause me to do it differently if I were to start over.


I worked at Micron in the SSD division when Optane (originally called "3D XPoint") was being made. In my mind, there was never a really serious push to productize it. But it's not clear to me whether that was due to unattractive terms of the joint venture or the lack of a clear product fit.

There was certainly a time when it seemed they were shopping for engineers' opinions on what to do with it, but I think they quickly determined it would be a much smaller market than SSDs anyway, and didn't end up pushing it too hard. I could be wrong though; it's a big company and my corner was manufacturing, not product development.


I worked at Intel for a while and might be able to explain this.

There were/are often projects that come down from management that nobody thinks are worth pursuing. When I say nobody, it might not be just engineers but even, say, 1 or 2 people in management who do a shit rollout. There are a lot of layers at Intel, and if even one layer in the Intel sandwich drags its feet, it can kill an entire project. I saw it happen a few times in my time there. That one specific node that Intel dropped the ball on, for example, came back to 2-3 people in one specific department.

Optane was a minute before I got there, but having been excited about it at the time and having somewhat followed it, that's the vibe I get from Optane. It had a lot of potential, but someone screwed it up and that killed the momentum.


Are you referring to the Intel 10nm struggles in your reference to 2-3 people?

This is actually insane. Do you mean 2-4 people in one department basically killed Intel? Roll to disbelief.

Yes this is pretty common in large enterprise-ey tech companies that are successful. There are usually a small group of vocal members that have a strong conviction and drive to make a vision a reality. This is contrary to popular belief that large companies design by committee.

Of course it works exceptionally well when the instinct turns out to be right. But it can end companies when it isn't.


It's somewhat plausible that a small group of people in one department were responsible for the bad bets that made their 10nm process a failure. But it was very much a group effort for Intel to escalate that problem into a prolonged disaster. Management should have stopped believing the undeliverable promises coming out of their fab side after a year or two, and should have started much sooner to design chips targeting fab processes that actually worked.

A friend was working at Micron on a rackmount network server with a lot of flash memory, I didn't ask at the time what kind of flash it used. The project was cancelled when nearly finished.

Optane was fantastically cheap, if you take into account that it is going to live >>10x longer than an SSD.

For a lot of bulk storage, yes, you don't have frequently changing data. But for databases or caches under heavy load, Optane was not only far faster but, looking at life-cycle costs, way, way cheaper.


Optane was in the market during a time when the mainstream trend in the SSD industry was all about sacrificing endurance to get higher capacity. It's been several years, and I'm not seeing a lot of regrets from folks who moved to TLC and QLC NAND, and those products are more popular than ever.

The niche that could actually make use of Optane's endurance was small and shrinking, and Intel had no roadmap to significantly improve Optane's $/GB which was unquestionably the technology's biggest weakness.


> I'm not seeing a lot of regrets from folks who moved to TLC and QLC NAND, and those products are more popular than ever.

That's interesting. Even TLC has huge limitations, but QLC is basically useless unless you use it as write-once-read-many memory.

I wish I had bought a lot of SSDs when you could still buy MLC ones.


> QLC is basically useless unless you use it as write-once-read-many memory

The market thoroughly disagrees with your stupid exaggeration. QLC is a high-volume mainstream product. It's popular in low-end consumer SSDs, where the main problem is not endurance but sustained performance (especially writing to a mostly-full drive). A Windows PC is hardly a WORM workload.


Seems like it is though? Most consumer usage does not have much churn. For things like the browser cache that do churn the total volume isn't that high.

The comparison here is database and caching workloads in the datacenter that experience high churn at an extremely high sustained volume. Many such workloads exist.


Consumer usage does not have much churn, but the average desktop is probably doing 5-50 drive writes per year. That's far away from a heavy database load, but it's just as far away from WORM.

There's a very big difference between a workload where you have to take care to structure your IO to minimize writes so you don't burn out the drive, and a workload that is simply easy enough that you don't have to care about the write endurance because even the crappy drives will last for years.

Of course. The inferior but cheaper technology is more cost effective in most cases but for certain workloads that won't be the case despite being more affordable per unit upfront.

The workloads flash is more cost effective for (ie most of them) either aren't all that write heavy or alternatively leave the drive sitting idle the vast majority of the time. The typical consumer usecase is primarily reads while it mostly sits idle, with the relevant performance metrics largely determined by occasional bursts of activity.


So instead of replacing every 5 years you replace every 5 years because if you need that level of performance you're replacing servers every 5 years anyway

Write endurance of the drive would be measured in TBW, and TLC flash kept adding enough 3D layers to stay cheap enough, quickly enough, that Optane never really beat their pricing per TBW to make a practical product.

I have to wonder if, at this point, it isn't usable for some kind of specialized AI workflow that would benefit from extremely low-latency reads but isn't written often. Perhaps integrated on a GPU board.


Optane practical TBW endurance is way higher than that of even TLC flash, never mind QLC or PLC which is the current standard for consumer NAND hardware. It even seems to go way beyond what's stated on the spec sheet. However, while Optane excels for write-heavy workloads (not read-heavy, where NAND actually performs very well) these are also power-hungry which is a limitation for modern AI workflow.

You're conflating two things. Yes, Optane would survive more writes. But it wouldn't survive more TBW/$, because much larger flash drives were available cheaper. Double the size of the drive using identical technology, and you double TBW ratings.
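The TBW-per-dollar framing can be sketched quickly. All figures below are hypothetical illustrations (not spec-sheet numbers), chosen only to show the mechanism:

```python
def tbw_per_dollar(price_per_gb: float, lifetime_writes: float) -> float:
    """Total data writable per dollar: $1 buys (1 / price_per_gb) GB of
    capacity, and each GB can be rewritten lifetime_writes times."""
    return (1.0 / price_per_gb) * lifetime_writes

# Assume Optane costs ~10x the $/GB of TLC flash. If its per-cell write
# endurance were also exactly 10x, the two would tie on TBW/$: cheap
# capacity is itself an endurance multiplier.
tlc = tbw_per_dollar(0.15, 3_000)      # assumed ~3k lifetime rewrites
optane = tbw_per_dollar(1.50, 30_000)  # assumed 10x endurance
print(tlc, optane)  # 20000.0 20000.0 -- a tie under these assumptions
```

Under these assumed numbers the two tie, which is the point being argued here: Optane only wins TBW/$ if its endurance advantage exceeds its price premium.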

This was very clearly not true at the time for the actual implied TBW figures of even a tiny Optane drive, and is not even true today when you account for the much lower DWPD of TLC/QLC media. Do the math, $1/GB vs $0.1/GB where the actual difference in DWPD per spec sheets is more like two or three orders of magnitude, with the real-world practical one being quite possibly larger. (Have people even seen Optane actually fail in the wild due to media wear out? This happens all the time with NAND.)

> it wouldn't survive more TBW/$

Yes it would, by an almost arbitrarily large margin. You can test this out for yourself. Overwrite one of each in an endless loop. Whenever the flash based drive fails, replace it and continue. See how long it takes for the optane to fail.

You should be able to kill a typical consumer flash drive in well under a week. Even high end enterprise gear will be dead within a couple of months.


The extra capacity of modern SSDs is a good point, especially now that we have 100TB+ SSDs.

But Optane still offered 100 DWPD (drive writes per day), at up to 3.2TB. That is still just so many more DWPD than flash SSDs. A Kioxia CM8V, for example, will do 12TB at 3 DWPD. The net TBW is still 10x apart.
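The back-of-the-envelope daily write budgets from those quoted figures come out like this (a sketch using only the numbers in the comment above):

```python
def daily_write_budget_tb(capacity_tb: float, dwpd: float) -> float:
    """Daily write budget in TB: capacity times drive writes per day."""
    return capacity_tb * dwpd

optane = daily_write_budget_tb(3.2, 100)  # Optane at 3.2TB, 100 DWPD
kioxia = daily_write_budget_tb(12.0, 3)   # Kioxia CM8V at 12TB, 3 DWPD
print(optane, kioxia, optane / kioxia)  # 320.0 36.0, roughly 8.9x apart
```

So the Optane drive sustains roughly nine times the daily writes despite being a quarter of the capacity, which is where the "still 10x apart" figure comes from.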

You can get back to high endurance with SLC drives like the Solidigm p7-p5810, but you're back down to 1.6TB and 50 DWPD, so, 1/4 the Intel P5800X endurance, and worse latencies. I highly suspect the drive model here is a homage, and in spite of being much newer and very expensive, the original is still so much better in so many ways. https://www.solidigm.com/content/solidigm/us/en/products/dat...

You also end up paying for what I assume is a circa six figure drive, if you are substituting DWPD with more capacity than you need. There's something elegant about being able to keep using your cells, versus overbuying on cells with the intent to be able to rip through them relatively quickly.


We don't care about (TBW/TB) at the consumer level, we care about (TBW/$), and 3D TLC was far, far cheaper per TB, so much so that TBW/$ was not a numerical advantage of Optane.

That left ONLY the near-RAM-read-latency, which is only highly beneficial on specific workloads. Then they didn't invest in expanding killer app software that could utilize that latency, and didn't drop prices sufficiently to make it highly competitive with big RAMdisks.


In 2018, with Optane drives launching around $1.50/GB and TLC flash drives around $0.15/GB, it wasn't that much cheaper. As far as I'm aware Optane had a lot more than 10x the endurance.

10x endurance and 10x latency reduction, for 10x the price. It was closer to 5x the price with the closeout firesale after it was killed.

This is going to go against a lot of the comments and opinions posted on HN here.

The best-selling MacBook in history, as a percentage of total MacBooks sold, was the 11" $899 MacBook Air. That was when Apple learned people are willing to give up performance and features just to get a Mac, or just to use OS X.

And despite the declining state of macOS, as Gruber said, it is still a zillion times better than Windows.

Macs have always been more expensive than PCs, but they are also better built. No laptop had a decent trackpad until Microsoft poured R&D into their Surface Book. PC speakers were appalling until YouTubers started stating the obvious: MacBook speakers were better. But none of this matters; at the end of the day most consumers don't understand specs. They see that it is the cheapest MacBook, it looks good and it works, just like the 11" MacBook Air. If they can afford a $500 laptop, they will spend an extra $100 on Apple, even if the spec sheet is arguably worse.

And if we are really talking about specs and comparing: if you want any after-sales service, you would at least have to look at Dell, HP or ASUS, not some random Chinese brand.

These 1920x1080 15" screens are not decent screens. Even ignoring P3 colour, you would struggle to find a PC laptop screen with 200+ PPI, let alone the 220 PPI Apple ships.

If you want to use Amazon as a comparison, they have been selling the M4 MacBook Air at a $200 discount, sometimes $250, most of the time. I have no idea why, but I would not be surprised to see the $699 model selling at $599, the same as the EDU price. At that point the MacBook Neo is extremely competitively priced: you get a better screen and a faster CPU for less storage and less RAM.

And let's fast-forward a year: a Neo with the A19 Pro as used in the iPhone 17 Air and Pro, 12GB of RAM, double the SSD speed, and Wi-Fi 7. Assuming that is true, I don't even see anything on the PC roadmap that is competitive, especially when they are all facing DRAM pricing pressure. (Although I also think Apple will bump the A19 Pro version by an additional $100.)

Forgetting all that for a second: not a single review has looked into the actual Neo hardware. We will have to wait for iFixit for a detailed teardown, but it should be the easiest Mac to fix, and designed to be simple to manufacture, as they said in the interview. The chassis is likely heavier due to this process but could see further refinement. The mechanical trackpad is a work of genius; I am not sure if this is an Apple-only innovation or something already on the market. That trackpad alone is 150g, nearly one tenth of the weight of the whole Neo.

The Neo is, as far as I am aware, perhaps the first Apple product designed and engineered to be as practical and cost-effective as possible. True to their word, this isn't some cost-reduction exercise using an old design and components. That makes the Neo the most boring Apple product on paper, but sometimes boring is good. And I agree with MKBHD: this is perhaps the most disruptive Apple product since the original iPhone.

There are roughly 1.5-2 billion Windows PCs in use today, and Apple has at best 150-200M Mac users, so there is plenty of room to grow. I would be happy if they could double that in five years' time.

I am really liking everything this new Apple has been doing so far. Molly Anderson on industrial design. John Ternus on hardware engineering. Not sure if Steve Lemay is great, but my gut feeling is he will restore a lot of Apple's HID.

The only thing missing is software (and maybe a Services lead). I know Craig Federighi is popular on HN and the internet, but I haven't liked a single software engineering direction since he took charge. Stop the feature-adding and resume-driven development, and start fixing bugs.

Maybe lastly: Tim Cook has never been any good at picking people, but all these new selections seem great. That can't be a coincidence. I am wondering if there are additional changes in the background at Apple that we don't see.

I had been giving Tim Cook's Apple plenty of benefit of the doubt, but had been steadily losing faith for 10 years. This is the first time since Steve Jobs passed away that I am excited to see a change in direction. The name Neo is just great. Truly something new.



Pretty much everything I expected from the Quick Look of it on Dave2D. The only surprise was the keyboard.

I don't think there has ever been a MacBook designed with this level of practicality. I think we would need to go back to the PowerBook era, and I don't think we had it even then.

It would have been perfect if they came with the A19 Pro, 12GB of DRAM and double the SSD speed next year.


>(then the M1 wins handily).

How did you come to this conclusion, when both are passively cooled and the A19 Pro is faster? Not to mention AV1 and other newer codec hardware acceleration, and the NPU/GPU improvements.

Also remember the M1 MBA is maybe Walmart- and US-only. Around the world, most people don't even get a chance to buy the M1 at $599. The display doesn't have P3 but is actually brighter than the M1's 400 nits. Not sure how the keyboard is worse. The Neo also has a 1080p webcam rather than a 720p one.

And if Walmart is selling the M1 at $599, I am sure they will also sell the Neo below RRP, maybe even at the $499 educational price. At that point, surely the Neo would win?

What a lot of people don't talk about (and maybe we should wait for iFixit to confirm): the Neo is basically the iPhone 17 of MacBooks. It is perhaps the easiest-to-repair and cheapest MacBook for Apple to service.


"...handily...". Apple set the power limit for the A19 at 4 watts. The M1 does not have this limit. So in all tests using processes that tax the CPU, the M1 wins. Apple ignored the greater thermal cooling available with the new case. The A19 would beat the M1 if Apple did not do this. But no one really cares cause... it is Apple. The other points re: Walmart are valid. However, it goes to the point that Apple could sell the M1 at this price point, but chose not to. Seems likely the Walmart M1 at $599 had lower margins (My Guess), so the Neo was born.

This is most likely for battery life. The Neo has a smaller and cheaper battery, so if you didn't limit the power somewhat you'd burn through it too fast.

At this point I am thinking that, apart from the Windows release, libghostty is 90% done?

I am hoping Mitchell might move on to do something else.

