The patches were written in 2011 and published in 2012. They did what they were supposed to at the time.
For the peanut gallery: this is a manifestation of an internal eng culture at fb that I wasn't particularly fond of. Celebrating that "I killed X" and partying about it.
You didn't reply to the main point: did you benchmark a server that was running several days at a time? Reasonable people can disagree about whether this is a good deployment strategy or not. I tend to believe that there are many places which want to deploy servers and run them for days, if not months.
For the peanut gallery more: I worked with both of these guys at Meta on this.
The "servers are only on for a few hours" thing was like never true so I have no idea where that claim is coming from. The web performance test took more than a few hours to run alone and we had way more aggressive soaks for other workloads.
My recollection was that "write zeroes" just became a cheaper operation between '12 and '14.
A fun fact to distract from the awkwardness: a lot of the kernel work done in the early days was exceedingly scrappy. The port-mapping stuff for memcached UDP before SO_REUSEPORT, for example. FB binaries often couldn't even run on vanilla Linux. Over the next several years we put a TON of effort into getting as close to mainline as possible, and now Meta is one of the biggest drivers of Linux development.
It's not just that zeroing got cheaper, but also we're doing a lot less of it, because jemalloc got much better.
If the allocator returns a page to the kernel and then immediately asks for it back, it's not doing its job well: the main purpose of the allocator is to cache allocations from the kernel. Those patches are pre-decay and pre-background-purging-thread; those later changes significantly improved how jemalloc holds on to memory that might be needed again soon. The zeroing-out patches, by contrast, optimize the pathological behavior instead of avoiding it.
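To make "decay" and the background purging thread concrete: roughly speaking, freed pages now sit in a dirty state for a configurable window before being purged, and the purging happens off the allocation path. A minimal sketch, assuming a jemalloc build with the current option names (the values are illustrative, not what FB runs):

    #include <stdlib.h>

    /* jemalloc reads this global at startup (setting MALLOC_CONF in the
     * environment works too, without rebuilding anything). */
    const char *malloc_conf =
        "background_thread:true,"   /* purge from a dedicated thread      */
        "dirty_decay_ms:10000";     /* let freed pages sit ~10s before
                                       being returned to the kernel       */

    int main(void) {
        void *p = malloc(1 << 20);
        free(p);    /* the backing pages decay instead of being handed
                       back (and later re-zeroed) immediately             */
        return 0;
    }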
Also, the kernel has since exposed better ways to optimize memory reclamation, like MADV_FREE, which is a "lazy reclaim": the page stays mapped to the process until the kernel actually needs it, so if we use it again before that happens, the whole unmapping/mapping cycle is avoided, which saves not only the zeroing cost but also the TLB shootdown and other costs. And without changing any security boundary. jemalloc can take advantage of this by enabling "muzzy decay".
However, the drawback is that system-level memory accounting becomes even more fuzzy.
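For anyone who hasn't run into it, here's a minimal sketch of the eager-vs-lazy difference (assumes Linux 4.5+ for MADV_FREE; the mapping and sizes are made up for illustration):

    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 64 * 4096;
        /* Stand-in for pages the allocator doesn't need right now. */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;
        memset(buf, 0xab, len);

        /* Eager reclaim: pages vanish now; the next touch faults in
         * fresh zeroed pages (zeroing + TLB shootdown paid up front).
         *     madvise(buf, len, MADV_DONTNEED);
         * Lazy reclaim: pages stay mapped until the kernel is under
         * memory pressure; reusing them before then skips the whole
         * unmap/zero/remap cycle.  This is what jemalloc's "muzzy"
         * pages use under the hood. */
        madvise(buf, len, MADV_FREE);

        buf[0] = 1;   /* reused before reclaim: no fault, no re-zeroing */
        munmap(buf, len);
        return 0;
    }

Enabling it in jemalloc is just muzzy_decay_ms in the same malloc_conf string; the accounting fuzziness mentioned above is that RSS keeps counting the MADV_FREE'd pages until the kernel actually takes them back.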
I am trying to understand why "zeroing got cheaper" circa 2012-2014. Do you have some plausible explanations you can share?
Haswell (2013) doubled store throughput to 32 bytes/cycle per core, and Sandy Bridge (2011) had doubled load throughput to the same. But the dataset being operated on at FB was most likely much larger than what L1+L2+L3 can hold, so I'm wondering how much effect the wider vector units could really have had: a bulk-zeroing operation over a large dataset is going to be bottlenecked by single-core memory bandwidth anyway, which at the time was ~20 GB/s.
Perhaps the operation became cheaper simply because of a move to another CPU uarch with a higher clock and more memory bandwidth, rather than because of vectorization.
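FWIW the back-of-envelope is easy to sanity-check on whatever box is handy. A rough sketch (buffer size, iteration count, and the barrier are arbitrary choices; a real measurement would also pin the thread and control for turbo and THP):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        size_t len = 1UL << 30;                 /* 1 GiB: well past LLC */
        int iters = 10;
        char *buf = malloc(len);
        if (!buf) return 1;
        memset(buf, 1, len);                    /* fault pages in first */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iters; i++) {
            memset(buf, 0, len);                /* bulk zeroing */
            __asm__ volatile("" ::: "memory");  /* keep the compiler from
                                                   merging/dropping stores */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("zeroing: %.1f GiB/s\n", (double)iters * len / secs / (1 << 30));
        /* If this lands near the platform's single-core memory bandwidth,
           wider stores per cycle won't move the needle much; if it lands
           well below, the core was the actual limit and a uarch bump
           could plausibly help. */
        free(buf);
        return 0;
    }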
I think it's fair to say the hardware changed, the deployment strategy changed and the patches were no longer relevant, so we stopped applying them.
When I showed up, there were 100+ patches on top of a 2009 kernel tree. I reduced that to about 10 or so critical patches and rebased them at a six-month cadence over 2-3 years. Upstreamed a few.
Didn't go around saying those old patches were bad ideas and that I'd gotten rid of them. How you say it matters.
The linked article says they decided to do CD in 2016 fwiw so that's not inconsistent with what I said.
You reduced the number of patches a lot and also pushed very hard to get us to 3.0 after we sat on 2.6.38 ~forever. Which was very appreciated, btw. We built the whole plan going forward based on this work.
I'm not arguing that anyone should be nice to anyone or not (it's a waste of breath when it comes to Linux). I'm just saying that the benchmarking was thorough and that contemporary 2014 hardware could zero pages fast.
Tangentially, on this CD policy - it leads to really high p99s for a long tail of rare requests which don’t get reliable prewarming due to these frequent HHVM restarts…
For me it happened around my first week after the bootcamp, so about 6 weeks from joining.
An important nuance - most Facebook engineers don't believe that Facebook/Meta will continue to grow next year, and that disbelief has been around since as early as 2018 (when I joined).
Very few Facebook employees use the company's products outside of testing, which is a big contributor to that fear - they just can't believe that there are billions of people who will keep using the apps to post what they had for lunch!
And as a result of that lack of faith, most of them believe that Meta is a bubble that could burst at any point. Consequently, everyone works for the next performance review cycle, and most are just in a rush to capture as much money as they can before the bubble bursts.
> don't believe that Facebook/Meta would continue to grow next year
Huh.
When I worked at a hyper-growth company, those of us working in the coal mine had much the same skepticism. Our growth rate seemed ridiculous - surely we were overbuilding, how much longer could this last?!
Happily, the marketing research team regularly presented stuff to our department. They explained who our customers were, projected market sizes (regionally, internationally), projected growth rates, competitive analysis (incumbents and upstarts), etc.
It helped so much. And although their forecasts seemed unbelievable, we outperformed them year over year - to the point where you sort of start to trust the (serious) marketing research types.
I'm personally appreciative of these comments. It's good when people make claims, get challenged, and both sides walk away having made informative points. It's entirely possible that both sides here are correct and wrong in their own ways.
Fwiw, this sounds like a healthy discourse - you don’t have to agree on everything, every approach has its merits, code that ends up shipping and supporting production wins the argument in some sense…
This is not special to Meta in any way; I've observed it on every team with more than one strong senior engineer.
Except one is an employee and the other is an ex-employee. The bias this introduces is not just a minor nuance; it's what fuels the public conflict and causes everybody else to double-check their popcorn reserves.
Of course technical discussions happen all the time at companies between competent people. But you don't do that in public, nor is this a technical debate: "I don't recall talking to you about it" - "I do, I did xyz then you ignored me" - "<changes subject>"
Important distinction yes. It also means I can't go back and check the thread on what was said and when. Nor do I want to.
Always good to talk face to face if you have strong feelings about something. When I said "talk" I meant literally face to face.
After a decade or so on lkml, everyone develops a thick skin. But mix that with the corporate environment of Facebook circa 2011, plus being an ex-employee, and it adds even more to the drama.
Having read through the comments here, I'm still of the opinion that any HW changes had a secondary effect and the primary contributor was a change in how HHVM/jemalloc interacted with MADV.
One more suggestion: evaluate more than one app, and use company-wide profiling data, when making decisions like this.
One of the challenges in doing so is the large contingent of people who don't understand CPU uarch/counters and yet hold a negative opinion of their usefulness for making decisions like this.
So the only tool you're left with is running large-scale, rack-level tests in a close-to-prod environment, which has its own set of problems and benefits.
Perf counters are only indicative of certain performance characteristics at the uarch level; improving one or more of them does not necessarily correlate with measurable performance gains in the end-to-end workloads deployed on a system.
That said, one of the comments above suggests that the HW change was a switch to Ivy Bridge, at which point zeroing memory became cheaper, which is a bit unexpected (to me). So you might be more right in saying that the improvement came from memory allocation patterns and jemalloc.
Yea, I knew Meta was toxic, but publicly beefing over something from over a decade ago is a whole other matter. I can't even remember what I was working on 10 years ago, and even if I could, I wouldn't be bringing people down that much later.
The problem is a lot of very strong engineers are also very difficult to work with. I worked at Meta too and can tell you the other side of the coin is that people who were too toxic could get canned as well!
Yes, I have worked with the strong but arrogant/snarky engineers. Luckily most of them got canned or forced out because the environment they create around themselves more than negates the positive impact they have. The strongest engineers I have worked with are all humble and kind.
It is their loss, I cannot imagine letting a minor work quarrel live rent free in my head for over a decade. I feel bad enough when something is stuck in my mind for a week.
Yeah, I am loving the public mudslinging over shit from 10 years ago, like high school girls fighting. This is like the FAANG version of the TV show Suits. We can call it FAANGs, use Midjourney to create the cover art, and give the actors vampire fangs.
On a more serious note, it seems like any hyper-competitive company eventually spirals into an awful, toxic working environment.
Nope, I started in 2014.
> I don't recall ever talking to you on the matter.
I recall. You refused to believe the benchmark results and made me repeat the test, then stopped replying after I did :)