Hacker News

Also, given that a Lambda instance can only handle one request at a time, it generally gets very poor CPU utilization.

Google Cloud Run can handle multiple requests at a time, but it still suspends the instance while no requests are being processed, and it is billed to the nearest 100ms.



Lambda is billed to the nearest 1ms, and you can always lower your RAM and CPU requirements per function, though at some point you hit the minimum.

To flip that around (because I'm generally pro-Lambda): the one-call-per-instance model also encourages global state (since you don't need to worry about two calls running in parallel using the same memory), which is pretty bad coding practice.
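For example, a handler can stash an expensive result in a package-level variable and reuse it on warm invocations. A minimal Go sketch with a hypothetical in-memory cache; this pattern is only safe because a Lambda instance serves one request at a time:

```go
package main

import "fmt"

// Hypothetical cache held in package-level (global) state. Safe under
// Lambda's one-request-per-instance model; under Cloud Run's concurrent
// requests the same code would race without a mutex.
var cache = map[string]string{}

func handler(key string) string {
	if v, ok := cache[key]; ok {
		return v + " (cached)" // warm instance reuses the global map
	}
	v := "value-for-" + key // stand-in for an expensive lookup
	cache[key] = v
	return v
}

func main() {
	fmt.Println(handler("a")) // cold: computes and stores
	fmt.Println(handler("a")) // warm: served from global state
}
```

The convenience is real, but the code quietly depends on the platform's concurrency model rather than making its thread-safety explicit.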


A few years back at $DAY_JOB I was trying to optimize the cost of a serverless stack. To my surprise, small-RAM Lambda instances had up to 100ms of extra latency on DynamoDB queries!


Depends on whether your workload is CPU- or I/O-bound. It's true that CPU and RAM are proportional in Lambda: raising or lowering the RAM also raises or lowers your CPU.

I'm not sure why, but I've found that with pre-compiled languages like Go it's not as big an issue (as long as your app is I/O-bound). With Node.js I've found that increasing the size of the Lambda helps even with I/O-bound functions. I assumed it was because JIT-compiling the JS takes CPU, but it also seems slower on subsequent runs, and Lambda sleeps apps rather than stopping them (until you hit the end of the 5-minute reuse window). I give my Node.js functions a minimum of 1 GB of RAM, whereas I've been able to get some of my Go functions down to 128 MB with no performance hit.

Which, yes, means the Node.js functions are about twice as expensive per millisecond before you even consider that the Go function runs for less time. But in both cases I've still found it cheaper than servers due to efficiency, even though strictly speaking it costs more than a server per GB/GHz.
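Lambda bills in GB-seconds, so memory and duration trade off linearly. A rough Go sketch of the arithmetic; the per-GB-second price is an assumed published x86 rate, so check current pricing before relying on it:

```go
package main

import "fmt"

// Assumed x86 Lambda price per GB-second; treat as illustrative only.
const pricePerGBSecond = 0.0000166667

// invocationCost returns the compute charge for one invocation,
// given memory in MB and billed duration in milliseconds.
func invocationCost(memMB, durationMS float64) float64 {
	gbSeconds := (memMB / 1024.0) * (durationMS / 1000.0)
	return gbSeconds * pricePerGBSecond
}

func main() {
	// Hypothetical durations: a 1 GB Node.js function vs a 128 MB Go function.
	fmt.Printf("Node.js (1024 MB, 20 ms): $%.10f\n", invocationCost(1024, 20))
	fmt.Printf("Go      (128 MB, 20 ms):  $%.10f\n", invocationCost(128, 20))
}
```

Because the charge is linear in memory, halving RAM halves the per-millisecond price, but only if the function's duration doesn't grow to compensate.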


Lambda function code tends to be fairly single-threaded. As others have mentioned, using a lower memory size with Lambda also lowers the amount of CPU performance available.

However, lowering the memory size appears to reduce not only the number of cores available but also the performance of each core.

So even if your code is very single-threaded and has low memory requirements, you might still want to provision a larger Lambda memory size.

The frustrating thing is that your single-threaded code might only be using one of the up to six CPU cores made available to it.



