
Looks like it's here now: https://replicate.com/replicate/llama70b-v2-chat

As for pricing, that model's page says: "Predictions run on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 17 seconds."

And the pricing page (https://replicate.com/pricing) says Nvidia A100 (80GB) GPU hardware costs $0.0032 per second.

So Llama 2 70B would "typically" cost under 17 x 0.0032 = $0.0544 per run.
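The estimate above can be sketched as a quick calculation (the 17-second runtime and $0.0032/second rate are the figures quoted from Replicate's pages; actual runs may vary):

```python
# Rough per-run cost estimate for Llama 2 70B on Replicate,
# using the quoted figures: ~17 s typical runtime on an
# Nvidia A100 (80GB) billed at $0.0032 per second.
seconds_per_run = 17
usd_per_second = 0.0032

cost_per_run = seconds_per_run * usd_per_second
print(f"${cost_per_run:.4f} per run")  # $0.0544 per run
```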



Thank you for checking that.



