
Yeah, GGP's take is not that interesting. It just depends on how much extra perf/value you get from the GPU vs the CPU.

If the GPU gives you some nice fancy shading, but your game works fine without it, then GGP's suggestion to "just make it fail" is of course silly.

Slight tangent: I think a lot of "tech hot takes" (and snarky SO comments) come from a lack of imagination outside our own experience. It's easy to forget that there is often a huge diversity of applications/use-cases for the languages/libraries/etc that we use.



> If the GPU gives you some nice fancy shading, but your game works fine without it, then GGP's suggestion to "just make it fail" is of course silly.

Not really; that's actually the point being made: if you try to use the GPU with your fancy shaders, you'd rather have it fail noisily, so you can explicitly fall back to a CPU implementation without the fancy shaders, than have the library silently fall back to rendering everything (including your expensive shaders) on the CPU.


Then, so far as I can see, it's not even an uninteresting take, since you can easily detect WebGL support before running GPU.js. Perhaps this was difficult to do with OpenCL in 2010, but it's a non-issue here.
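A probe along these lines needs only standard canvas APIs. A minimal sketch (the function name and the injected canvas factory are illustrative; injection just lets the logic run outside a browser):

```javascript
// Sketch: check whether a usable WebGL context can be created before
// choosing the GPU.js path. createCanvas is injected so the logic can be
// exercised outside a browser; in a real page you would pass
// () => document.createElement('canvas').
function detectWebGL(createCanvas) {
  try {
    const canvas = createCanvas();
    // Prefer WebGL 2, fall back to WebGL 1; getContext may return null.
    const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
    return gl != null;
  } catch (e) {
    return false;
  }
}
```

As the rest of the thread notes, though, getting a context only proves WebGL exists, not that it is fast or correct.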


Detecting WebGL will not tell you what is needed.

I just tried a WebGL demo, and got wildly different results depending on the browser, and which GPU is active at the time the demo started. Results varied between:

- Smooth and good looking

- Good looking but <1fps. Turning off GPU features would have been the better thing to do.

- Noisy, corrupt-looking pixels over half the image because calculations were producing incorrect numerical results; it looked like overflow or something.

At least it's possible to detect a low frame rate and, eventually, compensate by reducing features and/or resolution. But that's still annoyingly slow (it will take a few seconds to detect).
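A minimal version of that frame-rate check, computing average FPS from collected frame timestamps (the sampling scheme and the 30fps threshold below are assumptions, not from any particular library):

```javascript
// Sketch: average FPS over a window of frame timestamps (in milliseconds).
function averageFps(frameTimestamps) {
  if (frameTimestamps.length < 2) return 0;
  const elapsedMs = frameTimestamps[frameTimestamps.length - 1] - frameTimestamps[0];
  if (elapsedMs <= 0) return 0;
  // N timestamps bound N-1 frame intervals.
  return ((frameTimestamps.length - 1) * 1000) / elapsedMs;
}

// Browser usage: collect timestamps via requestAnimationFrame for a second
// or two, then reduce features/resolution if the estimate is too low, e.g.:
//   if (averageFps(samples) < 30) { /* drop expensive shaders */ }
```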

Bad calculations are a bigger problem, as it's hard to anticipate what kind of bad to look for and how to detect it.
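One pragmatic mitigation is a known-answer test at startup: run a small computation on the GPU path and on a CPU reference, and refuse the GPU path if they diverge. A sketch of the comparison half (function name and tolerance are illustrative):

```javascript
// Sketch: compare GPU output against a CPU reference, rejecting NaN/Inf
// and anything outside a small tolerance (overflow-style corruption tends
// to produce wildly wrong or non-finite values).
function resultsMatch(gpuOut, cpuOut, eps = 1e-4) {
  if (gpuOut.length !== cpuOut.length) return false;
  for (let i = 0; i < gpuOut.length; i++) {
    if (!Number.isFinite(gpuOut[i]) || Math.abs(gpuOut[i] - cpuOut[i]) > eps) {
      return false;
    }
  }
  return true;
}
```

This only catches failure modes your known-answer inputs happen to trigger, which is exactly the hard part described above.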


> I just tried a WebGL demo, and got wildly different results depending on the browser

That's not a problem with the API, that's a problem with one or more of your browsers.

Benchmarking your code during init to choose the best algorithm is a very sane thing to do if you know your users' machines/browsers may have different capabilities. We're talking about extra fractions of a second of load time. The alternative is to expose a bunch of details about what's underneath the browser, which would probably not fit with the goals of the web as a sort of abstraction layer over all the operating systems and devices.
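A minimal init-time benchmark along those lines (names and iteration count are illustrative; real code would likely use performance.now() and a representative workload):

```javascript
// Sketch: time two candidate implementations during init and keep the
// faster one. Deliberately coarse -- fractions of a second of load time.
function pickFaster(implA, implB, iterations = 50) {
  const timeIt = (fn) => {
    const start = Date.now();
    for (let i = 0; i < iterations; i++) fn();
    return Date.now() - start;
  };
  return timeIt(implA) <= timeIt(implB) ? implA : implB;
}
```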


I agree when it comes to benchmarking for speed in WebGL.

It's much more of a pain when calculations come out wrong though. It's hard to anticipate all the ways that might happen, check for them, and then find workarounds when you don't have the hardware where they go wrong.

As noted in this comment (https://news.ycombinator.com/item?id=24026914), some of these problems appear to be browser-independent.

Bad calculations can ruin a game, not just through visual glitches, but by revealing things that should be hidden, for example letting players see through glitch-induced holes in walls.


I have to test for SwiftShader in my web game. Whenever Chrome gets a bit upset with the GPU, it will silently fall back and make the game 10x as slow, and I get a bunch of support requests. Putting in a check and telling people to restart Chrome has reduced my support workload a ton.
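For reference, the renderer string can be read via the WEBGL_debug_renderer_info extension (a real extension, though availability varies by browser); the classification function below is a sketch, and its substring list is an assumption:

```javascript
// Sketch: classify a WebGL renderer string as a software rasterizer.
// The substrings are common software-renderer names (assumed list).
function isSoftwareRenderer(rendererString) {
  return /swiftshader|software|llvmpipe/i.test(rendererString || '');
}

// Browser usage:
//   const gl = canvas.getContext('webgl');
//   const ext = gl && gl.getExtension('WEBGL_debug_renderer_info');
//   const renderer = ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : '';
//   if (isSoftwareRenderer(renderer)) { /* warn user to restart Chrome */ }
```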



