I wasn't expecting to see a 3d rendering treatment!
> Drawing complicated scenes is much more complicated in 3D than in 2D—and more interesting, since some real ideas are required. Much high-end 3D drawing, for example in video games or movies, relies on a pixel-by-pixel treatment. The pixels in hardware designed for this purpose incorporate a depth coordinate—that is to say, depth with respect to the plane of the screen—and pixels are colored in the order of their depth, so that close pixels are painted after far ones. This hardware option is unavailable to PostScript, which is essentially device-independent. The PostScript program itself must therefore be responsible for keeping track of depth. The standard method for doing this is to use a binary space partition.
Yeah, this is a good reminder that there's no Z-buffer in vector rendering. I haven't used the painter's algorithm for a couple of decades, but this makes me want to play with PostScript and code up a little 3D surface renderer with a BSP tree and the painter's algorithm.
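For anyone who wants a starting point, here is a minimal painter's-algorithm sketch in PostScript (the names /proj and /face are made up for this example). Three unit squares at depths z = 3, 2, 1 are obliquely projected and painted far-to-near, so near faces overwrite far ones; a real renderer would sort faces by depth, or traverse a BSP tree, instead of listing them pre-sorted as done here.

```postscript
% Paint three overlapping squares back-to-front (painter's algorithm).
/proj {                    % x y z  ->  x+z/2 y+z/2  (oblique projection)
  0.5 mul dup              % x y h h      (h = z/2)
  4 -1 roll add            % y h x+h
  3 1 roll add             % x+h y+h
} def
/face {                    % x y z gray  ->  -   (fill a unit square at depth z)
  setgray newpath
  3 copy proj moveto                            % corner (x, y)
  3 copy 3 -1 roll 1 add 3 1 roll proj lineto   % corner (x+1, y)
  3 copy 3 -1 roll 1 add 3 1 roll
         exch 1 add exch proj lineto            % corner (x+1, y+1)
  exch 1 add exch proj lineto                   % corner (x, y+1)
  closepath fill
} def
72 200 translate 72 72 scale
0   0   3 0.8 face       % farthest face, painted first (light gray)
0.3 0.3 2 0.5 face
0.6 0.6 1 0.2 face       % nearest face, painted last (dark gray)
showpage
```

Save it with a .ps suffix and feed it to Ghostscript to see the near square correctly covering the far ones.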
I wrote my own PostScript interpreter three months ago, which runs in a browser :) You can play with it at http://www.ivank.net/veci/pdfi/ There are also some demos from 80s and 90s.
What's really cool is you could build window systems based on shipping around PS fragments instead of bitmaps. Sun's NeWS had a ton of technical advantages for networked workstations, such as display and scale independence and smaller wire size. It was also interesting because PS could do a bunch of calculation on the display device, like draw the graph of a math function, and that spec could be encapsulated in other code. It could also do I/O, which was interesting in a UI capacity, e.g. OpenWindows.
Right around this same time, X11 was coming up on other platforms like HP and Apollo, and it seems to have "won", and NeWS disappeared.
I wonder if the time is right for a resurgence in this idea, now that we're rethinking compositors, X servers, Wayland etc etc.
One thing that bothers me (Chapter 6, page 11) is how they approximate a circle using quadratic Bezier curves, and say that "an approximation by eight quadratic curves is just about indistinguishable from a true circle." However, if you look at the picture on the next page, you can clearly see the difference.
That is not an illustration of eight quadratic curves, it is an illustration of four.
"The figure below shows how the curve x^2+y^2−1=0 is approximated by four quadratic curves (in red). An approximation by eight quadratic curves is just about indistinguishable from a true circle."
The illustration shows an approximation with 4 quadratic curves. There's no point drawing the one with 8, as it would be indistinguishable, as the article points out.
Mathematically, they are not wrong. An eight-curve circle is still visually indistinguishable from a true circle (see https://pomax.github.io/bezierinfo/#circles for the proof to back that claim up). Their image, however, shows what happens when you try to only use four, rather than eight, curves: things look clearly wrong (as for how wrong, the aforementioned link gives the error measure. It's _quite_ wrong).
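If anyone wants to see the eight-curve version for themselves, here's a small PostScript sketch. PostScript's curveto is cubic, so each 45-degree quadratic arc (P0 = (1,0), control C = (1, tan 22.5°), P2 = (cos 45°, sin 45°)) is degree-elevated to the cubic with controls P0 + 2/3(C−P0) and P2 + 2/3(C−P2); rotating user space 45 degrees between arcs lets one set of coordinates (rounded to five decimals here) serve for all eight pieces. The translate/scale values just center a unit circle on a US Letter page.

```postscript
% Unit circle from eight quadratic Bezier arcs, degree-elevated to cubics.
306 396 translate  100 100 scale  0.01 setlinewidth
1 0 moveto
8 {
  1 0.27614   0.90237 0.51184   0.70711 0.70711 curveto
  45 rotate                % next arc reuses the same coordinates
} repeat
closepath stroke showpage
```

At this size the result really does look like a true circle; drop the loop count to 4 (and make each arc span 90 degrees) and the flattening becomes obvious, which is what the book's figure shows.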
This is fantastic. IMHO, Adobe should have just stuck with PostScript and forgotten about PDF. I remember having to create fractal images for research papers and my dissertation -- the only way to allow them to render at any resolution was to write a program in PostScript (very cool to program figures for a paper!). No idea if you can program PDFs this way (and, unlike PostScript, I can't edit them with a text editor since they're binary).
No, they don’t. That’s like saying PNG files can contain postscript code because tools exist that convert postscript to .png.
The basic drawing operations in postscript and PDF are the same, but that’s about it.
An important difference between PDF and Postscript is that PDF isn’t Turing complete (1). Benefits of that are that, for example, you can determine how many pages are in a .pdf without rendering it in full.
Disadvantage for drawing fractals is that PDF doesn't have a notion of looping or recursion. That PostScript program to draw a fractal can, at render time, check the resolution of the output device and then decide how deep to recurse. A PDF file has to decide at file creation time, and has to contain every single drawing command, so it will be larger (I don't remember PDF internals well enough to be sure about that last point; it may be possible to mitigate it somewhat by redrawing scaled parts of a drawing).
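A minimal sketch of the kind of recursion being described -- one side of a Koch curve in PostScript. The depth is a literal parameter here, but a PostScript program could in principle derive it at render time, e.g. from the device's resolution (via currentpagedevice in Level 2), which is exactly what a static PDF cannot do.

```postscript
% /koch draws a Koch-curve segment of the given length, recursing
% to the given depth.    usage:  length depth koch
/koch {
  dup 0 eq
  { pop 0 rlineto }                 % base case: draw a straight segment
  { 1 sub exch 3 div exch           % shrink length to 1/3, depth - 1
    2 copy koch   60 rotate         % four sub-segments with the
    2 copy koch  -120 rotate        % classic 60/-120/60 turns
    2 copy koch   60 rotate
    koch }
  ifelse
} def
72 400 translate  0 0 moveto
468 4 koch
stroke showpage
```

Note that rotate mid-path works because rlineto coordinates go through the CTM as they are issued; the whole figure is one path, stroked once at the end.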
>> PDFs can, and typically do, contain postscript.
> No, they don’t. That’s like saying PNG files can contain postscript code because tools exist that convert postscript to .png.
It's clear you know the difference between PS and PDF, so don't you think your straw man example is a bit exaggerated? PDF is originally based on PS, and it has a 1:1 correspondence for almost all its rendering features with PS. PNG is completely different: since it's a raster format, it doesn't share any drawing operations with PS.
You could have offered a gentle nitpick that PDF is interpreted PS rather than source PS, but claiming @pletnes' comment is completely wrong, when it isn't, makes your comment seem both unnecessarily snarky and somewhat off base, FWIW.
"PDF contains tokenized and interpreted results of the PostScript source code, for direct correspondence between changes to items in the PDF page description and changes to the resulting page appearance."
As far as the graphics output is concerned, PDF is a strict superset of PostScript.
In fact the "PDF page description language" is PostScript constrained to only drawing operations which have somewhat extended semantics, as PDF drawing model does Porter-Duff composition while PS does not.
In all, when you rewrite any piece of PS code delimited by showpage into the shortest PS representation without control structures, you get a valid PDF representation of exactly the same final page image.
Yes, they can, and the postscript portions are even stored in plain text for you to edit, unless you tell whatever pdf compilation tool you're using to encrypt the entire document. There is no conversion unless you're using a poorly written PDF compiler.
(this also makes your analogy pretty weak - it's more like "having a PNG include a bitmap image", in addition to all the other data it can contain)
Of course, most _readers_ will ignore raw postscript, because the PDF format is intended for print documents, and having an insanely complex program that draws "a picture" is crazy inefficient compared to just including the vector graphic (not the bitmap graphic, that would be quite dumb) that the postscript program is supposed to yield.
PDF today might be a monstrosity, but I remember simple reasons for its creation. PostScript is Turing complete, and less of a document format. It's like having HTML replaced by JavaScript calls to dom.append(<expr>) everywhere.
You have to evaluate the whole source to know the actual output. PDF mitigates this by giving static structure (pages etc.) so that a program can find its way around without running anything, allowing it to show page N only, quickly. Such things.
PDF is interpreted PostScript. It is not a programming language as PostScript is.
Edit: PostScript has to be interpreted and rendered; PDF only has to be rendered. You can't jump to page 100 in a PostScript file if you haven't interpreted pages 1 to 99 before. With a PDF you can.
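For what it's worth, the mechanism behind that random access is the cross-reference table at the end of the file. A sketch of the tail of a minimal PDF (object numbers and byte offsets here are illustrative, not taken from a real file):

```
xref
0 4
0000000000 65535 f 
0000000017 00000 n 
0000000081 00000 n 
0000000147 00000 n 
trailer
<< /Size 4 /Root 1 0 R >>
startxref
309
%%EOF
```

A reader seeks to the startxref offset, reads the table, and follows /Root to the catalog, whose /Pages node carries a /Count -- so it learns the page total, and can seek straight to any page object, without executing a single drawing instruction.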
PDF is interpreted too -- actually a subset of PS -- in fact you can write programs that work in PostScript that PDF will not handle in the same transparent way. The PS2PDF translator will have to render those pieces ahead of time -- in PS, the printer can render/interpret the code in a way that is optimal for the printer. That's why I used it.
You cannot write programs in PDF. As I have written, it is not a programming language. It is already interpreted.
For your use-case a PostScript file is better, that's why you used it. But most of the time all graphics for a given page are prerendered and therefore static. If all page content is vector based you can render a PDF in every resolution you need.
In my little printing world I like to have control over the page; I don't like that the printer (RIP) can control the page "in a way that is optimal for the printer". I am responsible for the output, not the printer. If something goes wrong inside the RIP, the printer doesn't pay the damages, I do. And things can get costly.
It is also pretty resource intensive to work with PS (because it has to be interpreted every time you want to have a look at it). All things considered (and there are a lot more differences) it is simply more pleasant to work with PDF instead of PS. That's why the majority uses it. But it definitely has flaws, too.
Dumb printer drivers don't print hundreds of pages of source when you send a PDF to print. If you went to college in the 1990s you might remember begging for printer fee refunds.
PDF is "bread and butter"? There is no fundamental reason why PDF's should be easier to process -- in fact, PS files are far more transparent being simple text files (also capable of holding binary data as well).
PDF is also a plain text file, with one slight caveat: the PDF page index contains byte offsets into the file.
[Edit: this observation should probably be taken as relative to various other "plain text with byte offsets as decimal ASCII numbers" formats I have seen; PDF is the most sane of that set, and Netscape/Mozilla Mork the most insane]
If you want to run many of the examples in this book, you can (on a Mac) usually create and save a text file (with a .ps suffix) and then double-click the file in the Finder. The ps2pdf distiller should convert the contents to PDF and open them in Preview. (Not all the I/O functions are available, though.)
I took this course at UBC and it was great. Programming PostScript to render 3D animated shapes was mind-blowing. Especially cool was that you needed to build everything from scratch.
I'm also reminded of ray tracing in PostScript http://www.realtimerendering.com/resources/RTNews/html/rtnv6...