history of realtime raytracing - part 3

[History of realtime raytracing (part #3)................]
We need raytrace, anyway.

"balls are the new bobs" - Insectecutor

"When art critics get together they talk about Form and Structure and Meaning.
When artists get together they talk about where you can buy cheap turpentine." - Pablo Picasso


Did you think we were dead? Yes, we were, indeed. And then your scientists invented electricity.
Merry Christmas.




We left raytracing fading out just after the release of Heaven seven. The demoscene of the early '00s turns towards an orgy of 3d scenes, ever more crowded and detailed as graphics cards give away more raw power for free. Or maybe the demoscene is simply dazzled by the German rise of Farbrausch, who lead the way with robots, discoball-like dancing men and supermarket carts.

But everything has an end, even the insatiable thirst for power. And at the millionth polygon, the scene searches for new directions.

And how do you innovate, if not by going back to the roots of our old and dear raytracing?

Where it all started (again)
The first to recognize that the new graphics cards hold more than polygon-crunching power are the almost unknown Castrum Doloris, who incidentally name the new scene manifesto "ps_2_0" (after the code directive that tells the graphics card to use pixel shaders 2.0), and propose once again the undead classic of shadow-casting spheres in a hollow cylinder. But somehow, partly because of the (coder) colours, partly because the scene's resolution makes it look like a mid-nineties intro, the pioneering attempt of Castrum Doloris doesn't gain any followers.

The scene goes on wandering among hundreds of polygons, with brief stops in the field of accelerated raytracing, as in the Tron-esque and beautiful "Fair play to the queen" by Candela, where a running robot, escaping from something, passes behind three metaballs with specular bumps (yes, that's made with pixel shaders).

Rudebox - toruses stomping
And then, suddenly, the scene has a revelation. The "revelation" is not so different from the innovative Castrum Doloris intro (from two years before), but:

1) it doesn't have coder colours
2) it shows much much higher resolution
3) it's 4k in size (OMG, 4k)

Its name is "kinderpainter", and the author is none other than iq of rgba.

A new era is born. Everybody realizes that raytracing, and fast raytracing at that, is possible on the GPU, and suddenly everyone starts making 4k intros with raytracing. But why is raytracing so fast on modern GPUs? We asked the author of kinderpainter himself:

"Well, in fact it depends a bit. Raytracing is also damn fast on CPUs today. With multithreading, SSE coding and cache-friendly design you can get several million rays per second on a regular CPU, which allows realtime raytracing of huge 3d models with millions of polygons. You can find here a few screenshots of my own CPU realtime raytracing.

Now, democoders are not in general interested in real raytracing - we are quite behind the raytracing community. We are still tracing rudimentary spheres and planes like in the 80s, which is completely unimpressive for those who are really doing raytracing today. If you speak about raytracing as understood in the "demoscene" (spheres and planes), then yes, the GPU is very fast, faster than the CPU.


Muon Baryon - love the grid at the bottom
Real raytracers are usually memory bandwidth bound, and not compute bound, because huge kdtree/bih structures have to be traversed and few polygons accessed per ray. That's why the fastest raytracers out there are CPU and not GPU.
However, in simple scenes like in the demoscene, without kdtree/bih or complex data structures and geometry, brute-force rendering on the GPU really shines indeed. That's because having no memory accesses means the ALUs of the shader units can work without stalling. Of course all these ALUs work in parallel, because every pixel of the screen is independent of the other pixels in a raytracer. So, for demoscene-style raytracing the GPU is really a good option. Also, as the setup needed is minimal, it's perfect for 4k productions. Lastly, as GPUs get more powerful by means of adding more shader units (arithmetical power), these sorts of simple raytracers will benefit more and more, just like the rendering of fractals, which is a pure computation problem.

However, the CPU is still probably the best/fastest option for real raytracing, with not ten or a hundred "stupid" spheres but millions of polygons. However, NVidia is trying to get real raytracing running fast on the GPU too, although it's only for a limited number of polygons (say, a million)."
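iq's point about per-pixel independence can be sketched in a few lines. Below is a hedged, minimal CPU mock-up (in Python, for readability, not any actual shader) of what a brute-force GPU pixel shader does: every pixel solves its own analytic ray-sphere intersection, with no shared data structure to traverse, so nothing stalls. All names and the scene layout are invented for illustration.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Analytic ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    Returns the nearest positive t, or None if the ray misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2*a)   # nearest intersection
    return t if t > 0 else None

def render(width, height):
    """Each pixel is computed independently of all others -- exactly the
    property that lets GPU shader ALUs run in parallel without stalling."""
    image = []
    for y in range(height):
        row = ""
        for x in range(width):
            # camera at the origin, unit sphere straight ahead on the z axis
            u = (x + 0.5) / width * 2 - 1
            v = (y + 0.5) / height * 2 - 1
            d = (u, v, 1.0)
            row += "#" if hit_sphere((0, 0, 0), d, (0, 0, 3), 1.0) else "."
        image.append(row)
    return image
```

Swap the two Python loops for one shader invocation per pixel and this is, in spirit, the kinderpainter setup.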


After this "breakthrough", it becomes possible to reproduce in realtime the patinated covers of early-'80s magazines from the first chapter. It's a cycle that completes itself inside the scene. The still image of "Photon race" now moves, between translucency and refractions, in "Photon race 2". Every surface that seemed made of plastic now reflects everything around it or bends the rays of light, reaching a realism never seen before at this detail and at these resolutions. The 3d scenes are no longer penalized by the jagginess of triangles, but gain new and unexplored roundness.

And finally, raytracing
The possibilities offered by pixel shaders motivate the scene to go beyond the classic spheres and planes (contrary to what iq says).

In "Yes we can", the Obama-sponsored 4k, Quite use raytracing to create kaleidoscopic plasmas, curved and distorted surfaces that alternate with rainbows of flexuous lines, billions of dust particles that generate infinite geometries.

In "Nevada", Loonies recreate cavernous moon passages, wet and barely lit underground tunnels, but more than anything, creased tentacles that sink their roots into a substance reminiscent of lava, stretching out towards a distant and incandescent star.

In "dollop", SQNY use isosurfaces to build an environment reminiscent of the underwater abyss, passing with a seamless cut to the soft clouds of a sunny day, disturbed only imperceptibly by an electric and psychedelic shell surrounding the observer.

Raytracing is born again.
Someone calls it "raymarching", but the line between the two techniques is thin.
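For readers who want that thin line spelled out: classic raytracing solves an analytic intersection equation per primitive, while raymarching (sphere tracing) needs only a distance function and walks along the ray in steps the distance field guarantees are safe. A minimal sketch in Python, for clarity; the scene and constants are made up for illustration:

```python
import math

def sdf(p):
    """Signed distance to the scene: a single unit sphere at z = 3.
    Raymarching needs only this function, never the analytic roots."""
    x, y, z = p
    return math.sqrt(x*x + y*y + (z - 3.0)**2) - 1.0

def raymarch(origin, direction, max_steps=128, eps=1e-4, far=20.0):
    """Sphere tracing: advance along the ray by the distance the SDF
    guarantees is free of geometry, until we get close enough (hit)
    or wander too far (miss)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t        # hit: distance along the ray
        t += d
        if t > far:
            break
    return None             # miss
```

The appeal for 4k intros is that any shape you can write a distance estimate for - isosurfaces, fractals, blended blobs - marches with this same dozen lines, whereas analytic raytracing needs a new root-solver per primitive.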

4k intros, a genre that was drying up as the use of 3d raised the bar further and further, come back, and in demo compos the number of these little gems exceeds that of other productions. The newly born "still images in 4k" genre can move towards a robust youth, leading to masterpieces like "organix" or "slisesix".

A spark generates an explosion, if you have explosive.

Interlude

[Ladies and gentlemen, the interval................]
Yeah, who needs raytrace...

Guys, life took over as always, and part III of the raytracing series is gonna take a bit longer than expected (quite a bit longer...). So, in the meantime, you can probably check out some links. For example, take a look at this great article. It's one of the best making-ofs I've ever seen, for "Applied Mediocrity", Kakiarts' intro.

Otherwise, you can always drop your jaw at the latest astounding 256-byte intro by Rrrola: Puls.
The tunnel will be back in a second... almost...

history of realtime raytracing - part 2

[History of realtime raytracing (part #2)..............]
Who needs raytrace, anyway?

(let me be proud of being Italian, just for one second, please)

The "optimizing" phase (mid 1997 - mid 2001) is the second era of demoscenic raytracing. Up to here, we've seen slow, blocky spheres, really more similar to blurry squares, going nowhere on infinite parallel planes. Casting shadows. And all of this at framerates you can count on the fingers of one hand.
Then an intro came, out of nowhere, an intro that changed the state of things. Its name was obscure and hermetic: "Gamma" (no, not this one).

Let me make a SoLo-stylee description of "Gamma":

black screen
synth demo music
gamma logo
vertical and horizontal pipes with flares between
flashy metaballs
more flashes
psychedelic palette cycling
morphing and reflecting cube
end credits


(memo: write a Polygen grammar to generate automatic SoLo demoreviews)

The incredible raytracing engine of "Gamma"
"Gamma" has a unique trance-ish feeling. Unlike before, you can't tell at first glance that this production is raytracing. The framerate is too high (it runs flawlessly on a 233MHz Pentium), and, more than anything else, there are no reflecting spheres and checkerboarded planes. Finally, mfx abandoned the mode-x scanlines and that blobby blurring we'd seen in the "Transgression" series, and let pure raytracing powa flow out of our screens.

The other groups will remain astonished for a long while, and continue making raytraced planes and spheres for some time more... but! Let me open here the Italian parenthesis in raytracing's history!




Those guys have seen "Amici miei" too many times
Italian groups have mainly made "mainstream" and "one 'human' character show" type demos.
But two groups were making raytracing instead: Spinning Kids and Bug2Fix.

Bug2Fix reached the apex of their career in the first phase of raytracing history, when they brought "Just like Antani" to Assembly 1997 (earning an honorable 5th place in the intro compo - just behind mfx, by the way). "Just like Antani" is quite a good intro, but it belongs, as said, to the "inception" phase of raytracing history, with black/white floors and so on.

Spinning Kids hit the scene in 1997, with two contributions: one totally raytraced intro ("I feel like I could"), and one "not-so-realtime-but-beautiful" raytraced scene in their "Back to the mansion" prod (otherwise almost forgettable). The IFLIC engine is still phase-1 raytracing, but shows some nice innovations, like a dynamic resolution switcher (from 80x50 up) and a (forgettable) dithered mode.

Raytracing in *cough* "realtime"
Then things evolve into "Sviluppo insostenibile" ('unsustainable development', but it sounds better in Italian). It was September 1998.
Dixan, Spinning Kids' musician, says something about the technical features of this one: "'Sviluppo Insostenibile' was coded at Pan's place, in Venice, and on the train, while we were going to Abort98. The most relevant feature is Pan's texture generator, a clever coding exercise that had its own virtual machine, technologically advanced 2d filters and a highly optimized memory management scheme, so optimized that, in the end, we managed to put only *one* texture in the intro".

Maybe they've seen too many b-movies instead
Their raytracing engine really evolved, anyway, and led to "Taint", presented at "The Trip '99", the (*last*) real Italian party (yeah, you read that right, 1999; that's where I stop being proud of living in Italy :). "Taint" remains probably the highest expression of that technology, with its volumetric lights (maybe a first in a demo) and atmosphere. I remember that the Spinning Kids guys had filled The Trip '99 party hall with black and white photocopies of one scene from "Taint", and everybody was asking what that blurry thing was.

Yeah, Italian history of raytracing, sadly, ends here (but with pride).


And then, on the 24th of April 2000, raytracing disappears from the scene.

Yes, that's the date when "Heaven seven" by Exceed was shown at Mekka & Symposium 2000.
"Heaven seven" was the fastest raytracing seen up to then. This intro became, in the collective imagination, an icon of design and demo-crafting. Even many years after M&S, people have taken the binary of "Heaven seven", like a fetish, and expanded it, upgraded it. We can now watch "Heaven seven" in 1080p HD. And it still runs as fast as a polygon rasterizer.
"Heaven seven" is still number 3 in pouet's all-time top ten, and it's almost a decade old. Many young people have come into contact with the scene by seeing that intro for the first time. We can't really explain why "Heaven seven" exploded like that. The only suggestion we can give is "watch it".
Blocky jagged spheres were forgotten forever after the 24th of April 2000.

The trancey atmosphere of 'Fresnel 2'
As said, after that date, raytracing disappears from the scene.
We still have some other examples of good raytracing intros or demos around 2000.
Suburban were perfecting their rtrt engine (*then* "Heaven seven" came).
Kolor produced the "Fresnel" series, FAN brought us "Nature still suxx".
But "Nature still suxx" is already something different from a demo, something that pushes its bounds more towards a benchmark (and "Nature still suxx" will indeed become a benchmark).


The daemon of raytracing will sleep, from here on, under the demoscene's beds.

history of realtime raytracing - part 1

[History of realtime raytracing (part #1)..............]
How realtime raytracing has changed the demoscene, and vice versa

"As we all know, the audience voted for best productions at Juhla Pi. And, as always - audience was stupid." -- Jmagic, on why the world's first realtime raytracing demo didn't win 1st place at Juhla Pi (from: "(Illogical) PC Demoscene Quotes List" or "how the 'first' realtime raytracing demo cannot be called 'Transgression 2'")

(not so) realtime raytracing on C64
Realtime raytracing, the wet dream of the demoscene, has its roots in patinated PC magazines with still images of platonic solids bearing unreal crystal reflections. The lifecycle of raytracing in the demoscene crosses at least three phases, the first of which (the "inception" phase, spanning from the mid-90s to the end of 1997) we'll analyze in this post.

The word the first realtime raytracing experiments bump into is "fake" (or even "precalculated"). In order to build a convincing raytracer, in fact, one that you can really call "realtime", a minimum amount of processing power is required, an amount that the personal computers of old didn't have.
Not the widely used Commodore 64 or Amiga, and not even the first generations of PCs (i386 included).

For that reason, sceners had to reproduce visual loops of pre-rendered images on those less powerful platforms. For that reason, the intro quoted as "the first one with realtime raytracing" is an animation. The second intro of that series, instead, hastily separates precalculated (duh) raytracing from the realtime kind, but unfortunately the "realtime" version looks more like a slideshow of the magazine images mentioned above.

Attempts at raytracing again on C64
In this first creative phase, we can't count many productions using realtime raytracing. It seems that in order to show a sphere reflecting the surrounding landscape you need higher technical skills than those used to code (or rip) a 3d rotating cube. Or maybe the problem is that the closed circle of the demoscene lacks information on the subject, and for those reasons realtime raytracing remains confined (and branded as "fake") to the realms of black magic.

To make things more complicated, the raytracing-intro authors themselves muddy the waters, stating that the engine used in their productions is nothing but "the slowest polygon engine in the world".

From the "Gamma 2" infofilu:

The 1st real rtrt intro (Transgression/MFX)
"It was in the beginning of 1995 when 216 had this pretty wild idea of a realtime raytracer. But soon it was discovered that the project was not only impossible but his programming skills weren't at that level. So he decided to do it the other way around.
Transgression 1 was created.

It wasn't anything but a regular polygon engine with a clever trick used. After filling the scene it just took samples from here and there and painted big blobs on those points and thus the outlook was so crappy that no-one was to tell whether it was traced or not."


The first few realtime raytracing demoscene productions are all alike in their use of certain clichés: reflecting spheres, shadow-casting spheres in all different sizes and colours (with fog, without fog, with lights, with more lights), perfectly perpendicular cylinders, checkerboard-textured planes and so on. Not by chance, those are the easiest primitives to trace.
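Those cheap primitives really are near one-liners. As a hedged illustration (Python pseudocode in a modern style, not any actual engine of the era), here is the whole maths behind the cliché checkerboard floor: one ray-plane intersection plus a parity test.

```python
import math

def checker_hit(origin, direction, plane_y=-1.0):
    """Intersect a ray with the horizontal plane y = plane_y and return
    the checkerboard colour (0 or 1) at the hit point, or None on a miss."""
    oy, dy = origin[1], direction[1]
    if abs(dy) < 1e-9:
        return None                      # ray parallel to the plane
    t = (plane_y - oy) / dy
    if t <= 0:
        return None                      # plane is behind the camera
    hx = origin[0] + t * direction[0]
    hz = origin[2] + t * direction[2]
    # the classic checkerboard: parity of the integer cell coordinates
    return (math.floor(hx) + math.floor(hz)) % 2
```

A sphere needs one quadratic formula more; anything beyond that (a torus, a real mesh) costs far more maths and cycles, which is exactly why every early intro stuck to this repertoire.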

Anyway, the limits of the machines those productions run on are still there, and, however great a coder might have been, tracing more than a textured plane at full framerate was impossible. For that reason, the raytracing intros from around '97 are rendered at resolutions that are fractions of the real screen resolution (for most of them the fraction is fixed, but there are exceptions we'll see later), and then interpolated to cover its entire surface. Raytracing thus rapidly becomes associated, in the collective imagination, with the jagginess of any oblique surface.
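The trick in question is straightforward: trace a small buffer, then stretch it over the screen. A hedged Python mock-up of the fixed-fraction upscale (the real intros interpolated between samples; nearest-neighbour is used here only to keep the sketch short, and exaggerates the famous blockiness):

```python
def upscale(small, factor):
    """Nearest-neighbour stretch of a low-res buffer (a list of text rows)
    by an integer factor -- the cheap cousin of the interpolation that
    '97-era intros used to fill the screen from a fractional-resolution trace."""
    big = []
    for row in small:
        stretched = "".join(ch * factor for ch in row)  # widen each pixel
        big.extend([stretched] * factor)                # and repeat each row
    return big
```

Tracing at, say, 80x50 and stretching by 4 costs 1/16th of the rays of a full-resolution render, which is the whole budget argument of the era.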

In every intro of that period, moreover, the objects' movement, seen on more powerful PCs, makes them look as if they're submerged in a highly viscous liquid. Coding a raytracing intro on a low-end i486 must have been hard (but that's the demoscene: pushing the limits): probably it was like watching a bunch of polaroids in sequence.

Bumped and raytraced trees anyone?
This phase of the history of raytracing in the demoscene ends symbolically with the highest evolution of polaroid technology: "Sink" by Pulse (the group that gave us presents like "Famous Cyber People", "73 Million Seconds" and "Square"), a largely unknown and underrated intro that tries to go beyond the "textured plane / reflecting sphere with shadow" concept, and proposes more consistent scenes, like an underwater, futuristic landscape, or a Paper-esque cartoonish grass foreshortening with (bumped) trees and sun.

After "Sink" we won't see any raytracing from Technomancer again, but someone may be working on Pulse's nextgen engine:

"soon unreal's granny will join with her texture+phong+bump+zbuffer+shadows+ hexalinear heptapentametaoctamapping+fog+rain+particles+depth focus+hair modeller+volume rendering+ radiosity+ raytracing+ perspectivecorrect 3d system that she is coding since world war II
at the moment she's workin on the speed, now she's said to have 50spf (seconds per frame)"

interview #3: iq

[interview #3: Iñigo Quilez aka iq.........................]
According to this interview, it seems anyone can do a 4k after all...

             And this *simple* math equation generates...

Hello folks.

DaTunnel returns after the break with a long long long long interview with our beloved coder Iñigo Quilez (mind the "ñ"), the man who brought us (along with Rgba, his group) unforgettable pieces of jaw-breaking coding and compression like "Paradise" in 2004 or "Elevated" at the latest Breakpoint.

So let's switch to the Spanish keyboard (for the "ñ", obviously) and let's go on.



1) Ten years ago, TASM was sufficient to make a 4k intro. Now you need at least a degree in mathematics. Did the scene really change like that? Did the bar rise so high?


Well, well, TASM was not enough. You had to know numbers like 0xa000 or 0x3c8 by heart, and all the other registers of the VGA card, and some int 0x21 routines too. You also needed to go mad setting up a linear frame buffer in 32 bpp with VESA and a bit of crazy protected mode interrupts. Then you could start to write a rasterizer and implement a good clipping routine in fixed point. Today you don't need all that; maybe knowing what a perlin noise function looks like is more useful.

So did the scene really change like that? Sure.

Did the bar rise so high? Not sure. The difficulties have shifted.

... a butterfly (on a rhino)
4ks today are mostly about procedural content, so of course knowing a bit of maths helps, but you don't need a degree in maths at all. I'm an electrical engineer myself, and for making an intro I don't use even 10% of the maths I learned. In reality you just need to know what's behind words like dot product, smoothstep, noise and cosine. I think anyone who could build a trifiller for a 4k in the 90s will easily understand what a smoothstep is. So I'm not sure the bar has risen.
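(For the non-coders: the smoothstep iq keeps mentioning really is that small. Here it is as a hedged one-line sketch in Python, following the standard GLSL definition, ndFriol:)

```python
def smoothstep(edge0, edge1, x):
    """The standard GLSL smoothstep: clamp to [0, 1], then apply the cubic
    3t^2 - 2t^3, which eases in and out with zero slope at both edges."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)
```
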

On the other hand, it's true that expectations of 4ks have increased, so somehow it might be more difficult to make a 4k today... but at the same time we now have plenty of tutorials, websites and source code examples of 4k intros out there, so it's easier to learn.

So I really don't know, I cannot answer the question. (sic!, ndFriol)

2) You are the coder of many well known (and party-winner) 64k or 4k intros. Are there any 4ks from other groups that make you say "how did they do that", "how did they fit that in 4k"?

Not really. I guess most 4k coders can spot all the techniques and many of the tricks of an intro just by watching. I never got the "but how did they do that" thing, and I'm pretty sure none of the other 4k coders did either. What I (we) usually think is more like "shit, they found this nice trick, and I didn't see it in all this time, damn it".

In fact I think 4k today is not really about expertise in coding as much as it is about finding a nice idea/algorithm. So basically when somebody makes a nice intro it doesn't make me go "how did they do that" but "shit, those bastards found this nice algo before me", and then "ok, no problem, I will have to find a better one now". I bet it's the same for the rest of the 4k coders.

The same applies to 64k intros - no real surprises anymore since Stash or fr08, except for when I watched Tracie/TBC and I felt the "but how the fuck?" magic in my body again. Then I saw the shader code and I said "ah, oh, of course, damn you Mentor!".

Movement got better after the infamous 'Bambi'...


3) What is your opinion on the "eternal dispute" between who forgives the use of "system" DLLs in 4k intros and who doesn't?

Ideally one should use whatever s/he needs and just be clear about it, with no cheats, and let the people judge accordingly.

Now in the real world, for a party competition where you need to set some minimum set of rules, I would probably disallow using external DATA (gm.dls, OS wallpapers, OS sounds or music), and I would allow using external FUNCTIONALITY if it's provided by default with the platform of choice.

In my opinion the renewal of hardware and APIs is the main cause of the progress of the 4k scene lately (plus improvements in compression techniques, of course). I think that if we didn't allow the use of the new functionalities provided by the pc platforms we would still be coding i386 intros, just on 3 gigahertz machines, but they wouldn't look any different from those of the late 90s (perhaps we would have a few hundred raytraced spheres instead of ten). But that would probably be very boring.

Also, I think there are some misconceptions about system DLLs, and I dare say this is especially true for the old school sceners. System DLLs don't have 3d engines, system DLLs don't have 3d objects, system DLLs don't have image filters, system DLLs don't have glDrawMountains( GL_BEAUTIFUL, GL_PLEASE ) or D3DX_BuildArchitecture( D3DX_AN_ATRIUM ). System DLLs don't have sound synthesisers, or camera animation systems, nor do they do glApplyEffect( GL_GLOW ) or anything like that. The most advanced thing a system DLL possibly does is to give you the geometry of a sphere or a rectangle, perhaps a cube if you are lucky. And in case you were to try to impress the audience with some cubes (?), the system DLL would save you the same 100 bytes that you would be wasting anyway initializing the DLL itself. Also, most intros today are heavily based on raymarching, where generating a polygonal mesh is useless anyway.

In my experience abusing a system DLL can save 200 bytes total of your intro, if you are heavily using the functionality in it. Does this help? Yes of course. Now, are those 200 bytes gonna make the difference between your prod being a great intro or just an average intro? I don't think so! Awesomeness must surely come from another place, not from the 200 bytes you will save by using system DLLs.

4) In your Function '08 paper, your conclusion is that demosceners are attracted by "CUBE-based" productions, but "organic" and "non-abstract" stuff is harder to do. Isn't there the risk of being like Dr. J. Evans Pritchard of Dead Poets Society in classifying artistic productions? Who is right: the masses, who make "Candystall" win against "Stiletto" at Assembly 2007, or the 0.1% of demosceners who love "organic and non-abstract" stuff?

I don't think it's bad to make a classification according to some parameters you choose (classifications can coexist), as it probably helps to make a dissertation or to communicate (like when you naturally classify while telling a friend about a demo - "hey, did you see this demo? It's an oldschool demo, minimalistic"). Nothing wrong with that, probably. In the case of that presentation at Function '08, it helped me to (hopefully) communicate an idea (more on that shortly).

The risk would be, I suppose, to give value to or judge a piece of art or a demo by measuring it through a given criterion or classification, like when Dr. J. Evans Pritchard, PhD, measures indeed how "great" something is. Thankfully demos are not valued like that, but by rule of thumb and word of mouth.

And indeed the classification I made was not to compute which intros were good or bad, but to study the taste of the intro audience and the preferences of the intro coders themselves. First I contextualized the types of intros according to criteria that helped me explain my experience. My message was that, after having tried all the types of intros myself, I found that abstract ones are indeed far simpler to produce than non-abstract ones, and that at the same time they are by no means less powerful for transmitting an idea/feeling, or even for simply seducing an audience.

Also, I love these rhetorical games where you investigate things by taking all the possible view angles, whether you agree with them or not. Occasionally you find big "truths" or ideas.
Sometimes I also like to force myself to play the bad guy (like with "I hate cubes"), although I'm not very good at it.

5) We've seen futuristic cities ("Micropolis"), landscapes ("Mojo dreams" and "Elevated") and half women ("Stiletto") in 4k intros. Where are 4k intros (or 4k image executables) headed from here (technically and artistically)?

Well, you have described "non-abstract" intros only (hey, classifications!).
Abstract intros (Texas, Nucleophile, Receptor, Kindernoiser) will continue their own progress, I guess; there isn't any "logical evolution" for them from my point of view, they will just continue to look nicer and be more artistically impressive.

But for non-abstract intros, indeed, after cities, landscapes and half-women, I guess the easy (!?) way would be to have more realistic cities, more alive landscapes and body-complete women :) However not many want to or can go that way.

If I had to make a prediction, I would say that for one or two years we will have a bit of the same things. Since 2007 we have apparently been in the "raymarched intros" era, so we will see more Tracies, Sults and Nevadas (see the prods of BP09 and Evoke09!). But who knows, one of the nice things about the demoscene is precisely that everything is "just as before" until somebody suddenly makes a super cool intro, out of the blue. And that can be anytime!

Well, looks good, even without penguins

6) Each time I've seen the sea scene with icebergs in "Paradise", I've always thought that something was missing there for lack of space/time on your side. Now, can you tell us what you *really* imagined in that scene? :)

Paradise (2004) was my second production for Windows and hardware rendering, and my first with shaders, so not only was I heavily limited by the deadline itself but by my graphic abilities at the time too. My goal with Paradise was indeed twofold: to learn shader programming and to see how far I could go on realism. And so, for the icebergs scene my idea was to have done a better snow material (soft shadows, a better subsurface scattering approximation, etc) and a better ocean surface. For the content, I didn't have the time to make the tail of the whale appear as it enters the water. The scene of the desert is also embarrassingly unfinished and empty. I think only the rhinos were properly finished. (oh god, I always thought they wanted to put *penguins* there... dream shatters, ndFriol)

7) Last question: we know you've made great work and research on character's movement in intros, but do you agree on the definition "the Bambi that walks like the robot from Kasparov" for that lamb in "Paradise"? :)

Hehe, well, in fact it walks much worse than the robot from Kasparov. It looks awful :)
Just for the record, there was no animation system in Paradise, nothing like skeletal animation and skinning or anything. Instead, the animations (whale, dolphin, rhinos, etc) were done by moving the vertices of the meshes with formulas. So was the Bambi, where every vertex was moved every frame with some combination of smoothstepped cosines and exponentials.
The final version of the intro has a somewhat better animation, although it's still terrible (it's uploaded to Pouet I think, although the video capture has never been updated). But well, Paradise was the first time I was formulanimating characters, and at the time I was reasonably satisfied, even with the Bambi.
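(To make "formulanimating" concrete: the technique iq describes boils down to displacing each vertex with a closed-form expression of time. A hedged sketch in Python; the specific wave formula below is invented for illustration and is not Paradise's actual code, ndFriol:)

```python
import math

def smoothstep(e0, e1, x):
    """Standard GLSL-style smoothstep: clamped cubic ease 3t^2 - 2t^3."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def animate_vertex(v, time):
    """Displace one mesh vertex per frame with a pure formula -- no
    skeleton, no skinning -- combining a smoothstepped fade-in, an
    exponential falloff and a travelling cosine wave."""
    x, y, z = v
    # amplitude fades in over the first second and decays away from x = 0
    amp = smoothstep(0.0, 1.0, time) * math.exp(-0.5 * abs(x))
    return (x, y + amp * math.cos(4.0 * x - 2.0 * time), z)
```

Run over every vertex every frame, different formulas per creature, and you get a whale, a dolphin, and one unfortunate Bambi.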




Oh well, we thank iq so much for this great interview, and suggest he register the term "formulanimating" - it could be worth millions in a few decades. By the way, you can find all the productions of Rgba here. Don't forget to also check iq's work in 4k procedural rendering (you can find more info on his site).

[Update!!! I was right on the penguins after all:
> 2009/8/6 iñigo quilez:
> Btw, you are right, there were some penguins in the icebergs at some point,
> I forgot about them!! I had to remove them because they were not good
> looking enough!
]