Thursday, 2 December 2010

Animated Random Access Vector Graphics

I have just finished my Honours degree, and the topic of my thesis is "Random Access Rendering of Animated Vector Graphics Using GPU". It builds on previous work by Nehab & Hoppe. The full text of my thesis is available here.

The meaning of "random access" here is similar to its use in the term random-access memory (RAM). It means that the color of the image can be queried at an arbitrary coordinate within the image space. Efficient random access to image data is essential for texture mapping and has traditionally been associated with raster images.


The novel approach to random access rendering introduced by Nehab & Hoppe is to preprocess the vector image, localise the data into cells, and build a fast look-up structure used for rendering. In short, it allows efficient texture mapping of vector images onto curved surfaces. However, their CPU-based localisation was far too slow to be used in real time with animated vector images.

My goal was to improve their approach to vector image localisation by parallelizing the encoding algorithm and implementing it on the GPU, taking full advantage of its parallel architecture and thus allowing application to animated images. In my thesis, I present an alternative look-up structure that is more suitable for parallelism, along with an efficient parallel encoding algorithm. I was able to achieve significant performance improvements in both the encoding and rendering stages. In most cases, my rendering approach even outperforms the traditional forward rendering approach.




Wednesday, 1 December 2010

Game Engine

Over the last few years, while studying Games & Graphics Programming at RMIT, I was also working on my own game engine as one of my hobby projects. A complete game engine is a complex beast with many subsystems: rendering, animation, sound, physics, AI, resource management, an export pipeline and more. Trying to implement everything from scratch would simply be a crazy idea, so I mostly focused on three objectives.


The demo described below is available for download here: http://www.ivanleben.com/Demo/EngineDemo.zip

Firstly, I wanted to implement a modern renderer with state-of-the-art visual effects. Secondly, I wanted to implement an animation system supporting both skeletal and cutscene (scene-wide) animations. Finally, I wanted to implement a content pipeline for models, animations and scenes (maps), to bring resources from content creation software such as Maya into the game engine.

I followed the trend of many other recent game engines and decided to implement deferred rendering, which allows me to use complex lighting scenarios with hundreds of lights and still render the scene efficiently. This approach also turned out to lend itself nicely to the implementation of a depth-of-field effect. I also implemented a high-dynamic-range (HDR) lighting pipeline with tone mapping and a bloom effect.
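
To give a rough idea of what the deferred setup involves, below is a minimal sketch in plain OpenGL (using the EXT_framebuffer_object extension and a loader such as GLEW). It is not the engine's actual code; the texture formats, attachment layout and names like createGBuffer are my assumptions.

#include <GL/glew.h>

/* Creates one render target of the G-buffer (sketch only). */
static GLuint createGBufferTexture (int w, int h, GLenum internalFormat,
                                    GLenum format, GLenum type)
{
  GLuint tex;
  glGenTextures (1, &tex);
  glBindTexture (GL_TEXTURE_2D, tex);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexImage2D (GL_TEXTURE_2D, 0, internalFormat, w, h, 0, format, type, NULL);
  return tex;
}

/* One FBO holding HDR colour, normals and depth. The geometry pass fills
   all targets at once; each light is then applied as a full-screen pass
   that reads them back, which is what makes hundreds of lights feasible. */
static void createGBuffer (int w, int h)
{
  GLuint fbo;
  GLuint colorTex  = createGBufferTexture (w, h, GL_RGBA16F_ARB, GL_RGBA, GL_FLOAT);
  GLuint normalTex = createGBufferTexture (w, h, GL_RGBA16F_ARB, GL_RGBA, GL_FLOAT);
  GLuint depthTex  = createGBufferTexture (w, h, GL_DEPTH_COMPONENT24,
                                           GL_DEPTH_COMPONENT, GL_FLOAT);
  GLenum targets[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };

  glGenFramebuffersEXT (1, &fbo);
  glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, fbo);
  glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_TEXTURE_2D, colorTex, 0);
  glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                             GL_TEXTURE_2D, normalTex, 0);
  glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_TEXTURE_2D, depthTex, 0);
  glDrawBuffers (2, targets);
}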

I used the Maya SDK to create a custom exporter tool. By using Maya's C++ API I was able to link the exporter plug-in directly against my engine library, allowing me to use the engine's serialization functions. My exporter was somewhat inspired by that of UnrealEd and allows separate exporting of static meshes, skeleton poses or character animations. However, since I wasn't going to develop my own editor software, I implemented additional exporting functions for scene-wide animations, allowing the construction of cutscene sequences. The exporter encodes all these resources into package files which can be loaded directly by the engine.

In the third year of uni I took the opportunity to use my game engine in the major project. I teamed up with a couple of digital artists and designers to produce a short demo in the form of an intro to what would be a game similar to Heavy Rain. We tried to use the features of the rendering engine creatively, for example utilizing the depth-of-field effect to steer the player's attention towards an interactive object.


In order to avoid a combinatorial explosion in shading code arising from the many different material and geometry types, I implemented an automatic shader compositor. It is designed as a data-driven graph system in which inter-connected processing nodes are defined by their input data, output data and shading code. Every asset in the pipeline can register its own shading nodes, which get inserted into the shading graph automatically based on the flow of data through the network. The automatic shader compositor makes it easy to combine different types of materials with a varying number of input textures and shading modes such as diffuse mapping, normal mapping and cel shading, as well as additional geometry processing such as hardware skinning. A rough sketch of the idea is given below.
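
To illustrate the idea (this is a hypothetical sketch, not the engine's actual data structures - the names ShaderSocket and ShaderNode are made up), a node might look something like this:

typedef struct ShaderSocket {
  const char *name;   /* e.g. "worldNormal" or "diffuseColor" */
  const char *type;   /* e.g. "vec3" or "vec4" */
} ShaderSocket;

typedef struct ShaderNode {
  const char  *code;         /* shading snippet computing the outputs from the inputs */
  ShaderSocket inputs[8];    int numInputs;
  ShaderSocket outputs[8];   int numOutputs;
} ShaderNode;

/* The compositor connects every input socket to a node producing a matching
   output, orders the nodes topologically and concatenates their code into
   the final vertex and fragment shaders. */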

Overall, I am quite satisfied with the work I've done on the engine so far. I have achieved my goal of implementing some state-of-the-art visual effects as well as the supporting asset pipeline, and I learned a lot in the process. In the future, I might add more to it, either in the form of additional rendering effects or optimizations and improvements to existing techniques. In particular, I am planning to implement screen-space ambient occlusion and to explore alternative shadowing approaches (simple shadow maps turn out badly more often than not).

Sunday, 16 March 2008

Water reflections with OpenGL

I've always wanted to implement water waves with nice reflections, even more so when Half-Life 2 came out, followed by BioShock, Crysis... The water in those games looked so cool that I couldn't help but feel I was falling behind with my OpenGL skills. Unfortunately, I never really had time to focus on it, since I was so heavily into 2D vector graphics stuff.



Luckily, this year at uni we got this awesome assignment for the Interactive 3D subject, where we have to generate a water surface with some random waves going on and a boat that wobbles believably as the waves pass it. The assignment doesn't strictly require reflections on the water surface, but we do get bonus marks for making it look cooler, so I thought, well, why not do it the proper way!

In the end, implementing all this was much easier than I thought it would be. For a start, you need to render a reflection of the whole scene (minus the water surface). Since the water usually lies in the horizontal (XZ) plane, this is no harder than multiplying the modelview matrix by a negative vertical scale (a value of -1 for the Y coordinate). If the water surface is not at height 0, or the reflective surface (e.g. a mirror) lies in a different plane, just do whatever you need to get the scene mirrored over that plane. Also, remember to respecify the light positions afterwards so that they get mirrored properly, and to switch the culling face from GL_BACK to GL_FRONT (if you are doing backface culling at all), since mirroring reverses the winding of the polygon vertices.
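
Here is a minimal sketch of the mirrored pass, assuming the water lies in the plane y = waterHeight; waterHeight, lightPosition and drawScene are placeholder names for whatever your program uses.

glPushMatrix ();
glTranslatef (0.0f, 2.0f * waterHeight, 0.0f);  /* together with the scale: y' = 2h - y */
glScalef (1.0f, -1.0f, 1.0f);

/* lights must be respecified under the mirrored modelview so they get mirrored too */
glLightfv (GL_LIGHT0, GL_POSITION, lightPosition);

/* mirroring flips the winding, so cull the opposite side during this pass */
glCullFace (GL_FRONT);
drawScene ();                                   /* everything except the water surface */
glCullFace (GL_BACK);

glPopMatrix ();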


If there were no waves on the water surface, that would be it - just draw the water plane over it with a half-transparent bluish color and you are done. To be able to displace the reflection image with waves, though, you need to render the mirrored scene to a texture. You can either do it with a framebuffer object or (what I find handier) just do a glCopyTexSubImage2D. If you don't want to waste memory and are willing to sacrifice some reflection image resolution (with lots of waves no one will notice anyway), you can create a texture smaller than the window size, but don't forget to set glViewport to match the size of the texture.
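
With the copy-to-texture approach it might look roughly like this; reflectionTex, reflTexWidth/reflTexHeight and drawMirroredScene are placeholders, and the texture is assumed to have been created with glTexImage2D beforehand.

/* render the mirrored scene at the (smaller) texture resolution */
glViewport (0, 0, reflTexWidth, reflTexHeight);
drawMirroredScene ();

/* grab the result into the reflection texture */
glBindTexture (GL_TEXTURE_2D, reflectionTex);
glCopyTexSubImage2D (GL_TEXTURE_2D, 0,   /* mipmap level        */
                     0, 0,               /* offset in texture   */
                     0, 0,               /* corner of viewport  */
                     reflTexWidth, reflTexHeight);

/* back to the full window for the real scene */
glViewport (0, 0, windowWidth, windowHeight);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);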


Now that you've got the reflected image in a texture, it is time to write some shaders. Basically, what you want to achieve is to deform the way the texture is drawn to the screen. To make it look really nice, this has to be done per pixel in a fragment shader. A direct copy of the texture would mean that each fragment reads the texel exactly at its window coordinate (gl_FragCoord.xy). To get the wobbly effect, this read has to be offset by a certain amount depending on the normal of the surface at the current pixel. These normals might come from a normal map or (as in my case) be computed in the shader itself (from the first derivative of the wave function).

The really tricky part is finding the right way to transform the surface normals into a texture sampling offset. Keep in mind that (as opposed to real ray tracing) this technique is purely an illusion and not a proper physical simulation, so anything that makes it at least look right will do. Here I will give you my solution, which is based more on empirical experimentation than on a physical-mathematical approach.

The first invariant is quite obvious - pixels whose normal equals the undeformed mirror surface normal must not deform the image in any way (a totally flat mirror should just copy the texture to the screen). So what I do is take into consideration just the part of the normal vector that deviates from the mirror normal (the projection of the normal vector onto the surface plane). The easiest way to get it is to subtract out the projection of the normal onto the mirror plane normal:

vec3 flatNormal = waveNormal - dot (waveNormal, mirrorNormal) * mirrorNormal;


The next thing I do is transform the flattened normal into eye space. The XY coordinates of the result tell us how we see that normal on the screen (perspective projection not taken into account).

vec3 eyeNormal = gl_NormalMatrix * flatNormal;

We've almost got it now. We could use the XY coordinates to offset the reads into the reflection texture, but there is one problem with that: when the camera gets really close to the water surface, the projected image of the flattened normals around the center of the view gets really small, and thus the image is more distorted at the sides than in the middle. To correct this, I normalize the projected normal so that it serves only as a direction, and use the length of the unprojected flattened normal as a scale factor (note that this length is zero when the normal equals the mirror normal and approaches 1 as it becomes perpendicular to it - or rather, it is the cosine of the angle between the normal and the mirror plane).

vec2 reflectOffset = normalize (eyeNormal.xy) * length (flatNormal) * 0.1;

The 0.1 factor is there just to scale down the whole distortion. You can adjust it to your liking - just keep in mind that the greater it is, the more the reflection image will be distorted.
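
Putting the pieces together, the fragment shader might look something like the sketch below. This is not the exact shader from my assignment: the uniform names (reflectionTex, invWindowSize), the varying waveNormal and the final color blend are placeholders you would adapt to your own setup.

uniform sampler2D reflectionTex;  // the mirrored scene copied into a texture
uniform vec2 invWindowSize;       // 1.0 / window size in pixels
varying vec3 waveNormal;          // per-pixel wave normal (normal map or wave derivative)

void main ()
{
  const vec3 mirrorNormal = vec3 (0.0, 1.0, 0.0);  // water lies in the XZ plane

  // keep only the part of the normal that deviates from the mirror normal
  vec3 flatNormal = waveNormal - dot (waveNormal, mirrorNormal) * mirrorNormal;

  // how that deviation looks in eye space (used for direction only)
  vec3 eyeNormal = gl_NormalMatrix * flatNormal;

  // direction from the projected normal, magnitude from the unprojected one
  vec2 reflectOffset = vec2 (0.0);
  if (length (flatNormal) > 0.0)
    reflectOffset = normalize (eyeNormal.xy) * length (flatNormal) * 0.1;

  // sample the reflection image at the offset window position
  vec4 reflection = texture2D (reflectionTex, gl_FragCoord.xy * invWindowSize + reflectOffset);

  // blend with a half-transparent bluish water color
  gl_FragColor = mix (reflection, vec4 (0.1, 0.3, 0.5, 1.0), 0.4);
}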

Saturday, 7 July 2007

ShivaVG: open-source ANSI C OpenVG

Vector graphics algorithms on the CPU are more or less reaching their limits, and the only hope of speeding up software like Adobe Illustrator is probably to install a faster processor in your PC. A year ago I started an open-source project for a vector-graphics drawing library that would use hardware acceleration, under the name libShiva. However, it never really took off, and recently I realized why.

Basically, it's the same problem as with other current open-source vector-graphics APIs that use the graphics card. They are usually part of a very large toolkit or framework, and it's hard to use the drawing API alone without linking your project against, and making it dependent on, the whole framework. This goes for projects like Qt (and its OpenGL drawing widget) as well as Amanith, which itself depends on Qt for the creation of a window and the initialization of an OpenGL context.

libShiva had the same problem - I tried to implement a hardware-accelerated renderer and a whole GUI toolkit, together with event management like Flash offers in its ActionScript, in one single library. That made the library very heavy to use, when what we really need is a lightweight API that just renders the graphics, ideally written in ANSI C to be as portable as possible.

And here's where OpenVG comes into play. It is a royalty-free API developed by the Khronos Group (http://www.khronos.org/openvg). It originally targets smaller, portable devices (e.g. mobile phones and PDAs), but the specification itself encourages PC implementations as well. On hand-held devices the API will obviously be implemented in hardware, and there are already plans by nVidia and other companies to design chips that handle OpenVG.

Since such hardware is not available for PCs yet, I decided to implement the API on top of OpenGL, which is basically the same approach that Qt's OpenGL painter and Amanith use. Actually, there is already a project named AmanithVG which is another implementation of the OpenVG API, but it's commercial and closed source. Usually, when I see good projects made closed-source and commercialized, I get really pissed off and feel the urge to start a similar open-source project. Besides, after having a look at the API, it felt like exactly what the FOSS community needs. Moreover, in response to a blog entry by Zack Rusin about implementing the OpenVG API on top of Qt (link), there was already a request for an ANSI C version of it.

So I took the rendering code from the libShiva project, translated it from C++ to ANSI C and wrapped it in the OpenVG API to create a new implementation called ShivaVG. So far, most of the code for creating, transforming, interpolating and drawing paths is done. Imaging support is currently very limited (only the RGBA_8888 format is supported), but both linear and radial gradients work, and full Porter-Duff blending is in development. The source code is accessible via Subversion at SourceForge:

$ svn co https://shivavg.svn.sourceforge.net/svnroot/shivavg/trunk

After optimizing the code a little, performance is now much better than the old libShiva implementation. Check the lower-left corner of the example program screenshots for the number of frames per second - that's on a 2.16 GHz Core 2 Duo with a GeForce 7600 GT. The EGL API for creating an OpenVG context is not implemented yet, but you can use any technique you like to create the underlying OpenGL context (GLUT, SDL, native). The only thing to do then is to call vgCreateContextSH before any other VG call and vgDestroyContextSH when you are done. The example programs use GLUT, so you will need it to compile them. There are even Visual Studio projects provided for those of you who use Windows.
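
To give an idea of how little is needed, here is a rough sketch of a minimal GLUT-based program. It is not one of the bundled examples; the header paths and my assumption that vgCreateContextSH takes the surface size as arguments should be checked against the actual sources.

#include <GL/glut.h>
#include <vg/openvg.h>
#include <vg/vgu.h>

static VGPath path;
static VGPaint fill;

static void display (void)
{
  VGfloat clearColor[4] = {1.0f, 1.0f, 1.0f, 1.0f};
  vgSetfv (VG_CLEAR_COLOR, 4, clearColor);
  vgClear (0, 0, 400, 400);

  vgSetPaint (fill, VG_FILL_PATH);
  vgDrawPath (path, VG_FILL_PATH);

  glutSwapBuffers ();
}

int main (int argc, char **argv)
{
  VGfloat red[4] = {1.0f, 0.0f, 0.0f, 1.0f};

  glutInit (&argc, argv);
  glutInitDisplayMode (GLUT_RGBA | GLUT_DOUBLE | GLUT_STENCIL);
  glutInitWindowSize (400, 400);
  glutCreateWindow ("ShivaVG test");

  /* create the ShivaVG context once the GL context exists */
  vgCreateContextSH (400, 400);

  /* a simple filled rectangle path */
  path = vgCreatePath (VG_PATH_FORMAT_STANDARD, VG_PATH_DATATYPE_F,
                       1.0f, 0.0f, 0, 0, VG_PATH_CAPABILITY_ALL);
  vguRect (path, 100.0f, 100.0f, 200.0f, 200.0f);

  fill = vgCreatePaint ();
  vgSetParameterfv (fill, VG_PAINT_COLOR, 4, red);

  glutDisplayFunc (display);
  glutMainLoop ();

  vgDestroyContextSH ();  /* never reached with plain GLUT, shown for completeness */
  return 0;
}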




iBook died: retrieving the data

Can you imagine the worst thing that can happen when you are at the end of the semester, polishing your last assignments in order to achieve the highest marks... Can you imagine losing all that data just a few days before the due date? Well, that's exactly what happened to me!

I usually work at home on my PC, but occasionally I used to copy the data to my iBook to work at uni whenever I had some free time between lectures. Yeah, I used to, because my iBook is now dead, without a single spark of life in it. Well, what happened is this: my break was just about to finish and I routinely reached towards my bag pocket to search for the USB key. I always do that, you know - just in case anything goes bad, I copy the last files I worked on before putting the laptop to sleep.

Well, to my surprise I had somehow forgotten the USB key at home, but I thought, 'That's OK, it's not like something is going to happen exactly today, right?' So I close the iBook and wait for that cute white sleep LED to light up... but nothing happens. I lift the screen back up - black... I drag over the touchpad, press space a few times... nothing.

Actually, it was not the first time such a thing had happened - it looks like the laptop is somehow buggy and gets stuck somewhere between the 'sleep' and 'awake' states. As I always do, I detached the battery to turn the iBook off and restart it; there's no reset button on that laptop anyway. But this time it wouldn't turn on. The only thing I heard was a short 'scratching' of the disk as it powered up, and that was it. I tried to turn it on a few more times later at home, but to no avail.

I took the laptop to the nearest Apple store and they told me they wanted AUD $70 just to 'have a look at it' and $150 more to retrieve the data. I thought: no way, I am not paying that money for something I can do myself. Fortunately, I have some really cool friends who use a Mac as well, and one of them provided me with an e-book with pictures of the whole process of disassembling an iBook, which came in very handy! If you want to do it too, I suggest you google 'ibook repair book' ;)

So I went to the nearest Dick Smith to buy all the necessary tools, like tiny hex screwdrivers. The disassembly itself went quite smoothly, the biggest struggle being keeping track of all the screws so I could put them back properly when reassembling the laptop. I just placed them on a sheet of paper and noted next to each group where the screws go, but I suggest buying a toolbox with small compartments for screws, because it's very easy to accidentally move the paper and mix up all the screws lying on it.

After 'tearing' apart the laptop's case and protective shields and removing the memory card, the AirPort card and everything else in the way, the iBook's flashy internals showed up. You can see how smartly and precisely its architecture was designed to keep it as slim as possible - everything just fits nicely together. However, this super-slim design makes it hard to replace any single part, so you really have to open both sides of the laptop (upper and lower) to get the hard drive out. Well, a few more screwdriver turns and the hard drive was finally lying in front of me. I got almost slightly excited as I held it in my hand, knowing my precious data was somewhere in there, among those ones and zeros! :)

I forgot one thing, though: the hard drive inside a laptop of course doesn't use the 3.5-inch IDE connector that PCs do, so I had to buy a 3.5-to-2.5-inch IDE adapter. Unfortunately, I also had nowhere to screw the tiny hard drive into the PC case, so I just rested it on a CD case cover. Remember not to block the openings through which the disk cools, or it might overheat. The final copying of the data was trivial: use Linux, mount the Mac's HFS+ filesystem and there you go!
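
For example, something along these lines (the device node and mount point below are just placeholders - check what your system actually assigns to the disk):

$ mkdir /mnt/ibook
$ mount -t hfsplus /dev/sdb2 /mnt/ibook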

Sunday, 15 April 2007

Random Test

Pixxxxellllz pixxxelized into pixic pixart. Vectors rasterized into vectrix vectart.
Pixels in vectorz
vectorz
in
pixelz
peexeeelzzzzzzzz
.
.
.
.
Just testing the
scrolling of my
template
.
.
.
.
.
..
..
..
.
.
...........
.....................
.................................

here we go!