Dec 21, 2010

Symbolic computations

Just found a good symbolic (algebraic) computation program that can help you develop/visualize/simplify your formulas. The graphical interface is named XCAS and it uses the GIAC engine. You can download everything here.

As an example, you can "pre-compute" the rotation matrix from Euler angles for a 3-band spherical harmonics vector and visualize it as follows:

Good to start optimizing things...

Dec 19, 2010

Eye rendering

Some more rendering R&D I am working on: eye rendering.

Here are the supported features: light refraction at the cornea for point/spot and environment lighting, light concentration, light scattering in the iris, and a procedural description of the organic structure of the iris.


A brown eye


A fantasy eye
I hope you like it! ;)

Dec 12, 2010

Skin rendering - environment lighting

In parallel with the development of the graphics pipeline at Dynamixyz, my work also involves some R&D on rendering. I may show some of my work here when my technical director allows me to do it.

Some results showing the GPU Gems 3 skin rendering method, but this time with no point or spot lights: only environment lighting using spherical harmonics! Below are some screenshots showing a head model lit by its surrounding environment (with and without directional light occlusion). Of course, this is rendered in real time and with gamma correction. These screenshots emphasize the importance of using an ambient lighting model that takes into account the incoming light directions of the environment as well as directional occlusion.
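For reference, here is a minimal C++/glm sketch of how a 3-band (9-coefficient) SH environment term can be evaluated for a normal, using the classic Ramamoorthi-Hanrahan irradiance constants. This is only an illustration of the general technique, not the exact shader code of this demo.

```cpp
#include <glm/glm.hpp>

// Minimal sketch: evaluate irradiance from 9 SH coefficients (3 bands) for a
// unit surface normal n, using the usual cosine-convolution constants from
// Ramamoorthi & Hanrahan. sh[i] holds the RGB coefficient of basis function i,
// in the order L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22.
glm::vec3 shIrradiance(const glm::vec3 sh[9], const glm::vec3& n)
{
    const float c1 = 0.429043f, c2 = 0.511664f,
                c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;
    return c4 * sh[0]
         + 2.0f * c2 * (sh[3] * n.x + sh[1] * n.y + sh[2] * n.z)
         + c3 * sh[6] * n.z * n.z - c5 * sh[6]
         + 2.0f * c1 * (sh[4] * n.x * n.y + sh[5] * n.y * n.z + sh[7] * n.x * n.z)
         + c1 * sh[8] * (n.x * n.x - n.y * n.y);
}
```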


Environment lighting with a cube-map having a
pure red color on a single face.
(left without, and right with directional light occlusion)



Environment lighting with a cube-map from Humus.
(left without, and right with directional light occlusion)

This is static and I am working on an idea to make everything dynamic. The occlusion is computed on the GPU using rasterization instead of ray tracing.

The head model is a free scan released by Infinite-Realities. Thank you, Mr. Lee Perry-Smith!

Dec 4, 2010

Books + interesting CUDA facts

Hi followers!

I know I am still not updating this blog often. This is mostly due, again, to my new job and the preparation of my PhD thesis defense. So, I am doing a lot of interesting stuff at work that I may share here some day. Besides, concerning my PhD, my latest paper was successfully presented last week at VRST 2010 by my advisor (I was not able to go myself). This paper was about simulating the human visual system using the new visual attention model I proposed to predict the user's gaze during real-time first-person navigation in interactive 3D virtual environments. Also, my latest collaboration ended with a paper accepted for publication in the IEEE TVCG journal. This paper is about real-time haptic interaction with fluids: my small contribution is a cheap fluid rendering method. I may talk about it here soon. I now only have to think about my PhD defense presentation! :)

Note on CUDA

I have read the two books about CUDA (1 and 2). They are pretty amazing books for learning the basics but I was still hungry after that. So I decided to read more documents like the nVidia best practices recommendations. I was surprised to see that these books/documents advise a set of "simple" rules to follow in order to get good performance, such as "Increasing occupancy is the only way to improve latency hiding". But, as exposed in this very good presentation, you have to be careful with this advice. It reveals another set of rules for getting better performance with lower occupancy. Overall, as with every computing system, you should always try several implementations and use the one which gives the best performance.

Amazing books

- This book about the two Johns of id Software is very well written! I could not stop reading it!

- Are you looking for a book to learn physically-based rendering methods? Stop searching and buy this one! An incredible number of methods are covered in this book, with a presentation style I have just discovered: literate programming. The book covers a large range of knowledge: from the basics (ray tracing, collision, etc.) to the advanced ones (Metropolis light transport, sub-surface scattering, spherical harmonics, etc.). All the math equations are there with the corresponding source code! Perfect!

Oct 26, 2010

Best Fit Normal Map Generator + Source Code

The BFN generator is finally here! I have found time for this tonight.

So, this small application allows you to generate each face of a BFN cube-map. If you want to encode the BFN only as the length of the best fit normal or transform it to a 2D texture (as suggested in Crytek's presentation), you will have a little bit of work to do but it should not be such a big deal. You can select the resolution of the cube-map as well as the bias vector at compilation time.

Some screen-shots for 256x256x6 cube-maps:


At the origin: the BFN cube-map with a bias vector=vec3(0.5,0.5,0.5).
In the +X direction: the regular normalization cube-map.
In the +X+Y direction: a BFN cube-map with a bias vector=vec3(0.5,0.5,8.0/255.0). It results in less precision for normals having a negative Z value and more precision for normals having a positive Z value.



The reconstruction error of the regular normalization cube-map as compared to the ground-truth normal map. (scale factor of 50)



The reconstruction error of the BFN cube-map with a bias vector=vec3(0.5,0.5,0.5).
Some reconstruction errors are still visible. (scale factor of 50)


The reconstruction error of the BFN cube-map with a bias vector=vec3(0.5,0.5,8.0/255.0).
No more error patterns are visible. (scale factor of 50)


Reconstruction error on the back faces (scale factor of 50). There is a large error when using a bias vector of (0.5,0.5,8.0/255.0). However, this is not a problem since only the positive Z values matter when storing normals for a light pre-pass or deferred renderer.


Error when computing a specular lobe with the regular normalization cube-map. The specular exponent is 64.

No noticeable difference when changing the bias vector of the BFN.

You can download the source here!

And finally, the important references:
  • Amanatides & Woo ray marching paper.
  • Crytek's presentation from SIGGRAPH 2010, as well as their 2D BFN texture (which I recommend), can be downloaded here.

Oct 23, 2010

Some more links


Oh, by the way, my tool to generate the Best Fit Normals cube-map will be available shortly (together with the C++ sources). Many people have asked me about it... Now, I just need to write a blog post and put it on my personal website for download.

Oct 13, 2010

News and random links

Hi everyone!

I am currently very busy because of the end of my PhD thesis and my new job! This is why I am not currently very active.

So, as I said, my PhD is over. The manuscript is currently being proofread by my two advisors. I will defend my PhD on January 21st, 2011.

I finally had a phone interview with the game company that contacted me twice last year. Despite the fact that "I have a very interesting profile", they told me that I still "have to get more experience"... So sad... :/ So that is what I am doing right now by working at Dynamixyz as an R&D engineer. It is a new French start-up, so you can imagine that there is a lot of work to do!! :) I am mainly involved in the development of the CG part of their facial animation tools pipeline.

I want to finish this post with these interesting/funny random links:
  • Interesting paper from Insomniac on 2-band spherical harmonics with truncated triple products.
  • PatternCraft, or how to teach design patterns using StarCraft 2! Much more fun than typical school use cases! :)
  • The blog of a French guy working at Crytek. Sorry, it is in French. :)
  • Game design fundamentals blog post
  • A 16-bit ALU in Minecraft here
  • Log out fail in MoH
  • Amazing dance animations in this TF2 video
  • What exactly is a doctorate? Answer

Aug 31, 2010

Eye rendering - first try

I was on holiday near Deauville (France) last week; that is why I remained quiet.

I have started working on an eye rendering method inspired by the one presented in this paper. As you can see in the next screenshots, I have some interesting first results. The characteristics of the eye are taken from biology reports. The iris is rendered using the subsurface texture mapping method, simulating single scattering in a participating medium (in this case, the iris). The light refraction at the cornea is taken into account in a different way than what is done in the eye paper (their refraction function). The sclera (the white part) is currently not shaded.
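As a rough illustration of what refraction at the cornea can look like (a common Snell-based approach, not necessarily the refraction function used here or in the paper), here is a minimal C++/glm sketch; the cornea IOR and the flat-iris-plane assumption are mine.

```cpp
#include <glm/glm.hpp>

// Minimal sketch (assumed values, not the paper's exact model): refract the
// view ray at the cornea surface and intersect it with a flat iris plane to
// get the position used to sample the iris textures.
// P: point on the cornea (eye space), N: cornea normal (unit, pointing out),
// V: unit vector from the camera towards P, irisPlaneZ: depth of the iris plane.
glm::vec3 irisSamplePosition(const glm::vec3& P, const glm::vec3& N,
                             const glm::vec3& V, float irisPlaneZ)
{
    const float etaAirToCornea = 1.0f / 1.376f;        // approximate cornea IOR
    glm::vec3 R = glm::refract(V, N, etaAirToCornea);  // refracted direction

    // Intersect the refracted ray with the plane z = irisPlaneZ (degenerate
    // case R.z ~ 0 ignored in this sketch).
    float t = (irisPlaneZ - P.z) / R.z;
    return P + t * R;
}
```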

I hope you like it!



Blue iris rendering with refraction.
(click on the picture for higher resolution)


Green and Brown eyes.

I set up the scattering/extinction parameters quickly, so you may not find the colors of these eyes really "realistic"! I still have to improve the current algorithm and I have some interesting ideas for that.

Aug 13, 2010

Quakecon 2010: Carmack's keynote

Every developer familiar with virtual texturing must have thought: "this is a really scalable technology"! Carmack just proved it with his iPhone 4 Rage demo running at 60fps!

By the way, I am looking for a full video of the keynote. I, I mean Google, can't find it...

Also, John now has a Twitter account which works like his old .plan files! Back to the good old times?

Aug 12, 2010

BFN: about Crytek's approach

In my case, I compute the best fit RGB value given a bias vector and, of course, the direction to fit. The BFNs are stored in a cube map.

Crytek's approach only stores the length of the best fitting vector for the requested normal direction. Furthermore, as explained in the notes (slide 94), 8-bit precision is sufficient if the best fit length is stored for the normal divided by its maximum component.

As pointed out by Vince on my first post about BFN, there is some symmetry on each cube face. Indeed, the cube map can be compressed into a single 2D texture (slide 94). This allows saving video memory at the cost of several ALU operations in the fragment shader (slide 95).

When using this final representation as a 2D texture, it is not possible to change the bias vector as I proposed in my previous post. However, the results seem to be good enough with Crytek's approach... (slides 42-43) Is it worth the cost of using a cube map? My next step will be to compare the image quality with and without changing the bias vector in my deferred relief-mapped renderer.

Aug 8, 2010

Best Fit Normals: playing with the bias vector

Last night, I first implemented the Amanatides & Woo ray marching method to compute the best fit normal (BFN) cube map. The performance gain is, as expected, major. I can now generate a large cube map in a few minutes (instead of hours).
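For reference, here is a minimal C++/glm sketch of the Amanatides & Woo traversal through the RGB8 voxel grid; the caller would test, at each visited voxel, how well the unbiased and normalized voxel value fits the target direction. Names and details are illustrative, not the exact code of my generator.

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Minimal sketch of Amanatides & Woo voxel traversal through a gridSize^3
// grid covering [0,1]^3, starting at 'start' (e.g. the bias point) and
// walking along 'dir'. visit() receives each voxel coordinate in turn; the
// caller keeps the voxel whose unbiased, normalized value best fits 'dir'.
template <typename Visitor>
void traverseVoxels(glm::vec3 start, glm::vec3 dir, int gridSize, Visitor visit)
{
    dir = glm::normalize(dir);
    glm::ivec3 cell = glm::clamp(glm::ivec3(start * float(gridSize)),
                                 glm::ivec3(0), glm::ivec3(gridSize - 1));
    glm::ivec3 step;
    glm::vec3 tMax, tDelta;
    for (int i = 0; i < 3; ++i)
    {
        if (std::abs(dir[i]) < 1e-8f) {
            step[i]   = 0;
            tMax[i]   = 1e30f;   // never step along this axis
            tDelta[i] = 1e30f;
        } else {
            step[i]    = (dir[i] > 0.0f) ? 1 : -1;
            float next = (float(cell[i]) + (step[i] > 0 ? 1.0f : 0.0f)) / float(gridSize);
            tMax[i]    = (next - start[i]) / dir[i];        // distance to first boundary
            tDelta[i]  = (1.0f / float(gridSize)) / std::abs(dir[i]); // distance per cell
        }
    }
    while (cell.x >= 0 && cell.x < gridSize &&
           cell.y >= 0 && cell.y < gridSize &&
           cell.z >= 0 && cell.z < gridSize)
    {
        visit(cell);
        // Step along the axis whose next boundary is closest.
        int axis = (tMax.x < tMax.y)
                 ? (tMax.x < tMax.z ? 0 : 2)
                 : (tMax.y < tMax.z ? 1 : 2);
        cell[axis] += step[axis];
        tMax[axis] += tDelta[axis];
    }
}
```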

Second, I was thinking about changing the bias vector in order to get precision where it matters, similarly to the quantization approach presented by Crytek in this old presentation, slide 13. The goal is to get more precision (more directions) around the usual normal direction: along the Z axis. I found that it is hard to choose a good bias value. In the case where negative Z values are ignored (some game engines are in this case, but I can't remember which), choosing a bias of 0.0 would be just perfect. Otherwise, I suppose one could choose the bias value visually, or from the image difference error measured from key viewpoints. Also, encoding object normal maps using such a bias should give higher quality because normal maps rarely contain normals with negative Z.

Enough talking, here are some screenshots:


A view of the BFN cube map face (positive Z)
with a bias factor of 16 on the Z axis.


Bias modification example. Left: BFN with the usual bias,
Right: BFN with a bias of 2 on the Z axis
(poor precision for normals with negative Z).
(32*32 cube map)


Visualization of the reconstruction error (multiplied here by 70)
as compared to the true direction.
Front box: BFN with a bias of 16 on the Z axis
(poor precision for normals with negative Z),
Far box: BFN with the usual bias. (256*256 cube map)

Visualization of the reconstruction error on the positive Z face.
Left: BFN with a bias of 16 on the Z axis,
Right: BFN with the usual bias. (256*256 cube map)

As you can see in the last figure, you can get more precision out of a simple 256 BFN cube map by simply changing the bias on the Z axis.

Anton's presentation about this is now available on the course website. I will have a deeper look at the details (I implemented this method based only on my memory and ideas since SIGGRAPH). Interestingly, it seems that they do not use a cube map but a 2D texture...

As always, feel free to discuss and to ask me for the BFN cubemap textures (at any resolution now :D) if you want to try this in your engine (some developers from video game studios requested these textures last time).

Aug 4, 2010

Crytek's Best Fit Normals

Among the SIGGRAPH presentations, there was one about Crytek's rendering methods. One interesting technique quickly presented was best fit normals (BFN). This method aims to improve normal precision when normals are stored in the RGB8 format.

When using traditional scaled&biased normals in RGB8 format, accuracy errors can occur because of the low precision of the RGB8 format relative to the scale&bias. For example, consider a 256*256*6 cube map: 393216 directions can be represented. However, due to the low precision of the RGB8 format, only 219890 (55.9%) of these directions are effectively represented (many similar directions being mapped to the same compressed value). In this case, I have computed that only 1.31% of the full 256*256*256 voxel possibilities of the RGB8 values are used. Also, I have computed that each voxel effectively used represents 1.788 directions on average.

The idea behind the BFN approach is to search for the voxel that best represents (fits) a given direction: it may be a non-normalized vector. Using this method with a 256*256*6 cube map, I have found that 387107 (98.4%) of the directions are effectively represented. Furthermore, in this case, each voxel used represents 1.016 directions on average. Thus, using such a method results in a more accurate reconstruction of normals (see the screenshots). Moreover, compression is a single cube-map lookup, and reconstruction is an unbias and normalize.
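To make that last sentence concrete, here is a minimal C++/glm sketch of the runtime side with the usual (0.5,0.5,0.5) bias: compression is a lookup of the best-fit vector in the cube map followed by the scale&bias, and reconstruction is the unbias and normalize. The cube-map fetch is only an assumed helper here.

```cpp
#include <glm/glm.hpp>

// Assumed helper: returns the best-fit (possibly non-normalized) vector
// stored in the BFN cube map for the given direction, in [-1,1]^3.
glm::vec3 sampleBFNCubeMap(const glm::vec3& dir);

// Compression: look up the best-fit vector, then scale & bias to [0,1]
// before the value is written into an RGB8 target or normal-map texel.
glm::vec3 encodeNormalBFN(const glm::vec3& n)
{
    glm::vec3 bestFit = sampleBFNCubeMap(glm::normalize(n));
    return bestFit * 0.5f + 0.5f;   // scale & bias
}

// Reconstruction: unbias and normalize, exactly as for regular
// scale&bias normals.
glm::vec3 decodeNormal(const glm::vec3& stored)
{
    return glm::normalize(stored * 2.0f - 1.0f);
}
```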



Left: BFN cubemap, Right: scale&bias cubemap


Reconstruction error (absolute value scaled by 70) for BFN (left) and
scale&bias normal (right)

So how do I generate each cube face? Currently, I am using the brute force method, which is horribly slow: for each direction on the cube face, I go through every voxel of the RGB8 volume to search for the one which matches best (see the sketch below). One faster method I plan to implement later is to use the Amanatides & Woo ray marching method to march through the voxel volume along the ray direction and find the best representative voxel.
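Here is a small C++/glm sketch of that brute force search for a single target direction (illustrative, not the exact generator code):

```cpp
#include <glm/glm.hpp>

// Brute-force sketch: for one target direction 'dir', test every voxel of the
// 256^3 RGB8 cube and keep the one whose value, measured from the bias point
// (default (0.5,0.5,0.5)), points closest to 'dir'. This is 256^3 tests per
// output texel, which is why the naive generator takes hours.
glm::ivec3 bestFitBruteForce(const glm::vec3& dir, const glm::vec3& bias)
{
    glm::vec3 d = glm::normalize(dir);
    glm::ivec3 best(0);
    float bestDot = -2.0f;
    for (int r = 0; r < 256; ++r)
    for (int g = 0; g < 256; ++g)
    for (int b = 0; b < 256; ++b)
    {
        // Direction from the bias point to this RGB8 voxel.
        glm::vec3 v = glm::vec3(r, g, b) / 255.0f - bias;
        float len = glm::length(v);
        if (len < 1e-6f)
            continue;
        float cosErr = glm::dot(v / len, d);   // cosine of the angular error
        if (cosErr > bestDot) { bestDot = cosErr; best = glm::ivec3(r, g, b); }
    }
    return best;
}
```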

How can this method be used?
  • Better normal map encoding: when computing an object's normal map, instead of converting the floating-point normals to scaled&biased normals, do a texture lookup in the BFN cube map texture.
  • Deferred rendering: high quality normal buffer in RGB8! :) It could also be possible to pack a normal in a 32F channel.
  • Any other ideas?
So when I have time (I don't know when, because the end of my PhD is approaching quickly), I plan to implement Amanatides' method to accelerate the computation. Then, maybe use a better representation than a cube map.

If you have questions or want access to the cube map textures, send me an email. As always, feel free to discuss this method here.

Aug 1, 2010

Back from SIGGRAPH 2010

Wow, SIGGRAPH was awesome!

I saw and learned so many things that I can't list them here. All I can say is that it was nice attending presentations from movie studios (Weta, Pixar, ILM, DreamWorks, etc.), game studios (Crytek, DICE, Valve, etc.) as well as researchers. Now I can put faces on the names I often encounter when reading research papers or presentations.

Here are some links with very interesting courses and presentations about graphics:
  • Dice presentations
  • Advances in real-time 3D rendering course
  • Beyond programmable shading course. (This year's theme was really oriented towards future hardware evolution)
  • New: Physically Based Shading Models course
  • New: Stylized Rendering in Games course
  • New: GI across Industries courses

Jul 22, 2010

My Siggraph Schedule

Here is my schedule:
  • Sunday: Avatar in Depth,
  • Monday: All about Avatar, Detailed Surfaces, Volume and Precipitation, Real-time Live Demo,
  • Tuesday: Simulation in Production, Computer Animation Festival, Blowing $h!t up, Real-time Live Demo,
  • Wednesday: Advances in Real-Time Rendering in 3D Graphics and Games I and II,
  • Thursday: Beyond Programmable Shading I and II.
There is a lot of uncertainty for some sessions because many overlap with others I am interested in (hair rendering, stylized rendering, global illumination, physically based shading, etc.). It is really hard to make my final choice.

Moreover, two friends are presenting this year:
  • Nicolas Stoiber, PhD student at Orange Labs like me: The Mimic Game: Real-Time Recognition and Imitation of Emotional Facial Expressions. (He will write an article here about his very interesting work after SIGGRAPH),
I wish them good luck with their presentations. I hope one day I can contribute to SIGGRAPH too. Indeed, my PhD was not really suited for that kind of graphics/animation research... Maybe in my future job!!

I will post here about SIGGRAPH during the next week if I can find time between sessions and SIGGRAPH night parties. :)

Jul 20, 2010

OpenCL

Before I even did anything with CUDA, I decided to move to OpenCL, simply because I prefer the GLSL-like approach of loading programs at run time instead of using an additional dedicated compiler. I am close to finishing my C++ library with nice classes encapsulating OpenCL objects (I am using the C interface). It looks very much like what I did for my shader manager (as in the demos on my personal website). I will post it here soon in case someone wants to get it and modify it. Choosing OpenCL also gives the ability to target any platform, not only nVidia. The current problem is that I don't have a single clue about how ATI's parallel architecture works (their Stream Computing technology).
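To illustrate the GLSL-like flavor I am talking about, here is a minimal sketch using the raw OpenCL C API: the kernel stays as a plain source string and is compiled at run time, much like glShaderSource/glCompileShader. This is not my library, just the general pattern, with error checking omitted.

```cpp
#include <CL/cl.h>
#include <cstdio>

// The kernel is plain source text, compiled at run time (error checks omitted).
const char* kSource =
    "__kernel void scale(__global float* data, float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= factor;\n"
    "}\n";

int main()
{
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    // Build the program from source, exactly like loading/compiling a shader.
    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(program, "scale", NULL);

    float data[256] = {0};
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);

    size_t globalSize = 256;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);
    printf("data[0] = %f\n", data[0]);
    return 0;
}
```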

In addition to a review and my PhD thesis that I have to finish writing, I am also preparing my trip to Los Angeles for SIGGRAPH 2010. I will post my schedule tomorrow in a second post.

Bonus:
- a nice real-time CUDA ray tracer called Brigade.
- a new A-Buffer method using per-pixel linked lists to manage the dynamic allocation of pages containing depth layers. In OpenGL! :)

Jun 27, 2010

Volumetric lines 2 + News

Hello!

Sorry for the long silence but I have been very busy recently: PhD writing, a rejected paper I have to revise, internship students to take care of, researchers who want to use my engine, etc. It is better now; I will have more free time by the end of the week.

So, what is the news? The Volumetric Lines 2 demo is online. In fact, I submitted it to GPU Pro 2 but it was not accepted: I expected this answer because the method is indeed pretty simple. The results are nicer than the previous volumetric lines, but it requires geometry shaders and it is more computationally expensive.

Also, I started working on Virtual Textures a few weeks ago. I have the low-resolution feedback buffer working, with the required pages to load, the indirection table, etc. I just need to load the required pages from disk to GPU memory in a separate thread, together with an LRU cache. I will work on it again soon.
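For the cache part, here is a minimal C++ sketch of the kind of LRU page cache I have in mind (an assumed structure, not the final code): pages are identified by a packed key and the least recently used physical slot is reused when the cache is full.

```cpp
#include <list>
#include <unordered_map>
#include <cstdint>

// Minimal LRU cache sketch for virtual texture pages. A page is identified by
// a packed (mip level, x, y) key; the value is the slot it occupies in the
// physical page texture. When the cache is full, the least recently used page
// is evicted and its slot reused. Assumes insert() is only called for pages
// that are not already resident.
class PageCacheLRU
{
public:
    explicit PageCacheLRU(int capacity) : capacity_(capacity) {}

    // Returns the physical slot of the page, or -1 if it is not resident.
    int find(uint64_t pageKey)
    {
        auto it = lookup_.find(pageKey);
        if (it == lookup_.end()) return -1;
        lru_.splice(lru_.begin(), lru_, it->second); // mark as most recently used
        return it->second->slot;
    }

    // Inserts a freshly loaded page, evicting the LRU page if needed.
    // Returns the physical slot the new page should be uploaded to.
    int insert(uint64_t pageKey)
    {
        int slot;
        if ((int)lru_.size() < capacity_) {
            slot = (int)lru_.size();          // free slots still available
        } else {
            Entry& victim = lru_.back();      // evict the least recently used page
            slot = victim.slot;
            lookup_.erase(victim.key);
            lru_.pop_back();
        }
        lru_.push_front(Entry{pageKey, slot});
        lookup_[pageKey] = lru_.begin();
        return slot;
    }

private:
    struct Entry { uint64_t key; int slot; };
    int capacity_;
    std::list<Entry> lru_;                     // front = most recently used
    std::unordered_map<uint64_t, std::list<Entry>::iterator> lookup_;
};
```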

The development of my light pre-pass renderer with relief mapping is... well... deferred. :/ I have a lot of optimization ideas and I hope I can show you more about it soon.

Finally, I will go to SIGGRAPH 2010 this year as an attendee! :) Many courses and talks to attend! It will be hard to attend every session I would like to, but I will do my best. I may talk about the conference, show some pictures and report on courses/talks on this blog. Maybe I will be able to meet some of you over there! :D

Jun 1, 2010

Knee deep in the dead

Some pointers to quick but pretty good reviews of old id Tech engines and their adaptations to the iPhone!

Wolfenstein3D iPhone
Doom PC
Doom iPhone
Quake PC

Ah, those good old days... I was so young when I played Doom for the first time on a friend's computer. By the way, everyone should read Michael Abrash's Black Book, full of anecdotes and in-depth details about the Quake renderer algorithms. For instance, you would learn that John Carmack bet that he could implement dynamic lighting in the Quake engine in less than an hour. He did fail, but only by a few minutes (3 to 7, I can't remember). But hell yeah! About an hour! :)

May 19, 2010

Automatic CUDA optimizer

A promising automatic CUDA optimizer has been proposed by Huiyang Zhou et al., as you can read on his website: http://www.eecs.ucf.edu/~zhou/ (with a downloadable paper). They also plan to extend their compiler to OpenCL, and I am sure this could also be done for DX11 Compute Shaders.

The performance results are interesting since they achieve performance as good as, or better than, manually optimized programs (something which takes time to do by hand). Bonus: the code of this open-source compiler will be available soon.

Update: the code is available here. (source)

Apr 21, 2010

My Relief-Mapped Light Pre-Pass Renderer

Hi!

Just to show you some progress I have made on my new renderer: a light pre-pass renderer with relief mapping over all surfaces! An important feature of this engine is that each texel in the virtual scene is unique. No virtual texture mapping is used, simply large 4096*4096 textures! Because no virtual texturing is used, everything is resident in GPU video memory. Textures for dynamically added objects in the virtual environment are allocated in dynamic texture atlases (using a quad-tree to manage texture-space usage). As a result, when designing a map (under Maya with my exporter), you have to find a good trade-off between texel density in world space and the size of your map. For me, it is important to have unique texels everywhere to achieve the kind of effects I made before.
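For the dynamic atlases, here is a minimal C++ sketch of the classic quad-tree allocation scheme I am referring to (my illustration, not the engine code): a request either takes a free node of exactly its size, or the node is split into four children and the request recurses.

```cpp
#include <memory>

// Minimal quad-tree atlas sketch (illustrative): allocates power-of-two square
// regions inside a square atlas. A node is either free, holding an allocation,
// or split into four children. Freeing regions is omitted here.
struct AtlasNode
{
    int x, y, size;            // region covered by this node, in texels
    bool used = false;         // true if this exact node holds an allocation
    std::unique_ptr<AtlasNode> child[4];

    AtlasNode(int x_, int y_, int size_) : x(x_), y(y_), size(size_) {}

    // Returns the allocated node, or nullptr if the request does not fit.
    AtlasNode* allocate(int requestSize)
    {
        if (used || requestSize > size) return nullptr;
        if (!child[0] && requestSize == size) { used = true; return this; }

        if (!child[0]) {                     // split into four quadrants
            int h = size / 2;
            child[0] = std::make_unique<AtlasNode>(x,     y,     h);
            child[1] = std::make_unique<AtlasNode>(x + h, y,     h);
            child[2] = std::make_unique<AtlasNode>(x,     y + h, h);
            child[3] = std::make_unique<AtlasNode>(x + h, y + h, h);
        }
        for (auto& c : child)
            if (AtlasNode* n = c->allocate(requestSize)) return n;
        return nullptr;
    }
};

// Usage sketch: a 4096x4096 atlas, then a 512x512 block for a new object.
// AtlasNode atlas(0, 0, 4096);
// AtlasNode* block = atlas.allocate(512);   // block->x, block->y give the offset
```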

Yesterday, as visible in the screenshots, I finished implementing Virtual Shadow Depth Cube maps as described in ShaderX3. I may later test rendering to a cube map using a geometry shader.

Thanks to this light pre-pass engine, I will be able to easily implement/test several rendering methods like soft particles, light propagation volumes, SSAO, etc. The only thing I miss in this engine is spherical-harmonic or Source-like lightmaps (I wish I had a Beast or Turtle license for this). Currently, it is a Quake3-like lightmap: no directional information about the incoming light on surfaces.

With this engine I plan to develop a simple Quake3-like deathmatch-with-bots game as a demo. If some artists read this post and are interested in designing a deathmatch map, please send me an email. The only thing you would need is Maya 2008.



Black lightmap, two shadowed point light sources


Black lightmap, two shadowed point light sources


Grey lightmap: directional indirect lighting using
spherical harmonic volume.
For this, I use the library I have developed.

In these screenshots, I only used a uniformly colored lightmap because I did not find time to generate one for this level, and also because I just finished adding point light source support, which will be used for direct illumination (the lightmap will only contain emissive surfaces and global illumination). The next move is adding spot lights and optimizing the shaders as well as the way meshes are processed by the engine.

Feel free to ask me some questions! :)

Apr 9, 2010

Spherical Harmonics Lighting

Hello everyone!

It's time for another post on another rendering method! :) Currently, I am working on a light pre-pass renderer. I have just finished including SH lighting in it.

There exist several methods to compute the lighting solution of a virtual environment. Some methods are fully dynamic (idTech4, CryEngine) and others fully/partially static (idTech3, UnrealEngine). For fully dynamic methods, since the lighting solution is often unified, there is no problem computing the lighting of dynamic objects. However, for static methods, there can be several problems.

Let's take the case of an environment having its lighting solution stored in a static lightmap. When you add dynamic objects to the virtual environment, you have to compute their lighting solution. But how can we compute the light that reaches each dynamic object using only its position/orientation and the surrounding environment?

Common methods use probes or a light volume. For instance, in Quake3, a light volume is defined for the entire environment and each voxel stores an ambient color plus a directional colored source (Quake3 map specs). Another solution is to use probes positioned by artists in the virtual environment (Source engine). At each of these locations, irradiance can be computed and stored in several formats (SH, directional light, Source basis, etc.).

For my light pre-pass renderer, I decided to use a light volume in which each voxel contains 2-band spherical harmonics, as visible in the next screenshots.


A Quake3 map with its corresponding SH volume.


My test map with its corresponding SH volume.

For each dynamic object, the SH volume is sampled on the CPU using tri-linear interpolation (using the object's position). The final SH contribution is added during the final pass of the light pre-pass pipeline according to the normals stored in the g-buffer (and in my renderer, each surface is rendered using relief mapping).
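To make these two steps concrete, here is a small C++/glm sketch (illustrative, not the engine code): trilinear interpolation of the 4-coefficient (2-band) SH samples surrounding the object's position, then evaluation of the interpolated SH against a normal. In the renderer, the evaluation part runs in the final-pass fragment shader using the g-buffer normal.

```cpp
#include <glm/glm.hpp>

// 2-band SH: 4 RGB coefficients per sample, in the order L00, L1-1, L10, L11.
struct SH4 { glm::vec3 c[4]; };

// Linear interpolation of two SH samples, coefficient by coefficient.
SH4 lerpSH(const SH4& a, const SH4& b, float t)
{
    SH4 r;
    for (int i = 0; i < 4; ++i) r.c[i] = glm::mix(a.c[i], b.c[i], t);
    return r;
}

// Trilinear sampling: c[i][j][k] are the 8 voxels surrounding the object,
// f is the fractional position of the object inside that cell.
SH4 sampleVolume(const SH4 c[2][2][2], const glm::vec3& f)
{
    SH4 x00 = lerpSH(c[0][0][0], c[1][0][0], f.x);
    SH4 x10 = lerpSH(c[0][1][0], c[1][1][0], f.x);
    SH4 x01 = lerpSH(c[0][0][1], c[1][0][1], f.x);
    SH4 x11 = lerpSH(c[0][1][1], c[1][1][1], f.x);
    SH4 y0  = lerpSH(x00, x10, f.y);
    SH4 y1  = lerpSH(x01, x11, f.y);
    return lerpSH(y0, y1, f.z);
}

// Evaluate the sampled SH for a unit normal, using the real SH basis values
// Y00 = 0.282095 and Y1m = 0.488603 * (y, z, x). For irradiance, the stored
// coefficients would additionally include the cosine-lobe convolution.
glm::vec3 evalSH4(const SH4& sh, const glm::vec3& n)
{
    return sh.c[0] * 0.282095f
         + sh.c[1] * (0.488603f * n.y)
         + sh.c[2] * (0.488603f * n.z)
         + sh.c[3] * (0.488603f * n.x);
}
```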



A dynamic object lit only by dynamic lights (left) and
using the SH volume (right).
Note that the lighting information stored in the lightmap now
affects the object final look.

The SH volume is computed as a pre-process. For each voxel, I render the virtual environment into a cube map as in my previous demo. Finally, the surrounding colored environment stored in this cube map is projected onto the SH basis and stored in the corresponding SH volume voxel.
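Here is a small C++/glm sketch of that projection step (the texel, direction and solid-angle helpers are assumed; this is the idea, not the exact code): every cube-map texel contributes its color times the SH basis values, weighted by the solid angle the texel covers.

```cpp
#include <glm/glm.hpp>

// Assumed helpers: texel color, the (unit) direction through the texel center,
// and the solid angle the texel covers on the unit sphere.
glm::vec3 texel(int face, int x, int y, int size);
glm::vec3 texelDirection(int face, int x, int y, int size);
float     solidAngle(int face, int x, int y, int size);

// Accumulate the cube map into 4 SH coefficients (2 bands).
void projectCubeMapToSH4(int size, glm::vec3 outSH[4])
{
    for (int i = 0; i < 4; ++i) outSH[i] = glm::vec3(0.0f);

    for (int face = 0; face < 6; ++face)
    for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x)
    {
        glm::vec3 L = texel(face, x, y, size);
        glm::vec3 d = texelDirection(face, x, y, size);
        float     w = solidAngle(face, x, y, size);

        // Real SH basis, 2 bands: Y00, then Y1m in the (y, z, x) order.
        outSH[0] += L * (0.282095f * w);
        outSH[1] += L * (0.488603f * d.y * w);
        outSH[2] += L * (0.488603f * d.z * w);
        outSH[3] += L * (0.488603f * d.x * w);
    }
}
```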

I plan to store only global illumination in the lightmaps of my environments, so the SH volume will only contain the global illumination contribution. Direct lighting will then be computed on static/dynamic objects using standard unified light rendering and shadowing. I also plan to add Crytek's light propagation volumes later...

I will continue to work on this engine in parallel with the writing of my PhD thesis, the other little demos I have in the pipeline, and my job search.

New volumetric lines method

Hi readers,

I have been working on many things recently. One of these things is a new volumetric line algorithm!
My previous method (basically the same as the one proposed in ShaderX7) was really fast and yields good-looking results. I think that's why it was successfully used in the iPhone game PewPew. The only issue with this method is that you should avoid looking at lines along their direction because, in that case, the trick I use becomes visible. Also, it was not possible to shade the line based on its thickness from the current viewpoint.
Another method has been proposed by Tristan Lorach to change the volumetric line's appearance based on the angle between the view and line directions. However, the line appearance was represented by only 16 texture tiles and the interpolation between them was visible.

The new method I propose is able to render capsule-like volumetric lines with any width and for any point of view, i.e. you can look inside/through the line. It also allows the use of thickness-based visual effects.

Here is the overall algorithm of my new method:
  1. Extrude an OOBB around the volumetric line having the same width as the line. This is done using a geometry shader computing triangle strips from a single line.
  2. Compute the closest and farthest intersections between the view ray and the capsule using geometric methods and/or quadratic equation resolution. This is done in the OOBB frame of reference.
  3. Compute the thickness based on the capsule intersections and the environment depth map (see the sketch below).
  4. Shade the volumetric line based on its thickness.
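Here is a small C++/glm sketch of steps 3 and 4 (in the renderer this runs in the fragment shader; the capsule intersection from step 2 is assumed to be already computed): the visible thickness is the portion of the capsule in front of the camera and in front of the scene, and the shading uses a simple exponential extinction.

```cpp
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

// Steps 3 & 4 sketch: tNear/tFar are the capsule entry/exit distances along
// the view ray (from step 2), tScene is the distance to the opaque scene
// fetched from the environment depth map. The visible thickness is the part
// of the capsule that is in front of the camera and in front of the scene.
glm::vec4 shadeVolumetricLine(float tNear, float tFar, float tScene,
                              const glm::vec3& lineColor, float extinction)
{
    float enter     = std::max(tNear, 0.0f);       // the camera may be inside
    float exit      = std::min(tFar, tScene);      // the scene may cut the capsule
    float thickness = std::max(exit - enter, 0.0f);

    // Thickness-based shading: the thicker the capsule along the view ray,
    // the more opaque the line appears (Beer-Lambert style falloff).
    float alpha = 1.0f - std::exp(-extinction * thickness);
    return glm::vec4(lineColor, alpha);
}
```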
It is pretty simple but efficient. I will not go into the details right now: there will be a post on my website soon. I will just show some early screenshots from the current version:

The capsules representing the volumetric lines
filled with a white color


line radius can be changed


Thickness based shading and intersection with the environment



View from above and under the ground

The thickness is correct,
even if the camera is inside the volumetric lines

Mar 22, 2010

Learning CUDA

Hi there!

I have not posted here for a while because I am very busy right now! I am working on the last paper of my PhD thesis and the deadline is at the end of March... I had to conduct an experiment, process some data, and now I am going through the ANOVA results, all in parallel with the writing of the paper. After this last deadline, I will begin writing my PhD manuscript in English! :) So exciting!

During the little free time I find (mostly at night), I play Battlefield Bad Company 2 with friends. I am also implementing spherical harmonic lighting for the ambient lighting of dynamically moving objects in the current "3D engine" I am developing (a light pre-pass renderer with relief mapping on all surfaces and other cool features).

And, as written in the title, I am currently learning CUDA! Indeed, I have bought the new book by David Kirk and Wen-mei W. Hwu called Programming Massively Parallel Processors: A Hands-on Approach. I find this book well written and it nicely introduces CUDA. Good job, authors! I previously only had a broad overview of the CUDA architecture, but now the details are clear to me.
The first thing I did was to take an example from the nVidia CUDA browser and make it independent of their cutil library and others. To build and run my example, you should just need to install the CUDA Toolkit. You can download this simple program HERE. It features a simple grid of points whose heights are computed using CUDA and stored in a VBO. The result is finally displayed using OpenGL. Only the Debug and Release configurations work (32 and 64 bits). Emulation profiles will not compile right now. Also, the CUDA code is cleanly separated from the C++ code, something I did not find in many web examples. The other libraries used are glut, glu and glew (everything is in the zip file).
So when I have finished reading the book (and implementing its examples), I may start writing a little demo... :D

See you soon!

Feb 19, 2010

Trailer of Amnesia:The Dark Descent

Trailer is HERE.

This link features the teaser of the upcoming game from the talented developers of the Penumbra series: Amnesia: The Dark Descent. I found this teaser really impressive. It is a good example of a nice integration of sound, visuals and interaction!

You can also pre-order the game. And if they reach 2000 pre-orders before the 31st of May, the game will get extra content! I have not played the Penumbra series except the demo, but I want to try this one, so I have pre-ordered the game.

Do not hesitate to spread the word!

Feb 12, 2010

Fourier Opacity Mapping rocks!

The upcoming I3D conference on interactive 3D Graphics and Games will feature several interesting papers! For instance, this year will feature papers such as "Cascaded Light Propagation Volumes for Real-Time Indirect Illumination" by Anton Kaplanyan (Crytek) or "Interactive Volume Caustics in Single-Scattering Media" by Wei Hu. Another paper I found very interesting is the one by Jon Jansen and Louis Bavoil (yeah, a French guy), both at nVidia, called "Fourier Opacity Mapping". I could not resist implementing this nice paper.

Fourier Opacity Mapping (FOM) is about approximating light attenuation through a volume made of particles. Let's consider a spot light: the authors propose to reconstruct the light transmittance function along each ray using a Fourier series. The coefficients of the Fourier series are stored in the Fourier opacity map in light view space. This map is generated using the usual particle rendering, with the fragment program projecting the extinction coefficients onto the Fourier basis. Then, when rendering the particles from the camera's point of view, the corrected light attenuation for each particle can be recovered. I will not go into the details here. You can read the paper here and my report on my personal webpage. Also, you can see a video of my implementation of FOM here.
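To give an idea of the reconstruction side, here is a small C++ sketch of how transmittance can be recovered from the Fourier coefficients of the extinction function (my paraphrase of the idea; see the paper for the exact formulation): the truncated Fourier series of the extinction along the normalized light-space depth is integrated analytically and exponentiated.

```cpp
#include <cmath>
#include <vector>

// Sketch of the reconstruction: the extinction along a light ray is stored as
// a truncated Fourier series
//   sigma(x) ~ a0/2 + sum_k [ a_k cos(2 pi k x) + b_k sin(2 pi k x) ],
// with x the depth normalized to [0,1] in light space. Integrating the series
// from 0 to d gives the optical depth; the transmittance is its exponential.
// Assumes a and b hold the same number of harmonics.
float transmittance(float d, float a0,
                    const std::vector<float>& a, const std::vector<float>& b)
{
    const float twoPi = 6.2831853f;
    float opticalDepth = 0.5f * a0 * d;
    for (size_t k = 1; k <= a.size(); ++k)
    {
        float w = twoPi * float(k);
        opticalDepth += (a[k - 1] / w) * std::sin(w * d)
                      + (b[k - 1] / w) * (1.0f - std::cos(w * d));
    }
    return std::exp(-opticalDepth);
}
```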

Here are some screenshots of my implementation (top: without FOM, bottom: with FOM):

A simple particle chain. Notice the correct order of light attenuation.


Colored light attenuation through a particle block.


A grey smoke volume with some red particles. Notice that the red particles attenuate the light correctly: they only affect the color of particles that receive the light after them.

I know that my screenshots are not really eye candy, but they show that this method is really efficient at simulating colored light extinction through a volume of particles. Furthermore, if you want an example of good use by skilled artists, just have a look at the game Batman: Arkham Asylum, which implements this method.

~~UPDATE~~
My report is finally available on my website together with a demo and open source code. :)
You need a recent video card to run the demo since it uses GLSL 1.5 and renders data into 6 buffers in one pass.