Modeling and real-time rendering of participating media using the GPU

Abstract: This thesis deals with modeling, illuminating and rendering participating media in real time on graphics hardware. In the first part, we develop a method to render heterogeneous layers of fog in outdoor scenes. The medium is modeled horizontally in a 2D Haar or linear/quadratic B-spline function basis, whose coefficients can be loaded from a fog map, i.e. a grayscale density image. To give the fog its vertical thickness, it is assigned a coefficient parameterizing the extinction of the density as the altitude within the fog increases. To prepare the rendering step, we apply a wavelet transform to the fog's density map and extract a coarse approximation and a series of detail layers (B-spline wavelet bases). These details are ordered by frequency and, when summed back together, reconstruct the original density map. Each of these 2D function bases can be viewed as a grid of coefficients. During rendering on the GPU, each of these grids is traversed step by step, cell by cell, from the viewer's position to the nearest solid surface. Because the different frequencies of detail are separated during precomputation, we can optimize rendering by visualizing only the details that contribute most to the final image, stopping the traversal of each grid at a distance that depends on its frequency. We then present further work on the same type of fog: the use of the wavelet transform to represent the fog's density on a non-uniform grid, the automatic generation of density maps and their animation based on Julia fractals, and finally a first approach to single-scattering illumination of the fog, in which we simulate shadows cast by both the medium and the geometry.

In the second part, we deal with modeling, illuminating and rendering fully 3D, sampled, single-scattering media such as smoke (without physical simulation) on the GPU. Our method is inspired by light propagation volumes (LPV), a technique originally designed only to propagate fully diffuse indirect lighting. We adapt it to direct lighting and to the illumination of both surfaces and participating media. The medium is given as a set of radial basis functions (blobs) and is then converted, together with the solid surfaces, into a set of voxels, so that both can be handled in a common representation. By analogy with the LPV, we introduce an occlusion propagation volume, which we use to compute the integral of the optical density between each light source and every other cell containing a voxel generated either from the medium or from a surface. This step is integrated into the rendering process, which makes it possible to animate participating media and light sources without any further constraints.
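To make the fog-accumulation idea in the first part concrete, below is a minimal CPU-side sketch, not the thesis's GPU implementation: it sums a coarse density grid and several wavelet detail grids along a view ray, applies a vertical extinction coefficient, and stops sampling each detail grid beyond a frequency-dependent cutoff distance. All names and parameters (Grid2D, maxDistance, fogTop, falloff, the fixed marching step) are illustrative assumptions, not taken from the thesis.

#include <algorithm>
#include <cmath>
#include <vector>

// One wavelet level of the fog's horizontal density map, seen as a grid of
// coefficients (piecewise-constant sampling for simplicity).
struct Grid2D {
    int w, h;
    float cellSize;             // world-space size of one cell
    float maxDistance;          // frequency-dependent marching cutoff
    std::vector<float> coeff;   // w*h coefficients

    float sample(float x, float z) const {
        int i = std::min(std::max(int(x / cellSize), 0), w - 1);
        int j = std::min(std::max(int(z / cellSize), 0), h - 1);
        return coeff[j * w + i];
    }
};

// Optical depth accumulated along a view ray from the eye to the nearest
// solid surface. 'fogTop' and 'falloff' model the vertical thinning of the
// density with altitude; the final transmittance is exp(-tau).
float opticalDepth(const std::vector<Grid2D>& levels,   // coarse + detail grids
                   float ox, float oy, float oz,        // ray origin (eye)
                   float dx, float dy, float dz,        // normalized direction
                   float surfaceDist,                   // distance to geometry
                   float fogTop, float falloff,
                   float step = 0.5f)                   // fixed march step
{
    float tau = 0.0f;
    for (float t = 0.0f; t < surfaceDist; t += step) {
        float x = ox + t * dx, y = oy + t * dy, z = oz + t * dz;
        // vertical extinction: density decays exponentially above the fog base
        float vertical = (y < fogTop) ? 1.0f : std::exp(-falloff * (y - fogTop));
        float density = 0.0f;
        for (const Grid2D& g : levels)
            if (t < g.maxDistance)   // skip high-frequency details far from the eye
                density += g.sample(x, z);
        tau += std::max(density, 0.0f) * vertical * step;
    }
    return tau;
}

The thesis traverses each grid cell by cell on the GPU rather than with a fixed step, but the accumulation and the per-frequency cutoff follow the same logic; the transmittance used for compositing is exp(-tau).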
Document type: Theses

Cited literature: 68 references

https://tel.archives-ouvertes.fr/tel-00802395
Contributor: Abes Star
Submitted on: Tuesday, March 19, 2013 - 5:22:15 PM
Last modification on: Thursday, July 5, 2018 - 2:28:57 PM
Long-term archiving on: Thursday, June 20, 2013 - 10:15:59 AM

File

TH2012PEST1079_complete.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-00802395, version 1

Citation

Anthony Giroud. Modeling and real-time rendering of participating media using the GPU. Other [cs.OH]. Université Paris-Est, 2012. English. ⟨NNT : 2012PEST1079⟩. ⟨tel-00802395⟩
