Instant Convolution Shadows for Volumetric Detail Mapping

Daniel Patel, Veronika Šoltészová, Jan Martin Nordbotten, Stefan Bruckner

Article, ACM Transactions on Graphics, September 2013

Abstract

In this article, we present a method for rendering dynamic scenes featuring translucent procedural volumetric detail with all-frequency soft shadows being cast from objects residing inside the view frustum. Our approach is based on an approximation of physically correct shadows from distant Gaussian area light sources positioned behind the view plane, using iterative convolution. We present a theoretical and empirical analysis of this model and propose an efficient class of convolution kernels which provide high quality at interactive frame rates. Our GPU-based implementation supports arbitrary volumetric detail maps, requires no precomputation, and therefore allows for real-time modification of all rendering parameters.
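
As a rough illustration of the iterative-convolution idea described above (not the authors' GPU implementation, whose kernel class and sampling strategy are specific to the paper), the sketch below propagates a light buffer through a volume slice by slice: each step attenuates the buffer by the current slice's opacity and blurs it with a small Gaussian, so occluders farther from a receiver cast progressively softer shadows, mimicking a distant Gaussian area light. All function names and parameters here are hypothetical.

# Minimal CPU sketch of iterative convolution shadows (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def convolution_shadows(opacity, sigma_per_slice=0.8):
    """Propagate a light buffer through a volume, slice by slice.

    opacity: (num_slices, H, W) array of per-slice opacities in [0, 1],
             ordered front to back as seen from the light.
    sigma_per_slice: Gaussian blur applied at every step; repeated blurring
             widens the effective kernel with distance from the occluder,
             approximating a distant Gaussian area light source.
    Returns an array of the same shape holding the light arriving at each slice.
    """
    num_slices, h, w = opacity.shape
    light = np.ones((h, w))                 # unoccluded light at the first slice
    arriving = np.empty_like(opacity)
    for i in range(num_slices):
        arriving[i] = light                 # light reaching slice i
        light = light * (1.0 - opacity[i])  # attenuate by the slice's opacity
        light = gaussian_filter(light, sigma_per_slice)  # soften: iterative convolution
    return arriving

# Example: a single opaque blocker near the light casts a shadow
# that becomes increasingly blurred on slices farther away.
vol = np.zeros((64, 128, 128))
vol[8, 56:72, 56:72] = 1.0
light_at_slices = convolution_shadows(vol)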

Published

ACM Transactions on Graphics, 32(5):154:1–154:18, September 2013

Media

  • paper
  • www

BibTeX

@ARTICLE{Patel-2013-ICS,
  author = {Daniel Patel and Veronika \v{S}olt{\'e}szov{\'a} and Jan Martin Nordbotten
	and Stefan Bruckner},
  title = {Instant Convolution Shadows for Volumetric Detail Mapping},
  journal = {ACM Transactions on Graphics},
  year = {2013},
  volume = {32},
  pages = {154:1--154:18},
  number = {5},
  month = sep,
  abstract = {In this article, we present a method for rendering dynamic scenes
	featuring translucent procedural volumetric detail with all-frequency
	soft shadows being cast from objects residing inside the view frustum.
	Our approach is based on an approximation of physically correct shadows
	from distant Gaussian area light sources positioned behind the view
	plane, using iterative convolution. We present a theoretical and
	empirical analysis of this model and propose an efficient class of
	convolution kernels which provide high quality at interactive frame
	rates. Our GPU-based implementation supports arbitrary volumetric
	detail maps, requires no precomputation, and therefore allows for
	real-time modification of all rendering parameters.},
  keywords = {shadows, volumetric effects, procedural texturing, filtering},
  url = {http://dl.acm.org/citation.cfm?id=2492684},
}


 Last Modified: Jean-Paul Balabanian, 2014-09-11