Research Article · Open Access

# Two-Dimensional Beam Tracing from Visibility Diagrams for Real-Time Acoustic Rendering

F. Antonacci^{1} (EURASIP Member), A. Sarti^{1} (EURASIP Member), and S. Tubaro^{1} (EURASIP Member)

**2010**:642316

https://doi.org/10.1155/2010/642316

© F. Antonacci et al. 2010

**Received:** 26 February 2010 · **Accepted:** 25 August 2010 · **Published:** 26 September 2010

## Abstract

We present an extension of the fast beam-tracing method presented in the work of Antonacci et al. (2008) for the simulation of acoustic propagation in reverberant environments that accounts for diffraction and diffusion. More specifically, we show how visibility maps are suitable for modeling propagation phenomena more complex than specular reflections. We also show how the beam-tree lookup for path tracing can be entirely performed on visibility maps as well. We then contextualize this method to two different cases: channel (point-to-point) rendering using a headset, and the rendering of a wave field based on arrays of loudspeakers. Finally, we provide some experimental results and comparisons with real data to show the effectiveness and the accuracy of the approach in simulating the soundfield in an environment.

## Keywords

- Specular Reflection
- Virtual Source
- Acoustic Path
- Diffraction Coefficient
- Geometric Domain

## 1. Introduction

Rendering acoustic sources in virtual environments is a challenging problem, especially when real-time operation is required without giving up a realistic impression of the result. The literature is rich with methods that approach this problem for a variety of purposes. Such methods are roughly divided into two classes: the former is based on an approximate solution of the wave equation on a finite grid, while the latter is based on the geometric modeling of acoustic propagation. Typical examples of the first class of methods are based on the solution of the Green's or Helmholtz-Kirchhoff equation through finite and boundary element methods [1–3]. The computational effort required by the solution of the wave equation, however, makes these algorithms unsuitable for real-time operation except for a very limited range of frequencies. Geometric methods, on the other hand, are the most widespread techniques for the modeling of early acoustic reflections in complex environments. Starting from the spatial distribution of the reflectors, their acoustic properties, and the location and the radiation characteristics of sources and receivers (listening points), geometric methods cast rays in space and track their propagation and interaction with obstacles in the environment [4]. The sequence of reflections, diffractions and diffusions a ray undergoes constitutes the acoustic path that links source and receiver.

Among the many available geometric methods, a particularly efficient one is represented by beam tracing [5–9]. This method was originally conceived by Hanrahan and Heckbert [5] for applications of image rendering, and was later extended by Funkhouser et al. [10] to the problem of audio rendering. A beam is intended as a bundle of acoustic rays originating from a point in space (a real source or a wall-reflected one), which fall onto the same planar portion of an acoustic reflector. Every time a beam encounters a reflector, in fact, it splits into a set of subbeams, each corresponding to a different planar region of that reflector or of some other reflector. As they bounce around in the environment, beams keep branching out. The beam-tracing method organizes and encodes this beam splitting/branching process into a specialized data structure called *beam-tree*, which describes the visibility of a region from a point (i.e., the source location). Once the beam-tree is available, path-tracing becomes a very efficient process. In fact, given the location of the listening point (receiver), we can immediately determine which beams illuminate it, just through a "lookup" of the beam-tree data structure. We should notice, however, that with this solution the computational effort associated with the beam tracing process and that associated with path-tracing are quite unbalanced: if the environment is composed of $N$ reflectors, the exhaustive test of the mutual visibility among all the $N$ reflectors involves $O(N^2)$ tests, while the test of the presence of the receiver in the traced beams needs only $O(N)$ tests. Some solutions for speeding up the computation of the beam-tree have been proposed in the literature. As an example, in [10] the authors adopt the Binary Space Partitioning technique to select the obstacles visible from a prescribed reflector. A similar solution was recently proposed in [11], where the authors show that real-time tracing of acoustic paths is possible even in a simple dynamic environment.
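The asymmetry between tracing and lookup can be made concrete with a small sketch. The following Python fragment is an illustration of ours, not the paper's data structure: all names are hypothetical, occlusion checks are omitted, and each beam is simply stored as an angular sector around its (virtual) source, so that finding the beams that illuminate a receiver reduces to a linear scan.

```python
from dataclasses import dataclass, field
from math import atan2, pi

@dataclass
class Beam:
    """One node of a toy beam-tree: a bundle of rays leaving a virtual
    source within an angular sector (illustrative structure only)."""
    source: tuple          # (x, y) of the real or image source
    lo: float              # sector start angle (radians)
    hi: float              # sector end angle (radians)
    children: list = field(default_factory=list)

def beams_hitting(receiver, beams):
    """Linear scan of the traced beams: O(N) membership tests, as
    opposed to the O(N^2) visibility tests needed to build them."""
    hits = []
    for b in beams:
        ang = atan2(receiver[1] - b.source[1], receiver[0] - b.source[0])
        if b.lo <= ang <= b.hi:
            hits.append(b)
    return hits

beams = [Beam((0.0, 0.0), 0.0, pi / 2),
         Beam((2.0, 0.0), -pi, -pi / 2)]
print(len(beams_hitting((1.0, 1.0), beams)))   # the receiver lies in one sector
```

In an actual beam-tree the membership test would be performed against each node's dual-space segment and the lookup would descend the tree, but the linear-versus-quadratic balance is the same.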

In [12] the authors generalized traditional beam tracing by developing a method for constructing the beam-tree through a lookup on a precomputed data structure called global visibility function, which describes the visibility of a region not just as a function of the viewing angle but also of the source location itself.

Early reflections are known to carry some information on the geometry of the surrounding space and on the spatial positioning of acoustic sources. It is in the initial phase of reverberation, in fact, that we receive the echoes associated with the first wall reflections. Other propagation phenomena, such as diffusion, transmission and diffraction, tend to enrich the sense of presence in "virtual walkthrough" scenarios, especially in densely occluded environments. As beam tracing was originally conceived for the modeling of specular reflections only, some extensions of this method were proposed to account for other propagation phenomena. Funkhouser et al. [13], for example, account for diffusion and diffraction through a bidirectional beam tracing process: when the two beam-trees that originate from the receiver and the source intersect on specific geometric primitives such as edges and reflectors, propagation phenomena such as diffusion and diffraction can take place. The need to compute two beam-trees, however, poses problems of efficiency when using conventional beam tracing methods, particularly when sources and/or receivers are in motion. A different approach was proposed by Tsingos et al. [14], who use the uniform theory of diffraction (UTD) [15] by building secondary beam-trees originating from the diffractive edges. This approach is quite efficient, as the tracing of the diffractive beam-trees can be based on the sole geometric configuration of reflectors. Once source and receiver locations are given, in fact, a simple test on the diffractive beam-trees determines the diffractive paths. Again, however, this approach inherits the advantages of beam tracing but also its limits, which lie in the fact that a new beam-tree needs to be computed every time a source moves.

As already mentioned above, in [12] we proposed a method for generating a beam-tree through a lookup on the global visibility function. That method had the remarkable advantage of computing a large number of acoustic paths in real time as both source and reflector are in motion in a complex environment. In this paper we generalize the work proposed in [12] in order to accommodate diffusion and diffraction phenomena. We do so by revisiting the concept of global visibility and by introducing novel lookup methods and new operators. Thanks to these generalizations, we will also show how it is possible to work on the visibility diagrams not just for constructing beam-trees but also to perform the whole path-tracing process.

In this paper we expand and repurpose the beam tracing method for applications of real-time rendering of acoustic sources in virtual environments. Two scenarios are envisioned: in the former the user wears a headset, in the latter the whole sound field within a prescribed volume is rendered using loudspeaker arrays. We will show that the two scenarios share the same beam tracing engine which, in the first case, is followed by a path-tracing algorithm based on beam-tree lookup [12], with an additional head-related transfer function. In the second case the beam tracer is used for generating the control parameters of the beam-shaping algorithm proposed in [16]. This beam-shaping method allows us to design the spatial filter to be applied to the loudspeaker arrays for the rendering of an arbitrary beam. Other solutions exist in the literature for the rendering of virtual environments, such as wave field synthesis (WFS) and ambisonics. Roughly speaking, WFS computes the spatial filter to be applied to the speakers through an approximation of the Helmholtz-Kirchhoff equation. Interestingly enough, in [17] the task of computing the parameters of all the virtual sources in the environment is delegated to an image-source algorithm; some WFS systems therefore already partially rely on geometric methods. When rendering occluded environments, however, the image-source method tends to become computationally demanding, while fast beam tracing techniques [12] can offer a significant speedup.

It is important to notice that the method proposed in [12] was developed for modeling complex acoustic reflections in a specific class of 3D environments obtained as the Cartesian product between a 2D floor plan and a 1D (vertical) direction. This situation, for example, describes a complex distribution of vertical walls ending in a horizontal floor and ceiling. When considering acoustic wall transmission, a 2D × 1D environment becomes useful for modeling a multi-floored building with a repeated floor plan. Although 2D × 1D environments enjoy the advantages of 2D modeling (simplicity, duality, etc.), the computation of all delays and path lengths still needs to be performed in a 3D space. While this is rather straightforward in the case of geometric reflections, it becomes more challenging when dealing with diffraction and diffusion phenomena.

The paper is organized as follows. In Section 2 we review and revisit the concept of global visibility and its use for efficiently tracing acoustic paths. In Section 3 we discuss the main mathematical models used for explaining diffusion and diffraction phenomena, and we choose the one that best suits our beam tracing approach. Sections 4 and 5 focus on the modeling of diffusion and diffraction with visibility diagrams. In Section 6 we present two possible applications of the algorithm presented in this paper. In Section 7 we prove the efficiency and the effectiveness of our modeling solution. Finally, Section 8 provides some final comments and conclusions.

## 2. The Visibility Diagram Revisited

In this section we review the concept of visibility diagram, as it is a key element for the remainder of this paper. In [12] we adopted this representation for generating a specialized data structure that can swiftly provide information on how to trace acoustic beams and rays in real time with the rules of specular reflection. This approach constitutes a generalization of the beam tracing algorithm proposed by Hanrahan and Heckbert [5]. The visibility diagram is a re-mapping of the geometric structures and functional elements that constitute the geometric world (rays, beams, reflectors, sources, receivers, etc.) onto a special parameter space that is completely dual to the geometric one. Visibility diagrams are particularly useful for immediately assessing what is in the line of sight from a generic location and direction in space. We will first recall the basic concepts of visibility diagrams and provide a general view of the path-tracing problem for the specific case of purely specular reflections. This overview will be provided in a slightly more general fashion than in [12], as all the algorithmic steps will be given with reference to visibility diagrams, and will constitute the starting point for the discussions in the following sections.

### 2.1. Visibility and the Tracing Problem

In [12] we introduced the *Reference Reflector Parametrization* (RRP), a parametrization of rays based on the location of their intersection with a reference reflector and on their travel direction. Although the RRP is referred to a frame attached to a specific reflector, this choice does not represent a limitation, due to the iterative nature of the visibility evaluation process. Let $r_0$ be the reference reflector. For reasons that will be clearer later on, the RRP normalizes the frame through a translation, a rotation and a scaling of the axes in such a way that the reference reflector lies on the segment of the $x$-axis between $0$ and $1$. A generic ray is described by the equation $x = my + q$, so that it is identified by the parameter pair $(m, q)$. Figure 1 shows the reflector referred to the normalized frame in the geometric domain (left). The set of rays passing through $r_0$ is called the region of visibility from $r_0$, and it is represented by the horizontal strip $0 \le q \le 1$ (reference visibility strip) in the $(m, q)$ domain. Due to the duality between primitives in the geometric and the $(m, q)$ domains, we will sometimes refer to the RRP as the *dual space*.

We are interested in representing the mutual occlusions between reflectors in the dual space. With this purpose in mind, we split the visibility strip into *visibility regions*, each corresponding to the set of rays that hit the same reflector. According to the image-source principle, all the obstacles that lie in the same half-space as the image-source are discarded during the visibility test. As a convention, in what follows we will use the rotation of the reference reflector that brings the image-source into the half-space $y < 0$. The above parameter space turns out to play a similar role as the dual of a geometric space. In Table 1 we summarize the representation of some geometric primitives in the parameter space. A complete derivation of the relations of Table 1 can be found in [12, 18]. Notice that the relation between primitives in the two domains is one of complete duality. For example, the dual of an oriented reflector is a wedge in the $(m, q)$ domain (sort of an oriented "beam" in parameter space). Conversely, the dual of an oriented beam (a single wedge in the geometric space) is an oriented segment in the $(m, q)$ domain (sort of an oriented "reflector" in parameter space).

Table 1: Primitives in the geometric domain and their corresponding representation in ray space.

| Geometric space | Ray space |
|---|---|
| Omnidirectional bundle of nonoriented rays | Non-oriented ray or two-sided infinite reflector |
| Omnidirectional bundle of outgoing rays (source) | One-sided infinite reflector |
| Beam (double wedge) | Two-sided reflector |
| Two-sided reflector | Beam (double wedge) |
| Oriented beam (single wedge) | One-sided reflector |
| One-sided reflector | Oriented beam (single wedge) |

#### 2.1.1. Visibility Region

The parameters describing all rays originating from the reference reflector $r_0$ form the region of visibility *from* that reflector. After normalization, this region takes on the strip-like shape described in Figure 1, which we refer to as the "reference visibility strip". Those rays that originate from the reference reflector and hit another reflector $r_i$ form a subset of this strip (see Figure 1), which corresponds to the intersection between the dual of $r_i$ and the dual of $r_0$ (the reference visibility strip). The intersection of the dual of $r_i$ and the visibility strip is the visibility region of $r_i$ from $r_0$. Once the source location is specified, the set of rays passing through $r_0$ and $r_i$ and departing from that location will be a subset of the visibility region of $r_i$. One key advantage of the visibility approach to the beam tracing problem resides in the fact that we only need geometric information about the environment to compute the visibility regions, which can therefore be computed in advance.
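As an illustration, the visibility region of a reflector can be sliced at a fixed ray slope by intersecting the duals of its endpoints with the reference strip. The sketch below assumes the ray parametrization $x = my + q$, with the reference reflector normalized onto the unit segment of the $x$-axis; the helper names are ours, not the paper's.

```python
def dual_line(p):
    """Dual of a point p = (x, y) under the ray parametrization x = m*y + q:
    the rays through p form the line q = x - m*y in the (m, q) plane
    (a line whose slope is -y)."""
    x, y = p
    return lambda m: x - m * y

def visibility_interval(a, b, m):
    """Slice, at slope m, of the visibility region of the reflector with
    endpoints a and b: the q-interval between the duals of the endpoints,
    clipped to the reference visibility strip 0 <= q <= 1."""
    qa, qb = dual_line(a)(m), dual_line(b)(m)
    lo, hi = max(min(qa, qb), 0.0), min(max(qa, qb), 1.0)
    return (lo, hi) if lo <= hi else None   # None: no rays at this slope

# reflector parallel to the reference one, one unit above it, sliced at m = 0
print(visibility_interval((0.2, 1.0), (0.9, 1.0), 0.0))   # (0.2, 0.9)
```

Precomputing these intervals over the whole $(m, q)$ strip is what makes the visibility diagram a source-independent, reusable structure.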

#### 2.1.2. Dual of Multiple Reflectors: Visibility Diagrams

When the visibility regions of two reflectors overlap, we need to establish which reflector *overrides* which in their overlap. Two solutions for the occlusion problem are possible: the first, already presented in [12], is based on a simple test in the geometric domain. An arbitrary ray chosen in the overlap of visibility regions can be cast to evaluate the front-to-back ordering of visibility regions or, more simply, to determine which oriented reflector is first met by the test ray. An example is provided in Figure 2 where, if $r_0$ is the reference reflector, we end up having an occlusion between the other two reflectors $r_1$ and $r_2$, which needs to be sorted out. A test ray is picked at random within the overlapping region to determine which reflector is hit first by the ray. This particular example shows that, unless we consider each reflector as the combination of two oppositely-facing oriented reflectors, we cannot be sure that the occlusion problem can be disambiguated. In this case, for example, $r_1$ occludes $r_2$ for some rays, and $r_2$ occludes $r_1$ for others. As shown in Table 1, a two-sided reflector corresponds to a double wedge in ray space, each wedge corresponding to one of the two faces of the reflector. By considering the two sides of each reflector as individual oriented reflectors, we end up with four distinct wedge-like regions in ray space, thus removing all ambiguities. The overlap between the visibility regions of two one-sided reflectors arises every time the extreme lines of the corresponding visibility regions intersect. We recall that the dual of a point $p = (x_p, y_p)$ is a line whose slope is $-y_p$, and that the extreme lines of the visibility region of a reflector $r_i$ are the duals of the endpoints of $r_i$. Under the assumption that two reflectors $r_i$ and $r_j$ never intersect in the geometric domain, we can reorder one-sided reflectors in front-to-back order by simply looking at the slopes of the extreme lines of their visibility regions: where the two visibility regions overlap, the reflector whose extreme lines have the smaller absolute slope (i.e., whose endpoints are closer to the reference reflector) occludes the other.
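The geometric disambiguation test described above (casting a test ray picked inside the overlap and seeing which reflector it meets first) can be sketched as follows; the parametrization $x = my + q$ and all names are assumptions of this illustration.

```python
def first_hit(m, q, reflectors):
    """Cast the test ray x = m*y + q, leaving the reference reflector at
    y = 0, and return the name of the reflector it meets first, i.e. the
    one intersected at the smallest positive y."""
    best, best_y = None, float("inf")
    for name, ((xa, ya), (xb, yb)) in reflectors.items():
        # segment point: (xa + t*(xb - xa), ya + t*(yb - ya)), t in [0, 1];
        # substitute into x = m*y + q and solve for t
        denom = (xb - xa) - m * (yb - ya)
        if abs(denom) < 1e-12:
            continue                       # ray parallel to the segment
        t = (m * ya + q - xa) / denom
        if not 0.0 <= t <= 1.0:
            continue                       # ray misses the segment
        y = ya + t * (yb - ya)
        if 0.0 < y < best_y:
            best, best_y = name, y
    return best

walls = {"r1": ((0.1, 1.0), (0.8, 1.0)),
         "r2": ((0.3, 2.0), (1.2, 2.0))}
print(first_hit(0.0, 0.5, walls))   # "r1": it occludes "r2" for this ray
```

Since the test only involves the geometry of the reflectors, its outcome can be cached in the visibility diagram once and for all.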

### 2.2. Tracing Reflective Beams and Paths in Dual Space

#### 2.2.1. Tracing Beams

In this paragraph we summarize the tracing of beams in the geometric space using the information contained in the visibility diagrams; further details on this specific topic can be found in [12]. Tracing can be readily done by scanning the visibility diagram along the line that represents the dual of the virtual source. In fact, that line will be partitioned into a number of segments, one per visibility region. Each segment will correspond to a subbeam in the geometric space. Consider the configuration of reflectors of Figure 3(a). The first step of the algorithm consists of determining how the complete pencil of rays produced by the source is partitioned into beams. This is done by evaluating the visibility from the source using traditional beam tracing. This initial splitting process produces two classes of beams: those that fall on a reflector and those that do not. The beams and the corresponding beam-tree are shown in Figures 3(a) and 3(b), respectively. We now consider the splitting of one of these beams, shown in Figure 4. The image-source is represented in the dual space by the line that is the dual of its location. The beam will therefore be a segment on that line, which will be partitioned into a number of segments, one for each region of the visibility diagram. In Figure 4(a) the beam splitting is accomplished in the $(m, q)$ domain, while in Figure 4(b) we can see the corresponding subbeams in the geometric domain. This process is iterated for all the beams that fall onto a reflector. At the end of the beam tracing process we end up with a tree-like data structure, each node of which contains information that identifies the corresponding beam:

- (i)
- (ii)
- (iii)
- (iv)
- (v) the parent node (if any),
- (vi) a list of the children nodes (if at least one exists).

The last two items are useful when reclaiming the "reflection history" of a beam. Given the above information, we are immediately able to represent the beams (i.e., segments) in the $(m, q)$ domain.
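The splitting of a beam along the dual line of its virtual source can be illustrated with a sampled stand-in for the exact interval intersection of [12]. In this sketch (all names are ours) the visibility regions are supplied as hypothetical callables returning a $q$-interval for each slope $m$, and the source line $q = x_s - m y_s$ is labeled sample by sample.

```python
def split_source_line(source, regions, m_samples):
    """Partition the dual line of the (virtual) source, q = x_s - m*y_s,
    into sub-beams by recording which visibility region each sampled
    point (m, q) falls into, then merging runs of identical labels.
    A label of None marks rays that hit no reflector."""
    xs, ys = source
    labels = []
    for m in m_samples:
        q = xs - m * ys
        hit = None
        for name, interval_at in regions.items():
            lo, hi = interval_at(m)
            if lo <= q <= hi:
                hit = name
                break
        labels.append(hit)
    beams, prev = [], object()
    for lab in labels:
        if lab != prev:                  # a new run starts a new sub-beam
            beams.append(lab)
            prev = lab
    return beams

# two fixed visibility regions splitting the strip, image source at y < 0
regions = {"r1": lambda m: (0.0, 0.42), "r2": lambda m: (0.42, 1.0)}
ms = [i / 10 - 0.5 for i in range(11)]
print(split_source_line((0.5, -1.0), regions, ms))   # ['r1', 'r2']
```

The exact algorithm intersects the source line with the region boundaries analytically instead of sampling, but the resulting partition into sub-beams is the same.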

#### 2.2.2. Tracing Paths

## 3. Mathematical Models of Diffraction and Diffusion

In this section we investigate some mathematical models used in the literature to quantitatively describe diffraction and diffusion. Later we will choose the model that best suits our beam tracing method.

### 3.1. Models of Diffusion

Two parameters are commonly used in the literature to characterize the behavior of a diffusive surface: the *scattering coefficient* and the *diffusion coefficient* [24, 25]. The diffusion coefficient measures the similarity between the polar response of a Lambertian reflection and the actual one. This coefficient is expressed as the correlation index between the actual and the diffusive polar responses corresponding to a wavefront coming from a direction perpendicular to the surface. The scattering coefficient measures the ratio between the energy diffused in nonspecular directions and the total (specular and diffused) reflected energy. This parameter is useful when we are interested in modeling diffusion in reverberant enclosures, but it does not account for the directions of the diffused wavefronts. This approximation is reasonable in the presence of a large number of diffusive reflections, but tends to become restrictive when considering first-order diffusion only (i.e., ignoring diffusion of diffused paths). This is why in this paper we make the additional assumption that diffusive surfaces be wide: this way, the range of directions of diffused propagation turns out to be wide enough to minimize the impact of the above approximation. We will use the scattering coefficient to weight the contributions coming from totally diffuse reflections (modeled by Lambert's cosine law) and from specular reflections.
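As a toy illustration of this weighting (not the paper's exact rendering formula), one can blend a specular contribution and a Lambertian contribution through the scattering coefficient $s$:

```python
from math import cos, pi

def reflected_intensity(incident, s, theta_spec, theta_out, beamwidth=0.05):
    """Toy blend of specular and diffuse reflection weighted by the
    scattering coefficient s: a fraction (1 - s) of the energy leaves in
    the specular direction theta_spec, while a fraction s is spread over
    all directions following Lambert's cosine law (angles measured from
    the surface normal; normalization factors are omitted)."""
    specular = incident * (1.0 - s) if abs(theta_out - theta_spec) < beamwidth else 0.0
    diffuse = incident * s * cos(theta_out)
    return specular + diffuse

# surface with s = 0.3 observed in the specular direction: 70% of the
# energy stays specular, plus the Lambertian contribution at 45 degrees
print(reflected_intensity(1.0, 0.3, pi / 4, pi / 4))
```

With $s = 0$ the surface is purely specular; with $s = 1$ it is a pure Lambertian diffuser.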

### 3.2. Models of Diffraction

Diffraction is a very important propagation mode, particularly in densely occluded environments. Failing to properly account for this phenomenon in such situations could result in a poorly realistic rendering or even in annoying auditory artifacts. In this section we provide a brief description of three techniques for rendering diffraction phenomena: the Fresnel Ellipsoid, the line of sources, and the Uniform Theory of Diffraction (UTD). We will then explain why the UTD turns out to be the most suitable approach to the modeling of diffraction in conjunction with beam tracing.

#### 3.2.1. Fresnel Ellipsoids

Let us consider a source $S$ and a receiver $R$ with an occluding obstacle in between. According to the Fresnel-Kirchhoff theory, the portion of the wavefront that is occluded by the obstacle does not contribute to the signal measured at $R$, which therefore differs from what we would have with unoccluded spherical propagation. In order to avoid using the Fresnel-Kirchhoff integral, we can adopt a simpler approach based on Fresnel ellipsoids. If $d$ is the distance between $S$ and $R$, only objects lying on paths whose length is between $d$ and $d + \lambda/2$ are considered as obstacles, where $\lambda$ is the wavelength. If $P$ is the generic location of the secondary source, the locus of points that satisfy the equation $|SP| + |PR| = d + \lambda/2$ is an ellipsoid with foci in $S$ and $R$. The portion of the ellipsoid that is occluded by obstacles provides an estimate of the absolute value of the diffraction filter's response. It is important to notice that the size of the Fresnel ellipsoid depends on the signal wavelength. As a consequence, in order to study diffraction in a given configuration, we need to estimate the occluded portion of the Fresnel ellipsoids at the frequencies of interest. In [26] the author proposes to use the graphics hardware to estimate the hidden portions of the ellipsoids. The main limit of the Fresnel ellipsoid is the absence of phase information: from the hidden portions of the ellipsoid, in fact, we can only infer the absolute value of the diffraction filter. If we need a more accurate rendering of diffraction, we must resort to other techniques.
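The ellipsoid test is straightforward to state in code. The sketch below (function names are ours) checks whether a point obstacle falls inside the first Fresnel ellipsoid, i.e., whether the detour it imposes on the direct path stays within half a wavelength:

```python
from math import dist

def obstructs_fresnel_zone(src, rcv, obstacle, wavelength):
    """A point obstacle matters at this wavelength only if it lies inside
    the ellipsoid with foci src and rcv defined by
    |SP| + |PR| <= d + wavelength/2."""
    d = dist(src, rcv)
    detour = dist(src, obstacle) + dist(obstacle, rcv)
    return detour <= d + wavelength / 2.0

src, rcv = (0.0, 0.0), (10.0, 0.0)
print(obstructs_fresnel_zone(src, rcv, (5.0, 0.2), 1.0))   # True: inside the ellipsoid
print(obstructs_fresnel_zone(src, rcv, (5.0, 3.0), 1.0))   # False: outside at this wavelength
```

Because the ellipsoid shrinks as the wavelength decreases, the same obstacle may obstruct low frequencies while leaving high frequencies unaffected, which is why the test must be repeated at the frequencies of interest.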

#### 3.2.2. Line of Sources

In [27] the authors propose a framework for accurately quantifying diffraction phenomena. Their approach is based on the fact that each point on a diffractive edge receives the incident ray and then re-emits a muffled version of it. The edge can therefore be seen as a line of secondary sources. The acoustic wave that reaches the receiver will then be a weighted superposition of all wavefronts produced by such edge sources.

In order to quantitatively determine the impact of diffraction in closed form, we need to be able to evaluate the visibility of a region (environment) from a line (edge of secondary sources). As far as we know, there are no results in the literature concerning the evaluation of regional visibility from a line. There are, however, several works that simplify the problem by sampling the line of sources. This way, visibility is evaluated from a finite number of points [28–30]. This last approach can be readily accommodated into our framework. However, as we are interested in a fast rendering of diffraction, we prefer to look into alternate formulations.

#### 3.2.3. Uniform Theory of Diffraction

In a way, the Geometrical Theory of Diffraction (GTD) allows us to compactly account for all contributions of a line distribution of sources. In fact, if we were to integrate all the infinitesimal contributions over an infinite edge, we would end up with only one significant path, namely the one that complies with the Keller condition, as all the other contributions would end up canceling each other out. The impact of diffraction on the source signal is rendered by a diffraction coefficient that depends on the frequency, on the angle between the incident ray and the edge, and on the angular aperture of the diffracting wedge (see Section 6.1 and [32] for further details). This geometric interpretation of diffraction is also adopted by the UTD. The difference between GTD and UTD is in how such diffraction coefficients are computed (see Section 6.1).

The use of the UTD in beam tracing is quite convenient, as it only involves one incident ray per diffractive path. The UTD, however, assumes that the wedge be of infinite extension and perfectly reflective, which in some cases is too strong an assumption. Nonetheless, the advantages associated with considering only the shortest path make the UTD an ideal framework for accounting for diffraction in beam tracing applications. Notice that when the incident ray is orthogonal to the edge ($\theta_i = \pi/2$), the conic surface flattens onto a disc. This particular situation would be of special interest to us if we were considering an inherently 2D geometry. This, however, is not our case. We are, in fact, considering the situation of "separable" 3D environments [12], which result from the Cartesian product between a 2D environment (floor map) and a 1D (vertical) direction. This special geometry (sort of an extruded floor map) requires the modeling of diffraction and diffusion phenomena in a 3D space. The Uniform Theory of Diffraction is, in fact, inherently three-dimensional, but our approach to the tracing of diffractive rays makes use of fast beam tracing, whose core is two-dimensional. In order to model UTD in fast beam tracing, we therefore need to first flatten the 3D geometry onto a 2D environment and later adapt the 2D diffractive rays to the 3D nature of the UTD. In order to clamp the 3D geometry down to the floor map, we need to establish a correspondence between the 3D geometric primitives that contribute to the Uniform Theory of Diffraction and some 2D geometric primitives. For example, when projected on a floor map, an infinitely long diffracting edge becomes a diffractive point, and a 3D diffracted ray becomes a 2D diffracted ray. When tracing diffractive beams, each wedge illuminated (directly or indirectly) by the source will originate a disk of diffracted rays, as shown in Figure 8. At this point we need to consider the 3D nature of the environment. We do so by "lifting" the diffracted rays in the vertical direction, ending up with sort of an extruded cylinder containing all the rays that are diffracted by the edge. However, when we specify the locations of the source and the receiver, we find that this set also includes paths that do not honor the Keller-cone condition $\theta_d = \theta_i$, and are therefore to be considered unfeasible. The removal of all unfeasible diffracted rays can be done during the auralization phase. During auralization, in fact, we select the paths coming from the closer diffractive wedges, as they are considered to be more perceptually relevant. The validation is a costly iterative process; therefore we only apply it to paths that are likely to be kept during the auralization.
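The Keller-cone feasibility check used to discard unfeasible lifted rays can be sketched as a comparison of the angles that the incident and diffracted rays form with the edge (an illustration of ours, not the paper's implementation):

```python
from math import acos, isclose, sqrt

def on_keller_cone(edge_dir, incident, diffracted, tol=1e-6):
    """Keller-cone feasibility test: a diffracted ray is valid only if it
    makes the same angle with the edge as the incident ray (theta_d =
    theta_i), so that all valid diffracted rays form a cone around the
    edge direction."""
    def angle_with_edge(v):
        dot = sum(a * b for a, b in zip(edge_dir, v))
        norm = sqrt(sum(a * a for a in v)) * sqrt(sum(a * a for a in edge_dir))
        return acos(dot / norm)
    return isclose(angle_with_edge(incident), angle_with_edge(diffracted), abs_tol=tol)

edge = (0.0, 0.0, 1.0)                                          # vertical edge
print(on_keller_cone(edge, (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)))   # True: both at 45° to the edge
print(on_keller_cone(edge, (1.0, 0.0, 1.0), (0.0, 1.0, 0.0)))   # False: 45° versus 90°
```

For orthogonal incidence ($\theta_i = \pi/2$) the cone degenerates into the disc mentioned above, and the test reduces to checking that the diffracted ray is also orthogonal to the edge.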

### 3.3. New Needs and Requirements

As already said above, we are interested in extending the use of visibility maps for an accurate modeling and a fast rendering of diffusion and diffraction phenomena. As visibility diagrams were conceived for modeling specular reflections, it is important to discuss which new needs and requirements must be considered.

*Diffraction*

Visibility regions can be used for accommodating and modeling diffraction phenomena. In fact, according to the UTD, when illuminated by a beam, a diffractive edge becomes a virtual source with specific characteristics. Our goal is to model the indirect illumination of the receiver by means of secondary paths: wavefronts are emitted from the source; after an arbitrary number of reflections they fall onto the diffractive edge, which in turn illuminates the receiver after an arbitrary number of reflections. A common simplification adopted in works that deal with this phenomenon [14] consists of assuming that second- and higher-order diffractions have a negligible impact on the auralization result. This is, in fact, a perceptually reasonable choice that considerably reduces the complexity of the problem. A simple solution for implementing the phenomenon using the tracing tools at hand consists of deriving a specialized beam-tree for each diffractive source; we will see later how. Another important aspect to consider in the modeling of diffraction is the Keller-cone condition [31], as briefly mentioned above: with reference to Figure 8, we have to retain only the paths for which $\theta_d = \theta_i$. Tsingos et al. [14] proposed to account for it by generating a reduced beam-tree, constrained by a generalized cone that conservatively includes the Keller cone. The excess rays that do not belong to the Keller cone are removed afterwards through an appropriate check. We will see later that this approach can be implemented using the visibility diagrams.

*Diffusion*

Let us consider a source and a receiver, both facing a diffusive surface. In this case, each point of the surface generates an acoustic path between source and receiver. This means that the set of rays that emerge from the diffusing surface no longer form a beam (i.e., no virtual source can be defined as they do not meet in a specific point in space). In fact, according to Huygens principle, all points of the diffusive surface can be seen as secondary sources on a generally irregular surface, therefore we no longer have a single virtual source. Unlike diffraction, diffusion indeed poses new problems and challenges, as it prevents us from directly extending the beam tracing method in a straightforward fashion. One major difference from the specular case is the fact that the interaction between multiple diffusive surfaces cannot be described through an approach based on tracing, as we would have to face the presence of closed-loop diffusive paths. On the other hand, the impact of a diffusive surface on the acoustic field intensity is rather strong, therefore we cannot expect an acoustic path to still be of some significance after undergoing two or more diffusive reflections. It is thus quite reasonable to assume that any relevant acoustic paths would not include more than one relevant diffusive reflection along its way. We will see later on that this assumption, reasonably adopted by other authors as well (see [13]) opens the way to a viable solution to the real-time rendering of such acoustic phenomena. In fact, even if a diffusive surface does not preserve beam-like geometries, it is still possible to work on the visibility regions to speed up the tracing process between a source and a receiver through a diffusive reflection. 
This can be readily generalized to the case in which a chain of rays goes from a source through a series of specular reflections and finally undergoes a diffusive reflection before reaching the receiver (a diffusive path between a virtual source and a real receiver). A further generalization covers the case in which all reflections but one are specular, the diffusive reflection occurring somewhere in the middle of the chain. This last case corresponds to a diffusive path between a virtual source and a "virtual receiver", which can be computed by means of two intersecting beam-trees (a forward one from the source to the diffusive reflector and a backward one from the receiver to the diffusive reflector).
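Under the single-diffusion assumption, combining the forward and backward beam-trees reduces to pairing every source-to-reflector chain with every receiver-to-reflector chain. The sketch below illustrates this bookkeeping; the path representation (lists of wall indices) and the function name are our assumptions, not the paper's data structures.

```python
from itertools import product

def combine_diffusive_paths(forward_paths, backward_paths):
    """Pair every source->diffuser path with every receiver->diffuser
    path; each pair is one complete diffusive path (at most one diffusive
    bounce, per the single-diffusion assumption). Paths are given as
    lists of wall indices; the backward leg is reversed so that it runs
    diffuser->receiver."""
    return [(f, list(reversed(b)))
            for f, b in product(forward_paths, backward_paths)]

# Two specular chains reach the diffuser from the source, one from the
# receiver (hypothetical wall indices for illustration):
fwd = [[0], [0, 2]]
bwd = [[3, 1]]
paths = combine_diffusive_paths(fwd, bwd)
```

With F forward and B backward chains this yields F×B complete diffusive paths, each of which is then weighted by the diffusion model of the reflector.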

## 4. Tracing Diffusive Paths Using Visibility Diagrams

As already mentioned, the rendering of diffusion phenomena is commonly based on bidirectional beam tracing, from both the source and the receiver. The need to trace beams not just from the source but also from the receiver requires a certain degree of symmetrization in the definitions. For example, we need to introduce the concept of "virtual receiver", which is the location of the receiver as it is iteratively mirrored against reflectors.

The diffuse paths can be quite easily represented in the RRP of the reference reflector. The path from a point on the reflector to the source is, in fact, the intersection of the dual of that point with the dual of the source; similarly, the ray from the same point to the receiver is the intersection of the dual of the point with the dual of the receiver. As a consequence, we do not just have the single ray that corresponds to the intersection of the source line and the receiver line (same point, same direction), but a whole collection of rays corresponding to the horizontal segment that connects the source line and the receiver line (same point on the reflector but different directions).

## 5. Tracing Diffractive Beams and Paths Using Visibility Diagrams

In this section we extend the use of visibility diagrams to model diffractive paths and, using the UTD, we generalize the fast beam tracing method of [12] to account for this propagation phenomenon.

### 5.1. Selection of the Diffractive Wedges

We define a *wedge* as a geometric configuration of two or more walls meeting in a single edge. If the angular opening of the wedge is smaller than π and both the receiver and the source fall inside the wedge, then source and receiver are in direct visibility. Not all wedges are, therefore, worth retaining. Even when a wedge is diffractive, we can still find configurations in which source and receiver are in direct visibility. When this happens, diffraction is less relevant than the direct path, and we discard these diffractive paths. With reference to Figure 11, we are interested in auralizing diffraction in the two regions marked as I and II, where source and receiver are not necessarily in conditions of mutual visibility. For each of the two regions we build a beam-tree. This selection process returns a list of diffractive wedges and their coordinates.
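The retention test can be sketched as follows. The predicate `retain_wedge`, the π threshold on the open (air) region, and the example angles are our illustrative assumptions; the paper's actual selection works on the listed wedge coordinates.

```python
import math

def retain_wedge(opening_angle, source_visible, receiver_visible):
    """Retain a wedge for diffraction rendering only when it can cast a
    shadow: the open (air) region must exceed pi. Configurations where
    source and receiver already see each other directly are dropped,
    since diffraction is then dominated by the direct path."""
    return opening_angle > math.pi and not (source_visible and receiver_visible)

wedges = [
    (math.radians(270), False, True),   # occluding corner -> keep
    (math.radians(270), True, True),    # mutually visible -> drop
    (math.radians(90), False, False),   # convex corner, opening < pi -> drop
]
kept = [w for w in wedges if retain_wedge(*w)]
```

Only wedges passing this test contribute diffractive beam-trees, which keeps the precomputation stage small.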

### 5.2. Tracing Diffractive Beam-Trees

### 5.3. Diffractive Paths Computation

Once source and receiver are specified, we can finally build the diffractive paths. Consider the generic reflective path that links the source to a given diffractive wedge within the beam-tree of region I (of region II). As far as the auralization of diffraction is concerned, such a path is completely defined once we specify the source location, the positions of the points of incidence of the ray on the walls (possibly in normalized coordinates), and the location of the diffractive edge. Collecting all such paths, we obtain the set of paths between the source and the diffractive edge inside the beam-tree of region I (II). Similarly, we define the paths between the receiver and each diffractive wedge in the beam-tree of region I (II), and the corresponding set of paths between the receiver and the diffractive edge.

In order to satisfy the Keller-cone condition, we have to determine the point on the diffractive edge for which the angle between the incoming ray and the edge equals the angular aperture of the Keller cone. When dealing with diffractive beam-trees that also include one or more reflections, this condition alone is no longer sufficient for determining the location of the virtual source, as we must also satisfy Snell's law at each reflection inside the path. In [14] the authors propose to compute the diffraction and reflection points along the diffractive path through a system of nonlinear equations, solved with an iterative Newton-Raphson algorithm. In this paper we adopt the same approach.
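For the simplest case of a single diffraction with no intermediate reflections, the search reduces to one unknown: the position along the edge. The sketch below (our own simplification of the general nonlinear system solved in [14]) exploits the fact that the Keller-cone condition coincides with stationarity of the total path length; Newton-Raphson needs a reasonable initial guess, supplied here as `t0`.

```python
import numpy as np

def diffraction_point(S, R, A, u, t0=0.0, tol=1e-10, max_iter=50):
    """Newton-Raphson search for the edge point D(t) = A + t*u at which
    incident and diffracted rays make the same angle with the edge
    (Keller-cone condition), i.e. a stationary point of the total path
    length L(t) = |D - S| + |R - D|."""
    S, R, A, u = (np.asarray(v, dtype=float) for v in (S, R, A, u))
    u = u / np.linalg.norm(u)

    def g(t):  # L'(t): zero exactly where the Keller condition holds
        D = A + t * u
        return (u @ (D - S) / np.linalg.norm(D - S)
                + u @ (D - R) / np.linalg.norm(D - R))

    t = t0
    for _ in range(max_iter):
        h = 1e-6
        dg = (g(t + h) - g(t - h)) / (2 * h)  # numerical L''(t)
        step = g(t) / dg
        t -= step
        if abs(step) < tol:
            break
    return A + t * u

# Symmetric configuration: source and receiver mirror each other across
# a vertical edge, so the diffraction point must lie at z = 0.
D = diffraction_point(S=[-1.0, 1.0, 0.0], R=[1.0, 1.0, 0.0],
                      A=[0.0, 0.0, -5.0], u=[0.0, 0.0, 1.0], t0=4.5)
```

In the general case of [14], one such unknown per reflection point is added and the scalar iteration becomes a multivariate Newton-Raphson on the full system.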

## 6. Applications to Rendering

In this section we discuss the above beam tracing approach in the context of acoustic rendering. As anticipated, we first discuss the more traditional case of channel-based (point-to-point) rendering. This is the case of auralization when the user is wearing a headset.

We will then discuss the case of geometric wavefield rendering.

### 6.1. Path-Tracing for Channel Rendering

In this section we propose and describe a simple auralization system based on the solutions discussed above. Due to their different nature, we distinguish between reflective, diffusive, and diffractive echoes. In particular, diffraction involves a perceptually relevant low-pass filtering of the signal, hence we devote particular attention to it rather than to diffusion and reflection. Diffraction is rendered through a frequency-dependent coefficient: each diffractive wedge therefore acts as a filter on the incoming signal, with an apparent impact on the overall computational cost. Our auralization algorithm therefore selects only the most significant paths, based on a set of heuristic rules that take the power of each diffractive filter into account.
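One such heuristic can be sketched as follows: each diffractive path carries a frequency-sampled filter, and paths whose mean filter power falls too far below the strongest one are dropped. The threshold, function name, and filter representation are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def prune_diffractive_paths(paths, filters, keep_db=-40.0):
    """Keep only paths whose mean diffraction-filter power is within
    keep_db of the strongest path. 'filters' holds one frequency-sampled
    complex response per path."""
    powers = np.array([np.mean(np.abs(H) ** 2) for H in filters])
    ref = powers.max()
    mask = 10.0 * np.log10(powers / ref) >= keep_db
    return [p for p, m in zip(paths, mask) if m]

# Three hypothetical paths: unit gain, -6 dB, and a negligible -80 dB one.
filters = [np.full(128, 1.0), np.full(128, 0.5), np.full(128, 1e-4)]
kept = prune_diffractive_paths(["p0", "p1", "p2"], filters)
```

Pruning at this stage avoids convolving the source signal with filters that would be inaudible anyway.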

#### 6.1.1. Auralization of Reflective Paths

As far as reflections are concerned, we follow the approach proposed in [12]: the ith echo is characterized by the length l_i of the corresponding path. The magnitude of the ith echo is Γ^(n_i)/l_i, Γ being the reflection coefficient and n_i the number of reflections (easily determined by inspecting the beam-tree). The delay associated to the echo is l_i/c, where c is the speed of sound.
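The rule amounts to two one-line computations per echo, sketched below. The reflection coefficient and speed-of-sound values here are illustrative placeholders, not the paper's measured parameters.

```python
def echo_params(path_length, num_reflections, gamma=0.9, c=340.0):
    """Amplitude and delay of a reflective echo under the geometric rule
    described above: magnitude gamma**n / l (distance spreading plus one
    reflection loss per bounce), delay l / c."""
    amplitude = gamma ** num_reflections / path_length
    delay = path_length / c
    return amplitude, delay

# A 17 m path with two wall bounces:
amp, tau = echo_params(path_length=17.0, num_reflections=2)
```

The pairs (amplitude, delay) over all beam-tree paths directly populate the taps of the reflective part of the impulse response.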

#### 6.1.2. Auralization of Diffraction

As motivated in Section 5, we resort to the UTD to render diffraction. In order to auralize diffractive paths, we have to compute a diffraction coefficient, which exhibits a frequency-dependent behavior. A comprehensive tutorial on the computation of the diffraction coefficient in the UTD may be found in [14]. We remark here that, in order to compute the diffractive filter, for each path we need some geometrical information about the wedge (available at a precomputation stage) and about the path, which becomes available only after running the Newton-Raphson algorithm described in the previous section.

#### 6.1.3. Auralization of Diffusive Paths

Accurate modeling of diffusion is typically based on statistical methods [33]. Our modeling solution is, in fact, aimed at auralization and rendering applications, therefore we resort to an approximate but still reliable approach that does not significantly impact the computational cost: we use the Lambert cosine law, which operates on energies, to compute the energy response. We then convert the energy response into a pressure response by taking its square root. A similar idea was developed in [34], where the author combines beam tracing and radiosity.
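A minimal single-patch sketch of this energy-to-pressure conversion is shown below, assuming a Kuttruff-style Lambert term cos(θ_in)·cos(θ_out)·area/(π·r_out²) and inverse-square spreading from the source to the patch; all constants and the function name are illustrative.

```python
import numpy as np

def diffusive_pressure(src, rcv, patch_center, patch_normal, patch_area, E0=1.0):
    """Energy reaching the receiver after one Lambert (diffuse) bounce on
    a small patch, converted to a pressure contribution via square root."""
    n = patch_normal / np.linalg.norm(patch_normal)
    v_in = patch_center - src
    v_out = rcv - patch_center
    r_in, r_out = np.linalg.norm(v_in), np.linalg.norm(v_out)
    cos_i = max(0.0, -v_in @ n / r_in)   # incidence angle w.r.t. patch normal
    cos_o = max(0.0, v_out @ n / r_out)  # observation angle w.r.t. patch normal
    energy = (E0 / (4 * np.pi * r_in**2)) * cos_i * cos_o \
             * patch_area / (np.pi * r_out**2)
    return np.sqrt(energy)

# Normal-incidence example: doubling the receiver distance halves the
# pressure contribution (energy falls by four, pressure by its root).
src = np.array([0.0, 0.0, 1.0])
patch = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 0.0, 1.0])
p_near = diffusive_pressure(src, np.array([0.0, 0.0, 1.0]), patch, normal, 0.01)
p_far = diffusive_pressure(src, np.array([0.0, 0.0, 2.0]), patch, normal, 0.01)
```

Summing such contributions over the patches of the diffusive reflector, with the appropriate per-patch delays, yields the diffusive part of the response.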

### 6.2. Beam Tracing for Sound Field Rendering

The beam-tree contains all the information that we need to structure a sound field as a superposition of individual beams, each originating from a different image-source. A solution was recently proposed for reconstructing an individual beam in a parametric fashion using a loudspeaker array [16]. Here, an arbitrary beam of prescribed aperture, orientation, and origin is reconstructed using an array of loudspeakers. In particular, the least-squares difference between the wavefields produced by the array of loudspeakers and by the virtual source is minimized over a set of predefined control points. The minimization returns a spatial filter to be applied to each loudspeaker. It is interesting to note that the approach described in [16] offers the possibility of designing a spatial filter that performs a nonuniform weighting of the rays within the same beam. This feature enables the rendering of "tapered" beams, which is particularly useful when dealing with diffractive beams. In fact, the diffraction coefficient (see [14] for further details) assigns different levels of energy to rays within the same beam, according to the reciprocal geometric configuration of the source, the wedge, and the direction of travel of the ray departing from the wedge.
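At a single frequency, the least-squares step has the familiar linear-algebra form sketched below. The propagation matrix here is randomly generated for illustration; in the method of [16] its entries would be the Green's-function gains from each loudspeaker to each control point, and the target vector would sample the virtual source's beam.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: M loudspeakers, K control points. G[k, m] is the
# (here random; normally Green's-function) gain from speaker m to control
# point k; p is the target beam's field sampled at the control points.
M, K = 8, 32
G = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))
p = rng.normal(size=K) + 1j * rng.normal(size=K)

# Spatial filter minimizing || G w - p || over the control points.
w, *_ = np.linalg.lstsq(G, p, rcond=None)
residual = np.linalg.norm(G @ w - p)
```

Repeating the solve per frequency bin yields one spatial filter per loudspeaker; tapered beams are obtained by shaping the target vector p nonuniformly across the beam's rays.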

The rendering of the overall sound field is finally achieved by adding together the spatial filters of the individual beams. More details and some preliminary results of this method can be found in [35].

## 7. Experimental Results

In order to test the validity of our solution, we performed a series of simulation experiments as well as a measurement campaign in a real environment. An initial set of simulations was performed with the goal of assessing the computational efficiency of our techniques for the auralization of reflective, diffractive, and diffusive paths, separately considered. In order to assess the accuracy of our method, we constructed the impulse responses of a given environment (on an assigned grid of points) through both simulation and direct measurements. From such impulse responses, we derived a set of parameters typically used for describing reverberation. The comparison between the measured parameters and the simulated ones was aimed at assessing the extent of the improvement brought by cumulatively introducing diffraction and diffusion into our simulation.

### 7.1. Computation Time

The goal of this first set of experiments is threefold:

- (i)
compare the beam-tree building time of visibility-based beam tracing with that of traditional beam tracing [5];

- (ii)
measure the diffractive path-tracing time;

- (iii)
measure the diffusive path-tracing time.

For diffraction, in particular, we measured:

- (i)
the number of diffractive paths with respect to the number of walls in the environment;

- (ii)
the computational time for auralizing the diffractive paths.

### 7.2. Validation

The reverberation descriptors compared between measured and simulated impulse responses are:

- Early Decay Time (EDT)
- Normalized Energy of the Impulse Response
- Center Time
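The EDT and center time named above can be computed from an impulse response with the standard room-acoustics definitions, sketched below on a synthetic exponential decay (the normalized-energy descriptor depends on a paper-specific normalization and is not reproduced here; sampling rate and decay constant are illustrative).

```python
import numpy as np

def schroeder_db(h):
    """Backward-integrated (Schroeder) energy decay curve in dB."""
    edc = np.cumsum(h[::-1] ** 2)[::-1]
    return 10.0 * np.log10(edc / edc[0])

def early_decay_time(h, fs):
    """EDT: 6 x the time the Schroeder curve takes to fall from 0 to -10 dB."""
    d = schroeder_db(h)
    n10 = np.argmax(d <= -10.0)  # first sample at or below -10 dB
    return 6.0 * n10 / fs

def center_time(h, fs):
    """Center time: first moment of the squared impulse response."""
    t = np.arange(len(h)) / fs
    e = h ** 2
    return float(np.sum(t * e) / np.sum(e))

# Synthetic IR: pure exponential whose energy decays 60 dB over 1 s,
# so the EDT should come out very close to 1 s.
fs = 8000
t = np.arange(fs) / fs
h = np.exp(-3.0 * np.log(10.0) * t)
```

On measured responses the same functions apply after truncating the noise floor, which otherwise biases the Schroeder integral.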

## 8. Conclusions

In this paper we proposed an extension of the visibility-based beam tracing method proposed in [12], which now allows us to model and render propagation phenomena such as diffraction and diffusion, without significantly affecting the computational efficiency. We also improved the method in [12] by showing that not just the construction of the beam-tree but also the whole path-tracing process can be entirely performed on the visibility maps. We finally showed that this approach produces quite accurate results when comparing simulated data with real acquisitions. Thanks to that, this modeling tool proves particularly useful every time there is a need for an accurate and fast simulation of acoustic propagation in environments of variable geometry and variable physical characteristics.

## Declarations

### Acknowledgment

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 226007.

## Authors’ Affiliations

## References

- Kopuz S, Lalor N: Analysis of interior acoustic fields using the finite element method and the boundary element method. *Applied Acoustics* 1995, 45(3):193-210. 10.1016/0003-682X(94)00045-W
- Kludszuweit A: Time iterative boundary element method (TIBEM)—a new numerical method of four-dimensional system analysis for the calculation of the spatial impulse response. *Acustica* 1991, 75:17-27.
- Ciskowski R, Brebbia C: *Boundary Element Methods in Acoustics*. Elsevier, Amsterdam, The Netherlands; 1991.
- Krokstad A: Calculating the acoustical room response by the use of a ray tracing technique. *Journal of Sound and Vibration* 1968, 8(1):118-125. 10.1016/0022-460X(68)90198-3
- Heckbert PS, Hanrahan P: Beam tracing polygonal objects. *Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '84), July 1984*, 119-127.
- Dadoun N, Kirkpatrick D, Walsh J: The geometry of beam tracing. *Proceedings of the ACM Symposium on Computational Geometry, June 1985, Sedona, Ariz, USA*, 55-61.
- Monks M, Oh B, Dorsey J: Acoustic simulation and visualisation using a new unified beam tracing and image source approach. *Proceedings of the 100th Audio Engineering Society Convention (AES '96), 1996*.
- Stephenson U, Kristiansen U: Pyramidal beam tracing and time dependent radiosity. *Proceedings of the 15th International Congress on Acoustics, June 1995, Trondheim, Norway*, 657-660.
- Walsh JP, Dadoun N: What are we waiting for? The development of Godot, II. *Journal of the Acoustical Society of America* 1982, 71(S1):S5.
- Funkhouser T, Carlbom I, Elko G, Pingali G, Sondhi M, West J: A beam tracing approach to acoustic modeling for interactive virtual environments. *Proceedings of the Annual Conference on Computer Graphics (SIGGRAPH '98), July 1998*, 21-32.
- Laine S, Siltanen S, Lokki T, Savioja L: Accelerated beam tracing algorithm. *Applied Acoustics* 2009, 70(1):172-181. 10.1016/j.apacoust.2007.11.011
- Antonacci F, Foco M, Sarti A, Tubaro S: Fast tracing of acoustic beams and paths through visibility lookup. *IEEE Transactions on Audio, Speech and Language Processing* 2008, 16(4):812-824.
- Funkhouser T, Min P, Carlbom I: Real-time acoustic modeling for distributed virtual environments. In *Proceedings of ACM Computer Graphics (SIGGRAPH '99), 1999, Los Angeles, Calif, USA*. Edited by: Rockwood A. 365-374.
- Tsingos N, Funkhouser T, Ngan A, Carlbom I: Modeling acoustics in virtual environments using the uniform theory of diffraction. *Proceedings of the 28th International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2001), August 2001*, 545-552.
- Kouyoumjian RG, Pathak PH: A uniform geometrical theory of diffraction for an edge in a perfectly conducting surface. *Proceedings of the IEEE* 1974, 62(11):1448-1461.
- Canclini A, Galbiati A, Calatroni A, Antonacci F, Sarti A, Tubaro S: Rendering of an acoustic beam through an array of loudspeakers. *Proceedings of the 12th International Conference on Digital Audio Effects (DAFx '09), 2009*.
- Berkhout AJ: A holographic approach to acoustic control. *Journal of the Audio Engineering Society* 1988, 36(12):977-995.
- Foco M, Polotti P, Sarti A, Tubaro S: Sound spatialization based on fast beam tracing in the dual space. *Proceedings of the 6th International Conference on Digital Audio Effects (DAFx '03), September 2003, London, UK*, 198-202.
- Brouard B, Lafarge D, Allard J-F, Tamura M: Measurement and prediction of the reflection coefficient of porous layers at oblique incidence and for inhomogeneous waves. *Journal of the Acoustical Society of America* 1996, 99(1):100-107. 10.1121/1.415222
- Kleiner M, Gustafsson H, Backman J: Measurement of directional scattering coefficients using near-field acoustic holography and spatial transformation of sound fields. *Journal of the Audio Engineering Society* 1997, 45(5):331-346.
- Nocke C: In-situ acoustic impedance measurement using a free-field transfer function method. *Applied Acoustics* 2000, 59(3):253-264. 10.1016/S0003-682X(99)00004-3
- Thomasson SI: Reflection of waves from a point source by an impedance boundary. *Journal of the Acoustical Society of America* 1976, 59(4):780-785. 10.1121/1.380943
- Funkhouser T, Tsingos N, Jot JM: Sounds good to me! Computational sound for graphics, virtual reality, and interactive systems. *Proceedings of ACM Computer Graphics (SIGGRAPH '02), July 2002, San Antonio, Tex, USA*.
- Kuttruff H: *Room Acoustics*. 3rd edition. Elsevier, Amsterdam, The Netherlands; 1991.
- Beranek LL: *Concert and Opera Halls: How They Sound*. Acoustical Society of America through the American Institute of Physics; 1996.
- Tsingos N: *Simulating High Quality Virtual Sound Fields for Interactive Graphics Applications*, Ph.D. dissertation. Université J. Fourier, Grenoble, France; 1998.
- Biot MA, Tolstoy I: Formulation of wave propagation in infinite media by normal coordinates with an application to diffraction. *Journal of the Acoustical Society of America* 1957, 29:381-391. 10.1121/1.1908899
- Lokki T, Savioja L, Svensson P: An efficient auralization of edge diffraction. *Proceedings of the 21st International Conference of the Audio Engineering Society (AES '02), May 2002*.
- Svensson UP, Fred RI, Vanderkooy J: An analytic secondary source model of edge diffraction impulse responses. *Journal of the Acoustical Society of America* 1999, 106(5):2331-2344. 10.1121/1.428071
- Torres RR, Svensson UP, Kleiner M: Computation of edge diffraction for more accurate room acoustics auralization. *Journal of the Acoustical Society of America* 2001, 109(2):600-610. 10.1121/1.1340647
- Keller JB: Geometrical theory of diffraction. *Journal of the Optical Society of America* 1962, 52:116-130. 10.1364/JOSA.52.000116
- McNamara D, Pistorius C, Malherbe J: *Introduction to the Uniform Geometrical Theory of Diffraction*. Artech House; 1990.
- Rindel JH: The use of computer modeling in room acoustics. *Journal of Vibroengineering* 2000, 3:41-72.
- Lewers T: A combined beam tracing and radiant exchange computer model of room acoustics. *Applied Acoustics* 1993, 38(2-4):161-178.
- Antonacci F, Calatroni A, Canclini A, Galbiati A, Sarti A, Tubaro S: Soundfield rendering with loudspeaker arrays through multiple beam shaping. *Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA '09), 2009*, 313-316.
- Rife DD, Vanderkooy J: Transfer-function measurement with maximum-length sequences. *Journal of the Audio Engineering Society* 1989, 37(6):419-444.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.