Wave Field Synthesis (WFS) is a technique for spatial audio rendering that lets users create acoustic environments in the virtual realm. WFS produces “artificial” wave fronts synthesized by a large number of individually driven loudspeakers. These wave fronts appear to originate from a virtual starting point called the notional source or virtual source. Unlike conventional spatialization techniques such as stereo, the localization of virtual sources in WFS does not depend on the listener's position.

The Physical Fundamentals of Wave Field Synthesis
Wave Field Synthesis is based on Huygens' principle, which states that any wave front can be regarded as a superposition of elementary spherical waves. Consequently, any wave front can be synthesized from such elementary waves: a computer controls a large number of loudspeakers and actuates each one at the exact time the desired virtual wave front would pass through it.
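As a rough illustration of this idea, the sketch below computes when each speaker of a linear array should fire so that the superposed elementary waves form a spherical front emanating from a virtual source behind the array. All names, the geometry, and the 1/√r amplitude taper are illustrative assumptions, not an established WFS driving function.

```python
import math

C = 343.0  # speed of sound in air, m/s

def driving_delays(source_xy, speaker_xys):
    """Per-speaker (delay_s, gain): each speaker fires when the virtual
    wave front from the source reaches it, with a hypothetical 1/sqrt(r)
    amplitude taper for spherical spreading."""
    out = []
    for sx, sy in speaker_xys:
        r = math.hypot(sx - source_xy[0], sy - source_xy[1])
        out.append((r / C, 1.0 / math.sqrt(r)))
    return out

# 8 speakers 10 cm apart on the x-axis; virtual source 2 m behind the array
speakers = [(i * 0.10, 0.0) for i in range(8)]
delays = driving_delays((0.35, -2.0), speakers)
```

The speakers nearest the virtual source fire first and loudest; the timing differences alone shape the synthesized wave front.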
In 1988, Professor Berkhout of Delft University of Technology developed the procedure for doing this. It is mathematically based on the Kirchhoff-Helmholtz integral, which states that the sound pressure within a source-free volume is completely determined once the sound pressure and the particle velocity are known at every point on its surface.
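In its standard frequency-domain form (reconstructed here from the general literature, not reproduced from this article, and with sign conventions that vary between texts), the Kirchhoff-Helmholtz integral reads:

```latex
P(\mathbf{x}, \omega) = \oint_{\partial V} \left[
  G(\mathbf{x}|\mathbf{x}_0, \omega)\,
  \frac{\partial P(\mathbf{x}_0, \omega)}{\partial n}
  - P(\mathbf{x}_0, \omega)\,
  \frac{\partial G(\mathbf{x}|\mathbf{x}_0, \omega)}{\partial n}
\right] \mathrm{d}S(\mathbf{x}_0),
\qquad
G(\mathbf{x}|\mathbf{x}_0, \omega)
  = \frac{e^{-\mathrm{j}\frac{\omega}{c}\,|\mathbf{x}-\mathbf{x}_0|}}
         {4\pi\,|\mathbf{x}-\mathbf{x}_0|}
```

Here P is the sound pressure at a point x inside the source-free volume V, ∂/∂n is the derivative along the surface normal of the boundary ∂V, and G is the free-field Green's function; the two surface terms correspond to a monopole layer and a dipole layer on the boundary.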
Essentially, this means that any sound field can be reconstructed if the acoustic velocity and sound pressure are reproduced at every point on the surface of its volume. This approach is the basic principle of Holophony.
To reproduce a sound field this way, the entire surface of the volume would have to be covered with closely spaced monopole and dipole loudspeakers, each driven with its own signal. The listening area would also have to be anechoic to satisfy the source-free-volume assumption. In practice, this is nearly impossible.
According to the Rayleigh II integral, the sound pressure at each point of a half-space is determined if the sound pressure at each point of its dividing plane is known. Since the most accurate acoustic perception is achieved in the horizontal plane, a horizontal loudspeaker line is typically arranged in a circle or rectangle around the listener.
Any point on this horizontal plane of loudspeakers can serve as the origin of the synthesized wave front. This origin represents the virtual acoustic source and behaves much like a material acoustic source at the same position. Unlike standard stereo reproduction, virtual sources do not move along when the listener moves from one point of the room to another.
For sources behind the loudspeakers, the array produces convex wave fronts. Sources in front of the loudspeakers can be rendered by concave wave fronts that first focus in the virtual source and then diverge again. The reproduction inside the volume is therefore incomplete: it breaks down when the listener sits between this focused inner source and the speakers.
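The focused case can be sketched in the same spirit: the delays are effectively time-reversed, so that nearer speakers fire later and the elementary waves converge in the focus point before diverging again. As before, the names and geometry below are illustrative assumptions.

```python
import math

C = 343.0  # speed of sound in air, m/s

def focused_delays(focus_xy, speaker_xys):
    """Per-speaker delay in seconds, offset so the farthest speaker
    fires at t = 0; nearer speakers fire later, and the elementary
    waves all meet in the focus point before diverging again."""
    dists = [math.hypot(x - focus_xy[0], y - focus_xy[1])
             for x, y in speaker_xys]
    d_max = max(dists)
    return [(d_max - d) / C for d in dists]

# same hypothetical 8-speaker row; focus point 1 m in front of it
speakers = [(i * 0.10, 0.0) for i in range(8)]
delays = focused_delays((0.35, 1.0), speakers)
```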
Procedural Advantages of Wave Field Synthesis
Wave field synthesis can create a sound field in which the acoustic sources hold very stable positions. This is achieved either with level and time information stored in the impulse response of the recording room or with the model-based mirror-source approach. It is even possible to craft a virtual copy of a genuine sound field that is nearly indistinguishable from the real one.
When the listener moves within the rendition area, the experience matches a corresponding change of location in the recording room. Listeners are no longer limited to standing in a “sweet spot.”
MPEG-4 is an object-oriented transmission standard created and standardized by the Moving Picture Experts Group. It allows the acoustic model (or the impulse response) and the dry recorded audio signal to be transmitted separately; every virtual acoustic source requires its own (mono) audio channel. The spatial sound field of a recording room is a combination of the direct wave of the acoustic source and a pattern of mirror acoustic sources distributed throughout the space, produced by reflections from the recording-room surfaces.
A significant amount of spatial information is inevitably lost when this mirror-source distribution is mapped onto only a few transmission channels. On the rendition side, the spatial distribution can be synthesized much more accurately.
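As a toy illustration of the mirror-source idea, the sketch below reflects a point source across each wall of a rectangular room to obtain its first-order mirror sources; the function name and the simplified 2-D geometry are assumptions for illustration only.

```python
def first_order_mirrors(src, room):
    """First-order mirror (image) sources of point source src = (x, y)
    in a rectangular room (width, depth) with walls at x = 0, x = width,
    y = 0 and y = depth: reflect the source across each wall."""
    x, y = src
    w, d = room
    return [(-x, y), (2 * w - x, y), (x, -y), (x, 2 * d - y)]

# source 1 m from the left wall and 2 m from the front wall, 4 m x 6 m room
mirrors = first_order_mirrors((1.0, 2.0), (4.0, 6.0))
```

Higher-order reflections correspond to mirroring the mirrors again, which is how the full mirror-source pattern of a room builds up.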
Wave field synthesis also offers an advantage over conventional, channel-oriented rendition procedures. Virtual acoustic sources known as “virtual panning spots,” which carry the signal content of the associated channels, can be placed far outside the material rendition area. This decreases the influence of the listener's position, because the relative changes in levels and angles are much smaller than with material loudspeaker boxes placed close together, and it broadens the sweet spot to cover almost the entire rendition area. Wave field synthesis is therefore not only compatible with conventional transmission methods but can also greatly improve their reproduction.
Problems that Remain
One of the most obvious issues is that the original sound field is reduced to the horizontal plane of the loudspeaker lines: acoustic sources outside this plane are hardly reproduced. The problem is particularly noticeable because acoustic damping is necessary in the rendition area; without this acoustic treatment, however, the source-free-volume condition of the mathematical approach would be violated.
Because wave field synthesis attempts to simulate the acoustic characteristics of the recording space, the acoustics of the rendition area must be suppressed. There are two main ways to achieve this: arrange the walls so that they are absorbing and non-reflective, or keep playback within the near field. The latter requires either a large diaphragm surface or loudspeakers coupled very closely to the hearing zone.
The “truncation effect” is another cause of disturbance in the spherical wave front. Because the resulting wave front is a superposition of elementary waves, a sudden pressure change occurs at the end of the speaker row, where no further speakers contribute elementary waves. This sudden change in pressure causes a “shadow wave” effect, which can be reduced by decreasing the volume of the outer loudspeakers. For virtual acoustic sources in front of the loudspeaker arrangement, however, this pressure change travels ahead of the actual wave front and becomes clearly audible.
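Decreasing the outer loudspeaker volume is typically done with a tapering window. The sketch below applies a half-cosine ramp to the outermost speakers of a row; the window shape and parameter names are assumptions for illustration, not a prescribed WFS taper.

```python
import math

def edge_taper(n_speakers, n_edge):
    """Per-speaker gain: 1.0 in the middle of the row, fading to 0 at
    both ends with a half-cosine ramp over the outer n_edge speakers."""
    gains = [1.0] * n_speakers
    for i in range(n_edge):
        w = 0.5 * (1.0 - math.cos(math.pi * i / n_edge))  # ramps 0 -> 1
        gains[i] = w
        gains[n_speakers - 1 - i] = w
    return gains

gains = edge_taper(16, 4)  # taper the 4 outermost speakers at each end
```

Softening the edges this way trades a weaker shadow wave against a slightly smaller effective array.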
Another major problem with WFS is cost. A large number of individual transducers must be placed close together; otherwise spatial aliasing effects become audible. Spatial aliasing results from sampling the wave front with a finite number of transducers, each contributing an elementary wave.
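A commonly quoted rule of thumb (stated here as an assumption, not taken from this text) places the spatial aliasing frequency of a linear array at f_alias = c / (2·Δx·sin θ), where Δx is the speaker spacing and θ the angle of incidence of the wave:

```python
import math

C = 343.0  # speed of sound in air, m/s

def aliasing_frequency(spacing_m, angle_deg):
    """Rule-of-thumb spatial aliasing frequency for a linear array with
    the given speaker spacing and wave incidence angle (in degrees)."""
    s = math.sin(math.radians(angle_deg))
    return float("inf") if s == 0 else C / (2.0 * spacing_m * s)

f_worst = aliasing_frequency(0.10, 90.0)  # 10 cm spacing, worst case
```

Even with speakers only 10 cm apart, the worst-case aliasing frequency lands well inside the audible range, which is why so many closely spaced transducers, and hence so much expense, are needed.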
Discretisation also causes position-dependent, narrow-band dips in the frequency response within the rendition range. Their frequency depends on the listener's angle to the loudspeaker arrangement and on the angle of the virtual acoustic source.

Market Maturity and Research
Research into WFS has continued at Delft University since 1988, and ten institutes throughout Europe took part in this research as part of the European Union's CARROUSO project. The Fraunhofer Institute for Digital Media Technology (IDMT) created the WFS sound system IOSONO, whose loudspeaker rows were installed in some theaters and cinemas with positive results.
In July 2008, the Technical University of Berlin broadcast the first live WFS transmission, from Cologne Cathedral into lecture hall 104. With 2,700 loudspeakers on 832 independent channels, this room contains the largest speaker system in the world. Until recently, the procedure had not been successfully applied for home-audio purposes, and significant acceptance problems still persist. If these problems are overcome, the potential for creating virtual acoustic environments will become very intriguing.