On Monday, October 11th, the AES Melbourne Section held another Zoom online meeting, due to Victoria’s continuing and now world record-breaking pandemic lockdown.
Over thirty members and guests joined the meeting to hear inventor and Vastigo Ltd CTO Joe Hayes speak on his DaS AuReality loudspeaker technology in a presentation entitled:
“From Stradivarius to Speaker – technology for improvements in sound delivery and room acoustics treatment”
The DaS AuReality system claims to reduce the impact of room acoustics on the listening experience and to free that experience from sweet-spot limitations, by using an algorithm to create Diffusion at the Source (DaS).
After welcoming everyone, Section Chair Graeme Huon introduced Joe, who started by outlining some of his early work on improving musical instrument performance.
He recounted how his university thesis on loudspeaker design, coupled with his background as a musician, led him to work on the acoustics of the guitar and its similarity to speaker boxes with panel resonances. This in turn led him to multi-pickup techniques with vent simulations for a more natural amplified guitar sound. Concentrating on the modal vibrations of stringed instruments, including acoustic guitar and violin bodies, his work led to the development of the PASSAC (PreAmplified Split Signal Acoustic Contour), a guitar signal-processing device, and related products.
Joe then went on to describe his studies into psychoacoustic effects, pointing out that pitch is the dominant perceptual cue for music, and outlining his model of “pitch trains” and the overlay of patterns of pitch intervals.
Joe then moved on to his latest work on loudspeakers, and his aim of producing an immersive experience closer to the original source.
Joe pointed out the acoustic degradation caused by coherent early reflections, using a visual analogy, the in-phase pulse propulsion of the jellyfish versus the wave-like motion of the Spanish Dancer fish (TDP), to show the difference between time-delayed and coherent waves.
He went on to describe the Zwicker effect, where the relationship between bandwidth and perceived volume is not linear: the volume appears constant until the bandwidth reaches a threshold point, then increases rapidly. Joe also pointed out that perceived volume, or level, is a learned parameter.
He indicated that there were 32 critical bands (32 “pitch trains”) from the Basilar Membrane into the brain, with each band being defined by this critical bandwidth.
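The critical bands Joe referred to are commonly approximated in the psychoacoustics literature by the Zwicker–Terhardt formulas, which give the critical bandwidth around a centre frequency and the corresponding critical-band rate in Bark. As a hedged aside for readers who want the numbers (note that the classic Bark scale yields about 24 bands up to roughly 15.5 kHz, so the 32 “pitch trains” Joe cited may reflect a different partitioning):

```python
import math

def critical_bandwidth_hz(f_hz: float) -> float:
    """Zwicker & Terhardt (1980) approximation of the critical
    bandwidth (Hz) around centre frequency f_hz."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

def bark(f_hz: float) -> float:
    """Frequency (Hz) to critical-band rate (Bark),
    Zwicker & Terhardt approximation."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

for f in (100, 1000, 5000, 15000):
    print(f"{f:>6} Hz  CB ≈ {critical_bandwidth_hz(f):7.1f} Hz  z ≈ {bark(f):5.2f} Bark")
```

At 1 kHz this gives a critical bandwidth of roughly 160 Hz, the textbook value, and 15 kHz lands near 24 Bark.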
Joe then pointed out how early reflections and the reverberant sound field can cause difficulties, disturbing the “pitch train” and creating “spectral splatter”. He posited that early specular room reflections can significantly interrupt the pitch trains between the inner ear and the rest of the brain.
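A standard way to see how a single coherent early reflection corrupts the spectrum is the comb-filter model: direct sound plus one delayed, phase-coherent copy produces deep, regularly spaced notches in the magnitude response. The sketch below is an idealised free-field illustration of that mechanism (the gain and delay values are illustrative, not from the talk):

```python
import math

def comb_magnitude(f_hz: float, delay_s: float, reflection_gain: float = 1.0) -> float:
    """Magnitude response at f_hz of a direct sound summed with one
    coherent reflection delayed by delay_s seconds (idealised model)."""
    w = 2.0 * math.pi * f_hz
    re = 1.0 + reflection_gain * math.cos(w * delay_s)
    im = -reflection_gain * math.sin(w * delay_s)
    return math.hypot(re, im)

tau = 0.005  # a 5 ms early reflection, well inside the precedence window
# Notches fall at odd multiples of 1/(2*tau): 100 Hz, 300 Hz, 500 Hz, ...
print(comb_magnitude(100.0, tau))  # ≈ 0: deep spectral notch
print(comb_magnitude(200.0, tau))  # ≈ 2: reinforcement
```

The alternating cancellations and reinforcements across frequency are one concrete reading of the “spectral splatter” Joe described, and decorrelating the reflected energy is precisely what removes the deep notches.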
Discussing masking room effects, Joe commented that the precedence effect, or law of the first wavefront is a binaural psychoacoustical effect.
Using the visual metaphor of various “versions” of an image of the Mona Lisa, he demonstrated the degrading effect of conflicting acoustic images with varying time delays, up to the point where they become distinct second-image echoes (30–50 ms).
After a short break, where some audience questions were addressed, Joe moved on to describe how he has tackled these challenges.
Joe first reinforced the need for spectral consistency in all reproduced-sound treatments (“pitch is king”). He then showed Time Correlated vs Time De-correlated polar graphics using their internally developed “Correlogram” display, indicating the differences between a conventional loudspeaker source with phase-coherent reflections and a DaS (Diffusion at Source) AuReality speaker, where the spectral content remains consistent but the spatial phase is varied for all audio input/output.
He asserted that the DaS speaker lessened the impact of the first and subsequent reflections for listeners in the room. The verbal descriptions proved challenging without the ability to demonstrate the technology, but a demonstration will be scheduled for a later date.
Joe went on to describe hardware implementations of the DaS principle using driver arrays and amplification channels, including modern Class D amplifiers, along with the (patent applied for) DaS AuReality algorithms implemented in either combined or standalone chipsets.
Joe then described applications such as high-end, mid-fi and soundbar loudspeakers, automotive infotainment systems, and other future applications.
He also explained that other applications open up with some of the new loudspeaker drivers available, such as (for example) the USound miniature drivers for smartphone applications, and larger multi-driver arrays for sound reinforcement applications.
In outlining these devices and applications, he also pointed out that one effect of DaS is that perceived loudness is much greater for a given acoustical output, opening the possibility of lower SPLs without reduced intelligibility, and of lower-powered, low-voltage devices with all the benefits that accrue.
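The loudness claim is consistent with loudness summation: at equal total power, sound energy spread across several critical bands is perceived as louder than the same energy confined to one band. The toy model below (my simplification, not Joe's algorithm) uses a Stevens-style compressive exponent of 0.3 per band to illustrate the effect numerically:

```python
def loudness_sones(total_power: float, bands: int) -> float:
    """Toy loudness-summation model: spread total_power equally across
    `bands` independent critical bands, with per-band loudness growing
    as power**0.3 (Stevens-style compression), then sum the bands."""
    per_band = total_power / bands
    return bands * per_band ** 0.3

print(loudness_sones(1.0, 1))  # 1.0 : all energy in one critical band
print(loudness_sones(1.0, 8))  # ≈ 4.3: same energy across 8 bands
```

Under this simplified model, decorrelating the output so the same acoustic power excites many critical bands roughly quadruples the summed loudness, which is the direction of the effect Joe described.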
A Q&A session following the presentation canvassed a range of topics and indicated keen interest in the technology.
A video recording of the Zoom session has been created. The edited version can be viewed directly on YouTube at: https://youtu.be/L4ty9ey9EFA
A PDF version of Joe’s slides is available at:
The full presentation in .pptx format is available (right-click and choose “Save Link as”) at:
Melbourne participants are looking forward to having the opportunity to hear his loudspeakers, as Joe has committed to arranging a listening session sometime soon when gathering in a group is again permitted.
DaS AuReality described at the Vastigo website – https://vastigo.com/
USound MEMS loudspeakers – https://www.usound.com/
Resonado drivers – https://www.resonado.com/
TI TAS5825M amplifier chip – https://www.ti.com/product/TAS5825M
Maxim MAX98357 amplifier chip – https://www.maximintegrated.com/en/products/analog/audio/MAX98357A.html
Prepared by P Smerdon/G Huon