Two topics are addressed in this talk. Sound field capture aims to interpolate and reconstruct a spatial acoustic field from multiple microphone measurements, which can be applied to high-fidelity audio playback over loudspeakers or headphones, spatial noise cancellation, sound field visualization, etc. A method based on sparse sound field decomposition is introduced for reconstructing a sound field inside a region that includes sources, which is known to be an ill-posed problem. Next, the source (loudspeaker) and sensor (control point/microphone) placement problem for sound field control is addressed. A method based on the empirical interpolation method, originally proposed for the numerical analysis of partial differential equations, is introduced.
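As a rough illustration of the sparse-decomposition idea mentioned above (not the speaker's actual method), the reconstruction can be posed as an underdetermined linear inverse problem whose sparse solution is found by L1-regularized least squares. The sketch below uses the generic ISTA (iterative soft-thresholding) algorithm on a random matrix standing in for the microphone transfer functions; all dimensions and parameters are assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch only: sparse recovery from an ill-posed (underdetermined)
# system A x = b via ISTA, which solves
#     min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 .
# In sparse sound field decomposition, A would contain transfer functions from
# candidate source positions to microphones; here A is random for demonstration.

def ista(A, b, lam=0.01, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the smooth term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                        # 20 mics, 50 candidates, 3 active sources
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                             # simulated microphone observations

x_hat = ista(A, b)                         # sparse estimate of source amplitudes
```

Despite having fewer observations than unknowns, the L1 penalty drives most entries of `x_hat` to zero, which is what makes recovery inside a source region tractable.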
Shoichi Koyama received the B.E., M.S., and Ph.D. degrees from the University of Tokyo, Tokyo, Japan, in 2007, 2009, and 2014, respectively. He joined Nippon Telegraph and Telephone Corporation (NTT) in 2009 and began his career as a researcher in acoustic signal processing at NTT Media Intelligence Laboratories. In 2014, he joined the Graduate School of Information Science and Technology, the University of Tokyo, as an Assistant Professor. From 2016 to 2018, he was also a visiting researcher at Paris Diderot University (Paris 7) / Institut Langevin, Paris, France.