
A04 - Processing articulatory data with the ema2wav converter

Philipp Buech, Simon Roessig, Lena Pagel, Doris Mücke & Anne Hermes

In collaboration with Philipp Buech and Anne Hermes (Université Sorbonne Nouvelle, Paris), Simon Roessig, Lena Pagel and Doris Mücke from project A04 (University of Cologne, CRC 1252) developed ema2wav, a software tool for converting electromagnetic articulography data.

3D electromagnetic articulography (EMA) is a widely used technique for capturing the movements of, e.g., the jaw, lips, tongue tip and tongue body during speech. Sensor coils placed on the articulators of interest allow their positions to be tracked in three-dimensional space over time. While a number of software packages for the analysis and visualisation of EMA data already exist [e.g., 1; 2; 3], their availability is partly restricted by expensive licences, and their interfaces with other programs are limited. The ema2wav converter is a lightweight open-source software package: it is platform-independent (it runs on Linux, Windows and macOS), built entirely with open-source tools, interfaces easily with other software and is freely available to the research community.

ema2wav converts EMA trajectories not only into CSV files but also into multi-channel WAVE files. This makes it possible to process kinematic signals in Praat [4], the de facto standard for the annotation of acoustic speech signals. ema2wav aims to provide an easy workflow for Praat users in experimental linguistics to view, annotate and analyse EMA kinematics and acoustics in one place. The converted data can subsequently be analysed, e.g., in R [5]. The converter not only extracts the 3D position data but also performs derived calculations (e.g., the 1st derivative as velocity and the 2nd derivative as acceleration, as well as tangential velocities and Euclidean distances). In addition, different filtering/smoothing methods can be applied to the data. To make the tool accessible to all users, ema2wav can be run either as a stand-alone Python script or through a graphical user interface (GUI).
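To illustrate the kind of processing described above, the following Python sketch shows how a position trajectory could be smoothed, differentiated and written to a multi-channel WAVE file with NumPy and SciPy. It is a minimal illustration under assumed sensor names, sampling rate and filter settings, not the ema2wav implementation itself.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.io import wavfile

    # Assumed EMA sampling rate in Hz (hypothetical value)
    fs = 250.0

    # Hypothetical input: N x 3 arrays of x/y/z positions for two sensors;
    # random-walk placeholders stand in for real articulograph recordings
    tongue_tip = np.random.randn(1000, 3).cumsum(axis=0)
    lower_lip = np.random.randn(1000, 3).cumsum(axis=0)

    # Low-pass smoothing with a zero-phase Butterworth filter
    # (20 Hz cut-off chosen only as an example)
    b, a = butter(4, 20.0 / (fs / 2), btype="low")
    tt_smooth = filtfilt(b, a, tongue_tip, axis=0)

    # 1st derivative = velocity, 2nd derivative = acceleration (per axis)
    velocity = np.gradient(tt_smooth, 1.0 / fs, axis=0)
    acceleration = np.gradient(velocity, 1.0 / fs, axis=0)

    # Tangential velocity: magnitude of the 3D velocity vector
    tangential_velocity = np.linalg.norm(velocity, axis=1)

    # Euclidean distance between the two sensors over time
    sensor_distance = np.linalg.norm(tt_smooth - lower_lip, axis=1)

    # Stack the signals as channels and write a multi-channel 32-bit float
    # WAVE file; each column becomes one channel that can be displayed
    # alongside the acoustic signal in Praat
    channels = np.column_stack([
        tt_smooth[:, 0],        # horizontal position
        tt_smooth[:, 2],        # vertical position
        tangential_velocity,
        sensor_distance,
    ]).astype(np.float32)
    wavfile.write("ema_channels.wav", int(fs), channels)

In ema2wav itself, the choice of sensors, derived measures and filter settings is made through the converter's configuration and GUI rather than coded by hand; see the GitHub page [8] for details.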

An extensive description of the software has been presented in a conference proceedings paper [6], a conference poster [7] and on the GitHub page [8]. ema2wav has been successfully applied and cited, e.g., in [9; 10; 11].

References

[1] Tiede, Mark. 2005. MVIEW: Software for visualization and analysis of concurrently recorded movement data. Haskins Laboratories.

[2] Ouni, Slim, Loïc Mangeonjean & Ingmar Steiner. 2012. VisArtico: a visualization tool for articulatory data. In Proceedings of Interspeech, 9-13 September 2012, 1878–1881. Portland, USA.

[3] Winkelmann, Raphael, Klaus Jaensch, Steve Cassidy & Jonathan Harrington. 2021. emuR: Main Package of the EMU Speech Database Management System, v. 2.3.0.

[4] Boersma, Paul & David Weenink. 2022. Praat: doing Phonetics by Computer, v. 6.2.09.

[5] R Core Team. 2022. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.

[6] Buech, Philipp, Simon Roessig, Lena Pagel, Doris Mücke & Anne Hermes. 2022. ema2wav: doing articulation by Praat. In Proceedings of Interspeech, 18-22 September 2022, 1352–1356. Incheon, Korea. DOI: 10.21437/Interspeech.2022-10813.

[7] Buech, Philipp, Simon Roessig, Lena Pagel, Doris Mücke & Anne Hermes. 2022. ema2wav: doing articulation by Praat. Poster at Interspeech, 18-22 September. Incheon, Korea.

[8] https://github.com/phbuech/ema2wav.

[9] Pagel, Lena, Márton Sóskuthy, Simon Roessig & Doris Mücke. 2023. A kinematic analysis of visual prosody: Head movements in habitual and loud speech. In Proceedings of the 20th International Congress of Phonetic Sciences (ICPhS), 7-11 August, 4130–4134. Prague, Czech Republic: Guarant International. DOI: 10.5281/zenodo.10299230.

[10] Lara, Andres Felipe, Anne Hermes, Sejin Oh & Claire Pillot-Loiseau. 2023. Monolingual and plurilingual strategies in the articulation of French R: a case study. In Proceedings of the 20th International Congress of Phonetic Sciences (ICPhS), 7-11 August, 1147–1151. Prague, Czech Republic: Guarant International.

[11] Shao, Bowei, Philipp Buech, Anne Hermes & Maria Giavazzi. 2023. Lexical stress and velar palatalization in Italian: A spatio-temporal interaction. In Proceedings of Interspeech, 20-23 August 2023, 1833-1837. Dublin, Ireland.
