Zhao Ren received her bachelor's and master's degrees in Computer Science and Technology from Northwestern Polytechnical University (NWPU), P.R. China, in 2013 and 2017, respectively. She is currently an EU researcher working towards her Ph.D. degree at the ZD.B Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany, where she is involved in the H2020-MSCA-ITN-ETN project TAPAS on health-related speech analysis. She regularly reviews for IEEE Transactions on Cybernetics, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Affective Computing, and IEEE Transactions on Multimedia.
Research Interests
Her research interests mainly lie in transfer learning, multi-task learning, and deep learning, with applications in health care and wellbeing.
Journal Papers
F. Dong, K. Qian, Z. Ren, A. Baird, X. Li, Z. Dai, B. Dong, F. Metze, Y. Yamamoto, and B. Schuller, “Machine listening for heart status monitoring: Introducing and benchmarking HSS – the heart sounds Shenzhen corpus,” IEEE Journal of Biomedical and Health Informatics, Nov. 2019. 13 pages.
J. Han, Z. Zhang, Z. Ren, and B. Schuller, “Exploring perception uncertainty for emotion recognition in dyadic conversation and music listening,” Cognitive Computation, Oct. 2019. 10 pages.
Z. Zhao, Z. Bao, Y. Zhao, Z. Zhang, N. Cummins, Z. Ren, and B. Schuller, “Exploring deep spectrum representations via attention-based recurrent and convolutional neural networks for speech emotion recognition,” IEEE Access, vol. 7, pp. 97515–97525, July 2019.
J. Han, Z. Zhang, Z. Ren, and B. Schuller, “EmoBed: Strengthening monomodal emotion recognition via training with crossmodal emotion embeddings,” IEEE Transactions on Affective Computing, July 2019.
Z. Ren, K. Qian, Z. Zhang, V. Pandit, A. Baird, and B. Schuller, “Deep scalogram representations for acoustic scene classification,” IEEE/CAA Journal of Automatica Sinica, vol. 5, pp. 662–669, May 2018. [link]
Conference Papers
Z. Ren, J. Han, N. Cummins, Q. Kong, M. Plumbley, and B. Schuller, “Multi-instance learning for bipolar disorder diagnosis using weakly labelled speech data,” in Proc. DPH, (Marseille, France), pp. 79–83, 2019.
K. Qian, H. Kuromiya, Z. Ren, M. Schmitt, Z. Zhang, T. Nakamura, K. Yoshiuchi, B. Schuller, and Y. Yamamoto, “Automatic detection of major depressive disorder via a bag-of-behaviour-words approach,” in Proc. ISICDM, (Xi’an, China), 2019. 5 pages.
F. Ringeval, B. Schuller, M. Valstar, N. Cummins, R. Cowie, L. Tavabi, M. Schmitt, S. Alisamir, S. Amiriparian, E.-M. Messner, S. Song, S. Liu, Z. Zhao, A. Mallol-Ragolta, Z. Ren, M. Soleymani, and M. Pantic, “AVEC 2019 workshop and challenge: State-of-mind, detecting depression with AI, and cross-cultural affect recognition,” in Proc. AVEC, (Nice, France), pp. 3–12, 2019.
Z. Ren, Q. Kong, J. Han, M. Plumbley, and B. Schuller, “Attention-based atrous convolutional neural networks: Visualisation and understanding perspectives of acoustic scenes,” in Proc. ICASSP, (Brighton, UK), pp. 56–60, 2019.
J. Han, Z. Zhang, Z. Ren, and B. Schuller, “Implicit fusion by joint audiovisual training for emotion recognition in mono modality,” in Proc. ICASSP, (Brighton, UK), pp. 5861–5865, 2019.
Z. Ren, Q. Kong, K. Qian, M. D. Plumbley, and B. W. Schuller, “Attention-based convolutional neural networks for acoustic scene classification,” in Proc. DCASE, (Surrey, UK), pp. 39–43, 2018. [link]
Z. Ren, N. Cummins, J. Han, S. Schnieder, J. Krajewski, and B. Schuller, “Evaluation of the pain level from speech: Introducing a novel pain database and benchmarks,” in Proc. ITG, (Oldenburg, Germany), pp. 56–60, 2018. [link]
J. Han, Z. Zhang, M. Schmitt, Z. Ren, F. Ringeval, and B. Schuller, “Bags in bag: Generating context-aware bags for tracking emotions from speech,” in Proc. INTERSPEECH, (Hyderabad, India), pp. 3082–3086, 2018. [link]
B. Schuller, S. Steidl, A. Batliner, P. B. Marschik, H. Baumeister, F. Dong, S. Hantke, F. B. Pokorny, E.-M. Rathner, K. D. Bartl-Pokorny, C. Einspieler, D. Zhang, A. Baird, S. Amiriparian, K. Qian, Z. Ren, M. Schmitt, P. Tzirakis, and S. Zafeiriou, “The INTERSPEECH 2018 computational paralinguistics challenge: Atypical & self-assessed affect, crying & heart beats,” in Proc. INTERSPEECH, (Hyderabad, India), pp. 122–126, 2018. [link]
Z. Ren, N. Cummins, V. Pandit, J. Han, K. Qian, and B. Schuller, “Learning image-based representations for heart sound classification,” in Proc. DH, (Lyon, France), pp. 143–147, 2018. [pdf]
J. Han, Z. Zhang, Z. Ren, F. Ringeval, and B. Schuller, “Towards conditional adversarial training for predicting emotions from speech,” in Proc. ICASSP, (Calgary, Canada), pp. 6822–6826, 2018. [link]
Z. Ren, V. Pandit, K. Qian, Z. Yang, Z. Zhang, and B. Schuller, “Deep sequential image features on acoustic scene classification,” in Proc. DCASE, (Munich, Germany), pp. 113–117, 2017. [link]
Z. Ren, V. Pandit, K. Qian, Z. Yang, Z. Zhang, and B. Schuller, “A system for 2017 DCASE challenge using deep sequential image and wavelet features,” tech. rep., DCASE Challenge, Munich, Germany, 2017. 1 page. [link]
K. Qian, Z. Ren, V. Pandit, Z. Yang, Z. Zhang, and B. Schuller, “Wavelets revisited for the classification of acoustic scenes,” in Proc. DCASE, (Munich, Germany), pp. 108–112, 2017. [link]
S. Amiriparian, N. Cummins, M. Freitag, K. Qian, Z. Ren, V. Pandit, and B. Schuller, “The combined Augsburg/Passau/TUM/ICL system for DCASE 2017,” tech. rep., DCASE Challenge, Munich, Germany, 2017. 1 page. [link]
Z. Ren, Q. Zhang, H. Zhu, and Q. Wang, “Extending the FOV from disparity and color consistencies in multiview light fields,” in Proc. ICIP, (Beijing, China), pp. 1157–1161, 2017. [pdf]
J. Deng, N. Cummins, J. Han, X. Xu, Z. Ren, V. Pandit, Z. Zhang, and B. Schuller, “The University of Passau open emotion recognition system for the multimodal emotion challenge,” in Proc. CCPR, (Chengdu, China), pp. 652–666, 2016. [link]
Y. Zhang, F. Weninger, Z. Ren, and B. Schuller, “Sincerity and deception in speech: Two sides of the same coin? A transfer- and multi-task learning perspective,” in Proc. INTERSPEECH, (San Francisco, CA), pp. 2041–2045, 2016. [link]