
Discover New Breakthroughs in Speech Emotion Recognition, RevComm Research Selected for ICASSP 2024 Conference

The RevComm Research and Development Team was selected to present its research “Large Language Model-Based Emotional Speech Annotation Using Context and Acoustic Feature for Speech Emotion Recognition” at the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2024 in Seoul, South Korea.

Jakarta, April 1, 2024 – RevComm is pleased to announce that RevComm’s Research and Development team, RevComm Research, has been selected to present its research at the International Conference on Acoustics, Speech and Signal Processing, ICASSP 2024. 

The conference is organized by the IEEE Signal Processing Society, the oldest society of the Institute of Electrical and Electronics Engineers (IEEE). This year marks the 49th ICASSP, which will be held on April 14-19, 2024 in Seoul, South Korea.

RevComm Research: Identifying Emotions through Voice Transcription with a Large Language Model

The research findings on Speech Emotion Recognition by RevComm Research are motivated by the high cost of adding emotional information to speech data. Conventionally, this must be done manually: an annotator listens to each recording, identifies the emotion, and labels it. This makes it very difficult to create large-scale voice datasets with emotional information.

Recognizing this challenge, RevComm Senior Research Engineers Jennifer Santoso and Kenkichi Ishizuka, together with Research Director Taiichi Hashimoto, submitted their research “Large Language Model-Based Emotional Speech Annotation Using Context and Acoustic Feature for Speech Emotion Recognition” to ICASSP 2024 and were selected to present it at the conference.

RevComm Research made an innovative breakthrough by proposing the automatic annotation of emotions using a Large Language Model (LLM) based on the voice transcription and acoustic features. The experimental results demonstrate that the LLM can estimate emotions with accuracy close to that of human annotators. With these outcomes, generating voice data with rich emotional information is expected to become far easier, facilitating the development of more accurate speech emotion recognition systems.
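For readers curious about how such a pipeline might look in practice, the sketch below shows one simplified way to prompt an LLM with an utterance’s transcript, its conversational context, and a few coarse acoustic statistics. The emotion label set, prompt wording, model name, and librosa-based features here are illustrative assumptions, not RevComm’s published method.

```python
# Illustrative sketch only: automatic emotion annotation of an utterance
# using an LLM prompted with the transcript, conversational context, and
# simple acoustic statistics. The label set, prompt, model name, and the
# librosa-based features are assumptions, not the authors' actual pipeline.
import numpy as np
import librosa
from openai import OpenAI

LABELS = ["neutral", "happy", "sad", "angry"]  # assumed label set

def acoustic_summary(wav_path: str) -> str:
    """Compute a few coarse acoustic statistics (energy and pitch)."""
    y, sr = librosa.load(wav_path, sr=16000)
    rms = librosa.feature.rms(y=y)[0]  # frame-level energy
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    mean_f0 = float(np.nanmean(f0)) if np.any(voiced) else 0.0
    return (f"mean energy={rms.mean():.4f}, energy std={rms.std():.4f}, "
            f"mean pitch={mean_f0:.1f} Hz")

def annotate_emotion(transcript: str, context: str, wav_path: str) -> str:
    """Ask the LLM to pick one emotion label for the utterance."""
    prompt = (
        "You are annotating speech data with emotion labels.\n"
        f"Conversation context: {context}\n"
        f"Utterance transcript: {transcript}\n"
        f"Acoustic features: {acoustic_summary(wav_path)}\n"
        f"Answer with exactly one label from: {', '.join(LABELS)}."
    )
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

# Example usage (hypothetical file and text):
# label = annotate_emotion("I can't believe this happened again.",
#                          "Customer calling about a repeated billing error.",
#                          "utterance_001.wav")
```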

RevComm Research aims to drive innovation in the field of AI technology and enhance communication. To accomplish this objective, RevComm will continue to advance research and development in speech, language, and image processing. Furthermore, RevComm will actively contribute to academic endeavors on both national and global scales by harnessing AI technology for product and service development.

RevComm continues to provide leading-edge communication solutions through its product, MiiTel. MiiTel is an AI-based tool that automatically analyzes your telephone conversations and meetings. Claim a free MiiTel demo now at miitel.id. Limited slots are available!
