Computational Intelligence & Information Processing Laboratory

Date: September 6, 2021

About Us

Our team focuses on research hotspots in Artificial Intelligence and strengthens the research and development of a new generation of Artificial Intelligence. We aim to break through bottlenecks in the theory and application of affective computing, e.g., human-robot interaction systems based on multimodal affective computing. We mainly study the design and development of key technologies for service robots, such as environmental perception, affective computing, and intelligent human-robot interactive decision-making. These technologies endow service robots with the ability to recognize, understand, and adapt to human emotions, communication atmosphere, and personal intentions, so as to realize diversified, intelligent, and humanized human-robot interaction. By doing so, we open up new ideas for affective computing in human-robot interaction and promote the industrialization and popularization of a new generation of robots.

Team Members

Assoc. Prof. Zhen-Tao Liu, Assoc. Prof. Lue-Feng Chen

Research Activities

1. Research on Speaker-Independent Speech Emotion Recognition in Small Sample Environment for Natural Human-Robot Interaction, National Natural Science Foundation of China, 2020.1.1–2023.12.31.

2. Modeling of Communication Atmosfield in Human-Robot Interaction Based on Multi-Modal Emotion Recognition, National Natural Science Foundation of China, 2015.1.1–2017.12.31.

3. Research on Initiative Service Method of Emotional Robot Based on Deep Cognitive Information, Hubei Provincial Natural Science Foundation of China, 2018.1–2019.12.

4. Key Technologies of Multimodal Deep Cognitive Information Perception and Emotion Expression for Service Robot, Wuhan Science and Technology Project, 2017.8–2019.7.

5. High-Performance Servo Control Technology of Industrial Robot Under Complex Working Conditions, National Key Research and Development Project, 2017.12–2020.11.

6. Multi-Dimensional Emotional Intention Understanding and Natural Interaction of Emotional Robots, National Natural Science Foundation of China, 2020.1.1–2023.12.31.

7. Multi-Robot Behavior Adaptation Mechanism Based on Deep-Level Cognitive Information in Human-Robot Interaction, National Natural Science Foundation of China Youth Project, 2017.1.1–2019.12.31.

8. Multi-Robot Behavior Coordination Mechanism Based on Human-Machine Communication Information, Open Fund Project of the Complex System Advanced Control and Intelligent Geosciences Instrument Research Center, China University of Geosciences (Wuhan), 2015.1.1–2017.12.31.

Major Achievements

Journal Papers

[1] Zhen-Tao Liu*, Si-Han Li, Min Wu, Wei-Hua Cao, Man Hao, and Lin-Bo Xian. Eye Localization Based on Weight Binarization Cascade Convolution Neural Network. Neurocomputing, 2020, 378: 45-53.

[2] Zhen-Tao Liu, Qiao Xie, Min Wu, Wei-Hua Cao*, Dan-Yun Li*, and Si-Han Li. Electroencephalogram Emotion Recognition Based on Empirical Mode Decomposition and Optimal Feature Selection. IEEE Transactions on Cognitive and Developmental Systems, 2019, 11 (4): 517-526.

[3] Zhen-Tao Liu, Si-Han Li, Wei-Hua Cao, Dan-Yun Li*, Man Hao, and Ri Zhang. Combining 2D Gabor and Local Binary Pattern for Facial Expression Recognition Using Extreme Learning Machine. Journal of Advanced Computational Intelligence and Intelligent Informatics, 2019, 23 (3): 444-455.

[4] Zhen-Tao Liu, Qiao Xie, Min Wu, Wei-Hua Cao*, Ying Mei, and Jun-Wei Mao. Speech Emotion Recognition Based on An Improved Brain Emotion Learning Model. Neurocomputing, 2018, 309: 145-156.

[5] Zhen-Tao Liu, Min Wu, Wei-Hua Cao*, Jun-Wei Mao, Jian-Ping Xu, and Guan-Zheng Tan. Speech Emotion Recognition Based on Feature Selection and Extreme Learning Machine Decision Tree. Neurocomputing, 2018, 273: 271-280.

[6] Zhen-Tao Liu, Min Wu*, Wei-Hua Cao, Lue-Feng Chen, Jian-Ping Xu, Ri Zhang, Meng-Tian Zhou, and Jun-Wei Mao. A Facial Expression Emotion Recognition Based Humans-Robots Interaction System. IEEE/CAA Journal of Automatica Sinica, 2017, 4 (4): 668-676.

[7] Zhen-Tao Liu, Jian-Ping Xu, Min Wu, Wei-Hua Cao, Lue-Feng Chen, Xue-Wen Ding, Man Hao, and Qiao Xie. Review of Emotional Feature Extraction and Dimension Reduction Method for Speech Emotion Recognition. Chinese Journal of Computers, 2018, 41 (12): 2833-2851.

[8] Zhen-Tao Liu, Hui-Ting Wang, Dan-Yun Li*, Hao Zhou, and Zi-Han Zhou. A Novel Micro Pressure Power Generation System Based on Super Capacitor Energy Storage and PI Controller. Journal of Advanced Computational Intelligence and Intelligent Informatics, 2016, 20 (7): 1051-1059.

[9] Zhen-Tao Liu, Min Wu, Dan-Yun Li, Lue-Feng Chen, Fang-Yan Dong, Yoichi Yamazaki, and Kaoru Hirota. Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and Its Application to Humans-Robots Interaction. Journal of Advanced Computational Intelligence and Intelligent Informatics, 2013, 17 (1): 3-17.

[10] Zhen-Tao Liu, Min Wu, Dan-Yun Li, Lue-Feng Chen, Fang-Yan Dong, Yoichi Yamazaki, and Kaoru Hirota. Communication Atmosphere in Humans and Robots Interaction Based on the Concept of Fuzzy Atmosfield Generated by Emotional States of Humans and Robots. Journal of Automation, Mobile Robotics and Intelligent Systems, 2013, 7 (2): 52-63.

[11] Hui-Ting Wang, Zhen-Tao Liu*, and Yong He. Exponential Stability Criterion of the Switched Neural Networks with Time-Varying Delay. Neurocomputing, 2019, 331: 1-9.

[12] Xiao-Bo Liu, Zhen-Tao Liu*, Guang-Jun Wang, Zhi-Hua Cai, and Harry Zhang. Ensemble Transfer Learning Algorithm. IEEE Access, 6 (1): 2389-2396.

[13] Man Hao, Wei-Hua Cao*, Zhen-Tao Liu, Min Wu, and Peng Xiao. Visual-Audio Emotion Recognition Based on Multi-Task and Ensemble Learning with Multiple Features. Neurocomputing, 2020. DOI: 10.1016/j.neucom.2020.01.048.

[14] Man Hao, Wei-Hua Cao, Min Wu, Zhen-Tao Liu*, and Si-Han Li. An Initiative Service Method Based on Fuzzy Analytical Hierarchy Process and Context Intention Inference for Drinking Service Robot. IEEE Transactions on Cognitive and Developmental Systems, 2019, 11 (2): 221-233.

[15] Man Hao, Wei-Hua Cao, Min Wu, Zhen-Tao Liu*, Lue-Feng Chen, and Ri Zhang. Proposal of Initiative Service Model for Service Robot. CAAI Transactions on Intelligence Technology, 2 (4): 253-261.

[16] Lue-Feng Chen, Wan-Juan Su, Min Wu*, Witold Pedrycz, and Kaoru Hirota. A Fuzzy Deep Neural Network with Sparse Autoencoder for Emotional Intention Understanding in Human-Robot Interaction. IEEE Transactions on Fuzzy Systems, 2020. DOI: 10.1109/TFUZZ.2020.2966167.

[17] Lue-Feng Chen, Min Wu*, Meng-Tian Zhou, Zhen-Tao Liu, Jin-Hua She, and Kaoru Hirota. Dynamic Emotion Understanding in Human-Robot Interaction Based on Two-Layer Fuzzy SVR-TS Model. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020, 50 (2): 490-501.

[18] Lue-Feng Chen, Wan-Juan Su, Yu Feng, Min Wu*, Jin-Hua She, and Kaoru Hirota. Two-Layer Multiple Random Forest for Speech Emotion Recognition in Human-Robot Interaction. Information Sciences, 2020, 509: 150-163.

[19] Lue-Feng Chen, Wan-Juan Su, Min Li, Min Wu*, Witold Pedrycz, and Kaoru Hirota. A Population Randomization Based Multiobjective Genetic Algorithm for Gesture Adaptation in Human-Robot Interaction. SCIENCE CHINA Information Sciences, 2020. DOI: 10.1007/s11432-019-2749-0.

[20] Lue-Feng Chen, Yu Feng, Mohamed A. Maram, Ya-Wu Wang, Min Wu*, Kaoru Hirota, and Witold Pedrycz. Multi-SVM Based Dempster-Shafer Theory for Gesture Intention Understanding Using Sparse Coding Feature. Applied Soft Computing, 2019. DOI: 10.1016/j.asoc.2019.105787.

[21] Lue-Feng Chen, Min Li, Wan-Juan Su, Min Wu*, Kaoru Hirota, and Witold Pedrycz. Adaptive Feature Selection-Based AdaBoost-KNN with Direct Optimization for Dynamic Emotion Recognition in Human-Robot Interaction. IEEE Transactions on Emerging Topics in Computational Intelligence, 2019. DOI: 10.1109/TETCI.2019.2909930.

[22] Lue-Feng Chen, Meng-Tian Zhou, Min Wu*, Jin-Hua She, Zhen-Tao Liu, Fang-Yan Dong, and Kaoru Hirota. Three-Layer Weighted Fuzzy Support Vector Regression for Emotional Intention Understanding in Human-Robot Interaction. IEEE Transactions on Fuzzy Systems, 2018, 26 (5): 2524-2538.

[23] Lue-Feng Chen, Min Wu*, Meng-Tian Zhou, Jin-Hua She, Fang-Yan Dong, and Kaoru Hirota. Information-Driven Multi-Robot Behavior Adaptation to Emotional Intention in Human-Robot Interaction. IEEE Transactions on Cognitive and Developmental Systems, 2018, 10 (3): 647-658.

[24] Lue-Feng Chen, Meng-Tian Zhou, Wan-Juan Su, Min Wu*, Jin-Hua She, and Kaoru Hirota. Softmax Regression Based Deep Sparse Autoencoder Network for Facial Emotion Recognition in Human-Robot Interaction. Information Sciences, 2018, 428: 49-61.

[25] Lue-Feng Chen, Zhen-Tao Liu, Min Wu*, Min Ding, Fang-Yan Dong, and Kaoru Hirota. Emotion-Age-Gender-Nationality Based Intention Understanding in Human-Robot Interaction Using Two-Layer Fuzzy Support Vector Regression. International Journal of Social Robotics, 2015, 7 (5): 709-729.

[26] Lue-Feng Chen, Zhen-Tao Liu, Min Wu*, Fang-Yan Dong, Yoichi Yamazaki, and Kaoru Hirota. Multi-Robot Behavior Adaptation to Local and Global Communication Atmosphere in Humans-Robots Interaction. Journal on Multimodal User Interfaces, 2014, 8 (3): 289-303.


Monographs and Popular Science Articles

[1] Min Wu, Zhen-Tao Liu*, and Lue-Feng Chen. Affective Computing and Emotional Robot System. Science Press, Beijing, 2018.4.

[2] Zhen-Tao Liu, Min Wu, and Lue-Feng Chen. How Far Is the Heartwarming "Baymax" from You? IEEE Spectrum (in Chinese), 2019, (8): 76-79.