About
Dr. Haofei Wang is an associate researcher and PhD advisor at Pengcheng Laboratory (PCL). Before joining PCL, he earned his PhD in Electronic and Computer Engineering (ECE) from The Hong Kong University of Science and Technology (HKUST) and his Bachelor's degree in Automation, with Chu Kochen Honors, from Zhejiang University (ZJU). Haofei was selected for the Shenzhen Overseas High-Level Talents Program and the Guangdong Province Overseas Young Postdoctoral Talent Program.
Haofei's primary research objective is to enable robots and computers to better understand human intention (brain) and behavior (body), thereby facilitating more natural and effective human-machine interaction. This interdisciplinary endeavor integrates human-computer interaction (HCI), virtual reality (VR), and large language model (LLM)-based medical data processing. By exploring novel interaction paradigms, immersive virtual environments, and advanced AI techniques, particularly large language and vision models, the research aims to bridge the cognitive gap between humans and machines. In the medical domain, his work focuses on leveraging large language and vision models to extract meaningful insights from complex healthcare datasets, ultimately supporting intelligent decision-making and personalized healthcare. His work aspires to lay the foundation for empathetic, context-aware, and human-centered artificial intelligence systems.
Education
- 2013–2019, PhD in Electronic and Computer Engineering, The Hong Kong University of Science and Technology
- 2009–2013, Bachelor in Automation, Zhejiang University, Chu Kochen Honors College, Excellent Graduate
Work Experience
- 2024–Present, Associate Researcher/PhD Advisor, Pengcheng Laboratory
- 2022–2024, Assistant Researcher/PhD Advisor, Pengcheng Laboratory
- 2020–2022, Postdoctoral Fellow, Pengcheng Laboratory
- 2019–2020, Research Associate, The Hong Kong University of Science and Technology
- 2013–2019, Teaching Assistant/Research Assistant, The Hong Kong University of Science and Technology
Publications
[1] Y. Wu, H. Wang, X. Huang, H. Wang, F. Yang, W. Sun, S. W. Su, and S. H. Ling, “Multimodal learning for non-small cell lung cancer prognosis,” Biomed. Signal Process. Control, vol. 106, p. 107663, 2025.
[2] R. Liu, H. Wang, and F. Lu, “From gaze jitter to domain adaptation: Generalizing gaze estimation by manipulating high-frequency components,” Int. J. Comput. Vis., vol. 133, no. 3, pp. 1290–1305, 2025.
[3] Z. Cai, H. Wang, Y. Niu, and F. Lu, “GAM360: Sensing gaze activities of multi-persons in 360 degrees,” CCF Trans. Pervasive Comput. Interact., pp. 1–14, 2025.
[4] J. Yuan, H. Wang, T. Yang, Y. Su, W. Song, S. Li, and W. Gong, “FF-net: A target detection method tailored for mid-to-late stages of forest fires in complex environments,” Case Stud. Therm. Eng., vol. 65, p. 105515, 2025.
[5] H. Wang, Y. Zhang, and F. Lu, “Enhancing Text Entry in Mixed Reality with Tangible Feedback,” in Proc. Int. Conf. Ext. Reality, pp. 120–135, 2024.
[6] S. Liang, H. Wang, and F. Lu, “EyeIR: Single Eye Image Inverse Rendering in the Wild,” in ACM SIGGRAPH Conf. Papers, pp. 1–11, 2024.
[7] J. Yuan, H. Wang, M. Li, X. Wang, W. Song, S. Li, and W. Gong, “FD-Net: A Single-Stage Fire Detection Framework for Remote Sensing in Complex Environments,” Remote Sens., vol. 16, no. 18, p. 3382, 2024.
[8] Y. Zhang, H. Wang, and B. E. Shi, “User Engagement Correlates Better with Behavioral than Physiological Measures in a Virtual Reality Robotic Rehabilitation System,” in Proc. IEEE SMC, pp. 5189–5194, 2024.
[9] Y. Cheng, H. Wang, Y. Bao, and F. Lu, “Appearance-based gaze estimation with deep learning: A review and benchmark,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 46, no. 12, pp. 7509–7528, 2024.
[10] R. Liu, Y. Liu, H. Wang, and F. Lu, “PnP-GA+: Plug-and-play domain adaptation for gaze estimation using model variants,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 46, no. 5, pp. 3707–3721, 2024.
[11] Y. Zhang, Y. Zhang, and H. Wang, “Patient Subtyping via Learning Hidden Markov Models from Pairwise Co-occurrences in EHR Data,” in Proc. IEEE EMBC, pp. 1–4, 2024.
[12] H. Tong, J. Yuan, J. Zhang, H. Wang, and T. Li, “Real-time wildfire monitoring using low-altitude remote sensing imagery,” Remote Sens., vol. 16, no. 15, p. 2827, 2024.
[13] M. Xu, H. Wang, and F. Lu, “Learning a generalized gaze estimator from gaze-consistent feature,” in Proc. AAAI Conf. Artif. Intell., vol. 37, no. 3, pp. 3027–3035, 2023.
[14] Z. Yang, H. Wang, F. Lu, and Q. Zhao, “Towards practical facial video-based remote heart rate estimation via cross domain rPPG adaptation,” in Proc. Int. Conf. Biomed. Bioinform. Eng., pp. 154–161, 2023.
[15] Z. Yang, H. Wang, B. Liu, and F. Lu, “cbPPGGAN: a generic enhancement framework for unpaired pulse waveforms in camera-based photoplethysmography,” IEEE J. Biomed. Health Inform., vol. 28, no. 2, pp. 598–608, 2023.
[16] Z. Yang, H. Wang, and F. Lu, “Assessment of deep learning-based heart rate estimation using remote photoplethysmography under different illuminations,” IEEE Trans. Hum.-Mach. Syst., vol. 52, no. 6, pp. 1236–1246, 2022.
[17] Y. Bao, Y. Liu, H. Wang, and F. Lu, “Generalizing gaze estimation with rotation consistency,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., pp. 4207–4216, 2022.
[18] R. Liu, Y. Bao, M. Xu, H. Wang, Y. Liu, and F. Lu, “Jitter does matter: Adapting gaze estimation to new domains,” arXiv preprint arXiv:2210.02082, 2022.
[19] Z. Wang, H. Wang, H. Yu, and F. Lu, “Interaction with gaze, gesture, and speech in a flexibly configurable augmented reality system,” IEEE Trans. Hum.-Mach. Syst., vol. 51, no. 5, pp. 524–534, 2021.
[20] Y. Liu, R. Liu, H. Wang, and F. Lu, “Generalizing gaze estimation with outlier-guided collaborative adaptation,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., pp. 3835–3844, 2021.
[21] Y. Liu, H. Wang, Y. Yue, and F. Lu, “Separating content and style for unsupervised image-to-image translation,” arXiv preprint arXiv:2110.14404, 2021.
[22] M. Xu, H. Wang, Y. Liu, and F. Lu, “Vulnerability of appearance-based gaze estimation,” arXiv preprint arXiv:2103.13134, 2021.
[23] Y. Zhang, H. Wang, and B. E. Shi, “Gaze-controlled robot-assisted painting in virtual reality for upper-limb rehabilitation,” in Proc. IEEE EMBC, pp. 4513–4517, 2021.
[24] Z. Wang, H. Yu, H. Wang, Z. Wang, and F. Lu, “Comparing single-modal and multimodal interaction in an augmented reality system,” in Proc. IEEE ISMAR-Adjunct, pp. 165–166, 2020.
[25] H. Wang and B. E. Shi, “Gaze awareness improves collaboration efficiency in a collaborative assembly task,” in Proc. ACM Symp. Eye Track. Res. Appl., pp. 1–5, 2019.
[26] H. Wang, Y. Wang, and B. E. Shi, “Convolutional neural network for target face detection using single-trial EEG signal,” in Proc. IEEE EMBC, pp. 2008–2011, 2018.
[27] H. Wang, J. Pi, T. Qin, S. Shen, and B. E. Shi, “SLAM-based localization of 3D gaze using a mobile eye tracker,” in Proc. ACM Symp. Eye Track. Res. Appl., pp. 1–5, 2018.
[28] H. Wang, M. Antonelli, and B. E. Shi, “Using point cloud data to improve three dimensional gaze estimation,” in Proc. IEEE EMBC, pp. 795–798, 2017.
[29] X. Dong, H. Wang, Z. Chen, and B. E. Shi, “Hybrid brain computer interface via Bayesian integration of EEG and eye gaze,” in Proc. IEEE/EMBS Conf. Neural Eng., pp. 150–153, 2015.
[30] H. Wang, X. Dong, Z. Chen, and B. E. Shi, “Hybrid gaze/EEG brain computer interface for robot arm control on a pick and place task,” in Proc. IEEE EMBC, pp. 1476–1479, 2015.
[31] H. Wang, T. Wang, C. Song, W. Du, and J. Lu, “Development of a Novel Efficient Marine Oil Spill Cleaner with Advanced Control Technology,” Adv. Mater. Res., vol. 616, pp. 1208–1211, 2013.
