Biography

  • Chair Professor in Department of Computer Science at Hong Kong Baptist University
  • Research interests include machine learning, visual computing and their applications
  • SRFS project — detecting facial reenactment in a monocular video stream to combat fake videos in speaker verification, a timely but challenging topic, particularly given the emergence of ChatGPT-like techniques capable of generating fake videos
  • Awards and Honours:
    • RGC Senior Research Fellow (2023)
    • Best Theoretical Paper Award in WI-IAT (2020)
    • Swiss Automobile Club Prize (2017)

Project Title

  • Facial Reenactment in a Monocular Video Stream for Speaker Verification and Its Applications

Award Citation

Professor Yiu-ming Cheung is a distinguished scholar whose research interests include machine learning and visual computing, as well as their applications in data science, pattern recognition, multi-objective optimization, and information security. His research work has been published in top-tier journals and conferences in his field. He has been ranked among the world's top 1% most-cited scientists in artificial intelligence and image processing by Stanford University for each of the past three years. In recognition of his international reputation, dedication and exceptional achievements in his academic career, he has been elected a Fellow of IEEE, AAAS, IET, BCS and AAIA.


The SRFS project led by Professor Cheung aims to develop innovative AI-based methods to detect facial reenactment in a monocular video stream, in order to combat fake videos in speaker verification. With the emergence of ChatGPT-like techniques, it is expected that more and more fake videos will appear, with serious impacts on our society. One kind of fake video can be generated by facial reenactment, which transfers the shape of a source face to a target face while preserving the appearance and identity of the target face. Facial reenactment has many attractive potential applications, such as in animation, VR and entertainment programs, but the technique could also cause serious security problems, as it allows a speaker in a video to be reenacted and thus faked. This project will therefore address this important but challenging problem by developing an effective method to detect facial reenactment for speaker verification, and will apply the resulting techniques to two applications: fake video detection and visually speech-independent speaker verification. The research outcomes will have wide-ranging applications in various facial-reenactment-related domains, and will also contribute to advances in computer vision and machine learning research.


Short video of awardee