Interdisciplinary Artificial Intelligence Research

Image Processing
Distributed Machine Learning
Social Sciences AI research
Humanities and Languages AI research
Business and Hospitality management AI research
Artificial Intelligence Research Team

Prof. H. Anthony CHAN

Prof. Wan-Chi SIU

Dr. Xueting Tina LIU

Dr. Chun Pong Jacky CHAN

Dr. Ricky Kwok Cheong AU

Dr. Patrick Chi Wai LEE

Dr. Charles Kin Man CHOW

Return to index map

Artificial Intelligence Research

Computer and Information Sciences AI research

Computer/Information Sciences and Health Sciences AI research

Computer/Information Sciences and Social Sciences AI research

Social Sciences AI research

Humanities and Languages AI research

Business and Hospitality management AI research

Return to index map

Prof. H. Anthony CHAN, BSc (HKU), MPhil (CUHK), PhD (Maryland), FIEEE

Professor and Dean, School of Computing and Information Sciences

5G Wireless network, Distributed machine learning, IETF (IPv6 Internet) standards, Components/Systems and Products Reliability

Room A804-1; Tel: 3702 4210; hhchan at

Distributed machine learning in 5G Wireless

The current and emerging 5G wireless network not only provides much higher data rates; it is also expected to serve some 100 billion Internet of Things (IoT) IP devices by 2025. A few of the important enabling technologies are described below.

3GPP is also defining the architecture, functions, and interfaces that enable the use of machine learning in 5G/6G wireless. The Network Data Analytics Function (NWDAF) can employ machine learning to unlock the value of network data in the 5G core network, while machine learning may also be employed in the radio access network (RAN).

In addition, IoT devices generate large amounts of data to support learning, and the learning itself may be either centralized or distributed.
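One common pattern for distributed learning over many devices is federated averaging: each device trains on its local data and a server averages the resulting models. The sketch below is purely illustrative (a linear model on synthetic data, not any 3GPP-defined mechanism); all function names and parameters are assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training: full-batch gradient descent on a
    linear-regression loss (illustrative stand-in for real on-device training)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """Server-side step: average device models, weighted by local data size."""
    return np.average(np.stack(weight_list), axis=0, weights=np.asarray(sizes, float))

# Simulate a few IoT devices, each holding private data from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_models = [local_update(global_w, X, y) for X, y in devices]
    global_w = federated_average(local_models, [len(y) for _, y in devices])
```

Only model weights cross the network in each round; the raw device data never leaves the device, which is the main appeal of the distributed setting.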

Return to index map

Prof. Wan-Chi SIU, MPhil (CUHK), PhD (Imperial College London), IEEE Life Fellow

Research Professor, School of Computing and Information Sciences

DSP and Fast Algorithms, Intelligent Image and Video Coding, HDTV and 3DTV, Super-Resolution with Machine Learning and Deep Learning, Visual Surveillance, Object Recognition and Tracking, Visual Technology for Vehicle Safety and Autonomous Car

Room A804; wcsiu at

Dr. Xueting Tina LIU, BEng, BA (Tsinghua), PhD (CUHK)

Assistant Professor, School of Computing and Information Sciences

Computer Graphics, Computer Vision, Machine Learning, Computational Manga and Anime

Room A806; Tel: 3702 4217; tliu at

Image Processing research

Deep Extraction of Manga Structural Lines

Given a pattern-rich manga image, we propose a novel method to identify its structural lines, with no assumption on the patterns. To handle the large variety of screen patterns and improve output accuracy, we design a deep network model. We also develop an efficient and effective way to generate a rich set of training data pairs.

Invertible Grayscale

In this work, we propose an innovative method to synthesize an invertible grayscale: a grayscale image from which the original color image can be fully restored. The key idea is to encode the original color information into the synthesized grayscale in a way that users cannot recognize any anomalies. We propose to learn and embed the color-encoding scheme via a convolutional neural network (CNN), and we design a loss function to ensure the trained network possesses three required properties: color invertibility, grayscale conformity, and resistance to quantization error.
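The paper's actual loss terms are not reproduced here, but the idea of combining the three required properties into one training objective can be sketched as a weighted sum of penalty terms. All weights and formulas below are illustrative assumptions, not the published losses.

```python
import numpy as np

def invertibility_loss(restored_rgb, original_rgb):
    """Penalize color restoration error (illustrative mean-squared error)."""
    return np.mean((restored_rgb - original_rgb) ** 2)

def conformity_loss(gray, reference_gray):
    """Penalize the synthesized grayscale for drifting from a conventional
    luminance grayscale, so the hidden color data stays visually unnoticeable."""
    return np.mean((gray - reference_gray) ** 2)

def quantization_loss(gray):
    """Penalize values far from the nearest 8-bit level, so rounding to
    integers (e.g. when saving as PNG) loses little encoded information."""
    levels = np.round(gray * 255) / 255
    return np.mean((gray - levels) ** 2)

def total_loss(restored_rgb, original_rgb, gray, reference_gray,
               w_inv=1.0, w_conf=0.5, w_quant=10.0):
    """Weighted sum of the three properties (weights are illustrative)."""
    return (w_inv * invertibility_loss(restored_rgb, original_rgb)
            + w_conf * conformity_loss(gray, reference_gray)
            + w_quant * quantization_loss(gray))
```

In training, such a combined objective would be minimized end-to-end through both the encoding and restoring networks.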

Deep Binocular Tone Mapping

Tone mapping is a commonly used technique that maps the set of colors in a high-dynamic-range (HDR) image to another set of colors in a low-dynamic-range (LDR) image. Recently, with the increased use of stereoscopic devices, the notion of binocular tone mapping has been proposed. The key is to map an HDR image to two LDR images with different tone mapping parameters, one as the left image and the other as the right image, so that more human-perceivable visual content can be presented with the binocular LDR image pair than with any single LDR image. In this work, we propose the first deep binocular tone mapping operator that more effectively distributes visual content to an LDR pair, leveraging the representability and interpretability of deep convolutional neural networks. Based on existing binocular perception models, novel loss functions are also proposed to optimize the output pairs in terms of local details, global contrast, content distribution, and binocular fusibility.

Deep Visual Sharing with Colorblind

Visual sharing between color vision deficiency (CVD) and normal-vision audiences is challenging because multiple binocular visual requirements must be satisfied simultaneously: the experience must be color-distinguishable and binocularly fusible for CVD audiences without hurting the visual experience of normal-vision audiences. In this paper, we propose the first deep-learning-based solution to this visual sharing problem. To achieve this, we formulate the binocular image generation problem as the generation of a difference image, which effectively enforces the binocular constraints. We also propose to retain only high-quality training data and to enrich the variety of training data by intentionally synthesizing various confusing color combinations.

Colorblind-Shareable Videos by Synthesizing Temporal-Coherent Polynomial Coefficients

To share the same visual content between color vision deficiency (CVD) and normal-vision audiences, attempts have been made to allocate the two visual experiences of a binocular display (wearing and not wearing glasses) to CVD and normal-vision audiences respectively. However, existing approaches are tailored only for still images. In this paper, we propose the first practical solution for fast synthesis of temporally coherent, colorblind-shareable video, accomplished with a convolutional neural network (CNN). To avoid color inconsistency, we use a global color transformation formulation: it first decomposes the color channels of each input video frame into several basis images and then linearly recombines them with the corresponding coefficients. Besides color consistency, the generated colorblind-shareable video must also satisfy four constraints: color distinguishability, binocular fusibility, color preservation, and temporal coherence. Instead of generating the left and right videos separately, we train our network to predict temporally coherent coefficients for generating a single difference video (between the left and right views), which is in turn used to generate the binocular pair.
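The basis-and-coefficients idea above can be illustrated with a minimal NumPy sketch. The exact basis used in the paper is not reproduced here; as the title suggests polynomial coefficients, this sketch assumes degree-2 monomials of the R, G, B channels as the basis images.

```python
import numpy as np

def polynomial_basis(frame):
    """Build per-pixel polynomial basis images from an RGB frame
    (illustrative basis: constant, linear, and quadratic channel monomials)."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    ones = np.ones_like(r)
    return np.stack([ones, r, g, b, r * r, g * g, b * b, r * g, g * b, r * b],
                    axis=-1)  # shape (H, W, n_basis)

def recombine(basis, coeffs):
    """Linearly recombine basis images into one output channel.
    In the paper such coefficients are predicted per frame by a CNN and
    kept temporally coherent across frames; here they are given directly."""
    return basis @ coeffs

H, W = 8, 8
frame = np.random.default_rng(2).random((H, W, 3))
basis = polynomial_basis(frame)
coeffs = np.zeros(10)
coeffs[1] = 1.0            # identity-like choice: output equals the R channel
out = recombine(basis, coeffs)
```

Because each output frame is a linear combination of smooth basis images, keeping the few coefficients stable over time keeps the whole frame's colors stable, which is why the formulation helps temporal coherence.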

Return to index map

Dr. Chun Pong Jacky CHAN, BSc, PhD (CityUHK)

Assistant Professor

Computer Graphics, Computer Vision, Machine Learning, Character Animation, Health Care Technology

Room A810; Tel: 3702 4204; j2chan at

Classification of Parkinsonian Versus Normal 3-D Reaching

An objective assessment for determining whether a person has Parkinson's disease is proposed. This is achieved by analyzing the correlation between joint movements, since Parkinsonian patients often have trouble coordinating different joints in a movement. Subjects are classified as having or not having Parkinson's disease using a least-squares support vector machine (LS-SVM). Experimental results showed that using either auto-correlation or cross-correlation features achieved over 91% correct classification.
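The pipeline above (cross-correlation features fed to an LS-SVM) can be sketched on synthetic data. The trial generator, feature choice, and all parameters below are illustrative assumptions, not the study's actual data or settings; the LS-SVM follows the standard Suykens formulation, which replaces the SVM quadratic program with one linear system.

```python
import numpy as np

rng = np.random.default_rng(3)

def correlation_feature(joint_a, joint_b):
    """Cross-correlation between two joint trajectories (one feature per trial)."""
    return np.corrcoef(joint_a, joint_b)[0, 1]

def make_trial(coordinated):
    """Synthetic reaching trial: two joint-angle time series.
    Coordinated trials couple the joints; impaired trials decouple them."""
    t = np.linspace(0, 1, 100)
    shoulder = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
    elbow = (shoulder + 0.1 * rng.normal(size=t.size) if coordinated
             else rng.normal(size=t.size))
    return shoulder, elbow

def train_ls_svm(X, y, gamma=10.0):
    """Least-squares SVM (linear kernel): solve the bordered linear system
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(y)
    omega = np.outer(y, y) * (X @ X.T)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def predict(X_train, y_train, b, alpha, X_new):
    return np.sign((alpha * y_train) @ (X_train @ X_new.T) + b)

# Balanced synthetic data set: +1 = coordinated (healthy), -1 = impaired.
X = np.array([[correlation_feature(*make_trial(c))]
              for c in [True] * 20 + [False] * 20])
y = np.array([1.0] * 20 + [-1.0] * 20)
b, alpha = train_ls_svm(X, y)
accuracy = np.mean(predict(X, y, b, alpha, X) == y)
```

The appeal of the LS-SVM here is practical: training reduces to solving one dense linear system, which is straightforward for the modest data sizes typical of clinical movement studies.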

Return to index map

Improving posture classification accuracy for depth sensor-based human activity monitoring in smart environments

Smart environments and monitoring systems are popular research areas nowadays due to their potential to enhance quality of life. Applications such as human behavior analysis and workspace ergonomics monitoring can be automated, improving the well-being of individuals at minimal running cost. In this paper, we propose a framework that accurately classifies the nature of 3D postures obtained by Kinect using a max-margin classifier. Different from previous work in the area, we integrate information about the reliability of the tracked joints to enhance the accuracy and robustness of our framework. As a result, apart from generally classifying activities in different movement contexts, our method can distinguish the subtle differences between correctly and incorrectly performed movements in the same context. We demonstrate how our framework can be applied to evaluate a user's posture and identify postures that may lead to musculoskeletal disorders. Such a system can be used in workplaces such as offices and factories to reduce the risk of injury. Owing to the low cost and easy deployment of depth-camera-based motion sensors, our framework can be applied widely in homes and offices to facilitate smart environments.
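The paper's exact feature construction is not given here, but one simple way to fold joint-tracking reliability into the classifier input, as a sketch, is to scale each joint's coordinates by its confidence before flattening into a feature vector. The helper name and the toy data are assumptions for illustration.

```python
import numpy as np

def reliability_weighted_features(joints, confidence):
    """Scale each tracked joint's 3-D position by its tracking confidence,
    so unreliably tracked (e.g. occluded) joints contribute less to the
    posture feature vector fed to the max-margin classifier."""
    return (joints * confidence[:, None]).ravel()

# 20 tracked joints; joint 5 is occluded, so the sensor reports low confidence.
joints = np.ones((20, 3))
confidence = np.ones(20)
confidence[5] = 0.1
features = reliability_weighted_features(joints, confidence)
```

Down-weighting rather than discarding unreliable joints keeps the feature vector a fixed length, which standard max-margin classifiers require.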

Return to index map

A generic framework for editing and synthesizing multimodal data with relative emotion strength

Emotion is considered a core element of performance. In computer animation, body motions and facial expressions are two popular mediums through which a character expresses emotion. However, there has been limited research on how to effectively synthesize these two types of character movement with different levels of emotion strength under intuitive control, which is difficult to model effectively. In this work, we explore a common model that can represent emotion for both body motion and facial expression synthesis. Unlike previous work that encodes emotions into discrete motion style descriptors, we propose a continuous control indicator called emotion strength, and present a data-driven approach that synthesizes motions with fine control over emotions. Our method can be applied to interactive applications such as computer games, image editing tools, and virtual reality, as well as to offline applications such as animation and movie production.

Return to index map

Dr. Ricky Kwok Cheong AU, BSocSc, MPhil (HKU), PhD (Univ of Tokyo)

Assistant Professor, School of Social Sciences

Room A626; Tel: 3702 4528; rau at

Social Sciences and artificial intelligence research

Emotional expression recognition

The School of Social Sciences is investigating the recognition of emotional expressions from faces, aiming to achieve higher accuracy in difficult scenarios that have challenged prior approaches.

Return to index map

Dr. Patrick Chi Wai LEE, MBA (Leicester, UK), MA (HKBU), MA (CityUHK), PhD (Newcastle UK)

Assistant Professor, School of Humanities and Languages

Theoretical Linguistics (syntax), Second Language Acquisition, Discourse Analysis

Room A617; Tel: 3702 4303; cwlee at

Humanities and Languages AI research

Language and artificial intelligence research:

The School of Humanities and Languages is investigating the different responses and types of speech acts used between interlocutors (particularly in workplace English for students), to understand these responses and speech acts and their functions in specific scenarios.

Return to index map

Dr. Charles Kin Man CHOW, BSc (CityUHK), MBA (HKPolyU), MSc PGCE (HKU), DBA (Newcastle, Australia)

Assistant Professor, Rita Tong Liu School of Business and Hospitality Management

Management information systems, e-commerce, and e-service quality

Room A511; Tel: 3702 4568; cchow at

Business and Hospitality management AI research

Smart hotel

Current work includes smart hotel applications, such as smart scenery and smart guest services. Applications in accounting and finance are also being explored.

In a previous study (Chow, 2017), Dr. Charles Chow examined how web design, responsiveness, reliability, enjoyment, ease of use, security, and customization influence the e-service quality of online hotel booking agencies in Hong Kong. These factors were found to positively influence e-service quality. The research provided industry practitioners with new insights and recommendations for designing and implementing online hotel booking websites. Taking these findings into consideration, online hotel booking companies can concentrate their resources on the identified dimensions, particularly web design and customization, to achieve the desired level of e-service quality.


Chow, K. M. (2017). E-service quality: A study of online hotel booking websites in Hong Kong. Asian Journal of Economics, Business and Accounting, 3(4), 1-13.

Return to index map

AI References

References in AI

Introductory Guide to Artificial Intelligence

Machine learning resources

Return to index map

References in Healthcare

How Artificial Intelligence Helps in Health Care

Artificial Intelligence Will Change Healthcare as We Know It

AI In Health Care: The Top Ways AI Is Affecting The Health Care Industry

Surgical robots, new medicines and better care: 32 examples of AI in healthcare

HL7: Health Level Seven International develops American National Standards Institute (ANSI)-accredited standards for the exchange, integration, sharing, and retrieval of electronic health information that supports clinical practice and the management, delivery, and evaluation of health services.

CCD: The Continuity of Care Document is built using HL7 Clinical Document Architecture (CDA) elements and contains data defined by the American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR), to share summary information about the patient within the broader context of the personal health record.

Jiang F, Jiang Y, Zhi H, et al., "Artificial intelligence in healthcare: past, present and future," Stroke and Vascular Neurology, 2017;2, doi: 10.1136/svn-2017-000101.

Dhamdhere P, Harmsen J, Hebbar R, et al., "Big Data in Healthcare," 2016.

Joyce C, "Harnessing the Power of Data in Health Care: Data as a Strategic Asset," Symposium on Data Science for Healthcare (DaSH), 2017.

Ettinger A, "Data and Information Management in Public Health," July 2004.

Return to index map

References in Management

The Place of Management in an AI Curriculum

Managing Human and Machine Intelligence

What does human-centric AI mean to management?

Business Analytics Institute

Return to index map

References in Social services

Future of Social Work, GISW Anniversary Celebration, Nov 23 2018.

Future of Social Work: International Perspectives

How Artificial Intelligence Will Save Lives in the 21st Century

Artificial Intelligence Ideas and Social Work

Return to index map

AI demo

AI experiments with Google

Return to index map

10 craziest AI experiments

Return to index map