QU develops intelligent security systems for Qatar 2022
In collaboration with the Supreme Committee for Delivery & Legacy (SC), Qatar University (QU)’s College of Engineering has developed intelligent crowd management and control systems, including multiple components for crowd counting, facial recognition and abnormal event detection (AED).
In a statement, QU said the security and safety of FIFA World Cup Qatar 2022 players, spectators and other individuals involved in the event or attending it are among the primary concerns of the organisers. “Typically, security risks increase multifold due to the large scale of the event and the significant number of fans expected to attend (more than 1.5mn). Thus, the security of the FIFA World Cup Qatar 2022 is challenging due to the increasing number of possible threats and use of technology,” QU said.
Technologies such as Artificial Intelligence (AI), drone-based computer vision and ICT offer promising avenues for ensuring the security and safety of players, spectators and other people attending the event, and help organisers manage crowds efficiently and identify individuals and abnormal events, the statement noted.
This project aims to use AI and drone-based video surveillance to optimise crowd management and control at FIFA World Cup Qatar 2022 sporting facilities. Specifically, the team’s efforts focus on ensuring stadium safety and security plans that let fans attend without having to think about their security; detecting abnormal events; tracking and recognising suspicious persons using face recognition tools; and preserving fans’ privacy by using computing platforms on which recorded data are processed locally.
Crowd management at the World Cup stadiums and their perimeters is crucial to ensuring the safety and smoothness of matches and associated events, given the density of the crowd within and outside the stadiums. “The FIFA World Cup Qatar 2022 will rely on the deployment of cutting-edge technologies, such as surveillance drones, ICT and AI, to optimise crowd management. In this respect, the QU research team has first developed a crowd counting system from drone data, which exploits dilated and scaled neural networks to extract pertinent features and produce crowd density estimations,” the statement explained.
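The core idea behind such density-based counters can be sketched as follows. This is a minimal NumPy illustration, not the team’s actual network: the kernel, image sizes and the impulse “density map” are invented for the example. A dilated (atrous) convolution enlarges the receptive field without adding parameters, and the crowd count is recovered by integrating (summing) the predicted density map.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=2):
    """Naive 2D convolution with a dilation rate, 'valid' padding.

    Inserting (dilation - 1) holes between kernel taps widens the
    receptive field at no extra parameter cost, which is why dilated
    CNNs are popular for crowd density estimation.
    """
    kh, kw = kernel.shape
    # Effective kernel extent once holes are inserted between taps.
    eh, ew = (kh - 1) * dilation + 1, (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# A density map integrates to the head count: each person contributes
# unit mass, so summing the map yields the count estimate.
density = np.zeros((8, 8))
density[2, 2] = density[5, 6] = 1.0   # two "people" as unit impulses
count_estimate = density.sum()        # = 2.0
```

In a real system the density map is the output of the CNN and the impulses are smoothed with Gaussians during training; the summation step, however, works exactly as above.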
Additionally, a new dataset for crowd counting in sports facilities, named the Football Supporters Crowd Dataset (FSC-Set), has been introduced. It comprises 6,000 manually labelled images covering various types of scenes, with thousands of people gathering in or around stadiums.
The research team’s efforts have also focused on developing a face recognition system, which considers faces under pose variations using a multitask convolutional neural network (CNN). Specifically, a cascade structure was employed to combine a pose estimation approach and a face identification module.
The CNN-based pose estimation approach has been trained on three categories of face images: left-side, frontal and right-side captures. Subsequently, three CNN architectures, namely VGG-16+PReLU left, VGG-16+PReLU front, and VGG-16+PReLU right, have been deployed to identify faces based on the estimated pose.
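The cascade structure amounts to routing each face through a pose estimator and then dispatching it to the matching pose-specific model. The sketch below shows only that routing logic; the pose estimator and the three identifiers are trivial stand-ins for the trained CNNs described above, and their names and behaviour are invented for illustration.

```python
import numpy as np

POSES = ("left", "frontal", "right")

def estimate_pose(face):
    """Toy pose estimator: classify by which third of the image
    holds the most pixel mass (a real system would use a CNN)."""
    H, W = face.shape
    thirds = (face[:, :W // 3].sum(),
              face[:, W // 3:2 * W // 3].sum(),
              face[:, 2 * W // 3:].sum())
    return POSES[int(np.argmax(thirds))]

def make_identifier(pose):
    # Placeholder for a pose-specific identification network
    # (VGG-16+PReLU left/front/right in the article).
    return lambda face: f"id-by-{pose}-model"

IDENTIFIERS = {pose: make_identifier(pose) for pose in POSES}

def recognise(face):
    """Cascade: estimate the pose first, then dispatch the face to the
    identification model trained for that pose."""
    pose = estimate_pose(face)
    return pose, IDENTIFIERS[pose](face)
```

Conditioning identification on an estimated pose lets each sub-network specialise on one viewpoint instead of forcing a single model to cover all head orientations.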
Additionally, a skin-based face segmentation scheme, based on structure-texture decomposition and a colour-invariant description, has been introduced to remove irrelevant, non-face information (for example, background content). Empirical evaluations have been conducted on four popular face recognition datasets, on which the proposed system outperformed related state-of-the-art schemes.
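The effect of skin-based segmentation, keeping skin pixels and zeroing out the background before identification, can be illustrated with a crude rule-based mask. This is a deliberately simple stand-in for the article’s structure-texture decomposition and colour-invariant scheme, which is far more robust; the threshold values below are a classic RGB skin heuristic, not the team’s method.

```python
import numpy as np

def skin_mask_rgb(image):
    """Crude rule-based skin mask in RGB space.

    Keeps pixels whose channels satisfy a common skin heuristic
    (dominant red, moderate green/blue); all other pixels, e.g.
    background content, are zeroed out before identification.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    mask = ((r > 95) & (g > 40) & (b > 20)
            & (r > g) & (r > b)
            & ((r - np.minimum(g, b)) > 15))
    return image * mask[..., None]
```

Removing background pixels this way means the downstream identification network sees only face content, which is the same motivation given for the segmentation scheme above.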
Abnormal event detection (AED) using drone-based video surveillance has recently been receiving increasing attention due to its reliability and cost-effectiveness. Typically, drones equipped with cameras can spot violent behaviour in crowds during sports events, and can monitor crowds around the perimeter of stadiums and other public venues during the FIFA World Cup Qatar 2022. To that end, the research team led by Prof al-Maadeed has developed a novel AED system, which learns abnormal actions from both normal and abnormal segments.
The approach avoids annotating anomalous events within training video sequences, which reduces the computational cost and means the system can be easily implemented on drones. Abnormal events are instead learned using a deep multiple-instance ranking scheme that leverages weakly annotated training video sequences. Put simply, annotations are applied to whole videos rather than to specific clips.
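The weakly supervised idea can be sketched with a multiple-instance ranking loss. In this formulation, which is an illustrative assumption about the scheme rather than the team’s exact objective, each video is a bag of segment-level anomaly scores, only the video-level label is known, and the loss pushes the highest-scoring segment of an abnormal video above the highest-scoring segment of a normal video by a margin.

```python
import numpy as np

def mil_ranking_loss(abnormal_scores, normal_scores, margin=1.0):
    """Multiple-instance ranking loss over two bags of segment scores.

    Each video is a bag of per-segment anomaly scores; only the
    video-level (weak) label is known. The hinge loss is zero once the
    top-scoring abnormal segment exceeds the top-scoring normal
    segment by at least `margin`.
    """
    return max(0.0, margin - np.max(abnormal_scores) + np.max(normal_scores))
```

Because only the bag maxima enter the loss, no segment-level (clip-level) annotation is ever needed, which is exactly what makes whole-video labelling sufficient for training.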