How will technology be used to manage FIFA World Cup crowds in Qatar? – Doha News
One of the biggest challenges Qatar is set to face by the end of the year is crowd control. Now, researchers are trying to find a solution.
Research teams in Qatar have identified new technology to help manage crowds during the much-awaited FIFA World Cup tournament later this year.
The team from the College of Engineering at Qatar University (QU) proposed the use of cutting-edge technologies, including surveillance drones, ICT, and AI, to manage the 1.5 million visitors expected to flock to the Gulf state for the event in November.
Qatar’s organising committee is primarily focused on the security and safety of participants, fans, and other parties involved in the FIFA World Cup Qatar 2022.
The expected density of crowds inside and outside the stadiums has made crowd control at the World Cup venues and their perimeters essential to ensuring the safety and efficiency of the events.
For that reason, the state-owned university has created an intelligent crowd management and control system with numerous components for crowd counting, face recognition, and abnormal event detection in partnership with the Supreme Committee for Delivery and Legacy (SC).
How will it help?
The system counts crowds using aerial imagery from drones, processed by dilated and scaled neural networks that extract useful features and estimate crowd densities.
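The article does not publish the team's architecture, but the two ideas it names can be sketched in a few lines of numpy: a dilated convolution enlarges a filter's receptive field without adding parameters, and summing a predicted density map yields the crowd estimate. Everything below is illustrative, not the researchers' actual code.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=2):
    """Single-channel 2-D convolution with a dilated kernel (stride 1, 'valid').

    Dilation samples the input at spaced-out positions, so a 3x3 kernel
    with dilation=2 covers a 5x5 receptive field with only 9 weights.
    """
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective receptive-field height
    eff_w = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# In density-map counting, each person contributes unit mass to the map,
# so summing the map gives the estimated head count.
density = np.zeros((8, 8))
density[2, 2] = 1.0   # person 1
density[5, 6] = 1.0   # person 2
count = density.sum()  # estimated crowd size: 2.0
```

In a real counting network such maps are predicted from drone frames; the summation step at the end is the same.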
The Football Supporters Crowd Dataset (FSC-Set), a brand-new dataset for crowd counting at sports venues, will also be introduced during the tournament. It contains 6,000 manually categorised photographs of various settings in which tens of thousands of people congregate in or near stadiums.
The research team has also concentrated on creating a face identification system that uses a multitask convolutional neural network (CNN) to handle faces in various poses. Specifically, a cascade structure combines a pose estimation module with a face identification module.
Left-side, frontal, and right-side captures of faces served as the training data for the CNN-based pose estimation method.
Three CNN architectures (VGG-16+PReLU left, VGG-16+PReLU front, and VGG-16+PReLU right) have also been deployed to identify faces based on the estimated pose.
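The cascade described above can be outlined as: estimate the pose first, then route the face to the matching pose-specific recogniser. The sketch below assumes only that structure; the toy pose rule and the recogniser stand-ins are placeholders, not the team's actual VGG-16+PReLU models.

```python
from typing import Callable, Dict

def make_cascade(pose_estimator: Callable, recognisers: Dict[str, Callable]):
    """Build a two-stage pipeline: pose estimation, then pose-specific ID."""
    def identify(face_crop):
        pose = pose_estimator(face_crop)     # "left" / "front" / "right"
        return recognisers[pose](face_crop)  # route to the matching model
    return identify

# Toy stand-in: classify pose from a hypothetical yaw angle on the crop.
def toy_pose(crop):
    yaw = crop["yaw"]
    if yaw < -15:
        return "left"
    if yaw > 15:
        return "right"
    return "front"

# Placeholder recognisers, one per pose (would be the three CNNs in practice).
recognisers = {
    "left":  lambda c: "id-from-left-model",
    "front": lambda c: "id-from-front-model",
    "right": lambda c: "id-from-right-model",
}

identify = make_cascade(toy_pose, recognisers)
```

The design point is that each recogniser only ever sees faces in the pose it was trained on, which is what motivates training three separate networks.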
Meanwhile, to eliminate unnecessary information such as background content, a skin-based face segmentation approach based on structure-texture decomposition and a colour-invariant description has been presented. In empirical evaluations on four well-known face recognition datasets, the suggested approach beat related state-of-the-art systems.
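The structure-texture and colour-invariant details are not published in the article; as a rough illustration of what skin-based segmentation does (keep likely-skin pixels, discard background), here is a classic explicit RGB skin rule in numpy. It is a crude stand-in for the team's method, not a reproduction of it.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of likely-skin pixels using a simple RGB threshold rule.

    A well-known heuristic: skin tones are reddish, reasonably bright,
    and have a noticeable red-green gap. Pixels failing the rule
    (e.g. background) are masked out before recognition.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(-1).astype(int) - rgb.min(-1).astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (spread > 15)
        & (np.abs(r - g) > 15) & (r > g) & (r > b)
    )

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (210, 160, 130)   # skin-like tone -> kept
img[1, 1] = (30, 80, 200)     # blue background -> masked out
mask = skin_mask(img)
```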
Abnormal event detection (AED) via drone-based video monitoring has recently gained popularity because it is dependable and economical: camera-equipped drones can detect aggressive behaviour in crowds at sporting events.
As such, the technology can be used during the World Cup to keep an eye on crowds outside stadiums and in other public places.
To achieve this, the research team, under the direction of Prof. Al Maadeed, created a novel AED system that learns abnormal movements from both normal and abnormal video segments.
This avoids annotating abnormal events segment by segment in the training videos, which lowers the computational cost and makes the system simple to deploy on drones.
As a result, abnormal events are learned using a deep multiple instance ranking framework trained on weakly annotated video sequences: labels are attached to entire videos rather than to individual segments.
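This weakly supervised setup resembles the multiple-instance ranking objective used in earlier video anomaly-detection work: each video is a "bag" of segments carrying only a whole-video label, and the loss pushes the highest-scoring segment of an abnormal video above the highest-scoring segment of a normal one. A minimal numpy sketch, illustrative rather than the team's actual objective:

```python
import numpy as np

def mil_ranking_loss(abnormal_scores, normal_scores, margin=1.0):
    """Hinge ranking loss over bags of per-segment anomaly scores.

    Only video-level labels are needed: the max-scoring segment of the
    abnormal bag should exceed the max-scoring segment of the normal
    bag by at least `margin`.
    """
    return max(0.0, margin - np.max(abnormal_scores) + np.max(normal_scores))

# Per-segment anomaly scores from a (hypothetical) scoring network.
abnormal_video = np.array([0.1, 0.2, 0.9, 0.3])   # one segment fires
normal_video = np.array([0.1, 0.2, 0.15, 0.1])
loss = mil_ranking_loss(abnormal_video, normal_video)
```

No segment inside either video is ever labelled; the max operation lets the network discover for itself which segment of the abnormal video contains the event, which is what makes the approach cheap to annotate and practical for drone footage.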