Workshop 1: Visual Tasks and Challenges under Low-quality Multimedia Data
Overview
Computer vision has long been a research hotspot, and early work focused on high-quality images or well-illuminated daytime scenes. Under such conditions, existing vision techniques achieve accuracy rates of approximately 96%. In practice, however, nearly 90% of criminal activities, especially major cases, occur in low-quality night scenes. The video data collected by surveillance systems in these scenes has low contrast and poor quality. According to the Ministry of Public Security Evidence Identification Center (China), the proportion of poor-quality video images captured at night is as high as 95%, and the performance of current methods on low-quality visible-light images is too low to meet practical security needs. Addressing this problem is therefore urgent.
Challenge
The goals of this challenge are to:
- Bring together state-of-the-art research on object detection under low illumination;
- Call for a coordinated effort to understand the opportunities and challenges emerging in object detection;
- Identify key tasks and evaluate the state-of-the-art methods;
- Showcase innovative methodologies and ideas;
- Introduce interesting real-world applications of intelligent object detection under low illumination;
- Propose new real-world datasets and discuss future directions.

We believe the workshop will offer a timely collection of research updates that benefits researchers and practitioners working in the broad computer vision, multimedia, and pattern recognition communities.
Call for Papers
In addition to the challenge, we solicit original research and survey papers on (but not limited to) the following topics:
- Pedestrian detection in low illumination, low resolution, rain and fog, etc.
- Object detection in low illumination, low resolution, rain and fog, etc.
- Person re-identification in low illumination, low resolution, rain and fog, etc.
- Object recognition in low illumination, low resolution, rain and fog, etc.
- Segmentation in low illumination, low resolution, rain and fog, etc.
- Counting in low illumination, low resolution, rain and fog, etc.
Organisers
- Jing Xiao, (jing@whu.edu.cn), Wuhan University, China
- Xiao Wang, (hebeiwangxiao@whu.edu.cn), Wuhan University, China
- Liang Liao, (liang@nii.ac.jp), National Institute of Informatics, Japan
- Shin’ichi Satoh, (satoh@nii.ac.jp), National Institute of Informatics, Japan
- Chia-wen Lin, (cwlin@ee.nthu.edu.tw), National Tsing Hua University, Taiwan
Workshop 2: Multi-Modal Embedding and Understanding
Overview
Humans perceive the physical world in multiple ways, e.g., by watching, touching, and hearing, which means we process multi-modal information to understand our environment. Multi-modal understanding plays a crucial role in endowing machines with a similar ability. Owing to its research significance, multi-modal embedding and understanding has gained much attention and achieved considerable progress over the past few years. Recent advances in deep learning, such as self-supervised learning and large-scale pre-training, encourage us to explore multi-modal embedding and understanding more broadly and deeply. In this workshop, we aim to bring together researchers from the multimedia community to discuss recent research and future directions in multi-modal embedding and understanding, as well as their applications.
Call for Papers
Multi-modal embedding and understanding are important and fundamental problems in the field of multi-modal analysis, and they have attracted much research attention in recent years. Previous works have explored shallow embedding and understanding in many downstream tasks, including cross-modal retrieval, visual navigation, VQA, visual captioning, etc. To encourage researchers to explore new and advanced techniques in this area, we are organising a workshop on “multi-modal embedding and understanding” in conjunction with ACM MM Asia 2021 and calling for contributions. Topics of interest include (but are not limited to):
- Large-scale pre-training for multi-modal embedding and understanding
- Self-supervised learning in multi-modal embedding and understanding
- Semi-supervised learning in multi-modal embedding and understanding
- Contrastive learning in multi-modal embedding and understanding
- Interpretability in multi-modal embedding and understanding
- Interactive multi-modal understanding
- Trustworthy AI for multi-modal understanding
- Cross-modal matching and retrieval
- Cross-modal understanding
- Multi-modal deep fake generation and detection
- Other related topics
Submission Guidelines
Format: Submitted papers (.pdf format) must use the ACM Article Template https://www.acm.org/publications/proceedings-template. Please remember to add Concepts and Keywords.
Length: Papers must be no longer than 6 pages, including all text and figures, and up to two additional pages may be added for references. The reference pages must only contain references. Over-length papers will be rejected without review.
Workshop Schedule
Please note: the submission deadline is 11:59 p.m. Anywhere on Earth (AoE) on the stated deadline date.
- Paper Submission Deadline: 19 October, 2021.
- Notifications of Acceptance: 1 November, 2021.
- Camera-ready Submission: 7 November, 2021.
Organisers
- Wenguan Wang, ETH Zurich, Switzerland
- Xiaojun Chang, RMIT, Australia
- Yanli Ji, University of Electronic Science and Technology of China, China
- Yi Bin, University of Electronic Science and Technology of China, China
Workshop 3: Multi-Modal Computing of Marine Big Data
https://riverw.github.io/web/MCMBD/index.html
Overview
Unlike traditional multimedia technology, which mainly focuses on human life, studying multimedia data analysis methods for marine big data is a novel and challenging problem. Compared with traditional multimedia data, marine big data differs greatly in feature distribution, content understanding, applications, etc. As a result, existing multimedia analysis methods for tasks such as object detection and recognition, tracking, and depth estimation cannot simply be applied to ocean data analysis. Research on multimedia analysis techniques for marine big data can help humans understand the ocean, enable intelligent detection and protection of ocean resources, and provide important technical support for the protection of various rare ocean resources.
Call for Papers
Marine multimedia data analysis and retrieval techniques are essential for marine resource exploration and for marine environment prediction and forecasting. The main analytical tasks in the marine domain include detection, identification, retrieval, tracking, and prediction/forecasting of marine environmental data such as weather, temperature, humidity, and rainfall. Detection, identification, and tracking technologies make the detection and protection of marine resources intelligent, providing important technical support for the protection of various types of rare marine resources. Today, in order to better understand the ocean, humans are rapidly collecting a wide variety of marine multimedia big data. In this workshop, we will therefore present recent advances of multimedia technology for marine big data. Exploring multi-modal data provides important technical support for understanding the ocean and protecting rare marine resources. We believe this workshop will facilitate a closer integration of multimedia content analysis technologies with applications in the marine field. We solicit original research and survey papers on (but not limited to) the following topics:
- Marine object detection
- Marine object re-identification
- Cross-modal hash retrieval in the marine area
- Fine-grained identification of marine organisms
- Artificial Intelligence for coastal environment evolution prediction
- Artificial Intelligence for optimisation of an ecological dynamic model
- Marine big data mining methods
Organisers
- Jie Nie, Ocean University of China, China
- Lei Huang, Ocean University of China; Pilot National Laboratory for Marine Science and Technology (Qingdao)
- An-An Liu, Tianjin University, China
- Junbo Guo, State Key Laboratory of Communication Content Cognition, People’s Daily Online, China
- Zhiqiang Wei, Ocean University of China; Pilot National Laboratory for Marine Science and Technology (Qingdao)