
Call for Papers: Real-time Video and Image Analysis in Embedded Systems

We live in a world where the production of image and video data is constantly increasing; the volume produced every day is already in the hundreds of exabytes. In addition, we can now obtain this data through mobile devices as well as sensors, which are becoming more common in our daily lives. A prominent example is the Internet of Things (IoT), through which different kinds of objects are connected to the internet via sensors that are capable of monitoring their environment and sending data back to other devices for analysis. Real-time video and image analysis is important in domains such as automotive, agriculture, security, and surveillance (Zeden et al., 2020). It helps machines perceive the environment and provides the detailed context necessary for automation, enabling features such as motion detection, object recognition, gesture or pattern analysis, and counting. The development of sophisticated embedded applications that combine video processing with other sensor data requires the mastery of many technologies. The next generation of embedded systems is driven by the increasing demand for real-time video and image analysis in various applications. The demand for embedded system performance is growing rapidly, and efficient embedded architectures play a major role in reducing size and power consumption. In general, real-time image and video analysis in embedded systems detects, tracks, and analyzes objects in images or videos, often to help machines understand scenes and react appropriately. This can be applied to many applications, including robots for surveillance and security, autonomous vehicles, drones, and consumer devices. Real-time video analytics leverages machine learning algorithms that can run independently on individual devices (edge processing) to perform object detection and recognition, face recognition, motion detection, scene change detection, and more (Metwaly et al., 2019).
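As an illustration of such on-device (edge) processing, the following is a minimal sketch of motion detection by frame differencing, assuming OpenCV in Python and a camera available at index 0; the threshold values are illustrative and not tuned for any particular embedded platform.

```python
import cv2

cap = cv2.VideoCapture(0)                      # open the default camera (index 0 assumed)
ret, prev = cap.read()                         # grab an initial reference frame
prev_gray = cv2.GaussianBlur(
    cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)

while True:
    ret, frame = cap.read()
    if not ret:                                # stream ended or camera error
        break
    gray = cv2.GaussianBlur(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)  # grayscale + noise suppression
    diff = cv2.absdiff(prev_gray, gray)        # per-pixel change against the previous frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 500:           # enough changed pixels -> report a motion event
        print("motion detected")
    prev_gray = gray                           # current frame becomes the new reference

cap.release()
```

Lightweight techniques like this run comfortably on resource-constrained devices and are often paired with heavier learned models for recognition tasks.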

Embedded systems are becoming increasingly prevalent in our daily lives, offering a wide range of rich capabilities. Cameras and vision-processing technologies provide more opportunities than ever before to expand the capabilities of embedded systems (Yang et al., 2003). Cameras are now at the heart of many embedded systems, from smart homes to advanced robotics. Real-time image and video analysis can be used in embedded systems for monitoring, surveillance, protection, and detection applications: it helps to process a stream of images and detect specific events within that stream. Embedded vision pushes the boundaries of what system-on-chip (SoC) technology can do for deep learning, and this paradigm shift in analytics is driving the development of high-performance embedded vision SoCs that integrate deep learning and artificial intelligence. This special issue will focus on real-time image and video analysis across various aspects of embedded systems. Topics of relevance include image/video sampling, image/video transforms, image/video filtering, face detection, blob detection, multi-object tracking, stereo vision, augmented reality, and gesture recognition.
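To make the idea of detecting a specific event within an image stream concrete, here is a minimal sketch assuming OpenCV in Python and a hypothetical input file stream.mp4; the bundled Haar cascade is used only as a lightweight stand-in for the deep-learning detectors this special issue targets.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (shipped with opencv-python).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture("stream.mp4")           # hypothetical video stream / file
frame_idx = 0
while True:
    ret, frame = cap.read()
    if not ret:                                # end of stream
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:                         # event of interest: at least one face in frame
        print(f"frame {frame_idx}: {len(faces)} face(s) detected")
    frame_idx += 1

cap.release()
```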

The list of topics of interest includes, but is not limited to, the following:

•    Real-time deep learning frameworks for image and video analysis
•    Trends in artificial intelligence for efficient video processing in mobile embedded systems
•    Smart surveillance in real-time embedded systems with artificial intelligence and deep learning
•    2D/3D image and video understanding and processing in real-time systems
•    Advances in image and video analysis approaches for large-scale embedded systems
•    Real-time knowledge integration in embedded systems and applications with image and video analysis
•    Real-time image and video analysis for aerial computing and drone systems
•    Real-time mobile visual search with deep learning and computer vision
•    Real-time mobile imaging and video processing for smart applications
•    Advances in real-time computational photography for embedded applications

Important Dates

•    Submission Deadline – 31 December 2022

Guest Editors
Dr. Andino Maseleno, Universiti Tenaga Nasional, Selangor, Malaysia.
Email ID - andino.maseleno@ieee.org

Dr. Ashutosh Dhar Dwivedi, Copenhagen Business School, Denmark.  
Dr. J. Alfred Daniel, Anna University, India.
Dr. Sujatha Krishnamoorthy, Wenzhou Kean University, Wenzhou, China.
 

Submission Instructions
Before submitting your manuscript, please ensure you have carefully read the Instructions for Authors for EURASIP Journal on Advances in Signal Processing. The complete manuscript should be submitted through the EURASIP Journal on Advances in Signal Processing submission system. To ensure that you submit to the correct special issue, please select the appropriate section in the drop-down menu upon submission. In addition, indicate within your cover letter that you wish your manuscript to be considered as part of the special issue on 'Real-time Video and Image Analysis in Embedded Systems'. All submissions will undergo rigorous peer review, and accepted articles will be published within the journal as a collection.

Sign up for article alerts to keep updated on articles published in EURASIP Journal on Advances in Signal Processing - including articles published in this special issue!