Journal of Electronic Imaging

Editor-in-Chief: Zeev Zalevsky, Bar-Ilan University, Israel

The Journal of Electronic Imaging, copublished by IS&T and SPIE, publishes papers in all technology areas that make up the field of electronic imaging and are normally considered in the design, engineering, and applications of electronic imaging systems.

On the cover: The figure is from "Saliency-enhanced two-stream convolutional network for no-reference image quality assessment" by Huanhuan Ma et al. in Volume 31, Issue 2.

Call For Papers
How to Submit a Manuscript

Regular papers: Submissions of regular papers are always welcome.

Special section papers: Open calls for papers are listed below. A cover letter indicating that the submission is intended for a particular special section should be included with the paper.

To submit a paper, please prepare the manuscript according to the journal guidelines and use the online submission system. All papers will be peer-reviewed in accordance with the journal's established policies and procedures. Authors have the choice to publish with open access.

Perspectives on Selected Topics of Electronic Imaging
Publication Date
This is an ongoing call for papers.
Submission Deadline
Rolling submissions
Guest Editors
Zeev Zalevsky

Bar-Ilan University
Israel
Zeev.Zalevsky@biu.ac.il

Scope

The field of electronic imaging is very broad and includes many fast-evolving sub-directions that might be of interest to our broader community of readers.

The structure of this type of manuscript should include a comprehensive review of the scientific milestones achieved on the proposed topic, as well as the authors' perspectives on the future scientific and technological evolution of the field and its imminent impact on society.

This special call for papers invites leading researchers in the various sub-fields related to electronic imaging to submit a pre-manuscript proposal/draft to the journal for evaluation. If found suitable, the authors will be invited to submit a full manuscript.

 

Recent Advances in Multimedia Information Security
Publication Date
July/August 2023
Submission Deadline
Submissions open 1 August through 1 December 2022.
Guest Editors
Amit Kumar Singh

National Institute of Technology
Department of Computer Science and Engineering
Patna, India
amit.singh@nitp.ac.in

Ashima Anand

Thapar Institute of Engineering and Technology
Department of Computer Science and Engineering
Patiala, India
ashima1795@gmail.com

Stefano Berretti

University of Florence
Florence, Italy
stefano.berretti@unifi.it

Scope

Multimedia content includes images, text, audio, video, and graphics, and stands as one of the most demanding and exciting aspects of the information era. Owing to new developments in science and technology, such content can easily be copied, recreated, distributed, and stored for many real-world applications, such as smart healthcare, secure multimedia content on social networks, secure e-voting systems, automotive, military, digital forensics, digital cinema, education, insurance, driver's licenses, and passports. The transmission of multimedia information over open channels using information and communication technology (ICT) has proved an indispensable and cost-effective means of disseminating and distributing media files. However, criminal offenses such as identity theft, copyright violation, and misuse of personal and medical information have become part of daily life and cause financial loss. Addressing these challenges has therefore become an important problem for researchers in the field. Motivated by these facts, this special section invites researchers from both academia and industry to explore and share new ideas, approaches, theories, and practices focused on multimedia security and privacy solutions for real-world applications.

Authors are invited to submit original research and high-quality survey articles on topics including but not limited to:

  • Multimedia intellectual property protection
  • Encryption of multimedia records
  • Multimedia information hiding
  • Blockchain technology for multimedia
  • Biometrics
  • Multimedia security for industries
  • Security and privacy trends in the industrial IoMT
  • Cloud data security
  • Cyber security
  • Protection systems/mechanisms against patient identity leakage
  • Multimedia forensics
  • Privacy in multimedia communication
  • Federated learning for multimedia security and privacy
  • Multimedia in fog/edge computing environments
  • Secure media streaming
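As a toy illustration of one topic above, multimedia information hiding, the sketch below embeds a payload in the least-significant bits of grayscale pixel values (a deliberately naive scheme; the pixel values and payload are hypothetical, and practical watermarking uses far more robust transform-domain methods):

```python
def embed_lsb(cover, bits):
    """Hide one payload bit per pixel in the least-significant bit (LSB)."""
    stego = list(cover)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b  # clear the LSB, then write the payload bit
    return stego

def extract_lsb(stego, n_bits):
    """Recover the first n_bits hidden bits."""
    return [px & 1 for px in stego[:n_bits]]

# Toy demo: hide one byte in a flat 8x8 "image" of mid-gray pixels
cover = [128] * 64
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, payload)
recovered = extract_lsb(stego, len(payload))
```

Because only the lowest bit changes, each pixel moves by at most one gray level, which is why LSB hiding is imperceptible yet trivially destroyed by recompression.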

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

Color: From Images to Videos
Publication Date
July/August 2023
Submission Deadline
Submissions open 1 September to 1 December 2022.
Guest Editors

University of Milan - Bicocca
Italy
simone.bianco@unimib.it

Marco Buzzelli

University of Milan - Bicocca
Italy
marco.buzzelli@unimib.it

Alain Trémeau

University Jean Monnet
France
alain.tremeau@univ-st-etienne.fr

Scope

One of the growing challenges faced by the color research community is the transition from the image domain to the video domain, across all aspects of color imaging. This special section aims at bringing together a number of contributions from experts in the field to present methods and techniques that:

  • Apply to traditional color imaging, with discussions about their possible extension to the video domain.
  • Have been successfully transferred from the image domain to the video domain.
  • Have been explicitly developed for the video domain.

The analysis and processing of video sequences is typically addressed with early-fusion or late-fusion approaches. In late-fusion approaches, each frame is individually processed with image-specific techniques, and the results are combined in a second stage. In early-fusion approaches, the developed methods explicitly consider the relationship between adjacent frames to perform temporally aware processing. Solutions in these and other categories are welcome submissions to the special section.
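Terminology varies across the literature; in the sketch below, "late fusion" denotes per-frame processing followed by combination of the results, and "early fusion" denotes merging adjacent frames before processing. The per-frame score is a hypothetical placeholder (for a linear score the two pipelines coincide; they generally differ once the per-frame processing is non-linear):

```python
def per_frame_score(frame):
    """Hypothetical image-specific estimator: mean intensity (a placeholder)."""
    return sum(frame) / len(frame)

def late_fusion(video):
    """Process each frame independently, then combine the per-frame results."""
    return sum(per_frame_score(f) for f in video) / len(video)

def early_fusion(video):
    """Merge adjacent frames first (pixel-wise temporal mean), then process jointly."""
    merged = [sum(px) / len(px) for px in zip(*video)]
    return per_frame_score(merged)

# Toy "video": 3 frames of 3 pixels each
video = [[10, 20, 30], [20, 30, 40], [30, 40, 50]]
```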

We also encourage the submission of manuscripts that address the trade-off between efficiency and effectiveness. For specific applications in the video domain, the computational efficiency of the developed method might be particularly relevant, for example to provide real-time feedback in the camera viewfinder stream. Even in an offline setup, where live processing is not required, fast computation can still be critical for the practical processing of long video sequences. Solutions that focus on efficiency, effectiveness, or a trade-off between the two are welcome submissions to the special section.

Possible contributions to the special section include but are not limited to the following topics:

  • Color vision (e.g. biologically inspired systems) and active vision (e.g. autonomous cameras, wearable and assistive displays)
  • Video capture, compression, and display
  • Video processing, restoration, and enhancement
  • Material and color appearance
  • Video color constancy in static and dynamic scenes
  • Person re-identification, video-surveillance and security
  • Color perception and video quality assessment
  • Multispectral video imaging
  • Computer graphics, machine vision (e.g. industrial inspection) and embedded real-time systems

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.


Sustainable Solutions for Cyber Physical Systems
Publication Date
Vol. 32, Issue 3
Submission Deadline
Submissions open from 1 July through 1 October 2022.
Guest Editors
Rajakumar Arul

Vellore Institute of Technology
Chennai, India
rajakumararul@ieee.org

Muhammad Imran

Federation University
Ballarat, Australia
m.imran@federation.edu.au

Shahid Mumtaz

Instituto de Telecomunicações
Aveiro, Portugal
smumtaz@av.it.pt

Jun Wu

Shanghai Jiao Tong University
Shanghai, China
junwuhn@sjtu.edu.cn

Scope

Cyber-physical systems (CPSs) are finely tuned engineering frameworks that amalgamate virtual and real processes. CPSs can intimately coordinate and control computational, information-exchange, and physical facilities. The incorporation of real and virtual worlds yields technological advancements and leads to sustainable development across a wide range of businesses. CPSs are hybrid supervision, perception, and control systems that use a duplex link between real-world and cyberspace components to independently regulate processes, instructions, and resources. Internet-integrated technologies have transformed human lives, making them more advanced and digitized. Future energy grids, building automation, the Internet of Vehicles, advanced industrial production, remote medical innovations, computerized flight navigation systems, and driverless vehicles are only a few of the key applications of CPSs.

In real-life scenarios, CPSs must function reliably, conveniently, seamlessly, and expeditiously, and CPS adopters routinely interact with one another and relay information effectively. The proportion of cyberspace aspects has progressively gained prominence in recent decades, to the extent that CPSs are now emerging application and service systems that link best-available technologies and learning algorithms. There has been considerable effort in the cyber-physical systems community to interconnect image processing systems. The adoption of CPSs in imaging will tremendously influence the way smart platforms and smart infrastructure are developed, administered, and interconnected with other intelligent devices, such as computer vision, biomedical imaging, sensing, and medical imaging systems. CPS can be characterized as the deep assimilation of software applications, processing, communication, and management systems. This is an evolving and multidisciplinary topic in which researchers from bioinformatics, smart manufacturing, and engineering systems collaborate to innovate. The significance of CPSs can be seen in a variety of growing industries, including home care, automobiles, sustainable energy, medical implants, quality health care, vehicle tracking, and logistics.

Without trusted and reliable information systems and computational technologies, the contribution of CPS remains inadequate. In the medical industry, for example, patient health datasets comprise extremely precise information, such as binary mask images for tumor measurement and diverse ECG waveforms. Image recognition, feature extraction, and clustering techniques are therefore essential.

Computer vision (CV)-based CPSs are computerized depth perception systems that often do not rely on assumptions such as structured conditions. Moreover, research into computer image processing algorithms widens opportunities and possibilities. CV-based CPS necessitates practical studies focusing on the diffusion of technology from scholars to practitioners, which poses impediments such as the need for high accuracy, authentic computation, inadequate training data, reduced manual contribution, and customer interoperability. Further, with the advent of different image processing technologies, the objective of advanced computer vision in CPS is to evaluate the deep interconnection of cyber and physical system applications, and to analyze and optimize virtualization technology so that it is real-time, protected, sustainable, and feasibly applied.

This special section intends to give practitioners and professionals a platform to present their expertise and explore research challenges in computer vision-based CPS. The adoption of CPS in association with advanced computer vision technologies will broaden boundaries and evaluate key ideas of great significance and academic value. The goal of this special section is to disseminate submissions that address the latest findings on themes relevant to CPS deployment using advanced computer vision technology.

Potential topics include but are not limited to:
  • Adaptive video surveillance and observation of crowd behavior
  • Health information network monitoring and smart healthcare computing
  • Associative image knowledge and understanding
  • Medical image enhancement in healthcare administration
  • Synchronization and visualization systems in urban areas
  • Heterogeneous sensor fusion and decentralized sensor nodes
  • System failure identification and assessment
  • Medical inspection and security systems
  • Enhanced and augmented reality systems
  • CPS interface design and supervisory control management
  • CPS implementation and authorization using classifiers
  • Automatic specification optimization in image compressors and code refactoring
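As one concrete reading of the heterogeneous sensor fusion topic above, the sketch below combines independent sensor estimates by inverse-variance weighting, a standard baseline; the measurement values and variances are hypothetical, and real CPS deployments would estimate them online:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent (value, variance) estimates.
    More certain sensors (smaller variance) get proportionally more weight, and
    the fused variance 1 / sum(1/var_i) is smaller than any input variance."""
    inv_sum = sum(1.0 / var for _, var in estimates)
    value = sum(v / var for v, var in estimates) / inv_sum
    return value, 1.0 / inv_sum

# Toy demo: two sensors observing the same physical quantity
fused_value, fused_var = fuse([(10.0, 4.0), (14.0, 1.0)])
```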

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

 


Evolutionary Techniques for Computer Vision and Imaging Processing
Publication Date
Vol. 32, Issue 3
Submission Deadline
Submissions open from 1 August through 1 November 2022.
Guest Editors

University of Petroleum & Energy Studies
Dehradun, India
dkoundal@ddn.upes.ac.in

Kemal Polat

Bolu Abant Izzet Baysal University
Department of Electrical and Electronics Engineering
Bolu, Turkey
kpolat@ibu.edu.tr

University of Illinois at Springfield
Springfield, Illinois, United States
yguo56@uis.edu

Scope

Evolutionary techniques form an interdisciplinary area that encompasses a variety of computing paradigms. In the last decade, evolutionary techniques such as fuzzy logic, neuro-fuzzy systems, neural networks, genetic algorithms, and support vector machines have found numerous applications in various domains of computer vision and image processing, ranging from industrial automation to agriculture and from medical imaging to aerospace engineering. This special section deals with the relevance and feasibility of soft computing tools in image processing, analysis, and recognition. Image processing techniques stem from two principal applications: improvement of pictorial information for human interpretation, and processing of scene data for automatic machine perception. The tasks involved include filtering, enhancement, noise reduction, contour extraction, segmentation, and skeleton extraction. The ultimate goal is understanding, recognition, and interpretation of images from the processed information available in the image pattern. Several hybridized techniques exist for image processing applications, such as genetic-fuzzy systems, fuzzy-neural networks (FNN), neuro-genetic systems, neuro-fuzzy systems (NFS), and neuro-fuzzy-genetic systems. Tools like simulated annealing (SA), genetic algorithms (GAs), and tabu search (TS) have been combined with evolutionary techniques for applications involving optimization. Soft computing techniques, including fuzzy logic, neural networks, and evolutionary methods, have shown great potential as alternatives to classical techniques for solving image processing problems under conditions of imprecision and uncertainty. Even so, the applicability of these techniques in the medical field is not fully explored. In the medical and industrial fields, there is a persistent need for automated techniques that yield high accuracy and fast convergence.
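As a minimal illustration of an evolutionary technique applied to an image processing task, the sketch below uses a toy genetic algorithm (mutation plus elitist selection, no crossover) to choose a binarization threshold that maximizes Otsu's between-class variance; the histogram and GA parameters are illustrative assumptions:

```python
import random

def between_class_variance(hist, t):
    """Otsu's criterion: variance between the two classes split at bin t."""
    w0 = sum(hist[:t])
    w1 = sum(hist[t:])
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = sum(i * h for i, h in enumerate(hist[:t])) / w0
    mu1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def ga_threshold(hist, pop_size=20, generations=40, seed=0):
    """Evolve candidate thresholds: jitter each candidate, keep the fittest."""
    rng = random.Random(seed)
    pop = [rng.randrange(1, len(hist)) for _ in range(pop_size)]
    for _ in range(generations):
        children = [min(len(hist) - 1, max(1, t + rng.randint(-3, 3))) for t in pop]
        pop = sorted(pop + children,
                     key=lambda t: between_class_variance(hist, t),
                     reverse=True)[:pop_size]  # elitist selection
    return pop[0]

# Bimodal toy histogram: a dark peak around bin 2 and a bright peak around bin 7
hist = [5, 20, 40, 20, 5, 5, 20, 40, 20, 5]
best_t = ga_threshold(hist)
```

For a 10-bin histogram, exhaustive search would of course suffice; the GA structure is the point, and it carries over to search spaces too large to enumerate.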

Indeed, soft computing paradigms have been demonstrated to be capable of tackling a wide range of problems, e.g. optimization, decision making, information processing, pattern recognition, and intelligent data analysis. The goal of this special section is to bring researchers together to share and exchange knowledge and ideas on the development and applications of soft computing techniques, including metaheuristics, artificial neural networks, fuzzy logic, Markov process, Bayesian networks, and Petri nets for computer vision and imaging applications.

This special section aims to provide a collection of high-quality research articles that address broad challenges in both theoretical and application aspects of soft computing in computer vision and image processing. We invite colleagues to contribute original research articles, as well as review articles, that will stimulate the continuing effort on the application of soft computing approaches to solve image-processing problems. This section is devoted to soft computing techniques for all possible applications in the computer vision and image processing field.

The topics of this issue include but are not limited to:

  • Artificial neural networks for abnormality detection in images
  • Fuzzy theory for abnormal region segmentation in images
  • Image denoising and noise removal
  • Image texture analysis
  • Image processing using evolutionary algorithms such as GA, PSO, and ACO
  • Noise cancellation in signals such as EEG, ECG, and EMG
  • Watermarking in images using soft computing techniques
  • Image fusion techniques
  • Image registration methodologies
  • Image indexing and retrieval techniques
  • Equalization methodologies in 1-D signals
  • Image coding and compression
  • Image sampling and interpolation
  • Image quantization and halftoning
  • Image quality assessment
  • Image filtering and enhancement
  • Image morphology
  • Image edge detection and segmentation
  • Hybrid approaches
  • Any other area which deals with imaging applications using soft computing techniques
  • Neural networks, fuzzy logic, rough sets, and evolutionary methods
  • Expert system
  • Applications of computer vision and image processing

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

 


Synthetic Aperture Radar Imaging Technology in Deep Learning: New Trends and Viewpoints
Publication Date
Vol. 32, Issue 2
Submission Deadline
1 July 2022
Guest Editors
Achyut Shankar

Amity University
India
ashankar1@amity.edu

Li Zhang

University of London Egham
United Kingdom
Li.Zhang@rhul.ac.uk

Yu Chen Hu

Providence University
Taiwan
ychu@pu.edu.tw

Prabhishek Singh

Amity University
India
psingh29@amity.edu

Scope

The advancement of deep learning has transformed the approach to several SAR image processing tasks. Information for detecting and tracking ships, ocean wave forecasting, agricultural monitoring, military systems, and assessing damage after floods and earthquakes is obtained from SAR images, so SAR image quality determines how well that information can be retrieved. The large wavelength and penetrating capability of SAR sensors allow them to acquire images in all weather conditions, day or night, but the random and continuous interaction of the high-frequency electromagnetic radiation emitted by SAR sensors with target areas causes constructive and destructive interference, resulting in speckle noise that adversely affects the acquired SAR image. Extracting information in such a scenario is a difficult task. Apart from speckle noise, SAR images are also affected by geometric distortion, system nonlinear effects, and range migration, which need to be researched as well. There are three modes of SAR, distinguished by the nature of their application: strip mapping mode SAR, used for capturing large terrain areas; spotlight mode SAR, used for capturing small terrain areas by staring at an exact scene from different locations; and inverse SAR, used for monitoring the movement of targets in war applications. Deep learning methods such as convolutional neural networks (CNNs) generate impressive results for image classification and restoration. Therefore, new SAR image processing approaches must be created, and SAR raw signal modelling techniques must be developed, to help experts, academicians, researchers, and scientists build new SAR systems.
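Speckle as described above is commonly modeled as multiplicative noise, y = x·n with unit-mean n; for single-look intensity imagery, n is exponentially distributed. Below is a minimal simulation sketch with a naive multilook average as the despeckling baseline (the pixel values and look counts are hypothetical; deep despeckling networks are typically trained on exactly this kind of synthetic clean/noisy pair):

```python
import random

def add_speckle(clean, looks=1, seed=0):
    """Multiplicative speckle: scale each pixel by a unit-mean gamma variate.
    With looks=1 the factor is exponentially distributed (single-look intensity)."""
    rng = random.Random(seed)
    return [px * rng.gammavariate(looks, 1.0 / looks) for px in clean]

def multilook(images):
    """Naive despeckling: pixel-wise average of N independent looks,
    which reduces speckle variance by a factor of N."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

# Toy scene of constant reflectivity, observed in 16 independent looks
clean = [100.0] * 1000
observations = [add_speckle(clean, seed=s) for s in range(16)]
despeckled = multilook(observations)
```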

The major objectives of this special section are to:

  • Identify the basic research issues related to SAR image processing that are vital for real-world SAR and other remote sensing applications using deep learning techniques.
  • Monitor the progress made in solving remote sensing problems.
  • Have experts, academicians, researchers, and scientists share their success stories of applying advanced deep learning techniques to real-world SAR and other remote sensing problems.

We invite manuscripts that successfully apply unconventional and unsupervised deep learning based SAR image processing techniques to various SAR image classification and restoration problems as discussed below. Our topics of interest are broad, including but not limited to the related sub-topics listed below:

  • SAR image despeckling using deep learning
  • Strip mapping mode SAR image processing using deep learning
  • Spotlight mode SAR image processing using deep learning
  • Inverse SAR image processing using deep learning
  • Solving the problem of geometric distortion in SAR images using deep learning
  • Solving the problem of system nonlinear effects in SAR images using deep learning
  • Solving the problem of range migration in SAR images using deep learning
  • High performance computing for SAR data processing
  • Interferometric and polarimetric SAR processing methods
  • Electromagnetic scattering models for SAR signal simulation
  • Fusion of information from SAR images
  • Detection and tracking of ships using deep learning
  • Detection of oil natural leakage using deep learning
  • Ocean wave forecasting and marine climatology using deep learning
  • Regional ice monitoring using deep learning
  • Forestry and agricultural land monitoring using deep learning
  • Assessment of damage caused by natural calamities such as floods and earthquakes using deep learning
  • Detection of small surface movement caused by earthquakes, landslides, or glacier advancement using deep learning

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

 


Machine Learning-based Techniques and Applications for Next Generation Image and Video Compression
Publication Date
Vol. 32, Issue 1
Submission Deadline
15 June 2022
Guest Editors
Zheng Xu

Shanghai Polytechnic University
China
zhengxu@shu.edu.cn

Neil Y. Yen

University of Aizu
Japan
neilyyen@u-aizu.ac.jp

Vijayan Sugumaran

Oakland University
United States
sugumara@oakland.edu

Scope

Machine learning (ML) has drastically pushed the frontier of every aspect of artificial intelligence (AI) in the past decade. It has been widely used in various computer vision, automation, and image and video processing applications, leading to leapfrogging improvements in performance. Tremendous efforts have been dedicated to applying ML-based techniques to image and video coding, which can be roughly categorized into two groups. The first group can be termed ML-based hybrid approaches, which incorporate machine-learning techniques into traditional image and video coding systems, either as stand-alone pre- and post-processing modules, as optimization techniques for the operation (e.g., parameter setting) of traditional coding systems, or for designing modules (e.g., filters for in-loop deblocking or motion compensation) within traditional coding systems. The second group comprises native ML-based algorithms that aim to replace the traditional prediction plus quantized-transform coding framework with an end-to-end ML-based approach, e.g., by using auto-encoders for image and video coding. Many ML-based hybrid techniques have already been incorporated into conventional image and video coding systems, while some native ML-based algorithms have also shown promise and achieved respectable performance. Various standardization organizations, including JPEG, MPEG, and JVET, have either already started defining new standards with ML at the center of the coding system and/or as a key target application, or are starting to look into related technologies.
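As a toy sketch of the first group, optimizing the parameter setting of a traditional coding stage, the example below picks a uniform quantizer step that minimizes distortion under a crude level-count "rate" constraint. An exhaustive search stands in for the learned parameter predictor, and the signal and candidate steps are hypothetical:

```python
def quantize(x, step):
    """Uniform scalar quantization, the workhorse of traditional codecs."""
    return round(x / step) * step

def mse(signal, step):
    """Distortion of the quantized signal."""
    return sum((x - quantize(x, step)) ** 2 for x in signal) / len(signal)

def tune_step(signal, candidates, max_levels):
    """Choose the step with minimum distortion among those whose number of
    distinct reconstruction levels (a crude stand-in for rate) is allowed."""
    best = None
    for step in candidates:
        levels = len({quantize(x, step) for x in signal})
        if levels <= max_levels:
            d = mse(signal, step)
            if best is None or d < best[1]:
                best = (step, d)
    return best

signal = [i * 0.5 for i in range(8)]  # toy ramp signal
best_step, best_mse = tune_step(signal, [0.5, 1, 2, 4], max_levels=4)
```

In the hybrid approaches above, a learned model would predict the step from content features instead of searching; the surrounding codec is unchanged.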

In this special issue of the Journal of Electronic Imaging, leading researchers and practitioners in academia, industry, and standards bodies are invited to contribute to a go-to reference on the state of the art in ML-based image and video coding theories, algorithms, techniques, systems, and standardization activities for the entire community. Submissions that focus on “brave new ideas” in the development of end-to-end ML-based image and video compression systems are especially encouraged, even if such schemes might still lag in performance compared with highly optimized traditional approaches. In addition to compression, submissions related to image and video processing using ML techniques are also welcome. Subjects of interest include but are not limited to:

  • Machine learning-based end-to-end image and video compression algorithms
  • Image and video compression with/for AI applications
  • AI-based content analysis and generation
  • Machine learning-based parameter tuning and compression algorithm settings for legacy image and video compression standards, such as JPEG, AVC, HEVC, VVC, and AV1
  • Machine learning-based quality evaluation for image and video compression
  • Machine learning-based Quality of Experience (QoE) for end-to-end image and video acquisition and presentation
  • Machine learning-based end-to-end image and video compression systems
  • Machine learning-based image and video pre- and post-processing, spatial and temporal-super-resolution
  • Implementation optimizations
  • Large public and annotated test and training data sets with descriptions

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

 

Intelligent Vision Computing for Action and Behavior Recognition
Publication Date
Vol. 32, Issue 2
Submission Deadline
Closed
Guest Editors
Ali Ahmadian

University Mediterranea of Reggio Calabria
Reggio Calabria, Italy
ahmadian.hosseini@unirc.it

Valentina E. Balas

Aurel Vlaicu University of Arad
Romania
balas@drbalas.ro

Soheil Salahshour

Bahçeşehir University
Turkey
soheil.salahshour@eng.bau.edu.tr





Scope

With enhanced digital advancement and connectivity, internet technology and computers have become a crucial part of human lives over the last two decades. People use this technology to communicate, work, explore new information, shop, and be entertained. In addition, many scientists have shifted their research toward recognizing human actions automatically. Human actions convey meaningful information in interaction with the environment, with machines, and in human-to-human communication. Automatic analysis and understanding of human action is an exciting yet very complex task, and a real-world environment with uncontrolled surroundings makes it even more complicated and challenging. With the emergence of computer vision and artificial intelligence, however, these challenges can be addressed significantly. Automatic recognition of human behavior and action has recently become a very active research topic in computer vision and has drawn much attention from researchers worldwide due to its promising results. Applications based on human action recognition include intelligent video monitoring, human-computer interaction, human-machine interaction, ambient assisted living, content-based video search, and entertainment. Despite this extensive range of applications, human action and behavior recognition remains an attractive research field because of the ambiguities and challenges experienced while recognizing actions, including the motion of body parts under real-world conditions such as dynamic backgrounds, camera motion, and bad weather. To address these challenges, artificial intelligence needs to be embedded with computer vision to make vision computing intelligent for action and behavior recognition.

Intelligent vision computing for action and behavior recognition automatically detects, tracks, and describes human activities based on the sequence of image frames extracted from a video stream in a real-time environment. Intelligent vision computing proves more effective than traditional passive vision computing because, in the conventional approach, the number of cameras exceeds the ability of human operators to monitor them. In a surveillance environment, intelligent vision computing can automatically detect abnormal activities and alert authorities for immediate action. In healthcare, automatic activity recognition can help older or sick people receive immediate assistance without delay. In sports and entertainment, such systems with an augmented visual display can recognize the activities of different players to facilitate a better understanding of the game for the viewer and to improve the performance and tactics of players. With the increasing demand for precision in decisions, intelligent vision computing based on computer vision and AI for human action recognition is an eye-catching research direction.
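A minimal sketch of the detection step described above, using simple frame differencing over flattened grayscale frames (a deliberately naive baseline with hypothetical thresholds; practical systems learn spatio-temporal features instead):

```python
def frame_diff_energy(prev, curr, thresh=10):
    """Fraction of pixels whose intensity changed by more than thresh."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > thresh)
    return changed / len(curr)

def detect_activity(video, thresh=10, min_fraction=0.2):
    """Flag each frame transition as active when enough pixels change."""
    return [frame_diff_energy(video[i - 1], video[i], thresh) >= min_fraction
            for i in range(1, len(video))]

# Toy clip: two static frames, one with a bright moving region, then static again
static = [50] * 100
moving = [50] * 50 + [200] * 50
flags = detect_activity([static, static, moving, static])
```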

Although a tremendous amount of research has been done in intelligent vision computing for human action and behavior identification, many open challenges remain to be studied further, especially in the following areas. Multi-view variation is a dominant issue for real-world applications in unconstrained environmental surroundings, and developing an approach that attains remarkable view invariance is still an open area of research. Strategies that address harsh weather conditions, inadequate data, and constant camera motion have shown promising results but remain unsatisfactory. Significant research efforts have also been made to overcome illumination variations, dynamic backgrounds, occlusion, and cluttered surroundings; however, there is still room for advancement, mainly in real-world environments. The introduction of intelligent new classification approaches to handle the various challenges of intelligent vision computing for human activity and behavior recognition is still a promising research area.

This special section aims to gather original research articles that explore novel methods, approaches, algorithms, and architectures for human action and behavior recognition based on computer vision and AI techniques. Specific topics of interest include but are not limited to:

  • Human activity recognition in the healthcare system for immediate assistance using AI-assisted intelligent vision computing
  • Computer vision and AI-based intelligent surveillance monitoring system for detecting abnormal activity
  • Research opportunities and challenges for human action behavior recognition using intelligent vision computing approaches
  • Research towards intelligent vision computing for public security
  • Smart vision computing model to monitor and recognize human activity based on human interaction gestures
  • Vision-based intelligent monitoring system for human behavior analysis
  • Human activity and behavior recognition under real-time dynamic environment using intelligent vision computing
  • Detection and classification of human action and behavior using AI-assisted computer vision techniques
  • An intelligent vision computing system for automatically recognizing human activity in spontaneous behavior
  • AI and computer vision integrated human activity recognition system in multi-view variant surroundings
  • Intelligent monitoring system for child care at home using intelligent vision computing based on human action recognition
  • Accurate and precise recognition of human activity in sports using an AI-assisted vision computing system

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

 

Intelligent vision
Machine Vision: Systems, Methods, and Applications
Publication Date
Vol. 31, Issue 5
Submission Deadline
Closed
Guest Editors
Wolfgang Osten

University of Stuttgart
Germany
wo@ito.uni-stuttgart.de

Dmitry Nikolaev

Institute for Information Transmission Problems
Russia
dimonstr@iitp.ru



Johan Debayle

MINES Saint-Etienne
France
debayle@emse.fr





Scope

The emergence of machine vision as a ubiquitous platform for innovation has laid the foundation for the rapid growth of information. At the same time, the use of mobile and wireless devices such as PDAs, laptops, and cell phones to access the Internet has paved the way for related technologies to flourish through recent developments. In addition, machine vision technology is promoting better integration of the digital world with the physical environment.

This special section focuses primarily on research in the field of machine vision. Our purpose is to review recent progress and achievements in this research area.

Topics of interest for this special section include but are not limited to:

  • Machine vision systems and components (hardware and software, sensor fusion)
  • Machine vision applications (industrial inspection, navigation, optical metrology, autonomous vehicles, remote sensing, astronomy and astronautics, biomedical imaging, face and gesture recognition, data compression, security and coding, document processing)
  • Computer vision (scene reconstruction, video tracking, 3D pose estimation, action recognition)
  • Active vision (autonomous cameras, wearable and assistive computing, real-time 3D scene segmentation and reconstruction)
  • 3D vision (stereovision, laser triangulation, multi-cameras)
  • Machine learning (artificial intelligence, neural networks, deep learning, big data, and data mining)
  • Image processing (analog, digital, electronic, optical, acoustical, hybrid)
  • Image processing methods (pre-processing, image analysis, feature extraction, segmentation, classification, pattern recognition, coding, understanding, modeling, color, texture, shape, geometry, topology, SIMD, MIMD)
  • Computational imaging (coherent diffractive imaging, coded-aperture imaging, super-resolution imaging)

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

Machine vision
Computer Vision for Next-Generation Cyber Physical Systems
Publication Date
Vol. 31, Issue 6
Submission Deadline
Closed
Guest Editors
K. Shankar

Alagappa University
Department of Computer Applications
Tamil Nadu, India
shankarkpdf@alagappauniversity.ac.in

Gyanendra Prasad Joshi

Sejong University
Department of Computer Science and Engineering
Seoul, Republic of Korea
joshi@sejong.ac.kr

 

Bassam A.Y. Alqaralleh

Al-Hussein Bin Talal University
Department of Computer Science
Faculty of Information Technology
Ma’an, Jordan
alqaralleh@ahu.edu.jo



Scope

The ever-growing relationship between humans and machines has led to the evolution of next-generation cyber-physical systems (CPS). Unlike traditional embedded systems, CPS are typically designed as a network of interconnected entities that interact efficiently based on physical inputs and outputs. They are often tightly coupled with sensor networks and with system components (physical and software) loaded with computational intelligence. Looking ahead, different kinds of CPS will have varied impacts on our everyday lives. The growing number of connected devices, computing resources, and advanced communication technologies gives rise to next-generation CPS on a scale and at a complexity level far beyond the human ability to comprehend and control. As a result, CPS are already changing the landscape across various domains such as smart grids, smart robots, smart vehicles, and smart cities. Over time, CPS will undoubtedly unlock creative solutions and exciting opportunities across many sectors.

On the other hand, the ever-increasing computational power of peripheral devices has made it increasingly feasible to bring computer vision and real-time decision-making to the edge, where the data is produced. This enables privacy-aware, secure, and context-adaptive decision-making algorithms. Realizing the potential of computer vision will significantly improve the sophistication and efficiency of CPS-driven solutions. Vision intelligence enables smart, efficient, real-time information to be made available to the various components of a CPS and to the other users of the application. It forms the basis for the smooth functioning of CPS devices, especially for time-critical tasks. Further, it encourages better optimization of CPS embedded software design models and reduces power consumption while improving security. In summary, the ongoing transition toward the fourth industrial revolution creates game-changing opportunities arising from the evolution of advanced automation and data transfer technologies. This underscores the significance of computer vision for next-generation CPS applications such as health monitoring, smart manufacturing, and autonomous transportation. However, the transition from concept to reality requires more in-depth research and state-of-the-art advancements. To address these requirements, this special section aims to bring out advances in computer vision for next-generation CPS applications.

Topics of interest include but are not limited to:

  • Modelling and application of computer vision for next generation CPS
  • Trends, challenges and future research directions in computer vision assisted CPS applications
  • Computer vision for intelligent video surveillance using CPS
  • Vision based control mechanisms for next generation CPS
  • Intelligent, adaptive, and personalised user interfaces for CPS using vision algorithms
  • Deep learning and computer vision for autonomous monitoring
  • Mobile CPS and deep learning algorithms
  • Computer vision and robotics
  • Deep learning for cyber-enabled learning analytics
  • Role of computer vision in intelligence mining of smart CPS systems
  • Deep learning models for autonomous monitoring
  • Security and privacy in smart CPS

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

Computer vision

Aerial Vehicle Surveillance using Embedded Real-Time Image and Video Processing
Publication Date
Vol. 32, Issue 1
Submission Deadline
31 May 2022
Guest Editors
Marimuthu Karuppiah

SRM Institute of Science and Technology, India
kmarimuthu@ieee.org

Shehzad Ashraf Chaudhry

Istanbul Gelisim University, Turkey
sashraf@gelisim.edu.tr



Mohammed H. Alsharif

Sejong University, Republic of Korea
malsharif@sejong.ac.kr





Scope

With the advent of drone technology and the adoption of unmanned aerial vehicles (UAVs) by military forces, airborne applications have increased severalfold. Using an airborne platform such as a UAV, surveillance and reconnaissance tasks are now often performed over areas that can be monitored aerially with EO/IR cameras. Different image processing techniques may be applied to the data to assist the sensor analyst, both in real time and for forensic applications. UAVs have also been used for environmental observation and measurement, such as surveys of the ozone layer, air pollution, coastlines, wildfires, and plant growth. The potential of UAVs is vast, and considerable research is being conducted to derive their full benefits.

Today, UAVs are primarily used for agricultural chemical spraying, environmental monitoring, surveillance, and military purposes in various countries. Military forces, for example, use small UAVs to conduct a range of surveillance and reconnaissance operations. There is increasing interest in conducting aerial surveillance with video cameras, which, compared with conventional framing cameras, can observe action within a scene and be steered automatically to track activity. However, the high data rates and relatively limited fields of view of video cameras pose new technological challenges. Success in these missions depends both on the quality of the images and videos provided by the cameras and on image analysts' ability to identify and track objects of interest in the imagery. Given these applications, there is a clear need for a framework that supports them using low-cost video processing and data telemetry components that are independent of specific hardware requirements.

This special section will gather researchers from academia and industry to demonstrate the latest findings and approaches, in the era of Big Data and the Internet of Things (IoT), on various aspects of real-time image and video processing for smart surveillance applications. Multimedia researchers are invited to submit original research papers that advance the understanding of algorithms for real-time image and video processing, data structures, trade-off optimization, architectures, and applications that enable smart surveillance in real time.

Topics of interest include but are not limited to the following:  

  • Image enhancement techniques in aerial surveillance
  • Object information and interpretation in real-time motion detection applications
  • Activity recognition in intelligence tasks and for forensic applications
  • Challenges in image enhancement algorithms and solutions in military applications
  • Digital video mosaicking for environmental monitoring
  • Multiple object tracking in video surveillance for roadway traffic monitoring systems
  • Big data analytics for video surveillance in education/health care/tour and travels
  • Enabling future smaller and lighter UAVs to ensure highway infrastructure management – advances and challenges
  • Distributed smart camera systems for real-time embedded video processing
  • Public video surveillance for crime control and prevention
  • Unmanned aerial aircraft systems for intelligent transportation systems monitoring

Manuscripts should conform to the author guidelines of the Journal of Electronic Imaging. Prospective authors should submit an electronic copy of their manuscript through the online submission system at https://jei.msubmit.net. The special section should be mentioned in the cover letter. Each manuscript will be reviewed by at least two independent reviewers. Peer review will commence immediately upon manuscript submission, with a goal of making a first decision within six weeks. Each paper is published as soon as the copyedited and typeset proofs are approved by the author.

Aerial surveillance
Published Special Sections

Frontiers in Computer Vision for Robotics (November/December 2022)
Guest Editors: Ali Kashif Bashir, Irfan Mehmood, and Shahid Mumtaz

Image and Video Manipulation: Challenges and Solutions
(September/October 2022)
Guest Editors: Deepak Kumar Jain, Irshad Ahmed Ansari, Johan Debayle, Li Zhang, and Vito Di Maio

Biologically Inspired Computer Vision and Image Processing (July/August 2022)
Guest Editors: Keping Yu, Wei Wang, and Muhammad Tariq

Image and Video Compression using Deep Neural Networks
(July/August 2021)
Guest Editors: Ofer Hadar and Touradj Ebrahimi

Advances in Urban Imaging and Applications (May/June 2021)
Guest Editors: Xiaohui Yuan, Sos Agaian, Wencheng Wang, Mohamed Elhoseny

Perceptually Optimized Imaging (March/April 2021)
Guest Editors: Shuhang Gu, Radu Timofte, Kede Ma

Quality Control by Artificial Vision VI (July/August 2020)
Guest Editors: Olivier Aubreton, Kunihito Kato, Kazunori Umeda, and Christophe Cudel

Advanced and Intelligent Vision Systems (March/April 2019)
Guest Editors: Fabrice Meriaudeau, Tang Tong Boon, and Irraivan Elamvazuthi

Image and Video Analysis, Detection, and Recognition
(September/October 2018)
Guest Editors: Edoardo Ardizzone and M. Emre Celebi

Computational Color Imaging (January/February 2018)
Guest Editors: Simone Bianco, Raimondo Schettini, Shoji Tominaga, and Alain Trémeau

Superpixels for Image Processing and Computer Vision (November/December 2017)
Guest Editors: Olivier Lézoray, Cyril Meurie, and M. Emre Celebi

Video Analytics for Public Safety (September/October 2017)
Guest Editors: Robert Loce, Edward J. Delp, and Sharath Pankanti

Retinex at 50 (May/June 2017)
Guest Editors: Alessandro Rizzi, John J. McCann, Marcelo Bertalmío, Gabriele Gianini

Image Processing for Cultural Heritage (January/February 2017)
Guest Editors: Aladine Chetouani, Robert Erdmann, David Picard, Filippo Stanco

Perceptually Driven Visual Information Analysis (November/December 2016)
Guest Editors: Mohamed-Chaker Larabi, Sanghoon Lee, Mohammed El Hassouni, Frédéric Morain-Nicolier, Rachid Jennane

Color in Texture and Material Recognition (November/December 2016)
Guest Editors: Raimondo Schettini, Joost van de Weijer, Claudio Cusano, and Paolo Napoletano

Intelligent Surveillance for Transport Systems (September/October 2016)
Guest Editors: Louahdi Khoudour, Yassine Ruichek, and Sergio Velastin

Advances on Distributed Smart Cameras (July/September 2016)
Guest Editors: Jorge Fernández-Berni, François Berry, and Christian Micheloni

Quality Control by Artificial Vision: Nonconventional Imaging Systems (November-December 2015)
Guest Editors: Fabrice Mériaudeau and Aamir Saeed Malik

Ultrawide Context- and Content-Aware Imaging, Part II (November-December 2015)
Guest Editors: François Brémond, Ljiljana Platiša, and Sebastiano Battiato

Ultrawide Context- and Content-Aware Imaging, Part I (September-October 2015)
Guest Editors: François Brémond, Ljiljana Platiša, and Sebastiano Battiato

Image/Video Quality and System Performance (November-December 2014)
Guest Editors: Mohamed-Chaker Larabi, Sophie Triantaphilliadou, and Andrew B. Watson

Stereoscopic Displays and Applications (January-February 2014)
Guest Editors: Nicolas Holliman and Takashi Kawai

Video Surveillance and Transportation Imaging Applications (October-December 2013)
Guest Editors: Robert Loce and Eli Saber

Compressive Sensing for Imaging (April-June 2013)
Guest Editors: Fauzia Ahmad, Gonzalo R. Arce, Ram M. Narayanan, Dimitris A. Pados

Mobile Computational Photography (January-March 2013)
Guest Editors: Todor Georgiev, Andrew Lumsdaine, Sergio Goma

Quality Control by Artificial Vision (April-June 2012)
Guest Editors: Jean-Charles Pinoli, Karen Panetta, and Seiji Hata

Stereoscopic Displays and Applications (January-March 2012)
Guest Editors: Neil Dodgson and Nick Holliman

Quality Control by Artificial Vision (September-December 2010)
Guest Editors: Edmund Y. Lam, Shaun S. Gleason, and Kurt S. Niel

Digital Photography (April-June 2010)
Guest Editors: Peter B. Catrysse and Sabine Süsstrunk

Image Quality (January-March 2010)
Guest Editors: Susan Farnand and Frans Gaykema

Quality Control by Artificial Vision (July-September 2008)
Guest Editors: Hamed Sari-Sarraf, David Fofi, and Nelson H. C. Yung

Biometrics: Advances in Security, Usability, and Interoperability (January-March 2008)
Guest Editors: Claus Vielhauer, Berrin Yanikoğlu, Sonia Garcia-Salicetti, Richard M. Guest, and Stephen J. Elliott

Security, Steganography, and Watermarking of Multimedia Contents (October-December 2006)
Guest Editors: Jana Dittmann and Edward J. Delp

Color Imaging: Processing, Hard Copy, and Applications (October-December 2006)
Guest Editors: Reiner Eschbach and Gabriel Marcu

Quality Control by Artificial Vision (July-September 2004)
Guest Editors: Kenneth W. Tobin, Fabrice Meriaudeau, and Luciano da Fontoura Costa

Retinex at 40 (January-March 2004)
Guest Editor: John J. McCann

Imaging through Scattering Media (October-December 2003)
Guest Editors: David A. Boas, Charles A. Bouman, and Kevin J. Webb

Model-Based Medical Image Processing and Analysis (January-March 2003)
Guest Editors: James C. Gee and Mostafa Analoui

Internet Imaging (October-December 2002)
Guest Editors: Giordano Beretta and Raimondo Schettini

Storage, Processing, and Retrieval of Digital Media (October-December 2001)
Guest Editors: Minerva M. Yeung, Chung-Sheng Li, Rainer Lienhart, and Boon-Lock Yeo

Process Imaging for Automatic Control (July-September 2001)
Guest Editors: David M. Scott and Hugh McCann

Statistical Issues in Psychometric Assessment of Image Quality (April-June 2001)
Guest Editors: John C. Handley and John Bunge

Human Vision and Electronic Imaging (January-March 2001)
Guest Editors: Bernice E. Rogowitz, Thrasyvoulos N. Pappas, and Jan P. Allebach
