
Indonesian Journal of Electrical Engineering and Informatics (IJEEI)

This article introduces a novel blind image quality metric (BIQM) for color images, designed with human visual system (HVS) characteristics in mind. The BIQM has a four-stage framework: RGB to YUV transformation, denoising with a convolutional neural network, quality evaluation, and weighting to make the result compatible with the human visual system. Experimental results, including Spearman's rank-order correlation coefficient (SROCC), confirm BIQM's effectiveness, particularly in scenarios involving white noise, and its compatibility with the human visual system. Furthermore, a survey involving 100 participants ranks images based on three distinct qualities, validating the method's alignment with the human visual system. The comparative analysis reveals that the proposed BIQM can compete with commonly used no-reference quality measures and is more accurate than some of them. The MATLAB codes for the development of the BIQM are made available through the provided link: https://bit.ly/49MrbFX.
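The abstract lists the four stages but not their formulas. As a rough illustration, the following Python/NumPy sketch shows how such a pipeline could be assembled: the BT.601 RGB-to-YUV matrix is standard, while the `denoise_cnn` callable, the PSNR-style per-channel score, and the luminance-heavy weights are placeholder assumptions rather than the paper's actual choices (the authors' MATLAB code linked above is the authoritative implementation).

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (standard definition).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):
    """rgb: HxWx3 float array in [0, 1] -> HxWx3 YUV array."""
    return rgb @ RGB2YUV.T

def channel_score(noisy, denoised):
    """Illustrative PSNR-style score for one channel
    (an assumption, not the paper's exact quality measure)."""
    mse = np.mean((noisy - denoised) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-12))

def biqm_sketch(rgb_image, denoise_cnn, weights=(0.75, 0.125, 0.125)):
    """Four-stage blind quality estimate:
    1) RGB -> YUV, 2) CNN denoising per channel,
    3) per-channel quality score, 4) HVS-inspired weighting.
    `denoise_cnn` is a hypothetical callable (e.g. a trained DnCNN)
    that maps a 2-D array to its denoised version."""
    yuv = rgb_to_yuv(rgb_image)
    scores = []
    for c in range(3):
        denoised = denoise_cnn(yuv[..., c])
        scores.append(channel_score(yuv[..., c], denoised))
    # Heavier weight on luminance (Y) than chrominance (U, V),
    # mimicking the HVS's greater sensitivity to luminance.
    return float(np.dot(weights, scores))
```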

This study proposes a new deep CNN-based no-reference image quality metric (BIQM) consisting of four phases: RGB to YUV transformation, CNN selection according to the file type, denoising with a deep CNN, and quality value calculation. The experimental results demonstrate the effectiveness of the presented method, particularly for white noise. The SROCC values indicate that BIQM provides more consistent results in terms of the HVS than its counterparts, particularly for images corrupted with white noise. Moreover, the survey results suggest that BIQM could potentially enhance user satisfaction when used for image quality measurement. The main contribution of this work is the integration of DnCNN residual learning with HVS-based channel weighting; this combination improves the robustness of BIQM against various distortions.
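Both the abstract and this summary report agreement with subjective opinion via SROCC. For reference, the sketch below shows the conventional way such a value is computed with SciPy; the score arrays are placeholder numbers, not data from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder arrays: predicted BIQM scores and subjective MOS values
# for the same set of distorted images (illustrative numbers only).
biqm_scores = np.array([31.2, 28.5, 24.9, 22.1, 18.7])
mos_values  = np.array([4.6,  4.1,  3.3,  2.8,  2.0])

# Spearman's rank-order correlation coefficient: rank both score
# vectors, then correlate the ranks.
srocc, p_value = spearmanr(biqm_scores, mos_values)
print(f"SROCC = {srocc:.3f} (p = {p_value:.3g})")
```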

Based on the background, methods, results, limitations, and suggestions for further research given in the paper, the following directions for future work can be developed:

1. **Development of a More Adaptive CNN Model:** Future research can focus on developing a CNN model that is more adaptive to different types of noise and image distortion. This can be achieved through transfer learning or by training the model on more diverse and representative datasets.
2. **Integration with More Complex Perceptual Models:** BIQM currently uses a simple weighting based on the ratio of light-sensing and color-sensing cells in the retina (a minimal sketch of such a weighting follows this list). Future research can explore integration with more complex perceptual models, for example models that account for other psychovisual effects, to improve accuracy and agreement with human perception.
3. **Evaluation in Real-World Applications:** Although BIQM has been evaluated on several standard datasets, further evaluation in real-world applications, such as video surveillance systems or medical imaging, would provide more valuable insight into its performance and reliability under practical conditions.
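The second item above refers to the simple retina-inspired channel weighting currently used in BIQM. The paper's exact ratio is not reproduced here, so the sketch below derives illustrative weights from the commonly cited approximate counts of about 120 million rods and 6 million cones in the human retina; mapping rods to the Y channel and cones to the U and V channels is an assumption of this sketch, not the authors' stated formula.

```python
# Illustrative derivation of HVS-inspired channel weights from the
# approximate rod/cone counts of the human retina (~120e6 rods,
# ~6e6 cones). The rods -> Y, cones -> U, V mapping is an assumption
# made for this sketch, not the paper's exact weighting.
RODS, CONES = 120e6, 6e6

w_y = RODS / (RODS + CONES)       # luminance weight
w_u = w_v = (1.0 - w_y) / 2.0     # chrominance weights share the remainder

def weighted_quality(score_y, score_u, score_v):
    """Combine per-channel quality scores with the HVS-inspired weights."""
    return w_y * score_y + w_u * score_u + w_v * score_v

print(f"weights: Y={w_y:.3f}, U={w_u:.3f}, V={w_v:.3f}")
# weights: Y=0.952, U=0.024, V=0.024
```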
