
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domain

Abstract - This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism for combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the severely underdetermined inverse problem, together with a theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal demonstrate the effectiveness of the proposed algorithm.
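The Split Bregman machinery mentioned above can be illustrated on a toy problem. The sketch below is not the paper's JSM solver; it applies the same splitting idea to plain 1-D total-variation denoising, with the regularization weight `lam` and penalty `mu` chosen arbitrarily:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: closed-form solution of the decoupled l1 subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv1d(f, lam=0.5, mu=2.0, iters=200):
    """Split Bregman for 1-D TV denoising: min_u 0.5||u-f||^2 + lam*||Du||_1."""
    n = len(f)
    D = np.eye(n, k=1) - np.eye(n)   # forward-difference operator
    D[-1, :] = 0.0                   # no difference past the boundary
    A = np.eye(n) + mu * D.T @ D     # system matrix of the u-subproblem
    u, d, b = f.astype(float).copy(), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))  # quadratic subproblem
        d = shrink(D @ u + b, lam / mu)                 # l1 subproblem
        b = b + D @ u - d                               # Bregman update
    return u
```

The `shrink` step is the closed-form solution of the decoupled l1 subproblem; this decoupling is what makes Split Bregman efficient for otherwise hard non-smooth functionals.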

Reversible De-Identification for Lossless Image Compression using Reversible Watermarking
Abstract - De-Identification is a process which can be used to ensure privacy by concealing the identity of individuals captured by video surveillance systems. One important challenge is to make the obfuscation process reversible so that the original image/video can be recovered by persons in possession of the right security credentials. This work presents a novel Reversible De-Identification method that can be used in conjunction with any obfuscation process. The residual information needed to reverse the obfuscation process is compressed, authenticated, encrypted and embedded within the obfuscated image using a two-level Reversible Watermarking scheme. The proposed method ensures an overall single-pass embedding capacity of 1.25 bpp, where 99.8% of the images considered required less than 0.8 bpp while none of them required more than 1.1 bpp. Experimental results further demonstrate that the proposed method managed to recover and authenticate all images considered.
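The abstract does not specify the watermarking primitive, so as a hedged illustration, Tian's classic difference-expansion technique shows how residual bits can be embedded in a pixel pair and later removed exactly, which is the reversibility property any such scheme depends on (pixel-range overflow handling is omitted here; practical schemes carry it as side information):

```python
def de_embed(x, y, bit):
    """Difference expansion (Tian): reversibly embed one bit into a pixel pair."""
    l = (x + y) // 2          # integer average, preserved by embedding
    h = x - y                 # difference
    h2 = 2 * h + bit          # expand the difference and append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pixel pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 % 2
    h = h2 // 2
    return bit, l + (h + 1) // 2, l - h // 2
```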

LBP-Based Edge-Texture Features for Object Recognition

Abstract - This paper proposes two sets of novel edge-texture features, Discriminative Robust Local Binary Pattern (DRLBP) and Ternary Pattern (DRLTP), for object recognition. By investigating the limitations of Local Binary Pattern (LBP), Local Ternary Pattern (LTP) and Robust LBP (RLBP), DRLBP and DRLTP are proposed as new features. They solve the problem of discrimination between a bright object against a dark background and vice-versa inherent in LBP and LTP. DRLBP also resolves the problem of RLBP whereby LBP codes and their complements in the same block are mapped to the same code. Furthermore, the proposed features retain contrast information necessary for proper representation of object contours that LBP, LTP, and RLBP discard. Our proposed features are tested on seven challenging data sets: INRIA Human, Caltech Pedestrian, UIUC Car, Caltech 101, Caltech 256, Brodatz, and KTH-TIPS2-a. Results demonstrate that the proposed features outperform the compared approaches on most data sets.
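A minimal sketch of the plain LBP code that the proposed features build on (not DRLBP/DRLTP themselves) also exposes the bright-versus-dark problem described above: a bright spot on a dark background and its inverse yield complementary codes (255 and 0):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP: each neighbour is thresholded against the centre pixel,
    and the resulting bits form an 8-bit code per interior pixel."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```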

Fingerprint Compression Based on Sparse Representation

Abstract - A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. Given a new fingerprint image, we represent its patches according to the dictionary by computing l0-minimization, and then quantize and encode the representation. In this paper, we consider the effect of various factors on the compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust with respect to minutiae extraction.
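The abstract leaves the l0 solver unspecified; a common greedy approximation is Orthogonal Matching Pursuit, sketched below for a generic dictionary `D` (columns as atoms) and patch vector `y`:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy l0 sparse coding of y over D."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        # refit all active coefficients by least squares on the current support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

The sparse vector `x` (mostly zeros) is what would then be quantized and entropy-coded.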

Adaptive Watermarking and Tree Structure Based Image Quality Estimation
Abstract - Image quality evaluation is important in applications involving signal transmission, where Reduced- or No-Reference quality metrics are generally more practical than Full-Reference metrics. In this study, we propose a quality estimation method based on a novel semi-fragile and adaptive watermarking scheme. The proposed scheme uses the embedded watermark to estimate the degradation of the cover image under different distortions. The watermarking process is implemented in the DWT domain of the cover image. The correlated DWT coefficients across the DWT subbands are categorized into Set Partitioning in Hierarchical Trees (SPIHT). These SPIHT trees are further decomposed into a set of bitplanes. The watermark is embedded into the selected bitplanes of the selected DWT coefficients of the selected trees without causing significant fidelity loss to the cover image. The accuracy of the quality estimation is made to approach that of Full-Reference metrics by referring to an "Ideal Mapping Curve" computed a priori. The experimental results show that the proposed scheme can estimate image quality in terms of PSNR, wPSNR, JND and SSIM with high accuracy under JPEG compression, JPEG 2000 compression, Gaussian low-pass filtering and Gaussian noise distortion. The results also show that the proposed scheme has good computational efficiency for practical applications.
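The bitplane decomposition referred to above can be sketched in a few lines; this is a generic MSB-first decomposition of a non-negative integer coefficient, not the paper's exact SPIHT-tree layout:

```python
def bitplanes(coef, n=8):
    """Split a non-negative integer coefficient into its n bitplanes, MSB first."""
    return [(coef >> b) & 1 for b in range(n - 1, -1, -1)]
```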

Iris Image Classification Based on Hierarchical Visual Codebook
Abstract - Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g. iris liveness detection (classification of genuine and fake iris images), race classification (e.g. classification of iris images of Asian and non-Asian subjects), and coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.
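The coding stage of a Bag-of-Words model such as VT or LLC ultimately assigns local texture features to codewords; a minimal hard-assignment sketch is shown below (LLC's locality-constrained soft coding and VT's hierarchical search are omitted):

```python
import numpy as np

def bow_histogram(features, codebook):
    """Hard-assign each local feature to its nearest codeword and return the
    normalised bag-of-words histogram."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)                 # nearest codeword per feature
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```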

Coherency Based Spatio-Temporal Saliency Detection for Video Object Segmentation
Abstract - Extracting moving and salient objects from videos is important for many applications like surveillance and video retargeting. In this paper we use spatial and temporal coherency information to segment salient objects in videos. While many methods use motion information from videos, they do not exploit coherency information, which has the potential to give more accurate saliency maps. Spatial coherency maps identify regions belonging to regular objects, while temporal coherency maps identify regions with highly coherent motion. The two coherency maps are combined to obtain the final spatio-temporal map identifying salient regions. Experimental results on public datasets show that our method outperforms two competing methods in segmenting moving objects from videos.
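The abstract does not state the fusion rule, so the sketch below combines the two normalised coherency maps with a simple assumed weight `alpha`:

```python
import numpy as np

def combine_saliency(spatial, temporal, alpha=0.5):
    """Fuse normalised spatial and temporal coherency maps into one saliency map."""
    def norm(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m, dtype=float)
    return alpha * norm(spatial) + (1.0 - alpha) * norm(temporal)
```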

Inpainting for Remotely Sensed Images With a Multichannel Nonlocal Total Variation Model
Abstract - Filling dead pixels or removing uninteresting objects is often desired in applications of remotely sensed images. In this paper, an effective image inpainting technique based on multichannel nonlocal total variation is presented to solve this task. The proposed approach takes advantage of the nonlocal method, which performs well in dealing with textured images and in reconstructing large-scale areas. Furthermore, it makes use of the multichannel data of remotely sensed images to achieve spectral coherence in the reconstruction result. To optimize the proposed variational model, a Bregmanized operator-splitting algorithm is employed. The proposed inpainting algorithm was tested on simulated and real images. The experimental results verify the efficacy of this algorithm.
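The nonlocal method weights pixel pairs by the similarity of their surrounding patches; a minimal sketch of such a weight follows, where the patch size and decay parameter `h` are assumed values and boundary handling is omitted:

```python
import numpy as np

def nonlocal_weight(img, p, q, patch=3, h=10.0):
    """Gaussian patch-similarity weight between pixels p and q
    (nonlocal-means style); 1.0 for identical neighbourhoods."""
    r = patch // 2
    def patch_at(y, x):
        return img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    d2 = np.sum((patch_at(*p) - patch_at(*q)) ** 2)
    return np.exp(-d2 / (h * h))
```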

High-Quality Real-Time Video Inpainting with PixMix
Abstract - While image inpainting has recently become widely available in image manipulation tools, existing approaches to video inpainting are highly computationally expensive and typically do not achieve interactive frame rates. Further, they either apply severe restrictions on the movement of the camera or do not provide a high-quality, coherent video stream. In this paper we present our approach to high-quality, real-time-capable image and video inpainting. Our PixMix approach even allows the manipulation of live video streams, providing the basis for real Diminished Reality (DR) applications. We show how our approach generates coherent video streams while dealing with quite heterogeneous background environments and non-trivial camera movements, even applying constraints in real time.

Histogram of Oriented Lines for Palmprint Recognition

Abstract - Subspace learning methods are very sensitive to illumination, translation, and rotation variance in image recognition. Thus, they have not obtained promising performance for palmprint recognition so far. In this paper, we propose a new palmprint descriptor named histogram of oriented lines (HOL), which is a variant of the histogram of oriented gradients (HOG). HOL is not very sensitive to changes of illumination, and is robust against small transformations, because slight translations and rotations cause only small changes in histogram values. Based on HOL, even some simple subspace learning methods can achieve high recognition rates.
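HOL replaces HOG's image gradients with line-filter responses; as a hedged reference point, a single HOG-style cell histogram over plain gradients can be sketched as follows (bin count and normalisation are assumed choices):

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """One HOG-style cell: histogram of unsigned gradient orientations,
    weighted by gradient magnitude and l1-normalised."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    for b in range(bins):
        hist[b] = mag[idx == b].sum()
    return hist / (hist.sum() + 1e-12)
```

Because the descriptor pools magnitudes into coarse orientation bins, a small shift or rotation of the palmprint changes each bin only slightly, which is the robustness property the abstract appeals to.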

Data Hiding in Encrypted H.264/AVC Video Streams by Codeword Substitution
Abstract - Digital video sometimes needs to be stored and processed in an encrypted format to maintain security and privacy. For the purpose of content annotation and/or tampering detection, it is necessary to perform data hiding in these encrypted videos. In this way, data hiding in the encrypted domain without decryption preserves the confidentiality of the content. In addition, it is more efficient than decryption followed by data hiding and re-encryption. In this paper, a novel scheme for data hiding directly in the encrypted version of an H.264/AVC video stream is proposed, which includes the following three parts: H.264/AVC video encryption, data embedding, and data extraction. By analyzing the properties of the H.264/AVC codec, the codewords of intra-prediction modes, the codewords of motion vector differences, and the codewords of residual coefficients are encrypted with stream ciphers. Then, a data hider may embed additional data in the encrypted domain by using a codeword substitution technique, without knowing the original video content. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Furthermore, the video file size is strictly preserved even after encryption and data embedding. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
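The stream-cipher encryption above preserves codeword length, which is what keeps the file size unchanged. A generic illustration of that property follows, using SHA-256 in counter mode as an assumed keystream generator (not the paper's cipher or its H.264 codeword handling):

```python
import hashlib

def keystream(key, n):
    """Deterministic keystream: SHA-256 of key||counter (an assumed generator)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data, key):
    """XOR with the keystream: encryption and decryption are the same operation,
    and the output length always equals the input length."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```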

A High Performance Fingerprint Matching System for Large Databases Based on GPU
Abstract - Fingerprints are the biometric features most used for identification. They can be characterized through particular elements called minutiae. The identification of a given fingerprint requires matching its minutiae against the minutiae of other fingerprints; hence, fingerprint matching is a key process. The efficiency of current matching algorithms does not allow their use in large fingerprint databases; to apply them, a breakthrough in running performance is necessary. Nowadays, the minutia cylinder-code (MCC) is the best-performing algorithm in terms of accuracy. However, a weak point of this algorithm is its computational requirements. In this paper, we present a GPU fingerprint matching system based on MCC. The many-core computing framework provided by CUDA on NVIDIA Tesla and GeForce hardware platforms offers an opportunity to enhance fingerprint matching. Through thorough and careful design of data structures, computation, and memory transfers, we have developed a system that preserves the accuracy of MCC and reaches a speed-up of up to 100.8 compared with a reference sequential CPU implementation. A rigorous empirical study over captured and synthetic fingerprint databases shows the efficiency of our proposal. These results open up a whole new field of possibilities for reliable real-time fingerprint identification in large databases.
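MCC compares two fingerprints by building a matrix of local (cylinder-to-cylinder) similarities and consolidating it into one global score. A simplified consolidation in the spirit of Local Similarity Sort is sketched below, ignoring MCC's pairing constraints:

```python
import numpy as np

def lss_score(local_sims, k):
    """Consolidate a matrix of local similarities into a global match score
    by averaging the k largest entries (simplified, no pairing constraints)."""
    top = np.sort(local_sims.ravel())[::-1][:k]
    return float(top.mean())
```

On a GPU, the expensive part is computing `local_sims` for one probe against millions of templates, which is what the data-structure and memory-transfer design targets.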

A Fragile Watermarking Algorithm for Hologram Authentication

Abstract - A fragile watermarking algorithm for hologram authentication is presented in this paper. In the proposed algorithm, the watermark is embedded in the discrete cosine transform (DCT) domain of a hologram. The watermarked hologram is stored in the spatial domain with a finite precision level. By enhancing the precision used for storing the watermarked hologram pixels, the distortion produced by the proposed watermarking scheme can be lowered. While providing high perceptual transparency, the proposed algorithm also attains high detection performance against delivery errors and malicious tampering. Experimental results reveal that the proposed algorithm can be used as an effective filter for blocking polluted or tampered holograms from 3D magnitude and/or phase reconstruction.
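The abstract does not detail the embedding rule; one standard way to place a fragile bit in a DCT coefficient is quantisation index modulation (QIM), sketched here with an assumed step size:

```python
import numpy as np

def qim_embed(coef, bit, step=8.0):
    """Embed one bit in a DCT coefficient by quantisation index modulation:
    even multiples of `step` encode 0, odd multiples encode 1."""
    q = int(np.round(coef / step))
    if q % 2 != bit:
        q += 1 if coef >= q * step else -1  # nearest lattice point of the right parity
    return q * step

def qim_extract(coef, step=8.0):
    """Read the bit back from the parity of the quantised coefficient."""
    return int(np.round(coef / step)) % 2
```

Any perturbation of the coefficient larger than half the step flips or scrambles the extracted bits, which is what makes the watermark fragile to tampering and delivery errors.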