
IEEE Int. Workshop VLSI Design & Video Tech.

Suzhou, China, May 28-30, 2005

Motion Adaptive Deinterlacing Combining with Texture Detection and Its FPGA Implementation
Yang Yuhong, Chen Yingqi, Zhang Wenjun

Shanghai JiaoTong University


Haoran Building 15F, No. 1954 Huashan Rd., Shanghai 200030, P.R. China

ABSTRACT
A same-parity 4-field based motion adaptive deinterlacing algorithm combined with texture detection is presented in this paper. Compared with existing low-end deinterlacing algorithms, this method greatly improves image quality for both static and fast moving objects due to its high motion detection accuracy. Its hardware architecture is also simpler than high-end deinterlacer architectures, e.g. motion estimation and compensation deinterlacers. Its FPGA implementation is also presented.

1. INTRODUCTION

Interlaced scanning on high resolution display devices may cause visual artifacts. There are high-end deinterlacing methods that use motion estimation and adaptive interpolation algorithms, and some use object-based true-motion estimation algorithms[2]-[6]. All of these methods provide better deinterlacing quality. However, considering hardware cost, there are also many low-end deinterlacing methods for consumer electronics. BOB and Weave[1] are two low complexity deinterlacing methods. BOB is an intra-field interpolation method that reconstructs one progressive frame whose vertical resolution is halved, so the image is blurred. Weave directly combines two interlaced fields into one progressive frame; line-crawling effects occur in motion areas. Some motion adaptive deinterlacing techniques have been presented to improve image quality[1][7]. We also presented a motion adaptive deinterlacing algorithm with a 2-level hierarchical weighted average before[8]. Based on our previous work and these references, we present a high quality deinterlacing algorithm: a same-parity 4-field based motion adaptive deinterlacer with texture detection. Its hardware architecture and a memory access arbitration method with fixed order will also be described. The overview of the motion adaptive deinterlacing method and software simulation results are given in sections 2 and 3.
Section 4 presents its hardware architecture and FPGA implementation results. Section 5 gives the conclusion.

2. THE PROPOSED MOTION ADAPTIVE DEINTERLACING METHOD
Our previous motion adaptive deinterlacing method[8] uses a 2-level hierarchical weighted average. That is, the intra-field interpolation of 2-D cubic interpolation and edge interpolation according to edge detection is calculated first; then the weighted average of intra-field interpolation and inter-field interpolation is calculated according to the motion detection coefficient. With a 2-field motion detection method, we have to average the upper line and the lower line in the previous field for comparison with the current line in the current field, due to the different parity of the 2 fields. The resulting motion detection error is sometimes quite high, so we chose same-parity field motion detection instead. The 4-field horizontal motion detection method with 5 directions is shown in fig. 1[7]. Block matching is done between the forward field and the backward field with a 1x3 block size in five directions. If the minimum block matching difference is smaller than the threshold, and the pixel difference between the forward-forward field and the current field is also smaller than the threshold, temporal interpolation is adopted.
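As a concrete illustration, the 5-direction test above can be sketched in a few lines of Python. This is a minimal behavioural model, not the paper's hardware: the convention of shifting only the forward block by the direction while holding the backward block centered is our assumption.

```python
import numpy as np

def is_motion(fwd, bwd, fwd_fwd, cur, x, y, th):
    """Same-parity 4-field motion test at pixel (x, y):
    1x3 block matching between forward and backward fields over the
    five horizontal directions d = -2..2 (forward block shifted by d,
    backward block held centered -- an assumed convention), plus a
    pixel test between the forward-forward and current fields."""
    best = min(
        int(np.abs(fwd[x, y + d - 1:y + d + 2].astype(int)
                   - bwd[x, y - 1:y + 2].astype(int)).sum())
        for d in range(-2, 3)
    )
    pix_diff = abs(int(fwd_fwd[x, y]) - int(cur[x, y]))
    # Static (temporal interpolation allowed) only when both tests pass
    return not (best < th and pix_diff < th)
```

Identical fields pass both tests and are classified static; a field that differs strongly in all five directions is classified as motion.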

0-7803-9005-9/05/$20.00 ©2005 IEEE

The threshold value used to detect the motion area is calculated according to [7], based on the principle that human eyes are less sensitive to very light or very dark areas than to gray areas, so the threshold at light or dark pixels can be larger than the threshold at gray pixels. We declare a motion area when the field difference is larger than the threshold value; when the field difference is smaller than the threshold value, a static area is found. Equation 1 presents the simple threshold adjusting mechanism for motion detection. For an 8-bit gray scale picture, 255 and 0 mean white and black, respectively. Experimental results show that when the current pixel value is 255 or 0, to which human eyes are not sensitive, the threshold value is 20; when the current pixel value is 127, to which sensitivity is high, the threshold value is 10. [7]
Th(x, y) = 20 - 10 * (128 - |F(x, y) - 128|) / 128    (1)

where F(x, y) is the luminance value of the current pixel.
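The threshold behaviour described in the text (about 20 at pixel values 0 and 255, about 10 at 127) can be transcribed as a small sketch; the exact formula is our reconstruction from those reported values, not a verbatim copy of [7]:

```python
def adaptive_threshold(pixel):
    """Luminance-adaptive motion threshold: high (~20) for very dark
    or very bright pixels, where the eye is less sensitive, and low
    (~10) for mid-gray pixels.  Reconstructed from the values quoted
    in the text."""
    return 20 - 10 * (128 - abs(pixel - 128)) / 128
```

With this form the threshold is exactly 20 at pixel value 0, about 19.9 at 255, and about 10.1 at 127.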

Grad(n, x, y)_backward-field = F(x * width + y + 2*n - 2) - F(x * width + y + 2*n),  n = -1, 0, 1, 2    (2)

Grad(n, dir, x, y)_forward-field = F(x * width + y + dir + 2*n - 2) - F(x * width + y + dir + 2*n),  n = -1, 0, 1, 2    (3)

The luminance of same-parity fields in the same static area may have small differences that cannot be distinguished by eye, so the threshold must be high enough for such areas still to be detected as static. But if we raise the threshold, the true motion area sometimes cannot be detected correctly, especially in fast moving areas. Sometimes no threshold trade-off gives good quality for both static and fast moving sequences. To solve this problem, we propose to utilize texture consistency detection. Because the texture of the forward and backward fields must be inconsistent in fast moving areas even when their temporal differences in same-parity fields are under the original threshold, these areas cannot be misdetected as static. So by detecting the texture consistency of the forward and backward fields, motion detection accuracy can be improved. Only when the temporal field difference is smaller than the threshold value and the texture of the 2 fields in that direction is consistent is a static area declared. Otherwise we use motion adaptive weighted interpolation of the current field and the backward field. The block size of texture detection is 1x9. Four texture grads (n = -1, 0, 1, 2) over every other pixel of each block are calculated in the forward and backward fields respectively. The texture consistency of the two fields is matched along the same five directions as in motion detection. The four texture grads calculated in each field are shown in fig. 2 and equations 2 and 3:
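The grad pairs and the consistency rule can be sketched as follows. This is a plain-Python model; the per-pair form of the small-or-same-sign rule and the fixed grad threshold of 10 are our assumptions:

```python
def texture_grads(field, x, y, d=0):
    """Four texture grads of the 1x9 block centered at (x, y), shifted
    horizontally by direction d: pixel-value differences of the
    every-other-pixel pairs 1-3, 3-5, 5-7, 7-9."""
    return [int(field[x][y + d + 2 * n - 2]) - int(field[x][y + d + 2 * n])
            for n in (-1, 0, 1, 2)]

def texture_consistent(fwd, bwd, x, y, d, grad_th=10):
    """Consistent when each corresponding grad pair is either small in
    both fields (below grad_th, an assumed fixed threshold) or has the
    same sign in both fields."""
    pairs = zip(texture_grads(fwd, x, y, d), texture_grads(bwd, x, y))
    return all((abs(a) < grad_th and abs(b) < grad_th) or a * b > 0
               for a, b in pairs)
```

Flat blocks in both fields are trivially consistent; a rising luminance ramp matched against a falling one produces opposite-signed grads and is flagged inconsistent.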

Figure 2. Backward/forward field texture detection

where F(·) is the luminance of the forward or backward field, (x, y) is the center pixel of the current block, width is the number of pixels in a line of each field, and dir is the direction of texture consistency detection, from -2 to 2. That is, numbering the pixels of the 1x9 block from 1 to 9, the four texture grads in each field are the pixel value differences of 4 pairs: 1-3, 3-5, 5-7, 7-9. If all the absolute values of the texture grads of the forward and backward fields in the same direction as motion detection are lower than a fixed threshold, or the corresponding grads have the same sign in the two fields, then we assume the texture of the two fields in this direction is consistent. Otherwise it is inconsistent.

The block diagram of the deinterlacing method is shown in fig. 3. Three field buffers store the reference data from the 3 previous fields. The ELA module does directional edge interpolation according to the current-field information[7]. The same-parity 4-field horizontal motion detection and backward/forward field texture consistency detection calculate the differences between the forward-forward field and the current field, and between the forward field and the backward field. The field difference is sent to the threshold-adjusting module. According to the pixel value of the current field, the threshold-adjusting module provides an adaptive threshold value to produce the motion information. The decision block receives the motion information and the texture consistency information of the backward and forward fields, and then selects either the forward field in static areas or the motion adaptive weighted interpolation of the ELA of the current field and the backward field, according to the motion adaptive coefficient described below. So if a static area is detected, the average of the forward field and the backward field in the detected direction is selected.
Otherwise, if the temporal field difference is higher than the threshold or the texture of the backward and forward fields is inconsistent, no static area is found. Then we use motion adaptive weighted interpolation of the current field and the backward field. The motion adaptive coefficient is selected from a precalculated LUT, derived from experimental statistics, indexed by the difference between the average of the upper and lower pixels of the interpolated pixel in the current field and the corresponding pixel in the backward field. This difference is the input of the coefficient LUT, which outputs the motion adaptive coefficient. The resulting interpolation value is as follows:

MA_result = (1 - MA_coef) * Edge_current + MA_coef * F_backward    (4)


MA_coef is the motion adaptive coefficient from the LUT. Edge_current is the directional edge interpolation in the current field. F_backward is the corresponding pixel value in the backward field. Finally, the current field and the interpolated field are merged into the progressive frame.
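Equation (4) and the LUT lookup can be sketched as follows; the LUT contents below are an illustrative placeholder, not the paper's measured statistics:

```python
def ma_interpolate(upper, lower, edge_current, f_backward, lut):
    """Motion adaptive blend of equation (4): the difference between
    the vertical-neighbor average and the backward pixel indexes the
    coefficient LUT; the coefficient weights the backward field
    against the edge-interpolated current field."""
    diff = abs((upper + lower) // 2 - f_backward)
    ma_coef = lut[min(diff, len(lut) - 1)]
    return (1 - ma_coef) * edge_current + ma_coef * f_backward

# Placeholder LUT: trust the backward field for small differences,
# fall back to the current-field edge interpolation for large ones.
lut = [max(0.0, 1.0 - d / 16) for d in range(256)]
```

A zero difference yields MA_coef = 1 and passes the backward pixel through; a large difference yields MA_coef = 0 and keeps the intra-field edge interpolation.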

Using our proposed method, these problems can be solved. Good quality can be acquired in both static and fast moving sequences, as shown in fig. 6(a) and (b).

Figure 4. Deinterlacing of the fast motion sequence (racing car) using the method of [7]

Figure 3. Block diagram of proposed deinterlacing method

3. ALGORITHM SIMULATION RESULTS


The racing car and pendulum sequences were simulated using both the motion adaptive deinterlacing method presented in [7] and our proposed method. The pendulum sequence was also simulated using our previous method[8]. In the pendulum sequence, the letters 'O' and 'K' are present separately in different fields. The racing car sequence has a fast moving background. With the method of [7], the fast motion of the racing car sequence cannot be detected correctly: many artifacts appear in the fast moving background (see fig. 4). The threshold is too high; because the difference between the two same-parity fields is below the threshold, the sequence is detected as static, and the motion adaptive deinterlacing becomes inter-field interpolation with wrong neighboring field values. But if we reduce the threshold value, the static letters 'O' and 'K' in the pendulum sequence cannot be preserved in all resulting frames (see fig. 5), because the difference between the two same-parity fields in the static area is sometimes larger than the threshold. The static area is then detected as being in motion and deinterlacing is done by intra-field interpolation, so a flicker effect appears in the letter areas. The resulting pendulum frames using our previous method show the same effect.

Figure 5. Deinterlacing of pendulum with static letters O&K using the method of [7] and our previous method (two consecutive frames)

(a) racing car
(b) pendulum

Figure 6. Two sequences deinterlaced by our proposed method
4. ARCHITECTURE DESIGN AND FPGA IMPLEMENTATION

The architecture of the deinterlacer is shown in fig. 7. There are six main units: the input unit, 4-field motion detection and texture consistency detection unit, SDRAM controller unit, FIFO unit, ELA intra-field deinterlacing unit, and output buffer unit.
Figure 7. Architecture of the 4-field based deinterlacer, where k is the motion adaptive weighted coefficient of the current field and backward field

The input unit has the same function as we described before[8]. It accepts 2-channel or 1-channel 4:2:2 video input. Meanwhile, the input control module controls global frame/field synchronization. It extracts the TRS (Timing Reference Signal) information from the input video stream corresponding to SMPTE or BT.656, or gets synchronization information from the F_in, V_in, H_in signals, to synchronize the whole IC. The ELA intra-field deinterlacing unit performs edge detection and intra-field ELA interpolation in the current field. The 4-field motion detection and texture consistency detection unit performs the key function: it receives one line from each of the 4 fields during the same period. The calculation is described above. When a static area is detected, it outputs the average of the corresponding line blocks in the backward field and the forward field; otherwise, it outputs the backward field data line by line. Meanwhile, the motion adaptive coefficients k and 1-k are also acquired, as described above. If a static area is found, k is set to zero; otherwise k ranges from zero to one. Three fields--the current field, forward field and forward-forward field--are stored in SDRAM, so four lines of the 4 fields must be fetched from SDRAM and the input stream simultaneously. We adopt a time-sharing memory access scheme. The SDRAM controller module arbitrates the memory access requests and generates the memory access signals. It serves the access requests of the four FIFO modules in fixed order. To avoid overflow/underflow of each FIFO, evaluation considering the overhead of SDRAM access, together with simulation, shows that each FIFO depth should be 64 and the maximum SDRAM interface clock is 100 MHz for the HDTV video format. We use a Xilinx Virtex-II 4000 to implement the deinterlacer because we will combine other function modules, such as a scaler and video enhancement, into the same device. The external SDRAM is 32-bit, 64 Mbit. The resource usage of the FPGA is shown in Table I, so the hardware implementation is simple and cost efficient.

Table I. FPGA Resource Usage
Resource      | Usage Rate
External IOBs | 17%
Slices        | 39%
RAMB16s       | 10%

5. CONCLUSION
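The fixed-order arbitration can be modelled in a few lines. This is a behavioural sketch only; the class and method names and the per-burst granularity are illustrative, not the RTL interface:

```python
from collections import deque

class FixedOrderArbiter:
    """Serves the four line-FIFO request queues in a fixed sequence,
    so each FIFO's worst-case service interval is bounded -- which is
    what lets a fixed depth (64 in the paper's simulation) avoid
    overflow/underflow."""
    def __init__(self, n_ports=4):
        self.order = list(range(n_ports))       # fixed service order
        self.queues = [deque() for _ in range(n_ports)]

    def request(self, port, burst):
        self.queues[port].append(burst)

    def grant_cycle(self):
        # One arbitration pass: each requesting port is served once,
        # always in the same order.
        return [(p, self.queues[p].popleft())
                for p in self.order if self.queues[p]]
```

Regardless of the order in which requests arrive, every pass grants pending ports in the same fixed sequence, so no port can starve.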

We propose a new motion adaptive deinterlacing method combined with texture consistency detection to improve image quality. It is based on same-parity 4-field horizontal motion detection. Experimental comparisons between the proposed method, our previous method, and the method in [7] are given above; they show that the method proposed in this paper is better for both static and fast moving pictures. Its hardware architecture and FPGA implementation results are also presented, with low complexity and high-speed processing capability. We use a simple and efficient memory arbitration algorithm, so the design is very cost-efficient.
6. REFERENCES
[1] http://nickygukdes.digital-digest.com/index.htm
[2] Kenju Sugiyama and Hiroya Nakamura, "A Method of De-interlacing with Motion Compensated Interpolation," IEEE Trans. Consumer Electronics, vol. 45, no. 3, pp. 611-616, August 1999.
[3] Dang-Jiang Wang and Jin-Jang Leou, "A New Approach to Video Format Conversion Using Bidirectional Motion Estimation and Hybrid Error Concealment," Journal of Information Science and Engineering, vol. 17, pp. 753-777, 2001.
[4] Jed Deame, "Motion Compensated Deinterlacing: The Key to the Digital Video Transition," SMPTE 141st Technical Conference, New York, November 19-22, 1999.
[5] D. Van de Ville, W. Philips, and I. Lemahieu, "Motion Compensated De-interlacing for Both Real Time Video and Still Images," International Conference on Image Processing, vol. 2, pp. 680-683, 2000.
[6] G. de Haan and E.B. Bellers, "Deinterlacing-An Overview," Proceedings of the IEEE, vol. 86, no. 9, pp. 1839-1857, September 1998.
[7] Shyh-Feng Lin, Yu-Ling Chang, and Liang-Gee Chen, "Motion Adaptive Interpolation with Horizontal Motion Detection for Deinterlacing," IEEE Transactions on Consumer Electronics, vol. 49, no. 4, November 2003.
[8] Yang Yuhong and Zhang Wenjun, "Adaptive Deinterlacing Algorithm and Its Circuit Design," Journal of Data Acquisition & Processing, vol. 19, no. 3, pp. 334-338, 2004.

