
Video Over Wireless: Issues faced today and their resolutions

Parul Khurana

Abstract: This paper discusses the various issues faced while transmitting and receiving video over wireless networks, such as bandwidth variation and limitation and high transmission error rates. It then describes the methods and techniques used to overcome these problems, based on the requirements placed on the transmitted video at the receiver's end. We describe the MPEG-4 and H.264 video codecs in some detail, compare them with each other and with their predecessors, and explain why they are better suited to wireless networks. Techniques to overcome the limitations of a wireless channel, such as minimizing the effect of an error-prone channel and coping with low bit rates, are then described. Based on these known techniques, a method is proposed for transmitting video over a wireless channel that has a low bit rate and is error-prone. The proposed technique ensures continuous video playback at the receiver and can be adapted to the video quality desired at the receiving end.

INTRODUCTION

Transmission over a wireless channel is a significant challenge because the wireless link is subject to a lower bit rate and a higher error rate than wireline links. Therefore, to transmit delay-sensitive, high-bit-rate applications over wireless networks, we need effective compression algorithms. These challenges become even more difficult when complex coding techniques and compression methods are employed. The issues faced by wireless multimedia applications are very different from those of desktop multimedia applications. The reasons are summarized below:
1) Limited bandwidth is available to the wireless channel; the channel capacity is not fixed and varies over a wide range.
2) Bit error and burst error rates are high, due to multipath reflections and fading.
3) Wireless networks are diverse with respect to network topology, protocols, reliability, etc.
4) A trade-off is needed between quality, performance and cost.

Video transmission and reception over wireless links is difficult because of the large amount of data involved. As stated earlier, channel capacity is a limited resource, hence various compression techniques are employed to reduce the data rate. But reducing the bit rate introduces dependencies: any transmission error over the noisy channel can disturb the decoding process, and the video quality will degrade. Many video compression algorithms have been proposed, such as MPEG-1, MPEG-2 and H.26X, but these are not suited to wireless needs. In the coming sections we discuss these video compression standards, concentrating on MPEG-4 and H.264.

1. Compression Standards

MPEG-1 was released in 1993 by ISO MPEG. It mainly contributed to the storage of video on VCDs, and was also used for the distribution of video over the internet.

Standard (year)       | Transform | MC block sizes     | MC accuracy | Additional prediction modes                                  | Bit rates
H.261 (1990)          | 8x8 DCT   | 16x16              | 1 pel       | None                                                         | p x 64 kbps (p = 1, ..., 30)
H.263 (1995)          | 8x8 DCT   | 8x8, 16x16         | 1/2 pel     | B-frames; advanced prediction mode (1 or 4 MVs per MB, OBMC) | Arbitrary; as low as 20 kbps for PSTN
MPEG-1 (1993)         | 8x8 DCT   | 16x16              | 1/2 pel     | B-frames                                                     | 1.5 Mbps
MPEG-2 (1994)         | 8x8 DCT   | 16x16, 16x8        | 1/2 pel     | B-frames; interlace                                          | 4-8 Mbps (TV), 18-45 Mbps (HDTV)
MPEG-4 (2000)         | 8x8 DCT   | 8x8, 16x16         | 1/4 pel     | B-frames; interlace; global motion compensation; VOP coding  | Low to high bit rates
H.264/AVC (2002-2006) | 4x4 DCT   | 16x16 down to 4x4  | 1/4 pel     | B-frames; long-term frame memory; in-loop deblocking filter  | Supports the bit rates of H.263 and MPEG-4

Table 1: Basic features of video coding standards

MPEG-1 had no measures of error resilience. MPEG-2 was introduced mainly for compressing digital TV signals at 4-6 Mbps. This standard brought digital TV around the world, mainly for wideband applications. MPEG-2 also has no in-built error resilience.

1.1 MPEG-4

MPEG-4 was introduced for coding video at very low bit rates, which is why it suits wireless video transmission needs. It is used for streaming video, wireless streaming, digital video cameras, etc. All three of the above compression standards (MPEG-1, MPEG-2 and MPEG-4) use block-based hybrid coding. Each video frame is divided into fixed-size blocks called macroblocks (MBs). Each macroblock is independently coded using motion-compensated temporal prediction and transform coding. The DCT (Discrete Cosine Transform) coefficients and the motion vectors, i.e. the displacement between the current MB and the matching block in the previous frame, are then binary coded using variable-length coding.

MPEG-4 is an improved compression technique: it uses concepts from image analysis, such as segmentation and model-based compression, which improve coding efficiency. MPEG-4 allows interactivity between the user and the application and provides a high degree of compression. MPEG-4 also has the option of using a wavelet transform for intra-frame coding. The basic unit of compression is a Video Object (VO). A Video Object is an object of interest in a picture, which can be of any shape: a person, a scene, a tree, a background, etc. These are primitive VOs. A combination of primitive VOs makes a compound VO. This distinction between primitive and

compound VOs gives the user the ability to select and decode only the VOs of interest. This is called content-based scalability. MPEG-4 gives the decoder the ability to decode a VO without decoding the entire scene. This is the major reason why MPEG-4 is suited to bandwidth-constrained applications, i.e. wireless systems. The MPEG-4 compression standard has the following characteristics:
1. Bit rates between 5 kbps and 10 Mbps.
2. Video formats: progressive as well as interlaced video.
3. Compression efficiency from acceptable to near-lossless.

Why MPEG-4 is suited to wireless multimedia applications:
1. It provides high compression; the lowest bit rate can be 5 kbps.
2. MPEG-4 uses scalable video coding, best suited to varying channel capacity since it provides a varying coding bit rate.
3. MPEG-4 provides error-resilience tools that ensure fewer errors in the transmission of video over an error-prone wireless channel.
4. Face animation parameters can be used to reduce bandwidth consumption in real-time communications, e.g. mobile conferencing.

We discuss scalable video coding and error resilience further in the subsequent sections.
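The block-based hybrid coding loop described in Section 1.1 (motion-compensated prediction followed by a transform of the residual) can be sketched as follows. This is a minimal illustration, not any standard's actual algorithm: the frame sizes, search range and block contents are made up, and real codecs add quantization, entropy coding and sub-pel interpolation.

```python
import numpy as np

def best_motion_vector(ref, cur_block, top, left, search=4):
    """Exhaustive block matching: find the displacement into the reference
    frame that minimizes the sum of absolute differences (SAD)."""
    h, w = cur_block.shape
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y+h, x:x+w].astype(int) - cur_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (the transform applied to
    the motion-compensated prediction residual)."""
    N = block.shape[0]
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

# Toy frames: an 8x8 bright block moves down 2 pixels and right 1 pixel.
prev = np.zeros((32, 32), dtype=np.uint8); prev[8:16, 8:16] = 200
cur = np.zeros((32, 32), dtype=np.uint8); cur[10:18, 9:17] = 200

cur_block = cur[10:18, 9:17]
mv = best_motion_vector(prev, cur_block, top=10, left=9)   # -> (-2, -1)
pred = prev[10 + mv[0]:18 + mv[0], 9 + mv[1]:17 + mv[1]]
residual = cur_block.astype(int) - pred.astype(int)
coeffs = dct2(residual.astype(float))  # encoder sends mv + coded coefficients
```

Because the prediction here is exact, the residual and hence its DCT coefficients are all zero; in real video the residual is small but non-zero, and coding it cheaply is where the compression gain comes from.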

1.2 H.264

H.264 offers better coding efficiency than MPEG-4, at the cost of higher encoding and decoding complexity. It is a further improved standard since it uses new coding techniques such as multi-mode and multi-reference motion compensation, fine motion vector precision, weighted B-frame prediction, a 4x4 integer transform, an in-loop deblocking filter, uniform variable-length coding, a network abstraction layer (NAL) and so on. It has been observed that H.264 can save up to 50% in bit rate relative to other standards such as MPEG-4 and H.263 at the same visual quality.

2. Addressing the Issues Faced While Transmitting Video Over Wireless

2.1 #Issue: Bandwidth Limitation
#Technique: Scalable Video Coding

In traditional video compression systems, the video quality is optimized at a given bit rate. The video is compressed at a bit rate less than the channel bit rate, and later decoded at the receiver using the bits received from the channel. In this case the encoder already knows the channel capacity; this is the assumption of video coding in traditional systems. But this is not the case with wireless applications, where the channel capacity can vary over a range. The encoder no longer knows the channel capacity, and therefore does not know the bit rate at which the video should be optimized. So wireless video coding has the objective of optimizing the video quality over a given bit rate range rather than at a single given bit rate. This is done by scalable video coding.

In scalable video coding, the encoder generates one base layer and several enhancement layers for the video. The base layer provides the basic video quality when the channel bit rate is very low. The enhancement layers add to the quality of the video, and can be dropped if the throughput of the video stream decreases. Hence, for continuity of video playback, it is enough to receive the base layer. This method is used to adapt the scalable video bit rate to the available bandwidth or throughput.

2.2 #Issue: Error-prone Channel
#Technique: Error Resilient Techniques

To minimize the effect of transmission errors in the wireless channel, various error resilient techniques have been introduced. These methods help deliver quality video over error-prone wireless channels:

1) Feedback channel (retransmission) approach. The decoder reports successful and unsuccessful reception to the encoder by sending positive (ACK) or negative (NAK) acknowledgements. The encoder may then use multiframe prediction or intra MB coding for correction. However, the feedback channel approach may introduce additional delay.

2) Forward error correction (channel coding) approach. Redundant bits are added so that a sequence containing errors can be corrected. This may not be very suitable for applications with very low bit rates, since the number of bits to be transmitted increases.

3) Error resilience approach. Synchronization markers are used for spatial error localization. These markers allow the decoder to regain bit synchronization by resetting the spatial location in the decoded frames, preventing the propagation of spatial prediction errors. In H.263, resynchronization markers are inserted at certain positions in the bit stream, such as the starting point of a Group of Blocks (GOB); in this case, some areas of the video remain more error-prone. This was overcome in MPEG-4, where resynchronization markers are inserted at constant intervals.

4) Error detection and correction. This can be done at the image level or at the bit level. At the image level, the disturbances caused by errors propagating into neighbouring pixels of the reconstructed frame are exploited.
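Techniques 2 and 3 above can be sketched in a few lines. This is a deliberately simplified illustration: the repetition code stands in for the much stronger channel codes used in practice, and the marker string is a stand-in for the actual MPEG-4 resynchronization-marker syntax.

```python
MARKER = "|SYNC|"  # stand-in for an MPEG-4 resynchronization marker

def fec_encode(bits, r=3):
    """Technique 2 (FEC) in its simplest form: a rate-1/r repetition code.
    Real systems use stronger codes, but the redundancy cost is the same idea."""
    return [b for bit in bits for b in [bit] * r]

def fec_decode(coded, r=3):
    """Majority vote per r-bit group corrects up to (r-1)//2 flipped copies."""
    groups = [coded[i:i + r] for i in range(0, len(coded), r)]
    return [1 if sum(g) > r // 2 else 0 for g in groups]

def packetize(mb_groups):
    """Technique 3: insert a resynchronization marker before each group of
    macroblocks, mimicking MPEG-4's markers at constant intervals."""
    return MARKER + MARKER.join(mb_groups)

def decode_with_resync(bitstream, corrupted):
    """A segment hit by an error is concealed (None), but the decoder regains
    synchronization at the next marker, so the error cannot propagate further."""
    segments = bitstream.split(MARKER)[1:]  # leading split element is empty
    return [None if i in corrupted else seg for i, seg in enumerate(segments)]

# FEC: one flipped copy per triple is corrected by the majority vote.
coded = fec_encode([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
coded[1] = 0                    # channel flips one copy
assert fec_decode(coded) == [1, 0, 1]

# Resync: only the damaged segment is lost, decoding resumes afterwards.
stream = packetize(["MB0 MB1", "MB2 MB3", "MB4 MB5"])
decoded = decode_with_resync(stream, corrupted={1})
# decoded == ["MB0 MB1", None, "MB4 MB5"]
```

The sketch makes the trade-offs visible: FEC triples the bit count here (the low-bit-rate objection raised above), while resynchronization markers cost only a few bits per segment but recover nothing, merely confining the damage.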

Fig 1: A typical example of video over wireless networks. [6]

2.3 #Multiple Description Coding

This method is mainly used for real-time streaming. For transmission of video over wireless channels, we use error resilient techniques at the encoder and error concealment at the decoder. FEC (forward error correction) is an error resilience method suited to stationary nodes. But for nodes that are mobile over a wireless network, there can be cases where nodes move out of the source's transmission range. In such cases, the video can be encoded into several independent descriptions and transmitted over multiple channels. This method is known as Multiple Description Coding (MDC). It achieves error resilience using path diversity. In the MDC method, the different descriptions can be decoded independently at the receiver. There is a trade-off between redundancy and error resilience in the MDC scheme.

One MDC technique, called temporal splitting, sends the even frames on one channel and the odd frames on the other. Hence, if only one description is received, the lost frames may be reconstructed from the other description. This method has poor coding efficiency, however, since the increased temporal distance degrades motion-prediction-based coding. Another MDC method is spatial splitting, in which even lines are sent on one path and odd lines on the other.

2.4 #Video Streaming with Multi-path Routing

Multipath routing is preferred over single-path routing when MDC is used for encoding the video stream. As described earlier, layered video coding has one base layer and several enhancement layers. Adding one or more enhancement layers increases the quality of the decoded video. A particular enhancement layer can be decoded only when all the lower-quality enhancement layers are available. Thus an enhancement layer is useless at the receiver if the lower-quality

layers are not available, and especially when the base layer is not available. So if we assume that only two layers, one base layer and one enhancement layer, are transmitted to the destination, two disjoint paths are used for streaming these layers, such that the base layer takes the path with the lowest packet-drop probability.

Unlike layered coding, MDC has no hierarchy among the descriptions, since each description can be decoded independently. MDC generates descriptions such that different levels of reconstruction quality can be obtained from different subsets of them. The fact that descriptions may be independently decoded makes MDC highly suitable for packet networks, where there is no prioritization among packets.

We propose a different technique in this paper, based on the MDC method and the layered video coding method described above. This technique is best suited to cases where the bit rate over the wireless channel is extremely low. In such cases it ensures that video playback is continuous at the receiver; the focus is on the continuity of the playback rather than on the quality of the video being received. In this technique, of all the base and enhancement layers of the video to be transmitted, only the base layer follows Multiple Description Coding, while the enhancement layers are simply transmitted with the usual coding techniques, with minimal or no error resilience applied at the encoder for these layers. The MDC employed for the base layer ensures that the base layer is received through one or more descriptions, and hence the probability of errors propagating in the base layer stays at a minimum. This way we ensure that even if the channel is error-prone, the base layer is

received with minimized effects of error propagation; hence the video playback will be continuous. On the other hand, we care less about the enhancement layers, and let them be transmitted over the error-prone channel with minimal or no error resilient techniques applied. This technique can be modified according to the video quality required at the receiver's end: if base layer reception alone is not enough, one or more enhancement layers sitting immediately above the base layer in the hierarchy can also follow MDC, while the rest of the enhancement layers follow no error resilient method. With this technique we save on the bandwidth of the wireless channel.

2.5 #QoS based on Video Content

Most of the techniques employed for video over wireless links are based on signal-level image quality parameters. The desired QoS of the received video depends not only on frame rate, image quality and granularity but also on video content. In wireless video streaming, video content is a very important factor in determining the desirable QoS. For different scenes in a video, different QoS may be desired. For example, when streaming a lecture video, low-quality video data may be transmitted for a scene that contains only classroom discussion, but a high-quality video

signal is desired when a blackboard discussion is happening, since the content on the blackboard is essential to understanding the lecture.

Fig 2: Content analysis of lecture videos. The left image shows the original image and the right image shows the processed image with the content region detected and enhanced. [1]

Under a given network bandwidth, selecting an appropriate set of QoS parameters depends on both the video content and the preferences of the clients. For example, in a sports video streamed over a wireless channel, if the network bandwidth allows either a high-quality image or a low-quality image to be transmitted, the selection depends on the content of the scenes in the sports video and the preference of the viewer. Making decisions based on the video content of different scenes requires a strong content analysis method. Content analysis analyzes the video content dynamically, in real time; it classifies the video into scenes and segments content regions. For lecture video, the content analysis module estimates the blackboard background color and its variation, and segments content regions from different areas in each frame.

The technique that we proposed above in this paper can easily be modified for the scenario stated above. For example, if we take the case of a lecture video being streamed, and the scene in the video is a long classroom discussion without images of the blackboard being shown, then we need not apply any error resilient techniques, such as MDC, to either the base layer or the enhancement layers.
This may result in poor received video quality, but in this case it does not matter, since the purpose of the video lecture is still served. When blackboard scenes are being transmitted, the video quality desired at the receiver is high, so we can apply error resilient schemes to the base layer alone, or to the base layer and several enhancement layers, depending on the quality requested by the client at the receiving end.
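The content-aware variant of the proposed scheme (Sections 2.4 and 2.5) can be sketched as below. Everything here is an illustrative assumption: MDC is modelled as two plain duplicates over independent paths rather than real descriptions, the channel is a simple Bernoulli packet-loss model, and the scene classifier with its `content_area_ratio` feature and 0.3 threshold is hypothetical, not taken from [1].

```python
import random
random.seed(7)  # deterministic illustration

def transmit(packets, loss_prob):
    """Bernoulli packet-loss channel: each packet is dropped independently."""
    return [p if random.random() > loss_prob else None for p in packets]

def classify_scene(frame_stats):
    """Toy scene classifier for a lecture stream (assumed feature/threshold):
    a frame with a large detected content region is a blackboard scene."""
    return "blackboard" if frame_stats["content_area_ratio"] > 0.3 else "discussion"

def resilience_policy(scene):
    """Content-aware policy: protect layers with MDC only where the content
    demands high quality; plain discussion needs continuity, nothing more."""
    if scene == "blackboard":
        return {"base": "MDC", "enh": "MDC"}
    return {"base": "none", "enh": "none"}

def send_layer(packets, protection, loss_prob):
    """'MDC' here means two duplicates over two independent paths, so a
    packet is lost only if both copies are dropped (loss p -> roughly p*p)."""
    if protection == "MDC":
        a, b = transmit(packets, loss_prob), transmit(packets, loss_prob)
        return [x if x is not None else y for x, y in zip(a, b)]
    return transmit(packets, loss_prob)

def stream(scene, base, enh, loss_prob=0.3):
    policy = resilience_policy(scene)
    return (send_layer(base, policy["base"], loss_prob),
            send_layer(enh, policy["enh"], loss_prob))

base = [f"B{i}" for i in range(10)]
enh = [f"E{i}" for i in range(10)]
scene = classify_scene({"content_area_ratio": 0.6})   # "blackboard"
base_rx, enh_rx = stream(scene, base, enh)
```

The point of the sketch is the policy switch: during blackboard scenes both layers pay the MDC redundancy cost for quality, while during discussion scenes all resilience is dropped to save bandwidth, exactly as argued above.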

CONCLUSION

In this paper we have listed the various issues that must be addressed for the transmission of video over wireless networks, whose solutions are still research topics today. It has been shown that MPEG-4 and H.264 are more suitable for video compression in wireless settings. Techniques to minimize error propagation have been discussed, and based on them a technique has been proposed that addresses the limitations of transmitting video over a low-bit-rate channel. The issues faced while transmitting video over wireless vary greatly with the quality of the video desired at the receiver's end; this is discussed mainly in the subsection on content-based QoS. Better resolutions to the problems of transmission over wireless are still awaited.

REFERENCES

1. Tiecheng Liu, Chekuri Choudary, "Content-Aware Streaming of Lecture Videos over Wireless Networks", IEEE.
2. Yu Wang, Lap-Pui Chau, Kim-Hui Yap, "A Novel Resynchronization Method for Scalable Video", IEEE.
3. M. Salim Beg, Ekram Khan, "Video Over Wireless Networks: A Brief Review", IEEE.
4. Bo Yan, Kam W. Ng, "A Survey on the Techniques for the Transport of MPEG-4 Video over Wireless Networks", IEEE.
5. Xiaowei Ding, Kaushik Roy, "A Novel Bitstream Level Joint Channel Error Concealment Scheme for Real-Time Video over Wireless Networks", IEEE.
6. Viswesh Parameswaran, Sudheendra Murthy, Arunabha Sen, Baoxin Li, "An Adaptive Slice Group Multiple Coding Technique for Real-Time Video Transmission over Wireless Networks", IEEE.
7. Mihaela van der Schaar, Deepak Turaga, Thomas Stockhammer, "MPEG-4 Beyond Conventional Video Coding: Object Coding, Resilience and Scalability".
