
Latency in live network video surveillance


Table of contents

1. Introduction
2. What is latency?
3. How do we measure latency?
4. What affects latency?
   Latency in the camera
      Capture latency
      Latency during image enhancement
      Compression latency
      Buffer latency
      Audio latency
   Latency in the network
      The infrastructure
      Video stream data amount
      The transmission protocols
   Latency on the client side
      Play-out buffer
      Audio buffer
      Decompression
      Display device refresh rate
5. Reducing latency
   Camera side
   Network
   Client side
6. Conclusion

1. Introduction

In the network video surveillance context, latency is the time between the instant a frame is captured and the instant that frame is displayed. This is also called end-to-end latency or sensor-to-screen latency.

This transporting process involves a long pipeline of steps. In this white paper we dissect these steps, first looking at those that affect latency and finally giving recommendations on how to reduce it.

2. What is latency?

The definition of latency depends on the context, and its meaning varies accordingly. In network technology, latency is commonly understood as the delay between the time a piece of information is sent from the source and the time the same piece of information is received at its final destination. This paper discusses latency in network video surveillance systems, where we define latency as the delay from when an image is captured by a camera until it is visible on a video display. Several stages are required in this process: capture, compression, transmission, decompression and display of the image.

Each stage adds its own share of delay, and together these delays make up the total delay, which we call end-to-end latency. This end-to-end latency can be divided into three major stages that affect the total system latency:

1. Latency introduced by the camera (image processing and encoding latency)
2. Latency introduced by the network (transmission latency)
3. Latency introduced by the receiver side (client buffer, decoder latency and display latency)

Each of these latencies needs to be considered when designing a video solution in order to meet the latency goal of the video surveillance system.
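To make this breakdown concrete, the following minimal Python sketch simply sums per-stage delays into an end-to-end figure. All numbers are illustrative assumptions, not measurements of any particular system.

```python
# Minimal sketch: end-to-end latency as the sum of the three stages listed above.
# All values are illustrative assumptions.
camera_latency_ms = 50    # capture + image processing + encoding
network_latency_ms = 20   # transmission across the network
client_latency_ms = 90    # play-out buffer + decoding + display

end_to_end_latency_ms = camera_latency_ms + network_latency_ms + client_latency_ms
print(f"End-to-end latency: {end_to_end_latency_ms} ms")  # 160 ms in this example
```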

3. How do we measure latency?

Latency is usually expressed in time units, e.g. seconds or milliseconds (ms). It is very hard to measure exact latency, as this would require the clocks of the camera and the display device to be synchronized exactly. One simple way (with reservation for small deviations from the exact values) is to use the timestamp overlay text feature. This method measures the end-to-end latency of a video surveillance system, that is, the time difference between the capture of one image frame at the lens and the moment that same frame is rendered on a monitoring device.

Note that this method can produce an error of up to one frame interval, because the timestamps used to calculate the latency are only collected at frame capture. We can therefore only resolve the latency in steps of the frame interval. With a frame rate of 25 fps, we can calculate the latency as a multiple of 40 ms; with a frame rate of 1 fps, only as a multiple of whole seconds. This method is therefore not recommended for low frame rates.

> Turn on the timestamp in the overlay by using (%T:%f)
> Place the camera at an angle so that it captures its own live stream output
> Take snapshots of the live stream output to compare the time displayed in the original text overlay with the time displayed in the screen loop

From the picture above you can see that the time difference is 460 ms - 300 ms, which gives an end-to-end latency of 160 ms.
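The arithmetic behind this method can be summarized in a short sketch; the function and variable names below are illustrative, and the 460/300 ms values are simply the ones from the example above.

```python
# Sketch of the timestamp-overlay measurement described above. Both timestamps
# are read manually from a snapshot of the screen loop; names are illustrative.

def frame_interval_ms(fps: float) -> float:
    """Smallest step in which this method can resolve latency."""
    return 1000.0 / fps

def end_to_end_latency_ms(overlay_ts_ms: int, looped_ts_ms: int) -> int:
    """Difference between the timestamp burned into the live image and the
    timestamp visible in the re-captured (looped) image."""
    return overlay_ts_ms - looped_ts_ms

latency = end_to_end_latency_ms(460, 300)             # values from the example above
print(f"Measured latency: {latency} ms")               # 160 ms
print(f"Resolution at 25 fps: {frame_interval_ms(25):.0f} ms")  # accurate to ~40 ms
```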

4. What affects latency?

4.1 Latency in the camera

4.1.1 Capture latency

Let us take a look inside the video camera. Images are made from pixels captured by the camera sensor. The capture frequency of the sensor defines how many exposures it delivers per time unit, i.e. how many frames it can capture. Depending on which capture rate you choose, you will get a different capture latency. By setting the capture rate to 30 fps, meaning the sensor captures one image every 1/30th of a second, you introduce a capture latency of up to 1/30th of a second (roughly 33 ms).
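As a rough illustration of how the chosen capture rate bounds the capture latency (a sketch, not camera-specific figures):

```python
# Sketch: worst-case capture latency implied by the sensor's capture rate.
# At 30 fps the sensor delivers one exposure every 1/30 s, so a frame may wait
# up to roughly that long before it even enters the processing pipeline.

def capture_latency_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (60, 30, 25, 1):
    print(f"{fps:>2} fps -> up to {capture_latency_ms(fps):.1f} ms capture latency")
```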

4.1.2 Latency during image enhancement

After capturing the raw image, each image frame goes through a pipeline of enhancement processing. Steps such as de-interlacing, scaling and image rotation add latency; the more enhancement you want, the higher the latency cost in the camera. At the same time, the enhancements also affect the total amount of data being produced, which in turn has an effect on the network latency. Below are a few parameters that affect latency.

Image rotation
Rotation of the video stream to either 90 or 270 degrees adds an additional load to the encoding processor. The pixels have to be rearranged and buffered before they are sent to the encoder, causing additional delay.

Resolution
Higher resolution means more pixels for the processor to encode. The increase in processing time for a higher resolution compared to a lower one is usually balanced by a faster processing unit in high-resolution cameras and is thus normally insignificant. But higher resolution does result in more data per frame, i.e. more packets to be transmitted. In a network with limited bandwidth this may lead to delays during transmission, which in turn requires a larger buffer at the receiver side, causing longer latency.
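The sketch below, using assumed frame sizes and an assumed link speed (none of these numbers come from the paper), shows how more data per frame translates into longer per-frame transmission time on a bandwidth-limited link:

```python
# Illustrative sketch (all numbers assumed): per-frame transmission time as a
# function of compressed frame size and available bandwidth. Larger frames on a
# constrained link take longer to send, which pushes up receive-buffer needs.

def transmission_time_ms(frame_size_bytes: int, link_mbps: float) -> float:
    bits = frame_size_bytes * 8
    return bits / (link_mbps * 1_000_000) * 1000.0

# Rough, assumed compressed frame sizes.
for label, size_bytes in (("720p", 25_000), ("1080p", 50_000), ("4K", 200_000)):
    print(f"{label:>5}: {transmission_time_ms(size_bytes, 10):6.1f} ms per frame on a 10 Mbit/s link")
```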

Multiple streams
If more than one kind of stream is requested from the camera (different frame rates or resolutions), the processing of the additional stream adds latency, as all streams must be encoded by the same processor.

4.1.3 Compression latency

After the image has been processed, it is encoded to compress the amount of data that needs to be transferred. Compression involves one or several mathematical algorithms that remove image data, and this takes time depending on the amount of data to process. The delay introduced in this step is called compression latency. There are three aspects of compression that affect latency.

Complexity of compression algorithms
A more advanced compression algorithm will produce higher latency. H.264 is a more advanced compression method than MJPEG, but the difference in latency during encoding is only a matter of a few microseconds. On the decoding side, however, the variation may be bigger: the H.264 data stream produced by Axis video products requires the decoder to buffer at least one frame, while MJPEG decoding requires no buffer.
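A small sketch of what that one-frame decoder buffer means in time; the frame rate is the only input, and nothing here is specific to any particular decoder implementation:

```python
# Sketch: extra latency added purely by decoder-side frame buffering, on top of
# the decode time itself. MJPEG can be decoded frame by frame (no buffer), while
# an H.264 decoder typically holds at least one frame.

def decoder_buffer_latency_ms(fps: float, buffered_frames: int) -> float:
    return buffered_frames * 1000.0 / fps

print(f"MJPEG at 30 fps: {decoder_buffer_latency_ms(30, 0):.1f} ms of buffer latency")
print(f"H.264 at 30 fps: {decoder_buffer_latency_ms(30, 1):.1f} ms of buffer latency")
```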

Effectiveness of the compression method
The most common encoding schemes used in Axis cameras are MJPEG and H.264, and both introduce latency in the camera. H.264 minimizes the throughput of a video stream to a much greater extent than MJPEG, which means that using H.264 produces fewer data packets to be sent through the network, unpacked and rendered at the receiving end. This, of course, has a positive effect on reducing the total latency.

The choice of bitrate
Video compression reduces video data size. However, not all frames will be the same size after compression; depending on the scene, the compressed data size can vary.

In other words, the compressed data forms a variable bit rate (VBR) stream, which results in a variable bitrate being output onto the network. One needs to take the constraints of the available network, such as bandwidth limitations, into consideration. The bandwidth limitations of a streaming video system usually require the transmission bit rate to be regulated. Some encoders offer a choice between VBR and constant bit rate (CBR). By choosing CBR you guarantee that the network receives a limited amount of data, so that it is not overloaded, which would otherwise lead to network delay and the need for a larger buffer at the receiving end further down the chain.

In Axis cameras, choosing H.264 gives you the option to select CBR or VBR. In later firmware the choice is between Maximum Bit Rate (MBR) and VBR. However, Axis has always recommended using networked video with VBR, where the quality is adapted to scene content in real time.
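As a rough illustration of why an uncapped stream on a bandwidth-limited link builds up a queue (and therefore latency) while a capped stream does not, here is a toy Python simulation; the link speed, frame sizes, frame count and the queuing model itself are all assumptions, not values or methods from the paper:

```python
# Toy simulation (all numbers assumed): frames arrive every 1/fps seconds but the
# link can only drain link_bps bits per second. A bursty, uncapped (VBR-like)
# stream can build up a backlog, adding latency; a capped (CBR/MBR-like) stream
# keeps the backlog bounded at the cost of image quality in busy scenes.

import random

def worst_case_queuing_delay_ms(frame_sizes_bits, link_bps, fps):
    drain_per_frame = link_bps / fps      # bits the link can send per frame interval
    queue_bits, worst_s = 0.0, 0.0
    for size in frame_sizes_bits:
        queue_bits = max(0.0, queue_bits - drain_per_frame) + size
        worst_s = max(worst_s, queue_bits / link_bps)
    return worst_s * 1000.0

random.seed(1)
fps, link_bps = 25, 2_000_000                                 # assumed 2 Mbit/s link
vbr = [random.randint(20_000, 140_000) for _ in range(250)]   # bursty, ~2 Mbit/s on average
cbr = [80_000] * 250                                          # capped at exactly 2 Mbit/s

print(f"VBR worst-case queuing delay: {worst_case_queuing_delay_ms(vbr, link_bps, fps):.0f} ms")
print(f"CBR worst-case queuing delay: {worst_case_queuing_delay_ms(cbr, link_bps, fps):.0f} ms")
```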

It is not recommended to always use CBR as a general storage reduction tool or as a fix for weak network connections, since cameras delivering CBR video may be forced to erase important forensic details in critical situations.

When choosing a compression method, all three aspects mentioned above should be taken into consideration. On one hand, an advanced encoding algorithm takes longer to encode and decode; on the other hand, it reduces the volume of data sent through the network, which in turn shortens transmission delays and reduces the size of the receiver buffer.

4.1.4 Buffer latency

Because images are handled one frame at a time, only a limited amount of data can be compressed at once, so short-term buffers between the processing stages are sometimes needed. These buffers also contribute to the latency in the camera.

4.1.5 Audio latency

In some cases the video stream is accompanied by audio.

