
RealSense SR300 Latency Measuring Method

idata
Employee

Hi,

Right now I am involved in a project using the RealSense SR300, and I need to measure its hand recognition latency.

First, I used the system clock in VS2015 to measure the time from sm->AcquireFrame(true) to handData->Update(). Part of the code is shown below:

for (;;)
{
    // Clock start
    clock_t start = clock();

    if (sm->AcquireFrame(true) < Status::STATUS_NO_ERROR) break;

    const Sample *sample = reader->GetSample();

    // Hand detection
    if (handData->Update() == Status::STATUS_NO_ERROR)
    {
        // Clock end: clock() returns ticks, so convert to milliseconds
        clock_t end = clock();
        cout << double(end - start) * 1000.0 / CLOCKS_PER_SEC << " ms" << endl;
    }

    // Release the frame so the pipeline can deliver the next one
    sm->ReleaseFrame();
}

Here sm is an instance of SenseManager, handData is a HandData instance, and reader is the sample reader the frames come from.
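
For context, these objects are created roughly as follows. This is a sketch from memory of the SDK 2016 R2 C++ API rather than my exact code; error checks and the reader setup are left out:

// Minimal setup sketch (SDK 2016 R2): create the pipeline and hand module
SenseManager *sm = SenseManager::CreateInstance();
sm->EnableHand();                                // enable the hand tracking module
sm->Init();                                      // start streaming
HandModule *handModule = sm->QueryHand();        // access the hand module interface
HandData *handData = handModule->CreateOutput(); // filled in by handData->Update()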

The processing time obtained from the program above is around 18 ms, which is satisfactory. However, when I measure the latency of the whole hand recognition process, the total time is between 60 ms and 80 ms.

Has anyone obtained similar results?

The basic idea of my measuring program is described below:

1. I wrote a function that continuously shows the hand recognition program's system time on the screen;

2. I placed the RealSense SR300 in front of the screen so the camera records that time;

3. In the program I enabled ALERT_HAND_DETECTED, so as soon as a hand appears in front of the camera, the SR300 captures the on-screen time and the hand in the same frame;

4. Inside the program, when ALERT_HAND_DETECTED fires, I stop timing and output the frame (a code sketch follows below);

5. Finally, I subtracted the time shown in the frame from the time at which ALERT_HAND_DETECTED fired (the "end" value in the program above) to get the processing latency.

A sample picture is shown below. The picture was taken at clock time 25100 and ALERT_HAND_DETECTED fired at 25158, so the latency for this picture is 58 ms.
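
The alert handling in step 4 looks roughly like this. It's only a sketch: the alert query methods come from the HandData interface, the alert is enabled once through HandConfiguration, and the frame output code is elided:

// Enable the alert once, after Init, via the hand configuration
HandConfiguration *config = handModule->CreateActiveConfiguration();
config->EnableAlert(HandData::ALERT_HAND_DETECTED);
config->ApplyChanges();

// Inside the processing loop, after handData->Update():
HandData::AlertData alertData;
for (int i = 0; i < handData->QueryFiredAlertsNumber(); ++i)
{
    if (handData->QueryFiredAlertData(i, alertData) < Status::STATUS_NO_ERROR)
        continue;
    if (alertData.label == HandData::ALERT_HAND_DETECTED)
    {
        clock_t end = clock(); // the "end" used in step 5
        // ... output the current color frame here so the on-screen
        //     clock it contains can be compared against "end" ...
    }
}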

Best,

Eric

idata
Employee

Hello Eric,

Thanks for reaching out!

I've never heard of anyone measuring the latency of hand recognition with the SR300. However, please help us understand your goal: do you want to reduce this latency, or are you simply looking to confirm the number?

Let us know.

Pedro M.
idata
Employee

Hi Pedro,

Thanks for the reply.

Right now I'm just trying to confirm the number. I want to know whether the recognition latency is, or can be reduced to, 50 ms or less.

Best,

Eric

MartyG
Honored Contributor III

I managed to track down a research study of the older R200 and F200 cameras. Among the numerous factors discussed in the paper are latency and 'time to recognition' values. In that study, the authors aimed for a target of under 50 ms from making a hand gesture to recognition of the gesture on the F200 camera (the direct predecessor of the SR300).

http://www.girdac.com/Products/Real-Sense-New-Experiences-F200-R200.pdf

Ultimately, though, the latency is not likely to be measurable in absolute terms, as it may be influenced positively or negatively by factors such as the capabilities of the device the camera is attached to, the stability and behavior of the USB ports on a particular machine, and so on.

idata
Employee

Hi Marty,

Thanks for your help.

Do you think the CPU plays an important role in hand recognition latency? The CPU I use is a 2014 Xeon E3. I thought the 3D reconstruction was done by the ASIC inside the SR300 module, and that the computer was in charge of processing the depth data for tasks such as hand recognition.

I will also look into my code again to see if something else influences the results.

Best,

Eric

MartyG
Honored Contributor III

Since there is a computer involved, there is bound to be the potential for a processing bottleneck somewhere. For example, even if the camera has independent hardware that can handle detection speedily, there may be a delay in doing something with that data afterwards (e.g. triggering an event in a program in response to the detection, since the program would be handled by the PC processor).

So whether your measurement includes some lag depends on what point in the process you are measuring. If you were measuring the speed at which the camera reacts to a detection event, then there may certainly be some lag caused by the PC hardware.

If you were simply measuring the initial detection time, though, and that part was handled solely by the camera hardware without involvement of the PC hardware, then you could be confident that detection speed should be relatively consistent across all units of that camera model (though no two cameras of the same model are precisely identical, due to manufacturing variances at the factory that show up in each camera's "intrinsic matrix").

I should add that some developers have speculated that temperature is a factor that can influence camera hardware performance. Page 30 of the SR300 data sheet document would seem to back this up.

Your 2014-era Xeon E3 is probably a 4th generation Haswell architecture. The SR300 usually prefers a minimum of a 6th generation Skylake architecture processor. However, Xeons exist in their own 'strange universe' that defy normal specification rules and work with the SR300.

idata
Employee

Hi Marty,

I really appreciate your detailed explanation. There are truly so many factors that may influence the results, including cable length. Since I can only use the SDK to measure latency, I can't find a better way to eliminate all this noise. But based on your answer, I ran a few more tests.

First, I decreased the color camera frame rate to 30 fps and recorded the processing loop time, which consists mainly of senseManager->AcquireFrame() and handData->Update(). Each loop took about 33 ms, which matches the 30 fps setting, so I conclude the computer has enough computing power to finish its work in time. Then I recorded the loop time when the algorithm detects a hand. Initially I thought the program would take extra time processing images that contained only part of a hand, so lag would accumulate before a "whole" hand was detected; however, my assumption did not agree with the measured data.
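
The 30 fps test loop looks roughly like this. The stream resolution and the enable call are my choices for the test rather than anything prescribed; a sketch, not the exact program:

// Cap the color stream at 30 fps before Init, then time each loop
sm->EnableStream(Capture::STREAM_TYPE_COLOR, 640, 480, 30);
sm->Init();

for (;;)
{
    clock_t loopStart = clock();
    if (sm->AcquireFrame(true) < Status::STATUS_NO_ERROR) break;
    handData->Update();
    sm->ReleaseFrame();
    clock_t loopEnd = clock();
    // With a blocking AcquireFrame at 30 fps this prints about 33 ms
    cout << double(loopEnd - loopStart) * 1000.0 / CLOCKS_PER_SEC << " ms" << endl;
}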

1. The first test is the normal "ALERT_HAND_DETECTED" program at 30 fps; "1" means a hand was detected and "0" means no hand was detected.

2. The second test was designed to see whether the image output process influences hand recognition latency.

In the first test, we can basically say the computer can decide whether a hand exists within 33 ms; the 6 ms and 10 ms values are the loop times for the first two frames.

In the second test, the image output process does add processing time to that loop, and the processing times of the next two loops drop a lot. This is probably because the frame is already stored in the image buffer, so the program can deal with it immediately. However, I don't think this process influences hand recognition latency, because the previous frame is finished within 33 ms.

We are probably not able to measure the real latency of this camera through the SDK since, as you mentioned, it varies from computer to computer. However, I do hope Intel can provide some reference figures.

Best,

Eric

MartyG
Honored Contributor III

Yes, cable length is a factor in signal quality. The cable supplied with the camera has been rated specifically for efficient use with the camera, and using a USB extension cable can degrade performance. Most users find that the camera will work with a 1 m extension cable, but not a 2 m one unless that cable is of premium quality.

I did some further research but wasn't able to find more information about RealSense's detection speed that might be relevant to your project, aside from an SDK function called QueryEngagementTime, which reports when the cursor is ready to be controlled by the hand after detection.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?queryengagementtime_pxccursorconfiguration.html Intel® RealSense™ SDK 2016 R2 Documentation
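
Reading that value would look roughly like this. This is an untested sketch based on the documentation page above; the surrounding cursor module setup (EnableHandCursor, QueryHandCursor, CreateActiveConfiguration) and the millisecond unit are my assumptions:

// Untested sketch: query the engagement time from the cursor module
sm->EnableHandCursor();                                  // enable cursor tracking
sm->Init();
HandCursorModule *cursorModule = sm->QueryHandCursor();
CursorConfiguration *config = cursorModule->CreateActiveConfiguration();
pxcI32 engagementMs = config->QueryEngagementTime();     // engagement time (presumably ms)
config->Release();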
