
Missing Header Files

PTel
Beginner

Hello,

I downloaded all the SDK components (2016 R3) and tried to compile the tutorial projects.

The first one built, but the others failed because of missing header files.

Missing:

* pxcfaceconfiguration.h

* pxcfacedata.h

* pxcfacemodule.h

Where are these files?

14 Replies
idata
Employee

Hello Heimetli,

Thanks for reaching out!

I did a quick search online and was able to find the source code of those files at the following links:

https://github.com/libgdx/gdx-realsense/blob/master/native/include/pxcfaceconfiguration.h

https://github.com/libgdx/gdx-realsense/blob/master/native/include/pxcfacedata.h

https://github.com/libgdx/gdx-realsense/blob/master/native/include/pxcfacemodule.h

Nevertheless, since you are using the R3 version of the SDK, could you try version R2 to see if that fixes the issue?

Version R2 can be downloaded from http://registrationcenter-download.intel.com/akdlm/irc_nas/vcp/9078/intel_rs_sdk_offline_package_10.0.26.0396.exe

I hope this helps.

-Peter.
idata
Employee

Hello Heimetli,

Do you have any updates about this?

-Peter.
JHasl
Beginner

Hi Peter,

I am also failing to compile the FaceTracking sample for the RealSense SDK 2016 R3 (for use with the SR300 camera), for the same reason as above, i.e. the three missing header files. I have tried using the GitHub headers you pointed to, but they fail to compile with the rest of the example code. Is there a bugfix release due any time soon? If not, which earlier version of the SDK should I use, where can I get it, and will it work with the latest SR300 camera driver?

Best Regards,

Jane

JHasl
Beginner

I have also tried installing the previous version of the SDK and runtime (intel_rs_sdk_offline_package_10.0.26.0396.exe and intel_rs_sdk_runtime_10.0.26.0396.exe). These install, but they claim not to be able to find the camera hardware, and the samples do not work with the more up-to-date DCM installed (intel_rs_dcm_sr300_3.3.27.5718.exe). The link to the previous DCM for the SR300 on https://software.intel.com/en-us/articles/previous-intel-realsense-install no longer exists.

Any suggestions on a work-around?

Best Regards,

Jane

PTel
Beginner

Hello JaneH,

I uninstalled R3 of the SDK and installed R2 from the link above.

With R2 the samples build, and they work with our camera.

Regards

Peter

JHasl
Beginner

Thanks Peter,

I gave it another try and it works, with the latest DCM and R2 of the SDK, and as you say, the samples build and work. Not sure what went wrong the first time I tried, but thanks very much for giving me the persistence to try again :-)

Jane

BKour
Novice

These headers were part of the old-style API, which has since been refactored.

They were supposed to still be supported, so I guess this is yet another oversight. (The examples all seem to be 'old style' so far.)

I am currently combing through everything in order to switch to the new-style API, as unfortunately there doesn't seem to be a clear conversion guide.

To see the new structure, look in /(RSSDK_Install)/include/RealSense/ : the new header layout is in there.

If you look under /Face/ you will notice that all the previous functionality seems to be there, just in a different form.

I'll try to post my experience if I have any success.
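
As a rough illustration of the mapping described above, the sketch below contrasts the old-style includes with what the new layout appears to look like. The exact file names and the Intel::RealSense::Face namespace are assumptions based on that folder structure, so verify them against your own /include/RealSense/Face/ directory:

// Old-style includes, as still referenced by the 2016 R3 face samples:
//   #include "pxcfaceconfiguration.h"
//   #include "pxcfacedata.h"
//   #include "pxcfacemodule.h"

// New-style layout (assumed from the folder structure described above;
// check the actual file names under <RSSDK_Install>/include/RealSense/Face/):
#include "RealSense/Face/FaceConfiguration.h"
#include "RealSense/Face/FaceData.h"
#include "RealSense/Face/FaceModule.h"

// The classes appear to drop the PXC prefix and live in namespaces instead,
// e.g. something like Intel::RealSense::Face::FaceData rather than PXCFaceData
// (again an assumption - check the headers themselves).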

idata
Employee

Hi,

I have the same problem. What do you recommend I do? Is it preferable to just use the R2 version? I am also unable to find the 'emotion' sample that the SDK manual mentions.

Thanks in advance,

MartyG
Honored Contributor III

There has not been emotion recognition in the SDK since 2015. It was originally a component provided by a third-party company called Emotient, but it was withdrawn after RealSense's first year, possibly because the licensing agreement wasn't renewed after Emotient was purchased by Apple.

idata
Employee

Thanks MartyG,

So, in your opinion, what should I do? Use the R2 or the R3 SDK version? I mostly need face analysis, so I want to get the landmarks and the 'expression data' values, and to track the face.

Thanks again!

MartyG
Honored Contributor III

Using my records, I managed to track down the last SDK in which emotion was fully supported. It was R3 (not 2016 R3). So the emotion sample is presumably included in that install package, but the download links on the SDK page for past versions are broken now and probably never coming back. It's a pity, as the emotion sample was good fun.

You can still interface with the landmark points on the face - it just won't give you a description of what an expression means in terms of emotion. I helped somebody with this a while ago - I'll link you to the documentation I directed them to.

Basically, you can query individual landmark points instead of all of them by using a function called QueryPoint.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/querypoint_landmarksdata_pxcfacedata.html QueryPoint

The values of particular landmark points are on a face chart.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_face_face_landmark_data.html Intel® RealSense™ SDK 2016 R2 Documentation
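
To make that concrete, here is a minimal sketch of reading a single landmark with QueryPoint against the R2-era PXCFaceData interface, assuming you already have a face object; the landmark index below is just a placeholder, so use the face chart linked above to pick the point you need:

#include <cstdio>
#include "pxcfacedata.h"

// Given a face already retrieved from PXCFaceData (e.g. via QueryFaceByIndex),
// read one landmark point instead of querying all of them.
void PrintOneLandmark(PXCFaceData::Face *face)
{
    PXCFaceData::LandmarksData *landmarks = face->QueryLandmarks();
    if (!landmarks) return;   // landmarks may be disabled in the face configuration

    const pxcI32 landmarkIndex = 30;   // placeholder - see the landmark chart linked above
    PXCFaceData::LandmarkPoint point;
    if (landmarks->QueryPoint(landmarkIndex, &point))
    {
        // image holds pixel coordinates, world holds 3D camera-space coordinates
        printf("landmark %d: image (%.1f, %.1f)  world (%.3f, %.3f, %.3f)\n",
               landmarkIndex,
               point.image.x, point.image.y,
               point.world.x, point.world.y, point.world.z);
    }
}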

Also, it is possible to recognize camera-driven emotional expressions on the face in the Unity game engine, but it's not built into the SDK. It involves custom mechanisms that I built myself. I can try to explain them if you are interested.

idata
Employee

Yes Marty, any help would be fantastic!

How did you classify the emotions with the landmark information?

MartyG
Honored Contributor III

It wasn't with the landmark information. In my original system, I analysed the current angle of objects representing the parts of the face and used 'If' logic statements to draw conclusions based on the angle of the object generated by the SDK's face tracking script.

For example, with the eyebrows, eyelids and lips, I would assume an angle around the '0' degree mark to be "neutral" - neither flexing up into a happy expression nor down into a frown / grimace.

If the angle increased to something like +10 degrees (and was not less than 0 degrees), that was assumed to be a happy expression, as the camera script rotated the object representing the face part upwards.

If the angle was less than '0' (e.g. -10 or -15 degrees) then the expression was judged to be a negative expression.

Using the If logic conditions, you could take each individual analysis and use AND logic to compare different face parts.

For example, IF only the lips are in the negative range then the emotion is "Sad".

IF the lips are in the negative range AND the eyebrows are in the negative range then the emotion is "Angry".

My current system is more complex than this, but the above method works fine for simple emotional analysis of facial inputs to the camera.
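
A minimal sketch of that If/AND style of classification is below. The thresholds, part names and emotion labels are only illustrations of the idea described above, not values from the actual system:

#include <string>

// Angles (in degrees) of the objects representing face parts, as produced by
// a face-tracking script; ~0 is neutral, positive is flexed up, negative is down.
struct FacePartAngles {
    float eyebrows;
    float lips;
};

std::string ClassifyEmotion(const FacePartAngles &a)
{
    // Illustrative thresholds: 0 to +10 degrees counts as "up", below 0 as "down".
    const bool lipsNegative     = a.lips < 0.0f;
    const bool eyebrowsNegative = a.eyebrows < 0.0f;
    const bool lipsPositive     = a.lips > 0.0f && a.lips <= 10.0f;

    if (lipsNegative && eyebrowsNegative) return "Angry";   // both parts flexed down
    if (lipsNegative)                     return "Sad";     // only the lips are down
    if (lipsPositive)                     return "Happy";   // lips flexed up
    return "Neutral";
}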

idata
Employee

Thanks for the info MartyG! It helps a lot!
