Currently I am having some issues coding with GStreamer on Windows. I get a few errors relating to creating a pipeline for the program to work with, and I'm not sure whether the cause is my environment or something else. So I am currently getting a Linux distribution working on a computer for an easier coding environment setup, and should have a prototype done in a week or two.
If anyone viewing this blog has any experience with GStreamer on Windows (www.gstreamer.com - SDK) and has any information for me, it would be really appreciated.
A little more information about the prototype: if I get the time for a UI, it will be done with the Qt framework.
Thursday, 2 August 2012
Over these past few weeks, I have decided on a few changes to my system.
The previous UI that I had prepared now seems a little too "clunky", so I have decided to streamline it to make it easier to use. Essentially there will be no video screen in the actual program; instead it will present options for the user to set. I decided on this because the program, put simply, changes a webcam stream by replacing the face with a 2D caricature or 3D avatar, so the UI does not need any screens of its own. The settings for the user to modify will include the type of avatar (2D or 3D) and the avatar itself, along with a checkbox named "Activate" which, if checked when the Apply button is pressed, activates the program overlay, plus others if I find that more options would be useful.
The system itself will be :
- GStreamer on the frontend, which the UI code deals with
- OpenCV on the backend, which does the algorithmic work (face detection etc.) and is called only via the GStreamer functions I write.
Essentially, the UI will call the GStreamer functions when the Activate checkbox (explained above) is checked and the user clicks Apply. At the moment I am trying to find the best method to pass each frame to OpenCV so that the OpenCV libraries can do the algorithmic work.
My current view is that GStreamer sends a Mat (or the relevant format) to OpenCV, which then calls functions for face detection and feature detection. OpenCV will then return a two-dimensional array of the coordinates of the facial features, in reference to the Mat that was sent. GStreamer will then draw an overlay on the webcam video according to the coordinates given by OpenCV.