While researching possible ways to control a 3D face with OpenGL, I found a paper titled "3D Face Deformation Using Control Points and Vector Muscles" (http://paper.ijcsns.org/07_book/200704/20070420.pdf), published in the International Journal of Computer Science and Network Security.
The paper describes a method of deforming a 3D face using control points and vector muscles.
While reading it, I thought about using control points as the base mechanism for controlling a 3D face driven by data read from the user's face. This should allow the 3D face to appear to convey the same emotions as the user.
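As a rough sketch of the control-point idea, the snippet below moves each mesh vertex by a distance-weighted blend of nearby control-point displacements. This is my own simplified illustration with a linear falloff, not the exact deformation scheme from the paper; the function name, the `radius` parameter, and the falloff rule are all my assumptions.

```python
import math

def deform(vertices, control_points, displacements, radius=1.0):
    """Move each vertex by a blend of control-point displacements.

    A simple radial-falloff sketch (my assumption, not the paper's scheme):
    a control point's influence fades linearly to zero at `radius`.
    """
    deformed = []
    for v in vertices:
        dx = dy = dz = 0.0
        for c, d in zip(control_points, displacements):
            dist = math.dist(v, c)
            # Linear falloff: full influence at the control point, none beyond radius.
            w = max(0.0, 1.0 - dist / radius)
            dx += w * d[0]
            dy += w * d[1]
            dz += w * d[2]
        deformed.append((v[0] + dx, v[1] + dy, v[2] + dz))
    return deformed

# A vertex sitting on a control point follows its displacement fully;
# a vertex outside the radius stays put.
verts = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
cps = [(0.0, 0.0, 0.0)]
disps = [(0.0, 0.2, 0.0)]  # e.g. raise a mouth-corner control point
print(deform(verts, cps, disps, radius=1.0))
```

In a real pipeline the displacements would come from tracked facial landmarks, and the deformed vertices would be uploaded to an OpenGL vertex buffer each frame.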
I have not yet investigated whether a Kinect or a webcam would suit this method, or which of the two would be better to use.
NB: All information used from this paper will be quoted and acknowledged.