Ph.D. Dissertation Defense: "ORTHOGONAL MOMENT-BASED HUMAN SHAPE QUERY AND ACTION RECOGNITION FROM 3D POINT CLOUD PATCHES" by Huaining Cheng

Friday, December 4, 2015, 1 pm to 3 pm
Campus: Dayton, 304 Russ Engineering Center
Audience: Current Students, Faculty

ABSTRACT:

Human shape data from standoff 3D sensors, such as Light Detection and Ranging (LIDAR), often comprise low-resolution, disjoint, and irregular patches of points resulting from self-occlusions and viewing angle variations. 3D analysis of this type of data for shape-based pattern recognition has rarely been studied; instead, recent approaches focus on 2D depth image analysis. In this research, a new degeneracy-tolerable, multi-scale 3D shape descriptor based on the discrete orthogonal Tchebichef moment is proposed for single-view partial point cloud representation and characterization. Our shape descriptor consists of low-order 3D Tchebichef moments computed with respect to a new point cloud voxelization scheme that normalizes translation, scale, and resolution. To evaluate the effectiveness of our descriptor, we built a multi-subject pose shape baseline to produce simulated LIDAR captures of human actions at different viewing angles, and we developed applications for point cloud reconstruction, nearest-neighbor pose shape search, and human action recognition. The shape search experiments show that our descriptor outperforms both the Fourier descriptor and the wavelet descriptor. The action recognition, built on Naïve Bayes classifiers using new temporal statistics of a 'bag of pose shapes', demonstrates excellent action and viewing-direction predictions with strong consistency across a large range of image scales and viewing angle variations, unlike 2D depth image analysis, whose performance can deteriorate at small image scales.
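To illustrate the kind of descriptor the abstract describes, the sketch below computes low-order 3D Tchebichef moments of a cubic voxel grid. This is a minimal illustration, not the dissertation's implementation: it assumes the point cloud has already been voxelized into an N×N×N occupancy grid, and it builds the orthonormal discrete Tchebichef polynomial basis by QR-orthonormalizing the monomial basis on the grid points (numerically stable for the low orders used here), rather than by the closed-form recurrence.

```python
import numpy as np

def tchebichef_basis(N, order):
    """Orthonormal discrete Tchebichef polynomials t_0..t_order on {0,...,N-1}.

    Built by QR-orthonormalizing monomials on the grid; the columns of Q are
    the orthonormal polynomials, fixed to have positive leading coefficients.
    """
    x = np.arange(N, dtype=float)
    V = np.vander(x, order + 1, increasing=True)  # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                        # orthonormal columns on the grid
    signs = np.sign(Q[-1, :])                     # sign at the rightmost grid point
    signs[signs == 0] = 1.0
    return Q * signs                              # shape (N, order + 1)

def tchebichef_moments_3d(voxels, order):
    """Low-order 3D Tchebichef moments T[p,q,r] of a cubic voxel grid.

    T_{pqr} = sum over (x,y,z) of f(x,y,z) * t_p(x) * t_q(y) * t_r(z).
    """
    N = voxels.shape[0]
    T = tchebichef_basis(N, order)
    return np.einsum('xyz,xp,yq,zr->pqr', voxels, T, T, T)

# Usage: a toy 8^3 occupancy grid and its moments up to order 3
# (a real descriptor would use the dissertation's normalized voxelization).
grid = np.zeros((8, 8, 8))
grid[2:6, 2:6, 2:6] = 1.0
moments = tchebichef_moments_3d(grid, order=3)   # shape (4, 4, 4)
```

Because the basis is orthonormal, truncating to low orders gives a compact multi-scale summary of the voxelized shape, and keeping all N orders allows exact reconstruction of the grid by the inverse transform.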

Ph.D. Committee: Drs. Soon Chung (Advisor), Nikolaos Bourbakis, Yong Pei, and Vincent Schmidt (WPAFB)
