Fig. 6 (from "A multisource fusion framework driven by user-defined knowledge for egocentric activity recognition"): Sample image of each activity in the training set. Images a through o correspond to "cleaning," "computer use," "eating," "entertainment," "lying down," "meeting," "reading," "shopping," "talking," "telephone use," "transportation" (driving), "walking outside," "washing up," "watching TV," and "writing," respectively.