Joint modality fusion and temporal context exploitation for semantic video analysis
EURASIP Journal on Advances in Signal Processing volume 2011, Article number: 89 (2011)
Abstract
In this paper, a multimodal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots, and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest, separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multimodal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames into the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results, as well as comparative evaluation, from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
1. Introduction
Due to the continuously increasing amount of video content generated every day and the richness of the available means for sharing and distributing it, the need for efficient and advanced methodologies for video manipulation emerges as a challenging and imperative issue. As a consequence, intense research efforts have concentrated on the development of sophisticated techniques for effective management of video sequences [1]. More recently, the fundamental principle of shifting video manipulation techniques towards the processing of the visual content at a semantic level has been widely adopted. Semantic video analysis is the cornerstone of such intelligent video manipulation endeavors, attempting to bridge the so-called semantic gap [2] and efficiently capture the underlying semantics of the content.
An important issue in the process of semantic video analysis is the number of modalities that are utilized. A series of single-modality approaches have been proposed, where the appropriate modality is selected depending on the specific application or the analysis methodology followed [3, 4]. On the other hand, approaches that make use of two or more modalities in a collaborative fashion exploit the possible correlations and interdependencies between their respective data [5]. Hence, they capture the semantic information contained in the video more efficiently, since the semantics of the latter are typically embedded in multiple forms that are complementary to each other [6]. Thus, modality fusion generally enables the detection of more complex and higher-level semantic concepts, and facilitates the generation of more accurate semantic descriptions.
In addition to modality fusion, the use of context has been shown to further facilitate semantic video analysis [7]. In particular, contextual information has been widely used for overcoming ambiguities in the audiovisual data or for resolving conflicts in the estimated analysis results. For that purpose, a series of diverse contextual information sources have been utilized [8, 9]. Among the available contextual information types, temporal context is of particular importance in video analysis. It is used for modeling temporal relations between semantic elements or temporal variations of particular features [10].
In this paper, a multimodal context-aware approach to semantic video analysis is presented. The objective of this work is the association of each video shot with one of the semantic classes that are of interest in the given application domain. Novelties include the development of: (i) a graphical modeling-based approach for jointly realizing multimodal fusion and temporal context exploitation, and (ii) a new representation for providing motion distribution information to Hidden Markov Models (HMMs). More specifically, for multimodal fusion and temporal context exploitation, an integrated Bayesian Network (BN) is proposed that incorporates the following key characteristics:

(a) It simultaneously handles the problems of modality fusion and temporal context modeling, taking advantage of all possible correlations between the respective data. This is in sharp contradistinction to the usual practice of performing each task separately.

(b) It encompasses a probabilistic approach for acquiring and modeling complex contextual knowledge about the long-term temporal patterns followed by the semantic classes. This goes beyond common practices that are, e.g., limited to only learning pairwise temporal relations between the classes.

(c) Contextual constraints are applied within a restricted time interval, contrary to most methods in the literature, which rely on the application of a time-evolving procedure (e.g. HMMs, dynamic programming techniques, etc.) to the whole video sequence. The latter methods are usually prone to cumulative errors or are significantly affected by the presence of noise in the data.
All the above characteristics enable the developed BN to outperform other generative and discriminative learning methods. Concerning motion information processing, a new representation for providing motion energy distribution-related information to HMMs is presented that:

(a) Supports the combined use of motion characteristics from the current and previous frames, in order to efficiently handle cases of semantic classes that present similar motion patterns over a period of time.

(b) Adopts a fine-grained motion representation, rather than being limited to e.g. the dominant global motion.

(c) Presents recognition rates comparable to those of the best-performing methods in the literature, while exhibiting computational complexity that is much lower than theirs and similar to that of considerably simpler and less well-performing techniques.
An overview of the proposed video semantic analysis approach is illustrated in Figure 1.
The paper is organized as follows: Section 2 presents an overview of the relevant literature. Section 3 describes the proposed new representation for providing motion information to HMMs, while Section 4 outlines the respective audio and color information processing. Section 5 details the proposed new joint fusion and temporal context exploitation framework. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented in Section 6, and conclusions are drawn in Section 7.
2. Related work
2.1. Machine learning for video analysis
The use of Machine Learning (ML) algorithms constitutes a robust methodology for modeling the complex relationships and interdependencies between the low-level audiovisual data and the perceptually higher-level semantic concepts. Among such algorithms, HMMs and BNs have been used extensively for video analysis tasks. In particular, HMMs have been distinguished due to their suitability for modeling pattern recognition problems that exhibit an inherent temporality [11]. Among others, they have been used for performing video temporal segmentation, semantic event detection, highlight extraction and video structure analysis (e.g. [12–14]). On the other hand, BNs constitute an efficient methodology for learning causal relationships and an effective representation for combining prior knowledge and data [15]. Additionally, their ability to handle situations of missing data has also been reported [16]. BNs have been utilized in video analysis tasks such as semantic concept detection, video segmentation and event detection (e.g. [17, 18]), to name a few. A review of machine learning-based methods for various video processing tasks can be found in [19]. Machine learning and other approaches specifically for modality fusion and temporal context exploitation towards semantic video analysis are discussed in the sequel.
2.2. Modality fusion and temporal context exploitation
Modality fusion aims at exploiting the correlations between data coming from different modalities to improve single-modality analysis results [6]. Bruno et al. introduce the notion of multimodal dissimilarity spaces for facilitating the retrieval of video documents [20]. Additionally, a subspace-based multimedia data mining framework is presented for semantic video analysis in [21], which makes use of audiovisual information. Hoi et al. propose a multimodal-multilevel ranking scheme for performing large-scale video retrieval [22]. Tjondronegoro et al. [23] propose a hybrid approach, which integrates statistics and domain knowledge into logical rule-based models, for highlight extraction in sports video based on audiovisual features. Moreover, Xu et al. [24] incorporate webcasting text in sports video analysis using a text-video alignment framework.
On the other hand, contextual knowledge, and specifically temporal-related contextual information, has been widely used in semantic video manipulation tasks, in order to overcome possible audiovisual information ambiguities. In [25], temporal consistency is defined with respect to semantic concepts and its implications for video analysis and retrieval are investigated. Additionally, Xu et al. [26] introduce an HMM-based framework for modeling temporal contextual constraints at different semantic granularities. Dynamic programming techniques are used for obtaining the maximum likelihood semantic interpretation of the video sequence in [27]. Moreover, Kongwah [28] utilizes story-level contextual cues for facilitating multimodal retrieval, while Hsu et al. [29] model video stories, in order to leverage the recurrent patterns and to improve video search performance.
While a plethora of advanced methods have already been proposed for either modality fusion or temporal context modeling, the possibility of jointly performing these two tasks has not been examined. The latter would allow the exploitation of all possible correlations and interdependencies between the respective data and consequently could further improve the recognition performance.
2.3. Motion representation for HMM-based analysis
A prerequisite for the application of any modality fusion or context exploitation technique is the appropriate and effective exploitation of the content's low-level properties, such as color, motion, etc., in order to facilitate the derivation of a first set of high-level semantic descriptions. In video analysis, the focus is on motion representation and exploitation, since the motion signal bears a significant portion of the semantic information that is present in a video sequence. Particularly for use together with HMMs, which have been widely applied in semantic video analysis tasks, a plurality of motion representations have been proposed. You et al. [30] utilize global motion characteristics for realizing video genre classification and event analysis. In [26], a set of motion filters is employed for estimating the frame dominant motion in an attempt to detect semantic events in various sports videos. Additionally, Huang et al. consider the first four dominant motions and simple statistics of the motion vectors in the frame, for performing scene classification [12]. In [31], particular camera motion types are used for the analysis of football video. Moreover, Gibert et al. estimate the principal motion direction of every frame [32], while Xie et al. calculate the motion intensity at frame level [27], for realizing sport video classification and structural analysis of soccer video, respectively. A common characteristic of all the above methods is that they rely on the extraction of coarse-grained motion features, which may perform sufficiently well in certain cases. On the other hand, in [33] a more elaborate motion representation is proposed, making use of higher-order statistics for providing local-level motion information to HMMs. This accomplishes increased recognition performance, at the expense of high computational complexity.
Although several motion representations have been proposed for use together with HMMs, the development of a fine-grained representation combining increased recognition rates with low computational complexity remains a significant challenge. Additionally, most of the already proposed methods make use of motion features extracted at individual frames, which is insufficient when considering video semantic classes that present similar motion patterns over a period of time. Hence, the potential of incorporating motion characteristics from previous frames into the currently examined one also needs to be investigated.
3. Motionbased analysis
HMMs are employed in this work for performing an initial association of each shot s_{ i }, i = 1, ..., I, of the examined video with one of the semantic classes of a set E = {e_{ j }}_{1≤j≤J}, based on motion information, as is typically the case in the relevant literature. Thus, each semantic class e_{ j } corresponds to a process that is to be modeled by an individual HMM, and the features extracted for every shot s_{ i } constitute the respective observation sequence [11]. For shot detection, the algorithm of [34] is used, mainly due to its low computational complexity.
According to HMM theory [11], the set of sequential observation vectors that constitute an observation sequence needs to be of fixed length and, simultaneously, of low dimensionality. The latter constraint helps avoid HMM under-training occurrences. Thus, compact and discriminative representations of motion features are required. Among the approaches that have already been proposed (Section 2.3), simple motion representations such as the frame dominant motion (e.g. [12, 27, 32]) have been shown to perform sufficiently well when considering semantic classes that present quite distinct motion patterns. For classes with more complex motion characteristics, such approaches have been shown to be significantly outperformed by methods exploiting fine-grained motion representations (e.g. [33]); however, the latter is achieved at the expense of increased computational complexity. Taking these considerations into account, a new method for motion information processing is proposed in this section. The proposed method makes use of fine-grained motion features, similarly to [33], to achieve superior performance, while having computational requirements that match those of much simpler and less well-performing approaches.
3.1. Motion preprocessing
For extracting the motion features, a set of frames is selected for each shot s_{ i }. This selection is performed using a constant temporal sampling frequency, denoted by SF_{ m }, starting from the first frame. The choice of starting the selection procedure from the first frame of each shot is made for simplicity purposes and in order to keep the computational complexity of the proposed approach low. Then, a dense motion field is computed for every selected frame, making use of the optical flow estimation algorithm of [35]. Subsequently, a motion energy field is calculated, according to the following equation:

M(u, v, t) = ‖\vec{V}(u, v, t)‖

where \vec{V}(u, v, t) is the estimated dense motion field, ‖·‖ denotes the norm of a vector and M(u, v, t) is the resulting motion energy field. Variables u and v take values in the ranges [1, V_{dim}] and [1, H_{dim}], respectively, where V_{dim} and H_{dim} are the vertical and horizontal dimensions of the motion field (equal to the corresponding frame dimensions in pixels). Variable t denotes the temporal order of the selected frames. The choice of transforming the motion vector field into an energy field is justified by the observation that the latter often provides more appropriate information for motion-based recognition problems [26, 33]. The estimated motion energy field M(u, v, t) is of high dimensionality. This slows down the video processing, while motion information at this level of detail is not always required for analysis purposes. Thus, it is subsequently downsampled, according to the following equation:

R(x, y, t) = M(x · V_{s}, y · H_{s}, t)

where R(x, y, t) is the estimated downsampled motion energy field of predetermined dimensions and H_{ s }, V_{ s } are the corresponding horizontal and vertical spatial sampling frequencies.
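As a minimal sketch of the two operations above, the following NumPy code computes a motion energy field from a dense flow field and downsamples it by stride sampling; the field sizes, the target dimension D = 16 and the stride-based sampling scheme are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def motion_energy_field(flow):
    """Motion energy field M(u, v, t): the per-pixel norm of the dense
    optical-flow field `flow`, given as a (V_dim, H_dim, 2) array."""
    return np.linalg.norm(flow, axis=-1)

def downsample_energy(M, D=16):
    """Downsample M to a predetermined D x D field R(x, y, t) by sampling
    at constant vertical/horizontal spatial frequencies V_s, H_s."""
    V_s = M.shape[0] // D   # vertical spatial sampling frequency
    H_s = M.shape[1] // D   # horizontal spatial sampling frequency
    return M[::V_s, ::H_s][:D, :D]

# toy example: a 64x48 random flow field standing in for real optical flow
rng = np.random.default_rng(0)
flow = rng.normal(size=(64, 48, 2))
M = motion_energy_field(flow)
R = downsample_energy(M, D=16)
print(M.shape, R.shape)  # (64, 48) (16, 16)
```

In practice the flow would come from the optical flow estimator of [35]; any dense estimator producing per-pixel vectors fits this interface.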
3.2. Polynomial approximation
The computed downsampled motion energy field R(x, y, t), which is estimated for every selected frame, actually represents a motion energy distribution surface and is approximated by a 2D polynomial function of the following form:

F(μ, ν) = Σ_{γ+δ ≤ T} β_{γδ} · (μ − μ_{0})^{γ} · (ν − ν_{0})^{δ}

where T is the order of the function, β_{γδ} are its coefficients and μ_{0}, ν_{0} are defined as {\mu}_{0}={\nu}_{0}=\frac{D}{2}, D denoting the dimension of the downsampled field. The approximation is performed using the least-squares method.
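The least-squares fit of such a 2D polynomial to a downsampled field can be sketched as follows; the order T = 3 and the monomial basis with γ + δ ≤ T are assumptions made for illustration:

```python
import numpy as np

def fit_poly2d(R, T=3):
    """Least-squares fit of a 2-D polynomial of order T to the downsampled
    energy field R; returns the coefficients beta_{gamma,delta} for all
    terms with gamma + delta <= T, centred at mu0 = nu0 = D / 2."""
    D = R.shape[0]
    mu0 = nu0 = D / 2.0
    mu, nu = np.meshgrid(np.arange(D) - mu0, np.arange(D) - nu0, indexing="ij")
    # one design-matrix column per monomial (mu - mu0)^g * (nu - nu0)^d
    terms = [(mu ** g * nu ** d).ravel()
             for g in range(T + 1) for d in range(T + 1 - g)]
    A = np.stack(terms, axis=1)
    beta, *_ = np.linalg.lstsq(A, R.ravel(), rcond=None)
    return beta   # these coefficients form the per-frame observation vector

R = np.ones((16, 16))        # toy constant field
beta = fit_poly2d(R, T=3)
print(beta.shape)            # 10 coefficients for T = 3
```

For a constant field, only the constant term is non-zero, which is a quick sanity check on the fit.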
The polynomial coefficients, which are calculated for every selected frame, are used to form an observation vector. The observation vectors computed for each shot s_{ i } are utilized to form an observation sequence, namely the shot's motion observation sequence. This observation sequence is denoted by O{S}_{i}^{m}, where superscript m stands for motion. Then, a set of J HMMs can be directly employed, where an individual HMM is introduced for every defined semantic class e_{ j } , in order to perform the shotclass association based on motion information. Every HMM receives as input the aforementioned motion observation sequence O{S}_{i}^{m} for each shot s_{ i } and at the evaluation stage returns a posterior probability, denoted by {h}_{ij}^{m}=P\left({e}_{j}\mid O{S}_{i}^{m}\right). This probability, which represents the observation sequence's fitness to the particular HMM, indicates the degree of confidence with which class e_{ j } is associated with shot s_{ i } based on motion information. HMM implementation details are discussed in the experimental results section.
3.3. Accumulated motion energy field computation
Motion characteristics at a single frame may not always provide an adequate amount of information for discovering the underlying semantics of the examined video sequence, since different classes may present similar motion patterns over a period of time. This fact generally hinders the identification of the correct semantic class through the examination of motion features at distinct sequentially selected frames. To overcome this problem, the motion representation described in the previous subsection is appropriately extended to incorporate motion energy distribution information from previous frames as well. This results in the generation of an accumulated motion energy field.
Starting from the calculated motion energy fields M(u, v, t) (Equation (2)), for each selected frame an accumulated motion energy distribution field is formed according to the following equation:

M_{acc}(u, v, t, τ) = Σ_{q=0}^{τ} w(q) · M(u, v, t − q)
where t is the current frame, τ denotes previously selected frames and w(τ) is a time-dependent normalization factor that receives different values for every previous frame. Among other possible realizations, the normalization factor w(τ) is modeled by the following time-descending function:
As can be seen from Equation (5), the accumulated motion energy distribution field takes into account motion information from previous frames. In particular, it gradually adds motion information from previous frames to the currently examined one, with decreasing importance. The respective downsampled accumulated motion energy field is denoted by R_{acc}(x, y, t, τ) and is calculated similarly to Equation (2), using M_{acc}(u, v, t, τ) instead of M(u, v, t). An example of computing the accumulated motion energy fields for two tennis shots, belonging to the break and serve classes respectively, is illustrated in Figure 2. As can be seen from this example, the incorporation of motion information from previous frames (τ = 1, 2) causes the resulting M_{acc}(u, v, t, τ) fields to present significant dissimilarities with respect to the motion energy distribution, compared to the case where no motion information from previous frames (τ = 0) is taken into account. These dissimilarities are more intense in the second case (τ = 2), and they can facilitate the discrimination between these two semantic classes.
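The accumulation step can be sketched as follows; note that the specific time-descending function w(τ) used in this work is not reproduced here, so the weight 1/(q + 1) below is only an illustrative choice:

```python
import numpy as np

def accumulated_energy(fields, tau, w=lambda q: 1.0 / (q + 1)):
    """Accumulated motion energy field M_acc for the last frame in
    `fields`, adding the tau previous motion energy fields with a
    descending weight w(q); w is an illustrative assumption."""
    t = len(fields) - 1
    acc = np.zeros_like(fields[t])
    for q in range(min(tau, t) + 1):
        acc += w(q) * fields[t - q]   # the current frame gets the largest weight
    return acc

fields = [np.full((4, 4), float(k)) for k in range(3)]  # toy M(u, v, t)
print(accumulated_energy(fields, tau=2)[0, 0])  # 1*2 + (1/2)*1 + (1/3)*0 = 2.5
```

Setting tau = 0 recovers the plain per-frame energy field, matching the τ = 0 case of the tennis example in Figure 2.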
During the estimation of the M_{acc}(u, v, t, τ) fields, motion energy values from neighboring frames at the same position are accumulated, as described above. These values may originate from object motion, camera motion or both. Inevitably, when intense camera motion is present, it will dominate any possible movement of the objects in the scene. For example, during a rally event in a volleyball video, sudden and extensive camera motion is observed when the ball is transferred from one side of the court to the other. This camera motion supersedes any action of the players during that period. Under the proposed approach, the presence of camera motion is considered to be part of the motion pattern of the respective semantic class. In other words, for the aforementioned example it is considered that the motion pattern of the rally event comprises relatively small player movements that are periodically interrupted by intense camera motions (i.e. when a team's offence incident occurs). The latter consideration constitutes the typical case in the literature [12, 26, 27].
Since the downsampled accumulated motion energy field, R_{acc}(x, y, t, τ), is computed for every selected frame, a procedure similar to the one described in Section 3.2 is followed for providing motion information to the respective HMM structure and realizing shotclass association based on motion features. The difference is that now the accumulated energy fields, R_{acc}(x, y, t, τ), are used during the polynomial approximation process, instead of the motion energy fields, R(x, y, t).
3.4. Discussion
In the authors' previous work [33], motion field estimation by means of optical flow was initially performed for all frames of each video shot. Then, the kurtosis of the optical flow motion estimates at each pixel was calculated for identifying which motion values originate from true motion rather than measurement noise. For the pixels where only true motion was observed, energy distribution-related information, as well as a complementary set of features that highlight particular spatial attributes of the motion signal, were extracted. For modeling the energy distribution-related information, the polynomial approximation method also described in Section 3.2 was followed. Although this local-level representation of the motion signal was shown to significantly outperform previous approaches that relied mainly on global- or camera-level representations, this was accomplished at the expense of increased computational complexity. The latter was caused by: (a) the need to process all frames of every shot, and (b) the need to calculate higher-order statistics from them and compute additional features.
The aim of the approach proposed in this work is to overcome the aforementioned limitations in terms of computational complexity, while also maintaining increased recognition performance. To achieve this, the polynomial approximation that models motion information is directly applied to the downsampled accumulated motion energy fields R_{acc}(x, y, t, τ). These are estimated for only a limited number of frames, i.e. those selected at a constant temporal sampling frequency (SF_{ m }). This choice alleviates both the need for processing all frames of each shot and the need for computationally expensive statistical and other feature calculations. The resulting method is shown by experimentation to be comparable with simpler motion representation approaches [12, 27, 32] in terms of computational complexity, while maintaining a recognition performance similar to that of [33].
4. Color and audiobased analysis
For the color and audio information processing, common techniques from the relevant literature are adopted. In particular, a set of global-level color histograms of F_{ c } bins in the RGB color space [36] is estimated at equally spaced time intervals for each shot s_{ i }, starting from the first frame; the corresponding temporal sampling frequency is denoted by SF_{ c }. The aforementioned set of color histograms is normalized in the interval [-1, 1] and subsequently utilized to form a corresponding observation sequence, namely the color observation sequence, which is denoted by O{S}_{i}^{c}. Similarly to the motion analysis case, a set of J HMMs is employed, in order to realize the association of the examined shot s_{ i } with the defined classes e_{ j } based solely on color information. At the evaluation stage, each HMM returns a posterior probability, which is denoted by {h}_{ij}^{c}=P\left({e}_{j}\mid O{S}_{i}^{c}\right) and indicates the degree of confidence with which class e_{ j } is associated with shot s_{ i }. On the other hand, the widely used Mel Frequency Cepstral Coefficients (MFCC) are utilized for the audio information processing [37]. In the relevant literature, apart from the MFCC coefficients, other features that highlight particular attributes of the audio signal have also been used for HMM-based audio analysis (such as the standard deviation of the zero-crossing rate [12], the pitch period [38], the short-time energy [39], etc.). However, the selection of these individual features is in principle performed heuristically, and the efficiency of each of them has only been demonstrated in specific application cases. On the contrary, the MFCC coefficients provide a more complete representation of the audio characteristics, and their efficiency has been proven in numerous and diverse application domains [40–44].
Taking into account the aforementioned facts, while also considering that this work aims at adopting common techniques of the literature for realizing generic audio-based shot classification, only the MFCC coefficients are considered in the proposed analysis framework. More specifically, F_{ a } MFCC coefficients are estimated at a sampling rate of SF_{ a }, while for their extraction a sliding window of length F_{ w } is used. The set of MFCC coefficients calculated for shot s_{ i } serves as the shot's audio observation sequence, denoted by O{S}_{i}^{a}. Similarly to the motion and color analysis cases, a set of J HMMs is introduced. The estimated posterior probability, denoted by {h}_{ij}^{a}=P\left({e}_{j}\mid O{S}_{i}^{a}\right), indicates this time the degree of confidence with which class e_{ j } is associated with shot s_{ i } based solely on audio information. It must be noted that a set of annotated video content, denoted by {U}_{tr}^{1}, is used for training the developed HMM structure. Using this, the constructed HMMs acquire the appropriate implicit knowledge that enables the mapping of the low-level audiovisual data to the defined high-level semantic classes, separately for every modality.
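The construction of a single color observation vector can be sketched as follows; interpreting "F_c bins in the RGB color space" as one F_c-bin histogram per channel is an assumption made here for illustration, as is the max-based normalization to [-1, 1]:

```python
import numpy as np

def color_observation(frame, F_c=8):
    """Global color histogram of F_c bins per RGB channel for one sampled
    frame, normalized to [-1, 1]; one such vector is computed per sampled
    frame to build the color observation sequence of a shot."""
    hists = [np.histogram(frame[..., ch], bins=F_c, range=(0, 256))[0]
             for ch in range(3)]
    h = np.concatenate(hists).astype(float)
    h /= h.max()                # scale to [0, 1] ...
    return 2.0 * h - 1.0        # ... then map linearly to [-1, 1]

frame = np.zeros((32, 32, 3), dtype=np.uint8)   # toy all-black frame
obs = color_observation(frame)
print(obs.shape, obs.min(), obs.max())
```

Stacking one such vector per frame sampled at SF_c yields the observation sequence fed to the color HMMs.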
5. Joint modality fusion and temporal context exploitation
Graphical models constitute an efficient methodology for learning and representing complex probabilistic relationships among a set of random variables [45]. BNs are a specific type of graphical model that is particularly suitable for learning causal relationships [15]. To this end, BNs are employed in this work for probabilistically learning the complex relationships and interdependencies that are present among the audiovisual data. Additionally, their ability to learn causal relationships is exploited for acquiring and modeling temporal contextual information. In particular, an integrated BN is proposed for jointly performing modality fusion and temporal context exploitation. A key part of the latter is the definition of an appropriate and expandable network structure. The developed structure enables contextual knowledge acquisition, in the form of temporal relations among the supported high-level semantic classes, and the incorporation of information from different sources. For that purpose, a series of sub-network structures, which are integrated into the overall network, are defined. The individual components of the developed framework are detailed in the sequel.
5.1. Modality fusion
A BN structure is initially defined for performing the fusion of the computed single-modality analysis results. Subsequently, a set of J such structures is introduced, one for every defined class e_{ j }. The first step in the development of any BN is the identification and definition of the random variables that are of interest for the given application. For the task of modality fusion, the following random variables are defined: (a) variable CL_{ j }, which corresponds to the semantic class e_{ j } with which the particular BN structure is associated, and (b) variables A_{ j }, C_{ j } and M_{ j }, where an individual variable is introduced for every considered modality. More specifically, random variable CL_{ j } denotes the fact of assigning class e_{ j } to the examined shot s_{ i }. Additionally, variables A_{ j }, C_{ j } and M_{ j } represent the initial shot-class association results computed for shot s_{ i } from every separate modality processing for the particular class e_{ j }, i.e. the values of the estimated posterior probabilities {h}_{ij}^{a}, {h}_{ij}^{c} and {h}_{ij}^{m} (Sections 3 and 4). Subsequently, the space of every introduced random variable, i.e. the set of possible values that it can receive, needs to be defined. In the presented work, discrete BNs are employed, i.e. each random variable can receive only a finite number of mutually exclusive and exhaustive values. This choice is based on the fact that discrete-space BNs are less prone to under-training occurrences compared to continuous-space ones [16]. Hence, the set of values that variable CL_{ j } can receive is chosen equal to {cl_{j 1}, cl_{j 2}} = {True, False}, where True denotes the assignment of class e_{ j } to shot s_{ i } and False the opposite. On the other hand, a discretization step is applied to the estimated posterior probabilities {h}_{ij}^{a}, {h}_{ij}^{c} and {h}_{ij}^{m} for defining the spaces of variables A_{ j }, C_{ j } and M_{ j }, respectively.
The aim of the selected discretization procedure is to compute a close to uniform discrete distribution for each of the aforementioned random variables. This was experimentally shown to better facilitate BN inference, compared to discretization with a constant step or other common discrete distributions, such as the Gaussian and Poisson ones.
The discretization is defined as follows: a set of annotated video content, denoted by {U}_{tr}^{2}, is initially formed and the singlemodality shotclass association results are computed for each shot. Then, the estimated posterior probabilities are grouped with respect to every possible classmodality combination. This results in the formulation of sets {L}_{j}^{b}={\left\{{h}_{nj}^{b}\right\}}_{1\le n\le N}, where b ∈ {a, c, m}≡ {audio, color, motion} is the modality used and N is the number of shots in {U}_{tr}^{2}. Consequently, the elements of the aforementioned sets are sorted in ascending order, and the resulting sets are denoted by {\u0139}_{j}^{b}. If Q denotes the number of possible values of every corresponding random variable, these are defined according to the following equations:
where K=\lfloor \frac{N}{Q}\rfloor, {\hat{L}}_{j}^{b}\left(o\right) denotes the o-th element of the ascending-sorted set {\hat{L}}_{j}^{b}, and b_{j 1}, b_{j 2}, ..., b_{ jQ } denote the values of variable B_{ j } (B ∈ {A, C, M}). From the above equations, it can be seen that although the number of possible values for all random variables B_{ j } is equal to Q, the corresponding posterior probability ranges with which they are associated are generally different.
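The equal-frequency binning described above can be sketched as follows (a minimal Python illustration with hypothetical function names, not the authors' implementation): the sorted training posteriors for one class/modality pair are split into Q bins containing roughly K = ⌊N/Q⌋ elements each, and new posteriors are then mapped to bin indices.

```python
def equal_frequency_bins(posteriors, Q):
    """Return the Q-1 inner bin boundaries derived from the training posteriors."""
    ordered = sorted(posteriors)          # the ascending-sorted set of the paper
    K = len(ordered) // Q                 # roughly K training values per bin
    return [ordered[q * K] for q in range(1, Q)]

def discretize(h, boundaries):
    """Map a posterior probability h to the index of its bin (0 .. Q-1)."""
    for q, bound in enumerate(boundaries):
        if h < bound:
            return q
    return len(boundaries)
```

Because the boundaries are placed at equal-frequency positions of the training data rather than at equal-width steps, each discrete value is observed with roughly the same probability, which is the near-uniform distribution the text argues for.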
The next step in the development of this BN structure is to define a Directed Acyclic Graph (DAG), which represents the causality relations among the introduced random variables. In particular, it is assumed that each of the variables A_{ j } , C_{ j } and M_{ j } is conditionally independent of the remaining ones given CL_{ j } . In other words, it is considered that the semantic class, to which a video shot belongs, fully determines the features observed with respect to every modality. This assumption is typical in the relevant literature [17, 46] and it is formalized as follows:
where Ip(.) stands for statistical independence. Based on this assumption, the following condition is derived, with respect to the conditional probability distribution of the defined random variables:
where P(.) denotes the probability distribution of a random variable, and a_{ j } , c_{ j } , m_{ j } and cl_{ j } denote values of the variables A_{ j } , C_{ j } , M_{ j } and CL_{ j } , respectively. The corresponding DAG, denoted by {G}_{j}, that incorporates the conditional independence assumptions expressed by Equation (7) is illustrated in Figure 3a. As can be seen from this figure, variable CL_{ j } corresponds to the parent node of {G}_{j}, while variables A_{ j } , C_{ j } and M_{ j } are associated with children nodes of the former. It must be noted that the direction of the arcs in {G}_{j} defines explicitly the causal relationships among the defined variables.
From the causal DAG depicted in Figure 3a and the conditional independence assumption stated in Equation (8), the conditional probability P(cl_{ j } | a_{ j } , c_{ j } , m_{ j } ) can be estimated. This represents the probability of assigning class e_{ j } to shot s_{ i } given the initial single-modality shot-class association results, and it can be calculated as follows:
From the above equation, it can be seen that the proposed BN-based fusion mechanism manages to adaptively learn the impact of every utilized modality on the detection of each supported semantic class. In particular, it assigns a variable degree of significance to every single-modality analysis value (i.e. values a_{ j } , c_{ j } and m_{ j } ) by calculating the conditional probabilities P(a_{ j } | cl_{ j } ), P(c_{ j } | cl_{ j } ) and P(m_{ j } | cl_{ j } ) during training, instead of determining a unique impact factor for every modality.
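As a toy illustration of the fusion rule above (all numbers are invented; the real conditional probability tables are learned from the training content during training), the posterior of class e_{ j } can be computed by a direct application of Bayes' rule over the naive structure of Figure 3a:

```python
def fuse(prior, p_a, p_c, p_m, a, c, m):
    """Return P(CL_j = cl | a, c, m) for cl in {True, False}."""
    joint = {cl: prior[cl] * p_a[cl][a] * p_c[cl][c] * p_m[cl][m]
             for cl in (True, False)}
    norm = sum(joint.values())
    return {cl: p / norm for cl, p in joint.items()}

# Hypothetical learned tables; a, c, m are discretized bin indices (here 0 or 1).
prior = {True: 0.3, False: 0.7}                            # P(CL_j)
p_a = {True: {0: 0.1, 1: 0.9}, False: {0: 0.8, 1: 0.2}}    # P(A_j | CL_j)
p_c = {True: {0: 0.2, 1: 0.8}, False: {0: 0.6, 1: 0.4}}    # P(C_j | CL_j)
p_m = {True: {0: 0.3, 1: 0.7}, False: {0: 0.5, 1: 0.5}}    # P(M_j | CL_j)

posterior = fuse(prior, p_a, p_c, p_m, a=1, c=1, m=1)
```

Note how each modality contributes through its own conditional table rather than through a single fixed weight, which is exactly the adaptive behavior the text describes.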
5.2. Temporal context exploitation
Besides multimodal information, contextual information can also contribute towards improved shot-class association performance. In this work, temporal contextual information in the form of temporal relations among the different semantic classes is exploited. This choice is based on the observation that the classes of a particular domain often tend to occur according to a specific order in time. For example, a shot belonging to the class 'rally' in a tennis domain video is more likely to be followed by a shot depicting a 'break' incident, rather than a 'serve' one. Thus, information about the classes' occurrence order can serve as a set of constraints denoting their 'allowed' temporal succession. Since BNs constitute a robust solution to probabilistically learning causality relationships, as described in the beginning of Section 5, another BN structure is developed for acquiring and modeling this type of contextual information. Although other methods that utilize the same type of temporal contextual information have already been proposed, the presented method includes several novelties and advantageous characteristics: (a) it encompasses a probabilistic approach for automatically acquiring and representing complex contextual information after a training procedure is applied, instead of defining a set of heuristic rules tailored to a particular application case [47], and (b) contextual constraints are applied within a restricted time interval, i.e. parsing the structure of the whole video sequence is not required for reaching good recognition results, as opposed to e.g. the approaches of [12, 26].
Under the proposed approach, an appropriate BN structure is constructed for supporting the acquisition and the subsequent enforcement of temporal contextual constraints. This structure enables the BN inference to take into account shot-class association related information for every shot s_{ i } , as well as for all its neighboring shots that lie within a certain time window, for deciding upon the class that is eventually associated with shot s_{ i } . For achieving this, an appropriate set of random variables is defined, similarly to the case of the development of the BN structure used for modality fusion in Section 5.1. Specifically, the following random variables are defined: (a) a set of J variables, one for every defined class e_{ j } , which are denoted by C{L}_{j}^{i}; these variables represent the classes that are eventually associated with shot s_{ i } , after the temporal context exploitation procedure is performed, and (b) two sets of J · TW variables denoted by C{L}_{j}^{i-r} and C{L}_{j}^{i+r}, which denote the shot-class associations of previous and subsequent shots, respectively; r ∈ [1, TW], where TW denotes the length of the aforementioned time window, i.e. the number of previous and following shots whose shot-class association results will be taken into account for reaching the final class assignment decision for shot s_{ i } . Altogether, the aforementioned variables will be denoted by C{L}_{j}^{k}, where i - TW ≤ k ≤ i + TW. Regarding the set of possible values for each of the aforementioned random variables, this is chosen equal to \left\{c{l}_{j1}^{k},c{l}_{j2}^{k}\right\}=\left\{True,False\right\}, where True denotes the association of class e_{ j } with the corresponding shot and False the opposite.
The next step in the development of this BN structure is the identification of the causality relations among the defined random variables and the construction of the respective DAG, which represents these relations. For identifying the causality relations, the definition of causation based on the concept of manipulation is adopted [15]. The latter states that for a given pair of random variables, namely X and Y, variable X has a causal influence on Y if a manipulation of the values of X leads to a change in the probability distribution of Y. Making use of the aforementioned definition of causation, it can be easily observed that each defined variable C{L}_{j}^{i} has a causal influence on every following variable C{L}_{j}^{i+1},\phantom{\rule{2.77695pt}{0ex}}\forall j. This can be better demonstrated by the following example: suppose that for a given volleyball game video, it is known that a particular shot belongs to the class 'serve'. Then, the subsequent shot is more likely to depict a 'rally' instance rather than a 'replay' one. Additionally, from the extension of the aforementioned example, it can be inferred that any variable C{L}_{j}^{{i}_{1}} has a causal influence on variable C{L}_{j}^{{i}_{2}} for i_{1} < i_{2}. However, for constructing a causal DAG, only the direct causal relations among the corresponding random variables must be defined [15]. To this end, only the causal relations between variables C{L}_{j}^{{i}_{1}} and C{L}_{j}^{{i}_{2}}, ∀j, and for i_{2} = i_{1} + 1, are included in the developed DAG, since any other variable C{L}_{j}^{{i}_{1}} is correlated with C{L}_{j}^{{i}_{2}}, where i_{1} + 1 < i_{2}, transitively through the variables C{L}_{j}^{{i}_{3}}, for i_{1} < i_{3} < i_{2}. Taking into account all the aforementioned considerations, the causal DAG {G}_{c} illustrated in Figure 3b is defined.
Regarding the definition of the causality relations, it can be observed that the following three conditions are satisfied for {G}_{c}: (a) there are no hidden common causes among the defined variables, (b) there are no causal feedback loops, and (c) selection bias is not present, as demonstrated by the aforementioned example. As a consequence, the causal Markov assumption is warranted to hold. Additionally, a BN can be constructed from the causal DAG {G}_{c} and the joint probability distribution of its random variables satisfies the Markov condition with {G}_{c}[15].
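A minimal construction of {G}_{c} along these lines (the node-name convention is an assumption made for illustration, not taken from the paper): only the direct edges C{L}_{j}^{k} → C{L}_{j}^{k+1} are inserted, so the graph is acyclic by construction, and longer-range dependencies are carried transitively through the intermediate nodes.

```python
def temporal_chain_edges(i, TW, J):
    """Direct causal edges of G_c for shot index i and time-window length TW."""
    edges = []
    for j in range(J):                      # one chain per semantic class e_j
        for k in range(i - TW, i + TW):     # both k and k+1 lie inside the window
            edges.append((f"CL_{j}^{k}", f"CL_{j}^{k + 1}"))
    return edges
```

For example, i = 5, TW = 2 and J = 4 yields 4 · (2 · 2) = 16 edges: one chain of 2 · TW consecutive edges per class, with no feedback loops since the shot index strictly increases along every edge.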
5.3. Integration of modality fusion and temporal context exploitation
Having developed the causal DAGs {G}_{c}, used for temporal context exploitation, and {G}_{j}, utilized for modality fusion, the next step is to construct an integrated BN structure for jointly performing modality fusion and temporal context exploitation. This is achieved by replacing each of the nodes that correspond to variables C{L}_{j}^{k} in {G}_{c} with the appropriate {G}_{j}, using j as selection criterion and ensuring that the parent node of {G}_{j} takes the position of the respective node in {G}_{c}. Thus, the resulting overall BN structure, denoted by G, comprises a set of substructures integrated into the DAG depicted in Figure 3b. This overall structure encodes both cross-modal and temporal relations among the supported semantic classes. Moreover, for the integrated causal DAG G, the causal Markov assumption is warranted to hold, as described above. To this end, the joint probability distribution of the random variables that are included in G, which is denoted by P_{joint} and satisfies the Markov condition with G, can be defined. The latter condition states that every random variable X that corresponds to a node in G is conditionally independent of the set of all variables that correspond to its non-descendent nodes, given the set of all variables that correspond to its parent nodes [15]. For a given node X, the set of its non-descendent nodes comprises all nodes with which X is not connected through a path in G, starting from X. Hence, the Markov condition is formalized as follows:
where ND_{ X } denotes the set of variables that correspond to the non-descendent nodes of X and PA_{ X } the set of variables that correspond to its parent nodes. Based on the condition stated in Equation (10), P_{joint} is equal to the product of the conditional probability distributions of the random variables in G given the variables that correspond to the parent nodes of the former, and is represented by the following equations:
where {a}_{j}^{k}, {c}_{j}^{k} and {m}_{j}^{k} are the values of the variables {A}_{j}^{k}, {C}_{j}^{k} and {M}_{j}^{k}, respectively. The pair \left(G,{P}_{\mathsf{\text{joint}}}\right), which satisfies the Markov condition as already described, constitutes the developed integrated BN.
Regarding the training process of the integrated BN, the set of all conditional probabilities among the defined conditionally-dependent random variables of G, which are also reported in Equation (11), are estimated. For this purpose, the set of annotated video content {U}_{tr}^{2}, which was also used in Section 5.1 for input variable discretization, is utilized. At the evaluation stage, the integrated BN receives as input the single-modality shot-class association results of all shots that lie within the time window TW defined for shot s_{ i } , i.e. the set of values {W}_{i}={\left\{{a}_{j}^{k},{c}_{j}^{k},{m}_{j}^{k}\right\}}_{1\le j\le J}^{i-TW\le k\le i+TW} defined in Equation (11). These constitute the so-called evidence data that a BN requires for performing inference. Then, the BN estimates the following set of posterior probabilities (degrees of belief), making use of all the precomputed conditional probabilities and the defined local independencies among the random variables of G: P\left(C{L}_{j}^{i}=True\mid {W}_{i}\right), for 1 ≤ j ≤ J. Each of these probabilities indicates the degree of confidence, denoted by {h}_{ij}^{f}, with which class e_{ j } is associated with shot s_{ i } .
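To make the inference step concrete, the following sketch (toy numbers, a single class, TW = 1, and one modality per shot for brevity; every probability value is invented for illustration) computes P(CL^i = True | W_i) by brute-force enumeration of the joint distribution, following the chain-with-observations factorization that the Markov condition implies:

```python
from itertools import product

PRIOR = {True: 0.5, False: 0.5}                 # P(CL^{i-1})
TRANS = {True: {True: 0.7, False: 0.3},         # P(CL^k | CL^{k-1})
         False: {True: 0.2, False: 0.8}}
OBS = {True: {1: 0.8, 0: 0.2},                  # P(A^k | CL^k), evidence A^k in {0, 1}
       False: {1: 0.3, 0: 0.7}}

def posterior_middle(a_prev, a_cur, a_next):
    """P(CL^i = True | evidence) over the chain CL^{i-1} -> CL^i -> CL^{i+1}."""
    weight = {True: 0.0, False: 0.0}
    for c_prev, c_cur, c_next in product((True, False), repeat=3):
        # P_joint as the product of each variable's conditional given its parent
        p = (PRIOR[c_prev] * OBS[c_prev][a_prev]
             * TRANS[c_prev][c_cur] * OBS[c_cur][a_cur]
             * TRANS[c_cur][c_next] * OBS[c_next][a_next])
        weight[c_cur] += p
    return weight[True] / (weight[True] + weight[False])
```

Note that the decision for the middle shot is driven by the evidence of all shots in the window at once, which is the behavior contrasted with HMM decoding in the discussion below; a production system would use a proper junction-tree engine rather than enumeration.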
5.4. Discussion
Dynamic Bayesian Networks (DBNs), and in particular HMMs, have been widely used in semantic video analysis tasks due to their suitability for modeling pattern recognition problems that exhibit an inherent temporality (Section 2.1). Regardless of the considered analysis task, significant weaknesses that HMMs present have been highlighted in the literature. In particular: (a) Standard HMMs have been shown not to be adequately efficient in modeling long-term temporal dependencies in the data that they receive as input [48]. This is mainly due to their state transition distribution, which obeys the Markov assumption, i.e. the current state that an HMM lies in depends only on its previous state. (b) HMMs rely on the Viterbi algorithm during the decoding procedure, i.e. during the estimation of the most likely sequence of states that generates the observed data. The resulting Viterbi sequence usually represents only a small fraction of the total probability mass, with many other state sequences potentially having nearly equal likelihoods [49]. As a consequence, the Viterbi alignment is rather sensitive to the presence of noise in the input data, i.e. it may easily be misguided.
In order to overcome the limitations imposed by the traditional HMM theory, a series of improvements and modifications have been proposed. Among the most widely adopted ones is the concept of Hierarchical HMMs (HHMMs) [50]. These make use of HMMs at different levels, in order to model data at different time scales, hence aiming at efficiently capturing and modeling long-term relations in the input data. However, this results in a significant increase of the parameter space, and as a consequence HHMMs suffer from the problem of overfitting and require large amounts of data for training [48]. To this end, Layered HMMs (LHMMs) have been proposed [51] for increasing the robustness to overfitting occurrences, by reducing the size of the parameter space. LHMMs can be considered as a variant of HHMMs, where each layer of HMMs is trained independently and the inferential results from each layer serve as training data for the layer above. Although LHMMs are advantageous in terms of robustness to under-training occurrences compared to HHMMs, this attribute is accompanied by reduced efficiency in modeling long-term temporal relationships in the data. While both HHMMs and LHMMs have been experimentally shown to generally outperform the traditional HMMs, provided that the requirements concerning their application are met, their efficiency still depends heavily on the corresponding generalized Viterbi algorithm; hence, they do not fully overcome the limitations of standard HMMs.
Regarding the integrated BN developed in this work, on the other hand, a fixed time window of predetermined length is used with respect to each shot s_{ i } . This window denotes the number of previous and following shots whose shot-class association results (coming from all considered modalities) are taken into account for reaching the final class assignment decision for shot s_{ i } . Hence, the resulting BN is capable of modeling complex and long-term temporal relationships among the supported semantic classes in a time interval equal to the defined time window, as can be seen from term P_{2} in Equation (11). This advantageous characteristic significantly differentiates the proposed BN from HMM-based approaches (including both HHMMs and LHMMs). The latter take into account information about only the previous state ω_{t-1} for estimating the current state ω_{ t } of the examined stochastic process [11]. Furthermore, the final class association decision is reached independently for each shot s_{ i } , while taking into account the evidence data W_{ i } defined for it, rather than being dependent upon the final class association decision reached for shot s_{i-1}. More specifically, the set of posterior probabilities P\left(C{L}_{j}^{i}=True\mid {W}_{i}\right) (for 1 ≤ j ≤ J), which are estimated after performing the proposed BN inference for shot s_{ i } (as described in Section 5.3), are computed without being affected by the calculation of the respective probabilities P\left(C{L}_{j}^{i-1}=True\mid {W}_{i-1}\right) estimated for shot s_{i-1}. To this end, the detrimental effects caused by the presence of noise in the input data are reduced, since evidence over a series of consecutive shots is examined in order to decide on the final class assignment for shot s_{ i } . At the same time, propagation of errors caused by noise to following shots (e.g. shots s_{i+1}, s_{i+2}, etc.) is prevented.
On the other hand, HMM-based systems rely on the fundamental principle that for estimating the current state ω_{ t } of the system, information about only its previous state ω_{t-1} is considered, thus rendering the HMM decoding procedure rather sensitive to the presence of noise and likely to be misguided. Taking into account the aforementioned considerations, the proposed integrated BN is expected to outperform other similar HMM-based approaches of the literature, as will be experimentally shown in Section 6.
6. Experimental results
The proposed approach was experimentally evaluated and compared with literature approaches using videos of the tennis, news and volleyball broadcast domains. The selection of these application domains is made mainly due to the following characteristics that the videos of the aforementioned categories present: (a) a set of meaningful high-level semantic classes, whose detection often requires the use of multimodal information, is present in such videos, and (b) videos belonging to these domains present a relatively well-defined temporal structure, i.e. the semantic classes that they contain tend to occur according to a particular order in time. In addition, the semantic analysis of such videos remains a challenging problem, which makes them suitable for the evaluation and comparison of relevant techniques. It should be emphasized here that the application of the proposed method to any other domain, where an appropriate set of semantic classes that tend to occur according to particular temporal patterns can be defined, is straightforward, i.e. no domain-specific algorithmic modifications or adaptations are needed. In particular, only a set of manually annotated video content is required by the employed HMMs and BNs for parameter learning.
6.1. Datasets
For experimentation in the domain of tennis broadcast video, four semantic classes of interest were defined, coinciding with four high-level semantic events that typically dominate a broadcast game. These are: (a) rally: when the actual game is played, (b) serve: the event that starts when the player bounces the ball on the ground while preparing to serve, and finishes when the player performs the serve hit, (c) replay: when a particular incident of increased importance is broadcast again, usually in slow motion, and (d) break: when a break in the game occurs, i.e. the actual game is interrupted and the camera may show the players resting, the audience, etc. For the news domain, the following classes were defined: (a) anchor: when the anchor person announces the news in a studio environment, (b) reporting: when live reporting takes place or a speech/interview is broadcast, (c) reportage: comprising the displayed scenes, either indoors or outdoors, relevant to every broadcast news item, and (d) graphics: when any kind of graphics is depicted in the video sequence, including news start/end signals, maps, tables or text scenes. Finally, for experimentation in the domain of volleyball broadcast video, two sets of semantic classes were defined. The first one comprises the same semantic classes defined for the tennis domain (volleyball-I), while for the second set (volleyball-II) the following nine classes are defined: rally, ace, serve, serve preparation, replay, player celebration, tracking single player, face close-up and tracking multiple players. The semantic classes defined for the volleyball-II domain are generally subclasses of the corresponding ones defined for the volleyball-I domain.
Following the definition of the semantic classes of interest, an appropriate set of videos was collected for every selected domain. Each video was temporally segmented using the algorithm of [34] and every resulting shot was manually annotated according to the respective class definitions. Then, the aforementioned videos were used to form the following content sets for each domain: training set {U}_{tr}^{1} (used for training the developed HMM structure), training set {U}_{tr}^{2} (utilized for training the integrated BN) and test set U_{ te } (used for evaluation). Detailed descriptions of these datasets, which constitute extensions of the datasets used in [33], are given in Table 1. Additionally, the annotations and features for each dataset are publicly available ^{a} .
Due to the large quantity and significant diversity of the reallife videos that were collected for each domain, the risk of overtraining (i.e., of classifier overfitting) was considered to be low in our experiments. This assumption is reinforced by the fact that the proposed approach achieves high recognition rates on 4 datasets of diverse nature and varying complexity, while also outperforming other common techniques of the literature, as shown in the following sections. Based on this, only typical methodologies for avoiding overfitting occurrences and maintaining high generalization ability were considered in this work (e.g. selecting appropriate training algorithms for the employed ML models, as outlined in the sequel; setting not too strict termination criteria during training; etc.). However, for use or evaluation of the proposed techniques on smaller, rather specific or less diverse datasets (e.g. datasets generated under significantly constrained environmental conditions), exploiting more sophisticated techniques such as crossvalidation (rather than employing fixed training/evaluation sets) can be envisaged, similarly to e.g. [14].
6.2. Implementation details
For the initial shot-class association (Sections 3 and 4), the value of the temporal sampling frequency SF_{ m } used for motion feature extraction was set equal to 125 ms. Considering that the frame rate of the utilized videos is equal to 25 fps (Table 1), the aforementioned value of the sampling frequency means that the processing of approximately 8 frames per second is required by the proposed approach, i.e. every third frame of each shot is selected. A third-order polynomial function was used, according to Equation (3), and the value of parameter D in Equation (2), which is used to define the horizontal and vertical spatial sampling frequencies (H_{ s } and V_{ s } , respectively), was set equal to 40, similarly to [33]. Parameters η and ζ, which define the time descending function in Equation (5), were set equal to 3 and 0.5, respectively. In parallel to motion feature extraction, color histograms of F_{ c } = 16 bins were calculated at a temporal sampling frequency of SF_{ c } = 125 ms (Section 4). With respect to the audio information processing, F_{ a } = 12 MFCC coefficients were estimated at a sampling rate of SF_{ a } = 20 ms, while for their extraction a sliding window of length F_{ w } = 30 ms was used. The value of SF_{ a } is different from that of SF_{ m } (used for motion feature extraction) due to the nature of the audio information and its MFCC representation, which require that MFCC coefficients are calculated at a relatively high rate and in temporal windows of correspondingly short duration [52]. The values of the aforementioned parameters were selected after experimentation. It was observed that small deviations from these values resulted in negligible variations in the overall classification performance.
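The relation between the quoted sampling settings can be verified with simple arithmetic (a sanity-check sketch, not part of the analysis pipeline): at 25 fps the frame period is 40 ms, so SF_{ m } = 125 ms corresponds to keeping roughly every third frame, i.e. about 8 processed frames per second.

```python
def frames_processed_per_second(fps, sf_ms):
    """Frame-selection stride and resulting processing rate for a sampling period."""
    frame_period_ms = 1000.0 / fps           # 40 ms at 25 fps
    stride = round(sf_ms / frame_period_ms)  # nearest whole-frame step: 125/40 -> 3
    return stride, fps / stride

stride, rate = frames_processed_per_second(25, 125)
```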
Regarding the HMM structure of Sections 3 and 4, fully connected first-order HMMs, i.e. HMMs allowing all possible hidden state transitions, were utilized for performing the mapping of the single-modality low-level features to the high-level semantic classes. For every hidden state the observations were modeled as a mixture of Gaussians (a single Gaussian was used for every state). The employed Gaussian Mixture Models (GMMs) were set to have full covariance matrices for exploiting all possible correlations between the elements of each observation. Additionally, the Baum-Welch (or Forward-Backward) algorithm was used for training, while the Viterbi algorithm was utilized during the evaluation. Furthermore, the number of hidden states of the HMM models for every separate modality was considered as a free variable. The developed HMM structure was realized using the software libraries of [53].
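For reference, the decoding step mentioned above can be illustrated with a compact, textbook Viterbi implementation (a generic sketch, not the code of the libraries of [53]; the example parameters below are invented and use discrete emissions rather than the GMM observations of the paper):

```python
def viterbi(obs, pi, A, B):
    """Most likely hidden-state path of a first-order HMM with start
    probabilities pi, transition matrix A and discrete emission tables B."""
    states = list(pi)
    delta = {s: pi[s] * B[s][obs[0]] for s in states}  # best-path scores
    back = []                                          # back-pointers per step
    for o in obs[1:]:
        prev, delta, ptr = delta, {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] * A[r][s])
            delta[s] = prev[best] * A[best][s] * B[s][o]
            ptr[s] = best
        back.append(ptr)
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for ptr in reversed(back):                         # trace the pointers back
        path.append(ptr[path[-1]])
    return path[::-1]
```

The returned path is the single maximum-probability state sequence; as discussed in Section 5.4, it may carry only a small fraction of the total probability mass, which is precisely the sensitivity the proposed BN avoids.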
After shot-class association based on single-modality information was performed separately for every utilized modality, the integrated BN described in Section 5 was used for realizing joint modality fusion and temporal context exploitation. The value of variable Q in Equation (6), which determines the number of possible values of random variables A_{ j } , C_{ j } and M_{ j } in the {G}_{j} BN substructure, was set equal to 9, 11, 7 and 10, for the tennis, news, volleyball-I and volleyball-II domains, respectively. These values led to the best overall inferential results, as will be discussed in detail in Section 6.4.1. The developed BN was trained using the Expectation Maximization (EM) approach, while probability propagation was realized using a junction tree mechanism [54].
6.3. Motion analysis results
In this section, experimental results from the application of the proposed motion-based shot-class association approach are presented. In Table 2, quantitative class association results are given in the form of the calculated recognition rates when the accumulated motion energy fields, R_{acc}(x, y, t, τ), are used during the approximation step for τ = 0, 1, 2 and 3, respectively, for all selected domains. The class recognition rate is defined as the percentage of the video shots that belong to the examined class and are correctly associated with it. Additionally, the values of the overall classification accuracy and the average precision are also given. The overall classification accuracy is set equal to the percentage of all shots that are associated with the correct semantic class. On the other hand, the average precision is defined as the weighted sum of the estimated precision values of every supported class, using the classes' frequency of appearance as weight; the precision value of a given class is equal to the percentage of the shots that are associated with it and truly belong to it. The class e_{ j } that is associated with shot s_{ i } is taken to be the one indicated by arg{max}_{j}\left({h}_{ij}^{m}\right). Moreover, the frame processing rate, which is defined as the number of video frames that are processed per second (fps) on average, is also given. The latter metric is introduced for approximating the computational complexity of the proposed method; a frame rate of 25 fps would indicate real-time processing for the videos used. It must be noted that all experiments were conducted using a PC with an Intel Quad Core processor at 2.4 GHz and a total of 3 GB RAM.
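The three evaluation measures defined above can be written out explicitly as follows (a straightforward re-implementation for clarity, not the authors' evaluation scripts):

```python
def recognition_rate(true, pred, cls):
    """Fraction of the shots of class cls that are correctly associated with it."""
    members = sum(t == cls for t in true)
    hits = sum(t == cls and p == cls for t, p in zip(true, pred))
    return hits / members

def overall_accuracy(true, pred):
    """Fraction of all shots that are associated with the correct class."""
    return sum(t == p for t, p in zip(true, pred)) / len(true)

def average_precision(true, pred):
    """Per-class precision, weighted by each class's frequency of appearance."""
    ap = 0.0
    for cls in set(true):
        assigned = sum(p == cls for p in pred)
        correct = sum(t == cls and p == cls for t, p in zip(true, pred))
        precision = correct / assigned if assigned else 0.0
        ap += (sum(t == cls for t in true) / len(true)) * precision
    return ap
```

Here `true` and `pred` are per-shot ground-truth and predicted class labels; the recognition rate is thus per-class recall, while the frequency weighting in the average precision keeps rare classes from dominating the summary figure.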
From the presented results, it can be seen that the proposed approach achieves high values for both overall classification accuracy and average precision for τ = 0 in all selected domains, while most of the supported classes exhibit increased recognition rates. It is also shown that the class association performance generally increases when the R_{acc}(x, y, t, τ) are used for small values of τ, compared to the case where no motion information from previous frames is utilized, i.e. when τ = 0. Specifically, a maximum increase, of up to 5.46% in the news domain, is observed in the overall class association accuracy when τ = 1. On the other hand, it can be seen that when the value of τ is further increased (τ = 2, 3), the overall performance improvement decreases. This is mainly due to the fact that, when taking into account information from many previous frames, the estimated R_{acc}(x, y, t, τ) fields for each frame tend to become very similar. Thus, the polynomial coefficients also tend to have very similar values and hence the HMMs cannot observe a characteristic sequence of features that unfolds in time for every supported semantic class. The above results demonstrate that the proposed accumulated motion energy fields can lead to improved shot-class association performance.
The performance of the proposed method is compared with the motion representation approaches for providing motion information to HMMbased systems presented in [12, 26, 27, 32], as well as with the authors' previous work [33] (as described in Section 3.4). Specifically, Huang et al. consider the first four dominant motion vectors and their appearance frequencies, along with the mean and the standard deviation of motion vectors in the frame [12]. Additionally, Gibert et al. make use of the available motion vectors for estimating the principal motion direction of every frame [32]. On the other hand, Xie et al. calculate the motion intensity at frame level [27], while Xu et al. estimate the energy redistribution for every frame and subsequently a set of motion filters are applied for detecting the observed dominant motions [26]. From the presented results, it is shown that the proposed approach outperforms the algorithms of [12, 26, 27, 32] for most of the supported classes as well as in overall classification performance in all selected domains. On the other hand, it can also be seen that the performance of the proposed approach is comparable with the one attained by the application of the method of [33] (note that the results for the method of [33] and other works that are reported in Table 2 may be somewhat different from those reported in [33], in absolute numbers; this is due to the datasets used in [33] being different than those used here). In particular, the method of [33] presents higher overall classification accuracy and average precision in the ranges [0.55, 3.07%] and [0.23, 1.74%], respectively, in the selected domains. However, it is shown that the proposed method performs faster than the method of [33] by a factor in the range [4.90, 5.91], while its time performance is also comparable or better than that of [12, 26, 27, 32] that were implemented; all the latter methods exhibit considerably lower overall classification performance in all domains. 
Thus, the proposed motion-based shot-class association approach manages to combine increased recognition performance with relatively low computational complexity, compared to the relevant literature. It must be noted that the approximation of the methods' computational complexity by the introduced frame processing rate metric is performed due to the inevitable difficulty of defining the computational complexity in a closed form in most cases (e.g. the computational complexity of the method of [33] depends heavily on the type of the videos and the kinds of motion patterns that they contain).
6.3.1. Effect of the degree of the polynomial function
In order to investigate the effect of the degree of the introduced polynomial function on the overall motion-based shot-class association performance (Section 3), the latter was evaluated when parameter T (Equation (3)) receives values ranging from 1 to 6. Additionally, the accumulated motion energy fields, R_{acc}(x, y, t, τ), are used for τ = 1 in all selected domains. Values of parameter T greater than 6 resulted in significantly decreased recognition performance. The corresponding shot-class association results are illustrated in Figure 4, where it can be seen that the use of a 3rd-order polynomial function leads to the best overall performance in all domains. It must be noted that for the cases of the 5th- and 6th-order polynomial functions, Principal Component Analysis (PCA) was used for reducing the dimensionality of the observation vectors and overcoming HMM under-training occurrences. The target dimension of the PCA output was set equal to the dimension of the observation vector that is generated when using a 4th-order polynomial function (i.e. the highest value of T for which HMM under-training occurrences were not observed).
6.4. Overall analysis results
In this section, experimental results of the overall developed framework are presented. In order to demonstrate and comparatively evaluate the efficiency of the proposed integrated BN, the following experiments were conducted:

(1)
application of the developed BN

(2)
application of a variant of the proposed approach, where an SVM-based classifier is used instead of the developed BN

(3-4)
application of the methods of [12] and [26]

(5-6)
application of the methods of [12] and [26], using the low-level features of Sections 3 and 4 instead of the ones originally proposed in [12] and [26].
Experiment 1 demonstrates the shot-class association performance obtained by the application of the proposed integrated BN, which jointly performs modality fusion and temporal context exploitation. Experiment 2 is conducted in order to comparatively evaluate the effectiveness of the developed BN, which constitutes a generative classifier, against a discriminative one. Discriminative classifiers are easier to develop, and they are generally considered to outperform generative ones [55] when a sufficient amount of training data is available. To this end, a variant of the proposed approach is implemented, where an SVM-based classifier is used instead of the developed BN. In particular, an individual SVM is introduced for every defined class e_j to detect the corresponding instances and is trained under the 'one-against-all' approach. Each SVM, which receives as input the same set of posterior probabilities as the developed BN (i.e. the evidence data W_i defined in Section 5.3), returns at the evaluation stage, for every shot s_i, a numerical value in the range [0, 1]. This value denotes the degree of confidence with which the corresponding shot is assigned to the class associated with the particular SVM (similarly to the h_{ij}^f value also defined in Section 5.3). Implementation details regarding the developed SVM-based classifier can be found in [9]. In all cases, it has been considered that arg max_j(h_{ij}^a), arg max_j(h_{ij}^c), arg max_j(h_{ij}^m) and arg max_j(h_{ij}^f) indicate the class e_j that is associated with shot s_i after every respective algorithmic step. The performance of the developed BN is also compared with the HMM-based video analysis approaches presented in [12] and [26] (experiments 3 and 4). Specifically, Huang et al.
[12] propose a 'class transition penalty' approach, where HMMs are initially employed for detecting the semantic classes of concern using multimodal information and a product fusion operator. Subsequently, a dynamic programming technique is adopted for finding the most likely class transition path. On the other hand, Xu et al. [26] present an HMM-based framework capable of modeling temporal contextual constraints at different semantic granularities, while multi-stream HMMs are used for modality fusion. It must be noted that, apart from the motion and color features proposed in [26] (observed dominant motions and mean RGB values, respectively), audio information is also used for the purpose of comparison in experiment 4. In particular, the MFCC coefficients (described in Section 4) are also provided as input to the employed multi-stream HMMs. Additionally, in order to compensate for the effect of the different approaches originally using different color, motion and audio features, in experiments 5 and 6 the methods of [12] and [26] receive as input the same video low-level features utilized by the proposed method and described in Sections 3 and 4. Hence, the latter two experiments help to better demonstrate the effectiveness of the proposed BN, compared to other similar approaches that perform the modality fusion and temporal context exploitation procedures separately. It must be highlighted at this point that the method of [26] actually constitutes a particular type of LHMM, namely a composite HMM with 3 layers.
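The arg max decision rule used throughout the experiments above (assigning each shot to the class with the highest confidence value) can be sketched as follows; the class labels and confidence values are invented for illustration and do not come from the paper's datasets.

```python
def assign_class(confidences):
    """Return the class label with the highest confidence for one shot.

    confidences: dict mapping each class label e_j to a confidence value
    h_ij in [0, 1], e.g. the outputs of a set of one-against-all SVMs
    or the fused values produced by the BN.
    """
    return max(confidences, key=confidences.get)

# Hypothetical fused confidences h_ij^f for a single tennis shot:
h_f = {"serve": 0.81, "rally": 0.62, "replay": 0.15, "break": 0.33}
print(assign_class(h_f))  # serve
```

The same rule applies unchanged to the single-modality confidences h_{ij}^a, h_{ij}^c and h_{ij}^m, since only the source of the confidence values differs between algorithmic steps.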
Experiments 1 and 2, whose results are affected by parameter TW (Equation 11), were carried out for TW between 1 and 6. In Figure 5, the results for TW = 1, 2 and 3 are reported in detail, in terms of the difference in classification accuracy compared to the best single-modality analysis result for each domain; the latter are depicted in parentheses. From these results, it can be seen that the proposed integrated BN achieves a significant increase (up to 15.80% in the volleyball-II domain) in the overall classification accuracy for all selected domains for TW = 1, while the recognition rates of most of the supported classes are substantially enhanced. Additionally, it can also be seen that a further increase of the value of parameter TW (TW = 2, 3) leads to a corresponding increase of the overall classification accuracy. Among the classes that are particularly favored by the application of the proposed integrated BN are those that present significant variations in their video low-level features, while also having quite well-defined temporal context. Such classes are break and graphics in the tennis and news domains, respectively. In particular, shots belonging to the class break usually depict significantly different types of scenes (e.g. the players resting or the audience), while also having quite well-defined temporal context (video shots belonging to the class break are often successive and usually interrupted by shots depicting a serve hit). Similarly, shots belonging to the class graphics differ significantly in terms of their low-level audiovisual features (due to the different graphical environments that are presented during a news broadcast, like news start/end signals, weather maps, sport tables, etc.), while they also present characteristic temporal relations. Values of parameter TW greater than 3 (i.e. TW = 4, 5, 6) were experimentally shown to result in marginal changes in the overall classification performance (i.e. changes in the overall accuracy smaller than 0.10%) and negligible variations in the classes' recognition rates; these results are not included in Figure 5 for brevity. All the above results demonstrate the potential for increased shot-class association performance through jointly performing modality fusion and temporal context exploitation.
Considering the corresponding SVM results (experiment 2), it is shown in Figure 5 that a significant increase (up to 9.91% in the tennis domain) in the overall classification accuracy can also be obtained for TW = 1 compared to the best single-modality analysis result, when an SVM-based classifier is used instead of the developed BN, for all domains. This increase is lower than or equal to the corresponding results of the BN for TW = 1, with the highest difference, of approximately 12.46%, being observed in the volleyball-II domain, i.e. the domain with the highest number of supported semantic classes. Additionally, two important observations can be made: (a) the overall performance improvement decreases when parameter TW receives greater values (TW = 2, 3), as opposed to the results of experiment 1, and (b) not all supported classes are favored (e.g. reporting exhibits a dramatic decrease of 64.29% in its recognition rate for TW = 1 in the news domain). These observations suggest that the methodology proposed in this work for representing and learning the joint probability distribution P_joint({a_j^k, c_j^k, m_j^k, cl_j^k}_{1≤j≤J}^{i-TW≤k≤i+TW}) (Section 5.3) is advantageous compared to directly modeling the probability distributions P(CL_j^i = True | W_i), as the employed SVM-based classifier does. This observation can be considered an extension of the findings presented by Adams et al. in [17], where BNs and SVMs were experimentally shown to be equally efficient for the task of modality fusion.
In Table 3, quantitative class association results are given for experiments 1-6, as well as for every separate modality processing, in the form of the calculated recognition rates for all selected domains. The values of the overall classification accuracy and the average precision are also given for every case. It must be noted that a time-performance measure (similar to the average frame processing rate defined in Section 6.3) is not included in Table 3. This is due to the fact that the execution of any of the modality fusion and temporal context exploitation methods reported in experiments 1-6 represents a very small fraction (less than 2%) of the overall video processing time; the latter essentially corresponds to the generation of the respective single-modality analysis results. Following the discussion on Figure 5, only the best results of experiments 1 and 2 are reported here, i.e. using TW = 3 for the BN and TW = 1 for the SVM-based classifier. It can be seen that the proposed BN outperforms the SVM-based approach as well as the methods of [12] and [26] for most of the supported classes, as well as in overall classification performance. Additionally, it is also advantageous compared to the case where the methods of [12] and [26] utilize the video low-level features described in Sections 3 and 4 (experiments 5 and 6). This is mainly due to: (a) the more sophisticated modality fusion mechanism developed in this work, compared to the heuristic assignment of weights to every modality in [26] and the assumption of statistical independence between the features of different modalities in [12], (b) the more complex temporal relations that are modeled by the developed integrated BN, compared to the methods of [26] and [12] that rely on class transition probability learning, and (c) the fact that the proposed method performs modality fusion and temporal context exploitation jointly, hence taking advantage of all possible correlations between the respective numerical data.
It must be emphasized here that these results verify the theoretical analysis given in Section 5.4, which indicated that the proposed integrated BN was expected to outperform other similar HMM-based approaches, e.g. [26].
In order to investigate whether the employed datasets are sufficiently large for the differences in performance observed in Table 3 to be statistically significant, a statistical significance test is used. This takes into account the overall shot classification accuracy in each selected domain and uses the chi-square measure [56] together with the following null hypothesis: "there is no significant difference in the total number of correctly classified shots between the results obtained after the application of the BN and the results obtained after the application of another similar approach of the literature". The latter is the hypothesis that is rejected if the test is passed. The test revealed that all performance differences observed in Table 3 between the proposed approach and the methods of [26] and [12] (using either their original features or the low-level features proposed in Sections 3 and 4) are statistically significant. In particular, the lowest chi-square values calculated for the tennis, news, volleyball-I and volleyball-II domains according to the aforementioned pairwise method comparisons are as follows: (chi-square = 10.09, df = 1, P < 0.05), (chi-square = 17.06, df = 1, P < 0.05), (chi-square = 29.34, df = 1, P < 0.05) and (chi-square = 13.95, df = 1, P < 0.05). Regarding the comparison with the SVM-based method (experiment 2), the difference in performance is statistically significant for the challenging volleyball-II domain (chi-square = 6.96, df = 1, P < 0.05). For the other three datasets, which involve only 4 classes, less pronounced performance differences (thus also of lower statistical significance) are observed between the proposed approach and the SVM one.
However, it should be noted that: (a) despite the small difference in overall performance, the SVM classifier often introduces a dramatic decrease in the recognition rate of some of the supported semantic classes, as discussed earlier in this section, and (b) the SVM classifier, as applied in this work, constitutes a variation of the proposed approach, i.e. its performance is also boosted by jointly realizing modality fusion and temporal context exploitation, as opposed to the literature works of [26] and [12].
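A minimal sketch of the kind of 2 × 2 chi-square computation underlying the above significance tests is given below; the shot counts are invented for illustration and do not correspond to the actual datasets or to the chi-square values reported above.

```python
def chi_square_2x2(correct_a, wrong_a, correct_b, wrong_b):
    """Pearson chi-square statistic (df = 1) for comparing the numbers of
    correctly and incorrectly classified shots of two methods on the
    same dataset (a 2 x 2 contingency table)."""
    table = [[correct_a, wrong_a], [correct_b, wrong_b]]
    total = correct_a + wrong_a + correct_b + wrong_b
    row_sums = [sum(row) for row in table]
    col_sums = [correct_a + correct_b, wrong_a + wrong_b]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the null hypothesis of no difference.
            expected = row_sums[i] * col_sums[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Invented counts: method A classifies 180/200 shots correctly, method B 150/200.
stat = chi_square_2x2(180, 20, 150, 50)
print(stat > 3.841)  # exceeds the df = 1 critical value for P < 0.05 -> True
```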
6.4.1. Effect of discretization
In order to examine the effect of the proposed discretization procedure on the performance of the developed integrated BN, the latter was evaluated for different values of parameter Q (Equation (6)). This parameter determines the number of possible values of the random variables A_j, C_j and M_j in the G_j BN substructure. Results when parameter Q receives values in the interval [3, 15] are illustrated in Figure 6. It can be seen that the developed BN tends to exhibit relatively decreased recognition performance when parameter Q receives low values (Q ∈ [3, 6]) for TW = 1, 2, 3 in all domains. This is mainly due to the fact that low values of Q lead to coarse discretization, which results in decreased shot-class association performance. Additionally, when Q receives values ranging approximately from 7 to 11, the proposed approach presents relatively small variations in its recognition performance, which remains close to its maximum overall shot-class association accuracy, for any value of TW and in all domains. On the other hand, values greater than 11 led to increased network complexity and resulted in under-training/overfitting occurrences, leading to a corresponding gradual decrease in the overall shot-class association performance.
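As a rough illustration of the role of parameter Q, the sketch below uniformly quantizes a posterior probability into Q bins; this is a simplified reading for illustration only, and the actual discretization scheme is the one defined by Equation (6) in the paper.

```python
def discretize(probability, Q):
    """Map a posterior probability in [0, 1] to one of Q discrete values
    (bin indices 0 .. Q-1), as used for the random variables A_j, C_j
    and M_j in the G_j BN substructure."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    # min() keeps probability == 1.0 inside the top bin.
    return min(int(probability * Q), Q - 1)

# With Q = 8, probabilities fall into 8 equally wide bins:
print([discretize(p, 8) for p in (0.0, 0.12, 0.5, 0.99, 1.0)])  # [0, 0, 4, 7, 7]
```

Low Q merges many distinct posterior values into the same bin (the coarse discretization discussed above), while very high Q multiplies the number of parameters the BN must learn, which is consistent with the under-training/overfitting behavior observed for Q > 11.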
7. Conclusions
In this paper, a multimodal context-aware framework for semantic video analysis was presented. The core functionality of the proposed approach relies on the introduced integrated BN, which is developed for performing joint modality fusion and temporal context exploitation. With respect to the utilized motion features, a new representation for providing motion energy distribution-related information to HMMs was described, where motion characteristics from previous frames are exploited. Experimental results in the domains of tennis, news and volleyball broadcast video demonstrated the efficiency of the proposed approaches. Future work includes the examination of additional shot-class association schemes, as well as the investigation of alternative algorithms for acquiring and modeling contextual information and their integration in the proposed framework.
References
Hanjalic A, Lienhart R, Ma W, Smith J: The holy grail of multimedia information retrieval: so close or yet so far away? Proc IEEE 2008, 96(4):541-547.
Smeaton A: Techniques used and open challenges to the analysis, indexing and retrieval of digital video. Inf Syst 2007, 32(4):545-559. 10.1016/j.is.2006.09.001
Zhu W, Toklu C, Liou S: Automatic news video segmentation and categorization based on closed-captioned text. IEEE International Conference on Multimedia and Expo (ICME) 2001, 829-832.
Wang HL, Cheong LF: Taxonomy of directing semantics for film shot classification. IEEE Trans Circuits Syst Video Technol 2009, 19(10):1529-1542.
Snoek C, Worring M: Multimodal video indexing: a review of the state-of-the-art. Multimedia Tools Appl 2005, 25(1):5-35.
Wang Y, Liu Z, Huang J: Multimedia content analysis using both audio and visual clues. Signal Process Mag IEEE 2000, 17(6):12-36. 10.1109/79.888862
Luo J, Boutell M, Brown C: Pictures are not taken in a vacuum. Signal Process Mag IEEE 2006, 23(2):101-114.
Vallet D, Castells P, Fernandez M, Mylonas P, Avrithis Y: Personalized content retrieval in context using ontological knowledge. IEEE Trans Circuits Syst Video Technol 2007, 17(3):336.
Papadopoulos GT, Mezaris V, Kompatsiaris I, Strintzis MG: Combining global and local information for knowledge-assisted image analysis and classification. EURASIP J Adv Signal Process 2007, 2007(2).
Byrne D, Wilkins P, Jones G, Smeaton A, O'Connor N: Measuring the impact of temporal context on video retrieval. Proceedings of International Conference on Content-Based Image and Video Retrieval 2008, 299-308.
Rabiner L: A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE 1989, 77(2):257-286. 10.1109/5.18626
Huang J, Liu Z, Wang Y: Joint scene classification and segmentation based on hidden Markov model. IEEE Trans Multimedia 2005, 7(3):538-550.
Zhou J, Zhang XP: An ICA mixture hidden Markov model for video content analysis. IEEE Trans Circuits Syst Video Technol 2008, 18(11):1576-1586.
Gao X, Yang Y, Tao D, Li X: Discriminative optical flow tensor for video semantic analysis. Comput Vis Image Underst 2009, 113(3):372-383. 10.1016/j.cviu.2008.08.007
Neapolitan R: Learning Bayesian Networks. Prentice Hall, Upper Saddle River, NJ; 2003.
Heckerman D: A tutorial on learning with Bayesian networks. In Learning in Graphical Models. MIT Press, Cambridge, MA; 1998.
Adams W, Iyengar G, Lin C, Naphade M, Neti C, Nock H, Smith J: Semantic indexing of multimedia content using visual, audio, and text cues. EURASIP J Appl Signal Process 2003, 2:170-185.
Hung MH, Hsieh CH: Event detection of broadcast baseball videos. IEEE Trans Circuits Syst Video Technol 2008, 18(12):1713-1726.
Gong Y, Xu W: Machine Learning for Multimedia Content Analysis. Springer, New York; 2007.
Bruno E, Moenne-Loccoz N, Marchand-Maillet S: Design of multimodal dissimilarity spaces for retrieval of video documents. IEEE Trans Pattern Anal Mach Intell 2008, 30(9):1520-1533.
Shyu M, Xie Z, Chen M, Chen S: Video semantic event/concept detection using a subspace-based multimedia data mining framework. IEEE Trans Multimedia 2008, 10(2):252-259.
Hoi S, Lyu M: A multimodal and multilevel ranking scheme for large-scale video retrieval. IEEE Trans Multimedia 2008, 10(4):607-619.
Tjondronegoro D, Chen Y: Knowledge-discounted event detection in sports video. IEEE Trans Syst Man Cybern Part A Syst Hum 2010, 40(5):1009-1024.
Xu C, Wang J, Lu L, Zhang Y: A novel framework for semantic annotation and personalized retrieval of sports video. IEEE Trans Multimedia 2008, 10(3):421-436.
Yang J, Hauptmann A: Exploring temporal consistency for video analysis and retrieval. Proceedings of ACM International Workshop on Multimedia Information Retrieval 2006, 33-42.
Xu G, Ma Y, Zhang H, Yang S: An HMM-based framework for video semantic analysis. IEEE Trans Circuits Syst Video Technol 2005, 15(11):1422-1433.
Xie L, Xu P, Chang S, Divakaran A, Sun H: Structure analysis of soccer video with domain knowledge and hidden Markov models. Pattern Recognit Lett 2004, 25(7):767-775. 10.1016/j.patrec.2004.01.005
Wan K: Exploiting story-level context to improve video search. IEEE International Conference on Multimedia and Expo (ICME) 2008, 289-292.
Hsu W, Kennedy L, Chang S: Video search reranking through random walk over document-level context graph. Proceedings of International Conference on Multimedia 2007, 971-980.
You J, Liu G, Perkis A: A semantic framework for video genre classification and event analysis. Signal Process Image Commun 2010, 25(4):287-302. 10.1016/j.image.2010.02.001
Ding Y, Fan G: Sports video mining via multichannel segmental hidden Markov models. IEEE Trans Multimedia 2009, 11(7):1301.
Gibert X, Li H, Doermann D: Sports video classification using HMMs. IEEE Int Conf Multimedia Expo (ICME) 2003, 2:345-348.
Papadopoulos GT, Briassouli A, Mezaris V, Kompatsiaris I, Strintzis MG: Statistical motion information extraction and representation for semantic video analysis. IEEE Trans Circuits Syst Video Technol 2009, 19(10):1513-1528.
Kobla V, Doermann D, Lin K: Archiving, indexing, and retrieval of video in the compressed domain. Proceedings of SPIE Conference on Multimedia Storage and Archiving Systems 1996, 2916:78-89.
Lucas B, Kanade T: An iterative image registration technique with an application to stereo vision. International Joint Conference on Artificial Intelligence 1981, 3:674-679.
Geetha M, Palanivel S: HMM based automatic video classification using static and dynamic features. Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA) 2007, 277-281.
Xiong Z, Radhakrishnan R, Divakaran A, Huang T: Comparing MFCC and MPEG-7 audio features for feature extraction, maximum likelihood HMM and entropic prior HMM for sports audio classification. IEEE International Conference on Multimedia and Expo (ICME) 2003, 3.
Cheng C, Hsu C: Fusion of audio and motion information on HMM-based highlight extraction for baseball games. IEEE Trans Multimedia 2006, 8(3):585-599.
Kolekar M, Sengupta S: A hierarchical framework for generic sports video classification. Comput Vis ACCV 2006, 3852:633-642. 10.1007/11612704_63
Ikbal S, Faruquie T: HMM based event detection in audio conversation. Proceedings of IEEE International Conference on Multimedia and Expo 2008, 1497-1500.
Zhang B, Dou W, Chen L: Audio content-based highlight detection using adaptive hidden Markov model. International Conference on Intelligent Systems Design and Applications 2006.
Zhang D, Gatica-Perez D, Bengio S, McCowan I: Semi-supervised adapted HMMs for unusual event detection. IEEE Comput Soc Conf Comput Vis Pattern Recognit 2005, 1:611-618.
Wang J, Chng E, Xu C, Lu H, Tian Q: Generation of personalized music sports video using multimodal cues. IEEE Trans Multimedia 2007, 9(3):576-588.
Xu M, Chia L, Jin J: Affective content analysis in comedy and horror videos by audio emotional event detection. IEEE International Conference on Multimedia and Expo 2005, 4.
Bishop C: Pattern Recognition and Machine Learning. Springer; 2006.
Petkovic M, Mihajlovic V, Jonker W, Djordjevic-Kajan S: Multi-modal extraction of highlights from TV Formula 1 programs. IEEE International Conference on Multimedia and Expo (ICME) 2002.
Liang B, Lao S, Zhang W, Jones G, Smeaton AF: Video Semantic Content Analysis Framework Based on Ontology Combined MPEG-7, vol. 4918/2008. Springer, Berlin/Heidelberg; 2008:237-250.
Barnard M, Odobez J: Sports event recognition using layered HMMs. IEEE International Conference on Multimedia and Expo (ICME) 2005, 1150-1153.
Brand M: Voice puppetry. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley, New York, NY, USA; 1999:21-28.
Fine S, Singer Y, Tishby N: The hierarchical hidden Markov model: analysis and applications. Mach Learn 1998, 32(1):41-62. 10.1023/A:1007469218079
Oliver N, Horvitz E, Garg A: Layered representations for human activity recognition. Fourth IEEE International Conference on Multimodal Interfaces 2002, 3(8).
Davis S, Mermelstein P: Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans Acoust Speech Signal Process 1980, 28(10):357-366.
Hidden Markov Model Toolkit (HTK). [http://htk.eng.cam.ac.uk/]
Jensen FV, Jensen F: Optimal junction trees. Proceedings of Conference on Uncertainty in Artificial Intelligence 1994.
Ng A, Jordan M: On discriminative versus generative classifiers: a comparison of logistic regression and naive Bayes. Adv Neural Inf Process Syst 2002, 2:841-848.
Greenwood P, Nikulin M: A Guide to Chi-Squared Testing. Wiley-Interscience; 1996.
Acknowledgements
The work presented in this paper was supported by the European Commission under contracts FP7-248984 GLOCAL and FP7-249008 CHORUS+.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Papadopoulos, G.T., Mezaris, V., Kompatsiaris, I. et al. Joint modality fusion and temporal context exploitation for semantic video analysis. EURASIP J. Adv. Signal Process. 2011, 89 (2011). https://doi.org/10.1186/1687-6180-2011-89