In this section, we present the pyroelectric sensor model and the high-resolution visibility modulation achieved with Fresnel lens arrays.
Pyroelectric sensor modeling
Figure 3 shows the sensing model of a PIR sensor. Commercially available D205b pyroelectric sensors are employed to sense thermal radiation changes in the object space [24]. The sensor acts as a band-pass filter over the 8 μm to 14 μm wavelength range, which corresponds to the thermal radiation emitted by the human body, so interference from visible illumination is rejected. The object space is defined as the collection of the thermal radiation field of a walker’s body. Fresnel lenses are used to bridge the field of view (FOV) of the PIR sensor to the motion sensing space. As a person moves across the object space, the corresponding sensors are activated in turn.
The sensing model can be represented in the form of a reference structure [25]:

    m(t) = H(t) ∗ [V(r_m, r_s) S(t)]                                (1)

where ∗ is the convolution operator; H(t) = [dP_s/dT][dT/dt] denotes the impulse response function of the PIR sensor; T is the temperature; t is the time tag; and P_s denotes the polarization per unit volume [26]. The quantity P = dP_s/dT is known as the pyroelectric coefficient and depends on the pyroelectric material, so H(t) = P[dT/dt] follows the rate of temperature change. In particular, a stationary human body does not trigger the sensor: the PIR sensor responds only to human movement, regardless of clothing texture. S and m denote the radiation state vector and the measurement vector, respectively. V(r_m, r_s) is the visibility function, which is 1 when r_s is visible to the sensor at r_m, and 0 otherwise.
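The reference-structure model above can be sketched numerically. This is a minimal illustration, not the authors' implementation: the pyroelectric coefficient, time base, and temperature trace below are all hypothetical values chosen only to expose the structure m = H ∗ (V · S) and the fact that a constant-temperature (stationary) scene produces no output.

```python
import numpy as np

# Hypothetical pyroelectric coefficient P = dPs/dT (illustrative value).
P = 0.5
t = np.linspace(0.0, 4.0, 401)
dt = t[1] - t[0]

# Temperature excursion seen by the element while a warm body passes.
T = np.exp(-((t - 2.0) ** 2) / 0.1)

# Impulse response H(t) = P * dT/dt: the element responds to temperature
# *change*, so a constant-temperature (stationary) body yields zero output.
H = P * np.gradient(T, dt)

# Unit radiation state S(t), gated by the binary visibility function V.
S = np.ones_like(t)
V = ((t > 1.0) & (t < 3.0)).astype(float)   # source visible only on (1, 3)

# Measurement m(t) = H(t) convolved with the visibility-gated radiation.
m = np.convolve(H, V * S, mode="same") * dt
```

Because H is a derivative of the temperature trace, setting T constant drives H, and hence m, identically to zero, matching the remark that a stationary body does not trigger the sensor.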
Sensor modules with visibility modulation
An important issue in the walker recognition task is to extract salient features that most effectively represent identity characteristics. In a seminal work, Johansson demonstrated that humans can distinguish gender from the motion of a few light points attached to the body [27]: movement information is the essential feature of human gait. Following this idea, Little and Boyd introduced the dense optical flow of a walker for individual recognition [8]; their research also highlights the importance of motion information for gait representation. Furthermore, limb-level motion information can aid in designing efficient, task-specific biometric sensing. Lee and Grimson extracted each silhouette from a video sequence and divided it into 7 parts [9]. The head, the shoulder region, and the front of the torso were matched to these parts, and an ellipse was fitted to each region; the joint motion parameters of these ellipses were then extracted as gait features for the recognition task. These results provide noteworthy clues for biometric feature acquisition: human motion is explicitly tied to the locomotion of the limbs, and walking movements can be decomposed into limb swings along the horizontal and vertical directions.
In [18], Fang et al. explore the performance of PIR-based walker recognition at different horizontal resolutions and conclude that a higher horizontal resolution captures more detail and yields better results. Our design, by contrast, focuses on capturing the motion features caused by both horizontal and vertical movements in a looking-down configuration.
The compound eyes of insects provide noteworthy intuition for our visibility modulation, owing to their outstanding ability to find and track moving objects accurately in the horizontal and radial directions [28]. Fresnel lenses, which are made of a light-weight, low-cost plastic material and have good focusing characteristics, are employed to modulate the FOV of the PIR sensor. Figure 4a shows the top view of the compound-eye structured Fresnel lens, where the white and dark regions correspond to the visible and invisible regions, respectively. The sampling space is split into 19 non-overlapping subregions, denoted as:

    Ω = ⋃_{i=1}^{19} Ω_i,   Ω_i ∩ Ω_j = ∅  for i ≠ j                (2)
Figure 4b shows the lateral view of the Fresnel lens. This compound-eye visibility modulation yields higher resolution and sensitivity in the horizontal direction. More precisely, a single sensing channel can be represented in the form:

    m(t) = H(t) ∗ [ Σ_{i=1}^{19} V_i(r_m, r_s) S(t) ]               (3)
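The subregion masking can be sketched as follows. The sector geometry and the alternating visible/invisible pattern here are assumptions for illustration; only the structure taken from the text is fixed: 19 non-overlapping subregions, each carrying a binary visibility mask, summed into a single channel.

```python
import numpy as np

n_sub = 19
angles = np.linspace(0.0, np.pi, 1000)       # walker azimuth sweep (assumed)
edges = np.linspace(0.0, np.pi, n_sub + 1)   # 19 non-overlapping sectors

# Binary visibility V_i for each subregion; alternate sectors are masked,
# mimicking the white/dark regions of the Fresnel lens array in Figure 4a.
V = np.zeros((n_sub, angles.size))
for i in range(0, n_sub, 2):                 # even-indexed sectors visible
    V[i] = (angles >= edges[i]) & (angles < edges[i + 1])

# A single channel sums the visibility-gated radiation over all sectors,
# so a crossing walker toggles the output at sector boundaries.
S = np.ones_like(angles)                     # unit radiation state
channel = (V * S).sum(axis=0)
```

Because the sectors partition the sweep, at most one mask is active at any position, and the channel output is a square-wave-like on/off pattern as the walker crosses the modulated FOV.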
We should note that a single-view sensor is sensitive only to horizontal motion.
To capture the motion details in the radial direction, two additional identically modulated sensors are deployed with a small disparity. Figure 5 shows the proposed sensing setup. The object space is divided into many subregions by the three compound-eye modulated sensors, and the radial movement information is fused in the parallel multi-channel output. This sensing method captures the characteristics of omnidirectional movements during walking. Compared with the previously mentioned studies, the designed field structure has high resolution not only in the horizontal direction but also in the vertical direction.
Feature generation
With the sensor arrays deployed as in Figure 5, the object space is partitioned into a set of subregions. When a person walks along the prescribed path, the corresponding sensors are activated by the limb movements. Figure 6a shows the response collected from a single sensor channel. Human motion can be detected easily from the energy of the sensor signal.
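Energy-based detection on one channel can be sketched as below. The window length, threshold, and synthetic signals are hypothetical tuning values, not the authors' parameters; the sketch only shows the idea of thresholding short-time signal energy.

```python
import numpy as np

def detect_motion(signal, win=50, threshold=0.1):
    """Boolean mask: True where sliding-window mean energy exceeds threshold."""
    sq = np.asarray(signal, dtype=float) ** 2
    energy = np.convolve(sq, np.ones(win) / win, mode="same")  # mean energy
    return energy > threshold

# Idle channel (noise only) versus a channel crossed by a walker.
rng = np.random.default_rng(0)
idle = 0.01 * rng.standard_normal(1000)
walk = idle.copy()
walk[400:600] += np.sin(np.linspace(0.0, 20.0 * np.pi, 200))  # limb-swing burst
```

On the idle channel the windowed energy stays near the noise floor and no detection fires, while the oscillatory burst pushes the energy well above threshold for the duration of the crossing.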
The three feature sequences produced by the three sensor channels are combined in the simplest way, as illustrated in Figure 6b: the fused feature vector is built by concatenating the feature sequences into a higher-dimensional sequence, denoted by:

    M(t) = [m_1(t), m_2(t), m_3(t)]^T                               (4)

where m_1(t), m_2(t), m_3(t) are the outputs of the three sensors in Figure 5. Compared with image-based feature extraction, the proposed method drastically reduces the feature dimension: the representation is just a three-dimensional sequence. This saves not only sensor resources but also the power consumption, storage space, and bandwidth needed for transmission.
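The channel fusion amounts to stacking the three outputs sample by sample. In this minimal sketch the placeholder signals and sequence length are hypothetical; only the concatenation scheme follows the text.

```python
import numpy as np

def fuse(m1, m2, m3):
    """Stack three 1-D channel sequences into a (3, T) feature matrix."""
    return np.vstack([m1, m2, m3])

n = 256                                  # hypothetical sequence length
m1, m2, m3 = (np.sin(k * np.linspace(0.0, 2.0 * np.pi, n)) for k in (1, 2, 3))
M = fuse(m1, m2, m3)                     # feature dimension: 3 per time step
```

Each time step thus carries a 3-dimensional feature vector, in contrast to the thousands of pixels per frame required by image-based gait features.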