Fig. 6
From: Viewpoint robust knowledge distillation for accelerating vehicle re-identification
a Rank-1 accuracy (R1) of different student networks as the distillation temperature varies; the KLD loss weight (i.e., λ of Equation (1)) is fixed at 1. b R1 of different student networks as the KLD loss weight (i.e., λ of Equation (2)) varies.
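The two hyperparameters swept in this figure, the distillation temperature T and the KLD loss weight λ, appear in a standard knowledge-distillation objective. As a rough, generic illustration only (a Hinton-style KD loss, not necessarily the paper's exact Equation (1); the function names here are hypothetical):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, lam=1.0):
    """Cross-entropy on hard labels plus lam * T^2 * KL(teacher || student)
    on temperature-softened outputs (generic KD sketch)."""
    p_s = softmax(student_logits, T)
    p_t = softmax(teacher_logits, T)
    kld = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels]).mean()
    return ce + lam * (T ** 2) * kld
```

Sweeping T (panel a) changes how much of the teacher's inter-class similarity structure the student sees, while sweeping λ (panel b) trades off imitation of the teacher against fitting the hard labels.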