 Research
 Open Access
A nonconvex gradient fidelity-based variational model for image contrast enhancement
 Qiegen Liu^{1},
 Jianbo Liu^{2},
 Biao Xiong^{3} and
 Dong Liang^{2}
https://doi.org/10.1186/1687-6180-2014-154
© Liu et al.; licensee Springer. 2014
 Received: 3 July 2014
 Accepted: 4 September 2014
 Published: 10 October 2014
Abstract
We propose a novel image contrast enhancement method via a nonconvex gradient fidelity-based (NGF) variational model, which consists of a data fidelity term and the NGF regularization. The NGF prior assumes that the gradient of the desired image is close to the multiplication of the gradient of the original image by a scale factor, which is adaptively proportional to the difference of their gradients. The presented variational model can be viewed as a data-driven alpha-rooting method in the gradient domain. An augmented Lagrangian method is proposed to address this optimization issue by first transforming the unconstrained problem into an equivalent constrained problem and then applying an alternating direction method to iteratively solve the subproblems. Experimental results on a number of images consistently demonstrate that the proposed algorithm efficiently obtains visually pleasing results and achieves more favorable performance than current state-of-the-art methods.
Keywords
 Contrast enhancement
 Variational model
 Augmented Lagrangian method
1 Introduction
Image enhancement is an important issue in many fields such as computer vision, pattern recognition, and medical image processing. It aims to make the result better in quality than the original image for a specific application or a set of objectives, where the source of degradation may be unknown. Many images such as medical images, remote sensing images, electron microscopy images, and even real-life photographs suffer from poor contrast. Therefore, it is necessary to enhance the contrast to obtain a more visually pleasing image [1].
1.1 Related work
Many image contrast enhancement techniques are available, which can be roughly classified into two categories: spatial-domain algorithms and transform-domain algorithms.
Spatial-domain techniques such as the power law transform and histogram equalization (HE) [1] deal directly with the image pixels, manipulating them to achieve the desired enhancement. This category of algorithms is particularly useful for directly altering the gray-level values of individual pixels and hence the overall contrast of the entire image. However, they usually enhance the whole image in a uniform manner, which may produce undesirable results in many cases. Several improved approaches have therefore been developed [2–5]. For instance, by regarding each sub-histogram as a class, Menotti et al. [3] developed a locally adaptive method that first partitions the overall histogram into multiple sub-histograms by minimizing within-class variance and then applies HE to each sub-histogram separately. As a generalization of HE, the enhancement process of 2DHE is based on the observation that the contrast of an image can be improved by increasing the gray-level differences between the pixels of an input image and their neighbors [4]. Huang et al. [5] proposed a hybrid histogram modification method by combining the power law transform and HE.
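As a concrete baseline, global HE can be sketched in a few lines. This is a minimal 8-bit grayscale version for illustration; library routines behave similarly but may differ in rounding details.

```python
import numpy as np

def histogram_equalization(img):
    """Global HE for an 8-bit grayscale image: map each gray level
    through the normalized cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)    # gray-level lookup table
    return lut[img]
```

Because the mapping is a single global lookup table, every pixel of a given gray level is remapped identically, which is exactly why HE enhances the image "in a uniform manner" as described above.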
Transform-domain algorithms depend on the data domain in which they are applied, which may be the frequency domain, the discrete cosine transform (DCT), or a wavelet transform [6–9]. The alpha-rooting (AR) algorithm is a simple but effective technique for image enhancement in the transform or frequency domain. It is based on the fact that after applying an orthogonal transform to an image, the high-frequency coefficients have smaller magnitudes than the low-frequency coefficients; hence, these coefficients may be amplified more to reveal details [6]. Additionally, contrast enhancement algorithms in the DCT domain have attracted many researchers since the DCT is adopted in the JPEG compression standard. However, they often introduce unfavorable blocking artifacts, e.g., the multi-contrast enhancement method (MCE) [7, 8].
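The basic AR operation can be sketched as follows. This is a global-DCT version for illustration only; [6] and the implementation compared in Section 3 operate blockwise, and the exact normalization varies between formulations.

```python
import numpy as np
from scipy.fft import dctn, idctn

def alpha_rooting(img, alpha=0.9):
    """Alpha-rooting in the DCT domain: rescale each coefficient by
    (|c| / |c_DC|)^(alpha - 1). For alpha < 1 this factor is >= 1 and
    grows as |c| shrinks, so small (high-frequency) coefficients are
    amplified relative to large (low-frequency) ones."""
    c = dctn(img.astype(np.float64), norm='ortho')
    mag = np.abs(c)
    ratio = np.where(mag > 0, mag / np.abs(c[0, 0]), 1.0)
    c_enh = c * ratio ** (alpha - 1.0)   # DC term is left unchanged
    return idctn(c_enh, norm='ortho')
```

Setting alpha=1 makes every scale factor 1 and returns the input unchanged, mirroring the role alpha plays as the "rooting" exponent.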
Some approaches treat contrast enhancement as an optimization problem. For example, the method of flattest histogram specification with accurate brightness preservation (FHSABP) [10] formulates the transformation of the input image histogram into the flattest histogram as a convex optimization, subject to a mean brightness constraint. Contrast enhancement in the histogram modification framework (HMF) is also treated as an optimization problem that minimizes a cost function to address noise and black/white stretching [3]. The contextual and variational contrast (CVC) method enhances the image contrast by constraining a 2D histogram of the input image [11]. A smooth 2D target histogram is obtained by minimizing the sum of the Frobenius norms of the differences from the input histogram and the uniformly distributed histogram. However, it requires a high level of computation when increasing the gray-level differences between neighboring pixels. Unlike the above three methods, which formulate the optimization with a 1D or 2D histogram as the variable, in this paper we utilize the image gradient as the optimization variable for contrast enhancement, where the solution is obtained by minimizing the data fidelity as well as the sum of norms of the difference between a data-driven weighted multiplication of the input gradients and the target gradients.
1.2 Contributions

∙ For better contrast enhancement, we introduce a novel image prior, the nonconvex gradient fidelity (NGF), which assumes that the gradient of the desired image is close to the multiplication of the gradient of the original image by a scale factor, which is adaptively proportional to the difference of their gradients. A data-driven variational model for contrast enhancement is then formulated by combining the data fidelity term and this prior.
∙ The difference between NGF and the conventional AR is revealed. Compared to AR, which usually operates in the Fourier or DCT domain and gives an analytical solution, NGF is data-driven, operates in the gradient domain, and its optimal solution is achieved by minimizing penalty functions.
∙ An efficient alternating algorithm is developed to solve the nonlinear optimization problem with the advantage of fast convergence. The adaptive model and efficient implementation indicate the potential of the method to be applied in real-time image/video applications.
2 Nonconvex gradient fidelity regularization
In this section, after briefly surveying some previous results employing a gradient fidelity-based prior in image processing, we state our motivation and subsequently propose an NGF variational model for efficient contrast enhancement. Then, the potential relation between NGF and the traditional AR method is explained, and an iterative algorithm is developed to solve the model efficiently.
2.1 Proposed model NGF
Until recently, several works utilized a gradient fidelity-based prior in image processing [12–15]. In [14], Fattal et al. presented a method for high dynamic range (HDR) compression. After manipulating the gradient field of images by attenuating the magnitudes of large gradients, they proposed a gradient fidelity-based functional to reconstruct the result image from the modified gradient information in the least-squares sense. In [12], Didas et al. combined ℓ_{2} data and gradient fitting in conjunction with ℓ_{1} regularization for image denoising, where the gradient fidelity term is used to compensate for the loss of edges and the undesirable staircase effect introduced by the resulting higher-order partial differential equation. In [13], Xu and Jia used a previously predicted sharp edge gradient as a spatial prior to guide the recovery of a coarse version of the latent image for robust motion deblurring. All the methods above can be considered as constructing an image from specific gradient fields, and their motivations are quite different from ours. The authors in [12] aimed to recover the true image by minimizing a gradient fidelity term between the corrupted image and the solution. The authors in [13] and [14] first used filters to produce a guide image and then applied a gradient fidelity term between the guide image and the solution.
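The display equation defining model (1) appears to be missing from this version of the text. Based on the description in the abstract and in the paragraph below, one plausible reconstruction (an assumption, not a verbatim restoration of the published equation) is:

```latex
\min_{x}\;\frac{1}{2}\left\| x - f \right\|_{2}^{2}
  + \frac{\eta}{2}\sum_{i}\left\| D_{i}x - w_{i}\otimes D_{i}f \right\|_{2}^{2},
\qquad
w_{i} = \left| D_{i}f - D_{i}x \right|^{\alpha-1}
\tag{1}
```

Here the first term keeps the solution x close to the input f, and the second term is the NGF prior described above.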
where η is a weighting factor. For each i, D_{ i }x∈R^{2} represents the first-order finite difference of x at pixel i in both the horizontal and vertical directions. The symbol ⊗ denotes pointwise multiplication. The first term in the cost function enforces data fidelity in the image domain. The second term suggests that the gradients of the target image and the original image should be close up to a scale factor that is adaptively proportional to the difference of their gradients. When α=1, model (1) degrades to the classical gradient fidelity model used in [12, 13, 15]. In the circumstance of α<1, the weight w_{ i }=|D_{ i }f−D_{ i }x|^{α−1} measures the gradient distance between the original image f and the desired image x at each pixel i.
2.2 Solver
Although the NGF regularized model exhibits some appealing properties, the issues of computational complexity and local optimality have to be addressed, since they limit its practical application. Therefore, developing an efficient and robust solver is highly desirable. In this subsection, an augmented Lagrangian (AL) method is proposed to solve the problem. The AL method is a well-studied optimization algorithm for solving constrained problems in the mathematical programming community [16]. Recently, it has enjoyed a re-popularization, mainly due to the work of Yin et al. [17], and has been used in various applications of signal/image processing [18, 19]. In this work, we use a combination of the reweighting technique and the AL scheme, which has been successfully used in [19].
Since solving (5) for x and y simultaneously can be difficult, an alternative choice is the alternating direction method (ADM) [18, 19], which minimizes with respect to one variable at a time while fixing the other variables at their latest values.
∙ x subproblem
where ℱ represents the two-dimensional discrete Fourier transform. The symbol ⋆ denotes complex conjugation. Both the ⊗ and the division signs are componentwise operations.
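Since Equation 8 is not reproduced in this text, the following is a generic sketch of the FFT step such ADM x updates typically rely on: solving a linear system of the assumed form (I + βDᵀD)x = r in closed form under periodic boundary conditions. The function name and parameters are illustrative.

```python
import numpy as np

def solve_x_subproblem(rhs, beta, shape):
    """Solve (I + beta * D^T D) x = rhs with 2-D FFTs, assuming
    periodic boundaries. D^T D diagonalizes in the Fourier basis, so
    the solve is a pointwise division in the frequency domain."""
    # Transfer functions of the forward-difference kernels
    dh = np.zeros(shape); dh[0, 0] = -1; dh[0, 1] = 1   # horizontal diff
    dv = np.zeros(shape); dv[0, 0] = -1; dv[1, 0] = 1   # vertical diff
    Fh, Fv = np.fft.fft2(dh), np.fft.fft2(dv)
    denom = 1.0 + beta * (np.abs(Fh) ** 2 + np.abs(Fv) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

The whole update costs one forward and one inverse FFT, consistent with the O(n² log n) complexity discussed in subsection 2.3.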
∙ y subproblem
2.3 Computation cost, convergence, and parameter setting
At each iteration, the computational cost of Equation 9 is linear with respect to the problem size, namely O(n^{2}). Additionally, the main cost for solving Equation 8 is two fast Fourier transforms (FFTs) (including one inverse FFT), each at a cost of O(n^{2}log(n)); hence, the method enables real-time processing. When working on color images, such as in the standard RGB domain, the variable x∈R^{n^{2}} is extended to x=[x^{r};x^{g};x^{b}]∈R^{3n^{2}}. This extension is the same as that in [20].
As for convergence, because of the nonconvexity and nonlinearity of the problem, the global solution may not be found easily. Nevertheless, since the iterative procedure is updated by the AL scheme combined with the reweighting strategy, a local minimum is expected to be attained. Both the value of the objective function and the norm of the reconstruction difference between successive iterations can be chosen as the stopping criterion. Admittedly, providing a convergence proof for the proposed algorithm is very difficult; we are still working on the theoretical grounds of this deeper issue.
In practice, w_{ i }^{k}=|D_{ i }f−y_{ i }^{k}|^{α−1} was modified to w_{ i }^{k}=1/(|D_{ i }f−y_{ i }^{k}|^{1−α}+ε), with 0<ε<0.5, to prevent the denominator from vanishing. In our work, the algorithm was initialized with w_{ i }^{0}=1/ε, x^{0}=f, and λ^{0}=0. It iterates until the relative tolerance satisfies ∥x^{k+1}−x^{ k }∥_{2}/∥x^{k+1}∥_{2}≤ζ, with ζ=10^{−3}. The parameter β can be set according to [20]; we empirically choose β=100 in all the experiments of this article.
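The regularized weight and the stopping rule above can be sketched directly. The values of alpha and eps here are illustrative; the paper only requires 0 < ε < 0.5, and the choice of (η, α) is discussed in Section 3.1.

```python
import numpy as np

def ngf_weight(grad_f, y, alpha=0.8, eps=0.1):
    """Regularized reweighting w = 1 / (|D f - y|^(1 - alpha) + eps).
    The eps term caps the weight at 1/eps where the gradients agree,
    matching the initialization w^0 = 1/eps."""
    return 1.0 / (np.abs(grad_f - y) ** (1.0 - alpha) + eps)

def converged(x_new, x_old, zeta=1e-3):
    """Relative-change stopping rule ||x^{k+1} - x^k|| / ||x^{k+1}|| <= zeta."""
    return np.linalg.norm(x_new - x_old) / np.linalg.norm(x_new) <= zeta
```

One outer iteration then consists of updating w from the current y, solving the x and y subproblems, updating the multiplier λ, and checking `converged`.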
3 Experiments
In this section, the performance of the proposed method is demonstrated on a variety of images that show wide variations in terms of average image intensity and contrast. We used a data set comprising standard test images from [21–23] to evaluate and compare the proposed algorithm with HE^{a} [1], 2DHE^{a} [4], AR^{b} (alpha-rooting in the DCT domain [6]), and MCE^{b} [7]. The parameter settings of the four algorithms followed [4, 8]. When extending the gray-level algorithms to color images, HE and 2DHE first transform the input RGB image to the CIE L^{⋆}a^{⋆}b^{⋆} color space, while AR and MCE first transform RGB to the YCbCr space. As analyzed in subsection 2.3, NGF is employed directly in the RGB space.
In the experiments, the performances of these algorithms are investigated in terms of visual quality and quantitative measures. As discussed in a number of papers, the assessment of image enhancement is not an easy task, and there is no universally accepted objective criterion that gives meaningful results for every image. Therefore, we choose multiple measures to quantify the improved perception between the input image f and the output image x, as done in [4]. The quantitative measures used in this work include the following: normalized absolute mean brightness error AMBE_{N}(f,x), normalized discrete entropy DE_{N}, and normalized edge-based contrast measure CM_{N}(f,x). The range of all three measures is the interval [0,1]. In general, the higher the value of AMBE_{N}, the better the brightness preservation, and vice versa. Similarly, a higher DE_{N} value indicates an estimated image with richer details, and a higher CM_{N} value indicates an image with higher contrast.
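The exact normalizations of these measures follow [4] and are not reproduced here; as one ingredient, the plain (unnormalized) discrete entropy of an 8-bit image is computed as:

```python
import numpy as np

def discrete_entropy(img):
    """Shannon entropy -sum p * log2(p) of the gray-level histogram of
    an 8-bit image; higher values indicate richer gray-level content."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0, and an image using all 256 levels equally attains the maximum of 8 bits, which is what makes the measure a natural candidate for normalization to [0,1].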
3.1 Parameter adjustment
There are two parameters, η and α, in the proposed NGF regularized model: one measures the level of gradient regularization and the other shapes the weighting factor. The selection of (η,α) therefore spans a two-dimensional parameter space, which enables the user to produce various image styles.
3.2 Comparison on standard test images
The original Plane image in Figure 4a shows a low-contrast image comprising light and dark regions corresponding to ground, plane, and shadow. HE has darkened the image considerably to increase the contrast between regions. Although this method has increased the contrast between different regions of the input image, the contrast within each region is considerably reduced; for example, the texture on the plane is not identifiable. 2DHE produces a brighter image which has better visual quality and contrast than the result of HE. However, the ground region makes the output image slightly too bright. Since AR and MCE are conducted in small DCT blocks, their results do little to change the overall contrast of the image, although it can be observed that some details on the plane are enhanced in the MCE result. Our method improves the overall contrast while preserving the image details. In Figure 4e, it is easy to identify the ground texture as well as the plane.
Figure 5 displays the results on the Tank image. In the HE result shown in Figure 5a, the contrast between the tank and its surroundings is significantly increased. However, the details in the darker area of the tank body are barely noticeable. 2DHE alleviates this drawback of HE by considering the contextual information in the image when producing the 2D histogram, which makes the details of the tank body better perceived. However, it produces a higher-contrast but overall brighter image, especially on the ground. AR leads to an image that is almost identical to the original, so the contrast is barely improved. MCE retains more detail than AR. However, the photometric difference between the tank and its surroundings is still limited. The output of NGF is visually pleasing, and the contrast between the tank and its surroundings is high enough to reveal details in both areas.
The image Cessna in Figure 6a shows a plane on a grass field against a background of sky. The image consists of bright (i.e., the sky) and dark (i.e., shadow and grass) regions; therefore, it is difficult to discriminate the details on the plane and its surroundings. HE generates an output image with heavy degradation, where the sky region with its original orange tint has been changed to noticeable layers of colored regions ranging from dark orange to light gray. HE also darkens the input image, making its details difficult to observe. 2DHE generates an improved output image in the area below the plane with no degradation in the sky. NGF also provides an output image with no degradation in the sky, and furthermore, the details on the plane are better visible. In particular, the shadow below the plane in our result is much smaller than in the result of 2DHE. Besides, many blocking artifacts are observed in the MCE result.

For the input Beach image shown in Figure 7a, a darkening effect on the couple and the distant hill occurs in the results of the HE and 2DHE methods, which makes details unidentifiable. MCE and our proposed NGF considerably increase the overall contrast by making the colors in the image richer and enabling image details to be identified.
Quantitative measurement results of AMBE_{N}, DE_{N}, and CM_{N}

|         |              AMBE_N                |               DE_N                 |               CM_N                 |
| Image   |  HE    2DHE    AR     MCE    NGF   |  HE    2DHE    AR     MCE    NGF   |  HE    2DHE    AR     MCE    NGF   |
| Plane   | 0.0260 0.3269 0.5087 0.4998 0.6340 | 0.4920 0.4990 0.5598 0.6399 0.7693 | 0.5540 0.5264 0.5010 0.5108 0.5334 |
| Tank    | 0.4928 0.0360 0.5079 0.4998 0.6207 | 0.4880 0.4950 0.5972 0.6929 0.8120 | 0.5556 0.5351 0.5017 0.5232 0.5577 |
| Cam     | 0.0944 0.0473 0.5074 0.4957 0.4995 | 0.4458 0.4812 0.4957 0.4953 0.5490 | 0.5128 0.5158 0.5015 0.5322 0.5173 |
| Baboon  | 0.3618 0.1276 0.5079 0.4972 0.4989 | 0.4572 0.4802 0.4928 0.5058 0.7982 | 0.5422 0.5266 0.5027 0.5667 0.5362 |
| Cessna  | 0.0197 0.5973 0.5072 0.5013 0.6877 | 0.4623 0.4815 0.4916 0.4968 0.4991 | 0.5220 0.5103 0.4776 0.5060 0.5006 |
| Light   | 0.1141 0.0973 0.5052 0.5033 0.5043 | 0.4502 0.4866 0.4941 0.5554 0.5963 | 0.5348 0.5192 0.5014 0.5311 0.5365 |
| Beach   | 0.0198 0.0245 0.5064 0.5011 0.4193 | 0.4528 0.4767 0.4937 0.5571 0.6130 | 0.5305 0.5164 0.5009 0.5135 0.5427 |
| Island  | 0.2495 0.1873 0.5048 0.5016 0.5056 | 0.4532 0.4670 0.4897 0.6152 0.6748 | 0.5256 0.5234 0.5015 0.5257 0.5449 |
| Average | 0.1723 0.1805 0.5069 0.5000 0.5463 | 0.4627 0.4830 0.5143 0.5698 0.6640 | 0.5347 0.5217 0.4985 0.5262 0.5337 |
3.3 Comparison on image database
Average quantitative measurement results on 300 test images from the BSDS dataset [23]

| Method | AMBE_N | DE_N   | CM_N   |
| HE     | 0.1034 | 0.4496 | 0.5253 |
| 2DHE   | 0.2052 | 0.4822 | 0.5263 |
| AR     | 0.4986 | 0.4296 | 0.5290 |
| MCE    | 0.4982 | 0.4624 | 0.5636 |
| NGF    | 0.5051 | 0.6862 | 0.5405 |
3.4 Extension by combining detail enhancement
Classical unsharp masking techniques, which aim at enhancing the sharpness/detail of an image, usually suffer from the halo effect. Recently, a number of edge-preserving filters such as weighted least squares (WLS) [24] have been proposed to alleviate this drawback and have achieved impressive performance. As discussed in [25], enhancement of the overall contrast and of the sharpness of an image are two related but different tasks. On one hand, contrast enhancement does not necessarily lead to sharpness enhancement. On the other hand, when enhancing the sharpness of an image, the noise is enhanced as well. Hence, combined methods can be obtained by integrating NGF and WLS into a unified framework. One possible choice is that, after decomposing an image by WLS, we use NGF to adjust the contrast of its base layer and then combine the remaining detail layers with boosted coefficients (denoted as combined 1). Another choice is to employ NGF first and then use WLS to process the intermediate image (denoted as combined 2). These combined strategies may improve the visual quality.
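The "combined 1" pipeline can be sketched as follows, with a Gaussian filter standing in for the WLS edge-preserving decomposition of [24] (a simplification: WLS avoids the halos a Gaussian base layer can introduce) and a caller-supplied `enhance_contrast` operator standing in for the NGF solver; both stand-ins are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def combined_enhance(img, enhance_contrast, sigma=3.0, boost=1.5):
    """'Combined 1': split the image into base + detail layers, apply
    the contrast-enhancement operator to the base layer only, then
    recombine with a boosted detail layer."""
    base = gaussian_filter(img.astype(np.float64), sigma)
    detail = img - base                      # high-frequency residual
    return enhance_contrast(base) + boost * detail
```

With the identity operator and boost = 1, the decomposition is exactly invertible and the input is returned unchanged; "combined 2" simply swaps the order of the two stages.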
4 Conclusions
This work presents the nonconvex gradient fidelity term as a regularizer for contrast enhancement. Owing to the straightforward model and simple implementation, the experimental results obtained for various types of images are highly encouraging and illustrate that our method is superior to state-of-the-art enhancement techniques. The approach is computationally efficient in producing visually pleasing images.
Since our model is in a variational formulation, a natural extension is to incorporate other penalty priors for specific applications. For example, by combining the ℓ_{0} gradient minimization prior [26], the compound model has the potential to simultaneously enhance and denoise. Furthermore, it is very desirable to integrate the proposed NGF prior with the recently popular dictionary learning-based sparse representation models [27, 28] for texture-enhanced image denoising; in [27], a gradient histogram preservation algorithm was presented to enhance texture structures while removing noise. Applying the NGF prior with trained filters [29] or learned transforms [30] will also be considered in future work.
Endnotes
^{ a } The codes of HE and 2DHE are available at http://www.sciencedirect.com/science/article/pii/S0031320312001525.
^{ b } The codes of AR and MCE are available at http://www.facweb.iitkgp.ernet.in/~jay/CES/.
Declarations
Acknowledgements
This work was partly supported by the National Natural Science Foundation of China under grant numbers 61261010, 61362001, 61365013, 61340025, and 51165033, the Natural Science Foundation of Jiangxi province (20132BAB211030, 20121BBE50023, 20122BAB211015), the international scientific and technological cooperation projects of Jiangxi Province (No. 20141BDH80001), the Technology Foundation of the Department of Education in Jiangxi Province (Nos. GJJ13061, GJJ13376, GJJ14196), and the Young Scientist Training Program of Jiangxi province (No.20142BCB23001). The authors are indebted to two anonymous referees for their useful suggestions and for having drawn the authors’ attention to additional relevant references.
References
 Gonzalez RC, Woods RE: Digital Image Processing (3rd Edition). Prentice-Hall, Inc., Upper Saddle River; 2006.
 Agaian SS, Silver B, Panetta KA: Transform coefficient histogram-based image enhancement algorithms using contrast entropy. IEEE Trans. Image Process 2007, 16(3):741-758.
 Menotti D, Najman L, Facon J, De Araujo A: Multi-histogram equalization methods for contrast enhancement and brightness preserving. IEEE Trans. Consum. Electron 2007, 53(3):1186-1194.
 Celik T: Two-dimensional histogram equalization and contrast enhancement. Pattern Recognit 2012, 45(10):3810-3824.
 Huang S, Cheng F, Chiu Y: Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process 2013, 22(3):1032-1041.
 Aghagolzadeh S, Ersoy OK: Transform image enhancement. Opt. Eng 1992, 31(3):614-626.
 Tang J, Peli E, Acton S: Image enhancement using a contrast measure in the compressed domain. IEEE Signal Process. Lett 2003, 10(10):289-292.
 Mukherjee J, Mitra SK: Enhancement of color images by scaling the DCT coefficients. IEEE Trans. Image Process 2008, 17(10):1783-1794.
 Fattal R: Edge-avoiding wavelets and their applications. ACM Trans. Graph. (TOG) 2009, 28(3):1-10.
 Wang C, Peng J, Ye Z: Flattest histogram specification with accurate brightness preservation. IET Image Process 2008, 2(5):249-262.
 Celik T, Tjahjadi T: Contextual and variational contrast enhancement. IEEE Trans. Image Process 2011, 20(12):3431-3441.
 Didas S, Setzer S, Steidl G: Combined ℓ_{2} data and gradient fitting in conjunction with ℓ_{1} regularization. Adv. Comput. Math 2009, 30(1):79-99.
 Xu L, Jia J: Two-phase kernel estimation for robust motion deblurring. Proceedings of European Conference on Computer Vision 2010, 157-170.
 Fattal R, Lischinski D, Werman M: Gradient domain high dynamic range compression. ACM Trans. Graph. (TOG) 2002, 21(3):249-256.
 Bhat P, Zitnick CL, Cohen M, Curless B: GradientShop: a gradient-domain optimization framework for image and video filtering. ACM Trans. Graph. (TOG) 2010, 29(2):1-14.
 Rockafellar RT: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res 1976, 1(2):97-116.
 Yin W, Osher S, Goldfarb D, Darbon J: Bregman iterative algorithms for ℓ_{1}-minimization with applications to compressed sensing. SIAM J. Imaging Sci 2008, 1(1):143-168.
 Afonso MV, Bioucas-Dias JM, Figueiredo MA: An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Trans. Image Process 2011, 20(3):681-695.
 Liu Q, Wang S, Luo J, Zhu Y, Ye M: An augmented Lagrangian approach to general dictionary learning for image denoising. J. Vis. Commun. Image Representation 2012, 23(5):753-766.
 Tao M, Yang J: Alternating direction algorithms for total variation deconvolution in image reconstruction. Optimization Online 2009.
 The USC-SIPI Image Database http://sipi.usc.edu/database/
 Kodak Lossless True Color Image Suite http://r0k.us/graphics/kodak/
 Martin D, Fowlkes C, Tal D, Malik J: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. Proceedings of IEEE International Conference on Computer Vision 2001, 416-423.
 Farbman Z, Fattal R, Lischinski D, Szeliski R: Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. (TOG) 2008, 27(3):67-77.
 Deng G: A generalized unsharp masking algorithm. IEEE Trans. Image Process 2011, 20(5):1249-1261.
 Xu L, Lu C, Xu Y, Jia J: Image smoothing via ℓ_{0} gradient minimization. ACM Trans. Graph. (TOG) 2011, 30(6):174-185.
 Zuo W, Zhang L, Song C, Zhang D: Texture enhanced image denoising via gradient histogram preservation. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition 2013, 1203-1210.
 Yan R, Shao L, Liu Y: Nonlocal hierarchical dictionary learning using wavelets for image denoising. IEEE Trans. Image Process 2013, 22(12):4689-4698.
 Shao L, Zhang H, De Haan G: An overview and performance evaluation of classification-based least squares trained filters. IEEE Trans. Image Process 2008, 17(10):1772-1782.
 Shao L, Yan R, Li X, Liu Y: From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms. IEEE Trans. Cybern 2014, 44(7):1001-1013.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.