- Research Article
- Open Access

# Segmentation, Reconstruction, and Analysis of Blood Thrombus Formation in 3D 2-Photon Microscopy Images

Jian Mu^{1} (corresponding author), Xiaomin Liu^{1}, Malgorzata M. Kamocka^{3}, Zhiliang Xu^{2}, Mark S. Alber^{2}, Elliot D. Rosen^{3}, and Danny Z. Chen^{1}

**2010**:147216

https://doi.org/10.1155/2010/147216

© Jian Mu et al. 2010

**Received:** 1 May 2009. **Accepted:** 10 July 2009. **Published:** 6 September 2009.

## Abstract

We study the problem of segmenting, reconstructing, and analyzing the structure growth of thrombi (clots) in blood vessels *in vivo* based on 2-photon microscopic image data. First, we develop an algorithm for segmenting clots in 3D microscopic images based on density-based clustering and methods for dealing with imaging artifacts. Next, we apply the union-of-balls (or alpha-shape) algorithm to reconstruct the boundary of clots in 3D. Finally, we perform experimental studies and analysis on the reconstructed clots and obtain quantitative data of thrombus growth and structures. We conduct experiments on laser-induced injuries in vessels of two types of mice (the wild type and the type with low levels of coagulation factor VII) and analyze and compare the developing clot structures based on their reconstructed clots from image data. The results we obtain are of biomedical significance. Our quantitative analysis of the clot composition leads to better understanding of the thrombus development, and is valuable to the modeling and verification of computational simulation of thrombogenesis.

## Keywords

- Thresholding Method
- Clot Structure
- Thrombus Growth
- Thrombus Development
- Clot Surface

## 1. Introduction

Upon vascular injury, components in the blood and vessel wall interact rapidly to form a thrombus (clot) that limits hemorrhage and prevents blood loss. Qualitative and, more importantly, quantitative analysis of the structures of developing thrombi formed *in vivo* is of significant biomedical importance. Such analysis can help identify the factors altering thrombus growth and the structures affecting thrombus instability. A better understanding of thrombus structures and properties is also valuable for the development of therapeutics for treating bleeding disorders.

Recent development of multiphoton intravital microscopy makes it possible to collect high-resolution, multichannel images of developing thrombi. Thus, there is a need for computer-based methods for automatically analyzing 3D microscopic images of thrombi (i.e., stacks of 2D image slices of thrombus cross-sections). Such algorithms must be efficient, accurate, and robust, and be able to handle large quantities of high-resolution 3D image data for quantitative analysis. In our multidisciplinary research, such algorithms can help us advance thrombus studies by providing a vital connection between the biological experimental models and the multiscale computational models of thrombogenesis (e.g., [1, 2]).

Segmentation and reconstruction on 3D microscopic images is an important yet challenging problem in biomedical imaging, and many approaches have been proposed for different imaging settings (e.g., [3, 4]). Thresholding algorithms extract a sought image object from the background based on a threshold value. There are different methods for determining the threshold value. Typical thresholding methods can be classified into three categories: (1) Histogram shape-based thresholding methods, (2) entropy based thresholding methods, and (3) spatial thresholding methods.

Histogram shape-based thresholding methods are based on the shape properties of the histogram. A commonly used thresholding algorithm in this category is due to Otsu [5]. It assumes that the image to be thresholded contains two classes of pixels/voxels (e.g., the object and the background), and computes the optimum threshold separating these two classes so that their combined spread (intraclass variance) is minimized; this is equivalent to maximizing the between-class variance. Sezan [6] performed peak analysis by convolving the histogram function with a smoothing and differencing kernel, and proposed the so-called peak-and-valley thresholding. Entropy-based thresholding algorithms exploit the entropy of the distribution of the gray levels. Johannsen and Bille [7] and Pal et al. [8] studied Shannon entropy-based thresholding. Kapur et al. [9] strived to maximize the background and foreground entropies. Spatial thresholding methods utilize not only the gray value distribution but also the dependency of pixels in a neighborhood. Kirby and Rosenfeld [10] considered local average gray levels for thresholding. Chanda and Majumder [11] used co-occurrence probabilities as indicators of spatial dependency.
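To make Otsu's criterion concrete, here is a minimal generic implementation operating on a 256-bin gray-level histogram. This is an illustrative sketch written for this discussion (the function name `otsu_threshold` is ours), not the authors' code:

```python
def otsu_threshold(histogram):
    """Return the threshold t in [0, 254] that maximizes the between-class
    variance (equivalently, minimizes the intraclass variance) for a
    256-bin gray-level histogram; pixels > t are taken as foreground."""
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_between = 0, -1.0
    w0 = 0      # background pixel count for thresholds <= t
    sum0 = 0.0  # background intensity sum for thresholds <= t
    for t in range(255):
        w0 += histogram[t]
        sum0 += t * histogram[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue  # one class is empty; criterion undefined
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance (scaled)
        if between > best_between:
            best_between, best_t = between, t
    return best_t
```

For a cleanly bimodal histogram (e.g., spikes near 10 and 200), the returned threshold falls between the two modes, separating object from background as described above.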

Unlike direct thresholding, density-based clustering methods (e.g., [12, 13]) group input points together based on not only the intensity of each point, but also the point density in its neighborhood. Thus, this approach can ignore isolated points while gathering points that are densely close to each other. It has been applied to several biomedical image segmentation problems [14–16]. Chan et al. [16] gave an automated density-based algorithm for segmenting gene expression in fluorescent confocal images, and reported that density-based segmentation outperforms direct thresholding on noisy images. However, in our setting, we noticed that applying only density-based clustering does not properly handle the signal intensity fluctuation from one 2D image slice to the next (the signals tend to become weaker as the slices move further away from the vessel wall). Hence, to deal with both the signal fluctuation and the scattered isolated points in our problem, we develop an algorithm that combines Otsu's method [5] and density-based clustering [12, 13] to segment thrombi.

Our problem also presents other difficulties, such as fuzzy boundaries, photobleaching [17], and other imaging artifacts, which all add to the complexity of the problem. Such artifacts include movement of the vascular bed (e.g., due to animal breathing), the presence of fat and blood (caused by bleeding during tissue preparation for observation) around or on top of the vessel, and so forth. To overcome these difficulties, we first determine automatically the threshold for each type of channel values of voxels in every 2D image slice and classify the voxels using slice-specific threshold values. Then, clusters of clot voxels are obtained in 3D images using density-based clustering. Since clots contain nearby blood cells as part of their components, we also allow each cluster to include neighboring voxels for blood cells.

The main goal of our research is to establish a computer-aided platform for segmenting, reconstructing, and analyzing the development of thrombus structures in microscopic images (rather than, e.g., presenting a new image segmentation algorithm, although this paper does give a segmentation algorithm). Based on our image thrombus segmentation/reconstruction strategies, we are able to set up an effective platform for studying clot structures. This platform enables us to identify sequences of 3D clot structures (from series of 3D images) as they grow in time, and perform quantitative analysis of clots and their dynamic shape changes. The analysis allows us to examine experimental results of actual thrombus development on laser-induced injuries in vessels of two types of mice (the wild type and the type with low levels of coagulation factor VII) captured *in vivo* by microscopic images, and compare such results quantitatively with the thrombus development predictions from a multiscale computational model [1, 2]. Thus, our platform can help refine and validate simulation results generated by the computational model, providing a valuable tool for furthering our understanding of thrombus development.

The rest of this paper is organized as follows. Section 2 presents our clot segmentation algorithm. Section 3 discusses our clot surface reconstruction strategies. Section 4 shows the experimental results. Section 5 provides quantitative analysis of various clot structures and properties. Section 6 summarizes our work and gives some concluding statements.

## 2. Clot Segmentation

In our 3D microscopic images, each voxel is rendered in one of four colors:

- *blue* is for plasma (dextran),
- *green* for fibrinogen/fibrin,
- *red* for platelets, and
- *black* for everything else (i.e., excluding the above three fluorescently tagged components),

as shown in Figure 1. Therefore, our task is to identify and analyze the structures (or shapes) formed by the red and green voxels plus the surrounding voxels of "black" cells in 3D microscopic images.

As we observed from the image data, fibrin, platelets (i.e., the red and green voxels), and surrounding black cells cluster together to form clots. However, other fibrin and platelet fluorophores also scatter around in the 3D images (since these clot components are supplied continuously by the blood flow along the vessel); that is, the scattered fluorophores may represent true data points. Thus, in this setting, while we see clusters of red and green points in the thrombi (plus surrounding black cells), the 3D space is also scattered with many other red and green points that are not part of any clot. Our problem is to first identify the clusters (or galaxies) of discrete red/green points or voxels plus surrounding black voxels while at the same time ignoring the "isolated" red/green points (or isolated stars), and then, from the resulting clusters, reconstruct the (continuous) surfaces and volumes of the clots.

The input to our clot segmentation algorithm is a vertical sequence of 2D image slices (i.e., the slices are "parallel" to the vessel wall), called a *z-stack*. Our algorithm consists of the following main steps: threshold determination (Section 2.1), voxel classification (Section 2.2), density-based clustering (Section 2.3), and black voxel inclusion (Section 2.4).

### 2.1. Threshold Determination

In our image setting, the voxel intensities often fluctuate throughout the slice sequence of a z-stack, probably due to the setup and chosen parameters of the imaging facility for particular experiments. That is, the intensities of voxels can vary up and down (even substantially) from slice to slice, and from z-stack to z-stack. The information for each voxel consists of three values (called *channels*), representing the levels of red, green, and blue (each in the range of 0 to 255) of the voxel. Thus, we need to determine a specific threshold value for each channel of every individual slice of an input z-stack (the threshold values of the three channels may differ from slice to slice).

Based on the outcomes of our preliminary experiments, we chose to apply Otsu's method [5] to compute the threshold values channel by channel and slice by slice. Assuming that the image to be thresholded contains two classes of pixels/voxels (e.g., object and background), Otsu's method computes the optimum threshold separating these two classes so that their combined spread (intraclass variance) is minimized. Although this method is efficient and works well for images with bimodal histograms, it may not yield accurate segmentation results in our situation. Due to the scattering of many isolated red/green points, simple thresholding methods do not seem to be sufficient for identifying thrombi in our 2-photon microscopic images. We need to combine the thresholding method with the density-based clustering approach, as discussed in detail below.

### 2.2. Voxel Classification

In our image setting, since the information of any voxel consists of three channel values representing its levels of red, green, and blue (each from 0 to 255), we need to classify each voxel as red, green, blue, or black (corresponding to the clot components of platelets, fibrin, plasma, and blood cells, respectively). Since the fluorescent signals in different channels of a voxel may not be independent of each other, there are many possible combinations of channel values for a voxel. Thus, we need a method for voxel classification based on the channel values of the voxels. Our classification method for each voxel v of every slice is as follows: find the maximum value among the three channels of v (say this maximum is the red channel value); if this red value is above the red threshold of that slice, then v is classified as red; otherwise, v is black.
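The classification rule above can be sketched in a few lines. The function and parameter names below are illustrative (not from the paper), and the per-slice thresholds are assumed to be supplied, e.g., by the Section 2.1 step:

```python
def classify_voxel(channels, thresholds):
    """Classify a voxel by its dominant channel, per the rule in Section 2.2.

    channels:   mapping like {"red": r, "green": g, "blue": b}, each 0..255
    thresholds: the per-slice thresholds for the same three channels
    Returns the dominant channel's color if it exceeds its slice threshold,
    otherwise "black". (Names are illustrative, not the authors' code.)
    """
    dominant = max(channels, key=channels.get)
    if channels[dominant] > thresholds[dominant]:
        return dominant
    return "black"
```

For example, with slice thresholds `{"red": 120, "green": 110, "blue": 100}`, a voxel `{"red": 200, "green": 30, "blue": 10}` is classified red, while a dim voxel whose maximum channel falls below its threshold is classified black.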

### 2.3. Density-Based Clustering

We apply the density-based clustering (DBC) algorithm of Chen et al. [12] to compute clusters of red/green voxels while ignoring isolated red/green voxels. Figure 2 illustrates the key concept of the DBC algorithm. The idea of density-based clustering is that, for two given parameters r (for the *neighborhood*) and M (for the *density*), if the 3D ball of radius r centered at any red or green point contains at least M (a mix of) red/green points, then all the red/green points in the ball are part of a cluster; further, if two clusters share any common red/green points, then they are merged into the same cluster.
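The clustering rule just described can be expressed as a small self-contained routine. This is an illustrative toy version (names `density_clusters`, `radius`, `min_pts` are ours), not the geometric algorithm of [12], which is considerably more efficient:

```python
def density_clusters(points, radius, min_pts):
    """Toy density-based clustering: every point whose ball of the given
    radius holds at least min_pts points seeds a cluster containing all
    points in that ball; clusters sharing a point are merged (union-find).
    Brute-force O(n^2) sketch for illustration only."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    r2 = radius * radius
    for i, p in enumerate(points):
        # indices of all points inside the ball of the given radius around p
        ball = [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(p, q)) <= r2]
        if len(ball) >= min_pts:
            for j in ball:
                union(i, j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), set()).add(i)
    # drop isolated points / groups that never met the density requirement
    return [g for g in groups.values() if g and len(g) >= min_pts]
```

With a small dense cluster plus one far-away isolated point, the routine returns the dense cluster and discards the isolated point, mirroring the "galaxies versus isolated stars" behavior described above.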

As mentioned above, in the original images, there are many isolated red/green voxels (most of which are inactivated platelets and fibrin in the blood flow). Further, some platelets and fibrin may form relatively small or sparse clusters that are disconnected from the target clot and therefore should be ignored. One might consider applying filtering techniques (e.g., the median filter [18]) to remove such isolated data points and small clusters, since filtering techniques are often effective for removing noise in images. However, most filters have the undesired side-effects of changing the intensity values of certain voxels, blurring the boundary between different objects, or creating additional false positive points in the images. In our clot study, because we need to analyze the clot components quantitatively (both in the volume and on the surface), we prefer to keep the original voxel intensity values unchanged for the output precision of our quantitative analysis. The DBC approach can solve this kind of clustering problem without making any change to the image data. By using suitably chosen values of the neighborhood parameter r and density parameter M, it allows us to identify large dense clusters (clots) and discard regions of low density (i.e., the background and isolated or small groups of inactivated platelet and fibrin voxels).

One important issue in the DBC approach is choosing appropriate values for the neighborhood parameter r and the density parameter M. A heuristic algorithm for determining these parameter values was given in [19]. This general heuristic, however, may not always produce effective parameter values across different applications and situations. Expert input and decisions are often needed to determine the actual values of r and M in specific applications, such as our particular case.

Based on our experiments and evaluations, we choose the ball radius r = 5 and the density value M = 80. The reason for using a "high" density value, M = 80, is as follows. After a cluster is produced by the DBC approach (in this step), we need to "expand" it (in the next step) by including the surrounding black voxels (to capture the nearby blood cells). The cluster expansion should not take blue voxels, but it should include nearby red/green voxels as well; thus, this expansion process actually includes all surrounding non-blue voxels. With a relatively high density value, we preserve a dense cluster structure (although some "sparse" red/green voxels around the current cluster boundary may be excluded in the DBC process). This loss of information is compensated by allowing the clot to capture the nearby red/green/black voxels in the cluster expansion process.

The value of the ball radius r is determined as follows. For the given density parameter M = 80, if we set r = 5, then the threshold value for the density is about 15% (which means that at least 15% of the voxels inside the ball must belong to the point set of interest). The experimental results produced using these two parameter values match well with the experts' manually segmented results. If we instead set r to (say) 4, then the threshold is raised to about 30%; our experimental results show that this fails to capture some of the nearby voxels which the biologists think should be included as part of the clot. Of course, we could use larger values for r and M; however, experimental results indicate that this does not make much difference in the final results (i.e., the output clots), yet the larger values require considerably more computation. Therefore, the two parameter values we chose, M = 80 and r = 5, are suitable for our purpose. In different imaging settings, the users may estimate the percentage of the undesired points (noise, or as in our application, scattered points of interest) and derive other appropriate parameter values.
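The percentages above can be checked with a back-of-the-envelope calculation, approximating the voxel count inside a ball of radius r by the continuous volume (4/3)πr³ (an approximation on our part; the paper does not spell out this computation):

```python
import math

def density_fraction(min_pts, radius):
    """Fraction of the voxels in a ball of the given radius (approximated
    by the continuous volume 4/3*pi*r^3) that min_pts points represent."""
    ball_volume = 4.0 / 3.0 * math.pi * radius ** 3
    return min_pts / ball_volume

print(round(100 * density_fraction(80, 5)))  # -> 15 (about 15% for r = 5)
print(round(100 * density_fraction(80, 4)))  # -> 30 (about 30% for r = 4)
```

This reproduces the quoted thresholds: M = 80 points in a radius-5 ball (~524 voxels) is about 15% occupancy, while the same M in a radius-4 ball (~268 voxels) raises the requirement to about 30%.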

### 2.4. Black Voxel Inclusion

In the previous steps, we only look for voxel clusters of platelets and fibrin. However, some blood cells, which appear as black voxels, surround the clot structure; these blood cells are also part of the clot and should be taken into account. The goal of this step is to include these nearby black voxels in the clot and to compensate for the loss of red/green voxels around the cluster boundary due to the DBC clustering. For every cluster voxel, we examine its neighboring voxels and decide whether they should be added to the clot: a voxel v is added to the clot if and only if v is not yet part of the clot and v is non-blue. Here we use the 6-connected neighborhood (in 3D) for clot expansion. The expansion process continues iteratively until all surrounding non-blue voxels are taken into the clot.
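The iterative expansion is a breadth-first flood fill over the 6-connected neighborhood. The sketch below is ours (the function name and the `color_of` lookup are illustrative), not the authors' implementation:

```python
from collections import deque

def expand_cluster(cluster, color_of):
    """Grow a clot cluster by iteratively absorbing 6-connected neighbors
    that are not blue (Section 2.4). `cluster` is a set of (x, y, z)
    voxels; `color_of` maps a voxel to "red"/"green"/"blue"/"black",
    or None for positions outside the image. Illustrative sketch."""
    clot = set(cluster)
    frontier = deque(clot)
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while frontier:
        x, y, z = frontier.popleft()
        for dx, dy, dz in steps:
            v = (x + dx, y + dy, z + dz)
            color = color_of(v)
            # add v iff it is not yet part of the clot and is non-blue
            if v not in clot and color is not None and color != "blue":
                clot.add(v)
                frontier.append(v)
    return clot
```

A voxel map can be supplied as a plain dictionary, e.g. `expand_cluster({(0, 0, 0)}, colors.get)`, in which case the expansion stops at blue (plasma) voxels and at the image boundary.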

## 3. Clot Surface Reconstruction

Each cluster produced by the above segmentation algorithm is merely a collection (or "cloud") of discrete points (voxels) in 3D. To obtain the clot formed by a point cloud, we need to "impose" some continuous "shape" on the voxel cluster in order to derive structures such as the surface and volume of the clot. To construct the boundary of the clot, we first use the 3D morphological dilation method [20] to define a ball around each voxel of the cluster, resulting in the union of a cluster of balls in 3D. In this way, we connect or attach nearby discrete voxels into a continuous boundary of the clot. We then use the marching cubes algorithm [21] to transform the dilated clot volume into meshed surfaces.
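The dilation step can be sketched as a repeated pass with a 6-connected structuring element over a sparse voxel set. This is a minimal illustration written by us; the structuring element actually used for the union-of-balls construction may be a larger discrete ball:

```python
def dilate(voxels, rounds=1):
    """Binary morphological dilation of a sparse 3D voxel set with a
    6-connected structuring element, applied `rounds` times. Illustrative
    sketch of the dilation idea in [20]; the paper's structuring element
    may differ."""
    steps = [(0, 0, 0), (1, 0, 0), (-1, 0, 0),
             (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    result = set(voxels)
    for _ in range(rounds):
        result = {(x + dx, y + dy, z + dz)
                  for (x, y, z) in result
                  for (dx, dy, dz) in steps}
    return result
```

One round turns a single voxel into a 7-voxel "plus" shape; repeated rounds grow an approximate ball around each cluster voxel, merging nearby voxels into one connected volume that the marching cubes step can then mesh.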

An alternative method is to apply the alpha shape algorithm [22], which selects a subset of the input points to define the "shape" boundary of an input point cloud based on a parameter α. With different α values, one can attain different levels of detail of the clot surface. The α-shape of the point cloud degenerates to the input point set as the value of α approaches 0, and it becomes the convex hull of the input point set as α approaches infinity. This feature of the alpha shape algorithm may serve as a good tool for further analysis of clot shapes, since users can control the level of detail on the clot surface based on their needs.

## 4. Experimental Results

In our experiments, we use a Zeiss LSM-510 Meta confocal/multiphoton microscopy system equipped with a tunable Titanium-Sapphire laser at the Indiana Center for Biological Microscopy. Direct laser-induced injuries are made in the mesentery veins of mice that either are normal (the wild type) or have different levels of coagulation factor VII (we use FVII to denote coagulation factor VII).

Our algorithms are performed on 17 wild-type injuries and 15 low-FVII injuries. For each injury, we produce a sequence of 3D images (z-stacks), one 3D image every forty seconds, for a total of 15 z-stacks. Typically, each z-stack consists of about 80 2D slices of a fixed size in voxels.

In the experiments, the development of thrombi is monitored by intravital multiphoton microscopy in a single optical plane. In addition to the confocal video microscopy in one plane, we can also generate a vertical stack of 2-photon images that can be compiled to form a 3D reconstruction of thrombi. This allows us to obtain a vertical stack of plane images (a z-stack), or a series of z-stacks (a 4D image with time as the fourth dimension). A key feature of this model that distinguishes it from other experimental models of intravital fluorescence video microscopy is that we record in 2-photon confocal mode.

### 4.1. Evaluation

To evaluate the effectiveness of our algorithms, a biologist manually identified clots from z-stacks, assisted by the commercially available software Metamorph. Although Metamorph is a powerful tool for image acquisition, processing, and analysis, manually generating segmentation results with it is still a very tedious and time-consuming process, since it takes considerable human effort to estimate parameter values. The biologist manually set the threshold for each voxel channel based on experience and segmented the thrombi on some 2D slices using Metamorph. As an example, a manually segmented result and the output of our algorithms on the same image data are compared in Figure 3(c); one can see that the two results match very well. A quantitative comparison of the example shapes in Figure 3(c) is as follows: the area inside the solid curve is 16779, the area inside the dashed curve is 16957, the area of their intersection is 15505, and the symmetric difference error is 2726.
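The reported symmetric difference error follows directly from the three areas, since the symmetric difference counts the area lying in exactly one of the two regions:

```python
def symmetric_difference_error(area_a, area_b, area_intersection):
    """Area in exactly one of two regions A and B: |A| + |B| - 2|A n B|."""
    return area_a + area_b - 2 * area_intersection

# The areas reported for Figure 3(c):
print(symmetric_difference_error(16779, 16957, 15505))  # -> 2726
```

This is about 16% of the intersection area, consistent with the close visual match described above.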

### 4.2. Implementation and Execution Time

We implemented our image segmentation algorithm on a computer with a 1.73 GHz Pentium Dual-Core CPU and 2 GB of memory. The reconstruction algorithm was implemented on a computer with a 2.5 GHz Intel Quad-Core CPU and 4 GB of memory. The typical execution time is as follows: for a z-stack of 80 slices of the typical size, the segmentation and reconstruction together run in well under one minute (about 15 seconds for segmentation and about 30 seconds for reconstruction).

## 5. Analysis Results

Table 2: Porosity of a wild-type clot at different time points: T1 (40 seconds after injury), T2 (80 seconds), and T6 (4 minutes).

Sample no. | T1 | Porosity (%) | T2 | Porosity (%) | T6 | Porosity (%)
---|---|---|---|---|---|---
1 | 59325 | 20.90 | 63333 | 15.56 | 69012 | 7.98
2 | 57746 | 23.00 | 63794 | 14.94 | 69581 | 7.23
3 | 58120 | 22.51 | 64041 | 14.61 | 68837 | 8.22
4 | 58901 | 21.47 | 64183 | 14.42 | 68540 | 8.61
5 | 58311 | 22.25 | 64370 | 14.17 | 69904 | 6.79
6 | 58019 | 22.64 | 64494 | 14.01 | 69331 | 7.56
7 | 57908 | 22.79 | 64450 | 14.07 | 68799 | 8.27
8 | 57899 | 22.80 | 64323 | 14.24 | 68736 | 8.35
9 | 58062 | 22.58 | 64139 | 14.48 | 69012 | 7.98
10 | 57803 | 22.93 | 63916 | 14.78 | 69538 | 7.28

From Table 2, one can see that at the earlier time points, a clot is more permeable than at the later time points. As time goes on, the clot tends to become more and more compact. This is due largely to the fact that cells on and near the clot surface (at earlier time points most of these cells are platelets) are less adhesive to each other than cells in the interior and are easily flushed away by the blood flow. For further analysis, two of the coauthors of this paper, Drs. Alber and Xu, are leading a research effort to construct a multiscale simulation model for predicting how clots grow under different flow conditions and different factors that may regulate clot growth [2].
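The porosity figures in Table 2 plausibly follow a void-fraction definition; the exact formula is not given in this section, so the sketch below is our assumption (names and formula ours): porosity is taken as the percentage of voxels inside the reconstructed clot boundary that are not occupied by clot components.

```python
def porosity_percent(enclosed_voxels, clot_voxels):
    """Porosity as the percentage of voxels inside the reconstructed clot
    boundary not occupied by clot components. This void-fraction
    definition is an assumption for illustration; the paper's exact
    formula may differ."""
    return 100.0 * (enclosed_voxels - clot_voxels) / enclosed_voxels
```

Under this definition, a clot whose boundary encloses 100 voxels of which 80 are clot material would have a porosity of 20%, matching the order of magnitude of the early (T1) values in Table 2.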

## 6. Conclusions

We presented a new approach for segmentation, reconstruction, and analysis of 3D thrombi in 2-photon microscopic images. Our method and platform have been applied to study the structural differences between thrombi formed in wild-type and low FVII mice. Thrombi in low FVII mice are smaller, have a lower fibrin content, and are less stable than those in wild-type mice.

Our platform for reconstruction and analysis of 3D thrombi from 2-photon microscopic images will be a valuable tool, allowing one to process a large amount of images in a relatively short time. The high-resolution quantitative structural analysis using our algorithms provides new metrics that are likely to be critical to characterizing and understanding biomedically relevant features of thrombi. For instance, the reconstructed structures of the developing thrombi (Figure 7) show the shapes of heterogeneous subdomains of the clot enriched with different thrombus components. Since these subdomains have different mechano-elastic properties, the interfaces between such subdomains are potential sites responsible for structural instability.

With the ability to provide a quantitative description of thrombus structures, it will be possible to compare biological experimental thrombi, monitored by multiphoton microscopy for their development *in vivo*, with the predictions of a multiscale computational model of thrombogenesis [1, 2]. Such quantitative comparisons are essential to the refinement and validation of the simulation model. Currently, we have the individual modules and procedures of the programs working, and the effectiveness of our approaches has been shown by our experiments, as discussed in Sections 4 and 5. However, the software system as a whole is still under development (it is not yet available as a software tool to the research community, though we are working towards this goal). Nevertheless, we anticipate that the integration of the experimental and computational approaches to thrombogenesis made possible by our image processing strategies will provide an effective tool for analyzing and understanding the biomedically important yet complex processes of thrombus development.

## Declarations

### Acknowledgments

The authors would like to thank Amy Zollman for technical assistance and Professor Kenneth W. Dunn and Professor Sherry G. Clendenon for assistance with multiphoton microscopy. This research was supported in part by NSF Grants CCF-0515203, CCF-0916606, and DMS-0800612, NIH Grants R01-EB004640 and HL073750-01A1, and the INGEN Initiative to Indiana University School of Medicine. The work of X. Liu was supported in part by a graduate fellowship from the Center for Applied Mathematics, University of Notre Dame.

## References

1. Xu Z, Chen N, Kamocka MM, Rosen ED, Alber M: A multiscale model of thrombus development. *Journal of the Royal Society Interface* 2008, 5(24):705-722. doi:10.1098/rsif.2007.1202
2. Xu Z, Chen N, Shadden SC, et al.: Study of blood flow impact on growth of thrombi using a multiscale model. *Soft Matter* 2009, 5(4):769-779. doi:10.1039/b812429a
3. Yang X, Beyenal H, Harkin G, Lewandowski Z: Quantifying biofilm structure using image analysis. *Journal of Microbiological Methods* 1999, 39(2):109-119.
4. Zhu T, Zhao HC, Wu J, Hoylaerts MF: Three-dimensional reconstruction of thrombus formation during photochemically induced arterial and venous thrombosis. *Annals of Biomedical Engineering* 2003, 31(5):515-525.
5. Otsu N: A threshold selection method from gray-level histograms. *IEEE Transactions on Systems, Man, and Cybernetics* 1979, 9(1):62-66.
6. Sezan MI: A peak detection algorithm and its application to histogram-based image data reduction. *Graphical Models and Image Processing* 1985, 29: 47-59.
7. Johannsen G, Bille J: A threshold selection method using information measures. *Proceedings of the 6th International Conference on Pattern Recognition (ICPR '82)*, 1982, Munich, Germany, 140-143.
8. Pal SK, King RA, Hashim AA: Automatic grey level thresholding through index of fuzziness and entropy. *Pattern Recognition Letters* 1983, 1(3):141-146. doi:10.1016/0167-8655(83)90053-3
9. Kapur JN, Sahoo PK, Wong AKC: A new method for gray-level picture thresholding using the entropy of the histogram. *Computer Vision, Graphics, and Image Processing* 1985, 29(3):273-285. doi:10.1016/0734-189X(85)90125-2
10. Kirby RL, Rosenfeld A: A note on the use of (gray level, local average gray level) space as an aid in threshold selection. *IEEE Transactions on Systems, Man, and Cybernetics* 1979, 9(12):860-864.
11. Chanda B, Majumder DD: A note on the use of the graylevel co-occurrence matrix in threshold selection. *Signal Processing* 1988, 15(2):149-167. doi:10.1016/0165-1684(88)90067-9
12. Chen DZ, Smid M, Xu B: Geometric algorithms for density-based data clustering. *International Journal of Computational Geometry and Applications* 2005, 15(3):239-260. doi:10.1142/S0218195905001683
13. Ester M, Kriegel H-P, Sander J, Xu X: A density-based algorithm for discovering clusters in large spatial databases with noise. *Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD '96)*, 1996, Portland, Ore, USA, 226-231.
14. Celebi ME, Aslandogan YA, Bergstresser PR: Mining biomedical images with density-based clustering. *Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC '05)*, April 2005, Las Vegas, Nev, USA, 1: 163-168.
15. Song Y, Xie C, Zhu Y, Li C, Chen J: Function based medical image clustering analysis and research. *Advances in Computer, Information, and Systems Sciences, and Engineering* 2006, 149-155.
16. Chan P-K, Cheng S-H, Poon T-C: Automated segmentation in confocal images using a density clustering method. *Journal of Electronic Imaging* 2007, 16(4):-9.
17. Herman B, Parry-Hill MJ, Johnson ID, Davidson MW: Introduction to optical microscopy. 2003, http://micro.magnet.fsu.edu/primer/java/fluorescence/photobleaching/index.html
18. Weiss B: Fast median and bilateral filtering. *ACM Transactions on Graphics* 2006, 25(3):519-526. doi:10.1145/1141911.1141918
19. Ester M, Kriegel H-P, Sander J, Xu X: A density-based algorithm for discovering clusters in large spatial databases with noise. *Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD '96)*, 1996, Portland, Ore, USA, 226-231.
20. Dougherty ER: *An Introduction to Morphological Image Processing*. SPIE Optical Engineering Press, Bellingham, Wash, USA; 1992.
21. Lorensen WE, Cline HE: Marching cubes: a high resolution 3D surface construction algorithm. *Computer Graphics* 1987, 21(4):163-169. doi:10.1145/37402.37422
22. Edelsbrunner H, Mucke EP: Three-dimensional alpha shapes. *ACM Transactions on Graphics* 1994, 13(1):43-72. doi:10.1145/174462.156635

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.