List of functions:
- Preprocessing: `intensity_normalization`, `image_smoothing_gaussian_3d`, `image_smoothing_gaussian_slice_by_slice`, `edge_preserving_smoothing_3d`
- Core segmentation algorithms: `filament_2d_wrapper`, `filament_3d_wrapper`, `dot_2d_slice_by_slice_wrapper`, `dot_3d_wrapper`, `MO`, `watershed`
- Postprocessing: `remove_small_objects`, `hole_filling`, `topology_preserving_thinning`
- Auxiliary functions: `suggest_normalization_param`, `get_middle_frame`, `get_3dseed_from_mid_frame`, `find_boundaries`
Normalize the intensity of the input image so that the value range is from 0 to 1.
from aicssegmentation.core.pre_processing_utils import intensity_normalization
normalized_img = intensity_normalization(img, scaling_param)
Parameters
- img: a 3d numpy array
- scaling_param: a list of values
  - `[0]` (a list with only one value, 0): Min-Max normalization. The max intensity of `img` will be mapped to 1 and the min to 0.
  - `[5000]` (a list with a single positive integer v): Min-Max normalization, but first any original intensity value > v will be considered an outlier and reset to the min intensity of `img`. After that, the max will be mapped to 1 and the min to 0.
  - `[1.5, 10.5]` (a list with two float values [a, b]): Auto-contrast normalization. First, the mean and standard deviation (std) of the original intensity in `img` are calculated. Next, the intensity is truncated into the range [mean - a * std, mean + b * std] and then rescaled to [0, 1].
  - `[0.5, 15.5, 200, 4000]` (a list with four float values [a, b, c, d]): Auto-contrast normalization, similar to the case above, but only intensity values between c and d will be used to calculate the mean and std.
Return: a 3d numpy array of the same shape as `img`, with real values between 0 and 1.
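The Min-Max and auto-contrast modes can be sketched in plain NumPy. This is a minimal illustration of the parameter semantics, not the library's implementation; `minmax_sketch` and `autocontrast_sketch` are hypothetical names:

```python
import numpy as np

def minmax_sketch(img):
    # scaling_param=[0]: map the min intensity to 0 and the max to 1
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min())

def autocontrast_sketch(img, a, b):
    # scaling_param=[a, b]: truncate to [mean - a*std, mean + b*std],
    # then rescale the truncated range to [0, 1]
    img = img.astype(np.float64)
    m, s = img.mean(), img.std()
    img = np.clip(img, m - a * s, m + b * s)
    return (img - img.min()) / (img.max() - img.min())
```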
Perform 3D Gaussian smoothing.
from aicssegmentation.core.pre_processing_utils import image_smoothing_gaussian_3d
smooth_img = image_smoothing_gaussian_3d(normalized_img, sigma=gaussian_smoothing_sigma)
Parameters
- normalized_img: a 3d numpy array, usually the image after intensity normalization
- gaussian_smoothing_sigma: a positive real value (we usually use 1). The larger the value, the smoother the result.
Return: a 3d numpy array of the same size as `normalized_img`.
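For intuition on how `sigma` behaves, `scipy.ndimage.gaussian_filter` performs the same kind of 3D smoothing; the snippet below is a sketch with SciPy, not the library's code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
noisy = rng.normal(0.5, 0.2, size=(8, 32, 32))  # a noisy 3d "image"

smooth_1 = gaussian_filter(noisy, sigma=1)  # mild smoothing
smooth_3 = gaussian_filter(noisy, sigma=3)  # larger sigma, smoother result
```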
image_smoothing_gaussian_slice_by_slice
Perform 2D Gaussian smoothing slice-by-slice.
from aicssegmentation.core.pre_processing_utils import image_smoothing_gaussian_slice_by_slice
smooth_img = image_smoothing_gaussian_slice_by_slice(normalized_img, sigma=gaussian_smoothing_sigma)
Parameters
- normalized_img: a 3d numpy array, usually the image after intensity normalization
- gaussian_smoothing_sigma: a positive real value (we usually use 1). The larger the value, the smoother the result.
Return: a 3d numpy array of the same size as `normalized_img`.
Perform edge-preserving smoothing.
from aicssegmentation.core.pre_processing_utils import edge_preserving_smoothing_3d
smooth_img = edge_preserving_smoothing_3d(normalized_img)
Parameters
- normalized_img: a 3d numpy array, usually the image after intensity normalization
Return: a 3d numpy array of the same size as `normalized_img`.
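The library's version is ITK-based; purely for intuition, an edge-preserving smoother can be sketched as a few Perona-Malik diffusion iterations on a 2D slice. The hypothetical `perona_malik_sketch` below is not the function above: conduction coefficients shrink across strong gradients, so edges are smoothed much less than flat regions (boundaries wrap here for brevity):

```python
import numpy as np

def perona_malik_sketch(img, n_iter=10, kappa=0.5, gamma=0.1):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the 4 neighbors
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction is near 1 for small gradients, near 0 across edges
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```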
Apply a 2D filament filter slice by slice.
from aicssegmentation.core.vessel import filament_2d_wrapper
bw = filament_2d_wrapper(smooth_img, f2_param)
Parameters:
- smooth_img: a 3d numpy array, usually the image after smoothing
- f2_param = [[scale_1, cutoff_1], [scale_2, cutoff_2], ...], e.g., `[[1, 0.01]]` or `[[1, 0.05], [0.5, 0.1]]`. `scale_x` is set based on the estimated thickness of your target filaments. For example, if visually the thickness of the filaments is usually 3~4 pixels, then you may want to set `scale_x` to `1` or something near `1` (like `1.25`). Multiple scales can be used if you have filaments of very different thickness. `cutoff_x` is a threshold applied on the actual filter response to get the binary result. A smaller `cutoff_x` may yield more filaments (especially detecting more dim ones) and thicker segmentation, while a larger `cutoff_x` could be less permissive, yielding fewer filaments and slimmer segmentation.
Return: a 3d numpy array of 0 and 1
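The `[[scale, cutoff], ...]` convention used by the filament and spot wrappers can be illustrated generically: the filter response at each scale is thresholded by its own cutoff, and the binary results are OR-ed together. Below is a minimal sketch of that combination logic; `log_response` is a hypothetical stand-in (the real wrappers use 2D/3D vesselness or spot filters):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def multi_scale_threshold(response_fn, img, params):
    # params = [[scale_1, cutoff_1], [scale_2, cutoff_2], ...]:
    # threshold the response at each scale, then OR the binary results
    bw = np.zeros(img.shape, dtype=bool)
    for scale, cutoff in params:
        bw |= response_fn(img, scale) > cutoff
    return bw

def log_response(img, scale):
    # stand-in response: negative Laplacian of Gaussian enhances bright blobs
    return -gaussian_laplace(img.astype(float), sigma=scale)
```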
Apply a 3D filament filter.
from aicssegmentation.core.vessel import filament_3d_wrapper
bw = filament_3d_wrapper(smooth_img, f3_param)
Parameters:
- smooth_img: a 3d numpy array, usually the image after smoothing
- f3_param = [[scale_1, cutoff_1], [scale_2, cutoff_2], ...], e.g., `[[1, 0.01]]` or `[[1, 0.05], [0.5, 0.1]]`. `scale_x` is set based on the estimated thickness of your target filaments. For example, if visually the thickness of the filaments is usually 3~4 pixels, then you may want to set `scale_x` to `1` or something near `1` (like `1.25`). Multiple scales can be used if you have filaments of very different thickness. `cutoff_x` is a threshold applied on the actual filter response to get the binary result. A smaller `cutoff_x` may yield more filaments (especially detecting more dim ones) and thicker segmentation, while a larger `cutoff_x` could be less permissive, yielding fewer filaments and slimmer segmentation.
Return: a 3d numpy array of 0 and 1
Apply a 2D spot filter slice by slice.
from aicssegmentation.core.seg_dot import dot_2d_slice_by_slice_wrapper
bw = dot_2d_slice_by_slice_wrapper(smooth_img, s2_param)
Parameters:
- smooth_img: a 3d numpy array, usually the image after smoothing
- s2_param = [[scale_1, cutoff_1], [scale_2, cutoff_2], ...], e.g. `[[1, 0.1]]` or `[[1, 0.12], [3, 0.1]]`. `scale_x` is set based on the estimated radius of your target dots. For example, if visually the diameter of the dots is usually 3~4 pixels, then you may want to set `scale_x` to `1` or something near `1` (like `1.25`). Multiple scales can be used if you have dots of very different sizes. `cutoff_x` is a threshold applied on the actual filter response to get the binary result. A smaller `cutoff_x` may yield more dots and fatter segmentation, while a larger `cutoff_x` could be less permissive, yielding fewer dots and slimmer segmentation.
Return: a 3d numpy array of 0 and 1
Apply a 3D spot filter.
from aicssegmentation.core.seg_dot import dot_3d_wrapper
bw = dot_3d_wrapper(smooth_img, s3_param)
Parameters:
- smooth_img: a 3d numpy array, usually the image after smoothing
- s3_param = [[scale_1, cutoff_1], [scale_2, cutoff_2], ...], e.g. `[[1, 0.1]]` or `[[1, 0.12], [3, 0.1]]`. `scale_x` is set based on the estimated radius of your target dots. For example, if visually the diameter of the dots is usually 3~4 pixels, then you may want to set `scale_x` to `1` or something near `1` (like `1.25`). Multiple scales can be used if you have dots of very different sizes. `cutoff_x` is a threshold applied on the actual filter response to get the binary result. A smaller `cutoff_x` may yield more dots and fatter segmentation, while a larger `cutoff_x` could be less permissive, yielding fewer dots and slimmer segmentation.
Return: a 3d numpy array of 0 and 1
Masked-object (MO) thresholding: The basic idea is to apply a relatively low global threshold to roughly mask out each individual object, and then apply a relatively high threshold within each object. This is meant to handle intensity variation from cell to cell. In general, the triangle method and the median method are two thresholding algorithms that usually yield a relatively low threshold; Otsu is used within each object for the relatively high threshold.
from aicssegmentation.core.MO_threshold import MO
bw, object_for_debug = MO(structure_img_smooth, global_thresh_method, object_minArea, return_object)
Parameters
- global_thresh_method: `'tri'`, `'med'`, and `'ave'` are supported in the current version. `'tri'` is the triangle method, `'med'` is the median method, and `'ave'` is the average of the values returned by the triangle and median methods.
- object_minArea: the minimal area of connected components after global thresholding to be considered valid objects. Due to background noise, there may be small connected components in the global thresholding result, and doing Otsu thresholding within such regions will certainly result in errors. So, we want to remove them before thresholding within each object.
- return_object: if `True`, the rough object mask from the global threshold is also returned (as `object_for_debug` above).
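The two-stage idea can be sketched without the library. This is a minimal illustration; `otsu_sketch` and `mo_sketch` are hypothetical helpers, not the actual `MO` implementation (which also provides the triangle/median global methods):

```python
import numpy as np
from scipy import ndimage as ndi

def otsu_sketch(values, bins=128):
    # threshold maximizing between-class variance of a 1D intensity sample
    hist, edges = np.histogram(values, bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # class-0 weight per split
    w1 = w0[-1] - w0                          # class-1 weight per split
    csum = np.cumsum(hist * mids)
    m0 = csum / np.maximum(w0, 1)             # class-0 mean
    m1 = (csum[-1] - csum) / np.maximum(w1, 1)  # class-1 mean
    var = w0 * w1 * (m0 - m1) ** 2
    return mids[np.argmax(var)]

def mo_sketch(img, global_thresh, object_min_area):
    rough = img > global_thresh               # low threshold: rough object masks
    labels, n = ndi.label(rough)
    bw = np.zeros(img.shape, dtype=bool)
    for i in range(1, n + 1):
        obj = labels == i
        if obj.sum() < object_min_area:       # drop small noise components
            continue
        t = otsu_sketch(img[obj])             # high threshold, computed per object
        bw |= obj & (img > t)
    return bw
```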
from skimage.morphology import watershed
seg = watershed(watershed_image, markers, mask, watershed_line)
Perform watershed transformation, see doc on skimage for details.
from skimage.morphology import watershed, dilation, ball
from skimage.feature import peak_local_max
from skimage.measure import label
from scipy.ndimage import distance_transform_edt
seed = dilation(peak_local_max(normalized_img, labels=label(preliminary_segmentation), min_distance=2, indices=False), selem=ball(1))
watershed_map = -1*distance_transform_edt(preliminary_segmentation)
seg = watershed(watershed_map, markers=label(seed), mask=preliminary_segmentation, watershed_line=True)
In this example, `preliminary_segmentation` is a binary image in which some objects are falsely merged, and `normalized_img` is the image after intensity normalization. The watershed is applied on the distance transform of `preliminary_segmentation`, with the local maxima of `normalized_img` as the seeds for cutting the objects. `seed` may also be obtained in other ways.
from skimage.morphology import watershed, dilation, ball
from skimage.measure import label
import numpy as np
seg_filled = watershed(normalized_img, markers=label(seed), watershed_line=True)
seg_shell = np.logical_xor(seg_filled, dilation(seg_filled, selem=ball(1)))
In this example, `normalized_img` is the image after intensity normalization, and `seed` is an image with one connected component as the seed for each particular object. Suppose the structure to be segmented has a shell-like shape, i.e., brighter rings with a dimmer inside. The `watershed` transformation will return the segmentation of each object, but with the inside filled. A `logical_xor` with the dilated result can be used to get a one-voxel-thick shell.
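The XOR trick can be checked on a toy mask, substituting SciPy's `binary_dilation` (whose default structuring element matches `ball(1)`) for the skimage call; a sketch under that substitution:

```python
import numpy as np
from scipy import ndimage as ndi

filled = np.zeros((5, 7, 7), dtype=bool)
filled[1:4, 2:5, 2:5] = True              # a filled 3x3x3 object

# XOR with the dilated mask leaves a one-voxel shell around the object
shell = np.logical_xor(filled, ndi.binary_dilation(filled))
```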
Perform size filtering.
from skimage.morphology import remove_small_objects
final_seg = remove_small_objects(seg>0, min_size=minArea, connectivity=1, in_place=False)
See the skimage documentation for details.
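The `min_size` semantics can be illustrated with a SciPy-only sketch (hypothetical `remove_small_sketch`, not skimage's implementation):

```python
import numpy as np
from scipy import ndimage as ndi

def remove_small_sketch(bw, min_size):
    # keep only connected components with at least min_size pixels
    labels, n = ndi.label(bw)
    keep = np.zeros(bw.shape, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        if comp.sum() >= min_size:
            keep |= comp
    return keep
```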
Fill holes in 2D/3D segmentation
from aicssegmentation.core.utils import hole_filling
bw_filled = hole_filling(bw, hole_min, hole_max, fill_2d=True)
Parameters:
- bw: a binary 2D/3D image.
- hole_min: the minimum size of the holes to be filled
- hole_max: the maximum size of the holes to be filled
- fill_2d: if `fill_2d=True`, a 3D image will be filled slice by slice. Consider a hollow tube along the z direction: its inside is not a hole under 3D topology, but the inside of each slice is indeed a hole under 2D topology.
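The size window in slice-by-slice mode can be sketched with SciPy (hypothetical `fill_2d_holes_sketch`, not the library's code; it assumes an inclusive [hole_min, hole_max] window on the 2D hole area):

```python
import numpy as np
from scipy import ndimage as ndi

def fill_2d_holes_sketch(bw, hole_min, hole_max):
    out = bw.copy()
    for z in range(bw.shape[0]):
        filled = ndi.binary_fill_holes(bw[z])
        holes, n = ndi.label(filled & ~bw[z])      # candidate holes in this slice
        for i in range(1, n + 1):
            hole = holes == i
            if hole_min <= hole.sum() <= hole_max:  # fill only holes in the window
                out[z] |= hole
    return out
```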
topology_preserving_thinning
Perform thinning on a segmentation without breaking its topology.
from aicssegmentation.core.utils import topology_preserving_thinning
bw_thin = topology_preserving_thinning(bw>0, thin_dist_preserve, thin_dist)
Parameters:
- thin_dist_preserve: half of the minimum width you want to keep from being thinned. For example, when the object width is smaller than 4, you don't want to make this part even thinner (it may break the thin object and alter the topology); in that case, you can set `thin_dist_preserve` to `2`.
- thin_dist: the amount to thin (has to be a positive integer), i.e., the number of pixels to be removed from the outer boundary towards the center.
Suggest scaling parameters, assuming `img` is a representative example image of this cell structure.
from aicssegmentation.core.pre_processing_utils import suggest_normalization_param
suggest_normalization_param(img)
Parameter:
- img: a 3d numpy array
Return: suggested parameters for `intensity_normalization`; other useful information will be printed to the console.
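For intuition only, a back-of-envelope version of such a suggestion (hypothetical `suggest_sketch`; the real function uses its own heuristics and prints extra diagnostics) can express the image minimum and a high percentile in units of standard deviations from the mean, matching the [a, b] convention of auto-contrast normalization:

```python
import numpy as np

def suggest_sketch(img):
    m, s = img.mean(), img.std()
    a = (m - img.min()) / s                    # stds from the mean down to the minimum
    b = (np.percentile(img, 99.99) - m) / s    # stds from the mean up to a robust maximum
    return [round(float(a), 1), round(float(b), 1)]
```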
Estimate the index of the center slice of the colony (e.g., an image with 70 z-slices and cells mostly in slices 30-60 should have 45 as the center slice of the colony, instead of 70/2 = 35).
from aicssegmentation.core.utils import get_middle_frame
mid_z = get_middle_frame(structure_img_smooth, method='intensity')
We support two methods to get the middle frame: `method='intensity'` and `method='z'`. The `'intensity'` method assumes the number of foreground pixels (estimated by intensity) along the z dimension has a unimodal distribution (such as Gaussian); the middle frame is then defined as the frame at the peak of that distribution along z. The `'z'` method simply returns the middle z frame.
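The `'intensity'` strategy reduces to a per-slice profile and an argmax. A minimal sketch using mean intensity per slice as the foreground proxy (the real function estimates foreground differently):

```python
import numpy as np

def middle_frame_sketch(img):
    # per-slice signal along z; the peak marks the colony's center slice
    profile = img.mean(axis=(1, 2))
    return int(np.argmax(profile))
```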
Build a 3D seed image from the binary segmentation of a single slice.
from aicssegmentation.core.utils import get_3dseed_from_mid_frame
seed = get_3dseed_from_mid_frame(bw2d, shape_3d, frame_index, area_min, bg_seed)
Parameters
- bw2d: the 2d segmentation of a single frame
- shape_3d: the shape of the original 3d image, e.g. `shape_3d = img.shape`
- frame_index: the index of the frame in the whole z-stack that `bw2d` comes from
- area_min: any connected component in `bw2d` with size smaller than `area_min` will be excluded from seed image generation
- bg_seed: `bg_seed=True` will add a background seed at the first frame (z=0)
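A minimal sketch of building such a seed image (hypothetical `seed_sketch`, not the library's implementation): label the components of the mid-frame mask that exceed `area_min`, and optionally add a background seed at z=0.

```python
import numpy as np
from scipy import ndimage as ndi

def seed_sketch(bw2d, shape_3d, frame_index, area_min, bg_seed=True):
    seed = np.zeros(shape_3d, dtype=np.uint16)
    next_id = 1
    if bg_seed:
        seed[0, :, :] = next_id      # background seed on the first frame
        next_id += 1
    labels, n = ndi.label(bw2d)
    for i in range(1, n + 1):
        comp = labels == i
        if comp.sum() >= area_min:   # skip components below area_min
            seed[frame_index][comp] = next_id
            next_id += 1
    return seed
```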
Find the boundaries of all connected components in the segmentation.
from skimage.segmentation import find_boundaries
seg_boundary = find_boundaries(seg_filled, connectivity=1, mode='thick')
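For a binary mask, `mode='thick'` boundaries resemble the XOR of dilation and erosion. A SciPy sketch of that idea (not skimage's algorithm):

```python
import numpy as np
from scipy import ndimage as ndi

seg = np.zeros((9, 9), dtype=bool)
seg[2:7, 2:7] = True

# thick boundary: pixels added by dilation plus pixels removed by erosion
boundary = ndi.binary_dilation(seg) ^ ndi.binary_erosion(seg)
```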