ImageJ Ops is a library for N-dimensional image processing.
%classpath config resolver scijava.public https://maven.scijava.org/content/groups/public
%classpath add mvn net.imagej imagej 2.0.0-rc-71
ij = new net.imagej.ImageJ()
"ImageJ ${ij.getVersion()} is ready to go."
Added new repo: scijava.public
ImageJ 2.0.0-rc-71 is ready to go.
Let's open up the friendly Clown image to use for our experiments. (For the coulrophobic, feel free to use the Fluorescent Cells instead.)
clown = ij.io().open("https://imagej.net/images/clown.png")
It's a bit large, so let's scale it down. We'll use the `transform.scaleView` op. For succinctness, we'll write only `scaleView` rather than the fully qualified op name `transform.scaleView`. We'll use N-linear interpolation, since it tends to look pretty nice.
import net.imglib2.interpolation.randomaccess.NLinearInterpolatorFactory
scaleFactors = [0.5, 0.5, 1] // Reduce X and Y to 50%; leave C dimension alone.
interpolationStrategy = new NLinearInterpolatorFactory()
image = ij.op().run("scaleView", clown, scaleFactors, interpolationStrategy)
Let's also make a single-channel 32-bit floating point version:
import net.imglib2.FinalInterval
w = image.dimension(0); h = image.dimension(1)
slice = FinalInterval.createMinSize(0, 0, 0, w, h, 1)
grayImage = ij.op().run("crop", image, slice, true)
image32 = ij.op().convert().float32(grayImage)
Finally, let's define a function for showing multiple uniformly scaled images:
import net.imglib2.RandomAccessibleInterval
tile = { images ->
int[] gridLayout = images[0] in List ?
[images[0].size, images.size] : // 2D images list
[images.size] // 1D images list
RandomAccessibleInterval[] rais = images.flatten()
ij.notebook().mosaic(gridLayout, rais)
}
You can get a complete list of available ops using the `ops()` method. But it is a bit overwhelming, so here is a structured version organized by namespace:
import net.imagej.ops.OpUtils
opsByNS = [:]
ij.op().ops().each{op ->
ns = OpUtils.getNamespace(op)
name = OpUtils.stripNamespace(op)
if (!opsByNS.containsKey(ns)) opsByNS.put(ns, name)
else opsByNS.put(ns, opsByNS.get(ns) + ', ' + name)
}
opsByNS.put('<global>', opsByNS.remove(null))
ij.notebook().display(opsByNS)
Namespace | Ops |
---|---|
coloc | icq, kendallTau, maxTKendallTau, pValue, pearsons |
convert | bit, cfloat32, cfloat64, clip, copy, float32, float64, imageType, int16, int32, int64, int8, normalizeScale, scale, uint12, uint128, uint16, uint2, uint32, uint4, uint64, uint8 |
copy | img, imgLabeling, iterableInterval, labelingMapping, rai, type |
create | img, imgFactory, imgLabeling, imgPlus, integerType, kernel, kernel2ndDerivBiGauss, kernelBiGauss, kernelDiffraction, kernelGabor, kernelGaborComplexDouble, kernelGaborDouble, kernelGauss, kernelLog, kernelSobel, labelingMapping, nativeType, object |
deconvolve | accelerate, firstGuess, normalizationFactor, richardsonLucy, richardsonLucyCorrection, richardsonLucyTV, richardsonLucyUpdate |
filter | addNoise, addPoissonNoise, allPartialDerivatives, bilateral, convolve, correlate, createFFTOutput, derivativeGauss, dog, fft, fftSize, frangiVesselness, gauss, hessian, ifft, linearFilter, max, mean, median, min, padFFTInput, padInput, padShiftFFTKernel, paddingIntervalCentered, paddingIntervalOrigin, partialDerivative, sigma, sobel, tubeness, variance |
geom | boundarySize, boundarySizeConvexHull, boundingBox, boxivity, centerOfGravity, centroid, circularity, compactness, contour, convexHull, convexity, eccentricity, feretsAngle, feretsDiameter, mainElongation, majorAxis, marchingCubes, maximumFeret, maximumFeretsAngle, maximumFeretsDiameter, medianElongation, minimumFeret, minimumFeretsAngle, minimumFeretsDiameter, minorAxis, roundness, secondMoment, size, sizeConvexHull, smallestEnclosingBoundingBox, solidity, spareness, sphericity, vertexInterpolator, verticesCount, verticesCountConvexHull, voxelization |
haralick | asm, clusterPromenence, clusterShade, contrast, correlation, differenceEntropy, differenceVariance, entropy, icm1, icm2, ifdm, maxProbability, sumAverage, sumEntropy, sumVariance, textureHomogeneity, variance |
hog | hog |
image | ascii, cooccurrenceMatrix, distancetransform, equation, fill, histogram, integral, invert, normalize, squareIntegral, watershed |
imagemoments | centralMoment00, centralMoment01, centralMoment02, centralMoment03, centralMoment10, centralMoment11, centralMoment12, centralMoment20, centralMoment21, centralMoment30, huMoment1, huMoment2, huMoment3, huMoment4, huMoment5, huMoment6, huMoment7, moment00, moment01, moment10, moment11, normalizedCentralMoment02, normalizedCentralMoment03, normalizedCentralMoment11, normalizedCentralMoment12, normalizedCentralMoment20, normalizedCentralMoment21, normalizedCentralMoment30 |
labeling | cca, merge |
lbp | lbp2D |
linalg | rotate |
logic | and, conditional, equal, greaterThan, greaterThanOrEqual, lessThan, lessThanOrEqual, not, notEqual, or, xor |
math | abs, add, and, arccos, arccosh, arccot, arccoth, arccsc, arccsch, arcsec, arcsech, arcsin, arcsinh, arctan, arctanh, assign, ceil, complement, complexConjugateMultiply, cos, cosh, cot, coth, csc, csch, cubeRoot, divide, exp, expMinusOne, floor, gamma, invert, leftShift, log, log10, log2, logOnePlusX, max, min, multiply, nearestInt, negate, or, power, randomGaussian, randomUniform, reciprocal, remainder, rightShift, round, sec, sech, signum, sin, sinc, sincPi, sinh, sqr, sqrt, step, subtract, tan, tanh, ulp, unsignedRightShift, xor, zero |
morphology | blackTopHat, close, dilate, erode, extractHoles, fillHoles, floodFill, open, outline, thinGuoHall, thinHilditch, thinMorphological, thinZhangSuen, topHat |
segment | detectJunctions, detectRidges |
stats | geometricMean, harmonicMean, integralMean, integralSum, integralVariance, kurtosis, leastSquares, max, mean, median, min, minMax, moment1AboutMean, moment2AboutMean, moment3AboutMean, moment4AboutMean, percentile, quantile, size, skewness, stdDev, sum, sumOfInverses, sumOfLogs, sumOfSquares, variance |
tamura | coarseness, contrast, directionality |
thread | chunker |
threshold | apply, huang, ij1, intermodes, isoData, li, localBernsenThreshold, localContrastThreshold, localMeanThreshold, localMedianThreshold, localMidGreyThreshold, localNiblackThreshold, localPhansalkarThreshold, localSauvolaThreshold, maxEntropy, maxLikelihood, mean, minError, minimum, moments, otsu, percentile, renyiEntropy, rosin, shanbhag, triangle, yen |
topology | boxCount, eulerCharacteristic26N, eulerCharacteristic26NFloating, eulerCorrection |
transform | addDimensionView, collapseNumericView, collapseRealView, collapseView, concatenateView, crop, dropSingletonDimensionsView, extendBorderView, extendMirrorDoubleView, extendMirrorSingleView, extendPeriodicView, extendRandomView, extendValueView, extendView, extendZeroView, flatIterableView, hyperSliceView, interpolateView, intervalView, invertAxisView, offsetView, permuteCoordinatesInverseView, permuteCoordinatesView, permuteView, project, rasterView, rotateView, scaleView, shearView, stackView, subsampleView, translateView, unshearView, zeroMinView |
zernike | magnitude, phase |
<global> | eval, help, identity, join, loop, map, op, run, slice |
There is a helpful `help` op which prints out information about an op. We can use it to better understand what sorts of operations are possible, and what kinds of arguments they expect:
ij.op().help('gauss')
Available operations:
(RandomAccessibleInterval out) = net.imagej.ops.filter.gauss.DefaultGaussRA(RandomAccessibleInterval out, RandomAccessible in, double[] sigmas)
(RandomAccessibleInterval out?) = net.imagej.ops.filter.gauss.GaussRAISingleSigma(RandomAccessibleInterval out?, RandomAccessibleInterval in, double sigma, OutOfBoundsFactory outOfBounds?)
(RandomAccessibleInterval out?) = net.imagej.ops.filter.gauss.DefaultGaussRAI(RandomAccessibleInterval out?, RandomAccessibleInterval in, double[] sigmas, OutOfBoundsFactory outOfBounds?)
The information above tells us that there are three available `gauss` operations: two which take a list (`double[]`) of sigmas, and one which takes a single (`double`) sigma.
There are a couple of important subtleties here:

- Some arguments have a `?` suffix, which means they are optional. If you leave them off, something reasonable will be done by default. In this case, the `out` and `outOfBounds` arguments are optional. You can leave them off *in right-to-left order*. E.g., in the case above, calling `gauss(a, b, c)` means `gauss(out, in, sigmas)` and not `gauss(in, sigmas, outOfBounds)`. If you want to omit the `out` argument while passing the `outOfBounds` argument, you can pass `null` for `out`; i.e., `gauss(null, in, sigmas, outOfBounds)`.
- The `out` argument is both an input and an output. Hence, whatever object you pass as the `out` parameter will also be returned as the output. In this case, since `out` is optional, if you do not pass the `out` parameter (or you pass `null`), then one will be synthesized and returned.
Lastly, you might be wondering what a `RandomAccessibleInterval` is. For now, it is enough to know it is an ImageJ image. See the "N-dimensional image processing" tutorial for a more detailed introduction.
Let's try calling the first `gauss` op:
// Smudge it up horizontally.
double[] horizSigmas = [8, 0, 0]
horizGauss = ij.op().filter().gauss(image, horizSigmas)
// And now vertically.
double[] vertSigmas = [0, 8, 0]
vertGauss = ij.op().filter().gauss(image, vertSigmas)
// We can also blur the channels.
double[] channelSigmas = [0, 0, 1]
channelGauss = ij.op().filter().gauss(image, channelSigmas)
ij.notebook().display([["image":image, "horizontal blur":horizGauss, "vertical blur":vertGauss, "channel blur":channelGauss]])
image | horizontal blur | vertical blur | channel blur |
---|---|---|---|
There are two main ways to execute an op:

1. The type-safe way, using a built-in method: `ij.op().foo().bar(...)`. This way tends to be nicer from type-safe languages like Java.
2. The dynamic way, using the `run` method: `ij.op().run("foo.bar", ...)`. This way tends to be nicer from dynamically typed scripting languages.

With the dynamic approach, passing the namespace is optional, so you can also write `ij.op().run("bar", ...)`. If there are ops with the same name across multiple namespaces (e.g., `math.and` and `logic.and`), then passing the short name will consider the op signatures across all namespaces.

You will see both syntaxes used throughout these tutorials.
Images have to come from somewhere. You can use `ij.io().open(...)` to read one in from an external source such as a file on disk, as we did in previous notebooks. But there are other ways, too.

You can create an empty (zero-filled) image using the `create.img` op.
ij.op().help("create.img")
Available operations:
(Img out) = net.imagej.ops.create.img.CreateImgFromImg(Img in)
(Img out) = net.imagej.ops.create.img.CreateImgFromII(IterableInterval in)
(Img out) = net.imagej.ops.create.img.CreateImgFromRAI(RandomAccessibleInterval in)
(Img out) = net.imagej.ops.create.img.CreateImgFromDimsAndType(Dimensions in1, NativeType in2, ImgFactory factory?)
(Img out) = net.imagej.ops.create.img.CreateImgFromInterval(Interval in)
Here we will use the clown image as a reference image.
def info(name, image) {
imageType = image.firstElement().getClass().getSimpleName()
name + " = " + image.dimension(0) + " x " + image.dimension(1) + " x " + image.dimension(2) + " : " +
imageType + "\n"
}
// Create an empty image of the same size as an existing image.
empty = ij.op().run("create.img", image)
// Create an empty image based on another image, but of different type.
import net.imglib2.type.logic.BitType
bitType = ij.op().run("create.img", image, new BitType())
// Create an image from scratch.
import net.imglib2.type.numeric.real.FloatType
smallFloat = ij.op().run("create.img", [25, 17, 2], new FloatType())
info("Original ", image) +
info("Empty one", empty) +
info("Bit image", bitType) +
info("Small float", smallFloat)
Original = 160 x 100 x 3 : UnsignedByteType
Empty one = 160 x 100 x 3 : UnsignedByteType
Bit image = 160 x 100 x 3 : BitType
Small float = 25 x 17 x 2 : FloatType
You can easily make a copy of an image using ops from the `copy` namespace.
ij.op().help("copy.rai")
Available operations:
(RandomAccessibleInterval out?) = net.imagej.ops.copy.CopyRAI(RandomAccessibleInterval out?, RandomAccessibleInterval in)
copy = ij.op().run("copy.rai", image)
The `equation` op allows you to generate an image from a formula in JavaScript syntax. Such images can be useful for testing without needing an external image source, or a long and bulky list of numbers.

This op evaluates the equation at each pixel of the image. Within the equation, the current location can be retrieved via the variable `p` (see the example equation below).
ij.op().help("equation")
Available operations:
(IterableInterval out) = net.imagej.ops.image.equation.DefaultCoordinatesEquation(IterableInterval out, UnaryFunctionOp in)
(IterableInterval out?) = net.imagej.ops.image.equation.DefaultEquation(IterableInterval out?, String in)
(IterableInterval out) = net.imagej.ops.image.equation.DefaultXYEquation(IterableInterval out, DoubleBinaryOperator in)
import net.imglib2.type.numeric.integer.UnsignedByteType
sinusoid = ij.op().run("create.img", [150, 100], new UnsignedByteType())
formula = "63 * (Math.cos(0.3*p[0]) + Math.sin(0.3*p[1])) + 127"
ij.op().image().equation(sinusoid, formula)
There is no op yet to create an image by wrapping an existing array of values. For now, you can use ImgLib2 utility methods for that:
// Here we have a gradient ramp in an array.
w = 160; h = 96
byte[] pixels = new byte[w * h]
for (y in 0..h-1) {
for (x in 0..w-1) {
pixels[y * w + x] = x + y
}
}
// Wrap the array into an image.
import net.imglib2.img.array.ArrayImgs
ramp = ArrayImgs.unsignedBytes(pixels, w, h)
The `stats` namespace has routines for computing numerical statistics. Here is a rundown of many of them:
sinusoid32 = ij.op().run("create.img", [150, 100])
formula = "63 * (Math.cos(0.3*p[0]) + Math.sin(0.3*p[1])) + 127"
ij.op().image().equation(sinusoid32, formula)
ij.notebook().display([
"geometricMean": ij.op().stats().geometricMean(sinusoid32).toString(),
"harmonicMean": ij.op().stats().harmonicMean(sinusoid32).toString(),
"kurtosis": ij.op().stats().kurtosis(sinusoid32).toString(),
"max": ij.op().stats().max(sinusoid32).toString(),
"mean": ij.op().stats().mean(sinusoid32).toString(),
"median": ij.op().stats().median(sinusoid32).toString(),
"min": ij.op().stats().min(sinusoid32).toString(),
"moment1AboutMean": ij.op().stats().moment1AboutMean(sinusoid32).toString(),
"moment2AboutMean": ij.op().stats().moment2AboutMean(sinusoid32).toString(),
"moment3AboutMean": ij.op().stats().moment3AboutMean(sinusoid32).toString(),
"moment4AboutMean": ij.op().stats().moment4AboutMean(sinusoid32).toString(),
"size": ij.op().stats().size(sinusoid32).toString(),
"skewness": ij.op().stats().skewness(sinusoid32).toString(),
"stdDev": ij.op().stats().stdDev(sinusoid32).toString(),
"sum": ij.op().stats().sum(sinusoid32).toString(),
"sumOfInverses": ij.op().stats().sumOfInverses(sinusoid32).toString(),
"sumOfLogs": ij.op().stats().sumOfLogs(sinusoid32).toString(),
"sumOfSquares": ij.op().stats().sumOfSquares(sinusoid32).toString(),
"variance": ij.op().stats().variance(sinusoid32).toString()
])
Statistic | Value |
---|---|
geometricMean | 106.86402515788897 |
harmonicMean | 60.81451530914875 |
kurtosis | 2.250389182944452 |
max | 252.9996057999923 |
mean | 130.35596075261444 |
median | 129.34019425677636 |
min | 1.293813403205192 |
moment1AboutMean | -2.877214910768089E-13 |
moment2AboutMean | 3982.1563522625042 |
moment3AboutMean | -10221.960619118927 |
moment4AboutMean | 3.569046079615442E7 |
size | 15000.0 |
skewness | -0.04067366532821499 |
stdDev | 63.106432691542885 |
sum | 1955339.4112892165 |
sumOfInverses | 246.65164103911627 |
sumOfLogs | 70073.35850104403 |
sumOfSquares | 3.146224928399938E8 |
variance | 3982.4218470523483 |
In the `math` namespace, Ops provides traditional mathematical operations such as `add`, `subtract`, `multiply` and `divide`. These operations are overloaded in several ways:

- `math.add(image1, image2)` when `image1` and `image2` have the same dimensions.
- `math.add(image, 5)` to add 5 to each sample of `image`.
- `math.add(3, 5)` to compute the sum of 3 and 5.

Some `math` ops are also already heavily optimized, since we used the `math.add` op as a testbed to validate that Ops could perform as well as or better than ImageJ 1.x does.
// Prepare a couple of equally sized images.
import net.imglib2.type.numeric.real.FloatType
image1 = ij.op().run("create.img", [160, 96], new FloatType())
image2 = ij.op().run("copy.rai", image1)
// Gradient toward bottom right.
ij.op().image().equation(image1, "p[0] + p[1]")
minMax1 = ij.op().stats().minMax(image1)
println("image1 range = (" + minMax1.getA() + ", " + minMax1.getB() + ")")
// Sinusoid.
ij.op().image().equation(image2, "64 * (Math.sin(0.1 * p[0]) + Math.cos(0.1 * p[1])) + 128")
minMax2 = ij.op().stats().minMax(image2)
println("image2 range = (" + minMax2.getA() + ", " + minMax2.getB() + ")")
ij.notebook().display([["image1":image1, "image2":image2]])
image1 range = (0.0, 254.0)
image2 range = (0.020272091031074524, 255.97271728515625)
image1 | image2 |
---|---|
Let's test `math.add(image, number)`:
addImage = image1 // Try also with image2!
tile([
addImage,
ij.op().run("math.add", ij.op().run("copy.rai", addImage), 60),
ij.op().run("math.add", ij.op().run("copy.rai", addImage), 120),
ij.op().run("math.add", ij.op().run("copy.rai", addImage), 180)
])
Notice how we had to make a copy of the source image for each `add(image, number)` call above? This is because right now, the best-matching `math.add` op is an inplace operation, modifying the source image. Ops is still young, and needs more fine-tuning! In the meantime, watch out for details like this.
Now we'll try `math.add(image1, image2)` and `math.subtract(image1, image2)`:
sum = ij.op().run("math.add", image1, image2)
diff = ij.op().run("math.subtract", image1, image2)
tile([sum, diff])
Here is `math.multiply(image1, image2)`:
ij.op().run("math.multiply", image1, image2)
And finally `math.divide(image1, image2)`:
ij.op().run("math.divide", image1, image2)
ImageJ Ops offers a powerful expression evaluation op, built on SciJava's Parsington library:
ij.op().help("eval")
Available operations:
(Object out) = net.imagej.ops.eval.DefaultEval(String in, Map vars?)
Operators in the expression map to ops as follows:
import net.imagej.ops.eval.OpEvaluator
ij.notebook().display(new OpEvaluator().getOpMap().entrySet().stream().map{
entry -> ["Operator": entry.getKey().toString(), "Op name": entry.getValue()]
}.collect())
Operator | Op name |
---|---|
|| | logic.or |
>>> | math.unsignedRightShift |
+ | math.add |
== | logic.equal |
&& | logic.and |
> | logic.greaterThan |
>= | logic.greaterThanOrEqual |
<< | math.leftShift |
!= | logic.notEqual |
- | math.negate |
+ | identity |
% | math.remainder |
/ | math.divide |
& | math.and |
| | math.or |
- | math.subtract |
>> | math.rightShift |
^ | math.power |
* | math.multiply |
< | logic.lessThan |
<= | logic.lessThanOrEqual |
You can also call any op in an `eval` statement as a function, using familiar function syntax.

The following is an example of the `eval` op being used to compute a Difference of Gaussians. Inputs within the formula may be accessed via the `Map vars?` argument of the `eval` op; the keys of the map correspond to the names of the variables that can be used in the formula.
dogFormula = "gauss(image, [10, 10, 1]) - gauss(image, [5, 5, 1])"
dog = ij.op().eval(dogFormula, ["image": image32])
This tutorial illustrates ways to convert between pixel types.
dogFormula = "gauss(image, [10, 10, 1]) - gauss(image, [5, 5, 1])"
dog = ij.op().eval(dogFormula, ["image": image32])
Did you notice in the previous cell that we computed the `gauss` on a `float32` version of the image? If we do not do this, the result will be wrong:
dogWrong = ij.op().eval(dogFormula, ["image": grayImage])
dogConverted = ij.op().convert().uint8(dog)
ij.notebook().display([["dogWrong":dogWrong, "dogConverted":dogConverted]])
dogWrong | dogConverted |
---|---|
As you can see, the DoG computed on the original `uint8` version of the image looks quite different from the `float32` DoG converted to `uint8`.

The issue is that `uint8` images have only 8 bits per channel, with values ranging from 0 to 255. So while the equation is dutifully evaluated, non-integer values are truncated, and then only the lowest 8 bits are retained.
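The effect is easy to reproduce with plain integer arithmetic. Below is a minimal sketch of the assumed truncate-and-wrap behavior (not ImageJ code):

```java
public class Uint8Wrap {
    public static void main(String[] args) {
        // Subtracting two uint8 samples can produce a value outside [0, 255].
        int a = 10, b = 200;
        int diff = a - b;          // -190: not representable as uint8
        // Storing it into 8 bits keeps only the lowest 8 bits...
        int stored = diff & 0xFF;  // 66: a meaningless "bright" pixel
        System.out.println(stored);
        // ...and any fractional part of a floating point result is truncated first:
        double g = 12.7;
        int truncated = (int) g;   // 12
        System.out.println(truncated);
    }
}
```

This is why a subtraction-heavy formula like a DoG, evaluated directly on `uint8` samples, produces garbage wherever intermediate values go negative.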
Using a richer image type like `float32` avoids the problem. The `float32` type uses 32-bit IEEE floating point numbers for its samples, which have an approximate range of $-3.4 \times 10^{38}$ to $3.4 \times 10^{38}$. This type requires four times as much memory as `uint8`, but math errors will accumulate much less severely in common cases.

For those familiar with ImageJ 1.x, `float32` corresponds to ImageJ 1.x's "32-bit" type in the Image → Type menu, and is typically what people choose when performing sequences of mathematical operations.
In ImageJ Ops, many image types are available:
ij.op().ops().findAll{op ->
op.startsWith("convert.big") ||
op.startsWith("convert.bit") ||
op.startsWith("convert.cfloat") ||
op.startsWith("convert.float") ||
op.startsWith("convert.int") ||
op.startsWith("convert.uint")
}.collect{op -> op[8..-1]}
[bit, cfloat32, cfloat64, float32, float64, int16, int32, int64, int8, uint12, uint128, uint16, uint2, uint32, uint4, uint64, uint8]
In some circumstances, one of the above types is more appropriate than `float32`. For example, if we are squaring an image, we could use `uint16`, since the square of a number from 0 to 255 will always be in the range of 0 to 65535. It may sometimes make sense to use a high-precision integer type such as `uint128` rather than a potentially lossy floating point type like `float32`.
Ops provides four built-in ways of converting image values across types:

- `convert.copy` transfers the values directly. This is the fastest method, but there is no bounds checking, so you may see overflow, or in some cases even end up with invalid sample values.
- `convert.clip` is like `copy` but includes bounds checking. Values less than the minimum are clipped (also called clamped) to the minimum value, and values greater than the maximum are clipped to the maximum.
- `convert.scale` multiplies sample values by a scale factor, such that the intensities fall into the same relative histogram over the whole target type: $[sourceTypeMin, sourceTypeMax] \to [destTypeMin, destTypeMax]$. E.g., converting from `uint8` to `uint16` linearly scales $[0, 255] \to [0, 65535]$.
- `convert.normalizeScale` multiplies sample values by a scale factor, such that intensities are distributed across the full range of the target type: $[sourceDataMin, sourceDataMax] \to [destTypeMin, destTypeMax]$. E.g., converting a `uint8` image with an actual minimum sample value of 16 and actual maximum sample value of 127 to `uint16` linearly scales $[16, 127] \to [0, 65535]$.

Below, we show how to convert images using these approaches.
// Make a handy method for printing the image min/max.
printMinMax = { prefix, image ->
minMax = ij.op().stats().minMax(image)
println(prefix + ": min = " + minMax.getA() + ", max = " + minMax.getB())
}
// Create an image with an illustrative range of intensities.
import net.imglib2.type.numeric.integer.UnsignedShortType
imageToConvert = ij.op().run("create.img", [96, 64], new UnsignedShortType())
formula = "128 * (Math.cos(0.3*p[0]) + Math.sin(0.3*p[1])) + 384"
ij.op().image().equation(imageToConvert, formula)
printMinMax("Original", imageToConvert)
imageToConvert
Original: min = 129, max = 640
Now we will call the various convert ops. Please be aware that these ops currently have architectural issues, which we plan to address in a future version of ImageJ Ops. The code below works, but is subject to change until the ImageJ Ops 1.0.0 release.
import net.imagej.ops.Ops
import net.imagej.ops.special.computer.Computers
import net.imglib2.type.numeric.integer.UnsignedByteType
convertMethods = [
Ops.Convert.Copy.class,
Ops.Convert.Clip.class,
// HACK: convert.scale op is disabled here due to a bug in Ops.
// Ops.Convert.Scale.class,
Ops.Convert.NormalizeScale.class
]
convertedImages = []
for (convertMethod in convertMethods) {
// Create a uint8 destination image for the conversion.
convertedImage = ij.op().run("create.img", imageToConvert, new UnsignedByteType())
// Look up the needed convert op for converting from source to destination.
inType = imageToConvert.firstElement() // Type from which we are converting.
outType = convertedImage.firstElement() // Type to which we are converting.
convertOp = Computers.unary(ij.op(), convertMethod, outType, inType)
// NB: Prepare the convert op to behave properly.
// The need for these calls is hacky, and will be fixed in a future version of Ops.
convertOp.checkInput(inType, outType)
convertOp.checkInput(imageToConvert)
// Run this convert op on every sample of our source image.
ij.op().run("map", convertedImage, imageToConvert, convertOp)
methodName = convertMethod.getField("NAME").get(null)
printMinMax(methodName, convertedImage)
convertedImages.add(convertedImage)
}
tile(convertedImages)
convert.copy: min = 0, max = 255
convert.clip: min = 129, max = 255
convert.normalizeScale: min = 0, max = 255
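The printed extremes can be reproduced with plain arithmetic. Here is a minimal sketch (hypothetical helper methods illustrating the assumed rules, not the Ops implementations) of converting the source maximum, 640, from the `uint16` image into a `uint8` destination:

```java
public class ConvertRules {
    // copy: no bounds checking; storing into 8 bits keeps only the low byte
    static int copyRule(int v) { return v & 0xFF; }

    // clip: clamp into the destination type's bounds
    static int clipRule(int v) { return Math.max(0, Math.min(255, v)); }

    // scale: map the full source *type* range [0, 65535] onto [0, 255]
    static int scaleRule(int v) { return (int) Math.round(v * 255.0 / 65535.0); }

    // normalizeScale: map the actual *data* range onto [0, 255]
    static int normalizeScaleRule(int v, int dataMin, int dataMax) {
        return (int) Math.round((v - dataMin) * 255.0 / (dataMax - dataMin));
    }

    public static void main(String[] args) {
        System.out.println(copyRule(640));                     // 128: wrapped around
        System.out.println(clipRule(640));                     // 255: clamped
        System.out.println(scaleRule(640));                    // 2: tiny relative to 65535
        System.out.println(normalizeScaleRule(640, 129, 640)); // 255: data max -> type max
    }
}
```

Note how `copy` wraps 640 around to 128 (which is why the copied image's minimum fell to 0), while `clip` preserves ordering at the cost of saturating every value above 255.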
The code above makes use of the `map` op to execute the appropriate `convert` op on each element of a collection.
This notebook demonstrates various image filtering operations. There are lots of image filtering operations available in the `filter` namespace:
ij.op().ops().findAll{op ->
op.startsWith("filter.")
}.collect{op -> op[7..-1]}
[addNoise, addPoissonNoise, allPartialDerivatives, bilateral, convolve, correlate, createFFTOutput, derivativeGauss, dog, fft, fftSize, frangiVesselness, gauss, hessian, ifft, linearFilter, max, mean, median, min, padFFTInput, padInput, padShiftFFTKernel, paddingIntervalCentered, paddingIntervalOrigin, partialDerivative, sigma, sobel, tubeness, variance]
Here is a demonstration of some of them:
ij.op().help("filter.addPoissonNoise")
Available operations:
(RealType out) = net.imagej.ops.filter.addPoissonNoise.AddPoissonNoiseRealType(RealType out, RealType in, long seed?)
(IterableInterval out) = net.imagej.ops.filter.addPoissonNoise.AddPoissonNoiseMap(IterableInterval out, IterableInterval in)
imageToFilter = image32
radius = 3
// We will use a 3x3 diamond as our neighborhood here.
import net.imglib2.algorithm.neighborhood.DiamondShape
shape = new DiamondShape(radius)
// Add Poisson noise.
addPoissonNoise = ij.op().run("create.img", imageToFilter)
ij.op().filter().addPoissonNoise(addPoissonNoise, imageToFilter)
// Gaussian blur.
gauss = ij.op().filter().gauss(imageToFilter, radius)
// Median filter.
median = ij.op().run("create.img", imageToFilter)
ij.op().filter().median(median, imageToFilter, shape)
// Min filter.
min = ij.op().run("create.img", imageToFilter)
ij.op().filter().min(min, imageToFilter, shape)
// Max filter.
max = ij.op().run("create.img", imageToFilter)
ij.op().filter().max(max, imageToFilter, shape)
// Sobel filter.
sobel = ij.op().filter().sobel(imageToFilter)
// Display the results side by side.
ij.notebook().display([
[["image":imageToFilter, "poissonNoise":addPoissonNoise, "gauss":gauss, "Sobel":sobel]],
[["median":median,"min":min,"max":max]]
])
image | poissonNoise | gauss | Sobel |
---|---|---|---|
median | min | max |
---|---|---|
Finally, let's take a special look at the Difference of Gaussians op, `filter.dog`:
imageToProcess = image32
sigma1 = 5
sigma2 = 10
// Difference of Gaussians.
dog = ij.op().filter().dog(imageToProcess, sigma1, sigma2) // gauss(sigma2) - gauss(sigma1)
// We can also use eval to perform the DoG.
vars = [
"image": imageToProcess,
"sigma1": [sigma1, sigma1],
"sigma2": [sigma2, sigma2]
]
evalDoG = ij.op().eval("gauss(image, sigma2) - gauss(image, sigma1)", vars)
ij.notebook().display([["dog":dog, "evalDoG":evalDoG]])
dog | evalDoG |
---|---|
This op implements the algorithm for detection of vessel- or tube-like structures in 2D and 3D images described by Frangi et al. (1998). An example is shown below, using an MRA image of the brain (by SBarnes, own work, CC BY-SA 3.0).
input = ij.io().open("https://upload.wikimedia.org/wikipedia/commons/6/66/Mra-mip.jpg")
[INFO] Populating metadata
ij.op().help("filter.frangiVesselness")
Available operations:
(RandomAccessibleInterval out) = net.imagej.ops.filter.vesselness.DefaultFrangi(RandomAccessibleInterval out, RandomAccessibleInterval in, double[] spacing, int scale)
import net.imglib2.img.array.ArrayImgs
// spacing refers to the physical distance between pixels. The default setting is {1, 1, 1...}
spacings = [1, 1, 1]
// get the dimensions of the input image
dims = new long[input.numDimensions()]
input.dimensions(dims)
// scale refers to the pixel distance at which the filter operates. Larger scales measure larger structures.
filtered = [:]
for (scale in [2, 5, 8, 13, 21])
filtered.put("Scale " + scale, ij.op().run("filter.frangiVesselness", ArrayImgs.doubles(dims), input, spacings, scale))
ij.notebook().display([filtered])
Scale 2 | Scale 5 | Scale 8 | Scale 13 | Scale 21 |
---|---|---|---|---|
Convolution is a very helpful filter for many circumstances. Below is an example of how to use the convolution filter.
import net.imglib2.type.numeric.real.FloatType;
// create the sample image
base = ij.op().run("create.img", [150, 100], new FloatType())
formula = "p[0]^2 * p[1]"
ij.op().image().equation(base, formula)
// create kernel
kernel_small = ij.op().run("create.img", [3,3], new FloatType())
kernel_big = ij.op().run("create.img", [20,20], new FloatType())
ij.op().image().equation(kernel_small, "p[0]")
ij.op().image().equation(kernel_big, "p[0]")
// convolved
convolved_small = ij.op().filter().convolve(base, kernel_small)
convolved_big = ij.op().filter().convolve(base, kernel_big)
ij.notebook().display([[
"base":base,
"small kernel":kernel_small,
"big kernel":kernel_big,
"small convolved":convolved_small,
"big convolved":convolved_big
]])
base | small kernel | big kernel | small convolved | big convolved |
---|---|---|---|---|
ImageJ can automatically select which implementation to use according to the size of the kernel. If the kernel is small enough, `ConvolveNaiveF` (a brute-force approach, less efficient in general but faster for small kernels) is used; otherwise, an FFT-based implementation is used.
op = ij.op().op("filter.convolve", base, kernel_small)
op2 = ij.op().op("filter.convolve", base, kernel_big)
[op.getClass().getName(), op2.getClass().getName()]
[net.imagej.ops.filter.convolve.ConvolveNaiveF, net.imagej.ops.filter.convolve.ConvolveFFTF]
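To see why kernel size drives this choice, here is a minimal 1D sketch of the brute-force strategy (an illustration of the assumed algorithm, not the `ConvolveNaiveF` code): each of the N output samples costs K multiply-adds, so the work grows as N×K, whereas an FFT-based approach grows roughly as N log N regardless of kernel size.

```java
public class NaiveConvolve {
    // Direct (brute-force) 1D convolution with zero-padded borders.
    static double[] convolve(double[] signal, double[] kernel) {
        double[] out = new double[signal.length];
        int r = kernel.length / 2;
        for (int i = 0; i < signal.length; i++) {      // N outputs...
            double sum = 0;
            for (int k = 0; k < kernel.length; k++) {  // ...K terms each: O(N*K)
                int j = i + k - r;
                if (j >= 0 && j < signal.length) sum += signal[j] * kernel[k];
            }
            out[i] = sum;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] smoothed = convolve(new double[] {0, 1, 0, 0},
                                     new double[] {0.25, 0.5, 0.25});
        System.out.println(java.util.Arrays.toString(smoothed));
        // prints [0.25, 0.5, 0.25, 0.0]
    }
}
```

With a 3x3 kernel the inner loop is negligible, but for a 20x20 kernel each pixel costs 400 multiply-adds, which is when the FFT route wins.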
Deconvolution in ImageJ is implemented using the Richardson–Lucy deconvolution algorithm.
import net.imglib2.util.Util
import net.imglib2.outofbounds.OutOfBoundsConstantValueFactory
import net.imagej.ops.deconvolve.RichardsonLucyF
base_deconvolved = ij.op().run(RichardsonLucyF.class, convolved_small, kernel_small, null, new OutOfBoundsConstantValueFactory<>(Util.getTypeFromInterval(kernel_small).createVariable()), 10)
base_deconvolved_big = ij.op().run(RichardsonLucyF.class, convolved_big, kernel_big, null, new OutOfBoundsConstantValueFactory<>(Util.getTypeFromInterval(kernel_small).createVariable()), 10)
ij.notebook().display([[
"small kernel":base_deconvolved,
"big kernel":base_deconvolved_big
]])
small kernel | big kernel |
---|---|
A small kernel can give a fairly accurate result. However, if the kernel is big, the result is much better if we run the deconvolution process for many iterations. Here we are going to run the deconvolution for 50 iterations.
import net.imagej.ops.deconvolve.RichardsonLucyF
base_deconvolved_big_iterate = ij.op().run(RichardsonLucyF.class, convolved_big, kernel_big, 50)
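The iteration the op performs can be sketched in 1D (a minimal illustration of the standard Richardson–Lucy update under assumed zero-padded borders, not the Ops implementation): each pass re-blurs the current estimate, compares it against the observed data, and multiplies the estimate by the blurred ratio, which is why more iterations keep sharpening the result.

```java
public class RichardsonLucy1D {
    // Direct 1D convolution with zero-padded borders.
    static double[] convolve(double[] s, double[] k) {
        double[] out = new double[s.length];
        int r = k.length / 2;
        for (int i = 0; i < s.length; i++)
            for (int j = 0; j < k.length; j++) {
                int idx = i + j - r;
                if (idx >= 0 && idx < s.length) out[i] += s[idx] * k[j];
            }
        return out;
    }

    static double[] deconvolve(double[] observed, double[] psf, int iterations) {
        double[] est = new double[observed.length];
        java.util.Arrays.fill(est, 1.0);                  // flat first guess
        double[] flipped = new double[psf.length];        // mirrored PSF for the correction step
        for (int i = 0; i < psf.length; i++) flipped[i] = psf[psf.length - 1 - i];
        for (int it = 0; it < iterations; it++) {
            double[] reblurred = convolve(est, psf);      // blur of current estimate
            double[] ratio = new double[observed.length]; // observed / reblurred, guarded against /0
            for (int i = 0; i < ratio.length; i++)
                ratio[i] = reblurred[i] > 1e-12 ? observed[i] / reblurred[i] : 0;
            double[] correction = convolve(ratio, flipped);
            for (int i = 0; i < est.length; i++) est[i] *= correction[i];
        }
        return est;
    }
}
```

Because the update is multiplicative, a non-negative estimate stays non-negative, one of the properties that makes Richardson–Lucy attractive for intensity images.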
To better deal with edges, we can use non-circulant mode. With the same 50 iterations, the result shown below is noticeably better than the one above:
import net.imagej.ops.deconvolve.RichardsonLucyF
base_deconvolved_big_noncirc = ij.op().run(RichardsonLucyF.class, convolved_big, kernel_big, null, null, null, null, null, 50, true, false)
What if the iterations take too long? In that case, we can reduce the total time by taking larger steps per iteration, i.e., by using an accelerated version of the algorithm.
import net.imagej.ops.deconvolve.RichardsonLucyF
base_deconvolved_big_acc_noncirc = ij.op().run(RichardsonLucyF.class, convolved_big, kernel_big, null, null, null, null, null, 50, true, true)
ImageJ Ops has forward and inverse Fourier transforms.
import net.imglib2.converter.Converters
import net.imglib2.type.numeric.real.FloatType
// create image
fft_in = ij.op().run("create.img", [150, 100], new FloatType())
formula = "p[0]^2 + p[1]"
ij.op().image().equation(fft_in, formula)
// apply fft
fft_out = ij.op().run("filter.fft", fft_in)
// display the complex-valued result
// Quantitatively, it is not very useful to do this, but we like pictures. :-)
import net.imglib2.RandomAccessibleInterval
fft_out_real = Converters.convert((RandomAccessibleInterval) fft_out, {i, o -> o.setReal(i.getRealFloat())}, new FloatType())
fft_out_imaginary = Converters.convert((RandomAccessibleInterval) fft_out, {i, o -> o.setReal(i.getImaginaryFloat())}, new FloatType())
ij.notebook().display([[
"real": ij.notebook().display(fft_out_real, -1, 1),
"imaginary": ij.notebook().display(fft_out_imaginary, -1, 1)
]])
real | imaginary |
---|---|
Using the inverse FFT is very similar to the forward FFT. Here we invert the result from the example above, recovering the original image.
import net.imglib2.type.numeric.real.FloatType
inverted = ij.op().run("create.img", [150,100], new FloatType())
ij.op().filter().ifft(inverted, fft_out)
inverted
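The forward/inverse pairing can be sketched without ImageJ at all. This plain-Python illustration (textbook DFT, not the ImageJ implementation) evaluates the same formula used above, transforms it, and confirms that inverting the transform recovers the original samples:

```python
import cmath

def dft(x, inverse=False):
    # Textbook O(n^2) DFT; the inverse flips the exponent sign and rescales by 1/n.
    n = len(x)
    sign = 2j if inverse else -2j
    out = [sum(x[k] * cmath.exp(sign * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

# Sample the formula p[0]^2 + p[1] on a small grid, flattened to 1D.
samples = [float(p * p + q) for p in range(4) for q in range(3)]
spectrum = dft(samples)
recovered = dft(spectrum, inverse=True)
print(all(abs(r.real - s) < 1e-9 for r, s in zip(recovered, samples)))  # True
```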
Image transformations such as rotation, scaling and cropping are accomplished using ops of the transform
namespace.
Most ops of this namespace have the nice property of being views: they do not actually copy image samples, but rather wrap the image, offering a modified "window" into the original data. Views greatly reduce memory usage, at a minimal performance cost. If you need a deep copy of the image (e.g., because time performance is paramount, or because you want to modify the transformed samples in place without affecting other transformed versions of the image), you can still copy it using the copy
namespace; see the Generating images notebook for details.
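The view-versus-copy distinction is not unique to ImageJ. As an analogy in plain Python, a memoryview wraps a buffer without copying, just as transform views wrap image data: mutating the original is visible through the view, while a copy is detached:

```python
data = bytearray([10, 20, 30, 40, 50])

view = memoryview(data)[1:4]   # a "window" into data; no samples are copied
copy = bytes(data[1:4])        # a deep copy, detached from data

data[2] = 99                   # modify the original buffer
print(view[1])                 # 99 -- the view sees the change
print(copy[1])                 # 30 -- the copy does not
```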
ij.op().ops().findAll{op ->
op.startsWith("transform.")
}.collect{op -> op[10..-1]}
[addDimensionView, collapseNumericView, collapseRealView, collapseView, concatenateView, crop, dropSingletonDimensionsView, extendBorderView, extendMirrorDoubleView, extendMirrorSingleView, extendPeriodicView, extendRandomView, extendValueView, extendView, extendZeroView, flatIterableView, hyperSliceView, interpolateView, intervalView, invertAxisView, offsetView, permuteCoordinatesInverseView, permuteCoordinatesView, permuteView, project, rasterView, rotateView, scaleView, shearView, stackView, subsampleView, translateView, unshearView, zeroMinView]
The transform.rotateView
op rotates the image 90 degrees from one dimensional axis to another.
ij.op().help("rotateView")
Available operations: (IntervalView out) = net.imagej.ops.transform.rotateView.IntervalRotateView( RandomAccessibleInterval in, int fromAxis, int toAxis) (MixedTransformView out) = net.imagej.ops.transform.rotateView.DefaultRotateView( RandomAccessible in, int fromAxis, int toAxis)
Here is an example usage of the transform.rotateView
op.
// set parameter for rotate
x = 0; y = 1; c = 2
// define functions to see the bounds of an image
bounds = {interval ->
return "(" + interval.min(0) + ", " + interval.min(1) + ") - " +
"(" + interval.max(0) + ", " + interval.max(1) + ")"
}
// Rotate the image (image, fromAxis, toAxis)
rotated = ij.op().run("rotateView", image, x, y) // 90 degrees clockwise
//rotated = ij.op().run("rotateView", image, y, x)// 90 degrees counter-clockwise
//rotated = ij.op().run("rotateView", image, x, c) // rotate through channels! WAT
// The interval bounds have automatically changed!
println("Old bounds: " + bounds(image))
println("New bounds: " + bounds(rotated))
rotated
Old bounds: (0, 0) - (159, 99) New bounds: (-99, 0) - (0, 159)
The transform.crop
op crops an image N-dimensionally. E.g., you can use it to create a substack of a 3D dataset, cut out irrelevant channels, or crop the XY planes as you would in 2D image processing software.
ij.op().help("crop")
Available operations: (ImgPlus out) = net.imagej.ops.transform.crop.CropImgPlus( ImgPlus in1, Interval in2, boolean dropSingleDimensions?) (RandomAccessibleInterval out) = net.imagej.ops.transform.crop.CropRAI( RandomAccessibleInterval in1, Interval in2, boolean dropSingleDimensions?)
Below, we show two ways to crop an image: 1) with transform.crop
; and 2) using transform.intervalView
. The former translates the image back to the origin, while the latter does not.
import net.imglib2.FinalInterval
region = FinalInterval.createMinSize(75, 27, 0, 40, 28, 3)
eye = ij.op().run("crop", image, region)
eyeView = ij.op().run("intervalView", image, region)
println("Eye bounds: " + bounds(eye))
println("EyeView bounds: " + bounds(eyeView))
ij.notebook().display([["eye":eye, "view":eyeView]])
Eye bounds: (0, 0) - (39, 27) EyeView bounds: (75, 27) - (114, 54)
eye | view |
---|---|
To perform image scaling, use the transform.scaleView
op. You already saw it in action in the "Getting started" section, but here it is again—this time enlarging an image rather than reducing it.
Just for fun, we compare three different interpolation strategies: nearest neighbor, N-linear, and Lanczos.
import net.imglib2.interpolation.randomaccess.NearestNeighborInterpolatorFactory
import net.imglib2.interpolation.randomaccess.NLinearInterpolatorFactory
import net.imglib2.interpolation.randomaccess.LanczosInterpolatorFactory
scaleFactors = [4, 4, 1] // Enlarge X and Y by 4x; leave channel count the same.
nearestNeighborEye = ij.op().run("scaleView", eye, scaleFactors, new NearestNeighborInterpolatorFactory())
nLinearEye = ij.op().run("scaleView", eye, scaleFactors, new NLinearInterpolatorFactory())
lanczosEye = ij.op().run("scaleView", eye, scaleFactors, new LanczosInterpolatorFactory())
ij.notebook().display([["nearest neighbor":nearestNeighborEye, "N-linear":nLinearEye, "Lanczos":lanczosEye]])
nearest neighbor | N-linear | Lanczos |
---|---|---|
Of course, some detail from the original image has been lost, since we scaled down and then back up again.
The transform.intervalView
can also be used to expand the bounding box of an image. However, there is one catch: you must first decide what the out-of-bounds sample values will be. The transform.extend
ops achieve this goal. If you forget to extend the image before padding it via intervalView
, you will receive an exception when attempting to query any out-of-bounds samples.
Note that the various transform.extend
ops explicitly remove the bounding box of an image, expanding the defined sample values to infinity in all directions. In most circumstances, you will want to use transform.intervalView
to rebound the image after extending it.
ij.op().ops().findAll{op ->
op.startsWith("transform.extend")
}.collect{op -> op[10..-1]}
[extendBorderView, extendMirrorDoubleView, extendMirrorSingleView, extendPeriodicView, extendRandomView, extendValueView, extendView, extendZeroView]
Here are some side-by-side examples of what happens when you pad an image with these various approaches:
def pad(image, extended, t, r, b, l) {
min = new long[image.numDimensions()]
max = new long[image.numDimensions()]
image.min(min)
image.max(max)
min[0] -= l; min[1] -= t; max[0] += r; max[1] += b
return ij.op().run("intervalView", extended, min, max)
}
// Define the top, right, bottom and left padding amounts.
t = r = b = l = 20
// Pad the image with different out-of-bounds strategies.
extendedBorder = ij.op().run("transform.extendBorderView", eye)
paddedBorder = pad(eye, extendedBorder, t, r, b, l)
extendedMirrorDouble = ij.op().run("transform.extendMirrorDoubleView", eye)
paddedMirrorDouble = pad(eye, extendedMirrorDouble, t, r, b, l)
extendedMirrorSingle = ij.op().run("transform.extendMirrorSingleView", eye)
paddedMirrorSingle = pad(eye, extendedMirrorSingle, t, r, b, l)
extendedPeriodic = ij.op().run("transform.extendPeriodicView", eye)
paddedPeriodic = pad(eye, extendedPeriodic, t, r, b, l)
minValue = eye.firstElement().getMinValue().doubleValue()
maxValue = eye.firstElement().getMaxValue().doubleValue()
extendedRandom = ij.op().run("transform.extendRandomView", eye, minValue, maxValue)
paddedRandom = pad(eye, extendedRandom, t, r, b, l)
value = eye.firstElement().copy(); value.set(100)
extendedValue = ij.op().run("transform.extendValueView", eye, value)
paddedValue = pad(eye, extendedValue, t, r, b, l)
extendedZero = ij.op().run("transform.extendZeroView", eye)
paddedZero = pad(eye, extendedZero, t, r, b, l)
ij.notebook().display([[
"border":paddedBorder,
"mirror double":paddedMirrorDouble,
"mirror single":paddedMirrorSingle,
"periodic":paddedPeriodic,
"random":paddedRandom,
"value":paddedValue,
"zero":paddedZero
]])
border | mirror double | mirror single | periodic | random | value | zero |
---|---|---|---|---|---|---|
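The extension strategies above are easy to state in one dimension. The following plain-Python function (for illustration only; not the ImageJ implementation) mimics how several of the strategies resolve an out-of-bounds index:

```python
def extend(data, i, mode, value=0):
    # Resolve index i against data, applying an out-of-bounds strategy.
    n = len(data)
    if 0 <= i < n:
        return data[i]
    if mode == "border":          # clamp to the nearest edge sample
        return data[min(max(i, 0), n - 1)]
    if mode == "periodic":        # wrap around
        return data[i % n]
    if mode == "mirror_single":   # reflect without repeating the edge sample
        period = 2 * n - 2
        i %= period
        return data[i if i < n else period - i]
    if mode == "value":           # constant fill
        return value
    if mode == "zero":
        return 0

data = [1, 2, 3, 4]
print([extend(data, i, "border") for i in range(-2, 6)])         # [1, 1, 1, 2, 3, 4, 4, 4]
print([extend(data, i, "periodic") for i in range(-2, 6)])       # [3, 4, 1, 2, 3, 4, 1, 2]
print([extend(data, i, "mirror_single") for i in range(-2, 6)])  # [3, 2, 1, 2, 3, 4, 3, 2]
print([extend(data, i, "zero") for i in range(-2, 6)])           # [0, 0, 1, 2, 3, 4, 0, 0]
```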
The transform.hyperSliceView
op is used to cut out a portion of an image's dimensions—for example, to slice out a single plane. This notebook demonstrates how to use it in conjunction with a Hessian filter.
// Hessian filter
hessianComposite = ij.op().filter().hessian(image32)
hessian = hessianComposite.interval
println(hessian.dimension(2))
stackIndex = hessian.numDimensions() - 1
stackLength = hessian.dimension(stackIndex)
hessianSlices = [:]
for (i = 0; i < stackLength; i++)
hessianSlices.put("Derivative #" + i, ij.op().transform().hyperSliceView(hessian, stackIndex, i))
ij.notebook().display([hessianSlices])
4
Derivative #0 | Derivative #1 | Derivative #2 | Derivative #3 |
---|---|---|---|
Thresholding is a technique for dividing an image into two (or more) classes of pixels, which are typically called "foreground" and "background."
threshImage = ij.op().run("crop", image, slice, true)
Ops has all the same global thresholding methods as ImageJ 1.x, as well as some local thresholding methods.
ij.op().ops().findAll{op ->
op.startsWith("threshold.")
}.collect{op -> op[10..-1]}
[apply, huang, ij1, intermodes, isoData, li, localBernsenThreshold, localContrastThreshold, localMeanThreshold, localMedianThreshold, localMidGreyThreshold, localNiblackThreshold, localPhansalkarThreshold, localSauvolaThreshold, maxEntropy, maxLikelihood, mean, minError, minimum, moments, otsu, percentile, renyiEntropy, rosin, shanbhag, triangle, yen]
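To give a flavor of how such methods work, here is a plain-Python sketch of the idea behind one of them, Otsu's method (illustrative only, operating on a toy histogram rather than an image): choose the cut that maximizes the between-class variance of "background" and "foreground" pixels.

```python
def otsu(histogram):
    # Scan every candidate threshold t; keep the one with the largest
    # between-class variance w_bg * w_fg * (mean_bg - mean_fg)^2.
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t, h in enumerate(histogram):
        w_bg += h                    # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * h
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal toy histogram: dark peak near bin 1, bright peak near bin 6.
hist = [4, 8, 5, 0, 0, 5, 9, 4]
print(otsu(hist))  # 2 -- the valley between the two peaks
```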
Let's check out the global thresholding algorithms:
tHuang = ij.op().threshold().huang(threshImage)
tIJ1 = ij.op().threshold().ij1(threshImage) // ImageJ 1.x calls this "Default"
tIntermodes = ij.op().threshold().intermodes(threshImage)
tIsoData = ij.op().threshold().isoData(threshImage)
tLi = ij.op().threshold().li(threshImage)
tMaxEntropy = ij.op().threshold().maxEntropy(threshImage)
tMaxLikelihood = ij.op().threshold().maxLikelihood(threshImage)
tMean = ij.op().threshold().mean(threshImage)
tMinError = ij.op().threshold().minError(threshImage)
tMinimum = ij.op().threshold().minimum(threshImage)
tMoments = ij.op().threshold().moments(threshImage)
tOtsu = ij.op().threshold().otsu(threshImage)
tPercentile = ij.op().threshold().percentile(threshImage)
tRenyiEntropy = ij.op().threshold().renyiEntropy(threshImage)
tRosin = ij.op().threshold().rosin(threshImage)
tShanbhag = ij.op().threshold().shanbhag(threshImage)
tTriangle = ij.op().threshold().triangle(threshImage)
tYen = ij.op().threshold().yen(threshImage)
ij.notebook().display([
[["huang":tHuang, "ij1":tIJ1, "intermodes":tIntermodes, "isodata":tIsoData, "li":tLi, "max entropy":tMaxEntropy]],
[["max likelihood":tMaxLikelihood, "mean":tMean, "min error":tMinError, "minimum":tMinimum, "moments":tMoments, "otsu":tOtsu]],
[["percentile":tPercentile, "renyi entropy":tRenyiEntropy, "rosin":tRosin, "shanbhag":tShanbhag, "triangle":tTriangle, "yen":tYen]]
])
huang | ij1 | intermodes | isodata | li | max entropy |
---|---|---|---|---|---|
max likelihood | mean | min error | minimum | moments | otsu |
---|---|---|---|---|---|
percentile | renyi entropy | rosin | shanbhag | triangle | yen |
---|---|---|---|---|---|
Let's try some local thresholding. This is a bit more complex, because local thresholding algorithms typically need a neighborhood, and often some other secondary parameters. We will take the Bernsen algorithm for a spin:
ij.op().help("threshold.localBernsenThreshold")
Available operations: (IterableInterval out) = net.imagej.ops.threshold.localBernsen.LocalBernsenThreshold( IterableInterval out, RandomAccessibleInterval in, Shape shape, OutOfBoundsFactory outOfBoundsFactory?, double contrastThreshold, double halfMaxValue)
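The per-pixel rule behind Bernsen can be sketched in plain Python (the parameter names mirror the op's signature above, but this is an illustrative reading of the method, not ImageJ's exact code): when the local contrast (max minus min of the neighborhood) exceeds contrastThreshold, threshold the pixel at the local midgrey; otherwise treat the whole neighborhood as one class, decided by comparing its midgrey to halfMaxValue.

```python
def bernsen_pixel(neighborhood, center, contrast_threshold, half_max_value):
    lo, hi = min(neighborhood), max(neighborhood)
    if hi - lo > contrast_threshold:
        return center > (lo + hi) / 2.0       # high contrast: local midgrey cut
    return (lo + hi) / 2.0 > half_max_value   # low contrast: one class for all

# High-contrast neighborhood: center 80 vs midgrey (0+100)/2 = 50 -> foreground.
print(bernsen_pixel([0, 50, 100], 80, 10, 10))   # True
# Low-contrast dark neighborhood: midgrey 5 <= halfMaxValue 10 -> background.
print(bernsen_pixel([4, 5, 6], 5, 10, 10))       # False
```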
import net.imglib2.algorithm.neighborhood.HyperSphereShape
// Secondary parameter values. These are wild guesses.
contrastThreshold = 10
halfMaxValue = 10
// Let's try with a variety of neighborhood radii and compare side-by-side.
import net.imglib2.type.logic.BitType
radii = [1, 3, 5, 8, 12, 15]
bernsenImages = [:]
for (radius in radii) {
binaryImage = ij.op().run("create.img", threshImage, new BitType())
ij.op().threshold().localBernsenThreshold(binaryImage, threshImage, new HyperSphereShape(radius),
contrastThreshold, halfMaxValue)
bernsenImages.put("radius " + radius, binaryImage)
}
ij.notebook().display([bernsenImages])
radius 1 | radius 3 | radius 5 | radius 8 | radius 12 | radius 15 |
---|---|---|---|---|---|
Nice illustration of how much better local thresholding can be, eh? The limited scope when thresholding each pixel can go a long way toward correcting for issues like uneven illumination.
Next, let's prepare both a gray image and a binary image for processing.
import net.imglib2.algorithm.neighborhood.HyperSphereShape
// Use the gray image as input for thresholding.
threshImage = grayImage
// Secondary parameter values. These are wild guesses.
contrastThreshold = 10
halfMaxValue = 10
import net.imglib2.type.logic.BitType
radius = 15
binaryImage = ij.op().run("create.img", threshImage, new BitType())
ij.op().threshold().localBernsenThreshold(binaryImage, threshImage, new HyperSphereShape(radius),
contrastThreshold, halfMaxValue)
ij.notebook().display([["gray": grayImage, "binary": binaryImage]])
gray | binary |
---|---|
Ops includes operators for mathematical morphology on both binary and grayscale images. The two most basic morphological operators are erosion and dilation.
The next two most basic are opening (erosion followed by dilation) and closing (dilation followed by erosion).
These latter two are not strictly necessary, since you can simply call erode followed by dilate or vice versa, but they are convenient as a shorthand.
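The compositions are easy to verify on a toy example. Here is a plain-Python sketch (1D binary morphology with a radius-1 neighborhood; illustrative only, not ImageJ code) showing that opening removes specks thinner than the structuring element while closing fills gaps thinner than it:

```python
def erode(x):
    # A pixel survives only if its whole (clipped) neighborhood is on.
    n = len(x)
    return [all(x[j] for j in range(max(0, i - 1), min(n, i + 2)))
            for i in range(n)]

def dilate(x):
    # A pixel turns on if anything in its (clipped) neighborhood is on.
    n = len(x)
    return [any(x[j] for j in range(max(0, i - 1), min(n, i + 2)))
            for i in range(n)]

def opening(x):
    return dilate(erode(x))   # removes specks thinner than the element

def closing(x):
    return erode(dilate(x))   # fills gaps thinner than the element

x = [bool(v) for v in [0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0]]
print([int(v) for v in opening(x)])  # [0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0]
print([int(v) for v in closing(x)])  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

Note how opening deletes the lone pixel at index 1 but preserves the longer runs, while closing bridges every single-pixel gap.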
// HINT: Try different binary images here!
morphImage = binaryImage
// We will use the smallest radius: a single pixel.
import net.imglib2.algorithm.neighborhood.HyperSphereShape
shape = new HyperSphereShape(1)
erode = ij.op().morphology().erode(morphImage, shape)
dilate = ij.op().morphology().dilate(morphImage, shape)
open = ij.op().morphology().open(morphImage, [shape])
close = ij.op().morphology().close(morphImage, [shape])
// Let's also check whether open and close visually match what we expect.
manualOpen = ij.op().morphology().dilate(erode, shape)
manualClose = ij.op().morphology().erode(dilate, shape)
ij.notebook().display([["erode":erode, "dilate":dilate, "open":open, "close":close,
"manual open":manualOpen, "manual close":manualClose]])
erode | dilate | open | close | manual open | manual close |
---|---|---|---|---|---|
The open and close look good—they match their respective definitions.
Let's quickly try some grayscale morphological operations as well. This time we also invoke the top-hat transform.
morphImage = grayImage
// We will use a larger radius of 3 for our neighborhood this time.
import net.imglib2.algorithm.neighborhood.HyperSphereShape
shape = new HyperSphereShape(3)
erode = ij.op().morphology().erode(morphImage, shape)
dilate = ij.op().morphology().dilate(morphImage, shape)
open = ij.op().morphology().open(morphImage, [shape])
close = ij.op().morphology().close(morphImage, [shape])
blackTopHat = ij.op().morphology().blackTopHat(morphImage, [shape])
whiteTopHat = ij.op().morphology().topHat(morphImage, [shape])
ij.notebook().display([["erode":erode, "dilate":dilate, "open":open, "close":close,
"black top-hat":blackTopHat, "white top-hat":whiteTopHat]])
erode | dilate | open | close | black top-hat | white top-hat |
---|---|---|---|---|---|
Ops also provides hit-or-miss operations; more specifically, several thinning algorithms:
import net.imagej.ops.morphology.thin.ThinGuoHall
import net.imagej.ops.morphology.thin.ThinHilditch
import net.imagej.ops.morphology.thin.ThinMorphological
import net.imagej.ops.morphology.thin.ThinZhangSuen
// HINT: Try different binary images here!
morphImage = binaryImage
GuoHall = ij.op().run(ThinGuoHall.class, morphImage)
Hilditch = ij.op().run(ThinHilditch.class, morphImage)
Morphological = ij.op().run(ThinMorphological.class, morphImage)
zhangSuen = ij.op().run(ThinZhangSuen.class, morphImage)
ij.notebook().display([["GuoHall":GuoHall, "Hilditch":Hilditch,
"Morphological":Morphological, "zhangSuen":zhangSuen]])
GuoHall | Hilditch | Morphological | zhangSuen |
---|---|---|---|
colCount = 5 // Change it if you wish.
// Define some helper functions.
notebookPath = { op ->
tokens = op.split("\\.")
tokens[-1] += ".ipynb"
opNotebook = java.nio.file.Paths.get("Ops", tokens)
if (opNotebook.toFile().exists()) return opNotebook
ns = net.imagej.ops.OpUtils.getNamespace(op)
if (ns == null) return null
nsNotebook = java.nio.file.Paths.get("Ops", ns, ns + ".ipynb")
if (nsNotebook.toFile().exists()) return nsNotebook
}
ns = { net.imagej.ops.OpUtils.getNamespace(it) ?: "<global>" }
// Organize op namespaces by column.
opsByNS = [:]; ij.op().ops().each{ opsByNS.get(ns(it), []).add(it) }
cols = []; (1..colCount).each { cols.add([]) }
for (ns in opsByNS.keySet().sort()) {
col = cols.min { it.flatten().size() } // Add to shortest column.
item = ["<h4>${ns}</h4>"]
item.addAll(opsByNS.get(ns))
col.add(item)
}
// Generate HTML table.
s = "<h3>Index of available ops</h3>"
s += "<style>"
s += ".op-index > tbody > tr > td { vertical-align: top; }"
s += ".op-index > tbody > tr { background-color: transparent !important; }"
s += ".op-namespace { border: 1px solid gray !important; margin: 0 !important; }"
s += ".op-namespace td { width: 16%; padding: 1px 4px 1px 4px; }"
s += ".op-namespace a:hover { cursor: pointer !important; }"
s += "</style>"
s += "<table class=\"op-index\" border=0 cellpadding=0><tbody><tr>"
for (col in cols) {
s += "<td valign=\"top\">"
for (ns in col) {
// Nested tables! Great plan? Or the greatest plan?
s += "<table class=\"op-namespace\"><tbody>"
for (op in ns) {
label = net.imagej.ops.OpUtils.stripNamespace(op)
url = notebookPath(op)
html = url == null ? label : "<a href=\"${url}\">${label}</a>"
s += "<tr><td>${html}</td></tr>"
}
s += "</tbody></table><br>"
}
s += "</td>"
}
s += "</tr></tbody></table>"
(net.imagej.notebook.mime.HTMLObject) { s }