Image Locators


Iterators provide access to an image pixel by pixel, but you often also want access to neighbouring pixels (e.g. when computing a gradient, or smoothing); that is what locators are for. Let's consider the problem of smoothing with a

1 2 1
2 4 2
1 2 1

kernel (the code's in image2.cc):

Start by including Image.h and defining namespace aliases for clarity:

#include "lsst/afw/image/Image.h"

namespace geom = lsst::afw::geom;
namespace image = lsst::afw::image;

typedef image::Image<int> ImageT;

int main() {
Declare an Image:

    ImageT in(geom::Extent2I(10, 6));
Set the image to a ramp:

    for (int y = 0; y != in.getHeight(); ++y) {
        for (ImageT::xy_locator ptr = in.xy_at(0, y), end = in.xy_at(in.getWidth(), y);
             ptr != end; ++ptr.x()) {
            *ptr = y;
        }
    }
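
For comparison, here's the same ramp written with a plain x_iterator (a sketch only; this snippet isn't part of image2.cc):

    for (int y = 0; y != in.getHeight(); ++y) {
        for (ImageT::x_iterator ptr = in.row_begin(y), end = in.row_end(y); ptr != end; ++ptr) {
            *ptr = y;
        }
    }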

Using the locator didn't gain us much, did it? The code's a little messier than the x_iterator version. But now we can add code to calculate the smoothed image. First make an output image, and copy the input pixels:

    //
    // Convolve with a pseudo-Gaussian kernel ((1, 2, 1), (2, 4, 2), (1, 2, 1))
    //
    ImageT out(in.getDimensions());     // Make an output image the same size as the input image
    out <<= in;
(We didn't need to copy all of them, only the border pixels that we won't smooth, but this is an easy way to do it.)
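
If you did want to copy just the border, one way is to use subimages that share pixels with in and out; the same subimage constructor is used below to normalise the kernel. This is only a sketch, not part of image2.cc:

    geom::Extent2I dims = in.getDimensions();
    geom::Box2I border[4] = {
        geom::Box2I(geom::Point2I(0, 0), geom::Extent2I(dims.getX(), 1)),                  // bottom row
        geom::Box2I(geom::Point2I(0, dims.getY() - 1), geom::Extent2I(dims.getX(), 1)),    // top row
        geom::Box2I(geom::Point2I(0, 1), geom::Extent2I(1, dims.getY() - 2)),              // left column
        geom::Box2I(geom::Point2I(dims.getX() - 1, 1), geom::Extent2I(1, dims.getY() - 2)) // right column
    };
    for (int i = 0; i != 4; ++i) {
        ImageT(out, border[i], image::LOCAL) <<= ImageT(in, border[i], image::LOCAL);      // shallow views; writes go to out
    }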

Now do the smoothing:

    for (int y = 1; y != in.getHeight() - 1; ++y) {
        for (ImageT::xy_locator ptr = in.xy_at(1, y), end = in.xy_at(in.getWidth() - 1, y),
                 optr = out.xy_at(1, y); ptr != end; ++ptr.x(), ++optr.x()) {
            *optr = ptr(-1, -1) + 2*ptr(0, -1) + ptr(1, -1) +
                    2*ptr(-1, 0) + 4*ptr(0, 0) + 2*ptr(1, 0) +
                    ptr(-1, 1) + 2*ptr(0, 1) + ptr(1, 1);
        }
    }
(N.b. you don't really want to smooth this way: not only is this kernel separable into a 1 2 1 pass along x followed by a 1 2 1 pass along y, but lsst::afw::math can do convolutions for you.)
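
To make the separability concrete, here is a two-pass version using only the locator and iterator machinery shown on this page (a sketch, not part of image2.cc; the interior pixels end up identical to out, though the border is handled differently):

    ImageT tmp(in.getDimensions());
    tmp <<= in;
    for (int y = 0; y != in.getHeight(); ++y) {             // smooth with (1, 2, 1) along x
        ImageT::x_iterator tptr = tmp.row_begin(y) + 1;
        for (ImageT::const_xy_locator ptr = in.xy_at(1, y), end = in.xy_at(in.getWidth() - 1, y);
             ptr != end; ++ptr.x(), ++tptr) {
            *tptr = ptr(-1, 0) + 2*ptr(0, 0) + ptr(1, 0);
        }
    }

    ImageT sep(in.getDimensions());
    sep <<= tmp;
    for (int y = 1; y != in.getHeight() - 1; ++y) {         // ... then with (1, 2, 1) along y
        ImageT::x_iterator optr = sep.row_begin(y) + 1;
        for (ImageT::const_xy_locator ptr = tmp.xy_at(1, y), end = tmp.xy_at(tmp.getWidth() - 1, y);
             ptr != end; ++ptr.x(), ++optr) {
            *optr = ptr(0, -1) + 2*ptr(0, 0) + ptr(0, 1);
        }
    }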

Here's a faster way to do the same thing (the use of an Image::Ptr is just for variety):

    //
    // Do the same thing a faster way, using cached_location_t
    //
    ImageT::Ptr out2(new ImageT(in.getDimensions()));
    *out2 <<= in;

    typedef ImageT::const_xy_locator xy_loc;

    for (int y = 1; y != in.getHeight() - 1; ++y) {
        // "dot" means "cursor location" in emacs
        xy_loc dot = in.xy_at(1, y), end = in.xy_at(in.getWidth() - 1, y);

        xy_loc::cached_location_t nw = dot.cache_location(-1, -1);
        xy_loc::cached_location_t n  = dot.cache_location( 0, -1);
        xy_loc::cached_location_t ne = dot.cache_location( 1, -1);
        xy_loc::cached_location_t w  = dot.cache_location(-1,  0);
        xy_loc::cached_location_t c  = dot.cache_location( 0,  0);
        xy_loc::cached_location_t e  = dot.cache_location( 1,  0);
        xy_loc::cached_location_t sw = dot.cache_location(-1,  1);
        xy_loc::cached_location_t s  = dot.cache_location( 0,  1);
        xy_loc::cached_location_t se = dot.cache_location( 1,  1);

        for (ImageT::x_iterator optr = out2->row_begin(y) + 1; dot != end; ++dot.x(), ++optr) {
            *optr = dot[nw] + 2*dot[n] + dot[ne] +
                    2*dot[w] + 4*dot[c] + 2*dot[e] +
                    dot[sw] + 2*dot[s] + dot[se];
        }
    }
The xy_loc::cached_location_t variables remember relative positions: dot[nw] reads the same pixel as dot(-1, -1), but the offset is computed once, when the location is cached, rather than at every access.

We can rewrite this to move setting nw, se etc. out of the loop:

    //
    // Do the same calculation, but set nw etc. outside the loop
    //
    xy_loc pix11 = in.xy_at(1, 1);

    xy_loc::cached_location_t nw = pix11.cache_location(-1, -1);
    xy_loc::cached_location_t n  = pix11.cache_location( 0, -1);
    xy_loc::cached_location_t ne = pix11.cache_location( 1, -1);
    xy_loc::cached_location_t w  = pix11.cache_location(-1,  0);
    xy_loc::cached_location_t c  = pix11.cache_location( 0,  0);
    xy_loc::cached_location_t e  = pix11.cache_location( 1,  0);
    xy_loc::cached_location_t sw = pix11.cache_location(-1,  1);
    xy_loc::cached_location_t s  = pix11.cache_location( 0,  1);
    xy_loc::cached_location_t se = pix11.cache_location( 1,  1);

    for (int y = 1; y != in.getHeight() - 1; ++y) {
        // "dot" means "cursor location" in emacs
        xy_loc dot = in.xy_at(1, y), end = in.xy_at(in.getWidth() - 1, y);

        for (ImageT::x_iterator optr = out2->row_begin(y) + 1; dot != end; ++dot.x(), ++optr) {
            *optr = dot[nw] + 2*dot[n] + dot[ne] +
                    2*dot[w] + 4*dot[c] + 2*dot[e] +
                    dot[sw] + 2*dot[s] + dot[se];
        }
    }

You may have noticed that the kernel isn't normalised. We could change the coefficients, but that would slow things down for integer images (such as the one here). Instead, we can normalise after the fact by making an Image that shares pixels with the central part of out2 and manipulating it via the overloaded operator/=:

    //
    // Normalise the kernel.  I.e. divide the smoothed parts of image2 by 16
    //
    {
        ImageT center = ImageT(
            *out2,
            geom::Box2I(geom::Point2I(1, 1), in.getDimensions() - geom::Extent2I(2)),
            image::LOCAL
        );
        center /= 16;
    }
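
Because center shares pixels with *out2, dividing center also divides the central part of *out2, while the unsmoothed border is untouched. A quick sanity check could look like this (a sketch, not part of image2.cc; it assumes <cassert> has been included):

    assert((*out2)(1, 1) == out(1, 1) / 16);   // smoothed interior pixel, now normalised
    assert((*out2)(0, 0) == out(0, 0));        // border pixel, untouched by the division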

N.b. you can use the iterator embedded in the locator directly if you really want to, e.g.:

    for (int y = 0; y != in.getHeight(); ++y) {
        for (ImageT::xy_x_iterator ptr = in.xy_at(0, y).x(), end = in.xy_at(in.getWidth(), y).x();
             ptr != end; ++ptr) {
            *ptr = 0;
        }
    }
We called the iterator xy_x_iterator, not x_iterator, for consistency with MaskedImage.

Finally, write some output files and close out main():

    //
    // Save those images to disk
    //
    out.writeFits("foo.fits");
    out2->writeFits("foo2.fits");

    return 0;
}