Image Locators

(Return to Images)

Iterators provide access to an image pixel by pixel, but you often also want access to neighbouring pixels (e.g. when computing a gradient, or smoothing); that is what locators are for. Let's consider the problem of smoothing with a

1 2 1
2 4 2
1 2 1

kernel (the code's in image2.cc):

Start by including Image.h and defining a namespace alias for clarity:

#include "lsst/geom.h"
namespace image = lsst::afw::image;
typedef image::Image<int> ImageT;
int main() {
Declare an Image
ImageT in(lsst::geom::Extent2I(10, 6));

Set the image to a ramp:

    for (int y = 0; y != in.getHeight(); ++y) {
        for (ImageT::xy_locator ptr = in.xy_at(0, y), end = in.xy_at(in.getWidth(), y); ptr != end;
             ++ptr.x()) {
            *ptr = y;
        }
    }
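
For comparison, here's what the same ramp looks like written with a plain x_iterator (a sketch, not part of image2.cc):

    // Same ramp, using a row iterator (x_iterator) instead of a locator -- for comparison only
    for (int y = 0; y != in.getHeight(); ++y) {
        for (ImageT::x_iterator ptr = in.row_begin(y), end = in.row_end(y); ptr != end; ++ptr) {
            *ptr = y;
        }
    }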

That didn't gain us much, did it? The locator version is a little messier than the plain x_iterator loop sketched above. But now we can add code to calculate the smoothed image. First make an output image, and copy the input pixels:

    //
    // Convolve with a pseudo-Gaussian kernel ((1, 2, 1), (2, 4, 2), (1, 2, 1))
    //
    ImageT out(in.getDimensions());  // Make an output image the same size as the input image
    out.assign(in);
(We didn't need to copy all of the pixels, just the ones around the edge that we won't smooth, but this is an easy way to do it.)
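
If you did want to copy only that one-pixel border instead of calling out.assign(in), a sketch might look like this (not part of image2.cc; copyEdge is a hypothetical helper, and the views rely on the shared-pixel subimage constructor used later on this page):

    // Copy just the 1-pixel border from in to out, via shared-pixel subimage views
    auto copyEdge = [&](lsst::geom::Box2I const& box) {
        ImageT(out, box).assign(ImageT(in, box));  // both temporaries share pixels with their parents
    };
    int const w = in.getWidth(), h = in.getHeight();
    copyEdge(lsst::geom::Box2I(lsst::geom::Point2I(0, 0), lsst::geom::Extent2I(w, 1)));          // bottom row
    copyEdge(lsst::geom::Box2I(lsst::geom::Point2I(0, h - 1), lsst::geom::Extent2I(w, 1)));      // top row
    copyEdge(lsst::geom::Box2I(lsst::geom::Point2I(0, 1), lsst::geom::Extent2I(1, h - 2)));      // left column
    copyEdge(lsst::geom::Box2I(lsst::geom::Point2I(w - 1, 1), lsst::geom::Extent2I(1, h - 2)));  // right column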

Now do the smoothing:

    for (int y = 1; y != in.getHeight() - 1; ++y) {
        for (ImageT::xy_locator ptr = in.xy_at(1, y), end = in.xy_at(in.getWidth() - 1, y),
                                optr = out.xy_at(1, y);
             ptr != end; ++ptr.x(), ++optr.x()) {
            *optr = ptr(-1, -1) + 2 * ptr(0, -1) + ptr(1, -1) + 2 * ptr(-1, 0) + 4 * ptr(0, 0) +
                    2 * ptr(1, 0) + ptr(-1, 1) + 2 * ptr(0, 1) + ptr(1, 1);
        }
    }
(N.b. you don't really want to do this: not only is this kernel separable into (1 2 1) applied first along x and then along y, but lsst::afw::math can do convolutions for you; see the sketch below.)
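
For reference, here's a minimal sketch of what handing the convolution to lsst::afw::math might look like (not part of image2.cc; it assumes the FixedKernel, convolve(), and ConvolutionControl interfaces from lsst/afw/math, and uses floating-point images so the normalised coefficients aren't truncated):

    #include "lsst/afw/math/Kernel.h"
    #include "lsst/afw/math/ConvolveImage.h"

    void smooth(image::Image<float> const& in, image::Image<float>& out) {
        // Build the normalised 3x3 pseudo-Gaussian as a kernel image
        image::Image<double> kImage(lsst::geom::Extent2I(3, 3));
        int const weights[3][3] = {{1, 2, 1}, {2, 4, 2}, {1, 2, 1}};
        for (int y = 0; y != 3; ++y) {
            for (int x = 0; x != 3; ++x) {
                kImage(x, y) = weights[y][x] / 16.0;
            }
        }
        lsst::afw::math::FixedKernel kernel(kImage);
        lsst::afw::math::convolve(out, in, kernel, lsst::afw::math::ConvolutionControl());
    }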

Here's a faster way to do the same thing (the use of a std::shared_ptr is just for variety):

    //
    // Do the same thing a faster way, using cached_location_t
    //
    std::shared_ptr<ImageT> out2(new ImageT(in.getDimensions()));
    out2->assign(in);

    typedef ImageT::const_xy_locator xy_loc;

    for (int y = 1; y != in.getHeight() - 1; ++y) {
        // "dot" means "cursor location" in emacs
        xy_loc dot = in.xy_at(1, y), end = in.xy_at(in.getWidth() - 1, y);

        xy_loc::cached_location_t nw = dot.cache_location(-1, -1);
        xy_loc::cached_location_t n = dot.cache_location(0, -1);
        xy_loc::cached_location_t ne = dot.cache_location(1, -1);
        xy_loc::cached_location_t w = dot.cache_location(-1, 0);
        xy_loc::cached_location_t c = dot.cache_location(0, 0);
        xy_loc::cached_location_t e = dot.cache_location(1, 0);
        xy_loc::cached_location_t sw = dot.cache_location(-1, 1);
        xy_loc::cached_location_t s = dot.cache_location(0, 1);
        xy_loc::cached_location_t se = dot.cache_location(1, 1);

        for (ImageT::x_iterator optr = out2->row_begin(y) + 1; dot != end; ++dot.x(), ++optr) {
            *optr = dot[nw] + 2 * dot[n] + dot[ne] + 2 * dot[w] + 4 * dot[c] + 2 * dot[e] + dot[sw] +
                    2 * dot[s] + dot[se];
        }
    }
The xy_loc::cached_location_t variables remember positions relative to the locator, so dot[nw] reads the pixel at offset (-1, -1) from dot's current position.

We can rewrite this to move the setting of nw, se, etc. out of the loop:

    //
    // Do the same calculation, but set nw etc. outside the loop
    //
    xy_loc pix11 = in.xy_at(1, 1);

    xy_loc::cached_location_t nw = pix11.cache_location(-1, -1);
    xy_loc::cached_location_t n = pix11.cache_location(0, -1);
    xy_loc::cached_location_t ne = pix11.cache_location(1, -1);
    xy_loc::cached_location_t w = pix11.cache_location(-1, 0);
    xy_loc::cached_location_t c = pix11.cache_location(0, 0);
    xy_loc::cached_location_t e = pix11.cache_location(1, 0);
    xy_loc::cached_location_t sw = pix11.cache_location(-1, 1);
    xy_loc::cached_location_t s = pix11.cache_location(0, 1);
    xy_loc::cached_location_t se = pix11.cache_location(1, 1);

    for (int y = 1; y != in.getHeight() - 1; ++y) {
        // "dot" means "cursor location" in emacs
        xy_loc dot = in.xy_at(1, y), end = in.xy_at(in.getWidth() - 1, y);

        for (ImageT::x_iterator optr = out2->row_begin(y) + 1; dot != end; ++dot.x(), ++optr) {
            *optr = dot[nw] + 2 * dot[n] + dot[ne] + 2 * dot[w] + 4 * dot[c] + 2 * dot[e] + dot[sw] +
                    2 * dot[s] + dot[se];
        }
    }

You may have noticed that the kernel isn't normalised: its coefficients sum to 16. We could scale the coefficients themselves, but that'd slow things down for integer images (such as the one here); instead we can normalise after the fact by making an Image that shares pixels with the central part of out2 and manipulating it via the overloaded operator/=:

    //
    // Normalise the kernel, i.e. divide the smoothed parts of out2 by 16
    //
    {
        ImageT center = ImageT(*out2, lsst::geom::Box2I(lsst::geom::Point2I(1, 1),
                                                        lsst::geom::Extent2I(in.getWidth() - 2,
                                                                             in.getHeight() - 2)));
        center /= 16;
    }
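
As a quick sanity check (a sketch, not part of image2.cc, and it needs #include <cassert>): the input was a ramp with every pixel in row y equal to y, so each interior pixel smooths to 4*(y-1) + 8*y + 4*(y+1) = 16*y, and after the division it should again equal its row number:

    // Check the normalised result on the interior pixels
    for (int y = 1; y != out2->getHeight() - 1; ++y) {
        for (ImageT::x_iterator ptr = out2->row_begin(y) + 1, end = out2->row_end(y) - 1; ptr != end; ++ptr) {
            assert(*ptr == y);
        }
    }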

N.b. you can use the iterator embedded in the locator directly if you really want to, e.g.

    for (int y = 0; y != in.getHeight(); ++y) {
        for (ImageT::xy_x_iterator ptr = in.xy_at(0, y).x(), end = in.xy_at(in.getWidth(), y).x();
             ptr != end; ++ptr) {
            *ptr = 0;
        }
    }
We called this iterator xy_x_iterator, not x_iterator, for consistency with MaskedImage.

Finally, write some output files and close out main():

    //
    // Save those images to disk
    //
    out.writeFits("foo.fits");
    out2->writeFits("foo2.fits");

    return 0;
}