Thursday, November 14, 2013

Lab 5: Image mosaic and Miscellaneous image functions 2

 
Goal and Background
The goal of this lab is to introduce important analytical processes in remote sensing. Using Erdas Imagine 2013, we will explore RGB to IHS transformations, image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. With these skills, we will be able to apply the processes to real-life projects.

Part 1: RGB to IHS Transform

Method



Figure 1 - Original RGB image on left, IHS image on right
This part of the lab consisted of transforming a Western Wisconsin satellite image from RGB to IHS. IHS stands for intensity, hue, and saturation. The tool for doing this (RGB to IHS) is found in the 'Raster' tab under 'Spectral.' I made sure that band 3 was on red, band 2 on green, and band 1 on blue. All other options were left at default and I ran the tool. Figure 1 shows the original and the IHS image.

Then the IHS image had to be transformed back to an RGB image, which is done in the same menu as the RGB to IHS tool (IHS to RGB). When running this tool, there is an option to stretch intensity, saturation, or both. I tried running the tool twice, once with no stretching and once stretching both intensity and saturation at the same time.
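The transform and its inverse can be sketched in Python with the standard library's colorsys module. Note this is only an illustration of the idea: colorsys implements the HSV model, which is closely related to, but not identical to, the IHS formulation Erdas uses.

```python
import colorsys

def rgb_to_ihs(r, g, b, max_val=255):
    """Convert one pixel's red/green/blue values to intensity, hue, saturation.

    Uses HSV as a stand-in for IHS: 'value' plays the role of intensity.
    """
    h, s, v = colorsys.rgb_to_hsv(r / max_val, g / max_val, b / max_val)
    return v, h, s  # intensity, hue, saturation

def ihs_to_rgb(i, h, s, max_val=255):
    """Inverse transform: intensity/hue/saturation back to RGB."""
    r, g, b = colorsys.hsv_to_rgb(h, s, i)
    return r * max_val, g * max_val, b * max_val
```

Round-tripping a pixel through both functions returns the original values, which matches the lab's observation that the transform itself causes no visual change; only the optional stretching alters the histogram.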



Results


Figure 2 - Transformed with no stretching
Figure 3 - Transformed with both Intensity and Saturation stretched
Figures 2 and 3 show the results. Figure 2 is transformed from RGB to IHS and back to RGB with no stretching, and figure 3 is transformed from RGB to IHS and back with both intensity and saturation stretched. As you can see, there doesn't appear to be any visual change. At first glance, comparing figures 2 and 3 with the original image in figure 1 suggests the transformations had no effect at all. But when I went into the metadata tab and looked at the histogram for each image, both transformed images had histograms that were stretched, meaning they have higher contrast than the original image. When comparing stretched versus non-stretched, however, the image and the histogram appeared identical.





Part 2: Image Mosaicking

Method
Figure 4 - Pre-mosaic
The first step in mosaicking was to bring in the images we want to combine. Figure 4 shows these images, both satellite images of Wisconsin. When importing these images for mosaic purposes, it is important to click the 'Multiple' tab in the 'Select Layers to Add' window and select 'Multiple Images in Virtual Mosaic.'

There are a couple of different ways to mosaic images. Mosaic Express is a simplified form that automates most of the process. Essentially, I just added the two files when the tool asked for them and gave it an output destination.


Figure 5 - MosaicPro
The other mosaicking tool I tried was MosaicPro. This allows for far more options and control. I imported the base images just as before, then in the 'Mosaic' drop down menu selected MosaicPro. This opens a separate window where it shows only an outline of the images once they are added (Figure 5).

To synchronize the radiometric properties where the images overlap, I clicked the 'Color Correction' icon. In this menu, I selected 'Use Histogram Matching,' then clicked the 'Set' button. This allowed me to match the histogram only for the areas that overlapped by selecting 'Overlap Areas.' Matching only the overlapping areas preserves the brightness values elsewhere in the images. In the 'Set Overlap Function' menu, I made sure it was set to the default 'Overlay,' so that the brightness values of the top image were used where the images overlapped.
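The idea behind the 'Use Histogram Matching' option can be sketched with NumPy: remap the source image's brightness values so that its cumulative histogram follows the reference image's. Erdas's exact implementation may differ; this is the textbook CDF-mapping form of the technique.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source brightness values so their distribution matches reference."""
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Normalized cumulative distribution functions of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source value, find the reference value at the same CDF position.
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)
```

In a mosaic workflow this would be applied only to the overlap region, which is exactly why the lab restricts matching to 'Overlap Areas': pixels outside the overlap keep their original brightness values.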

Results

Figure 6 - MosaicPro finished product
Figure 7 - Mosaic Express finished product

MosaicPro produced a much better-looking image, but was more demanding when it came to user input (Figure 6). By synchronizing the radiometric properties where the images overlap, the transition from one image to the next is almost entirely seamless. Mosaic Express (Figure 7) can produce a mosaicked image more quickly, but will not have seamless edges. This option can be used for image interpretation, but would not be recommended when aesthetics are a concern.





Part 3: Band Ratioing

Method

Band ratioing is a technique used to limit environmental factors that might have an effect on image interpretation. Ratioing bands together can reveal information that might not otherwise be found in a single band. Ratioing bands 3 and 4 (the red and near infrared bands) can show unique data about vegetation. For this part of the exercise, I ran the NDVI (normalized difference vegetation index) tool to utilize this type of ratioing.

I loaded my Western Wisconsin map and went to the 'Raster' tab. Under 'Unsupervised,' I clicked on 'NDVI.' I left the options at their default settings, making sure 'Landsat 4 TM' was set for the type of sensor and 'NDVI' was selected as the index. Once these were set, I ran the tool.
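The index the NDVI tool computes is a simple band ratio, NDVI = (NIR − Red) / (NIR + Red), which can be sketched directly with NumPy:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-infrared bands.

    For Landsat 4 TM, red is band 3 and near infrared is band 4.
    Values near +1 indicate dense vegetation; strongly negative
    values typically indicate water.
    """
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    # Guard against division by zero where both bands are 0.
    return np.where(denom == 0, 0.0, (nir - red) / denom)
```

Healthy vegetation reflects strongly in the near infrared and absorbs red light, which is why vegetated pixels come out bright (white) in the tool's output and water comes out dark.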

Results


Figure 8 - NDVI image
The output for this tool was a black and white image of Western Wisconsin (Figure 8). The white areas depicted vegetation and black areas represented water. The various shades of gray signified cities or towns, roads, or a lack of vegetation.






Part 4: Spatial and Spectral Image Enhancement

Method

Part 4 of the lab is designed to introduce us to some of the spatial enhancement techniques that can be performed on remotely sensed images.

Figure 9 - Chicago high frequency


First, we opened an image of Chicago that was fairly high frequency (Figure 9). High frequency images show significant changes in brightness values over short distances, which needed to be suppressed. To do this, I went to the 'Raster' tab, clicked the 'Spatial' drop down menu, and selected 'Convolution.'

In the 'Convolution' menu, I chose the '5x5 Low Pass' kernel type, used my image of Chicago as the input image, and left all other options at their defaults.





Figure 10 - Sierra Leone low frequency 
To practice spatial convolution filtering on a low frequency image, I used an aerial image of Sierra Leone from 2002 (Figure 10). I went to the same menu as when I ran the '5x5 Low Pass' on the Chicago image, except this time I used the '5x5 High Pass' kernel type.
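Both filters are plain spatial convolutions. Below is a minimal NumPy sketch using a 5x5 mean kernel for the low pass and a common all −1 / center +24 kernel for the high pass; Erdas's exact kernel values may differ.

```python
import numpy as np

def convolve2d(image, kernel):
    """Apply a square convolution kernel to a 2-D image."""
    k = kernel.shape[0] // 2
    # Edge padding keeps the output the same size as the input,
    # roughly analogous to the tool's edge-handling options.
    padded = np.pad(image.astype(float), k, mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * kernel)
    return out

# 5x5 low-pass (mean) kernel: averages each neighborhood, smoothing
# high-frequency detail.
low_pass = np.full((5, 5), 1 / 25)

# One common 5x5 high-pass kernel: -1 everywhere with +24 at the center.
# It sums to zero, so flat areas go to zero and sharp changes stand out.
high_pass = np.full((5, 5), -1.0)
high_pass[2, 2] = 24.0
```

The kernel choice is the whole difference between the two runs in the lab: the same convolution machinery either suppresses or emphasizes brightness changes depending on the weights.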


Results

Figure 11 - Original Chicago image on left, new image on right

Figure 11 shows the results from my suppression of the high frequency image of Chicago. The new image (on the right) has much lower contrast and level of detail. Viewed zoomed out, the new image looks smoother, but it is not useful for detailed interpretation when zoomed in.




Figure 12 - Original image on left, transformed image on right
For the Sierra Leone image, the transformation improved the contrast and level of detail drastically. As you can see in figure 12, the new image (on the right) is a much easier image to interpret.





Method


Figure 13 - Sierra Leone, pre-edge enhancement
The next technique I tried was edge enhancement. I started with a different image of Sierra Leone (Figure 13). Again, this is found in the same 'Convolution' window accessed before. For my kernel type, I chose '3x3 Laplacian Edge Detection.' I also checked the 'Fill' option under 'Handle Edges by,' unchecked 'Normalize the Kernel,' and then ran the tool.
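A minimal NumPy sketch of 3x3 Laplacian edge detection, using one common Laplacian kernel (Erdas's kernel may be a variant):

```python
import numpy as np

# A common 3x3 Laplacian kernel: it responds to abrupt brightness
# changes in any direction and returns zero over flat areas.
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def laplacian_edges(image):
    """Convolve the image with the Laplacian kernel (interior pixels only)."""
    img = image.astype(float)
    out = np.zeros(img.shape)
    # Border pixels are left at zero, a simple stand-in for edge handling.
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * laplacian)
    return out
```

Because the kernel sums to zero, uniform regions map to zero while boundaries between regions produce strong positive and negative responses, which is why features like the stream in the results below become easier to pick out.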



Results


Figure 14 - Laplacian edge enhanced image (right), original image (left)
The new edge enhanced image seemed, at first glance, to be of little use; it appeared less interpretable than the original. When I zoomed in, however, I noticed that many features were distinguishable that were not on the original image. Figure 14 shows the results on the right and the original on the left. As you can see, the stream is much more obvious in the new image.

Method

Figure 15 - Western Wisconsin before minimum-maximum stretching
Linear contrast stretch is a common way to improve the visual appearance of images, and there are several ways to do it. The first image of Western Wisconsin (Figure 15) that I used had a histogram that was positively skewed but still near-Gaussian, so I used the minimum-maximum contrast stretch. This type of stretch works best when applied to Gaussian or near-Gaussian histograms.

To run this tool, I went to the 'Panchromatic' tab, opened the 'General Contrast' drop down menu, and clicked on 'General Contrast.' By selecting Gaussian as the method and leaving the other options at default, the tool created an image with much higher contrast.
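The minimum-maximum stretch itself is a simple linear rescaling: the image's actual darkest and brightest values are mapped to the ends of the 0-255 display range. A NumPy sketch:

```python
import numpy as np

def minmax_stretch(image, out_min=0.0, out_max=255.0):
    """Linearly rescale an image so its min/max fill the output range."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:
        # A constant image has no contrast to stretch.
        return np.full(img.shape, out_min)
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min
```

Because the mapping is linear, the histogram's shape is preserved; it is simply widened to cover the full display range, which is why this stretch suits near-Gaussian histograms.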

The next image I opened was of the same area, but the histogram for this image was bimodal (Figure 16). The minimum-maximum contrast stretch would not work as well in this case. I used the piecewise linear contrast stretch for this one, since it is best used with images that have multiple modes.


Figure 16 - Western Wisconsin before piecewise contrast stretch
In the 'General Contrast' drop down menu, I found the 'Piecewise Contrast' option to access the tool. I also opened the histogram of the image so that I could see the extent of the ranges that I wanted to stretch. Then, I set the low, middle, and high ranges and applied the changes.
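A piecewise linear stretch applies separate linear segments to the low, middle, and high brightness ranges, which is what makes it suitable for bimodal histograms. A NumPy sketch with illustrative breakpoints (not the values used in the lab):

```python
import numpy as np

def piecewise_stretch(image, breakpoints=(60.0, 180.0), targets=(100.0, 200.0)):
    """Three-segment linear stretch over an 8-bit brightness range.

    breakpoints: input values dividing the low/middle/high ranges
                 (hypothetical example values).
    targets:     output values those breakpoints map to.
    """
    img = image.astype(float)
    b1, b2 = breakpoints
    t1, t2 = targets
    # np.interp applies a different linear mapping within each segment.
    return np.interp(img, [0.0, b1, b2, 255.0], [0.0, t1, t2, 255.0])
```

In practice the breakpoints would be chosen from the image's histogram, as in the lab, so that each mode of a bimodal distribution gets its own stretch.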




Results


Figure 17 - Minimum-maximum contrast stretch
Figure 17 shows the results of the minimum-maximum contrast stretch. The image on the right is the new image, while the image on the left is the original. The new image has much higher contrast and is much easier to analyse.


Figure 18 - Piecewise linear contrast stretch

The results of the piecewise linear contrast stretch are in figure 18. The altered image, shown on the left, also has a higher contrast than the original image. This part of the lab showed that, depending on the original image, different tools may have to be used to get the same effect. Knowing and understanding these differences is important when deciding what tool to use.





Method





Figure 19 - Image before histogram equalization
Histogram equalization is a nonlinear method of stretching to improve contrast. I opened a Landsat TM image of Western Wisconsin from 2011. The image had poor contrast, and its histogram covered a relatively narrow range of brightness values and was slightly positively skewed.

'Histogram Equalization' is found on the 'Raster' tab under the 'Radiometric' drop down menu. This was an easy technique to use. I simply left all the default options and ran the tool.

Results


Figure 20 - Original image (left), and image with equalized histogram (right)
Again, the contrast was significantly improved. Figure 20 shows the original image and its histogram on the left, and the new image with its histogram on the right. The two histograms differ dramatically: the spread-out histogram on the right means much higher contrast than the narrow histogram on the left, which is confirmed by the images themselves.



Part 5: Binary Change Detection (Image Differencing)

Method


Figure 21 - Image of several Wisconsin counties, 1991 (left), 2011 (right)

Image differencing is a technique used to show changes between images taken at different times by comparing differences in pixel brightness values. I used an image of Western Wisconsin taken in 1991 and an image of the same area taken again in 2011 (Figure 21). The goal was to see which pixels changed during that time.

Under the 'Raster' tab, I clicked on the 'Functions' icon and chose 'Two Image Functions.' This interface is used for simple arithmetic operations on images. For the input file #1, I used the 2011 image and for input file #2, I used the 1991 image. Under the inputs, the 'Layer' selection for both images was changed from 'All' to 'layer 4.' The operator also needed to be changed from '+' to '-'.



Figure 22 - First step image differencing
Figure 22 is the output image. Obviously, this image does not help identify change; that requires more steps. By opening up the histogram, I saw that it was a normal distribution (Gaussian). The areas that changed would normally be at the tail ends of each side. How much of the tail to use is subjective, but the rule of thumb is to set the thresholds at the mean ± 1.5 standard deviations. The mean and standard deviation are found in the metadata. Using this formula, I found the upper and lower thresholds.
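The thresholding step can be sketched in NumPy: compute the difference image's mean and standard deviation, then flag pixels beyond mean ± 1.5 standard deviations as changed.

```python
import numpy as np

def change_mask(diff_image, k=1.5):
    """Flag pixels whose difference value lies beyond mean +/- k std devs."""
    mu = diff_image.mean()
    sigma = diff_image.std()
    lower = mu - k * sigma
    upper = mu + k * sigma
    # True where the brightness change is large enough to count as change.
    return (diff_image < lower) | (diff_image > upper)
```

The choice of k is the subjective part the lab mentions: a larger k flags only the most extreme changes, which is one reason the final map can miss changes that are obvious to the eye.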

Next I used Model Maker to develop a model to bring both of my raster images into a function that calculated the differences between the two images.


Figure 23 - Changed areas
The image we just created has a Gaussian distribution, but this time the changed areas are at the upper tail of the histogram. The mean + (3 × standard deviation) gives this threshold. Using Model Maker again, I input the image generated in the last step into another function. Figure 23 shows the results: these are the pixels whose brightness values changed between 1991 and 2011.

Results


Figure 24 - Final map of changed areas based on brightness values of pixels. Yellow represents changed areas between 1991 and 2011
I took the results from the last step (figure 23) and overlaid the data on an image of the same area. Figure 24 is my finished map. The areas in yellow are the pixels whose brightness values differed, and so are the areas that have changed. One thing I noticed, however, is that areas I know for a fact have changed (the Highway 53 bypass in Chippewa Falls and Eau Claire, or expanding towns like Altoona, for example) do not show up on this map. There were changes I could identify with the naked eye in these two images that did not show up in this data. I think this may have something to do with where the thresholds were set on the histogram.