Thursday, December 19, 2013

Lab 8: Spectral Signature Analysis

Goal and Background

The purpose of this exercise was to gain experience interpreting the spectral reflectance of features on the Earth's surface from images taken by satellite sensors. We found the spectral signatures of various materials and surfaces in a remotely sensed image of Western Wisconsin.

Methods

Figure 1 - AOI in Lake Wissota
First, I digitized an area of Lake Wissota using the draw polygon tool (figure 1). Then, under the Raster toolbar, I clicked the Supervised drop-down menu and selected Signature Editor. This brought up the Signature Editor, where I could create a new signature from an AOI (area of interest). Once this was added to the menu, I changed the name to Standing Water to differentiate it from the other signatures I would be collecting. Figure 2 shows the Signature Editor menu.

Figure 2 - Standing Water signature
From this menu, I could also display the Mean Plot window, shown in figure 3.
Figure 3 - Standing water mean plot
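Conceptually, each signature stored in the Signature Editor is just the per-band mean of the pixels inside the digitized AOI, which is exactly what the Mean Plot graphs against band number. A minimal numpy sketch of that idea (the array names and data here are made up, not taken from the lab image):

    import numpy as np

    def mean_signature(image, aoi_mask):
        # image: (bands, rows, cols) brightness values; aoi_mask: (rows, cols) boolean.
        # image[:, aoi_mask] gathers the AOI pixels in every band at once.
        return image[:, aoi_mask].mean(axis=1)

    # Fake data: a 6-band image with a square AOI standing in for Lake Wissota
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(6, 100, 100)).astype(float)
    aoi_mask = np.zeros((100, 100), dtype=bool)
    aoi_mask[40:60, 40:60] = True
    print(mean_signature(image, aoi_mask))   # one mean brightness value per band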

We were then tasked with finding the signatures of 11 more features using the same technique, for a total of 12:

1. Standing Water
2. Moving Water
3. Vegetation
4. Riparian vegetation
5. Crops
6. Urban Grass
7. Dry soil (uncultivated)
8. Moist soil (uncultivated)
9. Rock
10. Asphalt highway
11. Airport runway
12. Concrete surface (parking lot)

Figures 4 and 5 show the signatures and the mean plots of all the features together.
Figure 4 - Signature of 12 features

Figure 5 - Mean plot of all 12 features




Results

The results demonstrated the differences in spectral reflectance among many different features and materials. A skilled remote sensing technician would be able to identify these features by their signature plots alone.

Sunday, December 15, 2013

Lab 7: Photogrammetry

Goal and Background

The goal of this lab was to improve our understanding of the math behind scales, measurement of areas and perimeters, and calculating relief displacement. It also served as an introduction to stereoscopy and orthorectification.

Part 1: Scales, measurements and relief displacement

Section 1

The first part of the lab involved finding the scale of an aerial image. Given a section of highway whose real-world distance we already knew, we were able to find the scale of the image by measuring the distance between the same two points on the image itself. Next, we found the scale of a photo knowing only the altitude of the aircraft carrying the camera, the focal length of the camera, and the elevation of Eau Claire County.
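Both calculations are simple ratios. A worked sketch in Python, using hypothetical measurements rather than the lab's actual numbers:

    # Method 1: known ground distance.
    # scale = distance on photo / distance on ground (same units on both sides)
    photo_dist_ft = 2.5 / 12.0         # 2.5 inches measured on the photo
    ground_dist_ft = 2500.0            # known real-world highway distance
    print(f"1:{ground_dist_ft / photo_dist_ft:.0f}")     # 1:12000

    # Method 2: known flying height and focal length.
    # scale = focal length / (altitude - terrain elevation)
    focal_length_ft = 152.0 / 304.8    # a 152 mm lens, converted to feet
    altitude_ft = 20000.0              # aircraft altitude above sea level
    elevation_ft = 800.0               # approximate terrain elevation
    print(f"1:{(altitude_ft - elevation_ft) / focal_length_ft:.0f}")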

Section 2

Figure 1 - Digitized area to be measured
Section 2 involved finding the area and perimeter of features in an aerial photo of Eau Claire. I opened the 'Measure' tool and selected 'Measure Perimeters and Areas'. This allowed me to digitize an area and find out what the area or perimeter was. After digitizing, I was able to change the units of measurement and have the results update on the fly.

Section 3

Figure 2 - Image of Eau Claire area used for calculating relief
displacement
For section 3, we calculated relief displacement. Figure 2 shows the image and feature that was used for this exercise.

Knowing the height of the aerial camera above the datum (3,980 ft) and the scale of the image (1:3,209), and finding the real-world height of the smoke stack using the image scale, we can calculate the relief displacement of the smoke stack labeled 'A' in the image.
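The standard formula here is d = (h x r) / H. A worked sketch using the lab's given height and scale; the two photo measurements below are hypothetical stand-ins:

    # d = (h * r) / H: h = real-world object height, r = radial distance from
    # the principal point to the top of the object on the photo, H = camera
    # height above datum. d comes out in the same units as r.
    scale = 1.0 / 3209.0                     # image scale (given)
    H_ft = 3980.0                            # camera height above datum (given)
    stack_on_photo_in = 0.5                  # stack height on the photo (hypothetical)
    h_ft = (stack_on_photo_in / 12.0) / scale    # real-world height via the scale
    r_in = 2.0                               # radial distance on photo (hypothetical)
    d_in = (h_ft * r_in) / H_ft
    print(f"stack ~ {h_ft:.0f} ft tall, displaced ~ {d_in:.3f} in on the photo")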




Part 2: Stereoscopy

In part 2 of the lab, we learned how to create a stereoscopic image of Eau Claire. First, I opened an image of Eau Claire at 1 meter spatial resolution and a DEM (digital elevation model) of the same area at 10 meter spatial resolution. Under 'Terrain' I chose 'Anaglyph' to open 'Anaglyph Generation', the tool I would be using. I brought in my two images and set the vertical exaggeration to 2. 
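The tool's output encodes terrain parallax as color: one eye's view goes to the red channel, the other to green and blue. Erdas's actual algorithm is more sophisticated, but a toy numpy sketch, which simply shifts pixels horizontally in proportion to elevation, shows the idea:

    import numpy as np

    def make_anaglyph(gray, dem, exaggeration=2.0):
        # Shift a copy of the image by a DEM-scaled parallax, then stack the
        # original (red channel) with the shifted copy (green + blue).
        rows, cols = gray.shape
        norm = (dem - dem.min()) / max(np.ptp(dem), 1e-9)
        shift = np.rint(norm * exaggeration * 5).astype(int)   # parallax, pixels
        r_idx, c_idx = np.indices(gray.shape)
        shifted = np.zeros_like(gray)
        shifted[r_idx, np.clip(c_idx + shift, 0, cols - 1)] = gray
        return np.stack([gray, shifted, shifted], axis=-1)

    gray = np.tile(np.linspace(0, 255, 200), (200, 1))        # toy image
    y, x = np.indices((200, 200))
    dem = 1000.0 - ((x - 100) ** 2 + (y - 100) ** 2) / 40.0   # toy dome-shaped DEM
    print(make_anaglyph(gray, dem).shape)                     # (200, 200, 3)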

When the tool finished running, I had a 3-D image of Eau Claire that required red/cyan anaglyph glasses to view. The result was impressive, though slightly too exaggerated in some places. It did, however, make it easier to interpret geographical features in Eau Claire.

Part 3: Orthorectification

Figure 3 - LPS Project Manager window
Satellite and aerial images usually contain geometric errors that must be corrected before the images can be used professionally. Orthorectification is the process of removing those errors, often by using already orthorectified photos as a reference for correcting new ones.

LPS Project Manager (Figure 3) is the tool used for orthorectification. This is found under the "Toolbox" tab.


Figure 5 - GCP collection
By setting a projection and adding two overlapping images, we can orthorectify (figure 4). As with geometric correction, we used GCPs (ground control points) on both images to match locations. Figures 4 and 5 illustrate this process.
Figure 4
The final images line up perfectly at the edges. Figure 6 shows how effective the process of orthorectification is. This process is normally done by the company or group collecting the images, so it is already complete by the time the final image reaches public use.

Figure 6 - Orthorectified image.



Lab 6: Geometric Correction

Goal and Background

The purpose of this lab was to introduce us to geometric correction. Geometric correction is performed on satellite images before they can be utilized for data interpretation. This is done to ensure that spatial errors are kept to a minimum. We explored two major types of geometric correction during this exercise.

Part 1: Image-to-Map Rectification

Method


Figure 1 - Uncorrected Landsat TM image of Chicago on right,
USGS 7.5 minute DRG of Chicago on left
Image-to-map rectification involves geometrically correcting an image using a scanned topographic map of the same area as a reference. These digital maps, called digital raster graphics (DRGs), have a coordinate system, while the uncorrected image does not yet.

To practice this technique, I opened a Landsat TM satellite image of the Chicago area as well as a USGS 7.5 minute DRG of the same area (Figure 1) that I could use as a reference. The tool for doing this is found in the "Multispectral" tab by clicking on the "Control Points" button. I chose to use a first order polynomial equation because the image was not distorted enough to justify using a higher order polynomial. I also chose to use the Chicago DRG as my reference layer.


Figure 2 - Multipoint Geometric Correction window.
First set of GCPs placed, but RMS error too high.
Once I was in the Multipoint Geometric Correction window, I was able to start correction. This window had my image to be corrected in the pane on the left and my reference image in a pane on the right. Both panes also contained smaller windows with the full image and the zoomed Inquire box images (Figure 2).



Next, I began the process of adding ground control points (GCPs) to each image. This involved placing pairs of points at matching geographic locations in the two images. This can be time-consuming, as the points in each pair must align very closely to reduce error. Since I was using a first order polynomial, I only needed three pairs of control points (though I used four to be more precise); the higher the order of the polynomial, the more GCPs are required. Figure 2 shows my original GCPs, but the RMS (root mean square) error was still too high, so I took additional time to refine them.

Results

Ideally, the total RMS error should be below 0.05. For the purposes of this lab, however, we were only required to have an RMS error below 2.0. Once this was accomplished, I ran the tool and created a geometrically corrected image.
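For reference, the total RMS error reported in the Multipoint Geometric Correction window is the root of the mean squared distance between where each GCP lands after the transformation and where it sits in the reference image. A small sketch with hypothetical GCP coordinates:

    import numpy as np

    def rms_error(predicted, reference):
        # predicted, reference: (n_points, 2) arrays of x/y pixel coordinates
        residuals = predicted - reference
        return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

    predicted = np.array([[100.4, 200.2], [301.0, 149.7], [250.3, 400.9], [180.0, 90.5]])
    reference = np.array([[100.0, 200.0], [300.5, 150.0], [250.0, 401.0], [179.8, 90.0]])
    print(f"total RMS error: {rms_error(predicted, reference):.3f} px")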






Thursday, November 14, 2013

Lab 5: Image mosaic and Miscellaneous image functions 2

 
Goal and Background
The goal of this lab is to introduce important analytical processes in remote sensing. Using Erdas Imagine 2013, we will explore RGB to IHS transformations, image mosaicking, spatial and spectral image enhancement, band ratioing, and binary change detection. With these skills, we will be able to apply these processes to real-life projects.

Part 1: RGB to IHS Transform

Method



Figure 1 - Original RGB image on left, IHS image on right
This part of the lab consisted of transforming a Western Wisconsin satellite image from RGB to IHS. IHS stands for intensity, hue, and saturation. The tool for doing this (RGB to IHS) is found in the 'Raster' tab under 'Spectral.' I made sure that band 3 was on red, band 2 on green, and band 1 on blue. All other options were left at default and I ran the tool. Figure 1 shows the original image and the IHS image.

Then the IHS image had to be transformed back to an RGB image, which is done in the same menu as the RGB to IHS tool (IHS to RGB). When running this tool, there is an option to stretch intensity, saturation, or both. I tried running the tool twice, once with no stretching and once stretching both intensity and saturation at the same time.
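Erdas implements its own version of the transform, but one common RGB-to-HSI formulation captures the idea. A numpy sketch (the exact equations Erdas uses may differ):

    import numpy as np

    def rgb_to_ihs(rgb):
        # rgb: float array in [0, 1], shape (..., 3); returns intensity,
        # hue (degrees), and saturation stacked on the last axis.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0
        s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-9)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
        theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
        h = np.where(b <= g, theta, 360.0 - theta)
        return np.stack([i, h, s], axis=-1)

    print(rgb_to_ihs(np.array([0.6, 0.3, 0.1])))   # an orange-ish pixel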



Results


Figure 2 - Transformed with no stretching
Figure 3 - Transformed with both Intensity and Saturation stretched
Figures 2 and 3 show the results. Figure 2 was transformed from RGB to IHS and back to RGB with no stretching, and figure 3 was transformed from RGB to IHS and back with both intensity and saturation stretched. As you can see, there doesn't appear to be any visual change. At first glance, comparing figures 2 and 3 with the original image in figure 1 suggests the transformations had no effect at all. But when I went into the metadata tab and looked at the histogram for each image, both of the transformed images had histograms that were stretched, meaning they have higher contrast than the original image. When comparing the stretched and non-stretched versions, however, both the images and the histograms appeared identical.





Part 2: Image Mosaicking

Method
Figure 4 - Pre-mosaic
The first step in mosaicking was to bring in the images we wanted to combine. Figure 4 shows these images, both satellite images of Wisconsin. When importing images for mosaicking, it is important to click the 'Multiple' tab in the 'Select Layers to Add' window and select 'Multiple Images in Virtual Mosaic.'

There are a couple of different ways to mosaic images. Mosaic Express is a simplified form that automates most of the process. Essentially, I just added the two files when the tool asked for them and gave it an output destination.


Figure 5 - MosaicPro
The other mosaicking tool I tried was MosaicPro, which allows for far more options and control. I imported the base images just as before, then selected MosaicPro in the 'Mosaic' drop-down menu. This opens a separate window that shows only an outline of the images once they are added (Figure 5).

To synchronize the radiometric properties where the images overlap, I clicked the 'Color Correction' icon. In this menu, I selected 'Use Histogram Matching,' then clicked the 'Set' button. This allowed me to choose only to match the histogram for the areas that overlapped by selecting 'Overlap Areas.' By only choosing to match the overlapping areas, it preserves brightness values of the other areas of the images. In the 'Set Overlap Function' menu, I made sure it was set to the default 'Overlay,' so that the brightness values of the top image were used where the images overlapped.

Results

Figure 6 - MosaicPro finished product
Figure 7 - Mosaic Express finished
product

MosaicPro produced a much better-looking image, but was more demanding when it came to user input (Figure 6). By synchronizing the radiometric properties where the images intersect, the transition from one image to the next is almost entirely seamless. Mosaic Express (Figure 7) can produce a mosaicked image more quickly, but it will not have seamless edges. That option can be used for image interpretation, but would not be recommended when aesthetics are a concern.





Part 3: Band Ratioing

Method

Band ratioing is a technique used to limit environmental factors that might have an effect on image interpretation. Ratioing two bands can reveal information that might not be found in either band alone. Ratioing bands 3 and 4 (the red and near-infrared bands) can show unique data about vegetation. For this part of the exercise, I ran the NDVI (normalized difference vegetation index) tool, which uses this type of ratio.

I loaded my Western Wisconsin map and went to the 'Raster' tab. Under 'Unsupervised,' I clicked on 'NDVI.' I left the options at their default settings, making sure 'Landsat 4 TM' was set for the type of sensor and 'NDVI' was selected as the index. Once these were set, I ran the tool.
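The index the tool computes is just (NIR - Red) / (NIR + Red), which ranges from -1 to 1. A minimal numpy sketch with made-up brightness values:

    import numpy as np

    def ndvi(red, nir):
        red, nir = red.astype(float), nir.astype(float)
        return (nir - red) / np.maximum(nir + red, 1e-9)   # avoid divide-by-zero

    red = np.array([30.0, 80.0])    # band 3: vegetation absorbs red light
    nir = np.array([120.0, 60.0])   # band 4: vegetation reflects near infrared
    print(ndvi(red, nir))           # ~0.6 (vegetation), ~-0.14 (water-like)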

Results


Figure 8 - NDVI image
The output for this tool was a black and white image of Western Wisconsin (Figure 8). The white areas depicted vegetation and black areas represented water. The various shades of gray signified cities or towns, roads, or a lack of vegetation.






Part 4: Spatial and Spectral Image Enhancement

Method

Part 4 of the lab introduces some of the spatial enhancement techniques that can be performed on remotely sensed images.

Figure 9 - Chicago high frequency


First, we opened an image of Chicago that was fairly high frequency (Figure 9). High frequency images show significant changes in brightness values over short distances, which needed to be suppressed. To do this, I went to the 'Raster' tab, clicked the 'Spatial' drop down menu, and selected 'Convolution.'

In the 'Convolution' menu, I chose the '5x5 Low Pass' kernel type, used my image of Chicago as the input image, and left all other options at their defaults.





Figure 10 - Sierra Leone low frequency 
To practice spatial convolution filtering on a low frequency image, I used an aerial image of Sierra Leone from 2002 (Figure 10). I went to the same menu as when I ran the '5x5 Low Pass' on the Chicago image, except this time I used the '5x5 High Pass' kernel type.
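Both filters are plain convolutions; only the kernel differs. A sketch using scipy, with common illustrative kernel weights (not necessarily the exact kernels Erdas ships):

    import numpy as np
    from scipy.ndimage import convolve

    # 5x5 low pass: each output pixel becomes the mean of its 5x5 neighborhood,
    # suppressing rapid brightness changes.
    low_pass = np.full((5, 5), 1.0 / 25.0)

    # A common 5x5 high pass: the center weight balances the negative
    # neighbors, so flat areas go toward zero and abrupt changes are amplified.
    high_pass = -np.ones((5, 5))
    high_pass[2, 2] = 24.0

    image = np.random.default_rng(1).uniform(0, 255, size=(100, 100))
    smoothed = convolve(image, low_pass, mode="nearest")
    sharpened = convolve(image, high_pass, mode="nearest")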


Results

Figure 11 - Original Chicago image on left,
new image on right

Figure 11 shows the results from my suppression of the high frequency image of Chicago. The new image (on the right) has a much lower contrast and level of detail. When viewed at larger scales, the new image will look smoother, but it would not be useful for interpretation at small scales.




Figure 12 - Original image on left, transformed image on right
For the Sierra Leone image, the transformation improved the contrast and level of detail drastically. As you can see in figure 12, the new image (on the right) is a much easier image to interpret.





Method


Figure 13 - Sierra Leone, pre-edge enhancement
The next technique I tried was edge enhancement. I started with a different image of Sierra Leone (Figure 13). Again, this is found in the same 'Convolution' window accessed before. For my kernel type, I chose '3x3 Laplacian Edge Detection.' I also checked the 'Fill' option under 'Handle Edges by,' unchecked 'Normalize the Kernel,' and then ran the tool.
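Unchecking 'Normalize the Kernel' makes sense here because the Laplacian weights sum to zero. One standard form of the kernel (Erdas's exact weights may differ):

    import numpy as np
    from scipy.ndimage import convolve

    # Zero-sum kernel: uniform areas map to zero, so only edges survive.
    laplacian = np.array([[ 0., -1.,  0.],
                          [-1.,  4., -1.],
                          [ 0., -1.,  0.]])
    image = np.random.default_rng(8).uniform(0, 255, size=(100, 100))
    edges = convolve(image, laplacian, mode="nearest")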



Results


Figure 14 - Laplacian edge enhanced image (right)
Original image (left)
The new edge-enhanced image seemed, at first glance, not to be very helpful; it appeared less usable than the original image. When I zoomed in, however, I noticed that many features were distinguishable that were not in the original image. Figure 14 shows the results on the right and the original on the left. As you can see, the stream is much more obvious in the new image.

Method

Figure 15 - Western Wisconsin before minimum-maximum
stretching 
Linear contrast stretch is a common way to improve the visual appearance of images, and there are several ways to do it. The first image of Western Wisconsin (Figure 15) that I used had a histogram that was positively skewed, but still Gaussian, so I used minimum-maximum contrast stretch. This type of stretch works best when applied to Gaussian or near Gaussian histograms.

To run this tool, I went to the 'Panchromatic' tab, opened the 'General Contrast' drop-down menu, and clicked on 'General Contrast.' By selecting Gaussian as the method and leaving the other options at their defaults, the tool created an image with much higher contrast.
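The minimum-maximum stretch itself just remaps the image's occupied brightness range onto the full display range. A sketch with synthetic data:

    import numpy as np

    def minmax_stretch(image, out_max=255.0):
        lo, hi = image.min(), image.max()
        # remap [lo, hi] linearly onto [0, out_max]
        return (image.astype(float) - lo) / max(hi - lo, 1e-9) * out_max

    # A low-contrast image occupying only brightness values 60-130
    image = np.random.default_rng(2).uniform(60, 130, size=(50, 50))
    stretched = minmax_stretch(image)
    print(image.min(), image.max(), "->", stretched.min(), stretched.max())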

The next image I opened was of the same area, but the histogram for this image was bimodal (Figure 16). The minimum-maximum contrast stretch would not work as well in this case. I used the piecewise linear contrast stretch for this one, since it is best used with images that have multiple modes.


Figure 16 - Western Wisconsin before piecewise contrast stretch
In the 'General Contrast' drop-down menu, I found the 'Piecewise Contrast' option to access the tool. I also opened the histogram of the image so that I could see the extent of the ranges I wanted to stretch. Then, I set the low, middle, and high ranges and applied the changes.




Results


Figure 17 - Minimum-maximum contrast stretch
Figure 17 shows the results of the minimum-maximum contrast stretch. The image on the right is the new image, while the image on the left is the original. The new image has much better contrast and is much easier to analyze.


Figure 18 - Piecewise linear contrast stretch

The results of the piecewise linear contrast stretch are in figure 18. The altered image, shown on the left, also has a higher contrast than the original image. This part of the lab showed that, depending on the original image, different tools may have to be used to get the same effect. Knowing and understanding these differences is important when deciding what tool to use.





Method





Figure 19 - Image before histogram equalization
Histogram equalization is a nonlinear method of stretching to improve contrast. I opened a Landsat TM image of Western Wisconsin from 2011. The image had poor contrast, and its histogram covered a relatively small range and was slightly positively skewed.

'Histogram Equalization' is found on the 'Raster' tab under the 'Radiometric' drop down menu. This was an easy technique to use. I simply left all the default options and ran the tool.
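Behind the button, histogram equalization remaps each brightness value through the image's cumulative distribution, spreading the histogram across the full range. A numpy sketch with synthetic data:

    import numpy as np

    def equalize(image, levels=256):
        hist, _ = np.histogram(image, bins=levels, range=(0, levels))
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-9)  # to [0, 1]
        # look up each pixel's new value through the normalized CDF
        return (cdf[image.astype(int)] * (levels - 1)).astype(np.uint8)

    # A poor-contrast image: values bunched around 90
    image = np.clip(np.random.default_rng(3).normal(90, 15, (50, 50)), 0, 255)
    print(image.std(), equalize(image).std())   # the spread (contrast) increases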

Results


Figure 20 - Original image (left), and an image with equalized
histograms (right)
Again, the contrast was significantly improved. Figure 20 shows the original image and its histogram on the left, and the new image with its histogram on the right. The two histograms differ dramatically: the spread-out histogram on the right indicates much higher contrast than the narrow histogram on the left, which is confirmed by comparing the images.



Part 5: Binary Change Detection (Image Differencing)

Method


Figure 21 - Image of several Wisconsin counties, 1991 (left),
2011 (right)

Image differencing is a technique used to show changes between images taken at different times by comparing differences in pixel brightness values. I used an image of Western Wisconsin taken in 1991 and an image of the same area taken again in 2011 (Figure 21). The goal was to see which pixels changed during that time.

Under the 'Raster' tab, I clicked on the 'Functions' icon and chose 'Two Image Functions.' This interface is used for simple arithmetic operations on images. For the input file #1, I used the 2011 image and for input file #2, I used the 1991 image. Under the inputs, the 'Layer' selection for both images was changed from 'All' to 'layer 4.' The operator also needed to be changed from '+' to '-'.



Figure 22 - First step image differencing
Figure 22 is the output image. Obviously, this image does not help identify change on its own; that requires more steps. By opening the histogram, I saw that it was a normal (Gaussian) distribution. The areas that changed would normally be at the tail ends of each side. How much of the tail to use is subjective, but the rule of thumb is to set the thresholds at the mean ± 1.5 standard deviations. The mean and standard deviation are found in the metadata. Using this equation, I found the upper and lower thresholds.
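The arithmetic of this step is easy to sketch with synthetic arrays (the images and the change patch below are made up, not the lab data):

    import numpy as np

    rng = np.random.default_rng(4)
    img_1991 = rng.uniform(0, 255, size=(200, 200))
    img_2011 = img_1991 + rng.normal(0, 5, size=(200, 200))  # mostly unchanged
    img_2011[50:70, 50:70] += 60                             # a patch of real change

    diff = img_2011 - img_1991                 # the 'Two Image Functions' output
    mean, std = diff.mean(), diff.std()
    lower, upper = mean - 1.5 * std, mean + 1.5 * std        # rule-of-thumb thresholds
    changed = (diff < lower) | (diff > upper)                # flagged pixels
    print(f"thresholds [{lower:.1f}, {upper:.1f}], changed pixels: {changed.sum()}")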

Next I used Model Maker to develop a model to bring both of my raster images into a function that calculated the differences between the two images.


Figure 23 - Changed areas
The image we just created has a Gaussian distribution, but this time the changed areas are at the upper tail of the histogram. The mean + (3 x standard deviation) gives us this threshold. Using Model Maker again, I input the image generated in the last step into another function. Figure 23 shows the results: these are the pixels whose brightness values changed between 1991 and 2011.

Results


Figure 24 - Final map of changed areas based on brightness
values of pixels. Yellow represents changed areas between
1991 and 2011
I took the results from the last step (figure 23) and overlaid the data on an image of the same area. Figure 24 is my finished map. The areas in yellow are the pixels whose brightness values differed, and so are the areas that have changed. One thing I noticed, however, is that areas I know for a fact have changed (the Highway 53 bypass in Chippewa Falls and Eau Claire, or expanding towns like Altoona, for example) do not show up on this map. There were changes I could identify with the naked eye when looking at these two images, but they did not show up in this data. I think this may have something to do with where the thresholds were set on the histogram.







Thursday, October 31, 2013

Lab 4: Miscellaneous Image Functions 1

Goal and Background
The goals of this lab were to gain an understanding of the following: subsetting an image, optimizing image resolution for the purpose of visual interpretation, radiometric enhancement, linking satellite images to Google Earth to assist in image interpretation, and resampling of satellite images. The lab was broken up into several sections, each focusing on different image functions.

Part 1: Image Subsetting

Methods
There are two different ways to subset an image: by using a rectangular box with the Inquire Box, or by creating an AOI (area of interest) from a shapefile of the area. We learned the box method first.





Figure 1 - Original satellite image
The image I used was a false color satellite image of Western Wisconsin, titled eau_claire_2011.img (figure 1). The first step was to open the image in ERDAS IMAGINE 2013. By right-clicking on the image, I was able to choose 'Inquire Box' and use the box to outline the area I wanted to subset: Eau Claire and Chippewa counties.

After getting the box the appropriate size, I clicked apply in the Inquire Box Viewer to set the extents. Then, under the raster tool bar, I clicked Subset & Chip, Create Subset Image. From this menu, I was able to specify that I wanted the subset image to come from the box I created by clicking From Inquire Box. I chose to save the new image in my lab file and hit OK.

Next, we learned the alternate way to subset: creating an AOI using a shapefile. This is the strategy used most when the subset image will be used in ArcMap, as it ensures that the image is the same size as the shapefiles being used.

I loaded a shapefile of Chippewa and Eau Claire counties that our professor had prepared for us, which was overlaid onto the original satellite image that we used in the last step. I selected both counties and clicked on 'paste from selected object' on the Home tool bar. This created an 'area of interest' around my counties, and from there I went to 'Save As' - AOI Layer As, and saved it in my lab folder.
 

With the AOI saved, I could then use it as the basis for a subset image. I went to Subset & Chip on the raster tool bar as I did before, but set the AOI for use instead of the Inquire Box.

Results



Figure 2 - Subset image using Inquire Box
Figure 2 shows the image that resulted from the Inquire Box subset technique. Not only is the image cropped down to show only the area I want to use, but the image is sharper and free of haze.






Figure 3 - Subset image using AOI


Figure 3 shows the image that resulted from using an AOI as the subset range. Notice that this image is more exact because it matches the county shapefiles. The haze has not been reduced in this image, however, so the first image shows better contrast.




Figure 4 - Images to be fused

Part 2: Image Fusion

Methods

In this section, we learned how to increase the spatial resolution of an image by fusing it with a higher resolution panchromatic image.

First I opened both images to be fused. Figure 4 shows the 30m spatial resolution image on the left, and the 15m spatial resolution panchromatic image on the right.

On the Raster tool bar, I clicked on the Pan Sharpen icon and selected Resolution Merge in the drop-down menu. In the Resolution Merge window, I set the panchromatic image as the High Resolution Input File and 30m resolution reflective image as the Multispectral Input File. For output file, I set it to save to my lab folder. I chose 'Multiplicative' for the Method, and 'Nearest Neighbor' for the Resampling Technique. All other options I left at default settings and ran the tool.
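The multiplicative method itself is simple: bring each multispectral band up to the panchromatic grid, then multiply it by the pan band to inject the finer detail. A simplified numpy sketch of the idea (Resolution Merge does more internally):

    import numpy as np

    def multiplicative_merge(ms, pan):
        # ms: (bands, rows, cols) at 30 m; pan: (2*rows, 2*cols) at 15 m.
        # Nearest-neighbor upsample by repeating each pixel along both axes.
        ms_up = ms.repeat(2, axis=1).repeat(2, axis=2).astype(float)
        fused = ms_up * pan                    # pan detail modulates every band
        return fused / fused.max() * 255.0     # rescale for display

    ms = np.random.default_rng(5).uniform(0, 255, size=(4, 100, 100))
    pan = np.random.default_rng(6).uniform(0, 255, size=(200, 200))
    print(multiplicative_merge(ms, pan).shape)   # (4, 200, 200)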

Results


Figure 5 - Original image (left), pan-sharpened image (right)
As you can see in figure 5, the pan-sharpened image (on the right) retains the colors of the original image, but has the higher spatial resolution of the panchromatic image. This creates an image with "the best of both worlds": high resolution and greater contrast, which allows for easier image interpretation.




 

Part 3: Simple Radiometric Enhancement Techniques

Methods

For this portion of the lab exercise, we learned to use a simple haze filter as a form of radiometric enhancement. We started once again with a false color satellite image of Western Wisconsin.

On the Raster tool bar, I clicked on 'Haze Reduction' under the Radiometric drop-down menu. For the input file, I used the Wisconsin image I had opened, and I saved the output file to my lab folder. All other options were left at their default settings.

Results



Figure 6 - Original image on left, haze reduction on right
The results were drastic. The enhanced image on the right looks much clearer than the original image on the left. The colors are more pronounced, the contrast is higher, and it is much easier to decipher details.






Part 4: Linking Image Viewer to Google Earth

Method

This feature is relatively new to Erdas, added in the 2011 update. It allows for a satellite image with a defined coordinate system to be linked to Google Earth to quickly and easily see differences, and to use Google Earth as an interpretation key.

Again, we loaded a false color image of Western Wisconsin. Erdas has a dedicated Google Earth tool bar. I clicked on the Connect to Google Earth button, and the program opened. From the same tool bar, I could link and sync the two programs together, allowing me to see the same spatial extent in Google Earth as I panned and zoomed in Erdas.

Results


Figure 7 - Google Earth linked to Erdas Imagine
Being linked to Google Earth can be extremely helpful for image interpretation. Google Earth utilizes the high resolution GeoEye satellite, and its images are usually quite recent. When the two programs are linked, Google Earth can be used as a guide for picking out features. Some things may be difficult to identify in a 30m resolution false color image, but with Google Earth displaying a higher resolution image on a second monitor, they become much easier to spot. An example is shown in figure 7.

Part 5: Resampling

Methods

Resampling is used to change the size of pixels. Resampling up will reduce the size of pixels, while resampling down will increase the size of pixels. Each is equally easy to employ, depending on what the job requires.

The 30m spatial resolution Western Wisconsin image was loaded and fit to frame. Using the Raster tool bar, I selected the Spatial icon and clicked 'Resample Pixel Size.' The open image defaulted to the input image and I set the output image to save in my lab folder. I chose 'Nearest Neighbor' as my method of resampling and changed the cell size from 30m to 20m. All other options were left at default.


Figure 8 - Resampling by Nearest Neighbor (left)
Resampling by Bilinear Interpolation (right)
I did the same process once again, but this time I selected 'Bilinear Interpolation' as my resampling method. This too was saved to my lab folder.
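The difference between the two methods is how each new pixel gets its value: nearest neighbor copies the closest original pixel, while bilinear interpolation averages the four surrounding ones. A scipy sketch, where spline order 0 is nearest neighbor and order 1 is bilinear:

    import numpy as np
    from scipy.ndimage import zoom

    image = np.random.default_rng(7).uniform(0, 255, size=(100, 100))  # 30 m pixels

    # 30 m -> 20 m pixels means 1.5x more pixels along each axis
    nearest = zoom(image, 1.5, order=0)    # copies values: fast but blocky
    bilinear = zoom(image, 1.5, order=1)   # averages neighbors: smoother
    print(nearest.shape, bilinear.shape)   # both (150, 150)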

Results

In figure 8, you can see both resampling methods. On the left is the image resampled using the Nearest Neighbor method, while on the right is the Bilinear Interpolation result. Both have had their pixel size reduced from 30m to 20m spatial resolution (resampling up), but because of the different methods used, they look slightly different.