

7060 Image Sensors Processing / 4061 Image Processing
Assignment 4
May 2020
(Assignment Value: 18%)
(note: there are no written questions for this assignment)

Overview
This assignment is intended to give you some hands-on experience with thresholding,
morphological cleaning, feature estimation and classification as covered in lectures. In this
assignment work you will be required to modify several functions which implement thresholding,
clean up and feature estimation stages of a classification system.

NOTE: Unlike previous assignments you may use any of the functions in the MATLAB image
processing toolbox except for graythresh (unless otherwise instructed by me – see the special note in
exercise 4B) to achieve this.

Background
A manufacturer of different types of steel bolts, washers and attachment hooks has approached
your company to develop an automated sorting system for their production lines, to deal with the
large number of items that are currently sorted manually as they travel along a conveyor belt. The
intent is to develop a camera-based classification system which can identify the metal parts.
However, due to conditions on the production line, space and lighting are limited and only small,
noisy imagery of the parts can be obtained.
As your company’s key trouble-shooter, you have been assigned the task of developing a simple
demonstrator to show that such a system might work. To this end you have been given some
example imagery of 6 of the many dozens of metallic parts which they sell. Example images are
shown on the next page.
To solve this problem, you will first need to threshold and filter out the region containing the
metal parts, and then implement a feature extraction stage that extracts shape estimates
from the data. The thresholding will be performed using Otsu’s method and an Euler number
feature will be used to discriminate classes 2,4 and 6 from 1,3 and 5. It is up to you which of the
other simple shape measures covered in class you might use to separate out the other shapes.
Your resulting shape measures will then be fed into a KNN classifier which you have been given
to test if the approach works.


Above: example images of the 6 metallic parts, one per class (Class 1 to Class 6): 1 Bolt, 2 Eye Bolt,
3 Hook Bolt, 4 Carabiner, 5 U-bolt and 6 Washer. Note the noise present in the captured images.
The images in the dataset are identified by the file prefixes bolt, eyeb, hook, loop, ubol and wash
respectively.

Assignment Submission
Assignments are to be submitted via MyUni as a ZIP archive file containing the four key MATLAB
functions ( otsu_threshold.m, simple_euler.m, object_extract.m and feature_extract.m plus
any others you modify) and a Word or PDF document summarising your results with comments
on the performance of the processing steps employed.
Marks will be deducted for late submission.
IMPORTANT: DO NOT INCLUDE THE SUPPLIED IMAGE DATABASE IN YOUR ASSIGNMENT
SUBMISSION. I WILL DEDUCT MARKS IF YOU DO !!!
Source Materials
Source code, test functions and example imagery for this assignment are located on the MyUni
website (https://www.myuni.adelaide.edu.au).
The image data is contained in the noisy_data subdirectory and includes some 150 images per
object type (900 in all) which should give you plenty of examples to work with.


Exercise 4A (5%) – Image Thresholding (Otsu’s Method)
Modify the supplied function [B,th]=otsu_threshold(I) such that it thresholds an image using
Otsu’s method as described below. Here I is the input image, B is the binary image output and th
is the estimated threshold. An optional argument N can also be supplied which defines the
number of thresholds tested by the algorithm (see below).
Otsu’s thresholding method is a form of automatic class clustering based on maximising the
variance between the greylevel of pixels in the thresholded and un-thresholded regions of the
image whilst minimising the grey-level variance between pixels within each region.
That is, for all possible values of t, we seek a threshold t that minimises:

    N_b(t)·σ_b(t) + N_o(t)·σ_o(t)

Where
    N_b(t) is the number of pixels below the threshold t
    σ_b(t) is the standard deviation of the values below t
    N_o(t) is the number of pixels above (or equal to) the threshold t
    σ_o(t) is the standard deviation of the values above (or equal to) t
However, the solution for this turns out to be the same as maximising for the between class
variance. This can be easily calculated as follows:
For each potential threshold T (between the max and min values of the image)
1. Separate the pixels into two groups according to the current threshold T.
2. Find the mean of each group, μ_b(t) and μ_o(t).
3. Square the difference between the two means: (μ_b(t) − μ_o(t))².
4. Multiply this value by the number of pixels in one cluster, N_b(t), times the number
in the other, N_o(t).
5. Record the result and the threshold T used
The best threshold is the value T associated with the largest calculated result.
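For illustration, a minimal sketch of the kind of search loop this describes is given below. The function name, the variable names and the use of 256 evenly spaced candidate levels are illustrative assumptions, not the supplied otsu_threshold.m template:

    % Minimal sketch only - names, structure and the 256 candidate levels are
    % illustrative assumptions, not the supplied otsu_threshold.m template.
    function [B, th] = otsu_threshold_sketch(I)
        I = double(I);
        levels = linspace(min(I(:)), max(I(:)), 256);    % candidate thresholds T
        best = -Inf;
        th = levels(1);
        for T = levels
            below = I(I <  T);                           % group below the threshold
            above = I(I >= T);                           % group above (or equal to) it
            if isempty(below) || isempty(above), continue; end
            % between-class variance: Nb * No * (difference of means)^2
            score = numel(below) * numel(above) * (mean(below) - mean(above))^2;
            if score > best
                best = score;
                th = T;
            end
        end
        B = I >= th;                                     % binary output image
    end

A histogram-based implementation is considerably faster, but the brute-force form above follows the five steps directly.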
Modify the function [B,th] = otsu_threshold(I) such that it implements the algorithm given above.
Test your algorithm on some example images including those from the test database and the
supplied function otsu_test.m
IMPORTANT: Do not use graythresh or any other thresholding technique for this part of the
assignment or you will receive zero marks for part 4A.


Above: the Otsu algorithm given earlier applied to two test images (see otsu_test.m).
Exercise 4B (3%) – Region (Object) Extraction
Modify the supplied function B = object_extract(I) such that it thresholds the supplied image I
using Otsu’s method and returns a "cleaned up" binary image B representing the metal part.
The function should remove artefacts from the thresholded imagery, such as point noise, and
merge fragmented regions together to form a good representation of the metal part. You
will need to use morphological filters to achieve this.
Important Note: One problem you may face is in dealing with the excessive noise in the raw
imagery and this may adversely affect the threshold result. You may wish to reduce this noise
before considering the threshold step. So in effect the steps are:
1. Reduce noise
2. Threshold
3. Cleanup using morphological filtering (eg. erode/dilate/open/close etc)
The resulting binary image should ideally contain one connected region representing the metal
part to make it easy for your feature extraction processing.
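As a rough illustration of how the three steps might be chained together, one possible pipeline is sketched below. The median filter size, structuring-element radii and minimum area are placeholder values you would need to tune for this imagery:

    % Rough sketch of one possible object_extract pipeline - the filter size,
    % structuring-element radii and minimum area are placeholders to be tuned.
    function B = object_extract_sketch(I)
        Ismooth = medfilt2(I, [5 5]);            % 1. reduce point noise before thresholding
        B = otsu_threshold(Ismooth);             % 2. threshold (part 4A); invert B if the
                                                 %    part comes out as the background
        B = imopen(B, strel('disk', 2));         % 3. remove remaining speckle
        B = imclose(B, strel('disk', 5));        %    bridge gaps between fragmented regions
        B = bwareaopen(B, 50);                   %    drop any small stray regions left over
    end

Note that filling holes (e.g. with imfill) would destroy the Euler number feature used later, so the clean-up should preserve the holes in the parts.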
An example of this processing is given below (note: your own solutions may not look exactly like
this):


Illustrate your processing results using examples from the image datasets and write this up in your
assignment report. Take special note of any situations where the thresholding may be
problematic.
You can use the supplied test script object_extract_all to check your thresholding on the 6 sets
of test images (but be warned it will take some time to go through all 900 images).
Comments: The quality of the thresholding and cleanup stages will have an impact on any features
you estimate from the regions and hence the quality of the final classification performances in
exercises 4D. You may need to revisit this step once you have the classifier working to do some fine
tuning.
SPECIAL NOTE: If you are unable to get the Otsu thresholder going in part 4A you can temporarily
use the function graythresh() to obtain a threshold. If you do so, please make it very clear in your
code and write-up that you have done so, otherwise marks will be deducted.
Exercise 4C (3.5%) – Feature Estimation (Euler Number)
As described in lectures the Euler number of a binary image is the number of binary regions minus
the number of holes.
There are several ways of estimating the Euler number for a binary image, but the easiest is to
estimate the number of convex and concave corner-like regions in the binary image by performing
a series of tests on 2x2 neighbourhoods around each pixel in the image. Geometrically, each
region or hole contributes a multiple of 4 of these corner-like patterns.
This simple algorithm for 4-connected regions is as follows:
1. Set convex count and concave count to zero.

2. For each pixel (i,j) examine the local 2x2 patch of image data
a. If the 2x2 patch contains 1 set pixel increment the convex count by 1.
b. If the 2x2 patch contains 3 set pixels increment the concave count by 1.
c. If the 2x2 patch contains 2 set pixels diagonally opposite one another (ie [10;01]
or [01;10]) then increment the convex count by 2.
3. The final Euler number is (the convex count minus the concave count) divided by 4.
To loosely explain how this works, imagine a simple box shape (Euler number = 1). The above
algorithm will only increment the convex count at each corner of the box, giving a total of 4
convex to 0 concave → (4 − 0)/4 = 1. Next imagine this same box now has a square hole in it (ie.
Euler = 0). In this case the above algorithm will give a convex count of 4 and also a concave count
of 4 at the 4 corners of the hole → (4 − 4)/4 = 0. If instead there were 2 square holes rather than
one, then the concave count would be 8, leading to an Euler number estimate of −1. Geometrically
speaking, this reasoning can be extended to any number or shape of hole or object. In each case
the differences in count will always come down to a factor of 4, the only exception being when
the object is touching the image boundary. To handle this case a 1 pixel wide empty border is
usually added to the image before applying the above algorithm. Case (c) is included above to
improve the estimate in the presence of small adjacent holes etc.
Modify the supplied file simple_euler.m such that it implements the algorithm shown above.
Make sure you also add the 1 pixel boundary mentioned above before doing the calculation. Use
the script euler_test.m to check your code and write up your results.
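A minimal sketch of the 2x2 quad-counting loop described above is given below. The function and variable names are illustrative; the supplied simple_euler.m template may be organised differently:

    % Minimal sketch of the 2x2 quad-count Euler estimate described above.
    function E = simple_euler_sketch(B)
        B = padarray(logical(B), [1 1], 0);          % 1-pixel empty border (see text)
        convexCount  = 0;
        concaveCount = 0;
        [rows, cols] = size(B);
        for i = 1:rows-1
            for j = 1:cols-1
                patch = B(i:i+1, j:j+1);             % local 2x2 neighbourhood
                s = sum(patch(:));
                if s == 1
                    convexCount = convexCount + 1;   % one set pixel: convex corner
                elseif s == 3
                    concaveCount = concaveCount + 1; % three set pixels: concave corner
                elseif s == 2 && patch(1,1) == patch(2,2)
                    % two set pixels diagonally opposite, ie [1 0;0 1] or [0 1;1 0]
                    convexCount = convexCount + 2;
                end
            end
        end
        E = (convexCount - concaveCount) / 4;
    end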

Above: example Euler number calculations (see euler_test). The four test panels show shapes with
Euler numbers 1, 0, 3 and 5, each correctly estimated.
Note: you do not need this solution to complete Exercise 4E. If you wish to use the Euler number
for exercise 4E then you may use regionprops() to get an estimate.

Exercise 4D (3.5%) – Feature Estimation (Eccentricity)
The eccentricity of an object can be used to distinguish between rounded and elongated objects.
There are several ways of computing such a measure. Here we will use an approximation based
on the central image moments discussed in class. The measure for eccentricity is as follows:
    ε = [ (μ_{2,0} − μ_{0,2})² + 4·μ_{1,1}² ] / (μ_{2,0} + μ_{0,2})²
From the lecture notes, recall that the central moments of the binary image B(x,y) are defined as:

    μ_{p,q} = Σ_x Σ_y (x − x_c)^p · (y − y_c)^q · B(x,y)

Where x_c and y_c are the centres of mass, defined as:

    x_c = Σ_x Σ_y x·B(x,y) / Σ_x Σ_y B(x,y),   y_c = Σ_x Σ_y y·B(x,y) / Σ_x Σ_y B(x,y)
Modify the supplied function simple_eccentricity.m to implement this estimate of eccentricity,
test it using the script eccentricity_test.m, and write up your results. The supplied
code includes a call to meshgrid() to supply you with the x and y coordinates. An example output
is shown below:

Above: Example eccentricity calculations (see eccentricity_test).
NOTE: this estimate does not match the one used in regionprops() so don’t try comparing the two.
Note: you do not need this solution to complete Exercise 4E. If you wish to use eccentricity for
exercise 4E then you may use regionprops() to get an estimate.


Exercise 4E (3%) – Shape Measure Classification
As already noted the Euler number could be used as a simple shape measure to distinguish classes
2, 4 and 6 from classes 1, 3 and 5. However, to distinguish between all 6 classes we will need to
make use of some of the other shape measures described in class.
Modify the function F=feature_extract(B) such that when it is given the cleaned up binary image
B it returns a row vector of shape measurements of your choice (which may or may not include
the Euler number).
To assist you, many of the simple shape measures described in class can be constructed from
values returned by the MATLAB function regionprops() which works with binary images. Your task
is to figure out which ones can be used to successfully classify the 6 shapes.
The function regionprops takes a binary region B and returns a record with fields including such
measurements as Area, MajorAxisLength, MinorAxisLength, EulerNumber, etc. However, be warned that if your
object is fragmented into several regions the function will return estimates for each fragment
rather than a single result (make sure your solution for part 4B does the right thing).
Suggested measurements from the notes that you might like to try include:
• Euler number
• Minor axis / major axis
• Perimeter / square root of the area
• Area / convex area
Note your estimates should be, if possible, invariant to translation, scale and rotation.
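As a sketch of the general shape such a function might take, one possibility is shown below. The particular regionprops properties and feature combination here are assumptions; choosing features that actually separate the 6 classes is the point of the exercise:

    % Illustrative sketch only - this feature combination is an assumption,
    % not a recommended solution.
    function F = feature_extract_sketch(B)
        stats = regionprops(B, 'Area', 'MajorAxisLength', 'MinorAxisLength', ...
                               'Perimeter', 'ConvexArea', 'EulerNumber');
        [~, k] = max([stats.Area]);                      % use the largest region only
        s = stats(k);
        F = [ s.EulerNumber, ...
              s.MinorAxisLength / s.MajorAxisLength, ...  % elongation (scale invariant)
              s.Perimeter / sqrt(s.Area), ...             % compactness-like measure
              s.Area / s.ConvexArea ];                    % solidity
    end

Taking only the largest region is a safeguard in case part 4B occasionally leaves more than one connected component.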
To test if your proposed shape measures might work, use the supplied function:
[F,C] = feature_extract_all();
If this runs to completion without producing an error (which it should unless you have a bug in
your feature_extract.m file) then the values F and C will contain the feature measurements and
class numbers for each of the 900 images in the dataset.
The function cplot(F,C) can then be used to generate scatterplots of the features you have
created. You can use tricks like cplot(F(:,4:5),C) to just plot features 4 and 5 of your data.


Above: illustration of feature clustering based on two possible shape measures (created using
cplot.m). Your own feature plots will differ from the example shown here.
Once you are satisfied that your feature measurements are working well enough you can use the
supplied KNN classifier script as a final test:
[C_est,C_true] = knn_classify_all(F,C,k);
Where F and C are the previously calculated features and class numbers, and ‘k’ is the number of
nearest neighbours to use (eg. 3). C_est and C_true are the estimated
and true class labels based on a 30% train / 70% test split of the data. An example screen shot of
the run-time plot and resulting confusion summary generated by this script are shown below.



Above: sample screenshots of knn_classify_all.m. Your own results and feature plots will differ
from what is shown here. Ideally, the higher the numbers along the diagonal of the confusion
matrix, the better.
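If you want a numeric summary to put in your report, the confusion matrix and overall accuracy can be tabulated from C_est and C_true with a few lines of base MATLAB. The sketch below assumes the class labels are the integers 1 to 6:

    % Sketch: tabulate a 6x6 confusion matrix and overall accuracy from the
    % knn_classify_all output (assumes class labels are the integers 1..6).
    conf = accumarray([C_true(:) C_est(:)], 1, [6 6]);   % rows = true class, cols = estimated
    acc  = sum(C_est(:) == C_true(:)) / numel(C_true);   % fraction classified correctly
    disp(conf);
    fprintf('Overall accuracy: %.1f%%\n', 100 * acc);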
Unlike most real-world problems you are ever likely to encounter, this dataset is relatively easy
to classify. If you get this working correctly, you should be able to easily achieve over 90% correct
classification (and probably into the high nineties).
Important: Plot histograms and/or scatterplots of the feature values for each class and record
your classification scores and put these in your report. Try different combinations of feature
measures and see what happens. What did you find were the best combinations?
