Cotton Grading: Can Image Features Predict Human’s Visual Judgment?

J Dong1, T Zhang1, L Qi1, P Chen2, D Wang2

1Department of Computer Science and Technology, Ocean University of China, China
2Shandong Entry-Exit Inspection and Quarantine, China

Contact: qilin@ouc.edu.cn

Although optical devices have been developed for grading cotton, their output does not coincide well with human visual judgment. Trained workers are therefore still widely employed to inspect and grade cotton manually, a process that is inefficient and subjective. We propose an economical method that analyzes digital images of cotton and uses machine learning techniques to simulate human visual perception for cotton grading. Since "colour", "leaf" and "preparation" are the three major factors used by human graders, we extracted the following computational properties from cotton images to represent them: the mean and variance of the L*, a*, b* colour components for "colour"; the percentage area, average size, number and scatter of trash particles for "leaf"; and Gray Level Co-occurrence Matrix statistics for "preparation". A 21-dimensional feature vector was generated from each image. After applying Principal Component Analysis, we used these features to train a k-Nearest Neighbour classifier. We tested our method on standard references (7 grades) and real samples (4 grades). 126 standard references (18 per grade) were used as the training set. The grading accuracy was 90.5% on a validation set of 42 standard references (6 per grade) and 87.5% on a test set of 15 real samples. [NSFC Project No. 61271405]
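The pipeline described above (21-dimensional image features, reduced by Principal Component Analysis, classified by k-Nearest Neighbour) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values are synthetic stand-ins, and the number of retained components and the value of k are assumptions not stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 21-dimensional image features:
# 126 training samples, 18 per grade across 7 grades (as in the abstract).
n_grades, per_grade, n_features = 7, 18, 21
X_train = np.vstack([rng.normal(loc=g, scale=0.5, size=(per_grade, n_features))
                     for g in range(n_grades)])
y_train = np.repeat(np.arange(n_grades), per_grade)

# PCA via SVD on centred data (the number of components is an assumption).
mean = X_train.mean(axis=0)
Xc = X_train - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
n_components = 5
P = Vt[:n_components].T            # projection matrix (21-d -> 5-d)
Z_train = Xc @ P

def knn_predict(x, k=3):
    """Grade one 21-d feature vector by a k-NN majority vote in PCA space."""
    z = (x - mean) @ P
    d = np.linalg.norm(Z_train - z, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# A held-out sample drawn near the grade-3 cluster should be graded 3.
sample = rng.normal(loc=3, scale=0.5, size=n_features)
print(knn_predict(sample))
```

In practice the 21 features would come from the image measurements listed above (L*, a*, b* statistics, trash geometry, GLCM statistics) rather than synthetic draws, and k and the PCA dimensionality would be tuned on the validation set.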
