JBE, vol. 24, no. 2, pp.234-242, March, 2019
SIFT Image Feature Extraction based on Deep Learning
Jae-Eun Lee, Won-Jun Moon, Young-Ho Seo, and Dong-Wook Kim
Corresponding Author E-mail: email@example.com
In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image patch is a SIFT feature point. The training data set consists of DIV2K images cropped into 33×33 patches and uses RGB images, unlike SIFT, which operates on grayscale images. The ground truth consists of RobHess SIFT features extracted with the octave (scale) set to 0, sigma to 1.6, and the number of intervals to 3. Based on VGG-16, we construct increasingly deep networks with 13, 23, and 33 convolution layers, and experiment with different methods of increasing the image scale. The result of using the sigmoid function as the activation function of the output layer is compared with the result of using the softmax function. Experimental results show that the proposed network not only achieves more than 99% extraction accuracy but also exhibits high extraction repeatability for distorted images.
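The patch-and-label construction described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes non-overlapping 33×33 crops, and the function and variable names (`make_patches`, `keypoints`) are ours, not from the paper's code. The SIFT keypoint list would in practice come from the RobHess extractor with the stated parameters.

```python
# Hypothetical sketch of the training-data preparation: crop 33x33 RGB
# patches and label each patch by whether its center pixel coincides
# with a SIFT feature point. Names are illustrative assumptions.
import numpy as np

PATCH = 33
HALF = PATCH // 2  # 16: offset of the center pixel in a 33x33 patch

def make_patches(image, keypoints, stride=PATCH):
    """Crop `image` (H, W, 3) into 33x33 patches.

    Returns (patches, labels): label is 1 when the patch center
    coincides with a SIFT keypoint given as (row, col), else 0.
    """
    kp = {(int(r), int(c)) for r, c in keypoints}
    h, w, _ = image.shape
    patches, labels = [], []
    for top in range(0, h - PATCH + 1, stride):
        for left in range(0, w - PATCH + 1, stride):
            patch = image[top:top + PATCH, left:left + PATCH]
            center = (top + HALF, left + HALF)
            patches.append(patch)
            labels.append(1 if center in kp else 0)
    return np.stack(patches), np.array(labels)

# Toy example: a 66x66 image yields four non-overlapping 33x33 patches.
img = np.zeros((66, 66, 3), dtype=np.uint8)
pts = [(16, 16), (49, 49)]  # pretend SIFT found these two feature points
X, y = make_patches(img, pts)
```

On a real DIV2K image the same loop would simply run over a larger grid; the binary labels then serve as the ground truth for the keypoint/non-keypoint classifier.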
Keywords: SIFT feature extraction, Deep learning, VGG, CNN (Convolutional Neural Network), Repeatability
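On the sigmoid-versus-softmax comparison mentioned in the abstract: for a binary keypoint/non-keypoint decision the two are closely related, since a two-way softmax reduces to a sigmoid applied to the difference of the two logits. A minimal numerical check (the logit values are arbitrary examples, not the paper's):

```python
# Check that softmax over two logits equals sigmoid of their difference.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

z_pos, z_neg = 2.3, -0.7  # hypothetical logits for the two classes
p_softmax = softmax(np.array([z_pos, z_neg]))[0]
p_sigmoid = sigmoid(z_pos - z_neg)
```

The two probabilities agree to floating-point precision, so any empirical difference between the two output layers comes from training dynamics (e.g. one output unit versus two) rather than from the functions' expressive power.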
References
[1] C. Harris and M. Stephens, “A combined corner and edge detector,” Proceedings of the Alvey Vision Conference, pp. 147-151, 1988.
[2] K. Mikolajczyk and C. Schmid, “Indexing based on scale invariant interest points,” Proceedings of the IEEE International Conference on Computer Vision (ICCV), Vol. 1, pp. 525-531, 2001.
[3] J. Shi and C. Tomasi, “Good features to track,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1994.
[4] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
[5] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” Proceedings of the European Conference on Computer Vision (ECCV), May 2006.
[6] E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” Proceedings of the 9th European Conference on Computer Vision (ECCV), May 2006.
[7] E. Mair, G. Hager, D. Burschka, M. Suppa, and G. Hirzinger, “Adaptive and generic corner detection based on the accelerated segment test,” Computer Vision - ECCV 2010, pp. 183-196, 2010.
[8] W.-J. Moon, Y.-H. Seo, and D.-W. Kim, “Parameter analysis for time reduction in extracting SIFT keypoints in the aspect of image stitching,” Journal of Broadcast Engineering, Vol. 23, No. 4, pp. 559-573, July 2018.
[9] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: an efficient alternative to SIFT or SURF,” Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2011.
[10] E. Agustsson and R. Timofte, “NTIRE 2017 challenge on single image super-resolution: dataset and study,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017.
[11] R. Hess, “An open-source SIFT library,” Proceedings of ACM Multimedia, pp. 1493-1496, 2010.
[12] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” Proceedings of the International Conference on Learning Representations (ICLR), 2015.
[13] K. Mikolajczyk and C. Schmid, “Scale and affine invariant interest point detectors,” International Journal of Computer Vision, Vol. 60, No. 1, pp. 63-86, 2004.