
JBE, vol. 24, no. 4, pp.573-579, July, 2019

DOI: https://doi.org/10.5909/JBE.2019.24.4.573

Improved Object Recognition using Multi-view Camera for ADAS

Dong-hun Park and Hakil Kim

Corresponding Author E-mail: hikim@inha.ac.kr

Abstract:

To achieve fully autonomous driving, a vehicle's perception of its surroundings must be at least as good as that of a human driver. The 60° narrow-angle and 120° wide-angle cameras commonly used in autonomous driving each have drawbacks that stem from their viewing angles. This paper proposes a multi-view object recognition system that overcomes the respective drawbacks of the wide-angle and narrow-angle cameras. In addition, the aspect ratios of objects in the data acquired from the wide-angle and narrow-angle cameras were analyzed to modify the SSD (Single Shot Detector) algorithm, and the detector was trained on the acquired data, achieving higher performance than when using a monocular camera alone.
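As a rough illustration of the aspect-ratio modification described in the abstract, the sketch below shows how SSD default boxes could be generated with per-feature-map aspect ratios tuned separately for the wide-angle and narrow-angle camera data. The aspect-ratio lists, feature-map sizes, and scale range are hypothetical placeholders, not values reported in the paper.

```python
# Minimal sketch (not the authors' code): SSD default-box generation with
# aspect ratios adjusted to match the object statistics of each camera.
import math
from itertools import product

def default_boxes(feature_map_sizes, aspect_ratios, s_min=0.2, s_max=0.9):
    """Return (cx, cy, w, h) default boxes, normalized to [0, 1]."""
    m = len(feature_map_sizes)
    scales = [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]
    scales.append(1.0)  # extra scale for the s'_k = sqrt(s_k * s_{k+1}) box

    boxes = []
    for k, f in enumerate(feature_map_sizes):
        s_k = scales[k]
        s_prime = math.sqrt(scales[k] * scales[k + 1])
        for i, j in product(range(f), repeat=2):
            cx, cy = (j + 0.5) / f, (i + 0.5) / f
            boxes.append((cx, cy, s_k, s_k))            # aspect ratio 1
            boxes.append((cx, cy, s_prime, s_prime))    # extra ratio-1 box
            for ar in aspect_ratios[k]:
                boxes.append((cx, cy, s_k * math.sqrt(ar), s_k / math.sqrt(ar)))
                boxes.append((cx, cy, s_k / math.sqrt(ar), s_k * math.sqrt(ar)))
    return boxes

# Hypothetical example: wider default boxes for the 120° camera, narrower
# ones for the 60° camera, chosen from each camera's ground-truth statistics.
wide_angle_ars = [[2, 3]] * 6        # per feature map, placeholder values
narrow_angle_ars = [[2, 1 / 2]] * 6
wide_boxes = default_boxes([38, 19, 10, 5, 3, 1], wide_angle_ars)
narrow_boxes = default_boxes([38, 19, 10, 5, 3, 1], narrow_angle_ars)
```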



Keywords: Object Detection, Multi-Angle Camera, Deep Learning, Lightweight, Vehicle Camera System

