Monocular vision-based position sensor using neural networks for automated vehicle following

Yasushi Omura, Shigeyuki Funabiki, Toshihiko Tanaka

Research output: Contribution to conference › Paper › peer-review

6 Citations (Scopus)

Abstract

This paper presents a new position sensor based on a CCD camera and neural networks that measures the distance to, the direction angle to, and the pose angle of the lead vehicle in automated vehicle following. A picture image of lamps mounted on the lead vehicle is obtained with the CCD camera, and the lamp positions are established in a rectangular coordinate system by means of graphic data processing. The measuring process of the proposed position sensor is developed by neural network learning with backpropagation. The number of lamps can be reduced from four to three without sacrificing sensor accuracy, and this reduction shortens the acquisition time in graphic data processing. Experimental results show that the measured distance, direction angle, and pose angle are sufficiently accurate for practical use in automated vehicle following.
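
As an illustration of the measuring process described in the abstract, the sketch below shows a single-hidden-layer network trained with backpropagation that maps the image-plane coordinates of three lamps to the distance, direction angle, and pose angle. This is a minimal sketch, not the authors' implementation: the layer sizes, activation function, learning rate, and training data are illustrative assumptions only.

```python
# Minimal sketch (illustrative assumptions, not the paper's network):
# a single-hidden-layer network trained with backpropagation that maps the
# image-plane coordinates of three lamps on the lead vehicle to
# (distance, direction angle, pose angle).
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 3 lamps x (u, v) pixel coordinates -> 3 outputs.
n_in, n_hidden, n_out = 6, 10, 3

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    """Forward pass; returns hidden activation and network output."""
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2          # linear output layer for regression
    return h, y

def train_step(x, t, lr=0.01):
    """One backpropagation update for a single (input, target) pair."""
    global W1, b1, W2, b2
    h, y = forward(x)
    err = y - t                        # output error
    dW2 = np.outer(err, h)
    db2 = err
    dh = (W2.T @ err) * (1.0 - h**2)   # backpropagate through tanh
    dW1 = np.outer(dh, x)
    db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return 0.5 * float(err @ err)      # squared-error loss for this sample

# Hypothetical training data: normalized lamp coordinates paired with
# measured (distance, direction angle, pose angle). Placeholder values only.
X = rng.uniform(-1, 1, size=(200, n_in))
T = rng.uniform(-1, 1, size=(200, n_out))

for epoch in range(100):
    loss = sum(train_step(x, t) for x, t in zip(X, T))
```

In practice the inputs would be the lamp positions extracted by the graphic data processing step and the targets the measured distance and angles; the random arrays above only stand in for that data.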

Original language: English
Pages: 388-393
Number of pages: 6
Publication status: Published - 1999
Externally published: Yes
Event: Proceedings of the 1999 3rd IEEE International Conference on Power Electronics and Drive Systems (PEDS'99) - Kowloon, Hong Kong
Duration: Jul 27, 1999 - Jul 29, 1999

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
