https://doi.org/10.15344/2456-4451/2016/107
Abstract
One of the main tasks of vision systems is to support autonomous vehicle navigation in unstructured environments, where unexpected objects can appear suddenly. To do so, vehicles can draw on various information sources (cameras, ultrasonic sensors, GPS, LIDAR, etc.) to model the environment in which they operate, and these data are combined to extract the information needed to guide their movement through the environment. The complexity of this task prevents the integration of vision systems into real-time control systems (autonomous vehicles, mobile robots, etc.). This is because most research in the computer vision field focuses on hardware development or on creating new algorithms and methods for analyzing and manipulating image data, while system development issues are treated as secondary. Consequently, designs are very efficient but hardly reusable. On the other hand, real-time systems possess features that make them particularly sensitive to the architectural decisions that are made. The use of software frameworks and components has demonstrated its effectiveness in improving software productivity and quality. This work proposes a novel approach, called ViSel-TR, for developing vision systems that pursues two main objectives: (1) efficient interpretation and a reasonable response time in an unstructured environment, and (2) use of the different development paradigms offered by software engineering so that such systems can be integrated into real-time systems. To achieve these objectives, ViSel-TR uses model-driven software development techniques to separate the description of component-based real-time applications from their possible implementations for different platforms.
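As a rough illustration of the description/implementation separation mentioned above, the following minimal sketch shows how a platform-neutral component interface can be bound to different platform-specific implementations. The class names (ImageSource, StubCamera, Frame) are hypothetical and do not come from the ViSel-TR approach itself; this is only an assumption-laden example of the general component-based idea, not the paper's actual toolchain.

```cpp
// Minimal sketch (hypothetical names, not the ViSel-TR API): a platform-neutral
// component description kept separate from a platform-specific implementation.
#include <cstdint>
#include <iostream>
#include <memory>
#include <vector>

// Platform-neutral data type used by the component description.
struct Frame {
    int width = 0;
    int height = 0;
    std::vector<std::uint8_t> pixels;  // grayscale, row-major
};

// Component interface: the "description" that application code depends on.
class ImageSource {
public:
    virtual ~ImageSource() = default;
    virtual Frame grab() = 0;          // acquire one frame from the sensor
};

// One possible implementation, bound to a concrete platform or driver.
class StubCamera : public ImageSource {
public:
    Frame grab() override {
        Frame f;
        f.width = 640;
        f.height = 480;
        f.pixels.assign(static_cast<std::size_t>(f.width) * f.height, 0);
        return f;                      // a real implementation would call the camera driver
    }
};

int main() {
    // Application code depends only on the interface, so the platform-specific
    // implementation can be swapped at deployment time without changing this code.
    std::unique_ptr<ImageSource> source = std::make_unique<StubCamera>();
    Frame frame = source->grab();
    std::cout << "Grabbed " << frame.width << "x" << frame.height << " frame\n";
    return 0;
}
```

In a model-driven workflow such as the one the abstract describes, interfaces of this kind would typically be generated from platform-independent models rather than written by hand, with the platform-specific bindings supplied per target.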