Introduction to QVision

QVision is a software framework designed for developing Computer Vision applications and prototypes.

QVision programming paradigm

Computer Vision applications usually perform intensive data processing, which, due to the nature of Computer Vision algorithms, can be logically divided into separate stages. The following image depicts the structure of one such algorithm, the Canny edge detector:

[Image: cannyExample.png, the processing-block structure of the Canny edge detector]

It shows the application structure implementing the algorithm, divided into several processing stages: Gaussian filtering, horizontal gradient, vertical gradient, hysteresis thresholding, etc.

Each of those stages can be easily designed and developed separately as an independent data processing block. Once all of these blocks are coded and tested, they can be linked together to create the final application. This approach offers several programming advantages.

QVision processing blocks

QVision provides three main kinds of processing blocks.

Dynamic properties

Dynamic properties are the mechanism that the QVision architecture uses to interconnect the different processing blocks. A dynamic property is similar to an ordinary class property (a stored data value), except that dynamic properties can be created at execution time and offer what is called data introspection.

The processing blocks in QVision are dynamic property containers. They inherit from a common parent class, QVPropertyContainer, which provides the functionality to create new dynamic properties inside an object, and to store and retrieve values from them.

Data introspection means that the list of dynamic properties contained in a property container can be obtained at execution time through a function call. Dynamic properties are always referenced (to create, delete, or update their contents) using a string identifier that is unique for the property within the object. Thus, a QVPropertyContainer object can be introspected using the function QVPropertyContainer::getPropertyList(), which returns a list of strings identifying the dynamic properties contained in the object.
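The idea of run-time property creation and string-based introspection can be illustrated with a minimal, self-contained sketch. Note that this is a conceptual toy container, not the real QVPropertyContainer API; the method names addProperty, getPropertyValue, and getPropertyList are used here only to mirror the concepts described above:

```cpp
#include <any>
#include <map>
#include <string>
#include <vector>

// Conceptual sketch of a dynamic property container: properties are
// created at execution time and addressed by a string identifier that
// is unique within the object.
class PropertyContainer {
public:
    template <typename T>
    void addProperty(const std::string &name, const T &value) {
        properties[name] = value;   // created (or overwritten) at run time
    }

    template <typename T>
    T getPropertyValue(const std::string &name) const {
        return std::any_cast<T>(properties.at(name));
    }

    // Introspection: list the identifiers of all dynamic properties,
    // in the spirit of QVPropertyContainer::getPropertyList().
    std::vector<std::string> getPropertyList() const {
        std::vector<std::string> names;
        for (const auto &entry : properties)
            names.push_back(entry.first);
        return names;
    }

private:
    std::map<std::string, std::any> properties;
};
```

Because every property is reached through its string identifier, code that manipulates a container (a command-line parser, a GUI widget) never needs to know the container's concrete type.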

This data introspection enables two important features in QVision. First, it allows the final application user to modify input parameters stored in dynamic properties at execution time, either through the command line or through ready-made graphical interface widgets, which the developer can include in an application with a couple of lines of code. The class QVGUI is an example of such a graphical interface widget.

Second, it allows connecting two objects and performing thread-safe data sharing between them, again without requiring any design dependency, so worker objects can be developed independently and later connected using dynamic properties. When a worker is linked to a camera or to a video output widget, what is actually linked is a dynamic property of the worker and a dynamic property of the camera or widget; QVision then shares data between them whenever a new frame is written by the camera object.

Dynamic properties are a homogeneous way of sharing data between objects without creating design dependencies between them. Regardless of the actual types of two objects derived from QVPropertyContainer, if they have type-compatible dynamic properties they can be linked, making the two property container objects share data.
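The essence of such a link, a shared, mutex-protected slot through which one object's output property feeds another object's input property, can be sketched as follows. This is a hypothetical illustration of the idea, not QVision's actual linking machinery:

```cpp
#include <mutex>

// Hypothetical sketch of a property link: a producer (say, a camera)
// writes into the shared slot and a consumer (say, a worker) reads
// from it. Neither side needs to know the other's concrete type; they
// only need type-compatible properties.
template <typename T>
class PropertyLink {
public:
    void write(const T &value) {         // e.g. camera publishes a frame
        std::lock_guard<std::mutex> lock(mutex);
        slot = value;
    }

    T read() const {                     // e.g. worker reads its input
        std::lock_guard<std::mutex> lock(mutex);
        return slot;
    }

private:
    mutable std::mutex mutex;
    T slot{};
};
```

The mutex makes each transfer thread-safe, which is what allows the linked blocks to run in different threads without the developer writing any explicit locking code.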

You can read more about the property container functionality in the QVPropertyContainer class reference page.

Parallel programming with QVision

Thanks to its processing block architecture, QVision makes it easy to develop multi-threaded applications with synchronized, thread-safe data sharing, without requiring the programmer to explicitly use thread locks, semaphores, or any of the other typical complications of classic parallel programming. QVision applications can therefore improve their performance on parallel architectures if the processing block structure is well designed, without demanding much parallel programming expertise from the developer.

Each processing block maps naturally onto a different thread, because different processing algorithms work on separable data, and the volume of data processed by a computer vision task is usually high enough to justify dedicating a thread to it. That is why the QVWorker class inherits from Qt's QThread class: worker objects can be regarded as threads that simply call their iterate() function again and again.

Simply by dividing a task into several subtasks and executing each subtask in a different worker, we obtain parallel computing, because every worker runs in a different thread that the operating system can map onto different cores or CPUs. It is also convenient for workers to run in threads separate from the main application thread, so that the GUI remains responsive even when a call to a worker's iterate() function takes a long time.
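The worker-as-thread pattern described above can be sketched in a few lines of standard C++. This is a simplified stand-in for the real QVWorker (which builds on Qt's QThread): a thread repeatedly calls iterate() until the worker is told to stop. The class and member names here are illustrative only:

```cpp
#include <atomic>
#include <thread>

// Conceptual sketch of a worker: its own thread calls iterate()
// again and again until stop() is requested, mirroring the way a
// QVWorker repeatedly processes incoming frames.
class Worker {
public:
    void start() {
        running = true;
        thread = std::thread([this] {
            while (running)
                iterate();               // one processing step per call
        });
    }

    void stop() {
        running = false;
        if (thread.joinable())
            thread.join();
    }

    int iterations() const { return count; }

protected:
    // A real worker would process a frame here; this sketch just
    // counts the calls so the loop is observable.
    virtual void iterate() { ++count; }

private:
    std::atomic<bool> running{false};
    std::atomic<int> count{0};
    std::thread thread;
};
```

Because each worker owns its thread, running several workers in the same application automatically spreads the processing stages over the available cores.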