To create a Qt project you need to write a .pro file. This is similar to an advanced Makefile, where you specify the source and header files, compilation options, libraries, and anything else your project includes or uses. From this .pro file, the qmake program (a tool included with Qt) generates the actual Makefile for the project, which you can then use to compile the program, with the Qt binaries linked correctly and everything properly configured for your machine.
To build a program based on QVision you should create a Qt project file that at least references all the source and header files and includes the QVision project file, qvproject.pri. That file is located in the QVision install directory, and should be included in the .pro file with this line:
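As a rough illustration (the file name and values here are hypothetical, not taken from the QVision examples below), a minimal .pro file might look like this:

# myapp.pro - hypothetical minimal qmake project file
TARGET = myapp        # name of the resulting binary
SOURCES += main.cpp   # source files to compile
HEADERS += myapp.h    # header files of the project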
include(<path to QVision>/qvproject.pri)
where <path to QVision> should be the absolute path to the QVision install directory. For example:
include(/usr/local/QVision.0.0.5/qvproject.pri)
This line adds to the project all the references to the library binaries, and the configuration required to use QVision.
The next section shows an example .pro file for a simple QVision project. For further information about the qmake tool and the syntax of .pro files, you can check the online manual for QMake.
Supposing QVision is installed in the directory /usr/local/QVision.0.0.5, create a file example.pro with the following content:

include(/usr/local/QVision.0.0.5/qvproject.pri)

TARGET = example

# Input
SOURCES += example.cpp
This file must include the project file for QVision projects:
include(/usr/local/QVision.0.0.5/qvproject.pri)
If you don't have QVision installed in that directory, just change the path in the first line, according to what is explained in the section QVision projects. Next, you can create the file example.cpp, which will contain the code for the application. Note that this file is referenced in the file example.pro at the line:
SOURCES += example.cpp
That line makes the project include this source file. The file example.cpp will contain the following includes:
#include <QVMPlayerCamera>
#include <QVApplication>
#include <QVGUI>
#include <QVImageCanvas>
and the following main function:
int main(int argc, char *argv[])
{
	// QVApplication object
	QVApplication app(argc, argv, "Example program for QVision library. Play a video on a canvas window.");

	// QVCamera object
	QVMPlayerCamera camera("Video");

	// QVWorker object
	PlayerWorker worker("Player worker");
	camera.link(&worker,"Input image");

	// GUI object
	QVGUI interface;

	// Video/image output object
	QVImageCanvas imageCanvas("Test image");
	imageCanvas.linkProperty(worker,"Output image");

	return app.exec();
}
The class QVMPlayerCamera is a subclass of QVCamera. Both represent video or image inputs in the QVision framework, but QVCamera is an abstract class, and every class modelling video or image inputs in QVision should inherit from it. At the moment it has only one subclass, QVMPlayerCamera, a flexible video input that can read frames from diverse video file formats, webcams and digital video cameras. It is based on the MPlayer program. To allow a worker object to read frames from a video source, the worker should be linked to the camera object, specifying the name of the image property in the worker, with a line like this:
camera.link(&worker,"Input image");
The QVApplication object registers the worker and camera objects in the system, and processes command line input parameters. It inherits from the QApplication class, so it offers similar functionality. It is the most important object in a QVision application, so it should always be created first in the main() function, before any worker, camera or graphical interface object. Only one object derived from this class should be created in a QVision application.
The class QVGUI is the main GUI widget. An object of this class reads the workers and cameras registered in the QVApplication object, and provides buttons and sliders to control the algorithm parameters of the system (parameters of the computer vision or video processing algorithms) and the video input flow, allowing the user to stop and resume frame reading from the video sources opened by the program at execution time. It can also display information about CPU usage statistics, frame rate, seconds read from the video inputs, etc., and it allows the user to stop and close the program. It is recommended to create only one object of this type in any QVision application.
The class QVImageCanvas is a video or image output widget. Any object of this class will display an image window showing an output image from the worker object. Like the camera object, it should be connected to the worker to read the resulting images, specifying the name of the output image with a line like this:
imageCanvas.linkProperty(worker,"Output image");
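Later parts of this manual also use an extended form of linkProperty to draw geometric primitives over the canvas image. A brief sketch, taken from the feature extraction example below (the exact meaning of the last parameter is an assumption based on its usage there):

// Draw a QList<QPointF> property in blue over the canvas image;
// the final boolean presumably controls whether the points are joined as a polyline.
cornersCanvas.linkProperty(cornersWorker, "Corners", Qt::blue, false);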
Note that the QVApplication object should not be created or used through a pointer to its base class QApplication; otherwise app->exec() would call QApplication::exec() instead of QVApplication::exec():
int main(int argc, char *argv[])
{
	// QVApplication object
	QApplication *app = new QVApplication(argc, argv, "Example program for QVision library. Play a video on a canvas window.");

	[...]

	return app->exec(); // this will execute QApplication::exec() function, not QVApplication::exec() function.
}
The worker's iterate() function should read input images, parameters, and other input data from dynamic properties contained in the worker, process them, and store the resulting images, data structures or values in other dynamic properties of the worker object.
The file example.cpp must contain the following code for the worker:
class PlayerWorker: public QVWorker
{
public:
	PlayerWorker(QString name): QVWorker(name)
	{
		addProperty< QVImage<uChar,1> >("Input image", inputFlag|outputFlag);
		addProperty< QVImage<uChar,1> >("Output image", outputFlag);
	}

	void iterate()
	{
		QVImage<uChar,1> image = getPropertyValue< QVImage<uChar,1> >("Input image");

		// image processing / computer vision code should go here

		setPropertyValue< QVImage<uChar,1> >("Output image", image);
	}
};
This worker has no input parameters; it simply reads an input image and stores it in an output property.
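As a sketch of how an input parameter could be added to this worker (the "Threshold" property is hypothetical; the addProperty signature with default value, description, minimum and maximum follows the one used in later examples of this manual), the constructor could be extended like this:

PlayerWorker(QString name): QVWorker(name)
{
	// Hypothetical input parameter: name, flags, default value, description, min, max.
	addProperty<int>("Threshold", inputFlag, 128, "Example threshold parameter", 0, 255);
	addProperty< QVImage<uChar,1> >("Input image", inputFlag|outputFlag);
	addProperty< QVImage<uChar,1> >("Output image", outputFlag);
}

An unlinked input property like this would appear both as a slider in the QVGUI interface and as a command line parameter (as in the --help output shown below).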
If both files are in the same directory, you can compile from that location by writing these instructions in the command line:

# qmake
# make
This will generate the binary example. Note that the name of the executable was specified in the example.pro file, with the line:
TARGET = example
You can execute the example program with this command line:
# ./example --URL=http://perception.inf.um.es/public_data/videos/misc/minuto.avi
The program will start and display the video pointed to by the URL in a window.
Besides that interface, the user can set initial values for some properties through command line parameters. You can check the properties of the example program with the command line parameter --help. Every QVision application recognizes that parameter, and shows the usage and a list of the main properties that you can set through the command line. The output of the program example when invoked with that parameter:
# ./example --help
is:
Usage: ./example [OPTIONS]

Example program for QVision library. Play a video on a canvas window.

Input parameters for Video:
  --Rows=[int] (def. 0) .............................. Rows to open the camera.
  --Cols=[int] (def. 0) ........................... Columns to open the camera.
  --RealTime=[true,false](def. false) If the camera should be opened in real time mode.
  --Deinterlaced=[true,false](def. false) If the camera should be opened in deinterlaced mode.
  --NoLoop=[true,false](def. false) If the camera should be opened in no loop mode.
  --RGBMEncoder=[true,false](def. false) If the camera should be opened in RGB using mencoder.
  --URL=[text] ............................ URL of the video source (see doc.).

Input parameters for Player worker:
  --print stats=[true,false](def. false) Enables realtime stats console output for worker.
You can change the input video source by changing the value of the URL command line parameter, so that the example program reads from a video file other than minuto.avi, and you can likewise set other video input configuration parameters, such as the size of the image (parameters Cols and Rows), among others. For example:
# ./example --URL=http://perception.inf.um.es/public_data/videos/misc/penguin.dv

Or:
# ./example --URL=http://perception.inf.um.es/public_data/videos/misc/penguin.dv --Rows=240 --Cols=320
QVision's command line parameter system will be explained in the section TheGUI, along with the graphical interface, but it is recommended to read the following section of this documentation, Programming, first, to understand how to add and use these parameters.
This section details how to make a simple QVision project, with a .pro file and all the code in a single .cpp source file. Supposing you have QVision installed in the directory /usr/local/QVision.0.0.5, you can create a file called features.pro with the following content:
include(/usr/local/QVision.0.0.5/qvproject.pri)

TARGET = features

# Input
SOURCES += features.cpp
If you don't have QVision installed in that directory, just change the path in the first line, according to what is explained in the section QVision projects.
Next, you can create the file features.cpp:
#include <QVApplication>
#include <QVMPlayerCamera>
#include <QVGUI>
#include <QVImageCanvas>
#include <QVFilterSelectorWorker>

int main(int argc, char *argv[])
{
	QVApplication app(argc, argv, "Example program for QVision library. Obtains several features from input video frames.");

	QVFilterSelectorWorker<uChar, 3> filterWorker("Filter worker");

	QVMPlayerCamera camera("Video");
	camera.link(&filterWorker, "Input image");

	QVGUI interface;

	QVImageCanvas filteredCanvas("Input");
	filteredCanvas.linkProperty(filterWorker, "Output image");

	return app.exec();
}
This example creates a simple QVision application with the following items: a QVMPlayerCamera to get the input source video; a QVWorker of an internal QVision worker type (QVFilterSelectorWorker) that lets us select a filter to apply over the "Input image" property; a QVGUI interface to interact with the items; and a QVImageCanvas to show the processed output image. Besides creating these items, we need to connect them, as done in camera.link(&filterWorker, "Input image") and filteredCanvas.linkProperty(filterWorker, "Output image"), in which we connect the camera to the worker and the worker to the canvas.
This example's first part has the following structure:
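A rough sketch of the dataflow, reconstructed from the code above:

camera ("Video") --> filterWorker ("Filter worker") --> filteredCanvas ("Input")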
Note that the file features.cpp is referenced in the file features.pro, at the line:
SOURCES += features.cpp
Also note that, as explained in the previous section, The first program, the project file also includes the project file for QVision projects:
include(/usr/local/QVision.0.0.5/qvproject.pri)
If both files are in the same directory, you can compile from that location by writing these instructions in the command line:
# qmake
# make
This will generate the binary features. Note that the name of the executable was specified in the features.pro file, in the line:
TARGET = features
You can execute the example program with this command line:
# ./features --URL=http://perception.inf.um.es/public_data/videos/misc/penguin.dv
It is just a simple video player that will show the video file pointed to by the URL in a window, with some control widgets.
#include <QVApplication>
#include <QVMPlayerCamera>
#include <QVGUI>
#include <QVImageCanvas>
#include <QVFilterSelectorWorker>

class HarrisExtractorWorker: public QVWorker
{
public:
	HarrisExtractorWorker(QString name): QVWorker(name)
	{
		addProperty< int >("Points", inputFlag, 15, "Maximal number of corners to detect", 1, 100);
		addProperty< double >("Threshold", inputFlag, 1.0, "Threshold for the corner response", 0.0, 256.0);
		addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
		addProperty< QList<QPointF> >("Corners", outputFlag);
	}

	void iterate()
	{
		// 0. Read input parameters
		QVImage<uChar> image = getPropertyValue< QVImage<uChar,3> >("Input image");
		const double threshold = getPropertyValue<double>("Threshold");
		const int pointNumber = getPropertyValue<int>("Points");
		timeFlag("grab Frame");

		// 1. Obtain corner response image.
		QVImage<sFloat> cornerResponseImage(image.getRows(), image.getCols());
		FilterHessianCornerResponseImage(image, cornerResponseImage);
		timeFlag("Corner response image");

		// 2. Local maximal filter.
		QList<QPointF> hotPoints = GetMaximalResponsePoints3(cornerResponseImage, threshold);
		timeFlag("Local maximal filter");

		// 3. Output resulting data.
		setPropertyValue< QList<QPointF> >("Corners", hotPoints.mid(MAX(0, hotPoints.size() - pointNumber)));
	}
};

int main(int argc, char *argv[])
{
	QVApplication app(argc, argv, "Example program for QVision library. Obtains several features from input video frames.");

	QVFilterSelectorWorker<uChar, 3> filterWorker("Filter worker");
	HarrisExtractorWorker cornersWorker("Corners Worker");

	QVMPlayerCamera camera("Video");
	camera.link(&filterWorker, "Input image");
	filterWorker.linkProperty("Output image", &cornersWorker, "Input image", QVWorker::SynchronousLink);

	QVGUI interface;

	QVImageCanvas filteredCanvas("Input");
	filteredCanvas.linkProperty(filterWorker, "Output image");

	QVImageCanvas cornersCanvas("Corners");
	cornersCanvas.linkProperty(cornersWorker, "Input image");
	cornersCanvas.linkProperty(cornersWorker, "Corners", Qt::blue, false);

	return app.exec();
}
In this extension we have added a new worker and linked its "Input image" property to the "Output image" property of the filter worker, in order to generate an image processed twice. The QVWorker::SynchronousLink parameter indicates that each filterWorker iteration waits for the previous iteration of the new worker to finish. Besides, we have created a new canvas and linked it to the new worker's "Input image" property (the worker does not modify the image) and to a new QList<QPointF> property of the worker, "Corners"; the canvas will draw these points.
The new worker, cornersWorker from now on, extracts the corners of an image with a Harris extractor. The cornersWorker must reimplement the constructor and the iterate() methods. In the constructor it adds three input properties (the image, plus two unlinked input properties that we can change from the QVGUI interface) and an output property, the corners of the image. In iterate() it gets the input properties, processes the input image (using two internal QVision algorithms, FilterHessianCornerResponseImage and GetMaximalResponsePoints3) and sets the output property value. We can also see some timeFlag calls; they mark time points to be represented in the CPU stats graphic plots (the GUI holds one of these).
This example's second part has the following structure:
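A rough sketch of the dataflow so far, reconstructed from the code above:

camera --> filterWorker --> filteredCanvas ("Input")
               |
               | (SynchronousLink)
               v
          cornersWorker --> cornersCanvas ("Corners": image + corner points)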
#include <QVApplication>
#include <QVMPlayerCamera>
#include <QVGUI>
#include <QVImageCanvas>
#include <QVPolyline>
#include <QVFilterSelectorWorker>

class CannyOperatorWorker: public QVWorker
{
public:
	CannyOperatorWorker(QString name): QVWorker(name)
	{
		addProperty<double>("cannyHigh", inputFlag, 150, "High threshold for Canny operator", 50, 1000);
		addProperty<double>("cannyLow", inputFlag, 50, "Low threshold for Canny operator", 10, 500);
		addProperty<bool>("applyIPE", inputFlag, TRUE, "If we want to apply the IPE algorithm");
		addProperty<double>("paramIPE", inputFlag, 5.0, "IPE parameter (max. allowed distance to line)", 1.0, 25.0);
		addProperty<bool>("intersectLines", inputFlag, TRUE, "If we want IPE to postprocess polyline (intersecting lines)");
		addProperty<int>("minLengthContour", inputFlag, 25, "Minimal length of a contour to be considered", 1, 150);
		addProperty<int>("showNothingCannyImage", inputFlag, 0, "If we want nothing|Canny|original image to be shown", 0, 2);
		addProperty<bool>("showContours", inputFlag, TRUE, "If we want contours to be shown");
		addProperty< QVImage<uChar,1> >("Output image", outputFlag);
		addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
		addProperty< QList<QVPolyline> >("Output contours", outputFlag);
	}

	void iterate()
	{
		// 0. Read input parameters
		const double cannyHigh = getPropertyValue<double>("cannyHigh");
		const double cannyLow = getPropertyValue<double>("cannyLow");
		const bool applyIPE = getPropertyValue<bool>("applyIPE");
		const double paramIPE = getPropertyValue<double>("paramIPE");
		const bool intersectLines = getPropertyValue<bool>("intersectLines");
		const int minLengthContour = getPropertyValue<int>("minLengthContour");
		const int showNothingCannyImage = getPropertyValue<int>("showNothingCannyImage");
		const bool showContours = getPropertyValue<bool>("showContours");
		QVImage<uChar,1> image = getPropertyValue< QVImage<uChar,3> >("Input image");
		const uInt cols = image.getCols(), rows = image.getRows();
		QVImage<sFloat> imageFloat(cols, rows), dX(cols, rows), dY(cols, rows), dXNeg(cols, rows);
		QVImage<uChar> canny(cols, rows), buffer;

		// 1. Convert image from uChar to sFloat
		Convert(image, imageFloat);
		timeFlag("Convert image from uChar to sFloat");

		// 2. Obtain horizontal and vertical gradients from image
		FilterSobelHorizMask(imageFloat, dY, 3);
		FilterSobelVertMask(imageFloat, dX, 3);
		MulC(dX, dXNeg, -1);
		timeFlag("Obtain horizontal and vertical gradients from image");

		// 3. Apply Canny operator
		CannyGetSize(canny, buffer);
		Canny(dXNeg, dY, canny, buffer, cannyLow, cannyHigh);
		timeFlag("Apply Canny operator");

		// 4. Get contours
		const QList<QVPolyline> contourList = getLineContoursThreshold8Connectivity(canny, 128);
		timeFlag("Get contours");

		QList<QVPolyline> outputList;
		foreach(QVPolyline contour, contourList)
			if (contour.size() > minLengthContour)
			{
				if (applyIPE)
				{
					QVPolyline IPEcontour;
					IterativePointElimination(contour, IPEcontour, paramIPE, FALSE, intersectLines);
					outputList.append(IPEcontour);
				}
				else
					outputList.append(contour);
			}
		timeFlag("IPE on contours");

		// 5. Publish resulting data
		if (showNothingCannyImage == 1)
			setPropertyValue< QVImage<uChar,1> >("Output image", canny);
		else if (showNothingCannyImage == 2)
			setPropertyValue< QVImage<uChar,1> >("Output image", image);
		else
		{
			QVImage<uChar> whiteImage(cols, rows);
			Set(whiteImage, 255);
			setPropertyValue< QVImage<uChar,1> >("Output image", whiteImage);
		}

		if (showContours)
			setPropertyValue< QList<QVPolyline> >("Output contours", outputList);
		else
			setPropertyValue< QList<QVPolyline> >("Output contours", QList<QVPolyline>());
		timeFlag("Publish results");
	}
};

class ContourExtractorWorker: public QVWorker
{
public:
	ContourExtractorWorker(QString name): QVWorker(name)
	{
		addProperty<int>("Threshold", inputFlag, 128, "Threshold for a point to count as pertaining to a region", 0, 255);
		addProperty<int>("MinAreaIPE", inputFlag, 0, "Minimal area to keep points in the IPE algorithm", 0, 50);
		addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
		addProperty< QList<QVPolyline> >("Internal contours", outputFlag);
		addProperty< QList<QVPolyline> >("External contours", outputFlag);
	}

	void iterate()
	{
		// 0. Read input parameters
		QVImage<uChar,1> image = getPropertyValue< QVImage<uChar,3> >("Input image");
		const uInt	threshold = getPropertyValue< int >("Threshold"),
				minAreaIPE = getPropertyValue< int >("MinAreaIPE");
		timeFlag("Read input parameters");

		// 1. Get contours from image
		const QList<QVPolyline> contours = getConnectedSetBorderContoursThreshold(image, threshold);
		timeFlag("Get contours from image");

		// 2. Apply IPE
		QList<QVPolyline> ipeContours;
		foreach(QVPolyline polyline, contours)
		{
			QVPolyline ipePolyline;
			IterativePointElimination(polyline, ipePolyline, minAreaIPE);
			if (ipePolyline.size() > 0)
				ipeContours.append(ipePolyline);
		}
		timeFlag("IPE filtering");

		// 3. Export contours to output property
		QList<QVPolyline> internalContours, externalContours;
		foreach(QVPolyline polyline, ipeContours)
			if (polyline.direction)
				internalContours.append(polyline);
			else
				externalContours.append(polyline);
		setPropertyValue< QList<QVPolyline> >("Internal contours", internalContours);
		setPropertyValue< QList<QVPolyline> >("External contours", externalContours);
		timeFlag("Computed output contours");
	}
};

class HarrisExtractorWorker: public QVWorker
{
	// ... (as in the previous part)
};

int main(int argc, char *argv[])
{
	QVApplication app(argc, argv, "Example program for QVision library. Obtains several features from input video frames.");

	QVFilterSelectorWorker<uChar, 3> filterWorker("Filter worker");
	CannyOperatorWorker cannyWorker("Canny Operator Worker");
	ContourExtractorWorker contoursWorker("Contours Extractor Worker");
	HarrisExtractorWorker cornersWorker("Corners Worker");

	QVMPlayerCamera camera("Video");
	camera.link(&filterWorker, "Input image");
	filterWorker.linkProperty("Output image", &cannyWorker, "Input image", QVWorker::SynchronousLink);
	filterWorker.linkProperty("Output image", &contoursWorker, "Input image", QVWorker::SynchronousLink);
	filterWorker.linkProperty("Output image", &cornersWorker, "Input image", QVWorker::SynchronousLink);

	QVGUI interface;

	QVImageCanvas filteredCanvas("Input");
	filteredCanvas.linkProperty(filterWorker, "Output image");

	QVImageCanvas cannyCanvas("Canny");
	cannyCanvas.linkProperty(cannyWorker, "Output image");
	cannyCanvas.linkProperty(cannyWorker, "Output contours");

	QVImageCanvas contourCanvas("Contours");
	contourCanvas.linkProperty(contoursWorker, "Input image");
	contourCanvas.linkProperty(contoursWorker, "Internal contours", Qt::red);
	contourCanvas.linkProperty(contoursWorker, "External contours", Qt::blue);

	QVImageCanvas cornersCanvas("Corners");
	cornersCanvas.linkProperty(cornersWorker, "Input image");
	cornersCanvas.linkProperty(cornersWorker, "Corners", Qt::blue, false);

	return app.exec();
}
In this extension we have added two new workers, in the same way as in the previous step. In this case we have added a contour extractor and a Canny operator. These workers have QList<QVPolyline> properties, which have been linked to two new canvases; each canvas will draw those polylines over its image.
This example's third part has the following structure:
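A rough sketch of the dataflow, reconstructed from the code above:

camera --> filterWorker --> filteredCanvas ("Input")
               |
               +--(SynchronousLink)--> cannyWorker    --> cannyCanvas   ("Canny": image + output contours)
               +--(SynchronousLink)--> contoursWorker --> contourCanvas ("Contours": image + internal/external contours)
               +--(SynchronousLink)--> cornersWorker  --> cornersCanvas ("Corners": image + corner points)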
#include <QVApplication>
#include <QVMPlayerCamera>
#include <QVGUI>
#include <QVImageCanvas>
#include <QVPolyline>
#include <QVFilterSelectorWorker>

class CannyOperatorWorker: public QVWorker
{
public:
	CannyOperatorWorker(QString name): QVWorker(name)
	{
		addProperty<double>("cannyHigh", inputFlag, 150, "High threshold for Canny operator", 50, 1000);
		addProperty<double>("cannyLow", inputFlag, 50, "Low threshold for Canny operator", 10, 500);
		addProperty<bool>("applyIPE", inputFlag, TRUE, "If we want to apply the IPE algorithm");
		addProperty<double>("paramIPE", inputFlag, 5.0, "IPE parameter (max. allowed distance to line)", 1.0, 25.0);
		addProperty<bool>("intersectLines", inputFlag, TRUE, "If we want IPE to postprocess polyline (intersecting lines)");
		addProperty<int>("minLengthContour", inputFlag, 25, "Minimal length of a contour to be considered", 1, 150);
		addProperty<int>("showNothingCannyImage", inputFlag, 0, "If we want nothing|Canny|original image to be shown", 0, 2);
		addProperty<bool>("showContours", inputFlag, TRUE, "If we want contours to be shown");
		addProperty< QVImage<uChar,1> >("Output image", outputFlag);
		addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
		addProperty< QList<QVPolyline> >("Output contours", outputFlag);
		addProperty<int>("Num output contours", outputFlag);
	}

	void iterate()
	{
		// ... (as in the previous part)
		setPropertyValue<int>("Num output contours", outputList.size());
		timeFlag("Publish results");
	}
};

class ContourExtractorWorker: public QVWorker
{
public:
	ContourExtractorWorker(QString name): QVWorker(name)
	{
		addProperty<int>("Threshold", inputFlag, 128, "Threshold for a point to count as pertaining to a region", 0, 255);
		addProperty<int>("MinAreaIPE", inputFlag, 0, "Minimal area to keep points in the IPE algorithm", 0, 50);
		addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
		addProperty< QList<QVPolyline> >("Internal contours", outputFlag);
		addProperty< QList<QVPolyline> >("External contours", outputFlag);
		addProperty<int>("Num internal contours", outputFlag);
		addProperty<int>("Num External contours", outputFlag);
	}

	void iterate()
	{
		// ... (as in the previous part)
		setPropertyValue< QList<QVPolyline> >("Internal contours", internalContours);
		setPropertyValue< QList<QVPolyline> >("External contours", externalContours);
		setPropertyValue<int>("Num internal contours", internalContours.size());
		setPropertyValue<int>("Num External contours", externalContours.size());
		timeFlag("Computed output contours");
	}
};

class HarrisExtractorWorker: public QVWorker
{
	// ... (as in the previous part)
};

int main(int argc, char *argv[])
{
	QVApplication app(argc, argv, "Example program for QVision library. Obtains several features from input video frames.");

	// ... (workers, camera, links and canvases as in the previous part)

	QVNumericPlot numericPlot("Num contours");
	numericPlot.linkProperty(cannyWorker, "Num output contours");
	numericPlot.linkProperty(contoursWorker);

	return app.exec();
}
In this extension we have added a graphic plot, in this case a QVNumericPlot, which displays linked int and double properties. To provide those properties we have created int properties in contoursWorker and cannyWorker that hold the number of contours each of them generates, and we have linked those properties to the QVNumericPlot (if no property name is indicated, all of the worker's int and double properties are linked).
This example's fourth part has the following structure:
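A rough sketch of the new connections, reconstructed from the code above:

cannyWorker ("Num output contours") -----------------> numericPlot ("Num contours")
contoursWorker (all its int/double output properties) --> numericPlot ("Num contours")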