These item classes have the following functionality in the architecture:
This will be the content of the Qt project file for our new programming example:
    include(/usr/local/QVision.0.1.0/qvproject.pri)
    TARGET = features
    SOURCES = features.cpp
This time the project will contain a single source file, called features.cpp. Its source code is the following:
    #include <QVApplication>
    #include <QVMPlayerCamera>
    #include <QVDefaultGUI>
    #include <QVImageCanvas>
    #include <QVFilterSelectorWorker>

    int main(int argc, char *argv[])
    {
        QVApplication app(argc, argv,
            "Example program for QVision library. Obtains several features from input video frames.");

        QVFilterSelectorWorker<uChar, 3> filterWorker("Filter worker");

        QVMPlayerCamera camera("Video");
        camera.link(&filterWorker, "Input image");

        QVDefaultGUI interface;

        QVImageCanvas filteredCanvas("Input");
        filteredCanvas.linkProperty(filterWorker, "Output image");

        return app.exec();
    }
This example creates a simple QVision application with the following items: a QVMPlayerCamera for the input video source; a QVWorker of type QVFilterSelectorWorker, which lets us select a filter to apply to the input image; a QVDefaultGUI interface; and a QVImageCanvas to show the processed output image.
Calls to the linkProperty and QVMPlayerCamera::link methods connect the elements of our application architecture, as in the previous, simpler example. We use them to connect the camera to the worker's input, and the worker's output to the canvas.
This example's first part has the following structure:
Now we will add a new worker to process the filtered video:
    #include <QVApplication>
    #include <QVMPlayerCamera>
    #include <QVDefaultGUI>
    #include <QVImageCanvas>
    #include <QVFilterSelectorWorker>

    class HarrisExtractorWorker: public QVWorker
    {
    public:
        HarrisExtractorWorker(QString name): QVWorker(name)
        {
            addProperty<int>("Points", inputFlag, 15, "Maximal number of corners", 1, 100);
            addProperty<double>("Threshold", inputFlag, 1.0, "Corner response threshold", 0.0, 256.0);
            addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
            addProperty< QList<QPointF> >("Corners", outputFlag);
        }

        void iterate()
        {
            // 0. Read input parameters.
            QVImage<uChar> image = getPropertyValue< QVImage<uChar,3> >("Input image");
            const double threshold = getPropertyValue<double>("Threshold");
            const int pointNumber = getPropertyValue<int>("Points");
            timeFlag("grab Frame");

            // 1. Obtain corner response image.
            QVImage<sFloat> cornerResponseImage(image.getRows(), image.getCols());
            FilterHessianCornerResponseImage(image, cornerResponseImage);
            timeFlag("Corner response image");

            // 2. Local maximal filter.
            QList<QPointF> hotPoints = GetMaximalResponsePoints3(cornerResponseImage, threshold);
            timeFlag("Local maximal filter");

            // 3. Output resulting data.
            setPropertyValue< QList<QPointF> >("Corners",
                hotPoints.mid(MAX(0, hotPoints.size() - pointNumber)));
        }
    };

    int main(int argc, char *argv[])
    {
        QVApplication app(argc, argv,
            "Example program for QVision library. Obtains several features from input video frames.");

        QVFilterSelectorWorker<uChar, 3> filterWorker("Filter worker");
        HarrisExtractorWorker cornersWorker("Corners Worker");

        QVMPlayerCamera camera("Video");
        camera.link(&filterWorker, "Input image");
        filterWorker.linkProperty("Output image", &cornersWorker, "Input image", QVWorker::SynchronousLink);

        QVDefaultGUI interface;

        QVImageCanvas filteredCanvas("Input");
        filteredCanvas.linkProperty(filterWorker, "Output image");

        QVImageCanvas cornersCanvas("Corners");
        cornersCanvas.linkProperty(cornersWorker, "Input image");
        cornersCanvas.linkProperty(cornersWorker, "Corners", Qt::blue, false);

        return app.exec();
    }
We have created a class for the new worker and instantiated it in the main function. The new worker's "Input image" property is linked to the filter worker's "Output image" property, so each frame is processed twice; the QVWorker::SynchronousLink parameter indicates that each filter worker iteration must wait for the previous iteration of the new worker to finish. We have also created a new canvas and linked it both to the new worker's "Input image" property (the worker does not modify the image) and to its QList&lt;QPointF&gt; output property, "Corners"; the canvas will draw these points over the image.
The new worker, 'cornersWorker' from now on, extracts the corners of an image with a Harris detector. It must reimplement the constructor and the iterate() method. In the constructor it adds three input properties (the image, plus two unlinked input properties that we can change from the GUI) and one output property, the corners of the image. In iterate() it reads the input properties, processes the input image (using two QVision internal algorithms, FilterHessianCornerResponseImage and GetMaximalResponsePoints3) and sets the output property value. Note also the timeFlag() calls: they mark time points to be represented in the CPU statistics plots that the GUI provides.
This example's second part has the following structure:
We will now add two new workers to our extended example:
    #include <QVApplication>
    #include <QVMPlayerCamera>
    #include <QVDefaultGUI>
    #include <QVImageCanvas>
    #include <QVPolyline>
    #include <QVFilterSelectorWorker>

    class CannyOperatorWorker: public QVWorker
    {
    public:
        CannyOperatorWorker(QString name): QVWorker(name)
        {
            addProperty<double>("cannyHigh", inputFlag, 150, "High threshold for Canny operator", 50, 1000);
            addProperty<double>("cannyLow", inputFlag, 50, "Low threshold for Canny operator", 10, 500);
            addProperty<bool>("applyIPE", inputFlag, TRUE, "If we want to apply the IPE algorithm");
            addProperty<double>("paramIPE", inputFlag, 5.0, "IPE parameter (max. allowed distance to line)", 1.0, 25.0);
            addProperty<bool>("intersectLines", inputFlag, TRUE, "If we want IPE to postprocess polyline (intersecting lines)");
            addProperty<int>("minLengthContour", inputFlag, 25, "Minimal length of a contour to be considered", 1, 150);
            addProperty<int>("showNothingCannyImage", inputFlag, 0, "If we want nothing|Canny|original image to be shown", 0, 2);
            addProperty<bool>("showContours", inputFlag, TRUE, "If we want contours to be shown");
            addProperty< QVImage<uChar,1> >("Output image", outputFlag);
            addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
            addProperty< QList<QVPolyline> >("Output contours", outputFlag);
        }

        void iterate()
        {
            // 0. Read input parameters.
            const double cannyHigh = getPropertyValue<double>("cannyHigh");
            const double cannyLow = getPropertyValue<double>("cannyLow");
            const bool applyIPE = getPropertyValue<bool>("applyIPE");
            const double paramIPE = getPropertyValue<double>("paramIPE");
            const bool intersectLines = getPropertyValue<bool>("intersectLines");
            const int minLengthContour = getPropertyValue<int>("minLengthContour");
            const int showNothingCannyImage = getPropertyValue<int>("showNothingCannyImage");
            const bool showContours = getPropertyValue<bool>("showContours");
            QVImage<uChar,1> image = getPropertyValue< QVImage<uChar,3> >("Input image");

            const uInt cols = image.getCols(), rows = image.getRows();
            QVImage<sFloat> imageFloat(cols, rows), dX(cols, rows), dY(cols, rows), dXNeg(cols, rows);
            QVImage<uChar> canny(cols, rows), buffer;

            // 1. Convert image from uChar to sFloat.
            Convert(image, imageFloat);
            timeFlag("Convert image from uChar to sFloat");

            // 2. Obtain horizontal and vertical gradients from the image.
            FilterSobelHorizMask(imageFloat, dY, 3);
            FilterSobelVertMask(imageFloat, dX, 3);
            MulC(dX, dXNeg, -1);
            timeFlag("Obtain horizontal and vertical gradients from image");

            // 3. Apply Canny operator.
            CannyGetSize(canny, buffer);
            Canny(dXNeg, dY, canny, buffer, cannyLow, cannyHigh);
            timeFlag("Apply Canny operator");

            // 4. Get contours.
            const QList<QVPolyline> contourList = getLineContoursThreshold8Connectivity(canny, 128);
            timeFlag("Get contours");

            QList<QVPolyline> outputList;
            foreach(QVPolyline contour, contourList)
                if (contour.size() > minLengthContour)
                {
                    if (applyIPE)
                    {
                        QVPolyline IPEcontour;
                        IterativePointElimination(contour, IPEcontour, paramIPE, FALSE, intersectLines);
                        outputList.append(IPEcontour);
                    }
                    else
                        outputList.append(contour);
                }
            timeFlag("IPE on contours");

            // 5. Publish resulting data.
            if (showNothingCannyImage == 1)
                setPropertyValue< QVImage<uChar,1> >("Output image", canny);
            else if (showNothingCannyImage == 2)
                setPropertyValue< QVImage<uChar,1> >("Output image", image);
            else
            {
                QVImage<uChar> whiteImage(cols, rows);
                Set(whiteImage, 255);
                setPropertyValue< QVImage<uChar,1> >("Output image", whiteImage);
            }

            if (showContours)
                setPropertyValue< QList<QVPolyline> >("Output contours", outputList);
            else
                setPropertyValue< QList<QVPolyline> >("Output contours", QList<QVPolyline>());
            timeFlag("Publish results");
        }
    };

    class ContourExtractorWorker: public QVWorker
    {
    public:
        ContourExtractorWorker(QString name): QVWorker(name)
        {
            addProperty<int>("Threshold", inputFlag, 128, "Threshold for a point to count as pertaining to a region", 0, 255);
            addProperty<int>("MinAreaIPE", inputFlag, 0, "Minimal area to keep points in the IPE algorithm", 0, 50);
            addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
            addProperty< QList<QVPolyline> >("Internal contours", outputFlag);
            addProperty< QList<QVPolyline> >("External contours", outputFlag);
        }

        void iterate()
        {
            // 0. Read input parameters.
            QVImage<uChar,1> image = getPropertyValue< QVImage<uChar,3> >("Input image");
            const uInt threshold = getPropertyValue<int>("Threshold"),
                       minAreaIPE = getPropertyValue<int>("MinAreaIPE");
            timeFlag("Read input parameters");

            // 1. Get contours from the image.
            const QList<QVPolyline> contours = getConnectedSetBorderContoursThreshold(image, threshold);
            timeFlag("Get contours from image");

            // 2. Apply IPE.
            QList<QVPolyline> ipeContours;
            foreach(QVPolyline polyline, contours)
            {
                QVPolyline ipePolyline;
                IterativePointElimination(polyline, ipePolyline, minAreaIPE);
                if (ipePolyline.size() > 0)
                    ipeContours.append(ipePolyline);
            }
            timeFlag("IPE filtering");

            // 3. Export contours to the output properties.
            QList<QVPolyline> internalContours, externalContours;
            foreach(QVPolyline polyline, ipeContours)
                if (polyline.direction)
                    internalContours.append(polyline);
                else
                    externalContours.append(polyline);
            setPropertyValue< QList<QVPolyline> >("Internal contours", internalContours);
            setPropertyValue< QList<QVPolyline> >("External contours", externalContours);
            timeFlag("Computed output contours");
        }
    };
The main function remains as follows:
    int main(int argc, char *argv[])
    {
        QVApplication app(argc, argv,
            "Example program for QVision library. Obtains several features from input video frames.");

        QVFilterSelectorWorker<uChar, 3> filterWorker("Filter worker");
        CannyOperatorWorker cannyWorker("Canny Operator Worker");
        ContourExtractorWorker contoursWorker("Contours Extractor Worker");
        HarrisExtractorWorker cornersWorker("Corners Worker");

        QVMPlayerCamera camera("Video");
        camera.link(&filterWorker, "Input image");
        filterWorker.linkProperty("Output image", &cannyWorker, "Input image", QVWorker::SynchronousLink);
        filterWorker.linkProperty("Output image", &contoursWorker, "Input image", QVWorker::SynchronousLink);
        filterWorker.linkProperty("Output image", &cornersWorker, "Input image", QVWorker::SynchronousLink);

        QVDefaultGUI interface;

        QVImageCanvas filteredCanvas("Input");
        filteredCanvas.linkProperty(filterWorker, "Output image");

        QVImageCanvas cannyCanvas("Canny");
        cannyCanvas.linkProperty(cannyWorker, "Output image");
        cannyCanvas.linkProperty(cannyWorker, "Output contours");

        QVImageCanvas contourCanvas("Contours");
        contourCanvas.linkProperty(contoursWorker, "Input image");
        contourCanvas.linkProperty(contoursWorker, "Internal contours", Qt::red);
        contourCanvas.linkProperty(contoursWorker, "External contours", Qt::blue);

        QVImageCanvas cornersCanvas("Corners");
        cornersCanvas.linkProperty(cornersWorker, "Input image");
        cornersCanvas.linkProperty(cornersWorker, "Corners", Qt::blue, false);

        return app.exec();
    }
This time we have added a contour extractor and a Canny operator. Both workers receive an input image and obtain lists of points from it, storing the results in dynamic properties of type QList&lt;QVPolyline&gt;. The main function links those properties to QVImageCanvas objects to display the detected features (Canny edges, contours and Harris corners) over the corresponding images.
This example's third part has the following structure:
Now we will add a graphical plot of the number of contours that the two new workers generate:
    #include <QVApplication>
    #include <QVMPlayerCamera>
    #include <QVDefaultGUI>
    #include <QVImageCanvas>
    #include <QVPolyline>
    #include <QVFilterSelectorWorker>

    class CannyOperatorWorker: public QVWorker
    {
    public:
        CannyOperatorWorker(QString name): QVWorker(name)
        {
            addProperty<double>("cannyHigh", inputFlag, 150, "High threshold for Canny operator", 50, 1000);
            addProperty<double>("cannyLow", inputFlag, 50, "Low threshold for Canny operator", 10, 500);
            addProperty<bool>("applyIPE", inputFlag, TRUE, "If we want to apply the IPE algorithm");
            addProperty<double>("paramIPE", inputFlag, 5.0, "IPE parameter (max. allowed distance to line)", 1.0, 25.0);
            addProperty<bool>("intersectLines", inputFlag, TRUE, "If we want IPE to postprocess polyline (intersecting lines)");
            addProperty<int>("minLengthContour", inputFlag, 25, "Minimal length of a contour to be considered", 1, 150);
            addProperty<int>("showNothingCannyImage", inputFlag, 0, "If we want nothing|Canny|original image to be shown", 0, 2);
            addProperty<bool>("showContours", inputFlag, TRUE, "If we want contours to be shown");
            addProperty< QVImage<uChar,1> >("Output image", outputFlag);
            addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
            addProperty< QList<QVPolyline> >("Output contours", outputFlag);
            addProperty<int>("Num output contours", outputFlag);
        }

        void iterate()
        {
            // ...
            setPropertyValue<int>("Num output contours", outputList.size());
            timeFlag("Publish results");
        }
    };

    class ContourExtractorWorker: public QVWorker
    {
    public:
        ContourExtractorWorker(QString name): QVWorker(name)
        {
            addProperty<int>("Threshold", inputFlag, 128, "Threshold for a point to count as pertaining to a region", 0, 255);
            addProperty<int>("MinAreaIPE", inputFlag, 0, "Minimal area to keep points in the IPE algorithm", 0, 50);
            addProperty< QVImage<uChar,3> >("Input image", inputFlag|outputFlag);
            addProperty< QList<QVPolyline> >("Internal contours", outputFlag);
            addProperty< QList<QVPolyline> >("External contours", outputFlag);
            addProperty<int>("Num internal contours", outputFlag);
            addProperty<int>("Num External contours", outputFlag);
        }

        void iterate()
        {
            // ...
            setPropertyValue< QList<QVPolyline> >("Internal contours", internalContours);
            setPropertyValue< QList<QVPolyline> >("External contours", externalContours);
            setPropertyValue<int>("Num internal contours", internalContours.size());
            setPropertyValue<int>("Num External contours", externalContours.size());
            timeFlag("Computed output contours");
        }
    };

    class HarrisExtractorWorker: public QVWorker
    {
        // ...
    };

    int main(int argc, char *argv[])
    {
        QVApplication app(argc, argv,
            "Example program for QVision library. Obtains several features from input video frames.");

        // ...

        QVNumericPlot numericPlot("Num contours");
        numericPlot.linkProperty(cannyWorker, "Num output contours");
        numericPlot.linkProperty(contoursWorker);

        return app.exec();
    }
In this extension we have added a graphical plot, in this case a QVNumericPlot, which displays linked int and double properties. To feed it, we have created int output properties in contoursWorker and cannyWorker that hold the number of contours each one generates, and we have linked those properties to the QVNumericPlot (if we do not indicate a property name, all of the worker's int and double properties are linked).
This last example has the following structure: