Important Information
Warranty
The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects
in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National
Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives
notice of such defects during the warranty period. National Instruments does not warrant that the operation of the software shall be
uninterrupted or error free.
A Return Material Authorization (RMA) number must be obtained from the factory and clearly marked on the outside of the package before
any equipment will be accepted for warranty work. National Instruments will pay the shipping costs of returning to the owner parts which are
covered by warranty.
National Instruments believes that the information in this document is accurate. The document has been carefully reviewed for technical
accuracy. In the event that technical or typographical errors exist, National Instruments reserves the right to make changes to subsequent
editions of this document without prior notice to holders of this edition. The reader should consult National Instruments if errors are suspected.
In no event shall National Instruments be liable for any damages arising out of or related to this document or the information contained in it.
EXCEPT AS SPECIFIED HEREIN, NATIONAL INSTRUMENTS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, AND SPECIFICALLY DISCLAIMS ANY WARRANTY OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. CUSTOMER’S RIGHT TO RECOVER DAMAGES CAUSED BY FAULT OR NEGLIGENCE ON THE PART OF
NATIONAL INSTRUMENTS SHALL BE LIMITED TO THE AMOUNT THERETOFORE PAID BY THE CUSTOMER. NATIONAL INSTRUMENTS WILL NOT BE LIABLE FOR
DAMAGES RESULTING FROM LOSS OF DATA, PROFITS, USE OF PRODUCTS, OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, EVEN IF ADVISED OF THE POSSIBILITY
THEREOF. This limitation of the liability of National Instruments will apply regardless of the form of action, whether in contract or tort, including
negligence. Any action against National Instruments must be brought within one year after the cause of action accrues. National Instruments
shall not be liable for any delay in performance due to causes beyond its reasonable control. The warranty provided herein does not cover
damages, defects, malfunctions, or service failures caused by owner’s failure to follow the National Instruments installation, operation, or
maintenance instructions; owner’s modification of the product; owner’s abuse, misuse, or negligent acts; and power failure or surges, fire,
flood, accident, actions of third parties, or other events outside reasonable control.
Copyright
Under the copyright laws, this publication may not be reproduced or transmitted in any form, electronic or mechanical, including photocopying,
recording, storing in an information retrieval system, or translating, in whole or in part, without the prior written consent of National
Instruments Corporation.
Trademarks
CVI™, IMAQ™, LabVIEW™, National Instruments™, National Instruments Alliance Partner™, NI™, ni.com™, NI Developer Zone™, and
NI-IMAQ™ are trademarks of National Instruments Corporation.
Product and company names mentioned herein are trademarks or trade names of their respective companies.
Members of the National Instruments Alliance Partner Program are business entities independent from National Instruments and have no
agency, partnership, or joint-venture relationship with National Instruments.
WARNING REGARDING USE OF NATIONAL INSTRUMENTS PRODUCTS
(1) NATIONAL INSTRUMENTS PRODUCTS ARE NOT DESIGNED WITH COMPONENTS AND TESTING FOR A LEVEL OF
RELIABILITY SUITABLE FOR USE IN OR IN CONNECTION WITH SURGICAL IMPLANTS OR AS CRITICAL COMPONENTS IN
ANY LIFE SUPPORT SYSTEMS WHOSE FAILURE TO PERFORM CAN REASONABLY BE EXPECTED TO CAUSE SIGNIFICANT
INJURY TO A HUMAN.
(2) IN ANY APPLICATION, INCLUDING THE ABOVE, RELIABILITY OF OPERATION OF THE SOFTWARE PRODUCTS CAN BE
IMPAIRED BY ADVERSE FACTORS, INCLUDING BUT NOT LIMITED TO FLUCTUATIONS IN ELECTRICAL POWER SUPPLY,
COMPUTER HARDWARE MALFUNCTIONS, COMPUTER OPERATING SYSTEM SOFTWARE FITNESS, FITNESS OF COMPILERS
AND DEVELOPMENT SOFTWARE USED TO DEVELOP AN APPLICATION, INSTALLATION ERRORS, SOFTWARE AND
HARDWARE COMPATIBILITY PROBLEMS, MALFUNCTIONS OR FAILURES OF ELECTRONIC MONITORING OR CONTROL
DEVICES, TRANSIENT FAILURES OF ELECTRONIC SYSTEMS (HARDWARE AND/OR SOFTWARE), UNANTICIPATED USES OR
MISUSES, OR ERRORS ON THE PART OF THE USER OR APPLICATIONS DESIGNER (ADVERSE FACTORS SUCH AS THESE ARE
HEREAFTER COLLECTIVELY TERMED “SYSTEM FAILURES”). ANY APPLICATION WHERE A SYSTEM FAILURE WOULD
CREATE A RISK OF HARM TO PROPERTY OR PERSONS (INCLUDING THE RISK OF BODILY INJURY AND DEATH) SHOULD
NOT BE RELIANT SOLELY UPON ONE FORM OF ELECTRONIC SYSTEM DUE TO THE RISK OF SYSTEM FAILURE. TO AVOID
DAMAGE, INJURY, OR DEATH, THE USER OR APPLICATION DESIGNER MUST TAKE REASONABLY PRUDENT STEPS TO
PROTECT AGAINST SYSTEM FAILURES, INCLUDING BUT NOT LIMITED TO BACK-UP OR SHUT DOWN MECHANISMS.
BECAUSE EACH END-USER SYSTEM IS CUSTOMIZED AND DIFFERS FROM NATIONAL INSTRUMENTS' TESTING
PLATFORMS AND BECAUSE A USER OR APPLICATION DESIGNER MAY USE NATIONAL INSTRUMENTS PRODUCTS IN
COMBINATION WITH OTHER PRODUCTS IN A MANNER NOT EVALUATED OR CONTEMPLATED BY NATIONAL
INSTRUMENTS, THE USER OR APPLICATION DESIGNER IS ULTIMATELY RESPONSIBLE FOR VERIFYING AND VALIDATING
THE SUITABILITY OF NATIONAL INSTRUMENTS PRODUCTS WHENEVER NATIONAL INSTRUMENTS PRODUCTS ARE
INCORPORATED IN A SYSTEM OR APPLICATION, INCLUDING, WITHOUT LIMITATION, THE APPROPRIATE DESIGN,
PROCESS AND SAFETY LEVEL OF SUCH SYSTEM OR APPLICATION.
Contents

About This Manual

Chapter 1: Introduction to IMAQ Vision
    niocr.ocx
    NIOCR Control
    CWMachineVision Control
    Creating IMAQ Vision Applications

Chapter 2: Getting Measurement-Ready Images
    Acquiring an Image
    Continuous Acquisition
    Reading a File
    Converting an Array to an Image
    Display an Image
    Attach Calibration Information
    Analyze an Image
    Filters
    Convolution Filter
    Grayscale Morphology

Chapter 3: Making Grayscale and Color Measurements
    Defining Regions Programmatically
    Comparing Colors
    Learning Color Information
    Using the Entire Image

Chapter 4: Performing Particle Analysis
    Create a Binary Image
    Improve the Binary Image
    Separating Touching Particles

Chapter 5: Performing Machine Vision Tasks
    Locate Objects to Inspect
    Using Edge Detection to Build a Coordinate Transformation
    Using Pattern Matching to Build a Coordinate Transformation
    Choosing a Method to Build the Coordinate Transformation
    Finding Points Using Pattern Matching
    Make Measurements
    Analytic Geometry Measurements
    Reading Characters
    Reading Barcodes
    Read 1D Barcodes
    Read Data Matrix Barcode
    Read PDF417 Barcode
    Display Results

Chapter 6: Calibrating Images
    Learning the Correction Table
    Setting the Scaling Mode
    Simple Calibration
    Save Calibration Information

Technical Support and Professional Services

Glossary

Index
About This Manual
The IMAQ Vision for Visual Basic User Manual is intended for engineers
and scientists who have knowledge of Microsoft Visual Basic and need to
create machine vision and image processing applications using Visual
Basic objects. The manual guides you through tasks from setting up the
imaging system to taking measurements.
Conventions
The following conventions appear in this manual:
»
The » symbol leads you through nested menu items and dialog box options
to a final action. The sequence File»Page Setup»Options directs you to
pull down the File menu, select the Page Setup item, and select Options
from the last dialog box.
This icon denotes a tip, which alerts you to advisory information.
This icon denotes a note, which alerts you to important information.
bold
Bold text denotes items that you must select or click in the software, such
as menu items and dialog box options. Bold text also denotes parameter
names.
italic
Italic text denotes variables, emphasis, a cross reference, or an introduction
to a key concept. This font also denotes text that is a placeholder for a word
or value that you must supply.
monospace
Text in this font denotes text or characters that you should enter from the
keyboard, sections of code, programming examples, and syntax examples.
This font is also used for the proper names of disk drives, paths, directories,
programs, subprograms, subroutines, device names, functions, operations,
variables, filenames, and extensions.
Related Documentation
This manual assumes that you are familiar with Visual Basic and can use
ActiveX controls in Visual Basic. The following are good sources of
information about Visual Basic and ActiveX controls:
• msdn.microsoft.com
• Documentation that accompanies Microsoft Visual Studio
In addition to this manual, the following documentation resources are
available to help you create your vision application.
IMAQ Vision
• IMAQ Vision Concepts Manual—If you are new to machine vision and imaging, read this manual to understand the concepts behind IMAQ Vision.
• IMAQ Vision for Visual Basic Reference—If you need information about IMAQ Vision objects, methods, properties, or events while creating your application, refer to this help file. You can access this file by selecting Start»Programs»National Instruments»Documentation»Vision»IMAQ Vision for Visual Basic Reference.
NI Vision Assistant
• NI Vision Assistant Tutorial—If you need to install NI Vision Assistant and learn the fundamental features of the software, follow the instructions in this tutorial.
• NI Vision Assistant Help—If you need descriptions or step-by-step guidance about how to use any of the functions or features of NI Vision Assistant, refer to this help file.
NI Vision Builder for Automated Inspection
• NI Vision Builder for Automated Inspection Tutorial—If you have little experience with machine vision, and you need information about how to solve common inspection tasks with NI Vision Builder AI, follow the instructions in this tutorial.
• NI Vision Builder for Automated Inspection: Configuration Help—If you need descriptions or step-by-step guidance about how to use any of the NI Vision Builder AI functions to create an automated vision inspection system, refer to this help file.
• NI Vision Builder for Automated Inspection: Inspection Help—If you need information about how to run an automated vision inspection system using NI Vision Builder AI, refer to this help file.
Other Documentation
• NI OCR Training Interface Help—If you need information about the OCR Training Interface, refer to this help file.
• National Instruments IMAQ device user manual—If you need installation instructions and device-specific information, refer to your device user manual.
• Getting Started With Your IMAQ System—If you need instructions for installing the NI-IMAQ software and your IMAQ hardware, connecting your camera, running Measurement & Automation Explorer (MAX) and the NI-IMAQ Diagnostics, selecting a camera file, and acquiring an image, refer to this getting started document.
• NI-IMAQ User Manual—If you need information about how to use NI-IMAQ and IMAQ image acquisition devices to capture images for processing, refer to this manual.
• NI-IMAQ VI or function reference guides—If you need information about the features, functions, and operation of the NI-IMAQ image acquisition VIs or functions, refer to these help files.
• IMAQ Vision Deployment Engine Note to Users—If you need information about how to deploy your custom IMAQ Vision applications on target computers, read this CD insert.
• Example programs—If you want examples of how to create specific applications in Visual Basic, go to Vision\Examples\MSVB. If you want examples of how to create specific applications in Microsoft Visual Basic .NET, go to Vision\Examples\MSVB.NET.
• Application Notes—If you want to know more about advanced IMAQ Vision concepts and applications, refer to the Application Notes located on the National Instruments Web site at ni.com/appnotes.nsf.
• NI Developer Zone (NIDZ)—If you want even more information about developing your vision application, visit the NI Developer Zone at ni.com/zone. The NI Developer Zone contains example programs, tutorials, technical presentations, the Instrument Driver Network, a measurement glossary, an online magazine, a product advisor, and a community area where you can share ideas, questions, and source code with vision developers around the world.
Chapter 1
Introduction to IMAQ Vision
This chapter describes the IMAQ Vision for Visual Basic software and
associated software products, discusses the documentation and examples
available, outlines the IMAQ Vision for Visual Basic architecture, and lists
the steps for creating a machine vision application.
Note For information about the system requirements and installation procedure for
IMAQ Vision for Visual Basic, refer to the Vision Development Module Release Notes that
came with the software.
About IMAQ Vision
IMAQ Vision for Visual Basic is a collection of ActiveX controls that you
can use to develop machine vision and scientific imaging applications. The
Vision Development Module also includes the same imaging functions for
LabWindows™/CVI™ and other C development environments, as well as
VIs for LabVIEW. Vision Assistant, another Vision Development Module
software product, enables you to prototype your application strategy
quickly without having to do any programming. Additionally, NI offers
Vision Builder for Automated Inspection: configurable machine vision
software that you can use to prototype, benchmark, and deploy
applications.
Documentation and Examples
This manual assumes that you are familiar with Visual Basic and can use
ActiveX controls in Visual Basic. The following are good sources of
information about Visual Basic and ActiveX controls:
• msdn.microsoft.com
• Documentation that accompanies Microsoft Visual Studio
In addition to this manual, several documentation resources are available
to help you create a vision application:
• IMAQ Vision Concepts Manual—If you are new to machine vision and imaging, read this manual to understand the concepts behind IMAQ Vision.
• IMAQ Vision for Visual Basic Reference—If you need information about individual methods, properties, or objects, refer to this help file. Access this file from within Visual Basic or from the Start menu by selecting Programs»National Instruments»Vision»Documentation.
• NI-IMAQ User Manual—If you have a National Instruments image acquisition (IMAQ) device and need information about the functions that control the IMAQ device, refer to this portable document format (PDF) file, which was installed at the following location when you installed NI-IMAQ: Start»Programs»National Instruments»Vision»Documentation. You need Adobe Acrobat Reader to open this file.
• Example programs—If you want examples of how to create specific applications in Visual Basic, go to Vision\Examples\MSVB. If you want examples of how to create specific applications in Microsoft Visual Basic .NET, go to Vision\Examples\MSVB.NET.
• CWMachineVision source code—If you want to refer to the source code for the CWMachineVision control, go to Vision\Source\MSVB.
• Application Notes—If you want to know more about advanced IMAQ Vision concepts and applications, refer to the Application Notes located on the National Instruments Web site at ni.com/appnotes.nsf.
• NI Developer Zone (NIDZ)—For additional information about developing a vision application, visit the NI Developer Zone at ni.com/zone. The NI Developer Zone contains example programs, tutorials, technical presentations, the Instrument Driver Network, a measurement glossary, an online magazine, a product advisor, and a community area where you can share ideas, questions, and source code with vision developers around the world.
IMAQ Vision for Visual Basic Organization
IMAQ Vision for Visual Basic consists of five ActiveX controls contained
in three files: cwimaq.ocx, cwmv.ocx, and niocr.ocx.
cwimaq.ocx
cwimaq.ocx contains the following three ActiveX controls and a
collection of ActiveX objects: CWIMAQ, CWIMAQVision, and
CWIMAQViewer. Refer to the ActiveX Objects section for information
about the ActiveX objects.
CWIMAQ Control
Use this control to configure and perform an acquisition from the IMAQ
device. The CWIMAQ control has property pages that allow you to modify
various parameters to configure the acquisition and gather information
about the IMAQ device. Most of the functionality available from the
property pages during design time is also available through the properties
of the CWIMAQ control during run-time. The control has methods that
allow you to perform and control acquisitions, as well.
Note You must have the NI-IMAQ driver software installed on the target system to use the
CWIMAQ control. For information about NI-IMAQ, refer to the NI-IMAQ User Manual
that came with the IMAQ device.
CWIMAQVision Control
Use this control to analyze and process images and their related data. The
CWIMAQVision control provides methods for reading and writing images
to and from files, analyzing images, and performing a variety of image
processing algorithms on images.
CWIMAQViewer Control
Use this control to display images and provide the interface through which
the user will interact with the displayed image. This includes the ability to
zoom and pan images and to draw regions of interest (ROIs) on an image.
The CWIMAQViewer control has property pages that allow you to
configure the viewer’s appearance and behavior during design time as well
as properties that you can configure during run-time. The control has
methods that allow you to attach images to and detach images from the
viewer for display purposes.
Note The CWIMAQViewer control is referred to as a viewer in the remainder of this
document.
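For example, a minimal sketch of attaching an image to a viewer follows. It assumes a CWIMAQViewer control with the default name CWIMAQViewer1 has been placed on the form; the Attach call itself is described above.

Dim inspectionImage As New CWIMAQImage

Private Sub Form_Load()
    ' Attach the image so the viewer redraws whenever an operation modifies it
    CWIMAQViewer1.Attach inspectionImage
End Sub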
niocr.ocx
niocr.ocx provides one ActiveX control and a collection of ActiveX
objects you use in a machine vision application to perform optical character
recognition (OCR).
NIOCR Control
Use this control to perform OCR, which is the process by which the
machine vision software reads text and/or characters in an image. OCR
consists of the following two procedures:
• Training characters
• Reading characters
Training characters is the process by which you teach the machine vision
software the types of characters and/or patterns you want to read in the
image during the reading procedure. You can use IMAQ Vision to train any
number of characters, creating a character set, which is the set of characters
that you later compare with objects during the reading procedure. You store
the character set you create in a character set file. Training might be a
one-time process, or it might be a process you repeat several times, creating
several character sets to broaden the scope of characters you want to detect
in an image.
Reading characters is the process by which the machine vision application
you create analyzes an image to determine if the objects match the
characters you trained. The machine vision application reads characters in
an image using the character set you created during the training procedure.
cwmv.ocx
cwmv.ocx contains one ActiveX control and a collection of ActiveX
objects. Refer to the ActiveX Objects section for more information about
ActiveX objects.
Use this control to perform high-level machine vision tasks, such as
measuring distances. This control is written entirely in Visual Basic using
the methods on the CWIMAQVision and CWIMAQViewer controls. The
source code for the CWMachineVision control is included in the product.
For more information about CWMachineVision methods, refer to
Chapter 5, Performing Machine Vision Tasks.
Tip Refer to the source code of the CWMachineVision control for an example of how to
use the CWIMAQVision methods.
ActiveX Objects
Use the objects to group related input parameters and output parameters to
certain methods, thus reducing the number of parameters that you actually
need to pass to those methods.
Note ActiveX objects in cwimaq.ocx have a CWIMAQ prefix, objects in niocr.ocx
have an NIOCR prefix, and objects in cwmv.ocx have a CWMV prefix.
You must create an ActiveX object before you can use it. You can use the
New keyword in Visual Basic to create these objects. For example, use the
following syntax to create and store an image in a variable named image:
Dim image As New CWIMAQImage
Tip If you intend to develop an application in Visual C++, National Instruments
recommends that you use IMAQ Vision for LabWindows/CVI. However, if you decide
to use IMAQ Vision for Visual Basic to develop applications for Visual C++, you can
create objects using the respective Create methods on the CWIMAQVision
control, such as the CWIMAQVision.CreateCWIMAQImage method.
Figures 1-1 and 1-2 illustrate the steps for creating an application with
IMAQ Vision. Figure 1-1 describes the general steps for designing a
Vision application. The last step in Figure 1-1 is expanded upon in
Figure 1-2. You can use a combination of the items in the last step to create
an IMAQ Vision application. For more information about items in either
diagram, refer to the corresponding chapter listed beside the item.
Note Diagram items enclosed with dashed lines are optional steps.
Figure 1-1. General Steps for Designing a Vision Application
1. Set Up Your Imaging System
2. Calibrate Your Imaging System (see Chapter 6, Calibrating Images)
3. Create an Image
4. Acquire or Read an Image (see Chapter 2, Getting Measurement-Ready Images)
5. Display an Image
6. Attach Calibration Information
7. Analyze an Image
8. Improve an Image
9. Make Measurements or Identify Objects in an Image Using (1) Grayscale or Color Measurements, (2) Particle Analysis, and/or (3) Machine Vision
Figure 1-2. Inspection Steps for Building a Vision Application
• Grayscale or Color Measurements (Chapter 3, Making Grayscale and Color Measurements): Define Regions of Interest; Measure Grayscale Statistics; Measure Color Statistics
• Particle Analysis (Chapter 4, Performing Particle Analysis): Create a Binary Image; Improve a Binary Image; Make Particle Measurements
• Machine Vision (Chapter 5, Performing Machine Vision Tasks): Locate Objects to Inspect; Set Search Areas; Find Measurement Points; Identify Parts Under Inspection (Classify Objects, Read Characters, Read Symbologies); Convert Pixel Coordinates to Real-World Coordinates; Make Measurements; Display Results
Chapter 2
Getting Measurement-Ready Images
This chapter describes how to set up an imaging system, acquire and
display an image, analyze the image, and prepare the image for additional
processing.
Set Up Your Imaging System
Before you acquire, analyze, and process images, you must set up an
imaging system. The manner in which you set up the system depends on the
imaging environment and the type of analysis and processing you need to
do. Your imaging system should produce images with high enough quality
so that you can extract the information you need from the images.
Follow the guidelines below to set up an imaging system.
1. Determine the type of equipment you need based on the space
constraints and the size of the object you need to inspect. For more
information, refer to Chapter 3, System Setup and Calibration, of the
IMAQ Vision Concepts Manual.
a. Make sure the camera sensor is large enough to satisfy the
minimum resolution requirement.
b. Make sure the lens has a depth of field high enough to keep all of
the objects in focus regardless of their distance from the lens.
Also, make sure the lens has a focal length that meets your needs.
c. Make sure the lighting provides enough contrast between the
object under inspection and the background for you to extract the
information you need from the image.
2. Position the camera so that it is parallel to the object under inspection.
If the camera acquires images of the object from an angle, perspective
errors occur. Even though you can compensate for these errors with
software, NI recommends that you use a perpendicular inspection
angle to obtain the fastest and most accurate results.
3. Select an image acquisition device that meets your needs. National
Instruments offers several image acquisition devices, such as analog
color and monochrome devices as well as digital devices. Visit
ni.com/imaq for more information about IMAQ devices.
4. Configure the driver software for the image acquisition device. If
you have a National Instruments image acquisition device, configure
the NI-IMAQ driver software through Measurement & Automation
Explorer (MAX). Open MAX by double-clicking the Measurement &
Automation Explorer icon on the desktop. For more information, refer
to the NI-IMAQ User Manual and the Measurement & Automation
Explorer Help for IMAQ.
Calibrate Your Imaging System
After you set up the imaging system, you may want to calibrate the system.
Calibrate the imaging system to assign real-world coordinates to pixel
coordinates and compensate for perspective and nonlinear errors inherent
in the imaging system.
Perspective errors occur when the camera axis is not perpendicular to the
object under inspection. Nonlinear distortion may occur from aberrations
in the camera lens. Perspective errors and lens aberrations cause images to
appear distorted. This distortion displaces information in an image, but it
does not necessarily destroy the information in the image.
Use simple calibration if you want only to assign real-world coordinates to
pixel coordinates. Use perspective and nonlinear distortion calibration if
you need to compensate for perspective errors and nonlinear lens distortion.
For detailed information about calibration, refer to Chapter 6, Calibrating
Images.
Create an Image
The CWIMAQImage object encapsulates all the information required to
represent an image.
Note CWIMAQImage is referred to as an image in the remainder of this document.
An image can be one of many types, depending on the data it stores.
The following image types are valid:
• 16-bit
• Single-precision floating point
• Complex
• 32-bit RGB
• 32-bit HSL
• 64-bit RGB
When you create an image, it is an 8-bit image by default. You can set the
Type property on the image object to change the image type.
When you create an image, no memory is allocated to store the image
pixels. IMAQ Vision methods automatically allocate the appropriate
amount of memory when the image size is modified. For example, methods
that acquire or resample an image alter the image size, so they allocate the
appropriate memory space for the image pixels.
Most methods belonging to the IMAQ Vision library require an input of one
or more image objects. The number of images a method takes depends on
the image processing function and the type of image you want to use.
IMAQ Vision methods that analyze the image but do not modify the image
contents require the input of one source image. Methods that process the
contents of the image require one or more source images and a destination
image. Exceptions to the preceding statements are methods that take a mask
image as input.
The presence of a MaskImage parameter indicates that the processing or
analysis is dependent on the contents of the mask image. The only pixels
in the source image that are processed are those whose corresponding
pixels in the mask image are non-zero. If a mask image pixel is 0, the
corresponding source image pixel is not processed or analyzed. The mask
image must be an 8-bit image.
If you want to apply a processing or analysis method to the entire image,
do not supply the optional mask image. Using the same image for both the
source image and mask image also has the same effect as not using the
mask image, except in this case the source image must be an 8-bit image.
Most operations between two images require that the images have the same
type and size. However, arithmetic operations work between two different
types of images. For example, an arithmetic operation between an 8-bit
image and 16-bit image results in a 16-bit image.
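As a sketch of creating an image and changing its type from the 8-bit default, consider the following code. The image type constant shown here is an assumption; check the image type enumeration in the reference help file for the exact name.

Dim grayImage As New CWIMAQImage

Private Sub Form_Load()
    ' Images are 8-bit by default; switch to a 16-bit type before use.
    ' The constant name is an assumption; see the image type enumeration.
    grayImage.Type = cwimaqImageTypeI16
End Sub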
Acquire or Read an Image
After you create an image, you can acquire an image into the imaging
system in one of the following three ways:
• Acquire an image with a camera through the image acquisition device.
• Load an image from a file stored on the computer.
• Convert the data stored in a 2D array to an image.
Methods that acquire images, load images from file, or convert data
from a 2D array automatically allocate the memory space required to
accommodate the image data.
Acquiring an Image
Use the CWIMAQ control to acquire images with a National Instruments
IMAQ device. You can use IMAQ Vision for Visual Basic to perform
one-shot and continuous acquisitions. You can choose the acquisition type
during design time by setting the value of the Acquisition Type combo box
to One-Shot or Continuous. The Acquisition Type combo box is located
on the Acquisition property page of the CWIMAQ control. You can set the
value at run-time by setting the CWIMAQ.AcquisitionType property to
cwimaqAcquisitionOneShot or cwimaqAcquisitionContinuous.
One-Shot Acquisition
Use a one-shot acquisition to start an acquisition, perform the acquisition,
and stop the acquisition using a single method. The number of frames
acquired is equal to the number of images in the images collection. Use the
CWIMAQ.AcquireImage method to perform this operation synchronously.
Use the CWIMAQ.Start method to perform this operation asynchronously.
For information about synchronous and asynchronous acquisitions, refer to
the NI-IMAQ User Manual.
If you want to acquire a single field or frame into a buffer, set the image
count to 1. This operation is also referred to as a snap. Use a snap for
low-speed or single capture applications. The following code illustrates a
synchronous snap:
Private Sub Start_Click()
    ' Acquire a single image synchronously (snap)
    CWIMAQ1.AcquisitionType = cwimaqAcquisitionOneShot
    CWIMAQ1.AcquireImage
End Sub
If you want to acquire multiple frames, set the image count to the number
of frames you want to acquire. This operation is called a sequence. Use a
sequence for applications that process multiple images. The following code
illustrates an asynchronous sequence, where numberOfImages is the
number of images that you want to process:
Private Sub Start_Click()
    ' Acquire a sequence of images asynchronously
    CWIMAQ1.AcquisitionType = cwimaqAcquisitionOneShot
    CWIMAQ1.Images.RemoveAll
    CWIMAQ1.Images.Add numberOfImages
    CWIMAQ1.Start
End Sub
Continuous Acquisition
Use a continuous acquisition to start an acquisition and continuously
acquire frames into the image buffers, and then explicitly stop the
acquisition. Use the CWIMAQ.Start method to start the acquisition. Use
the CWIMAQ.Stop method to stop the acquisition. If you use a single buffer
for the acquisition, this operation is called a grab. The following code
illustrates a grab:
Private Sub Start_Click()
    ' Continuously acquire into a single buffer (grab)
    CWIMAQ1.AcquisitionType = _
        cwimaqAcquisitionContinuous
    CWIMAQ1.Start
End Sub

Private Sub Stop_Click()
    ' Explicitly stop the acquisition
    CWIMAQ1.Stop
End Sub
A ring operation uses multiple buffers for the acquisition. Use a ring for
high-speed applications that require processing on every image. The
following code illustrates a ring, where numberOfImages is the number of
images that you want to process:
Private Sub Start_Click()
    ' Continuously acquire into multiple buffers (ring)
    CWIMAQ1.AcquisitionType = _
        cwimaqAcquisitionContinuous
    CWIMAQ1.Images.RemoveAll
    CWIMAQ1.Images.Add numberOfImages
    CWIMAQ1.Start
End Sub

Private Sub Stop_Click()
    ' Explicitly stop the acquisition
    CWIMAQ1.Stop
End Sub
Reading a File
Use the CWIMAQVision.ReadImage method to open and read data from
a file stored on the computer into the image reference. You can read from
image files stored in several standard formats, such as BMP, TIFF, JPEG,
PNG, and AIPD. In all cases, the software automatically converts the pixels
it reads into the type of image you pass in.
Use the CWIMAQVision.ReadImageAndVisionInfo method to open an
image file containing additional information, such as calibration
information, template information for pattern matching, or overlay
information. For more information about pattern matching templates
and overlays, refer to Chapter 5, Performing Machine Vision Tasks.
You also can use the CWIMAQVision.GetFileInformation method
to retrieve image properties—image size, pixel depth, recommended image
type, and calibration units—without actually reading all the image data.
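A minimal sketch of reading a file into the viewer's attached image follows. It assumes a CWIMAQVision control named CWIMAQVision1 on the form, the file path is a placeholder, and the parameter order (image first, then file name) is an assumption based on the description above.

Private Sub LoadImage_Click()
    ' Read an image file into the image attached to the viewer.
    ' Path is a placeholder; parameter order is an assumption.
    CWIMAQVision1.ReadImage CWIMAQViewer1.Image, "C:\Images\part.png"
End Sub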
Converting an Array to an Image
Use the CWIMAQImage.ArrayToImage method to convert an array to an
image. You also can use the CWIMAQImage.ImageToArray method to
convert an image to an array.
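For example, a sketch that copies a 2D Visual Basic array into the viewer image follows. The array size and element type are illustrative, and the exact ArrayToImage signature is an assumption based on the description above.

Private Sub Convert_Click()
    ' 2D array of pixel values (example size; element type may vary)
    Dim pixels(239, 319) As Integer

    ' Copy the array data into the image attached to the viewer.
    ' The exact ArrayToImage signature is an assumption; see the reference help.
    CWIMAQViewer1.Image.ArrayToImage pixels
End Sub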
Display an Image
Display an image using the CWIMAQViewer control. Use
CWIMAQViewer.Attach to attach the image you want the viewer
to display. When you attach an image to a viewer, the image automatically
updates the viewer whenever an operation modifies the contents of the
image. You can access the image attached to the viewer using the
CWIMAQViewer.Image property. Before you attach an image to the
viewer, the viewer already has an image attached by default. Therefore, the
viewer has an image attached to it at all times. You can use the attached
image as either a source image, destination image, or both using the
CWIMAQViewer.Image property.
You can use the CWIMAQViewer.Palette property to access the
CWIMAQPalette object associated with the viewer. Use the
CWIMAQPalette object to programmatically apply a color palette to
the viewer. You can set the CWIMAQPalette.Type property to apply
predefined color palettes. For example, if you need to display a binary
image—an image that contains particle regions with pixel values of 1
and a background region with pixel values of 0—set the Type property to
cwimaqPaletteBinary. For more information about color palettes, refer
to Chapter 2, Display, of the IMAQ Vision Concepts Manual.
You also can set a default palette during design time using the Menu
property page. Users can change the color palette during run time by using
the right-click menu on the viewer.
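The following sketch applies the predefined binary palette at run time, as described above. The cwimaqPaletteBinary constant comes from the text; the viewer control name is the assumed default.

Private Sub ShowBinary_Click()
    ' Apply the predefined binary palette so particle pixels are visible
    CWIMAQViewer1.Palette.Type = cwimaqPaletteBinary
End Sub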
Attach Calibration Information
If you want to attach the calibration information of the current setup to an
image, use CWIMAQVision.SetCalibrationInformation. This method takes
in a source image that contains the calibration information and a
destination image that you want to calibrate. The output image is the
inspection image with the calibration information attached to it. For
detailed information about calibration, refer to Chapter 6, Calibrating
Images.
Note Because calibration information is part of the image, it is propagated throughout
the processing and analysis of the image. Methods that modify the image size,
such as geometrical transforms, void the calibration information. Use
CWIMAQVision.WriteImageAndVisionInfo to save the image and all of the
attached calibration information to a file.
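A hedged sketch of attaching calibration information follows. The file path is a placeholder, and the parameter orders of ReadImageAndVisionInfo and SetCalibrationInformation (calibrated source image first, inspection image second) are assumptions based on the description above.

Private Sub Calibrate_Click()
    Dim calibrationImage As New CWIMAQImage

    ' Load an image that already carries calibration information.
    ' Path is a placeholder; parameter order is an assumption.
    CWIMAQVision1.ReadImageAndVisionInfo calibrationImage, "C:\Images\grid.png"

    ' Attach that calibration information to the inspection image.
    CWIMAQVision1.SetCalibrationInformation calibrationImage, CWIMAQViewer1.Image
End Sub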
Analyze an Image
When you acquire and display an image, you may want to analyze the
contents of the image for the following reasons:
• To determine if the image quality is high enough for the inspection task.
• To obtain the values of parameters that you want to use in processing methods during the inspection process.
The histogram and line profile tools can help you analyze the quality of the
images.
Use CWIMAQVision.Histogram2 to analyze the overall grayscale
distribution in the image. Use the histogram of the image to analyze
two important criteria that define the quality of an image—saturation and
contrast. If the image does not have enough light, the majority of the pixels
will have low intensity values, which appear as a concentration of peaks on
the left side of the histogram. If the image has too much light, the majority
of the pixels will have high intensity values, which appear as
a concentration of peaks on the right side of the histogram. If the image has
an appropriate amount of contrast, the histogram will have distinct regions
of pixel concentrations. Use the histogram information to decide if the
image quality is high enough to separate objects of interest from the
background.
If the image quality meets your needs, use the histogram to determine the
range of pixel values that correspond to objects in the image. You can use
this range in processing methods, such as determining a threshold range
during particle analysis.
If the image quality does not meet your needs, try to improve the imaging
conditions. You may need to re-evaluate and modify each component of the imaging setup: lighting
equipment and setup, lens tuning, camera operation mode, and acquisition
board parameters. If you reach the best possible conditions with the setup
but the image quality still does not meet your needs, try to improve the
image quality using the image processing techniques described in the
Improve an Image section of this chapter.
Use CWIMAQVision.LineProfile2 to get the pixel distribution along a
line in the image, or use CWIMAQVision.RegionsProfile to get the
pixel distribution along a one-dimensional path in the image. By looking at
the pixel distribution, you can determine if the image quality is high enough
to provide you with sharp edges at object boundaries. Also, you can
determine if the image is noisy, and identify the characteristics of the noise.
If the image quality meets your needs, use the pixel distribution
information to determine some parameters of the inspection methods you
want to use. For example, use the information from the line profile to
determine the strength of the edge at the boundary of an object. You can
input this information into CWIMAQVision.FindEdges2 to find the edges
of objects along the line.
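As a sketch of the analysis step, the following code computes a histogram of the displayed image. The report object name (CWIMAQHistogramReport) and the Histogram2 parameter list are assumptions, so check the reference help file for the exact signature.

Private Sub Analyze_Click()
    ' Report object name and parameter order are assumptions.
    Dim report As New CWIMAQHistogramReport
    CWIMAQVision1.Histogram2 CWIMAQViewer1.Image, report
End Sub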
Improve an Image
Using the information you gathered from analyzing the image, you may
want to improve the quality of the image for inspection. You can improve
the image with lookup tables, filters, grayscale morphology, and Fast
Fourier transforms (FFT).
Lookup Tables
Apply lookup table (LUT) transformations to highlight image details in
areas containing significant information at the expense of other areas.
A LUT transformation converts input grayscale values in the source image
into other grayscale values in the transformed image. IMAQ Vision
provides four methods that directly or indirectly apply lookup tables to
images:
• CWIMAQVision.MathLookup—Converts the pixel values of an
image by replacing them with values from a predefined lookup table.
IMAQ Vision has seven predefined lookup tables based on
mathematical transformations. For more information about these
lookup tables, refer to Chapter 5, Image Processing, in the IMAQ
Vision Concepts Manual.
• CWIMAQVision.UserLookup—Converts the pixel values of an
image by replacing them with values from a user-defined lookup table.
• CWIMAQVision.Equalize2—Distributes the grayscale values
evenly within a given grayscale range. Use this method to increase the
contrast in images containing few grayscale values.
• CWIMAQVision.Inverse—Inverts the pixel intensities of an image
to compute the negative of the image. For example, use this method
before applying an automatic threshold to the image if the background
pixels are brighter than the object pixels.
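For example, a minimal sketch that computes the negative of the displayed image before automatic thresholding follows. The (source, destination) parameter order is an assumption.

Private Sub Invert_Click()
    Dim inverted As New CWIMAQImage

    ' Compute the negative of the displayed image.
    ' The (source, destination) parameter order is an assumption.
    CWIMAQVision1.Inverse CWIMAQViewer1.Image, inverted
End Sub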
Filters
Filter the image when you need to improve the sharpness of transitions in
the image or increase the overall signal-to-noise ratio of the image. You can
choose either a lowpass or highpass filter, depending on your needs.
Lowpass filters remove insignificant details by smoothing the image,
removing sharp details, and smoothing the edges between the objects
and the background. You can define your own lowpass filter with
CWIMAQVision.Convolute or CWIMAQVision.NthOrder.
Highpass filters emphasize details, such as edges, object boundaries,
or cracks. These details represent sharp transitions in intensity value.
You can define your own highpass filter with CWIMAQVision.Convolute
or CWIMAQVision.NthOrder, or you can use a predefined highpass
filter with CWIMAQVision.EdgeFilter or
CWIMAQVision.CannyEdgeFilter. CWIMAQVision.EdgeFilter
allows you to find edges in an image using predefined edge detection
kernels, such as the Sobel, Prewitt, and Roberts kernels.
Convolution Filter
CWIMAQVision.Convolute allows you to use a predefined set of
lowpass and highpass filters. Each filter is defined by a kernel of
coefficients. Use the CWIMAQKernel object to define the filter. Use
CWIMAQKernel.LoadKernel to load a predefined kernel into the
object. If the predefined kernels do not meet your needs, use the
CWIMAQKernel.SetSize method to set the size of the kernel and the
CWIMAQKernel.Element property to set the data in the kernel.
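The following sketch builds a 3 x 3 user-defined smoothing kernel and applies it with Convolute. The SetSize arguments, the Element indexing, and the Convolute parameter order are all assumptions based on the description above.

Private Sub Filter_Click()
    Dim kernel As New CWIMAQKernel
    Dim filtered As New CWIMAQImage
    Dim i As Integer, j As Integer

    ' Define a 3 x 3 lowpass kernel of equal coefficients.
    ' SetSize arguments and Element indexing are assumptions.
    kernel.SetSize 3, 3
    For i = 0 To 2
        For j = 0 To 2
            kernel.Element(i, j) = 1
        Next j
    Next i

    ' Convolve the displayed image with the kernel.
    ' The (source, destination, kernel) parameter order is an assumption.
    CWIMAQVision1.Convolute CWIMAQViewer1.Image, filtered, kernel
End Sub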
Nth Order Filter
CWIMAQVision.NthOrder allows you to define a lowpass or highpass
filter depending on the value of N that you choose. One specific Nth order
filter, the median filter, removes speckle noise, which appears as small
black and white dots. For more information about Nth order filters, refer to
Chapter 5, Image Processing, of the IMAQ Vision Concepts Manual.
Grayscale Morphology
Perform grayscale morphology when you want to filter grayscale
features of an image. Grayscale morphology helps you remove or
enhance isolated features, such as bright pixels on a dark background.
Use these transformations on a grayscale image to enhance non-distinct
features before thresholding the image in preparation for particle analysis.
Grayscale morphological transformations, which include erosions and
dilations, compare a pixel to those pixels that surround it. An erosion keeps
the smallest pixel values. A dilation keeps the largest pixel values.
For more information about grayscale morphology transformations, refer
to Chapter 5, Image Processing, of the IMAQ Vision Concepts Manual.
Use CWIMAQVision.GrayMorphology to perform one of the following
seven transformations:
• Erosion—Reduces the brightness of pixels that are surrounded by neighbors with a lower intensity.
• Dilation—Increases the brightness of pixels surrounded by neighbors with a higher intensity. A dilation has the opposite effect of an erosion.
• Opening—Removes bright pixels isolated in dark regions and smooths boundaries.
• Closing—Removes dark pixels isolated in bright regions and smooths boundaries.
• Proper-opening—Removes bright pixels isolated in dark regions and smooths the inner contours of particles.
• Proper-closing—Removes dark pixels isolated in bright regions and smooths the inner contours of particles.
• Auto-median—Generates simpler particles that have fewer details.
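A hedged sketch of applying a grayscale erosion follows. The transformation constant name and the parameter order are assumptions, so check the GrayMorphology entry in the reference help file.

Private Sub Morphology_Click()
    Dim result As New CWIMAQImage

    ' Apply a grayscale erosion to the displayed image.
    ' Constant name and parameter order are assumptions.
    CWIMAQVision1.GrayMorphology CWIMAQViewer1.Image, result, cwimaqGrayMorphoErosion
End Sub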
FFT
Use the Fast Fourier Transform (FFT) to convert an image into its
frequency domain. In an image, details and sharp edges are associated
with mid to high spatial frequencies because they introduce significant
gray-level variations over short distances. Gradually varying patterns are
associated with low spatial frequencies.
An image can have extraneous noise, such as periodic stripes, introduced
during the digitization process. In the frequency domain, the periodic
pattern is reduced to a limited set of high spatial frequencies. Also, the
imaging setup may produce non-uniform lighting of the field of view,
which produces an image with a light drift superimposed on the
information you want to analyze. In the frequency domain, the light drift
appears as a limited set of low frequencies around the average intensity of
the image, which is the DC component.
You can use algorithms working in the frequency domain to isolate and
remove these unwanted frequencies from the image. Complete the
following steps to obtain an image in which the unwanted pattern has
disappeared but the overall features remain:
1. Use CWIMAQVision.FFT to convert an image from the spatial domain
to the frequency domain. This method computes the FFT of the image
and results in a complex image representing the frequency information
of the image.
2. Improve the image in the frequency domain with a lowpass or highpass
frequency filter. Specify which type of filter to use with
CWIMAQVision.CxAttenuate or CWIMAQVision.CxTruncate.
Lowpass filters smooth noise, details, textures, and sharp edges in an
image. Highpass filters emphasize details, textures, and sharp edges in
images, but they also emphasize noise.
• Lowpass attenuation—The amount of attenuation is directly proportional to the frequency information. At low frequencies, there is little attenuation. As the frequencies increase, the attenuation increases. This operation preserves all of the zero frequency information. Zero frequency information corresponds to the DC component of the image or the average intensity of the image in the spatial domain.
• Highpass attenuation—The amount of attenuation is inversely proportional to the frequency information. At high frequencies, there is little attenuation. As the frequencies decrease, the attenuation increases. The zero frequency component is removed entirely.
• Lowpass truncation—Specify a frequency. The frequency components above the ideal cutoff frequency are removed, and the frequencies below it remain unaltered.
• Highpass truncation—Specify a frequency. The frequency components above the ideal cutoff frequency remain unaltered, and the frequencies below it are removed.
3. To transform the image back to the spatial domain, use
CWIMAQVision.InverseFFT.
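The steps above can be sketched as follows. The FFT and InverseFFT parameter orders are assumptions, and the frequency-filtering call (CxAttenuate or CxTruncate) is left as a placeholder comment because its arguments are not described here.

Private Sub FrequencyFilter_Click()
    Dim complexImage As New CWIMAQImage
    Dim restored As New CWIMAQImage

    ' Step 1: transform the displayed image into the frequency domain.
    ' Parameter order is an assumption.
    CWIMAQVision1.FFT CWIMAQViewer1.Image, complexImage

    ' Step 2: apply CWIMAQVision1.CxAttenuate or CWIMAQVision1.CxTruncate
    ' to complexImage here (arguments omitted; see the reference help file).

    ' Step 3: transform the filtered image back to the spatial domain.
    CWIMAQVision1.InverseFFT complexImage, restored
End Sub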
Complex Image Operations
CWIMAQVision.ReplaceComplexPlane and
CWIMAQVision.ExtractComplexPlane allow you to access, process,
and update independently the magnitude, phase, real, and imaginary
planes of a complex image. You can also convert a complex image to
an array and back with CWIMAQImage.ImageToArray and
CWIMAQImage.ArrayToImage.
Chapter 3
Making Grayscale and Color Measurements
This chapter describes how to take measurements from grayscale and color
images. You can make inspection decisions based on image statistics, such
as the mean intensity level in a region. Based on the image statistics, you
can perform many machine vision inspection tasks on grayscale or color
images, such as detecting the presence or absence of components, detecting
flaws in parts, and comparing a color component with a reference.
Figure 3-1 illustrates the basic steps involved in making grayscale and
color measurements.
Figure 3-1. Steps to Taking Grayscale and Color Measurements
1. Define Regions of Interest
2. Measure Grayscale Statistics and/or Measure Color Statistics
Define Regions of Interest
An ROI is an area of an image in which you want to focus the image
analysis. You can define an ROI interactively, programmatically, or with
an image mask.
Defining Regions Interactively
You can interactively define an ROI in a viewer that displays an image. Use
the tools from the right-click menu to interactively define and manipulate
the ROIs. Table 3-1 describes each of the tools and the manner in which
you use them.
Table 3-1. Tools Palette Functions

None
    Disable the tools.
Selection Tool
    Select an ROI in the image and adjust the position of its control points and contours.
    Action: Click the appropriate ROI or control points.
Point
    Select a pixel in the image.
    Action: Click the appropriate position.
Line
    Draw a line in the image.
    Action: Click the initial position and click again on the final position.
Rectangle
    Draw a rectangle or square in the image.
    Action: Click one corner and drag to the opposite corner.
Rotated Rectangle
    Draw a rotated rectangle in the image.
    Action: Click one corner and drag to the opposite corner to create the rectangle. Then, click on the lines inside the rectangle and drag to adjust the rotation angle.
Oval
    Draw an oval or circle in the image.
    Action: Click the center position and drag to the appropriate size.
Annulus
    Draw an annulus in the image.
    Action: Click the center position and drag to the appropriate size. Adjust the inner and outer radii, and adjust the start and end angle.
Broken Line
    Draw a broken line in the image.
    Action: Click to place a new vertex and double-click to complete the ROI element.
Polygon
    Draw a polygon in the image.
    Action: Click to place a new vertex and double-click to complete the ROI element.
Freeline
    Draw a freehand line in the image.
    Action: Click the initial position, drag to the appropriate shape, and release the mouse button to complete the shape.
Free Region
    Draw a freehand region in the image.
    Action: Click the initial position, drag to the appropriate shape, and release the mouse button to complete the shape.
Zoom
    Zoom in or zoom out in an image.
    Action: Click the image to zoom in. Hold down <Shift> and click to zoom out.
Pan
    Pan around an image.
    Action: Click an initial position, drag to the appropriate position, and release the mouse button to complete the pan.
Hold down <Shift> when drawing an ROI if you want to constrain the ROI
to the horizontal, vertical, or diagonal axes, when possible. Use the
selection tool to position an ROI by its control points or vertices. ROIs are
context sensitive, meaning that the cursor actions differ depending on the
ROI with which you interact. For example, if you move the cursor over the
side of a rectangle, the cursor changes to indicate that you can click and
drag that side to resize the rectangle. To draw more than one ROI
in a window, hold down <Ctrl> while drawing additional ROIs. You also
can use CWIMAQViewer.MaxContours to set the maximum number of
contours the viewer can have in its ROI.
In the status bar of the viewer, you can display tool information about the
characteristics of ROIs you draw, as shown in Figure 3-2. Check the Show
Tool Info option during design time, or set the CWIMAQViewer.ShowToolInfo
property to True during run time, to display tool information. You also can show or hide the tool
information from the right-click menu.
1 Anchoring Coordinates of a Region of Interest
2 Size of the Image
3 Zoom Factor
4 Image Type Indicator (8-bit, 16-bit, Float, RGB32, RGBU64, HSL, Complex)
5 Pixel Intensity
6 Coordinates of the Mouse
7 Size of an Active Region of Interest
8 Length and Horizontal Angle of a Line Region
Figure 3-2. Tools Information
During design time, use the Menu property page to select which tools appear in the right-click menu. You also can designate a default tool from this property page. During run time, set the CWIMAQViewer.MenuItems property to select the tools to display, and set CWIMAQViewer.Tool to select the default tool.
Defining Regions Programmatically
You can define ROIs programmatically using the CWIMAQRegions
collection. In IMAQ Vision, shapes are represented by shape objects.
For example, CWIMAQPoint represents a point, and CWIMAQLine
represents a line. Use the methods listed in Table 3-2 to add shapes to the regions collection.
Table 3-2. Methods that Add Shapes to Regions

AddPoint: Adds a point to the ROI.
AddLine: Adds a line to the ROI.
AddRectangle: Adds a rectangle to the ROI.
AddRotatedRectangle: Adds a rotated rectangle to the ROI.
AddOval: Adds an oval to the ROI.
AddAnnulus: Adds an annulus to the ROI.
AddBrokenLine: Adds a broken line to the ROI.
AddPolygon: Adds a polygon to the ROI.
AddFreeline: Adds a free line to the ROI.
AddFreeregion: Adds a free region to the ROI.
AddRegion: Adds a region object to the ROI.
Use the CWIMAQRegions.CopyTo method to copy all the data from one CWIMAQRegions object to another.
You can define the regions on a viewer and access the regions using the CWIMAQViewer.Regions property.
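The following lines sketch one way to add a rectangle to the regions of a viewer programmatically. The sketch assumes that CWIMAQRectangle can be created with New and exposes Left and Top properties in addition to the Width and Height properties used in the example later in this chapter, and that AddRectangle accepts the shape object; check the reference documentation for the exact signatures.

Dim MyRectangle As New CWIMAQRectangle
'Describe the rectangle in pixel coordinates (Left and Top are assumed
'property names).
MyRectangle.Left = 50
MyRectangle.Top = 25
MyRectangle.Width = 200
MyRectangle.Height = 100
'Add the shape to the ROI of the viewer (argument list assumed).
CWIMAQViewer1.Regions.AddRectangle MyRectangle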
The individual CWIMAQRegion objects provide access to the shapes in the
collection. Each region has one shape object associated with it. Use the
CWIMAQRegion.Shape property to determine what type of shape the
CWIMAQRegion contains. When you know the type of shape that the
region contains, you can set the region into a shape variable and use that
variable to manipulate the shape properties. For example, the following
code resizes a rectangle selected on the viewer:
Dim MyRectangle As CWIMAQRectangle
Set MyRectangle = CWIMAQViewer1.Regions(1)
MyRectangle.Width = 100
MyRectangle.Height = 100
You also can pass CWIMAQRegion objects to any IMAQ Vision method
that takes a shape as a parameter. However, if the CWIMAQRegion does
not contain the type of shape object that the method requires, a type
mismatch error results.
Defining Regions with Masks
You can define regions to process with image masks. An image mask is an 8-bit image of the same size as or smaller than the image you want to process. Pixels in the mask image determine whether the corresponding pixels in the source image are processed. If a pixel in the image mask has a value other than 0, the corresponding pixel in the source image is processed. If a pixel in the image mask has a value of 0, the corresponding pixel in the source image is left unchanged.
You can use a mask to define particles in a grayscale image when you need
to make intensity measurements on those particles. First, threshold the
image to make a new binary image. For more information about binary
images, refer to Chapter 4, Performing Particle Analysis. You can input the
binary image or a labeled version of the binary image as a mask image to
the intensity measurement method. If you want to make color comparisons,
convert the binary image into a CWIMAQRegions collection using
CWIMAQVision.MaskToRegions.
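As a minimal sketch of that last step, the lines below convert a binary mask into a regions collection. CWIMAQVision1 is assumed to be a CWIMAQVision control on the form, BinaryImage is a hypothetical image object that already holds the thresholded mask, and the (mask, destination regions) argument order is an assumption.

Dim MaskRegions As New CWIMAQRegions
'Convert the binary mask into regions (argument order assumed).
CWIMAQVision1.MaskToRegions BinaryImage, MaskRegions
'MaskRegions can now be passed to the color comparison methods.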
Measure Grayscale Statistics
You can measure grayscale statistics in images using light meters or
quantitative analysis methods. You can obtain the center of energy for an
image with the centroid method.
Use CWMachineVision.LightMeterPoint to measure the light intensity at a point in the image. Use CWMachineVision.LightMeterLine to get the pixel value statistics
along a line in the image, such as mean intensity, standard deviation,
minimum intensity, and maximum intensity. Use CWMachineVision.LightMeterRectangle to get the pixel value statistics within a rectangular region in an image.
Use CWIMAQVision.Quantify to obtain the following statistics about the entire image or individual regions in the image: mean intensity, standard deviation, minimum intensity, maximum intensity, area, and the percentage of the image that you analyzed. You can specify regions in the image with a labeled image mask. A labeled image mask is a binary image that has been processed so that each region in the image mask has a unique intensity value. Use CWIMAQVision.Label2 to label the image mask.
Use CWIMAQVision.Centroid2 to compute the energy center of the image, or of a region within an image.
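The following sketch shows the labeled-mask workflow described above. The argument lists of Label2 and Quantify are assumptions made for illustration, MaskImage and SourceImage are hypothetical image objects, the CWIMAQImage class name used for the destination is also an assumption, and the third argument of Quantify is only a placeholder for whatever report parameter the method expects; adapt the calls to the signatures in your type library.

Dim LabeledMask As New CWIMAQImage      'assumed creatable image class
Dim QuantifyReport As Variant           'placeholder for the statistics
'Give each region of the mask a unique intensity value (arguments assumed).
CWIMAQVision1.Label2 MaskImage, LabeledMask
'Compute the statistics for each labeled region (arguments assumed).
CWIMAQVision1.Quantify SourceImage, LabeledMask, QuantifyReport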
Measure Color Statistics
Most image processing and analysis methods apply to 8-bit and 16-bit
images. However, you can analyze and process individual components of a
color image.
Using CWIMAQVision.ExtractColorPlanes, you can break down a color image into various sets of primary components, such as RGB (Red, Green, and Blue), HSI (Hue, Saturation, and Intensity), HSL (Hue, Saturation, and Luminance), or HSV (Hue, Saturation, and Value). You can then process each component like any other grayscale image. Use CWIMAQVision.ExtractSingleColorPlane to extract a single color plane from an image. Use CWIMAQVision.ReplaceColorPlanes to reassemble a color image from a set of three 8-bit or 16-bit images, where each image becomes one of the three primary components. Figures 3-3 and 3-4 illustrate how a color image breaks down into its three components.
Figure 3-3. Primary Components of a 32-bit Color Image
Figure 3-4. Primary Components of a 64-bit Color Image
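As a minimal illustration of CWIMAQVision.ExtractSingleColorPlane described above, the line below pulls one 8-bit plane out of a color image so that it can be processed like any other grayscale image. The argument order and the plane-selector constant name (cwimaqRedPlane) are assumptions, and ColorImage and PlaneImage are hypothetical image objects; consult the reference for the actual parameters.

'Extract the red plane of ColorImage into the 8-bit image PlaneImage
'(constant name and argument order assumed).
CWIMAQVision1.ExtractSingleColorPlane cwimaqRedPlane, ColorImage, PlaneImage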
A color pixel encoded as a Long value can be decomposed into its
individual components using CWIMAQVision.IntegerToColorValue.
You can convert a pixel value represented in any color model into
its components in any other color model using
CWIMAQVision.ColorValueConversion2.
Comparing Colors
You can use the color matching capability of IMAQ Vision to compare or
evaluate the color content of an image or regions in an image.
Complete the following steps to compare colors using color matching:
1. Select an image containing the color information that you want to use
as a reference. The color information can consist of a single color or
multiple dissimilar colors, such as red and blue.
2. Use the entire image or regions in the image to learn the color
information using CWIMAQVision.LearnColor, which stores the
results of the operation in a CWIMAQColorInformation object that
you supply as a parameter. The color information object has a color
spectrum that contains a compact description of the color information
that you learned. Refer to Chapter 14, Color Inspection, of the
IMAQ Vision Concepts Manual for more information. Use the
CWIMAQColorInformation object to represent the learned color
information for all subsequent matching operations.
3. Define an entire image, a region, or multiple regions in an image as the
inspection or comparison area.
4. Use CWIMAQVision.MatchColor to compare the learned color information to the color information in the inspection regions, as shown in the sketch after this list. This method returns an array of scores that indicates how close the matches are to the learned color information.
5. Use the color matching score as a measure of similarity between the
reference color information and the color information in the image
regions being compared.
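The lines below sketch steps 2 and 4 of this procedure. The argument lists of LearnColor and MatchColor are assumptions made for illustration, CWIMAQVision1 is assumed to be a CWIMAQVision control, and the two viewers, with their Image and Regions properties, are assumed to hold the reference and inspection images with their regions already drawn.

Dim ColorInfo As New CWIMAQColorInformation
Dim Scores As Variant
'Learn the reference color information from the regions on the first
'viewer (argument order assumed).
CWIMAQVision1.LearnColor CWIMAQViewer1.Image, CWIMAQViewer1.Regions, ColorInfo
'Compare the learned information with the inspection regions on the
'second viewer; MatchColor returns an array of match scores
'(argument order assumed).
Scores = CWIMAQVision1.MatchColor(CWIMAQViewer2.Image, CWIMAQViewer2.Regions, ColorInfo)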
Learning Color Information
When learning color information, choose the color information carefully:
• Specify an image or regions in an image that contain the color or color set that you want to learn.
• Specify the granularity required to represent the color information.
• Choose colors that you want to ignore during matching.
Specifying the Color Information to Learn
Because color matching only uses color information to measure similarity,
the image or regions in the image representing the object should contain
only the significant colors that represent the object, as shown in
Figure 3-5a. Figure 3-5b illustrates an unacceptable region containing
background colors.
Figure 3-5. Template Color Information
The following sections specify when to learn the color information
associated with an entire image, a region in an image, or multiple regions
in an image.
Using the Entire Image
You can use an entire image to learn the color spectrum that represents the
entire color distribution of the image. In a fabric identification application,
for example, an entire image can specify the color information associated
with a certain fabric type, as shown in Figure 3-6.
Figure 3-6. Using the Entire Image to Learn Color Distribution
Using a Region in the Image
You can select a region in the image to provide the color information for
comparison. A region is helpful for pulling out the useful color information
in an image. Figure 3-7 shows an example of using a region that contains
the color information that is important for the application.
Figure 3-7. Using a Single Region to Learn Color Distribution
Using Multiple Regions in the Image
The observed color of an object depends on how light interacts with the surface of that object. The color of a surface depends on the direction of illumination and the direction from which the surface is observed. Two identical objects may have different appearances because of a difference in positioning or a change in the lighting conditions.
Figure 3-8 shows how light reflects differently off of the 3D surfaces of the
fuses, resulting in slightly different colors for identical fuses. To view the
color differences, compare the 3-amp fuse in the upper row with the 3-amp
fuse in the lower row.
If you learn the color spectrum by drawing a region of interest around the
3-amp fuse in the upper row, and then do a color matching for the 3-amp
fuse in the upper row, you get a very high match score for it—close to 1000.
The match score for the 3-amp fuse in the lower row is low—around 500.
This problem could cause a mismatch for the color matching in a fuse box
inspection process.
The color learning functionality of IMAQ Vision uses a clustering process
to find the representative colors from the color information specified by one
or multiple regions in the image. To create a representative color spectrum
for all 3-amp fuses in the learning phase, draw a Region around the 3-amp
fuse in the upper row, hold down <Ctrl>, and draw another Region around
the 3 amp fuse in the lower row. The new color spectrum represents 3-amp
fuses much better and results in high match scores, around 800, for both fuses. You can use an unlimited number of samples to learn the representative color spectrum for a specified template.
1 Regions used to learn color information
Figure 3-8. Using Multiple Regions to Learn Color Distribution
Choosing a Color Representation Sensitivity
When you learn a color, you need to specify the granularity required to represent the color information. An image that contains a few colors that are well separated in the color space requires a lower granularity to describe the color than an image that contains colors that are close to one another in the color space. Use the ColorSensitivity parameter of CWIMAQVision.LearnColor to specify the granularity you want to use to represent the colors. For more information about color sensitivity,
refer to the Color Sensitivity section of Chapter 5, Performing Machine
Vision Tasks.
Ignoring Learned Colors
You can ignore certain color components in color matching by setting the
corresponding component in the input color spectrum array to –1. To set a
particular color component, follow these steps:
1. Copy CWIMAQColorInformation.ColorSpectrum, or create your
own array.
2. Set the corresponding components of the array to –1.
3. Assign this array to CWIMAQColorInformation.ColorSpectrum
on the CWIMAQColorInformation object you want to use as input
during the match phase.
For example, setting the last component in the color spectrum to –1 ignores
the color white. Setting the second to last component in the color spectrum
array to –1 ignores the color black. To ignore other color components in
color matching, determine the index to the color spectrum by locating the
corresponding bins in the color wheel, where each bin corresponds to a
component in the color spectrum array. Ignoring certain colors such as the
background color results in a more accurate color matching score. Ignoring
the background color also provides more flexibility when defining the
regions of interest in the color matching process. Ignoring certain colors,
such as the white color created by glare on a metallic surface, also improves
the accuracy of the color matching. Experiment with learning the color information from different parts of the images to determine which colors
to ignore. For more information about the color wheel and color bins, refer
to Chapter 14, Color Inspection, in the IMAQ Vision Concepts Manual.
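Tying the three steps above together, the lines below ignore white and black during matching by setting the last two components of the spectrum to -1. ColorInfo is a hypothetical CWIMAQColorInformation object, and whether ColorSpectrum is exposed as a Variant array that can be read and written this way is an assumption; adjust the indexing and assignment to match your type library.

Dim Spectrum As Variant
'Step 1: copy the learned color spectrum (assumed to be a Variant array).
Spectrum = ColorInfo.ColorSpectrum
'Step 2: set the components to ignore to -1. The last component
'corresponds to white and the second-to-last component to black.
Spectrum(UBound(Spectrum)) = -1
Spectrum(UBound(Spectrum) - 1) = -1
'Step 3: assign the modified array back to the color information object
'used during the match phase.
ColorInfo.ColorSpectrum = Spectrum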
Chapter 4
Performing Particle Analysis
This chapter describes how to perform particle analysis on images. Use particle analysis to find statistical information about particles, such as the presence, size, number, and location of particle regions. With this information, you can perform many machine vision inspection tasks, such as detecting flaws on silicon wafers or detecting soldering defects on electronic boards. Examples of how particle analysis can help you perform web inspection tasks include locating structural defects on wood planks or detecting cracks on plastic sheets.
Figure 4-1 illustrates the steps involved in performing particle analysis.
1. Create a Binary Image
2. Improve a Binary Image
3. Make Particle Measurements in Pixels or Real-World Units
Figure 4-1. Steps for Performing Particle Analysis
Create a Binary Image
Threshold the grayscale or color image to create a binary image. Creating
a binary image separates the objects that you want to inspect from the
background. The threshold operation sets the background pixels to 0 in the
binary image, while setting the object pixels to a non-zero value. Object
pixels have a value of 1 by default, but you can set the object pixels to any
value or retain their original value. If the objects of interest in the grayscale image fall within a continuous range of intensities and you can specify this threshold range manually, use CWIMAQVision.Threshold to threshold the image.
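As a minimal sketch of a manual threshold, the line below keeps only the pixels of SourceImage whose intensities lie in an example range of 128 to 255 and writes the result to BinaryImage. Both image variables are hypothetical, and the argument order and range parameters are assumptions; check the reference for the actual signature of CWIMAQVision.Threshold.

'Threshold SourceImage into BinaryImage using a manually chosen range
'(argument list assumed).
CWIMAQVision1.Threshold SourceImage, BinaryImage, 128, 255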
If all the objects in the grayscale image are either brighter or darker than
the background, you can use CWIMAQVision.AutoThreshold to
automatically determine the optimal threshold range and threshold the
image. Automatic thresholding techniques offer more flexibility than
simple thresholds based on fixed ranges. Because automatic thresholding
techniques determine the threshold level according to the image histogram,
the operation is more independent of changes in the overall brightness and
contrast of the image than a fixed threshold. These techniques are more
resistant to changes in lighting, which makes them well suited for
automated inspection tasks.
If the grayscale image contains objects that have multiple discontinuous
grayscale values, use CWIMAQVision.MultiThreshold2 to specify
multiple threshold ranges.
If you need to threshold a color image, use
CWIMAQVision.ColorThreshold. You must specify threshold
ranges for each of the color planes—Red, Green, and Blue; or Hue,
Saturation, and Luminance. The binary image resulting from a color
threshold is an 8-bit binary image.
Improve the Binary Image
After you threshold the image, you may want to improve the resulting
binary image with binary morphology. You can use primary binary
morphology or advanced binary morphology to remove unwanted
particles, separate connected particles, or improve the shape of particles.
Primary morphology methods work on the image as a whole by processing
pixels individually. Advanced morphology operations are built upon
the primary morphological operators and work on particles as opposed
to pixels.
The advanced morphology methods that improve binary images require
that you specify the type of connectivity to use. Connectivity specifies how
IMAQ Vision determines if two adjacent pixels belong to the same particle.
If you have a particle that contains narrow areas, use connectivity-8 to
ensure that the software recognizes the connected pixels as one particle.
If you have two particles that touch at one point, use connectivity-4 to
ensure that the software recognizes the pixels as two separate particles.
For more information about connectivity, refer to Chapter 9, Binary Morphology, of the IMAQ Vision Concepts Manual.
Note Use the same type of connectivity throughout the application.
Removing Unwanted Particles
Use CWIMAQVision.RejectBorder to remove particles that touch the border of the image. Reject particles on the border of the image when you suspect that the information about those particles is incomplete.
Use CWIMAQVision.RemoveParticle to remove large or small particles that do not interest you. You also can use the Erode, Open, and POpen methods in CWIMAQVision.Morphology to remove small particles. Unlike CWIMAQVision.RemoveParticle, these three methods alter the size and shape of the remaining particles.
Use the hit-miss method of CWIMAQVision.Morphology to locate
particular configurations of pixels, which you define with a structuring
element. Depending on the configuration of the structuring element,
the hit-miss method can locate single isolated pixels, cross-shape or
longitudinal patterns, right angles along the edges of particles, and other
user-specified shapes. For more information about structuring elements,
refer to Chapter 9, Binary Morphology, of the IMAQ Vision Concepts
Manual.
If you know enough about the shape features of the particles you want to
keep, use CWIMAQVision.ParticleFilter2 to filter out particles that
do not interest you. If you do not have enough information about the
particles you want to keep at this point in the processing, use the particle
measurement methods to obtain this information before applying a particle
filter. Refer to the Make Particle Measurements section for more
information about the measurement methods.
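The lines below sketch the two removal operations described above: rejecting particles on the image border and then removing particles that are too small. BinaryImage is a hypothetical image object, and the in-place (source, destination) calling pattern and the omission of size options are assumptions; consult the reference for the actual parameters of RejectBorder and RemoveParticle.

'Discard particles that touch the image border (arguments assumed).
CWIMAQVision1.RejectBorder BinaryImage, BinaryImage
'Remove small particles that are not of interest (arguments and size
'options assumed).
CWIMAQVision1.RemoveParticle BinaryImage, BinaryImage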
Separating Touching Particles
Use CWIMAQVision.Separation or apply an erosion or an open operation with CWIMAQVision.Morphology to separate touching objects. CWIMAQVision.Separation is an advanced operation that separates particles without modifying their shapes. However, erosion and open operations alter the shape of all the particles.
Note A separation is a time-intensive operation compared to an erosion or open operation.
Consider using an erosion if speed is an issue with the application.
Improving Particle Shapes
Use CWIMAQVision.FillHole to fill holes in the particles. Use CWIMAQVision.Morphology to perform a variety of operations on the particles. You can use the Open, Close, Proper Open, Proper Close, and auto-median operations to smooth the boundaries of the particles. Open and Proper Open smooth the boundaries of the particle by removing small isthmuses, while Close widens the isthmuses. Close and Proper Close fill small holes in the particle. Auto-median removes isthmuses and fills holes.
For more information about these operations, refer to Chapter 9, Binary
Morphology, in the IMAQ Vision Concepts Manual.
Make Particle Measurements
After you create a binary image and improve it, you can make particle
measurements. With these measurements you can determine the location of
particles and their shape features. Use the following methods to perform
particle measurements:
• CWIMAQVision.ParticleReport—This method returns a CWIMAQParticleReport object, which contains, for each particle, nine of the most commonly used measurements, including the particle area, bounding rectangle, and center of mass. The bounding rectangle is returned as one measurement but contains four measurement elements, and the center of mass is returned as one measurement but contains two elements.
• CWIMAQVision.ParticleMeasurement—This method takes the measurement you want to apply to all particles and returns an array that contains the specified measurement for each particle, as shown in the sketch after this list.
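The lines below sketch both calls. BinaryImage is a hypothetical image object, and the argument lists, the way the report is returned, and the use of a Variant to receive the measurement array are assumptions made for illustration; check the reference for the actual signatures.

Dim Report As CWIMAQParticleReport
Dim Areas As Variant
'Nine common measurements for every particle (arguments assumed).
Set Report = CWIMAQVision1.ParticleReport(BinaryImage)
'One chosen measurement, here the area, for every particle; the
'measurement constant comes from Table 4-1 (arguments assumed).
Areas = CWIMAQVision1.ParticleMeasurement(BinaryImage, cwimaqMeasurementArea)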
Table 4-1 lists all of the measurements that CWIMAQVision.ParticleMeasurement returns.
Table 4-1. Measurement Types

cwimaqMeasurementArea: Area of the particle.
cwimaqMeasurementAreaByImageArea: Percentage of the particle Area covering the Image Area.
cwimaqMeasurementAreaByParticleAndHolesArea: Percentage of the particle Area in relation to its Particle & Holes' Area.
cwimaqMeasurementAverageHorizSegmentLength: Average length of a horizontal segment in the particle.
cwimaqMeasurementAverageVertSegmentLength: Average length of a vertical segment in the particle.
cwimaqMeasurementBoundingRectBottom: Y-coordinate of the lowest particle point.
cwimaqMeasurementBoundingRectDiagonal: Distance between opposite corners of the bounding rectangle.
cwimaqMeasurementBoundingRectHeight: Distance between the Y-coordinate of the highest particle point and the Y-coordinate of the lowest particle point.
cwimaqMeasurementBoundingRectLeft: X-coordinate of the leftmost particle point.
cwimaqMeasurementBoundingRectRight: X-coordinate of the rightmost particle point.
cwimaqMeasurementBoundingRectTop: Y-coordinate of the highest particle point.
cwimaqMeasurementBoundingRectWidth: Distance between the X-coordinate of the leftmost particle point and the X-coordinate of the rightmost particle point.
cwimaqMeasurementCenterMassX: X-coordinate of the point representing the average position of the total particle mass, assuming every point in the particle has a constant density.
cwimaqMeasurementCenterMassY: Y-coordinate of the point representing the average position of the total particle mass, assuming every point in the particle has a constant density.
cwimaqMeasurementCompactnessFactor: Area divided by the product of Bounding Rect Width and Bounding Rect Height.
cwimaqMeasurementConvexHullArea: Area of the smallest convex polygon containing all points in the particle.
cwimaqMeasurementConvexHullPerimeter: Perimeter of the smallest convex polygon containing all points in the particle.
cwimaqMeasurementElongationFactor: Max Feret Diameter divided by Equivalent Rect Short Side (Feret).
cwimaqMeasurementEquivalentEllipseMajorAxis: Length of the major axis of the ellipse with the same perimeter and area as the particle.
cwimaqMeasurementEquivalentEllipseMinorAxis: Length of the minor axis of the ellipse with the same perimeter and area as the particle.
cwimaqMeasurementEquivalentEllipseMinorAxisFeret: Length of the minor axis of the ellipse with the same area as the particle, and Major Axis equal in length to the Max Feret Diameter.
cwimaqMeasurementEquivalentRectDiagonal: Distance between opposite corners of the rectangle with the same perimeter and area as the particle.
cwimaqMeasurementEquivalentRectLongSide: Longest side of the rectangle with the same perimeter and area as the particle.
cwimaqMeasurementEquivalentRectShortSide: Shortest side of the rectangle with the same perimeter and area as the particle.
cwimaqMeasurementEquivalentRectShortSideFeret: Shortest side of the rectangle with the same area as the particle, and longest side equal in length to the Max Feret Diameter.
cwimaqMeasurementFirstPixelX: X-coordinate of the highest, leftmost particle pixel.
cwimaqMeasurementFirstPixelY: Y-coordinate of the highest, leftmost particle pixel.
cwimaqMeasurementHeywoodCircularityFactor: Perimeter divided by the circumference of a circle with the same area.
cwimaqMeasurementHolesArea: Sum of the areas of each hole in the particle.
cwimaqMeasurementHolesPerimeter: Sum of the perimeters of each hole in the particle.
cwimaqMeasurementHuMoment1: The first Hu moment.
cwimaqMeasurementHuMoment2: The second Hu moment.
cwimaqMeasurementHuMoment3: The third Hu moment.
cwimaqMeasurementHuMoment4: The fourth Hu moment.
cwimaqMeasurementHuMoment5: The fifth Hu moment.
cwimaqMeasurementHuMoment6: The sixth Hu moment.
cwimaqMeasurementHuMoment7: The seventh Hu moment.
cwimaqMeasurementHydraulicRadius: The particle area divided by the particle perimeter.
cwimaqMeasurementImageArea: Area of the image.
cwimaqMeasurementMaxFeretDiameter: Distance between the start and end of the line segment connecting the two perimeter points that are the furthest apart.
cwimaqMeasurementMaxFeretDiameterEndX: X-coordinate of the end of the line segment connecting the two perimeter points that are the furthest apart.
cwimaqMeasurementMaxFeretDiameterEndY: Y-coordinate of the end of the line segment connecting the two perimeter points that are the furthest apart.
cwimaqMeasurementMaxFeretDiameterOrientation: The angle of the line segment connecting the two perimeter points that are the furthest apart.
cwimaqMeasurementMaxFeretDiameterStartX: X-coordinate of the start of the line segment connecting the two perimeter points that are the furthest apart.
cwimaqMeasurementMaxFeretDiameterStartY: Y-coordinate of the start of the line segment connecting the two perimeter points that are the furthest apart.
cwimaqMeasurementMaxHorizSegmentLengthLeft: X-coordinate of the leftmost pixel in the longest row of contiguous pixels in the particle.
cwimaqMeasurementMaxHorizSegmentLengthRight: X-coordinate of the rightmost pixel in the longest row of contiguous pixels in the particle.
cwimaqMeasurementMaxHorizSegmentLengthRow: Y-coordinate of all of the pixels in the longest row of contiguous pixels in the particle.
cwimaqMeasurementMomentOfInertiaXX: The moment of inertia in the X direction twice.
cwimaqMeasurementMomentOfInertiaXXX: The moment of inertia in the X direction three times.
cwimaqMeasurementMomentOfInertiaXXY: The moment of inertia in the X direction twice and the Y direction once.
cwimaqMeasurementMomentOfInertiaXY: The moment of inertia in the X and Y directions.
cwimaqMeasurementMomentOfInertiaXYY: The moment of inertia in the X direction once and the Y direction twice.
cwimaqMeasurementMomentOfInertiaYY: The moment of inertia in the Y direction twice.
cwimaqMeasurementMomentOfInertiaYYY: The moment of inertia in the Y direction three times.
cwimaqMeasurementNormMomentOfInertiaXX: The normalized moment of inertia in the X direction twice.
cwimaqMeasurementNormMomentOfInertiaXXX: The normalized moment of inertia in the X direction three times.
cwimaqMeasurementNormMomentOfInertiaXXY: The normalized moment of inertia in the X direction twice and the Y direction once.
cwimaqMeasurementNormMomentOfInertiaXY: The normalized moment of inertia in the X and Y directions.
cwimaqMeasurementNormMomentOfInertiaXYY: The normalized moment of inertia in the X direction once and the Y direction twice.
cwimaqMeasurementNormMomentOfInertiaYY: The normalized moment of inertia in the Y direction twice.
cwimaqMeasurementNormMomentOfInertiaYYY: The normalized moment of inertia in the Y direction three times.
cwimaqMeasurementNumberOfHoles: Number of holes in the particle.
cwimaqMeasurementNumberOfHorizSegments: Number of horizontal segments in the particle.
cwimaqMeasurementNumberOfVertSegments: Number of vertical segments in the particle.
cwimaqMeasurementOrientation: The angle of the line that passes through the particle Center of Mass about which the particle has the lowest moment of inertia.
cwimaqMeasurementParticleAndHolesArea: Area of the particle and of the holes it contains.
cwimaqMeasurementPerimeter: Length of the outer boundary of the particle.
cwimaqMeasurementRatioOfEquivalentEllipseAxes: Equivalent Ellipse Major Axis divided by Equivalent Ellipse Minor Axis.
cwimaqMeasurementRatioOfEquivalentRectSides: Equivalent Rect Long Side divided by Equivalent Rect Short Side.
cwimaqMeasurementSumX: The sum of all X-coordinates in the particle.
cwimaqMeasurementSumXX: The sum of all X-coordinates squared in the particle.
cwimaqMeasurementSumXXX: The sum of all X-coordinates cubed in the particle.
cwimaqMeasurementSumXXY: The sum of all X-coordinates squared times Y-coordinates in the particle.
cwimaqMeasurementSumXY: The sum of all X-coordinates times Y-coordinates in the particle.
cwimaqMeasurementSumXYY: The sum of all X-coordinates times Y-coordinates squared in the particle.
cwimaqMeasurementSumY: The sum of all Y-coordinates in the particle.
cwimaqMeasurementSumYY: The sum of all Y-coordinates squared in the particle.
cwimaqMeasurementSumYYY: The sum of all Y-coordinates cubed in the particle.
cwimaqMeasurementTypesFactor: Factor relating area to moment of inertia.
cwimaqMeasurementWaddelDiskDiameter: Diameter of a disk with the same area as the particle.
Chapter 5
Performing Machine Vision Tasks
This chapter describes how to perform many common machine vision
inspection tasks. The most common inspection tasks are detecting the
presence or absence of parts in an image and measuring the dimensions
of parts to see if they meet specifications.
Measurements are based on characteristic features of the object represented
in the image. Image processing algorithms traditionally classify the type
of information contained in an image as edges, surfaces and textures, or
patterns. Different types of machine vision algorithms leverage and extract
one or more types of information.
Edge detectors and derivative techniques—such as rakes, concentric rakes,
and spokes—use edges represented in the image. They locate, with high
accuracy, the position of the edge of an object in the image. For example,
you can use a technique called clamping, which uses the edge location to
measure the width of the part. You can combine multiple edge locations
to compute intersection points, projections, circles, or ellipse fits.
Pattern matching algorithms use edges and patterns. Pattern matching can
locate with very high accuracy the position of fiducials or characteristic
features of the part under inspection. Those locations can then be combined
to compute lengths, angles, and other object measurements.
The accuracy of these measurements depends heavily on the image acquisition conditions. Sensor resolution, lighting, optics, vibration control, part fixturing, and the general environment are key components of the imaging setup. All the elements of the image acquisition chain directly affect the accuracy of the measurements.
Figure 5-1 illustrates the basic steps involved in performing machine
vision.
Locate Objects to Inspect
Set Search Areas
Find Measurement Points
Identify Parts Under Inspection (Classify Objects, Read Characters, Read Symbologies)
Convert Pixel Coordinates to Real-World Coordinates
Make Measurements
Display Results
Figure 5-1. Steps to Performing Machine Vision
Note Diagram items enclosed with dashed lines are optional steps.
Locate Objects to Inspect
In a typical machine vision application, you extract measurements from regions of interest rather than from the entire image. To use this technique, the parts of the object you are interested in must always appear inside the regions of interest you define.
If the object under inspection is always at the same location and orientation
in the images you need to process, defining regions of interest is simple.
Refer to the Set Search Areas section of this chapter for information about
selecting a region of interest.
Often, the object under inspection appears rotated or shifted in the image
you need to process with respect to the reference image in which you
located the object. When this occurs, the ROIs must shift and rotate with
the parts of the object in which you are interested. For the ROIs to move
with the object, you must define a reference coordinate system relative to
the object in the reference image. During the measurement process, the
coordinate system moves with the object when it appears shifted and
rotated in the image you need to process. This coordinate system is referred
to as the measurement coordinate system. The measurement methods
automatically move the ROIs to the correct position using the position of
the measurement coordinate system with respect to the reference
coordinate system. For information about coordinate systems, refer to
Chapter 13, Dimensional Measurements, of the IMAQ Vision Concepts
Manual.
You can build a coordinate transformation using edge detection or
pattern matching. The output of the edge detection and pattern
matching methods that build a coordinate transformation is a
CWMVCoordinateTransformation object, which contains a reference
coordinate system and a measurement coordinate system. Some machine
vision methods take this transformation and adjust the regions of inspection
automatically. You also can use these outputs to move the regions of
inspection relative to the object programmatically.
Using Edge Detection to Build a Coordinate Transformation
You can build a coordinate transformation using two edge detection
techniques. Use CWMachineVision.FindCoordTransformUsingRect to define a reference coordinate system using one rectangular region. Use CWMachineVision.FindCoordTransformUsingTwoRects to define a reference coordinate system using two independent rectangular regions.
Follow the steps below to build a coordinate transformation using edge
detection.
Note To use this technique, the object cannot rotate more than 65° in the image.
1. Specify one or two rectangular ROIs.
a. If you use
CWMachineVision.FindCoordTransformUsingRect,
specify one rectangular ROI that includes part of two straight,
nonparallel boundaries of the object, as shown in Figure 5-2.
This rectangular region must be large enough to include these
boundaries in all the images you want to inspect.
1 Search Area for the Coordinate System
2 Object Edges
3 Origin of the Coordinate System
4 Measurement Area
Figure 5-2. Coordinate Systems of a Reference Image and Inspection Image
b. If you use
CWMachineVision.FindCoordTransformUsingTwoRects,
specify two rectangular ROIs, each containing one separate,
straight boundary of the object, as shown in Figure 5-3. The
boundaries cannot be parallel. The regions must be large enough
to include the boundaries in all of the images you want to inspect.
1 Primary Search Area
2 Secondary Search Area
3 Origin of the Coordinate System
4 Measurement Area
Figure 5-3. Locating Coordinate System Axes with Two Search Areas
2. Choose the parameters you need to locate the edges on the object.
3. Choose the coordinate system axis direction.
4. Choose the results that you want to overlay onto the image.
5. Choose the mode for the method. To build a coordinate transformation for the first time, set the FirstRun parameter to True. To update the coordinate transformation in subsequent images, set this parameter to False. A minimal sketch of this procedure follows these steps.
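The lines below sketch the single-rectangle technique. SearchRect is a hypothetical CWIMAQRotatedRectangle describing the search area, CWMachineVision1 is assumed to be a CWMachineVision control on the form, the argument list of FindCoordTransformUsingRect is an assumption, and creating the transformation object with New is also an assumption; consult the reference for the actual parameters, including the edge-detection options.

Dim Transform As New CWMVCoordinateTransformation
'Build the reference coordinate system on the first image
'(argument list and FirstRun position assumed).
CWMachineVision1.FindCoordTransformUsingRect CWIMAQViewer1.Image, SearchRect, Transform, True
'On later images, call the method again with FirstRun set to False so
'the measurement coordinate system is updated instead of redefined.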
Using Pattern Matching to Build a Coordinate Transformation
You can build a coordinate transformation using pattern matching. Use
CWMachineVision.FindCoordTransformUsingPattern to define a
reference coordinate system based on the location of a reference feature.
Use this technique when the object under inspection does not have straight,
distinct edges. Follow the steps below to build a coordinate transformation
using pattern matching.
Note The object may rotate 360° in the image using this technique if you use
rotation-invariant pattern matching.
1. Define a template that represents the part of the object that you want
to use as a reference feature. For more information about defining a
template, refer to the Find Measurement Points section.
2. Define a rectangular search area in which you expect to find the
template.
3. Set the MatchMode property of the CWMVFindCTUsingPatternOptions object to cwimaqRotationInvariant when you expect the template to appear rotated in the inspection images. Otherwise, set it to cwimaqShiftInvariant.
4. Choose the results you want to overlay onto the image.
5. Choose the mode for the method. To build a transformation for the first time, set the FirstRun parameter to True. To update the transformation in subsequent images, set this parameter to False.
Choosing a Method to Build the Coordinate Transformation
Figure 5-4 guides you through choosing the best method for building a
coordinate transformation for the application.
Figure 5-4. Building a Coordinate Transformation
Set Search Areas
Select ROIs in the images to limit the areas in which you perform the
processing and inspection. You can define ROIs interactively or
programmatically.
Defining Regions Interactively
Follow these steps to interactively define an ROI:
1. Call CWMachineVision.SetupViewerFor<shapename>Selection. The following <shapename> values are available: Annulus, Line, Point, Rectangle, and RotatedRect. This method configures the viewer to display the appropriate tools for the shape you want to select.
2. Draw the ROI on the viewer so that it encloses the area of the image you want to process.
3. Use CWMachineVision.GetSelected<shapename>FromViewer to programmatically retrieve the shape from the viewer, as shown in the sketch below.
You also can use the techniques described in Chapter 3, Making Grayscale
and Color Measurements, to select an ROI.
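The lines below sketch the rectangle case of this procedure. CWMachineVision1 is assumed to be a CWMachineVision control on the form, and the argument lists and return type of both methods are assumptions; check the reference for the exact signatures.

Dim SelectedRect As CWIMAQRectangle
'Configure the viewer so the operator can draw a rectangle
'(argument assumed to be the viewer).
CWMachineVision1.SetupViewerForRectangleSelection CWIMAQViewer1
'...the operator draws the rectangle interactively...
'Retrieve the drawn shape from the viewer (return type assumed).
Set SelectedRect = CWMachineVision1.GetSelectedRectangleFromViewer(CWIMAQViewer1)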
Table 5-1 indicates which ROI selection methods to use with a given
CWMachineVision method.
Table 5-1. ROI Selection Methods to Use with CWMachineVision Methods

SetupViewerForRotatedRectSelection and GetSelectedRotatedRectFromViewer: FindPattern, MeasureMaximumDistance, MeasureMinimumDistance, FindStraightEdge, LightMeterRectangle

SetupViewerForAnnulusSelection and GetSelectedAnnulusFromViewer: FindCircularEdge, FindConcentricEdge
SetupViewerForPointSelection and GetSelectedPointFromViewer: LightMeterPoint

SetupViewerForLineSelection and GetSelectedLineFromViewer: LightMeterLine
Defining Regions Programmatically
When you have an automated application, you need to define regions of
interest programmatically. You can programmatically define regions by
providing basic parameters that describe the region you want to define. You
can specify a rotated rectangle by creating a CWIMAQRotatedRectangle object and setting the coordinates of the center, width, height, and rotation angle. You can specify an annulus by setting the coordinates of the center, inner radius, outer radius, start angle, and end angle. You can specify a point by setting its x-coordinate and y-coordinate. You can specify a line by setting the coordinates of the start and end points.
Refer to Chapter 3, Making Grayscale and Color Measurements, for more
information about defining regions of interest.
Find Measurement Points
After you set regions of inspection, locate points in the regions on which
you can base measurements. You can locate measurement points using
edge detection, pattern matching, color pattern matching, and color
location.
Finding Features Using Edge Detection
Use the edge detection tools to identify and locate sharp discontinuities
in an image. Discontinuities typically represent abrupt changes in pixel
intensity values, which characterize the boundaries of objects.
Finding Lines or Circles
If you want to find points along the edge of an object and find a line
describing the edge, use CWMachineVision.FindStraightEdge and CWMachineVision.FindConcentricEdge. CWMachineVision.FindStraightEdge finds edges based on rectangular search areas, as shown in Figure 5-5. CWMachineVision.FindConcentricEdge finds edges based on annular search areas.
1 Search Region
2 Search Lines
3 Detected Edge Points
4 Line Fit to Edge Points
Figure 5-5. Finding a Straight Feature
If you want to find points along a circular edge and find the circle
that best fits the edge, as shown in Figure 5-6, use
CWMachineVision.FindCircularEdge.
1 Annular Search Region
2 Search Lines
3 Detected Edge Points
4 Circle Fit to Edge Points
Figure 5-6. Finding a Circular Feature
These methods locate the intersection points between a set of search
lines in the search region and the edge of an object. Specify the separation
between the lines that the methods use to detect edges. The methods
determine the intersection points based on their contrast, width, and
steepness. The software calculates a best-fit line with outliers rejected or a
best-fit circle through the points it found. The methods return the
coordinates of the edges found.
Finding Edge Points Along One Search Contour
Use CWIMAQVision.SimpleEdge and CWIMAQVision.FindEdges2 to find edge points along a contour. You can find the first edge, last edge, or all edges along the contour. Use CWIMAQVision.SimpleEdge when the image contains little noise and the object and background are clearly differentiated. Otherwise, use CWIMAQVision.FindEdges2.
These methods require you to input the coordinates of the points along the search contour. Use CWIMAQVision.RegionsProfile to obtain the coordinates from a CWIMAQRegions object that describes the contour. If you have a straight line, use CWIMAQVision.GetPointsOnLine to obtain the points along the line instead of using regions.
These methods determine the edge points based on their contrast and slope.
You can specify if you want to find the edge points using subpixel accuracy.
Finding Edge Points Along Multiple Search Contours
Use the CWIMAQVision.Rake, CWIMAQVision.Spoke, and
CWIMAQVision.ConcentricRake methods to find edge points
along multiple search contours. These methods behave like
CWIMAQVision.FindEdges2, but they find edges on multiple contours.
These methods find only the first edge that meets the criteria along each
contour. Pass in a CWIMAQRegions object to define the search region for
these methods.
CWIMAQVision.Rake works on a rectangular search region. The search
lines are drawn parallel to the orientation of the rectangle. Control the
number of search lines in the region by specifying the distance, in pixels,
between each line. Specify the search direction as left to right or right to left
for a horizontally oriented rectangle. Specify the search direction as top to
bottom or bottom to top for a vertically oriented rectangle.
CWIMAQVision.Spoke works on an annular search region, scanning the
search lines that are drawn from the center of the region to the outer
boundary and that fall within the search area. Control the number of lines
in the region by specifying the angle, in degrees, between each line. Specify
the search direction as either going from the center outward or from the
outer boundary to the center.
CWIMAQVision.ConcentricRake works on an annular search region.
The concentric rake is an adaptation of the rake to an annular region. Edge
detection is performed along search lines that occur in the search region and
that are concentric to the outer circular boundary. Control the number of
concentric search lines that are used for the edge detection by specifying
the radial distance between the concentric lines in pixels. Specify the
direction of the search as either clockwise or counterclockwise.
Finding Points Using Pattern Matching
The pattern matching algorithms in IMAQ Vision measure the similarity
between an idealized representation of a feature, called a template, and the
feature that may be present in an image. A feature is defined as a specific
pattern of pixels in an image. Pattern matching returns the location of the
center of the template and the template orientation. Follow these
generalized steps to find features in an image using pattern matching:
1. Define a reference or fiducial pattern in the form of a template image.
2. Use the reference pattern to train the pattern matching algorithm with
CWIMAQVision.LearnPattern2.
3. Define an image or an area of an image as the search area. A small
search area reduces the time to find the features.
4. Set the tolerances and parameters to specify how the algorithm
operates at run time using CWIMAQMatchPatternOptions.
5. Test the search algorithm on test images using
CWIMAQVision.MatchPattern2.
6. Verify the results using a ranking method.
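The lines below tie steps 2 through 5 together. TemplateImage and InspectionImage are hypothetical image objects, the argument lists of LearnPattern2 and MatchPattern2 are assumptions, and the use of a Variant to receive the matches is an assumption as well; consult the reference for the exact signatures and report types.

Dim MatchOptions As New CWIMAQMatchPatternOptions
Dim Matches As Variant
'Step 2: train the algorithm on the template (learn options omitted;
'argument list assumed).
CWIMAQVision1.LearnPattern2 TemplateImage
'Step 4: configure the match. The property names come from this chapter;
'the values are examples.
MatchOptions.MatchMode = cwimaqMatchShiftInvariant
MatchOptions.MinimumContrast = 10
'Step 5: search the inspection image (argument order and return type
'assumed).
Matches = CWIMAQVision1.MatchPattern2(InspectionImage, TemplateImage, MatchOptions)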
Defining and Creating Effective Template Images
The selection of an effective template image plays a critical part in obtaining
good results. Because the template image represents the pattern that you
want to find, make sure that all the important and unique characteristics of
the pattern are well defined in the image.
Several factors are critical in creating a template image. These critical
factors include symmetry, feature detail, positional information, and
background information.
Symmetry
A rotationally symmetric template is less sensitive to changes in rotation
than one that is rotationally asymmetric. A rotationally symmetric template
provides good positioning information but no orientation information.
a. Rotationally Symmetric   b. Rotationally Asymmetric
Figure 5-7. Symmetry
Feature Detail
A template with relatively coarse features is less sensitive to variations in
size and rotation than a model with fine features. However, the model must
contain enough detail to identify it.
a. Good Feature Detail   b. Ambiguous Feature Detail
Figure 5-8. Feature Detail
Positional Information
A template with strong edges in both the x and y directions is easier to
locate.
a. Good Positional Information in x and y   b. Insufficient Positional Information in y
Figure 5-9. Positional Information
Background Information
Unique background information in a template improves search
performance and accuracy.
a. Pattern with Insufficient Background Information   b. Pattern with Sufficient Background Information
Figure 5-10. Background Information
Training the Pattern Matching Algorithm
After you create a good template image, the pattern matching
algorithm has to learn the important features of the template. Use
CWIMAQVision.LearnPattern2 to learn the template. The learning
process depends on the type of matching that you expect to perform. If you
do not expect the instance of the template in the image to rotate or change
its size, the pattern matching algorithm has to learn only those features
from the template that are necessary for shift-invariant matching. However,
if you want to match the template at any orientation, the learning mode
must consider the possibility of arbitrary orientations. To specify
which type of learning mode to use, pass the learn mode to
the LearnPatternOptions parameter of
CWIMAQVision.LearnPattern2. You also can set the LearnMode
property of a CWIMAQLearnPatternOptions object and pass this object
for the LearnPatternOptions parameter of
CWIMAQVision.LearnPattern2.
The learning process is usually time intensive because the algorithm
attempts to find unique features of the template that allow for fast, accurate
matching. The learning mode you choose also affects the speed of the
learning process. Learning the template for shift-invariant matching is faster than learning it for rotation-invariant matching. You can also save time by training the pattern matching algorithm offline, and then saving the template image with CWIMAQVision.WriteImageAndVisionInfo.
Defining a Search Area
Two equally important factors define the success of a pattern matching
algorithm: accuracy and speed. You can define a search area to reduce
ambiguity in the search process. For example, if the image has multiple
instances of a pattern and only one of them is required for the inspection
task, the presence of additional instances of the pattern can produce
incorrect results. To avoid this, reduce the search area so that only the
appropriate pattern lies within the search area.
The time required to locate a pattern in an image depends on both the
template size and the search area. By reducing the search area or increasing
the template size, you can reduce the required search time.
For example, in a typical component placement application, each printed
circuit board (PCB) being tested may not be placed in the same location
with the same orientation. The location of the PCB in various images can
move and rotate within a known range of values, as illustrated in
Figure 5-11. Figure 5-11a shows the template used to locate the PCB in the
image. Figure 5-11b shows an image containing a PCB with a fiducial you
want to locate. Notice the search area around the fiducial. If you know,
before the matching process begins, that the PCB can shift or rotate in the
image within a fixed range, as shown in Figure 5-11c and Figure 5-11d,
respectively, you can limit the search for the fiducial to a small region of
the image.
Figure 5-11. Selecting a Search Area for Grayscale Pattern Matching
Setting Matching Parameters and Tolerances
Every pattern matching algorithm makes assumptions about the images
and pattern matching parameters used in machine vision applications.
These assumptions work for a high percentage of the applications.
However, there may be applications in which the assumptions used in the
algorithm are not optimal. To efficiently select the best pattern matching
parameters for the application, you must have a clear understanding of the
application and the images you want to process. The following sections
discuss parameters that influence the IMAQ Vision pattern matching
algorithm.
Match Mode
You can set the match mode to control how the pattern matching algorithm
handles the template at different orientations. If you expect the orientation
of valid matches to vary less than 5° from the template, set CWIMAQMatchPatternOptions.MatchMode to cwimaqMatchShiftInvariant. Otherwise, set the property to cwimaqMatchRotationInvariant.
Note Shift-invariant matching is faster than rotation-invariant matching.
Minimum Contrast
Contrast is the difference between the smallest and largest pixel values in a
region. You can set the minimum contrast to potentially increase the speed
of the pattern matching algorithm. The pattern matching algorithm ignores
all image regions where contrast values fall beneath a set minimum contrast
value. If the search image has high contrast but contains some low contrast
regions, you can set a high minimum contrast value. Using a high minimum
contrast value excludes all areas in the image with low contrast,
significantly reducing the region in which the pattern matching algorithm
must search. If the search image has low contrast throughout, set a low
minimum contrast parameter to ensure that the pattern matching
algorithm looks for the template in all regions of the image. Use
CWIMAQMatchPatternOptions.MinimumContrast to set the
minimum contrast.
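For example, a minimal sketch follows. The property name comes from this section; the variable name matchOptions and the contrast value of 20 are illustrative assumptions for a high-contrast search image.

    ' Skip image regions whose contrast is below 20 gray levels.
    matchOptions.MinimumContrast = 20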
Rotation Angle Ranges
If you know that the pattern rotation is restricted to a certain range,
such as between –15° and 15°, provide this restriction information to
the pattern matching algorithm in the
CWIMAQMatchPatternOptions.RotationAngleRanges property.
This information improves your search time because the pattern
matching algorithm looks for the pattern at fewer angles. Refer to
Chapter 12, Pattern Matching, of the IMAQ Vision Concepts Manual
for information about pattern matching.
Testing the Search Algorithm on Test Images
To determine if the selected template or reference pattern is appropriate for
the machine vision application, test the template on a few test images.
These test images should reflect the images generated by the machine
vision application during true operating conditions. If the pattern matching
algorithm locates the reference pattern in all cases, you have selected a
good template. Otherwise, refine the current template, or select a better
template until both training and testing are successful.
Using a Ranking Method to Verify Results
The manner in which you interpret the pattern matching results depends
on the application. For typical alignment applications, such as finding
a fiducial on a wafer, the most important information is the
position and bounding rectangle of the best match. Use
CWIMAQPatternMatchReportItem.Position and
CWIMAQPatternMatchReportItem.BoundingPoints to get
the position and location of a match.
In inspection applications, such as optical character verification (OCV), the
score of the best match is more useful. The score of a match returned by the
pattern matching method is an indicator of the closeness between the
original pattern and the match found in the image. A high score indicates a
very close match, while a low score indicates a poor match. The score can
be used as a gauge to determine if a printed character is acceptable. Use
CWIMAQPatternMatchReportItem.Score to get a match score.
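The following sketch shows one way to apply a score-based acceptance test. It assumes that matchReports is the collection of CWIMAQPatternMatchReportItem objects returned by the pattern matching call and that 800 is an application-specific acceptance threshold; both are assumptions for illustration only.

    Dim reportItem As CWIMAQPatternMatchReportItem
    For Each reportItem In matchReports
        ' Accept only matches whose score meets the application threshold.
        If reportItem.Score >= 800 Then
            Debug.Print "Accepted match, score = " & reportItem.Score
        Else
            Debug.Print "Rejected match, score = " & reportItem.Score
        End If
    Next reportItem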
Finding Points Using Color Pattern Matching
Color pattern matching algorithms provide a quick way to locate objects
when color is present. Use color pattern matching under the following
circumstances:
• The object you want to locate has color information that is very
different from the background, and you want to find a very precise
location of the object in the image.
• The object to locate has grayscale properties that are very difficult to
characterize or that are very similar to other objects in the search
image. In such cases, grayscale pattern matching can give inaccurate
results. If the object has color information that differentiates it from the
other objects in the scene, color provides the machine vision software
with the additional information to locate the object.
Color pattern matching returns the location of the center of the template and
the template orientation. Follow these general steps to find features in an
image using color pattern matching:
1. Define a reference or fiducial pattern in the form of a template image.
2. Use the reference pattern to train the color pattern matching algorithm
with CWIMAQVision.LearnColorPattern.
3. Define an image or an area of an image as the search area. A small
search area reduces the time to find the features.
4. Set CWIMAQMatchColorPatternOptions.FeatureMode to
cwimaqFeatureAll.
5. Set the tolerances and parameters to specify how the algorithm
operates at run time using CWIMAQMatchColorPatternOptions.
6. Test the search algorithm on test images using
CWIMAQVision.MatchColorPattern.
7. Verify the results using a ranking method.
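A minimal sketch of steps 2, 4, and 6 follows. The property, constant, and method names come from this chapter; the variable names, the CWIMAQVision1 object reference, and the exact argument lists of the learn and match calls are assumptions, so consult the CWIMAQVision reference for the actual signatures.

    ' Step 2: train the algorithm with the template image
    ' (argument list assumed).
    CWIMAQVision1.LearnColorPattern templateImage, learnOptions

    ' Step 4: use both color and shape features during the search.
    matchColorOptions.FeatureMode = cwimaqFeatureAll

    ' Step 6: search the inspection image (argument list assumed).
    CWIMAQVision1.MatchColorPattern inspectionImage, templateImage, _
        searchRegions, matchColorOptions, matchReports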
Defining and Creating Effective Color Template
Images
The selection of an effective template image plays a critical part in obtaining
accurate results with the color pattern matching algorithm. Because the
template image represents the color and the pattern that you want to find,
make sure that all the important and unique characteristics of the pattern are
well defined in the image.
Several factors are critical in creating a template image. These critical
factors include color information, symmetry, feature detail, positional
information, and background information.
Color Information
A template with colors that are unique to the pattern provides better results
than a template that contains many colors, especially colors found in the
background or other objects in the image.
Symmetry
A rotationally symmetric template in the luminance plane is less sensitive
to changes in rotation than one that is rotationally asymmetric.
Feature Detail
A template with relatively coarse features is less sensitive to variations in
size and rotation than a model with fine features. However, the model must
contain enough detail to identify it.
Positional Information
A template with strong edges in both the x and y directions is easier to
locate.
Background Information
Unique background information in a template improves search accuracy.
This requirement could conflict with the “color information” requirement
because background colors may not be appropriate during the color
location phase. Avoid this problem by choosing a template with sufficient
background information for grayscale pattern matching while specifying
the exclusion of the background color during the color location phase.
Refer to the Training the Pattern Matching Algorithm section of this
chapter for more information about how to ignore colors.
Training the Color Pattern Matching Algorithm
After you have created a good template image, the color pattern
matching algorithm learns the important features of the template. Use
CWIMAQVision.LearnColorPattern to learn the template. The
learning process depends on the type of matching that you expect to
perform. By default, the color pattern matching algorithm learns only those
features from the template that are necessary for shift-invariant matching.
However, if you want to match the template at any orientation, the learning
process must consider the possibility of arbitrary orientations. Use the
CWIMAQLearnColorPatternOptions.LearnMode property to specify
which type of learning mode to use.
Exclude colors in the template that you are not interested in using during
the search phase. Typically, you should ignore colors that either belong to
the background of the object or are not unique to the template, reducing the
potential for incorrect matches during the color location phase. You can
learn the colors to ignore using CWIMAQVision.LearnColor. Use the
CWIMAQLearnColorPatternOptions.IgnoreBlackAndWhite or
CWIMAQLearnColorPatternOptions.IgnoreColorSpectra
properties to ignore background colors.
The training or learning process is time-intensive because the
algorithm attempts to find optimal features of the template for the
particular matching process. However, you can train the pattern
matching algorithm offline, and save the template image using
CWIMAQVision.WriteImageAndVisionInfo.
Defining a Search Area
Two equally important factors define the success of a color pattern
matching algorithm—accuracy and speed. You can define a search area to
reduce ambiguity in the search process. For example, if the image has
multiple instances of a pattern and only one instance is required for the
inspection task, the presence of additional instances of the pattern can
produce incorrect results. To avoid this, reduce the search area so that only
the appropriate pattern lies within the search area. For example, in the fuse
box inspection example, use the location of the fuses to be inspected to
define the search area. Because the inspected fuse box may not be in the
exact location or have the same orientation in the image as the previous
one, the search area you define should be large enough to accommodate
these variations in the position of the box. Figure 5-12 shows how search
areas can be selected for different objects.
Figure 5-12. Selecting a Search Area for Color Pattern Matching (1: Search Area for 20 Amp Fuses; 2: Search Area for 25 Amp Fuses)
The time required to locate a pattern in an image depends on both the
template size and the search area. By reducing the search area or increasing
the template size, you can reduce the required search time. Increasing the
size of the template can improve search time, but doing so reduces match
accuracy if the larger template includes an excess of background
information.
Setting Matching Parameters and Tolerances
Every color pattern matching algorithm makes assumptions about the
images and color pattern matching parameters used in machine vision
applications. These assumptions work for a high percentage of the
applications.
In some applications, the assumptions used in the algorithm are not
optimal. In such cases, you must modify the color pattern matching
parameters. To efficiently select the best pattern matching parameters for
the application, you must have a clear understanding of the application and
the images you want to process.
The following sections discuss parameters of the IMAQ Vision color
pattern matching algorithm, and how they influence the algorithm.
Color Sensitivity
Use the color sensitivity to control the granularity of the color information
in the template image. If the background and objects in the image contain
colors that are very close to colors in the template image, use a higher color
sensitivity setting. A higher sensitivity setting distinguishes colors with
very close hue values. Three color sensitivity settings are available in
IMAQ Vision: low, medium, and high. Use the low setting, which is the
default, if the colors in the template are very different from the colors in the
background or other objects that you are not interested in. Increase the
color sensitivity settings as the color differences decrease. Use
CWIMAQMatchColorPatternOptions.ColorSensitivity to set the
color sensitivity. For information about color sensitivity, refer to
Chapter 14, Color Inspection, of the IMAQ Vision Concepts Manual.
Search Strategy
Use the search strategy to optimize the speed of the color pattern matching
algorithm. The search strategy controls the step size, sub-sampling factor,
and the percentage of color information used from the template.
Use one of the following four search strategies:
• Very aggressive—Uses the largest step size, the most subsampling,
and only the dominant color from the template to search for the
template. Use this strategy when the color in the template is almost
uniform, the template is well contrasted from the background, and there
is a good amount of separation between different occurrences of the
template in the image. This strategy is the fastest way to find templates
in an image.
• Aggressive—Uses a large step size, a large amount of subsampling,
and all the color spectrum information from the template.
• Balanced—Uses values in between the aggressive and conservative
strategies.
• Conservative—Uses a very small step size, the least amount of
subsampling, and all the color information present in the template. The
conservative strategy is the most reliable method to look for a template
in any image at potentially reduced speed.
Note Use the conservative strategy if you have multiple targets located very close to each
other in the image.
Decide on the best strategy by experimenting with the different options.
Use CWIMAQMatchColorPatternOptions.SearchStrategy to select
a search strategy.
Color Score Weight
When you search for a template using both color and shape information, the
color and shape scores generated during the match process are combined
to generate the final color pattern matching score. The color score
weight determines the contribution of the color score to the final color
pattern matching score. If the template color information is superior to
its shape information, set the weight higher. For example, if you use a
weight of 1000, the algorithm finds each match by using both color
and shape information, and then ranks the matches based entirely on
their color scores. If the weight is 0, the matches are ranked based
entirely on their shape scores. Use the
CWIMAQMatchColorPatternOptions.ColorScoreWeight
property to set the color score weight.
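For example, a one-line sketch based on the description above; the variable name colorMatchOptions is an assumption.

    ' Rank matches entirely by their color scores.
    colorMatchOptions.ColorScoreWeight = 1000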
Minimum Contrast
Use the minimum contrast to increase the speed of the color pattern
matching algorithm. The color pattern matching algorithm ignores
all image regions where grayscale contrast values fall
beneath a set minimum contrast value. Use
CWIMAQMatchColorPatternOptions.MinimumContrast
to set the minimum contrast. Refer to the Setting Matching Parameters and
Tolerances section of this chapter for more information about minimum
contrast.
Rotation Angle Ranges
If you know that the pattern rotation is restricted to a certain range, provide
this restriction information to the pattern matching algorithm by using
the CWIMAQMatchPatternOptions.RotationAngleRanges property.
This information improves the search time because the color pattern
matching algorithm looks for the pattern at fewer angles. Refer to
Chapter 12, Pattern Matching, in the IMAQ Vision Concepts Manual
for more information about pattern matching.
Testing the Search Algorithm on Test Images
To determine if the selected template or reference pattern is appropriate for
the machine vision application, test the template on a few test images by
using the CWIMAQVision.MatchColorPattern method. These test
images should reflect the images generated by the machine vision
application during true operating conditions. If the color pattern matching
algorithm locates the reference pattern in all cases, you have selected a
good template. Otherwise, refine the current template, or select a better
template until both training and testing are successful.
Finding Points Using Color Location
Color location algorithms provide a quick way to locate regions in an image
with specific colors.
Use color location under the following circumstances:
• Requires the location and the number of regions in an image with their
specific color information
• Relies on the cumulative color information in the region, instead of the
color arrangement in the region
• Does not require the orientation of the region
• Does not always require the location with sub-pixel accuracy
• Does not require shape information for the region
Complete the following steps to find features in an image using color
location:
1. Define a reference pattern in the form of a template image.
2. Use the reference pattern to train the color location algorithm with
CWIMAQVision.LearnColorPattern.
3. Define an image or an area of an image as the search area. A small
search area reduces the time to find the features.
4. Set CWIMAQMatchColorPatternOptions.FeatureMode to
cwimaqFeatureColorInformation.
5. Set the tolerances and parameters to specify how the method operates
at run time using CWIMAQMatchColorPatternOptions.
6. Use CWIMAQVision.MatchColorPattern to test the color location
algorithm on test images.
7. Verify the results using a ranking method.
Use CWIMAQVision.WriteImageAndVisionInfo to save the template
image.
Convert Pixel Coordinates to Real-World Coordinates
The measurement points you located with edge detection and pattern
matching are in pixel coordinates. If you need to make measurements using
real-world units, use
CWIMAQVision.ConvertPixelToRealWorldCoordinates to convert
the pixel coordinates into real-world units.
Make Measurements
You can make different types of measurements either directly from the
image or from points that you detect in the image.
Distance Measurements
Use the following methods to make distance measurements for the
inspection application.
Clamp methods measure the separation between two edges in a rectangular
search region. First, clamp methods detect points along the two edges using
the rake method, and then they compute the distance between the points
detected on the edges along each search line of the rake and return the
largest or smallest distance in either the horizontal or vertical direction. The
MeasurementAxis parameter specifies the axis along which to measure.
You also need to specify the parameters for edge detection and the
separation between the search lines that you want to use within the search
region to find the edges. These methods work directly on the image under
inspection, and they output the coordinates of all the edge points that they
find. The following list describes the available clamp methods:
• CWMachineVision.MeasureMaximumDistance—Measures the
largest separation between two edges in a rectangular search region.
• CWMachineVision.MeasureMinimumDistance—Measures the
smallest separation between two edges in a rectangular search region.
Use CWIMAQVision.FindPointDistances to compute the distances
between consecutive pairs of points in an array of points. You can obtain
these points from the image using any one of the feature detection methods
described in the Find Measurement Points section of this chapter.
Analytic Geometry Measurements
Use the following CWIMAQVision methods to make geometrical
measurements from the points you detect in the image:
• FitLine—Fits a line to a set of points and computes the equation of
the line.
• FitCircle2—Fits a circle to a set of at least three points and
computes its area, perimeter, and radius.
• FitEllipse2—Fits an ellipse to a set of at least six points and
computes its area, perimeter, and the lengths of its major and
minor axes.
• FindIntersectionPoint—Finds the intersection point of two lines
specified by their start and end points.
• FindAngleBetweenLines—Finds the smaller angle between two
lines.
• FindPerpendicularLine—Finds the perpendicular line from a
point to a line.
• FindDistanceFromPointToLine—Computes the perpendicular
distance between the point and the line.
• FindBisectingLine—Finds the line that bisects the angle formed
by two lines.
• FindMidLine—Finds the line that is midway between a point and a
line and is parallel to the line.
• FindPolygonArea—Calculates the area of a polygon specified by its
vertex points.
Instrument Reader Measurements
You can make measurements based on the values obtained by meter, LCD,
and barcode readers.
Use CWIMAQMeterArc.CreateFromPoints or
CWIMAQMeterArc.CreateFromLines to calibrate a meter or
gauge that you want to read. CWIMAQMeterArc.CreateFromLines
calibrates the meter using the initial position and the full-scale position
of the needle. CWIMAQMeterArc.CreateFromPoints calibrates the
meter using three points on the meter: the base of the needle, the tip of
the needle at its initial position, and the tip of the needle at its full-scale
position. Use CWIMAQVision.ReadMeter to read the position of the
needle using the CWIMAQMeterArc object.
Use CWIMAQVision.FindLCDSegments to calculate the regions of
interest around each digit in an LCD or LED. To find the area of each
digit, all the segments of the indicator must be activated. Use
CWIMAQVision.ReadLCD to read the digits of an LCD or LED.
Identify Parts Under Inspection
In addition to making measurements after you set regions of inspection,
you also can identify parts using classification, OCR, and barcode reading.
Classifying Samples
Use classification to identify an unknown object by comparing a set of its
significant features to a set of features that conceptually represent classes
of known objects. Typical applications involving classification include the
following:
• Sorting—Sorts objects of varied shapes. For example, sorting different
mechanical parts on a conveyor belt into different bins.
• Inspection—Inspects objects by assigning each object an identification
score and then rejecting objects that do not closely match members of
the training set.
Before you classify objects, you must create a classifier file with samples
of the objects using the NI Classification Training Interface. Go to Start»
Programs»National Instruments»Classification Training to launch the
NI Classification Training Interface.
After you have trained samples of the objects you want to classify, use the
following methods to classify the image under inspection:
• Use CWIMAQVision.ReadClassifierFile to read in the classifier
file you created with the NI Classification Training Interface.
• Use CWIMAQClassifier.Classify to classify the image under
inspection.
Reading Characters
Use OCR to read text and/or characters in an image. Typical uses for OCR
in an inspection application include identifying or classifying components.
Before you read text and/or characters in an image, you must create a
character set file with samples of the characters using the OCR Training
Interface. Go to Start»Programs»National Instruments»Vision»OCR
Training to launch the OCR Training Interface.
After you have trained samples of the characters you want to read, use the
following methods to read the characters:
• Use NIOCR.ReadOCRFile to read in a character set file that you
created using the OCR Training Interface.
• Use NIOCR.ReadText to read the characters inside the ROI of the
image under inspection.
Reading Barcodes
Use barcode reading objects to read values encoded into 1D barcodes, Data
Matrix symbols, and PDF417 symbols.
Read 1D Barcodes
To read a 1D barcode, locate the barcode in the image using one of the
techniques described in the Instrument Reader Measurements section, and
then pass the Regions parameter into CWIMAQVision.ReadBarcode to
read the value encoded in the barcode. Specify the type of 1D barcode in
the application using the BarcodeType parameter. IMAQ Vision supports the following 1D barcode
types: Codabar, Code 39, Code 93, Code 128, EAN 8, EAN 13,
Interleaved 2 of 5, MSI, and UPCA.
Read Data Matrix Barcode
Use CWIMAQVision.ReadDataMatrixBarcode to read values encoded
in a Data Matrix barcode. This method can automatically determine the
location of the barcode and appropriate search options for the application.
However, you can improve the performance of the application by
specifying control values specific to the application.
CWIMAQVision.ReadDataMatrixBarcode can automatically locate
one or multiple Data Matrix barcodes in an image. However, you can
improve the inspection performance by locating the barcodes using one of
the techniques described in the Instrument Reader Measurements section,
and then passing the Regions parameter into
CWIMAQVision.ReadDataMatrixBarcode.
Tip If you need to read only one barcode per image, set
CWIMAQDataMatrixOptions.SearchMode to
cwimaqBarcode2DSearchSingleConservative to increase the speed of the method.
By default, CWIMAQVision.ReadDataMatrixBarcode determines if the
barcode has black cells on a white background or white cells on a black
background.
Note Specify round cells only if the Data Matrix cells are round and have clearly defined
edges. If the cells in the matrix touch one another, you must set CellShape to
cwimaqBarcode2DCellShapeSquare.
By default, CWIMAQVision.ReadDataMatrixBarcode assumes the
barcode cells are square. If the barcodes you need to read have round cells,
set the CellShape member of the CWIMAQDataMatrixOptions object to
cwimaqBarcode2DCellShapeRound.
Set the BarcodeShape member of the CWIMAQDataMatrixOptions
object to cwimaqBarcode2DShapeRectangular or
cwimaqBarcode2DShapeSquare depending on the shape of the
barcode you need to read.
Note Setting BarcodeShape to
cwimaqBarcode2DShapeRectangular when the barcode you need to read is square
reduces the reliability of the application.
By default, CWIMAQVision.ReadDataMatrixBarcode automatically
detects the type of barcode to read. You can improve the performance of the
function by specifying the type of barcode in the application. IMAQ Vision
supports Data Matrix types ECC 000 to ECC 140, and ECC 200.
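The following sketch gathers the CWIMAQDataMatrixOptions settings discussed in this section; the variable name dmOptions is assumed, and the configured object would then be supplied to CWIMAQVision.ReadDataMatrixBarcode.

    ' Assumes dmOptions is an existing CWIMAQDataMatrixOptions object.
    ' Only one barcode per image is expected, so use the faster search.
    dmOptions.SearchMode = cwimaqBarcode2DSearchSingleConservative
    ' The cells in this example are round with clearly defined edges.
    dmOptions.CellShape = cwimaqBarcode2DCellShapeRound
    ' The overall symbol in this example is square.
    dmOptions.BarcodeShape = cwimaqBarcode2DShapeSquare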
Read PDF417 Barcode
Use CWIMAQVision.ReadPDF417Barcode to read values encoded in a
PDF417 barcode.
By default, CWIMAQVision.ReadPDF417Barcode automatically locates
one or multiple PDF417 barcodes in an image. However, you can improve
the inspection performance by locating the barcodes using one of the
techniques described in the Instrument Reader Measurements section,
and then passing in Regions of the locations into
CWIMAQVision.ReadPDF417Barcode.
Tip If you need to read only one barcode per image, set the SearchMode parameter to
cwimaqBarcode2DSearchSingleConservative to increase the speed of the method.
Display Results
You can display the results obtained at various stages of the inspection
process on the window that displays the inspection image by overlaying
information about an image. The software attaches the information that you
want to overlay to the image, but it does not modify the image.
Access overlays using the CWIMAQImage.Overlays property. The
CWIMAQOverlays collection contains a single CWIMAQOverlay
object that you can access using CWIMAQImage.Overlays(1).
Note The CWIMAQImage.Overlays collection does not support usual collection
methods—such as Add, Remove, and RemoveAll—because they are reserved for
future use.
Use the following methods on the CWIMAQOverlay object to overlay
search regions, inspection results, and other information, such as text and
pictures. Overlays on a viewer image are automatically updated when you
call one of these methods.
• DrawLine—Overlays a CWIMAQLine object on an image.
• DrawConnectedPoints—Overlays a CWIMAQPoints collection
and draws a line between sequential points.
• DrawRectangle—Overlays a CWIMAQRectangle object on an
image.
• DrawOval—Overlays a CWIMAQOval object on an image.
• DrawArc—Overlays a CWIMAQArc object on an image.
• DrawPicture—Overlays a picture object onto the image.
• DrawText—Overlays text on an image.
• DrawRegions—Overlays an ROI described by the CWIMAQRegions
object on an image.
Tip You can select the color of overlays by using one of these methods. If you do not
supply a color to an overlay method, the CWIMAQOverlay.DefaultColor property
is used.
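For example, a sketch that overlays a search region and a text label on the inspection image. The Overlays property, DefaultColor, DrawRegions, and DrawText come from this section; the variable names and the argument lists of the draw calls are assumptions, so check the reference for the exact signatures.

    Dim overlay As CWIMAQOverlay
    Set overlay = inspectionImage.Overlays(1)

    ' Color used when a draw call does not specify one.
    overlay.DefaultColor = vbGreen

    ' Argument lists below are illustrative assumptions.
    overlay.DrawRegions searchRegions
    overlay.DrawText "PASS", resultPoint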
You can configure the following CWMachineVision methods to overlay
different types of information about the inspection image:
• FindStraightEdge
• FindCircularEdge
• FindConcentricEdge
• MeasureMaximumDistance
• MeasureMinimumDistance
• FindPattern
• CountAndMeasureObjects
• FindCoordTransformUsingRect
• FindCoordTransformUsingTwoRects
• FindCoordTransformUsingPattern
You can overlay the following information with all the above methods
except CWMachineVision.FindPattern:
• The search area input into the method
• The search lines used for edge detection
• The edges detected along the search lines
• The result of the method
Each of the above CWMachineVision methods has a settings object input
that allows you to select the information you want to overlay. Set the
boolean property that corresponds to the information you want to overlay
to True. With CWMachineVision.FindPattern, you can overlay the
search area and the result.
Use CWIMAQOverlay.Clear to clear any previous overlay information
from the image. Use CWIMAQVision.WriteImageAndVisionInfo
to save an image with its overlay information to a file. You can
read the information from the file into an image using
CWIMAQVision.ReadImageAndVisionInfo.
Note As with calibration information, overlay information is removed from an image
when the image size or orientation changes.
Chapter 6
Calibrating Images
This chapter describes how to calibrate the imaging system, save
calibration information, and attach calibration information to an image.
After you set up the imaging system, you may want to calibrate the system.
If the imaging setup is such that the camera axis is perpendicular or nearly
perpendicular to the object under inspection and the lens has no distortion,
use simple calibration. With simple calibration, you do not need to learn a
template. Instead, you define the distance between pixels in the horizontal
and vertical directions using real-world units.
If the camera axis is not perpendicular to the object under inspection or the
lens is distorted, use perspective and nonlinear distortion calibration to
calibrate the system.
Perspective and Nonlinear Distortion Calibration
Perspective errors and lens aberrations cause images to appear distorted.
This distortion misplaces information in an image, but it does not
necessarily destroy the information in the image. Calibrate the imaging
system if you need to compensate for perspective errors or nonlinear lens
distortion.
Follow these general steps to calibrate the imaging system:
1. Define a calibration template.
2. Define a reference coordinate system.
3. Learn the calibration information.
After you calibrate the imaging setup, you can attach the calibration
information to an image. Refer to the Attach Calibration Information
section of this chapter for more information. Depending on your needs, you
can apply the calibration information in one of the following ways:
• Convert pixel coordinates to real-world coordinates without correcting
the image
• Create a distortion-free image by correcting the image for perspective
errors and lens aberrations
Refer to Chapter 5, Performing Machine Vision Tasks, for more
information about applying calibration information before making
measurements.
Defining a Calibration Template
You can define a calibration template by supplying an image of a grid or
providing a list of pixel coordinates and their corresponding real-world
coordinates. This section discusses the grid method in detail.
A calibration template is a user-defined grid of circular dots. As shown in
Figure 6-1, the grid has constant spacings in the x and y directions. You can
use any calibration grid that meets the following criteria:
• The displacement in the x and y directions must be equal (dx = dy).
• The radius of the dots must be 6–10 pixels.
• The center-to-center distance between dots must range from
18 to 32 pixels, as shown in Figure 6-1.
• The minimum distance between the edges of the dots must be 6 pixels,
as shown in Figure 6-1.
Figure 6-1. Defining a Calibration Grid (1: Center-to-Center Distance; 2: Center of Grid Dots; 3: Distance Between Dot Edges)
Note You can use the calibration grid installed with IMAQ Vision at Start»Programs»
National Instruments»Vision»Documentation»Calibration Grid. The dots have radii
of 2 mm and center-to-center distances of 1 cm. Depending on the printer, these
measurements may change by a fraction of a millimeter. You can purchase highly
accurate calibration grids from optics suppliers, such as Edmund Industrial Optics.
Defining a Reference Coordinate System
To express measurements in real-world units, you must define a
coordinate system in the image of the grid. Use
CWIMAQLearnCalibrationOptions.CalibrationAxisInfo
to define a coordinate system by its origin, angle, and axis direction.
The origin, expressed in pixels, defines the center of the coordinate system.
The angle specifies the orientation of the coordinate system with respect to
the angle of the topmost row of dots in the grid image. The calibration
procedure automatically determines the direction of the horizontal axis in
the real world. The vertical axis direction can either be indirect, as shown
in Figure 6-2a, or direct, as shown in Figure 6-2b.
Figure 6-2. Axis Direction in the Image Plane
If you do not specify a coordinate system, the calibration process defines a
default coordinate system. If you specify a grid for the calibration process,
the software defines the default coordinate system as follows, as shown in
Figure 6-3:
1. The origin is placed at the center of the left, topmost dot in the
calibration grid.
2. The angle is set to 0°. This aligns the x-axis with the first row of dots
in the grid, as shown in Figure 6-3b.
3. The axis direction is set to indirect using
CWIMAQCoordinateSystem.AxisOrientation =
cwimaqAxisOrientationIndirect. This aligns the y-axis to the
first column of the dots in the grid, as shown in Figure 6-3b.
Figure 6-3. A Calibration Grid and an Image of the Grid (1: Origin of a Calibration Grid in the Real World; 2: Origin of the Same Calibration Grid in an Image)
Note If you specify a list of points instead of a grid for the calibration process,
the software defines a default coordinate system, as follows:
1. The origin is placed at the point in the list with the lowest x-coordinate
value and then the lowest y-coordinate value.
2. The angle is set to 0°.
3. The axis direction is set to indirect using
CWIMAQCoordinateSystem.AxisOrientation =
cwimaqAxisOrientationIndirect.
If you define a coordinate system yourself, carefully consider the
requirements of the application:
• Express the origin in pixels. Always choose an origin location that lies
within the calibration grid so that you can convert the location to
real-world units.
• Specify the angle as the angle between the x-axis of the new coordinate
system (x') and the top row of dots (x), as shown in Figure 6-4. If the
imaging system exhibits nonlinear distortion, you cannot visualize the
angle as you can in Figure 6-4 because the dots do not appear in
straight lines.
Figure 6-4 (1: Default Origin in a Calibration Grid Image; 2: User-Defined Origin)
Learning Calibration Information
After you define a calibration grid and reference axis, acquire an image of
the grid using the current imaging setup. For information about acquiring
images, refer to the Acquire or Read an Image section of Chapter 2, Getting
Measurement-Ready Images. The grid does not need to occupy the entire
image. You can choose a region within the image that contains the grid.
After you acquire an image of the grid, learn the calibration information
by inputting the image of the grid into
CWIMAQVision.LearnCalibrationGrid.
Note If you want to specify a list of points instead of a grid, use
CWIMAQVision.LearnCalibrationPoints to learn the calibration information.
Use the CWIMAQCalibrationPoints object to specify the pixel to real-world mapping.
Specifying Scaling Factors
Scaling factors are the real-world distances between the dots
in the calibration grid in the x and y directions and the units in
which the distances are measured. Use
CWIMAQCalibrationGridOptions.GridDescriptor to
specify the scaling factors.
Choosing a Region of Interest
Define a learning ROI during the learning process to define a region of the
calibration grid you want to learn. The software ignores dot centers outside
this region when it estimates the transformation. Creating a user-defined
ROI is an effective way to increase correction speeds depending on the
other calibration options selected. Pass a CWIMAQRegions collection to
CWIMAQVision.LearnCalibrationGrid or
CWIMAQVision.LearnCalibrationPoints.
Note The user-defined ROI represents the area in which you are interested. Do not
confuse this learning ROI with the calibration ROI that the learning algorithm computes.
Choosing a Learning Algorithm
Select a method in which to learn the calibration information: perspective
projection or nonlinear. Figure 6-5 illustrates the types of errors the image
can exhibit. Figure 6-5a shows an image of a calibration grid with no
errors. Figure 6-5b shows an image of a calibration grid with perspective
projection. Figure 6-5c shows an image of a calibration grid with nonlinear
distortion.
Figure 6-5. Types of Image Distortion
Choose the perspective projection algorithm when the system exhibits
perspective errors only. A perspective projection calibration has an
accurate transformation even in areas not covered by the calibration
grid, as shown in Figure 6-6. Set
CWIMAQLearnCalibrationOptions.CalibrationMethod to
cwimaqPerspectiveCalibration to choose the perspective calibration
algorithm. Learning and applying perspective projection is less
computationally intensive than the nonlinear method. However, perspective
projection cannot handle nonlinear distortions.
If the imaging setup exhibits nonlinear distortion, use the nonlinear
method. The nonlinear method guarantees accurate results only in the
area that the calibration grid covers, as shown in Figure 6-6. If the
system exhibits both perspective and nonlinear distortion, use the
nonlinear method to correct for both. Set
CWIMAQLearnCalibrationOptions.CalibrationMethod to
cwimaqNonLinearCalibration to choose the nonlinear calibration
algorithm.
Figure 6-6. Calibration ROIs (1: Calibration ROI Using the Perspective Algorithm; 2: Calibration ROI Using the Nonlinear Algorithm)
Using the Learning Score
The learning process returns a score that reflects how well the software
learned the input image. A high learning score indicates that you chose
the appropriate learning algorithm, that the grid image complies with the
guidelines, and that the vision system setup is adequate.
Note A high score does not reflect the accuracy of the system.
If the learning process returns a learning score below 600, try the following:
1. Make sure the grid complies with the guidelines listed in the
Defining a Calibration Template section.
2. Check the lighting conditions. If you have too much or too little
lighting, the software may estimate the center of the dots incorrectly.
Also, adjust the threshold range to distinguish the dots from the
background.
3. Select another learning algorithm. When nonlinear lens distortion is
present, using perspective projection sometimes results in a low
learning score.
Learning the Error Map
An error map helps you gauge the quality of the complete system. The error
map returns an estimated error range to expect when a pixel coordinate
is transformed into a real-world coordinate. The transformation
accuracy may be higher than the value the error range indicates. Set
CWIMAQLearnCalibrationOptions.LearnErrorMap to True to learn
the error map.
Learning the Correction Table
If the speed of image correction is a critical factor for the application, use
a correction table. The correction table is a lookup table that contains
the real-world location information of all the pixels in the image. The
correction table is stored in memory. The extra memory requirements for
this option are based on the size of the image. Use this option when you
want to simultaneously correct multiple images in the vision application.
Set CWIMAQLearnCalibrationOptions.LearnCorrectionTable to
True to learn the correction table.
Setting the Scaling Mode
Use the scaling mode option to choose the appearance of the
corrected image. Set
CWIMAQLearnCalibrationOptions.CorrectionScalingMode to
cwimaqScaleToFit or cwimaqScaleToPreserveArea. For more
information about the scaling mode, refer to Chapter 3, System Setup and
Calibration, in the IMAQ Vision Concepts Manual.
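The following sketch collects the learning options described in the preceding sections; the variable name learnOptions is assumed, and the configured object would then be passed with the grid image to CWIMAQVision.LearnCalibrationGrid.

    ' Assumes learnOptions is an existing CWIMAQLearnCalibrationOptions object.
    ' The setup exhibits lens distortion, so use the nonlinear method.
    learnOptions.CalibrationMethod = cwimaqNonLinearCalibration
    ' Estimate the error range of pixel-to-real-world conversions.
    learnOptions.LearnErrorMap = True
    ' Build a lookup table so corrections run faster at inspection time.
    learnOptions.LearnCorrectionTable = True
    ' Choose how the corrected image is scaled.
    learnOptions.CorrectionScalingMode = cwimaqScaleToFit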
Calibration Invalidation
Any image processing operation that changes the image size or orientation
voids the calibration information in a calibrated image. Examples
of methods that void calibration information include
CWIMAQVision.Resample2, CWIMAQVision.Extract2,
CWIMAQVision.Unwrap, and CWIMAQImage.ArrayToImage.
Simple Calibration
When the axis of the camera is perpendicular to the image plane and lens
distortion is negligible, use simple calibration. In simple calibration, a pixel
coordinate is transformed into a real-world coordinate through scaling in
the horizontal and vertical directions.
Use simple calibration to map pixel coordinates to real-world coordinates
directly without a calibration grid. The software rotates and scales a pixel
coordinate according to predefined coordinate reference and scaling
factors. You can assign the calibration to an arbitrary image using
CWIMAQVision.SetSimpleCalibration.
To perform a simple calibration, set a coordinate system (angle, center,
and axis direction) and scaling factors on the defined axis, as shown in
Figure 6-7. Express the angle between the x-axis and the horizontal axis
of the image in degrees. Express the center as the position, in pixels, where
you want the coordinate system origin. Set the axis direction to direct or
indirect. Simple calibration also offers a correction table option and a
scaling mode option.
Use CWIMAQSimpleCalibrationOptions.CalibrationAxisInfo
to define the coordinate reference. Use
CWIMAQSimpleCalibrationOptions.GridDescriptor
to specify the scaling factors. Use
CWIMAQSimpleCalibrationOptions.CorrectionScalingMode
to set the scaling mode. Set
CWIMAQSimpleCalibrationOptions.LearnCorrectionTable
to True to learn the correction table.
Figure 6-7. Defining a Simple Calibration (1: Origin)
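For example, a sketch of configuring and applying a simple calibration; the property names come from this section, while the variable names, the CWIMAQVision1 object reference, and the SetSimpleCalibration argument list are assumptions.

    ' Assumes simpleOptions is an existing CWIMAQSimpleCalibrationOptions
    ' object whose CalibrationAxisInfo and GridDescriptor members have
    ' already been filled in with the coordinate reference and scaling factors.
    simpleOptions.CorrectionScalingMode = cwimaqScaleToFit
    simpleOptions.LearnCorrectionTable = True

    ' Attach the simple calibration to an image (argument list assumed).
    CWIMAQVision1.SetSimpleCalibration inspectionImage, simpleOptions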
Save Calibration Information
After you learn the calibration information, you can save it so that you
can use it in the vision application. Use
CWIMAQVision.WriteImageAndVisionInfo to save the image of
the grid and its associated calibration information to a file. To read the
file containing the calibration information, use
CWIMAQVision.ReadImageAndVisionInfo. For more information
about attaching the calibration information you read from another image,
refer to the Attach Calibration Information section.
Attach Calibration Information
When you finish calibrating the setup, you can apply the calibration
settings to images that you acquire. Use
CWIMAQVision.SetCalibrationInformation to attach the
calibration information of the current setup to each image you acquire.
This method takes in a source image containing the calibration information
and a destination image that you want to calibrate. The output image is the
inspection image with the calibration information attached to it.
Using the calibration information attached to the image, you can
accurately convert pixel coordinates to real-world coordinates to
make any of the analytic geometry measurements with
CWIMAQVision.ConvertPixelToRealWorldCoordinates. If
the application requires shape measurements, correct the image by
removing distortion with CWIMAQVision.CorrectCalibratedImage.
Note Correcting images is a time-intensive operation.
A calibrated image is different from a corrected image.
Note Because calibration information is part of the image, it is propagated throughout
the processing and analysis of the image. Methods that modify the image size,
such as an image rotation method, void the calibration information. Use
CWIMAQVision.WriteImageAndVisionInfo to save the image and all of the attached
calibration information to a file. If you modify the image after using
CWIMAQVision.WriteImageAndVisionInfo, you must relearn the calibration
information and use CWIMAQVision.WriteImageAndVisionInfo again.
Appendix A
Technical Support and Professional Services
Visit the following sections of the National Instruments Web site at
ni.com for technical support and professional services:
• Support—Online technical support resources at ni.com/support
include the following:
– Self-Help Resources—For immediate answers and solutions,
visit the award-winning National Instruments Web site for
software drivers and updates, a searchable KnowledgeBase,
product manuals, step-by-step troubleshooting wizards, thousands
of example programs, tutorials, application notes, instrument
drivers, and so on.
– Free Technical Support—All registered users receive free Basic
Service, which includes access to hundreds of Application
Engineers worldwide in the NI Developer Exchange at
ni.com/exchange. National Instruments Application Engineers
make sure every question receives an answer.
• Training and Certification—Visit ni.com/training for
self-paced training, eLearning virtual classrooms, interactive CDs,
and Certification program information. You also can register for
instructor-led, hands-on courses at locations around the world.
• System Integration—If you have time constraints, limited in-house
technical resources, or other project challenges, National Instruments
Alliance Partner members can help. To learn more, call your local
NI office or visit ni.com/alliance.
If you searched ni.com and could not find the answers you need, contact
your local office or NI corporate headquarters. Phone numbers for our
worldwide offices are listed at the front of this manual. You also can visit
the Worldwide Offices section of ni.com/niglobal to access the branch
office Web sites, which provide up-to-date contact information, support
phone numbers, email addresses, and current events.
Glossary
Numbers
1D
One-dimensional.
2D
Two-dimensional.
3D
Three-dimensional.
A
AIPD
The National Instruments internal image file format used for saving
complex images and calibration information associated with an image
(extension APD).
alignment
The process by which a machine vision application determines the location,
orientation, and scale of a part being inspected.
alpha channel
The channel used to code extra information, such as gamma correction,
about a color image. The alpha channel is stored as the first byte in the
four-byte representation of an RGB pixel.
area
(1) A rectangular portion of an acquisition window or frame that is
controlled and defined by software.
(2) The size of an object in pixels or user-defined units.
arithmetic operators
The image operations multiply, divide, add, subtract, and modulo.
array
An ordered, indexed set of data elements of the same type.
auto-median function
A function that uses dual combinations of opening and closing operations
to smooth the boundaries of objects.
B
b
Bit. One binary digit, either 0 or 1.
B
Byte. Eight related bits of data, an eight-bit binary number. Also denotes
the amount of memory required to store one byte of data.
barycenter
The grayscale value representing the centroid of the range of an image’s
grayscale values in the image histogram.
binary image
An image in which the objects usually have a pixel intensity of 1 (or 255)
and the background has a pixel intensity of 0.
binary morphology
Functions that perform morphological operations on a binary image.
binary threshold
The separation of an image into objects of interest (assigned a pixel value
of 1) and background (assigned pixel values of 0) based on the intensities
of the image pixels.
bit depth
The number of bits (n) used to encode the value of a pixel. For a given n,
a pixel can take 2^n different values. For example, if n equals 8, a pixel can
take 256 different values ranging from 0 to 255. If n equals 16, a pixel can
take 65,536 different values ranging from 0 to 65,535 or –32,768 to 32,767.
blurring
Reduces the amount of detail in an image. Blurring commonly occurs
because the camera is out of focus. You can blur an image intentionally by
applying a lowpass frequency filter.
BMP
Bitmap. An image file format commonly used for 8-bit and color images.
BMP images have the file extension BMP.
border function
Removes objects (or particles) in a binary image that touch the image
border.
brightness
(1) A constant added to the red, green, and blue components of a color pixel
during the color decoding process.
(2) The perception by which white objects are distinguished from gray and
light objects from dark objects.
buffer
Temporary storage for acquired data.
C
caliper
(1) A function in the NI Vision Assistant and in NI Vision Builder for
Automated Inspection that calculates distances, angles, circular fits, and the
center of mass based on positions given by edge detection, particle analysis,
centroid, and search functions.
(2) A measurement function that finds edge pairs along a specified path in
the image. This function performs an edge extraction and then finds edge
pairs based on specified criteria such as the distance between the leading
and trailing edges, edge contrasts, and so forth.
center of mass
The point on an object where all the mass of the object could be
concentrated without changing the first moment of the object about
any axis.
chroma
The color information in a video signal.
chromaticity
The combination of hue and saturation. The relationship between
chromaticity and brightness characterizes a color.
closing
A dilation followed by an erosion. A closing fills small holes in objects and
smooths the boundaries of objects.
clustering
A technique where the image is sorted within a discrete number of classes
corresponding to the number of phases perceived in an image. The gray
values and a barycenter are determined for each class. This process is
repeated until a value is obtained that represents the center of mass for each
phase or class.
CLUT
Color lookup table. A table for converting the value of a pixel in an image
into a red, green, and blue (RGB) intensity.
color image
An image containing color information, usually encoded in the RGB form.
color space
The mathematical representation for a color. For example, color can be
described in terms of red, green, and blue; hue, saturation, and luminance;
or hue, saturation, and intensity.
complex image
Stores information obtained from the FFT of an image. The complex
numbers that compose the FFT plane are encoded in 64-bit floating-point
values: 32 bits for the real part and 32 bits for the imaginary part.
connectivity
Defines which of the surrounding pixels of a given pixel constitute its
neighborhood.
connectivity-4
Only pixels adjacent in the horizontal and vertical directions are considered
neighbors.
connectivity-8
All adjacent pixels are considered neighbors.
contrast
A constant multiplication factor applied to the luma and chroma
components of a color pixel in the color decoding process.
convex hull
The smallest convex polygon that can encapsulate a particle.
convex hull function
Computes the convex hull of objects in a binary image.
convolution
See linear filter.
convolution kernel
2D matrices, or templates, used to represent the filter in the filtering
process. The contents of these kernels are a discrete two-dimensional
representation of the impulse response of the filter that they represent.
D
Danielsson function
Similar to the distance functions, but with more accurate results.
determinism
A characteristic of a system that describes how consistently it can respond
to external events or perform operations within a given time limit.
digital image
An image f (x, y) that has been converted into a discrete number of pixels.
Both spatial coordinates and brightness are specified.
dilation
Increases the size of an object along its boundary and removes tiny holes in
the object.
driver
Software that controls a specific hardware device, such as an IMAQ or
DAQ device.
E
edge
Defined by a sharp transition in the pixel intensities in an image or along an
array of pixels.
edge contrast
The difference between the average pixel intensity before and the average
pixel intensity after the edge.
edge detection
Any of several techniques to identify the edges of objects in an image.
edge steepness
The number of pixels that corresponds to the slope or transition area
of an edge.
energy center
The center of mass of a grayscale image. See center of mass.
equalize function
See histogram equalization.
erosion
Reduces the size of an object along its boundary and eliminates isolated
points in the image.
exponential and gamma corrections
Expand the high gray-level information in an image while suppressing low
gray-level information.
exponential function
Decreases brightness and increases contrast in bright regions of an image,
and decreases contrast in dark regions of an image.
F
FFT
Fast Fourier Transform. A method used to compute the Fourier transform
of an image.
fiducial
A reference pattern on a part that helps a machine vision application find
the part's location and orientation in an image.
Fourier transform
Transforms an image from the spatial domain to the frequency domain.
frequency filters
The counterparts of spatial filters in the frequency domain. For images,
frequency information is in the form of spatial frequency.
ft
Feet.
function
A set of software instructions executed by a single line of code that may
have input and/or output parameters and returns a value when executed.
G
gamma
The nonlinear change in the difference between the video signal’s
brightness level and the voltage level needed to produce that brightness.
gradient convolution filter
See gradient filter.
gradient filter
An edge detection algorithm that extracts the contours in gray-level values.
Gradient filters include the Prewitt and Sobel filters.
gray level
The brightness of a pixel in an image.
gray-level dilation
Increases the brightness of pixels in an image that are surrounded by other
pixels with a higher intensity.
gray-level erosion
Reduces the brightness of pixels in an image that are surrounded by other
pixels with a lower intensity.
grayscale image
An image with monochrome information.
grayscale morphology
Functions that perform morphological operations on a gray-level image.
H
h
Hour.
highpass attenuation
The inverse of lowpass attenuation.
highpass filter
Emphasizes the intensity variations in an image, detects edges or object boundaries, and enhances fine details in an image.
highpass frequency filter
Removes or attenuates low frequencies present in the frequency domain of
the image. A highpass frequency filter suppresses information related to
slow variations of light intensities in the spatial image.
highpass truncation
The inverse of lowpass truncation.
histogram
Indicates the quantitative distribution of the pixels of an image per gray-level value.
histogram equalization
Transforms the gray-level values of the pixels of an image to occupy the entire range of the histogram, thus increasing the contrast of the image. The histogram range in an 8-bit image is 0 to 255.
histogram inversion
Finds the photometric negative of an image. The histogram of a reversed image is equal to the original histogram flipped horizontally around the center of the histogram.
histograph
Histogram that can be wired directly into a graph.
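As a generic illustration of the histogram and histogram equalization operations defined above (this Python sketch is not the IMAQ Vision implementation; the function names and sample pixel values are invented):

    # Illustrative sketch only; not the IMAQ Vision implementation.
    def histogram(pixels):
        """Count how many pixels take each gray-level value (0-255)."""
        counts = [0] * 256
        for p in pixels:
            counts[p] += 1
        return counts

    def equalize(pixels):
        """Remap gray levels so they spread over the 0-255 range,
        using the cumulative histogram as the mapping function."""
        counts = histogram(pixels)
        total = len(pixels)
        cdf = []
        running = 0
        for c in counts:
            running += c
            cdf.append(running)
        # Scale the cumulative distribution to 0-255.
        return [round(cdf[p] * 255 / total) for p in pixels]

    pixels = [10, 10, 12, 200, 200, 201]      # a tiny hypothetical image
    print(histogram(pixels)[10])              # 2 pixels have gray level 10
    print(equalize(pixels))                   # remapped values occupy more of the 0-255 range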
hit-miss function
Locates objects in the image similar to the pattern defined in the structuring
element.
HSI
A color encoding scheme in hue, saturation, and intensity.
HSL
A color encoding scheme using hue, saturation, and luminance information where each pixel in the image is encoded using 32 bits: 8 bits for hue, 8 bits for saturation, 8 bits for luminance, and 8 unused bits.
HSV
A color encoding scheme in hue, saturation, and value.
hue
Represents the dominant color of a pixel. The hue function is a continuous function that covers all the possible colors generated using the R, G, and B primaries. See also RGB.
Hz
Hertz. Frequency in units of 1/second.
I
I/O
Input/output. The transfer of data to/from a computer system involving
communications channels, operator interface devices, and/or data
acquisition and control interfaces.
image
A two-dimensional light intensity function f (x, y) where x and y denote
spatial coordinates and the value f at any point (x, y) is proportional to the
brightness at that point.
image border
A user-defined region of pixels surrounding an image. Functions that process pixels based on the value of the pixel neighbors require image borders.
Image Browser
An image that contains thumbnails of images to analyze or process in a vision application.
image buffer
A memory location used to store images.
image definition
The number of values a pixel can take on, which is the number of colors or
shades that you can see in the image.
image display environment
A window or control that displays an image.
image enhancement
The process of improving the quality of an image that you acquire from
a sensor in terms of signal-to-noise ratio, image contrast, edge definition,
and so on.
image file
A file containing pixel data and additional information about the image.
image format
Defines how an image is stored in a file. Usually composed of a header
followed by the pixel data.
image mask
A binary image that isolates parts of a source image for further processing.
A pixel in the source image is processed if its corresponding mask pixel has
a non-zero value. A source pixel whose corresponding mask pixel has a
value of 0 is left unchanged.
image palette
The gradation of colors used to display an image on screen, usually defined
by a CLUT.
image processing
Encompasses various processes and analysis functions that you can apply
to an image.
image source
The original input image.
imaging
Any process of acquiring and displaying images and analyzing image data.
IMAQ
Image Acquisition.
inner gradient
Finds the inner boundary of objects.
inspection
The process by which parts are tested for simple defects such as missing parts or cracks on part surfaces.
inspection function
Analyzes groups of pixels within an image and returns information about the size, shape, position, and pixel connectivity. Typical applications include assessing the quality of parts, analyzing defects, locating objects, and sorting objects.
instrument driver
A set of high-level software functions, such as NI-IMAQ, that control specific plug-in computer boards. Instrument drivers are available in several forms, ranging from a function callable from a programming language to a VI in LabVIEW.
intensity
The sum of the Red, Green, and Blue primary colors divided by three: (Red + Green + Blue)/3.
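For example, a color pixel with red = 255, green = 128, and blue = 64 has an intensity of (255 + 128 + 64)/3 = 149.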
intensity calibration
Assigns user-defined quantities such as optical densities or concentrations
to the gray-level values in an image.
intensity profile
The gray-level distribution of the pixels along an ROI in an image.
intensity range
Defines the range of gray-level values in an object of an image.
intensity threshold
Characterizes an object based on the range of gray-level values in the
object. If the intensity range of the object falls within the user-specified
range, it is considered an object. Otherwise it is considered part of the
background.
J
jitter
The maximum amount of time that the execution of an algorithm varies
from one execution to the next.
JPEG
Joint Photographic Experts Group. An image file format for storing 8-bit
and color images with lossy compression. JPEG images have the file
extension JPG.
K
kernel
A structure that represents a pixel and its relationship to its neighbors.
The relationship is specified by weighted coefficients of each neighbor.
L
labeling
A morphology operation that identifies each object in a binary image and
assigns a unique pixel value to all the pixels in an object. This process is
useful for identifying the number of objects in the image and giving each
object a unique pixel intensity.
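The following Python sketch illustrates the idea with a simple connectivity-4 labeling pass; it is a generic example, not the IMAQ Vision labeling function, and the function name and sample image are invented:

    # Illustrative sketch only; not the IMAQ Vision labeling routine.
    from collections import deque

    def label(binary):
        """Assign a unique label (1, 2, ...) to each connectivity-4
        group of non-zero pixels in a binary image (2D list of 0/1)."""
        rows, cols = len(binary), len(binary[0])
        labels = [[0] * cols for _ in range(rows)]
        next_label = 0
        for r in range(rows):
            for c in range(cols):
                if binary[r][c] and not labels[r][c]:
                    next_label += 1
                    queue = deque([(r, c)])
                    labels[r][c] = next_label
                    while queue:
                        y, x = queue.popleft()
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and binary[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = next_label
                                queue.append((ny, nx))
        return labels, next_label

    binary = [[1, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 1, 1, 1]]
    labels, count = label(binary)
    print(count)      # 2 objects found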
line gauge
Measures the distance between selected edges with high-precision subpixel accuracy along a line in an image. For example, this function can be used to measure distances between points and edges. This function also can step and repeat its measurements across the image.
line profile
Represents the gray-level distribution along a line of pixels in an image.
linear filter
A special algorithm that calculates the value of a pixel based on its own
pixel value as well as the pixel values of its neighbors. The sum of this
calculation is divided by the sum of the elements in the matrix to obtain
a new pixel value.
logarithmic function
Increases the brightness and contrast in dark regions of an image and decreases the contrast in bright regions of the image.
logic operators
The image operations AND, NAND, OR, XOR, NOR, XNOR, difference, mask, mean, max, and min.
lossless compression
Compression in which the decompressed image is identical to the original image.
lossy compression
Compression in which the decompressed image is visually similar but not identical to the original image.
lowpass attenuation
Applies a linear attenuation to the frequencies in an image, with no attenuation at the lowest frequency and full attenuation at the highest frequency.
lowpass FFT filter
Removes or attenuates high frequencies present in the FFT domain of an image.
lowpass filter
Attenuates intensity variations in an image. You can use these filters to smooth an image by eliminating fine details and blurring edges.
lowpass frequency filter
Attenuates high frequencies present in the frequency domain of the image.
A lowpass frequency filter suppresses information related to fast variations
of light intensities in the spatial image.
lowpass truncation
Removes all frequency information above a certain frequency.
L-skeleton function
Uses an L-shaped structuring element in the skeleton function.
luma
The brightness information in the video picture. The luma signal amplitude varies in proportion to the brightness of the video signal and corresponds exactly to the monochrome picture.
luminance
See luma.
LUT
Lookup table. A table containing values used to transform the gray-level values of an image. For each gray-level value in the image, the corresponding new value is obtained from the lookup table.
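As a generic illustration (not an IMAQ Vision function), a lookup table can be represented as a 256-element array indexed by each pixel value; the table and sample pixel values here are invented:

    # Illustrative sketch only; not an IMAQ Vision function.
    # A lookup table maps every possible gray level (0-255) to a new value.
    inverse_lut = [255 - g for g in range(256)]   # photometric negative

    pixels = [0, 64, 128, 255]                    # a tiny hypothetical image
    print([inverse_lut[p] for p in pixels])       # [255, 191, 127, 0]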
M
M
(1) Mega, the standard metric prefix for 1 million or 10^6, when used with units of measure such as volts and hertz.
(2) Mega, the prefix for 1,048,576, or 2^20, when used with B to quantify data or computer memory.
machine vision
An automated application that performs a set of visual inspection tasks.
mask FFT filter
Removes frequencies contained in a mask (range) specified by the user.
match score
A number ranging from 0 to 1000 that indicates how closely an acquired image matches the template image. A match score of 1000 indicates a perfect match. A match score of 0 indicates no match.
MB
Megabyte of memory.
median filter
A lowpass filter that assigns to each pixel the median value of its neighbors.
This filter effectively removes isolated pixels without blurring the contours
of objects.
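As a generic illustration of taking the median of a 3 x 3 neighborhood (not the IMAQ Vision median filter; the function name and sample image are invented):

    # Illustrative sketch only; not the IMAQ Vision median filter.
    def median_at(image, x, y):
        """Return the median of the 3 x 3 neighborhood centered at (x, y)."""
        values = sorted(image[y + j][x + i] for j in (-1, 0, 1) for i in (-1, 0, 1))
        return values[4]                 # middle of the 9 sorted values

    image = [[10, 10, 10],
             [10, 255, 10],              # isolated bright pixel (noise)
             [10, 10, 10]]
    print(median_at(image, 1, 1))        # 10 -- the outlier is removed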
memory buffer
See buffer.
MMX
Multimedia Extensions. An Intel chip-based technology that allows parallel operations on integers, which results in accelerated processing of 8-bit images.
morphological transformations
Extract and alter the structure of objects in an image. You can use these
transformations for expanding (dilating) or reducing (eroding) objects,
filling holes, closing inclusions, or smoothing borders. They are used
primarily to delineate objects and prepare them for quantitative inspection
analysis.
M-skeleton function
Uses an M-shaped structuring element in the skeleton function.
N
neighbor
A pixel whose value affects the value of a nearby pixel when an image is
processed. The neighbors of a pixel are usually defined by a kernel or a
structuring element.
neighborhood operations
Operations on a point in an image that take into consideration the values of
the pixels neighboring that point.
NI-IMAQ
The driver software for National Instruments IMAQ hardware.
nonlinear filter
Replaces each pixel value with a nonlinear function of its surrounding
pixels.
nonlinear gradient filter
A highpass edge-extraction filter that favors vertical edges.
nonlinear Prewitt filter
A highpass, edge-extraction filter based on two-dimensional gradient
information.
nonlinear Sobel filter
A highpass, edge-extraction filter based on two-dimensional gradient
information. The filter has a smoothing effect that reduces noise
enhancements caused by gradient operators.
Nth order filter
Filters an image using a nonlinear filter. This filter orders (or classifies)
the pixel values surrounding the pixel being processed. The pixel being
processed is set to the Nth pixel value, where N is the order of the filter.
number of planes (in an image)
The number of arrays of pixels that compose the image. A gray-level or
pseudo-color image is composed of one plane, while an RGB image is
composed of three planes (one for the red component, one for the blue,
and one for the green).
O
OCR
Optical Character Recognition. The ability of a machine to read
human-readable text.
OCV
Optical Character Verification. A machine vision application that inspects the quality of printed characters.
offset
The coordinate position in an image where you want to place the origin of another image. Setting an offset is useful when performing mask operations.
opening
An erosion followed by a dilation. An opening removes small objects and
smooths boundaries of objects in the image.
operators
Allow masking, combination, and comparison of images. You can use
arithmetic and logic operators in IMAQ Vision.
optical representation
Contains the low-frequency information at the center and the high-frequency information at the corners of an FFT-transformed image.
outer gradient
Finds the outer boundary of objects.
P
palette
The gradation of colors used to display an image on screen, usually defined
by a CLUT.
particle
A connected region or grouping of non-zero pixels in a binary image.
particle analysis
A series of processing operations and analysis functions that produce some
information about the particles in an image.
pattern matching
The technique used to quickly locate a grayscale template within a grayscale image.
picture element
An element of a digital image. Also called pixel.
pixel
Picture element. The smallest division that makes up the video scan line. For display on a computer monitor, a pixel's optimum dimension is square (aspect ratio of 1:1, or the width equal to the height).
pixel aspect ratio
The ratio between the physical horizontal size and the vertical size of the
region covered by the pixel. An acquired pixel should optimally be square,
thus the optimal value is 1.0, but typically it falls between 0.95 and 1.05,
depending on camera quality.
pixel calibration
Directly calibrates the physical dimensions of a pixel in an image.
pixel depth
The number of bits used to represent the gray level of a pixel.
PNG
Portable Network Graphic. An image file format for storing 8-bit, 16-bit, and color images with lossless compression. PNG images have the file extension PNG.
Prewitt filter
An edge detection algorithm that extracts the contours in gray-level values
using a 3 × 3 filter kernel.
proper-closing
A finite combination of successive closing and opening operations that you can use to fill small holes and smooth the boundaries of objects.
proper-opening
A finite combination of successive opening and closing operations that you can use to remove small particles and smooth the boundaries of objects.
Q
quantitative analysis
Obtaining various measurements of objects in an image.
R
real time
A property of an event or system in which data is processed as it is acquired
instead of being accumulated and processed at a later time.
resolution
The number of rows and columns of pixels. An image composed of m rows and n columns has a resolution of m × n.
reverse function
Inverts the pixel values in an image, producing a photometric negative of the image.
RGB
A color encoding scheme using red, green, and blue (RGB) color information where each pixel in the color image is encoded using 32 bits: 8 bits for red, 8 bits for green, 8 bits for blue, and 8 bits for the alpha value (unused).
RGB U64
A color encoding scheme using red, green, and blue (RGB) color
information where each pixel in the color image is encoded using 64 bits:
16 bits for red, 16 bits for green, 16 bits for blue, and 16 bits for the alpha
value (unused).
Roberts filter
An edge detection algorithm that extracts the contours in gray level,
favoring diagonal edges.
ROI
Region of interest.
(1) An area of the image that is graphically selected from a window
displaying the image. This area can be used to focus further processing.
(2) A hardware-programmable rectangular portion of the acquisition
window.
ROI tools
A collection of tools that enable you to select a region of interest from an
image. These tools let you select points, lines, annuli, polygons, rectangles,
rotated rectangles, ovals, and freehand open and closed contours.
rotational shift
The amount by which one image is rotated relative to a reference image.
This rotation is computed relative to the center of the image.
rotation-invariant matching
A pattern matching technique in which the reference pattern can be located
at any orientation in the test image as well as rotated at any degree.
S
saturation
The amount of white added to a pure color. Saturation relates to the richness
of a color. A saturation of zero corresponds to a pure color with no white
added. Pink is a red with low saturation.
scale-invariant matching
A pattern matching technique in which the reference pattern can be any size
in the test image.
segmentation function
Fully partitions a labeled binary image into non-overlapping segments,
with each segment containing a unique object.
separation function
Separates objects that touch each other by narrow isthmuses.
shift-invariant matching
A pattern matching technique in which the reference pattern can be located
anywhere in the test image but cannot be rotated or scaled.
skeleton function
Applies a succession of thinning operations to an object until its width becomes one pixel.
smoothing filter
Blurs an image by attenuating variations of light intensity in the neighborhood of a pixel.
Sobel filter
An edge detection algorithm that extracts the contours in gray-level values using a 3 × 3 filter kernel.
spatial calibration
Assigns physical dimensions to the area of a pixel in an image.
spatial filters
Alter the intensity of a pixel relative to variations in intensities of its neighboring pixels. You can use these filters for edge detection, image enhancement, noise reduction, smoothing, and so forth.
spatial resolution
The number of pixels in an image, in terms of the number of rows and
columns in the image.
square function
See exponential function.
square root function
See logarithmic function.
standard representation
Contains the low-frequency information at the corners and high-frequency information at the center of an FFT-transformed image.
structuring element
A binary mask used in most morphological operations. A structuring element is used to determine which neighboring pixels contribute in the operation.
subpixel analysis
Finds the location of the edge coordinates in terms of fractions of a pixel.
T
template
A color, shape, or pattern that you are trying to match in an image using the
color matching, shape matching, or pattern matching functions. A template
can be a region selected from an image or it can be an entire image.
threshold
Separates objects from the background by assigning all pixels with
intensities within a specified range to the object and the rest of the pixels to
the background. In the resulting binary image, objects are represented with
a pixel intensity of 255 and the background is set to 0.
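As a generic illustration of this operation (not the IMAQ Vision threshold function; the function name, bounds, and sample image are invented):

    # Illustrative sketch only; not the IMAQ Vision threshold function.
    def threshold(image, lower, upper):
        """Set pixels whose gray level lies in [lower, upper] to 255
        (object) and all other pixels to 0 (background)."""
        return [[255 if lower <= p <= upper else 0 for p in row]
                for row in image]

    image = [[12, 80, 200],
             [90, 150, 30]]
    print(threshold(image, 80, 200))
    # [[0, 255, 255], [255, 255, 0]]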
threshold interval
Two parameters, the lower threshold gray-level value and the upper threshold gray-level value.
TIFF
Tagged Image File Format. An image format commonly used for encoding 8-bit, 16-bit, and color images. TIFF images have the file extension TIF.
time-bounded
Describes algorithms that are designed to support a lower and upper bound on execution time.
tools palette
A collection of tools that enable you to select regions of interest and zoom in and out of an image.
V
value
The grayscale intensity of a color pixel computed as the average of the
maximum and minimum red, green, and blue values of that pixel.
Index
Numerics
1D barcodes, reading, 5-29

A
acquiring images, 2-4
    continuous acquisition, 2-5
    one-shot acquisition, 2-4
Acquisition Type combo box, 2-4
ActiveX objects, 1-5
adding shapes to ROIs, 3-5
analyzing images, 2-7, 2-8
Annulus tool, 3-2
Application, 1-6
application development
arrays, converting to images, 2-6
attaching calibration information to images, 2-7, 6-10
attenuation
    highpass, 2-12
    lowpass, 2-12

B
barcodes
    reading, 5-29
    reading 1D, 5-29
    reading data matrix barcodes, 5-30
    reading PDF417 barcodes, 5-31
binary images, improving, 4-2
Broken Line tool, 3-2
building
    coordinate transformation with edge detection, 5-3
    coordinate transformation with pattern matching, 5-5
building coordinate transformations, 5-7

C
calibrating images, 2-2, 6-1
    nonlinear, 6-1
    perspective, 6-1
calibration information
    attaching to images, 2-7, 6-10
method, 3-6
centroid method, 3-6
characters
    reading, 5-29
    training, 5-29
circle, finding points along the edge, 5-10
circles, finding, 5-10
classifying objects, 5-29
color content, evaluating in images, 3-9
color information
    learning, 3-9
    specifying, 3-10
color location, finding points, 5-25
color matching, 3-10
color pattern matching
    finding points, 5-19
    optimize speed with search strategy, 5-23
color pattern matching algorithms
    training, 5-21
    using contrast, 5-25
color scores, 5-24
color sensitivity, using to control granularity in template images, 5-23
color spectrums, learning, 3-10
color statistics, measuring, 3-6, 3-7
color template images, defining, 5-20
color, comparing in a specified region, 3-11
colors
    learning, 3-12
    significant colors in the image, 3-10
comparing colors in a specified region, 3-11
complex images, 2-12
    converting to arrays, 2-12
continuous acquisition, 2-5
contrast
    color pattern matching algorithms, 5-25
    pattern matching algorithms, 5-18
converting
    arrays to images, 2-6
    complex images to arrays, 2-12
    coordinates, 5-26
convolution filter, 2-10
coordinate systems, reference, 6-3
coordinate transformation
    building with edge detection, 5-3
    building with pattern matching, 5-5
correction tables, learning, 6-8
creating
    binary images, 4-1, 4-2
    images, 2-2
    IMAQ Vision applications, 1-5
    template images, 5-13
CWIMAQ control, 1-3
CWIMAQViewer control, 1-3
CWIMAQVision, 1-3

D
data matrix barcodes, 5-30
    reading, 5-30
defining
    calibration templates, 6-2
    color template images, 5-20
    effective template images, 5-13
    reference coordinate systems, 6-3
    regions interactively, 5-8
    regions of interest, 3-1
    regions of interest interactively, 3-1
    regions programmatically, 5-9
    ROIs programmatically, 3-5
    ROIs with masks, 3-6
    search areas, 5-16
    templates with colors that are unique to the pattern, 5-20
deployment, application, xi
detecting objects, 5-2
diagnostic tools (NI resources), A-1
displaying
    images, 2-6
    results, 5-31
distance measurements, 5-26
documentation
    conventions used in manual, ix
    NI resources, A-1
    related documentation, x
drivers
    NI resources, A-1
    NI-IMAQ, xi

E
edge detection, 5-3
    finding features, 5-9
edge points, finding along multiple search contours, 5-12
error map, learning, 6-8
examples (NI resources), A-1

F
features, finding with edge detection, 5-9
FFT, 2-11
files, reading, 2-6
filtering
    images, 2-9, 2-10
finding
    edge points along multiple search contours, 5-12
    edge points along one search contour, 5-11
    features with edge detection, 5-9
    lines, 5-10
    measurement points, 5-9
    points along the edge of a circle, 5-10
    points using color pattern matching, 5-19
    points using pattern matching, 5-12
    points with color location, 5-25
Free Region tool, 3-3
Freeline tool, 3-3

G
geometrical measurements, 5-27
granularity
    color, 3-12
    using color sensitivity to control, 5-23
grayscale features, filtering, 2-10
features, 2-10
grayscale statistics, measuring, 3-6

H
help, technical support, A-1
highpass
    attenuation, 2-12
    filter, 2-9

I
images
    acquiring, 2-4
    attaching calibration information, 2-7, 6-10
    calibrating, 2-2
    complex, 2-12
    creating, 2-2
    displaying, 2-6
    evaluating color content, 3-9
    filtering, 2-9, 2-10
    filtering grayscale features, 2-10
    highlighting details using LUTs, 2-9
    improving, 2-9
    measuring, 5-26
    reading, 2-4
    signal-to-noise ratio, 2-9
    transitions, 2-9
imaging systems, setting up, 2-1
IMAQ Vision applications, creating, 1-5
improving
    binary images, 4-2
increasing
    speed of the color pattern matching algorithm, 5-25
    speed of the pattern matching algorithm, 5-18
instrument, A-1
instrument drivers, xi
instrument drivers (NI resources), A-1
instrument reader measurements, 5-28
interactively defining regions, 5-8

K

L
learning, 6-5
    calibration information, 6-5
    color information, 3-9
    color spectrums, 3-10
    colors, 3-12
    correction tables, 6-8
    error maps, 6-8
learning algorithm, specifying, 6-6
learning calibration information
    correction tables, 6-8
    error maps, 6-8
    setting the scaling mode, 6-8
    specifying a learning algorithm, 6-6
    specifying a region of interest, 6-6
    using learning scores, 6-7
    voiding calibrations, 6-9
learning score, using, 6-7
light intensity, measuring, 3-6
lighting effects on image colors, 3-11
Line tool, 3-2
lines, finding, 5-10
locating objects to detect, 5-2
lowpass
    attenuation, 2-12
    filter, 2-9
LUTs, 2-9
    highlighting details in images, 2-9

M
machine vision, 5-1
masks, defining regions of interest, 3-6
measurement points, finding, 5-9
measurements
    distance, 5-26
    geometry, 5-27
    instrument reader, 5-28
measuring
    color statistics, 3-6, 3-7
    grayscale statistics, 3-6
    light intensity, 3-6
    particles, 4-4
method for building coordinate transformations, 5-7
multiple ROIs, using to view color differences in an image, 3-11
multiple search contours, finding edge points along, 5-12

N
National Instruments support and services, A-1
NI-IMAQ, xi
niocr.ocx, 1-4
nonlinear calibration, 6-1
Nth order filter, 2-10
O
objects
    classifying, 5-29
    detecting, 5-2
    locating, 5-2
OCR, 5-29
one-shot acquisition, 2-4
optimizing speed of the color pattern matching algorithm, 5-23
Oval tool, 3-2

P
Pan tool, 3-3
particle analysis, 4-1
    performing, 4-1
particle measurements, 4-4
particle shapes, improving, 4-4
particles
    measuring, 4-4
    removing unwanted, 4-3
pattern matching
    building a coordinate transformation, 5-5
    finding points, 5-12
    score, 5-24
    setting rotation angle ranges, 5-18
    setting tolerances, 5-17, 5-23
    tolerances, setting, 5-23
    training algorithm, 5-15
    verifying results, 5-19
pattern matching algorithms
    using contrast, 5-18
PDF417 barcodes, reading, 5-31
performing particle analysis, 4-1
perspective calibration, 6-1
pixel coordinates, converting to real-world coordinates, 5-26
Point tool, 3-2
points
    finding along one search contour, 5-11
    finding along the edge of a circle, 5-10
    finding measurement points, 5-9
    finding with color location, 5-25
    finding with color pattern matching, 5-19
    finding with pattern matching, 5-12
Polygon tool, 3-3
programmatically defining
    regions, 5-9
    regions of interest, 3-5
programming examples (NI resources), A-1

R
reading
    barcodes, 5-29
    characters, 5-29
    files, 2-6
    images, 2-4
Rectangle tool, 3-2
reference coordinate systems, 6-3
    defining, 6-3
region of interest, 6-6
    specifying, 6-6
regions
    defining interactively, 5-8
    programmatically defining, 5-9
regions of interest
    defining, 3-1
    defining interactively, 3-1
    defining with masks, 3-6
related documentation, x
removing unwanted particles, 4-3
results, 5-19
results
    displaying, 5-31
    verifying for pattern matching, 5-19
ROI selection methods, 5-8
ROIs
    adding shapes, 3-5
    programmatically defining, 3-5
Rotated Rectangle tool, 3-2
rotation angle ranges
    increasing for color pattern matching algorithms, 5-25
    increasing for pattern matching algorithms, 5-18
rotationally symmetric template, 5-20

S
saving calibration information, 6-10
scaling mode, setting, 6-8
search algorithms, testing on test images, 5-18, 5-25
search area, defining, 5-16, 5-22
search areas, 5-8
    setting, 5-8
search contour, finding points, 5-11
search contours, finding edge points along multiple search contours, 5-12
color pattern matching algorithms, 5-23
Selection tool, 3-2
separating touching particles, 4-3
setting
    granularity, 5-23
    rotation angle ranges for pattern matching, 5-18, 5-25
    scaling mode, 6-8
    search areas, 5-8
setting up measurement systems, 2-1
shape scores, 5-24
signal-to-noise ratio, 2-9
simple calibration, 6-9
software (NI resources), A-1
specifying
    color information, 3-10
    granularity to learn a color, 3-12
    learning algorithm, 6-6
    region of interest, 6-6
    scaling factors, 6-6
support, technical, A-1
symmetric templates, 5-13

T
template
    background information, 5-15
    coarse features, 5-14
    defining with colors that are unique to the pattern, 5-20
    strong edges, 5-14
template images
    defining, 5-13
templates
    background information, 5-21
    calibration, 6-2
    coarse features, 5-20
    detail, 5-20
    positional information, 5-20
    strong edges, 5-20
test images, testing search algorithms, 5-18, 5-25
testing search algorithms, 5-18, 5-25
tolerances, setting for pattern matching, 5-17
touching particles, separating, 4-3
training
    characters, 5-29
    color pattern matching algorithms, 5-21
    pattern matching algorithm, 5-15
troubleshooting (NI resources), A-1

U
using
    learning scores, 6-7
    ranking to verify pattern matching results, 5-19

V
viewing color differences in an image using multiple ROIs, 3-11
Vision for Visual Basic organization, 1-2
voiding calibrations, 6-9

W
Web resources, A-1

Z
Zoom tool, 3-3