White Papers, Case Studies, and Other Technical Papers
You can download white papers, videos, and other useful application information from the links below, along with the application notes on the Application Notes page.
Emergent Vision Technologies
This application note describes the mechanism by which multiple Emergent cameras can be synchronized. Figure 1: System Hardware illustrates the hardware involved:
- Emergent cameras connected to Myricom Sync NICs via SFP+ cabling (fiber or direct attach).
- Myricom Dual-Port Sync NICs with SMB IRIG-B00X input in one or more PCs.
- An IRIG-B00X timecode generator using either a GPS-based or internally generated timecode.
This application note describes in detail the advantages of using Myricom NICs and MVA with Emergent Vision cameras. It explains how MVA gains its significant performance advantages and compares it to scaled-up versions of the Intel PRO/1000 GigE Vision NIC and driver, the current choice for 1 Gigabit machine vision applications and software.
Emergent Vision Technologies FAQs
What is 10GigE?
Does GigEVision work for 10GigE?
What is the bandwidth of 10GigE?
What are the cable options for 10GigE and what is the max cable length?
What are the cost implications for 10GigE?
What is the power consumption for 10GigE?
What is Myricom’s MVA and how does it impact performance?
What is the benefit of using 10GigE for my application?
What is the jitter and latency of 10GigE? How does this compare to 1GigE?
How does 10GigE compare with other interfaces?
What operating systems are supported?
Will 10GigE cameras work with GigEVision compatible software?
What off the shelf components are available for 10GigE?
Where will 10GigE be in the future?
How does 10GigE compare with the upcoming USB 3.1?
How can I synchronize multiple cameras?
What software can I use for my 10GigE camera?
What computers support 10GigE?
What 10GigE accessories does Emergent offer?
What is the maximum frame rate I can achieve with my Emergent camera?
This application note describes the mechanism by which Emergent cameras can use Birger Engineering Canon EF adapters for electronic iris and focus control. Pictured below is the HS-12000 connected to the Birger Canon EF adapter and a standard Canon EF lens.
Almost every modern manufacturing process or apparatus uses machine vision systems. These systems ensure fast, accurate, and repeatable results, and therefore guarantee consistent quality over time. The main component of every machine vision system is a camera utilizing either a CCD or CMOS image sensor. While there are many variations of image sensors, most cameras utilize interline transfer imaging sensors, which can generally be classed in three categories: area scan, line scan, or time delay integration (TDI).
Wide Dynamic Range Imaging
The Cheetah C4080 and C2880 cameras support the recently introduced high-speed ON Semiconductor KAC-12040 (12 MP) and KAC-06040 (6 MP) CMOS sensors.
The Cheetah C4080 and C2880 image sensors have a full-well capacity of 17,000 electrons and a readout noise of 25 electrons in Global Shutter (GS) mode (3.7 electrons in Rolling Shutter (RS) mode).
One way to increase dynamic range is to take several images at different exposures and then combine them to achieve a net higher dynamic range. The Cheetah camera, however, has been optimized for high-speed surveillance and tracking applications by employing a proprietary method that extends the sensor's dynamic range using multiple-slope exposures within a single frame time, eliminating motion-blur effects. This white paper examines dual exposure, calculation of the dynamic range improvement, and multi-slope integration, among other topics.
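As a rough sketch of the arithmetic behind such comparisons, the baseline dynamic range implied by the sensor figures above is the ratio of full-well capacity to read noise, expressed in decibels. The helper name below is ours, not from the white paper:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB: 20 * log10(full well / read noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Figures quoted above: 17,000 e- full well;
# 25 e- read noise (global shutter), 3.7 e- (rolling shutter).
dr_gs = dynamic_range_db(17000, 25)
dr_rs = dynamic_range_db(17000, 3.7)
print(f"GS: {dr_gs:.1f} dB, RS: {dr_rs:.1f} dB")
```

Multi-slope exposure raises the effective dynamic range beyond these single-slope baselines by compressing the response to bright scene regions within the same frame.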
- Chameleon Firmware Update v1_26
- Komodo Firmware Update v1_12
- Predator API Data Book
- Predator Application Installation Guide
- Predator Firmware Update v1_15
- Predator HW and Installation Guide
Automated license or number plate recognition (ALPR/ANPR) is one of the most challenging applications for optical character recognition (OCR) because of the variable conditions encountered and the expected effectiveness. A successful implementation depends not only on the strength of the underlying OCR tool but also on complementary image processing techniques and tools.
Digital cameras with color image sensors are now commonplace. The same is true for the computing power and device interfaces necessary to handle the additional data from color images. What’s more, as users become familiar and comfortable with machine vision technology, they seek to tackle more difficult or previously unsolvable applications. These circumstances combine to make color machine vision an area of mounting interest. Color machine vision poses unique challenges but it also brings some unique capabilities for manufacturing control and inspection.
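To make the idea concrete, one common color machine vision operation is classifying pixels by hue after converting from RGB to HSV, where color is separated from brightness. This is a minimal illustrative sketch; the function name, hue centers, and thresholds are our assumptions, and a real inspection system would be calibrated to the parts and lighting in use:

```python
import colorsys

def hue_label(r, g, b, tol_deg=30):
    """Classify an 8-bit RGB pixel by its hue angle (illustrative only)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < 0.2 or v < 0.2:
        return "neutral"  # too desaturated or dark to judge color reliably
    hue = h * 360  # hue angle in degrees, 0-360
    # Hypothetical hue centers; red wraps around 0/360 degrees.
    for name, center in [("red", 0), ("green", 120), ("blue", 240), ("red", 360)]:
        if abs(hue - center) <= tol_deg:
            return name
    return "other"

print(hue_label(200, 30, 40))  # a reddish pixel
```

Working in HSV (or a perceptual space such as L*a*b*) rather than raw RGB is what lets color inspection tolerate the brightness variation that defeats simple channel thresholds.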
Today’s Graphics Processing Units (GPUs), with their massive parallel architectures and tremendous memory bandwidth, are particularly well suited for accelerating image processing. A careful understanding of a GPU’s capabilities and available programming methods is needed to take full advantage of its computational power while achieving quick time-to-market and maintaining maximum flexibility.
Commercial machine vision software is currently classified along two lines: the conventional vision library and the vision-specific integrated development environment (IDE). Determining which software is right for your vision project depends upon a variety of factors: ease of use, productivity, flexibility, performance, completeness, and maintenance. This white paper uses these factors to contrast the two software development approaches and clearly establish the merits and drawbacks of each. The discussion assumes that the vision tools available in both types of software are similar, if not identical, and does not explore possible discrepancies with these tools. The discussion also ignores the hardware platform that the vision applications run on, so as not to bias one approach over the other.
Lens Cleaning: Keeping your lens clean is one of the most important steps you can take to maintain your lens’s highest level of performance. Dirt, dust, or grease on the front or rear surface of a lens can reduce the brightness and contrast of the image it reproduces. Regular lens cleaning is a cheap and effective way to maximize image quality.
A Few Considerations When Selecting And Using Graduated Filters: Graduated color or neutral filters are among the most useful and versatile filters in the Director of Photography’s repertoire. They allow visual looks to be achieved that are limited only by the cinematographer’s imagination and skill. Since they affect only part of the picture, and not all of it, they can be used alone, and in combinations, to make filmed images that don’t exist in nature.