
Teaching Robot Vision In Manufacturing Technology



1996 Annual Conference


Washington, District of Columbia

Publication Date: June 23, 1996

Start Date: June 23, 1996

End Date: June 26, 1996

Page Numbers: 1.425.1 - 1.425.5




Paper Authors

Zhongming Liang

NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract

Session 1463

Teaching Robot Vision in Manufacturing Technology

Zhongming Liang, Purdue University Fort Wayne


This paper discusses a number of experiments developed for teaching robot vision. The experiments help students learn the fundamental theories of machine vision and its applications in robotics.


With machine vision playing an increasingly important role in areas of robotics such as inspection, identification, visual servoing, and navigation,1 the manufacturing technology department sees the importance of teaching the fundamentals of machine vision. The topic has been difficult to teach because it involves a number of concepts that many students in manufacturing technology programs are not familiar with, especially when laboratory support was not completely in place.

In the spring and summer of 1995, with the help of a student majoring in electrical engineering technology, the author used the basic vision system to develop a number of experiments for robot vision. These include thresholding, image binarization, edge detection, object recognition, image feature extraction, and random object picking.
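To illustrate the first two experiments in the list, here is a minimal sketch of thresholding and image binarization, using NumPy as an assumed implementation library; the function name and threshold value are illustrative, not taken from the paper's programs.

```python
import numpy as np

def binarize(image, threshold=128):
    """Binarize an 8-bit grayscale image: pixels at or above the
    threshold become white (255), all others black (0)."""
    return np.where(image >= threshold, 255, 0).astype(np.uint8)

# Hypothetical 2x2 grayscale frame standing in for a grabbed image
frame = np.array([[10, 200], [130, 90]], dtype=np.uint8)
print(binarize(frame))
```

In a classroom setting the threshold is often chosen by inspecting the image histogram, which is what the thresholding experiment explores before binarization is applied.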

This paper briefly discusses all the experiments listed above. Computer programs will be available at the presentation, and a video showing the experiments will also be included.

The Vision System

The vision system includes a Sony XC-57 monochrome CCD (Charge-Coupled Device) camera, a Sony professional video monitor, and a Data Translation DT 2851 frame grabber, which is installed in a 486 IBM-compatible personal computer. The camera has 512 by 492 photosites and outputs analog signals in the EIA RS-170 format. The video monitor can display the original image from the camera or the processed image from the frame grabber in either the RS-170 standard or the NTSC standard.2 The frame grabber, shown in Figure 1, grabs one monochrome video frame from the camera within 1/30 second, converts it to a digital frame of 512 by 512 pixels of eight bits each, and stores it in either buffer 0 or buffer 1 on the board. The digital frames in the buffers can be transferred to the internal memory of the computer for further processing, analysis, and storage. Each look-up table (LUT) stored in the on-board memory is a conversion table of light intensity values for altering the image in a frame. For example, the LUT
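The LUT idea described above can be sketched in software. For an 8-bit image, a LUT is a 256-entry table mapping each input intensity to an output intensity; the hardware applies it to every pixel as the frame passes through. The snippet below is an illustrative software analogue, not the DT 2851's actual interface, and uses an inverting table as the example.

```python
import numpy as np

# A look-up table for 8-bit pixels: 256 entries, one output
# intensity per possible input intensity. This one inverts the
# image, so lut[v] == 255 - v.
lut = np.arange(256, dtype=np.uint8)[::-1]

def apply_lut(frame, lut):
    """Remap every pixel of an 8-bit frame through the LUT
    using NumPy integer-array indexing."""
    return lut[frame]

frame = np.array([[0, 128], [200, 255]], dtype=np.uint8)
print(apply_lut(frame, lut))
```

Because the table is built once and each pixel is a single table lookup, the same mechanism supports contrast stretching, thresholding, or inversion without changing the per-pixel cost.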


Liang, Z. (1996, June), Teaching Robot Vision In Manufacturing Technology Paper presented at 1996 Annual Conference, Washington, District of Columbia. 10.18260/1-2--6331

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 1996 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015