Session 1793
Simple Hardware Implementation of Neural Networks for Instruction in Analog Electronics
Kenneth J. Soda and Daniel J. Pack
Department of Electrical Engineering
United States Air Force Academy
Abstract

In light of the growing predominance of microprocessors and embedded electronic systems, instruction in basic analog and digital electronic circuits has come to appear less interesting and less important to contemporary students of electrical engineering. Despite the continuing importance of foundational circuit concepts, curricula across the country are reducing the emphasis placed on them in required courses or shifting them into optional courses. In hopes of mitigating this trend, we discuss a circuit system which incorporates traditional analog and digital MOSFET sub-circuits into a meaningful contemporary system, the neural network. Neural networks offer a unique approach for processing complex data streams without the need for digital processors. Constructed in a fashion which mimics biological nervous systems, these networks are finding applications in signal processing, control, and object recognition. In many cases, a properly prepared neural network can function faster than a comparable microprocessor-based system, with lower power consumption and lower complexity. Despite their potential and relative conceptual simplicity, it has been difficult to present electronic neural networks in a form convenient for the university classroom or electronics laboratory. In this paper we describe an approach for implementing a neural network through which many major analog and digital MOSFET circuit concepts can be illustrated and demonstrated. The approach is amenable to realization in discrete electronic modules, around which associated laboratory exercises and design projects may be created. Furthermore, the same concepts can be extended into Very Large Scale Integration (VLSI), where the limitations of component count and performance can be addressed to a far greater degree.
Introduction
The fundamental motivation to study neural networks is the belief that humans make better decisions than machines because of our ability to process information in parallel. By handling large amounts of data while simultaneously extracting and processing relevant contextual information from diverse sources, we are believed to fuse the information needed to arrive at fairly sophisticated decisions.
The idea of parallel distributed processing models received significant attention when Minsky showed a number of applications of connected networks called perceptrons1 in 1969.
1 M. Minsky and S. Papert, Perceptrons, The MIT Press, Cambridge, MA, 1969.
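To make the perceptron model concrete before turning to the hardware, the short Python sketch below (an illustration added here for clarity; the function name and the weight and threshold values are hypothetical, not taken from the paper) shows the computation a single perceptron performs: a weighted sum of its inputs compared against a threshold. This is the kind of multiply-accumulate-and-compare operation that an analog implementation would need to realize, with the threshold comparison standing in for the activation function.

    # Minimal perceptron sketch (illustrative; names and values are hypothetical).
    # A perceptron fires (outputs 1) when the weighted sum of its inputs
    # exceeds a threshold, and outputs 0 otherwise.
    def perceptron(inputs, weights, threshold):
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        return 1 if weighted_sum > threshold else 0

    # Example: with these (hypothetical) weights the unit computes logical AND.
    weights, threshold = [0.6, 0.6], 1.0
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, '->', perceptron([a, b], weights, threshold))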