2022 ASEE Annual Conference & Exposition, Minneapolis, MN
Conference dates: June 26-29, 2022
Publication date: August 23, 2022
DOI: 10.18260/1-2--41725
https://peer.asee.org/41725
Sam Siewert has studied at the University of California Berkeley, University of Notre Dame, University of Houston, and University of Colorado Boulder, and holds a BS in Aerospace and Mechanical Engineering and an MS/Ph.D. in Computer Science. He worked in the computer engineering industry for twenty-four years before starting an academic career in 2012. Half of that time was spent on NASA space exploration programs, including the Spitzer Space Telescope, Space Shuttle mission control, and deep space programs. The other half was spent on commercial product development. His commercial work has ranged from I/O chip firmware architecture to scalable systems design of storage and networking solutions for high-performance computing. In 2020, Dr. Siewert joined California State University, Chico as full-time faculty and retains adjunct professor roles with Embry-Riddle Aeronautical University and the University of Colorado Boulder. Overall, his focus has been embedded systems, with an emphasis on autonomous systems, computer and machine vision, hybrid reconfigurable architecture, and operating systems. Related research interests include real-time theory, digital media, and fundamental computer architecture. Dr. Siewert has published numerous research, industry, and educational papers on these topics.
Student projects for “Real-Time Embedded Systems”, a course taught at University XYZ and online with Coursera, stress the ability to put theory into practice. The course includes a final project in which students design, implement, and test a computer vision synchronization experiment. The goal of the experiment is to observe a clock at two different rates, glitch-free, with no skipped values, blurs, or repeats, and with minimal drift at millisecond precision as a monotonic process. The gap between basic real-time concepts and the more in-depth analysis required to complete the final project is significant. Completing this project requires hands-on practice with an RTOS (Real-Time Operating System) or embedded Linux while, at the same time, learning rate monotonic theory and methods of feasibility analysis.

Presently, an introductory exercise is used, along with an introduction to practices using trace and profiling tools. Students learn multi-core programming and analyze multi-core performance of ARM A-series microprocessors running Linux. Current versions of Linux support fixed-priority and dynamic deadline scheduling for development of multi-service systems with predictable response. This introductory exercise, using synthetic workloads, has led to successful student outcomes for key practice and theory learning objectives, but it is somewhat abstract. The use of a synthetic workload executing a series computation gives students experience working with fixed- and dynamic-priority preemptive scheduling. While students learn to develop Linux POSIX concurrent code using shared-memory threading, they are simultaneously learning rate monotonic theory. The required math background and timing-analysis experience with diagrams and tools, in addition to learning new programming, can be demanding. The final project further requires students to make a leap from a synthetic workload to computer vision processing.

To better bridge this gap, and to separate thread programming from analysis and theory, a simulation workload has been developed and is presented in this paper. The simulation makes use of multiple processor cores, requires asymmetric and symmetric multi-processing, and adds a problem-based learning and experimental aspect to early exercises. This separation of concerns and early hands-on experimentation is hypothesized to benefit student success on the final project, where practice and theory must be combined. The hypothesis is based upon success using simulations to teach undergraduate parallel processing at California State University, Chico. Given that many real-time systems today make use of multi-core shared-memory parallel processing, the idea of using simulation has become more feasible, based upon allocation of some cores to real-time services while other cores are dedicated to the simulation. In this paper, more than a decade of feedback is summarized and critically reviewed for the present method of a synthetic workload and a progression of synthetic workloads leading up to a real-time application. Further, the proposal to replace the synthetic workload with simulation and to create an improved problem-based learning scaffold for the final project is detailed. The simulation includes dynamic control of a single-degree-of-freedom holonomic transportation system, a train, where the simulation must execute synchronized with real time using monotonic sequencing of thread execution.
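As a minimal sketch of the kind of synthetic-workload exercise described above, the following C program creates one POSIX thread scheduled SCHED_FIFO at a fixed rate-monotonic priority, pins it to a core, releases it periodically on the monotonic clock, burns CPU time with a series computation, and logs release timestamps to syslog. The service rate, priority value, core number, and program name are illustrative assumptions, not the course starter code; running with a real-time policy requires root or CAP_SYS_NICE.

/* Hypothetical synthetic-workload service: SCHED_FIFO, pinned to one core,
 * released every 100 ms, timestamps logged via syslog.
 * Build: gcc -O2 -o rt_synthetic rt_synthetic.c -lpthread
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <syslog.h>
#include <time.h>

#define PERIOD_NS 100000000L   /* 100 ms service period (10 Hz), illustrative */
#define RELEASES  50           /* run for about 5 seconds */

static void *service(void *arg)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < RELEASES; i++) {
        /* Synthetic workload: a simple series computation to consume CPU time. */
        volatile double sum = 0.0;
        for (long k = 1; k < 2000000; k++)
            sum += 1.0 / (double)k;

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        syslog(LOG_INFO, "release %d at %ld.%09ld", i,
               (long)now.tv_sec, (long)now.tv_nsec);

        /* Sleep until the next absolute release time to avoid cumulative drift. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) { next.tv_sec++; next.tv_nsec -= 1000000000L; }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}

int main(void)
{
    openlog("rt_synthetic", LOG_PID, LOG_USER);

    /* Rate-monotonic policy assigns the highest fixed priority to the
     * highest-rate service; with a single service here, a mid-range
     * SCHED_FIFO priority is simply chosen for illustration. */
    pthread_attr_t attr;
    struct sched_param prio = { .sched_priority = 80 };
    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &prio);

    /* Pin the service to core 1 so another core remains free for other work. */
    cpu_set_t cpus;
    CPU_ZERO(&cpus);
    CPU_SET(1, &cpus);
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    pthread_t tid;
    int rc = pthread_create(&tid, &attr, service, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_create failed (%d); SCHED_FIFO needs root or CAP_SYS_NICE\n", rc);
    else
        pthread_join(tid, NULL);

    closelog();
    return 0;
}

The same pattern of absolute-time periodic release and syslog timestamping is what students later inspect with trace tools to check for drift, missed releases, and preemption effects.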
The single-degree-of-freedom holonomic simulation has been designed to include a dashboard showing: 1) train speed, 2) train odometry, and 3) position on the track between stations. The simple physical intuition of the simulation is intended to help students think critically about the consequences of monitoring and control service scheduling. The services they must schedule and integrate are: 1) Acceleration/Braking Control, 2) Speed Monitoring, and 3) Odometry. Students must observe the computational load imparted to a single shared core by these services, using “htop” and syslog traces, and adjust service rates for optimized control and monitoring of the process, with careful consideration of processor loading bounds (rate monotonic and full utility). In all cases, the deadline is assumed to equal the service period. The train dynamics simulation itself runs on a different core (to avoid interference between the control and monitoring services and the simulation).

All code is provided, and students are asked to focus on integration and tuning of the service rates and to observe the impact of the scheduling method (scheduler type selected from default SCHED_OTHER, reference SCHED_RR, rate monotonic SCHED_FIFO, and dynamic SCHED_DEADLINE). Likewise, they are expected to understand basic utility computation and the priority assignment policy for fixed-priority rate monotonic scheduling (SCHED_FIFO). The problem introduces students to the use of timestamp tracing (using syslog) and to observation of how scheduling mechanism and policy can impact process control. The systems in use all have four or more cores, and simulation starter code will be provided, so students can focus on real-time service creation and control of the simulation running on separate processor cores. The revised approach also features early experimentation with dynamic priorities (Earliest Deadline First with Constant Bandwidth Scheduling, SCHED_DEADLINE) as well as traditional rate monotonic fixed priority (SCHED_FIFO) using Linux. The design for the new exercises using simulation is presented along with the simulation software design and code, and it will be the basis of an experiment to determine whether this problem-based learning approach improves overall student learning outcomes.
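The following small C program illustrates the kind of basic utility computation mentioned above: total utilization of the three services on their shared core, compared against the Liu and Layland rate-monotonic least upper bound U <= n(2^(1/n) - 1) and the full-utility bound of 1.0. The periods and execution times are made-up placeholder values, not numbers from the course exercise.

/* Hypothetical utilization check for the three train services on one core.
 * Build: gcc -O2 -o rm_check rm_check.c -lm
 */
#include <math.h>
#include <stdio.h>

struct service { const char *name; double T_ms; double C_ms; };  /* period, execution time */

int main(void)
{
    /* Placeholder periods and execution times for illustration only. */
    struct service s[] = {
        { "Acceleration/Braking Control", 100.0, 20.0 },  /* 10 Hz */
        { "Speed Monitoring",             200.0, 30.0 },  /*  5 Hz */
        { "Odometry",                     500.0, 50.0 },  /*  2 Hz */
    };
    int n = (int)(sizeof(s) / sizeof(s[0]));

    double U = 0.0;
    for (int i = 0; i < n; i++) {
        double u = s[i].C_ms / s[i].T_ms;   /* per-service utilization C/T */
        U += u;
        printf("%-30s T=%6.1f ms  C=%5.1f ms  u=%.3f\n",
               s[i].name, s[i].T_ms, s[i].C_ms, u);
    }

    /* Liu & Layland least upper bound for n services (about 0.780 for n = 3). */
    double rm_bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("Total U = %.3f, RM bound = %.3f, full-utility bound = 1.000\n", U, rm_bound);

    if (U <= rm_bound)
        printf("Feasible by the rate-monotonic least upper bound.\n");
    else
        printf("Inconclusive by the RM bound; exact response-time analysis is needed.\n");
    return 0;
}

With the placeholder numbers, total utilization is 0.45, which is below the RM bound of roughly 0.78, so the fixed-priority service set would be schedulable; students would then confirm the analysis empirically from syslog timestamps and htop observations when they tune the actual rates.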
Siewert, S., & Shah, R. (2022, August). Addressing Learning Objective Gaps Between Rate Monotonic Theory and Practice using Real-Time Simulation Exercises. Paper presented at 2022 ASEE Annual Conference & Exposition, Minneapolis, MN. 10.18260/1-2--41725
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2022 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015