California Polytechnic University, California
April 10, 2025
April 12, 2025
10.18260/1-2--55186
https://peer.asee.org/55186
Matthew Jabson is a graduate student in the Mechanical Engineering Department at California State Polytechnic University, Pomona, CA. He holds a Bachelor of Science in Mechanical Engineering, and his research emphasis is in hardware integration and motion planning.
Abhishek Vishwakarma is a graduate student in the Computer Science Department at California State Polytechnic University, Pomona, CA. He holds a Bachelor of Science in Information Technology, and his research emphasis is in computer vision and motion planning.
(We intend to follow this research with a full paper.) Designing an autonomous navigation system for an Ackerman-style vehicle presents unique challenges due to the complexity of its motion and the limitations of its sensors. When ground plane detection relies on lidar alone, smooth surfaces such as paved roads are readily classified as drivable, while traversable patches of grass and gravel may be misclassified as non-drivable. Cameras, paired with deep learning, can reliably detect these additional areas of travel, expanding the set of drivable regions and giving the path planner a greater breadth of routes. This paper outlines our approach to integrating ROS 2 (Robot Operating System 2) with Nav2 for effective navigation of a front-steered Ackerman vehicle, using camera and lidar data to enhance the classification of drivable areas through cost map augmentation.

The Ackerman-style vehicle in this project features a servo motor for steering, with the steering angle measured accurately by a 360-degree Hall-effect sensor. Rear propulsion is provided by two DC motors fitted with incremental encoders for velocity tracking. Together, these sensors enable more robust odometry, which we use to estimate the pose of the vehicle. An Arduino Mega serves as the intermediary for electrical controls, processing sensor data and issuing motor commands through motor drivers. The Arduino communicates with an NVIDIA Jetson Orin Nano Developer Kit over a serial link. The Orin hosts ROS 2 and processes the large data streams from the camera and lidar; computational tasks such as point cloud handling, cost map augmentation, and path planning are managed on the Orin through ROS topics and Nav2 plug-ins.

For visual perception, a ZED X stereo camera is mounted at the front of the vehicle. Ground plane detection is performed by running YOLOPv2 on the left image of the stereo pair. This deep learning model segments the image, identifying regions of drivable area and marking their limits with lane boundaries when driving on a road. The segmented image is then combined with stereo vision to create a depth map of the ground. A Velodyne lidar is used for more precise ground plane detection: it builds a full 360-degree representation of its surroundings as a 3D point cloud, and a reliable ground plane is obtained by applying ground segmentation to that cloud.

The core of the navigation system lies in the generation and augmentation of the 2D cost map. The ground plane extracted from the lidar defines the primary drivable area and serves as the base cost map. This map is augmented with information from the stereo camera, incorporating the drivable regions detected through road segmentation. Areas identified as non-drivable after augmentation are inflated in the cost map to create buffer zones, discouraging the path planner from hugging obstacles and deterring passage through tight areas. By combining inputs from the lidar and stereo camera in a layered approach to cost map generation, the system accounts for obstacles and terrain variations effectively, improving reliability in diverse environments. Nav2 leverages the augmented cost map to compute optimal paths within the constraints of the Ackerman motion model, and custom ROS 2 controllers translate Nav2's velocity commands into Ackerman-compatible movements.
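To illustrate the Arduino-to-Orin serial link described above, the sketch below shows how the Jetson side might parse a telemetry line carrying encoder counts and the Hall-sensor steering reading. The packet format, port name, and sensor resolutions are our own assumptions for illustration, not the protocol used on the vehicle:

```python
import serial  # pyserial

# Hypothetical packet format, one line per update from the Arduino:
#   "E:<left_ticks>,<right_ticks>;S:<steering_raw>\n"
# Encoder ticks are cumulative counts; steering_raw is the 360-degree
# Hall sensor reading in raw counts. All values here are assumptions.

STEERING_COUNTS_PER_DEG = 4096 / 360.0  # assumed 12-bit Hall sensor

def parse_packet(line: str):
    """Parse one telemetry line into (left_ticks, right_ticks, steer_deg)."""
    enc_part, steer_part = line.strip().split(";")
    left, right = (int(v) for v in enc_part.removeprefix("E:").split(","))
    steer_deg = int(steer_part.removeprefix("S:")) / STEERING_COUNTS_PER_DEG
    return left, right, steer_deg

if __name__ == "__main__":
    # Assumed device path and baud rate for the Arduino Mega.
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1.0) as port:
        while True:
            raw = port.readline().decode("ascii", errors="ignore")
            if raw.startswith("E:"):
                left, right, steer = parse_packet(raw)
                print(f"ticks L={left} R={right}  steering={steer:.1f} deg")
```

Timestamping each parsed packet on arrival keeps the encoder and steering streams synchronized for the odometry estimate.

The layered cost map combination can be sketched as a simple grid merge. The sketch below assumes the lidar ground plane and the camera's segmented drivable area have already been rasterized into aligned boolean grids; the cost values, grid alignment, and inflation radius are illustrative assumptions, and in the actual system this logic would live in Nav2 costmap layers:

```python
import numpy as np
from scipy.ndimage import binary_dilation

FREE, LETHAL = 0, 254    # Nav2-style cost conventions
INFLATION_CELLS = 6      # assumed buffer radius, in cells

def augment_costmap(lidar_drivable: np.ndarray,
                    camera_drivable: np.ndarray) -> np.ndarray:
    """Merge lidar and camera drivable-area masks into one cost grid.

    Both inputs are boolean grids of the same shape, True where that
    sensor considers the cell drivable, expressed in the same map frame.
    """
    # A cell is drivable if either sensor says so: the camera recovers
    # grass and gravel that lidar ground segmentation rejected.
    drivable = lidar_drivable | camera_drivable

    cost = np.where(drivable, FREE, LETHAL).astype(np.uint8)

    # Inflate non-drivable cells into buffer zones that discourage the
    # planner from hugging obstacles or squeezing through tight areas.
    buffer_zone = binary_dilation(~drivable, iterations=INFLATION_CELLS)
    cost[buffer_zone & drivable] = 128   # assumed mid-range buffer cost

    return cost
```

Translating Nav2's velocity commands into Ackerman-compatible movements reduces, under the bicycle model, to the relation delta = atan(L * omega / v), where L is the wheelbase. A minimal sketch, with the wheelbase and steering limit assumed for illustration:

```python
import math

WHEELBASE_M = 0.40                  # assumed wheelbase; measure on the vehicle
MAX_STEER_RAD = math.radians(30.0)  # assumed servo steering limit

def twist_to_ackerman(v: float, omega: float):
    """Convert a Nav2 Twist (linear v in m/s, angular omega in rad/s)
    into (steering angle in rad, drive speed in m/s) via the bicycle model."""
    if abs(v) < 1e-3:
        # An Ackerman vehicle cannot turn in place; hold the wheels straight.
        return 0.0, 0.0
    steer = math.atan(WHEELBASE_M * omega / v)
    steer = max(-MAX_STEER_RAD, min(MAX_STEER_RAD, steer))
    return steer, v
```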
One significant challenge is retrieving and synchronizing sensor data and transmitting control inputs to external hardware. For this purpose, we dedicate a resource manager and a controller manager within the ros2_control framework: the resource manager organizes data from multiple hardware interfaces, such as the lidar and the Arduino, and the controllers publish that data on ROS topics (a simplified sketch of this data flow appears below).

This system represents a comprehensive approach to autonomous navigation for Ackerman-style vehicles, integrating adaptive perception techniques, robust cost map generation, and sophisticated path planning using ROS 2 and Nav2. The combination of lidar and camera data allows for more robust identification of drivable and non-drivable areas, making the system well suited to real-world applications.
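As referenced above, ros2_control hardware components are normally written in C++ against the hardware_interface API; as a simplified stand-in, the rclpy node below only illustrates the data flow of polling vehicle state and republishing it for downstream controllers. The topic name, joint names, and polling rate are illustrative assumptions:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class ArduinoBridge(Node):
    """Simplified stand-in for a ros2_control hardware interface:
    polls vehicle state and publishes it for downstream controllers."""

    def __init__(self):
        super().__init__("arduino_bridge")
        self.pub = self.create_publisher(JointState, "wheel_states", 10)
        # 50 Hz polling rate is an assumed value.
        self.timer = self.create_timer(0.02, self.poll)

    def poll(self):
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = ["rear_left_wheel", "rear_right_wheel", "steering"]
        # On the real vehicle these values come from the serial link
        # to the Arduino; zeros stand in here.
        msg.position = [0.0, 0.0, 0.0]
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ArduinoBridge())

if __name__ == "__main__":
    main()
```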
Jabson, M. D., & Vishwakarma, A. B. (2025, April), Path Planning for an Ackerman-Style Vehicle Using ROS 2 and Nav2 Paper presented at 2025 ASEE PSW Conference, California Polytechnic University, California. 10.18260/1-2--55186
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2025 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.