Hi! Sorry, this is quite a long post with a lot of questions! I’ve been slowly working on a VR heritage-preservation project for the past few years, and photogrammetry has been central to acquiring 3D models of these heritage sites.
However, taking photos for photogrammetry is a slow and painful process, so a few friends and I recently decided to build a robotics platform that would autonomously navigate interior spaces and carry external cameras for a high-quality reconstruction as a post-production step.
The current state of development [picture]
I have zero robotics experience, but we’ve managed to figure out the hardware and get it controllable (without ROS) for now. We’ve settled on:
- An extruded aluminum frame
- Four-wheel omni drive
- DC motors with encoders as actuators
- Arduino for motor control
- Depth (Orbbec Astra) and LIDAR (RPLIDAR A1) sensors for SLAM
- RPi 4B for ROS2
- LiPo batteries
So far, I’ve tried reading the official ROS 2 documentation and that of its packages, but I find the information rather fragmented for beginners: I can’t build a mental overview of which package is used for what, or how they connect in a node graph.
We now have the hardware fully set up, and the Arduino takes button input to drive the wheels in preset modes using rough open-loop control. The next step is to make the bot remote-controllable via ROS 2 and to implement closed-loop control. However, there are many more questions yet to be answered:
How would the motion control be done with ROS2?
As I understand it, the navigation stack we’ll use later commands the vehicle by publishing Twist messages on /cmd_vel. So how do I go from /cmd_vel to wheel movement?
What is the process of translating the robot’s body motion into individual motor motion called in this context? Would it be classified as inverse kinematics?
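To make the question concrete, here’s roughly what I imagine that mapping looks like, sketched in Python. All the geometry here (wheels at 45° increments around the chassis centre, the radii) is assumed for illustration, not our actual measurements:

```python
import math

# Assumed geometry: four omni wheels at 45°, 135°, 225°, 315° around the
# chassis centre, each rolling tangentially to that circle ("X-drive").
WHEEL_ANGLES = [math.pi / 4, 3 * math.pi / 4, 5 * math.pi / 4, 7 * math.pi / 4]
R = 0.20           # distance from chassis centre to each wheel [m] (assumed)
WHEEL_RADIUS = 0.05  # wheel radius [m] (assumed)

def twist_to_wheel_omegas(vx, vy, wz):
    """Map a body-frame Twist (vx, vy in m/s; wz in rad/s) to four wheel
    angular velocities in rad/s. This is the inverse-kinematics step."""
    omegas = []
    for theta in WHEEL_ANGLES:
        # Speed of the wheel's contact point along its rolling direction:
        # the projection of (vx, vy) onto the rolling axis, plus the
        # contribution of body rotation at radius R.
        v_wheel = -math.sin(theta) * vx + math.cos(theta) * vy + R * wz
        omegas.append(v_wheel / WHEEL_RADIUS)
    return omegas
```

For example, a pure rotation (vx = vy = 0) should spin all four wheels at the same rate, and a pure translation should make opposite wheels cancel out overall. Is this the right mental model?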
How should the control system be distributed between the RPi and the Arduino? A few thoughts:
- The RPi converts /cmd_vel to individual wheel RPMs, receives the encoder ticks, and runs all the control logic in a ROS node, setting each wheel’s duty cycle directly
- The RPi converts /cmd_vel to target wheel angular velocities and sends them to the Arduino, which runs its own velocity PID loop to reach the target ω for each wheel
- The RPi forwards /cmd_vel to the Arduino, which computes the individual wheel ωs itself and uses closed-loop control to reach each wheel’s target speed
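For the second option, here is the kind of per-wheel velocity loop I imagine the Arduino would run. I’ve sketched it in Python for readability (real firmware would be C++), and the class name, gains, and PWM limits are just placeholders:

```python
class WheelVelocityPID:
    """Per-wheel PID velocity loop of the sort the Arduino could run.
    Gains are illustrative, not tuned values."""

    def __init__(self, kp, ki, kd, out_min=-255.0, out_max=255.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max  # PWM duty range
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_omega, measured_omega, dt):
        """Return a PWM duty command from the wheel-velocity error.
        measured_omega would come from encoder ticks / dt on real hardware."""
        error = target_omega - measured_omega
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to the PWM range with simple anti-windup: undo the
        # integral accumulation whenever the output saturates.
        if out > self.out_max:
            out = self.out_max
            self.integral -= error * dt
        elif out < self.out_min:
            out = self.out_min
            self.integral -= error * dt
        return out
```

The RPi would then only send four target ω values per cycle over serial, and the Arduino handles the fast inner loop. Does that split make sense?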
Also, wouldn’t any of the above solutions lead to TWO control loops in the end? One for reaching wheel velocity, and a later one for reaching a goal location by modifying the robot’s /cmd_vel. Is this how it’s normally done, or is it redundant?
Can you give a rough idea of how to get this setup (simulating our motors and controllers) working in Gazebo?
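From what I’ve read so far, one shortcut might be the gazebo_ros planar-move plugin, which moves the whole model directly from /cmd_vel without simulating individual wheels. I’m not sure the parameter names below are right for the ROS 2 version, so please treat this as a guess:

```xml
<gazebo>
  <!-- Guessed configuration: drives the model straight from Twist messages,
       skipping per-wheel physics. Parameter names may differ by version. -->
  <plugin name="planar_move" filename="libgazebo_ros_planar_move.so">
    <ros>
      <remapping>cmd_vel:=cmd_vel</remapping>
      <remapping>odom:=odom</remapping>
    </ros>
    <odometry_frame>odom</odometry_frame>
    <robot_base_frame>base_link</robot_base_frame>
  </plugin>
</gazebo>
```

Is this good enough for testing the navigation stack, or should we simulate the four wheels and their controllers properly?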
Would appreciate any tips and pointers. Thanks a lot!