Discussion - Docker and Devices (Docker for Robotics Pt 4)

This is the discussion topic for the video and/or blog post linked above. Please keep all replies relevant to the content, otherwise create a new topic.

I just wanted to share that I have moved all of my ros2 development to docker.

One of my biggest concerns with both ROS and ROS2 is that a system can get very messy if you work on multiple projects on the same machine. It is easy to forget what you have installed for one project.

It is not always clear what gets pulled in when you install something, and it is even less clear whether everything is removed and cleaned up when you uninstall it.

Now I can freely experiment with multiple versions of software without worrying about whether I have a physical machine or partition running the correct OS.

Finally, Docker makes it very easy to file reproducible bug reports. Historically it seems that 90% of ROS bugs were closed with "works for me" or "can't reproduce," even when tens of people had reported the same issue across multiple software releases.

Long story short… anything you can do to promote encapsulating ROS development environments in Docker containers is a great step forward.

Interestingly, this notion of preconfigured ROS development environments seems to be the entire business model of The Construct.


Hi,
I've created my first ROS2 Docker image on my host machine following the video (great video again, thanks @JoshNewans). I then ran up my ROS2 robot, but when I run "ros2 topic list" in the Docker container on the host (have I got the correct terminology?) it can't see any of the topics created by my robot; it just lists parameter_events and rosout :frowning_face:

I can happily ssh from the Docker container on the host to the robot, so I know networking is working.

Any ideas?

This is a fresh install of Ubuntu 22.04 on the host machine, with no native ROS2 install.

Regards
Andrew

In your docker run command, are you setting your network and IPC to host?

    --network=host \
    --ipc=host \

These are two security-related parameters that make the Docker network and inter-process communication run on the host machine rather than being encapsulated in the container itself.

Personally, I use

docker run -it --user ros --network=host --ipc=host \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw --env=DISPLAY \
    -v $PWD/config:/my_source \
    ros2-humble-dev-env

Hello @JoshNewans,

I am trying something that should be simple but I am struggling: accessing a USB webcam from a docker container.

Using the dockerfile in your tutorial as a starting point (i.e. osrf/ros:humble-desktop-full is the base image), I added a RUN instruction to install cheese, then built the image, then ran it with:

$ docker run -it --user ros --network=host --ipc=host -v $PWD/code/:/my_source_code  -v /tmp/.X11-unix:/tmp/.X11-unix:rw --env=DISPLAY --device=/dev/video2 --device=/dev/video3 nano_image

(when I plug in the camera, /dev/video2 and /dev/video3 are the two new devices that appear)

From inside the container the cheese app launches fine, but there is no streaming and the device pulldown menu is grayed out. On the console I get the following error messages:

(cheese:53): dbind-WARNING **: 19:52:59.649: Couldn't register with accessibility bus: An AppArmor policy prevents this sender from sending this message to this recipient; type="method_call", sender="(null)" (inactive) interface="org.freedesktop.DBus" member="Hello" error name="(unset)" requested_reply="0" destination="org.freedesktop.DBus" (bus)
MESA: error: Failed to query drm device.
libGL error: glx: failed to create dri3 screen
libGL error: failed to load driver: iris
libGL error: failed to open /dev/dri/card0: No such file or directory
libGL error: failed to load driver: iris
** Message: 19:52:59.899: cheese-application.vala:222: Error during camera setup: No device found


(cheese:53): cheese-CRITICAL **: 19:52:59.903: cheese_camera_device_get_name: assertion 'CHEESE_IS_CAMERA_DEVICE (device)' failed

(cheese:53): GLib-CRITICAL **: 19:52:59.903: g_variant_new_string: assertion 'string != NULL' failed

(cheese:53): GLib-CRITICAL **: 19:52:59.903: g_variant_ref_sink: assertion 'value != NULL' failed

(cheese:53): GLib-GIO-CRITICAL **: 19:52:59.903: g_settings_schema_key_type_check: assertion 'value != NULL' failed

(cheese:53): GLib-CRITICAL **: 19:52:59.903: g_variant_get_type_string: assertion 'value != NULL' failed

(cheese:53): GLib-GIO-CRITICAL **: 19:52:59.903: g_settings_set_value: key 'camera' in 'org.gnome.Cheese' expects type 's', but a GVariant of type '(null)' was given

(cheese:53): GLib-CRITICAL **: 19:52:59.903: g_variant_unref: assertion 'value != NULL' failed

** (cheese:53): CRITICAL **: 19:52:59.903: cheese_preferences_dialog_setup_resolutions_for_device: assertion 'device != NULL' failed

Seems the device is not accessible. sudo cheese fails completely (does not even launch) with:

No protocol specified

** (cheese:145): ERROR **: 19:54:13.667: cheese-application.vala:89: Unable to initialize libcheese-gtk
Trace/breakpoint trap

Note: cheese works just fine on the host

Any idea what I am doing wrong?

I'm having similar issues to @rekabuk. In my case I can ssh into my Jetson Nano, but the ROS nodes on it and on the PC don't see each other's topics. I was working with docker-compose files (roughly as sketched below) but also tried running the images directly with your docker run options. No luck.
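For reference, the docker-compose equivalent of those docker run options is roughly the following (a minimal sketch; the service and image names are placeholders for your own):

services:
  ros2:
    image: my_ros2_image   # placeholder; substitute your actual image
    network_mode: host     # equivalent of --network=host
    ipc: host              # equivalent of --ipc=host
    stdin_open: true       # equivalent of -i
    tty: true              # equivalent of -t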

I am not quite sure if what you (and rekabuk) are talking about is having a devcontainer on the PC and running ROS on the robot natively, or deploying on the robot using containers as well. I am trying to do the latter, because this allows me to use Humble on the old Ubuntu 18.04-based L4T Jetson OS. Would I have to do the Docker Swarm thing in that case?

My issue was that I'm also using WSL2 on my dev PC, which isn't automatically connected to the host network (like WSL1 was). I needed to proxy the required ports through my dev machine to WSL2.

The only trouble is, ROS doesn't only use the ports associated with ROS_DOMAIN_ID but also a whole bunch of others that get assigned randomly across the port range. In a desperate and hacky attempt, I used a script to proxy every single port from 1 to 65000 to the WSL2 distro, but the ROS instances STILL don't see each other.
I did confirm earlier that I can access an nginx container website running on WSL2 from the Jetson.
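For reference, a single forward of that kind on the Windows side looks roughly like this (the port number and WSL2 address here are placeholders):

    netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7400 connectaddress=<WSL2_IP> connectport=7400

One caveat worth knowing: netsh portproxy only forwards TCP, while DDS discovery and data traffic are UDP by default, so forwarding ports this way may never be enough on its own.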

I don't know, WSL2 just doesn't seem to gel with this networking setup of ROS. At least not while the mirrored networking mode, which makes it behave like WSL1 again, is still experimental.

Hmm, it looks like it really wants to use the video card for this, although it's Intel graphics, which I haven't played around with as much.

You could try setting --gpus=all and possibly also passing through /dev/dri/card0? I'd also try using --privileged and -v /dev:/dev to map everything through and see if that works to start with.
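For example, taking the command from earlier in the thread as a base, the permissive debugging variant would look something like this (a sketch only; --privileged plus -v /dev:/dev is deliberately broad and not something to keep long-term):

    docker run -it --user ros --network=host --ipc=host \
        -v /tmp/.X11-unix:/tmp/.X11-unix:rw --env=DISPLAY \
        --privileged \
        -v /dev:/dev \
        nano_image

If cheese works with that, you can narrow it back down to individual --device flags (e.g. /dev/dri/card0 alongside the video devices).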

That's helpful info! I've never tried using ROS in WSL so haven't had to work through any of these issues for myself yet.

I definitely conflated this issue with the Docker networking issue I talked about in the Docker for Robotics Pt1 thread. If you use the unicast discovery method I talked about there, you can control which ports are used. If you use WSL you then have to portproxy those.

The answer to this issue gives a good idea of how you can configure the middleware (in this case CycloneDDS) to use unicast, which works more nicely with Docker. @rekabuk

EDIT:
Actually, let me explain what I did, because it's not that many steps, but they are a bit different from the example in that issue I started out from. I'm on Linux on the host as well now.

This is what I did on both machines:

  1. Run your containers in host networking mode:
    docker run -it --net=host myImage

  2. Switch over to CycloneDDS
    sudo apt install ros-iron-rmw-cyclonedds-cpp
    export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

  3. Configure CycloneDDS for unicast
    Create an XML file (anywhere; the name doesn't matter either) and paste the following inside, depending on whether you're on the PC or the robot, replacing the NetworkInterface names and peer addresses with your actual IPs as strings (find them with ifconfig):

<?xml version="1.0"?>
<!--PC config (net=host)-->
<CycloneDDS>
  <Domain id="any">
    <General>
      <Interfaces>
        <NetworkInterface name="enp3s0"/>
      </Interfaces>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <ParticipantIndex>0</ParticipantIndex>
      <Peers>
        <Peer Address="${ROBOT_IP}"/>
      </Peers>
    </Discovery>
  </Domain>
</CycloneDDS>
<?xml version="1.0"?>
<!--Pi config (net=host)-->
<CycloneDDS>
  <Domain id="any">
    <General>
      <Interfaces>
        <NetworkInterface name="wlan0"/>
      </Interfaces>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <ParticipantIndex>1</ParticipantIndex> 
      <Peers>
        <Peer Address="${PC_IP}"/>
      </Peers>
    </Discovery>
  </Domain>
</CycloneDDS>
  4. Set the CYCLONEDDS_URI environment variable to the path of your XML file; in my case:
    export CYCLONEDDS_URI=/root/ddsConfig.xml

It makes sense to put the export commands in .bashrc as well.
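For example, appending them like this (using the values from my setup above):

    echo 'export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp' >> ~/.bashrc
    echo 'export CYCLONEDDS_URI=/root/ddsConfig.xml' >> ~/.bashrc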

And that's it; now you should be able to run the talker/listener demo nodes successfully.
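For reference, the demo pair comes from the demo_nodes_cpp package (included in the desktop variants of ROS 2):

    # On one machine:
    ros2 run demo_nodes_cpp talker
    # On the other:
    ros2 run demo_nodes_cpp listener

If the CycloneDDS config is being picked up on both sides, the listener should print the talker's messages.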