ROS 2 Python package for ArUco pose estimation using RGB-D fusion in simulation and real sensor pipelines.
This node is part of the CHARS execution and perception layer. It detects ArUco markers from RGB images, fuses depth at marker center pixels, and publishes precise TF frames for downstream pick-and-place actions.
- Detects ArUco markers (`DICT_6X6_250`) from RGB frames.
- Synchronizes RGB and depth streams with `ApproximateTimeSynchronizer`.
- Computes 3D translation using camera intrinsics + depth value.
- Uses OpenCV pose estimation for marker orientation.
- Publishes marker transforms to `/tf` as `aruco_box_<id>` frames.
- Falls back to OpenCV translation when depth is invalid.
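The depth-fused translation is a standard pinhole back-projection: take the marker-center pixel, read the depth there, and scale by the intrinsics from `camera_info`. A minimal sketch of that computation (the function name, signature, and numbers are illustrative, not the node's actual API):

```python
import numpy as np

def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into the camera frame.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy,
    Z = depth. Illustrative helper, not the package's API.
    """
    z = float(depth_m)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A marker centered on the principal point lies on the optical axis:
t = deproject_pixel(320.0, 240.0, 1.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# t == [0.0, 0.0, 1.5]
```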
In CHARS, this package provides high-precision perception for Layer 1 (Execution and Perception):
- Robots navigate/manipulate via Nav2 + MoveIt2.
- `aruco_depth_fusion` continuously updates object poses in TF.
- Local action servers query TF at execution time to improve pick/place accuracy.
```
aruco_depth_fusion/
├── aruco_depth_fusion/
│   └── aruco_fusion_node.py
├── launch/
│   └── aruco_fusion.launch.py
├── package.xml
└── setup.py
```
Declared in `package.xml`:

- `rclpy`
- `sensor_msgs`
- `geometry_msgs`
- `cv_bridge`
- `message_filters`
- `tf2_ros`
- `python3-opencv`
- `python3-numpy`
- `python3-scipy`
From your ROS 2 workspace root:
```bash
colcon build --packages-select aruco_depth_fusion
source install/setup.bash

# Run the node directly
ros2 run aruco_depth_fusion aruco_fusion_node

# Or via the launch file
ros2 launch aruco_depth_fusion aruco_fusion.launch.py
```

Subscribed topics:

- RGB image: `/cam1/depth_camera/image`
- Depth image: `/cam1/depth_camera/depth_image`
- Camera info: `/cam1/depth_camera/camera_info`
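The RGB and depth streams above generally arrive with slightly different timestamps; `ApproximateTimeSynchronizer` pairs messages whose stamps fall within a configurable slop. A pure-Python sketch of that matching criterion (a simplification for illustration, not the actual `message_filters` implementation):

```python
def pair_by_timestamp(rgb_stamps, depth_stamps, slop=0.05):
    """For each RGB stamp, greedily pick the closest depth stamp and accept
    the pair only if the gap is within `slop` seconds. Illustrative only;
    the node uses message_filters.ApproximateTimeSynchronizer.
    """
    pairs = []
    for t_rgb in rgb_stamps:
        best = min(depth_stamps, key=lambda t: abs(t - t_rgb), default=None)
        if best is not None and abs(best - t_rgb) <= slop:
            pairs.append((t_rgb, best))
    return pairs

# Stamps within 50 ms are paired; a depth frame 200 ms away is rejected.
matched = pair_by_timestamp([0.0, 0.033], [0.01, 0.20], slop=0.05)
```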
- Parent frame: `cam1/optical_frame`
- Child frame(s): `aruco_box_<marker_id>`
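TF transforms carry orientation as a quaternion, while OpenCV pose estimation returns a rotation vector (`rvec`), so a conversion sits between the two (the package depends on `python3-scipy`, whose `Rotation.from_rotvec(...).as_quat()` does the same job). A NumPy version of the conversion for reference (helper name is illustrative):

```python
import numpy as np

def rvec_to_quaternion(rvec):
    """Convert a rotation vector to an (x, y, z, w) quaternion.

    The axis is rvec / |rvec| and the angle is |rvec| radians.
    Illustrative helper; the node may use scipy for this instead.
    """
    rvec = np.asarray(rvec, dtype=float)
    angle = np.linalg.norm(rvec)
    if angle < 1e-12:                       # no rotation -> identity quaternion
        return np.array([0.0, 0.0, 0.0, 1.0])
    axis = rvec / angle
    xyz = axis * np.sin(angle / 2.0)
    return np.array([*xyz, np.cos(angle / 2.0)])
```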
The node opens an OpenCV debug window showing:

- Marker boundaries and center points
- Fused axis projection
- Pose label with source (`DEPTH` or `OPENCV`)
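The `DEPTH`/`OPENCV` label reflects the fallback described in the features: the depth-fused translation is used when the depth reading is usable, and the OpenCV estimate otherwise. A sketch of that selection logic (the helper name and return convention are assumptions, not the node's API):

```python
import math

def choose_translation(depth_translation, opencv_translation):
    """Pick the depth-fused translation when its Z component is a finite,
    positive depth; otherwise fall back to the OpenCV pose estimate.

    Returns (translation, source_label), where the label matches the
    DEPTH / OPENCV tag drawn in the debug window. Illustrative helper.
    """
    z = depth_translation[2] if depth_translation is not None else None
    if z is not None and math.isfinite(z) and z > 0.0:
        return depth_translation, "DEPTH"
    return opencv_translation, "OPENCV"
```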
```bash
# Check TF stream
ros2 topic echo /tf

# List available frames
ros2 run tf2_tools view_frames
```

The launch file declares configurable arguments (`rgb_topic`, `depth_topic`, `camera_info_topic`, `marker_size`, etc.).
Note: the current node implementation hardcodes the topic names and marker size internally, so changing the launch arguments has no effect until parameter handling is added to `aruco_fusion_node.py`.
- Start simulation and spawn camera + ArUco targets.
- Start this node.
- Confirm `aruco_box_*` frames are appearing in `/tf`.
- Use TF lookups in robot action servers for precise end-effector alignment.
- No detections:
  - Confirm marker dictionary is `DICT_6X6_250`.
  - Verify camera image contains visible markers.
- Invalid depth warnings:
  - Check depth stream encoding and range.
  - Ensure marker center pixels are inside valid depth regions.
- No TF frames:
  - Confirm `camera_info` is published and intrinsics are valid.
  - Verify synchronized RGB/depth streams are active.
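For the invalid-depth checks above, it helps to remember how ROS depth images are commonly encoded: `16UC1` stores millimeters as `uint16` with 0 meaning no return, while `32FC1` stores meters as `float32` with NaN/inf/0 meaning no return. A hypothetical validity helper along those lines (not the node's actual code):

```python
import numpy as np

def depth_at(depth_image, u, v, encoding):
    """Return depth in meters at pixel (u, v), or None if the reading is
    invalid. Note images are indexed row-first: depth_image[v, u].
    Illustrative helper mirroring the node's fallback condition.
    """
    raw = depth_image[v, u]
    if encoding == "16UC1":
        return raw / 1000.0 if raw > 0 else None       # uint16 millimeters
    if encoding == "32FC1":
        return float(raw) if np.isfinite(raw) and raw > 0.0 else None  # float meters
    raise ValueError(f"unsupported depth encoding: {encoding}")
```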
Apache-2.0