Planet ROS
Planet ROS - http://planet.ros.org
ROS Discourse General: What if your Rosbags could talk? Meet Bagel🥯, the open-source tool we just released!
Huge thanks to @Katherine_Scott and @mrpollo for hosting us at the Joint ROS / PX4 Meetup at Neros in El Segundo, CA! It was an absolute blast connecting with the community in person!
Missed the demo? No worries! Here’s the scoop on what we unveiled (we showed it with PX4 ULogs, but yes, ROS2 and ROS1 are fully supported!)
The problem? We felt the pain of wrestling with robotics data and LLMs. Unlike PDF files, we’re talking about massive sensor arrays, complex camera feeds, dense LiDAR point clouds – making LLMs truly useful here has been a real challenge… at least for us.
The solution? Meet Bagel (GitHub - shouhengyi/bagel: Bagel is ChatGPT for physical data. Just ask questions. No Fuss.)! We built this powerful open-source tool to bridge that gap. Imagine simply asking questions about your robotics data, instead of endless parsing and plotting.
With Bagel, loaded with your ROS2 bag or PX4 ULog, you can ask things like:
- “Is this front left camera calibrated?”
- “Were there any hard decelerations detected in the IMU data?”
Sound like something that could change your workflow? We’re committed to building Bagel in the open, with your help! This is where you come in:
- Dive In! Clone the repo, give Bagel a spin, and tell us what you think.
- Speak Your Mind! Got an idea? File a feature request. Your insights are crucial to Bagel’s evolution.
- Code with Us! Open a PR and become a core contributor. Let’s build something amazing together.
- Feeling the Love? If Bagel sparks joy (or solves a big headache!), please consider giving us a star on GitHub. It’s a huge motivator!
Thanks a lot for being part of this journey. Happy prompting!
1 post - 1 participant
ROS Discourse General: ROS Naija LinkedIn Group
Exciting News for Nigerian Roboticists!
We now have a ROS Naija Community group on LinkedIn, a space for engineers, developers, and enthusiasts passionate about ROS (Robot Operating System) and robotics.
Whether you’re a student, hobbyist, researcher, or professional, this is the place to:
Connect with like-minded individuals
Share knowledge, resources, and opportunities
Collaborate on robotics and ROS-based projects
Ask questions and learn from others in the community
If you’re interested in ROS and robotics, you’re welcome to join:
Join here: LinkedIn Login, Sign in | LinkedIn
Let’s build and grow the Nigerian robotics ecosystem together!
#ROS #robotics #ROSNaija #NigeriaTech #Engineering #ROSCommunity #RobotOperatingSystem
1 post - 1 participant
ROS Discourse General: [Case Study] Cross-Morphology Policy Learning with UniVLA and PiPER Robotic Arm
We’d like to share a recent research project where our AgileX Robotics PiPER 6-DOF robotic arm was used to validate UniVLA, a novel cross-morphology policy learning framework developed by the University of Hong Kong and OpenDriveLab.
Paper: Learning to Act Anywhere with Task-Centric Latent Actions
arXiv: [2505.06111] UniVLA: Learning to Act Anywhere with Task-centric Latent Actions
Code: GitHub - OpenDriveLab/UniVLA: [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
Motivation
Transferring robot policies across platforms and environments is difficult due to:
- High dependence on manually annotated action data
- Poor generalization between different robot morphologies
- Visual noise (camera motion, background movement) causing instability
UniVLA addresses this by learning latent action representations from videos, without relying on action labels.
Framework Overview
UniVLA introduces a task-centric, latent action space for general-purpose policy learning. Key features include:
- Cross-hardware and cross-environment transfer via a unified latent space
- Unsupervised pretraining from video data
- Lightweight decoder for efficient deployment
Figure 2: Overview of the UniVLA framework. Visual-language features from third-view RGB and task instruction are tokenized and passed through an auto-regressive transformer, generating latent actions which are decoded into executable actions across heterogeneous robot morphologies.
PiPER in Real-World Experiments
To validate UniVLA’s transferability, the researchers selected the AgileX PiPER robotic arm as the real-world testing platform.
Tasks tested:
- Store a screwdriver
- Clean a cutting board
- Fold a towel twice
- Stack the Tower of Hanoi
These tasks evaluate perception, tool use, non-rigid manipulation, and semantic understanding.
Experimental Results
- Average performance improved by 36.7% over baseline models
- Up to 86.7% success rate on semantic tasks (e.g., Tower of Hanoi)
- Fine-tuned with only 20–80 demonstrations per task
- Evaluated using a step-by-step scoring system
About PiPER
PiPER is a 6-DOF lightweight robotic arm developed by AgileX Robotics. Its compact structure, ROS support, and flexible integration make it ideal for research in manipulation, teleoperation, and multimodal learning.
Learn more: PiPER
Company website: https://global.agilex.ai
Click the link below to watch the experiment video using PiPER:
🚨 Our PiPER robotic arm was featured in cutting-edge robotics research!
Collaborate with Us
At AgileX Robotics, we work closely with universities and labs to support cutting-edge research. If you’re building on topics like transferable policies, manipulation learning, or vision-language robotics, we’re open to collaborations.
Let’s advance embodied intelligence—together.
1 post - 1 participant
ROS Discourse General: [Demo] Remote Teleoperation with Pika on UR7e and UR12e
Hello ROS developers,
We’re excited to share a new demo featuring Pika, AgileX Robotics’ portable and ergonomic teleoperation gripper system. Pika integrates multiple sensors to enable natural human-to-robot skill transfer and rich multimodal data collection.
Key Features of Pika:
- Lightweight design (~370g) for comfortable extended handheld use
- Integrated multimodal sensors including fisheye RGB camera, Intel RealSense depth camera, 6-DoF IMU, and high-precision gripper encoders
- USB-C plug-and-play connectivity supporting ROS 1 and ROS 2
- Open-source Python and C++ APIs for easy integration and control
- Compatible with URDF models, suitable for demonstration-based and teleoperation control
In this demo, the Pika teleoperation system remotely controls two collaborative robot arms — UR7e (7.5 kg payload, 850 mm reach) and UR12e (12 kg payload, 33.5 kg robot weight) — to complete several everyday manipulation tasks:
Task Set:
- Twist open a bottle cap
- Pick up a dish and place it in a cabinet
- Grab a toy and put it in a container
System Highlights:
- Precise gripper control with high-resolution encoder feedback
- 6-DoF IMU for accurate motion tracking
- Synchronized multimodal data capture (vision, 6D pose, gripper status)
- Low-latency USB-C connection ensuring real-time responsiveness
- Ergonomic and lightweight design for comfortable long-duration use
Application Scenarios:
- Human-in-the-loop teleoperation
- Learning from Demonstration (LfD) and Imitation Learning (IL)
- Vision-based dexterous manipulation and robot learning
- Remote maintenance and industrial collaboration
- Bimanual coordination and complex task execution
Watch the demo here: Pika Remote Control Demo
Learn more about Pika: https://global.agilex.ai/products/pika
Feel free to contact us for GitHub repositories, integration guides, or collaboration opportunities — we look forward to your feedback!
1 post - 1 participant
ROS Discourse General: TecGihan Force Sensor Amplifier for Robot Now Supports ROS 2
I would like to share that Tokyo Opensource Robotics Kyokai Association (TORK) has supported the development and release of the ROS 2 / Linux driver software for the DMA-03 for Robot, a force sensor amplifier manufactured by TecGihan Co., Ltd.
- GitHub – tecgihan_driver
The DMA-03 for Robot is a real-time output version of the DMA-03, a compact 3-channel strain gauge amplifier, adapted for robotic applications.
- TecGihan Website (English)
As of July 2025, tecgihan_driver supports the following Linux / ROS environments:
- Ubuntu 22.04 + ROS 2 Humble
- Ubuntu 24.04 + ROS 2 Jazzy
A bilingual (Japanese/English) README with detailed usage instructions is available on the GitHub repository.
If you have any questions or need support, feel free to open an issue on the repository.
–
Yosuke Yamamoto
Tokyo Opensource Robotics Kyokai Association
1 post - 1 participant
ROS Discourse General: RobotCAD 9.0.0 (Assembly WB -> RobotCAD converter)
Improvements:
- Add a converter from FreeCAD Assembly WB (default) to the RobotCAD structure.
- Add a tool for changing a Joint Origin without touching the downstream kinematic chain (moves only the target Joint Origin).
- Optimize Set Placement tools performance; they no longer require intermediate scene recalculation during the process.
- Decrease the size of joint arrows to 150.
- Add created collisions to the Collision group (folder). Unify the collision part prefix.
- Fix Set Placement by orienteer for the root link (align it to zero Placement).
- Refactor the Set Placement tools.
Fixes:
- Fix an error when creating a collision for an empty part.
- Fix getting the wrapper for the LCS body container; this fixes adding an LCS to some objects.
- Change NotImplementedError (units for some joint types) to a warning. Instead of raising an error, it now warns and still lets you set values for the other joint types.
https://vkvideo.ru/video-219386643_456239081 - The Assembly WB → RobotCAD converter in action
1 post - 1 participant
ROS Discourse General: 🚀 [New Release] BUNKER PRO 2.0 – Reinforced Tracked Chassis for Extreme Terrain and Developer-Friendly Integration
Hello ROS community,
AgileX Robotics is excited to introduce the BUNKER PRO 2.0, a reinforced tracked chassis designed for demanding off-road conditions and versatile field robotics applications.
Key Features:
- Christie suspension system + Matilda four-wheel independent balancing suspension provide excellent terrain adaptability and ride stability.
- Easily crosses 30° slope terrain.
- Maximum unloaded range: 20 km; maximum loaded range: 15 km.
- Capable of crossing 40 cm trenches and clearing obstacles up to 180 mm in height.
- IP67-rated enclosure ensures robust protection against dust, water, and mud.
- Rated payload capacity: 120 kg, supporting a wide range of sensors, manipulators, and payloads.
- Maximum speed at full load: 1.5 m/s.
- Minimum turning radius: 67 cm.
- Developer-ready interfaces and ROS compatibility.
Intelligent Expansion, Empowering the Future
- Supports customizable advanced operation modes.
- Communication via CAN bus protocol.
- Open-source SDK and ROS packages for easy integration and development.
Typical Use Cases:
- Outdoor Inspection & Patrol
- Agricultural Transport
- Engineering & Construction Operations
- Specialized Robotics Applications
AgileX Robotics provides full ROS driver support and SDK documentation to accelerate your development process. We welcome collaboration opportunities and field testing partnerships with the community.
For detailed technical specifications or to discuss integration options, please contact us at sales@agilex.ai.
Learn more at https://global.agilex.ai/
4 posts - 2 participants
ROS Discourse General: Cloud Robotics WG Meeting 2025-07-28 | Heex Technologies Tryout and Anomaly Detection Discussion
Please come and join us for this coming meeting on Mon, Jul 28, 2025 4:00 PM UTC → Mon, Jul 28, 2025 5:00 PM UTC, where we will be trying out Heex Technologies’ service offering from their website and discussing anomaly detection for Logging & Observability.
Last meeting, we heard from Bruno Mendes De Silva, Co-Founder and CEO of Heex Technologies, and Benoit Hozjan, Project Manager in charge of customer experience at Heex Technologies. The two discussed the company and purpose of the service they offer, then demonstrated a showcase workspace for the visualisation and anomaly detection capabilities of the server. If you’d like to see the meeting, it is available on YouTube.
The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
2 posts - 2 participants
ROS Discourse General: Sponsoring open source project, what do you think?
Hi,
I just saw this and I was thinking about the ROS community.
We have a large and amazing ecosystem of free software, free as in beer and speech!
That accelerated robotic development and we are all very grateful for it.
But I think it is also interesting to discuss how to financially support maintainers while keeping the software free for small companies (pre-revenue), students, and individuals.
Thoughts?
6 posts - 6 participants
ROS Discourse General: Baxter Robot Troubleshooting Tips
Hey everyone,
I’ve been working with the Baxter robot recently and ran into a lot of common issues that come up when dealing with an older platform with limited support. Since official Rethink Robotics docs are gone, I compiled this troubleshooting guide from my experience and archived resources. Hopefully, this saves someone hours of frustration!
Finding Documentation
- Use the Wayback Machine to access old docs:
Archived SDK Wiki
Startup & Boot Issues
1. Baxter not powering on / unresponsive screen
- Power cycle at least 3 times, waiting 30 sec each time.
- If it still doesn’t work, go into the FSD (Field Service Menu): press Alt + F, then reboot from there.
2. BIOS password lockout
- Use BIOS Password Recovery
- Enter the system number shown when opening the BIOS.
- The generated password is admin → confirm with Ctrl+Enter.
3. Real-time clock shows wrong date (e.g., 2016)
- Sync Baxter’s time with your computer.
- Set in Baxter FSM or use NTP from your computer via command line.
Networking & Communication
4. IP mismatch between Baxter and workstation
- Set Baxter to Manual IP in FSM.
5. Static IP configuration on Linux (example: 192.168.42.1)
- The first three octets must match between the workstation and Baxter.
- Ensure Baxter knows your IP in intera.sh.
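One way to give the workstation a matching static address for a quick test (a sketch; the interface name eth0 and the address 192.168.42.10 are examples, and the change does not persist across reboots):
sudo ip addr add 192.168.42.10/24 dev eth0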
6. Ping test: can’t reach baxter.local
- Make sure Baxter’s hostname is set correctly in FSM.
- Disable firewall on your computer.
- Try pinging Baxter’s static IP.
7. ROS Master URI not resolving
export ROS_MASTER_URI=http://baxter.local:11311
8. SSH into Baxter fails
- Verify SSH installed, firewall off, IP correct.
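For items 7 and 8, a minimal workstation environment setup (the addresses are examples and must match your own network):
export ROS_MASTER_URI=http://baxter.local:11311
export ROS_IP=192.168.42.10   # the workstation's own IP on the Baxter link
ping baxter.local             # sanity check before launching anything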
ROS & Intera SDK Issues
9. Wrong catkin workspace sourcing
source ~/ros_ws/devel/setup.bash
10. enable_robot.py or joint_trajectory_action_server.py missing
- Run catkin_make or catkin build after troubleshooting.
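The full rebuild-and-source sequence, assuming the standard ~/ros_ws workspace from the SDK instructions:
cd ~/ros_ws
catkin_make               # or: catkin build, if you use catkin_tools
source devel/setup.bash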
11. intera.sh script error
- Ensure file is in root of catkin workspace:
~/ros_ws/intera.sh
12. MoveIt integration not working
- Ensure robot is enabled and joint trajectory server is active in a second terminal.
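As commands, those two prerequisites are typically (standard SDK scripts; run each in its own terminal and adjust if your workspace differs):
rosrun baxter_tools enable_robot.py -e
rosrun baxter_interface joint_trajectory_action_server.py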
Hardware & Motion Problems
13. Arms not enabled or unresponsive
rosrun baxter_tools enable_robot.py -e
- Test by gripping cuffs (zero-g mode should enable).
14. Joint calibration errors
- Restart the robot. This happens if you hit CTRL+Z mid-script.
Software/Configuration Mismatches
15. Time sync errors causing ROS disconnect
- Sync Baxter's time in the FSM or use chrony or ntp.
Testing, Debugging, & Logging
16. Check robot state:
rostopic echo /robot/state
17. Helpful debug commands:
rostopic list
rosnode list
rosservice list
18. Reading logs:
- Robot: ~/.ros/log/latest/
- Workstation: /var/log/roslaunch.log
19. Confirm joint angles:
rostopic echo /robot/joint_states
If you have more tips or fixes, add them in the comments. Let’s keep these robots running.
1 post - 1 participant
ROS Discourse General: Remote (Between Internet Networks) Control of Robot Running Micro-ROS
Hello,
I am looking into solutions for communicating with a robot running Micro-ROS that is not on the same network as the host computer (the computer running ROS 2).
The only solution I have found so far is this blog post by Husarnet. The problem is that this use case no longer works, and the Husarnet team does not plan to resolve the issue any time soon.
Does anybody know of a solution for this that works?
1 post - 1 participant
ROS Discourse General: AgileX Robotics at 2025 ROS Summer School: PiPER & LIMO Hands-on Tracks and Schedule
AgileX Robotics at 2025 ROS Summer School
AgileX Robotics is thrilled to announce our participation in the upcoming 2025 ROS Summer School
July 26 – August 1, 2025
Zhejiang University International Science and Innovation Center, Hangzhou, China
Official site: http://www.roseducation.org.cn/ros2025/
Hands-on Tracks
This year, we are bringing two dedicated hands-on tracks designed to empower developers with practical skills in robot navigation and mobile manipulation.
PiPER – Mobile Manipulation Track
Our PiPER-based curriculum introduces core concepts in robotic grasping, visual perception, and motion control. Ideal for those exploring real-world robotic manipulation with ROS!
Date | Time | Session | Topic |
---|---|---|---|
Day 4 | AM | Session 1 | Introduction to PiPER |
Day 4 | AM | Session 2 | Motion analysis |
Day 4 | PM | Session 1 | Overview of PiPER-sdk |
Day 4 | PM | Session 2 | MoveIt + Gazebo simulation |
Day 5 | AM | Session 1 | QR code recognition grasping |
Day 5 | AM | Session 2 | Code-level analysis of grasping logic |
Day 5 | PM | Session 1 | YOLO-based Object Recognition and Grasping with Code Analysis |
Day 5 | PM | Session 2 | Frontier Insights on Embodied Intelligence |
LIMO – Navigation & AI Track
Focused on the LIMO platform, this track offers structured ROS-based training in navigation, SLAM, perception, and deep learning.
Date | Time | Session | Topic |
---|---|---|---|
Day 1 | AM | Session 1 | LIMO basic functions overview |
Day 1 | AM | Session 2 | Chassis Kinematics Analysis |
Day 1 | PM | Session 1 | ROS communication mechanisms |
Day 1 | PM | Session 2 | LiDAR-based Mapping |
Day 2 | AM | Session 1 | Path planning |
Day 2 | AM | Session 2 | Navigation frameworks |
Day 2 | PM | Session 1 | Navigation practice |
Day 2 | PM | Session 2 | Visual perception |
Day 3 | AM | Session 1 | Intro to deep reinforcement learning |
Day 3 | AM | Session 2 | DRL hands-on session |
Day 3 | PM | Session 1 | Multi-robot systems intro |
Day 3 | PM | Session 2 | Multi-robot simulation practice |
We look forward to meeting all ROS developers, enthusiasts, and learners at the event. Come join us for hands-on learning and exciting robotics innovation!
— AgileX Robotics
1 post - 1 participant
ROS Discourse General: Is DDS suitable for RF datalink communication with intermittent connection?
I’m not using ROS myself, but I understand that ROS 2 relies on DDS as its middleware, so I thought this community might be a good place to ask.
I’m working on a UAV system that includes a secondary datalink between the drone and the ground segment, used for control/status messages. The drone flies up to 35 km away and communicates over an RF-based datalink with an estimated bandwidth of around 2 Mbps, though the link is prone to occasional disconnections and packet loss due to the nature of the environment.
I’m considering whether DDS is a suitable protocol for this kind of scenario, or if its overhead and discovery/heartbeat mechanisms might cause issues in a lossy or intermittent RF link.
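In ROS 2 terms, the kind of QoS relaxation usually suggested for lossy links looks like this (an rclpy sketch for illustration; the message type and topic name are placeholders):

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy, DurabilityPolicy
from std_msgs.msg import String  # placeholder message type

rclpy.init()
node = Node('datalink_status')

# Keep only the latest samples and never retransmit over the RF link.
lossy_link_qos = QoSProfile(
    history=HistoryPolicy.KEEP_LAST,
    depth=5,
    reliability=ReliabilityPolicy.BEST_EFFORT,
    durability=DurabilityPolicy.VOLATILE,
)
status_pub = node.create_publisher(String, 'ground/status', lossy_link_qos)

rclpy.shutdown()
```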
Has anyone here tried using DDS over real-world RF communication (not simulated Wi-Fi or Ethernet), and can share experiences or advice?
Thanks in advance!
S.
10 posts - 6 participants
ROS Discourse General: Feature freeze for Gazebo Jetty (x-post from Gazebo Community)
Hello everyone!
The feature freeze period for Gazebo Jetty starts on Fri, Jul 25, 2025 12:00 AM UTC.
During the feature freeze period, we will not accept new features to Gazebo. This includes new features to Jetty as well as to currently stable versions. If you have a new feature you want to contribute, please open a PR before we go into feature freeze, noting that changes can still be made to open PRs during the feature freeze period. This period will close when we go into code freeze on Mon, Aug 25, 2025 12:00 AM UTC.
Bug fixes and documentation changes will still be accepted after the freeze date.
More information on the release timeline can be found here: Release Jetty · Issue #1271 · gazebo-tooling/release-tools · GitHub
The Gazebo Dev Team
1 post - 1 participant
ROS Discourse General: Donate your rosbag (Cloudini benchmark)
Hi,
as my presentation about Cloudini was accepted at ROSCon 2025, I want to come prepared with an automated benchmarking suite that measures performance over a wide range of datasets.
You can contribute to this by donating a rosbag!!!
Thanks for your help. Let's make point clouds smaller together.
How to
- You can submit it using GitHub Issues or by sending it to me at davide.faconti@gmail.com, but consider that the rosbag will be made public (I am creating an open benchmarking repo on GitHub).
- It should contain at least one sensor_msgs::PointCloud2. All other topics can be removed (if you don't, I will).
- Please cut your rosbag to 30-45 seconds. You can use any of these resources:
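For ROS 1 bags, for example, rosbag filter can do the trimming and the topic selection in one pass (a sketch; the topic name is a placeholder and the time threshold should be your bag's start time, from rosbag info, plus about 40 seconds):
rosbag filter input.bag donated.bag "topic == '/points' and t.to_sec() <= 1753700040.0"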
Data Donation Disclaimer: Public Availability for CI Benchmarking
By donating your data files, you acknowledge and agree to the following terms regarding their use and public availability:
Purpose: The donated data will be used for research purposes, specifically to perform and validate benchmarking within Continuous Integration (CI) environments.
Public Availability: You understand and agree that the donated data, or subsets thereof, will be made publicly available. This public release is essential for researchers and the wider community to reproduce, verify, and build upon the benchmarking results, fostering transparency and collaborative progress in pointcloud compression.
Anonymization/Pseudonymization: Please ensure that no personally identifiable information is included in the data you submit, as it will be made public as-is.
5 posts - 3 participants
ROS Discourse General: Everything I Know About ROS Interfaces: Explainer Video
I made a video about everything I’ve learned about ROS Interfaces (messages/services/actions) in my fifteen years of working with ROS.
Text Version: ROS Interface Primer - Google Docs (Google Doc)
Featuring:
Information about Interfaces, from Super Basic to Complex Design Issues
Original Research analyzing all the interfaces in ROS 2 Humble
Best Practices for designing new interfaces
Hot takes (i.e. the things that I think ROS 2 Interfaces do wrong)
Three different ways to divide information among topics
Fun with multidimensional arrays
Nine different recipes for “optional” components of interfaces (one common pattern is sketched after this list)
Strong opinions that defy the ROS Orthodoxy
Zero content generated by AI/LLM
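As a taste of the “optional” theme, one widely used pattern (a generic sketch, not necessarily one of the nine recipes from the video) is a bounded array whose emptiness means “not provided”:
# Hypothetical Reading.msg
float64 reading
# empty = covariance not provided; exactly 9 values = provided
float64[<=9] covariance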
Making a video is hard, so I'm calling this version 1.0 of the video; please let me know what I got wrong and what I'm missing, and I may make another version in the future.
In closing: bring back Pose2D you monsters.
3 posts - 2 participants
ROS Discourse General: ROS and ROS2 Logging Severity Level
Hi All!
I’m working on an application for containerizing ROS (1 & 2) projects.
I’m asking for the help of everyone experienced with ROS loggers.
In particular, I’m looking for a solution to generalize the definition of the minimum severity level for all the nodes running in a project.
This configuration should be possible outside of the node source code, so using parameters, environmental variables, or configuration files.
I know that in ROS 1 (C++-based nodes) it is possible to set the minimum severity level from rosconsole.config. (What about ROS 1 Python nodes? Do they still use rosconsole.config?)
I also have some doubts about how named loggers work: does each node have its own logger? Is it in principle not possible to define the minimum severity level for all the nodes running in a project?
In ROS 2 (C++ and Python nodes) I know that the --log-level argument works to configure the severity when running a node, but again I'm looking for a global solution…
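For context, the per-node mechanisms I already know about look roughly like this (paths and levels are just examples):
export ROSCONSOLE_CONFIG_FILE=~/rosconsole.config   # ROS 1 C++; the file contains e.g. log4j.logger.ros=WARN
ros2 run <package> <executable> --ros-args --log-level warn   # ROS 2, per process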
Anyone with useful resources or insights on this aspect?
As anticipated before, the final goal is having an environmental variable or a configuration file that can be used to set the severity level of all the nodes that will be executed when the project start (so for example multiple nodes running from a launch file).
Moreover, I want it to be independent of the language used to write the node (Python or C++).
I’m not referring to a “global parameters” because I know that ROS 2 is structured such that each node has its parameters.
Thanks to all of you!
(I hope the question is not badly formulated; I'm not very experienced with these aspects and with the different ways ROS 1 and ROS 2 manage loggers… so study resources on these topics would also be very helpful for me.)
1 post - 1 participant
ROS Discourse General: Ros2top - top-like utility for ROS2
Hi everyone!
Repo: GitHub - AhmedARadwan/ros2top
I’ve always found it hard to track each node’s resource usage, so I thought it might be a good idea to build a tool that works for ROS 2 and essentially any Python or C++ process to monitor resource usage in real time. The goal? Quickly see which processes are consuming the most resources and gain better visibility into a running system.
This is an initial release: it relies on the node registering itself to become visible and tracked by the ros2top utility.
What it does so far:
- Shows per-node CPU, RAM, GPU, and GPU memory usage.
- Detects active nodes based on registration.
- Offers a simple terminal UI, similar to htop, to monitor everything in one place.
How it works:
- The node imports/includes ros2top and registers itself; ros2top then polls resource stats.
- It displays a list of processes, so you can spot hot resource consumers at a glance.
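The polling side is conceptually similar to the sketch below (generic psutil code for illustration, not the actual ros2top implementation; the PID would come from the registration step):

```python
import time
import psutil

def sample(pid: int) -> dict:
    """Return a CPU/RAM snapshot for one process, identified by its PID."""
    proc = psutil.Process(pid)
    proc.cpu_percent(interval=None)  # prime the CPU counter
    time.sleep(0.5)                  # short window over which CPU usage is measured
    return {
        "cpu_percent": proc.cpu_percent(interval=None),
        "ram_mb": proc.memory_info().rss / 1e6,
    }

print(sample(12345))  # placeholder PID
```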
Why it might help:
- Instead of juggling htop, nvtop, ros2 node info, etc., you get everything in one screen.
- Ideal for multi‑node systems where it's easy to lose track of who's using what.
I’d love to hear your thoughts:
- Does this sound helpful in your debugging or monitoring workflow?
- Any ideas for features, UI improvements, or integrations?
- Thoughts on automatic registration vs. manual config?
This is very early-stage, but I hope it can evolve into a valuable tool for the ROS 2 community. Feedback, suggestions, or even contributions are all welcome!
9 posts - 7 participants
ROS Discourse General: Best GPU for Large-Scale Multi-Robot Simulation (20–50 Robots) with Open RMF in ROS2
Hello everyone,
I’m planning to run a large-scale multi-robot simulation using ROS2. The setup involves simulating 100 or more robots in a shared environment, using:
Simulation tools like Gazebo or Ignition
Visualization through RViz2
Open RMF for fleet coordination, traffic scheduling, and path planning
I’m looking for suggestions regarding a suitable GPU that can smoothly handle the simulation load without performance issues.
Specifically, I’d like to ask:
Which NVIDIA GPU models are recommended for this scale of simulation?
Would GPUs like RTX 3060 / 3070 / 3080 / 4090 or Quadro series be sufficient?
Is CUDA support helpful for improving performance in Gazebo/Ignition + RViz2?
What minimum VRAM (GPU memory) is advisable (e.g., 8GB vs 16GB+)?
Will the suggested GPU models work well across all ROS2 distributions and Ubuntu versions, including future upgrades?
My aim is to choose a future-ready GPU that supports high-scale multi-robot simulation involving Open RMF logic and visual rendering, with consistent performance.
Any guidance or shared experiences would be greatly appreciated.
Also, how many robots can Gazebo and RViz realistically handle in simulation?
Thank you!
7 posts - 2 participants
ROS Discourse General: Installation and configuration of the Raspberry Pi Camera on a ROS 2/Jazzy Raspberry Pi 5
We are pleased to release the following information in a document, posted in a repository on Raspberry Pi Camera ROS Install, that describes the steps to get a Raspberry Pi™ (or compliant 3rd-party) V1, V2, or V3 camera working on a Raspberry Pi 5 configured with Ubuntu 24.04/ROS 2 Jazzy. It may also be applicable on selected Raspberry Pi 4 configurations. The document was the result of an ongoing dialog on content, posted on the HBRobotics Forum HBRobotics, from notes, Linux terminal scripts, and libraries contributed by Alan Federman, Marco Walther, Sergei Grichine, and Ross Lunan. The necessary libraries are installed from downloaded binaries. The purpose was to enable the functioning of the “camera_ros” package developed by Christian Rauch camera_ros, which publishes the camera image as ROS 2 messages: /camera/camera_info, /camera/image_raw, /camera/image_compressed, /parameter_events, and /rosout.
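Once the libraries are installed, starting the node and checking the output is typically just (assuming the package builds as camera_ros with a camera_node executable, as in the upstream repository):
ros2 run camera_ros camera_node
ros2 topic echo /camera/camera_info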
3 posts - 2 participants
ROS Discourse General: ROS 2 Rust Meeting: July 2025
The next ROS 2 Rust Meeting will be Mon, Jul 14, 2025 2:00 PM UTC
The meeting room will be at https://meet.google.com/rxr-pvcv-hmu
In the unlikely event that the room needs to change, we will update this thread with the new info!
1 post - 1 participant
ROS Discourse General: 🌉 California ROS Events for July 2025 (including Open Sauce!)
ROS Events in California for July 2025
Hi Everyone,
I’ve put together a string of exciting ROS and open source events in California this July!
I had a fantastic time at Open Sauce last year talking to other open source projects (like OpenSCAD, and FreeCAD). This year I’ve organized a joint Open Robotics / ROS / Open Source Hardware Association / OpenCV booth. If you are attending Open Sauce we would love for you to stop by (we’ll have tons of free OSHWA / OpenCV / ROS Stickers).
I’ve also worked with our friends at Hackster.io to organize an open source @ Open Sauce after party at the Studio 45 fabrication space in San Francisco on Saturday night. The after party is open to everyone, regardless of whether you are planning to attend Open Sauce.
RSVP for After Party Here
ROS By-The-Bay
We’re planning to hold our next ROS By-The-Bay Meetup on Fri, Jul 25, 2025 1:00 AM UTC. I’ve lined up two fantastic speakers from Ember Robotics and Orangewood Robotics.
ROS By-The-Bay Meetup
ROS Meetup in LA
Finally, @mrpollo and I will be in LA the last week of July for IEEE SCM-IT/SCC 2025. @mrpollo and @ivanperez have organized a workshop on open source software for space missions.
We are tentatively planning to hold a joint ROS / Dronecode meetup on July 31st but we’re still looking for space and speakers (we just had our venue fall through). We were hoping to find a space in the El Segundo / Long Beach area but we’re open to anything right about now (perhaps Pasadena?). If you have suggestions please reach out.
I’ll post additional information as we figure it out.
2 posts - 2 participants
ROS Discourse General: Embodied AI Community group meeting #9
The Embodied AI Community Group dedicated to the topic of applications of Generative AI to ROS 2 robotics will have a ninth meeting on 9 July 16:00 UTC (9:00 am PST) - in less than 24h!
Join us to keep up with the newest advancements in embodied AI field.
We have some exciting topics in the agenda:
- OSRA Technical Governance Committee Special Interest Group on Physical AI by Adam Dąbrowski
- LeRobot framework: what is it, how to use it with ROS and integrate with Agentic AI by Kacper Dąbrowski
- Agentic AI literature review (5 minutes) by Bartłomiej Boczek (myself) and Kajetan Rachwał
- Quick RAI framework update (3 minutes) by Bartłomiej Boczek
- Free discussion
Here is the meeting link, meetings take place every month, so feel free to subscribe to the calendar and visit the group landing page.
A detailed agenda can be found in the meeting document, along with all materials and recordings of past meetings.
See you there!
4 posts - 2 participants
ROS Discourse General: 🏎️ ROS 2 Online Robot Racing Contest — Fun & Challenge Await This July!
Hi community!
This July, we’ve prepared something fun for you — an Online ROS 2 Robot Racing Contest!
This 100% free, simulation-based competition invites robotics developers from around the world to compete. Build your fastest robot lap — and the winner will receive a real ROS 2 robot equipped with a camera, LiDAR, and a hot-swappable battery system!
How to Participate
- Open the simulation project in your browser: https://app.theconstruct.ai/l/6c3e7d72/ (No installation needed — everything runs directly in your browser.)
- Build your project and complete a full lap in the shortest time possible.
- Use any tech you like — ROS 2, OpenCV, line following, deep learning…
- When you have finished, submit your project link to contest@theconstruct.ai by July 28th.
Winners will be announced during a live online event on July 31st.
This contest is more than just a race — it’s a fun way to strengthen your ROS 2 skills and connect with the global ROS community.
We invite you to race, learn, and have fun with this robot contest!
The Construct Robotics Institute
theconstruct.ai
2 posts - 1 participant
ROS Discourse General: AI Worker Redefines Agility in Logistics with Swerve Drive
AI Worker Redefines Agility in Logistics with Swerve Drive
AI WORKER #4: Swerve Drive at Work – Logistics Task Demo
Are you interested in the logistics and distribution environment? Our team is thrilled to finally release a new video showcasing our AI Worker’s enhanced driving and operational capabilities!
This video vividly demonstrates how our AI Worker, equipped with Swerve Drive technology, moves with incredible flexibility and intelligence in a real-world logistics setting. While “Omni-Directional” methods typically include Omni wheels and Mecanum wheels, both rely on friction with the floor, which can lead to lower positional accuracy and even floor damage. In contrast, the Swerve Drive type offers superior positional accuracy, a significant advantage in terms of reduced data noise from a Physical AI perspective. After all, if the data crucial for learning isn’t accurate, the learning outcomes won’t be good either. This Swerve Drive technology allows the AI Worker to navigate narrow spaces within the work area, and most importantly, its horizontal movement capability significantly enhances efficiency for tasks involving conveyor belts or tabletop operations.
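To make the comparison concrete, the geometry behind a swerve drive is simple: each module is independently steered to point along its own velocity vector, so the chassis can translate sideways without relying on wheel slip. A generic kinematics sketch (for illustration only, not our production code):

```python
import math

def swerve_module_command(vx, vy, wz, module_x, module_y):
    """Wheel speed (m/s) and steering angle (rad) for one swerve module.

    vx, vy: desired chassis velocity in the body frame; wz: yaw rate (rad/s).
    (module_x, module_y): module position relative to the chassis centre (m).
    """
    wheel_vx = vx - wz * module_y   # rotation adds a tangential component
    wheel_vy = vy + wz * module_x
    speed = math.hypot(wheel_vx, wheel_vy)
    angle = math.atan2(wheel_vy, wheel_vx)
    return speed, angle

# Pure sideways motion: every module steers to 90 degrees and the chassis translates laterally.
print(swerve_module_command(0.0, 0.5, 0.0, 0.3, 0.25))
```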
While this video still features some teleoperated segments, our ultimate goal is for this AI Worker to evolve into a fully autonomous system, capable of self-judgment and movement, by integrating with Robot Foundation Models (RFM). Your continued support and encouragement as we pursue this journey would mean a great deal to us!
As with all ROBOTIS robot systems, this AI Worker is also open-source and built on Open Robotics ROS 2 and Hugging Face LeRobot, so those interested in the technical aspects will find it engaging.
You can watch the original YouTube video at the link below:
AI WORKER #4: Swerve Drive at Work – Logistics Task Demo
https://youtu.be/WNpRlIr4zbw
Our open-source GitHub repositories are here:
GitHub - ROBOTIS-GIT/ai_worker: AI Worker: FFW (Freedom From Work)
GitHub - ROBOTIS-GIT/physical_ai_tools: ROBOTIS Physical AI Tools: Physical AI Development Interface with LeRobot and ROS 2
GitHub - ROBOTIS-GIT/robotis_lab: robotis_lab
A comprehensive overview of the AI Worker is available on our webpage:
https://ai.robotis.com/
Please feel free to leave any questions or feedback in the comments after watching the video!
#ROBOTIS #AIWorker #Humanoid #DYNAMIXEL #robot #OpenSource #ROS #PhysicalAI #EmbodiedAI
1 post - 1 participant