Planet ROS
Planet ROS - http://planet.ros.org
ROS Discourse General: ros2_lingua: A safe, dependency-aware grounding engine for LLMs
Hi everyone,
Like many of us, I’ve been experimenting with giving LLMs control over robot hardware. However, I quickly ran into the classic problems: LLMs hallucinate actions, assume prerequisites that haven’t been met (e.g., trying to drive a humanoid before stabilizing it), and most existing integrations are just tightly coupled, hardcoded scripts.
To solve this, I built ros2_lingua — an open-source bridge that introduces a structured capability contract between ROS 2 nodes and LLMs.
Instead of letting the LLM guess what topics or actions to call, ros2_lingua forces the LLM to output a plan based only on explicitly registered capabilities, and uses a backward-chaining planner to automatically inject missing prerequisite steps.
How it works:
- Capability Advertisement: Any ROS 2 node can inherit from LinguaMixin to self-advertise its capabilities at boot. It defines its name, ROS action/service, parameters, preconditions, and postconditions.
- Backward-Chaining Planner: When a user gives a natural language instruction (e.g., “go to the table and pick up the bottle”), the Grounding Engine checks the robot’s current state against the capability schema. If the robot isn’t balanced, the planner automatically injects a stabilize_robot capability before the navigation step.
- Safe Dispatch: The DispatcherNode safely executes the validated plan over standard ROS 2 actions and services.
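To make the prerequisite-injection idea concrete, here is a minimal one-level sketch in plain Python. All names here (Capability, plan, the state flags) are illustrative; the actual ros2_lingua schema and planner API will differ:

```python
# Hypothetical sketch of prerequisite injection -- not the real ros2_lingua API.
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    preconditions: set = field(default_factory=set)   # state flags required
    postconditions: set = field(default_factory=set)  # state flags produced

def plan(goal_steps, state, registry):
    """Inject registered capabilities that satisfy unmet preconditions (one level)."""
    out = []
    state = set(state)
    for step in goal_steps:
        cap = registry[step]
        for pre in sorted(cap.preconditions - state):
            # Backward-chain: find a capability whose postconditions supply `pre`.
            provider = next(c for c in registry.values() if pre in c.postconditions)
            out.append(provider.name)
            state |= provider.postconditions
        out.append(cap.name)
        state |= cap.postconditions
    return out

registry = {
    "stabilize_robot": Capability("stabilize_robot", set(), {"balanced"}),
    "navigate": Capability("navigate", {"balanced"}, {"at_goal"}),
    "pick": Capability("pick", {"at_goal"}, {"holding_object"}),
}

# The LLM proposed only navigate + pick; the planner injects stabilize_robot first.
print(plan(["navigate", "pick"], set(), registry))
# → ['stabilize_robot', 'navigate', 'pick']
```

Because the plan is assembled only from registered capabilities, a hallucinated action simply has no registry entry and is rejected before dispatch.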
Decoupled Architecture
One of my main goals was to ensure the core logic was highly testable. The project is split into two layers:
ros2_lingua_core: A pure Python library containing the schema, registry, planner, and LLM backends (Ollama, OpenAI, Anthropic). It has zero ROS 2 dependencies, meaning the grounding engine can be unit-tested purely in Python.
ros2_lingua: The ROS 2 interface layer containing the GroundingNode, DispatcherNode, and mixins.
Links & Demo
You can see a demo of the engine running with a local Ollama model and a mock humanoid setup, along with the full architecture documentation here:
Documentation & Architecture: ros2_lingua — Documentation
GitHub Repository: GitHub - purahan/ros2_lingua: Natural language to ROS2 actions — a structured LLM grounding engine for any robot. · GitHub
What’s Next & Feedback Request
The project is currently a working prototype in Python. My immediate roadmap includes taking this to a release-ready state and building a C++ bridge so native controller nodes can easily advertise their capabilities.
Since this is early development, I would love to get feedback from the community on the architecture—specifically on the schema design for the capability registry and how best to handle complex, long-running action pre-emptions within the Dispatcher.
Thanks for your time, and I’d love to hear your thoughts!
1 post - 1 participant
ROS Discourse General: Control Algorithm Dominance Survey
Hey guys, I’m doing a survey to gauge the dominance of different control engineering paradigms in industry: whether there has been a noticeable shift from classical control to more modern algorithms, or whether modern algorithms, while looking good on paper, remain largely stuck in research papers.
I would love everyone’s inputs, from student to seasoned researcher.
You’re still welcome to contribute if you don’t work directly in controls, or if your work is controls-adjacent, like SWE or mechanical design.
2 posts - 2 participants
ROS Discourse General: Where does latency in WebRTC video streaming come from? An analysis
We analyzed the glass-to-glass latency of streaming video from robots to the web using WebRTC. Typical total latency for remote streaming is 150-180 ms, but how does this break down?
Tl;dr:
- The vast majority of latency actually comes from the camera itself and the USB bus (~100 ms).
- H264 encoding and decoding add around 10 ms each (or less).
- WebRTC only adds around 10 ms of latency for remote streaming (jitter buffers).
- The rest is due to static network delay (“ping timing”, speed of light).
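Summing the listed components gives a quick sanity check against the 150-180 ms total (values are approximate, as stated above):

```python
# Back-of-envelope from the numbers above (all approximate).
camera_and_usb = 100   # ms: sensor exposure/readout + USB transfer
h264_encode = 10       # ms (or less)
h264_decode = 10       # ms (or less)
webrtc_jitter = 10     # ms: jitter buffers
fixed = camera_and_usb + h264_encode + h264_decode + webrtc_jitter
print(fixed)                     # 130 ms before network propagation
print(150 - fixed, 180 - fixed)  # 20 50 -> ms left for static network delay
```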
Read the full analysis here:
1 post - 1 participant
ROS Discourse General: ROS2 + Gazebo Harmonic on macOS 26
Hello,
It seems ROS2 on macOS 26 installation guide is this: Installing ROS 2 on macOS — ROS 2 Documentation: Crystal documentation
Gazebo Harmonic on macOS 26 installation guide is this: Binary Installation on macOS — Gazebo harmonic documentation
I read online there might be some incompatibility issues. I would like to understand the current picture of ROS 2 + Gazebo Harmonic on macOS. Would those two guides work, and are there any expected issues?
1 post - 1 participant
ROS Discourse General: Camera streaming made easy [NEW Release]
Greetings fellow roboticists,
you might know me from other projects such as ros_babel_fish, the QML ROS 2 module, or RQml.
Maybe as the dude who ends all their posts with an AI-generated image.
TLDR: ros_camera_server out now (link below). Efficient streaming to ROS, plus robust low-bandwidth streaming to the operator station over H.264/H.265 using RTP/SRT/WebRTC. Simple config, automatically optimized pipelines.
Camera streaming in ROS is kind of cumbersome, even though it’s always the same issue.
On the robot, you want raw (or compressed between PCs) ROS images for processing, and on the remote operator station, you want a low-latency, ideally low-bandwidth live video feed.
In academia, many setups I have encountered use compressed images for remote operator setups because it’s simple and, in good network conditions, works well enough.
Using usb_cam or gscam with compressed images is also what Opus 4.7 would suggest when asked.
While this does work, it’s quite resource- and bandwidth-intensive and is one of the roadblocks that need to be addressed to bring research solutions to the field, especially in rescue robotics, where bandwidth is a limited resource.
To save bandwidth, you can use something like gst_bridge to receive the ROS image, encode it in H.264, and send it to your remote operator station.
This will be approximately 1/10 to 1/30 of the bandwidth for comparable quality.
If your camera is a high-resolution USB camera, it will most likely stream JPEG-encoded data for the high-resolution, high-FPS modes.
So your pipeline becomes:
Camera (jpeg) → usb_cam (decodes jpeg) → image_transport (re-encodes as jpeg for compressed) → gst_bridge → handwritten gstreamer pipeline (requires some technical knowledge to get right) → stream
Doesn’t take an expert to see that this is not optimal.
Here’s where my new ros_camera_server comes in.
You specify one yaml file with your cameras, each with one input and as many outputs as you want (and your compute can handle).
The outputs can differ in resolution and framerate.
Currently supported are ROS 2, RTP, SRT, and WebRTC.
The camera server will automatically create and optimize GStreamer pipelines based on your available hardware accelerations, which, in parallel, produce your ROS output and streams applying scale and framerate limiters as necessary.
This cuts the decode and re-encode overhead and significantly reduces latency and CPU usage.
JPEG camera input can also be published directly as a ROS-compressed image or forced to be decoded if needed.
Check the plots from my benchmark in the comments to see that the much easier configuration is not paid for with higher latency or overhead, and it beats the alternatives in both.
Here’s the repo:
If you can’t comply with the AGPL, you can contact me to see if we can find a suitable license for your use.
PS: The ros_camera_server preserves the image capture/header timestamp as a custom RTP header extension and can restore it from ros_camera_server H.264/H.265 streams.
So you can stream from the robot over RTP/SRT/WebRTC and restore it to ROS on the operator station or another robot, and the timestamp will be preserved in the ROS image output.
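As an illustration of the idea (not the actual wire format, which the repo defines), a capture timestamp can be round-tripped through a fixed-size header extension payload like this:

```python
# Hypothetical encoding of a ROS capture timestamp for an RTP header
# extension; the actual wire format used by ros_camera_server may differ.
import struct

def pack_stamp(sec: int, nanosec: int) -> bytes:
    # 4-byte seconds + 4-byte nanoseconds, network byte order.
    return struct.pack("!II", sec, nanosec)

def unpack_stamp(data: bytes):
    return struct.unpack("!II", data)

payload = pack_stamp(1700000000, 123456789)
print(len(payload))            # 8
print(unpack_stamp(payload))   # (1700000000, 123456789)
```

The receiver can then rebuild the original header stamp on the ROS image it republishes, so downstream time-sensitive consumers (TF lookups, sensor fusion) still work across the stream boundary.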
Benchmarks
I hope this helps groups without video streaming experts to create more robust remote control setups.
If you read this far and this was not of interest to you, I’m sorry, here’s your AI picture:
1 post - 1 participant
ROS Discourse General: ALERT: do NOT visit plotjuggler.com
Hi,
avoid the page “plotjuggler DOT com”
I did not create this page and I have no idea who did.
Please do not download any file from there, there is a high risk of Malware / Phishing.
I will try to have a solution to this ASAP.
Davide
3 posts - 2 participants
ROS Discourse General: Rclnodejs 2.0.0 beta — ROS 2 Lyrical (beta) and Node.js 26 support
Hi all,
For those new to the project: rclnodejs is the Node.js client library for ROS 2, maintained under the Robot Web Tools umbrella. It lets you write ROS 2 nodes — publishers, subscribers, services, actions, parameters, lifecycle, etc. — in plain JavaScript or TypeScript, with a native binding to rcl/rmw so messages stay zero-copy where possible. It’s a good fit for web dashboards, Electron desktop apps, browser bridges, scripting, and rapid prototyping.
I’m happy to announce that rclnodejs 2.0.0-beta.0 is out — the first preview of the 2.x line, with first-class support for the upcoming ROS 2 Lyrical Luth release on Ubuntu 26.04 and the latest Node.js 26.
What’s new in 2.0.0-beta.0
- ROS 2 Lyrical Luth (Ubuntu 26.04) is supported in addition to existing distros (Humble / Jazzy / Kilted / Rolling).
- Node.js 26.x is supported, with Linux x64 and arm64 prebuilds so npm install works without a local toolchain.
New: rosocket — talk to ROS 2 from a browser, no JS library
This release ships a lightweight WebSocket bridge called rosocket plus an end-to-end demo. The point: a browser tab can subscribe and publish to topics (and call services) using only the built-in WebSocket and JSON APIs — no rosbridge or roslibjs needed.
URL convention:
ws://<host>:<port>/topic/<name>
ws://<host>:<port>/service/<name>
Browser-side pub/sub in a few lines:
const BRIDGE = 'ws://localhost:9000';
// Subscribe — every published message arrives as onmessage.
const sub = new WebSocket(`${BRIDGE}/topic/chatter`);
sub.onmessage = (ev) => console.log('recv:', ev.data); // {"data":"hi"}
// Publish — open, send JSON, close.
const pub = new WebSocket(`${BRIDGE}/topic/chatter`);
pub.onopen = () => {
pub.send(JSON.stringify({ data: 'hello from browser' }));
setTimeout(() => pub.close(), 200);
};
Live walkthrough:
Code: demo/rosocket.
Electron desktop visualization demos
For richer UIs, the same library powers cross-platform desktop apps via Electron — HTML/CSS/Three.js/WebGL on the front end, with real ROS 2 nodes running in the Electron main process. The demos cover topics, a car controller, a manipulator, and a turtle TF2 visualizer.
rclnodejs electron manipulator demo
Code: demo/electron.
Cheers,
Minggang
1 post - 1 participant
ROS Discourse General: Is FastDDS (default config) totally unusable for simulation with rclpy?
I was fighting some performance issues in our simulation for a subject I teach.
It boiled down to one slight performance improvement in rclpy which I started discussing, but the larger surprise was how super-bad FastDDS is with distributing the 250 Hz /clock topic.
I did an experiment with our full autonomy stack (but on a Turtlebot with a 2D lidar, so nothing heavy). Most nodes run rclpy with a MultiThreadedExecutor (more on that later). In total, 16 nodes subscribe to the /clock topic, and Gazebo runs at ~70% RTF, so the real frequency of /clock is more like 200 Hz.
Pink is FastDDS, blue is Zenoh.
Let me explain the plot:
The vertical axis shows the dt - delta time, i.e. the time difference between two consecutive /clock messages received by the node (measured by instrumenting the default clock callback of a node).
With FastDDS, some nodes sometimes do not receive any /clock message for more than 0.5 s!!! With Zenoh, the worst case is about 0.18 s.
Even more interesting is the best case: FastDDS achieves the expected dt of 0.004 s for only a small fraction of the dataset, around 5%. Zenoh achieves this dt most of the time (the almost solid blue line along the bottom). (No, this blue line does not hide the pink dots; there are almost none underneath it.)
This is the histogram of dt values (notice logarithmic axis, Zenoh left, FastDDS right):
I understand that the combination of rclpy and MultiThreadedExecutor is one of the worst things you can do in ROS 2, but still, this huge difference in usability between FastDDS and Zenoh hits me.
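The instrumentation itself is simple; here is a standalone sketch of the dt measurement, decoupled from rclpy so it runs anywhere (in the real experiment, `observe` would be fed from the node's instrumented clock callback):

```python
# Standalone sketch of the dt measurement: record the gap between
# consecutive /clock receive times. Hooking into rclpy's clock callback
# is omitted; feed receive times (in seconds) to `observe`.
class DtTracker:
    def __init__(self):
        self.last = None
        self.dts = []

    def observe(self, t: float):
        """t: receive time of a /clock message, in seconds."""
        if self.last is not None:
            self.dts.append(t - self.last)
        self.last = t

tracker = DtTracker()
# A healthy ~250 Hz stream that stalls, mimicking the FastDDS gaps described above.
for t in (0.000, 0.004, 0.008, 0.512):
    tracker.observe(t)
print(max(tracker.dts))  # worst-case dt: ~0.5 s
```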
11 posts - 5 participants
ROS Discourse General: [Release] emcon_gz_hardware_interface: A DDS-bypassing hardware interface for Gazebo Harmonic
Hi everyone,
When scaling up multi-robot simulations or running complex network-isolated state estimators in Gazebo Harmonic, relying on the standard gz_ros2_control shared memory architecture can become a bottleneck or present domain isolation challenges.
To solve this, I’ve open-sourced emcon_gz_hardware_interface.
It acts as a standard ros2_control SystemInterface, but instead of routing simulation traffic through ROS 2 DDS, it acts as a “data diode” and subscribes/publishes directly to native gz-transport topics.
Key Features:
- Total DDS Bypass: Keeps your ROS_DOMAIN_ID completely isolated from high-frequency simulation traffic.
- Strict Real-Time Safety: Uses realtime_tools::RealtimeBuffer to ensure the control loop never blocks.
- Fully Parameterized: Configurable directly via URDF <hardware> tags.
Repository: https://github.com/yenode/emcon_gz_hardware_interface
I’ve already opened an RFC with the ros-controls team to see if this fits upstream, but the standalone package is fully CI-tested and ready to use for ROS 2 Jazzy! I’d love to hear feedback from anyone running massive fleets or strict network simulations.
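For readers unfamiliar with the pattern, here is the RealtimeBuffer idea sketched in Python: the writer takes the lock, while the reader only try-locks and falls back to its last snapshot, so the control loop never waits. The real package uses the C++ realtime_tools::RealtimeBuffer; this is illustration only:

```python
# Rough Python analogue of realtime_tools::RealtimeBuffer (illustration only).
import threading

class NonBlockingBuffer:
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._data = initial      # written by the transport callback
        self._cached = initial    # reader's last good snapshot

    def write(self, data):
        # Transport-callback side: may block briefly, which is fine there.
        with self._lock:
            self._data = data

    def read(self):
        # Control-loop side: never waits. On contention, reuse the last snapshot.
        if self._lock.acquire(blocking=False):
            try:
                self._cached = self._data
            finally:
                self._lock.release()
        return self._cached

buf = NonBlockingBuffer({"position": 0.0})
buf.write({"position": 1.5})
print(buf.read())  # {'position': 1.5}
```

The trade-off: under contention the control loop may read a one-cycle-stale value, which is usually acceptable for a hardware interface and far better than an unbounded block.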
3 posts - 2 participants
ROS Discourse General: "ROS 2 in a Nutshell: A Survey" is now published in ACM Computing Surveys
Hi everyone,
I have been a silent observer here since 2019, and this is my first post on Discourse. I hope you will excuse the sudden intrusion!
I am pleased to share that our survey paper, “ROS 2 in a Nutshell: A Survey,” has been published in ACM Computing Surveys.
Paper DOI: https://doi.org/10.1145/3815113
ACM Computing Surveys is recognized with an Impact Factor of 28.0 and is ranked #1 out of 147 journals in Computer Science, Theory & Methods.
The goal of this survey is to provide a broad and systematic overview of the ROS 2 ecosystem. In particular, the paper covers:
- the evolution of ROS 2
- the motivations behind ROS 2 and its architectural redesign
- middleware and RMW evolution
- a taxonomy of ROS 2 literature and research directions
- frameworks, simulators, community packages, and the broader ROS 2 software ecosystem
The paper is organized around three main research questions:
- How does ROS 2 improve upon ROS 1, and what new limitations arise?
- What advances address redesign challenges and enable deployment?
- Which frameworks and tools shape the ROS 2 ecosystem, and where are the remaining gaps?
A major outcome of the work is an open-access companion database that organizes ROS-related literature, tools, and ecosystem resources:
ROS 2 survey database: ROS 2 in a Nutshell: A Survey
We also welcome community contributions to improve and extend the database:
Contribution guide: awesome-ros/CONTRIBUTING.md at main · asmbatati/awesome-ros · GitHub
This work was carried out by:
We would also like to express our appreciation to:
for their valuable support.
We are also grateful for the support of:
- RIOTU Lab
- Research and Initiatives Center (RIC)
- AlfaisalX Center at Alfaisal University
I hope the survey and the companion database will be useful to researchers, developers, and students working with ROS 2.
Feedback, corrections, and suggestions for additions to the database are very welcome.
1 post - 1 participant
ROS Discourse General: Guidance on learning ROS2 focusing on motion planning and Perception.
Hello guys, I am a beginner at learning ROS2. I want guidance on learning it from scratch, right now I am stuck on where to start. Watching a couple of YouTube tutorials, but struggling to understand certain concepts. I am here to learn from people who have prior knowledge of ROS.
Thanks
1 post - 1 participant
ROS Discourse General: Ouster unveil the first native color Lidar sensor
5 posts - 3 participants
ROS Discourse General: RVizSplat: Visualize 3D Gaussian splats in RViz!
Hey everyone!
We are happy to share Release 1.0.0 of RVizSplat!
In this release, we provide an RViz display plugin that renders 3D Gaussian splats alongside other conventional markers in the scene. Along with this, we also provide an interface to stream GSplats over ROS topics and the ability to read .ply files which are in the 3DGS INRIA format directly from the plugin.
In case you wish to render Gaussian splatted scenes in a resource constrained environment, we also provide OIT based implementations to bypass sorting the Gaussians at the cost of rendering quality.
On an Nvidia RTX 3060+, we render at 40-60 FPS on a large scene with > 6 million splats and about 20 FPS on a modern integrated GPU for the same scene. On scenes that are comparatively smaller (1-3 million splats), we are able to achieve 100+ FPS. We also provide the ability to sort on an Nvidia GPU (Radix sort) and on the CPU (PDQ Sort).
Additional sorting techniques can be implemented through a simple interface if they suit your needs better.
The package is currently well tested on ROS 2 Rolling and is experimental on Jazzy and Kilted.
Please try out our work, let us know what you think, and star the repository to support the project!
Team members that have made this project possible: Videh Patel, Akash Chikhalikar, Aditya Mathur, and Suchetan Saravanan.
3 posts - 2 participants
ROS Discourse General: I tested URDF format conversion on NASA Valkyrie, UR5, and Franka Panda results and a free tool
I validated and converted URDFs from 5 popular robots to both Gazebo SDF and MuJoCo MJCF formats.
The tool is free to try (14-day Pro trial, no credit card):
- Validate: POST /api/urdf/validate
- Convert to SDF: POST /api/urdf/convert-format?target=sdf
- Convert to MJCF: POST /api/urdf/convert-format?target=mjcf
Python SDK: pip install roboinfra-sdk
GitHub Action: roboinfra/validate-urdf-action@v1
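A hedged sketch of calling the validate endpoint from a Python script: the host (BASE) and the JSON field name are assumptions (check the SDK docs); only the endpoint paths come from the post. The request is built but not sent here:

```python
# Sketch only: BASE and the payload schema are guesses; just the
# /api/urdf/* paths come from the announcement.
import json
import urllib.request

BASE = "https://example.invalid"   # placeholder: substitute the real API host

def urdf_request(path: str, urdf_xml: str) -> urllib.request.Request:
    """Build a POST request for one of the /api/urdf/* endpoints."""
    body = json.dumps({"urdf": urdf_xml}).encode()
    return urllib.request.Request(
        f"{BASE}{path}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = urdf_request("/api/urdf/validate", "<robot name='ur5'><link name='base'/></robot>")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would actually send it once BASE is real.
```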
1 post - 1 participant
ROS Discourse General: ROS Lyrical Luth Beta and Call for Testing
We’re in the “beta” phase of development for ROS 2 Lyrical Luth! We have binary packages available for Ubuntu Resolute and RHEL 10, and rosdistro is open for newly released packages for Lyrical.
Testing the Lyrical beta
We published Installation instructions for Lyrical here. However, binary packages for Lyrical are only available in the testing repository.
Follow the pre-release testing instructions to use the ros-testing repository so that you can install the ros-lyrical-* packages.
For those using the comprehensive archive for installation, download the archive from the artifacts of this pre-release tag. If you’re building from source, use the ros2.repos file at that release.
If you think you’ve discovered a bug, please:
- check the open issues and PRs on the related repository, or
- discuss the issue in this thread, or
- open a new issue
We’ll triage the issue or PR and decide when and how it should be fixed in Lyrical.
Releasing your packages
If you are a package maintainer, please follow this guide to release your package in Lyrical.
Reminder that the tutorial party starts Thu, Apr 30, 2026 7:00 AM UTC. Find all the details in this post.
Thanks!
The ROS 2 Team
2 posts - 1 participant
ROS Discourse General: ROS 2 Lyrical Luth Release Illustration and Swag 🎸
Hi Everyone,
It is my pleasure to present you with the illustration for ROS 2 Lyrical Luth! This release illustration is the work of our illustrator Ryan Hungerford. Ryan is an illustrator based in the Bay Area and his AI (actual intelligence) makes for some wonderful illustrations.
Lyrical Swag Sale
We’re also happy to announce that the ROS 2 Lyrical Luth swag sale is now live. We’re now using Fourth Wall for all of our ROS swag sales, as the platform supports a wide array of items and allows us to produce merch on demand and ship it almost anywhere on earth. We’ve also created a permanent URL for ROS swag at store.openrobotics.org so it is easy to find. For this release we are offering eight different items for sale, including:
- Men’s, women’s, and kids’ shirts (we’re big fans of the tri-blend shirts)
- Baby onesies
- Hoodies and long-sleeve shirts
- Throw pillows
- Mugs
- Decorative prints
All profits from the Lyrical swag sale go directly to the Open Source Robotics Foundation and help support the ROS, Gazebo, ROS Control, and Open-RMF projects. If you order today you might just receive your swag by release day on May 22nd, 2026. If you would like to earn Lyrical swag by contributing to the project, please consider contributing to the Lyrical Test and Tutorial Party that is currently taking place. The top twenty test contributors will be sent a code to our swag store.
6 posts - 3 participants
ROS Discourse General: Lyrical Luth Test and Tutorial Party Instructions
Lyrical Luth Test and Tutorial Party Instructions
Update: Lyrical board has been updated and is live! Docs are live too. A recording of the kickoff meeting can be found here.
Hi Everyone,
As mentioned previously, we’re conducting a testing and tutorial party for the next ROS release, Lyrical Luth. If you happened to miss the kickoff of the Lyrical Luth Testing and Tutorial party this morning I have put together some written instructions that should let everyone participate, no matter their time zone. Here are the slides from the kickoff meeting.
TL;DR
We need your help to test the next ROS Distro before its release on Friday, May 22nd. We’re asking the community to pick a particular system setup, a combination of host operating system, CPU architecture, RMW vendor, and build type (source, debian, binary), and run through a set of ROS tutorials to make sure everything is working smoothly. Depending on the outcome of your tutorials, you can either close the ticket as completed or report the errors you found. If you can’t assign the ticket to yourself, leave a comment, and an admin will take care of it for you. Please do not sign up for more than one ticket at any given time. Everything you need to know about this process can be found in this Github repository.
As a thank you for your help, we’re planning to provide the top twenty contributors to the T&T party with their choice of either ROS Lyrical swag or OSRA membership.
To be eligible to receive swag, you must register using this short Google Form so we can match email addresses to GitHub usernames and count the total tickets closed.
The testing and tutorial party will close on May 14, 2026, but we’re asking everyone to get started right away! We have 10,000 tickets to work through and with Lyrical’s transition to C++20, we fully anticipate that we’ll need to update a few tutorials and fix some broken source builds.
Full Instructions
We’re planning to release ROS 2 Lyrical Luth on May 22, 2026, and we need the community’s help to make sure that we’ve thoroughly tested the distro on a variety of platforms before we make the final release. What do we mean by testing? Well, lots of things, but in the context of the testing and tutorial party, we are talking about the package-level ROS unit tests and anything else you want to test. What do we mean by tutorials? We also want to make sure all our docs.ros.org tutorials are in working order before the release.
The difficulty in testing a ROS release is that people have lots of different ways they use ROS, and we can’t possibly test all of those combinations. For the testing and tutorial party we have created what we call “a setup.” A setup is a combination of:
- RMW vendor: FASTDDS, CYCLONEDDS, CONNEXTDDS or ZENOH
- BuildType: binary, debian or source
- OS: Ubuntu Resolute 26.04, Windows 11 and RHEL-10
- Chipset: Amd64 or Arm64
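The four dimensions above multiply out to the nominal test matrix (some combinations, such as Debian packages on Windows, won't apply in practice):

```python
# Enumerate the nominal "setup" matrix from the four dimensions listed above.
from itertools import product

rmw = ["FASTDDS", "CYCLONEDDS", "CONNEXTDDS", "ZENOH"]
build = ["binary", "debian", "source"]
oses = ["Ubuntu Resolute 26.04", "Windows 11", "RHEL-10"]
chips = ["Amd64", "Arm64"]

setups = list(product(rmw, build, oses, chips))
print(len(setups))  # 72 nominal combinations
```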
If you already have a particular system setup that you work with, we suggest that you roll with that; otherwise, feel free to create a new system setup just for testing purposes. If you normally use Windows or RHEL (or binary compatible distributions to RHEL like Rocky Linux / Alma Linux) we would really appreciate your help as we don’t have a ton of internal resources to test these distributions.
Here are the steps for participating in the testing and tutorial party:
- Before you begin, please fill out the Google form so we have your contact information. We can’t send you swag if we don’t have both your email address and your Github user name.
- First go to the Tutorial Party Github repo (bit.ly/LyricalBoard) and read the README.md.
- Figure out your setup!
- Note your computer’s host operating system (either Ubuntu Resolute 26.04, Windows 11, or RHEL-10)
- Note your chipset, either AMD64 or ARM64; if you don’t know, it is probably AMD64.
- Note your installed RMW / DDS Vendor (this varies by host OS).
- Figure out how you want to install the ROS Lyrical Luth Beta, your options are:
- Binaries
- Debian installation
- Source installation
- A full list of pre-release binaries is available here.
- Once you’ve got your “setup” all figured out, use the bottom of the Lyrical Tutorial Party ReadMe file to filter by setup. There should be a set of tickets for your “setup”. Click on the links and review the available tickets. If you want to test something other than the available tickets, feel free to open a new ticket and describe exactly what you are testing.
- Pick a single ticket for your setup and use the assignees option to assign it to yourself. If you can’t assign yourself, leave a comment and an admin will assign the ticket to you
- Take a look at the ticket and do as it asks in the “Links” section. For example, in this ticket, its links section points you to this tutorial. You should use your new ROS Lyrical Luth setup to run through that tutorial.
Please note that we’re using the Rolling documentation. If you see instructions to install a rolling package you’ll need to modify those to point to lyrical.
- Once you complete the links section things will either go smoothly or you will run into problems. Please report your results using the check boxes in the “Checks” section of your Github issue.
- If everything goes well, note as such in your ticket’s comment section. We ask that you attach your terminal’s output as a code block or as a gist file or include a screenshot. At this point feel free to close the ticket by clicking “close as completed.”
- If something went poorly, please note it in your ticket’s comment section. Try to include a full stack trace or other debug output if possible. Please also run ros2 doctor --report and dump the output in your ticket.
- If you run into issues, you can also report them in our Zulip channel (#Lyrical-Luth-testing-party) or on our discussion board on Github (bit.ly/LyricalTrackingBoard).
The testing and tutorial party wraps up on May 14, 2026, but we’re asking everyone to get started early as we will need some lead time to address any bugs.
New for Lyrical: Pull Requests and Reviews
For 2026’s Test and Tutorial Party, we’re piloting a new feature: Lyrical Bug Fixes and PR Reviews. We’re looking for community members to help us out by lending us their eyes and expertise. For the T&T party we’re allowing participants to gain one extra point for each completed bug fix and PR review from the Lyrical board. We anticipate the majority of these issues will be documentation related so they should be fairly straightforward to fix.
For the T&T party we will provide you with one extra point if you do one of the following:
- Create a pull request for a bug fix that addresses documented issues listed in the Lyrical issues board.
- Review one of the pull requests or bug fixes listed in the Lyrical issues board.
- We also have a limited number of general ROS pull request reviews that are also in scope for the T&T party. You can find those here (bit.ly/Lyrical-PR-Reviews)
For Lyrical pull requests and bug fixes:
- Ask to be assigned the issue in the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard).
- Write the relevant code or documentation. Remember to use the correct branch!
- Build your solution and run the necessary tests and linters. This step is key to getting your PR approved.
- Submit your PR. You must include a brief description of the issue and the issue number from the tracking board.
- You must work with the reviewers to address all necessary feedback until the PR is accepted and merged.
- If you use AI for your pull request you must report it in a manner consistent with OSRF policy.
- Report your work using the form (bit.ly/LyricalPR).
For Lyrical reviews:
- Request to be assigned to the pull request from the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard). You can be assigned to one pull request at a time.
- Once you are assigned to the pull request you must do the following:
- Verify the fix by checking out the PR, building it, and replicating the bug conditions. For documentation this means checking out the PR and running make html.
- Take one or more screenshots of the result.
- Perform a realistic review of the pull request. There are two potential outcomes for your review:
- You find no issues.
- If that’s the case you must briefly list the steps you took to verify that the PR works and attach a screenshot.
- You find an issue and request changes.
- Changes should use the format: “Nit:” (minor change, usually a matter of preference, non-blocking), “Issue:” (major issue, blocking), “Suggestion:” (friendly suggestion, non-blocking), “Question:” (clarification, non-blocking), or “Chore:” (generally formatting issue, non-blocking)
- For issues and chores the feedback in the pull request should include the following:
- What specifically needs attention.
- Why this change is necessary.
- A suggestion on how to fix it.
- You must follow up with the PR author to make sure their changes fix your issue. We suggest using the “suggest changes” feature liberally to expedite the process.
- Generative AI should not be used for pull request reviews.
- Report your work using the form (bit.ly/LyricalPR).
14 posts - 6 participants
ROS Discourse General: I'm done manually tuning DDS parameters!
Raise your hand if this sounds familiar:
- You just want better latency or higher throughput for your ROS2 app — but DDS throws hundreds of parameters at you and you have no idea where to start.
- So you end up spending hours (or days) manually tweaking, re-running benchmarks, and tweaking again… only to end up with “good enough” instead of actually good.
If that’s you, I have something that might help: GitHub - qualcomm-qrb-ros/ROS2-DDSConfig-Optimizer: An AI-driven tool that automatically tunes DDS configuration for ROS2 applications. · GitHub
It’s an AI-driven tool that automatically tunes FastDDS configuration for your ROS 2 application. All you need to provide is:
- Your performance targets — latency, throughput, reliability, CPU/memory limits, whatever matters to you — in a simple XML file
- An initial DDS config as the baseline
That’s it. You’ll get back the best DDS configuration tailored to your application.
Would love to hear your feedback, bug reports, or feature ideas — issues and PRs are very welcome!
4 posts - 2 participants
ROS Discourse General: New ROS controller app
https://play.google.com/store/apps/details?id=com.jax.roscontroller
My app was finally approved on the play store. I have been using this app to control my quadruped running ROS2 on a Pi. This release has the fundamentals working. I will be adding additional features soon.
2 posts - 2 participants
ROS Discourse General: RobotCAD 10.8.1 new AI tool - Generate robot from primitives by text
Improvements:
- Reworked the Explode View tool: it now supports robot link position states (with memory) and an explode-offset slider.
Features:
- New AI Tool: “Generate primitive robot by text”. Creates a robot from primitives based on your description. Supports various LLM providers.
- New Tool: “Manage Link Display”. Toggles the display of the Visual, Collision, and Real elements of robot links. It has a “Set Placement Mode” to quickly enable visibility of Real elements and disable the others.
- Added a Sponsorship Block to the Settings Window. It can feature your company or your ads.
Explode View tool
AI Generator of Primitive Robots tool
Sponsorship Block
1 post - 1 participant
ROS Discourse General: micro-ROS public infrastructure transition
As part of an OSRA-led clarification of governance boundaries within the ROS ecosystem, the public infrastructure of micro-ROS is transitioning to the Vulcanexus ecosystem.
This change is limited to public infrastructure and hosting. The micro-ROS project itself, its goals, roadmap, APIs, and technical direction remain unchanged. micro-ROS continues to be fully aligned with ROS 2 and supports standard ROS 2 workflows.
The new canonical website for micro-ROS is:
https://micro.vulcanexus.org
During the transition period, micro.ros.org will display a notice page indicating the new location of the project and guiding users to update bookmarks and references.
We recommend updating any existing links, documentation, or automation to point to the new domain.
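If many files in a repository reference the old domain, a small script can handle the sweep. This is only a convenience sketch (the file patterns and directory layout are assumptions; adjust them to your repository, and review the diff before committing):

```python
from pathlib import Path

OLD = "micro.ros.org"
NEW = "micro.vulcanexus.org"

def update_links(root, patterns=("*.md", "*.rst", "*.txt")):
    """Rewrite OLD -> NEW in matching files under root.

    Returns the list of files that were changed.
    """
    changed = []
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            text = path.read_text(encoding="utf-8")
            if OLD in text:
                path.write_text(text.replace(OLD, NEW), encoding="utf-8")
                changed.append(str(path))
    return changed
```

Running it once over your docs tree updates bookmarks-in-text, README links, and similar references in one pass.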
Further updates will be shared as the transition progresses.
1 post - 1 participant
ROS Discourse General: Participants wanted for a survey on tooling and AI use in the ROS community!
Do you have opinions on the available ROS tooling? Are you using AI in your ROS development workflow? Or maybe you refuse to use AI and want to tell us why?
We want to hear from you!
We are a group of software engineering researchers at Carnegie Mellon University, VORTEX Collab, and the University of Lisbon investigating how ROS developers find and use information, what tools they rely on across different development tasks, and how AI-powered tools fit into the development workflow.
We are conducting a research survey to better understand the information needs, tooling gaps, and the role of AI in the ROS development process. This survey is estimated to take ~20 minutes to complete.
The research survey is open to ROS developers who are at least 18 years old and have at least one year of ROS experience. If you are interested in sharing your experiences, please visit the SURVEY LINK to complete the survey.
Responses are anonymous and will be used solely for research purposes. This research survey is part of a study (STUDY2026_00000158) conducted by Claire Le Goues and Christopher Timperley at Carnegie Mellon University. If you have any questions about the study, please contact Andrea Miller (PhD student) at andreami@andrew.cmu.edu.
2 posts - 1 participant
ROS Discourse General: Custom Capabilities in Transitive Robotics - Again | Cloud Robotics WG Meeting 2026-05-04
Please come and join us for this coming meeting on Mon, May 4, 2026, 4:00–5:00 PM UTC, where we plan to continue our Transitive Robotics tryout by trying one of the more advanced features: writing and deploying a custom capability. This feature allows customers to write their own custom code and deploy it to their robots alongside the features available directly from Transitive Robotics.
We did attempt this tryout last session (hence why the title might be familiar!), but as I used an unsupported system for setting up the development environment, most of the session was spent on the initial setup. Hence, we’re repeating the session using a supported operating system. If you’re interested in watching the meeting anyway, it is available on YouTube.
The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
2 posts - 2 participants
ROS Discourse General: Request for testing on pyspacemouse
Do you use a Spacemouse to control your robot? Then you might depend on `pyspacemouse`. If so, please test this merge request and let me know if it has any effect on your usage!
1 post - 1 participant
ROS Discourse General: [ACM Computing Surveys] ROS 2 in a Nutshell: A Survey
I am delighted to share that our manuscript, “ROS 2 in a Nutshell: A Survey,” has been officially accepted for publication in ACM Computing Surveys, one of the leading journals for high-impact review articles in computer science.
With an Impact Factor of 28.0, and ranked #1 out of 147 journals in Computer Science Theory & Methods, this journal is recognized among the most prestigious venues for high-impact survey research.
This work is the result of a valuable collaboration between researchers from the RIOTU Lab at Prince Sultan University and the AlfaisalX Center at Alfaisal University.
Co-authors: Abdulrahman S. Al-Batati, Anis Koubaa, Khaled Gabr, Mohamed Abdelkader, and Hamad Aloqaily, for more than two years of consistent work.
Our paper presents one of the most comprehensive and systematic reviews of ROS 2, covering its architecture, ecosystem, advances, challenges, and future directions in modern robotics.
Survey Highlights
8,033 papers surveyed
960 ROS 2 publications analyzed
176 community packages reviewed
2009–2025 research timeline covered
The survey explores:
Evolution from ROS 1 to ROS 2
Middleware architecture and DDS
Real-time systems and hardware acceleration
Security and safety
Multi-robot and distributed robotics
Simulators, frameworks, and open-source ecosystems
Applications in autonomous vehicles, healthcare, aerospace, logistics, agriculture, and public safety
Companion website and open-access database:
We also provide an open-access companion database for the ROS research community.
I sincerely thank my co-authors, and in particular Abdulrahman S. Al-Batati for his perseverance, as well as the editors and reviewers for their valuable feedback and support throughout this journey.
Excited to see this contribution help researchers, students, and engineers advance the future of robotics.
1 post - 1 participant