
Planet ROS

Planet ROS - http://planet.ros.org



ROS Discourse General: ros2_lingua: A safe, dependency-aware grounding engine for LLMs

Hi everyone,

Like many of us, I’ve been experimenting with giving LLMs control over robot hardware. However, I quickly ran into the classic problems: LLMs hallucinate actions, assume prerequisites that haven’t been met (e.g., trying to drive a humanoid before stabilizing it), and most existing integrations are just tightly coupled, hardcoded scripts.

To solve this, I built ros2_lingua — an open-source bridge that introduces a structured capability contract between ROS 2 nodes and LLMs.

Instead of letting the LLM guess what topics or actions to call, ros2_lingua forces the LLM to output a plan based only on explicitly registered capabilities, and uses a backward-chaining planner to automatically inject missing prerequisite steps.

How it works:

  1. Capability Advertisement: Any ROS 2 node can inherit from LinguaMixin to self-advertise its capabilities at boot. It defines its name, ROS action/service, parameters, preconditions, and postconditions.
  2. Backward-Chaining Planner: When a user gives a natural language instruction (e.g., “go to the table and pick up the bottle”), the Grounding Engine checks the robot’s current state against the capability schema. If the robot isn’t balanced, the planner automatically injects a stabilize_robot capability before the navigation step.
  3. Safe Dispatch: The DispatcherNode safely executes the validated plan over standard ROS 2 actions and services.
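The prerequisite-injection idea in step 2 can be sketched as a toy backward-chaining planner. This is a hypothetical illustration only; the Capability class, plan function, and registry shape below are invented for the example and are not the actual ros2_lingua API:

```python
# Hypothetical sketch of prerequisite injection via backward chaining.
# NOTE: invented for illustration; not the real ros2_lingua API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    name: str
    preconditions: frozenset = frozenset()
    postconditions: frozenset = frozenset()

def plan(goal, state, registry):
    """Return a step list for `goal`, prepending providers of unmet facts."""
    steps = []
    for fact in sorted(goal.preconditions - state):
        # Find a registered capability whose postconditions establish the fact.
        provider = next(c for c in registry if fact in c.postconditions)
        steps += plan(provider, state, registry)  # recurse on its own prereqs
        state |= provider.postconditions          # simulate its effects
    return steps + [goal.name]

stabilize = Capability("stabilize_robot", postconditions=frozenset({"balanced"}))
navigate = Capability("navigate_to", preconditions=frozenset({"balanced"}))
print(plan(navigate, set(), [stabilize]))  # ['stabilize_robot', 'navigate_to']
```

With an empty state, the toy planner injects stabilize_robot before navigate_to, mirroring the humanoid example above.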

Decoupled Architecture

​One of my main goals was to ensure the core logic was highly testable. The project is split into two layers:

ros2_lingua_core: A pure Python library containing the schema, registry, planner, and LLM backends (Ollama, OpenAI, Anthropic). It has zero ROS 2 dependencies, meaning the grounding engine can be unit-tested purely in Python.

ros2_lingua: The ROS 2 interface layer containing the GroundingNode, DispatcherNode, and mixins.

Links & Demo

You can see a demo of the engine running with a local Ollama model and a mock humanoid setup, along with the full architecture documentation, here:

Documentation & Architecture: ros2_lingua — Documentation

GitHub Repository: purahan/ros2_lingua (Natural language to ROS 2 actions: a structured LLM grounding engine for any robot)

What’s Next & Feedback Request

The project is currently a working prototype in Python. My immediate roadmap includes taking this to a release-ready state and building a C++ bridge so native controller nodes can easily advertise their capabilities.

Since this is early development, I would love feedback from the community on the architecture, specifically on the schema design for the capability registry and on how best to handle complex, long-running action pre-emptions within the Dispatcher.

Thanks for your time, and I’d love to hear your thoughts!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros2-lingua-a-safe-dependency-aware-grounding-engine-for-llms/54645

ROS Discourse General: Control Algorithm Dominance Survey

Hey guys, I’m doing a survey to gauge the dominance of different control engineering paradigms in industry: has there been a noticeable shift from classical controls to more modern algorithms, or are modern algorithms, while looking good on paper, mostly stuck in research papers?
I would love everyone’s input, from students to seasoned researchers.
You’re still welcome to contribute if you don’t work directly in controls, or if your work is controls-adjacent, like SWE or mechanical design.

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/control-algorithm-dominance-survey/54632

ROS Discourse General: Where does latency in WebRTC video streaming come from? An analysis

We analyzed the glass-to-glass latency of streaming video from robots to the web using WebRTC. Typical total latency for remote streaming is 150-180 ms, but how does this break down?

Tl;dr:

Read the full analysis here:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/where-does-latency-in-webrtc-video-streaming-come-from-an-analysis/54566

ROS Discourse General: ROS2 + Gazebo Harmonic on macOS 26

Hello,

It seems the ROS 2 installation guide for macOS 26 is this: Installing ROS 2 on macOS — ROS 2 Documentation: Crystal documentation
The Gazebo Harmonic installation guide for macOS 26 is this: Binary Installation on macOS — Gazebo Harmonic documentation

I read online that there might be some incompatibility issues. I would like to understand the current picture of ROS 2 + Gazebo Harmonic on macOS. Would those two guides work, and are there any expected issues?

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros2-gazebo-harmonic-on-macos-26/54564

ROS Discourse General: Camera streaming made easy [NEW Release]

Greetings fellow roboticists,

you might know me from other projects such as ros_babel_fish, the QML ROS 2 module, or RQml.
Maybe as the dude who ends all their posts with an AI-generated image.

TLDR: ros_camera_server is out now (link below). Efficient streaming to ROS, plus robust low-bandwidth streaming to the operator station over H.264/H.265 using RTP/SRT/WebRTC. Simple config, automatically optimized pipelines.

Camera streaming in ROS is kind of cumbersome, even though it’s always the same issue.
On the robot, you want raw (or compressed between PCs) ROS images for processing, and on the remote operator station, you want a low-latency, ideally low-bandwidth live video feed.
In academia, many setups I have encountered use compressed images for remote operator setups because it’s simple and, in good network conditions, works well enough.
Using usb_cam or gscam with compressed images is also what Opus 4.7 would suggest when asked.
While this does work, it’s quite resource- and bandwidth-intensive and is one of the roadblocks that need to be addressed to bring research solutions to the field, especially in rescue robotics, where bandwidth is a limited resource.

To save bandwidth, you can use something like gst_bridge to receive the ROS image, encode it in H.264, and send it to your remote operator station.
This will be approximately 1/10 to 1/30 of the bandwidth for comparable quality.
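The 1/10 to 1/30 figure is consistent with back-of-envelope numbers. As a rough sanity check (the per-frame JPEG size and H.264 bitrate below are illustrative assumptions, not measurements from the package):

```python
# Back-of-envelope check of the ~1/10 to 1/30 bandwidth claim.
# Frame size and bitrate are illustrative assumptions, not measurements.
def jpeg_stream_mbps(frame_kb, fps):
    """Bandwidth of a stream of independently JPEG-compressed frames, in Mbit/s."""
    return frame_kb * 1024 * 8 * fps / 1e6

jpeg = jpeg_stream_mbps(frame_kb=200, fps=30)   # ~49 Mbit/s at 1080p30
h264 = 3.0                                      # Mbit/s, a common teleop target
print(f"JPEG: {jpeg:.0f} Mbit/s, H.264: {h264} Mbit/s -> {jpeg / h264:.0f}x less")
```

With a ~200 KB JPEG frame at 30 fps versus a ~3 Mbit/s H.264 stream, the ratio lands around 16x, inside the claimed range.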

If your camera is a high-resolution USB camera, it will most likely stream JPEG-encoded data for the high-resolution, high-fps options.
So your pipeline becomes:
Camera (JPEG) → usb_cam (decodes JPEG) → image_transport (re-encodes as JPEG for compressed) → gst_bridge → handwritten GStreamer pipeline (requires some technical knowledge to get right) → stream

Doesn’t take an expert to see that this is not optimal.
Here’s where my new ros_camera_server comes in.
You specify one yaml file with your cameras, each with one input and as many outputs as you want (and your compute can handle).
The outputs can differ in resolution and framerate.
Currently supported are ROS 2, RTP, SRT, and WebRTC.
The camera server will automatically create and optimize GStreamer pipelines based on your available hardware acceleration; these produce your ROS output and streams in parallel, applying scale and framerate limiters as necessary.
This cuts the decode and re-encode overheads and significantly reduces latency and CPU usage.
JPEG camera input can also be published directly as a ROS-compressed image or forced to be decoded if needed.
Check the plots from my benchmark in the comments to see that the much easier configuration is not paid for with higher latency or overhead, and it beats the alternatives in both.

Here’s the repo:

If you can’t comply with the AGPL, you can contact me to see if we can find a suitable license for your use.

PS: The ros_camera_server preserves the image capture/header timestamp as a custom RTP header extension and can restore it from ros_camera_server H.264/H.265 streams.
So you can stream from the robot over RTP/SRT/WebRTC and restore it to ROS on the operator station or another robot, and the timestamp will be preserved in the ROS image output.
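Carrying a stamp across the stream boils down to serializing the (sec, nanosec) pair into the extension and restoring it on the far side. A minimal sketch, assuming a plain big-endian int32/uint32 layout matching ROS 2's builtin_interfaces/Time fields (the actual wire format ros_camera_server uses is not specified here):

```python
import struct

# Minimal sketch: pack a ROS-style (int32 sec, uint32 nanosec) stamp into
# 8 bytes for a custom header extension, and restore it on the receiver.
# The actual wire format used by ros_camera_server is an assumption here.
def pack_stamp(sec: int, nanosec: int) -> bytes:
    return struct.pack("!iI", sec, nanosec)   # big-endian int32 + uint32

def unpack_stamp(data: bytes) -> tuple:
    return struct.unpack("!iI", data)

payload = pack_stamp(1_700_000_000, 123_456_789)
assert unpack_stamp(payload) == (1_700_000_000, 123_456_789)
```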

Benchmarks



I hope this helps groups without video streaming experts to create more robust remote control setups.
If you read this far and this was not of interest to you, I’m sorry, here’s your AI picture:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/camera-streaming-made-easy-new-release/54562

ROS Discourse General: ALERT: do NOT visit plotjuggler.com

Hi,

avoid the page “plotjuggler DOT com”

I did not create this page and I have no idea who did.

Please do not download any file from there, there is a high risk of Malware / Phishing.

I will try to have a solution to this ASAP.

Davide

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/alert-do-not-visit-plotjuggler-com/54561

ROS Discourse General: Rclnodejs 2.0.0 beta — ROS 2 Lyrical (beta) and Node.js 26 support

Hi all,

For those new to the project: rclnodejs is the Node.js client library for ROS 2, maintained under the Robot Web Tools umbrella. It lets you write ROS 2 nodes — publishers, subscribers, services, actions, parameters, lifecycle, etc. — in plain JavaScript or TypeScript, with a native binding to rcl/rmw so messages stay zero-copy where possible. It’s a good fit for web dashboards, Electron desktop apps, browser bridges, scripting, and rapid prototyping.

I’m happy to announce that rclnodejs 2.0.0-beta.0 is out — the first preview of the 2.x line, with first-class support for the upcoming ROS 2 Lyrical Luth release on Ubuntu 26.04 and the latest Node.js 26.

What’s new in 2.0.0-beta.0

New: rosocket — talk to ROS 2 from a browser, no JS library

This release ships a lightweight WebSocket bridge called rosocket plus an end-to-end demo. The point: a browser tab can subscribe and publish to topics (and call services) using only the built-in WebSocket and JSON APIs — no rosbridge or roslibjs needed.

URL convention:


ws://<host>:<port>/topic/<name>

ws://<host>:<port>/service/<name>

Browser-side pub/sub in a few lines:


const BRIDGE = 'ws://localhost:9000';

// Subscribe — every published message arrives as onmessage.
const sub = new WebSocket(`${BRIDGE}/topic/chatter`);
sub.onmessage = (ev) => console.log('recv:', ev.data); // {"data":"hi"}

// Publish — open, send JSON, close.
const pub = new WebSocket(`${BRIDGE}/topic/chatter`);
pub.onopen = () => {
  pub.send(JSON.stringify({ data: 'hello from browser' }));
  setTimeout(() => pub.close(), 200);
};

Live walkthrough:

rclnodejs rosocket demo

Code: demo/rosocket.

Electron desktop visualization demos

For richer UIs, the same library powers cross-platform desktop apps via Electron — HTML/CSS/Three.js/WebGL on the front end, with real ROS 2 nodes running in the Electron main process. The demos cover topics, a car controller, a manipulator, and a turtle TF2 visualizer.

rclnodejs electron manipulator demo

Code: demo/electron.

Cheers,

Minggang

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/rclnodejs-2-0-0-beta-ros-2-lyrical-beta-and-node-js-26-support/54559

ROS Discourse General: Is FastDDS (default config) totally unusable for simulation with rclpy?

I was fighting some performance issues in our simulation for a subject I teach.

It boiled down to one slight performance improvement in rclpy, which I started discussing, but the larger surprise was how badly FastDDS does at distributing the 250 Hz /clock topic.

I did an experiment with our full autonomy stack (but on a TurtleBot with a 2D lidar, so nothing heavy). Most nodes run rclpy with MultiThreadedExecutor (more on that later). In total, 16 nodes subscribe to the /clock topic, and Gazebo runs at ~70% RTF, so the real frequency of /clock is more like 200 Hz.

Pink is FastDDS, blue is Zenoh.

Let me explain the plot:

The vertical axis shows dt, the delta time, i.e. the time difference between two consecutive /clock messages received by a node (measured by instrumenting the node’s default clock callback).

With FastDDS, some nodes sometimes do not receive any /clock message for more than 0.5 s!!! With Zenoh, the worst case is about 0.18 s.

Even more interesting is the best case: FastDDS achieves the expected dt of 0.004 s for only a small fraction of the dataset, around 5%. Zenoh achieves this dt most of the time (the almost solid blue line at the bottom). (No, this blue line does not hide the pink dots; there are almost none underneath it.)

This is the histogram of dt values (notice logarithmic axis, Zenoh left, FastDDS right):

I understand that the combination of rclpy and MultiThreadedExecutor is one of the worst things you can do in ROS 2, but still, this huge difference in usability between FastDDS and Zenoh hits me.
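The dt analysis described above reduces to simple statistics over receive timestamps. A self-contained sketch on a synthetic trace (the rclpy instrumentation itself is omitted; on a robot, the timestamps would come from the instrumented clock callback):

```python
# Sketch of the dt analysis: given receive timestamps of consecutive /clock
# messages, compute the worst-case dt and the fraction of samples near the
# nominal 4 ms period. Timestamps here are synthetic, not measured.
def dt_stats(timestamps, nominal=0.004, tol=0.001):
    dts = [b - a for a, b in zip(timestamps, timestamps[1:])]
    near_nominal = sum(1 for dt in dts if abs(dt - nominal) <= tol) / len(dts)
    return max(dts), near_nominal

# Synthetic trace: mostly 4 ms spacing with one 0.5 s stall in the middle.
ts, t = [], 0.0
for i in range(1000):
    t += 0.5 if i == 500 else 0.004
    ts.append(t)

worst, frac = dt_stats(ts)
print(f"worst dt: {worst:.3f} s, fraction at nominal: {frac:.1%}")
```

On this trace, the worst dt is the injected 0.5 s stall while almost all samples sit at the nominal period, which is the shape the FastDDS plot shows in the extreme.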

11 posts - 5 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/is-fastdds-default-config-totally-unusable-for-simulation-with-rclpy/54553

ROS Discourse General: [Release] emcon_gz_hardware_interface: A DDS-bypassing hardware interface for Gazebo Harmonic

Hi everyone,

When scaling up multi-robot simulations or running complex network-isolated state estimators in Gazebo Harmonic, relying on the standard gz_ros2_control shared memory architecture can become a bottleneck or present domain isolation challenges.

To solve this, I’ve open-sourced emcon_gz_hardware_interface.

It acts as a standard ros2_control SystemInterface, but instead of routing simulation traffic through ROS 2 DDS, it acts as a “data diode” and subscribes/publishes directly to native gz-transport topics.

Key Features:

Repository: https://github.com/yenode/emcon_gz_hardware_interface

I’ve already opened an RFC with the ros-controls team to see if this fits upstream, but the standalone package is fully CI-tested and ready to use for ROS 2 Jazzy! I’d love to hear feedback from anyone running massive fleets or strict network simulations.

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/release-emcon-gz-hardware-interface-a-dds-bypassing-hardware-interface-for-gazebo-harmonic/54549

ROS Discourse General: "ROS 2 in a Nutshell: A Survey" is now published in ACM Computing Surveys

Hi everyone,

I have been a silent observer here since 2019, and this is my first post on Discourse. I hope you will excuse the sudden intrusion :grinning_face_with_smiling_eyes:

I am pleased to share that our survey paper, “ROS 2 in a Nutshell: A Survey,” has been published in ACM Computing Surveys.

Paper DOI: https://doi.org/10.1145/3815113

ACM Computing Surveys is recognized with an Impact Factor of 28.0 and is ranked #1 out of 147 journals in Computer Science, Theory & Methods.

The goal of this survey is to provide a broad and systematic overview of the ROS 2 ecosystem. In particular, the paper covers:

  • the evolution of ROS 2
  • the motivations behind ROS 2 and its architectural redesign
  • middleware and RMW evolution
  • a taxonomy of ROS 2 literature and research directions
  • frameworks, simulators, community packages, and the broader ROS 2 software ecosystem


The paper is organized around three main research questions:

  1. How does ROS 2 improve upon ROS 1, and what new limitations arise?
  2. What advances address redesign challenges and enable deployment?
  3. Which frameworks and tools shape the ROS 2 ecosystem, and where are the remaining gaps?

A major outcome of the work is an open-access companion database that organizes ROS-related literature, tools, and ecosystem resources:

ROS 2 survey database: ROS 2 in a Nutshell: A Survey

We also welcome community contributions to improve and extend the database:

Contribution guide: awesome-ros/CONTRIBUTING.md at main · asmbatati/awesome-ros · GitHub

This work was carried out by:

We would also like to express our appreciation to:

for their valuable support.

We are also grateful for the support of:

I hope the survey and the companion database will be useful to researchers, developers, and students working with ROS 2. :folded_hands:

Feedback, corrections, and suggestions for additions to the database are very welcome.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-in-a-nutshell-a-survey-is-now-published-in-acm-computing-surveys/54524

ROS Discourse General: Guidance on learning ROS2 focusing on motion planning and Perception.

Hello guys, I am a beginner at learning ROS2. I want guidance on learning it from scratch, right now I am stuck on where to start. Watching a couple of YouTube tutorials, but struggling to understand certain concepts. I am here to learn from people who have prior knowledge of ROS.

Thanks

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/guidance-on-learning-ros2-focusing-on-motion-planning-and-perception/54513

ROS Discourse General: Ouster unveils the first native color lidar sensor

5 posts - 3 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ouster-unveil-the-first-native-color-lidar-sensor/54504

ROS Discourse General: RVizSplat: Visualize 3D Gaussian splats in RViz!

Hey everyone!

We are happy to share Release 1.0.0 of RVizSplat!
In this release, we provide an RViz display plugin that renders 3D Gaussian splats alongside other conventional markers in the scene. Along with this, we also provide an interface to stream GSplats over ROS topics and the ability to read .ply files which are in the 3DGS INRIA format directly from the plugin.

In case you wish to render Gaussian-splatted scenes in a resource-constrained environment, we also provide OIT-based implementations that bypass sorting the Gaussians at the cost of some rendering quality.

On an Nvidia RTX 3060 or better, we render at 40-60 FPS on a large scene with more than 6 million splats, and at about 20 FPS on a modern integrated GPU for the same scene. On comparatively smaller scenes (1-3 million splats), we achieve 100+ FPS. We also provide the ability to sort on an Nvidia GPU (radix sort) and on the CPU (PDQ sort).
Additional sorting techniques can be implemented through a simple interface if they suit your needs better.
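For context on why the sorting step matters at all: alpha blending is order-dependent, so splats are usually composited back-to-front by view-space depth every frame, and that sort is what OIT lets you skip. A toy sketch of the ordering (not RVizSplat code; the splat/camera representation is invented for the example):

```python
# Toy illustration of the per-frame splat ordering: alpha blending is
# order-dependent, so splats are composited back-to-front by view-space depth.
# Not RVizSplat code; data layout invented for the example.
def depth(splat, cam_pos, cam_forward):
    # Distance along the camera's forward axis (view-space z).
    return sum((s - c) * f for s, c, f in zip(splat["pos"], cam_pos, cam_forward))

def sort_back_to_front(splats, cam_pos=(0, 0, 0), cam_forward=(0, 0, 1)):
    return sorted(splats, key=lambda s: depth(s, cam_pos, cam_forward), reverse=True)

splats = [{"pos": (0, 0, 1.0)}, {"pos": (0, 0, 5.0)}, {"pos": (0, 0, 3.0)}]
print([s["pos"][2] for s in sort_back_to_front(splats)])  # farthest first
```

Doing this for millions of splats per frame is exactly why fast GPU/CPU sorts, or skipping the sort via OIT, are worth offering.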

The package is currently well tested on ROS 2 Rolling and is experimental on Jazzy and Kilted.
Please try out our work, let us know what you think, and star the repository to support the project :grin:

Team members that have made this project possible: Videh Patel, Akash Chikhalikar, Aditya Mathur, and Suchetan Saravanan.


3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/rvizsplat-visualize-3d-gaussian-splats-in-rviz/54487

ROS Discourse General: I tested URDF format conversion on NASA Valkyrie, UR5, and Franka Panda results and a free tool

I validated and converted URDFs from 5 popular robots to both Gazebo SDF and MuJoCo MJCF format.

The tool is free to try (14-day Pro trial, no credit card):

Python SDK: pip install roboinfra-sdk
GitHub Action: roboinfra/validate-urdf-action@v1

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/i-tested-urdf-format-conversion-on-nasa-valkyrie-ur5-and-franka-panda-results-and-a-free-tool/54476

ROS Discourse General: ROS Lyrical Luth Beta and Call for Testing

We’re in the “beta” phase of development for ROS 2 Lyrical Luth! We have binary packages available for Ubuntu Resolute and RHEL 10, and rosdistro is open for newly released packages for Lyrical.

Testing the Lyrical beta

We published Installation instructions for Lyrical here. However, binary packages for Lyrical are only available in the testing repository.

Follow the pre-release testing instructions to use the ros-testing repository so that you can install the ros-lyrical-* packages.

For those using the comprehensive archive for installation, download the archive from the artifacts of this pre-release tag. If you’re building from source, use the ros2.repos file at that release.

If you think you’ve discovered a bug, please:

We’ll triage the issue or PR and decide when and how it should be fixed in Lyrical.

Releasing your packages

If you are a package maintainer, please follow this guide to release your package in Lyrical.

Reminder that the tutorial party starts Thu, Apr 30, 2026 7:00 AM UTC. Find all the details in this post.

Thanks!

The ROS 2 Team

2 posts - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-lyrical-luth-beta-and-call-for-testing/54441

ROS Discourse General: ROS 2 Lyrical Luth Release Illustration and Swag 🎸

Hi Everyone,

It is my pleasure to present you with the illustration for ROS 2 Lyrical Luth! This release illustration is the work of our illustrator Ryan Hungerford. Ryan is an illustrator based in the Bay Area and his AI (actual intelligence) makes for some wonderful illustrations.

Lyrical Swag Sale

We’re also happy to announce that the ROS 2 Lyrical Luth swag sale is now live. We’re now using Fourth Wall for all of our ROS swag sales, as the platform supports a wide array of items and allows us to produce merch on demand and ship it almost anywhere on earth. We’ve also created a permanent URL for ROS swag at store.openrobotics.org so it is easy to find. For this release we are offering eight different items for sale, including:

All profits from the Lyrical swag sale go directly to the Open Source Robotics Foundation and help support the ROS, Gazebo, ROS Control, and Open-RMF projects. If you order today, you might just receive your swag by release day on May 22nd, 2026. If you would like to earn Lyrical swag by contributing to the project, please consider contributing to the Lyrical Test and Tutorial party that is currently taking place. The top twenty test contributors will be sent a code to our swag store.

6 posts - 3 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-lyrical-luth-release-illustration-and-swag/54432

ROS Discourse General: Lyrical Luth Test and Tutorial Party Instructions

Lyrical Luth Test and Tutorial Party Instructions

:tada: Update: Lyrical board has been updated and is live! Docs are live too. A recording of the kickoff meeting can be found here.

Hi Everyone,

As mentioned previously, we’re conducting a testing and tutorial party for the next ROS release, Lyrical Luth. If you happened to miss the kickoff of the Lyrical Luth Testing and Tutorial party this morning I have put together some written instructions that should let everyone participate, no matter their time zone. Here are the slides from the kickoff meeting.

TL;DR

We need your help to test the next ROS Distro before its release on Friday, May 22nd. We’re asking the community to pick a particular system setup, a combination of host operating system, CPU architecture, RMW vendor, and build type (source, debian, binary), and run through a set of ROS tutorials to make sure everything is working smoothly. Depending on the outcome of your tutorials, you can either close the ticket as completed or report the errors you found. If you can’t assign the ticket to yourself, leave a comment, and an admin will take care of it for you. Please do not sign up for more than one ticket at any given time. Everything you need to know about this process can be found in this Github repository.

As a thank you for your help, we’re planning to provide the top twenty contributors to the T&T party with their choice of either ROS Lyrical swag or OSRA membership. :warning: To be eligible to receive swag, you must register using this short Google Form so we can match email addresses to GitHub usernames and count the total tickets closed.:warning:

The testing and tutorial party will close on May 14, 2026, but we’re asking everyone to get started right away! We have 10,000 tickets to work through and with Lyrical’s transition to C++20, we fully anticipate that we’ll need to update a few tutorials and fix some broken source builds.

Full Instructions

We’re planning to release ROS 2 Lyrical Luth on May 22, 2026, and we need the community’s help to make sure that we’ve thoroughly tested the distro on a variety of platforms before we make the final release. What do we mean by testing? Well, lots of things, but in the context of the testing and tutorial party, we are talking about the package-level ROS unit tests and anything else you want to test. What do we mean by tutorials? We also want to make sure all our docs.ros.org tutorials are in working order before the release.

The difficulty in testing a ROS release is that people have lots of different ways they use ROS, and we can’t possibly test all of those combinations. For the testing and tutorial party we have created what we call, “a setup.” A setup is a combination of:

If you already have a particular system setup that you work with, we suggest that you roll with that; otherwise, feel free to create a new system setup just for testing purposes. If you normally use Windows or RHEL (or binary compatible distributions to RHEL like Rocky Linux / Alma Linux) we would really appreciate your help as we don’t have a ton of internal resources to test these distributions.

Here are the steps for participating in the testing and tutorial party:

The testing and tutorial party wraps up on May 14, 2026 , but we’re asking everyone to get started early as we will need some lead time to address any bugs.

New for Lyrical: Pull Requests and Reviews

For 2026’s Test and Tutorial Party, we’re piloting a new feature: Lyrical Bug Fixes and PR Reviews. We’re looking for community members to help us out by lending us their eyes and expertise. For the T&T party we’re allowing participants to gain one extra point for each completed bug fix and PR review from the Lyrical board. We anticipate the majority of these issues will be documentation related so they should be fairly straightforward to fix.

For the T&T party we will provide you with one extra point if you do one of the following:

To help us track and tabulate scores, you must fill out this short form every time you complete a review or a PR.

For Lyrical pull requests and bug fixes:

For Lyrical reviews:

14 posts - 6 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/lyrical-luth-test-and-tutorial-party-instructions/54427

ROS Discourse General: I'm done manually tuning DDS parameters!

Raise your hand if this sounds familiar:

  1. You just want better latency or higher throughput for your ROS2 app — but DDS throws hundreds of parameters at you and you have no idea where to start.
  2. So you end up spending hours (or days) manually tweaking, re-running benchmarks, and tweaking again… only to end up with “good enough” instead of actually good.

If that’s you, I have something that might help: the qualcomm-qrb-ros/ROS2-DDSConfig-Optimizer repository on GitHub, an AI-driven tool that automatically tunes DDS configuration for ROS 2 applications.

It’s an AI-driven tool that automatically tunes FastDDS configuration for your ROS 2 application. All you need to provide is:

  1. Your performance targets — latency, throughput, reliability, CPU/memory limits, whatever matters to you — in a simple XML file
  2. An initial DDS config as the baseline

That’s it. You’ll get back the best DDS configuration tailored to your application. :sparkles:
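Under the hood, any such tuner has to close a tune-measure loop: propose a configuration, benchmark it against the targets, keep the best. A generic sketch of that loop using random search (benchmark() and the parameter names are placeholders; the repository's actual optimizer is not described here):

```python
import random

# Generic sketch of the tune-measure loop such a tool automates. The cost
# model and parameter names below are placeholders, not the tool's internals.
def benchmark(cfg):
    # Placeholder standing in for a real latency/throughput benchmark run.
    return abs(cfg["history_depth"] - 16) + 0.1 * cfg["max_blocking_ms"]

def random_search(space, trials=200, seed=0):
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        cost = benchmark(cfg)          # re-run the benchmark per candidate
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

space = {"history_depth": [1, 8, 16, 64], "max_blocking_ms": [0, 10, 100]}
cfg, cost = random_search(space)
print(cfg, cost)
```

The value of an automated tool is replacing this slow manual loop (hours of tweak-benchmark-repeat) with a guided search over the huge DDS parameter space.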

Would love to hear your feedback, bug reports, or feature ideas — issues and PRs are very welcome!

4 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/im-done-manually-tuning-dds-parameters/54415

ROS Discourse General: New ROS controller app

https://play.google.com/store/apps/details?id=com.jax.roscontroller

My app was finally approved on the play store. I have been using this app to control my quadruped running ROS2 on a Pi. This release has the fundamentals working. I will be adding additional features soon.

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/new-ros-controller-app/54393

ROS Discourse General: RobotCAD 10.8.1 new AI tool - Generate robot from primitives by text

Improvements:

  1. Reworked the Explode View tool: it now supports robot link position states (with memory) and an explode offset slider.

Features:

  1. New AI tool - “Generate primitive robot by text”.
    Creates a robot from primitives based on your description. Supports various LLM providers.

  2. New tool - “Manage Link Display”.
    Toggles the display of the Visual, Collision, and Real elements of robot links. It also has a “Set Placement Mode” that quickly enables visibility of the Real elements and disables the others.

Added a Sponsorship Block to the Settings window. It can feature your company or ads.

Explode View tool

AI Generator of Primitive Robots tool

Sponsorship Block

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/robotcad-10-8-1-new-ai-tool-generate-robot-from-primitives-by-text/54384

ROS Discourse General: micro-ROS public infrastructure transition

As part of an OSRA-led clarification of governance boundaries within the ROS ecosystem, the public infrastructure of micro-ROS is transitioning to the Vulcanexus ecosystem.

This change is limited to public infrastructure and hosting. The micro-ROS project itself, its goals, roadmap, APIs, and technical direction remain unchanged. micro-ROS continues to be fully aligned with ROS 2 and supports standard ROS 2 workflows.

The new canonical website for micro-ROS is:
https://micro.vulcanexus.org

During the transition period, micro.ros.org will display a notice page indicating the new location of the project and guiding users to update bookmarks and references.

We recommend updating any existing links, documentation, or automation to point to the new domain.

Further updates will be shared as the transition progresses.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/micro-ros-public-infrastructure-transition/54375

ROS Discourse General: Participants wanted for a survey on tooling and AI use in the ROS community!

Do you have opinions on the available ROS tooling? Are you using AI in your ROS development workflow? Or maybe you refuse to use AI and want to tell us why?

We want to hear from you!

We are a group of software engineering researchers at Carnegie Mellon University, VORTEX Collab, and the University of Lisbon investigating how ROS developers find and use information, what tools they rely on across different development tasks, and how AI-powered tools fit into the development workflow.

We are conducting a research survey to better understand the information needs, tooling gaps, and the role of AI in the ROS development process. This survey is estimated to take ~20 minutes to complete.

The research survey is open to ROS developers who are at least 18 years old and with at least one year of experience. If you are interested in sharing your experiences, please visit the SURVEY LINK to complete the survey.

Responses are anonymous and will be used solely for research purposes. This research survey is part of a study (STUDY2026_00000158) conducted by Claire Le Goues and Christopher Timperley at Carnegie Mellon University. If you have any questions about the study, please contact Andrea Miller (PhD student) at andreami@andrew.cmu.edu.

2 posts - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/participants-wanted-for-a-survey-on-tooling-and-ai-use-in-the-ros-community/54371

ROS Discourse General: Custom Capabilities in Transitive Robotics - Again | Cloud Robotics WG Meeting 2026-05-04

Please come and join us for this coming meeting on Mon, May 4, 2026, from 4:00 PM to 5:00 PM UTC, where we plan to continue our Transitive Robotics tryout by trying one of the more advanced features: writing and deploying a custom capability. This feature allows customers to write their own custom code and deploy it to their robots alongside the features available directly from Transitive Robotics.

We did attempt this tryout last session (hence why the title might be familiar!), but as I used an unsupported system for setting up the development environment, most of the session was spent on the initial setup. Hence, we’re repeating the session using a supported operating system. If you’re interested in watching the meeting anyway, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/custom-capabilities-in-transitive-robotics-again-cloud-robotics-wg-meeting-2026-05-04/54358

ROS Discourse General: Request for testing on pyspacemouse

Do you use a SpaceMouse to control your robot? Then you might depend on `pyspacemouse`. If so, please test this merge request and let me know if it has any effect on your usage!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/request-for-testing-on-pyspacemouse/54346

ROS Discourse General: [ACM Computing Surveys] ROS 2 in a Nutshell: A Survey

I am delighted to share that our manuscript, “ROS 2 in a Nutshell: A Survey,” has been officially accepted for publication in ACM Computing Surveys, one of the leading journals for high-impact review articles in computer science.

With an Impact Factor of 28.0, and ranked #1 out of 147 journals in Computer Science, Theory & Methods, this journal is recognized among the most prestigious venues for high-impact survey research.

This work is the result of a valuable collaboration between researchers from the RIOTU Lab at Prince Sultan University and the AlfaisalX Center at Alfaisal University.
Co-authors: ����������� �. ��-������, ���� ������, ������ ����, ������� ����������, ��� ����� �������� for more than 2 years of consistent work

Our paper presents one of the most comprehensive and systematic reviews of ROS 2, covering its architecture, ecosystem, advances, challenges, and future directions in modern robotics.

:bar_chart: Survey Highlights
:star: 8,033 papers surveyed
:star: 960 ROS 2 publications analyzed
:star: 176 community packages reviewed
:star: 2009–2025 research timeline covered

The survey explores:
:star: Evolution from ROS 1 to ROS 2
:star: Middleware architecture and DDS
:star: Real-time systems and hardware acceleration
:star: Security and safety
:star: Multi-robot and distributed robotics
:star: Simulators, frameworks, and open-source ecosystems
:star: Applications in autonomous vehicles, healthcare, aerospace, logistics, agriculture, and public safety

:globe_with_meridians: Companion website and open-access database:

We also provide an open-access companion database for the ROS research community.

I sincerely thank my co-authors, and in particular ����������� �. ��-������ for his perseverance, as well as the editors and reviewers for their valuable feedback and support throughout this journey.

Excited to see this contribution help researchers, students, and engineers advance the future of robotics.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/acm-computing-surveys-ros-2-in-a-nutshell-a-survey/54345


2026-05-09 12:18