Research using InDro robots for real-world autonomy


By Scott Simmie

 

As you’re likely aware by now, InDro builds custom robots for a wide variety of clients. Many of those clients are themselves researchers, creating algorithms that push the envelope in multiple sectors.

Recently, we highlighted amazing work being carried out at the University of Alberta, where our robots are being developed as Smart Walkers – intended to assist people with partial paralysis. (It’s a really fascinating story you can find right here.)

Today, we swing the spotlight down to North Carolina State University. That’s where we find Donggun Lee, Assistant Professor in the Departments of Mechanical Engineering and Aerospace Engineering. Donggun holds a PhD in Mechanical Engineering from UC Berkeley (2022), as well as a Master of Science in the same discipline from the Korea Advanced Institute of Science and Technology. He oversees a small number of dedicated researchers at NCSU’s Intelligent Control Lab.

“We are working on safe autonomy in various vehicle systems and in uncertain conditions,” he explains.

That work could one day lead to safer and more efficient robot deliveries and enhance the use of autonomous vehicles in agriculture.

Below: Four modified AgileX Scout Mini platforms, outfitted with LiDAR, depth cameras and Commander Navigate, are being used for research at NCSU


“UNCERTAIN” CONDITIONS

 

When you head out for a drive, it’s usually pretty predictable – but never certain. Maybe an oncoming vehicle will unexpectedly turn in front of you, or someone you’re following will spill a coffee on their lap and slam on their brakes. Perhaps the weather will change and you’ll face slippery conditions. As human beings, we’ve learned to respond as quickly as we can to uncertain scenarios or conditions. And, thankfully, we’re usually pretty good at it.

But what about robots? Delivery robots, for example, are already being rolled out at multiple locations in North America (and are quite widespread in China). How will they adapt to other robots on the road, or human-driven vehicles and even pedestrians? How will they adapt to slippery patches of ice or other unanticipated changes in terrain? The big picture goes far beyond obstacle avoidance – particularly if you’re also interested in efficiency. How do you ensure safe autonomy without being so careful that you slow things down?

These are the kinds of questions that intrigue Donggun Lee. And, for several years now, he has been searching for answers through research. To give you an idea of how his brain ticks, here’s the abstract from one of his co-authored IEEE papers:

Autonomous vehicles (AVs) must share the driving space with other drivers and often employ conservative motion planning strategies to ensure safety. These conservative strategies can negatively impact AV’s performance and significantly slow traffic throughput. Therefore, to avoid conservatism, we design an interaction-aware motion planner for the ego vehicle (AV) that interacts with surrounding vehicles to perform complex maneuvers in a locally optimal manner. Our planner uses a neural network-based interactive trajectory predictor and analytically integrates it with model predictive control (MPC). We solve the MPC optimization using the alternating direction method of multipliers (ADMM) and prove the algorithm’s convergence.

That gives you an idea of what turns Donggun’s crank. But with the addition of four InDro robots to his lab, he says research could explore many potential vectors.
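To make the receding-horizon idea from that abstract a little more concrete, here’s a toy sketch of a single MPC step for a 1D ego vehicle following a predicted lead vehicle. It’s a minimal illustration under our own assumptions – the actual planner couples a neural trajectory predictor with MPC and solves it via ADMM, none of which is shown here.

```python
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.2, 10  # step length (s) and lookahead steps

def rollout(x0, v0, accels):
    """Forward-simulate 1D positions under a sequence of accelerations."""
    xs, x, v = [], x0, v0
    for a in accels:
        v += a * DT
        x += v * DT
        xs.append(x)
    return np.array(xs)

def mpc_step(x0, v0, lead_pred, gap=5.0):
    """Solve one receding-horizon step; return only the first control."""
    def cost(accels):
        xs = rollout(x0, v0, accels)
        tracking = np.sum((lead_pred - gap - xs) ** 2)  # hold `gap` m back
        effort = 0.1 * np.sum(np.square(accels))        # penalise harsh inputs
        return tracking + effort
    return minimize(cost, np.zeros(HORIZON)).x[0]  # re-plan every tick

# Predicted lead-vehicle positions: starts 20 m ahead, cruising at 8 m/s
lead = 20.0 + 8.0 * DT * np.arange(1, HORIZON + 1)
print(mpc_step(0.0, 6.0, lead))  # ego accelerates to close the gap
```

The receding-horizon trick – optimise over the whole horizon, apply only the first control, then re-solve – is what lets a planner absorb fresh predictions about surrounding vehicles at every tick.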

“Any vehicle applications are okay in our group,” he explains. “We just try to develop a general control and AI machine learning framework that works well in real vehicle scenarios.”

One (of many) applications that intrigues Donggun is agriculture. He’s interested in algorithms that could be used on a real farm, so that an autonomous tractor could safely follow an autonomous combine. Along those lines, the lab has programmed the open-source Crazyflie drone to autonomously follow the InDro robot. Despite the fact it’s a drone, Donggun says the algorithm could be useful to that agricultural work.

“You can easily replace a drone with a ground vehicle,” he explains.
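For a flavour of how that following behaviour can work, here’s a minimal proportional-pursuit sketch – our own illustration, not the lab’s algorithm – in which a follower steers toward a standoff point behind the leader. Swap in different dynamics and the same logic serves a drone or a ground vehicle.

```python
import numpy as np

def follow_step(leader_pos, leader_heading, follower_pos, k=0.8, standoff=1.0):
    """Proportional pursuit: steer toward a point `standoff` m behind the leader."""
    target = leader_pos - standoff * np.array(
        [np.cos(leader_heading), np.sin(leader_heading)])
    return k * (target - follower_pos)  # velocity command, m/s

# Leader at (5, 2) heading along +x; follower trails at (3, 1.5)
print(follow_step(np.array([5.0, 2.0]), 0.0, np.array([3.0, 1.5])))
# -> [0.8 0.4], nudging the follower toward the standoff point
```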

And that’s not all.

“We are also currently tackling food delivery robot applications. There are a lot of uncertainties there: Humans walking around the robot, other nearby robots…How many humans will these robots interact with – and what kind of human behaviours will occur? These kinds of things are really unknown; there are no prior data.”

And so Donggun hopes to collect some.

“We want to develop some sort of AI system that will utilise the sensor information from the InDro robots in real-time. We eventually hope to be able to predict human behaviours and make decisions in real-time.”

Plus, some of Donggun’s previous research can inform what comes next. The paper cited above is a good example. In addition to the planned work on human-robot interaction, that earlier research could also be applied to maximise efficiency.

“There is a trade-off between safety guarantees and getting high performance. You want to get to a destination as quickly as possible while still avoiding collisions.”

He explains that the pendulum tends to swing to the cautious side, where algorithms account for virtually all scenarios – including occurrences that are unlikely. By excluding some of those exceedingly rare ‘what-ifs’, he says speed and efficiency can be maximised without compromising safety.
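One common way to formalise that trade-off – our gloss, not necessarily the lab’s exact formulation – is a chance constraint: rather than forbidding collisions in every conceivable scenario, the planner bounds their probability by a small tolerance ε:

```latex
\min_{u_{0:T-1}} \; \sum_{t=0}^{T-1} \ell(x_t, u_t)
\quad \text{subject to} \quad
\Pr\left[ x_t \in \mathcal{X}_{\text{unsafe}} \right] \le \varepsilon
\quad \forall t
```

Driving ε to exactly zero forces the planner to hedge against even the rarest what-ifs; a tiny but non-zero ε discards those edge cases and recovers speed.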

Below: Image from Donggun’s autonomy research showing the InDro robot being followed by an open-source Crazyflie drone


INDRO’S TAKE

 

We, obviously, like to sell robots. In fact, our business depends on it.

And while we put all of our clients on a level playing field, we have a special place in our non-robotic hearts for academic institutions doing important R&D. This is the space where breakthroughs are made.

“I really do love working with people in the research space,” says Head of R&D Sales Luke Corbeth. “We really make a concerted effort to maximise their budgets and, when possible, try to value-add with some extras. And, as with all clients, InDro backs what we sell with post-sale technical support and troubleshooting.”

The robots we delivered to NCSU were purchased under a four-year budget, and delivered last summer. Though the team is already carrying out impressive work, we know there’s much more to come and will certainly check in a year or so down the road.

In the meantime, if you’re looking for a robot or drone – whether in the R&D or Enterprise sectors – feel free to get in touch with Luke Corbeth here. He takes pride in finding clients solutions that work.

Research at U of Alberta focuses on robotics for medical applications


By Scott Simmie

 

You’ve probably heard of the “Three Ds” by now: Robots are perfect for tasks that are Dirty, Dull and Dangerous. In fact, we recently took a pretty comprehensive look at why inspection robots can tick all of these boxes – while saving companies from unplanned downtime.

Generally, that maxim holds true. But a recent conversation with two researchers from the University of Alberta got us thinking that some innovative robotics applications don’t truly fit this description. Specifically, certain medical or healthcare use-cases.

The people we spoke to carry out their research under the umbrella of a body that intersects the robotics and healthcare sectors. It’s called the Telerobotic and Biorobotic Systems Group in the Electrical and Computer Engineering Department of the U of A. It’s under the direction of Prof. Mahdi Tavakoli, who is kind of a big name in this sector. Within that group, there are three separate labs:

  • CREATE Lab (Collaborative, Rehabilitation, Assistive robotics research)
  • HANDS Lab (Haptics and Surgery research)
  • SIMULAT-OR Lab (A simulated operating room featuring a da Vinci Surgical System)

Broadly, the research can be thought of as belonging to one of two realms: Rehabilitation/assistive and surgical. But what does that actually mean? And how has a robot from InDro been modified to become a smart device that can assist people with certain disabilities?

Let’s dive in.

Below: Could a robotic platform like the Ranger Mini be put to use helping someone with mobility issues? We’ll find out…


HELPING PEOPLE (AND EVEN SURGEONS)

 

We spoke with researchers Sadra Zargarzadeh and Mahdi Chalaki. Sadra is a Master’s student in Electrical and Computer Engineering and previously studied Mechanical Engineering at Iran’s Sharif University of Technology. Mahdi is also a Master’s student in the same department, and studied Mechanical Engineering at the University of Tehran.

Sadra’s research has focused on healthcare robotics with an emphasis on autonomous systems leveraging Large Language Model AI.

“I’ve always had a passion for helping people that have disabilities,” he explains. “And in the rehab sector we often deal with patients that have some sort of fine motor skill issue or challenge in executing tasks the way they’d like to. Robotics has the potential to mitigate some of these issues and essentially be a means to remove some of the barriers patients are dealing with – so I think there’s a very big potential for engineering and robotics to increase the quality of life for these people.”

That’s not dirty, dull or dangerous. But it is a very worthwhile use-case.

 

SMART WALKER

 

People with mobility and/or balance issues often require the help of walkers. Some of these devices are completely manual, while others have their own form of locomotion that keeps pace with the user’s desired speed. Direction is generally controlled with two hands on some form of steering mechanism: equal pressure from each hand keeps the device moving in a straight line, and pushing harder on one side steers it.

But what about someone who has had a stroke that has left them with partial paralysis on one side? They may well be unable to apply equal force, meaning the device would turn despite their intent to travel straight ahead. That’s where Mahdi’s research comes in.

“Robotic walkers or Smart Walkers have been studied for more than 20 years,” he says. “But in almost all of them, their controllers assume you have the same amount of force in both of your hands. And people with strokes often don’t have the same strength in one side of their body as they have on the other side.”

So how can robotics compensate for that? Well, using an AgileX Ranger Mini with InDro Commander from InDro Robotics as the base, Mahdi and others got to work. They built a steering structure and integrated a force sensor, a depth camera, and some clever algorithms. The camera homes in on the user’s shoulders and translates movement into user intent.

“We know, for example, if you are just trying to use your right hand to turn left, the shoulder angle increases. If you’re trying to turn right, the shoulder angle on the right arm decreases.”

By interpreting those shoulder movements in conjunction with the force applied by each hand, the Smart Walker translates that data into the desired steering action. As a result, the user doesn’t have to push as hard with the compromised side – and cognitive load is reduced. The wrist torque required from the user drops by up to 80 per cent.
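To sketch the idea in code – with gains, signs and variable names that are purely illustrative assumptions, not the published controller – the fusion might look something like this:

```python
def steering_command(f_left, f_right, shoulder_angle_deg,
                     k_force=0.02, k_shoulder=0.05):
    """Blend hand forces (N) with camera-estimated shoulder rotation (deg)."""
    # Differential force alone would steer away from the weaker side...
    force_term = k_force * (f_right - f_left)
    # ...so shoulder motion, read by the depth camera, supplies the user's
    # actual intent and compensates for the weaker hand.
    intent_term = k_shoulder * shoulder_angle_deg
    return force_term + intent_term  # sign convention: positive = turn left

# A user with a weak left hand pushes 10 N left, 30 N right, but their
# shoulders signal "go straight": the intent term cancels the force bias.
print(steering_command(10.0, 30.0, -8.0))  # -> 0.0
```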

Of course, there’s much more to this device than we’ve outlined here. Enough, in fact, that a scientific paper on it can be found here. You can also check out the video below:

 

ROBOTS IN THE O-R

 

While the Smart Walker is a great example of robotics being put to use on the assistive and rehabilitation side of things, let’s not forget that the Telerobotic and Biorobotic Systems Group also carries out work on the surgical side. Sadra explains that robotic devices – particularly in conjunction with AI – could prove of great benefit assisting a surgeon.

“My research centres around the use of Generative AI. With the growth of Large Language Models (LLM) such as ChatGPT, we want to see how these AI tools can translate into the physical world in robots. A big section of my projects have focused on Generative AI for surgical autonomy.”

For example, a robotic device with plenty of AI onboard might be able to handle tasks such as suctioning blood. Machine Vision and Machine Learning could help that device determine where and how much suction needs to be applied. And, if you push this far enough, a surgeon might be able to initiate that process with a simple voice command like: “Suction.”

“How can we have task planners and motion planners through generative AI such that the surgeon would communicate with the robot with natural language – so they could ask the robot to complete a task and it would execute?” asks Sadra. “This would allow robots to become more friendly to the average individual who doesn’t have robotics knowledge.”

On the flip side of the coin, there’s also the potential for robotic devices to inform the surgeon of something that might require attention. In breast cancer surgery, for example, an AI-enhanced robot with realtime data from an imaging device might detect remaining tumour tissue – alerting the surgeon, and giving the all-clear to close the incision only after all cancerous material has been excised.

In other words, some of the algorithms Sadra develops address that human-robot interface while leveraging powerful Large Language Model systems.

“Exactly. And we look at this process in three stages: We think about high-level reasoning and task planning, then mid-level motion planning, then lower-level motion control. This is not only for surgery; it’s a similar workflow for assistive robotics.”
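As a rough sketch of that three-stage structure – all class and method names here are hypothetical placeholders, and a keyword match stands in for the LLM – the flow from utterance to motion might look like:

```python
class TaskPlanner:
    """High-level reasoning: a real system would query an LLM here."""
    def plan(self, utterance: str) -> list:
        if "suction" in utterance.lower():
            return ["locate_pooling_blood", "move_to_site", "apply_suction"]
        return []

class MotionPlanner:
    """Mid-level planning: turn a task into waypoints (placeholders)."""
    def trajectory(self, task: str) -> list:
        return [(0.0, 0.0, 0.10), (0.05, 0.0, 0.05)]

class Controller:
    """Low-level control: servo the end effector to each waypoint."""
    def execute(self, waypoint) -> None:
        print(f"servoing to {waypoint}")

def handle_command(utterance: str) -> None:
    for task in TaskPlanner().plan(utterance):
        for wp in MotionPlanner().trajectory(task):
            Controller().execute(wp)

handle_command("Suction.")  # surgeon's voice command enters at the top
```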

The head of the lab, Professor and Senior University of Alberta Engineering Research Chair in Healthcare Robotics Dr. Mahdi Tavakoli, describes AI in this field as “a game-changer,” enabling the next level of human-robot interface.

“Our focus is clear: We’re building robots that collaborate with humans — robots that can understand our language, interpret context, and assist with the kinds of repetitive or physically demanding tasks that free people up to focus on what they do best: The creative, the social, the human. We see the future in ‘collaborative intelligence,’ where people stay in control and robots amplify human capabilities.”

Fun fact: Many of the most powerful LLMs are Generative Pretrained Transformers – which is where ChatGPT gets its name.

 

WHAT’S NEXT?

 

We asked the researchers if the plan is to ultimately explore commercialisation. Apparently it’s a little more complex when it comes to surgery due to regulatory issues, but this is definitely on the roadmap. Sadra has been doing research through a program called Lab2Market and says there’s been very positive feedback from clinicians, physical and occupational therapists and manufacturers.

Program head Dr. Tavakoli says the lab is “thinking big” about how such innovations can help diversify the Canadian economy. In Alberta specifically, which has traditionally been a resource-dominated economy, he says robotics presents a huge opportunity for growth.

“That’s part of why we’ve launched Alberta Robotics: To build a regional ecosystem for robotics research, education, and innovation. So, the University of Alberta is open for business when it comes to robotics; people should be watching for what will come out of Alberta in robotics!”

Below: A promotional video for the da Vinci Surgical System. Will research at the U of A someday enable machines like this to take verbal commands from a surgeon?

INDRO’S TAKE

 

The research being carried out at the University of Alberta is fascinating and carries huge potential in both the surgical and rehabilitation/assistive spheres. We’re pleased to know that three Ranger Mini platforms with InDro Commander are being put to work for this purpose – which is unlike any other use-case we’ve seen for our robots.

“I’m incredibly impressed with what they’re doing,” says InDro Founder and CEO Philip Reece. “It’s researchers like these, quietly carrying out advanced and focussed work, who make breakthroughs that ultimately become real-world devices and applications. We’re pleased to put a well-deserved spotlight on their work.”

You can check out a list of researchers and alumni – and see a photo of Sadra and Mahdi – right here.

Dual manipulator Rosie the robot used for Industry 4.0 research


By Scott Simmie

 

At least some of you will remember The Jetsons.

The television series, created by Hanna-Barbera Cartoons Inc., was a space-age version of The Flintstones (another Hanna-Barbera production). It originally aired in 1962-1963 with later episodes created in a reboot from 1985 to 1987.

But while Fred Flintstone drove a stone-age car (complete with stone wheels) that he powered by pushing his feet along the ground, George Jetson and his family lived in Orbit City, where Jetson commuted to his two-hour-per-week job via a flying car with a bubble top. And instead of having dinosaurs (and pterodactyls) help carry out tasks, the Jetsons lived in a future where they were surrounded by automated devices. You could think of their surroundings as the 1960s vision of the Smart Home.

And an integral part of that home? Well, that would be Rosey (later changed to ‘Rosie’) the robot.

Rosey was the family’s robotic maid. She carried out tasks that weren’t performed by the many other automatic conveniences that filled the Jetsons’ home. She had two manipulator arms and an internally stored vacuum that could be deployed on demand.

She was very useful around the house, carrying out tasks to save the family time.

And this story? Well, it’s about our own Rosie – which is also very space-age.

Below: A Rosie the robot publicity cel, signed by show creators William Hanna and Joseph Barbera. The cel was auctioned in 2018; image by Heritage Auctions


THE ROSIE STORY

 

So. What is Rosie? We asked Head of R&D Sales Luke Corbeth for a snapshot.

“Rosie is a dual arm mobile manipulation robot designed for pick and place in an Industry 4.0 setting,” he says. In other words, it has two arms and manoeuvres on a wheeled platform, and is capable of moving objects from one location to another or even manipulating a single object with both end effectors.

And Rosie has a few tricks up her sleeve. Or, more accurately, sleeves.

“The actual robot is very unique because it has six mounting points for the arms. So you can mount the arms on top, high on the side or low on the side to access shelving of different heights. In fact, you could actually mount one arm directly on the top right, for example, and then mount the second one on the bottom left. So you could grab something from the top of the shelf and from the floor at the same time, which is kind of cool, right?”

Yes, indeed.

Rosie’s home is not with the Jetsons (she has no vacuum cleaner) but in a new lab at Polytechnique Montréal that hasn’t yet been officially launched. It’s called the Intelligent-Cyber Physical System Lab, or I-CPS. So we contacted Lionel Birglen, a professor with the Department of Mechanical Engineering. We wanted to learn more about what the lab does, what he does – and what plans he has for Rosie (which InDro built and shipped in 2023).

Dr. Birglen is a PhD Mechanical Engineer with a specialisation in robotics. He’s particularly interested in – and an expert on – manipulators and end effectors, and has designed and built many himself. He’s written two books, holds three patents, and is the author or contributing author of at least 94 research papers. He’s also – get this – been listed among the top two per cent most-cited scientists in the world in his area of specialisation.

So it kinda goes without saying, but he’s a pretty big deal in this field.

Dr. Birglen has a deep interest in the role robotics will play in the future of industry. And, within that realm, he’s intensely interested in ensuring that robots, particularly those that will be sharing space with human beings on a factory or warehouse floor, will be safe.

And – he emphasises – he doesn’t trust simulations for important work like this.

“Because simulations lie. They lie all the time,” he says. “You have to understand that reality is infinitely more complex than anything you can have in simulation – so actual experiments are absolutely essential to me. They are essential to my work, to my understanding of what robotic manipulation is.”

“I believe in math, but I know that reality is different. It’s more complex, more complicated, and includes so many un-modelled phenomena.”

 

ROSIE’S JOURNEY

 

Dr. Birglen knew he wanted a new robot for use in the new lab (which we’ll get to shortly). And he knew he wanted a robot with two manipulator arms.

“Dual-arm robots are, in my opinion, the future for industry applications,” he says.

And while humanoid bipeds grab a lot of attention, they’re far more complex (and expensive) than wheeled robots. Plus, he says, most factory applications take place on a single level and don’t require climbing stairs.

“From a factory perspective, a wheeled platform makes a lot of sense because typically in factories you don’t have, say, five levels connected by stairs.”

So he knew he wanted an autonomous, wheeled, dual-arm robot. Initially, though, he had a company other than InDro in mind for the build.

“I came across InDro almost by accident,” he explains. “Giovanni Beltrame told me about you because he has purchased many, many robots from you. He said: ‘Those guys can build and assemble the robot for you. They’re close and they do a great job.’ So that’s how I came in contact with you.” (We’ve written previously about the amazing work Dr. Beltrame is doing involving robots and space. You can find that here.)

And so, after a number of calls with Luke Corbeth and the engineering team to settle on design and performance parameters, work on Rosie began.

Below: Technologist Tirth Gajera (‘T’) puts the finishing touches on Rosie in 2023


THE LAB

 

Polytechnique Montréal’s Intelligent-Cyber Physical System Lab (I-CPS) is set up as a highly connected Industry 4.0 factory. Faculty from four different departments – computer engineering, electrical engineering, industrial engineering and mechanical engineering (Dr. Birglen) – are involved with the lab. Interns and students, under supervision, also work in the facility.

“So we have four departments involved in this lab and the idea is to build a small scale factory of the future, meaning that everything is connected. We are building a mini-factory inside this lab,” he says.

So think of cameras that can track objects on shelves – and people and robots within the environment. Think of smart tools like a CNC machine, which will eventually be operated by Rosie. And perhaps just as important as the connectivity within the lab is connectivity to other research institutes in Quebec, including Université Laval, Université de Sherbrooke and École de technologie supérieure (ÉTS). All of those institutes are working with similar mini-factories, and they’re all connected. There’s even a relationship (and connectivity) with manipulator manufacturer Kinova. Funding came via a significant grant from the Canada Foundation for Innovation, or CFI.

“So think of our lab as like one node of this network of mini-factories around Quebec,” explains Dr. Birglen. That connectivity of all components is still a work-in-progress, but “ultimately the goal is that there is a cyber-connection between these different mini-factories, these different laboratories around Quebec, so that one part of one node can work in collaboration with another node in realtime.”

Plus, of course, a lot of learnings will take place within the individual labs themselves.

“We want to bring collaborative robots to work in tandem with humans,” he says. “We want our robots to safely move around people, we want robots to help people. And we also want robots to learn how to work from people.”

 

SAFETY, SAFETY, SAFETY

 

As mentioned earlier, there’s a huge emphasis on safety. And while there are international safety standards for collaborative robots, even a ‘safe’ cobot can pose a threat.

“All the collaborative robots that you have currently on the market more or less follow this technical standard and they are more or less safe, but they’re still dangerous,” explains Dr. Birglen. “And the classical example that we’ve all heard, and which is true, is that if a safe cobot has a knife in its hand and is moving around – it is very dangerous.”

So safety in the lab(s) is paramount – and that means safety at multiple levels. There must be safety:

  • At the task level – tasks must not endanger people
  • At the control level
  • In collision detection, mitigation and obstacle avoidance
  • At the data security level

Plus – and this really interests Dr. Birglen – you must ensure safety with any additional mechanical innovations that are introduced.

“What you develop – any mechanical system you develop – must be as intrinsically safe as possible. And actually, one of the topics I’m currently working on is developing end effectors and tooling that are intrinsically safe.”

Below: A LinkedIn post from Luke Corbeth shows Rosie, using both arms, inside the I-CPS lab

THE FUTURE

 

And why is research like this so important? What difference will it make to have robots and humans working safely together, with safe manipulators and end effectors that might even be able to, for example, lift an object in concert with a human being? And why the focus on interconnectedness between all of these facilities?

Well, there’s obviously the value of the research itself – which will lead to greater efficiencies, improved manipulators, gripping technologies, new algorithms and AI enhancements – as well as enhanced safety down the road. But there’s a much bigger picture, says Dr. Birglen, especially if you can get your head around thinking about the future from a global perspective.

China, he says, is no longer a developing nation. The days when the words “Made in China” meant poor quality are – with rare exceptions – gone. The country is, in fact, highly developed – and working at breakneck speed when it comes to innovation and adoption of robotics at scale. A revolution is underway that has massive implications for competitive advantage that simply cannot be ignored. So the research at I-CPS is not merely important from an academic perspective, it’s strategic when viewed through a global economic lens.

“We as a country – meaning Canada – are in competition with other countries for manufacturing, for producing goods and services. China is a developed country and it is very, very, very good in robotics,” he states. “You know how in the past we saw China as producing low quality goods, low quality robots? That’s over, man. That’s finished.”

And?

“If they are investing in robotics like mad and we are not, we’re going to be a leftover – Canada is going to sink as a rich country. If you want to produce wealth in the 21st Century, you need robots, you need automation, you need integration. In short, you need to be the leader of the pack or you’re going to be eaten.”

It’s a stark warning – and it’s true.

A personal note: this author lived in China in the late 1980s, back when it was still a developing country, and has returned several times since. The transformation has been nothing short of astonishing. How, you might ask, did it achieve all this?

The answer has its genesis with former Chinese leader Deng Xiaoping, who led the country from 1978 to 1989. He didn’t merely open the door to reform; he created policies that began sending waves of students abroad to study, from what had been a xenophobic country. There was an emphasis on careers that could help modernise the nation, including all aspects of engineering, aerospace, construction, transportation, architecture, etc. That’s where all this began.

Thankfully (and with credit to federal funding agencies like CFI), there are projects like I-CPS underway – and academics like Dr. Lionel Birglen with the vision to push the needle safely forward.

Below: “Baxter” – the original dual-arm robot. Baxter is still at Polytechnique Montréal, but Rosie is the mobile future. Photo by Luke Corbeth


INDRO’S TAKE

 

We’re obviously pleased Polytechnique Montréal selected InDro to build Rosie. And we’re particularly pleased to see that she’s being deployed at I-CPS, as part of an integrated and networked research project that has such potentially profound implications for the future.

“I believe Dr. Birglen is correct in his assessment of the importance of robotics and automation in the future,” says InDro Robotics Founder and CEO Philip Reece. “And when you throw innovations with drones and even autonomous Uncrewed Aerial Vehicles capable of carrying large cargo loads and passengers into the mix, we are actually heading into a Jetsons-like future,” he adds.

“I think there’s a growing understanding of the implications of this kind of future from not only the private sector, but also federal regulators and funding agencies. At InDro our mission will always focus on continued innovation. Sometimes those innovations are our own inventions, but a key piece of the puzzle is R&D work carried out by academics like Lionel Birglen. We’re confident that Rosie’s arms are in the right hands.”

Interested in learning more about a custom robotics solution? Feel free to contact us here.

InDro clients Polytechnique Montréal featured on CNN with swarm research on ‘Mars’


By Scott Simmie

There’s nothing quite as satisfying as seeing really good R&D in the field.

And when that research gets coverage from CNN? Well, that’s even better.

The news network just profiled some cutting-edge work being carried out by students at Polytechnique Montréal. Specifically, students who work in the MIST Lab – where that acronym stands for Making Innovative Space Technology.

We’ve profiled the work being carried out there before (you can find it here). Essentially, students are working on innovative robotics research they hope will one day prove useful on the moon and Mars.

“What we want to do is to explore environments including caves and surfaces on other planets or satellites using robotics,” explained Dr. Giovanni Beltrame, a full professor at Polytechnique’s Departments of Computer Engineering and Software Engineering, during our earlier interview. “Caves and lava tubes can be ideal places for settlement: They can be sealed and provide radiation shielding. There’s also a chance of finding water ice in them.”

The research certainly caught our attention – partly because the MIST Lab is an InDro client. We’ve supplied them with platforms and robots which they’ve enhanced with “backpacks” enabling swarm robotics research. Recently, they took a fleet of those connected robots to the Canadian Space Agency’s Mars Yard. The site has been built to replicate the surface on Mars – what’s known as a Planetary Analogue Terrain.

The mission? To have these interconnected robots autonomously map that surface in high-resolution.

Below: The Mars Yard. Photo by the Canadian Space Agency, followed by a pic of some of the robots InDro modified and supplied to Polytechnique Montréal


SWARM ROBOTICS

 

Fundamental to this research is deploying the robots in a swarm – where the robots carry out tasks autonomously while communicating with each other. In this experiment, they’re mapping that Planetary Analogue Terrain and compiling the data into a high-resolution digital twin.

“We absolutely believe that swarm robotics is the future of space exploration,” PhD student Riana Gagnon Souleiman told CNN. “It’s more efficient to have more robots and you’re less vulnerable to a single agent failing.”

We’ve written about swarm robotics before (and recently shipped a swarm to a US academic client). But this CNN story provides a full look at what the MIST Lab team has accomplished, modifying the robots with their own “backpack” for creating a local area network and meshing all that data.

In the video, which we’ll link to in a moment, you’ll see several of the 18 platforms InDro can supply. At the Mars Yard, you’ll see a Scout Mini, two Bunker Minis (seen in the photo above) and one Scout 2.0 – all working collaboratively.

The MIST Lab team has done an incredible job with modifying these robots and pulling off what we know is a very difficult mission. Kudos also to CNN for doing an exemplary job in explaining this story.

All set? You can watch the video here.

Below: Some of the MIST Lab researchers in a screen grab from the CNN story

Robosense sets new bar for affordable, powerful LiDAR sensors


By Scott Simmie

 

Building or modifying a robot?

Specifically, are you working on something with autonomy that needs to understand an unfamiliar environment? Then you’re likely looking at adding two key sensors: A depth camera and a LiDAR unit.

LiDAR, as most of you likely know, scans the surrounding environment with a continuous barrage of eye-safe laser beams. It measures what’s known as “Time of Flight” – the time it takes for the photons to be reflected off surrounding surfaces and return to the LiDAR unit. The closer the surface, the shorter the Time of Flight. LiDARs calculate the time of each of those reflected beams and convert that into distance. Scatter enough of those beams in a short period of time (and LiDARs do), and you get an accurate digital representation of the surrounding environment – even while the robot is moving through it.
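The distance arithmetic itself is simple: half the round-trip time multiplied by the speed of light. A quick illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a time-of-flight reading to distance.
    The beam travels out and back, so halve the round trip."""
    return C * round_trip_s / 2.0

# A return after ~66.7 nanoseconds puts the surface about 10 m away
print(tof_distance_m(66.7e-9))  # ≈ 10.0
```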

This is particularly useful for autonomous missions and especially for Simultaneous Localisation and Mapping, or SLAM. That’s where a LiDAR-equipped robot can be placed in a completely unfamiliar (and even GPS-denied) environment and produce a point-cloud map of its surroundings while avoiding obstacles. Quality LiDARs are also capable of producing 3D precision scans for a wide variety of use-cases.

All great, right? Except for one thing: LiDAR sensors tend to be very expensive. So expensive, they can be out of reach for an R&D team, academic institution or startup.

There is, however, a solution: Robosense.

The company produces LiDAR sensors (both mechanical and solid-state) that rival the established players in the market. And they do so for about one-third of the cost of the industry heavyweights.

“The performance of Robosense is outstanding – absolutely on par with its main competitors in North America,” says InDro Account Executive Callum Cameron. “We have been integrating Robosense LiDAR on our products for about two years, and their performance is exceptional.”

Below: A fleet of four robots, equipped with Robosense LiDAR, which recently shipped to an academic client.

 


ROBOSENSE

 

The company might not yet be a household name (unless your household has robots), but as of May 2024 the firm had sold 460,000 LiDAR units. Its sensors power a large number of autonomous cars, delivery vehicles and other robots – and it’s the first company to achieve mass production of automotive-grade LiDAR units with its own in-house developed chip.

The company was founded in 2014, with some A-level engineering talent – and it’s been on a stellar trajectory ever since. One of the reasons is because Robosense produces all three core technologies behind its products: The actual chipsets, the LiDAR hardware, and the perception software. We’ll let the company itself tell you more:

“In 2016, RoboSense began developing its R Platform mechanical LiDAR. One year later, in 2017, we introduced our perception software alongside the automotive-grade M Platform LiDAR sensors tailored for advanced driver assistance and autonomous driving systems. We achieved the start-of-production (SOP) of the M1 in 2021, becoming the world’s first LiDAR company to mass-produce automotive-grade LiDAR equipped with chips developed in-house,” says its website.

The company now has thousands of engineers. And it didn’t take long before the world noticed what they were producing.

“As of May 17, 2024, RoboSense has secured 71 vehicle model design wins and enabled 22 OEMs and Tier 1 customers to start mass production of 25 models. We serve over 2,500 customers in the robotics and other non-automotive industries and are the global LiDAR market leader in cumulative sales volume.”

The company has also received prestigious recognition for its products, including two CES Innovation awards, the Automotive News PACE award, and the Audi Innovation Lab Champion prize.

“This company has standout features, including Field of View, point cloud density and high frame rates,” says Cameron. “If you look at that fleet of four robots we recently built: had we used the competition, those LiDAR units alone would have come to close to $80,000. The Robosense solution cost roughly one-quarter of that with similar capabilities.”

And the factories? State of the art. Though this video focuses on its solid-state LiDAR, Robosense uses the same meticulous process for its mechanical units:

LiDAR FOR EVERY APPLICATION

 

Robosense produces many different LiDAR sensors. But what particularly appeals to us is that the company has (excuse the pun) a laser-like focus on the robotics industry. Its Helios multi-beam LiDAR units have been designed from the ground up for robots and intelligent vehicles. There are customisable fields of view, depending on application, and a near-field blind spot of ≤ 0.2 metres. In addition, Helios LiDAR comes in 16- and 32-beam options depending on point-cloud density and FOV requirements. Both are capable of functioning in temperatures as low as -40°C – or on a scorching day in the Sahara desert. There’s also protection against multi-radar interference and strong light (which can be an issue with LiDAR). You can learn more about its features here.

Its Bpearl unit proves that very good things can indeed come in small packages. With a 360° horizontal and 90° vertical hemispherical FOV, it’s been designed for near-field blind-spot coverage, capable of detecting objects as close as 10 cm. That’s why we selected it for a robot designed to inspect cycling lanes for hazards (while avoiding cyclists, of course). We actually have two Bpearls on that robot (one on each side), since detecting blind spots and avoiding other obstacles is so critical to this application.

“We’ve integrated both the Bpearl and Helios LiDAR units into multiple different robots and the performance has been excellent, even under adverse conditions,” says Cameron. “Obstacle avoidance has been outstanding, and SLAM missions are a snap.”

Below: This InDro robot features two 32-beam Robosense Bpearl LiDAR units. You can see one of them – that tiny bubble on the side (and there’s another one on the opposite side):


THE THREE “D”s

 

You’ve likely heard this before, but robots are perfect for jobs that are Dirty, Dull or Dangerous – because they remove humans from those scenarios. Robots, particularly inspection robots, are often subjected to extremes in terms of weather and other conditions.

So this is a good place to mention that if a Robosense LiDAR encounters fog, rain, dust or snow, it has a de-noising function to ensure it’s still capturing accurate data – and that your point cloud isn’t a representation of falling snow. All of the Robosense LiDAR sensors have outstanding Ingress Protection ratings.

Because adverse conditions are quite likely to occur at some point during a robotic mission, Robosense puts its products through absolutely gruelling tests. Hopefully your robot won’t encounter the scenarios seen below, but if it does – the LiDAR will keep working:

INDRO’S TAKE

 

We take pride in putting only the highest quality sensors into our products.

Prior to adopting Robosense as our “go-to” LiDAR about two years ago, we were using big-name products. But those products also came with a big price tag. When we discovered the quality and price of Robosense LiDAR units, it was an obvious choice to make the switch. We have shipped multiple Robosense-enabled robots to clients, saving them thousands of dollars – in one case, tens of thousands – while still capturing every bit of data they require. Robosense is now our go-to, even on our flagship products. (We recently did a demonstration of one of our newer Helios-equipped autonomous quadrupeds for a high-profile client; they were amazed by the results.)

“Robosense is every bit the equal of the heavyweight LiDAR manufacturers, without the downside of the high cost,” says InDro Robotics CEO Philip Reece. “The field-of-view, point cloud density and quality of construction are all state-of-the-art, as are the manufacturing facilities. What’s more, Robosense continues to push the envelope with every new product it releases.”

Interested in learning more, including price and options? Contact Account Executive Callum Cameron right here, and he’ll give you all the info you need.

George Mason U. researchers enable robots to intelligently navigate challenging terrain


By Scott Simmie

 

Picture this: You’re out for a drive and in a hurry to reach your destination.

At first, the road is clear and dry. You’ve got great traction and things are going smoothly. But then the road turns to gravel, with twists and turns along the way. You know your vehicle well, and have navigated such terrain before.

And so, instinctively, you slow the vehicle to navigate the more challenging conditions. By doing so, you avoid slipping on the gravel. Your experience with driving, and in detecting the conditions, has saved you from a potential mishap. Yes, you slowed down a bit. But you’ll speed up again when the conditions improve. The same scenario could apply to driving on grass, ice – or even just a hairpin corner on a dry paved road.

For human beings, especially those with years of driving experience, such adjustments are second nature. We have learned from experience, and we know the limitations of our vehicles. We see and instantly recognize potentially hazardous conditions – and we react.

But what if you’re a robot? Particularly a robot that wants to reach a destination at the maximum safe speed?

That’s the crux of fascinating research taking place at George Mason University: Building robots that are taught – and can subsequently teach themselves – how to adapt to changing terrain to ensure stable travel at the maximum safe speed.

It’s very cool research, with really positive implications.

Below: You don’t want this happening on a critical mission…


“XX”

 

Those are the initials of Dr. Xuesu Xiao, an Assistant Professor at George Mason University. He holds a PhD in Computer Science, and runs a lab that plays off his initials, called the RobotiXX Lab. Here’s a snippet of the description from his website:

“At RobotiXX lab, researchers (XX-Men) and robots (XX-Bots) perform robotics research at the intersection of motion planning and machine learning with a specific focus on robustly deployable field robotics. Our research goal is to develop highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision.”

We spoke with Dr. Xiao about this work.

It turns out he’s particularly interested in making robots that are useful to First Responders, carrying out those dull, dirty and dangerous tasks. Speed in such situations can be critical, but comes with its own set of challenges. A robot that makes too sharp a turn at speed on a high-friction surface can easily roll over – effectively becoming useless in its task. Plus, there are the difficulties previously flagged with other terrains.

This area of “motion planning” fascinates Dr. Xiao. Specifically, how to take robots beyond traditional motion planning and enable them to identify and adapt to changing conditions. And that involves machine vision and machine learning.

“Most motion planners used in existing robots are classical methods,” he says. “What we want to do is embed machine learning techniques to make those classical motion planners more intelligent. That means I want the robots to not only plan their own motion, but also learn from their own past experiences.”

In other words, he and his students have been focussing on pushing robots to develop capabilities that surpass the instructions and algorithms a roboticist might traditionally program.

“So they’re not just executing what has been programmed by their designers, right? I want them to improve on their own, utilising all the different sources of information they can get while working in the field.”

 

THE PLATFORM

 

The RobotiXX Lab has chosen the Hunter SE from AgileX as its core platform for this work. That platform was supplied by InDro Robotics and modified with the InDro Commander module. Commander provides communication over 5G (and 4G) networks, enabling high-speed data throughput. It comes complete with multiple USB slots and the Robot Operating System (ROS) library onboard, enabling the easy addition (or removal) of multiple sensors and other modifications. It also has a remote dashboard for controlling missions, plotting waypoints, etc.
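As a taste of what consuming one of those ROS-connected sensors can look like – the topic name below is a common ROS convention we’ve assumed for illustration, not a documented Commander topic – a minimal ROS 2 subscriber might be:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

class TerrainListener(Node):
    def __init__(self):
        super().__init__('terrain_listener')
        # '/imu/data' is a conventional topic name, assumed for this sketch
        self.create_subscription(Imu, '/imu/data', self.on_imu, 10)

    def on_imu(self, msg: Imu):
        # Vertical acceleration is a crude proxy for how rough the ride is
        az = msg.linear_acceleration.z
        self.get_logger().info(f'vertical accel: {az:.2f} m/s^2')

def main():
    rclpy.init()
    rclpy.spin(TerrainListener())

if __name__ == '__main__':
    main()
```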

Dr. Xiao was interested in this platform for a specific reason.

“The main reason is because it’s high speed, with a top speed of 4.8 m per second. For a one-fifth/one-sixth scale vehicle that is a very, very high speed. And we want to study what will happen when you are executing a turn, for example, while driving very quickly.”

As noted previously, people with driving experience instinctively get it. They know how to react.

“Humans have a pretty good grasp on what terrain means,” he says. “Rocky terrain means things will get bumpy, grass can impede motion, and if you’re driving on a high-friction surface you can’t turn sharply at speed. We understand these phenomena. The problem is, robots don’t.”

So how can we teach robots to be more human in their ability to navigate and adjust to such terrains – and to learn from their mistakes?

It gets *very* technical. But we’ll do our best to explain.


THE APPROACH

 

The basics here are pretty clear, says Dr. Xiao.

“We want to teach the robots to know the consequences of taking some aggressive maneuvers at different speeds on different terrains. If you drive very quickly while the friction between your tires and the ground is high, taking a very sharp turn will actually cause the vehicle to roll over – and there’s no way the robot by itself will be able to recover from it, right? So the whole idea of the paper is trying to enable robots to understand all these consequences; to make them ‘competence aware.'”

The paper Dr. Xiao is referring to has been submitted for scientific publication. It’s pretty meaty, and is intended for engineers/roboticists. It’s authored by Dr. Xiao and researchers Anuj Pokhrel, Mohammad Nazeri, and Aniket Datar. It’s entitled: CAHSOR: Competence-Aware High-Speed Off-Road Ground Navigation in SE(3).

That SE(3) term describes how objects move and rotate in 3D space. Technically, it stands for the Special Euclidean group in three dimensions; it tracks an object’s full pose in 3D space – both position and orientation.
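For readers who like the notation, an SE(3) element packs both into a single homogeneous transform:

```latex
T \;=\; \begin{bmatrix} R & p \\ \mathbf{0}^{\top} & 1 \end{bmatrix} \in SE(3),
\qquad R \in SO(3), \quad p \in \mathbb{R}^{3}
```

Here R carries orientation (roll, pitch, yaw) and p carries position – which is why planning in SE(3) lets a robot reason about rollovers, not just motion across a flat floor.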

We’ll get to more of the paper in a minute, but we asked Dr. Xiao to give us some help understanding what the team did to achieve these results. Was it just coding? Or were there some hardware adjustments as well?

Turns out, there were both. Yes, there was plenty of complex coding. There was also the addition of an RTK GPS unit so that the robot’s position in space could be measured as accurately as possible. Because the team soon discovered that intense vibration over rough surfaces could loosen components, threadlock was used to keep things tightly in place.

But, as you might have guessed, machine vision and machine learning are a big part of this whole process. The robot needs to identify the terrain in order to know how to react.

We asked Dr. Xiao if an external data library was used and imported for the project. The answer? “No.”

“There’s no dataset out there that includes all these different basic catastrophic consequences when you’re doing aggressive maneuvers. So all the data we used to train the robot and to train our machine learning algorithms were all collected by ourselves.”

 

SLIPS, SLIDES, ROLLOVERS

 

As part of the training process, the Hunter SE was driven over all manner of demanding terrain.

“We actually bumped it through very large rocks many times and also slid it all over the place,” he says. “We actually rolled the vehicle over entirely many times. This was all very important for us to collect some data so that it learns to not do that in the future, right?”
 
And while the cameras and machine vision were instrumental in determining what terrain was coming up, the role of the robot’s Inertial Measurement Unit was also key.

“It’s actually multi-modal perception, and vision is just part of it. So we are looking at the terrain using camera images and we are also using our IMU. Those inertial measurement unit readings sense the acceleration and the angular velocities of the robot so that it can better respond,” he says.

“Because ultimately it’s not only about the visual appearance of the terrain, it is also about how you drive on it, how you feel it.”
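As a toy example of what ‘feeling’ the terrain might mean – the weights and window below are invented for illustration, not the CAHSOR model – a short IMU window can be reduced to a cheap instability score:

```python
import numpy as np

def instability_score(accel_z, roll_rate):
    """Crude proxy: vibration (std of vertical accel, m/s^2) plus the
    worst roll-rate excursion (rad/s). Weights are illustrative only."""
    bumpiness = np.std(accel_z)             # rough ground shakes the chassis
    tippiness = np.max(np.abs(roll_rate))   # big roll rates precede rollovers
    return 0.5 * bumpiness + 2.0 * tippiness

window_az = np.array([9.6, 10.4, 8.9, 11.2, 9.1])       # one short window
window_roll = np.array([0.05, 0.12, -0.30, 0.08, 0.02])
print(instability_score(window_az, window_roll))  # higher = back off speed
```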

 

THE RESULTS

 

Well, they’re impressive.

The full details are outlined in this paper, but here’s the headline: Regardless of whether the robot was operating autonomously heading to defined waypoints, or whether a human was controlling it, there was a significant reduction in incidents (slips, slides, rollovers etc.) with only a small reduction in overall speed.

Specifically, “CAHSOR (Competence-Aware High-Speed Off-Road Ground Navigation) can efficiently reduce vehicle instability by 62% while only compromising 8.6% average speed with the help of TRON (visual and inertial Terrain Representation for Off-road Navigation).”

That’s a tremendous reduction in instability – meaning the likelihood that these robots will reach their destination without incident is greatly improved. Think of the implications for a First Responder application, where without this system a critical vehicle rushing to a scene carrying medical supplies – or even simply for situational awareness – might roll over and be rendered useless. The slight reduction in speed is a small price to pay for greatly enhancing the odds of an incident-free mission.

“Without using our method, a robot will just blindly go very aggressively over every single terrain – while risking rolling over, bumps and vibrations on rocks, maybe even sliding and rolling off a cliff.”

What’s more, these robots continue to learn with each and every mission. They can also share data with each other, so that the experience of one machine can be shared with many. Dr. Xiao says the learnings from this project, which began in January of 2023, can also be applied to marine and even aerial robots.

For the moment, though, the emphasis has been fully on the ground. And there can be no question this research has profound and positive implications for First Responders (and others) using robots in mission-critical situations.

Below: The Hunter SE gets put through its paces. (All images courtesy of Dr. Xiao.)


INDRO’S TAKE

 

We’re tremendously impressed with the work being carried out by Dr. Xiao and his team at George Mason University. We’re also honoured to have played a small role in supplying the Hunter SE, InDro Commander, as well as occasional support as the project progressed.

“The use of robotics by First Responders is growing rapidly,” says InDro Robotics CEO Philip Reece. “Improving their ability to reach destinations safely on mission-critical deployments is extremely important work – and the data results are truly impressive.

“We are hopeful the work of Dr. Xiao and his team is adopted in future beyond research and into real-world applications. There’s clearly a need for this solution.”

If your institution or R&D facility is interested in learning more about how InDro’s stable of robots can help (and there are many), please reach out to us here.