George Mason U. researchers enable robots to intelligently navigate challenging terrain

By Scott Simmie

 

Picture this: You’re out for a drive and in a hurry to reach your destination.

At first, the road is clear and dry. You’ve got great traction and things are going smoothly. But then the road turns to gravel, with twists and turns along the way. You know your vehicle well, and have navigated such terrain before.

And so, instinctively, you slow the vehicle to navigate the more challenging conditions. By doing so, you avoid slipping on the gravel. Your experience with driving, and in detecting the conditions, has saved you from a potential mishap. Yes, you slowed down a bit. But you’ll speed up again when the conditions improve. The same scenario could apply to driving on grass, ice – or even just a hairpin corner on a dry paved road.

For human beings, especially those with years of driving experience, such adjustments are second nature. We have learned from experience, and we know the limitations of our vehicles. We see and instantly recognize potentially hazardous conditions – and we react.

But what if you’re a robot? Particularly a robot that wants to reach a destination at the maximum safe speed?

That’s the crux of fascinating research taking place at George Mason University: Building robots that are taught – and can subsequently teach themselves – how to adapt to changing terrain to ensure stable travel at the maximum safe speed.

It’s very cool research, with really positive implications.

Below: You don’t want this happening on a critical mission…


“XX”

 

Those are the initials of Dr. Xuesu Xiao, an Assistant Professor at George Mason University. He holds a PhD in Computer Science, and runs a lab that plays off his initials, called the RobotiXX Lab. Here’s a snippet of the description from his website:

“At RobotiXX lab, researchers (XX-Men) and robots (XX-Bots) perform robotics research at the intersection of motion planning and machine learning with a specific focus on robustly deployable field robotics. Our research goal is to develop highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision.”

We spoke with Dr. Xiao about this work.

It turns out he’s particularly interested in making robots that are useful to First Responders, carrying out those dull, dirty and dangerous tasks. Speed in such situations can be critical, but comes with its own set of challenges. A robot that takes too sharp a turn at speed on a high-friction surface can easily roll over – effectively becoming useless for its task. Plus, there are the difficulties previously flagged with other terrains.

This area of “motion planning” fascinates Dr. Xiao. Specifically, how to take robots beyond traditional motion planning and enable them to identify and adapt to changing conditions. And that involves machine vision and machine learning.

“Most motion planners used in existing robots are classical methods,” he says. “What we want to do is embed machine learning techniques to make those classical motion planners more intelligent. That means I want the robots to not only plan their own motion, but also learn from their own past experiences.”

In other words, he and his students have been focussing on pushing robots to develop capabilities that surpass the instructions and algorithms a roboticist might traditionally program.

“So they’re not just executing what has been programmed by their designers, right? I want them to improve on their own, utilising all the different sources of information they can get while working in the field.”

 

THE PLATFORM

 

The RobotiXX Lab has chosen the Hunter SE from AgileX as its core platform for this work. That platform was supplied by InDro Robotics and modified with the InDro Commander module, which enables communication over 5G (and 4G) networks for high-speed data throughput. It comes complete with multiple USB slots and the Robot Operating System (ROS) library onboard, enabling the easy addition (or removal) of multiple sensors and other modifications. It also has a remote dashboard for controlling missions, plotting waypoints, etc.
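Commander’s onboard ROS library is what makes that sensor swapping straightforward: a new sensor just becomes another node publishing on a topic. Here’s a minimal ROS 2 sketch of the pattern – the node, topic and reading below are illustrative stand-ins we made up for the example, not part of InDro’s actual interface.

```python
# Illustrative only: a minimal ROS 2 node publishing readings from a
# hypothetical add-on range sensor. Names are invented for the example.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range


class ExtraSensorNode(Node):
    def __init__(self):
        super().__init__('extra_sensor')
        self.pub = self.create_publisher(Range, 'extra_sensor/range', 10)
        self.create_timer(0.1, self.publish_reading)  # publish at 10 Hz

    def publish_reading(self):
        msg = Range()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.range = 1.23  # a real driver would read the hardware here
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(ExtraSensorNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Because everything else on the robot already speaks ROS, the rest of the stack can subscribe to that topic without any bespoke glue code.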

Dr. Xiao was interested in this platform for a specific reason.

“The main reason is that it’s high speed, with a top speed of 4.8 m per second. For a one-fifth/one-sixth scale vehicle that is a very, very high speed. And we want to study what will happen when you are executing a turn, for example, while driving very quickly.”

As noted previously, people with driving experience instinctively get it. They know how to react.

“Humans have a pretty good grasp on what terrain means,” he says. “Rocky terrain means things will get bumpy, grass can impede motion, and if you’re driving on a high-friction surface you can’t turn sharply at speed. We understand these phenomena. The problem is, robots don’t.”

So how can we teach robots to be more human in their ability to navigate and adjust to such terrains – and to learn from their mistakes?

As you’ll see in the diagram below, it gets *very* technical. But we’ll do our best to explain.


THE APPROACH

 

The basics here are pretty clear, says Dr. Xiao.

“We want to teach the robots to know the consequences of taking some aggressive maneuvers at different speeds on different terrains. If you drive very quickly while the friction between your tires and the ground is high, taking a very sharp turn will actually cause the vehicle to roll over – and there’s no way the robot by itself will be able to recover from it, right? So the whole idea of the paper is trying to enable robots to understand all these consequences; to make them ‘competence aware.'”
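To make ‘competence aware’ concrete, here’s a deliberately simplified sketch of the physics involved: cap the commanded speed so lateral acceleration in a turn stays under a terrain-dependent grip threshold. CAHSOR learns these limits from real experience rather than applying a fixed friction formula, so treat this as an illustration of the idea, not the paper’s method.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_safe_speed(curvature: float, grip: float) -> float:
    """Highest speed (m/s) keeping lateral acceleration below grip * g.
    curvature = 1 / turn radius; grip is a terrain-dependent coefficient."""
    if abs(curvature) < 1e-6:              # effectively a straight line
        return float('inf')
    return math.sqrt(grip * G / abs(curvature))

def govern(cmd_speed: float, curvature: float, grip: float) -> float:
    """Pass the command through, but never exceed the safe envelope."""
    return min(cmd_speed, max_safe_speed(curvature, grip))

# A 2 m-radius turn (curvature 0.5) on grippy pavement: the governor
# trims the Hunter SE's 4.8 m/s top speed down to roughly 4.2 m/s.
print(round(govern(4.8, 0.5, 0.9), 2))
```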

The paper Dr. Xiao is referring to has been submitted for scientific publication. It’s pretty meaty, and is intended for engineers/roboticists. It’s authored by Dr. Xiao and researchers Anuj Pokhrel, Mohammad Nazeri, and Aniket Datar. It’s entitled: CAHSOR: Competence-Aware High-Speed Off-Road Ground Navigation in SE(3).

That SE(3) term stands for the Special Euclidean group in three dimensions. It describes how objects move and rotate in 3D space – keeping track of both position and orientation.
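In standard notation, an element of SE(3) is a 4×4 homogeneous transformation matrix pairing a rotation with a translation:

```latex
T =
\begin{bmatrix}
R & t \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad R \in SO(3), \quad t \in \mathbb{R}^{3}
```

Here R is the rotation (the robot’s orientation) and t the translation (its position). Planning in SE(3) rather than on a flat plane is what lets a robot reason about rolling and pitching over rough terrain, not just about where it sits on the map.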

We’ll get to more of the paper in a minute, but we asked Dr. Xiao to give us some help understanding what the team did to achieve these results. Was it just coding? Or were there some hardware adjustments as well?

Turns out, there were both. Yes, there was plenty of complex coding. There was also the addition of an RTK GPS unit so that the robot’s position in space could be measured as accurately as possible. Because the team soon discovered that intense vibration over rough surfaces could loosen components, threadlock was used to keep things tightly in place.

But, as you might have guessed, machine vision and machine learning are a big part of this whole process. The robot needs to identify the terrain in order to know how to react.

We asked Dr. Xiao if an external data library was used and imported for the project. The answer? “No.”

“There’s no dataset out there that includes all these different basic catastrophic consequences when you’re doing aggressive maneuvers. So all the data we used to train the robot and to train our machine learning algorithms were all collected by ourselves.”

 

SLIPS, SLIDES, ROLLOVERS

 

As part of the training process, the Hunter SE was driven over all manner of demanding terrain.

“We actually bumped it through very large rocks many times and also slid it all over the place,” he says. “We actually rolled the vehicle over entirely many times. This was all very important for us to collect some data so that it learns to not do that in the future, right?”
 
And while the cameras and machine vision were instrumental in determining what terrain was coming up, the role of the robot’s Inertial Measurement Unit was also key.

“It’s actually multi-modal perception, and vision is just part of it. So we are looking at the terrain using camera images and we are also using our IMU. Those inertial measurement unit readings sense the acceleration and the angular velocities of the robot so that it can better respond,” he says.

“Because ultimately it’s not only about the visual appearance of the terrain, it is also about how you drive on it, how you feel it.”
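That fusion of what the robot sees with what it feels can be sketched roughly as follows – the names and dimensions are illustrative, not the team’s implementation. A visual embedding of the upcoming terrain patch is simply concatenated with summary statistics of a recent IMU window before being handed to a learned model:

```python
import numpy as np

def imu_features(accel: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """accel, gyro: (N, 3) windows of recent IMU readings."""
    return np.concatenate([
        accel.mean(axis=0), accel.std(axis=0),  # vibration level
        gyro.mean(axis=0),  gyro.std(axis=0),   # rotational agitation
    ])

def fused_input(image_embedding: np.ndarray,
                accel: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    # One flat vector a downstream model can map to, say, predicted
    # roll or slip for a candidate (speed, steering) command.
    return np.concatenate([image_embedding, imu_features(accel, gyro)])

x = fused_input(np.zeros(128), np.random.randn(200, 3), np.random.randn(200, 3))
print(x.shape)  # (140,): 128 visual dims + 12 inertial statistics
```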

 

THE RESULTS

 

Well, they’re impressive.

The full details are outlined in this paper, but here’s the headline: Regardless of whether the robot was operating autonomously heading to defined waypoints, or whether a human was controlling it, there was a significant reduction in incidents (slips, slides, rollovers etc.) with only a small reduction in overall speed.

Specifically, “CAHSOR (Competence-Aware High-Speed Off-Road Ground Navigation) can efficiently reduce vehicle instability by 62% while only compromising 8.6% average speed with the help of TRON (visual and inertial Terrain Representation for Off-road Navigation).”

That’s a tremendous reduction in instability – meaning the likelihood that these robots will reach their destination without incident is greatly improved. Think of the implications for a First Responder application, where without this system a critical vehicle rushing to a scene – carrying medical supplies, or simply providing situational awareness – might roll over and be rendered useless. The slight reduction in speed is a small price to pay for greatly enhancing the odds of an incident-free mission.

“Without using our method, a robot will just blindly go very aggressively over every single terrain – while risking rolling over, bumps and vibrations on rocks, maybe even sliding and rolling off a cliff.”

What’s more, these robots continue to learn with each and every mission. They can also share data with each other, so that the experience of one machine can be shared with many. Dr. Xiao says the learnings from this project, which began in January 2023, can also be applied to marine and even aerial robots.

For the moment, though, the emphasis has been fully on the ground. And there can be no question this research has profound and positive implications for First Responders (and others) using robots in mission-critical situations.

Below: The Hunter SE gets put through its paces. (All images courtesy of Dr. Xiao.)


INDRO’S TAKE

 

We’re tremendously impressed with the work being carried out by Dr. Xiao and his team at George Mason University. We’re also honoured to have played a small role – supplying the Hunter SE and InDro Commander, along with occasional support as the project progressed.

“The use of robotics by First Responders is growing rapidly,” says InDro Robotics CEO Philip Reece. “Improving their ability to reach destinations safely on mission-critical deployments is extremely important work – and the data results are truly impressive.

“We are hopeful the work of Dr. Xiao and his team is adopted in future beyond research and into real-world applications. There’s clearly a need for this solution.”

If your institution or R&D facility is interested in learning more about InDro’s stable of robots (and there are many), please reach out to us here.

Elroy Air’s Chaparral brings long-range, heavy lift cargo solution

By Scott Simmie

 

Some history has just been made in the world of Advanced Air Mobility (AAM).

On November 12, Elroy Air successfully flew its Chaparral C1 – the first flight of a turbogenerator-hybrid electric vertical take-off and landing (hVTOL) aircraft. The hover test of the full-scale aircraft took place at the company’s test-flight facility in Byron, California.

It’s an important milestone as the world moves toward the AAM era, when new and transformative aircraft will move goods and people to destinations that would have been impractical or too expensive using traditional aircraft.

“This is an exhilarating day for our team and the industry as a whole,” says Elroy Air co-founder and CEO Dave Merrill.

There are plenty of companies competing for this new space with innovative autonomous designs – some built to carry people, some cargo, some both. There are several excellent designs out there, but Elroy Air’s Chaparral C1 has been on our radar for reasons you’re about to discover.

Before we get into the history, though, let’s get straight to the news. Here’s a video of the test flight:

AND DOWN ON THE GROUND

 

Check out the Chaparral C1 on the ground. Take a good look, as we’ll be discussing these features.


THE CHAPARRAL

 

Let’s get into why this aircraft will fill a niche.

It’s been designed to move large payloads long distances – and do so efficiently. Humanitarian aid, military resupply and middle-mile logistics are all perfect use-cases for the Chaparral. Its sole purpose is to move significant amounts of cargo efficiently – and be ready for the return trip in minutes.

Here’s the one-floor elevator pitch:

“We’re building an aircraft that will be able to fly 300 miles (483 km) and carry 300 pounds (136 kg) of cargo,” explains Jason Chow, the company’s Director of Strategy and Business Development.

“It’s VTOL, so we don’t need runways. It’s also hybrid electric, so in many situations where there are remote areas, we’re still able to fly where electric power is unavailable.”

Hybrid electric makes sense when you’re after this kind of range, since the craft benefits from the energy density of jet fuel.

“A turboshaft engine powers the batteries, and the batteries power flight,” says Chow.

“One of the most intensive parts of flight is the takeoff portion, where you’re vertically flying upwards. And once you get into forward flight, the turbine is able to throttle back to meet the reduced demand while maintaining battery charge.”
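A back-of-envelope power budget shows why that split works. Every number below is a hypothetical placeholder rather than an Elroy Air specification – the point is simply that the battery buffers the takeoff peak, then recharges from the turbine’s surplus in cruise:

```python
# Hypothetical figures, chosen only to illustrate the hybrid logic.
hover_power_kw   = 120.0   # assumed demand during vertical takeoff
cruise_power_kw  = 45.0    # assumed demand in forward flight
turbine_power_kw = 60.0    # assumed steady generator output

battery_discharge_kw = hover_power_kw - turbine_power_kw    # 60 kW out at takeoff
battery_recharge_kw  = turbine_power_kw - cruise_power_kw   # 15 kW back in at cruise
print(battery_discharge_kw, battery_recharge_kw)
```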

As you can see from the photo, there are eight motors for vertical lift and four for forward propulsion. Once the craft transitions into forward flight, its fixed-wing design brings greater efficiency and range than would be possible with a traditional multi-rotor (which generally has no lifting surfaces aside from the rotors themselves).

But while all this looks great, Chaparral’s real secret sauce is its cargo capabilities – which have been designed, literally, from the ground up.

Take a look again at the photo above. Note the design of the wheel struts, as well as the ample space between the bottom of the fuselage and the ground. That’s all for a very specific reason: Chaparral has been designed to carry an aerodynamic, quickly-swappable cargo pod.

Have a look:

 


THE POD

 

Chow says the system is comparable to a tractor-trailer. On a road, the tractor provides the power to move the goods. In the air, “the trailer is the equivalent of the cargo pod. We imagine customers will have multiple cargo pods.”

Those pods can be quickly interchanged on the ground – because the Chaparral’s autonomy abilities aren’t limited to flight. The aircraft can taxi to a predetermined location, lower and disengage a cargo pod, then reposition itself and pick up the next one. You can imagine the advantage of such a system when transporting food or critical medical supplies in an emergency situation. This isn’t simply an aircraft: It’s a delivery system.

It’s also worth noting that the pod has been designed to be compatible with existing infrastructure and tools such as forklifts. As the Elroy Air website explains:

“The Palletized Pod uses a fairing-on-pallet design to ease loading of heavy cargo. This configuration features a standardized L-Track system for securing shipments, ensuring simple loading and safe travel for hefty items.”

Below: The Chaparral C1 with the pod snugged up and ready for business…


BUSINESS MODEL

 

So, will Elroy Air be a service provider, overseeing autonomous flights for clients? Or will it be producing the Chaparral to be sold to clients who will operate it themselves?

“The current thinking is that we would do both,” explains Chow. “There are a lot of our partners that are very good at operating aircraft: FedEx, Bristow, the United States Air Force. The main thing they do is operate aircraft really well. So in those situations we would sell to them only as the OEM (Original Equipment Manufacturer).”

“But we also have customers who are interested in what we can provide. So in those situations we could provide the service ourselves or rely on very experienced operators.”


MANUFACTURING

 

Producing an aircraft of this scale – it has a wingspan of 26.3 feet (8.01 metres) and a length of 19.3 feet (5.88 metres) – is no small task. Elroy Air made the decision early on that the most efficient approach would be as a highly selective and meticulous integrator. So its composite fuselage, for example, is outsourced.

“There are folks in the general Advanced Air Mobility industry that are building everything in-house. That’s great, you can own the IP (Intellectual Property) for everything,” says Chow.

“That being said, it takes longer. So our approach has been to be an integrator. We source the best parts to help us get to market – including the generator.”


TRAJECTORY

 

There are a lot of startups in this space, including plenty of newcomers. Elroy Air was formed back in 2016 in San Francisco by Dave Merrill (now CEO) and Clint Cope (Chief Product Officer).

By 2018 the company had flight-tested sub-scale Chaparral aircraft and user-tested its automated cargo‑handling systems. The following year it had established a relationship (and contract) with the United States Air Force “enabling Elroy Air to understand and inform the USAF’s operational needs for distributed aerial logistics in contested environments. We developed our custom simulation environment for Chaparral aircraft and ran a successful flight test campaign on an early 1200‑pound, full-scale Chaparral prototype outfitted with an all-electric powertrain.”

The milestones have kept coming. The year 2020 brought refinements to its simulation system, allowing the team to carry out thousands of virtual flights and ground/cargo mission experiments. Development began in earnest that same year on the hybrid-electric powertrain, including multiple turboshaft engine runs.

A Series A financing in 2021 brought in partners Lockheed Martin, Prosperity7 and Marlinspike, who came to the table with $40M. In 2022 an additional $36M in capital arrived, and the company unveiled its C1-1 Chaparral to the public. (The aircraft also made it to the cover of Aviation Week.)

It’s been a careful, methodical journey that has brought the company this far – and it clearly has ambitious plans for the future. If you’d like to read about these milestones in greater detail, you’ll find a company timeline here.

But the biggest milestone so far? The flight that opened this story.

“This marks a major moment for the industry as hybrid-electric aircraft enable the dual benefits of runway-independent safe redundant propulsion, and long-range flight well in excess of battery power alone,” says co-founder and CEO Dave Merrill. 

“Our accomplishment puts Elroy Air one step closer to delivering a transformative logistics capability to our customers and partners.”


INDRO’S TAKE

 

We at InDro obviously have a stake in the future of Advanced Air Mobility. We know from our own work in this field of the pent-up demand for efficient VTOL aircraft that can safely shuttle critical cargo – whether across major cities or to isolated communities lacking runways.

We’ve also been watching, with interest, the companies that are vying for space in this coming market.

“From everything we’ve seen, Chaparral is going to be a perfect fit,” says InDro Robotics President Philip Reece. “Its cargo capacity and range will really fill a void, and the pod system – complete with its autonomous coupling and decoupling feature – will be hugely advantageous. We congratulate Elroy Air on this milestone, and look forward to seeing a transition flight before long.”

As with all new aircraft, it will take time before certification takes place and the FAA gives Elroy Air its full blessings. We’re confident that not only will that day come – but that Elroy Air and Chaparral will play a significant role in the era of Advanced Air Mobility.

All images supplied with permission by Elroy Air

Engineers put skills to the test in F1tenth autonomous challenge

By Scott Simmie

 

Want to win a scale model car race?

Normally you’d pimp your ride, slam the throttle to the max, and work the steering to overtake any opponents while staying on the track.

Now imagine a race where no one is controlling the car remotely. Where, in fact, the car is driving itself – using sensors and algorithms to detect the course, avoid obstacles, and look continuously for the most efficient path to the finish line.

That’s the concept of F1TENTH, a regular competition held at major robotics conferences. The latest contest was carried out in Detroit at IROS 2023, the International Conference on Intelligent Robots and Systems. The contest brings together researchers, engineers, and autonomous systems enthusiasts.

“It’s about Formula racing, but on a smaller scale – and it’s autonomous,” explains Hongrui (Billy) Zheng, a University of Pennsylvania PhD in electrical engineering, and a key organizer of the F1TENTH series.

And what does it take to win?

“I would say 90 per cent software, and 10 per cent hardware,” says Zheng.

And that means it’s more about brainpower than horsepower.

Before we dive in, check out one of the cars below:


A LEVEL PLAYING FIELD

 

To keep things truly competitive, all teams begin with the same basic platform. They can either build that platform, based on the build guides at F1TENTH.org, or purchase it. The price of the vehicle, which this year incorporated a 2D LiDAR unit (the bulk of the cost), is about $2,500-$2,800 US.

“I would say 60 per cent is spent on the LiDAR,” says Zheng. “Some teams use a camera only, and that drives it down to around $1000.”

So it’s a lot more accessible – and a lot safer – than real Formula 1. And instead of high octane fuel, the teams are more concerned with powerful algorithms.

Once again, the open-source Robot Operating System (ROS) autonomy and obstacle-avoidance software is part of the package all teams start out with. But just as real F1 teams work together to extract every ounce of performance, so too do the F1TENTH teams, which usually represent universities but are occasionally sponsored by companies. At this year’s competition six of the nine teams were from universities.

The F1TENTH organization says there are four pillars to its overall mission. Here they are, taken directly:

1. Build – We designed and maintain the F1TENTH Autonomous Vehicle System, a powerful and versatile open-source platform for autonomous systems research and education.

2. Learn – We create courses that teach the foundations of autonomy but also emphasize the analytical skills to recognize and reason about situations with moral content in the design of autonomous systems.

3. Race – We bring our international community together by holding a number of autonomous race car competitions each year where teams from all around the world gather to compete.

4. Research – Our platform is powerful and versatile enough to be used for a variety of research that includes and is not limited to autonomous racing, reinforcement learning, robotics, communication systems, and much more.

In other words, there are real-world applications to all of this. Plus, for engineers, it’s not that difficult to dive in.

“The entire project is Open Source,” explains competitor Po-Jen Wang, a computer engineer from the University of California Santa Cruz. “It uses a Jetson Xavier (for compute). And for perception it uses a Hokuyo 2D LiDAR. Some people will mount a camera for computer vision. You can make it by yourself – it’s very easy to make.”

The following video provides a good introduction to the competition. In actual races, a piece of cardboard – sometimes modified for aerodynamics – is affixed to the rear of each car. This helps the other vehicles on the track detect and avoid it.

 

PIMP THAT RIDE

 

Okay. So you’ve got your basic build, along with the basic ROS software.

Now it’s time to get to work. Engineers will add or modify algorithms for obstacle avoidance, acceleration, braking – as well as for determining the most efficient and optimal path. Depending on their approach, some teams will plot waypoints for the specific course.
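For teams that go the waypoint route, a classic starting point is a pure-pursuit tracker: steer toward a lookahead point on the planned path. Here’s a minimal geometric sketch – the parameters are illustrative, not any team’s actual tune:

```python
import math

WHEELBASE = 0.33   # metres, typical for a 1/10-scale chassis (assumed)
LOOKAHEAD = 1.0    # metres to the target waypoint (assumed)

def pure_pursuit_steer(pose, goal):
    """pose: (x, y, heading in rad); goal: waypoint LOOKAHEAD metres ahead.
    Returns a front-wheel steering angle in radians."""
    x, y, theta = pose
    dx, dy = goal[0] - x, goal[1] - y
    # Lateral offset of the goal in the car's own frame
    local_y = -math.sin(theta) * dx + math.cos(theta) * dy
    curvature = 2.0 * local_y / (LOOKAHEAD ** 2)   # pure-pursuit arc
    return math.atan(WHEELBASE * curvature)        # bicycle-model steer
```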

Of course, like a real F1 race, a lot of modifications take place once teams are at the track. But in the case of F1TENTH, those alterations tend to be code (though we’ll get to mechanical changes in a moment). Scrolling through endless lines of programming isn’t the most efficient way to detect and eliminate bugs or improve efficiency – particularly since multiple types of software are involved.

“There is software for SLAM (Simultaneous Localization and Mapping) for the mapping part, there’s software for localisation, there’s software for basic tracking if you give it a waypoint,” says organizer Billy Zheng. “Some of the basic drivers are found in a repository on Github.

“Most of the good teams are very consistent, and most of the consistent ones use mapping and localisation. The second-place winner this year was using a reactive method – you just drop it and it will work.”
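A ‘reactive method’ in this setting usually means something in the spirit of follow-the-gap: no map and no localisation – the car simply steers toward the widest opening it can see in the current LiDAR scan. A minimal sketch of that idea:

```python
import numpy as np

def follow_the_gap(ranges: np.ndarray, angle_min: float,
                   angle_inc: float, clearance: float = 1.5) -> float:
    """Steer toward the centre of the widest run of far-enough beams.
    ranges: one 2D LiDAR scan; returns a steering angle in radians."""
    free = ranges > clearance          # beams with enough open space
    best_start = best_len = cur_start = cur_len = 0
    for i, ok in enumerate(free):
        if ok:
            if cur_len == 0:
                cur_start = i
            cur_len += 1
            if cur_len > best_len:
                best_start, best_len = cur_start, cur_len
        else:
            cur_len = 0
    if best_len == 0:
        return 0.0                     # boxed in: hold current heading
    mid = best_start + best_len // 2   # middle beam of the widest gap
    return angle_min + mid * angle_inc
```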

With all those moving parts, many teams use a dashboard that displays multiple parameters in real-time as the car moves down the track. This allows them to more rapidly nail down areas where performance can be optimised.

“The good teams usually have a better visualisation setup, so it’s easier to debug what’s going on,” adds Zheng. “The good teams are using Foxglove – a spinoff from an autonomous driving company that created a dashboard for ROS.”

To get a better idea of what the engineers are seeing trackside, here’s a look at Foxglove in action during F1TENTH.

MECHANICALS

 

Though it’s 90 per cent about code, that’s not all.

“Some modify their vehicles in different ways, maybe make it more aerodynamic, change the wheels,” explains competitor Tejas Agarwal, a UPenn graduate with a Master’s in Robotics. Agarwal and Po-Jen Wang were both contracted by Japanese self-driving software company/foundation Autoware.

(As it turned out, Wang and Agarwal placed second and third, respectively.)

The wheels on the stock vehicles are more suited to pavement and dirt than indoor tracks, so wheels are a common modification. But this year’s winning team, from Université Laval, took it further.

“We lowered the centre of mass as much as possible, changed the wheels, and changed our motor for better control,” says Laval team leader Jean-Michel Fortin, a PhD student in computer science specialising in robotics.

Of course, they weren’t allowed to increase the power of the motor in order to keep things on an even playing field. But they wanted one that offered greater control at lower speeds.

“Usually at low speeds the (stock) motor is bad, so we changed that for a sensor-equipped motor,” says Fortin.

“We also replaced our suspension because it was too soft. As soon as we were braking our LiDAR wasn’t seeing what it should. For the software part, we tuned everything to the maximum that we could. We also optimised the race line to make sure the race line that we predict is as close to what the car can do as possible.”

And it paid off. The Laval team, pictured below, was clearly in a celebratory mood after winning (Jean-Michel Fortin in centre). Following are second-place winner Po-Jen Wang, third-place winner Tejas Agarwal and organizer Billy Zheng.

 


INDRO’S TAKE

 

Competitions – particularly ones like this one – are highly useful. They foster collaborative teams and encourage innovative thinking. Plus, they’re just plain fun.

“F1TENTH is a tremendous initiative and a really great challenge for young engineers and autonomy enthusiasts,” says InDro Robotics CEO Philip Reece. “Those participating today could well be leaders in the autonomy sector tomorrow. We congratulate all who took part, with a special nod to the top three. Well done!”

Is there a similar engineering challenge you think is worth some words from us? Feel free to contact InDro’s Chief of Content Scott Simmie here.

And, if you’re a competitor beginning a job search, feel free to drop us a line with your resume here. InDro Robotics is Canada’s leading R&D aerial and ground robotics company and in a current phase of scaling. We’re always on the lookout to expand our talented and diverse engineering team.

2022 InDro Highlights

Wow. Another year has passed.

And for InDro Robotics, it was a year marked by new products, new deployments, and multiple milestones.

We did a deep dive on the year with our Year in Review story, which you can find here. But we realise some people might prefer a condensed version. So here we go, starting with the image above.

That’s Sentinel, our rugged teleoperated robot for remote inspections, which we launched at the very beginning of 2022. It carries out those inspections over 4G and 5G networks. We customise Sentinel’s sensors based on client needs, but the standard model comes with a 30x optical Pan-Tilt-Zoom camera, thermal sensor, and a high-res wide-angle camera that gives the operator a clear view of surroundings.

Operations are a snap, using an Xbox controller plugged into a laptop with a comprehensive dashboard. Even dense data can be streamed and downloaded in real-time over 5G. In fact, we’ve proven Sentinel’s minimal-latency capabilities from more than 4,000 kilometres away. (That took place from Bellevue, Washington State, where we were invited to demo our system by T-Mobile.)
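For a sense of why distance alone isn’t the obstacle, here’s a back-of-envelope propagation estimate. It’s a lower bound only – in practice the radio access network and routing overhead dominate the latency of a real link:

```python
# Signals in fibre travel at roughly two-thirds the speed of light.
distance_km = 4000
fibre_speed_km_s = 200_000           # ~2/3 of c

one_way_ms = distance_km / fibre_speed_km_s * 1000
print(f"propagation alone: {one_way_ms:.0f} ms one-way, "
      f"{2 * one_way_ms:.0f} ms round trip")   # 20 ms / 40 ms
```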

It didn’t take long for word to get around. A couple of months after Sentinel’s launch, we were invited to put it through its paces at the Electric Power Research Institute, or EPRI. Check out this next image – it’s a frame grab from the Sentinel dashboard, and was captured during a mission at an EPRI testbed electrical substation in Lenox, Massachusetts.


There’s a lot we could tell you about Sentinel but we’re trying to keep this tight. If you’d like more in-depth details you can read this story, or reach out to superstar Account Executive Luke Corbeth here.


INDRO COMMANDER

A significant part of what gives Sentinel its various superpowers is an innovation we call InDro Commander. It’s a bolt-on box that contains a camera, EDGE computer, a 4G- and 5G-compatible modem and the ROS1 and ROS2 software libraries. That means it’s a snap for clients to add any additional sensors they’d like without the hassle of coding. 

You can operate any InDro Commander-enabled platform using an Xbox controller and a highly intuitive dashboard on your desktop or laptop. FYI, that box you see just above the InDro logo (and just below the camera) in the image below? That’s InDro Commander.

DRONES

Though InDro has a stellar reputation for ground robotics, the company was founded on the Unmanned Aerial System side of things. So we’ve been busy in that arena, as well.

The most significant development of the year is a software and hardware package we call InDro Pilot. In a nutshell, it’s a bolt-on module that’s similar to Commander. It enables remote BVLOS operations over 4G or 5G, dense data throughput, simplified sensor integration – plus the ability to broadcast to nearby traditional crewed aircraft that drone operations are taking place in the vicinity. 

That hexagonal box in the image is Version 1.0 of the hardware side. We’ve since created a much smaller and lighter version, capable of transforming any Enterprise drone using the Pixhawk flight controller into a low-latency, BVLOS super-RPAS.


GROUND (BREAKING) DELIVERIES

In 2022, InDro teamed up with London Drugs to test out ground deliveries via our ROLL-E robot. 

There were two separate trials. The first, in Victoria, involved ROLL-E taking goods ordered online to a parking lot for touchless, curbside delivery. The second, in Surrey BC, involved the second generation of ROLL-E delivering goods from a London Drugs outlet to a consumer’s home. ROLL-E features 4G and 5G teleoperation, and is equipped with a total of six cameras (including two depth-perception cameras), giving the operator tremendous spatial awareness.

That’s ROLL-E 2.0, in the image below. Its secure cargo bay – which unlocks when the robot reaches its destination – can carry up to 50 kilograms.


GOOD DOG

Also in 2022, InDro Robotics became a North American distributor for the Unitree line of quadruped robots. These rugged, agile robots are well-suited to a variety of tasks, including remote inspection. In the video below, Luke Corbeth puts the entry-level GO 1 through its paces.

INDRO BACKPACK

Remember InDro Commander, the box that enables remote teleoperations and easy sensor integration? We figured a module like that could really expand the capabilities of the Unitree quadrupeds. 

And so – you guessed it – we built it. We call this device InDro Backpack, and you can see it on the robot below.


MEET LIMO

InDro Robotics is also a North American distributor of the excellent AgileX robot line. This past year saw the introduction of a very cool machine for education and R&D called LIMO. It comes with an impressive number of features straight out of the box, including:

  • An NVIDIA Jetson Nano, capable of remote teleoperation over 4G
  • An EAI X2L LiDAR unit
  • Stereo camera
  • Four steering modes (tracked, Ackermann, four-wheel differential, and omni-directional)

It’s a powerful, SLAM-capable machine. Best of all? It’s really affordable.


MONTREAL MARATHON

Three InDro employees took part in the Montreal Marathon – but they were largely standing still.

In fact, they were operating sub-250 gram drones as part of a medical research pilot project. The drones – two were constantly in the air throughout the run – were positioned at a point near the end of the course where runners sometimes encounter medical distress. Live feeds from the drones were monitored in a tent by researchers for two reasons: To see if the aerial view could help them quickly identify someone needing help, and to help pinpoint their location so assistance could be rapidly dispatched.

The results? The drone feeds helped quickly identify and locate two runners in need of help.

“The view from above when monitoring moving crowds is just incomparable,” says Dr. Valérie Homier, an Emergency Physician at McGill University Health Centre and the lead researcher on the project. 

Below: InDro pilots Kaiwen Xu, Ella Hayashi and Liam Dwyer.


TCXpo

One of the highlights of the year was the TCXpo event at Area X.O in Ottawa. Sponsored by Transport Canada and Innovation, Science and Economic Development Canada, it was a full day of demonstrations from Canadian leaders in the world of Smart Mobility. 

InDro was there, of course, displaying our InDro Pilot-enabled Wayfinder drone, as well as a *lot* of ground robots. CEO Philip Reece moderated a panel – and was also in charge of airspace for multiple drone demonstrations. That’s Philip, below, talking about aerial and ground robotics to attendees.


TRAINING

Speaking of drones, we also launched drone training and resource portal FLYY. Online lessons are carried out by our own Kate Klassen, widely recognised as one of the best (and most qualified) drone instructors in North America. Whether you’re looking to obtain your Basic or Advanced RPAS certificate, or want to further expand your skills, Kate’s got the goods.

If you’re looking for training for multiple people, Kate offers discounts for companies and educational institutes. You can reach her here.

IT'S A SECRET...

The other big thing happening in 2022 (and continuing in 2023) was InDro’s work with major global clients. We can’t disclose that work due to non-disclosure agreements, but we can tell you we’re busy with multiple, exciting, ongoing projects!

Finally, we closed out 2022 with another successful InDro Hack-a-Thon. Employees were given a day and a half to work on a project or process that could benefit InDro down the road. Once again, Team InDro delivered, with some amazing projects completed within the deadline. You can read all about it here.

A FINAL WORD...

As you can see, it’s been quite a year – and CEO Philip Reece couldn’t be happier.

“I’m incredibly proud of the work InDro accomplished in 2022,” he says. “Our engineering and sales staff consistently punch above their weight, with multiple significant milestones – including excellent revenue growth – achieved in the past year. Just as gratifying is the fact our employees love what they do.”

Very true. Now stay tuned for an even more amazing 2023.

And if you’d like to reach InDro, just give that little orange button a click. Though we’ve got plenty of robots, we’ll make sure a real human being gets back to you shortly.

Still a long road to fully autonomous passenger cars

By Scott Simmie

 

We hear a lot about self-driving cars – and that’s understandable.

There are a growing number of Teslas on the roads, with many owners testing the latest beta versions of FSD (Full Self-Driving) software. The latest release allows for automated driving both on highways and in cities – but the driver must still be ready to intervene and take control at all times. Genesis, Hyundai and Kia electric vehicles can actively steer, brake and accelerate on highways while the driver’s hands remain on the wheel. Ford EVs offer something known as BlueCruise, a hands-free mode that can be engaged on specific, approved highways in Canada and the US. Other manufacturers, such as Honda, BMW and Mercedes, are also in the driver-assist game.

So a growing number of manufacturers offer something that’s on the path to autonomy. But are there truly autonomous vehicles intended to transport humans on our roads? If not, how long will it take until there are?

Good question. And it was one of several explored during a panel on autonomy (and associated myths) at the fifth annual CAV Canada conference, which took place in Ottawa on December 5. InDro’s own Head of Robotic Solutions (and Tesla owner) Peter King joined other experts in the field on the panel.

 


Levels of autonomy

 

As the panel got underway, there were plenty of acronyms being thrown around. The most common were L2 and L3, standing for Level 2 and Level 3 on a scale of autonomy that ranges from zero to five.

This scale was created by the Society of Automotive Engineers as a reference classification system for motor vehicles. At Level 0, there is no automation whatsoever, and all aspects of driving require human input. Think of your standard car, where you basically have to do everything. Level 0 cars can have some assistive features such as stability control, collision warning and automatic emergency braking. But because none of those features are considered to actually help drive the car, such vehicles remain in Level 0.

Level 5 is a fully autonomous vehicle capable of driving at any time of the day or night and in any conditions, ranging from a sunny day with dry pavement through to a raging blizzard or even a hurricane (when, arguably, no one should be driving anyway). The driver does not need to do anything other than input a destination, and is free to watch a movie or even sleep during the voyage. In fact, a Level 5 vehicle would not need a steering wheel, gas pedal, or other standard manual controls. It would also be capable of responding in an emergency situation completely on its own.

Currently, the vast majority of cars on the road in North America are Level 0. And even the most advanced Tesla would be considered Level 2. There is a Level 3 vehicle on the roads in Japan, but there are currently (at least to the best of our knowledge and research), no Level 3 vehicles in the US or Canada.

As consumer research and data analytics firm J.D. Power describes it:

“It is worth repeating and emphasizing the following: As of May 2021, no vehicles sold in the U.S. market have a Level 3, Level 4, or Level 5 automated driving system. All of them require an alert driver sitting in the driver’s seat, ready to take control at any time. If you believe otherwise, you are mistaken, and it could cost you your life, the life of someone you love, or the life of an innocent bystander.”

To get a better picture of these various levels of autonomy, take a look at this graphic produced by SAE International.


Now we’ve got some context…

 

So let’s hear what the experts have to say.

The consensus, as you might have guessed, is that we’re nowhere near the elusive goal of a Level 5 passenger vehicle.

“Ten years ago, we were all promised we’d be in autonomous vehicles by now,” said panel moderator Michelle Gee, Business Development and Strategy Director with extensive experience in the automotive and aerospace sectors. Gee then asked panelists for their own predictions as to when the Level 4 or 5 vehicles would truly arrive.

“I think we’re still probably about seven-plus years away,” offered Colin Singh Dhillon, CTO with the Automotive Parts Manufacturers’ Association.

“But I’d also like to say, it’s not just about the form of mobility, you have to make sure your infrastructure is also smart as well. So if we’re all in a bit of a rush to get there, then I think we also have to make sure we’re taking infrastructure along with us.”


It’s an important point.

Vehicles on the path to autonomy currently have to operate within an infrastructure originally built for human beings operating Level 0 vehicles. Such vehicles, as they move up progressive levels of autonomy, must be able to scan and interpret signage and traffic lights, understand weather and traction conditions – and much more.

Embedding smart technologies along urban streets and even on highways could help enable functionalities and streamline data processing in future. If a Level 4 or 5 vehicle ‘knew’ there was no traffic coming at an upcoming intersection, there would be no need to stop. In fact, if *all* vehicles were Level 4 or above, smart infrastructure could fully negate the need for traffic lights and road signs entirely.

 

Seven to 10 years?

 

If that’s truly the reality, why is there so much talk about autonomous cars right now?

The answer, it was suggested, is in commonly used – but misleading – language. The term “self-driving” has become commonplace, even when referring solely to the ability of a vehicle to maintain speed and lane position on the highway. Tesla refers to its beta software as “Full Self-Driving.” And when consumers hear that, they think autonomy – even though such vehicles are only Level 2 on the autonomy scale. So some education around language may be in order, suggested some panelists.

“It’s the visual of the word ‘self-driving’ – which somehow means: ‘Oh, it’s autonomous.’ But it isn’t,” explained Dhillon. “…maybe make automakers change those terms. If that was ‘driver-assisted driving,’ then I don’t think people would be sleeping at the wheel whilst they’re on the highway.”

One panelist suggested looking ahead to Level 5 may be impractical – and even unnecessary. Recall that Level 5 means a vehicle capable of operating in all conditions, including weather events like hurricanes, where the vast majority of people would not even attempt to drive.

“It’s not safe for a human to be out in those conditions…I think we should be honing down on the ‘must-haves,’” offered Selika Josiah Talbott, a strategic advisor known for her thoughtful takes on autonomy, EVs and mobility.

“Can it move safely within communities in the most generalised conditions? And I think we’re clearly getting there. I don’t even know that it’s (Level 5) something we need to get to, so I’d rather concentrate on Level 3 and Level 4 at this point.”

 


InDro’s Peter King agrees that Level 5 isn’t coming anytime soon.

“I believe the technology will be ready within the next 10 years,” he says. “But I believe it’ll take 30-40 years before we see widespread adoption due to necessary changes required in infrastructure, regulation and consumer buy-in.”

And that’s not all.

“A go-to-market strategy for Level 5 autonomy is a monumental task. It involves significant investments in technology and infrastructure – and needs to be done in collaboration with regulators while also factoring in safety and trust from consumers, with a business model that is attainable for the masses.”

What about robots?

Specifically, what about Uncrewed Ground Vehicles like InDro’s Sentinel inspection robot, designed for monitoring remote facilities like electrical substations and solar farms? Sentinel is currently teleoperated over 4G and 5G networks with a human controlling the robot’s actions and monitoring its data output. 

Yet regular readers will also know we recently announced InDro Autonomy, a forthcoming software package we said will allow Sentinel and other ROS2 (Robot Operating System) machines to carry out autonomous missions.

Were we – perhaps like some automakers – overstating things?

“The six levels of autonomy put together by the SAE are meant to apply to motor vehicles that carry humans,” explains Arron Griffiths, InDro’s lead engineer. In fact, there’s a separate categorization for UGVs.

The American Society for Testing and Materials (ASTM), which creates standards, describes those tiers as follows: “Currently, an A-UGV can be at one of three autonomy levels: automatic, automated, or autonomous. Vehicles operating on the first two levels (automatic and automated) are referred to as automatic guided vehicles (AGVs), while those on the third are called mobile robots.”

“With uncrewed robots like Sentinel, we like to think of autonomy as requiring minimal human intervention over time,” explains Griffiths. “Because Sentinel can auto-dock for wireless recharging in between missions, we believe it could go for weeks – quite likely even longer – without human intervention, regardless of whether that intervention is in-person or virtual,” he says.

“The other thing to consider is that these remote ground robots, in general, don’t have to cope with the myriad of inputs and potential dangers that an autonomous vehicle driving in a city must contend with. Nearly all of our UGV ground deployments are in remote and fenced-in facilities – with no people or other vehicles around.”

So yes, given that InDro’s Sentinel will be able to operate independently – or with minimal human intervention spread over long periods – we are comfortable with saying that machine will soon be autonomous. It will even be smart enough to figure out shortcuts over time that might make its data missions more efficient.

It won’t have the capabilities of that elusive Level 5 – but it will get the job done.

InDro’s take

 

Autonomy isn’t easy. Trying to create a fully autonomous vehicle that can safely transport a human (and drive more safely than a human in all conditions), is a daunting task. We expect Level 5 passenger vehicles will come, but there’s still a long road ahead.

Things are easier when it comes to Uncrewed Ground Vehicles collecting data in remote locations (which is, arguably, where they’re needed most). They don’t have to deal with urban infrastructure, unpredictable drivers, reading and interpreting signage, etc.

That doesn’t mean it’s easy, of course – but it is doable.

And we’re doing it. Stay tuned for the Q1 release of InDro Autonomy.

 

Percepto’s autonomous drone-in-a-box

By Scott Simmie, InDro Robotics

 

If you’re in the drone industry, you’ve likely heard the phrase: “Drone-in-a-box.” It refers to an autonomous system where a drone nests inside an enclosure for charging and safe harbour – and is regularly dispatched for automated missions. Usually, those missions are pre-programmed and involve inspection, surveillance, or change detection. Manual missions can be carried out when necessary, but the real point is automation.

The idea is that missions are carried out repeatedly, with a human simply monitoring from a remote location. “Remote” could mean inside a building on the industrial site where the drone is based. But, because these emerging systems operate using LTE to control the drone and communicate with the software, they can be operated from hundreds or even thousands of kilometres away (provided you have permission to operate Beyond Visual Line of Sight and an available LTE network). With missions carried out automatically and on a regular schedule, this makes for vastly more efficient inspections, surveillance, tracking of construction progress, etc.

It beats requiring a pilot on site, and the drone never suffers from fatigue. The advantages for Enterprise clients are immediately apparent: Inspections, surveillance or general monitoring take place like clockwork, with all relevant data stored for easy access and interpretation. But think also of the edge such systems could provide for First Responders: A drone could be automatically dispatched to the location of a 9-1-1 call or critical incident. Video or thermal imagery can be securely live-streamed to decision makers down the line, regardless of where they’re located. The potential of such systems is unlimited.

At InDro Robotics, we’re no strangers to this concept. In fact, we’ve got a few things under the hood in this regard. But we like to acknowledge and celebrate success in this field. And so today’s post will focus on Percepto – the world’s leader in drone-in-a-box solutions. The company has a proven system, currently deployed in more than 100 locations around the globe. We recently had an opportunity to see a demo of the Percepto system, hosted by Canadian distributors RMUS (Rocky Mountain Unmanned Solutions).

Percepto’s autonomous drone system

The Percepto website outlines its offering with this statement: “Changing the way visual data is collected and analyzed, Percepto AIM is the only end-to-end inspection and monitoring software solution that fully automates visual data workflows, from capture to insights.” AIM stands for Autonomous Inspection and Monitoring, and is the software integral to the overall package.

This Percepto video provides a good overview:

It’s one thing to see a company video, quite something else to see that system in person. We were part of a briefing with Percepto’s Ehud Ollech (Head of Business Development) and Shykeh Gordon (VP Global Sales). They demonstrated the AIM software, the Sparrow drone (which comes with a parachute), and much more.

But they started by explaining that this system is purpose-built for major industrial customers, with clients from the mining, solar, oil & gas/petrochemical and utilities sectors. And, they said, don’t think of Percepto as a drone company.

“Basically we are a big data analytics company,” said Shykeh, “offering end-to-end inspection and monitoring solutions.” What kind of solutions? This corporate graphic, supplied by Percepto’s marketing department, helps explain:


Percepto’s AIM software

During the demonstration, Shykeh and Ehud walked us through the AIM (Autonomous Inspection and Monitoring) software. It’s a browser-based system that allows you to program missions, monitor flights in real-time, watch a live stream from the Sparrow drone’s RGB or thermal camera, and take a deep dive into meaningful data. The User Interface is simple, and Percepto says a mission can be planned in as little as three minutes. In fact, they flew a brief mission from indoors with visitors watching from a conference room and visual observers outside. Every aspect of the mission, including a live video feed, was delivered in real-time. We could monitor what the drone was seeing, which is part of the point. And Ehud had the option, if something caught his eye, of stopping for a closer look.

RGB and thermal data is continuously captured during missions, then uploaded to the AWS cloud when the drone comes home to roost. Significantly, the AIM software is capable of change detection – a major feature for many clients. Once a baseline capture of a designated area has been stored in the Cloud, if a subsequent mission detects any changes, anomalies will be flagged. These could include thermal changes, issues with solar panels, oil leaks, a broken window – the list goes on. (The thermal data is radiometric, meaning it provides the actual temperatures measured.) The AI does not always categorize the type of anomaly, but even when it doesn’t it will quickly point out the relevant images for the operator to take a closer look. Percepto can also be integrated with Smart Fences or Pan-Tilt-Zoom cameras and dispatched automatically if something seems amiss.
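At its core, baseline-versus-mission change detection can be thought of as thresholded differencing of two registered captures of the same view. The toy sketch below shows only that core idea – Percepto’s AIM is, of course, far more sophisticated:

```python
import numpy as np

def changed_mask(baseline: np.ndarray, current: np.ndarray,
                 threshold: float = 25.0) -> np.ndarray:
    """baseline, current: (H, W) grayscale or radiometric arrays of the
    same registered view. Returns a boolean mask of changed pixels."""
    diff = np.abs(current.astype(float) - baseline.astype(float))
    return diff > threshold

def has_anomaly(mask: np.ndarray, min_pixels: int = 50) -> bool:
    # Ignore isolated specks of sensor noise; flag sizeable regions.
    return int(mask.sum()) >= min_pixels
```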


“The heart of our system is our software,” says Ehud.

AIM can also integrate data from ground-based robots, such as Boston Dynamics’ Spot. Even a smartphone photo or other image can be added to the mix, providing it contains geolocation data. The system can produce 3D digital twins, with all photogrammetry stitching done in the Cloud by AIM. (Some solutions for automated data capture rely on third-party software for photogrammetry.) As part of the demonstration, Ehud defined an area of a pile of earth; a volumetric calculation was instantly performed. And this was all while the group was comfortable indoors. We were probably 50 metres from the actual system, but we could just as easily have been across the planet, assuming an LTE connection at the drone end.
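A volumetric calculation like that one is conceptually simple once photogrammetry has produced a heightmap: sum the height above a base plane across the selected area, multiplied by each cell’s ground area. A toy sketch – not AIM’s actual pipeline:

```python
import numpy as np

def stockpile_volume(heights: np.ndarray, base_elevation: float,
                     cell_size_m: float) -> float:
    """heights: (H, W) elevations in metres over the selected polygon."""
    above = np.clip(heights - base_elevation, 0.0, None)
    return float(above.sum() * cell_size_m ** 2)   # cubic metres

dem = np.full((100, 100), 3.0)          # a 10 m x 10 m pile, 2 m proud
print(stockpile_volume(dem, 1.0, 0.1))  # 200.0 cubic metres
```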

Seeing the Percepto drone in a box in action


After the first mission was complete, we went outside to watch the system in person. It began with the clamshell housing – which Percepto says can withstand a Category 5 hurricane – closed. Once the mission was initiated, it opened up quickly and the Sparrow took off. The system is operational in winds up to 40 kph, with a takeoff/landing limit of 27 kph. Winds during the demo were 24 kph; the Sparrow was rock steady.


The system does not have obstacle avoidance, but uses ground-based radar to avoid conflicts with crewed aircraft. Altitude parameters, obviously, are programmed when setting the mission.

“Everyone’s waiting for aerial radar to get cheaper and lighter,” said Shykeh.

More sensors to come

The company is already working on a gas detection sensor (aka an OGI camera), and is looking at potential LiDAR sensors as well. Maximum flight time is 40 minutes under optimal conditions, but generally flights are limited to 30 minutes. The next-generation battery will offer a 20-30 per cent increase in time and range, and charging time in the station – from zero to full – is about 40 minutes.


InDro’s view

This is a refined and mature system, well-suited for major corporations with the budget for this kind of data acquisition and interpretation. It’s particularly suited for remote sites – especially sites that do not have staff on site but require persistent monitoring for safety, security or other reasons. Percepto has some very high-profile clients on its roster, including FPL, Koch, Verizon and Enel.

We’re strong supporters of drone-in-a-box solutions – and are actively exploring systems that might be helpful to First Responders. Kudos to Percepto…and stay tuned.