I, Robot: The Humanoids are here

By Scott Simmie

 

You might own a robot without even realising it.

Have a Roomba? That’s a robot. And a drone? That’s a flying robot. Even a Tesla, in Full Self-Driving mode, is a robot.

There are a lot of definitions out there – but one we particularly like comes from Maja Matarić, a computer scientist, roboticist and AI researcher at the University of California. In her book, The Robotics Primer, she concisely defines a robot as “an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals.”

Whether that goal is to vacuum your floor, capture aerial data, or weld a part in a factory, we feel this is a really clear definition. It also doesn’t distinguish between platforms: a robot that fits this bill could be stationary, wheeled, a quadruped or even a humanoid.
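Matarić’s definition can be made concrete in a few lines of code. The toy loop below (all names are hypothetical – this is not a real robotics API) exists only to show the three ingredients together: sensing the environment, acting on it, and pursuing a goal autonomously.

```python
# Toy illustration of Matarić's definition: an autonomous system that
# senses its environment and acts on it to achieve a goal.
# Hypothetical example, not a real robotics framework.

class VacuumBot:
    """Minimal sense-act loop with a floor-cleaning goal."""

    def __init__(self, dirty_cells):
        self.dirty = set(dirty_cells)   # the "environment"
        self.cleaned = []

    def sense(self):
        # Sensing: observe one remaining dirty patch, if any.
        return next(iter(self.dirty), None)

    def act(self, cell):
        # Acting: change the environment toward the goal.
        self.dirty.discard(cell)
        self.cleaned.append(cell)

    def run(self):
        # Autonomy: loop without outside input until the goal is met.
        while (cell := self.sense()) is not None:
            self.act(cell)
        return self.cleaned

bot = VacuumBot({(0, 0), (1, 2), (3, 1)})
print(len(bot.run()))  # → 3
```

Swap the sensing and acting methods for a camera feed and a motor controller and the same skeleton describes a drone or a humanoid; that platform-independence is exactly what the definition captures.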

And it’s that last platform – humanoid – that’s been getting a lot of buzz recently. Numerous companies are now manufacturing robots that resemble human beings in their form factor. And, as it turns out, for very good reasons.

Below: Ameca, a robot built by the UK’s Engineered Arts, is known for its eerie ability to mimic human expressions

Ameca robotics AI

WHY HUMANOID?

 

The idea of a humanoid robot has been around for longer than you might think. Leonardo Da Vinci designed – and possibly built – an automaton in the late 15th Century. It’s known currently as Leonardo’s Robot or Leonardo’s Mechanical Knight. According to Wikipedia, “The robot’s design largely consists of a series of pulleys that allow it to mimic human motions. Operational versions of the robot have been reconstructed by multiple researchers after the discovery of Leonardo’s sketches in the 1950s.”

It appears that the purpose of this design was for entertainment (which also fits the definition of a goal), but it fell short when it comes to sensing its environment and autonomy. Still, it’s fascinating to know the Italian inventor turned his attention to designing a mechanical device in human form way back then.

It would take another half-millennium before the first true humanoid robot was built. In the early 1970s, the Wabot was unveiled in Japan. It was anthropomorphic, with two arms and two legs. It also contained a vision system and audio sensors, and could speak in Japanese. According to this overview, “It was estimated that the WABOT-1 has the mental faculty of a one-and-half-year-old child.”

Below: A modern reproduction, based on Leonardo Da Vinci’s sketches, of his “Mechanical Knight” complete with inner mechanisms. It’s followed by an image of Wabot-1 from 1973

Da Vinci humanoid robot
1973 Wabot humanoid

THE HUMAN ADVANTAGE

 

Why create a humanoid in the first place?

Well, there are certain advantages to a human form factor, particularly when it comes to carrying out repetitive tasks in the real world. And the reason? The world around us has been built for humans. If there’s an existing task carried out by people, say pick-and-place, the infrastructure for that task has been created with humans in mind. That means conveyor belts, shelving, cupboards etc. are all designed for the average human. If you build a robot in a human-like form and roughly to scale, that’s a big advantage.

“You don’t need to change the surrounding infrastructure to accommodate the robot,” explains Head of R&D Sales Luke Corbeth.

“The end result obviously is faster deployment. This applies to factories, homes, hospitals, pretty much any use-case. None of these locations need to be robot native to effectively leverage a humanoid robot because they’ve been built for people.”

In fact, humanoid robots have already been deployed on some factory floors. They’re ideally suited to repetitive tasks such as picking up an item and moving it from one location to another – and contain tactile feedback sensors in their manipulators to calculate appropriate grip strength. They could also be deployed, says Corbeth, in environments built for humans but which pose hazards – inspections or maintenance inside a radioactive area of a nuclear facility, for example.

“There are a lot of dexterous tasks people are doing today that are very challenging to automate because they require high levels of precision,” he says. “These are perfect tasks for humanoids.”

Looking down the road, many foresee an era when humanoids are affordable enough – and capable enough – for deployment in homes. There, they could carry out some of the more mundane household tasks like cleaning or clothes washing, perhaps even elder care and companionship.

A growing number of companies are now in the humanoid space, including Tesla (Optimus), Agility (Digit), Boston Dynamics (Atlas), and Figure (Figure 02). InDro Robotics is a distributor for Unitree, and carries the G1 humanoid and H1 and H1-2 research and development models. (We can also modify these robots for specific use-cases.)

The base version of the G1 sells for $21,600 US – which is surprisingly reasonable for a humanoid form factor. Corbeth says the current offerings are a result of a “perfect storm” across multiple advances in AI compute, battery, sensor and manufacturing technologies. The more advanced H1 sells for $99,600 US and is better suited for complex R&D.

 

WHAT’S NEXT

 

Humanoids are already in the real world. With further and inevitable advances in AI, Machine Vision and Machine Learning (as well as sensors, manipulators, etc.) it’s safe to assume that humanoids will only get smarter and better at smoothly carrying out fully autonomous tasks.

“I think that it will be probably, realistically, three to five years before you see walking humanoid robots around people all the time,” Dr. David Hanson, Founder of Hanson Robotics, recently told the South China Morning Post.

“I think we are entering the age of living intelligent machines. It’s coming. Machine consciousness, self-determining machines…it’s on its way. And if we see that happen, then we want to make sure that we make the AI good, compassionate, able to connect and want the best for humans.” Yes, indeed.

And a final note. At some point, these humanoids will be good enough to manufacture themselves. That’s historically been something in the realm of science fiction. However, a recent TechCrunch story pointed out a new partnership between humanoid developer Apptronik and manufacturer Jabil.

“This means that should everything go according to plan, the humanoid robot will eventually be put to work building itself,” says the article.

Below: A CNET video outlines developments expected in this field in 2025

INDRO’S TAKE

 

Because we sell and modify humanoids in addition to designing and building our own robots (and robots for clients), we’re obviously interested in this space. While we don’t have plans to develop our own humanoid (yet), we are currently working with the Unitree G1 and H1 models to evaluate and enhance their capabilities. And yes, we’ve already sold these to customers.

“Humanoids are a logical progression in robotics,” says InDro Robotics Founder and CEO Philip Reece. “While they’re not the solution for every use-case, they have a clear role in carrying out repetitive or even dangerous tasks that are currently carried out by humans. I suspect, in the not-so-distant future, humanoids will be working alongside people in an ever-increasing number of settings.”

Interested in learning more? Contact us here.

Sense, solve, go: Does Waymo herald the future of autonomous vehicles?

By Scott Simmie

 

During a recent trip to California, I had the opportunity to ride in a Waymo.

I’d certainly read about Alphabet’s autonomous car-for-hire service in the past, and I work for a company that builds robots and autonomy software. So it seemed natural, while in San Francisco, to download the app (similar to Uber) and hail an autonomous vehicle.

Within a couple of minutes, a Waymo vehicle arrived at the pickup point – just a short walk from where it had been summoned. It pulled up, LiDARs spinning, waiting for me to climb in. I put a hand on the door; it was locked. A quick glance at the app and I saw an “unlock” feature. Then I was inside.

And then, with some ambient music playing in the background (you have the option to turn it off or select something else), we – meaning the car and I – were off. A display showed a digital representation of what the vehicle was seeing in its surroundings, including parked vehicles and pedestrians. The electric Jaguar quietly accelerated to the speed limit, obeyed all traffic rules, and smoothly adjusted for unexpected occurrences. When the driver of a parked vehicle opened the door to exit, the Waymo fluidly arced a safe distance away. With the steering wheel making smooth turns and constant smaller fine adjustments, it was like being in a vehicle with an invisible, silent driver at the helm.

I had full faith in the technology and actually preferred it to a standard rideshare. Waymo’s safety data (which we’ll explore later) had reassured me the drive was going to be statistically safer than riding with a human driver. Plus, there was no need to engage in small talk. When the ride was done, I simply exited without being prompted for a tip.

I’d been aware of Waymo since it first deployed. I also have a friend with a Tesla who has the Full Self-Driving package. He commutes twice a week from well outside Toronto into the GTA without any inputs beyond setting his destination. He foresees a day, long promised by Elon Musk, when his own vehicle will earn him money by working during off hours as an autonomous taxi.

The technology for fully autonomous vehicles is basically here – arriving both sooner and later than some had predicted. But what does that mean for the future?

Below: A Waymo Driver waits patiently at an intersection – while another Waymo Driver glides past. Photos by Scott Simmie 

SENSE, SOLVE, GO

 

Owned by Google parent company Alphabet, Waymo states: “We’re on a mission to be the world’s most trusted driver. Making it safer, more accessible, and more sustainable to get around — without the need for anyone in the driver’s seat.” The service – the app and the car together – is called Waymo One. The system – the hardware and software – is referred to collectively as Waymo Driver.

Commercial rollout began in Phoenix, Arizona in 2020, with testing in San Francisco commencing late the following year. It’s now also operating in Los Angeles, with expansion into Atlanta, Austin and Miami next. It uses a Jaguar I-PACE electric vehicle as its base, heavily outfitted with an array of sensors. We’re talking a lot of sensors.

In total there are 29 cameras, six radar units and five LiDAR units (including a roof-mounted 360° LiDAR). These sensors allow Waymo Driver to fully capture its environment up to three football fields away. Powerful AI, machine vision and machine learning software continuously crunch predictive algorithms, allowing Waymo to understand where a pedestrian, cyclist or other vehicle will most likely go based on its current trajectory. And, of course, if one of those moving subjects suddenly does something unexpected, the system quickly readjusts. Waymo touts safety stats (which we’ll explore later) that it says prove Waymo Driver is far safer than a human driver.
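The prediction step can be illustrated with a deliberately simplified sketch. Waymo’s actual models are learned and proprietary; this constant-velocity extrapolation is only meant to show what “continue based on current trajectory” means in practice.

```python
# Illustrative only: predict where a moving object will be, assuming it
# keeps its current velocity. Real autonomous-driving stacks use far
# richer learned models; this is the simplest possible baseline.

def predict(p_prev, p_now, dt, horizon):
    """Extrapolate future (x, y) positions from two observations
    taken dt seconds apart, for `horizon` future steps."""
    vx = (p_now[0] - p_prev[0]) / dt
    vy = (p_now[1] - p_prev[1]) / dt
    return [(p_now[0] + vx * dt * k, p_now[1] + vy * dt * k)
            for k in range(1, horizon + 1)]

# A pedestrian seen at (0, 0) and, one second later, at (1, 0):
print(predict((0, 0), (1, 0), dt=1.0, horizon=3))
# → [(2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
```

When an observed position diverges from the prediction – the “unexpected” case described above – the planner simply re-runs the prediction from the newest observations.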

Waymo boils down the entire process to three words: sense, solve, go. Waymo Driver senses its environment using the sensors mentioned above (it also has an array of External Audio Receivers – EARs – which alert the system if they detect sirens and the like, can localise the source of those sirens, and will understand if it needs to pull over). The algorithms solve the challenge of safely moving through that environment, including moving objects, and then it’s go time. Those three simple words represent a decade and a half of intense R&D, development of its own sensors, and a huge capital expenditure.

Google first began exploring self-driving vehicles in earnest back in January of 2009. By the time it revealed this publicly, it had already done extensive R&D and testing. But it wasn’t until the fall of 2015 that the first solo member of the public climbed into a Waymo in Austin, Texas and took the vehicle for a ride on city streets. That passenger was Steve Mahan, who is legally blind. It was the first time in 12 years that he’d been alone in a car. It would be another five years before the first rollout to the public.

During those five years, both the car and sensor package – along with the software – evolved considerably. Just compare the videos below; the first shows Steve Mahan on that historic trip in 2015, the second is an updated video explaining the fifth-generation Waymo Driver.

SAFETY FIRST

 

 

Waymo could pitch its offering on a number of grounds: Sustainability, convenience, cool factor. But instead, it focuses its customer-facing marketing on safety. Waymo Driver, it says repeatedly, is far safer than a human driver. It has proven that in many millions of miles on the streets, it says, and billions more in simulation.

The statistics Waymo publishes are based on Incidents Per Million Miles (IPMM) of driving – and compare its own rates of incidents with a benchmark of IPMM involving human drivers. Whether it’s airbag deployments, crashes with a reported injury, or incidents where police are notified, Waymo’s stats are consistently a fraction of those involving people at the wheel. In more than 33 million miles of driving, Waymo touts these as the results:

  • 81 per cent fewer airbag deployment crashes
  • 78 per cent fewer injury-causing crashes
  • 62 per cent fewer police-reported crashes

That’s clearly a significant reduction, and to most people would indicate that Waymo is safer than taking a ride with a stranger (or even a friend) at the wheel. The statistics include accidents where other drivers were at fault, but do not separate them out – so we can’t actually see what percentage involved an error on the Waymo side. Waymo has previously stated that the majority of these incidents were the fault of human drivers, and that there have been only two accidents involving injuries where it expects to pay out insurance liability claims.
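The arithmetic behind figures like “81 per cent fewer” is straightforward. In the sketch below, the incident counts are invented purely for illustration – only the 33-million-mile figure and the percentages come from Waymo’s published stats.

```python
# Incidents Per Million Miles (IPMM), and the percent-reduction figure
# Waymo reports against a human-driver benchmark. The incident counts
# here are hypothetical; they are chosen only to reproduce the "81 per
# cent fewer" shape of the published result.

def ipmm(incidents, miles):
    """Incidents per million miles driven."""
    return incidents / (miles / 1_000_000)

def percent_fewer(system_ipmm, benchmark_ipmm):
    """How much lower the system's rate is than the benchmark, in %."""
    return round(100 * (1 - system_ipmm / benchmark_ipmm))

miles = 33_000_000
benchmark = ipmm(190, miles)   # hypothetical human-driver count
waymo = ipmm(36, miles)        # hypothetical Waymo count
print(percent_fewer(waymo, benchmark))  # → 81
```

Note that because both rates are normalised per million miles, the comparison holds even though humans collectively drive vastly more miles than Waymo’s fleet.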

But even rare incidents can quickly become high-profile. In Phoenix, an empty Waymo that had been summoned by a customer crashed at low speed into a telephone pole in an alley. Had a human been at the wheel, we would never have heard of it. But because it was a Waymo, the incident led newscasts. Why is that? Well, we expect perfection in systems like these. And that seems a reasonable expectation if you’re going to trust your personal safety to a driverless car. It just can’t make mistakes. And that’s why Waymo quickly issued a recall for a software fix.

“This is our second voluntary recall,” Katherine Barna, a Waymo spokesperson, told TechCrunch. “This reflects how seriously we take our responsibility to safely deploy our technology and to transparently communicate with the public.”

In May of 2024, the US National Highway Traffic Safety Administration (NHTSA) informed Waymo it was investigating 22 incidents involving its vehicles (and subsequently added nine more), stretching back to 2021. Many of those incidents were described by Forbes as “surprisingly minor,” and 11 of them were drawn by the NHTSA from social media reports of the vehicles driving in an unusual fashion (such as using the oncoming lane to avoid traffic problems). The most serious was the aforementioned pole collision.

We were unable to find any reports of Waymo incidents involving a serious injury. The one fatality involving a Waymo occurred in January of 2025, when an unoccupied stationary Waymo stopped at a traffic light was one of several vehicles hit by a speeding car. One person and a dog died in that incident, but because a Waymo was tangentially involved it made the headlines. It is the only case we can find involving a fully driverless vehicle where a fatality was involved – and in this case the vehicle was completely passive. (There was a pedestrian fatality in 2018 involving an autonomous Uber vehicle. In that incident, which occurred in Tempe, Arizona, a human safety driver was in the driver’s seat. She was watching television on her phone when the accident occurred and subsequently pleaded guilty to endangerment.)

While Waymo has an excellent track record, there have been incidents. But with each incident where Waymo Driver has somehow made the wrong decision, it’s reasonable to assume it was followed by a software fix. And here’s where a fleet of autonomous vehicles has a definite advantage over people: that fix can be instantly applied to the entire fleet.

Still, there are skeptics who argue that – despite those millions of miles of driverless passenger trips – Waymo does not have enough data upon which to draw sound conclusions.

“We don’t know a lot. We know what Waymo tells us,” Philip Koopman, an expert on autonomous vehicle safety at Carnegie Mellon University, told the Miami Herald. “Basically you are trusting Waymo to do the right thing.”

 

THE FUTURE

 

Autonomy is hard – and it takes time: Google and Alphabet have invested more than 15 years of continuous engineering for Waymo Driver to reach this level of technological maturity.

Now, Waymo is rolling out to more cities. The 672 Jaguars covered by that software recall are just a fraction of the 20,000 I-PACE vehicles Waymo has contracted with Jaguar to purchase. Plus, the company recently announced that its sixth-generation vehicle – a Chinese-made electric minivan – is next up for testing and deployment. From all external appearances, Waymo shows no sign of stopping (except at red lights, of course). In 2024, it carried out four million autonomous rides – four times more than its total over the previous four years. Weekly rides tripled over 2024, reaching 150,000. And the company calculates “Waymo riders helped avoid over 6 million kilograms of CO2 emissions.”

That’s all great. But for any commercial enterprise, even if it’s willing to absorb costs during rollout, the ultimate test will be the bottom line. Will Waymo prove profitable?

We can’t say for certain – and Waymo’s current financials are somewhat invisible to the public, as they’re bundled in with several other Alphabet projects. But some analysts predict Waymo, the clear leader in autonomous rideshare, will ultimately win a significant piece of the market. An analysis on Nasdaq.com predicts Waymo could prove over time to be the jewel in Alphabet’s crown.

“Uber does more than 200 million rides each week,” states the story. “Let’s let that sink in. So if autonomous rides can capture even half that market, that would mean 100 million rides per week…If Waymo can capture about one-third of the $1 trillion autonomous rides market, it could generate annual revenues of around $300 billion.” Enough, suggests the story, to double Alphabet’s stock price.
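The analyst’s arithmetic is easy to reproduce as a quick sanity check – the inputs below are the story’s own figures, not a forecast of ours.

```python
# Reproducing the Nasdaq.com analysis quoted above, using only the
# figures the story itself supplies.

uber_rides_per_week = 200_000_000
autonomous_share = 0.5                 # "even half that market"
print(int(uber_rides_per_week * autonomous_share))  # → 100000000 rides/week

market_size_usd = 1_000_000_000_000    # the "$1 trillion" market
waymo_share = 1 / 3                    # "about one-third"
revenue_billions = round(market_size_usd * waymo_share / 1e9)
print(revenue_billions)  # → 333, i.e. the story's "around $300 billion"
```

One-third of a trillion-dollar market is closer to $333 billion; the story rounds down to “around $300 billion,” which is why the quoted figure looks slightly conservative.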

That’s a big prize. And, clearly, incentive for Alphabet and Waymo to continue on the road to profitability.

 

Below: The LCD display for rear Waymo passengers. Note the option to “pull over” if you unexpectedly need to end your ride early

INDRO’S TAKE

 

Because we’re deeply involved in the autonomous space, we obviously take great interest in Waymo and other deployments of autonomous technologies at scale. Waymo Driver is different from most other applications, though, because it’s transporting human beings. There is very little – if any – room for error.

“We can’t predict the future, but – like algorithms – can make informed predictions with available data,” observes InDro Robotics Founder and CEO Philip Reece. “Waymo appears to be heavily invested in continuously making a good safety record even better – and has the engineering and financial resources to do so. I suspect Waymo, and its competitors, are here to stay.”

For more on Waymo, check out its website. And, if you’re in one of the growing number of cities where it operates, download the app and let Waymo Driver take the wheel.

Wisk promises autonomous Advanced Air Mobility

By Scott Simmie

 

If you’ve been following our posts, you’ll know that InDro Robotics was part of a Canadian trade delegation that visited California last week. Some 40 organisations took part – including private companies, airports, academics, Transport Canada, NAV Canada and the National Research Council Canada. The trip was organised by Canadian Advanced Air Mobility (CAAM), the organisation that speaks with a unified voice on behalf of industry and others with a stake in the coming world of AAM.

California was chosen because it’s home to three of the leading companies in the Advanced Air Mobility space: Joby, Archer and Wisk. It’s also home to the NASA Ames Research Center – which is working closely with industry on multiple technical issues as the world of AAM approaches. Last week, we shared highlights of our visits at Joby and Archer with this post (which we’d encourage you to read for context).

Today’s post? It’s all about Wisk, the final air taxi company the delegation visited. And its vision?

“Creating a future for air travel that elevates people, communities, and aviation.”

Unlike Joby and Archer – which plan to launch with piloted aircraft – Wisk differentiates itself with its “autonomous-first strategy.” That means, once it has attained all the necessary FAA certifications, the first passengers will climb on board an aircraft that flies itself. An autonomous aircraft carrying human beings? That’s a really big deal.

“When we’re successful at certifying this aircraft, that has the potential to change so much more beyond Wisk,” explained Becky Tanner, the company’s Chief Marketing Officer. In fact, she believes it will have an impact on the broader aviation industry, encouraging it to “take a step forward.”

Wisk is currently flying its sixth-generation full-sized aircraft. Its first generation was autonomous, but the following two were piloted.

“We made the conscious choice from Generation 3 to Gen 4 to stick with autonomous aircraft,” says Chief Technical Officer Jim Tighe. He points to the Generation 6 (which they call “Gen6”) on the floor.

“There will never be a pilot in that aircraft,” he says.

Below: Wisk’s Gen6 – the latest iteration of its autonomous air taxi designed to carry four passengers

 

 

Wisk Gen6 Autonomous Air Taxi

THE DESIGN

 

Like Joby and Archer, Wisk’s basic design is a fixed-wing eVTOL that uses tilt-rotors on booms attached below the wing. Two motors are on each of those six booms. The forward motors have tiltable five-blade rotors that allow them to transition for more efficient forward flight. These motors are in use throughout the flight – takeoff, landing, hover, forward flight – and any other manoeuvres. The rear motors are used for the VTOL portions of flight but are turned off once Gen6 has transitioned to forward flight.

Gen6, as you perhaps guessed, is the sixth full-size aircraft that Wisk has designed and built. And, like Generations 1, 4 and 5, it’s fully autonomous. That feature eliminates the possibility of pilot error.

“It’s obviously a differentiator,” says Tighe. “But we really believe that autonomy will enable safety. These are challenging operations. Short distance flights, you’re doing a lot of takeoffs and landings and you’re doing it in congested airspace.”

Building a completely autonomous aircraft is difficult. But it’s especially challenging – and rewarding – when you have to invent required components.

“When we first started, most of these systems did not exist – so we had to build them ourselves,” CTO Tighe told the Canadian delegation. That included motors, highly optimised batteries, flight control systems and much more. The company now holds 300+ patents globally and has carried out more than 1750 test flights with full-scale aircraft.

“It’s really important to design systems that meet our challenges for design, safety, weight and performance requirements,” he said, adding “It’s a lot easier if you can work on it yourself.”

Tighe, who dresses and speaks casually, comes with an impeccable background. After his first few years working with Boeing as an Aerodynamics Engineer, he worked as Chief Aerodynamicist for 14 years at Scaled Composites. That was the Burt Rutan company known for an incredible number of innovative aircraft and world aerospace records.

But Scaled’s jewel in the crown came right in the midst of Tighe’s tenure. The company designed and built SpaceShipOne and mothership White Knight. SpaceShipOne was a crewed, reusable suborbital rocket-powered aircraft that was carried to 50,000′ AGL while affixed beneath White Knight. When it was released, SpaceShipOne ignited its rocket engine, which took the small aircraft to the edge of space (100km). By accomplishing this feat twice within two weeks, Scaled Composites won the $10M Ansari X Prize. The technology, which includes a feathered system where the wing of the spacecraft rotates for optimal atmospheric entry, is core to the Virgin Galactic space tourism program. Tighe left Scaled Composites in 2014, moving directly to Wisk – a job he describes as “really fun if you’re an engineer.”

Below: The Gen6, which is capable of carrying four passengers of all shapes and sizes, including passengers with mobility issues

 

 

Wisk Gen6

AUTONOMY

 

Autonomy isn’t just about the technology (though we’ll get to that). It’s also part of a strategic business model in a market sector that will undoubtedly be competitive. Both Joby and Archer will initially have piloted models, meaning one of the four seats will be taken by the pilot. That not only costs more (to pay for the pilot), but also means losing revenue for one passenger on every single flight.

But will passengers embrace flying without a human at the controls? Wisk believes so, and says it puts great emphasis on safety. And here, it has some help: Wisk became a fully-owned subsidiary of Boeing in 2023 (though it operates separately). Some 150 Boeing employees are directly involved with the Wisk operation. That relationship, says the Wisk website, “allows us to tap into Boeing’s development, testing and certification expertise, and more.”

And on the autonomy front? In addition to its own inventions, Gen6 relies heavily on tried-and-true systems like autopilot. Its self-flying approach includes, according to its website:

  • “Leveraging the same proven technology that accounts for more than 93% of automated pilot functions on today’s commercial flights (autopilots, precision navigation, flight management systems, etc.)
  • “New, innovative technology such as improved detect and avoid capabilities, sensors, and more
  • “Wisk’s logic-driven, procedural-based, decision-making software which provides reliable, deterministic outcomes.”

What’s more, Wisk already has a highly integrated system that allows human flight supervisors to track missions from the ground and monitor aircraft systems. Those flight supervisors will have the ability to intervene remotely, should that ever be required. It’s anticipated that, initially, one supervisor will be responsible for monitoring three missions simultaneously. Wisk offered a simulated demonstration of this system – which already looks pretty mature.

The location the delegation visited was in Mountain View, CA. This Bay Area campus handles engineering, composite assembly, airframe assembly and motors, houses the battery and autonomy labs, and is home to the corporate team. Wisk also has other locations in the US, as well as in Canada (Montréal), Poland, Australia and New Zealand. Its flight tests and R&D are carried out in Hollister, CA. The company currently has about 800 employees (including 50 in Montréal).

 

SUSTAINABLE AND ACCESSIBLE


One of the many impressive things about Wisk was its emphasis on engineering. Engineers have worked hard to reduce the number of moving parts in the aircraft – points of failure – to the point where no single mechanical or software failure could take the aircraft out of the sky. But equally impressive was its commitment to inclusive design.

Beyond ensuring everything is comfortable, ergonomic and safe for passengers – a great deal of work has gone into ensuring any Wisk aircraft will be accessible for people of all shapes and sizes and even with disabilities. Wisk has an ongoing program where civilians with physical or sensory limitations are brought into the lab to try out the latest iteration of the cabin and offer feedback for improvement. For example, there’s Braille in the cabin and on the flight safety cards. And, when it was discovered that a guide dog was fearful of the metal steps for climbing up and into the cabin – they redesigned them to be easier on the paws. The guide dog happily climbed aboard the redesigned steps on a subsequent visit.

In conjunction with making the service affordable, this philosophy is something Wisk emphasised during the visit.

“The big vision of this is to have this accessible for everyone,” said CMO Becky Tanner. “Making sure this feels comfortable and enjoyable and safe for all kinds of people – people with disabilities, people with different heights, shapes and sizes.”

Below: InDro’s Scott Simmie (front right) inside Gen6. InDro’s Dr. Eric Saczuk, who was attending on behalf of BCIT’s RPAS Hub (which he directs) is in the seat behind him. Dr. Saczuk is also InDro’s Chief of Flight Operations

 

Scott and Eric on Wisk Gen6

INDRO’S TAKE

 

Before we get into our view of this world, it’s also worth mentioning that the delegation had the privilege of touring the NASA Ames Research Center. We saw, among other things, a high-end simulator purpose-built for testing eVTOL flight in congested urban airspace – as well as top-level research into developing predictive models for turbulence at the coming vertiports – where these vehicles will takeoff and land.

“The worlds of Advanced Air Mobility and Urban Air Mobility are definitely coming. This is truly going to be an inflection point in aviation, and we foresee many positive use-case scenarios beyond air taxis that these technologies will enable,” says InDro Robotics Founder and CEO Philip Reece.

“It was highly instructive to get a front-row seat with these industry leaders, and we thank CAAM for its foresight in planning and executing this important trip. InDro will have some announcements of its own for the AAM space – both for service provision and more – down the road.”

We look forward to these companies gaining their final FAA Certifications – and seeing these aircraft carry passengers and eventually cargo.

Robosense sets new bar for affordable, powerful LiDAR sensors

By Scott Simmie

 

Building or modifying a robot?

Specifically, are you working on something with autonomy that needs to understand an unfamiliar environment? Then you’re likely looking at adding two key sensors: A depth camera and a LiDAR unit.

LiDAR (as most of you likely know) scans the surrounding environment with a continuous barrage of eye-safe laser beams. It measures what’s known as “Time of Flight” – the time it takes for the photons to be reflected off surrounding surfaces and return to the LiDAR unit. The closer the surface, the shorter the Time of Flight. LiDARs calculate the time of each of those reflected beams and convert that into distance. Scatter enough of those beams in a short period of time (and LiDARs do), and you get an accurate digital representation of the surrounding environment – even while the robot is moving through it.
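The Time-of-Flight arithmetic is simple enough to show directly: a pulse travels out and back, so the distance to the surface is the speed of light times half the round-trip time.

```python
# Time-of-Flight ranging: distance = c * t / 2, because the measured
# time covers the round trip out to the surface and back.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance to the reflecting surface, in metres."""
    return C * round_trip_seconds / 2

# A return after 66.7 nanoseconds puts the surface about 10 m away:
print(round(tof_distance(66.7e-9), 2))  # → 10.0
```

The nanosecond scale of those round trips is why LiDAR units need extremely precise timing electronics: a 1 ns timing error corresponds to roughly 15 cm of range error.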

This is particularly useful for autonomous missions and especially for Simultaneous Localisation and Mapping, or SLAM. That’s where a LiDAR-equipped robot can be placed in a completely unfamiliar (and even GPS-denied) environment and produce a point-cloud map of its surroundings while avoiding obstacles. Quality LiDARs are also capable of producing 3D precision scans for a wide variety of use-cases.

All great, right? Except for one thing: LiDAR sensors tend to be very expensive. So expensive, they can be out of reach for an R&D team, academic institution or Startup.

There is, however, a solution: Robosense.

The company produces LiDAR sensors (both mechanical and solid-state) that rival the established players in the market. And they do so for about one-third of the cost of the industry heavyweights.

“The performance of Robosense is outstanding – absolutely on par with its main competitors in North America,” says InDro Account Executive Callum Cameron. “We have been integrating Robosense LiDAR on our products for about two years, and their performance is exceptional.”

Below: A fleet of four robots, equipped with Robosense LiDAR, which recently shipped to an academic client.

 

Robosense LiDAR

ROBOSENSE

 

The company might not yet be a household name (unless your household has robots), but as of May 2024 the firm had sold 460,000 LiDAR units. Its sensors power a large number of autonomous cars, delivery vehicles and other robots – and it’s the first company to achieve mass production of automotive-grade LiDAR units with its own in-house developed chip.

The company was founded in 2014 with some A-level engineering talent – and it’s been on a stellar trajectory ever since. One reason is that Robosense produces all three core technologies behind its products: the actual chipsets, the LiDAR hardware, and the perception software. We’ll let the company itself tell you more:

“In 2016, RoboSense began developing its R Platform mechanical LiDAR. One year later, in 2017, we introduced our perception software alongside the automotive-grade M Platform LiDAR sensors tailored for advanced driver assistance and autonomous driving systems. We achieved the start-of-production (SOP) of the M1 in 2021, becoming the world’s first LiDAR company to mass-produce automotive-grade LiDAR equipped with chips developed in-house,” says its website.

The company now has thousands of engineers. And it didn’t take long before the world noticed what they were producing.

“As of May 17, 2024, RoboSense has secured 71 vehicle model design wins and enabled 22 OEMs and Tier 1 customers to start mass production of 25 models. We serve over 2,500 customers in the robotics and other non-automotive industries and are the global LiDAR market leader in cumulative sales volume.”

The company has also received prestigious recognition for its products, including two CES Innovation awards, the Automotive News PACE award, and the Audi Innovation Lab Champion prize.

“This company has standout features, including Field of View, point cloud density and high frame rates,” says Cameron. “If you look at that fleet of four robots we recently built, using the competition those LiDAR units alone would have cost close to $80,000. The Robosense solution cost roughly one-quarter of that, with similar capabilities.”

And the factories? State of the art. Though this video focuses on its solid-state LiDAR, Robosense uses the same meticulous process for its mechanical units:

LiDAR FOR EVERY APPLICATION

 

Robosense produces many different LiDAR sensors. But what particularly appeals to us is that the company has (excuse the pun) a laser-like focus on the robotics industry. Its Helios multi-beam LiDAR units have been designed from the ground up for robots and intelligent vehicles. There are customisable fields of view, depending on application, and a near-field blind spot of ≤ 0.2 metres. In addition, Helios LiDAR comes in 16- and 32-beam options, depending on point-cloud density and FOV requirements. Both are capable of functioning in temperatures as low as -40°C or on a scorching day in the Sahara desert. There’s also protection against multi-radar interference and strong light (which can be an issue with LiDAR). You can learn more about its features here.

Its Bpearl unit proves that very good things can indeed come in small packages. With a 360° horizontal and 90° vertical hemispherical FOV, it’s been designed for near-field blind spots, capable of detection at ≤10 cm. That’s why we selected it for a robot designed to inspect cycling lanes for hazards (while avoiding cyclists, of course). We actually have two Bpearls on that robot (one on each side), since detecting blind spots and avoiding other obstacles is so critical to this application.

“We’ve integrated both the Bpearl and Helios LiDAR units into multiple different robots and the performance has been excellent, even under adverse conditions,” says Cameron. “Obstacle avoidance has been outstanding, and SLAM missions are a snap.”

Below: This InDro robot features two 32-beam Robosense Bpearl LiDAR units. You can see one of them – that tiny bubble on the side (and there’s another one on the opposite side):

InDro Sentinel

THE THREE “D”s

 

You’ve likely heard this before, but robots are perfect for jobs that are Dirty, Dull or Dangerous – because they remove humans from those scenarios. Robots, particularly inspection robots, are often subjected to extremes in terms of weather and other conditions.

So this is a good place to mention that if a Robosense LiDAR encounters fog, rain, dust or snow, it has a de-noising function to ensure it’s still capturing accurate data – and that your point cloud isn’t a representation of falling snow. All of the Robosense LiDAR sensors have outstanding Ingress Protection ratings.

Because adverse conditions are quite likely to occur at some point during a robotic mission, Robosense puts its products through absolutely gruelling tests. Hopefully your robot won’t encounter the scenarios seen below, but if it does – the LiDAR will keep working:

INDRO’S TAKE

 

We take pride in putting only the highest quality sensors into our products.

Prior to adopting Robosense as our “go-to” LiDAR about two years ago, we were using big-name products. But those products also came with a big price tag. When we discovered the quality and price of Robosense LiDAR units, making the switch was an obvious choice. We have shipped multiple Robosense-enabled robots to clients, saving them thousands of dollars – in one case, tens of thousands – while still capturing every bit of data they require. Robosense is now our go-to, even on our flagship products. (We recently demonstrated one of our newer Helios-equipped autonomous quadrupeds to a high-profile client; they were amazed by the results.)

“Robosense is every bit the equal of the heavyweight LiDAR manufacturers, without the downside of the high cost,” says InDro Robotics CEO Philip Reece. “The field-of-view, point cloud density and quality of construction are all state-of-the-art, as are the manufacturing facilities. What’s more, Robosense continues to push the envelope with every new product it releases.”

Interested in learning more, including price and options? Contact Account Executive Callum Cameron right here, and he’ll give you all the info you need.

George Mason U. researchers enable robots to intelligently navigate challenging terrain


By Scott Simmie

 

Picture this: You’re out for a drive and in a hurry to reach your destination.

At first, the road is clear and dry. You’ve got great traction and things are going smoothly. But then the road turns to gravel, with twists and turns along the way. You know your vehicle well, and have navigated such terrain before.

And so, instinctively, you slow the vehicle to navigate the more challenging conditions. By doing so, you avoid slipping on the gravel. Your experience with driving, and in detecting the conditions, has saved you from a potential mishap. Yes, you slowed down a bit. But you’ll speed up again when the conditions improve. The same scenario could apply to driving on grass, ice – or even just a hairpin corner on a dry paved road.

For human beings, especially those with years of driving experience, such adjustments are second-nature. We have learned from experience, and we know the limitations of our vehicles. We see and instantly recognize potentially hazardous conditions – and we react.

But what if you’re a robot? Particularly, a robot that wants to reach a destination at the maximum safe speed?

That’s the crux of fascinating research taking place at George Mason University: Building robots that are taught – and can subsequently teach themselves – how to adapt to changing terrain to ensure stable travel at the maximum safe speed.

It’s very cool research, with really positive implications.

Below: You don’t want this happening on a critical mission…

George Mason Xuesu Xiao Hunter SE

“XX”

 

Those are the initials of Dr. Xuesu Xiao, an Assistant Professor at George Mason University. He holds a PhD in Computer Science, and runs a lab that plays off his initials, called the RobotiXX Lab. Here’s a snippet of the description from his website:

“At RobotiXX lab, researchers (XX-Men) and robots (XX-Bots) perform robotics research at the intersection of motion planning and machine learning with a specific focus on robustly deployable field robotics. Our research goal is to develop highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision.”

We spoke with Dr. Xiao about this work.

It turns out he’s particularly interested in making robots that are useful to First Responders, carrying out those dull, dirty and dangerous tasks. Speed in such situations can be critical, but comes with its own set of challenges. A robot that makes too sharp a turn at speed on a high-friction surface can easily roll over – effectively becoming useless in its task. Plus, there are the difficulties previously flagged with other terrains.

This area of “motion planning” fascinates Dr. Xiao. Specifically, how to take robots beyond traditional motion planning and enable them to identify and adapt to changing conditions. And that involves machine vision and machine learning.

“Most motion planners used in existing robots are classical methods,” he says. “What we want to do is embed machine learning techniques to make those classical motion planners more intelligent. That means I want the robots to not only plan their own motion, but also learn from their own past experiences.”

In other words, he and his students have been focussing on pushing robots to develop capabilities that surpass the instructions and algorithms a roboticist might traditionally program.

“So they’re not just executing what has been programmed by their designers, right? I want them to improve on their own, utilising all the different sources of information they can get while working in the field.”

 

THE PLATFORM

 

The RobotiXX Lab has chosen the Hunter SE from AgileX as its core platform for this work. That platform was supplied by InDro Robotics, and modified with the InDro Commander module. That module enables communication over 5G (and 4G) networks, enabling high speed data throughput. It comes complete with multiple USB slots and the Robot Operating System (ROS) library onboard, enabling the easy addition (or removal) of multiple sensors and other modifications. It also has a remote dashboard for controlling missions, plotting waypoints, etc.

Dr. Xiao was interested in this platform for a specific reason.

“The main reason is because it’s high speed, with a top speed of 4.8 m per second. For a one-fifth/one-sixth scale vehicle that is a very, very high speed. And we want to study what will happen when you are executing a turn, for example, while driving very quickly.”

As noted previously, people with driving experience instinctively get it. They know how to react.

“Humans have a pretty good grasp on what terrain means,” he says. “Rocky terrain means things will get bumpy, grass can impede motion, and if you’re driving on a high-friction surface you can’t turn sharply at speed. We understand these phenomena. The problem is, robots don’t.”

So how can we teach robots to be more human in their ability to navigate and adjust to such terrains – and to learn from their mistakes?

As you’ll see in the diagram below, it gets *very* technical. But we’ll do our best to explain.

George Mason Hunter Xuesu Xiao

THE APPROACH

 

The basics here are pretty clear, says Dr. Xiao.

“We want to teach the robots to know the consequences of taking some aggressive maneuvers at different speeds on different terrains. If you drive very quickly while the friction between your tires and the ground is high, taking a very sharp turn will actually cause the vehicle to roll over – and there’s no way the robot by itself will be able to recover from it, right? So the whole idea of the paper is trying to enable robots to understand all these consequences; to make them ‘competence aware.'”

The paper Dr. Xiao is referring to has been submitted for scientific publication. It’s pretty meaty, and is intended for engineers/roboticists. It’s authored by Dr. Xiao and researchers Anuj Pokhrel, Mohammad Nazeri, and Aniket Datar. It’s entitled: CAHSOR: Competence-Aware High-Speed Off-Road Ground Navigation in SE(3).

That SE(3) term describes how objects move and rotate in 3D space. Technically, it stands for the Special Euclidean group in three dimensions: it keeps track of an object’s full pose in 3D space – both position and orientation.
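For readers who want a concrete picture: an SE(3) pose is simply a rotation paired with a translation. A toy sketch (illustrative only – this is not code from the paper):

```python
import math

# An SE(3) element tracks both the orientation and the position of a
# rigid body in 3D space: a 3x3 rotation matrix R plus a translation
# vector t. Applying it to a point rotates first, then translates.
def apply_se3(R, t, p):
    """Rotate point p by R, then translate by t."""
    rotated = [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]
    return [rotated[i] + t[i] for i in range(3)]

# Example pose: a 90-degree yaw (rotation about the vertical axis)...
yaw = math.pi / 2
R = [[math.cos(yaw), -math.sin(yaw), 0.0],
     [math.sin(yaw),  math.cos(yaw), 0.0],
     [0.0,            0.0,           1.0]]
t = [2.0, 0.0, 0.0]  # ...plus a 2-metre move along x.

# A point 1 m ahead of the robot ends up rotated, then shifted:
print(apply_se3(R, t, [1.0, 0.0, 0.0]))  # → [2.0, 1.0, 0.0]
```

Tracking the rotation part is what lets the planner reason about rollovers, not just position on a map.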

We’ll get to more of the paper in a minute, but we asked Dr. Xiao to give us some help understanding what the team did to achieve these results. Was it just coding? Or were there some hardware adjustments as well?

Turns out, there were both. Yes, there was plenty of complex coding. There was also the addition of an RTK GPS unit so that the robot’s position in space could be measured as accurately as possible. Because the team soon discovered that intense vibration over rough surfaces could loosen components, threadlock was used to keep things tightly in place.

But, as you might have guessed, machine vision and machine learning are a big part of this whole process. The robot needs to identify the terrain in order to know how to react.

We asked Dr. Xiao if an external data library was used and imported for the project. The answer? “No.”

“There’s no dataset out there that includes all these different basic catastrophic consequences when you’re doing aggressive maneuvers. So all the data we used to train the robot and to train our machine learning algorithms were all collected by ourselves.”

 

SLIPS, SLIDES, ROLLOVERS

 

As part of the training process, the Hunter SE was driven over all manner of demanding terrain.

“We actually bumped it through very large rocks many times and also slid it all over the place,” he says. “We actually rolled the vehicle over entirely many times. This was all very important for us to collect some data so that it learns to not do that in the future, right?”
 
And while the cameras and machine vision were instrumental in determining what terrain was coming up, the role of the robot’s Inertial Measurement Unit was also key.

“It’s actually multi-modal perception, and vision is just part of it. So we are looking at the terrain using camera images and we are also using our IMU. Those inertial measurement unit readings sense the acceleration and the angular velocities of the robot so that it can better respond,” he says.

“Because ultimately it’s not only about the visual appearance of the terrain, it is also about how you drive on it, how you feel it.”

 

THE RESULTS

 

Well, they’re impressive.

The full details are outlined in this paper, but here’s the headline: Regardless of whether the robot was operating autonomously heading to defined waypoints, or whether a human was controlling it, there was a significant reduction in incidents (slips, slides, rollovers etc.) with only a small reduction in overall speed.

Specifically, “CAHSOR (Competence-Aware High-Speed Off-Road Ground Navigation) can efficiently reduce vehicle instability by 62% while only compromising 8.6% average speed with the help of TRON (visual and inertial Terrain Representation for Off-road Navigation).”

That’s a tremendous reduction in instability – meaning the likelihood that these robots will reach their destination without incident is greatly improved. Think of the implications for a First Responder application, where without this system a critical vehicle rushing to a scene carrying medical supplies – or even simply for situational awareness – might roll over and be rendered useless. The slight reduction in speed is a small price to pay for greatly enhancing the odds of an incident-free mission.
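To put those percentages in plain numbers, here’s a quick back-of-the-envelope calculation (the 4.8 m/s top speed is the Hunter SE figure cited earlier; the baseline incident count is purely hypothetical):

```python
# CAHSOR's reported trade-off: 62% fewer instability events in
# exchange for an 8.6% reduction in average speed.
top_speed = 4.8          # m/s, Hunter SE top speed
speed_penalty = 0.086    # 8.6% average-speed reduction
instability_drop = 0.62  # 62% reduction in instability events

effective_speed = top_speed * (1.0 - speed_penalty)
print(f"Average speed with CAHSOR: {effective_speed:.2f} m/s")

baseline_events = 100.0  # hypothetical incidents per benchmark run
events_with_cahsor = baseline_events * (1.0 - instability_drop)
print(f"Instability events: {baseline_events:.0f} -> {events_with_cahsor:.0f}")
```

In other words, giving up well under half a metre per second buys a robot that slips, slides or rolls far less often.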

“Without using our method, a robot will just blindly go very aggressively over every single terrain – while risking rolling over, bumps and vibrations on rocks, maybe even sliding and rolling off a cliff.”

What’s more, these robots continue to learn with each and every mission. They can also share data with each other, so that the experience of one machine can be shared with many. Dr. Xiao also says the learnings from this project, which began in January of 2023, can also be applied to marine and even aerial robots.

For the moment, though, the emphasis has been fully on the ground. And there can be no question this research has profound and positive implications for First Responders (and others) using robots in mission-critical situations.

Below: The Hunter SE gets put through its paces. (All images courtesy of Dr. Xiao.)

Hunter SE George Mason Xuesu Xiao

INDRO’S TAKE

 

We’re tremendously impressed with the work being carried out by Dr. Xiao and his team at George Mason University. We’re also honoured to have played a small role, supplying the Hunter SE and InDro Commander as well as occasional support as the project progressed.

“The use of robotics by First Responders is growing rapidly,” says InDro Robotics CEO Philip Reece. “Improving their ability to reach destinations safely on mission-critical deployments is extremely important work – and the data results are truly impressive.

“We are hopeful the work of Dr. Xiao and his team is adopted in future beyond research and into real-world applications. There’s clearly a need for this solution.”

If your institution or R&D facility is interested in learning more about InDro’s stable of robots (and there are many), please reach out to us here.

Elroy Air’s Chaparral brings long-range, heavy lift cargo solution


By Scott Simmie

 

Some history has just been made in the world of Advanced Air Mobility (AAM).

On November 12, Elroy Air successfully flew its Chaparral C1 – the first flight of a turbogenerator-hybrid electric vertical take-off and landing (hVTOL) aircraft. The hover test of the full-scale aircraft took place at the company’s test-flight facility in Byron, California.

It’s an important milestone as the world moves toward the AAM era, when new and transformative aircraft will move goods and people to destinations that would have been impractical or too expensive using traditional aircraft.

“This is an exhilarating day for our team and the industry as a whole,” says Elroy Air co-founder and CEO Dave Merrill.

There are plenty of companies competing for this new space with innovative autonomous designs – some built to carry people, some cargo, and some both. There are several excellent aircraft out there, but Elroy Air’s Chaparral C1 has been on our radar for reasons you’re about to discover.

Before we get into the history, though, let’s get straight to the news. Here’s a video of the test flight:

AND DOWN ON THE GROUND

 

Check out the Chaparral C1 on the ground. Take a good look, as we’ll be discussing these features.

Elroy Air Chaparral AAM

THE CHAPARRAL

 

Let’s get into why this aircraft will fill a niche.

It’s been designed to move large payloads long distances – and do so efficiently. Humanitarian aid, military resupply and middle-mile logistics are all perfect use-cases for the Chaparral. Its sole purpose is to move significant amounts of cargo efficiently – and be ready for the return trip in minutes.

Here’s the one-floor elevator pitch:

“We’re building an aircraft that will be able to fly 300 miles (483 km) and carry 300 pounds (136 kg) of cargo,” explains Jason Chow, the company’s Director of Strategy and Business Development.

“It’s VTOL, so we don’t need runways. It’s also hybrid electric, so in many situations where there are remote areas, we’re still able to fly where electric power is unavailable.”

Hybrid electric makes sense when you’re after this kind of range, since the craft benefits from the energy density of jet fuel.

“A turboshaft engine powers the batteries, and the batteries power flight,” says Chow.

“One of the most intensive parts of flight is the takeoff portion, where you’re vertically flying upwards. And once you get into forward flight, the turbine is able to throttle back to meet the reduced demand while maintaining battery charge.”

As you can see from the photo, there are eight motors for vertical lift and four for forward propulsion. Once the craft transitions into forward flight, its fixed-wing design brings greater efficiency and range than would be possible with a traditional multi-rotor (which generally has no lifting surfaces aside from the rotors themselves).

But while all this looks great, Chaparral’s real secret sauce is its cargo capabilities – which have been designed, literally, from the ground up.

Take a look again at the photo above. Note the design of the wheel struts, as well as the ample space between the bottom of the fuselage and the ground. That’s all for a very specific reason: Chaparral has been designed to carry an aerodynamic, quickly-swappable cargo pod.

Have a look:

 

Elroy Air Chaparral AAM

THE POD

 

Chow says the system is comparable to a tractor-trailer. On a road, the tractor provides the power to move the goods. In the air, “the trailer is the equivalent of the cargo pod. We imagine customers will have multiple cargo pods.”

Those pods can be quickly interchanged on the ground – because the Chaparral’s autonomy abilities aren’t limited to flight. The aircraft can taxi to a predetermined location, lower and disengage a cargo pod, then reposition itself and pick up the next one. You can imagine the advantage of such a system when transporting food or critical medical supplies in an emergency situation. This isn’t simply an aircraft: It’s a delivery system.

It’s also worth noting that the pod has been designed to be compatible with existing infrastructure and tools such as forklifts. As the Elroy Air website explains:

“The Palletized Pod uses a fairing-on-pallet design to ease loading of heavy cargo. This configuration features a standardized L-Track system for securing shipments, ensuring simple loading and safe travel for hefty items.”

Below: The Chaparral C1 with the pod snugged up and ready for business…

Elroy Air Chaparral AAM

BUSINESS MODEL

 

So, will Elroy Air be a service provider, overseeing autonomous flights for clients? Or will it be producing the Chaparral to be sold to clients who will operate it themselves?

“The current thinking is that we would do both,” explains Chow. “There are a lot of our partners that are very good at operating aircraft: FedEx, Bristow, the United States Air Force. The main thing they do is operate aircraft really well. So in those situations we would sell to them only as the OEM (Original Equipment Manufacturer).”

“But we also have customers who are interested in what we can provide. So in those situations we could provide the service ourselves or rely on very experienced operators.”

Elroy Air Chaparral Test Flight

MANUFACTURING

 

Producing an aircraft of this scale – it has a wingspan of 26.3 feet (8.01 metres) and a length of 19.3 feet (5.88 metres) – is no small task. Elroy Air decided early on that the most efficient approach would be to act as a highly selective and meticulous integrator. Its composite fuselage, for example, is outsourced.

“There are folks in the general Advanced Air Mobility industry that are building everything in-house. That’s great, you can own the IP (Intellectual Property) for everything,” says Chow.

“That being said, it takes longer. So our approach has been to be an integrator. We source the best parts to help us get to market – including the generator.”

Elroy Air Chaparral Test Flight

TRAJECTORY

 

There are a lot of startups in this space, including plenty of newcomers. Elroy Air was formed back in 2016 in San Francisco by Dave Merrill (now CEO) and Clint Cope (Chief Product Officer).

By 2018 the company had flight-tested sub-scale Chaparral aircraft and user-tested its automated cargo‑handling systems. The following year it established a relationship (and contract) with the United States Air Force “enabling Elroy Air to understand and inform the USAF’s operational needs for distributed aerial logistics in contested environments. We developed our custom simulation environment for Chaparral aircraft and ran a successful flight test campaign on an early 1200‑pound, full-scale Chaparral prototype outfitted with an all-electric powertrain.”

The milestones have kept coming. The year 2020 brought refinements to its simulation system, allowing the team to carry out thousands of virtual flights and ground/cargo mission experiments. Development began in earnest that same year on the hybrid-electric powertrain, including multiple turboshaft engine runs.

A Series A financing in 2021 brought in partners Lockheed Martin, Prosperity7 and Marlinspike, who came to the table with $40M. In 2022 an additional $36M in capital arrived, and the company unveiled its C1-1 Chaparral to the public. (The aircraft also made it to the cover of Aviation Week.)

It’s been a careful, methodical journey that has brought the company this far – and it clearly has ambitious plans for the future. If you’d like to read about these milestones in greater detail, you’ll find a company timeline here.

But the biggest milestone so far? The flight that opened this story.

“This marks a major moment for the industry as hybrid-electric aircraft enable the dual benefits of runway-independent safe redundant propulsion, and long-range flight well in excess of battery power alone,” says co-founder and CEO Dave Merrill. 

“Our accomplishment puts Elroy Air one step closer to delivering a transformative logistics capability to our customers and partners.”

Elroy Air Chaparral Test Flight

INDRO’S TAKE

 

We at InDro obviously have a stake in the future of Advanced Air Mobility. We know from our own work in this field of the pent-up demand for efficient VTOL aircraft that can safely shuttle critical cargo – whether across major cities or to isolated communities lacking runways.

We’ve also been watching, with interest, the companies that are vying for space in this coming market.

“From everything we’ve seen, Chaparral is going to be a perfect fit,” says InDro Robotics President Philip Reece. “Its cargo capacity and range will really fill a void, and the pod system – complete with its autonomous coupling and decoupling feature – will be hugely advantageous. We congratulate Elroy Air on this milestone, and look forward to seeing a transition flight before long.”

As with all new aircraft, it will take time before certification takes place and the FAA gives Elroy Air its full blessings. We’re confident that not only will that day come – but that Elroy Air and Chaparral will play a significant role in the era of Advanced Air Mobility.

All images supplied with permission by Elroy Air