Research using InDro robots for real-world autonomy

By Scott Simmie

 

As you’re likely aware by now, InDro builds custom robots for a wide variety of clients. Many of those clients are themselves researchers, creating algorithms that push the envelope in multiple sectors.

Recently, we highlighted amazing work being carried out at the University of Alberta, where our robots are being developed as Smart Walkers – intended to assist people with partial paralysis. (It’s a really fascinating story you can find right here.)

Today, we swing the spotlight down to North Carolina State University. That’s where we find Donggun Lee, Assistant Professor in the Department of Mechanical and Aerospace Engineering. Donggun holds a PhD in Mechanical Engineering from UC Berkeley (2022), as well as a Master of Science in the same discipline from the Korea Advanced Institute of Science and Technology. He oversees a small team of dedicated researchers at NCSU’s Intelligent Control Lab.

“We are working on safe autonomy in various vehicle systems and in uncertain conditions,” he explains.

That work could one day lead to safer and more efficient robot deliveries and enhance the use of autonomous vehicles in agriculture.

Below: Four modified AgileX Scout Mini platforms, outfitted with LiDAR, depth cameras and Commander Navigate, are being used for research at NCSU. The chart below shows features of the Commander Navigate package


“UNCERTAIN” CONDITIONS

 

When you head out for a drive, it’s usually pretty predictable – but never certain. Maybe an oncoming vehicle will unexpectedly turn in front of you, or someone you’re following will spill a coffee on their lap and slam on their brakes. Perhaps the weather will change and you’ll face slippery conditions. As human beings, we’ve learned to respond as quickly as we can to uncertain scenarios or conditions. And, thankfully, we’re usually pretty good at it.

But what about robots? Delivery robots, for example, are already being rolled out at multiple locations in North America (and are quite widespread in China). How will they adapt to other robots on the road, or human-driven vehicles and even pedestrians? How will they adapt to slippery patches or ice or other unanticipated changes in terrain? The big picture goes far beyond obstacle avoidance – particularly if you’re also interested in efficiency. How do you ensure safe autonomy without being so careful that you slow things down?

These are the kinds of questions that intrigue Donggun Lee. And, for several years now, he has been searching for answers through research. To give you an idea of how his brain ticks, here’s the abstract from one of his co-authored IEEE papers:

Autonomous vehicles (AVs) must share the driving space with other drivers and often employ conservative motion planning strategies to ensure safety. These conservative strategies can negatively impact AV’s performance and significantly slow traffic throughput. Therefore, to avoid conservatism, we design an interaction-aware motion planner for the ego vehicle (AV) that interacts with surrounding vehicles to perform complex maneuvers in a locally optimal manner. Our planner uses a neural network-based interactive trajectory predictor and analytically integrates it with model predictive control (MPC). We solve the MPC optimization using the alternating direction method of multipliers (ADMM) and prove the algorithm’s convergence.
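
For the curious, the receding-horizon structure behind that abstract can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the paper’s method: the made-up predict_neighbor function is a constant-velocity stand-in for the authors’ neural trajectory predictor, and SciPy’s general-purpose optimiser stands in for their ADMM solver.

    import numpy as np
    from scipy.optimize import minimize

    DT, HORIZON = 0.2, 10  # timestep in seconds, planning horizon in steps

    def predict_neighbor(state):
        # Stand-in predictor: constant-velocity rollout of a neighbour (x, y, vx, vy)
        x, y, vx, vy = state
        return np.array([[x + vx * DT * k, y + vy * DT * k] for k in range(1, HORIZON + 1)])

    def plan_ego(ego_pos, goal, neighbor_traj):
        # One finite-horizon solve: progress toward the goal while staying clear of the neighbour
        def cost(u_flat):
            u = u_flat.reshape(HORIZON, 2)  # per-step velocity commands
            pos, total = np.asarray(ego_pos, dtype=float), 0.0
            for k in range(HORIZON):
                pos = pos + u[k] * DT
                total += np.sum((pos - goal) ** 2) + 0.1 * np.sum(u[k] ** 2)
                gap = np.linalg.norm(pos - neighbor_traj[k])
                total += 100.0 * max(0.0, 2.0 - gap) ** 2  # soft penalty inside a 2 m buffer
            return total
        u_opt = minimize(cost, np.zeros(2 * HORIZON)).x.reshape(HORIZON, 2)
        return u_opt[0]  # MPC applies only the first command, then re-plans

    ego, goal = np.array([0.0, 0.0]), np.array([20.0, 0.0])
    first_command = plan_ego(ego, goal, predict_neighbor([10.0, 1.0, -1.0, 0.0]))

The interesting engineering – the learned predictor, the convergence proof, real vehicle dynamics – lives in the parts this sketch deliberately simplifies.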

That gives you an idea of what turns Donggun’s crank. But with the addition of four InDro robots to his lab, he says the research could explore many potential directions.

“Any vehicle applications are okay in our group,” he explains. “We just try to develop general control and AI machine learning framework that works well in real vehicle scenarios.”

One of many applications that intrigues Donggun is agriculture. He’s interested in algorithms that could be used on a real farm, so that an autonomous tractor could safely follow an autonomous combine. His team has already done related work, programming an open-source Crazyflie drone to autonomously follow an InDro robot. Despite the fact it’s a drone, Donggun says the algorithm could be useful for that agricultural work.

“You can easily replace a drone with a ground vehicle,” he explains.
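
The follow-the-leader behaviour itself is easy to picture. Here’s a minimal proportional-pursuit sketch in Python – our own illustration, not the NCSU lab’s algorithm – in which the follower (drone or ground vehicle) regulates toward a standoff point behind the leader:

    import numpy as np

    K_P, FOLLOW_DIST, DT = 1.2, 1.5, 0.1  # proportional gain, standoff distance [m], timestep [s]

    def follower_step(follower_pos, leader_pos, leader_heading):
        # Chase a standoff point a fixed distance behind the leader
        behind = -FOLLOW_DIST * np.array([np.cos(leader_heading), np.sin(leader_heading)])
        target = leader_pos + behind
        velocity_cmd = K_P * (target - follower_pos)  # proportional control toward the target
        return follower_pos + velocity_cmd * DT       # integrate follower motion one step

    # Example: the follower converges to a point 1.5 m behind a leader moving along +x
    follower = np.array([0.0, 2.0])
    for t in range(200):
        leader = np.array([0.05 * t, 0.0])
        follower = follower_step(follower, leader, leader_heading=0.0)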

And that’s not all.

“We are also currently tackling food delivery robot applications. There are a lot of uncertainties there: Humans walking around the robot, other nearby robots…How many humans will these robots interact with – and what kind of human behaviours will occur? These kinds of things are really unknown; there are no prior data.”

And so Donggun hopes to collect some.

“We want to develop some sort of AI system that will utilise the sensor information from the InDro robots in real-time. We eventually hope to be able to predict human behaviours and make decisions in real-time.”

Plus, some of Donggun’s previous work can be applied to future research. The paper cited above is a good example. In addition to the planned work on human-robot interaction, it could also be applied to maximise efficiency.

“There is a trade-off between safety guarantees and high performance. You want to get to a destination as quickly as possible while still avoiding collisions.”

He explains that the pendulum tends to swing to the cautious side, with algorithms that account for virtually all scenarios – including occurrences that are highly unlikely. By excluding some of those exceedingly rare ‘what-ifs’, he says, speed and efficiency can be maximised without compromising safety.
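
One hedged way to picture that idea in code: a scenario-based safety check that plans against likely outcomes while explicitly excluding those below a small probability budget. This sketch is purely illustrative – the names and numbers are ours, not Donggun’s.

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        probability: float  # how likely this outcome is
        min_gap_m: float    # closest approach to an obstacle if it happens

    def choose_speed(scenarios, risk_budget=1e-4, gap_required_m=1.0,
                     cautious_speed=1.0, nominal_speed=3.0):
        # Plan against likely scenarios only; exceedingly rare ones are excluded
        considered = [s for s in scenarios if s.probability >= risk_budget]
        worst_gap = min(s.min_gap_m for s in considered)
        return nominal_speed if worst_gap >= gap_required_m else cautious_speed

    scenarios = [Scenario(0.90, 5.0),    # pedestrian stays on the sidewalk
                 Scenario(0.10, 2.0),    # pedestrian drifts toward the road
                 Scenario(1e-6, 0.1)]    # pedestrian sprints into the lane (excluded)
    speed = choose_speed(scenarios)      # -> 3.0: the one-in-a-million case no longer throttles the robot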

Below: Image from Donggun’s autonomy research showing the InDro robot being followed by an open-source Crazyflie drone


INDRO’S TAKE

 

We, obviously, like to sell robots. In fact, our business depends on it.

And while we put all of our clients on a level playing field, we have a special place in our non-robotic hearts for academic institutions doing important R&D. This is the space where breakthroughs are made.

“I really do love working with people in the research space,” says Head of R&D Sales Luke Corbeth. “We really make a concerted effort to maximise their budgets and, when possible, try to value-add with some extras. And, as with all clients, InDro backs what we sell with post-sale technical support and troubleshooting.”

The robots we delivered to NCSU were purchased under a four-year budget, and delivered last summer. Though the team is already carrying out impressive work, we know there’s much more to come and will certainly check in a year or so down the road.

In the meantime, if you’re looking for a robot or drone – whether in the R&D or Enterprise sectors – feel free to get in touch with Luke here. He takes pride in finding clients solutions that work.

Industry 4.0 and InDro – the evolution continues

By Scott Simmie

 

Many of you will remember the days before smartphones. Same goes for automated tellers, online banking, self check-outs, personal computers, 3D printers – even the internet itself. Technology hasn’t merely marched along; it’s been sprinting at an ever-accelerating pace. What’s more, it’s been doing so pretty much everywhere. From the smart devices that now populate our pockets and homes and vehicles through to autonomous mobile robots in factories, hospitals, warehouses and airports – we are in the midst of an inflection point.

If you’re in the technology industry, this era is known as Industry 4.0. And there’s no question that it is – and will continue to be – utterly transformative.

Let’s take a brief look at how we got here…and where it’s going.

Below: An InDro Robotics Sentinel inspection robot. It carries out complex autonomous inspections before returning to its base to wirelessly recharge


THE PATH TO 4.0

 

Industry 4.0 is also known by some as 4IR, meaning the Fourth Industrial Revolution. So it’s worth briefly reviewing the other three.

The initial Industrial Revolution began in the UK in the mid-1700s. The development of steam power, water power and mechanisation paved the way for producing certain commodities at scale. They may seem primitive now, but these were huge innovations at the time. The resulting efficiencies helped vault the UK to a leading economic position, and the technology began rapidly spreading elsewhere in the world.

That was followed by three other industrial epochs:

  • The late 1800s, when mass production lines using electrical power marked the outset of the Second Industrial Revolution
  • The late 1960s saw the introduction of computers and other early IT systems, as well as significant advances in automation including simple robotic devices
  • The mid-2010s ushered in Industry 4.0, often described as the integration of cyber and physical systems (more on this in a moment)

To help visualise this, we’ve tapped Wikimedia Commons for this graphic from Christoph Roser at AllAboutLean.com


THE FOURTH WAVE

 

As we saw, what’s thought of as the Third Industrial Revolution brought computers and early robotics/manufacturing advances onto the scene. Industry 4.0 can be thought of as the logical extension of the third – but with massive technological and data integration advances. As this Forbes article puts it, “The fourth industrial revolution will take what was started in the third with the adoption of computers and automation and enhance it with smart and autonomous systems fueled by data and machine learning…As a result of the support of smart machines that keep getting smarter as they get access to more data, our factories will become more efficient and productive and less wasteful.”

We asked an AI engine for its take, and it came back with a very concise definition: “Industry 4.0 is a term that describes the ongoing technological revolution that is transforming how companies operate, design, produce, and deliver goods and services.”

It also offered, helpfully, the key enabling technologies including: 

  • Artificial Intelligence 
  • The Internet of Things 
  • Big Data Analytics 
  • Augmented Reality 
  • Precision Scanning and digital twins
  • Robotics
  • Advanced manufacturing techniques, including 3D printing

COVID-19, with its extensive isolation and social distancing, played a significant role in companies embracing Industry 4.0. A basic example many can relate to was the growth of UberEats and other food delivery services. The coding and technology – the integration of the cyber and physical worlds – utterly transformed much of the restaurant industry.

It would be hard to think of a sector that has not been touched by 4IR: Manufacturing, mining, agriculture, pharmaceuticals, aerospace – you name it.

 

INDRO 4.0

 

Industry 4.0 is a massive topic – with implications not only for companies seeking a competitive edge but also for workers. Many companies, according to this excellent McKinsey and Company overview (complete with compelling data and examples of ‘Lighthouses’ – companies at the pinnacle of 4.0), are re-skilling employees hand-in-hand with adopting new 4IR technologies. Europe, so far, has taken the lead over North America.

As for InDro? The company was officially formed in 2014 – the year generally accepted as the start of Industry 4.0. And from the beginning, this has been the realm where our R&D has taken place. As a leader in the autonomous robotics space, many of our own inventions and custom builds operate in the Industry 4.0 space. We’re particularly proud of our Sentinel inspection robot (several of which are now working autonomously for a major US energy client), and also of Captis – the leading solution in inventory cycle counting and precision scanning for large warehouses and other supply chain assets. InDro Robotics was the technology incubator for Captis, which is produced by Cypher Robotics. It’s already on the job in Canada, and will soon be deployed in New Zealand.

Below: The Captis cycle-counting and precision scanning system


INDRO’S TAKE

 

Industry 4.0 isn’t just a buzzword. It is a full-fledged transformation leveraging multiple complex technologies working in synergy for greater efficiency. Most of our clients have fully embraced 4IR or are in the midst of that transformation. And we, as always, continue to develop new robots, drones and other products for this new and exciting era.

“Industry 4.0 certainly draws on the framework laid by 3.0, but the technological advances of the past decade have been truly transformative,” says InDro Robotics Founder and CEO Philip Reece. “We are definitely in the midst of a new and exciting era, and InDro will continue to develop intelligent and innovative products for Industry 4.0. And yes, when 5IR eventually comes along…we’ll be ready.”

Want to learn more about how an InDro solution can help your company in the 4IR era? Interested in learning how a private 5G network can offer smart factories a competitive and security edge? Head of R&D Sales Luke Corbeth is always up for a thoughtful conversation.

I, Robot: The Humanoids are here

By Scott Simmie

 

You might own a robot without even realising it.

Have a Roomba? That’s a robot. And a drone? That’s a flying robot. Even a Tesla, in Full Self-Driving mode, is a robot.

There are a lot of definitions out there – but one we particularly like comes from Maja Matarić, a computer scientist, roboticist and AI researcher at the University of Southern California. In her book, The Robotics Primer, she concisely defines a robot as “an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals.”

Whether that goal is to vacuum your floor, capture aerial data, or weld a part in a factory, we feel this is a really clear definition. It also doesn’t delineate between platforms: A robot that fits this bill could be stationary, wheeled, a quadruped or even a humanoid.

And it’s that last platform – humanoid – that’s been getting a lot of buzz recently. Numerous companies are now manufacturing robots that resemble human beings in their form factor. And, as it turns out, for very good reasons.

Below: Ameca, a robot built by the UK’s Engineered Arts, is known for its eerie ability to mimic human expressions


WHY HUMANOID?

 

The idea of a humanoid robot has been around for longer than you might think. Leonardo Da Vinci designed – and possibly built – an automaton in the late 15th Century. It’s now known as Leonardo’s Robot, or Leonardo’s Mechanical Knight. According to Wikipedia, “The robot’s design largely consists of a series of pulleys that allow it to mimic human motions. Operational versions of the robot have been reconstructed by multiple researchers after the discovery of Leonardo’s sketches in the 1950s.”

It appears the purpose of this design was entertainment (which also fits the definition of a goal), but it fell short when it came to sensing its environment and acting autonomously. Still, it’s fascinating to know the Italian inventor turned his attention to designing a mechanical device in human form that far back.

It would take another half millennium before the first true humanoid robot was built. In the early 1970s, the WABOT-1 was unveiled in Japan. It was anthropomorphic, with two arms and two legs. It also contained a vision system and audio sensors, and could speak in Japanese. According to this overview, “It was estimated that the WABOT-1 has the mental faculty of a one-and-half-year-old child.”

Below: A modern reproduction, based on Leonardo Da Vinci’s sketches, of his “Mechanical Knight” complete with inner mechanisms. It’s followed by an image of Wabot-1 from 1973


THE HUMAN ADVANTAGE

 

Why create a humanoid in the first place?

Well, there are certain advantages to a human form factor, particularly when it comes to carrying out repetitive tasks in the real world. And the reason? The world around us has been built for humans. If there’s an existing task carried out by people, say pick-and-place, the infrastructure for that task has been created with humans in mind. That means conveyor belts, shelving, cupboards etc. are all designed for the average human. If you build a robot in a human-like form and roughly to scale, that’s a big advantage.

“You don’t need to change the surrounding infrastructure to accommodate the robot,” explains Head of R&D Sales Luke Corbeth.

“The end result obviously is faster deployment. This applies to factories, homes, hospitals, pretty much any use-case. None of these locations need to be robot native to effectively leverage a humanoid robot because they’ve been built for people.”

In fact, humanoid robots have already been deployed on some factory floors. They’re ideally suited to repetitive tasks such as picking up an item and moving it from one location to another – and they contain tactile feedback sensors in their manipulators to calculate appropriate grip strength. They could also be deployed, says Corbeth, in environments built for humans but which pose hazards – inspections or maintenance inside a radioactive area of a nuclear facility, for example.
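
That grip-strength loop can be pictured as a simple feedback rule: tighten while the tactile sensors report slip, ease off slowly otherwise. The Python sketch below is hypothetical – real humanoid hands expose vendor-specific APIs and far richer control – but it captures the principle.

    def adjust_grip(force_n, slip_detected, step_n=0.5, min_n=1.0, max_n=40.0):
        # One iteration of a slip-triggered grip controller (illustrative values, in newtons)
        force_n += step_n if slip_detected else -0.1 * step_n  # tighten on slip, ease off otherwise
        return min(max(force_n, min_n), max_n)                 # stay within a safe force band

    grip = 5.0
    for slip in [True, True, False, False]:  # tactile readings over four control ticks
        grip = adjust_grip(grip, slip)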

“There are a lot of dexterous tasks people are doing today that are very challenging to automate because they require high levels of precision,” he says. “These are perfect tasks for humanoids.”

Looking down the road, many foresee an era when humanoids are affordable enough – and capable enough – for deployment in homes. There, they could carry out some of the more mundane household tasks like cleaning or clothes washing, perhaps even elder care and companionship.

A growing number of companies are now in the humanoid space, including Tesla (Optimus), Agility (Digit), Boston Dynamics (Atlas), and Figure (Figure 02). InDro Robotics is a distributor for Unitree, and carries the G1 humanoid and H1 and H1-2 research and development models. (We can also modify these robots for specific use-cases.)

The base version of the G1 sells for $21,600 US – which is surprisingly reasonable for a humanoid form factor. Corbeth says the current offerings are a result of a “perfect storm” across multiple advances in AI compute, battery, sensor and manufacturing technologies. The more advanced H1 sells for $99,600 US and is better suited for complex R&D.

 

WHAT’S NEXT

 

Humanoids are already in the real world. With further and inevitable advances in AI, Machine Vision and Machine Learning (as well as sensors, manipulators, etc.) it’s safe to assume that humanoids will only get smarter and better at smoothly carrying out fully autonomous tasks.

“I think that it will be probably, realistically, three to five years before you see walking humanoid robots around people all the time,” Dr. David Hanson, Founder of Hanson Robotics, recently told the South China Morning Post.

“I think we are entering the age of living intelligent machines. It’s coming. Machine consciousness, self-determining machines…it’s on its way. And if we see that happen, then we want to make sure that we make the AI good, compassionate, able to connect and want the best for humans.” Yes, indeed.

And a final note. At some point, these humanoids will be good enough to manufacture themselves. That’s historically been the stuff of science fiction. However, a recent TechCrunch story pointed out a new partnership between humanoid developer Apptronik and manufacturer Jabil.

“This means that should everything go according to plan, the humanoid robot will eventually be put to work building itself,” says the article.

Below: A CNET video outlines developments expected in this field in 2025

INDRO’S TAKE

 

Because we sell and modify humanoids in addition to designing and building our own robots (and robots for clients), we’re obviously interested in this space. While we don’t have plans to develop our own humanoid (yet), we are currently working with the Unitree G1 and H1 models to evaluate and enhance their capabilities. And yes, we’ve already sold these to customers.

“Humanoids are a logical progression in robotics,” says InDro Robotics Founder and CEO Philip Reece. “While they’re not the solution for every use-case, they have a clear role in carrying out repetitive or even dangerous tasks that are currently carried out by humans. I suspect, in the not-so-distant future, humanoids will be working alongside people in an ever-increasing number of settings.”

Interested in learning more? Contact us here.

Sense, solve, go: Does Waymo herald the future of autonomous vehicles?

By Scott Simmie

 

During a recent trip to California, I had the opportunity to ride in a Waymo.

I’d certainly read about Alphabet’s autonomous car-for-hire service, and I work for a company that builds robots and autonomy software. So it seemed natural, while in San Francisco, to download the app (similar to Uber) and hail an autonomous vehicle.

Within a couple of minutes, a Waymo vehicle arrived at the pickup point – just a short walk from where it had been summoned. It pulled up, LiDARs spinning, waiting for me to climb in. I put a hand on the door; it was locked. A quick glance at the app and I saw an “unlock” feature. Then I was inside.

And then, with some ambient music playing in the background (you have the option to turn it off or select something else), we – meaning the car and I – were off. A display showed a digital representation of what the vehicle was seeing in its surroundings, including parked vehicles and pedestrians. The electric Jaguar quietly accelerated to the speed limit, obeyed all traffic rules, and smoothly adjusted for unexpected occurrences. When the driver of a parked vehicle opened the door to exit, the Waymo fluidly arced a safe distance away. With the steering wheel making smooth turns and constant fine adjustments, it was like being in a vehicle with an invisible, silent driver at the helm.

I had full faith in the technology and actually preferred it to a standard rideshare. Waymo’s safety data (which we’ll explore later) had reassured me the drive was going to be statistically safer than riding with a human driver. Plus, there was no need to engage in small talk. When the ride was done, I simply exited without being prompted for a tip.

I’d been aware of Waymo since it first deployed. I also have a friend with a Tesla who has the Full Self-Driving package. He commutes twice a week from well outside Toronto into the GTA without any inputs beyond setting his destination. He foresees a day, long promised by Elon Musk, when his own vehicle will earn him money by working during off hours as an autonomous taxi.

The technology for fully autonomous vehicles is basically here – arriving both sooner and later than some had predicted. But what does that mean for the future?

Below: A Waymo Driver waits patiently at an intersection – while another Waymo Driver glides past. Photos by Scott Simmie 

SENSE, SOLVE, GO

 

Owned by Google parent company Alphabet, Waymo states: “We’re on a mission to be the world’s most trusted driver. Making it safer, more accessible, and more sustainable to get around — without the need for anyone in the driver’s seat.” It calls the complete service – the app and the car together – Waymo One. The system – the hardware and software – is referred to collectively as Waymo Driver.

Commercial rollout began in Phoenix, Arizona in 2020, with testing in San Francisco commencing late the following year. It’s now also operating in Los Angeles, with expansion into Atlanta, Austin and Miami next. It uses a Jaguar I-PACE electric vehicle as its base, heavily outfitted with an array of sensors. We’re talking a lot of sensors.

In total there are 29 cameras, six radar units and five LiDAR units (including a roof-mounted 360° LiDAR). These sensors allow Waymo Driver to fully capture its environment up to three football fields away. Powerful AI, machine vision and machine learning software continuously run predictive algorithms, allowing Waymo to anticipate where a pedestrian, cyclist or other vehicle is most likely headed based on its current trajectory. And, of course, if one of those moving subjects suddenly does something unpredicted, the system quickly readjusts. Waymo touts safety stats it says prove Waymo Driver is far safer than a human driver.
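
As an aside, the simplest version of that kind of prediction – extrapolating a tracked object’s path from its recent motion – fits in a few lines of Python. Waymo’s actual predictor is learned and vastly richer; this constant-velocity sketch only illustrates the idea:

    import numpy as np

    def predict_path(track_xy, dt, horizon_s=3.0):
        # Extrapolate future positions from the last two observed positions
        velocity = (track_xy[-1] - track_xy[-2]) / dt  # most recent velocity estimate
        steps = int(horizon_s / dt)
        return np.array([track_xy[-1] + velocity * dt * k for k in range(1, steps + 1)])

    observed = np.array([[0.0, 0.0], [0.5, 0.1]])  # pedestrian positions sampled 0.1 s apart
    future = predict_path(observed, dt=0.1)        # ~3 s of anticipated positions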

Waymo boils down the entire process to three words: Sense, solve, go. Waymo Driver senses its environment using the sensors mentioned above (it also has an array of External Audio Receivers – EARs – which alert the system if they detect sirens and the like; it can even localise the source of those sirens and will understand if it needs to pull over). The algorithms solve the challenge of safely moving through that environment, including its moving objects, and then it’s go time. Those three simple words represent a decade and a half of intense R&D, the development of its own sensors, and a huge capital expenditure.

Google first began exploring self-driving vehicles in earnest back in January of 2009. By the time it revealed this publicly, it had already done extensive R&D and testing. But it wasn’t until the fall of 2015 that the first solo member of the public climbed into a Waymo in Austin, Texas and took the vehicle for a ride on city streets. That passenger was Steve Mahan, who is legally blind. It was the first time in 12 years that he’d been alone in a car. It would be another five years before the first rollout to the public.

During those five years, both the car and the sensor package – along with the software – evolved considerably. Just compare the videos below: the first shows Steve Mahan on that historic trip in 2015, the second is an updated video explaining the fifth-generation Waymo Driver.

SAFETY FIRST

 

 

Waymo could pitch its offering on a number of grounds: Sustainability, convenience, cool factor. But instead, it focuses its customer-facing marketing on safety. Waymo Driver, it says repeatedly, is far safer than a human driver. It has proven that in many millions of miles on the streets, it says, and billions more in simulation.

The statistics Waymo publishes are based on Incidents Per Million Miles (IPMM) of driving – and compare its own incident rates with a benchmark IPMM for human drivers (the arithmetic is sketched after the list below). Whether it’s airbag deployments, crashes with a reported injury, or incidents where police are notified, Waymo’s stats are consistently a fraction of those involving people at the wheel. In more than 33 million miles of driving, Waymo touts these results:

  • 81 per cent fewer airbag deployment crashes
  • 78 per cent fewer injury-causing crashes
  • 62 per cent fewer police-reported crashes
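
Here’s the arithmetic behind figures like those, sketched in Python with hypothetical numbers (Waymo publishes the percentages, not this raw data):

    def ipmm(incidents, miles):
        # Incidents Per Million Miles
        return incidents / (miles / 1_000_000)

    def percent_reduction(waymo_ipmm, benchmark_ipmm):
        return 100.0 * (1.0 - waymo_ipmm / benchmark_ipmm)

    # Hypothetical: 20 airbag-deployment crashes in 33 million miles, against a
    # human-driver benchmark of 3.2 such incidents per million miles
    waymo_rate = ipmm(incidents=20, miles=33_000_000)          # ~0.61 IPMM
    print(f"{percent_reduction(waymo_rate, 3.2):.0f}% fewer")  # ~81% fewer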

That’s clearly a significant reduction, and to most people it would indicate that Waymo is safer than taking a ride with a stranger (or even a friend) at the wheel. The statistics include accidents where other drivers were at fault, but do not separate them out – so we can’t actually see what percentage involved an error on the Waymo side. Waymo has previously stated that the majority of these incidents were the fault of human drivers, and that there have been just two accidents involving injuries where it expects to pay out insurance liability claims.

But even rare incidents can quickly become high-profile. In Phoenix, an empty Waymo that had been summoned by a customer crashed at low speed into a telephone pole in an alley. Had a human been at the wheel, we would never have heard of it. But because it was a Waymo, the incident led newscasts. Why is that? Well, we expect perfection in systems like these. And that seems a reasonable expectation if you’re going to trust your personal safety to a driverless car. It just can’t make mistakes. And that’s why Waymo quickly issued a recall for a software fix.

“This is our second voluntary recall,” Katherine Barna, a Waymo spokesperson, told TechCrunch. “This reflects how seriously we take our responsibility to safely deploy our technology and to transparently communicate with the public.”

In May of 2024, the US National Highway Traffic Safety Administration (NHTSA) informed Waymo it was investigating 22 incidents involving its vehicles (and subsequently added a further nine), stretching back to 2021. Many of those incidents were described by Forbes as “surprisingly minor”, and 11 of them were culled by the NHTSA from social media reports of the vehicles driving in an unusual fashion (such as using the oncoming lane to avoid traffic problems). The most serious was the aforementioned pole collision.

We were unable to find any reports of Waymo incidents involving a serious injury. The one fatality involving a Waymo occurred in January of 2025, when an unoccupied stationary Waymo stopped at a traffic light was one of several vehicles hit by a speeding car. One person and a dog died in that incident, but because a Waymo was tangentially involved it made the headlines. It is the only case we can find involving a fully driverless vehicle where a fatality was involved – and in this case the vehicle was completely passive. (There was a pedestrian fatality in 2018 involving an autonomous Uber vehicle. In that incident, which occurred in Tempe, Arizona, a human safety driver was in the driver’s seat. She was watching television on her phone when the accident occurred and subsequently pleaded guilty to endangerment.)

While Waymo has an excellent track record, there have been incidents. But with each incident where Waymo Driver has somehow made the wrong decision, it’s reasonable to assume a software fix followed. And here’s where a fleet of autonomous vehicles has a definite advantage over people: that tweak can be instantly applied to the entire fleet.

Still, there are skeptics who argue that – despite those millions of miles of driverless passenger trips – Waymo does not have enough data upon which to draw sound conclusions.

“We don’t know a lot. We know what Waymo tells us,” Philip Koopman, an expert on autonomous vehicle safety at Carnegie Mellon University, told the Miami Herald. “Basically you are trusting Waymo to do the right thing.”

 

THE FUTURE

 

Autonomy is hard – and it takes time: Google and Alphabet have invested more than 15 years of continuous engineering for Waymo Driver to reach this level of technological maturity.

Now, Waymo is rolling out to more cities. Remember that recall software fix? It went out to 672 Jaguars – just a fraction of the 20,000 I-PACE vehicles Waymo has signed a contract with Jaguar to purchase. Plus, the company recently announced that its sixth-generation vehicle – a Chinese-made electric minivan – is next up for testing and deployment. From all external appearances, Waymo shows no sign of stopping (except at red lights, of course). In 2024, it carried out four million autonomous rides – four times more than its total over the previous four years. Weekly rides tripled in 2024, to 150,000. And the company calculates “Waymo riders helped avoid over 6 million kilograms of CO2 emissions.”

That’s all great. But for any commercial enterprise, even if it’s willing to absorb costs during rollout, the ultimate test will be the bottom line. Will Waymo prove profitable?

We can’t say for certain – and Waymo’s current financials are somewhat opaque to the public, as they’re bundled in with several other Alphabet projects. But some analysts predict Waymo, the clear leader in autonomous rideshare, will ultimately win a significant piece of the market. An analysis on Nasdaq.com predicts Waymo could prove over time to be the jewel in Alphabet’s crown.

“Uber does more than 200 million rides each week,” states the story. “Let’s let that sink in. So if autonomous rides can capture even half that market, that would mean 100 million rides per week…If Waymo can capture about one-third of the $1 trillion autonomous rides market, it could generate annual revenues of around $300 billion.” Enough, suggests the story, to double Alphabet’s stock price.

That’s a big prize. And, clearly, incentive for Alphabet and Waymo to continue on the road to profitability.

 

Below: The LCD display for rear Waymo passengers. Note the option to “pull over” if you unexpectedly need to end your ride early

INDRO’S TAKE

 

Because we’re deeply involved in the autonomous space, we obviously take great interest in Waymo and other deployments of autonomous technologies at scale. Waymo Driver is different from most other applications, though, because it’s transporting human beings. There is very little – if any – room for error.

“We can’t predict the future, but – like algorithms – can make informed predictions with available data,” observes InDro Robotics Founder and CEO Philip Reece. “Waymo appears to be heavily invested in continuously making a good safety record even better – and has the engineering and financial resources to do so. I suspect Waymo, and its competitors, are here to stay.”

For more on Waymo, check out its website. And, if you’re in one of the growing number of cities where it operates, download the app and let Waymo Driver take the wheel.

Wisk promises autonomous Advanced Air Mobility

By Scott Simmie

 

If you’ve been following our posts, you’ll know that InDro Robotics was part of a Canadian trade delegation that visited California last week. Some 40 organisations took part – including private companies, airports, academics, Transport Canada, NAV Canada and the National Research Council Canada. The trip was organised by Canadian Advanced Air Mobility (CAAM), the organisation that speaks with a unified voice on behalf of industry and others with a vested interest in the coming world of AAM.

California was chosen because it’s home to three of the leading companies in the Advanced Air Mobility space: Joby, Archer and Wisk. It’s also home to the NASA Ames Research Center – which is working closely with industry on multiple technical issues as the world of AAM approaches. Last week, we shared highlights of our visits to Joby and Archer with this post (which we’d encourage you to read for context).

Today’s post? It’s all about Wisk, the final air taxi company the delegation visited. And its vision?

“Creating a future for air travel that elevates people, communities, and aviation.”

Unlike Joby and Archer – which plan to launch with piloted aircraft – Wisk differentiates itself with its “autonomous-first strategy.” That means, once it has attained all the necessary FAA certifications, the first passengers will climb on board an aircraft that flies itself. An autonomous aircraft carrying human beings? That’s a really big deal.

“When we’re successful at certifying this aircraft, that has the potential to change so much more beyond Wisk,” explained Becky Tanner, the company’s Chief Marketing Officer. In fact, she believes it will have an impact on the broader aviation industry, encouraging it to “take a step forward.”

Wisk is currently flying its sixth-generation full-sized aircraft. Its first generation was autonomous, but the following two were piloted.

“We made the conscious choice from Generation 3 to Gen 4 to stick with autonomous aircraft,” says Chief Technical Officer Jim Tighe. He points to the Generation 6 (which they call “Gen6”) on the floor.

“There will never be a pilot in that aircraft,” he says.

Below: Wisk’s Gen6 – the latest iteration of its autonomous air taxi designed to carry four passengers

 

 


THE DESIGN

 

Like Joby’s and Archer’s aircraft, Wisk’s basic design is a fixed-wing eVTOL that uses tilt-rotors on booms attached below the wing. There are two motors on each of those six booms. The forward motors have tiltable five-blade rotors that allow them to transition for more efficient forward flight; these motors are in use throughout the flight – takeoff, landing, hover, forward flight and any other manoeuvres. The rear motors are used for the VTOL portions of flight but are turned off once Gen6 has transitioned to forward flight.

Gen6, as you perhaps guessed, is the sixth full-size aircraft that Wisk has designed and built. And, like Generations 1, 4 and 5, it’s fully autonomous. That eliminates the possibility of pilot error.

“It’s obviously a differentiator,” says Tighe. “But we really believe that autonomy will enable safety. These are challenging operations. Short distance flights, you’re doing a lot of takeoffs and landings and you’re doing it in congested airspace.”

Building a completely autonomous aircraft is difficult. But it’s especially challenging – and rewarding – when you have to invent required components.

“When we first started, most of these systems did not exist – so we had to build them ourselves,” CTO Tighe told the Canadian delegation. That included motors, highly optimised batteries, flight control systems and much more. The company now holds 300+ patents globally and has carried out more than 1750 test flights with full-scale aircraft.

“It’s really important to design systems that meet our challenges for design, safety, weight and performance requirements,” he said, adding “It’s a lot easier if you can work on it yourself.”

Tighe, who dresses and speaks casually, comes with an impeccable background. After his first few years working with Boeing as an Aerodynamics Engineer, he worked as Chief Aerodynamicist for 14 years at Scaled Composites. That was the Burt Rutan company known for an incredible number of innovative aircraft and world aerospace records.

But the jewel in Scaled’s crown came right in the midst of Tighe’s tenure. The company designed and built SpaceShipOne and its mothership, White Knight. SpaceShipOne was a crewed, reusable, suborbital rocket-powered aircraft that was carried to 50,000′ AGL while affixed beneath White Knight. When it was released, SpaceShipOne ignited its rocket engine, which took the small aircraft to the edge of space (100 km). By accomplishing this feat twice within two weeks, Scaled Composites won the $10M Ansari X Prize. The technology, which includes a feathering system where the wing of the spacecraft rotates for optimal atmospheric re-entry, is core to the Virgin Galactic space tourism program. Tighe left Scaled Composites in 2014, moving directly to Wisk – a job he describes as “really fun if you’re an engineer.”

Below: The Gen6, which is capable of carrying four passengers of all shapes and sizes, including passengers with mobility issues

 

 


AUTONOMY

 

Autonomy isn’t just about the technology (though we’ll get to that). It’s also part of a strategic business model in a market sector that will undoubtedly be competitive. Both Joby and Archer will initially have piloted models, meaning one of the four seats will be taken by the pilot. That not only costs more (to pay for the pilot), but also means losing revenue for one passenger on every single flight.

But will passengers embrace flying without a human at the controls? Wisk believes so, and says it puts great emphasis on safety. And here it has some help: Wisk became a wholly owned subsidiary of Boeing in 2023 (though it operates separately). Some 150 Boeing employees are directly involved with the Wisk operation. That relationship, says the Wisk website, “allows us to tap into Boeing’s development, testing and certification expertise, and more.”

And on the autonomy front? In addition to its own inventions, Gen6 relies heavily on tried-and-true systems like autopilot. Its self-flying approach includes, according to its website:

  • “Leveraging the same proven technology that accounts for more than 93% of automated pilot functions on today’s commercial flights (autopilots, precision navigation, flight management systems, etc.)
  • “New, innovative technology such as improved detect and avoid capabilities, sensors, and more
  • “Wisk’s logic-driven, procedural-based, decision-making software which provides reliable, deterministic outcomes.”

What’s more, Wisk already has a highly integrated system that allows human flight supervisors to track missions from the ground and monitor aircraft systems. Those flight supervisors will have the ability to intervene remotely, should that ever be required. It’s anticipated that, initially, one supervisor will be responsible for monitoring three missions simultaneously. Wisk offered a simulated demonstration of this system – which already looks pretty mature.

The location the delegation visited was in Mountain View, CA. This Bay Area campus houses engineering, composite assembly, airframe assembly, motor production, the battery lab and the autonomy lab, and is home to the corporate team. In addition, Wisk has locations elsewhere in the US as well as in Canada (Montréal), Poland, Australia and New Zealand. Its flight tests and R&D are carried out in Hollister, CA. The company currently has about 800 employees (including 50 in Montréal).

 

SUSTAINABLE AND ACCESSIBLE


One of the many impressive things about Wisk is its emphasis on safety through design. Engineers have worked hard to reduce the number of moving parts in the aircraft – points of failure – to the point where no single mechanical or software failure could take the aircraft out of the sky. But equally impressive was its commitment to accessible design.

Beyond ensuring everything is comfortable, ergonomic and safe for passengers, a great deal of work has gone into ensuring any Wisk aircraft will be accessible to people of all shapes and sizes – including those with disabilities. Wisk has an ongoing program where civilians with physical or sensory limitations are brought into the lab to try out the latest iteration of the cabin and offer feedback for improvement. For example, there’s Braille in the cabin and on the flight safety cards. And, when it was discovered that a guide dog was fearful of the metal steps for climbing up into the cabin, they were redesigned to be easier on the paws. The guide dog happily climbed aboard the redesigned steps on a subsequent visit.

In conjunction with making the service affordable, this philosophy is something Wisk emphasised during the visit.

“The big vision of this is to have this accessible for everyone,” said CMO Becky Tanner. “Making sure this feels comfortable and enjoyable and safe for all kinds of people – people with disabilities, people with different heights, shapes and sizes.”

Below: InDro’s Scott Simmie (front right) inside Gen6. InDro’s Dr. Eric Saczuk, who was attending on behalf of BCIT’s RPAS Hub (which he directs), is in the seat behind him. Dr. Saczuk is also InDro’s Chief of Flight Operations

 


INDRO’S TAKE

 

Before we get into our view of this world, it’s also worth mentioning that the delegation had the privilege of touring the NASA Ames Research Center. We saw, among other things, a high-end simulator purpose-built for testing eVTOL flight in congested urban airspace – as well as top-level research into developing predictive models for turbulence at the coming vertiports – where these vehicles will takeoff and land.

“The worlds of Advanced Air Mobility and Urban Air Mobility are definitely coming. This is truly going to be an inflection point in aviation, and we foresee many positive use-case scenarios beyond air taxis that these technologies will enable,” says InDro Robotics Founder and CEO Philip Reece.

“It was highly instructive to get a front-row seat with these industry leaders, and we thank CAAM for its foresight in planning and executing this important trip. InDro will have some announcements of its own for the AAM space – both for service provision and more – down the road.”

We look forward to these companies gaining their final FAA Certifications – and seeing these aircraft carry passengers and eventually cargo.

Robosense sets new bar for affordable, powerful LiDAR sensors

By Scott Simmie

 

Building or modifying a robot?

Specifically, are you working on something with autonomy that needs to understand an unfamiliar environment? Then you’re likely looking at adding two key sensors: A depth camera and a LiDAR unit.

LiDAR, as most of you likely know, scans the surrounding environment with a continuous barrage of eye-safe laser beams. It measures what’s known as “Time of Flight” – the time it takes for the photons to be reflected off surrounding surfaces and return to the LiDAR unit. The closer the surface, the shorter the Time of Flight. LiDARs time each of those reflected beams and convert those times into distances. Scatter enough beams in a short period (and LiDARs do), and you get an accurate digital representation of the surrounding environment – even while the robot is moving through it.
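
That conversion is simple physics: the beam travels out and back, so the one-way distance is the speed of light multiplied by the elapsed time, divided by two. In Python:

    C_M_PER_S = 299_792_458  # speed of light

    def tof_to_distance_m(elapsed_s):
        # Convert a LiDAR return's round-trip time into one-way distance (metres)
        return C_M_PER_S * elapsed_s / 2

    print(tof_to_distance_m(66.7e-9))  # a return after ~66.7 nanoseconds -> a surface ~10 m away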

This is particularly useful for autonomous missions and especially for Simultaneous Localisation and Mapping, or SLAM. That’s where a LiDAR-equipped robot can be placed in a completely unfamiliar (and even GPS-denied) environment and produce a point-cloud map of its surroundings while avoiding obstacles. Quality LiDARs are also capable of producing 3D precision scans for a wide variety of use-cases.

All great, right? Except for one thing: LiDAR sensors tend to be very expensive. So expensive, they can be out of reach for an R&D team, academic institution or startup.

There is, however, a solution: Robosense.

The company produces LiDAR sensors (both mechanical and solid-state) that rival the established players in the market. And they do so for about one-third of the cost of the industry heavyweights.

“The performance of Robosense is outstanding – absolutely on par with its main competitors in North America,” says InDro Account Executive Callum Cameron. “We have been integrating Robosense LiDAR on our products for about two years, and their performance is exceptional.”

Below: A fleet of four robots, equipped with Robosense LiDAR, which recently shipped to an academic client.

 


ROBOSENSE

 

The company might not yet be a household name (unless your household has robots), but as of May 2024 the firm had sold 460,000 LiDAR units. Its sensors power a large number of autonomous cars, delivery vehicles and other robots – and it’s the first company to achieve mass production of automotive-grade LiDAR units with its own in-house developed chip.

The company was founded in 2014 with some A-level engineering talent – and it’s been on a stellar trajectory ever since. One reason is that Robosense produces all three core technologies behind its products: the actual chipsets, the LiDAR hardware, and the perception software. We’ll let the company itself tell you more:

“In 2016, RoboSense began developing its R Platform mechanical LiDAR. One year later, in 2017, we introduced our perception software alongside the automotive-grade M Platform LiDAR sensors tailored for advanced driver assistance and autonomous driving systems. We achieved the start-of-production (SOP) of the M1 in 2021, becoming the world’s first LiDAR company to mass-produce automotive-grade LiDAR equipped with chips developed in-house,” says its website.

The company now has thousands of engineers. And it didn’t take long before the world noticed what they were producing.

“As of May 17, 2024, RoboSense has secured 71 vehicle model design wins and enabled 22 OEMs and Tier 1 customers to start mass production of 25 models. We serve over 2,500 customers in the robotics and other non-automotive industries and are the global LiDAR market leader in cumulative sales volume.”

The company has also received prestigious recognition for its products, including two CES Innovation awards, the Automotive News PACE award, and the Audi Innovation Lab Champion prize.

“Robosense has standout features, including field of view, point cloud density and high frame rates,” says Cameron. “If you look at that fleet of four robots we recently built, with the competition those LiDAR units alone would have come to close to $80,000. The Robosense solution cost roughly one-quarter of that, with similar capabilities.”

And the factories? State of the art. Though this video focuses on its solid-state LiDAR, Robosense uses the same meticulous process for its mechanical units:

LiDAR FOR EVERY APPLICATION

 

Robosense produces many different LiDAR sensors. But what particularly appeals to us is that the company has (excuse the pun) a laser-like focus on the robotics industry. Its Helios multi-beam LiDAR units have been designed from the ground up for robots and intelligent vehicles. There are customisable fields of view, depending on application, and a near-field blind spot of ≤ 0.2 metres. In addition, Helios LiDAR comes in 16- and 32-beam options, depending on point-cloud density and FOV requirements. Both are capable of functioning in temperatures as low as -40° C or on a scorching day in the Sahara desert. There’s also protection against multi-radar interference and strong light (which can be an issue with LiDAR). You can learn more about its features here.

Its Bpearl unit proves that very good things can indeed come in small packages. With a 360° horizontal and 90° vertical hemispherical FOV, it’s been designed to cover near-field blind spots, capable of detection at ≤ 10 cm. That’s why we selected it for a robot designed to inspect cycling lanes for hazards (while avoiding cyclists, of course). We actually have two Bpearls on that robot – one on each side – since covering blind spots and avoiding obstacles is so critical to this application.

“We’ve integrated both the Bpearl and Helios LiDAR units into multiple different robots and the performance has been excellent, even under adverse conditions,” says Cameron. “Obstacle avoidance has been outstanding, and SLAM missions are a snap.”

Below: This InDro robot features two 32-beam Robosense Bpearl LiDAR units. You can see one of them – that tiny bubble on the side (and there’s another one on the opposite side):


THE THREE “D”s

 

You’ve likely heard this before, but robots are perfect for jobs that are Dirty, Dull or Dangerous – because they remove humans from those scenarios. Robots, particularly inspection robots, are often subjected to extremes in terms of weather and other conditions.

So this is a good place to mention that if a Robosense LiDAR encounters fog, rain, dust or snow, a de-noising function ensures it’s still capturing accurate data – and that your point cloud isn’t a representation of falling snow. All Robosense LiDAR sensors also have outstanding Ingress Protection ratings.
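
What does de-noising actually do? Robosense’s on-sensor filter is proprietary, but the principle resembles statistical outlier removal: sparse, isolated returns (the signature of snowflakes and raindrops) sit unusually far from their nearest neighbours, so they get dropped. A minimal NumPy sketch of that principle:

    import numpy as np

    def denoise(points, k=8, std_ratio=2.0):
        # Statistical outlier removal on an (N, 3) point cloud.
        # O(N^2) pairwise distances: fine for a demo, not for full-rate scans.
        dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)  # mean distance to k nearest neighbours
        keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
        return points[keep]  # inliers only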

Because adverse conditions are quite likely to occur at some point during a robotic mission, Robosense puts its products through absolutely gruelling tests. Hopefully your robot won’t encounter the scenarios seen below, but if it does – the LiDAR will keep working:

INDRO’S TAKE

 

We take pride in putting only the highest quality sensors into our products.

Prior to adopting Robosense as our “go-to” LiDAR about two years ago, we were using big-name products. But those products also came with big price tags. When we discovered the quality and price of Robosense LiDAR units, switching was an obvious choice. We have shipped multiple Robosense-enabled robots to clients, saving them thousands of dollars – in one case, tens of thousands – while still capturing every bit of data they require. Robosense is now standard even on our flagship products. (We recently demonstrated one of our newer Helios-equipped autonomous quadrupeds to a high-profile client; they were amazed by the results.)

“Robosense is every bit the equal of the heavyweight LiDAR manufacturers, without the downside of the high cost,” says InDro Robotics CEO Philip Reece. “The field-of-view, point cloud density and quality of construction are all state-of-the-art, as are the manufacturing facilities. What’s more, Robosense continues to push the envelope with every new product it releases.”

Interested in learning more, including price and options? Contact Account Executive Callum Cameron right here, and he’ll give you all the info you need.