Cypher Robotics takes the stage at Hannover Messe 2025

By Scott Simmie

 

We feel a bit like proud parents.

That’s because a company we have incubated, Cypher Robotics, recently took to the world stage at the big Hannover Messe 2025 show in Germany. It’s one of the largest automation and technology shows in the world. More than 125,000 people attended, and Cypher’s Founder and CEO Peter King (seen on the left in the panel above) was one of them.

King was there to showcase the Captis cycle-counting solution – a three-in-one robot that can autonomously scan inventory, capture RFID tags, and carry out precision scans on missions lasting up to five hours. In a single shift, Captis can capture all inventory in a large warehouse while transferring that data in real time to a client’s Warehouse Management System or Warehouse Execution System software.

What differentiates Captis from other solutions is a ROS2-based tethered drone. In the back of a warehouse, that drone ascends from the Autonomous Mobile Robot (AMR) base. As the base moves down the aisles, the drone optically scans product codes. This eliminates human error, and saves people from the repetitive – and potentially risky – task of working from heights.

Below: The Captis system with its ROS2 tethered drone. InDro Robotics is the incubator for Cypher Robotics

 

Cypher Robotics Captis

ERICSSON AND 5G

 

Captis was on display at the Ericsson booth. Ericsson is a world leader in telecommunications hardware and software; it’s very likely that your local cellular provider is running on a network built by the company.

And now, as companies globally transition toward an Industry 4.0 (IR4) world of automation and connected devices, those networks are more important than ever. Secure, high-speed data throughput in the form of private 5G networks is the foundation of IR4.

“As manufacturers modernize their operations, the need for the right connectivity has never been more critical,” explains Ericsson. “Data transformation in manufacturing starts with a unified connectivity platform that seamlessly integrates your existing digital assets to enable new technologies. At the heart of this transformation is 5G connectivity, delivering scalable, flexible solutions that harness massive amounts of data generated by Industrial IoT devices.”

At big shows like Hannover Messe, Ericsson wants a tangible way to demonstrate what these private 5G networks mean in the real world. That’s why Cypher Robotics, along with several other innovators, was invited to display at the Ericsson booth.

“There was a lot of interest in Captis at Hannover Messe,” says King. “People immediately understand the value proposition.”

Below: Cypher Robotics Founder and CEO Peter King, left, along with partners Ericsson and Slalom Consulting

Cypher Robotics Peter King Hannover Messe Captis

INDRO’S TAKE

 

InDro was pleased to see Captis and Peter King once again take to the global stage. As the incubator for Cypher Robotics, we are particularly proud.

“The Captis solution is truly at the forefront of cycle-counting technology, and it’s been very satisfying to help the Cypher Robotics team overcome some of the demanding technical hurdles,” says InDro Robotics Founder and CEO Philip Reece. “In addition to what it can do for clients, Captis is also a great way for Ericsson to demonstrate the undeniable efficiencies that private 5G networks enable in an Industry 4.0 setting.”

Bonus: If you’d like to see highlights from the big show – along with a glimpse of a robotic goaltender built by Ontario high school students that was on display – check out the video below.

Research using InDro robots for real-world autonomy

By Scott Simmie

 

As you’re likely aware by now, InDro builds custom robots for a wide variety of clients. Many of those clients are themselves researchers, creating algorithms that push the envelope in multiple sectors.

Recently, we highlighted amazing work being carried out at the University of Alberta, where our robots are being developed as Smart Walkers – intended to assist people with partial paralysis. (It’s a really fascinating story you can find right here.)

Today, we swing the spotlight down to North Carolina State University. That’s where we find Donggun Lee, Assistant Professor in the Departments of Mechanical Engineering and Aerospace Engineering. Donggun holds a PhD in Mechanical Engineering from UC Berkeley (2022), as well as a Master of Science in the same discipline from the Korea Advanced Institute of Science and Technology. He oversees a small number of dedicated researchers at NCSU’s Intelligent Control Lab.

“We are working on safe autonomy in various vehicle systems and in uncertain conditions,” he explains.

That work could one day lead to safer and more efficient robot deliveries and enhance the use of autonomous vehicles in agriculture.

Below: Four modified AgileX Scout Mini platforms, outfitted with LiDAR, depth cameras and Commander Navigate, are being used for research at NCSU. The chart below shows features of the Commander Navigate package

Research Robots
Commander Navigate

“UNCERTAIN” CONDITIONS

 

When you head out for a drive, it’s usually pretty predictable – but never certain. Maybe an oncoming vehicle will unexpectedly turn in front of you, or someone you’re following will spill a coffee on their lap and slam on their brakes. Perhaps the weather will change and you’ll face slippery conditions. As human beings, we’ve learned to respond as quickly as we can to uncertain scenarios or conditions. And, thankfully, we’re usually pretty good at it.

But what about robots? Delivery robots, for example, are already being rolled out at multiple locations in North America (and are quite widespread in China). How will they adapt to other robots on the road, or human-driven vehicles and even pedestrians? How will they adapt to slippery patches, ice or other unanticipated changes in terrain? The big picture goes far beyond obstacle avoidance – particularly if you’re also interested in efficiency. How do you ensure safe autonomy without being so careful that you slow things down?

These are the kinds of questions that intrigue Donggun Lee. And, for several years now, he has been searching for answers through research. To give you an idea of how his brain ticks, here’s the abstract from one of his co-authored IEEE papers:

Autonomous vehicles (AVs) must share the driving space with other drivers and often employ conservative motion planning strategies to ensure safety. These conservative strategies can negatively impact AV’s performance and significantly slow traffic throughput. Therefore, to avoid conservatism, we design an interaction-aware motion planner for the ego vehicle (AV) that interacts with surrounding vehicles to perform complex maneuvers in a locally optimal manner. Our planner uses a neural network-based interactive trajectory predictor and analytically integrates it with model predictive control (MPC). We solve the MPC optimization using the alternating direction method of multipliers (ADMM) and prove the algorithm’s convergence.
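In generic terms, the kind of interaction-aware planner the abstract describes can be written as a model predictive control problem. This is a simplified, illustrative formulation – not the paper’s exact one:

```latex
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \ell(x_k, u_k) + \ell_N(x_N)
\quad \text{s.t.} \quad
x_{k+1} = f(x_k, u_k), \qquad
\operatorname{dist}(x_k, \hat{y}_k) \ge d_{\min}
```

Here \(x_k\) is the ego vehicle’s state, \(u_k\) its control input, \(\hat{y}_k\) the neural predictor’s forecast of surrounding vehicles, and \(d_{\min}\) a safety margin. ADMM splits this coupled optimisation into smaller sub-problems that are solved alternately until they agree – which is what allows the authors to prove convergence.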

That gives you an idea of what turns Donggun’s crank. But with the addition of four InDro robots to his lab, he says research could explore many potential vectors.

“Any vehicle applications are okay in our group,” he explains. “We just try to develop general control and AI machine learning framework that works well in real vehicle scenarios.”

One (of many) applications that intrigues Donggun is agriculture. He’s interested in algorithms that could be used on a real farm, so that an autonomous tractor could safely follow an autonomous combine. And, in this case, they’ve done some work programming the open-source Crazyflie drone to autonomously follow the InDro robot. Despite the fact it’s a drone, Donggun says the algorithm could be useful to that agricultural work.

“You can easily replace a drone with a ground vehicle,” he explains.
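Donggun’s point comes down to the controller operating on poses rather than platforms. A minimal leader-follower step might look like the sketch below – the function name, gains and stand-off distance are illustrative, not from the NCSU code:

```python
# Minimal leader-follower sketch: the follower steers toward a point a fixed
# stand-off distance short of the leader. Gains and names are illustrative.
import math

def follow_step(follower_xy, leader_xy, standoff=1.0, gain=0.5):
    """Return a velocity command (vx, vy) driving the follower toward a
    point `standoff` metres short of the leader."""
    dx = leader_xy[0] - follower_xy[0]
    dy = leader_xy[1] - follower_xy[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return (0.0, 0.0)
    # Error is the distance beyond the desired stand-off.
    err = dist - standoff
    return (gain * err * dx / dist, gain * err * dy / dist)

# One simulated step: follower at origin, leader 5 m ahead on the x-axis.
vx, vy = follow_step((0.0, 0.0), (5.0, 0.0))
```

Nothing in this loop cares whether the commanded body is airborne or on wheels – only the low-level motor control changes, which is why swapping the drone for a ground vehicle is straightforward.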

And that’s not all.

“We are also currently tackling food delivery robot applications. There are a lot of uncertainties there: Humans walking around the robot, other nearby robots… How many humans will these robots interact with – and what kind of human behaviours will occur? These kinds of things are really unknown; there are no prior data.”

And so Donggun hopes to collect some.

“We want to develop some sort of AI system that will utilise the sensor information from the InDro robots in real-time. We eventually hope to be able to predict human behaviours and make decisions in real-time.”

Plus, some of Donggun’s previous research feeds into what’s ahead. The paper cited above is a good example: in addition to the planned work on human-robot interaction, that earlier research could also be applied to maximise efficiency.

“There is a trade-off between safety guarantees and high performance. You want to get to a destination as quickly as possible while still avoiding collisions.”

He explains that the pendulum tends to swing to the caution side, where algorithms account for virtually all scenarios – including occurrences that are unlikely. By excluding some of those exceedingly rare ‘what-ifs’, he says speed and efficiency can be maximised without compromising safety.

Below: Image from Donggun’s autonomy research showing the InDro robot being followed by an open-source Crazyflie drone

NCSU InDro Navigator Crazyflie

INDRO’S TAKE

 

We, obviously, like to sell robots. In fact, our business depends on it.

And while we put all of our clients on an equal playing field, we have a special place in our non-robotic hearts for academic institutions doing important R&D. This is the space where breakthroughs are made.

“I really do love working with people in the research space,” says Head of R&D Sales Luke Corbeth. “We really make a concerted effort to maximise their budgets and, when possible, try to value-add with some extras. And, as with all clients, InDro backs what we sell with post-sale technical support and troubleshooting.”

The robots we delivered to NCSU were purchased under a four-year budget, and delivered last summer. Though the team is already carrying out impressive work, we know there’s much more to come and will certainly check in a year or so down the road.

In the meantime, if you’re looking for a robot or drone – whether in the R&D or Enterprise sectors – feel free to get in touch with Luke Corbeth here. He takes pride in finding solutions that work for clients.

Research at U of Alberta focuses on robotics for medical applications

By Scott Simmie

 

You’ve probably heard of the “Three Ds” by now: Robots are perfect for tasks that are Dirty, Dull and Dangerous. In fact, we recently took a pretty comprehensive look at why inspection robots can tick all of these boxes – while saving companies from unplanned downtime.

Generally, that maxim holds true. But a recent conversation with two researchers from the University of Alberta got us thinking that some innovative robotics applications don’t truly fit this description. Specifically, certain medical or healthcare use-cases.

The people we spoke to carry out their research under the umbrella of a body that intersects the robotics and healthcare sectors. It’s called the Telerobotic and Biorobotic Systems Group in the Electrical and Computer Engineering Department of the U of A. It’s under the direction of Prof. Mahdi Tavakoli, who is kind of a big name in this sector. Within that group, there are three separate labs:

  • CREATE Lab (Collaborative, Rehabilitation, Assistive robotics research)
  • HANDS Lab (Haptics and Surgery research)
  • SIMULAT-OR Lab (A simulated operating room featuring a da Vinci Surgical System)

Broadly, the research can be thought of as belonging to one of two realms: Rehabilitation/assistive and surgical. But what does that actually mean? And how has a robot from InDro been modified to become a smart device that can assist people with certain disabilities?

Let’s dive in.

Below: Could a robotic platform like the Ranger Mini be put to use helping someone with mobility issues? We’ll find out…

Ranger Mini 3.0

HELPING PEOPLE (AND EVEN SURGEONS)

 

We spoke with researchers Sadra Zargarzadeh and Mahdi Chalaki. Sadra is a Master’s student in Electrical and Computer Engineering and previously studied Mechanical Engineering at Iran’s Sharif University of Technology. Mahdi is also a Master’s student in the same department, and studied Mechanical Engineering at the University of Tehran.

Sadra’s research has focused on healthcare robotics with an emphasis on autonomous systems leveraging Large Language Model AI.

“I’ve always had a passion for helping people that have disabilities,” he explains. “And in the rehab sector we often deal with patients that have some sort of fine motor skill issue or challenge in executing tasks the way they’d like to. Robotics has the potential to mitigate some of these issues and essentially be a means to remove some of the barriers patients are dealing with – so I think there’s a very big potential for engineering and robotics to increase the quality of life for these people.”

That’s not dirty, dull or dangerous. But it is a very worthwhile use-case.

 

SMART WALKER

 

People with mobility and/or balance issues often require the help of walkers. Some of these devices are completely manual, while others have their own form of locomotion that keeps pace with the user’s desired speed. Direction is generally controlled with two hands on some form of steering device: equal pressure from each hand and arm is usually required to travel in a straight line, and pushing harder on one side or the other steers the device.

But what about someone whose stroke has left them with partial paralysis on one side? They may well be unable to compensate, meaning the device would turn despite their intent to travel straight ahead. That’s where Mahdi’s research comes in.

“Robotic walkers or Smart Walkers have been studied for more than 20 years,” he says. “But in almost all of them, their controllers assume you have the same amount of force in both of your hands. And people with strokes often don’t have the same strength in one side of their body as they have on the other side.”

So how can robotics compensate for that? Well, using an AgileX Ranger Mini with InDro Commander from InDro Robotics as the base, Mahdi and others got to work. They built a steering structure and integrated a force sensor, a depth perception camera, and some clever algorithms. That camera homes in on the user’s shoulders and translates movement into user intent.

“We know, for example, if you are just trying to use your right hand to turn left, the shoulder angle increases. If you’re trying to turn right, the shoulder angle on the right arm decreases.”

By interpreting those shoulder movements in conjunction with the force applied by each hand, the Smart Walker translates that data into the desired steering action. As a result, the user doesn’t have to push so hard with the compromised side, and cognitive load is reduced. The wrist torque required from the user drops by up to 80 per cent.
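Conceptually, the fusion Mahdi describes could be sketched as a weighted blend of force asymmetry and shoulder angle. The linear form, sign conventions and weights below are our own illustrative assumptions, not the published controller:

```python
# Illustrative fusion of hand-force asymmetry and shoulder angle into a
# steering command, in the spirit of the Smart Walker described above.
# Weights, signs and the linear form are assumptions, not the real model.
def steering_command(f_left, f_right, shoulder_angle_deg,
                     w_force=0.02, w_shoulder=0.05):
    """Positive output = turn left, negative = turn right (assumed convention).

    f_left / f_right: handle forces in newtons.
    shoulder_angle_deg: deviation of the shoulder line from neutral;
    positive when the user rotates as if to turn left.
    """
    force_term = w_force * (f_right - f_left)      # harder push on right -> left turn
    intent_term = w_shoulder * shoulder_angle_deg  # shoulder rotation encodes intent
    return force_term + intent_term

# A user with a weak left side pushes 30 N on the right, 10 N on the left,
# but their shoulders signal "straight ahead" (0 degrees). The raw force
# term alone would turn the walker; a real controller uses the shoulder
# intent to correct for exactly this asymmetry.
cmd = steering_command(10.0, 30.0, 0.0)
```

The point of the sketch is the failure mode it exposes: with force sensing only, the asymmetric push produces a spurious turn command, which is why the camera-derived intent signal matters.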

Of course, there’s much more to this device than we’ve outlined here. Enough, in fact, that a scientific paper on it can be found here. You can also check out the video below:

 

ROBOTS IN THE O-R

 

While the Smart Walker is a great example of robotics being put to use on the assistive and rehabilitation side of things, let’s not forget that the Telerobotic and Biorobotic Systems Group also carries out work on the surgical side. Sadra explains that robotic devices – particularly in conjunction with AI – could prove of great benefit in assisting a surgeon.

“My research centres around the use of Generative AI. With the growth of Large Language Models (LLM) such as ChatGPT, we want to see how these AI tools can translate into the physical world in robots. A big section of my projects have focused on Generative AI for surgical autonomy.”

For example, a robotic device with plenty of AI onboard might be able to handle tasks such as suctioning blood. Machine Vision and Machine Learning could help that device determine where and how much suction needs to be applied. And, if you push this far enough, a surgeon might be able to initiate that process with a simple voice command like: “Suction.”

“How can we have task planners and motion planners through generative AI such that the surgeon would communicate with the robot with natural language – so they could ask the robot to complete a task and it would execute?” asks Sadra. “This would allow robots to become more friendly to the average individual who doesn’t have robotics knowledge.”

On the flip side of the coin, there’s also the potential for robotic devices to inform the surgeon of something that requires attention. In breast cancer surgery, for example, an AI-enhanced robot with real-time data from an imaging device might notice remaining tumour tissue, giving the all-clear to close the incision only after all cancerous material has been excised.

In other words, some of the algorithms Sadra works on address that human-robot interface while leveraging powerful Large Language Model systems.

“Exactly. And we look at this process in three stages: We think about high-level reasoning and task planning, then mid-level motion planning, then lower-level motion control. This is not only for surgery; it’s a similar workflow for assistive robotics.”

The head of the lab, Dr. Mahdi Tavakoli, Professor and Senior University of Alberta Engineering Research Chair in Healthcare Robotics, describes AI in this field as “a game-changer,” enabling the next level of human-robot interface.

“Our focus is clear: We’re building robots that collaborate with humans — robots that can understand our language, interpret context, and assist with the kinds of repetitive or physically demanding tasks that free people up to focus on what they do best: The creative, the social, the human. We see the future in ‘collaborative intelligence,’ where people stay in control and robots amplify human capabilities.”

Fun fact: Many of the most powerful LLMs are Generative Pretrained Transformers – which is where ChatGPT gets its name.

 

WHAT’S NEXT?

 

We asked the researchers if the plan is to ultimately explore commercialisation. Apparently it’s a little more complex when it comes to surgery due to regulatory issues, but this is definitely on the roadmap. Sadra has been doing research through a program called Lab2Market and says there’s been very positive feedback from clinicians, physical and occupational therapists and manufacturers.

Program head Dr. Tavakoli says the lab is “thinking big” about how such innovations can help diversify the Canadian economy. In Alberta specifically, which has traditionally been a resource-dominated economy, he says robotics presents a huge opportunity for growth.

“That’s part of why we’ve launched Alberta Robotics: To build a regional ecosystem for robotics research, education, and innovation. So, the University of Alberta is open for business when it comes to robotics; people should be watching for what will come out of Alberta in robotics!”

Below: A promotional video for the da Vinci Surgical System. Will research at the U of A someday enable machines like this to take verbal commands from a surgeon?

INDRO’S TAKE

 

The research being carried out at the University of Alberta is fascinating and carries huge potential in both the surgical and rehabilitation/assistive spheres. We’re pleased to know that three Ranger Mini platforms with InDro Commander are being put to work for this purpose – which is unlike any other use-case we’ve seen for our robots.

“I’m incredibly impressed with what they’re doing,” says InDro Founder and CEO Philip Reece. “It’s researchers like these, quietly carrying out advanced and focussed work, who make breakthroughs that ultimately become real-world devices and applications. We’re pleased to put a well-deserved spotlight on their work.”

You can check out a list of researchers and alumni – and see a photo of Sadra and Mahdi – right here.

How remote inspection robots reduce downtime

By Scott Simmie

 

Inspection robots aren’t cheap.

We fully acknowledge that might not be the best opening pitch, but hear us out.

While a capable inspection robot can be costly, so is downtime. So is dispatching human beings to distant locations. Electrical substations and certain oil and gas assets are often remote and require many hours of driving to reach – plus the cost of hotels and per diems. Sometimes, companies even have to charter a helicopter just to place eyes on those remote spots. Depending on the sector, these inspections might take place monthly, bi-weekly – or at some other interval.

Point is: Regular inspection of remote assets is an absolute necessity. An inspection can troubleshoot for regular wear and tear, thermal anomalies, damage from animals, vandalism, environmental impact, leaks – the list goes on. A human being (often equipped with handheld scanners and other detection equipment) can generally spot all these things.

But so, too, can a robot. And, unlike a human being when it comes to remote assets, an autonomous robotic inspector can be on the site 24/7. It never requests a hotel room, doesn’t charge overtime – and never forgets to do everything it’s been instructed to carry out.

Below: The InDro Robotics Sentinel, at an electrical substation in Ottawa

Sentinel enclosure Ottawa Hydro

DOWNTIME

 

There are two types of downtime: Planned and unplanned. The former, obviously, is pre-arranged. Maybe it’s time to replace certain pieces of equipment or do other scheduled maintenance. Planned downtime can include hardware and software upgrades, even large-scale replacements. For those companies in service provision, including those in the B2B space, a scheduled event minimises downtime because everything is lined up in advance for the necessary task. In addition, you can notify consumers or clients that the service or commodity will be temporarily unavailable – and schedule the downtime to minimise disruption. Customers and clients generally understand these inconveniences when they know about them ahead of time.

Then there’s that other kind of downtime: Unplanned. Something goes wrong and you need to scramble to fix it. Precisely because these are unexpected, you might not have the required widgets or personnel on-hand (or on-site). And it’s not just the repair itself. There’s usually lost revenue, reputational damage, and even more:

“The repercussions of unplanned downtime extend beyond immediate financial losses,” explains this overview.

“Companies may face financial penalties and legal liabilities, especially if downtime leads to non-compliance with regulatory requirements. These penalties can add another layer of financial strain on top of the already significant downtime costs.”

We’ve all heard stories about airlines being fined, sometimes heavily, for unexpected delays. And the reputational damage? You wouldn’t have to look hard to find consumers who have switched airlines, internet providers and more due to unplanned downtime that inconvenienced them.

That same article dips into the oil and gas sector, using data from a 2016 study by Kimberlite (a research company specialising in the sector) which found offshore organisations face an average of $38M US annually in costs from unplanned downtime. Those with the worst records racked up yearly tabs close to $90M US. So clearly, it’s something most would like to avoid.

 

THE ADVANTAGES OF INSPECTION

 

Regular robotic inspection can help reduce unplanned downtime by identifying potential failures before they happen. Is a key component starting to age? Has wildlife encroached on sensitive components? Did the storm that passed through overnight have an impact on anything? Are all gauges reading as they should? Are there any thermal anomalies? Is there the molecular presence of hydrocarbons or other indicators above a safe threshold? Are there any strange new noises, such as arcing or humming?

Yes. People can do this when they’re dispatched. But a robot tailored for inspection – and they can be customised for every client’s needs – can carry out these same tasks reliably, repeatedly, and on schedule.

This idea of predictive maintenance is very much a pillar in the world of Industry 4.0, or IR4 (which we recently explored in some detail). As companies move into this next phase, particularly in the manufacturing sector, Smart Devices are being integrated in every conceivable location across newer factory floors. In conjunction with software, they keep an eye on critical components, identifying potential problems before they occur. Industry leaders in this space, such as Siemens, state these systems can result in up to a 50 per cent reduction in unplanned downtime, and up to a 40 per cent reduction in maintenance costs.
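As a toy illustration of the predictive-maintenance idea, a monitoring system might flag a sensor reading that drifts well outside its recent history. The threshold and window here are illustrative; production IR4 systems use far richer models:

```python
# Toy predictive-maintenance check: flag a thermal reading that drifts more
# than `n_sigma` standard deviations from its recent history.
from statistics import mean, stdev

def is_anomalous(history, reading, n_sigma=3.0):
    """history: recent temperature samples (degrees C); reading: new sample."""
    mu, sigma = mean(history), stdev(history)
    return abs(reading - mu) > n_sigma * max(sigma, 1e-9)

baseline = [40.1, 40.3, 39.8, 40.0, 40.2, 39.9]
assert not is_anomalous(baseline, 40.4)   # normal drift
assert is_anomalous(baseline, 55.0)       # thermal anomaly: flag for maintenance
```

Catching the 55 °C outlier before the component fails is, in miniature, the entire economic case for predictive maintenance.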

That’s the gold standard. But we are just at the cusp of this integration, and it’s more broadly targeted at the manufacturing sector. Those remote electrical substations and oil assets are still, in many ways, not that smart when it comes to asset intelligence and will require regular inspection for many years to come.

Below: InDro’s Sentinel inspection robot, which can be customised for any inspection scenario. It’s seen here at a demo for Ottawa Hydro

Sentinel enclosure Ottawa Hydro

THE SENTINEL SOLUTION

 

Sentinel is our flagship inspection robot. Our first iteration was in 2022 and – as with all InDro innovations – we have continued to enhance its capabilities. As new advances in sensors and compute have emerged, so too have Sentinel’s powers. But Sentinel’s evolution goes far beyond adding new LiDAR, depth cameras or processors. In the background at Area X.O, we are continuously improving our own IP. Specifically, our InDro Autonomy and InDro Controller software.

InDro Controller is a desktop-based interface with Sentinel (or any other ROS-based robot). Fully customisable and easy to use, it allows our clients to plan and monitor missions with ease. A few clicks allow users to set up repeatable points of interest where the robot will carry out specific inspection tasks. Need eyes on a critical gauge? Have InDro Controller stop Sentinel at a particular spot. Use the 30X optical Pan-Tilt-Zoom camera to frame and capture the shot. Happy with the results? Great. InDro Controller will remember and carry out this step (and as many others as you’d like) the next time it runs the mission. Collisions won’t be an issue, as InDro Autonomy’s detect-and-avoid capabilities ensure there won’t be any mishaps along the way. In fact, you could drop Sentinel in a completely unfamiliar setting littered with obstacles, and it could map that site and even produce a precision scan. And, like a regular visit to the robot doctor, InDro Controller also monitors the overall system health of any integrated device.
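The teach-and-repeat workflow described above can be sketched in a few lines. Treat the class and field names as hypothetical, not InDro Controller’s actual API:

```python
# Sketch of a teach-once, repeat-forever mission: an ordered list of points
# of interest, each with a camera preset, replayed in order on every run.
# All names are hypothetical, not InDro Controller's real interface.
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    name: str
    xy: tuple             # map coordinates recorded during the teaching pass
    pan_tilt_zoom: tuple  # camera preset framed by the operator

def run_mission(pois, capture):
    """Visit each POI in order and capture the framed shot."""
    results = []
    for poi in pois:
        # A real robot would navigate to poi.xy with obstacle avoidance here.
        results.append(capture(poi))
    return results

mission = [
    PointOfInterest("gauge-7", (12.4, 3.1), (90, -10, 30)),
    PointOfInterest("transformer-A", (20.0, 8.5), (45, 0, 10)),
]
shots = run_mission(mission, capture=lambda p: f"image:{p.name}")
```

Because the mission is just data, the same list can be replayed on every run – which is what makes the inspections reliable and repeatable.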

From the outset, Sentinel has been on a continuous journey pushing the R&D envelope, with testing and rigorous third-party evaluation. An earlier iteration was even put through demanding tests by the US Electric Power Research Institute (EPRI) at its test facility in Massachusetts. All of these deployments have resulted in learnings that have been incorporated into the latest version of Sentinel.

 

SET AND FORGET

 

When it comes to remote assets, our clients clearly needed a hands-off approach. That meant we had to incorporate some sort of autonomous charging, since there’s no one on these sites to plug it in. We evaluated mechanical docking systems, but realised these physical mechanisms introduce another potential point of failure.

And so we ultimately settled on a powerful wireless charging system. Using optical codes, Sentinel returns to a housed structure following its missions. It then positions itself snugly up to the wireless charging system so that it’s ready for the next deployment (you’ll see a picture of one of our earlier test structures in a few seconds). We needed to avoid metal to ensure the cleanest possible wireless communication (Sentinel operates over 5G and also has the option for WiFi). Housing Sentinel when it’s off-duty protects it from unnecessary exposure to the elements, though it’s certainly built to operate in virtually anything Mother Nature can throw at it (short of a hurricane).

Finally, Sentinel also has InDro Commander on board. In addition to housing its powerful brain, Commander allows new sensors to be added simply by plugging them in. It provides both power and a data pipeline, and InDro Controller has been built to instantly recognise any newly added sensor. In other words, if a client’s requirements change and a new sensor is required, Sentinel can be modified with relative ease and no new coding.

Below: Sentinel, following a demonstration for Ottawa Hydro, snugs up to charge

Sentinel enclosure Ottawa Hydro

THE SENTINEL EVOLUTION

 

As mentioned, Sentinel has gone through a ton of testing, coding and development to reach its current iteration. We’ve taken all of our learnings and client feedback and put them into this robot. Sentinel does the job reliably and repeatedly, capturing actionable data intended to reduce downtime for our clients. What’s more, we have moved past producing these robots as one-offs when demand arises. With our fabrication facility InDro Forge, we are now beginning to manufacture Sentinel at scale.

“Sentinel is now a fully mature and market-ready product,” says InDro Founder and CEO Philip Reece. “We already have multiple Sentinels on the ground for a major US utility client and have other orders pending. We – like our clients – are confident Sentinel is worth the investment by reducing downtime and saving companies the expense and time of sending people to these remote locations for inspection work.”

Interested in learning more, or even taking the controls for a remote demonstration of Sentinel and InDro Controller? Contact us here.

What Canada’s new drone regulations mean for you

By Kate Klassen, Training and Regulatory Specialist

 

It’s not every day I get excited to see an email before 0600. But yesterday, March 26, was one of those days! 

Unexpectedly, Transport Canada announced the publication of the highly anticipated Canada Gazette, Part II, which includes new regulations for RPAS Operations Beyond Visual Line-of-Sight and Other Operations.

It’s a hefty publication with lots of cross-references and makes for a bit of a dense read. But after a day of reading, re-reading, digesting and consulting with other colleagues who share my nerdiness about this area, I’m pleased to provide this overview – which we’ll continue to update as new information becomes available.  

All-in-all, it’s what we were expecting and hoping to see: Common-sense amendments to existing regulations, noticeable inclusions from feedback on the Canada Gazette, Part I draft, and formalization of the next phase of routine RPAS operations in Canada.

If you were one of the many who took the time to provide comments to Transport Canada following CG1, well done. What we saw today is proof they listen and that those efforts matter. Thank you, TC! 

Some regulations come into effect on April 1, 2025, with others commencing November 4, 2025. This phased approach enables the mechanisms for compliance to be in place prior to requiring compliance with them. In other words, it gives you time to get prepared before it’s required by law. So don’t panic. There are no major changes required before this flying season. You can’t even fly BVLOS under these rules until November.

Now, let’s dive in. 

Below: Low-risk BVLOS flights will be permitted starting November 4, 2025. These operations will require a new Level 1 Complex RPAS Certificate

 

PRACTICALLY SPEAKING

 

As mentioned, the document published yesterday is complex. Our goal here is to explain what it actually means in the real world. So we’re going to break this down into implications for different scenarios. Here’s what the new rules mean for RPAS pilots with: 

 

…a sub-250 gram drone
  • On April 1, there are more regulations than just the CAR 900.06 ‘don’t be an idiot’ rule that come into force. These specifically spell out steps to follow if you inadvertently enter restricted airspace (CAR 900.07) and include prohibitions around emergency security perimeters (CAR 900.08) 
…a Basic RPAS Certificate
  • Not many changes aside from general tidying of rules to ensure intent aligns with application 
  • You can allow a non-certified individual to fly under your supervision (CAR 901.54) 
  • You are qualified as a visual observer for BVLOS operations 
…an Advanced RPAS Certificate
  • You get new capabilities as of November 4th – and you don’t have to do any additional testing to take advantage of them! 
  • You’ll be able to carry out EVLOS – Extended Visual Line Of Sight operations. This allows you to fly up to 2 NM from the pilot, control station and Visual Observer at any time during the flight, provided the pilot and control station are at the take-off and landing location (CAR 901.74) 
  • Sheltered Operation – This allows the drone to be flown around a building or structure without the use of a visual observer, in accordance with certain conditions  
  • Medium Drones: You’ll be permitted to fly drones with an operating weight of up to 150kg  
  • With an Advanced Certificate already in hand, you meet the prerequisite to begin your Level 1 Complex ground school (more in a moment). If you’ve passed the Advanced exam but still hold only your Basic Certificate because you haven’t yet completed your Flight Review, you can also pursue the Level 1 Complex

AND ROUTINE, LOW-RISK BVLOS?

 

This was an area the industry had really been pushing for in the new regulations. Specifically, to be able to carry out such flights without the need for a Special Flight Operations Certificate. Here, too, there’s good news:

  • After November 4, 2025, you’ll be able to fly low-risk BVLOS if you hold a Level 1 Complex RPAS pilot certificate (CAR 901.89). This means BVLOS in uncontrolled airspace and away from people
  • This permits the operation of a 250 g – 150 kg RPAS conducting a BVLOS operation in uncontrolled airspace and one kilometre or more from a populated area 
  • In addition to holding a Level 1 Complex pilot certificate, you also need to be an RPAS Operator (RPOC) or an employee/agent of one and comply with the conditions of your certificate (CAR 901.88) 

    

INTERESTING NUGGETS: 

 

  • The RPAS Operator Certificate uses the acronym RPOC rather than ROC (as previously drafted). This is likely in response to anticipated confusion with the ROC-A, the Radio Operator Certificate with Aeronautical qualification issued by ISED
  • The new regs contain detailed guidance for visual observers and their requirements in various scenarios
  • You can’t “daisy chain” Visual Observers for EVLOS over greater distances. The pilot/control station needs to be at the take-off and landing area and the RPA can’t go further than 2 NM from the pilot, control station AND VO. 
  • Despite previous suggestions, there is no medical requirement! Just fit-to-fly rules, as before

 

There are also some changes to SFOC requirements. Police operations at events won’t require an SFOC. Department of National Defence operations won’t require them either, just adequate coordination. In addition, you’ll be able to drop lifesaving gear without an SFOC, provided you don’t create a hazard. 

Declarations, maintenance and servicing will take on a more prominent role (not surprising, given EVLOS, low-risk BVLOS, and the easing of restrictions on flying heavier drones). It’s also worth noting that the already-useful Drone Site Selection Tool (DSST) will get upgraded to include new situational data layers for lower-risk BVLOS. These layers will include population density, aerodromes, controlled airspace, and Detect and Avoid requirements. 


KATE’S TAKE

 

Canada Gazette II is a massive document. I actually tried to do a word count and the computer simply froze in fear. But, in conjunction with all of the above, here are some final key takeaways:

  • Don’t freak out: There’s plenty of time to process and time to act. There are no major changes happening before November 4th, though you’ll probably want to get your ducks in a row before then if you anticipate your flying season extending beyond that date
  • For maybe the first time ever, regulations have outpaced technology. We still (desperately) need detect-and-avoid solutions that are reliable, capable and affordable
  • We’ve had a few folks reach out about ground school for Level 1 Complex and yes – we absolutely will be offering this. At FLYY, we have things well underway as we were anticipating this announcement.

Unlike with previous ground schools, some instructor requirements need to be in place before we can even make the declaration to TC that we’re offering TP15530-compliant training. We’re on top of it!

We plan to start offering live, TC-compliant courses prior to the end of April. Over a series of weeks, these courses will run every Wednesday at 0930 PDT for 2.5 hours. These sessions will be recorded and made available to all course participants to review or watch at their leisure. We’ll keep you posted as we get closer to launch.

You can take advantage of our presale here to make sure you’re first in line.