Still a long road to fully autonomous passenger cars

By Scott Simmie

 

We hear a lot about self-driving cars – and that’s understandable.

There are a growing number of Teslas on the roads, with many owners testing the latest beta versions of FSD (Full Self-Driving) software. The latest release allows for automated driving on both highways and in cities – but the driver still must be ready to intervene and take control at all times. Genesis, Hyundai and Kia electric vehicles can actively steer, brake and accelerate on highways while the driver’s hands remain on the wheel. Ford EVs offer something known as BlueCruise, a hands-free mode that can be engaged on specific, approved highways in Canada and the US. Other manufacturers, such as Honda, BMW and Mercedes, are also in the driver-assist game.

So a growing number of manufacturers offer something that’s on the path to autonomy. But are there truly autonomous vehicles intended to transport humans on our roads? If not, how long will it take until there are?

Good question. And it was one of several explored during a panel on autonomy (and associated myths) at the fifth annual CAV Canada conference, which took place in Ottawa on December 5. InDro’s own Head of Robotic Solutions (and Tesla owner) Peter King joined other experts in the field on the panel.

 

Levels of autonomy

 

As the panel got underway, there were plenty of acronyms being thrown around. The most common were L2 and L3, standing for Level 2 and Level 3 on a scale of autonomy that ranges from zero to five.

This scale was created by the Society of Automotive Engineers as a reference classification system for motor vehicles. At Level 0, there is no automation whatsoever, and all aspects of driving require human input. Think of your standard car, where you basically have to do everything. Level 0 cars can have some assistive features such as stability control, collision warning and automatic emergency braking. But because none of those features are considered to actually help drive the car, such vehicles remain in Level 0.

Level 5 is a fully autonomous vehicle capable of driving at any time of the day or night and in any conditions, ranging from a sunny day with dry pavement through to a raging blizzard or even a hurricane (when, arguably, no one should be driving anyway). The driver does not need to do anything other than input a destination, and is free to watch a movie or even sleep during the voyage. In fact, a Level 5 vehicle would not need a steering wheel, gas pedal, or other standard manual controls. It would also be capable of responding in an emergency situation completely on its own.

Currently, the vast majority of cars on the road in North America are Level 0. And even the most advanced Tesla would be considered Level 2. There is a Level 3 vehicle on the roads in Japan, but currently (at least to the best of our knowledge and research) there are no Level 3 vehicles in the US or Canada.
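For readers who like to see the scale laid out in one place, here is a minimal Python sketch of the six SAE levels. The one-line summaries are our informal paraphrases of the descriptions above, not SAE’s official wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, summarised informally."""
    L0_NO_AUTOMATION = 0  # human does everything; warnings and AEB don't count
    L1_DRIVER_ASSIST = 1  # steering OR speed assistance, human still drives
    L2_PARTIAL = 2        # steering AND speed assistance, driver must supervise
    L3_CONDITIONAL = 3    # system drives in limited conditions, driver takes over on request
    L4_HIGH = 4           # system drives itself within a defined operating domain
    L5_FULL = 5           # drives anywhere, in any conditions; no manual controls needed

# Example: today's most advanced consumer cars, including Teslas running FSD beta,
# sit at Level 2 -- the driver remains responsible at all times.
print(SAELevel.L2_PARTIAL)
```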

As consumer research and data analytics firm J.D. Power describes it:

“It is worth repeating and emphasizing the following: As of May 2021, no vehicles sold in the U.S. market have a Level 3, Level 4, or Level 5 automated driving system. All of them require an alert driver sitting in the driver’s seat, ready to take control at any time. If you believe otherwise, you are mistaken, and it could cost you your life, the life of someone you love, or the life of an innocent bystander.”

To get a better picture of these various levels of autonomy, take a look at this graphic produced by the Society of Automotive Engineers International.

Now we’ve got some context…

 

So let’s hear what the experts have to say.

The consensus, as you might have guessed, is that we’re nowhere near the elusive goal of a Level 5 passenger vehicle.

“Ten years ago, we were all promised we’d be in autonomous vehicles by now,” said panel moderator Michelle Gee, a Business Development and Strategy Director with extensive experience in the automotive and aerospace sectors. Gee then asked panelists for their own predictions as to when Level 4 or 5 vehicles would truly arrive.

“I think we’re still probably about seven-plus years away,” offered Colin Singh Dhillon, CTO with the Automotive Parts Manufacturers’ Association.

“But I’d also like to say, it’s not just about the form of mobility, you have to make sure your infrastructure is also smart as well. So if we’re all in a bit of a rush to get there, then I think we also have to make sure we’re taking infrastructure along with us.”

It’s an important point.

Vehicles on the path to autonomy currently have to operate within an infrastructure originally built for human beings operating Level 0 vehicles. As they move up the levels of autonomy, such vehicles must be able to scan and interpret signage and traffic lights, understand weather and traction conditions – and much more.

Embedding smart technologies along urban streets and even on highways could help enable new functionality and streamline data processing in the future. If a Level 4 or 5 vehicle ‘knew’ there was no traffic coming at an upcoming intersection, there would be no need to stop. In fact, if *all* vehicles were Level 4 or above, smart infrastructure could eliminate the need for traffic lights and road signs entirely.

 

Seven to 10 years?

 

If that’s truly the reality, why is there so much talk about autonomous cars right now?

The answer, it was suggested, lies in commonly used – but misleading – language. The term “self-driving” has become commonplace, even when referring solely to the ability of a vehicle to maintain speed and lane position on the highway. Tesla refers to its beta software as “Full Self-Driving.” And when consumers hear that, they think autonomy – even though such vehicles are only Level 2 on the autonomy scale. So some education around language may be in order, suggested some panelists.

“It’s the visual of the word ‘self-driving’ – which somehow means: ‘Oh, it’s autonomous.’ But it isn’t,” explained Dhillon. “…maybe make automakers change those terms. If that was ‘driver-assisted driving,’ then I don’t think people would be sleeping at the wheel whilst they’re on the highway.”

One panelist suggested looking ahead to Level 5 may be impractical – and even unnecessary. Recall that Level 5 means a vehicle capable of operating in all conditions, including weather events like hurricanes, where the vast majority of people would not even attempt to drive.

“It’s not safe for a human to be out in those conditions…I think we should be honing down on the ‘must-haves,’” offered Selika Josiah Talbott, a strategic advisor known for her thoughtful takes on autonomy, EVs and mobility.

“Can it move safely within communities in the most generalised conditions? And I think we’re clearly getting there. I don’t even know that it’s (Level 5) something we need to get to, so I’d rather concentrate on Level 3 and Level 4 at this point.”

 

InDro’s Peter King agrees that Level 5 isn’t coming anytime soon.

“I believe the technology will be ready within the next 10 years,” he says. “But I believe it’ll take 30-40 years before we see widespread adoption due to necessary changes required in infrastructure, regulation and consumer buy-in.”

And that’s not all.

“A go-to-market strategy for Level 5 autonomy is a monumental task. It involves significant investments in technology and infrastructure – and needs to be done in collaboration with regulators while also factoring in safety and trust from consumers, with a business model that is attainable for the masses.”

What about robots?

Specifically, what about Uncrewed Ground Vehicles like InDro’s Sentinel inspection robot, designed for monitoring remote facilities like electrical substations and solar farms? Sentinel is currently teleoperated over 4G and 5G networks with a human controlling the robot’s actions and monitoring its data output. 

Yet regular readers will also know we recently announced InDro Autonomy, a forthcoming software package we said will allow Sentinel and other ROS2 (Robot Operating System) machines to carry out autonomous missions.

Were we – perhaps like some automakers – overstating things?

“The six levels of autonomy put together by the SAE are meant to apply to motor vehicles that carry humans,” explains Arron Griffiths, InDro’s lead engineer. In fact, there’s a separate categorization for UGVs.

The American Society for Testing and Materials (ASTM), which creates standards, describes those tiers as follows: “Currently, an A-UGV can be at one of three autonomy levels: automatic, automated, or autonomous. Vehicles operating on the first two levels (automatic and automated) are referred to as automatic guided vehicles (AGVs), while those on the third are called mobile robots.”

“With uncrewed robots like Sentinel, we like to think of autonomy as requiring minimal human intervention over time,” explains Griffiths. “Because Sentinel can auto-dock for wireless recharging in between missions, we believe it could go for weeks – quite likely even longer – without human intervention, regardless of whether that intervention is in-person or virtual,” he says.

“The other thing to consider is that these remote ground robots, in general, don’t have to cope with the myriad of inputs and potential dangers that an autonomous vehicle driving in a city must contend with. Nearly all of our UGV ground deployments are in remote and fenced-in facilities – with no people or other vehicles around.”

So yes, given that InDro’s Sentinel will be able to operate independently – or with minimal human intervention spread over long periods – we are comfortable with saying that machine will soon be autonomous. It will even be smart enough to figure out shortcuts over time that might make its data missions more efficient.

It won’t have the capabilities of that elusive Level 5 – but it will get the job done.

InDro’s take

 

Autonomy isn’t easy. Trying to create a fully autonomous vehicle that can safely transport a human (and drive more safely than a human in all conditions) is a daunting task. We expect Level 5 passenger vehicles will come, but there’s still a long road ahead.

Things are easier when it comes to Uncrewed Ground Vehicles collecting data in remote locations (which is, arguably, where they’re needed most). They don’t have to deal with urban infrastructure, unpredictable drivers, reading and interpreting signage, etc.

That doesn’t mean it’s easy, of course – but it is doable.

And we’re doing it. Stay tuned for the Q1 release of InDro Autonomy.

 

Coming soon: InDro Autonomy software

By Scott Simmie

We’ve got news.

InDro Robotics is in the final stages of developing a new product that will vastly improve the capabilities of ground robots running on the ROS2 or ROS1 (Robot Operating System) libraries. In short, it will transform a remotely tele-operated machine into an autonomous one. The software – no surprise here – is called InDro Autonomy.

“It’s our solution to automating the mundane tasks of outdoor navigation,” says Arron Griffiths, lead engineer at InDro’s Area X.O R&D facility in Ottawa, Ontario.

As you know, ground and aerial robots excel at jobs often referred to as “The Three Ds” – meaning tasks that are dirty, dull or dangerous (and sometimes all three). We would actually add a fourth “D” to that set and include jobs that are distant.

Think, for example, of remote locations such as electrical substations or solar farms. Currently, most robots (including many InDro products in the field) are remotely tele-operated by someone connecting with those robots over 4G or 5G networks. That’s an awesome capability, and one being put to use regularly by InDro clients. (In fact, we recently did a demonstration for T-Mobile at an Analyst’s Summit in Washington State. Attendees could operate a robot using an Xbox controller in real-time. The robot was in Ottawa – some 4,000 kilometres away.)

So tele-operation will still be with us, and is exceedingly effective for many use-cases. But there are some tasks where autonomy would be preferable.

Below: InDro’s Austin Greisman tweaks a Sentinel robot at our Area X.O facility

Why autonomy?

 

While tele-operations are appropriate for a variety of inspection and surveillance tasks, there are cases where autonomy is the preferable option – specifically, when a robot is repeatedly carrying out the same route or mission. This is especially true as the route grows longer, such as inspections at solar farms. Why assign a human being to control a robot for a lengthy and repetitive task when the robot can carry it out on its own? That’s where InDro Autonomy comes in.

“Solar farms can be absolutely massive and very hard to maintain,” explains Griffiths. “Having a human walk or even drive through with a thermal camera is very inefficient. Our Sentinel ground robot, for example, can do this on its own.”

InDro’s Austin Greisman, who has been integral to InDro Autonomy’s development, puts it like this:

“Performing routine inspection checks – going to see the same thing over and over again – can be automated with this type of system,” he says. “So you just collect the data.”

 

Compatibility

 

InDro Autonomy was inspired by our exclusive Commander module. It’s a bolt-on solution for ground robots that allows for rapid customisation and sensor integration. It does so by containing the complete Robot Operating System (ROS2) software library onboard, as well as an Edge computer for real-time processing. With its 4G and 5G tele-operating capabilities, open USB ports, built-in camera and slick User Interface, it’s a snap to add and integrate sensors and carry out remote missions via dashboard.

While InDro Autonomy was developed to work seamlessly with Commander-enabled robots, our developers wanted to maximise interoperability. So while Commander takes the pain out of robot customisation, you don’t have to have it to deploy InDro Autonomy.

“Not only does InDro Autonomy fully work with any platform Commander supports, InDro Autonomy will work with any ROS2-based Robot that has an IMU, GPS, and wheel odometry,” says Greisman.
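As a rough illustration of that minimum sensor set, here is a hedged rclpy sketch that simply listens for the three inputs Greisman mentions. The topic names (imu/data, gps/fix, odom) are common ROS2 conventions, not confirmed InDro Autonomy interfaces:

```python
# A minimal sketch of the three sensor inputs described above.
# Topic names are assumptions based on common ROS2 conventions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu, NavSatFix
from nav_msgs.msg import Odometry


class SensorCheck(Node):
    """Confirms the robot publishes an IMU, a GPS fix and wheel odometry."""

    def __init__(self):
        super().__init__('sensor_check')
        self.seen = set()
        self.create_subscription(Imu, 'imu/data', lambda m: self._seen('imu'), 10)
        self.create_subscription(NavSatFix, 'gps/fix', lambda m: self._seen('gps'), 10)
        self.create_subscription(Odometry, 'odom', lambda m: self._seen('odom'), 10)

    def _seen(self, name):
        if name not in self.seen:
            self.seen.add(name)
            self.get_logger().info(f'{name} is publishing ({len(self.seen)}/3)')


def main():
    rclpy.init()
    rclpy.spin(SensorCheck())


if __name__ == '__main__':
    main()
```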

Here’s a look at the Commander module, the box that’s the brains of Sentinel. It’s directly above the robot treads.

 

How does it work?

 

With lots of coding, lol.

But there’s more than that, of course. InDro Autonomy works by recording its path via GPS during an initial tele-operated cruise through the desired route. In the case of Commander-enabled robots, that GPS trail is immensely accurate.

“We have a very high quality GPS unit in Commander that is accurate to around 5 cm,” explains Greisman. “As the robot moves, it drops GPS breadcrumbs. Later on, at any time, even if you reboot it and drop it somewhere random, it will understand where it is relative to where it’s been – so it can figure out where it needs to go next.”
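Here is a minimal, pure-Python sketch of that breadcrumb idea as Greisman describes it: record spaced GPS points during the tele-operated run, then after a reboot find the nearest breadcrumb and resume from there. It illustrates the concept only – it is not InDro’s actual implementation, and the 2 m spacing is an assumed value:

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(h))

class BreadcrumbTrail:
    """Record GPS 'breadcrumbs' during a teleoperated run, then resume later."""

    def __init__(self, spacing_m=2.0):
        self.spacing_m = spacing_m   # assumed spacing between breadcrumbs
        self.points = []             # ordered (lat, lon) waypoints

    def record(self, lat, lon):
        """Drop a breadcrumb once we've moved far enough from the last one."""
        if not self.points or haversine_m(self.points[-1], (lat, lon)) >= self.spacing_m:
            self.points.append((lat, lon))

    def resume_index(self, lat, lon):
        """After a reboot, find the closest recorded breadcrumb to start from."""
        return min(range(len(self.points)),
                   key=lambda i: haversine_m(self.points[i], (lat, lon)))
```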

 

Wait, there’s more!

 

And there is.

Greisman says InDro Autonomy has been designed for even very large spaces. Often, covering more territory requires greater onboard computational power. Not with this system.

“You can be in as big of a space as you like and there’s no computational limitation to that. What sometimes happens with autonomy systems is that as you go to a bigger space, you need more computational power. This system has been designed to be efficient on low-compute platforms. So this allows us to keep costs low for the client as well.”

 

Anomaly detection, alerts

 

As InDro Autonomy gets ready for a Q1 2023 release, InDro’s R&D team has a few other features it’s planning on adding to the software.

Because it’s all about autonomous missions, it’s assumed that not every inspection will be monitored in real-time. All captured data will be georeferenced and stored, of course, with the ability to review any mission remotely. But InDro thought it would be good to take things even further.

InDro is baking in the ability to customize the software to customer requests. So, for example, if components at an electrical substation should not exceed 50° Celsius, an alert will be sent by text or email if the robot detects temperatures exceeding that threshold. Engineers are also looking into integrating an early warning system for arcing, which could be triggered by its unique sonic signature. Plus, Commander-enabled robots will have the ability to autonomously snug right up to their charging docks.
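As a sketch of what such a per-customer rule might look like in code – purely illustrative, since the article doesn’t describe the software’s actual configuration format or data structures – a threshold check over georeferenced thermal readings could be as simple as:

```python
from dataclasses import dataclass

@dataclass
class ThermalAlertRule:
    """A per-customer rule, e.g. 'alert if any substation component exceeds 50 C'."""
    label: str
    max_temp_c: float

def check_frame(readings, rule):
    """Return alert strings for any georeferenced reading over the threshold.

    `readings` is a list of (lat, lon, temp_c) tuples from the thermal camera;
    the real data layout is an assumption for this sketch.
    """
    return [
        f"ALERT [{rule.label}]: {temp:.1f} C at ({lat:.6f}, {lon:.6f}) "
        f"exceeds {rule.max_temp_c} C"
        for lat, lon, temp in readings
        if temp > rule.max_temp_c
    ]

# Example: the 50-degree substation threshold mentioned above
rule = ThermalAlertRule(label="substation components", max_temp_c=50.0)
print(check_frame([(45.3476, -75.7567, 62.4)], rule))
```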

So InDro Autonomy has a lot of features. And they’re coming soon.

InDro’s take

 

When InDro engineers developed the InDro Commander, we knew we had something special. And it wasn’t long before a growing number of clients realized that fact. Our Commander module became popular because it simplified a previously painstaking task, making a tough and time-consuming job easier.

We feel the same way about InDro Autonomy – and we believe our clients will, too.

Hallowe’en transformation: From Unitree GO 1 to Pokémon character Jolteon

By Scott Simmie

 

What could possibly be better than dressing up for Hallowe’en?

For Dave Niewinski, the answer was clear: Dress up a robot for Hallowe’en. Specifically, transform a quadruped robot into a Pokémon character.

“This was definitely a different project than what I normally do – usually they’re more technical,” explains Niewinski. “I just wanted to try something different.”

And it doesn’t get more different than this:

A little background

 

Dave Niewinski is an engineer, one who has long had a passion for robotics. He does contract and consulting work for a number of major robotics companies (including InDro) and others interested in learning more about robotic solutions. That’s what pays the bills.

But Niewinski also wears another hat: Builder of fantastical creations that wind up on his Dave’s Armoury website and YouTube channel. You’ll find a ton of amazing videos on that channel – everything from a robotic arm lighting up fireworks for Canada Day through to a custom setup for pouring beer from a tap. (The “arm” in “Dave’s Armoury” comes from his frequent work with robotic arms.)

The YouTube channel provides Niewinski with an outlet for his creative side, a place where he builds devices with wild new capabilities, while also entertaining and educating his audience along the way.

“I end up making YouTube videos to (A): Have fun. But also partly for just educating people,” he says. “Usually when I write code I put the code up online. I also like showing that robots aren’t some scary unattainable thing.”

As it turns out, InDro Robotics had loaned Niewinski a Unitree GO 1 robot to play around with. The quadruped already looks a bit like an animal, and that got him thinking: What if he dressed it up for Hallowe’en?

And so he did, choosing to transform the GO 1 into Jolteon. This wasn’t a simple task, and at one point in the transformation GO 1 looked more than a bit like an unfortunate lamb:

 

With a little help from his neighbour – and some 200 hours of 3D printing – Jolteon took shape.

Unlike most of Niewinski’s projects, this one didn’t involve any coding. It was simple (and not-so-simple) cosmetics.

Check out the full video showcasing how GO 1 became Jolteon: We guarantee you it’s totally worth your time. (And if you like it, subscribe to his amazing channel.)

The reaction

 

Niewinski took Jolteon for a walk around the neighborhood – and people were amazed by his creation.

“For a lot of people my age (he’s 33), our childhood was Pokémon. So to actually see it walking around in reality was unattainable until now. People loved it,” he says.

His kids loved it, too. Maddy (five) and Ollie (three) are used to seeing robots around the house. So much so that they give them nicknames. The GO 1 is affectionately known as ‘Max’ and the AgileX Bunker Pro is known as ‘Frank the Tank.’

Speaking of Frank: “He pulls all the kids around town. I’ve got a La-Z-Boy on top of him at the moment.”

Below: Jolteon’s 3D-printed head…

Robots keep getting better

 

Just a few short years ago, an affordable quadruped like the Unitree GO 1 would have been unthinkable. But with the increase in computing power (EDGE computing), better and cheaper sensors, plus advances in robot hardware – robots just keep getting better at a near-exponential pace. Niewinski refers to the sensors, hardware and raw computing power as the three essential forces driving this change.

“All three of those are advancing so quickly, and they all rely on each other,” he says.

“You could have the best dog hardware, but if you don’t have great cameras or great processing power, it doesn’t really matter. Those three pillars of robotics are all advancing, (and) we’re going to continue seeing ridiculous leaps in robotics.”

InDro’s take

 

It’s always a pleasure both working with Dave Niewinski on the serious stuff – and seeing the amazing projects he creates on his own. It’s terrific to see what passion, combined with technical talent, can produce.

We also agree with him about those three pillars of robotics. Whether drones or ground robots, we’ve seen phenomenal leaps in technology in recent years. Better sensors, better robot hardware, better EDGE computing – all working synergistically.

Those leaps mean more powerful, more affordable solutions for data acquisition, asset monitoring and much more. It also keeps our engineering staff on their toes to see how InDro can further improve that technology for even more ambitious use-cases.

Sometimes, though, it’s nice to take a break and just have fun. And Niewinski’s creation certainly gave us – and, hopefully, you – a smile.

InDro Robotics and T-Mobile: A 5G match

InDro Robotics was recently invited to attend an analyst’s summit in Bellevue, Washington State, put on by T-Mobile for Business. We were demonstrating remote operations of our Sentinel inspection robot, with Command & Control taking place over the 5G network. We were at the summit; the robot was in Ottawa.

We’ll get to that in a moment. But first, it’s worth looking at how InDro became involved with T-Mobile. It began with a different invitation. This one, from the Electric Power Research Institute, or EPRI.

EPRI is a major non-profit that does research and development to improve the efficiency of power generation and delivery. As the Institute states: “EPRI’s trusted experts collaborate with more than 450 companies in 45 countries, driving innovation to ensure the public has clean, safe, reliable, affordable, and equitable access to electricity across the globe.”

EPRI does a *lot* of testing within its R&D scope, often looking to find Best Practices it can share with its members. And in 2022, EPRI decided it wanted to put some ground robots to the test. Specifically, it was interested in how remotely operated or autonomous robots might be used to inspect electrical substations. These are usually remote, unstaffed facilities where high voltage is stepped down prior to being delivered to consumers (though some substations step up voltage).

EPRI invited a few ground robot manufacturers to its testing facility in Lenox, Massachusetts to see how well robots could carry out remote inspection. That facility is an electrical substation that can be energised or de-energised. It also features a set of overhead water pipes that can be used to simulate rain.

And so, earlier this year, InDro Robotics packed up Sentinel for the test.

 

The 5G connection

 

Sentinel, like all of our ground robots (and drones), has been modified by InDro to enable remote teleoperations. By connecting to 4G or 5G networks at both ends, InDro devices can be operated from hundreds and even thousands of kilometres away. But while 4G works for many operations, 5G is the gold standard.

That’s because 5G allows for dense data downloads, virtually in real-time. Since the Sentinel robot is equipped with multiple sensors (a pan-tilt-zoom camera with 30x optical zoom, a thermal camera, and sometimes LiDAR), a large data pipeline is advantageous. Not only does it give the operator a real-time feed with minimal latency – it also allows for direct data uploads to the cloud.
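For a rough sense of scale, here is a back-of-envelope tally in Python. The per-stream bitrates are generic, assumed figures for this class of payload – not Sentinel’s actual specifications:

```python
# Back-of-envelope uplink estimate for a multi-sensor inspection robot.
# All bitrates are assumed, illustrative figures, not Sentinel's real numbers.
sensor_streams_mbps = {
    "1080p HD video (H.264)": 8.0,
    "thermal camera stream": 2.0,
    "compressed LiDAR packets": 40.0,
    "telemetry and control": 0.5,
}

total = sum(sensor_streams_mbps.values())
print(f"Approximate sustained uplink: {total:.1f} Mbps")
# Roughly 50 Mbps fits comfortably within typical 5G throughput, but would
# strain many 4G links -- one reason 5G is the gold standard for this work.
```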

InDro, of course, is based in Canada. And the EPRI test facility is based in the US. That meant we required a high-speed, robust cellular network in the US to carry out the test.

Because InDro is an R&D company, we did extensive research prior to selecting a US carrier. We wanted to make a single choice and stick with it, given the growing number of deployments in the US.

The choice quickly became obvious: T-Mobile. Its 5G network covers the entire country, with new towers being added each week. T-Mobile has 5G speeds nearly twice as fast as its competitors. Typical download speeds on T-Mobile’s 5G network are 75–335 Mbps, with peaks over 1 Gbps.

And when we were in Bellevue on T-Mobile’s network? Just check out the speed test below:

 

T-Mobile Summit

 

T-Mobile got wind of us selecting them as our network of choice for US operations. The company even issued a news release on that front. And Peter King, InDro’s Head of Robotic Solutions, wrote a guest blog about the Lenox experience and why a solid 5G network is so important to this kind of work.

And all of that? Well, it led to InDro being invited to the first-ever T-Mobile for Business Analysts Summit, held in Bellevue in October. We were asked to demonstrate the low-latency and data throughput that 5G enables. And what better way than to have our Sentinel robot connected to 5G in Ottawa…with us connected to 5G in Bellevue.

Those attending the summit were invited to the InDro display, where they could operate the robot using a simple Xbox controller. Video and thermal imaging were returned in real-time. But the real star of the show was near-zero latency. Seriously, the instant the controller was touched, Sentinel would respond.
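For the curious, this is roughly how joystick teleoperation works on a ROS2 robot – a generic sketch mapping an Xbox controller’s left stick to velocity commands, not InDro’s actual control stack (the topic names and speed limits here are assumptions):

```python
# Generic ROS2 joystick-teleop sketch: left stick -> velocity commands.
# Not InDro's control stack; topics and limits are illustrative assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist


class XboxTeleop(Node):
    def __init__(self):
        super().__init__('xbox_teleop')
        self.max_linear = 1.0    # m/s, illustrative limit
        self.max_angular = 1.5   # rad/s, illustrative limit
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
        self.create_subscription(Joy, 'joy', self.on_joy, 10)

    def on_joy(self, msg):
        cmd = Twist()
        cmd.linear.x = self.max_linear * msg.axes[1]    # left stick up/down
        cmd.angular.z = self.max_angular * msg.axes[0]  # left stick left/right
        self.pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(XboxTeleop())


if __name__ == '__main__':
    main()
```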

The hands-on demo even impressed John Saw, a McMaster engineering graduate who’s now the Executive Vice President, Advanced & Emerging Technologies at T-Mobile. Here’s John, controlling Sentinel over 5G in Ottawa from more than 4,100 km away:

It’s clear John Saw thought the experience was kind of cool – and indicative of the kind of applications 5G unlocks.

In fact, he even mentioned it to the analysts.

InDro’s take

 

5G connectivity is about a lot more than phone calls. The speed and bandwidth of 5G is what will power the growing Internet of Things. And our Sentinel robot, arguably, is an IoT device. Operating it using T-Mobile’s network in the US, connected to the private 5G network at Area X.O, was a snap.

“Building ground and aerial robots that can be operated from great distances is integral to InDro Robotics and its clients,” explains InDro CEO Philip Reece. “But this capability can only be realized with fast, reliable networks. T-Mobile was the obvious choice for our US operations – and we look forward to many more deployments over 5G in the future.”

Finally, a shout-out to T-Mobile for Business. Thanks for inviting us to Washington State; it was a privilege to be able to showcase our technology with the help of your 5G network.

Spexi announces “Spexigon” – a global fly-to-earn platform

Vancouver-based Spexi Geospatial has some news – and it’s big.

The company has announced a plan, and a platform, to capture high-resolution aerial data of the earth with drones. Drone pilots will be able to fly to earn cryptocurrency – or even dollars.

The long-term goal? Well, picture crystal-clear data sets of cities, infrastructure, and even rural settings. With each individual pilot capturing data from different locations, Spexigon will assemble it over time to form a global jigsaw puzzle – and sell parts of that dataset to clients.

We’ll get into more details shortly, but Spexi’s plan has some strong backers – including InDro Robotics.

 

News release

 

News of Spexigon came in the form of an announcement. The company revealed it had secured $5.5 million USD in seed funding “to pursue our vision of collecting Earth’s most important data with drones.” The funding round was led by Blockchange Ventures, with additional investment from InDro Robotics, Protocol Labs, Alliance DAO, FJ Labs, Dapper Labs, Vinny Lingham, Adam Jackson, and CyLon Ventures.

The same team that built Spexi – an easy-to-use system for automated flight and data acquisition – is developing Spexigon. This brief video gives a “big picture” look at how it will work when it’s rolled out next year.

“Fly to earn”

 

A big part of what makes Spexigon’s plan so intriguing is what you might call incentivised crowd-sourcing. Anyone with a drone can download the forthcoming Spexigon app and fly an automated flight. The images will be uploaded to Spexigon to build the database – and the pilot will be rewarded.

“With our new Fly-to-Earn model, people who own consumer drones will be able to earn $SPEXI tokens and dollars while building a high resolution base layer of the earth,” reads the Spexigon announcement. “It is our hope that soon any organization or individual will be able to use the imagery collected by the Spexigon platform to make better decisions.”

 

Business model

 

You could think of this, over time, as something like Google Earth – only with much sharper aerial imagery. Every time a pilot carries out a flight for Spexigon, that map will continue to fill in, building Spexigon’s database. Clients will purchase imagery online.

“This new base layer will enable governments and organizations of all sizes to make better decisions about real world assets like buildings, utilities, infrastructure, risk and natural resources, without requiring people on the ground,” continues the announcement.

“By using Spexigon, organizations that require high-resolution aerial imagery will no longer need to own their own drones or hire their own pilots. Instead, they’ll use our web and mobile app to search for and purchase imagery. Data buyers will then be able to use a variety of internal and external tools to put the imagery to use.”

 

For pilots

 

Spexigon says it will have online training when it launches. Pilots will learn how to use the app to carry out their flights – which, obviously, the pilots will monitor. Depending on the location, pilots can earn cryptocurrency or actual dollars. Some locations will naturally have greater value to Spexigon and its clients than others.

“The app will contain a map of the earth overlaid with hexagonal zones called ‘Spexigons’. Spexigons that are open and ready to fly will be easily visible so pilots can choose an area close to them and begin collecting imagery,” says the company.

“To ensure that imagery is captured in a safe, standardized, and repeatable way, our app controls each pilot’s drone automatically while they supervise the flight. Although our app will do the flying, pilots will always be in command and will have the ability to take back manual control at any time if need arises.”

Spexigon is now starting to build the app, and already has a small community emerging. You can join its Telegram channel here – and there’s also a Discord channel.

As for those ‘Spexigons’, the image below gives you an idea what those pieces of the puzzle might look like.

InDro’s take

 

Since InDro Robotics is one of the backers of Spexigon, we obviously feel the plan is a good one.

It comes from the outstanding team that built Spexi from scratch into a user-friendly, automated system for capturing and crunching aerial data. We also believe drone pilots will embrace this unique “fly to earn” model – a global first.

“The Spexi team has already created an excellent and proven Software as a Service product and clearly has the expertise in this space,” says InDro Robotics CEO Philip Reece. “I’m genuinely excited about the potential for Spexigon to become the ‘go-to’ database of high-quality aerial imagery from around the world.”

So are the rest of us.