Johan Terblanche

Application: Lidar

Updated: Nov 18



“NavAbility’s platform was critical for us to rapidly go from robotics idea to proof-of-concept demonstration. A small engineering team was able to show customers a working solution in six months, saving us years of up-front development costs.” -- Friedl Swanepoel (Executive at Industri4 and ilima)

Overview

The Built Environment, although well-established, is ripe for automation.  We think automation will develop in incremental steps that optimize operating costs, while keeping an eye on wider revolutionary trends in the future.


We believe that any automation supporting project execution must be robust and operator-friendly.  Whether it’s a computational digital twin or in-situ robotic operations, how do you ensure that the fundamental (computational) task of localization and mapping works — and works reliably in such a dynamic environment?


Going further, how could you leverage the same survey / sensor data to discern environmental changes captured in the raw data and turn them into actionable information?  Such data and information are necessary for humans and robots alike.


Industri4 addressed this problem in real-world robotics and turned it into a strength. By making use of the natively multi-modal (i.e. non-Gaussian) solver in NavAbility Accelerator, Industri4 was able not only to compensate for changes in the environment, but also to use the data from an autonomous vehicle to identify discrepancies between the as-designed blueprints and the as-built model.


Daily interactions with the WhereWhen.ai team accelerated Industri4’s product development, and the teams produced a fully operational demonstration for a key customer in the span of weeks rather than years.





Application Highlights


  • Rapid development and faster time to market using NavAbility Accelerator

  • Unprecedented robustness delivered using the natively multi-modal (non-Gaussian) open-source core solver algorithm

  • Flexible integration of disparate sources of information with the open APIs

  • A reduced-cost-of-ownership hosted solution (see ‘Get Started’) for production implementations, online visualizations, and high-assurance operations

  • Careful consideration of automation project-lifecycle, technology-growth, and multi-system interoperability issues

Problem Sketch

In a construction environment, multiple data assets and opportunities are available.  The Built Environment today relies on digital/virtual assets which should be useful to robotic mapping and manipulation — but the technical challenges can be considerable.  In addition, projects today rely heavily on human-operated, and increasingly robotic, platforms to collect and act on volumes of data.


Recorded and prior data contains many features of temporary or permanent changes.  Some of the data overlaps with previous recordings of the same environment, while some data is of entirely new environments.  The recorded data is captured from different sensor platforms and different sensing technologies, or by human input.  Some data is based on laser ranging or structured light, while other data consists of passive camera images in the optical or thermal spectrum; further data may include magnetic or odometry distance measurements.  Yet more data may include ground-penetrating radar, gyroscopes, accelerometers, or inclinometers.  Some situations may even include synthetic aperture radar or acoustic data.


The problem is how to leverage all this data in a digital format, either to develop a digital twin or to build multiple ad-hoc digital map datasets of a construction site/situation, and how to use that same map to navigate multiple (real-time) robots on-site.  One key challenge is to have the computers find a reliable shared frame of reference between humans and the equipment.
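One textbook way to establish such a shared frame of reference is to rigidly align a handful of surveyed control points with their matching points in a CAD or map frame.  The sketch below is a minimal, illustrative implementation of the standard Kabsch/Procrustes method in plain NumPy — it is not the NavAbility API, and the point values are made up:

```python
import numpy as np

def align_frames(survey_pts, cad_pts):
    """Estimate the rigid transform (R, t) mapping survey-frame points onto
    matching CAD-frame points via the Kabsch / Procrustes method (SVD)."""
    mu_s = survey_pts.mean(axis=0)
    mu_c = cad_pts.mean(axis=0)
    H = (survey_pts - mu_s).T @ (cad_pts - mu_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution so R stays a proper rotation.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_s
    return R, t

# Three surveyed corner points, and the same corners in a CAD frame that is
# rotated 90 degrees and shifted relative to the survey frame (toy numbers).
survey = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
cad = survey @ R_true.T + np.array([5.0, -3.0])

R, t = align_frames(survey, cad)
print(np.allclose(survey @ R.T + t, cad))  # True
```

With noisy, multi-sensor data the correspondence problem itself becomes the hard part — which is where the factor-graph formulation described in the next section takes over.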


Furthermore, how can you make a small technology investment now that has the so-called “Systems Engineering” runway to later be extended, modified, or integrated with other systems (i.e. future-proofed)?  What robotic software / technology decisions are amenable to many small, aggregating upgrades over time, without having to scrap and replace previous work?



A Select Technical Showcase

The NavAbility localization and mapping approach focuses on the central problem of solving (doing computational inference on) heterogeneous non-Gaussian factor graphs.  To the best of our knowledge this is the first serious non-Gaussian (multi-modal) solution available to the public.  Our design philosophy considers i) batch (overnight) processing workloads, which are immediately useful to ii) real-time robotic platform navigation and mapping; and iii) early technology decisions that allow the technology to scale to the complexity of many future operations without having to rebuild previous software due to integration issues.
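To give a feel for what non-Gaussian factor-graph inference means, here is a deliberately tiny 1D illustration in plain NumPy (not the NavAbility solver): a robot position is constrained by a Gaussian odometry factor and by an ambiguous range factor whose likelihood is a two-component mixture, because the beam could have hit either of two walls.  All numbers are invented for the example:

```python
import numpy as np

x = np.linspace(-5.0, 15.0, 2001)          # inference grid over position (m)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Heterogeneous factors on the same variable:
odometry = gaussian(x, 5.0, 3.0)                       # weak unimodal prior
range_factor = 0.5 * gaussian(x, 2.0, 0.4) + \
               0.5 * gaussian(x, 9.0, 0.4)             # multimodal likelihood

# Inference is (up to normalization) the product of the factor beliefs.
posterior = odometry * range_factor
posterior /= posterior.sum()

# The posterior keeps both hypotheses instead of averaging them away.
modes = x[(posterior > np.roll(posterior, 1)) &
          (posterior > np.roll(posterior, -1))]
print(modes)   # two modes, near x ≈ 2 and x ≈ 9
```

A solver restricted to Gaussian beliefs would be forced to commit to one mode (or an average of the two); a natively multi-modal solver carries both hypotheses forward until later measurements disambiguate them.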


Our approach also combines lessons from diverse (community-wide) operations that are centered around human-operated equipment.  Our technology further provides a clear pathway to more automated robotic equipment, freeing humans from time-intensive robot handling tasks.


To showcase deeper fundamental results from our technology, see for example the selected non-Gaussian result shown in Figure 1.  This is a simultaneous localization and mapping (SLAM) result where the posterior estimates of poses and landmarks can have multi-modal belief.  NavAbility has developed Navigation-Affordances as virtual/digital representations (or assets) which allow the user to inject prior data or human knowledge and experience into the live navigation and mapping computation, with minimal risk of “breaking” the SLAM solution when discrepancies or variations in the data occur.  This fundamental duality is resolved using our unique multi-modal (non-Gaussian) factor graph formulation and navigation AI solver technology.



Figure 1: Example of a 2D floor plan (with unknown digital asset / CAD model errors).  The straight-edge cyan lines indicate a zoomed-in portion of an ‘as-designed’ floor plan, while the straight-edge red lines show the ‘as-built’ construction instead.  The calculated locations for the dual ‘as-built’ and ‘as-designed’ landmarks are shown as the blue and red probabilistic density contours.  The top blue density illustrations show convergence to the user-provided Navigation-Affordance floor plan (i.e. a CAD model), while the bottom red density illustrations show convergence to both the ‘as-designed’ and ‘as-built’ structures.  Stable computational convergence is good!  The duality produced by the discrepancy can now be clearly identified from the calculated blue and red densities.  The result at bottom right is bi-modal (red contours).  This bi-modality is a special feature of our algorithm and a strong departure from competing software algorithms which are forced to make unimodal Gaussian assumptions (as shown by conventional result ‘G’).  A conventional SLAM system cannot represent such multi-modality, and as a result would have hidden this duality error among massive amounts of data and results.
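A tiny numeric illustration (plain NumPy, with hypothetical wall positions not taken from the actual dataset) of why the unimodal Gaussian assumption in conventional result ‘G’ hides the as-designed / as-built duality:

```python
import numpy as np

# Suppose the belief over a wall's position is genuinely bimodal: the
# 'as-designed' CAD location at 2.0 m and the 'as-built' location at 2.6 m
# (illustrative numbers only).
rng = np.random.default_rng(42)
samples = np.concatenate([rng.normal(2.0, 0.05, 500),   # as-designed mode
                          rng.normal(2.6, 0.05, 500)])  # as-built mode

# Forcing a single Gaussian collapses both modes into one estimate ...
mu, sigma = samples.mean(), samples.std()

# ... whose mean lies between the walls, where there is no structure at all,
# and whose inflated sigma masks the 0.6 m discrepancy instead of flagging it.
print(f"Gaussian fit: mu = {mu:.2f} m, sigma = {sigma:.2f} m")
```

A multi-modal belief representation keeps both peaks explicit, so the 0.6 m as-designed versus as-built discrepancy surfaces as actionable information rather than being absorbed into an inflated covariance.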

Dig Deeper

Curious about technical details? Want to try the code yourself? Great!

Catch us at the IEEE CASE2021 conference


[preprint download notice] © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
