
Application: Radar

  • Dehann Fourie
  • Feb 1, 2022
  • 5 min read

Updated: Nov 18, 2024


Introduction

Bringing artificial intelligence (AI) into operations around high-value maritime infrastructure involves a number of unique challenges. New products or services aimed at supporting human operators and the operational efficiency of high-value assets require reliable data fusion algorithms and software technology. One clear example is that autonomy cannot simply assume 100% GPS availability as the mechanism for robust localization, mapping, and data registration in crowded, busy high-value environments; this holds even more strongly for underwater operations.


This example demonstrates how the NavAbility Accelerator and WhereWhen.ai's open-source software can process various raw sensor data to calculate and disseminate a more robust localization and mapping solution — a critical component in any number of infrastructure management AI applications. See the links to our open software below.


Let’s look at building a simultaneous localization and mapping (SLAM) software stack in a Boston Harbor marine environment. Furthermore, let’s assume totally GPS-denied operations, even though various other sensor data can be readily incorporated, including human input, GPS, compass, lidar, camera, prior maps, etc. We use the familiar factor graph algorithm abstraction and are proud to share our enhanced non-Gaussian factor graph capability; see our source code documentation here for more details, along with this motivation for why such an enhanced capability is so important.
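
As a quick taste of the factor graph abstraction, here is a minimal sketch using the open-source Caesar.jl ecosystem (RoME.jl / IncrementalInference.jl); the variable labels and covariance values are illustrative rather than taken from the Boston Harbor example:

```julia
using RoME, Distributions, LinearAlgebra

fg = initfg()                # start an empty factor graph

# First vessel pose in SE(2): (x, y, heading)
addVariable!(fg, :x0, Pose2)

# Anchor the graph with a prior belief; the belief need not be Gaussian,
# since any sampleable distribution can be wrapped as a factor.
addFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), diagm([0.1, 0.1, 0.01]))))
```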


We at WhereWhen.ai focus on providing best-in-class, open-source navigation AI software and cloud platform services to enable, simplify, and accelerate the development of robust digital infrastructure support tools and robots.


Robot, Data, and RobotOS



The video above shows a GPS-denied localization and mapping solution, which is described hereafter. This dataset was collected in Boston Harbor just south of Logan Airport, at coordinates (42.350035, -71.020232). Examples of related raw data are shared by MIT-SeaGrant here.


For this example, we use ROS as a common data-exchange middleware (www.ros.org). The sensor data events (e.g. camera, radar, sonar, lidar, IMU) are digested and used to construct a common factor graph object that the computer can use for a variety of tasks, including SLAM inference or training neural network models. This data processing pipeline was quickly assembled from the modular software building blocks in the NavAbility / Caesar.jl ecosystem.
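
For the curious, here is a minimal sketch of such a ROS ingestion node in Julia with RobotOS.jl; the topic name and the PointCloud2 message type are assumptions for illustration, since the actual dataset topics may differ:

```julia
using RobotOS
@rosimport sensor_msgs.msg: PointCloud2
rostypegen()
using .sensor_msgs.msg

# Callback fired for each incoming radar sweep message; in the real
# pipeline this would hand the sweep to the factor graph front-end.
function onRadarSweep(msg::PointCloud2)
    @info "received radar sweep" msg.header.seq
end

init_node("radar_listener")
sub = Subscriber{PointCloud2}("/broadband_radar/sweep", onRadarSweep, queue_size=10)
spin()
```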


In the video above, the map view (left) shows the estimated robot pose locations as a string of white dots (heading in a north-westerly direction). The surrounding environment map is aggregated as more data is collected. This calculation was done entirely GPS-denied, using our non-Gaussian factor graph solver. Let’s take a look at how this was done. Thereafter, we will also look at the shore-side view and how data from multiple agents and sessions can be brought together using the NavAbility Platform and SDK tools.


SLAM and Missing Data


[ ASIDE: For examples of other techniques such as matched filtering, synthetic arrays, or IMU preintegration, see the source code examples or the references pages, or contact us for more details. ]


This example shows simultaneous localization and mapping (SLAM) under totally GPS-denied operation, using radar instead. By including data from a regular commercial broadband marine radar, we are able to robustly align the scattering point clouds between different poses. For this example, a new robot pose variable is generated in the factor graph on every fifth 360° sweep of the broadband marine radar. Note that the scene contains various dynamic objects (other ships and boats), and furthermore consecutive radar sweeps are similar but not exactly the same. These elements complicate the alignment/correlation of the radar sweeps and can result in non-Gaussian, multi-modal belief likelihood functions, as shown in the next figure:


Non-Gaussian (Multimodal) Data Processing
Left: two radar maps from poses 250 (white) and 255 (green), both in the radar body frame. An odometry transform estimate between the two radar sweeps can be found by some mechanism of correlating them. Note the dynamic ships/boats, as well as the radar “spike” measurement error in the white map. Right: a slice of the kernel Hilbert space volumetric correlation (x, y, theta) between the two radar sweeps. This is essentially a pseudo probability density, derived from the exponential of squared distance on the SpecialEuclidean(2) manifold, and we use it as a variational measurement likelihood function in the non-Gaussian factor graph. Notice the multimodal behavior manifesting as the redder regions. Also note that the dense correlation map shown here does not have to be computed during inference; only the required test points in the correlation map are computed. The dense map is shown to help illustrate some of the non-Gaussian features of the Caesar.jl solver.
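
To make the correlation idea concrete, here is a small, self-contained sketch of that scoring function. It is an illustrative brute-force re-implementation, not the optimized Caesar.jl code path, and the Gaussian kernel bandwidth σ is an assumption:

```julia
using StaticArrays

# Score a candidate SE(2) transform (x, y, θ) by rigidly moving cloud B and
# accumulating Gaussian-kernel similarity against cloud A; the exponential of
# squared distance yields an unnormalized pseudo-likelihood.
function se2correlate(cloudA::Vector{SVector{2,Float64}},
                      cloudB::Vector{SVector{2,Float64}},
                      x::Real, y::Real, θ::Real; σ::Real=1.0)
    R = SMatrix{2,2}(cos(θ), sin(θ), -sin(θ), cos(θ))  # column-major rotation
    t = SVector(x, y)
    score = 0.0
    for q in cloudB
        p = R * q + t                        # candidate alignment of cloud B
        for a in cloudA                      # kernel similarity to cloud A
            score += exp(-sum(abs2, a - p) / (2σ^2))
        end
    end
    return score
end
```

During inference only the transforms actually tested by the solver need this score, which is why the dense correlation volume in the figure never has to be computed in full.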

The 360° radar sweep data from consecutive poses (the so-called variable nodes in the graph) are used to construct odometry factors (graph edges connecting a factor to selected variables). These variable-factor combinations are used during probabilistic inference to simultaneously compute both the location of the vehicle and a map of the surrounding environment.
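
In code, that graph construction step can be sketched as follows; a Gaussian Pose2Pose2 odometry factor is shown for brevity, whereas the actual example uses the non-Gaussian radar-alignment likelihood described above (labels and numbers are illustrative):

```julia
using RoME, Distributions, LinearAlgebra

# One new pose variable per fifth radar sweep (labels follow the figure above)
addVariable!(fg, :x250, Pose2)
addVariable!(fg, :x255, Pose2)

# Relative SE(2) odometry between the two poses: (Δx, Δy, Δθ) with covariance
odo = MvNormal([12.0, 0.5, 0.05], diagm([0.5, 0.5, 0.02]))
addFactor!(fg, [:x250, :x255], Pose2Pose2(odo))
```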


We take this alignment correlation as a usable likelihood in the factor graph to compute the SLAM solution and produce the video above. See below for links to the relevant code showing how this example extracts data from ROS, builds a factor graph, and performs simultaneous localization and mapping (SLAM) solves, including advanced Bayes tree clique-recycling features. Beyond the SLAM front-end sensor-ingestion and graph-construction code, and the open-source Caesar.jl solver code, we also share the visualization code used to generate the content of this page.
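
The solve step itself is brief; this sketch assumes the solveTree! entry point from IncrementalInference.jl, where passing the previous Bayes tree back in lets the solver recycle unaffected cliques as new poses stream in:

```julia
tree = solveTree!(fg)        # first full inference pass

# ... the front-end keeps appending variables and factors ...

tree = solveTree!(fg, tree)  # incremental re-solve, recycling cliques
```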


Although this example shows totally GPS-denied SLAM, the factor graph method used here allows great flexibility in which multi-sensor data can be included in variations of the SLAM solution. GPS, camera, lidar, and other navigation aids (including human input) can readily be incorporated under the same modular graph-based philosophy.
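
As a concrete sketch of that flexibility, an intermittent position fix could be folded into the same graph as a unary prior on whichever pose was current when the fix arrived; the pose label, covariance values, and the compass-supplied heading are all illustrative assumptions:

```julia
# Position from GPS plus heading from a compass, as a full Pose2 prior
gpsFix = MvNormal([1250.0, 830.0, 0.0], diagm([3.0, 3.0, 0.1]))
addFactor!(fg, [:x42], PriorPose2(gpsFix))
```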


Applications



This case study provides a complete mapping system that can easily be extended to comprehensive obstacle detection.


A powerful feature of the NavAbility platform is that it automatically provides an index of all your robotics data, which can be queried both by time and by position. The indexed radar and camera data can easily be used to build an obstacle-tracking and path-planning system that is significantly more accurate than existing systems because:

  • It uses both radar and camera data as well as all the path history of the vehicle(s)

  • It integrates all data from all vehicles

  • The NavAbility algorithm robustly manages conflicting information (mis-classifications and incorrect measurements)

Multi-session, Multi-agent, and Shore-side Cloud



Working with and around valuable assets requires that various users have 24/7 access to current and historic information collected by robots or human operators. The promise of autonomy is to bring better and more frequent data to shore and alleviate the burden of monotonous or dangerous tasks within larger industrial operations, while improving accuracy and the availability of data, results, visualizations, and task features.


WhereWhen.ai hosts cloud resources with which to rapidly build distributed multi-agent applications. The NavAbilitySDK and Cloud App features provide a variety of options for Edge-only, Cloud-only, or distributed Edge-to-Cloud navigation AI features.

See our products and services page for more details on how the NavAbility Accelerator can bring leading navigation AI features to your project.


NavAbility Product Value


WhereWhen.ai has studied this commonality between shore-side and robot operations at length, and provides modular, extensible, open-source software free to the public. NavAbility Accelerator also provides access to our on-call expertise to help customize and accelerate development for your application. WhereWhen.ai also provides code maintenance and continuous improvement of the overall software components. As part of our subscription services, we provide related cloud-hosted computational services so that you can prototype faster and ultimately reduce lifetime cost of ownership. Get started with our cloud platform here. For help developing proof-of-concept demonstrations beyond the monthly subscriptions, please contact us at info@wherewhen.ai.


Dig Deeper

Further examples, including other marine surface and underwater vehicles, are highlighted in peer-reviewed publications, along with source code and documentation here.
