LiDAR Mapping Principles 2017-07-20T11:20:03+00:00

LiDAR 3D mapping is a versatile technology: unlike passive sensing methods
such as photogrammetry, it can penetrate vegetation and operate in
low-light conditions. LiDAR is also a practical, user-friendly technology
whose data is faster and easier to process.

Phoenix LiDAR’s aerial solutions allow you to scan area swaths faster and with more consistent results than
current ground-based scanning technologies, whether terrestrial or mobile mapping methods.
We’ve created this section to help our customers and the public learn about LiDAR and what makes our solutions a
unique, ground-breaking tool for a variety of applications.

Do you have sample data I could try in LiDARMill? 2018-07-20T14:21:29+00:00

If you’d like to see how LiDARMill works by uploading and running some sample data, you can download the raw UAV LiDAR files here, and upload them into LiDARMill.

 

Note: As with any UAV LiDAR data, do not change the file names, since they encode information required for post-processing.

 

Want a step-by-step overview of LiDARMill? Watch our LiDARMill training video.

How do Beta users upgrade? 2018-07-02T09:00:58+00:00

First of all, thank you for being beta testers. We are pleased to offer all beta testers a free month of LiDARMill Pro, which is also known as LiDARMill Tier 2. You can continue using LiDARMill after your free month by purchasing a subscription. Simply open the menu on the left of the platform and click “Payments.”

 

For future reference, please note that neither refunds nor proration is available for subscription cancellations or modifications.

Where can I get more details about LiDARMill? 2018-07-02T12:29:09+00:00

To learn more about LiDARMill, see the LiDARMill Data Sheet [PDF].

 

To see the detailed terms, see the LiDARMill Terms and Conditions.

 

Once you decide to try LiDARMill, contact us to see which LiDARMill Tier would best support your workflow!

Can I cancel or modify my subscription? 2018-07-03T12:08:38+00:00

To modify your subscription, you must first cancel the existing subscription and purchase a new one. Cancellation requires two steps: 1) cancel in LiDARMill by opening the Menu and clicking Payments, and 2) log into PayPal and cancel your subscription from within your PayPal account. Please note that neither refunds nor proration is available for subscription cancellations or modifications.

Is there a Setup Fee? 2018-07-09T13:56:32+00:00

All Phoenix LiDAR systems are automatically compatible with LiDARMill, so no setup or setup fee is necessary.

 

While we’re constantly working to increase compatibility with third-party LiDAR systems, some systems will require extra time for custom configuration. If that’s necessary, our team will let you know how much it might cost based on an estimate of hours at a fixed rate.

 

With the purchase of an annual subscription, the initial setup fee for third-party systems can be partially or fully credited back to you. Please ask your Phoenix LiDAR representative for details.

Does LiDARMill work with non-Phoenix LiDAR systems? 2018-06-22T17:30:50+00:00

LiDARMill can process data collected by many third-party systems, depending on configuration. Not sure if your system is compatible? Just get in touch and we’ll help you check. Additional setup may be necessary in some cases, and we’ll let you know how much that may cost.

How do I sign up for LiDARMill? 2018-07-02T08:56:10+00:00

LiDARMill is a subscription cloud service for LiDAR post-processing. To subscribe or learn more, fill out the contact form below and we’ll help you find the best LiDARMill Tier for your team.

Understanding Accuracy 2017-02-27T18:21:30+00:00

Accuracy Explained

The question we are asked most frequently is about the accuracy of the point clouds we produce. So, how accurate is our data?

First of all, there are different kinds of accuracy: we are mostly interested in what we’ll call absolute and relative accuracy. Absolute accuracy describes how much the whole point cloud is offset in any direction, leading to a constant error in georeferencing. This error depends almost exclusively on a correctly configured GNSS reference station and thus will not be discussed any further. Far more interesting is relative accuracy, which indicates how self-consistent the resulting point cloud is.

To answer this question, let’s trace the origin of a single point in the cloud: it is first measured by a LiDAR sensor, then transformed into a global coordinate system using the position and orientation solved by the GNSS/inertial navigation system.
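A minimal sketch of that georeferencing step (the rotation convention and names are illustrative; a real pipeline also applies a fixed sensor-to-IMU boresight calibration, omitted here):

```python
import math

def georeference(scan_point, position, roll, pitch, yaw):
    """Transform a sensor-frame LiDAR return into global coordinates.
    Angles are in radians; the ZYX (yaw-pitch-roll) convention used
    here is illustrative, not the actual Phoenix pipeline."""
    x, y, z = scan_point
    # roll about x
    y, z = (y * math.cos(roll) - z * math.sin(roll),
            y * math.sin(roll) + z * math.cos(roll))
    # pitch about y
    x, z = (x * math.cos(pitch) + z * math.sin(pitch),
            -x * math.sin(pitch) + z * math.cos(pitch))
    # yaw about z
    x, y = (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))
    return tuple(p + q for p, q in zip(position, (x, y, z)))

# A return 10 m ahead of a level, unrotated sensor at (100, 200, 50):
print(georeference((10.0, 0.0, 0.0), (100.0, 200.0, 50.0), 0.0, 0.0, 0.0))
```

Any error in the position or orientation inputs propagates directly into every output point, which is why the rest of this section walks through each error source in turn.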

Velodyne’s HDL-32E and VLP-16 (Puck) LiDAR sensors are specified to measure distances with errors of less than 2 cm; Riegl’s VUX-1 lists a maximum error of less than 1 cm. To be on the safe side, we’ll assume an error of 150% of the value specified by Velodyne, arriving at 3 cm.

Because we mount LIDAR and IMU fixed to each other through a single flat aluminium part, we can consider them perfectly aligned and will not introduce an alignment offset (neither static, nor – worse – dynamic, e.g. due to vibrations), which might affect more fragile setups that place IMU and LIDAR sensors further apart.

The horizontal accuracy of the GNSS/inertial position solution is listed as 1 cm + 1 ppm, where the latter figure indicates that for every kilometer of distance to the GNSS reference station (the baseline), an additional millimeter of error is to be expected. Depending on satellite constellation geometry, the vertical position error is usually estimated at 150% of the horizontal error. Using a 1 km baseline as an example, we arrive at a combined positioning error of 1.983 cm (1.1 cm horizontal and 1.65 cm vertical, root-sum-squared).

Summing the positioning errors from the LiDAR and the navigation system, our point is off by up to 4.983 cm (3 cm + 1.983 cm).
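These figures can be reproduced in a few lines, under the same assumptions as the text: a 1 km baseline, vertical error at 150% of horizontal, a root-sum-square combination of the two, and a worst-case linear sum with the inflated 3 cm LiDAR figure:

```python
import math

baseline_km = 1.0
horizontal_cm = 1.0 + 0.1 * baseline_km   # 1 cm + 1 ppm (1 mm per km of baseline)
vertical_cm = 1.5 * horizontal_cm         # vertical error ~150% of horizontal
position_err_cm = math.hypot(horizontal_cm, vertical_cm)  # root-sum-square
lidar_err_cm = 1.5 * 2.0                  # 150% of Velodyne's 2 cm spec
total_cm = lidar_err_cm + position_err_cm # worst-case linear sum
print(f"position: {position_err_cm:.3f} cm, total: {total_cm:.3f} cm")
```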

However, the largest offset stems from orientation error in the GNSS/inertial solution, which in turn is mostly determined by the accuracy of the IMU. For this reason, the AL3 and Ranger series use fiber-optic gyros and MEMS servo accelerometers – among the most accurate commercially available IMU components (i.e., not subject to ITAR export controls) – yielding 0.015 degree error in pitch/roll and 0.035 degree error in heading for real-time solutions; post-processing is usually even more accurate. To prevent losing alignment (common in environments with strong vibrations) and to further enhance accuracy, the AL3 and Ranger series employ a dual-antenna solution (optional for the Scout series).

Still, range and accuracy have always necessitated a compromise, because inaccuracies in orientation cause a point’s error to grow linearly with its distance from the LiDAR. This must also be noted when upgrading to scanners that offer longer ranges than the Velodynes (e.g. the Faro X330).
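To see how quickly orientation error comes to dominate at range, here is the small-angle arithmetic for the 0.035 degree heading figure quoted above (the ranges are illustrative):

```python
import math

def orientation_offset_cm(range_m, angle_err_deg):
    """Point displacement caused by an angular error at a given range."""
    return range_m * math.tan(math.radians(angle_err_deg)) * 100.0

# The 0.035-degree heading error at increasing scanner ranges:
for r in (50, 100, 300):
    print(f"{r:3d} m -> {orientation_offset_cm(r, 0.035):.1f} cm")
```

At 100 m the angular term alone already exceeds the entire 4.983 cm position budget, which is the compromise the paragraph above describes.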

The image below details the range-constant positional error, as well as the additional, range-dependent offset due to orientation error.

Generally, errors in position and orientation are RMS (root mean square) values listed in the navigation system’s specification. In practice, errors will not change rapidly within the given bounds, but drift slowly instead.

Even though all these numbers have their origins in thorough tests, they still are only numbers. Depending on satellite coverage and constellation geometry, vibrations, RTK baseline and choice of antennas, they are subject to change. We will gladly supply you with some sample data of ground and aerial surveys – please contact us!

Basics of LiDAR 2017-02-24T20:10:20+00:00

The Basics of LiDAR

LiDAR, short for Light Detection and Ranging, is an active remote sensing method used for an array of applications. It uses light in the form of a pulsed laser to measure ranges (variable distances) through vegetation to the Earth. The system captures accurate surface data by measuring the time it takes each pulse to return to its source.
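The underlying time-of-flight arithmetic is simple: range is half the round-trip time multiplied by the speed of light (the 667 ns example below is illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_range_m(round_trip_s):
    """Range from a pulse's round-trip time; halved because the
    light travels to the target and back."""
    return C * round_trip_s / 2.0

# A return received ~667 nanoseconds after emission came from ~100 m away:
print(f"{tof_range_m(667e-9):.1f} m")
```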

LiDAR requires four basic parts to operate:

  1. The LiDAR unit itself, which, when mounted to a UAV, emits pulses of light across a predefined swath below.
  2. A GPS receiver tracking the unit’s x, y, z coordinates.
  3. An Inertial Measurement Unit (IMU), which tracks the tilt of the unit in space to enable accurate elevation measurements.
  4. A computer that records all transmitted data.

These light pulses and their ability to produce multiple returns, combined with the other data recorded by the system, are processed to create highly accurate, three-dimensional information about the scanned surface.

How does LiDAR work? 2017-02-27T18:26:54+00:00

How does LiDAR work?

The technique we use to derive centimeter-level precision is called Real-Time Kinematic Global Navigation Satellite System (RTK GNSS). This system uses the satellite signal’s carrier wave in addition to the information content of the signal and relies on a single GNSS reference station to provide real-time corrections. Now, what happens during short periods of GNSS outages? Enter the Inertial Navigation System (INS): the INS uses a computer, motion sensors (accelerometers) and rotation sensors (gyroscopes) to continuously calculate the position, orientation, and velocity of the system. In order to combine the two systems, a very sophisticated algorithm known as linear quadratic estimation (LQE) operates on streams of noisy sensor data to produce a statistically optimal estimate of the system’s position at any point in time. By fusing this information with the LiDAR data, a point cloud is generated and visualized in real time using Phoenix Aerial SpatialExplorer.
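The LQE referred to above is better known as a Kalman filter. A toy one-dimensional sketch of the idea (all variances and measurements here are illustrative, not values from the actual navigation system):

```python
def kalman_1d(measurements, meas_var=4.0, process_var=0.5):
    """Minimal 1-D Kalman filter: blend each noisy measurement with the
    running estimate, weighting by their respective uncertainties."""
    x, p = measurements[0], meas_var   # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var               # predict: uncertainty grows over time
        k = p / (p + meas_var)         # Kalman gain: trust in the measurement
        x += k * (z - x)               # update state toward the measurement
        p *= (1 - k)                   # updated uncertainty shrinks
        estimates.append(x)
    return estimates

print(kalman_1d([10.0, 12.0, 11.0, 13.0, 12.5]))
```

The real system does the same kind of blending in many dimensions at once, fusing GNSS positions with IMU accelerations and rotation rates.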

In case real-time corrections from the GNSS reference station are not available, or longer outages prevent transmission of data to the rover, a third-party software package called Inertial Explorer™ can still produce a precise trajectory in post-processing. Both types of trajectories (either generated in real time by the INS or by Inertial Explorer™ in post-processing) can be fused with LiDAR data in Phoenix Aerial SpatialFuser to create point clouds in LAS format.

Phoenix Aerial LiDAR solutions are engineered to attach to almost any vehicle and for the first time, the accompanying software is just as flexible as the module. By splitting sensor control and user interface into separate parts, multiple mapping options are possible:

Aerial Mapping

Phoenix Aerial LiDAR solutions can be used for mapping with many different vehicles such as UAVs, gyrocopters, fixed-wing aircraft, etc. As shown in the image above, the operator is typically on the ground and connected directly to the GNSS reference station. Using the Phoenix Aerial SpatialExplorer software, the operator transmits correction data to the rover via a long-range WiFi system. The rover then fuses this data in real time and transmits a down-sampled point cloud back to the operator.

Ground Mapping

When the operator travels with the rover in a car, boat or ATV, he/she can connect directly to the rover using either WiFi or an ethernet cable. Correction data can then be transmitted from the GNSS reference station to the SpatialExplorer software via long-range WiFi or a public IP address (e.g., over 3G/4G). With the on-board 240 GB SSD, the operator can scan for six hours without having to stop to download data.

Real-time Point Cloud Advantages

The ability to visualize real time point clouds brings several key advantages:

1) The operator can immediately determine whether the results match expectations. Previously, results were available only after landing, at which point any changes become very time-consuming and expensive.

2) The operator can visualize the growing point cloud on a computer screen in real time and with this data can locate areas yet to be scanned and quickly alter the UAV’s course.

3) Via a 4G network, the operator can remotely share his/her screen with clients in real time to confirm or alter the LiDAR point cloud.

Parameters that Govern LiDAR 2017-02-27T18:30:59+00:00

Parameters for LiDAR Scanning via UAV

Phoenix LiDAR Systems builds systems meant for mobile mapping. Surveying from a moving object is accompanied by certain parameters an operator must take into account: speed, scan area, altitude, frequency, pulse rate, scan angle and point density all play an integral role in capturing data. Note that you will obtain a scan swath of varying ranges and densities depending on these parameters.

Actual accuracy is dependent on GPS processing options (RTK, PPK, WAAS), ionospheric conditions, satellite visibility, flight altitude (AGL) and other factors.
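As a rough back-of-envelope sketch of how these parameters interact (the altitude, scan angle, pulse rate and speed below are illustrative numbers, not recommendations), swath width follows from altitude and scan angle, and average point density from pulse rate, speed, and swath width:

```python
import math

def swath_width_m(altitude_m, scan_angle_deg):
    """Ground swath covered by a scanner with the given full scan angle."""
    return 2.0 * altitude_m * math.tan(math.radians(scan_angle_deg) / 2.0)

def point_density(pulse_rate_hz, speed_ms, swath_m):
    """Average points per square metre along the flight line."""
    return pulse_rate_hz / (speed_ms * swath_m)

swath = swath_width_m(50.0, 90.0)   # 50 m AGL, 90-degree scan angle
density = point_density(300_000, 5.0, swath)  # 300 kHz pulse rate at 5 m/s
print(f"swath: {swath:.0f} m, density: {density:.0f} pts/m^2")
```

Halving the flight speed or the altitude roughly doubles point density, which is why the same sensor can deliver very different results on different missions.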

Phoenix LiDAR Workflow 2017-02-22T00:38:53+00:00

Phoenix LiDAR Workflow

To give you a better understanding of how our systems work, refer to the chart on the left.

Real-time vs Post Processed

In RTK (Real-Time Kinematic) mode, roughly 500 bytes of differential corrections are sent from the reference station to the rover about once every second. Applying these corrections, the rover is able to solve its position with centimeter accuracy.

The differences between RTK mode and post-processing are:

  • post-processing requires extra software
  • post-processing does not require a real-time connection between reference station and rover
  • post-processing will often compute more accurate results, especially in environments with bad satellite visibility (ground scanning)
  • post-processing allows the user to better judge the solution’s accuracy

Sensor Comparison 2017-02-22T00:34:30+00:00

Sensor Comparison

Phoenix offers a range of sensors, some more suitable than others for a given application. The tabs on your left give you an overview.
