Update docs with generic decoder design
mojomex committed Aug 21, 2023
1 parent d7059c8 commit b5268bb
Showing 4 changed files with 71 additions and 43 deletions.
38 changes: 21 additions & 17 deletions .cspell.json
@@ -3,27 +3,31 @@
  "language": "en",
  "allowCompoundWords": true,
  "words": [
    "adctp",
    "Adctp",
    "AT",
    "block_id",
    "gprmc",
    "Hesai",
    "memcpy",
    "nohup",
    "nproc",
    "pandar",
    "PANDAR",
    "PANDARAT",
    "PANDARQT",
    "PANDARXT",
    "Pdelay",
    "QT",
    "schedutil",
    "STD_COUT",
    "stds",
    "struct",
    "structs",
    "UDP_SEQ",
    "vccint",
    "Vccint",
    "XT",
    "XTM"
  ]
}
75 changes: 49 additions & 26 deletions nebula_decoders/DESIGN.md → docs/hesai_decoder_design.md
@@ -3,7 +3,10 @@
Since sensors from the same vendor often follow similar conventions when it comes to packet structure and data processing steps, a generic decoder can be used for most of the decoding work.
This document outlines the requirements and design of the generic Hesai decoder.

## Requirements

There shall only be one decoder class, which makes use of static (template) polymorphism to handle different sensor types.
This way, the runtime overhead of this generalization is `0`.

### Packet formats

@@ -15,19 +18,30 @@ For all handled Hesai sensors, the packet structure follows this rough format:
### Decoding steps

For all handled Hesai sensors, decoding a packet follows these steps:
```python
def unpack(packet):
    parse_and_validate(packet)
    # return group: one (single-return) or more (multi-return)
    # blocks that belong to the same azimuth
    for return_group in packet:
        if is_start_of_new_scan(return_group):
            swap_output_buffers()  # etc.
        decode(return_group)

def decode(return_group):
    for unit in return_group:
        # filter by distance thresholds and by distance to other returns
        if not passes_filters(unit):
            continue
        correct_azimuth_and_elevation(unit)  # *
        compute_xyz(unit)                    # * (sin/cos lookup tables)
        compute_time_offset_to_scan(unit)    # *
        determine_return_type(unit)          # *
        append_to_pointcloud(unit)
```

The steps marked with __*__ are model-specific:

* angle correction
* timing correction
* return type assignment
@@ -38,30 +52,39 @@ There are two approaches across the supported sensors:
* Calibration file based
* Correction file based (currently only used by AT128)

For both approaches, sin/cos lookup tables can be computed.
However, the resolution and calculation of these tables is different.

#### Calibration based

For each laser channel, a fixed elevation angle and azimuth angle offset are defined in the calibration file.
sin/cos lookup tables are computed at 0.001 deg resolution (the resolution observed in the calibration files) for the whole 360 deg range.
Thus, sin/cos for elevation are only a function of the laser channel (not dependent on azimuth), while those for azimuth are a function of azimuth AND elevation.

Lookup tables for elevation can thus be sized with `n_channels`, yielding a maximum size of
`128 * sizeof(float) = 512B` each.

For azimuth, the size is `n_channels * n_azimuths = n_channels * 360 * azimuth_resolution <= 128 * 36000`.
This yields a table size of `128 * 36000 * sizeof(float) ≈ 18.4MB`.

#### Correction based

While azimuth and elevation correction also have a per-channel component, an additional component depending on azimuth AND channel is present.
The angular resolution of the AT128 is `1 / (100 * 256) deg`; the per-channel correction and the additional component are defined as integers with that same resolution and with `1 / 100 deg` resolution, respectively.
This means that a lookup table of length `360 * 100 * 256` will contain sin/cos for all corrected values, since the resolution of corrections is smaller/equal to the base angular resolution.

This leads to more individual azimuth/elevation values than the calibration-based approach, and for the case of the AT128, the lookup table resolution is around 0.00004 deg.

The lookup tables (of which there only need to be two: sin, cos; both usable for azimuth and elevation) each have a size of `360 * 100 * 256 * sizeof(float) ≈ 36.9MB`.
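The table sizes quoted for both approaches can be verified with a quick calculation (a sketch, assuming `sizeof(float)` is 4 bytes):

```python
FLOAT_BYTES = 4

# Calibration based: elevation tables are indexed by channel only.
elevation_table_bytes = 128 * FLOAT_BYTES
assert elevation_table_bytes == 512  # 512B per table

# Calibration based: azimuth tables are indexed by azimuth AND elevation.
azimuth_table_bytes = 128 * 36000 * FLOAT_BYTES
print(azimuth_table_bytes / 1e6)  # 18.432, i.e. ≈ 18.4MB

# Correction based (AT128): one table covers all corrected angle values.
correction_table_bytes = 360 * 100 * 256 * FLOAT_BYTES
print(correction_table_bytes / 1e6)  # 36.864, i.e. ≈ 36.9MB
```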

### Timing correction

Timing correction for all sensors follows the same underlying formula:
Given a scan start timestamp $T_{s_i}$, packet start timestamp $T_{p_j}$, block offset $o_{b_k}$ within the packet and channel offset $o_{c_l}$ within the block, the point identified by $(i, j, k, l)$ has the relative scan timestamp $t_{i,j,k,l} = T_{p_j} - T_{s_i} + o_{b_k} + o_{c_l}$.
Each sensor features an absolute timestamp per packet and formulae to compute the relative time between a unit or block and the packet.
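As a sketch, the formula above translates directly into code (all names hypothetical; times in nanoseconds):

```python
def scan_relative_time_ns(scan_start_ns, packet_start_ns,
                          block_offset_ns, channel_offset_ns):
    # t_{i,j,k,l} = T_p - T_s + o_b + o_c
    return packet_start_ns - scan_start_ns + block_offset_ns + channel_offset_ns

# e.g. a packet arriving 10000 ns after scan start, with a block offset of
# 1500 ns and a channel offset of 500 ns:
t = scan_relative_time_ns(1_000_000, 1_010_000, 1_500, 500)
assert t == 12_000
```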

The desired output features a start timestamp per scan, and a relative timestamp to the scan for each point.

Thus, the scan timestamp must be computed as the timestamp of the earliest point in the scan, and all packet-relative point times need to be converted to scan-relative ones.
The earliest point in a scan is guaranteed to be in the first return group (≈ first 1-3 blocks) of the scan.
Note that a scan can start mid-packet, if the scan phase does not align with packet bounds.

The block offset follows a formula linear in the block index for all sensor models, which additionally depends on the number of returns of the currently active `return_mode`. The parametrization is different for each sensor.

The channel offset is given as a formula, table, or set of tables for all sensors. For a few sensors, the formula is influenced by factors such as high resolution mode (128E3X, 128E4X), alternate firing sequences (QT128), and near/far-field firing (128E3X).

@@ -77,14 +100,14 @@ Here is an exhaustive list of differences:

For all other return modes, duplicate points are output if the two returns coincide.

## Implementation

### `HesaiPacket`

Packets are defined as **packed** structs to enable parsing via `memcpy`.
The sensor-specific layout for sensor XYZ is defined in `PacketXYZ` and usually employs its own `TailXYZ` struct.
The header formats are largely shared between sensors.
The packet body (i.e. point data) is mainly parameterized by bytes per point, points per block, and blocks per body. Thus, parameterized templated structs are used. A few variants, such as fine-azimuth blocks and blocks with a start-of-block (SOB) header, exist and are implemented as their own structs.

![HesaiPacket diagram](./GenericHesaiDecoder-Packet%20Formats.png)

@@ -101,8 +124,7 @@ The angle corrector has three main tasks:
* compute and provide lookup tables for sin/cos/etc.

The two angle correction types are calibration-based and correction-based. In both approaches, a file from the sensor is used to extract the angle correction for each azimuth/channel.
For all approaches, cos/sin lookup tables of the appropriate size are generated (see the requirements section above).

### `HesaiDecoder<SensorT>`

@@ -115,3 +137,4 @@ Its tasks are:
* managing decode/output point buffers
* converting all points in the packet using the sensor-specific functions of `SensorT` where necessary

`HesaiDecoder<SensorT>` is a subclass of the existing `HesaiScanDecoder` to allow all template instantiations to be assigned to variables of the supertype.
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -38,6 +38,7 @@ nav:
- Design: design.md
- About Nebula: about.md
- Add Your Sensor: add_sensor.md
- Hesai Decoder Design: hesai_decoder_design.md
- Nebula Common: nebula_common/links.md
- Nebula Decoders: nebula_decoders/links.md
- Nebula HW Interfaces: nebula_hw_interfaces/links.md
