la3dm-master/.travis.yml

dist: trusty
sudo: required
language:
- generic
cache:
- apt
services:
- docker
# Global environment variables
env:
global:
- ROS_DISTRO=kinetic
- ROS_REPOSITORY_PATH=http://packages.ros.org/ros/ubuntu
# - ROS_CI_DESKTOP="`lsb_release -cs`"
# - CI_SOURCE_PATH=$(pwd)
# - ROS_PARALLEL_JOBS='-j8 -16'
# ROS Setup
# before_install:
# - docker pull ubuntu:xenial
# - docker run -it ubuntu:xenial
# - apt-get update
# - apt-get install -y wget
# - sh -c "echo \"deb http://packages.ros.org/ros/ubuntu $ROS_CI_DESKTOP main\" > /etc/apt/sources.list.d/ros-latest.list"
# - wget http://packages.ros.org/ros.key -O - | apt-key add -
# - apt-get update
# - apt-cache search kinetic-catkin
# - sudo apt-get install -y ros-$ROS_DISTRO-catkin
# - source /opt/ros/$ROS_DISTRO/setup.bash
# - sudo rosdep init
# - rosdep update
install:
- git clone https://github.com/ros-industrial/industrial_ci.git .ci_config
script:
- .ci_config/travis.sh
# Create catkin workspace
# install:
# - mkdir -p ~/catkin_ws/src
# - cd ~/catkin_ws/src
# - catkin_init_workspace
# - cd ~/catkin_ws
# - catkin_make
# - source devel/setup.bash
# Add package
# - cd ~/catkin_ws/src
# - ln -s $CI_SOURCE_PATH .
# Install dependencies
# before_script:
# - sudo apt-get install ros-kinetic-octomap*
# Build package
# script:
# - cd ~/catkin_ws
# - catkin_make

la3dm-master/README.md

# Learning-Aided 3D Mapping
[](https://travis-ci.org/RobustFieldAutonomyLab/la3dm)
A suite of algorithms for learning-aided mapping. Includes implementations of Gaussian process regression and Bayesian generalized kernel inference for occupancy prediction using test-data octrees. A demonstration of the system can be found here: https://youtu.be/SRXLMALpU20
## Overview
This implementation as it stands now is primarily intended to enable replication of these methods over a few datasets. In addition to the implementation of relevant learning algorithms and data structures, we provide two sets of range data (sim_structured and sim_unstructured) collected in Gazebo for demonstration. Parameters of the sensors and environments are set in the relevant `yaml` files contained in the `config/datasets` directory, while configuration of parameters for the mapping methods can be found in `config/methods`.
## Getting Started
### Dependencies
The current package runs with ROS Noetic; for testing in ROS Kinetic or ROS Indigo, set the CMake flag in the CMakeLists.txt file to C++11.
OctoMap is a dependency, which can be installed using the command below. Change the distribution as necessary.
```bash
$ sudo apt-get install ros-noetic-octomap*
```
### Building with catkin
The repository is set up to work with catkin, so to get started you can clone the repository into your catkin workspace `src` folder and compile with `catkin_make`:
```bash
my_catkin_workspace/src$ git clone https://github.com/RobustFieldAutonomyLab/la3dm.git
my_catkin_workspace/src$ cd ..
my_catkin_workspace$ catkin_make
my_catkin_workspace$ source ~/my_catkin_workspace/devel/setup.bash
```
## Running the Demo
To run the demo on the `sim_structured` environment, simply run:
```bash
$ roslaunch la3dm la3dm_static.launch
```
which by default will run using the BGKOctoMap-LV method. If you want to try a different method or dataset, simply pass the
name of the method or dataset as a parameter. For example, if you want to run GPOctoMap on the `sim_unstructured` map,
you would run:
```bash
$ roslaunch la3dm la3dm_static.launch method:=gpoctomap dataset:=sim_unstructured
```
## Relevant Publications
If you found this code useful, please cite the following:
Improving Obstacle Boundary Representations in Predictive Occupancy Mapping ([PDF](https://www.sciencedirect.com/science/article/abs/pii/S0921889022000380)) - describes the latest BGKOctoMap-LV addition to the LA3DM library:
```
@article{pearson2022improving,
title={Improving Obstacle Boundary Representations in Predictive Occupancy Mapping},
author={Pearson, Erik and Doherty, Kevin and Englot, Brendan},
journal={Robotics and Autonomous Systems},
volume={153},
pages={104077},
year={2022},
publisher={Elsevier}
}
```
Learning-Aided 3-D Occupancy Mapping with Bayesian Generalized Kernel Inference ([PDF](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8713569)) - describes the BGKOctoMap and BGKOctoMap-L approaches originally included in the LA3DM library.
```
@article{Doherty2019,
doi = {10.1109/tro.2019.2912487},
url = {https://doi.org/10.1109/tro.2019.2912487},
year = {2019},
publisher = {Institute of Electrical and Electronics Engineers ({IEEE})},
pages = {1--14},
author = {Kevin Doherty and Tixiao Shan and Jinkun Wang and Brendan Englot},
title = {Learning-Aided 3-D Occupancy Mapping With Bayesian Generalized Kernel Inference},
journal = {{IEEE} Transactions on Robotics}
}
```
Fast, accurate gaussian process occupancy maps via test-data octrees and nested Bayesian fusion ([PDF](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7487232)) - describes the GPOctoMap approach included in the LA3DM library.
```
@INPROCEEDINGS{JWang-ICRA-16,
author={J. Wang and B. Englot},
booktitle={2016 IEEE International Conference on Robotics and Automation (ICRA)},
title={Fast, accurate gaussian process occupancy maps via test-data octrees and nested Bayesian fusion},
year={2016},
pages={1003-1010},
month={May},
}
```
Bayesian Generalized Kernel Inference for Occupancy Map Prediction ([PDF](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7989356))
```
@INPROCEEDINGS{KDoherty-ICRA-17,
author={K. Doherty and J. Wang and B. Englot},
booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
title={Bayesian Generalized Kernel Inference for Occupancy Map Prediction},
year={2017},
month={May},
}
```
## Contributors
Jinkun Wang, Kevin Doherty, and Erik Pearson, [Robust Field Autonomy Lab (RFAL)](https://robustfieldautonomylab.github.io/), Stevens Institute of Technology.

la3dm-master/config/datasets/sim_structured.yaml

# sim_structured config
# Information about range-finder data
scan_num: 12 # How many scans to use
max_range: 8 # Sensor max range (m)
# Bounds on map height
min_z: 0
max_z: 4.3
original_size: false
predict: false

la3dm-master/config/datasets/sim_structured_long_term.yaml

# sim_structured_long_term config
# Information about range-finder data
scan_num: 15 # How many scans to use
max_range: 8 # Sensor max range (m)
# Bounds on map height
min_z: 0
max_z: 4.3
original_size: true
predict: false

la3dm-master/config/datasets/sim_unstructured.yaml

# sim_unstructured config
# Information about range-finder data
scan_num: 12 # How many scans to use
max_range: 8 # Sensor max range (m)
# Bounds on map height
min_z: 0
max_z: 4.3
original_size: false
predict: false

la3dm-master/config/methods/bgkloctomap.yaml

# BGKLOctoMap config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.1
block_depth: 3 # Test-data octree depth (see Wang & Englot ICRA 2016)
# Kernel parameters
sf2: 0.1 # Actually sigma_0 in sparse kernel
ell: 0.2 # Length scale of the sparse kernel
# Sampling resolutions
free_resolution: 0.3 # Free space sampling res
ds_resolution: 0.1 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
# BGK Inference positive and negative class prior pseudocounts
prior_A: 0.001 # Positive class (occupied)
prior_B: 0.001 # Negative class (unoccupied)
var_thresh: 0.15 # Threshold on variance to distinguish known/unknown

la3dm-master/config/methods/bgkloctomap_large_map.yaml

# BGKL Large Map config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.2
block_depth: 5 # Test-data octree depth (see Wang & Englot ICRA 2016)
max_range: 30 # Sensor max range (m)
original_size: true
# Bounds on map height
min_z: -3.0
max_z: 3.0
# Kernel parameters
sf2: 0.1
ell: 0.6
# Sampling resolutions
free_resolution: 6.5 #6.5 for block depth 5, 3.25 for block depth 4, 13 for depth 6
ds_resolution: 0.5 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
# BGKL parameters
prior_A: 0.001 # Positive class (occupied)
prior_B: 0.001 # Negative class (unoccupied)
var_thresh: 100.0 # Threshold on variance to distinguish known/unknown

la3dm-master/config/methods/bgklvoctomap.yaml

# BGKLVOctoMap config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.1
block_depth: 5 # Test-data octree depth (see Wang & Englot ICRA 2016)
# Kernel parameters
sf2: 0.1 # Actually sigma_0 in sparse kernel
ell: 0.2 # Length scale of the sparse kernel
# Sampling resolutions
free_resolution: 0.1 # Free space sampling res
ds_resolution: 0.1 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
# BGK Inference positive and negative class prior pseudocounts
prior_A: 0.001 # Positive class (occupied)
prior_B: 0.001 # Negative class (unoccupied)
var_thresh: 0.2 # Threshold on variance to distinguish known/unknown
min_W: 0.001 #minimum unknown threshold

la3dm-master/config/methods/bgklvoctomap_large_map.yaml

# BGKLV Large Map config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.2
block_depth: 6 # Test-data octree depth (see Wang & Englot ICRA 2016)
max_range: 30 # Sensor max range (m)
original_size: true
# Bounds on map height
min_z: -3.0
max_z: 3.0
# Kernel parameters
sf2: 0.1
ell: 0.6
# Sampling resolutions
free_resolution: 0.1
ds_resolution: 0.5 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
# BGKL+ parameters
prior_A: 0.001 # Positive class (occupied)
prior_B: 0.001 # Negative class (unoccupied)
min_W: 0.01 #minimum unknown threshold
var_thresh: 0.001 # Threshold on variance to distinguish known/unknown

la3dm-master/config/methods/bgkoctomap.yaml

# BGKOctoMap config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.1
block_depth: 3 # Test-data octree depth (see Wang & Englot ICRA 2016)
# Kernel parameters
sf2: 1.0 # Actually sigma_0 in sparse kernel
ell: 0.2 # Length scale of the sparse kernel
# Sampling resolutions
free_resolution: 0.5 # Free space sampling resolution
ds_resolution: 0.1 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
var_thresh: 100.0 # Threshold on variance to distinguish known/unknown
# BGK Inference positive and negative class prior pseudocounts
prior_A: 0.001 # Positive class (occupied)
prior_B: 0.001 # Negative class (unoccupied)

la3dm-master/config/methods/bgkoctomap_large_map.yaml

# BGKOctoMap Large Map config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.2
block_depth: 3 # Test-data octree depth (see Wang & Englot ICRA 2016)
max_range: 30 # Sensor max range (m)
original_size: true
# Bounds on map height
min_z: -3.0
max_z: 3.0
# Kernel parameters
sf2: 1.0 # Actually sigma_0 in sparse kernel
ell: 0.2 # Length scale of the sparse kernel
# Sampling resolutions
free_resolution: 0.2 # Free space sampling resolution
ds_resolution: 0.1 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
var_thresh: 100.0 # Threshold on variance to distinguish known/unknown
# BGK Inference positive and negative class prior pseudocounts
prior_A: 0.001 # Positive class (occupied)
prior_B: 0.001 # Negative class (unoccupied)

la3dm-master/config/methods/gpoctomap.yaml

# GPOctoMap config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.1
block_depth: 3 # Test-data octree depth (see Wang & Englot ICRA 2016)
# Kernel parameters
sf2: 1.0
ell: 1.0
# Sampling resolutions
free_resolution: 0.1 # Free space sampling resolution
ds_resolution: 0.1 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
# GP parameters
noise: 0.01
l: 100
max_var: 1000
min_var: 0.001
max_known_var: 0.02

la3dm-master/config/methods/gpoctomap_large_map.yaml

# GP/BGKL+ Large Map config
# Map topic, grid cell minimum resolution
topic: /occupied_cells_vis_array
resolution: 0.2
block_depth: 4 # Test-data octree depth (see Wang & Englot ICRA 2016)
max_range: 30 # Sensor max range (m)
original_size: true
# Bounds on map height
min_z: -3.0
max_z: 3.0
# Kernel parameters
sf2: 1.0
ell: 1.0
# Sampling resolutions
free_resolution: 0.1
ds_resolution: 0.5 # Downsampling factor
# Free/Occupied Thresholds
free_thresh: 0.3
occupied_thresh: 0.7
# GP parameters
noise: 0.01
l: 100
max_var: 1000
min_var: 0.001
max_known_var: 0.02

la3dm-master/include/bgkloctomap/bgklblock.h

#ifndef LA3DM_BGKL_BLOCK_H
#define LA3DM_BGKL_BLOCK_H
#include <unordered_map>
#include <array>
#include "point3f.h"
#include "bgkloctree_node.h"
#include "bgkloctree.h"
namespace la3dm {
/// Hash key to index Block given block's center.
typedef int64_t BlockHashKey;
/// Initialize Look-Up Table
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth);
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map,
unsigned short max_depth);
/// Extended Block
#ifdef PREDICT
typedef std::array<BlockHashKey, 27> ExtendedBlock;
#else
typedef std::array<BlockHashKey, 7> ExtendedBlock;
#endif
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(point3f center);
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(float x, float y, float z);
/// Convert from hash key to block.
point3f hash_key_to_block(BlockHashKey key);
/// Get current block's extended block.
ExtendedBlock get_extended_block(BlockHashKey key);
/*
* @brief Block is built on top of OcTree, providing the functions to locate nodes.
*
* Block stores the information needed to locate each OcTreeNode's position:
* fixed resolution, fixed block_size, both of which must be initialized.
* The localization is implemented using a Look-Up Table.
*/
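// A minimal lookup sketch (illustrative only, using the declarations above;
// assumes Block::resolution and Block::size have been initialized):
//
//   BlockHashKey key = block_to_hash_key(p);  // block containing point3f p
//   point3f center = hash_key_to_block(key);  // recover that block's center
//   OcTreeNode &cell = blk.search(p);         // cell-level lookup inside a Block blk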
class Block : public OcTree {
friend BlockHashKey block_to_hash_key(point3f center);
friend BlockHashKey block_to_hash_key(float x, float y, float z);
friend point3f hash_key_to_block(BlockHashKey key);
friend ExtendedBlock get_extended_block(BlockHashKey key);
friend class BGKLOctoMap;
public:
Block();
Block(point3f center);
/// @return location of the OcTreeNode given OcTree's LeafIterator.
inline point3f get_loc(const LeafIterator &it) const {
return Block::key_loc_map[it.get_hash_key()] + center;
}
/// @return size of the OcTreeNode given OcTree's LeafIterator.
inline float get_size(const LeafIterator &it) const {
unsigned short depth, index;
hash_key_to_node(it.get_hash_key(), depth, index);
return float(size / pow(2, depth));
}
/// @return center of current Block.
inline point3f get_center() const { return center; }
/// @return min lim of current Block.
inline point3f get_lim_min() const { return center - point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return max lim of current Block.
inline point3f get_lim_max() const { return center + point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return ExtendedBlock of current Block.
ExtendedBlock get_extended_block() const;
OcTreeHashKey get_node(unsigned short x, unsigned short y, unsigned short z) const;
point3f get_point(unsigned short x, unsigned short y, unsigned short z) const;
void get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const;
OcTreeNode &search(float x, float y, float z) const;
OcTreeNode &search(point3f p) const;
private:
// Look-Up Table
static std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
static std::unordered_map<unsigned short, OcTreeHashKey> index_map;
static float resolution;
static float size;
static unsigned short cell_num;
point3f center;
};
}
#endif // LA3DM_BGKL_BLOCK_H

la3dm-master/include/bgkloctomap/bgklinference.h

#ifndef LA3DM_BGKL_H
#define LA3DM_BGKL_H

#include <cassert>
#include <vector>
#include <Eigen/Dense>
#include "point3f.h"
namespace la3dm {
/*
* @brief Bayesian Generalized Kernel Inference on Bernoulli distribution
* @param dim dimension of data (2, 3, etc.)
* @param T data type (float, double, etc.)
* @ref Nonparametric Bayesian inference on multivariate exponential families
*/
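// A minimal usage sketch (illustrative only; in this package the map classes
// drive the inference internally, so the call sequence below is an assumption
// based on the public interface declared in this file):
//
//   la3dm::BGKL3f bgk(0.1f, 0.2f);   // sf2, ell
//   bgk.train(x, y);                 // x: N segments as 6N floats, y: N labels (0/1)
//   bgk.predict(xs, ybar, kbar);     // xs: M query points as 3M floats
//
// The resulting ybar/kbar feed the Beta-posterior update in Occupancy::update.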
template<int dim, typename T>
class BGKLInference {
public:
/// Eigen matrix type for training and test data and kernel
using MatrixXType = Eigen::Matrix<T, -1, 2*dim, Eigen::RowMajor>;
using MatrixPType = Eigen::Matrix<T, -1, dim, Eigen::RowMajor>;
using MatrixKType = Eigen::Matrix<T, -1, -1, Eigen::RowMajor>;
using MatrixDKType = Eigen::Matrix<T, -1, 1>;
using MatrixYType = Eigen::Matrix<T, -1, 1>;
float EPSILON = 0.0001;
BGKLInference(T sf2, T ell) : sf2(sf2), ell(ell), trained(false) { }
/*
* @brief Fit BGK Model
* @param x input vector (2*dim*N, row major: N line segments given by two endpoints each)
* @param y target vector (N)
*/
void train(const std::vector<T> &x, const std::vector<T> &y) {
assert(x.size() % (2*dim) == 0 && (int) (x.size() / (2*dim)) == y.size());
MatrixXType _x = Eigen::Map<const MatrixXType>(x.data(), x.size() / (2*dim), 2*dim);
MatrixYType _y = Eigen::Map<const MatrixYType>(y.data(), y.size(), 1);
train(_x, _y);
}
/*
* @brief Fit BGK Model
* @param x input matrix (N x 2*dim)
* @param y target matrix (N x 1)
*/
void train(const MatrixXType &x, const MatrixYType &y) {
// std::cout << "training pt2" << std::endl;
this->x = MatrixXType(x);
this->y = MatrixYType(y);
trained = true;
}
/*
* @brief Predict with BGK Model
* @param xs input vector (3M, row major)
* @param ybar positive class kernel density estimate (\bar{y})
* @param kbar kernel density estimate (\bar{k})
*/
void predict(const std::vector<T> &xs, std::vector<T> &ybar, std::vector<T> &kbar) const {
// std::cout << "predicting" << std::endl;
assert(xs.size() % dim == 0);
// std::cout << "passed assertion" << std::endl;
MatrixPType _xs = Eigen::Map<const MatrixPType>(xs.data(), xs.size() / dim, dim);
// std::cout << "matrix conversion successful" << std::endl;
MatrixYType _ybar, _kbar;
predict(_xs, _ybar, _kbar);
// std::cout << "finished prediction" << std::endl;
ybar.resize(_ybar.rows());
kbar.resize(_kbar.rows());
for (int r = 0; r < _ybar.rows(); ++r) {
ybar[r] = _ybar(r, 0);
kbar[r] = _kbar(r, 0);
}
}
/*
* @brief Predict with nonparametric Bayesian generalized kernel inference
* @param xs input vector (M x 3)
* @param ybar positive class kernel density estimate (M x 1)
* @param kbar kernel density estimate (M x 1)
*/
void predict(const MatrixPType &xs, MatrixYType &ybar, MatrixYType &kbar) const {
// std::cout << "second prediction step" << std::endl;
assert(trained == true);
MatrixKType Ks;
covSparseLine(xs, x, Ks);
// std::cout << "computed covsparseline" << std::endl;
ybar = (Ks * y).array();
kbar = Ks.rowwise().sum().array();
}
private:
/*
* @brief Compute Euclid distances between two vectors.
* @param x input vector
* @param z input vector
* @return d distance matrix
*/
void dist(const MatrixXType &x, const MatrixXType &z, MatrixKType &d) const {
d = MatrixKType::Zero(x.rows(), z.rows());
for (int i = 0; i < x.rows(); ++i) {
d.row(i) = (z.rowwise() - x.row(i)).rowwise().norm();
}
}
// TODO: validate me
void point_to_line_dist(const MatrixPType &x, const MatrixXType &z, MatrixKType &d) const {
assert((x.cols() == 3) && (z.cols() == 6));
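// Point-to-segment distance: each row of z encodes a segment [p0, p1] as two
// stacked endpoints; project the query point onto the segment's line and
// clamp the projection to the endpoints (the c1/c2 tests below), giving the
// true minimum distance from the point to the segment.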
// std::cout << "made it" << std::endl;
d = MatrixKType::Zero(x.rows(), z.rows());
float line_len;
point3f p, p0, p1, v, w, line_vec, pnt_vec, nearest;
float t;
for (int i = 0; i < x.rows(); ++i) {
p = point3f(x(i,0), x(i,1), x(i,2));
for (int j = 0; j < z.rows(); ++j) {
p0 = point3f(z(j,0), z(j,1), z(j,2));
p1 = point3f(z(j,3), z(j,4), z(j,5));
line_vec = p1 - p0;
line_len = line_vec.norm();
pnt_vec = p - p0;
if (line_len < EPSILON) {
d(i,j) = (p-p0).norm();
}
else {
double c1 = pnt_vec.dot(line_vec);
double c2 = line_vec.dot(line_vec);
if ( c1 <= 0) {
d(i,j) = (p - p0).norm();
}
else if (c2 <= c1) {
d(i,j) = (p - p1).norm();
}
else{
double b = c1 / c2;
nearest = p0 + (line_vec*b);
d(i,j) = (p - nearest).norm();
}
}
}
}
}
/*
* @brief Matern3 kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
*/
void covMaterniso3(const MatrixXType &x, const MatrixXType &z, MatrixKType &Kxz) const {
dist(1.73205 / ell * x, 1.73205 / ell * z, Kxz);
Kxz = ((1 + Kxz.array()) * exp(-Kxz.array())).matrix() * sf2;
}
/*
* @brief Sparse kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
* @ref A sparse covariance function for exact gaussian process inference in large datasets.
*/
void covSparse(const MatrixXType &x, const MatrixXType &z, MatrixKType &Kxz) const {
dist(x / ell, z / ell, Kxz);
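// Closed form of the compactly-supported sparse kernel from the paper
// referenced above, with normalized distance d = ||x - z|| / ell:
//   k(d) = sf2 * [ (2 + cos(2*pi*d)) * (1 - d) / 3 + sin(2*pi*d) / (2*pi) ]
// k(d) <= 0 once d >= 1, which the cleanup loop below clips to zero.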
Kxz = (((2.0f + (Kxz * 2.0f * 3.1415926f).array().cos()) * (1.0f - Kxz.array()) / 3.0f) +
(Kxz * 2.0f * 3.1415926f).array().sin() / (2.0f * 3.1415926f)).matrix() * sf2;
// Clean up for values with distance outside length scale
// Possible because Kxz <= 0 when dist >= ell
for (int i = 0; i < Kxz.rows(); ++i)
{
for (int j = 0; j < Kxz.cols(); ++j)
if (Kxz(i,j) < 0.0)
Kxz(i,j) = 0.0f;
}
}
/*
* @brief Sparse kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
* @ref A sparse covariance function for exact gaussian process inference in large datasets.
*/
void covSparseLine(const MatrixPType &x, const MatrixXType &z, MatrixKType &Kxz) const {
point_to_line_dist(x, z, Kxz); // Check on this
Kxz /= ell;
Kxz = (((2.0f + (Kxz * 2.0f * 3.1415926f).array().cos()) * (1.0f - Kxz.array()) / 3.0f) +
(Kxz * 2.0f * 3.1415926f).array().sin() / (2.0f * 3.1415926f)).matrix() * sf2;
// Clean up for values with distance outside length scale
// Possible because Kxz <= 0 when dist >= ell
for (int i = 0; i < Kxz.rows(); ++i)
{
for (int j = 0; j < Kxz.cols(); ++j)
if (Kxz(i,j) < 0.0)
Kxz(i,j) = 0.0f;
}
}
T sf2; // signal variance
T ell; // length-scale
MatrixXType x; // temporary storage of training data
MatrixYType y; // temporary storage of training labels
bool trained; // true if bgkinference stored training data
};
typedef BGKLInference<3, float> BGKL3f;
}
#endif // LA3DM_BGKL_H
| 8,306 | 38.183962 | 101 | h |

la3dm-master/include/bgkloctomap/bgkloctomap.h

#ifndef LA3DM_BGKL_OCTOMAP_H
#define LA3DM_BGKL_OCTOMAP_H
#include <unordered_map>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include "rtree.h"
#include "bgklblock.h"
#include "bgkloctree_node.h"
#include "point6f.h"
namespace la3dm {
/// PCL PointCloud types as input
typedef pcl::PointXYZ PCLPointType;
typedef pcl::PointCloud<PCLPointType> PCLPointCloud;
/*
* @brief BGKLOctoMap
*
* Bayesian Generalized Kernel Inference for Occupancy Map Prediction
* The space is partitioned by Blocks in which OcTrees with fixed
* depth are rooted. Occupancy values in one Block are predicted by
* its ExtendedBlock via Bayesian generalized kernel inference.
*/
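// A minimal usage sketch (illustrative; the constructor arguments mirror the
// values in config/methods/bgkloctomap.yaml and are assumptions, not the only
// valid settings):
//
//   la3dm::BGKLOctoMap map(0.1f, 3, 0.1f, 0.2f, 0.3f, 0.7f, 0.15f, 0.001f, 0.001f);
//   map.insert_pointcloud(cloud, origin, 0.1f, 0.3f, -1);
//   for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it)
//       if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
//           // occupied cell at it.get_loc(), edge length it.get_size()
//       }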
class BGKLOctoMap {
public:
/// Types used internally
typedef std::vector<point3f> PointCloud;
typedef std::pair<point3f, float> GPPointType;
typedef std::pair<point6f, float> GPLineType; // generalizes GPPointType
typedef std::vector<GPPointType> GPPointCloud;
typedef std::vector<GPLineType> GPLineCloud; // generalizes GPPointCloud
typedef RTree<int, float, 3, float> MyRTree;
public:
BGKLOctoMap();
/*
* @param resolution (default 0.1m)
* @param block_depth maximum depth of OcTree (default 4)
* @param sf2 signal variance in GPs (default 1.0)
* @param ell length-scale in GPs (default 1.0)
* @param free_thresh free threshold for Occupancy probability (default 0.3)
* @param occupied_thresh occupied threshold for Occupancy probability (default 0.7)
* @param var_thresh threshold on variance to distinguish known/unknown (default 0.15)
* @param prior_A prior weight of Occupied expectation (default 0.001)
* @param prior_B prior weight of Free expectation (default 0.001)
*/
BGKLOctoMap(float resolution,
unsigned short block_depth,
float sf2,
float ell,
float free_thresh,
float occupied_thresh,
float var_thresh,
float prior_A,
float prior_B);
~BGKLOctoMap();
/// Set resolution.
void set_resolution(float resolution);
/// Set block max depth.
void set_block_depth(unsigned short max_depth);
/// Get resolution.
inline float get_resolution() const { return resolution; }
/// Get block max depth.
inline float get_block_depth() const { return block_depth; }
/*
* @brief Insert PCL PointCloud into BGKLOctoMaps.
* @param cloud one scan in PCLPointCloud format
* @param origin sensor origin in the scan
* @param ds_resolution downsampling resolution for PCL VoxelGrid filtering (-1 if no downsampling)
* @param free_res resolution for sampling free training points along sensor beams (default 2.0)
* @param max_range maximum range for beams to be considered as valid measurements (-1 if no limitation)
*/
void insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res = 2.0f,
float max_range = -1);
void insert_training_data(const GPLineCloud &cloud);
/// Get bounding box of the map.
void get_bbox(point3f &lim_min, point3f &lim_max) const;
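/*
* @brief Cell-by-cell ray traversal: an integer 3-D line walk in the spirit
* of Bresenham / Amanatides-Woo. This summary is an interpretation of the
* code below, not original documentation. The caster steps from start to
* end one cell at a time and re-hashes into the neighboring Block whenever
* an index leaves the current Block's bounds.
*/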
class RayCaster {
public:
RayCaster(const BGKLOctoMap *map, const point3f &start, const point3f &end) : map(map) {
assert(map != nullptr);
_block_key = block_to_hash_key(start);
block = map->search(_block_key);
lim = static_cast<unsigned short>(pow(2, map->block_depth - 1));
if (block != nullptr) {
block->get_index(start, x, y, z);
block_lim = block->get_center();
block_size = block->size;
current_p = start;
resolution = map->resolution;
int x0 = static_cast<int>((start.x() / resolution));
int y0 = static_cast<int>((start.y() / resolution));
int z0 = static_cast<int>((start.z() / resolution));
int x1 = static_cast<int>((end.x() / resolution));
int y1 = static_cast<int>((end.y() / resolution));
int z1 = static_cast<int>((end.z() / resolution));
dx = abs(x1 - x0);
dy = abs(y1 - y0);
dz = abs(z1 - z0);
n = 1 + dx + dy + dz;
x_inc = x1 > x0 ? 1 : (x1 == x0 ? 0 : -1);
y_inc = y1 > y0 ? 1 : (y1 == y0 ? 0 : -1);
z_inc = z1 > z0 ? 1 : (z1 == z0 ? 0 : -1);
xy_error = dx - dy;
xz_error = dx - dz;
yz_error = dy - dz;
dx *= 2;
dy *= 2;
dz *= 2;
} else {
n = 0;
}
}
inline bool end() const { return n <= 0; }
bool next(point3f &p, OcTreeNode &node, BlockHashKey &block_key, OcTreeHashKey &node_key) {
assert(!end());
bool valid = false;
unsigned short index = x + y * lim + z * lim * lim;
node_key = Block::index_map[index];
block_key = _block_key;
if (block != nullptr) {
valid = true;
node = (*block)[node_key];
current_p = block->get_point(x, y, z);
p = current_p;
} else {
p = current_p;
}
if (xy_error > 0 && xz_error > 0) {
x += x_inc;
current_p.x() += x_inc * resolution;
xy_error -= dy;
xz_error -= dz;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error < 0 && yz_error > 0) {
y += y_inc;
current_p.y() += y_inc * resolution;
xy_error += dx;
yz_error -= dz;
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
} else if (yz_error < 0 && xz_error < 0) {
z += z_inc;
current_p.z() += z_inc * resolution;
xz_error += dx;
yz_error += dy;
if (z >= lim || z < 0) {
block_lim.z() += z_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
z = z_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error == 0) {
x += x_inc;
y += y_inc;
n -= 2;
current_p.x() += x_inc * resolution;
current_p.y() += y_inc * resolution;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
}
n--;
return valid;
}
private:
const BGKLOctoMap *map;
Block *block;
point3f block_lim;
float block_size, resolution;
int dx, dy, dz, error, n;
int x_inc, y_inc, z_inc, xy_error, xz_error, yz_error;
unsigned short index, x, y, z, lim;
BlockHashKey _block_key;
point3f current_p;
};
/// LeafIterator for iterating all leaf nodes in blocks
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator(const BGKLOctoMap *map) {
assert(map != nullptr);
block_it = map->block_arr.cbegin();
end_block = map->block_arr.cend();
if (map->block_arr.size() > 0) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
} else {
leaf_it = OcTree::LeafIterator();
end_leaf = OcTree::LeafIterator();
}
}
// just for initializing end iterator
LeafIterator(std::unordered_map<BlockHashKey, Block *>::const_iterator block_it,
OcTree::LeafIterator leaf_it)
: block_it(block_it), leaf_it(leaf_it), end_block(block_it), end_leaf(leaf_it) { }
bool operator==(const LeafIterator &other) {
return (block_it == other.block_it) && (leaf_it == other.leaf_it);
}
bool operator!=(const LeafIterator &other) {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
++leaf_it;
if (leaf_it == end_leaf) {
++block_it;
if (block_it != end_block) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
}
}
return *this;
}
OcTreeNode &operator*() const {
return *leaf_it;
}
std::vector<point3f> get_pruned_locs() const {
std::vector<point3f> pruned_locs;
point3f center = get_loc();
float size = get_size();
float x0 = center.x() - size * 0.5 + Block::resolution * 0.5;
float y0 = center.y() - size * 0.5 + Block::resolution * 0.5;
float z0 = center.z() - size * 0.5 + Block::resolution * 0.5;
float x1 = center.x() + size * 0.5;
float y1 = center.y() + size * 0.5;
float z1 = center.z() + size * 0.5;
for (float x = x0; x < x1; x += Block::resolution) {
for (float y = y0; y < y1; y += Block::resolution) {
for (float z = z0; z < z1; z += Block::resolution) {
pruned_locs.emplace_back(x, y, z);
}
}
}
return pruned_locs;
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline point3f get_loc() const {
return block_it->second->get_loc(leaf_it);
}
inline float get_size() const {
return block_it->second->get_size(leaf_it);
}
private:
std::unordered_map<BlockHashKey, Block *>::const_iterator block_it;
std::unordered_map<BlockHashKey, Block *>::const_iterator end_block;
OcTree::LeafIterator leaf_it;
OcTree::LeafIterator end_leaf;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); }
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(block_arr.cend(), OcTree::LeafIterator()); }
OcTreeNode search(point3f p) const;
OcTreeNode search(float x, float y, float z) const;
Block *search(BlockHashKey key) const;
inline float get_block_size() const { return block_size; }
private:
/// @return true if point is inside a bounding box given min and max limits.
inline bool gp_point_in_bbox(const GPPointType &p, const point3f &lim_min, const point3f &lim_max) const {
return (p.first.x() > lim_min.x() && p.first.x() < lim_max.x() &&
p.first.y() > lim_min.y() && p.first.y() < lim_max.y() &&
p.first.z() > lim_min.z() && p.first.z() < lim_max.z());
}
/// Get the bounding box of a pointcloud.
void bbox(const GPLineCloud &cloud, point3f &lim_min, point3f &lim_max) const;
/// Get all block indices inside a bounding box.
void get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
std::vector<BlockHashKey> &blocks) const;
/// Get all points inside a bounding box assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
std::vector<int> &out);
/// @return true if point exists inside a bounding box assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max);
/// Get all points inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const BlockHashKey &key, std::vector<int> &out);
/// @return true if point exists inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const BlockHashKey &key);
/// Get all points inside an extended block assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const ExtendedBlock &block, std::vector<int> &out);
/// @return true if point exists inside an extended block assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const ExtendedBlock &block);
/// RTree callback function
static bool count_callback(int p, void *arg);
/// RTree callback function
static bool search_callback(int p, void *arg);
/// Downsample PCLPointCloud using PCL VoxelGrid Filtering.
void downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const;
/// Sample free training points along sensor beams.
void beam_sample(const point3f &hits, const point3f &origin, PointCloud &frees,
float free_resolution) const;
/// Get training data from one sensor scan.
void get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPLineCloud &xy, GPLineCloud &rays, std::vector<int> &ray_idx) const;
float resolution;
float block_size;
unsigned short block_depth;
std::unordered_map<BlockHashKey, Block *> block_arr;
MyRTree rtree;
};
}
#endif // LA3DM_BGKL_OCTOMAP_H

la3dm-master/include/bgkloctomap/bgkloctree.h

#ifndef LA3DM_BGKL_OCTREE_H
#define LA3DM_BGKL_OCTREE_H
#include <stack>
#include <vector>
#include "point3f.h"
#include "bgkloctree_node.h"
namespace la3dm {
/// Hash key to index OcTree nodes given depth and the index in that layer.
typedef int OcTreeHashKey;
/// Convert from node to hash key.
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned short index);
/// Convert from hash key to node.
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned short &index);
/*
* @brief A simple OcTree to organize occupancy data in one block.
*
* OcTree doesn't store positions of nodes in order to reduce memory usage.
* The nodes in OcTrees are indexed by OcTreeHashKey which can be used to
* retrieve positions later (See Block).
* For the purpose of mapping, this OcTree has fixed depth which should be
* set before using OcTrees.
*/
class OcTree {
friend class BGKLOctoMap;
public:
OcTree();
~OcTree();
OcTree(const OcTree &other);
OcTree &operator=(const OcTree &other);
/*
* @brief Recursively prune OcTreeNodes with the same state.
*
* Prune nodes by setting nodes to PRUNED.
* Delete the layer if all nodes are pruned.
*/
bool prune();
/// @return true if this node is a leaf node.
bool is_leaf(OcTreeHashKey key) const;
/// @return true if this node is a leaf node.
bool is_leaf(unsigned short depth, unsigned short index) const;
/// @return true if this node exists and is not pruned.
bool search(OcTreeHashKey key) const;
/// @return Occupancy of the node (without checking if it exists!)
OcTreeNode &operator[](OcTreeHashKey key) const;
/// Leaf iterator for OcTrees: iterate all leaf nodes not pruned.
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator() : tree(nullptr) { }
LeafIterator(const OcTree *tree)
: tree(tree != nullptr && tree->node_arr != nullptr ? tree : nullptr) {
if (tree != nullptr) {
stack.emplace(0, 0);
stack.emplace(0, 0);
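// The root (0, 0) is pushed twice on purpose: operator++ below pops once
// unconditionally, then expands the remaining copy into its children via
// single_inc() if the root is not a leaf.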
++(*this);
}
}
LeafIterator(const LeafIterator &other) : tree(other.tree), stack(other.stack) { }
LeafIterator &operator=(const LeafIterator &other) {
tree = other.tree;
stack = other.stack;
return *this;
}
bool operator==(const LeafIterator &other) const {
return (tree == other.tree) &&
(stack.size() == other.stack.size()) &&
(stack.size() == 0 || (stack.size() > 0 &&
(stack.top().depth == other.stack.top().depth) &&
(stack.top().index == other.stack.top().index)));
}
bool operator!=(const LeafIterator &other) const {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
if (stack.empty()) {
tree = nullptr;
} else {
stack.pop();
while (!stack.empty() && !tree->is_leaf(stack.top().depth, stack.top().index))
single_inc();
if (stack.empty())
tree = nullptr;
}
return *this;
}
inline OcTreeNode &operator*() const {
return (*tree)[get_hash_key()];
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline OcTreeHashKey get_hash_key() const {
OcTreeHashKey key = node_to_hash_key(stack.top().depth, stack.top().index);
return key;
}
protected:
void single_inc() {
StackElement top(stack.top());
stack.pop();
for (int i = 0; i < 8; ++i) {
stack.emplace(top.depth + 1, top.index * 8 + i);
}
}
struct StackElement {
unsigned short depth;
unsigned short index;
StackElement(unsigned short depth, unsigned short index)
: depth(depth), index(index) { }
};
const OcTree *tree;
std::stack<StackElement, std::vector<StackElement> > stack;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); };
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(nullptr); };
private:
OcTreeNode **node_arr;
static unsigned short max_depth;
};
}
#endif // LA3DM_BGKL_OCTREE_H

la3dm-master/include/bgkloctomap/bgkloctree_node.h

#ifndef LA3DM_BGKL_OCCUPANCY_H
#define LA3DM_BGKL_OCCUPANCY_H
#include <iostream>
#include <fstream>
namespace la3dm {
/// Occupancy state: before pruning: FREE, OCCUPIED, UNKNOWN; after pruning: PRUNED
enum class State : char {
FREE, OCCUPIED, UNKNOWN, PRUNED
};
/*
* @brief Inference outputs and occupancy state.
*
* Occupancy has member variables: m_A and m_B (kernel densities of positive
* and negative class, respectively) and State.
* Before using this class, set the static member variables first.
*/
class Occupancy {
friend std::ostream &operator<<(std::ostream &os, const Occupancy &oc);
friend std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc);
friend std::ifstream &operator>>(std::ifstream &is, Occupancy &oc);
friend class BGKLOctoMap;
public:
/*
* @brief Constructors and destructor.
*/
Occupancy() : m_A(Occupancy::prior_A), m_B(Occupancy::prior_B), state(State::UNKNOWN) { classified = false; }
Occupancy(float A, float B);
Occupancy(const Occupancy &other) : m_A(other.m_A), m_B(other.m_B), state(other.state) { }
Occupancy &operator=(const Occupancy &other) {
m_A = other.m_A;
m_B = other.m_B;
state = other.state;
return *this;
}
~Occupancy() { }
/*
* @brief Exact updates for nonparametric Bayesian kernel inference
* @param ybar kernel density estimate of positive class (occupied)
* @param kbar kernel density of negative class (unoccupied)
*/
void update(float ybar, float kbar);
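// (The body of update() lives in the .cpp; from the BGK formulation the
// update is presumably m_A += ybar and m_B += kbar - ybar, a conjugate
// Beta-posterior update. This note is an inference, not taken from this file.)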
/// Get probability of occupancy.
float get_prob() const;
/// Get variance of occupancy (uncertainty)
inline float get_var() const { return (m_A * m_B) / ( (m_A + m_B) * (m_A + m_B) * (m_A + m_B + 1.0f)); }
/*
* @brief Get occupancy state of the node.
* @return occupancy state (see State).
*/
inline State get_state() const { return state; }
/// Prune current node; set state to PRUNED.
inline void prune() { state = State::PRUNED; }
/// Only FREE and OCCUPIED nodes can be equal.
inline bool operator==(const Occupancy &rhs) const {
return this->state != State::UNKNOWN && this->state == rhs.state;
}
bool classified;
private:
float m_A;
float m_B;
State state;
static float sf2;
static float ell; // length-scale
static float prior_A; // prior on alpha
static float prior_B; // prior on beta
static float free_thresh; // FREE occupancy threshold
static float occupied_thresh; // OCCUPIED occupancy threshold
static float var_thresh;
};
typedef Occupancy OcTreeNode;
}
#endif // LA3DM_BGKL_OCCUPANCY_H

la3dm-master/include/bgklvoctomap/bgklvblock.h

#ifndef LA3DM_BGKLV_BLOCK_H
#define LA3DM_BGKLV_BLOCK_H
#include <unordered_map>
#include <array>
#include "point3f.h"
#include "bgklvoctree_node.h"
#include "bgklvoctree.h"
namespace la3dm {
/// Hash key to index Block given block's center.
typedef int64_t BlockHashKey;
/// Initialize Look-Up Table
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth);
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map,
unsigned short max_depth);
/// Extended Block
#ifdef PREDICT
typedef std::array<BlockHashKey, 27> ExtendedBlock;
#else
typedef std::array<BlockHashKey, 7> ExtendedBlock;
#endif
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(point3f center);
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(float x, float y, float z);
/// Convert from hash key to block.
point3f hash_key_to_block(BlockHashKey key);
/// Get current block's extended block.
ExtendedBlock get_extended_block(BlockHashKey key);
/*
* @brief Block is built on top of OcTree, providing the functions to locate nodes.
*
* Block stores the information needed to locate each OcTreeNode's position:
* fixed resolution, fixed block_size, both of which must be initialized.
* The localization is implemented using a Look-Up Table.
*/
class Block : public OcTree {
friend BlockHashKey block_to_hash_key(point3f center);
friend BlockHashKey block_to_hash_key(float x, float y, float z);
friend point3f hash_key_to_block(BlockHashKey key);
friend ExtendedBlock get_extended_block(BlockHashKey key);
friend class BGKLVOctoMap;
public:
Block();
Block(point3f center);
/// @return location of the OcTreeNode given OcTree's LeafIterator.
inline point3f get_loc(const LeafIterator &it) const {
return Block::key_loc_map[it.get_hash_key()] + center;
}
/// @return size of the OcTreeNode given OcTree's LeafIterator.
inline float get_size(const LeafIterator &it) const {
unsigned short depth;
unsigned long index;
hash_key_to_node(it.get_hash_key(), depth, index);
return float(size / pow(2, depth));
}
/// @return center of current Block.
inline point3f get_center() const { return center; }
/// @return min lim of current Block.
inline point3f get_lim_min() const { return center - point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return max lim of current Block.
inline point3f get_lim_max() const { return center + point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return ExtendedBlock of current Block.
ExtendedBlock get_extended_block() const;
OcTreeHashKey get_node(unsigned short x, unsigned short y, unsigned short z) const;
point3f get_point(unsigned short x, unsigned short y, unsigned short z) const;
void get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const;
OcTreeNode &search(float x, float y, float z) const;
OcTreeNode &search(point3f p) const;
private:
// Look-Up Table
static std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
static std::unordered_map<unsigned short, OcTreeHashKey> index_map;
static float resolution;
static float size;
static unsigned short cell_num;
point3f center;
};
}
#endif // LA3DM_BGKLV_BLOCK_H

la3dm-master/include/bgklvoctomap/bgklvinference.h

#ifndef LA3DM_BGKLV_H
#define LA3DM_BGKLV_H

#include <cassert>
#include <vector>
#include <Eigen/Dense>
#include "point3f.h"
namespace la3dm {
/*
* @brief Bayesian Generalized Kernel Inference on Bernoulli distribution
* @param dim dimension of data (2, 3, etc.)
* @param T data type (float, double, etc.)
* @ref Nonparametric Bayesian inference on multivariate exponential families
*/
template<int dim, typename T>
class BGKLVInference {
public:
/// Eigen matrix type for training and test data and kernel
using MatrixXType = Eigen::Matrix<T, -1, 2*dim, Eigen::RowMajor>;
using MatrixPType = Eigen::Matrix<T, -1, dim, Eigen::RowMajor>;
using MatrixKType = Eigen::Matrix<T, -1, -1, Eigen::RowMajor>;
using MatrixDKType = Eigen::Matrix<T, -1, 1>;
using MatrixYType = Eigen::Matrix<T, -1, 1>;
float EPSILON = 0.0001;
BGKLVInference(T sf2, T ell) : sf2(sf2), ell(ell), trained(false) { }
/*
* @brief Fit BGKLV Model
* @param x input vector (2*dim*N, row major: N line segments given by two endpoints each)
* @param y target vector (N)
*/
void train(const std::vector<T> &x, const std::vector<T> &y) {
assert(x.size() % (2*dim) == 0 && (int) (x.size() / (2*dim)) == y.size());
MatrixXType _x = Eigen::Map<const MatrixXType>(x.data(), x.size() / (2*dim), 2*dim);
MatrixYType _y = Eigen::Map<const MatrixYType>(y.data(), y.size(), 1);
train(_x, _y);
}
/*
* @brief Fit BGKLV Model
* @param x input matrix (N x 2*dim)
* @param y target matrix (N x 1)
*/
void train(const MatrixXType &x, const MatrixYType &y) {
// std::cout << "training pt2" << std::endl;
this->xt = MatrixXType(x);
this->yt = MatrixYType(y);
trained = true;
}
/*
* @brief Predict with BGKLV Model
* @param xs input vector (3M, row major)
* @param ybar positive class kernel density estimate (\bar{y})
* @param kbar kernel density estimate (\bar{k})
*/
void predict(const std::vector<T> &xs, std::vector<T> &ybar, std::vector<T> &kbar) const {
assert(xs.size() % dim == 0);
MatrixPType _xs = Eigen::Map<const MatrixPType>(xs.data(), xs.size() / dim, dim);
MatrixYType _ybar, _kbar;
predict(_xs, _ybar, _kbar);
ybar.resize(_ybar.rows());
kbar.resize(_kbar.rows());
for (int r = 0; r < _kbar.rows(); ++r) {
ybar[r] = _ybar(r, 0);
kbar[r] = _kbar(r, 0);
}
}
/*
* @brief Predict with nonparametric Bayesian generalized kernel inference
* @param xs input vector (M x 3)
* @param ybar positive class kernel density estimate (M x 1)
* @param kbar kernel density estimate (M x 1)
*/
void predict(const MatrixPType &xs, MatrixYType &ybar, MatrixYType &kbar) const {
// std::cout << "second prediction step" << std::endl;
assert(trained == true);
MatrixKType Ks;
covSparseLine(xs, xt, Ks);
// std::cout << "computed covsparseline" << std::endl;
ybar = (Ks * yt).array();
kbar = Ks.rowwise().sum().array();
}
private:
/*
* @brief Compute Euclid distances between two vectors.
* @param x input vector
* @param z input vector
* @return d distance matrix
*/
void dist(const MatrixXType &x, const MatrixXType &z, MatrixKType &d) const {
d = MatrixKType::Zero(x.rows(), z.rows());
for (int i = 0; i < x.rows(); ++i) {
d.row(i) = (z.rowwise() - x.row(i)).rowwise().norm();
}
}
void point_to_line_dist(const MatrixPType &x, const MatrixXType &z, MatrixKType &d) const {
assert((x.cols() == 3) && (z.cols() == 6));
d = MatrixKType::Zero(x.rows(), z.rows());
float line_len;
point3f p, p0, p1, v, w, line_vec, pnt_vec, nearest;
float t;
for (int i = 0; i < x.rows(); ++i) {
p = point3f(x(i,0), x(i,1), x(i,2));
for (int j = 0; j < z.rows(); ++j) {
p0 = point3f(z(j,0), z(j,1), z(j,2));
p1 = point3f(z(j,3), z(j,4), z(j,5));
line_vec = p1 - p0;
line_len = line_vec.norm();
pnt_vec = p - p0;
if (line_len < EPSILON) {
d(i,j) = (p-p0).norm();
}
else {
double c1 = pnt_vec.dot(line_vec);
double c2 = line_vec.dot(line_vec);
if ( c1 <= 0) {
d(i,j) = (p - p0).norm();
}
else if (c2 <= c1) {
d(i,j) = (p - p1).norm();
}
else{
double b = c1 / c2;
nearest = p0 + (line_vec*b);
d(i,j) = (p - nearest).norm();
}
}
}
}
}
/*
* @brief Sparse kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
* @ref A sparse covariance function for exact gaussian process inference in large datasets.
*/
void covSparseLine(const MatrixPType &x, const MatrixXType &z, MatrixKType &Kxz) const {
point_to_line_dist(x, z, Kxz); // Check on this
Kxz /= ell;
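// Cap normalized distances at 1 so the sparse kernel below evaluates to
// exactly zero at and beyond its support boundary (k(1) = 0), rather than
// going negative for far-away segments.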
for (int i = 0; i < Kxz.rows(); ++i)
{
for (int j = 0; j < Kxz.cols(); ++j)
if (Kxz(i,j) > 1.0)
Kxz(i,j) = 1.0f;
}
//sparse kernel function
Kxz = (((2.0f + (Kxz * 2.0f * 3.1415926f).array().cos()) * (1.0f - Kxz.array()) / 3.0f) + (Kxz * 2.0f * 3.1415926f).array().sin() / (2.0f * 3.1415926f)).matrix() * sf2;
}
T sf2; // signal variance
T ell; // length-scale
MatrixXType xt; // temporary storage of training data
MatrixYType yt; // temporary storage of training labels
bool trained; // true if bgklvinference stored training data
};
typedef BGKLVInference<3, float> BGKLV3f;
}
#endif // LA3DM_BGKLV_H

la3dm-master/include/bgklvoctomap/bgklvoctomap.h

#ifndef LA3DM_BGKLV_OCTOMAP_H
#define LA3DM_BGKLV_OCTOMAP_H
#include <unordered_map>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include "rtree.h"
#include "bgklvblock.h"
#include "bgklvoctree_node.h"
#include "point6f.h"
namespace la3dm {
/// PCL PointCloud types as input
typedef pcl::PointXYZ PCLPointType;
typedef pcl::PointCloud<PCLPointType> PCLPointCloud;
/*
* @brief BGKLVOctoMap
*
* Bayesian Generalized Kernel Inference for Occupancy Map Prediction
* The space is partitioned by Blocks in which OcTrees with fixed
* depth are rooted. Occupancy values in one Block are predicted by
* each cell via Bayesian generalized kernel inference.
*/
class BGKLVOctoMap {
public:
/// Types used internally
typedef std::vector<point3f> PointCloud;
typedef std::pair<point3f, float> GPPointType;
typedef std::pair<point6f, float> GPLineType; // generalizes GPPointType
typedef std::vector<GPPointType> GPPointCloud;
typedef std::vector<GPLineType> GPLineCloud; // generalizes GPPointCloud
typedef RTree<int, float, 3, float> MyRTree;
public:
BGKLVOctoMap();
/*
* @param resolution (default 0.1m)
* @param block_depth maximum depth of OcTree (default 4)
* @param sf2 signal variance in GPs (default 1.0)
* @param ell length-scale in GPs
* @param free_thresh free threshold for Occupancy probability (default 0.3)
* @param occupied_thresh occupied threshold for Occupancy probability (default 0.7)
* @param var_thresh variance threshold to define UNCERTAIN State (default 0.2)
* @param prior_A prior weight of Occupied expectation (default 0.001)
* @param prior_B prior weight of Free expectation (default 0.001)
* @param original_size boolean whether or not to prune (default true)
* @param MIN_W threshold for UNKNOWN (default 0.001)
*/
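// Construction sketch with the values from config/methods/bgklvoctomap.yaml
// (original_size is not set in that yaml, so `true` here is an assumption):
//
//   la3dm::BGKLVOctoMap map(0.1f, 5, 0.1f, 0.2f, 0.3f, 0.7f, 0.2f,
//                           0.001f, 0.001f, true, 0.001f);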
BGKLVOctoMap(float resolution,
unsigned short block_depth,
float sf2,
float ell,
float free_thresh,
float occupied_thresh,
float var_thresh,
float prior_A,
float prior_B,
bool original_size,
float MIN_W);
~BGKLVOctoMap();
/// Set resolution.
void set_resolution(float resolution);
/// Set block max depth.
void set_block_depth(unsigned short max_depth);
/// Get resolution.
inline float get_resolution() const { return resolution; }
/// Get block max depth.
inline float get_block_depth() const { return block_depth; }
/*
* @brief Insert PCL PointCloud into BGKLVOctoMaps.
* @param cloud one scan in PCLPointCloud format
* @param origin sensor origin in the scan
* @param ds_resolution downsampling resolution for PCL VoxelGrid filtering (-1 if no downsampling)
* @param free_res resolution for sampling free training points along sensor beams (default 2.0)
* @param max_range maximum range for beams to be considered as valid measurements (-1 if no limitation)
*/
void insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res = 2.0f,
float max_range = -1);
void insert_training_data(const GPLineCloud &cloud);
/// Get bounding box of the map.
void get_bbox(point3f &lim_min, point3f &lim_max) const;
class RayCaster {
public:
RayCaster(const BGKLVOctoMap *map, const point3f &start, const point3f &end) : map(map) {
assert(map != nullptr);
_block_key = block_to_hash_key(start);
block = map->search(_block_key);
lim = static_cast<unsigned short>(pow(2, map->block_depth - 1));
if (block != nullptr) {
block->get_index(start, x, y, z);
block_lim = block->get_center();
block_size = block->size;
current_p = start;
resolution = map->resolution;
int x0 = static_cast<int>((start.x() / resolution));
int y0 = static_cast<int>((start.y() / resolution));
int z0 = static_cast<int>((start.z() / resolution));
int x1 = static_cast<int>((end.x() / resolution));
int y1 = static_cast<int>((end.y() / resolution));
int z1 = static_cast<int>((end.z() / resolution));
dx = abs(x1 - x0);
dy = abs(y1 - y0);
dz = abs(z1 - z0);
n = 1 + dx + dy + dz;
x_inc = x1 > x0 ? 1 : (x1 == x0 ? 0 : -1);
y_inc = y1 > y0 ? 1 : (y1 == y0 ? 0 : -1);
z_inc = z1 > z0 ? 1 : (z1 == z0 ? 0 : -1);
xy_error = dx - dy;
xz_error = dx - dz;
yz_error = dy - dz;
dx *= 2;
dy *= 2;
dz *= 2;
} else {
n = 0;
}
}
inline bool end() const { return n <= 0; }
bool next(point3f &p, OcTreeNode &node, BlockHashKey &block_key, OcTreeHashKey &node_key) {
assert(!end());
bool valid = false;
unsigned short index = x + y * lim + z * lim * lim;
node_key = Block::index_map[index];
block_key = _block_key;
if (block != nullptr) {
valid = true;
node = (*block)[node_key];
current_p = block->get_point(x, y, z);
p = current_p;
} else {
p = current_p;
}
if (xy_error > 0 && xz_error > 0) {
x += x_inc;
current_p.x() += x_inc * resolution;
xy_error -= dy;
xz_error -= dz;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error < 0 && yz_error > 0) {
y += y_inc;
current_p.y() += y_inc * resolution;
xy_error += dx;
yz_error -= dz;
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
} else if (yz_error < 0 && xz_error < 0) {
z += z_inc;
current_p.z() += z_inc * resolution;
xz_error += dx;
yz_error += dy;
if (z >= lim || z < 0) {
block_lim.z() += z_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
z = z_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error == 0) {
x += x_inc;
y += y_inc;
n -= 2;
current_p.x() += x_inc * resolution;
current_p.y() += y_inc * resolution;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
}
n--;
return valid;
}
private:
const BGKLVOctoMap *map;
Block *block;
point3f block_lim;
float block_size, resolution;
int dx, dy, dz, error, n;
int x_inc, y_inc, z_inc, xy_error, xz_error, yz_error;
unsigned short index, x, y, z, lim;
BlockHashKey _block_key;
point3f current_p;
};
/// LeafIterator for iterating all leaf nodes in blocks
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator(const BGKLVOctoMap *map) {
assert(map != nullptr);
block_it = map->block_arr.cbegin();
end_block = map->block_arr.cend();
if (map->block_arr.size() > 0) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
} else {
leaf_it = OcTree::LeafIterator();
end_leaf = OcTree::LeafIterator();
}
}
// just for initializing end iterator
LeafIterator(std::unordered_map<BlockHashKey, Block *>::const_iterator block_it,
OcTree::LeafIterator leaf_it)
: block_it(block_it), leaf_it(leaf_it), end_block(block_it), end_leaf(leaf_it) { }
bool operator==(const LeafIterator &other) {
return (block_it == other.block_it) && (leaf_it == other.leaf_it);
}
bool operator!=(const LeafIterator &other) {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
++leaf_it;
if (leaf_it == end_leaf) {
++block_it;
if (block_it != end_block) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
}
}
return *this;
}
OcTreeNode &operator*() const {
return *leaf_it;
}
std::vector<point3f> get_pruned_locs() const {
std::vector<point3f> pruned_locs;
point3f center = get_loc();
float size = get_size();
float x0 = center.x() - size * 0.5 + Block::resolution * 0.5;
float y0 = center.y() - size * 0.5 + Block::resolution * 0.5;
float z0 = center.z() - size * 0.5 + Block::resolution * 0.5;
float x1 = center.x() + size * 0.5;
float y1 = center.y() + size * 0.5;
float z1 = center.z() + size * 0.5;
for (float x = x0; x < x1; x += Block::resolution) {
for (float y = y0; y < y1; y += Block::resolution) {
for (float z = z0; z < z1; z += Block::resolution) {
pruned_locs.emplace_back(x, y, z);
}
}
}
return pruned_locs;
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline point3f get_loc() const {
return block_it->second->get_loc(leaf_it);
}
inline float get_size() const {
return block_it->second->get_size(leaf_it);
}
private:
std::unordered_map<BlockHashKey, Block *>::const_iterator block_it;
std::unordered_map<BlockHashKey, Block *>::const_iterator end_block;
OcTree::LeafIterator leaf_it;
OcTree::LeafIterator end_leaf;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); }
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(block_arr.cend(), OcTree::LeafIterator()); }
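        // Usage sketch (assumed from the iterator API above): reading the
        // location, size, and occupancy probability of every leaf in the map.
        //
        //   for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it) {
        //       point3f loc = it.get_loc();
        //       float size = it.get_size();
        //       float p_occ = it.get_node().get_prob();
        //   }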
OcTreeNode search(point3f p) const;
OcTreeNode search(float x, float y, float z) const;
Block *search(BlockHashKey key) const;
inline float get_block_size() const { return block_size; }
private:
/// @return true if point is inside a bounding box given min and max limits.
inline bool gp_point_in_bbox(const GPPointType &p, const point3f &lim_min, const point3f &lim_max) const {
return (p.first.x() > lim_min.x() && p.first.x() < lim_max.x() &&
p.first.y() > lim_min.y() && p.first.y() < lim_max.y() &&
p.first.z() > lim_min.z() && p.first.z() < lim_max.z());
}
/// Get the bounding box of a pointcloud.
void bbox(const GPLineCloud &cloud, point3f &lim_min, point3f &lim_max) const;
/// Get all block indices inside a bounding box.
void get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
std::vector<BlockHashKey> &blocks) const;
/// Get all points inside a bounding box assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
std::vector<int> &out);
        /// @return nonzero if any point exists inside a bounding box assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max);
/// Get all points inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const BlockHashKey &key, std::vector<int> &out);
        /// @return nonzero if any point exists inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const BlockHashKey &key);
/// Get all points inside an extended block assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const ExtendedBlock &block, std::vector<int> &out);
        /// @return nonzero if any point exists inside an extended block assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const ExtendedBlock &block);
/// RTree callback function
static bool count_callback(int p, void *arg);
/// RTree callback function
static bool search_callback(int p, void *arg);
/// Downsample PCLPointCloud using PCL VoxelGrid Filtering.
void downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const;
/// Sample free training points along sensor beams.
void beam_sample(const point3f &hits, const point3f &origin, PointCloud &frees,
float free_resolution) const;
/// Get training data from one sensor scan.
void get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPLineCloud &xy, GPLineCloud &rays, std::vector<int> &ray_idx) const;
float resolution;
float block_size;
unsigned short block_depth;
std::unordered_map<BlockHashKey, Block *> block_arr;
MyRTree rtree;
};
}
#endif // LA3DM_BGKLVOCTOMAP_H
| 16,075 | 40.43299 | 140 | h |
la3dm | la3dm-master/include/bgklvoctomap/bgklvoctree.h | #ifndef LA3DM_BGKLV_OCTREE_H
#define LA3DM_BGKLV_OCTREE_H
#include <stack>
#include <vector>
#include "point3f.h"
#include "bgklvoctree_node.h"
namespace la3dm {
/// Hash key to index OcTree nodes given depth and the index in that layer.
typedef int OcTreeHashKey;
/// Convert from node to hask key.
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned long index);
/// Convert from hash key to node.
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned long &index);
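    // One possible packing, shown only as an assumed sketch (the real definition
    // lives in the .cpp): depth occupies the high bits, the in-layer index the low bits.
    //
    //   OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned long index) {
    //       return (OcTreeHashKey) ((depth << 16) + index);
    //   }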
/*
* @brief A simple OcTree to organize occupancy data in one block.
*
* OcTree doesn't store positions of nodes in order to reduce memory usage.
* The nodes in OcTrees are indexed by OcTreeHashKey which can be used to
* retrieve positions later (See Block).
* For the purpose of mapping, this OcTree has fixed depth which should be
* set before using OcTrees.
*/
class OcTree {
friend class BGKLVOctoMap;
public:
OcTree();
~OcTree();
OcTree(const OcTree &other);
OcTree &operator=(const OcTree &other);
/*
         * @brief Recursively prune OcTreeNodes with the same state.
*
* Prune nodes by setting nodes to PRUNED.
* Delete the layer if all nodes are pruned.
*/
bool prune();
/// @return true if this node is a leaf node.
bool is_leaf(OcTreeHashKey key) const;
/// @return true if this node is a leaf node.
bool is_leaf(unsigned short depth, unsigned long index) const;
/// @return true if this node exists and is not pruned.
bool search(OcTreeHashKey key) const;
/// @return Occupancy of the node (without checking if it exists!)
OcTreeNode &operator[](OcTreeHashKey key) const;
/// Leaf iterator for OcTrees: iterate all leaf nodes not pruned.
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator() : tree(nullptr) { }
LeafIterator(const OcTree *tree)
: tree(tree != nullptr && tree->node_arr != nullptr ? tree : nullptr) {
if (tree != nullptr) {
stack.emplace(0, 0);
stack.emplace(0, 0);
++(*this);
}
}
LeafIterator(const LeafIterator &other) : tree(other.tree), stack(other.stack) { }
LeafIterator &operator=(const LeafIterator &other) {
tree = other.tree;
stack = other.stack;
return *this;
}
bool operator==(const LeafIterator &other) const {
return (tree == other.tree) &&
(stack.size() == other.stack.size()) &&
(stack.size() == 0 || (stack.size() > 0 &&
(stack.top().depth == other.stack.top().depth) &&
(stack.top().index == other.stack.top().index)));
}
bool operator!=(const LeafIterator &other) const {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
if (stack.empty()) {
tree = nullptr;
} else {
stack.pop();
while (!stack.empty() && !tree->is_leaf(stack.top().depth, stack.top().index))
single_inc();
if (stack.empty())
tree = nullptr;
}
return *this;
}
inline OcTreeNode &operator*() const {
return (*tree)[get_hash_key()];
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline OcTreeHashKey get_hash_key() const {
OcTreeHashKey key = node_to_hash_key(stack.top().depth, stack.top().index);
return key;
}
protected:
void single_inc() {
StackElement top(stack.top());
stack.pop();
for (int i = 0; i < 8; ++i) {
stack.emplace(top.depth + 1, top.index * 8 + i);
}
}
struct StackElement {
unsigned short depth;
unsigned long index;
StackElement(unsigned short depth, unsigned long index)
: depth(depth), index(index) { }
};
const OcTree *tree;
std::stack<StackElement, std::vector<StackElement> > stack;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); };
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(nullptr); };
private:
OcTreeNode **node_arr;
static unsigned short max_depth;
};
}
#endif // LA3DM_BGKLV_OCTREE_H
| 5,281 | 31.604938 | 98 | h |
la3dm | la3dm-master/include/bgklvoctomap/bgklvoctree_node.h | #ifndef LA3DM_BGKLV_OCCUPANCY_H
#define LA3DM_BGKLV_OCCUPANCY_H
#include <iostream>
#include <fstream>
#include <cmath>
namespace la3dm {
/// Occupancy state: before pruning: FREE, OCCUPIED, UNKNOWN, UNCERTAIN; after pruning: PRUNED
enum class State : char {
FREE, OCCUPIED, UNKNOWN, UNCERTAIN, PRUNED
};
/*
     * @brief Inference outputs and occupancy state.
*
* Occupancy has member variables: m_A and m_B (kernel densities of positive
* and negative class, respectively) and State.
* Before using this class, set the static member variables first.
*/
class Occupancy {
friend std::ostream &operator<<(std::ostream &os, const Occupancy &oc);
friend std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc);
friend std::ifstream &operator>>(std::ifstream &is, Occupancy &oc);
friend class BGKLVOctoMap;
public:
/*
* @brief Constructors and destructor.
*/
Occupancy() : m_A(Occupancy::prior_A), m_B(Occupancy::prior_B), state(State::UNKNOWN) { classified = false; }
Occupancy(float A, float B);
Occupancy(const Occupancy &other) : m_A(other.m_A), m_B(other.m_B), state(other.state) { }
Occupancy &operator=(const Occupancy &other) {
m_A = other.m_A;
m_B = other.m_B;
state = other.state;
return *this;
}
~Occupancy() { }
/*
* @brief Exact updates for nonparametric Bayesian kernel inference
* @param ybar kernel density estimate of positive class (occupied)
         * @param kbar total kernel density (the free-class share is kbar - ybar)
*/
void update(float ybar, float kbar);
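        // A sketch of the expected conjugate update (an assumption based on the
        // Beta-Bernoulli model described above, not copied from the .cpp):
        //
        //   m_A += ybar;        // kernel mass supporting the occupied class
        //   m_B += kbar - ybar; // remaining kernel mass supports the free class
        //
        // The posterior mean m_A / (m_A + m_B) is what get_prob() is expected
        // to report.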
/// Get probability of occupancy.
float get_prob() const;
/// Get variance of occupancy (uncertainty)
float get_var() const;
/*
* @brief Get occupancy state of the node.
* @return occupancy state (see State).
*/
inline State get_state() const { return state; }
/// Prune current node; set state to PRUNED.
inline void prune() { state = State::PRUNED; }
/// Only FREE and OCCUPIED nodes can be equal.
inline bool operator==(const Occupancy &rhs) const {
return this->state != State::UNKNOWN && this->state != State::UNCERTAIN && this->state == rhs.state;
}
bool classified;
private:
float m_A;
float m_B;
State state;
static float sf2;
static float ell; // length-scale
static float min_W;
static float prior_A; // prior on alpha
static float prior_B; // prior on beta
static bool original_size;
static float free_thresh; // FREE occupancy threshold
static float occupied_thresh; // OCCUPIED occupancy threshold
static float var_thresh;
};
typedef Occupancy OcTreeNode;
}
#endif // LA3DM_BGKLV_OCCUPANCY_H
| 3,010 | 28.811881 | 117 | h |
la3dm | la3dm-master/include/bgkoctomap/bgkblock.h | #ifndef LA3DM_BGK_BLOCK_H
#define LA3DM_BGK_BLOCK_H
#include <unordered_map>
#include <array>
#include "point3f.h"
#include "bgkoctree_node.h"
#include "bgkoctree.h"
namespace la3dm {
    /// Hash key to index Block given block's center.
typedef int64_t BlockHashKey;
/// Initialize Look-Up Table
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth);
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map,
unsigned short max_depth);
    /// Extended Block: the block itself plus its neighbors (7 = self + 6 face-adjacent blocks; 27 = the full 3x3x3 neighborhood when PREDICT is defined)
#ifdef PREDICT
typedef std::array<BlockHashKey, 27> ExtendedBlock;
#else
typedef std::array<BlockHashKey, 7> ExtendedBlock;
#endif
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(point3f center);
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(float x, float y, float z);
/// Convert from hash key to block.
point3f hash_key_to_block(BlockHashKey key);
/// Get current block's extended block.
ExtendedBlock get_extended_block(BlockHashKey key);
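    // Usage sketch (hedged): mapping a world point to its containing block and back.
    //
    //   BlockHashKey key = block_to_hash_key(1.0f, 2.0f, 0.5f);
    //   point3f center = hash_key_to_block(key);     // center of the containing block
    //   ExtendedBlock nbrs = get_extended_block(key); // block plus its neighbors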
/*
* @brief Block is built on top of OcTree, providing the functions to locate nodes.
*
* Block stores the information needed to locate each OcTreeNode's position:
* fixed resolution, fixed block_size, both of which must be initialized.
     * The localization is implemented using a Look-Up Table.
*/
class Block : public OcTree {
friend BlockHashKey block_to_hash_key(point3f center);
friend BlockHashKey block_to_hash_key(float x, float y, float z);
friend point3f hash_key_to_block(BlockHashKey key);
friend ExtendedBlock get_extended_block(BlockHashKey key);
friend class BGKOctoMap;
public:
Block();
Block(point3f center);
/// @return location of the OcTreeNode given OcTree's LeafIterator.
inline point3f get_loc(const LeafIterator &it) const {
return Block::key_loc_map[it.get_hash_key()] + center;
}
/// @return size of the OcTreeNode given OcTree's LeafIterator.
inline float get_size(const LeafIterator &it) const {
unsigned short depth, index;
hash_key_to_node(it.get_hash_key(), depth, index);
return float(size / pow(2, depth));
}
/// @return center of current Block.
inline point3f get_center() const { return center; }
/// @return min lim of current Block.
inline point3f get_lim_min() const { return center - point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return max lim of current Block.
inline point3f get_lim_max() const { return center + point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return ExtendedBlock of current Block.
ExtendedBlock get_extended_block() const;
OcTreeHashKey get_node(unsigned short x, unsigned short y, unsigned short z) const;
point3f get_point(unsigned short x, unsigned short y, unsigned short z) const;
void get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const;
OcTreeNode &search(float x, float y, float z) const;
OcTreeNode &search(point3f p) const;
private:
        // Look-Up Table
static std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
static std::unordered_map<unsigned short, OcTreeHashKey> index_map;
static float resolution;
static float size;
static unsigned short cell_num;
point3f center;
};
}
#endif // LA3DM_BGK_BLOCK_H
| 3,709 | 32.727273 | 131 | h |
la3dm | la3dm-master/include/bgkoctomap/bgkinference.h | #ifndef LA3DM_BGK_H
#define LA3DM_BGK_H
namespace la3dm {
/*
* @brief Bayesian Generalized Kernel Inference on Bernoulli distribution
* @param dim dimension of data (2, 3, etc.)
* @param T data type (float, double, etc.)
* @ref Nonparametric Bayesian inference on multivariate exponential families
*/
template<int dim, typename T>
class BGKInference {
public:
/// Eigen matrix type for training and test data and kernel
using MatrixXType = Eigen::Matrix<T, -1, dim, Eigen::RowMajor>;
using MatrixKType = Eigen::Matrix<T, -1, -1, Eigen::RowMajor>;
using MatrixDKType = Eigen::Matrix<T, -1, 1>;
using MatrixYType = Eigen::Matrix<T, -1, 1>;
BGKInference(T sf2, T ell) : sf2(sf2), ell(ell), trained(false) { }
/*
* @brief Fit BGK Model
* @param x input vector (3N, row major)
* @param y target vector (N)
*/
void train(const std::vector<T> &x, const std::vector<T> &y) {
assert(x.size() % dim == 0 && (int) (x.size() / dim) == y.size());
MatrixXType _x = Eigen::Map<const MatrixXType>(x.data(), x.size() / dim, dim);
MatrixYType _y = Eigen::Map<const MatrixYType>(y.data(), y.size(), 1);
train(_x, _y);
}
/*
* @brief Fit BGK Model
* @param x input matrix (NX3)
* @param y target matrix (NX1)
*/
void train(const MatrixXType &x, const MatrixYType &y) {
this->x = MatrixXType(x);
this->y = MatrixYType(y);
trained = true;
}
/*
* @brief Predict with BGK Model
* @param xs input vector (3M, row major)
* @param ybar positive class kernel density estimate (\bar{y})
* @param kbar kernel density estimate (\bar{k})
*/
void predict(const std::vector<T> &xs, std::vector<T> &ybar, std::vector<T> &kbar) const {
assert(xs.size() % dim == 0);
MatrixXType _xs = Eigen::Map<const MatrixXType>(xs.data(), xs.size() / dim, dim);
MatrixYType _ybar, _kbar;
predict(_xs, _ybar, _kbar);
ybar.resize(_ybar.rows());
kbar.resize(_kbar.rows());
for (int r = 0; r < _ybar.rows(); ++r) {
ybar[r] = _ybar(r, 0);
kbar[r] = _kbar(r, 0);
}
}
/*
* @brief Predict with nonparametric Bayesian generalized kernel inference
* @param xs input vector (M x 3)
* @param ybar positive class kernel density estimate (M x 1)
* @param kbar kernel density estimate (M x 1)
*/
void predict(const MatrixXType &xs, MatrixYType &ybar, MatrixYType &kbar) const {
assert(trained == true);
MatrixKType Ks;
covSparse(xs, x, Ks);
ybar = (Ks * y).array();
kbar = Ks.rowwise().sum().array();
}
private:
/*
         * @brief Compute pairwise Euclidean distances between two sets of points.
         * @param x input vector
         * @param z input vector
* @return d distance matrix
*/
void dist(const MatrixXType &x, const MatrixXType &z, MatrixKType &d) const {
d = MatrixKType::Zero(x.rows(), z.rows());
for (int i = 0; i < x.rows(); ++i) {
d.row(i) = (z.rowwise() - x.row(i)).rowwise().norm();
}
}
/*
* @brief Matern3 kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
*/
void covMaterniso3(const MatrixXType &x, const MatrixXType &z, MatrixKType &Kxz) const {
dist(1.73205 / ell * x, 1.73205 / ell * z, Kxz);
Kxz = ((1 + Kxz.array()) * exp(-Kxz.array())).matrix() * sf2;
}
/*
* @brief Sparse kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
         * @ref A sparse covariance function for exact Gaussian process inference in large datasets.
*/
void covSparse(const MatrixXType &x, const MatrixXType &z, MatrixKType &Kxz) const {
dist(x / ell, z / ell, Kxz);
Kxz = (((2.0f + (Kxz * 2.0f * 3.1415926f).array().cos()) * (1.0f - Kxz.array()) / 3.0f) +
(Kxz * 2.0f * 3.1415926f).array().sin() / (2.0f * 3.1415926f)).matrix() * sf2;
// Clean up for values with distance outside length scale
// Possible because Kxz <= 0 when dist >= ell
for (int i = 0; i < Kxz.rows(); ++i)
{
for (int j = 0; j < Kxz.cols(); ++j)
if (Kxz(i,j) < 0.0)
Kxz(i,j) = 0.0f;
}
}
T sf2; // signal variance
T ell; // length-scale
MatrixXType x; // temporary storage of training data
MatrixYType y; // temporary storage of training labels
bool trained; // true if bgkinference stored training data
};
typedef BGKInference<3, float> BGK3f;
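    // Usage sketch (hedged; the training values follow the free = 0 / occupied = 1
    // convention implied by the class documentation above):
    //
    //   BGK3f bgk(1.0f, 1.0f); // sf2, ell
    //   std::vector<float> x = {0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f}; // two 3-D points, row major
    //   std::vector<float> y = {1.0f, 0.0f};                         // occupied, free
    //   bgk.train(x, y);
    //   std::vector<float> xs = {0.5f, 0.0f, 0.0f}; // one query point
    //   std::vector<float> ybar, kbar;
    //   bgk.predict(xs, ybar, kbar); // feed into Occupancy::update(ybar[0], kbar[0])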
}
#endif // LA3DM_BGK_H
| 5,140 | 35.460993 | 101 | h |
la3dm | la3dm-master/include/bgkoctomap/bgkoctomap.h | #ifndef LA3DM_BGK_OCTOMAP_H
#define LA3DM_BGK_OCTOMAP_H
#include <unordered_map>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include "rtree.h"
#include "bgkblock.h"
#include "bgkoctree_node.h"
namespace la3dm {
/// PCL PointCloud types as input
typedef pcl::PointXYZ PCLPointType;
typedef pcl::PointCloud<PCLPointType> PCLPointCloud;
/*
* @brief BGKOctoMap
*
* Bayesian Generalized Kernel Inference for Occupancy Map Prediction
* The space is partitioned by Blocks in which OcTrees with fixed
     * depth are rooted. Occupancy values in one Block are predicted by
* its ExtendedBlock via Bayesian generalized kernel inference.
*/
class BGKOctoMap {
public:
/// Types used internally
typedef std::vector<point3f> PointCloud;
typedef std::pair<point3f, float> GPPointType;
typedef std::vector<GPPointType> GPPointCloud;
typedef RTree<GPPointType *, float, 3, float> MyRTree;
public:
BGKOctoMap();
        /*
         * @param resolution (default 0.1m)
         * @param block_depth maximum depth of OcTree (default 4)
         * @param sf2 signal variance in GPs (default 1.0)
         * @param ell length-scale in GPs (default 1.0)
         * @param free_thresh free threshold for Occupancy probability (default 0.3)
         * @param occupied_thresh occupied threshold for Occupancy probability (default 0.7)
         * @param var_thresh maximum variance for Occupancy to be classified as KNOWN State (default 0.02)
         * @param prior_A prior kernel density of the occupied class (prior on alpha)
         * @param prior_B prior kernel density of the free class (prior on beta)
         */
BGKOctoMap(float resolution,
unsigned short block_depth,
float sf2,
float ell,
float free_thresh,
float occupied_thresh,
float var_thresh,
float prior_A,
float prior_B);
~BGKOctoMap();
/// Set resolution.
void set_resolution(float resolution);
/// Set block max depth.
void set_block_depth(unsigned short max_depth);
/// Get resolution.
inline float get_resolution() const { return resolution; }
/// Get block max depth.
inline float get_block_depth() const { return block_depth; }
/*
* @brief Insert PCL PointCloud into BGKOctoMaps.
* @param cloud one scan in PCLPointCloud format
* @param origin sensor origin in the scan
* @param ds_resolution downsampling resolution for PCL VoxelGrid filtering (-1 if no downsampling)
* @param free_res resolution for sampling free training points along sensor beams (default 2.0)
* @param max_range maximum range for beams to be considered as valid measurements (-1 if no limitation)
*/
void insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res = 2.0f,
float max_range = -1);
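        // Usage sketch (hedged; parameter values are illustrative, not the
        // project defaults; `scan` and `origin` are assumed to come from the caller):
        //
        //   BGKOctoMap map(0.1f, 4, 1.0f, 1.0f, 0.3f, 0.7f, 0.02f, 0.001f, 0.001f);
        //   map.insert_pointcloud(scan, origin, 0.1f, 2.0f, -1.0f);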
void insert_training_data(const GPPointCloud &cloud);
/// Get bounding box of the map.
void get_bbox(point3f &lim_min, point3f &lim_max) const;
class RayCaster {
public:
RayCaster(const BGKOctoMap *map, const point3f &start, const point3f &end) : map(map) {
assert(map != nullptr);
_block_key = block_to_hash_key(start);
block = map->search(_block_key);
lim = static_cast<unsigned short>(pow(2, map->block_depth - 1));
if (block != nullptr) {
block->get_index(start, x, y, z);
block_lim = block->get_center();
block_size = block->size;
current_p = start;
resolution = map->resolution;
int x0 = static_cast<int>((start.x() / resolution));
int y0 = static_cast<int>((start.y() / resolution));
int z0 = static_cast<int>((start.z() / resolution));
int x1 = static_cast<int>((end.x() / resolution));
int y1 = static_cast<int>((end.y() / resolution));
int z1 = static_cast<int>((end.z() / resolution));
dx = abs(x1 - x0);
dy = abs(y1 - y0);
dz = abs(z1 - z0);
n = 1 + dx + dy + dz;
x_inc = x1 > x0 ? 1 : (x1 == x0 ? 0 : -1);
y_inc = y1 > y0 ? 1 : (y1 == y0 ? 0 : -1);
z_inc = z1 > z0 ? 1 : (z1 == z0 ? 0 : -1);
xy_error = dx - dy;
xz_error = dx - dz;
yz_error = dy - dz;
dx *= 2;
dy *= 2;
dz *= 2;
} else {
n = 0;
}
}
inline bool end() const { return n <= 0; }
bool next(point3f &p, OcTreeNode &node, BlockHashKey &block_key, OcTreeHashKey &node_key) {
assert(!end());
bool valid = false;
unsigned short index = x + y * lim + z * lim * lim;
node_key = Block::index_map[index];
block_key = _block_key;
if (block != nullptr) {
valid = true;
node = (*block)[node_key];
current_p = block->get_point(x, y, z);
p = current_p;
} else {
p = current_p;
}
if (xy_error > 0 && xz_error > 0) {
x += x_inc;
current_p.x() += x_inc * resolution;
xy_error -= dy;
xz_error -= dz;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error < 0 && yz_error > 0) {
y += y_inc;
current_p.y() += y_inc * resolution;
xy_error += dx;
yz_error -= dz;
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
} else if (yz_error < 0 && xz_error < 0) {
z += z_inc;
current_p.z() += z_inc * resolution;
xz_error += dx;
yz_error += dy;
if (z >= lim || z < 0) {
block_lim.z() += z_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
z = z_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error == 0) {
x += x_inc;
y += y_inc;
n -= 2;
current_p.x() += x_inc * resolution;
current_p.y() += y_inc * resolution;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
}
n--;
return valid;
}
private:
const BGKOctoMap *map;
Block *block;
point3f block_lim;
float block_size, resolution;
int dx, dy, dz, error, n;
int x_inc, y_inc, z_inc, xy_error, xz_error, yz_error;
unsigned short index, x, y, z, lim;
BlockHashKey _block_key;
point3f current_p;
};
/// LeafIterator for iterating all leaf nodes in blocks
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator(const BGKOctoMap *map) {
assert(map != nullptr);
block_it = map->block_arr.cbegin();
end_block = map->block_arr.cend();
if (map->block_arr.size() > 0) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
} else {
leaf_it = OcTree::LeafIterator();
end_leaf = OcTree::LeafIterator();
}
}
// just for initializing end iterator
LeafIterator(std::unordered_map<BlockHashKey, Block *>::const_iterator block_it,
OcTree::LeafIterator leaf_it)
: block_it(block_it), leaf_it(leaf_it), end_block(block_it), end_leaf(leaf_it) { }
bool operator==(const LeafIterator &other) {
return (block_it == other.block_it) && (leaf_it == other.leaf_it);
}
bool operator!=(const LeafIterator &other) {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
++leaf_it;
if (leaf_it == end_leaf) {
++block_it;
if (block_it != end_block) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
}
}
return *this;
}
OcTreeNode &operator*() const {
return *leaf_it;
}
std::vector<point3f> get_pruned_locs() const {
std::vector<point3f> pruned_locs;
point3f center = get_loc();
float size = get_size();
float x0 = center.x() - size * 0.5 + Block::resolution * 0.5;
float y0 = center.y() - size * 0.5 + Block::resolution * 0.5;
float z0 = center.z() - size * 0.5 + Block::resolution * 0.5;
float x1 = center.x() + size * 0.5;
float y1 = center.y() + size * 0.5;
float z1 = center.z() + size * 0.5;
for (float x = x0; x < x1; x += Block::resolution) {
for (float y = y0; y < y1; y += Block::resolution) {
for (float z = z0; z < z1; z += Block::resolution) {
pruned_locs.emplace_back(x, y, z);
}
}
}
return pruned_locs;
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline point3f get_loc() const {
return block_it->second->get_loc(leaf_it);
}
inline float get_size() const {
return block_it->second->get_size(leaf_it);
}
private:
std::unordered_map<BlockHashKey, Block *>::const_iterator block_it;
std::unordered_map<BlockHashKey, Block *>::const_iterator end_block;
OcTree::LeafIterator leaf_it;
OcTree::LeafIterator end_leaf;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); }
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(block_arr.cend(), OcTree::LeafIterator()); }
OcTreeNode search(point3f p) const;
OcTreeNode search(float x, float y, float z) const;
Block *search(BlockHashKey key) const;
inline float get_block_size() const { return block_size; }
private:
/// @return true if point is inside a bounding box given min and max limits.
inline bool gp_point_in_bbox(const GPPointType &p, const point3f &lim_min, const point3f &lim_max) const {
return (p.first.x() > lim_min.x() && p.first.x() < lim_max.x() &&
p.first.y() > lim_min.y() && p.first.y() < lim_max.y() &&
p.first.z() > lim_min.z() && p.first.z() < lim_max.z());
}
/// Get the bounding box of a pointcloud.
void bbox(const GPPointCloud &cloud, point3f &lim_min, point3f &lim_max) const;
/// Get all block indices inside a bounding box.
void get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
std::vector<BlockHashKey> &blocks) const;
/// Get all points inside a bounding box assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
GPPointCloud &out);
        /// @return nonzero if any point exists inside a bounding box assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max);
/// Get all points inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const BlockHashKey &key, GPPointCloud &out);
        /// @return nonzero if any point exists inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const BlockHashKey &key);
/// Get all points inside an extended block assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const ExtendedBlock &block, GPPointCloud &out);
        /// @return nonzero if any point exists inside an extended block assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const ExtendedBlock &block);
/// RTree callback function
static bool count_callback(GPPointType *p, void *arg);
/// RTree callback function
static bool search_callback(GPPointType *p, void *arg);
/// Downsample PCLPointCloud using PCL VoxelGrid Filtering.
void downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const;
/// Sample free training points along sensor beams.
void beam_sample(const point3f &hits, const point3f &origin, PointCloud &frees,
float free_resolution) const;
/// Get training data from one sensor scan.
void get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPPointCloud &xy) const;
float resolution;
float block_size;
unsigned short block_depth;
std::unordered_map<BlockHashKey, Block *> block_arr;
MyRTree rtree;
};
}
#endif // LA3DM_BGKOCTOMAP_H
| 15,848 | 40.273438 | 125 | h |
la3dm | la3dm-master/include/bgkoctomap/bgkoctree.h | #ifndef LA3DM_BGK_OCTREE_H
#define LA3DM_BGK_OCTREE_H
#include <stack>
#include <vector>
#include "point3f.h"
#include "bgkoctree_node.h"
namespace la3dm {
/// Hash key to index OcTree nodes given depth and the index in that layer.
typedef int OcTreeHashKey;
/// Convert from node to hask key.
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned short index);
/// Convert from hash key to node.
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned short &index);
/*
* @brief A simple OcTree to organize occupancy data in one block.
*
* OcTree doesn't store positions of nodes in order to reduce memory usage.
* The nodes in OcTrees are indexed by OcTreeHashKey which can be used to
* retrieve positions later (See Block).
* For the purpose of mapping, this OcTree has fixed depth which should be
* set before using OcTrees.
*/
class OcTree {
friend class BGKOctoMap;
public:
OcTree();
~OcTree();
OcTree(const OcTree &other);
OcTree &operator=(const OcTree &other);
/*
         * @brief Recursively prune OcTreeNodes with the same state.
*
* Prune nodes by setting nodes to PRUNED.
* Delete the layer if all nodes are pruned.
*/
bool prune();
/// @return true if this node is a leaf node.
bool is_leaf(OcTreeHashKey key) const;
/// @return true if this node is a leaf node.
bool is_leaf(unsigned short depth, unsigned short index) const;
/// @return true if this node exists and is not pruned.
bool search(OcTreeHashKey key) const;
/// @return Occupancy of the node (without checking if it exists!)
OcTreeNode &operator[](OcTreeHashKey key) const;
/// Leaf iterator for OcTrees: iterate all leaf nodes not pruned.
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator() : tree(nullptr) { }
LeafIterator(const OcTree *tree)
: tree(tree != nullptr && tree->node_arr != nullptr ? tree : nullptr) {
if (tree != nullptr) {
stack.emplace(0, 0);
stack.emplace(0, 0);
++(*this);
}
}
LeafIterator(const LeafIterator &other) : tree(other.tree), stack(other.stack) { }
LeafIterator &operator=(const LeafIterator &other) {
tree = other.tree;
stack = other.stack;
return *this;
}
bool operator==(const LeafIterator &other) const {
return (tree == other.tree) &&
(stack.size() == other.stack.size()) &&
(stack.size() == 0 || (stack.size() > 0 &&
(stack.top().depth == other.stack.top().depth) &&
(stack.top().index == other.stack.top().index)));
}
bool operator!=(const LeafIterator &other) const {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
if (stack.empty()) {
tree = nullptr;
} else {
stack.pop();
while (!stack.empty() && !tree->is_leaf(stack.top().depth, stack.top().index))
single_inc();
if (stack.empty())
tree = nullptr;
}
return *this;
}
inline OcTreeNode &operator*() const {
return (*tree)[get_hash_key()];
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline OcTreeHashKey get_hash_key() const {
OcTreeHashKey key = node_to_hash_key(stack.top().depth, stack.top().index);
return key;
}
protected:
void single_inc() {
StackElement top(stack.top());
stack.pop();
for (int i = 0; i < 8; ++i) {
stack.emplace(top.depth + 1, top.index * 8 + i);
}
}
struct StackElement {
unsigned short depth;
unsigned short index;
StackElement(unsigned short depth, unsigned short index)
: depth(depth), index(index) { }
};
const OcTree *tree;
std::stack<StackElement, std::vector<StackElement> > stack;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); };
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(nullptr); };
private:
OcTreeNode **node_arr;
static unsigned short max_depth;
};
}
#endif // LA3DM_BGK_OCTREE_H
| 5,276 | 31.574074 | 98 | h |
la3dm | la3dm-master/include/bgkoctomap/bgkoctree_node.h | #ifndef LA3DM_BGK_OCCUPANCY_H
#define LA3DM_BGK_OCCUPANCY_H
#include <iostream>
#include <fstream>
namespace la3dm {
/// Occupancy state: before pruning: FREE, OCCUPIED, UNKNOWN; after pruning: PRUNED
enum class State : char {
FREE, OCCUPIED, UNKNOWN, PRUNED
};
/*
* @brief Inference ouputs and occupancy state.
*
* Occupancy has member variables: m_A and m_B (kernel densities of positive
* and negative class, respectively) and State.
* Before using this class, set the static member variables first.
*/
class Occupancy {
friend std::ostream &operator<<(std::ostream &os, const Occupancy &oc);
friend std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc);
friend std::ifstream &operator>>(std::ifstream &is, Occupancy &oc);
friend class BGKOctoMap;
public:
/*
* @brief Constructors and destructor.
*/
Occupancy() : m_A(Occupancy::prior_A), m_B(Occupancy::prior_B), state(State::UNKNOWN) { classified = false; }
Occupancy(float A, float B);
Occupancy(const Occupancy &other) : m_A(other.m_A), m_B(other.m_B), state(other.state) { }
Occupancy &operator=(const Occupancy &other) {
m_A = other.m_A;
m_B = other.m_B;
state = other.state;
return *this;
}
~Occupancy() { }
/*
* @brief Exact updates for nonparametric Bayesian kernel inference
* @param ybar kernel density estimate of positive class (occupied)
         * @param kbar total kernel density (the free-class share is kbar - ybar)
*/
void update(float ybar, float kbar);
/// Get probability of occupancy.
float get_prob() const;
/// Get variance of occupancy (uncertainty)
inline float get_var() const { return (m_A * m_B) / ( (m_A + m_B) * (m_A + m_B) * (m_A + m_B + 1.0f)); }
/*
* @brief Get occupancy state of the node.
* @return occupancy state (see State).
*/
inline State get_state() const { return state; }
/// Prune current node; set state to PRUNED.
inline void prune() { state = State::PRUNED; }
/// Only FREE and OCCUPIED nodes can be equal.
inline bool operator==(const Occupancy &rhs) const {
return this->state != State::UNKNOWN && this->state == rhs.state;
}
bool classified;
private:
float m_A;
float m_B;
State state;
static float sf2;
static float ell; // length-scale
static float prior_A; // prior on alpha
static float prior_B; // prior on beta
static float free_thresh; // FREE occupancy threshold
static float occupied_thresh; // OCCUPIED occupancy threshold
static float var_thresh;
};
typedef Occupancy OcTreeNode;
}
#endif // LA3DM_BGK_OCCUPANCY_H
| 2,946 | 29.071429 | 117 | h |
la3dm | la3dm-master/include/common/markerarray_pub.h | #include <pcl_ros/point_cloud.h>
#include <geometry_msgs/Point.h>
#include <visualization_msgs/MarkerArray.h>
#include <visualization_msgs/Marker.h>
#include <std_msgs/ColorRGBA.h>
#include <cmath>
#include <string>
namespace la3dm {
std_msgs::ColorRGBA heightMapColor(double h) {
std_msgs::ColorRGBA color;
color.a = 1.0;
// blend over HSV-values (more colors)
double s = 1.0;
double v = 1.0;
h -= floor(h);
h *= 6;
int i;
double m, n, f;
i = floor(h);
f = h - i;
if (!(i & 1))
f = 1 - f; // if i is even
m = v * (1 - s);
n = v * (1 - s * f);
switch (i) {
case 6:
case 0:
color.r = v;
color.g = n;
color.b = m;
break;
case 1:
color.r = n;
color.g = v;
color.b = m;
break;
case 2:
color.r = m;
color.g = v;
color.b = n;
break;
case 3:
color.r = m;
color.g = n;
color.b = v;
break;
case 4:
color.r = n;
color.g = m;
color.b = v;
break;
case 5:
color.r = v;
color.g = m;
color.b = n;
break;
default:
color.r = 1;
color.g = 0.5;
color.b = 0.5;
break;
}
return color;
}
class MarkerArrayPub {
typedef pcl::PointXYZ PointType;
typedef pcl::PointCloud<PointType> PointCloud;
public:
MarkerArrayPub(ros::NodeHandle nh, std::string topic, float resolution) : nh(nh),
msg(new visualization_msgs::MarkerArray),
topic(topic),
resolution(resolution),
markerarray_frame_id("map") {
pub = nh.advertise<visualization_msgs::MarkerArray>(topic, 1, true);
msg->markers.resize(10);
for (int i = 0; i < 10; ++i) {
msg->markers[i].header.frame_id = markerarray_frame_id;
msg->markers[i].ns = "map";
msg->markers[i].id = i;
msg->markers[i].type = visualization_msgs::Marker::CUBE_LIST;
msg->markers[i].scale.x = resolution * pow(2, i);
msg->markers[i].scale.y = resolution * pow(2, i);
msg->markers[i].scale.z = resolution * pow(2, i);
std_msgs::ColorRGBA color;
color.r = 0.0;
color.g = 0.0;
color.b = 1.0;
color.a = 1.0;
msg->markers[i].color = color;
}
}
void insert_point3d(float x, float y, float z, float min_z, float max_z, float size) {
geometry_msgs::Point center;
center.x = x;
center.y = y;
center.z = z;
int depth = 0;
if (size > 0)
                depth = (int) log2(size / resolution);
msg->markers[depth].points.push_back(center);
if (min_z < max_z) {
double h = (1.0 - std::min(std::max((z - min_z) / (max_z - min_z), 0.0f), 1.0f)) * 0.8;
msg->markers[depth].colors.push_back(heightMapColor(h));
}
}
void insert_point3d(float x, float y, float z, float min_z, float max_z, float size, float prob) {
geometry_msgs::Point center;
center.x = x;
center.y = y;
center.z = z;
int depth = 0;
if (size > 0)
depth = (int) log2(size / resolution);
msg->markers[depth].points.push_back(center);
std_msgs::ColorRGBA color;
color.a = 1.0;
if(prob < 0.5){
color.r = 0.8;
color.g = 0.8;
color.b = 0.8;
}
else{
color = heightMapColor(std::min(2.0-2.0*prob, 0.6));
}
msg->markers[depth].colors.push_back(color);
}
void insert_point3d(float x, float y, float z, float min_z, float max_z) {
insert_point3d(x, y, z, min_z, max_z, -1.0f);
}
void insert_point3d(float x, float y, float z) {
insert_point3d(x, y, z, 1.0f, 0.0f, -1.0f);
}
void insert_color_point3d(float x, float y, float z, double min_v, double max_v, double v) {
geometry_msgs::Point center;
center.x = x;
center.y = y;
center.z = z;
int depth = 0;
msg->markers[depth].points.push_back(center);
double h = (1.0 - std::min(std::max((v - min_v) / (max_v - min_v), 0.0), 1.0)) * 0.8;
msg->markers[depth].colors.push_back(heightMapColor(h));
}
void clear() {
for (int i = 0; i < 10; ++i) {
msg->markers[i].points.clear();
msg->markers[i].colors.clear();
}
}
void publish() const {
msg->markers[0].header.stamp = ros::Time::now();
pub.publish(*msg);
}
private:
ros::NodeHandle nh;
ros::Publisher pub;
visualization_msgs::MarkerArray::Ptr msg;
std::string markerarray_frame_id;
std::string topic;
float resolution;
};
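    // Usage sketch (hedged; the topic name and scales are illustrative):
    //
    //   la3dm::MarkerArrayPub m_pub(nh, "/occupied_cells_vis_array", 0.1f);
    //   m_pub.insert_point3d(x, y, z, min_z, max_z, cell_size); // colored by height
    //   m_pub.publish();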
}
| 5,869 | 29.572917 | 123 | h |
la3dm | la3dm-master/include/common/point3f.h | #ifndef LA3DM_VECTOR3_H
#define LA3DM_VECTOR3_H
#include <iostream>
#include <math.h>
namespace la3dm {
/*!
* \brief This class represents a three-dimensional vector
*
* The three-dimensional vector can be used to represent a
* translation in three-dimensional space or to represent the
* attitude of an object using Euler angle.
*/
class Vector3 {
public:
/*!
* \brief Default constructor
*/
Vector3() { data[0] = data[1] = data[2] = 0.0; }
/*!
* \brief Copy constructor
*
* @param other a vector of dimension 3
*/
Vector3(const Vector3 &other) {
data[0] = other(0);
data[1] = other(1);
data[2] = other(2);
}
/*!
* \brief Constructor
*
* Constructs a three-dimensional vector from
* three single values x, y, z or roll, pitch, yaw
*/
Vector3(float x, float y, float z) {
data[0] = x;
data[1] = y;
data[2] = z;
}
/* inline Eigen3::Vector3f getVector3f() const { return Eigen3::Vector3f(data[0], data[1], data[2]) ; } */
/* inline Eigen3::Vector4f& getVector4f() { return data; } */
/* inline Eigen3::Vector4f getVector4f() const { return data; } */
/*!
* \brief Assignment operator
*
* @param other a vector of dimension 3
*/
inline Vector3 &operator=(const Vector3 &other) {
data[0] = other(0);
data[1] = other(1);
data[2] = other(2);
return *this;
}
/*!
* \brief Three-dimensional vector (cross) product
*
         * Calculates the three-dimensional cross product, which
* represents the vector orthogonal to the plane defined
* by this and other.
* @return this x other
*/
inline Vector3 cross(const Vector3 &other) const {
//return (data.start<3> ().cross (other.data.start<3> ()));
// \note should this be renamed?
return Vector3(y() * other.z() - z() * other.y(),
z() * other.x() - x() * other.z(),
x() * other.y() - y() * other.x());
}
/// dot product
inline double dot(const Vector3 &other) const {
return x() * other.x() + y() * other.y() + z() * other.z();
}
inline const float &operator()(unsigned int i) const {
return data[i];
}
inline float &operator()(unsigned int i) {
return data[i];
}
inline float &x() {
return operator()(0);
}
inline float &y() {
return operator()(1);
}
inline float &z() {
return operator()(2);
}
inline const float &x() const {
return operator()(0);
}
inline const float &y() const {
return operator()(1);
}
inline const float &z() const {
return operator()(2);
}
inline float &roll() {
return operator()(0);
}
inline float &pitch() {
return operator()(1);
}
inline float &yaw() {
return operator()(2);
}
inline const float &roll() const {
return operator()(0);
}
inline const float &pitch() const {
return operator()(1);
}
inline const float &yaw() const {
return operator()(2);
}
inline Vector3 operator-() const {
Vector3 result;
result(0) = -data[0];
result(1) = -data[1];
result(2) = -data[2];
return result;
}
inline Vector3 operator+(const Vector3 &other) const {
Vector3 result(*this);
result(0) += other(0);
result(1) += other(1);
result(2) += other(2);
return result;
}
inline Vector3 operator*(float x) const {
Vector3 result(*this);
result(0) *= x;
result(1) *= x;
result(2) *= x;
return result;
}
inline Vector3 operator-(const Vector3 &other) const {
Vector3 result(*this);
result(0) -= other(0);
result(1) -= other(1);
result(2) -= other(2);
return result;
}
inline void operator+=(const Vector3 &other) {
data[0] += other(0);
data[1] += other(1);
data[2] += other(2);
}
inline void operator-=(const Vector3 &other) {
data[0] -= other(0);
data[1] -= other(1);
data[2] -= other(2);
}
inline void operator/=(float x) {
data[0] /= x;
data[1] /= x;
data[2] /= x;
}
inline void operator*=(float x) {
data[0] *= x;
data[1] *= x;
data[2] *= x;
}
inline bool operator==(const Vector3 &other) const {
for (unsigned int i = 0; i < 3; i++) {
if (operator()(i) != other(i))
return false;
}
return true;
}
/// @return length of the vector ("L2 norm")
inline double norm() const {
return sqrt(norm_sq());
}
/// @return squared length ("L2 norm") of the vector
inline double norm_sq() const {
return (x() * x() + y() * y() + z() * z());
}
/// normalizes this vector, so that it has norm=1.0
inline Vector3 &normalize() {
double len = norm();
if (len > 0)
*this /= (float) len;
return *this;
}
/// @return normalized vector, this one remains unchanged
inline Vector3 normalized() const {
Vector3 result(*this);
result.normalize();
return result;
}
inline double angleTo(const Vector3 &other) const {
double dot_prod = this->dot(other);
double len1 = this->norm();
double len2 = other.norm();
return acos(dot_prod / (len1 * len2));
}
inline double distance(const Vector3 &other) const {
double dist_x = x() - other.x();
double dist_y = y() - other.y();
double dist_z = z() - other.z();
return sqrt(dist_x * dist_x + dist_y * dist_y + dist_z * dist_z);
}
inline double distanceXY(const Vector3 &other) const {
double dist_x = x() - other.x();
double dist_y = y() - other.y();
return sqrt(dist_x * dist_x + dist_y * dist_y);
}
Vector3 &rotate_IP(double roll, double pitch, double yaw);
// void read (unsigned char * src, unsigned int size);
std::istream &read(std::istream &s);
std::ostream &write(std::ostream &s) const;
std::istream &readBinary(std::istream &s);
std::ostream &writeBinary(std::ostream &s) const;
protected:
float data[3];
};
//! user friendly output in format (x y z)
std::ostream &operator<<(std::ostream &out, la3dm::Vector3 const &v);
typedef Vector3 point3f;
}
#endif // LA3DM_VECTOR3_H
| 7,471 | 25.974729 | 114 | h |
la3dm | la3dm-master/include/common/point6f.h | #ifndef LA3DM_VECTOR6_H
#define LA3DM_VECTOR6_H
#include <iostream>
#include <math.h>
#include "point3f.h"
namespace la3dm {
/*!
* \brief This class represents a six-dimensional vector
*
* We use the six-dimensional vector to represent the start
* and end points of a line segment
*/
class Vector6 {
public:
/*!
* \brief Default constructor
*/
Vector6() { data[0] = data[1] = data[2] = data[3] = data[4] = data[5] = 0.0; }
/*!
* \brief Copy constructor
*
* @param other a vector of dimension 6
*/
Vector6(const Vector6 &other) {
data[0] = other(0);
data[1] = other(1);
data[2] = other(2);
data[3] = other(3);
data[4] = other(4);
data[5] = other(5);
}
/*!
* \brief 6-D vector as 2 copies of a 3-D vector
*
* @param point a vector of dimension 3
*/
Vector6(const Vector3 &point) {
data[0] = data[3] = point(0);
data[1] = data[4] = point(1);
data[2] = data[5] = point(2);
}
/*!
* \brief 6-D vector from start and endpoint of a line
*
* @param start a vector of dimension 3
* @param end a vector of dimension 3
*/
Vector6(const Vector3 &start, const Vector3 &end) {
data[0] = start(0);
data[1] = start(1);
data[2] = start(2);
data[3] = end(0);
data[4] = end(1);
data[5] = end(2);
}
/*!
* \brief Constructor
*
* Constructs a six-dimensional vector from
* three single values x, y, z by duplication
*/
Vector6(float x0, float y0, float z0) {
data[0] = x0;
data[1] = y0;
data[2] = z0;
data[3] = x0;
data[4] = y0;
data[5] = z0;
}
/*!
* \brief Constructor
*
* Constructs a six-dimensional vector from
* six single values
*/
Vector6(float x0, float y0, float z0, float x1, float y1, float z1) {
data[0] = x0;
data[1] = y0;
data[2] = z0;
data[3] = x1;
data[4] = y1;
data[5] = z1;
}
/* inline Eigen3::Vector6f getVector6f() const { return Eigen3::Vector6f(data[0], data[1], data[2]) ; } */
/* inline Eigen3::Vector4f& getVector4f() { return data; } */
/* inline Eigen3::Vector4f getVector4f() const { return data; } */
/*!
* \brief Assignment operator
*
* @param other a vector of dimension 6
*/
inline Vector6 &operator=(const Vector6 &other) {
data[0] = other(0);
data[1] = other(1);
data[2] = other(2);
data[3] = other(3);
data[4] = other(4);
data[5] = other(5);
return *this;
}
/// dot product
inline double dot(const Vector6 &other) const {
return x0() * other.x0() + y0() * other.y0() + z0() * other.z0() + x1() * other.x1() + y1() * other.y1() + z1() * other.z1();
}
inline const float &operator()(unsigned int i) const {
return data[i];
}
inline float &operator()(unsigned int i) {
return data[i];
}
inline point3f start() {
return point3f(x0(), y0(), z0());
}
inline point3f end() {
return point3f(x1(), y1(), z1());
}
inline float &x0() {
return operator()(0);
}
inline float &y0() {
return operator()(1);
}
inline float &z0() {
return operator()(2);
}
inline float &x1() {
return operator()(3);
}
inline float &y1() {
return operator()(4);
}
inline float &z1() {
return operator()(5);
}
inline const point3f start() const {
return point3f(x0(), y0(), z0());
}
inline const point3f end() const {
return point3f(x1(), y1(), z1());
}
inline const float &x0() const {
return operator()(0);
}
inline const float &y0() const {
return operator()(1);
}
inline const float &z0() const {
return operator()(2);
}
inline const float &x1() const {
return operator()(3);
}
inline const float &y1() const {
return operator()(4);
}
inline const float &z1() const {
return operator()(5);
}
inline Vector6 operator-() const {
Vector6 result;
result(0) = -data[0];
result(1) = -data[1];
result(2) = -data[2];
result(3) = -data[3];
result(4) = -data[4];
result(5) = -data[5];
return result;
}
inline Vector6 operator+(const Vector6 &other) const {
Vector6 result(*this);
result(0) += other(0);
result(1) += other(1);
result(2) += other(2);
result(3) += other(3);
result(4) += other(4);
result(5) += other(5);
return result;
}
inline Vector6 operator*(float x) const {
Vector6 result(*this);
result(0) *= x;
result(1) *= x;
result(2) *= x;
result(3) *= x;
result(4) *= x;
result(5) *= x;
return result;
}
inline Vector6 operator-(const Vector6 &other) const {
Vector6 result(*this);
result(0) -= other(0);
result(1) -= other(1);
result(2) -= other(2);
result(3) -= other(3);
result(4) -= other(4);
result(5) -= other(5);
return result;
}
inline void operator+=(const Vector6 &other) {
data[0] += other(0);
data[1] += other(1);
data[2] += other(2);
data[3] += other(3);
data[4] += other(4);
data[5] += other(5);
}
inline void operator-=(const Vector6 &other) {
data[0] -= other(0);
data[1] -= other(1);
data[2] -= other(2);
data[3] -= other(3);
data[4] -= other(4);
data[5] -= other(5);
}
inline void operator/=(float x) {
data[0] /= x;
data[1] /= x;
data[2] /= x;
data[3] /= x;
data[4] /= x;
data[5] /= x;
}
inline void operator*=(float x) {
data[0] *= x;
data[1] *= x;
data[2] *= x;
data[3] *= x;
data[4] *= x;
data[5] *= x;
}
inline bool operator==(const Vector6 &other) const {
for (unsigned int i = 0; i < 6; i++) {
if (operator()(i) != other(i))
return false;
}
return true;
}
/// @return length of the line segment start -> end
inline double line_length() const {
return sqrt((start() - end()).norm_sq());
}
/// @return length of the vector ("L2 norm")
inline double norm() const {
return sqrt(norm_sq());
}
/// @return squared length ("L2 norm") of the vector
inline double norm_sq() const {
return (x0() * x0() + y0() * y0() + z0() * z0() + x1() * x1() + y1() * y1() + z1() * z1());
}
/// normalizes this vector, so that it has norm=1.0
inline Vector6 &normalize() {
double len = norm();
if (len > 0)
*this /= (float) len;
return *this;
}
/// @return normalized vector, this one remains unchanged
inline Vector6 normalized() const {
Vector6 result(*this);
result.normalize();
return result;
}
// inline double angleTo(const Vector6 &other) const {
// double dot_prod = this->dot(other);
// double len1 = this->norm();
// double len2 = other.norm();
// return acos(dot_prod / (len1 * len2));
// }
// inline double distance(const Vector6 &other) const {
// double dist_x = x() - other.x();
// double dist_y = y() - other.y();
// double dist_z = z() - other.z();
// return sqrt(dist_x * dist_x + dist_y * dist_y + dist_z * dist_z);
// }
// inline double distanceXY(const Vector6 &other) const {
// double dist_x = x() - other.x();
// double dist_y = y() - other.y();
// return sqrt(dist_x * dist_x + dist_y * dist_y);
// }
// Vector6 &rotate_IP(double roll, double pitch, double yaw);
// void read (unsigned char * src, unsigned int size);
std::istream &read(std::istream &s);
std::ostream &write(std::ostream &s) const;
std::istream &readBinary(std::istream &s);
std::ostream &writeBinary(std::ostream &s) const;
protected:
float data[6];
};
//! user friendly output in format (x0 y0 z0 x1 y1 z1)
std::ostream &operator<<(std::ostream &out, la3dm::Vector6 const &v);
typedef Vector6 point6f;
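    // Usage sketch (hedged): a point6f models one sensor beam as a segment from
    // the origin to the hit point.
    //
    //   point6f beam(point3f(0.0f, 0.0f, 0.0f), point3f(1.0f, 2.0f, 0.5f));
    //   double len = beam.line_length(); // Euclidean length of the segment
    //   point3f hit = beam.end();        // endpoint as a point3f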
}
#endif // LA3DM_VECTOR6_H
| 9,678 | 26.188202 | 137 | h |
la3dm | la3dm-master/include/common/rtree.h | #ifndef RTREE_H
#define RTREE_H
// NOTE This file compiles under MSVC 6 SP5 and MSVC .Net 2003 it may not work on other compilers without modification.
// NOTE These next few lines may be win32 specific, you may need to modify them to compile on other platform
#include <stdio.h>
#include <math.h>
#include <assert.h>
#include <stdlib.h>
#include <algorithm>
#define ASSERT assert // RTree uses ASSERT( condition )
//#ifndef Min
// #define Min std::min
//#endif //Min
//#ifndef Max
// #define Max std::max
//#endif //Max
//
// RTree.h
//
#define RTREE_TEMPLATE template<class DATATYPE, class ELEMTYPE, int NUMDIMS, class ELEMTYPEREAL, int TMAXNODES, int TMINNODES>
#define RTREE_QUAL RTree<DATATYPE, ELEMTYPE, NUMDIMS, ELEMTYPEREAL, TMAXNODES, TMINNODES>
#define RTREE_DONT_USE_MEMPOOLS // This version does not contain a fixed memory allocator, fill in lines with EXAMPLE to implement one.
#define RTREE_USE_SPHERICAL_VOLUME // Better split classification, may be slower on some systems
// Fwd decl
class RTFileStream; // File I/O helper class, look below for implementation and notes.
/// \class RTree
/// Implementation of RTree, a multidimensional bounding rectangle tree.
/// Example usage: For a 3-dimensional tree use RTree<Object*, float, 3> myTree;
///
/// This modified, templated C++ version by Greg Douglas at Auran (http://www.auran.com)
///
/// DATATYPE Referenced data, should be int, void*, obj* etc., no larger than sizeof(void*) and a simple type
/// ELEMTYPE Type of element such as int or float
/// NUMDIMS Number of dimensions such as 2 or 3
/// ELEMTYPEREAL Type of element that allows fractional and large values such as float or double, for use in volume calcs
///
/// NOTES: Inserting and removing data requires the knowledge of its constant Minimal Bounding Rectangle.
/// This version uses new/delete for nodes; I recommend using a fixed size allocator for efficiency.
/// Instead of using a callback function for returned results, I recommend an efficient pre-sized, grow-only memory
/// array similar to MFC CArray or STL Vector for returning search query results.
///
template<class DATATYPE, class ELEMTYPE, int NUMDIMS,
class ELEMTYPEREAL = ELEMTYPE, int TMAXNODES = 8, int TMINNODES = TMAXNODES / 2>
class RTree
{
protected:
struct Node; // Fwd decl. Used by other internal structs and iterator
public:
// These constant must be declared after Branch and before Node struct
// Stuck up here for MSVC 6 compiler. NSVC .NET 2003 is much happier.
enum
{
MAXNODES = TMAXNODES, ///< Max elements in node
MINNODES = TMINNODES, ///< Min elements in node
};
typedef bool (*t_resultCallback)(DATATYPE, void*);
public:
RTree();
virtual ~RTree();
/// Insert entry
/// \param a_min Min of bounding rect
/// \param a_max Max of bounding rect
  /// \param a_dataId Positive Id of data. May be zero, but negative numbers are not allowed.
void Insert(const ELEMTYPE a_min[NUMDIMS], const ELEMTYPE a_max[NUMDIMS], const DATATYPE& a_dataId);
/// Remove entry
/// \param a_min Min of bounding rect
/// \param a_max Max of bounding rect
  /// \param a_dataId Positive Id of data. May be zero, but negative numbers are not allowed.
void Remove(const ELEMTYPE a_min[NUMDIMS], const ELEMTYPE a_max[NUMDIMS], const DATATYPE& a_dataId);
/// Find all within search rectangle
/// \param a_min Min of search bounding rect
/// \param a_max Max of search bounding rect
/// \param a_resultCallback Callback function to return result. Callback should return 'true' to continue searching
/// \param a_context User context to pass as parameter to a_resultCallback
/// \return Returns the number of entries found
int Search(const ELEMTYPE a_min[NUMDIMS], const ELEMTYPE a_max[NUMDIMS], t_resultCallback a_resultCallback, void* a_context);
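  // Usage sketch (hedged; DATATYPE = int for illustration). The callback must
  // return true to continue the search; a capture-less lambda converts to the
  // plain function-pointer callback type.
  //
  //   RTree<int, float, 3> tree;
  //   float mn[3] = {0.0f, 0.0f, 0.0f}, mx[3] = {1.0f, 1.0f, 1.0f};
  //   tree.Insert(mn, mx, 42);
  //   int hits = tree.Search(mn, mx,
  //       [](int id, void*) { return true; },
  //       nullptr);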
/// Remove all entries from tree
void RemoveAll();
/// Count the data elements in this container. This is slow as no internal counter is maintained.
int Count();
/// Load tree contents from file
bool Load(const char* a_fileName);
/// Load tree contents from stream
bool Load(RTFileStream& a_stream);
/// Save tree contents to file
bool Save(const char* a_fileName);
/// Save tree contents to stream
bool Save(RTFileStream& a_stream);
/// Iterator is not remove safe.
class Iterator
{
private:
enum { MAX_STACK = 32 }; // Max stack size. Allows almost n^32 where n is number of branches in node
struct StackElement
{
Node* m_node;
int m_branchIndex;
};
public:
Iterator() { Init(); }
~Iterator() { }
/// Is iterator invalid
bool IsNull() { return (m_tos <= 0); }
/// Is iterator pointing to valid data
bool IsNotNull() { return (m_tos > 0); }
/// Access the current data element. Caller must be sure iterator is not NULL first.
DATATYPE& operator*()
{
ASSERT(IsNotNull());
StackElement& curTos = m_stack[m_tos - 1];
return curTos.m_node->m_branch[curTos.m_branchIndex].m_data;
}
/// Access the current data element. Caller must be sure iterator is not NULL first.
const DATATYPE& operator*() const
{
ASSERT(IsNotNull());
StackElement& curTos = m_stack[m_tos - 1];
return curTos.m_node->m_branch[curTos.m_branchIndex].m_data;
}
/// Find the next data element
bool operator++() { return FindNextData(); }
/// Get the bounds for this node
void GetBounds(ELEMTYPE a_min[NUMDIMS], ELEMTYPE a_max[NUMDIMS])
{
ASSERT(IsNotNull());
StackElement& curTos = m_stack[m_tos - 1];
Branch& curBranch = curTos.m_node->m_branch[curTos.m_branchIndex];
for(int index = 0; index < NUMDIMS; ++index)
{
a_min[index] = curBranch.m_rect.m_min[index];
a_max[index] = curBranch.m_rect.m_max[index];
}
}
private:
/// Reset iterator
void Init() { m_tos = 0; }
/// Find the next data element in the tree (For internal use only)
bool FindNextData()
{
for(;;)
{
if(m_tos <= 0)
{
return false;
}
StackElement curTos = Pop(); // Copy stack top because it may change as we use it
if(curTos.m_node->IsLeaf())
{
// Keep walking through data while we can
if(curTos.m_branchIndex+1 < curTos.m_node->m_count)
{
// There is more data, just point to the next one
Push(curTos.m_node, curTos.m_branchIndex + 1);
return true;
}
// No more data, so it will fall back to previous level
}
else
{
if(curTos.m_branchIndex+1 < curTos.m_node->m_count)
{
// Push sibling on for future tree walk
// This is the 'fall back' node when we finish with the current level
Push(curTos.m_node, curTos.m_branchIndex + 1);
}
// Since cur node is not a leaf, push first of next level to get deeper into the tree
Node* nextLevelnode = curTos.m_node->m_branch[curTos.m_branchIndex].m_child;
Push(nextLevelnode, 0);
// If we pushed on a new leaf, exit as the data is ready at TOS
if(nextLevelnode->IsLeaf())
{
return true;
}
}
}
}
/// Push node and branch onto iteration stack (For internal use only)
void Push(Node* a_node, int a_branchIndex)
{
m_stack[m_tos].m_node = a_node;
m_stack[m_tos].m_branchIndex = a_branchIndex;
++m_tos;
ASSERT(m_tos <= MAX_STACK);
}
/// Pop element off iteration stack (For internal use only)
StackElement& Pop()
{
ASSERT(m_tos > 0);
--m_tos;
return m_stack[m_tos];
}
StackElement m_stack[MAX_STACK]; ///< Stack as we are doing iteration instead of recursion
int m_tos; ///< Top Of Stack index
friend class RTree; // Allow hiding of non-public functions while allowing manipulation by logical owner
};
/// Get 'first' for iteration
void GetFirst(Iterator& a_it)
{
a_it.Init();
Node* first = m_root;
while(first)
{
if(first->IsInternalNode() && first->m_count > 1)
{
a_it.Push(first, 1); // Descend sibling branch later
}
else if(first->IsLeaf())
{
if(first->m_count)
{
a_it.Push(first, 0);
}
break;
}
first = first->m_branch[0].m_child;
}
}
/// Get Next for iteration
void GetNext(Iterator& a_it) { ++a_it; }
/// Is iterator NULL, or at end?
bool IsNull(Iterator& a_it) { return a_it.IsNull(); }
/// Get object at iterator position
DATATYPE& GetAt(Iterator& a_it) { return *a_it; }
protected:
/// Minimal bounding rectangle (n-dimensional)
struct Rect
{
ELEMTYPE m_min[NUMDIMS]; ///< Min dimensions of bounding box
ELEMTYPE m_max[NUMDIMS]; ///< Max dimensions of bounding box
};
/// May be data or may be another subtree
/// The parent's level determines this.
/// If the parent's level is 0, then this is data
struct Branch
{
Rect m_rect; ///< Bounds
Node* m_child; ///< Child node
DATATYPE m_data; ///< Data Id
};
/// Node for each branch level
struct Node
{
bool IsInternalNode() { return (m_level > 0); } // Not a leaf, but an internal node
bool IsLeaf() { return (m_level == 0); } // A leaf, contains data
int m_count; ///< Count
int m_level; ///< Leaf is zero, others positive
Branch m_branch[MAXNODES]; ///< Branch
};
/// A link list of nodes for reinsertion after a delete operation
struct ListNode
{
ListNode* m_next; ///< Next in list
Node* m_node; ///< Node
};
/// Variables for finding a split partition
struct PartitionVars
{
enum { NOT_TAKEN = -1 }; // indicates that a position has not yet been assigned to a group
int m_partition[MAXNODES+1];
int m_total;
int m_minFill;
int m_count[2];
Rect m_cover[2];
ELEMTYPEREAL m_area[2];
Branch m_branchBuf[MAXNODES+1];
int m_branchCount;
Rect m_coverSplit;
ELEMTYPEREAL m_coverSplitArea;
};
Node* AllocNode();
void FreeNode(Node* a_node);
void InitNode(Node* a_node);
void InitRect(Rect* a_rect);
bool InsertRectRec(const Branch& a_branch, Node* a_node, Node** a_newNode, int a_level);
bool InsertRect(const Branch& a_branch, Node** a_root, int a_level);
Rect NodeCover(Node* a_node);
bool AddBranch(const Branch* a_branch, Node* a_node, Node** a_newNode);
void DisconnectBranch(Node* a_node, int a_index);
int PickBranch(const Rect* a_rect, Node* a_node);
Rect CombineRect(const Rect* a_rectA, const Rect* a_rectB);
void SplitNode(Node* a_node, const Branch* a_branch, Node** a_newNode);
ELEMTYPEREAL RectSphericalVolume(Rect* a_rect);
ELEMTYPEREAL RectVolume(Rect* a_rect);
ELEMTYPEREAL CalcRectVolume(Rect* a_rect);
void GetBranches(Node* a_node, const Branch* a_branch, PartitionVars* a_parVars);
void ChoosePartition(PartitionVars* a_parVars, int a_minFill);
void LoadNodes(Node* a_nodeA, Node* a_nodeB, PartitionVars* a_parVars);
void InitParVars(PartitionVars* a_parVars, int a_maxRects, int a_minFill);
void PickSeeds(PartitionVars* a_parVars);
void Classify(int a_index, int a_group, PartitionVars* a_parVars);
bool RemoveRect(Rect* a_rect, const DATATYPE& a_id, Node** a_root);
bool RemoveRectRec(Rect* a_rect, const DATATYPE& a_id, Node* a_node, ListNode** a_listNode);
ListNode* AllocListNode();
void FreeListNode(ListNode* a_listNode);
bool Overlap(Rect* a_rectA, Rect* a_rectB);
void ReInsert(Node* a_node, ListNode** a_listNode);
bool Search(Node* a_node, Rect* a_rect, int& a_foundCount, t_resultCallback a_resultCallback, void* a_context);
void RemoveAllRec(Node* a_node);
void Reset();
void CountRec(Node* a_node, int& a_count);
bool SaveRec(Node* a_node, RTFileStream& a_stream);
bool LoadRec(Node* a_node, RTFileStream& a_stream);
Node* m_root; ///< Root of tree
ELEMTYPEREAL m_unitSphereVolume; ///< Unit sphere constant for required number of dimensions
};
// Because there is no stream support, this is a quick and dirty file I/O helper.
// Users will likely replace its usage with a Stream implementation from their favorite API.
class RTFileStream
{
FILE* m_file;
public:
RTFileStream()
{
m_file = NULL;
}
~RTFileStream()
{
Close();
}
bool OpenRead(const char* a_fileName)
{
m_file = fopen(a_fileName, "rb");
if(!m_file)
{
return false;
}
return true;
}
bool OpenWrite(const char* a_fileName)
{
m_file = fopen(a_fileName, "wb");
if(!m_file)
{
return false;
}
return true;
}
void Close()
{
if(m_file)
{
fclose(m_file);
m_file = NULL;
}
}
template< typename TYPE >
size_t Write(const TYPE& a_value)
{
ASSERT(m_file);
return fwrite((void*)&a_value, sizeof(a_value), 1, m_file);
}
template< typename TYPE >
size_t WriteArray(const TYPE* a_array, int a_count)
{
ASSERT(m_file);
return fwrite((void*)a_array, sizeof(TYPE) * a_count, 1, m_file);
}
template< typename TYPE >
size_t Read(TYPE& a_value)
{
ASSERT(m_file);
return fread((void*)&a_value, sizeof(a_value), 1, m_file);
}
template< typename TYPE >
size_t ReadArray(TYPE* a_array, int a_count)
{
ASSERT(m_file);
return fread((void*)a_array, sizeof(TYPE) * a_count, 1, m_file);
}
};
RTREE_TEMPLATE
RTREE_QUAL::RTree()
{
ASSERT(MAXNODES > MINNODES);
ASSERT(MINNODES > 0);
// Precomputed volumes of the unit spheres for the first few dimensions
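  // (entries follow V_n = pi^(n/2) / Gamma(n/2 + 1), the volume of the
  // n-dimensional unit ball; dimension 0 is unused and left as zero)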
const float UNIT_SPHERE_VOLUMES[] = {
0.000000f, 2.000000f, 3.141593f, // Dimension 0,1,2
4.188790f, 4.934802f, 5.263789f, // Dimension 3,4,5
5.167713f, 4.724766f, 4.058712f, // Dimension 6,7,8
3.298509f, 2.550164f, 1.884104f, // Dimension 9,10,11
1.335263f, 0.910629f, 0.599265f, // Dimension 12,13,14
0.381443f, 0.235331f, 0.140981f, // Dimension 15,16,17
0.082146f, 0.046622f, 0.025807f, // Dimension 18,19,20
};
m_root = AllocNode();
m_root->m_level = 0;
m_unitSphereVolume = (ELEMTYPEREAL)UNIT_SPHERE_VOLUMES[NUMDIMS];
}
RTREE_TEMPLATE
RTREE_QUAL::~RTree()
{
Reset(); // Free, or reset node memory
}
RTREE_TEMPLATE
void RTREE_QUAL::Insert(const ELEMTYPE a_min[NUMDIMS], const ELEMTYPE a_max[NUMDIMS], const DATATYPE& a_dataId)
{
#ifdef _DEBUG
for(int index=0; index<NUMDIMS; ++index)
{
ASSERT(a_min[index] <= a_max[index]);
}
#endif //_DEBUG
Branch branch;
branch.m_data = a_dataId;
branch.m_child = NULL;
for(int axis=0; axis<NUMDIMS; ++axis)
{
branch.m_rect.m_min[axis] = a_min[axis];
branch.m_rect.m_max[axis] = a_max[axis];
}
InsertRect(branch, &m_root, 0);
}
RTREE_TEMPLATE
void RTREE_QUAL::Remove(const ELEMTYPE a_min[NUMDIMS], const ELEMTYPE a_max[NUMDIMS], const DATATYPE& a_dataId)
{
#ifdef _DEBUG
for(int index=0; index<NUMDIMS; ++index)
{
ASSERT(a_min[index] <= a_max[index]);
}
#endif //_DEBUG
Rect rect;
for(int axis=0; axis<NUMDIMS; ++axis)
{
rect.m_min[axis] = a_min[axis];
rect.m_max[axis] = a_max[axis];
}
RemoveRect(&rect, a_dataId, &m_root);
}
RTREE_TEMPLATE
int RTREE_QUAL::Search(const ELEMTYPE a_min[NUMDIMS], const ELEMTYPE a_max[NUMDIMS], t_resultCallback a_resultCallback, void* a_context)
{
#ifdef _DEBUG
for(int index=0; index<NUMDIMS; ++index)
{
ASSERT(a_min[index] <= a_max[index]);
}
#endif //_DEBUG
Rect rect;
for(int axis=0; axis<NUMDIMS; ++axis)
{
rect.m_min[axis] = a_min[axis];
rect.m_max[axis] = a_max[axis];
}
// NOTE: May want to return search result another way, perhaps returning the number of found elements here.
int foundCount = 0;
Search(m_root, &rect, foundCount, a_resultCallback, a_context);
return foundCount;
}
RTREE_TEMPLATE
int RTREE_QUAL::Count()
{
int count = 0;
CountRec(m_root, count);
return count;
}
RTREE_TEMPLATE
void RTREE_QUAL::CountRec(Node* a_node, int& a_count)
{
if(a_node->IsInternalNode()) // not a leaf node
{
for(int index = 0; index < a_node->m_count; ++index)
{
CountRec(a_node->m_branch[index].m_child, a_count);
}
}
else // A leaf node
{
a_count += a_node->m_count;
}
}
RTREE_TEMPLATE
bool RTREE_QUAL::Load(const char* a_fileName)
{
RemoveAll(); // Clear existing tree
RTFileStream stream;
if(!stream.OpenRead(a_fileName))
{
return false;
}
bool result = Load(stream);
stream.Close();
return result;
}
RTREE_TEMPLATE
bool RTREE_QUAL::Load(RTFileStream& a_stream)
{
// Read the header and verify that it matches what Save wrote
int _dataFileId = ('R'<<0)|('T'<<8)|('R'<<16)|('E'<<24);
int _dataSize = sizeof(DATATYPE);
int _dataNumDims = NUMDIMS;
int _dataElemSize = sizeof(ELEMTYPE);
int _dataElemRealSize = sizeof(ELEMTYPEREAL);
int _dataMaxNodes = TMAXNODES;
int _dataMinNodes = TMINNODES;
int dataFileId = 0;
int dataSize = 0;
int dataNumDims = 0;
int dataElemSize = 0;
int dataElemRealSize = 0;
int dataMaxNodes = 0;
int dataMinNodes = 0;
a_stream.Read(dataFileId);
a_stream.Read(dataSize);
a_stream.Read(dataNumDims);
a_stream.Read(dataElemSize);
a_stream.Read(dataElemRealSize);
a_stream.Read(dataMaxNodes);
a_stream.Read(dataMinNodes);
bool result = false;
// Test if header was valid and compatible
if( (dataFileId == _dataFileId)
&& (dataSize == _dataSize)
&& (dataNumDims == _dataNumDims)
&& (dataElemSize == _dataElemSize)
&& (dataElemRealSize == _dataElemRealSize)
&& (dataMaxNodes == _dataMaxNodes)
&& (dataMinNodes == _dataMinNodes)
)
{
// Recursively load tree
result = LoadRec(m_root, a_stream);
}
return result;
}
RTREE_TEMPLATE
bool RTREE_QUAL::LoadRec(Node* a_node, RTFileStream& a_stream)
{
a_stream.Read(a_node->m_level);
a_stream.Read(a_node->m_count);
if(a_node->IsInternalNode()) // not a leaf node
{
for(int index = 0; index < a_node->m_count; ++index)
{
Branch* curBranch = &a_node->m_branch[index];
a_stream.ReadArray(curBranch->m_rect.m_min, NUMDIMS);
a_stream.ReadArray(curBranch->m_rect.m_max, NUMDIMS);
curBranch->m_child = AllocNode();
LoadRec(curBranch->m_child, a_stream);
}
}
else // A leaf node
{
for(int index = 0; index < a_node->m_count; ++index)
{
Branch* curBranch = &a_node->m_branch[index];
a_stream.ReadArray(curBranch->m_rect.m_min, NUMDIMS);
a_stream.ReadArray(curBranch->m_rect.m_max, NUMDIMS);
a_stream.Read(curBranch->m_data);
}
}
return true; // Should do more error checking on I/O operations
}
RTREE_TEMPLATE
bool RTREE_QUAL::Save(const char* a_fileName)
{
RTFileStream stream;
if(!stream.OpenWrite(a_fileName))
{
return false;
}
bool result = Save(stream);
stream.Close();
return result;
}
RTREE_TEMPLATE
bool RTREE_QUAL::Save(RTFileStream& a_stream)
{
// Write some kind of header
int dataFileId = ('R'<<0)|('T'<<8)|('R'<<16)|('E'<<24);
int dataSize = sizeof(DATATYPE);
int dataNumDims = NUMDIMS;
int dataElemSize = sizeof(ELEMTYPE);
int dataElemRealSize = sizeof(ELEMTYPEREAL);
int dataMaxNodes = TMAXNODES;
int dataMinNodes = TMINNODES;
a_stream.Write(dataFileId);
a_stream.Write(dataSize);
a_stream.Write(dataNumDims);
a_stream.Write(dataElemSize);
a_stream.Write(dataElemRealSize);
a_stream.Write(dataMaxNodes);
a_stream.Write(dataMinNodes);
// Recursively save tree
bool result = SaveRec(m_root, a_stream);
return result;
}
RTREE_TEMPLATE
bool RTREE_QUAL::SaveRec(Node* a_node, RTFileStream& a_stream)
{
a_stream.Write(a_node->m_level);
a_stream.Write(a_node->m_count);
if(a_node->IsInternalNode()) // not a leaf node
{
for(int index = 0; index < a_node->m_count; ++index)
{
Branch* curBranch = &a_node->m_branch[index];
a_stream.WriteArray(curBranch->m_rect.m_min, NUMDIMS);
a_stream.WriteArray(curBranch->m_rect.m_max, NUMDIMS);
SaveRec(curBranch->m_child, a_stream);
}
}
else // A leaf node
{
for(int index = 0; index < a_node->m_count; ++index)
{
Branch* curBranch = &a_node->m_branch[index];
a_stream.WriteArray(curBranch->m_rect.m_min, NUMDIMS);
a_stream.WriteArray(curBranch->m_rect.m_max, NUMDIMS);
a_stream.Write(curBranch->m_data);
}
}
return true; // Should do more error checking on I/O operations
}
RTREE_TEMPLATE
void RTREE_QUAL::RemoveAll()
{
// Delete all existing nodes
Reset();
m_root = AllocNode();
m_root->m_level = 0;
}
RTREE_TEMPLATE
void RTREE_QUAL::Reset()
{
#ifdef RTREE_DONT_USE_MEMPOOLS
// Delete all existing nodes
RemoveAllRec(m_root);
#else // RTREE_DONT_USE_MEMPOOLS
// Just reset memory pools. We are not using complex types
// EXAMPLE
#endif // RTREE_DONT_USE_MEMPOOLS
}
RTREE_TEMPLATE
void RTREE_QUAL::RemoveAllRec(Node* a_node)
{
ASSERT(a_node);
ASSERT(a_node->m_level >= 0);
if(a_node->IsInternalNode()) // This is an internal node in the tree
{
for(int index=0; index < a_node->m_count; ++index)
{
RemoveAllRec(a_node->m_branch[index].m_child);
}
}
FreeNode(a_node);
}
RTREE_TEMPLATE
typename RTREE_QUAL::Node* RTREE_QUAL::AllocNode()
{
Node* newNode;
#ifdef RTREE_DONT_USE_MEMPOOLS
newNode = new Node;
#else // RTREE_DONT_USE_MEMPOOLS
// EXAMPLE
#endif // RTREE_DONT_USE_MEMPOOLS
InitNode(newNode);
return newNode;
}
RTREE_TEMPLATE
void RTREE_QUAL::FreeNode(Node* a_node)
{
ASSERT(a_node);
#ifdef RTREE_DONT_USE_MEMPOOLS
delete a_node;
#else // RTREE_DONT_USE_MEMPOOLS
// EXAMPLE
#endif // RTREE_DONT_USE_MEMPOOLS
}
// Allocate space for a node in the list used in RemoveRect to
// store Nodes that are too empty.
RTREE_TEMPLATE
typename RTREE_QUAL::ListNode* RTREE_QUAL::AllocListNode()
{
#ifdef RTREE_DONT_USE_MEMPOOLS
return new ListNode;
#else // RTREE_DONT_USE_MEMPOOLS
// EXAMPLE
#endif // RTREE_DONT_USE_MEMPOOLS
}
RTREE_TEMPLATE
void RTREE_QUAL::FreeListNode(ListNode* a_listNode)
{
#ifdef RTREE_DONT_USE_MEMPOOLS
delete a_listNode;
#else // RTREE_DONT_USE_MEMPOOLS
// EXAMPLE
#endif // RTREE_DONT_USE_MEMPOOLS
}
RTREE_TEMPLATE
void RTREE_QUAL::InitNode(Node* a_node)
{
a_node->m_count = 0;
a_node->m_level = -1;
}
RTREE_TEMPLATE
void RTREE_QUAL::InitRect(Rect* a_rect)
{
for(int index = 0; index < NUMDIMS; ++index)
{
a_rect->m_min[index] = (ELEMTYPE)0;
a_rect->m_max[index] = (ELEMTYPE)0;
}
}
// Inserts a new data rectangle into the index structure.
// Recursively descends tree, propagates splits back up.
// Returns false if the node was not split. Old node updated.
// If the node was split, returns true and sets the pointer pointed to by
// new_node to point to the new node. Old node updated to become one of two.
// The level argument specifies the number of steps up from the leaf
// level to insert; e.g. a data rectangle goes in at level = 0.
RTREE_TEMPLATE
bool RTREE_QUAL::InsertRectRec(const Branch& a_branch, Node* a_node, Node** a_newNode, int a_level)
{
ASSERT(a_node && a_newNode);
ASSERT(a_level >= 0 && a_level <= a_node->m_level);
// recurse until we reach the correct level for the new record. data records
// will always be called with a_level == 0 (leaf)
if(a_node->m_level > a_level)
{
// Still above level for insertion, go down tree recursively
Node* otherNode;
// find the optimal branch for this record
int index = PickBranch(&a_branch.m_rect, a_node);
// recursively insert this record into the picked branch
bool childWasSplit = InsertRectRec(a_branch, a_node->m_branch[index].m_child, &otherNode, a_level);
if (!childWasSplit)
{
// Child was not split. Merge the bounding box of the new record with the
// existing bounding box
a_node->m_branch[index].m_rect = CombineRect(&a_branch.m_rect, &(a_node->m_branch[index].m_rect));
return false;
}
else
{
// Child was split. The old branches are now re-partitioned to two nodes
// so we have to re-calculate the bounding boxes of each node
a_node->m_branch[index].m_rect = NodeCover(a_node->m_branch[index].m_child);
Branch branch;
branch.m_child = otherNode;
branch.m_rect = NodeCover(otherNode);
// The old node is already a child of a_node. Now add the newly-created
// node to a_node as well. a_node might be split because of that.
return AddBranch(&branch, a_node, a_newNode);
}
}
else if(a_node->m_level == a_level)
{
// We have reached level for insertion. Add rect, split if necessary
return AddBranch(&a_branch, a_node, a_newNode);
}
else
{
// Should never occur
ASSERT(0);
return false;
}
}
// Insert a data rectangle into an index structure.
// InsertRect provides for splitting the root;
// returns true if the root was split, false if it was not.
// The level argument specifies the number of steps up from the leaf
// level to insert; e.g. a data rectangle goes in at level = 0.
// InsertRectRec does the recursion.
//
RTREE_TEMPLATE
bool RTREE_QUAL::InsertRect(const Branch& a_branch, Node** a_root, int a_level)
{
ASSERT(a_root);
ASSERT(a_level >= 0 && a_level <= (*a_root)->m_level);
#ifdef _DEBUG
for(int index=0; index < NUMDIMS; ++index)
{
ASSERT(a_branch.m_rect.m_min[index] <= a_branch.m_rect.m_max[index]);
}
#endif //_DEBUG
Node* newNode;
if(InsertRectRec(a_branch, *a_root, &newNode, a_level)) // Root split
{
// Grow the tree taller and create a new root
Node* newRoot = AllocNode();
newRoot->m_level = (*a_root)->m_level + 1;
Branch branch;
// add old root node as a child of the new root
branch.m_rect = NodeCover(*a_root);
branch.m_child = *a_root;
AddBranch(&branch, newRoot, NULL);
// add the split node as a child of the new root
branch.m_rect = NodeCover(newNode);
branch.m_child = newNode;
AddBranch(&branch, newRoot, NULL);
// set the new root as the root node
*a_root = newRoot;
return true;
}
return false;
}
// Find the smallest rectangle that includes all rectangles in branches of a node.
RTREE_TEMPLATE
typename RTREE_QUAL::Rect RTREE_QUAL::NodeCover(Node* a_node)
{
ASSERT(a_node);
Rect rect = a_node->m_branch[0].m_rect;
for(int index = 1; index < a_node->m_count; ++index)
{
rect = CombineRect(&rect, &(a_node->m_branch[index].m_rect));
}
return rect;
}
// Add a branch to a node. Split the node if necessary.
// Returns false if the node was not split. Old node updated.
// Returns true if the node was split; sets *new_node to the address of the new node.
// Old node updated, becomes one of two.
RTREE_TEMPLATE
bool RTREE_QUAL::AddBranch(const Branch* a_branch, Node* a_node, Node** a_newNode)
{
ASSERT(a_branch);
ASSERT(a_node);
if(a_node->m_count < MAXNODES) // Split won't be necessary
{
a_node->m_branch[a_node->m_count] = *a_branch;
++a_node->m_count;
return false;
}
else
{
ASSERT(a_newNode);
SplitNode(a_node, a_branch, a_newNode);
return true;
}
}
// Disconnect a dependent node.
// Caller must return (or stop using iteration index) after this as count has changed
RTREE_TEMPLATE
void RTREE_QUAL::DisconnectBranch(Node* a_node, int a_index)
{
ASSERT(a_node && (a_index >= 0) && (a_index < MAXNODES));
ASSERT(a_node->m_count > 0);
// Remove element by swapping with the last element to prevent gaps in array
a_node->m_branch[a_index] = a_node->m_branch[a_node->m_count - 1];
--a_node->m_count;
}
// Pick a branch. Pick the one that will need the smallest increase
// in area to accommodate the new rectangle. This will result in the
// least total area for the covering rectangles in the current node.
// In case of a tie, pick the one which was smaller before, to get
// the best resolution when searching.
RTREE_TEMPLATE
int RTREE_QUAL::PickBranch(const Rect* a_rect, Node* a_node)
{
ASSERT(a_rect && a_node);
bool firstTime = true;
ELEMTYPEREAL increase;
ELEMTYPEREAL bestIncr = (ELEMTYPEREAL)-1;
ELEMTYPEREAL area;
ELEMTYPEREAL bestArea;
int best;
Rect tempRect;
for(int index=0; index < a_node->m_count; ++index)
{
Rect* curRect = &a_node->m_branch[index].m_rect;
area = CalcRectVolume(curRect);
tempRect = CombineRect(a_rect, curRect);
increase = CalcRectVolume(&tempRect) - area;
if((increase < bestIncr) || firstTime)
{
best = index;
bestArea = area;
bestIncr = increase;
firstTime = false;
}
else if((increase == bestIncr) && (area < bestArea))
{
best = index;
bestArea = area;
bestIncr = increase;
}
}
return best;
}
// Combine two rectangles into larger one containing both
RTREE_TEMPLATE
typename RTREE_QUAL::Rect RTREE_QUAL::CombineRect(const Rect* a_rectA, const Rect* a_rectB)
{
ASSERT(a_rectA && a_rectB);
Rect newRect;
for(int index = 0; index < NUMDIMS; ++index)
{
newRect.m_min[index] = std::min(a_rectA->m_min[index], a_rectB->m_min[index]);
newRect.m_max[index] = std::max(a_rectA->m_max[index], a_rectB->m_max[index]);
}
return newRect;
}
// Split a node.
// Divides the nodes branches and the extra one between two nodes.
// Old node is one of the new ones, and one really new one is created.
// Tries more than one method for choosing a partition, uses best result.
RTREE_TEMPLATE
void RTREE_QUAL::SplitNode(Node* a_node, const Branch* a_branch, Node** a_newNode)
{
ASSERT(a_node);
ASSERT(a_branch);
// Could just use local here, but member or external is faster since it is reused
PartitionVars localVars;
PartitionVars* parVars = &localVars;
// Load all the branches into a buffer, initialize old node
GetBranches(a_node, a_branch, parVars);
// Find partition
ChoosePartition(parVars, MINNODES);
// Create a new node to hold (about) half of the branches
*a_newNode = AllocNode();
(*a_newNode)->m_level = a_node->m_level;
// Put branches from buffer into 2 nodes according to the chosen partition
a_node->m_count = 0;
LoadNodes(a_node, *a_newNode, parVars);
ASSERT((a_node->m_count + (*a_newNode)->m_count) == parVars->m_total);
}
// Calculate the n-dimensional volume of a rectangle
RTREE_TEMPLATE
ELEMTYPEREAL RTREE_QUAL::RectVolume(Rect* a_rect)
{
ASSERT(a_rect);
ELEMTYPEREAL volume = (ELEMTYPEREAL)1;
for(int index=0; index<NUMDIMS; ++index)
{
volume *= a_rect->m_max[index] - a_rect->m_min[index];
}
ASSERT(volume >= (ELEMTYPEREAL)0);
return volume;
}
// The exact volume of the bounding sphere for the given Rect
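// (the radius is half the rect's diagonal, so the result is m_unitSphereVolume * radius^NUMDIMS)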
RTREE_TEMPLATE
ELEMTYPEREAL RTREE_QUAL::RectSphericalVolume(Rect* a_rect)
{
ASSERT(a_rect);
ELEMTYPEREAL sumOfSquares = (ELEMTYPEREAL)0;
ELEMTYPEREAL radius;
for(int index=0; index < NUMDIMS; ++index)
{
ELEMTYPEREAL halfExtent = ((ELEMTYPEREAL)a_rect->m_max[index] - (ELEMTYPEREAL)a_rect->m_min[index]) * 0.5f;
sumOfSquares += halfExtent * halfExtent;
}
radius = (ELEMTYPEREAL)sqrt(sumOfSquares);
// Pow may be slow, so test for common dims like 2,3 and just use x*x, x*x*x.
if(NUMDIMS == 3)
{
return (radius * radius * radius * m_unitSphereVolume);
}
else if(NUMDIMS == 2)
{
return (radius * radius * m_unitSphereVolume);
}
else
{
return (ELEMTYPEREAL)(pow(radius, NUMDIMS) * m_unitSphereVolume);
}
}
// Use one of the methods to calculate rectangle volume
RTREE_TEMPLATE
ELEMTYPEREAL RTREE_QUAL::CalcRectVolume(Rect* a_rect)
{
#ifdef RTREE_USE_SPHERICAL_VOLUME
return RectSphericalVolume(a_rect); // Slower but helps certain merge cases
#else // RTREE_USE_SPHERICAL_VOLUME
return RectVolume(a_rect); // Faster but can cause poor merges
#endif // RTREE_USE_SPHERICAL_VOLUME
}
// Load branch buffer with branches from full node plus the extra branch.
RTREE_TEMPLATE
void RTREE_QUAL::GetBranches(Node* a_node, const Branch* a_branch, PartitionVars* a_parVars)
{
ASSERT(a_node);
ASSERT(a_branch);
ASSERT(a_node->m_count == MAXNODES);
// Load the branch buffer
for(int index=0; index < MAXNODES; ++index)
{
a_parVars->m_branchBuf[index] = a_node->m_branch[index];
}
a_parVars->m_branchBuf[MAXNODES] = *a_branch;
a_parVars->m_branchCount = MAXNODES + 1;
// Calculate rect containing all in the set
a_parVars->m_coverSplit = a_parVars->m_branchBuf[0].m_rect;
for(int index=1; index < MAXNODES+1; ++index)
{
a_parVars->m_coverSplit = CombineRect(&a_parVars->m_coverSplit, &a_parVars->m_branchBuf[index].m_rect);
}
a_parVars->m_coverSplitArea = CalcRectVolume(&a_parVars->m_coverSplit);
}
// Method #0 for choosing a partition:
// As the seeds for the two groups, pick the two rects that would waste the
// most area if covered by a single rectangle, i.e. evidently the worst pair
// to have in the same group.
// Of the remaining, one at a time is chosen to be put in one of the two groups.
// The one chosen is the one with the greatest difference in area expansion
// depending on which group - the rect most strongly attracted to one group
// and repelled from the other.
// If one group gets too full (more would force other group to violate min
// fill requirement) then other group gets the rest.
// These last are the ones that can go in either group most easily.
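// (This matches the quadratic-cost split described in Guttman's original R-tree paper.)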
RTREE_TEMPLATE
void RTREE_QUAL::ChoosePartition(PartitionVars* a_parVars, int a_minFill)
{
ASSERT(a_parVars);
ELEMTYPEREAL biggestDiff;
int group, chosen, betterGroup;
InitParVars(a_parVars, a_parVars->m_branchCount, a_minFill);
PickSeeds(a_parVars);
while (((a_parVars->m_count[0] + a_parVars->m_count[1]) < a_parVars->m_total)
&& (a_parVars->m_count[0] < (a_parVars->m_total - a_parVars->m_minFill))
&& (a_parVars->m_count[1] < (a_parVars->m_total - a_parVars->m_minFill)))
{
biggestDiff = (ELEMTYPEREAL) -1;
for(int index=0; index<a_parVars->m_total; ++index)
{
if(PartitionVars::NOT_TAKEN == a_parVars->m_partition[index])
{
Rect* curRect = &a_parVars->m_branchBuf[index].m_rect;
Rect rect0 = CombineRect(curRect, &a_parVars->m_cover[0]);
Rect rect1 = CombineRect(curRect, &a_parVars->m_cover[1]);
ELEMTYPEREAL growth0 = CalcRectVolume(&rect0) - a_parVars->m_area[0];
ELEMTYPEREAL growth1 = CalcRectVolume(&rect1) - a_parVars->m_area[1];
ELEMTYPEREAL diff = growth1 - growth0;
if(diff >= 0)
{
group = 0;
}
else
{
group = 1;
diff = -diff;
}
if(diff > biggestDiff)
{
biggestDiff = diff;
chosen = index;
betterGroup = group;
}
else if((diff == biggestDiff) && (a_parVars->m_count[group] < a_parVars->m_count[betterGroup]))
{
chosen = index;
betterGroup = group;
}
}
}
Classify(chosen, betterGroup, a_parVars);
}
// If one group too full, put remaining rects in the other
if((a_parVars->m_count[0] + a_parVars->m_count[1]) < a_parVars->m_total)
{
if(a_parVars->m_count[0] >= a_parVars->m_total - a_parVars->m_minFill)
{
group = 1;
}
else
{
group = 0;
}
for(int index=0; index<a_parVars->m_total; ++index)
{
if(PartitionVars::NOT_TAKEN == a_parVars->m_partition[index])
{
Classify(index, group, a_parVars);
}
}
}
ASSERT((a_parVars->m_count[0] + a_parVars->m_count[1]) == a_parVars->m_total);
ASSERT((a_parVars->m_count[0] >= a_parVars->m_minFill) &&
(a_parVars->m_count[1] >= a_parVars->m_minFill));
}
// Copy branches from the buffer into two nodes according to the partition.
RTREE_TEMPLATE
void RTREE_QUAL::LoadNodes(Node* a_nodeA, Node* a_nodeB, PartitionVars* a_parVars)
{
ASSERT(a_nodeA);
ASSERT(a_nodeB);
ASSERT(a_parVars);
for(int index=0; index < a_parVars->m_total; ++index)
{
ASSERT(a_parVars->m_partition[index] == 0 || a_parVars->m_partition[index] == 1);
int targetNodeIndex = a_parVars->m_partition[index];
Node* targetNodes[] = {a_nodeA, a_nodeB};
// It is assured that AddBranch here will not cause a node split.
bool nodeWasSplit = AddBranch(&a_parVars->m_branchBuf[index], targetNodes[targetNodeIndex], NULL);
ASSERT(!nodeWasSplit);
}
}
// Initialize a PartitionVars structure.
RTREE_TEMPLATE
void RTREE_QUAL::InitParVars(PartitionVars* a_parVars, int a_maxRects, int a_minFill)
{
ASSERT(a_parVars);
a_parVars->m_count[0] = a_parVars->m_count[1] = 0;
a_parVars->m_area[0] = a_parVars->m_area[1] = (ELEMTYPEREAL)0;
a_parVars->m_total = a_maxRects;
a_parVars->m_minFill = a_minFill;
for(int index=0; index < a_maxRects; ++index)
{
a_parVars->m_partition[index] = PartitionVars::NOT_TAKEN;
}
}
RTREE_TEMPLATE
void RTREE_QUAL::PickSeeds(PartitionVars* a_parVars)
{
int seed0, seed1;
ELEMTYPEREAL worst, waste;
ELEMTYPEREAL area[MAXNODES+1];
for(int index=0; index<a_parVars->m_total; ++index)
{
area[index] = CalcRectVolume(&a_parVars->m_branchBuf[index].m_rect);
}
worst = -a_parVars->m_coverSplitArea - 1;
for(int indexA=0; indexA < a_parVars->m_total-1; ++indexA)
{
for(int indexB = indexA+1; indexB < a_parVars->m_total; ++indexB)
{
Rect oneRect = CombineRect(&a_parVars->m_branchBuf[indexA].m_rect, &a_parVars->m_branchBuf[indexB].m_rect);
waste = CalcRectVolume(&oneRect) - area[indexA] - area[indexB];
if(waste > worst)
{
worst = waste;
seed0 = indexA;
seed1 = indexB;
}
}
}
Classify(seed0, 0, a_parVars);
Classify(seed1, 1, a_parVars);
}
// Put a branch in one of the groups.
RTREE_TEMPLATE
void RTREE_QUAL::Classify(int a_index, int a_group, PartitionVars* a_parVars)
{
ASSERT(a_parVars);
ASSERT(PartitionVars::NOT_TAKEN == a_parVars->m_partition[a_index]);
a_parVars->m_partition[a_index] = a_group;
// Calculate combined rect
if (a_parVars->m_count[a_group] == 0)
{
a_parVars->m_cover[a_group] = a_parVars->m_branchBuf[a_index].m_rect;
}
else
{
a_parVars->m_cover[a_group] = CombineRect(&a_parVars->m_branchBuf[a_index].m_rect, &a_parVars->m_cover[a_group]);
}
// Calculate volume of combined rect
a_parVars->m_area[a_group] = CalcRectVolume(&a_parVars->m_cover[a_group]);
++a_parVars->m_count[a_group];
}
// Delete a data rectangle from an index structure.
// Pass in a pointer to a Rect, the tid of the record, ptr to ptr to root node.
// Returns true if the record was not found, false on success.
// RemoveRect provides for eliminating the root.
RTREE_TEMPLATE
bool RTREE_QUAL::RemoveRect(Rect* a_rect, const DATATYPE& a_id, Node** a_root)
{
ASSERT(a_rect && a_root);
ASSERT(*a_root);
ListNode* reInsertList = NULL;
if(!RemoveRectRec(a_rect, a_id, *a_root, &reInsertList))
{
// Found and deleted a data item
// Reinsert any branches from eliminated nodes
while(reInsertList)
{
Node* tempNode = reInsertList->m_node;
for(int index = 0; index < tempNode->m_count; ++index)
{
// TODO go over this code. should I use (tempNode->m_level - 1)?
InsertRect(tempNode->m_branch[index],
a_root,
tempNode->m_level);
}
ListNode* remLNode = reInsertList;
reInsertList = reInsertList->m_next;
FreeNode(remLNode->m_node);
FreeListNode(remLNode);
}
// Check for redundant root (not leaf, 1 child) and eliminate it.
// TODO: replace 'if' with 'while'? In case there is a whole branch of redundant roots...
if((*a_root)->m_count == 1 && (*a_root)->IsInternalNode())
{
Node* tempNode = (*a_root)->m_branch[0].m_child;
ASSERT(tempNode);
FreeNode(*a_root);
*a_root = tempNode;
}
return false;
}
else
{
return true;
}
}
// Delete a rectangle from non-root part of an index structure.
// Called by RemoveRect. Descends tree recursively,
// merges branches on the way back up.
// Returns true if the record was not found, false on success.
RTREE_TEMPLATE
bool RTREE_QUAL::RemoveRectRec(Rect* a_rect, const DATATYPE& a_id, Node* a_node, ListNode** a_listNode)
{
ASSERT(a_rect && a_node && a_listNode);
ASSERT(a_node->m_level >= 0);
if(a_node->IsInternalNode()) // not a leaf node
{
for(int index = 0; index < a_node->m_count; ++index)
{
if(Overlap(a_rect, &(a_node->m_branch[index].m_rect)))
{
if(!RemoveRectRec(a_rect, a_id, a_node->m_branch[index].m_child, a_listNode))
{
if(a_node->m_branch[index].m_child->m_count >= MINNODES)
{
// child removed, just resize parent rect
a_node->m_branch[index].m_rect = NodeCover(a_node->m_branch[index].m_child);
}
else
{
// child removed, not enough entries in node, eliminate node
ReInsert(a_node->m_branch[index].m_child, a_listNode);
DisconnectBranch(a_node, index); // Must return after this call as count has changed
}
return false;
}
}
}
return true;
}
else // A leaf node
{
for(int index = 0; index < a_node->m_count; ++index)
{
if(a_node->m_branch[index].m_data == a_id)
{
DisconnectBranch(a_node, index); // Must return after this call as count has changed
return false;
}
}
return true;
}
}
// Decide whether two rectangles overlap.
RTREE_TEMPLATE
bool RTREE_QUAL::Overlap(Rect* a_rectA, Rect* a_rectB)
{
ASSERT(a_rectA && a_rectB);
for(int index=0; index < NUMDIMS; ++index)
{
if (a_rectA->m_min[index] > a_rectB->m_max[index] ||
a_rectB->m_min[index] > a_rectA->m_max[index])
{
return false;
}
}
return true;
}
// Add a node to the reinsertion list. All its branches will later
// be reinserted into the index structure.
RTREE_TEMPLATE
void RTREE_QUAL::ReInsert(Node* a_node, ListNode** a_listNode)
{
ListNode* newListNode;
newListNode = AllocListNode();
newListNode->m_node = a_node;
newListNode->m_next = *a_listNode;
*a_listNode = newListNode;
}
// Search in an index tree or subtree for all data rectangles that overlap the argument rectangle.
RTREE_TEMPLATE
bool RTREE_QUAL::Search(Node* a_node, Rect* a_rect, int& a_foundCount, t_resultCallback a_resultCallback, void* a_context)
{
ASSERT(a_node);
ASSERT(a_node->m_level >= 0);
ASSERT(a_rect);
if(a_node->IsInternalNode())
{
// This is an internal node in the tree
for(int index=0; index < a_node->m_count; ++index)
{
if(Overlap(a_rect, &a_node->m_branch[index].m_rect))
{
if(!Search(a_node->m_branch[index].m_child, a_rect, a_foundCount, a_resultCallback, a_context))
{
// The callback indicated to stop searching
return false;
}
}
}
}
else
{
// This is a leaf node
for(int index=0; index < a_node->m_count; ++index)
{
if(Overlap(a_rect, &a_node->m_branch[index].m_rect))
{
DATATYPE& id = a_node->m_branch[index].m_data;
++a_foundCount;
// NOTE: There are different ways to return results. Here's where to modify
if(a_resultCallback)
{
if(!a_resultCallback(id, a_context))
{
return false; // Don't continue searching
}
}
}
}
}
return true; // Continue searching
}
#undef RTREE_TEMPLATE
#undef RTREE_QUAL
#endif //RTREE_H

// File: la3dm-master/include/gpoctomap/gpblock.h
#ifndef LA3DM_GP_BLOCK_H
#define LA3DM_GP_BLOCK_H
#include <unordered_map>
#include <array>
#include "point3f.h"
#include "gpoctree_node.h"
#include "gpoctree.h"
namespace la3dm {
/// Hash key to index Block given block's center.
typedef int64_t BlockHashKey;
/// Initialize Look-Up Table
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth);
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map,
unsigned short max_depth);
/// Extended Block
#ifdef PREDICT
typedef std::array<BlockHashKey, 27> ExtendedBlock;
#else
typedef std::array<BlockHashKey, 7> ExtendedBlock;
#endif
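    /// (With PREDICT, the 27 keys cover the full 3x3x3 neighborhood of blocks;
    /// otherwise the 7 keys are the block itself plus its six face-adjacent neighbors.)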
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(point3f center);
/// Convert from block to hash key.
BlockHashKey block_to_hash_key(float x, float y, float z);
/// Convert from hash key to block.
point3f hash_key_to_block(BlockHashKey key);
/// Get current block's extended block.
ExtendedBlock get_extended_block(BlockHashKey key);
/*
* @brief Block is built on top of OcTree, providing the functions to locate nodes.
*
* Block stores the information needed to locate each OcTreeNode's position:
* fixed resolution, fixed block_size, both of which must be initialized.
     * The localization is implemented using a Look-Up Table.
*/
class Block : public OcTree {
friend BlockHashKey block_to_hash_key(point3f center);
friend BlockHashKey block_to_hash_key(float x, float y, float z);
friend point3f hash_key_to_block(BlockHashKey key);
friend ExtendedBlock get_extended_block(BlockHashKey key);
friend class GPOctoMap;
public:
Block();
Block(point3f center);
/// @return location of the OcTreeNode given OcTree's LeafIterator.
inline point3f get_loc(const LeafIterator &it) const {
return Block::key_loc_map[it.get_hash_key()] + center;
}
/// @return size of the OcTreeNode given OcTree's LeafIterator.
inline float get_size(const LeafIterator &it) const {
unsigned short depth, index;
hash_key_to_node(it.get_hash_key(), depth, index);
return float(size / pow(2, depth));
}
/// @return center of current Block.
inline point3f get_center() const { return center; }
/// @return min lim of current Block.
inline point3f get_lim_min() const { return center - point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return max lim of current Block.
inline point3f get_lim_max() const { return center + point3f(size / 2.0f, size / 2.0f, size / 2.0f); }
/// @return ExtendedBlock of current Block.
ExtendedBlock get_extended_block() const;
OcTreeHashKey get_node(unsigned short x, unsigned short y, unsigned short z) const;
point3f get_point(unsigned short x, unsigned short y, unsigned short z) const;
void get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const;
OcTreeNode &search(float x, float y, float z) const;
OcTreeNode &search(point3f p) const;
private:
        // Look-Up Table
static std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
static std::unordered_map<unsigned short, OcTreeHashKey> index_map;
static float resolution;
static float size;
static unsigned short cell_num;
point3f center;
};
}
#endif // LA3DM_GP_BLOCK_H

// File: la3dm-master/include/gpoctomap/gpoctomap.h
#ifndef LA3DM_GP_OCTOMAP_H
#define LA3DM_GP_OCTOMAP_H
#include <unordered_map>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include "rtree.h"
#include "gpblock.h"
#include "gpoctree_node.h"
namespace la3dm {
/// PCL PointCloud types as input
typedef pcl::PointXYZ PCLPointType;
typedef pcl::PointCloud<PCLPointType> PCLPointCloud;
/*
* @brief GPOctoMap
*
* Fast, Accurate Gaussian Process Occupancy Maps via Test-Data Octrees
* and Nested Bayesian Fusion. The space is partitioned by Blocks
* in which OcTrees with fixed depth are rooted. The training part of GP
* is performed Block by Block with training data inside each Block.
* Occupancy values in one Block is predicted by its ExtendedBlock via BCM.
*/
class GPOctoMap {
public:
/// Types used internally
typedef std::vector<point3f> PointCloud;
typedef std::pair<point3f, float> GPPointType;
typedef std::vector<GPPointType> GPPointCloud;
typedef RTree<GPPointType *, float, 3, float> MyRTree;
public:
GPOctoMap();
/*
* @param resolution (default 0.1m)
* @param block_depth maximum depth of OcTree (default 4)
* @param sf2 signal variance in GPs (default 1.0)
* @param ell length-scale in GPs (default 1.0)
* @param noise noise variance in GPs (default 0.01)
* @param l length-scale in logistic regression function (default 100)
* @param min_var minimum variance in Occupancy (default 0.001)
* @param max_var maximum variance in Occupancy (default 1000)
     * @param max_known_var maximum variance for Occupancy to be classified as KNOWN State (default 0.02)
* @param free_thresh free threshold for Occupancy probability (default 0.3)
* @param occupied_thresh occupied threshold for Occupancy probability (default 0.7)
*/
GPOctoMap(float resolution, unsigned short block_depth, float sf2, float ell, float noise, float l,
float min_var,
float max_var, float max_known_var, float free_thresh, float occupied_thresh);
~GPOctoMap();
/// Set resolution.
void set_resolution(float resolution);
/// Set block max depth.
void set_block_depth(unsigned short max_depth);
/// Get resolution.
inline float get_resolution() const { return resolution; }
/// Get block max depth.
inline float get_block_depth() const { return block_depth; }
/*
* @brief Insert PCL PointCloud into GPOctoMaps.
* @param cloud one scan in PCLPointCloud format
* @param origin sensor origin in the scan
* @param ds_resolution downsampling resolution for PCL VoxelGrid filtering (-1 if no downsampling)
* @param free_res resolution for sampling free training points along sensor beams (default 2.0)
* @param max_range maximum range for beams to be considered as valid measurements (-1 if no limitation)
*/
void insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res = 2.0f,
float max_range = -1);
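        // Usage sketch (illustrative; the values are the documented defaults, and
        // `cloud`/`origin` are assumed to come from the sensor driver):
        //   la3dm::GPOctoMap map(0.1f, 4, 1.0f, 1.0f, 0.01f, 100.0f,
        //                        0.001f, 1000.0f, 0.02f, 0.3f, 0.7f);
        //   map.insert_pointcloud(cloud, origin, -1 /* no downsampling */, 2.0f, -1);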
void insert_training_data(const GPPointCloud &cloud);
/// Get bounding box of the map.
void get_bbox(point3f &lim_min, point3f &lim_max) const;
class RayCaster {
public:
RayCaster(const GPOctoMap *map, const point3f &start, const point3f &end) : map(map) {
assert(map != nullptr);
_block_key = block_to_hash_key(start);
block = map->search(_block_key);
lim = static_cast<unsigned short>(pow(2, map->block_depth - 1));
if (block != nullptr) {
block->get_index(start, x, y, z);
block_lim = block->get_center();
block_size = block->size;
current_p = start;
resolution = map->resolution;
int x0 = static_cast<int>((start.x() / resolution));
int y0 = static_cast<int>((start.y() / resolution));
int z0 = static_cast<int>((start.z() / resolution));
int x1 = static_cast<int>((end.x() / resolution));
int y1 = static_cast<int>((end.y() / resolution));
int z1 = static_cast<int>((end.z() / resolution));
dx = abs(x1 - x0);
dy = abs(y1 - y0);
dz = abs(z1 - z0);
n = 1 + dx + dy + dz;
x_inc = x1 > x0 ? 1 : (x1 == x0 ? 0 : -1);
y_inc = y1 > y0 ? 1 : (y1 == y0 ? 0 : -1);
z_inc = z1 > z0 ? 1 : (z1 == z0 ? 0 : -1);
xy_error = dx - dy;
xz_error = dx - dz;
yz_error = dy - dz;
dx *= 2;
dy *= 2;
dz *= 2;
} else {
n = 0;
}
}
inline bool end() const { return n <= 0; }
bool next(point3f &p, OcTreeNode &node, BlockHashKey &block_key, OcTreeHashKey &node_key) {
assert(!end());
bool valid = false;
unsigned short index = x + y * lim + z * lim * lim;
node_key = Block::index_map[index];
block_key = _block_key;
if (block != nullptr) {
valid = true;
node = (*block)[node_key];
current_p = block->get_point(x, y, z);
p = current_p;
} else {
p = current_p;
}
if (xy_error > 0 && xz_error > 0) {
x += x_inc;
current_p.x() += x_inc * resolution;
xy_error -= dy;
xz_error -= dz;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error < 0 && yz_error > 0) {
y += y_inc;
current_p.y() += y_inc * resolution;
xy_error += dx;
yz_error -= dz;
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
} else if (yz_error < 0 && xz_error < 0) {
z += z_inc;
current_p.z() += z_inc * resolution;
xz_error += dx;
yz_error += dy;
if (z >= lim || z < 0) {
block_lim.z() += z_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
z = z_inc > 0 ? 0 : lim - 1;
}
} else if (xy_error == 0) {
x += x_inc;
y += y_inc;
n -= 2;
current_p.x() += x_inc * resolution;
current_p.y() += y_inc * resolution;
if (x >= lim || x < 0) {
block_lim.x() += x_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
x = x_inc > 0 ? 0 : lim - 1;
}
if (y >= lim || y < 0) {
block_lim.y() += y_inc * block_size;
_block_key = block_to_hash_key(block_lim);
block = map->search(_block_key);
y = y_inc > 0 ? 0 : lim - 1;
}
}
n--;
return valid;
}
private:
const GPOctoMap *map;
Block *block;
point3f block_lim;
float block_size, resolution;
int dx, dy, dz, error, n;
int x_inc, y_inc, z_inc, xy_error, xz_error, yz_error;
unsigned short index, x, y, z, lim;
BlockHashKey _block_key;
point3f current_p;
};
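        // Usage sketch (illustrative): walk the cells pierced by one sensor beam:
        //   GPOctoMap::RayCaster caster(&map, beam_origin, beam_end);
        //   while (!caster.end()) {
        //       point3f p; OcTreeNode node; BlockHashKey bk; OcTreeHashKey nk;
        //       if (caster.next(p, node, bk, nk)) { /* cell exists in the map */ }
        //   }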
/// LeafIterator for iterating all leaf nodes in blocks
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator(const GPOctoMap *map) {
assert(map != nullptr);
block_it = map->block_arr.cbegin();
end_block = map->block_arr.cend();
if (map->block_arr.size() > 0) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
} else {
leaf_it = OcTree::LeafIterator();
end_leaf = OcTree::LeafIterator();
}
}
// just for initializing end iterator
LeafIterator(std::unordered_map<BlockHashKey, Block *>::const_iterator block_it,
OcTree::LeafIterator leaf_it)
: block_it(block_it), leaf_it(leaf_it), end_block(block_it), end_leaf(leaf_it) { }
bool operator==(const LeafIterator &other) {
return (block_it == other.block_it) && (leaf_it == other.leaf_it);
}
bool operator!=(const LeafIterator &other) {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
++leaf_it;
if (leaf_it == end_leaf) {
++block_it;
if (block_it != end_block) {
leaf_it = block_it->second->begin_leaf();
end_leaf = block_it->second->end_leaf();
}
}
return *this;
}
OcTreeNode &operator*() const {
return *leaf_it;
}
std::vector<point3f> get_pruned_locs() const {
std::vector<point3f> pruned_locs;
point3f center = get_loc();
float size = get_size();
float x0 = center.x() - size * 0.5 + Block::resolution * 0.5;
float y0 = center.y() - size * 0.5 + Block::resolution * 0.5;
float z0 = center.z() - size * 0.5 + Block::resolution * 0.5;
float x1 = center.x() + size * 0.5;
float y1 = center.y() + size * 0.5;
float z1 = center.z() + size * 0.5;
for (float x = x0; x < x1; x += Block::resolution) {
for (float y = y0; y < y1; y += Block::resolution) {
for (float z = z0; z < z1; z += Block::resolution) {
pruned_locs.emplace_back(x, y, z);
}
}
}
return pruned_locs;
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline point3f get_loc() const {
return block_it->second->get_loc(leaf_it);
}
inline float get_size() const {
return block_it->second->get_size(leaf_it);
}
private:
std::unordered_map<BlockHashKey, Block *>::const_iterator block_it;
std::unordered_map<BlockHashKey, Block *>::const_iterator end_block;
OcTree::LeafIterator leaf_it;
OcTree::LeafIterator end_leaf;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); }
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(block_arr.cend(), OcTree::LeafIterator()); }
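        // Iteration sketch (illustrative): visit every leaf with its location and size:
        //   for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it) {
        //       point3f loc = it.get_loc();
        //       float size = it.get_size();
        //       State state = it.get_node().get_state();
        //   }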
OcTreeNode search(point3f p) const;
OcTreeNode search(float x, float y, float z) const;
Block *search(BlockHashKey key) const;
inline float get_block_size() const { return block_size; }
private:
/// @return true if point is inside a bounding box given min and max limits.
inline bool gp_point_in_bbox(const GPPointType &p, const point3f &lim_min, const point3f &lim_max) const {
return (p.first.x() > lim_min.x() && p.first.x() < lim_max.x() &&
p.first.y() > lim_min.y() && p.first.y() < lim_max.y() &&
p.first.z() > lim_min.z() && p.first.z() < lim_max.z());
}
/// Get the bounding box of a pointcloud.
void bbox(const GPPointCloud &cloud, point3f &lim_min, point3f &lim_max) const;
/// Get all block indices inside a bounding box.
void get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
std::vector<BlockHashKey> &blocks) const;
/// Get all points inside a bounding box assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
GPPointCloud &out);
/// @return true if point exists inside a bounding box assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max);
/// Get all points inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const BlockHashKey &key, GPPointCloud &out);
/// @return true if point exists inside a bounding box (block) assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const BlockHashKey &key);
/// Get all points inside an extended block assuming pointcloud has been inserted in rtree before.
int get_gp_points_in_bbox(const ExtendedBlock &block, GPPointCloud &out);
/// @return true if point exists inside an extended block assuming pointcloud has been inserted in rtree before.
int has_gp_points_in_bbox(const ExtendedBlock &block);
/// RTree callback function
static bool count_callback(GPPointType *p, void *arg);
/// RTree callback function
static bool search_callback(GPPointType *p, void *arg);
/// Downsample PCLPointCloud using PCL VoxelGrid Filtering.
void downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const;
/// Sample free training points along sensor beams.
void beam_sample(const point3f &hits, const point3f &origin, PointCloud &frees,
float free_resolution) const;
/// Get training data from one sensor scan.
void get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPPointCloud &xy) const;
float resolution;
float block_size;
unsigned short block_depth;
std::unordered_map<BlockHashKey, Block *> block_arr;
MyRTree rtree;
};
}
#endif // LA3DM_GP_OCTOMAP_H

// File: la3dm-master/include/gpoctomap/gpoctree.h
#ifndef LA3DM_GP_OCTREE_H
#define LA3DM_GP_OCTREE_H
#include <stack>
#include <vector>
#include "point3f.h"
#include "gpoctree_node.h"
namespace la3dm {
/// Hash key to index OcTree nodes given depth and the index in that layer.
typedef int OcTreeHashKey;
    /// Convert from node to hash key.
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned short index);
/// Convert from hash key to node.
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned short &index);
/*
* @brief A simple OcTree to organize GP occupancy data in one block.
*
* OcTree doesn't store positions of nodes in order to reduce memory usage.
* The nodes in OcTrees are indexed by OcTreeHashKey which can be used to
* retrieve positions later (See Block).
* For the purpose of mapping, this OcTree has fixed depth which should be
* set before using OcTrees.
*/
class OcTree {
friend class GPOctoMap;
public:
OcTree();
~OcTree();
OcTree(const OcTree &other);
OcTree &operator=(const OcTree &other);
/*
     * @brief Recursively prune OcTreeNodes with the same state.
*
* Prune nodes by setting nodes to PRUNED.
* Delete the layer if all nodes are pruned.
*/
bool prune();
/// @return true if this node is a leaf node.
bool is_leaf(OcTreeHashKey key) const;
/// @return true if this node is a leaf node.
bool is_leaf(unsigned short depth, unsigned short index) const;
/// @return true if this node exists and is not pruned.
bool search(OcTreeHashKey key) const;
/// @return Occupancy of the node (without checking if it exists!)
OcTreeNode &operator[](OcTreeHashKey key) const;
/// Leaf iterator for OcTrees: iterate all leaf nodes not pruned.
class LeafIterator : public std::iterator<std::forward_iterator_tag, OcTreeNode> {
public:
LeafIterator() : tree(nullptr) { }
LeafIterator(const OcTree *tree)
: tree(tree != nullptr && tree->node_arr != nullptr ? tree : nullptr) {
if (tree != nullptr) {
stack.emplace(0, 0);
stack.emplace(0, 0);
++(*this);
}
}
LeafIterator(const LeafIterator &other) : tree(other.tree), stack(other.stack) { }
LeafIterator &operator=(const LeafIterator &other) {
tree = other.tree;
stack = other.stack;
return *this;
}
bool operator==(const LeafIterator &other) const {
return (tree == other.tree) &&
(stack.size() == other.stack.size()) &&
(stack.size() == 0 || (stack.size() > 0 &&
(stack.top().depth == other.stack.top().depth) &&
(stack.top().index == other.stack.top().index)));
}
bool operator!=(const LeafIterator &other) const {
return !(this->operator==(other));
}
LeafIterator operator++(int) {
LeafIterator result(*this);
++(*this);
return result;
}
LeafIterator &operator++() {
if (stack.empty()) {
tree = nullptr;
} else {
stack.pop();
while (!stack.empty() && !tree->is_leaf(stack.top().depth, stack.top().index))
single_inc();
if (stack.empty())
tree = nullptr;
}
return *this;
}
inline OcTreeNode &operator*() const {
return (*tree)[get_hash_key()];
}
inline OcTreeNode &get_node() const {
return operator*();
}
inline OcTreeHashKey get_hash_key() const {
OcTreeHashKey key = node_to_hash_key(stack.top().depth, stack.top().index);
return key;
}
protected:
void single_inc() {
StackElement top(stack.top());
stack.pop();
for (int i = 0; i < 8; ++i) {
stack.emplace(top.depth + 1, top.index * 8 + i);
}
}
struct StackElement {
unsigned short depth;
unsigned short index;
StackElement(unsigned short depth, unsigned short index)
: depth(depth), index(index) { }
};
const OcTree *tree;
std::stack<StackElement, std::vector<StackElement> > stack;
};
/// @return the beginning of leaf iterator
inline LeafIterator begin_leaf() const { return LeafIterator(this); };
/// @return the end of leaf iterator
inline LeafIterator end_leaf() const { return LeafIterator(nullptr); };
private:
OcTreeNode **node_arr;
static unsigned short max_depth;
};
}
#endif // LA3DM_GP_OCTREE_H

// File: la3dm-master/include/gpoctomap/gpoctree_node.h
#ifndef LA3DM_GP_OCCUPANCY_H
#define LA3DM_GP_OCCUPANCY_H
#include <iostream>
#include <fstream>
namespace la3dm {
/// Occupancy state: before pruning: FREE, OCCUPIED, UNKNOWN; after pruning: PRUNED
enum class State : char {
FREE, OCCUPIED, UNKNOWN, PRUNED
};
/*
     * @brief GP regression outputs and occupancy state.
*
* Occupancy has member variables: m_ivar (m*ivar), ivar (1/var) and State.
* This representation speeds up the updates via BCM.
* Before using this class, set the static member variables first.
*/
class Occupancy {
friend std::ostream &operator<<(std::ostream &os, const Occupancy &oc);
friend std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc);
friend std::ifstream &operator>>(std::ifstream &is, Occupancy &oc);
friend class GPOctoMap;
public:
/*
* @brief Constructors and destructor.
*/
Occupancy() : m_ivar(0.0), ivar(Occupancy::min_ivar), state(State::UNKNOWN) { classified = false; }
Occupancy(float m, float var);
Occupancy(const Occupancy &other) : m_ivar(other.m_ivar), ivar(other.ivar), state(other.state) { }
Occupancy &operator=(const Occupancy &other) {
m_ivar = other.m_ivar;
ivar = other.ivar;
state = other.state;
return *this;
}
~Occupancy() { }
/*
* @brief Bayesian Committee Machine (BCM) update for Gaussian Process regression.
* @param new_m mean resulted from GP regression
* @param new_var variance resulted from GP regression
*/
void update(float new_m, float new_var);
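        // BCM fusion sketch (assuming the standard committee-machine rule that the
        // m_ivar/ivar representation is designed for; see the .cpp for the actual code):
        //   ivar   += 1.0f / new_var;   // precisions accumulate
        //   m_ivar += new_m / new_var;  // precision-weighted means accumulate
        // so that mean = m_ivar / ivar and var = 1 / ivar at any time.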
/// Get probability of occupancy.
float get_prob() const;
/// Get variance of occupancy (uncertainty)
inline float get_var() const { return 1.0f / ivar; }
/*
* @brief Get occupancy state of the node.
* @return occupancy state (see State).
*/
inline State get_state() const { return state; }
/// Prune current node; set state to PRUNED.
inline void prune() { state = State::PRUNED; }
/// Only FREE and OCCUPIED nodes can be equal.
inline bool operator==(const Occupancy &rhs) const {
return this->state != State::UNKNOWN && this->state == rhs.state;
}
bool classified;
private:
float m_ivar; // m / var or m * ivar
float ivar; // 1.0 / var
State state;
static float sf2; // signal variance
static float ell; // length-scale
static float noise; // noise variance
static float l; // gamma in logistic functions
static float max_ivar; // minimum variance
static float min_ivar; // maximum variance
static float min_known_ivar; // maximum variance for nodes to be considered as FREE or OCCUPIED
static float free_thresh; // FREE occupancy threshold
static float occupied_thresh; // OCCUPIED occupancy threshold
};
typedef Occupancy OcTreeNode;
}
#endif // LA3DM_GP_OCCUPANCY_H

// File: la3dm-master/include/gpoctomap/gpregressor.h
#ifndef LA3DM_GP_REGRESSOR_H
#define LA3DM_GP_REGRESSOR_H
#include <Eigen/Dense>
#include <vector>
namespace la3dm {
/*
* @brief A Simple Gaussian Process Regressor
* @param dim dimension of data (2, 3, etc.)
* @param T data type (float, double, etc.)
*/
template<int dim, typename T>
class GPRegressor {
public:
/// Eigen matrix type for training and test data and kernel
using MatrixXType = Eigen::Matrix<T, -1, dim, Eigen::RowMajor>;
using MatrixKType = Eigen::Matrix<T, -1, -1, Eigen::RowMajor>;
using MatrixDKType = Eigen::Matrix<T, -1, 1>;
using MatrixYType = Eigen::Matrix<T, -1, 1>;
GPRegressor(T sf2, T ell, T noise) : sf2(sf2), ell(ell), noise(noise), trained(false) { }
/*
* @brief Train Gaussian Process
* @param x input vector (3N, row major)
* @param y target vector (N)
*/
void train(const std::vector<T> &x, const std::vector<T> &y) {
assert(x.size() % dim == 0 && (int) (x.size() / dim) == y.size());
MatrixXType _x = Eigen::Map<const MatrixXType>(x.data(), x.size() / dim, dim);
MatrixYType _y = Eigen::Map<const MatrixYType>(y.data(), y.size(), 1);
train(_x, _y);
}
/*
* @brief Train Gaussian Process
* @param x input matrix (NX3)
* @param y target matrix (NX1)
*/
void train(const MatrixXType &x, const MatrixYType &y) {
this->x = MatrixXType(x);
covMaterniso3(x, x, K);
// covSparse(x, x, K);
K = K + noise * MatrixKType::Identity(K.rows(), K.cols());
Eigen::LLT<MatrixKType> llt(K);
alpha = llt.solve(y);
L = llt.matrixL();
trained = true;
}
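        /*
         * Note: this is standard GP regression (Rasmussen & Williams, Ch. 2).
         * With the Cholesky factorization K + noise*I = L L^T,
         * alpha = (K + noise*I)^{-1} y, so predict() below returns the
         * predictive mean Ks^T alpha and variance kss - v^T v with v = L^{-1} Ks.
         */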
/*
* @brief Predict with Gaussian Process
* @param xs input vector (3M, row major)
* @param m predicted mean vector (M)
* @param var predicted variance vector (M)
*/
void predict(const std::vector<T> &xs, std::vector<T> &m, std::vector<T> &var) const {
assert(xs.size() % dim == 0);
MatrixXType _xs = Eigen::Map<const MatrixXType>(xs.data(), xs.size() / dim, dim);
MatrixYType _m, _var;
predict(_xs, _m, _var);
m.resize(_m.rows());
var.resize(_var.rows());
for (int r = 0; r < _m.rows(); ++r) {
m[r] = _m(r, 0);
var[r] = _var(r, 0);
}
}
/*
* @brief Predict with Gaussian Process
* @param xs input vector (MX3)
* @param m predicted mean matrix (MX1)
* @param var predicted variance matrix (MX1)
*/
void predict(const MatrixXType &xs, MatrixYType &m, MatrixYType &var) const {
assert(trained == true);
MatrixKType Ks;
covMaterniso3(x, xs, Ks);
// covSparse(x, xs, Ks);
m = Ks.transpose() * alpha;
MatrixKType v = L.template triangularView<Eigen::Lower>().solve(Ks);
MatrixDKType Kss;
covMaterniso3(xs, xs, Kss);
// covSparse(xs, xs, Kss);
var = Kss - (v.transpose() * v).diagonal();
}
private:
/*
         * @brief Compute Euclidean distances between two sets of points.
         * @param x input vector
         * @param z input vector
* @return d distance matrix
*/
void dist(const MatrixXType &x, const MatrixXType &z, MatrixKType &d) const {
d = MatrixKType::Zero(x.rows(), z.rows());
for (int i = 0; i < x.rows(); ++i) {
d.row(i) = (z.rowwise() - x.row(i)).rowwise().norm();
}
}
/*
* @brief Matern3 kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
*/
void covMaterniso3(const MatrixXType &x, const MatrixXType &z, MatrixKType &Kxz) const {
dist(1.73205 / ell * x, 1.73205 / ell * z, Kxz);
Kxz = ((1 + Kxz.array()) * exp(-Kxz.array())).matrix() * sf2;
}
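        // Note (added for clarity): written out, covMaterniso3() evaluates the
        // Matern nu = 3/2 kernel
        //     k(r) = sf2 * (1 + sqrt(3) * r / ell) * exp(-sqrt(3) * r / ell),
        // where 1.73205 ~= sqrt(3) and dist() supplies r = |x - z| already
        // scaled by sqrt(3) / ell.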
/*
* @brief Diagonal of Matern3 kernel.
* @return Kxz sf2 * I
*/
void covMaterniso3(const MatrixXType &x, const MatrixXType &z, MatrixDKType &Kxz) const {
Kxz = MatrixDKType::Ones(x.rows()) * sf2;
}
/*
* @brief Sparse kernel.
* @param x input vector
* @param z input vector
* @return Kxz covariance matrix
         * @ref A sparse covariance function for exact Gaussian process inference in large datasets.
*/
void covSparse(const MatrixXType &x, const MatrixXType &z, MatrixKType &Kxz) const {
dist(x / ell, z / ell, Kxz);
Kxz = ((2 + (Kxz * 2 * 3.1415926).array().cos()) * (1.0 - Kxz.array()) / 3.0 +
(Kxz * 2 * 3.1415926).array().sin() / (2 * 3.1415926)).matrix() * sf2;
for (int i = 0; i < Kxz.rows(); ++i) {
for (int j = 0; j < Kxz.cols(); ++j) {
if (Kxz(i,j) < 0.0) {
Kxz(i,j) = 0.0f;
}
}
}
}
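        // Note (added for clarity): covSparse() evaluates the compactly
        // supported kernel
        //     k(r) = sf2 * [ (2 + cos(2*pi*r/ell)) * (1 - r/ell) / 3
        //                    + sin(2*pi*r/ell) / (2*pi) ]   for r < ell,
        // and 0 otherwise; the clamp loop above enforces the compact support
        // by zeroing the (slightly negative) values that appear for r > ell.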
/*
* @brief Diagonal of sparse kernel.
* @return Kxz sf2 * I
*/
void covSparse(const MatrixXType &x, const MatrixXType &z, MatrixDKType &Kxz) const {
Kxz = MatrixDKType::Ones(x.rows()) * sf2;
}
T sf2; // signal variance
T ell; // length-scale
T noise; // noise variance
MatrixXType x; // temporary storage of training data
MatrixKType K;
MatrixYType alpha;
MatrixKType L;
bool trained; // true if gpregressor has been trained
};
typedef GPRegressor<3, float> GPR3f;
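    // Example usage (illustrative sketch, not part of the library):
    //     la3dm::GPR3f gpr(1.0f, 1.0f, 0.01f);           // sf2, ell, noise
    //     std::vector<float> x = {0, 0, 0,  1, 0, 0};    // two 3-D points, row major
    //     std::vector<float> y = {1.0f, -1.0f};          // training targets
    //     gpr.train(x, y);
    //     std::vector<float> xs = {0.5f, 0.0f, 0.0f}, m, var;
    //     gpr.predict(xs, m, var);                       // m[0], var[0]: GP mean and variance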
}
#endif // LA3DM_GP_REGRESSOR_H

la3dm-master/src/bgkloctomap/bgklblock.cpp

#include "bgklblock.h"
#include <queue>
#include <algorithm>
namespace la3dm {
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth) {
std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
std::queue<point3f> center_q;
center_q.push(point3f(0.0f, 0.0f, 0.0f));
for (unsigned short depth = 0; depth < max_depth; ++depth) {
unsigned short q_size = (unsigned short) center_q.size();
float half_size = (float) (resolution * pow(2, max_depth - depth - 1) * 0.5f);
for (unsigned short index = 0; index < q_size; ++index) {
point3f center = center_q.front();
center_q.pop();
key_loc_map.emplace(node_to_hash_key(depth, index), center);
if (depth == max_depth - 1)
continue;
for (unsigned short i = 0; i < 8; ++i) {
float x = (float) (center.x() + half_size * (i & 4 ? 0.5 : -0.5));
float y = (float) (center.y() + half_size * (i & 2 ? 0.5 : -0.5));
float z = (float) (center.z() + half_size * (i & 1 ? 0.5 : -0.5));
center_q.emplace(x, y, z);
}
}
}
return key_loc_map;
}
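    // Note (added for clarity): in the loop above the three low bits of the
    // child index i select the octant -- (i & 4) picks the x side, (i & 2)
    // the y side and (i & 1) the z side -- so the eight children are laid
    // out in a fixed ordering around the parent centre.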
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(
const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map, unsigned short max_depth) {
std::vector<std::pair<OcTreeHashKey, point3f>> temp;
for (auto it = key_loc_map.begin(); it != key_loc_map.end(); ++it) {
unsigned short depth, index;
hash_key_to_node(it->first, depth, index);
if (depth == max_depth - 1)
temp.push_back(std::make_pair(it->first, it->second));
}
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.x() < p2.second.x();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.y() < p2.second.y();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.z() < p2.second.z();
});
std::unordered_map<unsigned short, OcTreeHashKey> index_map;
int index = 0;
for (auto it = temp.cbegin(); it != temp.cend(); ++it, ++index) {
index_map.insert(std::make_pair(index, it->first));
}
return index_map;
    }
BlockHashKey block_to_hash_key(point3f center) {
return block_to_hash_key(center.x(), center.y(), center.z());
}
BlockHashKey block_to_hash_key(float x, float y, float z) {
return (int64_t(x / (double) Block::size + 524288.5) << 40) |
(int64_t(y / (double) Block::size + 524288.5) << 20) |
(int64_t(z / (double) Block::size + 524288.5));
}
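    // Note (added for clarity): each axis index is offset by 2^19 = 524288 and
    // packed into 20 bits, so x occupies bits 40-59, y bits 20-39 and z bits
    // 0-19 of the 64-bit key; hash_key_to_block() below inverts this by
    // shifting and masking with 0xFFFFF.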
point3f hash_key_to_block(BlockHashKey key) {
return point3f(((key >> 40) - 524288) * Block::size,
(((key >> 20) & 0xFFFFF) - 524288) * Block::size,
((key & 0xFFFFF) - 524288) * Block::size);
}
ExtendedBlock get_extended_block(BlockHashKey key) {
ExtendedBlock blocks;
point3f center = hash_key_to_block(key);
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = key;
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
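    // Note (added for clarity): an ExtendedBlock is the block itself plus its
    // six face neighbours (+x, -x, +y, -y, +z, -z), which is why the loop
    // above fills indices 1..6 of the array.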
float Block::resolution = 0.1f;
float Block::size = 0.8f;
unsigned short Block::cell_num = static_cast<unsigned short>(round(Block::size / Block::resolution));
std::unordered_map<OcTreeHashKey, point3f> Block::key_loc_map;
std::unordered_map<unsigned short, OcTreeHashKey> Block::index_map;
Block::Block() : OcTree(), center(0.0f, 0.0f, 0.0f) { }
Block::Block(point3f center) : OcTree(), center(center) { }
ExtendedBlock Block::get_extended_block() const {
ExtendedBlock blocks;
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = block_to_hash_key(x, y, z);
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
OcTreeHashKey Block::get_node(unsigned short x, unsigned short y, unsigned short z) const {
unsigned short index = x + y * Block::cell_num + z * Block::cell_num * Block::cell_num;
return Block::index_map[index];
}
point3f Block::get_point(unsigned short x, unsigned short y, unsigned short z) const {
return Block::key_loc_map[get_node(x, y, z)] + center;
}
void Block::get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const {
int xx = static_cast<int>((p.x() - center.x()) / resolution + Block::cell_num / 2);
int yy = static_cast<int>((p.y() - center.y()) / resolution + Block::cell_num / 2);
int zz = static_cast<int>((p.z() - center.z()) / resolution + Block::cell_num / 2);
auto clip = [](int a) -> int { return std::max(0, std::min(a, Block::cell_num - 1)); };
x = static_cast<unsigned short>(clip(xx));
y = static_cast<unsigned short>(clip(yy));
z = static_cast<unsigned short>(clip(zz));
}
OcTreeNode& Block::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
OcTreeNode& Block::search(point3f p) const {
unsigned short x, y, z;
get_index(p, x, y, z);
return operator[](get_node(x, y, z));
}
}

la3dm-master/src/bgkloctomap/bgkloctomap.cpp

#include <algorithm>
#include <ros/ros.h>
#include <pcl/filters/voxel_grid.h>
#include "bgkloctomap.h"
#include "bgklinference.h"
using std::vector;
// #define DEBUG true;
#ifdef DEBUG
#include <iostream>
#define Debug_Msg(msg) {\
std::cout << "Debug: " << msg << std::endl; }
#endif
namespace la3dm {
BGKLOctoMap::BGKLOctoMap() : BGKLOctoMap(0.1f, // resolution
4, // block_depth
1.0, // sf2
1.0, // ell
0.3f, // free_thresh
0.7f, // occupied_thresh
1.0f, // var_thresh
1.0f, // prior_A
1.0f // prior_B
) { }
BGKLOctoMap::BGKLOctoMap(float resolution,
unsigned short block_depth,
float sf2,
float ell,
float free_thresh,
float occupied_thresh,
float var_thresh,
float prior_A,
float prior_B)
: resolution(resolution), block_depth(block_depth),
block_size((float) pow(2, block_depth - 1) * resolution) {
Block::resolution = resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
Block::index_map = init_index_map(Block::key_loc_map, block_depth);
OcTree::max_depth = block_depth;
OcTreeNode::sf2 = sf2;
OcTreeNode::ell = ell;
OcTreeNode::free_thresh = free_thresh;
OcTreeNode::occupied_thresh = occupied_thresh;
OcTreeNode::var_thresh = var_thresh;
OcTreeNode::prior_A = prior_A;
OcTreeNode::prior_B = prior_B;
}
BGKLOctoMap::~BGKLOctoMap() {
for (auto it = block_arr.begin(); it != block_arr.end(); ++it) {
if (it->second != nullptr) {
delete it->second;
}
}
}
void BGKLOctoMap::set_resolution(float resolution) {
this->resolution = resolution;
Block::resolution = resolution;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void BGKLOctoMap::set_block_depth(unsigned short max_depth) {
this->block_depth = max_depth;
OcTree::max_depth = max_depth;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void BGKLOctoMap::insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res, float max_range) {
#ifdef DEBUG
Debug_Msg("Insert pointcloud: " << "cloud size: " << cloud.size() << " origin: " << origin);
#endif
////////// Preparation //////////////////////////
/////////////////////////////////////////////////
GPLineCloud xy;
GPLineCloud rays;
vector<int> ray_idx;
// const int ray_size = rays.size();
// std::array<int, ray_size> ray_keys;
get_training_data(cloud, origin, ds_resolution, free_res, max_range, xy, rays, ray_idx);
// vector<int> ray_keys(rays.size(), 0);
assert (ray_idx.size() == xy.size());
// std::cout << "N rays: " << rays.size() << std::endl;
// std::cout << "vec size: " << ray_keys.size() << std::endl;
#ifdef DEBUG
Debug_Msg("Training data size: " << xy.size());
#endif
point3f lim_min, lim_max;
bbox(xy, lim_min, lim_max);
vector<BlockHashKey> blocks;
get_blocks_in_bbox(lim_min, lim_max, blocks);
// std::unordered_map<BlockHashKey, GPLineCloud> key_train_data_map;
for (int k = 0; k < xy.size(); ++k) {
float p[] = {xy[k].first.x0(), xy[k].first.y0(), xy[k].first.z0()};
rtree.Insert(p, p, k);
}
/////////////////////////////////////////////////
////////// Training /////////////////////////////
/////////////////////////////////////////////////
vector<BlockHashKey> test_blocks;
std::unordered_map<BlockHashKey, BGKL3f *> bgkl_arr;
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < blocks.size(); ++i) {
BlockHashKey key = blocks[i];
ExtendedBlock eblock = get_extended_block(key);
if (has_gp_points_in_bbox(eblock))
#ifdef OPENMP
#pragma omp critical
#endif
{
test_blocks.push_back(key);
};
// GPLineCloud block_xy;
vector<int> xy_idx;
get_gp_points_in_bbox(key, xy_idx);
if (xy_idx.size() < 1)
continue;
vector<int> ray_keys(rays.size(), 0);
vector<float> block_x, block_y;
for (int j = 0; j < xy_idx.size(); ++j) {
#ifdef OPENMP
#pragma omp critical
#endif
{
if (ray_idx[xy_idx[j]] == -1) {
block_x.push_back(xy[xy_idx[j]].first.x0());
block_x.push_back(xy[xy_idx[j]].first.y0());
block_x.push_back(xy[xy_idx[j]].first.z0());
block_x.push_back(xy[xy_idx[j]].first.x0());
block_x.push_back(xy[xy_idx[j]].first.y0());
block_x.push_back(xy[xy_idx[j]].first.z0());
block_y.push_back(1.0f);
}
else if (ray_keys[ray_idx[xy_idx[j]]] == 0) {
ray_keys[ray_idx[xy_idx[j]]] = 1;
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.x0());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.y0());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.z0());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.x1());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.y1());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.z1());
block_y.push_back(0.0f);
}
}
};
// std::cout << "number of training blocks" << block_y.size() << std::endl;
BGKL3f *bgkl = new BGKL3f(OcTreeNode::sf2, OcTreeNode::ell);
bgkl->train(block_x, block_y);
#ifdef OPENMP
#pragma omp critical
#endif
{
bgkl_arr.emplace(key, bgkl);
};
}
#ifdef DEBUG
Debug_Msg("Training done");
Debug_Msg("Prediction: block number: " << test_blocks.size());
#endif
/////////////////////////////////////////////////
////////// Prediction ///////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
#ifdef OPENMP
#pragma omp critical
#endif
{
if (block_arr.find(key) == block_arr.end())
block_arr.emplace(key, new Block(hash_key_to_block(key)));
};
Block *block = block_arr[key];
vector<float> xs;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it) {
point3f p = block->get_loc(leaf_it);
xs.push_back(p.x());
xs.push_back(p.y());
xs.push_back(p.z());
}
ExtendedBlock eblock = block->get_extended_block();
for (auto block_it = eblock.cbegin(); block_it != eblock.cend(); ++block_it) {
auto bgkl = bgkl_arr.find(*block_it);
if (bgkl == bgkl_arr.end())
continue;
vector<float> ybar, kbar;
bgkl->second->predict(xs, ybar, kbar);
int j = 0;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it, ++j) {
OcTreeNode &node = leaf_it.get_node();
auto node_loc = block->get_loc(leaf_it);
// if (node_loc.x() == 7.45 && node_loc.y() == 10.15 && node_loc.z() == 1.15) {
// std::cout << "updating the node " << ybar[j] << " " << kbar[j] << std::endl;
// }
// Only need to update if kernel density total kernel density est > 0
// TODO param out change threshold?
if (kbar[j] > 0.001f)
node.update(ybar[j], kbar[j]);
}
}
}
#ifdef DEBUG
Debug_Msg("Prediction done");
#endif
/////////////////////////////////////////////////
////////// Pruning //////////////////////////////
///////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
auto block = block_arr.find(key);
if (block == block_arr.end())
continue;
block->second->prune();
}
#ifdef DEBUG
Debug_Msg("Pruning done");
#endif
/////////////////////////////////////////////////
////////// Cleaning /////////////////////////////
/////////////////////////////////////////////////
for (auto it = bgkl_arr.begin(); it != bgkl_arr.end(); ++it) {
delete it->second;
}
// ray_keys.clear();
rtree.RemoveAll();
}
void BGKLOctoMap::get_bbox(point3f &lim_min, point3f &lim_max) const {
lim_min = point3f(0, 0, 0);
lim_max = point3f(0, 0, 0);
GPLineCloud centers;
for (auto it = block_arr.cbegin(); it != block_arr.cend(); ++it) {
centers.emplace_back(point6f(it->second->get_center()), 1);
}
if (centers.size() > 0) {
bbox(centers, lim_min, lim_max);
lim_min -= point3f(block_size, block_size, block_size) * 0.5;
lim_max += point3f(block_size, block_size, block_size) * 0.5;
}
}
void BGKLOctoMap::get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPLineCloud &xy, GPLineCloud &rays, vector<int> &ray_idx) const {
PCLPointCloud sampled_hits;
downsample(cloud, sampled_hits, ds_resolution);
std::cout << "Sampled points: " << sampled_hits.size() << std::endl;
PCLPointCloud frees;
frees.height = 1;
frees.width = 0;
rays.clear();
ray_idx.clear();
xy.clear();
int idx = 0;
for (auto it = sampled_hits.begin(); it != sampled_hits.end(); ++it) {
point3f p(it->x, it->y, it->z);
if (max_range > 0) {
double l = (p - origin).norm();
if (l > max_range)
continue;
}
// point6f p6f(p);
// xy.emplace_back(p6f, 1.0f);
// ray_idx.push_back(-1);
float l = (float) sqrt((p.x() - origin.x()) * (p.x() - origin.x()) + (p.y() - origin.y()) * (p.y() - origin.y()) + (p.z() - origin.z()) * (p.z() - origin.z()));
float nx = (p.x() - origin.x()) / l;
float ny = (p.y() - origin.y()) / l;
float nz = (p.z() - origin.z()) / l;
point3f occ_endpt(origin.x() + nx * l, origin.y() + ny * l, origin.z() + nz * l);
xy.emplace_back(point6f(occ_endpt), 1.0f);
ray_idx.push_back(-1);
// point3f free_endpt(origin.x() + nx * (l - free_resolution), origin.y() + ny * (l - free_resolution), origin.z() + nz * (l - 0.1f));
// point6f line6f(origin, free_endpt);
// rays.emplace_back(line6f, 0.0f);
PointCloud frees_n;
beam_sample(occ_endpt, origin, frees_n, free_resolution);
frees.push_back(PCLPointType(origin.x(), origin.y(), origin.z()));
xy.emplace_back(point6f(origin.x(), origin.y(), origin.z()), 0.0f);
ray_idx.push_back(idx);
for (auto p = frees_n.begin(); p != frees_n.end(); ++p) {
xy.emplace_back(point6f(p->x(), p->y(), p->z()), 0.0f);
ray_idx.push_back(idx);
}
l = l - free_resolution;
point3f free_endpt(origin.x() + nx * l, origin.y() + ny * l, origin.z() + nz * l);
point6f line6f(origin, free_endpt);
rays.emplace_back(line6f, 0.0f);
frees.clear();
++idx;
}
}
void BGKLOctoMap::downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const {
if (ds_resolution < 0) {
out = in;
return;
}
PCLPointCloud::Ptr pcl_in(new PCLPointCloud(in));
pcl::VoxelGrid<PCLPointType> sor;
sor.setInputCloud(pcl_in);
sor.setLeafSize(ds_resolution, ds_resolution, ds_resolution);
sor.filter(out);
}
void BGKLOctoMap::beam_sample(const point3f &hit, const point3f &origin, PointCloud &frees,
float free_resolution) const {
frees.clear();
float x0 = origin.x();
float y0 = origin.y();
float z0 = origin.z();
float x = hit.x();
float y = hit.y();
float z = hit.z();
float l = (float) sqrt((x - x0) * (x - x0) + (y - y0) * (y - y0) + (z - z0) * (z - z0));
float nx = (x - x0) / l;
float ny = (y - y0) / l;
float nz = (z - z0) / l;
float d = l - free_resolution;
while (d > 0.0) {
frees.emplace_back(x0 + nx * d, y0 + ny * d, z0 + nz * d);
d -= free_resolution;
}
}
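    // Example (added for clarity): with origin (0,0,0), hit (1,0,0) and
    // free_resolution = 0.3, the sampled free points are (0.7,0,0), (0.4,0,0)
    // and (0.1,0,0) -- the ray is walked back from just short of the hit
    // toward the origin in fixed steps.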
void BGKLOctoMap::bbox(const GPLineCloud &cloud, point3f &lim_min, point3f &lim_max) const {
vector<float> x, y, z;
for (auto it = cloud.cbegin(); it != cloud.cend(); ++it) {
x.push_back(it->first.x0());
x.push_back(it->first.x1());
y.push_back(it->first.y0());
y.push_back(it->first.y1());
z.push_back(it->first.z0());
z.push_back(it->first.z1());
}
auto xlim = std::minmax_element(x.cbegin(), x.cend());
auto ylim = std::minmax_element(y.cbegin(), y.cend());
auto zlim = std::minmax_element(z.cbegin(), z.cend());
lim_min.x() = *xlim.first;
lim_min.y() = *ylim.first;
lim_min.z() = *zlim.first;
lim_max.x() = *xlim.second;
lim_max.y() = *ylim.second;
lim_max.z() = *zlim.second;
}
void BGKLOctoMap::get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
vector<BlockHashKey> &blocks) const {
for (float x = lim_min.x() - block_size; x <= lim_max.x() + 2 * block_size; x += block_size) {
for (float y = lim_min.y() - block_size; y <= lim_max.y() + 2 * block_size; y += block_size) {
for (float z = lim_min.z() - block_size; z <= lim_max.z() + 2 * block_size; z += block_size) {
blocks.push_back(block_to_hash_key(x, y, z));
}
}
}
}
int BGKLOctoMap::get_gp_points_in_bbox(const BlockHashKey &key,
vector<int> &out) {
        point3f half_size(block_size / 2.0f, block_size / 2.0f, block_size / 2.0f);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return get_gp_points_in_bbox(lim_min, lim_max, out);
}
int BGKLOctoMap::has_gp_points_in_bbox(const BlockHashKey &key) {
        point3f half_size(block_size / 2.0f, block_size / 2.0f, block_size / 2.0f);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return has_gp_points_in_bbox(lim_min, lim_max);
}
int BGKLOctoMap::get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
vector<int> &out) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, BGKLOctoMap::search_callback, static_cast<void *>(&out));
}
int BGKLOctoMap::has_gp_points_in_bbox(const point3f &lim_min,
const point3f &lim_max) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, BGKLOctoMap::count_callback, NULL);
}
bool BGKLOctoMap::count_callback(int k, void *arg) {
return false;
}
bool BGKLOctoMap::search_callback(int k, void *arg) {
// GPLineCloud *out = static_cast<GPLineCloud *>(arg);
vector<int> *out = static_cast<vector<int> *>(arg);
out->push_back(k);
return true;
}
int BGKLOctoMap::has_gp_points_in_bbox(const ExtendedBlock &block) {
for (auto it = block.cbegin(); it != block.cend(); ++it) {
if (has_gp_points_in_bbox(*it) > 0)
return 1;
}
return 0;
}
int BGKLOctoMap::get_gp_points_in_bbox(const ExtendedBlock &block,
vector<int> &out) {
int n = 0;
for (auto it = block.cbegin(); it != block.cend(); ++it) {
n += get_gp_points_in_bbox(*it, out);
}
return n;
}
Block *BGKLOctoMap::search(BlockHashKey key) const {
auto block = block_arr.find(key);
if (block == block_arr.end()) {
return nullptr;
} else {
return block->second;
}
}
OcTreeNode BGKLOctoMap::search(point3f p) const {
Block *block = search(block_to_hash_key(p));
if (block == nullptr) {
return OcTreeNode();
} else {
return OcTreeNode(block->search(p));
}
}
OcTreeNode BGKLOctoMap::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
}

la3dm-master/src/bgkloctomap/bgkloctomap_server.cpp

#include <string>
#include <iostream>
#include <ros/ros.h>
#include <pcl_ros/transforms.h>
#include <pcl/filters/voxel_grid.h>
#include "markerarray_pub.h"
#include "bgkloctomap.h"
tf::TransformListener *listener;
std::string frame_id("/map");
la3dm::BGKLOctoMap *map;
la3dm::MarkerArrayPub *m_pub_occ, *m_pub_free;
//startup parameters
tf::Vector3 last_position;
tf::Quaternion last_orientation;
bool first = true;
double position_change_thresh = 0.1;
double orientation_change_thresh = 0.2;
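// Note (added for clarity): a new scan is only fused when the sensor pose has
// changed enough -- more than position_change_thresh (metres) in translation or
// orientation_change_thresh (radians, shortest-path angle) in rotation -- which
// keeps near-duplicate scans from being processed; see the gate in cloudHandler().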
bool updated = false;
//Universal parameters
std::string map_topic_occ("/occupied_cells_vis_array");
std::string map_topic_free("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 0.1;
double ell = 0.2;
double free_resolution = 0.65;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = true;
//BGKL parameters
float var_thresh = 1.0f;
float prior_A = 1.0f;
float prior_B = 1.0f;
void cloudHandler(const sensor_msgs::PointCloud2ConstPtr &cloud) {
tf::StampedTransform transform;
try {
listener->waitForTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, ros::Duration(5.0));
listener->lookupTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, transform); //ros::Time::now() -- Don't use this because processing time delay breaks it
} catch (tf::TransformException ex) {
ROS_ERROR("%s", ex.what());
return;
}
ros::Time start = ros::Time::now();
la3dm::point3f origin;
tf::Vector3 translation = transform.getOrigin();
tf::Quaternion orientation = transform.getRotation();
if (first || orientation.angleShortestPath(last_orientation) > orientation_change_thresh || translation.distance(last_position) > position_change_thresh)
{
ROS_INFO_STREAM("Cloud received");
last_position = translation;
last_orientation = orientation;
origin.x() = (float) translation.x();
origin.y() = (float) translation.y();
origin.z() = (float) translation.z();
sensor_msgs::PointCloud2 cloud_map;
pcl_ros::transformPointCloud(frame_id, *cloud, cloud_map, *listener);
//pointer required for downsampling
la3dm::PCLPointCloud::Ptr pcl_cloud (new la3dm::PCLPointCloud());
pcl::fromROSMsg(cloud_map, *pcl_cloud);
//downsample for faster mapping
la3dm::PCLPointCloud filtered_cloud;
pcl::VoxelGrid<pcl::PointXYZ> filterer;
filterer.setInputCloud(pcl_cloud);
filterer.setLeafSize(ds_resolution, ds_resolution, ds_resolution);
filterer.filter(filtered_cloud);
if(filtered_cloud.size() > 5){
map->insert_pointcloud(filtered_cloud, origin, (float) resolution, (float) free_resolution, (float) max_range);
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("One cloud finished in " << (end - start).toSec() << "s");
updated = true;
}
if (updated)
{
ros::Time start2 = ros::Time::now();
m_pub_occ->clear();
m_pub_free->clear();
for (auto it = map->begin_leaf(); it != map->end_leaf(); ++it) {
la3dm::point3f p = it.get_loc();
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size)
{
m_pub_occ->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_occ->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution());
}
}
}
else if(it.get_node().get_state() == la3dm::State::FREE)
{
if (original_size)
{
m_pub_free->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_free->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution(), it.get_node().get_prob());
}
}
}
}
updated = false;
m_pub_occ->publish();
m_pub_free->publish();
ros::Time end2 = ros::Time::now();
ROS_INFO_STREAM("One map published in " << (end2 - start2).toSec() << "s");
}
}
int main(int argc, char **argv) {
ros::init(argc, argv, "bgkloctomap_server");
ros::NodeHandle nh("~");
//incoming pointcloud topic, this could be put into the .yaml too
std::string cloud_topic("/velodyne_points");
//Universal parameters
nh.param<std::string>("topic", map_topic_occ, map_topic_occ);
nh.param<std::string>("topic_free", map_topic_free, map_topic_free);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
    //BGKL parameters
nh.param<float>("var_thresh", var_thresh, var_thresh);
nh.param<float>("prior_A", prior_A, prior_A);
nh.param<float>("prior_B", prior_B, prior_B);
ROS_INFO_STREAM("Parameters:" << std::endl <<
"topic: " << map_topic_occ << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size << std::endl <<
"var_thresh: " << var_thresh << std::endl <<
"prior_A: " << prior_A << std::endl <<
"prior_B: " << prior_B
);
map = new la3dm::BGKLOctoMap(resolution, block_depth, sf2, ell, free_thresh, occupied_thresh, var_thresh, prior_A, prior_B);
ros::Subscriber point_sub = nh.subscribe<sensor_msgs::PointCloud2>(cloud_topic, 1, cloudHandler);
m_pub_occ = new la3dm::MarkerArrayPub(nh, map_topic_occ, resolution);
m_pub_free = new la3dm::MarkerArrayPub(nh, map_topic_free, resolution);
listener = new tf::TransformListener();
while(ros::ok())
{
ros::spin();
}
return 0;
}

la3dm-master/src/bgkloctomap/bgkloctomap_static_node.cpp

#include <string>
#include <iostream>
#include <ros/ros.h>
#include "bgkloctomap.h"
#include "markerarray_pub.h"
void load_pcd(std::string filename, la3dm::point3f &origin, la3dm::PCLPointCloud &cloud) {
pcl::PCLPointCloud2 cloud2;
Eigen::Vector4f _origin;
Eigen::Quaternionf orientaion;
pcl::io::loadPCDFile(filename, cloud2, _origin, orientaion);
pcl::fromPCLPointCloud2(cloud2, cloud);
origin.x() = _origin[0];
origin.y() = _origin[1];
origin.z() = _origin[2];
}
int main(int argc, char **argv) {
ros::init(argc, argv, "bgkloctomap_static_node");
ros::NodeHandle nh("~");
std::string dir;
std::string prefix;
int scan_num = 0;
std::string map_topic("/occupied_cells_vis_array");
std::string map_topic2("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 1.0;
double ell = 1.0;
double free_resolution = 0.5;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = false;
float var_thresh = 1.0f;
float prior_A = 1.0f;
float prior_B = 1.0f;
nh.param<std::string>("dir", dir, dir);
nh.param<std::string>("prefix", prefix, prefix);
nh.param<std::string>("topic", map_topic, map_topic);
nh.param<std::string>("topic2", map_topic2, map_topic2);
nh.param<int>("scan_num", scan_num, scan_num);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
nh.param<float>("var_thresh", var_thresh, var_thresh);
nh.param<float>("prior_A", prior_A, prior_A);
nh.param<float>("prior_B", prior_B, prior_B);
ROS_INFO_STREAM("Parameters:" << std::endl <<
"dir: " << dir << std::endl <<
"prefix: " << prefix << std::endl <<
"topic: " << map_topic << std::endl <<
"scan_sum: " << scan_num << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size << std::endl <<
"var_thresh: " << var_thresh << std::endl <<
"prior_A: " << prior_A << std::endl <<
"prior_B: " << prior_B
);
la3dm::BGKLOctoMap map(resolution, block_depth, sf2, ell, free_thresh, occupied_thresh, var_thresh, prior_A, prior_B);
ros::Time start = ros::Time::now();
for (int scan_id = 1; scan_id <= scan_num; ++scan_id) {
la3dm::PCLPointCloud cloud;
la3dm::point3f origin;
std::string filename(dir + "/" + prefix + "_" + std::to_string(scan_id) + ".pcd");
load_pcd(filename, origin, cloud);
map.insert_pointcloud(cloud, origin, resolution, free_resolution, max_range);
ROS_INFO_STREAM("Scan " << scan_id << " done");
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("Mapping finished in " << (end - start).toSec() << "s");
///////// Compute Frontiers /////////////////////
// ROS_INFO_STREAM("Computing frontiers");
// la3dm::MarkerArrayPub f_pub(nh, "frontier_map", resolution);
// for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it) {
// la3dm::point3f p = it.get_loc();
// if (p.z() > 1.0 || p.z() < 0.3)
// continue;
// if (it.get_node().get_var() > 0.02 &&
// it.get_node().get_prob() < 0.3) {
// f_pub.insert_point3d(p.x(), p.y(), p.z());
// }
// }
// f_pub.publish();
//////// Test Raytracing //////////////////
// la3dm::MarkerArrayPub ray_pub(nh, "/ray", resolution);
// la3dm::BGKLOctoMap::RayCaster ray(&map, la3dm::point3f(1, 1, 0.3), la3dm::point3f(6, 7, 8));
// while (!ray.end()) {
// la3dm::point3f p;
// la3dm::OcTreeNode node;
// la3dm::BlockHashKey block_key;
// la3dm::OcTreeHashKey node_key;
// if (ray.next(p, node, block_key, node_key)) {
// ray_pub.insert_point3d(p.x(), p.y(), p.z());
// }
// }
// ray_pub.publish();
///////// Publish Map /////////////////////
la3dm::MarkerArrayPub m_pub(nh, map_topic, resolution);
la3dm::MarkerArrayPub m_pub2(nh, map_topic2, resolution);
if (min_z == max_z) {
la3dm::point3f lim_min, lim_max;
map.get_bbox(lim_min, lim_max);
min_z = lim_min.z();
max_z = lim_max.z();
}
for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it){
la3dm::point3f p = it.get_loc();
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size) {
m_pub.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
} else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution());
}
}
else if (it.get_node().get_state() == la3dm::State::FREE) {
if (original_size) {
m_pub2.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
} else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub2.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution(), it.get_node().get_prob());
}
}
}
m_pub.publish();
m_pub2.publish();
ros::spin();
return 0;
}

la3dm-master/src/bgkloctomap/bgkloctree.cpp

#include "bgkloctree.h"
#include <cmath>
namespace la3dm {
unsigned short OcTree::max_depth = 0;
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned short index) {
return (depth << 16) + index;
}
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned short &index) {
depth = (unsigned short) (key >> 16);
index = (unsigned short) (key & 0xFFFF);
}
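    // Note (added for clarity): the key packs the depth into the upper 16 bits
    // and the in-layer index into the lower 16 bits, e.g.
    // node_to_hash_key(2, 5) == (2 << 16) + 5, which hash_key_to_node()
    // inverts by shifting and masking with 0xFFFF.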
OcTree::OcTree() {
if (max_depth <= 0)
node_arr = nullptr;
else {
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
node_arr[i] = new OcTreeNode[(int) pow(8, i)]();
}
}
}
OcTree::~OcTree() {
if (node_arr != nullptr) {
for (unsigned short i = 0; i < max_depth; ++i) {
if (node_arr[i] != nullptr) {
delete[] node_arr[i];
}
}
delete[] node_arr;
}
}
OcTree::OcTree(const OcTree &other) {
if (other.node_arr == nullptr) {
node_arr = nullptr;
return;
}
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
if (other.node_arr[i] != nullptr) {
int n = (int) pow(8, i);
node_arr[i] = new OcTreeNode[n]();
                // copy the other tree's layer into the freshly allocated one
                std::copy(other.node_arr[i], other.node_arr[i] + n, node_arr[i]);
} else
node_arr[i] = nullptr;
}
}
    OcTree &OcTree::operator=(const OcTree &other) {
        if (this == &other)
            return *this;
        // release the layers currently held by this tree so they are not leaked
        if (node_arr != nullptr) {
            for (unsigned short i = 0; i < max_depth; ++i)
                delete[] node_arr[i];
            delete[] node_arr;
        }
        if (other.node_arr == nullptr) {
            node_arr = nullptr;
            return *this;
        }
        node_arr = new OcTreeNode *[max_depth]();
        for (unsigned short i = 0; i < max_depth; ++i) {
            if (other.node_arr[i] != nullptr) {
                int n = (int) pow(8, i);
                node_arr[i] = new OcTreeNode[n]();
                // copy the other tree's layer into the freshly allocated one
                std::copy(other.node_arr[i], other.node_arr[i] + n, node_arr[i]);
            } else
                node_arr[i] = nullptr;
        }
        return *this;
    }
bool OcTree::is_leaf(unsigned short depth, unsigned short index) const {
if (node_arr != nullptr && node_arr[depth] != nullptr && node_arr[depth][index].get_state() != State::PRUNED) {
if (depth + 1 < max_depth) {
if (node_arr[depth + 1] == nullptr || node_arr[depth + 1][index * 8].get_state() == State::PRUNED)
return true;
} else {
return true;
}
}
return false;
}
bool OcTree::is_leaf(OcTreeHashKey key) const {
unsigned short depth = 0;
unsigned short index = 0;
hash_key_to_node(key, depth, index);
return is_leaf(depth, index);
}
bool OcTree::search(OcTreeHashKey key) const {
unsigned short depth;
unsigned short index;
hash_key_to_node(key, depth, index);
return node_arr != nullptr &&
node_arr[depth] != nullptr &&
node_arr[depth][index].get_state() != State::PRUNED;
}
bool OcTree::prune() {
if (node_arr == nullptr)
return false;
bool pruned = false;
for (unsigned short depth = max_depth - 1; depth > 0; --depth) {
OcTreeNode *layer = node_arr[depth];
OcTreeNode *parent_layer = node_arr[depth - 1];
if (layer == nullptr)
continue;
bool empty_layer = true;
unsigned int n = (unsigned int) pow(8, depth);
for (unsigned short index = 0; index < n; index += 8) {
State state = layer[index].get_state();
if (state == State::UNKNOWN) {
empty_layer = false;
continue;
}
if (state == State::PRUNED)
continue;
bool collapsible = true;
for (unsigned short i = 1; i < 8; ++i) {
                    if (layer[index + i].get_state() != state) {
                        collapsible = false;
                        break;
                    }
}
if (collapsible) {
parent_layer[(int) floor(index / 8)] = layer[index];
for (unsigned short i = 0; i < 8; ++i) {
layer[index + i].prune();
}
pruned = true;
} else {
empty_layer = false;
}
}
if (empty_layer) {
delete[] layer;
node_arr[depth] = nullptr;
}
}
return pruned;
}
OcTreeNode &OcTree::operator[](OcTreeHashKey key) const {
unsigned short depth;
unsigned short index;
hash_key_to_node(key, depth, index);
return node_arr[depth][index];
}
}

la3dm-master/src/bgkloctomap/bgkloctree_node.cpp

#include "bgkloctree_node.h"
#include <cmath>
namespace la3dm {
/// Default static values
float Occupancy::sf2 = 1.0f;
float Occupancy::ell = 1.0f;
float Occupancy::free_thresh = 0.3f;
float Occupancy::occupied_thresh = 0.7f;
float Occupancy::var_thresh = 1000.0f;
float Occupancy::prior_A = 0.5f;
float Occupancy::prior_B = 0.5f;
Occupancy::Occupancy(float A, float B) : m_A(Occupancy::prior_A + A), m_B(Occupancy::prior_B + B) {
classified = false;
float var = get_var();
if (var > Occupancy::var_thresh)
state = State::UNKNOWN;
else {
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
float Occupancy::get_prob() const {
return m_A / (m_A + m_B);
}
void Occupancy::update(float ybar, float kbar) {
classified = true;
m_A += ybar;
m_B += kbar - ybar;
float var = get_var();
if (var > Occupancy::var_thresh)
state = State::UNKNOWN;
else {
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
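    // Illustrative note (not in the original source): (m_A, m_B) act as the
    // parameters of a Beta posterior over occupancy, so get_prob() = m_A / (m_A + m_B)
    // is its mean and get_var() (defined in the header) is presumably the Beta
    // variance m_A * m_B / ((m_A + m_B)^2 * (m_A + m_B + 1)). update() simply
    // adds the kernel-weighted hit mass ybar to m_A and the remaining mass
    // kbar - ybar to m_B.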
std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc) {
os.write((char *) &oc.m_A, sizeof(oc.m_A));
os.write((char *) &oc.m_B, sizeof(oc.m_B));
return os;
}
std::ifstream &operator>>(std::ifstream &is, Occupancy &oc) {
float m_A, m_B;
is.read((char *) &m_A, sizeof(m_A));
is.read((char *) &m_B, sizeof(m_B));
oc = OcTreeNode(m_A, m_B);
return is;
}
std::ostream &operator<<(std::ostream &os, const Occupancy &oc) {
return os << '(' << oc.m_A << ' ' << oc.m_B << ' ' << oc.get_prob() << ')';
}
}

la3dm-master/src/bgklvoctomap/bgklvblock.cpp

#include "bgklvblock.h"
#include <queue>
#include <algorithm>
namespace la3dm {
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth) {
std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
std::queue<point3f> center_q;
center_q.push(point3f(0.0f, 0.0f, 0.0f));
for (unsigned short depth = 0; depth < max_depth; ++depth) {
unsigned short q_size = (unsigned short) center_q.size();
float half_size = (float) (resolution * pow(2, max_depth - depth - 1) * 0.5f);
for (unsigned long index = 0; index < q_size; ++index) {
point3f center = center_q.front();
center_q.pop();
key_loc_map.emplace(node_to_hash_key(depth, index), center);
if (depth == max_depth - 1)
continue;
for (unsigned short i = 0; i < 8; ++i) {
float x = (float) (center.x() + half_size * (i & 4 ? 0.5 : -0.5));
float y = (float) (center.y() + half_size * (i & 2 ? 0.5 : -0.5));
float z = (float) (center.z() + half_size * (i & 1 ? 0.5 : -0.5));
center_q.emplace(x, y, z);
}
}
}
return key_loc_map;
}
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(
const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map, unsigned short max_depth) {
std::vector<std::pair<OcTreeHashKey, point3f>> temp;
for (auto it = key_loc_map.begin(); it != key_loc_map.end(); ++it) {
unsigned short depth;
unsigned long index;
hash_key_to_node(it->first, depth, index);
if (depth == max_depth - 1)
temp.push_back(std::make_pair(it->first, it->second));
}
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.x() < p2.second.x();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.y() < p2.second.y();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.z() < p2.second.z();
});
std::unordered_map<unsigned short, OcTreeHashKey> index_map;
int index = 0;
for (auto it = temp.cbegin(); it != temp.cend(); ++it, ++index) {
index_map.insert(std::make_pair(index, it->first));
}
return index_map;
    }
BlockHashKey block_to_hash_key(point3f center) {
return block_to_hash_key(center.x(), center.y(), center.z());
}
BlockHashKey block_to_hash_key(float x, float y, float z) {
return (int64_t(x / (double) Block::size + 524288.5) << 40) |
(int64_t(y / (double) Block::size + 524288.5) << 20) |
(int64_t(z / (double) Block::size + 524288.5));
}
point3f hash_key_to_block(BlockHashKey key) {
return point3f(((key >> 40) - 524288) * Block::size,
(((key >> 20) & 0xFFFFF) - 524288) * Block::size,
((key & 0xFFFFF) - 524288) * Block::size);
}
ExtendedBlock get_extended_block(BlockHashKey key) {
ExtendedBlock blocks;
point3f center = hash_key_to_block(key);
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = key;
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
float Block::resolution = 0.1f;
float Block::size = 0.8f;
unsigned short Block::cell_num = static_cast<unsigned short>(round(Block::size / Block::resolution));
std::unordered_map<OcTreeHashKey, point3f> Block::key_loc_map;
std::unordered_map<unsigned short, OcTreeHashKey> Block::index_map;
Block::Block() : OcTree(), center(0.0f, 0.0f, 0.0f) { }
Block::Block(point3f center) : OcTree(), center(center) { }
ExtendedBlock Block::get_extended_block() const {
ExtendedBlock blocks;
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = block_to_hash_key(x, y, z);
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
OcTreeHashKey Block::get_node(unsigned short x, unsigned short y, unsigned short z) const {
unsigned short index = x + y * Block::cell_num + z * Block::cell_num * Block::cell_num;
return Block::index_map[index];
}
point3f Block::get_point(unsigned short x, unsigned short y, unsigned short z) const {
return Block::key_loc_map[get_node(x, y, z)] + center;
}
void Block::get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const {
int xx = static_cast<int>((p.x() - center.x()) / resolution + Block::cell_num / 2);
int yy = static_cast<int>((p.y() - center.y()) / resolution + Block::cell_num / 2);
int zz = static_cast<int>((p.z() - center.z()) / resolution + Block::cell_num / 2);
auto clip = [](int a) -> int { return std::max(0, std::min(a, Block::cell_num - 1)); };
x = static_cast<unsigned short>(clip(xx));
y = static_cast<unsigned short>(clip(yy));
z = static_cast<unsigned short>(clip(zz));
}
OcTreeNode& Block::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
OcTreeNode& Block::search(point3f p) const {
unsigned short x, y, z;
get_index(p, x, y, z);
return operator[](get_node(x, y, z));
}
}

la3dm-master/src/bgklvoctomap/bgklvoctomap.cpp

#include <algorithm>
#include <pcl/filters/voxel_grid.h>
#include "bgklvoctomap.h"
#include "bgklvinference.h"
#include <iostream>
using std::vector;
//#define DEBUG true;
#ifdef DEBUG
#include <iostream>
#define Debug_Msg(msg) {\
std::cout << "Debug: " << msg << std::endl; }
#endif
namespace la3dm {
BGKLVOctoMap::BGKLVOctoMap() : BGKLVOctoMap(0.1f, // resolution
4, // block_depth
1.0, // sf2
1.0, // ell
0.3f, // free_thresh
0.7f, // occupied_thresh
1.0f, // var_thresh
1.0f, // prior_A
1.0f, // prior_B
true, //original_size
0.1f // min_W
) { }
BGKLVOctoMap::BGKLVOctoMap(float resolution,
unsigned short block_depth,
float sf2,
float ell,
float free_thresh,
float occupied_thresh,
float var_thresh,
float prior_A,
float prior_B,
bool original_size,
float min_W)
: resolution(resolution), block_depth(block_depth),
block_size((float) pow (2, block_depth - 1) * resolution) {
Block::resolution = resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
Block::index_map = init_index_map(Block::key_loc_map, block_depth);
OcTree::max_depth = block_depth;
OcTreeNode::sf2 = sf2;
OcTreeNode::ell = ell;
OcTreeNode::free_thresh = free_thresh;
OcTreeNode::occupied_thresh = occupied_thresh;
OcTreeNode::var_thresh = var_thresh;
OcTreeNode::prior_A = prior_A;
OcTreeNode::prior_B = prior_B;
OcTreeNode::original_size = original_size;
OcTreeNode::min_W = min_W;
}
BGKLVOctoMap::~BGKLVOctoMap() {
for (auto it = block_arr.begin(); it != block_arr.end(); ++it) {
if (it->second != nullptr) {
delete it->second;
}
}
}
void BGKLVOctoMap::set_resolution(float resolution) {
this->resolution = resolution;
Block::resolution = resolution;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void BGKLVOctoMap::set_block_depth(unsigned short max_depth) {
this->block_depth = max_depth;
OcTree::max_depth = max_depth;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void BGKLVOctoMap::insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res, float max_range) {
#ifdef DEBUG
Debug_Msg("Insert pointcloud: " << "cloud size: " << cloud.size() << " origin: " << origin);
#endif
////////// Preparation //////////////////////////
/////////////////////////////////////////////////
GPLineCloud xy;
GPLineCloud rays;
vector<int> ray_idx;
if(ds_resolution > resolution){
ds_resolution = resolution;
}
get_training_data(cloud, origin, ds_resolution, free_res, max_range, xy, rays, ray_idx);
assert (ray_idx.size() == xy.size());
#ifdef DEBUG
Debug_Msg("Training data size: " << xy.size());
#endif
point3f lim_min, lim_max;
bbox(xy, lim_min, lim_max);
//define all blocks from point cloud input
vector<BlockHashKey> blocks;
get_blocks_in_bbox(lim_min, lim_max, blocks);
//insert training data into rtree
for (int k = 0; k < xy.size(); ++k) {
float p[] = {xy[k].first.x0(), xy[k].first.y0(), xy[k].first.z0()};
rtree.Insert(p, p, k);
}
/////////////////////////////////////////////////
////////// Training & Prediction ////////////////
/////////////////////////////////////////////////
//define set of blocks that will be predicted
vector<BlockHashKey> test_blocks;
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
//create key for each block from point cloud, begin loop to process each block
for (int i = 0; i < blocks.size(); ++i) {
BlockHashKey key = blocks[i];
#ifdef OPENMP
#pragma omp critical
#endif
{
//run in parallel, add block to block_arr if it doesn't already exist (block_arr is maintained)
if (block_arr.find(key) == block_arr.end())
block_arr.emplace(key, new Block(hash_key_to_block(key)));
};
Block *block = block_arr[key];
bool Block_has_info = false;
//half the region of influence
point3f half_size(OcTreeNode::ell, OcTreeNode::ell, OcTreeNode::ell);
//process each node within a block
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it) {
float block_size = block->get_size(leaf_it);
//skip larger blocks than base resolution
if (block_size > Block::resolution)
continue;
point3f p = block->get_loc(leaf_it);
point3f lim_min = p - half_size;
point3f lim_max = p + half_size;
if(!has_gp_points_in_bbox(lim_min,lim_max))
continue;
//find data for each node individually, xy_idx is raw data
vector<int> xy_idx;
get_gp_points_in_bbox(lim_min, lim_max, xy_idx);
if(xy_idx.size() < 1)
continue;
vector<int> ray_keys(rays.size(), 0); //rays.size number of 0's
vector<float> block_x, block_y;
#ifdef OPENMP
#pragma omp critical
#endif
{
//run in parallel, define data that exists in node influence as hits or rays
for (int j = 0; j < xy_idx.size(); ++j) {
if (ray_idx[xy_idx[j]] == -1) {
block_x.push_back(xy[xy_idx[j]].first.x0());
block_x.push_back(xy[xy_idx[j]].first.y0());
block_x.push_back(xy[xy_idx[j]].first.z0());
block_x.push_back(xy[xy_idx[j]].first.x0());
block_x.push_back(xy[xy_idx[j]].first.y0());
block_x.push_back(xy[xy_idx[j]].first.z0());
block_y.push_back(1.0f);
}
else if (ray_keys[ray_idx[xy_idx[j]]] == 0) {
ray_keys[ray_idx[xy_idx[j]]] = 1;
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.x0());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.y0());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.z0());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.x1());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.y1());
block_x.push_back(rays[ray_idx[xy_idx[j]]].first.z1());
block_y.push_back(0.0f);
}
}
};
//push training data into node (legacy code, used to push into block)
BGKLV3f *bgklv = new BGKLV3f(OcTreeNode::sf2, OcTreeNode::ell);
bgklv->train(block_x, block_y);
#ifdef DEBUG
Debug_Msg("Training done");
Debug_Msg("Prediction: block number: " << bgklv_arr.size());
#endif
/////////////////////////////////////////////////
////////// Prediction ///////////////////////////
/////////////////////////////////////////////////
vector<float> xs;
vector<float> ybar, kbar;
//p was defined as the current node earlier
xs.push_back(p.x());
xs.push_back(p.y());
xs.push_back(p.z());
//use training data in bgklv to predict ybar and kbar at xs
bgklv->predict(xs, ybar, kbar);
//update active node with predictions
OcTreeNode &node = leaf_it.get_node();
if (kbar[0] > 0.001f){
node.update(ybar[0], kbar[0]);
}
                // release the per-node inference object now that the prediction is done
                delete bgklv;
                Block_has_info = true;
#ifdef DEBUG
Debug_Msg("Prediction done");
#endif
}
#ifdef OPENMP
#pragma omp critical
#endif
{
//run in parallel, after block iteration ends check whether to add to list of blocks that were updated
if(Block_has_info){
test_blocks.push_back(key);
}
};
}
/////////////////////////////////////////////////
////////// Pruning //////////////////////////////
///////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for
#endif
//only use updated blocks
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
auto block = block_arr.find(key);
if (block == block_arr.end())
continue;
if (OcTreeNode::original_size)
block->second->prune();
}
#ifdef DEBUG
Debug_Msg("Pruning done");
#endif
/////////////////////////////////////////////////
////////// Cleaning /////////////////////////////
/////////////////////////////////////////////////
//only need to remove raw data from the rtree
rtree.RemoveAll();
}
void BGKLVOctoMap::get_bbox(point3f &lim_min, point3f &lim_max) const {
lim_min = point3f(0, 0, 0);
lim_max = point3f(0, 0, 0);
GPLineCloud centers;
for (auto it = block_arr.cbegin(); it != block_arr.cend(); ++it) {
centers.emplace_back(point6f(it->second->get_center()), 1);
}
if (centers.size() > 0) {
bbox(centers, lim_min, lim_max);
lim_min -= point3f(block_size, block_size, block_size) * 0.5;
lim_max += point3f(block_size, block_size, block_size) * 0.5;
}
}
//method to build training dataset from raw pointcloud data
void BGKLVOctoMap::get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPLineCloud &xy, GPLineCloud &rays, vector<int> &ray_idx) const {
//downsample all incoming data
PCLPointCloud sampled_hits;
downsample(cloud, sampled_hits, ds_resolution);
rays.clear();
ray_idx.clear();
xy.clear();
int idx = 0;
double offset = OcTreeNode::ell*pow(2,0.5);
double influence = OcTreeNode::ell;
for (auto it = sampled_hits.begin(); it != sampled_hits.end(); ++it) {
point3f p(it->x, it->y, it->z);
double l = (p - origin).norm();
float nx = (p.x() - origin.x()) / l;
float ny = (p.y() - origin.y()) / l;
float nz = (p.z() - origin.z()) / l;
//filter out points too far away, but keep rays up to max_range
if (max_range > 0) {
if (l < max_range){
l = (float) sqrt((p.x() - origin.x()) * (p.x() - origin.x()) + (p.y() - origin.y()) * (p.y() - origin.y()) + (p.z() - origin.z()) * (p.z() - origin.z()));
l = l-offset;
xy.emplace_back(point6f(p), 1.0f);
ray_idx.push_back(-1);
}
else{
// continue; //use continue to skip rays up to max_range
l = max_range-offset;
}
}
point3f nearest_point = p;
point3f free_endpt(origin.x() + nx * l, origin.y() + ny * l, origin.z() + nz * l);
//find points "near" the ray
PointCloud nearby_points;
for (auto iter = sampled_hits.begin(); iter != sampled_hits.end(); ++iter) {
point3f p0(iter->x, iter->y, iter->z);
//filter out points too far away
if (max_range > 0) {
double range = (p0 - origin).norm();
if (range > max_range)
continue;
}
//include free space near the floor (by removing floor points from nearby, currently using x-y plane as floor)
if(p.z() > (offset+origin.z()) && p0.z() < origin.z()+influence){
continue;
}
double dist1 = (free_endpt-p0).norm();
double dist2 = (origin-p0).norm();
//check if endpt is within influence of ray
if(dist1 < influence){
nearby_points.emplace_back(p0);
}
else if(dist1 < l && dist2 < l){
nearby_points.emplace_back(p0);
}
}
//search through nearby points to reduce ray as necessary
point3f line_vec = free_endpt-origin;
for (auto p1 = nearby_points.begin(); p1 != nearby_points.end(); p1++){
double dist;
point3f pnt_vec = *p1 - origin;
double b = pnt_vec.dot(line_vec);
if(b > pow(l,2)){
continue;
}
else{
point3f nearest = origin + line_vec*(b/pow(line_vec.norm(),2));
dist = (*p1-nearest).norm();
}
if(dist < influence){
nearest_point = *p1;
l = b/line_vec.norm();
}
}
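            // Note (added for clarity): the loop above projects each nearby point
            // onto the ray (nearest = origin + line_vec * b / |line_vec|^2 with
            // b = pnt_vec . line_vec) and, whenever the perpendicular distance
            // falls below the influence radius, shortens the free ray so that it
            // ends at that projection (l = b / |line_vec|).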
//remove downward rays close to sensor
if(l < max_range/5.0 && l/(offset-nearest_point.z()) > 0){
continue;
}
free_endpt = point3f(origin.x() + nx * l, origin.y() + ny * l, origin.z() + nz * l);
point3f free_origin = origin;
//move free ray origin away from robot
double mu = 1.0;
if(l > influence*mu){
free_origin = point3f(origin.x() + nx * influence*mu, origin.y() + ny * influence*mu, origin.z() + nz * influence*mu);
}
else{
free_origin = free_endpt;
}
PointCloud frees;
beam_sample(free_endpt, free_origin, frees, free_resolution);
xy.emplace_back(point6f(free_origin.x(), free_origin.y(), free_origin.z()), 0.0f);
ray_idx.push_back(idx);
            //placeholder points along the ray, used to check if a ray is near a cell -> yes means use this ray
for (auto p = frees.begin(); p != frees.end(); ++p) {
xy.emplace_back(point6f(p->x(), p->y(), p->z()), 0.0f);
ray_idx.push_back(idx);
}
point6f line6f(free_origin, free_endpt);
rays.emplace_back(line6f, 0.0f);
++idx;
}
}
void BGKLVOctoMap::downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const {
if (ds_resolution < 0) {
out = in;
return;
}
PCLPointCloud::Ptr pcl_in(new PCLPointCloud(in));
pcl::VoxelGrid<PCLPointType> sor;
sor.setInputCloud(pcl_in);
sor.setLeafSize(ds_resolution, ds_resolution, ds_resolution);
sor.filter(out);
}
void BGKLVOctoMap::beam_sample(const point3f &hit, const point3f &origin, PointCloud &frees,
float free_resolution) const {
frees.clear();
float x0 = origin.x();
float y0 = origin.y();
float z0 = origin.z();
float x = hit.x();
float y = hit.y();
float z = hit.z();
float l = (float) sqrt((x - x0) * (x - x0) + (y - y0) * (y - y0) + (z - z0) * (z - z0));
float nx = (x - x0) / l;
float ny = (y - y0) / l;
float nz = (z - z0) / l;
float d = l;
while (d > 0.0) {
frees.emplace_back(x0 + nx * d, y0 + ny * d, z0 + nz * d);
d -= free_resolution;
}
}
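    // Worked example (illustrative): with origin (0,0,0), hit (0,0,1) and
    // free_resolution 0.3, the loop above starts at the endpoint and marches
    // toward the origin, yielding free samples at z = 1.0, 0.7, 0.4, 0.1;
    // the endpoint itself is therefore always sampled.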
void BGKLVOctoMap::bbox(const GPLineCloud &cloud, point3f &lim_min, point3f &lim_max) const {
vector<float> x, y, z;
for (auto it = cloud.cbegin(); it != cloud.cend(); ++it) {
x.push_back(it->first.x0());
x.push_back(it->first.x1());
y.push_back(it->first.y0());
y.push_back(it->first.y1());
z.push_back(it->first.z0());
z.push_back(it->first.z1());
}
auto xlim = std::minmax_element(x.cbegin(), x.cend());
auto ylim = std::minmax_element(y.cbegin(), y.cend());
auto zlim = std::minmax_element(z.cbegin(), z.cend());
lim_min.x() = *xlim.first;
lim_min.y() = *ylim.first;
lim_min.z() = *zlim.first;
lim_max.x() = *xlim.second;
lim_max.y() = *ylim.second;
lim_max.z() = *zlim.second;
}
void BGKLVOctoMap::get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
vector<BlockHashKey> &blocks) const {
for (float x = lim_min.x() - block_size; x <= lim_max.x() + 2 * block_size; x += block_size) {
for (float y = lim_min.y() - block_size; y <= lim_max.y() + 2 * block_size; y += block_size) {
for (float z = lim_min.z() - block_size; z <= lim_max.z() + 2 * block_size; z += block_size) {
blocks.push_back(block_to_hash_key(x, y, z));
}
}
}
}
int BGKLVOctoMap::get_gp_points_in_bbox(const BlockHashKey &key,
vector<int> &out) {
point3f half_size(OcTreeNode::ell, OcTreeNode::ell, OcTreeNode::ell);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return get_gp_points_in_bbox(lim_min, lim_max, out);
}
int BGKLVOctoMap::has_gp_points_in_bbox(const BlockHashKey &key) {
point3f half_size(OcTreeNode::ell, OcTreeNode::ell, OcTreeNode::ell);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return has_gp_points_in_bbox(lim_min, lim_max);
}
int BGKLVOctoMap::get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
vector<int> &out) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, BGKLVOctoMap::search_callback, static_cast<void *>(&out));
}
int BGKLVOctoMap::has_gp_points_in_bbox(const point3f &lim_min,
const point3f &lim_max) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, BGKLVOctoMap::count_callback, NULL);
}
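    // R-tree callback contract (descriptive note): returning false aborts the
    // query early, so count_callback makes Search() stop at the first hit,
    // while search_callback returns true to keep collecting matching indices.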
bool BGKLVOctoMap::count_callback(int k, void *arg) {
return false;
}
bool BGKLVOctoMap::search_callback(int k, void *arg) {
vector<int> *out = static_cast<vector<int> *>(arg);
out->push_back(k);
return true;
}
int BGKLVOctoMap::has_gp_points_in_bbox(const ExtendedBlock &block) {
for (auto it = block.cbegin(); it != block.cend(); ++it) {
if (has_gp_points_in_bbox(*it) > 0)
return 1;
}
return 0;
}
int BGKLVOctoMap::get_gp_points_in_bbox(const ExtendedBlock &block,
vector<int> &out) {
int n = 0;
for (auto it = block.cbegin(); it != block.cend(); ++it) {
n += get_gp_points_in_bbox(*it, out);
}
return n;
}
Block *BGKLVOctoMap::search(BlockHashKey key) const {
auto block = block_arr.find(key);
if (block == block_arr.end()) {
return nullptr;
} else {
return block->second;
}
}
OcTreeNode BGKLVOctoMap::search(point3f p) const {
Block *block = search(block_to_hash_key(p));
if (block == nullptr) {
return OcTreeNode();
} else {
return OcTreeNode(block->search(p));
}
}
OcTreeNode BGKLVOctoMap::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
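    // Usage sketch (hypothetical values, for illustration only):
    //
    //   la3dm::BGKLVOctoMap map(0.1f, 4, 1.0f, 0.2f, 0.3f, 0.7f,
    //                           1.0f, 1.0f, 1.0f, true, 0.1f);
    //   map.insert_pointcloud(cloud, origin, 0.1f, 0.3f, 15.0f);
    //   la3dm::OcTreeNode node = map.search(1.0f, 2.0f, 0.5f);
    //   if (node.get_state() == la3dm::State::OCCUPIED) { /* react */ }
    //
    // The numeric parameters are placeholders; real values come from the
    // launch/config files.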
}
la3dm | la3dm-master/src/bgklvoctomap/bgklvoctomap_server.cpp
#include <string>
#include <iostream>
#include <ros/ros.h>
#include <pcl_ros/transforms.h>
#include <pcl/filters/voxel_grid.h>
#include "markerarray_pub.h"
#include "bgklvoctomap.h"
tf::TransformListener *listener;
std::string frame_id("/map");
la3dm::BGKLVOctoMap *map;
la3dm::MarkerArrayPub *m_pub_occ, *m_pub_free;
tf::Vector3 last_position;
tf::Quaternion last_orientation;
bool first = true;
double position_change_thresh = 0.1;
double orientation_change_thresh = 0.2;
bool updated = false;
//Universal parameters
std::string map_topic_occ("/occupied_cells_vis_array");
std::string map_topic_free("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 0.1;
double ell = 0.2;
double free_resolution = 0.1;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = true;
//BGKLV parameters
float var_thresh = 1.0f;
float prior_A = 1.0f;
float prior_B = 1.0f;
float min_W = 0.1f;
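// cloudHandler integrates a scan only on the first callback or when the
// sensor pose has changed by more than the position/orientation thresholds
// above, which throttles map updates while the robot is stationary.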
void cloudHandler(const sensor_msgs::PointCloud2ConstPtr &cloud) {
tf::StampedTransform transform;
try {
listener->waitForTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, ros::Duration(5.0));
listener->lookupTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, transform); //ros::Time::now() -- Don't use this because processing time delay breaks it
    } catch (tf::TransformException &ex) {
ROS_ERROR("%s", ex.what());
return;
}
ros::Time start = ros::Time::now();
la3dm::point3f origin;
tf::Vector3 translation = transform.getOrigin();
tf::Quaternion orientation = transform.getRotation();
if (first || orientation.angleShortestPath(last_orientation) > orientation_change_thresh || translation.distance(last_position) > position_change_thresh)
{
ROS_INFO_STREAM("Cloud received");
last_position = translation;
last_orientation = orientation;
origin.x() = (float) translation.x();
origin.y() = (float) translation.y();
origin.z() = (float) translation.z();
sensor_msgs::PointCloud2 cloud_map;
pcl_ros::transformPointCloud(frame_id, *cloud, cloud_map, *listener);
la3dm::PCLPointCloud::Ptr pcl_cloud (new la3dm::PCLPointCloud());
pcl::fromROSMsg(cloud_map, *pcl_cloud);
if(pcl_cloud->size() > 5){
map->insert_pointcloud(*pcl_cloud, origin, (float) ds_resolution, (float) free_resolution, (float) max_range);
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("One cloud finished in " << (end - start).toSec() << "s");
updated = true;
}
if (updated)
{
ros::Time start2 = ros::Time::now();
m_pub_occ->clear();
m_pub_free->clear();
for (auto it = map->begin_leaf(); it != map->end_leaf(); ++it) {
la3dm::point3f p = it.get_loc();
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size)
{
m_pub_occ->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_occ->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution());
}
}
}
else if(it.get_node().get_state() == la3dm::State::FREE)
{
if (original_size)
{
m_pub_free->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_free->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution(), it.get_node().get_prob());
}
}
}
}
updated = false;
m_pub_free->publish();
m_pub_occ->publish();
ros::Time end2 = ros::Time::now();
ROS_INFO_STREAM("One map published in " << (end2 - start2).toSec() << "s");
}
}
int main(int argc, char **argv) {
ros::init(argc, argv, "bgklvoctomap_server");
ros::NodeHandle nh("~");
//incoming pointcloud topic, this could be put into the .yaml too
std::string cloud_topic("/velodyne_points");
//Universal parameters
nh.param<std::string>("topic", map_topic_occ, map_topic_occ);
nh.param<std::string>("topic_free", map_topic_free, map_topic_free);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
//BKGLV parameters
nh.param<float>("var_thresh", var_thresh, var_thresh);
nh.param<float>("prior_A", prior_A, prior_A);
nh.param<float>("prior_B", prior_B, prior_B);
nh.param<float>("min_W", min_W, min_W);
ROS_INFO_STREAM("Parameters:" << std::endl <<
"topic: " << map_topic_occ << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size << std::endl <<
"var_thresh: " << var_thresh << std::endl <<
"prior_A: " << prior_A << std::endl <<
"prior_B: " << prior_B << std::endl <<
"min_W: " << min_W
);
map = new la3dm::BGKLVOctoMap(resolution, block_depth, sf2, ell, free_thresh, occupied_thresh, var_thresh, prior_A, prior_B, original_size, min_W);
ros::Subscriber point_sub = nh.subscribe<sensor_msgs::PointCloud2>(cloud_topic, 1, cloudHandler);
m_pub_occ = new la3dm::MarkerArrayPub(nh, map_topic_occ, resolution);
m_pub_free = new la3dm::MarkerArrayPub(nh, map_topic_free, resolution);
listener = new tf::TransformListener();
    ros::spin(); // blocks until shutdown, so no surrounding loop is needed
return 0;
}
la3dm | la3dm-master/src/bgklvoctomap/bgklvoctomap_static_node.cpp
#include <string>
#include <iostream>
#include <ros/ros.h>
#include "bgklvoctomap.h"
#include "markerarray_pub.h"
void load_pcd(std::string filename, la3dm::point3f &origin, la3dm::PCLPointCloud &cloud) {
pcl::PCLPointCloud2 cloud2;
Eigen::Vector4f _origin;
    Eigen::Quaternionf orientation;
    pcl::io::loadPCDFile(filename, cloud2, _origin, orientation);
pcl::fromPCLPointCloud2(cloud2, cloud);
ROS_INFO_STREAM("Pointcloud is of size: " << cloud.size());
origin.x() = _origin[0];
origin.y() = _origin[1];
origin.z() = _origin[2];
}
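// Note: loadPCDFile fills _origin/orientation from the PCD VIEWPOINT header,
// so each scan file is expected to carry the sensor pose it was captured from.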
int main(int argc, char **argv) {
ros::init(argc, argv, "bgklvoctomap_static_node");
ros::NodeHandle nh("~");
std::string dir;
std::string prefix;
int scan_num = 0;
std::string map_topic("/occupied_cells_vis_array");
std::string map_topic2("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 1.0;
double ell = 1.0;
double free_resolution = 0.5;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = false;
float var_thresh = 1.0f;
float prior_A = 1.0f;
float prior_B = 1.0f;
float min_W = 0.1f;
nh.param<std::string>("dir", dir, dir);
nh.param<std::string>("prefix", prefix, prefix);
nh.param<std::string>("topic", map_topic, map_topic);
nh.param<std::string>("topic2", map_topic2, map_topic2);
nh.param<int>("scan_num", scan_num, scan_num);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
nh.param<float>("var_thresh", var_thresh, var_thresh);
nh.param<float>("prior_A", prior_A, prior_A);
nh.param<float>("prior_B", prior_B, prior_B);
nh.param<float>("min_W", min_W, min_W);
ROS_INFO_STREAM("Parameters:" << std::endl <<
"dir: " << dir << std::endl <<
"prefix: " << prefix << std::endl <<
"topic: " << map_topic << std::endl <<
"scan_sum: " << scan_num << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size << std::endl <<
"var_thresh: " << var_thresh << std::endl <<
"prior_A: " << prior_A << std::endl <<
"prior_B: " << prior_B << std::endl <<
"min_W: " << min_W
);
la3dm::BGKLVOctoMap map(resolution, block_depth, sf2, ell, free_thresh, occupied_thresh, var_thresh, prior_A, prior_B, original_size, min_W);
ros::Time start = ros::Time::now();
for (int scan_id = 1; scan_id <= scan_num; ++scan_id) {
la3dm::PCLPointCloud cloud;
la3dm::point3f origin;
std::string filename(dir + "/" + prefix + "_" + std::to_string(scan_id) + ".pcd");
load_pcd(filename, origin, cloud);
ROS_INFO_STREAM("Scan " << scan_id << " loaded");
map.insert_pointcloud(cloud, origin, resolution, free_resolution, max_range);
ROS_INFO_STREAM("Scan " << scan_id << " done");
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("Mapping finished in " << (end - start).toSec() << "s");
///////// Publish Map /////////////////////
la3dm::MarkerArrayPub m_pub(nh, map_topic, resolution);
la3dm::MarkerArrayPub m_pub2(nh, map_topic2, resolution);
if (min_z == max_z) {
la3dm::point3f lim_min, lim_max;
map.get_bbox(lim_min, lim_max);
min_z = lim_min.z();
max_z = lim_max.z();
}
for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it) {
la3dm::point3f p = it.get_loc();
if(p.z() > 2.0)
continue;
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size) {
m_pub.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
}
else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution());
}
}
else if (it.get_node().get_state() == la3dm::State::FREE) {
if (original_size) {
m_pub2.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
}
else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub2.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution(), it.get_node().get_prob());
}
}
}
m_pub.publish();
m_pub2.publish();
ros::spin();
return 0;
}
la3dm | la3dm-master/src/bgklvoctomap/bgklvoctree.cpp
#include "bgklvoctree.h"
#include <cmath>
namespace la3dm {
unsigned short OcTree::max_depth = 0;
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned long index) {
return (depth << 28) + index;
}
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned long &index) {
depth = (unsigned short) (key >> 28);
index = (unsigned long) (key & 0xFFFFFFF);
}
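    // Worked example (illustrative): the key packs the depth into the top
    // 4 bits and the within-layer index into the low 28 bits, e.g. depth 2,
    // index 5 -> key = (2 << 28) + 5 = 0x20000005, which hash_key_to_node
    // splits back into (2, 5).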
OcTree::OcTree() {
if (max_depth <= 0)
node_arr = nullptr;
else {
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
node_arr[i] = new OcTreeNode[(int) pow(8, i)]();
}
}
}
OcTree::~OcTree() {
if (node_arr != nullptr) {
for (unsigned short i = 0; i < max_depth; ++i) {
if (node_arr[i] != nullptr) {
delete[] node_arr[i];
}
}
delete[] node_arr;
}
}
OcTree::OcTree(const OcTree &other) {
if (other.node_arr == nullptr) {
node_arr = nullptr;
return;
}
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
if (other.node_arr[i] != nullptr) {
int n = (int) pow(8, i);
node_arr[i] = new OcTreeNode[n]();
                std::copy(other.node_arr[i], other.node_arr[i] + n, node_arr[i]);
} else
node_arr[i] = nullptr;
}
}
OcTree &OcTree::operator=(const OcTree &other) {
OcTreeNode **local_node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
if (local_node_arr[i] != nullptr) {
int n = (int) pow(8, i);
local_node_arr[i] = new OcTreeNode[n]();
std::copy(local_node_arr[i], local_node_arr[i] + n, other.node_arr[i]);
} else
local_node_arr[i] = nullptr;
}
node_arr = local_node_arr;
return *this;
}
bool OcTree::is_leaf(unsigned short depth, unsigned long index) const {
if (node_arr != nullptr && node_arr[depth] != nullptr && node_arr[depth][index].get_state() != State::PRUNED) {
if (depth + 1 < max_depth) {
if (node_arr[depth + 1] == nullptr || node_arr[depth + 1][index * 8].get_state() == State::PRUNED)
return true;
} else {
return true;
}
}
return false;
}
bool OcTree::is_leaf(OcTreeHashKey key) const {
unsigned short depth = 0;
unsigned long index = 0;
hash_key_to_node(key, depth, index);
return is_leaf(depth, index);
}
bool OcTree::search(OcTreeHashKey key) const {
unsigned short depth;
unsigned long index;
hash_key_to_node(key, depth, index);
return node_arr != nullptr &&
node_arr[depth] != nullptr &&
node_arr[depth][index].get_state() != State::PRUNED;
}
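    // Bottom-up pruning: whenever all 8 siblings hold the same classified
    // state, they collapse into their parent; a layer whose nodes are all
    // pruned is freed entirely.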
bool OcTree::prune() {
if (node_arr == nullptr)
return false;
bool pruned = false;
for (unsigned short depth = max_depth - 1; depth > 0; --depth) {
OcTreeNode *layer = node_arr[depth];
OcTreeNode *parent_layer = node_arr[depth - 1];
if (layer == nullptr)
continue;
bool empty_layer = true;
unsigned int n = (unsigned int) pow(8, depth);
for (unsigned long index = 0; index < n; index += 8) {
State state = layer[index].get_state();
if (state == State::UNKNOWN) {
empty_layer = false;
continue;
}
if (state == State::PRUNED)
continue;
bool collapsible = true;
for (unsigned short i = 1; i < 8; ++i) {
                    if (layer[index + i].get_state() != state) {
                        collapsible = false;
                        break; // one differing sibling already rules out collapsing
                    }
}
if (collapsible) {
parent_layer[(int) floor(index / 8)] = layer[index];
for (unsigned short i = 0; i < 8; ++i) {
layer[index + i].prune();
}
pruned = true;
} else {
empty_layer = false;
}
}
if (empty_layer) {
delete[] layer;
node_arr[depth] = nullptr;
}
}
return pruned;
}
OcTreeNode &OcTree::operator[](OcTreeHashKey key) const {
unsigned short depth;
unsigned long index;
hash_key_to_node(key, depth, index);
return node_arr[depth][index];
}
}
la3dm | la3dm-master/src/bgklvoctomap/bgklvoctree_node.cpp
#include "bgklvoctree_node.h"
#include <cmath>
namespace la3dm {
/// Default static values
float Occupancy::sf2 = 1.0f;
float Occupancy::ell = 1.0f;
float Occupancy::free_thresh = 0.3f;
float Occupancy::occupied_thresh = 0.7f;
float Occupancy::var_thresh = 1000.0f;
float Occupancy::prior_A = 0.5f;
float Occupancy::prior_B = 0.5f;
bool Occupancy::original_size = true;
float Occupancy::min_W = 0.1f;
Occupancy::Occupancy(float A, float B) : m_A(Occupancy::prior_A + A), m_B(Occupancy::prior_B + B) {
classified = false;
float var = get_var();
if (var > Occupancy::var_thresh)
state = State::UNCERTAIN;
else {
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
float Occupancy::get_prob() const {
float prob, W;
if (m_A + m_B < Occupancy::min_W){
W = Occupancy::min_W;
}
else{
W = m_A + m_B;
}
if(m_A > m_B){
prob = m_A / (W-m_B) + (W-m_A-m_B)*0.5 / (W-m_B);
}
else{
prob = 0.5*(W-m_B-m_A) / (W-m_A);
}
return prob;
}
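    // Worked example (illustrative): with prior-inclusive masses m_A = 0.02
    // and m_B = 0.01, the evidence W is clamped to min_W = 0.1, giving
    // prob = 0.02/(0.1-0.01) + (0.1-0.03)*0.5/(0.1-0.01) ~= 0.61; the
    // unobserved mass (W - m_A - m_B) pulls the estimate toward 0.5.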
float Occupancy::get_var() const {
float var, W;
float prob = get_prob();
if (m_A + m_B < Occupancy::min_W){
W = Occupancy::min_W;
}
else{
W = m_A + m_B;
}
var = m_A / W * pow(1-prob,2) + (W-m_A-m_B) / W * pow(0.5-prob,2) + m_B / W * pow(prob,2);
return var;
}
void Occupancy::update(float ybar, float kbar) {
classified = true;
m_A += ybar;
m_B += kbar - ybar;
float var = get_var();
if (var > Occupancy::var_thresh)
state = State::UNCERTAIN;
else {
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc) {
os.write((char *) &oc.m_A, sizeof(oc.m_A));
os.write((char *) &oc.m_B, sizeof(oc.m_B));
return os;
}
std::ifstream &operator>>(std::ifstream &is, Occupancy &oc) {
float m_A, m_B;
is.read((char *) &m_A, sizeof(m_A));
is.read((char *) &m_B, sizeof(m_B));
oc = OcTreeNode(m_A, m_B);
return is;
}
std::ostream &operator<<(std::ostream &os, const Occupancy &oc) {
return os << '(' << oc.m_A << ' ' << oc.m_B << ' ' << oc.get_prob() << ')';
}
}
la3dm | la3dm-master/src/bgkoctomap/bgkblock.cpp
#include "bgkblock.h"
#include <queue>
#include <algorithm>
namespace la3dm {
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth) {
std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
std::queue<point3f> center_q;
center_q.push(point3f(0.0f, 0.0f, 0.0f));
for (unsigned short depth = 0; depth < max_depth; ++depth) {
unsigned short q_size = (unsigned short) center_q.size();
float half_size = (float) (resolution * pow(2, max_depth - depth - 1) * 0.5f);
for (unsigned short index = 0; index < q_size; ++index) {
point3f center = center_q.front();
center_q.pop();
key_loc_map.emplace(node_to_hash_key(depth, index), center);
if (depth == max_depth - 1)
continue;
for (unsigned short i = 0; i < 8; ++i) {
float x = (float) (center.x() + half_size * (i & 4 ? 0.5 : -0.5));
float y = (float) (center.y() + half_size * (i & 2 ? 0.5 : -0.5));
float z = (float) (center.z() + half_size * (i & 1 ? 0.5 : -0.5));
center_q.emplace(x, y, z);
}
}
}
return key_loc_map;
}
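    // Descriptive note: the bit trick above assigns one octant axis per bit
    // of i (bit 2 -> x, bit 1 -> y, bit 0 -> z), so the eight children are
    // enumerated in Morton-like order within each node.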
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(
const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map, unsigned short max_depth) {
std::vector<std::pair<OcTreeHashKey, point3f>> temp;
for (auto it = key_loc_map.begin(); it != key_loc_map.end(); ++it) {
unsigned short depth, index;
hash_key_to_node(it->first, depth, index);
if (depth == max_depth - 1)
temp.push_back(std::make_pair(it->first, it->second));
}
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.x() < p2.second.x();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.y() < p2.second.y();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.z() < p2.second.z();
});
std::unordered_map<unsigned short, OcTreeHashKey> index_map;
int index = 0;
for (auto it = temp.cbegin(); it != temp.cend(); ++it, ++index) {
index_map.insert(std::make_pair(index, it->first));
}
return index_map;
};
BlockHashKey block_to_hash_key(point3f center) {
return block_to_hash_key(center.x(), center.y(), center.z());
}
BlockHashKey block_to_hash_key(float x, float y, float z) {
return (int64_t(x / (double) Block::size + 524288.5) << 40) |
(int64_t(y / (double) Block::size + 524288.5) << 20) |
(int64_t(z / (double) Block::size + 524288.5));
}
point3f hash_key_to_block(BlockHashKey key) {
return point3f(((key >> 40) - 524288) * Block::size,
(((key >> 20) & 0xFFFFF) - 524288) * Block::size,
((key & 0xFFFFF) - 524288) * Block::size);
}
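    // Worked example (illustrative): each axis is quantized to a block index
    // offset by 524288 (2^19) and packed into 20 bits; with Block::size = 0.8
    // the point (0.8, 0, -0.8) maps to indices (524289, 524288, 524287), and
    // hash_key_to_block recovers the block center (0.8, 0, -0.8) exactly.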
ExtendedBlock get_extended_block(BlockHashKey key) {
ExtendedBlock blocks;
point3f center = hash_key_to_block(key);
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = key;
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
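    // Descriptive note: an ExtendedBlock is the block itself plus its six
    // face neighbors, so training points just across a block border still
    // contribute to predictions inside the block.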
float Block::resolution = 0.1f;
float Block::size = 0.8f;
unsigned short Block::cell_num = static_cast<unsigned short>(round(Block::size / Block::resolution));
std::unordered_map<OcTreeHashKey, point3f> Block::key_loc_map;
std::unordered_map<unsigned short, OcTreeHashKey> Block::index_map;
Block::Block() : OcTree(), center(0.0f, 0.0f, 0.0f) { }
Block::Block(point3f center) : OcTree(), center(center) { }
ExtendedBlock Block::get_extended_block() const {
ExtendedBlock blocks;
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = block_to_hash_key(x, y, z);
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
OcTreeHashKey Block::get_node(unsigned short x, unsigned short y, unsigned short z) const {
unsigned short index = x + y * Block::cell_num + z * Block::cell_num * Block::cell_num;
return Block::index_map[index];
}
point3f Block::get_point(unsigned short x, unsigned short y, unsigned short z) const {
return Block::key_loc_map[get_node(x, y, z)] + center;
}
void Block::get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const {
int xx = static_cast<int>((p.x() - center.x()) / resolution + Block::cell_num / 2);
int yy = static_cast<int>((p.y() - center.y()) / resolution + Block::cell_num / 2);
int zz = static_cast<int>((p.z() - center.z()) / resolution + Block::cell_num / 2);
auto clip = [](int a) -> int { return std::max(0, std::min(a, Block::cell_num - 1)); };
x = static_cast<unsigned short>(clip(xx));
y = static_cast<unsigned short>(clip(yy));
z = static_cast<unsigned short>(clip(zz));
}
OcTreeNode& Block::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
OcTreeNode& Block::search(point3f p) const {
unsigned short x, y, z;
get_index(p, x, y, z);
return operator[](get_node(x, y, z));
}
}
la3dm | la3dm-master/src/bgkoctomap/bgkoctomap.cpp
#include <algorithm>
#include <pcl/filters/voxel_grid.h>
#include "bgkoctomap.h"
#include "bgkinference.h"
using std::vector;
// #define DEBUG true;
#ifdef DEBUG
#include <iostream>
#define Debug_Msg(msg) {\
std::cout << "Debug: " << msg << std::endl; }
#endif
namespace la3dm {
BGKOctoMap::BGKOctoMap() : BGKOctoMap(0.1f, // resolution
4, // block_depth
1.0, // sf2
1.0, // ell
0.3f, // free_thresh
0.7f, // occupied_thresh
1.0f, // var_thresh
1.0f, // prior_A
1.0f // prior_B
) { }
BGKOctoMap::BGKOctoMap(float resolution,
unsigned short block_depth,
float sf2,
float ell,
float free_thresh,
float occupied_thresh,
float var_thresh,
float prior_A,
float prior_B)
: resolution(resolution), block_depth(block_depth),
block_size((float) pow(2, block_depth - 1) * resolution) {
Block::resolution = resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
Block::index_map = init_index_map(Block::key_loc_map, block_depth);
OcTree::max_depth = block_depth;
OcTreeNode::sf2 = sf2;
OcTreeNode::ell = ell;
OcTreeNode::free_thresh = free_thresh;
OcTreeNode::occupied_thresh = occupied_thresh;
OcTreeNode::var_thresh = var_thresh;
OcTreeNode::prior_A = prior_A;
OcTreeNode::prior_B = prior_B;
}
BGKOctoMap::~BGKOctoMap() {
for (auto it = block_arr.begin(); it != block_arr.end(); ++it) {
if (it->second != nullptr) {
delete it->second;
}
}
}
void BGKOctoMap::set_resolution(float resolution) {
this->resolution = resolution;
Block::resolution = resolution;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void BGKOctoMap::set_block_depth(unsigned short max_depth) {
this->block_depth = max_depth;
OcTree::max_depth = max_depth;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void BGKOctoMap::insert_training_data(const GPPointCloud &xy) {
if (xy.empty())
return;
point3f lim_min, lim_max;
bbox(xy, lim_min, lim_max);
vector<BlockHashKey> blocks;
get_blocks_in_bbox(lim_min, lim_max, blocks);
for (auto it = xy.cbegin(); it != xy.cend(); ++it) {
float p[] = {it->first.x(), it->first.y(), it->first.z()};
rtree.Insert(p, p, const_cast<GPPointType *>(&*it));
}
/////////////////////////////////////////////////
////////// Training /////////////////////////////
/////////////////////////////////////////////////
vector<BlockHashKey> test_blocks;
std::unordered_map<BlockHashKey, BGK3f *> bgk_arr;
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < blocks.size(); ++i) {
BlockHashKey key = blocks[i];
ExtendedBlock eblock = get_extended_block(key);
if (has_gp_points_in_bbox(eblock))
#ifdef OPENMP
#pragma omp critical
#endif
{
test_blocks.push_back(key);
};
GPPointCloud block_xy;
get_gp_points_in_bbox(key, block_xy);
if (block_xy.size() < 1)
continue;
vector<float> block_x, block_y;
for (auto it = block_xy.cbegin(); it != block_xy.cend(); ++it) {
block_x.push_back(it->first.x());
block_x.push_back(it->first.y());
block_x.push_back(it->first.z());
block_y.push_back(it->second);
}
BGK3f *bgk = new BGK3f(OcTreeNode::sf2, OcTreeNode::ell);
bgk->train(block_x, block_y);
#ifdef OPENMP
#pragma omp critical
#endif
{
bgk_arr.emplace(key, bgk);
};
}
#ifdef DEBUG
Debug_Msg("Training done");
Debug_Msg("Prediction: block number: " << test_blocks.size());
#endif
/////////////////////////////////////////////////
////////// Prediction ///////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
Block *block;
#ifdef OPENMP
#pragma omp critical
#endif
{
                block = search(key);
                if (block == nullptr) {
                    block_arr.emplace(key, new Block(hash_key_to_block(key)));
                    block = block_arr[key]; // re-fetch so 'block' is never null below
                }
            };
vector<float> xs;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it) {
point3f p = block->get_loc(leaf_it);
xs.push_back(p.x());
xs.push_back(p.y());
xs.push_back(p.z());
}
ExtendedBlock eblock = block->get_extended_block();
for (auto block_it = eblock.cbegin(); block_it != eblock.cend(); ++block_it) {
auto bgk = bgk_arr.find(*block_it);
if (bgk == bgk_arr.end())
continue;
vector<float> m, var;
bgk->second->predict(xs, m, var);
int j = 0;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it, ++j) {
OcTreeNode &node = leaf_it.get_node();
node.update(m[j], var[j]);
}
}
}
#ifdef DEBUG
Debug_Msg("Prediction done");
#endif
/////////////////////////////////////////////////
////////// Pruning //////////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
auto block = block_arr.find(key);
if (block == block_arr.end())
continue;
block->second->prune();
}
#ifdef DEBUG
Debug_Msg("Pruning done");
#endif
/////////////////////////////////////////////////
////////// Cleaning /////////////////////////////
/////////////////////////////////////////////////
for (auto it = bgk_arr.begin(); it != bgk_arr.end(); ++it)
delete it->second;
rtree.RemoveAll();
}
void BGKOctoMap::insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res, float max_range) {
#ifdef DEBUG
Debug_Msg("Insert pointcloud: " << "cloud size: " << cloud.size() << " origin: " << origin);
#endif
////////// Preparation //////////////////////////
/////////////////////////////////////////////////
GPPointCloud xy;
get_training_data(cloud, origin, ds_resolution, free_res, max_range, xy);
#ifdef DEBUG
Debug_Msg("Training data size: " << xy.size());
#endif
// If pointcloud after max_range filtering is empty
// no need to do anything
if (xy.size() == 0) {
return;
}
point3f lim_min, lim_max;
bbox(xy, lim_min, lim_max);
vector<BlockHashKey> blocks;
get_blocks_in_bbox(lim_min, lim_max, blocks);
for (auto it = xy.cbegin(); it != xy.cend(); ++it) {
float p[] = {it->first.x(), it->first.y(), it->first.z()};
rtree.Insert(p, p, const_cast<GPPointType *>(&*it));
}
/////////////////////////////////////////////////
////////// Training /////////////////////////////
/////////////////////////////////////////////////
vector<BlockHashKey> test_blocks;
std::unordered_map<BlockHashKey, BGK3f *> bgk_arr;
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < blocks.size(); ++i) {
BlockHashKey key = blocks[i];
ExtendedBlock eblock = get_extended_block(key);
if (has_gp_points_in_bbox(eblock))
#ifdef OPENMP
#pragma omp critical
#endif
{
test_blocks.push_back(key);
};
GPPointCloud block_xy;
get_gp_points_in_bbox(key, block_xy);
if (block_xy.size() < 1)
continue;
vector<float> block_x, block_y;
for (auto it = block_xy.cbegin(); it != block_xy.cend(); ++it) {
block_x.push_back(it->first.x());
block_x.push_back(it->first.y());
block_x.push_back(it->first.z());
block_y.push_back(it->second);
}
BGK3f *bgk = new BGK3f(OcTreeNode::sf2, OcTreeNode::ell);
bgk->train(block_x, block_y);
#ifdef OPENMP
#pragma omp critical
#endif
{
bgk_arr.emplace(key, bgk);
};
}
#ifdef DEBUG
Debug_Msg("Training done");
Debug_Msg("Prediction: block number: " << test_blocks.size());
#endif
/////////////////////////////////////////////////
////////// Prediction ///////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
#ifdef OPENMP
#pragma omp critical
#endif
{
if (block_arr.find(key) == block_arr.end())
block_arr.emplace(key, new Block(hash_key_to_block(key)));
};
Block *block = block_arr[key];
vector<float> xs;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it) {
point3f p = block->get_loc(leaf_it);
xs.push_back(p.x());
xs.push_back(p.y());
xs.push_back(p.z());
}
ExtendedBlock eblock = block->get_extended_block();
for (auto block_it = eblock.cbegin(); block_it != eblock.cend(); ++block_it) {
auto bgk = bgk_arr.find(*block_it);
if (bgk == bgk_arr.end())
continue;
vector<float> ybar, kbar;
bgk->second->predict(xs, ybar, kbar);
int j = 0;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it, ++j) {
OcTreeNode &node = leaf_it.get_node();
// Only need to update if kernel density total kernel density est > 0
if (kbar[j] > 0.0)
node.update(ybar[j], kbar[j]);
}
}
}
#ifdef DEBUG
Debug_Msg("Prediction done");
#endif
/////////////////////////////////////////////////
////////// Pruning //////////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
auto block = block_arr.find(key);
if (block == block_arr.end())
continue;
block->second->prune();
}
#ifdef DEBUG
Debug_Msg("Pruning done");
#endif
/////////////////////////////////////////////////
////////// Cleaning /////////////////////////////
/////////////////////////////////////////////////
for (auto it = bgk_arr.begin(); it != bgk_arr.end(); ++it)
delete it->second;
rtree.RemoveAll();
}
void BGKOctoMap::get_bbox(point3f &lim_min, point3f &lim_max) const {
lim_min = point3f(0, 0, 0);
lim_max = point3f(0, 0, 0);
GPPointCloud centers;
for (auto it = block_arr.cbegin(); it != block_arr.cend(); ++it) {
centers.emplace_back(it->second->get_center(), 1);
}
if (centers.size() > 0) {
bbox(centers, lim_min, lim_max);
lim_min -= point3f(block_size, block_size, block_size) * 0.5;
lim_max += point3f(block_size, block_size, block_size) * 0.5;
}
}
void BGKOctoMap::get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPPointCloud &xy) const {
PCLPointCloud sampled_hits;
downsample(cloud, sampled_hits, ds_resolution);
PCLPointCloud frees;
frees.height = 1;
frees.width = 0;
xy.clear();
for (auto it = sampled_hits.begin(); it != sampled_hits.end(); ++it) {
point3f p(it->x, it->y, it->z);
if (max_range > 0) {
double l = (p - origin).norm();
if (l > max_range)
continue;
}
xy.emplace_back(p, 1.0f);
PointCloud frees_n;
beam_sample(p, origin, frees_n, free_resolution);
frees.push_back(PCLPointType(origin.x(), origin.y(), origin.z()));
for (auto p = frees_n.begin(); p != frees_n.end(); ++p) {
frees.push_back(PCLPointType(p->x(), p->y(), p->z()));
frees.width++;
}
}
PCLPointCloud sampled_frees;
downsample(frees, sampled_frees, ds_resolution);
for (auto it = sampled_frees.begin(); it != sampled_frees.end(); ++it) {
xy.emplace_back(point3f(it->x, it->y, it->z), 0.0f);
}
}
void BGKOctoMap::downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const {
if (ds_resolution < 0) {
out = in;
return;
}
PCLPointCloud::Ptr pcl_in(new PCLPointCloud(in));
pcl::VoxelGrid<PCLPointType> sor;
sor.setInputCloud(pcl_in);
sor.setLeafSize(ds_resolution, ds_resolution, ds_resolution);
sor.filter(out);
}
void BGKOctoMap::beam_sample(const point3f &hit, const point3f &origin, PointCloud &frees,
float free_resolution) const {
frees.clear();
float x0 = origin.x();
float y0 = origin.y();
float z0 = origin.z();
float x = hit.x();
float y = hit.y();
float z = hit.z();
float l = (float) sqrt((x - x0) * (x - x0) + (y - y0) * (y - y0) + (z - z0) * (z - z0));
float nx = (x - x0) / l;
float ny = (y - y0) / l;
float nz = (z - z0) / l;
float d = free_resolution;
while (d < l) {
frees.emplace_back(x0 + nx * d, y0 + ny * d, z0 + nz * d);
d += free_resolution;
}
if (l > free_resolution)
frees.emplace_back(x0 + nx * (l - free_resolution), y0 + ny * (l - free_resolution), z0 + nz * (l - free_resolution));
}
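    // Worked example (illustrative): with origin (0,0,0), hit (0,0,1) and
    // free_resolution 0.3, this variant samples d = 0.3, 0.6, 0.9 plus the
    // extra point at l - free_resolution = 0.7; unlike the BGKLV version it
    // never emits the hit itself as a free sample.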
/*
* Compute bounding box of pointcloud
* Precondition: cloud non-empty
*/
void BGKOctoMap::bbox(const GPPointCloud &cloud, point3f &lim_min, point3f &lim_max) const {
assert(cloud.size() > 0);
vector<float> x, y, z;
for (auto it = cloud.cbegin(); it != cloud.cend(); ++it) {
x.push_back(it->first.x());
y.push_back(it->first.y());
z.push_back(it->first.z());
}
auto xlim = std::minmax_element(x.cbegin(), x.cend());
auto ylim = std::minmax_element(y.cbegin(), y.cend());
auto zlim = std::minmax_element(z.cbegin(), z.cend());
lim_min.x() = *xlim.first;
lim_min.y() = *ylim.first;
lim_min.z() = *zlim.first;
lim_max.x() = *xlim.second;
lim_max.y() = *ylim.second;
lim_max.z() = *zlim.second;
}
void BGKOctoMap::get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
vector<BlockHashKey> &blocks) const {
for (float x = lim_min.x() - block_size; x <= lim_max.x() + 2 * block_size; x += block_size) {
for (float y = lim_min.y() - block_size; y <= lim_max.y() + 2 * block_size; y += block_size) {
for (float z = lim_min.z() - block_size; z <= lim_max.z() + 2 * block_size; z += block_size) {
blocks.push_back(block_to_hash_key(x, y, z));
}
}
}
}
int BGKOctoMap::get_gp_points_in_bbox(const BlockHashKey &key,
GPPointCloud &out) {
        point3f half_size(block_size / 2.0f, block_size / 2.0f, block_size / 2.0f);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return get_gp_points_in_bbox(lim_min, lim_max, out);
}
int BGKOctoMap::has_gp_points_in_bbox(const BlockHashKey &key) {
        point3f half_size(block_size / 2.0f, block_size / 2.0f, block_size / 2.0f);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return has_gp_points_in_bbox(lim_min, lim_max);
}
int BGKOctoMap::get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
GPPointCloud &out) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, BGKOctoMap::search_callback, static_cast<void *>(&out));
}
int BGKOctoMap::has_gp_points_in_bbox(const point3f &lim_min,
const point3f &lim_max) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, BGKOctoMap::count_callback, NULL);
}
bool BGKOctoMap::count_callback(GPPointType *p, void *arg) {
return false;
}
bool BGKOctoMap::search_callback(GPPointType *p, void *arg) {
GPPointCloud *out = static_cast<GPPointCloud *>(arg);
out->push_back(*p);
return true;
}
int BGKOctoMap::has_gp_points_in_bbox(const ExtendedBlock &block) {
for (auto it = block.cbegin(); it != block.cend(); ++it) {
if (has_gp_points_in_bbox(*it) > 0)
return 1;
}
return 0;
}
int BGKOctoMap::get_gp_points_in_bbox(const ExtendedBlock &block,
GPPointCloud &out) {
int n = 0;
for (auto it = block.cbegin(); it != block.cend(); ++it) {
n += get_gp_points_in_bbox(*it, out);
}
return n;
}
Block *BGKOctoMap::search(BlockHashKey key) const {
auto block = block_arr.find(key);
if (block == block_arr.end()) {
return nullptr;
} else {
return block->second;
}
}
OcTreeNode BGKOctoMap::search(point3f p) const {
Block *block = search(block_to_hash_key(p));
if (block == nullptr) {
return OcTreeNode();
} else {
return OcTreeNode(block->search(p));
}
}
OcTreeNode BGKOctoMap::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
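    // Usage sketch (hypothetical values, for illustration only): build a map,
    // integrate one scan, then walk the pruned leaves:
    //
    //   la3dm::BGKOctoMap map(0.1f, 4, 1.0f, 1.0f, 0.3f, 0.7f, 1.0f, 1.0f, 1.0f);
    //   map.insert_pointcloud(cloud, origin, 0.1f, 0.5f, -1);  // max_range < 0: no range filter
    //   for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it)
    //       if (it.get_node().get_state() == la3dm::State::OCCUPIED) { /* publish */ }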
}
la3dm | la3dm-master/src/bgkoctomap/bgkoctomap_server.cpp
#include <string>
#include <iostream>
#include <ros/ros.h>
#include <pcl_ros/transforms.h>
#include <pcl/filters/voxel_grid.h>
#include "markerarray_pub.h"
#include "bgkoctomap.h"
tf::TransformListener *listener;
std::string frame_id("/map");
la3dm::BGKOctoMap *map;
la3dm::MarkerArrayPub *m_pub_occ, *m_pub_free;
//startup parameters
tf::Vector3 last_position;
tf::Quaternion last_orientation;
bool first = true;
double position_change_thresh = 0.1;
double orientation_change_thresh = 0.2;
bool updated = false;
//Universal parameters
std::string map_topic_occ("/occupied_cells_vis_array");
std::string map_topic_free("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 0.1;
double ell = 0.2;
double free_resolution = 0.65;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = true;
//BGKL parameters
float var_thresh = 1.0f;
float prior_A = 1.0f;
float prior_B = 1.0f;
void cloudHandler(const sensor_msgs::PointCloud2ConstPtr &cloud) {
tf::StampedTransform transform;
try {
listener->waitForTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, ros::Duration(5.0));
listener->lookupTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, transform); //ros::Time::now() -- Don't use this because processing time delay breaks it
    } catch (tf::TransformException &ex) {
ROS_ERROR("%s", ex.what());
return;
}
ros::Time start = ros::Time::now();
la3dm::point3f origin;
tf::Vector3 translation = transform.getOrigin();
tf::Quaternion orientation = transform.getRotation();
if (first || orientation.angleShortestPath(last_orientation) > orientation_change_thresh || translation.distance(last_position) > position_change_thresh)
{
ROS_INFO_STREAM("Cloud received");
last_position = translation;
last_orientation = orientation;
origin.x() = (float) translation.x();
origin.y() = (float) translation.y();
origin.z() = (float) translation.z();
sensor_msgs::PointCloud2 cloud_map;
pcl_ros::transformPointCloud(frame_id, *cloud, cloud_map, *listener);
//pointer required for downsampling
la3dm::PCLPointCloud::Ptr pcl_cloud (new la3dm::PCLPointCloud());
pcl::fromROSMsg(cloud_map, *pcl_cloud);
//downsample for faster mapping
la3dm::PCLPointCloud filtered_cloud;
pcl::VoxelGrid<pcl::PointXYZ> filterer;
filterer.setInputCloud(pcl_cloud);
filterer.setLeafSize(ds_resolution, ds_resolution, ds_resolution);
filterer.filter(filtered_cloud);
if(filtered_cloud.size() > 5){
map->insert_pointcloud(filtered_cloud, origin, (float) resolution, (float) free_resolution, (float) max_range);
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("One cloud finished in " << (end - start).toSec() << "s");
updated = true;
}
if (updated)
{
ros::Time start2 = ros::Time::now();
m_pub_occ->clear();
m_pub_free->clear();
for (auto it = map->begin_leaf(); it != map->end_leaf(); ++it) {
la3dm::point3f p = it.get_loc();
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size)
{
m_pub_occ->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_occ->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution());
}
}
}
else if(it.get_node().get_state() == la3dm::State::FREE)
{
if (original_size)
{
m_pub_free->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_free->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution(), it.get_node().get_prob());
}
}
}
}
updated = false;
m_pub_occ->publish();
m_pub_free->publish();
ros::Time end2 = ros::Time::now();
ROS_INFO_STREAM("One map published in " << (end2 - start2).toSec() << "s");
}
}
int main(int argc, char **argv) {
ros::init(argc, argv, "bgkoctomap_server");
ros::NodeHandle nh("~");
//incoming pointcloud topic, this could be put into the .yaml too
std::string cloud_topic("/velodyne_points");
//Universal parameters
nh.param<std::string>("topic", map_topic_occ, map_topic_occ);
nh.param<std::string>("topic_free", map_topic_free, map_topic_free);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
//BKGL parameters
nh.param<float>("var_thresh", var_thresh, var_thresh);
nh.param<float>("prior_A", prior_A, prior_A);
nh.param<float>("prior_B", prior_B, prior_B);
ROS_INFO_STREAM("Parameters:" << std::endl <<
"topic: " << map_topic_occ << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size << std::endl <<
"var_thresh: " << var_thresh << std::endl <<
"prior_A: " << prior_A << std::endl <<
"prior_B: " << prior_B
);
map = new la3dm::BGKOctoMap(resolution, block_depth, sf2, ell, free_thresh, occupied_thresh, var_thresh, prior_A, prior_B);
ros::Subscriber point_sub = nh.subscribe<sensor_msgs::PointCloud2>(cloud_topic, 1, cloudHandler);
m_pub_occ = new la3dm::MarkerArrayPub(nh, map_topic_occ, resolution);
m_pub_free = new la3dm::MarkerArrayPub(nh, map_topic_free, resolution);
listener = new tf::TransformListener();
    ros::spin(); // blocks until shutdown, so no surrounding loop is needed
return 0;
}
la3dm | la3dm-master/src/bgkoctomap/bgkoctomap_static_node.cpp
#include <string>
#include <iostream>
#include <ros/ros.h>
#include "bgkoctomap.h"
#include "markerarray_pub.h"
void load_pcd(std::string filename, la3dm::point3f &origin, la3dm::PCLPointCloud &cloud) {
pcl::PCLPointCloud2 cloud2;
Eigen::Vector4f _origin;
    Eigen::Quaternionf orientation;
    pcl::io::loadPCDFile(filename, cloud2, _origin, orientation);
pcl::fromPCLPointCloud2(cloud2, cloud);
origin.x() = _origin[0];
origin.y() = _origin[1];
origin.z() = _origin[2];
}
int main(int argc, char **argv) {
ros::init(argc, argv, "bgkoctomap_static_node");
ros::NodeHandle nh("~");
std::string dir;
std::string prefix;
int scan_num = 0;
std::string map_topic("/occupied_cells_vis_array");
std::string map_topic2("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 1.0;
double ell = 1.0;
double free_resolution = 0.5;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = false;
float var_thresh = 1.0f;
float prior_A = 1.0f;
float prior_B = 1.0f;
nh.param<std::string>("dir", dir, dir);
nh.param<std::string>("prefix", prefix, prefix);
nh.param<std::string>("topic", map_topic, map_topic);
nh.param<std::string>("topic2", map_topic2, map_topic2);
nh.param<int>("scan_num", scan_num, scan_num);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
nh.param<float>("var_thresh", var_thresh, var_thresh);
nh.param<float>("prior_A", prior_A, prior_A);
nh.param<float>("prior_B", prior_B, prior_B);
ROS_INFO_STREAM("Parameters:" << std::endl <<
"dir: " << dir << std::endl <<
"prefix: " << prefix << std::endl <<
"topic: " << map_topic << std::endl <<
"scan_sum: " << scan_num << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size << std::endl <<
"var_thresh: " << var_thresh << std::endl <<
"prior_A: " << prior_A << std::endl <<
"prior_B: " << prior_B
);
la3dm::BGKOctoMap map(resolution, block_depth, sf2, ell, free_thresh, occupied_thresh, var_thresh, prior_A, prior_B);
ros::Time start = ros::Time::now();
for (int scan_id = 1; scan_id <= scan_num; ++scan_id) {
la3dm::PCLPointCloud cloud;
la3dm::point3f origin;
std::string filename(dir + "/" + prefix + "_" + std::to_string(scan_id) + ".pcd");
load_pcd(filename, origin, cloud);
map.insert_pointcloud(cloud, origin, resolution, free_resolution, max_range);
ROS_INFO_STREAM("Scan " << scan_id << " done");
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("Mapping finished in " << (end - start).toSec() << "s");
///////// Publish Map /////////////////////
la3dm::MarkerArrayPub m_pub(nh, map_topic, resolution);
la3dm::MarkerArrayPub m_pub2(nh, map_topic2, resolution);
if (min_z == max_z) {
la3dm::point3f lim_min, lim_max;
map.get_bbox(lim_min, lim_max);
min_z = lim_min.z();
max_z = lim_max.z();
}
for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it) {
la3dm::point3f p = it.get_loc();
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size) {
                m_pub.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
} else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution());
}
}
if (it.get_node().get_state() == la3dm::State::FREE) {
if (original_size) {
                m_pub2.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
}
else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub2.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution(), it.get_node().get_prob());
}
}
}
m_pub.publish();
m_pub2.publish();
ros::spin();
return 0;
}
la3dm | la3dm-master/src/bgkoctomap/bgkoctree.cpp
#include "bgkoctree.h"
#include <cmath>
namespace la3dm {
unsigned short OcTree::max_depth = 0;
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned short index) {
return (depth << 16) + index;
}
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned short &index) {
depth = (unsigned short) (key >> 16);
index = (unsigned short) (key & 0xFFFF);
}
OcTree::OcTree() {
if (max_depth <= 0)
node_arr = nullptr;
else {
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
node_arr[i] = new OcTreeNode[(int) pow(8, i)]();
}
}
}
OcTree::~OcTree() {
if (node_arr != nullptr) {
for (unsigned short i = 0; i < max_depth; ++i) {
if (node_arr[i] != nullptr) {
delete[] node_arr[i];
}
}
delete[] node_arr;
}
}
OcTree::OcTree(const OcTree &other) {
if (other.node_arr == nullptr) {
node_arr = nullptr;
return;
}
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
if (other.node_arr[i] != nullptr) {
int n = (int) pow(8, i);
node_arr[i] = new OcTreeNode[n]();
                std::copy(other.node_arr[i], other.node_arr[i] + n, node_arr[i]);
} else
node_arr[i] = nullptr;
}
}
OcTree &OcTree::operator=(const OcTree &other) {
OcTreeNode **local_node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
if (local_node_arr[i] != nullptr) {
int n = (int) pow(8, i);
local_node_arr[i] = new OcTreeNode[n]();
std::copy(local_node_arr[i], local_node_arr[i] + n, other.node_arr[i]);
} else
local_node_arr[i] = nullptr;
}
node_arr = local_node_arr;
return *this;
}
bool OcTree::is_leaf(unsigned short depth, unsigned short index) const {
if (node_arr != nullptr && node_arr[depth] != nullptr && node_arr[depth][index].get_state() != State::PRUNED) {
if (depth + 1 < max_depth) {
if (node_arr[depth + 1] == nullptr || node_arr[depth + 1][index * 8].get_state() == State::PRUNED)
return true;
} else {
return true;
}
}
return false;
}
bool OcTree::is_leaf(OcTreeHashKey key) const {
unsigned short depth = 0;
unsigned short index = 0;
hash_key_to_node(key, depth, index);
return is_leaf(depth, index);
}
bool OcTree::search(OcTreeHashKey key) const {
unsigned short depth;
unsigned short index;
hash_key_to_node(key, depth, index);
return node_arr != nullptr &&
node_arr[depth] != nullptr &&
node_arr[depth][index].get_state() != State::PRUNED;
}
bool OcTree::prune() {
if (node_arr == nullptr)
return false;
bool pruned = false;
for (unsigned short depth = max_depth - 1; depth > 0; --depth) {
OcTreeNode *layer = node_arr[depth];
OcTreeNode *parent_layer = node_arr[depth - 1];
if (layer == nullptr)
continue;
bool empty_layer = true;
unsigned int n = (unsigned int) pow(8, depth);
for (unsigned short index = 0; index < n; index += 8) {
State state = layer[index].get_state();
if (state == State::UNKNOWN) {
empty_layer = false;
continue;
}
if (state == State::PRUNED)
continue;
bool collapsible = true;
for (unsigned short i = 1; i < 8; ++i) {
                    if (layer[index + i].get_state() != state) {
                        collapsible = false;
                        break; // one differing sibling already rules out collapsing
                    }
}
if (collapsible) {
parent_layer[(int) floor(index / 8)] = layer[index];
for (unsigned short i = 0; i < 8; ++i) {
layer[index + i].prune();
}
pruned = true;
} else {
empty_layer = false;
}
}
if (empty_layer) {
delete[] layer;
node_arr[depth] = nullptr;
}
}
return pruned;
}
OcTreeNode &OcTree::operator[](OcTreeHashKey key) const {
unsigned short depth;
unsigned short index;
hash_key_to_node(key, depth, index);
return node_arr[depth][index];
}
}
la3dm | la3dm-master/src/bgkoctomap/bgkoctree_node.cpp
#include "bgkoctree_node.h"
#include <cmath>
namespace la3dm {
/// Default static values
float Occupancy::sf2 = 1.0f;
float Occupancy::ell = 1.0f;
float Occupancy::free_thresh = 0.3f;
float Occupancy::occupied_thresh = 0.7f;
float Occupancy::var_thresh = 1000.0f;
float Occupancy::prior_A = 0.5f;
float Occupancy::prior_B = 0.5f;
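    // m_A and m_B act as Beta-distribution pseudo-counts of "occupied" and
    // "free" evidence; the occupancy probability is the mean A / (A + B).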
Occupancy::Occupancy(float A, float B) : m_A(Occupancy::prior_A + A), m_B(Occupancy::prior_B + B) {
classified = false;
float var = get_var();
if (var > Occupancy::var_thresh)
state = State::UNKNOWN;
else {
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
float Occupancy::get_prob() const {
return m_A / (m_A + m_B);
}
void Occupancy::update(float ybar, float kbar) {
classified = true;
m_A += ybar;
m_B += kbar - ybar;
float var = get_var();
if (var > Occupancy::var_thresh)
state = State::UNKNOWN;
else {
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc) {
os.write((char *) &oc.m_A, sizeof(oc.m_A));
os.write((char *) &oc.m_B, sizeof(oc.m_B));
return os;
}
    std::ifstream &operator>>(std::ifstream &is, Occupancy &oc) {
        float m_A, m_B;
        is.read((char *) &m_A, sizeof(m_A));
        is.read((char *) &m_B, sizeof(m_B));
        // The constructor adds the priors back in, so subtract them here to
        // make serialization round-trip without double-counting the priors.
        oc = OcTreeNode(m_A - Occupancy::prior_A, m_B - Occupancy::prior_B);
        return is;
    }
std::ostream &operator<<(std::ostream &os, const Occupancy &oc) {
return os << '(' << oc.m_A << ' ' << oc.m_B << ' ' << oc.get_prob() << ')';
}
} | 2,122 | 32.698413 | 117 | cpp |
la3dm | la3dm-master/src/common/point3f.cpp | #include "point3f.h"
#include <cassert>
#include <math.h>
#include <string.h>
namespace la3dm {
Vector3 &Vector3::rotate_IP(double roll, double pitch, double yaw) {
double x, y, z;
// pitch (around y)
x = (*this)(0);
z = (*this)(2);
(*this)(0) = (float) (z * sin(pitch) + x * cos(pitch));
(*this)(2) = (float) (z * cos(pitch) - x * sin(pitch));
// yaw (around z)
x = (*this)(0);
y = (*this)(1);
(*this)(0) = (float) (x * cos(yaw) - y * sin(yaw));
(*this)(1) = (float) (x * sin(yaw) + y * cos(yaw));
// roll (around x)
y = (*this)(1);
z = (*this)(2);
(*this)(1) = (float) (y * cos(roll) - z * sin(roll));
(*this)(2) = (float) (y * sin(roll) + z * cos(roll));
return *this;
}
std::istream &Vector3::read(std::istream &s) {
int temp;
s >> temp; // should be 3
for (unsigned int i = 0; i < 3; i++)
s >> operator()(i);
return s;
}
std::ostream &Vector3::write(std::ostream &s) const {
s << 3;
for (unsigned int i = 0; i < 3; i++)
s << " " << operator()(i);
return s;
}
std::istream &Vector3::readBinary(std::istream &s) {
int temp;
s.read((char *) &temp, sizeof(temp));
double val = 0;
for (unsigned int i = 0; i < 3; i++) {
s.read((char *) &val, sizeof(val));
operator()(i) = (float) val;
}
return s;
}
std::ostream &Vector3::writeBinary(std::ostream &s) const {
int temp = 3;
s.write((char *) &temp, sizeof(temp));
double val = 0;
for (unsigned int i = 0; i < 3; i++) {
val = operator()(i);
s.write((char *) &val, sizeof(val));
}
return s;
}
std::ostream &operator<<(std::ostream &out, la3dm::Vector3 const &v) {
return out << '(' << v.x() << ' ' << v.y() << ' ' << v.z() << ')';
}
} | 2,011 | 25.473684 | 74 | cpp |
la3dm | la3dm-master/src/common/point6f.cpp | #include "point6f.h"
#include <cassert>
#include <math.h>
#include <string.h>
namespace la3dm {
// Vector3 &Vector3::rotate_IP(double roll, double pitch, double yaw) {
// double x, y, z;
// // pitch (around y)
// x = (*this)(0);
// z = (*this)(2);
// (*this)(0) = (float) (z * sin(pitch) + x * cos(pitch));
// (*this)(2) = (float) (z * cos(pitch) - x * sin(pitch));
// // yaw (around z)
// x = (*this)(0);
// y = (*this)(1);
// (*this)(0) = (float) (x * cos(yaw) - y * sin(yaw));
// (*this)(1) = (float) (x * sin(yaw) + y * cos(yaw));
// // roll (around x)
// y = (*this)(1);
// z = (*this)(2);
// (*this)(1) = (float) (y * cos(roll) - z * sin(roll));
// (*this)(2) = (float) (y * sin(roll) + z * cos(roll));
// return *this;
// }
std::istream &Vector6::read(std::istream &s) {
int temp;
s >> temp; // should be 6
for (unsigned int i = 0; i < 6; i++)
s >> operator()(i);
return s;
}
std::ostream &Vector6::write(std::ostream &s) const {
s << 6;
for (unsigned int i = 0; i < 6; i++)
s << " " << operator()(i);
return s;
}
std::istream &Vector6::readBinary(std::istream &s) {
int temp;
s.read((char *) &temp, sizeof(temp));
double val = 0;
for (unsigned int i = 0; i < 6; i++) {
s.read((char *) &val, sizeof(val));
operator()(i) = (float) val;
}
return s;
}
std::ostream &Vector6::writeBinary(std::ostream &s) const {
int temp = 6;
s.write((char *) &temp, sizeof(temp));
double val = 0;
for (unsigned int i = 0; i < 6; i++) {
val = operator()(i);
s.write((char *) &val, sizeof(val));
}
return s;
}
std::ostream &operator<<(std::ostream &out, la3dm::Vector6 const &v) {
return out << '(' << v.x0() << ' ' << v.y0() << ' ' << v.z0() << ' ' << v.x1() << ' ' << v.y1() << ' ' << v.z1() << ')';
}
} | 2,122 | 26.934211 | 128 | cpp |
la3dm | la3dm-master/src/gpoctomap/gpblock.cpp | #include "gpblock.h"
#include <queue>
#include <algorithm>
namespace la3dm {
std::unordered_map<OcTreeHashKey, point3f> init_key_loc_map(float resolution, unsigned short max_depth) {
std::unordered_map<OcTreeHashKey, point3f> key_loc_map;
std::queue<point3f> center_q;
center_q.push(point3f(0.0f, 0.0f, 0.0f));
for (unsigned short depth = 0; depth < max_depth; ++depth) {
unsigned short q_size = (unsigned short) center_q.size();
float half_size = (float) (resolution * pow(2, max_depth - depth - 1) * 0.5f);
for (unsigned short index = 0; index < q_size; ++index) {
point3f center = center_q.front();
center_q.pop();
key_loc_map.emplace(node_to_hash_key(depth, index), center);
if (depth == max_depth - 1)
continue;
for (unsigned short i = 0; i < 8; ++i) {
float x = (float) (center.x() + half_size * (i & 4 ? 0.5 : -0.5));
float y = (float) (center.y() + half_size * (i & 2 ? 0.5 : -0.5));
float z = (float) (center.z() + half_size * (i & 1 ? 0.5 : -0.5));
center_q.emplace(x, y, z);
}
}
}
return key_loc_map;
}
std::unordered_map<unsigned short, OcTreeHashKey> init_index_map(
const std::unordered_map<OcTreeHashKey, point3f> &key_loc_map, unsigned short max_depth) {
std::vector<std::pair<OcTreeHashKey, point3f>> temp;
for (auto it = key_loc_map.begin(); it != key_loc_map.end(); ++it) {
unsigned short depth, index;
hash_key_to_node(it->first, depth, index);
if (depth == max_depth - 1)
temp.push_back(std::make_pair(it->first, it->second));
}
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.x() < p2.second.x();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.y() < p2.second.y();
});
std::stable_sort(temp.begin(), temp.end(),
[](const std::pair<OcTreeHashKey, point3f> &p1,
const std::pair<OcTreeHashKey, point3f> &p2) {
return p1.second.z() < p2.second.z();
});
std::unordered_map<unsigned short, OcTreeHashKey> index_map;
int index = 0;
for (auto it = temp.cbegin(); it != temp.cend(); ++it, ++index) {
index_map.insert(std::make_pair(index, it->first));
}
return index_map;
    }
BlockHashKey block_to_hash_key(point3f center) {
return block_to_hash_key(center.x(), center.y(), center.z());
}
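    // Quantize each coordinate to a 20-bit integer (the +524288 = 2^19 offset
    // shifts negative positions into range) and pack x|y|z into one 64-bit key.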
BlockHashKey block_to_hash_key(float x, float y, float z) {
return (int64_t(x / (double) Block::size + 524288.5) << 40) |
(int64_t(y / (double) Block::size + 524288.5) << 20) |
(int64_t(z / (double) Block::size + 524288.5));
}
point3f hash_key_to_block(BlockHashKey key) {
return point3f(((key >> 40) - 524288) * Block::size,
(((key >> 20) & 0xFFFFF) - 524288) * Block::size,
((key & 0xFFFFF) - 524288) * Block::size);
}
ExtendedBlock get_extended_block(BlockHashKey key) {
ExtendedBlock blocks;
point3f center = hash_key_to_block(key);
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = key;
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
float Block::resolution = 0.1f;
float Block::size = 0.8f;
unsigned short Block::cell_num = static_cast<unsigned short>(round(Block::size / Block::resolution));
std::unordered_map<OcTreeHashKey, point3f> Block::key_loc_map;
std::unordered_map<unsigned short, OcTreeHashKey> Block::index_map;
Block::Block() : OcTree(), center(0.0f, 0.0f, 0.0f) { }
Block::Block(point3f center) : OcTree(), center(center) { }
ExtendedBlock Block::get_extended_block() const {
ExtendedBlock blocks;
float x = center.x();
float y = center.y();
float z = center.z();
blocks[0] = block_to_hash_key(x, y, z);
float ex, ey, ez;
for (int i = 0; i < 6; ++i) {
ex = (i / 2 == 0) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ey = (i / 2 == 1) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
ez = (i / 2 == 2) ? (i % 2 == 0 ? Block::size : -Block::size) : 0;
blocks[i + 1] = block_to_hash_key(ex + x, ey + y, ez + z);
}
return blocks;
}
OcTreeHashKey Block::get_node(unsigned short x, unsigned short y, unsigned short z) const {
unsigned short index = x + y * Block::cell_num + z * Block::cell_num * Block::cell_num;
return Block::index_map[index];
}
point3f Block::get_point(unsigned short x, unsigned short y, unsigned short z) const {
return Block::key_loc_map[get_node(x, y, z)] + center;
}
void Block::get_index(const point3f &p, unsigned short &x, unsigned short &y, unsigned short &z) const {
int xx = static_cast<int>((p.x() - center.x()) / resolution + Block::cell_num / 2);
int yy = static_cast<int>((p.y() - center.y()) / resolution + Block::cell_num / 2);
int zz = static_cast<int>((p.z() - center.z()) / resolution + Block::cell_num / 2);
auto clip = [](int a) -> int { return std::max(0, std::min(a, Block::cell_num - 1)); };
x = static_cast<unsigned short>(clip(xx));
y = static_cast<unsigned short>(clip(yy));
z = static_cast<unsigned short>(clip(zz));
}
OcTreeNode& Block::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
OcTreeNode& Block::search(point3f p) const {
unsigned short x, y, z;
get_index(p, x, y, z);
return operator[](get_node(x, y, z));
}
}
| 6,729 | 41.0625 | 109 | cpp |
la3dm | la3dm-master/src/gpoctomap/gpoctomap.cpp | #include <algorithm>
#include <pcl/filters/voxel_grid.h>
#include "gpoctomap.h"
#include "gpregressor.h"
#include <iostream>
using std::vector;
//#define DEBUG true;
#ifdef DEBUG
#include <iostream>
#define Debug_Msg(msg) {\
std::cout << "Debug: " << msg << std::endl; }
#endif
namespace la3dm {
GPOctoMap::GPOctoMap() : GPOctoMap(0.1f, 4, 1.0, 1.0, 0.01, 100, 0.001f, 1000.0f, 0.02f, 0.3f, 0.7f) { }
GPOctoMap::GPOctoMap(float resolution, unsigned short block_depth, float sf2, float ell, float noise, float l,
float min_var,
float max_var, float max_known_var, float free_thresh, float occupied_thresh)
: resolution(resolution), block_depth(block_depth),
block_size((float) pow(2, block_depth - 1) * resolution) {
Block::resolution = resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
Block::index_map = init_index_map(Block::key_loc_map, block_depth);
OcTree::max_depth = block_depth;
OcTreeNode::sf2 = sf2;
OcTreeNode::ell = ell;
OcTreeNode::noise = noise;
OcTreeNode::l = l;
OcTreeNode::min_ivar = 1.0f / max_var;
OcTreeNode::max_ivar = 1.0f / min_var;
OcTreeNode::min_known_ivar = 1.0f / max_known_var;
OcTreeNode::free_thresh = free_thresh;
OcTreeNode::occupied_thresh = occupied_thresh;
}
GPOctoMap::~GPOctoMap() {
for (auto it = block_arr.begin(); it != block_arr.end(); ++it) {
if (it->second != nullptr) {
delete it->second;
}
}
}
void GPOctoMap::set_resolution(float resolution) {
this->resolution = resolution;
Block::resolution = resolution;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void GPOctoMap::set_block_depth(unsigned short max_depth) {
this->block_depth = max_depth;
OcTree::max_depth = max_depth;
this->block_size = (float) pow(2, block_depth - 1) * resolution;
Block::size = this->block_size;
Block::key_loc_map = init_key_loc_map(resolution, block_depth);
}
void GPOctoMap::insert_training_data(const GPPointCloud &xy) {
if (xy.empty())
return;
point3f lim_min, lim_max;
bbox(xy, lim_min, lim_max);
vector<BlockHashKey> blocks;
get_blocks_in_bbox(lim_min, lim_max, blocks);
for (auto it = xy.cbegin(); it != xy.cend(); ++it) {
float p[] = {it->first.x(), it->first.y(), it->first.z()};
rtree.Insert(p, p, const_cast<GPPointType *>(&*it));
}
/////////////////////////////////////////////////
////////// Training /////////////////////////////
/////////////////////////////////////////////////
vector<BlockHashKey> test_blocks;
std::unordered_map<BlockHashKey, GPR3f *> gpr_arr;
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < blocks.size(); ++i) {
BlockHashKey key = blocks[i];
ExtendedBlock eblock = get_extended_block(key);
if (has_gp_points_in_bbox(eblock))
#ifdef OPENMP
#pragma omp critical
#endif
{
test_blocks.push_back(key);
};
GPPointCloud block_xy;
get_gp_points_in_bbox(key, block_xy);
if (block_xy.size() < 1)
continue;
vector<float> block_x, block_y;
for (auto it = block_xy.cbegin(); it != block_xy.cend(); ++it) {
block_x.push_back(it->first.x());
block_x.push_back(it->first.y());
block_x.push_back(it->first.z());
block_y.push_back(it->second);
}
GPR3f *gpr = new GPR3f(OcTreeNode::sf2, OcTreeNode::ell, OcTreeNode::noise);
gpr->train(block_x, block_y);
#ifdef OPENMP
#pragma omp critical
#endif
{
gpr_arr.emplace(key, gpr);
};
}
#ifdef DEBUG
Debug_Msg("GP training done");
Debug_Msg("GP prediction: block number: " << test_blocks.size());
#endif
/////////////////////////////////////////////////
////////// Prediction ///////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
Block *block;
#ifdef OPENMP
#pragma omp critical
#endif
            {
                block = search(key);
                if (block == nullptr) {
                    // First touch: create the block and keep a valid pointer for use below.
                    block = new Block(hash_key_to_block(key));
                    block_arr.emplace(key, block);
                }
            };
vector<float> xs;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it) {
point3f p = block->get_loc(leaf_it);
xs.push_back(p.x());
xs.push_back(p.y());
xs.push_back(p.z());
}
ExtendedBlock eblock = block->get_extended_block();
for (auto block_it = eblock.cbegin(); block_it != eblock.cend(); ++block_it) {
auto gpr = gpr_arr.find(*block_it);
if (gpr == gpr_arr.end())
continue;
vector<float> m, var;
gpr->second->predict(xs, m, var);
int j = 0;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it, ++j) {
OcTreeNode &node = leaf_it.get_node();
node.update(m[j], var[j]);
}
}
}
#ifdef DEBUG
Debug_Msg("GP prediction done");
#endif
/////////////////////////////////////////////////
////////// Pruning //////////////////////////////
/////////////////////////////////////////////////
// #ifdef OPENMP
// #pragma omp parallel for
// #endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
auto block = block_arr.find(key);
if (block == block_arr.end())
continue;
block->second->prune();
}
#ifdef DEBUG
Debug_Msg("Pruning done");
#endif
/////////////////////////////////////////////////
////////// Cleaning /////////////////////////////
/////////////////////////////////////////////////
for (auto it = gpr_arr.begin(); it != gpr_arr.end(); ++it)
delete it->second;
rtree.RemoveAll();
}
void GPOctoMap::insert_pointcloud(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_res, float max_range) {
#ifdef DEBUG
Debug_Msg("Insert pointcloud: " << "cloud size: " << cloud.size() << " origin: " << origin);
#endif
////////// Preparation //////////////////////////
/////////////////////////////////////////////////
GPPointCloud xy;
get_training_data(cloud, origin, ds_resolution, free_res, max_range, xy);
#ifdef DEBUG
Debug_Msg("Training data size: " << xy.size());
#endif
// If pointcloud after max_range filtering is empty
// no need to do anything
if (xy.size() == 0) {
return;
}
point3f lim_min, lim_max;
bbox(xy, lim_min, lim_max);
vector<BlockHashKey> blocks;
get_blocks_in_bbox(lim_min, lim_max, blocks);
for (auto it = xy.cbegin(); it != xy.cend(); ++it) {
float p[] = {it->first.x(), it->first.y(), it->first.z()};
rtree.Insert(p, p, const_cast<GPPointType *>(&*it));
}
/////////////////////////////////////////////////
////////// Training /////////////////////////////
/////////////////////////////////////////////////
vector<BlockHashKey> test_blocks;
std::unordered_map<BlockHashKey, GPR3f *> gpr_arr;
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < blocks.size(); ++i) {
BlockHashKey key = blocks[i];
ExtendedBlock eblock = get_extended_block(key);
if (has_gp_points_in_bbox(eblock))
#ifdef OPENMP
#pragma omp critical
#endif
{
test_blocks.push_back(key);
};
GPPointCloud block_xy;
get_gp_points_in_bbox(key, block_xy);
if (block_xy.size() < 1)
continue;
vector<float> block_x, block_y;
for (auto it = block_xy.cbegin(); it != block_xy.cend(); ++it) {
block_x.push_back(it->first.x());
block_x.push_back(it->first.y());
block_x.push_back(it->first.z());
block_y.push_back(it->second);
}
GPR3f *gpr = new GPR3f(OcTreeNode::sf2, OcTreeNode::ell, OcTreeNode::noise);
gpr->train(block_x, block_y);
#ifdef OPENMP
#pragma omp critical
#endif
{
gpr_arr.emplace(key, gpr);
};
}
#ifdef DEBUG
Debug_Msg("GP training done");
Debug_Msg("GP prediction: block number: " << test_blocks.size());
#endif
/////////////////////////////////////////////////
////////// Prediction ///////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for schedule(dynamic)
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
#ifdef OPENMP
#pragma omp critical
#endif
{
if (block_arr.find(key) == block_arr.end())
block_arr.emplace(key, new Block(hash_key_to_block(key)));
};
Block *block = block_arr[key];
vector<float> xs;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it) {
point3f p = block->get_loc(leaf_it);
xs.push_back(p.x());
xs.push_back(p.y());
xs.push_back(p.z());
}
ExtendedBlock eblock = block->get_extended_block();
for (auto block_it = eblock.cbegin(); block_it != eblock.cend(); ++block_it) {
auto gpr = gpr_arr.find(*block_it);
if (gpr == gpr_arr.end())
continue;
vector<float> m, var;
gpr->second->predict(xs, m, var);
int j = 0;
for (auto leaf_it = block->begin_leaf(); leaf_it != block->end_leaf(); ++leaf_it, ++j) {
OcTreeNode &node = leaf_it.get_node();
node.update(m[j], var[j]);
}
}
}
#ifdef DEBUG
Debug_Msg("GP prediction done");
#endif
/////////////////////////////////////////////////
////////// Pruning //////////////////////////////
/////////////////////////////////////////////////
#ifdef OPENMP
#pragma omp parallel for
#endif
for (int i = 0; i < test_blocks.size(); ++i) {
BlockHashKey key = test_blocks[i];
auto block = block_arr.find(key);
if (block == block_arr.end())
continue;
block->second->prune();
}
#ifdef DEBUG
Debug_Msg("Pruning done");
#endif
/////////////////////////////////////////////////
////////// Cleaning /////////////////////////////
/////////////////////////////////////////////////
for (auto it = gpr_arr.begin(); it != gpr_arr.end(); ++it)
delete it->second;
rtree.RemoveAll();
}
void GPOctoMap::get_bbox(point3f &lim_min, point3f &lim_max) const {
lim_min = point3f(0, 0, 0);
lim_max = point3f(0, 0, 0);
GPPointCloud centers;
for (auto it = block_arr.cbegin(); it != block_arr.cend(); ++it) {
centers.emplace_back(it->second->get_center(), 1);
}
if (centers.size() > 0) {
bbox(centers, lim_min, lim_max);
lim_min -= point3f(block_size, block_size, block_size) * 0.5;
lim_max += point3f(block_size, block_size, block_size) * 0.5;
}
}
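    // Build the GP training set: downsampled hits are labeled +1, free-space
    // samples taken along each beam are labeled -1.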
void GPOctoMap::get_training_data(const PCLPointCloud &cloud, const point3f &origin, float ds_resolution,
float free_resolution, float max_range, GPPointCloud &xy) const {
PCLPointCloud sampled_hits;
downsample(cloud, sampled_hits, ds_resolution);
PCLPointCloud frees;
frees.height = 1;
frees.width = 0;
xy.clear();
for (auto it = sampled_hits.begin(); it != sampled_hits.end(); ++it) {
point3f p(it->x, it->y, it->z);
if (max_range > 0) {
double l = (p - origin).norm();
if (l > max_range)
continue;
}
xy.emplace_back(p, 1.0f);
PointCloud frees_n;
beam_sample(p, origin, frees_n, free_resolution);
frees.push_back(PCLPointType(origin.x(), origin.y(), origin.z()));
for (auto p = frees_n.begin(); p != frees_n.end(); ++p) {
frees.push_back(PCLPointType(p->x(), p->y(), p->z()));
frees.width++;
}
}
PCLPointCloud sampled_frees;
downsample(frees, sampled_frees, ds_resolution);
for (auto it = sampled_frees.begin(); it != sampled_frees.end(); ++it) {
xy.emplace_back(point3f(it->x, it->y, it->z), -1);
}
}
void GPOctoMap::downsample(const PCLPointCloud &in, PCLPointCloud &out, float ds_resolution) const {
if (ds_resolution < 0) {
out = in;
return;
}
PCLPointCloud::Ptr pcl_in(new PCLPointCloud(in));
pcl::VoxelGrid<PCLPointType> sor;
sor.setInputCloud(pcl_in);
sor.setLeafSize(ds_resolution, ds_resolution, ds_resolution);
sor.filter(out);
// vector<int> indices;
// pcl_out.is_dense = false;
// pcl::removeNaNFromPointCloud(out, out, indices);
}
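    // Sample free-space points at regular intervals along the ray from the
    // sensor origin toward (but short of) the hit point.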
void GPOctoMap::beam_sample(const point3f &hit, const point3f &origin, PointCloud &frees,
float free_resolution) const {
frees.clear();
float x0 = origin.x();
float y0 = origin.y();
float z0 = origin.z();
float x = hit.x();
float y = hit.y();
float z = hit.z();
float l = (float) sqrt((x - x0) * (x - x0) + (y - y0) * (y - y0) + (z - z0) * (z - z0));
float nx = (x - x0) / l;
float ny = (y - y0) / l;
float nz = (z - z0) / l;
float d = free_resolution;
while (d < l) {
frees.emplace_back(x0 + nx * d, y0 + ny * d, z0 + nz * d);
d += free_resolution;
}
if (l > free_resolution)
frees.emplace_back(x0 + nx * (l - free_resolution), y0 + ny * (l - free_resolution), z0 + nz * (l - free_resolution));
}
/*
* Compute bounding box of pointcloud
* Precondition: cloud non-empty
*/
void GPOctoMap::bbox(const GPPointCloud &cloud, point3f &lim_min, point3f &lim_max) const {
assert(cloud.size() > 0);
vector<float> x, y, z;
for (auto it = cloud.cbegin(); it != cloud.cend(); ++it) {
x.push_back(it->first.x());
y.push_back(it->first.y());
z.push_back(it->first.z());
}
auto xlim = std::minmax_element(x.cbegin(), x.cend());
auto ylim = std::minmax_element(y.cbegin(), y.cend());
auto zlim = std::minmax_element(z.cbegin(), z.cend());
lim_min.x() = *xlim.first;
lim_min.y() = *ylim.first;
lim_min.z() = *zlim.first;
lim_max.x() = *xlim.second;
lim_max.y() = *ylim.second;
lim_max.z() = *zlim.second;
}
void GPOctoMap::get_blocks_in_bbox(const point3f &lim_min, const point3f &lim_max,
vector<BlockHashKey> &blocks) const {
for (float x = lim_min.x() - block_size; x <= lim_max.x() + 2 * block_size; x += block_size) {
for (float y = lim_min.y() - block_size; y <= lim_max.y() + 2 * block_size; y += block_size) {
for (float z = lim_min.z() - block_size; z <= lim_max.z() + 2 * block_size; z += block_size) {
blocks.push_back(block_to_hash_key(x, y, z));
}
}
}
}
int GPOctoMap::get_gp_points_in_bbox(const BlockHashKey &key,
GPPointCloud &out) {
        point3f half_size(block_size / 2.0f, block_size / 2.0f, block_size / 2.0f);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return get_gp_points_in_bbox(lim_min, lim_max, out);
}
int GPOctoMap::has_gp_points_in_bbox(const BlockHashKey &key) {
        point3f half_size(block_size / 2.0f, block_size / 2.0f, block_size / 2.0f);
point3f lim_min = hash_key_to_block(key) - half_size;
point3f lim_max = hash_key_to_block(key) + half_size;
return has_gp_points_in_bbox(lim_min, lim_max);
}
int GPOctoMap::get_gp_points_in_bbox(const point3f &lim_min, const point3f &lim_max,
GPPointCloud &out) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, GPOctoMap::search_callback, static_cast<void *>(&out));
}
int GPOctoMap::has_gp_points_in_bbox(const point3f &lim_min,
const point3f &lim_max) {
float a_min[] = {lim_min.x(), lim_min.y(), lim_min.z()};
float a_max[] = {lim_max.x(), lim_max.y(), lim_max.z()};
return rtree.Search(a_min, a_max, GPOctoMap::count_callback, NULL);
}
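    // Returning false stops the R-tree search at the first match, so the
    // existence test above only pays for a single hit.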
bool GPOctoMap::count_callback(GPPointType *p, void *arg) {
return false;
}
bool GPOctoMap::search_callback(GPPointType *p, void *arg) {
GPPointCloud *out = static_cast<GPPointCloud *>(arg);
out->push_back(*p);
return true;
}
int GPOctoMap::has_gp_points_in_bbox(const ExtendedBlock &block) {
for (auto it = block.cbegin(); it != block.cend(); ++it) {
if (has_gp_points_in_bbox(*it) > 0)
return 1;
}
return 0;
}
int GPOctoMap::get_gp_points_in_bbox(const ExtendedBlock &block,
GPPointCloud &out) {
int n = 0;
for (auto it = block.cbegin(); it != block.cend(); ++it) {
n += get_gp_points_in_bbox(*it, out);
}
return n;
}
Block *GPOctoMap::search(BlockHashKey key) const {
auto block = block_arr.find(key);
if (block == block_arr.end()) {
return nullptr;
} else {
return block->second;
}
}
OcTreeNode GPOctoMap::search(point3f p) const {
Block *block = search(block_to_hash_key(p));
if (block == nullptr) {
return OcTreeNode();
} else {
return OcTreeNode(block->search(p));
}
}
OcTreeNode GPOctoMap::search(float x, float y, float z) const {
return search(point3f(x, y, z));
}
}
| 19,736 | 33.994681 | 130 | cpp |
la3dm | la3dm-master/src/gpoctomap/gpoctomap_server.cpp | #include <string>
#include <iostream>
#include <ros/ros.h>
#include <pcl_ros/transforms.h>
#include <pcl/filters/voxel_grid.h>
#include "markerarray_pub.h"
#include "gpoctomap.h"
tf::TransformListener *listener;
std::string frame_id("/map");
la3dm::GPOctoMap *map;
la3dm::MarkerArrayPub *m_pub_occ, *m_pub_free;
tf::Vector3 last_position;
tf::Quaternion last_orientation;
bool first = true;
double position_change_thresh = 0.1;
double orientation_change_thresh = 0.2;
bool updated = false;
//Universal parameters
std::string map_topic_occ("/occupied_cells_vis_array");
std::string map_topic_free("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 1.0;
double ell = 1.0;
double free_resolution = 0.1;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = true;
//parameters for GPOctomap
double noise = 0.01;
double l = 100;
double min_var = 0.001;
double max_var = 1000;
double max_known_var = 0.02;
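// Transform each incoming cloud into the map frame, insert it into the map
// once the sensor pose has changed enough, then republish the occupied and
// free marker arrays.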
void cloudHandler(const sensor_msgs::PointCloud2ConstPtr &cloud) {
tf::StampedTransform transform;
try {
listener->waitForTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, ros::Duration(5.0));
listener->lookupTransform(frame_id, cloud->header.frame_id, cloud->header.stamp, transform); //ros::Time::now() -- Don't use this because processing time delay breaks it
    } catch (tf::TransformException &ex) {
ROS_ERROR("%s", ex.what());
return;
}
ros::Time start = ros::Time::now();
la3dm::point3f origin;
tf::Vector3 translation = transform.getOrigin();
tf::Quaternion orientation = transform.getRotation();
if (first || orientation.angleShortestPath(last_orientation) > orientation_change_thresh || translation.distance(last_position) > position_change_thresh)
{
ROS_INFO_STREAM("Cloud received");
last_position = translation;
last_orientation = orientation;
origin.x() = (float) translation.x();
origin.y() = (float) translation.y();
origin.z() = (float) translation.z();
sensor_msgs::PointCloud2 cloud_map;
pcl_ros::transformPointCloud(frame_id, *cloud, cloud_map, *listener);
la3dm::PCLPointCloud::Ptr pcl_cloud (new la3dm::PCLPointCloud());
pcl::fromROSMsg(cloud_map, *pcl_cloud);
//downsample for faster mapping
la3dm::PCLPointCloud filtered_cloud;
pcl::VoxelGrid<pcl::PointXYZ> filterer;
filterer.setInputCloud(pcl_cloud);
filterer.setLeafSize(ds_resolution, ds_resolution, ds_resolution);
filterer.filter(filtered_cloud);
if(filtered_cloud.size() > 5){
map->insert_pointcloud(filtered_cloud, origin, (float) resolution, (float) free_resolution, (float) max_range);
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("One cloud finished in " << (end - start).toSec() << "s");
updated = true;
}
if (updated)
{
ros::Time start2 = ros::Time::now();
m_pub_occ->clear();
m_pub_free->clear();
for (auto it = map->begin_leaf(); it != map->end_leaf(); ++it) {
la3dm::point3f p = it.get_loc();
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size)
{
m_pub_occ->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_occ->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution());
}
}
}
else if(it.get_node().get_state() == la3dm::State::FREE)
{
if (original_size)
{
m_pub_free->insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
}
else
{
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
{
m_pub_free->insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map->get_resolution(), it.get_node().get_prob());
}
}
}
}
updated = false;
m_pub_occ->publish();
m_pub_free->publish();
ros::Time end2 = ros::Time::now();
ROS_INFO_STREAM("One map published in " << (end2 - start2).toSec() << "s");
}
}
int main(int argc, char **argv) {
ros::init(argc, argv, "gpoctomap_server");
ros::NodeHandle nh("~");
//incoming pointcloud topic, this could be put into the .yaml too
std::string cloud_topic("/velodyne_points");
//Universal parameters
nh.param<std::string>("topic", map_topic_occ, map_topic_occ);
nh.param<std::string>("topic_free", map_topic_free, map_topic_free);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
//parameters for GPOctomap
nh.param<double>("noise", noise, noise);
nh.param<double>("l", l, l);
nh.param<double>("min_var", min_var, min_var);
nh.param<double>("max_var", max_var, max_var);
nh.param<double>("max_known_var", max_known_var, max_known_var);
ROS_INFO_STREAM("Parameters:" << std::endl <<
"topic: " << map_topic_occ << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"l: " << l << std::endl <<
"min_var: " << min_var << std::endl <<
"max_var: " << max_var << std::endl <<
"max_known_var: " << max_known_var << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size
);
map = new la3dm::GPOctoMap(resolution, block_depth, sf2, ell, noise, l, min_var, max_var, max_known_var, free_thresh, occupied_thresh);
ros::Subscriber point_sub = nh.subscribe<sensor_msgs::PointCloud2>(cloud_topic, 1, cloudHandler);
m_pub_occ = new la3dm::MarkerArrayPub(nh, map_topic_occ, resolution);
m_pub_free = new la3dm::MarkerArrayPub(nh, map_topic_free, resolution);
listener = new tf::TransformListener();
    ros::spin();
return 0;
}
| 7,686 | 35.43128 | 177 | cpp |
la3dm | la3dm-master/src/gpoctomap/gpoctomap_static_node.cpp | #include <string>
#include <iostream>
#include <ros/ros.h>
#include "gpoctomap.h"
#include "markerarray_pub.h"
void load_pcd(std::string filename, la3dm::point3f &origin, la3dm::PCLPointCloud &cloud) {
pcl::PCLPointCloud2 cloud2;
Eigen::Vector4f _origin;
    Eigen::Quaternionf orientation;
    pcl::io::loadPCDFile(filename, cloud2, _origin, orientation);
pcl::fromPCLPointCloud2(cloud2, cloud);
origin.x() = _origin[0];
origin.y() = _origin[1];
origin.z() = _origin[2];
}
int main(int argc, char **argv) {
ros::init(argc, argv, "gpoctomap_static_node");
ros::NodeHandle nh("~");
std::string dir;
std::string prefix;
int scan_num = 0;
std::string map_topic("/occupied_cells_vis_array");
std::string map_topic2("/free_cells_vis_array");
double max_range = -1;
double resolution = 0.1;
int block_depth = 4;
double sf2 = 1.0;
double ell = 1.0;
double noise = 0.01;
double l = 100;
double min_var = 0.001;
double max_var = 1000;
double max_known_var = 0.02;
double free_resolution = 0.5;
double ds_resolution = 0.1;
double free_thresh = 0.3;
double occupied_thresh = 0.7;
double min_z = 0;
double max_z = 0;
bool original_size = false;
nh.param<std::string>("dir", dir, dir);
nh.param<std::string>("prefix", prefix, prefix);
nh.param<std::string>("topic", map_topic, map_topic);
nh.param<std::string>("topic2", map_topic2, map_topic2);
nh.param<int>("scan_num", scan_num, scan_num);
nh.param<double>("max_range", max_range, max_range);
nh.param<double>("resolution", resolution, resolution);
nh.param<int>("block_depth", block_depth, block_depth);
nh.param<double>("sf2", sf2, sf2);
nh.param<double>("ell", ell, ell);
nh.param<double>("noise", noise, noise);
nh.param<double>("l", l, l);
nh.param<double>("min_var", min_var, min_var);
nh.param<double>("max_var", max_var, max_var);
nh.param<double>("max_known_var", max_known_var, max_known_var);
nh.param<double>("free_resolution", free_resolution, free_resolution);
nh.param<double>("ds_resolution", ds_resolution, ds_resolution);
nh.param<double>("free_thresh", free_thresh, free_thresh);
nh.param<double>("occupied_thresh", occupied_thresh, occupied_thresh);
nh.param<double>("min_z", min_z, min_z);
nh.param<double>("max_z", max_z, max_z);
nh.param<bool>("original_size", original_size, original_size);
    ROS_INFO_STREAM("Parameters:" << std::endl <<
                    "dir: " << dir << std::endl <<
                    "prefix: " << prefix << std::endl <<
                    "topic: " << map_topic << std::endl <<
                    "scan_num: " << scan_num << std::endl <<
"max_range: " << max_range << std::endl <<
"resolution: " << resolution << std::endl <<
"block_depth: " << block_depth << std::endl <<
"sf2: " << sf2 << std::endl <<
"ell: " << ell << std::endl <<
"l: " << l << std::endl <<
"min_var: " << min_var << std::endl <<
"max_var: " << max_var << std::endl <<
"max_known_var: " << max_known_var << std::endl <<
"free_resolution: " << free_resolution << std::endl <<
"ds_resolution: " << ds_resolution << std::endl <<
"free_thresh: " << free_thresh << std::endl <<
"occupied_thresh: " << occupied_thresh << std::endl <<
"min_z: " << min_z << std::endl <<
"max_z: " << max_z << std::endl <<
"original_size: " << original_size
);
la3dm::GPOctoMap map(resolution, block_depth, sf2, ell, noise, l, min_var, max_var, max_known_var, free_thresh, occupied_thresh);
ros::Time start = ros::Time::now();
for (int scan_id = 1; scan_id <= scan_num; ++scan_id) {
la3dm::PCLPointCloud cloud;
la3dm::point3f origin;
std::string filename(dir + "/" + prefix + "_" + std::to_string(scan_id) + ".pcd");
load_pcd(filename, origin, cloud);
        map.insert_pointcloud(cloud, origin, ds_resolution, free_resolution, max_range);
ROS_INFO_STREAM("Scan " << scan_id << " done");
}
ros::Time end = ros::Time::now();
ROS_INFO_STREAM("Mapping finished in " << (end - start).toSec() << "s");
///////// Publish Map /////////////////////
la3dm::MarkerArrayPub m_pub(nh, map_topic, resolution);
la3dm::MarkerArrayPub m_pub2(nh, map_topic2, resolution);
if (min_z == max_z) {
la3dm::point3f lim_min, lim_max;
map.get_bbox(lim_min, lim_max);
min_z = lim_min.z();
max_z = lim_max.z();
}
for (auto it = map.begin_leaf(); it != map.end_leaf(); ++it) {
la3dm::point3f p = it.get_loc();
if (it.get_node().get_state() == la3dm::State::OCCUPIED) {
if (original_size) {
m_pub.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size());
} else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution());
}
}
else if (it.get_node().get_state() == la3dm::State::FREE) {
if (original_size) {
m_pub2.insert_point3d(p.x(), p.y(), p.z(), min_z, max_z, it.get_size(), it.get_node().get_prob());
} else {
auto pruned = it.get_pruned_locs();
for (auto n = pruned.cbegin(); n < pruned.cend(); ++n)
m_pub2.insert_point3d(n->x(), n->y(), n->z(), min_z, max_z, map.get_resolution(), it.get_node().get_prob());
}
}
}
m_pub.publish();
m_pub2.publish();
ros::spin();
return 0;
}
| 5,814 | 38.828767 | 133 | cpp |
la3dm | la3dm-master/src/gpoctomap/gpoctree.cpp | #include "gpoctree.h"
#include <cmath>
namespace la3dm {
unsigned short OcTree::max_depth = 0;
OcTreeHashKey node_to_hash_key(unsigned short depth, unsigned short index) {
return (depth << 16) + index;
}
void hash_key_to_node(OcTreeHashKey key, unsigned short &depth, unsigned short &index) {
depth = (unsigned short) (key >> 16);
index = (unsigned short) (key & 0xFFFF);
}
OcTree::OcTree() {
if (max_depth <= 0)
node_arr = nullptr;
else {
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
node_arr[i] = new OcTreeNode[(int) pow(8, i)]();
}
}
}
OcTree::~OcTree() {
if (node_arr != nullptr) {
for (unsigned short i = 0; i < max_depth; ++i) {
if (node_arr[i] != nullptr) {
delete[] node_arr[i];
}
}
delete[] node_arr;
}
}
OcTree::OcTree(const OcTree &other) {
if (other.node_arr == nullptr) {
node_arr = nullptr;
return;
}
node_arr = new OcTreeNode *[max_depth]();
for (unsigned short i = 0; i < max_depth; ++i) {
if (other.node_arr[i] != nullptr) {
int n = (int) pow(8, i);
node_arr[i] = new OcTreeNode[n]();
                std::copy(other.node_arr[i], other.node_arr[i] + n, node_arr[i]);
} else
node_arr[i] = nullptr;
}
}
    OcTree &OcTree::operator=(const OcTree &other) {
        if (this == &other)
            return *this;
        // Deep-copy the other tree's layers first.
        OcTreeNode **local_node_arr = nullptr;
        if (other.node_arr != nullptr) {
            local_node_arr = new OcTreeNode *[max_depth]();
            for (unsigned short i = 0; i < max_depth; ++i) {
                if (other.node_arr[i] != nullptr) {
                    int n = (int) pow(8, i);
                    local_node_arr[i] = new OcTreeNode[n]();
                    std::copy(other.node_arr[i], other.node_arr[i] + n, local_node_arr[i]);
                }
            }
        }
        // Release the old storage before taking ownership of the copy.
        if (node_arr != nullptr) {
            for (unsigned short i = 0; i < max_depth; ++i)
                delete[] node_arr[i];
            delete[] node_arr;
        }
        node_arr = local_node_arr;
        return *this;
    }
bool OcTree::is_leaf(unsigned short depth, unsigned short index) const {
if (node_arr != nullptr && node_arr[depth] != nullptr && node_arr[depth][index].get_state() != State::PRUNED) {
if (depth + 1 < max_depth) {
if (node_arr[depth + 1] == nullptr || node_arr[depth + 1][index * 8].get_state() == State::PRUNED)
return true;
} else {
return true;
}
}
return false;
}
bool OcTree::is_leaf(OcTreeHashKey key) const {
unsigned short depth = 0;
unsigned short index = 0;
hash_key_to_node(key, depth, index);
return is_leaf(depth, index);
}
bool OcTree::search(OcTreeHashKey key) const {
unsigned short depth;
unsigned short index;
hash_key_to_node(key, depth, index);
return node_arr != nullptr &&
node_arr[depth] != nullptr &&
node_arr[depth][index].get_state() != State::PRUNED;
}
bool OcTree::prune() {
if (node_arr == nullptr)
return false;
bool pruned = false;
for (unsigned short depth = max_depth - 1; depth > 0; --depth) {
OcTreeNode *layer = node_arr[depth];
OcTreeNode *parent_layer = node_arr[depth - 1];
if (layer == nullptr)
continue;
bool empty_layer = true;
unsigned int n = (unsigned int) pow(8, depth);
            for (unsigned int index = 0; index < n; index += 8) {  // unsigned int: 8^depth can overflow unsigned short
State state = layer[index].get_state();
if (state == State::UNKNOWN) {
empty_layer = false;
continue;
}
if (state == State::PRUNED)
continue;
                bool collapsible = true;
                for (unsigned short i = 1; i < 8; ++i) {
                    if (layer[index + i].get_state() != state) {
                        collapsible = false;
                        break;
                    }
                }
if (collapsible) {
                    parent_layer[index / 8] = layer[index];
for (unsigned short i = 0; i < 8; ++i) {
layer[index + i].prune();
}
pruned = true;
} else {
empty_layer = false;
}
}
if (empty_layer) {
delete[] layer;
node_arr[depth] = nullptr;
}
}
return pruned;
}
OcTreeNode &OcTree::operator[](OcTreeHashKey key) const {
unsigned short depth;
unsigned short index;
hash_key_to_node(key, depth, index);
return node_arr[depth][index];
}
} | 4,952 | 30.954839 | 119 | cpp |
la3dm | la3dm-master/src/gpoctomap/gpoctree_node.cpp | #include "gpoctree_node.h"
#include "gpregressor.h"
#include <cmath>
namespace la3dm {
/// Default static values
float Occupancy::sf2 = 1.0f;
float Occupancy::ell = 1.0f;
float Occupancy::noise = 0.01f;
float Occupancy::l = 100.f;
float Occupancy::max_ivar = 1000.0f;
float Occupancy::min_ivar = 0.001f;
float Occupancy::min_known_ivar = 10.0f;
float Occupancy::free_thresh = 0.3f;
float Occupancy::occupied_thresh = 0.7f;
Occupancy::Occupancy(float m, float var) : m_ivar(m / var), ivar(1.0f / var) {
classified = false;
if (ivar < Occupancy::min_known_ivar)
state = State::UNKNOWN;
else {
ivar = ivar > Occupancy::max_ivar ? Occupancy::max_ivar : ivar;
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
float Occupancy::get_prob() const {
// logistic regression function
return 1.0f / (1.0f + (float) exp(-l * m_ivar / Occupancy::max_ivar));
}
void Occupancy::update(float new_m, float new_var) {
classified = true;
ivar += 1.0 / new_var - Occupancy::sf2;
m_ivar += new_m / new_var;
if (ivar < Occupancy::min_known_ivar)
state = State::UNKNOWN;
else {
// chop variance
ivar = ivar > Occupancy::max_ivar ? Occupancy::max_ivar : ivar;
float p = get_prob();
state = p > Occupancy::occupied_thresh ? State::OCCUPIED : (p < Occupancy::free_thresh ? State::FREE
: State::UNKNOWN);
}
}
std::ofstream &operator<<(std::ofstream &os, const Occupancy &oc) {
os.write((char *) &oc.m_ivar, sizeof(oc.m_ivar));
os.write((char *) &oc.ivar, sizeof(oc.ivar));
return os;
}
std::ifstream &operator>>(std::ifstream &is, Occupancy &oc) {
float m_ivar, ivar;
is.read((char *) &m_ivar, sizeof(m_ivar));
is.read((char *) &ivar, sizeof(ivar));
oc = OcTreeNode(m_ivar / ivar, 1.0f / ivar);
return is;
}
std::ostream &operator<<(std::ostream &os, const Occupancy &oc) {
return os << '(' << (oc.m_ivar / oc.ivar) << ' ' << (1.0 / oc.ivar) << ' ' << oc.get_prob() << ')';
}
}
| 2,517 | 35.492754 | 117 | cpp |
label_propagation | label_propagation-master/README.md | # Implementations of label propagation like algorithms
This is a set of scikit-learn compatible implementations of label propagation (LP) like algorithms.
Each model follows the scikit-learn estimator API, so you can grid-search and cross-validate it with the standard scikit-learn utilities.
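Since the estimators follow the scikit-learn API, a grid search works out of the box. A minimal sketch (assuming an adjacency matrix `A` and labeled node/label arrays `x_train`/`x_test` are already built; it uses the same `sklearn.grid_search` module as `sample.py` in this repository):
```
from sklearn.grid_search import GridSearchCV
from label_propagation import LGC

gs = GridSearchCV(LGC(), {'graph': [A], 'alpha': [0.01, 0.1, 0.99]}, cv=5)
gs.fit(x_train, y_train)   # x_train: node IDs, y_train: label IDs
print(gs.best_estimator_)
```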
## Implemented Algorithms
* Harmonic Function (HMN) [Zhu+, ICML03]
* Local and Global Consistency (LGC) [Zhou+, NIPS04]
* Partially Absorbing Random Walk (PARW) [Wu+, NIPS12]
* OMNI-Prop (OMNIProp) [Yamaguchi+, AAAI15]
* Confidence-Aware Modulated Label Propagation (CAMLP) [Yamaguchi+, SDM16]
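All of these models share the same fixed-point iteration; they differ mainly in how the propagation matrix `P` and the base (prior) matrix `B` are built (CAMLP additionally modulates neighbor labels with an affinity matrix `H`). Schematically, mirroring `_propagate` in `label_propagation.py`:
```
F(t+1) = P * F(t) + B   # iterated max_iter times
```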
## Usage
### Example
```
python main.py hmn -g sample.edgelist -l sample.label -o sample.output
```
### Inputs
```
$ cat sample.edgelist # [src node id] [dst node id]
0 1
1 2
2 3
$ cat sample.label # [node id] [label id]
1 0
2 1
$ cat sample.modulation # KxK matrix (K: no. of labels)
0 1
1 0
```
### HMN
```
$ python main.py hmn -h
usage: main.py hmn [-h] -g GRAPHFILE -l LABELFILE [-o [OUTFILE]]
optional arguments:
-h, --help show this help message and exit
-g GRAPHFILE, --graphfile GRAPHFILE
input graph file
-l LABELFILE, --labelfile LABELFILE
input label file
-o [OUTFILE], --outfile [OUTFILE]
output file (default=STDOUT)
```
### LGC
```
$ python main.py lgc -h
usage: main.py lgc [-h] -g GRAPHFILE -l LABELFILE [-o [OUTFILE]]
[--alpha [ALPHA]]
optional arguments:
-h, --help show this help message and exit
-g GRAPHFILE, --graphfile GRAPHFILE
input graph file
-l LABELFILE, --labelfile LABELFILE
input label file
-o [OUTFILE], --outfile [OUTFILE]
output file (default=STDOUT)
--alpha [ALPHA] alpha (default=0.99)
```
### PARW
```
$ python main.py parw -h
usage: main.py parw [-h] -g GRAPHFILE -l LABELFILE [-o [OUTFILE]]
[--lamb [LAMB]]
optional arguments:
-h, --help show this help message and exit
-g GRAPHFILE, --graphfile GRAPHFILE
input graph file
-l LABELFILE, --labelfile LABELFILE
input label file
-o [OUTFILE], --outfile [OUTFILE]
output file (default=STDOUT)
--lamb [LAMB] lambda (default=1.0)
```
### OMNIProp
```
$ python main.py omni -h
usage: main.py omni [-h] -g GRAPHFILE -l LABELFILE [-o [OUTFILE]]
[--lamb [LAMB]]
optional arguments:
-h, --help show this help message and exit
-g GRAPHFILE, --graphfile GRAPHFILE
input graph file
-l LABELFILE, --labelfile LABELFILE
input label file
-o [OUTFILE], --outfile [OUTFILE]
output file (default=STDOUT)
--lamb [LAMB] lambda (default=1.0)
```
### CAMLP
```
$ python main.py camlp -h
usage: main.py camlp [-h] -g GRAPHFILE -l LABELFILE [-o [OUTFILE]]
[--beta [BETA]] [--modulationfile [MODULATIONFILE]]
optional arguments:
-h, --help show this help message and exit
-g GRAPHFILE, --graphfile GRAPHFILE
input graph file
-l LABELFILE, --labelfile LABELFILE
input label file
-o [OUTFILE], --outfile [OUTFILE]
output file (default=STDOUT)
--beta [BETA] beta (default=0.1)
--modulationfile [MODULATIONFILE]
modulation matrix file (default: use identity)
```
## Usage in code
```
In [1]: import numpy as np
In [2]: import networkx as nx
In [3]: from label_propagation import LGC
In [4]: from scipy.sparse import lil_matrix
In [5]: A = lil_matrix((4,4)) # adjacency matrix
In [6]: A[0,1]=1; A[1,0]=1
In [7]: A[1,2]=1; A[2,1]=1
In [8]: A[2,3]=1; A[3,2]=1
In [9]: A.todense() # simple undirected chain
Out[9]:
matrix([[ 0., 1., 0., 0.],
[ 1., 0., 1., 0.],
[ 0., 1., 0., 1.],
[ 0., 0., 1., 0.]])
In [10]: x_train = np.array([1,2])
In [11]: y_train = np.array([0,1]) # node 1 -> label 0, node 2 -> label 1
In [12]: clf = LGC(graph=A, alpha=0.99)
In [13]: clf.fit(x_train,y_train) # scikit-learn compatible
Out[13]:
LGC(alpha=0.99,
graph=<4x4 sparse matrix of type '<type 'numpy.float64'>'
with 6 stored elements in LInked List format>,
max_iter=30)
In [14]: x_test = np.array([0,3]) # to predict labels of node 0 and node 3
In [15]: clf.predict(x_test) # scikit-learn compatible
Out[15]: array([0, 1])
```
| 4,475 | 25.329412 | 99 | md |
label_propagation | label_propagation-master/label_propagation.py | # coding=utf8
"""
Graph-Based Semi-Supervised Learning (GBSSL) implementation.
"""
# Authors: Yuto Yamaguchi <[email protected]>
# Lisence: MIT
import numpy as np
from scipy import sparse
from abc import ABCMeta, abstractmethod
from sklearn.base import BaseEstimator, ClassifierMixin
class Base(BaseEstimator, ClassifierMixin):
__metaclass__ = ABCMeta
def __init__(self,graph,max_iter=30):
self.max_iter = max_iter
self.graph = graph
@abstractmethod
def _build_propagation_matrix(self):
raise NotImplementedError("Propagation matrix construction must be implemented to fit a model.")
@abstractmethod
def _build_base_matrix(self):
raise NotImplementedError("Base matrix construction must be implemented to fit a model.")
def _init_label_matrix(self):
n_samples = self.graph.shape[0]
n_classes = self.y_.max()+1
return np.zeros((n_samples,n_classes))
def _arrange_params(self):
"""Do nothing by default"""
pass
def fit(self,x,y):
"""Fit a graph-based semi-supervised learning model
        All input data is provided as the array x (node IDs of the labeled
        samples only) and the corresponding label array y.
Parameters
----------
x : array_like, shape = [n_labeled_samples]
Node IDs of labeled samples
y : array_like, shape = [n_labeled_samples]
Label IDs of labeled samples
Returns
-------
self : returns an instance of self.
"""
self.x_ = x
self.y_ = y
self._arrange_params()
self.F_ = self._init_label_matrix()
self.P_ = self._build_propagation_matrix()
self.B_ = self._build_base_matrix()
remaining_iter = self.max_iter
while remaining_iter > 0:
self.F_ = self._propagate()
remaining_iter -= 1
return self
def _propagate(self):
return self.P_.dot(self.F_) + self.B_
def predict(self,x):
"""Performs prediction based on the fitted model
Parameters
----------
x : array_like, shape = [n_samples]
Node IDs
Returns
-------
y : array_like, shape = [n_samples]
Predictions for input node IDs
"""
probas = self.predict_proba(x)
return np.argmax(probas,axis=1)
def predict_proba(self,x):
"""Predict probability for each possible label
Parameters
----------
x : array_like, shape = [n_samples]
Node IDs
Returns
-------
probabilities : array_like, shape = [n_samples, n_classes]
Probability distributions across class labels
"""
z = np.sum(self.F_[x], axis=1)
z[z==0] += 1 # Avoid division by 0
return (self.F_[x].T / z).T
class LGC(Base):
"""Local and Global Consistency (LGC) for GBSSL
Parameters
----------
alpha : float
clamping factor
max_iter : float
maximum number of iterations allowed
Attributes
----------
x_ : array, shape = [n_samples]
Input array of node IDs.
References
----------
Zhou, D., Bousquet, O., Lal, T. N., Weston, J., & Schölkopf, B. (2004).
Learning with local and global consistency.
Advances in neural information processing systems, 16(16), 321-328.
"""
def __init__(self,graph=None,alpha=0.99,max_iter=30):
        super(LGC, self).__init__(graph, max_iter=max_iter)
self.alpha=alpha
def _build_propagation_matrix(self):
""" LGC computes the normalized Laplacian as its propagation matrix"""
degrees = self.graph.sum(axis=0).A[0]
degrees[degrees==0] += 1 # Avoid division by 0
D2 = np.sqrt(sparse.diags((1.0/degrees),offsets=0))
S = D2.dot(self.graph).dot(D2)
return self.alpha*S
def _build_base_matrix(self):
n_samples = self.graph.shape[0]
n_classes = self.y_.max()+1
B = np.zeros((n_samples,n_classes))
B[self.x_,self.y_] = 1
return (1-self.alpha)*B
class HMN(Base):
    """Harmonic function (HMN) for GBSSL
Parameters
----------
max_iter : float
maximum number of iterations allowed
Attributes
----------
x_ : array, shape = [n_samples]
Input array of node IDs.
References
----------
Zhu, X., Ghahramani, Z., & Lafferty, J. (2003, August).
Semi-supervised learning using gaussian fields and harmonic functions.
In ICML (Vol. 3, pp. 912-919).
"""
def __init__(self,graph=None,max_iter=30):
        super(HMN, self).__init__(graph, max_iter=max_iter)
def _build_propagation_matrix(self):
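        # Row-normalized adjacency D^-1 A, with rows of labeled nodes zeroed
        # so that labeled nodes stay clamped to their observed labels.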
degrees = self.graph.sum(axis=0).A[0]
degrees[degrees==0] += 1 # Avoid division by 0
D = sparse.diags((1.0/degrees),offsets=0)
P = D.dot(self.graph).tolil()
P[self.x_] = 0
return P.tocsr()
def _build_base_matrix(self):
n_samples = self.graph.shape[0]
n_classes = self.y_.max()+1
B = np.zeros((n_samples,n_classes))
B[self.x_,self.y_] = 1
return B
class PARW(Base):
"""Partially Absorbing Random Walk (PARW) for GBSSL
Parameters
----------
lamb: float (default=0.001)
Absorbing parameter
max_iter : float
maximum number of iterations allowed
Attributes
----------
x_ : array, shape = [n_samples]
Input array of node IDs.
References
----------
Wu, X. M., Li, Z., So, A. M., Wright, J., & Chang, S. F. (2012).
Learning with partially absorbing random walks.
In Advances in Neural Information Processing Systems (pp. 3077-3085).
"""
def __init__(self,graph=None,lamb=1.0,max_iter=30):
        super(PARW, self).__init__(graph, max_iter=max_iter)
self.lamb=lamb
def _build_propagation_matrix(self):
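        # P = (D + lamb*I)^-1 A: at every step a fraction of the random walk
        # is absorbed at the current node, controlled by lamb.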
d = self.graph.sum(axis=1).T.A[0]
Z = sparse.diags(1.0 / (d+self.lamb), offsets=0)
P = Z.dot(self.graph)
return P
def _build_base_matrix(self):
n_samples = self.graph.shape[0]
n_classes = self.y_.max()+1
B = np.zeros((n_samples,n_classes))
B[self.x_,self.y_] = 1
d = np.array(self.graph.sum(1).T)[0]
Z = sparse.diags(1.0 / (d+self.lamb), offsets=0)
        # sparse.diags needs an array of diagonal entries, not a bare scalar
        Lamb = sparse.diags(self.lamb * np.ones(n_samples), offsets=0)
return Z.dot(Lamb).dot(B)
class OMNI(Base):
"""OMNI-Prop for GBSSL
Parameters
----------
lamb : float > 0 (default = 1.0)
Define importance between prior and evidence from neighbors
max_iter : float
maximum number of iterations allowed
Attributes
----------
x_ : array, shape = [n_samples]
Input array of node IDs.
References
----------
Yamaguchi, Y., Faloutsos, C., & Kitagawa, H. (2015, February).
OMNI-Prop: Seamless Node Classification on Arbitrary Label Correlation.
In Twenty-Ninth AAAI Conference on Artificial Intelligence.
"""
def __init__(self,graph=None,lamb=1.0,max_iter=30):
super(OMNI,self).__init__(graph,max_iter)
self.lamb = lamb
def _build_propagation_matrix(self):
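        # Two-hop propagation through shared neighbors:
        # Q = (D + lamb*I)^-1 A (Dt + lamb*I)^-1 A^T, with rows of labeled
        # nodes zeroed afterwards so their labels stay clamped.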
d = self.graph.sum(axis=0).A[0]
dT = self.graph.sum(axis=1).T.A[0]
Q = (sparse.diags(1.0/(d+self.lamb), offsets=0).dot(self.graph)).dot(sparse.diags(1.0/(dT+self.lamb),offsets=0).dot(self.graph.T)).tolil()
Q[self.x_] = 0
return Q
def _build_base_matrix(self):
n_samples = self.graph.shape[0]
n_classes = self.y_.max()+1
unlabeled = np.setdiff1d(np.arange(n_samples),self.x_)
dU = self.graph[unlabeled].sum(axis=1).T.A[0]
dT = self.graph.sum(axis=0).A[0]
r = sparse.diags(1.0/(dU+self.lamb),offsets=0).dot(self.lamb*self.graph[unlabeled].dot(sparse.diags(1.0/(dT+self.lamb),offsets=0)).dot(np.ones(n_samples))+self.lamb)
b = np.ones(n_classes) / float(n_classes)
B = np.zeros((n_samples,n_classes))
B[unlabeled] = np.outer(r,b)
B[self.x_,self.y_] = 1
return B
class CAMLP(Base):
"""Confidence-Aware Modulated Label Propagation (CAMLP) for GBSSL
Parameters
----------
beta : float > 0 (default = 0.1)
Define importance between prior and evidence from neighbors
H : array_like, shape = [n_classes, n_classes]
Define affinities between labels
if None, identity matrix is set
max_iter : float
maximum number of iterations allowed
Attributes
----------
x_ : array, shape = [n_samples]
Input array of node IDs.
References
----------
Yamaguchi, Y., Faloutsos, C., & Kitagawa, H. (2016, May).
CAMLP: Confidence-Aware Modulated Label Propagation.
In SIAM International Conference on Data Mining.
"""
def __init__(self,graph=None,beta=0.1,H=None,max_iter=30):
super(CAMLP,self).__init__(graph,max_iter)
self.beta=beta
self.H=H
def _arrange_params(self):
if self.H is None:
n_classes = self.y_.max()+1
self.H = np.identity(n_classes)
self.Z = self._build_normalization_term()
def _propagate(self):
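        # Neighbor evidence is modulated by the label-affinity matrix H
        # before the prior term B is added.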
return self.P_.dot(self.F_).dot(self.H) + self.B_
def _build_normalization_term(self):
d = self.graph.sum(axis=1).T.A[0]
return sparse.diags(1.0/(1.0+d*self.beta),offsets=0)
def _build_propagation_matrix(self):
return self.Z.dot(self.beta*self.graph)
def _build_base_matrix(self):
n_samples = self.graph.shape[0]
n_classes = self.y_.max()+1
B = np.ones((n_samples,n_classes))/float(n_classes)
B[self.x_] = 0
B[self.x_,self.y_] = 1
return self.Z.dot(B)
| 9,790 | 28.669697 | 173 | py |
label_propagation | label_propagation-master/main.py | import sys
import argparse
import numpy as np
from scipy.sparse import csr_matrix
from label_propagation import HMN,LGC,PARW,OMNI,CAMLP
def read_graphfile(f):
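    # Each "src dst" line becomes a symmetric (undirected) edge in a CSR
    # adjacency matrix sized by the largest node ID seen.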
graph_data = np.genfromtxt(f, delimiter=' ', dtype=int)
row = np.hstack([graph_data[:,0], graph_data[:,1]])
col = np.hstack([graph_data[:,1], graph_data[:,0]])
max_nid = np.max(row)
return csr_matrix((np.ones(len(row)), (row,col)), shape=(max_nid+1,max_nid+1))
def read_labelfile(f):
label_data = np.genfromtxt(f, delimiter=' ', dtype=int)
return label_data[:,0],label_data[:,1]
p = argparse.ArgumentParser()
subparsers = p.add_subparsers(help='sub-command help', title='subcommands', dest='subparser_name')
hmn_p = subparsers.add_parser('hmn', help='HMN')
hmn_p.add_argument("-g", "--graphfile", help="input graph file", type=argparse.FileType('r'), required=True)
hmn_p.add_argument("-l", "--labelfile", help="input label file", type=argparse.FileType('r'), required=True)
hmn_p.add_argument("-o", "--outfile", help="output file (default=STDOUT)", type=argparse.FileType('w'), nargs='?', default=sys.stdout)
lgc_p = subparsers.add_parser('lgc', help='LGC')
lgc_p.add_argument("-g", "--graphfile", help="input graph file", type=argparse.FileType('r'), required=True)
lgc_p.add_argument("-l", "--labelfile", help="input label file", type=argparse.FileType('r'), required=True)
lgc_p.add_argument("-o", "--outfile", help="output file (default=STDOUT)", type=argparse.FileType('w'), nargs='?', default=sys.stdout)
lgc_p.add_argument("--alpha", help="alpha (default=0.99)", type=float, nargs='?', default=0.99)
parw_p = subparsers.add_parser('parw', help='PARW')
parw_p.add_argument("-g", "--graphfile", help="input graph file", type=argparse.FileType('r'), required=True)
parw_p.add_argument("-l", "--labelfile", help="input label file", type=argparse.FileType('r'), required=True)
parw_p.add_argument("-o", "--outfile", help="output file (default=STDOUT)", type=argparse.FileType('w'), nargs='?', default=sys.stdout)
parw_p.add_argument("--lamb", help="lambda (default=1.0)", type=float, nargs='?', default=1.0)
omni_p = subparsers.add_parser('omni', help='OMNI')
omni_p.add_argument("-g", "--graphfile", help="input graph file", type=argparse.FileType('r'), required=True)
omni_p.add_argument("-l", "--labelfile", help="input label file", type=argparse.FileType('r'), required=True)
omni_p.add_argument("-o", "--outfile", help="output file (default=STDOUT)", type=argparse.FileType('w'), nargs='?', default=sys.stdout)
omni_p.add_argument("--lamb", help="lambda (default=1.0)", type=float, nargs='?', default=1.0)
camlp_p = subparsers.add_parser('camlp', help='CAMLP')
camlp_p.add_argument("-g", "--graphfile", help="input graph file", type=argparse.FileType('r'), required=True)
camlp_p.add_argument("-l", "--labelfile", help="input label file", type=argparse.FileType('r'), required=True)
camlp_p.add_argument("-o", "--outfile", help="output file (default=STDOUT)", type=argparse.FileType('w'), nargs='?', default=sys.stdout)
camlp_p.add_argument("--beta", help="beta (default=0.1)", type=float, nargs='?', default=0.1)
camlp_p.add_argument("--modulationfile", help="modulation matrix file (default: use identity)", type=argparse.FileType('r'), nargs='?', default=None)
args = p.parse_args()
G = read_graphfile(args.graphfile).tolil()
x,y = read_labelfile(args.labelfile)
if args.subparser_name == 'hmn':
clf = HMN(graph=G)
elif args.subparser_name == 'lgc':
clf = LGC(graph=G,alpha=args.alpha)
elif args.subparser_name == 'parw':
clf = PARW(graph=G,lamb=args.lamb)
elif args.subparser_name == 'omni':
clf = OMNI(graph=G, lamb=args.lamb)
elif args.subparser_name == 'camlp':
    H = np.genfromtxt(args.modulationfile, delimiter=' ') if args.modulationfile else None
clf = CAMLP(graph=G, beta=args.beta, H=H)
clf.fit(x,y)
predicted = clf.predict_proba(np.arange(G.shape[0]))
print('Node ID,Predicted label ID,%s' % ','.join(['Prob %s' % v for v in range(predicted.shape[1])]), file=args.outfile)
for i in range(predicted.shape[0]):
    print("%s,%s,%s" % (i, predicted[i].argmax(), ','.join(map(str, predicted[i]))), file=args.outfile)
| 4,107 | 57.685714 | 149 | py |
label_propagation | label_propagation-master/sample.py | import numpy as np
import networkx as nx
from scipy import sparse
from label_propagation import LGC,HMN,PARW,OMNIProp,CAMLP
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split
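# Zachary's karate club graph: 34 members split into two factions ('Mr. Hi' and 'Officer').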
G = nx.karate_club_graph()
labels = {'Officer':0, 'Mr. Hi':1}
nodes = np.array([(n,labels[attr['club']]) for n,attr in G.nodes(data=True)])
x = nodes[:,0]
y = nodes[:,1]
A = nx.to_scipy_sparse_matrix(G,nodelist=x)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,random_state=0)
methods = [('HMN', HMN(), {'graph':[A]}),
('LGC', LGC(), {'graph':[A], 'alpha':[0.01,0.05,0.1,0.5,0.99]}),
('PARW', PARW(), {'graph':[A], 'lamb':[0.01, 0.05, 0.01, 0.5, 0.99]}),
('OMNIProp', OMNIProp(), {'graph':[A], 'lamb':[0.01, 0.1, 1.0, 10.0, 100.0]}),
('CAMLP', CAMLP(), {'graph':[A], 'beta':[0.01, 0.1, 1.0, 10.0, 100.0], 'H':[np.array([[1,0],[0,1]]), np.array([[0,1],[1,0]])]})]
for name, clf, params in methods:
print "========================="
print name
gs = GridSearchCV(clf, params, cv=5)
gs.fit(x_train,y_train)
print
print "Grid Scores:"
for score in gs.grid_scores_:
print score
model = gs.best_estimator_
model.fit(x_train,y_train)
print "\nBest Estimator:"
print model
predicted = model.predict(x_test)
acc = (predicted==y_test).mean()
print "\nAccuracy: %s" % acc
print
| 1,435 | 32.395349 | 140 | py |
trudy | trudy-master/main.go | package main
import (
"crypto/tls"
"encoding/hex"
"flag"
"fmt"
"github.com/gorilla/websocket"
"github.com/praetorian-inc/trudy/listener"
"github.com/praetorian-inc/trudy/module"
"github.com/praetorian-inc/trudy/pipe"
"io"
"log"
"net"
"net/http"
"strings"
"sync"
)
var connectionCount uint
var websocketConn *websocket.Conn
var websocketMutex *sync.Mutex
var tlsConfig *tls.Config
func main() {
var tcpport string
var tlsport string
var x509 string
var key string
var showConnectionAttempts bool
flag.StringVar(&tcpport, "tcp", "6666", "Listening port for non-TLS connections.")
flag.StringVar(&tlsport, "tls", "6443", "Listening port for TLS connections.")
flag.StringVar(&x509, "x509", "./certificate/trudy.cer", "Path to x509 certificate that will be presented for TLS connection.")
flag.StringVar(&key, "key", "./certificate/trudy.key", "Path to the corresponding private key for the specified x509 certificate")
flag.BoolVar(&showConnectionAttempts, "show", true, "Show connection open and close messages")
flag.Parse()
tcpport = ":" + tcpport
tlsport = ":" + tlsport
setup(tcpport, tlsport, x509, key, showConnectionAttempts)
}
func setup(tcpport, tlsport, x509, key string, show bool) {
//Setup non-TLS TCP listener!
tcpAddr, err := net.ResolveTCPAddr("tcp", tcpport)
if err != nil {
log.Printf("There appears to be an error with the TCP port you specified. See error below.\n%v\n", err.Error())
return
}
tcpListener := new(listener.TCPListener)
//Setup TLS listener!
trdy, err := tls.LoadX509KeyPair(x509, key)
if err != nil {
log.Printf("There appears to be an error with the x509 or key values specified. See error below.\n%v\n", err.Error())
return
}
tlsConfig = &tls.Config{
Certificates: []tls.Certificate{trdy},
InsecureSkipVerify: true,
}
tlsAddr, err := net.ResolveTCPAddr("tcp", tlsport)
if err != nil {
log.Printf("There appears to be an error with the TLS port specified. See error below.\n%v\n", err.Error())
return
}
tlsListener := new(listener.TLSListener)
//All good. Start listening.
tcpListener.Listen("tcp", tcpAddr, &tls.Config{})
tlsListener.Listen("tcp", tlsAddr, tlsConfig)
log.Println("[INFO] Trudy lives!")
log.Printf("[INFO] Listening for TLS connections on port %s\n", tlsport)
log.Printf("[INFO] Listening for all other TCP connections on port %s\n", tcpport)
go websocketHandler()
go connectionDispatcher(tlsListener, "TLS", show)
connectionDispatcher(tcpListener, "TCP", show)
}
func connectionDispatcher(listener listener.TrudyListener, name string, show bool) {
defer listener.Close()
for {
fd, conn, err := listener.Accept()
if err != nil {
continue
}
p := new(pipe.TrudyPipe)
if name == "TLS" {
err = p.New(connectionCount, fd, conn, true)
} else {
err = p.New(connectionCount, fd, conn, false)
}
if err != nil {
log.Println("[ERR] Error creating new pipe.")
continue
}
if show {
log.Printf("[INFO] ( %v ) %v Connection accepted!\n", connectionCount, name)
}
go clientHandler(p, show)
go serverHandler(p)
connectionCount++
}
}
func errHandler(err error) {
if err != nil {
panic(err)
}
}
//clientHandler manages data that is sent from the client to the server.
func clientHandler(pipe pipe.Pipe, show bool) {
if show {
defer log.Printf("[INFO] ( %v ) Closing TCP connection.\n", pipe.Id())
}
defer pipe.Close()
buffer := make([]byte, 65535)
for {
bytesRead, clientReadErr := pipe.ReadFromClient(buffer)
if clientReadErr != io.EOF && clientReadErr != nil {
break
}
if clientReadErr != io.EOF && bytesRead == 0 {
continue
}
data := module.Data{FromClient: true,
Bytes: buffer[:bytesRead],
TLSConfig: tlsConfig,
ServerAddr: pipe.ServerInfo(),
ClientAddr: pipe.ClientInfo()}
data.Deserialize()
if data.Drop() {
continue
}
if data.DoMangle() {
data.Mangle()
bytesRead = len(data.Bytes)
}
if data.DoIntercept() {
if websocketConn == nil {
log.Printf("[ERR] Websocket Connection has not been setup yet! Cannot intercept.")
continue
}
websocketMutex.Lock()
bs := fmt.Sprintf("% x", data.Bytes)
if err := websocketConn.WriteMessage(websocket.TextMessage, []byte(bs)); err != nil {
log.Printf("[ERR] Failed to write to websocket: %v\n", err)
websocketMutex.Unlock()
continue
}
_, moddedBytes, err := websocketConn.ReadMessage()
websocketMutex.Unlock()
if err != nil {
log.Printf("[ERR] Failed to read from websocket: %v\n", err)
continue
}
str := string(moddedBytes)
str = strings.Replace(str, " ", "", -1)
moddedBytes, err = hex.DecodeString(str)
if err != nil {
log.Printf("[ERR] Failed to decode hexedited data.")
continue
}
data.Bytes = moddedBytes
bytesRead = len(moddedBytes)
}
if data.DoPrint() {
log.Printf("%v -> %v\n%v\n", data.ClientAddr.String(), data.ServerAddr.String(), data.PrettyPrint())
}
data.Serialize()
data.BeforeWriteToServer(pipe)
bytesRead = len(data.Bytes)
_, serverWriteErr := pipe.WriteToServer(data.Bytes[:bytesRead])
if serverWriteErr != nil || clientReadErr == io.EOF {
break
}
data.AfterWriteToServer(pipe)
}
}
//serverHandler manages data that is sent from the server to the client.
func serverHandler(pipe pipe.Pipe) {
buffer := make([]byte, 65535)
defer pipe.Close()
for {
bytesRead, serverReadErr := pipe.ReadFromServer(buffer)
if serverReadErr != io.EOF && serverReadErr != nil {
break
}
if serverReadErr != io.EOF && bytesRead == 0 {
continue
}
data := module.Data{FromClient: false,
Bytes: buffer[:bytesRead],
TLSConfig: tlsConfig,
ClientAddr: pipe.ClientInfo(),
ServerAddr: pipe.ServerInfo()}
data.Deserialize()
if data.Drop() {
continue
}
if data.DoMangle() {
data.Mangle()
bytesRead = len(data.Bytes)
}
if data.DoIntercept() {
if websocketConn == nil {
log.Printf("[ERR] Websocket Connection has not been setup yet! Cannot intercept.")
continue
}
websocketMutex.Lock()
bs := fmt.Sprintf("% x", data.Bytes)
if err := websocketConn.WriteMessage(websocket.TextMessage, []byte(bs)); err != nil {
log.Printf("[ERR] Failed to write to websocket: %v\n", err)
websocketMutex.Unlock()
continue
}
_, moddedBytes, err := websocketConn.ReadMessage()
websocketMutex.Unlock()
if err != nil {
log.Printf("[ERR] Failed to read from websocket: %v\n", err)
continue
}
str := string(moddedBytes)
str = strings.Replace(str, " ", "", -1)
moddedBytes, err = hex.DecodeString(str)
if err != nil {
log.Printf("[ERR] Failed to decode hexedited data.")
continue
}
data.Bytes = moddedBytes
bytesRead = len(moddedBytes)
}
if data.DoPrint() {
log.Printf("%v -> %v\n%v\n", data.ServerAddr.String(), data.ClientAddr.String(), data.PrettyPrint())
}
data.Serialize()
data.BeforeWriteToClient(pipe)
bytesRead = len(data.Bytes)
_, clientWriteErr := pipe.WriteToClient(data.Bytes[:bytesRead])
if clientWriteErr != nil || serverReadErr == io.EOF {
break
}
data.AfterWriteToClient(pipe)
}
}
func websocketHandler() {
websocketMutex = &sync.Mutex{}
upgrader := websocket.Upgrader{ReadBufferSize: 65535, WriteBufferSize: 65535}
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, editor)
})
http.HandleFunc("/ws", func(w http.ResponseWriter, r *http.Request) {
var err error
websocketConn, err = upgrader.Upgrade(w, r, nil)
if err != nil {
log.Printf("[ERR] Could not upgrade websocket connection.")
return
}
})
err := http.ListenAndServe(":8080", nil)
if err != nil {
panic(err)
}
}
const editor string = `<!-- this wonderful page was found here: https://github.com/xem/hex -->
<body onload='
// Reset the textarea value
m.value="00";
// Init the top cell content
for(i=0;i<16;i++)
t.innerHTML+=(0+i.toString(16)).slice(-2)+" ";
'>
<!-- TRUDY SPECIFIC CODE ADDED FOR THIS PROJECT -->
<h1> ~ Trudy Intercept ~ </h1>
<script>
var url = window.location.href
var arr = url.split("/");
var ws_url = "ws://" + arr[2] + "/ws"
var socket = new WebSocket(ws_url)
socket.onmessage = function (event) {
document.getElementById('m').value = event.data
document.getElementById('m').oninput()
document.getElementById('send').disabled = false
}
var sender = function() {
socket.send(document.getElementById('m').value)
document.getElementById('send').disabled = true
document.getElementById('m').value = "00"
document.getElementById('m').oninput()
}
</script>
<button onclick="sender()" id='send' disabled=true>send</button>
<!-- END TRUDY SPECIFIC CODE -->
</body>
<table border><td><pre><td id=t><tr><td id=l width=80>00000000<td><textarea spellcheck=false id=m oninput='
// On input, store the length of clean hex before the textarea caret in b
b=value
.substr(0,selectionStart)
.replace(/[^0-9A-F]/ig,"")
.replace(/(..)/g,"$1 ")
.length;
// Clean the textarea value
value=value
.replace(/[^0-9A-F]/ig,"")
.replace(/(..)/g,"$1 ")
.replace(/ $/,"")
.toUpperCase();
// Set the height of the textarea according to its length
style.height=(1.5+value.length/47)+"em";
// Reset h
h="";
// Loop on textarea lines
for(i=0;i<value.length/48;i++)
// Add line number to h
h+=(1E7+(16*i).toString(16)).slice(-8)+" ";
// Write h on the left column
l.innerHTML=h;
// Reset h
h="";
// Loop on the hex values
for(i=0;i<value.length;i+=3)
// Convert them in numbers
c=parseInt(value.substr(i,2),16),
// Convert in chars (if the charCode is in [64-126] (maybe more later)) or ".".
h=63<c&&127>c?h+String.fromCharCode(c):h+".";
// Write h in the right column (with line breaks every 16 chars)
r.innerHTML=h.replace(/(.{16})/g,"$1 ");
// If the caret position is after a space or a line break, place it at the previous index so we can use backspace to erase hex code
if(value[b]==" ")
b--;
// Put the textarea caret at the right place
setSelectionRange(b,b)'
cols=48></textarea><td width=160 id=r>.</td>
</table>
<style>
*{margin:0;padding:0;vertical-align:top;font:1em/1em courier}
#m{height:1.5em;resize:none;overflow:hidden}
#t{padding:0 2px}
#w{position:absolute;opacity:.001}
</style>
`
| 10,282 | 24.77193 | 131 | go |
trudy | trudy-master/README.md | ## Trudy
Trudy is a transparent proxy that can modify and drop traffic for arbitrary TCP connections. Trudy can be used to programmatically modify TCP traffic for proxy-unaware clients. Trudy creates a 2-way "pipe" for each connection it proxies. The device you are proxying (the "client") connects to Trudy (but doesn't know this) and Trudy connects to the client's intended destination (the "server"). Traffic is then passed between these pipes. Users can create Go functions to mangle data between pipes. [See it in action!](https://asciinema.org/a/7zkywm0biuz1wa64az3tmox8v) For a practical overview, check out [@tsusanka](https://twitter.com/tsusanka)'s very good [blog post](https://blog.susanka.eu/how-to-modify-general-tcp-ip-traffic-on-the-fly-with-trudy) on using Trudy to analyze Telegram's MTProto.
Trudy can also proxy TLS connections. Obviously, you will need a valid certificate or a client that does not validate certificates.
Trudy was designed for monitoring and modifying proxy-unaware devices that use non-HTTP protocols. If you want to monitor, intercept, and modify HTTP traffic, Burp Suite is probably the better option.
## Author
Written by Kelby Ludwig ([@kelbyludwig](https://twitter.com/kelbyludwig))
### Why I Built This
I have done security research that involved sitting between an embedded device and a server and modifying a custom binary protocol on the fly. This is usually a slow process that involves sniffing legitimate traffic and then rebuilding packets programmatically. Trudy enables Burp-like features for generalized TCP traffic.
### Simple Setup
0. Configure a virtual machine (Trudy has been tested on a 64-bit Debian 8 VM) to shove all traffic through Trudy. I personally use a Vagrant VM that sets this up for me. The Vagrant VM is available [here](https://github.com/praetorian-inc/mitm-vm). If you would like to use different `--to-ports` values, you can use Trudy's command line flags to change Trudy's listening ports.
`iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 8888 -m tcp -j REDIRECT --to-ports 8080`
`iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -m tcp -j REDIRECT --to-ports 6443`
`iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp -j REDIRECT --to-ports 6666`
`ip route del 0/0`
`route add default gw 192.168.1.1 dev eth1`
`sysctl -w net.ipv4.ip_forward=1`
1. Clone the repo on the virtual machine and build the Trudy binary.
`git clone https://github.com/kelbyludwig/trudy.git`
`cd trudy`
`go install`
2. Run the Trudy binary as root. This starts the listeners. If you ran the `iptables` commands above, `iptables` will forward traffic destined for port 443 to port 6443. Trudy listens on this port and expects traffic coming into this port to be TLS. All other TCP connections will be forwarded through port 6666.
`sudo $GOPATH/bin/trudy`
3. Setup your host machine to use the virtual machine as its router. You should see connections being made in Trudy's console but not notice any traffic issues on the host machine (except TLS errors).
4. In order to manipulate data, just implement whatever functions you might need within the `module` package. The default implementations for these functions are hands-off, so if they do not make sense for your situation, feel free to leave them as they are. More detailed documentation is in the `module` package and the data flow is detailed below.
5. To access the interceptor, visit `http://<IP ADDRESS OF VM>:8888/` in your web browser. The only gotcha here is you must visit the interceptor after starting Trudy but before Trudy receives a packet that it wants to intercept.
## Data Flow
Module methods are called in this order. Downward arrows indicate a branch if the `Do*` function returns true.
```
Deserialize -> Drop -> DoMangle -> DoIntercept -> DoPrint -> Serialize -> BeforeWriteTo(Server|Client) -> AfterWriteTo(Server|Client)
| / | /
|_> Mangle |_> PrettyPrint
```
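For example, a minimal `Mangle` implementation in the `module` package might look like the sketch below. The "foo"/"bar" values are purely illustrative, and the standard library `bytes` package must be imported:
```
//Mangle replaces every occurrence of "foo" with "bar" in the proxied data.
//This is an illustrative sketch; a real module can inspect input.FromClient,
//input.ServerAddr, etc. to limit which traffic gets modified.
func (input *Data) Mangle() {
	input.Bytes = bytes.Replace(input.Bytes, []byte("foo"), []byte("bar"), -1)
}
```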
| 4,077 | 66.966667 | 803 | md |
trudy | trudy-master/module/module.go | package module
import (
"crypto/tls"
"encoding/hex"
"github.com/praetorian-inc/trudy/pipe"
"net"
)
//Data is a thin wrapper that provides metadata that may be useful when mangling bytes on the network.
type Data struct {
	FromClient bool        //FromClient is true if the data is coming from the client (the device you are proxying)
	Bytes      []byte      //Bytes is a byte slice that contains the TCP data
	TLSConfig  *tls.Config //TLSConfig is a TLS server config that contains Trudy's TLS server certificate.
	ServerAddr net.Addr    //ServerAddr is the net.Addr of the server
	ClientAddr net.Addr    //ClientAddr is the net.Addr of the client (the device you are proxying)
}
//DoMangle will return true if Data needs to be sent to the Mangle function.
func (input Data) DoMangle() bool {
return true
}
//Mangle can modify/replace the Bytes values within the Data struct. This can
//be empty if no programmatic mangling needs to be done.
func (input *Data) Mangle() {
}
//Drop will return true if the Data needs to be dropped before going through
//the pipe.
func (input Data) Drop() bool {
return false
}
//PrettyPrint returns the string representation of the data. This string will
//be the value that is logged to the console.
func (input Data) PrettyPrint() string {
return hex.Dump(input.Bytes)
}
//DoPrint will return true if the PrettyPrinted version of the Data struct
//needs to be logged to the console.
func (input Data) DoPrint() bool {
return true
}
//DoIntercept returns true if data should be sent to the Trudy interceptor.
func (input Data) DoIntercept() bool {
return false
}
//Deserialize should replace the Data struct's Bytes with a deserialized bytes.
//For example, unpacking a HTTP/2 frame would be deserialization.
func (input *Data) Deserialize() {
}
//Serialize should replace the Data struct's Bytes with the serialized form of
//the bytes. The serialized bytes will be sent over the wire.
func (input *Data) Serialize() {
}
//BeforeWriteToClient is a function that will be called before data is sent to
//a client.
func (input *Data) BeforeWriteToClient(p pipe.Pipe) {
}
//AfterWriteToClient is a function that will be called after data is sent to
//a client.
func (input *Data) AfterWriteToClient(p pipe.Pipe) {
}
//BeforeWriteToServer is a function that will be called before data is sent to
//a server.
func (input *Data) BeforeWriteToServer(p pipe.Pipe) {
}
//AfterWriteToServer is a function that will be called after data is sent to
//a server.
func (input *Data) AfterWriteToServer(p pipe.Pipe) {
}
| 2,569 | 28.204545 | 117 | go |
trudy | trudy-master/listener/listener.go | package listener
import (
"crypto/tls"
"errors"
"net"
)
//The TrudyListener interface is used to listen for incoming connections and accept them. This is almost
//the same as the typical Listener interface, except Accept also returns the socket's file descriptor.
//This enables Trudy to grab the original destination IP address from the kernel.
type TrudyListener interface {
//TODO: Listen should take two strings: "tcp" or "udp" and a port to listen on.
//This parameter could create a Listener for both tcp and udp.
Listen(string, *net.TCPAddr, *tls.Config)
//Accept returns a generic net.Conn and the file descriptor of the socket.
Accept() (int, net.Conn, error)
//Close shuts down the listener.
Close() error
}
//The TCPListener struct implements the TrudyListener interface and handles TCP connections.
type TCPListener struct {
Listener *net.TCPListener
}
func (tl *TCPListener) Listen(nets string, tcpAddr *net.TCPAddr, _ *tls.Config) {
tcpListener, err := net.ListenTCP(nets, tcpAddr)
if err != nil {
panic(err)
}
tl.Listener = tcpListener
}
func (tl *TCPListener) Accept() (fd int, conn net.Conn, err error) {
cpointer, err := tl.Listener.AcceptTCP()
if err != nil {
return
}
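	// Duplicating the connection as an *os.File exposes the raw fd, which pipe.New later uses for the SO_ORIGINAL_DST lookup.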
file, err := cpointer.File()
if err != nil {
return
}
fd = int(file.Fd())
conn, err = net.FileConn(file)
if err != nil {
return
}
return
}
func (tl *TCPListener) Close() error {
return tl.Listener.Close()
}
//TLSListener struct implements the TrudyListener interface and handles TCP connections over TLS.
type TLSListener struct {
Listener *net.TCPListener
Config *tls.Config
}
func (tl *TLSListener) Accept() (fd int, conn net.Conn, err error) {
cpointer, err := tl.Listener.AcceptTCP()
if err != nil {
return
}
file, err := cpointer.File()
if err != nil {
return
}
fd = int(file.Fd())
fconn, err := net.FileConn(file)
if err != nil {
return
}
conn = tls.Server(fconn, tl.Config)
return
}
func (tl *TLSListener) Listen(nets string, laddr *net.TCPAddr, config *tls.Config) {
if len(config.Certificates) == 0 {
panic(errors.New("tls.Listen: no certificates in configuration"))
}
tcpListener, err := net.ListenTCP(nets, laddr)
if err != nil {
panic(err)
}
tl.Listener = tcpListener
tl.Config = config
}
func (tl *TLSListener) Close() error {
return tl.Listener.Close()
}
| 2,328 | 23.010309 | 105 | go |
trudy | trudy-master/pipe/pipe.go | //Package pipe defines the data structure used to manipulate, monitor, and create proxied connections.
package pipe
import (
"crypto/tls"
"log"
"net"
"strconv"
"sync"
"syscall"
"time"
)
//Netfilter/iptables adds a tcp header to identify original destination.
//Since all traffic is routed through trudy, we need to retrieve the original
//intended destination (i.e. _not_ trudy)
const SO_ORIGINAL_DST = 80
//Pipe is the primary interface that handles connections. Pipe creates a
//full-duplex pipe that passes data from the client to the server and vice
//versa. A pipe is composed of two connections. The client transparently
//connects to Trudy, and Trudy accepts the connection. Trudy will then make a
//connection with the client's intended destination and just pass traffic
//back-and-forth between the two connections. All modifications and drops to
//the packet happen to data between the two ends of the pipe.
type Pipe interface {
//Id returns a unique Pipe identifier
Id() uint
//ServerInfo returns the net.Addr of the server-end of the pipe.
ServerInfo() (addr net.Addr)
//ClientInfo returns the net.Addr of the client-end of the pipe.
ClientInfo() (addr net.Addr)
//ReadFromClient reads data into the buffer from the client-end of the
//pipe. ReadFromClient returns the number of bytes read and an error
//value if an error or EOF occurred. Note: ReadFromClient can read a
//non-zero number of bytes and have a non-nil error value (e.g. EOF).
ReadFromClient(buffer []byte) (n int, err error)
//WriteToClient writes data to the client-end of the pipe. This is
//typically the proxy-unaware client.
WriteToClient(buffer []byte) (n int, err error)
//ReadFromServer reads data into the buffer from the server-end of the
//pipe. The server is the proxy-unaware client's intended destination.
//ReadFromServer returns the number of bytes read and an error value if
//an error or EOF occurred. Note: ReadFromServer can read a non-zero
//number of bytes and have a non-nil error value (e.g. EOF).
ReadFromServer(buffer []byte) (n int, err error)
//WriteToServer writes buffer to the server-end of the pipe. The server
//is the proxy-unaware client's intended destination.
WriteToServer(buffer []byte) (n int, err error)
//ServerConn returns the net.Conn responsible for server-end
//communication.
ServerConn() (conn net.Conn)
	//ClientConn returns the net.Conn responsible for client-end
//communication.
ClientConn() (conn net.Conn)
//SetServerConn will replace the server-end of the pipe with the supplied
//net.Conn parameter.
SetServerConn(conn net.Conn)
//SetClientConn will replace the client-end of the pipe with the supplied
//net.Conn parameter.
SetClientConn(conn net.Conn)
//New builds a new Pipe.
New(pipeID uint, clientConnFD int, clientConn net.Conn, useTLS bool) (err error)
//Close closes both connections of the Pipe.
Close()
//Lock locks a per-Pipe mutex that can be used in modules for
//synchronization.
Lock()
//Unlock unlocks a per-Pipe mutex that can be used in modules for
//synchronization.
Unlock()
//AddContext adds a key/value pair to the Pipe.
AddContext(key string, value interface{})
//GetContext retrieves a value in a Pipe key/value data store.
//GetContext returns the value and a bool indicating success.
GetContext(key string) (value interface{}, ok bool)
//DeleteContext removes a key/value pair from the Pipe.
DeleteContext(key string)
}
//TODO(kkl): I don't think New needs to be part of the Pipe interface.
//Removing this very specific constructor will allow for other methods
//of getting trudy as a proxy (e.g. other transparent proxies, or
//non-transparent proxies like SOCKS).
//TrudyPipe implements the Pipe interface and can be used to proxy TCP connections.
type TrudyPipe struct {
id uint
serverConn net.Conn
clientConn net.Conn
pipeMutex *sync.Mutex
userMutex *sync.Mutex
KV map[string]interface{}
}
//Lock locks a mutex stored within TrudyPipe to allow for fine-grained
//synchronization within a module.
func (t *TrudyPipe) Lock() {
t.userMutex.Lock()
}
//Unlock unlocks a mutex stored within TrudyPipe to allow for fine-grained
//synchronization within a module.
func (t *TrudyPipe) Unlock() {
t.userMutex.Unlock()
}
//AddContext adds a key/value pair to the TrudyPipe. The key/value
//pair data store is per-TrudyPipe. AddContext is safe for use
//in multiple goroutines.
func (t *TrudyPipe) AddContext(key string, value interface{}) {
t.pipeMutex.Lock()
t.KV[key] = value
t.pipeMutex.Unlock()
}
//GetContext retrieves a value in a TrudyPipe key/value data store.
//GetContext returns the value and a bool indicating success.
func (t *TrudyPipe) GetContext(key string) (retval interface{}, ok bool) {
retval, ok = t.KV[key]
return
}
//DeleteContext removes a key/value pair from the TrudyPipe. DeleteContext is
//safe for use in multiple goroutines.
func (t *TrudyPipe) DeleteContext(key string) {
t.pipeMutex.Lock()
delete(t.KV, key)
t.pipeMutex.Unlock()
}
//ClientConn returns the net.Conn responsible for client-end communication.
func (t *TrudyPipe) ClientConn() net.Conn {
return t.clientConn
}
//ServerConn returns the net.Conn responsible for server-end communication.
func (t *TrudyPipe) ServerConn() net.Conn {
return t.serverConn
}
//SetClientConn will replace the client-end of the pipe with the supplied
//net.Conn parameter. SetClientConn is safe for use in multiple goroutines.
func (t *TrudyPipe) SetClientConn(c net.Conn) {
t.pipeMutex.Lock()
t.clientConn = c
t.pipeMutex.Unlock()
}
//SetServerConn will replace the server-end of the pipe with the supplied
//net.Conn parameter. SetServerConn is safe for use in multiple goroutines.
func (t *TrudyPipe) SetServerConn(s net.Conn) {
t.pipeMutex.Lock()
t.serverConn = s
t.pipeMutex.Unlock()
}
//Id returns a TrudyPipe identifier
func (t *TrudyPipe) Id() uint {
return t.id
}
//ServerInfo returns the net.Addr of the server.
func (t *TrudyPipe) ServerInfo() (addr net.Addr) {
addr = t.serverConn.RemoteAddr()
return
}
//ClientInfo returns the net.Addr of the client.
func (t *TrudyPipe) ClientInfo() (addr net.Addr) {
addr = t.clientConn.RemoteAddr()
return
}
//Close closes both ends of a TrudyPipe.
func (t *TrudyPipe) Close() {
t.serverConn.Close()
t.clientConn.Close()
}
//ReadFromClient reads data from the client end of the pipe. This is typically the proxy-unaware client.
func (t *TrudyPipe) ReadFromClient(buffer []byte) (n int, err error) {
	//TODO(kkl): Make timeouts configurable.
err = t.clientConn.SetReadDeadline(time.Now().Add(15 * time.Second))
if err != nil {
return
}
n, err = t.clientConn.Read(buffer)
return
}
//WriteToClient writes data to the client end of the pipe. This is typically the proxy-unaware client.
func (t *TrudyPipe) WriteToClient(buffer []byte) (n int, err error) {
	//TODO(kkl): Make timeouts configurable.
err = t.clientConn.SetWriteDeadline(time.Now().Add(15 * time.Second))
if err != nil {
return
}
n, err = t.clientConn.Write(buffer)
return
}
//ReadFromServer reads data from the server end of the pipe. The server is the
//proxy-unaware client's intended destination.
func (t *TrudyPipe) ReadFromServer(buffer []byte) (n int, err error) {
	err = t.serverConn.SetReadDeadline(time.Now().Add(15 * time.Second))
	if err != nil {
		return
	}
	return t.serverConn.Read(buffer)
}
//WriteToServer writes data to the server end of the pipe. The server is the
//proxy-unaware client's intended destination.
func (t *TrudyPipe) WriteToServer(buffer []byte) (n int, err error) {
err = t.serverConn.SetWriteDeadline(time.Now().Add(15 * time.Second))
if err != nil {
return
}
n, err = t.serverConn.Write(buffer)
return
}
//byteToConnString converts the Multiaddr bytestring returned by Getsockopt into a "host:port" connection string.
func byteToConnString(multiaddr [16]byte) string {
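	// Multiaddr follows the sockaddr_in layout: bytes [2:4] hold the big-endian port and bytes [4:8] the IPv4 address.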
ip := multiaddr[4:8]
ipString := net.IPv4(ip[0], ip[1], ip[2], ip[3]).String()
port := multiaddr[2:4]
portUint := int64((uint32(port[0]) << 8) + uint32(port[1]))
portString := strconv.FormatInt(portUint, 10)
return (ipString + ":" + portString)
}
//New builds a new TrudyPipe. New will get the original destination of traffic
//that was mangled by iptables and get the original destination. New will then
//open a connection to that original destination and, upon success, will set
//all the internal values needed for a TrudyPipe.
func (t *TrudyPipe) New(id uint, fd int, clientConn net.Conn, useTLS bool) (err error) {
//TODO(kkl): Make the second argument system-dependent. E.g. If a linux machine: syscall.SOL_IP
originalAddrBytes, err := syscall.GetsockoptIPv6Mreq(fd, syscall.IPPROTO_IP, SO_ORIGINAL_DST)
if err != nil {
log.Println("[DEBUG] Getsockopt failed.")
clientConn.Close()
return err
}
var serverConn net.Conn
if useTLS {
tlsconfig := &tls.Config{InsecureSkipVerify: true}
serverConn, err = tls.Dial("tcp", byteToConnString(originalAddrBytes.Multiaddr), tlsconfig)
if err != nil {
log.Printf("[ERR] Unable to connect to destination. Closing connection %v.\n", id)
clientConn.Close()
return err
}
} else {
serverConn, err = net.Dial("tcp", byteToConnString(originalAddrBytes.Multiaddr))
if err != nil {
log.Printf("[ERR] ( %v ) Unable to connect to destination. Closing pipe.\n", id)
clientConn.Close()
return err
}
}
t.id = id
t.clientConn = clientConn
t.serverConn = serverConn
t.pipeMutex = new(sync.Mutex)
t.userMutex = new(sync.Mutex)
return nil
}
| 9,506 | 32.241259 | 113 | go |
null | stanford_alpaca-main/README.md |
<p align="center" width="100%">
<img src="assets/logo.png" alt="Stanford-Alpaca" style="width: 50%; min-width: 300px; display: block; margin: auto;">
</p>
# Stanford Alpaca: An Instruction-following LLaMA Model
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/WEIGHT_DIFF_LICENSE)
[](https://www.python.org/downloads/release/python-390/)
[](https://github.com/psf/black)
This is the repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model. The repo contains:
- The [52K data](#data-release) used for fine-tuning the model.
- The code for [generating the data](#data-generation-process).
- The code for [fine-tuning the model](#fine-tuning).
- The code for [recovering Alpaca-7B weights from our released weight diff](#recovering-alpaca-weights).
Note: We thank the community for feedback on Stanford-Alpaca and supporting our research. Our live demo is suspended until further notice.
**Usage and License Notices**: Alpaca is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
The weight diff is also CC BY NC 4.0 (allowing only non-commercial use).
## Overview
The current Alpaca model is fine-tuned from a 7B LLaMA model [1] on 52K instruction-following data generated by the techniques in the Self-Instruct [2] paper, with some modifications that we discuss in the next section.
In a preliminary human evaluation, we found that the Alpaca 7B model behaves similarly to the `text-davinci-003` model on the Self-Instruct instruction-following evaluation suite [2].
Alpaca is still under development, and there are many limitations that have to be addressed.
Importantly, we have not yet fine-tuned the Alpaca model to be safe and harmless.
We thus encourage users to be cautious when interacting with Alpaca, and to report any concerning behavior to help improve the safety and ethical considerations of the model.
Our initial release contains the data generation procedure, dataset, and training recipe. We intend to release the model weights if we are given permission to do so by the creators of LLaMA. For now, we have chosen to host a live demo to help readers better understand the capabilities and limits of Alpaca, as well as a way to help us better evaluate Alpaca's performance on a broader audience.
**Please read our release [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) for more details about the model, our discussion of the potential harm and limitations of Alpaca models, and our thought process for releasing a reproducible model.**
[1]: LLaMA: Open and Efficient Foundation Language Models. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. https://arxiv.org/abs/2302.13971v1
[2]: Self-Instruct: Aligning Language Model with Self Generated Instructions. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi. https://arxiv.org/abs/2212.10560
## Data Release
[`alpaca_data.json`](./alpaca_data.json) contains 52K instruction-following data we used for fine-tuning the Alpaca model.
This JSON file is a list of dictionaries, each dictionary contains the following fields:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
- `output`: `str`, the answer to the instruction as generated by `text-davinci-003`.
We used the following prompts for fine-tuning the Alpaca model:
- for examples with a non-empty input field:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
```
- for examples with an empty input field:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
During inference (e.g., for the web demo), we use the user instruction with an empty input field (second option).
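As a minimal sketch (not the official training code), the two templates above can be applied to an entry of `alpaca_data.json` like this; the constant names are ours:
```python
import json
# Templates copied verbatim from the two formats above.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)
example = json.load(open("alpaca_data.json"))[0]
template = PROMPT_WITH_INPUT if example.get("input", "") else PROMPT_NO_INPUT
print(template.format(**example))
```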
## Data Generation Process
<details>
<summary> <strong> Running the code </strong> </summary>
1. Set environment variables `OPENAI_API_KEY` to your OpenAI API key.
2. Install the dependencies with `pip install -r requirements.txt`.
3. Run `python -m generate_instruction generate_instruction_following_data` to generate the data.
</details>
We built on the data generation pipeline from [self-instruct](https://github.com/yizhongw/self-instruct) and made the following modifications:
- We used `text-davinci-003` to generate the instruction data instead of `davinci`.
- We wrote a new prompt (`prompt.txt`) that explicitly gave the requirement of instruction generation to `text-davinci-003`. Note: there is a slight error in the prompt we used, and future users should incorporate the edit in <https://github.com/tatsu-lab/stanford_alpaca/pull/24>
- We adopted much more aggressive batch decoding, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- We simplified the data generation pipeline by discarding the difference between classification and non-classification instructions.
- We only generated a single instance for each instruction, instead of 2 to 3 instances as in [1].
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, we also find our 52K generated data to be much more diverse than the data released by [self-instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
We plot the below figure (in the style of Figure 2 in the [self-instruct paper](https://arxiv.org/abs/2212.10560)) to demonstrate the diversity of our data.
The inner circle of the plot represents the root verb of the instructions, and the outer circle represents the direct objects.
[<img src="assets/parse_analysis.png" width="750" />](./assets/parse_analysis.png)
## Fine-tuning
We fine-tune our models using standard Hugging Face training code.
We fine-tune LLaMA-7B and LLaMA-13B with the following hyperparameters:
| Hyperparameter | LLaMA-7B | LLaMA-13B |
|----------------|----------|-----------|
| Batch size | 128 | 128 |
| Learning rate | 2e-5 | 1e-5 |
| Epochs | 3 | 5 |
| Max length | 512 | 512 |
| Weight decay | 0 | 0 |
To reproduce our fine-tuning runs for LLaMA, first install the requirements
```bash
pip install -r requirements.txt
```
Below is a command that fine-tunes LLaMA-7B with our dataset on a machine with 4 A100 80G GPUs in FSDP `full_shard` mode.
We were able to reproduce a model of similar quality as the one we hosted in our demo with the following command using **Python 3.10**.
Replace `<your_random_port>` with a port of your own, `<your_path_to_hf_converted_llama_ckpt_and_tokenizer>` with the
path to your converted checkpoint and tokenizer (following instructions in the PR), and `<your_output_dir>` with where you want to store your outputs.
```bash
torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
--model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \
--data_path ./alpaca_data.json \
--bf16 True \
--output_dir <your_output_dir> \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True
```
The same script also works for OPT fine-tuning. Here's an example for fine-tuning OPT-6.7B
```bash
torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
--model_name_or_path "facebook/opt-6.7b" \
--data_path ./alpaca_data.json \
--bf16 True \
--output_dir <your_output_dir> \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'OPTDecoderLayer' \
--tf32 True
```
Note the given training script is meant to be simple and easy to use, and is not particularly optimized.
To run on more GPUs, you may prefer to turn down `gradient_accumulation_steps` to keep a global batch size of 128. Global batch size has not been tested for optimality.
### Addressing OOM
Naively, fine-tuning a 7B model requires about 7 (billion parameters) x 4 (bytes per parameter) x 4 (copies for the weights, gradients, and Adam's two optimizer states) = 112 GB of VRAM. Commands given above enable parameter sharding, so no redundant model copy is stored on any GPU.
If you'd like to further reduce the memory footprint, here are some options:
- Turn on CPU offload for FSDP with `--fsdp "full_shard auto_wrap offload"`. This saves VRAM at the cost of longer runtime.
- In our experience, DeepSpeed stage-3 (with offload) can at times be more memory efficient than FSDP with offload. Here's an example to use DeepSpeed stage-3 with 4 GPUs with both parameter and optimizer offload:
```bash
pip install deepspeed
torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
--model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \
--data_path ./alpaca_data.json \
--bf16 True \
--output_dir <your_output_dir> \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--deepspeed "./configs/default_offload_opt_param.json" \
--tf32 True
```
- The DeepSpeed library also provides some [helpful functions](https://deepspeed.readthedocs.io/en/latest/memory.html) to estimate memory usage.
- [LoRA](https://arxiv.org/abs/2106.09685) fine-tunes low-rank slices of the query, key, and value embedding heads. This can reduce the total memory footprint from 112GB to about 7x4=28GB. We may release our re-implementation of this in the future, but for now the [peft](https://github.com/huggingface/peft) codebase can be a useful resource.
## Recovering Alpaca Weights
The weight diff between Alpaca-7B and LLaMA-7B is located [here](https://huggingface.co/tatsu-lab/alpaca-7b-wdiff/tree/main).
To recover the original Alpaca-7B weights, follow these steps:
```text
1. Convert Meta's released weights into huggingface format. Follow this guide:
https://huggingface.co/docs/transformers/main/model_doc/llama
2. Make sure you cloned the released weight diff into your local machine. The weight diff is located at:
https://huggingface.co/tatsu-lab/alpaca-7b/tree/main
3. Run this function with the correct paths. E.g.,
python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
```
Once step 3 completes, you should have a directory with the recovered weights, from which you can load the model like the following
```python
import transformers
alpaca_model = transformers.AutoModelForCausalLM.from_pretrained("<path_to_store_recovered_weights>")
alpaca_tokenizer = transformers.AutoTokenizer.from_pretrained("<path_to_store_recovered_weights>")
```
### Authors
All grad students below contributed equally and the order is determined by random draw.
- [Rohan Taori](https://www.rohantaori.com/)
- [Ishaan Gulrajani](https://ishaan.io/)
- [Tianyi Zhang](https://tiiiger.github.io/)
- [Yann Dubois](https://yanndubs.github.io/)
- [Xuechen Li](https://www.lxuechen.com/)
All advised by [Tatsunori B. Hashimoto](https://thashim.github.io/). Yann is also advised by [Percy Liang](https://cs.stanford.edu/~pliang/) and Xuechen is also advised by [Carlos Guestrin](https://guestrin.su.domains/).
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
Naturally, you should also cite the original LLaMA paper [1] and the Self-Instruct paper [2].
### Acknowledgements
We thank Yizhong Wang for his help in explaining the data generation pipeline in Self-Instruct and providing the code for the parse analysis plot.
We thank Yifan Mai for helpful support, and members of the Stanford NLP Group as well as the Center for Research on Foundation Models (CRFM) for their helpful feedback.
| 14,461 | 52.562963 | 395 | md |
null | stanford_alpaca-main/datasheet.md | # Alpaca Instruction Following Dataset
## Motivation
### For what purpose was the dataset created?
To enable more open-source research on instruction-following large language models, we generate 52K instruction-following demonstrations using OpenAI's text-davinci-003 model.
### Who created the dataset
- [Rohan Taori](https://www.rohantaori.com/)
- [Ishaan Gulrajani](https://ishaan.io/)
- [Tianyi Zhang](https://tiiiger.github.io/)
- [Yann Dubois](https://yanndubs.github.io/)
- [Xuechen Li](https://www.lxuechen.com/)
- [Carlos Guestrin](https://guestrin.su.domains/)
- [Percy Liang](https://cs.stanford.edu/~pliang/)
- [Tatsunori B. Hashimoto](https://thashim.github.io/)
## Composition
### What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?
The instruction following demonstrations are bootstrapped by following the [seed set](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl) released from the self-instruct project.
Given that the dataset is generated, it is difficult to pinpoint who/what the instances represent.
### How many instances are there in total
In total, there are 52,002 instances in the dataset.
### Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?
not applicable.
### What data does each instance consist of?
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
- `output`: `str`, the answer to the instruction as generated by `text-davinci-003`.
### Is any information missing from individual instances?
no.
### Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)?
not applicable.
### Is there a label or target associated with each instance?
the finetuning target is the response generated by `text-davinci-003`.
### Are there recommended data splits (e.g., training, development/validation, testing)?
The Alpaca models (both demo and the ones that will be released) are trained on all 52K data.
There is no recommended data split for the dataset.
### Are there any errors, sources of noise, or redundancies in the dataset?
All 52K instructions are unique. However, some generated instructions may not be sensible, i.e., there may not exist any good response to the instruction.
### Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?
the dataset is self-contained.
### Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?
no.
### Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?
The generated may contain a few inappropriate responses. In our preliminary testing, we have not encountered any offensive responses.
## Collection process
The [Github repository](https://github.com/tatsu-lab/stanford_alpaca) contains the code to generate the dataset.
## Uses
### Has the dataset been used for any tasks already?
The dataset is used to train the Alpaca models that are both used for the demo and released.
### Is there a repository that links to any or all papers or systems that use the dataset?
Please see https://github.com/tatsu-lab/stanford_alpaca
### Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?
This dataset is generated using OpenAI's API. Therefore, it cannot be used for commercial purposes that compete with OpenAI.
### Are there tasks for which the dataset should not be used?
The dataset should not be used for commercial purposes that compete with OpenAI.
## Distribution
### Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?
The dataset can be freely downloaded.
### How will the dataset will be distributed (e.g., tarball on website, API, GitHub)?
The dataset can be downloaded from the [Github repository](https://github.com/tatsu-lab/stanford_alpaca) as a json file.
### Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?
This dataset is distributed under [the ODC-By license](https://opendatacommons.org/licenses/by/1-0/).
### Have any third parties imposed IP-based or other restrictions on the data associated with the instances?
no
### Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?
no
## Maintenance
### Who is supporting/hosting/maintaining the dataset?
The dataset is hosted on github and the Github repository is maintained by Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li.
### How can the owner/curator/manager of the dataset be contacted (e.g., email address)?
Please open an issue in the [Github repository](https://github.com/tatsu-lab/stanford_alpaca)
### Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?
We do not have plans to update the dataset. | 5,585 | 53.764706 | 233 | md |
null | stanford_alpaca-main/generate_instruction.py | """
batch_selfinstruct_generate.py
run:
python -m generate_instruction generate_instruction_following_data \
--output_dir ./ \
--num_instructions_to_generate 10 \
--model_name="text-davinci-003" \
"""
import time
import json
import os
import random
import re
import string
from functools import partial
from multiprocessing import Pool
import numpy as np
import tqdm
from rouge_score import rouge_scorer
import utils
import fire
def encode_prompt(prompt_instructions):
"""Encode multiple prompt instructions into a single string."""
prompt = open("./prompt.txt").read() + "\n"
for idx, task_dict in enumerate(prompt_instructions):
(instruction, input, output) = task_dict["instruction"], task_dict["input"], task_dict["output"]
instruction = re.sub(r"\s+", " ", instruction).strip().rstrip(":")
input = "<noinput>" if input.lower() == "" else input
prompt += f"###\n"
prompt += f"{idx + 1}. Instruction: {instruction}\n"
prompt += f"{idx + 1}. Input:\n{input}\n"
prompt += f"{idx + 1}. Output:\n{output}\n"
prompt += f"###\n"
prompt += f"{idx + 2}. Instruction:"
return prompt
def post_process_gpt3_response(num_prompt_instructions, response):
if response is None:
return []
raw_instructions = f"{num_prompt_instructions+1}. Instruction:" + response["text"]
raw_instructions = re.split("###", raw_instructions)
instructions = []
for idx, inst in enumerate(raw_instructions):
# if the decoding stops due to length, the last example is likely truncated so we discard it
if idx == len(raw_instructions) - 1 and response["finish_reason"] == "length":
continue
idx += num_prompt_instructions + 1
        splitted_data = re.split(rf"{idx}\.\s+(Instruction|Input|Output):", inst)
if len(splitted_data) != 7:
continue
else:
inst = splitted_data[2].strip()
input = splitted_data[4].strip()
input = "" if input.lower() == "<noinput>" else input
output = splitted_data[6].strip()
# filter out too short or too long instructions
if len(inst.split()) <= 3 or len(inst.split()) > 150:
continue
# filter based on keywords that are not suitable for language models.
blacklist = [
"image",
"images",
"graph",
"graphs",
"picture",
"pictures",
"file",
"files",
"map",
"maps",
"draw",
"plot",
"go to",
"video",
"audio",
"music",
"flowchart",
"diagram",
]
blacklist += []
if any(find_word_in_string(word, inst) for word in blacklist):
continue
        # We found that the model tends to add "write a program" to some existing instructions, which leads to a lot of such instructions.
        # And it's a bit confusing whether the model needs to write a program or directly output the result.
# Here we filter them out.
# Note this is not a comprehensive filtering for all programming instructions.
if inst.startswith("Write a program"):
continue
# filter those starting with punctuation
if inst[0] in string.punctuation:
continue
# filter those starting with non-english character
if not inst[0].isascii():
continue
instructions.append({"instruction": inst, "input": input, "output": output})
return instructions
def find_word_in_string(w, s):
return re.compile(r"\b({0})\b".format(w), flags=re.IGNORECASE).search(s)
def generate_instruction_following_data(
output_dir="./",
seed_tasks_path="./seed_tasks.jsonl",
num_instructions_to_generate=100,
model_name="text-davinci-003",
num_prompt_instructions=3,
request_batch_size=5,
temperature=1.0,
top_p=1.0,
num_cpus=16,
):
seed_tasks = [json.loads(l) for l in open(seed_tasks_path, "r")]
seed_instruction_data = [
{"instruction": t["instruction"], "input": t["instances"][0]["input"], "output": t["instances"][0]["output"]}
for t in seed_tasks
]
print(f"Loaded {len(seed_instruction_data)} human-written seed instructions")
os.makedirs(output_dir, exist_ok=True)
request_idx = 0
# load the LM-generated instructions
machine_instruction_data = []
if os.path.exists(os.path.join(output_dir, "regen.json")):
machine_instruction_data = utils.jload(os.path.join(output_dir, "regen.json"))
print(f"Loaded {len(machine_instruction_data)} machine-generated instructions")
# similarities = {}
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)
# now let's generate new instructions!
progress_bar = tqdm.tqdm(total=num_instructions_to_generate)
if machine_instruction_data:
progress_bar.update(len(machine_instruction_data))
# first we tokenize all the seed instructions and generated machine instructions
all_instructions = [d["instruction"] for d in seed_instruction_data] + [
d["instruction"] for d in machine_instruction_data
]
all_instruction_tokens = [scorer._tokenizer.tokenize(inst) for inst in all_instructions]
while len(machine_instruction_data) < num_instructions_to_generate:
request_idx += 1
batch_inputs = []
for _ in range(request_batch_size):
# only sampling from the seed tasks
prompt_instructions = random.sample(seed_instruction_data, num_prompt_instructions)
prompt = encode_prompt(prompt_instructions)
batch_inputs.append(prompt)
decoding_args = utils.OpenAIDecodingArguments(
temperature=temperature,
n=1,
max_tokens=3072, # hard-code to maximize the length. the requests will be automatically adjusted
top_p=top_p,
stop=["\n20", "20.", "20."],
)
request_start = time.time()
results = utils.openai_completion(
prompts=batch_inputs,
model_name=model_name,
batch_size=request_batch_size,
decoding_args=decoding_args,
logit_bias={"50256": -100}, # prevent the <|endoftext|> token from being generated
)
request_duration = time.time() - request_start
process_start = time.time()
instruction_data = []
for result in results:
new_instructions = post_process_gpt3_response(num_prompt_instructions, result)
instruction_data += new_instructions
total = len(instruction_data)
keep = 0
for instruction_data_entry in instruction_data:
            # computing similarity with the pre-tokenized instructions
new_instruction_tokens = scorer._tokenizer.tokenize(instruction_data_entry["instruction"])
with Pool(num_cpus) as p:
rouge_scores = p.map(
partial(rouge_scorer._score_lcs, new_instruction_tokens),
all_instruction_tokens,
)
rouge_scores = [score.fmeasure for score in rouge_scores]
most_similar_instructions = {
all_instructions[i]: rouge_scores[i] for i in np.argsort(rouge_scores)[-10:][::-1]
}
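            # Deduplicate: skip candidates whose ROUGE-L F-measure against any existing instruction exceeds 0.7.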
if max(rouge_scores) > 0.7:
continue
else:
keep += 1
instruction_data_entry["most_similar_instructions"] = most_similar_instructions
instruction_data_entry["avg_similarity_score"] = float(np.mean(rouge_scores))
machine_instruction_data.append(instruction_data_entry)
all_instructions.append(instruction_data_entry["instruction"])
all_instruction_tokens.append(new_instruction_tokens)
progress_bar.update(1)
process_duration = time.time() - process_start
print(f"Request {request_idx} took {request_duration:.2f}s, processing took {process_duration:.2f}s")
print(f"Generated {total} instructions, kept {keep} instructions")
utils.jdump(machine_instruction_data, os.path.join(output_dir, "regen.json"))
def main(task, **kwargs):
globals()[task](**kwargs)
if __name__ == "__main__":
fire.Fire(main)
| 8,381 | 37.449541 | 137 | py |
null | stanford_alpaca-main/model_card.md | ---
# Alpaca Model Card
## Model details
**Organization developing the model**
Stanford Hashimoto Group
**Model date**
Alpaca was trained in March 2023
**Model version**
This is version 1 of the model.
**Model type**
Alpaca models are instruction-following models finetuned from LLaMA models.
**More information**
Please see our blog post at `link` for more information.
**Citation details**
Please cite the [github repo](https://github.com/tatsu-lab/stanford_alpaca) if you use the data or code in this repo.
**License**
Code and data are licensed under the Apache 2.0 license.
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/tatsu-lab/stanford_alpaca) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of Alpaca is research on instruction following large language models.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
Alpaca models are not finetuned with human feedback and are not intended for use in production systems.
Alpaca models are trained from data generated using the OpenAI API and thus any usage must not be competing with the OpenAI API.
## Metrics
**Model performance measures**
The Alpaca 7B model has been evaluated using blinded pairwise comparison with OpenAI's text-davinci-003 on the self-instruct evaluation set.
Our student authors have judged the Alpaca 7B model to be on par with text-davinci-003, with a win rate around 50%.
**Approaches to uncertainty and variability**
We have only finetuned a single Alpaca model at each model size, and thus we do not have a good sense of the variability of the model.
## Evaluation datasets
The model was evaluated on the self-instruct evaluation set.
## Training dataset
The model was trained on 52K instruction-following data, which is released in the [Github repository](https://github.com/tatsu-lab/stanford_alpaca). | 2,089 | 39.192308 | 157 | md |
null | stanford_alpaca-main/train.py | # Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import logging
from dataclasses import dataclass, field
from typing import Dict, Optional, Sequence
import torch
import transformers
import utils
from torch.utils.data import Dataset
from transformers import Trainer
IGNORE_INDEX = -100
DEFAULT_PAD_TOKEN = "[PAD]"
DEFAULT_EOS_TOKEN = "</s>"
DEFAULT_BOS_TOKEN = "<s>"
DEFAULT_UNK_TOKEN = "<unk>"
PROMPT_DICT = {
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
),
}
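# Illustrative sketch (comments only, nothing executed at import time): how a
# hypothetical record is rendered with the templates above.
#   example = {"instruction": "Add the numbers.", "input": "2 and 3"}
#   prompt = PROMPT_DICT["prompt_input"].format_map(example)
# yields the instruction/input header followed by "### Response:".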
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
def smart_tokenizer_and_embedding_resize(
special_tokens_dict: Dict,
tokenizer: transformers.PreTrainedTokenizer,
model: transformers.PreTrainedModel,
):
"""Resize tokenizer and embedding.
Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
"""
num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
if num_new_tokens > 0:
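        # Initialize the embeddings of the newly added tokens with the mean of the
        # pre-existing embeddings instead of random values.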
input_embeddings = model.get_input_embeddings().weight.data
output_embeddings = model.get_output_embeddings().weight.data
input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
input_embeddings[-num_new_tokens:] = input_embeddings_avg
output_embeddings[-num_new_tokens:] = output_embeddings_avg
def _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict:
"""Tokenize a list of strings."""
tokenized_list = [
tokenizer(
text,
return_tensors="pt",
padding="longest",
max_length=tokenizer.model_max_length,
truncation=True,
)
for text in strings
]
input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list]
input_ids_lens = labels_lens = [
tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list
]
return dict(
input_ids=input_ids,
labels=labels,
input_ids_lens=input_ids_lens,
labels_lens=labels_lens,
)
def preprocess(
sources: Sequence[str],
targets: Sequence[str],
tokenizer: transformers.PreTrainedTokenizer,
) -> Dict:
"""Preprocess the data by tokenizing."""
examples = [s + t for s, t in zip(sources, targets)]
examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)]
input_ids = examples_tokenized["input_ids"]
labels = copy.deepcopy(input_ids)
for label, source_len in zip(labels, sources_tokenized["input_ids_lens"]):
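        # Mask the prompt tokens so the loss is only computed on the response span.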
label[:source_len] = IGNORE_INDEX
return dict(input_ids=input_ids, labels=labels)
class SupervisedDataset(Dataset):
"""Dataset for supervised fine-tuning."""
def __init__(self, data_path: str, tokenizer: transformers.PreTrainedTokenizer):
super(SupervisedDataset, self).__init__()
logging.warning("Loading data...")
list_data_dict = utils.jload(data_path)
logging.warning("Formatting inputs...")
prompt_input, prompt_no_input = PROMPT_DICT["prompt_input"], PROMPT_DICT["prompt_no_input"]
sources = [
prompt_input.format_map(example) if example.get("input", "") != "" else prompt_no_input.format_map(example)
for example in list_data_dict
]
targets = [f"{example['output']}{tokenizer.eos_token}" for example in list_data_dict]
logging.warning("Tokenizing inputs... This may take some time...")
data_dict = preprocess(sources, targets, tokenizer)
self.input_ids = data_dict["input_ids"]
self.labels = data_dict["labels"]
def __len__(self):
return len(self.input_ids)
def __getitem__(self, i) -> Dict[str, torch.Tensor]:
return dict(input_ids=self.input_ids[i], labels=self.labels[i])
@dataclass
class DataCollatorForSupervisedDataset(object):
"""Collate examples for supervised fine-tuning."""
tokenizer: transformers.PreTrainedTokenizer
def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
input_ids, labels = tuple([instance[key] for instance in instances] for key in ("input_ids", "labels"))
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id
)
labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX)
return dict(
input_ids=input_ids,
labels=labels,
attention_mask=input_ids.ne(self.tokenizer.pad_token_id),
)
def make_supervised_data_module(tokenizer: transformers.PreTrainedTokenizer, data_args) -> Dict:
"""Make dataset and collator for supervised fine-tuning."""
train_dataset = SupervisedDataset(tokenizer=tokenizer, data_path=data_args.data_path)
data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer)
return dict(train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator)
def train():
parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
model = transformers.AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=training_args.cache_dir,
model_max_length=training_args.model_max_length,
padding_side="right",
use_fast=False,
)
special_tokens_dict = dict()
if tokenizer.pad_token is None:
special_tokens_dict["pad_token"] = DEFAULT_PAD_TOKEN
if tokenizer.eos_token is None:
special_tokens_dict["eos_token"] = DEFAULT_EOS_TOKEN
if tokenizer.bos_token is None:
special_tokens_dict["bos_token"] = DEFAULT_BOS_TOKEN
if tokenizer.unk_token is None:
special_tokens_dict["unk_token"] = DEFAULT_UNK_TOKEN
smart_tokenizer_and_embedding_resize(
special_tokens_dict=special_tokens_dict,
tokenizer=tokenizer,
model=model,
)
data_module = make_supervised_data_module(tokenizer=tokenizer, data_args=data_args)
trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module)
trainer.train()
trainer.save_state()
trainer.save_model(output_dir=training_args.output_dir)
if __name__ == "__main__":
train()
| 8,263 | 36.058296 | 119 | py |
null | stanford_alpaca-main/utils.py | import dataclasses
import logging
import math
import os
import io
import sys
import time
import json
from typing import Optional, Sequence, Union
import openai
import tqdm
from openai import openai_object
import copy
StrOrOpenAIObject = Union[str, openai_object.OpenAIObject]
openai_org = os.getenv("OPENAI_ORG")
if openai_org is not None:
openai.organization = openai_org
logging.warning(f"Switching to organization: {openai_org} for OAI API key.")
@dataclasses.dataclass
class OpenAIDecodingArguments(object):
max_tokens: int = 1800
temperature: float = 0.2
top_p: float = 1.0
n: int = 1
stream: bool = False
stop: Optional[Sequence[str]] = None
presence_penalty: float = 0.0
frequency_penalty: float = 0.0
suffix: Optional[str] = None
logprobs: Optional[int] = None
echo: bool = False
def openai_completion(
prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]],
decoding_args: OpenAIDecodingArguments,
model_name="text-davinci-003",
sleep_time=2,
batch_size=1,
max_instances=sys.maxsize,
max_batches=sys.maxsize,
return_text=False,
**decoding_kwargs,
) -> Union[Union[StrOrOpenAIObject], Sequence[StrOrOpenAIObject], Sequence[Sequence[StrOrOpenAIObject]],]:
"""Decode with OpenAI API.
Args:
prompts: A string or a list of strings to complete. If it is a chat model the strings should be formatted
as explained here: https://github.com/openai/openai-python/blob/main/chatml.md. If it is a chat model
it can also be a dictionary (or list thereof) as explained here:
https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
decoding_args: Decoding arguments.
model_name: Model name. Can be either in the format of "org/model" or just "model".
sleep_time: Time to sleep once the rate-limit is hit.
batch_size: Number of prompts to send in a single request. Only for non chat model.
max_instances: Maximum number of prompts to decode.
max_batches: Maximum number of batches to decode. This argument will be deprecated in the future.
return_text: If True, return text instead of full completion object (which contains things like logprob).
decoding_kwargs: Additional decoding arguments. Pass in `best_of` and `logit_bias` if you need them.
Returns:
A completion or a list of completions.
        Depending on return_text and decoding_args.n, the completion type can be one of
- a string (if return_text is True)
- an openai_object.OpenAIObject object (if return_text is False)
- a list of objects of the above types (if decoding_args.n > 1)
"""
is_single_prompt = isinstance(prompts, (str, dict))
if is_single_prompt:
prompts = [prompts]
if max_batches < sys.maxsize:
logging.warning(
"`max_batches` will be deprecated in the future, please use `max_instances` instead."
"Setting `max_instances` to `max_batches * batch_size` for now."
)
max_instances = max_batches * batch_size
prompts = prompts[:max_instances]
num_prompts = len(prompts)
prompt_batches = [
prompts[batch_id * batch_size : (batch_id + 1) * batch_size]
for batch_id in range(int(math.ceil(num_prompts / batch_size)))
]
completions = []
for batch_id, prompt_batch in tqdm.tqdm(
enumerate(prompt_batches),
desc="prompt_batches",
total=len(prompt_batches),
):
batch_decoding_args = copy.deepcopy(decoding_args) # cloning the decoding_args
while True:
try:
shared_kwargs = dict(
model=model_name,
**batch_decoding_args.__dict__,
**decoding_kwargs,
)
completion_batch = openai.Completion.create(prompt=prompt_batch, **shared_kwargs)
choices = completion_batch.choices
for choice in choices:
choice["total_tokens"] = completion_batch.usage.total_tokens
completions.extend(choices)
break
except openai.error.OpenAIError as e:
logging.warning(f"OpenAIError: {e}.")
if "Please reduce your prompt" in str(e):
batch_decoding_args.max_tokens = int(batch_decoding_args.max_tokens * 0.8)
logging.warning(f"Reducing target length to {batch_decoding_args.max_tokens}, Retrying...")
else:
logging.warning("Hit request rate limit; retrying...")
time.sleep(sleep_time) # Annoying rate limit on requests.
if return_text:
completions = [completion.text for completion in completions]
if decoding_args.n > 1:
# make completions a nested list, where each entry is a consecutive decoding_args.n of original entries.
completions = [completions[i : i + decoding_args.n] for i in range(0, len(completions), decoding_args.n)]
if is_single_prompt:
# Return non-tuple if only 1 input and 1 generation.
(completions,) = completions
return completions
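# Usage sketch (comments only, since a live call would hit the OpenAI API):
#   args = OpenAIDecodingArguments(max_tokens=64, temperature=0.7)
#   texts = openai_completion(["Say hi.", "Say bye."], args, batch_size=2, return_text=True)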
def _make_w_io_base(f, mode: str):
if not isinstance(f, io.IOBase):
f_dirname = os.path.dirname(f)
if f_dirname != "":
os.makedirs(f_dirname, exist_ok=True)
f = open(f, mode=mode)
return f
def _make_r_io_base(f, mode: str):
if not isinstance(f, io.IOBase):
f = open(f, mode=mode)
return f
def jdump(obj, f, mode="w", indent=4, default=str):
"""Dump a str or dictionary to a file in json format.
Args:
obj: An object to be written.
f: A string path to the location on disk.
mode: Mode for opening the file.
indent: Indent for storing json dictionaries.
default: A function to handle non-serializable entries; defaults to `str`.
"""
f = _make_w_io_base(f, mode)
if isinstance(obj, (dict, list)):
json.dump(obj, f, indent=indent, default=default)
elif isinstance(obj, str):
f.write(obj)
else:
raise ValueError(f"Unexpected type: {type(obj)}")
f.close()
def jload(f, mode="r"):
"""Load a .json file into a dictionary."""
f = _make_r_io_base(f, mode)
jdict = json.load(f)
f.close()
return jdict
| 6,481 | 36.252874 | 117 | py |
null | stanford_alpaca-main/weight_diff.py | # Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional
import fire
import torch
import tqdm
import transformers
from train import smart_tokenizer_and_embedding_resize
@torch.inference_mode()
def make_diff(
path_raw: str, path_tuned: str, path_diff: str, device="cpu", # "cuda" or "cpu"
):
"""Make the weight diff.
This function is given to present full transparency of how the weight diff was created.
Run:
python weight_diff.py make_diff --path_raw <your_path_raw> --path_tuned <your_path_tuned> --path_diff <your_path_diff>
"""
model_tuned: transformers.PreTrainedModel = transformers.AutoModelForCausalLM.from_pretrained(
path_tuned,
device_map={"": torch.device(device)},
torch_dtype=torch.float32,
low_cpu_mem_usage=True,
)
model_raw: transformers.PreTrainedModel = transformers.AutoModelForCausalLM.from_pretrained(
path_raw,
device_map={"": torch.device(device)},
torch_dtype=torch.float32,
low_cpu_mem_usage=True,
)
tokenizer_tuned: transformers.PreTrainedTokenizer = transformers.AutoTokenizer.from_pretrained(
path_tuned
)
tokenizer_raw: transformers.PreTrainedTokenizer = transformers.AutoTokenizer.from_pretrained(
path_raw
)
if tokenizer_raw.pad_token is None:
smart_tokenizer_and_embedding_resize(
special_tokens_dict=dict(pad_token="[PAD]"),
model=model_raw,
tokenizer=tokenizer_raw,
)
state_dict_tuned = model_tuned.state_dict()
state_dict_raw = model_raw.state_dict()
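    # Store the diff in place: each tuned weight becomes (tuned - raw).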
for key in tqdm.tqdm(state_dict_tuned):
state_dict_tuned[key].add_(-state_dict_raw[key])
model_tuned.save_pretrained(path_diff)
tokenizer_tuned.save_pretrained(path_diff)
@torch.inference_mode()
def recover(
path_raw,
path_diff,
path_tuned: Optional[str] = None,
device="cpu",
test_inference=True,
check_integrity_naively=True,
):
"""Recover the original weights from the released weight diff.
This function is given for you to run.
Things to do before running this:
1. Convert Meta's released weights into huggingface format. Follow this guide:
https://huggingface.co/docs/transformers/main/model_doc/llama
2. Make sure you cloned the released weight diff into your local machine. The weight diff is located at:
https://huggingface.co/tatsu-lab/alpaca-7b/tree/main
3. Run this function with the correct paths. E.g.,
python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir>
Additional notes:
- If things run too slowly, and you have an 80G GPU lying around, let GPU go brrr by setting `--device "cuda"`.
- If you want to save the recovered weights, set `--path_tuned <your_path_tuned>`.
Next time you can load the recovered weights directly from `<your_path_tuned>`.
"""
model_raw: transformers.PreTrainedModel = transformers.AutoModelForCausalLM.from_pretrained(
path_raw,
device_map={"": torch.device(device)},
torch_dtype=torch.float32,
low_cpu_mem_usage=True,
)
model_recovered: transformers.PreTrainedModel = transformers.AutoModelForCausalLM.from_pretrained(
path_diff,
device_map={"": torch.device(device)},
torch_dtype=torch.float32,
low_cpu_mem_usage=True,
)
tokenizer_raw: transformers.PreTrainedTokenizer = transformers.AutoTokenizer.from_pretrained(
path_raw
)
if tokenizer_raw.pad_token is None:
smart_tokenizer_and_embedding_resize(
special_tokens_dict=dict(pad_token="[PAD]"),
model=model_raw,
tokenizer=tokenizer_raw,
)
tokenizer_recovered: transformers.PreTrainedTokenizer = transformers.AutoTokenizer.from_pretrained(
path_diff
)
state_dict_recovered = model_recovered.state_dict()
state_dict_raw = model_raw.state_dict()
for key in tqdm.tqdm(state_dict_recovered):
state_dict_recovered[key].add_(state_dict_raw[key])
if check_integrity_naively:
# This is not a rigorous, cryptographically strong integrity check :)
allsum = sum(state_dict_recovered[key].sum() for key in state_dict_recovered)
assert torch.allclose(
allsum, torch.full_like(allsum, fill_value=50637.1836), atol=1e-2, rtol=0
), "Naive integrity check failed. This could imply that some of the checkpoint files are corrupted."
if path_tuned is not None:
model_recovered.save_pretrained(path_tuned)
tokenizer_recovered.save_pretrained(path_tuned)
if test_inference:
input_text = (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\r\n\r\n"
"### Instruction:\r\nList three technologies that make life easier.\r\n\r\n### Response:"
)
inputs = tokenizer_recovered(input_text, return_tensors="pt")
out = model_recovered.generate(inputs=inputs.input_ids, max_new_tokens=100)
output_text = tokenizer_recovered.batch_decode(out, skip_special_tokens=True)[0]
output_text = output_text[len(input_text) :]
print(f"Input: {input_text}\nCompletion: {output_text}")
return model_recovered, tokenizer_recovered
def main(task, **kwargs):
globals()[task](**kwargs)
if __name__ == "__main__":
fire.Fire(main)
| 6,137 | 37.603774 | 126 | py |
null | CARL-main/.codecov.yml | #see https://github.com/codecov/support/wiki/Codecov-Yaml
codecov:
require_ci_to_pass: yes
coverage:
# 2 = xx.xx%, 0 = xx%
precision: 2
# https://docs.codecov.com/docs/commit-status
status:
    # The overall project coverage must stay above the target configured below;
    # a drop of 0.20% is allowed. The status reports failure if no coverage was
    # uploaded and an error if the CI failed otherwise.
project:
default:
target: 10%
threshold: 0.20%
if_not_found: failure
if_ci_failed: error
# The code changed by a PR should have 90% coverage. This is different from the
# overall number shown above.
# This encourages small PR's as they are easier to test.
patch:
default:
target: 90%
if_not_found: failure
if_ci_failed: error
# We upload additional information on branching with pytest-cov `--cov-branch`
  # This information can be used by codecov.com to improve its analysis of the code
parsers:
gcov:
branch_detection:
conditional: true
loop: true
method: true
macro: false
comment:
layout: diff, reach
behavior: default
require_changes: false
| 1,158 | 23.659574 | 83 | yml |
null | CARL-main/.pre-commit-config.yaml | # If you see me, please update my `rev` field using the provided links
# Click the repo and update to latest tags.
# If things break on update, raise an issue
repos:
- repo: https://github.com/PyCQA/isort
rev: 5.12.0
hooks:
- id: isort
name: isort imports carl
files: carl/.*
args: [--check]
- id: isort
name: isort imports test
files: test/.*
args: [--check]
- repo: https://github.com/ambv/black
rev: 22.6.0
hooks:
- id: black
name: black formatter carl
files: carl/.*
args: [--check]
- id: black
name: black formatter test
files: test/.*
args: [--check]
- id: black
name: black formatter examples
files: examples/.*
args: [--check]
# This is disabled as most modules fail this
- repo: https://github.com/pycqa/pydocstyle
rev: 6.1.1
hooks:
- id: pydocstyle
files: DISABLED # carl/.*
always_run: false
additional_dependencies: ["toml"] # Needed to parse pyproject.toml
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.930
hooks:
- id: mypy
name: mypy carl
files: carl/.*
- repo: https://github.com/pycqa/flake8
rev: 6.0.0
hooks:
- id: flake8
name: flake8 carl
files: carl/.*
- id: flake8
name: flake8 test
files: test/.*
| 1,439 | 21.857143 | 74 | yaml |
null | CARL-main/.readthedocs.yaml | # use version 2, which is now recommended
version: 2
build:
os: ubuntu-20.04
tools:
python: "3.9"
# Build from the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
# build all
formats: all
# Explicitly set the version of Python and its requirements
python:
install:
- method: pip
path: .
extra_requirements:
- docs
| 370 | 15.130435 | 59 | yaml |
null | CARL-main/README.md | <img align="left" width="80" src="./docs/source/figures/CARL_logo.png" alt="CARL">
# – The Benchmark Library
CARL (context adaptive RL) provides highly configurable contextual extensions
to several well-known RL environments.
It's designed to test your agent's generalization capabilities
in all scenarios where intra-task generalization is important.
Feel free to check out our [paper](https://arxiv.org/abs/2110.02102) and our short [blog post](https://www.automl.org/carl-a-benchmark-to-study-generalization-in-reinforcement-learning/)!
## Benchmarks
Benchmarks include:
- [OpenAI gym classic control suite](https://gym.openai.com/envs/#classic_control) extended with several physics context features like gravity or friction
- [OpenAI gym Box2D](https://gym.openai.com/envs/#box2d) BipedalWalker, LunarLander and
CarRacing, each with their own modification possibilities like
new vehicles to race
- All [Brax locomotion environments](https://github.com/google/brax) with exposed internal features like joint strength or torso mass
- [Super Mario (TOAD-GAN)](https://github.com/Mawiszus/TOAD-GAN), a procedurally generated jump'n'run game with control
over level similarity
- [dm_control](https://github.com/deepmind/dm_control), environments based on the MuJoCo physics engine. The environments are extended with different context features.

For more information, check out our [documentation](https://automl.github.io/CARL/)!
## Installation
We recommend you use a virtual environment (e.g. Anaconda) to
install CARL and its dependencies. We recommend and test with python 3.9 under Linux.
First, clone our repository and install the basic requirements:
```bash
git clone https://github.com/automl/CARL.git --recursive
cd CARL
pip install .
```
This will only install the basic classic control environments, which should run on most operating systems. For the full set of environments, use the install options:
```bash
pip install -e ".[box2d,brax,mario,dm_control]"
```
These may not be compatible with Windows systems. The Box2D environments may need to be installed via conda on macOS systems:
```bash
conda install -c conda-forge gym-box2d
```
In general, we test on Linux systems, but aim to keep the benchmark compatible with macOS as much as possible.
Mario, however, will at this point not run on any operating system besides Linux.
To install the additional requirements for ToadGAN:
```bash
javac carl/envs/mario/Mario-AI-Framework/**/*.java
```
## CARL's Contextual Extension
CARL contextually extends the environment by making the context visible and configurable.
During training we therefore can encounter different contexts and train for generalization.
We exemplarily show how Brax' Fetch is extended and embedded by CARL.
Different instantiations can be achieved by setting the context features to different values.

## Cite Us
If you use CARL in your research, please cite our paper on the benchmark:
```bibtex
@inproceedings{BenEim2021a,
title = {CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning},
author = {Carolin Benjamins and Theresa Eimer and Frederik Schubert and André Biedenkapp and Bodo Rosenhahn and Frank Hutter and Marius Lindauer},
booktitle = {NeurIPS 2021 Workshop on Ecological Theory of Reinforcement Learning},
year = {2021},
month = dec
}
```
You can find the code and experiments for this paper in the `neurips_ecorl_workshop_2021` branch.
## References
[OpenAI gym, Brockman et al., 2016. arXiv preprint arXiv:1606.01540](https://arxiv.org/pdf/1606.01540.pdf)
[Brax -- A Differentiable Physics Engine for Large Scale
Rigid Body Simulation, Freeman et al., NeurIPS 2021 (Dataset &
Benchmarking Track)](https://arxiv.org/pdf/2106.13281.pdf)
[TOAD-GAN: Coherent Style Level Generation from a Single Example,
Awiszus et al., AIIDE 2020](https://arxiv.org/pdf/2008.01531.pdf)
[dm_control: Software and Tasks for Continuous Control](https://arxiv.org/pdf/2006.12983.pdf)
## License
CARL falls under the Apache License 2.0 (see file 'LICENSE') as is permitted by all
work that we use. This includes CARLMario, which is not based on the Nintendo Game, but on
TOAD-GAN and TOAD-GUI running under an MIT license. They in turn make use of the Mario AI framework
(https://github.com/amidos2006/Mario-AI-Framework). This is not the original game but a replica,
explicitly built for research purposes and includes a copyright notice (https://github.com/amidos2006/Mario-AI-Framework#copyrights ).
| 4,675 | 45.76 | 187 | md |
null | CARL-main/changelog.md | # 0.2.1
- Add Finger (DMC) env
- Re-add RNA env (#78)
# 0.2.0
- Integrate dm control environments (#55)
- Add context masks to only append those to the state (#54)
- Extend classic control environments to parametrize initial state distributions (#52)
- Remove RNA environment for maintenance (#61)
- Fixed pre-commit (mypy, black, flake8, isort) (#62)
# 0.1.0
- Initial release.
| 380 | 26.214286 | 86 | md |
null | CARL-main/setup.py | import os
import setuptools
from carl import (
author,
author_email,
description,
package_name,
project_urls,
url,
version,
)
HERE = os.path.dirname(os.path.realpath(__file__))
def read_file(filepath: str) -> str:
with open(filepath, "r", encoding="utf-8") as fh:
return fh.read()
extras_require = {
"box2d": [
"gym[box2d]==0.24.1",
],
"brax": [
"brax>=0.0.10,<=0.0.16",
"protobuf>=3.17.3",
],
"dm_control": [
"dm_control>=1.0.3",
],
"mario": [
"torch>=1.9.0",
"Pillow>=8.3.1",
"py4j>=0.10.9.2",
],
"dev": [
"pytest>=6.1.1",
"pytest-cov",
"mypy",
"black",
"flake8",
"isort",
"pydocstyle",
"pre-commit",
],
"docs": [
"sphinx>=4.2.0",
"sphinx-gallery>=0.10.0",
"image>=1.5.33",
"sphinx-autoapi>=1.8.4",
]
}
setuptools.setup(
name=package_name,
author=author,
author_email=author_email,
description=description,
long_description=read_file(os.path.join(HERE, "README.md")),
long_description_content_type="text/markdown",
license="Apache 2.0",
license_file="LICENSE",
url=url,
project_urls=project_urls,
keywords=[
"RL",
"Generalization",
"Context",
"Reinforcement Learning"
],
version=version,
packages=setuptools.find_packages(exclude=["tests"]),
include_package_data=True,
python_requires=">=3.9",
install_requires=[
"gym==0.24.1",
"scipy>=1.7.0",
"ConfigArgParse>=1.5.1",
"numpy>=1.19.5",
"pandas>=1.3.0",
"xvfbwrapper>=0.2.9",
"matplotlib>=3.4.2",
"dataclasses>=0.6",
"numpyencoder>=0.3.0",
"pyglet>=1.5.15",
"pytablewriter>=0.62.0",
"PyYAML>=5.4.1",
"tabulate>=0.8.9",
"bs4>=0.0.1",
],
extras_require=extras_require,
test_suite="pytest",
platforms=["Linux"],
classifiers=[
"Programming Language :: Python :: 3",
"Natural Language :: English",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering",
"Topic :: Software Development",
],
)
| 2,639 | 22.157895 | 66 | py |
null | CARL-main/.github/workflows/dist.yaml | name: dist-check
on:
# Manually triggerable in github
workflow_dispatch:
# When a push occurs on either of these branches
push:
branches:
- main
- development
# When a push occurs on a PR that targets these branches
pull_request:
branches:
- main
- development
jobs:
dist:
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v2
with:
submodules: "recursive"
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Build dist
run: |
python setup.py sdist
- name: Twine check
run: |
pip install twine
last_dist=$(ls -t dist/ContextuaRL-*.tar.gz | head -n 1)
twine_output=`twine check "$last_dist"`
if [[ "$twine_output" != "Checking $last_dist: PASSED" ]]
then
          echo $twine_output
          exit 1
else
pip install $last_dist
fi
- name: Install dist
run: |
last_dist=$(ls -t dist/ContextuaRL-*.tar.gz | head -n 1)
pip install $last_dist
- name: PEP 561 Compliance
run: |
pip install mypy
cd .. # required to use the installed version of CARL
        # Note this doesn't perform mypy checks, only
# that the types are exported
if ! mypy -c "import carl"; then exit 1; fi
| 1,391 | 21.451613 | 65 | yaml |
null | CARL-main/.github/workflows/precommit.yaml | name: pre-commit
on:
# Manually triggerable in github
workflow_dispatch:
# When a push occurs on either of these branches
push:
branches:
- main
- development
# When a push occurs on a PR that targets these branches
pull_request:
branches:
- main
- development
jobs:
run-all-files:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
submodules: "recursive"
- name: Setup Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install pre-commit
run: |
pip install pre-commit
pre-commit install
- name: Run pre-commit
run: |
pre-commit run --all-files
| 731 | 17.3 | 58 | yaml |
null | CARL-main/.github/workflows/tests.yaml | name: Tests
on:
# Manually triggerable in github
workflow_dispatch:
# When a push occurs on either of these branches
push:
branches:
- main
- development
# When a push occurs on a PR that targets these branches
pull_request:
branches:
- main
- development
schedule:
# Every day at 7AM UTC
- cron: '0 07 * * *'
env:
# Arguments used for pytest
pytest-args: >-
--durations=10
# Arguments used for code-cov which is later used to annotate PR's on github
code-cov-args: >-
--cov=carl
--cov-report=xml
jobs:
ubuntu:
name: ${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.kind }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [windows-latest, macos-latest, ubuntu-latest]
python-version: ['3.9', '3.10']
kind: ['conda', 'source', 'dist']
exclude:
# Exclude all configurations *-*-dist, but include one later in `include`
- kind: 'dist'
        # Exclude windows as bash commands won't work in the Windows runner
- os: windows-latest
# Exclude macos as there are permission errors using conda as we do
- os: macos-latest
include:
# Add the tag code-cov to ubuntu-3.7-source
- os: ubuntu-latest
python-version: 3.9
kind: 'source'
code-cov: true
# Include one config with dist, ubuntu-3.7-dist
- os: ubuntu-latest
python-version: 3.9
kind: 'dist'
steps:
- name: Checkout
uses: actions/checkout@v2
with:
submodules: "recursive"
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Conda install
if: matrix.kind == 'conda'
run: |
# Miniconda is available in $CONDA env var
$CONDA/bin/conda create -n testenv --yes pip wheel gxx_linux-64 gcc_linux-64 python=${{ matrix.python-version }}
$CONDA/envs/testenv/bin/python3 -m pip install --upgrade pip
$CONDA/envs/testenv/bin/pip3 install -e .[dev,dm_control]
- name: Source install
if: matrix.kind == 'source'
run: |
python -m pip install --upgrade pip
pip install -e .[dev,dm_control]
- name: Dist install
if: matrix.kind == 'dist'
run: |
python -m pip install --upgrade pip
python setup.py sdist
last_dist=$(ls -t dist/ContextuaRL-*.tar.gz | head -n 1)
pip install $last_dist[dev,dm_control]
- name: Tests
timeout-minutes: 60
run: |
if [[ ${{ matrix.kind }} == 'conda' ]]; then
PYTHON=$CONDA/envs/testenv/bin/python3
export PATH="$CONDA/envs/testenv/bin:$PATH"
else
PYTHON=$(which python3)
fi
if [ ${{ matrix.code-cov }} ]; then
$PYTHON -m pytest ${{ env.pytest-args }} ${{ env.code-cov-args }} --ignore=test/local_only test
else
$PYTHON -m pytest ${{ env.pytest-args }} --ignore=test/local_only test
fi
- name: Upload coverage
if: matrix.code-cov && always()
uses: codecov/codecov-action@v2
with:
fail_ci_if_error: true
verbose: true
| 3,330 | 25.436508 | 120 | yaml |
null | CARL-main/carl/__init__.py | __license__ = "Apache-2.0 License"
__version__ = "0.2.0"
__author__ = "Carolin Benjamins, Theresa Eimer, Frederik Schubert, André Biedenkapp, Aditya Mohan, Sebastian Döhler"
import datetime
name = "CARL"
package_name = "ContextuaRL"
author = __author__
author_email = "[email protected]"
description = "CARL- Contextually Adaptive Reinforcement Learning"
url = "https://www.automl.org/"
project_urls = {
"Documentation": "https://carl.readthedocs.io/en/latest/",
"Source Code": "https://github.com/https://github.com/automl/CARL",
}
copyright = f"""
Copyright {datetime.date.today().strftime('%Y')}, AutoML.org Freiburg-Hannover
"""
version = __version__
| 683 | 28.73913 | 116 | py |
null | CARL-main/carl/context/__init__.py | 0 | 0 | 0 | py |
|
null | CARL-main/carl/context/augmentation.py | from typing import List, Optional, Union
import numpy as np
def add_gaussian_noise(
default_value: Union[float, List[float]],
    percentage_std: float = 0.01,
    random_generator: Optional[np.random.Generator] = None,
) -> Union[float, np.ndarray]:
"""
Add gaussian noise to default value.
Parameters
----------
default_value: Union[float, List[float]]
Mean of normal distribution. Can be a scalar or a list of floats. If it is a list(-like) with length n, the
output will also be of length n.
percentage_std: float, optional = 0.01
Relative standard deviation, multiplied with default value (mean) is standard deviation of normal distribution.
If the default value is 0, percentage_std is assumed to be the absolute standard deviation.
random_generator: np.random.Generator, optional = None
Optional random generator to ensure deterministic behavior.
Returns
-------
Union[float, List[float]]
Default value with gaussian noise. If input was list (or array) with length n, output is also list (or array)
with length n.
"""
    if isinstance(default_value, (int, float)) and default_value != 0:
std = percentage_std * np.abs(default_value)
else:
std = percentage_std
mean = np.zeros_like(default_value)
    if random_generator is None:
random_generator = np.random.default_rng()
value = default_value + random_generator.normal(loc=mean, scale=std)
return value
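# Usage sketch: perturb a default gravity value with 5% relative noise; the
# seeded generator makes the draw reproducible.
if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    noisy_gravity = add_gaussian_noise(
        default_value=9.8, percentage_std=0.05, random_generator=rng
    )
    print(noisy_gravity)  # a value close to 9.8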
| 1,494 | 35.463415 | 119 | py |
null | CARL-main/carl/context/sampling.py | # flake8: noqa: W605
from typing import Any, Dict, List, Tuple
import numpy as np
from scipy.stats import norm
from carl import envs
from carl.utils.types import Context, Contexts
def get_default_context_and_bounds(
env_name: str,
) -> Tuple[Dict[Any, Any], Dict[Any, Any]]:
"""
Get context feature defaults and bounds for environment.
Parameters
----------
env_name: str
Name of CARLEnv.
Returns
-------
Tuple[Dict[Any, Any], Dict[Any, Any]]
Context feature defaults as dictionary, context feature bounds as dictionary.
Keys are the names of the context features.
Context feature bounds can be in following formats:
int/float context features:
``"MAIN_ENGINE_POWER": (0, 50, float)``
list of int/float context feature:
``"target_structure_ids": (0, np.inf, [list, int])``
categorical context features:
``"VEHICLE": (None, None, "categorical", np.arange(0, len(PARKING_GARAGE)))``
"""
# TODO make less hacky / make explicit
env_defaults = getattr(envs, f"{env_name}_defaults")
env_bounds = getattr(envs, f"{env_name}_bounds")
return env_defaults, env_bounds
def sample_contexts(
env_name: str,
context_feature_args: List[str],
num_contexts: int,
default_sample_std_percentage: float = 0.05,
fallback_sample_std: float = 0.1,
) -> Dict[int, Dict[str, Any]]:
"""
Sample contexts.
Control which/how the context features are sampled with `context_feature_args`.
Categorical context features are sampled radonmly via the given choices in the context bounds.
For continuous context features a new value is sampled in the following way:
.. math:: x_{cf,new} \sim \mathcal{N}(x_{cf, default}, \sigma_{rel} \cdot x_{cf, default})
:math:`x_{cf,new}`: New context feature value
:math:`x_{cf, default}`: Default context feature value
:math:`\sigma_{rel}`: Relative standard deviation, parametrized in `context_feature_args`
by providing e.g. `["<context_feature_name>_std", "0.05"]`.
Examples
--------
Sampling two contexts for the CARLAcrobotEnv and changing only the context feature link_length_2.
>>> sample_contexts("CARLAcrobotEnv", ["link_length_2"], 2)
{0: {'link_length_1': 1,
'link_length_2': 1.0645201049835367,
'link_mass_1': 1,
'link_mass_2': 1,
'link_com_1': 0.5,
'link_com_2': 0.5,
'link_moi': 1,
'max_velocity_1': 12.566370614359172,
'max_velocity_2': 28.274333882308138},
1: {'link_length_1': 1,
'link_length_2': 1.011885635790618,
'link_mass_1': 1,
'link_mass_2': 1,
'link_com_1': 0.5,
'link_com_2': 0.5,
'link_moi': 1,
'max_velocity_1': 12.566370614359172,
'max_velocity_2': 28.274333882308138}}
Parameters
----------
env_name: str
Name of MetaEnvironment
context_feature_args: List[str]
All arguments from the parser, e.g., ["context_feature_0", "context_feature_1", "context_feature_1_std", "0.05"]
num_contexts: int
Number of contexts to sample.
default_sample_std_percentage: float, optional
The default relative standard deviation to use if <context_feature_name>_std is not specified. The default is
0.05.
fallback_sample_std: float, optional
The fallback relative standard deviation. Defaults to 0.1.
Returns
-------
Dict[int, Dict[str, Any]]
Dictionary containing the sampled contexts. Keys are integers, values are Dicts containing the context feature
names as keys and context feature values as values, e.g.,
"""
# Get default context features and bounds
env_defaults, env_bounds = get_default_context_and_bounds(env_name=env_name)
# Create sample distributions/rules
sample_dists = {}
for context_feature_name in env_defaults.keys():
if context_feature_name in context_feature_args:
if f"{context_feature_name}_mean" in context_feature_args:
sample_mean = float(
context_feature_args[
context_feature_args.index(f"{context_feature_name}_mean") + 1
]
)
else:
sample_mean = env_defaults[context_feature_name]
if f"{context_feature_name}_std" in context_feature_args:
sample_std = float(
context_feature_args[
context_feature_args.index(f"{context_feature_name}_std") + 1
]
)
else:
sample_std = default_sample_std_percentage * np.abs(sample_mean)
if sample_mean == 0:
# Fallback sample standard deviation. Necessary if the sample mean is 0.
# In this case the sample standard deviation would be 0 as well and we would always sample
# the sample mean. Therefore we use a fallback sample standard deviation.
sample_std = fallback_sample_std # TODO change this back to sample_std
random_variable = norm(loc=sample_mean, scale=sample_std)
context_feature_type = env_bounds[context_feature_name][2]
sample_dists[context_feature_name] = (random_variable, context_feature_type)
# Sample contexts
contexts: Contexts = {}
for i in range(0, num_contexts):
c: Context = {}
# k = name of context feature
for k in env_defaults.keys():
if k in sample_dists.keys():
# If we have a special sampling distribution/rule for context feature k
random_variable = sample_dists[k][0]
context_feature_type = sample_dists[k][1]
lower_bound, upper_bound = env_bounds[k][0], env_bounds[k][1]
if context_feature_type == list:
length = np.random.randint(
500000
) # TODO should we allow lists to be this long? or should we parametrize this?
arg_class = sample_dists[k][1][1]
context_list = random_variable.rvs(size=length)
context_list = np.clip(context_list, lower_bound, upper_bound)
c[k] = [arg_class(c) for c in context_list]
elif context_feature_type == "categorical":
choices = env_bounds[k][3]
choice = np.random.choice(choices)
c[k] = choice
elif context_feature_type == "conditional":
condition = env_bounds[k][4]
choices = env_bounds[k][3][condition]
choice = np.random.choice(choices)
c[k] = choice
else:
c[k] = random_variable.rvs(size=1)[0] # sample variable
c[k] = np.clip(c[k], lower_bound, upper_bound) # check bounds
c[k] = context_feature_type(c[k]) # cast to given type
else:
# No special sampling rule for context feature k, use the default context feature value
c[k] = env_defaults[k]
contexts[i] = c
return contexts
| 7,309 | 38.301075 | 120 | py |
null | CARL-main/carl/context/selection.py | from __future__ import annotations
from abc import abstractmethod
from typing import Any, Callable, List, Optional, Tuple
import numpy as np
from carl.utils.types import Context, Contexts
class AbstractSelector(object):
"""
Base class for context selectors.
Context is selected when calling `select`, not in `__init__`.
Parameters
----------
contexts: Contexts
Context set. A `Context` is a Dict[str, Any].
Attributes
----------
contexts : Contexts
Context set.
context_ids : List[int]
Integer index for contexts.
contexts_keys : List[Any]
Keys of contexts dictionary.
n_calls : int
Number of times `select` has been called.
context_id : Optional[int]
Context id of current selected context. Is None at first.
"""
def __init__(self, contexts: Contexts):
self.contexts: Contexts = contexts
self.context_ids: List[int] = list(np.arange(len(contexts)))
self.contexts_keys: List[Any] = list(contexts.keys())
self.n_calls: int = 0
self.context_id: Optional[
int
] = None # holds index of current context (integer index of context keys)
@abstractmethod
def _select(self) -> Tuple[Context, int]:
"""
Select next context (internal).
Should be implemented in child class, internal use.
Returns
-------
context : Context
Selected context.
context_id : int
Integer id of selected context.
"""
...
def select(self) -> Context:
"""
Select next context (API).
Returns
-------
context : Context
Selected context.
"""
context, context_id = self._select()
self.context_id = context_id
self.n_calls += 1
return context
@property
def context_key(self) -> Any | None:
"""
Return context key
If no context has been selected yet (context_id=None),
return None.
Returns
-------
Any | None
The key of the current context or None
"""
        if self.context_id is not None:  # 0 is a valid id, so explicitly compare to None
key = self.contexts_keys[self.context_id]
else:
key = None
return key
class RandomSelector(AbstractSelector):
"""
Random Context Selector.
"""
def _select(self) -> Tuple[Context, int]:
# TODO seed?
context_id = np.random.choice(self.context_ids)
context = self.contexts[self.contexts_keys[context_id]]
return context, context_id
class RoundRobinSelector(AbstractSelector):
"""
Round robin context selector.
Iterate through all contexts and then start at the first again.
"""
def _select(self) -> Tuple[Context, int]:
if self.context_id is None:
self.context_id = -1
self.context_id = (self.context_id + 1) % len(self.contexts)
context = self.contexts[self.contexts_keys[self.context_id]]
return context, self.context_id
class CustomSelector(AbstractSelector):
"""
Custom selector.
Pass an individual function implementing selection logic. Could also be implemented by subclassing
`AbstractSelector`.
Parameters
----------
contexts: Contexts
Set of contexts.
selector_function: callable
Function receiving a pointer to the selector implementing selection logic.
See example below.
Examples
--------
>>> def selector_function(inst: AbstractSelector) -> Tuple[Context, int]:
>>> if inst.n_calls == 0:
>>> context_id = 1
>>> else:
>>> context_id = 0
>>> return inst.contexts[inst.contexts_keys[context_id]], context_id
>>> contexts = ...
>>> selector = CustomSelector(contexts=contexts, selector_function=selector_function)
This custom selector selects a context id based on the number of times `select` has been called.
"""
def __init__(
self,
contexts: Contexts,
selector_function: Callable[[AbstractSelector], Tuple[Context, int]],
):
super().__init__(contexts=contexts)
self.selector_function = selector_function
def _select(self) -> Tuple[Context, int]:
context, context_id = self.selector_function(self)
self.context_id = context_id
return context, context_id
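# Usage sketch: cycling deterministically through two toy contexts.
if __name__ == "__main__":
    toy_contexts = {0: {"gravity": 9.8}, 1: {"gravity": 3.7}}
    selector = RoundRobinSelector(contexts=toy_contexts)
    print([selector.select() for _ in range(3)])
    # -> [{'gravity': 9.8}, {'gravity': 3.7}, {'gravity': 9.8}]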
| 4,448 | 25.640719 | 102 | py |
null | CARL-main/carl/context/utils.py | from typing import Any, Dict, List, Tuple, Type
import numpy as np
def get_context_bounds(
context_keys: List[str], context_bounds: Dict[str, Tuple[float, float, Type[Any]]]
) -> Tuple[np.ndarray, np.ndarray]:
"""
Get context bounds for specific features.
Could add sophisticated method here.
Parameters
----------
context_keys: List[str]
Names of context features.
context_bounds: Dict[str, Tuple[float, float, type]]
Dictionary containing lower and upper bound as a tuple, e.g., "context_feature_name": (-np.inf, np.inf)).
Returns
-------
lower_bounds, upper_bounds: np.array, np.array
Lower and upper bounds as arrays.
"""
lower_bounds = np.empty(shape=len(context_keys))
upper_bounds = np.empty(shape=len(context_keys))
for i, context_key in enumerate(context_keys):
l, u, dtype = context_bounds[context_key]
lower_bounds[i] = l
upper_bounds[i] = u
return lower_bounds, upper_bounds
if __name__ == "__main__":
DEFAULT_CONTEXT = {
"min_position": -1.2, # unit?
"max_position": 0.6, # unit?
"max_speed": 0.07, # unit?
"goal_position": 0.5, # unit?
"goal_velocity": 0, # unit?
"force": 0.001, # unit?
"gravity": 0.0025, # unit?
"min_position_start": -0.6,
"max_position_start": -0.4,
"min_velocity_start": 0.0,
"max_velocity_start": 0.0,
}
CONTEXT_BOUNDS = {
"min_position": (-np.inf, np.inf, float),
"max_position": (-np.inf, np.inf, float),
"max_speed": (0, np.inf, float),
"goal_position": (-np.inf, np.inf, float),
"goal_velocity": (-np.inf, np.inf, float),
"force": (-np.inf, np.inf, float),
"gravity": (0, np.inf, float),
"min_position_start": (-np.inf, np.inf, float),
"max_position_start": (-np.inf, np.inf, float),
"min_velocity_start": (-np.inf, np.inf, float),
"max_velocity_start": (-np.inf, np.inf, float),
}
lower, upper = get_context_bounds(list(DEFAULT_CONTEXT.keys()), CONTEXT_BOUNDS)
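    # Show the assembled bound arrays for the demo context above.
    print("lower bounds:", lower)
    print("upper bounds:", upper)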
| 2,130 | 30.80597 | 113 | py |
null | CARL-main/carl/envs/__init__.py | # flake8: noqa: F401
# Modular imports
import importlib.util as iutil
import warnings
# Classic control is in gym and thus necessary for the base version to run
from carl.envs.classic_control import *
# Environment loading
box2d_spec = iutil.find_spec("Box2D")
found = box2d_spec is not None
if found:
from carl.envs.box2d import *
else:
warnings.warn(
"Module 'Box2D' not found. If you want to use these environments, please follow the installation guide."
)
brax_spec = iutil.find_spec("brax")
found = brax_spec is not None
if found:
from carl.envs.brax import *
else:
warnings.warn(
"Module 'Brax' not found. If you want to use these environments, please follow the installation guide."
)
try:
from carl.envs.mario import *
except Exception:
warnings.warn(
"Module 'Mario' not found. Please follow installation guide for ToadGAN environment."
)
dm_control_spec = iutil.find_spec("dm_control")
found = dm_control_spec is not None
if found:
from carl.envs.dmc import *
else:
warnings.warn(
"Module 'dm_control' not found. If you want to use these environments, please follow the installation guide."
)
| 1,188 | 25.422222 | 117 | py |
null | CARL-main/carl/envs/carl_env.py | from __future__ import annotations
from typing import Any, Dict, List, Mapping, Optional, Tuple, Type, Union
import importlib
import inspect
import json
import os
from types import ModuleType
import gym
import numpy as np
from gym import Wrapper, spaces
from carl.context.augmentation import add_gaussian_noise
from carl.context.selection import AbstractSelector, RoundRobinSelector
from carl.context.utils import get_context_bounds
from carl.utils.trial_logger import TrialLogger
from carl.utils.types import Context, Contexts, ObsType, Vector
brax_spec = importlib.util.find_spec("brax")
if brax_spec is not None:
import jax.numpy as jnp
import jaxlib
class CARLEnv(Wrapper):
"""
Meta-environment formulating the original environments as cMDPs.
Here, a context feature can be anything defining the behavior of the
environment. An instance is the environment with a specific context.
Can change the context after each episode.
If not all keys are present in the provided context(s) the contexts will be filled
with the default context values in the init of the class.
Parameters
----------
env: gym.Env
Environment which context features are made visible / which is turned into a cMDP.
contexts: Contexts
        Dict of contexts/instances. Keys are context ids, values are contexts as
Dict[context feature id, context feature value].
hide_context: bool = False
If False, the context will be appended to the original environment's state.
add_gaussian_noise_to_context: bool = False
        Whether to add Gaussian noise to the context with the relative standard deviation
'gaussian_noise_std_percentage'.
gaussian_noise_std_percentage: float = 0.01
The relative standard deviation for the Gaussian noise. The actual standard deviation
is calculated by 'gaussian_noise_std_percentage' * context feature value.
logger: TrialLogger, optional
Optional TrialLogger which takes care of setting up logging directories and handles
custom logging.
max_episode_length: int = 1e6
Maximum length of episode in (time)steps. Cutoff.
scale_context_features: str = "no"
        Whether to scale context features. Available modes are 'no', 'by_mean' and 'by_default'.
'by_mean' scales the context features by their mean over all passed instances and
'by_default' scales the context features by their default values ('default_context').
default_context: Context
The default context of the environment. Used for scaling the context features if applicable. Used for filling
incomplete contexts.
state_context_features: Optional[List[str]] = None
If the context is visible to the agent (hide_context=False), the context features are appended to the state.
state_context_features specifies which of the context features are appended to the state. The default is
appending all context features.
context_mask: Optional[List[str]]
Name of context features to be ignored when appending context features to the state.
    context_selector: Optional[Union[AbstractSelector, Type[AbstractSelector]]]
Context selector (object of) class, e.g., can be RoundRobinSelector (default) or RandomSelector.
Should subclass AbstractSelector.
context_selector_kwargs: Optional[Dict]
Optional kwargs for context selector class.
Raises
------
ValueError
If the choice of instance_mode is not available.
ValueError
If the choice of scale_context_features is not available.
"""
available_scale_methods = ["by_default", "by_mean", "no"]
available_instance_modes = ["random", "rr", "roundrobin"]
def __init__(
self,
env: gym.Env,
n_envs: int = 1,
contexts: Contexts = {},
hide_context: bool = True,
add_gaussian_noise_to_context: bool = False,
gaussian_noise_std_percentage: float = 0.01,
logger: Optional[TrialLogger] = None,
max_episode_length: int = int(1e6),
scale_context_features: str = "no",
default_context: Optional[Context] = None,
state_context_features: Optional[List[str]] = None,
context_mask: Optional[List[str]] = None,
dict_observation_space: bool = False,
context_selector: Optional[
Union[AbstractSelector, Type[AbstractSelector]]
] = None,
context_selector_kwargs: Optional[Dict] = None,
):
super().__init__(env=env)
# Gather args
self._context: Context # init for property
self._contexts: Contexts # init for property
self.default_context = default_context
self.contexts = contexts
self.context_mask = context_mask
self.hide_context = hide_context
self.dict_observation_space = dict_observation_space
self.cutoff = max_episode_length
self.logger = logger
self.add_gaussian_noise_to_context = add_gaussian_noise_to_context
self.gaussian_noise_std_percentage = gaussian_noise_std_percentage
self.context_selector: Type[AbstractSelector]
if context_selector is None:
self.context_selector = RoundRobinSelector(contexts=contexts) # type: ignore [assignment]
elif isinstance(context_selector, AbstractSelector):
self.context_selector = context_selector # type: ignore [assignment]
elif inspect.isclass(context_selector) and issubclass(
context_selector, AbstractSelector
):
if context_selector_kwargs is None:
context_selector_kwargs = {}
_context_selector_kwargs = {"contexts": contexts}
context_selector_kwargs.update(_context_selector_kwargs)
self.context_selector = context_selector(**context_selector_kwargs) # type: ignore [assignment]
else:
raise ValueError(
f"Context selector must be None or an AbstractSelector class or instance. "
f"Got type {type(context_selector)}."
)
context_keys: Vector
if state_context_features is not None:
if state_context_features == "changing_context_features" or (
type(state_context_features) == list
and state_context_features[0] == "changing_context_features"
):
# if we have only one context the context features do not change during training
if len(self.contexts) > 1:
# detect which context feature changes
context_array = np.array(
[np.array(list(c.values())) for c in self.contexts.values()]
)
which_cf_changes = ~np.all(
context_array == context_array[0, :], axis=0
)
context_keys = np.array(
list(self.contexts[list(self.contexts.keys())[0]].keys())
)
state_context_features = context_keys[which_cf_changes]
# TODO properly record which are appended to state
if logger is not None:
fname = os.path.join(logger.logdir, "env_info.json")
save_val: Optional[List[str]]
if state_context_features is not None:
save_val = list(state_context_features) # please json
else:
save_val = state_context_features
with open(fname, "w") as file:
data = {"state_context_features": save_val}
json.dump(data, file, indent="\t")
else:
state_context_features = []
else:
state_context_features = list(
self.contexts[list(self.contexts.keys())[0]].keys()
)
self.state_context_features: List[str] = state_context_features # type: ignore [assignment]
# (Mypy thinks that state_context_features is of type Optional[List[str]] which it can't be anymore due to the
# if-else clause)
# state_context_features contains the names of the context features that should be appended to the state
# However, if context_mask is set, we want to update state_context_feature_names so that the context features
# in context_mask are not appended to the state anymore.
if self.context_mask:
self.state_context_features = [
s for s in self.state_context_features if s not in self.context_mask
]
self.step_counter = 0 # type: int # increased in/after step
self.total_timestep_counter = 0 # type: int
self.episode_counter = -1 # type: int # increased during reset
self.whitelist_gaussian_noise = (
None
) # type: Optional[List[str]] # holds names of context features
# where it is allowed to add gaussian noise
# Set initial context
# TODO only set context during reset?
# Don't use the context selector. This way after the first reset we actually
# start with the first context. We just need a default/initial context here
        # so the tests and the rest of the setup do not break.
context_keys = list(self.contexts.keys())
self.context = self.contexts[context_keys[0]]
# Scale context features
if scale_context_features not in self.available_scale_methods:
raise ValueError(
f"{scale_context_features} not in {self.available_scale_methods}."
)
self.scale_context_features = scale_context_features
self.context_feature_scale_factors = None
if self.scale_context_features == "by_mean":
cfs_vals = np.concatenate(
[np.array(list(v.values()))[:, None] for v in self.contexts.values()],
axis=-1,
)
self.context_feature_scale_factors = np.mean(cfs_vals, axis=-1)
self.context_feature_scale_factors[
self.context_feature_scale_factors == 0
] = 1 # otherwise value / scale_factor = nan
elif self.scale_context_features == "by_default":
if self.default_context is None:
raise ValueError(
"Please set default_context for scale_context_features='by_default'."
)
self.context_feature_scale_factors = np.array(
list(self.default_context.values())
)
self.context_feature_scale_factors[
self.context_feature_scale_factors == 0
] = 1 # otherwise value / scale_factor = nan
self.vectorized = n_envs > 1
self.build_observation_space()
@property
def context(self) -> Dict:
return self._context
@context.setter
def context(self, context: Context) -> None:
self._context = self.fill_context_with_default(context=context)
@property
def context_key(self) -> Any | None:
return self.context_selector.context_key
@property
def contexts(self) -> Dict[Any, Dict[Any, Any]]:
return self._contexts
@contexts.setter
def contexts(self, contexts: Contexts) -> None:
self._contexts = {
k: self.fill_context_with_default(context=v) for k, v in contexts.items()
}
def reset(self, **kwargs: Dict) -> Union[ObsType, tuple[ObsType, dict]]: # type: ignore [override]
"""
Reset environment.
Parameters
----------
kwargs: Dict
Any keyword arguments passed to env.reset().
Returns
-------
state
State of environment after reset.
info_dict : dict
            Only returned if return_info=True.
"""
self.episode_counter += 1
self.step_counter = 0
self._progress_instance()
self._update_context()
self._log_context()
return_info = kwargs.get("return_info", False)
_ret = self.env.reset(**kwargs) # type: ignore [arg-type]
info_dict = dict()
if return_info:
state, info_dict = _ret
else:
state = _ret
state = self.build_context_adaptive_state(state=state)
ret = state
if return_info:
ret = state, info_dict
return ret
def build_context_adaptive_state(
self, state: List[float], context_feature_values: Optional[Vector] = None
) -> Union[Vector, Dict]:
tnp: ModuleType = np
if brax_spec is not None:
            if isinstance(state, jaxlib.xla_extension.DeviceArray):
tnp = jnp
if not self.hide_context:
if context_feature_values is None:
# use current context
context_values = tnp.array(list(self.context.values()))
else:
# use potentially modified context
context_values = context_feature_values
# Append context to state
if self.state_context_features is not None:
# if self.state_context_features is an empty list, the context values will also be empty and we
# get the correct state
context_keys = list(self.context.keys())
context_values = tnp.array(
[
context_values[context_keys.index(k)]
for k in self.state_context_features
]
)
if self.dict_observation_space:
state: Dict = dict(state=state, context=context_values) # type: ignore [no-redef]
elif self.vectorized:
state = tnp.array([np.concatenate((s, context_values)) for s in state])
else:
state = tnp.concatenate((state, context_values))
return state
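    # Editor's note (illustrative, hypothetical values): with
    #   context = {"gravity": -10.0, "mass": 1.0},
    #   state = np.array([0.1, 0.2]),
    #   state_context_features = ["gravity"],
    # the flat result is np.array([0.1, 0.2, -10.0]); with
    # dict_observation_space=True it is
    #   {"state": state, "context": np.array([-10.0])}.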
def step(self, action: Any) -> Tuple[Any, Any, bool, Dict]:
"""
Step the environment.
1. Step
2. Add (potentially scaled) context features to state if hide_context = False.
        Emits done once the step counter reaches the cutoff (max_episode_length).
Parameters
----------
action:
Action to pass to env.step.
Returns
-------
state, reward, done, info: Any, Any, bool, Dict
Standard signature.
"""
# Step the environment
state, reward, done, info = self.env.step(action)
if not self.hide_context:
# Scale context features
context_feature_values = np.array(list(self.context.values()))
            if self.scale_context_features in ("by_default", "by_mean"):
                context_feature_values /= self.context_feature_scale_factors
            elif self.scale_context_features == "no":
                pass
else:
raise ValueError(
f"{self.scale_context_features} not in {self.available_scale_methods}."
)
# Add context features to state
state = self.build_context_adaptive_state(
state=state, context_feature_values=context_feature_values
)
self.total_timestep_counter += 1
self.step_counter += 1
if self.step_counter >= self.cutoff:
done = True
return state, reward, done, info
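    # Editor's note (illustrative): with scale_context_features="by_mean" and
    # two contexts {"g": -10.0} and {"g": -5.0}, the per-feature scale factor
    # is mean(-10.0, -5.0) = -7.5, so the value appended to the state is
    # context["g"] / -7.5 (e.g. -10.0 / -7.5 ≈ 1.33).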
def __getattr__(self, name: str) -> Any:
# TODO: does this work with activated noise? I think we need to update it
# We need this because our CARLEnv has underscore class methods which would
# throw an error otherwise
if name in ["_progress_instance", "_update_context", "_log_context"]:
return getattr(self, name)
if name.startswith("_"):
raise AttributeError(
"attempted to get missing private attribute '{}'".format(name)
)
return getattr(self.env, name)
def fill_context_with_default(self, context: Context) -> Dict:
"""
Fill the context with the default values if entries are missing
Parameters
----------
        context: Context
            Context that may be missing entries of the default context.
        Returns
        -------
        context: Context
            Context with missing entries filled in from the default context.
"""
if self.default_context:
context_def = self.default_context.copy()
context_def.update(context)
context = context_def
return context
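    # Editor's note (illustrative): with default_context = {"g": -10.0, "m": 1.0}
    # and context = {"g": -3.7}, the filled context is {"g": -3.7, "m": 1.0}:
    # missing keys fall back to the default, provided keys take precedence.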
def _progress_instance(self) -> None:
"""
Progress instance.
In this case instance is a specific context.
        1. Select the next instance (context) via the context selector
           (round robin over the given contexts by default).
2. If Gaussian noise should be added to whitelisted context features, do so.
Returns
-------
None
"""
context = self.context_selector.select() # type: ignore [call-arg]
if self.add_gaussian_noise_to_context and self.whitelist_gaussian_noise:
context_augmented = {}
for key, value in context.items():
if key in self.whitelist_gaussian_noise:
context_augmented[key] = add_gaussian_noise(
default_value=value,
percentage_std=self.gaussian_noise_std_percentage,
random_generator=None, # self.np_random TODO discuss this
)
else:
context_augmented[key] = context[key]
context = context_augmented
self.context = context
def build_observation_space(
self,
env_lower_bounds: Optional[Vector] = None,
env_upper_bounds: Optional[Vector] = None,
context_bounds: Optional[Mapping[str, Tuple[float, float, type]]] = None,
) -> None:
"""
Build observation space of environment.
If the hide_context = False, add correct bounds for the context features to the
observation space.
Parameters
----------
env_lower_bounds: Optional[Union[List, np.array]], default=None
Lower bounds for environment observation space. If env_lower_bounds and env_upper_bounds
both are None, (re-)create bounds (low=-inf, high=inf) with correct dimension.
env_upper_bounds: Optional[Union[List, np.array]], default=None
Upper bounds for environment observation space.
        context_bounds: Optional[Dict[str, Tuple[float, float, type]]], default=None
Lower and upper bounds for context features.
The bounds are provided as a Dict containing the context feature names/ids as keys and the
bounds per feature as a tuple (low, high, dtype).
If None and the context should not be hidden,
creates default bounds with (low=-inf, high=inf) with correct dimension.
Raises
------
ValueError:
If (env.)observation space is not gym.spaces.Box and the context should not be hidden
(hide_context = False).
Returns
-------
None
"""
self.observation_space: gym.spaces.Space
if (
not self.dict_observation_space
and not isinstance(self.observation_space, spaces.Box)
and not self.hide_context
):
raise ValueError(
"This environment does not yet support non-hidden contexts. Only supports "
"Box observation spaces."
)
obs_space = (
self.env.observation_space.spaces["state"].low
if isinstance(self.env.observation_space, spaces.Dict)
else self.env.observation_space.low # type: ignore [attr-defined]
)
obs_shape = obs_space.shape
if len(obs_shape) == 3 and self.hide_context:
# do not touch pixel state
pass
else:
if env_lower_bounds is None and env_upper_bounds is None:
obs_dim = obs_shape[0]
env_lower_bounds = -np.inf * np.ones(obs_dim)
env_upper_bounds = np.inf * np.ones(obs_dim)
if self.hide_context or (
self.state_context_features is not None
and len(self.state_context_features) == 0
):
self.env.observation_space = spaces.Box(
np.array(env_lower_bounds),
np.array(env_upper_bounds),
dtype=np.float32,
)
else:
context_keys = list(self.context.keys())
if context_bounds is None:
context_dim = len(list(self.context.keys()))
context_lower_bounds = -np.inf * np.ones(context_dim)
context_upper_bounds = np.inf * np.ones(context_dim)
else:
context_lower_bounds, context_upper_bounds = get_context_bounds(
context_keys, context_bounds # type: ignore [arg-type]
)
if self.state_context_features is not None:
ids = np.array(
[context_keys.index(k) for k in self.state_context_features]
)
context_lower_bounds = context_lower_bounds[ids]
context_upper_bounds = context_upper_bounds[ids]
if self.dict_observation_space:
self.env.observation_space = spaces.Dict(
{
"state": spaces.Box(
low=np.array(env_lower_bounds),
high=np.array(env_upper_bounds),
dtype=np.float32,
),
"context": spaces.Box(
low=np.array(context_lower_bounds),
high=np.array(context_upper_bounds),
dtype=np.float32,
),
}
)
else:
low: Vector = np.concatenate(
(np.array(env_lower_bounds), np.array(context_lower_bounds))
)
high: Vector = np.concatenate(
(np.array(env_upper_bounds), np.array(context_upper_bounds))
)
self.env.observation_space = spaces.Box(
low=np.array(low), high=np.array(high), dtype=np.float32
)
self.observation_space = (
self.env.observation_space
) # make sure it is the same object
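    # Editor's note (illustrative call, bounds hypothetical): to constrain the
    # context part of a flat observation space one could call
    #   env.build_observation_space(
    #       context_bounds={"GRAVITY_Y": (-20.0, -0.01, float)}
    #   )
    # Leaving env_lower_bounds/env_upper_bounds as None recreates (-inf, inf)
    # bounds with the correct dimension for the raw state.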
def _update_context(self) -> None:
"""
Update the context feature values of the environment.
Returns
-------
None
"""
raise NotImplementedError
def _log_context(self) -> None:
"""
Log context.
Returns
-------
None
"""
if self.logger:
self.logger.write_context(
self.episode_counter, self.total_timestep_counter, self.context
)
# File: CARL-main/carl/envs/box2d/__init__.py
# flake8: noqa: F401
from carl.envs.box2d.carl_bipedal_walker import (
CONTEXT_BOUNDS as CARLBipedalWalkerEnv_bounds,
)
from carl.envs.box2d.carl_bipedal_walker import (
DEFAULT_CONTEXT as CARLBipedalWalkerEnv_defaults,
)
from carl.envs.box2d.carl_bipedal_walker import CARLBipedalWalkerEnv
# Context envs and bounds by name
from carl.envs.box2d.carl_lunarlander import CONTEXT_BOUNDS as CARLLunarLanderEnv_bounds
from carl.envs.box2d.carl_lunarlander import (
DEFAULT_CONTEXT as CARLLunarLanderEnv_defaults,
)
from carl.envs.box2d.carl_lunarlander import CARLLunarLanderEnv
from carl.envs.box2d.carl_vehicle_racing import (
CONTEXT_BOUNDS as CARLVehicleRacingEnv_bounds,
)
from carl.envs.box2d.carl_vehicle_racing import (
DEFAULT_CONTEXT as CARLVehicleRacingEnv_defaults,
)
from carl.envs.box2d.carl_vehicle_racing import CARLVehicleRacingEnv
# File: CARL-main/carl/envs/box2d/carl_bipedal_walker.py
from typing import Dict, List, Optional, Union
import numpy as np
from Box2D.b2 import edgeShape, fixtureDef, polygonShape
from gym.envs.box2d import bipedal_walker
from gym.envs.box2d import bipedal_walker as bpw
from carl.context.selection import AbstractSelector
from carl.envs.carl_env import CARLEnv
from carl.utils.trial_logger import TrialLogger
from carl.utils.types import Context, Contexts
DEFAULT_CONTEXT = {
"FPS": 50,
"SCALE": 30.0, # affects how fast-paced the game is, forces should be adjusted as well
"GRAVITY_X": 0,
"GRAVITY_Y": -10,
# surroundings
"FRICTION": 2.5,
"TERRAIN_STEP": 14 / 30.0,
"TERRAIN_LENGTH": 200, # in steps
"TERRAIN_HEIGHT": 600 / 30 / 4, # VIEWPORT_H/SCALE/4
"TERRAIN_GRASS": 10, # low long are grass spots, in steps
"TERRAIN_STARTPAD": 20, # in steps
# walker
"MOTORS_TORQUE": 80,
"SPEED_HIP": 4,
"SPEED_KNEE": 6,
"LIDAR_RANGE": 160 / 30.0,
"LEG_DOWN": -8 / 30.0,
"LEG_W": 8 / 30.0,
"LEG_H": 34 / 30.0,
# absolute value of random force applied to walker at start of episode
"INITIAL_RANDOM": 5,
# Size of world
"VIEWPORT_W": 600,
"VIEWPORT_H": 400,
}
# TODO make bounds more generous for all Box2D envs?
CONTEXT_BOUNDS = {
"FPS": (1, 500, float),
"SCALE": (
1,
100,
float,
), # affects how fast-paced the game is, forces should be adjusted as well
# surroundings
"FRICTION": (0, 10, float),
"TERRAIN_STEP": (0.25, 1, float),
"TERRAIN_LENGTH": (100, 500, int), # in steps
"TERRAIN_HEIGHT": (3, 10, float), # VIEWPORT_H/SCALE/4
"TERRAIN_GRASS": (5, 15, int), # low long are grass spots, in steps
"TERRAIN_STARTPAD": (10, 30, int), # in steps
# walker
"MOTORS_TORQUE": (0, 200, float),
"SPEED_HIP": (1e-6, 15, float),
"SPEED_KNEE": (1e-6, 15, float),
"LIDAR_RANGE": (0.5, 20, float),
"LEG_DOWN": (-2, -0.25, float),
"LEG_W": (0.25, 0.5, float),
"LEG_H": (0.25, 2, float),
# absolute value of random force applied to walker at start of episode
"INITIAL_RANDOM": (0, 50, float),
# Size of world
"VIEWPORT_W": (400, 1000, int),
"VIEWPORT_H": (200, 800, int),
"GRAVITY_X": (-20, 20, float), # unit: m/s²
"GRAVITY_Y": (
-20,
-0.01,
float,
), # the y-component of gravity must be smaller than 0 because otherwise the
# body leaves the frame by going up
}
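# Editor's note (illustrative): each CONTEXT_BOUNDS entry is a
# (low, high, dtype) tuple. A custom context set respecting these bounds
# could look like
#   contexts = {
#       0: {**DEFAULT_CONTEXT, "GRAVITY_Y": -10.0},
#       1: {**DEFAULT_CONTEXT, "GRAVITY_Y": -3.7},  # Mars-like gravity
#   }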
class CARLBipedalWalkerEnv(CARLEnv):
def __init__(
self,
env: Optional[bipedal_walker.BipedalWalker] = None,
contexts: Contexts = {},
hide_context: bool = True,
add_gaussian_noise_to_context: bool = False,
gaussian_noise_std_percentage: float = 0.05,
logger: Optional[TrialLogger] = None,
scale_context_features: str = "no",
default_context: Optional[Context] = DEFAULT_CONTEXT,
state_context_features: Optional[List[str]] = None,
context_mask: Optional[List[str]] = None,
dict_observation_space: bool = False,
context_selector: Optional[
Union[AbstractSelector, type[AbstractSelector]]
] = None,
context_selector_kwargs: Optional[Dict] = None,
):
"""
Parameters
----------
        env: bipedal_walker.BipedalWalker, optional
            Defaults to gym's BipedalWalker environment (bipedal_walker.BipedalWalker).
        contexts: Contexts, optional
            Different contexts / different environment parameter settings.
"""
if env is None:
env = bipedal_walker.BipedalWalker()
if not contexts:
contexts = {0: DEFAULT_CONTEXT}
super().__init__(
env=env,
contexts=contexts,
hide_context=hide_context,
add_gaussian_noise_to_context=add_gaussian_noise_to_context,
gaussian_noise_std_percentage=gaussian_noise_std_percentage,
logger=logger,
scale_context_features=scale_context_features,
default_context=default_context,
state_context_features=state_context_features,
dict_observation_space=dict_observation_space,
context_selector=context_selector,
context_selector_kwargs=context_selector_kwargs,
context_mask=context_mask,
)
self.whitelist_gaussian_noise = list(
DEFAULT_CONTEXT.keys()
) # allow to augment all values
def _update_context(self) -> None:
self.env: bipedal_walker.BipedalWalker
bpw.FPS = self.context["FPS"]
bpw.SCALE = self.context["SCALE"]
bpw.FRICTION = self.context["FRICTION"]
bpw.TERRAIN_STEP = self.context["TERRAIN_STEP"]
bpw.TERRAIN_LENGTH = int(
self.context["TERRAIN_LENGTH"]
) # TODO do this automatically
bpw.TERRAIN_HEIGHT = self.context["TERRAIN_HEIGHT"]
bpw.TERRAIN_GRASS = self.context["TERRAIN_GRASS"]
bpw.TERRAIN_STARTPAD = self.context["TERRAIN_STARTPAD"]
bpw.MOTORS_TORQUE = self.context["MOTORS_TORQUE"]
bpw.SPEED_HIP = self.context["SPEED_HIP"]
bpw.SPEED_KNEE = self.context["SPEED_KNEE"]
bpw.LIDAR_RANGE = self.context["LIDAR_RANGE"]
bpw.LEG_DOWN = self.context["LEG_DOWN"]
bpw.LEG_W = self.context["LEG_W"]
bpw.LEG_H = self.context["LEG_H"]
bpw.INITIAL_RANDOM = self.context["INITIAL_RANDOM"]
bpw.VIEWPORT_W = self.context["VIEWPORT_W"]
bpw.VIEWPORT_H = self.context["VIEWPORT_H"]
gravity_x = self.context["GRAVITY_X"]
gravity_y = self.context["GRAVITY_Y"]
gravity = (gravity_x, gravity_y)
self.env.world.gravity = gravity
# Important for building terrain
self.env.fd_polygon = fixtureDef(
shape=polygonShape(vertices=[(0, 0), (1, 0), (1, -1), (0, -1)]),
friction=bipedal_walker.FRICTION,
)
self.env.fd_edge = fixtureDef(
shape=edgeShape(vertices=[(0, 0), (1, 1)]),
friction=bipedal_walker.FRICTION,
categoryBits=0x0001,
)
bpw.HULL_FD = fixtureDef(
shape=polygonShape(
vertices=[(x / bpw.SCALE, y / bpw.SCALE) for x, y in bpw.HULL_POLY]
),
density=5.0,
friction=0.1,
categoryBits=0x0020,
maskBits=0x001, # collide only with ground
restitution=0.0,
) # 0.99 bouncy
bpw.LEG_FD = fixtureDef(
shape=polygonShape(box=(bpw.LEG_W / 2, bpw.LEG_H / 2)),
density=1.0,
restitution=0.0,
categoryBits=0x0020,
maskBits=0x001,
)
bpw.LOWER_FD = fixtureDef(
shape=polygonShape(box=(0.8 * bpw.LEG_W / 2, bpw.LEG_H / 2)),
density=1.0,
restitution=0.0,
categoryBits=0x0020,
maskBits=0x001,
)
self.env.world.gravity = gravity
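# Editor's note (minimal usage sketch, illustrative): run across two gravity
# contexts; keys not given fall back to DEFAULT_CONTEXT.
#   env = CARLBipedalWalkerEnv(
#       contexts={0: {"GRAVITY_Y": -10.0}, 1: {"GRAVITY_Y": -3.7}}
#   )
#   state = env.reset()  # reset also advances the context selector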
def demo_heuristic(
env: Union[CARLBipedalWalkerEnv, bipedal_walker.BipedalWalker]
) -> None:
env.reset()
steps = 0
total_reward = 0
a = np.array([0.0, 0.0, 0.0, 0.0])
STAY_ON_ONE_LEG, PUT_OTHER_DOWN, PUSH_OFF = 1, 2, 3
SPEED = 0.29 # Will fall forward on higher speed
state = STAY_ON_ONE_LEG
moving_leg = 0
supporting_leg = 1 - moving_leg
SUPPORT_KNEE_ANGLE = +0.1
supporting_knee_angle = SUPPORT_KNEE_ANGLE
while True:
s, r, done, info = env.step(a)
total_reward += r
if steps % 20 == 0 or done:
print("\naction " + str(["{:+0.2f}".format(x) for x in a]))
print("step {} total_reward {:+0.2f}".format(steps, total_reward))
print("hull " + str(["{:+0.2f}".format(x) for x in s[0:4]]))
print("leg0 " + str(["{:+0.2f}".format(x) for x in s[4:9]]))
print("leg1 " + str(["{:+0.2f}".format(x) for x in s[9:14]]))
steps += 1
contact0 = s[8] # noqa: F841
contact1 = s[13] # noqa: F841
moving_s_base = 4 + 5 * moving_leg
supporting_s_base = 4 + 5 * supporting_leg
hip_targ = np.array([None, None]) # -0.8 .. +1.1
knee_targ = np.array([None, None]) # -0.6 .. +0.9
hip_todo = np.array([0.0, 0.0])
knee_todo = np.array([0.0, 0.0])
if state == STAY_ON_ONE_LEG:
hip_targ[moving_leg] = 1.1
knee_targ[moving_leg] = -0.6
supporting_knee_angle += 0.03
if s[2] > SPEED:
supporting_knee_angle += 0.03
supporting_knee_angle = min(supporting_knee_angle, SUPPORT_KNEE_ANGLE)
knee_targ[supporting_leg] = supporting_knee_angle
if s[supporting_s_base + 0] < 0.10: # supporting leg is behind
state = PUT_OTHER_DOWN
if state == PUT_OTHER_DOWN:
hip_targ[moving_leg] = +0.1
knee_targ[moving_leg] = SUPPORT_KNEE_ANGLE
knee_targ[supporting_leg] = supporting_knee_angle
if s[moving_s_base + 4]:
state = PUSH_OFF
supporting_knee_angle = min(s[moving_s_base + 2], SUPPORT_KNEE_ANGLE)
if state == PUSH_OFF:
knee_targ[moving_leg] = supporting_knee_angle
knee_targ[supporting_leg] = +1.0
if s[supporting_s_base + 2] > 0.88 or s[2] > 1.2 * SPEED:
state = STAY_ON_ONE_LEG
moving_leg = 1 - moving_leg
supporting_leg = 1 - moving_leg
if hip_targ[0]:
hip_todo[0] = 0.9 * (hip_targ[0] - s[4]) - 0.25 * s[5]
if hip_targ[1]:
hip_todo[1] = 0.9 * (hip_targ[1] - s[9]) - 0.25 * s[10]
if knee_targ[0]:
knee_todo[0] = 4.0 * (knee_targ[0] - s[6]) - 0.25 * s[7]
if knee_targ[1]:
knee_todo[1] = 4.0 * (knee_targ[1] - s[11]) - 0.25 * s[12]
        hip_todo[0] -= 0.9 * (0 - s[0]) - 1.5 * s[1]  # PID to keep head straight
hip_todo[1] -= 0.9 * (0 - s[0]) - 1.5 * s[1]
knee_todo[0] -= 15.0 * s[3] # vertical speed, to damp oscillations
knee_todo[1] -= 15.0 * s[3]
a[0] = hip_todo[0]
a[1] = knee_todo[0]
a[2] = hip_todo[1]
a[3] = knee_todo[1]
a = np.clip(0.5 * a, -1.0, 1.0)
env.render()
if done:
break
if __name__ == "__main__":
    # Heuristic: suboptimal, has no notion of balance.
env = CARLBipedalWalkerEnv(add_gaussian_noise_to_context=True)
for i in range(3):
demo_heuristic(env)
env.close()
# File: CARL-main/carl/envs/box2d/carl_lunarlander.py
from typing import Dict, List, Optional, Tuple, TypeVar, Union
from gym import Wrapper
from gym.envs.box2d import lunar_lander
from carl.context.selection import AbstractSelector
from carl.envs.carl_env import CARLEnv
from carl.utils.trial_logger import TrialLogger
from carl.utils.types import Context, Contexts
ObsType = TypeVar("ObsType")
ActType = TypeVar("ActType")
# import pyglet
# pyglet.options["shadow_window"] = False
# TODO debug/test this environment by looking at rendering!
DEFAULT_CONTEXT = {
"FPS": 50,
"SCALE": 30.0, # affects how fast-paced the game is, forces should be adjusted as well
# Engine powers
"MAIN_ENGINE_POWER": 13.0,
"SIDE_ENGINE_POWER": 0.6,
# random force on lunar lander body on reset
"INITIAL_RANDOM": 1000.0, # Set 1500 to make game harder
"GRAVITY_X": 0,
"GRAVITY_Y": -10,
# lunar lander body specification
"LEG_AWAY": 20,
"LEG_DOWN": 18,
"LEG_W": 2,
"LEG_H": 8,
"LEG_SPRING_TORQUE": 40,
"SIDE_ENGINE_HEIGHT": 14.0,
"SIDE_ENGINE_AWAY": 12.0,
# Size of world
"VIEWPORT_W": 600,
"VIEWPORT_H": 400,
}
CONTEXT_BOUNDS = {
"FPS": (1, 500, float),
"SCALE": (
1,
100,
float,
), # affects how fast-paced the game is, forces should be adjusted as well
"MAIN_ENGINE_POWER": (0, 50, float),
"SIDE_ENGINE_POWER": (0, 50, float),
# random force on lunar lander body on reset
"INITIAL_RANDOM": (0, 2000, float), # Set 1500 to make game harder
"GRAVITY_X": (-20, 20, float), # unit: m/s²
"GRAVITY_Y": (
-20,
-0.01,
float,
), # the y-component of gravity must be smaller than 0 because otherwise the
# lunarlander leaves the frame by going up
# lunar lander body specification
"LEG_AWAY": (0, 50, float),
"LEG_DOWN": (0, 50, float),
"LEG_W": (1, 10, float),
"LEG_H": (1, 20, float),
"LEG_SPRING_TORQUE": (0, 100, float),
"SIDE_ENGINE_HEIGHT": (1, 20, float),
"SIDE_ENGINE_AWAY": (1, 20, float),
# Size of world
"VIEWPORT_W": (400, 1000, int),
"VIEWPORT_H": (200, 800, int),
}
class LunarLanderEnv(Wrapper):
def __init__(
self,
env: Optional[lunar_lander.LunarLander] = None,
high_gameover_penalty: bool = False,
):
if env is None:
env = lunar_lander.LunarLander()
super().__init__(env=env)
self.high_gameover_penalty = high_gameover_penalty
self.active_seed = None
def step(self, action: ActType) -> Tuple[ObsType, float, bool, dict]:
self.env: lunar_lander.LunarLander
state, reward, done, info = self.env.step(action)
if self.env.game_over and self.high_gameover_penalty:
reward = -10000
return state, reward, done, info
def seed(self, seed: Optional[int] = None) -> Optional[int]:
seed_ = self.env.seed(seed)
self.active_seed = seed_[0]
return seed_
class CARLLunarLanderEnv(CARLEnv):
def __init__(
self,
env: Optional[LunarLanderEnv] = None,
contexts: Contexts = {},
hide_context: bool = True,
add_gaussian_noise_to_context: bool = False,
gaussian_noise_std_percentage: float = 0.05,
logger: Optional[TrialLogger] = None,
scale_context_features: str = "no",
default_context: Optional[Context] = DEFAULT_CONTEXT,
state_context_features: Optional[List[str]] = None,
context_mask: Optional[List[str]] = None,
max_episode_length: int = 1000,
high_gameover_penalty: bool = False,
dict_observation_space: bool = False,
context_selector: Optional[
Union[AbstractSelector, type[AbstractSelector]]
] = None,
context_selector_kwargs: Optional[Dict] = None,
):
"""
Parameters
----------
        env: LunarLanderEnv, optional
            Defaults to gym's LunarLander environment wrapped in LunarLanderEnv.
        contexts: Contexts, optional
            Different contexts / different environment parameter settings.
"""
if env is None:
# env = lunar_lander.LunarLander()
env = LunarLanderEnv(high_gameover_penalty=high_gameover_penalty)
if not contexts:
contexts = {0: DEFAULT_CONTEXT}
super().__init__(
env=env,
contexts=contexts,
hide_context=hide_context,
add_gaussian_noise_to_context=add_gaussian_noise_to_context,
gaussian_noise_std_percentage=gaussian_noise_std_percentage,
logger=logger,
scale_context_features=scale_context_features,
default_context=default_context,
state_context_features=state_context_features,
max_episode_length=max_episode_length,
dict_observation_space=dict_observation_space,
context_selector=context_selector,
context_selector_kwargs=context_selector_kwargs,
context_mask=context_mask,
)
self.whitelist_gaussian_noise = list(
DEFAULT_CONTEXT.keys()
) # allow to augment all values
def _update_context(self) -> None:
self.env: LunarLanderEnv
lunar_lander.FPS = self.context["FPS"]
lunar_lander.SCALE = self.context["SCALE"]
lunar_lander.MAIN_ENGINE_POWER = self.context["MAIN_ENGINE_POWER"]
lunar_lander.SIDE_ENGINE_POWER = self.context["SIDE_ENGINE_POWER"]
lunar_lander.INITIAL_RANDOM = self.context["INITIAL_RANDOM"]
lunar_lander.LEG_AWAY = self.context["LEG_AWAY"]
lunar_lander.LEG_DOWN = self.context["LEG_DOWN"]
lunar_lander.LEG_W = self.context["LEG_W"]
lunar_lander.LEG_H = self.context["LEG_H"]
lunar_lander.LEG_SPRING_TORQUE = self.context["LEG_SPRING_TORQUE"]
lunar_lander.SIDE_ENGINE_HEIGHT = self.context["SIDE_ENGINE_HEIGHT"]
lunar_lander.SIDE_ENGINE_AWAY = self.context["SIDE_ENGINE_AWAY"]
lunar_lander.VIEWPORT_W = self.context["VIEWPORT_W"]
lunar_lander.VIEWPORT_H = self.context["VIEWPORT_H"]
gravity_x = self.context["GRAVITY_X"]
gravity_y = self.context["GRAVITY_Y"]
gravity = (gravity_x, gravity_y)
self.env.world.gravity = gravity
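# Editor's note (minimal usage sketch, illustrative): vary the engine power per
# context; missing keys are filled from DEFAULT_CONTEXT.
#   env = CARLLunarLanderEnv(
#       contexts={0: {"MAIN_ENGINE_POWER": 13.0}, 1: {"MAIN_ENGINE_POWER": 20.0}},
#       high_gameover_penalty=True,  # crashes then yield reward -10000
#   )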
# File: CARL-main/carl/envs/box2d/carl_vehicle_racing.py
from typing import Any, Dict, List, Optional, Tuple, Type, Union
import numpy as np
import pyglet
from gym.envs.box2d import CarRacing
from gym.envs.box2d.car_dynamics import Car
from pyglet import gl
from carl.context.selection import AbstractSelector
from carl.envs.box2d.parking_garage.bus import AWDBus # as Car
from carl.envs.box2d.parking_garage.bus import AWDBusLargeTrailer # as Car
from carl.envs.box2d.parking_garage.bus import AWDBusSmallTrailer # as Car
from carl.envs.box2d.parking_garage.bus import Bus # as Car
from carl.envs.box2d.parking_garage.bus import BusLargeTrailer # as Car
from carl.envs.box2d.parking_garage.bus import BusSmallTrailer # as Car
from carl.envs.box2d.parking_garage.bus import FWDBus # as Car
from carl.envs.box2d.parking_garage.bus import FWDBusLargeTrailer # as Car
from carl.envs.box2d.parking_garage.bus import FWDBusSmallTrailer # as Car
from carl.envs.box2d.parking_garage.race_car import AWDRaceCar # as Car
from carl.envs.box2d.parking_garage.race_car import AWDRaceCarLargeTrailer # as Car
from carl.envs.box2d.parking_garage.race_car import AWDRaceCarSmallTrailer # as Car
from carl.envs.box2d.parking_garage.race_car import FWDRaceCar # as Car
from carl.envs.box2d.parking_garage.race_car import FWDRaceCarLargeTrailer # as Car
from carl.envs.box2d.parking_garage.race_car import FWDRaceCarSmallTrailer # as Car
from carl.envs.box2d.parking_garage.race_car import RaceCarLargeTrailer # as Car
from carl.envs.box2d.parking_garage.race_car import RaceCarSmallTrailer # as Car
from carl.envs.box2d.parking_garage.race_car import RaceCar
from carl.envs.box2d.parking_garage.street_car import AWDStreetCar # as Car
from carl.envs.box2d.parking_garage.street_car import AWDStreetCarLargeTrailer # as Car
from carl.envs.box2d.parking_garage.street_car import AWDStreetCarSmallTrailer # as Car
from carl.envs.box2d.parking_garage.street_car import FWDStreetCar # as Car
from carl.envs.box2d.parking_garage.street_car import FWDStreetCarLargeTrailer # as Car
from carl.envs.box2d.parking_garage.street_car import FWDStreetCarSmallTrailer # as Car
from carl.envs.box2d.parking_garage.street_car import StreetCar # as Car
from carl.envs.box2d.parking_garage.street_car import StreetCarLargeTrailer # as Car
from carl.envs.box2d.parking_garage.street_car import StreetCarSmallTrailer # as Car
from carl.envs.box2d.parking_garage.trike import TukTuk # as Car
from carl.envs.box2d.parking_garage.trike import TukTukSmallTrailer # as Car
from carl.envs.carl_env import CARLEnv
from carl.utils.trial_logger import TrialLogger
from carl.utils.types import Context, Contexts, ObsType
PARKING_GARAGE_DICT = {
# Racing car
"RaceCar": RaceCar,
"FWDRaceCar": FWDRaceCar,
"AWDRaceCar": AWDRaceCar,
"RaceCarSmallTrailer": RaceCarSmallTrailer,
"FWDRaceCarSmallTrailer": FWDRaceCarSmallTrailer,
"AWDRaceCarSmallTrailer": AWDRaceCarSmallTrailer,
"RaceCarLargeTrailer": RaceCarLargeTrailer,
"FWDRaceCarLargeTrailer": FWDRaceCarLargeTrailer,
"AWDRaceCarLargeTrailer": AWDRaceCarLargeTrailer,
# Street car
"StreetCar": StreetCar,
"FWDStreetCar": FWDStreetCar,
"AWDStreetCar": AWDStreetCar,
"StreetCarSmallTrailer": StreetCarSmallTrailer,
"FWDStreetCarSmallTrailer": FWDStreetCarSmallTrailer,
"AWDStreetCarSmallTrailer": AWDStreetCarSmallTrailer,
"StreetCarLargeTrailer": StreetCarLargeTrailer,
"FWDStreetCarLargeTrailer": FWDStreetCarLargeTrailer,
"AWDStreetCarLargeTrailer": AWDStreetCarLargeTrailer,
# Bus
"Bus": Bus,
"FWDBus": FWDBus,
"AWDBus": AWDBus,
"BusSmallTrailer": BusSmallTrailer,
"FWDBusSmallTrailer": FWDBusSmallTrailer,
"AWDBusSmallTrailer": AWDBusSmallTrailer,
"BusLargeTrailer": BusLargeTrailer,
"FWDBusLargeTrailer": FWDBusLargeTrailer,
"AWDBusLargeTrailer": AWDBusLargeTrailer,
# Tuk Tuk :)
"TukTuk": TukTuk,
"TukTukSmallTrailer": TukTukSmallTrailer,
}
PARKING_GARAGE = list(PARKING_GARAGE_DICT.values())
VEHICLE_NAMES = list(PARKING_GARAGE_DICT.keys())
DEFAULT_CONTEXT = {
"VEHICLE": PARKING_GARAGE.index(RaceCar),
}
CONTEXT_BOUNDS = {
"VEHICLE": (None, None, "categorical", np.arange(0, len(PARKING_GARAGE)))
}
CATEGORICAL_CONTEXT_FEATURES = ["VEHICLE"]
class CustomCarRacingEnv(CarRacing):
def __init__(self, vehicle_class: Type[Car] = Car, verbose: bool = True):
super().__init__(verbose)
self.vehicle_class = vehicle_class
def reset(
self,
*,
seed: Optional[int] = None,
return_info: bool = False,
options: Optional[dict] = None,
) -> Union[ObsType, tuple[ObsType, dict]]:
self._destroy()
self.reward = 0.0
self.prev_reward = 0.0
self.tile_visited_count = 0
self.t = 0.0
self.road_poly: List[Tuple[List[float], Tuple[Any]]] = []
while True:
success = self._create_track()
if success:
break
if self.verbose == 1:
print(
"retry to generate track (normal if there are not many"
"instances of this message)"
)
self.car = self.vehicle_class(self.world, *self.track[0][1:4]) # type: ignore [assignment]
        # Step with a null action a few times to settle the environment and
        # resolve any initial violations of geometry.
        for _ in range(49):
            self.step(None)  # type: ignore [arg-type]
if not return_info:
return self.step(None)[0] # type: ignore [arg-type]
else:
return self.step(None)[0], {} # type: ignore [arg-type]
def _render_indicators(self, W: int, H: int) -> None:
# copied from meta car racing
s = W / 40.0
h = H / 40.0
colors = [0, 0, 0, 1] * 4
polygons = [W, 0, 0, W, 5 * h, 0, 0, 5 * h, 0, 0, 0, 0]
        def vertical_ind(place: int, val: float, color: Tuple) -> None:
colors.extend([color[0], color[1], color[2], 1] * 4)
polygons.extend(
[
place * s,
h + h * val,
0,
(place + 1) * s,
h + h * val,
0,
(place + 1) * s,
h,
0,
(place + 0) * s,
h,
0,
]
)
        def horiz_ind(place: int, val: float, color: Tuple) -> None:
colors.extend([color[0], color[1], color[2], 1] * 4)
polygons.extend(
[
(place + 0) * s,
4 * h,
0,
(place + val) * s,
4 * h,
0,
(place + val) * s,
2 * h,
0,
(place + 0) * s,
2 * h,
0,
]
)
true_speed = np.sqrt(
np.square(self.car.hull.linearVelocity[0]) # type: ignore [attr-defined]
+ np.square(self.car.hull.linearVelocity[1]) # type: ignore [attr-defined]
)
vertical_ind(5, 0.02 * true_speed, (1, 1, 1))
# Custom render to handle different amounts of wheels
vertical_ind(7, 0.01 * self.car.wheels[0].omega, (0.0, 0, 1)) # type: ignore [attr-defined]
for i in range(len(self.car.wheels)): # type: ignore [attr-defined]
vertical_ind(7 + i, 0.01 * self.car.wheels[i].omega, (0.0 + i * 0.1, 0, 1)) # type: ignore [attr-defined]
horiz_ind(20, -10.0 * self.car.wheels[0].joint.angle, (0, 1, 0)) # type: ignore [attr-defined]
horiz_ind(30, -0.8 * self.car.hull.angularVelocity, (1, 0, 0)) # type: ignore [attr-defined]
vl = pyglet.graphics.vertex_list(
len(polygons) // 3, ("v3f", polygons), ("c4f", colors) # gl.GL_QUADS,
)
vl.draw(gl.GL_QUADS)
class CARLVehicleRacingEnv(CARLEnv):
def __init__(
self,
env: CustomCarRacingEnv = CustomCarRacingEnv(),
contexts: Optional[Contexts] = None,
hide_context: bool = True,
add_gaussian_noise_to_context: bool = False,
gaussian_noise_std_percentage: float = 0.01,
logger: Optional[TrialLogger] = None,
scale_context_features: str = "no",
default_context: Optional[Context] = DEFAULT_CONTEXT,
state_context_features: Optional[List[str]] = None,
context_mask: Optional[List[str]] = None,
dict_observation_space: bool = False,
context_selector: Optional[
Union[AbstractSelector, type[AbstractSelector]]
] = None,
context_selector_kwargs: Optional[Dict] = None,
):
"""
Parameters
----------
        env: CustomCarRacingEnv, optional
            Defaults to CustomCarRacingEnv, a CarRacing variant with a
            configurable vehicle class.
        contexts: Contexts, optional
            Different contexts / different environment parameter settings.
"""
if not hide_context:
raise NotImplementedError(
"The context is already coded in the pixel state, the context cannot be hidden that easily. "
"Due to the pixel state we cannot easily concatenate the context to the state, therefore "
"hide_context must be True but at the same time the context is visible via the pixel state."
)
if not contexts:
contexts = {0: DEFAULT_CONTEXT}
super().__init__(
env=env,
contexts=contexts,
hide_context=hide_context,
add_gaussian_noise_to_context=add_gaussian_noise_to_context,
gaussian_noise_std_percentage=gaussian_noise_std_percentage,
logger=logger,
scale_context_features=scale_context_features,
default_context=default_context,
state_context_features=state_context_features,
dict_observation_space=dict_observation_space,
context_selector=context_selector,
context_selector_kwargs=context_selector_kwargs,
context_mask=context_mask,
)
self.whitelist_gaussian_noise = [
k for k in DEFAULT_CONTEXT.keys() if k not in CATEGORICAL_CONTEXT_FEATURES
]
def _update_context(self) -> None:
self.env: CustomCarRacingEnv
vehicle_class_index = self.context["VEHICLE"]
self.env.vehicle_class = PARKING_GARAGE[vehicle_class_index]
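# Editor's note (minimal usage sketch, illustrative): vehicles are selected by
# index into PARKING_GARAGE via the categorical "VEHICLE" context feature.
#   contexts = {
#       0: {"VEHICLE": VEHICLE_NAMES.index("RaceCar")},
#       1: {"VEHICLE": VEHICLE_NAMES.index("AWDBus")},
#   }
#   env = CARLVehicleRacingEnv(contexts=contexts)  # hide_context must stay True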