[File Attributes]:
File name: b+tree, learning index
File size: 402KB
File format: ZIP
Updated: 2021-12-19 08:52:07
# C++ B-Tree
## A C++11 implementation of the B-Tree part of "The Case for Learned Index Structures"
A research **proof of concept** that implements the B-Tree section of the paper [The Case for Learned Index Structures](https://arxiv.org/pdf/1712.01208.pdf) in C++.
The general design is a single lookup structure that you can parameterize with a KeyType and a ValueType, plus an overflow list that holds new inserts until you retrain. The RecursiveModelIndex constructor takes a threshold that triggers a retrain once the overflow array reaches that size.
The basic API:
```c++
#include <iostream>

#include "RecursiveModelIndex.h"

// [first/second]StageParams are the network parameters for each stage
int maxAllowedError = 256;
int maxBufferBeforeRetrain = 10001;
RecursiveModelIndex<int, int> modelIndex(firstStageParams,
                                         secondStageParams,
                                         maxAllowedError,
                                         maxBufferBeforeRetrain);
for (int ii = 0; ii < 10000; ++ii) {
    modelIndex.insert(ii, ii * 2);
}
// The overflow buffer (10000 inserts) is still one insert short of the
// retrain threshold (10001), so retrain manually before searching...
modelIndex.train();

auto result = modelIndex.find(5);
if (result) {
    std::cout << "Yay! We got: " << result.get().first << ", "
              << result.get().second << std::endl;
} else {
    std::cout << "Value not found." << std::endl;  // Shouldn't happen in the above usage...
}
```
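For intuition on `maxAllowedError`: in the paper's RMI design, the model predicts an approximate position for a key, and training guarantees the true position lies within the error bound of that prediction, so a lookup finishes with a bounded local search. A minimal standalone sketch of that idea (the `boundedSearch` helper is hypothetical, not this repo's internals):
```c++
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch: the model predicted `predictedPos` for `key`; the true position is
// guaranteed to be within +/- maxAllowedError, so a binary search over that
// small window completes the lookup.
int64_t boundedSearch(const std::vector<int64_t>& keys, int64_t key,
                      int64_t predictedPos, int64_t maxAllowedError) {
    int64_t lo = std::max<int64_t>(0, predictedPos - maxAllowedError);
    int64_t hi = std::min<int64_t>(static_cast<int64_t>(keys.size()),
                                   predictedPos + maxAllowedError + 1);
    auto it = std::lower_bound(keys.begin() + lo, keys.begin() + hi, key);
    if (it != keys.begin() + hi && *it == key) {
        return it - keys.begin();  // index of the key
    }
    return -1;  // not found within the error window
}
```
A smaller `maxAllowedError` means a cheaper final search but a harder training target; this trade-off is the central knob of the learned-index approach.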
See [src/main.cpp](src/main.cpp) for a usage example that stores scaled lognormal data.
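For reference, generating scaled lognormal integer keys might look roughly like the sketch below (a standalone illustration using only the standard library; the repo's `utils/DataGenerators.h` may differ, and the distribution parameters are made up):
```c++
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <random>
#include <set>
#include <vector>

// Sketch: draw lognormal samples, scale them up, and round to integer keys.
// A std::set dedupes and sorts in one pass.
std::vector<int64_t> generateScaledLognormalKeys(std::size_t n,
                                                 double scale = 1e6) {
    std::mt19937 gen(42);
    std::lognormal_distribution<double> dist(0.0, 2.0);
    std::set<int64_t> keys;
    while (keys.size() < n) {
        keys.insert(static_cast<int64_t>(std::round(dist(gen) * scale)));
    }
    return {keys.begin(), keys.end()};
}
```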
### Dependencies
- [nn_cpp](https://github.com/bcaine/nn_cpp) - Eigen based minimalistic C++ Neural Network library
- [cpp-btree](https://code.google.com/archive/p/cpp-btree/) - A fast C++ implementation of a B+ Tree
### TODO:
- Lots of code cleanup
- Profiling of where the slowdowns are. On small tests, the cpp_btree lib beats it by 10-100x
  - Eigen::TensorFixed in nn_cpp would definitely help
  - Increasing the dataset size may give the RMI more of an advantage
  - Being much, much more efficient with memory and conversions (lots of casting)
- On very non-linear data, the second stage tends to break down or stop performing
  - A non-trivial number of our second-stage models "die", in the sense that we never use them for predictions
  - The larger the dataset, or the more second-stage nodes, the more likely this is. Bug somewhere?
- Experimenting with/tuning training parameters
  - Still more learning-rate sensitive than I'd like
- Checking for, and failing on, non-integer keys
- Tests of the actual RMI code (instead of using tests for experiments)
- Move retraining to a non-blocking thread (one possible shape is sketched after this list)
- Logging
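One possible shape for the non-blocking retrain item above: run `train()` on a background thread and guard against overlapping runs. This is an illustrative sketch with a hypothetical wrapper, not the repo's code, and it assumes `train()` is safe to run concurrently with inserts and lookups (e.g. the index double-buffers its models internally):
```c++
#include <atomic>
#include <thread>

// Hypothetical wrapper: kicks off training in the background so inserts and
// lookups are not blocked while the RMI retrains.
template <typename Index>
class BackgroundRetrainer {
public:
    explicit BackgroundRetrainer(Index& index) : index_(index) {}

    ~BackgroundRetrainer() {
        if (worker_.joinable()) worker_.join();
    }

    // Start a retrain unless one is already running.
    void maybeRetrain() {
        bool expected = false;
        if (!running_.compare_exchange_strong(expected, true)) return;
        if (worker_.joinable()) worker_.join();  // reap the previous run
        worker_ = std::thread([this] {
            index_.train();
            running_ = false;
        });
    }

private:
    Index& index_;
    std::thread worker_;
    std::atomic<bool> running_{false};
};
```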
[File Preview]:
btree_test
----Net.h(5KB)
----utils()
--------DataGenerators.h(2KB)
--------NetworkParameters.h(653B)
--------DataUtils.h(1KB)
----btree_test.pro.user(23KB)
----SecondStageNode.h(8KB)
----RecursiveModelIndex.h(12KB)
----main.cpp(3KB)
----nn_cpp()
--------nn()
--------CMakeLists.txt(1KB)
--------examples()
--------LICENSE(1KB)
--------README.md(2KB)
--------.git()
--------tests()
--------.gitignore(33B)
----cpp-btree()
--------btree_map.h(4KB)
--------COPYING(11KB)
--------safe_btree_set.h(3KB)
--------CMakeLists.txt(2KB)
--------safe_btree.h(13KB)
--------btree.h(83KB)
--------btree_test.h(30KB)
--------safe_btree_map.h(3KB)
--------.git()
--------README(1007B)
--------btree_container.h(11KB)
--------btree_set.h(4KB)
----btree_test.pro(1KB)