The software is configured for Linux. Before running the experimental code, please ensure that Python 3 and Cython are properly installed, then run the following commands in Bash:
$ cd source
$ python setup.py build_ext --inplace
$ cd software
$ python setup.py build_ext --inplace
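For reference, a setup.py driving such an in-place Cython build typically looks like the sketch below. This is an illustration of what the build step does, not the repository's actual file; the "*.pyx" glob and the NumPy include are assumptions.

from setuptools import setup
from Cython.Build import cythonize
import numpy as np

setup(
    # Compile all Cython sources in this folder to native extensions
    # (the "*.pyx" glob is an assumed layout, not the repo's exact one).
    ext_modules=cythonize("*.pyx", compiler_directives={"language_level": 3}),
    # Only needed if the .pyx files cimport NumPy.
    include_dirs=[np.get_include()],
)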
We use publicly available open-source code to aid our experiments. Some scripts in the folder source are downloaded from https://github.com/nla-group/fABBA and https://github.com/nla-group/ABBA.
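These scripts implement the ABBA family of symbolic time series approximations. For orientation, the pip-installable fABBA package exposes the same idea; the sketch below follows the fABBA README, with a toy series and illustrative parameters.

import numpy as np
from fABBA import fABBA

# Toy time series
ts = [np.sin(0.05 * i) for i in range(1000)]

# Compress the series into a symbolic string; tol and alpha control
# the compression and digitization tolerances, respectively.
fabba = fABBA(tol=0.1, alpha=0.1, sorting='2-norm', scl=1, verbose=0)
string = fabba.fit_transform(ts)

# Reconstruct an approximation of the series from the symbols,
# starting from the initial value ts[0].
reconstruction = fabba.inverse_transform(string, ts[0])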
The Monash regression datasets can be downloaded from the Time Series Extrinsic Regression (TSER) benchmark. The data from the UCR Archive and the UEA Archive can be downloaded from https://www.timeseriesclassification.com/index.php; the remaining data are included in this repository.
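As a convenience, a UCR archive dataset in its tab-separated layout (class label in the first column, series values in the rest) can be loaded as below; the paths and the dataset name "Coffee" are illustrative, not fixed by this repository.

import numpy as np

def load_ucr_tsv(path):
    # Each row: label, v1, v2, ..., vT (tab-separated)
    data = np.loadtxt(path, delimiter="\t")
    labels, series = data[:, 0], data[:, 1:]
    return series, labels

X_train, y_train = load_ucr_tsv("UCRArchive_2018/Coffee/Coffee_TRAIN.tsv")
X_test, y_test = load_ucr_tsv("UCRArchive_2018/Coffee/Coffee_TEST.tsv")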
Using a single A100 40GB GPU, we present the steps to fine-tune Mistral-7B (or Llama2-7B) on TSER using QLoRA.
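A minimal sketch of the QLoRA setup using Hugging Face transformers, peft, and bitsandbytes is shown below. The hyperparameters are illustrative defaults, not the exact ones used in the paper, and data preparation and the training loop are omitted.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # or "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization so the 7B base model fits on a single A100 40GB
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; rank/alpha are illustrative
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained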
The results of Table 2 and Figure 1 can be reproduced by running demo.ipynb.
The results of Figure 3 can be reproduced by running quantize_err.ipynb.
The results of Figures 4 and 5 can be reproduced by first running multithreading.ipynb; the figures are then generated by running mthread_results.ipynb.
The results of Figures 6 and 7 can be reproduced by running UCRPP1.ipynb and UCRPP2.ipynb; the figures are then generated by running run_variants_profiles.ipynb.
The results of Figures 8, 9, and 10 can be reproduced by running qabba_uea_0.001.ipynb and qabba_uea_0.01.ipynb; the corresponding figures are generated by results1.ipynb and results2.ipynb, respectively.
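The notebooks above can also be executed non-interactively (e.g., on a remote machine) with jupyter nbconvert; this is an optional convenience, not a step required by the repository:

$ jupyter nbconvert --to notebook --execute --inplace demo.ipynb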
If you use this software in your work, please cite:

@misc{carson2025quantizedsymbolictimeseries,
title={Quantized symbolic time series approximation},
author={Erin Carson and Xinye Chen and Cheng Kang},
year={2025},
eprint={2411.15209},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2411.15209},
}