Official implementation of PhonMatchNet: Phoneme-Guided Zero-Shot Keyword Spotting for User-Defined Keywords.
PyTorch version: https://github.com/ncsoft/PhonMatchNet/tree/pytorch
Download the datasets and prepare them according to each dataset's guide.
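How the prepared directory is organized is defined by those guides, not by this repo; the sketch below is only a hypothetical illustration of the idea that all prepared datasets live under one root, which is mounted into the container as /home/DB in the Docker command that follows (the folder names are placeholders, not prescribed here).

# Hypothetical layout check; folder names are placeholders, follow each guide.
# All prepared datasets sit under a single root, mounted as /home/DB below.
DB_ROOT=/path/to/prepared/dataset
ls "$DB_ROOT"
# e.g. google_speech_commands/  libriphrase/  qualcomm_keyword_speech/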
cd ./docker
docker build --tag ukws .

docker run -it --rm --gpus '"device=0,1"' \
    -v /path/to/this/repo:/home/ \
    -v /path/to/prepared/dataset:/home/DB \
    ukws \
    /bin/bash -c \
    "python train.py \
        --epoch 100 \
        --lr 1e-3 \
        --loss_weight 1.0 1.0 \
        --audio_input both \
        --text_input g2p_embed \
        --stack_extractor \
        --comment 'user comments for each experiment'"
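For a quick sanity check of the environment before committing to a full run, the same command can be shortened; the sketch below simply reuses the flags shown above with a reduced epoch count and a single GPU (adjust as needed).

# Optional smoke test: same image, mounts, and flags as above,
# only --epoch, --gpus, and --comment differ.
docker run -it --rm --gpus '"device=0"' \
    -v /path/to/this/repo:/home/ \
    -v /path/to/prepared/dataset:/home/DB \
    ukws \
    /bin/bash -c \
    "python train.py \
        --epoch 1 \
        --lr 1e-3 \
        --loss_weight 1.0 1.0 \
        --audio_input both \
        --text_input g2p_embed \
        --stack_extractor \
        --comment 'smoke test'"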
tensorboard --logdir ./log/ --bind_all

Please post bug reports and new feature suggestions to the Issues and Pull requests tabs of this repo.