# Description

Writing a good report for all the deliverable benchmarked results is a bit more work than it should be, but it should be done right.

# Questions

* Should the report be written in the wiki or in the README?
* What are the details of the benchmarks themselves and the conclusions?

TODO: more to come as this epic gets closer, including opening issues.
No due date

# Description

The classifier should be compressed to perform quicker inference (e.g., TF Lite, PyTorch Lightning, or ONNX). Perhaps the code for the engine should be refactored into proper classes. There is still a lot to think about here.

TODO: add issues once we finish the notebook version.

# DoD (definition of done)

TODO: what are the requirements to be done here?
No due date

# Description

This will be a joint intent and entity classifier fine-tuned with the cleaned data set. It should also be benchmarked.

TODO: add issues as this milestone comes closer.

# DoD (definition of done)

TODO: add requirements.
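The "joint" part of a joint intent and entity classifier usually means one shared encoder with two heads: one intent label per utterance and one entity tag per token. A minimal sketch, assuming nothing about the final architecture (the encoder, vocabulary size, and label counts below are all placeholders):

```python
import torch
import torch.nn as nn

class JointIntentEntityModel(nn.Module):
    """Shared encoder with two heads (hypothetical sketch):
    - intent head: one prediction per utterance
    - entity head: one tag prediction per token
    """

    def __init__(self, vocab_size=1000, dim=64, n_intents=7, n_entity_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.intent_head = nn.Linear(dim, n_intents)
        self.entity_head = nn.Linear(dim, n_entity_tags)

    def forward(self, token_ids):
        states, last = self.encoder(self.embed(token_ids))
        intent_logits = self.intent_head(last.squeeze(0))  # (batch, n_intents)
        entity_logits = self.entity_head(states)           # (batch, seq, n_entity_tags)
        return intent_logits, entity_logits

model = JointIntentEntityModel()
tokens = torch.randint(0, 1000, (2, 12))  # batch of 2 utterances, 12 tokens each
intent_logits, entity_logits = model(tokens)
```

Training would sum a cross-entropy loss over each head, so both tasks share the encoder's representations.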
No due date

There are numerous features that could be improved upon. The current notebooks for intent and entity refinement are not that clean; that technical debt was taken on deliberately because the resulting cleaned data set is a higher priority than clean code. The question is: what is the bare minimum to turn this into an MVP of deliverables?
No due date • 0/2 issues closed

# Description

Similar to how the intent data set was created, a refinement will be performed for entities.

# DoD (definition of done)

- [x] Benchmark original data set.
- [x] Build entity cleaning flow.
- [ ] Refine entities.
- [ ] Benchmark the refined data set.
No due date • 3/6 issues closed
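The two "benchmark" items in the DoD above amount to scoring a classifier against gold labels before and after refinement. A minimal sketch of that comparison, with a hypothetical toy data set (the labels, predictions, and metric choice are illustrative only; the real benchmark would use the project's evaluation split and likely a richer metric such as F1):

```python
def accuracy(preds, golds):
    """Fraction of examples where the prediction matches the gold label."""
    assert len(preds) == len(golds)
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

# Hypothetical toy run: gold entity labels vs. predictions from a model
# trained on the original data set and on the refined data set.
gold     = ["city", "date", "city", "none", "date"]
original = ["city", "none", "city", "none", "city"]
refined  = ["city", "date", "city", "none", "city"]

before = accuracy(original, gold)  # 0.6
after = accuracy(refined, gold)    # 0.8
print(f"original: {before:.2f}  refined: {after:.2f}")
```

Running the same scoring code on both versions of the data set makes the before/after comparison directly reportable in the benchmark report.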