Hi, I trained this model with the default config, on 100 thousand cleaned sentences taken from the Google corpus, for ~16,000 steps.
When I ran the sampling tasks, the model seemed to completely fail to reconstruct my input. For example, when I feed the input "what" for reconstruction, I get "is there american yes" as output.
May I ask what I am doing wrong...? The same phenomenon shows up in all the other tasks as well (for example, in interpolation I do see one sentence morphing into another... it's just that neither endpoint is my original input. Both are in fact very far from it.)
I don't think this is expected behavior, right? Especially since the paper includes a sentence-completion task.
---note---
My suspicions are that
- The default vocab size was 20000, so maybe many words are not encoded at all? I can't test this right now because increasing the vocab size significantly slows down training.
- The default word keep rate was 0. If I set it to 1, would the model at least be able to reproduce the words it was fed?
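For context on the second point, here is a minimal sketch of how word dropout with a keep rate is commonly implemented in sentence-VAE decoders (I'm assuming this repo follows the usual scheme; `word_dropout` and the `<unk>` placeholder are illustrative names, not necessarily this codebase's API). With a keep rate of 0, every ground-truth token fed to the decoder is replaced, so the decoder must rely entirely on the latent code:

```python
import random

def word_dropout(tokens, keep_rate, unk="<unk>"):
    """Randomly replace decoder input tokens with a placeholder.

    keep_rate = 0.0 -> every token is replaced (decoder sees only <unk>),
    keep_rate = 1.0 -> every token is kept (standard teacher forcing).
    """
    return [t if random.random() < keep_rate else unk for t in tokens]

tokens = ["what", "is", "there"]
print(word_dropout(tokens, keep_rate=0.0))  # ['<unk>', '<unk>', '<unk>']
print(word_dropout(tokens, keep_rate=1.0))  # ['what', 'is', 'there']
```

If the implementation works like this, a keep rate of 1 would indeed let the decoder condition on the previously fed words during training, though it may also weaken the latent code's influence.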
Thanks in advance!