
Trained model produces completely different sentences in reconstruct/interpolate #9

@asunyz

Hi, I trained this model with the default config, on 100,000 cleaned sentences taken from the Google corpus, for ~16,000 steps.
When I run the sampling tasks, the model's output bears no resemblance to my input. For example, when I use the input "what" for reconstruction, I get "is there american yes" as output.
May I ask what I am doing wrong? The same phenomenon shows up in all the other tasks as well: in interpolation I do get one sentence gradually morphing into another, but neither endpoint is my original input, and both are very far from it.
I don't think this is expected behavior, since the paper includes a sentence-completion task.

---note---
My suspicions are:

  1. The default vocab size is 20000, so maybe many words are not encoded at all? I can't test this right now because increasing the vocab size significantly slows down training. (A quick out-of-vocabulary check, sketched after this list, might confirm or rule this out without retraining.)
  2. The default word keep rate is 0. Had I set it to 1, would the model at least be able to keep the words that were fed to it?
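
For suspect 1, here is a minimal sketch to measure the out-of-vocabulary rate of the corpus against a 20000-word vocabulary. It assumes the training data is a plain-text file with one cleaned sentence per line (`train.txt` is a hypothetical path) and that simple whitespace tokenization roughly matches the preprocessing; both are assumptions on my part, not taken from the repo.

```python
from collections import Counter

VOCAB_SIZE = 20000   # default vocab size (assumption: vocab = top-k words by frequency)
CORPUS = "train.txt" # hypothetical path: one cleaned sentence per line

# Count token frequencies with naive whitespace tokenization
# (assumption: close enough to the repo's actual preprocessing).
counts = Counter()
with open(CORPUS, encoding="utf-8") as f:
    for line in f:
        counts.update(line.split())

vocab = {w for w, _ in counts.most_common(VOCAB_SIZE)}
total = sum(counts.values())
oov = sum(c for w, c in counts.items() if w not in vocab)

print(f"unique words: {len(counts)}")
print(f"OOV tokens:   {oov}/{total} ({100 * oov / total:.2f}%)")
```

If the OOV rate comes out low (a few percent), then the vocab size is probably not the cause and suspect 2 looks more likely.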

Thanks in advance!
