
Conversation

Priyansi
Collaborator

Fixes #68

" loss1 = criterion(y_pred1, y1)\n",
" loss2 = criterion(y_pred2, y2)\n",
"\n",
" loss1.backward()\n",
Contributor

We talked about tracking each loss's components, to show that x^2 can't be approximated by a Linear layer but 2*x can.

loss = loss1 + loss2
loss.backward()

return {"loss": loss.item(), "loss1": loss1.item(), "loss2": loss2.item()}
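For context, here is a self-contained sketch of the training step this suggestion implies. The `Net` definition, `criterion`, learning rate, and batch layout are assumptions reconstructed from the snippets quoted in this PR, not the notebook's exact code:

```python
import torch
import torch.nn as nn

# Hypothetical two-headed model mirroring the notebook: one shared input
# feeding two independent linear heads.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.model1 = nn.Linear(1, 1)
        self.model2 = nn.Linear(1, 1)

    def forward(self, x):
        return self.model1(x), self.model2(x)

model = Net()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

def train_step(engine, batch):
    model.train()
    optimizer.zero_grad()
    x, y1, y2 = batch  # assumed batch layout: input plus one target per head
    y_pred1, y_pred2 = model(x)
    loss1 = criterion(y_pred1, y1)
    loss2 = criterion(y_pred2, y2)
    # Sum the terms and call backward() once, so gradients from both heads
    # accumulate in a single pass.
    loss = loss1 + loss2
    loss.backward()
    optimizer.step()
    # Return each component so handlers can track the losses separately.
    return {"loss": loss.item(), "loss1": loss1.item(), "loss2": loss2.item()}

# One step on dummy data: head 1 targets 2*x, head 2 targets x^2.
x = torch.randn(8, 1)
out = train_step(None, (x, 2 * x, x ** 2))
```

Since `loss` is the plain sum of the two terms, the returned dict always satisfies `loss == loss1 + loss2`, which makes the per-head contributions easy to compare during training.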

" return self.model1(x), self.model2(x)\n",
"\n",
"model = Net().to(device)\n",
"optimizer = torch.optim.Adam(model.parameters(), lr=0.005)\n",
Contributor

Suggested change
"optimizer = torch.optim.Adam(model.parameters(), lr=0.005)\n",
"optimizer = torch.optim.Adam(model.parameters(), lr=0.1)\n",

"def log_validation_results(trainer):\n",
" val_evaluator.run(val_loader)\n",
" metrics = val_evaluator.state.metrics\n",
" print(f\"Validation Results - Epoch[{trainer.state.epoch}] Avg MAE: {metrics['mae']:.2f}\")"
Contributor

Add this to track losses

RunningAverage(output_transform=lambda x: x["loss"]).attach(trainer, "loss")
RunningAverage(output_transform=lambda x: x["loss1"]).attach(trainer, "loss1")
RunningAverage(output_transform=lambda x: x["loss2"]).attach(trainer, "loss2")

@trainer.on(Events.EPOCH_COMPLETED)
def epoch_results(trainer):
    metrics = trainer.state.metrics
    print(f"Epoch Results - Epoch[{trainer.state.epoch}] Metrics: {metrics}")

"outputId": "edc26afc-fab2-44e2-9c39-fb99bf751396"
},
"source": [
"trainer.run(train_loader, max_epochs=1)"
Contributor

Suggested change
"trainer.run(train_loader, max_epochs=1)"
"trainer.run(train_loader, max_epochs=10)"

@sdesrozis
Copy link
Contributor

@Priyansi I left a few comments on tracking the losses. What do you think?

Development

Successfully merging this pull request may close these issues.

How-to guide to cover multi-output models and their evaluation