1.
UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
return F.log_softmax(x)
Fix: change F.log_softmax(x) to F.log_softmax(x, dim=0). I also found that F.log_softmax(x, dim=1) silences the warning as well. Which dim is correct depends on the shape of x: for the usual (batch, num_classes) output of a classifier, dim=1 (the class dimension) is the right choice.
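To see why the choice of dim matters, here is a minimal sketch (with a hypothetical batch of 4 samples and 10 classes): with dim=1, each row of the result is a log-probability distribution over the classes, which is what a classification loss expects.

```python
import torch
import torch.nn.functional as F

# hypothetical input: a batch of 4 samples with 10 class scores each
x = torch.randn(4, 10)

# dim=1 normalizes over the class dimension: each row of the result,
# once exponentiated, sums to 1 (a per-sample probability distribution)
log_probs = F.log_softmax(x, dim=1)
row_sums = log_probs.exp().sum(dim=1)

# dim=0 would instead normalize each class score across the batch,
# which is almost never what a classification loss expects
```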
2.
UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
train_loss += loss.data[0]
Fix: change train_loss += loss.data[0] to train_loss += loss.item()
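The background: a loss returned by a criterion is now a 0-dim (scalar) tensor, so indexing it with [0] is invalid; .item() extracts the underlying Python number. A minimal sketch (using a hand-made scalar tensor in place of a real loss):

```python
import torch

# a 0-dim (scalar) tensor, like the value a loss function now returns
loss = torch.tensor(0.25)

train_loss = 0.0
# loss.data[0] would warn (and later error) here;
# .item() converts the 0-dim tensor to a plain Python float
train_loss += loss.item()
```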
3.
UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
label = Variable(label.cuda(), volatile=True)
Fix: change label = Variable(label.cuda(), volatile=True) to label = Variable(label.cuda())
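Dropping volatile=True silences the warning, but it also re-enables gradient tracking during evaluation; the replacement the warning actually suggests is torch.no_grad(). A minimal sketch (using a plain CPU tensor instead of .cuda() so it runs anywhere):

```python
import torch

# plain CPU tensor here instead of label.cuda() so the sketch runs anywhere
x = torch.randn(3, requires_grad=True)

# torch.no_grad() is the replacement the warning suggests:
# operations inside the block are not tracked by autograd,
# which is what volatile=True used to achieve
with torch.no_grad():
    y = x * 2
```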
The next few problems came up in my own real code, and I fixed them using the same approaches as above.
4.
Warning encountered:
UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
data, target = Variable(data, volatile=True), Variable(target)
Fix: change data, target = Variable(data, volatile=True), Variable(target) to
data, target = Variable(data), Variable(target)
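It is worth noting that since PyTorch 0.4 the Variable wrapper itself is a deprecated no-op: wrapping a tensor in Variable just gives back an ordinary Tensor, so the Variable(...) calls above can usually be dropped entirely. A quick check:

```python
import torch
from torch.autograd import Variable

data = torch.randn(2, 3)
wrapped = Variable(data)  # deprecated no-op: yields an ordinary Tensor

# the "Variable" is just a Tensor with the same contents
same_type = isinstance(wrapped, torch.Tensor)
```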
5.
Warning encountered:
UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
test_loss += F.nll_loss(output, target, size_average=False).data[0]
Fix: change data[0] to item():
test_loss += F.nll_loss(output, target, size_average=False).item()
After rerunning, another warning appeared:
UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
The hint in the warning means that both the size_average argument and the reduce argument will be deprecated, and reduction='sum' should be used instead.
So I changed test_loss += F.nll_loss(output, target, size_average=False).item() to
test_loss += F.nll_loss(output, target, reduction='sum').item()
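To confirm that reduction='sum' reproduces what size_average=False used to do, a small sketch (with a hypothetical 4-sample batch) compares it against the default mean reduction: the summed loss is exactly the mean loss times the batch size.

```python
import torch
import torch.nn.functional as F

# hypothetical batch: log-probabilities for 4 samples over 10 classes
output = F.log_softmax(torch.randn(4, 10), dim=1)
target = torch.tensor([1, 0, 3, 2])

# reduction='sum' reproduces the old size_average=False behaviour
loss_sum = F.nll_loss(output, target, reduction='sum').item()
# the default reduction='mean' divides the sum by the batch size
loss_mean = F.nll_loss(output, target, reduction='mean').item()
```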
After this change, it runs with no warnings at all. In fact, if you right-click in PyCharm and jump to the definition of nll_loss, you will see that the hint in the warning is spot on. Here is the definition of nll_loss: