1. PyTorch: input/target size mismatch in a loss function
File "C:\Users\Rain\AppData\Local\Programs\Python\Anaconda.3.5.1\envs\python35\python35\lib\site-packages\torch\nn\modules\module.py", line 491, in __call__ result = self.forward(*input, **kwargs)
File "C:\Users\Rain\AppData\Local\Programs\Python\Anaconda.3.5.1\envs\python35\python35\lib\site-packages\torch\nn\modules\loss.py", line 500, in forward reduce=self.reduce)
File "C:\Users\Rain\AppData\Local\Programs\Python\Anaconda.3.5.1\envs\python35\python35\lib\site-packages\torch\nn\functional.py", line 1514, in binary_cross_entropy_with_logits
raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
ValueError: Target size (torch.Size([32])) must be the same as input size (torch.Size([32,2]))
Cause:
The input and target tensors passed to the loss function have different sizes: the model outputs logits of shape [32, 2], but the target is a vector of class indices with shape [32].
Solution:
Convert the target to one-hot form so it matches the [N, num_classes] shape of the logits, for example:
one_hot = torch.nn.functional.one_hot(masks, num_classes=args.num_classes).float()
Note that one_hot returns integer values, while binary_cross_entropy_with_logits expects a float target, hence the .float() cast.
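The shape fix above can be sketched without PyTorch: one-hot encoding expands a length-N vector of class indices into an N×C float matrix, which then matches the model's [N, C] logits. A minimal pure-Python illustration (the `one_hot` helper below is a hypothetical stand-in for `torch.nn.functional.one_hot(...).float()`):

```python
def one_hot(labels, num_classes):
    """Expand integer labels of shape [N] into a float [N, num_classes]
    matrix, matching the shape of the model's logits."""
    return [[1.0 if c == y else 0.0 for c in range(num_classes)]
            for y in labels]

labels = [0, 1, 1, 0]          # shape [4], analogous to the [32] target above
targets = one_hot(labels, 2)   # shape [4, 2], now matches [N, 2] logits
```

With a matching shape, the loss function no longer raises the ValueError shown in the traceback.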
2. PyTorch weight shape mismatch when loading a pretrained model
Recently, while fine-tuning a model in PyTorch, I ran into:
size mismatch for fc.weight: copying a param with shape torch.Size([1000, 2048]) from checkpoint, the shape in current model is torch.Size([2, 2048]).
size mismatch for fc.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([2]).
這個(gè)是因?yàn)闃侵飨螺d的預(yù)訓(xùn)練模型中的全連接層是1000類別的,而樓主本人的類別只有2類,所以會(huì)報(bào)不匹配的錯(cuò)誤
Solution:
The error message shows that only the fc layer's weights mismatch, so we simply skip loading that layer's parameters and keep the model's own randomly initialised fc weights.
net = se_resnet50(num_classes=2)
pretrained_dict = torch.load("./senet/seresnet50-60a8950a85b2b.pkl")
model_dict = net.state_dict()
# Rebuild the pretrained weights, dropping the mismatched layers; here the layer name is "fc"
pretrained_dict = {k: v for k, v in pretrained_dict.items() if (k in model_dict and 'fc' not in k)}
# Merge the filtered pretrained weights into the model's state dict
model_dict.update(pretrained_dict)
net.load_state_dict(model_dict)
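The filtering step above boils down to a dict comprehension over the checkpoint. A minimal sketch with plain dicts standing in for real state dicts (the layer names and shape strings below are made up for illustration only):

```python
# Pretend checkpoint: the "fc" layer was trained for 1000 classes.
pretrained_dict = {
    "conv1.weight": "tensor(64, 3, 7, 7)",
    "fc.weight": "tensor(1000, 2048)",
    "fc.bias": "tensor(1000)",
}
# Pretend current model state dict: fc now has 2 classes.
model_dict = {
    "conv1.weight": "random init",
    "fc.weight": "tensor(2, 2048)",
    "fc.bias": "tensor(2)",
}
# Keep only keys that exist in the model and are not part of the fc layer.
filtered = {k: v for k, v in pretrained_dict.items()
            if k in model_dict and "fc" not in k}
model_dict.update(filtered)  # fc keeps its randomly initialised 2-class shape
```

After the update, every pretrained layer except fc carries the checkpoint weights, so `load_state_dict(model_dict)` succeeds without a size mismatch.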
The above is my personal experience; I hope it serves as a useful reference, and I hope you will continue to support 腳本之家.