
Deep Learning with Python: Building ShuffleNetV2 with PyTorch

熱門標(biāo)簽:幫人做地圖標(biāo)注收費(fèi)算詐騙嗎 溫州旅游地圖標(biāo)注 蘇州電銷機(jī)器人十大排行榜 外呼不封號(hào)系統(tǒng) 悟空智電銷機(jī)器人6 荊州云電銷機(jī)器人供應(yīng)商 電信營(yíng)業(yè)廳400電話申請(qǐng) 江蘇房產(chǎn)電銷機(jī)器人廠家 遼寧400電話辦理多少錢

1. model.py

1.1 Channel Shuffle




import torch
import torch.nn as nn
from torch import Tensor
from typing import List, Callable


def channel_shuffle(x: Tensor, groups: int) -> Tensor:
    batch_size, num_channels, height, width = x.size()
    channels_per_group = num_channels // groups

    # reshape
    # [batch_size, num_channels, height, width] -> [batch_size, groups, channels_per_group, height, width]
    x = x.view(batch_size, groups, channels_per_group, height, width)

    x = torch.transpose(x, 1, 2).contiguous()

    # flatten
    x = x.view(batch_size, -1, height, width)

    return x
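To see the shuffle concretely, here is a small self-contained check (it re-declares the function above so it runs on its own): with 4 channels and `groups=2`, the channel order `[0, 1, 2, 3]` becomes `[0, 2, 1, 3]` while the tensor shape is unchanged.

```python
import torch
from torch import Tensor


def channel_shuffle(x: Tensor, groups: int) -> Tensor:
    batch_size, num_channels, height, width = x.size()
    channels_per_group = num_channels // groups
    # [N, C, H, W] -> [N, groups, C/groups, H, W]
    x = x.view(batch_size, groups, channels_per_group, height, width)
    # swap the group and per-group-channel dimensions, then flatten back
    x = torch.transpose(x, 1, 2).contiguous()
    return x.view(batch_size, -1, height, width)


# 4 channels, groups=2: channels [0, 1, 2, 3] interleave into [0, 2, 1, 3]
x = torch.arange(4, dtype=torch.float32).reshape(1, 4, 1, 1)
shuffled = channel_shuffle(x, groups=2)
order = shuffled.flatten().tolist()
```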

1.2 Inverted residual block



class InvertedResidual(nn.Module):
    def __init__(self, input_c: int, output_c: int, stride: int):
        super(InvertedResidual, self).__init__()

        if stride not in [1, 2]:
            raise ValueError("illegal stride value.")
        self.stride = stride

        assert output_c % 2 == 0
        branch_features = output_c // 2
        # when stride is 1, input_channel must be twice branch_features
        # '<<' is the bit-shift operator; x << 1 is a fast way to compute x * 2
        assert (self.stride != 1) or (input_c == branch_features << 1)

        if self.stride == 2:
            self.branch1 = nn.Sequential(
                self.depthwise_conv(input_c, input_c, kernel_s=3, stride=self.stride, padding=1),
                nn.BatchNorm2d(input_c),
                nn.Conv2d(input_c, branch_features, kernel_size=1, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(branch_features),
                nn.ReLU(inplace=True)
            )
        else:
            self.branch1 = nn.Sequential()

        self.branch2 = nn.Sequential(
            nn.Conv2d(input_c if self.stride > 1 else branch_features, branch_features, kernel_size=1,
                      stride=1, padding=0, bias=False),
            nn.BatchNorm2d(branch_features),
            nn.ReLU(inplace=True),
            self.depthwise_conv(branch_features, branch_features, kernel_s=3, stride=self.stride, padding=1),
            nn.BatchNorm2d(branch_features),
            nn.Conv2d(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(branch_features),
            nn.ReLU(inplace=True)
        )

    @staticmethod
    def depthwise_conv(input_c: int,
                       output_c: int,
                       kernel_s: int,
                       stride: int = 1,
                       padding: int = 0,
                       bias: bool = False) -> nn.Conv2d:
        return nn.Conv2d(in_channels=input_c, out_channels=output_c, kernel_size=kernel_s,
                         stride=stride, padding=padding, bias=bias, groups=input_c)

    def forward(self, x: Tensor) -> Tensor:
        if self.stride == 1:
            x1, x2 = x.chunk(2, dim=1)
            out = torch.cat((x1, self.branch2(x2)), dim=1)
        else:
            out = torch.cat((self.branch1(x), self.branch2(x)), dim=1)

        out = channel_shuffle(out, 2)

        return out
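In the stride-1 path above, `chunk` splits the input in half along the channel dimension; only the second half passes through `branch2`, and concatenation restores the original channel count. A quick illustration of that split (the tensor sizes here are illustrative, matching stage2 of the x1.0 model):

```python
import torch

# illustrative stride-1 input: 116 channels, as in ShuffleNetV2 x1.0 stage2
x = torch.randn(1, 116, 28, 28)

# the forward path splits channels in half along dim=1...
x1, x2 = x.chunk(2, dim=1)

# ...processes only x2 through branch2, then concatenates,
# so the channel count is preserved (identity stands in for branch2 here)
out = torch.cat((x1, x2), dim=1)
```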

1.3 ShuffleNetV2




class ShuffleNetV2(nn.Module):
    def __init__(self,
                 stages_repeats: List[int],
                 stages_out_channels: List[int],
                 num_classes: int = 1000,
                 inverted_residual: Callable[..., nn.Module] = InvertedResidual):
        super(ShuffleNetV2, self).__init__()

        if len(stages_repeats) != 3:
            raise ValueError("expected stages_repeats as list of 3 positive ints")
        if len(stages_out_channels) != 5:
            raise ValueError("expected stages_out_channels as list of 5 positive ints")
        self._stage_out_channels = stages_out_channels

        # input RGB image
        input_channels = 3
        output_channels = self._stage_out_channels[0]

        self.conv1 = nn.Sequential(
            nn.Conv2d(input_channels, output_channels, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(output_channels),
            nn.ReLU(inplace=True)
        )
        input_channels = output_channels

        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

        # Static annotations for mypy
        self.stage2: nn.Sequential
        self.stage3: nn.Sequential
        self.stage4: nn.Sequential

        stage_names = ["stage{}".format(i) for i in [2, 3, 4]]
        for name, repeats, output_channels in zip(stage_names, stages_repeats,
                                                  self._stage_out_channels[1:]):
            seq = [inverted_residual(input_channels, output_channels, 2)]
            for i in range(repeats - 1):
                seq.append(inverted_residual(output_channels, output_channels, 1))
            setattr(self, name, nn.Sequential(*seq))
            input_channels = output_channels

        output_channels = self._stage_out_channels[-1]
        self.conv5 = nn.Sequential(
            nn.Conv2d(input_channels, output_channels, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(output_channels),
            nn.ReLU(inplace=True)
        )

        self.fc = nn.Linear(output_channels, num_classes)

    def _forward_impl(self, x: Tensor) -> Tensor:
        # See note [TorchScript super()]
        x = self.conv1(x)
        x = self.maxpool(x)
        x = self.stage2(x)
        x = self.stage3(x)
        x = self.stage4(x)
        x = self.conv5(x)
        x = x.mean([2, 3])  # global pool
        x = self.fc(x)
        return x

    def forward(self, x: Tensor) -> Tensor:
        return self._forward_impl(x)
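The constructor arguments map directly onto the x1.0 configuration from the ShuffleNetV2 paper. A plain-Python trace of how channel counts flow through the stages, mirroring the `__init__` loop above (the repeat and channel numbers are the standard x1.0 values):

```python
# ShuffleNetV2 x1.0 configuration: 3 stage repeat counts, 5 channel settings
stages_repeats = [4, 8, 4]
stages_out_channels = [24, 116, 232, 464, 1024]

input_channels = stages_out_channels[0]  # after conv1: 3 -> 24
blocks = []
for repeats, output_channels in zip(stages_repeats, stages_out_channels[1:4]):
    # the first block of each stage downsamples with stride 2
    blocks.append((input_channels, output_channels, 2))
    # the remaining blocks keep stride 1
    blocks += [(output_channels, output_channels, 1)] * (repeats - 1)
    input_channels = output_channels

final_channels = stages_out_channels[-1]  # conv5 output, feeds the fc layer
```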

2. train.py
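The original train.py listing is not reproduced here. As a hedged sketch of what one training step looks like (the model, data shapes, and hyperparameters below are placeholders, not the article's actual script; in the real script the model would be a `ShuffleNetV2(...)` instance):

```python
import torch
import torch.nn as nn

# placeholder model standing in for ShuffleNetV2; any classifier works here
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# one batch of random stand-in data (4 images, 5 classes)
images = torch.randn(4, 3, 8, 8)
labels = torch.randint(0, 5, (4,))

# a single training step: forward, loss, backward, update
model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```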

This concludes the article on building ShuffleNetV2 with PyTorch. For more on using PyTorch from Python, please search this site's earlier articles, and thank you for your continued support!


標(biāo)簽:黃山 濟(jì)南 臺(tái)灣 欽州 景德鎮(zhèn) 宿遷 喀什 三沙
