Table of Contents
- 1. Visualizing the network structure with tensorboardX
- 2. Visualization with visdom
- 3. Visualizing the network structure with pytorchviz
1. Visualizing the network structure with tensorboardX
Reference: https://github.com/lanpa/tensorboardX
tensorboardX supports scalar, image, figure, histogram, audio, text, graph, onnx_graph, embedding, pr_curve and video summaries.
The example below requires tensorboardX >= 1.2 and pytorch >= 0.4.
Installation
pip install tensorboardX
or pip install git+https://github.com/lanpa/tensorboardX
Example
# demo.py
import torch
import torchvision.utils as vutils
import numpy as np
import torchvision.models as models
from torchvision import datasets
from tensorboardX import SummaryWriter
resnet18 = models.resnet18(False)
writer = SummaryWriter()
sample_rate = 44100
freqs = [262, 294, 330, 349, 392, 440, 440, 440, 440, 440, 440]
for n_iter in range(100):
    dummy_s1 = torch.rand(1)
    dummy_s2 = torch.rand(1)
    # data grouping by `slash`
    writer.add_scalar('data/scalar1', dummy_s1[0], n_iter)
    writer.add_scalar('data/scalar2', dummy_s2[0], n_iter)
    writer.add_scalars('data/scalar_group', {'xsinx': n_iter * np.sin(n_iter),
                                             'xcosx': n_iter * np.cos(n_iter),
                                             'arctanx': np.arctan(n_iter)}, n_iter)
    dummy_img = torch.rand(32, 3, 64, 64)  # output from network
    if n_iter % 10 == 0:
        x = vutils.make_grid(dummy_img, normalize=True, scale_each=True)
        writer.add_image('Image', x, n_iter)
        dummy_audio = torch.zeros(sample_rate * 2)
        for i in range(dummy_audio.size(0)):
            # amplitude of sound should be in [-1, 1]
            dummy_audio[i] = np.cos(freqs[n_iter // 10] * np.pi * float(i) / float(sample_rate))
        writer.add_audio('myAudio', dummy_audio, n_iter, sample_rate=sample_rate)
        writer.add_text('Text', 'text logged at step:' + str(n_iter), n_iter)
        for name, param in resnet18.named_parameters():
            writer.add_histogram(name, param.clone().cpu().data.numpy(), n_iter)
        # needs tensorboard 0.4RC or later
        writer.add_pr_curve('xoxo', np.random.randint(2, size=100), np.random.rand(100), n_iter)
dataset = datasets.MNIST('mnist', train=False, download=True)
images = dataset.test_data[:100].float()
label = dataset.test_labels[:100]
features = images.view(100, 784)
writer.add_embedding(features, metadata=label, label_img=images.unsqueeze(1))
# export scalar data to JSON for external processing
writer.export_scalars_to_json("./all_scalars.json")
writer.close()
Run: python demo.py
A runs folder will be created in the working directory; cd to the project directory and run tensorboard --logdir runs to view the results.
Result: the logged scalars, images, audio, text, histograms, PR curve, and embedding appear under their respective tabs in the TensorBoard dashboard.
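The demo above covers scalars, images, audio, text, histograms, and a PR curve, but it does not log the network graph itself. Below is a minimal sketch of graph logging with SummaryWriter.add_graph, assuming the same resnet18 model and a dummy ImageNet-shaped input; the file name graph_demo.py is only for illustration:
# graph_demo.py -- sketch of logging a network graph with tensorboardX
import torch
import torchvision.models as models
from tensorboardX import SummaryWriter

model = models.resnet18(False)
dummy_input = torch.rand(1, 3, 224, 224)  # (N, C, H, W) input the model expects

writer = SummaryWriter()
# trace the model with the dummy input and write the graph to the runs folder;
# it then shows up under the "Graphs" tab in TensorBoard
writer.add_graph(model, dummy_input)
writer.close()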
2. Visualization with visdom
Reference: https://github.com/facebookresearch/visdom
Installation and startup
Install: pip install visdom
Start: python -m visdom.server
Example
import numpy as np
from visdom import Visdom

viz = Visdom()
# a single image
viz.image(
    np.random.rand(3, 512, 256),
    opts=dict(title='Random!', caption='How random.'),
)
# multiple images
viz.images(
    np.random.randn(20, 3, 64, 64),
    opts=dict(title='Random images', caption='How random.')
)
import numpy as np
from visdom import Visdom

vis = Visdom()
image = np.zeros((100, 100))
vis.text("hello world!!!")
vis.image(image)
vis.line(Y=np.column_stack((np.random.randn(10), np.random.randn(10))),
         X=np.column_stack((np.arange(10), np.arange(10))),
         opts=dict(title="line", legend=["Test", "Test1"]))
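During training, visdom is often used for a live loss curve by appending one point per step to the same window. A minimal sketch, assuming a window id 'train_loss' and a fake loss value (both only for illustration):
import numpy as np
from visdom import Visdom

vis = Visdom()
win = 'train_loss'  # fixed window id so every call updates the same plot
for step in range(100):
    loss = np.exp(-step / 30.0) + 0.05 * np.random.rand()  # stand-in for a real loss
    vis.line(X=np.array([step]), Y=np.array([loss]), win=win,
             update='append' if step > 0 else None,
             opts=dict(title='training loss', xlabel='step', ylabel='loss'))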
3. Visualizing the network structure with pytorchviz
Reference: https://github.com/szagoruyko/pytorchviz
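The reference above only links the project, so here is a minimal sketch of typical pytorchviz usage: make_dot builds the autograd graph of a forward pass and renders it with Graphviz. This assumes the package is installed as torchviz and the system graphviz binaries are available; the output name resnet18_graph is arbitrary:
# pip install torchviz
import torch
import torchvision.models as models
from torchviz import make_dot

model = models.resnet18(False)
x = torch.randn(1, 3, 224, 224)  # dummy input
y = model(x)

# build the autograd graph of the output and render it as resnet18_graph.png
dot = make_dot(y, params=dict(model.named_parameters()))
dot.render('resnet18_graph', format='png')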
This concludes this article on several ways to implement visualization in PyTorch.