Building a neural network in PyTorch is simple and straightforward. Here are two patterns I use regularly:
import torch
import torch.nn as nn
First: put the whole network in a single nn.Sequential.
class NN(nn.Module):
    def __init__(self):
        super(NN, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(30, 40),
            nn.ReLU(),
            nn.Linear(40, 60),
            nn.Tanh(),
            nn.Linear(60, 10),
            nn.Softmax(dim=1)  # dim=1: softmax over the feature dimension
        )
        # Initialize the first Linear layer's parameters after the model is built.
        self.model[0].weight.data.uniform_(-3e-3, 3e-3)
        self.model[0].bias.data.uniform_(-1, 1)

    def forward(self, states):
        return self.model(states)
This first pattern writes the entire network inside one Sequential, and parameters can be set individually after the network is built: self.model[0].weight.data.uniform_(-3e-3, 3e-3) initializes the first Linear layer's weights from a uniform distribution on (-3e-3, 3e-3), and the bias from a uniform distribution on (-1, 1). Note that the in-place initializers end in an underscore: uniform_, not uniform.
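On recent PyTorch versions the same initialization is usually expressed through torch.nn.init, which operates on the Parameter directly instead of going through .data. A minimal sketch, assuming the NN class above (the helper name init_first_layer is mine, not from the original post):

import torch.nn as nn

def init_first_layer(model: nn.Sequential) -> None:
    # nn.init.uniform_ fills the tensor in place, like weight.data.uniform_.
    nn.init.uniform_(model[0].weight, -3e-3, 3e-3)
    nn.init.uniform_(model[0].bias, -1.0, 1.0)

net = NN()
init_first_layer(net.model)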
Second: define each layer as a named attribute.
class NN1(nn.Module):
    def __init__(self):
        super(NN1, self).__init__()
        self.Linear1 = nn.Linear(30, 40)
        # Set this layer's parameters right after defining it.
        self.Linear1.weight.data.fill_(-0.1)
        # self.Linear1.weight.data.uniform_(-3e-3, 3e-3)
        self.Linear1.bias.data.fill_(-0.1)
        self.layer1 = nn.Sequential(self.Linear1, nn.ReLU())
        self.Linear2 = nn.Linear(40, 60)
        self.layer2 = nn.Sequential(self.Linear2, nn.Tanh())
        self.Linear3 = nn.Linear(60, 10)
        self.layer3 = nn.Sequential(self.Linear3, nn.Softmax(dim=1))

    def forward(self, states):
        # NN1 has no self.model; chain the three blocks explicitly.
        x = self.layer1(states)
        x = self.layer2(x)
        return self.layer3(x)
In this pattern, parameters can be set immediately after each linear layer is defined; for the first linear layer that looks like: self.Linear1.weight.data.fill_(-0.1) and self.Linear1.bias.data.fill_(-0.1). Note that forward must chain layer1, layer2, and layer3 explicitly, since there is no self.model here.
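As a quick sanity check (a sketch of mine, not from the original post), you can walk named_parameters() to confirm that the fill_ calls took effect:

net1 = NN1()
for name, param in net1.named_parameters():
    # Linear1 is registered both as an attribute and inside layer1, but
    # named_parameters() reports each shared parameter only once.
    print(name, tuple(param.shape), param.data.mean().item())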
You can also inspect the parameters directly to see the effect of defining them this way:
Net = NN()
print("0:", Net.model[0])
print("weight:", type(Net.model[0].weight))
print("weight:", type(Net.model[0].weight.data))
print("bias", Net.model[0].bias.data)
print("1:", Net.model[1])
# print("weight:", Net.model[1].weight.data)  # would fail: ReLU has no weight
print("2:", Net.model[2])
print("3:", Net.model[3])
# print(Net.model[-1])
Net1 = NN1()
print(Net1.Linear1.weight.data)
Output:
0: Linear (30 -> 40)
weight: <class 'torch.nn.parameter.Parameter'>
weight: <class 'torch.FloatTensor'>
bias
-0.6287
-0.6573
-0.0452
 0.9594
-0.7477
 0.1363
-0.1594
-0.1586
 0.0360
 0.7375
 0.2501
-0.1371
 0.8359
-0.9684
-0.3886
 0.7200
-0.3906
 0.4911
 0.8081
-0.5449
 0.9872
 0.2004
 0.0969
-0.9712
 0.0873
 0.4562
-0.4857
-0.6013
 0.1651
 0.3315
-0.7033
-0.7440
 0.6487
 0.9802
-0.5977
 0.3245
 0.7563
 0.5596
 0.2303
-0.3836
[torch.FloatTensor of size 40]
1: ReLU ()
2: Linear (40 -> 60)
3: Tanh ()
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
   ...                ⋱                ...
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
[torch.FloatTensor of size 40x30]

Process finished with exit code 0
Note that self.Linear1.weight has type nn.Parameter, i.e. it is a parameter of the network, while self.Linear1.weight.data is the underlying FloatTensor (a plain torch.Tensor on current PyTorch versions).
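To make the distinction concrete, here is a small sketch (written against a recent PyTorch, where .data is a plain torch.Tensor):

import torch
import torch.nn as nn

layer = nn.Linear(30, 40)
# nn.Parameter is a Tensor subclass that Modules register automatically,
# which is why it shows up in model.parameters() for the optimizer.
print(isinstance(layer.weight, nn.Parameter))  # True
print(type(layer.weight.data))                 # <class 'torch.Tensor'>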
Original article: https://blog.csdn.net/geter_CS/article/details/80015957