zhaifly — Profile Page

  • Answered a question · 2019-07-17

    In the deep learning framework Caffe, what are the Solver file and the Net file, and how do you write them?

    The Net file defines the network model (architecture). For example, LeNet:

    name: 'LeNet'
    layer {
      name: 'data'
      type: 'Input'
      top: 'data'
      input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
    }
    layer {
      name: 'conv1'
      type: 'Convolution'
      bottom: 'data'
      top: 'conv1'
      param { lr_mult: 1 }
      param { lr_mult: 2 }
      convolution_param {
        num_output: 20
        kernel_size: 5
        stride: 1
        weight_filler { type: 'xavier' }
        bias_filler { type: 'constant' }
      }
    }
    layer {
      name: 'pool1'
      type: 'Pooling'
      bottom: 'conv1'
      top: 'pool1'
      pooling_param { pool: MAX kernel_size: 2 stride: 2 }
    }
    layer {
      name: 'conv2'
      type: 'Convolution'
      bottom: 'pool1'
      top: 'conv2'
      param { lr_mult: 1 }
      param { lr_mult: 2 }
      convolution_param {
        num_output: 50
        kernel_size: 5
        stride: 1
        weight_filler { type: 'xavier' }
        bias_filler { type: 'constant' }
      }
    }
    layer {
      name: 'pool2'
      type: 'Pooling'
      bottom: 'conv2'
      top: 'pool2'
      pooling_param { pool: MAX kernel_size: 2 stride: 2 }
    }
    layer {
      name: 'ip1'
      type: 'InnerProduct'
      bottom: 'pool2'
      top: 'ip1'
      param { lr_mult: 1 }
      param { lr_mult: 2 }
      inner_product_param {
        num_output: 500
        weight_filler { type: 'xavier' }
        bias_filler { type: 'constant' }
      }
    }
    layer {
      name: 'relu1'
      type: 'ReLU'
      bottom: 'ip1'
      top: 'ip1'
    }
    layer {
      name: 'ip2'
      type: 'InnerProduct'
      bottom: 'ip1'
      top: 'ip2'
      param { lr_mult: 1 }
      param { lr_mult: 2 }
      inner_product_param {
        num_output: 10
        weight_filler { type: 'xavier' }
        bias_filler { type: 'constant' }
      }
    }
    layer {
      name: 'prob'
      type: 'Softmax'
      bottom: 'ip2'
      top: 'prob'
    }

    The Solver file sets the training hyperparameters and points at the Net file, as in the standard MNIST example:

    # The train/test net protocol buffer definition
    net: 'examples/mnist/lenet_train_test.prototxt'
    # test_iter specifies how many forward passes the test should carry out.
    # In the case of MNIST, we have test batch size 100 and 100 test iterations,
    # covering the full 10,000 testing images.
    test_iter: 100
    # Carry out testing every 500 training iterations.
    test_interval: 500
    # The base learning rate, momentum and the weight decay of the network.
    base_lr: 0.01
    momentum: 0.9
    weight_decay: 0.0005
    # The learning rate policy
    lr_policy: 'inv'
    gamma: 0.0001
    power: 0.75
    # Display every 100 iterations
    display: 100
    # The maximum number of iterations
    max_iter: 10000
    # snapshot intermediate results
    snapshot: 5000
    snapshot_prefix: 'examples/mnist/lenet'
    # solver mode: CPU or GPU
    solver_mode: CPU
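    As a sketch of how such a Solver file might be produced programmatically rather than hand-edited, the helper below renders a dict of solver settings as prototxt text. `make_solver` is a hypothetical utility written for this answer, not part of Caffe; the key names mirror Caffe's `SolverParameter` fields.

```python
# Hypothetical helper (not part of Caffe): render a dict of solver
# settings as prototxt text. String values are quoted, except enums
# such as solver_mode, which protobuf text format writes bare.

def make_solver(params):
    """Render solver settings as Caffe-style prototxt text."""
    lines = []
    for key, value in params.items():
        if isinstance(value, str) and key != "solver_mode":
            lines.append(f"{key}: '{value}'")  # quoted string field
        else:
            lines.append(f"{key}: {value}")    # number or bare enum
    return "\n".join(lines) + "\n"

solver_text = make_solver({
    "net": "examples/mnist/lenet_train_test.prototxt",
    "test_iter": 100,
    "test_interval": 500,
    "base_lr": 0.01,
    "momentum": 0.9,
    "weight_decay": 0.0005,
    "lr_policy": "inv",
    "gamma": 0.0001,
    "power": 0.75,
    "display": 100,
    "max_iter": 10000,
    "snapshot": 5000,
    "snapshot_prefix": "examples/mnist/lenet",
    "solver_mode": "CPU",  # enum value, emitted unquoted
})
print(solver_text)
```

    With Caffe installed, the resulting file would be used in the usual way, e.g. `caffe train --solver=lenet_solver.prototxt`, or loaded from Python via pycaffe.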