# 6CCE3CHD/7CCEMCHD Hardware-Software Co-design of Neuromorphic Networks

This assignment concerns the hardware-software co-design of a neuromorphic network.

The code provided will train the SNN-DC model from the paper on the MNIST database.

In the main.py file, there is a section where you can define your network architecture and learning schemes.

The variables you need to modify are as follows:

- Learning time – the number of time-steps for which the input is presented and for which you want to train the network (set to 10 time-steps by default).
- Evaluation time – the number of time-steps for which the input is presented during the inference phase (set to 5 time-steps by default).
- batch size – the number of images used in a batch to determine the weight-update values (set to 1 image per batch).
- epochs – the number of epochs of training (set to 3 for now; you will probably need to increase this to get better accuracy).
- lr vector – a vector that contains the learning rate at every epoch. (In the example provided, this vector has 3 elements; refer to the literature on how to choose the learning rate.)
- PCM parameters (kwargs):
  * Initially, you can set clone to pcm=false, but once your software network is defined you should change this to true to see what the network accuracy is in hardware.
  * precision: 4. For PCM this should be 4, but you are free to reduce it and see the impact on accuracy.
  * write noise stdv: 0.01. This should be at least 0.01, but you can increase it and see the effect on accuracy.
- train size = 50,000 – the number of images from the database to be used for training.
- valid size = 10,000 – the number of images from the database to be used for validation.
- test size = 10,000 – the number of images from the database to be used for testing.
- If you use the entire data set for initial design and exploration, you will see that the code takes quite a bit of time to execute. So, initially start with smaller numbers (say, 1,000 for each). The final results you report should use the values above.
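Pulling these settings together, the hyperparameter section of main.py might look something like the following sketch. The variable names here are illustrative assumptions, not necessarily the exact identifiers used in the provided code, and the learning-rate values are example placeholders:

```python
# Illustrative hyperparameter block (names and lr values are assumptions,
# not the exact identifiers in the provided main.py).
learning_time = 10              # time-steps per input during training
evaluation_time = 5             # time-steps per input during inference
batch_size = 1                  # images per weight update
epochs = 3                      # increase this for better accuracy
lr_vector = [0.1, 0.05, 0.01]   # one learning rate per epoch (example values)

# PCM-related kwargs
pcm_kwargs = {
    "clone_to_pcm": False,      # set True once the software network is finalised
    "precision": 4,             # bits; should be 4 for PCM
    "write_noise_stdv": 0.01,   # should be at least 0.01
}

# Dataset split sizes (use e.g. 1000 each for quick exploration)
train_size, valid_size, test_size = 50_000, 10_000, 10_000
```

Note that lr vector must have one entry per epoch, so its length should track the epochs setting as you increase it.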

To define the network, use `snn_model.add(Linear(N, M, neuron_model=LIF))`. This will add a new layer to the network with M neurons at the input and N neurons at the output. So, in the code provided, we are generating a 784-256-10 fully-connected network.
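The layer-stacking pattern can be illustrated with a minimal stand-in. The real `Linear`, `LIF`, and model classes come from the provided code; the mock classes below are assumptions that only track layer sizes, following the argument order stated above (M inputs, N outputs for `Linear(N, M)`):

```python
# Minimal stand-in for the provided API, just to show how layers stack.
# The real Linear / LIF / model classes come from the assignment code;
# these mocks only record layer sizes.
class LIF:
    """Placeholder for the leaky integrate-and-fire neuron model."""

class Linear:
    def __init__(self, n_out, n_in, neuron_model=LIF):
        # Per the assignment text: Linear(N, M) has M inputs and N outputs.
        self.n_in, self.n_out = n_in, n_out
        self.neuron_model = neuron_model

class SNNModel:
    def __init__(self):
        self.layers = []
    def add(self, layer):
        self.layers.append(layer)

# The 784-256-10 fully-connected network from the provided code:
snn_model = SNNModel()
snn_model.add(Linear(256, 784, neuron_model=LIF))  # 784 inputs -> 256 outputs
snn_model.add(Linear(10, 256, neuron_model=LIF))   # 256 inputs -> 10 outputs
```

Each call to `add` appends a layer, so the output size of one layer must match the input size of the next.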

The code takes each image from the MNIST database and normalises it to [0, 1]. Each pixel value is then used as the success probability of a Bernoulli random variable. Since we have set Learning time = 10, each real-valued input pixel is translated into an input stream of dimension 10, i.e., a maximum of 10 spikes.
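The encoding step above can be sketched in plain Python. This is an illustration of the Bernoulli rate-coding scheme described in the text, not the provided code's actual implementation:

```python
import random

def bernoulli_encode(pixel, time_steps=10, rng=random.Random(0)):
    """Encode a pixel intensity in [0, 1] as a spike train of
    `time_steps` Bernoulli samples (1 = spike, 0 = no spike)."""
    return [1 if rng.random() < pixel else 0 for _ in range(time_steps)]
```

A fully bright pixel (value 1.0) spikes at every time-step, a black pixel (value 0.0) never spikes, and intermediate intensities spike proportionally often on average.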

We have used the squared hinge loss function as the cost: the target value for the correct output neuron is set to 1 and the targets for all incorrect neurons to -1, and the cost function is the sum of squares of the errors.
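For a single image, the cost as described can be computed as follows. This is a plain-Python sketch of the sum-of-squared-errors formulation stated in the text; the provided code's implementation may differ in detail:

```python
def squared_error_cost(outputs, correct_idx):
    """Sum of squared errors against targets of +1 for the correct
    neuron and -1 for every other neuron, as described in the text."""
    targets = [1.0 if i == correct_idx else -1.0 for i in range(len(outputs))]
    return sum((o - t) ** 2 for o, t in zip(outputs, targets))

# Example: 3 output neurons, class 0 correct. Outputs [1, -1, -1]
# match the targets exactly, so the cost is 0.
```

Driving the correct neuron's output toward +1 and all others toward -1 is what the weight updates minimise over each batch.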

You should report the training, validation, and test accuracy on the full database in your report, for the different networks and hyperparameters you have studied.

If you prefer to implement other networks or database problems, you are free to modify the code provided.