Project 4: Neural Nets
In this project you will implement neural nets, and in particular the most common algorithm
for learning the correct weights for a neural net from examples. Code structure is provided for a
Perceptron class and a multi-layer NeuralNet class, and you are responsible for filling in some
missing functions in each of these classes. This includes writing code for the feed-forward
processing of input, as well as the back-propagation algorithm that updates the network weights.
Files you will edit
- NeuralNet.py: Your entire neural net implementation will be within this file
- Testing.py: Helper functions for learning a neural net (you can edit this one for extra credit)
Files you will not edit
- NeuralNetUtil.py: Functions for converting the datasets into Python data structures
- Testing.py: Helper functions for learning a neural net from data
- autograder.py: A custom autograder to check your code with
Evaluation: Your code will be autograded for technical correctness, using the same autograder and test
cases you are provided with. Please do not change the names of any provided functions or classes within
the code, or you will wreak havoc on the autograder. You should ensure your code passes all the test
cases before submitting, as we will not give any points for a question unless all of its test cases
pass. However, the correctness of your implementation, not the autograder's judgment, will be the
final arbiter of your score. Even if your code passes the autograder, we reserve the right to check it for
mistakes in implementation, though this should only be a problem if your code takes too long or
you disregarded announcements regarding the project. The short answer grading guidelines are
Academic Dishonesty: We will be checking your code against other submissions in the class for logical
redundancy. If you copy someone else's code and submit it with minor changes, we will know. These
cheat detectors are quite hard to fool, so please don't try. We trust you all to submit your own work only;
please don't let us down. If you do, we will pursue the strongest consequences available to us.
Collaboration is allowed, but please don't forget to include the names of your collaborators in your
submission.
Getting Help: You are not alone! If you find yourself stuck on something, contact the course staff for
help, either during office hours or over email/Piazza. We want these projects to be rewarding and
instructional, not frustrating and demoralizing. But we don't know when or how to help unless you ask.
This project follows the same terminology as the lectures and Ch. 18.7 in your book. Neural
networks are composed of nodes called perceptrons, as well as input units. Every perceptron has
inputs with associated weights, and from these it produces an output based on its activation
function. Thus you will be implementing a feed-forward, multi-layer neural net.
We will be training the neural nets to be classifiers. Inputs will be in the form of sets of examples
that have an assignment of values to various features and corresponding class values. The
datasets used for this project include a cars dataset and a dataset of pen handwriting values.
For the latter, numeric data from images is stored to train a classifier of handwritten digits.
Instead of converting the stored examples into dictionaries as in the last project, each example
will be parsed into lists of numeric values. Each possible classification corresponds
to a single output perceptron, so in addition to the list of inputs, each example includes the list of
outputs for the output layer. The Pen dataset has 16 inputs and 10 output perceptrons, since
there are 16 different features in the handwriting-recognition input and 10 possible
classifications of the input (corresponding to the digits 0-9). In the case of discrete-valued
examples such as those in the cars dataset, distinct arbitrary numeric values are assigned to every
value of every feature.
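To make this encoding concrete, here is an illustrative sketch (the helper names below are made up for illustration; the actual parsing lives in NeuralNetUtil.py) of how a single Pen example pairs 16 feature values with a 10-element output list:

```python
def one_hot(label, num_classes):
    """Encode a class label as one output value per output perceptron."""
    return [1.0 if i == label else 0.0 for i in range(num_classes)]

# A pen-digit example: 16 numeric feature values, plus the 10-element
# output list for the output layer (these feature values are made up).
features = [47, 100, 27, 81, 57, 37, 26, 0, 0, 23, 56, 53, 100, 90, 40, 98]
example = (features, one_hot(8, 10))  # an example labeled as the digit 8
```

Only the output perceptron matching the example's class has a target of 1; the other nine have a target of 0.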
The code we provide you has the methods for parsing the datasets into Python data structures,
and the beginnings of the Perceptron and NeuralNet classes. A Perceptron merely stores an input
size and weights for all the inputs, as well as methods for computing the output and error given
an input. An object of the NeuralNet class stores lists of Perceptrons and has methods for
computing the output of an entire network and updating the network via back-propagation
learning. The network consists of inputs (just a list of inputs that is a parameter to feedForward),
an output layer, and 0 or more hidden layers. Although the structure and initialization are written,
all the actual functionality will be implemented by you.
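Conceptually, the Perceptron just described might be organized roughly as follows. This is a sketch for orientation only; the attribute and method names here are assumptions, and the actual class definition in NeuralNet.py is authoritative:

```python
import random

class Perceptron(object):
    """Stores an input size and one weight per input, plus a bias weight."""
    def __init__(self, in_size=1, weights=None):
        self.in_size = in_size + 1  # +1 for the bias input
        if weights is None:
            # small random initial weights, later tuned by back-propagation
            self.weights = [random.uniform(-1.0, 1.0)
                            for _ in range(self.in_size)]
        else:
            self.weights = weights

    def weighted_sum(self, in_acts):
        """Dot product of input activations (bias already appended)
        with the stored weights."""
        return sum(w * a for w, a in zip(self.weights, in_acts))
```

The activation function is then applied to this weighted sum to produce the perceptron's output.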
Implement sigmoid and sigmoidActivation in the Perceptron class. Then, implement feedForward in
the NeuralNet class. Be sure to heed the comments; in particular, don't forget to append a 1 to the
input list for the bias input. You now have a neural net classifier! However, the weights are still
randomized, so it is rather useless…
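The sigmoid itself is the standard logistic function, and feed-forward just chains it layer by layer. A minimal free-standing sketch (the real versions are methods on the provided classes, so the function signatures here are assumptions):

```python
import math

def sigmoid(value):
    """Logistic activation: g(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-value))

def feed_forward_layer(layer_weights, inputs):
    """Feed-forward through one layer: append the bias input of 1, then
    pass each perceptron's weighted sum through the sigmoid.
    layer_weights is a list of weight lists, one per perceptron."""
    acts = inputs + [1.0]  # bias input
    return [sigmoid(sum(w * a for w, a in zip(weights, acts)))
            for weights in layer_weights]
```

Feeding the output list of one layer in as the input list of the next, from the first hidden layer through the output layer, gives the full network's output.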
Implement sigmoidDeriv, sigmoidActivationDeriv, and updateWeights in Perceptron according to the
equations in the book. Note that delta is an input to updateWeights and will be the appropriate delta
value regardless of whether the Perceptron is in the output layer or a hidden layer; its computation
will be implemented later in backPropLearning.
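Both pieces follow directly from the textbook equations: the sigmoid's derivative has the convenient closed form g'(x) = g(x)(1 - g(x)), and each weight moves in proportion to its input activation and the supplied delta. A hedged sketch, where alpha is the learning rate (the real versions are Perceptron methods, so these signatures are assumptions):

```python
import math

def sigmoid(value):
    return 1.0 / (1.0 + math.exp(-value))

def sigmoid_deriv(value):
    """Derivative of the sigmoid: g'(x) = g(x) * (1 - g(x))."""
    s = sigmoid(value)
    return s * (1.0 - s)

def update_weights(weights, inputs, alpha, delta):
    """One gradient step per weight: w_i <- w_i + alpha * a_i * delta,
    where a_i is the i-th input activation (bias already appended)."""
    return [w + alpha * a * delta for w, a in zip(weights, inputs)]
```

Because delta is passed in, this same update applies unchanged to output-layer and hidden-layer perceptrons; only the computation of delta (done later in backPropLearning) differs between the two cases.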