# FIT3181: Deep Learning Assignment 1


This notebook has been prepared for you to complete Assignment 1. The theme of this assignment is practical machine learning knowledge and skills in deep neural networks, including feedforward and convolutional neural networks. Some sections have been partially completed to help you get started. **The total mark for this notebook is 100**.

This assignment contains **three** parts.

**Hint**: This assignment was designed based on the lecture and tutorial sessions covered from Week 1 to Week 6. You are strongly recommended to go through these materials thoroughly, which will help you to complete this assignment.

**3.2 What to submit**

This assignment is to be completed individually and submitted via the Moodle unit site.

**By the due date, you are required to submit one single zip file, named **xxx_assignment01_solution.zip** where **xxx** is your student ID, to the corresponding **Assignment (Dropbox)** in Moodle.**

*For example, if your student ID is 123456, then gather all of your assignment solution files into a folder, create a zip file named 123456_assignment01_solution.zip, and submit this file.*

Within this zip folder, you **must** submit the following files:

1. **Assignment01_solution.ipynb**: this is your Python notebook solution source file.
2. **Assignment01_output.html**: this is the output of your Python notebook solution *exported* in html format.
3. Any **extra files or folders** needed to complete your assignment (e.g., images used in your answers).

Since the notebook is quite large to load and work on as a whole, one recommended option is to split the solution into three parts and work on them separately. In that case, replace **Assignment01_solution.ipynb** with three notebooks: **Assignment01_Part1_solution.ipynb**, **Assignment01_Part2_solution.ipynb**, and **Assignment01_Part3_solution.ipynb**.

**You can run your code on Google Colab. In this case, you need to capture screenshots of your Google Colab model training and put them in the corresponding places in your Jupyter notebook. You also need to store your trained models in the folder *./models* with recognizable file names (e.g., Part3_Sec3_2_model.h5).**

**3.3 Part 1: Theory and Knowledge Questions**

[Total marks for this part: 30 points]

The first part of this assignment is for you to demonstrate the knowledge of deep learning that you have acquired from the lectures and tutorials. Most of the content in this part is drawn from **the lectures and tutorials from Weeks 1 to 3**. Going through these materials before attempting this part is highly recommended.

**Question 1.1** Activation functions play an important role in modern deep NNs. For each of the activation functions below, state its output range, find its derivative (show your steps), and plot the activation function and its derivative.

**(a)** Leaky ReLU: [1.5 points]

$$\mathrm{LeakyReLU}(x) = \begin{cases} 0.01x & \text{if } x < 0 \\ x & \text{otherwise} \end{cases}$$

**(b)** Softplus: [1.5 points]

$$\mathrm{Softplus}(x) = \ln\left(1 + e^{x}\right)$$
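As an optional sanity check, both activations and the derivatives they should match can be sketched in NumPy as below (the closed-form gradients here are the standard ones, but verify them against your own derivation; the finite-difference check is just a way to catch algebra mistakes):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # 0.01*x for x < 0, x otherwise
    return np.where(x < 0, alpha * x, x)

def leaky_relu_grad(x, alpha=0.01):
    # derivative: 0.01 for x < 0, 1 otherwise
    return np.where(x < 0, alpha, 1.0)

def softplus(x):
    # ln(1 + e^x), computed via log1p for accuracy near 0
    return np.log1p(np.exp(x))

def softplus_grad(x):
    # d/dx ln(1 + e^x) = e^x / (1 + e^x) = sigmoid(x)
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-5.0, 5.0, 101)

# Finite-difference check that the analytic softplus gradient matches
eps = 1e-6
fd = (softplus(xs + eps) - softplus(xs - eps)) / (2 * eps)
print(np.max(np.abs(fd - softplus_grad(xs))))  # should be close to 0
```

Plotting each function against its derivative (e.g., `plt.plot(xs, leaky_relu(xs))` with matplotlib) covers the plotting part of the question.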

**NumPy may be used in the following questions. You need to import numpy here.**

```python
import numpy as np
```

**Question 1.2** Assume that we feed a data point $x$ with a ground-truth label $y = 2$ to the feed-forward neural network with the ReLU activation function as shown in the following figure.

**(a)** What is the numerical value of the latent representation $h_1(x)$? [1 point]

**(b)** What is the numerical value of the latent representation $h_2(x)$? [1 point]

**(c)** What is the numerical value of the logit $h_3(x)$? [1 point]

**(d)** What are the corresponding prediction probabilities $p(x)$? [1 point]

**(e)** What is the cross-entropy loss caused by the feed-forward neural network at $(x, y)$? Recall that $y = 2$. [1 point]

**(f)** Assume that we are applying the label smoothing technique (see the link to the main paper from Geoff Hinton) with $\alpha = 0.1$. What is the relevant loss caused by the feed-forward neural network at $(x, y)$? [1 point]
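Since the network figure is not reproduced here, the logits below are hypothetical placeholders; what carries over to parts (d)–(f) is the softmax, the cross-entropy, and the standard label-smoothing target $q_k = (1-\alpha)\,\mathbb{1}[k = y] + \alpha/K$. A minimal NumPy sketch, assuming those formulas:

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.0, 2.0, 0.5])  # hypothetical h3(x); replace with your values
y = 2                               # ground-truth label (1-indexed, as in the question)
K = len(logits)

p = softmax(logits)

# (e) standard cross-entropy: -log p[y]
ce = float(-np.log(p[y - 1]))

# (f) label smoothing with alpha = 0.1: smoothed target q, then -sum q_k log p_k
alpha = 0.1
q = np.full(K, alpha / K)
q[y - 1] += 1.0 - alpha
ce_smooth = float(-(q * np.log(p)).sum())

print(p, ce, ce_smooth)
```

Swapping in the logits you computed from the figure gives the numerical answers for (d)–(f).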

**You need to show both formulas and numerical results to earn full marks. Although it is optional, it is great if you show your numpy code for your computation.**

**Question 1.3** Assume that we are constructing a multilayered feed-forward neural network for a classification problem with three classes, where the model parameters will be generated randomly using your student ID. The architecture of this network is ($3(\mathit{Input}) \to 4(\mathit{LeakyReLU}) \to 3(\mathit{Output})$) as shown in the following figure. Note that the LeakyReLU has the same formula as the one in Q1.1.

We feed a feature vector $x = [\,1 \;\; -1 \;\; 1.5\,]^{T}$ with ground-truth label $y = 3$ to the above network.

**You need to show formulas, numerical results, and your numpy code for your computation to earn full marks.**

```python
# Code to generate random weight matrices and biases W1, b1, W2, b2
import numpy as np

student_id = 1234  # insert your student ID here, e.g., 1234
np.random.seed(student_id)

W1 = np.random.rand(4, 3)
b1 = np.random.rand(4, 1)
W2 = np.random.rand(3, 4)
b2 = np.random.rand(3, 1)
```

**Forward propagation**

**(a)** What is the value of $\bar{h}_1(x)$? [1 point]

*Show your formula*

```python
# Show your code
```

**(b)** What is the value of $h_1(x)$? [1 point]

*Show your formula*

```python
# Show your code
```

**(c)** What is the predicted value $\hat{y}$? [1 point]

*Show your formula*

```python
# Show your code
```

**(d)** Suppose that we use the cross-entropy (CE) loss. What is the value of the CE loss $l$? [1 point]

*Show your formula*
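As a sanity check for parts (a)–(d), the whole forward pass can be sketched in NumPy as follows. This repeats the weight generation above so the snippet is self-contained; the LeakyReLU hidden layer and softmax output follow the question statement, and each intermediate value should be checked against your hand-derived formulas rather than taken as the answer:

```python
import numpy as np

student_id = 1234                    # replace with your student ID
np.random.seed(student_id)
W1 = np.random.rand(4, 3)
b1 = np.random.rand(4, 1)
W2 = np.random.rand(3, 4)
b2 = np.random.rand(3, 1)

def leaky_relu(x, alpha=0.01):
    return np.where(x < 0, alpha * x, x)

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

x = np.array([[1.0], [-1.0], [1.5]])  # feature vector from the question
y = 3                                  # ground-truth label (1-indexed)

h1_bar = W1 @ x + b1                 # (a) pre-activation \bar{h}_1(x)
h1 = leaky_relu(h1_bar)              # (b) hidden representation h_1(x)
logits = W2 @ h1 + b2                # output logits
p = softmax(logits)                  # prediction probabilities
y_hat = int(np.argmax(p)) + 1        # (c) predicted class, 1-indexed
ce = float(-np.log(p[y - 1, 0]))     # (d) cross-entropy loss

print(h1_bar.ravel(), h1.ravel(), p.ravel(), y_hat, ce)
```

The printed values depend on the seed, so your numbers will differ once you substitute your own student ID.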