EEE 498 ML with application to FPGAs

All homework is handed in online. Don't hand in pictures of the code files; hand in the actual code files so I can see whether they work. Hand in images of the outputs of your code (displays, plots, …) converted to PDF. If there are questions to be answered, answer them in PowerPoint, Word, or on paper and convert to PDF (it doesn't actually matter how you get to the PDF). Organize your work so that the answer to each problem can be easily identified. You will not get credit if your submission is poorly organized or if the grader cannot find the answers. You can use any language (Python, C, C++, MATLAB, …) to do the coding, though Python is likely the easiest, and the easiest to translate from the lectures. Be careful copying code from Office products into Spyder or other editors; some characters, such as single quotes, get changed. Sometimes invisible characters occur, causing an error on a line until you completely retype it.

Homework 1

1) What are the zeroth-, first-, and second-order conditions of optimality?

2) What are the zeroth- and first-order conditions of convexity?

3) What is the Least Squares cost function? Use the compact notation. Show that it is convex.

4) Use the dataset regressionprob1_train0.csv.

2x2 matrix inversion:

M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad M^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}

a) Using a matrix inversion (such as shown above, or MLR p. 108 eqn 5.17), find the weights and intercept to predict F from the other columns. Determine the residual squared, or R². Use compact notation as described in the lecture so that you get an intercept b (or w0). You can read the file in using pandas and create A in compact notation using this code:
import numpy as np
import pandas as pd

df = pd.read_csv('regressionprob1_train0.csv')
X = df.iloc[:,0:4].values
y = df['F'].values
ones_ = np.ones(len(y), float)
## compact notation
A = np.column_stack((ones_, X))

This Python code will calculate R²:

##
## unexplained variance squared,
## or R squared
## Y is the target value
## Yp is the predicted value
##
import numpy as np

def Rsquared(Y, Yp):
    V = Y - Yp
    Ymean = np.average(Y)
    totvar = np.sum((Y - Ymean)**2)
    unexpvar = np.sum(np.abs(V**2))
    R2 = 1 - unexpvar/totvar
    return R2

You can also use:

from sklearn.metrics import r2_score
R2 = r2_score(y, Yp)

b) Solve again using a linear solver such as numpy.linalg.solve and compare the results.

c) Now use the model trained in a) on the dataset regressionprob1_test0.csv. Determine the residual squared, or R². What can you conclude about your trained model from a) and b)?

5) Now write your own gradient descent code, and code the gradient, to find the weights that minimize the cost function in (3) using the training data in 4 (regressionprob1_train0.csv). (Hint: see Lecture 7 W1 slide 24, and Lecture 2 …)

Rough illustrative sketches for 4a), 4b), and 5) appear after the problems.
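A minimal sketch of the matrix-inversion approach in 4a), not a complete solution: it reuses the loading code above to build A and y, then evaluates the compact-notation normal equations w = (A^T A)^{-1} A^T y, with np.linalg.inv playing the role of the 2x2 inverse formula shown above.

import numpy as np
import pandas as pd
from sklearn.metrics import r2_score

df = pd.read_csv('regressionprob1_train0.csv')
X = df.iloc[:,0:4].values
y = df['F'].values
A = np.column_stack((np.ones(len(y), float), X))  ## compact notation

## normal equations: w = (A^T A)^{-1} A^T y
w = np.linalg.inv(A.T @ A) @ (A.T @ y)
Yp = A @ w          ## predicted values on the training data
print(w)            ## w[0] is the intercept b (or w0)
print(r2_score(y, Yp))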
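For 4b), a sketch of the linear-solver route: rather than forming an explicit inverse, hand the normal-equation system (A^T A) w = A^T y to numpy.linalg.solve and compare against the inverse-based weights. This assumes A, y, and w from the sketch above.

import numpy as np

## solve (A^T A) w = A^T y directly; usually better conditioned
## than forming np.linalg.inv(A.T @ A) explicitly
w_solve = np.linalg.solve(A.T @ A, A.T @ y)

## the two weight vectors should agree to numerical precision
print(np.allclose(w, w_solve))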
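For 5), a minimal gradient-descent sketch, assuming the compact-notation Least Squares cost g(w) = (1/N) ||A w − y||^2, whose gradient is ∇g(w) = (2/N) A^T (A w − y). The step size alpha and the iteration count are arbitrary placeholders, not values from the lecture slides, and A and y are assumed built as in the sketches above.

import numpy as np

def gradient_descent(A, y, alpha=1e-4, iters=100000):
    ## minimize g(w) = (1/N) * ||A w - y||^2 by fixed-step gradient descent
    N = len(y)
    w = np.zeros(A.shape[1])                    ## start from the zero vector
    for _ in range(iters):
        grad = (2.0 / N) * (A.T @ (A @ w - y))  ## gradient of the LS cost
        w = w - alpha * grad                    ## fixed-step update
    return w

w_gd = gradient_descent(A, y)
print(w_gd)   ## should approach the closed-form weights from 4a)/4b)

In practice the step size would need tuning to the scale of the data: too large a value makes the iteration diverge, too small a value makes convergence slow.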