Question

from matplotlib import cm
import matplotlib.pyplot as plt
import platform
import numpy as np
from mlxtend.data import loadlocal_mnist

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
import pickle

plt.rcParams.update({'font.size': 12})
plt.rcParams['figure.figsize'] = [8, 4]

X, y = loadlocal_mnist(images_path='train-images-idx3-ubyte', labels_path='train-labels-idx1-ubyte')

# Keeping only digits 0 and 1
index = y < 2
X = X[index]
y = y[index]

# Normalizing the data
X = X/X.max()

# Splitting to training and testing groups.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

def sigmoid(t):
    # Clamp very negative inputs so np.exp(-t) cannot overflow;
    # for those entries the sigmoid is effectively 0.
    reasonable = (t > -100).astype(float)
    t = t*reasonable
    s = 1/(1+np.exp(-t))
    s = s*reasonable
    return s

Nsamples,Nfeatures = X_train.shape
Nclasses = 2
a = np.random.randn(Nfeatures+1,Nclasses-1)
# Append a column of ones so the bias term is folded into the weight vector a.
Xtilde = np.concatenate((X_train,np.ones((Nsamples,1))),axis=1)
gamma = 1e-1  # learning rate (step size) for gradient descent
for iter in range(1500):
    
     # YOUR CODE HERE
    
    a = a - gamma*gradient 
    
    if(np.mod(iter,100)==0):
        print("Error = ",np.sum(error**2)/)
        fig,ax = plt.subplots(1,2)
        ax[0].plot(s[:,0:200].T)
        ax[0].plot(y_train[0:200])
        ax[0].set_title('True and predicted labels')
        ax[1].plot(error)
        ax[1].set_title('Prediction Errors')
        plt.show()
    
    
# Visualize the learned weights (bias excluded) as a 28x28 image.
plt.imshow(np.reshape(a[:-1],(28,28)))
plt.title("weights")

# Evaluate the learned weights on the held-out test set.
Nsamples,Nfeatures = X_test.shape
Xtilde = np.concatenate((X_test,np.ones((Nsamples,1))),axis=1).T
s_test = sigmoid(a.T@Xtilde)

error = (s_test - y_test).T
print(np.sum(error**2))
plt.figure()
plt.plot(s_test[:,400:700].T)
plt.plot(y_test[400:700])
plt.title("True and predicted labels")
plt.show()

plt.figure()
plt.plot(error)
plt.title("Error")

wrong_indices = np.where(np.abs(error[:,0])>0)[0]
Xwrong = X_test[wrong_indices]
fig, axs = plt.subplots(1, len(wrong_indices))

for i in range(len(wrong_indices)):
    axs[i].imshow(np.reshape(Xwrong[i],(28,28)))
    axs[i].set_title("Misclassified")

This code has only one part missing, marked # YOUR CODE HERE. I need help completing that part. Can someone please help me with it?

Expert Solution
Step 1

It appears that the missing part is the forward pass and gradient computation for binary logistic regression: inside the loop, the code needs to compute the predictions s from sigmoid, the prediction error against y_train, and the gradient of the loss with respect to the weight vector a, since the update a = a - gamma*gradient and the printing and plotting code that follow all rely on s, error, and gradient being defined.
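Here is a minimal sketch of those three lines, assuming the standard cross-entropy (log-loss) gradient for logistic regression and reusing the shapes already set up above (Xtilde of shape (Nsamples, Nfeatures+1), a of shape (Nfeatures+1, 1)). If the intended loss is instead the squared error that the loop prints, the gradient would pick up an extra s*(1-s) factor.

    # Forward pass: predicted probabilities, shape (1, Nsamples),
    # so the plotting code below can slice s[:, 0:200].
    s = sigmoid(a.T @ Xtilde.T)

    # Residuals between predictions and the true labels, shape (Nsamples, 1).
    error = (s - y_train).T

    # Gradient of the (assumed) cross-entropy loss with respect to a,
    # averaged over the training samples; shape (Nfeatures+1, 1) matches a.
    gradient = Xtilde.T @ error / Nsamples

With the existing update a = a - gamma*gradient and the periodic plotting left unchanged, these lines are the only addition needed inside the loop; the test-set code at the bottom already computes its own s_test and error in the same way.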
