You are asked to predict weather temperature. You only know the linear regression model, and luckily the given samples can be fitted with it. In the given dataset, the temperature value depends on both humidity and visibility. To execute the tasks, you are given skeleton code. Write your own code by modifying, updating, and inserting code as necessary to estimate a linear equation for the given dataset. Write your code in raw Python. You can use NumPy, pandas, etc.; however, you are not allowed to use any high-level API (such as TensorFlow, PyTorch, MXNet, etc.).


 

The get_data() function returns the data and splits it into training and test sets. Write a data_iter() function to create batch-wise data and return batches as needed during training.
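A minimal sketch of data_iter(), assuming X and y are NumPy arrays (or pandas objects accepted by np.asarray) with the same number of rows; it shuffles the indices once per pass and yields mini-batches:

import random
import numpy as np

def data_iter(batch_size, X, y):
    """Yield (X_batch, y_batch) pairs in random order."""
    X, y = np.asarray(X), np.asarray(y)
    num_examples = len(X)
    indices = list(range(num_examples))
    random.shuffle(indices)                      # read the examples in no particular order
    for i in range(0, num_examples, batch_size):
        batch_indices = indices[i:min(i + batch_size, num_examples)]
        yield X[batch_indices], y[batch_indices]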

 

You are fitting these data samples with a linear equation. Write a function create_model_parameter(mu, sigma, row, column) to create the parameters and initialize them with normally distributed random values; mu and sigma represent the mean and standard deviation, respectively.
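One possible create_model_parameter(), assuming the weights w (shape row x column) and the bias b are kept as separate NumPy arrays, both drawn from a normal distribution with mean mu and standard deviation sigma:

import numpy as np

def create_model_parameter(mu, sigma, row, column):
    """Return a weight matrix w and a bias b initialized from N(mu, sigma)."""
    w = np.random.normal(mu, sigma, size=(row, column))
    b = np.random.normal(mu, sigma, size=(1,))
    return w, b

# Example: two features (humidity, visibility), one output (temperature)
w, b = create_model_parameter(0.0, 0.01, 2, 1)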

 

Write the code for linear regression in the model() function of the skeleton code.
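A sketch of model(), assuming X has shape (batch_size, 2), w has shape (2, 1), and the bias b is broadcast over the batch:

import numpy as np

def model(X, w, b):
    """Linear regression: y_hat = X @ w + b."""
    return np.dot(X, w) + b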

 

Compute the loss using the squared_loss() function.
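A sketch of squared_loss(), assuming y_hat and y are NumPy arrays. Halving the squared error is an assumption (it removes the factor of 2 from the gradient); the skeleton does not fix the exact form:

def squared_loss(y_hat, y):
    """Per-example squared loss, 0.5 * (y_hat - y)^2."""
    return 0.5 * (y_hat - y.reshape(y_hat.shape)) ** 2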

 

Compute the gradient of the loss with respect to each parameter of your model using the gradient() function.
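Since no autodiff framework is allowed, the gradients have to be written out analytically. For y_hat = Xw + b with the halved squared loss above, dL/dw = X^T (y_hat - y) and dL/db = sum(y_hat - y). The sketch below therefore departs from the skeleton's gradient(loss, params) signature and takes the batch and predictions directly, which is an assumption on my part:

import numpy as np

def gradient(X, y, y_hat):
    """Analytical gradients of the summed (halved) squared loss for linear regression."""
    err = y_hat - y.reshape(y_hat.shape)   # shape (batch_size, 1)
    grad_w = np.dot(X.T, err)              # dL/dw, shape (num_features, 1)
    grad_b = err.sum(axis=0)               # dL/db, shape (1,)
    return grad_w, grad_b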

 

Update your model parameters using the sgd() function.
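A sketch of sgd() that updates each parameter in place, dividing the summed batch gradient by batch_size as in the skeleton's example comment:

def sgd(params, grads, lr, batch_size):
    """Minibatch SGD: theta <- theta - lr * grad / batch_size."""
    for param, grad in zip(params, grads):
        param -= lr * grad / batch_size    # in-place update keeps the caller's arrays current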

 

Write your train() function to run your linear regression over all the given samples.
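One way train() could tie the sketches above together; batch_size is added as an explicit argument (a deviation from the skeleton signature), and the hyperparameters in the usage line are placeholders:

import numpy as np

def train(lr, num_epochs, batch_size, X, y):
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    w, b = create_model_parameter(0.0, 0.01, X.shape[1], 1)
    epoch_losses = []
    for epoch in range(num_epochs):
        for X_batch, y_batch in data_iter(batch_size, X, y):
            y_hat = model(X_batch, w, b)
            grad_w, grad_b = gradient(X_batch, y_batch, y_hat)
            sgd([w, b], [grad_w, grad_b], lr, batch_size)
        train_l = squared_loss(model(X, w, b), y)      # loss over the full training set
        epoch_losses.append(float(train_l.mean()))
        print(f'epoch {epoch + 1}, loss {epoch_losses[-1]:f}')
    return w, b, epoch_losses

# Example usage (placeholder hyperparameters):
# w, b, losses = train(lr=0.05, num_epochs=20, batch_size=32, X=X_train, y=y_train)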

 

Draw a single figure of training loss vs. number of epochs for three different batch sizes. Write your own function by modifying draw_loss(). Choose batch sizes that are small, large, and exact, and explain the effect of batch size on the training loss.
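A sketch of a modified draw_loss() that overlays the per-epoch loss curves for several batch sizes on one figure; the batch sizes in the commented example are placeholders:

import matplotlib.pyplot as plt

def draw_loss(num_epochs, losses_by_batch_size):
    """losses_by_batch_size maps each batch size to its list of per-epoch training losses."""
    epochs = range(1, num_epochs + 1)
    for batch_size, losses in losses_by_batch_size.items():
        plt.plot(epochs, losses, label=f'batch size = {batch_size}')
    plt.xlabel('epoch')
    plt.ylabel('training loss')
    plt.legend()
    plt.show()

# Example (placeholder batch sizes: small, large, full batch):
# curves = {bs: train(0.05, 20, bs, X_train, y_train)[2] for bs in (8, 1024, len(X_train))}
# draw_loss(20, curves)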

 

Skeleton code: 

import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
datafile = "cleaned_data/weather_data.csv"
def get_data(filename):
    df = pd.read_csv(filename)
    X_ = df[["Humidity","Visibility (km)"]]
    Y_ = df[["Temperature (C)"]]
    
    # Splitting data into train and test sets
    X_train, X_test, y_train, y_test = train_test_split(X_, Y_, test_size=0.25, random_state=42)

   
    
    return X_train, X_test, y_train, y_test


X_train, X_test, y_train, y_test = get_data(datafile)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
def data_iter(batch_size, X, y):
    num_examples = len(X)
    indices = list(range(num_examples))
    # The examples are read at random, in no particular order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        # write your code here to return batch-wise X, y

batch_size = #define your batch size

for X, y in data_iter(batch_size, X_train, y_train):
    print(X, '\n', y)
    break
def create_model_parameter(mu, sigma, row, column):
    # create and initialize model parameters with normal random values
    # write your code here
    return

def model(X, w, b):
    # write your code here for the linear regression model
    return

def squared_loss(y_hat, y):  #@save
    """Squared loss."""
    # write your code here for loss function
    return

def gradient(loss,params):
    # compute gradient of the loss function with respect to params
    
    return 
    
def sgd(params, grads, lr, batch_size):  #@save
    """Minibatch stochastic gradient descent."""
    #write your code for updating your parameter using gradient descent algorithm
    #Example: theta = theta - (lr * grad)/batch_size
def train(lr, num_epochs, X, y):
    # write your own code and modify the below code as needed
    
    for epoch in range(num_epochs):
        for X_batch, y_batch in data_iter(batch_size, X, y):
            # write your training step here (forward pass, loss, gradient, parameter update)
            pass
        train_l = squared_loss(model(X, w, b), y)
        print(f'epoch {epoch + 1}, loss {float(np.mean(train_l)):f}')
    
    return epoch, train_l
def draw_loss(num_epochs, loss):
    plt.plot(range(1, num_epochs + 1), loss)
    plt.show()
def test(X):
    # write your own code 
    #predict temperature for the given humidity and visibility

 

Example of the data file (from a screenshot):

 

This is just what the data looks like; the actual file has 96,430 lines.

Please write the code in a Jupyter notebook.

weather_data (sample rows recovered from the screenshot)

Humidity  Visibility (km)  Temperature (C)
0.89      0.983            0.5234860409245300
0.86      0.983            0.5210835887664650
0.89      0.929            0.5531438985999500
0.83      0.983            0.5019468146798110
0.83      0.983            0.5173556457625720
0.85      0.929            0.5193438820313150
0.95      0.620            0.49565073316212400
0.89      0.620            0.510645348355563
0.82      0.620            0.5746831248446690
0.72      0.620            0.618672852290614
0.67      0.696            0.6521414961477920
0.54      0.711            0.6689586612542460