Deep Learning: Revisiting Gradient Descent + Linear Regression

Posted on 2023-04-19 12:47:29 by 林每天都要努力

Gradient Descent

Gradient descent operates on the model's parameters, namely the weight w and the bias b, searching for the parameter values that make the model's loss smallest.

The loss is a function of the input, the target output, the weight, and the bias: loss = (y - (wx + b))^2. The smaller the loss, the closer the prediction wx + b is to the target y.
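
To make the update concrete, here is a minimal sketch of a single gradient-descent step on one data point. The values (x, y) = (1.0, 2.0), the zero initialization of w and b, and the learning rate 0.01 are arbitrary choices for illustration, not values from the post.

x, y = 1.0, 2.0        # one training example (assumed)
w, b = 0.0, 0.0        # assumed initial parameters
lr = 0.01              # assumed learning rate

loss = (y - (w * x + b)) ** 2      # (2 - 0)^2 = 4.0
dw = -2 * x * (y - (w * x + b))    # dloss/dw = -4.0
db = -2 * (y - (w * x + b))        # dloss/db = -4.0
w -= lr * dw                       # w: 0.0 -> 0.04
b -= lr * db                       # b: 0.0 -> 0.04
print(loss, w, b)                  # 4.0 0.04 0.04

Each step nudges w and b a little in the direction that lowers the squared error.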

My own question: if the training set gets larger, does recognition accuracy increase or decrease?

Linear Regression

Linear regression here means fitting the line by running gradient descent on the loss L = (wx + b - y)^2.
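
Differentiating L with respect to each parameter gives the per-sample gradients used in the code below: dL/dw = -2x(y - (wx + b)) and dL/db = -2(y - (wx + b)). Averaging these over all N points and stepping opposite to the gradient is exactly what step_gradient implements.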

Linear_Optimize.py implements the loss function and the gradient update:

import numpy as np

# Mean squared error of the line y = w*x + b over all points
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        totalError += (y - (w * x + b)) ** 2
    return totalError / float(len(points))

# One gradient-descent step: average the gradients over all points,
# then move b and w against the gradient
def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # Per-point gradients: dL/db = -2(y - (wx + b)), dL/dw = -2x(y - (wx + b))
        b_gradient += -(2 / N) * (y - ((w_current * x) + b_current))
        w_gradient += -(2 / N) * x * (y - ((w_current * x) + b_current))
    new_b = b_current - (learningRate * b_gradient)
    new_w = w_current - (learningRate * w_gradient)
    return [new_b, new_w]

# Iterate: run num_iterations gradient-descent steps from the starting parameters
def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]
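
As a quick sanity check, the module can be exercised on synthetic points drawn from a known line. The slope 1.5, intercept 0.5, noise scale, learning rate, and iteration count below are assumptions for this sketch, not values from the original post.

import numpy as np
import Linear_Optimize as LO

# Sample 100 points from y = 1.5x + 0.5 with small Gaussian noise (assumed values)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 1.5 * x + 0.5 + rng.normal(scale=0.1, size=100)
points = np.column_stack([x, y])

# The recovered w and b should land close to 1.5 and 0.5
b, w = LO.gradient_descent_runner(points, 0, 0, 0.01, 5000)
print(w, b)
print(LO.compute_error_for_line_given_points(b, w, points))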

The main script loads the data and runs the optimizer:

import numpy as np
import Linear_Optimize as LO

def run():
    # data.csv holds one comma-separated x,y pair per line, no header
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.0001
    initial_b = 0
    initial_w = 0
    num_iterations = 1000
    print("Starting gradient descent at b={0}, w={1}, error={2}".format(
        initial_b, initial_w,
        LO.compute_error_for_line_given_points(initial_b, initial_w, points)))
    print("Running...")
    [b, w] = LO.gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b={1}, w={2}, error={3}".format(
        num_iterations, b, w,
        LO.compute_error_for_line_given_points(b, w, points)))


if __name__ == '__main__':
    run()
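
np.genfromtxt expects data.csv to contain one comma-separated x,y pair per line with no header. If you need such a file to test with, a sketch like the one below will produce it; the line y = 2x + 1, the noise scale, and the point count are arbitrary assumptions.

import numpy as np

# Write 100 synthetic (x, y) pairs from y = 2x + 1 plus noise to data.csv,
# in the two-column, headerless format the script above reads
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(scale=0.5, size=100)
np.savetxt("data.csv", np.column_stack([x, y]), delimiter=",")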