[Machine Learning] Multiple Linear Regression

Published 2024-01-12 10:43:48 · Author: 漫舞八月 (Mount256)

Multiple linear regression model

  • The multiple linear regression model:

\[\begin{aligned} f_{\vec{w}, b}(\vec{x}) &= \vec{w} \cdot \vec{x} + b \\ &= w_1x_1 + w_2x_2 + ... + w_nx_n + b \\ &= \sum_{j=1}^{n} w_jx_j + b \end{aligned} \]

where:

  • \(\vec{w} = (w_1, w_2, ..., w_n)\) is the weight vector, with \(n\) the number of features (the vector dimension)
  • \(b\) is the bias
  • \(\vec{x} = (x_1, x_2, ..., x_n)\) is the feature vector, also of dimension \(n\)
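
A minimal NumPy sketch of this prediction function, assuming `w` and `x` are 1-D arrays of length \(n\) and `b` is a scalar (the function and variable names are illustrative, not from the original post):

```python
import numpy as np

def predict(x, w, b):
    """Multiple linear regression prediction: f_{w,b}(x) = w . x + b."""
    return np.dot(w, x) + b

# Example usage with made-up numbers (n = 3 features)
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])
b = 4.0
print(predict(x, w, b))  # 0.5*1 - 1*2 + 2*3 + 4 = 8.5
```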

Loss/cost function: mean squared error

  • One training example: features \(\vec{x}^{(i)} = (x_1^{(i)}, x_2^{(i)}, ..., x_n^{(i)})\) with target \(y^{(i)}\)
  • Total number of training examples: \(m\)
  • The loss/cost function (a NumPy sketch follows the formula):

\[\begin{aligned} J(\vec{w}, b) &= \frac{1}{2m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}]^2 \\ &= \frac{1}{2m} \sum^{m}_{i=1} [\vec{w} \cdot \vec{x}^{(i)} + b - y^{(i)}]^2 \end{aligned} \]
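
A corresponding NumPy sketch of the cost, assuming `X` is an \(m \times n\) matrix whose rows are the training examples and `y` is a length-\(m\) target vector (names are illustrative):

```python
import numpy as np

def compute_cost(X, y, w, b):
    """Mean squared error cost J(w, b) = (1/2m) * sum((X @ w + b - y)^2)."""
    m = X.shape[0]
    errors = X @ w + b - y          # residuals f_{w,b}(x^(i)) - y^(i)
    return np.sum(errors ** 2) / (2 * m)
```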

Batch gradient descent algorithm

  • \(\alpha\): the learning rate, which controls the step size of each gradient-descent update so that the algorithm reaches a local minimum of the cost function (not necessarily the global minimum!). If \(\alpha\) is too small, gradient descent is very slow; if \(\alpha\) is too large, the iterations may fail to converge.
  • Batch gradient descent:

\[\begin{aligned} \text{repeat} \ \{ \\ & tmp\_w_1 = w_1 - \alpha \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_1^{(i)} \\ & tmp\_w_2 = w_2 - \alpha \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_2^{(i)} \\ & \dots \\ & tmp\_w_n = w_n - \alpha \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_n^{(i)} \\ & tmp\_b = b - \alpha \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] \\ & \text{simultaneously update all parameters} \\ \} \ & \text{until convergence} \end{aligned} \]

  • Checking whether gradient descent has converged: the cost \(J(\vec{w}, b)\) should decrease steadily as the number of iterations grows. Pick a small threshold, e.g. \(\epsilon = 0.001\); if in one iteration the decrease of \(J(\vec{w}, b)\) is \(\leq \epsilon\), gradient descent is declared to have converged. A code sketch of the full loop, including this check, follows.
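
Putting the update rule and the convergence check together, a minimal NumPy sketch (the \(\epsilon = 0.001\) threshold follows the note above; the learning rate, initialization, and iteration cap are illustrative assumptions):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, epsilon=1e-3, max_iters=10_000):
    """Batch gradient descent for multiple linear regression.

    Stops when the per-iteration decrease of J(w, b) falls to epsilon or below,
    or after max_iters iterations.
    """
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    prev_cost = np.inf

    for _ in range(max_iters):
        errors = X @ w + b - y                 # f_{w,b}(x^(i)) - y^(i), shape (m,)
        grad_w = (X.T @ errors) / m            # (1/m) * sum(error_i * x_j^(i)) for each j
        grad_b = np.sum(errors) / m

        # Simultaneous update of all parameters
        w -= alpha * grad_w
        b -= alpha * grad_b

        cost = np.sum((X @ w + b - y) ** 2) / (2 * m)
        if prev_cost - cost <= epsilon:        # convergence check
            break
        prev_cost = cost

    return w, b
```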

Feature engineering

Construct new features from the original ones by combining or transforming them, for example multiplying two features into a new one, as in the sketch below.
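
A tiny illustration of the idea, using hypothetical house-lot features `frontage` and `depth` that are combined into a new `area` feature (the feature names and numbers are made up for illustration, not from the original post):

```python
import numpy as np

frontage = np.array([50.0, 60.0, 80.0])       # hypothetical original feature x1
depth = np.array([30.0, 40.0, 20.0])          # hypothetical original feature x2
area = frontage * depth                        # engineered feature x3 = x1 * x2

X = np.column_stack([frontage, depth, area])   # design matrix with the new feature added
```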

Feature scaling

  • The purpose of feature scaling: rescale the features so that they all take values in comparable ranges; this keeps the contours of the cost function from being overly elongated and lets gradient descent converge faster.


  • Mean normalization:

\[x_j^{(i)} := \frac{x_j^{(i)} - \mu_j}{\max (x_j) - \min (x_j)} \]

where \(\vec{x}^{(i)} = (x_1^{(i)}, x_2^{(i)}, ..., x_j^{(i)}, ..., x_n^{(i)})\), and \(\mu_j\) is the mean of feature \(j\) over all training examples, i.e.

\[\mu_j = \frac{1}{m} \sum_{i=1}^{m} x_j^{(i)} \]
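
A minimal sketch of mean normalization applied column-wise to an assumed \(m \times n\) design matrix `X` (names are illustrative):

```python
import numpy as np

def mean_normalize(X):
    """Scale each feature to roughly [-1, 1]: (x_j - mu_j) / (max_j - min_j)."""
    mu = X.mean(axis=0)                          # mu_j over the m training examples
    value_range = X.max(axis=0) - X.min(axis=0)  # max(x_j) - min(x_j) per feature
    return (X - mu) / value_range
```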

  • Z-score normalization:

\[x_j^{(i)} := \frac{x_j^{(i)} - \mu_j}{\sigma_j} \]

where \(\vec{x}^{(i)} = (x_1^{(i)}, x_2^{(i)}, ..., x_j^{(i)}, ..., x_n^{(i)})\), and \(\sigma_j\) is the standard deviation (std) of feature \(j\) over all training examples, i.e.

\[\sigma_j = \sqrt {\frac{1}{m} \sum_{i=1}^{m} [x_j^{(i)} - \mu_j]^2} \]
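
And the analogous sketch for z-score normalization, again column-wise over an assumed \(m \times n\) design matrix:

```python
import numpy as np

def zscore_normalize(X):
    """Scale each feature to zero mean and unit variance: (x_j - mu_j) / sigma_j."""
    mu = X.mean(axis=0)       # mu_j over the m training examples
    sigma = X.std(axis=0)     # sigma_j over the m training examples
    return (X - mu) / sigma
```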

Regularized linear regression

  • The purpose of regularization: to mitigate overfitting (which can also be reduced by collecting more training data).
  • The loss/cost function (only the weights \(w\) are regularized, not \(b\); a sketch follows the formula):

\[\begin{aligned} J(\vec{w}, b) &= \frac{1}{2m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}]^2 + \frac{\lambda}{2m} \sum^{n}_{j=1} w_j^2 \end{aligned} \]

Here the first term is the mean squared error and the second term is the regularization term, which shrinks the weights \(w_j\). The larger the chosen \(\lambda\), the smaller the resulting \(w_j\).
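
A sketch of the regularized cost along the lines of the earlier `compute_cost`; note that `b` is deliberately left out of the penalty (names are illustrative):

```python
import numpy as np

def compute_cost_regularized(X, y, w, b, lam):
    """MSE cost plus an L2 penalty (lambda / 2m) * sum(w_j^2); b is not penalized."""
    m = X.shape[0]
    errors = X @ w + b - y
    mse_term = np.sum(errors ** 2) / (2 * m)
    reg_term = lam * np.sum(w ** 2) / (2 * m)
    return mse_term + reg_term
```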

  • Gradient descent with regularization (sketched in code after the update rule):

\[\begin{aligned} \text{repeat} \ \{ \\ & tmp\_w_1 = w_1 - \alpha \left[ \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_1^{(i)} + \frac{\lambda}{m} w_1 \right] \\ & tmp\_w_2 = w_2 - \alpha \left[ \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_2^{(i)} + \frac{\lambda}{m} w_2 \right] \\ & \dots \\ & tmp\_w_n = w_n - \alpha \left[ \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_n^{(i)} + \frac{\lambda}{m} w_n \right] \\ & tmp\_b = b - \alpha \frac{1}{m} \sum^{m}_{i=1} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] \\ & \text{simultaneously update all parameters} \\ \} \ & \text{until convergence} \end{aligned} \]
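
The same gradient-descent sketch as before, with the extra \(\frac{\lambda}{m} w_j\) term added to the weight gradient while the bias gradient stays unchanged (variable names mirror the earlier sketch and are illustrative):

```python
import numpy as np

def gradient_descent_regularized(X, y, alpha=0.01, lam=1.0,
                                 epsilon=1e-3, max_iters=10_000):
    """Batch gradient descent for L2-regularized linear regression."""
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    prev_cost = np.inf

    for _ in range(max_iters):
        errors = X @ w + b - y
        grad_w = (X.T @ errors) / m + (lam / m) * w   # regularized weight gradient
        grad_b = np.sum(errors) / m                   # bias is not regularized

        # Simultaneous update of all parameters
        w -= alpha * grad_w
        b -= alpha * grad_b

        cost = (np.sum((X @ w + b - y) ** 2) + lam * np.sum(w ** 2)) / (2 * m)
        if prev_cost - cost <= epsilon:               # same convergence check as before
            break
        prev_cost = cost

    return w, b
```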