Support Vector Machine Algorithm

Introduction: the support vector machine algorithm

Google Colab notebook (optional)

from google.colab import drive
drive.mount("/content/drive")
Mounted at /content/drive

SMO: an efficient optimization algorithm

import random
def loadDataSet(fileName):
  # load a tab-separated file: the first two columns are features, the third is the label
  dataMat = []
  labelMat = []
  fr = open(fileName)
  for line in fr.readlines():
    lineArr = line.strip().split('\t')
    dataMat.append([float(lineArr[0]), float(lineArr[1])])
    labelMat.append(float(lineArr[2]))
  return dataMat, labelMat
def selectJrand(i, m):
  # pick a random index j in [0, m) that is different from i
  j = i
  while (j == i):
    j = int(random.uniform(0, m))
  return j
def clipAlpha(aj, H, L):
  # clip aj so that it stays inside the box constraint [L, H]
  if aj > H:
    aj = H
  if L > aj:
    aj = L
  return aj
dataArr, labelArr = loadDataSet('/content/drive/MyDrive/Colab Notebooks/MachineLearning/《机器学习实战》/支持向量机/支持向量机/testSet.txt')
labelArr
[-1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 1.0,
 -1.0,
 1.0,
 1.0,
 1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0,
 -1.0]
from numpy import *
def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
    dataMatrix = mat(dataMatIn); labelMat = mat(classLabels).transpose()
    b = 0; m,n = shape(dataMatrix)
    alphas = mat(zeros((m,1)))
    iter = 0
    while (iter < maxIter):
        alphaPairsChanged = 0
        for i in range(m):
            fXi = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
            Ei = fXi - float(labelMat[i])#if checks if an example violates KKT conditions
            if ((labelMat[i]*Ei < -toler) and (alphas[i] < C)) or ((labelMat[i]*Ei > toler) and (alphas[i] > 0)):
                j = selectJrand(i,m)
                fXj = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
                Ej = fXj - float(labelMat[j])
                alphaIold = alphas[i].copy(); alphaJold = alphas[j].copy();
                if (labelMat[i] != labelMat[j]):
                    L = max(0, alphas[j] - alphas[i])
                    H = min(C, C + alphas[j] - alphas[i])
                else:
                    L = max(0, alphas[j] + alphas[i] - C)
                    H = min(C, alphas[j] + alphas[i])
                if L==H:
                  print("L==H")
                  continue
                eta = 2.0 * dataMatrix[i,:]*dataMatrix[j,:].T - dataMatrix[i,:]*dataMatrix[i,:].T - dataMatrix[j,:]*dataMatrix[j,:].T
                if eta >= 0:
                  print("eta>=0")
                  continue
                alphas[j] -= labelMat[j]*(Ei - Ej)/eta
                alphas[j] = clipAlpha(alphas[j],H,L)
                if (abs(alphas[j] - alphaJold) < 0.00001):
                  print("j not moving enough")
                  continue
                alphas[i] += labelMat[j]*labelMat[i]*(alphaJold - alphas[j])#update i by the same amount as j
                                                                        #the update is in the opposite direction
                b1 = b - Ei- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[i,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[i,:]*dataMatrix[j,:].T
                b2 = b - Ej- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[j,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[j,:]*dataMatrix[j,:].T
                if (0 < alphas[i]) and (C > alphas[i]):
                  b = b1
                elif (0 < alphas[j]) and (C > alphas[j]):
                  b = b2
                else:
                  b = (b1 + b2)/2.0
                alphaPairsChanged += 1
                print("iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
        if (alphaPairsChanged == 0):
          iter += 1
        else: iter = 0
        print("iteration number: %d" % iter)
    return b,alphas

This is a simplified version of the SMO (Sequential Minimal Optimization) algorithm, used to train a support vector machine.


Input parameters:


  • dataMatIn: the feature matrix of the input data
  • classLabels: the class labels of the input data
  • C: the soft-margin parameter, which controls how heavily margin violations are penalized in the objective function
  • toler: the tolerance used when checking whether a sample violates the KKT conditions
  • maxIter: the maximum number of iterations

Outputs:

  • b: the bias (constant) term of the trained classifier
  • alphas: the Lagrange multipliers; the non-zero entries correspond to the support vectors

Main steps of the algorithm:

  1. Initialize the working variables: the size of the data matrix, the vector of Lagrange multipliers (all zeros), the bias b, and the iteration counter.
  2. Iterate until the multipliers stop changing: the counter only advances after a full pass that updates nothing, so the loop ends once maxIter consecutive passes make no change.
  3. For each sample i, compute its predicted value and error Ei, and check whether it violates the KKT conditions (the KKT conditions characterize the optimum of the SVM optimization problem).
  4. If the KKT conditions are violated, randomly select a second sample j and compute its predicted value and error Ej.
  5. Using the two class labels, compute L and H, the bounds that keep the updated multiplier inside the box constraint (a small worked example follows this list).
  6. Compute eta, the curvature of the objective along the update direction; if eta >= 0 the standard update does not apply, so skip this pair and move on.
  7. Update alphas[j] using (Ei - Ej)/eta and clip it to the interval [L, H].
  8. If alphas[j] barely moved, skip this pair and continue with the next sample.
  9. Update alphas[i] by the same amount in the opposite direction, then recompute the constant term b from the old and new alpha values and the two samples involved.
  10. Count how many pairs were changed during the pass; this count decides whether the iteration counter advances or is reset.
  11. Return the final constant term b and the vector of multipliers alphas.
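
To make the bound computation and clipping in steps 5-7 concrete, here is a tiny worked sketch. The numbers are purely illustrative (they are not taken from the data set above); it only exercises the L/H formulas and the clipAlpha() helper defined earlier:

# Illustrative values only: C = 0.6, the two labels differ (y_i != y_j),
# and the current multipliers are alpha_i = 0.1 and alpha_j = 0.5.
C = 0.6
alphaI, alphaJ = 0.1, 0.5
L = max(0, alphaJ - alphaI)        # lower bound, here 0.4
H = min(C, C + alphaJ - alphaI)    # upper bound, here 0.6
newAlphaJ = 0.75                   # pretend the unconstrained update produced this value
print(clipAlpha(newAlphaJ, H, L))  # prints 0.6: the value is clipped back to H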

Note: selectJrand() randomly picks the index of the second multiplier, and clipAlpha() clips a multiplier so that it stays within the interval [L, H].

b, alphas = smoSimple(dataArr, labelArr, 0.6, 0.001, 40)
<ipython-input-10-609e212d7149>:9: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  fXi = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
<ipython-input-10-609e212d7149>:10: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  Ei = fXi - float(labelMat[i])#if checks if an example violates KKT conditions
<ipython-input-10-609e212d7149>:13: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  fXj = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
<ipython-input-10-609e212d7149>:14: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  Ej = fXj - float(labelMat[j])


iter: 0 i:0, pairs changed 1
L==H
j not moving enough
L==H
L==H
L==H
L==H
L==H
……
j not moving enough
j not moving enough
iteration number: 40
b
matrix([[-3.82396091]])
alphas[alphas>0]
matrix([[0.09439001, 0.26843195, 0.0348491 , 0.32797286]])
shape(alphas[alphas>0])
(1, 4)
for i in range(100):
  if alphas[i] > 0:
    print(dataArr[i], labelArr[i])
[4.658191, 3.507396] -1.0
[3.457096, -0.082216] -1.0
[5.286862, -2.358286] 1.0
[6.080573, 0.418886] 1.0
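
As a quick sanity check on the returned values, b and alphas can be combined into a weight vector w = sum_i alphas[i] * y_i * x_i and used to score a training point. This is only a minimal sketch of the idea; the calcWs() function defined later in this section performs the same computation:

# Minimal sketch, assuming dataArr, labelArr, b and alphas are the objects computed above.
datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
m, n = shape(datMat)
w = zeros((n, 1))
for i in range(m):
    w += multiply(alphas[i] * labelMat[i], datMat[i, :].T)
score = datMat[0, :] * mat(w) + b   # decision value f(x_0) = w.T x_0 + b
print(sign(score), labelArr[0])     # the sign of the score should agree with the label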
import matplotlib.pyplot as plt
dataArr, labelArr = loadDataSet('/content/drive/MyDrive/Colab Notebooks/MachineLearning/《机器学习实战》/支持向量机/支持向量机/testSet.txt')
x = array(dataArr)[:, 0]
y = array(dataArr)[:, 1]
fig = plt.figure()
plt.scatter(x, y)
for i in range(100):
  if alphas[i] > 0:
    plt.scatter(dataArr[i][0], dataArr[i][1], color='red', s=20)
plt.show()

def kernelTrans(X, A, kTup): #calc the kernel or transform data to a higher dimensional space
    m,n = shape(X)
    K = mat(zeros((m,1)))
    if kTup[0]=='lin': K = X * A.T   #linear kernel
    elif kTup[0]=='rbf':
        for j in range(m):
            deltaRow = X[j,:] - A
            K[j] = deltaRow*deltaRow.T
        K = exp(K/(-1*kTup[1]**2)) #divide in NumPy is element-wise not matrix like Matlab
    else: raise NameError('Houston We Have a Problem -- \
    That Kernel is not recognized')
    return K

This function computes kernel values, i.e. it implicitly maps the data into a higher-dimensional space. Its inputs are a data set X, a single reference sample A, and a tuple kTup giving the kernel type and its parameter.


First, the function reads the number of rows and columns of X and creates an all-zero matrix K with m rows and 1 column.


It then branches on the kernel type. If the type is 'lin', the linear kernel is used: K is simply X multiplied by the transpose of A, i.e. the inner product of every sample with A.


If the type is 'rbf', the radial basis function (RBF) kernel is used: the function loops over every row of X, computes the squared Euclidean distance between that row and A, and stores it in K; it then divides every entry by the negative square of the kernel width kTup[1] and exponentiates, so that K[j] = exp(-||x_j - A||^2 / kTup[1]^2).


If the kernel type is neither 'lin' nor 'rbf', the function raises an error saying the kernel is not recognized.

Finally, it returns the computed matrix K.
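
As a minimal sketch of the RBF branch, the snippet below evaluates one column of the kernel matrix on a toy data set; both the data and the kernel width 1.3 are arbitrary illustrative choices:

# Toy example: one column of an RBF kernel matrix with an arbitrary width of 1.3.
Xtoy = mat([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
Kcol = kernelTrans(Xtoy, Xtoy[0, :], ('rbf', 1.3))
print(Kcol)   # entry j equals exp(-||x_j - x_0||**2 / 1.3**2), so entry 0 is 1.0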

class optStruct:
    def __init__(self,dataMatIn, classLabels, C, toler, kTup):  # Initialize the structure with the parameters
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m,1)))
        self.b = 0
        self.eCache = mat(zeros((self.m,2))) #first column is valid flag
        self.K = mat(zeros((self.m,self.m)))
        for i in range(self.m):
            self.K[:,i] = kernelTrans(self.X, self.X[i,:], kTup)

This code defines a class named optStruct, which bundles the variables shared by the optimization routines.


Its initializer __init__ takes five parameters: dataMatIn, classLabels, C, toler and kTup.


  • dataMatIn: the input data matrix
  • classLabels: the class labels
  • C: the constant that weights the penalty term in the objective function
  • toler: the tolerance used when checking the KKT conditions
  • kTup: a tuple giving the kernel type and its parameter

The initializer stores these arguments in member variables.


self.alphas is an m-by-1 matrix that holds the Lagrange multipliers.

self.b is the bias term of the classifier.

self.eCache is an m-by-2 matrix used as an error cache; the first column is a validity flag and the second holds the cached error.

self.K is an m-by-m matrix that stores the kernel values between all pairs of samples. A loop fills it in: for each i from 0 to self.m-1, the i-th row of self.X is passed to kernelTrans, and the resulting column of kernel values is stored in column i of self.K.
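
A minimal usage sketch, assuming the dataArr and labelArr loaded earlier and the same C and toler values used elsewhere in this section:

# Build the structure with a linear kernel and inspect the precomputed pieces.
oS = optStruct(mat(dataArr), mat(labelArr).transpose(), 0.6, 0.001, ('lin', 0))
print(shape(oS.K))        # (100, 100): K[i, j] holds the kernel value between samples i and j
print(shape(oS.alphas))   # (100, 1): every multiplier starts at zero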

def calcEk(oS, k):
    fXk = float(multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
    Ek = fXk - float(oS.labelMat[k])
    return Ek
def selectJ(i, oS, Ei):         #this is the second-choice heuristic, and calcs Ej
    maxK = -1; maxDeltaE = 0; Ej = 0
    oS.eCache[i] = [1,Ei]  #set valid #choose the alpha that gives the maximum delta E
    validEcacheList = nonzero(oS.eCache[:,0].A)[0]
    if (len(validEcacheList)) > 1:
        for k in validEcacheList:   #loop through valid Ecache values and find the one that maximizes delta E
            if k == i: continue #don't calc for i, waste of time
            Ek = calcEk(oS, k)
            deltaE = abs(Ei - Ek)
            if (deltaE > maxDeltaE):
                maxK = k; maxDeltaE = deltaE; Ej = Ek
        return maxK, Ej
    else:   #in this case (first time around) we don't have any valid eCache values
        j = selectJrand(i, oS.m)
        Ej = calcEk(oS, j)
    return j, Ej
def updateEk(oS, k):#after any alpha has changed update the new value in the cache
    Ek = calcEk(oS, k)
    oS.eCache[k] = [1,Ek]
def innerL(i, oS):
    Ei = calcEk(oS, i)
    if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
        j,Ej = selectJ(i, oS, Ei) #this has been changed from selectJrand
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if L==H:
          print("L==H")
          return 0
        eta = 2.0 * oS.K[i,j] - oS.K[i,i] - oS.K[j,j] #changed for kernel
        if eta >= 0:
          print("eta>=0")
          return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
        oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)
        updateEk(oS, j) #added this for the Ecache
        if (abs(oS.alphas[j] - alphaJold) < 0.00001):
          print("j not moving enough")
          return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])#update i by the same amount as j
        updateEk(oS, i) #added this for the Ecache                    #the update is in the opposite direction
        b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,i] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[i,j]
        b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,j]- oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[j,j]
        if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
        elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
        else: oS.b = (b1 + b2)/2.0
        return 1
    else: return 0
def smoP(dataMatIn, classLabels, C, toler, maxIter,kTup=('lin', 0)):    #full Platt SMO
    oS = optStruct(mat(dataMatIn),mat(classLabels).transpose(),C,toler, kTup)
    iter = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
        alphaPairsChanged = 0
        if entireSet:   #go over all
            for i in range(oS.m):
                alphaPairsChanged += innerL(i,oS)
                print("fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        else:#go over non-bound (railed) alphas
            nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerL(i,oS)
                print("non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        if entireSet: entireSet = False #toggle entire set loop
        elif (alphaPairsChanged == 0): entireSet = True
        print("iteration number: %d" % iter)
    return oS.b,oS.alphas
import matplotlib.pyplot as plt
dataArr, labelArr = loadDataSet('/content/drive/MyDrive/Colab Notebooks/MachineLearning/《机器学习实战》/支持向量机/支持向量机/testSet.txt')
b, alphas = smoP(dataArr, labelArr, 0.6, 0.001, 40)
x = array(dataArr)[:, 0]
y = array(dataArr)[:, 1]
fig = plt.figure()
plt.scatter(x, y)
for i in range(100):
  if alphas[i] > 0:
    plt.scatter(dataArr[i][0], dataArr[i][1], color='red', s=20)
plt.show()
<ipython-input-48-c1e41c4ea928>:2: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  fXk = float(multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
<ipython-input-48-c1e41c4ea928>:3: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
  Ek = fXk - float(oS.labelMat[k])


fullSet, iter: 0 i:0, pairs changed 1
fullSet, iter: 0 i:1, pairs changed 1
fullSet, iter: 0 i:2, pairs changed 2
fullSet, iter: 0 i:3, pairs changed 2
fullSet, iter: 0 i:4, pairs changed 3
fullSet, iter: 0 i:5, pairs changed 4
fullSet, iter: 0 i:6, pairs changed 4
fullSet, iter: 0 i:7, pairs changed 4
j not moving enough
fullSet, iter: 0 i:8, pairs changed 4
fullSet, iter: 0 i:9, pairs changed 4
j not moving enough
fullSet, iter: 0 i:10, pairs changed 4
fullSet, iter: 0 i:11, pairs changed 4
fullSet, iter: 0 i:12, pairs changed 4
fullSet, iter: 0 i:13, pairs changed 4
fullSet, iter: 0 i:14, pairs changed 4
fullSet, iter: 0 i:15, pairs changed 4
fullSet, iter: 0 i:16, pairs changed 4
fullSet, iter: 0 i:17, pairs changed 5
fullSet, iter: 0 i:18, pairs changed 6
fullSet, iter: 0 i:19, pairs changed 6
j not moving enough
fullSet, iter: 0 i:20, pairs changed 6
j not moving enough
fullSet, iter: 0 i:21, pairs changed 6
fullSet, iter: 0 i:22, pairs changed 6
fullSet, iter: 0 i:23, pairs changed 7
fullSet, iter: 0 i:24, pairs changed 7
j not moving enough
fullSet, iter: 0 i:25, pairs changed 7
L==H
fullSet, iter: 0 i:26, pairs changed 7
fullSet, iter: 0 i:27, pairs changed 7
fullSet, iter: 0 i:28, pairs changed 7
L==H
fullSet, iter: 0 i:29, pairs changed 7
fullSet, iter: 0 i:30, pairs changed 7
fullSet, iter: 0 i:31, pairs changed 7
fullSet, iter: 0 i:32, pairs changed 7
fullSet, iter: 0 i:33, pairs changed 7
fullSet, iter: 0 i:34, pairs changed 7
fullSet, iter: 0 i:35, pairs changed 7
fullSet, iter: 0 i:36, pairs changed 7
fullSet, iter: 0 i:37, pairs changed 7
fullSet, iter: 0 i:38, pairs changed 7
j not moving enough
fullSet, iter: 0 i:39, pairs changed 7
fullSet, iter: 0 i:40, pairs changed 7
fullSet, iter: 0 i:41, pairs changed 7
fullSet, iter: 0 i:42, pairs changed 7
fullSet, iter: 0 i:43, pairs changed 7
fullSet, iter: 0 i:44, pairs changed 7
fullSet, iter: 0 i:45, pairs changed 7
L==H
fullSet, iter: 0 i:46, pairs changed 7
fullSet, iter: 0 i:47, pairs changed 7
fullSet, iter: 0 i:48, pairs changed 7
fullSet, iter: 0 i:49, pairs changed 7
fullSet, iter: 0 i:50, pairs changed 7
fullSet, iter: 0 i:51, pairs changed 7
L==H
fullSet, iter: 0 i:52, pairs changed 7
fullSet, iter: 0 i:53, pairs changed 7
L==H
fullSet, iter: 0 i:54, pairs changed 7
L==H
fullSet, iter: 0 i:55, pairs changed 7
fullSet, iter: 0 i:56, pairs changed 7
L==H
fullSet, iter: 0 i:57, pairs changed 7
fullSet, iter: 0 i:58, pairs changed 7
fullSet, iter: 0 i:59, pairs changed 7
fullSet, iter: 0 i:60, pairs changed 7
fullSet, iter: 0 i:61, pairs changed 7
L==H
fullSet, iter: 0 i:62, pairs changed 7
fullSet, iter: 0 i:63, pairs changed 7
fullSet, iter: 0 i:64, pairs changed 7
fullSet, iter: 0 i:65, pairs changed 7
fullSet, iter: 0 i:66, pairs changed 7
fullSet, iter: 0 i:67, pairs changed 7
fullSet, iter: 0 i:68, pairs changed 7
L==H
fullSet, iter: 0 i:69, pairs changed 7
fullSet, iter: 0 i:70, pairs changed 7
fullSet, iter: 0 i:71, pairs changed 7
fullSet, iter: 0 i:72, pairs changed 7
fullSet, iter: 0 i:73, pairs changed 7
fullSet, iter: 0 i:74, pairs changed 7
fullSet, iter: 0 i:75, pairs changed 7
fullSet, iter: 0 i:76, pairs changed 7
fullSet, iter: 0 i:77, pairs changed 7
fullSet, iter: 0 i:78, pairs changed 7
L==H
fullSet, iter: 0 i:79, pairs changed 7
fullSet, iter: 0 i:80, pairs changed 7
fullSet, iter: 0 i:81, pairs changed 7
L==H
fullSet, iter: 0 i:82, pairs changed 7
fullSet, iter: 0 i:83, pairs changed 7
fullSet, iter: 0 i:84, pairs changed 7
fullSet, iter: 0 i:85, pairs changed 7
fullSet, iter: 0 i:86, pairs changed 7
fullSet, iter: 0 i:87, pairs changed 7
fullSet, iter: 0 i:88, pairs changed 7
fullSet, iter: 0 i:89, pairs changed 7
fullSet, iter: 0 i:90, pairs changed 7
fullSet, iter: 0 i:91, pairs changed 7
fullSet, iter: 0 i:92, pairs changed 7
fullSet, iter: 0 i:93, pairs changed 7
fullSet, iter: 0 i:94, pairs changed 7
fullSet, iter: 0 i:95, pairs changed 7
fullSet, iter: 0 i:96, pairs changed 7
fullSet, iter: 0 i:97, pairs changed 7
fullSet, iter: 0 i:98, pairs changed 7
fullSet, iter: 0 i:99, pairs changed 7
iteration number: 1
j not moving enough
non-bound, iter: 1 i:0, pairs changed 0
non-bound, iter: 1 i:4, pairs changed 1
non-bound, iter: 1 i:5, pairs changed 2
j not moving enough
non-bound, iter: 1 i:17, pairs changed 2
non-bound, iter: 1 i:18, pairs changed 3
non-bound, iter: 1 i:23, pairs changed 4
iteration number: 2
j not moving enough
non-bound, iter: 2 i:0, pairs changed 0
j not moving enough
non-bound, iter: 2 i:5, pairs changed 0
j not moving enough
non-bound, iter: 2 i:17, pairs changed 0
non-bound, iter: 2 i:23, pairs changed 0
j not moving enough
non-bound, iter: 2 i:52, pairs changed 0
non-bound, iter: 2 i:55, pairs changed 0
iteration number: 3
j not moving enough
fullSet, iter: 3 i:0, pairs changed 0
fullSet, iter: 3 i:1, pairs changed 0
fullSet, iter: 3 i:2, pairs changed 0
fullSet, iter: 3 i:3, pairs changed 0
fullSet, iter: 3 i:4, pairs changed 0
j not moving enough
fullSet, iter: 3 i:5, pairs changed 0
fullSet, iter: 3 i:6, pairs changed 0
fullSet, iter: 3 i:7, pairs changed 0
fullSet, iter: 3 i:8, pairs changed 0
fullSet, iter: 3 i:9, pairs changed 0
fullSet, iter: 3 i:10, pairs changed 0
fullSet, iter: 3 i:11, pairs changed 0
fullSet, iter: 3 i:12, pairs changed 0
fullSet, iter: 3 i:13, pairs changed 0
fullSet, iter: 3 i:14, pairs changed 0
fullSet, iter: 3 i:15, pairs changed 0
fullSet, iter: 3 i:16, pairs changed 0
j not moving enough
fullSet, iter: 3 i:17, pairs changed 0
fullSet, iter: 3 i:18, pairs changed 0
fullSet, iter: 3 i:19, pairs changed 0
fullSet, iter: 3 i:20, pairs changed 0
fullSet, iter: 3 i:21, pairs changed 0
fullSet, iter: 3 i:22, pairs changed 0
fullSet, iter: 3 i:23, pairs changed 0
fullSet, iter: 3 i:24, pairs changed 0
fullSet, iter: 3 i:25, pairs changed 0
fullSet, iter: 3 i:26, pairs changed 0
fullSet, iter: 3 i:27, pairs changed 0
fullSet, iter: 3 i:28, pairs changed 0
j not moving enough
fullSet, iter: 3 i:29, pairs changed 0
fullSet, iter: 3 i:30, pairs changed 0
fullSet, iter: 3 i:31, pairs changed 0
fullSet, iter: 3 i:32, pairs changed 0
fullSet, iter: 3 i:33, pairs changed 0
fullSet, iter: 3 i:34, pairs changed 0
fullSet, iter: 3 i:35, pairs changed 0
fullSet, iter: 3 i:36, pairs changed 0
fullSet, iter: 3 i:37, pairs changed 0
fullSet, iter: 3 i:38, pairs changed 0
fullSet, iter: 3 i:39, pairs changed 0
fullSet, iter: 3 i:40, pairs changed 0
fullSet, iter: 3 i:41, pairs changed 0
fullSet, iter: 3 i:42, pairs changed 0
fullSet, iter: 3 i:43, pairs changed 0
fullSet, iter: 3 i:44, pairs changed 0
fullSet, iter: 3 i:45, pairs changed 0
fullSet, iter: 3 i:46, pairs changed 0
fullSet, iter: 3 i:47, pairs changed 0
fullSet, iter: 3 i:48, pairs changed 0
fullSet, iter: 3 i:49, pairs changed 0
fullSet, iter: 3 i:50, pairs changed 0
fullSet, iter: 3 i:51, pairs changed 0
j not moving enough
fullSet, iter: 3 i:52, pairs changed 0
fullSet, iter: 3 i:53, pairs changed 0
L==H
fullSet, iter: 3 i:54, pairs changed 0
fullSet, iter: 3 i:55, pairs changed 0
fullSet, iter: 3 i:56, pairs changed 0
fullSet, iter: 3 i:57, pairs changed 0
fullSet, iter: 3 i:58, pairs changed 0
fullSet, iter: 3 i:59, pairs changed 0
fullSet, iter: 3 i:60, pairs changed 0
fullSet, iter: 3 i:61, pairs changed 0
fullSet, iter: 3 i:62, pairs changed 0
fullSet, iter: 3 i:63, pairs changed 0
fullSet, iter: 3 i:64, pairs changed 0
fullSet, iter: 3 i:65, pairs changed 0
fullSet, iter: 3 i:66, pairs changed 0
fullSet, iter: 3 i:67, pairs changed 0
fullSet, iter: 3 i:68, pairs changed 0
fullSet, iter: 3 i:69, pairs changed 0
fullSet, iter: 3 i:70, pairs changed 0
fullSet, iter: 3 i:71, pairs changed 0
fullSet, iter: 3 i:72, pairs changed 0
fullSet, iter: 3 i:73, pairs changed 0
fullSet, iter: 3 i:74, pairs changed 0
fullSet, iter: 3 i:75, pairs changed 0
fullSet, iter: 3 i:76, pairs changed 0
fullSet, iter: 3 i:77, pairs changed 0
fullSet, iter: 3 i:78, pairs changed 0
fullSet, iter: 3 i:79, pairs changed 0
fullSet, iter: 3 i:80, pairs changed 0
fullSet, iter: 3 i:81, pairs changed 0
fullSet, iter: 3 i:82, pairs changed 0
fullSet, iter: 3 i:83, pairs changed 0
fullSet, iter: 3 i:84, pairs changed 0
fullSet, iter: 3 i:85, pairs changed 0
fullSet, iter: 3 i:86, pairs changed 0
fullSet, iter: 3 i:87, pairs changed 0
fullSet, iter: 3 i:88, pairs changed 0
fullSet, iter: 3 i:89, pairs changed 0
fullSet, iter: 3 i:90, pairs changed 0
fullSet, iter: 3 i:91, pairs changed 0
fullSet, iter: 3 i:92, pairs changed 0
fullSet, iter: 3 i:93, pairs changed 0
fullSet, iter: 3 i:94, pairs changed 0
fullSet, iter: 3 i:95, pairs changed 0
fullSet, iter: 3 i:96, pairs changed 0
fullSet, iter: 3 i:97, pairs changed 0
fullSet, iter: 3 i:98, pairs changed 0
fullSet, iter: 3 i:99, pairs changed 0
iteration number: 4

def calcWs(alphas,dataArr,classLabels):
    # recover the weight vector w = sum_i alphas[i] * y_i * x_i from the trained multipliers
    X = mat(dataArr); labelMat = mat(classLabels).transpose()
    m,n = shape(X)
    w = zeros((n,1))
    for i in range(m):
        w += multiply(alphas[i]*labelMat[i],X[i,:].T)
    return w
def testRbf(k1=1.3):
    dataArr,labelArr = loadDataSet('testSetRBF.txt')
    b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, ('rbf', k1)) #C=200 important
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd=nonzero(alphas.A>0)[0]
    sVs=datMat[svInd] #get matrix of only support vectors
    labelSV = labelMat[svInd];
    print("there are %d Support Vectors" % shape(sVs)[0])
    m,n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the training error rate is: %f" % (float(errorCount)/m))
    dataArr,labelArr = loadDataSet('testSetRBF2.txt')
    errorCount = 0
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    m,n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the test error rate is: %f" % (float(errorCount)/m))
def img2vector(filename):
    returnVect = zeros((1,1024))
    fr = open(filename)
    for i in range(32):
        lineStr = fr.readline()
        for j in range(32):
            returnVect[0,32*i+j] = int(lineStr[j])
    return returnVect
def loadImages(dirName):
    from os import listdir
    hwLabels = []
    trainingFileList = listdir(dirName)           #load the training set
    m = len(trainingFileList)
    trainingMat = zeros((m,1024))
    for i in range(m):
        fileNameStr = trainingFileList[i]
        fileStr = fileNameStr.split('.')[0]     #take off .txt
        classNumStr = int(fileStr.split('_')[0])
        if classNumStr == 9: hwLabels.append(-1)
        else: hwLabels.append(1)
        trainingMat[i,:] = img2vector('%s/%s' % (dirName, fileNameStr))
    return trainingMat, hwLabels
def testDigits(kTup=('rbf', 10)):
    dataArr,labelArr = loadImages('trainingDigits')
    b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, kTup)
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd=nonzero(alphas.A>0)[0]
    sVs=datMat[svInd]
    labelSV = labelMat[svInd];
    print("there are %d Support Vectors" % shape(sVs)[0])
    m,n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the training error rate is: %f" % (float(errorCount)/m))
    dataArr,labelArr = loadImages('testDigits')
    errorCount = 0
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    m,n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print("the test error rate is: %f" % (float(errorCount)/m))
class optStructK:
    def __init__(self,dataMatIn, classLabels, C, toler):  # Initialize the structure with the parameters
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m,1)))
        self.b = 0
        self.eCache = mat(zeros((self.m,2))) #first column is valid flag

def calcEkK(oS, k):
    fXk = float(multiply(oS.alphas,oS.labelMat).T*(oS.X*oS.X[k,:].T)) + oS.b
    Ek = fXk - float(oS.labelMat[k])
    return Ek

def selectJK(i, oS, Ei):         #this is the second-choice heuristic, and calcs Ej
    maxK = -1; maxDeltaE = 0; Ej = 0
    oS.eCache[i] = [1,Ei]  #set valid #choose the alpha that gives the maximum delta E
    validEcacheList = nonzero(oS.eCache[:,0].A)[0]
    if (len(validEcacheList)) > 1:
        for k in validEcacheList:   #loop through valid Ecache values and find the one that maximizes delta E
            if k == i: continue #don't calc for i, waste of time
            Ek = calcEkK(oS, k)
            deltaE = abs(Ei - Ek)
            if (deltaE > maxDeltaE):
                maxK = k; maxDeltaE = deltaE; Ej = Ek
        return maxK, Ej
    else:   #in this case (first time around) we don't have any valid eCache values
        j = selectJrand(i, oS.m)
        Ej = calcEkK(oS, j)
    return j, Ej

def updateEkK(oS, k):#after any alpha has changed update the new value in the cache
    Ek = calcEkK(oS, k)
    oS.eCache[k] = [1,Ek]

def innerLK(i, oS):
    Ei = calcEkK(oS, i)
    if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
        j,Ej = selectJK(i, oS, Ei) #this has been changed from selectJrand
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if L==H:
          print("L==H")
          return 0
        eta = 2.0 * oS.X[i,:]*oS.X[j,:].T - oS.X[i,:]*oS.X[i,:].T - oS.X[j,:]*oS.X[j,:].T
        if eta >= 0:
          print("eta>=0")
          return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
        oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)
        updateEkK(oS, j) #added this for the Ecache
        if (abs(oS.alphas[j] - alphaJold) < 0.00001):
          print("j not moving enough")
          return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])#update i by the same amount as j
        updateEkK(oS, i) #added this for the Ecache                    #the update is in the opposite direction
        b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[i,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[i,:]*oS.X[j,:].T
        b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[j,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[j,:]*oS.X[j,:].T
        if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
        elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
        else: oS.b = (b1 + b2)/2.0
        return 1
    else: return 0

def smoPK(dataMatIn, classLabels, C, toler, maxIter):    #full Platt SMO
    oS = optStructK(mat(dataMatIn),mat(classLabels).transpose(),C,toler)
    iter = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
        alphaPairsChanged = 0
        if entireSet:   #go over all
            for i in range(oS.m):
                alphaPairsChanged += innerLK(i,oS)
                print("fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        else:#go over non-bound (railed) alphas
            nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerLK(i,oS)
                print("non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        if entireSet: entireSet = False #toggle entire set loop
        elif (alphaPairsChanged == 0): entireSet = True
        print("iteration number: %d" % iter)
    return oS.b,oS.alphas