Machine Learning Experiments (9): Anomaly Detection with a Gaussian Distribution and OneClassSVM
Notice: All rights reserved. To repost, please contact the author and credit the source: http://blog.csdn.net/u013719780?viewmode=contents

In most data mining work, anomalous points are treated as "noise" and removed during preprocessing so they do not distort the overall analysis. In some cases, however, the anomalies themselves are the goal of the work, and they become the focus of the analysis.
In [1]:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from numpy import genfromtxt
from scipy.stats import multivariate_normal
from sklearn.metrics import precision_recall_fscore_support
In [2]:
def read_dataset(filePath, delimiter=','):
    return genfromtxt(filePath, delimiter=delimiter)

def feature_normalize(dataset):
    mu = np.mean(dataset, axis=0)
    sigma = np.std(dataset, axis=0)
    return (dataset - mu) / sigma

def estimateGaussian(dataset):
    # fit a multivariate Gaussian: per-feature mean and full covariance
    mu = np.mean(dataset, axis=0)
    sigma = np.cov(dataset.T)
    return mu, sigma

def multivariateGaussian(dataset, mu, sigma):
    p = multivariate_normal(mean=mu, cov=sigma)
    return p.pdf(dataset)

def selectThresholdByCV(probs, gt):
    # scan 1000 candidate thresholds and keep the one with the best F1 score
    best_epsilon = 0
    best_f1 = 0
    stepsize = (max(probs) - min(probs)) / 1000
    epsilons = np.arange(min(probs), max(probs), stepsize)
    for epsilon in np.nditer(epsilons):
        predictions = (probs < epsilon) * 1
        p, r, f, s = precision_recall_fscore_support(gt, predictions, average='binary')
        if f > best_f1:
            best_f1 = f
            best_epsilon = epsilon
    return best_f1, best_epsilon
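The functions above implement the full density-estimation pipeline: fit a Gaussian to the data, score every point by its density, and flag low-density points. If the server CSV files used below are not available, the same pipeline can be exercised on synthetic data. This is a minimal sketch (the cluster center, covariance, and injected outliers are made up for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.RandomState(0)
# 300 "normal" points around (14, 15), plus three injected outliers
normal_pts = rng.multivariate_normal([14, 15], [[1.5, 0.4], [0.4, 1.5]], 300)
outlier_pts = np.array([[22.0, 8.0], [5.0, 25.0], [25.0, 25.0]])
data = np.vstack([normal_pts, outlier_pts])

# fit a single multivariate Gaussian to all the data
mu = np.mean(data, axis=0)
sigma = np.cov(data.T)
p = multivariate_normal(mean=mu, cov=sigma).pdf(data)

# pick a threshold just above the three smallest densities,
# then flag every point whose density falls below it
epsilon = np.sort(p)[3]
flagged = np.where(p < epsilon)[0]
print(flagged)  # indices of the lowest-density points
```

In the real notebook the threshold is not hand-picked like this; it is chosen by `selectThresholdByCV`, which maximizes F1 against labeled cross-validation data.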
In [3]:
tr_data = read_dataset('tr_server_data.csv')  # training data
cv_data = read_dataset('cv_server_data.csv')  # cross-validation data, used to tune epsilon
gt_data = read_dataset('gt_server_data.csv')  # ground-truth labels for the CV set
n_training_samples = tr_data.shape[0]
n_dim = tr_data.shape[1]
print('Number of datapoints in training set: %d' % n_training_samples)
print('Number of dimensions/features: %d' % n_dim)
print(tr_data[1:5,:])
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.plot(tr_data[:,0],tr_data[:,1],'o')
plt.show()
In [4]:
mu, sigma = estimateGaussian(tr_data)
p = multivariateGaussian(tr_data,mu,sigma)
In [5]:
#selecting optimal value of epsilon using cross validation
p_cv = multivariateGaussian(cv_data,mu,sigma)
fscore, ep = selectThresholdByCV(p_cv,gt_data)
print(fscore, ep)
In [6]:
#selecting outlier datapoints
outliers = np.asarray(np.where(p < ep))
In [7]:
plt.figure()
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.plot(tr_data[:,0],tr_data[:,1],'bx')
plt.plot(tr_data[outliers,0],tr_data[outliers,1],'ro')
plt.show()
In [8]:
from sklearn import svm
plt.style.use('fivethirtyeight')
In [9]:
# use the same dataset
tr_data = read_dataset('tr_server_data.csv')  # training data
In [10]:
clf = svm.OneClassSVM(nu=0.05, kernel="rbf", gamma=0.1)
clf.fit(tr_data)
In [11]:
pred = clf.predict(tr_data)
# inliers are labeled 1, outliers are labeled -1
normal = tr_data[pred == 1]
abnormal = tr_data[pred == -1]
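OneClassSVM needs no labels or hand-tuned threshold: `nu` bounds the fraction of training points that may be treated as outliers, so with `nu=0.05` roughly 5% of the data ends up flagged. A self-contained sketch on synthetic data (the cluster parameters are made up for illustration):

```python
import numpy as np
from sklearn import svm

rng = np.random.RandomState(0)
train = rng.multivariate_normal([14, 15], [[1.5, 0.4], [0.4, 1.5]], 300)

# nu ~ upper bound on the fraction of training points labeled as outliers
clf = svm.OneClassSVM(nu=0.05, kernel="rbf", gamma=0.1)
clf.fit(train)

pred = clf.predict(train)       # +1 for inliers, -1 for outliers
print(np.mean(pred == -1))      # outlier fraction, roughly nu
```

Unlike the Gaussian approach, the RBF-kernel boundary need not be elliptical, which helps when the inlier region is not well described by a single Gaussian.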
In [12]:
plt.plot(normal[:,0],normal[:,1],'bx')
plt.plot(abnormal[:,0],abnormal[:,1],'ro')