Introduction

 The Machine Learning column records notes I have taken while studying Machine Learning, covering linear regression, logistic regression, Softmax regression, neural networks, SVMs, and more. The main study materials are Andrew Ng's Stanford course on Coursera, along with online courses and tutorials such as the UFLDL Tutorial and Stanford CS231n; I have also consulted a large amount of related material online (listed later).
 

Preface

 As you keep learning, grasping the basic concepts is very important: it keeps you from getting hopelessly stuck in details and lets you look at problems from a more macroscopic level. While studying and applying Machine Learning algorithms, whenever I hit a problem (not knowing how to build a model, how to derive a formula, and so on), I go back and review these basic concepts, and I always come away with something.

 This post mainly introduces some terms, definitions, and related basic concepts in machine learning / pattern recognition. As my work and study continue, I will keep collecting important terms and concepts and adding them here.
 
 

[Definition] Machine Learning

 The first sentence in the description of the Machine Learning course on Coursera is:
 Machine learning is the science of getting computers to act without being explicitly programmed.
 This sentence largely explains what the discipline of machine learning studies. Below is the definition given in Andrew Ng's lecture notes (Coursera):
 What is Machine Learning?
 Two definitions of Machine Learning are offered. Arthur Samuel described it as: the field of study that gives computers the ability to learn without being explicitly programmed. This is an older, informal definition.
 Tom Mitchell provides a more modern definition: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
Example: playing checkers.
 · E = the experience of playing many games of checkers
 · T = the task of playing checkers.
 · P = the probability that the program will win the next game.
 
 I have recently been reading Zhou Zhihua's new book "Machine Learning" (《机器学习》); here is a passage from it on the definition of machine learning:
 Machine learning is precisely such a discipline: it studies how to improve a system's own performance by computational means, using experience. In a computer system, "experience" usually exists in the form of "data", so the core subject of machine learning is algorithms that produce "models" from data on a computer, that is, "learning algorithms". Given a learning algorithm, we feed it experience data and it produces a model from that data; when a new situation arises, the model provides us with a corresponding judgment.

 If you have never been exposed to this field, the first read may well leave you impressed without quite understanding it O(∩_∩)O~.

 Below is the definition given by Wikipedia:
 Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] In 1959, Arthur Samuel defined machine learning as a “Field of study that gives computers the ability to learn without being explicitly programmed”.[2] Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.[3] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[4]:2 rather than following strictly static program instructions.
 Machine learning is closely related to and often overlaps with computational statistics; a discipline which also focuses in prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR),[5] search engines and computer vision. Machine learning is sometimes conflated with data mining,[6] where the latter sub-field focuses more on exploratory data analysis and is known as unsupervised learning.
 Reference: https://en.wikipedia.org/wiki/Machine_learning

[Definition] Supervised Learning

Wikipedia:
 Supervised learning is the machine learning task of inferring a function from labeled training data.[1] The training data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a “reasonable” way.
 Reference: https://en.wikipedia.org/wiki/Supervised_learning

Machine Learning (Andrew Ng):
 In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.
 Supervised learning problems are categorized into “regression” and “classification” problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.
Example:
 Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.
 We could turn this example into a classification problem by instead making our output about whether the house “sells for more or less than the asking price.” Here we are classifying the houses based on price into two discrete categories.
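
 To make the contrast between regression and classification more concrete, here is a minimal Python sketch using scikit-learn. The house sizes, prices, and "above asking price" labels below are numbers I made up purely for illustration; they are not from the course:

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical training data: house sizes (square meters) and prices (in $1000s)
sizes = np.array([[50], [70], [90], [110], [130], [150]])
prices = np.array([150, 200, 260, 310, 360, 420])

# Regression: map size to a continuous output (the price itself)
reg = LinearRegression().fit(sizes, prices)
print(reg.predict([[100]]))          # a continuous price estimate

# Classification: map size to a discrete label (sold above asking price or not)
above_asking = np.array([0, 0, 0, 1, 1, 1])   # invented binary labels
clf = LogisticRegression().fit(sizes, above_asking)
print(clf.predict([[100]]))          # a discrete class, 0 or 1

 The same input feature is used in both cases; what changes is whether the output we try to predict is a continuous value or a discrete category.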
 

[Definition] Unsupervised Learning

Wikipedia:
 Unsupervised learning is the machine learning task of inferring a function to describe hidden structure from unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution. This distinguishes unsupervised learning from supervised learning and reinforcement learning.
 Unsupervised learning is closely related to the problem of density estimation in statistics.[1] However unsupervised learning also encompasses many other techniques that seek to summarize and explain key features of the data. Many methods employed in unsupervised learning are based on data mining methods used to preprocess data.
 Reference: https://en.wikipedia.org/wiki/Unsupervised_learning
 
Machine Learning (Andrew Ng):
 Unsupervised learning, on the other hand, allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don’t necessarily know the effect of the variables.
 We can derive this structure by clustering the data based on relationships among the variables in the data.
 With unsupervised learning there is no feedback based on the prediction results, i.e., there is no teacher to correct you. It’s not just about clustering. For example, associative memory is unsupervised learning.
Example:
 Clustering: Take a collection of 1000 essays written on the US Economy, and find a way to automatically group these essays into a small number that are somehow similar or related by different variables, such as word frequency, sentence length, page count, and so on.
 Associative: Suppose a doctor over years of experience forms associations in his mind between patient characteristics and illnesses that they have. If a new patient shows up then based on this patient’s characteristics such as symptoms, family medical history, physical attributes, mental outlook, etc the doctor associates possible illness or illnesses based on what the doctor has seen before with similar patients. This is not the same as rule based reasoning as in expert systems. In this case we would like to estimate a mapping function from patient characteristics into illnesses.
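
 Here is a minimal Python sketch of the clustering example using scikit-learn. The four short strings stand in for the 1000 essays, and the choice of TF-IDF features plus k-means is my own for illustration; the course does not prescribe a particular algorithm here:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for the essays; each would normally be a full document
essays = [
    "inflation and interest rates in the US economy",
    "the federal reserve raised interest rates again",
    "unemployment and job growth in the labor market",
    "labor market data show slowing job growth",
]

# Represent each essay by word-frequency (TF-IDF) features: no labels are used
features = TfidfVectorizer().fit_transform(essays)

# Group the essays into 2 clusters based only on similarity of the features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)   # cluster index assigned to each essay, found without supervision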

[Definition] Semi-supervised Learning

Wikipedia:
 Semi-supervised learning is a class of supervised learning tasks and techniques that also make use of unlabeled data for training - typically a small amount of labeled data with a large amount of unlabeled data. Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy.
 The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render a fully labeled training set infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning.
 Reference: https://en.wikipedia.org/wiki/Semi-supervised_learning

Semi-Supervised Learning Methods - Chinese Journal of Computers (计算机学报):
 Semi-Supervised Learning (SSL) is a key research problem in pattern recognition and machine learning; it is a learning approach that combines supervised and unsupervised learning. It mainly addresses how to use a small number of labeled samples together with a large number of unlabeled samples for training and classification. It is usually divided into semi-supervised classification, semi-supervised regression, semi-supervised clustering, and semi-supervised dimensionality reduction algorithms.
 Reference: Liu Jianwei, Liu Yuan, Luo Xionglin, Semi-Supervised Learning Methods (半监督学习方法), Chinese Journal of Computers, 2015
 http://cjc.ict.ac.cn/online/onlinepaper/ljw02170-2015721213705.pdf
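
 As a small illustration of the semi-supervised setting, here is a minimal Python sketch using scikit-learn's SelfTrainingClassifier, where unlabeled samples are marked with -1. The synthetic data and the choice of self-training are my own for illustration; the paper above surveys a much broader family of methods:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic two-class data: 100 points, but only 10 of them keep their labels
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y_true = np.array([0] * 50 + [1] * 50)

y_partial = np.full(100, -1)            # -1 marks an unlabeled sample
labeled = np.r_[0:5, 50:55]             # keep only 5 labels per class
y_partial[labeled] = y_true[labeled]

# Self-training: fit on the few labeled points, then iteratively pseudo-label the rest
model = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
print(model.score(X, y_true))           # accuracy measured against the true labels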
 

Summary

 Definitions like these are clear and rigorous, but also rather rigid. The first time you read them you may be completely lost; come back after studying for a while and you may start to get a feel for them; come back again after studying more deeply and you will find them remarkably concise, clear, and elegant. That, more or less, is what learning is like.
 
 Andrew Ng's course on Coursera mainly covers:
 1) Supervised Learning, including Linear Regression, Logistic Regression, Neural Networks, and Support Vector Machines;
 2) Unsupervised Learning, including Clustering, Dimensionality Reduction, Anomaly Detection, and Recommender Systems.
 

References

Coursera - Machine Learning (Andrew Ng)
https://www.coursera.org/learn/machine-learning
Wikipedia
https://en.wikipedia.org/wiki/Main_Page

Note:
 Some of the more detailed references are given directly where they are cited, so they are not listed again here.
