
Machine Learning Overview

Machine learning is the science of teaching machines or computers to perform certain tasks without being given specific instructions.

Laxmi K Soni

7-Minute Read

What is Machine Learning?

Machine learning is fundamentally the science of teaching machines or computers to perform certain tasks without being given specific instructions. We want our machines to learn how to do something themselves, without us explaining it to them.

In order to do this, we oftentimes look at how the human brain works and try to design virtual brains that work in a similar manner.

Machine learning and artificial intelligence are different. Artificial intelligence is a broad field and every system that can learn and solve problems might be considered an AI. Machine learning is one specific approach to this broad field.

Supervised Learning

In machine learning we have different approaches or types. The two main approaches are supervised learning and unsupervised learning. So let's first talk about supervised learning.

Here, we give our model a set of inputs and also the corresponding outputs, which are the desired results. In this way, the model learns to match certain inputs to certain outputs and it adjusts its structure. It learns to make connections between what was put in and what the desired output is. It understands the correlation. When trained well enough, we can use the model to make predictions for inputs that we don’t know the results for.

Classic supervised learning algorithms are regression, classification and support vector machines.
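
As a quick illustration of this fit-and-predict workflow, here is a minimal sketch. It uses scikit-learn and a made-up toy dataset, neither of which is prescribed by the article; any supervised model follows the same pattern.

    import numpy as np
    from sklearn.linear_model import Perceptron

    # hypothetical labeled data: inputs X with their desired outputs y
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])               # output is 1 only when both features are 1

    model = Perceptron()
    model.fit(X, y)                          # the model adjusts itself to match inputs to outputs

    print(model.predict([[1, 1], [0, 1]]))   # predictions for new inputs -> [1 0]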

Unsupervised Learning

With unsupervised learning on the other hand, we don’t give our model the desired results while training. Not because we don’t want to but because we don’t know them. This approach is more like a kind of pattern recognition. We give our model a set of input data and it then has to look for patterns in it. Once the model is trained, we can put in new data and our model will need to make decisions. Since the model doesn’t get any information about classes or results, it has to work with similarities and patterns in the data and categorize or cluster it by itself.

Classic unsupervised learning algorithms are clustering, anomaly detection and some applications of neural networks.
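
To make the idea concrete, here is a minimal clustering sketch using scikit-learn's KMeans on made-up, unlabeled data (both the library and the data are assumptions for illustration; the article does not tie unsupervised learning to any particular tool).

    import numpy as np
    from sklearn.cluster import KMeans

    # hypothetical unlabeled data: two rough groups, but no class information is given
    X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
                  [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)                # cluster assigned to each training point
    print(kmeans.predict([[1.0, 1.0]]))  # cluster for a new, unseen point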

Reinforcement Learning

Then there is a third type of machine learning called reinforcement learning. Here we create some model with a random structure. Then we just observe what it does and reinforce or encourage it when we like what it does. Otherwise, we can also give some negative feedback. The more our model does what we want it to do, the more we reinforce it and the more "rewards" it gets. This might happen in the form of a number or a grade, which represents the so-called fitness of the model.

In this way, our model learns what is right and what is wrong. You can imagine it a little bit like natural selection and survival of the fittest. We can create 100 random models and kill the 50 models that perform worst. Then the remaining 50 reproduce and the same process repeats. These kinds of algorithms are called genetic or evolutionary algorithms, and they are the classic examples of this approach.
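
The select-and-reproduce loop described above can be sketched in a few lines of plain Python. The "models" here are just numbers and the fitness function is made up for illustration; a real genetic algorithm would evolve actual model parameters.

    import random

    TARGET = 42.0                                # hypothetical hidden optimum

    def fitness(candidate):
        return -abs(candidate - TARGET)          # higher (closer to zero) is better

    # start with 100 random "models" (plain numbers standing in for model parameters)
    population = [random.uniform(-100, 100) for _ in range(100)]

    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:50]              # keep the 50 fittest, discard the rest
        # the survivors "reproduce": each child is a slightly mutated copy of a parent
        children = [random.choice(survivors) + random.gauss(0, 1) for _ in range(50)]
        population = survivors + children

    print(max(population, key=fitness))          # ends up close to 42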

Deep Learning

Another term that is often confused with machine learning is deep learning. Deep learning, however, is just one area of machine learning, namely the one that works with neural networks. Neural networks are a very comprehensive and complex topic.

Fields of Application

Actually, it would be easier to list the areas in which machine learning doesn't get applied than all of its fields of application. Nevertheless, we will take a quick look at some of the major areas in which machine learning is applied.

· Research

· Autonomous Cars

· Spacecraft

· Economics and Finance

· Medical and Healthcare

· Physics, Biology, Chemistry

· Engineering

· Mathematics

· Robotics

· Education

· Forensics

· Police and Military

· Marketing

· Search Engines

· GPS and Pathfinding Systems

Machine Learning Frameworks

Five of the most popular frameworks are:

  • Torch

  • Theano

  • Caffe

  • Keras

  • TensorFlow

Torch

Torch was originally released in 2002 by Ronan Collobert for the purpose of numeric computing. Computations in Torch involve multidimensional arrays called tensors, which can be processed with regular vector or matrix operations. Over time, Torch acquired routines for building, training, and evaluating neural networks, and corporations like IBM and Facebook took a great deal of interest in it. The other frameworks, Theano, Caffe, Keras, and TensorFlow, can be interfaced through Python, which has emerged as the language of choice in the machine learning domain.

Theano

Theano was developed in 2010 by the machine learning group at the University of Montreal and released as a library for numeric computation. Like NumPy, Theano provides a wide range of Python modules for operating on multidimensional arrays, but Theano stores operations in a data structure called a graph, which it compiles into high-performance code. Theano supports symbolic differentiation, which makes it possible to find derivatives of functions automatically.
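
A minimal sketch of that graph-plus-symbolic-differentiation idea; the expression itself is made up for illustration.

    import theano
    import theano.tensor as T

    x = T.dscalar('x')                   # a symbolic scalar
    y = x ** 2 + 3 * x                   # operations are stored in a graph, not evaluated yet
    dy_dx = T.grad(y, x)                 # symbolic differentiation of the graph

    f = theano.function([x], dy_dx)      # compile the graph into fast code
    print(f(2.0))                        # derivative 2x + 3 at x = 2 -> 7.0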

Caffe

This framework was developed at UC Berkeley for building image recognition applications. Caffe is written in C++, and like Theano, it supports GPU acceleration.

Keras

Keras is a modular and simple machine learning framework that acts as an interface to other machine learning frameworks. Keras's simplicity stems from its small API and intuitive set of functions.
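
A minimal sketch of that small API, assuming the TensorFlow backend (with a standalone Keras install, the imports would come from keras directly); the layer sizes and data are made up.

    from tensorflow import keras
    from tensorflow.keras import layers

    # a tiny, hypothetical binary classifier: 4 input features -> 1 probability
    model = keras.Sequential([
        keras.Input(shape=(4,)),
        layers.Dense(8, activation='relu'),
        layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.summary()
    # model.fit(X_train, y_train, epochs=10)   # X_train / y_train would be your prepared data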

TensorFlow

The Google Brain team released TensorFlow in 2015; at the time of writing, the current version is 1.4. It is provided under the Apache 2.0 open source license, which means anyone is free to use it, modify it, and distribute modifications. Python is the primary interface to TensorFlow, but like Caffe, its core functionality is written in C++ for performance. TensorFlow applications can also be executed on the Google Cloud Platform (GCP).
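
Since the article discusses the 1.x line, here is a minimal sketch in the TensorFlow 1.x style, where a graph is built first and then run in a session (TensorFlow 2.x executes eagerly instead and drops tf.Session).

    import tensorflow as tf

    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b                    # nothing is computed yet, only the graph is built

    with tf.Session() as sess:   # TensorFlow 1.x API
        print(sess.run(c))       # 6.0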

Machine Learning Algorithms

Linear Regression

The easiest and most basic machine learning algorithm is linear regression. It is a supervised learning algorithm: an approach to modeling the relation between a response (dependent) variable Y and one or more independent or explanatory variables X1, X2, ...
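
A minimal sketch with scikit-learn and made-up data (both are assumptions; the article doesn't fix a library), fitting Y against a single explanatory variable X:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # hypothetical toy data: Y roughly follows 2*X + 1
    X = np.array([[1], [2], [3], [4], [5]])
    Y = np.array([3.1, 4.9, 7.2, 9.0, 11.1])

    reg = LinearRegression().fit(X, Y)
    print(reg.coef_, reg.intercept_)   # slope and intercept of the fitted line
    print(reg.predict([[6]]))          # prediction for an unseen input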

Classification

With linear regression, we predicted specific output values for given input values. Sometimes, however, we are not trying to predict outputs but to categorize or classify our elements. For this, we use classification algorithms.

K-Nearest-Neighbors

In the K-Nearest-Neighbors classifier, we assign the class of a new object based on its nearest neighbors. K specifies the number of neighbors to look at. For example, we could say that we only want to look at the single nearest neighbor, but we could also say that we want to factor in 100 neighbors.
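
A minimal sketch with scikit-learn's KNeighborsClassifier on made-up data, using K = 3:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    X = np.array([[1], [2], [3], [10], [11], [12]])   # hypothetical toy features
    y = np.array([0, 0, 0, 1, 1, 1])                  # two classes

    knn = KNeighborsClassifier(n_neighbors=3)          # K = 3 neighbors
    knn.fit(X, y)
    print(knn.predict([[4], [9]]))                     # the nearest neighbors decide -> [0 1]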

Naive-Bayes

Naive-Bayes is a classification algorithm. It estimates the probability of one or more outcomes and is used in decision making: the outcome with the highest probability is chosen over outcomes with lower probabilities.
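
A minimal sketch with scikit-learn's Gaussian Naive-Bayes variant on made-up data, showing the per-class probabilities it decides with:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    X = np.array([[1.0], [1.2], [0.9], [5.0], [5.2], [4.8]])   # hypothetical toy data
    y = np.array([0, 0, 0, 1, 1, 1])

    nb = GaussianNB().fit(X, y)
    print(nb.predict_proba([[1.1]]))   # probability of each class for the new input
    print(nb.predict([[1.1]]))         # the class with the higher probability -> [0]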

Logistic Regression

Another popular classification algorithm is called logistic regression. It looks at probabilities and determines how likely it is that a certain event happens (or that a certain class is the right one), given the input data. This is done by fitting a logistic (S-shaped) growth curve and splitting the data into two classes.
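
A minimal sketch with scikit-learn's LogisticRegression on made-up data; predict_proba exposes the probabilities read off the logistic curve:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # hypothetical toy data: small values belong to class 0, large values to class 1
    X = np.array([[1], [2], [3], [8], [9], [10]])
    y = np.array([0, 0, 0, 1, 1, 1])

    logreg = LogisticRegression().fit(X, y)
    print(logreg.predict_proba([[4]]))   # how likely each class is, given the input
    print(logreg.predict([[4]]))         # the more likely class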

Decision Trees

With decision tree classifiers, we construct a decision tree out of our training data and use it to predict the classes of new elements.
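
A minimal sketch with scikit-learn's DecisionTreeClassifier on made-up data; export_text prints the learned rules so you can see the tree that was constructed:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = np.array([[1], [2], [3], [8], [9], [10]])   # hypothetical toy data
    y = np.array([0, 0, 0, 1, 1, 1])

    tree = DecisionTreeClassifier().fit(X, y)
    print(export_text(tree))            # the learned decision rules, as plain text
    print(tree.predict([[7]]))          # class predicted for a new element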

Random Forest

This classifier is based on decision trees. What it does is create a forest of multiple decision trees. To classify a new object, all the various trees determine a class and the most frequent result gets chosen. This makes the result more accurate and it also prevents overfitting.
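
A minimal sketch with scikit-learn's RandomForestClassifier on made-up data; n_estimators sets how many trees vote:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    X = np.array([[1], [2], [3], [8], [9], [10]])   # hypothetical toy data
    y = np.array([0, 0, 0, 1, 1, 1])

    # a "forest" of 100 decision trees; the most frequent vote decides the class
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(forest.predict([[2.5], [9.5]]))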

Support Vector Machines

In machine learning classification, an SVM finds an optimal hyperplane that best segregates observations from different classes. A hyperplane is a plane of n-1 dimensions that separates the n-dimensional feature space of the observations into two subspaces. For example, the hyperplane in a two-dimensional feature space is a line, and a surface in a three-dimensional feature space. The optimal hyperplane is picked so that the distance from its nearest points in each subspace to the hyperplane is maximized. These nearest points are the so-called support vectors.
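
A minimal sketch with scikit-learn's SVC on made-up two-dimensional data; support_vectors_ holds the nearest points that define the separating hyperplane (a line in two dimensions):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[1, 1], [1, 2], [2, 1], [6, 6], [6, 7], [7, 6]])   # hypothetical toy data
    y = np.array([0, 0, 0, 1, 1, 1])

    svm = SVC(kernel='linear').fit(X, y)
    print(svm.support_vectors_)        # the nearest points on each side of the hyperplane
    print(svm.predict([[3, 3]]))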
