
Big Data, Machine Learning, Artificial Intelligence and more terms

The triumphant rise of the so-called "Algorithmic Economy" brings with it (as so often with digital trends) a great deal of confusion around "new" terms and buzzwords. Whether it is a work platform, a content management system, bots or newsletter tools: apparently everything today has to do with artificial intelligence, big data and machine learning.

But what do these new, often misleadingly used magic words actually mean? For some time now they have been permeating all fields of online marketing, just as Social, Big Data and Co. once did.

Artificial Intelligence

Artificial intelligence describes a particular set of modern statistical methods. The aim of artificial intelligence is to make decisions automatically, within milliseconds, on the basis of incomplete information. AI enables the processing of large, partially contradictory data sets. AI methods thus pave the way from "traditional", rather rigid decision rules to complex, networked decisions made by machines.

AI methods can therefore include the big picture, i.e. the totality of all available information, in their decisions. As a result, these methods have more in common with human intuition and intelligence than with the hard-wired programs of classical approaches.

Big data

Big Data is an umbrella term for modern data analysis techniques. We speak of big data when the data sources are too large, too fast or too poorly annotated for classical database technologies and analysis methods to cope with. For SMEs, it is usually the latter case (under-annotated data) that occurs. Methods from the fields of artificial intelligence and data mining are used to analyze such data. Entering the big data universe, however, raises a number of detailed questions.

Data Science

Data science is an interdisciplinary field concerned with the investigation, processing and use of data. It combines modern programming paradigms, statistical methodology and techniques from the field of artificial intelligence. Data science plays a central role in the Algorithmic Economy.

Machine learning

Machine learning is the process of finding internal connections in data without human intervention. It uses only the information contained in the available data and thus minimizes human bias. Machine learning often finds connections that remain hidden from human observers. The goal of machine learning is typically to train a so-called classifier (an algorithm) to predict future behavior (see also Unstructured data).
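
As a minimal sketch of this idea (assuming scikit-learn and a synthetic data set, since the text names no specific library or data), training a classifier could look like this:

```python
# A minimal machine-learning sketch with scikit-learn. The data set
# and model choice are illustrative assumptions, not prescribed here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data: 1000 samples, 20 features, 2 classes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The classifier learns connections in the data without human rules.
clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)

# Predict "future" (held-out) behavior and measure the quality.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```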

Data Mining

Data mining is an umbrella term for descriptive analytical methods. The goal of data mining is not to construct classifiers (algorithms) or regression formulas for predicting future behavior. Data mining is primarily about "digging" for hidden knowledge. Such findings are defined only by the data and are not distorted by human assumptions. Relationships are therefore often discovered that differ fundamentally from what was initially expected. One example of such unexpected insights is our Fluid Characters.
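
A toy sketch of such "digging" (assuming pandas and a made-up customer table; the columns, the hidden connection and the threshold are hypothetical):

```python
# A toy data-mining sketch: surface strong pairwise correlations
# without assuming in advance which relationships exist.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
age = rng.integers(18, 70, n)
visits = rng.poisson(5, n)
spend = 2.0 * visits + rng.normal(0, 1, n)  # hidden connection
df = pd.DataFrame({"age": age, "visits": visits, "spend": spend})

corr = df.corr()
# Report every pair whose correlation exceeds an arbitrary threshold.
for a in corr.columns:
    for b in corr.columns:
        if a < b and abs(corr.loc[a, b]) > 0.5:
            print(f"{a} ~ {b}: r = {corr.loc[a, b]:.2f}")
```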

Deep Learning

Deep learning is a subset of methods from the field of machine learning. The term "deep learning" typically refers to artificial neural networks. In deep learning, the problem to be solved is analyzed in several successive layers in order to obtain optimal results. Each layer penetrates deeper (hence "deep" learning) into the structure of the problem. Although the underlying methodology is relatively old, the term has recently been re-popularized by Google and others.
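
A minimal sketch of the layered idea (assuming scikit-learn's multi-layer perceptron and a synthetic data set; architecture and sizes are illustrative assumptions):

```python
# A small neural network whose stacked hidden layers analyze the
# problem layer by layer, as described above.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Three hidden layers: each one penetrates "deeper" into the problem.
net = MLPClassifier(hidden_layer_sizes=(32, 16, 8),
                    max_iter=2000, random_state=0)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```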

Advanced analytics

Advanced analytics is an umbrella term for the methods of modern statistics. The term is intended to distinguish methods such as artificial intelligence or data mining from "classical" statistics. Advanced analytics typically means multivariate methods that allow analysis in high-dimensional spaces where the data are highly interdependent.
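
One classical multivariate method for such high-dimensional, interdependent data is principal component analysis; a sketch (the synthetic data with its three latent factors is an illustrative assumption):

```python
# PCA on 50 strongly interdependent features: most of the variance
# is concentrated in a few components, because all features are
# noisy mixtures of only 3 latent factors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))      # 3 hidden factors
mixing = rng.normal(size=(3, 50))       # mixed into 50 features
X = latent @ mixing + 0.1 * rng.normal(size=(200, 50))

pca = PCA(n_components=10).fit(X)
print(pca.explained_variance_ratio_.round(3))
```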

Descriptive analysis

Descriptive analysis aims to describe the actual state of a data set. The objective is to better understand the data itself, and therefore the customers, employees or tools behind it, in order to respond to them more effectively in the future. Typical methods are clustering, nearest neighbors and hypothesis testing.
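
As a sketch of one of these methods, a hypothesis test comparing two customer groups in the existing data (groups, numbers and the question asked are illustrative assumptions, assuming SciPy):

```python
# Two-sample t-test: do two customer groups differ in their
# actual, current state (here: average basket value)?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
basket_newsletter = rng.normal(55.0, 10.0, 200)  # avg basket (EUR)
basket_control = rng.normal(50.0, 10.0, 200)

t, p = stats.ttest_ind(basket_newsletter, basket_control)
print(f"t = {t:.2f}, p = {p:.4f}")
```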

Predictive analysis

Predictive analysis aims to infer unknown or future outcomes from known data and results. Typically, so-called predictive algorithms are used to infer, from known events about which more information is available, findings that will only materialize in the future. This allows an early reaction. Examples are estimating willingness to pay or customer wishes (which are only known with certainty after the purchase) in order to (re)act in time.

Prescriptive analysis

Prescriptive analysis is closely related to predictive analysis. The aim is to achieve a specific objective by means of predictions, i.e. to exert a normative influence on the process. This includes, for example, predicting the willingness to pay a certain price and then deploying discounts selectively to increase sales. A sketch of this two-step logic follows below.
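
A minimal sketch of that two-step logic (the model, features and discount rule are illustrative assumptions, assuming scikit-learn):

```python
# Prescriptive step on top of a prediction: first estimate the
# willingness to pay, then derive a concrete action (a discount).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical features per customer: pages viewed, past purchases.
X = rng.uniform(0, 10, size=(300, 2))
willingness = 20 + 3 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 2, 300)

model = LinearRegression().fit(X, willingness)

list_price = 60.0
for customer in np.array([[2.0, 1.0], [9.0, 8.0]]):
    predicted = model.predict(customer.reshape(1, -1))[0]
    # Normative rule: discount only where the list price would
    # exceed the predicted willingness to pay.
    discount = max(0.0, list_price - predicted)
    print(f"predicted: {predicted:.2f}, discount: {discount:.2f}")
```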

Structured data

Structured data is data for which the relationships between items and the relevant information are known and stored, for example, in a database. This makes it possible to filter or sort by properties. Examples are age and gender in a well-maintained CRM. Relational databases can process this kind of data very well.
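
A sketch of how directly such data can be queried (the table and rows are illustrative assumptions, using Python's built-in sqlite3):

```python
# Structured data: known relationships in a relational database
# can be filtered and sorted without any estimation.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (name TEXT, age INTEGER, gender TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                [("Anna", 34, "f"), ("Ben", 52, "m"), ("Cara", 28, "f")])

# Filtering and sorting by known properties is trivial here.
for row in con.execute(
        "SELECT name, age FROM customers WHERE gender = 'f' ORDER BY age"):
    print(row)
```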

Unstructured data

Unstructured data (or more precisely: under-structured data) is data for which the relationships between the data points and the relevant information are not known, or only partially known. There is therefore no direct way to search or filter this information. Examples are willingness to pay, or the content that is relevant at the current moment. Such information is not directly accessible and has to be estimated using more sophisticated analytical methods.
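
As a sketch of such estimation (the documents, the query and the TF-IDF approach are illustrative assumptions, assuming scikit-learn): raw text carries no known relationships, so relevance must be scored rather than filtered.

```python
# Unstructured data: relevance of raw text is estimated,
# here via TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["discount on running shoes this week",
        "new CRM features for customer data",
        "running tips for beginners"]
query = ["running shoes"]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(docs)
sims = cosine_similarity(vec.transform(query), doc_matrix)[0]
# Estimated relevance per document, since no direct filter exists.
print(sims.round(2))
```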

Clustering

Clustering is a subfield of data mining. The goal is to combine "similar" data points into larger structures (clusters). This makes it possible to find approximate structures in the data. Clustering can proceed by dividing the entire data set (top-down) or by merging individual points (bottom-up). The goal is usually to better understand the data in order to find suitable measures for further evaluation.
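
A sketch showing both directions (assuming scikit-learn and synthetic blob data; k-means stands in here for the partitioning approach, agglomerative clustering for the bottom-up approach):

```python
# Two clustering styles: partitioning the whole data set (k-means)
# and merging individual points bottom-up (agglomerative).
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

top_down = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
bottom_up = AgglomerativeClustering(n_clusters=3).fit_predict(X)

print("k-means labels:      ", top_down[:10])
print("agglomerative labels:", bottom_up[:10])
```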

Classifier

A classifier is a mathematical algorithm that maps existing information to a class label. An example is an algorithm that receives as input the websites a user has visited so far and outputs a class label for that user. Classifiers usually work with incomplete information, so this output is rarely perfect. The result is therefore often called an "estimation".
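
A sketch close to that example (the feature encoding, the "investor" class and the model are hypothetical, assuming scikit-learn); the probability output makes the "estimation" character explicit:

```python
# A classifier mapping visit counts per website category to an
# estimated user class, with probabilities instead of certainty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Each row: visit counts for (sports, finance, gaming) pages.
X = rng.poisson(3, size=(200, 3))
# Hypothetical ground truth: "investor" if finance dominates.
y = (X[:, 1] > X[:, 0]) & (X[:, 1] > X[:, 2])

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_user = np.array([[1, 7, 2]])
print("class:", clf.predict(new_user)[0],
      "probabilities:", clf.predict_proba(new_user)[0].round(2))
```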

Regression

A regression is a mathematical algorithm that maps existing information to one or more real numbers. An example is an algorithm that takes as input the websites visited so far and estimates a customer's willingness to pay. Regressions typically operate with incomplete information, so this output is rarely perfect. The result is therefore often called an "estimation".
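
A sketch of such a mapping to a real number (the data and the linear relationship are illustrative assumptions, using plain NumPy):

```python
# A regression mapping existing information (pages visited) to a
# real number (estimated willingness to pay).
import numpy as np

rng = np.random.default_rng(0)
pages_visited = rng.integers(1, 20, 100)
willingness = 10 + 2.5 * pages_visited + rng.normal(0, 3, 100)

# Ordinary least squares via NumPy: willingness ~ a * pages + b.
a, b = np.polyfit(pages_visited, willingness, deg=1)
print(f"estimate for 12 visited pages: {a * 12 + b:.2f}")
```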

This is by no means a complete list, by the way. It serves as a first attempt to give an overview of the different methods and areas. Which technologies and methods are used to "do something with AI" depends strongly, among other things, on the application, the objective and the quality of the available data.
