Updated on: 25.04.2022
A decision tree makes decision-making processes transparent.
What is it about?
A decision tree is a decision-making tool that visualises all decision alternatives along a decision path. Decision trees are often used to prepare a higher-level cluster analysis.
Decision trees are based on a fundamentally simple decision logic and are therefore easy to describe mathematically. In terms of syntax, a decision tree is comparable to a flowchart: it is a suitable tool for transparently displaying possible options and their dependencies. Decision trees can sometimes have a creativity-promoting effect similar to that of a morphological box and produce new combinations of solutions. In modern data analysis, decision trees are also used in various machine learning models.
Where possible, each decision alternative should be evaluated in terms of its probability of occurrence.
Function and structure of a decision tree
First, all decision points are identified and displayed as nodes in a diagram. The starting point is the original decision alternative, also called the root. At each node, or decision point, new branches are added. The result is a clear hierarchy of the decision problem. The tool thus helps to identify all decision points and to visualise all options in a single diagram.
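The node-and-branch structure described above can be sketched as a small data model. This is only an illustration; the delivery-strategy example and all labels are made up:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A decision point; a node without children is a leaf."""
    label: str
    children: list["Node"] = field(default_factory=list)

def print_tree(node: Node, depth: int = 0) -> None:
    """Print the hierarchy with one level of indentation per depth."""
    print("  " * depth + node.label)
    for child in node.children:
        print_tree(child, depth + 1)

# Hypothetical decision problem: the root is the original decision
# alternative, each option forms its own branch.
root = Node("Choose delivery strategy", [
    Node("Build in-house", [Node("Hire team"), Node("Retrain staff")]),
    Node("Outsource", [Node("Local vendor"), Node("Offshore vendor")]),
])
print_tree(root)
```

Printing the tree makes the hierarchy of the decision problem visible at a glance.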
The decision tree as a machine learning method
In the field of machine learning, decision trees are used as prediction models: target variables are determined from different input variables by recursively refining classification and probability functions. A distinction is made between classification trees (the forecast result is a class assignment) and regression trees (the forecast result is a numerical value).
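The distinction between the two tree types can be sketched with scikit-learn (assumed to be installed; the toy data and parameters are made up for illustration):

```python
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[0, 0], [1, 0], [0, 1], [1, 1]]  # two input variables per sample

# Classification tree: the forecast result is a class assignment.
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, ["no", "no", "yes", "yes"])
print(clf.predict([[0, 1]]))

# Regression tree: the forecast result is a numerical value.
reg = DecisionTreeRegressor(max_depth=2, random_state=0)
reg.fit(X, [0.0, 0.1, 0.9, 1.0])
print(reg.predict([[1, 1]]))
```

Both models learn the same kind of recursive splits on the input variables; only the type of the target variable differs.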
What is done in a decision tree?
Step-by-step instructions for the manual creation of a decision tree
1. Clarification of the problem
The first step is to record and clearly delimit the initial problem (the root).
2. Identification of decision options
In a second step, all decision options resulting from the first question are identified. Each option forms its own branch.
3. Determination of the probability of occurrence
As far as possible, the decision alternatives should be evaluated in terms of their probability of occurrence. A rough estimate can be refined in later iterations.
4. Recording in a data model
Depending on the scope of the decision problem, a suitable data model must be built.
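The four steps above can be sketched as a minimal data model: each branch carries a probability of occurrence (step 3), each leaf a payoff, and the tree is evaluated by rolling the probability-weighted values back to the root. The nested-dict structure, the product-launch scenario, and all numbers are illustrative assumptions:

```python
def expected_value(node: dict) -> float:
    """Roll back the tree: a leaf returns its payoff, an inner node
    returns the probability-weighted sum of its branches."""
    if "payoff" in node:
        return node["payoff"]
    return sum(p * expected_value(child) for p, child in node["branches"])

# Hypothetical decision problem with estimated probabilities.
launch = {"branches": [
    (0.6, {"payoff": 100.0}),   # market accepts the product
    (0.4, {"payoff": -30.0}),   # market rejects it
]}
print(expected_value(launch))
```

A rough estimate of the probabilities can be refined in later iterations simply by updating the numbers in the data model and re-evaluating.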
Further advice on the use of decision trees
Options are mutually exclusive
In a decision tree, all options are mutually exclusive and overlaps are excluded. This should be checked at each level to ensure consistent mapping.
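One way to check this at each level is to verify that the branch probabilities of the mutually exclusive options sum to 1. A minimal sketch, assuming the same nested-dict model as above (structure and tolerance are illustrative):

```python
def check_exhaustive(node: dict, tol: float = 1e-9) -> bool:
    """Return True if, at every level, the branch probabilities
    of the mutually exclusive options sum to 1."""
    if "branches" not in node:
        return True  # a leaf has nothing to check
    total = sum(p for p, _ in node["branches"])
    return abs(total - 1.0) <= tol and all(
        check_exhaustive(child) for _, child in node["branches"]
    )

ok = {"branches": [(0.7, {}), (0.3, {})]}
bad = {"branches": [(0.7, {}), (0.4, {})]}  # overlap or double counting
print(check_exhaustive(ok), check_exhaustive(bad))
```

A sum above 1 hints at overlapping options, a sum below 1 at options that were forgotten.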
Linear decision path
This exclusion criterion also applies along each decision path, so that a linear course is always ensured.
Quantifiable decision variables
In addition to the probability of occurrence, other quantifiable decision variables (e.g. time, costs, effort) are helpful to make the data model even more meaningful.
If, then, otherwise ...
An "if, then, else" approach offers good orientation. If this question is asked at each node, there is no danger of forgetting an option.
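The "if, then, else" question at each node maps directly onto a chain of conditions. A small sketch; the defect-handling scenario, its conditions, and the threshold are made up for illustration:

```python
def classify_defect(severity: int, under_warranty: bool) -> str:
    """Walk one decision path: each condition is the question
    asked at a node, each return value a leaf of the tree."""
    if severity >= 8:          # if: is the defect critical?
        return "replace unit"  # then
    elif under_warranty:       # else: ask the next question
        return "free repair"
    else:
        return "paid repair"

print(classify_defect(9, False))
print(classify_defect(3, True))
```

Because each question is answered exactly once per path, the traversal stays linear, matching the advice above.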
Combining a decision tree with other tools
A decision tree also works well in combination with a risk matrix and is well suited to defining service and maintenance activities. In such cases, the decision tree helps to represent the different scenarios.