The principle of Occam's razor

Occam's razor is a logical principle attributed to the mediaeval philosopher William of Occam. The principle states that one should not make more assumptions than the minimum needed. This principle is also called the principle of parsimony. It underlies all scientific modelling and theory building. It admonishes us to choose, from a set of possible models that explain a given phenomenon, the simplest one. In any given model, Occam's razor helps us to "shave off" those concepts, variables or constructions that are not essential to explain the phenomenon. By doing that, the model is easier to develop and there is less chance of inconsistencies, ambiguities and redundancies. The complexity of our modelling appears to be an outcome of biological evolution: the human brain has become so complex that it is virtually unlimited in its capacity to react to all the aspects of the world it perceives. This is at once our strength and our weakness.

The principle of Occam's razor is essential for building models of phenomena because theories are underdetermined by the data. For a given set of data, there is always an infinite number of possible models explaining those data. This is because a model normally represents an infinite number of possible cases, of which the observed cases are only a finite subset. The non-observed cases are inferred by postulating general rules covering both actual and potential observations.

For example, through two points in a graph you can always draw a straight line, and induce that all further observations will lie on that line. However, you could also draw an infinite variety of much more complicated curves passing through those same two points, and these curves would fit the empirical data just as well. Only Occam's razor would in this case guide you in choosing the "straight" (i.e. linear) relation as the best candidate model. Similar reasoning applies to n data points lying in any kind of distribution.
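
To make this concrete, here is a small illustrative sketch in Python (the data points and the particular "complicated" curve are chosen arbitrarily for illustration; they are not part of the original argument). Both models below reproduce the two observations exactly, yet they disagree completely about unobserved cases:

    import numpy as np

    # Two observed points; both models below pass exactly through them.
    x_obs = np.array([1.0, 2.0])
    y_obs = np.array([1.0, 2.0])

    # Model 1: the straight line fitted through the points (here y = x).
    line = np.polyfit(x_obs, y_obs, deg=1)

    # Model 2: an arbitrary more complicated curve that also passes through
    # both points: y = x + 5*(x - 1)*(x - 2).
    def complicated(x):
        return x + 5.0 * (x - 1.0) * (x - 2.0)

    # Both models reproduce the observed data exactly...
    print(np.polyval(line, x_obs))   # [1. 2.]
    print(complicated(x_obs))        # [1. 2.]

    # ...but they predict very different values for an unobserved case.
    x_new = 3.0
    print(np.polyval(line, x_new))   # 3.0
    print(complicated(x_new))        # 13.0

The data alone cannot decide between the two models; Occam's razor favours the line because it requires no extra terms.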

Occam's razor is especially important for universal models such as the ones developed in General Systems Theory, mathematics or philosophy, because there the subject domain is of unlimited complexity. If one starts with too complicated foundations for a theory that potentially encompasses the universe, the chances of getting any manageable model are very slim indeed. Moreover, the principle is sometimes the only remaining guideline when entering domains of such a high level of abstraction that no concrete tests or observations can decide between rival models. In mathematical modelling of systems, the principle can be made more concrete in the form of the principle of uncertainty maximization: from your data, induce that model which minimizes the number of additional assumptions.
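
As an illustrative sketch of what uncertainty maximization means (the six-sided die and the particular probability distributions below are assumed here purely for illustration), the model that adds the fewest assumptions to the known facts is the one that leaves the most uncertainty, i.e. the one with maximal entropy:

    import numpy as np

    def entropy(p):
        """Shannon entropy (in bits) of a discrete probability distribution."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Suppose all we know is that a die has six faces, with no observed
    # frequencies. The uniform distribution encodes exactly that fact and
    # nothing more; any other distribution consistent with it commits to
    # additional, unsupported assumptions.
    uniform = np.full(6, 1.0 / 6.0)
    biased = np.array([0.4, 0.3, 0.1, 0.1, 0.05, 0.05])  # extra assumed structure

    print(entropy(uniform))  # ~2.58 bits: maximal uncertainty, minimal assumptions
    print(entropy(biased))   # ~2.15 bits: lower uncertainty, more assumptions built in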

This principle is part of epistemology, and can be motivated by the requirement of maximal simplicity of cognitive models. However, its significance might be extended to metaphysics if it is interpreted as saying that simpler models are more likely to be correct than complex ones, in other words, that "nature" prefers simplicity.


