Research Paper By Joseph Iverson
(Transformational Coach, UNITED STATES)
Renaissance – Rediscovery
In the wake of the Dark Ages, the Renaissance flourished against a backdrop of a rediscovered philosophy of humanism and the waning influence of church doctrine – a shift later reinforced by Galileo's evidence that the Earth and the other planets orbit the Sun, rather than everything orbiting the Earth.
Humanism was nothing new, having its roots in classical Greece. One philosopher of that era in particular, Protagoras, had a revolutionary idea, captured in his famous statement:
Man is the measure of all things: of the things that are, that they are, of the things that are not, that they are not.
Later, Plato would interpret this to mean that there is no absolute truth. Truth was instead something each person individually constructed for themselves – and this bold viewpoint was in stark contrast to other philosophical doctrines that claimed the universe was based on something objective, outside of human influence or perceptions.
Humanism can be defined as:
…a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism, empiricism) over acceptance of dogma or superstition.
Thus, with the Renaissance, science and the arts were supplanting religion as a source of study and inspiration. In the move away from a belief in external certainties and truths – from a primary focus outward onto God to a focus inward on man himself – the objective certainty of God became a subjective, relative “certainty” held in the eyes of each individual! Truth came to be seen as something which exists in each person’s subjective interpretation of the world, formulated from the data received through their senses, and not as some form of absolute certainty that exists “out there” in the universe. Even right and wrong are subjective, not objective, and indeed relative to the wider context within which each person is situated.
In this paper, we will look at how two of the main epistemological theories, rationalism and empiricism, are used widely in the human endeavour to gain knowledge and to provide the evidence humanism requires to verify what “we think we know.” This is important for coaches because each of these epistemological theories has strengths and limitations, in particular for those practicing in traditional Western countries and cultures. Whilst these theories have worked reasonably well in areas such as the hard sciences, in the social sciences, like psychology and sociology, the dispassionate nature of rationalism and empiricism is far less effective at essentialising, describing and predicting the nuanced complexity of individual human behaviour.
Rationalism & Empiricism
Rationalism is the view that “regards reason as the chief source and test of knowledge”, or “in which the criterion of the truth is not sensory but intellectual and deductive.” For example, in the a priori domain of mathematics, reason alone is enough to determine that 3 + 1 = 4; the context is not really important. Objectivity is a central philosophical concept of rationalism, and generally means the state or quality of being true even outside of a subject’s individual biases, interpretations, feelings, and imaginings. A proposition is considered “objectively true” when its truth can be established without biases caused by feelings, ideas, opinions, etc. A critical argument against scientific objectivity and positivism is that all science has a degree of interpretivism. The mind/body dualism promulgated by René Descartes – the idea that the mind is separate from the rest of the body – influenced other philosophers to believe that it is possible for someone to be truly objective. However, it is well known that research studies are not always reproducible (with studies in psychology being some of the worst!)
Empiricism is a theory that states that valid knowledge comes primarily from sensory experience, and it emphasises evidence, especially evidence discovered through experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning (i.e. pure rationalism), intuition, or revelation. Empiricism, often used by natural scientists, says that “knowledge is based on experience” and that “knowledge is tentative and probabilistic, subject to continued revision and falsification.” This view is commonly contrasted with rationalism, which states that knowledge may be derived from reason independently of the senses. Positivism is a central philosophical theory of empiricism – meaning that certain (“positive”) knowledge is based on natural phenomena, their properties and relations.
Thus, information derived from sensory experience, interpreted through reason and logic, forms the exclusive source of all certain knowledge. Positivism holds that valid knowledge is found only in a posteriori knowledge (the opposite of the a priori knowledge mentioned above), and these “positive facts” received from the senses are known as empirical evidence. Positivism also holds that society, like the physical world, operates according to general laws. Introspective and intuitive knowledge are rejected (as are metaphysics and theology). Although the positivist approach has been a recurrent, dominant theme of Western thought, the modern sense of the approach was formulated by the philosopher Auguste Comte in the early 19th century. Comte argued that, much as the physical world operates according to gravity and other absolute laws, so does society, and he further developed positivism into a Religion of Humanity. Positivism asserts that all authentic knowledge allows verification and that the only valid knowledge is scientific. The dispute between rationalism and empiricism concerns the extent to which people are dependent upon sense experience in the effort to gain knowledge. Rationalists claim that there are significant ways in which concepts and knowledge are gained independently of sense experience. Empiricists claim that sense experience is the ultimate source of all our concepts and knowledge.
Rationalists generally develop their view in two ways. First, they argue that there are cases where the content of our concepts or knowledge outstrips the information that sense experience can provide. Second, they construct accounts of how reason in some form or other provides that additional information about the world. Empiricists present complementary lines of thought. First, they develop accounts of how experience provides the information that rationalists cite, insofar as it is available in the first place. (Empiricists will at times opt for scepticism as an alternative to rationalism: if experience cannot provide the concepts or knowledge the rationalists cite, then we simply do not have them.) Second, empiricists attack the rationalists’ accounts of how reason is a source of concepts or knowledge.
At the turn of the 20th century, an antipositivist movement, spearheaded by German sociologists, began speaking out against and rejecting the positivist doctrine in sociology. They fought strenuously against the assumption that only explanations derived from science are valid, arguing that “scientific explanations do not reach the inner nature of phenomena and it is humanistic knowledge that gives us insight into thoughts, feelings and desires.” German theoretical physicist Werner Heisenberg, Nobel laureate for pioneering work in quantum mechanics, distanced himself from positivism by saying:
“The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can anyone conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.” 
Humanity and human behaviour are intensely complex. While it is convenient, perhaps even noble, to essentialise human behaviour and emphasise its seemingly rational, “measurable” aspects, it can just as easily be called a grave error, bordering on arrogance, to dismiss emotions, feelings, and intuitions as “irrational” – or to ignore their existence because of their difficulty and unpredictability, and to assume they have no causal effect at all.
What are Emotions Anyway?
Emotions have been described by some theorists as discrete and consistent responses to internal or external events which have a particular significance to the individual. Emotions are brief in duration and consist of a coordinated set of responses, which may include verbal, physiological, behavioural, and neural mechanisms. They can exist on a continuum – fear might range from mild concern to terror, while shame might range from simple embarrassment to toxic shame. Critically, emotions have also been described as biologically given and a result of evolution, because they provided good solutions to ancient and recurring problems that faced our ancestors. On this view, emotions are genetically ancient, having developed through evolution from the time Homo sapiens first emerged.
Theories about emotions stretch back at least as far as the Stoics of Ancient Greece, and to Ancient China. In China, excessive emotion was believed to cause damage to qi, which in turn damages the vital organs. The four humours theory made popular by Hippocrates contributed to the study of emotion in the same way that it did for medicine. Western philosophy regarded emotion in varying ways. In Stoic theories it was seen as a hindrance to reason and therefore a hindrance to virtue. Aristotle, by contrast, believed that emotions were an essential component of virtue. In the Aristotelian view, all emotions (called passions) corresponded to appetites or capacities. During the Middle Ages, the Aristotelian view was adopted and further developed by scholasticism, and by Thomas Aquinas in particular. In the 19th century emotions were considered adaptive and were studied more frequently from an empiricist psychiatric perspective.
Within our Western, technocratic lifestyle, thought is praised above all. We are often encouraged to discount, or even completely reject, our emotions and feelings because they can be seen as enabling weakness and vulnerability. Denying one’s humanity, and our unique capability to attach significance and meaning to what we do, debases existence and is disempowering, because it robs us of the richness nature has given us – the potential that adds depth to our experiences through meaning. Without meaning, life is reduced to merely carrying out the purpose nature intended for us, simply adopting what society or other external sources have told us are the right ways to live, or, most boring of all, just “trying to be happy!” It is also a losing battle to think one can live a life disconnected from one’s inner reality and rely on logic and external measures, like wealth or success, to inform the choices one makes in life.
The Peril Of Demonising “The Irrational” – A Contemporary Example – Economics
Rational choice theory is a framework for understanding, and often modelling, social and economic behaviour. The basic premise of rational choice theory is that aggregate social behaviour results from the behaviour of individual actors, each of whom is making their individual decisions. Rationality is widely used as an assumption of the behaviour of individuals in microeconomic models and analyses and appears in almost all economics textbook treatments of human decision-making.
The concept of rationality used in rational choice theory differs from most philosophical uses of the word. In everyday usage, “rational” behaviour typically means “sensible”, “predictable”, or “done in a thoughtful, clear-headed manner.” Rational choice theory uses a narrower definition of rationality. At its most basic level, behaviour is rational if it is goal-oriented, reflective (evaluative), and consistent (across time and different choice situations). This contrasts with behaviour that is random, impulsive, conditioned, or adopted by (unevaluated) imitation. Early neoclassical economists writing about rational choice assumed that agents (“rational people”) make consumption choices that always maximize their happiness, or utility. Rational choice theory bases rational choice on a set of axioms that need to be satisfied, and typically does not specify where the goal (preferences, desires, etc.) comes from; it mandates only a consistent ranking of the alternatives. Individuals choose the best action according to their personal preferences and the constraints facing them. Without specifying the individual’s goal or preferences, it may not be possible to empirically test, or falsify, the rationality assumption. In recent years, the most prevalent version of rational choice theory, Expected Utility Theory, has been challenged by the experimental results of behavioural economics. Economists are learning from other fields, such as psychology, and are enriching their theories of choice in order to get a more accurate view of human decision-making. These analyses also rely heavily on the following assumptions:
- Perfect information: The simple rational choice model above assumes that the individual has full or perfect information about the alternatives (i.e., the ranking between two alternatives involves no uncertainty.)
- Choice under uncertainty: In a richer model that involves uncertainty about how choices (actions) lead to eventual outcomes, the individual effectively chooses between “lotteries”, where each lottery has a different probability distribution over outcomes. The additional assumption of independence of irrelevant alternatives then leads to expected utility theory.
- Inter-temporal choice: when decisions affect choices (such as consumption) at different points in time, the standard method for evaluating alternatives across time involves discounting future payoffs.
- Limited cognitive ability: identifying and weighing each alternative against every other may take time, effort, and mental capacity. Recognising the cost that these impose on individuals gives rise to theories of bounded rationality. Bounded rationality captures the idea that humans take reasoning shortcuts that may lead to suboptimal decision-making, because the human brain’s capacity to process information is inherently limited.
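The machinery behind these assumptions is simple enough to sketch numerically. In the illustrative Python below, the lotteries, probabilities, payoffs, utility function and discount rate are all hypothetical values chosen for demonstration, not drawn from any study: an expected-utility maximiser simply ranks lotteries by the probability-weighted sum of the utilities of their outcomes, and inter-temporal choice discounts future payoffs before summing.

```python
# A minimal sketch of rational choice under uncertainty and over time.
# All numbers here (payoffs, probabilities, the log utility function,
# the 0.9 discount factor) are illustrative assumptions.
import math

def utility(x):
    # One common assumption: diminishing marginal utility (log form).
    return math.log(1 + x)

def expected_utility(lottery):
    # A "lottery" is a list of (probability, payoff) pairs.
    return sum(p * utility(x) for p, x in lottery)

safe = [(1.0, 50)]                # a certain payoff of 50
risky = [(0.5, 120), (0.5, 0)]    # a 50/50 gamble on 120 or nothing

# The axioms require only a consistent ranking of the alternatives:
best = max([safe, risky], key=expected_utility)

def discounted_utility(payoffs, beta=0.9):
    # Inter-temporal choice: payoff at period t is weighted by beta**t.
    return sum(beta ** t * utility(x) for t, x in enumerate(payoffs))
```

With log utility, the certain 50 here is ranked above the gamble even though the gamble has a higher expected payoff – the standard way risk aversion falls out of a concave utility function.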
The Blank Slate Fallacy
Both the assumptions and the behavioural predictions of rational choice theory have sparked criticism from various camps. As mentioned above, some economists have developed models of bounded rationality, which aim to be more psychologically plausible without completely abandoning the idea that reason underlies decision-making processes. Other economists have developed theories of human decision-making that allow for the roles of uncertainty, institutions, and the determination of individual tastes by the socioeconomic environment (cf. Fernandez-Huerga, 2008).
In particular, it is this class of models – rational behaviour as maximizing behaviour – which provides support for specification and identification. And this, critics argue, is where the flaw is to be found. Hollis and Nell (1975) argued that positivism (broadly conceived) has provided neo-classicism with important support, which they then show to be unfounded. They base their critique of neo-classicism not only on their critique of positivism but also on the alternative they propose, rationalism. Indeed, they argue that rationality is central to neo-classical economics – as rational choice – and that this conception of rationality is misused: demands are made of it that it cannot fulfil.
The burden of rational-actor theory is the assertion that ‘naturally’ constituted individuals facing existential conflicts over scarce resources would rationally impose on themselves the institutional structures of modern capitalist society, or something approximating them. But this way of looking at things systematically neglects the ways in which modern capitalist society and its social relations in fact constitute the ‘rational’, calculating individual. The limitations of rational-actor theory are well known: its static quality, its logical antinomies, its vulnerability to arguments of infinite regress and, most glaringly, its disregard for the emotions that frequently cause actual human behaviour to deviate from the theory’s predictions.
A Better Theoretical Framework
An evolutionary psychology perspective is that many of the seeming contradictions and biases regarding rational choice can be explained as being rational in the context of maximizing biological fitness in the ancestral environment (i.e. when humans evolved in Africa), but not necessarily in the contemporary environment of civilisation today. Thus, when living at subsistence level, where a reduction of resources may have meant death, it may have been rational to place a greater value on losses than on gains. Proponents argue it may also explain differences between groups.
Alternative theories of human action include components such as Amos Tversky and Daniel Kahneman’s cognitive biases, the first and most well known being loss aversion, a centrepiece of their Prospect Theory. Loss aversion reflects the empirical finding that, contrary to the standard preferences assumed under rational actor theory, individuals attach extra value to items that they already own compared with similar items owned by others. Under standard preferences, the amount that an individual is willing to pay for an item is assumed to equal the amount he or she is willing to be paid in order to part with it; in experiments, the latter price is sometimes significantly higher than the former! Tversky and Kahneman therefore do not characterize loss aversion as irrational. Behavioural economics has since added a remarkable number of other viewpoints and cognitive biases to its picture of human behaviour that go against traditional, neoclassical assumptions.
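The asymmetry between gains and losses can be made concrete with the value function Tversky and Kahneman proposed for Prospect Theory: v(x) = x^α for gains and v(x) = −λ(−x)^β for losses, where outcomes are measured relative to a reference point. The sketch below uses their published median parameter estimates (α = β = 0.88, λ = 2.25) purely as illustrative values; individual studies fit different numbers.

```python
# Sketch of the Prospect Theory value function (Tversky & Kahneman).
# Parameter values are their median estimates; treat as illustrative.
ALPHA = 0.88    # curvature for gains (diminishing sensitivity)
BETA = 0.88     # curvature for losses
LAMBDA = 2.25   # loss-aversion coefficient: losses loom larger

def value(x):
    # x is a gain (+) or loss (-) relative to a reference point,
    # not an absolute level of wealth.
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

# Losing 100 hurts about 2.25 times as much as gaining 100 pleases:
gain = value(100)
loss = value(-100)
```

This single function reproduces the willing-to-pay versus willing-to-accept gap described above: parting with an owned item registers as a loss, which is weighted more heavily than the equivalent gain.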
The implication of this is that science should be understood as an ongoing process in which scientists improve the concepts they use to understand the mechanisms that they study. It should not, in contrast to the claim of empiricists, be about the identification of a coincidence between a postulated independent variable and dependent variable. Positivism/Falsificationism are also rejected due to the observation that it is highly plausible that a mechanism will exist but either a) go unactivated, b) be activated, but not perceived, or c) be activated, but counteracted by other mechanisms, which results in its having unpredictable effects. Thus, non-realisation of a posited mechanism cannot (in contrast to the claim of some positivists) be taken to signify its non-existence.
Critical naturalism argues that the transcendental realist model of science is equally applicable to both the physical and the human worlds. However, when we study the human world we are studying something fundamentally different from the physical world and must, therefore, adapt our strategy to studying it. Critical naturalism, therefore, prescribes social scientific method which seeks to identify the mechanisms producing social events, but with a recognition that these are in a much greater state of flux than those of the physical world (as human structures change much more readily than those of, say, a leaf). In particular, we must understand that human agency is made possible by social structures that themselves require the reproduction of certain actions/pre-conditions. Further, the individuals that inhabit these social structures are capable of consciously reflecting upon, and changing, the actions that produce them—a practice that is in part facilitated by social scientific research.