This principle might work very well, provided the province of science could be completely shut off from the realm of philosophy, and provided neither philosophers nor scientists were anxious to scale the dividing wall. Unfortunately, neither condition is completely fulfilled. The respective provinces of science and of philosophy coincide in two points. In the first place, the phenomena whose variations are measured, classified, and to some extent explained by the scientist are precisely the same as those whose ratio essendi and ratio cognoscendi the philosopher seeks to discover; while the methods that the scientist uses presuppose the principles of sound logic, a discussion of which clearly lies within the sphere of the philosopher qua logician. In the second place, often enough the scientist is not content to keep to his science. He insists upon his right to speculate metaphysically on the validity of scientific theories and on the ultimate nature of the phenomena with which he has to deal, and it is by no means uncommon to find that he mingles his metaphysical assumptions with the methods and principles of his science.
Epistemology, the theory of knowledge, is that part of philosophy that asks "what can we know?" "What can we be sure of?" "How do we get beyond mere opinion to real knowledge?" Traditionally, there are two approaches to epistemology: rationalism, which says we gain knowledge through reasoning, and empiricism, which says we gain knowledge through sensory experience. Although there are a few philosophers at the extremes, most agree that both these approaches to knowledge are needed, and that to some extent they support and correct each other.
Rationalists focus on what they call necessary truth. By that they mean that certain things are necessarily true, always, universally. Another term that means the same thing is a priori truth. “A priori” is Latin for “beforehand,” so a priori truth is something you know must be true before you even start looking at the world the senses reveal to us.
The most basic form of necessary truth is the self-evident truth. Self-evident means you do not really even have to think about it: It has to be true. The truths of mathematics, for example, are often thought of as self-evident. One plus one equals two. You do not need to go all over the world counting things to prove this. In fact, one plus one equals two is something you need to believe before you can count at all. (One of the criticisms that empiricists would put forth is that "one plus one is two" is trivial. It is tautological, meaning it is true, sure, but not because it is self-evident: It is true because we made it that way. One plus one is the definition of two, and so with the rest of mathematics. We created math in such a way that it works consistently for us.)
Other self-evident truths that have been put forth over the years include "you cannot be in two places at once," "something either is or it isn't," "everything exists." These are pretty good candidates, don't you think? Nonetheless, what is self-evident to one person is not self-evident to another. "God exists" is perhaps the most obvious one, and some people disagree with it quite vigorously. Or "the universe had to have a beginning": some people believe it has always been. A familiar use of the phrase "self-evident" is Thomas Jefferson's use of it in the Declaration of Independence: "We hold these truths to be self-evident: That all men are created equal" . . . still, it is pretty obvious to most that this is not, really, true. Instead, it is a rhetorical device; that is, it sounds good to put it that way.
In order to reason our way to more complex knowledge, we have to add deduction (also known as analytic truth) to the picture. This is what we usually think of when we think of thinking: With the rules of logic, we can discover what truths follow from other truths. The basic form of this is the syllogism, a pattern invented by Aristotle that has continued to be the foundation of logic to the present day.
The traditional example is this one, called modus ponens: "All men are mortal." "Socrates is a man." Therefore, "Socrates is mortal." If 'x', then 'y' (if you are human, then you are mortal). 'x' (you are human). Therefore, 'y' (you are mortal). This result will always be true if the first two parts are true. So we can create whole systems of knowledge by using more and more of these logical deductions.
Another syllogism that always works is in the form "If 'x', then 'y'; not 'y'; therefore not 'x'." If you are human, then you are mortal. You are not mortal. Therefore, you are not human. If the first two parts are true, then the last one is necessarily true. This one is called "modus tollens."
On the other hand, there are two patterns that do not work, even though they sound an awful lot like the ones above. If 'x', then 'y'. Not 'x'. Therefore not 'y'. If you are human, then you are mortal. You are not human. Therefore you are not mortal. That, of course, would come as a big surprise to animals. Or look at this example: "If God showed himself to me personally, that would prove the truth of religion. However, he hasn't done so. Therefore, religion is false." It sounds like a reasonable argument, but it is not: this is called "denial of the antecedent."
Another pattern that fails goes like this: If 'x', then 'y'; 'y'; therefore 'x'. If you are human, then you are mortal. You are mortal. Therefore you are human. Or try this one: "If God created the universe, we would see order in nature. We do in fact see order in the universe - the laws of nature! Therefore, God must have created the universe." It sounds good, doesn't it? Yet it is not at all logical: The order in the universe could have another cause. This is called "affirmation of the consequent."
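To keep these four patterns straight, it may help to set them out schematically. The following summary is added here purely for illustration (standard logical notation, not taken from the text), with the arrow read as "if . . . then":

\[
\begin{aligned}
&\text{Valid: modus ponens} && x \rightarrow y,\; x \;\therefore\; y\\
&\text{Valid: modus tollens} && x \rightarrow y,\; \neg y \;\therefore\; \neg x\\
&\text{Invalid: denial of the antecedent} && x \rightarrow y,\; \neg x \;\not\therefore\; \neg y\\
&\text{Invalid: affirmation of the consequent} && x \rightarrow y,\; y \;\not\therefore\; x
\end{aligned}
\]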
There are many types of rationalism, and we usually refer to them by their creators. The best known, of course, is Plato's (and Socrates'). Aristotle, although he pretty much invented formal logic, is not entirely a rationalist - he was also interested in the truths of the senses. The most magnificent example of rationalism is Benedict Spinoza's. In a book called Ethics, he began with one self-evident truth: God exists. By God, he meant the entire universe, both physical and spiritual, so his truth does seem pretty self-evident: Everything that is, is. However, from that truth he carefully reasons, step by step, his way to a very sophisticated system of metaphysics, ethics, and psychology.
Many people think that empiricism is the same thing as science. That is an unfortunate mistake. The reason that empiricism is so closely tied in our minds to science is really more historical than philosophical: After many centuries of religious rationalism dominating European thinking, people like Galileo and Francis Bacon came out and said, hey, how about paying some attention to the world out there, instead of just trying to derive truth from the scriptures? The stage for this change in attitude was, in fact, already set by St. Thomas Aquinas, who at least felt that scriptural truth and empirical truth need not conflict.
The simplest form of empirical truth is that based on direct observation - taking a good hard look. Now this is not the same as anecdotal evidence, such as "I know a fellow who has a cousin in Toronto who married a woman whose college roommate saw a UFO." It's not really even the same as "I saw a UFO." It means that there is an observation that was made and that you can make, too, and that, if it were possible, everyone should be able to make. In other words, "here's a UFO: Take a look."
In order to build a more complex body of knowledge from direct observations, we must make use of induction, also known as indirect empirical knowledge. We take the observations and carefully stretch them to cover more ground than we could actually cover directly. The basic form of this is called generalization. Say you see that a certain metal melts at a certain temperature. In fact, you've seen it many times, and you've shown it to others. At some point, you make the inductive leap and say "the melting point of this metal is so many degrees." Now it's true that you haven't melted every bit of this metal in the universe, but you feel reasonably confident that (under the same conditions) it will melt at so many degrees. That's "generalization."
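One classic, much-simplified way to put a number on that inductive leap (offered here only as an illustration, not as anything the author relies on) is Laplace's rule of succession: if n observations in a row have conformed to the generalization, none has failed, and we start from complete ignorance about the underlying propensity, then the probability that the next observation will also conform is

\[
P(\text{next instance conforms} \mid n \text{ conforming instances}) = \frac{n+1}{n+2}.
\]

So ten successful melts give roughly a 0.92 probability for the eleventh, and certainty is never reached no matter how large n grows, which is exactly the gap between induction and necessary truth.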
You can see that this is where statistics comes in, especially in a more wishy-washy science like psychology. How many observations do you need to make before you can comfortably generalize? How many exceptions to the desired result can you explain away as some sort of methodological error before it gets to be too much? What are the odds that my observation is actually true beyond these few instances of it? Just as there are different styles of rationalism, there are different types of empiricism. In this case, we have given them some names. Most empirical approaches are forms of epistemological realism, which says that what the senses show us is reality, is the truth.
The basic form of realism is direct realism (also known as simple or “naive” realism - the latter obviously used by those who disagree with it). Direct realism says that what you see is what you get: The senses portray the world accurately. The Scottish philosopher Thomas Reid is the best known direct realist.
The other kind is called critical (or representative) realism, which suggests that we see sensations, the images of the things in the real world, not the things directly. Critical realists, like rationalists, point out how often our eyes (and other senses) deceive us. One famous example is the way a stick jutting out of the water seems to be bent at the point at which it comes out. Take it out of the water, and you find it is straight. Apparently, something about the optics of air and water leads to this illusion. So what we really see are sensations, which are representations of what is real. Descartes and Locke were both critical realists. So are the majority of psychologists who study sensation, perception, and cognition.
But, to give Reid his due, a direct realist would respond to the critical realist that what we call illusions are actually matters of insufficient information. We don’t perceive the world in flash photos: We move around, move our eyes and ears, use all our senses. . . . To go back to the stick, a complete empirical experience of it would include seeing it from all directions, perhaps even removing it. Then we will see not only the real stick, just as it is, but the laws of air-water optics as well. A modern direct realist is the psychologist J. J. Gibson.
There is a third, and rather unusual, form of empiricism called subjective idealism that is most associated with Bishop George Berkeley. As an idealist in terms of his metaphysics, he argued that what we see is actually already a psychological or mental thing to begin with. In fact, if we don't see it, it isn't really there: "To be is to be perceived" is how he put it. Now, this doesn't mean that the table you are sitting at simply ceases to be when you leave the room: God's mind is always present to maintain the table's existence.
There is this famous question: "If a tree falls in the woods, and there is no one there to hear it, does it make a sound?" The subjective idealist answer is yes, it does, because God is always there. Another way to look at these three empirical approaches is like this: Critical realism postulates two steps to experiencing the world. First there is the thing itself and the light or sounds (etc.) it gives off. Second, there is the mental processing that goes on sometime after that light hits our retinas, or the sound hits our eardrums. Direct realism says that the first step is enough. Subjective idealism says that the second step is all there is.
The traditional, ideal picture of science looks like this: Let's start with a theory about how the world works. From this theory we deduce, using our best logic, a hypothesis, a guess, regarding what we will find in the world of our senses, moving from the general to the specific. This is rationalism. Then, when we observe what happens in the world of our senses, we take that information and inductively support or alter our theory, moving from the specific to the general. This is empiricism. And then we start again around the circle. So science combines empiricism and rationalism into a cycle of progressive knowledge.
Now, if a particular scientist wishes to indulge in metaphysical speculations, it is not for the philosopher to say him nay. It is only natural that one who devotes himself to the study of the laws which govern the phenomenal world should desire to know what phenomena are, and should form for himself a metaphysical theory of the universe. But if the scientist constructs a metaphysical theory, he can hardly complain should the philosopher criticise that theory, which is not the less metaphysical because it comes from the pen of a scientist. Physicists such as M. Duhem place themselves beyond the reach of the metaphysician by denying that their theories have in any sense a metaphysical import; but there are others who, in discussing the methods of science and the validity of its laws, have taken up a definite metaphysical position, which, they tell us, is more compatible with, if not actually presupposed by, the principles of science. This attitude, which is becoming more and more prevalent both in Germany and in France, is closely connected with the pragmatic movement. The pragmatic method claims to be based on that of science, and not a few scientists seem in return much inclined to adopt as their own the pragmatic theory of knowledge in general and the philosophy of Pure Experience with which it is so intimately bound up. Science has given up the naive, uncritical and often materialistic Realism which was formerly its customary attitude, and in its stead many of its devotees have taken up, not the non-metaphysical position of M. Duhem, but the position of Empirical Idealism.
Fifty years ago every scientist started from the common-sense point of view, assuming with his less educated brethren that material things really exist independently of the exercise of mental activity. He took it for granted that his thoughts about the universe did not affect the nature of the 'facts' with which he had to deal. He did not trouble about the possibility of there being any a priori forms of the mind to which experience, consciously or unconsciously, had to conform; nor did he dream that in observing facts he was in reality making them. The aim of scientific research was to give an explanation, not only of the relations holding between phenomena, but also of the nature of the universe itself. Both mechanists and dynamists hoped to find an interpretation of the objective, real world, at least in so far as it is material. Their atoms and molecules and their centres of force were real entities constitutive of material things and giving rise to those phenomena which we perceive by the senses. In fact, the fault which the mechanist found with the dynamist was that the latter introduced into reality an unknown entity, 'force', which could neither be imagined nor defined.
At present the position is changed. By many of our leading scientists the older metaphysics has been discarded and an Empirical Idealism or Pragmatic Sensationalism substituted in its stead. Mind and matter, relatively independent, are no longer the metaphysical conditions of scientific knowledge. For matter has been substituted sensation, and instead of knowledge arising through the manifestation of objective reality to a relatively passive mind, knowledge is now said to be due for the most part to the constructive activity of thought, to "l'action pensée," to ideas due, in part at least, to the creative power of mind, and striving to realise themselves in the field of sense-experience. The data of modern science are sensations; its aim is to discover the relations which hold between them; the means by which it seeks to acquire this knowledge is first of all sense-experience, in which experiment plays an important part, and, secondly, a mental activity of a higher order in which spontaneity and choice are conspicuous. Through the senses we have experience of relations between phenomena and sensation-complexes, and through the instrumentality of definitions and hypotheses created by thought we endeavour to arrange and classify these relations, to subsume them under general forms, and, if possible, to reduce them to unity by the discovery, or better, perhaps, the invention of some primary relation which holds throughout.
Two names stand out prominently as representative of this attitude, at once metaphysical and epistemological, in regard to the scope of science. They are those of Mach and Karl Pearson; and to these we may add a third, chosen from the more sceptical school of the Philosophie de la Contingence, M. Le Roy. M. Poincaré, on the other hand, must be placed in a different category, for he admits the "objectivity" of fact, and even to "laws" assigns a certain 'normal objectivity,' though in certain passages he seems to speak as if he were a sensationalist like Mach.
Mach distinguishes three stages in scientific procedure: the experimental stage, in which we are in immediate contact with reality, i.e., with sensation, and merely tabulate the results of experiment and observation; the deductive stage, in which we substitute mental images for facts, as in Mechanical Physics; and the formal stage, in which our terms consist of algebraical symbols, and our aim is to construct by their means the most convenient and most uniform synopsis of results. Similarly Poincaré distinguishes three kinds of hypotheses: (1) hypotheses suggested by facts and verified at least à peu près in experience; (2) "indifferent hypotheses," which are useful in that they express under images and figures relations between phenomena, but which are neither true nor false; and (3) mathematical conventions, which consist of definitions more or less arbitrary, and which are independent of experience. Poincaré's 'indifferent hypotheses' correspond to Mach's second stage in the development of science, and manifest a tendency eventually to disappear. Already Mach himself prefers to dispense with their service as rather encumbering than facilitating thought; while Poincaré, though he considers them still indispensable for the moment, holds them to be devoid of real significance.
Thus the second stage of scientific procedure in this view is of secondary importance. It is the experimental and mathematical stages that really constitute science. By observation and experiment we are brought into contact with reality; not indeed with the material world, for no such entity is supposed to exist, nor even with the world of sensible appearances strictly so-called - for an appearance implies something that appears - but with sensations. The objective condition of scientific knowledge, the reality which in science we desire to know, is sensation. The data of experience are sensations. Mach, in his Analyse der Empfindungen und das Verhältniss des Physischen zum Psychischen, has developed this view at considerable length. Sensations and sensation-complexes - these, he says, are reality. All science consists in the analysis of sensations. Nature is composed of elements given by the senses. From these we choose those which are most important for practical purposes and call them "objects" or "things." But "things" are really abstractions, and a name is a symbol for a complex of sensations whose variations we neglect. There are no things-in-themselves, nor are sensations symbols of things, but what we call things are symbols of sensation-complexes of relative stability. Colours, sounds, pressures, spaces, durations: these are the real things. All thought is governed by the principle of Thought-Economy. We are ever trying to save ourselves trouble. Hence we have acquired the habit of grouping sensations together in a lump and calling them by a single name. One group of sensations we call 'water', another 'a leaf,' another 'a stone'. Smaller groups, again, are combined to form larger ones. The group 'leaf' is joined to the groups 'branch,' 'stem,' etc., and the whole, being vaguely or generically pictured, becomes a 'plant'. These larger groups, again, are included in others larger still. What we call 'the external world' comprises all those sensation-complexes which are relatively constant, i.e., which repeat themselves again and again in the same sort of way and are not subject to the control of our will; whereas 'the self' comprises that other very extensive group of sensation-complexes, some of which are always present in consciousness, though ever varying in tone, while others can be produced at any time if we so desire, and thus are directly under our control.
In the external group the relations between sensation-complexes are constant, i.e., the complexes follow one another in the same order. Thus the sensation-complex (water) is juxtaposed in a certain way to another complex (Bunsen-burner), and always after a certain time the bright transparency of the former complex gives place to a dull whiteness of another considerably greater in extent. Ordinarily, however, we prefer
- according to the “Princip der Denkökonomie” - to use names to denote our sensation complexes, as it saves us the time and trouble of describing them. The usual account that one would give of the above phenomenon, for instance, would be that when we heat water over a Bunsen-burner after a time it begins to boil. Indeed, it would be very awkward for the sensationalist, if he often had to carry out Pascal’s principle of substituting the definition for the thing defined.
The physicist selects the above class of sensations, which are characterised by greater stability, greater regularity, and are common to humanity, as the data of his scientific researches while the psychologist treats of these in another way, and also of other sensations which are less stable, more subject to the control of the will, and, hence, often peculiar to the individual. But the standpoint of Mach is really psychological throughout. Both psychologist and physicist treat of the same class of objects from different points of view.
All that we can know of the world is necessarily reduced to sense-perception: and all that we can wish to know is given in the solution of a mathematical problem, in the knowledge of the functional dependence which exists between sense-elements. This exhausts the sources of the knowable.
Professor Mach has given up the apparently hopeless task of reducing things to indefinitely small and ultimate elements. Both he and Poincaré prefer to regard atoms and suchlike as hypotheses, as mere picturesque fictions of greater or less utility, but of no objective value; and for things Mach substitutes sensations. Scientifically, indeed, sensation is regarded as a form of energy, the differences of which are probably quantitative. But in course of time, says Mach, we shall discover that the sense of hunger is not so very different from the action of sulphuric acid on zinc, and that our will is not so very different from the pressure of a stone on its support, and so we shall get nearer nature. Thus, for Energetics, everything is reducible to energy, alias sensation, and the final aim of physical science is to demonstrate the truth of this assertion.
Both Mach and Poincaré speak of the sense-data of science as if they were uninfluenced by the subjective factor in cognition. They regard them as relatively stable, independent of the individual, and therefore objective. But even in the objects of scientific knowledge, philosophers such as M. Le Roy would admit an element of "contingency." Sensations are not given in isolation, but are grouped together in complexes and integrated into percepts, and in the construction of our percepts there may enter an element of caprice. We are influenced by our point de vue choisi d'avance, practical utility in some cases, the exigencies of scientific theory in others. Hence we introduce into our percepts just what suits our convenience and leave out the rest. This follows logically from the philosophy of Pure Experience, a philosophy which is practically identical with the metaphysical standpoint of MM. Karl Pearson and Mach. For if, as M. Le Roy says, "nothing is put before the mind but what is put by the mind"; if, in other words, we do not copy reality, but construct it, as Dr. Schiller affirms, then all is due to "hypothesis and fabrication," either by the individual or by the race, i.e., we construct our percepts as well as our concepts. Again, racial development takes place by individual variation, and this is possible in the sphere of experience only if thought exercises purposive control over the data of sense, in which case even in this, the lowest stage of human knowledge, we must admit that there is an element of caprice. Hence all scientific laws are unverifiable, to put the matter rigorously: first, because they are the instrument with which we make in the continuity of the primitive datum the indispensable parcelling out (morcelage) without which thought remains powerless and shut in; and again, because they constitute the criterion itself with which we judge the apparatus and methods which it is necessary to use in order to subject them to an examination, the accuracy of which may surpass all assignable limits.
Contingency and choice in the sphere of experimental science are emphatically denied by Poincaré. "All that the scientist creates in a fact," he says, "is the language in which he expresses it." We do not interfere with facts, except in so far as we select those which are relevant to our purpose. In experience relations are determined, not by experiment, but by inexorable laws which govern the succession of our sensation-complexes. "We do not copy reality" - that is true; but the laws which govern the sequences and combinations of sensations are fixed for us, and not by us. They are something which we experience as a datum, not something we arbitrarily construct; and these laws may be known by us at least à peu près.
This view, though doubtless the correct one, is hardly consistent with the doctrine that sensations and not material objects are the data of science. If it be the mind that groups sensations together and so forms sensation-complexes or objects, then, as M. Le Roy and Dr. Schiller affirm, such groupings may not always be precisely identical. Not only may modification and even mutilation of fact have occurred during the long process in which habits of perception have been built up and have become common to the race, but such modifications are still possible, since habits are only relatively constant and only approximately common to the race. Moreover, the significance of M. Poincaré's assertion that all we create in a fact is the language in which we express it is considerably modified when we compare it with another statement to the effect that "language is strewn with preconceived ideas"; for the latter, since their influence is unconscious, are far more dangerous than those which we deliberately formulate and make use of in hypotheses.
M. Le Roy's statement, therefore, that scientific laws are unverifiable because they are the instruments by means of which we parcel out the primitive datum of experience would seem to be valid in a pragmatic and evolutionary theory of knowledge. His second argument (granting the validity of his premises) is no less conclusive. When the correspondence-notion of truth is rejected, our only criteria of truth are utility and consistency, both of which are determined by the development and systematisation of science itself. Scientific laws, as M. Duhem has pointed out, mutually involve and imply each other's truth. Therefore, if in no individual case we can eliminate the subjective element and so prove that a law has arisen from the manifestation of reality itself to our minds, we have no right to assume one law to prove another; all laws, whether empirical or not, will be equally unverifiable in the pragmatic and pseudo-scientific theory of knowledge.
The unrestricted jurisdiction of the Princip der Denkökonomie points to the same conclusion, for, according to Professor Mach, this principle is not confined to the realm of physical theory, but is a general principle applicable to all forms of cognition alike. It governs the construction of the percepts and concepts of common-sense, just as it directs the scientist in the formulation of definitions and physical hypotheses. Efficiency depends upon economy; and efficiency, adaptation to environment, and practical utility for the control of sense-experience are the final aim, not only of physical theory, but of all human cognition.
The Pragmatism and Sensationalism of Mach and Karl Pearson, which is really a philosophical theory of knowledge, must be carefully distinguished from the view that in Physical Theory definitions and laws are merely symbolic formulae, useful for the classification, co-ordination and systematisation of scientific fact: for this view is held by many who, except on this point, are in no sense pragmatists either in regard to science or philosophy. A pragmatic interpretation of physical theory is, in fact, quite compatible with metaphysical Realism.
For instance, M. Duhem is a realist in regard to the notions of common-sense, yet he tells us that the aim of physical theory is "to construct a symbolic representation of what our senses, aided by instruments, make us know, in order to render easier, more rapid, and more sure, reasoning about experimental knowledge." Concepts for him, as for MM. Poincaré and Mach, are means to this end. Their function is symbolic. As definitions they are arbitrary, and in no way represent reality or reveal its inner rational structure. "Masses" are "coefficients which it is convenient to introduce into our calculations." "Energy" must not be confused with the force exerted by a horse in drawing a cart: It is merely "the function of the state of a system whose total differential in every elementary modification is equal to the excess of work over heat set free." Concepts as definitions form the basis of scientific deduction, but they do not reveal the nature of objective facts. The most they can do is to indicate certain experiences, and so enable us to verify the phenomenal relations which we have deduced by means of mathematical reasoning in which these symbolic definitions function as terms.
Some have endeavoured to find a similarity between M. Duhem's theory of chemical combination and the scholastic doctrine of matter and form. This, however, as he informs us in his work entitled Le Mixte et la combinaison chimique, in which his views on that subject are developed, is merely an analogy, and nothing more. "Forms," as conceived by the chemist and the physicist, are quantitative, not qualitative; whereas quality is of the essence of things, the nature of which it is the business of the metaphysician and not of the scientist to determine. Nevertheless, in spite of this denial that 'forms' in chemistry and in physics are comparable with the metaphysical forms of Aristotle, M. Duhem's standpoint is quite compatible with Realism; and it is so precisely because he relegates all questions as to the nature of quality and essence to Metaphysics.
The standpoint of M. Duhem differs essentially, therefore, from that of Karl Pearson and Mach; for, while carefully distinguishing physical theory from physical fact, M. Duhem does not identify the latter with sensation, but leaves it to the metaphysician to determine the ultimate nature of the data of experience. Again, it is only in Theory that postulation and symbolism are admitted by M. Duhem, and that we are allowed to construct and modify definitions at will. Mathematical Physics in the course of its development is independent of Experimental Physics, and uses a different method. In the latter we are bound down by empirical facts, whereas Mathematical Physics is free to disregard all facts till theory is complete, when it must be verified as a whole by comparing the conclusions which have been mathematically deduced with the complexes of experimental data. “In the course of its development a physical theory is free to choose whatever way it pleases, provided it avoids all logical contradiction; in particular, it is free to disregard the facts of experience.”
On the other hand, for Professor Mach, and apparently for
M. Poincaré also, symbolism, postulation and the principle of Thought-economy apply to theory and fact alike. The experimental differs from the mathematical stage only in this respect, that in the former we group under one name sensations which are actually present in consciousness, and our grouping is more or less spontaneous; whereas in the latter we arbitrarily combine symbols denoting sensation-complexes already grouped, and postulate that the new symbol shall denote actual groupings which have never as yet been given in consciousness.
The real difference, then, between Karl Pearson and Mach on the one hand and Duhem on the other is in regard to their philosophic standpoint. Both Karl Pearson and Mach, and, to some extent, Poincaré also, philosophise on the data of experience and on the development of knowledge in general; and their philosophy is pragmatic. M. Duhem declines to philosophise, and, if a pragmatist at all, is a pragmatist only in regard to the methodology of physical theory, an attitude which is quite consistent with philosophic Realism.
There is also a further difference between the views of M. Poincaré and M. Duhem in regard to the relation of Mathematical to Experimental Physics. M. Poincaré admits "truths founded on experience and verified almost exactly so far as concerns systems which are practically isolated," and these truths, he says, when generalised beyond the limits within which experience verifies them, become "postulates, applicable to the whole universe and regarded as rigorously true." "Mais le principe, désormais cristallisé pour ainsi dire, n'est plus soumis au contrôle de l'expérience. Il n'est pas vrai ou faux, il est commode." Thus such principles as Newton's Laws of Inertia and of the Equality of Action and Reaction, Lavoisier's Conservation of Mass, Mayer's Conservation of Energy, and Carnot's Degradation of Energy are axiomatic, though not a priori. They are suggested by facts, but are unverifiable, because in their absolute form they are mere conventions; and our right to postulate them lies precisely in this, that experience can never contradict them.
Mathematical Physics, on the other hand, for M. Duhem is entirely independent of experience throughout the whole process of its development. No hypothesis whatever can be verified till the theory of Physics is complete in every detail, for every physical law is "a symbolic relation the application of which to concrete reality supposes that one accepts quite a system of other laws." No individual physical law is, properly speaking, either true or false, but only approximate, and on that account provisional. It is sufficiently approximate to-day, but the time will come when it will no longer satisfy our demand for accuracy. Principles, therefore, which Milhaud, Le Roy, and Poincaré alike place beyond the control of experience are, says M. Duhem, either not physical laws at all (since every physical law must retain its meaning when we insert the words à peu près, which these do not) or else, when their consequences have been fully deduced, they must be rigorously subjected to the test of experience in the theory to which they belong, and with that theory stand or fall. In other words, Poincaré, admitting the existence of relatively isolated systems of experimental facts, thinks that it is possible to apply the process of verification to a physical theory in the course of its development; while Duhem, convinced that all physical laws are intimately connected, prefers to formulate a complete and self-consistent system of hypotheses before attempting to compare the consequences of any one of these hypotheses with experimental fact. A similar difference is manifest in regard to the method of teaching Physics. Poincaré prefers the inductive and experimental method. Duhem holds that physical theory should be presented to those who are capable of receiving it, in toto, and that experiments should serve merely as illustrations of different stages in its development.
This difference between two of our most eminent physicists, though great at first sight, can to some extent be explained. M. Duhem insists that all hypotheses must be verifiable à peu près if they are to have physical significance: consequently, there can be no laws in physical theory, when complete, which are not at least approximately true. On the other hand, the use of purely conventional hypotheses in the construction of a theory is allowable, provided they are ultimately verifiable in their systematic completeness. M. Poincaré points out that such conventional hypotheses are often experimental laws generalised beyond the limits within which they are verifiable, and so worded that they cannot, as such, be contradicted by experience. That in their most general form, as applicable to the whole universe, universal postulates of this kind cannot be verified directly, is obvious; for not only are they universal, but they are expressed in symbolic terms, such as energy and inertia, terms which it is almost impossible to translate into their corresponding sensations (if such there be). Yet, inasmuch as such postulates lead to particular conclusions about less abstract realities of which we can have immediate experience, inasmuch as their function is to guide us in the construction of hypotheses which are verifiable à peu près, and so have a physical sense, it may be said that even the most abstract laws and the most general principles can be verified indirectly through their consequences.
Another of the great causes for argument (and not merely philosophical argument either) stems from real or imagined or misunderstood differences in the basic assumptions upon which a discussion is based. Two people will find it very difficult to communicate productively if they do not share a common language. As a Canadian, I would find it impossible to communicate with a Russian, for example, if we shared no common language, especially if the communication were taking place over the telephone, where even sign language or body language would not be possible. The trouble starts in many arguments when the parties to the discussion appear to speak the same language. Consider English, for example. Many pundits suggest, not entirely in humour, that the greatest difficulties in Anglo-American relations stem from the fact that the two nations speak what is supposed to be the same language. The problem of missed communication is hidden because everybody is using the same words. Just because the audio sounds are the same, everybody assumes that the meanings are the same. It is frequently discovered, after much fruitless argument and disagreement, that the two "opposing" positions in the argument are not as far apart as was initially believed. When the actual meanings are made clear, the fog created by the words used can be more easily cleared. This is one reason why good diplomats and mediators are so often successful. They make a special effort to get past the words to the meaning.
For this reason, Evolutionary Pragmatism must begin with detailed definitions of the meanings behind the words being employed. But before getting into the philosophical discussion, it would be advantageous to digress a little and talk a bit more about the nature and consequences of Axiom Systems.
To prevent any misunderstanding, in the discussion that follows, the terms "starting axioms", "a priori postulates", "initial postulates", and "underlying basic assumptions" are going to be used relatively interchangeably. They all mean roughly the same thing. They all refer to the basic starting propositions upon which the rest of a discussion or logical analysis is based. Frequently, these starting positions are left unsaid. And therein lies the problem. Since they are left unspoken, the assumptions that you start with may not be the same as the assumptions the other party starts with. In most cases, this may not cause any more than temporary difficulties. But in the case of the development of an entire system of philosophical argument, the starting points are critical.
Every structured system of reasoning has a set of basic axioms. Mathematics (of which the discipline of Deductive Logic is a part) is the only reasoning system that makes it a basic fundamental rule to detail and document the axioms before starting the reasoning. But all systems of reasoning do have their axioms.
The most fundamental and significant aspect of starting axioms is that they are not "provable." Axioms are axioms because there is no way to derive them from more fundamental principles, and no way to "prove" them by using information drawn from experience. Axioms, by their nature, can be neither "deduced" nor "induced." If there were a way to deduce them from other principles, then those other principles would become the axioms, and the statement that was just proved would become one of the theorems or consequences of the more basic axioms. If there were a way to induce them from experience, then they would become a "law" in the sense of the definition provided at the start of this chapter, rather than an axiom.
It is also in the nature of starting axioms that the structure of any reasoning built upon them is intimately dependent upon them. To use an example from Mathematics, let us consider the field of Geometry. Most of us are familiar with the geometry of the plane, called "Euclidean Geometry." Many of us studied this geometry in high school, and many of us can remember some of the Theorems that we laboured to prove during the course of our studies. Remember the struggle to use a set of limited deductive rules to prove a "New Idea"? Do you recall the geometric proof that the interior angles of a triangle (drawn on a Euclidean plane) sum to 180 degrees?
All of Euclidean Geometry is based on five basic assumptions that Euclid made about the nature of the plane upon which he played his geometry game. It also makes use of a carefully defined and quite limited set of rules of logical deduction. By employing these rules, new Theorems can be deduced from the basic axioms. Once deduced, these theorems can be used as part of the proof of another new Theorem. In this way, an entire set of knowledge about lines and polygons can be developed. All from a set of deduction rules, and five basic axioms.
Although most people are at least passing familiar with Euclidean Geometry, not many are familiar with Riemannian Geometry . . . Riemann (Georg Friedrich Bernhard Riemann, 1826-1866) was a mathematician of the 19th century. He examined Euclid's five basic axioms and decided to see what would result if he modified them. The result of his “relaxation” of only a single one of Euclid’s five axioms was his development of Riemannian Geometry. Euclid’s 5th axiom specified (approximately, in readable English)
"Through any given point, only one line can be drawn parallel to another line." What Riemann did was eliminate this axiom and allow a variable number of lines to be drawn parallel to a given line. When he examined the result of this change, he found that there were only three answers that would result in a consistent set of Theorems. (The importance of consistency we will examine later.) Riemann found that through any given point, zero, one, or an infinity of lines can be drawn parallel to another line. What he had discovered was that there is a different kind of geometry, yielding different theorems and different answers to simple questions, depending upon the detailed specification of the "Fifth Axiom." These geometries together form the body of mathematics known as Riemannian Geometry. Euclidean Geometry is now understood by mathematicians to be a "special case" of Riemannian Geometry. That is to say, by setting a free variable (the number of lines that can be drawn parallel to another, more properly expressed as the curvature of the plane) to a specific value, Riemannian Geometry looks like Euclidean Geometry. The difference is that to Euclid, Euclidean Geometry was all there was, while to modern mathematicians, the geometry of the plane is but one case of the more generalized Riemannian Geometry. Riemannian Geometry is said to "contain" Euclidean Geometry. The fact that Riemannian Geometry contains Euclidean Geometry does not make Euclidean Geometry wrong. Given the starting assumptions of Euclidean Geometry, the rest of the discipline is properly deduced, and self-consistent. It is merely that the starting assumptions have a determining impact on the resulting structure.
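A concrete way to see how the choice of axiom changes the answers (a standard result added here for illustration, not part of the original text) is the angle-sum formula for a geodesic triangle of area A drawn on a surface of constant curvature K:

\[
\alpha + \beta + \gamma = \pi + K A
\]

With K = 0, the flat plane of Euclid, the angles sum to exactly 180 degrees; with K > 0, the case in which no parallels can be drawn through an external point, they sum to more; with K < 0, the case with an infinity of parallels, they sum to less. The "free variable" mentioned above is precisely this curvature.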
Two aspects of this comparison between Euclidean and Riemannian Geometry are of significance to this discussion of starting axioms. The first thing to understand from this example is that the resulting body of knowledge deducible from the starting axioms, even following the same rules of deduction, can look quite different as a result of seemingly small changes in the axioms. The second important thing to understand is that the answers that the two Axiom Systems give to seemingly simple questions can be quite different. To a Euclidean geometer, the sum of the interior angles of a triangle is a constant (180 degrees). To a Riemannian geometer, the sum is a function of the curvature of the plane. Since the Euclidean geometer cannot even comprehend that the plane can curve, the two cannot properly hold a meaningful discussion on this subject. It is but one example of the confusion that results when two people attempt to communicate, but do not share a common ground. They are building on different postulates.
So too with philosophy. Unless there is an agreement on starting postulates, philosophical discussion will be fruitless. The necessary agreement need not be complete, merely sufficient to allow a common basis upon which to found the discussion. The two geometers could, for example, agree to discuss the geometry of the plane. The Euclidean geometer would regard this as a discussion of "the whole shebang," while the Riemannian geometer would regard it as a discussion of only a special subset of "the whole shebang." This might cause some confusion if the argument digressed into the nature of "the whole shebang," but would be sufficient to allow specific discussion on, say, the sum of the interior angles of a triangle.
The point of the foregoing diversion is that you must decide whether or not it is worth your while to continue reading. If you cannot agree with the basic starting axiom upon which Evolutionary Pragmatism is built, even if simply for the sake of curiosity and interest in what follows, then further efforts at understanding Evolutionary Pragmatism on your part will be a waste of your time. If you determine that you cannot accept the Basic Axiom, for whatever reason, then the rest of this work will be meaningless. You will not even be able to argue with its conclusions, deductions, or hypotheses, for we would have no common ground upon which to base further understanding or discussion. Your thought processes would be sufficiently alien to mine that we would probably not be able to discuss anything more relevant than the weather, and probably not even that. The Basic Axiom is this: Reality is objective (independent of any observer), constant (permits repeatable observations), and self-consistent (does not exhibit mutually contradictory cause-effect relationships).
If, on the other hand, you can accept this Basic Axiom of Evolutionary Pragmatism, then you may find the rest of this work just as stimulating as it is interesting. With the foregoing out of the way, we can proceed to develop the principles of Evolutionary Pragmatism from this concept of "Reality."
Another account of truth is pragmatism (James 1909; Papineau 1987). As we have observed, the verificationist selects a prominent property of truth and considers it to be the essence of truth. Similarly, the pragmatist focuses on another important characteristic, namely, that true belief is a good basis for action, and takes this to be the very nature of truth. True assumptions are said to be, by definition, those that provoke actions with desirable results. Once more, we have an account with a single attractive explanatory characteristic; but once more, the relationship it postulates between truth and its alleged analysans, in this case utility, is implausibly close. Granted, true belief tends to foster success, but it happens regularly that actions based on true beliefs lead to disaster, while false assumptions, by pure chance, produce wonderful results.
One of the few uncontroversial facts about truth is that the proposition that snow is white is true if and only if snow is white, the proposition that lying is wrong is true if and only if lying is wrong, and so on. Traditional theories acknowledge this fact but regard it as insufficient and, as we have seen, inflate it with some further principle of the form "X is true if and only if X has property P" (such as corresponding to reality, being verifiable, or being suitable as a basis for action), which is supposed to specify what truth is. Some radical alternatives to the traditional theories result from denying the need for any such further specification (Ramsey 1927; Strawson 1950; Quine 1990). For example, we might suppose that the basic theory of truth contains nothing more than equivalences of the form "the proposition that p is true if and only if p" (Horwich 1990).
This sort of proposal is best given together with an account of the raison d'être of our notion of truth, namely, that it enables us to express attitudes toward those propositions we can designate but not explicitly formulate. Suppose, for example, you are told that Einstein's last words expressed a claim about physics, an area in which you think he was very reliable. Suppose that, unknown to you, his claim was the proposition that quantum mechanics is wrong. What conclusions can you draw? Exactly which proposition becomes the appropriate object of your belief? Surely not that quantum mechanics is wrong, because you are not aware that that is what he said. What is needed is something equivalent to the infinite conjunction: "If what Einstein said was that E = mc², then E = mc²; and if what he said was that quantum mechanics is wrong, then quantum mechanics is wrong; . . . and so on."
That is, what is needed is a proposition, 'K', with the following property: from 'K' and any further premise of the form "Einstein's claim was the proposition that p," you can infer 'p', whatever it is. Now suppose, as the deflationist says, that our understanding of the truth predicate consists in the stipulative decision to accept any instance of the schema "The proposition that p is true if and only if p." Then your problem is solved. For if 'K' is the proposition "Einstein's claim is true," it will have precisely the inferential power needed. From it and "Einstein's claim is the proposition that quantum mechanics is wrong," you can use Leibniz's Law to infer "The proposition that quantum mechanics is wrong is true," which, given the relevant axiom of the deflationary theory, allows you to derive "Quantum mechanics is wrong." Thus, one point in favour of the deflationary theory is that it squares with a plausible story about the function of our notion of truth, in that its axioms explain that function without the need for further analysis of what truth is.
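The inference just described can be set out step by step. The following schematic derivation is only an illustration in notation added here (it is not in the original text), writing ⟨p⟩ for "the proposition that p":

\[
\begin{aligned}
&(1)\ \text{Einstein's claim} = \langle \text{quantum mechanics is wrong} \rangle && \text{premise}\\
&(2)\ \text{Einstein's claim is true} && \text{premise}\\
&(3)\ \langle \text{quantum mechanics is wrong} \rangle \text{ is true} && \text{from (1), (2) by Leibniz's Law}\\
&(4)\ \langle \text{quantum mechanics is wrong} \rangle \text{ is true} \leftrightarrow \text{quantum mechanics is wrong} && \text{equivalence schema}\\
&(5)\ \text{quantum mechanics is wrong} && \text{from (3), (4)}
\end{aligned}
\]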
Not all variants of deflationism have this virtue. According to the redundancy/performative theory of truth, the pair of sentences "The proposition that p is true" and plain "p" have the same meaning and express the same statement as one another, so it is a syntactic illusion to think that "p is true" attributes any sort of property to a proposition (Ramsey 1927; Strawson 1950). Yet in that case it becomes hard to explain why we are entitled to infer "The proposition that quantum mechanics is wrong is true" from "Einstein's claim is the proposition that quantum mechanics is wrong" and "Einstein's claim is true." For if truth is not a property, then we can no longer account for the inference by invoking the law that if 'X' is identical with 'Y' then any property of 'X' is a property of 'Y', and vice versa. Thus the redundancy/performative theory, by identifying rather than merely correlating the contents of "The proposition that p is true" and "p", precludes the prospect of a good explanation of one of truth's most significant and useful characteristics. So it is better to restrict ourselves to the weaker claim embodied in the equivalence schema: the proposition that p is true if and only if p.
Additionally, it is commonly supposed that problems about the nature of truth are intimately bound up with questions as to the accessibility and autonomy of facts in various domains: questions about whether the facts can be known, and whether they can exist independently of our capacity to discover them (Dummett 1978; Putnam 1981). One might reason, for example, that if "T is true" means nothing more than "T will be verified," then certain forms of scepticism, specifically those that doubt the correctness of our methods of verification, will be precluded, and the facts will have been revealed as dependent on human practices. Alternatively, it might be said that if truth were an inexplicable, primitive, non-epistemic property, then the fact that T is true would be completely independent of us. Moreover, we could in that case have no reason to assume that the propositions we believe actually have this property, so scepticism would be unavoidable. In a similar vein, it might be thought that a special, and perhaps undesirable, feature of the deflationary approach is that truth is deprived of such metaphysical or epistemological implications.
Upon closer scrutiny, it is far from clear that there exists any account of truth with consequences regarding the accessibility or autonomy of non-semantic matters. For although an account of truth may be expected to have such implications for facts of the form "T is true," it cannot be assumed without further argument that the same conclusions will apply to the fact T; for it cannot be assumed that T and "T is true" are equivalent to one another, given the account of 'true' that is being employed. Of course, if truth is defined in the way that the deflationist proposes, then the equivalence holds by definition. But if truth is defined by reference to some metaphysical or epistemological characteristic, then the equivalence schema is thrown into doubt, pending some demonstration that the truth predicate, in the sense assumed, will indeed satisfy it; and in so far as there are epistemological problems hanging over T that do not threaten "T is true," the needed demonstration will prove difficult. Similarly, if 'truth' is defined by reference to some metaphysical characteristic, then again it is unclear that the equivalence schema will hold. It would seem, therefore, that the attempt to base epistemological or metaphysical conclusions on a theory of truth must fail, because in any such attempt the equivalence schema will be simultaneously relied on and undermined.
The identity theory of truth is notably absent from textbook discussions of truth; and there is controversy over whether it is a theory of truth at all. Those who think that it is not are likely to make one or both of the following objections: It is obviously absurd; no one has ever held it. The remainder of this section is devoted to considering these two objections.
The identity theory is clearly absurd from the point of view of those who, for instance, believe that truth-bearers are sentences and truth-makers non-linguistic states of affairs. But it may be available to those who hold the kinds of metaphysical views which make truth-bearers and truth-makers more alike. (For ease of expression, the vocabulary of "judgments" and "facts" is used here for truth-bearers and truth-makers respectively, recognizing that these terms can be tendentious, especially in expressing the views of philosophers who abjured them.)
Some philosophers have tried to make judgments more like facts. Russell, reacting against idealism, at one stage adopted a view of judgment which did not regard it as an intermediary between the mind and the world: instead, the constituents of judgments are the very things the judgments are about. This involves a kind of realism about judgments, and looks as though it offers the possibility of an identity theory of truth. But since both true and false judgments are equally composed of real constituents, truth would not be distinguished from falsehood by being identical with reality; an identity theory of truth is thus unavailable on this view of judgment because it would be rendered vacuous by being inevitably accompanied by an identity theory of falsehood. Those who have held this sort of view of judgments, such as Moore and Russell, have accordingly been forced to hold that truth is an unanalyzable property of some judgments. If one looks for an identity theory here, one finds what might be called an identity theory of judgment rather than of truth. Less brutally condensed accounts of these matters can be found in Baldwin (1991).
Other philosophers, notably those who have held the idealist view that reality is experience, have implied that facts are more like judgments. One such is F.H. Bradley, who explicitly embraced an identity theory of truth, regarding it as the only account capable of resolving the difficulties he finds with the correspondence theory. The way he reaches it is worth describing in a little detail, for it shows how he could avoid allowing the theory to be rendered vacuous by an accompanying identity theory of falsehood.
Bradley argues that the correspondence theory's view of facts as real and mutually independent entities is unsustainable: the impression of their independent existence is the outcome of the illegitimate projection onto the world of the divisions with which thought must work, a projection which creates the illusion that a judgment can be true by corresponding to part of a situation: as, e.g., the remark "The pie is in the oven" might appear to be true despite its (by omission) detaching the pie from its dish and the oven from the kitchen. His hostility to such abstraction ensures that, according to Bradley's philosophical logic, at most one judgment can be true: that which encapsulates reality in its entirety. This allows his identity theory of truth to be accompanied by a non-identity theory of falsehood, since he can account for falsehood as a falling short of this vast judgment and hence as an abstraction of part of reality from the whole. The result is his adoption of the idea that there are degrees of truth: that judgment is the least true which is the most distant from the whole of reality. Although the consequence is that all ordinary judgments will turn out to be more or less infected by falsehood, Bradley allows some sort of place for false judgment and the possibility of distinguishing worse from better. One might argue that the reason the identity theory of truth remains only latent in Russell and Moore is the surrounding combination of their atomistic metaphysics and their assumption that truth is not a matter of degree.
For Bradley, then, at most one judgment can be fully true. But even this one judgment has so far been conceived as describing reality, and its truth as consisting in correspondence with a reality not distorted by being mentally cut up into illusory fragments. Accordingly, even this one judgment, for the very reason that it remains a description, will be infected by falsehood unless it ceases altogether to be a judgment and becomes the reality it is meant to be about. This apparently bizarre claim becomes intelligible if seen as both the most extreme expression of his hostility to abstraction and a reaction to the most fundamental of his objections to the correspondence theory, which is the same as Frege's: that for there to be correspondence rather than identity between judgment and reality, the judgment must differ from reality; and in so far as it does differ, to that extent it must distort and so falsify it.
Thus Bradley's version of the identity theory turns out to be misleadingly so-called. For it is in fact an eliminativist theory: when truth is attained, judgments disappear and only reality is left. It is not surprising that Bradley, despite expressing his theory in the language of identity, talked of the attainment of complete truth in terms of thought’s suicide. In the end, then, even the attribution of the identity theory of truth to one who explicitly endorsed it turns out to be dubious.
More recently there have been attempts, consciously taking inspiration from Frege, to defend a metaphysically neutral version of the theory: holding that truth-bearers are the contents of thoughts, and that facts are simply true thoughts rather than the metaphysically weighty sorts of things envisaged in correspondence theories. That is, the identity is not conceived as a (potentially troublesome) relation between an apparently mind-dependent judgment and an apparently mind-independent fact. A claimed benefit of this version is that it is not immediately disabled by the inevitable accompaniment of an identity theory of falsehood. The difficulty for these attempts is to make out the claim that they involve a theory of truth at all, since they lack independent accounts of truth-bearer and truth-maker to give the theory substance.
The most thorough account of this type is found in Dodd (2000). But although Dodd professes adherence to an identity theory, what he actually defends is a variety of deflationism: "truth is nothing more than that whose expression in a language gives that language a device for the formulation of indirect and generalized assertions." What became of the identity theory? The answer lies in the fact that Dodd conceives his identity theory as consisting entirely in the denial of correspondence and the identification of facts with true thoughts. It actually has nothing to say about the nature of truth as traditionally conceived, offering no definition of "is true" and no explanation of what truth consists in or of the difference between truth and falsehood. This theory is "modest," to use Dodd's expression, as opposed to "robust" identity theories, which begin from the independent conceptions of fact (conceived as truth-maker) and proposition (conceived as truth-bearer) employed in correspondence theories, and then attempt in one way or another to eliminate the apparent gap between them. Dodd's view is that his "modest" theory gets some bite from its opposition to correspondence theories; and he urges (as does Hornsby) that we should anyway scale down our expectations of what a theory of truth can provide. However, the history of identity theories of truth reveals them as tending to mutate into other theories when put under pressure, as one can see from the discussion above. Dodd holds that this is a problem only for robust theories. Yet his theory also exemplifies the tendency: in the end, it evolves into deflationism.
Although it is difficult to find a completely uncontroversial attribution of the identity theory, there is evidence of its presence in the thought of a few major philosophers. As one might expect, mystical philosophers attracted by the idea that the world is a unity express views which at least resemble the theory. Bradley may also fall into this category; in any case, he and Frege have already been mentioned. Bolzano and Meinong are other possibilities: Findlay, for example, believes Meinong to have held an identity theory, reminding us that on his view there are no entities between our minds and the facts: facts themselves are true in so far as they are the objects of judgments. C.A. Baylis defended a similar account of truth in 1948, and Roderick Chisholm endorsed a recognizably Meinongian account in his Theory of Knowledge. A sketchy version of the theory is embraced in Woozley's Theory of Knowledge. There are also the attempts, once again already mentioned, to establish a metaphysically neutral version: these show that there can be no doubt that some philosophers have tried to defend something that they wished to call an identity theory of truth.
Thomas Baldwin argues that the identity theory of truth, though itself indefensible, has played an influential but subterranean role within philosophy from the nineteenth century onwards, citing as examples philosophers of widely different convictions. One of his attributions is queried in Stern (1993), others in Candlish (1995). Whether or not Baldwin is right - and it is possible that the theory is no more than a historical curiosity - the identity theory of truth in its full-blooded form may turn out to be best thought of as comparable to solipsism: Rarely, if ever, consciously held, but the inevitable result of thinking out the most extreme consequences of assumptions which philosophers often just take for granted.
By any measure, the most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein (1889-1951), and is a leading idea of Davidson (1917-2003). The conception has remained so central that those who offer opposing theories characteristically define their positions by reference to it.
The conception of meaning as truth-conditions need not, and should not, be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should instead be targeted at the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. The truth-condition of a statement is simply the condition the world must meet if the statement is to be true, and to know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth-condition can only be stated by repeating the very statement: the truth-condition of "snow is white" is that snow is white; the truth-condition of "Britain would have capitulated had Hitler invaded" is that Britain would have capitulated had Hitler invaded.
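Displayed as a schema in the familiar Tarski style (a standard presentation added here for clarity, not a quotation from the text), the repetition looks like this:

\[(\mathrm{T})\quad \text{`}S\text{' is true} \leftrightarrow p\]

where "S" names an indicative sentence and p is that very sentence used rather than mentioned; for instance,

\[\text{`snow is white' is true} \leftrightarrow \text{snow is white}.\]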
This element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to see it in a network of inferences.
The philosophy of language asks what it is that turns what would otherwise be mere sounds and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power and to relate it to what we know of ourselves and of the surrounding world. Contributions to the study include the theory of speech acts and the investigation of communication, as well as of the relationship between words and ideas and between words and the world. A much-discussed complication is that what a speaker expresses by a sentence can depend on the speaker's environment: the disease I refer to by the term "arthritis", or the kind of tree I refer to as a "birch", may be fixed by facts about my surroundings, as is brought out by imagining two persons placed in different environments to whom everything nonetheless appears the same. Content is an essential component of understanding, and any intelligible proposition that is true must be capable of being understood. What is expressed by an utterance or sentence is the proposition or claim made about the world; the content of a predicate or other sub-sentential component is what it contributes to the content of the sentences that contain it. The nature of content is the central concern of the philosophy of language.
Rather than beginning from an abstract definition of meaning, one can start from a simple model of linguistic exchange. Humans utter sounds, and they typically pick the sound that they predict will improve the odds that something they desire will happen. The utterance can be "wahhhh!", "Goobalyblock", "Give me all your money", "What time is it?", "Cómo estás?", "no", "I'm sorry", or "thank you". The listening party will then do whatever he or she feels like: ignore it, answer a question, perform a cartwheel in response to a question, lie, disregard or obey a command, and so on. This process repeats as long as either party can and wants to talk.
There is no necessity in this model for words to mean the same thing from speaker to speaker. If we are both bilingual, I could ask you questions in German that you answer in English. It is important, though, that each party knows what the other party means by each word. If two languages have the same syntax, it could be construed that they are both one language and that it just so happens that people who know a certain section of the vocabulary rarely know the other section, and vice versa. By the same token, if we mean different things by certain words, we could be considered to be speaking two separate (although very similar) languages.
Because all statements are about reality, it is impossible for two rational, honest humans to disagree about the truth of any utterance (if they are speaking the same language). It is also impossible that two rational humans would make different choices, regardless of the language they spoke, given that both knew everything, could calculate anything instantaneously, were in the exact same position, and had the exact same desires. It is quite apparent, moreover, that the meaning of any collection of symbols or utterances can only be determined if the language it is in is known.
Any statement, question, or command, together with certain assumptions, can be viewed as a system of equations. This requires that we are already aware of the truth of what the statement translates to and that we can determine how statements are separated. Even if all of those conditions were met, there are still infinitely many solutions to the equations (if the meaning of no word is known from the start). So anything that is communicated is interpretable only if the language it is in is known.
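As an arithmetical analogy (offered here only as an illustration of the underdetermination claim, not as part of the original argument), a single constraint on two unknowns,

\[x + y = 5 \;\Rightarrow\; \{(x,\, 5 - x) : x \in \mathbb{R}\},\]

has infinitely many solutions; likewise, knowing only that a whole utterance is true constrains the joint assignment of word-meanings without determining the meaning of any single word.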
As an interesting side note, since humans do learn some words and concepts this way, they can be mistaken. For example, every other speaker may believe we are in some situation similar to that presented in "The Matrix". And so that they don't have to state "in the matrix" so often (e.g., "In the matrix my cat is black"), they define "is" to mean the concept you would visualize given the you-English definition of "is, in the matrix".
Saul Kripke claimed that there are exactly two kinds of meaning. One cannot dispute this without being given a definition of the concept of "meaning" he was using. Instead, I will present for acceptance a definition of one possible concept of meaning and illustrate that disagreement is impossible on this issue. Hopefully the concept of meaning provided will be similar to the one to which we refer when we speak, so that this issue will be relevant.
The truth and meaningfulness of a statement are not functions of that statement alone. Obviously the language in which an utterance is spoken can totally change its truth or meaningfulness (some sound could mean one thing in one language and another thing in a different language). So exactly what "the King of France is bald" means, and whether or not it is false, depends only on terminology. This terminology is the definition of "false", "meaning", and "is", as well as the "language" being spoken.
Any definitions can be used for those terms. I will offer, as an example, a set of definitions for which the answer is obvious. For any set of concise definitions, the answer to questions such as "Is that statement false or meaningless [in a specific language]?" will be apparent, or the question itself will be unclear. To begin: (1) Language: "a method used to convert sounds/symbols into processes (or vice versa)." By this definition, English is not a language. English doesn't state a clear set of standards for interpreting errors (unlike programming languages), so English cannot interpret all text. So by my definition set we cannot analyse a question like "Is 'The King of France is bald' false or meaningless?" until we specify what language (by my provided definition of language) is being used. Whatever language we came up with, it would state how to handle all situations. Some situations, which may be labelled "errors", would include: groups of characters that aren't words, syntax violations, contradictions in plurality/quantity/existence, conflicting statements, etc. The King of France example is simply an error. So, just to finish the example with an answer, let's ask whether the statement is meaningless in my language at this moment. By my sample definitions, in my language, the statement "The King of France is bald" is meaningless. That is to say, it causes an error, and so all mental processes associated with that statement (updating my knowledge of the world) are cancelled. So there are no processes, and it is therefore meaningless. Nonetheless, my mentality will be aware that something was said and of the specifics of what was said, and will likely decide it is efficient to debug the error by asking, "France doesn't have a king at the moment; were you referring to a different country?" Just for the record, he responds, "No, I was impersonating an Englishman who lived a long time ago." It should be realized that this whole issue lacks practical applications. The only reason for explaining how to untangle this mess of semantics is to illustrate that it can be done objectively, and perhaps to shed light on language.
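A minimal sketch in Python of the picture just described, treating a "language" as a procedure that converts symbols into processes and flags presupposition failures as errors; the function name, the toy knowledge base, and the hard-coded sentence are illustrative assumptions, not part of the discussion above:

# Illustrative sketch only: a "language" as a procedure that turns
# symbols into processes, treating presupposition failure as an error.

WORLD = {"France": {"king": None}}  # toy knowledge base: France currently has no king

def interpret(sentence):
    """Convert a sentence into a 'process' (here, a belief update),
    or raise an error if a presupposition fails."""
    if sentence == "The King of France is bald":
        king = WORLD["France"]["king"]
        if king is None:
            # No process is produced, so on this account the sentence
            # is 'meaningless' rather than false.
            raise ValueError("France doesn't have a king at the moment; "
                             "were you referring to a different country?")
        return {"update": (king, "bald", True)}
    return {"update": ("noted", sentence, True)}

try:
    interpret("The King of France is bald")
except ValueError as err:
    print("error (debugging the utterance):", err)

On this toy picture, the error-handling clause is exactly what English, unlike a programming language, fails to specify.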
Language and philosophy have an intimate connection to one another; without a philosophical examination of the meanings and structure of language, we cannot easily ascertain the objective truth of the statements we make, nor can we usefully discuss abstract concepts. The philosophy of language seeks to understand the concepts expressed by language and to find a system by which it can effectively and accurately do so. This is more difficult than it appears at first; philosophers are looking for a theory of language which avoids the minute errors of meaning and usage which occur in all discussions of abstract concepts and which tend to lead those discussions into complicated dead-ends.
Since so much of philosophy is currently concerned with the linguistic representation of reality, the bond between the philosophical and the linguistic is growing stronger. Philosophers can write syntax for the languages they want to use in expressing theory only with some knowledge of linguistics, and linguists can use philosophical principles to solve problems of meaning and syntax. This strong link can be exploited to the advantage of both sides.
In recent history philosophers have struggled with the question of precision in language and have sought to construct a system or structure under which meanings can be discussed without danger of falling into circular or metaphysical traps. Two major approaches to this question arose in scientific circles of the twentieth century. Logical empiricism, also known as logical positivism, seeks to produce a language which consists of symbols combined precisely in accordance with specific rules; this would eliminate the philosophical convolutions that arise from the use of imprecise and confusingly ordered language. Ordinary language theory, on the other hand, suggests that these philosophical problems appear when language is used improperly: the language itself is perfectly acceptable and can be easily applied to the discussion of abstract and philosophical concepts without undue modification, as long as it is used and interpreted properly. Each of these movements in linguistic philosophy had its strengths and weaknesses, and its supporters and detractors.
Pure metaphysical speculation which is not based on fact is, to the empiricists, neither relevant nor useful. The only truth, in this philosophy, is that which is mathematically provable or experimentally observable. This truth can be divided into two categories: analytic truths are based on inherent meanings and can be observed through the application of reason, if not experiment; synthetic truths are those facts which are obtained from the experience of reality. How does this apply to linguistic philosophy? Any system of communication must, in order to be meaningful, include some way to represent the truth accurately; any empiricist will tell you that this truth is only valuable and meaningful if it can be considered absolute and provable. In order to be perfectly accurate in representing the truth, language must conform to a certain set of specifications designed to prevent it from wandering into speculation, and under which it becomes possible to ascertain absolutely the truth of any statement. These rules are called formalist semantics. Syntax differs from semantics in that syntax guides the proper formation of elements of a language into statements, whereas semantics consists of the correct association of elements of language with elements of the real world.
In one view, philosophy itself cannot be anything other than logically empirical because the purpose of philosophy is to elucidate and clarify truth, and this clarification consists of the examination of language to see that it conforms to the concrete facts of reality. The philosopher's task is to analyse language and untangle the convolutions of common language into the simplicity of logical language. Sengupta quotes Ayer, one of the mainstays of the logical positivism movement, as saying that the philosopher “is not concerned with the physical properties of things. He is concerned only with the way in which we speak about them.” Thus philosophy must be empiricist and formalistic in order to discuss aspects of reality with accuracy and truth.
The names most commonly associated with logical positivism come from the philosophical movement known as the Vienna Circle. This group of philosophers included such famous names as Carnap, Schlick, and Gödel; their main goal was to establish a context for philosophical thought which relied on observable fact and disregarded the metaphysical. They came together in 1920s Vienna to create a philosophy that would oppose metaphysics and provide a basis for clarity in science.
In the 1930s, Rudolf Carnap, one of the chief logical positivists, developed what he called Logische Syntax. The aim of this work was to describe a theory of the logical syntax of language that would allow meaningful sentences to be created without reference to the meanings of specific symbols or words. This theory would be held to the same strict standards as a scientific theory. Essentially, logical syntax formed a mathematical model of language which could be manipulated and proved just as any other mathematical or logical construct. It is a reductionist model, attempting to show that the logic of sentences is based upon the order of the word-symbols in a sentence and does not require any reference to anything outside that sequence (i.e., reference to the semantical associations of the word-symbols) in order to be meaningful. In other words, a sentence arranged according to formalistic syntax will be meaningful no matter what the word-symbols themselves represent.
G.E. Moore, although not an original member of the Vienna Circle, was another philosopher whose work furthered the aims of logical positivism; he helped to formulate the movement's view of the purpose of philosophy. In his view, as Qadir summarizes, "the business of Philosophy is clarification and elucidation of concepts and not the discovery of facts." This concept is one of logical positivism's unique identifiers, distinguishing it from definitions of philosophy which purport that the task of the philosopher is to provide new knowledge. Syed Ataur Rahim, a professor at the University of Karachi, composed for his doctoral thesis a reply on behalf of metaphysics to the attacks posed by logical positivism. He asserts that the logical positivists misdefine metaphysics in their attempt to disarm it; metaphysics is not meaningless or unrelated to facts, but rather is "an epistemologic-ontological inquiry." Rational metaphysical inquiry is necessary for the construction of rational thoughts and for the furthering of scientific inquiry; unobservable ideas, which would be considered irrelevant by logical positivists, must be entertained before the formation of rational explanations about reality can take place. Therefore, a language based only on the concepts of logical positivism would be bereft of the contributions of metaphysics, and would lead only to a sterile field of scientific thought.
During its prime, the movement contributed several theories to the study of language. Its distinction between cognitive and non-cognitive meanings separated the functions of language into informational and emotional categories; only the first category, however, was considered meaningful by the empiricists. Logical problems arise when the second category is treated in the same way as the first; it must be remembered that a statement with qualities of emotion or appeal is not subject to distinctions of truth or falsity, and is therefore meaningless. Carnap suggested that both of these two categories could be found in any one sentence; but only the cognitive portion of the sentence was significant; all emotive value was irrelevant. Additionally, a line could be drawn between the significant and the non-significant uses of language; questions of religion and ethics were considered insignificant, since they did not refer primarily to aspects of the material world, and only language which could be dealt with utilizing methods of empirical proof was considered significant.
The ideal language sought by the empiricists has several identifiable properties. It was intended primarily to add conventions to language at points where the guidelines of semantics and syntax were loose enough to allow metaphysical speculation or, equivalently, nonsense. For instance, Katz cites Carnap, one of the founders of the Vienna Circle, as suggesting the inadequacy of normal grammar because it allows sentences such as "Caesar is a prime number" to be considered grammatically correct. This ideal is based on mathematical and logical models; it represents the structure of a natural language but supposedly eliminates the vaguenesses and possibilities for misconception to which natural languages are prone.
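A small Python sketch of the point Carnap is credited with: in a grammar that checks only the arrangement of word-symbols, "Caesar is a prime number" comes out well-formed. The category assignments below are illustrative assumptions, not Carnap's own system:

# Illustrative sketch only: well-formedness judged purely by the order of
# word-symbol categories, without regard to what the symbols denote.

CATEGORIES = {
    "Caesar": "NP",
    "snow": "NP",
    "is": "COPULA",
    "a prime number": "PRED",
    "white": "PRED",
}

def well_formed(words):
    """Accept any sequence of the form NP COPULA PRED."""
    pattern = [CATEGORIES.get(w) for w in words]
    return pattern == ["NP", "COPULA", "PRED"]

print(well_formed(["Caesar", "is", "a prime number"]))  # True: grammatical,
# even though, for the positivists, the sentence is semantic nonsense.

This is the kind of looseness the ideal language was meant to close off.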
In essence, logical empiricists have several main postulates. Anything that is not based on rational fact is not eligible for philosophical contemplation. Philosophy’s main concern is with science and scientific language. The only way to make sure that language remains factual is to devise an artificial language that can only be used to create statements which refer to analytically or synthetically true information.
Formalism does have its faults, however. It presumes that all meaningful concepts can be expressed in terms of synthetically or analytically provable language, which may not necessarily be true; such a view eliminates the possibility of statements which have validity although we have yet to find methods of proving them. It is difficult to translate assertions from the ideal logical language into meaningful common language, since the former assertions are devoid of the added non-cognitive meanings which we commonly use to clarify cognitive meanings; and formalistic syntax cannot accommodate certain concepts of significance that are associated with normal language, nor can it express ideas above a certain level of complexity, since increased complexity often incorporates philosophical concepts which have a partially metaphysical nature.
In some ways the absolute refusal of empiricism to accommodate the idea of the existence of concepts other than the concrete may have contributed to its fall from philosophical popularity. This narrowed view cut off the logical positivists from consideration of all the possibilities of language. As Duke undergraduate and philosophy aficionado Allan Stevens noted: “Imagine the scientist who demands logical proof for every idea he acquires. There are two courses of action open to him. He can either spend all of his time trying to rationally verify minor reasonable truths, or he can just disregard the ideas that it is inconvenient to prove. Either example would extremely limit his total body of knowledge.”
Although this movement was considered by some to be more concerned with the reform of science than of language, one of logical positivism's major foci was the reform of language to make it more precise and so to make it a better tool for the description of reality. This goal was never realized before the movement declined in strength, although modern linguistic theory owes a great deal to empiricist concepts.
Logical empiricism did not go unchallenged by the philosophical community. One of the main counter-movements was called ordinary language theory; this theory suggested that everyday language without any special, more formal semantics could be used to discuss philosophical thought; it just had to be used correctly. Errors in usage, not deficiencies in the structure of language, led to philosophical misconceptualization. Ludwig Wittgenstein, originally a member of the positivists’ Vienna Circle, had a major influence on the formation of this theory.
Although at first he agreed with the principles of logical empiricism, Wittgenstein came to believe that it was too scientific for the topics it attempted to address and could not accommodate those valid parts of philosophical thought which were characterized by ambiguity. The artificial language created by the empiricists was overly scientific and tried to assign absolute meaning to non-absolute terms. Some terms necessarily possessed a degree of vagueness and could not be bound by the strictures of formalism, or else some valuable meanings would be lost. The search for understanding would not benefit so much from an insistence on absolute and perfect precision of meaning, created within the framework of an artificial logical language, as it would from the correct usage of the already existing language and the avoidance of errors propagated from its misuse.
Although Wittgenstein originally supported the philosophy of the Vienna Circle, he later reconsidered his views. His metamorphosing views embody the conflict between logical empiricism and ordinary language theory, because over the course of his scholarly career he supported the positions of both sides, at some point switching his allegiance from one to the other. His Tractatus Logico-Philosophicus is a standard reading in logical positivism; his later Philosophical Investigations refutes and criticizes the thought described in the Tractatus. The main thrust of the Investigations' opposition to the positivist ideas described above is that the logical, fixed form of the world and the objects in it, described by the empiricists in the formulation of their theories, is in itself a metaphysical construction. Empiricism relies on the existence of a fixed form of the world, because only in this fixed state can it be assured that the real world is composed of unassailable facts to which language can refer (Malcolm). In his later philosophical work, Wittgenstein began to question the unassailability of this reality; Malcolm describes his new view as a realization ". . . that the formation of concepts, of the boundaries of what is thinkable, will be influenced by what is contingent - by facts of nature, including human nature." Therefore the logical suppositions of positivism were themselves metaphysical concepts, as constructs of an ideal world.
Wittgenstein came to consider the positivistic view that underneath all apparently complex statements and concepts lie essentially and absolutely simple elements of reality to be illusory. This simplicity was necessary to the aforementioned construction of a fixed and logical universe to which formalistic language could be applied. But reality has not been shown to reduce to these simple elements, and thus the assumption that it does is unjustifiable.
Wittgenstein also altered his conception of the proposition as correspondent to reality. In the positivistic view, a thought is a representation of a certain specific reality. Propositions are verbal expressions of specific thoughts; in order for propositions to be logically empirical, the thoughts must likewise be reducible to simple and fixed elements, which Wittgenstein decided was unnecessary. Any method by which thoughts and propositions were connected with aspects of reality could be interpreted in various ways by various people, and thus thoughts on the same real object could differ from one another according to the path through which the object was intellectually interpreted. Therefore, propositions (correspondent to thoughts) which purportedly referred to the same aspect of reality might not be equivalent, and so the empiricist principle that a statement should have a singular correspondence with reality would not be maintained.
The theory of language that Katz developed was designed to explain linguistic structure while remaining true to the factual basis of natural language. Essentially, the explanation posited is a model of communication which suggests that a speaker follows rules with a definite structure when he creates or understands novel sentences; these rules allow him to ascertain meanings compositionally, deducing the meaning of a sentence from the meanings of its parts. In this way his theory compromises between empiricism and ordinary language: it asserts that thoughts and ideas are unobservable, but that this is no more a meaningless piece of metaphysics than the scientific assertion of unobservable and theoretical particles. The scientific method allows the scientifically unobservable to be considered valid; the lack of a similar method for establishing the empirical standing of ideas does not make them any more metaphysical.
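A minimal sketch of the compositional idea described here; the toy lexicon and the single composition rule are illustrative assumptions, not Katz's own formalism:

# Illustrative sketch only: deducing the meaning of a sentence from the
# meanings of its parts by applying a fixed composition rule.

LEXICON = {
    "snow": "SNOW",                                # meaning of the subject term
    "white": lambda x: f"{x} HAS-PROPERTY WHITE",  # meaning of the predicate term
}

def compose(subject, predicate):
    """Meaning of 'Subject is Predicate' = predicate-meaning applied to subject-meaning."""
    return LEXICON[predicate](LEXICON[subject])

print(compose("snow", "white"))  # "SNOW HAS-PROPERTY WHITE"

A speaker who knows the lexicon and the rule can, on this picture, work out the meanings of novel combinations without having encountered them before.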
Peter Achinstein examined a contemporary positivist approach which, although it did not attempt a reconciliation with ordinary language, still modified the original version to adapt it to modern thought. In his view, the anti-positivist is more concerned with the main features of concepts and ignores certain parts that might not apply in all cases. The positivist, on the other hand, wants to give all concepts an absolute and complete set of attributes that are empirically provable and uniform. The view which lies between these suggests the necessity of a general set of conditions to define concepts; however, instead of eliminating those concepts which do not fit, this structure can be used to understand more about the concepts which lie slightly outside it.
As we draw closer to a time in which concepts such as love and the soul can be expressed in biological terms, connected intimately with brain tissue and the workings of the body, the scientific language of the logical positivists appears more and more applicable to previously unscientific terms. It is not necessary, however, to consider immaterial concepts meaningless; it is possible that they simply have not yet crossed the boundary into the scientific world. For instance, as we come closer to describing feelings in terms of neurotransmitter levels and neuronal firings, we also come closer to giving them meaning on the empirical level. Who can say that this concept which is presently immaterial (and therefore empirically insignificant) will never come under the auspices of positivistic science? Thus it appears that the empiricists, in insisting that the metaphysical was meaningless, may have spoken too soon; over time the metaphysical may metamorphose into the physical, as we learn more and more. And in its place may arise deeper layers of the metaphysical; will we ever eliminate the unprovable?
Particular problems include the indeterminacy of translation, the inscrutability of reference, language, predication, rule-following, semantics, and translation, as well as the topics falling under the subordinate headings associated with logic. The loss of confidence in determinate meaning is an element common both to postmodern uncertainties in the theory of criticism and to the analytic tradition that follows writers such as Quine (1908-2000). Still, it may be asked why we should suppose that fundamental epistemic notions can be accounted for in behavioural terms: what grounds are there for supposing that "S knows that p" concerns the standing of a subject's statements rather than a relation between nature and its mirror? The answer is that the only alternative seems to be to take knowledge of inner states as the premises from which our knowledge of other things is normally inferred, and without which that knowledge would be ungrounded. However, it is not really coherent, and does not in the last analysis make sense, to suggest that human knowledge requires such foundations or grounds. To say that truth and knowledge can only be judged by the standards of our own day is not to say that they are less meaningful, or more cut off from the world, than we had supposed. It is just to say that nothing really counts as justification unless by reference to what we have already accepted, and that there is no way to get outside our beliefs and our language so as to find some test other than coherence. The fact is that professional philosophers have thought it might be otherwise, since they alone are haunted by the spectre of epistemological scepticism.
What Quine opposes as "residual Platonism" is not so much the hypostasising of non-physical entities as the notion of "correspondence" with things as the final court of appeal for evaluating present practices. Unfortunately, Quine, in a way that is incompatible with his basic insights, substitutes for this a correspondence to physical entities, and especially to the basic entities, whatever they turn out to be, of physical science. Nevertheless, when these doctrines are purified, they converge on a single claim: that no account of knowledge can depend on the assumption of some privileged relation to reality. Their work points toward an account of knowledge that can amount only to a description of human behaviour.
As regards an epistemology of science, this project seeks to bring together a number of strands in recent research in the philosophy of science and in epistemology generally, to provide a coherent picture of the epistemology of science. These are: (1) Naturalism: the project is naturalistic in two related senses. First, it takes knowledge to be a natural phenomenon, whose nature is to be understood in terms of its role in explaining the interaction of an individual or a community with their environment. Secondly, it accepts that a satisfactory epistemology of science will have to draw upon empirical research in psychology, neuroscience, history, and sociology.
(2) A modified Kuhnian picture of the process of scientific change: the modifications aside, this is to accept a Kuhnian account of the way science changes (normal science interspersed with revolutionary science) and a Kuhnian account of why this happens (the priority of paradigms as exemplary scientific achievements). The way that paradigms operate when employed by an individual, being related to the capacity for rule-less pattern recognition, is something to be explained by reference to research in psychology and neuroscience. The way that individuals are inculcated into a paradigm and acquire a shared ability to recognise certain similarity relations among puzzle-solutions is to be explained by sociology, illustrated by historical examples. (3) An investigation of the psychology of scientific inference: the picture Kuhn himself gives us is of paradigms providing us with the ability directly to recognise similarities without mediation by conscious rules. As a picture of puzzle-solving and puzzle-evaluation this is too simple, even if an important starting point, since it is clear that scientific reasoning is a multi-step process involving reflection and cogitation, and so is not a matter of one-step, immediate pattern-recognition. So an important task is to see how Kuhn's insight can be modified to accommodate a more realistic picture of scientific reasoning and inference. This needs to be linked with an account of the logical structure of scientific reasoning.
(4) Inference to the Best Explanation: Inference to the Best Explanation is taken to be the logical structure of some of the most important scientific reasoning, including reasoning that extends theoretical knowledge. The view employs much of the content of the account provided by Peter Lipton, but also seeks to strengthen it by showing that, for knowledge, it is necessary (and possible) that Inference to the Best Explanation operate by eliminating all but one live potential explanation. According to Lipton, not every potential explanation gets considered: bizarre explanations don't enter the field; plausible explanations get conscious attention. It may be proposed that one of the functions of Kuhnian paradigms and similarity recognition is to direct scientists' attention to the plausible explanations and to filter out the bizarre ones.
(5) Social knowing: the Kuhnian picture being developed is neither purely individualistic nor purely social. This raises the question: what is the relationship between individual knowledge and social knowledge? In opposition to both traditional epistemology and more radical social epistemology, it can be argued that neither is reducible to the other. Instead, individual and social knowledge share the same structure. Williamson claims that individual knowledge is a factive mental state; the proposal here is that social knowledge is a factive state of some social structure. Not every element of this social structure is a human individual; it will include such items as libraries, journals, and laboratories, structured by relationships of trust, authority, and power. This also proposes a task for the sociology of scientific knowledge: to investigate the causes and conditions whereby a society can be so organised as to produce scientific knowledge (as opposed to mere widespread belief). (6) Scientific progress: continuing the Williamsonian theme that knowledge is the central concept in epistemology, it may be argued that scientific progress is the accumulation of scientific knowledge and that the aim of science is the production of scientific knowledge.
Nevertheless, one might object that peripheral self-awareness is nowhere to be found in one's phenomenology. To be sure, the phenomenologists themselves did claim to find it. From Brentano (1874), through Husserl (1928) and Sartre (1937, 1943), to recent work by the so-called Heidelberg School, Smith (1989), and Zahavi (1999), the distinction between reflective and non-reflective self-awareness has been consistently drawn on the European continent. It may be suggested that the distinction thus belaboured in the phenomenological tradition is captured in the difference between transitive and intransitive modes of self-consciousness, that is, between being self-conscious of a thought or a percept and self-consciously thinking or perceiving. But a persistent objector could readily profess not to find anything like such peripheral self-awareness in her phenomenology, and insist that the phenomenologists themselves have been, in this regard as in others, overly inflationist in their proclamations concerning the actual phenomenology of mental life.
This is a fair objection. But it may unwittingly impose an inordinate burden of proof on the proponent of intransitive self-consciousness. For how would one argue for the very existence of a certain mental phenomenon? One has yet to encounter an effective argument against eliminativism about the propositional attitudes, or about consciousness and qualia, of the sort espoused by Churchland (1984). Even so, we did encounter such an argument above, namely, that there appears to be peripheral awareness of every other sort, and it would be quite odd if the only exception was awareness of oneself. At this point, it may help to try to explain away the relative intuitive appeal of eliminativism about intransitive self-consciousness, in comparison to, say, eliminativism about the qualitative character of colour experiences.
One factor may simply be that the qualitative character of colour experiences is much more phenomenologically impressive. In this respect, the proponent of intransitive self-consciousness is in a similar position to those philosophers who claim that conscious propositional attitudes have a phenomenal character (Strawson 1994, Horgan and Tienson 2002, Kriegel 2003, 2004). The problem they face is that the phenomenal character of propositional attitudes, if there is any, is clearly less striking than that of colour experiences. But the common tendency to take colour experiences as the gold standard of phenomenology may be theoretically limiting inasmuch as it may set the bar too high. For any other sort of phenomenology is bound to be milder.
Furthermore, special difficulties attach to noticing not just an awareness of another perspective with a previously unrecognized body of knowledge but a radically different way of being-in-the-world. In addition, this different way of being leads naturally to a different mode or practice of inquiry (i.e., the methods of Phenomenological research). This chapter will compare Phenomenological psychology to the more mainstream behavioural and psychoanalytic approaches (Valle, 1989), present the essence of the existential-phenomenological perspective (Valle, King, and Halling, 1989), describe the nature of an emerging transpersonal-phenomenological psychology (Valle, 1995), and present an overview of the transpersonal dimensions or themes emerging from seven recently completed empirical Phenomenological research projects.
Existentialism as the philosophy of being became intimately paired with phenomenology as the philosophy of experience because it is our experience alone that serves as a means or way to inquire about the nature of existence (i.e., what it means to be). Existential-phenomenology as a specific branch or system of philosophy was, therefore, the natural result, with what we have come to know as Phenomenological methods being the manifest, practical form of this inquiry. Existential-phenomenology when applied to experiences of psychological interest became existential-phenomenological psychology and has taken its place within the general context of humanistic or “third force” psychology; it is humanistic psychology that offers an openness to human experience as it presents itself in awareness.
From a historical perspective, the humanistic approach has been both a reaction to and a progression of the world views that constitute mainstream psychology, namely, behavioural-experimental and psychoanalytic psychology. It is in this way that the philosophical bases that underlie both existential-phenomenological and transpersonal ("fourth force") psychology have taken root and grown in this field.
In classic behaviourism, the human individual is regarded as a passive entity whose experience cannot be accurately verified or measured by natural scientific methods. This entity, seen as implicitly separate from its surrounding environment, simply responds or reacts to stimuli that impinge on it from the external physical and social world. Because only that which can be observed with the senses and quantified, and whose qualities and dimensions can be agreed to by more than one observer, is recognized as acceptable evidence, human behaviour (including verbal behaviour) became the focus of psychology.
In a partial response to this situation, the radical behaviourism of Skinner (e.g., 1974) claims to have collapsed this classic behaviour-experience split by regarding thoughts and emotions as subject to the same laws that govern operant conditioning and the roles that stimuli, responses, and reinforcement schedules play within this paradigm. Thoughts and feelings are, simply, behaviours.
In the psychoanalytic perspective, an important difference with behavioural psychology stands out. Experience is recognized not only as an important part of being human but as essential in understanding the adult personality. It is within this context that both Freud’s personal unconscious and Jung's collective unconscious take their places. The human being is, thereby, more whole yet is still treated as a basically passive entity that responds to stimuli from within (e.g., childhood experiences, current emotions, and unconscious motives), rather than the pushes and pulls from without. Whether the analyst speaks of one’s unresolved oral stage issues or the subtle effects of the shadow archetype, the implicit separation of person and world remains unexamined, as does the underlying causal interpretation of all behaviour and experience. Both behavioural and analytic psychology are grounded in an uncritically accepted linear temporal perspective that seeks to explain human nature via the identification of prior causes and subsequent effects.
Only in the existential-phenomenological approach in psychology is the implicitly accepted causal way of being seen as only one of many ways human beings can experience themselves and the world. More specifically, our being presents itself to awareness as a being-in-the-world in which the human individual and his or her surrounding environment are regarded as inextricably intertwined. The person and world are said to co-constitute one another. One has no meaning when regarded independently of the other. Although the world is still regarded as essentially different from the person in kind, the human being, with his or her full experiential depth, is seen as an active agent who makes choices within a given external situation (i.e., human freedom always presents itself as a situated freedom). Other concepts coming from existential-phenomenological psychology include the prereflective, lived structure, the life-world, and intentionality. All these represent aspects or facets of the deeper dimensions of human being and human capacity.
The prereflective level of awareness is central to understanding the nature of Phenomenological research methodology. Reflective, conceptual experience is regarded as literally a “reflection” of a preconceptual and, therefore, prelanguaged, foundational, bodily knowing that exists "as lived" before or prior to any cognitive manifestation of this purely felt-sense. Consider, for example, the way a sonata exists or lives in the hands of a performing concert pianist. If the pianist begins to think about which note to play next, the style and power of the performance is likely to suffer noticeably.
This prereflective knowing is present as the ground of any meaningful (meaning-full) human experience and exists in this way, not as a random, chaotic inner stream of subtle senses or impressions but as a prereflective structure. This embodied structure or essence exists as an aspect or a dimension of each individual’s Lebenswelt or life-world and emerges at the level of reflective awareness as meaning. Meaning, then, is regarded by the Phenomenological psychologist as the manifestation in conscious, reflective awareness of the underlying prereflective structure of the particular experience being addressed. In this sense, the purpose of any empirical Phenomenological research project is to articulate the underlying lived structure of any meaningful experience on the level of conceptual awareness. In this way, understanding for its own sake is the purpose of Phenomenological research. The results of such an investigation usually take the form of basic constituents (essential elements) that collectively represent the structure or essence of the experience for that study. They are the notes that compose the melody of the experience being investigated.
Possible topics for a Phenomenological study include, therefore, any meaningful human experience that can be articulated in our everyday language such that a reasonable number of individuals would recognize and acknowledge the experience being described (e.g., “being anxious,” “really feeling understood,” “forgiving another,” “learning,” and “feeling ashamed”). These many experiences constitute, in a real sense, the fabric of our existence as experienced. In this way, Phenomenological psychology with its attendant research methods has been, to date, a primarily existential-phenomenological psychology. From this perspective, reflective awareness and prereflective awareness are essential elements or dimensions of human being as a being-in-the-world. They co-constitute one another. One cannot be fully understood without reference to the other. They are truly two sides of the same coin.
Some experiences and certain types of awareness, however, do not seem to be captured or illuminated by Phenomenological reflections on descriptions of our conceptually recognized experiences and/or our prereflective felt-sense of things. Often referred to as transpersonal, transcendent, sacred, or spiritual experience, these types of awareness are not really experience in the way we normally use the word, nor are they the same as our prereflective sensibilities. The existential Phenomenological notion of intentionality is helpful in understanding this distinction.
The words transpersonal, transcendent, sacred, and spiritual mark subtle distinctions among themselves. For example, “transpersonal” currently refers to any experience that is trans-egoic, including the archetypal realities of Jung's collective unconscious as well as radical transcendent awareness. Although notions such as the collective unconscious refer to states of mind that are deeper than or beyond our normal ego consciousness, “transcendent” refers to a completely sovereign or soul awareness without the slightest inclination to define itself as anything outside itself, including contents of the mind, whether conscious or unconscious, personal or collective (i.e., awareness that is not only trans-egoic but trans-mind). This distinction between transpersonal and transcendent awareness may lead to the emergence of a fifth-force or more purely spiritual psychology.
In existential-phenomenological psychology, intentionality refers to the nature or essence of consciousness as it presents itself. Consciousness is said to be intentional, meaning that consciousness always has an object, whether that intended object be a physical object, a person, or an idea or a feeling. Consciousness is always a “consciousness of” something that is not consciousness itself. This particular way of defining or describing intentionality directly implies the deep, implicit interrelatedness between the perceiver and that which is perceived that characterizes consciousness in this approach. This inseparability enables us, through disciplined reflection, to illumine the meaning that was previously implicit and unlanguaged for us in the situation as it was lived.
Transcendent awareness, on the other hand, seems somehow “prior to” this reflective-prereflective realm, presenting itself as more of a space or ground from which our more common experience and felt-sense emerge. This space or context does, however, present itself in awareness, and is, thereby, known to the one who is experiencing. Moreover, implicit in this awareness is the direct and undeniable realization that this foundational space is not of the phenomenal realm of perceiver and the perceived. Rather, it is a noumenal, unitive space within or from which both intentional consciousness and phenomenal experience manifest. From reflections on my own experience (Valle, 1989), I offer the following six qualities or characteristics of transpersonal/transcendent awareness (often recognized in the practice of meditation):
(1) There is a deep stillness and peace that I sense as both existing as itself and, at the same time, as “behind” all thoughts, emotions, or felt-senses (bodily or otherwise) that might arise or crystallize in or from this stillness. “I” experience this as an “isness” or “sameness” rather than a state of whatness or “I am this” or “that.” This stillness is, by its nature, neither active nor in the body and is, in this way, prior to both the prereflective and reflective levels of awareness.

(2) There is an all-pervading aura or feeling of love for and contentment with all that exists, a feeling that exists simultaneously in my mind and heart. Although rarely focussed as a specific desire for anyone or anything, it is, nevertheless, experienced as an intense, inner energy or inspired “pressure” that yearns, even “cries,” for a creative and passionate expression. I sense an open embracing of everyone and everything just as they are, which literally melts into a deep peace when I find myself able to simply “let it all be.” Peace of mind is, here, a heart-felt peace.

(3) Existing as or with the stillness and love is a greatly diminished, and on occasion absent, sense of “I.” The more common sense of “I am thinking or feeling this or that” becomes a fully present “I am” or simply, when in its more intense form, an “amness” (pure Being in the Heideggerian sense). The sense of a “perceiver” and “that which is perceived” has dissolved; there is no longer any “one” to perceive as we normally experience this identity and relationship.

(4) My normal sense of space seems transformed. There is no sense of “being there,” of being extended in and occupying space, but, as above, simply Being. Also, there is a loss of awareness of my body-sense as a thing or spatial container. This ranges from an experience of distance from sensory input to a radical forgetfulness of the body’s very existence. It is as if my everyday, limited sense of body-space touches a sense of the infinite.

(5) Time is also quite different from my everyday sense of linear passing time. Seemingly implicit in the sense of stillness described here is a sense of time “hovering” or standing still, of being forgotten (i.e., no longer a quality of mind) much as the body is forgotten. With no thoughts dwelling on the past and no thoughts moving into the future, hours of linear time are experienced as a moment, as the eternal Now.

(6) Bursts or flashes of insight are often part of this awareness, insights that have no perceived or known antecedents but that emerge as complete or full-blown. These insights or intuitive “seeings” have some of the qualities of more common experience (e.g., although “lighter,” there is a felt weightiness or subtle “content” to them), but they initially have an “other-than-me” quality about them, as if the thoughts and words that emerge from the insights are being done to, or even through, me, a sense that my mind and its contents are vehicles for the manifestation as experience of something greater and/or more powerful than myself. In its most intense or purest form, the “other-than-me” quality dissolves as the “me” expands to a broader, more inclusive sense of self that holds within it all that was previously felt as “other-than-me.”
Since first describing these six qualities, we have come to recognize two additional dimensions or essential characteristics of transcendent awareness: (a) a surrendering of one's sense of control with regard to the outcome of one's actions, and the dissolution of fear that seems always to follow this “letting go,” and (b) the transformative power of transcendent experience, realized as a change in one’s preferences, inclinations, emotional and behavioural habits, and understanding of life itself. This self-transformation is often personally painful because this power both challenges and changes the comfortable patterns of thought and feeling we have so carefully constructed through time, a transformation of who we believe we are.
These qualities or dimensions call us to a recontextualization of intentionality by acknowledging a field of awareness that appears to be inclusive of the intentional nature of mind but, at the same time, not of it. In this regard, Valle (1989) offers the notion of a “transintentionality” to address philosophically this consciousness without an object (Merrell-Wolff, 1973). As the phenomenological psychologist and researcher Steen Halling (1988) has rightly pointed out, consciousness without an object is also consciousness without a subject. Transintentional awareness, therefore, represents a way of being in which the separateness of a perceiver and that which is perceived has dissolved, a reality not of (or in some way beyond) time, space, and causation as we normally know them.
Here is a bridge between existential/humanistic and transpersonal/transcendent approaches in psychology. It is here that we are called to recognize the radical distinction between the reflective/prereflective realm and pure consciousness, between rational/emotive processes and transcendent/spiritual awareness, between intentional knowing of the finite and being the infinite. It is, therefore, mind, not consciousness per se, that is characterized by intentionality, and it is our recognition of the Transintentional nature of Being that calls us to investigate those experiences that clearly reflect or present these transpersonal dimensions in the explicit context of Phenomenological research methods.
This presentation is based on the following thoughts regarding the meaning of transpersonal in this context. On the basis of the themes that Huxley (1970) claimed compose the perennial philosophy, Valle (1989) presented five premises that characterize any philosophy or psychology as transpersonal: (1) that a transcendent, transconceptual reality or Unity binds together (i.e., is immanent in) all apparently separate phenomena, whether these phenomena be physical, cognitive, emotional, intuitive, or spiritual; (2) that the individual or ego-self is not the ground of human awareness but, rather, only one relative reflection-manifestation of a greater transpersonal (as “beyond the personal”) Self or One (i.e., pure consciousness without subject or object); (3) that each individual can directly experience this transpersonal reality that is related to the spiritual dimensions of human life; (4) that this experience represents a qualitative shift in one’s mode of experiencing and involves the expansion of one’s self-identity beyond ordinary conceptual thinking and ego-self awareness (i.e., mind is not consciousness itself, but only one dimension of the consciousness one has of oneself); and (5) that this experience is self-validating.
It has been written and taught for millennia in the spiritual circles of many cultures that sacred experience presents itself directly in one’s awareness (i.e., without any mediating sensory or reflective processes) and, as such, is self-validating. The direct personal experience of God is, therefore, the “end” of all spiritual philosophy and practice.
Transcendent/sacred/divine experience has been recognized and often discussed, both directly and metaphorically, as either intense passion or the absolute stillness of mind (these thoughts and those that follow regarding passion and peace of mind are from Valle, 1995). In day-to-day experience, a harmonious union of passion and stillness or peace of mind is rarely experienced. Passion and stillness are regarded as somehow antagonistic to each other. For example, when one is passionately involved with some project or person, the mind is quite active and intensely involved. On the other hand, the calm, serene, and profoundly peaceful quality of mind that often accompanies deep meditation is fully disengaged from and, thereby, disinterested in things and events of the world.
What presents itself as quite paradoxical on one level offers a way to approach the direct personal experience of the transcendent, that is, to first recognize and then deepen any experience in which passion and peace of mind are simultaneously fully present in one’s awareness. If divine presence manifests in human awareness in these two ways, and sacred experience is what one truly seeks, it becomes important to approach and understand those experiences wherever these two dimensions exist in an integrated and harmonious way. In this way, one comes to understand the underlying essence that these dimensions share rather than simply being satisfied with the seeming opposites they first appear to be.
The relationship between passion and peacefulness is addressed in many of the world’s scriptures and other spiritual writings. These two threads, for example, run through the Psalms (May and Metzger, 1977) of the Judeo-Christian tradition. At one point, we read, “Be still and know that I am God” (Psalm 46) and “For God alone my soul waits in silence” (Psalm 62), and at another point, “For zeal for thy house has consumed me” (Psalm 69) and “My soul is consumed with longing for thy ordinances” (Psalm 119). Stillness, silence, zeal, and longing all seem to play an essential part in this process.
In his teachings on attaining the direct experience of God through the principles and practices of Yoga, Paramahansa Yogananda (1956) affirms: “I am calmly active. I am actively calm. I am a Prince of Peace sitting on the throne of poise, directing the kingdom of activity.” And, more recently, Treya Wilber (quoted in Wilber, 1991) offers an eloquent exposition of this integration: Consider the Carmelites’ emphasis on passion and the Buddhists’ parallel emphasis on equanimity. It suddenly occurred to me that our normal understanding of what passion means is loaded with the idea of clinging, of wanting something or someone, of fearing losing them, of possessiveness. But what if you had passion without all that stuff, passion without attachment, passion clean and pure? What would that be like, what would that mean? I thought of those moments in meditation when I’ve felt my heart open, a painfully wonderful sensation, a passionate feeling but without clinging to any content or person or thing. And the two words suddenly coupled in my mind and made a whole. Passionate equanimity - to be fully passionate about all aspects of life, about one’s relationship with spirit, to care to the depth of one’s being but with no trace of clinging or holding, that's what the phrase has come to mean to me. It feels full, rounded, complete, and challenging.
It is here that existential-phenomenological psychology, with its attendant descriptive research methodologies, comes into play. For if, indeed, we each identify with the contents of our reflective awareness and speak to and/or share with one another from this perspective to better understand the depths and richness of our meaningful experience, then Phenomenological philosophy and method offer us the perfect, perhaps only, mirror with which to approach transcendent experience. Experiences that present themselves as passionate, as peaceful, or as an integrated awareness of these two become the focus for exploring in a direct, empirical, and human-scientific way the nature of transcendent experience as we live it. Here are the “flesh” and promise of a transpersonal-phenomenological psychology.
Although the reader must consult the particular reports for the specific constituents presented in each study, a reflective overview of these results reveals an emerging pattern of common elements or themes. We offer these eleven themes as a beginning matrix or tapestry of transpersonal dimensions interwoven throughout the descriptions of these experiences, not as constituents per se resulting from a more formal protocol analysis. As we looked over the results of these studies, these themes naturally emerged, falling, even, into a natural order. Some are clearly distinct, whereas others appear as more implicitly interconnected. These themes are: (1) an instrument, vehicle, or container for the experience; (2) intense emotional or passionate states, pleasant or painful; (3) being in the present moment, often with an acute awareness of one's authentic nature; (4) transcending space and time; (5) expansion of boundaries with a sense of connectedness or oneness, often with the absence of fear; (6) a stillness or peace, often accompanied by a sense of surrender; (7) a sense of knowing, often as sudden insights and with a heightened sense of spiritual understanding; (8) unconditional love; (9) feeling grateful, blessed, or graced; (10) ineffability; and (11) self-transformation.
It seems that the transpersonal/transcendent aspects of any given experience manifest in, come through, or make themselves known via an identifiable form or vehicle. This theme was evident in all seven research studies, the specific forms being silence, being with the dying, being with suffering, near-death experience, being with one’s spiritual teacher, and synchronicity. Transpersonal experiences can come through many forms including meditation, rituals, dreams, sexual experience, celibacy, initiations, music, breath awareness, physical and emotional pain, psychedelic drugs, and the experience of beauty (Maslow’s, 1968, description and discussion of peak experiences are relevant here as well as to a number of the themes discussed below). We again use a musical analogy: Just as the violin, piano, flute, or voice can be an instrument for the manifestation/expression of a melody, so, too, there are many ways in and through which consciousness reveals its nature.
The existential-phenomenologists may interpret this as further evidence for the intentional nature of consciousness, that this is simply the way in which consciousness presents itself to the perceiver. There is also the view that consciousness is a constant stream of “energy” existing beyond the duality of subject-object (i.e., consciousness without an object) that flows through all creation, being both all-pervasive and unitive by its nature. Aware of the paradox implied in this perspective, Capra (1983) states:
[The mystical view] regards consciousness as the primary reality and ground of all being. In its purest form, consciousness . . . is non-material, formless, and void of all content; it is often described as “pure consciousness,” “ultimate reality,” “suchness,” and the like. This manifestation of pure consciousness is associated with the Divine. . . . The mystical view of consciousness is based on the experience of reality in non-ordinary modes of awareness, which are traditionally achieved through meditation, but may occur spontaneously in the process of artistic creation and in various other contexts.
Any process of drawing a conclusion from a set of premises may be called a process of reasoning. If the conclusion concerns what we are to do, the process is called practical reasoning; otherwise it is pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer little or no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. In part this is because we are often concerned to draw conclusions that “go beyond” our premises, in a way that the conclusions of logically valid arguments do not, as when we use evidence to reach a wider range of conclusions. Nonetheless, pessimism about the prospects of confirmation theory leads some to deny that we can assess the results of such ampliative reasoning in terms of probability. Logicians, for their part, usually confine themselves to cases in which the conclusion is supposed to follow from the premises, for example where an inference is logically valid in the sense of deducibility, defined syntactically over the premises without any reference to the intended interpretation of the theory. Furthermore, as we reason we draw on an indefinite store of traditional or common-sense presuppositions about what is likely and what is not; one task of an automated reasoning project is to mimic this casual use of knowledge of the way of the world in computer programs.
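To make the notion of validity concrete, here is a small sketch of my own, not part of the sources discussed above, that checks a propositional argument for validity by brute-force enumeration of truth-value assignments: the argument is valid just in case no assignment makes every premise true and the conclusion false. The variables p and q and the two example arguments are illustrative only.

```python
# A minimal sketch, assuming premises and conclusions are given as Python functions
# from a truth-value assignment (a dict) to a bool.

from itertools import product

def valid(premises, conclusion, variables):
    """Return True if no assignment makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample found: premises true, conclusion false
    return True

# Modus ponens: from "if p then q" and "p", infer "q"  -> valid
print(valid([lambda v: (not v["p"]) or v["q"], lambda v: v["p"]],
            lambda v: v["q"], ["p", "q"]))       # True

# Affirming the consequent: from "if p then q" and "q", infer "p"  -> invalid
print(valid([lambda v: (not v["p"]) or v["q"], lambda v: v["q"]],
            lambda v: v["p"], ["p", "q"]))       # False
```

The second example illustrates the contrast drawn above: an inference can be persuasive in everyday use while failing the purely syntactic, formal test of validity.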
Without the fundamental discipline of linguistic analysis, philosophy cuts itself adrift from ordinary meaning and enters an Alice-in-Wonderland fantasy of wishful wisdom. Yes, of course the nature of the human mind is a great puzzle. But if we approach this 'great puzzle' in a bare-hands, undisciplined way, we put ourselves in the position of looking for something without the faintest idea of what it is. That is a dumb quest. Probably the dumb quest for an insight into the nature of one's own mind is the worst example of such a defective search procedure.
Many philosophers have been puzzled about the nature of consciousness. As a result, a huge literature allegedly about this subject, but really constituting a dense fog blanket of near-meaningless rhetoric, has been devised. One finds it difficult to explain to an ordinary friend what the point of such lengthy, scholastic, consciously obscure artifice is. What does it achieve? Does it clarify the individual's mind? Does it clarify the great intellectual issues of the day? Certainly not! It may serve to de-clarify the great intellectual issues of the day, because it helps to give philosophy, the art, a poor reputation: as being more interested in appearances than in realities, as being quite content to bandy about badly focussed but meretricious sentences. We can't hope to get anywhere in philosophy unless we first concentrate our attention on focussing very firmly onto meanings.
Whatever we do, we should not bet on 'getting there' by the facile short-cut of introspection. To think one might get there by introspection is like thinking that the way to solve an equation is to stare at it harder and harder - for as long as it takes - until the unknown value of ‘x’ finally ('as it must of course') reveals itself! Introspective philosophy, it is widely agreed, is a reaction against positivism and physicalism: but, if so, the reaction has gone much too far. The main complaint against the positivists and the physicalists is surely that, in their blind attachment to scientific modes, they show a dismal insensitivity to human culture, human values, human relationships. They do, but it is what they lack that defines the complaint, not what they know. There can be no excuse for rejecting scientific modes of clarification out of hand in any department of human activity: least of all in one - philosophy - which must trade in clarification if it trades in anything at all. Yes, we need clarification in other areas too. But don't let's turn our backs on what we have.
Willard van Orman Quine, the most influential American philosopher of the latter half of the 20th century, spent the wartime period in naval intelligence and punctuated the rest of his career with extensive foreign lecturing and travel. Quine’s early work was in mathematical logic, issuing in A System of Logistic (1934), Mathematical Logic (1940), and Methods of Logic (1950), but it was with the collection of papers From a Logical Point of View (1953) that his philosophical importance became widely recognized. Quine’s concern with the problems of convention, meaning, and synonymy dominated his work and was cemented by Word and Object (1960), in which the indeterminacy of radical translation first takes centre stage. In this and many subsequent writings Quine takes a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. These ‘intentional idioms’ resist smooth incorporation into the scientific world view, and Quine responds with scepticism toward them, not quite endorsing ‘eliminativism’ but regarding them as second-rate idioms, unsuitable for describing strict and literal facts. For similar reasons he consistently expressed suspicion of the logical and philosophical propriety of appeal to logical possibilities and possible worlds. The languages that are properly behaved and suitable for literal and true description of the world are those of mathematics and science, and the entities to which their theories refer must be taken with full seriousness in our ontology; although an empiricist, Quine thus supposes that the abstract objects of set theory are required by science, and therefore exist. In the theory of knowledge Quine is associated with a ‘holistic’ view of verification, conceiving of a body of knowledge as a web touching experience at the periphery, with each point connected by a network of relations to other points.
Quine is also known for the view that epistemology should be naturalized, or conducted in a scientific spirit, with the object of investigation being the relationship, in human beings, between the input of experience and the output of belief. Although Quine’s approaches to the major problems of philosophy have been attacked as betraying undue ‘scientism’ and sometimes ‘behaviourism’, the clarity of his vision and the scope of his writings made him the major focus of Anglo-American work of the past forty years in logic, semantics, and epistemology. As well as the works cited, his writings include The Ways of Paradox and Other Essays (1966), Ontological Relativity and Other Essays (1969), Philosophy of Logic (1970), The Roots of Reference (1974), and The Time of My Life: An Autobiography (1985).
Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth, and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a monster in your garden?
One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than believing that you have a monster in your garden. Belief also has an influence on action: you will act differently if you believe that you are reading a page than if you believe something about a monster. Perceptual input and action, however, do not by themselves fix the content of a belief. The role that gives the belief the content it has is the role it plays within a network of relations to other beliefs, some more directly causal than others, and above all the role it plays in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I infer from other beliefs, just as I infer that belief itself from different things than I infer other beliefs.
The input of perception and the output of action supplement the central role of the systematic relations that the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the representational content it does because of the way in which it coheres within a system of beliefs (Rosenberg 1988). We might distinguish weak coherence theories of the content of beliefs from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief. Strong coherence theories of the content of belief affirm that coherence is the sole determinant of the content of belief.
When we turn from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? Again, there is a distinction between weak and strong coherence theories. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory, and sensory evidence; strong theories hold that justification is solely a matter of how a belief coheres with a background system of beliefs. There is, nonetheless, another distinction that cuts across the distinction between weak and strong coherence theories: the distinction between positive and negative coherence theories (Pollock 1986). A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.
A strong coherence theory of justification is a combination of a positive and a negative theory, and it tells us that a belief is justified if and only if it coheres with a background system of beliefs. Coherence theories of justification and knowledge have most often been rejected for being unable to account for the justification that perception provides (Audi 1988 and Pollock 1986), and so a perceptual example will serve as a kind of crucial test. Suppose that a person, call her Julia, works with a scientific instrument that gauges the temperature of liquids in a container. The gauge is marked in degrees; she looks at the gauge and sees that the reading is 125 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is 125 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that, though her belief that she sees the numerals “125” is immediately justified as direct sensory evidence without appeal to a background system, the belief that the liquid in the container is 125 degrees results from coherence with a background system of beliefs affirming that the gauge reading 125 measures the temperature of the liquid in the container. Such a weak coherence view, which combines coherence with direct perceptual evidence as the foundation of justification, is one way to account for the justification of our beliefs.
A strong coherence theory would go beyond the claim of the weak coherence theory to affirm that the justification of all beliefs, including the belief that one sees the numerals “125,” or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in a number of different ways. One line is to appeal to the coherence theory of content. If the content of a perceptual belief results from the relations of the belief to other beliefs in a network of beliefs, then one may plausibly argue that the justification of the perceptual belief likewise results from the relations of the belief to other beliefs in that network. Consider the very cautious belief that I see a shape. How may the justification for that perceptual belief result from its coherence with a background system of beliefs? What might the background system tell us that would justify that belief? Our background system contains a simple and primary theory about our relationship to the world and its surrounding surfaces: we believe that we can tell a shape when we see one, that we are trustworthy about such simple matters as whether we see a shape before us or not, and that in such matters our past experience has not led us beyond deception. Moreover, when Julia sees the gauge read 125, her background system tells her that the circumstances are not ones in which she is deceived about whether she sees that shape: the light is good, the numeral shapes are large and readily discernible, and so forth. These background beliefs give Julia reasons for justification. Her sensory access to the data involved, together with those beliefs, justifies her subsequent belief, and so she is justified and credible.
The philosophical problems include discovering whether belief differs from other varieties of assent, such as ‘acceptance’; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals are properly said to have beliefs.
Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, they must be interpreted as unconscious inferences, as information processing, based on the background system. One might object to such an account on the grounds that not all justifying inferences are explanatory and, more generally, that the account of coherence may at best be suited to competition among claims based on background systems (BonJour 1985 and Lehrer 1990). The belief that one sees a shape competes with the claim that one does not, with the claim that one is deceived, and with other sceptical objections. The background system of beliefs informs one that one is perceptually trustworthy and enables one to meet the objections. A belief coheres with a background system just in case the system enables one to meet the sceptical objections, and in that way justifies one in the belief. This is a standard strong coherence theory of justification (Lehrer 1990).
Illustrating the relationship between positive and negative coherence theories in terms of the standard coherence theory is easy. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in the belief. So, to return to Julia, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 125, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on and that, after years of working with the gauge, Julia, who has always placed her trust in it, believes what the gauge tells her, namely that the liquid in the container is at 125 degrees. Her belief that the liquid is at 125 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the liquid in the container. By contrast, when the red light is not illuminated and Julia's background system tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, then she is justified. The positive coherence theory tells us that she is justified in her belief because her belief coheres with her background system, which continues to represent the gauge as trustworthy.
The coherence theories of justification sketched and illustrated above have a common feature: they are internalist theories of justification. They contrast with externalist accounts, such as reliabilism. What makes a view externalist is the absence of any requirement that the person for whom the belief is justified have any cognitive access to the relation of reliability in question. Lacking such access, such a person will usually have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
Coherence theories, by contrast, affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If, then, justification is solely a matter of internal relations between beliefs, we are left with the possibility that the internal relations might fail to correspond with any external reality. How, one might object, can such a purely internal and subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?
The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from consideration of coherence theories of justification. What is required may be put by saying that the justification one has must be undefeated by errors in the background system of beliefs. Justification is undefeated by errors just in case any correction of such errors in the background system of beliefs would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error (Lehrer 1990). The connection between internal subjective conditions of belief and external objective realities results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Julia, she believes that her internal subjective conditions of sensory experience and perceptual belief are connected in a trustworthy manner with the external objective reality, the temperature of the liquid in the container. This background belief is essential to the justification of her belief that the temperature of the liquid in the container is 125 degrees, and the correctness of that background belief is essential to the justification remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world that justifies certain of our beliefs that cohere with that system. For such justification to convert to knowledge, that theory must be sufficiently free from error that the coherence is sustained in corrected versions of our background system of beliefs. The correctness of the simple background theory provides the connection between the internal condition and external reality.
The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. The sensory experiences she has are mute until they are represented in the form of some perceptual belief. Beliefs are the engines that pull the train of justification. Nevertheless, what assurance do we have that our justification is based on true beliefs? What justification do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifacts of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification (Rescher 1973 and Rosenberg 1980). That would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is justifiable for some person. For such a person there would be no gap between justification and truth or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems or some convergence toward a consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection. One is that there is a consensus that we can all be wrong about at least some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system rejects the equation of truth with the consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.
Coherence theories of the content and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but may believe that our capacities suffice to close the gap and yield knowledge. That view is, at any rate, a coherent one.
What makes a belief justified and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.
For example, Armstrong (1973) proposed that a belief of the form ‘This (perceived) object is F’ is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F’; that is, the fact that the object is ‘F’ contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘x’ and perceived object ‘y’, if ‘x’ has those properties and believes that ‘y’ is ‘F’, then ‘y’ is ‘F’. (Dretske (1981) offers a rather similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is ‘F’.)
This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief’s being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise, to think, say, that things which look tinted to you are not really tinted. If you fail to heed these reasons for thinking that your colour perception is awry, and you believe of a thing that looks tinted to you that it is tinted, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing’s being tinted in such a way as to be a completely reliable sign (or to carry the information) that the thing is tinted.
One could fend off this sort of counterexample by simply adding to the causal condition the requirement that the belief be justified, but this enriched condition would still be insufficient. Suppose, for example, that a certain drug causes the aforementioned aberration in colour perception in nearly all people but, as it happens, not in you. The experimenter tells you that you have taken such a drug but then says, ‘No, hold on a minute, the pill you took was just a placebo.’ Suppose further that this last thing the experimenter tells you is false. Her telling you that the pill was a placebo gives you justification for believing, of a thing that looks tinted to you, that it is tinted; but the fact that this justification rests on her false statement makes it the case that your true belief is not knowledge, even though it satisfies the causal condition.
Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is ‘globally’ and ‘locally’ reliable. Global reliability is a matter of the process’s propensity to cause true beliefs being sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. This relevant-alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat’ and the concept ‘empty’ (Dretske 1981). Both appear to be absolute concepts: a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat’ there is a standard for what counts as a bump, and in the case of ‘empty’ there is a standard for what counts as a thing. To be flat is to be free of any relevant bumps, and to be empty is to be devoid of all relevant things.
What makes an alternative situation relevant? Goldman does not try to formulate a general criterion of relevance, but he does suggest examples. Suppose that a parent takes a child’s temperature with a thermometer selected at random from several lying in the medicine cabinet, and that only the particular thermometer chosen was in good working order. It correctly shows the child’s temperature to be normal; but if the temperature had been abnormal, any of the other thermometers would have erroneously shown it to be normal. The parent’s actual true belief is caused by a globally reliable process but, because it was ‘just luck’ that the parent happened to select a good thermometer, “we would not say that the parent knows that the child’s temperature is normal.” Goldman gives yet another example. Suppose:
Wally spots Ruth across the street and correctly believes that it is Ruth.
If it had been Ruth’s twin sister, Joan, he would have mistaken her
for Ruth. Does Wally know that it is Ruth? As long as there is a serious possibility that the person across the street might have been Joan rather than Ruth, . . .
we would deny that Wally knows.
Goldman suggests that the reason for denying knowledge in the thermometer example is that it was ‘just luck’ that the parent did not pick a non-working thermometer, and that in the twins example the reason is that there was ‘a serious possibility’ that the person Wally saw might have been the twin, in which case he would have been mistaken. This suggests the following criterion of relevance: an alternative situation, in which the same belief is produced in the same way but is false, is relevant just in case, at some point before the actual belief was caused, the chance of that situation’s having come about instead of the actual situation was too great. On such an account, whether a true belief amounts to knowledge turns in part on chance factors of which the believer may be entirely unaware, so that luck enters into the very conditions under which our perceptual beliefs count as knowledge.
One of the most durable and intractable issues in the history of philosophy has been the problem of universals. Closely related to this, and a major subject of debate in 20th century philosophy, has been the problem of the nature of meaning.
The problem of universals goes back to Plato and Aristotle. The matter at issue is that, on the one hand, the objects of experience are individual, particular, and concrete, while, on the other hand, the objects of thought, or most of the kinds of things that we know even about individuals, are general and abstract, i.e. universals. Thus, a house may be red, but there are many other red things, so redness is a general property, a universal. Redness can also be conceived in the abstract, separated from any particular thing, but it cannot exist in experience except as a property of some particular thing and it cannot even be imagined but with some other minimal properties, e.g. extension. Abstraction is especially conspicuous in mathematics, where numbers, geometrical shapes, and equations are studied in complete separation from experience. The question that may be asked, then, is how it is that general properties or abstract objects are related to the world, how they exist in or in relation to individual objects, and how it is that we know them when experience only seems to reveal individual things.
Plato's answer to this was that universals exist in a separate reality as special objects, distinct in kind, from the things of experience. This is Plato's famous theory of "Forms." Plato himself used the terms idéa and eîdos in Greek, which could mean the "look" of a thing, its form, or the kind or sort of a thing [Liddell and Scott, An Intermediate Greek-English Lexicon, Oxford, 1889, 1964, pp. 226 & 375]. Since Aristotle used the term eîdos to mean something else and consistently used idéa to refer to Plato's theory, in the history of philosophy we usually see references to Plato's "theory of Ideas."
Although Aristotle said that Socrates had never separated the Forms from the objects of experience, which is probably true, some of Socrates's language suggests the direction of Plato's theory. Thus, in the Euthyphro, Socrates, in asking for a definition of piety, says that he does not want to know about individual pious things, but about the "idea itself," so that he may "look upon it" and, using it "as a model [parádeigma]," judge "that any action of yours or another's that is of that kind is pious, and if it is not that it is not" [G.M.A. Grube trans., Hackett, 1986]. Plato concludes that what we "look upon" as a model, and is not an object of experience, is some other kind of real object, which has an existence elsewhere. That "elsewhere" is the "World of Forms," to which we have only had access, as the Myth of the Chariot in the Phaedrus says, before birth, and which we are now only remembering. Later, the Neoplatonists decided that we have access now, immediately and intuitively, to the Forms; but while this produces a rather different kind of theory, both epistemologically and metaphysically, it still posits universals as objects at a higher level of reality than the objects of experience (which partake of matter and evil).
Plato himself realized, as recounted in the Parmenides, that there were some problems and obscurities with his theory. Some of these could be dismissed as misunderstandings; others were more serious. Most important, however, was the nature of the connection between the objects of experience and the Forms. Individual objects "participate" in the Forms and derive their character, even, Plato says in the Republic, their existence, from the Forms, but it is never clear how this is supposed to work if the World of Forms is entirely separate from the world of experience that we have here. In the Timaeus, Plato has a Creator God, the "Demiurge," fashioning the world in the image of the Forms, but this cannot explain the on-going coming-into-being of subsequent objects that will "participate" themselves. Plato's own metaphorical language in describing the relationship, in which empirical objects are "shadows" of the Forms, probably suggested the Neoplatonic solution that such objects are attenuated emanations of Being, like dim rays of sunlight at some distance from the source.
Whether we take Plato's theory or the Neoplatonic version, there is no doubt that Plato's kind of theory about universals is one of Realism: Universals have real existence, just as much so, if not more so, than the individual objects of experience.
Aristotle also had a Realistic theory of universals, but he tried to avoid the problems with Plato's theory by not separating the universals, as objects, from the objects of experience. He "immanentized" the Forms. This meant, of course, that there still were Forms; it was just a matter of where they existed. So Aristotle even used one of Plato's terms, eîdos, to mean the universal object within a particular object. This word is more familiar to us in its Latin translation: species. In modern discussion, however, it is usually just called the "form" of the object. The Aristotelian "form" of an object, however, is not just what an object "looks" like. An individual object as an individual object is particular, not universal. The "form" of the object will be the complex of all its abstract features and properties. If the object looks red or looks round or looks ugly, then those features, as abstractions, belong to the "form." The individuality of the object cannot be due to any of those abstractions, which are universals, and so must be due to something else. To Aristotle that was the "matter" of the object. "Matter" confers individuality, "form" universality. Since everything that we can identify about an object, the kind of thing it is, what it is doing, where it is, etc., involves abstract properties, the "form" represents the actuality of an object. By contrast, the "matter" represents the potential or possibility of an object to have other properties.
These uses of "form" and "matter" are rather different from what is now familiar to us. Aristotelian "matter" is not something that we can see, so it is not what we usually mean by matter today. Similarly, Aristotelian "form" is not some superficial appearance of a fundamentally material object: It is the true actuality and existence of the object. This becomes clear when we note Aristotle's term for "actuality," which was enérgeia, what has become the modern word "energy." Similarly, the term for "potential" is familiar, dýnamis, which can also mean "power" and "strength."
The continuing dualism of Aristotle's theory emerges when we ask how the "forms" of things are known. An individual object Aristotle called a "primary substance" (where the Greek word for substance, ousía, might better be translated "essence" or "being"). The abstract "form" of an object, the universal in it, Aristotle called "secondary substance." So if what we see are individual things, the primary substances, how do we get to the universals? Aristotle postulated a certain mental function, "abstraction," by which the universal is comprehended or thought in the particular. This is the equivalent of understanding what is perceived, which means that we get to the meaning of the perception. The "form" of the thing becomes its meaning, its concept, in the mind. For Plato, in effect, the meaning of the world was only outside of it.
While the Aristotelian "form" of an object is its substance (the "substantial form") and its essence, not all abstract properties belong to the essence. The "essence" is what makes the thing what it is. Properties that are not essential to the thing are accidental, e.g. the colour or the material of a chair. Thus the contrast between "substance and accident" or "essence and accident." Accidents, however, are also universals. A contrast may also be drawn between substance and "attribute." In this distinction, all properties, whether essential or accidental, belong to the substance, the thing that "stands under" (sub-stantia in Latin, hypo-keímenon, "lie under," in Greek) all the properties and, presumably, holds them together. Since the properties of the essence are thought together through the concepts produced by abstraction, the "substance" represents the principle of unity that connects them.
Concepts, or predicates, are always universals, which means that no individual can be defined, as an individual, by concepts. "Socrates," as the name of an individual, although bringing to mind many properties, is not a property; and no matter how many properties we specify, "snub-nosed," "ugly," "clever," "condemned," etc., they conceivably could apply to some other individual. From that we have a principle, still echoed by Kant, that "[primary] substance is that which is always subject, never predicate." On the other hand, a theory that eliminates the equivalent of Aristotelian "matter," like that of Leibniz, must require that individuals as such imply a unique, perhaps infinite, number of properties. Leibniz's principle of the "identity of indiscernibles" thus postulates that individuals which cannot be distinguished from each other, i.e. have all the same discernible properties, must be the same individual.
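In standard logical notation (added here only as a gloss, not Leibniz's own formulation), the identity of indiscernibles says that things sharing every property are one and the same:

\[
\forall F\,(Fx \leftrightarrow Fy) \;\rightarrow\; x = y .
\]

The converse principle, the indiscernibility of identicals, is uncontroversial; it is the direction displayed above that does the work of replacing Aristotelian "matter" as the principle of individuation.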
One result of Aristotle's theory was a powerful explanation for natural growth. The "form" of a thing is not just what it looks like, it is the "final cause," the purpose of the thing, the "entelechy," the "end within," which is one of the causes of natural growth and change. Before the modern discovery of DNA, this was pretty much the only theory there was to account for the growth of living things from seeds or embryos into full-grown forms. Nevertheless, it introduces some difficulties into Aristotle's theory: If the "form" is accessible to understanding by abstraction, then this cannot be the same "form" as the one that contains the adult oak tree in the acorn, since no one unfamiliar with oak trees can look at an acorn and see the full form of the tree. But if the entelechy cannot be perceived and abstracted, then it exists in the object in a way different from the external "form." But Aristotle's metaphysics makes no provision, any more than quantum mechanics, for a "hidden" internal "form." Neoplatonism took care of that by making the internal "form" transcendent, as in Plato, but this is then a fatal compromise with Aristotle's prima facie empiricism and with his move to "immanentize" Plato's Forms.
This brings us to a fundamental conflict in Aristotle's theory, which highlights its drawbacks in relation to Plato's theory. If Aristotle is going to be an empiricist, thinking that knowledge comes from experience, this puts him on a slippery slope to positivism or, more precisely, "judicial positivism": that the actual is good (or, as Hegel puts it, "the Real is Rational"). The continuing virtue of Plato's theory of Forms is that the Forms can be profoundly different from the objects of experience. The Forms are perfect, and the world falls far short of them. This seems to account for important characteristics of reality, that true justice is rarely to be found, and that mathematicians describe the strangest things that have no obvious relation to experience. Aristotle's theory can accommodate this, but only by positing "forms" that are inaccessible to perception and abstraction, which would contradict any original notion in Aristotelian epistemology that knowledge comes from experience. Again, Neoplatonism takes care of this, but only at the cost of an intuitionism that is non-empirical, indeed, mystical, in the extreme, where we certainly do have access to "forms," or the Forms, apart from experience. But if Neoplatonism were correct, then it would be possible for someone to look at an acorn and, unfamiliar with the species, see what the full-grown oak would look like. This does not seem to happen on any credible testimony.
One significant consequence of Aristotle's point of view was, indeed, a belittlement of mathematics. Without mathematical Realism, we do not have the modern notion that real science is mathematical and that mathematics reveals the fundamental characteristics of nature. Mathematics cannot be thought of as "abstracted" from experience in any ordinary way. If it is not, then mathematics is just internally constructed, out of contact with reality. This seems to be Aristotle's view, a rejection of Pythagorean and Platonic mathematical Realism. Mathematics is no more than a "device for calculation." Thus, although Aristotle is usually thought of as being more "scientific" than Plato, he rejects Plato's geometrical view of the elements for the sake of a completely Presocratic sort of theory of opposites. He is overall nowhere near as interested in mathematics as Plato. Aristotle's approach became accepted, all through the Middle Ages, and it wasn't until the revival of Pythagorean-Platonic ideas about mathematics, in people like Kepler and Galileo, that modern science got going.
The Neoplatonic combination of Plato and Aristotle dominated thought in Late Antiquity and the early Middle Ages. Later, beginning in Islâm and moving into Western Europe, we have a revival of a stricter Aristotelianism, culminating in the massive Summas of St. Thomas Aquinas (1225-1274). It may not be a coincidence that this involved the rejection of the mystical elements in Neoplatonism, since Christianity was institutionally far more unfriendly to mysticism, with its promise of direct communication with God, than were Islâm or Judaism. What was rare or unheard of in Islâm or Judaism, mystics being condemned or even executed for heresy, was a fairly regular occurrence in Western Christianity. However, a stricter empiricism again creates the difficulty that the apparent "form" of an object cannot provide knowledge of an end (an entelechy) that is only implicit in the present object, and so hidden to present knowledge.
Curiously, the reaction to this was not immediately a new Platonism or Neoplatonism, but a more extreme empiricism: The Nominalists overcame the Aristotelian difficulty by rejecting Realism altogether. Universals were just "names," nomina, even just "puffs of air." The greatest exponent of this approach was the Englishman William of Ockham (1295-1349). To the Nominalists, the individuality of the objects of experience simply meant that only individuality exists in reality. The abolition of a real abstract structure to the world had a number of consequences for someone like Ockham. The omnipotence of God became absolute and unlimited, unrestricted by the mere abstractions of logic, so that God could even make contradictions real, which was inconceivable to Aristotelians or Platonists. Similarly, no things had natures (essences) that made them intrinsically either good or evil. Not even God was intrinsically good or evil: The Good would just be whatever God wills it to be, something else inconceivable to Aristotelians or Platonists -- but actually rather Islâmic in tone, since no human notion about the nature or essence of God can impose a limit on the Will of God.
Although the debate between the Realists and the Nominalists became the greatest controversy of Mediaeval philosophy, another classic expression of Nominalism is to be found in the British Empiricists, from John Locke (1632-1704) to George Berkeley (1685-1753) and David Hume (1711-1776). Locke started the approach by simply defining an "idea" as being an image. Since images are undoubtedly individual and concrete, this stacks the deck for Nominalism. Nevertheless, Locke wished to preserve something like a common sense meaning of "abstraction," which he thought of as taking some characteristic of a particular idea and using it in a general way: "the mind makes the particular ideas received from particular objects to become general." Thus, Locke cannot find any difference between the idea "horse" and the idea "Bucephalus" but "in leaving out something that is peculiar to each individual, and retaining so much of those particular complex ideas of several particular existences as they are found to agree in" [An Essay Concerning Human Understanding]. Locke even wants to preserve a distinction between "nominal essence," the nature of things that we know about, and "real essence," the real nature of things, which we cannot know about given the limitations of human knowledge [Book III, Chapter VI, §§7-18]. How this distinction could be maintained on any kind of empiricism is mysterious. Real essences and the compromise on abstract ideas were swept away by Berkeley and Hume, who quite consistently and forthrightly argued that there was no such thing as "abstract ideas." Hume said: "Let any man try to conceive a triangle in general, which is neither Isoceles nor Scalenum, nor has any particular length or proportion of sides; and he will soon perceive the absurdity of all the scholastic notions with regard to abstraction and general ideas" [An Enquiry Concerning Human Understanding].
Of course, it is quite easy to conceive of a triangle in general, which is neither isosceles nor scalene, since Hume has just done so himself. Hume's argument only works if he really means imagine rather than conceive. Hume even said: "No priestly dogmas, invented on purpose to tame and subdue the rebellious reason of mankind, ever shocked common sense more than the doctrine of the infinite divisibility of extension, with its consequences; as they are pompously displayed by all geometricians and metaphysicians, with a kind of triumph and exultation." Since infinite divisibility is rather important in geometry, and one of the "consequences . . . pompously displayed" is calculus, "geometricians" (like Isaac Newton) would probably be offended to be lumped together with metaphysicians. Hume's only recourse is that there are "general terms" to which multiple concrete "ideas" are attached. This, however, fails the Socratic test for the "model" that would enable us to judge unfamiliar objects; and while the "family resemblances" of Ludwig Wittgenstein (1889-1951) can be appealed to by Nominalists for such judgments, the imprecision implied by such a test is wholly contradicted by the practice of mathematics, while that in which a "resemblance" would consist must be, indeed, some abstract feature or collection of such features. But Hume allows for no abstract features, much less the recognition of them.
How far this silliness can go is evident in recent analytic philosophy, which fancies itself in direct succession from Hume. The consequences of the project of reducing the world to objects and words are evident in the following statement by the logician Benson Mates [Elementary Logic, Oxford, 1972]: "Another matter deserving explanation is our decision to take sentences as the objects with which logic deals. To some ears it sounds odd to say that sentences are true or false, and throughout the history of the subject there have been proposals to talk instead about statements, propositions, thoughts, or judgments. As described by their advocates, however, these latter items appear on sober consideration to share a rather serious drawback, which, to put it in the most severe manner, is this: they do not exist. Even if they did, there are a number of considerations that would justify our operating with sentences anyway. A sentence, at least in its written form, is an object having a shape accessible to sensory perception, or, at worst, it is a set of such objects. Thus 'It is raining,' and 'Es regnet,' though they may indeed be synonymous, are nonetheless a pair of easily distinguishable sentences. And in general we find that as long as we are dealing with sentences many of the properties in which the logician is interested are ascertainable by simple inspection. Only reasonably good eyesight, as contrasted with metaphysical acuity, is required to decide whether a sentence is simple or complex, affirmative or negative, or whether one sentence contains another as a part."
Reasonably good eyesight, however, is not enough to tell that "It is raining" and "Es regnet" are synonymous. That circumstance is evidently not noticed by Mates. What is needed is not eyesight, but understanding, which is nothing so esoteric as "metaphysical acuity," but instead a very simple and very common kind of thought. The "advocates" of the existence of thoughts are pretty much everyone who uses ordinary language, which presumably includes Mates himself. Given Mates's own example, it is very hard to deny that meaning is different from both words and objects. Mates, however, can indulge in a particularly Nominalist theory of meaning, which we see in his discussion of Set Theory, whereby each set is uniquely determined by its members; in other words, sets having the same members are identical.
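The principle Mates invokes here is the set-theoretic axiom of extensionality, which in standard notation (supplied here only as a gloss) reads:

\[
\forall x\,\forall y\,\bigl[\,\forall z\,(z \in x \leftrightarrow z \in y) \;\rightarrow\; x = y\,\bigr].
\]

It is precisely this identification of a set with its membership that the next paragraph turns against an extensional theory of meaning.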
However, the sets "the present [1999] King of France" and "the present [1999] King of England" both have the same members, namely none, which makes them identical with the Empty Set ("Nothing"). They are therefore in no way "uniquely determined" by their members, if we allow that their meaning, even if not their membership, is different. Thus, an "extensional" theory of meaning, which sees reference to objects as the content of meaning, must either ignore "non-existent objects" or must attribute a reality to non-existent objects greater than that allowed by common sense. Equally serious is the problem of how we would know what all the members of a non-empty set are, without omniscience, in order to be able to use the name of the set in its "uniquely determined" way. If all we know are certain members of the set, i.e. the dogs we actually know about from personal experience, then we are using the name of a subset, not the real set, of dogs.
At the beginning of the 20th century, logic rested on a much more Realistic theory of meaning and universals, that of Gottlob Frege (1848-1925). For Frege, "subject" terms referred to individuals, while "predicates," i.e. abstract properties, referred to "concepts." "Concepts," then, exist as objects. In the subject we have meaning as "sense," which is very different from reference. Thus, in his classic example, the "morning star" and the "evening star" have the same reference, namely the planet Venus, but they have different senses, namely "Venus as seen in the morning" and "Venus as seen in the evening." A crude extensionalism cannot account for this. On the other hand, Frege was no metaphysician; and we have no theory to account for the nature or existence of concepts as objects, let alone of what Frege said was the reference of sentences, namely the "True" and the "False." A philosopher looking for the metaphysics of "concepts" has little to go on beyond Aristotle and Aquinas. Frege's theory of senses, however, recently clarified by Jerrold Katz, does preclude Nominalist theories (and all naturalistic theories, like Wittgenstein's theory of meaning as "usage") that only want to stick to words and individual objects.
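Frege's distinction can be put schematically (the notation is an added illustration, not Frege's own):

\[
\mathrm{ref}(\text{``the morning star''}) = \mathrm{ref}(\text{``the evening star''}) = \text{Venus},
\qquad
\mathrm{sense}(\text{``the morning star''}) \neq \mathrm{sense}(\text{``the evening star''}).
\]

An extensional theory, which identifies meaning with reference, has no resources to register the inequality on the right-hand side.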
The possibility arises, then, that universals may exist, not in words, and also not in any kind of objects (individuals or Frege's concepts), but in the internal mechanisms of sense. This would be a "middle way" between Realism and Nominalism that has been called Conceptualism. This notion seems to go all the way back to Peter Abelard (1079-1142). The drawback of Conceptualism, however, would be that universals would not be knowledge, since the structures of meaning would correspond to nothing of the kind in the world: Universals would have to be the "pragmatic" way that we conceive or organize individuals, avoiding the silliness of a Nominalism like Mates's, but there would be no real differences in the objects that our conceptions are reflecting. Conceptualism is devoid of anything like Frege's "concepts" (or Aristotle's "forms") as abstract objects.
Metaphysically, Conceptualism is therefore no different from Nominalism. It is a psychologistic theory, i.e. it attributes structures that we see in reality to structures imposed by the human psyche. Indeed, some structures in the world are imposed by the human psyche. There is nothing natural about a coffee pot, which is an artifact of human conception and human purposes. A Platonic Form or Aristotelian substance that is the objective existence of the abstract and universal coffee pot would seem to be the reductio ad absurdum of their theories as much as the "reasonably good eyesight" is of Mates's. The conventionality of such concepts provides a powerful argument for Conceptualism, as it would also for Nominalism.
If Conceptualism were merely the argument that there is not always an objective structure to correspond to the difference between essence and accident, this would be quite true. However, it seems to be the case that there is an objective structure corresponding to some essences, since there are natural kinds of things (dogs, feldspars, stars, flowers, etc.) whose identity owes nothing to human convention or purposes. Furthermore, since all attributes (properties) are universals, whether essential or accidental, this argument would be beside the point. Even conventional concepts are based on real characteristics. A coffee pot must hold coffee, and its ability to do so owes nothing to convention but everything to the nature of the materials and even the nature of space. Those cannot be altered, much as many would like to, simply by making some change in the conventions of our conception.
If a Conceptualist allows even a moment when real differences are recognized, then, however conventional the rest of the constructions, a fundamental element of Realism has been accepted into the theory. Thus, however conventional a fundamental unit of measure may be, this does not make all fundamental units somehow the same. A metre really is more than three times as long as a foot, which means they are commensurable, i.e. each can be converted into the other. Commensurability and conversion are only possible because of the independent, objective, and real natures of each.
For a true Conceptualism or Nominalism, incommensurability, both of measure and of meaning, must be possible, which is why we find that Nominalists and deconstructionists are eager to leap on W.V.O. Quine's (1908-2000) arguments for the "indeterminacy of translation." The problem of the metaphysics of universals thus overlaps the epistemological issues and theories examined in "". A consistent Conceptualism is going to result in the same skepticism that we see in Hume or the same nihilism that we see played out in deconstruction, all because of the same denial of real universals and of meaning that has objective reference. Quine, like the deconstructionist Rorty, offers a muddled Pragmatism that obscures the non-responsiveness and question-begging nature of his thought.
Kant can be said to be a Conceptualist because of the manner in which the mind's activity of synthesis puts the concepts of reason into phenomenal objects in the first place. This is definitely a Conceptualist move. However, Kant's theory does not end up being a Conceptualist theory, or any kind of psychologistic theory, if Kant is to be taken seriously when he says that it is a theory of "empirical realism." This is commonly misunderstood. Thus Jerrold Katz's claim that "Kant's Copernican revolution . . . makes the existence of objects in the world depend on our cognitive faculties" [Realistic Rationalism] is flatly contradicted by Kant himself: "Either the object alone must make the representation possible, or the representation alone must make the object possible . . . In the latter case, representation in itself does not produce its object in so far as existence is concerned, for we are not here speaking of its causality by means of will."
If the existence of objects were produced by representation alone, this is what Kant called "intellectual intuition." Only God would have intellectual intuition. Our actual ability to produce the existence of objects is not by means of representation alone, but by means of will; otherwise the existence of objects is "given" to us. Instead, Kant's theory is that the character of objects is in part determined by the nature of representation. Since this is also the very thing we see in contemporary physics, in quantum mechanics, it becomes very hard to reject Kant as some anti-realist without also a somewhat wishful-thinking rejection of this characteristic of physics.
To think, as often happens, that things-in-themselves in Kant are what are "really" real is to contradict the meaning of "transcendental idealism," which is that transcendent objects are only "ideal," i.e. subjective. A later interpretation, although leaving out most of the subtlety of Kant's theory, clarifies the metaphysics by ruling out any order of transcendent objects, whose possibility always seems to be hovering in the background for Kant, confusing his realism. Kant, however, is correct in that we inevitably try to conceive of transcendent, which means unconditioned, objects. This generates the "dialectical illusion" of reason. Kant thought that some antinomies could be resolved as "postulates of practical reason" (God, freedom, and immortality); but the arguments for the postulates are not very strong (except for freedom), and discarding them helps guard against the temptation of critics to interpret Kant in terms of a kind of Cartesian "transcendental realism" (i.e. real objects are "out there," but it is not clear how or that we know them). If phenomenal objects, as individuals, are real, then the abstract structure (fallibly) conceived by us within them is also real. Empirical realism for phenomenal objects means that an initial Kantian Conceptualism turns into a Realism for universals.
Kant's theory, indeed, is not the kind of realism that we see in Descartes, or that was evidently desired by Einstein, where objects exist as such entirely independent of subjects. Instead, phenomenal objects presuppose the subject, and we cannot say whether their properties are "really" objective or "really" subjective - as examined in "." This is how Kant's theory can be both a form of Conceptualism and a form of Realism at the same time. Thus, if the mind conceives abstract properties, abstract properties will be in objects, because objects are just the other side of the structures found in the mind. But it would be equally true to say that the structures in the mind are just the other side of those in the objects. The Aristotelian function of "abstraction," by which universal forms are taken from objects into the mind, in these terms is less mysterious: Phenomenal objects are already in the mind, so the purely mental operation does not reach out into transcendent (Cartesian) reality to fetch the essences.
While Kant's empirical realism allows for an Aristotelian Realism of universals, it also means that we do not have to accept Aristotle's theory of substantial forms and of essence and accident. There are conventional concepts. Not all concepts therefore correspond to real essences. To think that they do is what Karl Popper called "essentialism" -- a good label for such an error, though the term is now widely used by "post-modern" nihilists to condemn any doctrine of essences or natural kinds. But there are natural kinds and real essences.
Real essences, however, must be due to something; they are not just self-generating. A clue may be found in the modern theory of DNA that has replaced the entelechy of Aristotelian "form." DNA governs the growth and development of organisms through the causal laws of nature. The natural kinds of plants and animals are thus the result of causal necessity. All essences, whether real or conventional, are the result of some form of necessity. The fixity and unchangeability of Plato's original Forms, "immanentized" by Aristotle, are artifacts of a form of necessity itself, the necessity of the perfect aspect, of time which has occurred (the past or the present perfect tenses, the opposite of Aristotle's own "future contingency"). The various modes of necessity are discussed in "" and the nature of the perfect aspect in a note to that essay. Purely conventional concepts rely on the fact of their use, which is a function of perfect necessity, for the fixity of their own conceptual essences. The entelechy of a coffee pot is owing entirely to human purposes, and to no causal necessity, but it is functionally parallel, in human understanding, to natural kinds created by causal laws of nature.
If we distinguish between substance and attribute and identify some attributes as essential, this will mean, not that there is a hidden, underlying substance unifying the essence, but that such a notion of substance can be replaced by the forms of necessity, whether causal for natural kinds or purposive for purely human conceptions. This means that the ghostly skeletons of the Platonic Forms, brought down to earth by Aristotle, and uncomfortably inhabiting the transient individuals that we perceive, can be eliminated. The abstract features we conceive in individual objects are not different in kind from the objects, which are themselves artifacts of necessity (logical, a priori, perfect, and causal), but the living skeleton of the objects, in a phenomenal world where necessity and contingency are the structure of everything.
The fixity of our own concepts collapses all the necessities of reality into the fact of conventional usage, which Plato and Aristotle projected out into the world, even into the transcendent; but it is now possible to correct this. It is not that the Concept is out among objects, as Frege put it, but mental concepts do refer to some abstract structure grounded in some form of necessity. By the same token we can identify the ground of the "True" and the "False," which Frege saw as the reference of sentences, since the same necessities that unify real or conventional essences also unify predications in sentences. Kant's doctrine of the "primacy of judgment," indeed, subordinates the unity of concepts to the unity of propositions, which enables us to say that even analytic truths are of different kinds, depending on the necessity that unifies the properties in the concepts. "All placental mammals give live birth" is thus analytic of the concept "placental mammal," which is a natural kind based in causal necessity, while "All Hobbits are short" is analytic of the concept "Hobbit," which is a fictional artifact of J.R.R. Tolkien's Lord of the Rings and so dependent on the mere fact of the convention adopted by the imagination of Tolkien.
The modes of necessity are interrelated with the modes of contingency, so that perfect necessity is contingent in relation to a priori necessity, a priori necessity is contingent in relation to logical necessity, and logical necessity is contingent in relation to an “ur-contingency” that would transcend non-contradiction. Each mode of contingency, in turn, represents the possibility of something different from what we see in each subsequent mode of necessity. The very possibility that, in time, we can open the window or make some other alteration in reality is a case where we deal with the contingency of present time and our ability to bring about some new possibility. What this adds up to for universals is that as forms of necessity they represent the rules and guideposts that limit and direct possibility: Universals represent all real possibilities. Thus, what Plato would have called the Form of the Bed really just means that beds are possible. What would have seemed like a reductio ad absurdum of Plato's theory, that if there is the Form of the Bed, there must also be the Form of the Television (which is thus not an artifact and an invented object at all, but something that the inventor has just "remembered"), now must mean that the universal represents the possibility of the television, which is a possibility based on various necessities of physics (conditioned necessities) and facts (perfect necessities) of history.
Where the power of possibility comes from is a factor unaddressed by Plato. In Aristotle it is represented by matter, which is power and potential; but then matter is so intrinsically amorphous, merely the passive recipient of actualizing "form," that the Neoplatonists identified it with Not-Being (and evil) -- quite apt when Prime Matter, or pure potential, is not actual at all and so in fact doesn't exist -- and both Aristotle and the Neoplatonists eliminated any material component to God (or the One). Rather awkwardly, this left Aristotle's God literally "powerless": He is already perfectly actual, which means that He cannot do anything that He is not already doing. This could be argued theologically, that it would be an insult to God's foreknowledge and wisdom if anything has been left undone that He is going to have to take care of in the future, but at the same time it does seem like an insult to His Omnipotence that He cannot just decide to do something new.
The failure of Aristotle's theory is that necessity and possibility are in fact interrelated, that actualization does not "use up" possibility, and that what is truly actual, phenomenal objects in the world, consists of contingent individuals and not the necessary universals of the "form." In Spinoza's metaphysics, individuals as natura naturata ("nature natured") are the visible products of coming-into-being, but the creativity of Spinoza's God is limited by a determinism that makes every event a complete product of necessity, with no contingency, and so no radical possibility, at all.
Intentionality has often been seen as the distinctive mark setting human life apart from life in general. This position has been criticised for its implied dualism and Daniel Dennett among others has put forward eliminativist accounts of mental phenomena in humans. From an evolutionary point of view absolute dualism is of course unacceptable, but rather than eliminating the peculiarities of human experiences this paper suggests that one try to trace the evolutionary origins of intentionality. It is suggested that human intentionality be seen as a special case of a more general category termed 'evolutionary intentionality'. Evolutionary intentionality is connected to the dynamic behaviour of systems based on code-duality, i.e. the perpetual reshuffling of messages back and forth between digital (DNA) and analog codes (organisms), which is the core of heredity or semiotic survival.
Mental processes such as expectations, desires or imaginations are always `about' something. If I expect you to listen to me, then this expectation concerns something which is not a part of myself. This `aboutness', whatever it is, seems to be totally absent in the physical world. A rock or a river does not represent other states of affairs. We might treat them as representations but in themselves they are not representations. Intentionality was the term Brentano introduced to characterise the idea that mental states have content (Brentano 1874/1973). And it has often been claimed that intentionality is the distinctive mark setting human mental life apart from all other phenomena in this world.
This idea of intentionality conceived as an exclusively human property has been challenged in recent times mainly from two corners. First, biologists and psychologists concerned with so-called evolutionary epistemology have suggested that intelligent animals, too, might possess a kind of intentionality; second, researchers in the field of cognitive science have claimed that in principle even computers might exhibit intentionality. Such a claim is all the more easy to maintain if, as the philosophers Patricia and Paul Churchland have suggested, concepts such as "mind", "consciousness", or "rationality" are the "ghosts" of our language, concepts without any real content, "neural phlogiston", so to say (Churchland 1986). Daniel Dennett's writings lead us to the same position, although he at least admits the heuristic value of the intentional stance. According to Dennett, "intentional systems" should be explained and predicted as if they represented things external, but this does not mean that such systems have any intrinsic intentionality (Dennett 1987).
John Searle has given a forceful philosophical criticism of these eliminativist accounts of mental processes (Searle 1992). Searle sees the discussions in cognitive science about intentionality as yet another version of the old discussion about qualia. He maintains that "first person" experiences, such as, e.g., the feeling of toothache, cannot logically be reduced to "third person" (e.g. neurobiological) descriptions. Therefore, although as a materialist he admits that all kinds of experiences are caused by the physico-chemical structure of the brain, he also thinks that the eventual description of such causes would still not grasp the fundamentally subjective feeling of these experiences, i.e. the intentionality as such would not be part of such descriptions.
While this paper shares Searle's criticism, and especially his denial of the conceptualisation of intentionality as an instantiation of a computer program, I also think that his approach leads to an unnecessarily hermetic concept of intentionality. Following the categorical system of Charles Sanders Peirce, the founder of the American semiotic tradition, we can say that intentionality (and qualia) belong to the general category of thirdness, which has to do with thought and evolution. And it is the aim of the present paper to demonstrate how a biosemiotic, i.e. a sign-theoretical, reframing of biological theory may help in justifying an evolutionary account of intentionality.
Both Sartre and Merleau-Ponty saw self-awareness as central to consciousness and intentionality, and this self-awareness should be understood as a "pre-reflexive cogito", a consciousness which was there without being reflected upon at all: "C'est la conscience non-réflexive qui rend la reflexion possible: il y a un cogito préréflexif qui est la condition du cogito cartésien" ("It is non-reflective consciousness that makes reflection possible: there is a pre-reflective cogito which is the condition of the Cartesian cogito") (Sartre 1943). Now, Merleau-Ponty's radical position is that this pre-reflective self-awareness must from the very outset be contaminated by "otherness" or alterity, otherwise intersubjectivity would be impossible: subjectivity cannot consist simply in self-presence, because if I were given to myself in an absolutely unique way, I would lack the means of ever recognising the embodied Other as another subjectivity (Merleau-Ponty 1945). This argument is further based on Merleau-Ponty's conception of subjectivity as essentially incarnated. To exist embodied is neither to exist as pure subject nor as pure object, but to exist in a way that transcends this distinction, i.e. the opposition between "pour-soi" (the for-itself) and "en-soi" (the in-itself). That self-awareness is intrinsically an embodied self-awareness implies a loss of transparency and purity, and only therefore is intersubjectivity possible.
As Dan Zahavi explains: "When I experience myself and when I experience an Other, there is in fact a common denominator. In both cases I am dealing with incarnation, and one of the features of my embodied self-awareness is that it per definition comprises an outside: I am always a stranger to myself, and therefore open to others" (Zahavi 1996).
Merleau-Ponty was writing in a context of transcendental philosophy which is rather incompatible with the evolutionary concerns of the present paper. I nevertheless think that important aspects of his conception of intentionality are represented in the fundamentally triadic structure shown in figure 1. And I believe that it is exactly this triadic nature of the mental sphere which makes it resistant to the aggressive 'scientification' launched by cognitive science. The triadic structure cannot be reduced to a combination of dyadic relations, since intentionality depends on the totality of the triad. It thus formally resembles the triadic sign relation as conceived by C. S. Peirce. According to Peirce, "A sign, or Representamen, is a First which stands in such a genuine triadic relation to a Second, called its Object, as to be capable of determining a Third, called its Interpretant, to assume the same triadic relation to its Object in which it stands itself to the same Object" (Peirce 1955). Thus, in Peirce's philosophy the Interpretant represents a category of "thirdness" that transcends mere causality, which he saw as "secondness".
All computer programs are completely based on Peircean "secondness", i.e. syntactic operations, since application of the rules governing the manipulation of the symbols does not depend upon what the symbols "mean" (their external semantics), only upon the symbol type. The problem is not only that the semantic dimension of the mental cannot be reduced to pure syntactics. As Peter Cariani explains, "there is no logical structure for the whole world so that the sign embedded in a logical 'model' bears a definite logically-necessary relation to the world as model" (Cariani 1995). The problem rather is that the semantic level itself is bound up in the unpredictable and creative power of the intentional, goal-oriented embodied mind. The Other is a Representamen which determines an Interpretant (self-awareness) to assume the same triadic relation to the body in which the Other itself stands to the body. What we should learn from this analysis of intentionality, subjectivity and self-awareness is not that these phenomena are forever beyond the horizon of science. Rather, we should learn that the key to a scientific understanding of the mental is embodied existence and not the fictitious idea of disembodied symbolic organisation which appeals so strongly to the arithmocentric minds of traditional scientists. Cariani has pointed out that "virtually all symbols are associated with biological organisms, whether for communication, control, or construction, and whether at a cellular, organismic or social level. We cannot understand symbols fully until we understand their role in the organisation of life" (Cariani 1995). A biosemiotic understanding of evolution seems to be the key to a scientific understanding of intentionality.
Cognition seen as an evolutionary product has been studied by evolutionary epistemology. Unfortunately, much work in this fascinating area has been guided by too simplistic conceptions of human cognitive abilities. Thus, much of the early work on the linguistic capacities of apes was later shown to be inadequate (Sebeok and Umiker-Sebeok 1980), and sociobiological theorising generally commits the error of misplaced concreteness when personality traits are reified as natural objects of indubitable ontological status. The fundamental challenge for evolutionary epistemology, as I see it, is to accept that Peircean "thirdness" is real. The intentionality of human mental life is not just a "ghost", and yet it must have evolved from something else; it must have been present as a germ in our most closely related animals. In a strange way Merleau-Ponty himself gives us a cue when he observes that 'originally consciousness is not a "I think that" but a "I can"' (Merleau-Ponty 1945). Nervous systems and brains belong to animals - they never appeared in plants - and from the evolutionary beginning their function was to guide body-action, behaviour. It is a well-known fact that animals can and do dream. This implies that mental states can be uncoupled from bodily action. But the extent of uncoupling between behaviour and mental activity which characterises the human mind is probably unique to that specific animal. The uncoupling makes philosophers wonder how it can be that mental states are always 'about' something. But this is because they don't consider that mental 'aboutness', human intentionality, grew out of a bodily 'aboutness'. Whatever an organism senses also means something to it: food, escape, sexual reproduction, etc. This is one of the major insights brought to light through the work of Jakob von Uexküll: "Every action, therefore, that consists of perception and operation imprints its meaning on the meaningless object and thereby makes it into a subject-related meaning-carrier in the respective Umwelt" (Uexküll 1940/1982). "Umwelt" was Uexküll's term for the phenomenal worlds of animals, the subjective universe in which the animals live, or in other words the ecological niche as the organism itself perceives it.
Rather than pursuing the question of animal intentionality (see Sebeok 1986 for interesting examples), I shall address the question of intentionality as an even more general category of life, an evolutionary "aboutness" or evolutionary intentionality, i.e. the anticipatory power implicitly present in all systems based on code-duality (Hoffmeyer 1995, Hoffmeyer and Emmeche 1991).
Code-duality refers to the fact that living systems always form a unity of two coded and interacting messages, the analog-coded message of the organism itself and its re-description in the digital code of DNA. As analog codes, the organisms recognise and interact with each other in ecological space, giving rise to a horizontal semiotic system (the ecological hierarchy of Salthe (1985)), while as digital codes they (after possible recombination through meiosis and fertilisation in sexually reproducing species) are passively carried forward in time between generations. This of course is the process responsible for nature's vertical semiotic system, the genealogical hierarchy (Salthe 1985). Thus, heredity should be understood as `semiotic survival' (Hoffmeyer 1995).
Code-dual systems are anticipatory in the sense that the digital code (the gene pool) records specifications which did work well enough in the past, and which are then used by the analog-coded organisms to cope with the immediate future, thereby eventually assuring semiotic survival into the more distant future. This of course is anticipation in the most primitive sense of extrapolation from the past (most human anticipation is of this kind too). But the fundamentally semiotic character of this system assured, very early in evolution, the creation of sense facilities to strengthen anticipation. Let us now consider an example of evolutionary intentionality (Hoffmeyer 1995b). The Malayan praying mantis, Hymenopus bicornis, is pink and rests on the flowers of Melastoma polyanthum, which it closely resembles in colour and shape. Insects attracted to the flower are caught by the mantis. Clearly, the mantis falsely 'pretends' to be part of the flower. This is as good an example as any of what I propose to call an evolutionary lie. Here, of course, no mental processes are at play, and the mantis doesn't know that it fools the insect. But if analysed at the time scale of evolution, the intentionality of the deception is hard to overlook.
The deception was in fact intended in the sense that the 'aboutness' of the evolutionary lineage of our mantis's ancestors, i.e. its inherent project of surviving, made the lineage select a strategy which it had 'learned' was effective in deceiving the prey. The term "select" in this context is meant to imply that the lineage as a historical entity is capable of measuring niche conditions and interpreting them in terms of its own historically appropriated behavioural capacities, including its reproductive potential. In this understanding, the single mantis doesn't lie, but it is nevertheless an integral part of the lying lineage to which it belongs. Seen in the historical setting in which the adaptation took place, the 'resemblance' between mantis and flower was meant to be a (false) 'representation', i.e. it was a lie. Lying here takes place not at the level of the individual, but at the level of the lineage. If it is objected that evolutionary lineages cannot possibly form representations and that therefore they cannot do anything semiotic, I think the answer will be that such a claim presupposes a very narrow conception of what a representation is. For comparison, let us consider the case of a human visual representation, such as, e.g., that of a person who has had the bad fortune of witnessing a man falling to his death from a balloon. The icon formed in the mind of this person will be some mental representation of a complex and changing pattern of a firing collective of neurones coupled to a whole lot of other bodily processes. In the evolving mantis lineage, on the other hand, what we see is that the circumstances, i.e. the fact of preferred bugs feeding on pink flowers, caused an icon to form in the lineage, consisting in the phenotypic behaviour of climbing certain pink flowers. This phenotypic behaviour is no more and no less causally connected to the feeding habits of the bugs than the vision of a falling balloon is causally linked to the actual case of a falling balloon. In both cases a representation takes place. In the case of the lineage, the behaviour is some phenotypic representation of patterns of gene expression which again represent the natural history of the lineage. In the case of vision, too, the relation between moving objects and firing neurones is based on personal experiences (a baby cannot form this kind of icon).
Generalising from this example we can now represent evolutionary intentionality graphically as a triadic structure which is formally analogous to the triadic structure of human intentionality. The ecological niche is a sign or Representamen which determines an Interpretant (the actual pattern of life and reproduction) to assume the same triadic relation to the lineage in which the niche itself stands to the same lineage.
Ecological niche conditions thus occupy the same logical position in evolutionary intentionality as "otherness" occupies in human intentionality. At first this suggestion may seem strange, but it should be understood in the light of Jakob von Uexküll's Umwelt theory (Uexküll 1940/1982). The Umwelt of an organism is to a large extent a species-specific Umwelt; e.g., the Umwelt of bees will generally contain a lot of vision in the ultraviolet area, which is not part of the human Umwelt (unless we take advantage of our technical skills). The Umwelt represents a kind of collective memory created through the phylogenetic history of the lineage under the given ecological niche conditions. The Umwelt therefore represents a biological counterpart to the internalised otherness at the basis of human self-awareness.
The actual pattern of life and reproduction obviously takes the position of the Interpretant. This pattern refers to the lineage since it is in fact incarnated in the body of the lineage and thus is reflected as hereditary changes in that "body" over time. Niche conditions are represented as survival strategies of the populations which constitute the lineage. But the core of this whole dynamic system is code-duality. The objectivity of the digitally coded message (the pool of genotypes) and the subjectivity of the analogue-coded messages (the corporeal organisms) are the biological counterparts to the "pour-soi" and the "en-soi". Just as in Merleau-Ponty's conception the non-coincidence of the subject depends on the self-referential temporality of the body-mind, so the non-coincidence of the lineage is based on the self-referential temporality of heredity, i.e. on the perpetual translations back and forth between the digital and analogue versions of the message through the processes of reproduction and ontogenesis.
This account agrees with Searle that the essence of human intentionality cannot be fully captured through third-person descriptions, but it denies that human intentionality is categorically distinct from the phenomena of the natural world. Following the semiotic track laid out in the philosophy of Peirce, it is claimed that human intentionality has emerged as a peculiar, corporeally individualised instantiation of a more general thirdness which is embedded as an irreducible element in the process of organismic evolution: evolutionary intentionality.
Its objective is to examine the philosophical relevance of mind techno-science (MTS), and why philosophy finds itself in a paradoxical situation where it cannot ignore this new field of knowledge and at the same time has to reinvent itself outside its realm. In order to reach this objective, it is necessary to clarify the present interactions between artificial intelligence (AI), cognitive sciences (CS), virtual reality (VR), the humanities, their present conjuncture (post-modernism), and other issues that will be progressively conceptualized. The reason for the connection of these different fields of research seems obvious, but is, in fact, less than clear: the form and content of this connection raise questions that cannot be answered in any one of these fields alone. To deal with this general problem not only requires finding the proper information and methodology, it requires an understanding of the epistemic conjuncture at its core. The questions are many, all more or less confused: what sort of epistemic conjuncture post-modernism finds itself in; why AI and CS are in a situation beyond the reach of their actual practice, which they nevertheless cannot afford to ignore because it concerns their epistemic and academic environment.
Already the protests from many readers: French fog. Indeed my perspective will appear at first non-analytical, even anti-analytical. But the overplayed opposition between the two traditions takes, in this precise case, a distinctly different aspect: it is between clarifying the already largely debated problems and questioning these very problems through an analysis of their presuppositions. The risk is fully accepted; my view concerns the forest more than the trees. It concerns the forms of argumentation at the root of these problems and the way to deal with them. A two-layered reading scheme is herewith proposed: the first at the level of the global argument, the second at the level of the various problems crossed by the first one and usually discussed by cognitive and AI scientists and philosophers. This perspective asserts that this first level has its own relative autonomy, and that it can be analysed with a rigour which, regarding its intrinsic intricacies, satisfies the minimal standards of an analytic tradition. If some parts of the argument do not seem satisfactory, I hope they can be rectified to open the way to a proper knowledge. Philosophy in any case cannot pretend to deliver much more.
The starting point is a common sense question: how can one assert that the various sub-disciplines covered by the notions of AI and CS are generating knowledge which can be transferred to the humanities in order to provide knowledge of what is called mind in this field? Is the transfer able to preserve the knowledge value of what is being exported from one field to another?
According to present research in philosophy and historical epistemology in the field called "humanities," mind is not a substance; it is a function within a symbolic order. This order is constituted by a hierarchy of different disciplines that remained relatively stable over a certain period of time, until the end of the nineteenth century. Indeed, since the 1850s and 1870s, successive mutations in logic, physics, and mathematics have deconstructed this symbolic order to an extent which seems (at least to me) not fully evaluated even today. The function at the core of the symbolic order had been hypostasised by the philosophical tradition in a conception of the mind, of its capacities (faculties), of its assignments in society, culture, and/or civilization. In any case, the historical hypostatization of this function cannot be taken for a knowledge of the mind, but it has effectively opened the possibility of transforming the function of the mind into an object of science, even of experimental investigation, from the mid-nineteenth century on.
This function has imparted to the mind different roles, the most important of them being the origin of knowledge through the different faculties with which the mind was endowed in order to satisfy the function it was given within this symbolic order. So the mind came to be known and understood as the foundation of all sciences. The real sense of this is the following: in return, any development of the sciences and the knowledge they produce is to be referred to the activities of the mind and herewith contributes both to its development (the historical unfolding of its virtual capacities) and to its own knowledge. Mind knows itself through the development of the different forms of knowledge it makes possible. In this symbolic order centred on the function of the mind, the role of philosophy is essential: its role is to extract from the sciences the knowledge of the mind they carry and to refer this progress back to the mind as a deepening of the knowledge of itself necessary to accomplish its assignment. This construction of the mind through its function within a symbolic order has produced, since the seventeenth century, a major ideology: the progress of the sciences, being a progress of the knowledge of the mind, is a progress of all the individual minds and, as such, a progress of humanity or mankind. In his late works, Husserl clearly expressed this idea and the consequences of its regression.
The humanities are a set of disciplines at the core of a symbolic order; they are regulated by philosophy. These disciplines, developed in the intimacy of the modern mind, are supposed to be its closest expressions, the fulfilment of its powers, the medium of humanity. The modern conception of man is built up through the humanities as the presence of the mind in the world. Within the humanities, philosophy is defined as the exercise of reason. What is reason? Reason is supposed to be anchored in the mind as the origin and canon of all its activities; it exhibits and actualizes itself when it extracts from the different fields of knowledge that which concerns the mind, so that the mind recognizes itself in its own productions. Reason is the self-reflection of the mind, the mind in search of itself in its activities. Philosophy, as reason at work in the mind, is the mental process in which all the different expressions of the mind are related to each other in the understanding of their origin. Its duty is to associate (even integrate) each individual mind, and their constructs, into the generic mind of mankind. So philosophy constantly weaves the humanities with their different historical patterns; it asserts their coherence within the concept of man as origin and end of all knowledge.
In such a brief summary, the argument may appear slightly ridiculous, as strange as a summary of any myth of an ancient people in the ancient Near East or Africa. But this mythology has been repeated for so long in Europe and America, and has produced such wide effects, that its failure at the end of the nineteenth century, and its fast withdrawal mostly since the 1960s, leave a void and a nostalgia that the majority of philosophical research tends simply to fill, explicitly or not. The mind techno-science is reaching philosophy and the humanities in this precise context. One idea is to be gained from this approach à la Foucault. There is no doubt that AI and CS are progressively building an effective knowledge of what they define as mind. But in no way can the inter-discipline emerging at their intersection satisfy the modern function of mind. Neither their programs, nor their results, nor their internal debates can be interpreted inside the modern symbolic order, within this hierarchical organisation of different disciplines that had an endogenic development from the European sixteenth century until the end of the nineteenth. This body of knowledge being effectively produced cannot be referred to the modern mind conceived as the origin of different faculties and at work in the knowledge gained from them. The mind techno-science cannot have as its goal the deepening of the knowledge man (the subject of the humanities) has of himself as the origin and end of all human things. It cannot pretend to participate in the spiritual betterment of humanity, nor to restore a vanished order.
The reason is that the conditions of the formation and coherence of the modern humanities are no longer satisfied. The traditional part played by the humanities in culture and society has vanished. The crisis of the humanities is not only a fashionable theme in the humanities departments of the industrial-world universities. Since the end of the nineteenth century it has been a fact, an epistemic situation, the consequences of which are difficult to assess fully. The humanities crisis is the most obvious consequence of a deeper transformation of the symbolic order that aggregates the different fields of knowledge within one another. Physics and mathematics dropped out in the 1880s. They no longer referred to philosophy and, through it, to the activities of a mind: they were building within themselves and by themselves their own foundations. This explains why the humanities are nowadays mostly reduced to philosophy, and philosophy itself is divided between a quest for a back seat in the sciences and literary theory.
AI and CS are rising in this very peculiar epistemic conjuncture. A place has been left vacant to be occupied. Professional philosophers are still being trained in the different modern schools. A reconstruction of the modern function of philosophy is possible, even anticipated and asked for: the roads are drawn, the problems are well known (mind/body, mind/brain, physical/physiological, natural/artificial, etc.). The philosopher E. Husserl even tried at the beginning of the twentieth century to reconstruct the modern conception of philosophy: perhaps he failed because he did not have a proper conception of mind at his disposal! Now new answers from the mind techno-science can be provided; they are able to justify the old questions of the philosophical tradition. A ground knowledge can be deciphered through the controversies of the scientists and engineers who are ignorant of philosophy. The grand program of a reconstruction of the humanities can be designed. The present conjuncture is certainly an ambiguous opportunity for philosophy, but there is no such place to occupy, no such function to fulfil. The function has vanished. AI and CS are not coming to save the humanities. Neither are they going to take their place, because philosophy has failed to play its role. The mind techno-science will not fulfil Husserl's utopia to transform philosophy into a science.
Because of its methodology, problems, and criteria, the analytical tradition seems, for the moment, bound to reconstruct itself in the cognitive sciences: it feels itself independent of the epistemic conjuncture. Paradoxically, a style of philosophy coming from research as diverse as that of M. Foucault or P. Bourdieu has the potential to overcome the modern frame of philosophical problems and even to arrive at the rigour that it has been missing. The mind techno-science emancipates philosophy from its modern function. This is why it belongs to the post-modern epistemic conjuncture. The humanities cannot be revamped by the mind sciences, but only further deconstructed. Philosophy has to overcome its nostalgia and explore the virtualities of the present situation, the post-modern experience.
The epistemic conjuncture is more intricate. Even if AI and CS research is at a loss to provide the reconstruction of the humanities, even if this historic mythology is forgotten, the sciences of the mind raise important epistemological questions. The French epistemological tradition holds and shows that each science develops itself by the construction of its object. Through its concepts, formalisms, and experimental procedures, a theory filters the phenomena and herewith generates a quasi-object reduced to a set of parameters that can be experimentally studied. This quasi-object is not a mental construction. It grows within the development of a theory and its experimental basis and indicates the type of properties an object has within a discipline or sub-discipline. It cannot be separated from the theory to which it is linked and from the instruments by which this theory develops the different experiments through which it proves and disproves itself. Even sciences at a primitive stage of their development, when they are not yet clearly cut off from folk knowledge, are already constructing a quasi-object. The object of any science is always, as coined by Bruno Latour, a "hybrid," indistinctly natural and artificial.
This entails two major consequences. First, it cannot be asserted that reality can be reduced to what is known by a science. Second, there is no other way than science to know what reality is. So the Real (what is reality) cannot be called upon outside of science, through philosophy, any belief, intuition, theology or poetry. But in return, the different sciences are not providing societies with a unified or unanimous knowledge of what the reality they study is by itself. Scientific knowledge cannot be cut off from the methods through which it is produced: the objects, reality or levels of reality any science investigates are defined by a theory and its method of experimentation.
From this epistemological point of view, it follows that AI research cannot state what intelligence is by itself. But intelligence cannot be known outside of the different sciences that are being built. This is why cognition is the quasi-object of the mind sciences. Cognition is not the object they are trying to know as if it existed by itself. Cognition is being constructed according to the development of these sciences and their interactions. It is a concept by which these different sciences give an operational name to the quasi-object they are producing. Intelligence, as the essence of the human mind, does not have to be protected from mind techno-science, nor is it necessary to prove and explain at length that intelligence is not what these sciences are studying. The epistemological explanation is a sufficient answer that should dry up many popular and philosophical debates arising from the ghost of the humanities.
The epistemic conjuncture and its problems are much more complex. Indeed the present epistemological situation of the mind sciences is ambiguous and partly explains the philosophical temptations denounced above. As they progress from computer-science models to the connectionist paradigm, they overcome the initial behaviorist model dictating the processes being studied: the initial models were simply falsified by the very processes they made it possible to investigate; they had to be refined, and new ones were slowly proposed. In this situation, the mind sciences require a finer description of their quasi-object, based on more complex conceptual models. The filters have to change, and they have been changing over the last fifteen years. But precisely because these sciences are investigating at the same time by experimentation and by computer simulation, they are not, for the time being, able to define and construct by themselves this hybrid (cognition, intelligence, and their different modalities) which is their quasi-object. Within their investigation, a type of cognition has to be so drastically reduced that the mind sciences cannot pretend to explain what they are supposed to. At the same time they need a full conception of this object in order to reduce it to the parameters at their disposal. This is their present and temporary epistemological deficit.
So the mind sciences find themselves in the position of requiring a pre-description of their object. They have to look for it outside, import it from outside, because they do not yet have the theoretical means to build the filters in which the effective cognitive or intelligent processes could be analysed into related parameters, so that they could be reconstructed and tested. Of course, this is how the hard mind sciences are already progressing, but they are still under the influence of folk psychology and the pre-description of cognitive behaviours. In any case, the problem is not that intelligence or cognition cannot become an object of science, but that the present reduction is too strong and needs to be related to different pre-descriptions outside the mind sciences.
But here philosophy enters the game. For the time being, different historical schools amply provide such pre-descriptions, because their linguistic, self-reflective methods based on the potentials of natural logic were the only ones available to describe basic mental processes such as belief, cognition, attention, perception, intention, etc. Phenomenology and its different trends provide an important and more recent stock of relatively well-refined descriptions of mental states and processes. These schools can provide the badly needed pre-descriptions, and their specialists can revive them and position themselves in the very development of the mind sciences.
This is a false conception of the present situation of philosophy. It is just a way for modern philosophy to continue its routine and even to pretend to provide (unexpected) true (scientific) answers to old problems. Everybody seems satisfied in this false association: modern philosophical inquiry seems justified instead of being disqualified, and the mind scientists gain some ideological prestige they do not even need. Indeed, if the epistemic conjuncture concerns the global organisation of knowledge, the epistemological situation concerns the state of development of a discipline or of a theory. The epistemological situation of mind techno-science explains why it is so concerned with philosophical issues, but it also explains why some philosophical schools find so much interest in them: they can recycle their presuppositions with fresh data, launch debates, and even provide guidelines or orientations. Epistemology teaches that the present situation is only a temporary step. The next one is all the easier to predict because it has already happened: the formation of the connectionist paradigm shows how the mind sciences are becoming able to provide the filters for their own descriptions of the cognitive processes they are investigating. They are in the process of reducing their dependence on linguistic self-descriptions provided by philosophy and folk cognitive psychology. Connectionism attests to the emergence of the mind sciences as the autonomous inter-discipline I have been calling "mind techno-science." A decisive step has been reached.
In such a situation, the domain of modern philosophy is even further contested. The problem is not at all that the mind has become a proper object of science; that has been the case since the 1850s. The problem is that the mind sciences have become able to construct themselves outside of the conception of philosophy which pretends to decide what mind is or is not, if the knowledge to be gained is possible or not, valid or not. Mind techno-science, by becoming autonomous, implicitly shows that even the analytical approach is neutral regarding its development. Just as physics and mathematics had become autonomous in the late nineteenth century, a science and a technology of the mind have become possible. This mind techno-science cannot even become a substitute for the humanities, a ground knowledge: the positivist dream is no longer feasible, simply because the order of knowledge (the web of interactions between fields of knowledge and practices) is no longer organized in a way to make it possible. The exercise of philosophy has become external to the knowledge of mind. Philosophy cannot pretend any longer that mind is its sanctuary, a strange object appearing to itself when it is described and analysed by this peculiar use of language called philosophy. Philosophy finds itself outside the mind, the mind of the philosophers as well as the mind of humanity or mankind. In fact it seems (for the moment at least) nowhere and everywhere.
This final uprooting of philosophy need not be dramatized. In the present situation, the task of philosophy is certainly difficult to perform, but it is at the same time quite obvious. First it is necessary to avoid any pretended fear or anxiety about (re)constructed mind, virtual (parallel) reality, artificial (non-natural) intelligence, as if we were waiting for a new Frankenstein under the cover of a Heideggerian conception of technè. Any form of post-modern blues or pathos (the end of all things modern, the philosopher as guru) is quite superfluous and rhetorical. The path is actually predictable. Philosophy has to learn from the mind techno-science what mind is, not what it is outside of these sciences but how it is constructed, debated, and investigated in the formation and development of this inter-discipline. Philosophers of the mind have to internalize their investigations within the mind sciences; they will probably have to become mind scientists in order to become their epistemologists. There is one obvious reason to justify this tentative assertion: many mind scientists have de facto become the epistemologists of their discipline, and this work inside their own practice has played a major role in the various developments of their field. Regarding language, just as there is a parapsychology, philosophers have in the past been able to play the role of para-linguists: they pretended for a long time that they were producing some knowledge of language, even if it was and still is difficult to establish its clear status. Somehow this prospect seems doomed regarding mind techno-science: its epistemology is already at work. This is the reason why I think modern philosophy has no regeneration to expect from the formation of a mind techno-science. It is just another proof of the need and opportunity for philosophy to reinvent itself, as it always did throughout its history. But the present conjuncture cannot be compared with the 1920s, when the young Heidegger understood that the programme of his master, Edmund Husserl, was impossible to fulfil and had become a utopia. The modern philosophy of the subject could not be reconstructed in order to save the role of reason as well as the function of philosophy in European civilization. The epistemic conjuncture was a dramatic philosophical situation: if this reconstruction could not be properly accomplished, it meant that the new sciences of the late nineteenth and early twentieth centuries could not improve the knowledge of the mind required for the progress of humanity. Certainly Heidegger's solution has been worse than what he denounced as impossible. But his thought has been thoroughly developed, studied, and sufficiently understood. Who can pretend nowadays that philosophy can discover what it was before (and therefore after) it was linked to science, the individual subject, modern society, etc.? The Heideggerian solution is achieved, as are its pseudo-scientific opposites. Philosophy cannot ignore the development of mind techno-science.
Still, even if he provided the wrong answer, Heidegger has left us with the right question. The question of thought is indeed the relevant one, as long as thought knows how to invent and discover what it could be by experimenting with its position and relevance within the different fields of knowledge. At present, philosophy seems only possible as thought producing itself in an order of knowledge that nobody knows but that everybody practices in his research. Indeed, thought is concerned by the mind sciences, but neither to be digested by them as their epistemology, nor to ignore them and express its own possibility as fiction (fabula) or as a form of literature (d'écriture).
So the role of philosophy is not to reflect upon the mind sciences, but to think how thought is concerned by the development of mind techno-science, because that development investigates what is thought and what is thinking in a mind. This problem becomes possible at the interaction between mind techno-science (AI, CS, neuro-physiology, etc.) and the practice of thought. What does thinking become when the various operations that have traditionally defined thought as cognition are simulated and mechanized, i.e. become reproducible by artefacts and machines, even if these artefacts are very abstract and formal ones? A non-modern distinction between thought and intelligence then becomes necessary, because in the present situation the problem concerns not only what an intelligent behaviour is, but the very intelligence of thought. Intelligence indeed has many forms and many levels, but it can only be known or investigated as a type of response to some change in an environment, such that the said intelligent subject or entity reaches, through this intelligent process, a better (or new) adaptation to its environment and/or is capable of preserving or developing its autonomy. Thought, to be intelligible/intelligent, requires being treated as a behaviour or a process. This very situation changes the relation between intelligence and thought. It forces thought to gain a new intelligence of intelligence, and thought is profoundly transformed by this situation. This experience seems to me one of the most radical questions for philosophy at present.
This proves that the situation of thought and intelligence is at the core of mind techno-science. Not only does it guide its development but it presides over the progressive association of the different fields of research composing it today. It has basically been a technological question since the 1940s, and an analysis of this technology is able to clarify the question and situation of thinking today, as well as some problems raised by the relations between AI, CS, virtual reality, etc. I will call it "Intelligence Technology" (IT) in order to exhibit that it is not a technique as a means to realize some goals under the guidance of some ideal (man, reason, spirit, etc.) or under the power of some interest (economic, etc.). This technology generates within itself its conception of thought and makes possible at its border another conception of thinking. As already shown, the humanities, either as a modern ideal or as academic institutions, are not directly concerned by this question, except through the very possibility and relevance of philosophy.
The question reads: is there any way to some intelligence in Intelligence Technology? The answer is the opening of another thought that can only be proved in action. This interaction within thinking, between thought and its intelligence, is the question: no theory can be made, it has just to be tried out. But I certainly do not intend to take a heroic stand and enunciate what is thinking today. On the contrary, the situation need not be dramatized, because it has already occurred in the history of philosophy, even if the problems to raise and the answers to provide have to be original. Indeed, in the early seventeenth century, Descartes saw that analytic geometry was introducing new ways of organizing thinking, a new form of intelligence. It did not concern the mind itself but its conception, not cognitive behaviours as the spontaneous activities of this mind, but a conception of knowledge and thought over-imposed on the mind. A new mind was not constructed; rather, a new image of the mind in its act of thinking was constructed, and a new definition of man became possible. This is what Descartes called method, and he formulated its basic rules, not for them to be simply applied and followed, but to exhibit that a new organization and practice of thought were possible, that they could be explored, and that the results of the exploration could transform the different fields of knowledge, and even open up new ones. His work was very dependent on the order of knowledge that he was at the same time contributing to establish. This "method" could be called today a model of rationality.
It has probably been proven by now that I am no Descartes, but the situation of philosophy in his epistemic conjuncture is quite similar to ours. IT is offering a new method, and its basic rules or steps can be formulated; they were born in computer science and information technology, and my objective is to show that they play a major role in mind techno-science. The description of these rules will not teach anything new to anyone working in these fields, but this is precisely the reason why it is so important to exhibit them.
The form of what is given (investigated) is a behaviour, a process or the function of a process. So the function always supposes a process and every process expresses a function. The first step of the method is the description of the process, i.e. its analysis in order to discern its different phases, the elementary functions composing it. This analysis is the uncovering of the structure of the process or of a function in a process. Structure can be symbolic or, according to the connectionist paradigm, subsymbolic. Indeed, the concept of structure designates a level in the analysis of phenomena and not a specific type of formal theory. What is here investigated are the properties of this level and this requires the development of original descriptive and explanatory hypotheses. The key point in IT is the relation between this structure and the process from which it was exhibited.
The second step is the expression of this structure in a formal language. Traditionally this was a mathematical one, but in IT the problem is not only the formal language itself, but the language in which this structure, adequately formalized, can be programmed so that it can be reproduced and therefore the function itself simulated. The very stake of this second step is the decisive character of IT: once a structure is expressed (in the biotechnological sense) in a formal language, it can be programmed so that it becomes possible to interfere with it, to introduce variations in order to better satisfy the function or eventually to act upon the function itself. This potential action within the structure on the function raises fundamental questions. IT makes it possible to express structures by interfering with them, to simulate or develop new versions of any function or new functions that have in common a structure or some elements of one. To be able to analyze the structure of a function in order to act upon it, and so to find within this very structure variations of the function or new functions, is what is at stake and has to be thought. Functions have, in fact, become virtual modalities of structures within a technology. In IT, structures are neutral regarding the functions from which they have been gathered. The consequences of this fact are innumerable and effectively bring humanity (the human community) into a new age of its evolution.
The third step is to select the medium capable of expressing the structure and its virtualities in order to fulfil the function. The medium is the carrier of the structure; it can, for instance, transmit it, introduce it into an artefact (any object, machine, etc.), etc. It actualizes the structure in an artefact, in a given environment, and for a certain task. Strictly speaking, the medium does not carry or embody the structure itself but the structure being programmed to perform a function or a set of functions. The carrier is somehow the matter in the Aristotelian sense, programmed or programmable. The decisive point is that in IT the medium is neutral regarding the structure it expresses, as the structure is neutral regarding the function. The same medium can carry different structures and, more important for our objective, the same structure can be expressed by different media.
To follow Descartes's lead, the fourth step is to program the structure in the medium in order to perform the function, reproducing its various steps and their order. The fifth step is to test the program to make sure that every moment of the initial or intended process is adequately satisfied.
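To make the five steps easier to follow, here is a minimal, purely illustrative Python sketch that walks a deliberately trivial process (putting a sequence of numbers in order) through description, formalization, choice of medium, programming, and testing. Every function name and the toy example are my assumptions, not the author's formalism.

```python
# Purely illustrative sketch of the five steps described above, applied to a
# deliberately trivial "function": putting a sequence of numbers in order.
# All names and the toy example are hypothetical, not the author's formalism.

def describe_structure(observed_process):
    """Step 1: analyse the observed process into its elementary phases."""
    return ["compare neighbouring items", "swap them when out of order", "repeat until stable"]

def formalize(structure):
    """Step 2: express the structure in a formal (here: executable) language."""
    def program(xs):
        xs = list(xs)
        changed = True
        while changed:
            changed = False
            for i in range(len(xs) - 1):
                if xs[i] > xs[i + 1]:
                    xs[i], xs[i + 1] = xs[i + 1], xs[i]
                    changed = True
        return xs
    return program

def install_in_medium(program):
    """Steps 3 and 4: select a carrier and program the structure in it.
    Here the 'medium' is simply the Python interpreter executing the program."""
    return program

def test(program, cases):
    """Step 5: check that every moment of the intended process is satisfied."""
    return all(program(given) == expected for given, expected in cases)

structure = describe_structure(observed_process="ordering a handful of numbers")
artefact = install_in_medium(formalize(structure))
print(test(artefact, cases=[([3, 1, 2], [1, 2, 3]), ([], [])]))  # True
```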
This is the effective situation of thought today, and many points can now be clarified. The first one concerns some aspects of what virtual reality means. IT brings in a radically new conception of structure. Since the Greeks, structure has been conceived as an autonomous and formal level of determination in reality, expressed and treated by mathematics. Now structure is not only the form of an object, of an entity or a process; it has become the intelligence of a process. This technology manipulates the structure it analyses and installs in it the results of these manipulations. So in IT, a structure includes its virtualities, and the analysis of a process generates the virtualities of this process. This initial or actual process is to be conceived as the existing actualization of a set of virtualities internal to the structure and constituting it. This is made possible because the structure is programmable in a medium (or carrier) which over-determines the object and is over-imposed on it, so as to reconstruct it and make an artefact out of it.
The management of structures has become effective within their objects, entities or processes. It opens a radical transformation of our conceptions of any being. From now on, any being includes in itself its other modalities as part of its own being. Heidegger explained that things had become objects for subjects who were perceiving them and reducing them to what they appeared to be for them. Now the objects are becoming artefacts: what the subject perceives is only one modality of an artefact whose structure includes other modalities that exist only through IT. The individuality of an artefact comprehends virtualities which can be actualized by a technology. So virtual reality is not another reality, it is the reality. Reality has become virtual. This does not mean that what is virtual is not real and that in post-modernity reality vanishes in the realm of artefacts. It is another experience of reality: the actual or existing reality contains its virtualities, other types of actualization. The object and the subject are overlapping.
Has the substance of the subject therefore become its structure, which includes its potentials? Yes, if this means that the subject is no longer closed within one's self, some master of his own being. But since Heidegger, philosophy has exhausted this interpretation. The answer is rather to be found in the negative: according to a model of rationality derived from IT, the structure cannot be reduced to the form or the dunamis in Aristotle, or to a program in genetics. The reason is, according to the form of the given, that the analysis of structures in IT has as its purpose the knowledge of functions or processes. IT transforms the conception of knowledge into a virtual action, inside the process, on the functions it satisfies: the knowledge of the process is a virtual action on the function. So the clear objective of this type of knowledge is not to study pre-programmed potentials already inscribed in a code or in the substance of a subject in order to make or let it happen. It is not a return to or a reconstruction of an Aristotelian paradigm. On the contrary, the stake seems to be the opening of the structure, the introduction into it, through a given technology, of virtualities that have to be interpreted and decided upon according to the functions they are supposed to accomplish. In short, IT is not a study of what is already there but of what can happen within what there is. One reaches the most controversial point of this paper, and it needs to be justified or falsified: the function sets the limit of the technology. IT seems to be a technology that constructs its limit into itself.
The second point to be clarified is central to mind techno-science and concerns the relation between mind, brain, computer science, physics, neurology, etc. My remarks will be strictly philosophical and do not pretend to have any practical epistemological relevance; they just follow from the argumentation being built up. My assumption is that mind techno-science is presently over-determined by the model of rationality at the core of IT. This explains why mind is conceived as cognition, and why cognition is in turn reduced to various cognitive behaviours or processes like problem solving, belief, attention, perception, etc. In fact what falls under cognition is an analysis of different cognitive structures. This examination can only be achieved in IT, at a symbolic or subsymbolic level, by their modelling in the field of computer science. Therefore the problem is not whether mind is or is not a computer, nor what sort of computer a mind is. Certainly mind is not a computer, but computer science is at present the analysis of the structure of cognitive processes. To understand this fact and not fall into the trap of endless controversies, one has to remember that mind techno-science cannot be thought of as the present and future substitute for philosophy or the humanities. The whole (false) problem simply mixes the level of the structure and the level of the medium.
It was just argued that the level of the structure is neutral regarding the level of the medium, that a structure can be expressed by different carriers. From the point of view of IT, the brain is a carrier of cognitive structures, and in this respect it is similar to any physical system, for instance a machine, a computer or anything else which could perform the function described by the structure. A medium can be physical, neurological, etc., and this does not matter at all. The questions of the relation between minds and machines, brains and computers are often wrongly formulated because they ignore the level of the structure. So the relations between the different fields of research in the mind techno-science can be clarified if one acknowledges that this inter-discipline is organized by a model of rationality having its source in IT. This is why I said at the beginning that philosophy had not much to say, but that it was necessary to reduce some false problems and let an epistemology of the mind techno-science develop. Certainly philosophy has a lot to learn from its development, but at present its main task is to learn how to stop asking the wrong questions. I hope I have not made the situation worse.
It is necessary to examine some of its limits and consequences. Is there something alarming in these new virtualities offered to the power of humanity or inhumanity? Yes, if one thinks the IT paradigm according to biological and genetic research, in reference to the integrity of life or of the living being. In this case, epistemology is badly needed to explain the differences and the limits of such a paradigm according to the different fields where it is introduced and interferes. An epistemology proves its relevance when it is anchored in the very evolution of a field of knowledge, articulated to the internal and external questioning of scientists at work. Instead of deploring the end of the humanities or surreptitiously reconstructing them, it would be more relevant to study why epistemology is incapable of providing the knowledge of the sciences that our societies so badly need in order to understand themselves, their past, as well as what they are becoming. So to mingle the model of rationality provided by IT with the specific problems of molecular biology is false, just as Descartes was wrong to assert that animals or bodies were machines.
Indeed this problem forces us to return to the question of the order of knowledge in which Intelligence Technology is developing. Mind techno-science is not the substitute for the humanities, and IT is not a technology taking the place of reason! At this point philosophy is radically involved. This can be introduced by further developing the end of the difference between subject and object, which was one of the main features of the modern symbolic order. Such a difference does not concern artefacts. Artefacts are no longer objects; they require being known from the inside, by distinguishing their structure and its virtualities, the medium expressing it and, most of all, the functions they satisfy. Objects have become artefacts. The subject is within the artefact, at the connection between the function and the structure. The artefact as it is used in everyday practice by an individual is designed. Certainly the design of an artefact is what appears to a subject, but it is conceived strictly according to the function, and it expresses neither the structure nor even the carrier. The design is neutral regarding the medium and the structure: the matter (which is not the medium!) of an artefact is selected according to the function.
The modern industrial conception of the object, "Form follows function," is taking on a completely different meaning, because form is no longer the structure. Form simply concerns the design. Artefacts are designed not for a substantive subject, knowing who he is or what he wants, but for a subject who explores its virtualities in the discovery and practice of artefacts. Individuals are no longer in front of objects but in the middle of artefacts with which they interact, which they use as parts of what they are. So what they are is the uses, dispositions, and practices they develop, exchange, adapt, and invent: artefacts are the virtualities of individuals, and individuals develop virtual artefacts. The object has lost the substance that was provided for it by the subject who stood in front of it. Now objects are functions for virtual individuals. A world of artefacts is an age when functions, uses, and practices are what matter, not substance and identity.
IT and its key concepts (structure, medium, design, function) are some of the main nodes in the present order of knowledge. But the striking feature is the primacy of function. The technology which is reducing the object to an artefact by managing its structure finds within itself its own limit: function is the beginning and the end. Function is no longer dictated by the production, the form by the matter, the structure by the form, because the manipulation of structures includes in them virtualities which are in the end decided by social practices. The relation between technology and society is radically transformed. I do not fall into a post-modern utopia of uses and customs rising and overtaking technology by the people for the people, of a humanity free from the power of technology. I just explain that the future of IT lies not within IT but outside of it, in the social and cultural practices. The core feature of IT is that what is outside of it finds itself introduced inside of it: its internal finality is what is external to it. To reach that point, structures had to become flexible, transformable, manageable. They had to include virtualities. In the end virtualities exist only according to the capacity of individuals to make them happen by actualizing some of them. IT supposes a world of events, chance, opportunities and, of course, accidents.
Urgently, structures have to be thought differently. Apparently, economists have been explaining this for the last twenty years: human capital is the main resource of high-technology societies. But they have a restricted view of this capital when it is reduced to techno-scientific skills, to the different competencies required by an industrial system based on information technology. Information is not intelligence. The virtuality of IT is that structures no longer govern but are governed by the functions they have the potential to fulfil. Once again, function is the beginning and the end of IT. So the development of IT in societies, throughout their different sectors, is closely determined by the capacity of individuals to develop and experience new and different behaviours and attitudes. These individual and collective innovations diversify social functions, desires, needs, and demands. IT is the capacity to analyze them. The consequences are innumerable: in the end, these functions are the basis of what is produced and sold. But in North America, Europe, and Japan, we see today a strong process of concentration in the information industries. Of course this trend might be necessary to meet the level of investment required to implement information technology globally. But the objective and/or result of this very concentration, which makes the headlines, is the control of demand by the strong structuring of supply. To me, this seems to contradict the potentials of IT and to conflict with the expected social and economic consequences of information technology. A bad philosophy and a poor epistemology might today have serious consequences.
Perceiving and observing by a sentient being (and by many non-sentient mechanisms) produce output having some relationship to the state of the world outside the observer. The characteristics of the output of the process serve as input to memory structures that store beliefs. A belief is an idea, or statement, that has one or more characteristics' values that match the values for representandums. Belief may thus be understood as a representation that is not necessarily fully justified and is not necessarily completely true, but must be true in part. A belief is an idea that is held based on some support. Thus, Swinburne has suggested that if a person believes proposition p, then p must be more probable for that person than its negation. Unfortunately, this implies that, for any proposition, one holds a belief either in the proposition or in its negation. The holding of incorrect or weak beliefs becomes problematic, as does the imposition of a logical formulation on this sort of problem.
A statement of belief contains one or more characteristics' values matching in full or in part the values for representandums. The output of a process or set of processes provides a representation of the input to the processes. We can therefore describe a percept as the set of values in this output; it is essentially the information in the output of a process about the input. A perceiving function, f(), provides a percept, f(x), about input x. Belief is thus transmitted through the hierarchy.
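A minimal sketch of this vocabulary, assuming a numerical input for simplicity: a perceiving function f() yields a percept f(x) about the input x, and the percept is stored as a belief that need not match the world exactly. The names and the simple noise model are hypothetical illustrations, not part of the source.

```python
# Minimal illustration of the terminology above: a perceiving function f()
# yields a percept f(x) about an input x, which is then stored as a belief.
# The names and the simple noise model are hypothetical.

import random

def perceive(x: float, noise: float = 0.1) -> float:
    """A perceiving function f(): its output carries information about x,
    though it need not match x exactly (beliefs may be only partly true)."""
    return x + random.uniform(-noise, noise)

beliefs = []  # a memory structure storing beliefs (percepts taken as true)

state_of_world = 20.0               # the representandum, e.g. the outside temperature
percept = perceive(state_of_world)  # f(x): information about the input
beliefs.append(("outside temperature", percept))

print(beliefs)  # e.g. [('outside temperature', 20.03...)]
```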
Knowledge has been frequently described as "justified true belief," a belief held by an individual that is both true and for which they have some justification. Thus, for a belief to be knowledge, it must be the case that the belief is, in fact, true, and the believer must have justification for the belief. A belief that is true but for which we have no evidence cannot be described as knowledge. If there are homunculi inside computers performing operations, those who have long believed in their presence cannot be said to have had knowledge of this, since their belief, while true, has never been justified (we assume).
It had become common to describe knowledge as "justified true belief" when Gettier wrote a brief article that raised a problem with this definition. As a result of Gettier's work, we can be certain that "knowledge is not, or is not merely, justified true belief". There have been several responses to Gettier's argument against accepting knowledge as being only justified true belief. One possible approach is to add to the requirements of justification, truth, and belief a condition requiring that the grounds for believing a proposition do not include any false beliefs. However, this addition and several other modifications that have been proposed fail to avoid counterexamples in which "knowledge is lacking despite the believer's not inferring his belief from any false beliefs". Other approaches to understanding knowledge have been proposed and supported, such as knowledge as a disposition to behave or a disposition to feel a certain way.
We accept here that knowledge is something like "justified true belief." A belief is an internally accepted statement: the result of an observation, or an inferential or deductive product combining observed facts about the world with reasoning processes. To understand knowledge in a way consistent with a hierarchical notion of information, it becomes necessary to understand the notions of "truth" and "justification" in a manner consistent with the hierarchical context.
A statement may be understood as "true" if it exactly represents what it is describing. This is referred to as the "correspondence" theory of truth. It applies not only to statements but also to representations and beliefs. The coherence theory of truth, on the other hand, suggests that truth is essentially derived from a system: a statement is true when it is consistent with a system of accepted statements. Truth may also be viewed as a representation that is learned and that will not be altered, even given additional experiences. William James thus defined truth as the vanishing point toward which we imagine that all our temporary truths will some day converge.
The justification of a belief is based on internal considerations concerning the qualities of the function producing the belief. A belief is "justified" if and only if the input to the function is accurately represented in the output. Consider a handheld calculator which accepts the keystrokes "2", "+", "2", "=", and then displays the digit "4." We note that the digit displayed is not of the same form as the input, e.g., a keystroke. Instead, an accurate function takes keystrokes and produces a displayed number. If the calculator is broken and produces the digit "3" given the above set of keystrokes, we clearly do not have knowledge that 2 + 2 = 4. Consider a different case where the calculator is broken but the above set of keystrokes produces, through erroneous subprocesses, the digit "4" in the display. While the output is correct or "true" and may be interpreted as a belief, it is not justified: the function is not accurate in that it does not operate as the user intends or understands the calculator to operate.
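The calculator example can be restated as two functions that produce the same display for this input but through different processes; only the accurate process yields a justified output, and only a true and justified output counts as knowledge in the sense used here. The helper names below are hypothetical.

```python
# Sketch of the calculator example: truth concerns the output, justification
# concerns the process that produced it. Both calculators display "4" for the
# keystrokes 2 + 2 =, but only the accurate one justifies the belief 2 + 2 = 4.

def working_calculator(a: int, b: int) -> str:
    """Accurate process: the display genuinely represents the computation."""
    return str(a + b)

def broken_calculator(a: int, b: int) -> str:
    """Erroneous subprocess that happens to emit '4' regardless of the input."""
    return "4"

for name, calc in [("working", working_calculator), ("broken", broken_calculator)]:
    display = calc(2, 2)
    true = display == str(2 + 2)       # correspondence with what is described
    justified = name == "working"      # depends on the process, not the output
    knowledge = true and justified     # roughly, "justified true belief"
    print(name, display, "true:", true, "justified:", justified, "knowledge:", knowledge)
```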
Other models of knowledge have been proposed, such as the notion that knowledge is one's "image," what one subjectively believes to be true. This is close to what we have referred to as a belief, and choosing to call it "knowledge" appears only to confuse the issue. Yet, like the more conventional philosophical idea of knowledge, it can be understood as the values in the output of a process, or rather of the hierarchical series of processes that ranges from low-level atomic processes up to sophisticated intellectual processes.
Perception and observation can be understood as conveying information about the input to certain processes (for humans, sensory processes such as seeing, hearing, smelling, etc.). The output of such a process may be understood as a belief. Such a belief may constitute knowledge about the input when the process or set of processes producing the belief operates in a manner consistent with the understanding of the process. These definitions of knowledge and belief are broader than the common-language notions of the terms and, in the case of belief, less human-centred, making the concepts more objective and more easily studied. We note that knowledge is information that is both true and justified. These perceptual, observational, and processing functions take as input sensory data from the real world, as well as personal beliefs and cultural biases, when producing information-bearing output. This conceptual framework for understanding information provides a mechanism for understanding both the cultural influence on information and the most minute phenomena studied by physicists.

The pages that follow have two goals: (1) to extend explanations of the evolution of language, I-consciousness and our impression of having free will in the light of what is now called the "social intelligence hypothesis": the evolution of language is forced by natural selection mainly because of its advantage as a tool and weapon for and within the social struggle of our ancestors; (2) to show how biological and linguistic insights may contribute to the understanding of one of the most puzzling philosophical issues – and indeed of our conception of ourselves as human beings – i.e. (the possibility of) our experience of ourselves as autonomous agents. The philosophical problems of I-consciousness and free will cannot be solved, as that would require the reconciliation of apparently inconsistent premises; but they may be dissolved by eliminating one of the premises, namely the claim that there are irreducible entities like free-floating selves or Cartesian egos with the ability to act due to their own non-physical power. Nevertheless our misleading conception of being such selves with free will has to be explained. And evolutionary biology and linguistics seem to be able to do this: the ego-illusion of systems which permanently confuse themselves with their own self-model, and the (in some sense inadequate) belief of having free will, are sophisticated tools with great evolutionary advantages – they are the most subtle form of deception that was rewarded by natural selection, namely, a systematic and stable deception of ourselves.
Obviously, organisms need not be very mindful to live and reproduce. But some are. Why? Considering social factors is the most promising approach to an answer (Byrne & Whiten 1988, Whiten & Byrne 1997). A main starting point was the observation that primates appear to have more intelligence than is required for their everyday wants of feeding and ranging. Since evolution is unlikely to select for superfluous capacities, Nicholas Humphrey (1976) conjectured that something had been forgotten, namely the social complexity inherent in many primate groups, and suggested that the social environment might have been a significant selective pressure for primate intelligence. Since better access to food, a safer place to sleep, or a higher rank in the complex hierarchies of primate societies normally increases the probability of producing more offspring than other group members, social intelligence pays off pretty well. Natural selection therefore favours it (or its inherited requirements). And since this selective pressure applies to all group members, an evolutionary arms race is set up, leading to a further increase of intelligence. This development probably corresponds to the rapid expansion of our ancestors' neocortex – especially the frontal parts, which are most important for working memory and planning (Goldman-Rakic 1992) and probably consciousness (LeDoux 1996). This cortical enlargement – about a factor of three to four during the last five million years – is otherwise hard to explain. And it is biologically expensive, because the brain consumes about 20 percent of the body's energy when the body is idle but accounts for only two percent of its mass. Furthermore, there is evidence for a correlation between neocortical size and group size or social complexity (Barton & Dunbar 1997).
Thus, social interactions might have been the most important driving force for the evolution of primate intelligence. The elaborated mental abilities of higher primates are conceived as the product of a cognitive arms race leading to more and more sophisticated representational capabilities (representation of complex social relationships, higher-order intentional stance, theory of mind, mind reading). This climate of competition and conflict favours the use of social manipulation to achieve individual benefits at the expense of other group members. Observing social relationships carefully, struggling for influence, making alliances, or deceiving more powerful leaders became more and more important. Particularly useful for this are manipulations in which the losers are unaware of their loss (as in some kinds of deception), or in which there are compensatory gains (as in some kinds of co-operation). Therefore, egoistic intentions remain hidden. A lot of zoo and field experiments as well as behavioural studies in the wild have already confirmed (and reinforced) these hypotheses. It was shown, for example, that apes – and to a lesser degree perhaps also monkeys – may be able to respond differently, according to the beliefs and desires of other individuals (rather than according only to the other’s overt behaviour). Hence, they possess a theory of mind (Premack & Woodruff 1978) and can assume what Daniel Dennett (1988) has called the intentional stance: They ascribe intentions to others and take them into consideration for their own actions.
Language is, among other things, a very useful tool and medium for explicit representations and metarepresentations, including an intentional stance, self-attributions, I-consciousness, higher-order volitions, autonomous agency etc. These are not an epistemic luxury but have a function, i.e. a causal role. They allow a more precise representation of external and internal states and their rational and emotional evaluation. They allow a broader range of reactions in complex situations, especially in social contexts. The concept of self reifies the organizing activity of an organism that incorporates its experience into its future actions. These capabilities are – at least at the higher-order level of human beings – based on and boosted by language, and this is probably the main reason for the development of larger brains and linguistic capabilities (cf. Goody 1997). Thus, it is reasonable to assume that these cognitive capabilities are an important factor for the origin and evolution of language and cannot be excluded by any elaborated theory trying to explain this still rather mysterious issue (cf. e.g. Aitchison 1996, Jablonski & Aiello 1998, Noble & Davidson 1996): language was incorporated in cognitive representations of one's own and others' intentions and offered more abstract and efficient ways to use these representations; language permits more effective classification, storage and distribution of information, and thus more efficient use of memory and communication; language is an important means to envisage the future; and language-in-use is a new and very effective sort of tool for co-operation between individuals, because it makes information explicit and easily communicable even in the absence of visual contact. Language also paved the way for even more sophisticated deceptions (i.e. lies) and for influencing others to act in accordance with one's own goals. Language is based on symbolic and abstract thought, but conversely it also enhanced their further development. Finally, language led to more and more sophisticated models of the world and of ourselves.
Self-consciousness is a rather shaky term with many different meanings which often depend on each other, e.g. notions like self-awareness, self-knowledge, self-recognition, sense of ownership etc. (cf. Frank 1994, Bermúdez, Marcel & Eilan 1995). Self-consciousness is not a single ability or property but a complex entanglement of different features creating a special kind of knowledge. As a premise, it is assumed here that self-consciousness does not come ready-made into existence, but bootstraps itself with the help of other minds in a complex interplay of the infant with the social and physical environment, starting from inborn dispositions. It depends on perspectivity due to centred information acquisition, and on bodily awareness due to proprioception and feedback from the results of one's own actions (including the experience of resistance). These are crucial ingredients for a higher-order form of self-consciousness, i.e. I-consciousness, which is conceptualizable and verbalizable. It is based on a feature which is called a self-model. This is an episodically active representational entity (e.g. a complex activation pattern in a human brain), the contents of which are properties of the system itself. It is embedded and constantly updated in a global model of the world, also created by the brain on the basis of perceptions, memories, innate information etc. (Metzinger 1993). Self-models are limited in a crucial way: they cannot represent their own representations as their own representations, and those in turn as their own representations, and so on ad infinitum. But there is (or at least was) also no need for that. From an evolutionary perspective, it would have been quite disadvantageous for our ancestors to forget their physical and social environments and plunge into a self-amplifying spiral of self-reflection. Hence, there is a – probably hard-wired – self-referential opacity: the phenomenal mental models employed by our brains are semantically transparent, i.e. they do not contain the information that they are models on the level of their content (Van Gulick 1988). Possibly these phenomenal mental models are activated in such a fast and reliable way that the brain itself is no longer able to recognize them as such, because of the lower temporal resolution of metarepresentational processes due to limited temporal and physical resources. If so, the system "looks through" its own representational structures as if it were in direct and immediate contact with their contents, creating a special sort of self-intimacy. This leads us to a rather dramatic – and possibly offensive – hypothesis: we are systems which are not able to recognize their self-model as a self-model. For this reason we are permanently operating under the conditions of a "naive-realistic self-misunderstanding". We experience ourselves as being in direct and immediate epistemic contact with ourselves. Hence, we are systems which permanently confuse themselves with their own self-model (Metzinger 1996). In doing this, we generate an ego-illusion, which is stable, coherent, and cannot be transcended on the level of conscious experience itself.
Another controversial issue is the problem of free will (Honderich 1988, O'Connor 1995, Walter 1998). To define free will in the strongest sense, Libertarians often presume three necessary conditions which, taken together, are sufficient: intelligibility, freedom, and origination. Intelligibility means that a person's free choices are based on intelligible reasons. Freedom means that this person can make different choices under completely identical conditions, i.e. that this person could act otherwise even if all natural laws and boundary conditions (including his or her own physical states) were the same. Origination means that the person is able to create his or her choices and to act according to these choices in a nonphysical way. But this presupposes an ontology (e.g. a kind of dualism or idealism) which goes beyond and is at least partly independent of the physical world. However, even such an ontology won't offer what Libertarians want, for it cannot avoid the dilemma of either plunging into an infinite regress or braking abruptly at a mysterious causa sui. This is because, in order for me to be truly or ultimately responsible for how I am, so that I am truly responsible for what I want and do (at least in certain respects), something impossible has to be true: there has to be a starting point in the series of acts that made me have a certain nature – a beginning that constitutes an act of ultimate self-origination. But there is no such starting point. Therefore, even if I can act as I please, I can't please as I please. That is not to say that there are no higher-order volitions, for instance wanting to want not to stay so lazy anymore. But ultimately my reasons, beliefs and volitions are non- (or sub-)consciously determined – by earlier experiences, heredity, physiology or external influences – and therefore not ultimately up to me. Thus, in order to be ultimately autonomous and responsible, one would have to be the ultimate cause of oneself, or at least of some crucial part of oneself (Strawson 1986). But this would strangely promote man to something like an Aristotelian God, a prime mover. (This is no polemical exaggeration but what Libertarians have actually conceded; see e.g. Chisholm 1964, Kane 1989.)
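To make the structure of this regress explicit, the argument can be compressed into a short schematic; the notation is my own gloss on Strawson's point, not a quotation:

\[
\mathrm{UR}(\text{action}) \;\Rightarrow\; \mathrm{UR}(\text{one's nature}), \qquad
\mathrm{UR}(\text{one's nature}) \;\Rightarrow\; \text{an act of ultimate self-origination},
\]
\[
\neg\,(\text{ultimate self-origination}) \;\;\therefore\;\; \neg\,\mathrm{UR}(\text{action}),
\]

where UR stands for being truly or ultimately responsible. Since there is no starting point of self-origination, ultimate responsibility for action never gets off the ground.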
However, there is no evidence for the existence of humans as prime movers and nonphysical forces interacting with our physical world through causal loopholes. Nevertheless we do conceive ourselves, at least sometimes, as being free. We have the feeling that it is up to us to decide between alternatives. This feeling depends on second-order emotions (without which we cannot act and choose in complex situations, rationality notwithstanding), an intentional stance, a "healthy" (non-deprived) development, non-predictability or epistemic indeterminism (that is to say, we cannot know the future for certain, and especially not our own future), rationality (the ability to reflect and reason), planning (and hence higher-order thoughts, a concept of the future et cetera), higher-order volitions, and sanity. These features are compatible with a naturalistic world view (Vaas 1996 & 1999) and even with determinism. Therefore a weaker form of free will need not be denied. But this does not imply the existence of the kind of freedom and origination for which Libertarianism is arguing. The Libertarian will still insist that our subjective impression of freedom is a powerful argument for free will. Thus, a sceptic should be able to explain such an impression within a naturalistic framework. And this is what an evolutionary perspective might achieve: ascribing intentional states to others necessarily includes ascribing volitions to them and assuming that they have the power to transfer their volitions into actions somehow, because this is the only way to get advantages from the intentional stance at all. For, if other beings were thought to have intentions but these were causally inert, that is to say their behaviour had nothing to do with their volitions, this ascription of intentions and hence volitions simply wouldn't matter. However, the intentional stance is not an irrelevant luxury. It is a powerful tool for coping with the complexity of the social world and even of an anthropomorphically conceived nonsocial world (even in highly restricted activities – e.g. in playing computer chess it is nowadays common and helpful to think and act as if the computer "wants" and "plans" something). Individuals endowed with this tool are better prepared for the struggle of social life. And it is advantageous to assume the volitions of others to be somehow independent of the environment or the past. Not absolutely independent of course, but in an approximate sense – because this makes it a lot easier to deal with them, due to the fact that complex organisms can act (or react) quite differently in similar circumstances and quite similarly in very different circumstances. There is another reason to take a concept of volition as evolutionarily advantageous, and this is just the other side of the coin: to deal with other individuals in a complex way also means planning one's own actions carefully and evaluating their effects. This presupposes some kind of awareness of one's own volition, hence a concept of will and self. Higher-order representations also take one's own mental states into account – not only for decisions and follow-up analyses but also as a parameter in the plans of others regarding oneself. Thus, it is reasonable or even necessary to ascribe volitions to oneself, too – because otherwise one cannot reason about the mental states of others who are presumably dealing with oneself. This makes one's own volitions explicit – and much more flexible.
For instance, an individual may think: "She believes that I want to do this, and she will react to it in a certain way to get an advantage over me – and therefore I will act otherwise and not do this but that." At least since there has been language with an inbuilt grammatical structure distinguishing between subjects and objects, active and passive, present and future – but probably much earlier – such concepts of volition, action and self have been flourishing. This was not only the case in contexts of cheating, however. In the course of time co-operation became more and more important among our early ancestors. And the existence of some form of language already implies a high degree of co-operation (Calvin & Bickerton 2000) – spoken language would never have emerged unless most people, most of the time, followed conventional usage. But co-operation in complex, not inherited forms also presupposes an intentional stance and the capacity to ascribe volitions to others.
Finally, evolution shaped our minds, or rather our brains, to cope with our complex social lives. We are forced by our very nature to interact with other people in a fundamentally different way than we interact with, say, stones and sticks (Strawson 1962). From this it is no longer a big step to a notion of free will, which is a powerful tool to act in consonance with or in opposition to others and to establish some kind of moral responsibility – a very effective way to influence the behaviour of others and justify punishments. Thus, free will even succeeded in becoming an entity of religious, philosophical or political theories and a postulate for jurisdiction. Of course we need not dismiss an intentional and personal stance. It is, obviously, crucial for our survival. We cannot leave our subjective standpoints, turning exclusively to an objective, perspectiveless view. We may accept that we have, ultimately, no free choice. Nevertheless, in our everyday life we think and act as if we did. Even sceptical philosophers do – or they might find themselves out of the race quickly. Nature is stronger than insight, and "the human brain is, in large part, a machine for winning arguments, a machine for convincing others that its owner is in the right – and thus a machine for convincing its owner of the same thing. The brain is like a good lawyer: given any set of interests to defend, it sets about convincing the world of their moral and logical worth, regardless of whether they in fact have any of either. Like a lawyer, the human brain wants victory, not truth; and, like a lawyer, it is sometimes more admirable for skill than for virtue" (Wright 1994).
As Labov (1977) noted, "one of the most human things that human beings do is talk to one another. We can refer to this activity as conversation, discourse, or spoken interaction." As "one of the most human things" which we do, it stands to reason that meaning is often assumed to be shared during verbal interaction. However, we know that words are laden with symbolic meaning in addition to being tools for the simple sharing of information or experience. A critical point is that each of us differs in terms of our information and experience, and despite the ideal of a "standard language" -- even among people speaking the same dialect of the same language, or being truly "bilingual" -- the fact is that each of us on this planet adds our own nuance to words, or phrases, or intonation, or some combination thereof.
Sociologists, social psychologists, and others study the effects of such ubiquitous experiences as exposure to the language of television, of political campaigns, and of newspaper headlines. Advertisers know that many people suspend their "truth filters" for 30-second segments at a time. Psychoanalysts have routinely explored "distortion" of communication, in expressing and receiving facts, fantasies, and associated experiences which carry a mutually understood "meaning". Those reading this paper online will surely recognize that one may well get quizzical responses to comments made about one's "mouse" or being involved in a "fatal crash".
Discourse depends both on the context of the conversation, in "real time", and on the overlearned vocabularies which are acquired over the course of social, professional, and vocational training. In other words, we communicate to some extent using a vocabulary contained in the scripts of our daily lives and daily experiences.
Freud, nearing the end of his life and holding his first and only seminar in America, was asked for the secret of happiness, and answered (in German, paraphrased here) "Work and Love". The drives. But while Freud was best known for his interpretation of the "love" portion of that formula, the "work" portion of life is perhaps more amenable to systematic study and is also quite interesting to examine.
Our work experiences are where a great deal of our vocabulary and communication skills come from, and sometimes even our relational styles. Our workday shapes our thoughts and sets our neurons ablaze even as we are dreaming or trying to share with a loved one the trials and tribulations of our work day. Knowledge of one's use of language as a tool is knowledge of a great deal more.
In fact, we each may be speaking a different dialect of the same language, as doctors and lawyers and beauticians and homemakers and teachers and software engineers all take for granted that we are processing words and meaning in the same way. Consider, however, how we may hear "computerese" spoken in a corporate lunchroom, the latest news from Paris haute couture spoken while strolling through Bloomingdale's, and self-referential, psychoanalytically derived reverie from the student of clinical psychology. Are they speaking the same language?
How do one's vocabulary and learned way of associating words with meaning affect the way one thinks and communicates across a range of situations? How will the course of psychotherapy, which is heavily dependent on verbal representation and interaction, be affected by one's linguistic disposition and the world view this may represent (or reinforce)? These sorts of broad questions will be the focus of the present paper. It may be anticipated that many more questions will be raised than answered, but this is not seen as necessarily being a bad thing.
To what extent can we say that a speaker knows rule R of his or her language rather than rule R', given that both rules produce the same grammatical outcome? If rule R' provides a more general, technically precise formulation of the same conditions formulated by rule R, do we ascribe knowledge of R' to the speaker - even if the speaker admits only to knowing R? Assuming that a third-person report drawing on the best available theory should take precedence over the speaker's own first-person report, Chomsky claims we can and should ascribe knowledge of the more precise rule to the speaker. I argue that while the third-person reports offered by observers drawing on the best available theories provide standards by which a given behaviour may be evaluated, corresponding first-person accounts must be taken into consideration as criteria of assertibility constraining what we may conclude about the person's actual knowledge.
Given the following two choices: (A) I often read the newspaper on Sunday. (B) I read often the newspaper on Sunday. Which is a native English speaker -- call him or her S -- most likely to produce? It should be fairly obvious that he or she would likely produce the grammatically correct sentence A. What may not be so obvious is his or her reason for choosing A over B.
Chomsky explains the choice by citing the speaker's knowledge of the appropriate rule. In rejecting the grammatically incorrect sentence B, Chomsky claims, speaker S shows that he or she "knows that verbs cannot be separated from their objects by adverbs". Call this "rule R." But because he holds that the prohibition of such adverbial intervention is a consequence of the more general rule of strict adjacency, Chomsky goes further and claims that what S really knows is that "the value for the case assignment parameter in E is strict adjacency" (emphasis in the original). Call this "rule R'." Both rule R and rule R' describe S's behaviour. But are we justified in claiming that S in fact knows rule R'?
It would be helpful first of all to clarify what Chomsky means by knowing a rule. Extrapolating from behavioural evidence, Chomsky claims (with some more or less weak provisos) that if a speaker's utterances conform to the conditions specified by a language rule, then that speaker knows the rule. In short, a speaker who observes a rule can be said to know that rule. In addition, Chomsky claims that knowing a rule of language is an instance of knowing-that, and therefore involves propositional knowledge. Thus according to Chomsky, if S acts in accordance with rule R of his or her language, then S knows rule R and therefore knows that R.
Chomsky's claim that knowledge of language is knowing-that has an important corollary: the person who knows rule R not only knows that R, but believes that R. Ascription of knowledge of language to a person therefore entails a corresponding ascription of belief to that person. When we state that someone knows a language rule, we are in effect making a statement about his or her attitude (belief) toward the propositional content embodying the language rule.
It seems to me that in cases in which knowledge of language is ascribed, we are justified in recasting talk about knowledge into talk about beliefs. That is because what interests us is not whether or not a given rule is true, i.e., whether or not it accurately describes the appropriate language behaviour, but rather, whether or not it is considered true of his or her language by the speaker. Our ascription of knowledge of the given language rule thus involves a statement about S, specifically, about how things are with him or her as demonstrated by his or her attitude toward the relevant proposition(s). For that reason, framing the question in terms of S's beliefs is perfectly legitimate, and shows exactly what is at stake when we ascribe knowledge of language to a person.
On the basis of the foregoing, I would suggest that for any language rule R, knowing that R means the following: We can say that S knows R if S believes the propositions comprising R. If R can be stated as p, then S knows R if S believes that p. Further, for S to believe that p is for S to be disposed normally to feel/hold/agree that p.
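Put schematically (this notation is my own gloss on the proposal, not Chomsky's), the definition can be written as:

\[
S \text{ knows } R \;\Longleftrightarrow\; S \text{ believes that } p, \quad \text{where } R \text{ is expressible as the proposition } p,
\]
\[
S \text{ believes that } p \;\Longleftrightarrow\; S \text{ is normally disposed to feel/hold/agree that } p.
\]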
Applying this to the example introduced at the beginning of this paper, we would say that S's knowing R means that S believes that verbs cannot be separated from their objects by adverbs.
By the same token, S's knowing R' means that S believes that the value for the case assignment parameter in E is strict adjacency.
Further, if we claim that S produces sentence A and not sentence B because S knows that R, we are in effect asserting that S's producing A comes about by virtue of S's cognitive/doxastic state having a certain content. Or: S believes that R, and because S believes that R, S utters A rather than B.
The general claim here is that language behaviour takes the form it does by virtue of the content of the cognitive/doxastic state that enters into/supports/underlies that behaviour. Conversely, the actual content of that cognitive/doxastic state would represent the speaker's knowledge of language and would explain why he or she produces the appropriate utterances.
Given the above definition of what it is to know, and the connection between the content of a cognitive/doxastic state and the role it plays in the explanation of behaviour, the question becomes: Which formulation of a rule describes the speaker's actual belief(s), and which formulation simply describes a set of conditions to which the speaker's behaviour (unknowingly) conforms?
Chomsky offers an answer that can be called the argument from the best theory. He states that "[w]e are entitled to propose that the rule R is a constituent element of Jones's language (I-language) if the best theory we can construct dealing with all relevant evidence assigns R as a constituent element of the language abstracted from Jones's attained state of knowledge".
Chomsky's argument is that if our best theory for explaining a speaker's behaviour includes attributing to him or her knowledge of a given rule, then we should conclude that this knowledge does in fact enter into the speaker's behaviour and that the speaker therefore does know the rule. Implicit in this argument is the provision that a third person attribution of knowledge of language has authority over a first person report, if the third-person attribution is made on the basis of the best possible theory available. From this point of view, the third-person claim that another person's language behaviour implicates a given cognitive/doxastic content is true simply by virtue of the claim's having been derived from the best available theory. Because it is derived from the best available theory, the third-person attribution must take precedence over any relevant first-person account.
But this does not tell us whether or not S knows rule R in the required sense of believing that R. It does not, in other words, tell us whether or not S holds the requisite attitude - belief - in relation to the propositions and constituent concepts embodying the rule he or she is said to know. This is what we need to ascertain, but how?
It is informative to note in this regard a point that Searle has raised in general objection to Chomsky's claim that speakers are (actually, in fact) following the rules he and other grammarians have formulated. According to Searle, for any attribution of rule-following, we need to show that the attributed rules are "rules that the agent is actually following, and not mere hypotheses or generalizations that correctly describe his behaviour." For Searle, the argument from the best theory does not suffice, since the descriptive or predictive accuracy of the attributed rule does not by itself prove that the rule is in fact being followed. We need, instead, "some independent reason for supposing that the rules are functioning causally".
There seem to be two points bundled into Searle's objection. The first, which Searle explicitly makes, is that behaviour which seems to be in accord with a rule must be shown to be guided by that rule in fact and not simply hypothetically. More generally, if we are to claim that a person is behaving in a certain way on account of his or her given cognitive/motivational content, we must show that this given cognitive/motivational content does in fact enter into the production of the behaviour in the specified manner.
The second point, which Searle doesn't make but which I find implicit in the call for obtaining an independent reason for attributing a rule, is that the person to whom such rule following is attributed should (somehow) understand him or herself to be following the rule. This would mean (among other things) that he or she should show evidence of believing that R, for the given attributed rule R. Such evidence could be found in the appropriate first-person avowal of belief or acceptance that R; such a first-person avowal would in fact constitute an independent reason for attributing actual, as opposed to hypothetical, rule-following to that person, if we read "independent" to mean something like "coming from a source other than the person doing the attributing." Like the previous point, this point can be generalized. Given cases in which it is claimed that a speaker knows a given rule of language, we would want independent corroboration of that claim.
A common sense attempt to corroborate a knowledge claim would have us solicit a first person report from the speaker him- or herself. We might, for instance, ask the speaker to describe what, if any, language rule he or she understands him- or herself to be following in producing a given utterance. With this evidence, we would be able to determine whether or not our hypothesized ascription of rule-following (and with it the corresponding ascription of belief) is accurate.
This first approach would require the speaker to be able to convey to us on his or her own why his or her language behaviour exhibits the regularity observed of it. But there are prima facie two problems here. First, it seems clear that not all speakers can formulate the rules their language behaviour seems to conform to, and second, not all speakers are aware of their reasons for producing utterances in the form that they do.
But neither of these considerations should be taken to mean that S necessarily cannot give us the kind of testimony we would want. In the first place, the inability to state or otherwise express a rule is not necessarily evidence that one does not know (or would not recognize) the rule any more than the inability to describe a concept is evidence that one does not know (would not recognize) the concept. And in the second place, one's not being aware of one's reasons for behaving in a given way is not necessarily evidence that one does not know why one behaved in the given way. Many actions do not ordinarily require a high degree of attentiveness to the conditions of their production in order to succeed. This is obviously true of performances based on, e.g., physical skills, which often require little or no attentive monitoring for their success. But it is also true of more formalized behaviours, language performances among them. For example, a speaker may concentrate attention on what he or she is saying and apparently not think about the syntactic conditions his or her utterance must meet. And yet afterward the speaker may acknowledge that he or she did indeed mean to conform to the appropriate syntactic rule. In any case, it seems reasonable to suppose that a speaker initially inattentive to the reasons behind his or her syntactic behaviour can at least in principle become aware of them and may impart that awareness to others. The question is how.
On the face of it, at least, introspection would appear to be the most obvious way for the speaker to gain such awareness, since what we are concerned with here are psychological facts which, one would think, could be discovered through one's focussing attention on one's own inner states. But Chomsky rejects introspection, claiming that it can tell the introspector neither that the given rule holds nor that the rule enters into the appropriate "mental computations" involved in language production.
But introspection does not exhaust all possible avenues for securing the kind of first-person evidence we would want to obtain. We could state the rule we think S is following, and ask S whether or not he or she would accept this as the correct description of the reason he or she produced the given utterance. We could likewise present S with different formulas presenting under different descriptions the same linguistic regularity observed of S, and ask him or her to choose which one correctly describes his or her understanding of why he or she conformed to that regularity. We might, to return to our previous example, show S rules R and R', and ask him or her which one, if any, describes his or her understanding of why sentence A is preferable to sentence B. No matter which specific approach would be taken, the crucial criterion would be that ascription to S of knowledge of a rule be contingent upon S's recognizing and agreeing to the propositions contained in the rule.
Thomas Nagel has in fact suggested something like this. As he puts it, "So long as it would be possible with effort to bring the speaker to a genuine recognition of a grammatical rule as an expression of his understanding of the language, rather than to a mere belief, based on the observation of cases, that the rule in fact describes his competence, it is acceptable, I think, to ascribe knowledge of that rule to the speaker" (emphasis in the original).
An ascription of knowledge to a person should be contingent upon the acceptance by that person of the appropriate propositions and/or concepts as accurately articulating what he or she believes.
Generally, we can ascribe belief B to S if S, when B is brought to his or her attention, feels/holds/agrees that B. Without S's feeling/holding/agreeing that B, we could not confidently ascribe B to S. In addition, S's feeling/holding/agreeing that B can consist either in the recognition that B or in the acquisition of the attitude that B.
Thus for S to accept that rule R correctly reflects what he or she knows about the appropriate aspect of language, S must either recognize that R or acquire the belief that R. If S were to recognize that R, then S would simply be exercising an already-existing disposition to normally hold/feel/agree that R, given the appropriate circumstances. If S were to acquire the belief that R, then S would, on the basis of, e.g., evidence presented, become disposed to hold/feel/agree that R is the case, given the appropriate circumstances. In other words, when we recognize that R, we are exercising or expressing a belief we already have, though perhaps we never had the need or opportunity to do so before. When we are brought to accept that R, we are acquiring, and consequently expressing, the belief that R. Note that in either case, the acceptance condition involves a first-person avowal of belief.
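In the same schematic spirit (again my own gloss, not the paper's notation), the acceptance condition can be summarized as:

\[
S \text{ accepts that } R \;\Longleftrightarrow\; S \text{ recognizes that } R \;\lor\; S \text{ acquires the belief that } R,
\]

where recognition exercises an already existing disposition and acquisition creates one; either way, the condition terminates in a first-person avowal of belief.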
Note also that it is not necessary that the speaker come to this avowal through reflection or "introspection" or otherwise on his or her own. If a rule is described to the speaker, and the speaker agrees that he or she believes (or is brought to believe) that the rule holds in the appropriate circumstance, then it is reasonable to attribute to him or her knowledge of that rule. But it is also true that if the speaker does not recognize or accept the rule as articulating something he or she believes or has come to believe, then the plausible attribution to him or her of knowledge of that rule would be difficult to maintain.
Would Chomsky agree to make the ascription of knowledge of language rules contingent on the acceptance condition? On the one hand, he seems to accept a scenario in which a speaker comes to know the rules of grammar "from the outside" - that is, by having them taught or otherwise brought to his or her attention by another party. On the other hand, his position on the usefulness of the first-person perspective generally is that it isn't. His view seems to be based not only on his own belief that much knowledge of language is tacit, but on the widely recognized observation that first-person accounts are inherently unreliable. If we examine these two points, however, we will find that, rather than invalidating the first-person perspective altogether, they serve only to qualify the claims that can be made for it.
If, as Chomsky claims, knowledge of language is largely tacit, then claims regarding a speaker's knowledge of a given rule may be a difficult matter to decide from the speaker's point of view. Given Chomsky's understanding of tacit knowledge as knowledge that is "generally inaccessible to consciousness" and therefore presumably opaque to the knower, it is easy to see how it would be difficult to make knowledge ascription contingent on the appropriate first-person avowal. But this difficulty may be more apparent than real.
First, tacit knowledge as Chomsky understands it would appear to differ very little from ordinary knowledge outside of its being tacit. Chomsky does not claim that a speaker's tacit knowledge of language is inferentially isolated from his or her other attitude states, and in fact he has stated that speakers' decisions to use their tacit knowledge are influenced by their "goals, beliefs, expectations, and so forth". Far from existing behind a kind of firewall separating it from ordinary beliefs and other attitude states, tacit knowledge of language would seem to be woven into the speaker's overall network of attitude states, and to exert some variety of influence on - as well as to be influenced by - those states.
Second, ordinary beliefs themselves may be largely tacit. As indicated above, beliefs are to some extent dispositional: our having consciously thought about or avowed a belief is a contingent rather than a necessary feature of beliefs. This means that, as with tacit knowledge, we may "have" beliefs without necessarily having consciously thought about them. Nevertheless, when a belief of ours is brought to our attention, we do, under ordinary circumstances, tend to recognize it as such. There is no reason this cannot hold for tacit knowledge as well. In fact all that would be necessary for us to say that someone knew (believed) something, whether tacitly or not, is that when confronted with a statement or other formulation of the belief, that person should be disposed normally to feel/hold/agree that it is true.
It may be objected here that the acceptance condition is contingent on the belief's accessibility to consciousness, and that tacit knowledge is, by definition, inaccessible to consciousness and therefore exempt from the acceptance condition. Again, there is no reason to suppose that tacit knowledge cannot behave like ordinary dispositions to believe, and thus to be brought to awareness given the proper circumstances. Certainly, Chomsky's statement that one can come to know initially tacit rules "from the outside" would seem to indicate his acknowledgement that one could at least in principle have conscious access to one's tacit knowledge. If this is so, then there is no reason in principle that tacit knowledge must remain tacit and thus exempt from the acceptance condition. We might say then that tacit knowledge of language is tacit to the extent that it is initially inaccessible to the person to whom it is attributed, but that given the proper conditions, this inaccessibility can be converted to the kind of accessibility enjoyed by our ordinary knowledge and thus can be brought into play in relation to the acceptance condition.
As mentioned above, Chomsky believes that first person reports regarding what one thinks one is doing are not always reliable. As he puts it, "We might ask Jones what rule he is following, but, . . . such evidence is at best very weak because people's judgments as to why they do what they do are rarely informative or trustworthy". There is truth to this assertion, but a closer look is warranted.
What Chomsky seems to be referring to here is the normal indeterminacy that may and often does characterize an agent's first-person accounts of his or her reasons for performing in a given way. Such indeterminacy may be a product of any or all of a number of factors, including the relative attentiveness with which one does something, the degree of fine-grainedness or explicitness demanded of the first-person account, and the fact that internal states are not objectively separate from the first-person perspectives that form the basis of reports about those states. Absolute certainty here is out of the question -- but that does not in and of itself invalidate first-person accounts.
In fact, I would be inclined to understand the indeterminacy of first person reports as analogous to the underdetermination of theory by evidence. Because of the latter, we cannot (and can never) be certain that the evidence pointing to certain theoretical conclusions is absolutely conclusive. But - and Chomsky has argued this point against Quine - such underdetermination does not in and of itself automatically invalidate any reasonable conclusions we may feel we are warranted in drawing from the evidence. Just because it is possible that our conclusions will be proven wrong by more or better or subsequent evidence does not mean that we are not justified in drawing the most reasonable conclusions we can based on the evidence available to us. A similar case can be made for the value of first-person accounts. They may be far from infallible, but because they represent expressions or manifestations of what one thinks is the case with oneself, they constitute admissible evidence regarding a person's attitude states.
In fact it reasonably can be held that because they tell us how things are with a person from that person's point of view, first-person reports have a certain privileged status in instances where we are trying to determine someone's attitude toward a given proposition or set of propositions. Evidence regarding what someone thinks he or she believes would seem to be especially relevant if we want to determine whether or not that person knows (believes) a given rule of language. It seems to me that in this case a first-person account would make for useful evidence that should not be ruled out on a priori grounds.
It is useful in this context to think of first-person reports in terms of the Wittgensteinian notion of criteria. Criteria, briefly, are normative considerations that provide grounds for justifying assertions, and thereby help to set conditions under which assertions are appropriate. By this reading, first-person reports would provide the criteria of assertibility constraining third-person ascriptions. First-person reports would, in other words, set out the normative conditions under which third-person ascriptions would be deemed appropriate or not, and would thus serve to restrict the range of attitudes we can (reasonably) ascribe to others. Since, as has been pointed out, such criteria serve to help fix "what we may be wrong about, deceived about, under an illusion about", first-person reports would exert a potentially limiting influence on third-person ascriptions. Specifically, they would show how claims made from the third-person perspective may fall outside the range of possible understandings that reasonably can be attributed to the person in question.
By this light, the acceptance condition would act as the relevant criterion for ascribing knowledge of a given language rule. A third-person ascription that met the acceptance condition would, all things being equal, be considered a justified ascription. Conversely, a third-person ascription that did not meet the acceptance condition would, all things being equal, be difficult to justify. Thus S's accepting rule R as expressing his or her reason for uttering sentence A rather than sentence B would provide justification for ascribing knowledge of R to S. If on the other hand S did not accept R as expressing his or her understanding of the appropriate language behaviour, then by the criterion of the acceptance condition we would not be justified in ascribing knowledge of R to S.
In spite of their built-in indeterminacy, then, first-person reports and avowals would seem, for better or for worse, to be the relevant criteria by which to check the plausibility of third-person ascriptions of knowledge. Consequently, first-person reports and avowals would be useful in cases where we wish to adjudicate apparently conflicting claims regarding what a given speaker knows.
It is easy to see how such conflict could arise. Take, again, the example of S's producing sentence A rather than sentence B. This may be explained alternately as being due to S's knowing that R - i.e., believing that adverbs are prohibited from intervening between verbs and their objects - or to S's knowing that R' - i.e., believing that the value for the case assignment parameter in E is strict adjacency. Given the two very different sets of propositions comprising these rules, we would appear to have two competing claims regarding the speaker's object of belief.
For even if we agree that rule R is nothing more than a consequence of the more general rule R', it is not at all certain that S would recognize this. Nor is it certain that S would understand the constituent concepts of which R' is comprised, even if he or she understood the constituent concepts of R. Many ordinary speakers of English (and others) know what verbs, objects, and adverbs are, but do not know what strict adjacency is. We could expect these speakers' first-person accounts of why they produced sentence A rather than sentence B to be put in terms of verbs, objects, and adverbs, and not in terms of strict adjacency. Accordingly, we reasonably could expect that linguists (and perhaps even only a subset of linguists) would be disposed to describe S's language behaviour in terms of strict adjacency, but that S, as an ordinary speaker, would not. As Chomsky concedes, many people may be reluctant to attribute knowledge of R' to S on account of the "unfamiliarity of the notions Case assignment and adjacency parameter." Chomsky, of course, does not hesitate to claim that S does indeed know strict adjacency. But his willingness to acknowledge others' reluctance to grant this point is interesting.
Still, Chomsky holds that the unfamiliarity of the concepts used to explain S's behaviour is "irrelevant to the description of [S's] state of knowledge". What would seem to matter here is only that the concepts belong to the best available theory, in which case we must assume that they accurately reflect S's state of knowledge. But it seems to me that what really is at issue here is not the relative familiarity of the concepts per se, but rather whether or not these (or other) concepts are properly part of S's repertoire of beliefs about the language. It seems reasonable to suppose that S cannot have the requisite attitude toward concepts that he or she cannot be said to possess. If that is the case, then S's familiarity with the given concepts is hardly a matter of indifference. As Davies has pointed out in a similar context, whether or not a person understands the concepts he or she is said to know is indeed a relevant consideration.
In fact, it is difficult to see how one's not understanding a concept one is said to know can be irrelevant to deciding whether or not one knows a rule or proposition in which that concept figures. Consider the following example: I drink water because I am thirsty and I know that water will quench my thirst. But the best theory of why I drink water goes something like this: when I drink water, the water is absorbed into my bloodstream by osmosis as it enters my stomach. This causes both my blood volume and pressure to increase, and the osmotic strength of my blood to be restored to a normal level. Because this is the best theory, does that mean that I drink water because I know what that theory states?
If we take a position analogous to the position Chomsky takes regarding knowledge of the rules of language, it seems to me we would have to answer "yes." Just as Chomsky holds that S's producing sentence A is guided by his or her knowing that the value for the case assignment parameter is strict adjacency, we would hold that my drinking water is guided by my knowing that the intake of water works through osmosis to cause blood volume and pressure to rise, and osmolarity to reach the proper level. In both of these cases, we would be claiming that the person in question knows what the best theory available states about the reasons for his or her behaviour, and that this knowledge enters into the relevant mechanisms for producing the behaviour.
But do I in fact know this technical explanation for my drinking water? Again, it is drawn from the best theory available, and certainly, my behaviour is perfectly in accord with what it would be if I did know what the theory describes. But the fact is that I did not know the theory (nor for that matter had I even heard of the term osmolarity) until I asked an expert. I did not, in other words, have the requisite familiarity with the propositions I would have to have if I could be said to know what the theory states.
My having consulted an expert raises a crucial point. For, according to Chomsky, my knowledge includes what is known to experts within my speech community. Citing Putnam's notion of the division of linguistic labour, Chomsky asserts that the meaning of a term may be expressed in terms of the specialized knowledge of others in my speech community. By virtue of my being a member of a given speech community (presumably, in this case, speakers of English), in other words, my knowledge of language encompasses the best theories as formulated by the appropriate experts.
But if, as I believe we should, we are to agree that the acceptance condition sets legitimate assertibility criteria constraining knowledge ascriptions, we cannot automatically attribute the experts' knowledge to any given member of a speech community. Recall the definition of knowing introduced above: even given that S belongs to a speech community in which R' is accepted as the best available explanation of a particular language behaviour, we still would have to show that S him- or herself stands in the proper attitude to the propositions and concepts making up R'. It is not enough that someone from his or her speech community stands in such an attitude; he or she must him- or herself stand in that attitude.
In light of this, I believe we can reconceive the relationship between S and rule R' of the best (yet unfamiliar) theory explaining S's language behaviour. Assuming that ascription of knowledge of R' to S is unjustified given the acceptance condition and the corresponding criteria of assertibility set by S's first-person reports, we can say that, because R' is potentially available to S by virtue of its arising from the best theory available to the relevant experts in S's speech community, R' is the standard against which S's knowledge can be measured. This is not to say that S knows R', but rather that S's state of knowledge can be brought to a level such that S will accept R' as the correct explanation of the given language behaviour.
Like first-person criteria of assertibility, third-person standards of explanation cast our assertions of knowledge and avowals of belief in a normative light. My first-person report of why I think I behaved in a given way may be an adequate account of my own beliefs on the subject, but it may fail utterly as an adequate explanation of that behaviour - in the context of the most advanced or accepted thinking on the subject. The upshot of this is that we must think of the best third-person ascriptions of knowledge as hypotheses embodying explanatory standards that people may (or perhaps should) meet in the appropriate context.
This last qualifier is crucial, for there is a degree to which the adequacy of a response will be gauged in terms of the analytical or explanatory framework within which it is elicited. There may be circumstances in which "Because I knew it would quench my thirst" would be a sufficient answer to the question "Why did you drink that glass of water?" Similarly, it can be argued that there may be contexts -- the teaching of grammar to children, for instance - in which the preferability of sentence A to sentence B is better explained in terms of verbs, adverbs, and objects rather than in terms of strict adjacency.
In a general sense, third-person standards and first-person criteria set certain conditions that our assertions and avowals may meet. Explanations drawn from the best theories provide the standards toward which our own state of knowledge and repertoire of beliefs may aspire. Criteria of assertibility derived from first-person reports and avowals provide conditions placing constraints on what third-person ascriptions may hold. Thus even if the best theories for explaining behaviour serve as standards to which knowledge of that behaviour can aspire, first-person accounts still must be factored in as legitimate constraints on the range of third-person ascriptions.
When we are justified in believing a claim, we are often so justified because our belief is based on other beliefs. Yet, it is not an adequate defence of a belief merely to cite some other belief that supports it, for the supporting belief may have no epistemic credentials at all - it may be a belief based on mere prejudice, for example. In order for the supporting belief to do the work required of it, it must itself pass epistemic muster, standardly understood to mean that it must itself be justified. If so, however, the question of what justifies this belief arises as well. If it is justified on the basis of some yet further belief, that belief, too, will have to be justified; and the question will arise as to what justifies it.
Thus arises the regress problem in epistemology. Skeptics maintain that the regress cannot be avoided and hence that justification is impossible. Infinitists endorse the regress as well, but argue that the regress is not vicious and hence does not show that justification is impossible. Foundationalists and coherentists agree that the regress can be avoided and that justification is possible. They disagree about how to avoid the regress. According to foundationalism, the regress is avoided by finding a stopping point for it in terms of foundational beliefs that are justified, but not justified wholly by some relationship to further beliefs. Coherentists deny both the need for and the possibility of finding such stopping points for the regress. Sometimes coherentism is described as the view that allows that justification can proceed in a circle (as long as the circle is large enough), and that is one logically possible version of the view (though it is very hard to find a defender of this version of coherentism). The version of coherentism that is more popular, however, objects in a more fundamental way to the regress argument. This version of coherentism denies that justification is linear in the way presupposed by the regress argument. Instead, such versions of coherentism maintain that justification is holistic in character, and the standard metaphors for coherentism are intended to convey this aspect of the view. Neurath's boat metaphor - according to which our ship of beliefs is at sea, requiring the ongoing replacement of whatever parts are defective in order to remain seaworthy - and Quine's web of belief metaphor - according to which our beliefs form an interconnected web in which the structure hangs or falls as a whole - both convey the idea that justification is a feature of a system of beliefs.
To see exactly where this conception of justification takes a stand on the regress problem, a formulation of the standard sceptical version of the regress argument will be helpful. To formulate such an argument, we need to use the idea of an inferential chain of reasons. Such an inferential chain traces the inferential dependence of a given belief, including in it as first link the belief in question, as second link whatever reason justifies it, as third link whatever epistemically supports the reason in question, and so on. The sceptical argument then proceeds as follows:
1. No belief is justified unless its chain of reasons (i) is infinitely long, (ii) stops, or (iii) goes in a circle.
2. An infinitely long chain of reasons involves a vicious regress of reasons that cannot justify any belief.
3. Any stopping point to terminate the chain of reasons is arbitrary, leaving every subsequent link in the chain depending on a beginning point that cannot justify its successor link, ultimately leaving one with no justification at all.
4. Circular arguments cannot justify anything, leaving a chain of reasons that goes in a circle incapable of justifying any belief.
5. Therefore, no belief is justified.
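As an illustrative schematic (my own notation, not the entry's own), an inferential chain for a belief $b_0$ can be pictured as a sequence in which each link is justified by the next:

\[
b_0 \;\leftarrow\; b_1 \;\leftarrow\; b_2 \;\leftarrow\; \cdots
\]

Premise 1 then says that any such chain must either continue without end, terminate at some final link $b_n$, or loop back so that a later link repeats an earlier one.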
Coherentists are ordinarily characterized as maintaining that premise 4 of this argument is false. Though such a view would count as a version of Coherentism, standard Coherentism has no quarrel with premise 4, but instead rejects premise 1 because it presupposes that justification is non-holistic. Premise 1 assumes that justification is linear rather than holistic in virtue of characterizing justification in terms of inferential chains of reasons, and it is this feature of the regress problem to which typical coherentists object.
In sum, then, Coherentism can be negatively characterized as the view that, first, agrees with foundationalism that there is no regress of justification that is infinite (thereby rejecting both skepticism and infinitism) and, second, disagrees with foundationalism that justification depends on having an inferential chain of reasons with a suitable stopping point. This negative point can be maintained either by denying that the chain has a stopping point, thereby endorsing a linear version of Coherentism, or by denying the assumption that justification requires the existence of an inferential chain of reasons, thereby endorsing a holistic viewpoint. Since the primary examples of Coherentism in the history of the view are holistic in nature, I will focus in the remainder of this entry on this version of the view.
Coherentists often defend their view by attacking foundationalism, implicitly relying on the implausibility of infinitism and skepticism. They attack foundationalism by arguing that no plausible version of the view will be able to supply enough in the way of foundational beliefs to support the entire structure of belief. This attack takes two forms. First, coherentists argue against the very idea of a basic belief, maintaining that it is always a sensible question to ask, "Why do you believe that (i.e., what reason can you give me for thinking that is true)?" Second, coherentists attack the idea that the kind of foundation developed will be adequate to support the structure. If, as is usual, foundationalists limit foundational beliefs to those about our experience in the specious present, it is hard to see how such a limited foundation can support the entire edifice of beliefs, including beliefs about the past and future, about the vast array of scientific opinion both about the observable realm and the unobservable, and about the abstract domain of mathematical and logical truth and the truths of morality. Foundationalists may, of course, introduce epistemic principles of justification that license whatever chain of reasons they wish to endorse from the foundations to the rest of the edifice of belief, but the resulting theory will look more and more ad hoc as new epistemic principles are offered whenever the threat of skepticism looms regarding a kind of belief not defensible by standard inductive and deductive rules of inference.
Regardless of the persuasiveness of these challenges to foundationalism, coherentists must and do go beyond negative philosophy to provide a positive characterization of their view. A bit of taxonomy and some specific examples will allow us to see how the required positive characterization is provided by coherentists. A useful taxonomy for Coherentism can be provided by distinguishing between subjective and objective versions of Coherentism. At a purely formal level, a version of Coherentism results from specifying two things: first, the things that must cohere in order for a given belief to be justified, and second, the relation that must hold among these things in order for the belief in question to be justified. Within the logical space of Coherentism, both features can be given subjective or objective construals.
Consider first the items that need to cohere. As noted already, coherentists typically adopt a subjective viewpoint regarding the items that need to cohere, maintaining that the system on which coherence is defined is the person's system of beliefs. Coherence could be defined relative to other, more objective systems, however. Social versions of Coherentism may define coherence relative to the system of common knowledge in a given society, for example, and religious versions may define coherence relative to some body of theological doctrine. These latter two systems are objective in that the obtaining of the system in question implies nothing about the person whose belief is being evaluated. For this reason, they tend to be rather implausible, since they deny the perspectival character of justification, according to which whether or not one's beliefs are justified depends on facts about oneself and one's own perspective on the world. Versions that combine subjective and objective features are also possible. For example, a theory might begin with the system of a person's beliefs, and supplement it with additional claims that any normal person would believe in that person's situation. It is true, however, that standard versions of Coherentism are subjective about the items relative to which coherence is defined.
Even if this aspect of the view is subjective, however, belief is not the only subjective item to which a theorist might appeal, leaving one to wonder what explains the uniform agreement among coherentists that coherence should be defined relative to the class of beliefs. The reasons for this uniformity fall into two categories. One kind involves the claim that the only other possibly relevant mental states are experiential states (appearance states, sensation states), and that such states cannot be reasons at all since they lack propositional content (see Davidson 1989). This viewpoint has little plausibility to it, however. It may be true that there are some experiential states without content (perhaps the experience of pain is an experiential state without content), but it is equally true that some have content. It can appear to a person that it is raining, and the mental state involved has as content the proposition that it is raining.
A more plausible way to pursue this kind of argument is to maintain that if experiential states play a role in justification, they'll have to be able to play that role whether or not they are the kind of state that has propositional content. So, if some lack content and cannot be reasons on account of lacking content, then experiential states cannot play a role at all.
The difficulty with this line of argument is the conception of reasons it involves. It is true that if an experience has no content, then it cannot be in virtue of its content that it provides a reason. Even so, it is far from obvious that a reason has to be one in virtue of its content, for if we attend to ordinary defences people give of their beliefs, they often cite their experience as a reason. One can question whether they are merely explaining their beliefs rather than justifying them, but when that distinction is clarified, they'll still cite their experience as their reason (“Why are you grimacing?” “Because my leg hurts.” “Why do you think your leg hurts?” “Because I can feel it.” “Well, your experience may explain why you believe that your leg hurts, but I'm not asking for an explanation of your belief, I'm asking you to provide a reason for thinking that your belief that your leg hurts is correct; can you give me such a reason?” “Yes, because I can feel it hurting . . .”).
The second category of defence for the idea that coherence is a relation on beliefs involves an argument to the effect that other mental states are either irrelevant to the question of the epistemic status of a belief (e.g., affective states such as hoping, wishing, fearing, and the like) or are insufficient for generating positive epistemic status (e.g., states such as sensation states or appearance states) - there is, after all, the issue of what to make of the sensory input, and that issue takes us beyond the sensation state itself (Lehrer 1974). The former point is unproblematic, but the latter point fails to imply the claim in question. Arguing that an appeal to experiential states is insufficient for justification in no way shows that an appeal to such states is not necessary for an adequate account of justification.
There is, however, a deeper motivation behind coherentists' aversion to defining coherence over a subjective system that includes experiential states. The worry is that appealing to experiential states in any way will result in a version of foundationalism. The understanding of foundationalism which results from the regress argument involves two features. The first is an asymmetry condition on the justification of beliefs - that inferential beliefs are justified in a way different from the way in which non-inferential beliefs are justified - and the second is an account of intrinsic or self-warrant for the beliefs which are foundationally warranted and which support the entire structure of justified beliefs. There are various proposals for how this latter commitment of foundationalism is to be formulated, but we can already see the outline of an argument for requiring that coherence not be defined over a system that includes experiential states. For if a theory were to include such states in the class of things with which a belief must cohere in order to be justified, the above considerations might seem to suggest that such a theory would have to involve some notion of intrinsic warrant or self-warrant. Some justification or warrant would be possessed by a belief, but not in virtue of some warrant-conferring relationship to any other belief. Hence, it might seem, this relation between the appearances and related beliefs would have to generate at least some positive degree of warrant for such beliefs, even if that warrant were not sufficient for full justification. Even if not sufficient for full justification, though, the theory would appear typically foundationalist in that it includes some notion of positive warrant not dependent on any relationship to other beliefs.
This argument is quite persuasive, but is ultimately flawed. The distinctive feature of foundationalism, in the context of the relationship between appearances and beliefs, is that this relation between appearances and beliefs is taken to be one which imparts positive epistemic status (perhaps only in the absence of defeaters). So, for example, if a version of foundationalism appeals to the appearance that it is raining as that which undergirds the foundational warrant for the belief that it is raining, that theory must maintain that the appearance supplies some positive warrant for the belief. It is this warrant-conferring requirement that allows Coherentism to escape the above argument, for it is open to coherentists to deny that appearances impart, or tend to impart (even in the absence of defeaters), any degree of positive epistemic status for related beliefs. The coherentists can maintain, instead, that appearances are necessary (in the usual situations) for those beliefs to have some degree of positive epistemic status, but in no way sufficient in themselves for any degree of positive epistemic status. Coherentists can go on to identify what would be sufficient in conjunction with the relation to appearances in typically coherentist fashion, focussing on the way in which any one of our beliefs is related to an entire system of information in question. The resulting theory would be one in which experience plays a role, but not the kind of role that is distinctive of foundationalism.
Another way to make this same point is to recall that Coherentism is not committed to the view that coherence is a relation on the system of the person's beliefs. For one thing, coherence might be a relation on an objective body of information, perhaps in the form of coherence with some body of common knowledge (or, more plausibly, by supplementing a system of beliefs with information any normal person would believe). So when coherentists defend a subjective version of the items over which coherence is defined, there cannot be some definitional requirement on the view that coherence must be a relation on a system of beliefs. That conclusion could be drawn only if there were a sound argument that showed that any appeal to experience would turn a theory into a version of foundationalism. Since the argument for that conclusion is flawed as explained above, Coherentism proper need not prohibit the subjective system over which coherence is defined from containing experiential states.
The second positive feature required of Coherentism is a clarification of the relation of coherence itself, and here again we find an important distinction between subjective and objective approaches. The most popular objective approach is explanatory Coherentism, which defines coherence in terms of that which makes for a good explanation. On such a view, hypotheses are justified by explaining the data, and the data are justified by being explained by our hypotheses. The central task for such a theory is to state conditions under which such explanation occurs. BonJour (1985) presents a different objective account of the coherence relation, citing the following five features in his account: (1) logical consistency; (2) the extent to which the system in question is probabilistically consistent; (3) the extent to which inferential connections exist between beliefs, both in terms of the number of such connections and their strength; (4) the inverse of the degree to which the system is divided into unrelated, unconnected subsystems of belief; and (5) the inverse of the degree to which the system of belief contains unexplained anomalies.
These factors are a good beginning toward an account of objective coherence, but by themselves they are not enough. We need to be told, in addition, what function on these five factors is the correct one by which to define coherence. That is, we need to know how to weight each of these factors to provide an assessment of the overall coherence of the system.
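The point can be made concrete with a small sketch. The Python fragment below is purely illustrative: the factor names, the zero-to-one scale, and the equal default weights are my own assumptions rather than anything BonJour specifies. It simply shows why fixing the weighting function matters: the same five factor scores yield different verdicts about overall coherence under different weightings.

```python
# Illustrative sketch only: one hypothetical way of combining BonJour's five
# factors into a single coherence score. The factor names, the 0..1 scale,
# and the default equal weights are assumptions made for this example.

def coherence_score(factors, weights=None):
    """Combine factor scores (each normalized to the range 0..1) into one number."""
    if weights is None:
        weights = {k: 1.0 for k in factors}  # equal weighting by default
    total = sum(weights.values())
    return sum(weights[k] * factors[k] for k in factors) / total

# Example: a fairly well-integrated belief system with a few unexplained anomalies.
system = {
    'logical_consistency': 1.0,        # factor (1)
    'probabilistic_consistency': 0.9,  # factor (2)
    'inferential_connectedness': 0.7,  # factor (3)
    'connectedness_of_subsystems': 0.8,  # factor (4), inverse of fragmentation
    'explained_anomalies': 0.6,        # factor (5), inverse of unexplained anomalies
}
print(coherence_score(system))  # one number standing in for "overall coherence"
```

Different weight assignments over the same five scores produce different overall results, which is just to restate the point that listing the factors does not yet fix the coherence relation.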
Even such a specification of the correct function on these factors would not be enough. One obvious fact about justification is that not all beliefs are justified to the same degree, so once we know what the overall coherence level is for a system of beliefs, we will need some further account of how this overall coherence level is used to determine the justificatory level of particular beliefs. It would be easy if the justificatory level simply matched the overall coherence level for the system itself, but this easy answer conflicts with the fact that not all beliefs are justified to the same degree.
One way to address this problem is to distinguish between beliefs and strength of belief or degrees of belief. We believe some things more strongly or to a greater degree than other things. For example, I believe there is a cup of coffee on my desk much more strongly than I believe that I visited my parents in 1993, even though I believe both of those claims. Using the concept of a degree of belief, a coherentist may be able to identify what degree of belief coheres with a system of (degrees of) belief, and thereby explain how some beliefs are more justified than others. The explanation would be that one belief is more justified than another just in case a greater degree of belief coheres with the relevant system for one of the two beliefs.
The best-known example of a theory that employs the language of degrees of belief is also a useful example of a subjective account of the coherence relation. Such a subjective account can be developed by identifying a subjective theory of evidence that determines whether and when a person's belief, or degree of belief, is justified. A beautiful and elegant theory of this sort is a version of probabilistic Bayesianism. The version in question identifies justified beliefs with probabilistic coherence, so that a (degree of) belief is justified if and only if it is part of a system of beliefs against which no Dutch book can be made. (A Dutch book is a series of fair bets that, if accepted, guarantees a net loss.) In addition, this version of Bayesianism places a conditionalization requirement on justified changes in belief. Conditionalization requires that when new information is learned, one's new degree of belief match one's conditional degree of belief on that information prior to learning it. So if p is the new information learned, one should change one's degree of belief in q so that it matches one's degree of belief in q given p (together with everything else one knows) prior to learning q. The idea is that each person has an internal, subjective theory of evidence at a given time, in the form of conditional beliefs concerning all possible future courses of experience, so that when new information is acquired, all one needs to do is consult one's prior conditional degree of belief to determine what one's new degree of belief should be. Further, it is this subjective theory of evidence that defines the relation of coherence on the system of beliefs in question: coherence obtains when a belief conforms to the subjective theory of evidence in question, given the other items in the set of things over which coherence is defined.
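The conditionalization requirement can be made vivid with a small sketch. The Python fragment below is illustrative only: representing degrees of belief as a probability distribution over possible worlds, and the particular propositions and numbers, are my assumptions rather than part of any specific Bayesian theory. It shows the core rule that the new degree of belief in q after learning p equals the old conditional degree of belief in q given p.

```python
# Minimal, illustrative sketch of Bayesian conditionalization, assuming degrees
# of belief are a probability distribution over possible worlds (frozensets of
# the propositions true in that world). Propositions and numbers are invented.

def conditionalize(prior, evidence):
    """Return the new degrees of belief after learning that `evidence` is true."""
    p_evidence = sum(p for world, p in prior.items() if evidence in world)
    if p_evidence == 0:
        raise ValueError("evidence had prior probability zero")
    return {world: (p / p_evidence if evidence in world else 0.0)
            for world, p in prior.items()}

# Worlds built from two propositions: "rain" and "wet" (wet streets).
prior = {
    frozenset({"rain", "wet"}): 0.30,
    frozenset({"rain"}):        0.10,
    frozenset({"wet"}):         0.15,
    frozenset():                0.45,
}

posterior = conditionalize(prior, "rain")
# New degree of belief in "wet" = old P(wet | rain) = 0.30 / 0.40 = 0.75
print(sum(p for world, p in posterior.items() if "wet" in world))
```

On this toy assignment, the prior degree of belief that the streets are wet is 0.45; after learning that it is raining, conditionalization raises it to 0.75, exactly matching the prior conditional degree of belief in wet streets given rain.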
More generally, subjective versions of the coherence relation can be thought of in terms of the specification of a theory of evidence that is fully internal to the believer. One obvious way for the theory of evidence to be fully internal is for the theory of evidence to be contained within the belief system itself, as is true on the Bayesian theory above. There are other options, however. A subjective theory could appeal to dispositions to believe rather than to actual beliefs, or to something like one's deepest epistemic standards for trying to get to the truth and avoid error. Foley (1986) develops such a view in service of a type of foundationalist theory, understanding one's deepest standards in terms of the views one would hold given time to reflect without limitation and interference, and subjective coherentists could adopt much of this account in service of their view.
This broader characterization of the options open to subjective versions of the coherence relation carries the additional cost of appealing to the concept of what is internal to a believer, a notion that is none too clear (see the related entry justification, epistemic, internalist vs. externalist conceptions of). In broad terms, there are two important ways of thinking about what is internal here, one emphasizing whether the feature in question is somehow “in the head”, and the other emphasizing whether the feature is accessible to the believer on the basis of reflection alone. Unconscious beliefs would count as internal in the first sense, but not in the second; one's own existence is internal in the second sense, but presumably not in the first.
When offering a taxonomy of subjective versus objective characterizations of the coherence relation, it is not necessary to prefer one of these characterizations of what is internal. Instead, we can allow either to be used to specify a subjective account. Doing so places a greater burden on what kinds of arguments could be given for preferring one account of the coherence relation to another, and here the arguments will proceed in two stages. The first stage will address whether one's account of the coherence relation should be objective or subjective. On the side of an objective construal are the manifold intuitions in which we describe views as unjustified even though they are, from the point of view of the believer, the best view to hold. For example, we would say that cultic beliefs, such as the belief that accepting a blood transfusion is a terrible thing to do, are unjustified; and our judgment is not altered by learning that the believer in question was raised in the cult and can't be held responsible for knowing better. On the side of a subjective construal are the arguments for access internalism, according to which the fact that some people can't be held responsible for knowing better is a clear sign that their beliefs are justified, for justification is a property whose presence is detected by careful reflection. Another argument for subjective accounts relies on the new evil demon problem. Descartes' evil demon problem threatens the truth of our beliefs, for the demon makes the beliefs of the denizens of that world false. The new evil demon problem involves the concept of justification rather than truth, threatening theories that require objective likelihood of truth for a belief to be justified. For beliefs in demon worlds are false and likely to be so, but seem to have the same epistemic status as our beliefs do, since, after all, they could be us.
Recently, a new argument has appeared for subjective accounts of justification and, by extension, for subjective accounts of the coherence relation, if Coherentism is the preferred theory of justification. This argument appeals to the idea that an adequate theory of knowledge needs to account both for the nature of knowledge and for the value of knowledge. This issue arose first in Plato's dialogue between Meno and Socrates, in which Meno originally proposes that knowledge is more valuable than true belief because it gets us what we want (his particular example is finding the way to Larissa). Socrates points out that true belief will work just as well, a response that befuddles Meno. When he finally replies, he expresses perplexity regarding two things. He first wonders whether knowledge is more than true belief, and he also questions why we prize knowledge more than true belief. The first issue is one concerning the nature of knowledge, and the second concerning the value of knowledge. To account for the nature of knowledge requires minimally that one offer a theory of knowledge that is counterexample-free. To account for the value of knowledge requires an explanation of why knowledge is more valuable than its (proper) parts, including true belief and justified true belief (for more on why knowledge is more than justified true belief, see knowledge, analysis of). Such an explanation would seem to require showing two things: first, that justified true belief is more valuable than true belief; and second, that justified true belief plus whatever further condition is needed to produce a counterexample-free account of the nature of knowledge is more valuable than justified true belief on its own. These requirements show the need for a conception of justification that adds value to true belief, and it is difficult for objective theories of justification to discharge this obligation. In the context of objective accounts of the coherence relation, such an account would be governed by a formal constraint to the effect that satisfying that account would increase one's chances of getting to the truth, and theories of justification guided by such a constraint are prime examples of theories that find it difficult to explain why justified true belief is more valuable than mere true belief. The problem they encounter is called “the swamping problem”. It occurs when values interact in such a way that their combination is no more valuable than one of them separately, even though both factors are positively valuable. Examples that provide relevant analogies to the epistemic case include: beautiful art is no more valuable in terms of beauty for having been produced by an artist who usually produces beautiful artwork; functional furniture has no more functional value for coming from a factory that normally produces functional furniture. Just so, true beliefs are no more valuable from the epistemic point of view - the point of view defined in terms of the goal of getting to the truth and avoiding error - by having the additional property of being likely to be true.
Adopting a subjective theory allows one to avoid the swamping problem. The swamping problem arises for theories that characterize the teleological concept of justification in terms of properties whose presence makes a belief an effective means for getting to the goal of believing the truth and avoiding error. Subjective theories may also characterize the relationship between justification and truth in terms of a means/ends relationship, but they reject the requirement that something is a means to an end only if it is an effective means to that end, i.e., only if it increases the objective chances of that goal being realized. Subjectivists advert to the deepest and most important goals in life as examples, for such goals are rarely ones for which we have much idea of which means will be effective. Consider, for example, the goal of securing some particular person as a spouse, or the goal of raising psychologically healthy, emotionally responsible children. In each case, there are well-known ways in which achieving these goals can be sabotaged, and so we try not to proceed in that fashion. The problem is that there are too many ways that have worked for other people in securing similar goals, with no good way of assessing which of these ways would be effective in the present case. Doing nothing will certainly not work, but among the various actions available, we can only choose and hope for the best.
Subjectivists say the same for beliefs. They maintain that what is objectively a good ground for a belief is no more transparent to us than is how to maximize happiness over a lifetime. We learn by trial and error what to base our beliefs on, in much the same way as we fumble along in trying for a fulfilling existence. In doing our best in the pursuit of truth, subjectivists hold, we generate justification for our beliefs, even if all we have is hope that our grounds for belief make our beliefs likely to be true.
Whether these arguments on behalf of subjectivism in the theory of knowledge are weighty enough to overcome the strong intuitions on behalf of more objective accounts is not yet settled, though there is something approaching a consensus that subjectivism cannot quite be right in spite of the arguments in its favour. To the extent that the arguments are deemed plausible, a burden is created for relieving the tension that exists between the attractions of objective accounts and the arguments for subjective accounts. One move to reconcile this conflict is to posit different senses of the term ‘justified’ and its cognates. There are costs to such a move, however. One cost is that it makes subjectivists and objectivists out to be confused, thinking they are disagreeing when they are not. In ordinary cases when a term has more than one meaning, competent speakers of the language are not confused in this way. Another cost is that ambiguity must be posited without any linguistic clues to its existence, and ambiguities that linguists would not discover but that can only be discovered by philosophers are suspect for that reason.
Besides these family disputes within the coherentist clan, there are various problems that threaten to undermine every version of Coherentism. The focus here will be on three problems that have been widely discussed: problems related to the non-linear character of Coherentism, the input problem, and the problem of the truth connection.