December 1, 2009

Epistemology, so we are told, is the theory of knowledge: its aim is to discern and explain that quality or quantity enough of which distinguishes knowledge from mere true belief. We need a name for this quality or quantity, whatever precisely it is; call it warrant. From this point of view, the epistemology of religious belief should centre on the question whether religious belief has warrant, and if it does, how much it has and how it gets it. As a matter of fact, however, epistemological discussion of religious belief, at least since the Enlightenment (and in the Western world, especially the English-speaking Western world), has tended to focus not on the question whether religious belief has warrant, but on whether it is justified. More precisely, it has tended to focus on the question whether theistic belief, the belief that there exists a person like the God of traditional Christianity, Judaism and Islam (an almighty, all-knowing and wholly benevolent spiritual person who has created the world), is justified. The chief question, therefore, has been whether theistic belief is justified; the same question is often put by asking whether theistic belief is rational or rationally acceptable. Still further, the typical way of addressing this question has been by way of discussing arguments for and against the existence of God. On the pro side, there are the traditional theistic proofs or arguments: the ontological, cosmological and teleological arguments, to use Kant's terms for them. On the other side, the anti-theistic side, the principal argument is the argument from evil, the argument that it is impossible or at least improbable that there be such a person as God, given all the pain, suffering and evil the world displays.
This argument is flanked by subsidiary arguments, such as the claim that the very concept of God is incoherent (because, for example, it is impossible that there be a person without a body), and Freudian and Marxist claims that religious belief arises out of a sort of magnification and projection into the heavens of human attributes we think important.


But why has discussion centred on justification rather than warrant? And precisely what is justification? And why has the discussion of the justification of theistic belief focused so heavily on arguments for and against the existence of God?

As to the first question, we can see why once we see that the dominant epistemological tradition in modern Western philosophy has tended to identify warrant with justification. On this way of looking at the matter, warrant, that which distinguishes knowledge from mere true belief, just is justification. The justified-true-belief theory of knowledge, the theory according to which knowledge is justified true belief, has enjoyed the status of orthodoxy. According to this view, any of your beliefs has warrant for you if and only if you are justified in holding it.

But what is justification? What is it to be justified in holding a belief? To get a proper sense of the answer, we must turn to those twin towers of Western epistemology: René Descartes and, especially, John Locke. The first thing to see is that, according to Descartes and Locke, there are epistemic or intellectual duties, or obligations, or requirements. Thus, Locke:

Faith is nothing but a firm assent of the mind: which, if it be regulated, as is our duty, cannot be afforded to anything but upon good reason; and so cannot be opposite to it. He that believes, without having any reason for believing, may be in love with his own fancies; but neither seeks truth as he ought, nor pays the obedience due to his Maker, who would have him use those discerning faculties he has given him, to keep him out of mistake and error. He that does not this to the best of his power, however he sometimes lights on truth, is in the right but by chance; and I know not whether the luckiness of the accident will excuse the irregularity of his proceeding. This at least is certain, that he must be accountable for whatever mistakes he runs into: whereas he that makes use of the light and faculties God has given him, and seeks sincerely to discover truth by those helps and abilities he has, may have this satisfaction in doing his duty as a rational creature, that though he should miss truth, he will not miss the reward of it. For he governs his assent right, and places it as he should, who in any case or matter whatsoever, believes or disbelieves, according as reason directs him. He that does otherwise, transgresses against his own light, and misuses those faculties, which were given him.

Rational creatures, creatures with reason, creatures capable of believing propositions (and of disbelieving and being agnostic with respect to them), says Locke, have duties and obligations with respect to the regulation of their belief or assent. Now the central core of the notion of justification (as the etymology of the term indicates) is this: one is justified in doing something or in believing a certain way if, in so doing, one is innocent of wrongdoing and hence not properly subject to blame or censure. You are justified, therefore, if you have violated no duties or obligations, if you have conformed to the relevant requirements, if you are within your rights. To be justified in believing something, then, is to be within your rights in so believing, to be flouting no duty, to be satisfying your epistemic duties and obligations. This way of thinking about justification has been the dominant one, and it has many important contemporary representatives. Roderick Chisholm, for example (as distinguished an epistemologist as the twentieth century can boast), in his earlier work explicitly explains justification in terms of epistemic duty (Chisholm, 1977).

The (or a) main epistemological question about religious belief, therefore, has been the question whether or not religious belief in general, and theistic belief in particular, is justified. And the traditional way to answer that question has been to inquire into the arguments for and against theism. Why this emphasis upon arguments? An argument is a way of marshalling your propositional evidence, the evidence from other propositions you believe, for or against a given proposition. And the reason for the emphasis upon argument is the assumption that theistic belief is justified if and only if there is sufficient propositional evidence for it. If there is not much by way of propositional evidence for theism, then you are not justified in accepting it. Moreover, if you accept theistic belief without having propositional evidence for it, then you are going contrary to epistemic duty and are therefore unjustified in accepting it. Thus W. K. Clifford trumpets that it is wrong, always and everywhere, to believe anything upon insufficient evidence; his is only the most strident voice in a vast chorus insisting that there is an intellectual duty not to believe in God unless you have propositional evidence for that belief. A few others in the choir: Sigmund Freud, Brand Blanshard, H. H. Price, Bertrand Russell and Michael Scriven.

But why does the justification of theistic belief get identified with there being propositional evidence for it? Justification is a matter of being blameless, of having done one's duty (in this context, one's epistemic duty): what, precisely, has this to do with having propositional evidence?

The answer, once again, is to be found in Descartes and especially Locke. As we have seen, justification is the property your beliefs have when, in forming and holding them, you conform to your epistemic duties and obligations. But according to Locke, a central epistemic duty is this: to believe a proposition only to the degree that it is probable with respect to what is certain for you. What propositions are certain for you? First, according to Descartes and Locke, propositions about your own immediate experience: that you have a mild headache, or that it seems to you that you see something red. And second, propositions that are self-evident for you: necessarily true propositions so obvious that you cannot so much as entertain them without seeing that they must be true. (Examples would be simple arithmetical and logical propositions, together with such propositions as that the whole is at least as large as its parts, that red is a colour, and that whatever exists has properties.) Propositions of these two sorts are certain for you; as for other propositions, you are justified in believing one if and only if, and only to the degree to which, it is probable with respect to what is certain for you. According to Locke, therefore, and according to the whole modern foundationalist tradition initiated by Locke and Descartes (a tradition that until recently has dominated Western thinking about these topics), there is a duty not to accept a proposition unless it is certain or probable with respect to what is certain.

In the present context, therefore, the central Lockean assumption is that there is an epistemic duty not to accept theistic belief unless it is probable with respect to what is certain for you; as a consequence, theistic belief is justified only if the existence of God is probable with respect to what is certain. Locke does not argue for this proposition; he simply announces it, and epistemological discussion of theistic belief has for the most part followed him in making this assumption. This enables us to see why epistemological discussion of theistic belief has tended to focus on the arguments for and against theism: on the view in question, theistic belief is justified only if it is probable with respect to what is certain, and the way to show that it is probable with respect to what is certain is to give arguments for it from premises that are certain, or sufficiently probable with respect to what is certain.

There are at least three important problems with this approach to the epistemology of theistic belief. First, the standards for theistic arguments have traditionally been set absurdly high (and perhaps part of the responsibility for this must be laid at the door of some who have offered these arguments and claimed that they constitute wholly demonstrative proofs). The idea seems to be that a good theistic argument must start from what is self-evident and proceed majestically, by way of self-evidently valid argument forms, to its conclusion. It is no wonder that few if any theistic arguments meet that lofty standard, particularly in view of the fact that almost no philosophical arguments of any sort meet it. (Think of your favourite philosophical argument: does it really start from premises that are self-evident and move by way of self-evident argument forms to its conclusion?)

Secondly, attention has been mostly confined to three theistic arguments: the traditional ontological, cosmological and teleological arguments. But in fact there are many more good arguments: arguments from the nature of proper function, and from the nature of propositions, numbers and sets; arguments from intentionality, from counterfactuals, from the confluence of epistemic reliability with epistemic justification, from reference, simplicity, intuition and love; arguments from colours and flavours, from miracles, play and enjoyment, morality, from beauty and from the meaning of life. There is even a theistic argument from the existence of evil.

But there is a third and deeper problem here. The basic assumption is that theistic belief is justified only if it is, or can be shown to be, probable with respect to some body of evidence or propositions, perhaps those that are self-evident or about one's own mental life. But is this assumption true? The idea is that theistic belief is very much like a scientific hypothesis: it is acceptable if and only if there is an appropriate balance of propositional evidence in favour of it. But why believe a thing like that? Perhaps the theory of relativity or the theory of evolution is like that; such a theory has been devised to explain the phenomena and gets all its warrant from its success in so doing. However, other beliefs, e.g., memory beliefs, or belief in other minds, are not like that; they are not hypotheses at all, and are not accepted because of their explanatory powers. They are, instead, the propositions from which one starts in attempting to give evidence for a hypothesis. Now, why assume that theistic belief, belief in God, is in this regard more like a scientific hypothesis than like, say, a memory belief? Why think that the justification of theistic belief depends upon the evidential relation of theistic belief to other things one believes? According to Locke and the beginnings of this tradition, it is because there is a duty not to assent to a proposition unless it is probable with respect to what is certain for you. But is there really any such duty? No one has succeeded in showing that, say, belief in other minds, or the belief that there has been a past, is probable with respect to what is certain for us. Suppose it is not: does it follow that you are living in epistemic sin if you believe that there are other minds? Or a past?

There are urgent questions about any view according to which one has duties of the sort 'do not believe p unless it is probable with respect to what is certain for you'. First, if this is a duty, is it one to which I can conform? My beliefs are for the most part not within my control; certainly they are not within my direct control. I believe that there has been a past and that there are other people; even if these beliefs are not probable with respect to what is certain for me (and even if I came to know this), I could not give them up. Whether or not I accept such beliefs is not really up to me at all, for I can no more refrain from believing these things than I can refrain from conforming to the law of gravity. Second, is there really any reason for thinking I have such a duty? Nearly everyone recognizes such duties as that of not engaging in gratuitous cruelty, taking care of one's children and one's aged parents, and the like; but do we also find ourselves recognizing that there is a duty not to believe what is not probable (or what we cannot see to be probable) with respect to what is certain for us? It hardly seems so. It is therefore hard to see why being justified in believing in God requires that the existence of God be probable with respect to some such body of evidence as the set of propositions certain for you. Perhaps theistic belief is properly basic, i.e., such that one is perfectly justified in accepting it without accepting it on the evidential basis of other propositions one believes.

Taking justification in that original etymological fashion, therefore, there is every reason to doubt that one is justified in holding theistic belief only if one has evidence for it. Of course, the term 'justification' has undergone various analogical extensions in the works of various philosophers; it has been used to name various properties that are different from justification etymologically so-called, but analogically related to it. In one such use, the term simply means propositional evidence: to say that a belief is justified for someone is to say that he has propositional evidence (or sufficient propositional evidence) for it. So taken, however, the question whether theistic belief is justified loses some of its interest; for it is not clear (given this use) that there is anything amiss with beliefs that are unjustified in that sense. Perhaps one also does not have propositional evidence for one's memory beliefs; if so, that would not be a mark against them, and would not suggest that there is something wrong with holding them.

Another analogically connected way to think about justification (the way the later Chisholm thinks about it) is to think of it as simply a relation of fitting between a given proposition and one's epistemic base, which includes the other things one believes, as well as one's experience. Perhaps that is the way justification is to be thought of; but then it is no longer at all obvious that theistic belief has this property of justification only if it is probable with respect to some other body of evidence. Perhaps, again, it is like memory beliefs in this regard.

To recapitulate: the dominant Western tradition has been inclined to identify warrant with justification; it has been inclined to understand the latter in terms of duty and the fulfilment of obligation, and hence to suppose that there is an epistemic duty not to believe in God unless you have good propositional evidence for the existence of God. Epistemological discussion of theistic belief, as a consequence, has concentrated on the propositional evidence for and against theistic belief, i.e., on arguments for and against theistic belief. But there is excellent reason to doubt that there are epistemic duties of the sort the tradition appeals to here.

And perhaps it was a mistake to identify warrant with justification in the first place. The beliefs of a madman who thinks he is Napoleon have little warrant for him; his problem, however, need not be dereliction of epistemic duty. He is in difficulty, but it is not, or not necessarily, that of failing to fulfil epistemic duty. He may be doing his epistemic best; he may be doing his epistemic duty in excelsis: but his madness prevents his beliefs from having much by way of warrant. His lack of warrant is not a matter of being unjustified, i.e., of failing to fulfil epistemic duty. So warrant and being epistemically justified are not the same thing. Another example: suppose (to use the favourite twentieth-century variant of Descartes' evil demon example) I have been captured by Alpha Centaurian super-scientists running a cognitive experiment; they remove my brain, keep it alive in artificial nutrients, and by virtue of their advanced technology induce in me the beliefs I might otherwise have if I were going about my usual business. Then my beliefs would not have much by way of warrant; but would that be because I was failing to do my epistemic duty?

As a result of these and other problems, another, externalist way of thinking about knowledge has appeared in recent epistemology. A theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified be cognitively accessible to the person, internal to his cognitive perspective; it is externalist if it allows that at least some of the justifying factors need not be thus accessible, in that they can be external to the believer's cognitive perspective, beyond his ken. (Epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explanation of it.)

Or perhaps the thing to say is that it has reappeared, for the dominant strains in epistemology prior to the Enlightenment were really externalist. According to this externalist way of thinking, warrant does not depend upon satisfaction of duty, or upon anything else to which the knower has special cognitive access (as he does to what is about his own experience, and to whether he is trying his best to do his epistemic duty); it depends instead upon factors external to the epistemic agent, such factors as whether his beliefs are produced by reliable cognitive mechanisms, or whether they are produced by epistemic faculties functioning properly in an appropriate epistemic environment.

How shall we think about the epistemology of theistic belief in this externalist way (which is at once both satisfyingly traditional and agreeably up to date)? I think the ontological question whether there is such a person as God is in a way prior to the epistemological question about the warrant of theistic belief. It is natural to think that if in fact we have been created by God, then the cognitive processes that issue in belief in God are indeed reliable belief-producing processes; and if in fact God created us, then no doubt the cognitive faculties that produce belief in God are functioning properly in an epistemically congenial environment. On the other hand, if there is no such person as God, if theistic belief is an illusion of some sort, then things are much less clear. Then belief in God is most likely produced by wishful thinking or some other cognitive process not aimed at truth; thus it will have little or no warrant. And belief in God on the basis of argument would be like belief in false philosophical theories on the basis of argument: do such beliefs have warrant? The custom of discussing the epistemological questions about theistic belief as if they could be profitably discussed independently of the ontological issue as to whether or not theism is true is therefore misguided. These two issues are intimately intertwined.

Nonetheless, a further recent development deserves notice: virtue epistemology. Its central idea is that justification and knowledge arise from the proper functioning of our intellectual virtues or faculties in an appropriate environment.

Finally, considerations concerning the reliability of our faculties point to the importance of an appropriate environment. The idea is that cognitive mechanisms might be reliable in some environments but not in others. Consider an example from Alvin Plantinga. On a planet revolving around Alpha Centauri, cats are invisible to human beings. Moreover, Alpha Centaurian cats emit a type of radiation that causes humans to form the belief that there is a dog barking nearby. Suppose now that you are transported to this Alpha Centaurian planet, a cat walks by, and you form the belief that there is a dog barking nearby. Surely you are not justified in believing this. However, the problem here is not with your intellectual faculties, but with your environment. Although your faculties of perception are reliable on earth, they are unreliable on the Alpha Centaurian planet, which is an inappropriate environment for those faculties.

The central idea of virtue epistemology has a high degree of initial plausibility. First, by making the reliability of faculties central, it explains quite neatly why beliefs caused by perception and memory are often justified, while beliefs caused by wishful thinking and superstition are not. Secondly, the theory gives us a basis for answering certain kinds of scepticism. Specifically, we may agree that if we were brains in a vat, or victims of a Cartesian demon, then we would not have knowledge even in those rare cases where our beliefs turned out true. But virtue epistemology explains that what is important for knowledge is that our faculties are in fact reliable in the environment in which we are. And so we do have knowledge so long as we are, in fact, not victims of a Cartesian demon, or brains in a vat. Finally, Plantinga argues that virtue epistemology deals well with Gettier problems. The idea is that Gettier problems give us cases of justified belief that is true by accident. Virtue epistemology, Plantinga argues, helps us to understand what it means for a belief to be true by accident, and provides a basis for saying why such cases are not knowledge: beliefs are true by accident when they are caused by otherwise reliable faculties functioning in an inappropriate environment. Plantinga develops this line of reasoning in Plantinga (1988).

The Humean problem of induction supposes that there is some property A pertaining to an observational or experimental situation, and that of the observed instances of A, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property B. Suppose further that the background circumstances have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of B's among A's or concerning causal or nomological connections between instances of A and instances of B.

In this situation, an enumerative or instantial inductive inference would move from the premise that m/n of observed A's are B's to the conclusion that approximately m/n of all A's are B's. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of A's should be taken to include not only unobserved A's and future A's, but also possible or hypothetical A's. (An alternative conclusion would concern the probability or likelihood of the very next observed A being a B.)
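Set out symbolically (this formalization is mine, restating the schema just described, with the probability qualification attached to the inference rather than the conclusion):

```latex
% Enumerative (instantial) induction, schematically:
%   Premise:     m/n of the observed A's have been B's
%   Conclusion:  (probably) approximately m/n of all A's are B's
\frac{\#\{\text{observed } A\text{'s that are } B\text{'s}\}}{\#\{\text{observed } A\text{'s}\}}
  \;=\; \frac{m}{n}
\quad\Longrightarrow\quad
\text{(probably) } \Pr\big(\text{an arbitrary } A \text{ is a } B\big) \;\approx\; \frac{m}{n}
```

The "alternative conclusion" mentioned above is the right-hand side read as a claim about the very next observed A, rather than about the whole class of A's.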

The traditional, or Humean, problem of induction, often referred to simply as 'the problem of induction', is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premise is true, or even that their chances of truth are significantly enhanced?

Hume's discussion of this problem deals explicitly with cases where all observed A's are B's, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: inductive inferences are not rationally justified, but are instead the result of an essentially a-rational process, custom or habit. Hume challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma to show that there can be no such reasoning. Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas, or experimental, i.e., empirical, reasoning concerning matters of fact and existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is not a contradiction to suppose that the course of nature may change, that an order that was observed in the past will not continue in the future. But it also cannot be the latter, since any empirical argument would appeal to the success of such reasoning in previous experience, and the justifiability of generalizing from previous experience is precisely what is at issue, so that any such appeal would be question-begging. Hence there can be no such reasoning.

An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past, or that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premise, in which case the issue is obviously how such a premise can be justified. Hume's argument is then that no such justification is possible: the principle cannot be justified demonstratively, as it is not contradictory to deny it; and it cannot be justified by appeal to its having been true in previous experience without obviously begging the question.

The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume's argument, viz. that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find some other sort of justification for induction.

The term 'induction' is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc..., where a, b, c... are all of some kind G, it is inferred that G's from outside the sample, such as future G's, will be F (if other persons have deceived them, children may well infer that everyone is a deceiver). Different but similar inferences are those from the past possession of a property by some object to the same object's future possession of it, or from the constancy of some law-like pattern in events and states of affairs to its future constancy: all objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.

The rational basis of any such inference was challenged by David Hume (1711-76), who believed that induction could not be underwritten by reason, but merely reflected a habit or custom of the mind. Hume was not therefore sceptical about the propriety of processes of induction, but sceptical about the role of reason in either explaining or justifying them. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties, for which the inference is plausible (often called projectable properties), from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that science pays attention to such factors as variation within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on. Nevertheless, the fundamental problem remains that experience shows us only events occurring within a very restricted part of the vast spatial and temporal order about which we then come to believe things.

All the same, the classical problem of induction is often phrased in terms of finding some reason to expect that nature is uniform. In Fact, Fiction and Forecast (1954), Goodman showed that we need in addition some reason for preferring some uniformities to others, for without such a selection the uniformity of nature is vacuous. Thus, suppose that all examined emeralds have been green. Uniformity would lead us to expect that future emeralds will be green as well. But now define a predicate 'grue': x is grue if and only if x is examined before time T and is green, or x is examined after T and is blue. Let T refer to some time around the present. Then if newly examined emeralds are like previous ones in respect of being grue, they will be blue. We prefer greenness as a basis of prediction to grueness, but why?
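Goodman's puzzle can be sketched in a few lines of code. The cutoff time T, the timestamps and the helper names below are illustrative assumptions of mine, not anything in Goodman; the point is only that the same body of evidence satisfies both predicates while they diverge on unexamined cases.

```python
T = 100  # some time around the present (an arbitrary illustrative cutoff)

def is_green(colour):
    return colour == "green"

def is_grue(colour, examined_at):
    # x is grue iff x is examined before T and is green,
    # or x is examined after T and is blue.
    return (examined_at < T and colour == "green") or \
           (examined_at >= T and colour == "blue")

# Every emerald examined so far (all before T) has been green...
observed = [("green", t) for t in range(5)]
assert all(is_green(c) for c, t in observed)
# ...and, equally, every one has been grue: the evidence fits both predicates.
assert all(is_grue(c, t) for c, t in observed)

def predict(projected_predicate, examined_at):
    """Colour an emerald must have, at the given time, to fit the projection."""
    if projected_predicate == "green":
        return "green"
    # to remain grue after T, a newly examined emerald must be blue
    return "blue" if examined_at >= T else "green"

print(predict("green", 200))  # projecting green  -> green
print(predict("grue", 200))   # projecting grue   -> blue
```

The two projections agree on everything observed and disagree about every emerald examined after T, which is why no purely formal appeal to "uniformity" can decide between them.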

Goodman argued that although his new predicate appears to be gerrymandered, and itself involves a reference to a difference, this is just a parochial, or language-relative, judgement, there being no language-independent standard of similarity to which to appeal. Other philosophers have not been convinced by this degree of linguistic relativism. What remains clear is that the possibility of these 'bent' predicates puts a decisive obstacle in the face of purely logical and syntactical approaches to problems of confirmation.

Even so, confirmation theory is the theory of the measure to which evidence supports a theory; a fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is the German philosopher, mathematician and polymath Gottfried Wilhelm Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the 20th century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific.

The principal developments were due to the German logical positivist Rudolf Carnap (1891-1970), culminating in his Logical Foundations of Probability (1950). Carnap's idea was that the required measure is the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared with the number in which the evidence itself holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared with the total range of possibilities left open by the evidence. This range theory of probability was originally reached by the French mathematician Pierre Simon de Laplace (1749-1827), and has guided confirmation theory down to the work of Carnap. The difficulty with the range theory lies in identifying sets of possibilities so that they admit of measurement. Laplace appealed to the principle of indifference, supposing that possibilities have an equal probability unless there is reason for distinguishing them. However, unrestricted appeal to this principle introduces inconsistency. Treating possibilities as equally probable may be regarded as depending upon metaphysical choices, or logical choices, as in the view of the English economist and philosopher John Maynard Keynes (1883-1946), or on semantic choices, as in the work of Carnap. In any event, it is hard to find an objective source for the authority of such a choice, and this is one of the principal difficulties in the way of formalizing the theory of confirmation.
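The range conception can be illustrated with a toy model. The following sketch rests on illustrative assumptions (three individuals, a single predicate 'F', and every state description counted as equally probable, as the principle of indifference licenses) and computes a probability as the proportion of evidence-satisfying states in which the hypothesis also holds:

```python
from itertools import product

# Three individuals, each either F or not-F: 2**3 = 8 state descriptions.
individuals = ["a", "b", "c"]
states = list(product([True, False], repeat=len(individuals)))

def prob(hypothesis, evidence):
    """Range-theoretic probability: the proportion of states satisfying
    the evidence in which the hypothesis holds as well."""
    e_states = [s for s in states if evidence(s)]
    return sum(1 for s in e_states if hypothesis(s)) / len(e_states)

# Evidence: a and b are F.  Hypothesis: all three are F.
evidence = lambda s: s[0] and s[1]
hypothesis = lambda s: all(s)
print(prob(hypothesis, evidence))  # 0.5: one of the two states left open
```

The model also exhibits the difficulty noted above: the result depends entirely on which possibilities are counted as equal, and nothing in the formalism itself certifies that choice.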

The theory therefore demands that we can put a measure on the 'range' of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone. Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what looks plausible.

Frege and Carnap, represented by received opinion as analyticity's best friends in this century, did as much to undermine it as its worst enemies. Quine (1908-), whose early work was on mathematical logic and issued in A System of Logistic (1934), Mathematical Logic (1940) and Methods of Logic (1950), achieved wide philosophical recognition with the collection of papers From a Logical Point of View (1953). Putnam (1926-), whose concern in his later period has largely been to deny any serious asymmetry between truth and knowledge as it is obtained in natural science and as it is obtained in morals and even theology, has written books including Philosophy of Logic (1971), Representation and Reality (1988) and Renewing Philosophy (1992); collections of his papers include Mathematics, Matter, and Method (1975), Mind, Language, and Reality (1975), and Realism and Reason (1983). Quine and Putnam, both represented as having refuted the analytic/synthetic distinction, not only did no such thing, but in fact contributed significantly to undoing the damage done by Frege and Carnap. Finally, the epistemological significance of the distinction is nothing like what it is commonly taken to be.

Locke's account of an analytic proposition was, for its time, everything that a succinct account of analyticity should be (Locke, 1924, pp. 306-8). He distinguished two kinds of analytic propositions: identity propositions, in which 'we affirm the said term of itself', e.g., 'Roses are roses', and predicative propositions, in which 'a part of the complex idea is predicated of the name of the whole', e.g., 'Roses are flowers'. Locke calls such sentences 'trifling', because a speaker who uses them 'trifles with words'. A synthetic sentence, in contrast, such as a mathematical theorem, states a truth and conveys with it informative, real knowledge. Correspondingly, Locke distinguishes two kinds of necessary consequences: analytic entailment, where validity depends on the literal containment of the conclusion in the premiss, and synthetic entailment, where it does not. (Locke did not originate this concept-containment notion of analyticity. It appears in discussions by Arnauld and Nicole, and it is safe to say it has been around for a very long time (Arnauld, 1964).)

Kant's account of analyticity, which received opinion tells us is the consummate formulation of this notion in modern philosophy, is actually a step backward. What is valid in his account is not novel, and what is novel is not valid. Kant presents Locke's account of concept-containment analyticity, but introduces certain alien features, the most important being his characterization of analytic propositions as propositions whose denials are logical contradictions (Kant, 1783). This characterization suggests that analytic propositions based on Locke's part-whole relation or Kant's explicative copula are a species of logical truth. But the containment of the predicate concept in the subject concept in sentences like 'Bachelors are unmarried' is a different relation from the containment of the consequent in the antecedent in a sentence like 'If John is a bachelor, then John is a bachelor or Mary read Kant's Critique'. The former is literal containment, whereas the latter is, in general, not. Talk of the containment of the consequent of a logical truth in the antecedent is metaphorical, a way of saying 'logically derivable'.

Kant's conflation of concept containment with logical containment caused him to overlook the question of whether logical truths are analytic, and the problem of how he can say mathematical truths are synthetic a priori when they cannot be denied without contradiction. Historically, the conflation set the stage for the disappearance of the Lockean notion. Frege, whom received opinion portrays as second only to Kant among the champions of analyticity, and Carnap, whom it portrays as just behind Frege, were jointly responsible for the disappearance of concept-containment analyticity.

Frege was clear about the difference between concept containment and logical containment, expressing it as the difference between the containment of 'beams in a house' and the containment of a 'plant in the seed' (Frege, 1853). But he found the former, as Kant formulated it, defective in three ways: it explains analyticity in psychological terms, it does not cover all cases of analytic propositions, and, perhaps most important for Frege's logicism, its notion of containment is unfruitful as a definitional mechanism in logic and mathematics (Frege, 1853). In an invidious comparison between the two notions of containment, Frege observes that with logical containment 'we are not simply taking out of the box again what we have just put into it'. He accordingly defines analytic propositions as consequences of laws of logic plus definitions. This definition makes logical containment the basic notion. Analyticity becomes a special case of logical truth, and, even in this special case, the definitions employ the fruitful power of definition in logic and mathematics rather than mere concept combination.

Carnap, attempting to overcome what he saw as a shortcoming in Frege's account of analyticity, took the remaining step necessary to do away explicitly with Lockean-Kantian analyticity. As Carnap saw things, it was a shortcoming of Frege's explanation that it seems to suggest that definitional relations underlying analytic propositions can be extra-logical in some sense, say, in resting on linguistic synonymy. To Carnap, this represented a failure to achieve a uniform formal treatment of analytic propositions and left us with a dubious distinction between logical and extra-logical vocabulary. Hence, he eliminated the reference to definitions in Frege's explanation of analyticity by introducing 'meaning postulates', e.g., statements such as '(∀x)(x is a bachelor ⊃ x is unmarried)' (Carnap, 1965). Like the standard logical postulates on which they were modelled, meaning postulates express nothing more than constraints on the admissible models with respect to which sentences and deductions are evaluated for truth and validity. Thus, despite their name, meaning postulates have no more to do with meaning than any other statements expressing a necessary truth. In defining analytic propositions as consequences of (an expanded set of) logical laws, Carnap explicitly removed the one place in Frege's explanation where there might be room for concept containment, and with it the last trace of Locke's distinction between semantic and other necessary consequences.

Quine, the staunchest critic of analyticity of our time, performed an invaluable service on its behalf, although one that has gone almost completely unappreciated. Quine made two devastating criticisms of Carnap's meaning-postulate approach that expose it as both irrelevant and vacuous. It is irrelevant because, in using particular words of a language, meaning postulates fail to explicate analyticity for sentences and languages generally, that is, they do not define it for variables 'S' and 'L' (Quine, 1953). It is vacuous because, although meaning postulates tell us what sentences are to count as analytic, they do not tell us what it is for them to be analytic.

Received opinion has it that Quine did much more than refute the analytic/synthetic distinction as Carnap tried to draw it. Received opinion has it that Quine demonstrated there is no distinction, however anyone might try to draw it. This, too, is incorrect. To argue for this stronger conclusion, Quine had to show that there is no way to draw the distinction outside logic, in particular in a theory in linguistics corresponding to Carnap's, so Quine's argument had to take an entirely different form. Some inherent feature of linguistics had to be exploited to show that no theory in this science can deliver the distinction. But the feature Quine chose was a principle of operationalist methodology characteristic of the school of Bloomfieldian linguistics. Quine succeeds in showing that meaning cannot be made objective sense of in linguistics if making sense of a linguistic concept requires, as that school claims, operationally defining it in terms of substitution procedures that employ only concepts unrelated to that linguistic concept. But Chomsky's revolution in linguistics replaced the Bloomfieldian taxonomic model of grammars with the hypothetico-deductive model of generative linguistics, and, as a consequence, such operational definition was removed as the standard for concepts in linguistics. The standard of theoretical definition that replaced it was far more liberal, allowing the members of a family of linguistic concepts to be defined with respect to one another within a set of axioms that state their systematic interconnections, the entire system being judged by whether its consequences are confirmed by the linguistic facts. Quine's argument does not even address theories of meaning based on this hypothetico-deductive model (Katz, 1988 and 1990).

Putnam, the other staunch critic of analyticity, performed a service on behalf of analyticity fully on a par with, and complementary to, Quine's. Whereas Quine refuted Carnap's formalization of Frege's conception of analyticity, Putnam refuted this very conception itself. Putnam put an end to the entire attempt, initiated by Frege and completed by Carnap, to construe analyticity as a logical concept (Putnam, 1962, 1970, 1975).

However, as with Quine, received opinion has it that Putnam did much more. Putnam is credited with having devised science-fiction cases, from the robot-cat case to the twin-earth cases, that are counterexamples to the traditional theory of meaning. Again, received opinion is incorrect. These cases are only counterexamples to Frege's version of the traditional theory of meaning. Frege's version claims both (1) that sense determines reference, and (2) that there are instances of analyticity, say, typified by 'Cats are animals', and of synonymy, say, typified by 'water' in English and 'water' in twin-earth English. Given (1) and (2), what we call 'cats' could not be non-animals and what we call 'water' could not differ from what the twin-earthers call 'water'. But, as Putnam's cases show, what we call 'cats' could be Martian robots and what they call 'water' could be something other than H2O. Hence, the cases are counterexamples to Frege's version of the theory.

The remaining Fregean criticism points to a genuine incompleteness of the traditional account of analyticity. There are analytic relational sentences, for example, 'Jane walks with those with whom she strolls', 'Jack kills those he himself has murdered', etc., and analytic entailments with existential conclusions, for example, 'I think, therefore I exist'. The containment in these sentences is just as literal as that in an analytic subject-predicate sentence like 'Bachelors are unmarried'. Such sentences show the need for a theory of meaning construed as a hypothetico-deductive systematization of sense as defined in (D), overcoming the incompleteness of the traditional account in the case of such relational sentences.

Such a theory of meaning makes the principal concern of semantics the explanation of sense properties and relations like synonymy, antonymy, redundancy, analyticity, ambiguity, etc. Furthermore, it makes grammatical structure, specifically sense structure, the basis for explaining them. This leads directly to the discovery of a new level of grammatical structure, and this, in turn, makes possible a proper definition of analyticity. To see this, consider two simple examples. It is a semantic fact that 'male bachelor' is redundant and that 'spinster' is synonymous with 'woman who never married'. In the case of the redundancy, we have to explain the fact that the sense of the modifier 'male' is already contained in the sense of its head 'bachelor'. In the case of the synonymy, we have to explain the fact that the sense of 'spinster' is identical to the sense of 'woman who never married' (compositionally formed from the senses of 'woman', 'never' and 'married'). But insofar as such facts concern relations involving the components of the senses of 'bachelor' and 'spinster', and insofar as these words are syntactically simple, there must be a level of grammatical structure at which syntactic simples are semantically complex. This, in brief, is the route by which we arrive at a level of decompositional semantic structure that is the locus of sense structures masked by syntactically simple words.
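The decompositional picture can be sketched as a toy model in which each syntactically simple word is assigned a set of primitive sense components. The particular components used below are illustrative assumptions, not a worked-out semantic theory:

```python
# Toy decompositional senses: each expression's sense is a set of
# primitive components (this inventory is purely illustrative).
sense = {
    "bachelor": {"human", "adult", "male", "unmarried"},
    "spinster": {"human", "adult", "female", "unmarried"},
    "male": {"male"},
    "woman who never married": {"human", "adult", "female", "unmarried"},
}

def redundant(modifier: str, head: str) -> bool:
    """A modifier is redundant iff its sense is contained in the head's."""
    return sense[modifier] <= sense[head]

def synonymous(a: str, b: str) -> bool:
    """Two expressions are synonymous iff their senses are identical."""
    return sense[a] == sense[b]

assert redundant("male", "bachelor")
assert synonymous("spinster", "woman who never married")
```

On this model 'bachelor' is syntactically simple but semantically complex, which is exactly the point: the relevant containment relations hold among sense components that no syntactic analysis reveals.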

Once again, the fact that (A) itself makes no reference to logical operators or logical laws indicates that analyticity for subject-predicate sentences can be extended to simple relational sentences without treating analytic sentences as instances of logical truths. Further, the source of the incompleteness is no longer explained, as Frege explained it, as the absence of fruitful logical apparatus, but is now explained as mistakenly treating what is only a special case of analyticity as if it were the general case. The inclusion of the predicate in the subject is the special case (where n = 1) of the general case of the inclusion of an n-place predicate (and its terms) in one of its terms. Note that the defects Quine complained of in connection with Carnap's meaning-postulate explication are absent in (A). (A) contains no words from a natural language. It explicitly uses variables 'S' and 'L' because it is a definition in linguistic theory. Moreover, (A) tells us what property it is in virtue of which a sentence is analytic, namely, redundant predication, that is, the predication structure of an analytic sentence is already found in the content of its term structure.

Received opinion has been anti-Lockean in holding that necessary consequences in logic and language belong to one and the same species. This seems wrong, because the property of redundant predication provides a non-logical explanation of why true statements made in the literal use of analytic sentences are necessarily true. Since the property ensures that the objects of the predication in the use of an analytic sentence are chosen on the basis of the features to be predicated of them, the truth-conditions of the statement are automatically satisfied once its terms take on reference. The difference between such a linguistic source of necessity and the logical and mathematical sources vindicates Locke's distinction between two kinds of necessary consequence.

Received opinion concerning analyticity contains another mistake: the idea that analyticity is inimical to science. In part, this idea developed as a reaction to certain dubious uses of analyticity, such as Frege's attempt to establish logicism and Schlick's, Ayer's and other logical positivists' attempts to deflate claims to metaphysical knowledge by showing that alleged synthetic a priori truths are merely empty analytic truths (Schlick, 1948, and Ayer, 1946). In part, it also developed as a response to a number of cases where alleged analytic, and hence necessary, truths, e.g., the law of excluded middle, have been taken to be open to revision. Such cases convinced philosophers like Quine and Putnam that the analytic/synthetic distinction is an obstacle to scientific progress.

The problem, if there is one, is not analyticity in the concept-containment sense, but the conflation of it with analyticity in the logical sense. This made it seem as if there is a single concept of analyticity that can serve as the grounds for a wide range of a priori truths. But, just as there are two analytic/synthetic distinctions, so there are two concepts of concept. The narrow Lockean/Kantian distinction is based on a narrow notion of concept on which concepts are senses of expressions in the language. The broad Fregean/Carnapian distinction is based on a broad notion of concept on which concepts are conceptions, often scientific ones, about the nature of the referent(s) of expressions (Katz, 1972, and, curiously, Putnam, 1981). Conflation of these two notions of concept produced the illusion of a single concept with the content of philosophical, logical and mathematical conceptions, but with the status of linguistic concepts. This encouraged philosophers to think that they were in possession of concepts with the content to express substantive philosophical claims, such as those of Frege, Schlick and Ayer, and with a status that trivializes the task of justifying them by requiring only linguistic grounds for the propositions in question.

Finally, there is an important epistemological implication of separating the broad and narrow notions of analyticity. Frege and Carnap took the broad notion of analyticity to provide foundations for necessity and a priority, and hence for some form of rationalism, and nearly all rationalistically inclined analytic philosophers followed them in this. Thus, when Quine dispatched the Frege-Carnap position on analyticity, it was widely believed that necessity, a priority, and rationalism had also been dispatched, and that, as a consequence, Quine had ushered in an 'empiricism without dogmas' and naturalized epistemology. But given that there is still a notion of analyticity that enables us to pose the problem of how necessary, synthetic a priori knowledge is possible (moreover, one whose narrowness makes logical and mathematical knowledge part of the problem), Quine did not undercut the foundations of rationalism. Hence, a serious reappraisal of the new empiricism and naturalized epistemology is, to say the least, very much in order (Katz, 1990).

The a priori/a posteriori distinction has been applied to a wide range of objects, including concepts, propositions, truths and knowledge. Our primary concern will, however, be with the epistemic distinction between a priori and a posteriori knowledge. The most common way of marking the distinction is by reference to Kant's claim that a priori knowledge is 'absolutely independent of all experience'. It is generally agreed that S's knowledge that p is independent of experience just in case S's belief that p is justified independently of experience. Some authors (Butchvarov, 1970, and Pollock, 1974), however, find this negative characterization of a priori knowledge unsatisfactory and have opted for providing a positive characterization in terms of the type of justification on which such knowledge depends. Finally, others (Putnam, 1983, and Chisholm, 1989) have attempted to mark the distinction by introducing concepts such as necessity and rational unrevisability rather than in terms of the type of justification relevant to a priori knowledge.

One who characterizes a priori knowledge in terms of justification that is independent of experience is faced with the task of articulating the relevant sense of experience. Proponents of the a priori standardly cite intuition or intuitive apprehension as the source of a priori justification, and they maintain that these terms refer to a distinctive type of experience that is both common and familiar to most individuals. Hence, there is a broad sense of experience in which a priori justification is dependent on experience. An initially attractive strategy is to suggest that a priori justification must be independent of sense experience. But this account is too narrow, since memory, for example, is not a form of sense experience, yet justification based on memory is presumably not a priori. There appear to remain only two options: provide a general characterization of the relevant sense of experience, or enumerate those sources that are experiential. General characterizations of experience often maintain that experience provides information specific to the actual world while non-experiential sources provide information about all possible worlds. This approach, however, reduces the concept of non-experiential justification to the concept of being justified in believing a necessary truth. Accounts by enumeration face two problems: (1) there is some controversy about which sources to include in the list, and (2) there is no guarantee that the list is complete. It is generally agreed that perception and memory should be included. Introspection, however, is problematic, for beliefs about one's conscious states and about the manner in which one is appeared to are plausibly regarded as experientially justified. Yet some, such as Pap (1958), maintain that experiments in imagination are the source of a priori justification. Even if this contention is rejected and a priori justification is characterized as justification independent of the evidence of perception, memory and introspection, it remains possible that there are other sources of justification. If it should be the case that clairvoyance, for example, is a source of justified beliefs, such beliefs would be justified a priori on the enumerative account.

The most common approach to offering a positive characterization of a priori justification is to maintain that, in the case of basic a priori propositions, understanding the proposition is sufficient to justify one in believing that it is true. This approach faces two pressing issues. What is it to understand a proposition in the manner that suffices for justification? Proponents of the approach typically distinguish understanding the words used to express a proposition from apprehending the proposition itself, and maintain that it is the latter which is relevant to a priori justification. But this move simply shifts the problem to that of specifying what it is to apprehend a proposition. Without a solution to this problem, it is difficult, if not impossible, to evaluate the account, since one cannot be sure that the requisite sense of apprehension does not justify paradigmatic a posteriori propositions as well. Even less is said about the manner in which apprehending a proposition justifies one in believing that it is true. Proponents are often content with the bald assertion that one who understands a basic a priori proposition can thereby 'see' that it is true. But what requires explanation is how understanding a proposition enables one to 'see' that it is true.

Difficulties in characterizing a priori justification in terms either of independence from experience or of its source have led some to introduce the concept of necessity into their accounts, although this appeal takes various forms. Some have employed it as a necessary condition for a priori justification, others have employed it as a sufficient condition, while still others have employed it as both. In claiming that necessity is a criterion of the a priori, Kant held that necessity is a sufficient condition for a priori justification. This claim, however, needs further clarification. There are three theses regarding the relationship between the a priori and the necessary that can be distinguished: (i) if 'p' is a necessary proposition and 'S' is justified in believing that 'p' is necessary, then 'S's' justification is a priori; (ii) if 'p' is a necessary proposition and 'S' is justified in believing that 'p' is necessarily true, then 'S's' justification is a priori; and (iii) if 'p' is a necessary proposition and 'S' is justified in believing that 'p', then 'S's' justification is a priori. For example, many proponents of the a priori contend that all knowledge of a necessary proposition is a priori. (ii) and (iii) have the shortcoming of settling by stipulation the issue of whether a posteriori knowledge of necessary propositions is possible. (i) does not have this shortcoming, since the recent examples offered by Kripke (1980) and others have been cases where it is alleged that the truth value of a necessary proposition is knowable a posteriori. (i) has the shortcoming, however, of either ruling out the possibility of being justified in believing that a proposition is necessary on the basis of testimony or else sanctioning such justification as a priori. (ii) and (iii), of course, suffer from an analogous problem.
These problems are symptomatic of a general shortcoming of the approach: it attempts to provide a sufficient condition for a priori justification solely in terms of the modal status of the proposition believed, without making reference to the manner in which it is justified. This shortcoming, however, can be avoided by incorporating necessity as a necessary but not sufficient condition for a priori justification as, for example, in Chisholm (1989). Here there are two theses that must be distinguished: (1) if 'S' is justified a priori in believing that 'p', then 'p' is necessarily true; (2) if 'S' is justified a priori in believing that 'p', then 'p' is a necessary proposition. A further problem with both (1) and (2) is that it is not clear whether they permit a priori justified beliefs about the modal status of a proposition. For they require that, in order for 'S' to be justified a priori in believing that 'p' is a necessary proposition, it must be necessary that 'p' is a necessary proposition. But the status of iterated modal propositions is controversial. Finally, (1) and (2) both preclude by stipulation the position advanced by Kripke (1980) and Kitcher (1980) that there is a priori knowledge of contingent propositions.

The concept of rational unrevisability has also been invoked to characterize a priori justification. The precise sense of rational unrevisability has been presented in different ways. Putnam (1983) takes rational unrevisability to be both a necessary and a sufficient condition for a priori justification, while Kitcher (1980) takes it to be only a necessary condition. There are also two different senses of rational unrevisability that have been associated with the a priori: (i) a proposition is weakly unrevisable just in case it is rationally unrevisable in light of any future experiential evidence, and (ii) a proposition is strongly unrevisable just in case it is rationally unrevisable in light of any future evidence. Let us consider the plausibility of requiring either form of rational unrevisability as a necessary condition for a priori justification. The view that a proposition is justified a priori only if it is strongly unrevisable entails that if a non-experiential source of justified beliefs is fallible but self-correcting, it is not an a priori source of justification. Casullo (1988) has argued that it is implausible to maintain that a proposition that is justified non-experientially is not justified a priori merely because it is revisable in light of further non-experiential evidence. The view that a proposition is justified a priori only if it is weakly unrevisable is not open to this objection, since it excludes only revision in light of experiential evidence. It does, however, face a different problem. To maintain that 'S's' justified belief that 'p' is justified a priori is to make a claim about the type of evidence that justifies 'S' in believing that 'p'. On the other hand, to maintain that 'S's' justified belief that 'p' is rationally revisable in light of experiential evidence is to make a claim about the type of evidence that can defeat 'S's' justification for believing that 'p', not a claim about the type of evidence that justifies 'S' in believing that 'p'. Hence, it has been argued by Edidin (1984) and Casullo (1988) that to hold that a belief is justified a priori only if it is weakly unrevisable is either to confuse supporting evidence with defeating evidence or to endorse some implausible thesis about the relationship between the two, such as that if evidence of kind 'A' can defeat the justification conferred on 'S's' belief that 'p' by evidence of kind 'B', then 'S's' justification for believing that 'p' is based on evidence of kind 'A'.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege, was developed in a distinctive way by the early Wittgenstein, and is a leading idea of Donald Davidson (1917-), who is also known for his rejection of the idea of a conceptual scheme, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops, so does the coherence of the idea that there is anything to translate. His papers are collected in Essays on Actions and Events (1980) and Inquiries into Truth and Interpretation (1983). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.

Wittgenstein's main achievement was a uniform theory of language that yields an explanation of logical truth. A factual sentence achieves sense by dividing the possibilities exhaustively into two groups: those that would make it true and those that would make it false. A truth of logic does not divide the possibilities but comes out true in all of them. It therefore lacks sense and says nothing, but it is not nonsense. It is a self-cancellation of sense, necessarily true because it is a tautology, the limiting case of factual discourse, like the figure '0' in mathematics. Language takes many forms, and even factual discourse does not consist entirely of sentences like 'The fork is placed to the left of the knife'. However, the first thing that he gave up was the idea that this sentence itself needed further analysis into basic sentences mentioning simple objects with no internal structure. He came to concede that a descriptive word will often get its meaning partly from its place in a system, and he applied this idea to colour-words, arguing that the essential relations between different colours do not indicate that each colour has an internal structure that needs to be taken apart. On the contrary, analysis of our colour-words would only reveal the same pattern, ranges of incompatible properties, recurring at every level, because that is how we carve up the world.

Indeed, it may even be the case that the logic of our ordinary language is created by moves that we ourselves make. If so, the philosophy of language will lead into the connexion between the meaning of a word and the applications of it that its users intend to make. There is also an obvious need for people to understand each other's meanings of their words. There are many links between the philosophy of language and the philosophy of mind, and it is not surprising that the impersonal examination of language in the Tractatus was replaced by a very different, anthropocentric treatment in the Philosophical Investigations.

If the logic of our language is created by moves that we ourselves make, various kinds of realism are threatened. First, the way in which our descriptive language carves up the world will not be forced on us by the natures of things, and the rules for the application of our words, which feel like external constraints, will really come from within us. That is a concession to nominalism that is, perhaps, readily made. The idea that logical and mathematical necessity is also generated by moves that we ourselves make is more paradoxical. Yet that is the conclusion of Wittgenstein (1956) and (1976), and here his anthropocentrism has carried less conviction. However, a paradox is not a sure sign of error, and it is possible that what is needed here is a more sophisticated concept of objectivity than Platonism provides.

In his later work Wittgenstein brings the great problems of philosophy down to earth and traces them to very ordinary origins. His examination of the concept of following a rule takes him back to a fundamental question about counting things and sorting them into types: What qualifies as doing the same again? Of course, many find this question inconsequential rather than fundamental and would suggest that we forget it and get on with the subject. But Wittgenstein's question is not so easily dismissed. It has the naive profundity of questions that children ask when they are first taught a new subject. Such questions remain unanswered without detriment to their learning, but they point the only way to complete understanding of what is learned.

It is a platitude that the meaning of a complex expression is a function of the meanings of its constituents; indeed, this is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms (proper names, indexicals, and certain pronouns) this is done by stating the reference of the term in question.

The truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

Meaning is whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world. Contributions to this study include the theory of speech acts and the investigation of communication, of the relationship between words and ideas, and of the relationship between words and the world. An older picture carried a general bias towards the sensory, in that what lies in the mind was thought of as something like images, together with a belief that thinking is well explained as the manipulation of images; this gave way to the understanding that ideas need to be thought of more in terms of rules and organizing principles than as any kind of copy of what is given in experience.

It has become more common to think of ideas, or concepts, as dependent upon social and especially linguistic structures than as the self-standing creations of an individual mind, but the tension between the objective and the subjective aspects of the matter lingers on, for instance in debates about the possibility of objective knowledge, of 'indeterminacy' in translation, and of identity between the thoughts people entertain at one time and those that they entertain at another.

Apparent facts to be explained about the distinction between knowing things and knowing about things are these. Knowledge about things is essentially propositional knowledge, where the mental states involved refer to specific things; this propositional knowledge can be more or less complete, can be justified inferentially and on the basis of experience, and can be communicated. Knowing things, on the other hand, involves experience of things. This experiential knowledge provides an epistemic basis for knowledge about things, and in some sense is difficult or impossible to communicate, perhaps because it is more or less vague; it is, as in knowing by vicariously living through something, a sort of knowledge by acquaintance that amounts to knowing what an experience is like.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. Some causal theories of knowledge have it that a true belief that p is knowledge just in case the belief has the right sort of causal connection to the fact that p. Such a criterion can be applied only to cases where the fact that p is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

The determinable/determinate contrast relates the more general (colour) to the more specific (red). It was originally introduced by W. E. Johnson, and in one kind of usage the contrast differs from that of genus to species, in that the specific difference identifying a determinate is itself a specification of the determinable. Thus, what differentiates red from blue is just colour, whereas many different properties may differentiate a member of one species, for instance of animals, from those of another.

Determinism is the doctrine that every event has a cause. The usual explanation of this is that for every event there is some antecedent state, related in such a way that it would break a law of nature for this antecedent state to exist yet the event not to happen. This is a purely metaphysical claim, and carries no implications for whether we can in principle predict the event. The main interest in determinism has been in assessing its implications for free will; however, quantum physics is essentially indeterministic, yet the view that our actions are subject to quantum indeterminacy hardly encourages a sense of our own responsibility for them. It is often supposed that if an action is the end of a causal chain, i.e., determined, and the causes stretch back in time to events for which the agent is not conceivably responsible, then the agent is not responsible for the action. The dilemma adds that if an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent event brought it about, and in that case nobody is responsible for its occurrence either; so whether or not determinism is true, responsibility is shown to be illusory.

The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom:

'London' refers to the city in which there was a huge fire in 1666

is a true statement about the reference of 'London'. It is a consequence of a theory that substitutes this axiom for A1 in our simple truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name 'London' without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraints on the acceptability of axioms in a way that does not presuppose a prior, non-truth-conditional conception of meaning.
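A toy model may make the point vivid. The following sketch (the dictionaries, names and extensions are illustrative assumptions, not drawn from the text) shows why a theory using the fire-description axiom derives the correct truth value for 'London is beautiful' even though it mis-specifies what a speaker who understands the name thereby knows:

```python
# A hypothetical miniature truth theory. Reference axioms assign referents to
# singular terms; extension axioms assign sets of objects to predicates. The
# theory derives truth conditions for atomic sentences compositionally.

reference = {
    "London": "London",
    # A co-referential description: true of the same city, but not
    # meaning-specifying for the name 'London'.
    "the city in which there was a huge fire in 1666": "London",
}

extension = {
    "is beautiful": {"London", "Paris"},  # illustrative extension
}

def true_in_theory(term, predicate):
    """An atomic sentence 'T is P' is true iff the referent of T
    falls in the extension of P."""
    return reference[term] in extension[predicate]

# Both axioms assign the same referent, so both derive the same truth value...
assert (true_in_theory("London", "is beautiful")
        == true_in_theory("the city in which there was a huge fire in 1666",
                          "is beautiful"))
# ...yet only the first axiom states what a subject who understands
# the name 'London' thereby knows.
```

The sketch illustrates why sameness of derived truth values is too weak a constraint: the acceptability of an axiom must answer to what speakers understand, not merely to reference.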

Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person's language to be correctly described by a semantic theory containing a given semantic axiom.

We can take the charge of triviality first. In more detail, it would run thus: since the content of a claim that the sentence 'Paris is beautiful' is true is no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than a grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory that, somewhat more discriminatingly, Horwich calls the minimal theory of truth, or the deflationary view of truth, as fathered by Frege and Ramsey. The essential claim is that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence, redundancy), and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said or the kinds of propositions that follow from true propositions. For example, the second generalization may translate as (∀p)(∀q)((p & (p → q)) → q), where there is no use of a notion of truth.
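The deflationary translation of the second generalization can be set out explicitly (the propositional-quantifier notation is assumed here for illustration, not taken from the text):

```latex
% 'All logical consequences of true propositions are true', stated
% with quantification into sentence position and no truth predicate:
\forall p\,\forall q\,\bigl((p \wedge (p \rightarrow q)) \rightarrow q\bigr)
```

The point of the display is that the word 'true' has disappeared: the truth predicate served only to permit the generalization, which propositional quantifiers express directly.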

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited objective conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when 'not-p'.

The disquotational theory of truth, in its simplest formulation, is the claim that expressions of the form ''S' is true' mean the same as expressions of the form 'S'. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ''Dogs bark' is true' or whether they say that dogs bark. In the former representation of what they say, the sentence 'Dogs bark' is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it, someone might know that 'Dogs bark' is true without knowing what it means, for instance if one were to find it in a list of acknowledged truths although one does not understand English, and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the redundancy theory of truth.

The minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning (Davidson, 1990; Dummett, 1959; Horwich, 1990). If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson, Horwich and (confusingly and inconsistently, if the theory is correct) Frege himself. But is the minimal theory correct?

The minimal or redundancy theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as ''London is beautiful' is true if and only if London is beautiful' can be explained are precisely that 'London' refers to London and that an object satisfies 'is beautiful' if and only if it is beautiful. This would be a pseudo-explanation if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'. The idea that facts about the reference of particular words can be explanatory of facts about the truth conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something that is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression's having the particular reference it does to be partially explanatory of the particular truth condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something that is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth that has, among the many links that hold it in place, systematic connections with the semantic values of sub-sentential expressions.

A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth that go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic, be it utterances, types-in-a-language, or whatever, then the equivalence schema will not cover all cases, but only those in the theorist's own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The defender of the minimal theory is likely to say that if a sentence 'S' of a foreign language is best translated by our sentence 'p', then the foreign sentence 'S' is true if and only if 'p'. Now the best translation of a sentence must preserve the concepts expressed in the sentence. Constraints involving a general notion of truth are pervasive in a plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what is called a Determination Theory for that account, that is, a specification of how the account contributes to fixing the semantic value of that concept; the notion of a concept's semantic value is the notion of something that makes a certain contribution to the truth conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.

It is also plausible that there are general constraints on the form of such Determination Theories, constraints that involve truth and that are not derivable from the minimalist's conception. Suppose that concepts are individuated by their possession conditions. A concept is something that is capable of being a constituent of contents: a way of thinking of something, whether a particular object, or a property, or a relation, or another entity.

One such plausible general constraint is the requirement that when a thinker forms beliefs involving a concept in accordance with its possession condition, a semantic value is assigned to the concept in such a way that the beliefs are true. Some general principles involving truth can indeed, as Horwich has emphasized, be derived from the equivalence schema using minimal logical apparatus. Consider, for instance, the principle that 'Paris is beautiful and London is beautiful' is true if and only if 'Paris is beautiful' is true and 'London is beautiful' is true. This follows logically from three instances of the equivalence principle: ''Paris is beautiful and London is beautiful' is true if and only if Paris is beautiful and London is beautiful'; ''Paris is beautiful' is true if and only if Paris is beautiful'; and ''London is beautiful' is true if and only if London is beautiful'. But no logical manipulation of the equivalence schema will allow the derivation of the general constraint governing possession conditions, truth and the assignment of semantic values. That constraint can of course be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
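Writing T(x) for ''x' is true' and corner quotes for quotation, the derivation just described can be displayed schematically (the notation is assumed here for illustration):

```latex
% Three instances of the equivalence schema:
\begin{align*}
&\text{(1)}\quad T(\ulcorner A \text{ and } B \urcorner) \leftrightarrow (A \wedge B)\\
&\text{(2)}\quad T(\ulcorner A \urcorner) \leftrightarrow A\\
&\text{(3)}\quad T(\ulcorner B \urcorner) \leftrightarrow B\\
% Chaining (1)-(3) by propositional logic yields the compositional principle:
&\text{hence}\quad T(\ulcorner A \text{ and } B \urcorner)
  \leftrightarrow \bigl(T(\ulcorner A \urcorner) \wedge T(\ulcorner B \urcorner)\bigr)
\end{align*}
```

The display makes plain what the minimalist can and cannot obtain: the compositional biconditional follows by logic alone, but the constraint linking possession conditions to the assignment of truth-making semantic values does not.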

We now turn to the other question: what is it for a person's language to be correctly described by a semantic theory containing a particular axiom, such as the axiom A6 above for conjunction? This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person's possession of the concept of conjunction, and be concerned with what has to be true for the axiom correctly to describe his language. At a deeper level, an answer should not duck the issue of what it is to possess the concept. The answers to both questions are of great interest; we will take the shallower level of generality first.

When a person means conjunction by 'and', he is not necessarily capable of formulating the axiom explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word 'and' as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences he utters containing the word 'and'. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the past thirteen years, thanks particularly to the work of Davies and Evans, a conception has evolved according to which an axiom is true of a person's language only if there is a common component in the explanation of his understanding of each sentence containing the word 'and', a common component that explains why each such sentence is understood as meaning something involving conjunction (Davies, 1987). This conception can also be elaborated in computational terms: for an axiom to be true of a person's language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form 'A and B' is true if and only if 'A' is true and 'B' is true (Peacocke, 1986). Many different algorithms may equally draw on this information. The psychological reality of a semantic theory thus involves, in Marr's (1982) famous classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed.
This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories. Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms that the language user employs. The identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them, for these linguistic theories do make commitments to the information drawn upon by mechanisms in the language user.
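The point that many algorithms may draw on the same semantic information can be illustrated with a sketch (the function names and the miniature sentence format are assumptions made for illustration, not part of the text): two different procedures compute the same truth function for 'A and B' sentences, so the axiom fixes what Marr would call the function computed, not the algorithm.

```python
# Two algorithms, one function: both draw on the information that a sentence
# of the form 'A and B' is true iff 'A' is true and 'B' is true.

def eval_recursive(sentence, valuation):
    """Recursive descent: split at the first ' and ', evaluate each side."""
    if " and " in sentence:
        left, right = sentence.split(" and ", 1)
        return eval_recursive(left, valuation) and eval_recursive(right, valuation)
    return valuation[sentence]  # atomic sentence: look up its truth value

def eval_iterative(sentence, valuation):
    """Iterative scan: a conjunction is true iff every conjunct is true."""
    return all(valuation[part] for part in sentence.split(" and "))

# Illustrative valuation of atomic sentences.
v = {"Paris is beautiful": True, "London is beautiful": False}
s = "Paris is beautiful and London is beautiful"

# Different algorithms, same computed function.
assert eval_recursive(s, v) == eval_iterative(s, v) == False
```

Psychological reality, on the conception described above, requires only that whatever mechanism the speaker uses draw on the conjunction information both procedures encode; it does not decide between them.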

This answer to the question of what it is for an axiom to be true of a person's language clearly takes for granted the person's possession of the concept expressed by the word treated by the axiom. In the example of the axiom A6, the information drawn upon is that sentences of the form 'A and B' are true if and only if 'A' is true and 'B' is true. This informational content employs, as it must if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing 'and'. So the computational answer we have returned needs further elaboration if we are to address the deeper question, which does not take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of concepts. It is plausible that the concept of conjunction is individuated by the following condition for a thinker to possess it.

Finally, this response to the deeper question allows us to answer two challenges to the conception of meaning as truth-conditions. First, there was the question left hanging earlier, of how the theorist of truth-conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person's language will be different accounts. Second, there is the challenge, repeatedly made by minimalist theorists of truth, that the theorist of meaning as truth-conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand any sentence containing that expression. The combined accounts for each of the expressions that comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth-conditions fully to meet the challenge.

Content is what is expressed by an utterance or sentence: the proposition or claim made about the world. By extension, the content of a predicate or other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is a central concern of the philosophy of language, and mental states also have contents: a belief may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents.

Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Mary Smith, or as the person located in a certain room now. More generally, a concept 'C' is distinct from a concept 'D' if it is possible for a person rationally to believe that something is 'C' without believing that it is 'D'. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by 'that . . .' clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.

The general system of concepts with which we organize our thoughts and perceptions constitutes a conceptual scheme, of which the outstanding elements include spatial and temporal relations between events and enduring objects, causal relations, other persons, meaning-bearing utterances of others, and so on. To see the world as containing such things is to share this much of our conceptual scheme. A controversial argument of Davidson's urges that we would be unable to interpret speech from a different conceptual scheme as even meaningful; Davidson daringly goes on to argue that since translation proceeds according to a principle of charity, and since it must be possible for an omniscient translator to make sense of us, we can be assured that most of the beliefs formed within the commonsense conceptual framework are true.

Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; none the less, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen's Pictures, was a spy. We can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person's conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not that would be correct, it is quite intelligible for someone to reject this conception by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.

Basically, a concept is that which is understood by a term, particularly a predicate. To possess a concept is to be able to deploy a term expressing it in making judgements; this ability connects with such things as recognizing when the term applies and being able to understand the consequences of its application. The term 'idea' was formerly used in the same way, but is avoided because of its associations with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term; yet some such notion is needed to explain how a sentence forms a unity rather than a mere list of names.

A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy are open to the accusation of not having fully respected the distinction between these kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought 'I think', containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of a theory of concepts and a theory of objects are distinct, each theory is required to have an adequate account of its relation to the other. A theory of a concept is unacceptable if it gives no account of how the concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.

A fundamental question for philosophy is: what individuates a given concept, that is, what makes it the one it is rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question (Schiffer, 1987). An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose content contains it as a constituent. So, to take a simple case, one could propose that the logical concept 'and' is individuated by this condition: it is the unique concept 'C' such that, to possess it, a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses 'A' and 'B', 'A C B' can be inferred; and from any premiss 'A C B', each of 'A' and 'B' can be inferred. Again, a relatively observational concept such as 'round' can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept that are not based on perception to those judgements that are. A statement that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
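The forms of inference the possession condition cites are just the familiar introduction and elimination rules for conjunction, which can be set out in standard natural-deduction notation (the notation is assumed here for illustration):

```latex
% Introduction: from A and B, infer their conjunction.
% Elimination: from a conjunction, infer either conjunct.
\frac{A \qquad B}{A \wedge B}\;(\wedge\mathrm{I})
\qquad
\frac{A \wedge B}{A}\;(\wedge\mathrm{E}_1)
\qquad
\frac{A \wedge B}{B}\;(\wedge\mathrm{E}_2)
```

On the proposal in the text, a thinker possesses the concept of conjunction just in case he finds inferences of these three forms primitively compelling.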

A possession condition for a particular concept may actually make use of that concept. The possession condition for 'and' does so. We can also expect to use relatively observational concepts in specifying the kinds of experiences that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question as such within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker's mastery of a concept is shown in how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: It is not possible to master any one of the members of the family without mastering the others. Two of the families that plausibly have this status are these: The family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers there are 0 so-and-so's, there is 1 so-and-so, . . . , and the family consisting of the concepts belief and desire. Such families exhibit what has come to be known as local holism. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: Belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated. The possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.

A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though a thinker's non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker's social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker's social relations, in particular his linguistic relations.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker's reasons for making judgements. A thinker's visual perception can give him good reason for judging That man is bald: It does not by itself give him good reason for judging Rostropovich is bald, even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for the concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . .) which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker's previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property that in fact makes the judgemental practices mentioned in the possession condition yield true judgements, or truth-preserving inferences.

A venerable distinction is associated with Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are all either explicit identities, i.e., of the form 'A' is 'A', 'AB' is 'B', etc., or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them truths of reason because the explicit identities are self-evident truths, whereas the rest can be converted to such by purely rational operations. Because their denial involves a demonstrable contradiction, Leibniz also says that truths of reason rest on the principle of contradiction, or identity, and that they are necessary propositions, which are true of all possible worlds. Some examples are All equilateral rectangles are rectangles and All bachelors are unmarried: The first is already of the form 'AB' is 'B' and the latter can be reduced to this form by substituting unmarried man for bachelor. Other examples, or so Leibniz believes, are God exists and the truths of logic, arithmetic and geometry.
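The reduction procedure Leibniz has in mind can be displayed for the bachelor example, using the equivalence bachelor = unmarried man:

```latex
\begin{align*}
&\text{All bachelors are unmarried}\\
\Rightarrow\ &\text{All unmarried men are unmarried}
\quad\text{(substituting \emph{unmarried man} for \emph{bachelor})}
\end{align*}
```

The result exhibits the identity form, since the predicate concept (unmarried) is now an explicit conjunct of the subject concept (unmarried man), so its denial is a demonstrable contradiction.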

Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is empirically, by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: They could have been otherwise and hold of the actual world, but not of every possible one. Some examples are Caesar crossed the Rubicon and Leibniz was born in Leipzig, as well as propositions expressing correct scientific generalizations. In Leibniz's view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason that it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.

In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every true proposition, the concept of the predicate is contained in that of the subject. (This holds even for propositions like Caesar crossed the Rubicon: Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar.) And this containment relationship, which is eternal and unalterable even by God, guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and hence necessary, and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite number of steps; in some instances revealing the connexion between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot demonstrate such propositions, it does not appear to show that they could have been false. Intuitively, it seems a better ground for supposing that each is a necessary truth of a special sort. A related question arises from the idea that truths of fact depend on God's decision to create the best of all possible worlds: If it is part of the concept of this world that it is best, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God's decision to create this world, but God had the power to decide otherwise. Yet God is necessarily good and non-deceiving, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.

Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things that may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth is convention. Thus A.J. Ayer, for example, argued that the only necessary truths are analytic statements and that the latter rest entirely on our commitment to use words in certain ways.

The slogan the meaning of a statement is its method of verification expresses the verification theory of meaning. It is more than the general criterion of meaningfulness: that a sentence is meaningful if and only if it is empirically verifiable. It says in addition what the meaning of a sentence is: It is all those observations that would confirm or disconfirm the sentence. Sentences that would be verified or falsified by all the same observations are empirically equivalent, that is, have the same meaning. A sentence is said to be cognitively meaningful if and only if it can be verified or falsified in experience. This is not meant to require that the sentence be conclusively verified or falsified, since universal scientific laws or hypotheses (which are supposed to pass the test) are not logically deducible from any amount of actually observed evidence.

When one predicates necessary truth of a proposition one speaks of modality de dicto. For one ascribes the modal property, necessary truth, to a dictum, namely, whatever proposition is taken as necessary. A venerable tradition, however, distinguishes this from necessity de re, wherein one predicates necessary or essential possession of some property of an object. For example, the statement '4' is necessarily greater than '2' might be used to predicate of the object, '4', the property, being necessarily greater than '2'. That objects have some of their properties necessarily, or essentially, and others only contingently, or accidentally, is a main part of the doctrine called essentialism. Thus, an essentialist might say that Socrates had the property of being bald accidentally, but that of being self-identical, or perhaps of being human, essentially. Although essentialism has been vigorously attacked in recent years, most particularly by Quine, it also has able contemporary proponents, such as Plantinga.

Many philosophers have traditionally held that every proposition has a modal status as well as a truth value. Every proposition is either necessary or contingent, as well as either true or false. The issue of knowledge of the modal status of propositions has received much attention because of its intimate relationship to the issue of a priori knowledge. For example, some have contended that all knowledge of necessary propositions is a priori. Others reject this claim by citing Kripke's (1980) alleged cases of necessary a posteriori propositions. Such contentions are often inconclusive, for they fail to take into account the following tripartite distinction: 'S' knows the general modal status of 'p' just in case 'S' knows that 'p' is a necessary proposition or 'S' knows that 'p' is a contingent proposition. 'S' knows the truth value of 'p' just in case 'S' knows that 'p' is true or 'S' knows that 'p' is false. 'S' knows the specific modal status of 'p' just in case 'S' knows that 'p' is necessarily true or 'S' knows that 'p' is necessarily false or 'S' knows that 'p' is contingently true or 'S' knows that 'p' is contingently false. It does not follow from the fact that knowledge of the general modal status of a proposition is a priori that knowledge of its specific modal status is also a priori. Nor does it follow from the fact that knowledge of the specific modal status of a proposition is a posteriori that knowledge of its general modal status is also a posteriori.


The modality of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things are: Necessary as opposed to contingent propositions. Other qualifiers sometimes called modal include the tense indicators It will be the case that 'p' and It was the case that 'p', and there are affinities between the deontic indicators, It ought to be the case that 'p' and It is permissible that 'p', and the logical modalities of necessity and possibility. Modal logic, which studies these notions, was of great importance historically, particularly in the light of various doctrines concerning the necessary properties of the deity, but it was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by C.I. Lewis, by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written N and M), meaning necessarily and possibly, respectively. These allow the formulation of principles such as □p → p, if a proposition is necessary then it is true, and p → ◊p, if a proposition is true then it is possible. The classical model theory for modal logic, due to Kripke and the Swedish logician Stig Kanger, involves valuing propositions not as true or false simpliciter, but as true or false at possible worlds, with necessity then corresponding to truth in all worlds, and possibility to truth in some world.
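In the Kripke-Kanger model theory just described, the truth clauses for the two operators at a world w, relative to an accessibility relation R between worlds, are standardly written:

```latex
\begin{align*}
w \models \Box p \quad &\text{iff} \quad p \text{ is true at every world } w' \text{ such that } wRw',\\
w \models \Diamond p \quad &\text{iff} \quad p \text{ is true at some world } w' \text{ such that } wRw'.
\end{align*}
```

In the simplest systems, where every world is accessible from every world, these clauses yield exactly the correspondence stated in the text: necessity is truth in all worlds, possibility is truth in some world.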

The doctrine advocated by David Lewis is that different possible worlds are to be thought of as existing exactly as this one does: Thinking in terms of possibilities is thinking of real worlds where things are different. This view has been charged with misrepresenting morality, since on it there seems to be no seeing why it is good to save the child from drowning, given that there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the notion fails to fit with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denies that any other way of interpreting modal statements is tenable.

Knowledge and belief: According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurately if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956) or conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato (429-347 BC), in view of his claim that knowledge is infallible while belief or opinion is fallible (Republic 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say I do not believe she is guilty, I know she is, and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying I do not just believe she is guilty, I know she is, where just makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: You did not hurt him, you killed him.

H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it might be accompanied by confidence as well. Woozley remarks that the test of whether I know something is what I can do, where what I can do may include answering questions. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, I am unsure whether my answer is true: Still, I know it is correct. But this tension Woozley explains using a distinction between conditions under which we are justified in making a claim (such as a claim to know something), and conditions under which the claim we make is true. While I know such and such might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years before, and yet he is able to give several correct responses to questions such as When did the Battle of Hastings occur? Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066. He would likewise deny being confident (or having the right to be confident) that 1066 was the correct date. Radford would none the less insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is intentionally misleading.

Those who agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066. Armstrong will grant Radford that point; in fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and subsequently guessed that it took place in 1066, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): For no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is in a similar position. Even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge that is unattended by belief, for Jean lacks the knowledge as well. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and so he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.

Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception. (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of sensible qualities: Colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain.
Calling such supposed items names like sense-data or percepts exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connexion between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.

Further, perceptual knowledge is knowledge acquired by or through the senses, and it includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something (that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up) by some sensory means. Seeing that the light has turned green is learning something (that the light has turned green) by use of the eyes. Feeling that the melon is overripe is coming to know a fact (that the melon is overripe) by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspapers, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one can see, and hence come to know, something about the gauge (what it says), one cannot come to know, in this way at least, the fact one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot, in this way at least, hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition, 'b's' being 'G', obtains. When this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.

Perhaps a better strategy is to turn to explanatory knowledge. Since at least the time of Aristotle, philosophers have emphasized its importance: in its simplest terms, we want to know not only what is the case but also why it is. This consideration suggests that we define an explanation as an answer to a why-question. Such a definition would, however, be too broad, because some why-questions are requests for consolation (Why did my son have to die?) or for moral justification (Why should women not be paid the same as men for the same work?). It would also be too narrow, because some explanations are responses to how-questions (How does radar work?) or how-possibly questions (How is it possible for cats always to land on their feet?).

In its most general sense, 'to explain' means to make clear, to make plain, or to provide understanding. Definitions of this sort are philosophically unhelpful, for the terms used in the definiens are no less problematic than the term to be defined. Moreover, since a wide variety of things require explanation, and since many different types of explanation exist, a more complex account is required. To facilitate matters, let us introduce a bit of technical terminology. The term 'explanandum' is used to refer to that which is to be explained; the term 'explanans' refers to that which does the explaining. The explanans and the explanandum taken together constitute the explanation.

One common type of explanation occurs when deliberate human actions are explained in terms of conscious purposes. 'Why did you go to the pharmacy yesterday?' 'Because I had a headache and needed to get some aspirin.' It is tacitly assumed that aspirin is an appropriate medication for headaches and that going to the pharmacy would be an efficient way of getting some. Such explanations are, of course, teleological, referring, as they do, to goals. The explanans is not the realization of a future goal: if the pharmacy had happened to be closed for stocktaking, the aspirin would not have been obtained there, but that would not invalidate the explanation. Some philosophers would say that the antecedent desire to achieve the end is what does the explaining; others might say that the explaining is done by the nature of the goal and the fact that the action promoted the chances of realizing it (Taylor, 1964). It should not automatically be assumed that such explanations are causal. Philosophers differ considerably on whether these explanations are to be framed in terms of causes or reasons, but the distinction cannot by itself be used to show that the relation between reasons and the actions they justify is in no way causal, and there are many differing analyses of such concepts as intention and agency. Expanding the domain beyond consciousness, Freud maintained, in addition, that much human behaviour can be explained in terms of unconscious as well as conscious wishes. These Freudian explanations should probably be construed as basically causal.

Problems arise when teleological explanations are offered in other contexts. The behaviour of non-human animals is often explained in terms of purposes, e.g., the mouse ran to escape from the cat. In such cases the existence of conscious purpose seems dubious. The situation is still more problematic when a supra-empirical purpose is invoked, e.g., in explanations of living species in terms of God's purposes, or in vitalistic explanations of biological phenomena in terms of an entelechy or vital principle. In recent years an 'anthropic principle' has received attention in cosmology (Barrow and Tipler, 1986). All such explanations have been condemned by many philosophers as anthropomorphic.

Nevertheless, philosophers and scientists often maintain that functional explanations play an important and legitimate role in various sciences such as evolutionary biology, anthropology and sociology. For example, in the case of the peppered moth in Liverpool, the change in colour from the light phase to the dark phase and back again to the light phase provided adaptation to a changing environment and fulfilled the function of reducing predation on the species. In the study of primitive societies, anthropologists have maintained that various rituals (e.g., the rain dance), which may be inefficacious in bringing about their manifest goals (producing rain), actually fulfil the function of promoting social cohesion at a period of stress (often a drought). Philosophers who admit teleological and/or functional explanations in common sense and science often take pains to argue that such explanations can be analysed entirely in terms of efficient causes, thereby escaping the charge of anthropomorphism (Wright, 1976); again, however, not all philosophers agree.

Mainly to avoid the incursion of unwanted theology, metaphysics or anthropomorphism into science, many philosophers and scientists, especially during the first half of the twentieth century, held that science provides only descriptions and predictions of natural phenomena, but not explanations. Subsequently, however, a series of influential philosophers of science, including Karl Popper (1935), Carl Hempel and Paul Oppenheim (1948) and Hempel (1965), maintained that empirical science can explain natural phenomena without appealing to metaphysics or theology. It appears that this view is now accepted by the vast majority of philosophers of science, though there is sharp disagreement on the nature of scientific explanation.

Nevertheless, one important variety of reliability theory is the conclusive reasons account, which includes a requirement that one's reasons for believing that 'h' be such that, in one's circumstances, if 'h' were not the case then, e.g., one would not have the reasons one does for believing that 'h', or, e.g., one would not believe that 'h'. Roughly, the latter is demanded by theories that treat a knower as 'tracking the truth', theories that include the further demand that, roughly, if it were the case that 'h', then one would believe that 'h'. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a 'method' has been used to arrive at the belief that 'h', then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.

But unless more conditions are added to Nozick's analysis, it will be too weak to explain why one lacks knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot's compulsion is not easily changed; (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one; and (c) one arrives at one's belief that 'h' not by reasoning through a false belief but by basing belief that 'h' upon a true existential generalization of one's evidence.

Nozick's analysis is in addition too strong to permit anyone ever to know that 'h5': 'Some of my beliefs about beliefs might be otherwise, e.g., I might have rejected one of them'. If I know that 'h5', then satisfaction of the antecedent of one of Nozick's conditionals would involve its being false that 'h5', thereby thwarting satisfaction of the consequent's requirement that I not then believe that 'h5'. For the belief that 'h5' is itself one of my beliefs about beliefs (Shope, 1984).

Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge, construed as an even broader category. They have proposed various examples of propositional knowledge that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often accommodated by analyses of propositional knowledge in terms of powers, capacities or abilities. For instance, Alan R. White (1982) treats propositional knowledge as merely the ability to provide a correct answer to a possible question. However, White may be equating 'producing' knowledge in the sense of producing 'the correct answer to a possible question' with 'displaying' knowledge in the sense of manifesting knowledge (White, 1982). The latter can be done even by very young children and some non-human animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that 'h' without believing or accepting that 'h' can be modified so as to illustrate this point. The example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horse races. If the example is modified so that the hypothetical 'seer' never picks winners but only muses over whether those horses might win, or only dreams of those horses winning, this behaviour should be as much of a candidate for the person's manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.

These considerations expose limitations in Edward Craig's analysis (1990) of the concept of knowing in terms of a person's being a satisfactory informant in relation to an inquirer who wants to find out whether or not 'h'. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried 'Wolf'). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one's having the power to proceed in a way that represents a state of affairs which is causally involved in one's proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.

According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurately if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956), conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato (c. 429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible ('Republic' 476-9). But this claim alone would not support the thesis: belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say 'I do not believe she is guilty. I know she is' and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying 'I do not just believe she is guilty, I know she is', where 'just' makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: 'You did not hurt him, you killed him'.

H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief never involve confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it might be accompanied by confidence as well. Woozley remarks that the test of whether I know something is 'what I can do, where what I can do may include answering questions'. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, 'I am unsure whether my answer is true; still, I know it is correct'. But this tension Woozley explains using a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While 'I know such and such' might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as 'When did the Battle of Hastings occur?' Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066; similarly, he would deny being sure (or having any right to be sure) that 1066 was the correct date. Radford would none the less insist that Jean knows when the Battle occurred, since clearly he remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is 'intentionally misleading'.

Those who agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Armstrong will grant Radford the point that Jean does know that the Battle of Hastings took place in 1066; in fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and subsequently 'guessed' that it took place in 1066, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorensen, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): For no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is in a similar position. Even if Jean has the belief that Radford denies him, Radford does not have an example of knowledge that is unattended with belief. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. Since, as Radford says, Jean has every reason to suppose that his responses are mere guesswork, he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.

Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of 'sensible qualities': colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain.
Calling such supposed items names like 'sense-data' or 'percepts' exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.


The foregoing approach, developed by Hempel, Popper and others, became virtually a 'received view' in the 1960s and 1970s. According to this view, to give a scientific explanation of any natural phenomenon is to show how this phenomenon can be subsumed under a law of nature. A particular rupture in a water pipe can be explained by citing the universal law that water expands when it freezes and the fact that the temperature of the water in the pipe dropped below the freezing point. General laws, as well as particular facts, can be explained by subsumption; the law of conservation of linear momentum can be explained by derivation from Newton's second and third laws of motion. Each of these explanations is a deductive argument: the explanans contains one or more statements of universal laws and, in many cases, statements describing initial conditions. This pattern of explanation is known as the deductive-nomological (D-N) model. Any such argument shows that the explanandum had to occur given the explanans.

Many, though not all, adherents of the received view allow for explanation by subsumption under statistical laws. Hempel (1965) offers as an example the case of a man who recovered quickly from a streptococcus infection as a result of treatment with penicillin. Although not all strep infections clear up quickly under this treatment, the probability of recovery in such cases is high, and this is sufficient for legitimate explanation according to Hempel. This example conforms to the inductive-statistical (I-S) model. Such explanations are viewed as arguments, but they are inductive rather than deductive. In these instances the explanans confers high inductive probability on the explanandum. An explanation of a particular fact satisfying either the D-N or I-S model is an argument to the effect that the fact in question was to be expected by virtue of the explanans.

The received view has been subjected to strenuous criticism by adherents of the causal/mechanical approach to scientific explanation (Salmon, 1990). Many objections to the received view were engendered by the absence of causal constraints (due largely to worries about Hume's critique) on the D-N and I-S models. Beginning in the late 1950s, Michael Scriven advanced serious counterexamples to Hempel's models; he was followed in the 1960s by Wesley Salmon and in the 1970s by Peter Railton. According to the causal/mechanical view, one explains phenomena by identifying their causes (a death is explained as resulting from a massive cerebral haemorrhage) or by exposing underlying mechanisms (the behaviour of a gas is explained in terms of the motions of its constituent molecules).

A unification approach to explanation rests on the basic idea that we understand our world more adequately to the extent that we can reduce the number of independent assumptions we must introduce to account for what goes on in it. Accordingly, we understand phenomena to the degree that we can fit them into an overall world picture or Weltanschauung. In order to serve in scientific explanation, the world picture must be scientifically well founded.

During the past half-century much philosophical attention has been focused on explanation in science and in history. Considerable controversy has surrounded the question of whether historical explanation must be scientific, or whether history requires explanations of a different type. Many diverse views have been articulated; the foregoing brief survey does not exhaust the variety (Salmon, 1990).

In everyday life we encounter many types of explanation, which appear not to raise philosophical difficulties, in addition to those already mentioned. Prior to take-off a flight attendant explains how to use the safety equipment on the aeroplane. In a museum the guide explains the significance of a famous painting. A mathematics teacher explains a geometrical proof to a bewildered student. A newspaper story explains how a prisoner escaped. Additional examples come easily to mind; the main point is to remember the great variety of contexts in which explanations are sought and given.

Another item of importance to epistemology is the widely held notion that non-demonstrative inferences can be characterized as inference to the best explanation. Given the variety of views on the nature of explanation, this popular slogan can hardly provide a useful philosophical analysis.

Early versions of defeasibility theories had difficulty allowing for the existence of evidence that was ‘merely misleading', as in the case where one does know that h3: ‘Tom Grabit stole a book from the library', thanks to having seen him steal it, yet where, unbeknown to oneself, Tom's demented mother has testified that Tom was far away from the library at the time of the theft. One's justifiably believing that she gave the testimony would destroy one's justification for believing that h3 if added by itself to one's present evidence.

At least some defeasibility theories cannot deal with the knowledge one has while dying that h4: ‘In this life there is no time at which I believe that d', where the proposition that ‘d' expresses the details regarding some matter, e.g., the maximum number of blades of grass ever simultaneously growing on the earth. When it just so happens that ‘d' is true, defeasibility analyses typically treat the addition to one's dying thoughts of a belief that ‘d' in such a way as improperly to rule out actual knowledge that h4.

A quite different approach to knowledge, and one able to deal with some Gettier-type cases, involves developing some type of causal theory of propositional knowledge. The thesis that counts as a causal theory of justification (in the meaning of ‘causal theory' intended here) is that a belief is justified just in case it was produced by a type of process that is ‘globally' reliable, that is, one whose propensity to produce true beliefs, which can be defined (to a good enough approximation) as the proportion of the beliefs it produces (or would produce were it used as much as opportunity allows) that are true, is sufficiently great. Variants of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief is knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S' knows that ‘p' just in case it is not at all accidental that ‘S' is right about its being the case that ‘p'. D.M. Armstrong (1973) said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature.
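The notion of global reliability invoked here can be put, as a first approximation, as a simple proportion; the following is merely a schematic restatement of the verbal definition above, not a formula drawn from any particular reliabilist:

```latex
\[
\mathrm{Reliability}(P) \;\approx\;
\frac{\text{number of true beliefs that process type } P \text{ produces (or would produce)}}
     {\text{total number of beliefs that } P \text{ produces (or would produce)}}
\]
```

On this view, a belief is justified just in case it was produced by a process type $P$ for which this ratio is sufficiently high.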

Such theories require that one or another specified relation, characterizable by mention of some aspect of causation, hold between one's belief that ‘h' (or one's acceptance of the proposition that ‘h') and the state of affairs ‘h*', e.g., that ‘h*' causes the belief, that ‘h*' is causally sufficient for the belief, or that ‘h*' and the belief have a common cause. Such simple versions of a causal theory are able to deal with the original Notgot case, since it involves no such causal relationship, but they cannot explain why there is ignorance in the variants of the case. Fred Dretske and Berent Enç (1984) have pointed out that sometimes one knows of ‘x' that it is ø thanks to recognizing a feature merely correlated with the presence of øness. Without endorsing a causal theory themselves, they suggest that it would need to be elaborated so as to allow that one's belief that ‘x' is ø has been caused by a factor whose correlation with the presence of øness has caused in oneself (e.g., by evolutionary adaptation in one's ancestors) the disposition that one manifests in acquiring the belief in response to the correlated factor. Not only does this strain the unity of a causal theory by complicating it, but no causal theory without other shortcomings has been able to cover instances of deductively reasoned knowledge.

Causal theories of propositional knowledge differ over whether they deviate from the tripartite analysis by dropping the requirement that one's believing (accepting) that ‘h' be justified. The same variation occurs regarding reliability theories, which present the knower as reliable concerning the issue of whether or not ‘h', in the sense that some of one's cognitive or epistemic states are such that, given further characteristics of oneself (possibly including relations to factors external to one, of which one may not be aware), it is nomologically necessary (or at least probable) that ‘h'. In some versions, the reliability is required to be ‘global' insofar as it must concern a nomological (or probabilistic) relationship of states of the relevant type to the acquisition of true beliefs about a wider range of issues than merely whether or not ‘h'. There is also controversy about how to delineate the limits of what constitutes a type of relevant personal state or characteristic. (For example, in a case where Mr Notgot has not been shamming and one does know thereby that someone in the office owns a Ford, is the relevant type something broad, such as a way of forming beliefs about the properties of persons spatially close to one, or instead something narrower, such as a way of forming beliefs about Ford owners in offices based partly upon their relevant testimony?)

One important variety of reliability theory is a conclusive-reasons account, which includes a requirement that one's reasons for believing that ‘h' be such that, in one's circumstances, if ‘h*' were not to occur then, e.g., one would not have the reasons one does for believing that ‘h', or, e.g., one would not believe that ‘h'. Roughly, the latter is demanded by theories that treat a knower as ‘tracking the truth', theories that include the further demand that, roughly, if it were the case that ‘h', then one would believe that ‘h'. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a ‘method' has been used to arrive at the belief that ‘h', then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.
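Nozick's tracking conditions are commonly set out with subjunctive conditionals (written here with the usual ‘□→'); on his analysis, S knows that h just in case:

```latex
\begin{align*}
&(1)\quad h \text{ is true}\\
&(2)\quad S \text{ believes that } h\\
&(3)\quad \neg h \;\Box\!\rightarrow\; \neg(S \text{ believes that } h) \qquad \text{(if } h \text{ were false, } S \text{ would not believe it)}\\
&(4)\quad h \;\Box\!\rightarrow\; S \text{ believes that } h \qquad\qquad\;\; \text{(if } h \text{ were true, } S \text{ would believe it)}
\end{align*}
```

When a method M is used to arrive at the belief, the antecedents of (3) and (4) are relativized to M: roughly, if h were false and S were to use M to settle whether h, S would not believe, via M, that h; and correspondingly for (4).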

But unless more conditions are added to Nozick's analysis, it will be too weak to explain why one lacks knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot's compulsion is not easily changed; (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one; and (c) one arrives at one's belief that ‘h' not by reasoning through a false belief but by basing the belief that ‘h' upon a true existential generalization of one's evidence.

Nozick's analysis is in addition too strong to permit anyone ever to know that h5: ‘Some of my beliefs about beliefs might be otherwise, e.g., I might have rejected one of them'. If I know that h5, then satisfaction of the antecedent of one of Nozick's conditionals would involve its being false that h5, thereby thwarting satisfaction of the consequent's requirement that I not then believe that h5. For the belief that h5 is itself one of my beliefs about beliefs (Shope, 1984).

Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge, construed as an even broader category. They have proposed various examples of propositional knowledge that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often recognized by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats propositional knowledge as merely the ability to provide a correct answer to a possible question. However, White may be equating ‘producing' knowledge in the sense of producing ‘the correct answer to a possible question' with ‘displaying' knowledge in the sense of manifesting knowledge. The latter can be done even by very young children and some non-human animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that ‘h' without believing or accepting that ‘h' can be modified so as to illustrate this point. The example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horse races. If the example is modified so that the hypothetical ‘seer' never picks winners but only muses over whether those horses might win, or only reports that those horses will win, this behaviour should be as much of a candidate for the person's manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.

These considerations expose limitations in Edward Craig's analysis (1990) of the concept of knowing in terms of a person's being a satisfactory informant in relation to an inquirer who wants to find out whether or not ‘h'. Craig realizes that apparent counterexamples to his analysis are constituted by knowers who are too recalcitrant to inform the inquirer, too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried ‘Wolf'). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one's having the power to proceed in a way that represents the relevant state of affairs as causally involved in one's proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.

Knowledge and belief: According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956), conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato (429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible (‘Republic' 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say ‘I do not believe she is guilty. I know she is' and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying ‘I do not just believe she is guilty, I know she is', where ‘just' makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: ‘You did not hurt him, you killed him.'

H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief are never ones involving complete confidence. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it might be accompanied by confidence as well. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions'. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say: I am unsure whether my answer is true; still, I know it is correct. But Woozley explains this tension using a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While ‘I know such and such' might be true even if I am unsure whether such and such holds, none the less it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years prior, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?' Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066; he would deny being sure (or having any right to be sure) that 1066 was the correct date. Radford would none the less insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading'.

Those who agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Armstrong will grant Radford the point that Jean does know that the Battle of Hastings took place in 1066; in fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and subsequently ‘guessed' that it took place in 1066, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): For no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, DC. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is in a similar position. Even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge that is unattended with belief. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.

Perception presents itself as a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception. (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of ‘sensible qualities': colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain.
Calling such supposed items names like ‘sense-data' or ‘percepts' exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include ‘scepticism' and ‘idealism'.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.

Furthermore, perceptual knowledge is knowledge acquired by or through the senses, and it includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something (that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up) by some sensory means. Seeing that the light has turned green is learning something (that the light has turned green) by use of the eyes. Feeling that the melon is overripe is coming to know a fact (that the melon is overripe) by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspaper, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, and hence comes to know, something about the gauge (what it reads), one cannot come to know, in this way, what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot, at least in this way, hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that ‘a' is ‘F', coming to know thereby that ‘a' is ‘F', by seeing (hearing, etc.) that some other condition, ‘b's' being ‘G', obtains. When this occurs, the knowledge (that ‘a' is ‘F') is derived from, or dependent on, the more basic perceptual knowledge that ‘b' is ‘G'.

Finally, the Representational Theory of Mind (RTM), which goes back at least to Aristotle, takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and images. Such states are said to have ‘intentionality': they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an image of George W. Bush with dreadlocks is inaccurate.)

The Representational Theory of Mind defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry. The Representational Theory of Mind also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition ‘q' from the propositions ‘p' and ‘if p then q' is (among other things) to have a sequence of thoughts of the form ‘p', ‘if p then q', ‘q'.

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as ‘folk psychology') are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)

Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the ‘intentional stance' toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational-i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a ‘moderate' realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic ‘structural' or ‘syntactic' properties. The semantic properties of a mental state, however, are determined by its extrinsic properties-e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal (‘what-it's-like') features (‘qualia'), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that non-conceptual representations-percepts (‘impressions'), images (‘ideas') and the like-are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only non-conceptual representations construed in this way.

Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.

There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2003), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991), Pitt (2004), Searle (1992), Siewert (1998) and Strawson (1994) claim that purely symbolic (conscious) representational states themselves have a (perhaps proprietary) phenomenology. If this claim is correct, the question of what role phenomenology plays in the determination of content arises again for conceptual representation; and the eliminativist ambitions of Sellars, Brandom and Rey would meet a new obstacle. (It would also raise prima facie problems for reductive representationalism.)

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term ‘representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal-though not in the same way.)

The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to ‘see through it' to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of ‘symbol-filled arrays.' (See the account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences-qualia themselves-that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual ‘scenario' (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is ‘correct' (a semantic property) if in the corresponding ‘scene' (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the ‘phenomenal concept'-a conceptual/phenomenal hybrid consisting of a phenomenological ‘sample' (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, ‘you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.' One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about P, without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (See Jackson 1982, 1986 and Nagel 1974.)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties-i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery-hence the designation ‘pictorial'; though of course there may be imagery in other modalities-auditory, olfactory, etc.-as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
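The analog/digital contrast can be made concrete with a minimal sketch (the classes below are hypothetical illustrations of my own devising, not drawn from any of the authors cited): an analog representation varies continuously with what it represents, while a digital one simply applies or fails to apply.

```python
from dataclasses import dataclass


@dataclass
class AnalogRep:
    """Represents brightness by a continuously variable magnitude."""
    level: float  # any value in [0.0, 1.0]; arbitrarily fine differences matter

    def represents_brighter_than(self, other: "AnalogRep") -> bool:
        return self.level > other.level


@dataclass
class DigitalRep:
    """Represents a property a thing either has or lacks."""
    predicate: str
    applies: bool  # no intermediate degrees: True or False


# An image can be ever so slightly brighter than another ...
dim, bright = AnalogRep(0.30), AnalogRep(0.31)
assert bright.represents_brighter_than(dim)

# ... but a thought is not more or less about Elvis: it is or it is not.
about_elvis = DigitalRep("about Elvis", True)
assert about_elvis.applies in (True, False)
```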

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is ‘quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially-for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
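The functional construal of distance can be illustrated with a toy example (the parts list and function are hypothetical, in the spirit of Rey's suggestion): the ‘distance' between parts of a representation is the number of discrete retrieval steps separating them in storage, not any literal spatial separation.

```python
# Parts of an imagined face, stored in an order that mirrors their arrangement.
parts = ["nose", "mouth", "chin", "neck"]


def functional_distance(a: str, b: str) -> int:
    """Number of discrete computational steps separating two stored parts."""
    return abs(parts.index(a) - parts.index(b))


# Relative distances among parts of the object are preserved functionally:
# nose-to-chin (2 steps) exceeds mouth-to-chin (1 step), as on the face itself.
assert functional_distance("nose", "chin") > functional_distance("mouth", "chin")
```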

Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are ‘(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each ‘cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
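A schematic rendering of such an array (a toy construction of my own, not Tye's formalism) might look as follows: each cell's position carries the spatial content, while the symbol it contains represents discursively what is at that location.

```python
# A symbol-filled array for an imagined scene: row/column position represents
# a viewer-centred 2-D location; the symbol in each cell represents discursively.
grid = [
    ["sky",  "sky",   "sun"],
    ["tree", "sky",   "sky"],
    ["tree", "grass", "grass"],
]


def symbol_at(row: int, col: int) -> str:
    """The coordinates carry spatial content; the symbol carries descriptive content."""
    return grid[row][col]


assert symbol_at(0, 2) == "sun"    # 'sun' appears in the upper right
assert symbol_at(2, 1) == "grass"  # 'grass' appears at the lower middle
```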

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990, 1994) and Teleological Theories (Fodor 1990, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone are internalists (or individualists; cf. Putnam 1975, Fodor 1981).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both ‘narrow' content (determined by intrinsic factors) and ‘wide' or ‘broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982), and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role (or its phenomenology).

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind is thus a version of the representational theory of mind, since it attempts to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some-so-called ‘subpersonal' or ‘sub-doxastic' representations-are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the ‘mental models' of Johnson-Laird 1983, the ‘retinal arrays,' ‘primal sketches' and ‘2½ -D sketches' of Marr 1982, the ‘frames' of Minsky 1974, the ‘sub-symbolic' structures of Smolensky 1989, the ‘quasi-pictures' of Kosslyn 1980, and the ‘interpreted symbol-filled arrays' of Tye 1991-in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).

A fundamental disagreement among proponents of the computational theory of mind concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.

The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors (‘nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism-‘localist' versions-on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the Connectionist program (Smolensky 1988, 1991, Chalmers 1993).

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
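The combinatorial picture can be sketched as follows (the symbols and the single formation rule are toy inventions for illustration, not drawn from Fodor): a finite stock of primitives plus a recursive formation rule yields an unbounded set of complex representations whose content is fixed compositionally by the contents of their constituents and their structure.

```python
# A finite stock of primitive representations and their contents.
primitives = {"JOHN": "John", "MARY": "Mary", "LOVES": "loves"}


def content(rep):
    """Compositionally interpret a representation: either a primitive symbol,
    or a complex (subject, verb, object) built by the recursive formation rule."""
    if isinstance(rep, str):
        return primitives[rep]
    subj, verb, obj = rep
    return f"({content(subj)} {content(verb)} {content(obj)})"


# Systematicity: anyone who can token the one can token the other.
assert content(("JOHN", "LOVES", "MARY")) == "(John loves Mary)"
assert content(("MARY", "LOVES", "JOHN")) == "(Mary loves John)"

# Productivity: the rule recurses, so complexes can embed without bound.
nested = (("JOHN", "LOVES", "MARY"), "LOVES", "JOHN")
assert content(nested) == "((John loves Mary) loves John)"
```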

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in Connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic; and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in Connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the Connectionist model it is a matter of evolving distribution of ‘weight' (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The Connectionist network is ‘trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
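This style of learning can be illustrated with a standard perceptron-style sketch (a hypothetical two-input network, not a model from the cited literature): weights on connections evolve through repeated exposure to examples, with no explicit formulation or testing of hypotheses.

```python
def train(examples, epochs=50, rate=0.1):
    """Adjust connection weights after each exposure so as to reduce error."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):  # many repeated exposures are typically needed
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Learning is nothing but redistribution of weight on connections.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias


# Train the network to distinguish one class of input patterns from another
# (here, an AND-like discrimination task).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(examples)


def classify(inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0


# After training, the network discriminates correctly.
assert [classify(x) for x, _ in examples] == [t for _, t in examples]
```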

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that Connectionist systems show the kind of flexibility in response to novel situations typical of human cognition-situations in which classical systems are relatively ‘brittle' or ‘fragile.'

Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if Connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within Connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/Connectionist debate, and provides useful introductory material as well.)

Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth (so-called extensional properties), expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions-i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Søren Aabye Kierkegaard (1813-1855) was a Danish religious philosopher whose concern with individual existence, choice, and commitment profoundly influenced modern theology and philosophy, especially existentialism.

Søren Kierkegaard wrote of the paradoxes of Christianity and the faith required to reconcile them. In his book Fear and Trembling, Kierkegaard discusses Genesis 22, in which God commands Abraham to kill his only son, Isaac. Although God made an unreasonable and immoral demand, Abraham obeyed without trying to understand or justify it. Kierkegaard regards this ‘leap of faith' as the essence of Christianity.

Kierkegaard was born in Copenhagen on May 15, 1813. His father was a wealthy merchant and strict Lutheran, whose gloomy, guilt-ridden piety and vivid imagination strongly influenced Kierkegaard. Kierkegaard studied theology and philosophy at the University of Copenhagen, where he encountered Hegelian philosophy and reacted strongly against it. While at the university, he ceased to practice Lutheranism and for a time led an extravagant social life, becoming a familiar figure in the theatrical and café society of Copenhagen. After his father's death in 1838, however, he decided to resume his theological studies. In 1840 he became engaged to the 17-year-old Regine Olsen, but almost immediately he began to suspect that marriage was incompatible with his own brooding, complicated nature and his growing sense of a philosophical vocation. He abruptly broke off the engagement in 1841, but the episode took on great significance for him, and he repeatedly alluded to it in his books. At the same time, he realized that he did not want to become a Lutheran pastor. An inheritance from his father allowed him to devote himself entirely to writing, and in the remaining 14 years of his life he produced more than 20 books.

Kierkegaard's work is deliberately unsystematic and consists of essays, aphorisms, parables, fictional letters and diaries, and other literary forms. Many of his works were originally published under pseudonyms. He applied the term existential to his philosophy because he regarded philosophy as the expression of an intensely examined individual life, not as the construction of a monolithic system in the manner of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, whose work he attacked in Concluding Unscientific Postscript (1846; trans. 1941). Hegel claimed to have achieved a complete rational understanding of human life and history; Kierkegaard, on the other hand, stressed the ambiguity and paradoxical nature of the human situation. The fundamental problems of life, he contended, defy rational, objective explanation; the highest truth is subjective.

Kierkegaard maintained that systematic philosophy not only imposes a false perspective on human existence but that it also, by explaining life in terms of logical necessity, becomes a means of avoiding choice and responsibility. Individuals, he believed, create their own natures through their choices, which must be made in the absence of universal, objective standards. The validity of a choice can only be determined subjectively.

In his first major work, Either/Or (2 volumes, 1843; trans. 1944), Kierkegaard described two spheres, or stages of existence, that the individual may choose: the aesthetic and the ethical. The aesthetic way of life is a refined hedonism, consisting of a search for pleasure and a cultivation of mood. The aesthetic individual constantly seeks variety and novelty in an effort to stave off boredom but eventually must confront boredom and despair. The ethical way of life involves an intense, passionate commitment to duty, to unconditional social and religious obligations. In his later works, such as Stages on Life's Way (1845; trans. 1940), Kierkegaard discerned in this submission to duty a loss of individual responsibility, and he proposed a third stage, the religious, in which one submits to the will of God but in doing so finds authentic freedom. In Fear and Trembling (1843; trans. 1941) Kierkegaard focused on God's command that Abraham sacrifice his son Isaac (Genesis 22:1-19), an act that violates Abraham's ethical convictions. Abraham proves his faith by resolutely setting out to obey God's command, even though he cannot understand it. This ‘suspension of the ethical,' as Kierkegaard called it, allows Abraham to achieve an authentic commitment to God. To avoid ultimate despair, the individual must make a similar ‘leap of faith' into a religious life, which is inherently paradoxical, mysterious, and full of risk. One is called to it by the feeling of dread (The Concept of Dread, 1844; trans. 1944), which is ultimately a fear of nothingness.

Toward the end of his life Kierkegaard was involved in bitter controversies, especially with the established Danish Lutheran church, which he regarded as worldly and corrupt. His later works, such as The Sickness Unto Death (1849; trans. 1941), reflect an increasingly somber view of Christianity, emphasizing suffering as the essence of authentic faith. He also intensified his attack on modern European society, which he denounced in The Present Age (1846; trans. 1940) for its lack of passion and for its quantitative values. The stress of his prolific writing and of the controversies in which he engaged gradually undermined his health; in October 1855 he fainted in the street, and he died in Copenhagen on November 11, 1855.

Kierkegaard's influence was at first confined to Scandinavia and to German-speaking Europe, where his work had a strong impact on Protestant theology and on such writers as the 20th-century Austrian novelist Franz Kafka. As existentialism emerged as a general European movement after World War I, Kierkegaard's work was widely translated, and he was recognized as one of the seminal figures of modern culture.

Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality and had little to say about the physical substrates of human consciousness, the business of examining the dynamic functioning and structural foundations of mind became the province of social scientists and humanists. Adolphe Quételet proposed a ‘social physics' that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.

More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter based on the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.

The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the death-of-God theologian Friedrich Nietzsche (1844-1900). After declaring that God and ‘divine will' did not exist, Nietzsche located consciousness in the domain of subjectivity as the ground for individual ‘will' and summarily dismissed all previous philosophical attempts to articulate the ‘will to truth'. The problem, as he saw it, was that the ‘will to truth', seemingly validated by the successes of science, disguises the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and are expressions or manifestations of individual ‘will'.

In Nietzsche's view, the separation between mind and matter is more absolute and total than had previously been imagined. Based on the assumption that there is no necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in ‘a prison house of language'. The prison, as he conceived it, was also a ‘space' where the philosopher can examine the ‘innermost desires of his nature' and articulate a new message of individual existence founded on ‘will'.

Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, attends only to natural phenomena and favors reductionistic examination of phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.

Nietzsche's emotionally charged defence of intellectual freedom and radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved enormously influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.

The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism and deconstruction: Jacques Lacan, Roland Barthes, Michel Foucault and Jacques Derrida. The attribution of a direct linkage between the nineteenth-century crisis over the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of that cultural ambience and the ways in which the conflict might be resolved.

The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach's critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, ‘relativistic' notions.

Jean-Paul Sartre (1905-1980) was a French philosopher, dramatist, novelist, and political journalist, and a leading exponent of existentialism. Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of his work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that ‘man is condemned to be free,' Sartre reminds us of the responsibility that accompanies human decisions.

Sartre was born in Paris, June 21, 1905, and educated at the École Normale Supérieure in Paris, the University of Fribourg in Switzerland, and the French Institute in Berlin. He taught philosophy at various lycées from 1929 until the outbreak of World War II, when he was called into military service. In 1940-41 he was imprisoned by the Germans; after his release, he taught in Neuilly, France, and later in Paris, and was active in the French Resistance. The German authorities, unaware of his underground activities, permitted the production of his antiauthoritarian play The Flies (1943; trans. 1946) and the publication of his major philosophic work Being and Nothingness (1943; trans. 1953). Sartre gave up teaching in 1945 and founded the political and literary magazine Les Temps Modernes, of which he became editor in chief. Sartre was active after 1947 as an independent Socialist, critical of both the USSR and the United States in the so-called cold war years. Later, he supported Soviet positions but still frequently criticized Soviet policies. Most of his writing of the 1950s deals with literary and political problems. Sartre rejected the 1964 Nobel Prize in literature, explaining that to accept such an award would compromise his integrity as a writer.

Sartre's philosophic works combine the phenomenology of the German philosopher Edmund Husserl, the metaphysics of the German philosophers Georg Wilhelm Friedrich Hegel and Martin Heidegger, and the social theory of Karl Marx into a single view called existentialism. This view, which relates philosophical theory to life, literature, psychology, and political action, stimulated so much popular interest that existentialism became a worldwide movement.

In his early philosophic work, Being and Nothingness, Sartre conceived humans as beings who create their own world by rebelling against authority and by accepting personal responsibility for their actions, unaided by society, traditional morality, or religious faith. Distinguishing between human existence and the nonhuman world, he maintained that human existence is characterized by nothingness, that is, by the capacity to negate and rebel. His theory of existential psychoanalysis asserted the inescapable responsibility of all individuals for their own decisions and made the recognition of one's absolute freedom of choice the necessary condition for authentic human existence. His plays and novels express the belief that freedom and acceptance of personal responsibility are the main values in life and that individuals must rely on their creative powers rather than on social or religious authority.

In his later philosophic work Critique of Dialectical Reason (1960; trans. 1976), Sartre's emphasis shifted from existentialist freedom and subjectivity to Marxist social determinism. Sartre argued that the influence of modern society over the individual is so great as to produce serialization, by which he meant loss of self. Individual power and freedom can only be regained through group revolutionary action. Despite this exhortation to revolutionary political activity, Sartre himself did not join the Communist Party, thus retaining the freedom to criticize the Soviet invasions of Hungary in 1956 and Czechoslovakia in 1968. He died in Paris, April 15, 1980.

Pragmatics is the part of the theory of signs, or semiotics, that concerns the relationship between speakers and their signs: the study of the principles governing appropriate conversational moves is called general pragmatics, while applied pragmatics treats special kinds of linguistic interaction such as interviews and speech-making. Pragmatism, by contrast, is the philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.

Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.

Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.

The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism's refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists' denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.

Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the work of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.

The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning-in particular, the meaning of concepts used in science. The meaning of the concept ‘brittle,' for example, is given by the observed consequences or properties that objects called ‘brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. Many philosophers known as logical positivists, a group of philosophers who have been influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.

James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce's doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life-morality and religious belief, for example-are leaps of faith. As such, they depend upon what he called ‘the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist-someone who believes the world to be far too complex for any one philosophy to explain everything.

Dewey's philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.

Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey's writings, although he aspired to synthesize the two realms.

The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has renewed in the classic pragmatists-Peirce, James, and Dewey-as an alternative to Rorty's interpretation of the tradition.

In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.

Semantics is one of the five branches into which semiotics is usually divided: the study of the meaning of words and of their relation to the objects they designate; a semantics is provided for a formal language when an interpretation or model is specified. More broadly, semantics (from the Greek semantikos, ‘significant') is the study of the meaning of linguistic signs-that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as ‘What is the meaning of (the word) X?' They do this by studying what signs are, as well as how signs possess significance-that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs-what they stand for-with the process of assigning those meanings.

Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, plus an approach known as general semantics. Philosophers look at the behaviour that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.

These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviourists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as ‘the victor at Jena' and ‘the loser at Waterloo,' both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.

In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a ‘science of significations' that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism.

One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analyzing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).

An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to indicate a definite pen-plume-of the color red-rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.

An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition-a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign ‘the moon is a sphere' is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it refers to-the moon-is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
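The notion of an interpreted sign can be sketched in a few lines: each sign is paired with a truth condition, and its truth value is settled by evaluating that condition against a model (a stand-in for looking at the world). The ‘world' description, the signs, and the predicates below are all invented for illustration, not part of any standard formalism.

```python
# Sketch of an interpreted language: rules of meaning link each sign
# to a truth condition, which is evaluated against a toy model of the
# world. All facts and signs here are hypothetical examples.

world = {
    'moon': {'shape': 'sphere'},
    'table': {'shape': 'rectangle'},
}

# Rules of meaning: each sign is paired with its truth condition.
truth_conditions = {
    'the moon is a sphere': lambda w: w['moon']['shape'] == 'sphere',
    'the table is a sphere': lambda w: w['table']['shape'] == 'sphere',
}

def truth_value(sign, w):
    # A sign is true exactly when its truth condition is satisfied in w.
    return truth_conditions[sign](w)

print(truth_value('the moon is a sphere', world))   # True
print(truth_value('the table is a sphere', world))  # False
```

The point the sketch makes is the one in the paragraph above: both signs are ‘understood' (both have truth conditions), but their truth values are fixed only by checking the world.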

The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs-by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favor of his ‘ordinary language' philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.

From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in ‘the moon is a sphere'); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs-that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.

What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic-that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone-independent of a speaker and hearer.

Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analyzing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs), often nominal arguments (noun phrases), or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression ‘Bill gives Mary the book,' ‘gives' is an operator that relates the arguments ‘Bill,' ‘Mary,' and ‘the book.'
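The operator-argument analysis just described can be made concrete in a short sketch. The class and names below are hypothetical illustrations, not part of any linguistic toolkit: a sign is modeled as an operator applied to a tuple of nominal arguments.

```python
# A minimal sketch (all names hypothetical) of operator-argument
# analysis: a sign is an operator combined with nominal arguments.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Proposition:
    operator: str               # the relating sign, e.g. 'gives'
    arguments: Tuple[str, ...]  # the nominal arguments it combines

    def render(self) -> str:
        return f"{self.operator}({', '.join(self.arguments)})"

# 'Bill gives Mary the book': 'gives' relates three arguments.
give = Proposition("gives", ("Bill", "Mary", "the book"))
print(give.render())  # gives(Bill, Mary, the book)
```

The point of the notation is only that the verb is treated as the structural center of the expression, with the noun phrases as its operands.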

Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, making distinctions among entities, relations, or actions. For example, ‘kiss' belongs to an expression class with other items such as ‘hit' and ‘see,' as well as to the conventional part of speech ‘verb,' in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In ‘Mary kissed John,' the syntactic role of ‘kiss' is to relate two nominal arguments (‘Mary' and ‘John'), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, ‘John' has the same semantic role-to identify a person-in the following two sentences: ‘John is easy to please' and ‘John is eager to please.' The syntactic role of ‘John' in the two sentences, however, is different: In the first, ‘John' is the receiver of an action; in the second, ‘John' is the actor.

Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs-usually single words as vocabulary items called lexemes-in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain ‘seat' in English, the lexemes ‘chair,'‘sofa,'‘loveseat,' and ‘bench' can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning ‘something on which to sit.'
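The componential analysis of the ‘seat' domain described above amounts to a small feature matrix. The sketch below is a hypothetical rendering of that matrix, using the two distinguishing components mentioned in the text (how many people are accommodated, and whether a back support is included); the shared component, ‘something on which to sit,' is what qualifies a lexeme for inclusion in the domain at all.

```python
# Hypothetical feature matrix for the English semantic domain 'seat'.
# Every lexeme listed shares the component 'something on which to sit';
# the features below are what differentiate them from one another.
seat_domain = {
    "chair":    {"capacity": 1, "back": True},
    "loveseat": {"capacity": 2, "back": True},
    "sofa":     {"capacity": 3, "back": True},
    "bench":    {"capacity": 3, "back": False},
}

def distinctive_features(domain, a, b):
    """Features whose values differentiate lexeme a from lexeme b."""
    return {k for k in domain[a] if domain[a][k] != domain[b][k]}

print(distinctive_features(seat_domain, "sofa", "bench"))   # {'back'}
print(distinctive_features(seat_domain, "chair", "sofa"))   # {'capacity'}
```

On this picture, ‘sofa' and ‘bench' contrast only in back support, while ‘chair' and ‘sofa' contrast only in capacity.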

Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.

Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as ‘Colorless green ideas sleep furiously'), although grammatical expressions, are meaningless-semantically blocked. The rules must also account for how a sentence such as ‘They passed the port at midnight' can have at least two interpretations.

Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence ‘Colorless green ideas sleep furiously' has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as ‘They passed the port at midnight'), one decides which meaning applies.

In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech-that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.

Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.

Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible-in a syntactically based theory-for surface structure and deep structure jointly to determine the semantic interpretation of an expression.

The focus of general semantics is how people evaluate words and how that evaluation influences their behaviour. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigor, and the approach has declined in popularity.

Positivism, system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge. The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte (1798-1857), but some of the positivist concepts may be traced to the British philosopher David Hume, the French social theorist Henri de Saint-Simon, and the German philosopher Immanuel Kant.

Comte chose the word positivism on the ground that it indicated the ‘reality' and ‘constructive tendency' that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus control of natural forces. The two primary components of positivism, the philosophy and the polity (or program of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship. A number of Comte's disciples refused, however, to accept this religious development of his philosophy, because it seemed to contradict the original positivist philosophy. Many of Comte's doctrines were later adapted and developed by the British social philosophers John Stuart Mill and Herbert Spencer and by the Austrian philosopher and physicist Ernst Mach.

The principle of indifference, named but rejected in its unrestricted form by the English economist and philosopher John Maynard Keynes (1883-1946), holds that if there is no known reason for asserting one rather than another out of several alternatives, then relative to our knowledge they have an equal probability. Without restriction the principle leads to contradiction. For example, if we know nothing about the nationality of a person, we might argue that the probability is equal that she comes from England or France, and equal that she comes from Scotland or France. But from the first two assertions the probability that she belongs to Britain must be at least double the probability that she belongs to France.
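The arithmetic behind the contradiction can be spelled out directly. This is a sketch of the nationality example above, with exact fractions; the incoherence shows up as probabilities that sum to more than 1.

```python
# The contradiction in the unrestricted principle of indifference,
# made explicit for the nationality example in the text.
from fractions import Fraction

half = Fraction(1, 2)

# Knowing nothing, indifference over {England, France} gives each 1/2...
p_england, p_france = half, half
# ...and indifference over {Scotland, France} also gives each 1/2.
p_scotland = half

# Britain (England or Scotland) then gets probability 1, at least
# double France's share, and the three mutually exclusive
# alternatives already sum to more than 1: the assignment is incoherent.
p_britain = p_england + p_scotland
assert p_britain == 1
assert p_britain >= 2 * p_france
assert p_england + p_scotland + p_france > 1
```

Keynes's restriction was, in effect, that indifference may only be applied to alternatives that are genuinely symmetric at a single level of description.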

A paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions. To solve a paradox will involve either showing that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand.

By comparison, the mathematician and philosopher Bernard Bolzano (1781-1848) argues that there is something else: an infinity that does not have this whatever-you-need-it-to-be elasticity. In fact, a truly infinite quantity (for example, the length of a straight line unbounded in either direction, meaning the magnitude of the spatial entity containing all the points determined solely by their abstractly conceivable relation to two fixed points) does not by any means need to be variable, and in the adduced example it is in fact not variable. Conversely, it is quite possible for a quantity merely capable of being taken greater than we have already taken it, and of becoming larger than any pre-assigned (finite) quantity, nevertheless to remain at all times merely finite, which holds in particular of every numerical quantity 1, 2, 3, 4, 5.

In other words, for Bolzano there could be a true infinity that was not merely a variable something, only ever bigger than anything you might specify. Such a true infinity was the result of joining two points together and extending that line in both directions without stopping. And what is more, he could separate off the demands of calculus, which could make do with finite quantities, without ever bothering with the slippery potential infinity. Here was both a deeper understanding of the nature of infinity and the basis on which his safe, infinity-free calculus was built.

This use of the inexhaustible follows on directly from Bolzano's criticism of the way that we used as infinity a variable something that would be bigger than anything you could specify, but never quite reached the true, absolute infinity. In Paradoxes of the Infinite Bolzano points out that it is possible for a quantity merely capable of becoming larger than any pre-assigned (finite) quantity nevertheless to remain at all times merely finite.

Bolzano intended this as a criticism of the way infinity was treated, but Professor Jacquette sees it instead as a way of making possible practical applications like calculus without the need for weasel words about infinity.

By replacing infinity with the inexhaustible we do away with one of the most common requirements for infinity, but is there anything left that maps onto the real world? Can we confine infinity to that pure mathematical other world, where anything, however unreal, can be constructed, and forget about it elsewhere? Surprisingly, this seems to have been the view, at least at one point in time, even of the German mathematician and founder of set theory Georg Cantor (1845-1918) himself, who commented in 1883 that only the finite numbers are real.

Keeping within the lines of reason, both the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30) and the Italian mathematician G. Peano (1858-1932) distinguished the purely logical paradoxes from those that depend upon the notions of reference or truth (semantic notions). Connected with these foundations are the postulates justifying mathematical induction: they ensure that a numerical series is closed, in the sense that nothing but zero and its successors can be numbers, so that any series satisfying the axioms can be conceived as the sequence of natural numbers. Candidates from set theory include the Zermelo numbers, where the empty set is zero and the successor of each number is its unit set, and the von Neumann numbers, where each number is the set of all smaller numbers. A similar and equally fundamental complementarity exists in the relation between zero and infinity. Although the fullness of infinity is logically antithetical to the emptiness of zero, infinity can be approached from zero with a simple mathematical operation: informally, the division of any nonzero number by zero yields infinity, while the multiplication of any number by zero is zero.
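The two set-theoretic encodings of the natural numbers mentioned above differ in a way a short sketch can exhibit. The helper functions are hypothetical names; frozensets are used so that sets can themselves be members of sets.

```python
# Two set-theoretic encodings of the natural numbers, as a sketch.

def zermelo(n):
    """Zermelo numbers: zero is the empty set; the successor of x is {x}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset({s})
    return s

def von_neumann(n):
    """von Neumann numbers: each number is the set of all smaller numbers."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}   # successor of x is the union of x with {x}
    return s

empty = frozenset()
assert zermelo(2) == frozenset({frozenset({empty})})             # {{0}}
assert von_neumann(2) == frozenset({empty, frozenset({empty})})  # {0, {0}}
assert len(von_neumann(5)) == 5  # a von Neumann number n has n elements
assert len(zermelo(5)) == 1      # a Zermelo number n > 0 has one element
```

Both encodings satisfy the induction postulates; the difference is purely in which sets play the role of the numbers, which is why neither can claim to be *the* natural numbers.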

Consider the set theory developed by the German mathematician and logician Georg Cantor. From 1878 to 1897, Cantor created a theory of abstract sets of entities that eventually became a mathematical discipline. A set, as he defined it, is a collection of definite and distinguishable objects in thought or perception conceived as a whole.

Cantor attempted to prove that the process of counting and the definition of integers could be placed on a solid mathematical foundation. His method was repeatedly to place the elements in one set into one-to-one correspondence with those in another. In the case of integers, Cantor showed that each integer (1, 2, 3, . . . n) could be paired with an even integer (2, 4, 6, . . . 2n), and, therefore, that the set of all integers was equal in size to the set of all even numbers.
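Cantor's pairing method can be shown in miniature. The sketch below pairs each positive integer n with the even number 2n; because the pairing is invertible, neither set has ‘more' elements than the other, even though the evens are a proper subset of the integers.

```python
# Cantor's one-to-one correspondence between the positive integers
# and the even numbers, sketched on an initial segment.

def pair_with_even(n: int) -> int:
    return 2 * n

integers = list(range(1, 6))
evens = [pair_with_even(n) for n in integers]

print(list(zip(integers, evens)))
# [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]

# The pairing is invertible, which is what one-to-one correspondence
# requires: distinct integers always get distinct evens.
assert all(pair_with_even(n) // 2 == n for n in integers)
```

No finite segment proves anything by itself, of course; the force of the argument is that the rule n ↔ 2n works uniformly for every n.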

Amazingly, Cantor discovered that some infinite sets were larger than others and that infinite sets formed a hierarchy of ever greater infinities. After this failed attempt to save the classical view of the logical foundations and internal consistency of mathematical systems, it soon became obvious that a major crack had appeared in the seemingly solid foundations of number and mathematics. Meanwhile, an impressive number of mathematicians began to see that everything from functional analysis to the theory of real numbers depended on the problematic character of number itself.

In the theory of probability, Ramsey was the first to show how a personalist theory could be developed, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a redundancy theory of truth, which he combined with radical views of the function of many kinds of propositions. Neither generalizations nor causal propositions, nor those treating probability or ethics, describe facts; rather, each has a different specific function in our intellectual economy.

Ramsey also advocated the device now known as the Ramsey sentence: a sentence generated by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark', replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for every theoretical term, the sentence gives the topic-neutral structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Nonetheless, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of the theory, then, by the Löwenheim-Skolem theorem, the result will be interpretable in any domain of sufficient cardinality, and the content of the theory may reasonably be felt to have been lost.
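The Ramsification step just described can be written schematically. Taking T(quark) to abbreviate the conjunction of the theory's assertions containing the term, the construction is:

```latex
% One theoretical term: replace it by a variable and
% existentially quantify into the result.
T(\mathrm{quark}) \;\rightsquigarrow\; \exists x\, T(x)

% In general, for theoretical terms \tau_1, \ldots, \tau_n:
T(\tau_1, \ldots, \tau_n)
  \;\rightsquigarrow\;
  \exists x_1 \cdots \exists x_n\, T(x_1, \ldots, x_n)
```

Newman's objection is visible in this form: once only the logical vocabulary is left unquantified, the right-hand sentence constrains the world only in its cardinality.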

Perhaps the best known of the paradoxes in the foundations of set theory was discovered by Russell in 1901. Some classes have themselves as members: the class of all abstract objects, for example, is an abstract object; others do not: the class of donkeys is not itself a donkey. Now consider the class of all classes that are not members of themselves. Is this class a member of itself? If it is, then it is not; and if it is not, then it is.

The paradox is structurally similar to easier examples, such as the paradox of the barber: imagine a village with a barber in it who shaves all and only the people who do not shave themselves. Who shaves the barber? If he shaves himself, then he does not; but if he does not shave himself, then he does. The paradox is actually just a proof that there is no such barber, or in other words that the condition is inconsistent. All the same, it is not so easy to say why there is no such class as the one Russell defines. It seems that there must be some restriction on the kinds of definition that are allowed to define classes, and the difficulty is that of finding a well-motivated principle behind any such restriction.
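That ‘there is no such barber' can be checked mechanically on a small model. The sketch below enumerates every possible shaving relation over a hypothetical three-person village and confirms that none satisfies the barber condition; the inconsistency is forced by the barber's own case, exactly as in the text.

```python
# Brute-force check that the barber condition is unsatisfiable:
# over a tiny 'village' of three people, no shaving relation S makes
# the barber shave all and only those who do not shave themselves.
from itertools import product

people = [0, 1, 2]   # person 0 is the putative barber
pairs = [(a, b) for a in people for b in people]

def barber_condition(S):
    # barber shaves p  if and only if  p does not shave himself
    return all(((0, p) in S) == ((p, p) not in S) for p in people)

satisfying = []
for bits in product([False, True], repeat=len(pairs)):
    S = {pr for pr, keep in zip(pairs, bits) if keep}
    if barber_condition(S):
        satisfying.append(S)

print(len(satisfying))  # 0: no assignment satisfies the condition
```

The village size is irrelevant: the clause for p = 0 demands that the barber shave himself exactly when he does not, which no relation can satisfy.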

The French mathematician and philosopher Henri Jules Poincaré (1854-1912) believed that paradoxes like those of Russell and the barber were due to impredicative definitions, and therefore proposed banning them. But it turns out that classical mathematics requires such definitions at too many points for the ban to be easily tolerated. The proposal, as forwarded by Poincaré and Russell, was that in order to solve the logical and semantic paradoxes one would have to ban any collection (set) containing members that can only be defined by means of the collection taken as a whole. Definitions that fall into such a vicious regress are impredicative; the mark of a predicative definition is that it involves no such failure. There is frequently room for dispute about whether regresses are benign or vicious, since the issue will hinge on whether it is necessary to reapply the procedure. The cosmological argument, for instance, is an attempt to find a stopping point for what is otherwise seen as an infinite regress.

The investigation of questions that arise from reflection upon science and scientific inquiry is called the philosophy of science. Such questions include: What is distinctive about the methods of science? Is there a clear demarcation between science and other disciplines, and how do we place such enquiries as history, economics, or sociology? Are scientific theories probable, or more in the nature of provisional conjectures? Can they be verified or falsified? What distinguishes good from bad explanations? Might there be one unified science, embracing all the special sciences? For much of the 20th century these questions were pursued in a highly abstract and logical framework, it being supposed that in a general logic of scientific discovery a justification of scientific method might be found. However, many now take an interest in a more historical, contextual and sometimes sociological approach, in which the methods and successes of a science at a particular time are regarded less in terms of universal logical principles and procedures, and more in terms of the available methods and paradigms as well as the social context.

In addition to general questions of methodology, there are specific problems within particular sciences, giving rise to the philosophies of particular subjects such as biology, mathematics and physics.

Intuition is the immediate awareness, either of the truth of some proposition or of an object of apprehension such as a concept. Awareness of this kind occupies an important place in philosophical accounts of the sources of our knowledge, covering both the sensible apprehension of things and the pure intuition that structures sensation into the experience of things arrayed in space and time.

Natural law is the view of the status of law and morality especially associated with St Thomas Aquinas and the subsequent scholastic tradition. More widely, it is any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writers, is arguably derivative from a Platonic view of ethics, and is implicit in ancient Stoicism. Law stands above and apart from the activities of human lawmakers; it constitutes an objective set of principles that can be seen to be true by natural light or reason and (in religious versions of the theory) that express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God's will. The Dutch philosopher Hugo Grotius (1583-1645), for instance, takes the view that the content of natural law is independent of any will, including that of God, while the German theorist and historian Samuel von Pufendorf (1632-94) takes the opposite view, thereby facing one horn of the Euthyphro dilemma, which arises whatever the source of authority is supposed to be: do we care about the general good because it is good, or do we just call good the things that we care about? The theory may take a strong form, in which it is claimed that various facts entail values, or a weaker form, which confines itself to holding that reason by itself is capable of discerning moral requirements that are supposed to be binding on all human beings regardless of their desires.

Although the morality of a people and its ethics often amount to the same thing, there is a usage that restricts morality to systems such as that of the German philosopher Immanuel Kant (1724-1804), based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning based on the notion of a virtue, and generally avoiding the separation of moral considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests. Some theorists see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable deductively. Other approaches to ethics (e.g., eudaimonism, situation ethics, virtue ethics) eschew general principles as much as possible, stressing instead the great complexity of practical reasoning. For Kant, the moral law is a binding requirement of the categorical imperative. Kant's own applications of the notion are not always convincing, and one cause of confusion in relating Kant's ethics to theories such as expressivism is that it is easy, but mistaken, to suppose that the categorical nature of the imperative means that it cannot be the expression of sentiment, but must derive from something unconditional or necessary, such as the voice of reason.

Duty is that which one must do, or that which can be required of one. The term carries implications of that which is owed (due) to other people, or perhaps to oneself. Universal duties would be owed to persons (or sentient beings) as such, whereas special duties are owed in virtue of specific relations, such as being the child of someone, or having made someone a promise. Duty or obligation is the primary concept of deontological approaches to ethics, but is constructed in other systems out of other notions. In the system of Kant, a perfect duty is one that must be performed whatever the circumstances; imperfect duties may have to give way to the more stringent ones. On another account, perfect duties are those that are correlative with rights in others, while imperfect duties are not. Problems with the concept include the way in which duty needs to be specified (a frequent criticism of Kant is that his notion of duty is too abstract). The concept may also suggest a regimented view of ethical life, in which we are all forced conscripts in a kind of moral army, and may encourage an individualistic and antagonistic view of social relations.

On the most generally accepted account of the externalism/internalism distinction, a theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified for a given person be cognitively accessible to that person, internal to his cognitive perspective; and externalist if it allows that at least some of the justifying factors need not be thus accessible, so that they can be external to the believer's cognitive perspective. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication.

The externalist/internalist distinction has been mainly applied to theories of epistemic justification: It has also been applied in a closely related way to accounts of knowledge and in a rather different way to accounts of belief and thought contents.

The internalist requirement of cognitive accessibility can be interpreted in at least two ways. A strong version of internalism would require that the believer actually be aware of the justifying factors in order to be justified, while a weaker version would require only that he be capable of becoming aware of them by focusing his attention appropriately, without the need for any change of position, new information, etc. Though the phrase ‘cognitively accessible' suggests the weak interpretation, the main intuitive motivation for internalism, viz. the idea that epistemic justification requires that the believer actually have in his cognitive possession a reason for thinking that the belief is true, would require the strong interpretation.

Perhaps the clearest example of an internalist position would be a foundationalist view according to which foundational beliefs pertain to immediately experienced states of mind and other beliefs are justified by standing in cognitively accessible logical or inferential relations to such foundational beliefs. Such a view could count as either a strong or a weak version of internalism, depending on whether actual awareness of the justifying elements or only the capacity to become aware of them is required. Similarly, a coherentist view could also be internalist, if both the beliefs or other states with which a belief must cohere and the coherence relations themselves are reflectively accessible.

It should be carefully noticed that when internalism is construed in this way, it is neither necessary nor sufficient by itself for internalism that the justifying factors literally be internal mental states of the person in question. Not necessary, because on at least some views, e.g., a direct realist view of perception, something other than a mental state of the believer can be cognitively accessible; not sufficient, because there are views according to which at least some mental states need not be actual (strong version) or even possible (weak version) objects of cognitive awareness. Also, on this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them).

The most prominent recent externalist views have been versions of reliabilism, whose requirement for justification is roughly that the belief be produced in a way, or via a process, that makes it objectively likely that the belief is true. What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

The main objection to externalism rests on the intuitive conviction that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by appeal to two sorts of putative intuitive counter-examples to externalism. The first of these challenges the necessity of the externalist conditions by citing beliefs which seem intuitively to be justified, but for which the externalist conditions are not satisfied. The standard examples of this sort are cases where beliefs are produced in some very nonstandard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.

Perhaps the most striking reply to this sort of counter-example, on behalf of reliabilism, holds that the reliability of a cognitive process is to be assessed in 'normal' possible worlds, i.e., in possible worlds that are actually the way our world is commonsensically believed to be, rather than in the world which contains the belief being judged. Since the cognitive processes employed in the Cartesian demon cases are, we may assume, reliable when assessed in this way, the reliabilist can agree that such beliefs are justified. The obvious issue, however, is whether there is an adequate rationale for this construal of reliabilism, so that the reply is not merely an ad hoc stipulation disguised as a principled response.

The second way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Considering the point in application, once again, to reliabilism, the claim is that a person who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible and therefore not epistemically justified in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to bite the bullet and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But, while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, the issue is whether there will always be equally problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, that further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the two premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection that such a belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps satisfies further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?

A rather different use of the terms internalism and externalism has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors. As with justification, a view that appeals to both internal and external elements is standardly classified as an externalist view.

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as direct reference theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc., and not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts from the inside, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors, which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification, for it seems that if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justificatory relations of these sorts, that only internally accessible content can either be justified or justify anything else; but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

A great deal of philosophical effort has been lavished on the attempt to naturalize content, i.e., to explain in non-semantic, non-intentional terms what it is for something to be representational (have content) and what it is for something to have some particular content rather than some other. There appear to be only four types of theory that have been proposed: Theories that ground representation in (1) similarity, (2) covariance, (3) functional role, (4) teleology.

Similarity theories hold that 'r' represents 'x' in virtue of being similar to 'x'. This has seemed hopeless to most as a theory of mental representation because it appears to require that things in the brain must share properties with the things they represent: To represent a cat as furry appears to require something furry in the brain. Perhaps a notion of similarity that is naturalistic and does not involve property sharing can be worked out, but it is not obvious how.

Covariance theories hold that 'r's' representing 'x' is grounded in the fact that 'r's' occurrence covaries with that of 'x'. This is most compelling when one thinks about detection systems: The firing of a neural structure in the visual system is said to represent vertical orientations if its firing covaries with the occurrence of vertical lines in the visual field.

Functional role theories hold that 'r's' representing 'x' is grounded in the functional role 'r' has in the representing system, i.e., on the relations imposed by specific cognitive processes between 'r' and other representations in the system's repertoire. Functional role theories take their cue from such common-sense ideas as that people could not believe that cats are furry if they did not know that cats are animals or that fur is like hair.

Teleological theories hold that 'r' represents 'x' if it is 'r's' function to indicate, i.e., covary with, 'x'. Teleological theories differ depending on the theory of functions they import. Perhaps the most important distinction is that between historical and non-historical theories of functions. Historical theories individuate functional states (hence contents) in a way that is sensitive to the historical development of the state, i.e., to factors such as the way the state was 'learned', or the way it evolved. An historical theory might hold that the function of 'r' is to indicate 'x' only if the capacity to token 'r' was developed (selected, learned) because it indicates 'x'. Thus, a state physically indistinguishable from 'r' but lacking 'r's' historical origins would not represent 'x' according to historical theories.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state may similarly be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that the network of relationships is all that we have. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustment that it requires.

Once again, in the philosophy of mind and language, externalism is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.

However, atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935-) calls the crude causal theory, for example, takes a representation to be a 'cow'-a mental representation with the same content as the word 'cow'-if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how 'cow's must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a 'cow' if it behaves as a 'cow' should behave in inference.

Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke an historical theory of functions, take content to be determined by 'external' factors. Crossing the atomist-holistic distinction with the internalist-externalist distinction thus yields four possible types of theory.

Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.

All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situation surrounding them is appropriately different: 'Situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains identical, through the identity of the way things appear to them, regardless of these differences of surroundings. Partisans of wide content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.

Even so, the distinction between facts and values has outgrown its name: It applies not only to matters of fact vs. matters of value, but also to statements that something is, vs. statements that something ought to be. Roughly, factual statements-'is statements' in the relevant sense-represent some state of affairs as obtaining, whereas normative statements-evaluative and deontic ones-attribute goodness to something, or ascribe, to an agent, an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially expresses a factual statement, and 'By all indications it ought to rain' makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.

Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: Roughly, to have value is to contribute-in a factually analyzable way-to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating, or attributing an obligation, and, on the other hand, saying how the world is.

Fact/value distinctions may be defended by appeal to the notion of intrinsic value, value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value-e.g., moral value-they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: Thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.

Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions-say, appropriate perceptual stimulations-a belief is justified, or constitutes knowledge. Its standards of justification would then be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: Though it supervenes on natural properties, it cannot be analysed wholly in terms of factual statements.

Thus far, belief has been depicted as being all-or-nothing. However, one may also speak of acceptance of a proposition for which we have grounds for thinking it true: Acceptance is governed by epistemic norms, it is partially subject to voluntary control, and it has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent, 'S', exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or God a matter of your believing that free-market economies are desirable or that God exists.

Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some but not all reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as that pro-attitude remains united with his belief that God exists, the belief may persist, and perhaps reasonably so, in a way that an ordinary propositional belief-that would not.

Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge ('PK'), construed as an even broader category. They have proposed various examples of 'PK' that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often accommodated by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats 'PK' as merely the ability to provide a correct answer to a possible question; however, White may be equating 'producing' knowledge in the sense of producing 'the correct answer to a possible question' with 'displaying' knowledge in the sense of manifesting knowledge (White, 1982). The latter can be done even by very young children and some non-human animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that 'h' without believing or accepting that 'h' can be modified so as to illustrate this point. The example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horseraces. If the example is modified so that the hypothetical 'seer' never picks winners but only muses over whether those horses might win, or only predicts that those horses will win, this behaviour should be as much of a candidate for the person's manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.

These considerations expose limitations in Edward Craig's analysis (1990) of the concept of knowing in terms of a person's being a satisfactory informant in relation to an inquirer who wants to find out whether or not 'h'. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, or too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried 'Wolf'). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition that concerns one's having the power to proceed in a way representing the state of affairs causally involved in one's proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.

Knowledge and belief: According to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950 and Ayer, 1956) or conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, but the two may also coexist (the separability thesis).

The incompatibility thesis is sometimes traced to Plato (c. 429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible ('Republic' 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say 'I do not believe she is guilty. I know she is' and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying 'I do not just believe she is guilty, I know she is', where 'just' makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: 'You did not hurt him, you killed him'.

H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that belief always involves uncertainty. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it might be accompanied by confidence as well. Woozley remarks that the test of whether I know something is 'what I can do, where what I can do may include answering questions'. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, 'I am unsure whether my answer is true: Still, I know it is correct'. But this tension Woozley explains using a distinction between conditions under which we are justified in making a claim (such as a claim to know something), and conditions under which the claim we make is true. While 'I know such and such' might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years prior, and yet he is able to give several correct responses to questions such as 'When did the Battle of Hastings occur?' Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066; similarly, he would deny being justified (or having the right to be confident) that 1066 was the correct date. Radford would none the less insist that Jean knows when the Battle occurred, since clearly he remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought, at least, to believe that we have the knowledge we claim, or else our behaviour is 'intentionally misleading'.

Those that agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lack's beliefs about English history is plausible on this Cartesian picture since Jean does not find himself with any beliefs about English history when ne seek them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?) Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1873) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066. Armstrong will grant Radfod that point, in fact, Armstrong suggests that Jean believe that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and subsequently 'guessed' that it took place in 1066, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted of a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one that Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said t know the truth of their belief. Another strategy might be to compare the examine case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say. Externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): For no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President thorough the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is unconventional. Even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge that is unattended with belief. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. As Radford says, in having every reason to suppose that his response is mere guesswork, and he has every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.

Least has been of mention to an approaching view from which 'perception' basis upon itself as a fundamental philosophical topic both for its central place in ant theory of knowledge, and its central place un any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception, (1) It gives 'us' knowledge of the world around 'us'. (2) We are conscious of that world by being aware of 'sensible qualities': Colour, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpreted the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of here being a central, ghostly, conscious self, fed information in the same way that a screen if fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between 'us' and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is epically cute when we considered the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensation of pain. 
Calling such supposed items names like 'sense-data' or 'percepts' exacerbates the tendency, but once the model is in place, the first property, that perception gives 'us' knowledge of the world and its surrounding surfaces, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include 'scepticism' and 'idealism'.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance of the world, than suggesting that the acquaintance we do have been at best indirect. It is pointed out that perceptions are not like sensation, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world for being such-and-such a way, than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining haw we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, than a strange optional extra.

Furthering, perceptual knowledge is knowledge acquired by or through the senses and includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something-that the light has turned green, that the roast is burning, that the melon is overripe, and that it is time to get up-by some sensory means. Seeing that the light has turned green is learning something-that, the light has turned green-by use of the eyes. Feeling that the melon is overripe is coming to know a fact-that the melon is overripe-by one's sense to touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas, see, by the newspapers, that our team has lost again, see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the cases of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we calm for example, hear (by the bell) that someone is at the door and (by the alarm) that its time to get up. When we obtain knowledge in this way, it is clear that unless one sees-hence, comes to know something about the gauge (that it says) and (hence, know) that one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot-in at least in this way-hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition, 'b's' being 'G', obtains when this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.

Perhaps a better strategy is to turn to the notion of explanation itself. Since at least the time of Aristotle, philosophers have emphasized the importance of explanatory knowledge: in its simplest terms, we want to know not only that something is the case but also why it is. This consideration suggests that we define an explanation as an answer to a why-question. Such a definition would, however, be too broad, because some why-questions are requests for consolation ('Why did my son have to die?') or for moral justification ('Why should women not be paid the same as men for the same work?'). It would also be too narrow, because some explanations are responses to how-questions ('How does radar work?') or how-possibly questions ('How is it possible for cats always to land on their feet?').

In its most general sense, 'to explain' means to make clear, to make plain, or to provide understanding. Definitions of this sort are philosophically unhelpful, for the terms used in the definiens are no less problematic than the term to be defined. Moreover, since a wide variety of things require explanation, and since many different types of explanation exist, a more complex treatment is required. To proceed, it is useful to introduce a bit of technical terminology: the term 'explanandum' refers to that which is to be explained; the term 'explanans' refers to that which does the explaining; the explanans and the explanandum taken together constitute the explanation.

One common type of explanation occurs when deliberate human actions are explained in terms of conscious purposes. 'Why did you go to the pharmacy yesterday?' 'Because I had a headache and needed to get some aspirin.' It is tacitly assumed that aspirin is an appropriate medication for headaches and that going to the pharmacy would be an efficient way of getting some. Such explanations are, of course, teleological, referring, as they do, to goals. The explanans is not the realisation of a future goal: if the pharmacy happened to be closed for stocktaking, the aspirin would not have been obtained there, but that would not invalidate the explanation. Some philosophers would say that the antecedent desire to achieve the end is what does the explaining; others might say that the explaining is done by the nature of the goal and the fact that the action promoted the chances of realizing it (Taylor, 1964). It should not automatically be assumed that such explanations are causal. Philosophers differ considerably on whether these explanations are to be framed in terms of causes or reasons, but the distinction cannot be used to show that the relation between reasons and the actions they justify is in no way causal, and there are many differing analyses of such concepts as intention and agency. Expanding the domain beyond consciousness, Freud maintained, in addition, that much human behaviour can be explained in terms of unconscious wishes. Those Freudian explanations should probably be construed as basically causal.

Problems arise when teleological explanations are offered in other contexts. The behaviour of non-human animals is often explained in terms of purpose, e.g., the mouse ran to escape from the cat. In such cases the existence of conscious purpose seems dubious. The situation is still more problematic when a supra-empirical purpose is invoked, e.g., the explanation of living species in terms of God's purposes, or the vitalistic explanation of biological phenomena in terms of an entelechy or vital principle. In recent years an 'anthropic principle' has received attention in cosmology (Barrow and Tipler, 1986). All such explanations have been condemned by many philosophers as anthropomorphic.

Nevertheless, philosophers and scientists often maintain that functional explanations play an important and legitimate role in various sciences, such as evolutionary biology, anthropology and sociology. For example, in the case of the peppered moth in Liverpool, the change in colour from the light phase to the dark phase and back again to the light phase provided adaptation to a changing environment and fulfilled the function of reducing predation on the species. In the study of primitive societies, anthropologists have maintained that various rituals (such as the rain dance), which may be inefficacious in bringing about their manifest goals (producing rain), actually fulfil the latent function of promoting social cohesion at a period of stress (often a drought). Philosophers who admit teleological and/or functional explanations in common sense and science often take pains to argue that such explanations can be analysed entirely in terms of efficient causes, thereby escaping the charge of anthropomorphism (Wright, 1976); again, however, not all philosophers agree.

Causal theories of propositional knowledge differ over whether they deviate from the tripartite analysis by dropping the requirement that one's believing (accepting) that 'h' be justified. The same variation occurs regarding reliability theories, which present the knower as reliable concerning the issue of whether or not 'h', in the sense that some of one's cognitive or epistemic states are such that, given further characteristics of oneself (possibly including relations to factors external to one, of which one may not be aware), it is nomologically necessary (or at least probable) that 'h'. In some versions, the reliability is required to be 'global', in so far as it must concern a nomological (or probabilistic) relationship linking states of the relevant type to the acquisition of true beliefs about a wider range of issues than merely whether or not 'h'. There is also controversy about how to delineate the limits of what constitutes a type of relevant personal state or characteristic. (For example, in a case where Mr Notgot has not been shamming and one does know thereby that someone in the office owns a Ford, is the relevant type something broad, such as a way of forming beliefs about the properties of persons spatially close to one, or instead something narrower, such as a way of forming beliefs about Ford owners in offices partly upon the basis of their relevant testimony?)

One important variety of reliability theory is the conclusive reasons account, which includes a requirement that one's reasons for believing that 'h' be such that, in one's circumstances, if it were not the case that 'h', then, e.g., one would not have the reasons one does for believing that 'h', or, e.g., one would not believe that 'h'. Roughly, the latter is demanded by theories that treat a knower as 'tracking the truth', theories that include the further demand that, roughly, if it were the case that 'h', then one would believe that 'h'. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a 'method' has been used to arrive at the belief that 'h', then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.
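The tracking conditions just described are often given a schematic rendering. The following is a sketch only, not any particular author's own notation, writing $\Box\!\!\rightarrow$ for the subjunctive conditional ('if it were the case that ..., it would be the case that ...'):

```latex
S \text{ knows that } h \iff
\begin{cases}
(1)\ h \text{ is true,} \\[2pt]
(2)\ S \text{ believes that } h, \\[2pt]
(3)\ \neg h \mathrel{\Box\!\!\rightarrow} \neg (S \text{ believes that } h), \\[2pt]
(4)\ h \mathrel{\Box\!\!\rightarrow} S \text{ believes that } h.
\end{cases}
```

On Nozick's method-relativized version, (3) becomes, roughly: if 'h' were false and S were to use method M to decide whether 'h', S would not believe, via M, that 'h'; and (4) is amended analogously.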

But unless more conditions are added to Nozick's analysis, it will be too weak to explain why one lacks knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot's compulsion is not easily changed; (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one; and (c) one forms one's belief that 'h' not by reasoning through a false belief but by basing it upon a true existential generalization of one's evidence.

Nozick's analysis is in addition too strong to permit anyone ever to know that 'h5': 'Some of my beliefs about beliefs might be otherwise, e.g., I might have rejected one of them'. If I know that 'h5', then satisfaction of the antecedent of one of Nozick's conditionals would involve its being false that 'h5', thereby thwarting satisfaction of the consequent's requirement that I not then believe that 'h5'. For the belief that 'h5' is itself one of my beliefs about beliefs (Shope, 1984).

Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge ('PK'), construed as an even broader category. They have proposed various examples of 'PK' that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often accommodated by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats 'PK' as merely the ability to provide a correct answer to a possible question; however, White may be equating 'producing' knowledge in the sense of producing 'the correct answer to a possible question' with 'displaying' knowledge in the sense of manifesting knowledge. The latter can be done even by very young children and some non-human animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that 'h' without believing or accepting that 'h' can be modified so as to illustrate this point. The example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horse races. If the example is modified so that the hypothetical 'seer' never picks winners but only muses over whether those horses might win, or only reports those horses as winning, this behaviour should be as much of a candidate for the person's manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.

These considerations expose limitations in Edward Craig's analysis (1990) of the concept of knowing in terms of a person's being a satisfactory informant in relation to an inquirer who wants to find out whether or not 'h'. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, or too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried 'Wolf'). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one's having the power to proceed in a way that represents the obtaining of the state of affairs that 'h', where that state of affairs is causally involved in one's proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.

Knowledge and belief: according to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950; Ayer, 1956), or conviction (Lehrer, 1974), or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).
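The three theses in play can be stated schematically. Writing 'Kp' for 'S knows that p' and 'Bp' for 'S believes that p' (a sketch for orientation, not any particular author's formalization):

```latex
\begin{align*}
\text{Entailment thesis:} \quad & Kp \rightarrow Bp \\
\text{Incompatibility thesis:} \quad & Kp \rightarrow \neg Bp \\
\text{Separability thesis:} \quad & \Diamond(Kp \wedge \neg Bp) \;\wedge\; \Diamond(Bp \wedge \neg Kp)
\end{align*}
```

The separability thesis, so stated, says only that knowledge and belief can each occur without the other; it leaves open that they often coexist.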

The incompatibility thesis is sometimes traced to Plato (429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible ('Republic' 476-9). But this claim alone would not support the thesis: belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.

A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say 'I do not believe she is guilty. I know she is' and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying 'I do not just believe she is guilty, I know she is', where 'just' makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: 'You did not hurt him, you killed him.'

H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that states of belief always involve uncertainty. Conscious beliefs clearly can involve a high level of confidence, and to suggest that we cease to believe things about which we are completely confident is bizarre.

A.D. Woozley (1953) defends a version of the separability thesis. Woozley's version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it may also be accompanied by confidence. Woozley remarks that the test of whether I know something is 'what I can do', where what I can do may include answering questions. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, 'I am unsure whether my answer is true; still, I know it is correct'. But Woozley explains this tension using a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While 'I know such and such' might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.

Colin Radford (1966) extends Woozley's defence of the separability thesis. In Radford's view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years earlier, and yet he is able to give several correct responses to questions such as 'When did the Battle of Hastings occur?' Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066, he would deny having the belief that the Battle of Hastings took place in 1066; he would likewise deny being sure (or having the right to be sure) that 1066 was the correct date. Radford would none the less insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is 'intentionally misleading'.

Those who agree with Radford's defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain's (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.

D.M. Armstrong (1973) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066; Armstrong will grant Radford that point. In fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible but no more than just possible with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught a false date for the Battle, and subsequently 'guessed' that same date, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.

Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, DC. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford's examinee is in a similar position: even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge unattended by belief. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. As Radford describes the case, Jean has every reason to suppose that his response is mere guesswork, and hence every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.

Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) it gives us knowledge of the world around us; (2) we are conscious of that world by being aware of 'sensible qualities': colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment; (3) such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound; (4) there ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain.
Calling such supposed items names like 'sense-data' or 'percepts' exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include 'scepticism' and 'idealism'.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.

Perceptual knowledge, furthermore, is knowledge acquired by or through the senses, and it includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something (that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up) by some sensory means. Seeing that the light has turned green is learning something (that the light has turned green) by use of the eyes. Feeling that the melon is overripe is coming to know a fact (that the melon is overripe) by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspapers, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, and hence comes to know something about, the gauge (what it says), one does not know what one is described as coming to know by perceptual means. If one cannot hear that the bell is ringing, one cannot, in this way at least, hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition, 'b's' being 'G', obtains. When this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.

And finally, the Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and images. Such states are said to have 'intentionality': they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, and an image of George W. Bush with dreadlocks is inaccurate.)

The Representational Theory of Mind defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry. RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition 'q' from the propositions 'p' and 'if p then q' is (among other things) to have a sequence of thoughts of the form 'p', 'if p then q', 'q'.
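The RTM picture of inference as a transition between contentful states can be given a toy illustration. The sketch below is my own and purely illustrative (the data structures and the function name are assumptions, not part of any theory discussed here): thoughts are modeled as symbolic tokens, and a modus ponens step extends the sequence 'p', 'if p then q' with the token 'q'.

```python
# Toy sketch of RTM's picture of inference: mental states as relations to
# structured representations, reasoning as a sequence of such states.
# All names and encodings here are illustrative assumptions.

def modus_ponens(thoughts):
    """From a list of entertained representations, return the conclusions
    licensed by modus ponens: from p and ('if', p, q), derive q."""
    conclusions = []
    for t in thoughts:
        if isinstance(t, tuple) and t[0] == 'if' and t[1] in thoughts:
            conclusions.append(t[2])
    return conclusions

# Entertain the thoughts p and 'if p then q' ...
entertained = ['p', ('if', 'p', 'q')]
# ... and the inference step appends the thought q to the sequence.
sequence = entertained + modus_ponens(entertained)
print(sequence)  # ['p', ('if', 'p', 'q'), 'q']
```

The point of the sketch is only structural: the transition operates on the form of the representations, which is how CTM (discussed below) will model mental processes as computations.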

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as 'folk psychology') are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)

Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the 'intentional stance' toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational-i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a 'moderate' realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic 'structural' or 'syntactic' properties. The semantic properties of a mental state, however, are determined by its extrinsic properties, e.g., its history and its environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ('what-it's-like') features ('qualia'), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)

Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that non-conceptual representations, percepts ('impressions'), images ('ideas') and the like, are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only non-conceptual representations construed in this way.

Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal-though not in the same way.)

The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to 'see through it' to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of 'symbol-filled arrays.' (See the account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences-qualia themselves-that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual 'scenario' (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is 'correct' (a semantic property) if in the corresponding 'scene' (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the 'phenomenal concept', a conceptual/phenomenal hybrid consisting of a phenomenological 'sample' (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, 'you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.' One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about 'P', without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Cf. Jackson 1982, 1986 and Nagel 1974.)

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties, i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery, hence the designation 'pictorial'; though of course there may be imagery in other modalities, auditory, olfactory, etc., as well.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
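The analog/digital contrast just drawn can be made concrete with a small sketch. The types below are my own illustrative assumptions, not drawn from any of the authors cited: an imagistic representation carries continuously variable magnitudes, while a conceptual representation either has or lacks a given property.

```python
from dataclasses import dataclass

# Illustrative sketch of the analog/digital contrast. The type names and
# fields are assumptions for the example, not from the cited literature.

@dataclass
class ImagisticRep:
    # Analog: represents in virtue of continuously variable properties.
    brightness: float  # any value in [0.0, 1.0]
    loudness: float    # likewise continuously variable

@dataclass
class ConceptualRep:
    # Digital: represents in virtue of properties the representation
    # either has or lacks; a thought is or is not about Elvis.
    about_elvis: bool

dim_image = ImagisticRep(brightness=0.3, loudness=0.1)
brighter = ImagisticRep(brightness=0.6, loudness=0.1)  # continuous variation
elvis_thought = ConceptualRep(about_elvis=True)        # all-or-nothing
```

The float fields can take any intermediate value, mirroring the continuous variation of imagistic properties; the boolean field admits no intermediate degree, mirroring the discreteness of conceptual aboutness.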

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is 'quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially-for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)

Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye's account, images are '(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each 'cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.

Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990, 1994) and Teleological Theories (Fodor 1990, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be Externalists (see the next section) about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are Externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone are internalists (or individualists; cf. Putnam 1975, Fodor 1981).

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both 'narrow' content (determined by intrinsic factors) and 'wide' or 'broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role (or its phenomenology).

Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. CTM thus develops RTM by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be the mental relata of commonsense psychological states, some, the so-called 'subpersonal' or 'sub-doxastic' representations, are not. Though many philosophers believe that CTM can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific RTM.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the 'mental models' of Johnson-Laird 1983, the 'retinal arrays,' 'primal sketches' and '2½-D sketches' of Marr 1982, the 'frames' of Minsky 1974, the 'sub-symbolic' structures of Smolensky 1989, the 'quasi-pictures' of Kosslyn 1980, and the 'interpreted symbol-filled arrays' of Tye 1991, in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).

The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors ('nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (There are, though, 'localist' versions of connectionism on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program (Smolensky 1988, 1991, Chalmers 1993).

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
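The combinatorial picture can be illustrated with a toy sketch. This is a deliberately crude illustration, not anything Fodor himself offers: the names `form` and `content` and the tiny primitive lexicon are invented here. A finite stock of primitives plus a recursive formation rule generates complex representations, and the content of each complex is computed from the contents of its constituents and their structural configuration.

```python
# Toy sketch of compositionality in a symbol system (illustrative only).
# A finite lexicon assigns contents to primitive symbols.
PRIMITIVES = {"john": "John", "mary": "Mary", "loves": "loves"}

def form(rel, a, b):
    """Recursive formation rule: build a complex representation
    from constituent symbols."""
    return (rel, a, b)

def content(rep):
    """Compositional semantics: the content of a complex is fixed by
    the contents of its constituents and their configuration."""
    if isinstance(rep, str):
        return PRIMITIVES[rep]
    rel, a, b = rep
    return f"{content(a)} {content(rel)} {content(b)}"

# Systematicity: a system able to form one combination can form its permutations,
# and the contents differ in a predictable, structure-driven way.
print(content(form("loves", "john", "mary")))  # John loves Mary
print(content(form("loves", "mary", "john")))  # Mary loves John
```

The point of the sketch is purely structural: swapping constituents changes content in a predictable way, which is the classicist's explanation of the systematicity and productivity of thought.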

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of an evolving distribution of 'weight' (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
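The contrast can be made concrete with a minimal sketch, assuming a single-layer perceptron as a stand-in for the far richer networks connectionists actually study (the function names and learning rate are invented for illustration). Learning here is nothing but an evolving distribution of weights under repeated exposure to examples; no hypothesis about the identity conditions of the learned category is ever formulated.

```python
# Toy connectionist learning sketch: connection weights evolve under
# repeated exposure to labelled examples; no hypotheses are formed.

def train(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):            # 'training up' by repeated exposure
        for x, target in examples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Learning is just the adjustment of connection strengths.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Distinguish one class of 'objects' (inputs where both features are on).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(examples)
print([classify(w, b, x) for x, _ in examples])  # [0, 0, 0, 1]
```

Nothing in the trained network states or tests a hypothesis about what the category is; the distinction is carried entirely by the final distribution of weights.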

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition, situations in which classical systems are relatively 'brittle' or 'fragile.'

Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well.)

Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations. Others accept the notion of mental representation, but deny that the computational theory of mind provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
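A minimal numerical sketch of the dynamicist picture, with invented coupling coefficients and not drawn from van Gelder's own examples (his centrifugal-governor illustration has the same flavour): the 'cognitive state' is a pair of continuous, mutually determining quantities evolving in time under coupled update equations, with no discrete symbols manipulated anywhere.

```python
# Toy dynamical-system sketch: continuous state variables evolve together;
# there is no rule-governed sequence of discrete symbolic states.

def step(x, y, dt=0.01):
    # Each variable's rate of change depends on both variables at once:
    # mutually determining components of one total system state.
    dx = -0.5 * x + 0.3 * y
    dy = -0.2 * y + 0.1 * x
    return x + dx * dt, y + dy * dt

x, y = 1.0, 0.0        # initial total state of the coupled system
for _ in range(1000):  # evolve (Euler-discretized) through t = 10
    x, y = step(x, y)
# The total state relaxes smoothly toward equilibrium rather than jumping
# between discrete symbolic configurations.
```

What varies over time is a trajectory through a continuous state space; any 'representation' here would be read off the values of the state variables, not off tokens of symbols.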

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you take snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

It is also widely held that in addition to having such properties as reference, truth-conditions and truth (so-called extensional properties), expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions, i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.

Theories of representational content may be classified according to whether they are atomistic or holistic, and according to whether they are externalist or internalist. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state similarly may be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that a network of relationships is all that there is. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustment that it requires.

Again, in the philosophy of mind and language, externalism is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.

However, atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935-) calls the crude causal theory, for example, takes a representation to be a |cow| (a mental representation with the same content as the word 'cow') if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how |cow|s must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a |cow| if it behaves like a |cow| should behave in inference.

Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke a historical theory of functions, take content to be determined by 'external' factors. Crossing the atomist-holist distinction with the internalist-externalist distinction thus yields four possible kinds of theory of content.

Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
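Fodor's proposal can be pictured schematically. In this deliberately crude sketch the contexts and contents are toy placeholders (the standard Twin-Earth labels), not a serious analysis: narrow content is modelled as a mapping which, given the external context, yields the wide content, so that molecule-for-molecule twins share the mapping while their environments supply different arguments to it.

```python
# Schematic sketch of narrow content as a function from contexts to wide
# contents (after Fodor 1987); contexts and contents are invented placeholders.

def narrow_water_content(context):
    """One and the same narrow content, shared by internal twins,
    yields different wide contents in different external contexts."""
    return {"Earth": "H2O", "Twin Earth": "XYZ"}[context]

# Twins share narrow content (the same function)...
print(narrow_water_content("Earth"))       # H2O
print(narrow_water_content("Twin Earth"))  # XYZ
# ...but their wide contents differ because the external contexts differ.
```

The design point is only that identity of narrow content is compatible with difference of wide content: the same function, evaluated at different contexts, gives different values.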

All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by the term 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situation surrounding them is appropriately different: 'situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains identical, through the identity of the way things appear to them, regardless of these differences of surroundings. Partisans of wide content may doubt whether there is any content in this sense that is narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.

Even so, the distinction between facts and values has outgrown its name: it applies not only to matters of fact vs. matters of value, but also to statements that something is, vs. statements that something ought to be. Roughly, factual statements ('is statements' in the relevant sense) represent some state of affairs as obtaining, whereas normative statements (evaluative and deontic ones) attribute goodness to something, or ascribe, to an agent, an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially expresses a statement of fact, and 'By all indications it ought to rain' makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.

Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute, in a factually analysable way, to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating, or attributing an obligation, and, on the other hand, saying how the world is.

Fact/value distinctions may be defended by appeal to the notion of intrinsic value, value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value, e.g., moral value, they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.

Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions, say appropriate perceptual stimulations, a belief is justified, or constitutes knowledge. Its standards of justification would then be like standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual statements.

Thus far, belief has been depicted as all-or-nothing. A related notion is that of acceptance: we accept a proposition when we have grounds for thinking it true; acceptance is governed by epistemic norms, is at least partially subject to voluntary control, and has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.

Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent 'S' exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or God a matter of your believing that free-market economies are desirable or that God exists.

Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.

The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.

Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this pro-attitude is united with his belief that God exists, the belief may resist the evidence, and perhaps reasonably so, in a way that an ordinary propositional belief would not.

A correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point once again to reliabilism, the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.

One sort of response to this latter sort of objection is to 'bite the bullet' and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, the questions remain whether there are further problematic cases that they cannot handle, and whether there is any clear motivation for the additional requirements other than the general internalist view of justification that externalists are committed to reject.

A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, that fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other need not be. The internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is still not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.

An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., being the result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.

Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification, not knowledge?

A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; according to an externalist view, content is significantly affected by such external factors. (A view that appeals to both internal and external elements is standardly classified as externalist.)

As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment, e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc., not just on what is going on internally in his mind or brain.

An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of these factors, which will not in general be available to the person whose belief or thought is in question.

The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content, and the justified status that content may confer on further beliefs, will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else: but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.

According to the act/object analysis of experience, every experience with contentual representation involves an object of experience, an act of awareness has related the subject (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects, whatever is perceived, but also to experiences like hallucinating and dream experiences, which do not. Such experiences are, nonetheless, less appearing to represent of something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which we have treated as properties, Meinongian objects, which may not exist or have any form of being, and, more commonly, private mental entities with sensory qualities. (We have now usually applied the term ‘sense-data' to the latter, but has also been used as a general term for objects f sense experiences, in the work of G. E., Moore.) Its terms of representative realism, objects of perceptions, of which we are ‘indirectly aware' are always distinct from objects of experience, of which we are ‘directly aware'. Meinongian, however, may treat objects of perception as existing objects of perception, least there is mention, Meinong's most famous doctrine derives from the problem of intentionality, which led him to countenance objects, such as the golden mountain, that is capable of being the object of thought, although they do not actually exist. This doctrine was one of the principle's targets of Russell's theory of ‘definitive descriptions', however, it came as part o a complex and interesting package of concept if the theory of meaning, and scholars are not united in what supposedly that Russell was fair to it. Meinong's works include "Über Annahmen" (1907), trs. as "On Assumptions" (1983), and "Über Möglichkeit und Wahrschein ichkeit" (1915). 
Nonetheless, most philosophers will feel that the Meinongian's acceptance of impossible objects is too high a price to pay for these benefits.

A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. Yet in terms of the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present 'us' with an object without accepting that it actually does. The semantic argument is more impressive, but is nonetheless answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connection with the adverbial theory. Apparent reference to, and quantification over, objects of experience can be handled by analysing them as reference to, and quantification over, experiences themselves, tacitly typed according to content. Thus, 'the after-image that John experienced was green' becomes 'John's after-image experience was an experience of green', and 'Macbeth saw something that his wife did not see' becomes 'Macbeth had a visual experience that his wife did not have'.

Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions. For example, we might identify Susy's experience of a rough surface beneath her hand with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it that has somehow been blocked.

This position has attractions. It does full justice to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physical/functionalist account of belief and other intentional states. However, pure cognitivism is undermined by its failure to accommodate the fact that experiences have a felt character that cannot be reduced to their content.

The adverbial theory of experience advocates that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example,

Rod is experiencing a pink square.

is rewritten as:

Rod is experiencing (pink square)ly.

The adverbial theory is thus an attempt to provide a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound basic intuitions, and there is reason to believe that an effective development of the theory is possible.

The relevant intuitions are: (i) that when we say that someone is experiencing 'an A', or has an experience 'of an A', we are using this content-expression to specify the type of thing that the experience is especially apt to fit; (ii) that doing this is a matter of saying something about the experience itself (and perhaps also about the normal causes of like experiences); and (iii) that there is no good reason to suppose that it involves the description of an object of which the experience is 'of'. Thus the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.

Perhaps the most important criticism of the adverbial theory is the 'many-property problem', according to which the theory does not have the resources to distinguish between, e.g.,

(1) Frank has an experience of a brown triangle

and:

(2) Frank has an experience of brown and an experience

of a triangle,

which (1) entails but which does not entail (1). The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience that is both brown and triangular, while that of (2) allows for the possibility of two objects of experience, one brown and the other triangular. Note, however, that (1) is equivalent to:

(1*) Frank has an experience of something's being

both brown and triangular,

and (2) is equivalent to:

(2*) Frank has an experience of something's being

brown and an experience of something's being triangular,

and the difference between these can be explained quite simply in terms of logical scope, without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase 'a brown triangle' in (1) does the same work as the clause 'something's being both brown and triangular' in (1*). This is perfectly compatible with the view that it also has the 'adverbial' function of modifying the verb 'has an experience of', for it then specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there be something both brown and triangular before Frank).
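The scope distinction at issue can be made explicit in first-order notation. This is only an illustrative sketch: the predicate letters and the experience operator E (read 'Frank has an experience of its being the case that ...') are not part of the original discussion.

```latex
% (1*): a single thing is experienced as both brown and triangular
(1^{*}) \qquad E\,\bigl[\,\exists x\,(\mathrm{Brown}(x) \wedge \mathrm{Triangular}(x))\,\bigr]

% (2*): brownness and triangularity are experienced, possibly of different things
(2^{*}) \qquad E\,\bigl[\,\exists x\,\mathrm{Brown}(x)\,\bigr] \;\wedge\; E\,\bigl[\,\exists y\,\mathrm{Triangular}(y)\,\bigr]
```

On the natural assumption that the experience operator distributes over conjunction within its scope, (1*) entails (2*) but not conversely, mirroring the asymmetry between (1) and (2) without any appeal to objects of experience.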

A final position that should be mentioned is the state theory, according to which a sense experience of an 'A' is an occurrent, non-relational state of the kind that the subject would be in when perceiving an 'A'. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuition.

Perceptual knowledge is knowledge acquired by or through the senses; this includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something (that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up) by some sensory means. Seeing that the light has turned green is learning that the light has turned green by use of the eyes. Feeling that the melon is overripe is coming to know that the melon is overripe by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Seeing a rotten kumquat is not at all like smelling, tasting or feeling a rotten kumquat, yet all these experiences can result in the same piece of knowledge: knowledge that the kumquat is rotten. Although the experiences are very different, they must, if they are to yield knowledge, embody information about the kumquat: the information that it is rotten. Seeing that the fruit is rotten differs from smelling that it is rotten not in what is known, but in how it is known. In each case the information has the same source (the rotten kumquat), but it is, so to speak, delivered via different channels and coded in different experiences.

It is important to avoid confusing perceptual knowledge of facts (e.g., that the kumquat is rotten) with the perception of objects (e.g., rotten kumquats). It is one thing to perceive a rotten kumquat, quite another to know, by seeing or tasting, that it is a rotten kumquat. Some people do not know what kumquats smell like; when they smell a rotten kumquat they may think that this is the way this strange fruit is supposed to smell, and so fail to realize from the smell (fail to smell that) it is rotten. In such cases people see and smell rotten kumquats, and in this sense perceive rotten kumquats, without ever knowing that they are kumquats, let alone rotten kumquats. They cannot, at least not by seeing and smelling, and not until they have learned something about [rotten] kumquats, come to know that what they are seeing or smelling is a [rotten] kumquat. Since our topic is perceptual knowledge (knowing, by sensory means, that something is 'F'), the question is what, beyond the perception of F's, is needed to see that, and thereby know that, they are 'F'. The question is not how we see kumquats (for even the ignorant can do this), but how we know (if, indeed, we do) what it is that we see.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the newspapers, that our team has lost again; we see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noisemakers so that we can, for example, hear (by the alarm) that someone is at the door and (by the bell) that it is time to get up. One who could not see, and hence come to know something about, the gauge (that it reads 'empty'), the newspaper (what it says) or the person's expression would not see, hence would not know, what we have described as coming to know. If one cannot hear that the bell is ringing, one cannot, at least not in this way, hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition obtains: 'b's being 'G'. The indirect knowledge that 'a' is 'F' is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.

Though perceptual knowledge about objects is often, in this way, dependent on knowledge of facts about different objects, the derived knowledge is sometimes about the same object. That is, we see that 'a' is 'F' by seeing, not that some other object is 'G', but that 'a' itself is 'G'. We see, by her expression, that she is nervous. She tells that the fabric is silk (not polyester) by the characteristic 'greasy' feel of the fabric itself (not, as I do, by what is printed on the label). We tell whether it is an oak tree, a Porsche, a geranium, an igneous rock or a misprint by its shape, colour, texture, size, behaviour and distinctive markings. Perceptual knowledge of this sort is also derived: derived from the more basic facts (about 'a') used to make the identification. In this case the perceptual knowledge is still indirect because, although the same object is involved, the facts we come to know about it are different from the facts that enable us to know it.

We sometimes describe derived knowledge as inferential, but this is misleading. At the conscious level there is no passage of the mind from premise to conclusion, no reasoning, no problem-solving. The observer, the one who sees that 'a' is 'F' by seeing that 'b' (or 'a' itself) is 'G', need not be, and typically is not, aware of any process of inference, any passage of the mind from one belief to another. The resulting knowledge, though logically derivative, is psychologically immediate. I could see that she was getting angry, so I moved my hand. I did not, at least not at any conscious level, infer (from her expression and behaviour) that she was getting angry. I could simply see (or so it seemed to me) that she was getting angry. It is this psychological immediacy that makes indirect perceptual knowledge a species of perceptual knowledge.

The psychological immediacy that characterizes so much of our perceptual knowledge, even (sometimes) the most indirect and derived forms of it, does not mean that no learning is required to know in this way. One is not born with (and may, in fact, never develop) the ability to recognize daffodils, muskrats and angry companions. It is only after long experience that one is able visually to identify such things. Beginners may do something corresponding to inference: they recognize relevant features of trees, birds and flowers, features they already know how to identify perceptually, and then infer (conclude), on the basis of what they see, and under the guidance of more expert observers, that it is an oak, a finch or a geranium. But the experts (and we are all experts on many aspects of our familiar surroundings) do not typically go through such a process. The expert just sees that it is an oak, a finch or a geranium. The perceptual knowledge of the expert is still dependent, of course, since even an expert cannot see what kind of flower it is if she cannot first see its colour and shape; but the expert has developed identificatory skills that no longer require the sort of conscious inferential process that characterizes a beginner's efforts.

Coming to know that 'a' is 'F' by seeing that 'b' is 'G' obviously requires some background assumption on the part of the observer, an assumption to the effect that 'a' is 'F' (or perhaps only probably 'F') when 'b' is 'G'. If one did not assume (take for granted) that the gauge was properly connected, and did not thereby assume that it would not register 'empty' unless the tank was nearly empty, then even if one could see that it registered 'empty', one would not learn, hence would not see, that one needed gas. At least one would not see it by consulting the gauge. Likewise, in trying to identify birds, it is no use being able to see their markings if one does not know something about which birds have which marks: something of the form 'a bird with these markings is (probably) a finch'.

It would seem, moreover, that these background assumptions, if they are to yield knowledge that 'a' is 'F' (as they must if the observer is to see, by b's being G, that 'a' is 'F'), must themselves qualify as knowledge. For if one does not know this background fact, if one does not know whether 'a' is 'F' when 'b' is 'G', then one's knowledge of b's being G is, taken by itself, powerless to generate the knowledge that 'a' is 'F'. If the conclusion is to be known to be true, the premises used to reach that conclusion must themselves be known to be true; or so it would seem.

Externalists, however, argue that the indirect knowledge that 'a' is 'F', though it may depend on the knowledge that 'b' is 'G', does not require knowledge of the connecting fact, the fact that 'a' is 'F' when 'b' is 'G'. Simple belief (or perhaps justified belief; there are stronger and weaker versions of externalism) in the connecting fact is sufficient to confer knowledge of the connected fact. Even if, strictly speaking, I do not know that she is nervous whenever she fidgets like that, I can nonetheless see (hence recognize, hence know) that she is nervous, by the way she fidgets, if I (correctly) assume that this behaviour is a reliable expression of nervousness. One need not know that the gauge is working well to make observations (acquire observational knowledge) with it. All that is required, besides the observer's believing that the gauge is reliable, is that the gauge in fact be reliable, i.e., that the observer's background beliefs be true. Critics of externalism have been quick to point out that this theory has the unpalatable consequence that knowledge can be made to rest on lucky hunches (that turn out to be true) and unsupported (even irrational) beliefs. Surely, internalists argue, if one is going to know that 'a' is 'F' on the basis of b's being G, one should have (as a bare minimum) some justification for thinking that 'a' is 'F', or is probably 'F', when 'b' is 'G'.

However these matters stand (with the possible exception of extreme externalism), indirect perception obviously requires some understanding (knowledge? justification? belief?) of the general relationship between the fact one comes to know (that 'a' is 'F') and the facts (that 'b' is 'G') that enable one to know it. And it is this requirement on background knowledge or understanding that leads to questions about the possibility of indirect perceptual knowledge. Is it really knowledge? The first question is inspired by sceptical doubts about whether we can ever know the connecting facts in question. How is it possible to learn, to acquire knowledge of, the connecting facts, knowledge of which is necessary if one is to see (by b's being 'G') that 'a' is 'F'? These connecting facts do not appear to be perceptually knowable. On the contrary, they appear to be general truths knowable (if knowable at all) only by inductive inference from past observations. And if one is sceptical about obtaining knowledge in this indirect, inductive way, one is perforce sceptical about all the knowledge, including indirect perceptual knowledge of the sort described above, that depends on it.

Even if one puts aside such sceptical questions, there remains a legitimate concern about the perceptual character of this kind of knowledge. If one sees that 'a' is 'F' by seeing that 'b' is 'G', is one really seeing that 'a' is 'F'? Is not perception merely a part of the process, and from an epistemological standpoint not even the most significant part, whereby one comes to know that 'a' is 'F'? One must, it is true, see that 'b' is 'G', but this is only one of the premises needed to reach the conclusion (knowledge) that 'a' is 'F'. There is also the background knowledge that is essential to the process. If we think of a theory as any factual proposition, or set of factual propositions, that cannot itself be known in some direct observational way, we can express this worry by saying that indirect perception is always theory-loaded: seeing (indirectly) that 'a' is 'F' is only possible if the observer already has knowledge of (justification for, belief in) some theory, the theory 'connecting' the fact one comes to know (that 'a' is 'F') with the fact (that 'b' is 'G') that enables one to know it.

This, of course, reverses the standard foundationalist picture of human knowledge. Instead of theoretical knowledge depending on, and being derived from, perception, perception of the indirect sort presupposes a prior knowledge of theories.

Foundationalists are quick to point out that this apparent reversal in the structure of human knowledge is only apparent. Our indirect perceptual knowledge of fact depends on theory, yes, but this merely shows that indirect perceptual knowledge is not part of the foundation. To reach the kind of perceptual knowledge that lies at the foundation, we need to look at a form of perception that is purified of all theoretical elements. This, then, will be perceptual knowledge pure and direct. No background knowledge or assumptions about connecting regularities are needed in direct perception, because the known facts are presented directly and immediately, and not (as in indirect perception) on the basis of other facts. In direct perception all the justification (needed for knowledge) is right there in the experience itself.

What, then, about the possibility of perceptual knowledge pure and direct: the possibility of coming to know, on the basis of sensory experience, that 'a' is 'F', where this does not require, and in no way presupposes, background assumptions or knowledge with a source outside the experience itself? Where is this epistemological 'pure gold' to be found?

There are, basically, two views about the nature of direct perceptual knowledge (coherentists would deny that any of our knowledge is basic in this sense). We can call these views (following traditional nomenclature) direct realism and representationalism, or representative realism. A representationalist restricts direct perceptual knowledge to objects of a very special sort: ideas, impressions or sensations (sometimes called sense-data), entities in the mind of the observer. One directly perceives a fact, i.e., that 'b' is 'G', only when 'b' is a mental entity of some sort (a subjective appearance or sense-datum) and 'G' is a property of this datum. Knowledge of these sensory states is supposed to be certain and infallible. These sensory facts are, so to speak, right up against the mind's eye. One cannot be mistaken about them, for they are, in reality, facts about the way things appear to be, and one cannot be mistaken about the way things appear to be. Normal perception of external conditions, then, turns out to be (always) a type of indirect perception. One 'sees' that there is a tomato in front of one by seeing that the appearances (of the tomato) have a certain quality (reddish and bulgy) and inferring (this is typically said to be automatic and unconscious), on the basis of certain background assumptions (e.g., that there typically is a tomato in front of one when one has experiences of this sort), that there is a tomato in front of one. All knowledge of objective reality, then, even what common sense regards as the most direct perceptual knowledge, is based on an even more direct knowledge of the appearances.

For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is 'loaded' with the theory that there is some regular, uniform correlation between the way things appear (known in a perceptually direct way) and the way things actually are (known, if known at all, in a perceptually indirect way).

The second view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though direct realists are willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge and belief. The justification needed for the knowledge is right there in the experience itself.

To understand the way this is supposed to work, consider an ordinary example. 'S' identifies a banana (learns that it is a banana) by noting its shape and colour, perhaps even tasting and smelling it (to make sure it is not wax). In this case the perceptual knowledge that it is a banana is, the direct realist admits, indirect: dependent on S's perceptual knowledge of its shape, colour, smell and taste. 'S' learns that it is a banana by seeing that it is yellow, banana-shaped, etc. Nonetheless, S's perception of the banana's colour and shape is direct. 'S' does not see that the object is yellow, for example, by seeing (knowing, believing) anything more basic, either about the banana or about anything else (e.g., his sensations of the banana). 'S' has learned to identify such features, not to make an inference, even an unconscious inference, from other things he believes. What 'S' has acquired is a cognitive skill, a disposition to believe of the yellow objects he sees that they are yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S's identificatory success will depend on his operating in certain special conditions, of course. 'S' will not, perhaps, be able visually to identify yellow objects in dramatically reduced lighting, at funny viewing angles, or when afflicted with certain nervous disorders. But these facts about when 'S' can see that something is yellow do not show that his perceptual knowledge (that 'a' is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. They merely show that direct perceptual knowledge is the result of exercising a skill, an identificatory skill, that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane; he needs normal conditions to do what he has learned to do. So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do. They need normal conditions to see, for example, that something is yellow. But they do not, any more than the basketball player, have to know they are in these conditions to do what being in these conditions enables them to do.

This means, of course, that for the direct realist direct perceptual knowledge is fallible and corrigible. Whether 'S' sees that 'a' is 'F' depends on his being caused to believe that 'a' is 'F' in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then 'S' sees (hence, knows) that 'a' is 'F'. If they are not, he does not. Whether or not 'S' knows depends, then, not on what else (if anything) 'S' believes, but on the circumstances in which 'S' comes to believe. This being so, this type of direct realism is a form of externalism. Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed (by way of justification) for such knowledge has been reduced. Background knowledge is not needed.

This means that the foundations of knowledge are fallible. Nonetheless, though fallible, they are in no way derived; that is what makes them foundations. Even if they are brittle, as foundations sometimes are, everything else rests upon them.

An idea, in one traditional sense, is a concept of reason that is transcendent but nonempirical: a conception or ideal thought that potentially or actually exists in the mind as a product of mental activity. In the philosophy of Plato, an idea is an archetype of which a corresponding being in phenomenal reality is an imperfect replica; in Hegel, it is absolute truth, the conception and ultimate product of reason. (In ordinary usage, by contrast, an 'idea' may be no more than a mental image of something remembered.)

Imagination, conceivably, is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed enables one to confront and deal with reality by using the creative powers of the mind. Fantasy is characteristically well removed from reality, and the full power of fantasy over reason is a degree of insanity; fancy, by contrast, gives the products of the imagination free rein under one's command. One is in command of one's fancy, while it is exactly the mark of the neurotic that he is possessed by his own fantasy.

A fact is something possessing actuality, existence or essence: something that exists objectively, a real occurrence or event (as in 'proving the facts of the case'), or something believed to be true or real and determined by evidence. Usages such as 'allegation of fact' and 'the true facts of the case may never be known' may occasion qualms among critics who insist that facts can only be true, but such usages are often useful for emphasis. The discovery or determination of facts is thus a matter of accurate information established by evidence. The contrast is with fiction: literature that treats real people or events as if they were fictional, or that uses real people or events as essential elements in an otherwise fictional rendition. The factitious, similarly, is what is produced artificially rather than by a natural process, and so lacks authenticity or genuineness.

A theory, more seriously, is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena. It comprises explanatory statements, accepted principles and methods of analysis; in mathematics, a theory is a set of theorems constituting a systematic view of a branch of the subject. More loosely, a theory is a belief or principle that guides action or assists comprehension or judgement, often an ascription based on limited information or knowledge: a conjecture or speculative assumption. 'Theoretical' means of, relating to, or based on theory (theory as restricted from practice, as in 'theoretical physics'), or given to speculative theorizing. A theorem, in mathematics, is a proposition that has been or is to be proved from explicit assumptions; the term marks a concern with theoretical assessment or hypothetical theorizing rather than practical considerations.

Looking back a century, one can see a striking degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. More striking still is the apparent obscurity and abstruseness of those concerns, which seem at first glance far removed from the great debates of previous centuries, between 'realism' and 'idealism', say, or 'rationalism' and 'empiricism'.

Thus, no matter what the current debate or discussion, the central issue often concerns conceptual and contentual representation, for to be without concepts is to be without ideas. What is it that makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding? The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world.

Contributions to this study include the theory of 'speech acts' and the investigation of communication, especially the relationship between words and 'ideas', and between words and the 'world'. Content, in this sense, is that which is expressed by an utterance or sentence: the proposition or claim made about the world. By extension, the content of a predicate (any expression capable of connecting with one or more singular terms to make a sentence) is the condition that the entities referred to may satisfy, in which case the resulting sentence will be true. Consequently a predicate may be thought of as a function from things to sentences, or even to truth-values; the same holds for other sub-sentential components that contribute to the content of the sentences containing them. The nature of content is the central concern of the philosophy of language.

What a person expresses by a sentence often depends on the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'beech', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situations surrounding them are appropriately different: 'situation' may here include the actual objects they perceive, the chemical or physical kinds of objects in the world they inhabit, the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought that remains identical, through the identity of the way things appear, regardless of these differences of surroundings. Partisans of wide (or 'broad') content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being analysed as narrow content plus context.

Nevertheless, it is common to suppose that people are characterized by their rationality, and the most evident display of our rationality is the capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is perennially tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world. But the model has been attacked, notably by Ludwig Wittgenstein (1889-1951), whose most influential application of these ideas was in the philosophy of mind. Wittgenstein explores the role that reports of introspection, or of sensations, intentions, or beliefs, actually play in our social lives, in order to undermine the Cartesian picture on which they function to describe the goings-on in an inner theatre of the mind of which the subject is the lone spectator. The passages that have subsequently become known as the ‘rule-following' considerations and the ‘private language argument' are among the fundamental topics of modern philosophy of language and mind, although their precise interpretation is endlessly controversial.

Related is the hypothesis especially associated with Jerry Fodor (1935-), known for his ‘resolute realism' about the nature of mental functioning: that thinking occurs in a language different from one's ordinary native language, but underlying and explaining our competence with it. The idea is in some respects a development of the notion of an innate universal grammar (Chomsky). Just as a computer program is a linguistically complex set of instructions whose execution explains the machine's surface behaviour, so, on this hypothesis, an inner ‘language of thought' is the medium whose operations underlie and explain our linguistic competence.

As an explanation of ordinary language-learning and competence, the hypothesis has not found universal favour, since it appears to explain ordinary representational powers only by invoking an innate language whose own representational powers are mysteriously a biological given. An alternative is the ‘theory-theory': the view that everyday attributions of intentions, beliefs, and meanings to other persons proceed by means of the tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with ‘functionalism', according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending upon which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on.
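The functionalist idea that a psychological state is identified by its network of causes and effects is often introduced with a toy ‘machine table'. The states, stimuli, and responses below are invented purely for illustration; this is a minimal sketch of the causal-role idea, not a serious psychological model.

```python
# A toy machine table: each mental state is identified not by what
# it is made of, but by its causal role -- which inputs produce it
# and which outputs and transitions it in turn produces.
machine_table = {
    # (current_state, stimulus) -> (behavioural_output, next_state)
    ("calm", "tissue damage"): ("wince", "pain"),
    ("pain", "tissue damage"): ("cry",   "pain"),
    ("pain", "aspirin"):       ("relax", "calm"),
}

def step(state, stimulus):
    """Return the (output, next_state) pair the table dictates."""
    return machine_table[(state, stimulus)]

state = "calm"
for stimulus in ["tissue damage", "tissue damage", "aspirin"]:
    output, state = step(state, stimulus)
    print(stimulus, "->", output, "| now in:", state)
```

On this picture, anything realizing the same table, whatever its physical make-up, would count as being in the same states: that is the functionalist point the table is meant to dramatize.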

The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the nonexistence of a medium in which this theory can be couched, since the child learns the minds of others and the meaning of terms in its native language simultaneously. On the rival ‘simulation' view, understanding is not gained by the tacit use of a ‘theory' enabling ‘us' to infer what thoughts or intentions explain their actions, but by re-living the situation ‘in their shoes', or from their point of view, and thereby understanding what they experienced and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they were our own. The suggestion is a modern development of the ‘Verstehen' tradition associated with Dilthey (1833-1911), Weber (1864-1920) and Collingwood (1889-1943).

Any process of drawing a conclusion from a set of premises may be called a process of reasoning. If the conclusion concerns what to do, the process is called practical reasoning; otherwise, pure or theoretical reasoning. Evidently, such processes may be good or bad: if they are good, the premises support or even entail the conclusion drawn; if they are bad, the premises offer no support to the conclusion. Formal logic studies the cases in which conclusions are validly drawn from premises, but little human reasoning is overtly of the forms logicians identify. Partly, this is because we are often concerned to draw conclusions that ‘go beyond' our premises, in the way that conclusions of logically valid arguments do not: induction, the process of using evidence to reach a wider conclusion, and abduction, inference to the best explanation, are of this kind, though pessimists about confirmation theory deny that we can assess the results of abduction in terms of probability. Deduction, by contrast, is a process of reasoning confined to cases in which the conclusion follows from the premises, i.e., the inference is logically valid; deducibility can even be defined syntactically, without any reference to the intended interpretation of the theory. Furthermore, as we reason we make use of an indefinite lore, or commonsense set of presuppositions, about what is likely or not; one task of an automated reasoning project is to mimic this casual use of knowledge of the way of the world in computer programs.
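The logician's notion of validity can be made mechanical for the special case of propositional arguments: an argument is valid just in case no assignment of truth-values makes every premise true and the conclusion false. The following is a minimal sketch, with premises and conclusion encoded as Boolean functions; the function name `valid` and the encoding are our own illustration, not a standard library facility.

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Brute-force truth-table test of propositional validity:
    search for an assignment making every premise true and the
    conclusion false; if none exists, the argument is valid."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # counterexample found
    return True

# Modus ponens: P, P -> Q  therefore  Q   (valid)
print(valid([lambda p, q: p, lambda p, q: (not p) or q],
            lambda p, q: q, 2))   # True

# Affirming the consequent: Q, P -> Q  therefore  P   (invalid)
print(valid([lambda p, q: q, lambda p, q: (not p) or q],
            lambda p, q: p, 2))   # False
```

The exhaustive search over assignments is exactly what makes deductive validity, unlike induction or abduction, a matter that can be settled without reference to the interpretation of the non-logical vocabulary.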

A ‘theory' usually emerges as a body of (supposed) truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an ideal for organizing a theory: one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory rather more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called ‘axioms'. David Hilbert (1862-1943) had argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could be made objects of mathematical investigation.
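The sense in which all the truths of an organized theory are ‘contained in' a few axioms can be sketched computationally: starting from a set of axioms and a set of inference rules, one computes the deductive closure. The toy theory and the names `closure`, `axioms`, and `rules` below are illustrative assumptions, not anything drawn from the text.

```python
def closure(axioms, rules):
    """Compute the set of derivable truths: begin with the axioms
    and repeatedly apply the inference rules (each a pair of
    premises and a conclusion) until nothing new follows."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# Toy theory: two axioms; rules as (premises, conclusion) pairs.
axioms = {"A", "B"}
rules = [(("A", "B"), "C"), (("C",), "D")]
print(sorted(closure(axioms, rules)))  # ['A', 'B', 'C', 'D']
```

Here the two axioms suffice to generate the whole four-truth theory, which is the tractability the axiomatic method promises; Gödel's result, discussed below, concerns the limits of this promise for arithmetic.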

A theory, in the philosophy of science, is a generalization or set of generalizations purportedly making reference to unobservable entities, e.g., atoms, genes, quarks, unconscious wishes. The ideal gas law, for example, refers only to such observables as pressure, temperature, and volume, whereas the ‘molecular-kinetic theory' refers to molecules and their properties. Although an older usage suggests a lack of adequate evidence in support of a claim (‘merely a theory'), current philosophical usage does not carry that connotation. In the tradition (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few governing principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused' by them. When the principles were taken as epistemologically prior, that is, as ‘axioms', they were taken to be either epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or (inclusive ‘or') such that all truths do indeed follow from them by deductive inferences. Gödel (1931) showed, in the spirit of Hilbert, treating axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in that class, would be too small to capture all of the truths.
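For reference, the result alluded to here is standardly stated as follows (a modern textbook formulation, not the text's own wording):

```latex
\textbf{First incompleteness theorem (G\"odel, 1931).}
Let $T$ be a consistent, effectively axiomatized theory that
interprets elementary arithmetic. Then there is a sentence $G_T$
in the language of $T$ such that neither $G_T$ nor $\neg G_T$ is
provable in $T$. Hence no effectively decidable set of axioms
suffices to derive all arithmetical truths.
```

The requirement that the axiom set be ‘effectively decidable' is what the text gestures at in saying we could decide, of any proposition, whether or not it was in the class of axioms.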

The notion of truth occurs with remarkable frequency in our reflections on language, thought and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to conclusions is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. In order to assess the plausibility of such theses, and in order to refine them and to explain why they hold (if they do), we require some view of what truth is: a theory that would account for its properties and its relations to other matters. Thus, there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.

Such a theory, however, has been notoriously elusive. The ancient idea that truth is some sort of ‘correspondence with reality' has still never been articulated satisfactorily: the nature of the alleged ‘correspondence' and the alleged ‘reality' remain objectionably obscure. Yet the familiar alternative suggestions, that true beliefs are those that are ‘mutually coherent', or ‘pragmatically useful', or ‘verifiable in suitable conditions', have each been confronted with persuasive counterexamples. A twentieth-century departure from these traditional analyses is the view that truth is not a property at all: the syntactic form of the predicate ‘is true' distorts its real semantic character, which is not to describe propositions but to endorse them. But this radical approach is also faced with difficulties, and suggests, somewhat counterintuitively, that truth cannot have the vital theoretical role in semantics, epistemology and elsewhere that we are naturally inclined to give it. Thus, truth threatens to remain one of the most enigmatic of notions: an explicit account of it can appear to be essential yet beyond our reach. However, recent work provides some grounds for optimism.

Nonetheless, a reason is a declaration made to explain or justify an action, or a belief or desire upon which one acts: the underlying fact or cause that provides logical support for a premise or conclusion. To reason is to exercise the faculty by which humans seek or attain knowledge or truth: to determine or conclude by logical thinking, to work out a solution to a problem, or to persuade or dissuade someone with reasons. Yet mere reason is sometimes insufficient to convince ‘us' of a claim's veracity. Intuition, by contrast, is the comprehension of a truth or fact without the use of the rational process, as when one assesses someone's character or a situation and draws sound conclusions in a single act of judgement.

To be reasonable is to be governed by, or to act in accordance with, reason or sound thinking: to propose a reasonable solution to a problem, to remain within the bounds of common sense, and to make measured and fair use of reason, especially in forming conclusions, inferences or judgements. The danger, in argument as in public life, lies in encroachments on this understanding by men of zeal, well-meaning but without understanding.

The ‘real' is what occurs in fact, what has verifiable existence: real objects, a real illness. It is what is true and actual, not imaginary, alleged, or ideal: people and not ghosts, the practical matters and concerns of the real world. To call something real is also to say it is no less than what it is stated to be, free of pretence or affectation: a real experience, real trouble. Finally, ‘real' marks an objectivity the world has despite our subjectivity or conventions of thought or language: actual power, a thing or whole having actual existence, as opposed to an image formed by light or some other simulation. How far our attestations of the factual are themselves afforded by the efforts of our own imaginations is precisely what is at issue.

An ‘idea', by contrast, belongs to theory or imagination: a concept of reason that is transcendent but nonempirical, a thought or conception that potentially or actually exists in the mind as a product of mental activity. In the philosophy of Plato, an idea is an archetype of which a corresponding being in phenomenal reality is an imperfect replica; in Hegel, it is absolute truth, the conception and ultimate product of reason; in ordinary usage, it may be simply a mental image of something remembered.

Conceivably, imagination is the formation of a mental image of something that is neither perceived as real nor present to the senses. Nevertheless, the image so formed can confront and deal with reality by using the creative powers of the mind. Fantasy is characteristically well removed from reality, and all power of fantasy over reason is a degree of insanity; still, fancy is a product of the imagination given free rein. The difference is that one commands one's fancy, while it is exactly the mark of the neurotic that his fantasy possesses him.

A fact belongs to the totality of things possessing actuality, existence or essence: what exists objectively, what is based on real occurrences, what is known to have existed. It is a real occurrence or event, as when one has to prove the facts of the case, or something believed to be true or real as determined by evidence. The usage in the sense ‘allegation of fact', as in ‘the facts are in dispute' or ‘we may never know the facts of the case', may occasion qualms among critics who insist that facts can only be true, but such usages are often useful for emphasis. The opposite of the factual is the factitious: produced artificially rather than by a natural process, lacking authenticity or genuineness. Literature is factitious in this sense when it treats real people or events as if they were fictional, or uses real people or events as essential elements in an otherwise fictional rendition.

More seriously, a theory is a set of statements or principles devised to explain a group of facts or phenomena, especially one that has been repeatedly tested or is widely accepted and can be used to make predictions about natural phenomena. It comprises a consistent body of explanatory statements, accepted principles, and methods of analysis: a set of theorems that form a systematic view of a branch of mathematics, or a paradigm of science, or a belief or principle that guides action or assists comprehension or judgement. Where it rests on limited information or knowledge, a theory shades into conjecture, and ‘theoretical' then means based on speculation rather than practice, as in theoretical physics or speculative theorizing. In mathematics, by contrast, a theorem is a proposition that has been or is to be proved from explicit assumptions, and the concern is with demonstration rather than with merely hypothetical theorizing.

Looking back a century, one can see a striking degree of homogeneity among the philosophers of the early twentieth century about the topics central to their concerns. More striking still is the apparent obscurity and abstruseness of those concerns, which seem at first glance far removed from the great debates of previous centuries, between ‘realism' and ‘idealism', say, or ‘rationalists' and ‘empiricists'.

Thus, no matter what the current debate or discussion, the central issue is often one of conceptual and contentual representation: one without a concept is without an idea. What is it that makes what would otherwise be mere utterances and inscriptions into instruments of communication and understanding? The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world.

Consider, as an example of a scientific theory, the behaviour of gases. An ideal gas is defined for the purposes of thermodynamics as one that obeys Boyle's law, which states that if a given mass of gas is compressed at constant temperature, the product of the pressure and the volume remains constant. The law is found to be only approximately true for real gases, being exactly fulfilled only at very low pressure. In addition, an ideal gas has an internal energy independent of the volume it occupies, i.e., it obeys Joule's law of internal energy. Two principles bear Joule's name: (1) the heat produced by an electric current, I, flowing through a resistance, R, for a fixed time, t, is given by the product I²Rt; if the current is expressed in amperes, the resistance in ohms, and the time in seconds, then the heat produced is in joules; (2) the internal energy of a gas is independent of its volume, which applies only to ideal gases, i.e., when there are no intermolecular forces. From the point of view of the kinetic theory, both requirements are equivalent to saying that the intermolecular attractions are negligible, but the first requires also that the molecules be of negligible volume. An ideal gas in fact obeys Boyle's law, Joule's law of internal energy, Dalton's law of partial pressures, Gay-Lussac's law, and Avogadro's hypothesis exactly, whereas real gases obey them only as their pressure tends to zero.
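The two laws just stated lend themselves to a simple numerical check. The helper names below are our own invention; this is an illustrative sketch of the formulas Q = I²Rt and p₁V₁ = p₂V₂ for an ideal gas, not a model of real (non-ideal) gas behaviour.

```python
def joule_heat(current_a, resistance_ohm, time_s):
    """Heat (in joules) dissipated by a current I through a
    resistance R for a time t: Q = I^2 * R * t."""
    return current_a ** 2 * resistance_ohm * time_s

def boyle_volume(p1, v1, p2):
    """Boyle's law at constant temperature: p1*v1 = p2*v2,
    so the new volume is v2 = p1*v1/p2."""
    return p1 * v1 / p2

print(joule_heat(2.0, 5.0, 10.0))      # 200.0 J
print(boyle_volume(100.0, 2.0, 50.0))  # 4.0 (halving the pressure doubles the volume)
```

The point of the philosophical example survives the arithmetic: both formulas mention only observables (current, resistance, pressure, volume), while the molecular-kinetic theory that explains them mentions unobservable molecules.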

Whereas the gas laws refer to such observables as pressure, temperature, and volume, the molecular-kinetic theory refers to molecules and their properties. Although an older usage of ‘theory' suggests the lack of adequate evidence in support of it (‘merely a theory'), present usage does not carry that connotation. Einstein's special theory of relativity, for example, is considered extremely well founded.

There are two main views on the nature of theories. According to the ‘received view', theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). Theories usually emerge as a body of [supposed] truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an ideal for organizing a theory (Hilbert, 1970): one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all others are deductively inferred are called ‘axioms'. David Hilbert (1862-1943) argued that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could be made objects of mathematical investigation.

In the tradition (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, follow from a few principles. These principles were taken to be either metaphysically prior, or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused' by them. When the principles were taken as epistemologically prior, that is, as ‘axioms', they were taken to be either epistemologically privileged, i.e., self-evident, not needing to be demonstrated, or again (an inclusive ‘or') to be such that all truths do indeed follow from them by deductive inferences. Gödel (1984), in the spirit of Hilbert's treatment of axiomatic theories as themselves mathematical objects, showed that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms of which we could effectively decide, for any proposition, whether or not it belonged to that class, would be too small to capture all of the truths.

The notion of truth occurs with remarkable frequency in our reflections on language, thought, and action. We are inclined to suppose, for example, that truth is the proper aim of scientific inquiry, that true beliefs help us to achieve our goals, that to understand a sentence is to know which circumstances would make it true, that reliable preservation of truth as one argues from premises to a conclusion is the mark of valid reasoning, that moral pronouncements should not be regarded as objectively true, and so on. To assess the plausibility of such theses, and to refine them and explain why they hold (if they do), we require some view of what truth is: a theory that would account for its properties and its relations to other matters. Thus there can be little prospect of understanding our most important faculties in the absence of a good theory of truth.


The belief that snow is white owes its truth to a certain feature of the external world, namely, to the fact that snow is white. Similarly, the belief that dogs bark is true because of the fact that dogs bark. This trivial observation leads to what is perhaps the most natural and popular account of truth, the ‘correspondence theory', according to which a belief (statement, sentence, proposition, etc.) is true just in case there exists a fact corresponding to it (Wittgenstein, 1922; Austin, 1950). This thesis is unexceptionable in itself. However, if it is to provide a rigorous, substantial and complete theory of truth, if it is to be more than merely a picturesque way of asserting all equivalences of the form ‘the belief that p is true if and only if p', then it must be supplemented with accounts of what facts are, and of what it is for a belief to correspond to a fact; and these are the problems on which the correspondence theory of truth has foundered.

For one thing, it is far from clear that reducing ‘the belief that snow is white is true' to ‘the fact that snow is white exists' achieves any significant gain in understanding: these expressions seem equally resistant to analysis and too close in meaning for one to provide an illuminating account of the other. In addition, the general relationship that holds in particular between the belief that snow is white and the fact that snow is white, between the belief that dogs bark and the fact that dogs bark, and so on, is very hard to identify. The best attempt to date is Wittgenstein's (1922) so-called ‘picture theory', under which an elementary proposition is a configuration of terms, and the state of affairs it reports, an atomic fact, is a configuration of simple objects; an atomic fact corresponds to an elementary proposition (and makes it true) when their configurations are identical and when the terms in the proposition refer to the similarly placed objects in the fact; and the truth value of each complex proposition is entailed by the truth values of the elementary ones. However, even if this account is correct as far as it goes, it would need to be completed with plausible theories of ‘logical configuration', ‘elementary proposition', ‘reference' and ‘entailment', none of which is easy to come by. Moreover, a central characteristic of truth, one that any adequate theory must explain, is that when a proposition satisfies its ‘conditions of proof or verification', it is regarded as true. To the extent that the property of corresponding with reality is mysterious, we are going to find it impossible to see why what we take to verify a proposition should indicate the possession of that property.
Therefore, a tempting alternative to the correspondence theory, an alternative that eschews obscure metaphysical concepts and that explains quite straightforwardly why verifiability implies truth, is simply to identify truth with verifiability (Peirce, 1932). This idea can take various forms. One version involves the further assumption that verification is ‘holistic', i.e., that a belief is justified (i.e., verified) when it is part of an entire system of beliefs that is consistent and ‘harmonious' (Bradley, 1914; Hempel, 1935). This is known as the ‘coherence theory of truth'. Another version involves the assumption that associated with each proposition is some specific procedure for finding out whether one should believe it or not. On this account, to say that a proposition is true is to say that the appropriate procedure would verify it (Dummett, 1979; Putnam, 1981). In the context of mathematics this amounts to the identification of truth with provability.

The attractions of the verificationist account of truth are that it is refreshingly clear compared with the correspondence theory, and that it succeeds in connecting truth with verification. The trouble is that the bond it postulates between these notions is implausibly strong. We do indeed take verification to indicate truth, but we also recognize the possibility that a proposition may be false in spite of there being impeccable reasons to believe it, and that a proposition may be true even though we are not able to discover that it is. Verifiability and truth are no doubt highly correlated, but surely not the same thing.

A third well-known account of truth is ‘pragmatism' (James, 1909; Papineau, 1987). As we have just seen, the verificationist selects a prominent property of truth and considers it to be the essence of truth. Similarly, the pragmatist focuses on another important characteristic, namely, that true belief is a good basis for action, and takes this to be the very nature of truth. True beliefs are said to be, by definition, those that provoke actions with desirable results. Again we have an account with a single attractive explanatory feature, but again the relation it postulates between truth and its alleged analysans, in this case utility, is implausibly close. Granted, true belief tends to foster success, but it happens regularly that actions based on true beliefs lead to disaster, while false beliefs, by pure chance, produce wonderful results.

One of the few uncontroversial facts about truth is that the proposition that snow is white is true if and only if snow is white, the proposition that lying is wrong is true if and only if lying is wrong, and so on. Traditional theories acknowledge this fact but regard it as insufficient and, as we have seen, inflate it with some further principle of the form ‘X is true if and only if X has property P' (such as corresponding to reality, verifiability, or being suitable as a basis for action), which is supposed to specify what truth is. Some radical alternatives to the traditional theories result from denying the need for any such further specification (Ramsey, 1927; Strawson, 1950; Quine, 1990). For example, one might suppose that the basic theory of truth contains nothing more than equivalences of the form ‘The proposition that p is true if and only if p' (Horwich, 1990).

This sort of proposal is best presented in conjunction with an account of the raison d'être of our notion of truth, namely that it enables us to express attitudes toward propositions we can designate but not explicitly formulate. Suppose, for example, you are told that Einstein's last words expressed a claim about physics, an area in which you think he was very reliable. Suppose that, unknown to you, his claim was the proposition that quantum mechanics is wrong. What conclusion can you draw? Exactly which proposition becomes the appropriate object of your belief? Surely not that quantum mechanics is wrong, because you are not aware that that is what he said. What is needed is something equivalent to the infinite conjunction:

If what Einstein said was that E = mc², then E = mc², and

if what he said was that quantum mechanics is wrong,

then quantum mechanics is wrong . . . and so on.

That is, we need a proposition ‘K' with the following property: from ‘K' and any further premise of the form ‘Einstein's claim was the proposition that p', you can infer ‘p', whatever it is. Now suppose, as the deflationist says, that our understanding of the truth predicate consists in the stipulative decision to accept any instance of the schema ‘The proposition that p is true if and only if p'. Then your problem is solved. For if ‘K' is the proposition ‘Einstein's claim is true', it will have precisely the inferential power needed. From it and ‘Einstein's claim is the proposition that quantum mechanics is wrong', you can use Leibniz's law to infer ‘The proposition that quantum mechanics is wrong is true', which, given the relevant axiom of the deflationary theory, allows you to derive ‘Quantum mechanics is wrong'. Thus, one point in favour of the deflationary theory is that it squares with a plausible story about the function of our notion of truth, in that its axioms explain that function without the need for further analysis of ‘what truth is'.
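The inference just described can be set out schematically (a sketch of my own; here ⟨p⟩ abbreviates ‘the proposition that p' and True(·) is the truth predicate):

```latex
\begin{align*}
&(1)\quad \mathrm{True}(\text{Einstein's claim})
  && \text{premise}\\
&(2)\quad \text{Einstein's claim} = \langle\text{quantum mechanics is wrong}\rangle
  && \text{premise}\\
&(3)\quad \mathrm{True}(\langle\text{quantum mechanics is wrong}\rangle)
  && \text{from (1), (2) by Leibniz's law}\\
&(4)\quad \mathrm{True}(\langle p\rangle) \leftrightarrow p
  && \text{equivalence schema (deflationary axiom)}\\
&(5)\quad \text{quantum mechanics is wrong}
  && \text{from (3), (4)}
\end{align*}
```

Step (3) is the one that requires truth to be a genuine property of the proposition, which is why the redundancy/performative variant discussed next runs into trouble.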

Not all variants of deflationism have this virtue. According to the redundancy/performative theory of truth, the pair of sentences ‘The proposition that p is true' and plain ‘p' have the same meaning and express the same statement as one another, so it is a syntactic illusion to think that ‘p is true' attributes any sort of property to a proposition (Ramsey, 1927; Strawson, 1950). Yet in that case it becomes hard to explain why we are entitled to infer ‘The proposition that quantum mechanics is wrong is true' from ‘Einstein's claim is the proposition that quantum mechanics is wrong' and ‘Einstein's claim is true'. For if truth is not a property, then we can no longer account for the inference by invoking the law that if ‘X' is identical with ‘Y' then any property of ‘X' is a property of ‘Y', and vice versa. Thus the redundancy/performative theory, by identifying rather than merely correlating the contents of ‘The proposition that p is true' and ‘p', precludes the prospect of a good explanation of one of truth's most significant and useful characteristics. So it is better to restrict the claim to the weaker equivalence schema: the proposition that p is true if and only if p.

Support for deflationism depends upon the possibility of showing that its axioms, instances of the equivalence schema unsupplemented by any further analysis, will suffice to explain all the central facts about truth, for example, that the verification of a proposition indicates its truth, and that true beliefs have a practical value. The first of these facts follows trivially from the deflationary axioms: given our a priori knowledge of the equivalence of ‘p' and ‘The proposition that p is true', any reason to believe that p becomes an equally good reason to believe that the proposition that p is true. The second fact can also be explained in terms of the deflationary axioms, but not quite so easily. Consider, to begin with, beliefs of the form:

(B) If I perform the act ‘A', then my desires will be fulfilled.

Notice that the psychological role of such a belief is, roughly, to cause the performance of ‘A'. In other words, given that I do have belief (B), then typically:

I will perform the act ‘A'

Notice also that when the belief is true then, given the deflationary axioms, the performance of ‘A' will in fact lead to the fulfilment of one's desires,

i.e.,

If (B) is true, then if I perform ‘A', my desires will be fulfilled

Therefore,

If (B) is true, then my desires will be fulfilled

So valuing the truth of beliefs of that form is quite reasonable. Moreover, such beliefs are derived by inference from other beliefs and can be expected to be true if those other beliefs are true. So valuing the truth of any belief that might be used in such an inference is reasonable.

To the extent that such deflationary accounts can be given of all the facts involving truth, the collection of statements like ‘The proposition that snow is white is true if and only if snow is white' will meet the explanatory demands on a theory of truth, and the sense that we need some deep analysis of truth will be undermined.

Nonetheless, there are several strongly felt objections to deflationism. One reason for dissatisfaction is that the theory has an infinite number of axioms, and therefore cannot be completely written down. It can be described as the theory whose axioms are the propositions of the form ‘p if and only if it is true that p', but it cannot be explicitly formulated. This alleged defect has led some philosophers to develop theories that show, first, how the truth of any proposition derives from the referential properties of its constituents, and second, how the referential properties of primitive constituents are determined (Tarski, 1943; Davidson, 1969). However, it remains controversial to assume that all propositions, including belief attributions, laws of nature and counterfactual conditionals, depend for their truth values on what their constituents refer to. Moreover, there is no immediate prospect of a decent, finite theory of reference, so it is far from clear that the infinite, list-like character of deflationism can be avoided.

An objection to the version of the deflationary theory presented here concerns its reliance on ‘propositions' as the basic vehicles of truth. It is widely felt that the notion of the proposition is defective and that we should not employ it in semantics. If this point of view is accepted, the natural deflationary reaction is to attempt a reformulation that would appeal only to sentences, for example:

‘p' is true if and only if p.

Nevertheless, this so-called ‘disquotational theory of truth' (Quine, 1990) has trouble over indexicals, demonstratives and other terms whose referents vary with the context of use. It is not the case, for example, that every instance of ‘I am hungry' is true if and only if I am hungry. There is no simple way of modifying the disquotational schema to accommodate this problem. A possible way out of these difficulties is to resist the critique of propositions. Such entities may exhibit an unwelcome degree of indeterminacy, and might defy reduction to familiar items; however, they do offer a plausible account of belief, as relations to propositions, and, in ordinary language at least, they are indeed taken to be the primary bearers of truth. To believe a proposition is to hold it to be true. The philosophical problems include discovering whether belief differs from other varieties of assent, such as ‘acceptance', discovering to what extent degrees of belief are possible, understanding the ways in which belief is controlled by rational and irrational factors, and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals can properly be said to have beliefs.

Additionally, it is commonly supposed that problems about the nature of truth are intimately bound up with questions as to the accessibility and autonomy of facts in various domains: questions about whether we can know the facts, and whether they can exist independently of our capacity to discover them (Dummett, 1978; Putnam, 1981). One might reason, for example, that if ‘T is true' means nothing more than ‘T will be verified', then certain forms of scepticism, specifically those that doubt the correctness of our methods of verification, will be precluded, and the facts will have been revealed as dependent on human practices. Alternatively, one might reason that if truth were an inexplicable, primitive, non-epistemic property, then the fact that ‘T' is true would be completely independent of us; moreover, we could, in that case, have no reason to assume that the propositions we believe actually have this property, so scepticism would be unavoidable. In a similar vein, one might conclude that a special, and perhaps undesirable, feature of the deflationary approach is that it deprives truth of such metaphysical or epistemological implications.

On closer scrutiny, however, it is far from clear that there exists any account of truth with consequences regarding the accessibility or autonomy of non-semantic matters. For although we may expect an account of truth to have such implications for facts of the form ‘T is true', we cannot assume without further argument that the same conclusions will apply to the fact ‘T'. For it cannot be assumed that ‘T' and ‘T is true' are equivalent to one another, given the account of ‘true' that is being employed. Of course, if truth is defined in the way that the deflationist proposes, then the equivalence holds by definition. However, if truth is defined by reference to some metaphysical or epistemological characteristic, then the equivalence schema is thrown into doubt, pending some demonstration that the truth predicate, in the sense assumed, will satisfy it. Insofar as there are thought to be epistemological problems hanging over ‘T' that do not threaten ‘T is true', giving the needed demonstration will be difficult. Similarly, if ‘truth' is defined in such a way that the fact ‘T' is felt to be more, or less, independent of human practices than the fact that ‘T is true', then again it is unclear that the equivalence schema will hold. It would seem, therefore, that the attempt to base epistemological or metaphysical conclusions on a theory of truth must fail, because in any such attempt we will simultaneously rely on and undermine the equivalence schema.

The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth-conditions. On this conception, to understand a sentence is to know its truth-conditions. The conception was first clearly formulated by Frege (1848-1925), was developed in a distinctive way by the early Wittgenstein (1889-1951), and is a leading idea of Davidson (1917-). The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it.

The conception of meaning as truth-conditions need not and should not be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech acts. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if two indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions. Most basic is the idea that the truth-condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of ‘snow is white' is that snow is white; the truth condition of ‘Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.

Something makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power, and to relate it to what we know of ourselves and the world. Contributions to the study include the theory of ‘speech acts' and the investigation of communication and of the relationship between words, ideas, and the world. What a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like ‘arthritis', or the kind of tree I refer to as a ‘maple', will be identified by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, to each of whom everything appears the same, yet who refer to different things; between them they define a space of philosophical problems. Contents are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. The content of an utterance or sentence is the proposition or claim it makes about the world; by extension, the content of a predicate or other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is the central concern of the philosophy of language.

In particular, there are the problems of indeterminacy of translation, inscrutability of reference, language, predication, reference, rule following, semantics, translation, and the topics referring to subordinate headings associated with ‘logic'. The loss of confidence in determinate meaning (‘each decoding is another encoding') is an element common both to postmodern uncertainties in the theory of criticism and to the analytic tradition that follows writers such as Quine (1908-). Still, it may be asked why we should suppose that fundamental epistemic notions should be accounted for in behavioural terms. What grounds are there for supposing that ‘S knows p' is a matter of a relation between some subject and some object, between nature and its mirror? The answer is that the only alternative seems to be to take knowledge of inner states as premises from which our knowledge of other things is normally inferred, and without which that knowledge would be ungrounded. But it is not really coherent, and does not in the last analysis make sense, to suggest that human knowledge has foundations or grounds. We should remember that to say that truth and knowledge ‘can only be judged by the standards of our own day' is not to say that they are the less important, or ‘more "cut off from the world"' than we had supposed. It is just to say ‘that nothing counts as justification, unless by reference to what we already accept, and that there is no way to get outside our beliefs and our language so as to find some test other than coherence'. The fact is that only professional philosophers have thought it might be otherwise, since only they have been haunted by the spectre of epistemological scepticism.

What Quine opposes as ‘residual Platonism' is not so much the hypostasizing of non-physical entities as the notion of ‘correspondence' with things as the final court of appeal for evaluating present practices. Unfortunately, Quine, for all that this is incompatible with his basic insights, substitutes for this a correspondence to physical entities, and especially to the basic entities, whatever they turn out to be, of physical science. But when these doctrines are purified, they converge on a single claim: that no account of knowledge can depend on the assumption of some privileged relation to reality. Their work brings out why an account of knowledge can amount only to a description of human behaviour.

What, then, is to be said of these ‘inner states', and of the direct reports of them that have played so important a role in traditional epistemology? For a person to feel is nothing else than for him to have the ability to make a certain type of non-inferential report; to attribute feelings to infants is to acknowledge in them latent abilities of this kind. Non-conceptual, non-linguistic ‘knowledge' of what feelings or sensations are like is attributed to beings on the basis of their potential membership of our community. We credit infants and the more attractive animals with having feelings on the basis of the spontaneous sympathy that we extend to anything humanoid, in contrast with the mere ‘response to stimuli' attributed to photoelectric cells and to animals about which no one feels sentimental. It is consequently wrong to suppose that moral prohibitions against hurting infants and the better-looking animals are ‘grounded' in their possession of feelings. The relation of dependence is really the other way round. Similarly, we could no more be mistaken in supposing that a four-year-old child has knowledge but a one-year-old does not, than we could be mistaken in taking the word of a statute that eighteen-year-olds can marry freely but seventeen-year-olds cannot. (There is no more ‘ontological ground' for the distinction it may suit us to make in the former case than in the latter.) Again, such a question as ‘Are robots conscious?' calls for a decision on our part whether or not to treat robots as members of our linguistic community. All this is of a piece with the insight brought into philosophy by Hegel (1770-1831), that the individual apart from his society is just another animal.

Willard Van Orman Quine, the most influential American philosopher of the latter half of the twentieth century, spent his career at Harvard, apart from a wartime period in naval intelligence, punctuating the rest of his career with extensive foreign lecturing and travel. Quine's early work was on mathematical logic, and issued in "A System of Logistic" (1934), "Mathematical Logic" (1940), and "Methods of Logic" (1950), but it was with the collection of papers "From a Logical Point of View" (1953) that his philosophical importance became widely recognized. Quine's work on problems of convention, meaning, and synonymy was cemented by "Word and Object" (1960), in which the indeterminacy of radical translation first takes centre stage. In this and many subsequent writings Quine takes a bleak view of the nature of the language with which we ascribe thoughts and beliefs to ourselves and others. These ‘intentional idioms' resist smooth incorporation into the scientific world view, and Quine responds with scepticism toward them, not quite endorsing ‘eliminativism', but regarding them as second-rate idioms, unsuitable for describing strict and literal facts. For similar reasons he has consistently expressed suspicion of the logical and philosophical propriety of appeal to logical possibilities and possible worlds. The languages that are properly behaved and suitable for literal and true description of the world are those of mathematics and science. Although an empiricist, Quine holds that we must take the entities to which our best theories refer with full seriousness in our ontology; he thus supposes that science requires the abstract objects of set theory, and that they therefore exist. In the theory of knowledge Quine is associated with a ‘holistic' view of verification, conceiving of a body of knowledge in terms of a web touching experience at the periphery, but with each point connected by a network of relations to other points.

Quine is also known for the view that epistemology should be naturalized, or conducted in a scientific spirit, with the object of investigation being the relationship, in human beings, between the inputs of experience and the outputs of belief. Although Quine's approaches to the major problems of philosophy have been attacked as betraying undue ‘scientism' and sometimes ‘behaviourism', the clarity of his vision and the scope of his writing made him the major focus of Anglo-American work of the past forty years in logic, semantics, and epistemology. As well as the works cited, his writings include "The Ways of Paradox and Other Essays" (1966), "Ontological Relativity and Other Essays" (1969), "Philosophy of Logic" (1970), "The Roots of Reference" (1974) and "The Time of My Life: An Autobiography" (1985).

Coherence is a major player in the theatre of knowledge. There are coherence theories of belief, truth, and justification, and these combine in various ways to yield theories of knowledge. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a centaur in the garden?

One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief: you respond to sensory stimuli by believing that you are reading a page in a book rather than by believing that you have a centaur in the garden. Belief also has an influence on action: you will act differently if you believe that you are reading a page than if you believe something about a centaur. Perception and action, however, underdetermine the content of belief. The same stimuli may produce various beliefs, and various beliefs may produce the same action. The role that gives the belief the content it has is the role it plays within a network of relations to other beliefs, the role in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I infer from other beliefs, just as I infer that belief from others.

The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the representational content it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of belief from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief. Strong coherence theories affirm that coherence is the sole determinant of the content of belief.

When we turn from belief to justification, we confront a similar group of coherence theories. What makes one belief justified and another not? Again, there is a distinction between weak and strong coherence theories. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory, and intuitive certainty. A digression on projection (Latin, projectio) is in order here. It is commonplace that beauty lies in the eye of the beholder, but all the same we usefully talk of the beauty of things and people as if beauty were an identifiable real property which they possess. Projectivism denotes any view which sees us as similarly projecting upon the world what are in fact modifications of our own minds. The term is often associated with the view of sensations, and particularly of secondary qualities, found in writers such as Hobbes (De Corpore, 1655) and Condillac (Traité des sensations, 1754). According to this view, sensations are displaced from their rightful place in the mind when we think of the world as coloured or noisy. Other examples of the idea involve things other than sensations. One is that all contingency is a projection of our ignorance; another is that the causal order of events is a projection of our own mental confidence in the way they follow from one another. But the most common application of the idea is in ethics and aesthetics, where many writers have held that talk of the value or beauty of things is a projection of the attitudes we take toward them and the pleasure we take in them.

It is natural to associate projectivism with the idea that we make some kind of mistake in talking and thinking as if the world contained the various features we describe it as having, when in reality it does not. But the view that we make no mistake, but simply adopt efficient linguistic expressions for necessary ways of thinking, is also held.

Strong theories, by contrast, hold that justification is solely a matter of how a belief coheres with a system of beliefs. There is, nonetheless, another distinction that cuts across the distinction between weak and strong coherence theories: that between positive and negative coherence theories (Pollock, 1986). A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.

A strong coherence theory of justification is a combination of a positive and a negative theory, telling us that a belief is justified if and only if it coheres with a background system of beliefs. Coherence theories of justification and knowledge have most often been rejected as unable to account for the justification of perceptual beliefs (Audi, 1988; Pollock, 1986), and it will therefore be appropriate to consider a perceptual example that can serve as a kind of crucial test.
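The variants just distinguished can be set side by side in a schematic summary; the notation (J(b) for "belief b is justified", C(b, S) for "b coheres with background system S") is introduced here purely for illustration and is not the cited authors' own.

```latex
\begin{align*}
\text{Weak theory:} \quad & C(b,S)\ \text{is one determinant of}\ J(b),\ \text{alongside perception, memory, etc.}\\
\text{Positive theory:} \quad & C(b,S) \rightarrow J(b)\\
\text{Negative theory:} \quad & \neg C(b,S) \rightarrow \neg J(b)\\
\text{Strong theory:} \quad & J(b) \leftrightarrow C(b,S)
\end{align*}
```

So read, the strong theory is simply the conjunction of the positive and the negative theory, as noted above.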

Suppose that a person, call her Trust, works with a scientific instrument that gauges the temperature of liquids in a container. The gauge is marked in degrees; she looks at the gauge and sees that the reading is 105 degrees. What is she justified in believing, and why? Is she, for example, justified in believing that the liquid in the container is at 105 degrees? Clearly, that depends on her background beliefs. A weak coherence theorist might argue that, though her belief that she sees the shape ‘105' is immediately justified as direct sensory evidence without appeal to a background system, her belief that the liquid in the container is at 105 degrees results from coherence with a background system of beliefs affirming that what she visually reads as ‘105' on the gauge measures the temperature of the liquid in the container. The strategy of this weak coherence view, which combines coherence with direct perceptual evidence as the foundation of justification, is to account in this way for the justification of our beliefs.

A strong coherence theory would go beyond the claim of the weak theory to affirm that the justification of all beliefs, including the belief that one sees the shape ‘105', or even the more cautious belief that one sees a shape, results from coherence with a background system. One may argue for this strong coherence theory in a number of different ways. One line is to appeal to the coherence theory of content: if the content of the perceptual belief results from the relations of the belief to other beliefs in a system of beliefs, then one may argue that the justification of the perceptual belief likewise results from the relations of the belief to other beliefs in that system. A second argument for the strong coherence theory requires no assumption of the coherence theory of content. Consider the very cautious belief that I see a shape. How could the justification for that perceptual belief result from its coherence with a background system of beliefs? What might the background system tell us that would justify the belief? Our background system contains a simple and primal theory about our relation to the world and the surfaces we perceive.
To come to the specific point at issue, we believe that we can tell a shape when we see one, that we are trustworthy about such simple matters as whether we see a shape before us or not, and that our circumstances of perception, of the kind we have learned about through past experience, are not deceptive. Moreover, when Trust sees the gauge read 105, her background system tells her that her circumstances are not among those in which people are deceived about whether they see such a shape: the light is good, the numeral shapes are large and readily discernible, and so forth. These are beliefs Trust has that amount to reasons for holding that her sensory access to the data involved is trustworthy. Her belief coheres with those background beliefs, and so she is justified.

The philosophical problems include discovering whether belief differs from other varieties of assent, such as ‘acceptance'; discovering to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills. This last set of problems includes the question of whether prelinguistic infants or animals can properly be said to have beliefs.

Thus, we might think of coherence as inference to the best explanation based on a background system of beliefs. Since we are not aware of such inferences for the most part, we must interpret them as unconscious inferences, as information processing based on the background system. One might object to such an account on the grounds that not all justification is inferential; more generally, the account of coherence may at best be understood in terms of how a belief meets competition from rival claims on the basis of the background system (BonJour, 1985; Lehrer, 1990). The belief that one sees a shape competes with the claim that one does not, with the claim that one is deceived, and with other sceptical objections. The background system of beliefs informs one that one is trustworthy and enables one to meet the objections. A belief coheres with a background system just in case the system enables one to meet the sceptical objections, and in that way justifies the belief. This is a standard strong coherence theory of justification (Lehrer, 1990).

It is easy to illustrate the relationship between positive and negative coherence theories in terms of the standard coherence theory. If some objection to a belief cannot be met in terms of the background system of beliefs of a person, then the person is not justified in that belief. So, to return to Trust, suppose that she has been told that a warning light has been installed on her gauge to tell her when it is not functioning properly, and that when the red light is on, the gauge is malfunctioning. Suppose that when she sees the reading of 105, she also sees that the red light is on. Imagine, finally, that this is the first time the red light has been on and that, after years of working with the gauge, Trust, who has always placed her trust in it, believes what the gauge tells her: that the liquid in the container is at 105 degrees. Her belief that the liquid is at 105 degrees is not a justified belief, because it fails to cohere with her background belief that the gauge is malfunctioning. Thus, the negative coherence theory tells us that she is not justified in her belief about the temperature of the contents of the container. By contrast, when the red light is not illuminated and the background system of Trust tells her that under such conditions the gauge is a trustworthy indicator of the temperature of the liquid in the container, she is justified. The positive coherence theory tells us that she is justified in her belief because it coheres with her background system.

The foregoing sketch and illustration of coherence theories of justification have a common feature: they are what we have called internalistic theories of justification. They contrast with externalist views, which drop any requirement that the person for whom the belief is justified have cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking the belief is true or likely to be true, but will, on such an account, nonetheless count as epistemically justified in accepting it. Thus, an externalist view arguably marks a major break from the modern epistemological tradition, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

Coherence theories are internalistic because they affirm that coherence is a matter of internal relations between beliefs and that justification is a matter of coherence. If justification is solely a matter of internal relations between beliefs, we are left with the possibility that the internal relations fail to correspond with any external reality. How, one may object, can an entirely internal, subjective notion of justification bridge the gap between mere true belief, which might be no more than a lucky guess, and knowledge, which must be grounded in some connection between internal subjective conditions and external objective realities?

The answer is that it cannot, and that something more than justified true belief is required for knowledge. This result has, however, been established quite apart from consideration of coherence theories of justification. What is required may be put by saying that the justification must be undefeated by errors in the background system of beliefs. Justification is undefeated by errors just in case any correction of such errors in the background system of beliefs would sustain the justification of the belief on the basis of the corrected system. So knowledge, on this sort of coherence theory, is true belief that coheres with the background belief system and with corrected versions of that system. In short, knowledge is true belief plus justification resulting from coherence and undefeated by error (Lehrer, 1990). The connection between internal subjective conditions of belief and external objective realities results from the required correctness of our beliefs about the relations between those conditions and realities. In the example of Trust, she believes that her internal subjective conditions of sensory experience and perceptual belief are connected with the external objective reality, the temperature of the liquid in the container, in a trustworthy manner. This background belief is essential to the justification of her belief that the temperature of the liquid in the container is 105 degrees, and the correctness of that background belief is essential to the justification remaining undefeated. So our background system of beliefs contains a simple theory about our relation to the external world that justifies certain of our beliefs that cohere with that system.
For such justification to convert to knowledge, that theory must be sufficiently free from error that the coherence is sustained in corrected versions of our background system of beliefs. The correctness of the simple background theory provides the connection between the internal conditions and external reality.
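Lehrer's account, as sketched in the preceding paragraphs, can be compressed into a single schema; the symbols are mine, not Lehrer's, and are offered only as a summary device.

```latex
K(S,p) \;\leftrightarrow\; p \;\wedge\; B(S,p) \;\wedge\; C\!\left(p,\mathcal{B}_S\right) \;\wedge\;
\forall \mathcal{B}'\!\left(\mathrm{Corr}\!\left(\mathcal{B}',\mathcal{B}_S\right) \rightarrow C\!\left(p,\mathcal{B}'\right)\right)
```

Here $\mathcal{B}_S$ is S's background system, $C(p,\mathcal{B})$ says that the belief that p coheres with $\mathcal{B}$, and $\mathrm{Corr}(\mathcal{B}',\mathcal{B}_S)$ says that $\mathcal{B}'$ results from correcting errors in $\mathcal{B}_S$; the final conjunct expresses the requirement that the justification be undefeated by error.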

The coherence theory of truth arises naturally out of a problem raised by the coherence theory of justification. The problem is that anyone seeking to determine whether she has knowledge is confined to the search for coherence among her beliefs. The sensory experiences she has are mute until they are represented in the form of some perceptual belief. Beliefs are the engine that pulls the train of justification. But what assurance do we have that our justification is based on true beliefs? What assurance do we have that any of our justifications are undefeated? The fear that we might have none, that our beliefs might be the artifact of some deceptive demon or scientist, leads to the quest to reduce truth to some form, perhaps an idealized form, of justification (Rescher, 1973; Rosenberg, 1980). That would close the threatening sceptical gap between justification and truth. Suppose that a belief is true if and only if it is justifiable for some person. For such a person there would be no gap between justification and truth, or between justification and undefeated justification. Truth would be coherence with some ideal background system of beliefs, perhaps one expressing a consensus among belief systems or some convergence toward a consensus. Such a view is theoretically attractive for the reduction it promises, but it appears open to a profound objection. There is a consensus that we can all be wrong about at least some matters, for example, about the origins of the universe. If there is a consensus that we can all be wrong about something, then the consensual belief system rejects the equation of truth with consensus. Consequently, the equation of truth with coherence with a consensual belief system is itself incoherent.

Coherence theories of the content of our beliefs and of the justification of our beliefs themselves cohere with our background systems, but coherence theories of truth do not. A defender of coherentism must accept the logical gap between justified belief and truth, but may believe that our capacities suffice to close the gap and yield knowledge. That view is, at any rate, a coherent one.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p' is knowledge just in case it has the right sort of causal connection to the fact that ‘p'. Such a criterion can be applied only to cases where the fact that ‘p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.

For example, Armstrong (1973, ch. 12) proposed that a belief of the form ‘This (perceived) object is F' is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F'; that is, the fact that the object is ‘F' contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘x' and perceived object ‘y', if ‘x' has those properties and believes that ‘y' is ‘F', then ‘y' is ‘F'. (Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is ‘F'.)
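Armstrong's law-like condition can be displayed symbolically; here H stands for the relevant properties of the believer, and the formalization, though standard in spirit, is my paraphrase rather than a quotation.

```latex
\Box_{\text{law}}\;\forall x\,\forall y\;\bigl[\,\bigl(Hx \wedge B_x(Fy)\bigr) \rightarrow Fy\,\bigr]
```

That is, it is a matter of natural law that whenever a subject x with properties H believes of a perceived object y that it is F, the object y is F; the belief is thereby a completely reliable sign that the object is F.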

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, because it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your mechanisms for colour perception are working well, but you have been given good reason to think otherwise: to think, say, that magenta things look chartreuse to you and chartreuse things look magenta. If you fail to heed these reasons you have for distrusting your colour perception and believe of a thing that looks magenta to you that it is magenta, your belief will fail to be justified and will therefore fail to be knowledge, even though the thing's being magenta causes your belief in such a way as to be a completely reliable sign (or to carry the information) that the thing is magenta.

One could fend off this sort of counterexample by simply adding to the causal condition the requirement that the belief be justified, but this enriched condition would still be insufficient. Suppose, for example, that you have been given a drug that in nearly all people, but not in you, as it happens, causes the aforementioned aberration in colour perception. The experimenter tells you that you have taken such a drug, but then says, ‘No, wait a minute, the pill you took was just a placebo'. Suppose further that this last thing the experimenter tells you is false. Her telling you it gives you justification for believing of a thing that looks magenta to you that it is magenta, but the fact that the experimenter's last statement was false makes it the case that your true belief is not knowledge, even though it satisfies the causal condition.

Goldman (1986, ch. 3) has proposed an importantly different sort of causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both ‘globally' and ‘locally' reliable. It is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
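Global reliability can be glossed as a truth-ratio meeting some threshold; the threshold symbol $\theta$ is a placeholder of mine, since Goldman does not fix a precise number.

```latex
\mathrm{Rel}(P) \;=\;
\frac{\#\{\text{true beliefs that process } P \text{ produces, or would produce, given the opportunity}\}}
     {\#\{\text{all beliefs that } P \text{ produces, or would produce, given the opportunity}\}}
\;\geq\; \theta
```

Local reliability, by contrast, is not a ratio but a counterfactual condition on the particular belief: the process would not have produced that belief in any relevant alternative situation in which it is false.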

Goldman requires global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but not for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The relevant-alternatives account of knowledge can be motivated by noting that other concepts exhibit the same logical structure. Two examples are the concept ‘flat' and the concept ‘empty' (Dretske, 1981). Both appear to be absolute concepts: a space is empty only if it does not contain anything, and a surface is flat only if it does not have any bumps. However, the absolute character of these concepts is relative to a standard. In the case of ‘flat', there is a standard for what counts as a bump, and in the case of ‘empty', there is a standard for what counts as a thing. To be flat is to be free of any relevant bumps, and to be empty is to be devoid of all relevant things.

This avoids the sorts of counterexamples we gave for the causal criteria, but it is vulnerable to ones of a different sort. Suppose you are standing on the mainland looking over the water at an island, on which are several structures that look (from that point of view) like barns. You happen to be looking at one that is in fact a barn, and your belief to that effect is justified, given how it looks to you and the fact that you have no reason to think otherwise. But suppose that the great majority of the barn-looking structures on the island are not real barns but fakes. Finally, suppose that from any viewpoint on the mainland all of the island's fake barns are obscured by trees, and that circumstances made it very unlikely that you would have had a viewpoint not on the mainland. Here, it seems, your justified true belief that you are looking at a barn is not knowledge, despite the fact that there was not a serious chance of an alternative situation in which you were similarly caused to have a false belief that you were looking at a barn.

That example shows that the ‘local reliability' of the belief-producing process, on the ‘serious chance' explication of what makes an alternative relevant, is not sufficient for knowledge: there was no serious chance of a situation in which you were similarly caused to hold a false belief, and yet you did not know. A different line of thought opens here, concerning the conditions of experience: the mind, or the brain with its neuronal excitations, takes in sensory data and, guided by reason, seeks a comprehensive world-view encompassing both the hidden and manifest aspects of nature, integrating the various aspects of the universe into one magnificent whole, a whole in which we play an organic and central role. One hundred years ago that role would have been supplied by the Newtonian ‘clockwork universe', a model of a universe that is completely mechanical: everything that happens is predetermined by the laws of nature and the state of the universe in the distant past. The freedom one feels in regard to one's actions, even in regard to the movements of one's body, is an illusion; yet the world-view the Newtonian picture expresses is completely coherent.

Nevertheless, the human mind abhors a vacuum. When an explicit, coherent world-view is absent, it functions on the basis of a tacit one. A tacit world-view is not subject to critical evaluation, and it can easily harbour inconsistencies. And, indeed, our tacit set of beliefs about the nature of reality consists of contradictory bits and pieces. The dominant component is a leftover from another period: the Newtonian ‘clockwork universe' still lingers, and we cling to this old and tired model because we know of nothing else that can take its place. Our condition is that of a culture in the throes of a paradigm shift. A major paradigm shift is complex and difficult because a paradigm holds us captive: we see reality through it, as through coloured glasses, but we do not know that; we are convinced that we see reality as it is. Hence the appearance of a new and different paradigm is often incomprehensible. To someone raised believing that the Earth is flat, the suggestion that the Earth is spherical would seem preposterous: if the Earth were spherical, would not the poor antipodes fall ‘down' into the sky?

And yet, as we face a new millennium, we are forced to face this challenge. The fate of the planet is in question, and it was brought to its present precarious condition largely because of our trust in the Newtonian paradigm. The Newtonian world-view has to go, and, if one looks carefully, one can discern the main features of the new, emergent paradigm. The search for these features must also reckon with the influence of the fading paradigm: all paradigms include subterranean realms of tacit assumptions, the influence of which outlasts adherence to the paradigm itself.

The first line of exploration concerns the ‘weird' aspects of quantum theory: the feeling of weirdness arises from inconsistency with the prevailing world-view, and it should disappear once that world-view is replaced by a new one. If one believes that the Earth is flat, the story of Magellan's travels is quite puzzling: how is it possible for a ship to travel due west and, without changing direction, arrive at its place of departure? Obviously, when the belief that the Earth is spherical replaces the flat-Earth paradigm, the puzzle is instantly resolved.

The founders of relativity and quantum mechanics were deeply engaged with philosophical questions, but their engagement was incomplete: none of them attempted to construct a philosophical system, even though the mystery at the heart of quantum theory called for a revolution in philosophical outlook. During the 1920s, when quantum mechanics reached maturity, Alfred North Whitehead began the construction of a full-blooded philosophical system, based not only on science but on nonscientific modes of knowledge as well. For the fading influence of a paradigm goes well beyond its explicit claims. We believe, as the scientists and philosophers of the Newtonian period did, that when we wish to find out the truth about the universe we can ignore nonscientific modes of processing human experience: poetry, literature, art and music are all wonderful, but, in relation to the quest for knowledge of the universe, they are irrelevant. Yet it was Whitehead who pointed out the fallacy of this assumption. In his system the building blocks of reality are not material atoms but ‘throbs of experience'. Whitehead formulated his system in the late 1920s, and yet, as far as I know, the founders of quantum mechanics were unaware of it. It was not until 1963 that J. M. Burgers pointed out that Whitehead's philosophy accounts very well for the main features of the quanta, especially the ‘weird' ones. It also invites further questions: are some aspects of reality ‘higher' or ‘deeper' than others, and if so, what is the structure of such hierarchical divisions? What is our place in the universe? And, finally, what is the relationship between our great aspirations and the lost realms of nature? An attempt to endow us with cosmological meaning in the Newtonian universe seems totally absurd, and yet that very universe is just a paradigm, not the truth. 
When you reach the end, you may be willing to entertain the alternative view, according to which, surprisingly, such meaning is restored to us, although in a post-postmodern context.

The philosophical implications of quantum mechanics are the subject matter here, with emphasis on connections that the Western tradition, in its philosophical thinking from Plato to Plotinus, has hesitated to pursue. Some aspects of the interpretation presented here express a consensus of the physics community; others are shared by some and objected to (sometimes vehemently) by others; still other aspects express my own views and convictions. The writing turned out to be more difficult than anticipated, and I found that a conversational mode would be helpful; I hope that these conversations will prove not only illuminating but congenial to their readers.

These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman's claim about reliability and knowledge, it will not be simple.

The interesting thesis that counts as a causal theory of justification, in the meaning of 'causal theory' intended here, is that a belief is justified just in case it was produced by a type of process that is 'globally' reliable, that is, its propensity to produce true beliefs (which can be defined, to a respectable approximation, as the proportion of the beliefs it produces, or would produce were it used as much as opportunity allows, that are true) is sufficiently high. The underlying idea is that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F. P. Ramsey (1903-30), who made important contributions to mathematical logic, probability theory, the philosophy of science and economics. Ramsey is also remembered for the Ramsey sentence: instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of the theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided, by substituting the term by a variable and existentially quantifying into the result. Ramsey was one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of proposition: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was also one of the earliest commentators on the early work of Wittgenstein, and his continuing friendship with the latter led to Wittgenstein's return to Cambridge and to philosophy in 1929.
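Ramsey's device can be written out schematically. This is my own sketch, assuming for illustration a theory whose claims about a term are conjoined into finitely many postulates P_1, ..., P_n; the theoretical term is replaced by a variable, which is then bound by an existential quantifier:

```latex
% A theory T makes claims about the theoretical term `quark':
%   T:\; P_1(\mathrm{quark}) \wedge P_2(\mathrm{quark}) \wedge \dots \wedge P_n(\mathrm{quark})
% Its Ramsey sentence replaces the term by a bound variable:
\exists x \,\bigl( P_1(x) \wedge P_2(x) \wedge \dots \wedge P_n(x) \bigr)
% With several theoretical terms \tau_1,\dots,\tau_k treated at once:
T(\tau_1,\dots,\tau_k) \;\longmapsto\; \exists x_1 \cdots \exists x_k\, T(x_1,\dots,x_k)
```

The quantified sentence says only that something satisfies the postulates, leaving open which item it is that best fits the description.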

The most sustained and influential application of these ideas was in the philosophy of mind. Ludwig Wittgenstein (1889-1951), whom Ramsey persuaded that there remained work for him to do, was undoubtedly the most charismatic figure of 20th-century philosophy, living and writing with a power and intensity that frequently overwhelmed his contemporaries and readers. The early period is centred on the 'picture theory of meaning', according to which a sentence represents a state of affairs by being a kind of picture or model of it, containing elements corresponding to those of the state of affairs and a structure or form that mirrors the structure of the state of affairs it represents. All logical complexity is reduced to that of the 'propositional calculus', and all propositions are 'truth-functions' of atomic or basic propositions.

In the later period the emphasis shifts dramatically to the actions of people and the role linguistic activities play in their lives. Thus, whereas in the "Tractatus" language is placed in a static, formal relationship with the world, in the later work Wittgenstein emphasizes its use in the context of standardized social activities of ordering, advising, requesting, measuring, counting, exercising concern for each other, and so on. These different activities are thought of as so many 'language games' that together make up a form of life. Philosophy typically ignores this diversity, and in generalizing and abstracting distorts the real nature of its subject-matter. In addition to the "Tractatus" and the "Investigations", collections of Wittgenstein's work published posthumously include "Remarks on the Foundations of Mathematics" (1956), "Notebooks 1914-1916" (1961), "Philosophische Bemerkungen" (1964), "Zettel" (1967) and "On Certainty".

Clearly, there are many forms of reliabilism, just as there are many forms of foundationalism and coherentism. How is reliabilism related to these other two theories of justification? It is usually regarded as a rival, and this is apt insofar as foundationalism and coherentism traditionally focus on purely evidential relations rather than psychological processes. But reliabilism might also be offered as a deeper-level theory, subsuming some of the precepts of either foundationalism or coherentism. Foundationalism says that there are 'basic' beliefs, which acquire justification without dependence on inference; reliabilism might rationalize this by indicating that the basic beliefs are formed by reliable non-inferential processes. Coherentism stresses the primacy of systematicity in all doxastic decision-making; reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Reliabilism could thus complement foundationalism and coherentism rather than compete with them.

These examples make it seem likely that, if there is a criterion for what makes an alternative situation relevant that will save Goldman's claim about local reliability and knowledge, it will not be simple. Much of Ramsey's work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace of Brouwer and Weyl'. In the theory of probability he was the first to show how a personalist theory could be developed, based on a precise behavioural notion of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of proposition: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was also one of the earliest commentators on the early work of Wittgenstein, and his continuing friendship with the latter led to Wittgenstein's return to Cambridge and to philosophy in 1929.

The Ramsey sentence of a theory is generated by taking all the sentences affirmed in the theory that use some theoretical term, e.g. 'quark', replacing the term by a variable, and existentially quantifying into the result. Instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If we repeat the process for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by way of nomic, counterfactual or other such 'external' relations between belief and truth. Closely allied is the nomic sufficiency account of knowledge, primarily due to Dretske (1971, 1981), A. I. Goldman (1976, 1986) and R. Nozick (1981). The core of this approach is that X's belief that 'p' qualifies as knowledge just in case X believes 'p' because of reasons that would not obtain unless 'p' were true, or because of a process or method that would not yield belief in 'p' if 'p' were not true. For example, X would not have its current reasons for believing there is a telephone before it, or would not come to believe this in the way it does, unless there were a telephone before it; thus, there is a counterfactual reliable guarantor of the belief's being true.
A variant of the counterfactual approach says that X knows that 'p' only if there is no 'relevant alternative' situation in which 'p' is false but X would still believe that 'p'. One's justification or evidence for 'p' must be sufficient to eliminate all the alternatives to 'p', where an alternative to a proposition 'p' is a proposition incompatible with 'p'; that is, one's justification or evidence for 'p' must be sufficient for one to know that every alternative to 'p' is false. Sceptical arguments have exploited this element of our thinking about knowledge. These arguments call our attention to alternatives that our evidence cannot eliminate. The sceptic inquires how we know that we are not seeing a cleverly disguised mule. While we do have some evidence against the likelihood of such a deception, intuitively it is not strong enough for us to know that we are not so deceived. By pointing out alternatives of this kind that we cannot eliminate, as well as others with more general application (dreams, hallucinations, etc.), the sceptic appears to show that this requirement is seldom, if ever, satisfied.

This conclusion conflicts with another strand in our thinking about knowledge: that we know many things. Thus, there is a tension in our ordinary thinking about knowledge: we believe that knowledge is, in the sense indicated, an absolute concept, and yet we also believe that there are many instances of that concept.

If one finds absoluteness to be too central a component of our concept of knowledge to be relinquished, one could argue from the absolute character of knowledge to a sceptical conclusion (Unger, 1975). Most philosophers, however, have taken the other course, choosing to respond to the conflict by giving up, perhaps reluctantly, the absolute criterion. This latter response holds as sacrosanct our commonsense belief that we know many things (Pollock, 1979 and Chisholm, 1977). Each approach is subject to the criticism that it preserves one aspect of our ordinary thinking about knowledge at the expense of denying another. We can view the theory of relevant alternatives as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

Epistemology is the theory of knowledge. Its central questions include the origin of knowledge, the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning. It is possible to see epistemology as dominated by two rival metaphors. One is that of a building or pyramid, built on foundations. In this conception it is the job of the philosopher to describe especially secure foundations, and to identify secure modes of construction, so that the resulting edifice can be shown to be sound. On this metaphor, knowledge must be regarded as a structure raised upon secure, certain foundations, with a rationally defensible theory of confirmation and inference as the method of construction. The foundations are found in some combination of experience and reason, with different schools (empiricism, rationalism) emphasizing the role of one over that of the other. Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who found his foundations in the 'clear and distinct' ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, much as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth.
It is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given'.

The other metaphor is that of a boat or fuselage, which has no foundation but owes its strength to the stability given by its interlocking parts. This view rejects the idea of a basis in the 'given' and favours ideas of coherence and holism, but finds it harder to ward off scepticism. In spite of these concerns, the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato's view in the "Theaetetus" that knowledge is true belief plus some logos. Naturalized epistemology is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, or proof against 'scepticism', or even apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for 'external' or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Although the term is modern, distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.

The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too great a confidence in the possibility of a purely a priori 'first philosophy', or standpoint beyond that of the working practitioners, from which they can measure their best efforts as good or bad. This standpoint now seems to many philosophers to be a fantasy. The more modest task is to examine the methods that investigators actually adopt at various historical stages of investigation into different areas, with the aim not so much of criticizing as of systematizing the presuppositions of a particular field at a particular time. There is still a role for local methodological disputes within a community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific; but logic and philosophy will not, on the modern view, provide an independent arsenal of weapons for such battles, which indeed often come to seem more like political bids for ascendancy within a discipline.

This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At least once, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.

Chance can influence the outcome at each stage: first, in the creation of the genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual's actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has so vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean "Does natural selection always take the best path for the long-term welfare of a species?", the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean "Does natural selection create every adaptation that would be valuable?", the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not mean that it will evolve.

This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin's theory of biological natural selection. The three major components of the model of natural selection are variation, selection and retention. According to Darwin's theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that perform useful functions are selected, while those that ill suit the purpose are not; the selection is responsible for the appearance that variations occur intentionally. In the modern theory of evolution, genetic mutations provide the blind variations (blind in the sense that variations are not influenced by the effects they would have; the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism), the environment provides the filter of selection, and reproduction provides the retention. Fit is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features that are better adapted. Evolutionary epistemology applies this blind-variation and selective-retention model to the growth of scientific knowledge and to human thought processes in general.
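The blind-variation and selective-retention model can be sketched as a toy simulation. This is my own construction, purely illustrative: the string target, fitness function, mutation scheme and parameters are all assumptions, not part of the text. Variants are generated blindly (mutations are uncorrelated with the benefit they confer), an environmental filter selects the fittest, and the winner is retained as the starting point for the next round.

```python
import random

def bvsr(target, generations=2000, pool=20, seed=0):
    """Blind variation and selective retention over strings.

    Variation is 'blind': mutations are random and uncorrelated with
    the benefit they confer. Selection filters variants by fitness;
    retention carries the best variant into the next generation.
    """
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def fitness(s):
        # Number of positions agreeing with the (environmental) target.
        return sum(a == b for a, b in zip(s, target))

    def mutate(s):
        # Blind variation: change one random position to a random letter.
        i = rng.randrange(len(s))
        return s[:i] + rng.choice(alphabet) + s[i + 1:]

    retained = "".join(rng.choice(alphabet) for _ in target)  # blind start
    for _ in range(generations):
        variants = [mutate(retained) for _ in range(pool)]    # variation
        best = max(variants, key=fitness)                     # selection
        if fitness(best) >= fitness(retained):                # retention
            retained = best
    return retained

print(bvsr("quanta"))
```

Nothing in the loop looks ahead at the target when generating variants; the appearance of design emerges entirely from selection and retention, which is the point of Campbell's model.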

The parallel between biological evolution and conceptual or 'epistemic' evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology sees biological evolution as the main cause of the growth of knowledge. On this view, called the 'evolution of cognitive mechanisms program' by Bradie (1986) and the 'Darwinian approach to epistemology' by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986, ch. 5) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990, pp. 33-8).

On the analogical version of evolutionary epistemology, called the 'evolution of theories program' by Bradie (1986) and the 'Spencerian approach' (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the (partial) fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.

Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism were the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions implicitly come from psychology and cognitive science, not evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that 'if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom', i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one's knowledge beyond what one knows, one must proceed from something that is already known; but, more interestingly, it also makes the synthetic claim that when expanding one's knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is thus synthetic, not analytic; if it were analytic, its empirical falsification would be contradictory, which it is not. Campbell is right that evolutionary epistemology has the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).

Two further issues are prominent in the literature: questions about realism (what sort of metaphysical commitment does an evolutionary epistemologist have to make?) and about progress (according to evolutionary epistemology, does knowledge develop toward a goal?). With respect to realism, many evolutionary epistemologists endorse what is called 'hypothetical realism', a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974a) worries about the potential disanalogy here, but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Some have argued that evolutionary epistemologists must give up the truth-directed sense of progress because a natural selection model is non-teleological; instead, following Kuhn (1970), a non-teleological sense of progress should be embraced along with evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978; Ruse, 1986). Stein and Lipton (1990) have argued, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that, for the most part, are themselves the product of blind variation and selective retention. Further, Stein and Lipton argue that these heuristics are analogous to biological pre-adaptations, evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The heuristics' guidance of epistemic variation is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986; Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms that are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate or if our non-innate beliefs were not the result of blind variation. An appeal to biological evolution is not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).

Although it is a relatively new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that 'p' is knowledge just in case it has the right sort of causal connection to the fact that 'p'. Such a criterion can be applied only to cases where the fact that 'p' is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about subjects' environments.

For example, Armstrong (1973) proposed that a belief of the form 'This [perceived] object is F' is [non-inferential] knowledge if and only if the belief is a completely reliable sign that the perceived object is 'F'; that is, the fact that the object is 'F' contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject 'x' and perceived object 'y', if 'x' has those properties and believes that 'y' is 'F', then 'y' is 'F'. (Dretske (1981) offers a rather similar account, in terms of the belief's being caused by a signal received by the perceiver that carries the information that the object is 'F'.)

This sort of condition fails, however, to be sufficient for non-inferential perceptual knowledge, for it is compatible with the belief's being unjustified, and an unjustified belief cannot be knowledge. For example, suppose that your colour perception is in fact working well, but you have been given good reason to think otherwise: to think, say, that chartreuse things look magenta to you. If you fail to heed these reasons you have for thinking that your colour perception is awry, and believe of a thing that looks magenta to you that it is magenta, your belief will fail to be justified and will therefore fail to be knowledge, even though it is caused by the thing's being magenta in such a way as to be a completely reliable sign (or to carry the information) that the thing is magenta.

On the reliabilist view, a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F. P. Ramsey (1903-30). Much of Ramsey's work was directed at saving classical mathematics from 'intuitionism', or what he called the 'Bolshevik menace of Brouwer and Weyl'. In the theory of probability he was the first to develop a personalist theory, based on precise behavioural notions of preference and expectation. In the philosophy of language, Ramsey was one of the first thinkers to accept a 'redundancy theory of truth', which he combined with radical views of the function of many kinds of propositions: neither generalizations, nor causal propositions, nor those treating probability or ethics, describe facts, but each has a different specific function in our intellectual economy. Ramsey was one of the earliest commentators on the early work of Wittgenstein, and his continuing friendship with the latter led to Wittgenstein's return to Cambridge and to philosophy in 1929. Ramsey said that a belief is knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that 'S' knows that 'p' just in case it is not at all accidental that 'S' is right about its being the case that 'p'. D. M. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth. Armstrong said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via laws of nature.

Closely allied is the nomic sufficiency account of knowledge, primarily due to F. I. Dretske (1971, 1981), A. I. Goldman (1976, 1986) and R. Nozick (1981). The core of this approach is that 'S's' belief that 'p' qualifies as knowledge just in case 'S' believes 'p' because of reasons that would not obtain unless 'p' were true, or because of a process or method that would not yield belief in 'p' if 'p' were not true. For example, 'S' would not have his current reasons for believing there is a telephone before him, or would not come to believe this in the way he does, unless there was a telephone before him. Thus, there is a counterfactual reliable guarantor of the belief's being true. A variant of the counterfactual approach says that 'S' knows that 'p' only if there is no 'relevant alternative' situation in which 'p' is false but 'S' would still believe that 'p'. One's justification or evidence for 'p' must be sufficient to eliminate all the alternatives to 'p', where an alternative to a proposition 'p' is a proposition incompatible with 'p'; that is, one's justification or evidence for 'p' must be sufficient for one to know that every alternative to 'p' is false.

Reliabilism is standardly classified as an ‘externalist' theory because it invokes some truth-linked factor, and truth is ‘external' to the believer. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, etc., that motivate the views that have come to be known as ‘direct reference' theories. Such phenomena seem, at least, to show that the belief or thought content that can properly be attributed to a person depends on facts about his environment, e.g., whether he is on Earth or Twin Earth, what in fact he is pointing at, the classificatory criteria employed by the experts in his social group, etc., and not just on what is going on internally in his mind or brain (Putnam, 1975 and Burge, 1979). Virtually all theories of knowledge, of course, share an externalist component in requiring truth as a condition for knowing. Reliabilism goes further, however, in trying to capture additional conditions for knowledge by means of nomic, counterfactual or other such ‘external' relations between ‘belief' and ‘truth'.

The most influential counterexamples to reliabilism are the demon-world and clairvoyance examples. The demon-world example challenges the necessity of the reliability requirement: in a possible world in which an evil demon creates deceptive visual experiences, the process of vision is not reliable, yet the visually formed beliefs in that world are intuitively justified. The clairvoyance example challenges the sufficiency of reliability. Suppose a cognitive agent possesses a reliable clairvoyant power, but has no evidence for or against his possessing such a power. Intuitively, his clairvoyantly formed beliefs are unjustified, but reliabilism declares them justified.

Another form of reliabilism, ‘normal-worlds' reliabilism (Goldman, 1986), answers the range problem differently, and treats the demon-world problem in the same stroke. Let a ‘normal world' be one that is consistent with our general beliefs about the actual world. Normal-worlds reliabilism says that a belief in any possible world is justified just in case its generating processes have high truth ratios in normal worlds. This resolves the demon-world problem because the relevant truth ratio of the visual process is not its truth ratio in the demon world itself, but its ratio in normal worlds. Since this ratio is presumably high, visually formed beliefs in the demon world turn out to be justified.

Yet a different version of reliabilism attempts to meet the demon-world and clairvoyance problems without recourse to the questionable notion of ‘normal worlds'. Consider Sosa's (1992) suggestion that justified belief is belief acquired through ‘intellectual virtues', and not through intellectual ‘vices', where virtues are reliable cognitive faculties or processes. The task is to explain how epistemic evaluators use the notion of intellectual virtues and vices to arrive at their judgements, especially in the problematic cases. Goldman (1992) proposes a two-stage reconstruction of an evaluator's activity. The first stage is a reliability-based acquisition of a ‘list' of virtues and vices. The second stage is the application of this list to queried cases: it is executed by determining whether the processes in the queried cases resemble the virtues or the vices. Visual beliefs in the demon world are classified as justified because visual belief formation is one of the virtues. Clairvoyantly formed beliefs are classified as unjustified because clairvoyance resembles scientifically suspect processes that the evaluator represents as vices, e.g., mental telepathy, ESP, and so forth.

Clearly, there are many forms of reliabilism, just as there are many forms of Foundationalism and Coherentism. How is reliabilism related to these other two theories of justification? It is usually regarded as a rival, and this is apt in so far as Foundationalism and Coherentism traditionally focussed on purely evidential relations rather than psychological processes. But reliabilism might also be offered as a deeper-level theory, subsuming some of the precepts of either Foundationalism or Coherentism. Foundationalism says that there are ‘basic' beliefs, which acquire justification without dependence on inference. Reliabilism might rationalize this by indicating that basic beliefs are formed by reliable non-inferential processes. Coherentism stresses the primacy of systematicity in all doxastic decision-making. Reliabilism might rationalize this by pointing to increases in reliability that accrue from systematicity. Thus, reliabilism could complement Foundationalism and Coherentism rather than compete with them.

Philosophers often debate the existence of different kinds of things: nominalists question the reality of abstract objects like classes, numbers and universals; some positivists doubt the existence of theoretical entities like neutrons or genes; and there are debates over whether there are sense-data, events and so on. Some philosophers may be happy to talk about abstract objects and theoretical entities while denying that they really exist. This requires a ‘metaphysical' concept of ‘real existence': we debate whether numbers, neutrons and sense-data really exist. But it is difficult to see what this concept involves, and the rules to be employed in settling such debates are very unclear.

Questions of existence seem always to involve general kinds of things: do numbers, sense-data or neutrons exist? Some philosophers conclude that existence is not a property of individual things, that ‘exists' is not an ordinary predicate. If I refer to something and then predicate existence of it, my utterance seems tautological: the object must exist for me to be able to refer to it, so predicating existence of it adds nothing. And to say of something that it did not exist would be contradictory.

Rudolf Carnap pursued the enterprise of clarifying the structures of mathematical and scientific language (the only legitimate task for scientific philosophy) in "Logische Syntax der Sprache" (1934, trs. as "The Logical Syntax of Language," 1937). Refinements to his syntactic and semantic views continued with "Meaning and Necessity" (1947), while a general loosening of the original ideal of reduction culminated in "Logical Foundations of Probability" (1950), the single most important work of confirmation theory. Other works concern the structure of physics and the concept of entropy. On Carnap's view, questions of which framework to employ do not concern whether the entities posited by the framework ‘really exist'; they are settled rather by the framework's pragmatic usefulness. Philosophical debates over existence misconstrue ‘pragmatic' questions of choice of framework as substantive questions of fact. Once we have adopted a framework there are substantive ‘internal' questions, e.g., are there any prime numbers between 10 and 20? ‘External' questions about the choice of framework have a different status.
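Carnap's contrast can be made vivid with his own arithmetical example. The ‘internal' question whether there are prime numbers between 10 and 20 is settled mechanically by the rules of the framework of arithmetic; the sketch below (purely illustrative, with a hypothetical `is_prime` helper) shows how trivially the framework delivers its answer. No comparable computation bears on the ‘external' question of whether to adopt the framework of numbers in the first place.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (hypothetical helper, for illustration only)."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# The internal question: are there any prime numbers between 10 and 20?
primes_between = [n for n in range(11, 20) if is_prime(n)]
print(primes_between)  # [11, 13, 17, 19] -- the framework's rules settle the question
```

The point of the illustration is that once the framework is adopted, the answer is a matter of calculation, not of metaphysical debate.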

More recent philosophers, notably Quine, have questioned the distinction between a linguistic framework and the internal questions arising within it. Quine agrees that we have no ‘metaphysical' concept of existence against which different purported entities can be measured. If the entities in question are quantified over in the general theoretical framework which best explains our experience, then the claim that there are such things, that they exist, is true. Scruples about admitting the existence of too many different kinds of objects depend not on a metaphysical concept of existence but rather on a desire for a simple and economical theoretical framework.

It is not possible to define experience in an illuminating way; however, we know what experiences are through acquaintance with some of our own, e.g., a visual experience of a green after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface, which an actual surface, rough or smooth, might cause, or which might be part of a dream, or the product of a vivid sensory imagination. The essential feature of every experience is that it feels a certain way, that there is something that it is like to have it. We may refer to this feature of an experience as its ‘character'.

Another core feature of the sorts of experience with which we are concerned is that they have representational content; unless otherwise indicated, the term ‘experience' will be reserved for these. The most obvious cases of experience with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly by means of perceptual verbs combined with noun phrases specifying their contents, as in ‘Macbeth saw a dagger'. This is, however, ambiguous between the perceptual claim ‘There was a [material] dagger in the world that Macbeth perceived visually' and ‘Macbeth had a visual experience of a dagger', the reading with which we are concerned.

As in the case of other mental states and events with content, it is important to distinguish between the properties that an experience represents and the properties that it possesses. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself either pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property that it possesses, and it may even do so in virtue of possessing that property, as in the case of a rapidly changing [complex] experience representing something as changing rapidly, but this is the exception rather than the rule.

Which properties can be [directly] represented in sense experience is subject to debate. Traditionalists include only properties whose presence a subject could not doubt, having the appropriate experiences, e.g., colour and shape in the case of visual experience; surface texture, hardness, etc., in the case of tactile experience. This view is natural for anyone who adopts an egocentric Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge. The term ‘sense-data', introduced by Moore and Russell, refers to the immediate objects of perceptual awareness, such as colour patches and shapes, usually distinguished from the surfaces of physical objects. The qualities of sense-data are supposed to be distinct from physical qualities because their perception is more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual fields when conditions of perception change while the physical objects remain constant.

Critics of the notion question whether, just because physical objects can appear other than they are, there must be private, mental objects that have all the qualities the physical objects appear to have. There are also problems regarding the individuation and duration of sense-data and their relations to the physical surfaces of the objects we perceive. Contemporary proponents counter that speaking only of how things appear cannot capture the full structure within perceptual experience that is captured by talk of apparent objects and their qualities.

Others, who do not think that this wish can be satisfied, and who are impressed with the role of experience in providing animals with ecologically significant information about the world around them, claim that sense experiences represent properties, characteristics and kinds that are much richer and more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire; we do not smell only odours, but also food and filth. There is no space here to examine the factors relevant to a choice between these alternatives; we shall simply note when an assumption is incompatible with a position under discussion.

Given the modality and content of a sense experience, most of us will be aware of its character even though we cannot describe that character directly. This suggests that character and representational content are not really distinct, or at least that there is a close tie between them. For one thing, the relative complexity of the character of a sense experience places limitations on its possible content: a tactile experience of something touching one's left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Furthermore, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences: the sort of gustatory experience that we have when eating chocolate would not represent chocolate unless chocolate normally caused it. Granting a contingent tie between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.

Character and content are none the less irreducibly different, for the following reasons: (i) there are experiences that completely lack content, e.g., certain bodily pleasures; (ii) not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance; (iii) experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different; and (iv) the content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may come to have the content ‘singing birds' only after the subject has learned something about birds.

According to the act/object analysis of experience, which is a special case of the act/object analysis of consciousness, every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one phenomenological and the other semantic.

In outline, the phenomenological argument is as follows. Whenever we have an experience, we seem to be presented with something through the experience, the experience itself being diaphanous. The object of the experience is whatever is so presented to us, be it an individual thing, an event or a state of affairs.

The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following: (1) simple attributions of experience, e.g., ‘Rod is experiencing a pink square', seem to be relational; (2) we appear to refer to objects of experience and to attribute properties to them, e.g., ‘the after-image that John experienced was green'; (3) we appear to quantify over objects of experience, e.g., ‘Macbeth saw something that his wife did not see'.

The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are ‘sense-data', private mental entities that actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum may actually have a determinable property without having any determinate property subordinate to it. Even more disturbing is that sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate on a nearby rock, you are likely to have an experience of the rock's moving upward while it remains in exactly the same place. The sense-datum theorist must either deny that there are such experiences or admit contradictory objects.

These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience, however complex, presents us with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience with objects of perception in the case of experiences that constitute perception.

According to the act/object analysis of experience, every experience with representational content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences none the less appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, treating them as properties, as Meinongian objects (which may not exist or have any form of being), or, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data' is now usually applied to the latter, but it has also been used as a general term for objects of sense experience, as in the work of G. E. Moore.) In terms of representative realism, objects of perception, of which we are ‘indirectly aware', are always distinct from objects of experience, of which we are ‘directly aware'. Meinongians, however, may treat some objects of experience as existing objects of perception. Meinong's most famous doctrine derives from the problem of intentionality, which led him to countenance objects, such as the golden mountain, that are capable of being objects of thought although they do not actually exist. This doctrine was one of the principal targets of Russell's theory of ‘definite descriptions'; however, it came as part of a complex and interesting package of concepts in the theory of meaning, and scholars are not united on whether Russell was fair to it. Meinong's works include "Über Annahmen" (1907), trs. as "On Assumptions" (1983), and "Über Möglichkeit und Wahrscheinlichkeit" (1915). But most philosophers will feel that the Meinongian's acceptance of impossible objects is too high a price to pay for these benefits.

A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing, as opposed to having exactly similar experiences, appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But in terms of the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but it is none the less answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connection with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to and quantification over experiences themselves, tacitly typed according to content. Thus, ‘the after-image that John experienced was green' becomes ‘the after-image experience that John had was an experience of green', and ‘Macbeth saw something that his wife did not see' becomes ‘Macbeth had a visual experience that his wife did not have'.

Pure cognitivism, by contrast, attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions, e.g., we might identify Susy's experience of a rough surface beneath her hand with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it that is somehow blocked.

This position has attractions. It does full justice to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physical/functionalist account of belief and other intentional states. But pure cognitivism is completely undermined by its failure to accommodate the fact that experiences have a felt character that cannot be reduced to their content.

The adverbial theory of experience advocates that the grammatical object of a statement attributing an experience to someone be analysed as an adverb, for example,

Rod is experiencing a pink square.

is rewritten as

Rod is experiencing (pink square)-ly.

The adverbial theory is thus an attempt to provide a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on a sound basic intuition, and there is reason to believe that an effective development of the theory is possible, though here we can do no more than hint at it.

The relevant intuitions are: (i) that when we say that someone is experiencing ‘an A', or has an experience ‘of an A', we are using this content-expression to specify the type of thing that the experience is especially apt to fit; (ii) that doing this is a matter of saying something about the experience itself (and perhaps also about the normal causes of like experiences); and (iii) that there is no good reason to suppose that it involves the description of an object of which the experience is an experience. Thus, the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.

A final position that we should mention is the state theory, according to which a sense experience of an ‘A' is an occurrent, non-relational state of the kind that the subject would be in when perceiving an ‘A'. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuition.

Perceptual knowledge is knowledge acquired by or through the senses, and this includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something, that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up, by some sensory means. Seeing that the light has turned green is learning something, that the light has turned green, by use of the eyes. Feeling that the melon is overripe is coming to know a fact, that the melon is overripe, by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.

Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, another fact, in a more direct way. We see, by the newspaper, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other sound-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, hence comes to know something about, the gauge (that it reads ‘empty'), the newspaper (what it says) or the person's expression, one would not see, hence know, what one is described as coming to know. If one cannot hear that the bell is ringing, one cannot, not at least in this way, hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that ‘a' is ‘F', coming to know thereby that ‘a' is ‘F', by seeing (hearing, etc.) that some other condition, ‘b's' being ‘G', obtains. The knowledge that ‘a' is ‘F' is derived from, or dependent on, the more basic perceptual knowledge that ‘b' is ‘G'.

Though perceptual knowledge about objects is often, in this way, dependent on knowledge of facts about different objects, the derived knowledge is sometimes about the same object. That is, we see that ‘a' is ‘F' by seeing, not that another object is ‘G', but that ‘a' itself is ‘G'. We see, by her expression, that she is nervous. She tells that the fabric is silk (not polyester) by the characteristic ‘greasy' feel of the fabric itself (not, as I do, by what is printed on the label). We tell whether it is a maple tree, a Porsche, a geranium, an igneous rock or a misprint by its shape, colour, texture, size, behaviour and distinctive markings. Perceptual knowledge of this sort is also derived, derived from the more basic facts (about ‘a') that we use to make the identification. In this case the perceptual knowledge is still indirect because, although the same object is involved, the facts we come to know about it are different from the facts that enable us to know it.

We sometimes describe derived knowledge as inferential, but this is misleading. At the conscious level there is no passage of the mind from premise to conclusion, no conscious reasoning or problem-solving. The observer, the one who sees that ‘a' is ‘F' by seeing that ‘b' (or ‘a' itself) is ‘G', need not be, and typically is not, aware of any process of inference, any passage of the mind from one belief to another. The resulting knowledge, though logically derivative, is psychologically immediate. I could see that she was getting angry, so I moved my hand. I did not, at least not at any conscious level, infer (from her expression and behaviour) that she was getting angry. I could simply (or so it seemed to me) see that she was getting angry. It is this psychological immediacy that makes indirect perceptual knowledge a species of perceptual knowledge.

The psychological immediacy that characterizes so much of our perceptual knowledge, even (sometimes) the most indirect and derived forms of it, does not mean that no learning is required to know in this way. One is not born with (and may, in fact, never develop) the ability to recognize daffodils, muskrats and angry companions. It is only after long experience that one is able visually to identify such things. Beginners may do something corresponding to inference: they recognize relevant features of trees, birds and flowers, features they already know how to identify perceptually, and then infer (conclude), on the basis of what they see, and under the guidance of more expert observers, that it is an oak, a finch or a geranium. But experts (and we are all experts on many aspects of our familiar surroundings) do not typically go through such a process. The expert just sees that it is an oak, a finch or a geranium. The perceptual knowledge of the expert is still dependent, of course, since even an expert cannot see what kind of flower it is if she cannot first see its colour and shape, but the expert has developed identificatory skills that no longer require the sort of conscious inferential process that characterizes a beginner's efforts.

It would seem, moreover, that these background assumptions, if they are to yield knowledge that ‘a' is ‘F', as they must if the observer is to see (by ‘b's' being ‘G') that ‘a' is ‘F', must themselves qualify as knowledge. For if this background fact is not known, if one does not know whether ‘a' is ‘F' when ‘b' is ‘G', then the knowledge of ‘b's' being ‘G' is, taken by itself, powerless to generate the knowledge that ‘a' is ‘F'. If the conclusion is to be known to be true, both of the premises used to reach that conclusion must be known to be true, or so it would seem.

Externalists, however, argue that the indirect knowledge that ‘a' is ‘F', though it may depend on the knowledge that ‘b' is ‘G', does not require knowledge of the connecting fact, the fact that ‘a' is ‘F' when ‘b' is ‘G'. Simple belief (or, perhaps, justified belief; there are stronger and weaker versions of externalism) in the connecting fact is sufficient to confer knowledge of the connected fact. Even if, strictly speaking, I do not know that she is nervous whenever she fidgets like that, I can none the less see (hence recognize, or know) that she is nervous (by the way she fidgets) if I (correctly) assume that this behaviour is a reliable expression of nervousness. One need not know that the gauge is working well to make observations (acquire observational knowledge) with it. All that is required, besides the observer's believing that the gauge is reliable, is that the gauge, in fact, be reliable, i.e., that the observer's background beliefs be true. Critics of externalism have been quick to point out that this theory has the unpalatable consequence that knowledge can be made to rest on lucky hunches (that turn out true) and unsupported (even irrational) beliefs. Surely, internalists argue, if one is going to know that ‘a' is ‘F' on the basis of ‘b's' being ‘G', one should have (as a bare minimum) some justification for thinking that ‘a' is ‘F', or is probably ‘F', when ‘b' is ‘G'.

Whatever view one takes about these matters (with the possible exception of extreme externalism), indirect perception obviously requires some understanding (knowledge? justification? belief?) of the general relationship between the fact one comes to know (that 'a' is 'F') and the facts (that 'b' is 'G') that enable one to know it. And it is this requirement on background knowledge or understanding that leads to questions about the possibility and character of indirect perceptual knowledge. Is it really knowledge? The first question, inspired by sceptical doubts, is whether we can ever know the connecting facts in question. How is it possible to learn, to acquire knowledge of, the connecting facts knowledge of which is necessary to see (by b's being 'G') that 'a' is 'F'? These connecting facts do not appear to be perceptually knowable. Quite the contrary: they appear to be general truths knowable (if knowable at all) only by inductive inference from past observations. And if one is sceptical about obtaining knowledge in this indirect, inductive way, one is, perforce, sceptical about indirect knowledge generally, including indirect perceptual knowledge of the sort described above, dependent as it is on knowledge of such connecting facts.

Even if one puts aside such sceptical questions, there remains a legitimate concern about the perceptual character of this kind of knowledge. If one sees that 'a' is 'F' by seeing that 'b' is 'G', is one really seeing that 'a' is 'F'? Isn't perception merely a part, and perhaps, from an epistemological standpoint, not the most important part, of the process whereby one comes to know that 'a' is 'F'? One must, it is true, see that 'b' is 'G', but this is only one of the premises needed to reach the conclusion (knowledge) that 'a' is 'F'. There is also the background knowledge that is essential to the process. If we think of a theory as any factual proposition, or set of factual propositions, that cannot itself be known in some direct observational way, we can express this worry by saying that indirect perception is always theory-loaded: seeing (indirectly) that 'a' is 'F' is only possible if the observer already has knowledge of (justification for, belief in) some theory, the theory 'connecting' the fact one comes to know (that 'a' is 'F') with the fact (that 'b' is 'G') that enables one to know it.

This, of course, reverses the standard foundationalist picture of human knowledge. Instead of theoretical knowledge depending on, and being derived from, perception, perception of the indirect sort presupposes a prior knowledge of theories.

Foundationalists are quick to point out that this apparent reversal in the structure of human knowledge is only apparent. Our indirect perception of facts depends on theory, yes, but this merely shows that indirect perceptual knowledge is not part of the foundation. To reach the kind of perceptual knowledge that lies at the foundation, we need to look at a form of perception that is purified of all theoretical elements. This, then, will be perceptual knowledge pure and direct. No background knowledge or assumptions about connecting regularities are needed in direct perception, because the known facts are presented directly and immediately, not (as in indirect perception) on the basis of other facts. In direct perception all the justification (needed for knowledge) is right there in the experience itself.

What, then, about the possibility of perceptual knowledge pure and direct, the possibility of coming to know, on the basis of sensory experience, that 'a' is 'F' where this does not require, and in no way presupposes, background assumptions or knowledge that has a source outside the experience itself? Where is this epistemological 'pure gold' to be found?

There are, basically, two views about the nature of direct perceptual knowledge (coherentists would deny that any of our knowledge is basic in this sense). We can call these views (following traditional nomenclature) direct realism and representationalism, or representative realism. A representationalist restricts direct perceptual knowledge to objects of a very special sort: ideas, impressions or sensations (sometimes called sense-data), entities in the mind of the observer. One directly perceives a fact, i.e., that 'b' is 'G', only when 'b' is a mental entity of some sort, a subjective appearance or sense-datum, and 'G' is a property of this datum. Knowledge of these sensory states is supposed to be certain and infallible. These sensory facts are, so to speak, right up against the mind's eye. One cannot be mistaken about these facts, for these facts are, in reality, facts about the way things appear to be, and one cannot be mistaken about the way things appear to be. Normal perception of external conditions, then, turns out to be (always) a type of indirect perception. One 'sees' that there is a tomato in front of one by seeing that the appearances (of the tomato) have a certain quality (reddish and bulgy) and inferring (this is typically said to be automatic and unconscious), on the basis of certain background assumptions, e.g., that there typically is a tomato in front of one when one has experiences of this sort, that there is a tomato in front of one. All knowledge of objective reality, then, even what common sense regards as the most direct perceptual knowledge, is based on an even more direct knowledge of the appearances.

For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is 'loaded' with the theory that there is some regular, uniform correlation between the way things appear (known in a perceptually direct way) and the way things actually are (known, if known at all, in a perceptually indirect way).

The second view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though the direct realist is willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge or belief. The justification needed for the knowledge is right there in the experience itself.

To understand the way this is supposed to work, consider an ordinary example. 'S' identifies a banana (learns that it is a banana) by noting its shape and colour, perhaps even tasting and smelling it (to make sure it's not wax). In this case the perceptual knowledge that it is a banana is, the direct realist admits, indirect: dependent on S's perceptual knowledge of its shape, colour, smell and taste. 'S' learns that it is a banana by seeing that it is yellow, banana-shaped, etc. None the less, S's perception of the banana's colour and shape is direct. 'S' does not see that the object is yellow, for example, by seeing (knowing, believing) anything more basic, either about the banana or about anything else, e.g., his own sensations of the banana. 'S' has simply learned to identify such things by sight. What 'S' does is not make an inference, even an unconscious inference, from other things he believes. What 'S' has acquired is a cognitive skill, a disposition to believe of the yellow objects he sees that they are yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S's identificatory success will depend on his operating in certain special conditions, of course. 'S' will not, perhaps, be able visually to identify yellow objects in dramatically reduced lighting, from unusual viewing angles, or when afflicted with certain nervous disorders. But the fact that 'S' can see that something is yellow only in such conditions does not show that his perceptual knowledge (that 'a' is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. It merely shows that direct perceptual knowledge is the result of exercising a skill, an identificatory skill, that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane. He needs normal conditions to do what he has learned to do. So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do. They need normal conditions to see, for example, that something is yellow. But they do not, any more than the basketball player, have to know they are in these conditions to do what being in these conditions enables them to do.

This means, of course, that for the direct realist direct perceptual knowledge is fallible and corrigible. Whether 'S' sees that 'a' is 'F' depends on his being caused to believe that 'a' is 'F' in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then 'S' sees (hence, knows) that 'a' is 'F'. If they aren't, he doesn't. Whether or not 'S' knows depends, then, not on what else (if anything) 'S' believes, but on the circumstances in which 'S' comes to believe. This being so, this type of direct realism is a form of externalism. Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed (by way of justification) for such knowledge has been reduced. Background knowledge is not needed.

This means that the foundations of knowledge are fallible. All the same, though fallible, they are in no way derived; that is what makes them foundations. Even if they are brittle, as foundations sometimes are, everything else rests upon them.

Philosophical knowledge: one can formulate a traditional view of philosophical knowledge, by contrast with scientific investigation, as follows. The two types of investigation differ both in their methods (the former is intuitive and deductive, the latter empirical) and in the metaphysical status of their results (the former yields facts that are metaphysically necessary, the latter facts that are metaphysically contingent). Yet the two types of investigation resemble each other in that both, if successful, uncover new facts, and these facts, although expressed in language, are generally not about language, except for investigations in such specialized areas as the philosophy of language and empirical linguistics.

This view of philosophical knowledge has considerable appeal, but it faces problems. First, the conclusions of some common philosophical arguments seem preposterous. Such positions as that it is no more reasonable to eat bread than arsenic (because it is only in the past that bread has nourished while arsenic has poisoned), or that one can never know that he is not dreaming, may seem to go so far against common sense as to be unacceptable for that very reason. Second, philosophical investigation does not lead to a consensus among philosophers. Philosophy, unlike the sciences, lacks an established body of generally agreed-upon truths. Moreover, philosophy lacks an unequivocally applicable method of settling disagreements. (The qualifier 'unequivocally applicable' is meant to forestall the objection that philosophy does have a method, namely intuitive deductive argumentation; for there is often unresolvable disagreement about which side has won a philosophical argument.)

In the face of these and other considerations, various philosophical movements have rejected the above traditional view of philosophical knowledge. Thus, verificationism responds to the unresolvability of traditional philosophical disagreements by putting forth a criterion of literal meaningfulness: a statement is literally meaningful if and only if it is either analytic or empirically verifiable (Ayer, 1952), where a statement is analytic iff its truth is just a matter of definition. Traditional controversial philosophical views, such as that having knowledge of the world outside one's own mind is metaphysically impossible, would count as neither analytic nor empirically verifiable, and hence as literally meaningless.

Various objections have been raised to this verification principle. The most important is that the principle is self-refuting: when one attempts to apply the verification principle to itself, the result is that the principle comes out as literally meaningless, and therefore not true, because it is neither empirically verifiable nor analytic. This move may seem like a trick, but it reveals a deep methodological problem with the verificationist approach. The verification principle is designed to delegitimize all controversy that is resolvable neither empirically nor by recourse to definitions. The principle itself, however, is resolvable neither empirically nor by recourse to definitions. The principle is an attempt to rule out synthetic a priori claims as matters of legitimate debate, yet the principle itself is both synthetic a priori and controversial. It is ironic that the self-refutingness of the verification principle is one of the very few points on which philosophers nowadays approach consensus.

Ordinary language philosophy, another twentieth-century attempt to delegitimize traditional philosophical problems, faces a parallel but less widely recognized problem of self-refutingness. Just as verificationism can be characterized as reacting against the unresolvability of traditional philosophical disputes, ordinary language philosophy can be characterized as reacting against their counterintuitiveness. The ordinary language philosopher rejected counterintuitive philosophical positions (such as the view that time is unreal or that one can never know anything about other minds) by saying that these views 'go against ordinary language' (Malcolm, in Rorty, 1970), i.e., that these views go against the way the ordinary person uses such terms as 'know' and 'unreal', since the ordinary person would reject the above counterintuitive statements about knowledge and time. On the ordinary language view, it follows that the sceptic does not mean the same thing by 'know' as does the non-philosopher, since they use the term differently and meaning is use. Thus, on this view, sceptics and anti-sceptics no more disagree about knowledge than someone who says 'Banks are financial institutions' and someone who says 'Banks are the shores of rivers' disagree about banks.

An obvious objection here is that many factors besides meaning help to determine use. For example, two people who disagree about whether the world is round use the word 'round' differently, in that one applies it to the world while the other does not, yet they do not thereby mean different things by 'world' or 'round'. Ordinary language philosophy allows that this aspect of use is not part of the meaning, since it rests on a disagreement about empirical facts. But in relegating all non-empirical disagreements to differences in linguistic meaning, the ordinary language philosopher denies the possibility of substantive, non-linguistic disagreement over non-empirical matters and thus, like the verificationist, delegitimizes traditional philosophy. Malcolm, for instance, holds that 'if a child that was learning the language were to say, in a situation where we were sitting in a room with chairs about, that it was "highly probable" that there were chairs there, we should smile and correct his language'. Malcolm may be right about this case, since it is so unlikely that a child would have independently developed a sceptical philosophy. Nonetheless, a parallel response seems obviously inappropriate as a reply to a philosopher who says 'One can never know that one is not dreaming', or, for that matter, as a reply to an inept arithmetic student who says '33 = 12 + 19'. If it were true that a philosopher uttering the first of these sentences was not using 'know' in the usual sense, he could not convey his philosophical views to a French speaker by uttering the sentence's French translation ('On ne peut jamais savoir qu'on ne rêve pas'), any more than one can convey his eight-year-old cousin Mary's opinion that her teacher is vicious by saying 'Mary's teacher is viscous', if Mary wrongly thinks 'viscous' means 'vicious' and continues using it that way.
In fact, however, translating 'know' and its cognates into their French synonyms obviously does not prevent an English-speaking sceptic from accurately representing his views in French. The ordinary language view that all non-empirical disagreements are linguistic disagreements entails that if someone rejects the sentence 'a is F' when this sentence expresses an a priori proposition, then what property he takes 'F' to express must differ from what we mean by 'F'. However, this obviously goes against the Malcolmian 'ordinary use' of the term 'meaning', i.e., against what ordinary people, once they understand the term 'meaning', believe a priori about the extension of the term 'meaning'. For example, the ordinary man would deny that the inept student mentioned above means something unusual by his words when he says '33 = 12 + 19'. Like the earlier objection of self-refutingness to verificationism, this objection reveals a deep methodological problem. Just as synthetic a priori controversy cannot be ruled out by a principle that is itself synthetic a priori and controversial, a priori counterintuitiveness cannot be ruled out by a principle that is itself a priori and counterintuitive.

Criteria and knowledge: except for alleged cases of things that are evident for one just by being true, it has often been thought that anything that is known must satisfy certain 'criteria', as well as being true. These criteria are general principles specifying the sorts of considerations that will make some proposition evident, or just make accepting it warranted to some degree. Common suggestions include these: if one clearly and distinctly conceives a proposition 'p', e.g., that 2 + 2 = 4, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. Such principles might be criteria by which putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the evident status they already have without criteria to other propositions like 'p'; or they might be criteria by which purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not themselves be evident or warranted, originally 'create' p's epistemic status. If that status, in turn, can be 'transmitted' to other propositions, e.g., by deduction or induction, criteria will specify when it is. In short, criteria are general principles specifying what sorts of considerations 'C' will make a proposition 'p' evident to us.

Traditionally, suggestions include: (a) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or simply, (b) if we cannot conceive 'p' to be false, then 'p' is evident; or (c) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria by which putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', transmit the evident status they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria by which epistemic status, e.g., p's being evident, is 'originally created' by purely non-epistemic considerations, e.g., facts about how 'p' arises in thought or experience, considerations that need not themselves be evident or warranted.

However it is 'originally created', epistemic status, including degrees of warranted acceptance or probability, can presumably be 'transmitted' deductively from premises to conclusions. Criteria must then say when, and to what degree, e.g., 'p and q' is warranted, given the epistemic considerations that 'p' is warranted and so is 'q'. (Must the logical connection itself be evident?) Status is also transmitted inductively, as when evidence that observed things of some type have regularly been 'F' warrants acceptance, in the absence of undermining (overriding) evidence, of an unobserved thing of that type as 'F'. Such warrant is defeasible. Thus, despite regular observations of black crows, thinking an unobserved crow black might not be very warranted if there have recently been radiation changes potentially affecting bird colour.

Traditionally, criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanations of data, never make things evident or warrant their acceptance enough to count as knowledge.

Contemporary philosophers, however, have defended criteria by which, e.g., considerations concerning a person's facial expression may (defeasibly) make her pain or anguish evident (Lycan, 1971). More often, they have argued for criteria by which some propositions about perceived reality can be made evident by sense experience itself, or by evident propositions about it. For instance, absent relevant evidence that perception is currently unreliable, it is evident we actually see a pink square if we have the sense experience of seeming to see a pink square (Pollock, 1986); or if it is evident we have such experience; or if in sense experience we spontaneously think we see a pink square. The experiential consideration allegedly can be enough to make reality evident, although defeasibly. It can do this on its own, and does not need support from further considerations such as the absence of undermining evidence or inductive evidence for a general link between experience and reality. Of course, there can be undermining evidence. So we need criteria that determine when evidence undermines and when it ceases to undermine.

Warrant might also be increased, not just 'passed on'. The coherence of probable propositions with other probable propositions might (defeasibly) make them all more evident (Firth, 1964). Thus, even if seeming to see a chair initially made a chair's presence only probable, its presence might eventually become evident by cohering with claims about chair perception in other cases (Chisholm, 1989). The latter may be warranted in turn by 'memory' and 'introspection' criteria, as often suggested, by which recalling or introspecting 'p' defeasibly warrants p's acceptance. Some philosophers argue further that coherence does not just increase warrant, and defend an overall coherence criterion: excluding perhaps initial warrant for propositions concerning our beliefs and their logical interrelations, what warrants any proposition to any degree for us is its coherence with the most coherent system of belief available (BonJour, 1985).

Contemporary epistemologists thus suggest that the traditional picture of criteria may need alteration in three ways. First, additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Second, warrant may be transmitted other than through deductive and inductive relations between propositions. Third, transmission criteria might not simply 'pass' warrant on linearly from a foundation of highly evident 'premisses' to 'conclusions' that are never more evident.

Criterial standards, then, take the form: if 'C', then (absent undermining evidence) 'p' is evident or warranted to degree 'd'. Arguably, criterial considerations need not play a role in the initial formation of our beliefs (Pollock, 1986). For them to be the standards of epistemic status for us, however, it is typically thought that criterial considerations must be ones in the light of which we can at least check, and perhaps correct, our judgements. As with justification and knowledge generally, the traditional view of criteria has been strongly internalist in character. Even a coherentist view can be internalist, if both the beliefs and other states with which a justified belief is required to cohere, and the coherence relations themselves, are reflectively accessible. What makes a view externalist, by contrast, is the absence of any requirement that the person for whom a belief is justified have cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, none the less be epistemically justified in accepting it. The traditional view, however, identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a competing account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.

Traditionally, epistemologists have therefore thought that criterial considerations must be at least discoverable through reflection or introspection, and thus must ultimately concern internal factors about our conceptions, thoughts or experience. Others, however, think that such checks must be publicly recognizable checks. Consider, in this connection, the private language argument of Wittgenstein's Philosophical Investigations, which is concerned with the concepts of, and relations between, the inner and the outer: self-states, avowals of experience and descriptions of experience. The term is sometimes used narrowly to refer to a single chain of argument in which Wittgenstein demonstrates the incoherence of the idea that sensation-names, names of experiences, are given meaning by association with a mental 'object' (e.g., the word 'pain' by association with the sensation of pain) or by mental (private) ostensive definition, in which a mental 'entity' supposedly functions as a sample (e.g., a mental image, stored in memory, is conceived as providing a paradigm for the application of the name).

A 'private language' is not a private code, which could be cracked by another person, nor a language spoken by only one person, which could be taught to others, but rather a putative language whose individual words refer to what can (apparently) be known only by the speaker, i.e., in empiricist jargon, to the 'ideas' in his mind. It has been a presupposition of the mainstream of modern philosophy, empiricist, rationalist and Kantian alike, of representational idealism, and of contemporary cognitive representationalism, that the languages we speak are such private languages, and that the foundations of language, no less than the foundations of knowledge, lie in private experience. To undermine this picture, with all its complex ramifications, is the purpose of Wittgenstein's private language argument.

The idea that the language each of us speaks is essentially private, that learning a language is a matter of associating words with, or ostensively defining words by reference to, subjective experience (the 'given'), and that communication is a matter of stimulating in the mind of the hearer a pattern of associations qualitatively identical with that in the mind of the speaker, is linked with multiple mutually supporting misconceptions about language, experience and its identity, the mental and its relation to behaviour, self-knowledge, and knowledge of the states of mind of others. If Wittgenstein is right, criterial considerations must ultimately concern public factors, e.g., that standard conditions (daylight, eyes open, etc.) for reliable perceptual reports obtain.

It remains to ask, nonetheless: what makes criteria correct? For many epistemologists, their correctness is an irreducible necessary truth, a matter of metaphysics or of our linguistic conventions concerning epistemic status and the considerations that determine it. Others object that it remains mysterious why particular considerations are criterial unless notions of the evident or warranted are further defined in non-epistemic terms. Criteria might be defined, for example, as principles reflecting our deepest self-critical thoughts about what considerations yield truth, or as norms of thought that practical rationality demands we adopt if we are to be effective agents. However, many will further object that satisfying criteria must somehow be tied to truth. They insist that, necessarily, (1) whatever is warranted has an objectively good chance of truth, and (2) whatever is evident is true, or almost invariably true. Epistemic notions allegedly lose their point unless they somehow measure a proposition's actual prospects for truth for us.

Against (1) and (2), a common objection is that no considerations relevantly guarantee truth, even for the most part, or in the long run (BonJour, 1985). This is not obvious with traditional putative criterial considerations like clear and distinct conception or immediate awareness. Nevertheless, critics argue, when talk of such considerations is unambiguously construed as talk of mental activity, and is not just synonymous with talk of clearly and distinctly or immediately knowing, there is no necessary connection between being criterially evident on the basis of such considerations and being true (Sellars, 1979). The mere coincidence, in some cases, that the proposition we conceive is true cannot be what makes the proposition evident.

Still, (1) and (2) might be necessary while the correctness of putative criteria is a contingent fact, given various facts about us and our world: it is no coincidence that adhering to these criteria leads to truth, almost invariably or frequently. Given our need to survive with limited intellectual resources and time, perhaps it is not surprising that in judging issues we demand only criterial considerations that are fallible, checkable, correctable and contingently truth-conducive. Nonetheless, specifying the relevant truth-connection is highly problematic. Moreover, reliability considerations now seem to be criterial for criteria, although facts about reliability, e.g., concerning perception, are not always accessible to introspection and reflection. Perhaps traditional accessibility requirements must be rejected. Perhaps, instead, what makes a putative criterion correct can differ from the criterial considerations that make its correctness evident. Thus, there might be criteria for (defeasibly) identifying criteria, e.g., whether propositions 'feel right', or are considered warranted, in 'thought experiments' where we imagine various putative considerations present and absent. Later reflection and inquiry might reveal what makes them all correct, e.g., reliability, or being designed by God or nature for our reliable use, etc.

In any case, if criterial considerations do not guarantee truth, knowledge will require more than truth and the satisfaction of even the most demanding criteria. Whether we know, say, that there is a pink cube before us on a particular occasion may also require that there fortunately be no discernible facts, e.g., of our presence in a hologram gallery, to undermine the experiential basis for our judgement; or, perhaps instead, that it be no accident that our judgement is true, rather than merely probably true, given the criteria we adhere to and the circumstances, e.g., our presence in a normal room. Claims that truths satisfying the relevant criteria are known can clearly be given many interpretations.

Many contemporary philosophers address these issues about criteria with untraditional approaches to meaning and truth. Pollock (1974), for example, argues that learning ordinary concepts like 'bird' or 'red' involves learning to make judgements with them in conditions, e.g., perceptual experiences, which warrant them, though defeasibly, inasmuch as we also learn to correct these judgements despite the presence of such conditions. These conditions are not logically necessary or sufficient for the truth of the judgements. Nonetheless, the identity of our ordinary concepts makes the criteria we learn for making judgements necessarily correct. Although not all warranted assertions are true, there is no idea of their truth completely divorced from what undefeated criterial considerations allow us to assert. However, satisfying criteria is still in some way compatible with future defeat, even frequent defeat, and with not knowing, just as it was compatible with error and defeat in more traditional accounts.

By appealing to defeasibly warranting criteria, then, it seems we cannot show that we know ‘p' rather than merely satisfy the criteria. Worse, critics argue that we cannot even have knowledge by satisfying such criteria. Knowing ‘p' allegedly requires more, but what evidence, besides that entitling us to claim the currently undefeated satisfaction of criteria, could entitle us to claim more, e.g., that ‘p' would not be defeated? Yet a knower, at least on reflection, must be entitled to give assurances concerning these further conditions (Wright, 1984). Otherwise, we would not be interested in a concept of knowledge as opposed to the evident or warranted. These contentions might be disputed in order to save a role for defeasibly warranting criteria. Yet why bother? Why can we not say that a pink cube manifests itself in visual experiences that are essentially different from those in which it merely appears present (McDowell, 1982)? We would thereby know objective facts through experiences that are criterial for them and make them indefeasibly evident. Nevertheless, to many, this requires a seamless, mystified fusion of appearance and reality. Alternatively, perhaps knowledge requires exercising an ability to judge accurately in specific relevant circumstances, but does not require criterial considerations that, as a matter of general principle, make propositions evident, even if only in the absence of undermining evidence, or contingently, no matter what the context. Arguably, however, our position for giving the relevant assurances does not improve with these new conditions for knowing.

Formulating general principles that determine when criterial warrant obtains and is not undermined is difficult (Pollock, 1974). So one might think that warrant in general depends just on what is presupposed as true and relevant in a potentially shifting context of thought or conversation, not on general criteria. However, defenders of criteria may protest that coherence, at least, remains as a criterion applicable across contexts.

It is often felt that ‘p' cannot be evident by satisfying criteria unless (a) criterial considerations evidently obtain, and it is evident either that (b) the criteria have certain correctness-making features, e.g., leading to truth, or that (c) the criteria are correct. Otherwise any conformity to pertinent standards is in a relevant sense only accidental (BonJour, 1985). Yet vicious regress or circularity looms, unless (a)-(c) or supporting propositions are evident without criteria. At worst, as sceptics argue, nothing can be warranted; at best, a consistent role for criteria is limited. A common reply is that being criterially warranted, by definition, just requires that adequate (checkable) criterial considerations in fact obtain, i.e., that (a)-(c) be true. There is no need to demand further cognitive achievements for which one or more of (a)-(c) must also be evident, e.g., actually checking that criterial considerations obtain, proving truth or likelihood of truth on the basis of these considerations, or proving warrant on their basis.

Even so, how can propositions stating which putative criteria are correct themselves be warranted? Any proposal for criterial warrant invokes the classic sceptical charge of vicious regress or circularity. Yet, again, arguably, as with ‘p' above, correct criteria must in fact be satisfied, but this fact itself need not already be warranted for us. So one might argue that there is no debilitating regress or circle of warrant, even when, as may happen with some criterion, its correctness is warranted ultimately only because it itself is satisfied (van Cleve, 1979). Independent, ultimately non-criterial, evidence is not needed. Nonetheless, suppose we argue that our criteria are correct because, e.g., they lead to truth, are confirmed by thought experiments, or are clearly and distinctly conceived as correct, etc. However we develop our arguments, they would not persuade those who, doubting the criteria we conform to, doubt our premises or their relevance. Dismissing such persuasive failures as merely conversational and irrelevant to our warrant, moreover, may strike sceptics and non-sceptics alike as question-begging or as arbitrarily altering what warrant requires. If the charge of ungrounded dogmatism is to be inappropriate, more than the consistency of criterial warrant, including warrant about warrant, may be required, no matter what putative criteria we conform to.

There remains, nevertheless, the problem of the criterion: the difficulty of how both to formulate the criteria, and to determine the extent, of knowledge and justified belief. The problem arises from the seeming plausibility of the following two propositions:

(1) I can identify instances (and thus determine the extent) of justified belief only if I already know the criteria of it.

(2) I can know the criteria of justified belief only if I can already identify the instances of it.

If both (1) and (2) were true, I would be caught in a circle: I could know neither the criteria nor the extent of justified belief. In order to show that both can be known after all, a way out of the circle must be found. The nature of this task is best illustrated by considering the four positions that may be taken concerning the truth-values of (1) and (2):

(a) Scepticism as to the possibility of constructing a theory of justification: Both (1) and (2) are true; consequently, I can know neither the criteria nor the extent of justified belief. (This kind of scepticism is restricted in its scope to epistemic propositions. While it allows for the possibility of justified beliefs, it denies that we can know which beliefs are justified and which are not.)

(b) (2) is true but (1) is false: I can identify instances of justification without applying a criterion.

(c) (1) is true but (2) is false: I can know the criteria of justified belief without prior knowledge of its instances.

(d) Both (1) and (2) are false: I can know the extent of justified belief without applying criteria, and vice versa.

The problem of the criterion may be seen as the problem of providing a rationale for a non-sceptical response, that is, for (b), (c), or (d).

Roderick Chisholm, who has devoted particular attention to this problem, calls the second response ‘particularism', and the third ‘methodism'. Hume, who drew a sceptical conclusion as to the extent of empirical knowledge using ‘deducibility from sense-experience' as the criterion of justification, was a methodist. Thomas Reid and G.E. Moore were particularists; they rejected Hume's criterion on the grounds that it turns obvious cases of knowledge into cases of ignorance. Chisholm advocates particularism as the correct response. His view, which has also become known as ‘critical cognitivism', may be summarized as follows. Criteria for the application of epistemic concepts are expressed by epistemic principles. The antecedent of such a principle states the non-normative ground on which the epistemic status ascribed by the consequent supervenes (cf. Chisholm, 1957, 1982). An example is the following:

If ‘S' is appeared to ‘F-ly', then ‘S' is justified in believing that there is an ‘F' in front of ‘S'.

According to this principle, a criterion for justifiably believing that there is something red in front of me is being ‘appeared to redly'. In constructing his theory of knowledge, Chisholm considers various principles of this kind, accepting or rejecting them depending on whether or not they fit what he identifies, without using any criterion, as the instances of justified belief. As a result of using this method, he rejects the principle above as too broad, and Hume's empiricist criterion (which, unlike the criteria Chisholm tries to formulate, states a necessary condition)

If ‘S' is justified in believing that there is an ‘F' in front of ‘S', then ‘S's' belief is deducible from ‘S's' sense-experience

as too narrow (Chisholm, 1977, 1982).

Regarding the viability of particularism, this approach raises the question of how it is possible to identify instances of justified belief without applying any criteria. Chisholm's answer rests on the premise that, in order to know, no criterion of knowledge or justification is needed (1982). He claims that this holds also for knowledge of epistemic facts. Suppose I am justified in believing that ‘p': What justifies me in believing that I am justified in believing that ‘p' is the same body of evidence that justifies me in believing that ‘p'. Put differently, both JJp and Jp supervene on the same non-epistemic ground (Chisholm, 1982). Thus, in order to become justified in believing myself to be justified in believing that ‘p', I need not apply any criterion of justified belief; I need only consider the evidence supporting ‘p'. The key assumption of particularism, then, is that in order to acquire knowledge of an epistemic fact, one need not apply, but only satisfy the antecedent condition of, the epistemic principle that governs the fact in question. Hence it is possible to have knowledge of epistemic facts such as ‘I am justified in believing that there is an ‘F' in front of me' without applying epistemic principles, and to use this knowledge in order to reject those principles that are either too broad or too narrow.

According to methodism, the correct solution to the problem proceeds the opposite way: Epistemic principles are to be formulated without using knowledge of epistemic facts. However, how could methodists distinguish between correct and incorrect principles, given that an appeal to instances of epistemic knowledge is illegitimate? Against what could they check the correctness of a putative principle? Unless the correct criteria are immediately obvious, which is doubtful, it remains unclear how methodists could rationally prefer one principle to another. Thus Chisholm rejects Hume's criterion not only because of its sceptical implications but also on grounds of its arbitrariness: Hume ‘leaves us completely in the dark' as to why we should adopt this particular criterion rather than another (1982). Particularists, then, accept proposition (2), and thus reject responses (c) and (d), both of which affirm that (2) is false.

One problem for particularism is that it appears to beg the question against scepticism (BonJour, 1985). In order to evaluate this criticism, it must be kept in mind that particularists reject criteria with sceptical consequences on the basis of instances, whereas sceptics reject instances of justification on the basis of criteria. This difference in methodology is illustrated by the following two arguments:

An Anti-Sceptical Argument

(A) If the ‘deducibility from sense-experience' criterion is correct, then I am not justified in believing that these are my hands.

(B) I am justified in believing that these are my hands.

Therefore:

(C) The ‘deducibility from sense-experience' criterion is not correct.

A Sceptical Argument

(A) If the ‘deducibility from sense-experience' criterion is correct, then I am not justified in believing that these are my hands.

(C) The ‘deducibility from sense-experience' criterion is correct.

Therefore:

(B) I am not justified in believing that these are my hands.

The problematic premises are (B) and (C). Particularists reject (C) on the basis of (B), and sceptics reject (B) on the basis of (C). Regarding question-begging, then, the situation is symmetrical: Each begs the question against the other. Who, though, has the better argument? Particularists would say that accepting (B) is more reasonable than accepting (C), because the risk of making an error in accepting a general criterion is greater than in taking a specific belief to be justified.
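The logical structure of the two arguments can be displayed side by side. A minimal schematic, abbreviating ‘the deducibility criterion is correct' as C and ‘I am justified in believing that these are my hands' as H: both arguments share the conditional premise (A), and they differ only in which remaining premise is affirmed.

```latex
% Shared premise (A): C -> not-H
% Anti-sceptical argument (modus tollens): from (A) and H, infer not-C.
% Sceptical argument (modus ponens): from (A) and C, infer not-H.
\begin{align*}
\text{Anti-sceptical:}\quad & C \to \neg H,\; H \;\therefore\; \neg C \quad \text{(modus tollens)}\\
\text{Sceptical:}\quad      & C \to \neg H,\; C \;\therefore\; \neg H \quad \text{(modus ponens)}
\end{align*}
```

Both inferences are valid; the dispute concerns only which of the two contested premises, (B) or (C), deserves acceptance.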
