November 30, 2009

The philosophical influence of the Principia was incalculable, and from Locke's Essay onward philosophers recognized Newton's work as a new paradigm of scientific method, without being entirely clear what parts reason and observation play in the edifice. Although Newton ushered in so much of the scientific world-view, in the General Scholium at the end of the Principia he argues that ‘it is not to be conceived that mere mechanical causes could give birth to so many regular motions', and hence that his discoveries pointed to the operations of God, ‘to discourse of whom from phenomena does certainly belong to natural philosophy'. Newton confesses that he has ‘not been able to discover the cause of those properties of gravity from phenomena': hypotheses non fingo (I feign no hypotheses). It was left to Hume to argue that the kind of thing Newton does, namely place the events of nature into law-like orders and patterns, is the only kind of thing that scientific enquiry can ever do.
‘Action at a distance' is a much contested concept in the history of physics. Aristotelian physics holds that every motion requires a conjoined mover: action can therefore never occur at a distance, but needs a medium enveloping the body, parts of which transmit its motion and push it from behind (antiperistasis). Although natural motions like free fall and magnetic attraction (quaintly called ‘coition') were recognized in the post-Aristotelian period, the rise of the ‘corpuscularian' philosophy again banned ‘attraction', or unmediated action at a distance: the classic argument is that ‘matter cannot act where it is not'. The corpuscularian philosophy, which Boyle expounded in his Sceptical Chymist (1661) and The Origin of Forms and Qualities (1666), held that all material substances are composed of minute corpuscles, themselves possessing shape, size, and motion. The different properties of materials would arise from different combinations and collisions of corpuscles: chemical properties, such as solubility, would be explicable by the mechanical interactions of corpuscles, just as the capacity of a key to turn a lock is explained by their respective shapes. In Boyle's hands the idea was opposed to the Aristotelian theory of elements and principles, which he regarded as untestable and sterile. His approach is a precursor of modern chemical atomism and had immense influence on Locke. However, Locke recognized the need for a different kind of force guaranteeing the cohesion of atoms, and both this cohesion and the interaction between such atoms were criticized by Leibniz.
Cartesian physical theory also postulated ‘subtle matter' to fill space and provide the medium for force and motion. Its successor, the aether, was postulated in order to provide a medium for transmitting forces and causal influences between objects that are not in direct contact. Even Newton, whose treatment of gravity might seem to leave it conceived as action at a distance, supposed that an intermediary must be postulated, although he could make no hypothesis as to its nature. Locke, having originally said that bodies act on each other ‘manifestly by impulse and nothing else', later changed his mind and struck out the words ‘and nothing else', although impulse remains ‘the only way which we can conceive bodies to operate in'. In the Metaphysical Foundations of Natural Science Kant clearly sets out the view that the way in which bodies impel each other is no more natural, or intelligible, than the way in which they act at a distance; in particular he repeats the point half-understood by Locke, that any conception of solid, massy atoms requires understanding the force that makes them cohere as a single unity, which cannot itself be understood in terms of elastic collisions. In many cases contemporary field theories admit of alternative equivalent formulations, one with action at a distance, one with local action only.
Two theories of unprecedented reach were given by Albert Einstein: the special theory of relativity (1905) and the general theory of relativity (1915).
The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had postulated absolute space. In electromagnetism the aether was supposed to give an absolute basis with respect to which motion could be determined. The Galilean transformation equations represent the set of equations:
x′ = x − vt
y′ = y
z′ = z
t′ = t
They are used for transforming the parameters of position and motion from an observer at the point O with co-ordinates (x, y, z) to an observer at O′ with co-ordinates (x′, y′, z′). The x-axis is chosen to pass through O and O′. The times of an event, t and t′, in the frames of reference of observers at O and O′ coincide; ‘v' is the relative velocity of separation of O and O′. These equations conform to Newtonian mechanics. By contrast, the Lorentz transformation equations transform the position and motion parameters from an observer at a point O(x, y, z) to an observer at O′(x′, y′, z′), the two moving at constant velocity with respect to one another; they replace the Galilean transformation equations of Newtonian mechanics in relativity problems. If the x-axes are chosen to pass through O and O′, and the times of an event are t and t′ in the frames of reference of the observers at O and O′ respectively, where the zeros of their time scales were the instants at which O and O′ coincided, the equations are:
x′ = γ(x − vt)
y′ = y
z′ = z
t′ = γ(t − vx/c²),
where ‘v' is the relative velocity of separation of O and O′, ‘c' is the speed of light, and ‘γ' is the factor:
(1 − v²/c²)⁻½.
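The transformation above can be evaluated numerically; the following Python sketch is illustrative (the function name and example values are assumptions, not part of the text):

```python
import math

def lorentz_transform(x, t, v, c=299_792_458.0):
    """Map an event's (x, t) in frame O to (x', t') in frame O',
    which moves at speed v along the shared x-axis:
    x' = gamma*(x - v*t), t' = gamma*(t - v*x/c**2)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c ** 2)

# At everyday speeds gamma is nearly 1, so the result approaches the
# Galilean values x - vt and t.
x_p, t_p = lorentz_transform(x=1000.0, t=1.0, v=10.0)
```

At v = 10 m/s the relativistic correction is of order 10⁻¹⁵, which is why the Galilean equations serve for ordinary mechanics.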
Newton's laws of motion: in his Principia (1687) Newton stated the three fundamental laws of motion, which are the basis of Newtonian mechanics. The First Law states that every body perseveres in its state of rest, or of uniform motion in a straight line, except in so far as it is compelled to change that state by forces impressed upon it. This may be regarded as a definition of force. The Second Law states that the rate of change of linear momentum is proportional to the force applied, and takes place in the straight line in which that force acts. This definition can be regarded as formulating a suitable way by which forces may be measured, that is, by the acceleration they produce:
F = d(mv)/dt,
i.e., F = ma + v(dm/dt),
where F = force, m = mass, v = velocity, t = time, and ‘a' = acceleration. In the majority of non-relativistic cases dm/dt = 0, i.e., the mass remains constant, and then:
F = ma.
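The constant-mass reduction can be made concrete with a small illustrative sketch (the function name and values are assumptions):

```python
def force(m, a, v=0.0, dm_dt=0.0):
    """Second law in general form, F = d(mv)/dt = m*a + v*(dm/dt).
    With dm_dt = 0 (constant mass) this reduces to F = m*a."""
    return m * a + v * dm_dt

# Constant mass: a 2 kg body accelerated at 3 m/s^2 requires 6 N.
f_const = force(2.0, 3.0)
# Varying mass (e.g. a cart shedding material): v*(dm/dt) contributes too.
f_var = force(2.0, 3.0, v=10.0, dm_dt=0.5)
```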
The Third Law states that forces are caused by the interaction of pairs of bodies. The force exerted by ‘A' upon ‘B' and the force exerted by ‘B' upon ‘A' are simultaneous, equal in magnitude, opposite in direction and in the same straight line, and are caused by the same mechanism.
The popular statement of this law in terms of ‘action and reaction' leads to much misunderstanding. In particular, any two forces that happen to be equal and opposite may be mistaken for such a pair even if they act on the same body; one force, arbitrarily called the ‘reaction', is supposed to be a consequence of the other and to happen subsequently; and the two forces are supposed to oppose each other, causing equilibrium. Moreover, certain forces, such as those exerted by supports or propellants, are conventionally called ‘reactions', causing considerable confusion.
The third law may be illustrated by the following examples. The gravitational force exerted by a body on the earth is equal and opposite to the gravitational force exerted by the earth on the body. The intermolecular repulsive force exerted on the ground by a body resting on it, or hitting it, is equal and opposite to the intermolecular repulsive force exerted on the body by the ground. A more general system of mechanics has been given by Einstein in his theory of relativity. This reduces to Newtonian mechanics when all velocities relative to the observer are small compared with that of light.
Einstein rejected the concepts of absolute space and time, and made two postulates: (i) the laws of nature are the same for all observers in uniform relative motion, and (ii) the speed of light is the same for all such observers, independently of the relative motions of sources and detectors. He showed that these postulates were equivalent to the requirement that the co-ordinates of space and time used by different observers should be related by the Lorentz transformation equations. The theory has several important consequences.
The transformation of time implies that two events that are simultaneous according to one observer will not necessarily be so according to another in uniform relative motion. This does not affect the order of causally related events, and so does not violate causality. It will appear to each of two observers in uniform relative motion that the other's clock runs slowly. This is the phenomenon of ‘time dilation'; for example, an observer moving with respect to a radioactive source finds a longer decay time than is found by an observer at rest with respect to it, according to:
Tv = T0/(1 − v²/c²)^½,
where Tv is the mean life measured by an observer at relative speed ‘v', T0 is the mean life measured by an observer at rest, and ‘c' is the speed of light.
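The dilation formula can be sketched numerically; the speed and lifetime below are illustrative assumptions chosen so that the factor comes out exactly:

```python
import math

def dilated_mean_life(t0, v, c=299_792_458.0):
    """Tv = T0 / (1 - v^2/c^2)^(1/2): mean life measured by an observer
    moving at speed v relative to the radioactive source."""
    return t0 / math.sqrt(1.0 - (v / c) ** 2)

# At v = 0.6c the dilation factor is 1.25, so a rest mean life of
# 2.0 microseconds is observed as 2.5 microseconds.
t_v = dilated_mean_life(2.0e-6, 0.6 * 299_792_458.0)
```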
This formula has been verified in innumerable experiments. One consequence is that no body can be accelerated from a speed below ‘c' with respect to any observer to one above ‘c', since this would require infinite energy. Einstein deduced that the transfer of energy ΔE by any process entailed the transfer of mass Δm, where ΔE = Δmc², so he concluded that the total energy ‘E' of any system of mass ‘m' would be given by:
E = mc²
The principle of conservation of mass states that the mass of any system is constant. Although conservation of mass was verified in many experiments, the evidence for it was limited. In contrast, the great success of theories assuming the conservation of energy established that principle, and Einstein assumed it as an axiom in his theory of relativity. According to this theory the transfer of energy ‘E' by any process entails the transfer of mass m = E/c². Therefore, the conservation of energy ensures the conservation of mass.
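The equivalence can be illustrated with a short sketch (the function names and the one-gram example are illustrative assumptions):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def energy_of_mass(m):
    """E = m*c^2: intrinsic energy (joules) of a mass m (kilograms)."""
    return m * C ** 2

def mass_of_energy(e):
    """m = E/c^2: mass transferred along with an energy transfer E."""
    return e / C ** 2

# One gram of mass corresponds to roughly 9 x 10^13 joules.
e_gram = energy_of_mass(1.0e-3)
```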
In Einstein's theory mass and energy are equivalent (‘inertial energy'). This leads to alternative statements of the principle, in which terminology is not generally consistent. The law of equivalence of mass and energy states that mass ‘m' and energy ‘E' are related by the equation E = mc², where ‘c' is the speed of light in a vacuum. Thus a quantity of energy ‘E' has a mass ‘m', and a mass ‘m' has intrinsic energy ‘E'. The kinetic energy of a particle as determined by an observer with relative speed ‘v' is thus (m − m0)c², which tends to the classical value ½mv² when v « c.
Attempts to express quantum theory in terms consistent with the requirements of relativity were begun by Sommerfeld (1915). Eventually Dirac (1928) gave a relativistic formulation of the wave mechanics of charged particles (fermions). This explained the concept of spin and the associated magnetic moment, which had been postulated to account for certain details of spectra. The theory led to results very important for the theory of elementary particles. The Klein-Gordon equation is the relativistic wave equation for bosons. It is applicable to bosons of zero spin, such as the pion. For example, the Klein-Gordon Lagrangian describes a single spin-0 scalar field φ:
L = ½[∂tφ ∂tφ − ∂xφ ∂xφ − ∂yφ ∂yφ − ∂zφ ∂zφ] − ½(2πmc/h)²φ²
Then:
∂L/∂(∂μφ) = ∂^μφ
leading to the equation:
∂L/∂φ = −(2πmc/h)²φ
and therefore the Lagrange equation requires that:
∂μ∂^μφ + (2πmc/h)²φ = 0,
which is the Klein-Gordon equation describing the evolution in space and time of the field φ. Individual excitations of the normal modes of φ are particles of spin 0 and mass ‘m'.
A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four co-ordinates: three spatial co-ordinates and one time co-ordinate. These co-ordinates define a four-dimensional space, and the motion of a particle can be described by a curve in this space, which is called Minkowski space-time. In certain formulations of the theory, use is made of a four-dimensional co-ordinate system in which three dimensions represent the spatial co-ordinates x, y, z and the fourth dimension is ict, where ‘t' is time, ‘c' is the speed of light and ‘i' is √−1; points in this space are called events. The equivalent of the distance between two points is the interval (Δs) between two events, given by a Pythagorean law in space-time as:
(Δs)² = Σij ηij Δxi Δxj,
where:
x = x1, y = x2, z = x3, ict = x4 and η11 = η22 = η33 = η44 = 1,
where ηij are the components of the Minkowski metric tensor. The distances between two points are not invariant under the Lorentz transformation, because the measurements of the positions of the points are made at instants that are simultaneous according to one observer but not according to another in uniform motion with respect to the first. By contrast, the interval between two events is invariant.
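The invariance of the interval can be checked numerically. The sketch below uses units with c = 1 and the equivalent real signature (+, −, −, −) rather than the ict convention; the names and values are illustrative assumptions:

```python
import math

def interval_sq(dt, dx, dy=0.0, dz=0.0):
    """Squared interval in units with c = 1, signature (+,-,-,-):
    (ds)^2 = dt^2 - dx^2 - dy^2 - dz^2."""
    return dt ** 2 - dx ** 2 - dy ** 2 - dz ** 2

def boost(dt, dx, v):
    """Lorentz boost along x at speed v (expressed as a fraction of c)."""
    g = 1.0 / math.sqrt(1.0 - v ** 2)
    return g * (dt - v * dx), g * (dx - v * dt)

# The interval between two events is the same for both observers,
# although the separate time and space differences are not.
dt2, dx2 = boost(2.0, 1.0, 0.5)
```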
The equivalent of a vector in the four-dimensional space is a ‘four-vector', which has three space components and one time component. For example, the four-vector momentum has a time component proportional to the energy of a particle; the four-vector potential has as its space components the magnetic vector potential, while its time component corresponds to the electric potential.
The special theory of relativity is concerned with relative motion between non-accelerated frames of reference. The general theory deals with general relative motion between accelerated frames of reference. In accelerated systems of reference, certain fictitious forces are observed, such as the centrifugal and Coriolis forces found in rotating systems. These are known as fictitious forces because they disappear when the observer transforms to a non-accelerated system. For example, to an observer in a car rounding a bend at constant speed, objects in the car appear to suffer a force acting outward. To an observer outside the car, this is simply their tendency to continue moving in a straight line. The inertia of the objects is seen to cause a fictitious force, and the observer can distinguish between non-inertial (accelerated) and inertial (non-accelerated) frames of reference.
A further point is that, to the observer in the car, all the objects are given the same acceleration whatever their mass. This implies a connection between the fictitious forces arising in accelerated systems and forces due to gravity, where the acceleration produced is likewise independent of the mass. Near the surface of the earth the acceleration of free fall, ‘g', is measured with respect to a nearby point on the surface. Because of the earth's axial rotation the reference point is accelerated toward the centre of the circle of its latitude, so ‘g' differs slightly in magnitude and direction from the acceleration toward the centre of the earth given by the theory of gravitation. In 1687 Newton presented his law of universal gravitation, according to which every particle attracts every other particle with a force ‘F' given by:
F = Gm1m2/d²,
where m1 and m2 are the masses of two particles a distance ‘d' apart, and ‘G' is the gravitational constant, which, according to modern measurements, has the value:
6.672 59 × 10⁻¹¹ m³ kg⁻¹ s⁻².
For extended bodies the forces are found by integration. Newton showed that the external effect of a spherically symmetric body is the same as if the whole mass were concentrated at the centre. Astronomical bodies are roughly spherically symmetric, so they can be treated as point particles to a very good approximation. On this assumption Newton showed that his law was consistent with Kepler's Laws. Until recently, all experiments had confirmed the accuracy of the inverse square law and the independence of the law of the nature of the substances involved, but in the past few years evidence has been found against both.
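The inverse-square law with the constant quoted above can be sketched as follows; the earth's mass and radius are rough illustrative figures, not from the text:

```python
G = 6.67259e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav_force(m1, m2, d):
    """Newton's law of universal gravitation: F = G*m1*m2 / d^2."""
    return G * m1 * m2 / d ** 2

# Treating the earth as a point mass at its centre (legitimate for a
# spherically symmetric body), the force on 1 kg at the surface comes
# out close to the familiar 9.8 N.
f_surface = grav_force(5.97e24, 1.0, 6.371e6)
```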
The size of a gravitational field at any point is given by the force exerted on unit mass at that point. The field strength at a distance ‘d' from a point mass ‘m' is therefore Gm/d², and is directed toward ‘m'. Gravitational field strength is measured in newtons per kilogram. The gravitational potential ‘V' at a point is the work done in moving a unit mass from infinity to the point against the field. Importantly: (a) the potential at a point a distance ‘d' from the centre of a hollow homogeneous spherical shell of mass ‘m', outside the shell, is:
V = −Gm/d
The potential is the same as if the mass of the shell were concentrated at the centre. (b) At any point inside the spherical shell the potential is equal to its value at the surface:
V = −Gm/r
where ‘r' is the radius of the shell; thus there is no resultant force acting at any point inside the shell, since no potential difference exists between any two points. (c) The potential at a point a distance ‘d' from the centre of a homogeneous solid sphere, outside the sphere, is the same as that for a shell:
V = −Gm/d
(d) At a point inside the sphere, of radius ‘r':
V = −Gm(3r² − d²)/2r³
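Cases (c) and (d) can be combined in one sketch, with the surface value serving as the consistency check between the two branches (the names and numbers are illustrative assumptions):

```python
G = 6.67259e-11  # gravitational constant, m^3 kg^-1 s^-2

def solid_sphere_potential(m, r, d):
    """Gravitational potential of a homogeneous solid sphere of mass m
    and radius r at distance d from its centre: -G*m/d outside,
    -G*m*(3*r**2 - d**2)/(2*r**3) inside."""
    if d >= r:
        return -G * m / d
    return -G * m * (3 * r ** 2 - d ** 2) / (2 * r ** 3)

# The two branches agree at the surface d = r, and the potential at
# the centre is 1.5 times its surface value.
v_surface = solid_sphere_potential(1.0e10, 2.0, 2.0)
v_centre = solid_sphere_potential(1.0e10, 2.0, 0.0)
```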
The essential property of gravitation is that it causes a change in motion, in particular the acceleration of free fall (g) in the earth's gravitational field. According to the general theory of relativity, gravitational fields change the geometry of space and time, causing it to become curved. It is this curvature of space and time, produced by the presence of matter, that controls the natural motions of bodies. General relativity may thus be considered as a theory of gravitation, differences between it and Newtonian gravitation appearing only when the gravitational fields become very strong, as with black holes and neutron stars, or when very accurate measurements can be made.
There is thus an equivalence between accelerated systems and forces due to gravity, where the acceleration produced is independent of the mass. For example, a person in a sealed container could not easily determine whether he was being pressed toward the floor by gravity or whether the container was in space and being accelerated upward by a rocket. Observations extended in space and time could distinguish between these alternatives, but otherwise they are indistinguishable. This leads to the ‘principle of equivalence', from which it follows that inertial mass is the same as gravitational mass. A further principle used in the general theory is that the laws of mechanics are the same in inertial and non-inertial frames of reference.
The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using Riemannian space-time, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any force is represented by a straight line in Minkowski space-time. In Riemannian space-time, the motion is represented by a line that is no longer straight in the Euclidean sense but is the line giving the shortest distance. Such a line is called a geodesic. Space-time is thus said to be curved. The extent of this curvature is given by the ‘metric tensor' for space-time, the components of which are solutions to Einstein's ‘field equations'. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.
The predictions of general relativity differ from those of Newton's theory only by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light or other electromagnetic radiation in the presence of large bodies, and the Einstein shift. Very close agreement between the predictions of general relativity and the accurately measured values has now been obtained. The ‘Einstein shift', or ‘gravitational red-shift', is a small red-shift in the lines of a stellar spectrum caused by the gravitational potential at the level in the star at which the radiation is emitted (for a bright line) or absorbed (for a dark line). In the easiest of terms, the shift can be explained by noting that a quantum of energy hν has mass hν/c². On moving between two points with gravitational potential difference ΔΦ, the work done is ΔΦhν/c², so the change of frequency Δν is νΔΦ/c².
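The quantum argument gives a simple numerical estimate of the shift near the earth's surface; the tower height below is an illustrative figure (that of the Pound-Rebka experiment), not taken from the text:

```python
g = 9.81           # acceleration of free fall near the surface, m/s^2
c = 299_792_458.0  # speed of light, m/s

def fractional_shift(height):
    """Fractional frequency shift of light climbing a height h near
    the earth's surface: delta_nu/nu = delta_phi/c^2 = g*h/c^2."""
    return g * height / c ** 2

# Over a 22.5 m tower the shift is a few parts in 10^15.
shift = fractional_shift(22.5)
```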
The assumptions on which Einstein's special theory of relativity (1905) rests are (i) that inertial frameworks are equivalent for the description of all physical phenomena, and (ii) that the speed of light in empty space is constant for every observer, regardless of the motion of the observer or the light source. Although the second assumption may seem plausible in the light of the Michelson-Morley experiment of 1887, which failed to find any difference in the speed of light in the direction of the earth's motion or when measured perpendicular to it, it seems likely that Einstein was not influenced by the experiment, and may not even have known the results. Because of the second postulate, no matter how fast she travels, an observer can never overtake a ray of light and see it as stationary beside her. However closely her speed approaches that of light, light still retreats from her at its classical speed. The consequences are that space, time and mass become relative to the observer. Measurements of quantities in an inertial system moving relative to one's own reveal slow clocks, with the effect increasing as the relative speed of the systems approaches the speed of light. Events deemed simultaneous as measured within one such system will not be simultaneous as measured from the other; time and space thus lose their separate identities and become parts of a single space-time. The special theory also has the famous consequence (E = mc²) of the equivalence of energy and mass.
Einstein's general theory of relativity (1916) treats of non-inertial systems, i.e., those accelerating relative to each other. The leading idea is that the laws of motion in an accelerating frame are equivalent to those in a gravitational field. The theory treats gravity not as a Newtonian force acting in an unknown way across distance, but as a metrical property of a space-time continuum that is curved near matter. Gravity can be thought of as a field described by the metric tensor at every point. The first serious non-Euclidean geometry is usually attributed to the Russian mathematician N.I. Lobachevski, writing in the 1820s. Euclid's fifth axiom, the axiom of parallels, states that through any point not falling on a straight line, one straight line can be drawn that does not intersect the first. In Lobachevski's geometry several such lines can exist. Later G.F.B. Riemann (1826-66) realized that the two-dimensional geometry that would be hit upon by persons confined to the surface of a sphere would be different from that of persons living on a plane: for example, π would be smaller, since the diameter of a circle, as drawn on a sphere, is large compared with the circumference. Generalizing, Riemann reached the idea of a geometry in which there are no straight lines that do not intersect a given straight line, just as on a sphere all great circles (the lines of shortest distance between two points) intersect.
The way then lay open to separating the question of the mathematical nature of a purely formal geometry from the question of its physical application. In 1854 Riemann showed that space of any curvature could be described by a set of numbers known as its metric tensor. For example, ten numbers suffice to describe the points of any four-dimensional manifold. To apply a geometry means finding coordinative definitions correlating the notions of the geometry, notably those of a straight line and an equal distance, with physical phenomena such as the path of a light ray, or the size of a rod at different times and places. The status of these definitions has been controversial, with some, such as Poincaré, seeing them simply as conventions, and others seeing them as important empirical truths. With the general rise of holism in the philosophy of science the question of status has abated a little, it being recognized simply that the coordination plays a fundamental role in physical science.
Meanwhile, the classic analogy for curved space-time is a rock sitting on a bed. If a heavy object is rolled across the bed, it is deflected toward the rock not by a mysterious force, but by the deformation of the space, i.e., the depression of the sheet around the rock: its path is a curvilinear trajectory. Interestingly, the general theory lends some credit to a version of the Newtonian absolute theory of space, in the sense that space itself is regarded as a thing with metrical properties of its own. The search for a unified field theory is the attempt to show that, just as gravity is explicable in terms of the nature of space-time, so are the other fundamental physical forces: the strong and weak nuclear forces, and the electromagnetic force. The theory of relativity is the most radical challenge to the ‘common-sense' view of space and time as fundamentally distinct from each other, with time as an absolute linear flow in which events are fixed in objective relationships.
After adaptive changes in the brains and bodies of hominids made it possible for modern humans to construct a symbolic universe using a complex language system, something quite dramatic and wholly unprecedented occurred. We began to perceive the world through the lenses of symbolic categories, to construct similarities and differences in terms of categorical priorities, and to organize our lives according to themes and narratives. Living in this new symbolic universe, modern humans felt a powerful compulsion to encode and recode experiences, to translate everything into representation, and to seek out the deeper hidden and underlying logic that eliminates inconsistencies and ambiguities.
The mega-narrative or frame tale that served to legitimate and rationalize the categorical oppositions and terms of relations between the myriad constructs in the symbolic universe of modern humans was religion. The use of religious thought for these purposes is quite apparent in the artifacts found among the fossil remains of people living in France and Spain forty thousand years ago. These artifacts provided the first concrete evidence that a fully developed language system had given birth to an intricate and complex social order.
Both religious and scientific thought seek to frame or construct reality as to origins, primary oppositions, and underlying causes, and this partially explains why fundamental assumptions in the Western metaphysical tradition were eventually incorporated into a view of reality that would later be called scientific. The history of scientific thought reveals that the dialogue between assumptions about the character of spiritual reality in ordinary language and the character of physical reality in mathematical language was intimate and ongoing from the early Greek philosophers to the first scientific revolution in the seventeenth century. However, this dialogue did not conclude, as many have argued, with the emergence of positivism in the eighteenth and nineteenth centuries. It was perpetuated in a disguised form in the hidden ontology of classical epistemology, the central issue in the Bohr-Einstein debate.
The assumption that a one-to-one correspondence exists between every element of physical reality and physical theory may serve to bridge the gap between mind and world for those who use physical theories. Still, it also suggests that the Cartesian division is real and insurmountable in constructions of physical reality based on ordinary language. This explains in no small part why the radical separation between mind and world sanctioned by classical physics and formalized by Descartes (1596-1650) remains, as philosophical postmodernism attests, one of the most pervasive features of Western intellectual life.
Nietzsche, in subverting the epistemological authority of scientific knowledge, posited a division between mind and world much starker than that originally envisioned by Descartes. What is not widely known, however, is that Nietzsche and other seminal figures in the history of philosophical postmodernism were very much aware of an epistemological crisis in scientific thought that arose much earlier than that occasioned by wave-particle dualism in quantum physics. This crisis resulted from attempts during the last three decades of the nineteenth century to develop a logically self-consistent definition of number and arithmetic that would serve to reinforce the classical view of correspondence between mathematical theory and physical reality. As it turned out, these efforts resulted in paradoxes of recursion and self-reference that threatened to undermine both the efficacy of this correspondence and the privileged character of scientific knowledge.
Nietzsche appealed to this crisis to reinforce his assumption that, without ontology, all knowledge (including scientific knowledge) was grounded only in human consciousness. As the crisis continued, a philosopher trained in higher mathematics and physics, Edmund Husserl (1859-1938), attempted to preserve the classical view of correspondence between mathematical theory and physical reality by deriving the foundations of logic and number from consciousness in ways that would preserve self-consistency and rigour. This effort to ground mathematical physics in human consciousness, or in human subjective reality, was no trivial matter; it represents a direct link between these early challenges to the efficacy of classical epistemology and the tradition in philosophical thought that culminated in philosophical postmodernism.
Since Husserl's epistemology, like that of Descartes and Nietzsche, was grounded in human subjectivity, a better understanding of his attempt to preserve the classical view of correspondence not only reveals more about the legacy of Cartesian dualism. It also suggests that the hidden and underlying ontology of classical epistemology was more responsible for the deep division and conflict between the two cultures of humanists-social scientists and scientists-engineers than was previously thought. The central question in this late-nineteenth-century debate over the status of the mathematical description of nature was the following: Is the foundation of number and logic grounded in classical epistemology, or must we assume, without any ontology, that the rules of number and logic are grounded only in human consciousness? In order to frame this question in the proper context, we should first examine in more detail the intimate and ongoing dialogue between physics and metaphysics in Western thought.
The history of science reveals that scientific knowledge and method did not emerge full-blown from the minds of the ancient Greeks any more than language and culture emerged fully formed in the minds of Homo sapiens sapiens. Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigation. Nevertheless, it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The philosophical debate that led to conclusions useful to the architects of classical physics can be briefly summarized as follows. Thales' fellow Milesian Anaximander claimed that the first substance, although indeterminate, manifested itself in a conflict of oppositions between hot and cold, moist and dry. The idea of nature as a self-regulating balance of forces was subsequently elaborated upon by Heraclitus (d. after 480 BC), who asserted that the fundamental substance is strife between opposites, which is itself the unity of the whole. It is, said Heraclitus, the tension between opposites that keeps the whole from simply 'passing away.'
Parmenides of Elea (b. c. 515 BC) argued in turn that the unifying substance is unique and static Being. This led to a conclusion about the relationship between ordinary language and external reality that was later incorporated into the view of the relationship between mathematical language and physical reality. Since thinking or naming involves the presence of something, said Parmenides, thought and language must be dependent upon the existence of objects outside the human intellect. Presuming a one-to-one correspondence between word and idea and actually existing things, Parmenides concluded that our ability to think or speak of a thing at various times implies that it exists at all times. Hence the indivisible One does not change, and all perceived change is an illusion.
These assumptions emerged in roughly the form in which they would be used by the creators of classical physics in the thought of the atomists, Leucippus (fl. 450-420 BC) and Democritus (c. 460-c. 370 BC). They reconciled the two dominant and seemingly antithetical concepts of the fundamental character of being, changing Becoming (Heraclitus) and unchanging Being (Parmenides), in a remarkably simple and direct way. Being, they said, is present in the invariable substance of the atoms that, through blending and separation, make up the things of the changing or becoming world.
The last remaining feature of what would become the paradigm for the first scientific revolution in the seventeenth century is attributed to Pythagoras (b. c. 570 BC). Like Parmenides, Pythagoras also held that the perceived world is illusory and that there is an exact correspondence between ideas and aspects of external reality. Pythagoras, however, had a different conception of the character of the idea that embodied this correspondence. The truth about the fundamental character of the unified and unifying substance, which could be uncovered through reason and contemplation, is, he claimed, mathematical in form.
Pythagoras established and was the central figure in a school of philosophy, religion, and mathematics; he was apparently viewed by his followers as semi-divine. For his followers the regular solids (symmetrical three-dimensional forms in which all sides are the same regular polygons) and whole numbers became revered essences of sacred ideas. In contrast with ordinary language, the language of mathematics and geometric forms seemed closed, precise, and pure. Provided one understood the axioms and notation, the meaning conveyed was invariant from one mind to another. The Pythagoreans felt that this language empowered the mind to leap beyond the confusion of sense experience into the realm of immutable and eternal essences. This mystical insight made Pythagoras the figure from antiquity most revered by the creators of classical physics, and it continues to have great appeal for contemporary physicists as they struggle with the epistemological implications of the quantum mechanical description of nature.
Progress was made in mathematics, and to a lesser extent in physics, from the time of classical Greek philosophy to the seventeenth century in Europe. In Baghdad, for example, from about AD 750 to AD 1000, substantial advances were made in medicine and chemistry, and the relics of Greek science were translated into Arabic, digested, and preserved. Eventually these relics re-entered Europe via the Arabic kingdoms of Spain and Sicily, and the works of figures like Aristotle (384-322 BC) and Ptolemy (fl. AD 127-148) reached the budding universities of France, Italy, and England during the Middle Ages.
For much of this period the Church provided the institutions, like the teaching orders, needed for the rehabilitation of philosophy. Nonetheless, the social, political, and intellectual climate in Europe was not ripe for a revolution in scientific thought until the seventeenth century. Even as late as the nineteenth century, the work of the new class of intellectuals we call scientists was more avocation than vocation, and the word 'scientist' did not appear in English until around 1840.
Copernicus (1473-1543) would have been described by his contemporaries as an administrator, a diplomat, an avid student of economics and classical literature, and, most notably, a highly honoured and highly placed church dignitary. Although we have named a revolution after him, this devoutly conservative man did not set out to create one. The placement of the Sun at the centre of the universe, which seemed right and necessary to Copernicus, was not a result of making careful astronomical observations. In fact, he made very few observations while developing his theory, and then only to ascertain whether his prior conclusions seemed correct. The Copernican system was also not any more useful in making astrological calculations than the accepted model, and was, in some ways, much more difficult to implement. What, then, was his motivation for creating the model, and his reasons for presuming that the model was correct?
Copernicus felt that the placement of the Sun at the centre of the universe made sense because he viewed the Sun as the symbol of the presence of a supremely intelligent and intelligible God in a man-centred world. He was apparently led to this conclusion in part because the Pythagoreans believed that fire exists at the centre of the cosmos, and Copernicus identified this fire with the fireball of the Sun. The only support that Copernicus could offer for the greater efficacy of his model was that it represented a simpler and more mathematically harmonious model of the sort that the Creator would obviously prefer. The language used by Copernicus in 'The Revolutions of the Heavenly Orbs' illustrates the religious dimension of his scientific thought: 'In the midst of all the sun reposes, unmoving. Who, indeed, in this most beautiful temple would place the light-giver in any other part than that from which it can illumine all other parts?'
The belief that the mind of God as Divine Architect permeates the workings of nature was central to the scientific thought of Johannes Kepler (1571-1630). For this reason, most modern physicists would probably feel some discomfort in reading Kepler's original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle with an intensity that might offend those who practice science in the modern sense of that word. Physical laws, wrote Kepler, 'lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His own image, in order . . . that we may take part in His own thoughts. Our knowledge of numbers and quantities is the same as that of God's, at least insofar as we can understand something of it in this mortal life.'
Believing, like Newton after him, in the literal truth of the words of the Bible, Kepler concluded that the word of God is also transcribed in the immediacy of observable nature. Kepler's discovery that the motions of the planets around the Sun were elliptical, as opposed to perfect circles, may have made the universe seem a less perfect creation of God. For Kepler, however, the new model placed the Sun, which he also viewed as the emblem of divine agency, more at the centre of a mathematically harmonious universe than the Copernican system allowed. Communing with the perfect mind of God requires, as Kepler put it, 'knowledge of numbers and quantity.'
Since Galileo did not use, or even refer to, the planetary laws of Kepler when those laws would have made his defence of the heliocentric universe more credible, his attachment to the god-like circle was probably a deeply rooted aesthetic and religious ideal. However, it was Galileo, even more than Newton, who was responsible for formulating the scientific idealism that quantum mechanics now forces us to abandon. In the 'Dialogue Concerning the Two Chief World Systems', Galileo said the following about the followers of Pythagoras: 'I know perfectly well that the Pythagoreans had the highest esteem for the science of number and that Plato himself admired the human intellect and believed that it participates in divinity solely because it is able to understand the nature of numbers. And I myself am inclined to make the same judgement.'
This article of faith, that mathematical and geometrical ideas mirror precisely the essences of physical reality, was the basis for the first scientific revolution. Yet the first scientific law of this new science, a constant describing the acceleration of bodies in free fall, could not be confirmed by experiment. The experiments conducted by Galileo in which balls of different sizes and weights were rolled simultaneously down an inclined plane did not, as he frankly admitted, yield precise results. Since the vacuum pump had not yet been invented, there was simply no way that Galileo could subject his law to rigorous experimental proof in the seventeenth century. Galileo believed in the absolute validity of this law in the absence of experimental proof because he also believed that movement could be subjected absolutely to the law of number. What Galileo asserted, as the French historian of science Alexandre Koyré put it, was 'that the real is, in its essence, geometrical and, consequently, subject to rigorous determination and measurement.'
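The law Galileo trusted can nevertheless be stated exactly in the language of number. A minimal sketch, using the modern value of the acceleration constant (which Galileo did not possess; the figure here is illustrative, not his), shows the form of the law and the 'odd-number rule' that follows from it:

```python
# Galileo's law of free fall in modern notation: s = (1/2) g t^2.
# The constant g is the modern measured value, used only to show the
# form of the law, not Galileo's own data.
g = 9.81  # acceleration of free fall, m/s^2

def distance_fallen(t):
    """Distance fallen from rest after t seconds."""
    return 0.5 * g * t ** 2

# Distances covered in successive equal time intervals stand in the
# ratio 1 : 3 : 5 : 7 (the 'odd-number rule'), which Galileo could test
# on inclined planes that slow the motion without changing the ratios.
increments = [distance_fallen(t + 1) - distance_fallen(t) for t in range(4)]
ratios = [round(d / increments[0], 6) for d in increments]
print(ratios)  # [1.0, 3.0, 5.0, 7.0]
```

The quadratic dependence on time, not the numerical constant, is what the inclined-plane experiments were meant to exhibit.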
The popular image of Isaac Newton (1642-1727) is that of a supremely rational and dispassionate empirical thinker. Newton, like Einstein, could concentrate unswervingly on complex theoretical problems until they yielded a solution. Yet what most consumed his restless intellect were not the laws of physics. Beyond believing, like Galileo, that the essences of physical reality could be read in the language of mathematics, Newton also believed, with perhaps even greater intensity than Kepler, in the literal truth of the Bible.
For Newton the mathematical language of physics and the language of biblical literature were equally valid sources of communion with the eternal. His writings on biblical subjects alone consist of more than a million words in his own hand, and some of his speculations seem quite bizarre by contemporary standards. The Earth, said Newton, will still be inhabited after the day of judgement, and heaven, or the New Jerusalem, must be large enough to accommodate both the quick and the dead. Newton then put his mathematical genius to work and determined the dimensions required to house this population; his precise estimate was 'the cube root of 12,000 furlongs.'
The point is that during the first scientific revolution the marriage between mathematical idea and physical reality, or between mind and nature via mathematical theory, was viewed as a sacred union. In our more secular age, the correspondence takes on the appearance of an unexamined article of faith or, to borrow a phrase from William James (1842-1910), 'an altar to an unknown god.' Heinrich Hertz, the famous nineteenth-century German physicist, nicely described what it is about the practice of physics that tends to inculcate this belief: 'One cannot escape the feeling that these mathematical formulae have an independent existence and an intelligence of their own, that they are wiser than we are, wiser than their discoverers, that we get more out of them than was originally put into them.'
Although Hertz made this statement without having to contend with the implications of quantum mechanics, the feeling he described remains among the most enticing and exciting aspects of physics. That elegant mathematical formulae provide a framework for understanding the origins and transformations of a cosmos of enormous age and dimensions is a staggering discovery for budding physicists. Professors of physics do not, of course, tell their students that the study of physical laws is an act of communion with the perfect mind of God or that these laws have an independent existence outside the minds that discover them. The business of becoming a physicist typically begins, however, with the study of classical or Newtonian dynamics, and this training provides considerable covert reinforcement of the feeling that Hertz described.
Perhaps the best way to examine the legacy of the dialogue between science and religion in the debate over the implications of quantum non-locality is to examine the source of Einstein's objections to quantum epistemology in more personal terms. Einstein apparently lost faith in the God portrayed in biblical literature in early adolescence. However, as the passage from his 'Autobiographical Notes' quoted here suggests, aspects of that faith carried over into his understanding of the foundations of scientific knowledge: 'Thus I came, despite the fact that I was the son of entirely irreligious (Jewish) parents, to a deep religiosity, which, however, found an abrupt end at the age of twelve. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic orgy of freethinking coupled with the impression that youth is intentionally being deceived by the state through lies; it was a crushing impression. Suspicion against every kind of authority grew out of this experience. It was clear to me that the religious paradise of youth, which was thus lost, was a first attempt to free myself from the chains of the "merely personal". The mental grasp of this extra-personal world within the frame of the given possibilities swam as highest aim half consciously and half unconsciously before the mind's eye.'
It was, suggested Einstein, belief in the word of God as it is revealed in biblical literature that allowed him to dwell in a 'religious paradise of youth' and to shield himself from the harsh realities of social and political life. In an effort to recover the inner sense of security that was lost after exposure to scientific knowledge, or to become free again of the 'merely personal', he committed himself to understanding the 'extra-personal world within the frame of the given possibilities', or, as seems obvious, to the study of physics. Although the existence of God as described in the Bible may have been in doubt, the qualities of mind that the architects of classical physics associated with this God were not. This is clear in Einstein's comments on the uses of mathematics: 'Nature is the realization of the simplest conceivable mathematical ideas, and one may be convinced that we can discover, by means of purely mathematical constructions, those concepts and those lawful connections between them that furnish the key to the understanding of natural phenomena. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. Nevertheless, the creative principle resides in mathematics. In a certain sense, therefore, it is true that pure thought can grasp reality, as the ancients dreamed.'
This article of faith, first articulated by Kepler, that 'nature is the realization of the simplest conceivable mathematical ideas' allowed Einstein to posit the first major law of modern physics much as it had allowed Galileo to posit the first major law of classical physics. During the period when the special and then the general theories of relativity had not yet been confirmed by experiment, and many established physicists viewed them as at best minor curiosities, Einstein remained entirely confident of their predictions. Ilse Rosenthal-Schneider, who visited Einstein shortly after Eddington's eclipse expedition confirmed a prediction of the general theory (1919), described Einstein's response to this news: 'When I was giving expression to my joy that the results coincided with his calculations, he said quite unmoved, "But I knew the theory was correct," and when I asked what if there had been no confirmation of his prediction, he countered: "Then I would have been sorry for the dear Lord; the theory is correct."'
Einstein was not given to making sarcastic or sardonic comments, particularly on matters of religion. These unguarded responses testify to his profound conviction that the language of mathematics allows the human mind access to immaterial and immutable truths existing outside the mind that conceived them. Although Einstein's belief was far more secular than Galileo's, it retained the same essential ingredients.
It was this same article of faith that was at stake in the twenty-three-year-long debate between Einstein and Bohr over the merits and limits of a physical theory. At the heart of this debate was the fundamental question, 'What is the relationship between the mathematical forms in the human mind called physical theory and physical reality?' Einstein did not believe in a God who spoke in tongues of flame from the mountaintop in ordinary language, and he could not sustain belief in the anthropomorphic God of the West. There is also no suggestion that he embraced ontological monism, or the conception of Being featured in Eastern religious systems like Taoism, Hinduism, and Buddhism. The closest that Einstein apparently came to affirming the existence of the 'extra-personal' in the universe was a 'cosmic religious feeling', which he closely associated with the classical view of scientific epistemology.
The doctrine that Einstein fought to preserve seemed the natural inheritance of physics until the advent of quantum mechanics. Although the mind that constructs reality might be evolving fictions that are not necessarily true or necessary in social and political life, there was, Einstein felt, a way of knowing purged of deceptions and lies. He was convinced that knowledge of physical reality in physical theory mirrors a preexistent and immutable realm of physical laws. As Einstein consistently made clear, this knowledge mitigates loneliness and inculcates a sense of order and reason in a cosmos that might otherwise appear bereft of meaning and purpose.
What most disturbed Einstein about quantum mechanics was the fact that this physical theory might not, in experiment or even in principle, mirror precisely the structure of physical reality. The inherent uncertainty in the measurement of quantum mechanical processes suggested to him that quantum theory could not be a complete theory. Einstein feared that accepting the theory as complete would force us to recognize that this inherent uncertainty applied to all of physics, and, therefore, that the ontological bridge between mathematical theory and physical reality does not exist. This would mean, as Bohr was among the first to realize, that we must profoundly revise the epistemological foundations of modern science.
The world view of classical physics allowed the physicist to assume that communion with the essences of physical reality via mathematical laws and associated theories was possible, but it made no provision for the knowing mind. In our new situation, the status of the knowing mind seems quite different. Modern physics discloses the universe as an unbroken, undissectible, and undivided dynamic whole. 'There can hardly be a sharper contrast,' said Milic Capek, 'than that between the everlasting atoms of classical physics and the vanishing "particles" of modern physics.' As Stapp put it: 'Each atom turns out to be nothing but the potentialities in the behaviour of others. What we find, therefore, are not elementary space-time realities, but rather a web of relationships in which no part can stand alone; every part derives its meaning and existence only from its place within the whole.'
The characteristics of particles and quanta are not isolatable, given particle-wave dualism and the incessant exchange of quanta within matter-energy fields. Matter cannot be dissected from the omnipresent sea of energy, nor can we in theory or in fact observe matter from the outside. As Heisenberg put it decades ago, 'the cosmos appears as a complicated tissue of events, in which connections of different kinds alternate or overlap or combine and thereby determine the texture of the whole.' This means that a purely reductionist approach to understanding physical reality, which was the goal of classical physics, is no longer appropriate.
While the formalism of quantum physics predicts that correlations between particles over space-like separated regions are possible, it can say nothing about the character of this strange new relationship between parts (quanta) and whole (cosmos) outside of that formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be 'mutually adaptive and complementary to one another.'
Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts that constitute the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, 'is nothing really in itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.'
In a genuine whole, the relationship between the constituent parts must be 'internal or immanent' in the parts, as opposed to a spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that allegedly makes up the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics. It is also consistent with the manner in which we have begun to understand the relation between parts and whole in modern biology.
Modern physics also reveals, claims Harris, a complementary relationship between the differences between parts that constitute its content and the universal ordering principle that is immanent in each part. While the whole cannot be finally revealed in the analysis of the parts, the study of the differences between parts provides insights into the dynamic structure of the whole present in each of the parts. The part can never, nonetheless, be finally isolated from the web of relationships that discloses its interconnections with the whole, and any attempt to do so results in ambiguity.
Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet the order in complementary relationships between difference and sameness in any physical event is never external to that event; it is immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic whole is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the inseparable whole, revealed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.
If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos is a single significant whole that evinces a progressive principle of order in complementary relations with its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.
However, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.
While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on this knowledge, there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculative assumptions on this basis are obviously free to do so. There is, however, another conclusion to be drawn here that is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for believing in the radical Cartesian division between mind and world sanctioned by classical physics. It now seems clear that this separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.
Thus, the grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strictly scientific terms. After all, the completeness of all previous physical theories was measured against this criterion with enormous success. Since it was this success that gave physics the reputation of being able to disclose physical reality with magnificent exactitude, it may seem reasonable to insist that a more comprehensive quantum theory will emerge to meet this requirement.
All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or unavoidable reality requires a very different criterion for determining the completeness of a physical theory. The new measure of a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.
If a theory does so and continues to do so, which is clearly the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well; it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of the electron to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between 'theory' and 'physical reality'.
In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions and then taking the square of the absolute value of the sum. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron will end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, we would simply add the probabilities of the two alternative ways and let it go at that. The classical procedure does not work here, because we are not dealing with classical atoms. In quantum physics additional terms arise when the wave functions are added, and the probability is computed by what is known as the 'superposition principle'.
The superposition principle can be illustrated with an analogy from simple mathematics. Add two numbers and then take the square of their sum, as opposed to just adding the squares of the two numbers. Obviously, (2 + 3)² is not equal to 2² + 3². The former is 25, and the latter is 13. In the language of quantum probability theory:
|ψ₁ + ψ₂|² ≠ |ψ₁|² + |ψ₂|²
where ψ₁ and ψ₂ are the individual wave functions. On the left-hand side, the superposition principle results in extra terms that cannot be found on the right-hand side. The left-hand side of the above relation is the way a quantum physicist would compute probabilities, and the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in an instance like this. The extra superposition terms contained in the left-hand side of the above relation would not be there, and the peculiar wavelike interference pattern would disappear. The observed pattern on the final screen would, therefore, be what one would expect if electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. Thus, once we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
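The two recipes can be compared numerically in a toy two-slit model. The sketch below uses illustrative values for the slit separation, wavelength, and screen distance (none drawn from a real experiment): each slit contributes an equal-magnitude complex amplitude whose phase is set by the path length, and the quantum rule |ψ₁ + ψ₂|² is set against the classical sum |ψ₁|² + |ψ₂|².

```python
import cmath
import math

# Toy two-slit model: each slit contributes a complex amplitude at
# screen position x, with a phase set by the path length from that slit.
# The slit separation d, wavelength, and screen distance L are
# illustrative values chosen for the sketch.

def amplitudes(x, d=1.0, wavelength=0.5, L=10.0):
    """Return the complex amplitudes (psi1, psi2) at screen position x."""
    k = 2 * math.pi / wavelength      # wave number
    r1 = math.hypot(L, x - d / 2)     # path length from slit 1
    r2 = math.hypot(L, x + d / 2)     # path length from slit 2
    return cmath.exp(1j * k * r1), cmath.exp(1j * k * r2)

def quantum_probability(x):
    """Superposition rule: add the amplitudes, then square the modulus."""
    psi1, psi2 = amplitudes(x)
    return abs(psi1 + psi2) ** 2

def classical_probability(x):
    """'Which-slit-known' rule: add the individual probabilities."""
    psi1, psi2 = amplitudes(x)
    return abs(psi1) ** 2 + abs(psi2) ** 2

# The classical sum is flat (2 everywhere in these units), while the
# quantum value oscillates between 0 and 4: the interference pattern.
for x in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"x={x}: classical={classical_probability(x):.3f}, "
          f"quantum={quantum_probability(x):.3f}")
```

The cross terms in |ψ₁ + ψ₂|² are exactly the interference terms that vanish when the which-slit information is acquired.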
In order to give a full account of quantum recipes for computing probabilities, one has to examine what would happen in events that are compound. Compound events are events that can be broken down into a series of steps, or events that consist of a number of things happening independently. The recipe here calls for multiplying the individual wave functions, and then following the usual quantum recipe of taking the square of the amplitude.
The quantum recipe is | ψ₁ · ψ₂ |², and, in this case, the result is the same as if we had multiplied the individual probabilities, as one would in classical theory. Thus, the recipes for computing results in quantum theory and classical physics can be totally different. The quantum superposition effects are completely nonclassical, and there is no mathematical justification per se for why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us, in countless experiments, to extend our ability to co-ordinate experience with nature.
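A short numerical sketch makes the difference between the two recipes concrete. The amplitudes below are illustrative complex numbers, not data from any experiment; the point is only that squaring the sum differs from summing the squares by an interference term.

```python
import math

# Two hypothetical complex amplitudes (wave-function values at one point
# on the screen); the numbers are purely illustrative.
psi1 = complex(0.6, 0.3)
psi2 = complex(0.1, -0.5)

# Quantum recipe: add the amplitudes first, then square the modulus.
p_quantum = abs(psi1 + psi2) ** 2

# Classical recipe: square each modulus, then add.
p_classical = abs(psi1) ** 2 + abs(psi2) ** 2

# The difference is the interference (superposition) term 2*Re(psi1* . psi2).
interference = 2 * (psi1.conjugate() * psi2).real

print(p_quantum, p_classical, interference)
assert math.isclose(p_quantum, p_classical + interference)
```

Here the quantum probability (0.53) falls below the classical sum (0.71) because the interference term is negative; with other amplitudes it can just as well be enhanced.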
Quantum theory is a departure from the classical mechanics of Newton involving the principle that certain physical quantities can only assume discrete values. In quantum theory, introduced by Planck (1900), certain conditions are imposed on these quantities to restrict their values; the quantities are then said to be ‘quantized'.
Up to 1900, physics was based on Newtonian mechanics. Large-scale systems are usually adequately described by it; however, several problems could not be solved, in particular the explanation of the curves of energy against wavelength for ‘black-body radiation', with their characteristic maximum. Attempted explanations were based on the idea that the enclosure producing the radiation contained a number of ‘standing waves' and that the energy of an oscillator is kT, where ‘k' is the ‘Boltzmann Constant' and ‘T' the thermodynamic temperature. It is a consequence of classical theory that the energy does not depend on the frequency of the oscillator. This inability to explain the phenomenon has been called the ‘ultraviolet catastrophe'.
Planck tackled the problem by discarding the idea that an oscillator can gain or lose energy continuously, suggesting that it could only change by some discrete amount, which he called a ‘quantum'. This unit of energy is given by hv, where ‘v' is the frequency and ‘h' is the ‘Planck Constant'; ‘h' has dimensions of energy × time, i.e., of action, and was called the ‘quantum of action'. According to Planck an oscillator could only change its energy by an integral number of quanta, i.e., by hv, 2hv, 3hv, etc. This meant that the radiation in an enclosure has certain discrete energies, and by considering the statistical distribution of oscillators with respect to their energies, he was able to derive the Planck radiation formula, which expresses the distribution of energy in the normal spectrum of ‘black-body' radiation. Its usual form is:
8πhc λ⁻⁵ dλ / ( exp[hc/λkT] − 1 ),
which represents the amount of energy per unit volume in the range of wavelengths between λ and λ + dλ, where ‘c' is the speed of light, ‘h' the Planck constant, ‘k' the Boltzmann constant, and ‘T' the thermodynamic temperature.
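As a sketch of how the formula behaves, the following evaluates the spectral energy density on a grid of wavelengths at an assumed temperature of 5800 K (roughly the solar surface) and locates the maximum, which lands near 500 nm, as Wien's displacement law (λ_max ≈ 2.9 × 10⁻³/T) predicts. The constants are rounded values.

```python
import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck_energy_density(lam, T):
    """Energy per unit volume per unit wavelength, 8*pi*h*c / (lam^5 (e^{hc/lam kT} - 1))."""
    return (8 * math.pi * h * c / lam**5) / math.expm1(h * c / (lam * k * T))

T = 5800.0
wavelengths = [i * 1e-9 for i in range(100, 2001, 10)]   # 100 nm to 2000 nm
peak = max(wavelengths, key=lambda lam: planck_energy_density(lam, T))
print(peak)  # close to 5.0e-7 m (500 nm)
```

Note the characteristic maximum: unlike the classical (Rayleigh-Jeans) result, the density does not diverge at short wavelengths, which is exactly the resolution of the ‘ultraviolet catastrophe'.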
The idea of quanta of energy was applied to other problems in physics. In 1905 Einstein explained features of the ‘Photoelectric Effect' by assuming that light was absorbed in quanta (photons). A further advance was made by Bohr (1913) in his theory of atomic spectra, in which he assumed that the atom can only exist in certain energy states and that light is emitted or absorbed as a result of a change from one state to another. He used the idea that the angular momentum of an orbiting electron could only assume discrete values, i.e., was quantized. A refinement of Bohr's theory was introduced by Sommerfeld in an attempt to account for fine structure in spectra. Other successes of quantum theory were its explanations of the ‘Compton Effect' and ‘Stark Effect'. Later developments involved the formulation of a new system of mechanics known as ‘Quantum Mechanics'.
What is more, Compton scattering is an interaction between a photon of electromagnetic radiation and a free electron, or other charged particle, in which some of the energy of the photon is transferred to the particle. As a result, the wavelength of the photon is increased by an amount Δλ, where:
Δλ = ( 2h / m₀c ) sin² ½φ.
This is the Compton equation; ‘h' is the Planck constant, m₀ the rest mass of the particle, ‘c' the speed of light, and φ the angle between the directions of the incident and scattered photons. The quantity h/m₀c is known as the ‘Compton Wavelength', symbol λC, which for an electron is equal to 0.002 43 nm.
The outer electrons in all elements and the inner ones in those of low atomic number have ‘binding energies' negligible compared with the quantum energies of all except very soft X- and gamma rays. Thus most electrons in matter are effectively free and at rest and so cause Compton scattering. In the range of quantum energies 10⁵ to 10⁷ electronvolts, this effect is commonly the most important process of attenuation of radiation. The scattered electron is ejected from the atom with large kinetic energy, and the ionization that it causes plays an important part in the operation of detectors of radiation.
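The Compton equation above is easy to check numerically. The sketch below computes the electron's Compton wavelength from rounded constants (it comes out at the quoted 0.002 43 nm) and evaluates the shift, which at a 90° scattering angle equals exactly one Compton wavelength.

```python
import math

h = 6.626e-34    # Planck constant, J s
m0 = 9.109e-31   # electron rest mass, kg
c = 2.998e8      # speed of light, m/s

compton_wavelength = h / (m0 * c)   # about 2.43e-12 m = 0.00243 nm

def compton_shift(phi):
    """Increase in photon wavelength for scattering angle phi (radians)."""
    return (2 * h / (m0 * c)) * math.sin(phi / 2) ** 2

# At phi = 90 degrees, sin^2(45 deg) = 1/2, so the shift is one Compton wavelength.
print(compton_wavelength, compton_shift(math.pi / 2))
```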
In the ‘Inverse Compton Effect' there is a gain in energy by low-energy photons as a result of being scattered by free electrons of much higher energy; as a consequence, the electrons lose energy. In the ‘Stark Effect', by contrast, the wavelength of light emitted by atoms is altered by the application of a strong transverse electric field to the source, the spectrum lines being split up into a number of sharply defined components. The displacements are symmetrical about the position of the undisplaced line, and are proportional to the field strength up to about 100 000 volts per cm.
Closely connected with quantum theory is quantum mechanics, the mathematical physical theory that grew from Planck's ‘Quantum Theory' and deals with the mechanics of atomic and related systems in terms of quantities that can be measured. The subject developed in several mathematical forms, including ‘Wave Mechanics' (Schrödinger) and ‘Matrix Mechanics' (Born and Heisenberg), all of which are equivalent.
In quantum mechanics, it is often found that the properties of a physical system, such as its angular momentum and energy, can only take discrete values. Where this occurs the property is said to be ‘quantized' and its various possible values are labelled by a set of numbers called quantum numbers. For example, according to Bohr's theory of the atom, an electron moving in a circular orbit could not occupy any orbit at any distance from the nucleus but only an orbit for which its angular momentum (mvr) was equal to nh/2π, where ‘n' is an integer (1, 2, 3, etc.) and ‘h' is the Planck constant. Thus the property of angular momentum is quantized and ‘n' is a quantum number that gives its possible values. The Bohr theory has now been superseded by a more sophisticated theory in which the idea of orbits is replaced by regions in which the electron may move, characterized by quantum numbers ‘n', ‘l', and ‘m'.
The Bohr theory (1913) was the first significant application of the quantum theory to atomic structure. It has since been replaced by quantum mechanics, the mathematical physical theory that grew out of Planck's quantum theory and deals with the mechanics of atomic and related systems in terms of quantities that can be measured; the subject developed in several mathematical forms, including ‘wave mechanics' and ‘matrix mechanics', all of which are equivalent.
A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four coordinates: three space coordinates and one time coordinate. These coordinates define a four-dimensional space, and the motion of a particle can be described by a curve in this space, which is called Minkowski space-time.
The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using Riemannian space-time, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any forces is represented by a straight line in Minkowski space-time. In general relativity, using Riemannian space-time, the motion is represented by a line that is no longer straight (in the Euclidean sense) but is the line giving the shortest distance. Such a line is called a ‘geodesic', and space-time is thus said to be curved. The extent of this curvature is given by the metric tensor for space-time, the components of which are solutions to Einstein's field equations. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.
The predictions of general relativity only differ from Newton's theory by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light or other electromagnetic radiation in the presence of large bodies, and the Einstein shift, a small redshift in the lines of a stellar spectrum caused by the gravitational potential at the level in the star at which the radiation is emitted (for a bright line) or absorbed (for a dark line). This shift can be explained in terms of either the special or general theory of relativity. In the simplest terms, a quantum of energy hv has mass hv/c². On moving between two points with gravitational potential difference Δφ, the work done is Δφhv/c², so the change of frequency Δv is vΔφ/c².
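The fractional shift Δv/v = Δφ/c² can be estimated directly. As a sketch, the following applies it to light escaping from the surface of the Sun, taking Δφ ≈ GM/R with rounded, illustrative values for the solar mass and radius; the result, about two parts in a million, is the size of the Einstein shift observed in the solar spectrum.

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg (rounded)
R_sun = 6.96e8    # solar radius, m (rounded)

# Fractional frequency shift dv/v = dPhi/c^2 for light climbing out of
# the Sun's gravitational potential well.
potential = G * M_sun / R_sun      # potential difference, J/kg
fractional_shift = potential / c**2
print(fractional_shift)            # about 2e-6
```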
Bohr's theory treated in particular the simplest atom, that of hydrogen, consisting of a nucleus and one electron. It was assumed that there could be a ground state in which an isolated atom would remain permanently, and short-lived states of higher energy to which the atom could be excited by collisions or absorption of radiation. It was supposed that radiation was emitted or absorbed in quanta of energy equal to integral multiples of hv, where ‘h' is the Planck constant and ‘v' is the frequency of the electromagnetic waves. (Later it was realized that a single quantum has the unique value hv.) The frequency of radiation emitted on capturing a free electron into the nth state (where n = 1 for the ground state) was supposed to be n/2 times the rotational frequency of the electron in a circular orbit. This idea led to, and was replaced by, the concept that the angular momentum of an orbit is quantized in terms of h/2π. The energy of the nth state was found to be given by:
En = −me⁴ / 8h²ε₀²n²,
where ‘m' is the reduced mass of the electron. This formula gave excellent agreement with the then known series of lines in the visible and infrared regions of the spectrum of atomic hydrogen and predicted a series in the ultraviolet that was soon found by Lyman.
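The energy formula above can be evaluated directly from rounded constants. As a sketch (using the electron mass rather than the slightly smaller reduced mass), the ground state comes out at about −13.6 eV, and the n = 2 → n = 1 transition gives the Lyman-alpha wavelength of about 122 nm in the ultraviolet.

```python
m = 9.109e-31     # electron mass, kg (reduced mass differs by ~0.05%)
e = 1.602e-19     # electron charge, C
eps0 = 8.854e-12  # permittivity of free space, F/m
h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s

def energy(n):
    """Bohr energy E_n = -m e^4 / (8 eps0^2 h^2 n^2), in joules (negative: bound)."""
    return -m * e**4 / (8 * eps0**2 * h**2 * n**2)

ground_state_eV = energy(1) / e                 # about -13.6 eV
lyman_alpha = h * c / (energy(2) - energy(1))   # wavelength of the n=2 -> n=1 line
print(ground_state_eV, lyman_alpha)             # ~ -13.6, ~1.22e-7 m
```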
The extension of the theory to more complicated atoms had success but raised innumerable difficulties, which were only resolved by the development of wave mechanics.
An allowed wave function of an electron in an atom is obtained by a solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is −e²/r, where ‘e' is the electron charge and ‘r' its distance from the nucleus. A precise orbit cannot be considered, as in Bohr's theory of the atom; instead the behaviour of the electron is described by its wave function, ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |ψ|²dτ is the probability of locating the electron in the element of volume dτ.
Solution of Schrödinger's equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |ψ|² varies with position. They also have an associated value of the energy ‘E'. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those characterizing the allowed orbits in the earlier quantum theory of the atom.
‘n', the principal quantum number, can have values 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called shells and designated the K, L, M shells, etc. ‘l', the azimuthal quantum number, for a given value of ‘n' can have values 0, 1, 2, . . . , (n − 1). Thus when n = 1, l can only have the value 0. An electron in the L shell of an atom, with n = 2, can occupy two subshells of different energy corresponding to l = 0 and l = 1. Similarly the M shell (n = 3) has three subshells with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2, and 3 are called s, p, d, and f orbitals respectively. The significance of the ‘l' quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:
√[l( l + 1 )] ( h/2π ).
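The bookkeeping of shells and subshells described above can be sketched in a few lines. The helper names below are, of course, hypothetical, but the rules they encode (l runs from 0 to n − 1; subshells are labelled s, p, d, f; the angular momentum magnitude is √[l(l + 1)]h/2π) come straight from the text.

```python
import math

def subshells(n):
    """Allowed azimuthal quantum numbers l for principal quantum number n."""
    return list(range(n))  # l = 0, 1, ..., n - 1

def orbital_angular_momentum(l, h=6.626e-34):
    """Magnitude of orbital angular momentum: sqrt(l(l+1)) * h / (2 pi)."""
    return math.sqrt(l * (l + 1)) * h / (2 * math.pi)

labels = "spdf"
for n in (1, 2, 3):
    print(n, [f"{n}{labels[l]}" for l in subshells(n)])
# 1 ['1s']
# 2 ['2s', '2p']
# 3 ['3s', '3p', '3d']
```

Note that an s electron (l = 0) has zero orbital angular momentum, a point where the wave-mechanical picture differs sharply from Bohr's orbits.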
The Bohr theory of the atom (1913) introduced the concept that an electron in an atom is normally in a state of lowest energy (ground state) in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with a particle the atom may be excited, that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes (typically nanoseconds) and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electron states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld 1915) and electron spin (Pauli 1925), but a satisfactory theory only became possible upon the development of wave mechanics after 1925.
According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.
An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves. Other information may be obtained from magnetism and chemical properties.
Properties of elementary particles are also described by quantum numbers. For example, an electron has the property known as ‘spin', and can exist in two possible energy states depending on whether this spin is set parallel or antiparallel to a certain direction. The two states are conveniently characterized by quantum numbers +½ and −½. Similarly, properties such as charge, isospin, strangeness, parity, and hypercharge are characterized by quantum numbers. In interactions between particles, a particular quantum number may be conserved, i.e., the sum of the quantum numbers of the particles before and after the interaction remains the same. It is the type of interaction (strong, electromagnetic, or weak) that determines whether the quantum number is conserved.
Bohr discovered that if Planck's constant is used in combination with the known mass and charge of the electron, the approximate size of the hydrogen atom can be derived. Assuming that a jumping electron absorbs or emits energy in units of Planck's constant, in accordance with the formula Einstein used to explain the photoelectric effect, Bohr was able to find correlations with the spectral lines for hydrogen. More important, the model also served to explain why the electron does not, as electromagnetic theory says it should, radiate its energy quickly away and collapse into the nucleus.
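That derivation of the atom's size can be reproduced in a line of arithmetic. The sketch below combines Planck's constant with the electron's mass and charge (rounded values) to form the Bohr radius, a₀ = ε₀h²/πme², which comes out at about 5.3 × 10⁻¹¹ m, the approximate size of the hydrogen atom.

```python
import math

h = 6.626e-34     # Planck constant, J s
m = 9.109e-31     # electron mass, kg
e = 1.602e-19     # electron charge, C
eps0 = 8.854e-12  # permittivity of free space, F/m

# Bohr radius: the n = 1 orbit radius that falls out of combining
# Planck's constant with the electron's mass and charge.
a0 = eps0 * h**2 / (math.pi * m * e**2)
print(a0)  # about 5.3e-11 m
```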
Bohr reasoned that this does not occur because the orbits are quantized: electrons absorb and emit energy corresponding to the specific orbits. Their lowest energy state, or lowest orbit, is the ground state. What is notable, however, is that Bohr, although obliged to use macro-level analogies and classical theory, quickly and easily posits a view of the functional dynamics of the energy shells of the electron that has no macro-level analogy and is inexplicable within the framework of classical theory.
The central problem with Bohr's model from the perspective of classical theory was pointed out by Rutherford shortly before the first of the papers describing the model was published. "There appears to me," Rutherford wrote in a letter to Bohr, "one grave problem in your hypothesis that I have no doubt you fully realize, namely, how does an electron decide what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop." Viewing the electron as atomic in the Greek sense, or as a point-like object that moves, there is cause to wonder, in the absence of a mechanistic explanation, how this object instantaneously ‘jumps' from one shell or orbit to another. It was essentially efforts to answer this question that led to the development of quantum theory.
The effect of Bohr's model was to raise more questions than it answered. Although the model suggested that we can explain the periodic table of the elements by assuming that a maximum number of electrons are found in each shell, Bohr was not able to provide any mathematically acceptable explanation for the hypothesis. That explanation was provided in 1925 by Wolfgang Pauli, known throughout his career for his extraordinary talents as a mathematician.
Bohr's model, as refined, used three quantum numbers. Pauli added a fourth, described as spin, which was initially represented with the macro-level analogy of a spinning ball on a pool table. Rather predictably, the analogy does not work. Whereas a classical spin can point in any direction, a quantum mechanical spin points either up or down along the axis of measurement. In total contrast to the classical notion of a spinning ball, we cannot even speak of the spin of the particle if no axis is measured.
When Pauli added this fourth quantum number, he found a correspondence between the number of electrons in each full shell of atoms and the new set of quantum numbers describing the shell. This became the basis for what we now call the ‘Pauli exclusion principle'. The principle is simple and yet quite startling: two electrons cannot have all their quantum numbers the same, and no two actual electrons are identical in the sense of having the same quantum numbers. The exclusion principle explains mathematically why there is a maximum number of electrons in the shell of any given atom. If the shell is full, adding another electron would be impossible because this would result in two electrons in the shell having the same quantum numbers.
This may sound a bit esoteric, but the fact that nature obeys the exclusion principle is quite fortunate from our point of view. If electrons did not obey the principle, all elements would exist at the ground state and there would be no chemical affinity between them. Structures like crystals and DNA would not exist; it is the exclusion principle that allows for chemical bonds, which, in turn, result in the hierarchy of structures from atoms to molecules, cells, plants, and animals.
An energy level is the energy associated with a quantum state of an atom or other system that is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is incorrect because: (i) the energy of a given state may be changed by externally applied fields; (ii) there may be a number of states of equal energy in the system.
The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy for a given state is exactly determinate except for the effects of the ‘uncertainty principle'. The ground state, with lowest energy, has an infinite lifetime, hence its energy is in principle exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra. Theories of the atom have been developed to predict these energies by calculation. Wave mechanics, due to de Broglie and extended by Schrödinger, Dirac, and many others, originated in the suggestion that light consists of corpuscles as well as of waves and the consequent suggestion that all elementary particles are associated with waves. Wave mechanics is based on the Schrödinger wave equation describing the wave properties of matter. It relates the energy of a system to a wave function; usually it is found that a system, such as an atom or molecule, can only have certain allowed wave functions (eigenfunctions) and certain allowed energies (eigenvalues). In wave mechanics the quantum conditions arise in a natural way from the basic postulates as solutions of the wave equation. The energies of unbound states of positive energy form a continuum. This gives rise to the continuum background to an atomic spectrum as electrons are captured from unbound states. The energy of an atomic state may be changed by the ‘Stark Effect' or the ‘Zeeman Effect'.
The vibrational energies of a molecule also have discrete values. For example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero; the atoms repel when closer and attract when further apart. The restoring force is nearly proportional to the displacement, hence the oscillations are simple harmonic. Solution of the Schrödinger wave equation gives the energies of a harmonic oscillator as:
En = ( n + ½ ) hν,
where ‘h' is the Planck constant, ν is the frequency, and ‘n' is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hν. This is known as the zero-point energy. The potential energy of interaction of atoms is described more exactly by the ‘Morse Equation', which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra'.
The rotational energy of a molecule is also quantized. According to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:
EJ = h²J ( J + 1 ) / 8π²I,
where J is the rotational quantum number, which can be zero or a positive integer. Rotational energies are determined from band spectra.
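Both ladders of levels can be sketched numerically. The frequency and moment of inertia below are illustrative values loosely typical of a light diatomic molecule, not data for any particular species; the point is the nonzero ground-state (zero-point) vibrational energy and the J(J + 1) spacing of the rotational levels.

```python
import math

h = 6.626e-34   # Planck constant, J s

def vibrational_energy(n, nu):
    """E_n = (n + 1/2) h nu for a quantum harmonic oscillator."""
    return (n + 0.5) * h * nu

def rotational_energy(J, I):
    """E_J = h^2 J(J+1) / (8 pi^2 I) for a rigid rotor of moment of inertia I."""
    return h**2 * J * (J + 1) / (8 * math.pi**2 * I)

nu = 6.0e13     # vibrational frequency, Hz (illustrative)
I = 1.5e-46     # moment of inertia, kg m^2 (illustrative)

zero_point = vibrational_energy(0, nu)   # = h*nu/2, nonzero even for n = 0
print(zero_point, rotational_energy(1, I))
```

Note the scales: the vibrational quantum here is some hundred times the lowest rotational quantum, which is why rotational structure appears as fine splitting within vibrational bands in band spectra.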
The energies of the states of the nucleus are determined from the gamma-ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer Effect' has permitted the observation of some minute changes.
Quantum theory, introduced by Max Planck (1858-1947) in 1900, was the first serious scientific departure from Newtonian mechanics. It involved supposing that certain physical quantities can only assume discrete values. In the following two decades it was applied successfully by Einstein and the Danish physicist Niels Bohr (1885-1962). It was superseded by quantum mechanics in the years following 1924, when the French physicist Louis de Broglie (1892-1987) introduced the idea that a particle may also be regarded as a wave: a set of waves that represent the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice). The wavelength is given by the de Broglie equation. These are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. Such waves were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. The Schrödinger wave equation relates the energy of a system to a wave function; the square of the amplitude of the wave is proportional to the probability of a particle being found in a specific position. The wave function expresses the impossibility of defining both the position and momentum of a particle, an expression of the ‘uncertainty principle'; the allowed wave functions describe stationary states of a system.
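The de Broglie equation itself, λ = h/mv, is a one-line computation. As a sketch, the example below evaluates it for an electron at an assumed speed of 10⁶ m/s: the wavelength comes out around 0.7 nm, comparable to crystal lattice spacings, which is why the Davisson-Germer experiment could diffract electrons from a crystal.

```python
h = 6.626e-34    # Planck constant, J s

def de_broglie_wavelength(mass, velocity):
    """lambda = h / (m v): the wavelength associated with a moving particle."""
    return h / (mass * velocity)

m_e = 9.109e-31  # electron mass, kg
print(de_broglie_wavelength(m_e, 1.0e6))  # about 7.3e-10 m
```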
Part of the difficulty with the notions involved is that a system may be in an indeterminate state at a time, characterized only by the probability of some result for an observation, but then ‘become' determinate (the collapse of the wave packet) when an observation is made, such as one of the position or momentum of a particle, if that is taken to apply to reality itself rather than to mere indeterminacies of measurement. It is as if there is nothing but a potential for observation or a probability wave before an observation is made, but when an observation is made the wave becomes a particle. The wave-particle duality seems to block any way of conceiving of physical reality in quantum terms. In the famous two-slit experiment, an electron is fired at a screen with two slits, like a tennis ball thrown at a wall with two doors in it. If one puts detectors at each slit, every electron passing the screen is observed to go through exactly one slit. When the detectors are taken away, the electron acts like a wave process going through both slits and interfering with itself. A particle such as an electron is usually thought of as always having an exact position, but its wave is not absolutely zero anywhere; there is therefore a finite probability of it ‘tunnelling through' from one position to emerge at another.
The unquestionable success of quantum mechanics has generated a large philosophical debate about its ultimate intelligibility and its metaphysical implications. The wave-particle duality is already a departure from ordinary ways of conceiving of things in space, and its difficulty is compounded by the probabilistic nature of the fundamental states of a system as they are conceived in quantum mechanics. Philosophical options for interpreting quantum mechanics have included variations of the belief that it is at best an incomplete description of a better-behaved classical underlying reality (Einstein), the Copenhagen interpretation according to which there are no objective unobserved events in the micro-world (Bohr and W. K. Heisenberg, 1901-76), an ‘acausal' view of the collapse of the wave packet (J. von Neumann, 1903-57), and a ‘many worlds' interpretation in which time forks perpetually toward innumerable futures, so that different states of the same system exist in different parallel universes (H. Everett).
In recent years the proliferation of subatomic particles (there are 36 kinds of quark alone, in six flavours) has led physicists to look in various directions for unification. One avenue of approach is superstring theory, in which the four-dimensional world is thought of as the upshot of the collapse of a ten-dimensional world, with the four primary physical forces (gravity, electromagnetism, and the strong and weak nuclear forces) coming to be seen as the result of the fracture of one primary force. While the scientific acceptability of such theories is a matter for physics, their ultimate intelligibility plainly requires some philosophical reflection.
Quantum gravity is a theory of gravitation that is consistent with quantum mechanics; the subject, still in its infancy, has no completely satisfactory theory. In conventional quantum gravity, the gravitational force is mediated by a massless spin-2 particle, called the ‘graviton'. The internal degrees of freedom of the graviton are described by a field hij(x) representing the deviations from the metric tensor for a flat space. This formulation of general relativity reduces it to a quantum field theory, which has a regrettable tendency to produce infinities for measurable quantities. However, unlike other quantum field theories, quantum gravity cannot appeal to renormalization procedures to make sense of these infinities. It has been shown that renormalization procedures fail for theories, such as quantum gravity, in which the coupling constants have the dimensions of a positive power of length. The coupling constant for general relativity is the Planck length,
Lp = ( Gh / c³ )^½ ≈ 10⁻³⁵ m.
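This order of magnitude is easy to verify. The text writes (Gh/c³)^½; the conventional definition of the Planck length uses the reduced constant ħ = h/2π, which is what the sketch below computes. Either way the result is of order 10⁻³⁵ m.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
hbar = h / (2 * math.pi)

# Planck length with the conventional hbar; using h instead changes
# the value by a factor sqrt(2 pi) but not the order of magnitude.
planck_length = math.sqrt(G * hbar / c**3)
print(planck_length)  # about 1.6e-35 m
```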
Supersymmetry has been suggested as a structure that could be free from these pathological infinities. Many theorists believe that an effective superstring field theory may emerge, in which the Einstein field equations are no longer valid and general relativity is required to appear only as a low-energy limit. The resulting theory may be structurally different from anything that has been considered so far. Supersymmetric string theory (or superstring theory) is an extension of the ideas of supersymmetry to one-dimensional string-like entities that can interact with each other and scatter according to a precise set of laws. The normal modes of superstrings represent an infinite set of ‘normal' elementary particles whose masses and spins are related in a special way. Thus, the graviton is only one of the string modes: when the string-scattering processes are analysed in terms of their particle content, the low-energy graviton scattering is found to be the same as that computed from supersymmetric gravity. The graviton mode may still be related to the geometry of the space-time in which the string vibrates, but it remains to be seen whether the other, massive, members of the set of ‘normal' particles also have a geometrical interpretation. The intricacy of this theory stems from the requirement of a space-time of at least ten dimensions to ensure internal consistency. It has been suggested that there are the normal four dimensions, with the extra dimensions being tightly ‘curled up' in a small circle, presumably of Planck length size.
In quantum theory or quantum mechanics, a quantum state of an atom or other system is fixed, or determined, by a given set of quantum numbers; it is one of the various quantum states that an atom can assume. The concept of the atom was first introduced by the ancient Greeks, as a tiny indivisible component of matter, was developed by Dalton, as the smallest part of an element that can take part in a chemical reaction, and was made very much more precise by theory and experiment in the late 19th and 20th centuries.
Following the discovery of the electron (1897), it was recognized that atoms had structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of an atom is concentrated at its centre in a region of positive charge, the nucleus, of radius of the order of 10⁻¹⁵ metre. The electrons occupy the surrounding space to a radius of 10⁻¹¹ to 10⁻¹⁰ m. Rutherford also proposed that the nucleus has a charge of Ze and is surrounded by Z electrons (Z is the atomic number). According to classical physics such a system must emit electromagnetic radiation continuously, and consequently no permanent atom would be possible. This problem was solved by the development of the quantum theory.
The ‘Bohr Theory of the Atom' (1913) introduced the concept that an electron in an atom is normally in a state of lowest energy, or ground state, in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited, that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes, typically nanoseconds, and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld, 1915) and electron spin (Pauli, 1925), but a satisfactory theory only became possible upon the development of ‘Wave Mechanics' after 1925.
According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.
The Pauli exclusion principle states that no two identical ‘fermions' in any system can be in the same quantum state, that is, have the same set of quantum numbers. The principle was first proposed (1925) in the form that not more than two electrons in an atom could have the same set of quantum numbers. This hypothesis accounted for the main features of the structure of the atom and for the periodic table. An electron in an atom is characterized by four quantum numbers, n, l, m, and s. A particular atomic orbital, which has fixed values of n, l, and m, can thus contain a maximum of two electrons, since the spin quantum number s can only be +½ or −½. In 1928 Sommerfeld applied the principle to the free electrons in solids, and his theory has been greatly developed by later associates.
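The state-counting described above can be sketched numerically; the function name below is illustrative, not from any standard library.

```python
# Sketch: counting the electron states allowed by the Pauli exclusion
# principle.  For principal quantum number n, the orbital quantum number l
# runs 0..n-1, the magnetic quantum number m runs -l..+l, and the spin
# quantum number s is +1/2 or -1/2, so each orbital (fixed n, l, m) holds
# at most two electrons and a full shell holds 2n^2.

def shell_capacity(n):
    """Number of distinct (l, m, s) combinations for a given n."""
    return sum(2 * (2 * l + 1) for l in range(n))

for n in range(1, 5):
    print(n, shell_capacity(n))  # 2, 8, 18, 32 - i.e. 2n^2
```

The capacities 2, 8, 18, 32 are exactly the lengths of the rows of the periodic table built from these shells.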
Additionally, the Zeeman effect occurs when atoms emit or absorb radiation in the presence of a moderately strong magnetic field. Each spectral line is split into closely spaced polarized components: when the source is viewed at right angles to the field there are three components, the middle one having the same frequency as the unmodified line, and when the source is viewed parallel to the field there are two components, the undisplaced line being absent. This is the ‘normal' Zeeman effect. With most spectral lines, however, the anomalous Zeeman effect occurs, in which there is a greater number of symmetrically arranged polarized components. In both effects the displacement of the components is a measure of the magnetic field strength. In some cases the components cannot be resolved and the spectral line appears broadened.
The Zeeman effect occurs because the energies of individual electron states depend on their inclination to the direction of the magnetic field, and because quantum energy requirements impose conditions such that the plane of an electron orbit can only set itself at certain definite angles to the applied field. These angles are such that the projection of the total angular momentum on the field direction is an integral multiple of h/2π (h is the Planck constant). The Zeeman effect is observed with moderately strong fields, where the precession of the orbital angular momentum and the spin angular momentum of the electrons about each other is much faster than the total precession around the field direction. The normal Zeeman effect is observed when the conditions are such that the Landé factor is unity; otherwise the anomalous effect is found. This anomaly was one of the factors contributing to the discovery of electron spin.
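As a rough numerical illustration of the normal Zeeman effect (Landé factor unity), each component with magnetic quantum number m is displaced in energy by m·μB·B; the 1 T field below is an assumed example value.

```python
# Sketch of the normal Zeeman splitting: a component with magnetic quantum
# number m is displaced in frequency by m * mu_B * B / h.  Constants are
# rounded standard values; the 1 tesla field is an assumption.

MU_B = 9.274e-24   # Bohr magneton, J/T
H = 6.626e-34      # Planck constant, J s

def zeeman_shift_hz(m, b_field_tesla):
    return m * MU_B * b_field_tesla / H

for m in (-1, 0, +1):
    print(m, zeeman_shift_hz(m, 1.0))  # the three normal components
```

In a 1 T field the two displaced components shift by about 14 GHz to either side of the unmodified line, which is why moderately strong fields are needed to resolve the splitting.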
Fermi-Dirac statistics are concerned with the equilibrium distribution of elementary particles of a particular type among the various quantized energy states, on the assumption that these elementary particles are indistinguishable. The ‘Pauli Exclusion Principle' is obeyed, so that no two identical ‘fermions' can be in the same quantum-mechanical state. The exchange of two identical fermions, e.g., two electrons, does not affect the probability of distribution, but it does involve a change in the sign of the wave function. The ‘Fermi-Dirac Distribution Law' gives n(E), the average number of identical fermions in a state of energy E:

n(E) = 1/[e^(α + E/kT) + 1],
where k is the Boltzmann constant, T is the thermodynamic temperature, and α is a quantity depending on temperature and the concentration of particles. For the valence electrons in a solid, α takes the form −EF/kT, where EF is the Fermi level. At the Fermi level (or Fermi energy) EF the value of n(E) is exactly one half; thus, for a system in equilibrium, one half of the states with energy very nearly equal to EF (if any) will be occupied. The value of EF varies very slowly with temperature, tending to E0 as T tends to absolute zero.
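With α = −EF/kT the distribution above can be evaluated directly; the Fermi level of 5 eV used here is an assumed, illustrative value, not data for any particular metal.

```python
import math

# Sketch of the Fermi-Dirac law with alpha = -EF/kT, so that
# n(E) = 1 / (exp((E - EF)/kT) + 1).  The occupancy is exactly 1/2
# at E = EF, nearly 1 well below it and nearly 0 well above it.

K_B = 8.617e-5  # Boltzmann constant, eV/K

def fermi_dirac(energy_ev, fermi_ev, temp_k):
    return 1.0 / (math.exp((energy_ev - fermi_ev) / (K_B * temp_k)) + 1.0)

print(fermi_dirac(5.0, 5.0, 300.0))  # exactly 0.5 at the Fermi level
print(fermi_dirac(5.2, 5.0, 300.0))  # well above EF: nearly empty
print(fermi_dirac(4.8, 5.0, 300.0))  # well below EF: nearly full
```

At room temperature the transition from full to empty occupancy happens over an energy range of only a few kT (~0.025 eV), which is why the electron gas in a metal is so sharply degenerate.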
In Bose-Einstein statistics, the Pauli exclusion principle is not obeyed, so that any number of identical ‘bosons' can be in the same state. The exchange of two bosons of the same type affects neither the probability of distribution nor the sign of the wave function. The ‘Bose-Einstein Distribution Law' gives n(E), the average number of identical bosons in a state of energy E:

n(E) = 1/[e^(α + E/kT) − 1].
The formula can be applied to photons, considered as quasi-particles, provided that the quantity α, which is determined by the conservation of particle number, is set to zero. Planck's formula for the energy distribution of ‘Black-Body Radiation' was derived from this law by Bose. At high temperatures and low concentrations both the quantum distribution laws tend to the classical distribution:

n(E) = Ae^(−E/kT).
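The convergence of both quantum laws to the classical form can be checked numerically: when α + E/kT is large, the ±1 in the denominators is negligible.

```python
import math

# Sketch: for large x = alpha + E/kT, the +1 (Fermi-Dirac) and -1
# (Bose-Einstein) terms are negligible, and both laws tend to the
# classical exp(-x) form (the constant A is absorbed into x here).

def fermi_dirac(x):    return 1.0 / (math.exp(x) + 1.0)
def bose_einstein(x):  return 1.0 / (math.exp(x) - 1.0)
def classical(x):      return math.exp(-x)

x = 10.0  # high-temperature / low-concentration regime
print(fermi_dirac(x), bose_einstein(x), classical(x))  # near agreement
```

At x = 10 the three values agree to about one part in twenty thousand, with the Fermi-Dirac value always slightly below and the Bose-Einstein value always slightly above the classical one.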
Additionally, paramagnetism is the property of substances that have a positive magnetic ‘susceptibility', whereby μr > 1, where μr is the ‘Relative Permeability' (the analogous electric quantity is εr > 1, where εr is the ‘Relative Permittivity'). It is caused by the ‘spins' of electrons, paramagnetic substances having molecules or atoms in which there are unpaired electrons, resulting in a net ‘Magnetic Moment'. There is also a contribution to the magnetic properties from the orbital motion of the electron. The relative permeability of a paramagnetic substance is thus greater than that of a vacuum, i.e., it is greater than unity.
A ‘paramagnetic substance' is regarded as an assembly of magnetic dipoles that have random orientation. In the presence of a field the magnetization is determined by competition between the effect of the field, which tends to align the magnetic dipoles, and the random thermal agitation. In small fields and at high temperatures the magnetization produced is proportional to the field strength, whereas at low temperatures or high field strengths a state of saturation is approached. As the temperature rises, the susceptibility falls according to Curie's Law or the Curie-Weiss Law.
By Curie's Law, the susceptibility χ of a paramagnetic substance is inversely proportional to the ‘thermodynamic temperature' T: χ = C/T. The constant C is called the ‘Curie constant' and is characteristic of the material. This law is explained by assuming that each molecule has an independent magnetic ‘dipole' moment and that the tendency of the applied field to align these molecules is opposed by the randomizing effect of thermal agitation. For many paramagnetic substances, the Curie-Weiss law modifies Curie's law to the form:
χ = C/(T − θ).
The law shows that the susceptibility is proportional to the excess of temperature over a fixed temperature θ; θ is known as the Weiss constant and is a temperature characteristic of the material. Certain metals, such as sodium and potassium, also exhibit a type of paramagnetism resulting from the magnetic moments of free, or nearly free, electrons in their conduction bands. This is characterized by a very small positive susceptibility and a very slight temperature dependence, and is known as ‘free-electron paramagnetism' or ‘Pauli paramagnetism'.
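The two susceptibility laws can be compared side by side; the Curie constant C and Weiss constant θ below are illustrative numbers, not data for any real material.

```python
# Sketch of the Curie and Curie-Weiss laws for a paramagnet.
# C and theta are assumed, illustrative values.

def curie(temp_k, c):
    return c / temp_k                # chi = C/T

def curie_weiss(temp_k, c, theta):
    return c / (temp_k - theta)     # chi = C/(T - theta)

C, THETA = 1.5, 100.0  # hypothetical Curie and Weiss constants
for t in (200.0, 400.0, 800.0):
    print(t, curie(t, C), curie_weiss(t, C, THETA))  # both fall as T rises
```

Note that the Curie-Weiss susceptibility exceeds the plain Curie value at every temperature above θ, and diverges as T approaches θ, which is the mathematical signature of the dipole-dipole interaction θ represents.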
Ferromagnetism is a property of certain solid substances that have a large positive magnetic susceptibility and are capable of being magnetized by weak magnetic fields. The chief ferromagnetic elements are iron, cobalt, and nickel, and many ferromagnetic alloys based on these metals also exist. Ferromagnetic materials exhibit magnetic ‘hysteresis': the lagging of an observed effect behind changes in the mechanism producing it. In the magnetic case, this is a phenomenon shown by ferromagnetic substances whereby the magnetic flux through the medium depends not only on the existing magnetizing field but also on the previous state or states of the substance. The existence of this phenomenon necessitates a dissipation of energy when the substance is subjected to a cycle of magnetic changes; this is known as the magnetic hysteresis loss. A magnetic hysteresis loop is the curve obtained by plotting the magnetic flux density B of a ferromagnetic material against the corresponding value of the magnetizing field H; the area of the loop gives the hysteresis loss per unit volume in taking the specimen through the prescribed magnetizing cycle. The general form of the hysteresis loop is obtained for a symmetrical cycle between H and −H.
The magnetic hysteresis loss is the dissipation of energy due to magnetic hysteresis when the magnetic material is subjected to changes, particularly cyclic changes, of magnetization. Ferromagnetic materials are able to retain a certain degree of magnetization when the magnetizing field is removed. Those materials that retain a high percentage of their magnetization are said to be hard, and those that lose most of their magnetization are said to be soft; typical examples of hard ferromagnetics are cobalt steel and various alloys of nickel, aluminium, and cobalt, while typical soft magnetic materials are silicon steel and soft iron. The coercive force is the reversed magnetic field required to reduce the magnetic ‘flux density' in a substance from its remanent value to zero. Ferromagnetism is characteristic of these materials and is explained by the presence of domains. A ferromagnetic domain is a region of crystalline matter, whose volume may be 10⁻¹² to 10⁻⁸ m³, which contains atoms whose magnetic moments are aligned in the same direction. The domain is thus magnetically saturated and behaves like a magnet with its own magnetic axis and moment. The magnetic moment of the ferromagnetic atom results from the spin of an electron in an unfilled inner shell of the atom. The formation of a domain depends upon the strong interaction forces (exchange forces) that are effective in a crystal lattice containing ferromagnetic atoms.
In an unmagnetized volume of a specimen, the domains are arranged in a random fashion with their magnetic axes pointing in all directions, so that the specimen has no resultant magnetic moment. Under the influence of a weak magnetic field, those domains whose magnetic axes have directions near to that of the field grow at the expense of their neighbours. In this process the atoms of neighbouring domains tend to align in the direction of the field, but the strong influence of the growing domain causes their axes to align parallel to its magnetic axis. The growth of these domains leads to a resultant magnetic moment and hence magnetization of the specimen in the direction of the field. With increasing field strength, the growth of domains proceeds until there is, effectively, only one domain whose magnetic axis approximates to the field direction. The specimen now exhibits strong magnetization. Further increase in field strength causes the final alignment and magnetic saturation in the field direction. This explains the characteristic variation of magnetization with applied field strength. The presence of domains in ferromagnetic materials can be demonstrated by the use of ‘Bitter Patterns' or by the ‘Barkhausen Effect', which shows that the magnetization of a ferromagnetic substance does not increase or decrease steadily with steady increase or decrease of the magnetizing field, but proceeds in a series of minute jumps. The effect gives support to the domain theory of ferromagnetism.
For ferromagnetic solids there is a change from ferromagnetic to paramagnetic behaviour above a particular temperature, and the paramagnetic material then obeys the Curie-Weiss Law above this temperature; this is the ‘Curie temperature' for the material. Below this temperature the law is not obeyed. Some paramagnetic substances obey the Curie-Weiss law above a temperature θ and do not obey it below, yet are not ferromagnetic below this temperature. The value θ in the Curie-Weiss law can be thought of as a correction to Curie's law reflecting the extent to which the magnetic dipoles interact with each other. In materials exhibiting ‘antiferromagnetism' the temperature θ corresponds to the ‘Néel temperature'.
Antiferromagnetism is the property of certain materials that have a low positive magnetic susceptibility, as in paramagnetism, but exhibit a temperature dependence similar to that encountered in ferromagnetism. The susceptibility increases with temperature up to a certain point, called the ‘Néel Temperature', and then falls with increasing temperature in accordance with the Curie-Weiss law. The material thus becomes paramagnetic above the Néel temperature, which is analogous to the Curie temperature in the transition from ferromagnetism to paramagnetism. Antiferromagnetism is a property of certain inorganic compounds such as MnO, FeO, FeF2, and MnS. It results from interactions between neighbouring atoms leading to an antiparallel arrangement of adjacent magnetic dipole moments. A dipole, in this context, is a system of two equal and opposite charges placed at a very short distance apart; the product of either of the charges and the distance between them is known as the ‘electric dipole moment'. A small loop carrying a current I behaves as a magnetic dipole, with moment equal to IA, where A is the area of the loop.
An energy level is the energy associated with a quantum state of an atom or other system that is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is strictly incorrect because: (1) the energy of a given state may be changed by externally applied fields, and (2) there may be a number of states of equal energy in the system.
The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy for a given state is exactly determinate, except for the effects of the ‘uncertainty principle'. The ground state, with lowest energy, has an infinite lifetime, hence its energy is, in principle, exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra. Theories of the atom have been developed to predict these energies. An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results. Perturbation theory is an approximate method of solving a difficult problem when the equations to be solved depart only slightly from those of some problem already solved. For example, the orbit of a single planet round the sun is an ellipse; the perturbing effect of other planets modifies the orbit slightly in a way calculable by this method. The technique finds considerable application in ‘wave mechanics' and in ‘quantum electrodynamics'. Phenomena that are not amenable to solution by perturbation theory are said to be non-perturbative.
The energies of unbound states of positive total energy form a continuum. This gives rise to the continuous background to an atomic spectrum as electrons are captured from unbound states. The energy of an atomic state can be changed by the ‘Stark Effect' or the ‘Zeeman Effect'.
The vibrational energies of molecules also have discrete values; for example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero; the atoms repel when closer and attract when further apart. The restoring force is very nearly proportional to the displacement, hence the oscillations are simple harmonic. Solution of the ‘Schrödinger wave equation' gives the energies of a harmonic oscillator as:
En = ( n + ½ ) hƒ
Where ‘h' is the Planck constant, ƒ is the frequency, and ‘n' is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hƒ. This is the cause of zero-point energy. The potential energy of interaction of atoms is described more exactly by the Morse equation, which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra'.
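The ladder of levels En = (n + ½)hƒ can be tabulated directly; the vibration frequency below is an assumed value of the order typical for a diatomic molecule, not data for any particular species.

```python
# Sketch of the vibrational levels En = (n + 1/2) h f from the text.
# The frequency f is an assumed, illustrative value.

H = 6.626e-34  # Planck constant, J s

def vib_energy(n, freq_hz):
    return (n + 0.5) * H * freq_hz

f = 6.0e13  # assumed vibration frequency, Hz
print(vib_energy(0, f))                     # zero-point energy, (1/2) h f
print(vib_energy(1, f) - vib_energy(0, f))  # equal level spacing, h f
```

The n = 0 value is the zero-point energy mentioned above: even in its lowest state the oscillator retains the energy ½hƒ, and successive levels are separated by exactly hƒ.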
The rotational energy of a molecule is also quantized; according to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:
EJ = h²J(J + 1)/8π²I,
Where ‘J' is the rotational quantum number, which can be zero or a positive integer. Rotational energies are found from ‘band spectra'.
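The J(J + 1) spacing of the rotational ladder can be checked numerically; the moment of inertia used is an assumed value of the order of a light diatomic molecule, not measured data.

```python
import math

# Sketch of the rotational levels EJ = h^2 J(J+1) / (8 pi^2 I).

H = 6.626e-34  # Planck constant, J s

def rot_energy(j, inertia):
    return H ** 2 * j * (j + 1) / (8.0 * math.pi ** 2 * inertia)

I = 1.5e-46  # assumed moment of inertia, kg m^2
print([rot_energy(j, I) for j in range(4)])  # spacing grows as J(J+1)
```

Unlike the vibrational ladder, the rotational levels are not evenly spaced: they scale as 0, 2, 6, 12, ... times a common constant, which is what produces the characteristic line spacing of rotational band spectra.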
The energies of the states of the ‘nucleus' can be determined from the gamma ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons in atoms because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer Effect' has permitted the observation of some minute changes.
When X-rays are scattered by atomic centres arranged at regular intervals, interference phenomena occur, crystals providing gratings of a suitably small interval. The interference effects may be used to provide a spectrum of the beam of X-rays, since, according to ‘Bragg's Law', the angle of reflection of X-rays from a crystal depends on the wavelength of the rays. For lower-energy X-rays, mechanically ruled gratings can be used. Each chemical element emits characteristic X-rays in sharply defined groups in widely separated regions, known as the K, L, M, N, etc., series; the lines of any series move toward shorter wavelengths as the atomic number of the element increases. If a parallel beam of X-rays of wavelength λ strikes a set of crystal planes, it is reflected from the different planes, interference occurring between X-rays reflected from adjacent planes. Bragg's Law states that constructive interference takes place when the difference in path lengths is equal to an integral number of wavelengths: 2d sin θ = nλ,
in which n is an integer, d is the interplanar distance, and θ is the angle between the incident X-ray and the crystal plane. This angle is called the ‘Bragg Angle', and a bright spot will be obtained on an interference pattern at this angle. A dark spot will be obtained, however, if 2d sin θ = mλ, where m is half-integral. The structure of a crystal can be determined from a set of interference patterns found at various angles from the different crystal faces.
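Bragg's law can be solved for the reflection angle of each order; the wavelength (roughly that of Cu K-alpha X-rays) and the plane spacing below are illustrative assumptions.

```python
import math

# Sketch: solving Bragg's law 2 d sin(theta) = n * lambda for the Bragg
# angle of each order of reflection.

def bragg_angle_deg(wavelength_m, d_spacing_m, order=1):
    s = order * wavelength_m / (2.0 * d_spacing_m)
    if s > 1.0:
        return None  # sin(theta) > 1: no reflection possible at this order
    return math.degrees(math.asin(s))

lam, d = 1.54e-10, 2.0e-10  # assumed wavelength and plane spacing, metres
for n in (1, 2, 3):
    print(n, bragg_angle_deg(lam, d, n))
```

Higher orders appear at steeper angles, and beyond some order the condition n·λ/2d > 1 can no longer be met, so only a finite set of reflections exists for a given wavelength and spacing.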
The atom is a concept originally introduced by the ancient Greeks, as a tiny indivisible component of matter, developed by Dalton, as the smallest part of an element that can take part in a chemical reaction, and made precise by experiment in the late-19th and early 20th centuries. Following the discovery of the electron (1897), it was recognized that atoms had structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of the atom is concentrated at its centre in a region of positive charge, the nucleus, with a radius of the order of 10⁻¹⁵ metre. The electrons occupy the surrounding space to a radius of 10⁻¹¹ to 10⁻¹⁰ m. Rutherford also proposed that the nucleus has a charge of Ze and is surrounded by Z electrons (Z is the atomic number). According to classical physics such a system must emit electromagnetic radiation continuously, and consequently no permanent atom would be possible. This problem was solved by the development of the ‘Quantum Theory.'
The ‘Bohr Theory of the Atom' (1913) introduced the notion that an electron in an atom is normally in a state of lowest energy (ground state) in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited, that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes (typically nanoseconds) and the electron returns to the ground state, commonly by emitting one or more ‘quanta' of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld, 1915) and electron spin (Pauli, 1925), but a satisfactory theory only became possible upon the development of ‘Wave Mechanics' after 1925.
According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of a wave equation. This determines the ‘probability' that the electron may be found in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the ‘Pauli Exclusion Principle', not more than one electron can be in a given state.
An exact calculation of the energies and other properties of the quantum states is possible only for the simplest atoms, but various approximate methods give useful results, i.e., methods of solving a difficult problem when the equations to be solved depart only slightly from those of some problem already solved. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves, as exemplified by the Lamb shift, a small difference in energy between the 2S½ and 2P½ levels of hydrogen; these levels would have the same energy according to the wave mechanics of Dirac. The actual shift can be explained by a correction to the energies based on the theory of the interaction of electromagnetic fields with matter, in which the fields themselves are quantized. Yet other information may be obtained from magnetic and other chemical properties.
The appearance potential is defined as: (1) the potential difference through which an electron must be accelerated from rest to produce a given ion from its parent atom or molecule; (2) this potential difference multiplied by the electron charge, giving the least energy required to produce the ion. A simple ionizing process gives the ‘ionization potential' of the substance, for example:
Ar + e → Ar⁺ + 2e.
Higher appearance potentials may be found for multiply charged ions:
Ar + e → Ar⁺⁺ + 3e.
The atomic number is the number of protons in the nucleus of an atom, or the number of electrons revolving around the nucleus. The atomic number determines the chemical properties of an element and the element's position in the periodic table: the classification of the chemical elements, in tabular form, in order of their atomic number. The elements show a periodicity of properties, chemically similar elements recurring in a definite order. The sequence of elements is thus broken into horizontal ‘periods' and vertical ‘groups', the elements in each group showing close chemical analogies, i.e., in valency, chemical properties, etc. All the isotopes of an element have the same atomic number, although different isotopes have different mass numbers.
All the aforementioned devices are designed to produce collisions between particles travelling in opposite directions. This gives effectively much higher energies available for interaction than are possible with stationary targets. High-energy nuclear reactions occur when the particles collide, either with each other or with a stationary target. The particles created in these reactions are detected by sensitive equipment close to the collision site. New particles, including the tauon, W, and Z particles, requiring enormous energies for their creation, have been detected and their properties determined.
Still, a ‘nucleon' and ‘antinucleon' annihilating at low energy produce about half a dozen pions, which may be neutral or charged. By definition, mesons are both hadrons and bosons, just as the pion and kaon are mesons. Mesons have a substructure composed of a quark and an antiquark bound together by the exchange of particles known as gluons.
The conjugate particle, or antiparticle, corresponds with another particle of identical mass and spin, but has quantum numbers such as charge (Q), baryon number (B), strangeness (S), charm, and isospin (I3) of equal magnitude but opposite sign. Examples of a particle and its antiparticle include the electron and positron, the proton and antiproton, the positively and negatively charged pions, and the ‘up' quark and ‘up' antiquark. The antiparticle corresponding to a particle with the symbol ‘a' is usually denoted ‘ā'. When a particle and its antiparticle are identical, as with the photon and neutral pion, it is called a ‘self-conjugate particle'.
The critical potential, or excitation energy, required to change an atom or molecule from one quantum state to another of higher energy is equal to the difference in energy of the states, and is usually the difference in energy between the ground state of the atom and a specified excited state. An excited state is the state of a system, such as an atom or molecule, when it has a higher energy than its ground state.
According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of it being within a distance of ~5 × 10⁻¹¹ metre. The maximum probability occurs when r = a0, where a0 is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing a volume within which there is an arbitrarily decided probability (say 95%) of finding the electron.
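The claim that the most probable radius equals the first Bohr radius can be verified with a quick numerical scan, using the standard form of the hydrogen 1s radial probability density.

```python
import math

# Sketch: for the hydrogen 1s state the radial probability density is
# proportional to r^2 * exp(-2r/a0).  Scanning r from 0 to 4*a0 confirms
# that the maximum lies at r = a0, the first Bohr radius.

A0 = 5.29e-11  # Bohr radius, m

def radial_prob(r):
    return r * r * math.exp(-2.0 * r / A0)

radii = [i * A0 / 1000.0 for i in range(1, 4001)]  # scan 0 .. 4 a0
most_probable = max(radii, key=radial_prob)
print(most_probable / A0)  # ~1.0: most probable radius is one Bohr radius
```

The r² factor (growth of the spherical shell volume) competes with the exponential decay of the wave function, and the two balance exactly at r = a0.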
Finally, the electron in an atom can have a fourth quantum number, s, characterizing its spin direction. This can be +½ or −½, and according to the ‘Pauli Exclusion Principle' each orbital can hold only two electrons. The four quantum numbers lead to an explanation of the periodic table of the elements.
Earlier mentions of the ‘moment' referred to such quantities as the moment of inertia and the moment of momentum. The moment of a force about an axis is the product of the perpendicular distance of the axis from the line of action of the force and the component of the force in the plane perpendicular to the axis. The moment of a system of coplanar forces about an axis perpendicular to the plane containing them is the algebraic sum of the moments of the separate forces about that axis, anticlockwise moments being conventionally taken as positive and clockwise ones as negative. The moment of momentum about an axis, symbol L, is the product of the moment of inertia and the angular velocity (Iω). Angular momentum is a pseudo-vector quantity; it is conserved in an isolated system. About an axis it is a scalar and is given a positive or negative sign, as with the moment of force. When considering systems in which forces and motions do not all lie in one plane, the concept of the moment about a point is needed. The moment of a vector P, e.g., force or momentum, about a point A is a pseudo-vector M equal to the vector product of r and P, where r is any line joining A to any point B on the line of action of P. The vector product M = r × P is independent of the position of B, and the relation between the scalar moment about an axis and the vector moment about a point on the axis is that the scalar is the component of the vector in the direction of the axis.
The linear momentum of a particle p is the product of the mass and the velocity of the particle. It is a vector quantity directed through the particle in the direction of motion. The linear momentum of a body or of a system of particles is the vector sum of the linear momenta of the individual particles. If a body of mass M is translated with a velocity V, its momentum is MV, which is the momentum of a particle of mass M at the centre of gravity of the body. (1) In any system of mutually interacting or impinging particles, the linear momentum in any fixed direction remains unaltered unless there is an external force acting in that direction. (2) Similarly, the angular momentum is constant in the case of a system rotating about a fixed axis, provided that no external torque is applied.
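Conservation law (1) can be illustrated with a one-dimensional elastic collision; the masses and velocities below are arbitrary illustrative numbers.

```python
# Sketch: in an elastic head-on collision between two particles the total
# linear momentum (and, for the elastic case, the kinetic energy) is the
# same before and after the impact.

def elastic_collision(m1, v1, m2, v2):
    """Final velocities for a one-dimensional perfectly elastic collision."""
    u1 = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -2.0  # assumed masses (kg) and speeds (m/s)
u1, u2 = elastic_collision(m1, v1, m2, v2)
print(m1 * v1 + m2 * v2)  # total momentum before
print(m1 * u1 + m2 * u2)  # total momentum after: identical
```

The individual velocities change completely, but the vector sum m1v1 + m2v2 is untouched, since the internal forces of the impact are equal and opposite.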
Subatomic particles fall into two major groups: the elementary particles and the hadrons. An elementary particle is not composed of any smaller particles and therefore represents the most fundamental form of matter. A hadron is composed of smaller particles, including the major particles called quarks. The most common of the subatomic particles include the major constituents of the atom: the electron (an elementary particle), and the proton and the neutron (hadrons). The neutron is a particle with zero charge and a rest mass equal to:
1.6749542 × 10⁻²⁷ kg,
i.e., 939.5729 MeV/c².
It is a constituent of every atomic nucleus except that of ordinary hydrogen. Free neutrons decay by ‘beta decay' with a mean life of 914 s. The neutron has spin ½, isospin ½, and positive parity. It is a ‘fermion' and is classified as a ‘hadron' because it has a strong interaction.
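The two rest-mass figures quoted above (in kg and in MeV/c²) can be cross-checked against each other via E = mc²; the constants used are standard rounded values.

```python
# Sketch: converting the neutron rest mass from kg to MeV/c^2 and
# comparing with the quoted 939.5729 MeV/c^2 figure.

M_N = 1.6749542e-27   # neutron rest mass in kg, as quoted above
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

mass_mev = M_N * C * C / EV / 1.0e6
print(mass_mev)  # ~939.57, matching the quoted MeV/c^2 value
```

The agreement to a few parts in a hundred thousand confirms the two figures are the same quantity expressed in different units.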
Neutrons can be ejected from nuclei by high-energy particles or photons; the energy required is usually about 8 MeV, although sometimes it is less. Fission is the most productive source. Neutrons are detected using all the normal detectors of ionizing radiation, by means of the secondary particles produced in nuclear reactions. The discovery of the neutron (Chadwick, 1932) involved the detection of the tracks of protons ejected by neutrons in elastic collisions in hydrogenous materials.
Unlike other nuclear particles, neutrons are not repelled by the electric charge of a nucleus, so they are very effective in causing nuclear reactions. When there is no ‘threshold energy', the interaction ‘cross sections' become very large at low neutron energies, and the thermal neutrons produced in great numbers by nuclear reactors cause nuclear reactions on a large scale. The capture of neutrons by the (n, γ) process produces large quantities of radioactive materials, both useful nuclides such as ⁶⁰Co for cancer therapy and undesirable by-products. The threshold energy is the least energy required to cause a certain process, in particular a reaction in nuclear or particle physics; it is often important to distinguish between the energies required in the laboratory and in centre-of-mass co-ordinates. ‘Fission' is the splitting of a heavy nucleus of an atom into two or more fragments of comparable size, usually as the result of the impact of a neutron on the nucleus. It is normally accompanied by the emission of neutrons or gamma rays. Plutonium, uranium, and thorium are the principal fissionable elements.
In a nuclear reaction, an atomic nucleus reacts with a bombarding particle or photon, leading to the creation of a new nucleus and the possible ejection of one or more particles. Nuclear reactions are often represented by enclosing in brackets the symbols for the incoming and outgoing light particles, with the initial and final nuclides shown outside the brackets, for example 14N (α, p) 17O. The gross effect behind energy from nuclear fission is that nuclei of moderate size are more tightly held together than the largest nuclei, so that if the nucleus of a heavy atom can be induced to split into two nuclei of moderate mass, there should be a considerable release of energy. By Einstein's law of the equivalence of mass and energy, the difference between the mass of a nucleus and the sum of the masses of its separate nucleons is equivalent to the energy released when the nucleons bind together: this energy is the binding energy. The graph of binding energy per nucleon, EB/A, increases rapidly up to a mass number of 50-60 (iron, nickel, etc.) and then decreases slowly. There are therefore two ways in which energy can be released from a nucleus, both of which entail rearranging nuclei from the lower, higher-energy parts of the curve into nuclei in the upper part. Fission is the splitting of heavy atoms, such as uranium, into lighter atoms, accompanied by an enormous release of energy; fusion of light nuclei, such as deuterium and tritium, releases an even greater quantity of energy per unit mass.
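The shape of the binding-energy curve just described can be checked directly from mass defects. A minimal sketch, using standard tabulated atomic masses (in atomic mass units) that are not figures from the text:

```python
# Binding energy per nucleon, E_B/A, from the mass defect.
u_MeV = 931.494          # MeV per atomic mass unit (standard value)
m_H = 1.007825           # mass of a hydrogen atom (proton + electron), u
m_n = 1.008665           # neutron mass, u

def binding_per_nucleon(Z, A, atomic_mass_u):
    """E_B/A in MeV; atomic masses are used throughout, so electron masses cancel."""
    mass_defect = Z * m_H + (A - Z) * m_n - atomic_mass_u
    return mass_defect * u_MeV / A

print(round(binding_per_nucleon(2, 4, 4.002602), 2))      # helium-4:    ~7.07 MeV
print(round(binding_per_nucleon(26, 56, 55.934936), 2))   # iron-56:     ~8.79 MeV
print(round(binding_per_nucleon(92, 238, 238.050788), 2)) # uranium-238: ~7.57 MeV
# E_B/A rises to a maximum near iron and then falls slowly, which is why both
# fission of heavy nuclei and fusion of light nuclei release energy.
```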
Electron attachment is the process by which an electron is added to an atom or molecule to form a negative ion. The process is sometimes called electron capture, but that term is more usually applied to nuclear processes. Many atoms, molecules, and free radicals form stable negative ions by capturing electrons in this way. The electron affinity is the least amount of work that must be done to separate the electron from the ion again; it is usually expressed in electronvolts.
The uranium isotope 235U will readily accept a neutron, but one-seventh of the resulting nuclei are stabilized by gamma emission while six-sevenths split into two parts. Most of the energy released, about 170 MeV, is in the form of the kinetic energy of these fission fragments. In addition, an average of 2.5 neutrons of average energy 2 MeV and some gamma radiation are produced. Further energy is released later by the radioactivity of the fission fragments. The total energy released is about 3 × 10⁻¹¹ joule per atom fissioned, i.e., 6.5 × 10¹³ joule per kilogram of 235U consumed.
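The per-kilogram figure follows from the per-atom figure. A short sketch, assuming the Avogadro constant (a standard value, not from the text) and the one-in-seven radiative-capture fraction given above:

```python
# Scale the quoted per-atom fission energy up to a kilogram of 235U.
N_A = 6.02214076e23        # Avogadro constant, atoms per mole (standard value)
molar_mass_kg = 0.235      # kg per mole of 235U
E_per_fission = 3e-11      # J per atom fissioned (figure quoted above)
fission_fraction = 6 / 7   # six-sevenths of neutron captures lead to fission

E_per_kg = E_per_fission * (N_A / molar_mass_kg) * fission_fraction
print(f"{E_per_kg:.2e}")   # ~6.59e13 J/kg, consistent with the quoted 6.5e13
```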
To extract energy in a controlled manner from fissionable nuclei, arrangements must be made for a sufficient proportion of the neutrons released in fission to cause further fissions in their turn, so that the process is continuous; the minimum mass of fissile material that will sustain a chain reaction is called the critical mass. A reactor with a large proportion of 235U or plutonium (239Pu) in the fuel uses the fast neutrons as they are liberated from fission; such a reactor is called a ‘fast reactor'. Natural uranium contains only 0.7% of 235U, and the liberated neutrons must be slowed down, before they have much chance of being captured by the more common 238U atoms, if they are to cause further fissions. To slow the neutrons, a moderator is used, containing light atoms to which the neutrons give up kinetic energy by collision. When the neutrons eventually acquire energies appropriate to gas molecules at the temperature of the moderator, they are said to be thermal neutrons and the reactor is a thermal reactor.
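The moderation process lends itself to a quick estimate. A standard reactor-physics quantity, not given in the text, is the mean logarithmic energy loss per elastic collision, ξ; from it one can sketch how many collisions are needed to thermalize a fission neutron:

```python
import math

# Estimate the number of elastic collisions needed to slow a 2 MeV fission
# neutron to thermal energy (~0.025 eV) in a moderator of mass number A.
def xi(A):
    """Mean logarithmic energy decrement per elastic collision (standard formula)."""
    if A == 1:
        return 1.0  # the formula below is singular at A = 1; the limit is 1
    return 1 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))

def collisions_to_thermalize(A, E0=2e6, E_thermal=0.025):
    # number of collisions = total logarithmic energy loss / loss per collision
    return math.log(E0 / E_thermal) / xi(A)

print(round(collisions_to_thermalize(1)))    # hydrogen (light water): ~18
print(round(collisions_to_thermalize(12)))   # carbon (graphite):      ~115
```

This is why light atoms make good moderators: the lighter the nucleus, the larger the energy loss per collision.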
In typical thermal reactors, the fuel elements are rods embedded as a regular array in the bulk of the moderator, so that a typical neutron from a fission process has a good chance of escaping from the narrow fuel rod and making many collisions with nuclei in the moderator before again entering a fuel element. Suitable moderators are pure graphite, heavy water (D2O), and ordinary water (H2O); the latter two are sometimes also used as coolants. Very pure materials are essential, as some unwanted nuclei capture neutrons readily. The reactor core is surrounded by a reflector made of suitable material to reduce the escape of neutrons from the surface. Each fuel element is encased, e.g., in magnesium alloy or stainless steel, to prevent the escape of radioactive fission products. The coolant, which may be gaseous or liquid, flows along the channels over the canned fuel elements. There is an emission of gamma rays inherent in the fission process, and many of the fission products are intensely radioactive. To protect personnel, the assembly is surrounded by a massive biological shield of concrete, with an inner iron thermal shield to protect the concrete from the high temperatures caused by absorption of radiation.
To keep the power production steady, control rods are moved in or out of the assembly. These contain material that captures neutrons readily, e.g., cadmium or boron. The power production can be held steady by allowing the currents in suitably placed ionization chambers automatically to modify the settings of the rods. Further absorbent rods, the shut-down rods, are driven into the core to stop the reaction in an emergency, if the control mechanism fails. To attain high thermodynamic efficiency, so that a large proportion of the liberated energy can be used, the heat should be extracted from the reactor core at a high temperature.
In fast reactors no moderator is used, the frequency of collisions between neutrons and fissile atoms being increased by enriching the natural uranium fuel with 239Pu or additional 235U atoms, which are fissioned by fast neutrons. The fast neutrons thus build up a self-sustaining chain reaction. In these reactors the core is usually surrounded by a blanket of natural uranium into which some of the neutrons are allowed to escape. Under suitable conditions some of these neutrons will be captured by 238U atoms, forming 239U atoms, which are converted into 239Pu. As more plutonium can be produced than is required to enrich the fuel in the core, such reactors are called ‘fast breeder reactors'.
The neutrino is a neutral elementary particle with spin ½ that takes part only in weak interactions. It is a lepton and exists in three types corresponding to the three types of charged leptons: the electron neutrino (νe), the muon neutrino (νμ), and the tauon neutrino (ντ). The antiparticle of the neutrino is the antineutrino.
Neutrinos were originally thought to have zero mass, but there have since been indirect experimental advances to the contrary. In 1985 a Soviet team reported a measurement, for the first time, of a non-zero neutrino mass. The mass measured was extremely small, some 10 000 times smaller than the mass of the electron, but subsequent attempts to reproduce the Soviet measurement were unsuccessful. More recently (1998-99), the Super-Kamiokande experiment in Japan has provided indirect evidence for massive neutrinos. The new evidence is based upon studies of neutrinos created when highly energetic cosmic rays bombard the earth's upper atmosphere. By classifying the interactions of these neutrinos according to the type of neutrino involved (an electron neutrino or a muon neutrino), and counting their relative numbers as a function of the distance travelled, an oscillatory behaviour can be shown to occur. Oscillation in this sense is the changing back and forth of the neutrino's type as it travels through space or matter. The Super-Kamiokande result indicates that muon neutrinos are changing into another type of neutrino, e.g., sterile neutrinos. The experiment does not, however, determine the masses directly, though the oscillations suggest very small differences in mass between the oscillating types.
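The oscillatory behaviour can be sketched with the standard two-flavour formula. The 1.27 constant and the illustrative parameter values (a mass-squared difference of about 2.5 × 10⁻³ eV² and maximal mixing) are conventional assumptions, not figures from the text:

```python
import math

# Two-flavour sketch: probability that a muon neutrino is still a muon
# neutrino after travelling L km with energy E GeV.
def survival_probability(L_km, E_GeV, dm2_eV2, sin2_2theta):
    return 1 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# A neutrino produced overhead travels ~15 km and barely oscillates:
print(survival_probability(15, 1.0, 2.5e-3, 1.0))       # close to 1
# The first oscillation maximum for a 1 GeV neutrino:
L_max = math.pi / (2 * 1.27 * 2.5e-3)                   # ~495 km
print(survival_probability(L_max, 1.0, 2.5e-3, 1.0))    # ~0: full conversion
```

The dependence on distance travelled is what lets an experiment compare downward-going and upward-going (Earth-crossing) atmospheric neutrinos.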
The neutrino was first postulated (Pauli, 1930) to explain the continuous spectrum of beta rays. It is assumed that there is the same amount of energy available for each beta decay of a particular nuclide and that this energy is shared according to a statistical law between the electron and a light neutral particle, now classified as the antineutrino, ν̄e. Later it was shown that the postulated particle would also conserve angular momentum and linear momentum in beta decays.
In addition to beta decay, the electron neutrino is also associated with, for example, positron decay and electron capture:
22Na → 22Ne + e⁺ + νe
55Fe + e⁻ → 55Mn + νe
The absorption of antineutrinos in matter by the process
1H + ν̄e → n + e⁺
was first demonstrated by Reines and Cowan. The muon neutrino is generated in such processes as pion decay. Although the interactions of neutrinos are extremely weak, the cross sections increase with energy, and reactions can be studied at the enormous energies available with modern accelerators. In some forms of grand unified theories, neutrinos are predicted to have a non-zero mass; for a long time no direct evidence was found to support this prediction.
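The processes above can be checked by simple bookkeeping: electric charge and electron lepton number must balance on each side of a reaction. A minimal sketch (the particle table is an illustrative construction, not from the text):

```python
# Each entry is (electric charge, electron lepton number).
props = {
    'p': (1, 0), 'n': (0, 0),
    'e-': (-1, 1), 'e+': (1, -1),
    've': (0, 1), 'anti-ve': (0, -1),
}

def balanced(lhs, rhs):
    """True if charge and electron lepton number are conserved."""
    totals = lambda side: tuple(sum(props[x][i] for x in side) for i in (0, 1))
    return totals(lhs) == totals(rhs)

print(balanced(['n'], ['p', 'e-', 'anti-ve']))   # beta decay: True
print(balanced(['p', 'anti-ve'], ['n', 'e+']))   # Reines-Cowan absorption: True
```

The antineutrino carries lepton number −1, which is exactly what makes both reactions balance.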
The positron is the antiparticle of the electron, i.e., an elementary particle with the electron's mass and a positive charge equal in magnitude to that of the electron. According to the relativistic wave mechanics of Dirac, space contains a continuum of electrons in states of negative energy. These states are normally unobservable, but if sufficient energy is supplied, an electron may be raised into a state of positive energy and become observable. The vacant negative-energy state left behind behaves as a positive particle of positive energy, which is observed as a positron.
String theory is a theory of elementary particles based on the idea that the fundamental entities are not point-like particles but finite lines (strings) or closed loops formed by strings. The original idea was that an elementary particle was the result of a standing wave in a string. A considerable amount of theoretical effort has been put into developing string theories. In particular, combining the idea of strings with that of supersymmetry has led to the idea of superstrings. This theory may be a more useful route to a unified theory of fundamental interactions than quantum field theory, because it probably avoids the infinities that arise when gravitational interactions are introduced into field theories. Thus, superstring theory inevitably leads to particles of spin 2, identified as gravitons. String theory also shows why particles violate parity conservation in weak interactions.
Superstring theories involve the idea of higher-dimensional spaces: 10 dimensions for fermions and 26 dimensions for bosons. It has been suggested that there are the normal four space-time dimensions, with the extra dimensions being tightly ‘curled up'. Still, there is no direct experimental evidence for superstrings. They are thought to have a length of about 10⁻³⁵ m and energies of 10¹⁴ GeV, which is well above the energy of any accelerator. An extension of the theory postulates that the fundamental entities are not one-dimensional but two-dimensional, i.e., they are supermembranes.
A symmetry operation on a system is an operation that does not change the system; the set of such operations forms the invariances of the system and is studied mathematically using group theory. Some symmetries are directly physical, for instance reflections and rotations of molecules and translations in crystal lattices. More abstract symmetries involve changing properties, as in the CPT theorem and the symmetries associated with gauge theory. Gauge theories are now thought to provide the basis for a description of all elementary particle interactions. Electromagnetic interactions are described by quantum electrodynamics, which is an Abelian gauge theory.
A gauge theory is a quantum field theory for which measurable quantities remain unchanged under a group of transformations of the fields. In a quantum field theory, particles are represented by fields whose normal modes of oscillation are quantized; elementary particle interactions are described by relativistically invariant theories of quantized fields, i.e., by relativistic quantum field theories. Gauge transformations can take the form of a simple multiplication by a constant phase; such transformations are called ‘global gauge transformations'. In local gauge transformations, the phase of the fields is altered by an amount that varies with space and time. In Abelian gauge theories, such as quantum electrodynamics, consecutive field transformations commute. Quantum chromodynamics (the theory of the strong interaction) and the electroweak and grand unified theories are all non-Abelian: in these theories consecutive field transformations do not commute. All non-Abelian gauge theories are based on work proposed by Yang and Mills in 1954, describing the interaction between two quantum fields of fermions; Einstein's theory of general relativity can also be formulated as a local gauge theory.
Supersymmetry is a symmetry relating bosons and fermions: in theories based on supersymmetry, every boson has a corresponding fermion partner and every fermion a corresponding boson partner. The boson partners of existing fermions have names formed by prefixing the name of the fermion with an ‘s' (e.g., selectron, squark, slepton). The names of the fermion partners of existing bosons are obtained by changing the terminal -on of the boson to -ino (e.g., photino, gluino, and zino). Although supersymmetric partners have not been observed experimentally, supersymmetry may prove important in the search for a unified field theory of the fundamental interactions.
The quark is a fundamental constituent of hadrons, i.e., of particles that take part in strong interactions. Quarks are never seen as free particles, and there is no experimental evidence for isolated quarks. The explanation given in quantum chromodynamics, the gauge theory by which quarks are described, is that the quark interaction becomes weaker as quarks come closer together and falls to zero when the distance between them is zero. The converse of this proposition is that the attractive force between quarks becomes stronger as they move apart, and as this process has no limit, quarks can never separate from each other. In some theories it is postulated that at very high temperatures, as might have prevailed in the early universe, quarks can separate; the temperature at which this occurs is called the ‘deconfinement temperature'. Nevertheless, the existence of quarks has been demonstrated in high-energy scattering experiments and by symmetries in the properties of observed hadrons. They are regarded as elementary fermions, with spin ½, baryon number ⅓, strangeness 0 or −1, and charm 0 or +1. They are classified in six flavours: up (u), charm (c), and top (t), each with charge ⅔ of the proton charge; and down (d), strange (s), and bottom (b), each with charge −⅓ of the proton charge. Each type has an antiquark with reversed signs of charge, baryon number, strangeness, and charm. When this entry was written, the top quark had not been observed experimentally, though there were strong theoretical arguments for its existence and its mass was known to be greater than about 90 GeV/c²; it was subsequently observed in 1995.
The fractional charges of quarks are never observed in hadrons, since the quarks form combinations in which the sum of their charges is zero or integral. Hadrons can be either baryons or mesons; essentially, baryons are composed of three quarks while mesons are composed of a quark-antiquark pair. These components are bound together within the hadron by the exchange of particles known as gluons. Gluons are neutral massless gauge bosons; in quantum chromodynamics, the analogue of the quantum field theory of electromagnetic interactions, the gluon plays the role of the photon, with a quantum number known as ‘colour' replacing that of electric charge. Each quark type (or flavour) comes in three colours (red, blue, and green, say), where colour is simply a convenient label and has no connection with ordinary colour. Unlike the photon of quantum electrodynamics, which is electrically neutral, the gluons of quantum chromodynamics carry colour and can therefore interact with themselves. Particles that carry colour are believed not to be able to exist as free particles. Instead, quarks and gluons are permanently confined inside hadrons (strongly interacting particles, such as the proton and the neutron).
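The integral hadron charges can be recovered by simple addition of the fractional quark charges listed above. A minimal sketch (charges in units of the proton charge):

```python
from fractions import Fraction

# Quark charges in units of the proton charge, as given in the text.
charge = {'u': Fraction(2, 3), 'c': Fraction(2, 3), 't': Fraction(2, 3),
          'd': Fraction(-1, 3), 's': Fraction(-1, 3), 'b': Fraction(-1, 3)}

def hadron_charge(quarks, antiquarks=''):
    """Sum the quark charges; antiquarks contribute with reversed sign."""
    return (sum(charge[q] for q in quarks)
            - sum(charge[q] for q in antiquarks))

print(hadron_charge('uud'))       # proton:  1
print(hadron_charge('udd'))       # neutron: 0
print(hadron_charge('u', 'd'))    # pi+ meson (u with anti-d): 1
```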
The gluon self-interaction leads to the property known as ‘asymptotic freedom', in which the interaction strength for the strong interaction decreases as the momentum transfer involved in an interaction increases. This allows perturbation theory to be used and quantitative comparisons to be made with experiment, similar to, though less precise than, those of quantum electrodynamics. Quantum chromodynamics is being tested successfully in high-energy muon-nucleon scattering experiments and in proton-antiproton and electron-positron collisions at high energies. Strong evidence for the existence of colour comes from measurements of the interaction rates for e⁺e⁻ → hadrons and e⁺e⁻ → μ⁺μ⁻. The relative rate for these two processes is a factor of three larger than would be expected without colour; this factor directly measures the number of colours, i.e., three, for each quark flavour.
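The colour-counting argument can be sketched in a couple of lines. The ratio R of the two rates equals the number of colours times the sum of the squared quark charges accessible at the collision energy (the choice of u, d, s below, i.e. energies below the charm threshold, is an illustrative assumption):

```python
from fractions import Fraction

# R = rate(e+e- -> hadrons) / rate(e+e- -> mu+mu-)
def R(quark_charges, n_colours=3):
    return n_colours * sum(q ** 2 for q in quark_charges)

# Below the charm threshold only u, d, s pairs can be produced:
uds = [Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)]
print(R(uds))                 # 2 with three colours
print(R(uds, n_colours=1))    # 2/3 without colour: a factor of 3 smaller
```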
The charge and spin of hadrons are the sums of the charge and spin of their component quarks and antiquarks.
In strange baryons one or more of the quarks is an s quark, while in strange mesons either the quark or the antiquark is strange. Similarly, the presence of one or more c quarks leads to the charmed baryons, and a c or anti-c quark to the charmed mesons. It has been found useful to introduce a further subdivision of quarks, each flavour coming in three colours (red, green, blue). Colour as used here serves simply as a convenient label and is unconnected with ordinary colour. A baryon comprises a red, a green, and a blue quark, and a meson comprises a red and antired, a blue and antiblue, or a green and antigreen quark-antiquark pair. In analogy with combinations of the three primary colours of light, hadrons carry no net colour, i.e., they are ‘colourless' or ‘white'. Only colourless objects can exist as free particles. The characteristics of the six quark flavours are shown in the table.
The central feature of quantum field theory is that the essential reality is a set of fields subject to the rules of special relativity and quantum mechanics; all else is derived as a consequence of the quantum dynamics of those fields. The quantization of fields is essentially an exercise in which we use complex mathematical models to analyse the field in terms of its associated quanta. Material reality as we know it in quantum field theory is constituted by the transformation and organization of fields and their associated quanta. Hence, this reality reveals a fundamental complementarity between particles, which are localized in space-time, and fields, which are not. In modern quantum field theory, all matter is composed of six strongly interacting quarks and six weakly interacting leptons. The six quarks are called up, down, charmed, strange, top, and bottom, and have different rest masses and fractional charges. The up and down quarks combine through the exchange of gluons to form protons and neutrons.
The lepton belongs to the class of elementary particles that do not take part in strong interactions. Leptons have no substructure of quarks and are considered indivisible. They are all fermions, and are categorized into six distinct types: the electron, muon, and tauon, which are all identically charged but differ in mass, and the three neutrinos, which are all neutral and thought to be massless or nearly so. In their interactions the leptons appear to observe boundaries that define three families, each composed of a charged lepton and its neutrino. The families are distinguished mathematically by three quantum numbers, Le, Lμ, and Lτ, called lepton numbers. In weak interactions, the lepton numbers of the individual families are separately conserved.
In quantum field theory, potential vibrations at each point in the four fields are capable of manifesting themselves, in their complementarity, as individual particles. The interactions of the fields result from the exchange of quanta that are carriers of the fields. These carriers, known as messenger quanta, are the ‘coloured' gluons for the strong binding force, the photon for electromagnetism, the intermediate bosons for the weak force, and the graviton for gravitation. If we could re-create the energies present in the first trillionths of trillionths of a second in the life of the universe, these four fields would, according to quantum field theory, become one fundamental field.
The movement toward a unified theory has evolved progressively from supersymmetry to supergravity to string theory. In string theory the one-dimensional trajectories of particles, illustrated in Feynman diagrams, are replaced by the two-dimensional orbits of a string. In addition to introducing an extra dimension, represented by the small diameter of the string, string theory also features another small but non-zero constant, analogous to Planck's quantum of action. Since the value of this constant is quite small, it can generally be ignored except at extremely small dimensions; but since, like Planck's constant, it is not zero, departures from ordinary quantum field theory result at very small dimensions.
Part of what makes string theory attractive is that it eliminates, or ‘transforms away', the inherent infinities found in the quantum theory of gravity. If the predictions of this theory are proven valid in repeatable experiments under controlled conditions, it could allow gravity to be unified with the other three fundamental interactions. Nevertheless, even if string theory leads to this grand unification, it will not alter our understanding of wave-particle duality. While the success of the theory would reinforce our view of the universe as a unified dynamic process, it applies to very small dimensions and therefore does not alter our view of wave-particle duality.
While the formalism of quantum physics predicts that correlations between particles over space-like separations are possible, it can say nothing about what this strange new relationship between parts (quanta) and the whole (cosmos) means outside this formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole, in that the parts will not be ‘mutually adaptive and complementary to one another.'
Wholeness requires a complementary relationship between unity and difference and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts constituting the whole, even though the whole is exemplified only in its parts. This principle of order, Harris continued, ‘is nothing really in and of itself. It is the way the parts are organized, and not another constituent additional to those that constitute the totality.'
In a genuine whole, the relationship between the constituent parts must be ‘internal or immanent' in the parts, as opposed to a more spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly constitute the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relations between parts and whole in modern biology.
Modern physics also reveals, claimed Harris, a complementary relationship between the differences between the parts that constitute a whole and the universal ordering principle that is immanent in each part. While the whole cannot be finally disclosed in the analysis of the parts, the study of the differences between parts provides insight
into the dynamic structure of the whole present in each part. The part can never, however, be finally isolated from the web of relationships that discloses the interconnections with the whole, and any attempt to do so results in ambiguity.
Much of the ambiguity in attempts to explain the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet the order in complementary relationships between difference and sameness in any physical event is never external to that event; the relations are immanent in the event. From this perspective, the addition of non-locality to this picture of the dynamic constitution of wholeness is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the indivisible whole, disclosed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.
If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of the system, we can assume that the cosmos is a single significant whole that evinces progressive order in complementary relations to its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground of all emergent complexity. Since human consciousness evinces self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is not unreasonable to conclude, in philosophical terms at least, that the universe is conscious.
Even so, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representation or description. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatsoever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.
While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on that knowledge, let us be quite clear on one point: there is no empirically valid causal linkage between the former and the latter. Those who wish to dismiss the speculation are obviously free to do so. However, there is another conclusion to be drawn that is firmly grounded in scientific theory and experiment: there is no basis in the scientific description of nature for believing in the radical Cartesian division between mind and world sanctioned by classical physics. Clearly, this radical separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realms of their applicability.
All the same, the philosophical implications provide a motive for considering how our proposed new understanding of the relationship between parts and wholes in physical reality might affect the manner in which we deal with some major real-world problems. This will serve to demonstrate why a timely resolution of these problems is critically dependent on a renewed dialogue between members of the cultures of humanists-social scientists and scientists-engineers. We will also argue that the resolution of these problems could be dependent on a renewed dialogue between science and religion.
As many scholars have demonstrated, the classical paradigm in physics has greatly influenced and conditioned our understanding and management of human systems in economic and political realities. Virtually all models of these realities treat human systems as if they consist of atomized units or parts that interact with one another in terms of laws or forces external to or between the parts. These systems are also viewed as hermetic or closed and, thus, as discrete, separate, and distinct.
Consider, for example, how the classical paradigm influenced our thinking about economic reality. In the eighteenth and nineteenth centuries, the founders of classical economics, figures like Adam Smith, David Ricardo, and Thomas Malthus, conceived of the economy as a closed system in which interactions between parts (consumers, producers, distributors, etc.) are controlled by forces external to the parts (supply and demand). The central legitimating principle of free-market economics, formulated by Adam Smith, is that lawful or law-like forces external to the individual units function as an invisible hand. This invisible hand, said Smith, frees the units to pursue their best interests, moves the economy forward, and generally legislates the behaviour of parts to the best advantage of the whole. (The resemblance between the invisible hand and Newton's universal law of gravity, and between the relations of parts and wholes in classical economics and classical physics, should be transparent.)
After roughly 1830, economists shifted the focus to the properties of the invisible hand in the interactions between parts, using mathematical models. Within these models, the behaviour of parts in the economy is assumed to be analogous to the lawful interactions between parts in classical mechanics. It is, therefore, not surprising that differential calculus was employed to represent economic change in a virtual world in terms of small or marginal shifts in consumption or production. The assumption was that the mathematical description of marginal shifts in the complex web of exchanges between parts (atomized units and quantities) and whole (closed economy) could reveal the lawful, or law-like, machinations of the closed economic system.
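The marginalist use of differential calculus can be sketched in a few lines. The cost function and its coefficients below are made-up illustrative numbers, not data from the text:

```python
# Marginal cost as the derivative of total cost with respect to output,
# the basic move of the marginal analysis described above.
def total_cost(q):
    return 100 + 5 * q + 0.1 * q ** 2   # hypothetical fixed + variable costs

def marginal_cost(q, dq=1e-6):
    # numerical derivative dC/dq: the cost of one additional (infinitesimal) unit
    return (total_cost(q + dq) - total_cost(q)) / dq

print(round(marginal_cost(50), 2))  # 15.0, since dC/dq = 5 + 0.2q at q = 50
```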
These models later became one of the foundations for microeconomics. Microeconomics seeks to describe interactions between parts in exact quantifiable measures, such as marginal cost, marginal revenue, marginal utility, and growth of total revenue as indexed against individual units of output. In analogy with classical mechanics, the quantities are viewed as initial conditions that can serve to explain subsequent interactions between parts in the closed system in something like deterministic terms. The combination of classical macro-analysis with micro-analysis resulted in what Thorstein Veblen in 1900 termed neoclassical economics, the model for understanding economic reality that is widely used today.
Beginning in the 1930s, the challenge became to refine the understanding of the interactions between parts in closed economic systems with more sophisticated mathematical models using devices like linear programming, game theory, and new statistical techniques. In spite of the growing mathematical sophistication, these models are based on the same assumptions from classical physics featured in earlier neoclassical economic theory, with one exception: they also appeal to the assumption that systems exist in equilibrium or in perturbations from equilibria, and they seek to describe the state of the closed economic system in these terms.
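The equilibrium assumption can be sketched in a few lines. With illustrative linear supply and demand curves (the coefficients below are invented for this example), the ‘state of the system' is simply the price at which quantity demanded balances quantity supplied:

```python
# Illustrative linear curves (invented coefficients, not from the text).
def demand(p):
    return 100.0 - 2.0 * p   # quantity demanded falls as price rises

def supply(p):
    return 10.0 + 4.0 * p    # quantity supplied rises with price

# Equilibrium: 100 - 2p = 10 + 4p  ->  6p = 90  ->  p* = 15
p_star = 90.0 / 6.0
q_star = demand(p_star)
print(p_star, q_star)  # 15.0 70.0
```

Perturbation analysis then asks how p* and q* shift when a coefficient (a tax, a cost change) is nudged, which is exactly the closed-system picture the text describes.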
One could argue that the fact that our economic models rest on assumptions from classical mechanics is not a problem, by appealing to the two-domain distinction between macro-level and micro-level processes expatiated upon earlier. Since classical mechanics serves us well in our dealings with macro-level phenomena in situations where the speed of light is so large and the quantum of action is so small as to be safely ignored for practical purposes, economic theories based on assumptions from classical mechanics should serve us well in dealing with the macro-level behaviour of economic systems.
The obvious problem is that nature is reluctant to operate in accordance with these assumptions: in the biosphere, the interactions between parts are intimately related to the whole, no collection of parts is isolated from the whole, and the ability of the whole to regulate the relative abundance of atmospheric gases suggests that the whole of the biota displays emergent properties that are more than the sum of its parts. What the current ecological crisis reveals is the gulf between the abstract virtual world of neoclassical economic theory and the real economy. The real economy comprises all human activities associated with the production, distribution, and exchange of tangible goods and commodities and the consumption and use of natural resources, such as arable land and water. Although expanding economic systems in the real economy are obviously embedded in a web of relationships with the entire biosphere, our measures of healthy economic systems disguise this fact very nicely. Consider, for example, this description of a healthy economic system written in 1996 by Frederick Hu, head of the competitiveness research team for the World Economic Forum: ‘Short of military conquest, economic growth is the only viable means for a country to sustain increases in national living standards . . . An economy is internationally competitive if it performs strongly in three general areas: abundant productive inputs from capital, labour, infrastructure and technology; optimal economic policies such as low taxes, little interference and free trade; and sound market institutions such as the rule of law and protection of property rights.'
This prescription for the medium-term growth of economies in countries like Russia, Brazil, and China may seem utterly pragmatic and quite sound. However, the virtual economy described is a closed and hermetically sealed system in which the invisible hand of economic forces allegedly produces a healthy growing economy if impediments to its operation are removed or minimized. It is, of course, often true that such prescriptions can have the desired results in terms of increases in living standards, and Russia, Brazil, and China are seeking to implement them in various ways.
In the real economy, however, these systems are clearly not closed or hermetically sealed: Russia uses carbon-based fuels in production facilities that produce large amounts of carbon dioxide and other gases that contribute to global warming; Brazil is in the process of destroying a rain forest that is critical to species diversity and to the maintenance of the relative abundance of atmospheric gases that regulate the Earth's temperature; and China is seeking to build a first-world economy based on highly polluting old-world industrial plants that burn soft coal. Not to be forgotten, the virtual economic system that the world now seems to regard as the best example of the benefits that can be derived from the workings of the invisible hand, that of the United States, operates in the real economy as one of the primary contributors to the ecological crisis.
In ‘Consilience,' Edward O. Wilson makes the case that effective and timely solutions to the problems threatening human survival are critically dependent on something like a global revolution in ethical thought and behaviour. However, his view of the basis for this revolution is quite different from our own. Wilson claims that since the foundations for moral reasoning evolved in what he terms ‘gene-culture' evolution, the rules of ethical behaviour are emergent aspects of our genetic inheritance. Based on the assumption that the behaviour of contemporary hunter-gatherers resembles that of our hunter-gatherer forebears in the Palaeolithic Era, he draws on accounts of Bushman hunter-gatherers living in the central Kalahari in an effort to demonstrate that ethical behaviour is associated with instincts like bonding, cooperation, and altruism.
Wilson argues that these instincts evolved in our hunter-gatherer ancestors through genetic mutation, and that the ethical behaviour associated with these genetically based instincts provided a survival advantage. He then claims that since these genes were passed on to subsequent generations and eventually became pervasive in the human genome, the ethical dimension of human nature has a genetic foundation. When we fully understand the ‘innate epigenetic rules of moral reasoning,' he suggests, the rules will probably turn out to be an ensemble of many algorithms whose interlocking activities guide the mind across a landscape of nuanced moods and choices.
Any reasonable attempt to lay a firm foundation beneath the quagmire of human ethics in all of its myriad and often contradictory formulations is admirable, and Wilson's attempt is more admirable than most. In our view, however, there is little or no prospect that it will prove successful, for any number of reasons. While we may well discover some linkage between genes and behaviour, the evolutionary path of human ethical behaviour, and whatever survival advantages that behaviour may have conferred, seems far too complex, not to mention internally inconsistent, to be reduced to any given set of ‘epigenetic rules of moral reasoning.'
Moral codes may also derive in part from instincts that confer a survival advantage, but when we examine these codes, they are clearly primarily cultural products. This explains why ethical systems are constructed in a bewildering variety of ways in different cultural contexts and why they often sanction or legitimate quite different thoughts and behaviours. Let us not forget that rules of ethical behaviour are quite malleable and have been used to legitimate human activities such as slavery, colonial conquest, genocide, and terrorism. As Cardinal Newman cryptically put it, ‘Oh how we hate one another for the love of God.'
According to Wilson, the ‘human mind evolved to believe in the gods' and people ‘need a sacred narrative,' yet both are in his view merely human constructs, and, therefore, there is no basis for dialogue between the world views of science and religion. ‘Science, for its part, will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments. The result of the competition between the two world views, I believe, will be the secularization of the human epic and of religion itself.'
Wilson obviously has a right to his opinions, and many will agree with him for their own good reasons, but what is most interesting about his thoughtful attempt to posit a more universal basis for human ethics is that it is based on classical assumptions about the character of both physical and biological reality. While Wilson does not argue that human behaviour is genetically determined in the strict sense, he does allege a causal linkage between genes and behaviour that largely conditions this behaviour, and he appears to be a firm believer in the classical assumption that reductionism can uncover the lawful essences that govern the physical aspects of reality, including those associated with the alleged ‘epigenetic rules of moral reasoning.'
Once again, in Wilson's view there is apparently nothing that cannot be reduced to scientific understanding or fully disclosed in scientific terms, and his hope for the future of humanity is that the triumph of scientific thought and method will allow us to achieve the Enlightenment ideal of disclosing the lawful regularities that govern or regulate all aspects of human experience. Hence science will uncover the ‘bedrock of moral and religious sentiment,' and the entire human epic will be mapped in the secular space of scientific formalism. The intent here is not to denigrate Wilson's attempt to posit a more universal basis for the human condition, but rather to demonstrate that any attempt to understand or improve human behaviour based on appeals to outmoded classical assumptions is unrealistic. If the human mind did, in fact, evolve in something like deterministic fashion in gene-culture evolution, and if there were, in fact, innate mechanisms in mind that are both lawful and benevolent, Wilson's program for uncovering these mechanisms could have merit. But for all the reasons already given, classical determinism cannot explain the human condition, and Darwinian evolution must be modified to accommodate the complementary relationship between the cultural and biological principles that govern human evolution.
Equally important, the classical assumption that the only privileged or valid knowledge is scientific is one of the primary sources of the stark division between the two cultures of humanists and scientists-engineers. In this view, Wilson is quite correct in assuming that a timely end to the two-culture war and a renewed dialogue between members of these cultures are now critically important to human survival. It is also clear, however, that dreams of reason based on the classical paradigm will only serve to perpetuate the two-culture war. Since these dreams are also remnants of an old scientific world view that no longer applies, in theory or in fact, to the actual character of physical reality, they will probably serve only to frustrate the solution of real-world problems.
There is, however, a renewed basis for dialogue between the two cultures, and it is quite different from that described by Wilson. Since classical epistemology has been displaced, or is in the process of being displaced, by the new epistemology of science, the truths of science can no longer be viewed as transcendent and absolute in the classical sense. The universe more closely resembles a giant organism than a giant machine, and it displays emergent properties, in both physics and biology, that serve to perpetuate the existence of the whole and that cannot be explained in terms of unrestricted determinism, simple causality, first causes, linear movements, and initial conditions. Perhaps the first and most important precondition for renewed dialogue between the two cultures is the awareness, as Einstein put it, that a human being is a ‘part of the whole'. It is this shared awareness that allows us to free ourselves of the ‘optical illusion' of our present conception of self as a ‘part limited in space and time', and to widen ‘our circle of compassion to embrace all living creatures and the whole of nature in its beauty'. One cannot, of course, merely reason oneself into an acceptance of this view; what is also required is the capacity for what Einstein termed ‘cosmic religious feeling.' Perhaps this capacity amounts to the ability to experience, in moments of self-realization, the sense that our existence makes an essential difference to the existence of the universe as a whole.
Those who have this capacity will, we hope, be able to communicate their enhanced scientific understanding of the relations between part, which is the self, and whole, which is the universe, in ordinary language with enormous emotional appeal. The task that lies before the poets of this new reality has been nicely described by Jonas Salk: ‘Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect "reality". By using the processes of Nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing "reality" as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth as comprehensive guides to living. In this way, man's imagination and intellect play vital roles in his survival and further evolution.'
It is time, it would seem, for the religious imagination and the religious experience to engage the complementary truths of science in filling that silence with meaning. This does not mean, least of all, that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require any ontology, and is in no way diminished by the lack of one. One is free to recognize a basis for dialogue between science and religion for the same reason that one is free to deny that this basis exists: there is nothing in our current scientific world view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in some ontology remains what it has always been, a question, and the physical universe on the most basic level remains what it has always been, a riddle. The ultimate answer to the question, and the ultimate meaning of the riddle, are, and probably always will be, a matter of personal choice and conviction.
The present time is clearly a time of major paradigm shift, but consider the last great paradigm shift, the one that resulted in the Newtonian framework. That shift was profoundly problematic for the human spirit: it led to the conviction that we are strangers, freaks of nature, conscious beings in a universe that is almost entirely unconscious, and that, since the universe is strictly deterministic, even the free will we feel in regard to the movements of our bodies is an illusion. Yet going through the acceptance of such a paradigm was probably necessary for the Western mind.
The overwhelming success of Newtonian physics led most scientists and most philosophers of the Enlightenment to rely on it exclusively. As far as the quest for knowledge about reality was concerned, they regarded all other modes of expressing human experience, such as accounts of numinous experience, poetry, art, and so on, as irrelevant. This reliance on science as the only way to the truth about the universe is clearly obsolete. Science has to give up the illusion of its self-sufficiency, and of the self-sufficiency of human reason, and unite with other modes of knowing, in particular with contemplation, to help each of us move to higher levels of being and toward the Experience of Oneness.
If this is the direction of the emerging world-view, then the paradigm shift we are presently going through will prove to be nourishing to the human spirit and in correspondence with its deepest conscious or unconscious yearning: the yearning to emerge out of Plato's shadows and into the light of luminosity.
The big bang theory seeks to explain what happened at or soon after the beginning of the universe. Scientists can now model the universe back to 10^-43 seconds after the big bang. For the time before that moment, the classical theory of gravity is no longer adequate. Scientists are searching for a theory that merges gravity (as explained by Einstein's general theory of relativity) and quantum mechanics, but have not found one yet. Many scientists hope that string theory, also known as M-theory, will tie together gravity and quantum mechanics and help scientists explore further back in time.
Because scientists cannot look back in time beyond that early epoch, the actual big bang is hidden from them. There is no way at present to detect the origin of the universe. Further, the big bang theory does not explain what existed before the big bang. Perhaps time itself began at the big bang, so that it makes no sense to discuss what happened ‘before' the big bang. According to the big bang theory, the universe expanded rapidly in its first microseconds. A single force existed at the beginning of the universe, and as the universe expanded and cooled, this force separated into those we know today: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. A theory called the electroweak theory now provides a unified explanation of electromagnetism and the weak nuclear force. Physicists are now searching for a grand unification theory that also incorporates the strong nuclear force. String theory seeks to incorporate the force of gravity with the other three forces, providing a theory of everything (TOE).
One widely accepted version of big bang theory includes the idea of inflation. In this model, the universe expanded much more rapidly at first, to about 10^50 times its original size in the first 10^-32 second, then slowed its expansion. The theory was advanced in the 1980s by American cosmologist Alan Guth and elaborated upon by American astronomer Paul Steinhardt, Russian-American scientist Andrei Linde, and British astronomer Andreas Albrecht. The inflationary universe theory solves a number of problems of cosmology. For example, it explains why the universe now appears close to the type of flat space described by the laws of Euclidean geometry: we see only a tiny region of the original universe, similar to the way we do not notice the curvature of the earth because we see only a small part of it. The inflationary universe also explains why the universe appears so homogeneous. If the universe we observe was inflated from some small, original region, it is not surprising that it appears uniform.
Once the expansion of the initial inflationary era ended, the universe continued to expand more slowly. The inflationary model predicts that the universe is on the boundary between being open and closed. If the universe is open, it will keep expanding forever. If the universe is closed, the expansion of the universe will eventually stop and the universe will begin contracting until it collapses. Whether the universe is open or closed depends on the density, or concentration of mass, in the universe. If the universe is dense enough, it is closed.
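The dividing line between open and closed is the critical density, which general relativity gives as rho_c = 3 H0^2 / (8 pi G). As a rough sketch, assuming the Hubble constant of 71 km/s/Mpc quoted later in this piece, the number works out to only a few hydrogen atoms' worth of mass per cubic metre:

```python
import math

# Critical density separating an open universe from a closed one:
# rho_c = 3 * H0^2 / (8 * pi * G)
G = 6.674e-11              # Newton's constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22        # metres in one megaparsec
H0 = 71.0 * 1000 / MPC_IN_M  # 71 km/s/Mpc converted to 1/s (assumed value)

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"{rho_c:.2e} kg/m^3")  # on the order of 1e-26 kg/m^3
```

If the actual average density exceeds this value the universe is closed; if it falls short, the universe is open and expands forever.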
The theory is based on the mathematical equations, known as the field equations, of the general theory of relativity set forth in 1915 by Albert Einstein. In 1922 Russian physicist Alexander Friedmann provided a set of solutions to the field equations. These solutions have served as the framework for much of the current theoretical work on the big bang theory. American astronomer Edwin Hubble provided some of the greatest supporting evidence for the theory with his 1929 discovery that the light of distant galaxies was universally shifted toward the red end of the spectrum. Once ‘tired light' theories-that light slowly loses energy naturally, becoming more red over time-were dismissed, this shift proved that the galaxies were moving away from each other. Hubble found that galaxies farther away were moving away proportionally faster, showing that the universe is expanding uniformly. However, the universe's initial state was still unknown.
In the 1940s Russian-American physicist George Gamow worked out a theory that fit with Friedmann's solutions, in which the universe expanded from a hot, dense state. In 1950 British astronomer Fred Hoyle, in support of his own opposing steady-state theory, referred to Gamow's theory as a mere ‘big bang,' but the name stuck.
The overall framework of the big bang theory came out of solutions to Einstein's general relativity field equations and remains unchanged, but various details of the theory are still being modified today. Einstein himself initially believed that the universe was static. When his equations seemed to imply that the universe was expanding or contracting, Einstein added a constant term to cancel out the expansion or contraction of the universe. When the expansion of the universe was later discovered, Einstein stated that introducing this ‘cosmological constant' had been a mistake.
After Einstein's work of 1917, several scientists, including the Abbé Georges Lemaître in Belgium, Willem de Sitter in Holland, and Alexander Friedmann in Russia, succeeded in finding solutions to Einstein's field equations. The universes described by the different solutions varied. De Sitter's model had no matter in it; this is actually not a bad approximation, since the average density of the universe is extremely low. Lemaître's universe expanded from a ‘primeval atom.' Friedmann's universe also expanded from a very dense clump of matter, but did not involve the cosmological constant. These models explained how the universe behaved shortly after its creation, but there was still no satisfactory explanation for the beginning of the universe.
In the 1940s George Gamow was joined by his students Ralph Alpher and Robert Herman in working out details of Friedmann's solutions to Einstein's theory. They expanded on Gamow's idea that the universe expanded from a primordial state of matter called ylem, consisting of protons, neutrons, and electrons in a sea of radiation. They theorized that the universe was very hot at the time of the big bang (the point at which the universe explosively expanded from its primordial state), since elements heavier than hydrogen can be formed only at high temperature. Alpher and Herman predicted that radiation from the big bang should still exist. Cosmic background radiation roughly corresponding to the temperature predicted by Gamow's team was detected in the 1960s, further supporting the big bang theory, though by then the work of Alpher, Herman, and Gamow had been largely forgotten.
The universe cooled as it expanded. After about one second, protons formed. In the following few minutes-often referred to as the ‘first three minutes'-combinations of protons and neutrons formed the isotope of hydrogen known as deuterium and some of the other light elements, principally helium, and some lithium, beryllium, and boron. The study of the distribution of deuterium, helium, and the other light elements is now a major field of research. The uniformity of the helium abundance around the universe supports the big bang theory and the abundance of deuterium can be used to estimate the density of matter in the universe.
From about 380,000 to about one million years after the big bang, the universe cooled to about 3000°C (about 5000°F) and protons and electrons combined to form hydrogen atoms. Hydrogen atoms can only absorb and emit specific colours, or wavelengths, of light. The formation of atoms allowed many other wavelengths of light, wavelengths that had been interfering with the free electrons, to travel much farther than before. This change set free the radiation that we can detect today. After billions of years of cooling, this cosmic background radiation is at about 3 K (-270°C/-454°F). The cosmic background radiation was first detected and identified in 1965 by American astrophysicists Arno Penzias and Robert Wilson.
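The two temperatures above imply how much the radiation has been stretched since it was set free. Treating the figures as rough values, the background temperature falls in proportion to the expansion, T_today = T_emitted / (1 + z), so a drop from roughly 3000 K to roughly 3 K corresponds to a redshift z of about a thousand:

```python
# Rough values from the text; the scaling T proportional to 1/(1+z) is the
# standard relation between background temperature and redshift.
T_emitted = 3000.0   # K, temperature when atoms formed and radiation was freed
T_today = 3.0        # K, present-day cosmic background temperature

z = T_emitted / T_today - 1   # redshift accumulated since release
print(z)  # 999.0, i.e. wavelengths stretched about a thousandfold
```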
The Cosmic Background Explorer (COBE) spacecraft, a project of the National Aeronautics and Space Administration (NASA), mapped the cosmic background radiation between 1989 and 1993. It verified that the distribution of intensity of the background radiation precisely matched that of matter that emits radiation because of its temperature, as predicted by the big bang theory. It also showed that the cosmic background radiation is not perfectly uniform but varies slightly. These variations are thought to be the seeds from which galaxies and other structures in the universe grew.
Evidence suggests that the matter that scientists detect in the universe is only a small fraction of all the matter that exists. For example, observations of the speeds at which individual galaxies move within clusters of galaxies show that a great deal of unseen matter must exist to exert sufficient gravitational force to keep the clusters from flying apart. Cosmologists now think that much of the universe is dark matter: matter that has gravity but does not give off radiation that we can see or otherwise detect. One kind of dark matter theorized by scientists is cold dark matter, with slowly moving (cold) massive particles. No such particles have yet been detected, though astronomers have coined fanciful names for them, such as Weakly Interacting Massive Particles (WIMPs). Other cold dark matter could be non-radiating stars or planets, known as MACHOs (Massive Compact Halo Objects).
An alternative dark-matter model involves hot dark matter, where hot implies that the particles are moving very fast. Neutrinos, fundamental particles that travel at nearly the speed of light, are the prime example of hot dark matter. However, scientists think that the mass of a neutrino is so low that neutrinos can account for only a small portion of dark matter. If the inflationary version of big bang theory is correct, then the amount of dark matter, plus whatever else might exist, is just enough to bring the universe to the boundary between open and closed.
Scientists develop theoretical models to show how the universe's structures, such as clusters of galaxies, have formed. Their models invoke hot dark matter, cold dark matter, or a mixture of the two. This unseen matter would have provided the gravitational force needed to bring large structures such as clusters of galaxies together. The theories that include dark matter match the observations, although there is no consensus on the type or types of dark matter that must be included. Supercomputers are important for making such models.
Astronomers continue to make new observations that are also interpreted within the framework of the big bang theory. No major problems with the big bang theory have been found, but scientists constantly adjust the theory to match the observed universe. In particular, a ‘standard model' of the big bang has been established by results from NASA's Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001. The probe studied the anisotropies, or ripples, in the temperature of the cosmic background radiation at a higher resolution than COBE could achieve. These ripples suggest that some regions of the young universe were hotter or cooler, by a factor of about 1/1000, than adjacent regions. WMAP's observations suggest that the rate of expansion of the universe, called Hubble's constant, is about 71 km/s/Mpc (kilometres per second per megaparsec, where a parsec is about 3.26 light-years). In other words, the distance between any two objects in space that are separated by a megaparsec increases by about 71 km every second, in addition to any other motion they may have relative to one another. In combination with previously existing observations, this rate of expansion tells cosmologists that the universe is ‘flat,' though flatness here does not refer to the actual shape of the universe but rather means that the geometric laws that apply to the universe match those of a flat plane.
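A quick consistency check on the quoted value of Hubble's constant: its reciprocal, the ‘Hubble time,' sets the characteristic timescale of the expansion and should land near the accepted age of the universe. The unit conversions below are standard; the sketch assumes H0 = 71 km/s/Mpc as given above.

```python
# Convert H0 = 71 km/s/Mpc to inverse seconds, then invert to get a timescale.
KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

H0 = 71.0 / KM_PER_MPC       # 1/s
hubble_time_gyr = 1.0 / H0 / SECONDS_PER_GYR
print(round(hubble_time_gyr, 1))  # roughly 13.8 billion years
```

That the naive timescale 1/H0 lands so close to the universe's age is partly a coincidence of the dark-energy-dominated expansion history, but it shows the quoted H0 is the right order of magnitude.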
To be flat, the universe must contain a certain amount of matter and energy, known as the critical density. The distribution of sizes of the ripples detected by WMAP shows that ordinary matter-like that making up objects and living things on Earth-accounts for only 4.4 percent of the critical density. Dark matter makes up an additional 23 percent. Astoundingly, the remaining 73 percent of the universe is composed of something else entirely-a substance so mysterious that nobody knows much about it. Called ‘dark energy,' this substance provides the anti-gravity-like negative pressure that causes the universe's expansion to accelerate rather than slow. This ‘accelerating universe' was detected independently by two competing groups of astronomers in the last years of the 20th century. The ideas of an accelerating universe and the existence of dark energy have caused astronomers to modify previous ideas of the big bang universe substantially.
WMAP's results also show that cosmic background radiation was set free about 380,000 years after the big bang, later than was previously thought, and that the first stars formed only about 200 million years after the big bang, earlier than anticipated. Further refinements to the big bang theory are expected from WMAP, which continues to collect data. An even more precise mission to study the beginnings of the universe, the European Space Agency's Planck spacecraft, is scheduled to be launched in 2007.
In the 1950s cosmologists (scientists who study the evolution of the universe) were considering two theories for the origin of the universe. The first, the currently accepted big bang theory, held that the universe was created from one enormous explosion. The second, known as the steady state theory, suggested that the universe had always existed. Russian-American theoretical physicist George Gamow advanced the big bang theory and its underpinnings in a 1956 Scientific American article. Gamow's estimate of a five-billion-year-old universe is no longer considered accurate; the universe is now thought to be much older.
Most cosmologists believe that the universe began as a dense kernel of matter and radiant energy that started to expand about five billion years ago and later coalesced into galaxies.
Cosmology is the study of the general nature of the universe in space and in time-what it is now, what it was in the past and what it is likely to be in the future. Since the only forces at work between the galaxies that make up the material universe are the forces of gravity, the cosmological problem is closely connected with the theory of gravitation, in particular with its modern version as embodied in Albert Einstein's general theory of relativity. In the frame of this theory the properties of space, time and gravitation are merged into one harmonious and elegant picture.
The basic cosmological notion of general relativity grew out of the work of great mathematicians of the 19th century. In the middle of the last century two inquisitive mathematical minds-a Russian named Nikolai Lobachevski and a Hungarian named János Bolyai-discovered that the classical geometry of Euclid was not the only possible geometry: in fact, they succeeded in constructing a geometry that was fully as logical and self-consistent as the Euclidean. They began by overthrowing Euclid's axiom about parallel lines: Namely, that only one parallel to a given straight line can be drawn through a point not on that line. Lobachevski and Bolyai both conceived a system of geometry in which a great number of lines parallel to a given line could be drawn through a point outside the line.
To illustrate the differences between Euclidean geometry and their non-Euclidean system, it is simplest to consider just two dimensions-that is, the geometry of surfaces. In our schoolbooks this is known as ‘plane geometry,' because the Euclidean surface is a flat surface. Suppose, now, we examine the properties of a two-dimensional geometry constructed not on a plane surface but on a curved surface. For the system of Lobachevski and Bolyai we must take the curvature of the surface to be ‘negative,' which means that the curvature is not like that of the surface of a sphere but like that of a saddle. Now if we are to draw parallel lines or any figure (e.g., a triangle) on this surface, we must decide first of all how we will define a ‘straight line,' equivalent to the straight line of plane geometry. The most reasonable definition of a straight line in Euclidean geometry is that it is the path of the shortest distance between two points. On a curved surface the line, so defined, becomes a curved line known as a ‘geodesic.'
Considering a surface curved like a saddle, we find that, given a ‘straight' line or geodesic, we can draw through a point outside that line a great many geodesics that will never intersect the given line, no matter how far they are extended. They are therefore parallel to it, by the definition of parallel.
As a consequence of the overthrow of Euclid's axiom on parallel lines, many of his theorems are demolished in the new geometry. For example, the Euclidean theorem that the sum of the three angles of a triangle is 180 degrees no longer holds on a curved surface. On the saddle-shaped surface the angles of a triangle formed by three geodesics always add up to less than 180 degrees, the actual sum depending on the size of the triangle. Further, a circle on the saddle surface does not have the same properties as a circle in plane geometry. On a flat surface the circumference of a circle increases in proportion to the increase in diameter, and the area of a circle increases in proportion to the square of the increase in diameter. However, on a saddle surface both the circumference and the area of a circle increase at faster rates than on a flat surface with increasing diameters.
After Lobachevski and Bolyai, the German mathematician Bernhard Riemann constructed another non-Euclidean geometry whose two-dimensional model is a surface of positive, rather than negative, curvature-that is, the surface of a sphere. In this case a geodesic line is simply a great circle around the sphere or a segment of such a circle, and since any two great circles must intersect at two points (the poles), there are no parallel lines at all in this geometry. Again the sum of the three angles of a triangle is not 180 degrees: in this case it is always more than 180. The circumference of a circle now increases at a rate slower than in proportion to its increase in diameter, and its area increases more slowly than the square of the diameter.
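The slower growth of a circle's circumference in Riemann's geometry can be checked numerically: on a sphere of radius R, a circle of geodesic radius r has circumference 2πR sin(r/R), which is always less than the flat-space value 2πr. A small sketch, where the closed-form formula is a standard result rather than something stated in the article:

```python
import math

def circumference_flat(r):
    # Circumference of a circle of radius r in the Euclidean plane.
    return 2 * math.pi * r

def circumference_sphere(r, R=1.0):
    # On a sphere of radius R, a circle of geodesic radius r has
    # circumference 2*pi*R*sin(r/R), always less than 2*pi*r for r > 0.
    return 2 * math.pi * R * math.sin(r / R)

r = 0.5
print(circumference_flat(r))    # ~3.14
print(circumference_sphere(r))  # ~3.01, smaller, as Riemann's geometry requires
```

For very small circles the two values agree, which is why curvature only reveals itself on a large enough scale.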
Now all this is not merely an exercise in abstract reasoning but bears directly on the geometry of the universe in which we live. Is the space of our universe ‘flat,' as Euclid assumed, or is it curved negatively (per Lobachevski and Bolyai) or curved positively (Riemann)? If we were two-dimensional creatures living in a two-dimensional universe, we could tell whether we were living on a flat or a curved surface by studying the properties of triangles and circles drawn on that surface. Similarly, as three-dimensional beings living in three-dimensional space we should be able, by studying geometrical properties of that space, to decide what the curvature of our space is. Riemann in fact developed mathematical formulas describing the properties of various kinds of curved space in three and more dimensions. In the early years of this century Einstein conceived the idea of the universe as a curved system in four dimensions, embodying time as the fourth dimension, and he proceeded to apply Riemann's formulas to test his idea.
Einstein showed that time can be considered a fourth coordinate supplementing the three coordinates of space. He connected space and time, thus establishing a ‘space-time continuum,' by means of the speed of light as a link between time and space dimensions. However, recognizing that space and time are physically different entities, he employed the imaginary number √−1, or i, to express the unit of time mathematically and make the time coordinate formally equivalent to the three coordinates of space.
In his special theory of relativity Einstein made the geometry of the time-space continuum strictly Euclidean, that is, flat. The great idea that he introduced later in his general theory was that gravitation, whose effects had been neglected in the special theory, must make it curved. He saw that the gravitational effect of the masses distributed in space and moving in time was equivalent to curvature of the four-dimensional space-time continuum. In place of the classical Newtonian statement that ‘the sun produces a field of force that impels the earth to deviate from straight-line motion and to move in a circle around the sun,' Einstein substituted a statement to the effect that ‘the presence of the sun causes a curvature of the space-time continuum in its neighbourhood.'
The motion of an object in the space-time continuum can be represented by a curve called the object's ‘world line.' . . . Einstein declared, in effect: ‘The world line of the earth is a geodesic in the curved four-dimensional space around the sun.' In other words, the . . . [earth's ‘world line'] . . . corresponds to the shortest four-dimensional distance between the position of the earth in January . . . and its position in October . . . Einstein's idea of the gravitational curvature of space-time was, of course, triumphantly affirmed by the discovery of perturbations in the motion of Mercury at its closest approach to the sun and of the deflection of light rays by the sun's gravitational field. Einstein next attempted to apply the idea to the universe as a whole. Does it have a general curvature, similar to the local curvature in the sun's gravitational field? He now had to consider not a single centre of gravitational force but countless focal points in a universe full of matter concentrated in galaxies whose distribution fluctuates considerably from region to region in space. However, in the large-scale view the galaxies are spread uniformly throughout space as far out as our biggest telescopes can see, and we can justifiably ‘smooth out' their matter to a general average (which comes to about one hydrogen atom per cubic metre). On this assumption the universe as a whole has a smooth general curvature.
But if the space of the universe is curved, what is the sign of this curvature? Is it positive, as in our two-dimensional analogy of the surface of a sphere, or is it negative, as in the case of a saddle surface? And since we cannot consider space alone, how is this space curvature related to time?
Analysing the pertinent mathematical equations, Einstein came to the conclusion that the curvature of space must be independent of time, i.e., that the universe as a whole must be unchanging (though it changes internally). However, he found to his surprise that there was no solution of the equations that would permit a static cosmos. To repair the situation, Einstein was forced to introduce an additional hypothesis that amounted to the assumption that a new kind of force was acting among the galaxies. This hypothetical force had to be independent of mass (being the same for an apple, the moon and the sun!) and had to gain in strength with increasing distance between the interacting objects (as no other forces ever do in physics).
Einstein's new force, called ‘cosmic repulsion,' allowed two mathematical models of a static universe. One solution, which was worked out by Einstein himself and became known as ‘Einstein's spherical universe,' gave the space of the cosmos a positive curvature. Like a sphere, this universe was closed and thus had a finite volume. The space coordinates in Einstein's spherical universe were curved in the same way as the latitude or longitude coordinates on the surface of the earth. However, the time axis of the space-time continuum ran quite straight, as in the good old classical physics. This means that no cosmic event would ever recur. The two-dimensional analogy of Einstein's space-time continuum is the surface of a cylinder, with the time axis running parallel to the axis of the cylinder and the space axis perpendicular to it.
The other static solution based on the mysterious repulsion forces was discovered by the Dutch mathematician Willem de Sitter. In his model of the universe both space and time were curved. Its geometry was similar to that of a globe, with longitude serving as the space coordinate and latitude as time.
Unhappily, astronomical observations proved incompatible with both Einstein's and de Sitter's static models of the universe, and they were soon abandoned.
In the year 1922 a major turning point came in the cosmological problem. A Russian mathematician, Alexander A. Friedman (from whom the author of this article learned his relativity), discovered an error in Einstein's proof for a static universe. In carrying out his proof Einstein had divided both sides of an equation by a quantity that, Friedman found, could become zero under certain circumstances. Since division by zero is not permitted in algebraic computations, the possibility of a nonstatic universe could not be excluded under the circumstances in question. Friedman showed that two nonstatic models were possible. One pictured the universe as expanding with time; the other, contracting.
Einstein quickly recognized the importance of this discovery. In the last edition of his book The Meaning of Relativity he wrote: ‘The mathematician Friedman found a way out of this dilemma. He showed that it is possible, according to the field equations, to have a finite density in the whole (three-dimensional) space, without enlarging these field equations ad hoc.' Einstein remarked to me many years ago that the cosmic repulsion idea was the biggest blunder he had made in his entire life.
Almost at the very moment that Friedman was discovering the possibility of an expanding universe by mathematical reasoning, Edwin P. Hubble at the Mount Wilson Observatory on the other side of the world found the first evidence of actual physical expansion through his telescope. He made a compilation of the distances of a number of far galaxies, whose light was shifted toward the red end of the spectrum, and it was soon found that the extent of the shift was in direct proportion to a galaxy's distance from us, as estimated by its faintness. Hubble and others interpreted the red-shift as the Doppler effect-the well-known phenomenon of lengthening of wavelengths from any radiating source that is moving rapidly away (a train whistle, a source of light or whatever). To date there has been no other reasonable explanation of the galaxies' red-shift. If the explanation is correct, it means that the galaxies are all moving away from one another with increasing velocity as they move farther apart.
Thus, Friedman and Hubble laid the foundation for the theory of the expanding universe. The theory was soon developed further by a Belgian theoretical astronomer, Georges Lemaître. He proposed that our universe started from a highly compressed and extremely hot state that he called the ‘primeval atom.' (Modern physicists would prefer the term ‘primeval nucleus.') As this matter expanded, it gradually thinned out, cooled down and reaggregated in stars and galaxies, giving rise to the highly complex structure of the universe as we know it today.
  Until a few years ago the theory of the expanding universe lay under the cloud of a very serious contradiction. The measurements of the speed of flight of the galaxies and their distances from us indicated that the expansion had started about 1.8 billion years ago. On the other hand, measurements of the age of ancient rocks in the earth by the clock of radioactivity (i.e., the decay of uranium to lead) showed that some of the rocks were at least three billion years old; more recent estimates based on other radioactive elements raise the age of the earth's crust to almost five billion years. Clearly a universe 1.8 billion years old could not contain five-billion-year-old rocks. Happily the contradiction has now been disposed of by Walter Baade's recent discovery that the distance yardstick (based on the periods of variable stars) was faulty and that the distances between galaxies are more than twice as great as they were thought to be. This change in distances raises the age of the universe to five billion years or more.
Friedman's solution of Einstein's cosmological equation, as mentioned, permits two kinds of universes. We can call one the ‘pulsating' universe. This model says that when the universe has reached a certain maximum permissible expansion, it will begin to contract; that it will shrink until its matter has been compressed to a certain maximum density, possibly that of atomic nuclear material, which is a hundred million million times denser than water; that it will then begin to expand again-and so on through the cycle ad infinitum. The other model is a ‘hyperbolic' one: it suggests that from an infinitely thin state an eternity ago the universe contracted until it reached the maximum density, from which it rebounded to an unlimited expansion that will go on indefinitely in the future.
The question of whether our universe is ‘pulsating' or ‘hyperbolic' should be decidable from the present rate of its expansion. The situation is analogous to the case of a rocket shot from the surface of the earth. If the velocity of the rocket is less than seven miles per second-the ‘escape velocity'-the rocket will climb only to a certain height and then fall back to the earth. (If it were completely elastic, it would bounce up again, etc., etc.) On the other hand, a rocket shot with a velocity of more than seven miles per second will escape from the earth's gravitational field and disappear in space. The case of the receding system of galaxies is very similar to that of an escape rocket, except that instead of just two interacting bodies (the rocket and the earth) we have an unlimited number of them escaping from one another. We find that the galaxies are fleeing from one another at seven times the velocity necessary for mutual escape.
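The seven-miles-per-second figure in the rocket analogy follows from the standard escape-velocity formula v = √(2GM/r). A quick sketch using modern values for the earth; the constants are supplied here, not taken from the article:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the earth, kg
R_EARTH = 6.371e6   # radius of the earth, m

def escape_velocity(mass_kg, radius_m):
    """Minimum launch speed to escape a body's gravity: sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(v / 1000)     # ~11.2 km/s
print(v / 1609.34)  # ~7 miles per second, the figure in the text
```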
Thus we may conclude that our universe corresponds to the ‘hyperbolic' model, so that its present expansion will never stop. But we must make one reservation. The estimate of the necessary escape velocity is based on the assumption that practically all the mass of the universe is concentrated in galaxies. If intergalactic space contained matter whose total mass was more than seven times that in the galaxies, we would have to reverse our conclusion and decide that the universe is pulsating. There has been no indication so far, however, that any matter exists in intergalactic space. It could have escaped detection only if it were in the form of pure hydrogen gas, without other gases or dust.
Is the universe finite or infinite? This resolves itself into the question: Is the curvature of space positive or negative-closed like that of a sphere, or open like that of a saddle? We can look for the answer by studying the geometrical properties of its three-dimensional space, just as we examined the properties of figures on two-dimensional surfaces. The most convenient property to investigate astronomically is the relation between the volume of a sphere and its radius. We saw that, in the two-dimensional case, the area of a circle increases with increasing radius at a faster rate on a negatively curved surface than on a Euclidean or flat surface, and that on a positively curved surface the relative rate of increase is slower. Similarly the increase of volume is faster in negatively curved space, slower in positively curved space. In Euclidean space the volume of a sphere would increase in proportion to the cube, or third power, of the increase in the radius. In negatively curved space the volume would increase faster than this; in positively curved space, slower. Thus if we look into space and find that the volume of successively larger spheres, as measured by a count of the galaxies within them, increases faster than the cube of the distance to the limit of the sphere (the radius), we can conclude that the space of our universe has negative curvature, and therefore is open and infinite. Similarly, if the number of galaxies increases at a rate slower than the cube of the distance, we live in a universe of positive curvature-closed and finite.
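The volume-versus-radius test described here can be illustrated with the standard closed-form volumes of a ball in flat, hyperbolic, and spherical three-dimensional space (unit curvature assumed; these formulas are standard results supplied here, not taken from the article):

```python
import math

def vol_flat(r):
    # Euclidean ball: volume grows as the cube of the radius.
    return 4 / 3 * math.pi * r**3

def vol_hyperbolic(r):
    # Ball of geodesic radius r in hyperbolic 3-space of curvature -1.
    return math.pi * (math.sinh(2 * r) - 2 * r)

def vol_spherical(r):
    # Ball of geodesic radius r in a 3-sphere of curvature +1 (r < pi).
    return math.pi * (2 * r - math.sin(2 * r))

# Negative curvature: volume grows faster than r^3; positive: slower.
r = 1.0
assert vol_hyperbolic(r) > vol_flat(r) > vol_spherical(r)
```

For small radii all three agree, so only counts of very distant galaxies can distinguish the cases.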
Following this idea, Hubble undertook to study the increase in number of galaxies with distance. He estimated the distances of the remote galaxies by their relative faintness: galaxies vary considerably in intrinsic brightness, but over a very large number of galaxies these variations are expected to average out. Hubble's calculations produced the conclusion that the universe is a closed system-a small universe only a few billion light-years in radius!
We know now that the scale he was using was wrong: with the new yardstick the universe would be more than twice as large as he calculated. Still, there is a more fundamental doubt about his result. The whole method is based on the assumption that the intrinsic brightness of a galaxy remains constant. What if it changes with time? We are seeing the light of the distant galaxies as it was emitted at widely different times in the past-500 million, a billion, two billion years ago. If the stars in the galaxies are burning out, the galaxies must dim as they grow older. A galaxy two billion light-years away cannot be put on the same distance scale with a galaxy 500 million light-years away unless we take into account the fact that we are seeing the nearer galaxy at an older, and less bright, age. The remote galaxy is farther away than a mere comparison of the luminosity of the two would suggest.
When a correction is made for the assumed decline in brightness with age, the more distant galaxies are spread out to farther distances than Hubble assumed. In fact, the calculations of volume are changed so drastically that we may have to reverse the conclusion about the curvature of space. We are not sure, because we do not yet know enough about the evolution of galaxies. However, if we find that galaxies wane in intrinsic brightness by only a few per cent in a billion years, we will have to conclude that space is curved negatively and the universe is infinite.
There is, moreover, another line of reasoning which supports the side of infinity. Our universe seems to be hyperbolic and ever-expanding. Mathematical solutions of the fundamental cosmological equations show that such a universe is open and infinite.
We have reviewed the questions that dominated the thinking of cosmologists during the first half of this century: the conception of a four-dimensional space-time continuum, of curved space, of an expanding universe and of a cosmos that is either finite or infinite. Now we must consider the major present issue in cosmology: Is the universe in truth evolving, or is it in a steady state of equilibrium that has always existed and will go on through eternity? Most cosmologists take the evolutionary view. However, in 1951 a group at the University of Cambridge, whose chief representative has been Fred Hoyle, advanced the steady-state idea. Essentially their theory is that the universe is infinite in space and time, that it has neither a beginning nor an end, that the density of its matter remains constant, that new matter is steadily being created in space at a rate that exactly compensates for the thinning of matter by expansion, that as a consequence new galaxies are continually being born, and that the galaxies of the universe therefore range in age from mere youngsters to veterans of 5, 10, 20 and more billions of years. In my opinion this theory must be considered very questionable because of the simple fact (apart from other reasons) that the galaxies in our neighbourhood all seem to be of the same age as our own Milky Way. Still, the issue is many-sided and fundamental, and can be settled only by extended study of the universe as far as we can observe it. What follows is an attempt to sum up the evolutionary theory.
We assume that the universe started from a very dense state of matter. In the early stages of its expansion, radiant energy was dominant over the mass of matter. We can measure energy and matter on a common scale by means of the well-known equation E = mc², which says that the energy equivalent of matter is the mass of the matter multiplied by the square of the velocity of light. Conversely, energy can be translated into mass by dividing the energy quantity by c². Thus, we can speak of the ‘mass density' of energy. Now at the beginning the mass density of the radiant energy was incomparably greater than the density of the matter in the universe. However, in an expanding system the density of radiant energy decreases faster than does the density of matter. The former thins out as the fourth power of the distance of expansion: as the radius of the system doubles, the density of radiant energy drops to one sixteenth. The density of matter declines as the third power; a doubling of the radius means an eightfold increase in volume, and hence an eightfold decrease in density.
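The two scaling laws in this paragraph can be written out directly: matter density falls as the cube of the expansion factor, radiant-energy density as the fourth power. A minimal sketch:

```python
# Density scaling with the expansion factor a (the radius of the system):
#   matter    ~ a**-3  (dilution by volume alone)
#   radiation ~ a**-4  (volume dilution plus the stretching of each wave)
def matter_density(a, rho0=1.0):
    return rho0 * a**-3

def radiation_density(a, rho0=1.0):
    return rho0 * a**-4

# Doubling the radius: matter drops 8-fold, radiation 16-fold, as in the text.
print(matter_density(2))     # 0.125
print(radiation_density(2))  # 0.0625
```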
Assuming that the universe at the beginning was under absolute rule by radiant energy, we can calculate that the temperature of the universe was 250 million degrees when it was one hour old, dropped to 6,000 degrees (the present temperature of our sun's surface) when it was 200,000 years old and had fallen to about 100 degrees below the freezing point of water when the universe reached its 250-millionth birthday.
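These temperatures are consistent with the standard radiation-era scaling T ∝ t^(-1/2). A sketch anchored to the article's figure of 250 million degrees at one hour roughly reproduces the other two numbers; the scaling law and year length are assumptions supplied here, not stated in the article:

```python
HOUR = 3600.0
YEAR = 3.156e7  # seconds in a year (approximate)

def temperature(t_seconds, T1=2.5e8, t1=HOUR):
    # Radiation-era scaling T ~ t**-0.5, anchored at 250 million degrees
    # when the universe was one hour old (the article's figure).
    return T1 * (t1 / t_seconds) ** 0.5

print(temperature(200_000 * YEAR))  # ~6,000 degrees, at 200,000 years
print(temperature(2.5e8 * YEAR))    # ~170 degrees absolute, i.e. roughly
                                    # 100 degrees below the freezing point
```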
This particular birthday was a crucial one in the life of the universe. It was the point at which the density of ordinary matter became greater than the mass density of radiant energy, because of the more rapid fall of the latter. The switch from the reign of radiation to the reign of matter profoundly changed matter's behaviour. During the eons of its subjugation to the will of radiant energy (i.e., light), it must have been spread uniformly through space in the form of thin gas. Nevertheless, as soon as matter became gravitationally more important than the radiant energy, it began to acquire a more interesting character. James Jeans, in his classic studies of the physics of such a situation, proved half a century ago that a gravitating gas filling a very large volume is bound to break up into individual ‘gas balls,' the size of which is determined by the density and the temperature of the gas. Thus in the year 250,000,000 A.B.E. (after the beginning of expansion), when matter was freed from the dictatorship of radiant energy, the gas broke up into giant gas clouds, slowly drifting apart as the universe continued to expand. Applying Jeans' mathematical formula for the process to the gas filling the universe at that time, I have found that these primordial balls of gas would have had just about the mass that the galaxies of stars possess today. They were then only ‘proto galaxies'-cold, dark and chaotic. Nonetheless, their gas soon condensed into stars and formed the galaxies as we see them now.
A central question in this picture of the evolutionary universe is the problem of accounting for the formation of the varied kinds of matter composing it, i.e., the chemical elements . . . My belief is that at the start, matter was composed simply of protons, neutrons and electrons. After five minutes the universe must have cooled enough to permit the aggregation of protons and neutrons into larger units, from deuterons (one neutron and one proton) up to the heaviest elements. This process must have ended after about 30 minutes, for by that time the temperature of the expanding universe must have dropped below the threshold of thermonuclear reactions among light elements, and the neutrons must have been used up in element-building or been converted to protons.
To many a reader the statement that the present chemical constitution of our universe was decided in half an hour five billion years ago will sound nonsensical. Yet consider a spot of ground on the atomic proving ground in Nevada where an atomic bomb was exploded three years ago. Within one microsecond the nuclear reactions generated by the bomb produced a variety of fission products. Today, 100 million-million microseconds later, the site is still ‘hot' with the surviving fission products. The ratio of one microsecond to three years is the same as the ratio of half an hour to five billion years! If we can accept a time ratio of this order in the one case, why not in the other?
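The time ratio in this comparison is easy to verify:

```python
MICROSECOND = 1e-6
YEAR = 3.156e7   # seconds in a year (approximate)
HALF_HOUR = 1800.0

# One microsecond of bomb reactions compared with three years since the test:
bomb_ratio = MICROSECOND / (3 * YEAR)
# Half an hour of element-building compared with five billion years since:
cosmic_ratio = HALF_HOUR / (5e9 * YEAR)

# Both ratios come out near 1e-14 -- the same order of magnitude,
# which is the point of the comparison in the text.
print(bomb_ratio)    # ~1.06e-14
print(cosmic_ratio)  # ~1.14e-14
```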
The late Enrico Fermi and Anthony L. Turkevich at the Institute for Nuclear Studies of the University of Chicago undertook a detailed study of thermonuclear reactions such as must have taken place during the first half hour of the universe's expansion. They concluded that the reactions would have produced about equal amounts of hydrogen and helium, making up 99 per cent of the total material, and about 1 per cent of deuterium. We know that hydrogen and helium do in fact make up about 99 per cent of the matter of the universe. This leaves us with the problem of building the heavier elements. Some of them may have been built by the capture of neutrons. However, since the absence of any stable nucleus of atomic weight five makes it improbable that the heavier elements could have been produced in the first half hour in the abundances now observed, I would agree that the lion's share of the heavy elements may well have been formed later in the hot interiors of stars.
All the theories-of the origin, age, extent, composition and nature of the universe-are becoming ever more subject to test by new instruments and new techniques . . . But we must not forget that the estimate of distances of the galaxies is still founded on the debatable assumption that the brightness of galaxies does not change with time. If galaxies are constantly diminishing in brightness as they age, the calculations cannot be depended upon. Thus the question whether evolution is or is not taking place in the galaxies is of crucial importance at the present stage of our outlook on the universe.
After presenting his general theory of relativity in 1915,  physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.
Physicists had known since the early 19th century that light is propagated as a transverse wave (a wave in which the vibrations move in a direction perpendicular to the direction of the advancing wave front). They assumed, however, that the wave required some material medium for its transmission, so they postulated an extremely diffuse substance, called ether, as the unobservable medium. Maxwell's theory made such an assumption unnecessary, but the ether concept was not abandoned immediately, because it fit in with the Newtonian concept of an absolute space-time frame for the universe. A famous experiment conducted by the American physicist Albert Abraham Michelson and the American chemist Edward Williams Morley in the late 19th century served to dispel the ether concept and was important in the development of the theory of relativity. This work led to the realization that the speed of electromagnetic radiation in a vacuum is an invariant.
At the beginning of the 20th century, however, physicists found that the wave theory did not account for all the properties of radiation. In 1900 the German physicist Max Planck demonstrated that the emission and absorption of radiation occur in finite units of energy, known as quanta. In 1905, Albert Einstein was able to explain some puzzling experimental results on the external photoelectric effect by postulating that electromagnetic radiation can behave like a particle.
Other phenomena, which occur in the interaction between radiation and matter, can also be explained only by the quantum theory. Thus, modern physicists were forced to recognize that electromagnetic radiation can sometimes behave like a particle, and sometimes behave like a wave. The parallel concept-that matter also exhibits the same duality of having particle-like and wavelike characteristics-was developed in 1923 by the French physicist Louis Victor, Prince de Broglie.
Planck's constant is a fundamental physical constant, symbol h, first introduced in 1900 by the German physicist Max Planck. Until that year, light in all forms had been thought to consist of waves. Planck noticed certain deviations from the wave theory of light in the radiation emitted by so-called ‘black bodies', or perfect absorbers and emitters of radiation. He came to the conclusion that these radiations were emitted in discrete units of energy, called quanta. This conclusion was the first enunciation of the quantum theory. According to Planck, the energy of a quantum of light is equal to the frequency of the light multiplied by a constant. His original theory has since had abundant experimental verification, and the growth of the quantum theory has brought about a fundamental change in the physicist's concept of light and matter, both of which are now thought to combine the properties of waves and particles. Thus, Planck's constant has become as important to the investigation of particles of matter as to quanta of light, now called photons. The first successful measurement (1916) of Planck's constant was made by the American physicist Robert Millikan. The present accepted value of the constant is
h = 6.626 × 10⁻³⁴ joule-seconds in the metre-kilogram-second system.
A photon is a particle of light energy, or energy that is generated by moving electric charges. Energy generated by moving charges is called electromagnetic radiation. Visible light is one kind of electromagnetic radiation. Other kinds of radiation include radio waves, infrared waves, and X-rays. All such radiation sometimes behaves like a wave and sometimes behaves like a particle. Scientists use the concept of a photon to describe the effects of radiation when it behaves like a particle.
Most photons are invisible to humans. Humans only see photons with energy levels that fall within a certain range. We describe these visible photons as visible light. Invisible photons include radio and television signals, photons that heat food in microwave ovens, the ultraviolet light that causes sunburn, and the X-rays doctors use to view a person's bones.
The photon is an elementary particle, or a particle that cannot be split into anything smaller. It carries the electromagnetic force, one of the four fundamental forces of nature, between particles. The electromagnetic force occurs between charged particles or between magnetic materials and charged particles. Electrically charged particles attract or repel each other by exchanging photons back and forth.
Photons are particles with no electrical charge and no mass, but they do have energy and momentum, a property that allows photons to affect other particles when they collide with them. Photons travel at the speed of light, which is about 300,000 km/sec (about 186,000 mi/sec). Only objects without mass can travel at the speed of light. Objects with mass must travel at slower speeds, and nothing can travel at speeds faster than the speed of light.
The energy of a photon is equal to the product of a constant number called Planck's constant multiplied by the frequency, or number of vibrations per second, of the photon. Scientists write the equation for a photon's energy as E = hν, where h is Planck's constant and ν is the frequency. Photons with high frequencies, such as X-rays, carry more energy than do photons with low frequencies, such as radio waves. Photons that are visible to the human eye have energy levels around one electron volt (eV) and frequencies from 10¹⁴ to 10¹⁵ Hz (hertz, or cycles per second). The number 10¹⁴ is a 1 followed by 14 zeros. The frequency of visible photons corresponds to the colour of their light. Photons of violet light have the highest frequencies of visible light, while photons of red light have the lowest frequencies. Gamma rays, the highest-energy photons of all, have energies in the 1 GeV range (10⁹ eV) and frequencies higher than 10¹⁸ Hz. Gamma rays are only produced in special experimental devices called particle accelerators and in outer space.
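The relation E = hν can be checked with a short calculation. The sketch below uses rounded values of Planck's constant and the electron-volt conversion; the two example frequencies (green light and an FM radio signal) are illustrative choices, not figures from the text:

```python
# Photon energy E = h * frequency, converted to electron volts.
H = 6.626e-34   # Planck's constant, joule-seconds (rounded)
EV = 1.602e-19  # joules per electron volt (rounded)

def photon_energy_ev(frequency_hz):
    """Return the energy of a photon, in eV, for a given frequency in Hz."""
    return H * frequency_hz / EV

green = photon_energy_ev(5.0e14)  # green light, about 5 x 10^14 Hz
radio = photon_energy_ev(1.0e8)   # FM radio wave, about 10^8 Hz
```

As the article states, the visible photon comes out at roughly two electron volts, while the radio photon carries less than a millionth as much energy.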
Although momentum is usually considered a property of objects with mass, photons also have momentum. Momentum determines the amount of force, or pressure, that an object exerts when it hits a surface. In classical physics, or physics that deals with the behaviour of objects we encounter in everyday life, momentum is equal to the product of the mass of an object multiplied by its velocity (the combination of its speed and direction). While photons do not have mass, scientists have found that they exert extremely small amounts of pressure when they strike surfaces. Scientists have redefined momentum to include the force exerted by photons, called light pressure or radiation pressure.
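For a massless photon, momentum can be written as p = h/λ (equivalently E/c), which makes the tiny scale of radiation pressure concrete. A minimal sketch, using a rounded Planck's constant and an illustrative 500 nm wavelength:

```python
# Photon momentum p = h / wavelength (equivalent to E / c for a photon).
H = 6.626e-34  # Planck's constant, joule-seconds (rounded)

def photon_momentum(wavelength_m):
    """Return the momentum, in kg*m/s, of a photon of the given wavelength."""
    return H / wavelength_m

p = photon_momentum(500e-9)  # green light, wavelength about 500 nm
```

The result is on the order of 10⁻²⁷ kg·m/s, which is why the pressure light exerts on everyday surfaces is far too small to notice.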
Philosophers from as far back in history as the Greeks of the 5th century BC have thought about the nature of light. In the 1600's, scientists began to argue over whether light is made of particles or waves. In the 1860's, British physicist James Clerk Maxwell discovered electromagnetic waves, waves of electromagnetic energy that travel at the speed of light. He determined that light is made of these waves, and his theory seemed to settle the wave versus particle issue. His conclusion that light is made of waves is still valid. However, in 1900 German physicist Max Planck renewed the argument that light could also act like particles, and these particles became known as photons. He developed the idea of photons to explain why substances, when heated to higher and higher temperatures, would glow with light of different colours. The wave theory could not explain why the colours changed with temperature changes.
Most scientists did not pay attention to Planck's theory until 1905, when Albert Einstein used the idea of photons to explain an interaction he had studied called the photoelectric effect. In this interaction, light shining on the surface of a metal causes the metal to emit electrons. Electrons escape the metal by absorbing energy from the light. Einstein showed that light behaves as particles in this situation. If the light behaved like waves, each electron could absorb many light waves and gain ever more energy. He found, however, that a more intense beam of light, with more light waves, did not give each electron more energy. Instead, more light caused the metal to release more electrons, each of which had the same amount of energy. Each electron had to be absorbing a small piece of the light beam, or a particle of light, and all these pieces had the same amount of energy. A beam of light with a higher frequency contained pieces of light with more energy, so when electrons absorbed these particles, they too had more energy. This could only be explained using the photon view of radiation, in which each electron absorbs a single photon and gains enough energy to escape the metal.
Today scientists believe that light behaves both as a wave and as a particle. Scientists detect photons as discrete particles, and photons interact with matter as particles. However, light travels in the form of waves. Some experiments reveal the wave properties of light; for example, in diffraction, light spreads out from a small opening in waves, much like waves of water would behave. Other experiments, such as Einstein's study of the photoelectric effect, reveal light's particle properties.
Almost synonymous with quantum theory is the uncertainty principle of quantum mechanics, which states that it is impossible to specify simultaneously, with precision, both the position and the momentum of a particle such as an electron. Also called the indeterminacy principle, it further states that a more accurate determination of one quantity results in a less precise measurement of the other, and that the product of the two uncertainties is never less than Planck's constant, named after the German physicist Max Planck. Though of very small magnitude, the uncertainty results from the fundamental nature of the particles being observed. In quantum mechanics, probability calculations therefore replace the exact calculations of classical mechanics.
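In its modern form the principle is usually written Δx·Δp ≥ ħ/2, where ħ is the reduced Planck's constant h/(2π). A rough sketch of the trade-off, using a rounded value of ħ and an illustrative confinement of an electron to about one atomic diameter (10⁻¹⁰ m):

```python
# Heisenberg uncertainty: delta_x * delta_p >= hbar / 2, so confining a
# particle in position forces a minimum spread in its momentum.
HBAR = 1.055e-34  # reduced Planck's constant h / (2*pi), joule-seconds (rounded)

def min_momentum_uncertainty(delta_x_m):
    """Smallest possible momentum uncertainty (kg*m/s) for position spread delta_x."""
    return HBAR / (2.0 * delta_x_m)

dp = min_momentum_uncertainty(1e-10)  # electron localized to ~1 angstrom
```

Halving the position uncertainty doubles the minimum momentum uncertainty, which is why such exact classical trajectories give way to probabilities at atomic scales.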
Formulated in 1927 by the German physicist Werner Heisenberg, the uncertainty principle was of great significance in the development of quantum mechanics. The philosophical implications of indeterminacy created a strong trend of mysticism among scientists who interpreted the concept as a violation of the fundamental law of cause and effect. Other scientists, including Albert Einstein, believed that the uncertainty involved in observation in no way contradicted the existence of laws governing the behaviour of the particles or the ability of scientists to discover these laws.
By way of final summation, science is the systematic study of anything that can be examined, tested, and verified. The word science is derived from the Latin word scire, meaning ‘to know.' From its beginnings, science has developed into one of the greatest and most influential fields of human endeavour. Today different branches of science investigate almost everything that can be observed or detected, and science as a whole shapes the way we understand the universe, our planet, ourselves, and other living things.
Science develops through objective analysis, instead of through personal belief. Knowledge gained in science accumulates as time goes by, building on work carried out earlier. Some of this knowledge—such as our understanding of numbers-stretches back to the time of ancient civilizations, when scientific thought first began. Other scientific knowledge-such as our understanding of genes that cause cancer or of quarks (the smallest known building block of matter)-dates back less than 50 years. However, in all fields of science, old or new, researchers use the same systematic approach, known as the scientific method, to add to what is known.
During scientific investigations, scientists put together and compare new discoveries and existing knowledge. In most cases, new discoveries extend what is currently accepted, providing further evidence that existing ideas are correct. For example, in 1676 the English physicist Robert Hooke discovered that elastic objects, such as metal springs, stretch in proportion to the force that acts on them. Despite all the advances that have been made in physics since 1676, this simple law still holds true.
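Hooke's law says extension is proportional to force, usually written x = F/k for a spring of stiffness k. A minimal sketch (the stiffness value is a hypothetical example, not a figure from the text):

```python
# Hooke's law: the extension of an elastic object is proportional
# to the force applied, x = F / k.
def spring_extension(force_n, stiffness_n_per_m):
    """Return the extension, in metres, of a spring under a given force."""
    return force_n / stiffness_n_per_m

K = 200.0  # hypothetical spring stiffness, newtons per metre
x1 = spring_extension(10.0, K)  # 10 N load
x2 = spring_extension(20.0, K)  # doubling the force...
```

Doubling the force exactly doubles the extension, which is the proportionality Hooke observed.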
Scientists utilize existing knowledge in new scientific investigations to predict how things will behave. For example, a scientist who knows the exact dimensions of a lens can predict how the lens will focus a beam of light. In the same way, by knowing the exact makeup and properties of two chemicals, a researcher can predict what will happen when they combine. Sometimes scientific predictions go much further by describing objects or events that are not yet known. An outstanding instance occurred in 1869, when the Russian chemist Dmitry Mendeleyev drew up a periodic table of the elements arranged to illustrate patterns of recurring chemical and physical properties. Mendeleyev used this table to predict the existence and describe the properties of several elements unknown in his day, and when the elements were discovered several years later, his predictions proved to be correct.
In science, important advances can also be made when current ideas are shown to be wrong. A classic case of this occurred early in the 20th century, when the German geologist Alfred Wegener suggested that the continents were at one time connected, a theory known as continental drift. At the time, most geologists discounted Wegener's ideas, because the Earth's crust seemed to be fixed. Nonetheless, following the discovery of plate tectonics in the 1960s, in which scientists found that the Earth's crust is made of moving plates, continental drift became an important part of geology.
Through advances like these, scientific knowledge is constantly added to and refined. As a result, science gives us an ever more detailed insight into the way the world around us works.
For a large part of recorded history, science had little bearing on people's everyday lives. Scientific knowledge was gathered for its own sake, and it had few practical applications. However, with the dawn of the Industrial Revolution in the 18th century, this rapidly changed. Today, science has a profound effect on the way we live, largely through technology-the use of scientific knowledge for practical purposes.
Some forms of technology have become so well established that forgetting the great scientific achievements that they represent is easy. The refrigerator, for example, owes its existence to a discovery that liquids take in energy when they evaporate, a phenomenon known as latent heat. The principle of latent heat was first exploited in a practical way in 1876, and the refrigerator has played a major role in maintaining public health ever since. The first automobile, dating from the 1880's, made use of many advances in physics and engineering, including reliable ways of generating high-voltage sparks, while the first computers emerged in the 1940's from simultaneous advances in electronics and mathematics.
Other fields of science also play an important role in the things we use or consume every day. Research in food technology has created new ways of preserving and flavouring what we eat. Research in industrial chemistry has created a vast range of plastics and other synthetic materials, which have thousands of uses in the home and in industry. Synthetic materials are easily formed into complex shapes and can be used to make machine, electrical, and automotive parts, scientific and industrial instruments, decorative objects, containers, and many other items.
Alongside these achievements, science has also brought about technology that helps save human life. The kidney dialysis machine enables many people to survive kidney diseases that would once have proved fatal, and artificial valves allow sufferers of coronary heart disease to return to active living. Biochemical research is responsible for the antibiotics and vaccinations that protect us from infectious diseases, and for a wide range of other drugs used to combat specific health problems. As a result, the majority of people on the planet now live longer and healthier lives than ever before.
However, scientific discoveries can also have a negative impact in human affairs. Over the last hundred years, some of the technological advances that make life easier or more enjoyable have proved to have unwanted and often unexpected long-term effects. Industrial and agricultural chemicals pollute the global environment, even in places as remote as Antarctica, and city air is contaminated by toxic gases from vehicle exhausts. The increasing pace of innovation means that products become rapidly obsolete, adding to a rising tide of waste. Most significantly of all, the burning of fossil fuels such as coal, oil, and natural gas releases into the atmosphere carbon dioxide and other substances known as greenhouse gases. These gases have altered the composition of the entire atmosphere, producing global warming and the prospect of major climate change in years to come.
Science has also been used to develop technology that raises complex ethical questions. This is particularly true in the fields of biology and medicine. Research involving genetic engineering, cloning, and in vitro fertilization gives scientists the unprecedented power to bring about new life, or to devise new forms of living things. At the other extreme, science can also generate technology that is deliberately designed to harm or to kill. The fruits of this research include chemical and biological warfare, and nuclear weapons, by far the most destructive weapons that the world has ever known.
Scientific research can be divided into basic science, also known as pure science, and applied science. In basic science, scientists working primarily at academic institutions pursue research simply to satisfy the thirst for knowledge. In applied science, scientists at industrial corporations conduct research to achieve some kind of practical or profitable gain.
In practice, the division between basic and applied science is not always clear-cut. This is because discoveries that initially seem to have no practical use often develop one as time goes by. For example, superconductivity, the ability to conduct electricity with no resistance, was little more than a laboratory curiosity when Dutch physicist Heike Kamerlingh Onnes discovered it in 1911. Today superconducting electromagnets are used in an ever-increasing number of important applications, from diagnostic medical equipment to powerful particle accelerators.
Scientists study the origin of the solar system by analysing meteorites and collecting data from satellites and space probes. They search for the secrets of life processes by observing the activity of individual molecules in living cells. They observe the patterns of human relationships in the customs of aboriginal tribes. In each of these varied investigations the questions asked and the means employed to find answers are different. All the inquiries, however, share a common approach to problem solving known as the scientific method. Scientists may work alone or they may collaborate with other scientists. In all cases, a scientist's work must measure up to the standards of the scientific community. Scientists submit their findings to science forums, such as science journals and conferences, in order to subject the findings to the scrutiny of their peers.
Whatever the aim of their work, scientists use the same underlying steps to organize their research: (1) they make detailed observations about objects or processes, either as they occur in nature or as they take place during experiments; (2) they collect and analyse the information observed; and (3) they formulate a hypothesis that explains the behaviour of the phenomena observed.
A scientist begins an investigation by observing an object or an activity. Observation typically involves one or more of the human senses: hearing, sight, smell, taste, and touch. Scientists typically use tools to aid in their observations. For example, a microscope helps view objects too small to be seen with the unaided human eye, while a telescope views objects too far away to be seen by the unaided eye.
Scientists typically apply their observation skills to an experiment. An experiment is any kind of trial that enables scientists to control and change at will the conditions under which events occur. It can be something extremely simple, such as heating a solid to see when it melts, or something highly complex, such as bouncing a radio signal off the surface of a distant planet. Scientists typically repeat experiments, sometimes many times, in order to be sure that the results were not affected by unforeseen factors.
Most experiments involve real objects in the physical world, such as electric circuits, chemical compounds, or living organisms. However, with the rapid progress in electronics, computer simulations can now carry out some experiments instead. If they are carefully constructed, these simulations or models can accurately predict how real objects will behave.
One advantage of a simulation is that it allows experiments to be conducted without any risks. Another is that it can alter the apparent passage of time, speeding up or slowing natural processes. This enables scientists to investigate things that happen very gradually, such as evolution in simple organisms, or ones that happen almost instantaneously, such as collisions or explosions.
During an experiment, scientists typically make measurements and collect results as they work. This information, known as data, can take many forms. Data may be a set of numbers, such as daily measurements of the temperature in a particular location, or a description of side effects in an animal that has been given an experimental drug. Scientists typically use computers to arrange data in ways that make the information easier to understand and analyse. Data may be arranged into a diagram such as a graph that shows how one quantity (body temperature, for instance) varies in relation to another quantity (days since starting a drug treatment). A scientist flying in a helicopter may collect information about the location of a migrating herd of elephants in Africa during different seasons of a year. The data collected may be in the form of geographic coordinates that can be plotted on a map to provide the position of the elephant herd at any given time during a year.
Scientists use mathematics to analyse the data and help them interpret their results. The types of mathematics used include statistics, which is the analysis of numerical data, and probability, which calculates the likelihood that any particular event will occur.
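The kind of statistical summary described above can be sketched in a few lines. The temperature readings below are hypothetical, standing in for the daily measurements mentioned earlier:

```python
import statistics

# Hypothetical daily temperature readings (degrees C) at one location.
readings = [14.2, 15.1, 13.8, 16.0, 15.5, 14.9, 15.3]

mean_temp = statistics.mean(readings)   # central tendency of the data
spread = statistics.stdev(readings)     # sample standard deviation: variability
```

The mean describes the typical value, while the standard deviation quantifies how much individual measurements scatter around it; both are starting points for judging whether an observed difference is real or chance.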
Once an experiment has been carried out and data collected and analysed, scientists look for whatever pattern their results produce and try to formulate a hypothesis that explains all the facts observed in an experiment. In developing a hypothesis, scientists employ methods of induction to generalize from the experiment's results to predict future outcomes, and deduction to infer new facts from experimental results.
Formulating a hypothesis may be difficult for scientists because there may not be enough information provided by a single experiment, or the experiment's conclusion may not fit old theories. Sometimes scientists do not have any prior idea of a hypothesis before they start their investigations, but often scientists start out with a working hypothesis that will be proved or disproved by the results of the experiment. Scientific hypotheses can be useful, just as hunches and intuition can be useful in everyday life. Yet they can also be problematic because they tempt scientists, either deliberately or unconsciously, to favour data that support their ideas. Scientists generally take great care to avoid bias, but it remains an ever-present threat. Throughout the history of science, numerous researchers have fallen into this trap, either in the hope of self-advancement or because they firmly believed their ideas to be true.
If a hypothesis is borne out by repeated experiments, it becomes a theory-an explanation that seems to fit with the facts consistently. The ability to predict new facts or events is a key test of a scientific theory. In the 17th century, the German astronomer Johannes Kepler proposed three theories concerning the motions of planets. Kepler's theories of planetary orbits were confirmed when they were used to predict the future paths of the planets. On the other hand, when theories fail to provide suitable predictions, these failures may suggest new experiments and new explanations that may lead to new discoveries. For instance, in 1928 British microbiologist Frederick Griffith discovered that the genes of dead virulent bacteria could transform harmless bacteria into virulent ones. The prevailing theory at the time was that genes were made of proteins. Nevertheless, studies carried out by Canadian-born American bacteriologist Oswald Avery and colleagues in the 1930's repeatedly showed that the transforming gene was active even in bacteria from which protein was removed. The failure to prove that genes were composed of proteins spurred Avery to construct different experiments, and by 1944 Avery and his colleagues had found that genes were composed of deoxyribonucleic acid (DNA), not proteins.
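Kepler's third law is a concrete case of a theory making checkable predictions: for the Sun's planets, the square of the orbital period (in Earth years) equals the cube of the orbit's semi-major axis (in astronomical units). A minimal sketch, using Mars as an illustrative test case:

```python
# Kepler's third law for planets orbiting the Sun: T^2 = a^3,
# with T in Earth years and a in astronomical units (AU).
def orbital_period_years(semi_major_axis_au):
    """Predict a planet's orbital period from its mean distance from the Sun."""
    return semi_major_axis_au ** 1.5

t_mars = orbital_period_years(1.524)  # Mars orbits at about 1.524 AU
```

The predicted period of about 1.88 years matches Mars's observed year, the kind of confirmation that turned Kepler's hypotheses into accepted theory.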
If other scientists do not have access to scientific results, the research may as well not have been put into effect at all. Scientists need to share the results and conclusions of their work so that other scientists can debate the implications of the work and use it to spur new research. Scientists communicate their results with other scientists by publishing them in science journals and by networking with other scientists to discuss findings and debate issues.
In science, publication follows a formal procedure that has set rules of its own. Scientists describe research in a scientific paper, which explains the methods used, the data collected, and the conclusions that can be drawn. In theory, the paper should be detailed enough to enable any other scientist to repeat the research so that the findings can be independently checked.
Scientific papers usually begin with a brief summary, or abstract, that describes the findings that follow. Abstracts enable scientists to consult papers quickly, without having to read them in full. At the end of most papers is a list of citations-bibliographic references that acknowledge earlier work that has been drawn on in the course of the research. Citations enable readers to work backwards through a chain of research advancements to verify that each step is soundly based.
Scientists typically submit their papers to the editorial board of a journal specializing in a particular field of research. Before the paper is accepted for publication, the editorial board sends it out for peer review. During this procedure a panel of experts, or referees, assesses the paper, judging whether or not the research has been carried out in a fully scientific manner. If the referees are satisfied, publication goes ahead. If they have reservations, some of the research may have to be repeated, but if they identify serious flaws, the entire paper may be rejected for publication.
The peer-review process plays a critical role because it ensures high standards of scientific method. However, it can be a contentious area, as it allows subjective views to become involved. Because scientists are human, they cannot avoid developing personal opinions about the value of each other's work. Furthermore, because referees tend to be senior figures, they may be less than welcoming to new or unorthodox ideas.
Once a paper has been accepted and published, it becomes part of the vast and ever-expanding body of scientific knowledge. In the early days of science, new research was always published in printed form, but today scientific information spreads by many different means. Most major journals are now available via the Internet (a network of linked computers), which makes them quickly accessible to scientists all over the world.
When new research is published, it often acts as a springboard for further work. Its impact can then be gauged by seeing how often the published research appears as a cited work. Major scientific breakthroughs are cited thousands of times a year, but at the other extreme, obscure pieces of research may be cited rarely or not at all. However, citation is not always a reliable guide to the value of scientific work. Sometimes a piece of research will go largely unnoticed, only to be rediscovered in subsequent years. Such was the case for the work on genes done by American geneticist Barbara McClintock during the 1940's. McClintock discovered a new phenomenon in corn cells known as transposable genes, sometimes referred to as jumping genes. McClintock observed that a gene could move from one chromosome to another, where it would break the second chromosome at a particular site, insert itself there, and influence the function of an adjacent gene. Her work was largely ignored until the 1960's when scientists found that transposable genes were a primary means for transferring genetic material in bacteria and more complex organisms. McClintock was awarded the 1983 Nobel Prize in physiology or medicine for her work in transposable genes, more than 35 years after doing the research.
In addition to publications, scientists form associations with other scientists from particular fields. Many scientific organizations arrange conferences that bring together scientists to share new ideas. At these conferences, scientists present research papers and discuss their implications. In addition, science organizations promote the work of their members by publishing newsletters and Web sites; networking with journalists at newspapers, magazines, and television stations to help them understand new findings; and lobbying lawmakers to promote government funding for research.
The oldest surviving science organization is the Accademia dei Lincei, in Italy, which was established in 1603. The same century also saw the inauguration of the Royal Society of London, founded in 1662, and the Académie des Sciences de Paris, founded in 1666. American scientific societies date back to the 18th century, when American scientist and diplomat Benjamin Franklin founded a philosophical club in 1727. In 1743 this organization became the American Philosophical Society, which still exists today.
In the United States, the American Association for the Advancement of Science (AAAS) plays a key role in fostering the public understanding of science and in promoting scientific research. Founded in 1848, it has nearly 300 affiliated organizations, many of which originally developed from AAAS special-interest groups.
Since the late 19th century, communication among scientists has also been improved by international organizations, such as the International Bureau of Weights and Measures, founded in 1875, the International Research Council, founded in 1919, and the World Health Organization, founded in 1948. Other organizations act as international forums for research in particular fields. For example, the Intergovernmental Panel on Climate Change (IPCC), established in 1988, assesses research on how climate change occurs, and what effects such change is likely to have on humans and their environment.
Classifying sciences involves arbitrary decisions because the universe is not easily split into separate compartments. This article divides science into five major branches: mathematics, physical sciences, earth sciences, life sciences, and social sciences. A sixth branch, technology, draws on discoveries from all areas of science and puts them to practical use. Each of these branches itself consists of numerous subdivisions. Many of these subdivisions, such as astrophysics or biotechnology, combine overlapping disciplines, creating yet more areas of research. For additional information on individual sciences, refer to separate articles highlighted in the text.
The mathematical sciences investigate the relationships between things that can be measured or quantified in either a real or abstract form. Pure mathematics differs from other sciences because it deals solely with logic, rather than with nature's underlying laws. However, because it can be used to solve so many scientific problems, mathematics is usually considered to be a science itself.
Central to mathematics is arithmetic, the use of numbers for calculation. In arithmetic, mathematicians combine specific numbers to produce a result. A separate branch of mathematics, called algebra, works in a similar way, but uses general expressions that apply to numbers as a whole. For example, if there are three separate items on a restaurant bill, simple arithmetic produces the total amount to be paid. Yet the total can also be calculated by using an algebraic formula. A powerful and flexible tool, algebra enables mathematicians to solve highly complex problems in every branch of science.
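The contrast the paragraph draws between arithmetic and algebra can be sketched in a few lines of code. The bill amounts and the `total` function below are purely illustrative choices, not figures from the text:

```python
# Three specific items on a bill: plain arithmetic gives the total.
items = [4.50, 7.25, 3.00]
arithmetic_total = 4.50 + 7.25 + 3.00

# The same calculation as a general algebraic formula:
# total = x1 + x2 + ... + xn, for any list of amounts.
def total(amounts):
    """Return the sum of any collection of amounts."""
    result = 0.0
    for amount in amounts:
        result += amount
    return result

print(arithmetic_total)  # 14.75
print(total(items))      # 14.75, the general formula agrees
```

The point is that the function works unchanged for any bill, which is what makes the algebraic form the more powerful tool.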
Geometry investigates objects and the spaces around them. In its simplest form, it deals with objects in two or three dimensions, such as lines, circles, cubes, and spheres. Geometry can be extended to cover abstractions, including objects in many dimensions. Although we cannot perceive these extra dimensions ourselves, the logic of geometry still holds.
In geometry, working out the exact area of a rectangle or the gradient (slope) of a line is easy, but there are some problems that geometry cannot solve by conventional means. For example, geometry cannot calculate the exact gradient at a point on a curve, or the area that the curve bounds. Scientists find that calculating quantities like this helps them understand physical events, such as the speed of a rocket at any particular moment during its acceleration.
To solve these problems, mathematicians use calculus, which deals with continuously changing quantities, such as the position of a point on a curve. Its simultaneous development in the 17th century by English mathematician and physicist Isaac Newton and German philosopher and mathematician Gottfried Wilhelm Leibniz enabled the solution of many problems that had been insoluble by the methods of arithmetic, algebra, and geometry. Among the advances that calculus helped develop were the determination of Newton's laws of motion and the theory of electromagnetism.
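The kind of problem described above can be illustrated numerically. The sketch below is illustrative only: the curve y = x², the `gradient` and `area` helpers, and the step sizes are all assumed choices. It approximates the gradient at a point and the area under a curve by taking ever-smaller steps, the limiting idea that calculus makes exact:

```python
# Approximate the gradient at a point on the curve y = x**2, and the
# area the curve bounds between 0 and 1. Calculus gives exact answers
# (dy/dx = 2x, area = 1/3); here we approach them with small steps.
def f(x):
    return x * x

def gradient(f, x, h=1e-6):
    # Central difference: the slope of a tiny chord around x.
    return (f(x + h) - f(x - h)) / (2 * h)

def area(f, a, b, n=100000):
    # Riemann sum: the total of n thin rectangles under the curve.
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

print(gradient(f, 3.0))   # close to the exact value 6
print(area(f, 0.0, 1.0))  # close to the exact value 1/3
```

Shrinking `h` and growing `n` drives both approximations toward the exact values that differentiation and integration deliver directly.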
The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationships between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motion of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.
Other branches of physics focus on energy and its large-scale effects. Thermodynamics is the study of heat and the effects of converting heat into other kinds of energy. This branch of physics has a host of highly practical applications because heat is often used to power machines. Physicists also investigate electrical energy and the energy carried in electromagnetic waves. These include radio waves, light rays, and X rays, forms of energy that are closely related and that all obey the same set of rules.
Chemistry is the study of the composition of matter and the way different substances interact, subjects that involve physics on an atomic scale. In physical chemistry, chemists study the way physical laws govern chemical change, while in other branches of chemistry the focus is on particular chemicals themselves. For example, inorganic chemistry investigates substances found in the nonliving world and organic chemistry investigates carbon-based substances. Until the 19th century, these two areas of chemistry were thought to be separate and distinct, but today chemists routinely produce organic chemicals from inorganic raw materials. Organic chemists have learned how to synthesize many substances that are found in nature, together with hundreds of thousands that are not, such as plastics and pesticides. Many organic compounds, such as reserpine, a drug used to treat hypertension, cost less to produce by synthesizing from inorganic raw materials than to isolate from natural sources. Many synthetic medicinal compounds can be modified to make them more effective than their natural counterparts, with fewer harmful side effects.
The branch of chemistry known as biochemistry deals solely with substances found in living things. It investigates the chemical reactions that organisms use to obtain energy and the reactions they use to build themselves up. Increasingly, this field of chemistry has become concerned not simply with chemical reactions themselves but also with how the shape of molecules influences the way they work. The result is the new field of molecular biology, one of the fastest-growing sciences today.
Physical scientists also study matter elsewhere in the universe, including the planets and stars. Astronomy is the science of the heavens, while astrophysics is a branch of astronomy that investigates the physical and chemical nature of stars and other objects. Astronomy deals largely with the universe as it appears today, but a related science called cosmology looks back in time to answer the greatest scientific questions of all: how the universe began and how it came to be as it is today.
The earth sciences examine the structure and composition of our planet, and the physical processes that have helped to shape it. Geology focuses on the structure of Earth, while geography is the study of everything on the planet's surface, including the physical changes that humans have brought about through, for example, farming, mining, or deforestation. Scientists in the field of geomorphology study Earth's present landforms, while mineralogists investigate the minerals in Earth's crust and the way they formed.
Water dominates Earth's surface, making it an important subject for scientific research. Oceanographers carry out research in the oceans, while scientists working in the field of hydrology investigate water resources on land, a subject of vital interest in areas prone to drought. Glaciologists study Earth's icecaps and mountain glaciers, and the effects that ice has when it forms, melts, or moves. In atmospheric science, meteorology deals with day-to-day changes in weather, while climatology investigates changes in weather patterns over the longer term.
When living things die, their remains are sometimes preserved, creating a rich store of scientific information. Palaeontology is the study of plant and animal remains that have been preserved in sedimentary rock, often millions of years ago. Palaeontologists study things long dead, and their findings shed light on the history of evolution and on the origin and development of humans. A related science, called palynology, is the study of fossilized spores and pollen grains. Scientists study these tiny structures to learn the types of plants that grew in certain areas during Earth's history, which also helps identify what Earth's climates were like in the past.
The life sciences include all those areas of study that deal with living things. Biology is the general study of the origin, development, structure, function, evolution, and distribution of living things. Biology may be divided into botany, the study of plants; zoology, the study of animals; and microbiology, the study of microscopic organisms, such as bacteria, viruses, and fungi. Many single-celled organisms play important roles in life processes and thus are important to more complex forms of life, including plants and animals.
Genetics is the branch of biology that studies the way in which characteristics are transmitted from an organism to its offspring. In the latter half of the 20th century, new advances made it easier to study and manipulate genes at the molecular level, enabling scientists to catalogue all the genes found in each cell of the human body. Exobiology, a new and still speculative field, is the study of possible extraterrestrial life. Although Earth remains the only place known to support life, many believe that it is only a matter of time before scientists discover life elsewhere in the universe.
While exobiology is one of the newest life sciences, anatomy is one of the oldest. It is the study of plant and animal structures, carried out by dissection or by using powerful imaging techniques. Gross anatomy deals with structures that are large enough to see, while microscopic anatomy deals with much smaller structures, down to the level of individual cells.
Physiology explores how living things work. Physiologists study processes such as cellular respiration and muscle contraction, as well as the systems that keep these processes under control. Their work helps to answer questions about one of the key characteristics of life: the fact that most living things maintain a steady internal state even when the environment around them constantly changes.
Together, anatomy and physiology form two of the most important disciplines in medicine, the science of treating injury and human disease. General medical practitioners have to be familiar with human biology as a whole, but medical science also includes a host of clinical specialties. They include sciences such as cardiology, urology, and oncology, which investigate particular organs and disorders, and pathology, the general study of disease and the changes that it causes in the human body.
As well as working with individual organisms, life scientists also investigate the way living things interact. The study of these interactions, known as ecology, has become a key area of study in the life sciences as scientists become increasingly concerned about the disrupting effects of human activities on the environment.
The social sciences explore human society past and present, and the way human beings behave. They include sociology, which investigates the way society is structured and how it functions, as well as psychology, which is the study of individual behaviour and the mind. Social psychology draws on research in both these fields. It examines the way society influences people's behaviour and attitudes.
Another social science, anthropology, looks at humans as a species and examines all the characteristics that make us what we are. These include not only how people relate to each other but also how they interact with the world around them, both now and in the past. As part of this work, anthropologists often carry out long-term studies of particular groups of people in different parts of the world. This kind of research helps to identify characteristics that all human beings share and those that are the products of local culture, learned and handed on from generation to generation.
The social sciences also include political science, law, and economics, which are products of human society. Although far removed from the world of the physical sciences, all these fields can be studied in a scientific way. Political science and law are uniquely human concepts, but economics has some surprisingly close parallels with ecology. This is because the laws that govern resource use, productivity, and efficiency do not operate only in the human world, with its stock markets and global corporations, but in the nonhuman world as well.
In technology, scientific knowledge is put to practical ends. This knowledge comes chiefly from mathematics and the physical sciences, and it is used in designing machinery, materials, and industrial processes. Overall, this work is known as engineering, a word dating back to the early days of the Industrial Revolution, when an ‘engine' was any kind of machine.
Engineering has many branches, calling for a wide variety of different skills. For example, aeronautical engineers need expertise in the science of fluid flow, because aeroplanes fly through air, which is a fluid. Using wind tunnels and computer models, aeronautical engineers strive to minimize the air resistance generated by an aeroplane, while at the same time maintaining a sufficient amount of lift. Marine engineers also need detailed knowledge of how fluids behave, particularly when designing submarines that have to withstand extra stresses when they dive deep below the water's surface. In civil engineering, stress calculations ensure that structures such as dams and office towers will not collapse, particularly if they are in earthquake zones. In computing, engineering takes two forms: hardware design and software design. Hardware design refers to the physical design of computer equipment (hardware). Software design is carried out by programmers who analyse complex operations, reducing them to a series of small steps written in a language recognized by computers.
In recent years, a completely new field of technology has developed from advances in the life sciences. Known as biotechnology, it involves such varied activities as genetic engineering, the manipulation of the genetic material of cells or organisms, and cloning, the formation of genetically uniform cells, plants, or animals. Although biotechnology is still in its infancy, many scientists believe that it will play a major role in many fields, including food production, waste disposal, and medicine.
Science exists because humans have a natural curiosity and an ability to organize and record things. Curiosity is a characteristic shown by many other animals, but organizing and recording knowledge is a skill demonstrated by humans alone.
During prehistoric times, humans recorded information in a rudimentary way. They made paintings on the walls of caves, and they also carved numerical records on bones or stones. They may also have used other ways of recording numerical figures, such as making knots in leather cords, but because these records were perishable, no traces of them remain. However, with the invention of writing about 6,000 years ago, a new and much more flexible system of recording knowledge appeared.
The earliest writers were the people of Mesopotamia, who lived in a part of present-day Iraq. Initially they used a pictographic script, inscribing tallies and lifelike symbols on tablets of clay. With the passage of time, these symbols gradually developed into cuneiform, a much more stylized script composed of wedge-shaped marks.
Because clay is durable, many of these ancient tablets still survive. They show that when writing first appeared, the Mesopotamians already had a basic knowledge of mathematics, astronomy, and chemistry, and that they used symptoms to identify common diseases. During the following 2,000 years, as Mesopotamian culture became increasingly sophisticated, mathematics in particular became a flourishing science. Knowledge accumulated rapidly, and by 1000 BC the earliest private libraries had appeared.
Southwest of Mesopotamia, in the Nile Valley of northeastern Africa, the ancient Egyptians developed their own form of pictographic script, writing on papyrus, or inscribing text in stone. Written records from 1500 BC show that, like the Mesopotamians, the Egyptians had a detailed knowledge of diseases. They were also keen astronomers and skilled mathematicians, a fact demonstrated by the almost perfect symmetry of the pyramids and by other remarkable structures they built.
For the peoples of Mesopotamia and ancient Egypt, knowledge was recorded mainly for practical needs. For example, astronomical observations enabled the development of early calendars, which helped in organizing the farming year. It was, nonetheless, in ancient Greece, often recognized as the birthplace of Western science, that a new kind of scientific enquiry began. Here, philosophers sought knowledge largely for its own sake.
Thales of Miletus was one of the first Greek philosophers to seek natural causes for natural phenomena. He travelled widely throughout Egypt and the Middle East and became famous for predicting a solar eclipse that occurred in 585 BC. At a time when people regarded eclipses as ominous, inexplicable, and frightening events, his prediction marked the start of rationalism, a belief that the universe can be explained by reason alone. Rationalism remains the hallmark of science to this day.
Thales and his successors speculated about the nature of matter and of Earth itself. Thales himself believed that Earth was a flat disk floating on water, but the followers of Pythagoras, one of ancient Greece's most celebrated mathematicians, believed that Earth was spherical. These followers also thought that Earth moved in a circular orbit-not around the Sun but around a central fire. Although flawed and widely disputed, this bold suggestion marked an important development in scientific thought: the idea that Earth might not be, after all, the centre of the universe. At the other end of the spectrum of scientific thought, the Greek philosopher Leucippus and his student Democritus of Abdera proposed that all matter is made up of indivisible atoms, more than 2,000 years before the idea became a part of modern science.
As well as investigating natural phenomena, ancient Greek philosophers also studied the nature of reasoning. At the two great schools of Greek philosophy in Athens-the Academy, founded by Plato, and the Lyceum, founded by Plato's pupil Aristotle-students learned how to reason in a structured way using logic. The methods taught at these schools included induction, which involves taking particular cases and using them to draw general conclusions, and deduction, the process of correctly inferring new facts from something already known.
In the two centuries that followed Aristotle's death in 322 BC, Greek philosophers made remarkable progress in a number of fields. By comparing the Sun's height above the horizon in two different places, the mathematician, astronomer, and geographer Eratosthenes calculated Earth's circumference, producing a figure accurate to within 1 percent. Another celebrated Greek mathematician, Archimedes, laid the foundations of mechanics. He also pioneered the science of hydrostatics, the study of the behaviour of fluids at rest. In the life sciences, Theophrastus founded the science of botany, providing detailed and vivid descriptions of a wide variety of plant species as well as investigating the germination process in seeds.
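Eratosthenes' reasoning can be reconstructed in a few lines. The specific figures below (a 7.2° shadow angle and roughly 800 km between the two observation points, Alexandria and Syene) are the traditional reconstruction of his measurement, not values given in the text:

```python
# Eratosthenes' idea: if the Sun is directly overhead at one place
# while casting a shadow at an angle of A degrees at another, the two
# places must be A/360 of the way around the Earth. The figures are
# the traditional reconstruction and are illustrative only.
angle_deg = 7.2      # Sun's angle from the vertical at Alexandria
distance_km = 800    # approximate Alexandria-Syene distance

circumference_km = distance_km * 360 / angle_deg
print(circumference_km)  # 40000.0, close to the modern ~40,075 km
```

Since 7.2° is one fiftieth of a full circle, the calculation amounts to multiplying the north-south distance between the two sites by fifty.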
By the 1st century BC, Roman power was growing and Greek influence had begun to wane. In the centuries that followed, the Egyptian geographer and astronomer Ptolemy charted the known planets and stars, putting Earth firmly at the centre of the universe, and Galen, a physician of Greek origin, wrote important works on anatomy and physiology. Although skilled soldiers, lawyers, engineers, and administrators, the Romans had little interest in basic science. As a result, scientific knowledge advanced little in the days of the Roman Empire. In Athens, the Lyceum and Academy were closed down in AD 529, bringing the first flowering of rationalism to an end.
For more than nine centuries, from about AD 500 to 1400, Western Europe made only a minor contribution to scientific thought. European philosophers became preoccupied with alchemy, a secretive and mystical pseudoscience that held out the illusory promise of turning inferior metals into gold. Alchemy did lead to some discoveries, such as sulfuric acid, which was first described in the early 1300s, but elsewhere, particularly in China and the Arab world, much more significant progress in the sciences was made.
Chinese science developed in isolation from Europe, and followed a different pattern. Unlike the Greeks, who prized knowledge as an end in itself, the Chinese excelled at turning scientific discoveries to practical ends. The list of their technological achievements is dazzling: it includes the compass, invented in about AD 270; wood-block printing, developed around 700; and gunpowder and movable type, both invented around the year 1000. The Chinese were also capable mathematicians and excellent astronomers. In mathematics, they calculated the value of pi to within seven decimal places by the year 600, while in astronomy, one of their most celebrated observations was that of the supernova, or stellar explosion, that took place in the Crab Nebula in 1054. China was also the source of the world's oldest portable star map, dating from about 940.
The Islamic world, which in medieval times extended as far west as Spain, also produced many scientific breakthroughs. The Arab mathematician Muhammad al-Khwarizmi introduced Hindu-Arabic numerals to Europe many centuries after they had been devised in southern Asia. Unlike the numerals used by the Romans, Hindu-Arabic numerals include zero, a mathematical device unknown in Europe at the time. The value of Hindu-Arabic numerals depends on their place: in the number 300, for example, the numeral three is worth ten times as much as in 30. Al-Khwarizmi also wrote on algebra (a word derived from the Arabic al-jabr), and his name survives in the word algorithm, a concept of great importance in modern computing.
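The place-value idea can be made concrete with a short sketch; the `digit_values` helper below is purely illustrative:

```python
# Place value: the worth of a digit depends on its position.
# In 300 the digit 3 stands for 3 * 10**2; in 30 it stands for
# 3 * 10**1, ten times less.
def digit_values(n):
    """Return the value contributed by each digit of a positive integer."""
    digits = str(n)
    return [int(d) * 10 ** (len(digits) - 1 - i)
            for i, d in enumerate(digits)]

print(digit_values(300))  # [300, 0, 0]
print(digit_values(30))   # [30, 0]
```

The same ten symbols, including zero as a placeholder, can thus express any whole number, which is what made the system so much more workable than Roman numerals.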
In astronomy, Arab observers charted the heavens, giving many of the brightest stars the names we use today, such as Aldebaran, Altair, and Deneb. Arab scientists also explored chemistry, developing methods to manufacture metallic alloys and test the quality and purity of metals. As in mathematics and astronomy, Arab chemists left their mark in some of the names they used-alkali and alchemy, for example, are both words of Arabic origin. Arab scientists also played a part in developing physics. One of the most famous Egyptian physicists, Alhazen, published a book that dealt with the principles of lenses, mirrors, and other devices used in optics. In this work, he rejected the then-popular idea that eyes give out light rays. Instead, he correctly deduced that eyes work when light rays enter the eye from outside.
In Europe, historians often attribute the rebirth of science to a political event—the capture of Constantinople (now Istanbul) by the Turks in 1453. At the time, Constantinople was the capital of the Byzantine Empire and a major seat of learning. Its downfall led to an exodus of Greek scholars to the West. In the period that followed, many scientific works, including those originally from the Arab world, were translated into European languages. Through the invention of the movable type printing press by Johannes Gutenberg around 1450, copies of these texts became widely available.
The Black Death, a recurring outbreak of bubonic plague that began in 1347, disrupted the progress of science in Europe for more than two centuries. Yet in 1543 two books were published that had a profound impact on scientific progress. One was De Corporis Humani Fabrica (On the Structure of the Human Body, 7 volumes, 1543), by the Belgian anatomist Andreas Vesalius. Vesalius studied anatomy in Italy, and his masterpiece, which was illustrated by superb woodcuts, corrected errors and misunderstandings about the body that had persisted since the time of Galen, more than 1,300 years before. Unlike Islamic physicians, whose religion prohibited them from dissecting human cadavers, Vesalius investigated the human body in minute detail. As a result, he set new standards in anatomical science, creating a reference work of unique and lasting value.
The other book of great significance published in 1543 was De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), written by the Polish astronomer Nicolaus Copernicus. In it, Copernicus rejected the idea that Earth was the centre of the universe, as proposed by Ptolemy in the 2nd century AD. Instead, he set out to prove that Earth, together with the other planets, follows orbits around the Sun. Other astronomers opposed Copernicus's ideas, and more ominously, so did the Roman Catholic Church. In the early 1600's, the church placed the book on a list of forbidden works, where it remained for more than two centuries. Despite this ban and despite the book's inaccuracies (for instance, Copernicus believed that Earth's orbit was circular rather than elliptical), De Revolutionibus remained a momentous achievement. It also marked the start of a conflict between science and religion that has dogged Western thought ever since.
In the first decade of the 17th century, the invention of the telescope provided independent evidence to support Copernicus's views. Italian physicist and astronomer Galileo Galilei used the new device to remarkable effect. He became the first person to observe satellites circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes as it circles the Sun.
These observations of Venus helped to convince Galileo that Copernicus's Sun-centred view of the universe had been correct, but he fully understood the danger of supporting such heretical ideas. His Dialogue on the Two Chief World Systems, Ptolemaic and Copernican, published in 1632, was carefully crafted to avoid controversy. Even so, he was summoned before the Inquisition (tribunal established by the pope for judging heretics) the following year and, under threat of torture, forced to recant.
In less contentious areas, European scientists made rapid progress on many fronts in the 17th century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum's swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later another Italian, the mathematician and physicist Evangelista Torricelli, made the first barometer. In doing so he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large, hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as air was let in.
Throughout the 17th century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France, philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.
Perhaps the century's greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the Moon in its orbit around the Earth and is the principal cause of the Earth's tides. These discoveries revolutionized how people viewed the universe and they marked the birth of modern science.
Newton's work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated 18th-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the 18th century began actively to apply rational thought, careful observation, and experimentation to solve a variety of problems.
Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, a long-held notion that life could spring from nonliving matter. It also brought the beginning of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified close to 12,000 living plants and animals into a systematic arrangement.
By 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the 18th century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, British economist Adam Smith stressed the advantages of division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefit. Smith's work for the first time gave economics the stature of an independent subject of study and his theories greatly influenced the course of economic thought for more than a century.
With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not necessarily mean that discoveries narrowed as well: from the 19th century onward, research began to uncover principles that unite the universe as a whole.
In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions, a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton's discoveries about atoms and their behaviour to draw up his periodic table of the elements.
Other 19th-century discoveries in chemistry included the world's first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron combined with the acids to form a highly flammable explosive.
In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing organic compounds from inorganic ingredients, a breakthrough that opened up an entirely new field of research. By the end of the 19th century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world's most useful drugs.
In physics, the 19th century is remembered chiefly for research into electricity and magnetism, which was pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1821 Faraday demonstrated that a moving magnet could set an electric current flowing in a conductor. This experiment and others he performed led to the development of electric motors and generators. While Faraday's genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell's famous equations, devised in 1864, use mathematics to explain the interactions between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also thought that the complete electromagnetic spectrum must include many other forms of waves as well. With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and X rays by German physicist Wilhelm Roentgen in 1895, Maxwell's ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic unit of matter.
As in chemistry, these 19th-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.
In the earth sciences, the 19th century was a time of controversy, with scientists debating Earth's age. Estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that another planet nearby caused Uranus's odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky. In 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own solar system. He did this with the Leviathan, a 183-cm. (72-in.) reflecting telescope, built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland's damp and cloudy climate, but his gigantic telescope remained the world's largest for more than 70 years.
In the 19th century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880's Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing organisms themselves. Pasteur's vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.
Also during the 19th century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. However, the British scientist Charles Darwin towers above all other scientists of the 19th century. His publication of On the Origin of Species in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that still has not subsided. Particularly controversial was Darwin's theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin's ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin's ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection, that Darwin proposed.
In the 20th century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.
At the beginning of the 20th century, the life sciences entered a period of rapid progress. Mendel's work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940s American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. These experiments showed that DNA is the chemical that makes up genes and is thus the key to heredity.
After American biochemist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has been astounding. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.
At the turn of the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but also by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world's first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient's cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine's chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970's, and in the United States the number of polio cases dropped from 38,000 in the 1950's to fewer than 10 a year by the 21st century.
By the middle of the 20th century scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. By the 1980s, however, the medical community's confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacteria strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause hemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.
In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which normal or genetically altered genes are inserted into a patient's cells to replace nonfunctional or missing genes.
Improved drugs and new tools have made surgical operations that were once considered impossible now routine. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fibreoptic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as telemedicine, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.
In the 20th century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind.' In 1948 the American biologist Alfred Kinsey published Sexual Behaviour in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.
The 20th century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell's activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.
In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, radar, television, and computer systems. In the 1920s Scottish engineer John Logie Baird developed the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. In the 1920's and 1930's American electronic engineer Vladimir Kosma Zworykin significantly improved the television's picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the Moon, planets, and stars to learn their distance from Earth and to track their movements.
In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller, far less expensive, and considerably more reliable than triodes, and they require less power to operate. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.
During the 1950's and early 1960's minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American physicist John W. Mauchly and American electrical engineer John Presper Eckert, Jr., used as many as 18,000 triodes and filled a large room. However, the transistor initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced the computer's size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.
Further miniaturization led in 1971 to the first microprocessor: a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today's personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950's. Once used only by large businesses, computers are now used by professionals, small retailers, and students to perform a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to connect to worldwide communications networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.
During the early 1950's public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth's near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.
When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960s NASA experienced its greatest growth. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960's and 1970's, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in Earth's solar system.
In the 1970's through 1990's, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorses that enabled the construction of the International Space Station.
Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known, an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. Nevertheless, while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world, that is, the one in which we live.
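Heisenberg's principle can be stated quantitatively: the uncertainties in a particle's position and momentum obey Δx · Δp ≥ ħ/2. The following sketch puts rough numbers on this for an electron confined to atomic dimensions; the confinement length of one angstrom is an assumed, typical atomic scale, not a figure from the text.

```python
# Illustrative sketch of the uncertainty principle: dx * dp >= hbar / 2.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837e-31      # electron rest mass, kg

dx = 1e-10               # assumed position uncertainty, m (roughly one atomic diameter)
dp_min = hbar / (2 * dx) # minimum momentum uncertainty allowed by the principle
dv_min = dp_min / m_e    # corresponding velocity uncertainty (non-relativistic)

print(f"dp >= {dp_min:.2e} kg*m/s, so dv >= {dv_min:.2e} m/s")
```

The resulting velocity uncertainty is hundreds of kilometres per second, which is why, on the atomic scale, the classical picture of a particle with a definite position and speed breaks down.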
In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of fission as both an energy source and a weapon.
These fission studies, coupled with the development of particle accelerators in the 1950's, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Far from being indivisible, scientists now know that atoms are made up of 12 fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.
Advances in particle physics have been closely linked to progress in cosmology. From the 1920's onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between 10 and 20 billion years ago.  However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.
Particle accelerators, in physics, are devices used to accelerate charged elementary particles or ions to high energies. Particle accelerators today are some of the largest and most expensive instruments used by physicists. They all have the same three basic parts: a source of elementary particles or ions, a tube pumped to a partial vacuum in which the particles can travel freely, and some means of speeding up the particles.
Charged particles can be accelerated by an electrostatic field. For example, by placing electrodes with a large potential difference at each end of an evacuated tube, the physicists John D. Cockcroft and Ernest Thomas Sinton Walton were able to accelerate protons to 250,000 eV. Another electrostatic accelerator is the Van de Graaff accelerator, which was developed in the early 1930's by the American physicist Robert Jemison Van de Graaff. This accelerator uses the same principles as the Van de Graaff generator. The Van de Graaff accelerator builds up a potential between two electrodes by transporting charges on a moving belt. Modern Van de Graaff accelerators can accelerate particles to energies as high as 15 MeV (15 million electron volts).
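The energy such machines impart follows directly from the definition of the electron volt: a particle of charge q falling through a potential difference V gains energy qV. A sketch of the arithmetic for the 250,000 eV Cockcroft-Walton figure quoted above (the joule conversion uses the standard value of the elementary charge):

```python
# Energy gained by a charge q crossing a potential difference V: E = q * V.
e = 1.602176634e-19   # elementary charge, C; also the size of 1 eV in joules

V = 250_000           # potential difference, volts (the Cockcroft-Walton figure)
E_joules = e * V      # energy gained by one proton, in joules
E_eV = V              # for a singly charged particle, the energy in eV equals V

print(f"{E_eV} eV = {E_joules:.2e} J")
```

The tiny size of the joule value shows why particle physics uses the electron volt rather than SI energy units.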
Another machine, first conceived in the late 1920's, is the linear accelerator, or linac, which uses alternating voltages of high magnitude to push particles along in a straight line. Particles pass through a line of hollow metal tubes enclosed in an evacuated cylinder. An alternating voltage is timed so that a particle is pushed forward each time it goes through a gap between two of the metal tubes. Theoretically, a linac of any energy can be built. The largest linac in the world, at Stanford University, is 3.2 km. (2 mi.) long. It is capable of accelerating electrons to an energy of 50 GeV (50 billion, or giga, electron volts). Stanford's linac is designed to collide two beams of particles accelerated on different tracks of the accelerator.
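The Stanford figures give a feel for the accelerating gradient a linac must sustain; dividing the final energy by the machine's length gives the average energy gained per metre. A rough, illustrative calculation using the numbers quoted above:

```python
# Average accelerating gradient of the Stanford linac, from the figures above.
final_energy_eV = 50e9   # 50 GeV final electron energy, in eV
length_m = 3200.0        # 3.2 km of accelerator

gradient = final_energy_eV / length_m  # average eV gained per metre
print(f"average gradient: {gradient / 1e6:.1f} MeV per metre")
```

At roughly 16 MeV per metre, it is clear why a linac of very high energy must be so long, and why circular machines, which reuse the same accelerating gaps on every turn, are attractive.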
The American physicist Ernest O. Lawrence won the 1939 Nobel Prize in physics for a breakthrough in accelerator design in the early 1930's. He developed the cyclotron, the first circular accelerator. A cyclotron is to some extent like a linac wrapped into a tight spiral. Instead of many tubes, the machine has only two hollow vacuum chambers, called dees, that are shaped like capital letter Ds placed back to back. A magnetic field, produced by a powerful electromagnet, keeps the particles moving in a circle. Each time the charged particles pass through the gap between the dees, they are accelerated. As the particles gain energy, they spiral out toward the edge of the accelerator until they gain enough energy to exit the accelerator. The world's most powerful cyclotron, the K1200, began operating in 1988 at the National Superconducting Cyclotron Laboratory at Michigan State University. The machine is capable of accelerating nuclei to an energy approaching 8 GeV.
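The timing that makes the cyclotron work rests on a simple fact: at non-relativistic speeds, a particle's orbital frequency in a magnetic field, f = qB/(2πm), does not depend on the radius of its orbit, so a fixed-frequency voltage on the dees stays in step with the particle as it spirals outward. A sketch of the numbers for a proton (the 1.5 T field strength is an assumed, typical magnet value, not a figure from the text):

```python
import math

# Cyclotron (orbital) frequency of a non-relativistic proton: f = q*B / (2*pi*m).
# Because f is independent of the orbit radius, a fixed-frequency voltage on the
# dees accelerates the particle on every pass through the gap.
q = 1.602176634e-19   # proton charge, C
m_p = 1.67262192e-27  # proton rest mass, kg
B = 1.5               # assumed magnetic field, tesla

f = q * B / (2 * math.pi * m_p)
print(f"cyclotron frequency: {f / 1e6:.1f} MHz")
```

A frequency in the tens of megahertz is comfortably within the range of radio-frequency oscillators, which is what made the cyclotron practical in the 1930's.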
When nuclear particles in a cyclotron gain an energy of 20 MeV or more, they become appreciably more massive, as predicted by the theory of relativity. This tends to slow them and throws the acceleration pulses at the gaps between the dees out of phase. A solution to this problem was suggested in 1945 by the Soviet physicist Vladimir I. Veksler and the American physicist Edwin M. McMillan. The solution, the synchrocyclotron, is sometimes called the frequency-modulated cyclotron. In this instrument, the oscillator (radio-frequency generator) that accelerates the particles around the dees is automatically adjusted to stay in step with the accelerated particles; as the particles gain mass, the frequency of accelerations is lowered slightly to keep in step with them. As the maximum energy of a synchrocyclotron increases, so must its size, for the particles must have more space in which to spiral. The largest synchrocyclotron is the 600-cm. (236-in.) phasotron at the Dubna Joint Institute for Nuclear Research in Russia; it accelerates protons to more than 700 MeV and has magnets weighing 6984 metric tons (7200 tons).
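The mass increase mentioned above is easy to quantify: a particle with kinetic energy T has total energy T + mc², so its mass grows by the factor γ = 1 + T/(mc²). For protons at the 20 MeV threshold quoted above:

```python
# Relativistic mass-increase factor for a particle of kinetic energy T:
# gamma = 1 + T / (m * c^2).
proton_rest_energy_MeV = 938.272  # proton rest energy m*c^2, in MeV

T = 20.0                          # kinetic energy, MeV (the threshold quoted above)
gamma = 1 + T / proton_rest_energy_MeV

print(f"a 20 MeV proton is {100 * (gamma - 1):.1f}% more massive than at rest")
```

Even a mass increase of about two percent is enough to pull the particle out of phase with a fixed-frequency cyclotron after many thousands of orbits, which is why the synchrocyclotron's frequency modulation is needed.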
When electrons are accelerated, they undergo a large increase in mass at a low energy. At 1 MeV energy, an electron weighs two and one-half times as much as an electron at rest. Synchrocyclotrons cannot be adapted to make allowance for such large increases in mass. Therefore, another type of cyclic accelerator, the betatron, is employed to accelerate electrons. The betatron consists of a doughnut-shaped evacuated chamber placed between the poles of an electromagnet. The electrons are kept in a circular path by a magnetic field called a guide field. By applying an alternating current to the electromagnet, the electromotive force induced by the changing magnetic flux through the circular orbit accelerates the electrons. During operation, both the guide field and the magnetic flux are varied to keep the radius of the orbit of the electrons constant.
The synchrotron is the most recent and most powerful member of the accelerator family. A synchrotron consists of a tube in the shape of a large ring through which the particles travel; the tube is surrounded by magnets that keep the particles moving through the centre of the tube. The particles enter the tube after having already been accelerated to several million electron volts. Particles are accelerated at one or more points on the ring each time the particles make a complete circle around the accelerator. To keep the particles in a rigid orbit, the strengths of the magnets in the ring are increased as the particles gain energy. In a few seconds, the particles reach energies greater than 1 GeV and are ejected, either directly into experiments or toward targets that produce a variety of elementary particles when struck by the accelerated particles. The synchrotron principle can be applied to either protons or electrons, although most of the large machines are proton-synchrotrons.
The first accelerator to exceed the 1 GeV mark was the cosmotron, a proton-synchrotron at Brookhaven National Laboratory, in Brookhaven, New York. The cosmotron was operated at 2.3 GeV in 1952 and later increased to 3 GeV. In the mid-1960's, two operating synchrotrons were regularly accelerating protons to energies of about 30 GeV. These were the Alternating Gradient Synchrotron at Brookhaven National Laboratory, and a similar machine near Geneva, Switzerland, operated by CERN (also known as the European Organization for Nuclear Research). By the early 1980s, the two largest proton-synchrotrons were a 500-GeV device at CERN and a similar one at the Fermi National Accelerator Laboratory (Fermilab) near Batavia, Illinois. The capacity of the latter, called Tevatron, was increased to a potential 1 TeV (trillion, or tera, eV) in 1983 by installing superconducting magnets, making it the most powerful accelerator in the world. In 1989, CERN began operating the Large Electron-Positron Collider (LEP), a 27-km. (16.7-mi.) ring that can accelerate electrons and positrons to an energy of 50 GeV.
A storage ring collider accelerator is a synchrotron that produces more energetic collisions between particles than a conventional synchrotron, which slams accelerated particles into a stationary target. A storage ring collider accelerates two sets of particles that rotate in opposite directions in the ring, then collides the two sets of particles. CERN's Large Electron-Positron Collider is a storage ring collider. In 1987, Fermilab converted the Tevatron into a storage ring collider and installed a three-story-high detector that observed and measured the products of the head-on particle collisions.
As powerful as today's storage ring colliders are, physicists need even more powerful devices to test today's theories. Unfortunately, building larger rings is extremely expensive. CERN is considering building the Large Hadron Collider (LHC) in the existing 27-km. (16.7-mi.) tunnel that currently houses the Large Electron-Positron Collider. In 1988, the United States began planning for the construction of the Superconducting Super Collider (SSC) near Waxahachie, Texas. The SSC was to be an enormous storage ring collider accelerator 87 km. (54 mi.) long. However, after about one-fifth of the tunnel had been completed, the Congress of the United States voted to cancel the project in October 1993, as a result of the accelerator's projected cost of more than $10 billion.
Accelerators are used to explore atomic nuclei, thereby allowing nuclear scientists to identify new elements and to explain phenomena that affect the entire nucleus. Machines exceeding 1 GeV are used to study the fundamental particles that compose the nucleus. Several hundred of these particles have been identified. High-energy physicists hope to discover rules or principles that will permit an orderly arrangement of the profusion of subnuclear particles. Such an arrangement would be as useful to nuclear science as the periodic table of the chemical elements is to chemistry. Fermilab's accelerator and collider detector permit scientists to study violent particle collisions that mimic the state of the universe when it was just microseconds old. Continued study of their findings should increase scientific understanding of the makeup of the universe.
Particle detectors are instruments used to detect and study fundamental nuclear particles. These detectors range in complexity from the well-known portable Geiger counter to room-sized spark and bubble chambers.
One of the first detectors to be used in nuclear physics was the ionization chamber, which consists essentially of a closed vessel containing a gas and equipped with two electrodes at different electrical potentials. The electrodes, depending on the type of instrument, may consist of parallel plates or coaxial cylinders, or the walls of the chamber may act as one electrode and a wire or rod inside the chamber as the other. When ionizing particles of radiation enter the chamber they ionize the gas between the electrodes. The ions that are thus produced migrate to the electrodes of opposite sign (negatively charged ions move toward the positive electrode, and vice versa), creating a current that may be amplified and measured directly with an electrometer-an electroscope equipped with a scale-or amplified and recorded by means of electronic circuits.
Ionization chambers adapted to detect individual ionizing particles of radiation are called counters. The Geiger-Müller counter is one of the most versatile and widely used instruments of this type. It was developed by the German physicist Hans Geiger from an instrument first devised by Geiger and the British physicist Ernest Rutherford; it was improved in 1928 by Geiger and the German-American physicist Walther Müller. The counting tube is filled with a gas or a mixture of gases at low pressure, the electrodes being the thin metal wall of the tube and a fine wire, usually made of tungsten, stretched lengthwise along the axis of the tube. A strong electric field maintained between the electrodes accelerates the ions; these then collide with atoms of the gas, detaching electrons and thus producing more ions. When the voltage is raised sufficiently, the rapidly increasing current produced by a single particle sets off a discharge throughout the counter. The pulse caused by each particle is amplified electronically and then actuates a loudspeaker or a mechanical or electronic counting device.
Detectors that enable researchers to observe the tracks that particles leave behind are called track detectors. Spark and bubble chambers are track detectors, as are the cloud chamber and nuclear emulsions. Nuclear emulsions resemble photographic emulsions but are thicker and not as sensitive to light. A charged particle passing through the emulsion ionizes silver grains along its track. These grains become black when the emulsion is developed and can be studied with a microscope.
The fundamental principle of the cloud chamber was discovered by the British physicist C. T. R. Wilson in 1896, although an actual instrument was not constructed until 1911. The cloud chamber consists of a vessel several centimetres or more in diameter, with a glass window on one side and a movable piston on the other. The piston can be dropped rapidly to expand the volume of the chamber. The chamber is usually filled with dust-free air saturated with water vapour. Dropping the piston causes the gas to expand rapidly and causes its temperature to fall. The air is now supersaturated with water vapour, but the excess vapour cannot condense unless ions are present. Charged nuclear or atomic particles produce such ions, and any such particles passing through the chamber leave behind them a trail of ionized particles upon which the excess water vapour will condense, thus making visible the course of the charged particle. These tracks can be photographed and the photographs then analysed to provide information on the characteristics of the particles.
Because the paths of electrically charged particles are bent or deflected by a magnetic field, and the amount of deflection depends on the energy of the particle, a cloud chamber is often operated within a magnetic field. The tracks of negatively and positively charged particles will curve in opposite directions. By measuring the radius of curvature of each track, its velocity can be determined. Heavy nuclei such as alpha particles form thick and dense tracks, protons form tracks of medium thickness, and electrons form thin and irregular tracks. In a later refinement of Wilson's design, called a diffusion cloud chamber, a permanent layer of supersaturated vapour is formed between warm and cold regions. The layer of supersaturated vapour is continuously sensitive to the passage of particles, and the diffusion cloud chamber does not require the expansion of a piston for its operation. Although the cloud chamber has now been supplanted almost entirely by the bubble chamber and the spark chamber, it was used in making many important discoveries in nuclear physics.
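The momentum measurement described above works because a particle of charge q moving perpendicular to a magnetic field B follows a circle of radius r = p/(qB), so measuring the curvature gives the momentum p = qBr directly. An illustrative calculation (the field strength and track radius are assumed values, not figures from the text):

```python
# Momentum of a singly charged particle from its track curvature: p = q * B * r.
q = 1.602176634e-19   # elementary charge, C
c = 2.99792458e8      # speed of light, m/s

B = 1.2               # assumed magnetic field, tesla
r = 0.5               # assumed radius of curvature of the track, m

p = q * B * r                       # momentum, kg*m/s
p_MeV_c = p * c / q / 1e6           # same momentum expressed in MeV/c

print(f"p = {p:.2e} kg*m/s = {p_MeV_c:.0f} MeV/c")
```

Tighter curvature or a weaker field means lower momentum, which is why fast particles leave nearly straight tracks while slow ones curl up into spirals.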
The bubble chamber, invented in 1952 by the American physicist Donald Glaser, is similar in operation to the cloud chamber. In a bubble chamber a liquid is momentarily superheated to a temperature just above its boiling point. For an instant the liquid will not boil unless some impurity or disturbance is introduced. High-energy particles provide such a disturbance. Tiny bubbles form along the tracks as these particles pass through the liquid. If a photograph is taken just after the particles have crossed the chamber, these bubbles will make visible the paths of the particles. As with the cloud chamber, a bubble chamber placed between the poles of a magnet can be used to measure the energies of the particles. Many bubble chambers are equipped with superconducting magnets instead of conventional magnets. Bubble chambers filled with liquid hydrogen allow the study of interactions between the accelerated particles and the hydrogen nuclei.
In a spark chamber, incoming high-energy particles ionize the air or a gas between plates and wire grids that are kept alternately positively and negatively charged. Sparks jump along the paths of ionization and can be photographed to show particle tracks. In some spark-chamber installations, information on particle tracks is fed directly into electronic computer circuits without the necessity of photography. A spark chamber can be operated quickly and selectively. The instrument can be set to record particle tracks only when a particle of the type that the researchers want to study is produced in a nuclear reaction. This advantage is important in studies of the rarer particles; spark-chamber pictures, however, lack the resolution and detail of bubble-chamber pictures.
The scintillation counter exploits the fact that charged particles moving at high speed through certain transparent solids and liquids, known as scintillating materials, produce flashes of visible light through ionization. The gases argon, krypton, and xenon produce ultraviolet light and hence are also used in scintillation counters. A primitive scintillation device, known as the spinthariscope, was invented in the early 1900s and was of considerable importance in the development of nuclear physics. The spinthariscope required, however, the counting of the scintillations by eye. Because of the uncertainties of this method, physicists turned to other detectors, including the Geiger-Müller counter. The scintillation method was revived in 1947 by placing the scintillating material in front of a photomultiplier tube, a type of photoelectric cell. The light flashes are converted into electrical pulses that can be amplified and recorded electronically.
Various organic and inorganic substances such as plastic, zinc sulfide, sodium iodide, and anthracene are used as scintillating materials. Certain substances react more favourably to specific types of radiation than others, making possible highly diversified instruments. The scintillation counter is superior to all other radiation-detecting devices in a number of fields of current research. It has replaced the Geiger-Müller counter in the detection of biological tracers and as a surveying instrument in prospecting for radioactive ores. It is also used in nuclear research, notably in the investigation of such particles as the antiproton, the meson, and the neutrino. One such counter, the Crystal Ball, has been in use since 1979 for advanced particle research, first at the Stanford Linear Accelerator Centre and, since 1982, at the German Electron Synchrotron Laboratory (DESY) in Hamburg, Germany. The Crystal Ball is a hollow crystal sphere, about 2.1 m (7 ft) wide, that is surrounded by 730 sodium iodide crystals.
Many other types of interactions between matter and elementary particles are used in detectors. Thus in semiconductor detectors, electron-hole pairs that elementary particles produce in a semiconductor junction momentarily increase the electric conduction across the junction. The Cherenkov detector, on the other hand, makes use of the effect discovered by the Russian physicist Pavel Alekseyevich Cherenkov in 1934: a particle emits light when it passes through a nonconducting medium at a velocity higher than the velocity of light in that medium (the velocity of light in glass, for example, is lower than the velocity of light in vacuum). In Cherenkov detectors, materials such as glass, plastic, water, or carbon dioxide serve as the medium in which the light flashes are produced. As in scintillation counters, the light flashes are detected with photomultiplier tubes.
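The Cherenkov condition described here, light emitted only above the in-medium speed of light c/n, can be sketched in a few lines; the refractive indices and function names below are illustrative assumptions.

```python
# Sketch of the Cherenkov condition: a charged particle radiates light
# only when its speed v exceeds c/n, the speed of light in a medium of
# refractive index n.

C = 2.998e8  # speed of light in vacuum, m/s

def cherenkov_threshold(n):
    """Minimum particle speed (m/s) for Cherenkov light in a medium."""
    return C / n

def emits_cherenkov(v, n):
    return v > cherenkov_threshold(n)

# Approximate indices: water ~1.33, carbon dioxide gas ~1.00045.
print(emits_cherenkov(0.9 * C, 1.33))     # fast particle in water -> True
print(emits_cherenkov(0.9 * C, 1.00045))  # same particle in CO2 gas -> False
```

This threshold behaviour is what makes Cherenkov detectors selective: by choosing a medium, experimenters set the minimum speed a particle must have to register.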
Neutral particles such as neutrons or neutrinos can be detected by nuclear reactions that occur when they collide with nuclei of certain atoms. Slow neutrons produce easily detectable alpha particles when they collide with boron nuclei in boron trifluoride. Neutrinos, which barely interact with matter, are detected in huge tanks containing perchloroethylene (C2Cl4, a dry-cleaning fluid). The neutrinos that collide with chlorine nuclei produce radioactive argon nuclei. The perchloroethylene tank is flushed at regular intervals, and the newly formed argon atoms, present in minute amounts, are counted. This type of neutrino detector, placed deep underground to shield against cosmic radiation, is currently used to measure the neutrino flux from the sun. Neutrino detectors may also take the form of scintillation counters, the tank in this case being filled with an organic liquid that emits light flashes when traversed by electrically charged particles produced by the interaction of neutrinos with the liquid's molecules.
The detectors now being developed for use with the storage rings and colliding particle beams of the most recent generation of accelerators are advanced tracking detectors known as time-projection chambers. They can measure three-dimensionally the tracks produced by particles from colliding beams, with supplementary detectors to record other particles resulting from the high-power collisions. The Fermi National Accelerator Laboratory's CDF (Collider Detector at Fermilab) is used with its colliding-beam accelerator to study head-on particle collisions. CDF's three different systems can capture or account for nearly all of the sub-nuclear fragments released in such violent collisions.
High-energy particle physicists are using particle accelerators measuring 8 km. (5 mi.) across to study something billions of times too small to see. Why? To find out what everything is made of and where it comes from. These physicists are constructing and testing new theories about objects called superstrings. Superstrings may explain the nature of space and time and of everything in them, from the light you are using to read these words to black holes so dense that they can capture light forever. Possibly the smallest objects allowed by the laws of physics, superstrings may tell us about the largest event of all time: the big bang, and the creation of the universe!
These are exciting ideas, still strange to most people. For the past 100 years physicists have descended to deeper and deeper levels of structure, into the heart of matter and energy and of existence itself. Read on to follow their progress.
The world around us, full of books, computers, mountains, lakes, and people, is made by rearranging more than 100 chemical elements. Oxygen, hydrogen, carbon, and nitrogen are elements especially important to living things; silicon is especially important to computer chips.
The smallest recognizable form in which a chemical element occurs is the atom, and the atoms of one element are unlike the atoms of any other element. Every atom has a small core called a nucleus around which electrons swarm. Electrons, tiny particles with a negative electrical charge, determine the chemical properties of an element; that is, how it interacts with other atoms to make the things around us. Electrons also are what move through wires to make light, heat, and video games.
In 1869, before anyone knew anything about nuclei or electrons, Russian chemist Dmitry Mendeleyev grouped the elements according to their physical qualities and discovered the periodic law. He was able to predict the qualities of elements that had not yet been discovered. By the early 1900s scientists had discovered the nucleus and electrons.
Atoms stick together and form larger objects called molecules because of a force called electromagnetism. The best-known form of electromagnetism is radiation: light, radio waves, X rays, and infrared and ultraviolet radiation.
Modern physics starts with light and other forms of electromagnetic radiation. In 1900 German physicist Max Planck proposed the quantum theory, which says that light comes in units of energy called quanta. As we will explain, these units of light are waves and they are also particles. Light is simultaneously energy and matter. So is everything else.
It was Albert Einstein who first proposed (in 1905) that Planck's units of light can be considered particles. He named these particles photons. In the same year, Einstein published what is known as the special theory of relativity. According to this theory, the speed of light is the fastest that anything in the universe can go, and all forms of electromagnetic radiation are forms of light, moving at the same speed.
What differentiates radio waves, visible light, and X rays is their energy. This energy is directly related to the wave's length. Light waves, like ocean waves, have peaks and troughs that repeat at regular intervals, and wavelength is the distance between each pair of peaks (or troughs). The shorter the wavelength, the higher the energy.
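The wavelength-energy relation described here is Planck's E = hc/λ. A small illustrative sketch (constants rounded, names my own) shows how sharply the energy rises as wavelength shrinks:

```python
# Sketch: photon energy from wavelength, E = h*c / wavelength.
# Shorter wavelength -> higher energy, as the text describes.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of one photon, in electronvolts."""
    return H * C / wavelength_m / EV

for name, wl in [("radio (1 m)", 1.0),
                 ("visible (500 nm)", 500e-9),
                 ("X ray (0.1 nm)", 0.1e-9)]:
    print(f"{name}: {photon_energy_ev(wl):.3g} eV")
```

A visible photon carries a couple of electronvolts, while an X-ray photon of the wavelength above carries thousands of times more, which is why X rays can probe matter that visible light cannot.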
How does this relate to our story? It turns out that the process by which electrons interact is an exchange of photons (particles of light). Therefore we can study electrons by probing them with photons.
To understand really what things are made of, we must probe them or move them around and thus learn how they work. In the case of electrons, physicists probe them with photons, the particles that carry the electromagnetic force.
While some physicists studied electrons and photons, others pondered and probed the atomic nucleus. The nucleus of each chemical element contains a distinctive number of positively charged protons and a number of uncharged neutrons that can vary slightly from atom to atom. Protons and neutrons are the source of radioactivity and of nuclear energy. In 1964 physicists suggested that protons and neutrons are made of still smaller particles they called quarks.
Probing protons and neutrons requires particles with extremely high energies. Particle accelerators are large machines for bringing particles to these high energies. These machines have to be big, because they accelerate particles by applying force many times, over long distances. Some particle accelerators are the largest machines ever constructed. This is ironic given that these are delicate scientific instruments designed to probe the shortest distances ever investigated.
The proposal and acceptance of quarks were a major step in putting together what is called the standard model of particles and forces. This unified theory describes all of the fundamental particles, from which everything is made, and how they interact. There are twelve kinds of fundamental particles: six kinds of quarks and six kinds of leptons, including the electron.
Four forces are believed to control all the interactions of these fundamental particles. They are the strong force, which holds the nucleus together; the weak force, responsible for radioactivity; the electromagnetic force, which acts on electric charge and binds electrons to atomic nuclei; and gravitation, which holds us on Earth. The standard model identifies a force-carrying particle to correspond with three of these forces. The photon, for example, carries the electromagnetic force. Physicists have not yet detected a particle that carries gravitation.
Powerful mathematical techniques called gauge field theories allow physicists to describe, calculate, and predict the interactions of these particles and forces. Gauge theories combine quantum physics and special relativity into consistent equations that produce extremely accurate results. The extraordinary precision of quantum electrodynamics, for example, has filled our world with ultrareliable lasers and transistors.
The mathematical rules that come together in the standard model can explain every particle physics phenomenon that we have ever seen. Physicists can explain forces; they can explain particles. However, they cannot yet explain why forces and particles are what they are. Basic properties, such as the speed of light, must be taken from measurements. Physicists cannot yet provide a satisfactory description of gravity.
The basic behaviour of gravity was taught to us by English physicist Sir Isaac Newton. After publishing his special theory of relativity, Albert Einstein in 1915 clarified and extended Newton's explanation with his own description of gravity, known as general relativity. Not even Einstein, however, could bring general relativity and quantum physics together into a single unified theory. Since everything else is governed by quantum physics on small scales, what is the quantum theory of gravity? No one has yet proposed a satisfactory answer to this question. Physicists have been trying to find one for a long time.
At first, this might not seem to be an important problem. Compared with other forces, gravity is extremely weak. We are aware of its action in everyday life because its pull corresponds to mass, and Earth has a huge amount of mass and hence a big gravitational pull. Fundamental particles have tiny masses and hence a minuscule gravitational pull. So couldn't we just ignore gravity when studying fundamental particles? The ability to ignore gravity on this scale is why we have made so much progress in particle physics over so many years without possessing a theory of quantum gravity.
There are several reasons, however, why we cannot ignore gravity forever. One reason is simply that scientists want to know the whole story. A second reason is that gravity, as Einstein taught us, is the essential physics of space and time. If this physics is not subject to the same quantum laws that any other physics is subject to, something is wrong somewhere. A third reason is that an understanding of quantum gravity is necessary to deal with some important questions in cosmology: for example, how did the universe get to be the way it is, and why did galaxies form?
Gravitation has been shown to spread in waves, and physicists theorize the existence of a corresponding particle, the graviton. The force of gravity, like everything else, has a natural quantum length. For gravity it is about 10⁻³⁵ m. This is about a hundred billion billion times smaller than a proton.
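The quantum length of gravity mentioned here is conventionally built from the constants of quantum physics, gravity, and relativity (it is usually called the Planck length). A hedged sketch with rounded constant values:

```python
import math

# Sketch: the natural quantum length of gravity, sqrt(hbar * G / c^3),
# combining the reduced Planck constant, Newton's gravitational constant,
# and the speed of light.

HBAR = 1.0546e-34  # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(f"{planck_length:.2e} m")  # roughly 1.6e-35 m

# Compare with the approximate size of a proton, ~1e-15 m:
print(f"{1e-15 / planck_length:.1e} times smaller than a proton")
```

Only this particular combination of the three constants has units of length, which is why physicists take it as the scale at which quantum gravity must matter.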
We can't build an accelerator to probe that distance using today's technology, because the proportions of size and energy show that it would stretch from here to the stars. However, we know that the universe began with the big bang, when all matter and force originated. Everything we know about today follows from the period after the big bang, when the universe expanded. Everything we know indicates that in the fractions of a second following the big bang, the universe was extremely small and dense. At some earliest time, the entire universe was no larger across than the quantum length of gravity. If we are to understand the true nature of where everything comes from and how it really fits together, we must understand quantum gravity.
These questions may seem almost metaphysical. Physicists now suspect that research in this direction will answer many other questions about the standard model, such as why there are so many different fundamental particles. Other questions are more immediately practical. Our control of technology arises from our understanding of particles and forces. Answers to physicists' questions could increase computing power or help us find new sources of energy. They will shape the 21st century as quantum physics has shaped the 20th.
Among the most promising new theories is the idea that everything is made of fundamental ‘strings,' rather than of another layer of tiny particles. The best analogy for these minute entities is a guitar or violin string, which vibrates to produce notes of different frequencies and wavelengths. Superstring theory proposes that if we were able to look closely enough at a fundamental particle, at quantum-length distances, we would see a tiny, vibrating loop!
In this view, all the different types of fundamental particles that we find in the standard model are really just different vibrations of the same string, which can split and join in ways that change its evident nature. This is the case not only for particles of matter, such as quarks and electrons, but also for force-carrying particles, such as photons.
This is a very clever idea, since it unifies everything we have learned in a simple way. In its details, the theory is extremely complicated but very promising. For example, the superstring theory very naturally describes the graviton among its vibrations, and it also explains the quantum properties of many types of black holes. There are also signs that the quantum length of gravity is really the smallest physically possible distance. Below this scale, points in space and time are no longer connected in sequence, so distances cannot be measured or described. The very notions of space, time, and distance seem to stop making sense.
Recent discoveries have shown that the five leading versions of superstring theory are all contained within a powerful complex known as M-Theory. M-Theory says that entities mathematically resembling membranes and other extended objects may also be important. The end of the story has not yet been written, however. Physicists are still working out the details, and it will take many years to be confident that this approach is correct and comprehensive. Much remains to be learned, and surprises are guaranteed. In the quest to probe these small distances, experimentally and theoretically, our understanding of nature is forever enriched, and we approach at least a part of ultimate truth.
Elementary Particles, in physics, are particles that cannot be broken down into any other particles. The term elementary particles is also used more loosely to include some subatomic particles that are composed of other particles. Particles that cannot be broken further are sometimes called fundamental particles to avoid confusion. These fundamental particles provide the basic units that make up all matter and energy in the universe.
Scientists and philosophers have sought to identify and study elementary particles since ancient times. Aristotle and other ancient Greek philosophers believed that all things were composed of four elementary materials: fire, water, air, and earth. People in other ancient cultures developed similar notions of basic substances. As early scientists began collecting and analysing information about the world, they showed that these materials were not fundamental but were made of other substances.
In the 1800s British physicist John Dalton was so sure he had identified the most basic objects that he called them atoms (from the Greek word for ‘indivisible'). By the early 1900s scientists were able to break apart these atoms into particles that they called the electron and the nucleus. Electrons surround the dense nucleus of an atom. In the 1930s, researchers showed that the nucleus consists of smaller particles, called the proton and the neutron. Today, scientists have evidence that the proton and neutron are themselves made up of even smaller particles, called quarks.
Scientists now believe that quarks and three other types of particles (leptons, force-carrying bosons, and the Higgs boson) are truly fundamental and cannot be split into anything smaller. In the 1960s American physicists Steven Weinberg and Sheldon Glashow and Pakistani physicist Abdus Salam developed a mathematical description of the nature and behaviour of elementary particles. Their theory, known as the standard model of particle physics, has greatly advanced understanding of the fundamental particles and forces in the universe. Yet some questions about particles remain unanswered by the standard model, and physicists continue to work toward a theory that would explain even more about particles.
Everything in the universe, from elementary particles and atoms to people, houses, and planets, can be classified into one of two categories: fermions (pronounced FUR-me-onz) or bosons (pronounced BO-zonz). The behaviour of a particle or group of particles, such as an atom or a house, determines whether it is a fermion or boson. The distinction between these two categories is not noticeable on the large scale of people or houses, but it has profound implications in the world of atoms and elementary particles. Fundamental particles are classified according to whether they are fermions or bosons. Fundamental fermions combine to form atoms and other more unusual particles, while fundamental bosons carry forces between particles and give particles mass.
In 1925 Austrian-born American physicist Wolfgang Pauli formulated a rule of physics that helped define fermions. He suggested that no two electrons can have the same properties and locations. He proposed this exclusion principle to explain why all of the electrons in atoms have different amounts of energy. In 1926 Italian-born American physicist Enrico Fermi and British physicist Paul Dirac developed equations that describe electron behaviour, providing mathematical proof of the exclusion principle. Physicists call particles that obey the exclusion principle fermions in honour of Fermi. Protons, neutrons, and the quarks that comprise them are all examples of fermions.
Some particles, such as particles of light called photons, do not obey the exclusion principle. Two or more photons can have the same characteristics. In 1925 German-born American physicist Albert Einstein and Indian mathematician Satyendra Bose developed a set of equations describing the behaviour of particles that do not obey the exclusion principle. Particles that obey the equations of Bose and Einstein are called bosons, in honour of Bose.
Classifying particles as either fermions or bosons is similar to classifying whole numbers as either odd or even. No number is both odd and even, yet every whole number is either odd or even. Similarly, particles are either fermions or bosons. Sums of odd and even numbers are either odd or even, depending on how many odd numbers were added. Adding two odd numbers yields an even number, but adding a third odd number makes the sum odd again. Adding any number of even numbers yields an even sum. In a similar manner, adding an even number of fermions yields a boson, while adding an odd number of fermions results in a fermion. Adding any number of bosons yields a boson.
For example, a hydrogen atom contains two fermions: an electron and a proton. Yet the atom itself is a boson because it contains an even number of fermions. According to the exclusion principle, the electron inside the hydrogen atom cannot have the same properties as another electron nearby. However, the hydrogen atom itself, as a boson, does not follow the exclusion principle. Thus, one hydrogen atom can be identical to another hydrogen atom.
A particle composed of three fermions, on the other hand, is a fermion. An atom of heavy hydrogen, also called deuterium, is a hydrogen atom with a neutron added to the nucleus. A deuterium atom contains three fermions: one proton, one electron, and one neutron. Since it contains an odd number of fermions, the deuterium atom too is a fermion. Just like its constituent particles, the deuterium atom must obey the exclusion principle. It cannot have the same properties as another deuterium atom.
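The counting rule in the preceding paragraphs can be sketched in a few lines of code; this is a toy illustration of the odd/even arithmetic, not a physics library.

```python
# Sketch of the counting rule: a composite particle is a fermion when it
# contains an odd number of fermions, and a boson when it contains an
# even number.

def classify(n_fermions):
    """Classify a composite by its fermion count."""
    return "fermion" if n_fermions % 2 == 1 else "boson"

# Hydrogen atom: 1 electron + 1 proton = 2 fermions
print(classify(2))  # boson
# Heavy hydrogen (deuterium) atom: proton + electron + neutron = 3 fermions
print(classify(3))  # fermion
```

Adding one more fermion always flips the classification, just as adding an odd number flips the parity of a sum.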
The differences between fermions and bosons have important implications. If electrons did not obey the exclusion principle, all electrons in an atom could have the same energy and be identical. If all of the electrons in an atom were identical, different elements would not have such different properties. For example, metals conduct electricity better than plastics do because the arrangement of the electrons in their atoms and molecules differs. If electrons were bosons, their arrangements could be identical in these atoms, and devices that rely on the conduction of electricity, such as televisions and computers, would not work. Photons, on the other hand, are bosons, so a group of photons can all have identical properties. This characteristic allows the photons to form a coherent beam of identical particles called a laser.
The most fundamental particles that make up matter fall into the fermion category. These fermions cannot be split into anything smaller. The particles that carry the forces acting on matter and antimatter are bosons called force carriers. Force carriers are also fundamental particles, so they cannot be split into anything smaller. These bosons carry the four basic forces in the universe: the electromagnetic, the gravitational, the strong (the force that holds the nuclei of atoms together), and the weak (the force that causes atoms to decay radioactively). Scientists believe another type of fundamental boson, called the Higgs boson, gives matter and antimatter mass. Scientists have yet to discover definitive proof of the existence of the Higgs boson.
Ordinary matter makes up all the objects and materials familiar to life on Earth, including people, cars, buildings, mountains, air, and clouds. Stars, planets, and other celestial bodies also contain ordinary matter. The fundamental fermions that make up matter fall into two categories: leptons and quarks. Each lepton and quark has an antiparticle partner, with the same mass but opposite charge. Leptons and quarks differ from each other in two main ways: (1) the electric charge they carry and (2) the way they interact with each other and with other particles. Scientists usually state the electric charge of a particle as a multiple of the electric charge of a proton, which is 1.602 × 10⁻¹⁹ coulombs. Leptons have electric charges of either -1 or 0 (neutral), with their antiparticles having charges of +1 or 0. Quarks have electric charges of either +2/3 or -1/3, and antiquarks have electric charges of either -2/3 or +1/3. Leptons interact weakly with one another and with other particles, while quarks interact strongly with one another.
Leptons and quarks each come in 6 varieties. Scientists divided these 12 basic types into 3 groups, called generations. Each generation consists of 2 leptons and 2 quarks. All ordinary matter consists of just the first generation of particles. The particles in the second and third generation tend to be heavier than their counterparts in the first generation. These heavier, higher-generation particles decay, or spontaneously change, into their first generation counterparts. Most of these decays occur very quickly, and the particles in the higher generations exist for an extremely short time (a millionth of a second or less). Particle physicists are still trying to understand the role of the second and third generations in nature.
Scientists divide leptons into two groups: particles that have electric charges and particles, called neutrinos, that are electrically neutral. Each of the three generations contains a charged lepton and a neutrino. The first generation of leptons consists of the electron (e⁻) and the electron neutrino (νe); the second generation, the muon (μ) and the muon neutrino (νμ); and the third generation, the tau (τ) and the tau neutrino (ντ).
The electron is probably the most familiar elementary particle. Electrons are about 2,000 times lighter than protons and have an electric charge of -1. They are stable, so they can exist independently (outside an atom) for an infinitely long time. All atoms contain electrons, and the behaviour of electrons in atoms distinguishes one type of atom from another. When atoms radioactively decay, they sometimes emit an electron in a process called beta decay.
Studies of beta decay led to the discovery of the electron neutrino, the first generation lepton with no electric charge. Atoms release neutrinos, along with electrons, when they undergo beta decay. Electron neutrinos might have a tiny mass, but their mass is so small that scientists have not been able to measure it or conclusively confirm that the particles have any mass at all.
Physicists discovered a particle heavier than the electron but lighter than a proton in studies of high-energy particles created in Earth's atmosphere. This particle, called the muon (pronounced MYOO-on), is the second generation charged lepton. Muons have an electric charge of -1 and a half-life of 1.52 microseconds (a microsecond is one-millionth of a second). Unlike electrons, they do not make up everyday matter. Muons live their brief lives in the atmosphere, where heavier particles called pions decay into muons and other particles. The electrically neutral partner of the muon is the muon neutrino. Muon neutrinos, like electron neutrinos, have either a tiny mass too small to measure or no mass at all. They are released when a muon decays.
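Decays of unstable particles such as the muon follow the usual half-life law: after each half-life, half of a population survives. The sketch below treats the muon half-life of about 1.5 microseconds as an illustrative value; the function names are my own.

```python
# Sketch: exponential decay of an unstable particle population.
# The surviving fraction after time t is 2 ** (-t / half_life).

def surviving_fraction(t_us, half_life_us):
    """Fraction of particles remaining after t_us microseconds."""
    return 2.0 ** (-t_us / half_life_us)

MUON_HALF_LIFE_US = 1.52  # illustrative value, microseconds

print(surviving_fraction(1.52, MUON_HALF_LIFE_US))   # one half-life -> 0.5
print(surviving_fraction(15.2, MUON_HALF_LIFE_US))   # ten half-lives -> ~0.001
```

Ten half-lives reduce a population by roughly a factor of a thousand, which is why higher-generation particles vanish so quickly after they are produced.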
The third generation charged lepton is the tau. The tau has an electric charge of -1 and almost twice the mass of a proton. Scientists have detected taus only in laboratory experiments. The average lifetime of taus is extremely short, only 0.3 picoseconds (a picosecond is one-trillionth of a second). Scientists believe the tau has an electrically neutral partner called the tau neutrino. While scientists have never detected a tau neutrino directly, they believe they have seen the effects of tau neutrinos during experiments. Like the other neutrinos, the tau neutrino has a very small mass or no mass at all.
The fundamental particles that make up protons and neutrons are called quarks. Like leptons, quarks come in six varieties, or ‘flavours,' divided into three generations. Unlike leptons, however, quarks never exist alone-they are always combined with other quarks. In fact, quarks cannot be isolated even with the most advanced laboratory equipment and processes. Scientists have had to determine the charges and approximate masses of quarks mathematically by studying particles that contain quarks.
Quarks are unique among all elementary particles in that they have fractional electric charges, either +2/3 or -1/3. In an observable particle, the fractional charges of quarks in the particle add up to an integer charge for the combination.
The first generation quarks are designated up (u) and down (d); the second generation, charm (c) and strange (s); and the third generation, top (t) and bottom (b). The odd names for quarks do not describe any aspect of the particles; they merely give scientists a way to refer to a particular type of quark.
The up quark and the down quark make up protons and neutrons in atoms, as described below. The up quark has an electric charge of +2/3, and the down quark has a charge of -1/3. The second generation quarks have greater mass than those in the first generation. The charm quark has an electric charge of +2/3, and the strange quark has a charge of -1/3. The heaviest quarks are the third generation top and bottom quarks. Some scientists originally called the top and bottom quarks truth and beauty, but those names have dropped out of use. The top quark has an electric charge of +2/3, and the bottom quark has a charge of -1/3. The up quark, the charm quark, and the top quark behave similarly and are called up-type quarks. The down quark, the strange quark, and the bottom quark are called down-type quarks because they share the same electric charge.
Particles made of quarks are called hadrons (pronounced HA-dronz). Hadrons are not fundamental, since they consist of quarks, but they are commonly included in discussions of elementary particles. Two classes of hadrons can be found in nature: mesons (pronounced ME-zonz) and baryons (pronounced BARE-ee-onz).
Mesons contain a quark and an antiquark (the antiparticle partner of the quark). Since they contain two fermions, mesons are bosons. The first meson that scientists detected was the pion. Pions exist as intermediary particles in the nuclei of atoms, forming from and being absorbed by protons and neutrons. The pion comes in three varieties: a positive pion (π+), a negative pion (π−), and an electrically neutral pion (π0). The positive pion consists of an up quark and a down antiquark. The up quark has charge +2/3 and the down antiquark has charge +1/3, so the charge on the positive pion is +1. Positive pions have an average lifetime of 26 nanoseconds (a nanosecond is one-billionth of a second). The negative pion contains an up antiquark and a down quark, so the charge on the negative pion is −2/3 plus −1/3, or −1. It has the same mass and average lifetime as the positive pion. The neutral pion contains an up quark and an up antiquark, so the electric charges cancel each other. It has an average lifetime of 9 femtoseconds (a femtosecond is one-quadrillionth of a second).
Many other mesons exist. All six quarks play a part in the formation of mesons, although mesons containing heavier quarks like the top quark have very short lifetimes. Other mesons include the Kaons (pronounced KAY-ons) and the D particles. Kaons and Ds come in several different varieties, just as pions do. All varieties of Kaons and some varieties of Ds contain either a strange quark or a strange antiquark. All Ds contain either a charm quark or a charm antiquark.
Three quarks together form a baryon. A baryon contains an odd number of fermions, so it is a fermion itself. Protons, the positively charged particles in all atomic nuclei, are baryons that consist of two up quarks and a down quark. Adding the charges of two up quarks and a down quark, +2/3 plus +2/3 plus −1/3, produces a net charge of +1, the charge of the proton. Protons have never been observed to decay.
The neutrons found inside atoms are baryons as well. A neutron consists of one up quark and two down quarks. Adding these charges gives +2/3 plus −1/3 plus −1/3 for a net charge of 0, making the neutron electrically neutral. Neutrons have a greater mass than protons and an average lifetime of 930 seconds.
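The charge arithmetic above can be checked mechanically. The sketch below is an illustration, not part of the original text: it tallies quark charges with exact fractions, using a leading tilde as a notation (chosen here) for an antiquark.

```python
from fractions import Fraction

# Quark electric charges, matching the text: up-type +2/3, down-type -1/3.
CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3),
    "c": Fraction(2, 3), "s": Fraction(-1, 3),
    "t": Fraction(2, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Sum the charges of a hadron's quarks; a leading '~' marks an antiquark,
    which carries the opposite charge of its quark counterpart."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= CHARGE[q[1]]   # antiquark: flip the sign
        else:
            total += CHARGE[q]
    return total

proton = hadron_charge(["u", "u", "d"])    # +2/3 + 2/3 - 1/3 = +1
neutron = hadron_charge(["u", "d", "d"])   # +2/3 - 1/3 - 1/3 = 0
pi_plus = hadron_charge(["u", "~d"])       # +2/3 + 1/3 = +1
```

Using exact fractions rather than floating-point numbers makes the "fractional charges add up to an integer" claim verifiable without rounding error.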
Many other baryons exist, and many contain quarks other than the up and down flavours. For example, lambda (Λ) and sigma (Σ) particles contain strange, charm, or bottom quarks. For lambda particles, the average lifetime ranges from 200 femtoseconds to 1.2 picoseconds. The average lifetime of sigma particles ranges from 0.0007 femtoseconds to 150 picoseconds.
British physicist Paul Dirac proposed an early theory of particle interactions in 1928. His theory predicted the existence of antiparticles, which combine to form antimatter. Antiparticles have the same mass as their normal particle counterparts, but they have several opposite quantities, such as electric charge and colour charge. Colour charge determines how particles react with one another under the strong force (the force that holds the nuclei of atoms together), just as electric charge determines how particles react to one another under the electromagnetic force. The antiparticles of fermions are also fermions, and the antiparticles of bosons are bosons.
All fermions have antiparticles. The antiparticle of an electron is called the positron (pronounced POZ-i-tron). The antiparticle of the proton is the antiproton. The antiproton consists of antiquarks: two up antiquarks and one down antiquark. Antiquarks have the opposite electric and colour charges of their counterparts. The antiparticles of neutrinos are called antineutrinos. Both neutrinos and antineutrinos have no electric charge or colour charge, but physicists still consider them distinct from one another. Neutrinos and antineutrinos behave differently when they collide with other particles and in radioactive decay. When a particle decays, for example, an antineutrino accompanies the production of a charged lepton, and a neutrino accompanies the production of a charged antilepton. In addition, reactions that absorb neutrinos do not absorb antineutrinos, giving further evidence of the distinction between neutrinos and antineutrinos.
When a particle and its associated antiparticle collide, they annihilate, or destroy, each other, creating a tiny burst of energy. Particle-antiparticle collisions would provide a very efficient source of energy if large numbers of antiparticles could be harnessed cheaply. Physicists already make use of this energy in machines called particle accelerators. Particle accelerators increase the speed (and therefore energy) of elementary particles and make the particles collide with one another. When particles and antiparticles (such as protons and antiprotons) collide, their kinetic energy and the energy released when they annihilate each other converts to matter, creating new and unusual particles for physicists to study.
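As a rough illustration of the energy scale involved, the rest energy released when an electron and a positron annihilate follows from E = mc². The constants below are standard physical values rounded to four significant figures; they are supplied here for the example, not taken from the text.

```python
# Rest energy released in electron-positron annihilation: E = 2 m c^2
# (both particles have the same mass, and both are converted to energy).
M_ELECTRON = 9.109e-31   # electron mass in kilograms
C = 2.998e8              # speed of light in metres per second
EV = 1.602e-19           # joules per electronvolt

energy_joules = 2 * M_ELECTRON * C**2
energy_mev = energy_joules / EV / 1e6   # roughly 1.02 MeV
```

The tiny absolute number in joules is why large numbers of antiparticles would be needed before annihilation became a practical energy source, as the paragraph above notes.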
Particle-antiparticle collisions could someday fuel spacecraft, which need only a slight push to change their speed or direction in the vacuum of space. The antiparticles and particles would have to be kept away from each other until the spacecraft needed the energy of their collisions. Finely tuned magnetic fields could be used to trap the particles and keep them separate, but these magnetic fields are difficult to set up and maintain. At the end of the 20th century, technology was not advanced enough to allow spacecraft to carry the equipment and particles necessary for using particle-antiparticle collisions as fuel.
All of the known forces in our universe can be classified as one of four types: electromagnetic, strong, weak, or gravitational. These forces affect everything in the universe. The electromagnetic force binds electrons to the atoms that compose our bodies, the objects around us, the Earth, the planets, and the Moon. The strong nuclear force holds together the nuclei inside the atoms that compose matter. Reactions due to the weak nuclear force fuel the Sun, providing light and heat. Gravity holds people and objects to the ground.
Each force has a particular property associated with it, such as electric charge for the electromagnetic force. Elementary particles that do not have electric charge, such as neutrinos, are electrically neutral and are not affected by the electromagnetic force.
Mechanical forces, such as the force used to push a child on a swing, result from the electrical repulsion between electrons and are thus electromagnetic. Even though a parent pushing a child on a swing feels his or her hands touching the child, the atoms in the parent's hands never come into contact with the atoms of the child. The electrons in the parent's hands repel those in the child while remaining a slight distance away from them. In a similar manner, the Sun attracts Earth through gravity, without Earth ever contacting the Sun. Physicists call these forces nonlocal, because the forces appear to affect objects that are not in the same location, but at a distance from one another.
Theories about elementary particles, however, require forces to be local-that is, the objects affecting each other must come into contact. Scientists achieved this locality by introducing the idea of elementary particles that carry the force from one object to another. Experiments have confirmed the existence of many of these particles. In the case of electromagnetism, a particle called a photon travels between the two repelling electrons. One electron releases the photon and recoils, while the other electron absorbs it and is pushed away.
Each of the four forces has one or more unique force carriers, such as the photon, associated with it. These force carrier particles are bosons, since they do not obey the exclusion principle: any number of force carriers can have the same characteristics. They are also believed to be fundamental, so they cannot be split into smaller particles. Other than the fact that they are all fundamental bosons, the force carriers have very few common features. They are as unique as the forces they carry.
For centuries, electricity and magnetism seemed distinct forces. In the 1800s, however, experiments showed many connections between these two forces. In 1864 British physicist James Clerk Maxwell drew together the work of many physicists to show that electricity and magnetism are different aspects of the same electromagnetic force. This force causes particles with similar electric charges to repel one another and particles with opposite charges to attract one another. Maxwell also showed that light is a travelling form of electromagnetic energy. The founders of quantum mechanics took Maxwell's work one step further. In 1925 German-British physicist Max Born and German physicists Ernst Pascual Jordan and Werner Heisenberg showed mathematically that packets of light energy, later called photons, are emitted and absorbed when charged particles attract or repel each other through the electromagnetic force.
Any particle with electric charge, such as a quark or an electron, is subject to, or ‘feels,' the electromagnetic force. Electrically neutral particles, such as neutrinos, do not feel it. The electric charge of a hadron is the sum of the charges on the quarks in the hadron. If the sum is zero, the electromagnetic force does not affect the hadron, although it does affect the quarks inside the hadron. Photons carry the electromagnetic force between particles but have no mass or electric charge themselves. Since photons have no electric charge, they are not affected by the force they carry.
Unlike neutrinos and some other electrically neutral particles, the photon does not have a distinct antiparticle. Particles that have antiparticles are like positive and negative numbers-they are each the other's additive inverse. Photons are like the number zero, which is its own additive inverse. In effect, a photon is its own antiparticle.
In one example of the electromagnetic force, two electrons repel each other because they both have negative electric charges. One electron releases a photon, and the other electron absorbs it. Even though photons have no mass, their energy gives them momentum, a property that enables them to affect other particles. The momentum of the photon pushes the two electrons apart, just as the momentum of a basketball tossed between two ice skaters will push the skaters apart.
Quarks and particles made of quarks attract each other through the strong force. The strong force holds the quarks in protons and neutrons together, and it holds protons and neutrons together in the nuclei. If electromagnetism were the only force between quarks, the two up quarks in a proton would repel each other because they are both positively charged. (The up quarks are also attracted to the negatively charged down quark in the proton, but this attraction is not as great as the repulsion between the up quarks.) However, the strong force is stronger than the electromagnetic force, so it glues the quarks inside the proton together.
A property of particles called colour charge determines how the strong force affects them. The term colour charge has nothing to do with colour in the usual sense; it is just a convenient way for scientists to describe this property of particles. Colour charge is similar to electric charge, which determines a particle's electromagnetic interactions. Quarks can have a colour charge of red, blue, or green. Antiquarks can have a colour charge of anti-red (also called cyan), anti-blue (also called yellow), or anti-green (also called magenta). Quark types and colours are not linked; an up quark, for example, may be red, green, or blue.
All observed objects carry a colour charge of zero, so quarks (which compose matter) must combine to form hadrons that are colourless, or colour neutral. The colour charges of the quarks in hadrons therefore cancel one another. Mesons contain a quark of one colour and an antiquark of the quark's anti-colour. The colour charges cancel each other out and make the meson white, or colourless. Baryons contain three quarks, each with a different colour. As with light, the colours red, blue, and green combine to produce white, so the baryon is white, or colourless.
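The colour-cancellation rule above can be sketched as simple bookkeeping. This is a toy model for illustration only, not real SU(3) colour algebra: it counts each colour as +1 and its anticolour as −1, and treats a combination as potentially colourless when the net counts of red, green, and blue are all equal.

```python
from collections import Counter

def is_colour_neutral(charges):
    """Toy check: a baryon with one quark of each colour, or a meson pairing
    a colour with its anticolour, leaves equal net counts for all colours."""
    net = Counter()
    for c in charges:
        if c.startswith("anti-"):
            net[c[5:]] -= 1   # anticolour cancels the matching colour
        else:
            net[c] += 1
    counts = [net[c] for c in ("red", "green", "blue")]
    return counts[0] == counts[1] == counts[2]

baryon_ok = is_colour_neutral(["red", "green", "blue"])   # one of each colour
meson_ok = is_colour_neutral(["red", "anti-red"])         # colour + anticolour
unbalanced = is_colour_neutral(["red", "red", "blue"])    # cannot be colourless
```

The two allowed patterns in the toy model mirror the two classes of hadrons the text describes: three-quark baryons and quark-antiquark mesons.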
The bosons that carry the strong force between particles are called gluons. Gluons have no mass or electric charge and, like photons, they are their own antiparticle. Unlike photons, however, gluons do have colour charge. They carry a colour and an anticolour. Possible gluon colour combinations include red-antiblue, green-antired, and blue-antigreen. Because gluons carry colour charge, they can attract each other, while the colourless, electrically neutral photons cannot. Colours and anticolours attract each other, so gluons that carry one colour will attract gluons that carry the associated anticolour.
Gluons carry the strong force by moving between quarks and antiquarks and changing the colours of these particles. Quarks and antiquarks in hadrons constantly exchange gluons, changing colours as they emit and absorb gluons. Baryons and mesons are all colourless, so each time a quark or antiquark changes colour, other quarks or antiquarks in the particle must change colour as well to preserve the balance. The constant exchange of gluons and colour charge inside mesons and baryons creates a colour force field that holds the particles together.
The strong force is the strongest of the four forces in atoms. Quarks are bound so tightly to each other that they cannot be isolated. Separating a quark from an antiquark requires more energy than creating a quark and antiquark does. Attempting to pull apart a meson, then, just creates another meson: The quark in the original meson combines with a newly created antiquark, and the antiquark in the original meson combines with a newly created quark.
In addition to holding quarks together in mesons and baryons, gluons and the strong force also attract mesons and baryons to one another. The nuclei of atoms contain two kinds of baryons: protons and neutrons. Protons and neutrons are colourless, so the strong force does not attract them to each other directly. Instead, the individual quarks in one neutron or proton attract the quarks of its neighbours. The pull of quarks toward each other, even though they occur in separate baryons, provides enough energy to create a quark-antiquark pair. This pair of particles forms a type of meson called a pion. The exchange of pions between neutrons and protons holds the baryons in the nucleus together. The strong force between baryons in the nucleus is called the residual strong force.
While the strong force holds the nucleus of an atom together, the weak force can make the nucleus decay, changing some of its particles into other particles. The weak force is so named because it is far weaker than the electromagnetic or strong forces. For example, an interaction involving the weak force is 10 quintillion (10 billion billion) times less likely to occur than an interaction involving the electromagnetic force. Three particles, called vector bosons, carry the weak force. The weak force equivalent to electric charge and colour charge is a property called weak hypercharge. Weak hypercharge determines whether the weak force will affect a particle. All fermions possess weak hypercharge, as do the vector bosons that carry the weak force.
All elementary particles, except the force carriers of the other forces and the Higgs boson, interact by means of the weak force. Yet the effects of the weak force are usually masked by the other, stronger forces. The weak force is not very significant when considering most of the interactions between two quarks. For example, the strong force completely overwhelms the weak force when a quark bounces off another quark. Nor does the weak force significantly affect interactions between two charged particles, such as the interaction between an electron and a proton. The electromagnetic force dominates those interactions.
The weak force becomes significant when an interaction does not involve the strong force or the electromagnetic force. For example, neutrinos have neither electric charge nor colour charge, so any interaction involving a neutrino must be due to either the weak force or the gravitational force. The gravitational force is even weaker than the weak force on the scale of elementary particles, so the weak force dominates in neutrino interactions.
One example of a weak interaction is beta decay involving the decay of a neutron. When a neutron decays, it turns into a proton and emits an electron and an electron antineutrino. The neutron and antineutrino are electrically neutral, ruling out the electromagnetic force as a cause. The antineutrino and electron are colourless, so the strong force is not at work. Beta decay is due solely to the weak force.
The weak force is carried by three vector bosons. These bosons are designated the W+, the W-, and the Z0. The W bosons are electrically charged (+1 and –1), so they can feel the electromagnetic force. These two bosons are each other's antiparticle counterparts, while the Z0 is its own antiparticle. All three vector bosons are colourless. A distinctive feature of the vector bosons is their mass. The weak force is the only force carried by particles that have mass. These massive force carriers cannot travel as far as the massless force carriers of the three long-range forces, so the weak force acts over shorter distances than the other three forces.
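The link between a carrier's mass and a force's range can be estimated with a standard order-of-magnitude argument (not from the text): the range is roughly the reduced Compton wavelength of the carrier, ħ/(mc). The W boson mass used below (about 80.4 GeV) is a measured value supplied here as an assumption.

```python
# Rough range of a force from its carrier's mass:
#   range ~ hbar / (m c) = (hbar c) / (m c^2)
# hbar * c is about 197.3 MeV * femtometre.
HBAR_C_MEV_FM = 197.3
W_MASS_MEV = 80_400.0    # W boson rest energy, ~80.4 GeV (assumed value)

range_fm = HBAR_C_MEV_FM / W_MASS_MEV   # about 0.0025 femtometres
range_m = range_fm * 1e-15              # about 2.5e-18 metres
```

A range of a few thousandths of a femtometre, far smaller than a proton, is consistent with the text's point that the massive vector bosons confine the weak force to very short distances.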
When the weak force affects a particle, the particle emits one of the three weak vector bosons-W+, W-, or Z0 -and changes into a different particle. The weak vector boson then decays to produce other particles. In interactions that involve the W+ and W-, a particle changes into a particle with a different electric charge. For example, in beta decay, one of the down quarks in a neutron changes into an up quark and the neutron releases a W boson. This change in quark type converts the neutron (two down quarks and an up quark) to a proton (one down quark and two up quarks). The W boson released by the neutron could then decay into an electron and an electron antineutrino. In Z0 interactions, a particle changes into a particle with the same electric charge.
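The beta-decay bookkeeping above can be double-checked by summing conserved quantities on each side of the reaction n → p + e⁻ + antineutrino. The quantum numbers in this sketch are standard values, listed here as an illustration rather than taken from the text.

```python
# Each particle is a tuple: (electric charge, baryon number, lepton number).
PARTICLES = {
    "neutron": (0, 1, 0),
    "proton": (1, 1, 0),
    "electron": (-1, 0, 1),
    "antineutrino": (0, 0, -1),   # antiparticles carry opposite lepton number
}

def conserved(initial, final):
    """True if charge, baryon number, and lepton number all balance."""
    def totals(names):
        return tuple(map(sum, zip(*(PARTICLES[n] for n in names))))
    return totals(initial) == totals(final)

beta_decay_ok = conserved(["neutron"], ["proton", "electron", "antineutrino"])
```

Balancing these three numbers is why the decay must emit an antineutrino along with the electron, as the text's description of weak interactions implies.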
A quark or lepton can change into a different quark or lepton from another generation only by the weak interaction. Thus the weak force is the reason that all stable matter contains only first generation leptons and quarks. The second and third generation leptons and quarks are heavier than their first generation counterparts, so they quickly decay into the lighter first generation leptons and quarks by exchanging W and Z bosons. The first generation particles have no lighter counterparts into which they can decay, so they are stable.
Physicists call their goal of an overall theory a ‘theory of everything,' because it would explain all four known forces in the universe and how these forces affect particles. In such a theory, the particles that carry the gravitational force would be called gravitons. Gravitons should share many characteristics with photons because, like electromagnetism, gravitation is a long-range force that gets weaker with distance. Gravitons should be massless and have no electric charge or colour charge. The graviton is the only force carrier not yet observed in an experiment.
Gravitation is the weakest of the four forces, but it can become extremely powerful on a cosmic scale. For instance, the gravitational force between Earth and the Sun holds Earth in orbit. Gravity can have large effects, because, unlike the electromagnetic force, it is always attractive. Every particle in your body has some tiny gravitational attraction to the ground. The innumerable tiny attractions add up, which is why you do not float off into space. The negative charge on electrons, however, cancels out the positive charge on the protons in your body, leaving you electrically neutral.
Another unique feature of gravitation is its universality: every object is gravitationally attracted to every other object, even objects without mass. For example, the theory of relativity predicted that light should feel the gravitational force. Before Einstein, scientists thought that gravitational attraction depended only on mass. They thought that light, being massless, would not be attracted by gravitation. Relativity, however, holds that gravitational attraction depends on the energy of an object and that mass is just one possible form of energy. Einstein was proven correct in 1919, when astronomers observed that the gravitational attraction between light from distant stars and the Sun bends the path of the light around the Sun (see Gravitational Lens).
The standard model of particle physics includes an elementary boson that is not a force carrier: the Higgs boson. Scientists have not yet detected the Higgs boson in an experiment, but they believe it gives elementary particles their mass. Composite particles receive their mass from their constituent particles, and in some cases, the energy involved in holding these particles together. For example, the mass of a neutron comes from the mass of its quarks and the energy of the strong force holding the quarks together. The quarks themselves, however, have no such source of mass, which is why physicists introduced the idea of the Higgs boson. Elementary particles should obtain their mass by interacting with the Higgs boson.
Scientists expect the mass of the Higgs boson to be large compared to that of most other fundamental particles. Physicists can create more massive particles by forcing smaller particles to collide at high speeds. The energy released in the collisions converts to matter. Producing the Higgs boson, with its relatively large mass, will require a tremendous amount of energy. Many scientists are searching for the Higgs boson using machines called particle colliders. Particle colliders shoot a beam of particles at a target or another beam of particles to produce new, more massive particles.
Scientific progress often occurs when people find connections between apparently unconnected phenomena. For example, 19th-century British physicist James Clerk Maxwell made a connection between electric forces on charged objects and the force on a moving charge due to a magnet. He deduced that the electric force and the magnetic force were just different aspects of the same force. His discovery led to a deeper understanding of electromagnetism.
The unification of electricity and magnetism and the discovery of the strong and weak nuclear forces in the mid-20th century left physicists with four apparently independent forces: electromagnetism, the strong force, the weak force, and gravitation. Physicists believe they should be able to connect these forces with one unified theory, called a theory of everything (TOE). A TOE should explain all particles and particle interactions by demonstrating that these four forces are different aspects of one universal force. The theory should also explain why fermions come in three generations when all stable matter contains fermions from just the first generation.
Scientists also hope that in explaining the extra generations, a TOE will explain why particles have the masses they do. They would like an explanation of why the top quark is so much heavier than the other quarks and why neutrinos are so much lighter than the other fermions. The standard model does not address these questions, and scientists have had to determine the masses of particles by experiment rather than by theoretical calculations.
Unification of all of the forces, however, is not an easy task. Each force appears to have distinctive properties and unique force carriers. In addition, physicists have yet to describe successfully the gravitational force in terms of particles, as they have for the other three forces. Despite these daunting obstacles, particle physicists continue to seek a unified theory and have made some progress. Starting points for unification include the electroweak theory and grand unification theories.
American physicists Sheldon Glashow and Steven Weinberg and Pakistani physicist Abdus Salam completed the first step toward finding a universal force in the 1960s with their electroweak theory, now part of the standard model of particle physics. Using a branch of mathematics called group theory, they showed how the weak force and the electromagnetic force could be combined mathematically into a single electroweak force. The electromagnetic force seems much stronger than the weak force at low energies, but that disparity is due to the differences between the force carriers. At higher energies, the difference between the W and Z bosons of the weak force, which have mass, and the massless photons of the electromagnetic force becomes less significant, and the two forces become indistinguishable.
The standard model also uses group theory to describe the strong force, but scientists have not yet been able to unify the strong force with the electroweak force. The next step toward finding a TOE would be a grand unified theory (GUT), a theory that would unify the strong, electromagnetic, and weak forces (the forces currently described by the standard model). A GUT should describe all three forces as different aspects of one force. At high energies, the distinctions among the three aspects should disappear. The only force remaining would then be the gravitational force, which scientists have not been able to describe with particle theory.
One type of GUT contains a theory called supersymmetry (SUSY), first suggested in 1971. Supersymmetric theories set rules for new symmetries, or pairings, between particles and interactions. The standard model, for example, requires that every particle have an associated antiparticle. In a similar manner, SUSY requires that every particle have an associated supersymmetric partner. While particles and their associated antiparticles are either both fermions or both bosons, the supersymmetric partner of a fermion should be a boson, and the supersymmetric partner of a boson should be a fermion. For example, the fermion electron should be paired with a boson called a selectron, and the fermion quarks with bosons called squarks. The force-carrying bosons, such as photons and gluons, should be paired with fermions, such as particles called photinos and gluinos. Scientists have yet to detect these supersymmetric partners, but they believe the partners may be massive compared with known particles, and therefore require too much energy to create with current particle accelerators.
Another approach to grand unification involves string theories. British physicist Paul Dirac developed the first string theory in 1950. String theories describe elementary particles as loops of vibrating string. Scientists believe these strings are currently invisible to us because the vibrations do not occur in the four familiar dimensions of space and time; some string theories, for example, need as many as 26 dimensions to explain particles and particle interactions. Incorporating supersymmetry into string theory results in superstring theories. Superstring theories are among the leading candidates in the quest to unify gravitation with the other forces. The mathematics of superstring theories incorporates gravity into particle physics easily. Many scientists, however, do not believe superstrings are the answer, because they have not detected the additional dimensions required by string theory.
Studying elementary particles requires specialized equipment, the skill of deduction, and much patience. All of the fundamental particles-leptons, quarks, force-carrying bosons, and the Higgs boson-appear to be ‘point particles.' A point particle is infinitely small, and it exists at a certain point in space without taking up any space. These fundamental particles are therefore impossible to see directly, even with the most powerful microscopes. Instead, scientists must deduce the properties of a particle from the way it affects other objects.
In a way, studying an elementary particle is like tracking a white polar bear in a field of snow: The polar bear may be impossible to see, but you can see the tracks it left in the snow, you can find trees it clawed, and you can find the remains of polar bear meals. You might even smell or hear the polar bear. From these observations, you could determine the position of the polar bear, its speed (from the spacing of the paw prints), and its weight (from the depth of the paw prints). No one can see an elementary particle, but scientists can look at the tracks it leaves in detectors, and they can look at materials with which it has interacted. They can even measure electric and magnetic fields caused by electrically charged particles. From these observations, physicists can deduce the position of an elementary particle, its speed, its weight, and many other properties.
Most particles are extremely unstable, which means they decay into other particles very quickly. Only the proton, neutron, electron, photon, and neutrinos can be detected a significantly long time after they are created. Studying the other particles, such as mesons, the heavier baryons, and the heavier leptons, requires detectors that can take many (250,000 or more) measurements per second. In addition, these heavier particles do not naturally exist on the surface of Earth, so scientists must create them in the laboratory or look to natural laboratories, such as stars and Earth's atmosphere. Creating these particles requires extremely high amounts of energy.
Particle physicists use large, specialized facilities to measure the effects of elementary particles. In some cases, they use particle accelerators and particle colliders to create the particles to be studied. Particle accelerators are huge devices that use electric and magnetic fields to speed up elementary particles. Particle colliders are chambers in which beams of accelerated elementary particles crash into one another. Scientists can also study elementary particles from outer space, from sources such as the Sun. Physicists use large particle detectors, complex machines with several different instruments, to measure many different properties of elementary particles. Particle traps slow down and isolate particles, allowing direct study of the particles' properties.
When energetic particles collide, the energy released in the collision can convert to matter and produce new particles. The more energy produced in the collision, the heavier the new particles can be. Particle accelerators produce heavier elementary particles by accelerating beams of electrons, protons, or their antiparticles to very high energies. Once the accelerated particles reach the desired energy, scientists steer them into a collision. The particles can collide with a stationary object (in a fixed target experiment) or with another beam of accelerated particles (in a collider experiment).
Particle accelerators come in two basic types: linear accelerators and circular accelerators. Devices that accelerate particles in a straight line are called linear accelerators. They use electric fields to speed up charged particles. Traditional (non-flat-screen) television sets and computer monitors use this method to accelerate electrons.
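As an illustration of electric-field acceleration, the classical energy balance eV = ½mv² gives the speed an electron reaches after crossing an accelerating voltage. The 20 kV figure below is a typical CRT anode voltage assumed for this example, not a value from the text; at the resulting ~0.3c, a precise answer would need the relativistic formula.

```python
import math

# Classical estimate: an electron accelerated through V volts gains
# kinetic energy e*V, so v = sqrt(2 e V / m).
E_CHARGE = 1.602e-19    # elementary charge in coulombs
M_ELECTRON = 9.109e-31  # electron mass in kilograms
VOLTS = 20_000.0        # assumed CRT-style accelerating voltage

speed = math.sqrt(2 * E_CHARGE * VOLTS / M_ELECTRON)   # metres per second
fraction_of_c = speed / 2.998e8                        # fraction of light speed
```

Even a household television accelerated electrons to a substantial fraction of the speed of light, which is why research accelerators, pushing particles far closer to c, must account for relativity throughout.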
On January 1, 2000, people around the world celebrated the arrival of a new millennium. Some observers noted that the Gregorian calendar, which most of the world uses, began in AD 1, and that the new millennium therefore truly began in 2001. This detail failed to stem millennial festivities, but the issue shed light on the arbitrary nature of the way human beings have measured time for . . . well . . . several millennia.
Few people know that the fellow responsible for the dating of the year 2000 was a diminutive Christian monk who lived nearly 15 centuries ago. The Romans called him Dionysius Exiguus-literally, Dennis the Little. His stature, however, could not contain his colossal aspiration: to reorder time itself. The tiny monk's efforts paid off. His work helped establish the basis for the Gregorian calendar used today throughout the world.
Dennis the Little lived in Rome during the 6th century, a generation after the last emperor was deposed. The eternal city had collapsed into ruins: Its walls had been breached, its aqueducts were shattered, and its streets were eerily silent. A trained mathematician, Dennis spent his days at a complex now called the Vatican, writing church canons and thinking about time.
In the year that historians now know as 525, Pope John I asked Dennis to calculate the dates upon which future Easters would fall. Then, as now, this was a complicated task, given the formula adopted by the church some two centuries earlier: that Easter will fall on the first Sunday after the first full Moon following the spring equinox. Dennis carefully studied the positions of the Moon and the Sun and produced a chart of upcoming Easters, beginning in 532. A calendar beginning in the year 532 probably struck Dennis's contemporaries as strange. For them the year was either 1285, dated from the founding of Rome, or 248, based on a calendar that started with the first year of the reign of Emperor Diocletian.
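Dennis computed his tables by hand for the Julian calendar, but the same church rule (first Sunday after the first full Moon following the spring equinox) is encoded by the standard anonymous Gregorian computus, shown here as a sketch for the modern calendar. It is an illustration of the rule, not Dennis's own method.

```python
def gregorian_easter(year: int) -> tuple:
    """Anonymous Gregorian computus: returns (month, day) of Easter Sunday.

    The arithmetic approximates the lunar cycle (h, the 'epact') and then
    steps forward to the next Sunday (l) after the ecclesiastical full Moon.
    """
    a = year % 19                         # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # epact: age of the ecclesiastical Moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days forward to the next Sunday
    m = (a + 11 * h + 22 * l) // 451
    month = (h + l - 7 * m + 114) // 31
    day = (h + l - 7 * m + 114) % 31 + 1
    return (month, day)

print(gregorian_easter(2000))  # -> (4, 23): April 23, 2000
```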
Dennis approved of neither accepted date, especially not the one glorifying the reign of Diocletian, a notorious persecutor of Christians. Instead, Dennis calculated his years from the reputed birth date of Jesus Christ. Justifying his choice, Dennis wrote that he "preferred to count and denote the years from the incarnation of our Lord, in order to make the foundation of our hope better known. . . ." Dennis's preference appeared on his new Easter charts, which began with anno Domini nostri Jesu Christi DXXXII (Latin for "in the year of our Lord Jesus Christ 532"), or AD 532.
However, Dennis got his dates wrong. Modern biblical historians believe Jesus Christ was most likely born in 4 or 5 BC, not in the year Dennis called AD 1, although no one knows for sure. The real 2,000-year anniversary of Jesus' birth was therefore probably 1996 or 1997. Dennis pegged the birth of Christ to the year AD 1, rather than AD 0, for the simple reason that Roman numerals had no zero. The mathematical concept of zero did not reach Europe until some eight centuries later. So the wee abbot started with year 1, and 2,000 years from the start of year 1 is not January 1, 2000, but January 1, 2001-a date many people find far less interesting.
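The off-by-one at the heart of the millennium debate is plain arithmetic, sketched below for illustration:

```python
def full_years_elapsed(current_year: int) -> int:
    """With no year 0, the era opens at the start of AD 1, so by the
    start of a given year only (year - 1) full years have elapsed."""
    return current_year - 1

print(full_years_elapsed(2000))  # -> 1999: two full millennia have not yet passed
print(full_years_elapsed(2001))  # -> 2000: the second millennium is complete
```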
These errors, however, are hardly unique in the complicated history of the Gregorian calendar, which is essentially a story of attempts, and failures, to get time right. It was not until 1949, when Communist leader Mao Zedong seized power in China, that the Gregorian calendar became the world's most widely accepted dating system. Mao ordered the changeover, believing that replacing the ancient Chinese lunar calendar with the more accurate Gregorian calendar was central to China's march toward modernity.
Mao's order completed the world conquest of a calendar that takes its name from a 16th-century pope, Gregory XIII. Gregory earned his fame by revising the calendar already modified by Dennis and first launched by Roman leader Julius Caesar in 46 BC. Caesar, in turn, borrowed his calendar from the Egyptians, who invented their calendar some 4,000 years before that. On the long road to the Gregorian calendar, fragments of many other time-measuring schemes were incorporated-from India, Sumer, Babylon, Palestine, Arabia, and pagan Europe.
Despite persistent human efforts to track the passage of time, nearly every calendar ever created has been inaccurate. One reason is that the solar year (the precise amount of time it takes the Earth to revolve once around the Sun) runs an awkward 365.242199 days-hardly an easy number to calculate without modern instruments. Another complication is the tendency of the Earth to wobble and wiggle ever so slightly in its orbit, yanked this way and that by the Moon's elliptical orbit and by the gravitational tug of the Sun. As a result, each year varies in length by a few seconds, making the exact length of any given year extraordinarily difficult to pin down.
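The Gregorian calendar approximates that awkward fraction with its leap-year rule: every fourth year, except century years not divisible by 400. A small sketch of the rule and the average year length it produces:

```python
def is_gregorian_leap(year: int) -> bool:
    """Gregorian leap-year rule: divisible by 4, except century years,
    unless the century year is also divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 97 leap days per 400-year cycle gives the Gregorian average year length:
leap_days = sum(is_gregorian_leap(y) for y in range(1, 401))
print(leap_days)              # -> 97
print(365 + leap_days / 400)  # -> 365.2425, close to the solar year of ~365.2422
```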
If this sounds like splitting hairs, it is. Yet it also highlights some of the difficulties faced by astronomers, kings, priests, and other calendar makers, who tracked the seasons to know when to plant crops, collect taxes, or follow religious rituals.
The first efforts to keep a record of time probably occurred tens of thousands of years ago, when ancient humans in Europe and Africa peered up at the Moon and realized that its phases recurred in a steady, predictable fashion. A few people scratched what they saw onto rocks and bones, creating what may have been the world's first calendars. Heady stuff for skin-clad hominids, these calendars enabled them to predict when the silvery light would be available to hunt or to raid rival clans and to know how many full Moons would pass before the chill of winter gave way to spring.
The keepers of the world's atomic clocks recently added a leap second to UTC. Millennium watchers everywhere began wondering whether they should add a second to the countless clocks on buildings, in shops, and in homes that were counting down to the third millennium to the very second. Most, though not all, made the change, adding another second of uncertainty to the question of when the new millennium begins.
Meanwhile, the calendar invented by Caesar and Dennis the Little moves forward, rushing toward the next millennium 1,000 years from now-the progression of days, weeks, months, and years that appears to be here to
stay, despite its flaws. Other calendars have been proposed to eliminate small errors in the Gregorian calendar. Some reformers, for example, support making the unequal months uniform by updating the ancient Egyptian scheme of 12 months of 30 days each, with 5 days remaining as holidays.
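The reformers' scheme is easy to state in code. This sketch is an illustration of the 12 x 30 idea, not any official World Calendar proposal: it maps a day of the year to a month and day, with the last five days falling outside any month as holidays.

```python
def twelve_thirty_date(day_of_year: int) -> tuple:
    """Map day 1-365 to (month, day) under 12 months of 30 days each,
    with days 361-365 treated as month-less holidays."""
    if not 1 <= day_of_year <= 365:
        raise ValueError("day_of_year must be 1-365")
    if day_of_year > 360:
        return ("holiday", day_of_year - 360)
    month, day = divmod(day_of_year - 1, 30)
    return (month + 1, day + 1)

print(twelve_thirty_date(360))  # -> (12, 30): last day of the last month
print(twelve_thirty_date(365))  # -> ('holiday', 5): fifth year-end holiday
```

Every month having exactly 30 days is the scheme's whole appeal: any date falls on the same day of the month's six five-day weeks, year after year.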
During the French Revolution, the government of France adopted the
Egyptian calendar and decreed 1792 the year 1, a system that lasted until Napoleon restored the Gregorian calendar in 1806. More recently the United Nations (UN) and the Congress of the United States have reconsidered this historic alternative, calling it the World Calendar. To date, however, people seem content to use an ancient calendar designed by a Roman conqueror and an obscure abbot rather than fixing it or making it more accurate. Perhaps most of us prefer the illusion of a fixed time-line over admitting that time has meaning only because we say it does.


DUBIOSITY
                                                            
Book Three

METAPHYSICAL THINKING


In any case, we should not take anything for granted: no thoughtful conclusion in the study of consciousness should be lightly dismissed as fallacious. This is all the more true when, exercising the caution that human fallibility demands, we try to move forward toward a positive conclusion on the topic.
Many writers, along with a few well-known new-age gurus, have played fast and loose with interpretations that ground the mental in some vague sense of cosmic consciousness. However, these new-age nuances are erroneously placed: such books end up in the new-age section of a commercial bookstore, and those who purchase them expecting new-age literature will be quite disappointed.
What makes our species unique is the ability to construct a virtual world in which the real world can be imagined and manipulated in abstract forms and ideas. Evolution has produced hundreds of thousands of species with brains, of which tens of thousands have complex behavioural and learning abilities. There are also many species in which sophisticated forms of group communication have evolved. For example, birds, primates, and social carnivores use extensive vocal and gestural repertoires to structure behaviour in large social groups. Although we share roughly 98 percent of our genes with our primate cousins, the course of human evolution widened the cognitive gap between us and all other species, including our cousins, into a yawning chasm.
Research in neuroscience has shown that language processing is a staggeringly complex phenomenon that places incredible demands on memory and learning. Language functions extend, for example, into all major lobes of the neocortex: auditory information is associated with the temporal area; tactile information is associated with the parietal area; and attention, working memory, and planning are associated with the frontal cortex of the left, or dominant, hemisphere. The left prefrontal region is associated with verb and noun production tasks and with the retrieval of words representing action. Broca's area, next to the mouth-tongue region of the motor cortex, is associated with vocalization in word formation, and Wernicke's area, by the auditory cortex, is associated with sound analysis in the sequencing of words.
Lower brain regions, like the cerebellum, have also evolved in our species to help in language processing. Until recently, the cerebellum was thought to be exclusively involved with automatic or preprogrammed movements, such as throwing a ball, jumping over a high hurdle, or playing notes on a musical instrument. Imaging studies in neuroscience suggest, however, that the cerebellum is also activated during speech, particularly when a subject is making difficult word associations. The cerebellum appears to play a role in these associations by providing access to automatic word sequences and by augmenting rapid shifts in attention.
The midbrain and brain stem, situated on top of the spinal cord, coordinate the many input and output systems that play a crucial role in the dynamics of communicative functions. Vocalization has some special associations with the midbrain, which coordinates the interaction of the oral and respiratory tracts necessary to make speech sounds. Since vocalization requires synchronous activity among oral, vocal, and respiratory muscles, these functions probably connect to a central site, which appears to be the central gray area of the midbrain. The central gray area links the reticular nuclei and brain stem motor nuclei to comprise a distributed network for sound production. While human speech is dependent on structures in the cerebral cortex, as well as on rapid movement of the oral and vocal muscles, this is not true for vocalization in other mammals.
Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved separately and were eventually wired together on some neural circuit board.
Similarly, individual linguistic symbols are assigned to clusters of distributed brain areas and do not reside in any particular area. The specific sound patterns of words may be produced in dedicated regions; all the same, the symbolic and referential relationships between words are generated through a convergence of neural codes from different and independent brain regions. The processes of word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions that require input from one another. The symbolic meaning of words, like the grammar that is essential for the construction of meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressure in this new ecological niche favoured pre-adaptive changes required for symbolic communication. Nevertheless, as this communication resulted in increasingly complex behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
Although male and female hominids favoured pair bonding and created more complex social organizations in the interests of survival, the interplay between social evolution and biological evolution changed the terms of survival radically. The enhanced ability to use symbolic communication to construct social interaction eventually made this communication the largest determinant of survival. Since this communication was based on a symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, this marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
Nonetheless, if we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the actual experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.
Most experts agree that our ancestors became capable of articulate speech based on complex grammar and syntax between two hundred thousand and one hundred thousand years ago. The mechanisms in the human brain that allowed for this great achievement, however, clearly evolved over great spans of time. In biology textbooks, the lists of prior adaptations that enhanced the ability of our ancestors to use communication normally include increased intelligence, significant alterations of oral and auditory abilities, and the localization of functional representations on the two sides of the brain; the evolution of some innate or hard-wired grammar is often added as well. When we look at how our ability to use language could actually have evolved over the entire course of hominid evolution, however, the process seems both more basic and more counterintuitive than we had previously imagined.
Although we share some aspects of vocalization with our primate cousins, the mechanisms of human vocalization are quite different and evolved over great spans of time. Incremental increases in hominid brain size over the last 2.5 million years enhanced cortical control over the larynx, which originally evolved to prevent food and other particles from entering the windpipe, or trachea. This eventually contributed to the use of vocal symbolization. Humans have more voluntary motor control over sound produced in the larynx than any other vocal species, and this control is associated with higher brain systems involved in skeletal muscle control as opposed to just visceral control. As a result, humans have direct cortical motor control over phonation and oral movement, while chimps do not.
The larynx in modern humans is positioned comparatively low in the throat, which significantly increases the range and flexibility of sound production. The low position of the larynx allows greater changes in the volume of the resonant chamber formed by the mouth and pharynx and makes it easier to shift sounds to the mouth and away from the nasal cavity. As a result, the sounds that comprise the vowel components of speech become much more variable, including extremes in resonance combinations such as the "ee" sound in "tree" and the "aw" sound in "flaw." Equally important, the repositioning of the larynx dramatically increases the ability of the mouth and tongue to modify vocal sounds. This shift in the larynx also makes it more likely that food and water passing over the larynx will enter the trachea, which explains why humans are more prone to choking. Yet this disadvantage, which could have caused the shift to be selected against, was clearly outweighed by the advantage of being able to produce all the sounds used in modern language systems.
Some have argued that this removal of constraints on vocalization suggests that spoken language based on complex symbol systems emerged quite suddenly in modern humans only about one hundred thousand years ago. It is, however, far more likely that language use began with very primitive symbolic systems and evolved over time into increasingly complex systems. The first symbolic systems were not full-blown language systems, and they were probably not as flexible and complex as the vocal calls and gestural displays of modern primates. The first users of primitive symbolic systems probably coordinated most of their social communications with call and display behaviours like those of modern apes and monkeys.
Critically important to the evolution of enhanced language skills is that behavioural adaptations preceded and situated biological changes. This represents a reversal of the usual course of evolution, in which biological change precedes behavioural adaptation. When the first hominids began to use stone tools, they probably did so in a very haphazard fashion, drawing on their flexible ape-like learning abilities. Still, the use of this technology over time opened a new ecological niche in which selective pressures occasioned new adaptations. As tool use became more indispensable for obtaining food and organizing social behaviour, mutations that enhanced the use of tools probably functioned as a principal source of selection for both bodies and brains.
The first stone choppers appear in the fossil record about 2.5 million years ago, and they appear to have been fabricated with a few sharp blows of stone on stone. These primitive tools, which were hand-held and probably used to cut flesh and to chip bone to expose the marrow, were likely created by Homo habilis - the first large-brained hominid. Stone tool making is obviously a skill passed on from one generation to the next by learning, as opposed to a physical trait passed on genetically. After these tools became critical to survival, this introduced selection for learning abilities that did not exist for other species. Although the early tool makers may have had brains roughly comparable to those of modern apes, they were already beginning the process of becoming adapted for symbol learning.
The first symbolic representations were probably associated with social adaptations that were quite fragile, and any support that could reinforce these adaptations in the interest of survival would have been favoured by evolution. The expansion of the forebrain in Homo habilis, particularly the prefrontal cortex, was one of the core adaptations. Increased connectivity to brain regions involved in language processing enhanced this adaptation over time.
It is easy to imagine why incremental improvements in symbolic representations provided a selective advantage. Symbolic communication probably enhanced cooperation in the relationship of mothers to infants, allowed foraging techniques to be more easily learned, served as the basis for better coordinating scavenging and hunting activities, and generally improved the prospect of attracting a mate. As the list of domains in which symbolic communication was used became longer over time, this probably resulted in new selective pressures that served to make this communication more elaborate. After more functions became dependent on this communication, those who failed at symbol learning, or could only use symbols awkwardly, were less likely to pass on their genes to subsequent generations.
The crude language of the earliest symbol users must have been supplemented considerably by gestures and nonsymbolic vocalizations, and their spoken language probably remained a relatively closed cooperative system. Only after the hominid use of symbolic communication had evolved further did symbolic forms progressively take over functions served by nonsymbolic forms. This is reflected in modern languages: the structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.
The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The subject thus confronts the idea of a perceivable, objective spatial world that causes ideas to arise subjectively in him, and must distinguish between the changes in his perceptions that are due to his changing position within the world and the essentially stable way the world is. The idea that there is an objective world goes together with the idea that the subject is somewhere, and where he is, is given by what he can perceive.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, it is worth recalling Darwin's central insight: the different chances of survival of differently endowed offspring can account for the natural evolution of species. Nature "selects" those members of a species best adapted to the environment in which they find themselves, just as human animal breeders may select for desirable traits in their livestock and thereby control the evolution of the kind of animal they wish. In the phrase of Spencer, nature guarantees the "survival of the fittest." The Origin of Species was principally successful in marshalling the evidence for evolution rather than in providing a convincing mechanism for genetic change, and Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the "gene" as the unit of inheritance that the synthesis known as "neo-Darwinism" became the orthodox theory of evolution.
The solution to the mystery of how evolution by natural selection can shape sophisticated mechanisms is to be found in the workings of natural selection itself. The process is fundamentally very simple: natural selection occurs whenever genetically influenced variation among individuals affects their survival and reproduction. If a gene codes for characteristics that result in fewer viable offspring in future generations, that gene is gradually eliminated. For instance, genetic mutations that increase vulnerability to infection, or cause foolish risk-taking or lack of interest in sex, will never become common. On the other hand, genes that confer resistance to infection, appropriate risk-taking, and success in choosing fertile mates are likely to spread in the gene pool even if they have substantial costs.
A classic example is the spread of a gene for dark wing colour in a British moth population living downwind from a major source of air pollution. Pale moths were conspicuous on smoke-darkened trees and easily caught by birds, while a rare mutant form of the moth, whose colour closely matched that of the bark, escaped the predators' beaks. As the tree trunks became darkened, the mutant gene spread rapidly and largely displaced the gene for pale wing colour. All of which is to say that natural selection involves no plan, no goal, and no direction - just genes increasing and decreasing in frequency depending on whether individuals with those genes have, compared with other individuals, greater or lesser reproductive success.
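The gene-frequency bookkeeping described above can be sketched with a minimal one-locus selection model. The survival weights below are invented for illustration; nothing here quotes figures from the moth study.

```python
def next_frequency(p_dark: float, w_dark: float, w_pale: float) -> float:
    """One generation of selection: the dark-wing gene's new frequency is
    its share of fitness-weighted reproduction (simple haploid model)."""
    mean_fitness = p_dark * w_dark + (1.0 - p_dark) * w_pale
    return p_dark * w_dark / mean_fitness

# A rare dark mutant on smoke-darkened trees (hypothetical 20% advantage):
p = 0.01
for _ in range(50):
    p = next_frequency(p, w_dark=1.2, w_pale=1.0)
print(round(p, 2))  # after 50 generations the once-rare gene predominates
```

Reverse the weights (a pale advantage on clean bark) and the same arithmetic drives the dark gene back down, which is exactly the "no plan, no direction" point: only relative reproductive success in the current environment matters.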
The simplicity of natural selection has been obscured by many misconceptions. For instance, Herbert Spencer's nineteenth-century catch phrase "survival of the fittest" is widely thought to summarize the process, but it actually gives rise to several misunderstandings. First, survival is of no consequence by itself. This is why natural selection has created some organisms, such as salmon and annual plants, that reproduce only once, then die. Survival increases fitness only insofar as it increases later reproduction. Genes that increase lifetime reproduction will be selected for even if they result in reduced longevity. Conversely, a gene that decreases total lifetime reproduction will obviously be eliminated by selection even if it increases an individual's survival.
Further confusion arises from the ambiguous meaning of "fittest." The fittest individuals in the biological sense are not necessarily the healthiest, strongest, or fastest. In today's world, as in many of those of the past, individuals of outstanding athletic accomplishment need not be the ones who produce the most grandchildren, a measure that should be roughly correlated with fitness. To someone who understands natural selection, it is no surprise that parents are concerned not only with their children but with their children's reproduction.
A gene or an individual cannot be called "fit" in isolation but only with reference to a particular species in a particular environment. Even in a single environment, every gene involves compromise. Consider a gene that makes rabbits more fearful and thereby helps to keep them from the jaws of foxes. Imagine that half the rabbits in a field have this gene. Because they do more hiding and less eating, these timid rabbits might be, on average, somewhat less well fed than their bolder companions. If, of a hundred rabbits bedded down in the marsh awaiting spring, two-thirds of the timid ones starve to death while this is the fate of only one-third of the rabbits who lack the gene for fearfulness, the gene has been selected against. It might be nearly eliminated by a few harsh winters. Milder winters or an increased number of foxes could have the opposite effect; it all depends on the current environment.

The version of an evolutionary ethic called "social Darwinism" emphasizes the struggle for natural selection and draws the conclusion that we should assist this struggle, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
The most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, as this communication gave rise to increasingly complex and condensed social behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
Because this communication was based on symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
If the emergent reality in this mental realm cannot be reduced to, or entirely explained in terms of, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete description of the manner in which light at particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. No scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actual experience of that thought or feeling as an emergent aspect of global brain function.
If we take these two aspects of biological reality together, the movement toward a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere, likewise, is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a profound new complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Even so, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.
If we also concede that an indivisible whole contains, by definition, no separate parts and that a phenomenon can be assumed to be "real" only when it is an "observed" phenomenon, we are led to more interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront an "event horizon" of knowledge where science can say nothing about the actual character of this reality. If this is a property of the entire universe, then we must also conclude that undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or "actualized" in making acts of observation or measurement. Since the reality that exists between space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the "indivisible" whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actualized character of the indivisible whole.
The scientific implications to this extraordinary relationship between parts ( in that, to know what it is like to have an experience is to know its qualia) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When factors into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.
All that is required to embrace the alternative view of the relationship between mind and world that are consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality or has an actual existence independent of human observers or any act of observation, epistemological realism assumes that progress in science requires strict adherence to scientific mythology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear self-evident in logical and philosophical terms. Attributing any extra-scientific properties to the whole to understand is also not necessary and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. This is, in this that our distinguishing character between what can be "proven" in scientific terms and what can be reasonably "inferred" in philosophical terms based on the scientific evidence.
Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet are those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally have expertise on only one side of a two-culture divide. Perhaps, more important, many potential threats to the human future - such as, to, environmental pollution, arms development, overpopulation, and spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason - the implications of the amazing new fact of nature named for by non-locality, and cannot be properly understood without some familiarity with the actual history of scientific thought. The intent is to suggest that what is most important about this back-ground can be understood in its absence. Those who do not wish to struggle with the small and perhaps, fewer resultant amounts of back-ground implications should feel free to ignore it. Yet this material will be no more challenging as such, that the hope is that from those of which will find a common ground for understanding and that will meet again on this commonly functions as addressed to the relinquishing clasp of closure, and unswervingly close of its circle, resolve in the equations of eternity and complete of the universe of its obtainable gains for which its unification holds all that should be.
Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on complex language system that is particularly relevant for our purposes concerns consciousness of self. Consciousness of self as an independent agency or actor is predicted on a fundamental distinction or dichotomy between this self and the other selves. Self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separately distinct from the material realm. It was, the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.
In a thought experiment, instead of bringing a course of events, as in a normal experiment, we are invited to imagine one. We may tenably be able to "see" that some result's following, or that by some description is appropriate, or our inability to describe the situation may itself have some consequential consequence. Thought experiments played a major role in the development of physics: For example, Galileo probably never dropped two balls of unequal weight from the leaning Tower of Pisa, to refute the Aristotelean view that a heavy body falls faster than a lighter one. He merely asked used to imagine a heavy body made into the shape of a dumbbell, and then connecting rod gradually thinner, until it is finally severed. The thing is one heavy body until the last moment and he n two light ones, but it is incredible that this final snip alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. On that point, no consensus on the legitimate place of thought experiments, to substitute either for real experiment, or as a reliable device for discerning possibilities. Though experiments with and one dislike is sometimes called intuition pumps.
For overfamiliar reasons, of hypothesizing that people are characterized by their rationality is common, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers and painters all think, and in that respect no deductive reason that their deliberations should take any more verbal a form than this action. It is permanently tempting to conceive of this activity as for the presence inbounded in the mind of elements of some language, or other medium that represents aspects of the world. In whatever manner, the model has been attacked, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. And such of an inner present seems unnecessary, since an intelligent outcome might arouse of some principal measure from it.
In the philosophy of mind and ethics the treatment of animals exposes major problems: if other animals differ from human beings, how is the difference to be characterized? Do animals think and reason, or have thoughts and beliefs? For philosophers as different as Aristotle and Kant, the possession of reason separates humans from animals, and alone allows entry to the moral community.
For Descartes, animals are mere machines and lack consciousness or feelings. In the ancient world the rationality of animals was defended with the example of Chrysippus' dog. This animal, tracking prey, comes to a crossroads with three exits, and without pausing to pick up the scent, reasons, according to Sextus Empiricus: the animal went either by this road, or by that, or by the other; but it did not go by this or by that; therefore, it went by the other. The 'syllogism of the dog' was discussed by many writers, since in Stoic cosmology animals should occupy a place on the great chain of being some way below human beings, the only terrestrial rational agents. Philo Judaeus wrote a dialogue attempting to show, against Alexander of Aphrodisias, that the dog's behaviour does not exhibit rationality, but simply shows it following the scent; by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutarch sides with Philo; Aquinas discusses the dog, and scholastic thought in general was quite favourable to brute intelligence (in medieval times animals were commonly made to stand trial for various offences). In the modern era Montaigne uses the dog to remind us of the frailties of human reason; Rorarius undertook to show not only that beasts are rational, but that they make better use of reason than people do. James I of England defended the syllogizing dog, and Henry More and Gassendi both took issue with Descartes on the matter. Hume was an outspoken defender of animal cognition, but with the rise of the view that language is the essential manifestation of mentality, animals' silence began to count heavily against them, and they are denied thoughts altogether by, for instance, Davidson.
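The 'syllogism of the dog' is an instance of disjunctive syllogism, and the inference Sextus attributes to the animal can be given a minimal formalization (a purely illustrative sketch, here in Lean):

```lean
-- The dog's reasoning: the prey went by road A, B, or C;
-- the scent rules out A and B; therefore it went by C.
theorem dog_syllogism (A B C : Prop)
    (h : A ∨ B ∨ C) (ha : ¬A) (hb : ¬B) : C :=
  h.elim (fun a => absurd a ha)
         (fun bc => bc.elim (fun b => absurd b hb) id)
```

The debate between Philo and Alexander is precisely over whether the dog performs anything like this elimination of alternatives, or merely follows the scent.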
Dogs are frequently shown in pictures of philosophers, as symbols of assiduity and fidelity.
Descartes's first work, the Regulae ad Directionem Ingenii (1628/9), was never completed. In Holland between 1628 and 1649, Descartes first wrote, and then cautiously suppressed, Le Monde (1634), and in 1637 produced the Discours de la méthode as a preface to the treatise on mathematics and physics in which he introduced the notion of Cartesian coordinates. His best-known philosophical work, the Meditationes de Prima Philosophia (Meditations on First Philosophy), together with objections by distinguished contemporaries and replies by Descartes (the Objections and Replies), appeared in 1641. The authors of the objections were: first set, the Dutch theologian Caterus; second set, Mersenne; third set, Hobbes; fourth set, Arnauld; fifth set, Gassendi; and sixth set, Mersenne again. The second edition (1642) of the Meditations included a seventh set by the Jesuit Pierre Bourdin. Descartes's penultimate work, the Principia Philosophiae (Principles of Philosophy), published in 1644, was designed partly for use as a textbook. His last work was Les Passions de l'âme (The Passions of the Soul), published in 1649. He died in Sweden, where he contracted pneumonia, allegedly through being required to break his normal habit of late rising in order to give lessons at 5:00 a.m. His last words are supposed to have been "Ça, mon âme, il faut partir" (So, my soul, it is time to part).
All the same, Descartes's theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible.
The Cartesian doubt is the method of investigating the extent of knowledge and its basis in reason or experience, used by Descartes in the first two Meditations. It attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The secure point is eventually found in the celebrated "Cogito ergo sum": I think, therefore I am. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes maintained that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a "clear and distinct perception" in highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, "to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit."
Descartes's notorious denial that non-human animals are conscious is a stark illustration of this priority of mind. In his conception of matter Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature.
Although the structure of Descartes's epistemology, theory of mind, and theory of matter has been rejected many times, the relentless exposure of the hardest issues, the exemplary clarity, and even the initial plausibility of his work all contrive to make him the central point of reference for modern philosophy.
The term instinct (Lat., instinctus, impulse or urge) implies innately determined behaviour, inflexible to change in circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason but by instinct was common to Aristotle and the Stoics, and the inflexibility of animal behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense being social may be instinctive in human beings; yet, given what we now know about the evolution of human language abilities, our real or actualized self is clearly not imprisoned in our minds.
The self is implicitly a part of the larger whole of biological life; a human observer derives its existence from embedded relations to this whole, and constructs its reality on evolved mechanisms that exist in all human brains. This suggests that any sense of the "otherness" of self and world is an illusion, one that disguises the actual relations between the part and the whole of which it is a characterization. The self is related, as part to whole, to the totality of biological reality. A proper definition of this whole must, of course, include the evolution of the larger indivisible whole: the cosmos and the unbroken evolution of all life from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating wholes emerge, wholes whose emergent properties in turn sustain the existence of the parts.
These developments have been conditioned as much by the complications of ordinary language as by physical reality and metaphysical concerns. In the history of mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. The classical paradigm in physics resulted in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings; it draws instead upon undivided wholeness as a characteristic principle of physical reality and upon the epistemological foundations of physical theory.
The subjectivity of our mind affects our perceptions of the world, which natural science holds to be objective. Both aspects, mind and matter, can be regarded as individualized forms that belong to the same underlying reality.
Our everyday experience confirms the apparent fact that there is a dual-valued world of subjects and objects. We, as conscious, experiencing beings with personality, are the subjects, whereas everything for which we can come up with a name or designation seems to be an object, that which stands opposed to us as subjects. Physical objects are only part of the object-world. There are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Language objectifies our experience. Experiences per se are purely sensational and do not make a distinction between object and subject. Only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.
Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject, and, in the act of self-reflection, as object. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind. Our experience is already conceptualized at the time it comes into our consciousness. Our experience is negative insofar as it destroys the original pure experience. In a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects. Objects are reified negative experience. The same is true for the objective aspect of this theory: by objectifying myself I do not dispense with the subject; rather, the subject is causally and apodeictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies something. It is only the subject who can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of a dualism, as if object and subject were really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.
The Cartesian dualism posits the subject and the object as separate, independent, and real substances, both of which have their ground and origin in the highest substance of God. Cartesian dualism, however, contradicts itself: in the very fact that Descartes posits the "I," that is the subject, as the only certainty, he defies materialism, and thus the concept of some "res extensa." The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object. The object is only derived; the subject is original. This makes the object not only inferior in its substantive quality and in its essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a "res extensa," and this means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Apart from the problem of interaction between these two different substances, Cartesian dualism is therefore not suited to explaining and understanding the subject-object relation.
By denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism, or positivism, the problem is not resolved either. What the positivists did was merely verbalize the subject-object relation in linguistic forms. It was no longer a metaphysical problem, but only a linguistic one: our language has formed this object-subject dualism. Such thinkers are superficial, because they do not see that in the very act of their analysis they inevitably think in the mind-set of subject and object. By relativizing object and subject in terms of language and analytical philosophy, they avoid the elusive and problematic opposition of subject and object, which has been the fundamental question of philosophy ever since. Shunning these metaphysical questions is no solution. Excluding something by reducing it to a more material and verifiable level is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of mankind.
Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives. Every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining our empirical facts. Mysticism, on the other hand, believes that there is an original unity of subject and object. To attain this unity is the goal of religion and mysticism: man has fallen from this unity by disgrace and by sinful behaviour, and his task now is to get back on track and strive toward this highest fulfilment. Yet are we not, on the conclusion made above, forced to admit that the mystic way of thinking is also only a pattern of the mind, and that mystics, like the scientists, have their own frame of reference and methodology for explaining the supra-sensible facts most successfully?
If we assume mind to be the originator of the subject-object dualism, then we can neither confer more reality on the physical than on the mental aspect, nor deny the one in terms of the other.
The crude language of the earliest users of symbols must have consisted largely of gestures and nonsymbolic vocalizations. Their spoken language probably became relatively independent only later, as a closed cooperative system. Only after hominids who used symbolic communication emerged did vocal symbolic forms progressively take over functions served by non-vocal symbolic forms. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.
The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. What is at issue is the idea of a perceivable, objective spatial world that causes ideas to arise subjectively in the subject, whose perceptions change as he changes position within the world, against the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere within it go together; where he is, is fixed by what he can perceive.
Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved with the addition of separate modules that were eventually wired together on some neural circuit board.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be explained in these terms alone. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, this communication resulted directly in increasingly complex and intensely coordinated social behaviour. Social evolution began to take precedence over physical evolution in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
This communication was based on symbolic vocalization, which required the evolution of neural mechanisms and processes that did not evolve in any other species, and it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete description of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. No scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actualized experience of that thought or feeling as an emergent aspect of global brain function.
If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.
Even if we include both aspects of biological reality, the movement toward more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere is likewise a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a new and profound complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Even so, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.
If we also concede that an indivisible whole contains, by definition, no separate parts and that a phenomenon can be assumed to be "real" only when it is an "observed" phenomenon, we are led to some interesting conclusions. The indivisible whole whose existence is inferred from the results of the Aspect experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront here an "event horizon" of knowledge where science can say nothing about the actual character of this reality. If undivided wholeness is a property of the entire universe, then we must also conclude that it exists on the most primary and basic level in all aspects of physical reality. What we deal with in science per se, however, are manifestations of this reality, which are invoked or "actualized" in making acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred from experiment, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the "indivisible" whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actualized character of the indivisible whole.
The scientific implications of this extraordinary relationship between parts (qualia) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.
All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn here should appear self-evident in logical and philosophical terms. Nor is it necessary to attribute any extra-scientific properties to the whole in order to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. It is in this sense that we distinguish between what can be "proven" in scientific terms and what can be reasonably "inferred" in philosophical terms based on the scientific evidence.
Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally had expertise on only one side of a two-culture divide. Perhaps more important, many of the potential threats to the human future - such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with the background implications should feel free to ignore them. This material should, however, be no more challenging than the rest, and the hope is that readers will find in it a common ground for understanding, one that closes the circle on the unification this view of the universe holds within it.
Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on a complex language system, particularly relevant for our purposes, concerns consciousness of self. Consciousness of self as an independent agency or actor is predicated on a fundamental distinction or dichotomy between this self and other selves. Self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separate and distinct from the material realm. It was the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.
In a thought experiment, instead of bringing about a course of events, as in a normal experiment, we are invited to imagine one. We may then be able to "see" that some result follows, or that some description is appropriate, or our inability to describe the situation may itself have some consequence. Thought experiments played a major role in the development of physics: for example, Galileo probably never dropped two balls of unequal weight from the Leaning Tower of Pisa in order to refute the Aristotelean view that a heavy body falls faster than a lighter one. He merely asked us to imagine a heavy body made into the shape of a dumbbell, and then the connecting rod made gradually thinner, until it is finally severed. The thing is one heavy body until the last moment and then two light ones, but it is incredible that this final severing alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. There is no consensus on the legitimate place of thought experiments, whether as substitutes for real experiments or as a reliable device for discerning possibilities. Thought experiments, liked by some and disliked by others, are sometimes called intuition pumps.
For familiar reasons, it is common to suppose that people are characterized by their rationality, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is permanently tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world. Still, the model has been attacked, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. And such an inner presence seems unnecessary, since an intelligent outcome might in principle arise without it.
In the philosophy of mind, as well as in ethics, the treatment of animals exposes major problems. If other animals differ from human beings, how is the difference to be characterized? Do animals think and reason, or have thoughts and beliefs? In philosophers as different as Aristotle and Kant, the possession of reason separates humans from animals, and alone allows entry to the moral community.
For Descartes, animals are mere machines and lack consciousness or feelings. In the ancient world the rationality of animals is defended with the example of Chrysippus' dog. This animal, tracking a prey, comes to a crossroads with three exits, and without pausing to pick up the scent, reasons, according to Sextus Empiricus: the animal went either by this road, or by that, or by the other; it did not go by this one or by that one; therefore it went by the other. The ‘syllogism of the dog' was discussed by many writers, since in Stoic cosmology animals should occupy a place on the great chain of being somewhat below human beings, the only terrestrial rational agents. Philo Judaeus wrote a dialogue attempting to show, against Alexander of Aphrodisias, that the dog's behaviour does not exhibit rationality but simply shows it following the scent; by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutarch sides with Philo; Aquinas discusses the dog, and scholastic thought in general was quite favourable to brute intelligence (in medieval times it was common for animals to be made to stand trial for various offences). In the modern era Montaigne uses the dog to remind us of the frailties of human reason; Rorarius undertook to show not only that beasts are rational, but that they make better use of reason than people do. James I of England defends the syllogizing dog, and Henry More and Gassendi both take issue with Descartes on the matter. Hume is an outspoken defender of animal cognition, but with the rise of the view that language is the essential manifestation of mentality, animals' silence began to count heavily against them, and they are completely denied thoughts by, for instance, Davidson.
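The dog's reasoning is a disjunctive syllogism, and its propositional form can be stated and machine-checked. As an illustrative sketch only (written in Lean 4; the theorem name and formalization are mine, not the source's):

```lean
-- From "A or B or C", "not A", and "not B", conclude C,
-- as Chrysippus' dog does at the three-way crossroads.
theorem dog_syllogism (A B C : Prop)
    (h : A ∨ B ∨ C) (hA : ¬A) (hB : ¬B) : C :=
  h.elim (fun a => absurd a hA)                      -- first road ruled out
         (fun bc => bc.elim (fun b => absurd b hB)   -- second road ruled out
                            id)                      -- so it took the third
```

Whether the dog grasps anything like this schema, rather than merely following the scent, is of course exactly what Philo and Alexander disputed.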
Dogs are frequently shown in pictures of philosophers, as their assiduity and fidelity are a symbol of those qualities.
The term instinct (Latin instinctus, impulse or urge) implies innately determined behaviour, inflexible in response to change of circumstance, and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of their behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. It is in this sense that being social may be instinctive in human beings; yet on what we now know about the evolution of human language abilities, our real or actualized self is clearly not imprisoned in our minds.
The self is implicitly a part of the larger whole of biological life. It derives its existence from embedded relations to this whole, and constructs its reality on the basis of evolved mechanisms that exist in all human brains. This suggests that any sense of the "otherness" of self and world is an illusion, one that disguises the actual relations between the part and the whole of which it is an expression. The self, in its temporality and wholeness, is a biological reality. A proper definition of this whole cannot exclude the evolution of the larger undissectible whole: the cosmos, and the unbroken evolution of all life from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating wholes emerge, wholes whose emergent properties in turn sustain the existence of the parts.
Although ordinary language is complicated and conditioned in complex ways, some developments in its descriptive resources have been driven by physical and metaphysical concerns. In the history of mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. Understanding how the classical paradigm in physics issued in the stark Cartesian division between mind and world helps explain why that division became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings; it rests rather on an account of undivided wholeness, of the characteristic principles of physical reality, and of the epistemological foundations of physical theory.
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece rather than in the Chinese or Babylonian cultures partly because the social, political, and economic climates in Greece were more open to the pursuit of knowledge, and knowledge there was more broadly culturally accessible. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. However, it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletus. Thales apparently came to this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view "essences" underlying and unifying physical reality as if they were "substances."
Nonetheless, the belief that the mind of God as the Divine Architect permeates the workings of nature was a governing principle of scientific thought for Johannes Kepler, and physicists who read Kepler's original manuscripts today may well feel some discomfort: physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of that word. "Physical laws," wrote Kepler, "lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God's, at least as far as we can understand something of it in this mortal life."
The history of science amply testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into "customary points of view and forms of perception." The framers of classical physics derived, like the rest of us, their "customary points of view and forms of perception" from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.
A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: "We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts."
It is time for the religious imagination and the religious experience to engage the complementary truths of science in filling that silence with meaning. However, this does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to some ontology, and is in no way diminished by the lack of one. One is as free to recognize a basis for an exchange between science and religion as one is free to deny that this basis exists: there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conception of the nature of God or Being. The question of belief in an ontology remains what it has always been - a question; and the physical universe on the most basic level remains what it has always been - a riddle. The ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
Our frame of reference in what follows is the relationship between mind and world, together with its defining features and fundamental preoccupations. There is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates an alternate conception of this relationship. The essential point of attention is consciousness, and it remains central throughout our study.
But the end of this sometimes laborious journey is a conclusion that should make the trip worthwhile: there is nothing in contemporary physics or biology that compels belief in the stark Cartesian division between mind and world that some have rather aptly described as "the disease of the Western mind." To see why, let us consider the legacy in Western intellectual life of the stark division between mind and world sanctioned by René Descartes.
Descartes is often called the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this is misleading for several reasons. In the first place, Descartes' conception of philosophy was very different from our own. The term "philosophy" in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics, and medicine. Descartes' reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by "metaphysics" he meant inquiries into God and the soul and, generally, all the first things to be discovered by philosophizing. Yet while his view of nature abolished the traditional divide between heaven and earth and united the universe in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien to the world of everyday life. And there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we experience directly as distinctly human.
These foundational explorations include questions about knowledge and certainty, but even here Descartes is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved; his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
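The claim that the contours of physical reality can be laid out in three-dimensional coordinates is easy to illustrate with the Cartesian description of a sphere. The following minimal sketch (the function on_sphere is my own illustration, not anything from the source) encodes the sphere as the set of points satisfying |x - c| = r and tests whether a given point does so:

```python
import math

def on_sphere(point, center, radius, tol=1e-9):
    """Return True if `point` lies on the sphere |point - center| = radius."""
    # math.dist computes the Euclidean distance between two coordinate tuples.
    return abs(math.dist(point, center) - radius) < tol

# The point (3, 4, 0) lies on the sphere of radius 5 centred at the origin,
# since 3**2 + 4**2 + 0**2 == 5**2.
print(on_sphere((3, 4, 0), (0, 0, 0), 5.0))  # True
```

The point of the example is only that a geometric "contour" becomes an algebraic condition on coordinates, which is the heart of Descartes' analytic geometry.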
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern for its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes's stark division between mind and matter became perhaps the most central feature of Western intellectual life.
Thus the view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes became a central preoccupation in Western intellectual life. And the tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, a void that cannot be bridged or reconciled.
In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, and wholes were made up of collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.
The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which "solved the riddle of the universe," but only to replace it by another riddle: The riddle of itself. Yet, we discover the "certain principles of physical reality," said Descartes, "not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth." Since the real, or that which actually remains external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith - God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally "revealed" truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what is termed the "hidden ontology of classical epistemology." Descartes lingers in the widespread conviction that science does not provide a "place for man" or for all that we know as distinctly human in subjective reality.
The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, James, and in most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.
A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.
Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. The proof rests on the unified consciousness that I have of myself.
Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James' well-known version of the argument starts as follows: Take a sentence of a dozen words, and take twelve men and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. But the thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).
Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be difficult for us to recapture this now, but the idea that no system of material components acting together could achieve unified consciousness had a strong intuitive appeal for a long time.
The notion of the unity of consciousness was also central to one of Kant's own famous arguments, his ‘transcendental deduction of the categories'. In this argument, boiled down to its essentials, Kant claims that to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we can do, we must be able to apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called ‘modal' concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.
It was relational conceptual representation that most interested Kant and of relational concepts, he thought the concept of cause-and-effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science." The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.
Although the unity of consciousness had been at the centre of pre-20th century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to the notion. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being either a myth or something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology' into a respectable theory.
The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.
To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists talk of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.
It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know many of my properties and much of what is happening to me, at both the physical and mental levels. I also know things about my past, things I have done, and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thought about other people and other objects.
When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.
Even so, given the range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose. This is my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.
The proposed account would be a deflationary account of self-consciousness. If there is a straightforward explanation, in terms of the semantics of the first person, of what makes "self contents" immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved at least as much as solved.
This proposed account would be on a par with other noted examples, such as the redundancy theory of truth. The redundancy theory, or deflationary view of truth, claims that the predicate ‘. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p' says no more nor less than ‘p' (hence ‘redundancy'), and (2) that in less direct contexts, such as ‘everything he said was true' or ‘all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second claim translates as (∀p)(∀q)((p & (p → q)) → q), in which no notion of truth is used. It is supposed in classical (two-valued) logic that each statement has one of the two values, true or false, and not both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, and raise the issue of whether falsity is the only way of failing to be true. On this view, if a language is provided with a truth definition in the style of the semantic theory of truth, that is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or truth as shared across different languages.
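The quantified schema just given can itself be stated and verified formally. As an illustrative sketch only (written in Lean 4; the theorem name is mine), the propositional core of ‘all logical consequences of true propositions are true', rendered without any truth predicate, is:

```lean
-- (p ∧ (p → q)) → q : the truth-free rendering of
-- "all logical consequences of true propositions are true"
theorem truth_transmission (p q : Prop) : (p ∧ (p → q)) → q :=
  fun h => h.2 h.1   -- apply the implication h.2 to the premise h.1
```

This is exactly the deflationist's point: the generalization goes through by quantifying over propositions directly, with no use of a predicate ‘is true'.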
The view is similar to that of the disquotational theory of truth.
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science should be such that it asserts 'p' only when p, and discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.
It is important to stress how the redundancy or deflationary theory of self-consciousness, like any theory that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thought by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, which endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)
So how can a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best developed functionalist theory of self-reference has been set out by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, which is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since agency entails neither linguistic ability nor conscious belief. The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. To illustrate, consider a creature 'x' who is hungry and has a desire for food at time 't'. That creature has a token belief b/(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of 'x' at that time. Moreover, for b/(p) to cause 'x' to eat what is in front of it at 't', b/(p) must be a belief that 'x' has at 't'. Therefore, the utility/truth condition of b/(p) is that whatever creature has this belief is in fact facing food at the time it has it. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be "I am facing food now."
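Mellor's proposal can be caricatured as a toy causal model: a belief token is individuated by the action it causes when conjoined with a desire, and its truth condition is identified with its utility condition, the circumstance in which that action satisfies the desire. The function names below are illustrative inventions, not Mellor's notation; this is a sketch of the structure of the account, not a rendering of it.

```python
# Toy model of Mellor's functionalist account of subjective belief.
# A belief b(p) is a causal function from a desire to an action; its
# truth condition is equated with its utility condition -- the state
# of the world in which the caused action satisfies the desire.

def act(belief_held, desire_food):
    """The belief, conjoined with hunger, causes the creature to eat."""
    return "eat" if (belief_held and desire_food) else "do nothing"

def desire_satisfied(action, food_in_front):
    """Eating satisfies the desire for food only if food is really there."""
    return action == "eat" and food_in_front

def utility_condition(food_in_front):
    """The condition under which the belief-caused action succeeds.
    On Mellor's equation this is also the belief's truth condition,
    naturally expressed as 'I am facing food now'."""
    action = act(belief_held=True, desire_food=True)
    return desire_satisfied(action, food_in_front)
```

Running the model, the belief "succeeds" exactly when food is in fact in front of the believer at the time of action, which is why no first-person pronoun, and no concept of self or present, needs to figure in fixing its content.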
A belief that would naturally be expressed with these words can nevertheless be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
For in order to believe 'p', I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures that only my simultaneous hunger can provide, and only by making me eat, and only then. That is what makes my belief refer to me and to the time at which I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any "sense" of "I" or "now," to fix the reference of my subjective beliefs: causal contiguity fixes them for me.
Causal contiguity may also explain why no internal representation of the self is required, even at what other philosophers have called the subpersonal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are simply the believer and the time.
The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe things that we do in fact believe.
The contiguity law in Leibniz extends the principle that there are no discontinuous changes in nature: "natura non facit saltus," nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. For Hume, too, the contiguity of events is an important element in our interpretation of their conjunction as causal.
Functionalism's principal advocates include Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effect it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or "realization" of the program the machine is running. The principal advantages of functionalism include its fit with the way we come to know the mental states both of ourselves and of others, namely via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not have mental states might nevertheless imitate the functions that are cited. According to this criticism functionalism is too generous, and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be variably realized in causal architectures, just as much as they can be in different neurophysiological states.
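The software/hardware comparison and the point about variable realization can be put in a few lines of code: the "mental state" below is defined entirely by its causal role (typical cause, effect on behaviour), and two quite different "realizations" count as the same state because they play the same role. All the names here are invented for illustration.

```python
# A minimal sketch of the functionalist idea: a mental state is whatever
# plays a given causal role, however it is realized underneath.

def functional_role(realization, stimulus):
    """A 'pain'-like state defined purely by its role: typically caused
    by tissue damage, and causing avoidance behaviour."""
    state = realization(stimulus)            # cause -> internal state
    return "avoid" if state else "continue"  # internal state -> behaviour

# Two different 'hardware' realizations of the same role.
def neuron_realization(stimulus):
    return stimulus == "tissue damage"

def silicon_realization(stimulus):
    return stimulus in {"tissue damage"}

# Because both realizations map the same causes to the same behavioural
# effects, functionalism counts them as tokens of the same mental state.
```

The critic's worry about generosity is visible here too: anything at all that maps the right causes to the right effects, however it is built, gets counted as being in the state.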
When we confront the range of putatively self-conscious cognitive states, one might assume that there is a single ability that is presupposed. This is my ability to think about myself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are ways of thinking about myself.
Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency: this involves concepts and descriptions that can apply equally to myself and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain 'I'-thoughts.
Either both subject and object, mind and matter, are real, or both are unreal, imaginary. The assumption of a merely illusory subject or illusory object leads to dead-ends and to absurdities. It would entail an extreme form of skepticism, wherein everything is relative or subjective and nothing could be known for sure. This is not only devastating for the human mind, but also most ludicrous.
Does this leave us with the only remaining option, that both subject and object are alike real? That would again create a real dualism, which, we realized, is only created in our mind. So, what part of this dualism is not real?
To answer this, we have first to inquire into the meaning of the term "real." Reality comes from the Latin word "realitas," which could be literally translated as "thing-hood." "Res" does not only have the meaning of a material thing; it can have a lot of different meanings in Latin, most of which have little to do with materiality, e.g., affairs, events, business, a coherent collection of any kind, situation, etc. These meanings are always subjective, and therefore related to the way of thinking and feeling of human beings. Outside of the realm of human beings, reality has no meaning at all. Only in the context of conscious and rational beings does reality become something meaningful. Reality is the whole of human affairs insofar as these are related to the world around us. Reality is never the bare physical world, without the human being. Reality is the totality of human experience and thought in relation to an objective world.
Now this is the next aspect we have to analyse. Is this objective world, which we encounter in our experience and thought, something that exists on its own, or is it dependent on our subjectivity? That the subjective mode of our consciousness affects our perceptions of the objective world is conceded by most scientists. Nevertheless, they assume a real and objective world that would exist even without a human being alive or observing it. One way to handle this problem is the Kantian solution of the "thing-in-itself," which is inaccessible to our mind because of the mind's inherent limitations. This does not help us very much, but just posits some undefinable entity outside of our experience and understanding. Hegel, on the other hand, denied the inaccessibility of the "thing-in-itself" and thought that knowledge of the world as it is in itself is attainable, but only by "absolute knowing," the highest form of consciousness.
One of the most persuasive proofs of an independent objective world is the following argument from science: if we put a camera into a landscape where no human beings are present, leave the place and let the camera take some pictures automatically through a timer, and come back some days later to develop the pictures, we will find the same picture of the landscape as if we had taken it ourselves. Common sense tells us the same: when we wake up in the morning, it is highly probable, even certain, that we find ourselves in the same environment, without changes, without things having left their places uncaused.
Is this empirical argument sufficient to persuade even the most sceptical thinker that there is an objective world out there? Hardly. If a sceptic tries to uphold the position of a solipsistic monism, the above-mentioned argument would only be valid if the objects out there, camera and pictures included, were assumed not to be subjective mental constructs. Not even Berkeley assumed such an extreme position. His immaterialism was based on the presumption that the world around us is the object of God's mind; that means that all the objects are ideas in a universal mind. This is more persuasive. We could even close the gap between the religious concept of "God" and the philosophical concept by relating both of them to the modern quantum-physical concept of a vacuum. All have one thing in common: there must be an underlying reality which contains and produces all the objects. This idea of an underlying reality is, interestingly enough, a continuous line of thought throughout the history of mankind. Almost every great philosopher and every great religion assumed some kind of supreme reality. I deal with this idea in my historical account of mind's development.
We are still stuck with the problem of subject and object. If we assume that there may be an underlying reality, neither physical nor mental, neither object nor subject, but producing both aspects, we end up with the identity of subject and object. So long as there is only this universal "vacuum," nothing is yet differentiated; everything is one and the same. By a dialectical process of division, or by random fluctuations of the vacuum, elementary forms are created, which develop into more complex forms and finally into living beings with both a mental and a physical aspect. The only question to answer is how these two aspects were produced and developed. Maybe there is an infinite number of aspects, but only two are visible to us, as Spinoza postulated. And since mind does not evolve out of matter, there must have been either a concomitant evolution of mind and matter, or matter has evolved whereas mind has not. That might seem to value mind somehow above matter; but since both are aspects of one reality, both are alike significant. Science conceives the whole physical world, human beings included, to have evolved gradually from an original vacuum state of the universe (singularity). So, has mind just popped into the world at some time in the past, or has mind emerged from the complexity of matter? Neither is sustainable, and this leaves us with the possibility that the other aspect, mind, has different attributes and qualities. This can be seen empirically: we do not believe that our personality is something material, or that our emotions, our love and fear, are of a physical nature. The qualia and properties of consciousness are completely different from the properties of matter as science has defined it. By the very nature and essence of each aspect, we can therefore assume a different dialectical movement for each.
Whereas matter, by the very nature of its properties, is bound to evolve gradually and to exist in perpetual movement and change, mind, by the very nature of its own properties, is bound to a different evolution and existence. Mind as such has not evolved. The individualized form of mind in the human body, that is, the subject, can change, although in different ways than matter changes. Both aspects have their own sets of laws and patterns. Since mind is also non-local, it comprises all individual minds. Actually, there is only one consciousness, which is only artificially split into individual minds because of the connection with brain-organs, which are the means of manifestation and expression for consciousness. Both aspects are interdependent and constitute the world and the beings as we know them.
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece rather than in the Chinese or Babylonian cultures, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge and made learning more culturally accessible. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. But it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletos. Thales apparently arrived at this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view "essences" underlying and unifying physical reality as if they were "substances."
The belief that the mind of God as Divine Architect permeates the workings of nature was a first principle of scientific thought for Johannes Kepler, and most contemporary physicists would probably feel some discomfort in reading Kepler's original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of the word. "Physical laws," wrote Kepler, "lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God's, at least as far as we can understand something of it in this mortal life."
The history of science amply testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into "customary points of view and forms of perception." The framers of classical physics derived, like the rest of us, their "customary points of view and forms of perception" from macro-level visualizable experience. Thus the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.
A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: "We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts."
It is time for the religious imagination and the religious experience to engage the complementary truths of science, filling what is silence with meaning. This does not mean, however, that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to any ontology, and is in no way diminished by the lack of one. One is as free to recognize a basis for an exchange between science and religion as one is free to deny that this basis exists: there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conception of the nature of God or Being. The question of belief in an ontology remains what it has always been, a question, and the physical universe on the most basic level remains what it has always been, a riddle. And the ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
Our frame of reference is the relationship between mind and world, with its defining features and fundamental preoccupations; there is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates an alternate conception of that relationship. The essential point of attention is the nature of "consciousness," and this remains central to our study.
But at the end of this sometimes laborious journey we come to a conclusion that should make the trip worthwhile: there is nothing in contemporary physics or biology that obliges us to believe in the stark Cartesian division between mind and world that some have rather aptly described as "the disease of the Western mind." Let us begin by considering the legacy in Western intellectual life of the stark division between mind and world sanctioned by René Descartes.
Descartes is often called the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this is misleading for several reasons. First, Descartes's conception of philosophy was very different from our own. The term "philosophy" in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics, and medicine. Descartes's reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by "metaphysics" he meant inquiries into God and the soul and generally all the first things to be discovered by philosophizing. Yet while this view of nature united heaven and earth in a shared and communicable frame of knowledge, it presented us with a picture of physical reality that was totally alien from the world of everyday life. There was nothing in it that could explain or provide a foundation for the mental, or for all of direct experience that is distinctly human.
These foundational explorations include questions about knowledge and certainty, but even here Descartes is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved; his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's "Principia Mathematica" in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and principle of scientific knowledge.
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern for its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes's stark division between mind and matter became perhaps the most central feature of Western intellectual life.
Thus the view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes became a central preoccupation in Western intellectual life. The tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, a void that cannot be bridged or reconciled.
In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, with wholes made up of collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect, it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.
The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which "solved the riddle of the universe," but only to replace it by another riddle: The riddle of itself. Yet, we discover the "certain principles of physical reality," said Descartes, "not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth." Since the real, or that which actually remains external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith - God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally "revealed" truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what is termed the "hidden ontology of classical epistemology." Descartes lingers in the widespread conviction that science does not provide a "place for man" or for all that we know as distinctly human in subjective reality.
The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, James, and most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.
A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: when I consider the mind, that is to say, myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.
Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. The key premise is the unified consciousness that I have of myself.
Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James's well-known version of the argument starts as follows: take a sentence of a dozen words, take twelve men, and give one word to each. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
James generalizes this observation to all conscious states. To get dualism out of it, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed; therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).
Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be difficult for us to recapture this now, but the idea that no system of material components could achieve unified consciousness had a strong intuitive appeal for a long time.
The notion of the unity of consciousness was also central to one of Kant's own famous arguments, his ‘transcendental deduction of the categories'. In this argument, boiled down to its essentials, Kant claims that in order to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we could do, we must apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories: quantitative, qualitative, relational, and what he called ‘modal' concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.
It was relational conceptual representation that most interested Kant, and of the relational concepts he thought the concept of cause and effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause-and-effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science." The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.
Consciousness may well be the most challenging and pervasive source of problems in the whole of philosophy. Our own consciousness seems to be the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Is mine like yours? Is ours like that of animals? Might machines come to have consciousness? Is it possible for there to be disembodied consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the "I," or self, that is the spectator of this theatre? One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones: Leibniz remarked that if we could construct a machine that could think and feel, and blow it up to the size of a mill so as to be able to examine its working parts as thoroughly as we pleased, we would still not find consciousness; he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.
The nature of conscious experience has been the largest single obstacle to physicalism, behaviourism, and functionalism in the philosophy of mind: these are all views that, according to their opponents, can only be believed by feigning permanent anaesthesia. But many philosophers are convinced that we can divide and conquer: we may make progress by breaking the subject into different skills, and by recognizing that rather than a single self or observer we would do better to think of a relatively undirected whirl of cerebral activity, with no inner theatre, no inner lights, and above all no inner spectator.
Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception. (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of "sensible qualities": colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received (much of this complexity has been revealed by the difficulty of writing programs enabling computers to recognize quite simple aspects of the visual scene). The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings, and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like sense data or percepts exacerbates the tendency.
But once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there now seems little connection between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.
A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.
Idea (Gk., visible form) is a notion stretching all the way from one pole, where it denotes a subjective, internal presence in the mind, somehow thought of as representing something about the world, to the other pole, where it represents an eternal, timeless, unchanging form or concept: the concept of the number series or of justice, for example, thought of as an independent object of enquiry and perhaps of knowledge. On either reading, to be without an idea is to be without a concept, and likewise to be without a concept is to be without an idea. These two poles are not separate meanings of the term, although they give rise to many problems of interpretation, but between them they define a space of philosophical problems. On the one hand, ideas are that with which we think, or, in Locke's terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences. On the other, ideas provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato's theory of "Forms" is a celebration of the objective and timeless existence of ideas as concepts, and in his hands ideas are reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the Timaeus, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.
Together with a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and a belief that thinking is well explained as the manipulation of images, this conception was developed by Locke, Berkeley, and Hume into a full-scale view of the understanding as the domain of images, although they were all aware of anomalies that were later regarded as fatal to the doctrine. The defects in the account were exposed by Kant, who realized that the understanding needs to be thought of more in terms of rules and organizing principles than of any kind of copy of what is given in experience. Kant also recognized the danger of the opposite extreme (that of Leibniz) of failing to connect the elements of understanding with those of experience at all (Critique of Pure Reason).
It has become more common to think of ideas, or concepts, as dependent upon social and especially linguistic structures, rather than as the self-standing creatures of an individual mind, but the tension between the objective and the subjective aspects of the matter lingers on, for instance in debates about the possibility of objective knowledge, of indeterminacy in translation, and of identity between the thoughts people entertain at one time and those that they entertain at another.
To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term "idea" was formerly used in the same way, but is now avoided because of its association with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term. Frege regarded predicates as incomplete expressions, in the same way that an expression for a function, such as sin . . . or log . . ., is incomplete. Predicates refer to concepts, which are themselves "unsaturated," and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.
Mental states have contents: a belief may have the content that I will catch the train, a hope may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something – a particular object, or property, or relation, or another entity.
Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of May Smith, or as the person located in a certain room now. More generally, a concept c is distinct from a concept d if a thinker can rationally believe that something is c without believing that it is d. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by "that . . ." clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.
Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; none the less, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen's Pictures, was a spy: we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person's conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not that conception is correct, it is quite intelligible for someone to reject it by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.
A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy - and perhaps even some of our contemporaries - are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought "I think," containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of the two sorts of theory are distinct, each is required to give an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how a concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.
A fundamental question for philosophy is: what individuates a given concept - that is, what makes it the concept it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question. An alternative addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept "and" is individuated by this condition: it is the unique concept C to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses A and B, A C B can be inferred; and from any premiss A C B, each of A and B can be inferred. Again, a relatively observational concept such as "round" can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept that are not based on perception to those that are. A statement that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
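The two clauses of this possession condition can be displayed as natural-deduction rules (a schematic rendering: the letter C for the conjunctive concept follows the text above, but the rule format is the standard one for conjunction, not anything specific to this author):

```latex
% Introduction: from any two premisses A and B, A C B can be inferred
\frac{A \qquad B}{A \; C \; B}
% Elimination: from any premiss A C B, each conjunct can be inferred
\frac{A \; C \; B}{A} \qquad \frac{A \; C \; B}{B}
```

The proposal is then that "and" is the unique concept C for which the thinker finds exactly these transitions primitively compelling, without basing them on any further inference or information.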
A possession condition for a particular concept may actually make use of that concept. The possession condition for "and" does not. We can also expect to use relatively observational concepts in specifying the kinds of experience that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker's mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.
Sometimes a family of concepts has this property: it is not possible to master any one member of the family without mastering the others. Two families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers (there are 0 so-and-so's, there is 1 so-and-so, . . .); and the family consisting of the concepts "belief" and "desire." Such families have come to be known as "local holisms." A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such conditions involving the thinker, C1, and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated, and the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. It can also be argued from intuitive cases that even though the thinker's non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker's social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker's social relations, in particular his linguistic relations.
Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that depends in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker's reasons for making judgements. A thinker's visual perception can give him good reason for judging "That man is bald": it does not by itself give him good reason for judging "Rostropovich is bald," even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . . ) which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker's previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement about a newly encountered object. The judgement is correct if the new object has the property that in fact makes the judgemental practices mentioned in the possession condition yield true judgements, or truth-preserving inferences.
Despite the fact that the unity of consciousness had been at the centre of pre-20th-century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how those elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being treated either as a myth or as something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology' into a respectable theory.
The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.
To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists talk of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.
It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know many of my properties and much of what is happening to me, at both the physical and the mental level. I also know things about my past, things I have done, and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects.
When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.
Even so, given the range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose: the ability to think about oneself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.
The proposed account would be on a par with other noted examples of deflationary accounts. If there is a straightforward explanation, in terms of the semantics of the first person, of what makes the relevant "self contents" immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved at least as much as solved.
This proposed account would be on a par with other noted examples, such as the redundancy theory of truth. The redundancy (or deflationary) theory of truth claims that the predicate ‘. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p' says no more nor less than ‘p' (hence, ‘redundancy'), and (2) that in less direct contexts, such as ‘everything he said was true', or ‘all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second of these can be translated as ‘(∀p, q)((p & (p → q)) → q)', where there is no use of a notion of truth.
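Written out with quantification over propositions, the two generalizing uses might be rendered as follows (one standard formalization among several; the propositional-quantifier notation is illustrative):

```latex
% 'Everything he said was true'
\forall p\,\bigl(\text{he said that } p \rightarrow p\bigr)

% 'All logical consequences of true propositions are true'
\forall p\,\forall q\,\bigl((p \land (p \rightarrow q)) \rightarrow q\bigr)
```

In neither formula does a truth predicate occur; the propositional quantifiers do the generalizing work that ‘true' does in the English sentences.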
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth' or ‘truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wishes it to be the case that whenever science holds that p, then p; discourse is to be regulated by the principle that it is wrong to assert p when not-p.
It is important to stress that the redundancy or deflationary theory of self-consciousness, like any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thought by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)
So how could a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best-developed functionalist theory of self-reference has been put forward by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, that is, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since "agency entails neither linguistic ability nor conscious belief." The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions with utility conditions, where the utility conditions of a belief are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. To illustrate, consider a creature x which is hungry and has a desire for food at time t. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of x at that time. Moreover, for b(p) to cause x to eat what is in front of it at t, b(p) must be a belief that x has at t. Therefore, the utility/truth condition of b(p) is that the creature that has this belief faces food at the time it has it. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be "I am facing food now."
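The equation of truth conditions with utility conditions can be put schematically (the notation here is illustrative, not Mellor's own): where a token belief b, conjoined with a desire d, causes an action a,

```latex
% Truth condition = utility condition: the circumstances in which
% the action a, caused by b together with d, satisfies d
\mathrm{TC}(b) \;=\; \mathrm{UC}(b, d)
  \;=\; \{\,\text{circumstances in which } a \text{ satisfies } d\,\}
```

For the hungry creature x at t, the action is eating what is in front of it, and that action satisfies the desire for food just in case there is food in front of x at t; hence the truth condition given above.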
Yet a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
For in order to believe p, I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures only my simultaneous hunger can provide, and only by making me eat, and only then. That is what makes my belief refer to me and to the time at which I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any "sense" of "I" or "now," to fix the references of my subjective beliefs: causal contiguity fixes them for me.
Causal contiguity, on this explanation, is why no internal representation of the self is required, even at what other philosophers have called the sub-personal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are simply the believer and the time.
The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe things that we do in fact believe.
The contiguity law in Leibniz extends the principle that there are no discontinuous changes in nature: "natura non facit saltum," nature makes no leaps. Leibniz used the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. For Hume, likewise, the contiguity of events is an important element in our interpreting their conjunction as causal.
Functionalism's best-known advocates include Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to descriptions of a machine in terms of software, which remain silent about the underlying hardware or "realization" of the program the machine is running. The principal advantages of functionalism include its fit with the fact that the way we know of mental states, both our own and those of others, is via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous, and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be "variably realized" in causal architectures, just as much as they can be in different neurophysiological states.
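The software/hardware comparison can be made concrete. A functional description fixes only the pattern of transitions from inputs and current states to outputs and next states; any number of different "realizations" can satisfy it. A minimal sketch (the vending-machine states and both realizations are invented for illustration):

```python
# Sketch of variable realization: one functional description (a transition
# table), two different "hardware" realizations that both satisfy it.

# Functional description: (state, input) -> (output, next state)
SPEC = {
    ("ready", "coin"):   ("wait",     "paid"),
    ("paid",  "button"): ("dispense", "ready"),
}

class DictMachine:                    # realization 1: a lookup table
    def __init__(self):
        self.state = "ready"
    def step(self, inp):
        out, self.state = SPEC[(self.state, inp)]
        return out

class IfElseMachine:                  # realization 2: quite different "innards"
    def __init__(self):
        self.paid = False
    def step(self, inp):
        if not self.paid and inp == "coin":
            self.paid = True
            return "wait"
        if self.paid and inp == "button":
            self.paid = False
            return "dispense"
        raise ValueError("undefined transition")

for machine in (DictMachine(), IfElseMachine()):
    print([machine.step(i) for i in ("coin", "button")])
    # both print ['wait', 'dispense']: same functional role, different realization
```

From the functional point of view the two machines are in the same "mental" state whenever they agree on all such input-output dispositions, regardless of how those dispositions are implemented.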
In logic and mathematics, a function is a relation that associates members of one class X with some unique member y of another class Y. The association is written y = f(x); the class X is called the domain of the function, and Y its range. Thus "the father of x" is a function whose domain includes all people and whose range is the class of male parents, but the relation "son of x" is not a function, because a person can have more than one son. "Sine x" is a function from angles to numbers; the area of a circle is a function of its diameter; and so on. Functions may take sequences x1, . . ., xn as their arguments, in which case they may be thought of as associating a unique member of Y with any ordered n-tuple as argument. Given the equation y = f(x1, . . ., xn), x1, . . ., xn are called the independent variables, or arguments, of the function, and y the dependent variable or value. Functions may be many-one, meaning that different members of X may take the same member of Y as their value, or one-one, when to each member of X there corresponds a distinct member of Y. A function with domain X and range within Y is also called a mapping from X to Y, written f: X → Y. If the function is such that (1) if x, y ∈ X and f(x) = f(y), then x = y, then the function is an injection from X to Y. If also (2) if y ∈ Y, then (∃x)(x ∈ X & y = f(x)), then the function is a bijection of X onto Y. A bijection is both an injection and a surjection, where a surjection is any function whose domain is X and whose range is the whole of Y. Since functions are relations, a function may also be defined as a set of ordered pairs <x, y>, where x is a member of X and y of Y.
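For finite sets, the definitions above can be checked mechanically. A sketch, modelling a function as a dict of ordered pairs (the example sets and mappings are ours):

```python
# Checking the properties defined above for functions between finite sets.
# A function is modelled as a dict of ordered pairs {x: f(x)}.

def is_function(pairs, domain, codomain):
    """Every member of the domain maps to exactly one member of the codomain."""
    return set(pairs) == set(domain) and all(v in codomain for v in pairs.values())

def is_injection(pairs):
    """Condition (1): f(x) = f(y) implies x = y, i.e. no value is shared."""
    values = list(pairs.values())
    return len(values) == len(set(values))

def is_surjection(pairs, codomain):
    """Every member of the codomain is the value of some argument."""
    return set(pairs.values()) == set(codomain)

def is_bijection(pairs, codomain):
    """A bijection is both an injection and a surjection."""
    return is_injection(pairs) and is_surjection(pairs, codomain)

X = {1, 2, 3}
Y = {"a", "b", "c"}
f = {1: "a", 2: "b", 3: "c"}   # one-one and onto: a bijection
g = {1: "a", 2: "a", 3: "b"}   # many-one: 1 and 2 share the value "a"

print(is_bijection(f, Y))  # True
print(is_injection(g))     # False
```

Note that g is still a perfectly good function (every argument has a unique value); it merely fails to be one-one, which is exactly the many-one case described above.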
One of Frege's logical insights was that a concept is analogous to a function, and a predicate analogous to the expression for a function (a functor). Just as "the square root of x" takes us from one number to another, so "x is a philosopher" refers to a function that takes us from persons to truth-values: true for values of x who are philosophers, and false otherwise.
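Frege's analogy translates directly into code: a functor maps numbers to numbers, while a predicate maps objects to truth-values. A minimal sketch, where the extension of the concept is an invented stand-in:

```python
# Frege's analogy: a functor is a function from objects to objects;
# a predicate is a function from objects to truth-values.

import math

def square_root(x):
    """Functor: number -> number."""
    return math.sqrt(x)

# Hypothetical extension of the concept, for illustration only:
PHILOSOPHERS = {"Socrates", "Kant", "Frege"}

def is_a_philosopher(x):
    """Concept: object -> truth-value."""
    return x in PHILOSOPHERS

print(square_root(9))               # 3.0
print(is_a_philosopher("Frege"))    # True
print(is_a_philosopher("Pegasus"))  # False
```

Both definitions have exactly the same shape; the only difference, as Frege observed, is the range into which they map their arguments.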
Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though in some ways the latter is the position's weaker point, most of the criticism has been directed at the former, and much of it has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus much anti-foundationalist artillery has been directed at the "myth of the given": the idea that things are given to consciousness in a pre-conceptual, pre-judgemental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is that whatever justifies a belief does so only if the subject is justified in supposing that the putative justifier has what it takes to do so. Hence, since the justification of the original belief depends on the justification of the higher-level belief just specified, the justification is not immediate after all. But we may lack adequate support for any such higher-level requirement for justification; and if it were imposed, we would be launched on an infinite regress, for a similar requirement would hold equally for the higher-level belief that the original justifier was efficacious.
The reflexive considerations initiated by functionalism suggest that an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing simpler tasks in co-ordination with each other. The sub-systems may be envisaged as homunculi, or small, relatively stupid agents. The archetype is the digital computer, where a battery of switches capable of only one response (on or off) can make up a machine that can play chess, write dictionaries, etc.
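The "battery of switches" image can be illustrated directly: components each capable of only one dumb response can be wired into something that computes. A toy sketch, building one-bit addition out of nothing but NAND switches:

```python
# Stupid two-input "switches" (NAND gates) composed into a half-adder:
# a toy instance of simple sub-systems co-ordinating to perform a
# smarter task than any of them can perform alone.

def nand(a, b):
    """The only primitive: output is off just when both inputs are on."""
    return 0 if (a and b) else 1

def xor(a, b):
    """Exclusive-or, built entirely from NAND switches."""
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    """Conjunction, also built from NAND switches."""
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    """Add two one-bit numbers: returns (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
# 1 + 1 = (0, 1): sum 0, carry 1
```

No individual switch "adds" anything; the arithmetic resides entirely in the pattern of co-ordination, which is the homunculi point in miniature.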
Confronted with this range of putatively self-conscious cognitive states, one might assume that there is a single ability presupposed by them all: the ability to think about oneself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding; these too are ways of thinking about myself.
Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency. This involves concepts and descriptions that can apply equally to me and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain "I"-thoughts.
What is an "I"-thought? Obviously, an "I"-thought is a thought that involves self-reference. I can think an "I"-thought only by thinking about myself. Equally obviously, though, this cannot be all there is to say on the subject. I can think thoughts that involve self-reference but are not "I"-thoughts. Suppose I think that the next person to get a parking ticket in the centre of Toronto deserves everything he gets. Unbeknown to me, the very next recipient of a parking ticket will be me. This makes my thought self-referring, but it does not make it an "I"-thought. Why not? The answer is simply that I do not know that I will be the next person to get a parking ticket in downtown Toronto. If A is that unfortunate person, then there is a true identity statement of the form I = A, but since I do not know that this identity holds, I cannot be ascribed the thought that I deserve everything I will get. And so I am not thinking genuine "I"-thoughts, because one cannot think a genuine "I"-thought if one is ignorant that one is thinking about oneself. So it is natural to conclude that "I"-thoughts involve a distinctive type of self-reference: the sort of self-reference whose natural linguistic expression is the first-person pronoun "I," because one cannot use the first-person pronoun without knowing that one is thinking about oneself.
This is still not quite right, however, because thought contents can be specified either directly or indirectly. The claim, then, is that all the cognitive states under consideration presuppose the ability to think about oneself. This is not merely something they all have in common; it is what underlies them all. We can see in more detail what this suggestion amounts to. The claim is that what makes all these cognitive states modes of self-consciousness is the fact that they all have contents that can be specified either directly by means of the first-person pronoun "I" or indirectly by means of the indirect reflexive pronoun "he," such that they are first-person contents.
The class of first-person contents is not a homogeneous class. There is an important distinction to be drawn between two different types of first-person content, corresponding to two different modes in which the first person can be employed. The existence of this distinction was first noted by Wittgenstein in an important passage from The Blue Book: there are two different cases in the use of the word "I" (or "my"), which may be called "the use as object" and "the use as subject." Examples of the first kind of use are these: "My arm is broken," "I have grown six inches," "I have a bump on my forehead," "The wind blows my hair about." Examples of the second kind are: "I see so-and-so," "I try to lift my arm," "I think it will rain," "I have a toothache" (Wittgenstein 1958).
Wittgenstein explains the distinction as hinging on whether or not the judgements involve the identification of a person: the cases of the first category involve the recognition of a particular person, and there is in these cases the possibility of an error, or as he puts it, "The possibility of an error has been provided for . . . It is possible that, say in an accident, I should feel a pain in my arm, see a broken arm at my side, and think it is mine when really it is my neighbour's. And I could, looking into a mirror, mistake a bump on his forehead for one on mine. On the other hand, there is no question of recognizing a person when I say I have a toothache. To ask 'are you sure that it is you who have pains?' would be nonsensical" (Wittgenstein 1958).
Wittgenstein is here drawing a distinction between two types of first-person content. The first type, which he describes as involving the use of "I" as object, can be analysed in terms of more basic propositions. Suppose the thought "I am B" involves such a use of "I." Then we can understand it as a conjunction of the following two thoughts: "a is B" and "I am a." We can term the former the predication component and the latter the identification component (Evans 1982). The reason for breaking the original thought down into these two components is precisely the possibility of error that Wittgenstein stresses in the second passage quoted: one can be quite correct in predicating that someone is B, yet mistaken in identifying oneself as that person.
To say that a statement "a is B" is subject to error through misidentification relative to the term "a" means the following is possible: The speaker knows some particular thing to be "B," but makes the mistake of asserting "a is B" because, and only because, he mistakenly thinks that the thing he knows to be "B" is what "a" refers to (Shoemaker 1968).
The point, then, is that one cannot be mistaken about who is being thought about. In one sense, however, Shoemaker's criterion of immunity to error through misidentification relative to the first-person pronoun (henceforth simply "immunity to error through misidentification") is too restrictive, because it is formulated in terms of knowledge. Beliefs with first-person contents that are immune to error through misidentification tend to be acquired on grounds that usually result in knowledge, but they do not have to be. The definition of immunity to error through misidentification needs to be adjusted to accommodate them, by formulating it in terms of justification rather than knowledge.
The connection to be captured is between the sources and grounds from which a belief is derived and the justification there is for that belief. Beliefs and judgements are immune to error through misidentification in virtue of the grounds on which they are based. The category of first-person contents being picked out is not defined by its subject matter or by any point of grammar. What demarcates the class of judgements and beliefs that are immune to error through misidentification is the evidence base from which they are derived, or the information on which they are based. So, to take an example, my thought that I have a toothache is immune to error through misidentification because it is based on my feeling a pain in my teeth. Similarly, the fact that I am consciously perceiving you makes my belief that I am seeing you immune to error through misidentification.
A suggestive definition, then: to say that a statement "a is b" is subject to error through misidentification relative to the term "a" means that the following is possible: The speaker is warranted in believing that some particular thing is "b," because his belief is based on an appropriate evidence base, but he makes the mistake of asserting "a is b" because, and only because, he mistakenly thinks that the thing he justifiably believes to be "b" is what "a" refers to.
First-person contents that are immune to error through misidentification can be mistaken, but they do have a basic warrant in virtue of the evidence on which they are based, because the fact that they are derived from such an evidence base is closely linked to the fact that they are immune to error through misidentification. Of course, there is room for considerable debate about what types of evidence base are correlated with this class of first-person contents. It seems, then, that the distinction between different types of first-person content can be characterized in two different ways. We can distinguish between those first-person contents that are immune to error through misidentification and those that are subject to such error. Alternatively, we can discriminate between first-person contents with an identification component and those without such a component. For present purposes, these different formulations pick out the same classes of first-person contents, although in interestingly different ways.
All first-person contents subject to error through misidentification contain an identification component of the form "I am a," which itself employs the first-person pronoun. Of that embedded employment we can ask: does it or does it not have an identification component? Clearly, on pain of an infinite regress, at some stage we must arrive at an employment of the first-person pronoun that does not presuppose an identification component. The conclusion, then, is that any first-person content subject to error through misidentification will ultimately be anchored in a first-person content that is immune to error through misidentification.
It is also important to stress how any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun is motivated by an important principle that has governed much of the development of analytical philosophy: the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language. The principle has been defended most vigorously by Michael Dummett.
Thoughts differ from whatever else is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed (Dummett 1978).
Dummett goes on to draw the clear methodological implications of this view of the nature of thought: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view in the employment of language rather than to anything hidden in the mind, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.
Many philosophers would want to dissent from the strong claim that the philosophical analysis of thought through the philosophical analysis of language is the fundamental task of philosophy. But there is a weaker principle that is very widely held, which may be called the Thought-Language Principle.
A distinction must be drawn between two different roles that the pronoun "he" can play in such clauses. On the one hand, "he" can be employed in a proposition that the antecedent of the pronoun (i.e., the person named just before the clause in question) would have expressed using the first-person pronoun. In such a situation, "he" is functioning as a quasi-indicator; others have described this as the indirect reflexive pronoun. When "he" is functioning as an ordinary indicator, by contrast, it picks out an individual in such a way that the person named just before the clause need not realize the identity of himself with that person. Clearly, then, the class of first-person contents is not a homogeneous class.
There is an obvious but central question that arises in considering the relation between the content of thought and the content of language, namely whether there can be thought without language, as theories like the functionalist theory allow. The conception of thought and language that underlies the Thought-Language Principle is clearly opposed to the proposal that there might be thought without language, but it is important to realize that neither the principle nor the considerations adverted to by Dummett directly yield the conclusion that there cannot be thought in the absence of language. According to the principle, the capacity for thinking particular thoughts can only be analysed through the capacity for linguistic expression of those thoughts. On the face of it, however, this does not yield the claim that the capacity for thinking particular thoughts cannot exist without the capacity for their linguistic expression.
That thoughts are wholly communicable does not entail that thoughts must always be communicated, which would be an absurd conclusion. Nor does it appear to entail that there must always be a possibility of communicating thoughts in any sense in which this would be incompatible with the ascription of thoughts to a non-linguistic creature. There is, after all, a distinction between thoughts being wholly communicable and it being actually possible to communicate any given thought. Without that further conclusion, there seems no way of getting from a thesis about the necessary communicability of thought to a thesis about the impossibility of thought without language.
A subject has distinguished self-awareness to the extent that he is able to distinguish himself from the environment and its contents. He has distinguished psychological self-awareness to the extent that he is able to distinguish himself as a psychological subject within a contrast space of other psychological subjects. What does this require? The notion of a non-conceptual point of view brings together the capacity to register one's distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these two elements must be considered together is that the richness of the self-awareness that accompanies the capacity to distinguish the self from the environment is directly proportional to the richness of the awareness of the environment from which the self is being distinguished. No creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment; and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story, for it leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it is composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.
First, take a step back from primitive self-consciousness to consider the account of self-identifying first-person thoughts given in Gareth Evans's The Varieties of Reference (1982). Evans places considerable stress on the connection between the form of self-consciousness that he is considering and a grasp of the spatial nature of the world. As far as Evans is concerned, the capacity to think genuine first-person thoughts implicates a capacity for self-location, which he construes in terms of a thinker's capacity to conceive of himself as identified with an element of the objective order. Though one need not endorse the particular gloss that Evans puts on this, the general idea is very powerful. The relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. Evans tends to stress the dependence in one direction between these notions:
The very idea of a perceivable, objective, spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive (Evans 1982).
But the main thrust of his work is very much that the dependence holds equally in the opposite direction.
It seems that this general idea can be extrapolated and brought to bear on the notion of a non-conceptual point of view. What binds together the two apparently discrete components of a non-conceptual point of view is precisely the fact that a creature's self-awareness must be awareness of itself as a spatial being that acts upon and is acted upon by the spatial world. Evans's own gloss on how a subject's self-awareness is awareness of himself as a spatial being involves the subject's mastery of a simple theory explaining how the world makes his perceptions what they are, with principles like "I perceive such and such; such and such holds at p; so (probably) I am at p" and "I am at p; such and such does not hold at p; so I cannot really be perceiving such and such, even though it appears that I am" (Evans 1982). This is not very satisfactory, though. If the claim is that the subject must explicitly hold these principles, then it is clearly false. If, on the other hand, the claim is that these are the principles of a theory that a self-conscious subject must tacitly know, then the claim seems very uninformative in the absence of a specification of the precise forms of behaviour that can only be explained by the ascription of such a body of tacit knowledge. We need an account of what it is for a subject to be correctly described as possessing such a simple theory of perception. The point, however, is simply that the notion of a non-conceptual point of view as presented can be viewed as capturing, at a more primitive level, precisely the same phenomenon that Evans is trying to capture with his notion of a simple theory of perception.
But it must not be forgotten that a vital role here is played by the subject's own actions and movements. Appreciating the spatiality of the environment and one's place in it is largely a function of grasping one's possibilities for action within the environment: realizing that if one wants to return to a particular place from here one must pass through these intermediate places, or that if there is something there that one wants, one should take this route to obtain it. That this is something Evans's account could potentially overlook emerges when one reflects that a simple theory of perception of the form described could be possessed and deployed by a subject that moves only passively. The notion of a non-conceptual point of view, by contrast, incorporates the dimension of action by emphasizing the particularities of navigation.
Moreover, stressing the importance of action and movement indicates how the notion of a non-conceptual point of view might be grounded in the self-specifying information for action to be found in visual perception. Think here particularly of the concept of an affordance, so central to Gibsonian theories of perception. One important type of self-specifying information in the visual field is information about the possibilities for action and reaction that the environment affords the perceiver; affordances are non-conceptual first-person contents. The development of a non-conceptual point of view clearly involves certain forms of reasoning, and we will not have a full understanding of the notion of a non-conceptual point of view until we have an explanation of how this reasoning can take place. The spatial reasoning involved in developing a non-conceptual point of view upon the world is largely a matter of calibrating different affordances into an integrated representation of the world.
In short, any learned cognitive ability must be constructible out of more primitive abilities already in existence. There are good reasons to think that the perception of affordances is innate. And so, if the perception of affordances is the key to the acquisition of an integrated spatial representation of the environment, via the recognition of affordance symmetries, affordance transitivities, and affordance identities, then it is perfectly conceivable that the capacities implicated in an integrated representation of the world could emerge non-mysteriously from innate abilities.
Nonetheless, there are many philosophers who would be prepared to countenance the possibility of non-conceptual content without accepting the use of a theory of non-conceptual content to solve the paradox of self-consciousness. This is a more substantial task. The methodology adopted rests on the first of the marks of content, namely that content-bearing states serve to explain behaviour in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner (the functionalist theory of self-reference). This is not to allow that every instance of intentional behaviour in which there are no such law-like connections between sensory input and behavioural output needs to be explained by attributing to the creature in question representational states with first-person contents. Even so, many such instances of intentional behaviour do need to be explained in this way, and this offers a way of establishing the legitimacy of non-conceptual first-person contents. What would satisfactorily demonstrate that legitimacy would be the existence of forms of behaviour in pre-linguistic or non-linguistic creatures for which inference to the best explanation (which in this context includes inference to the most parsimonious explanation) demands the ascription of states with non-conceptual first-person contents.
Non-conceptual first-person contents and the pick-up of self-specifying information in the structure of exteroceptive perception provide very primitive forms of non-conceptual self-consciousness, even forms that can plausibly be viewed as in place from birth or shortly afterward. The dimension along which forms of self-consciousness must be compared is the richness of the conception of the self that they provide. A crucial element in any form of self-consciousness is how it enables the self-conscious subject to distinguish between self and environment: what many developmental psychologists term self-world dualism. In this sense, self-consciousness is essentially a contrastive notion. One implication is that a proper assessment of the richness of a conception of the self requires that we take into account the richness of the conception of the environment with which it is associated. In the case of both somatic proprioception and the pick-up of self-specifying information in exteroceptive perception, there is a relatively impoverished conception of the environment. One prominent limitation is that both are synchronic rather than diachronic: the distinction between self and environment that they offer is effective at a time but not over time. The contrast between propriospecific and exterospecific invariants in visual perception, for example, provides a way for a creature to distinguish between itself and the world at any given moment, but this is not the same as a conception of oneself as an enduring thing, distinguishable over time from an environment that also endures over time.
