This article appears in the August 18, 2023 issue of Executive Intelligence Review.

May 1985

The Continuing Hoax of ‘Artificial Intelligence’:
The Multi-Billion-Dollar Boondoggle

Editor’s Note: This article originally appeared in EIR, Vol. 12, No. 19, May 14, 1985.

Like the medieval alchemists who sought to change base metals into gold, today’s proponents of the “Artificial Intelligence” hoax are ignorant of the most basic scientific principles. AI attempts to comprehend human intelligence by means of mathematical analysis based upon formal logic. Shown: “The Alchemist,” by Pieter Bruegel the Elder.

“Intelligence consists not of solving problems; intelligence consists of not having problems,” said Berkeley, California philosophy professor Hubert Dreyfus on April 17, 1985, at a “Symposium on the Humanities” in Austin, Texas. Dreyfus is a Massachusetts Institute of Technology-trained specialist in what is called Artificial Intelligence. Austin, Texas’s microelectronics center is a hotbed of the multi-billion-dollar boondoggle called Artificial Intelligence.

“Artificial Intelligence” (AI) was launched at the Massachusetts Institute of Technology’s RLE laboratories during the 1950s, as an effort to demonstrate that human intelligence could be simulated, and surpassed, by digital-computing devices. AI research was launched by a circle including Professor Margaret Mead, who operated through a seed-funding conduit known as the Josiah Macy, Jr. Foundation, a circle continuing the “Unification of the Sciences” project launched circa 1938 under the joint leadership of Bertrand Russell and the University of Chicago’s Robert M. Hutchins. The basis selected for the attempted achievement of AI combined the notions of “artificial intelligence” developed by MIT’s Professor Norbert Wiener and Princeton’s John von Neumann, as popularized in Wiener’s Cybernetics (1948).

Professor Dreyfus devoted his remarks at Austin to his own explanation of the reasons he suspects AI research continues to be the failure it has been repeatedly admitted to be since the late 1950s. Yet, despite repeated confessions of failure by AI specialists, the share of total research-and-development grants and professorships poured into this multi-billion-dollar boondoggle has grown over the past 30 years.

Since the close of the 1950s, when the first admissions of the uselessness of AI research were fielded, the variety of explanations for the failure has been as varied as the descriptions of the elephant by the fabled committee of blind men. Dreyfus’s purported explanation is noteworthy as being among the most pathetic heard from such specialists so far. At the Austin conference, Dreyfus said that AI research is permanently stuck, because AI has been based on the premise that human intelligence consists of “reasoning” things out. The “human dimension, involving flesh and feelings,” Dreyfus said, “goes beyond reasoning.” Rules and reasoning, he said, are only the most basic aspect of human behavior. On such premises, he concluded that the objective is to avoid all problems which cannot be solved on this rudimentary level.

The pouring of billions of dollars into research projects which have been repeatedly proven absurd, is a prevailing fact of the so-called “social sciences,” such as anthropology-ethnology, sociology, and psychology. There is perhaps no instance of research grants for physical science in which repeatedly proven absurdity has been so richly funded as in the instance of AI. The reason this AI boondoggle has been so long tolerated by non-scientific circles is obvious enough: the superstitious mystique of Zbigniew Brzezinski’s “technetronic age,” the same mystique which overwhelms the science-ignorant technician confronted with the programming of a digital computer. The reasons trained scientists do not blow the whistle on this multi-billion-dollar boondoggle are a bit more complicated.

Dreyfus’s recent explanation for the continued bankruptcy of AI research has the merit of pointing almost directly to the pseudo-scientific beliefs among the scientifically-educated personnel who devote their professions to this useless effort. Turn Dreyfus’s explanation upside-down. Instead of saying that human intelligence is not rational, simply recognize that the definition of “human intelligence” adopted by AI professionals is absurd.

There exists an established body of scientific knowledge which does enable us to define “human intelligence’s” rudimentary principles in mathematical-physics terms of reference. It is relevant that the famous David Hilbert threw Norbert Wiener, the author of Cybernetics and co-author of modern “information theory,” out of a pre-World War I seminar at Göttingen University. The grounds for this expulsion were Wiener’s stubbornly persistent scientific incompetence. Wiener’s incompetence is essentially identical with the leading features of John von Neumann’s efforts to apply a neo-positivist definition of formalist mathematics to a “theory of brain-function.” Wiener and von Neumann were among the leading opponents of the kind of physics which does explain many characteristic features of human intelligence, opponents of the line of development in physics running through Leonardo da Vinci, Leibniz, Euler, and Gauss.

This identifies part of the reasons for the failures of that multi-billion-dollar boondoggle called AI. However, the problem is not merely the awe for the Wiener-Shannon and von Neumann doctrines of “information theory” among science-educated specialists. The undeserved aspects of the reputations of Wiener and von Neumann appear to be valid among most science-educated professionals today, because the textbooks and classrooms, of secondary schools as well as universities, are saturated with the effluvia of so-called formal logic.

Today’s student knows almost nothing of the most important developments, and related controversies, within the history of modern science, and does not know that the foundations of modern science, insofar as its fundamentals are developed today, were established by a succession of scientific workers whose work is known only in bits and snatches to textbook students of today. These include, notably, Leonardo, Leibniz, Euler, the celebrities of the Monge-Carnot École Polytechnique, and the circle of Carl Gauss in nineteenth-century Germany. Modern secondary and university students of mathematics are so consistently “brainwashed,” by drill and grill, in the delusion that natural science is a subject of neo-Aristotelian formal logic, that they must tend to conclude that most of the fundamental discoveries upon which physics today depends were the product of a method of inquiry “totally unscientific” by today’s academic standards in mathematical formalism.

The record of bankruptcy of the AI boondoggle, is useful only to the degree it exhibits the impossibility of comprehending, even defining, human intelligence or human “information,” by methods of mathematical analysis based upon formal logic. It exhibits, at least implicitly, the principle, that what is called “logic” today, and human reasoning, are incompatible notions. If this failure of AI were examined against the backdrop of Leibniz’s denunciations of Descartes, and the raging controversies within nineteenth-century science, the fact is most clearly presented to us, that Gauss, Riemann, Weierstrass, and Cantor, were correct, and their opponents, the late-nineteenth-century proponents of “statistical mechanics,” represented the wrong turn in scientific method and education.

What keeps the multi-billion-dollar AI boondoggle going, is the reluctance of modern “secular humanists” to admit that the laws of the universe are not consistent with a statistical theory derived from formal logic. The AI crowd is not only historically (Russell, Hutchins, Kurt Lewin, Carnap, Mead, et al.) “secular humanist”; excepting scientists influenced strongly by religious convictions to the contrary, the scientific community at large is dominated increasingly by a Vienna-Circle-flavored sort of neo-positivist “secular humanism.” The case of Charles Darwin’s manager, the Thomas Huxley who coined the term “agnosticism,” indicates the role of British “radical empiricism” in shaping “secular humanist” thought in the United States. Von Neumann typifies the neo-positivist influences of the “Vienna Circle” upon U.S. universities’ science departments. These typify the leading proponents of the “statistical” faction in mathematical physics and other specialties over the recent 130-odd years. Today, especially among scientific professionals, the “secular humanist” and “statistical” standpoints are not only strongly correlated, but are functionally interdependent.

In this report, we contrast the proper definition of “human intelligence,” as situated within the history of modern science, with the absurd assumptions, rooted in “statistical theory,” on which Wiener and von Neumann founded the multi-billion-dollar AI boondoggle.

Living versus Dead Matter

It is elementary, for any effort to define human intelligence in the language of a mathematical physics, to begin with the fact that human beings are living organisms. The precondition for defining living organisms, is to locate a fundamental and infallible distinction, between living and non-living processes in nature generally. Once that is accomplished, we must next isolate some infallible, fundamental distinction between human and animal behavior.

The distinction between living and non-living processes was first rigorously defined for modern science by the work of Luca Pacioli and Leonardo da Vinci, at the close of the fifteenth century. The continuing line of inquiry along the lines established by Leonardo runs through the work on optical characteristics of living processes by Louis Pasteur, into lines of inquiry in what is called “non-linear spectroscopy” today. The physics which is uniquely suited to living processes so defined, is the physics based on the methods employed by Carl Gauss, Bernhard Riemann, and Karl Weierstrass, during the nineteenth century.

Before turning to the issues of the distinctions between human and animal behavior, we summarize the nature of the case for living processes. We begin with a summary of the connection of the initial discoveries in biology, by Pacioli and Leonardo, to the preceding discoveries of Nicholas of Cusa. This is the starting-point from which all successful approaches to definitions of living processes have proceeded, from then to the present time.

Modern science began during the middle of the fifteenth century, with the elaboration of rigorous principles of scientific method by Cardinal Nicholas of Cusa, e.g., Cusa’s De Docta Ignorantia (On Learned Ignorance). For example, Cusa was the first modern thinker to present a heliocentric hypothesis on the ordering of the solar system (not Copernicus). The central feature of Cusa’s own original discoveries was his discovery of a conception called today “the isoperimetric principle” of topology, as later refined by the work of Leibniz, Leonhard Euler, and the Bernoullis. A clear understanding of the implications of this isoperimetric principle is indispensable for comprehending the work of Pacioli and Leonardo, and the later work of Pasteur and “non-linear spectroscopy” today. Without a grasp of these implications, the mere existence of the biologist teaching biology at the head of the classroom remains a subject of profound mathematical uncertainty.

Cusa proved that both the axioms and the deductive method of the famous Elements of Euclid, are intrinsically absurd. Neither points nor “straightness” have any self-evident form of existence in the universe. The isoperimetric theorem proves conclusively, that the only form of self-evident existence of form and matter in our universe, is circular action.

However, circular action does not mean the simple drawing of a circle, as by aid of a compass. To define a “straight line,” we must create a diameter for circular action, by “folding” a circle perfectly against itself. This folding of the primitive circle perfectly against itself, introduces the first principle of measurement, measurement by one-half. To create a point, we must fold a half-circle against itself. By circular action, acting upon these two additional elements created by circular action, the point and the line, everything that can be constructed within Euclidean geometry is constructed, using nothing but construction, without deductive logic.

Therefore, the minimal condition for producing the shapes constructible within Euclidean space is what is best described as triply-self-reflexive circular action. By self-reflexive, we mean that triply-self-reflexive circular action acts upon everything constructed by such circular action. By triply-self-reflexive, we mean that circular action is acting triply upon circular action itself.

This is simply illustrated, as a definition, in the following way.

At every arbitrarily small interval of circular action, the same kind of circular action is acting, as if at “right angles,” upon that circular action. At every arbitrarily small interval of the second moment of circular action, in turn, a third of the same kind of circular action is acting upon the second, as if at “right angles” to both the first and the second.

This is the minimal form of isoperimetric action sufficient to define a Euclidean space of construction, the minimal preconditions required to generate a “straight line” and a “point.”
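For the modern reader, the following is a minimal numeric sketch (ours, not the article’s) of circular action acting upon circular action: three rotations about mutually orthogonal axes, each composed upon the result of the others. Only rotation is presupposed; no straight-line primitive enters the construction.

```python
import numpy as np

def rotation(axis, theta):
    """Rotation matrix for circular action about one of three
    mutually orthogonal axes (x, y, or z), by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Three circular actions, each acting "at right angles" upon the others.
R = rotation("z", 0.7) @ rotation("y", 0.4) @ rotation("x", 1.1)

# The composite action carries a point into any orientation in space,
# which is the sense in which triple rotation suffices to generate
# the configurations of three-dimensional space.
p = np.array([1.0, 0.0, 0.0])
print(R @ p)
```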

Taking one aspect of triply-self-reflexive circular action, the following correction must be added to the picture.

Human perception is limited to perception of changes (transformations) occurring in a finite interval of space-time. Perception of “instantaneous” objects is not possible: “Instantaneous” objects of perception do not exist. Therefore, we can perceive nothing, except under the condition, that the act of perception ends at a slightly later point in time than it begins.

Therefore, the simplest conceivable form of circular action in physical space-time is in the form of a cylindrical helix. Or, if the action increases or decreases at a constant rate, the circular action occurs as a self-similar-spiral action on the surface of a growing cone. The first, helical geometry is the axiomatic basis for what is called Fourier Analysis. A geometry based axiomatically upon conic self-similar-spiral action is a Gaussian (constructive) geometry. Other terms for “Gaussian geometry” are the Gaussian “geometry of the continuous manifold,” or Gaussian “functions of a complex variable.”
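As an illustration, here is a minimal parametric sketch, under our own arbitrary choice of constants, of the two forms just named: the cylindrical helix underlying Fourier Analysis, and the conic self-similar spiral of the Gaussian geometry.

```python
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 200)

# Cylindrical helix: circular action displaced uniformly in time.
helix = np.column_stack((np.cos(t), np.sin(t), 0.25 * t))

# Conic self-similar spiral: circular action whose radius grows
# exponentially, tracing the surface of a growing cone. It is
# "self-similar" because scaling the curve by e**(2*pi*b) merely
# advances it one full turn.
b = 0.1
r = np.exp(b * t)
spiral = np.column_stack((r * np.cos(t), r * np.sin(t), r))

print(helix[-1], spiral[-1])
```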

The history of modern science’s progress toward a physics theory of living processes, is summarily as follows.

The first step was accomplished by Luca Pacioli and Leonardo. Pacioli, working from the starting-point of Cusa’s “Minimum-Maximum (isoperimetric) Principle,” reworked the scope of the Tenth through Thirteenth books of Euclid’s Elements, to reconstruct a proof cited in Plato’s Timaeus dialogue: the proof, that only five kinds of regular polyhedra can be constructed in Euclidean space. During the eighteenth century, Leonhard Euler developed a more rigorous proof of this. Out of this work, Leonardo developed the foundations of modern optics and hydrodynamics, including a forerunner of Riemannian stereographic projection, spherical projective perspective.

As Euler demonstrated the point rigorously, of the five constructible regular polyhedra, four are simply constructed from one, the regular dodecahedron, whose surfaces are regular pentagons. The construction of both the dodecahedron and the regular pentagon is based upon the preceding construction of a derivative of circular action, called the Golden Section. The Golden Section’s general significance is that it defines the boundaries of constructability within visible (“Euclidean”) space. The proof, earlier reported in Plato’s Timaeus, that, in visible space, only five kinds of regular polyhedra can be constructed, reflects an efficient limit determining all forms of constructability in “Euclidean space.”
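The five-fold limit can be checked directly. The following sketch (an illustration of the standard combinatorial argument, not Euler’s own text) enumerates Schläfli symbols {p, q} — p-gonal faces, q meeting at each vertex — and shows that Euler’s polyhedron formula V − E + F = 2 admits exactly five solutions.

```python
# For a regular polyhedron {p, q}: q*V = 2*E = p*F, and Euler's
# formula V - E + F = 2 forces 1/p + 1/q > 1/2. Only five pairs
# (p, q) with p, q >= 3 satisfy this inequality.
for p in range(3, 8):
    for q in range(3, 8):
        if 2 * (p + q) > p * q:                    # 1/p + 1/q > 1/2
            E = 2 * p * q // (2 * p + 2 * q - p * q)
            V, F = 2 * E // q, 2 * E // p
            assert V - E + F == 2                  # Euler's formula
            print(f"{{{p},{q}}}: V={V}, E={E}, F={F}")
```

Running this prints the tetrahedron, cube, octahedron, dodecahedron, and icosahedron, and nothing else.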

The first step toward founding biological science, was accomplished by Pacioli and Leonardo, by showing that the elementary distinction of living from non-living processes, is that living processes’ forms and morphology of function, are congruent with the Golden Section.

Until the nineteenth century, at least approximately so, the explanation of the reason for this morphological distinction between living and non-living processes was that the so-called Fibonacci series’ ratios for successive intervals converge upon the ratios of self-similar growth given by the Golden Section. The Fibonacci Series is the classical geometrical method for estimating population-growth, developed by Leonardo of Pisa. The increase of the number of cells in a tissue, for example, is a form of self-similar growth, comparable in broad terms to the self-similar growth of populations at a constant set of birth and death rates.
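The convergence of the Fibonacci ratios upon the Golden Section is easily exhibited; a minimal sketch:

```python
# Successive Fibonacci ratios F(n+1)/F(n) converge on the Golden
# Section, phi = (1 + sqrt(5)) / 2, as the text describes.
a, b = 1, 1
for n in range(12):
    a, b = b, a + b
    print(f"{b}/{a} = {b / a:.8f}")
print("phi  =", (1 + 5 ** 0.5) / 2)
```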

Pacioli and Leonardo showed, that the shapes determined by growth of plants and animals, including human beings, were elaborated in forms consistent with the harmonic ratios determined by the Golden Section.

A century after Leonardo’s work, this generalization about living processes had to be modified slightly, because of the discoveries of a leading follower of Cusa and Leonardo, Johannes Kepler. Kepler constructed an hypothesis for the determination of the Solar System’s orbits, based directly on Cusa’s arguments for an heliocentric solar system, and on the work of Pacioli and Leonardo on the Platonic solids and the Golden Section. Kepler’s solar hypothesis employed astronomical data to demonstrate empirically that the laws of the universe as a whole are coherent with the harmonics determined by the Golden Section. Since Gauss later proved conclusively that Kepler’s astrophysics was the correct choice, and Kepler’s critics absurd, the universe has such a proven similarity in its underlying principles to living processes; in modern verbiage, we must say that the universe as a whole is essentially “negentropic,” not “statistically entropic.”

This implication of Kepler’s work was later extended by Bernhard Riemann, who insisted and showed, that, at its extremes, in astrophysics and microphysics, the laws of the universe must be characteristically “negentropic.” Hence, the contrast between living and non-living processes applies only to the very large experimental domain between the astrophysical and microphysical extremes. With that qualification, Pacioli’s and Leonardo’s discoveries respecting the distinction between living and non-living processes, are essentially in force to the present time.

That principle of living processes is valid as far as it goes, but inadequate. The deeper implications of a triply-self-reflexive circular action, are not yet incorporated within it, in that form.

There exist, as visible images, forms which are not constructible within Euclidean space. We say that these are “incommensurable,” in the sense that only forms which can be rigorously constructed are “commensurable”; any other meaning of “commensurable” is either trivial or false. Those forms which are not commensurable with construction in Euclidean space all reduce axiomatically to what are called “transcendental functions”: functions whose constructability requires such mutually coherent transcendentals as pi [π], the Eulerian logarithmic base, and trigonometric functions. This principled limitation of visible (“Euclidean”) space was already a central feature of the work of Plato.
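The non-terminating character of these transcendental values can be exhibited through the classical convergent series for π and for the Eulerian logarithmic base; a sketch, using Leibniz’s series and Euler’s factorial series, both standard:

```python
from math import factorial

# Leibniz's series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
pi_approx = 4 * sum((-1) ** k / (2 * k + 1) for k in range(200000))

# The Eulerian logarithmic base: e = sum over n of 1/n!
e_approx = sum(1 / factorial(n) for n in range(20))

# Neither value terminates or repeats: each is reached only as the
# limit of an unbounded process, which is why no finite Euclidean
# construction can produce it.
print(pi_approx, e_approx)
```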

In the simplest terms of reference, transcendental functions reflect the fact, that physical space-time is dominated by a rotational orientation in space, as triply-self-reflexive circular action requires. The so-called Cartesian coordinates, must not be seen as axes of reference for primitively “straight-line” action; they must be interpreted as axes of triply-self-reflexive rotation, and Cartesian space seen also as a misleading interpretation of a space whose geometry is that of a Riemannian sphere.

The significance of transcendental values is that they correspond, in physics, to self-similar-spiral action, as the primitive (elementary) form of action, in cylindric or conic functions, as in Fourier Analysis or Gaussian geometry, respectively. In these geometries, some (Fourier cylindric) or all (Gaussian manifold) of the transcendental values are constructible with the same efficiency as constructible forms in visible (Euclidean) space.

It happens, that all forms in visible space, which are projections of conic forms of self-similar-spiral action, have everywhere the metrical characteristics determined by the Golden Section. This is the physics-basis of proof supporting J.S. Bach’s values of “equal tempering” in well-tempered polyphony, for example. That is the proper mathematical-physics meaning of the cited discoveries of Pacioli and Leonardo. The adequate explanation for characteristic distinctions of living from non-living processes, must therefore be sought out within the Gaussian domain.
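For reference, the equal-tempered scale mentioned here divides the octave into twelve equal frequency ratios; a minimal sketch (the 440 Hz reference pitch is a modern convention, our assumption, not in the original):

```python
# Twelve-tone equal temperament: every semitone is the same ratio,
# 2**(1/12), so that twelve steps compound to exactly one octave.
A4 = 440.0  # modern reference pitch (an assumption for illustration)
semitone = 2 ** (1 / 12)
for n in range(13):
    print(f"step {n:2d}: {A4 * semitone ** n:8.2f} Hz")
```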

To accomplish that, one must first consider the most general kind of problem raised by Gauss’s discoveries in geometry. Triply-self-reflexive conic self-similar-spiral action defines a range of physics-functions which are efficiently continuous as physical processes, which are nonetheless characterized mathematically by a more or less dense frequency of mathematical discontinuities. In elementary geometry, we already face the problem of algebraic discontinuities, in such forms as points, lines, surfaces, and solids. In physics, these confront us in such forms as what are mistakenly interpreted as “elementary particles,” and in other forms. The center of the elementary problems confronting the effort to elaborate a Gaussian physics, is to show mathematically how processes which are efficiently continuous in physical space-time, are continuous in some way despite the generation of what are often increasing densities of mathematical discontinuities.

This problem situates the task of restating Leonardo’s distinction of living from non-living processes, in terms of the Gaussian manifold. The problem of densely discontinuous mathematical functions corresponding to efficiently continuing physical processes, was the central feature of the work of such collaborators and followers of Gauss as Lejeune Dirichlet, Bernhard Riemann, Karl Weierstrass, and Georg Cantor. That is the physics-significance of the work on topology accomplished by Dirichlet, Riemann, Weierstrass, and Cantor. It is within the framework of the admittedly incomplete accomplishments of these figures, that the distinction between living and non-living processes must be resituated.

The Bearing of Economic Science

To continue our summary account of the problems of defining “life” and “human intelligence” from the vantage-point just identified, it is most useful to examine the way in which human intelligence shows itself to be the characteristic feature of economic processes. By “economic science,” we signify that founding of economic science, by Leibniz, on which the principles of the United States’ founding “American System of political-economy” (Alexander Hamilton), were premised: not the mere “money theories” popularly taught and practiced as “economics” in the United States and Europe today.

The most characteristic feature of human society, is implicitly defined thus. Whereas, a primitive form of human society is capable of sustaining a worldwide population of not more than approximately 10 million individuals, there exist nearly 5 billion today. This growth in the potential relative population-density of the human species, by nearly three orders of magnitude, is the most characteristic distinction of the human from all inferior species. No lower species could willfully increase its potential relative population-density by a single order of magnitude. No lower species can willfully improve its day-to-day behavior by aid of advances in scientific and related knowledge.
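The arithmetic behind “nearly three orders of magnitude” is, for reference:

$$\log_{10}\!\left(\frac{5\times10^{9}}{10^{7}}\right)=\log_{10}500\approx 2.7 .$$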

That circumscribes the range of phenomena to be examined, as reflective of “human intelligence.”

Consider only, more narrowly, the effect on population growth of the irregularly-paced but more or less continuous explosion of science and industrial society’s technology, since Cusa set the progress of science into motion during the middle of the fifteenth century. (The case can be generalized, for the study of the technological dynamics of earlier forms of society.) All advances in technology, and of potential relative population-density, occur principally as technological advances in qualities of producers’ goods, in an increasingly energy-intensive and capital-intensive mode of alteration of basic economic infrastructure and work-places. The source of these advances in technology is the improved power of the individual human mind, to generate and to assimilate efficiently new conceptions flowing from fundamental scientific progress.

Those aspects of the potential creative powers of the human mind, which bear upon the generation of fundamental scientific discoveries, are, in this way, an efficient physical cause in the universe.

In the case that a modern form of agro-industrial society is maintaining a constant rate of technological progress, in an energy-intensive, capital-intensive mode of production of physical goods, the most elementary picture of such economic growth is a picture of an efficiently continuous function subsuming an increasing density of mathematical discontinuities. Doubly self-reflexive, conic, self-similar-spiral action is the minimal requirement for portraying the effect of constant technological progress upon such an economy. Instead of a simple cone, the growth of per-capita potential relative population-density generates a bell-mouthed horn, whose side-view cross-section describes an hyperbolic curve, seeming to zoom off into Cartesian “infinity.” The central axis of that horn represents a uniform time-scale. Obviously, the action is efficiently continuous, past the interval of that flaring of the hyperbolic curve toward “infinity.” This is exemplary of physical processes which are efficiently continuous, despite discontinuities subsumed by such processes.
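A minimal numeric sketch (our constants are arbitrary) of the hyperbolic flaring described: a function of the form N(t) = C/(t* − t), which remains smooth up to the singularity t* and blows up in finite time.

```python
# Hyperbolic growth with a finite-time singularity at t_star:
# N(t) = C / (t_star - t) flares toward "infinity" as t -> t_star.
C, t_star = 1.0, 10.0
for t in [0.0, 5.0, 9.0, 9.9, 9.99, 9.999]:
    print(f"t = {t:6.3f}   N(t) = {C / (t_star - t):12.2f}")
```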

Without going into greater detail than is directly relevant in this report on AI’s incompetency, the following remarks on this economic-process function are sufficient.

Riemann’s contributions to fundamental advances in physics, center upon his appreciation of the treatment of this problem, of dense discontinuities generated within an efficiently continuous function, by, chiefly, Dirichlet and Weierstrass. (The question of the determination, “enumerability,” of such discontinuities within an arbitrarily small interval of a function, including seemingly “arbitrary” functions, is a central topic of the 1871–1883 contributions of Cantor.) As early as his 1854 habilitation dissertation, “On the Hypotheses Which Underlie Geometry,” Riemann indicates the general nature of the solution to the problem we have described for economic processes. In a famous later paper, his 1859 “On the Propagation of Plane Air Waves of Finite Magnitude,” predicting supersonic shock-waves and isentropic compression of plasmas, Riemann defines an exemplary case for the application of the relevant principle earlier tentatively supplied in his 1854 dissertation. When a true singularity, such as the indicated sort of discontinuity, is generated within an efficiently continuous process, that determines an alteration of the metrical characteristics of the local (or larger) physical space-time of the process affected. The characteristic action of the continuous function continues to operate, but the action occurs in a physical space-time whose metrical characteristics have been altered, as the instance of supersonic flight illustrates most simply.

In the sort of idealized economic process, which we have portrayed, at the flaring mouth of the hyperbola, a new hyperbolic curving, in an altered “economic physical space-time,” begins. The second curve flares into a discontinuity, as did the first, with an analogous continuation of the function. And, so forth and so on. Relative to the time-axis, the interval between these discontinuities becomes shorter. This shortening of the interval defines an harmonic series.

The degree of higher organization of the economy, has therefore the following gross characteristics. First, the effect of technological progress (under stipulated, ideal conditions), is to generate a series of ever-more-frequent “Riemannian shock-wave-like” discontinuities. Second, the increasing density of such discontinuities, so generated, is harmonically determined. Finally, the increasing density of such harmonically ordered discontinuities of the function, is the measure of increasingly higher organization of the process.
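Purely as an illustration of the harmonic ordering described (the spacing law Δn = Δ1/n is our simplification, not a result stated in the text), successive discontinuities might fall as follows:

```python
# Epochs whose spacing shrinks harmonically: the n-th interval is
# Delta_1 / n, so the discontinuities become ever more frequent.
t, delta1 = 0.0, 8.0
for n in range(1, 9):
    t += delta1 / n
    print(f"discontinuity {n}: t = {t:7.3f}, interval = {delta1 / n:6.3f}")
```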

The relationships are made more sensible by removing the implicit assumptions of a Cartesian schema: by projecting the function onto a Riemannian sphere, the lines of discontinuity obviously do not shoot off into a Cartesian sort of “infinity.” The proper design of the function, and the significance of a Riemann-Weierstrass Surface for plotting the function, are more or less obvious at that point.

The economy which corresponds to this function, will describe, in projection, an idealized harmonic growth consistent with the Golden Section. The economy thus appears to be a single living organism, to the effect that sick and dying economies correspond, in these terms of reference, to sick and dying forms of living organisms. The U.S. economy, under the influence of the now-accelerating “post-industrial” trend of the recent 20 years, is such a sick and dying organism.

Leibniz already showed that “technology” was a matter of the form of internal organization of productive processes. His version of the Principle of Least Action, employed this feature of empirically demonstrable technological progress, to assist in proving that Cusa’s isoperimetric principle was also the elementary form of physical cause-and-effect action in the universe.

The advances in efficiently employed technology, which are the sole ultimate source of economic growth, represent the imposition of forms created in the individual human mind, upon the productive process. Hence, rigorous analysis of the function of technological progress is an implicit reflection of the forms of creative mental activity deserving of the title, “human intelligence.”

These Gaussian forms of action, which we have outlined for economic processes, and, implicitly, for human intelligence, are the same forms of action necessarily characteristic of living processes generally. However, in man, the “negentropic” principle characteristic of biological activity, the same principle which Kepler implicitly proved to underlie the ordering of the universe, occurs also as an efficient activity of thought itself, as distinct from merely biological activity. It is this efficient form of thought on which the continued existence of society depends. This form of thought constitutes the essence of what is properly defined as “human intelligence.”

Ludwig Boltzmann’s Error

Now, we turn our attention to the roots of those popular delusions, which have aided in the perpetuation of the multi-billion-dollar AI boondoggle. We begin with a few more or less indispensable references to the historical roots of the problem.

Today, it is a popular form of ignorance, to trace the emergence of modern science from Francis Bacon’s founding of British empiricism. In fact, the utter fraud and triviality of Bacon’s writings, is efficiently symptomized by the fact, that the fruitless Bacon adopted as the target of his attacks the most profoundly fruitful scientist in all English history to date, William Gilbert. Galileo’s fraudulent experimental concoctions, the beginning of the effort to overturn the work of Kepler, and the Gnostic cultist Fludd’s attacks on Kepler, are the beginnings of modern empiricism. The comprehensive attack upon science begins with René Descartes, of which the work of Newton is merely a parody on this account.

The key to this emergence of empiricism and positivism is that it was begun over a century after the foundations of modern science were established, and that each of the principal figures involved in this countercultural attack on science, Descartes included, was an agent of the Venice-directed forces behind the disastrous 1516–1653 Counter-Reformation. During the sixteenth century, the forces of the Counter-Reformation merely attempted to stamp out science, by aid of the Inquisition. At the beginning of the seventeenth century, the emphasis on inquisitional methods was replaced by methods of attempted corruption through cooptation.

In this respect, the Leibniz-Newton controversy is of relatively trivial significance, essentially a by-product of efforts by the Duke of Marlborough’s faction, to prevent Leibniz’s appointment as the prime minister of England. It is the fierce fight against Descartes’ evil, first by the circles of Desargues, Fermat, and Pascal, followed by a full-fledged attack by Leibniz, which is the key to the internal history of science since the beginning of the eighteenth century. The case of Newton’s follies, is merely adjunct and essentially peripheral to the issue of Descartes.

By the beginning of the nineteenth century, Newton was broadly and rightly discredited outside Britain, and Descartes was almost in total disrepute even in France itself. Yet, Cartesian principles dominate scientific teaching and opinion today. How this rather abrupt, nineteenth-century change occurred, involves two distinct, but closely correlated phases of action against the tradition of Leibniz.

Descartes’ reputation was reestablished in 1815, by decree of the pro-feudalist forces behind the 1815 Treaty of Vienna. Carnot and Monge were expelled from France’s leading scientific institution, the École Polytechnique, and the institution placed under the supervision of the neo-Cartesian Laplace. Laplace uprooted entirely the educational program of the École, and handed leading political authority over French scientific opinion, to his protégé, the nasty plagiarist, Augustin Cauchy, whose absurd concoctions are ritually taught to nearly all victims of elementary differential-calculus courses today. Except for the current of the Carnot-Monge tradition typified by the persecuted Louis Pasteur, science died rapidly in France after 1815, to be replaced by the ideologically fascist (Synarchist) positivism emerging from the corrupted École Polytechnique.

After 1815, the main currents of French science, like Carnot himself, fled to Alexander von Humboldt’s patronage, in Germany. By 1827, the transfer of world-leadership in science, from France to Germany, was more or less completed. From the 1815–1827 interval until somewhat later than 1857, leadership in world science was dominated by the circles of Humboldt and Gauss.

Beginning in 1850, an escalating effort was launched to attempt to destroy science in Germany, too. There were four points from which coordinated attacks upon science were launched: Metternich’s Vienna, Cauchy’s France, Britain, and within Germany itself. The principal figures of this anti-science effort included Clausius, Kelvin, Helmholtz, Maxwell, Mach, Rayleigh, and Boltzmann; the principal targets, from then deep into the twentieth century, have been Gauss, Riemann, Weierstrass, Cantor, and, to a lesser degree, Felix Klein. By the 1880s, the anti-science, or “statistical,” faction of neo-Cartesians had won the fight politically. The crushing of Germany, in the wake of World War I, nearly eradicated even the much-diluted German remains of the Leibniz-Gauss tradition.

The key point, which must be stressed, if the nature and outcome of these factional struggles within science are to be understood, is that, throughout, the anti-science faction prevailed not through scientific methods of disputation, but because the anti-science faction was deployed with backing from the most powerful assortment of pro-feudalistic wealthy families of Europe. The families either controlled the government, and also the dominant institutions of banking and insurance, or they controlled the universities directly. The outcome of the fight within science was arranged, thus, politically, by the simple expedient of determining which faction’s representatives were appointed to key university and related positions.

James Clerk Maxwell, who was perhaps, in some ways, the best of a very bad lot, frankly admitted the nature of his own largely plagiaristic work in electrodynamics. He frankly justified what might otherwise be deemed his large plagiarism from the extant work of Gauss, Weber, and Riemann on electrodynamics, by announcing that his purpose was to recapitulate electrodynamics, to free it from the methods and geometrical conceptions of Gauss, Weber, and Riemann. Hence, the absurdities irreparably embedded in the axiomatic features of Maxwell’s work. Hence, Maxwell’s invention of an “ether-fluid,” to avoid the principle of Gaussian physics, that only a geometrically ordered physical space-time exists, rather than Cartesian particles roaming in empty space and time. In an effort to save Cartesian geometry, Maxwell filled Descartes’ empty space-time with an ether-fluid.

The discrediting of Maxwell’s hopes for an efficient sort of ether-fluid, so discredited the idea of locating a dynamics in anything but a Gaussian manifold, that rather than accepting Gauss, his factional opponents retreated increasingly from classical dynamics, into substituting statistics for causality.

Among the most significant of the exotic concoctions produced by the anti-science faction, was the work of Ludwig Boltzmann. The most significant feature of Boltzmann’s work is his effort to explain away the occurrence of phenomena which are not statistically entropic, such as living organisms, by means of a curious application of Laplace’s arguments, “a calculus of statistical fluctuations.” If Boltzmann’s arguments are applied with consistency, mankind’s existence is based on calendars and clocks which run backwards, while the rest of the universe is based on calendars and clocks which run forward. (Boltzmann set his own clock straight, in 1906, by committing suicide at the Thurn und Taxis castle of Duino, near Trieste.)
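For reference, Boltzmann’s relation between entropy and statistical weight, and the fluctuation probability derived from it, are standard physics (supplied here, not quoted from the original):

$$S=k_{B}\ln W,\qquad P(\Delta S)\propto e^{\Delta S/k_{B}},$$

so that a macroscopic entropy-decreasing fluctuation, with ΔS negative and of ordinary thermodynamic magnitude (joules per kelvin, against k_B ≈ 1.4 × 10⁻²³ J/K), has a probability of the order of e^(−10²³). This is the “calculus of statistical fluctuations” to which, on Boltzmann’s account, living processes are consigned.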

Norbert Wiener explicitly based his definitions of “negentropy” and “information theory” upon Boltzmann’s doctrine of statistical fluctuations. The axiomatic premises adopted by von Neumann are, variously, explicitly or implicitly identical to those cited by Wiener. Hence, we have a modern doctrine of political-economy, “econometrics,” whose only benefit is to guide nations to economic self-destruction. Hence, we have the costly AI boondoggle.
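The Wiener-Shannon statistical definition of “information” at issue here reduces to an entropy sum over probabilities; a minimal sketch of that definition, for the reader who has not seen it:

```python
from math import log2

def shannon_entropy(p):
    """H = -sum of p_i * log2(p_i): the Wiener-Shannon statistical
    'information' of a probability distribution, in bits."""
    return -sum(q * log2(q) for q in p if q > 0)

# A fair coin carries 1 bit; a certain outcome carries 0 bits.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([1.0]))        # 0.0
```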

As we have summarized the case, an effective approach to discovering the commensurability of living processes and human intelligence is embedded in the internal history of the development of modern science. However, since the empiricist and neo-positivist factions of academic life have been embedded in the science profession, politically, increasingly, over the recent hundred years, any effort to resume the line of development of scientific method typified by Leibniz and Gauss challenges the politically motivated misassumptions imposed upon the teaching of science over many decades.

Science’s Revenge on Bertrand Russell

Bertrand Russell, a key figure in the thuggery against Riemann, Cantor, and Felix Klein from as early as the 1890s, was the grandson and true political heir of the Lord Russell who dedicated his career to attempting to destroy the United States and everything for which that republic stands. It is the vile stream of radical positivism, which Russell represented down to his long-overdue demise, which has produced for us today, amid other afflictions, this AI boondoggle.

Russell was a particularly virulent representative of the pro-feudalistic aristocratic families of Europe, a stratum of powerful families, whose success in imposing their capricious wills upon ordinary people and governments, encourages them to act as if they viewed themselves as reincarnations of the fabled Gods of Olympus. In this state of arrogance, they act as if they imagined themselves not only gods, but so powerful that they might pit their wills against the Creator Himself. Their ultimate fate, as the great Aeschylus warned them, is to bring the wrath of the laws of the universe upon not only themselves, but those cowardly or greedy enough to tolerate Olympian insolence against the laws of the Creator.

So, today, as we have compromised the vital interests of the nation and people of the United States for the sake of peaceful accommodation with such “families,” we have imposed upon ourselves those monetary and economic policies of practice which are not only destroying the U.S. economy, but weakening our nation to the degree that we become the easy prey of growing Soviet imperial power. Similarly, in abandoning the principles of science’s search for truth, whomever that truth may or may not please, we make ourselves not only prey to the waste of billions on such boondoggles as AI, but cripple that science upon which we must largely depend, to continue to be able to feed and defend our own population.

AI may reflect the prevailing prejudices of an extant scientific community, but if that is unchangeable, then AI typifies a society which, according to Aeschylus’ principle, has lost the moral fitness to survive.
