
This article appears in the February 1, 2019 issue of Executive Intelligence Review.


Memorandum
to the International Caucus of Labor Committees (ICLC)

The Next Twelve Months’ Work
Must Consolidate and Systematize the Cosmological Ontological Standpoint of Cusa’s Founding of Modern Science

The editors of EIR are publishing here a memorandum by Mr. LaRouche addressed to members of the ICLC.[fn_1] All of the footnotes have been added by the editors. This is the second of Mr. LaRouche’s previously unpublished 1986 works that we have published this year. On October 6, 1986 a massive raid on EIR’s office in Leesburg, Virginia was executed by the very same forces that are today involved in an ongoing coup attempt against President Trump. Mr. LaRouche was then targeted for elimination by the British Empire forces that had deemed intolerable LaRouche’s collaboration with President Reagan on the Strategic Defense Initiative (SDI).

During the recent six months, senior physicists associated with the work of the Fusion Energy Foundation [FEF][fn_2] have begun to effect a reworking of areas of mathematical physics from the standpoint of the Cusa-Leonardo-Kepler-Leibniz-Riemann definition of the Principle of Least Action. This addresses, variously directly or at least implicitly, the most profound of the lingering problems of twentieth-century physics. The particular lines of investigation being pursued in this way, will probably lead to discoveries of the broadest practical importance for today’s scientific work.

The importance of the work of these physicists forces us to see more clearly than before, certain relevant omissions in our own elaboration of the principles of constructive physical geometry. During the period 1969-1973, I outlined certain directions of education and related exploration of the principles and implications of Bernhard Riemann’s fundamental contributions to physics. This was launched initially, to provide graduates of my one-semester introductory course in economic science with the prerequisites for a more advanced education in that science. Despite the significant accomplishments which have been made under those auspices, during the recent fifteen years, the results of this progress have not yet been systematized in the needed fashion. Those FEF seminars convened on the subject of this fresh elaboration of the least-action principle, have recently demonstrated most clearly the practical difficulties caused by lack of such systematic elaboration of the principles of constructive physical geometry.

This report is principally occupied with addressing two aspects of this task of systematization:

1. More narrowly, we must identify and understand most clearly, the mutually exclusive, axiomatic differences between the two principal, contending ideas of physics and cosmogony, among professional physicists and mathematicians over the recent three centuries, since Francis Bacon and Rene Descartes. We must emphasize that the definition of substance, as provided by constructive geometry, is irreconcilable with the definition of substance associated with Euclidean deductive geometry, or, with mathematics based on the notions of an axiomatic arithmetic and formalist algebra.

2. More broadly, we must expose the influence of the Romantic fraud, which separates the idea of reason in the physical sciences from the domains of politics, morality, law, psychology, and the arts.

We must stress, that the ontological and methodological fallacies of the deductive-empiricist approach to physics, are coherent with [Friedrich Carl von] Savigny’s irrationalist dogma of hermetic separation of Geisteswissenschaft from Naturwissenschaft.

The kind of systematization required, is illustrated in a simplified but useful way, by the following syllabus:

1. Professor Jacob Steiner’s elementary course in synthetic geometry, through the scope of topics of the tenth through thirteenth books of Euclid’s Elements.

2. The introduction of the proof for the Bernoulli-Euler “isoperimetric theorem” as a self-reflexive correction in axiomatic assumptions of synthetic geometry. The examination of Nicholas of Cusa’s “Maximum Minimum Principle” and Gottfried Leibniz’s cohering Principle of Least Action, from this standpoint in synthetic geometry.

3. The leading work of Luca Pacioli and Leonardo da Vinci, especially on the distinction between living and nonliving processes, from this vantage-point in physical synthetic geometry.

4. The mastery of Johannes Kepler’s founding of a comprehensive mathematical physics, on the basis of the crucial contributions to axiomatics of constructive-geometric physics by, chiefly, Cusa and Pacioli-Leonardo.

5. The retrospective view of Kepler’s physics, by Leibniz’s elaboration of the Principle of Least Action, and Leibniz’s fulfilling Kepler’s specifications for the kind of differential calculus derived from a constructive approach to geometry.

6. The retrospective view of Kepler’s physics, by [Carl Friedrich] Gauss et al. and the derivation, from this, of the geometrically constructed doctrine of functions of a complex domain.

7. The problem of continuous functions subsuming dense generation of mathematical discontinuities (the Dirichlet-Weierstrass problem), and the general solution contributed by Bernhard Riemann, all from the standpoint of a constructive physical geometry of the Gaussian complex domain.

8. The notion of the ontologically transfinite: Georg Cantor’s 1871-1883 contributions viewed from the vantage-point of a Riemann-Surface function: the hierarchical ordering of ontological (and mathematical) transfiniteness, inherent to a complex domain defined in terms of multiply connected conic self-similar-spiral forms of hyperspherical functions.

9. The distinction between “physical space-time,” as an indivisible unit of conception, and Cartesian or neo-Cartesian notions of distinctions among abstractly distinct space, time, and matter. The ontological meaning of “substance,” as oppositely defined in the two opposing views.

10. The elements of physics, especially hydrodynamics and electro-hydrodynamics, defined in this elaborated context. The case of well-tempered polyphony, as encompassing all of the essential notions of such a physics.

The education, and related professional conditioning of modern physicists, as well as laymen, has imbued most moderns with the wrong view on each of these ten points. As a result, the experimental physicist is crippled by the belief that no experimental design or result is professionally credible unless the explanation of every feature of design and result is consistent with the neo-Cartesian formalist method and axiomatic assumptions.

Left to right: Gottfried Leibniz (1646-1716), Carl Friedrich Gauss (1777-1855; portrait by Christian Albrecht Jensen), Bernhard Riemann (1826-1866)

The last attempt to refute Kepler, Leibniz, Gauss, Riemann et al. from a Cartesian-Newtonian “classical” standpoint, was that of James Clerk Maxwell. Maxwell, who explicitly claimed that he was rejecting all in Gauss and Riemann not consistent with “our own” geometry, a neo-Cartesian one, made the notion of the “ether” the central feature of his work; this “ether,” like the mythical “quark” of today, was introduced to attempt to explain away every phenomenon of electrodynamics which otherwise required a Gauss-Riemann notion of a physical space-time characterized by a specific geometry. By purporting to fill Cartesian empty space with an “ether,” Maxwell purported to explain away all need for the kinds of geometrical conceptions he abhorred. With such experiments as that of Michelson-Morley, and the experimental proofs of the case for special relativity, the “ether” was tossed away, and, with it, “classical” Cartesian-Newtonian mechanics. With the influence of the work of Ludwig Boltzmann, neo-Laplacian statistical mechanics appeared as the replacement for “classical” mechanics.

Despite this crisis-ridden, paradoxical character of anti-Gaussian modern mathematical physics, the conditioned professional adheres stubbornly to the conceits of naive sense-certainty: chiefly, that matter is reducible to elementary point-masses, and that least action is action along a straight-line pathway between any two points.

In contrast, in real physics, action is perceived solely in the form of a local or larger transformation within continuous physical space-time. Matter is perceived only in the form of such finite transformations in physical space-time.

Although “straight line” (linear) action exists, it exists only conditionally, in the same sense that a straight line is constructed by multiply connected circular action in elementary synthetic geometry. Matter exists only as transformations in physical space-time, and the primary form of action in physical space-time is either simply circular, or helical, or conic self-similar-spiral action. Action corresponding to this primary form, is called “least action.”
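For orientation, one conventional algebraic shadow of the self-similar spiral invoked here may be supplied (the formula is added for reference; the argument of the text proceeds by construction, not by formulas): the logarithmic spiral in polar coordinates,

    r(\theta) = r_{0}\, e^{k\theta}, \qquad r(\theta + \Delta\theta) = e^{k\,\Delta\theta}\, r(\theta),

so that rotation through any fixed angle rescales the whole figure by a fixed factor without altering its shape. For k = 0 it degenerates to simple circular action; with uniform advance along the axis of rotation it becomes helical; and traced on the surface of a cone it yields the conic self-similar-spiral form, whose projection onto the cone’s base is just this plane spiral.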

“Substance” is defined rigorously, therefore, as a finite transformation in physical space-time, by means of mathematical (geometric-trigonometric) statements “normalized” in terms of least action. All elementary laws of the universe must be stated in these, and only these terms of reference.

The implications of the “Dirichlet Principle” determine the characteristic geometrical features of real physical space-time in general. That is, continuous functions based upon multiply connected conic-spiral action, define an ordered density of mathematical discontinuities within that continuous function. These are termed “discontinuities,” because, in the least degree of distinction, they admit of no linear interpretation of the continuous function; more profoundly, because they involve transfinite orderings, as the Riemann Surface function defines this. In physics, they are called “singularities,” and include such phenomena as electrons, “plasmoids,” and so forth. Winston Bostick’s treatment of the electron, is an example of viewing an “elementary particle” as a singularity which is brought into existence, or dissolved, by a nonlinear continuous function. What we imagine, ordinarily, as “matter,” is a discrete form of singularity in a nonlinear continuous function. However, it is clear from this, that the notion of “substantiality” must be a more general one. The objects we call “matter” are but a special case of a more general, underlying substantiality, physical space-time as a whole. This substantiality is expressed for human perception as any least action form of finite transformation within physical space-time as a whole.

Since the late nineteenth century, it has been a classical classroom exercise, to show that what is associated with so-called Newtonian universal gravitation, is nothing but a deductive manipulation of Kepler’s three laws of planetary motion. Kepler already defined gravitation, before Galileo and Newton, and this classroom exercise proves that Galileo and Newton discovered nothing at all that was either useful or original; in fact, Newton led his dupes a giant step backwards.
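The classroom exercise referred to can be indicated, in modern notation, for the simplified case of a circular orbit of radius r and period T (a sketch added here; the text does not carry out the calculation). Kepler’s harmonic, or third, law gives T^2 = Kr^3 with the same constant K for every planet, so the centripetal acceleration of the orbit is

    a_{c} = \frac{4\pi^{2} r}{T^{2}} = \frac{4\pi^{2} r}{K r^{3}} = \frac{4\pi^{2}}{K}\,\frac{1}{r^{2}},

an inverse-square acceleration toward the Sun: the “Newtonian” law of gravitation is already contained, by mere algebraic manipulation, in Kepler’s relation.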

Johannes Kepler (1571-1630)
The musical scales shown here are adapted from Kepler’s Harmony, and show the “tonalities” of the harmonic orbits of the planets. Above is the major scale; below is the minor scale. Gauss predicted the next sighting of the asteroid Pallas on the basis of Kepler’s harmonic values for the exploded planet which must once have existed in an orbit between those of Mars and Jupiter. That is the space marked vacant above.

The relevant point to be stressed in this connection, is that Kepler’s laws are independent of any specification of the masses of the planetary bodies. The construction of Kepler’s laws depended upon nothing but the elaboration of the harmonic metrical characteristics of universal physical space-time, without yet considering the masses of the bodies. The central assumptions in Kepler’s astrophysical hypothesis, were two. First, directly, explicitly, Kepler based his work on the demonstrations of Pacioli and Leonardo: Pacioli’s De Divina Proportione, and the Pacioli-Leonardo demonstration that the highest-order processes in the universe had harmonic orderings coherent with the Golden Section. Secondly, as Kepler references Cusa explicitly, Kepler’s physics depends entirely upon the “hereditary” implications of Cusa’s “Maximum Minimum Principle” (Least Action).

Since the work of Gauss and Riemann, most notably, we know that any process of such metrical characteristics, is a subsumed reflection of the kind of complex hyperspace ordered in terms of conic, multiply connected, self-similar-spiral action. In other words, Kepler already showed that our physical universe is Riemannian: that universal physical space-time has a “shaping,” and that the fundamental laws of the universe are, either of the form of apparently “dimensionless constants,” or of a form ontologically akin to such constants: the finite, limiting speed of electrodynamic propagation, universal gravitation, the quantum constant, and the so-called fine structure constant. Each and all of these “constants” are functionally interdependent, and are more accurately stated in the relatively “dimensionless” terms of a “pure” synthetic geometry of Gauss-Riemann physical space-time.

Once we introduce “mass” to physical functions, these “dimensionless constants,” can be restated in terms of “dimensional” formulations of classical mechanics; however, the fact that we usually employ most of these in that derived form, does not prove that they are of such “dimensional” form in their proper, most elementary statement.

For example: the attribution of a quantum factor to a photon, depends upon interaction of that beam of electromagnetic radiation with some target. For reasons of physical geometry, that interaction must be defined in the most elementary terms, as a function of wavelength (frequency). Looking at this matter more closely, we find a reciprocal relationship between the speed of light and the quantum: the two express the same underlying universal principle of physical space-time. Gravitation, similarly, and the relationship between gravitation and the “fine structure constant.”
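In standard notation (the formulas are added here for reference), the quantum attributed to the photon enters only in the combination

    E = h\nu = \frac{hc}{\lambda},

that is, jointly with the propagation speed c and a wavelength of physical space-time; and the fine structure constant,

    \alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.04},

is the textbook example of a “dimensionless constant,” in which the quantum, the speed of light, and the elementary charge survive only as a single pure number.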

The method involved, is essentially socratic method.

We are conditioned, these days, to justify certain axiomatic assumptions of mathematical physics, on the grounds of the apparent practical advantages of such assumptions. We are conditioned, not to subject those assumptions to a rigorous sort of socratic criticism, epistemological criticism. The traditional defense against such criticism, is for the affronted defender of such axiomatic traditions to list some of the physics discoveries which are credited with depending upon such axioms. The affronted defender refuses to consider the criticism itself, on the pragmatic grounds that existing assumptions appear to work quite well.

What actually works, unquestionably works, at least up to some limit. The pragmatic view has two obvious flaws. First, a more rigorous set of assumptions, in place of conventionally taught ones, would not impair any practical result, but could only supply a more coherent, better insight into the “why” of what appears to work. Second, all such pragmatic axiomatic assumptions place limits on the scope of efficient practice; by adhering to such provably flawed assumptions, as socratic epistemology can prove this flaw to exist, we halt the possibility of practical scientific progress to that degree.

There is a deeper psychological problem involved in the pragmatist’s viewpoint. On the surface, it might appear, that the pragmatist is conditioned to certain principles, which have appeared to serve him well, and is disinclined to go through the rigors of a re-education. He has a certain personal investment in the prestige gained by aid of assimilating and defending those assumptions. On the deeper level, many of these assumptions are provably irrational ones, which he learned mostly by means of years of classroom and related kinds of conditioning. He was never convinced, by reason, that these were necessary principles, but only that his professional standing and competence appeared to depend upon accepting their authority. Hence, this lack of rational resolution for such assumptions signifies that they have, for him, the kind of efficiency a superstitious fellow might attribute to tricks of symbolic magic, or astrology. He has the resulting anxiety, that to give up such assumptions, is of the form: to lose some of his own “magical” powers.

This irrationalist element stands in contrast to the physicists’ usually well-deserved reputation for greater rationality than most. This spoiling, irrationalist streak, clearly arises from two kinds of sources. First, the physicist is a person in society, and is subject to the prevailing philosophical irrationalism of contemporary cultural paradigms in society generally; this general influence tends to spill over into areas of his professional work, and especially into the domain of heteronomic relations with fellow-professionals. The “personal” element so defined, tends to color his factional position on scientific issues. Second, more narrowly, as is shown most efficiently by rigorous analysis of the work of Immanuel Kant, the mechanistic, linear world-outlook in physics, is in itself an axiomatic root of a tendency for irrationalism within physics practice. This notion of universal physical lawfulness implicitly defines a universe in which life could not have developed. This, Cartesian or Newtonian tradition, is in specific contrast to the standpoint of Leonardo and Kepler, for example. The physics of the latter, is consistent with the necessary existence of life in the universe. Hence, we have the spectacle of the otherwise rational physicist or chemist, asserting the authority of his existence before the lecture hall, and yet asserting a mathematical method which appears to prove that the lecturer does not exist.

In terms of physics as such, the mechanistic method insists that the universe is characteristically entropic, and that the elementary laws of cause and effect in that universe are linear in form. This admission was already made by Isaac Newton, an admission on which Leibniz focussed attention later, in the Leibniz-Newton-Clarke correspondence. Any scheme which assumes, that matter is composed of self-evidently existent discrete particles, acting in straight-line relations in empty, Cartesian space, already assumes that the universe is running down in the fashion of a mechanical time-piece.

In contrast, Kepler assumed, and demonstrated, that the universe is characteristically negentropic.

We have referenced the proof for Kepler’s laws on a number of occasions earlier. It is important, for rigorous clarity, to identify that point again here. The most crucial experimental proof was supplied by Gauss, when Gauss predicted the next sighting of the asteroid Pallas on the basis of Kepler’s harmonic values for the exploded planet which must once have existed in an orbit between those of Mars and Jupiter. The fact that the former existence, and explosion, of this missing planet was integral to the entire construction of Kepler’s laws, signified that the existence of an asteroid with such harmonic orbital values was conclusive proof of the validity of Kepler’s hypothesis, relative to all those who opposed Kepler from a Cartesian-Newtonian mechanistic standpoint.

This proof suffices to demonstrate that the universe is characteristically negentropic, not entropic. For reasons clear from the Dirichlet-Weierstrass-Riemann treatment of the problem of discontinuity in continuous complex functions, the fundamental laws of physics are not linear in form, but are nonlinear. All linear formulations of such laws are, at best, a crude approximation, and, fundamentally, absurd.

Isaac Newton

The irrationalist element within “classical” mechanics and deductive, formal algebra, is thus located.

1. No system of thought, however “rational” deductively, can account for the full range of cause-and-effect relations within the experimental domain of physics, chemistry, and biology.

2. Within the range of phenomena for which mechanistic or formal-deductive approaches do produce some useful results, the system as a whole depends upon included rule-of-thumb terms which have no rigorous basis within the terms of the system as a whole, but which are included as plausible terms merely because they appear to work in many cases.

On the first account, physical reality is “nonlinear,” to the effect that any attempts to measure cause-effect in terms of linearly stated laws, are merely crude approximations, approximations which break down entirely for non-linear cases. On the second account, we have paradoxes such as the three-body problem, and the general incoherence of efforts to account for rotational principles within the axiomatic system of mechanics. That is, it cannot be shown that the rotational terms are derived consistently, constructively, from a linear set of axioms; these terms appear to have no rational necessity corresponding to their experimental relevance, and are therefore introduced to the deductive system as rather arbitrary added postulates. This is the general case for hydrodynamic and analogous electrodynamic phenomena. The first class of paradox is most clearly shown in the case of negentropic or related sorts of non-linear processes. The second class is most commonly shown within the range of hydrodynamic and related phenomena which appear to belong to the domain of mechanics, rather than negentropic processes. This is, approximately, the essential division of types of anomalies distinguishing the two classes of paradoxes.

On these two accounts, the mind perceives a gap in the process of reasoning, from the generally consistent basis of a deductive-axiomatic mechanics, to the terms of description for the “anomalous” classes of phenomena. The existence of this gap in the reasoning process, compared with the greater or lesser practical efficiency of the arbitrary element, appears to the mind as like “magic.” Why it appears to work, is, at bottom, a mystery; things which work, but are premised upon mysterious principles, are deemed by the mind to be “magical.”

It is most advantageous, to view this sort of problem from the standpoint of the two central, celebrated fallacies in Kant’s Critique of Judgment: Kant’s epistemologically interdependent assumptions, that there is no knowable, rational basis for human scientific (or other) creative discoveries, and that there is, on the same premises, nothing but an arbitrary basis for assessing the qualities of truth and beauty in works of art. We have shown for the case of music, if only so far in an elementary way, that Kant’s judgment is not only an absurd one, but a wicked one. We have also shown, that what is demonstrated for the case of music, applies in a general way to all creative work, scientific discovery included.

Essentially, Least Action and negentropy are cohering notions. As the case of Kepler’s work implies, Least Action is metrically characteristic of the physical space-time manifold generated by multiply connected, conic, self-similar spiral action. Such multiply connected functions form, intrinsically, a class of complex functions which are efficiently continuous, and yet densely populated with self-generated discontinuities. The best measure of negentropy, is the rate of increasing density of such discontinuities within a function which otherwise conforms to Least Action in such a manifold. This requires a mathematical universe, in which the elementary laws are stated, elementarily, as “nonlinear” functions, and in which the normalized, elementary form of statement of any event, measures the transformation so measured in terms of reference to negentropy as the metrical characteristic of the universe.

The practical content of this is most usefully demonstrated, by reference to the elements of economic science.

As we have shown, the fundamental metrical feature of economic processes is stated in terms of a variable rate of increase of the potential population-density. In economic processes, there never exists the kind of von Neumann “equilibrium” defined in terms of solutions to simultaneous linear inequalities. The minimal condition for the sustainable existence of the human species, is some positive rate of increase of potential population-density. This minimal condition is represented by a “nonlinear,” negentropic function, which describes what may be called a “world-line.” This function is continuous, if “function” is defined as a Riemann Surface function. In other words, by application of Dirichlet’s principle of topology, the current state of the continuous function is situated in that transfinite ordering which provides perfect connectivity for a domain including all of the singularities subsumed. Since the continuous function so described is becoming ever-richer in singularities, the corresponding type of Riemann Surface function is required for the general case represented by the “world-line.”

The anti-entropic development of the universe is characterized by two related non-linear constants: a minimal rate of expansion of development which, if not met, results in extinction; and the requirement to purge obsolete closed systems in order for the system to grow. Depicted here are two examples of this governing principle—the P-T Mass Extinction and the K-T Mass Extinction—in which certain species are required to be superseded for the emergence of new species of higher energy-flux metabolisms.

Values greater, or lesser than that of this “world-line,” are similarly defined. All such Riemann Surface functions, by definition, are purely negentropic functions. “Entropic” functions, are defined as negative “negentropic” functions: a Riemann Surface “backwards,” so to speak, but for the qualification, that “backwards” is not merely a reverse of “forwards,” but a different pathway analytically.

Similarly, the “world-line” is not a fixed one. Every increase of the rate of economic growth, redefines the required minimal value of “world-line” from that point onwards. By increasing the potential population-density above that required by previously established “world-line” values, we “upshift” the “world-line” function from that point onwards.

For economic processes, we have stated the following general restrictions:

1. All positive values of the function require an increase of the relative content of properly defined per-capita market-baskets of human consumption. This has the significance of an increase of the density of singularities.

2. The throughput of usable energy must be increased, per-capita and per-hectare. This is restated as an increase of energy-throughput per unit of actual and of potential population-density, respectively (energy-intensity, in the first degree).

3. The energy-density cross-section of generated and applied energy must increase historically (energy-intensity, in the second degree).

4. The ratio of employment in rural production must decrease, subject to a per-capita increase of output of such goods for the population as a whole (capital-intensity, in the first degree).

5. The ratio of employment in production of capital goods, to employment in production of consumer goods, must increase (capital-intensity, in the second degree).

6. The technology-intensity of modes of production and existence must be increased, in a manner consistent with Leibniz’s elementary definition of “technology.”

These are features of a “nonlinear” function, the “world-line” and related functions. Every transformation in “economic space,” is measured in terms of that function. The generalized notion of that function is:

1. The variable form of the “world-line” function at each point in the process.

2. The rate of increase of potential population-density, relative to that momentary value of the “world-line” function.

That is the most elementary of all the statements which can be made in “economic space.”
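As one way of fixing the bookkeeping of the foregoing (a toy illustration only, with hypothetical names and numbers; it is not a statement of the LaRouche-Riemann method itself), the six restrictions and the two-part generalized function above can be caricatured as directional checks on successive snapshots of an economy, together with a comparison of the realized rate of increase of potential population-density against a required “world-line” minimum:

    # Toy sketch (hypothetical data and names): test whether successive
    # "snapshots" of an economy satisfy the six restrictions stated above,
    # and whether the rate of increase of potential population-density
    # meets a required "world-line" minimum for the period.

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        market_basket_per_capita: float    # restriction 1: must rise
        energy_per_capita: float           # restriction 2: must rise
        energy_flux_density: float         # restriction 3: must rise
        rural_employment_ratio: float      # restriction 4: must fall
        capital_to_consumer_ratio: float   # restriction 5: must rise
        technology_index: float            # restriction 6: must rise
        potential_population_density: float

    def restrictions_satisfied(prev: Snapshot, curr: Snapshot) -> bool:
        """True if every restriction moves in the required direction."""
        return (curr.market_basket_per_capita > prev.market_basket_per_capita
                and curr.energy_per_capita > prev.energy_per_capita
                and curr.energy_flux_density > prev.energy_flux_density
                and curr.rural_employment_ratio < prev.rural_employment_ratio
                and curr.capital_to_consumer_ratio > prev.capital_to_consumer_ratio
                and curr.technology_index > prev.technology_index)

    def growth_rate(prev: Snapshot, curr: Snapshot) -> float:
        """Relative rate of increase of potential population-density."""
        return curr.potential_population_density / prev.potential_population_density - 1.0

    WORLD_LINE_MINIMUM = 0.02   # hypothetical required minimum rate per period

    a = Snapshot(1.00, 1.00, 1.00, 0.40, 0.50, 1.00, 100.0)
    b = Snapshot(1.03, 1.05, 1.10, 0.38, 0.55, 1.04, 103.0)

    print("restrictions satisfied:", restrictions_satisfied(a, b))
    print("rate of increase: %.3f (required >= %.3f)" % (growth_rate(a, b), WORLD_LINE_MINIMUM))
    print("above the world-line:", restrictions_satisfied(a, b) and growth_rate(a, b) >= WORLD_LINE_MINIMUM)

A serious treatment would replace these scalar checks with the nonlinear, Riemann Surface form of function described above; the sketch only records which quantities must rise, which must fall, and against what the rate of growth is to be measured.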

The point to be stressed in this location, is that this elementary function in “economic space,” is exemplary of all proper physical functions bearing upon fundamentals in the universe.

Mankind knows the universe, only from the standpoint of the criteria of successful human practice. “Successful human practice,” can be defined as nothing less than increase of the potential population-density, as we have specified that summarily here. This statement is complete, on the condition that we recognize that technological progress represents the generation and efficient assimilation of notions developed by means of self-improvement of that divine spark of potential for creative reasoning which distinguishes mankind from the beasts. Labor in a technologically progressive, energy-intensive, capital-intensive mode, is rightly called “the human form of labor,” to distinguish a human form of existence from a bestialized condition of mankind.

The question of human knowledge, is a question of knowledgeable human practice. Universal knowledge, is therefore the form of knowledge related to the most universal feature of human practice: increase of the potential population-density, by means of the practice of a human form of labor.

This does not define human knowledge as intrinsically pragmatic. Human knowledge is absolute knowledge of the universe, relative to the degree of its perfection as knowledge pertaining to the most universal feature of human practice. However, it is absolute knowledge of the universe stated in the language of the most characteristic terms of universal human practice.

Thus, economic science, properly defined, is the same thing as a universal physics. It is the ultimate standpoint from which we can discover which assumptions of physics are valid or not. Both, economic science in particular, and general physics in particular, must be caused to converge upon one another, to become one. That standpoint is our standpoint as a philosophical association. This is understood as our standpoint, on condition that we emphasize that economic science treats performance relative to the human form of labor, as we have indicated here: the elaboration of the development of the divine spark of potential for creative reason peculiar to the human individual.

Statements made in this form are the only truly rational statements about the physical universe. The following points, relative to that, are leading:

1. All such statements are derived from the consistent elaboration of a constructive geometry, from the unique starting-point of a Principle of Least Action (Cusa’s “Maximum Minimum Principle”). No arbitrary element is ever introduced to this process of construction.

2. Negentropy, while reflected in harmonic orderings congruent with the Golden Section, can be explicitly defined only in the Gauss-Riemann complex domain, a specific form of extended elaboration of such synthetic geometry.

3. Every theorem stated in such terms, is implicitly reduced, by a socratic method of back-tracing the hereditary principle of construction, to the unique root-principle of Least Action.

4. No rational algebraic statement of a function can be made, which is not better restated as a trigonometric function, and thus shown to be a description of a locus generated by a Gauss-Riemann constructive geometry of the complex domain. Implicitly, any seemingly arbitrary algebraic function, which corresponds to actual processes, can be made rationally knowable as a continuous function by such methods.

5. No phenomenon which can be comprehended in mathematical-physical terms of a continuous function, is rightly knowable rationally, in any terms but these constructive terms.

6. The inclusion of negentropic processes in this class, an inherent feature of such a constructive geometry of the complex domain, signifies that living processes and analogous nonlinear processes, are rationally knowable in these terms of reference.

7. Creative discovery, is the (constructive geometrical) form of activity of the human mind which is in one-for-one correspondence with a living process’s characteristic features.

Thus, the creative faculties of the human mind, are rigorously comprehensible in the same terms as a competent mathematical physics, on condition that the right such physics is employed.

Such comprehensibility does not exist within the scope of a formal, axiomatic-deductive sort of linear system. Hence, Kant was conditionally correct, that creativity and the notion of beauty were unknowable in his system of thought.

“The intellect is to truth, as an inscribed polygon is to the inscribing circle. The more angles the inscribed polygon has, the more similar it is to the circle. However, even if the number of angles is increased ad infinitum, the polygon never becomes equal to the circle.”
— Nicholas of Cusa (1401-1464)
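Cusa’s image can be checked numerically. A minimal sketch (Python, added here for illustration) computes the perimeter of a regular polygon inscribed in the unit circle and its deficit against the circumference 2π:

    import math

    # Perimeter of a regular n-gon inscribed in the unit circle: 2*n*sin(pi/n).
    # It approaches the circumference 2*pi from below, but never reaches it.
    for n in (6, 12, 96, 1536, 1_000_000):
        perimeter = 2 * n * math.sin(math.pi / n)
        print(f"n = {n:>9}:  perimeter = {perimeter:.12f},  "
              f"deficit = {2 * math.pi - perimeter:.3e}")

However large the number of angles is taken, the deficit remains positive: the polygon approximates, but never becomes, the circle.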

The History of Our Approach, Briefly

Certain aspects of the internal history of our international philosophical association [the ICLC], and of my own relevant points of contribution to that history, have direct bearing on this ongoing work.

Over the interval 1948-1952, my own intellectual ferment was chiefly energized by a sense that the Wiener-Shannon[fn_3] “information theory” dogma was so evil in its practical implications, that I must devote my life, if need be, to refuting it.

My approach was informed chiefly, by the influence of Leibniz upon me during my early adolescence. To refute Wiener, I chose as a practical context, the role of the human mind in generating and assimilating improved technologies. I assumed that the measure of “human intelligence” was that aspect of ideas which contributed in some demonstrable way to an increase of the negentropy of society’s existence, and that a general definition of both “information” and “negentropy” must be supplied from this standpoint.

My concern, was to reduce a statement of economic processes to the form of thermodynamic functions, and to measure an increase of per-capita power achieved through technological progress as the implicit measure of the negentropy of human practice. The ideas which mediated this transformation, must then be correlated with that result, and analyzed in that correlation, to define “information” negentropically. That was the first step, the “LaRouche” component, of what was later termed “the LaRouche-Riemann method.”

Through study of the work of Georg Cantor, I was led to a correct appreciation of Riemann’s work, most emphatically of the general thesis given preliminary summary in his “On the Hypotheses Which Underlie Geometry.” In that dissertation, I found Riemann’s correct definition of “negentropy.” It was clear that the geometrical method congruent with this dissertation, supplied the approach, both to measure technological progress as negentropy, thermodynamically, and to examine that aspect of the structure of human creative thinking which enabled the mind to produce and assimilate technological advances.

Georg Cantor (1845-1918)

That, in kernel, was the beginning of the “LaRouche-Riemann method.”

The philosophical and related scientific work of our association originated in my concern to assemble the basis for a second course in economics, to be supplied by those who had completed the one-semester introductory course. As part of this, Uwe Parpart contracted to produce a report on the essential features of Riemann’s and Cantor’s contributions. Later, in a March 1973 paper presented as a guidance memorandum to the “science project,” I outlined the case for a Riemannian integration of economic science and biology, and the need to base the entire work of the “science project” on this point of methodological reference.

In early December 1978, we launched the project for producing computer-based analyses of the turns in the U.S. economy, with both fortunate and dismal results. The dismal result was Dr. Steven Bardwell’s organization of a calculus curriculum, which centered itself on a Cauchyan approach to the elements of differential calculus, an intrinsically incompetent, but academically popular approach, explicitly contrary, axiomatically, to my own and Riemann’s method. Although the attendance at the course rapidly collapsed, the general effect was that persons influenced by the course, or by its reputation, knew significantly less about economic science than before the course was begun.

This was the state of affairs prevailing at the time of a series of seminars near Wiesbaden, during the spring of 1981. During those seminars, I proposed a new tactic for focussing students’ attention on the crucial issues of the LaRouche-Riemann method: the construction of the principles of well-tempered polyphony from the starting-point of a conic self-similar-spiral. This construction was undertaken by Jonathan Tennenbaum and Ralf Schauerhammer, who presented the results at an international conference later that year, and presented amplified results at a later international conference. Broadly, the tactic succeeded. Serious attention to the principles of synthetic geometry spread, the understanding of the ABCs of the LaRouche-Riemann method was significantly improved, and there were significant benefits in terms of better understanding of the function of technology in economic processes.
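An arithmetic shadow of that construction can be given here for orientation (it is only a shadow, and an assumed one: the Tennenbaum-Schauerhammer work proceeded from the geometry of the conic spiral itself, not from the formula used below). In the equal-tempered approximation to the well-tempered scale, the octave 2:1 is divided self-similarly, each semitone step multiplying frequency by the same factor, just as each equal angular step along a logarithmic spiral multiplies the radius by a fixed factor:

    # Toy sketch: the equal-tempered division of the octave 2:1 as a
    # self-similar (geometric) progression; each semitone multiplies
    # frequency by 2**(1/12).  The reference pitch A4 = 440 Hz is merely
    # an assumed convention for this illustration.
    SEMITONE = 2 ** (1 / 12)
    NAMES = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B", "C'"]

    c4 = 440.0 / SEMITONE ** 9      # middle C derived from A4 = 440 Hz
    for i, name in enumerate(NAMES):
        print(f"{name:>3}: {c4 * SEMITONE ** i:8.2f} Hz")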

The elaboration of that tactic remains far from complete, even with respect to the principles of well-tempered polyphony itself. The musical elaboration is of more than incidental importance for economic science and physics.

It is more readily obvious, that the “art for art’s sake,” and kindred cultish irrationalisms dominating the music profession today, are crippling the musical work and pleasure of both performers and audiences. The damage done to music, by cutting it off from that rigorous rationalism which dominated the work of Bach, Mozart, and Beethoven, is more readily recognized than the effects of this separation upon physical science. Yet, on reflection, it should be clear that nothing is more wickedly subversive of the physical sciences than to degrade physical science into a compartmentalized, mechanistic occupation divorced from the wholeness of the mental life and experience, of the scientist and student.

The physicist urgently requires that the methods proper to the physical sciences be experienced as the essential feature of some aspect of classical art. Once the student of physics, for example, has discovered that the principles of Beethoven’s method of composition are in correspondence with nothing less than the principles of a Riemann Surface, that student must sense the richness and universality of those principles. This sort of experience is indispensable to making professional work in physical science sensed as an occupation of the whole person. It is also indispensable to true rigor in the physical sciences, to the effect that all that is relevant to the existence of mankind, and of mankind’s development must be brought to bear on the practice of the physical sciences.

It is the universal applicability of rigorous methods of reason, to every aspect of the universe, which impels us to perfect those methods in a manner consistent with that universality. This universality, which characterizes the work of a Cusa, a Leonardo, a Kepler, a Leibniz, is the spirit of true scientific inquiry, the spirit of universality which must be recaptured and practiced today, the spirit of rigorous method and universality which characterizes the leaders of every true renaissance in human history.

Ludwig van Beethoven

The advantages of concentrating upon the principles of well-tempered polyphony, from this vantage-point, are broadly obvious ones. What need be demonstrated in this connection, is that the agapic experience of beauty, as classical polyphony affords this, is not a mysterious quality, but something which can be comprehended rationally. The unity of reasoning-powers and the higher (agapic) faculties of emotion, demonstrated and experienced in such an approach to music, is an experience which illuminates, transforms, and uplifts the entire personality. In the scientist, such an experience feeds that fire of impassioned creativity, which is the essence of all true scientific progress.

More broadly, the present bottleneck is the lack of the ten-point systematic program in the foundations of physical geometry, as we described that in outline, above.

The quality of the properly educated person, is the developed capacity to reconstruct every conception, solely by rigorous reasoning, without reliance upon citations by “authorities.” Nothing is authoritative, no matter who or how many have said it, unless one is able to reconstruct the proof of that idea oneself, as if no authority but oneself had ever existed. This reconstruction must meet the specifications of socratic method, as a rigorous synthetic geometry does. That is, in synthetic geometry, we start with nothing but the isoperimetric principle. We construct a straight line and a point by means of doubly connected circular action, and derive the entirety of mathematics, including Riemannian physics, by nothing but that “hereditary” principle. Thus, socratically, all theorems are traced back, rigorously, to the isoperimetric principle.
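Stated in conventional terms (the statement only, not the constructive proof the text has in view), the isoperimetric principle asserts that any closed plane curve of perimeter L enclosing area A satisfies

    4\pi A \le L^{2},

with equality for the circle alone: of all closed paths of given length, circular action encloses the maximum area, which is what qualifies it to serve as the sole starting-point of the construction.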

Can you stand before a class, assuming that they know nothing but the course in elementary synthetic geometry which precedes the introduction of the isoperimetric theorem, and construct the entirety of Gauss-Riemann mathematics’ essentials from that starting-point, using nothing but the hereditary principle of synthetic geometry? Until you can do just that, at least in principle, you really do not know any advanced theorem in physics. Without that, at many points of your argument, you must invoke the mystical blessing of some putative “authority.” You do not really know; you merely place your faith on crucial points, in the assertions of a man in whose authority you have placed your faith.

It is therefore most difficult, to discuss the ontological implications of the Principle of Least Action, until you and your conversational partner share a grounding in the kind of basic program we have outlined. You must know that program, and if your partner in the conversation does not, you must be able to refer his or her attention to such a program. If he or she does not understand the conception, for want of familiarity with such a program, you might, if time allows, summarize the crucial points of the program, and then restate the proposition in those terms of reference. Or, if time does not allow, you can refer his or her attention to the program, and indicate where the theorem in question lies in the setting of that program.

This program represents the next pedagogical step which must be completed, if we are to effect orderly progress in the direction we have been working these past years. This is needed, as the best way to present the methodological standpoint from which our approach to the ontological implications of Least Action can be comprehended in a thoroughly rigorous way, to provide the grounding context in which the issues posed can be discussed.

Also, I say without fear of exaggeration, that many among us do not yet understand what the Principle of Least Action signifies ontologically. This deficiency is not likely to be corrected, until the indicated outline is worked through by them.

“The foundation of competent physical science and Classical artistic composition,” LaRouche writes, “is commonly located only in the principle of insight: insight as distinguished from sense-perception.” The Crab Nebula presents a useful demonstration of the Platonic principle that the world is apprehended by the creative mind, not by sense perception. These images, captured using different instruments, are all quite different in visual appearance; it is the contradiction among them that can lead the mind to a conception of how this perplexing nebula actually functions. Shown here are images of the Crab Nebula, a supernova remnant in the constellation Taurus, at four different wavelengths.

The Proposition in View

There are three principal areas of experimental inquiry, upon which our attention to Least Action is presently focussed, or at least chiefly so: astrophysics, microphysics, and optical biophysics. These are the three facets of the universal, in which the experimental results are presented most immediately in terms of Least Action, and in the most elementary way. To prove a principle of nature, it is our primary concern to prove the principle equally efficient in each and all of these three areas. To the degree we succeed in that, the principle is conditionally true, and is absolutely true relative to contrary views today.

In astrophysics and microphysics, our leading concern now is simply to demonstrate that the Least Action harmonic ordering is consistently determined by certain, provably equivalent “dimensionless constants” (as we have supplied a qualified definition of “dimensionless constants” above). In other words, that the “shaping” of physical space-time in the astrophysical and microphysical domains, is determined in the same lawful way. We are also concerned to situate the same kinds of phenomena in terms of the scales of Ångstrom units and microns, in the domain of optical biophysics.
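For the scales mentioned (conversions added here for reference):

    1\ \text{\AA} = 10^{-10}\ \text{m}, \qquad 1\ \mu\text{m} = 10^{-6}\ \text{m} = 10^{4}\ \text{\AA},

so that visible light, at roughly 4,000-7,000 Å (0.4-0.7 μm), lies in the window between atomic dimensions of a few Ångstroms and cellular dimensions of the order of a micron, which is what makes these the natural units of optical biophysics.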

We wish to proceed from such explorations, to the goal of redefining physics as electrohydrodynamics, proceeding from the elementary phenomena of astrophysics and microphysics, into the hydrodynamics of electromagnetic processes, by the methods associated with constructive geometry.

For example, we have also settled upon crucial evidence which demonstrates that acoustic air waves are defined by electromagnetic radiation, rather than percussive interaction: in terms of self-induced transparency of the medium for potential rates of propagation. We are also concerned with the direct role of the helical-rotational aspect of coherent radiation in terms of the physics of refraction, and the bearing of this on the phenomena of least action in such matters. So, the list goes on.

The prudence of bold leaps in physical science, is in direct proportion to the depth and scope of the rigor one has achieved in mastery of the elementary. Prudent boldness depends upon this principle: Since all theorems in physical (constructive) geometry are rooted in the hereditary principle of construction, two things follow:

1. Nothing is formally true, if it is implicitly, hereditarily, a violation of the underlying principles.

2. As Leonardo da Vinci insisted upon this point, the features of an hypothesis demanded by hereditary implications of underlying principles, are almost certainly true, even if there is so far a lack of experimental evidence to substantiate this particular feature.

Without a rigorous grounding in fundamentals of physical geometry, one dare not trust one’s judgment to such bolder enterprises. Without the kind of mastery of constructive physical geometry which is profoundly consistent with socratic method, the rule should be great self-doubt, and great cautiousness.

The price to be paid to reach the empyreal delights of effective boldness, is ruthless and exhaustive rigor in mastery of fundamentals.


[fn_1]. In a 1981 article, LaRouche described the ICLC “as an international academy movement, consciously modeled in intent and practice upon such precedents as Plato’s Academy at Athens, and tracing its heritage through Philo, Augustinian Christianity, the Arab Renaissance, and the 15th-century Golden Renaissance . . . in existence since 1973-1974, based chiefly in the U.S.A., Canada, Latin America, and Western Europe.”

[fn_2]. The Fusion Energy Foundation (FEF) was founded at the initiative of Lyndon LaRouche in 1974. It published the popular Fusion magazine and the technical International Journal of Fusion Energy. Soon after the October 6, 1986 raid on EIR’s office, federal marshals seized the FEF’s offices and bank accounts, effectively closing the FEF and forcing the discontinuance of its publications.

[fn_3]. Norbert Wiener (1894-1964) and Claude Shannon (1916-2001).
