A TRANSLATION OF SCHRÖDINGER'S "CAT PARADOX PAPER"

This translation was originally published in Proceedings of the American Philosophical Society, 124, 323-338. [It then appeared as Section I.11 of Part I of Quantum Theory and Measurement (J.A. Wheeler and W.H. Zurek, eds., Princeton University Press, New Jersey 1983).]

- Introductory Note
- 1. The Physics of Models
- 2. Statistics of Model Variables in Quantum Mechanics
- 3. Examples of Probability Predictions
- 4. Can One Base the Theory on Ideal Ensembles?
- 5. Are the Variables Really Blurred?
- 6. The Deliberate About-face of the Epistemological Viewpoint
- 7. The Psi-function as Expectation-catalog
- 8. Theory of Measurement, Part One
- 9. The Psi-function as Description of State
- 10. Theory of Measurement, Part Two
- 11. Resolution of the "Entanglement". Result Dependent on the Experimenter's Intention
- 12. An Example
- 13. Continuation of the Example: All Possible Measurements are Entangled Unequivocally
- 14. Time-dependence of the Entanglement. Consideration of the Special Role of Time
- 15. Natural Law or Calculating Device?
- Notes

TRANSLATION

Of natural objects, whose observed behavior one might treat, one sets up a
representation - based on the experimental data in one's possession but
without handcuffing the intuitive imagination - that is worked out in all
details exactly, *much* more exactly than any experience, considering its
limited extent, can ever authenticate. The representation in its absolute
determinacy resembles a mathematical concept or a geometric figure which can
be completely calculated from a number of *determining parts*; as, e.g., a
triangle's one side and two adjoining angles, as determining parts, also
determine the third angle, the other two sides, the three altitudes, the
radius of the inscribed circle, etc. Yet the representation differs
intrinsically from a geometric figure in this important respect, that also in
*time* as fourth dimension it is just as sharply determined as the figure is
in the three space dimensions. Thus it is a question (as is self-evident)
always of a concept that changes with time, that can assume different
*states*; and if a state becomes known in the necessary number of determining
parts, then not only are all other parts also given for this moment (as
illustrated for the triangle above), but likewise all parts, the complete
state, for any given later time; just as the character of a triangle on its
base determines its character at the apex. It is part of the inner law of the
concept that it should change in a given manner, that is, if left to itself in
a given initial state, that it should continuously run through a given
sequence of states, each one of which it reaches at a fully determined time.
That is its nature, that is the hypothesis, which, as I said above, one builds
on a foundation of intuitive imagination.
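The triangle analogy can be checked numerically. The sketch below is an illustration added here (not part of the original text): it takes one side and its two adjoining angles as the determining parts and computes the remaining parts named above, namely the third angle, the other two sides, the three altitudes, and the radius of the inscribed circle.

```python
import math

def triangle_from_asa(c, alpha, beta):
    """Given side c and its two adjoining angles alpha, beta (radians),
    compute the remaining 'determining parts' of the triangle."""
    gamma = math.pi - alpha - beta            # angle sum fixes the third angle
    k = c / math.sin(gamma)                   # law of sines: a/sin(alpha) = c/sin(gamma)
    a, b = k * math.sin(alpha), k * math.sin(beta)
    area = 0.5 * a * b * math.sin(gamma)
    altitudes = (2 * area / a, 2 * area / b, 2 * area / c)  # h = 2A / base
    inradius = area / ((a + b + c) / 2)       # r = A / s (semiperimeter)
    return gamma, a, b, altitudes, inradius

# a 3-4-5 right triangle reconstructed from side 5 and its adjoining angles
alpha, beta = math.asin(3 / 5), math.asin(4 / 5)
gamma, a, b, hs, r = triangle_from_asa(5.0, alpha, beta)
print(round(a, 6), round(b, 6), round(r, 6))  # 3.0 4.0 1.0
```

Any other sufficient set of determining parts (three sides, two sides and the included angle, and so on) fixes the same quantities.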

Of course one must not think so literally, that in this way one learns how
things go in the real world. To show that one does not think this, one calls
the precise thinking aid that one has created, an *image* or a *model*. With
its hindsight-free clarity, which cannot be attained without arbitrariness,
one has merely insured that a fully determined hypothesis can be tested for
its consequences, without admitting further arbitrariness during the tedious
calculations required for deriving those consequences. Here one has explicit
marching orders and actually works out only what a clever fellow could have
told directly from the data! At least one then knows where the arbitrariness
lies and where improvement must be made in case of disagreement with
experience: in the initial hypothesis or model. For this one must always be
prepared. If in many various experiments the natural object behaves like the
model, one is happy and thinks that the image fits the reality in essential
features. If it fails to agree, under novel experiments or with refined
measuring techniques, it is not said that one should *not* be happy. For
basically this is the means of gradually bringing our picture, i.e., our
thinking, closer to the realities.

The classical method of the precise model has as principal goal keeping the
unavoidable arbitrariness neatly isolated in the assumptions, more or less as
body cells isolate the nucleoplasm, for the historical process of adaptation
to continuing experience. Perhaps the method is based on the belief that
*somehow* the initial state *really* determines uniquely the subsequent
events, or that a *complete* model, agreeing with reality in *complete
exactness* would permit predictive calculation of outcomes of all experiments
with complete exactness. Perhaps on the other hand this belief is based on the
method. But it is quite probable that the adaptation of thought to experience
is an infinite process and that "complete model" is a contradiction in terms,
somewhat like "largest integer".

A clear presentation of what is meant by *classical model*, its *determining
parts*, its *state*, is the foundation for all that follows. Above all, a
*determinate model* and a *determinate state of the same* must not be
confused. Best consider an example. The Rutherford model of the hydrogen atom
consists of two point masses. As determining parts one could for example use
the two times three rectangular coordinates of the two points and the two
times three components of their velocities along the coordinate axes - thus
twelve in all. Instead of these one could also choose: the coordinates and
velocity components of the *center of mass*, plus the *separation* of the two
points, *two angles* that establish the direction in space of the line joining
them, and the speeds (= time derivatives) with which the separation and the
two angles are changing at the particular moment; this again adds up of course
to twelve. It is *not* part of the concept "R-model of the H-atom" that the
determining parts should have particular numerical values. Such being assigned
to them, one arrives at a *determinate state* of the model. The clear view
over the totality of possible states - yet without relationship among them -
constitutes "the model" or "the model in *any* state *whatsoever*". But the
concept of the model then amounts to more than merely: the two points in
certain positions, endowed with certain velocities. It embodies also knowledge
for *every* state how it will change with time in absence of outside
interference. (Information on how one half of the determining parts will
change with time is indeed given by the other half, but how this other half
will change must be independently determined.) *This* knowledge is implicit in
the assumptions: the points have the masses m, M and the charges -e, +e and
therefore attract each other with force e^2/r^2, if their separation is r.

These results, with definite numerical values for m, M, and e (but of course
*not* for r), belong to the description *of the model* (not first and only to
that of a definite state). m, M, and e are *not* determining parts. By
contrast, separation r is one. It appears as the seventh in the second "set"
of the example introduced above. And if one uses the first, r is not an
independent thirteenth but can be calculated from the six rectangular
coordinates:

r = [ (x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2 ] ^ (1/2) .
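In code, with illustrative coordinate values of my own choosing, the dependence of the separation r on the first set of determining parts looks like this:

```python
import math

# positions of the two point masses (arbitrary illustrative values)
x1, y1, z1 = 1.0, 2.0, 2.0
x2, y2, z2 = 0.0, 0.0, 0.0

# r is not an independent thirteenth part; it follows from the six coordinates
r = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
print(r)  # 3.0
```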

The number of determining parts (which are often called *variables* in
contrast to *constants of the model* such as m, M, e) is unlimited. Twelve
conveniently chosen ones determine all others, or the *state*. No twelve have
the privilege of being *the* determining parts. Examples of other especially
important determining parts are: the energy, the three components of angular
momentum relative to center of mass, the kinetic energy of center of mass
motion. These just named have, however, a special character. They are indeed
*variable*, i.e., they have different values in different states. But in every
*sequence* of states, that is actually passed through in the course of time,
they retain the same value. So they are also called *constants of the motion*
- differing from constants of the model.
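That the energy and the angular momentum keep the same value through every sequence of states can be checked numerically. The following is a minimal sketch of my own, in arbitrary units with invented initial values, integrating the inverse-square attraction named above for the relative motion of the two points:

```python
import math

def simulate(pos, vel, m=1.0, k=1.0, dt=1e-3, steps=5000):
    """Velocity-Verlet integration of a point mass in an attractive
    central field: force of magnitude k/r^2 directed toward the origin."""
    def acc(q):
        d = math.hypot(q[0], q[1])
        s = -k / (m * d ** 3)                 # a = -(k/m) * q / |q|^3
        return (s * q[0], s * q[1])
    a = acc(pos)
    for _ in range(steps):
        vel = (vel[0] + 0.5 * a[0] * dt, vel[1] + 0.5 * a[1] * dt)
        pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
        a = acc(pos)
        vel = (vel[0] + 0.5 * a[0] * dt, vel[1] + 0.5 * a[1] * dt)
    return pos, vel

def angular_momentum(q, v, m=1.0):
    return m * (q[0] * v[1] - q[1] * v[0])

def energy(q, v, m=1.0, k=1.0):
    return 0.5 * m * (v[0] ** 2 + v[1] ** 2) - k / math.hypot(q[0], q[1])

q0, v0 = (1.0, 0.0), (0.0, 1.2)               # a bound orbit (total energy < 0)
L0, E0 = angular_momentum(q0, v0), energy(q0, v0)
q1, v1 = simulate(q0, v0)
L1, E1 = angular_momentum(q1, v1), energy(q1, v1)
print(L0, L1)   # angular momentum is constant to roundoff
print(E0, E1)   # energy is constant to the integrator's accuracy
```

Both quantities are *variables* (other initial states would give other values), yet along the simulated sequence of states each retains its value, exactly as the text says.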

One might think that for anyone believing this, the classical models have
played out their roles. But this is not the case. Rather one uses precisely
*them*, not only to express the negative of the new doctrine, but also to
describe the diminished mutual determinacy remaining afterwards as though
obtaining among the same variables of the same models as were used earlier,
as follows:

*A*. The classical concept of *state* becomes lost, in that at most a
well-chosen *half* of a complete set of variables can be assigned definite
numerical values; in the Rutherford example for instance the six rectangular
coordinates *or* the velocity components (still other groupings are possible).
The other half then remains completely indeterminate, while supernumerary
parts can show highly varying degrees of indeterminacy. In general, of a
complete set (for the R-model twelve parts) *all* will be known only
uncertainly. One can best keep track of the degree of uncertainty by following
classical mechanics and choosing variables arranged in *pairs* of so-called
canonically-conjugate ones. The simplest example is a space coordinate x of a
point mass and the component p_x along the same direction, its linear momentum
(i.e., mass times velocity). Two such constrain each other in the precision
with which they may be simultaneously known, in that the product of their
tolerance- or variation-widths (customarily designated by putting a Delta
ahead of the quantity) cannot fall *below* the magnitude of a certain
universal constant,[4] thus:

Delta-x . Delta-p_x >= hbar.

(Heisenberg uncertainty relation.)
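For orientation only (the figures are modern constants, not part of the text): once Delta-x is fixed, the relation gives the smallest compatible Delta-p_x, e.g. for an electron confined to about one angstrom:

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg

dx = 1e-10               # localize an electron to about one angstrom
dp_min = HBAR / dx       # smallest momentum spread allowed by Delta-x . Delta-p_x >= hbar
dv_min = dp_min / M_E    # corresponding velocity spread

print(dp_min)  # about 1.05e-24 kg*m/s
print(dv_min)  # about 1.16e6 m/s
```

The tighter the localization, the larger the unavoidable momentum spread; at atomic dimensions it is anything but negligible.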

*B*. If even at any given moment not all variables are determined by some of
them, then of course neither are they all determined for a later moment by
data obtainable either. This may be called a break with causality, but in view
of *A.*, it is nothing essentially new. If a classical state does not exist at
any moment, it can hardly change causally. What do change are the *statistics*
or *probabilities*, *these* moreover causally. Individual variables meanwhile
may become more, or less, uncertain. Overall it may be said that the total
precision of the description does not change with time, because the principle
of limitations described under *A.* remains the same at every moment.

Now what is the meaning of the terms "uncertain", "statistics", "probability"?
Here Q.M. gives the following account. It takes over unquestioningly from the
classical model the entire infinite roll call of imaginable variables or
determining parts and proclaims each part to be *directly measurable*, indeed
measurable to arbitrary precision, so far as it alone is concerned. If through
a well-chosen, constrained set of measurements one has gained that maximal
knowledge of an object which is just possible according to *A.*, then the
mathematical apparatus of the new theory provides means of assigning, for the
same or for any later instant of time, a fully determined *statistical
distribution* to *every* variable, that is, an indication of the fraction of
cases it will be found at this or that value, or within this or that small
interval (which is also called probability). The doctrine is that this is in
fact the probability of encountering the relevant variable, if one measures it
at the relevant time, at this or that value. By a single trial the correctness
of this *probability prediction* can be given at most an approximate test,
namely in the case that it is comparatively sharp, i.e., declares possible
only a small range of values. To test it thoroughly one must repeat the entire
trial *ab ovo* (i.e., including the orientational or preparatory measurements)
*very* often and may use only those cases in which the *preparatory*
measurements gave exactly the same results. For these cases, then, the
statistics of a particular variable, reckoned forward from the preparatory
measurements, is to be confirmed by measurement - this is the doctrine.

One must guard against criticizing this doctrine because it is so difficult to
express; this is a matter of language. But a different criticism surfaces.
Scarcely a single physicist of the classical era would have dared to believe,
in thinking about a model, that its determining parts are measurable on the
natural object. Only much remoter consequences of the picture were actually
open to experimental test. And all experience pointed towards one conclusion:
long before the advancing experimental arts had bridged the broad chasm, the
model would have substantially changed through adaptation to new facts. --Now
while the new theory calls the classical model incapable of specifying all
details of the *mutual interrelationship of the determining parts* (for which
its creators intended it), it nevertheless considers the model suitable for
guiding us as to just which measurements can in principle be made on the
relevant natural object. This would have seemed to those who thought up the
picture a scandalous extension of their thought-pattern and an unscrupulous
proscription against future development. Would it not be pre-established
harmony of a peculiar sort if the classical-epoch researchers, those who, as
we hear today, had no idea of what *measuring* truly is, had unwittingly gone
on to give us as legacy a guidance scheme revealing just what is fundamentally
measurable for instance about a hydrogen atom!?

I hope later to make clear that the reigning doctrine is born of distress. Meanwhile I continue to expound it.

If one measures the energy of a Planck oscillator, the probability of finding
for it a value between E and E' cannot possibly be other than zero unless
between E and E' there lies at least one value from the series 3.pi.hbar.nu,
5.pi.hbar.nu, 7.pi.hbar.nu, 9.pi.hbar.nu,... For any interval containing none
of these values the probability is zero. In plain English: other measurement
results are excluded. The values are odd multiples of the *constant of the
model* pi.hbar.nu, where hbar = (Planck constant)/2.pi and nu = the frequency
of the oscillator.
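The probability rule just stated can be phrased as a small program (a sketch of my own, measuring energies in units of the model constant pi.hbar.nu): an interval [E, E'] has nonzero probability only if it contains a value of the series.

```python
def allowed_energies(n_max, pi_hbar_nu=1.0):
    """The text's series: odd multiples 3, 5, 7, ... of pi*hbar*nu."""
    return [(2 * n + 1) * pi_hbar_nu for n in range(1, n_max + 1)]

def interval_possible(E_lo, E_hi, n_max=1000):
    """Nonzero probability iff [E_lo, E_hi] contains an allowed value."""
    return any(E_lo <= E <= E_hi for E in allowed_energies(n_max))

print(allowed_energies(4))           # [3.0, 5.0, 7.0, 9.0]
print(interval_possible(3.5, 4.5))   # False: no allowed value inside
print(interval_possible(4.0, 5.5))   # True: contains 5*pi*hbar*nu
```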

Two points stand out. First, no account is taken of preceding measurements - these are quite unnecessary. Second, the statement certainly doesn't suffer an excessive lack of precision - quite to the contrary it is sharper than any actual measurement could ever be.

Figure 1.

Angular momentum. M is a material point, O a geometric reference
point. The vector arrow represents the momentum (= mass times
velocity) of M. Then the *angular momentum* is the product of the
length of the arrow by the length OF.

Another typical example is magnitude of angular momentum. In Fig. 1 let M be a
moving point mass, with the vector representing, in magnitude and direction,
its momentum (mass times velocity). O is any arbitrary fixed point in space,
say the origin of coordinates; thus not a physically significant point, but
rather a geometric reference point. As magnitude of the angular momentum of M
about O classical mechanics designates the product of the length of the
momentum vector by the length of the *normal OF*. In Q.M. the magnitude of
angular momentum is governed much as the energy of the oscillator. Again the
probability is zero for any interval not containing some value(s) from the
following series

hbar(1x2)^(1/2), hbar(2x3)^(1/2), hbar(3x4)^(1/2), hbar(4x5)^(1/2),...;

that is, only these values are allowed. Again this is true without
reference to preceding measurements. And one readily conceives how important
is this precise statement, *much* more important than knowing which of these
values, or what probability for each of them, would actually pertain to a
given case. Moreover it is also noteworthy here that there is no mention of
the reference point: however it is chosen one will get a value from the
series. This assertion seems unreasonable for the model, because the normal OF
changes *continuously* as the point O is displaced, if the momentum vector
remains unchanged. In this example we see how Q.M. does indeed use the model
to read off those quantities which one can measure and for which it makes sense
to predict results, but finds the classical model inadequate for explicating
relationships among these quantities. Now in both examples does one not get
the feeling that the essential content of what is being said can only with
some difficulty be forced into the Spanish boot of a prediction of probability
of finding this or that measurement result for a variable of the classical
model? Does one not get the impression that here one deals with fundamental
properties of *new* classes of characteristics, that keep only the name in
common with classical ones? And by no means do we speak here of exceptional
cases, rather it is precisely the truly valuable statements of the new theory
that have this character. There are indeed problems more nearly of the type
for which the mode of expression is suitable. But they are by no means equally
important. Moreover of no importance whatever are those that are naively set
up as class exercises. "Given the position of the electron in the hydrogen atom
at time t=0, find the statistics of its position at a later time." No one
cares about that.
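The series of allowed magnitudes can be generated directly (an added illustration, not Schrödinger's text; working in units of hbar):

```python
import math

HBAR = 1.0  # work in units of hbar

def allowed_L(n_max):
    """Allowed magnitudes of angular momentum: hbar * sqrt(n*(n+1))."""
    return [HBAR * math.sqrt(n * (n + 1)) for n in range(1, n_max + 1)]

vals = allowed_L(4)
print([round(v, 3) for v in vals])   # [1.414, 2.449, 3.464, 4.472]
```

The discreteness of this list, independent of the reference point, is exactly what the classical picture of a continuously varying normal OF cannot accommodate.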

The big idea seems to be that all statements pertain to the intuitive model. But the useful statements are scarcely intuitive in it, and its intuitive aspects are of little worth.

The second interpretation is especially appealing to those acquainted with the
*statistical viewpoint* that came up in the second half of the preceding
century; the more so, considering that on the eve of the new century quantum
theory was born *from it*, from a central problem in the statistical theory of
heat (Max Planck's Theory of Heat Radiation, December, 1899). The essence of
this line of thought is precisely this, that one practically never knows all
the determining parts of the system, but rather *much* fewer. To describe an
actual body at a given moment one relies therefore not on *one* state of the
model but on a so-called *Gibbs ensemble*. By this is meant an ideal, that is,
merely imagined ensemble of states, that accurately reflects our limited
knowledge of the actual body. The body is then considered to behave as though
in a single state *arbitrarily chosen from this ensemble*. This interpretation
had the most extensive results. Its highest triumphs were in those cases for
which *not* all states appearing in the ensemble led to *the same* observable
behavior. Thus the body's conduct is now this way, now that, just as foreseen
(thermodynamic fluctuations). At first thought one might well attempt
likewise to refer back the always uncertain statements of Q.M. to an ideal
ensemble of states, of which a quite specific one applies in any concrete
instance - but one does not know which one.

That this won't work is shown by the *one* example of angular momentum, as one
of many. Imagine in Fig. 1 the point M to be situated at various positions
relative to O and fitted with various momentum vectors, and all these
possibilities to be combined into an ideal ensemble. Then one can indeed so
choose these positions and vectors that in every case the product of vector
length by length of normal OF yields one or the other of the acceptable values
- relative to the particular point O. But for an arbitrarily different point
O', of course, unacceptable values occur. Thus appeal to the ensemble is no
help at all. --Another example is the oscillator energy. Take the case that it
has a sharply determined value, e.g., the lowest, 3.pi.hbar.nu. The separation
of the two point masses (that constitute the oscillator) then appears very
*unsharp*. To be able to refer this statement to a statistical collective of
states would require the distribution of separations to be sharply limited, at
least toward large values, by that separation for which the *potential energy*
alone would equal or exceed the value 3.pi.hbar.nu. But that's not the way it
is - arbitrarily large separations occur, even though with markedly reduced
probability. And this is no mere secondary calculation result, that might in
some fashion be circumvented, without striking at the heart of the theory:
along with many others, the quantum mechanical treatment of radioactivity
(Gamow) rests on this state of affairs. --One could go on indefinitely with
more examples. One should note that there was no question of any
time-dependent changes. It would be of no help to permit the model to vary
quite "unclassically", perhaps to "jump". Already for the single instant
things go wrong. At no moment does there exist an ensemble of classical states
of the model that squares with the totality of quantum mechanical statements
of this moment. The same can also be said as follows: if I wish to ascribe to
the model at each moment a definite (merely not exactly known to me) state, or
(which is the same) to *all* determining parts definite (merely not exactly
known to me) numerical values, then there is no supposition as to these
numerical values *to be imagined* that would not conflict with some portion of
quantum theoretical assertions.
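The angular-momentum objection can be made concrete in two dimensions (a sketch of my own, with hbar = 1 and invented values): a state tuned so that |L| about O equals the allowed value hbar*sqrt(2) yields, about a shifted reference point O', a magnitude outside the series.

```python
import math

HBAR = 1.0
ALLOWED = [HBAR * math.sqrt(n * (n + 1)) for n in range(1, 200)]

def L_about(origin, pos, mom):
    """|angular momentum| of a point mass about a reference point (2D)."""
    rx, ry = pos[0] - origin[0], pos[1] - origin[1]
    return abs(rx * mom[1] - ry * mom[0])

def in_series(L, tol=1e-9):
    return any(abs(L - a) < tol for a in ALLOWED)

# state chosen so that |L| about O = (0, 0) is exactly hbar*sqrt(2)
pos, mom = (1.0, 0.0), (0.0, math.sqrt(2.0))
print(in_series(L_about((0.0, 0.0), pos, mom)))  # True: allowed about O
print(in_series(L_about((0.5, 0.0), pos, mom)))  # False: sqrt(2)/2 about O'
```

No assignment of sharp positions and momenta can make the magnitude land in the series for *every* choice of reference point, which is the point of the argument.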

That is not quite what one expects, on hearing that the pronouncements of the new theory are always uncertain compared to the classical ones.

That it is in fact not impossible to express the degree and kind of blurring
of *all* variables in *one* perfectly *clear* concept follows at once from the
fact that Q.M. as a matter of fact has and uses such an instrument, the
so-called wave function or psi-function, also called system vector. Much more
is to be said about it further on. That it is an abstract, unintuitive
mathematical construct is a scruple that almost always surfaces against new
aids to thought and that carries no great message. At all events it is an
imagined entity that images the blurring of all variables at every moment just
as clearly and faithfully as does the classical model its sharp numerical
values. Its equation of motion too, the law of its time variation, so long as
the system is left undisturbed, lags not one iota, in clarity and determinacy,
behind the equations of motion of the classical model. So the latter could be
straight-forwardly replaced by the psi-function, so long as the blurring is
confined to atomic scale, not open to direct control. In fact the function has
provided quite intuitive and convenient ideas, for instance the "cloud of
negative electricity" around the nucleus, etc. But serious misgivings arise if
one notices that the uncertainty affects macroscopically tangible and visible
things, for which the term "blurring" seems simply wrong. The state of a
radioactive nucleus is presumably blurred in such a degree and fashion that
neither the instant of decay nor the direction, in which the emitted
alpha-particle leaves the nucleus, is well-established. Inside the nucleus,
blurring doesn't bother us. The emerging particle is described, if one wants
to explain intuitively, as a spherical wave that continuously emanates in all
directions and that impinges continuously on a surrounding luminescent screen
over its full expanse. The screen however does not show a more or less
constant uniform glow, but rather lights up at *one* instant at *one* spot -
or, to honor the truth, it lights up now here, now there, for it is impossible
to do the experiment with only a single radioactive atom. If in place of the
luminescent screen one uses a spatially extended detector, perhaps a gas that
is ionised by the alpha-particles, one finds the ion pairs arranged along
rectilinear columns,[5] that project backwards on to the bit of radioactive
matter from which the alpha-radiation comes (C.T.R. Wilson's cloud chamber
tracks, made visible by drops of moisture condensed on the ions).

One can even set up quite ridiculous cases. A cat is penned up in a steel
chamber, along with the following device (which must be secured against direct
interference by the cat): in a Geiger counter there is a tiny bit of
radioactive substance, *so* small, that *perhaps* in the course of the hour
one of the atoms decays, but also, with equal probability, perhaps none; if it
happens, the counter tube discharges and through a relay releases a hammer
which shatters a small flask of hydrocyanic acid. If one has left this entire
system to itself for an hour, one would say that the cat still lives *if*
meanwhile no atom has decayed. The psi-function of the entire system would
express this by having in it the living and dead cat (pardon the expression)
mixed or smeared out in equal parts.
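The "equal probability" in the arrangement corresponds to watching the substance for exactly one half-life: with decay constant lambda, the probability of no decay in time t is exp(-lambda*t). A one-line check (numbers mine):

```python
import math

t = 3600.0                    # one hour, in seconds
lam = math.log(2) / t         # decay constant if the half-life is one hour
p_none = math.exp(-lam * t)   # probability that no decay occurs in the hour
print(p_none)  # 0.5
```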

It is typical of these cases that an indeterminacy originally restricted to
the atomic domain becomes transformed into macroscopic indeterminacy, which
can then be *resolved* by direct observation. That prevents us from so naively
accepting as valid a "blurred model" for representing reality. In itself it
would not embody anything unclear or contradictory. There is a difference
between a shaky or out-of-focus photograph and a snapshot of clouds and fog
banks.

Now this sheds some light on the origin of the proposition that I mentioned at the end of Sect. 2 as something very far-reaching: that all model quantities are measurable in principle. One can hardly get along without this article of belief if one sees himself constrained, in the interests of physical methodology, to call in as dictatorial help the above-mentioned philosophical principle, which no sensible person can fail to esteem as the supreme protector of all empiricism.

Reality resists imitation through a model. So one lets go of naive realism and
leans directly on the indubitable proposition that *actually* (for the
physicist) after all is said and done there is only observation, measurement.
Then all our physical thinking thenceforth has as sole basis and as sole
object the results of measurements which can in principle be carried out, for
we must now explicitly *not* relate our thinking any longer to any other kind
of reality or to a model. All numbers arising in our physical calculations
must be interpreted as measurement results. But since we didn't just now come
into the world and start to build up our science from scratch, but rather have
in use a quite definite scheme of calculation, from which in view of the great
progress in Q.M. we would less than ever want to be parted, we see ourselves
forced to dictate from the writing-table which measurements are in principle
possible, that is, must be possible in order to support adequately our
reckoning system. This allows a sharp value for each single variable of the
model (indeed for a whole "half set") and so each single variable must be
measurable to arbitrary exactness. We cannot be satisfied with less, for we
have lost our naively realistic innocence. We have nothing but our reckoning
scheme, i.e., what is a *best possible* knowledge of the object. And if we
couldn't do that, then indeed would our measurement reality become highly
dependent on the diligence or laziness of the experimenter, how much trouble
he takes to inform himself. We must go on to tell him how far he could go if
only he were clever enough. Otherwise it would be seriously feared that just
there, where we forbid further questions, there might well still be something
worth knowing that we might ask about.

*The systematically arranged interaction of two systems (measured object and
measuring instrument) is called a measurement on the first system, if a
directly-sensible variable feature of the second (pointer position) is always
reproduced within certain error limits when the process is immediately
repeated (on the same object, which in the meantime must not be exposed to any
additional influences)*.

This statement will require considerable added comment: it is by no means a faultless definition. Empirics is more complicated than mathematics and is not so easily captured in polished sentences.

*Before* the first measurement there might have been an arbitrary
quantum-theory prediction for it. *After* it the prediction *always* runs:
within error limits again the same result. The expectation-catalog (=
psi-function) is therefore changed by the measurement in respect to the
variable being measured. If the measurement procedure is known from beforehand
to be *reliable*, then the first measurement at once reduces the theoretical
expectation within error limits on to the value found, regardless of whatever
the prior expectation may have been. This is the typical abrupt change of the
psi-function discussed above. But the expectation-catalog changes in
unforeseen manner not only for the measured variable itself, but also for
others, in particular for its "canonical conjugate". If for instance one has a
rather sharp prediction for the *momentum* of a particle and proceeds to
measure its *position* more exactly than is compatible with Theorem A of Sec.
2, then the *momentum* prediction must change. The quantum mechanical
reckoning moreover takes care of this automatically; there is no psi-function
whatsoever that would contradict Theorem A when one deduces from it the
combined expectations.

Since the expectation-catalog changes radically during measurement, the object
is then no longer suited for testing, in their full extent, the statistical
predictions made earlier; at the very least for the measured variable itself,
since for it now the (nearly) same value would occur over and over again.
*That* is the reason for the prescription already given in Sect. 2: one can
indeed test the probability predictions completely, but for this one must
repeat the entire experiment *ab ovo*. One's prior treatment of the measured
object (or one identical to it) must be exactly the same as that given the
first time, in order that the same expectation-catalog (= psi-function) should
be valid as before the first measurement. Then one "repeats" it. (This
repeating now means of course something quite other than earlier!) All this
one must do not twice but very often. Then the predicted statistics are
established - that is the doctrine.

One should note the difference between the error limits and the error
distribution of the *measurement*, on the one hand, and the theoretically
predicted statistics, on the other hand. They have nothing to do with each
other. They are established by the two quite different types of *repetition*
just discussed.

Here there is opportunity to deepen somewhat the above-attempted delimitation
of *measuring*. There are measuring instruments that remain fixed on the
reading given by the measurement just made. Or the pointer could remain stuck
because of a defect. One would then repeatedly make exactly the same reading,
and according to our instruction that would be a spectacularly accurate
measurement. Moreover that would be true not merely for the object but also
for the instrument itself! As a matter of fact there is still missing from our
exposition an important point, but one which could not readily be stated
earlier, namely what it is that truly makes the difference between *object*
and *instrument* (that it is the latter on which the reading is made, is more
or less superficial). We have just seen that the instrument under certain
circumstances, as required, must be set back to its neutral initial condition
before any control measurement is made. This is well known to the
experimentalist. Theoretically the matter may best be expressed by prescribing
that on principle the instrument should be subjected to the identical prior
treatment before each measurement, so that *for it* each time the same
expectation-catalog (= psi-function) applies, as it is brought up to the
object. For the object it is just the other way around, any interference being
forbidden when a control measurement is to be made, a "repetition of the first
kind" (that leads to *error* statistics). That is the characteristic
difference between object and instrument. It disappears for a "repetition of
the second kind" (that serves for checking the quantum predictions). Here the
difference between the two is actually rather insignificant.

From this we gather further that for a second measurement one may use a
similarly built and similarly prepared instrument - it need not necessarily be
*the same one*; this is in fact sometimes done, as a check on the first one.
It may indeed happen that two quite differently built instruments are so
related to each other that if one measures with them one after the other
(repetition of the first kind!) their two indications are in one-to-one
correlation with each other. They then measure on the object essentially the
same variable - i.e., the same for suitable calibration of the scales.

Thence it follows that two different catalogs, that apply to the same system under different circumstances or at different times, may well partially overlap, but never so that the one is entirely contained within the other. For otherwise it would be susceptible to completion through additional correct statements, namely through those by which the other one exceeds it. --The mathematical structure of the theory automatically satisfies this condition. There is no psi-function that furnishes exactly the same statements as another and in addition several more.

Therefore if a system changes, whether by itself or because of measurements,
there must always be statements missing from the new function that were
contained in the earlier one. In the catalog not just new entries, but also
deletions, must be made. Now knowledge can well be *gained*, but not *lost*.
So the deletions mean that the previously correct statements have now become
incorrect. A correct statement can become incorrect only if the *object* to
which it applies changes. I consider it acceptable to express this reasoning
sequence as follows:

Theorem 1: If different psi-functions are under discussion the system is in different states.

If one speaks only of systems for which a psi-function is in general available, then the inverse of this theorem runs:

Theorem 2: For the same psi-function the system is in the same state.

The inverse does not follow from Theorem 1 but independently of it, directly
from *completeness* or *maximality*. Whoever for the same expectation-catalog
would yet claim a difference is possible, would be admitting that it (the
catalog) does not give information on all justifiable questions. --The
language usage of almost all authors implies the validity of the above two
theorems. Of course, they set up a kind of new reality - in entirely
legitimate fashion, I believe. Moreover they are not trivially tautological,
not mere verbal interpretations of "state". Without presupposed maximality
of the expectation-catalog, change of the psi-function could be brought about
by mere collecting of new information.

We must face up to yet another objection to the derivation of Theorem 1. One
can argue that each individual statement or item of knowledge, under
examination there, is after all a probability statement, to which the
category of *correct*, or *incorrect* does not apply in any relation to an
individual case, but rather in relation to a collective that comes into being
from one's preparing the system a thousand times in identical fashion (in
order then to allow the same measurement to follow; cf. Sect. 8.). That makes
sense, but we must specify all members of this collective to be identically
prepared, since to each the same psi-function, the same statement-catalog
applies and we dare not specify differences that are not specified in the
catalog (cf. the foundation of Theorem 2). Thus the collective is made up of
identical individual cases. If a statement is wrong for *it*, then the
individual case must have changed, or else the collective too would again
be the same.

So, using catchwords for emphasis, I try again to contrast: 1.) The
discontinuity of the expectation-catalog due to measurement is
*unavoidable*, for if measurement is to retain any meaning at all then
the *measured value*, from a good measurement, *must* obtain. 2.) The
discontinuous change is certainly *not* governed by the otherwise valid
causal law, since it depends on the measured value, which is not
predetermined. 3.) The change also definitely includes (because of
"maximality") some *loss* of knowledge, but knowledge cannot be lost, and
so the object *must* change - *both* along with the discontinuous changes
and *also*, during these changes, in an unforeseen, *different* way.

How does this add up? Things are not at all simple. It is the most difficult and most interesting point of the theory. Obviously we must try to comprehend objectively the interaction between measured object and measuring instrument. To that end we must lay out a few very abstract considerations.

This is the point. Whenever one has a complete expectation-catalog - a maximum total knowledge - a psi-function - for two completely separated bodies, or, in better terms, for each of them singly, then one obviously has it also for the two bodies together, i.e., if one imagines that neither of them singly but rather the two of them together make up the object of interest, of our questions about the future.[6]

But the converse is not true. *Maximal knowledge of a total system does not
necessarily include total knowledge of all its parts, not even when these
are fully separated from each other and at the moment are not influencing
each other at all.* Thus it may be that some part of what one knows may
pertain to relations or stipulations between the two subsystems (we shall
limit ourselves to two), as follows: if a particular measurement on the
first system yields *this* result, then for a particular measurement on the
second the valid expectation statistics are such and such; but if the
measurement in question on the first system should have *that* result, then
some other expectation holds for that on the second; should a third result
occur for the first, then still another expectation applies to the second;
and so on, in the manner of a complete disjunction of all possible
measurement results which the one specifically contemplated measurement on
the first system can yield. In this way, any measurement process at all or,
what amounts to the same, any variable at all of the second system can be
tied to the not-yet-known value of any variable of the first, and of
course *vice versa* also. If that is the case, if such conditional
statements occur in the combined catalog, *then it can not possibly be
maximal in regard to the individual systems*. For the content of two
maximal individual catalogs would by itself suffice for a maximal combined
catalog; the conditional statements could not be added on.
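This abstract point can be illustrated in modern terms Schrödinger did not have available. The sketch below (an editorial aside, not part of the translation) uses the smallest example possible today: a pair of two-state systems in an entangled pure state, so that knowledge of the whole is maximal while the catalog of either part alone is maximally uninformative.

```python
# Editorial sketch (not in Schroedinger's text): maximal knowledge of a
# combined system need not be maximal for its parts. We take two two-state
# systems in the entangled pure state (|01> - |10>)/sqrt(2).

from math import sqrt

# Real amplitudes over the basis |00>, |01>, |10>, |11>.
psi = [0.0, 1 / sqrt(2), -1 / sqrt(2), 0.0]

def reduced_density_first(amps):
    """Trace out the second system (real amplitudes):
    rho[i][j] = sum_k amps[ik] * amps[jk]."""
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            rho[i][j] = sum(amps[2 * i + k] * amps[2 * j + k] for k in range(2))
    return rho

rho = reduced_density_first(psi)
# rho is (up to rounding) diag(1/2, 1/2): every question addressed to the
# part alone gets only 50/50 odds, although the whole is in a pure state.
print(rho)

# The remaining knowledge lives entirely in conditional statements:
# if a measurement on system 1 gives 0, then system 2 gives 1 with
# certainty, and vice versa.
prob_01 = psi[1] ** 2   # P(first = 0, second = 1) = 1/2
prob_00 = psi[0] ** 2   # P(first = 0, second = 0) = 0
```

The conditional statements "if 0 on the first, then 1 on the second" exhaust exactly the knowledge that the private catalogs lack, which is the situation the paragraph above describes.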

These conditional predictions, moreover, are not something that has suddenly
fallen in here from the blue. They are in every expectation-catalog. If one
knows the psi-function and makes a particular measurement and this has a
particular result, then one again knows the psi-function, *voila tout*. It's
just that for the case under discussion, because the combined system is
supposed to consist of two fully separated parts, the matter stands out as
a bit strange. For thus it becomes meaningful to distinguish between
measurements on the one and measurements on the other subsystem. This
provides to each full title to a private maximal catalog; on the other hand
it remains possible that a portion of the attainable combined knowledge is,
so to say, squandered on conditional statements, that operate between the
subsystems, so that the private expectancies are left unfulfilled - even
though the combined catalog is maximal, that is even though the psi-function
of the combined system is known.

Let us pause for a moment. This result in its abstractness actually says it all: Best possible knowledge of a whole does not necessarily include the same for its parts. Let us translate this into terms of Sect. 9: The whole is in a definite state, the parts taken individually are not.

"How so? Surely a system must be in some sort of state." "No. State is psi-function, is maximal sum of knowledge. I didn't necessarily provide myself with this, I may have been lazy. Then the system is in no state."

"Fine, but then too the agnostic prohibition of questions is not yet in force and in our case I can tell myself: the subsystem is already in some state, I just don't know which."

"Wait. Unfortunately no. There is no `I just don't know.' For as to the total system, maximal knowledge is at hand..."

*The insufficiency of the psi-function as model replacement rests solely
on the fact that one doesn't always have it.* If one does have it, then
by all means let it serve as description of the state. But sometimes one
does not have it, in cases where one might reasonably expect to. And in
that case, one dare not postulate that it "is actually a particular one,
one just doesn't know it"; the above-chosen standpoint forbids this. "It"
is namely a sum of knowledge; and knowledge, that no one knows, is none.
----

We continue. That a portion of the knowledge should float in the form of
disjunctive conditional statements *between* the two systems can certainly
not happen if we bring up the two from opposite ends of the world and
juxtapose them without interaction. For then indeed the two "know" nothing
about each other. A measurement on one cannot possibly furnish any grasp
of what is to be expected of the other. Any "entanglement of predictions"
that takes place can obviously only go back to the fact that the two
bodies at some earlier time formed in a true sense *one* system, that is
were interacting, and have left behind *traces* on each other. If two
separated bodies, each by itself known maximally, enter a situation in
which they influence each other, and separate again, then there occurs
regularly that which I have just called *entanglement* of our knowledge of
the two bodies. The combined expectation-catalog consists initially of a
logical sum of the individual catalogs; during the process it develops
causally in accord with known law (there is no question whatever of
measurement here). The knowledge remains maximal, but at its end, if the
two bodies have again separated, it is not again split into a logical sum
of knowledges about the individual bodies. What still remains *of that*
may have become less than maximal, even very strongly so. --One notes
the great difference over against the classical model theory, where of
course from known initial states and with known interaction the
individual end states would be exactly known.

The measuring process described in Sect. 8 now fits neatly into this general scheme, if we apply it to the combined system, measured object + measuring instrument. As we thus construct an objective picture of this process, like that of any other, we dare hope to clear up, if not altogether avoid, the singular jump of the psi-function. So now the one body is the measured object, the other the instrument. To suppress any interference from outside we arrange for the instrument by means of built-in clockwork to creep up automatically to the object and in like manner creep away again. The reading itself we postpone, as our immediate purpose is to investigate whatever may be happening "objectively"; but for later use we let the result be recorded automatically in the instrument, as indeed is often done these days.

Now how do things stand, after automatically completed measurement? We
possess, afterwards same as before, a maximal expectation-catalog for the
total system. The recorded measurement result is of course not included
therein. As to the instrument the catalog is far from complete, telling
us nothing at all about where the recording pen left its trace. (Remember
that poisoned cat!) What this amounts to is that our knowledge has
evaporated into conditional statements: *if* the mark is at line 1, *then*
things are thus and so for the measured object, *if* it is at line 2,
then such and such, if at 3, then a third, etc. Now has the psi-function
of the measured *object* made a leap? Has it developed further in accord
with natural law (in accord with the partial differential equation)? No
to both questions. It is no more. It has become snarled up, in accord with
the causal law of the *combined* psi-function, with that of the measuring
instrument. *The expectation-catalog of the object has split into a
conditional disjunction of expectation-catalogs* - like a Baedeker that one
has taken apart in the proper manner. Along with each section there is given
also the probability that it proves correct - transcribed from the original
expectation-catalog of the object. But which one proves right - which
section of the Baedeker should guide the ongoing journey - that can be
determined only by actual inspection of the record.

And what if we *don't* look? Let's say it was photographically recorded
and by bad luck light reached the film before it was developed. Or we
inadvertently put in black paper instead of film. Then indeed have we not
only not learned anything new from the miscarried measurement, but we have
suffered loss of knowledge. This is not surprising. It is only natural that
outside interference will almost always spoil the knowledge that one has
of a system. The interference, if it is to allow the knowledge to be gained
back afterwards, must be circumspect indeed.

What have we won by this analysis? *First*, the insight into the disjunctive
splitting of the expectation-catalog, which still takes place quite
continuously and is brought about through embedment in a combined catalog
for instrument and object. From this amalgamation the object can again be
separated out only by the living subject actually taking cognizance of the
result of the measurement. Some time or other this must happen if that which
has gone on is actually to be called a measurement - however dear to our
hearts it was to prepare the process throughout as objectively as possible.
And that is the *second* insight we have won: *not until this inspection*,
which determines the disjunction, does anything discontinuous, or leaping,
take place. One is inclined to call this a *mental* action, for the object
is already out of touch, is no longer physically affected: what befalls it
is already past. But it would not be quite right to say that the
psi-function of the object which changes *otherwise* according to a
partial differential equation, independent of the observer, should *now*
change leap-fashion because of a mental act. For it had disappeared, it
was no more. Whatever is not, no more can it change. It is born anew, is
reconstituted, is separated out from the entangled knowledge that one has,
through an act of perception, which as a matter of fact is not a physical
effect on the measured object. From the form in which the psi-function
was last known, to the new in which it reappears, runs no continuous
road - it ran indeed through annihilation. Contrasting the two forms, the
thing looks like a leap. In truth something of importance happens in
between, namely the influence of the two bodies on each other, during
which the object possessed no private expectation-catalog nor had any
claim thereunto, because it was not independent.

For in the first place the knowledge of the total system remains always
maximal, being in no way damaged by good and exact measurements. In the
second place: conditional statements of the form "if for A ... then for B
..." can no longer persist once the catalog of B has become maximal. For
it is *not* conditional and to it nothing at all can be added on
relevant to B. Thirdly: conditional statements in the inverse sense (if
for B ... then for A ...) can be transformed into statements about A
alone, because all probabilities for B are already known unconditionally.
The entanglement is thus completely put aside, and since the knowledge of
the total system has remained maximal, it can only mean that along with
the maximal catalog of B came the same thing for A.

And it cannot happen the other way around, that A becomes maximally known
indirectly, through measurements on B, before B is. For then all
conclusions work in the reversed direction - that is, B is too. The
systems become simultaneously maximally known, as asserted. Incidentally,
this would also be true if one did not limit the measurement to just one
of the two systems. But the interesting point is precisely this, that one
*can* limit it to one of the two; that thereby one reaches his goal.

*Which* measurements on B and in what sequence they are undertaken, is
left entirely to the arbitrary choice of the experimenter. He need not
pick out specific variables, in order to be able to use the conditional
statements. He is free to formulate a plan that would lead him to
maximal knowledge of B, even if he should know nothing at all about B.
And it can do no harm if he carries through this plan to the end. If he
asks himself after each measurement whether he has perhaps already
reached his goal, he does so only to spare himself further, superfluous
labor.

What sort of A-catalog comes forth in this indirect way depends obviously
on the measured values that are found in B before the entanglement is
entirely resolved (not on any later ones, in case the measuring goes on
superfluously). Suppose now that in this way I have derived an
A-catalog in a particular case. Then I can look back and consider whether
I might perhaps have found a *different one* if I had put into action a
*different* measuring plan for B. But since after all I neither have
actually touched the system A, nor in the imagined other case would have
touched it, the statements of the other catalog, whatever it might be,
must *also* be correct. They must therefore be entirely contained within
the first, since the first is maximal. But so is the second. So it must
be identical with the first.

Strangely enough, the mathematical structure of the theory by no means
satisfies this requirement automatically. Even worse, examples can be set
up where the requirement is necessarily violated. It is true that in any
experiment one can actually carry out only *one* group of measurements
(always on B), for once that has happened the entanglement is resolved
and one learns nothing more about A from further measurements on B. But
there are cases of entanglement in which two definite programs are
specifiable, of which each 1) must lead to resolution of the entanglement,
and 2) must lead to an A-catalog to which the other can not possibly
lead - whatsoever measured values may turn up in one case or the other.
It is simply like this, that the two series of A-catalogs, that can
possibly arise from the one or the other of the programs, are sharply
separated and have in common not a single term.

These are especially pointed cases, in which the conclusion lies so
clearly exposed. In general one must reflect more carefully. If two
programs of measurement on B are proposed, along with the two-series
of A-catalogs to which they can lead, then it is by no means sufficient
that the two series have one or more terms in common in order for one to
be able to say: well now, surely one of these will always turn up - and
so to set forth the requirements as "presumably fulfilled". That's not
enough. For indeed one knows the probability of every measurement on
B, considered as measurement on the total system, and under many
ab-ovo-repetitions each one must occur with the frequency assigned to it.
Therefore the two series of A-catalogs would have to agree, member by
member, and furthermore the probabilities in each series would have to be
the same. And that not merely for these two programs but also for each of
the infinitely many that one might think up. But this is utterly out of
the question. The requirement that the A-catalog that one gets should
always be the same, regardless of what measurements on B bring it into
being, this requirement, is plainly and simply never fulfilled.

Now we wish to discuss a simple "pointed" example.

In the cited paper it is shown that between these two systems an
entanglement can arise, which at a particular moment, can be compactly
shown in the two equations: q = Q and p = -P. That means: *I know*, if
a measurement of q on the system yields a certain value, that a
Q-measurement performed immediately thereafter on the second will give the
*same* value, and vice versa; and *I know*, if a p-measurement on the first
system yields a certain value, that a P-measurement performed immediately
thereafter will give the opposite value, and vice versa.

A single measurement of *q or p or Q or P* resolves the entanglement and
makes both systems maximally known. A second measurement on the same system
modifies only the statements about *it*, but teaches nothing more about the
other. So one cannot check both equations in a single experiment. But one
can repeat the experiment *ab ovo* a thousand times; each time set up the
same entanglement; according to whim check one or the other of the
equations; and find confirmed that one which one is momentarily pleased to
check. We assume that all this has been done.

If for the thousand-and-first experiment one is then seized by the desire to give up further checking and then measure q on the first system and P on the second, and one obtains

q = 4; P = 7;

can one then doubt that

q = 4; p = -7

would have been a correct prediction for the first system, or

Q = 4; P = 7

a correct prediction for the second? Quantum predictions are indeed not subject to test as to their full content, ever, in a single experiment; yet they are correct, in that whoever possessed them suffered no disillusion, whichever half he decided to check.

There's no doubt about it. Every measurement is for its system the first. Measurements on separated systems cannot directly influence each other - that would be magic. Neither can it be by chance, if from a thousand experiments it is established that virginal measurements agree.

The prediction catalog q = 4, p = -7 would of course be hypermaximal.

But let us once more make the matter very clear. Let us focus attention on
the system labelled with small letters p, q and call it for brevity the
"small" one. then things stand as follows. I can direct *one* of two
questions to the small system, either that about q or that about p. Before
doing so I can, if I choose, procure the answer to *one* of these questions
by a measurement on the fully separated other system (which we shall regard
as auxiliary apparatus), or I may intend to take care of this afterwards.
My small system, like a schoolboy under examination, *cannot possibly know*
whether I have done this or for which questions, or whether and for which
I intend to do it later. From arbitrarily many pretrials I know that the
pupil will correctly answer the first question that I put to him. From
that it follows that in every case he *knows* the answer to *both*
questions. That the answering of the first question, that it pleases me
to put to him, so tires or confuses the pupil that his further answers are
worthless, changes nothing at all of this conclusion. No school principal
would judge otherwise, if this situation repeated itself with thousands of
pupils of similar provenance, however much he might wonder *what*
makes all the scholars so dim-witted or obstinate after the answering of
the first question. He would not come to think that his, the teacher's,
consulting a notebook first suggests to the pupil the correct answer, or
even, in the cases when the teacher chooses to consult it only after ensuing
answers by the pupil, that the pupil's answer has changed the text of the
notebook in the pupil's favor.

Thus my small system holds a quite definite answer to the q-question and
to the p-question in readiness for the case that one or the other is the
first to be put directly to it. Of this preparedness not an iota can be
changed if I should perhaps measure the Q on the auxiliary system (in the
analogy: if the teacher looks up one of the questions in his notebook and
thereby indeed ruins with an inkblot *the* page where the other answer
stands). The quantum mechanician maintains that after a Q-measurement on
the auxiliary system my small system has a psi-function in which "q is
fully sharp, but p fully indeterminate". And yet, as already mentioned,
not an iota is changed of the fact that my small system also has ready an
answer to the p-question, and indeed the same one as before.

But the situation is even worse yet. Not only to the q-question and to the p-question does my clever pupil have a definite answer ready, but rather also to a thousand others, and indeed without my having the least insight into the memory technique by which he is able to do it. p and q are not the only variables that I can measure. Any combination of them whatsoever, for example

p^2 + q^2

also corresponds to a fully definite measurement according to the formulation of Q.M. Now it can be shown[8] that also for this the answer can be obtained by a measurement on the auxiliary system, namely by measurement of P^2 + Q^2, and indeed the answers are just the same. By general rules of Q.M. this sum of squares can only take on a value from the series

hbar, 3 hbar, 5 hbar, 7 hbar, ...

The answer that my small system has ready for the (p^2+q^2)-question (in case this should be the first it must face) must be a number from this series. --It is very much the same with measurement of

p^2 + a^2 q^2

where a is an arbitrary positive constant. In this case the answer must be, according to Q.M., a number from the following series

a hbar, 3a hbar, 5a hbar, 7a hbar, ...

For each numerical value of a one gets a different question, and to each my small system holds ready an answer from the series (formed with the a-value in question).
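In modern notation the origin of these two series can be sketched in a few lines (an editorial aside, not part of the translation): the observable p^2 + a^2 q^2 is a harmonic-oscillator Hamiltonian in disguise, for a suitable choice of mass and frequency.

```latex
% Editorial sketch: p^2 + a^2 q^2 as an oscillator Hamiltonian.
\[
  p^2 + a^2 q^2 \;=\; \frac{p^2}{2m} + \tfrac{1}{2}\, m \omega^2 q^2
  \qquad \text{with } m = \tfrac{1}{2},\ \omega = 2a .
\]
% The oscillator spectrum \hbar\omega\,(n + 1/2), n = 0, 1, 2, ..., then gives
\[
  2a\hbar\left(n + \tfrac{1}{2}\right) \;=\; a\hbar\,(2n+1)
  \;=\; a\hbar,\ 3a\hbar,\ 5a\hbar,\ \dots
\]
```

Setting a = 1 recovers the first series, hbar, 3 hbar, 5 hbar, ... for p^2 + q^2.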

Most astonishing is this: these answers cannot possibly be related to each other in the way given by the formulas! For let q' be the answer held ready for the q-question, and p' for the p-question, then the relation

(p'^2 + a^2 q'^2) / (a hbar) = an odd integer

cannot possibly hold for given numerical values q' and p' and for *any
positive number a*. This is by no means an operation with imagined numbers,
that one cannot really ascertain. One can in fact get two of the numbers,
e.g., q' and p', the one by direct, the other by indirect measurement. And
then one can (pardon the expression) convince himself that the above
expression, formed with the numbers q' and p' and an arbitrary a, is not an
odd integer.

The lack of insight into the relationships among the various answers held
in readiness (into the "memory technique" of the pupil) is a total one, a
gap not to be filled perhaps by a new kind of algebra of Q.M. The lack is all
the stranger, since on the other hand one can show: the entanglement is
already uniquely determined by the requirements q = Q and p = -P. If we know
that the coordinates are equal and the momenta equal but opposite, then there
follows by quantum mechanics a *fully determined* one-to-one arrangement of
*all possible* measurements on both systems. For *every* measurement on the
"small" one the numerical result can be procured by a suitably arranged
measurement on the "large" one, and each measurement on the large stipulates
the result that a particular measurement on the small would give or has
given. (Of course in the same sense as always heretofore: only the virgin
measurement on each system counts.) As soon as we have brought the two
systems into the situation where they (briefly put) coincide in coordinate
and momentum, then they (briefly put) coincide also in regard to all
other variables.

But as to how the numerical values of all these variables of *one* system
relate to each other we know nothing at all, even though for each the system
must have a quite specific one in readiness, for if we wish we can learn it
from the auxiliary system and then find it always confirmed by direct
measurement.

Should one now think that because we are so ignorant about the relations
among the variable-values held ready in *one* system, that none exists, that
far-ranging arbitrary combination can occur? That would mean that such a
system of "*one* degree of freedom" would need not merely *two* numbers for
adequately describing it, as in classical mechanics, but rather many more,
perhaps infinitely many. It is then nevertheless strange that two systems
always agree in *all* variables if they agree in two. Therefore one would
have to make the second assumption, that this is due to our awkwardness;
would have to think that as a practical matter we are not competent to bring
two systems into a situation such that they coincide in reference to two
variables, without *nolens volens* bringing about coincidence also for all
other variables, even though that would not in itself be necessary. One
would have to make these *two* assumptions in order not to perceive as a
great dilemma the complete lack of insight into the interrelationship of
variable values within one system.

q_t = q + (p/m)t;    Q_t = Q + (P/m)t

Let us first talk about the small system. The most natural way of describing
it classically at time t is in terms of coordinate and momentum *at this
time*, i.e., in terms of q_t and p. But one may do it differently. In place
of q_t one could specify q. It too is a "determining part at time t", and
indeed at every time t, and in fact one that does not change with time.
This is similar to the way in which I can specify a certain determining part
of my own person, namely my *age*, either through the number 48, which
changes with time and in the system corresponds to specifying q_t, or
through the number 1887, which is usual in documents and corresponds to
specifying q. Now according to the foregoing:

q = q_t - (p/m)t

Similarly for the second system. So we take as determining parts

for the first system: q_t - (p/m)t and p;
for the second system: Q_t - (P/m)t and P.

The advantage is that *among these the same entanglement goes on
indefinitely*:

q_t - (p/m)t = Q_t - (P/m)t;    p = -P

or solved:

q_t = Q_t - (2 t/m)P; p = -P.

So that what changes with time is just this: the coordinate of the "small" system is not ascertained simply by a coordinate measurement on the auxiliary system, but rather by a measurement of the aggregate

Q_t - (2 t/m)P.
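That the solved relations are mutually consistent can be checked by elementary free-particle kinematics, substituting numbers for the operators (an editorial sketch, not part of the translation; the mass and initial values are arbitrary choices for illustration):

```python
# Editorial sketch: classical free motion reproducing the algebra of the
# text. Entanglement at t = 0 (q = Q, p = -P) evolves into
# q_t = Q_t - (2t/m) P, with p = -P unchanged.

m = 2.0           # common mass (arbitrary for the sketch)
q0, p = 1.3, 0.7  # initial coordinate and momentum of the "small" system
Q0, P = q0, -p    # entanglement at t = 0: equal coordinates, opposite momenta

for t in (0.0, 0.5, 3.0):
    q_t = q0 + (p / m) * t   # free motion of the small system
    Q_t = Q0 + (P / m) * t   # free motion of the auxiliary system

    # The combination q_t - (p/m) t = Q_t - (P/m) t holds at every t ...
    assert abs((q_t - (p / m) * t) - (Q_t - (P / m) * t)) < 1e-12
    # ... and with p = -P it is equivalent to the solved form of the text.
    assert abs(q_t - (Q_t - (2 * t / m) * P)) < 1e-12
```

The check only confirms the algebra of the determining parts; the quantum-mechanical content, that the aggregate Q_t - (2t/m)P must be measured *directly*, is exactly the point the next paragraph makes.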

Here, however, one must not get the idea that maybe he measures Q_t *and* P,
because that just won't go. Rather one must suppose, as one always must
suppose in Q.M., that there is a direct measurement procedure for this
aggregate. Except for this change, *everything* that was said in Sections
12 and 13 applies at any point of time; in particular there exists at all
times the one-to-one entanglement of *all* variables together with its
evil consequences.

It is just this way too, if within each system a force works, except that then q_t and p are entangled with variables that are more complicated combinations of Q_t and P.

I have briefly explained this in order that we may consider the following.
That the entanglement should change with time makes us after all a bit
thoughtful. Must perhaps all measurements, that were under discussion, be
completed in very short time, actually *instantaneously*, in zero time,
in order that the unwelcome consequences be vindicated? Can the ghost be
banished by reference to the fact that measurements take time? No. For
each single experiment one needs just *one* measurement on each system; only
the virginal one matters, further ones apart from this would be without
effect. How long the measurement lasts need not therefore concern us, since
we have no second one following on. One must merely be able to so arrange
the two virgin measurements that they yield variable values for the same
definite *point* of time, known to us in advance - known in advance, because
after all we must direct the measurements at a pair of variables that are
entangled at just this point of time.

"Perhaps it is not possible so to direct the measurements?"

"Perhaps. I even presume so. Merely: *today's* Q.M. must require this. For
it is now set up so that its predictions are always made for a *point*
of time. Since they are supposed to relate to measurement results, they
would be entirely without content if the relevant variables were not
measurable *for* a definite point of time, whether the measurement itself
lasts a long or a short while."

When we *learn* the result is of course quite immaterial. Theoretically
that has as little weight as for instance the fact that one needs several
months to integrate the differential equations of the weather for the next
three days. --The drastic analogy with the pupil examination misses the
mark in a few points of the law's letter, but it fits the spirit of the
law. The expression "the system knows" will perhaps no longer carry the
meaning that the answer comes forth from an instantaneous situation; it
may perhaps derive from a succession of situations, that occupies a
finite length of time. But even if it be so, it need not concern us so
long as the system somehow brings forth the answer from within itself,
with no other help than that we tell it (through the experimental
arrangement) *which* question we would like to have answered; and so long as
the answer itself is uniquely tied to a *moment* of time: which for better
or for worse must be presumed for every measurement to which contemporary
Q.M. speaks, for otherwise the quantum mechanical predictions would have
no content.

In our discussion, however, we have stumbled across a possibility. If the formulation could be so carried out that the quantum mechanical predictions did not, or did not always, pertain to a quite sharply defined point of time, then one would also be freed from requiring this of the measurement results. Thereby, since the entangled variables change with time, setting up the antinomical assertions would become much more difficult.

That prediction for sharply-defined time is a blunder, is probable also
on other grounds. The numerical value of time is like any other the result
of observation. Can one make exception just for measurement with a clock?
Must it not like any other pertain to a variable that in general has no
sharp value and in any case cannot have it simultaneously with *any* other
variable? If one predicts the value of *another* for a particular *point
of time*, must one not fear that both can never be sharply known together?
Within contemporary Q.M. one can hardly deal with this apprehension. For
time is always considered a priori as known precisely, although one would
have to admit that every look-at-the-clock disturbs the clock's motion in
uncontrollable fashion.

Permit me to repeat that we do not possess a Q.M. whose statements should *not*
be valid for sharply fixed points of time. It seems to me that this lack
manifests itself directly in the former antinomies. Which is not to say
that it is the only lack which manifests itself in them.

The remarkable theory of measurement, the apparent jumping around of the
psi-function, and finally the "antinomies of entanglement", all derive from
the simple manner in which the calculation methods of quantum mechanics allow
two separated systems conceptually to be combined together into a single one;
for which the methods seem plainly predestined. When two systems interact,
their psi-functions, as we have seen, do not come into interaction but
rather they immediately cease to exist and a single one, for the combined
system, takes their place. It consists, to mention this briefly, at first
simply of the *product* of the two individual functions; which, since the
one function depends on quite different variables from the other, is a
function of all these variables, or "acts in a space of much higher
dimension number" than the individual functions. As soon as the systems
begin to influence each other, the combined function ceases to be a
product and moreover does not again divide up, after they have again become
separated, into factors that can be assigned individually to the systems.
Thus one disposes provisionally (until the entanglement is resolved by an
actual observation) of only a *common* description of the two in that space
of higher dimension. This is the reason that knowledge of the individual
systems can decline to the scantiest, even to zero, while knowledge of the
combined system remains continually maximal. Best possible knowledge of a
whole does *not* include best possible knowledge of its parts - and that
is what keeps coming back to haunt us.
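That best possible knowledge of a whole does not include best possible knowledge of its parts can be illustrated for a modern reader with a two-qubit sketch (an addition under stated assumptions: NumPy, and a Bell state standing in for an entangled psi-function of the combined system). Tracing out one subsystem of the entangled pair leaves a maximally mixed state, whereas a product state leaves each part pure.

```python
import numpy as np

# Product state: psi = phi (x) chi — each subsystem keeps its own function.
phi = np.array([1.0, 0.0])
chi = np.array([0.0, 1.0])
product = np.kron(phi, chi)

# Entangled state: a Bell state, which does NOT factor into a product.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def reduced_density(psi):
    """Trace out the second qubit of a two-qubit pure state."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ikjk->ij', rho)

# For the product state, the part is still pure (rank 1): [[1, 0], [0, 0]].
print(reduced_density(product))

# For the Bell state, the part is maximally mixed, [[1/2, 0], [0, 1/2]]:
# knowledge of the combined system is maximal, knowledge of the part is nil.
print(reduced_density(bell))
```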

Whoever reflects on this must after all be left fairly thoughtful by the
following fact. The conceptual joining of two or more systems into *one*
encounters great difficulty as soon as one attempts to introduce the
principle of special relativity into Q.M. Already seven years ago
P.A.M. Dirac found a startlingly simple and elegant relativistic solution
to the problem of a single electron.[10] A series of experimental
confirmations, marked by the key terms electron spin, positive electron,
and pair creation, can leave no doubt as to the basic correctness of the
solution. But in the first place it does nevertheless very strongly
transcend the conceptual plan of Q.M. (that which I have attempted to
picture *here*),[11] and in the second place one runs into stubborn
resistance as soon as one seeks to go forward, according to the prototype
of non-relativistic theory, from the Dirac solution to the problem of
several electrons. (This shows at once that the solution lies outside
the general plan, in which, as mentioned, the combining together of
subsystems is extremely simple.) I do not presume to pass judgment on the
attempts which have been made in this direction.[12] That they have reached
their goal, I must doubt first of all because the authors make no such claim.

Matters stand much the same with another system, the electromagnetic field.
Its laws are "relativity personified", a *non*-relativistic treatment being
in general impossible. Yet it was this field, which in terms of the
classical model of heat radiation provided the first hurdle for quantum
theory, that was the first system to be "quantized". That this could be
successfully done with simple means comes about because here one has things
a bit easier, in that the photons, the "atoms of light", do not in general
interact directly with each other,[13] but only via the charged particles.
Today we do not as yet have a truly unexceptionable quantum theory of the
electromagnetic field.[14] One can go a long way with *building up out
of subsystems* (Dirac's theory of light[15]), yet without quite reaching
the goal.

The simple procedure provided for this by the non-relativistic theory is perhaps after all only a convenient calculational trick, but one that today, as we have seen, has attained influence of unprecedented scope over our basic attitude toward nature.

My warmest thanks to Imperial Chemical Industries, London, for the leisure to write this article.

[1] E. Schrödinger, "Die gegenwärtige Situation in der Quantenmechanik", Naturwissenschaften 23: pp.807-812; 823-828; 844-849 (1935).

[2] A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47: p.777 (1935).

[3] E. Schrödinger, Proc. Cambridge Phil. Soc. 31: p.555 (1935); *ibid.*,
32: p.446 (1936).

[4] h = 1.041 x 10^(-27) erg sec. Usually in the literature the 2π-fold of
this (6.542 x 10^(-27) erg sec) is designated as h and for *our* h an h with a
cross-bar is written. [Transl. Note: In conformity with the now universal
usage, hbar is used in the translation in place of h.]

[5] For illustration see Fig. 5 or 6 on p.375 of the 1927 volume of this journal; or Fig. 1, p.734 of the preceding year's volume (1934), though these are proton tracks.

[6] Obviously. We cannot fail to have, for instance, statements on the relation of the two to each other. For that would be, at least one of the two, something in addition to its psi-function. And such there cannot be.

[7] A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47: 777 (1935). The appearance of this work motivated the present - shall I say lecture or general confession?

(Paris, 1931); Cursos de la Universidad Internacional de Verano en Santander, 1: p.60 (Madrid, Signo, 1935).

[10] Proc. Roy. Soc. Lond. A117: p.610 (1928).

[11] P.A.M. Dirac, The Principles of Quantum Mechanics, 1st ed., p.239; 2nd ed. p.252. Oxford: Clarendon Press, 1930 or 1935.

[12] Herewith a few of the more important references: G. Breit, Phys. Rev. 34: p.553 (1929) and 39: p.616 (1932); C. Møller, Z. Physik 70: p.786 (1931); P.A.M. Dirac, Proc. Roy. Soc. Lond. A136: p.453 (1932) and Proc. Cambridge Phil. Soc. 30: p.150 (1934); R. Peierls, Proc. Roy. Soc. Lond. A146: p.420 (1934); W. Heisenberg, Z. Physik 90: p.209 (1934).

[13] But this holds, probably, only approximately. See M. Born and L. Infeld, Proc. Roy. Soc. Lond. A144: p.425 and A147: p.522 (1934); A150: p.141 (1935). This is the most recent attempt at a quantum electrodynamics.

[14] Here again the most important works, partially assignable, according to their contents, also according to the penultimate citation: P. Jordan and W. Pauli, Z. Physik 47: p.151 (1928); W. Heisenberg and W. Pauli, Z. Physik 56: p.1 (1929); 59: p.168 (1930); P.A.M. Dirac, V.A. Fock, and B. Podolsky, Physik. Z. Sowjetunion 6: p.468 (1932); N. Bohr and L. Rosenfeld, Danske Videns. Selsk. (math.-phys.) 12: p.8 (1933).

[15] An excellent reference: E. Fermi, Rev. Mod. Phys. 4: p.87 (1932).