# High school physics FAQ

High school physics poses questions from the profound to the peculiar. This page collects frequently asked questions from the high school physics forum created for students studying "HSC Physics" in the state of New South Wales, Australia. If you wish to add questions or to extend answers, please do so via that forum. We also maintain a web site of high school physics resources for the NSW syllabus. Other questions may be addressed to J.Wolfe@unsw.edu.au

### Relativity

For background, see our web site Einstein Light: relativity in 10 minutes... or 10 hours, our contribution to the 100th anniversary of relativity. It has a set of multimedia presentations of some of the key points, and a large set of web pages going into more detail.

#### Inertial and non-inertial frames of reference.

"We are asked to perform an investigation between non-inertial and inertial frames of reference. (As in to do an experiment in the real world to see whether we are in inertial frame or non-inertial frame.)"
An inertial frame is one in which Newton's laws work: F = ma. The surface of the Earth is almost an inertial frame (not quite: it is rotating, but very slowly, one turn every 23.9 hours; see Foucault's pendulum for a discussion). Any frame moving at constant velocity relative to an inertial frame is also inertial. So a bus that is not accelerating (including not turning and not bumping) is an inertial frame. A bus going round a corner is not an inertial frame: a standing passenger seems to accelerate sideways, and must hold on to the bar just to stay 'in the same place'. These are investigations that you have already done.

At funfairs, there are often merry-go-rounds or more dangerous variants on this. I went on one at the Easter Show called the 'gravitron': a big cylinder that spun--we were 'pinned to the wall'. On these, a ball does not travel in a vertical plane: it seems to turn corners. Kinematics seems seriously weird in this frame. Then the guy running the gadget shouted at us to stop throwing the ball, and he was probably right to do so, because it was hard to predict where it was going to go. Inside the 'gravitron', or another fairground ride, many such simple experiments will show that Newton's laws seem to fail, and so that you are in a non-inertial frame. See the principle of relativity.

#### Centripetal forces and inertial frames

"My physics teacher tells me that when I go around a sharp curve in my car, there is no force causing me to move away from the centre of curvature. So what is happening to make me feel as if I am sliding towards the outside?"

#### The two airplane version of the twin paradox: is General Relativity involved?

(See the twin paradox for an introduction.) "We were discussing proof of special relativity/time dilation in class and used, as an example, the idea of a clock being taken on a fast plane having time run slower than an identical clock left on the ground. The suggestion was made, however, that this plane would really be accelerating (circling the earth) and would therefore be in a non-inertial frame of reference and we would need to use general relativity!"
Yes, you're right. The fast plane (the one going East) is subject to General Relativity (GR) corrections (a) because of its variable altitude in the Earth's gravitational field and (b) because of its centripetal acceleration. For the plane going West, the centripetal acceleration is much smaller: roughly speaking, it stays between the Earth and the sun while the Earth rotates below it (at temperate latitudes this is possible). The gravitational field term would be roughly the same for both, but not exactly, because East- and West-bound planes often fly at different altitudes to take advantage of, or to avoid, the jet stream.

So for both reasons (gravity and an accelerating frame, which are locally indistinguishable according to the equivalence principle of GR), there is a GR correction. The gravitational term is of the same order for both planes. In fact, the gravitational and SR terms turn out to be of comparable size: both are hundreds of nanoseconds. The acceleration term is smaller than the gravitational term. (The acceleration and gravitational terms would be comparable for a satellite, but planes travel much more slowly than near-earth satellites. The SR and gravitational terms are comparable for an object on the Earth's surface.)

So yes, an explanation of the time difference in the two clocks requires an explicit calculation of both terms. The original report is J. C. Hafele and R. E. Keating, Science 177, 166 (1972). In fact, the GR terms, while of the same order as the SR terms, are fairly similar for the two planes. So the main effect is the SR effect, and it is in agreement with SR calculations. Seen from a non-rotating frame of reference above the South Pole, the Eastward flying plane has its own speed plus the speed of the ground (the atmosphere travels with the Earth, to a good approximation). The Earth turns below the Westward flying plane. At sufficiently high latitude, it can stay in the same time zone for the entire flight.
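As a very rough sketch of the special relativity term alone, viewed from the non-rotating frame described above (the speeds and flight time are illustrative assumptions, not the actual Hafele-Keating flight data, and the GR terms are ignored):

```python
# Order-of-magnitude SR term in a Hafele-Keating style experiment, seen
# from a non-rotating (Earth-centred) frame. All speeds and times below
# are illustrative assumptions, and the GR (gravitational) term is ignored.

C = 2.998e8                # speed of light (m/s)
V_GROUND = 465.0           # speed of the ground at the equator (m/s)
V_PLANE = 250.0            # assumed ground speed of each plane (m/s)
FLIGHT_TIME = 48 * 3600.0  # assumed total flying time, ~48 h (s)

def sr_lag(v, t):
    """Time lost by a clock moving at speed v << c over coordinate time t,
    using the low-speed expansion t * v**2 / (2 * c**2)."""
    return t * v**2 / (2 * C**2)

# In this frame the ground clock moves at V_GROUND, the eastward plane at
# V_GROUND + V_PLANE, and the westward plane at V_GROUND - V_PLANE.
ground = sr_lag(V_GROUND, FLIGHT_TIME)
east = sr_lag(V_GROUND + V_PLANE, FLIGHT_TIME)
west = sr_lag(V_GROUND - V_PLANE, FLIGHT_TIME)

print(f"eastward clock relative to ground: {(ground - east) * 1e9:.0f} ns")
print(f"westward clock relative to ground: {(ground - west) * 1e9:.0f} ns")
```

The eastward clock comes out losing a few hundred nanoseconds and the westward clock gaining over a hundred: the same signs and order of magnitude as the published SR terms.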

"Analyse and interpret some of Einstein's thought experiments involving mirrors and trains and discuss the relationship between thought and reality."
The thought experiment involving the mirrors on either side of a train is rather like Michelson and Morley's interferometer experiment, though the geometry is more complicated. In this case the Earth is the 'train', travelling towards and away from a distant star at six-month intervals.

The relationship between thought and reality is the province of philosophers and, enjoyable though it be, I'd rather not trespass.

### Mass defect

The syllabus says: "Explain the concept of a mass defect using Einstein's equivalence between mass and energy" How? And does it work only for nuclear reactions?

### Mass dilation

"What is mass dilation?"

### Relativity and space travel

"What are the implications of mass increase, time dilation and length contraction for space travel?"
The short answer is "not very much". For space travel of the sort that we can conceive, the effects due to time dilation and length contraction are tiny. This does not mean that they cannot be measured: the very small time dilation effect can now be measured directly. But the practical implications are minimal.

### Space travel

See Space on the UNSW HSC site for background.

#### What is the advantage of setting PE = 0 at r = infinity instead of, let's say, at the centre of the Earth?

Gravitational PE at a distance r from the Earth's centre is given by U = − GMm/r, where r ≥ radius of the Earth, and U = 0 at r = infinity. The equation is only true when r is greater than or equal to the radius of the Earth. Newton proved a nice theorem establishing this: for gravity, a hollow shell has no effect if you're inside it, and if you're outside it, all of its mass may be considered to act at the centre. So the equation involving the mass M of the whole Earth only applies when you are outside the Earth, and the centre of the Earth can't be used as the reference point. Using the surface (r = rE) would be possible: this would give Urelative = − GMm/r + GMm/rE. However, this is not used: it is more awkward, it has a new parameter to remember, and it is specific to one particular planet. The sketch compares the usual astronomical version (−GMm/r, the solid line) and the local version (mgh, dashed line). mgh is a poor approximation for altitudes that are not negligible in comparison with the radius of the Earth.

#### The syllabus says 'Define gravitational potential energy as the work done to move an object from a very large distance away to a point in a gravitational field.' Can anyone explain this? Why do we define gravitational potential energy?

1) We divide forces into two sorts: conservative forces (such as gravity) and nonconservative forces (such as friction). By definition, the work done during a round trip is zero for a conservative force. Example: I lift my pen from the desk, I do work against gravity. I lower it to the desk, gravity does work on me. Total work done in the round trip = 0. I slide the pen to the right, I do work against friction. I slide it back to the left and I still do work against friction. Total work done in the round trip > 0.

2) For an object acted on by a conservative force, we can define a potential energy due to that conservative force, as a function of position. The difference in potential energy (Ub - Ua) between points a and b is defined as the work done against that force to move the object from a to b. (You can now see why we can't do this for nonconservative forces: if we do a round trip from point b to point b, we do work against such forces and so the potential energy at b would have no unique definition.)

3) Gravity is a conservative force. So, for a body with mass m, we define the difference in gravitational potential energy between points a and b as the work done against gravity to move mass m from a to b.

4) Note that the force may vary with position: the Earth's gravitational field gets stronger as we approach the surface of the Earth, from either direction. So we could say

dU = dW = −F·ds, where F is the gravitational force, and so
U = − ∫F·ds.
Notice that the integral introduces a constant of integration. This constant corresponds to a reference point for gravitational potential energy: in terms of the definition above, it is the gravitational potential energy at point a, where we chose a to be some convenient reference (see the section above).
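As a check of point 4, here is a minimal numerical sketch (standard values of G and the Earth's mass and radius, with a 1 kg test mass) showing that the work integral reproduces the −GMm/r form, and that mgh is only a local approximation:

```python
# Numerical sketch of point 4: integrate dU = -F.ds for gravity along a
# radial path and compare with -GMm/r at the endpoints, then with the
# local approximation mgh. Standard values of G, Earth's mass and radius.

G, M, m = 6.674e-11, 5.972e24, 1.0
R_E = 6.371e6

def force(r):
    """Radial component of the gravitational force on m (negative: inward)."""
    return -G * M * m / r**2

def delta_u(r_a, r_b, n=100_000):
    """Work done against gravity from r_a to r_b, by the midpoint rule."""
    dr = (r_b - r_a) / n
    return sum(-force(r_a + (i + 0.5) * dr) * dr for i in range(n))

h = 1e5                                    # 100 km altitude
numeric = delta_u(R_E, R_E + h)
analytic = -G * M * m / (R_E + h) + G * M * m / R_E
local = m * (G * M / R_E**2) * h           # mgh, with g = GM/R_E**2

print(numeric, analytic)     # the two agree closely
print(local / analytic)      # mgh overestimates by h/R_E, here about 1.6%
```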

We are supposed to discuss the relative energy costs associated with Space Travel.

"The main energy cost associated with space travel is currently fuel to reach Low Earth Orbit. Energy is also needed to leave this orbit, change direction/ accelerate and for communication. What else is there to say?"

It's true that we have to give a satellite the extra gravitational potential energy of its orbit, plus the kinetic energy associated with a speed that allows it to stay there.

But we actually spend more energy than that, because we have to lift a lot of fuel. Most of the fuel doesn't go very high, but we still have to lift and to accelerate that fuel. So most of the energy goes into carrying fuel, and most of that fuel is there to carry more fuel, and most of that..... Which is why the Saturn V booster is a very big can of fuel.

If we fired the satellite out of a gun (as in Jules Verne's "From the Earth to the Moon"), we would burn the fuel on the ground, and therefore not have to lift or accelerate it. We would therefore need only a tiny fraction of the energy that is currently used. Unfortunately, the huge acceleration would damage the satellite and kill any passengers.
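The 'fuel to lift fuel' spiral can be quantified with the standard Tsiolkovsky rocket equation; the exhaust speed and delta-v below are round illustrative figures, not data from the text:

```python
# The 'fuel to lift fuel' spiral, quantified with the standard Tsiolkovsky
# rocket equation: delta_v = v_e * ln(m0 / m_final). The exhaust speed and
# the delta-v needed for low Earth orbit are round illustrative figures.

from math import exp

V_EXHAUST = 4500.0   # assumed effective exhaust speed (m/s)
DELTA_V = 9400.0     # rough delta-v to reach low Earth orbit, losses included (m/s)

# Invert the rocket equation to get the required mass ratio m0/m_final.
mass_ratio = exp(DELTA_V / V_EXHAUST)
fuel_fraction = 1 - 1 / mass_ratio

print(f"initial/final mass ratio: {mass_ratio:.1f}")
print(f"fraction of launch mass that is propellant: {fuel_fraction:.0%}")
```

Even with these generous assumptions, nearly ninety per cent of the launch mass is propellant, which is the point made above about the Saturn V.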

Microwaves have wavelengths measured in microns to cm. What we call radio waves are usually rather longer, say metres to many km.

In order to communicate over a long distance, you need to confine the radiated power to a beam of small cross sectional area, in other words to send out a nearly parallel beam. Then you need to intercept it and focus it. For both of these, you use a parabolic dish, a little like those used for satellite television. Now the whole idea of rays and focussing only works well for dishes much bigger than a wavelength. So, for microwaves, the dish need only be metres in size. For radio waves, larger dishes are required.

For the dish on the ground, this is not a big problem, and dishes like the one at Tidbinbilla near Canberra are used both for transmission and reception. (There is a fun (but scientifically silly) movie called The Dish, which is about the use of the radiotelescope at Parkes--one of the world's finest astronomical instruments--for communication with Apollo XI.)

For the dish on the spacecraft, there are limits to the possible size. So the solution is to keep the wavelength short, the dish on the spacecraft smallish, and the dish on the ground big.

Does a bullet that is shot straight up return to the ground at the same speed and if so, why?
On the Moon, in the absence of air resistance, the mechanical energy of a projectile is conserved. When the bullet returned to the same level (same gravitational PE), it would have the same kinetic energy, and so the same speed.

On the Earth, small falling objects quickly reach a speed at which the force of drag equals their weight: their terminal speed. For a bullet, this would be several tens of metres per second, whereas a bullet is usually fired at hundreds of m/s. So on Earth, the bullet returns much more slowly than it left.
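A rough order-of-magnitude estimate of that terminal speed, from setting drag equal to weight (the bullet's mass, diameter and drag coefficient below are illustrative assumptions):

```python
# Order-of-magnitude terminal speed of a falling bullet, from drag = weight:
# m*g = 0.5 * rho * C_d * A * v**2. Mass, diameter and drag coefficient are
# illustrative assumptions for a small tumbling bullet.

from math import pi, sqrt

g = 9.8        # gravitational acceleration (m/s^2)
rho = 1.2      # density of air (kg/m^3)
m = 0.010      # assumed bullet mass, 10 g (kg)
d = 0.008      # assumed bullet diameter, 8 mm (m)
C_d = 1.0      # assumed drag coefficient for a tumbling bullet

A = pi * (d / 2)**2                             # cross-sectional area
v_terminal = sqrt(2 * m * g / (rho * C_d * A))
print(f"terminal speed ~ {v_terminal:.0f} m/s")
```

This comes out at several tens of metres per second, far below a typical muzzle speed of hundreds of m/s.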

### Astrophysics

Question: "Recall the components of the electromagnetic spectrum and describe the properties of each component. Explain why some of these wavebands can only be detected from space-based observations."

Question: "Define the terms 'resolution' and 'sensitivity'."
Resolution refers to detail in space or angle. For instance, your eye has a resolution of about 1 minute of arc (about one sixtieth of a degree, or 0.0003 radians). So, at 20 cm from your eye, you can resolve about 20 cm × 0.0003 radians = 0.06 mm. This is the size of a moderately large biological cell, such as a human egg cell.

Sensitivity refers to the minimum signal required by an instrument. Larger instruments receive more light, and so (all else equal) are more sensitive. For example, under optimum, dark-adapted conditions, your eye requires about 70 photons to form an image (only about 10% of these are captured by photoreceptors).

The sensitivity of telescopes is often limited by optical noise (stray light) or by electrical noise in the detectors. Electrical noise is often minimised by running the detectors at very low temperatures, where the thermal motion of electrons is reduced.

New generation telescopes. What is the impact of astrophysics on society?

#### Parallax, resolution, Airy discs, parsecs, distances

"What are the trigonometric parallax limits for space-based and ground-based methods of measurement?"

Parallax refers to the different views that you see from two different positions. Try this experiment. Hold the index finger of your left hand vertical, 20 cm in front of you. Hold the index finger of your right hand vertical, 40 cm in front of you. Now close your left eye and, using just your right eye, move the two fingers sideways until they line up. Now close your right eye and open the left. The closer finger has 'jumped' to the right of the further finger. Repeat a few times. Compared to a distant background, both fingers jump to the right, but the closer one jumps farther. If you measure the angles through which they jump and the distance between your eyes, you can work out how far away the fingers are.

For distant objects, the distance between our viewing positions must be greater than the distance between your eyes. Fortunately for astronomers, the Earth shifts our telescopes round the sun once a year, so we can get a separation equal to the diameter of the orbit of the Earth (16 light minutes) if we wait six months, as shown in this diagram. In this sketch, which is not to scale, imagine an observer looking at objects A and B, standing at the pole of the Earth with his head towards us. Now he sees object A to be to the right of B. Six months ago, he saw it to be to the left of B. Now most stars are so far away from us that we cannot observe any relative motion in this way. However, for close stars it is possible. The next sketch shows the path of light from a close object and from a very distant star. From trigonometry,

D = R/tan θ ≈ R/θ
where we have used the small angle approximation for θ measured in radians. A parsec is defined as the distance to an object that 'moves' (with respect to the distant stars) by an angle of 1 second of arc (1/3600 of a degree) when the Earth moves by the mean radius of its orbit. In terms of this sketch, if θ = one second of arc, then D = 1 parsec. Now all stars except the sun are more than one parsec distant, so to measure their distance by parallax, we need to be able to resolve angles of about 1 second of arc or better. Is this possible?
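As a check of the small-angle arithmetic, one can compute the size of a parsec directly from the definition:

```python
# Compute one parsec from the definition: the distance at which the mean
# Earth-sun distance (1 AU) subtends an angle of one second of arc.

from math import radians

AU = 1.496e11                   # mean Earth-sun distance (m)
theta = radians(1 / 3600)       # one second of arc, in radians

parsec = AU / theta             # small-angle form D = R/theta
print(f"1 parsec = {parsec:.3e} m = {parsec / 9.461e15:.2f} light years")
```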

One limitation to the angular resolution of telescopes is due to a wave effect called diffraction. When parallel light is incident on a circular lens or telescope mirror of aperture a, it cannot be focussed onto a perfect point, but rather makes a small circular smudge called the Airy disc. Around this bright disc is a dark ring, then a series of bright and dark rings: the diffraction pattern of a circular aperture. The angular diameter of the central bright disc is of the order of λ/a, where λ is the wavelength of the light (or other waves). If the angular separation between two stars is smaller than the size of this disc (as is the case for the majority of double stars), then it is very difficult to resolve them as two different stars. (In practice, this theoretical limit is not always achieved in optical telescopes because of such effects as the bending of light in the atmosphere.) So an angle of λ/a is approximately the theoretical limit to the angle that can be resolved by a telescope (or camera, or eye*).

Radio telescopes, which use long wavelengths (eg 21 cm, the wavelength of the 'hydrogen line'), have to be much bigger than optical telescopes (for which λ ~ 0.0005 mm), but in both cases, the bigger the better. Optical telescopes may have an aperture a of several metres. Individual radio telescopes may have apertures of tens of metres (eg the dish at Parkes is 64 m), but separate radio telescopes may be connected to provide a bigger effective aperture. The Australia Telescope links radio telescopes across the country to provide an effective aperture of thousands of km.
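To put rough numbers on the λ/a limit (the apertures below are round, illustrative figures, not precise specifications):

```python
# Rough numbers for the diffraction limit, theta ~ lambda/a, for some of
# the instruments mentioned above. The apertures are round illustrative
# figures, not precise specifications.

ARCSEC = 4.85e-6   # one second of arc, in radians

CASES = [
    # (name, wavelength in m, aperture in m)
    ("eye in bright light", 0.5e-6, 2e-3),
    ("optical telescope", 0.5e-6, 4.0),
    ("Parkes dish at the 21 cm line", 0.21, 64.0),
]

results = {}
for name, wavelength, aperture in CASES:
    theta = wavelength / aperture        # radians
    results[name] = theta / ARCSEC       # seconds of arc
    print(f"{name}: ~{results[name]:.2g} arcsec")
```

The optical telescope comes out well under one second of arc, which is why ground-based parallax measurements of nearby stars are possible at all; the Parkes figure (hundreds of seconds of arc) shows why single radio dishes must be linked into much larger effective apertures.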

Space based optical telescopes have the advantage that they have no atmospheric distortion and so they can measure smaller angles than ground based ones.

* λ/a is also one of the limits to the angle you can resolve with your eye. This limit is only achieved, however, when your pupil is almost closed (aperture a less than about 2 mm), in very bright light. In dim light, when your pupil is open wider, the angular resolution is typically a bit better than one minute (1/60 degree), and is determined by the spacing of photoreceptors in your retina (which you can now work out).
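Working that spacing out, assuming a one-minute resolution and a typical textbook figure of about 17 mm for the distance from the eye's lens to the retina:

```python
# Working out the photoreceptor spacing implied by a one-minute angular
# resolution. The ~17 mm lens-to-retina distance is a typical textbook
# figure, assumed here.

from math import radians

resolution = radians(1 / 60)    # one minute of arc, in radians
eye_length = 0.017              # assumed lens-to-retina distance (m)

spacing = resolution * eye_length
print(f"photoreceptor spacing ~ {spacing * 1e6:.0f} micrometres")
```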

How can one tell how far away a star is? For close stars, you can use parallax. But because the Earth's orbit is small compared to interstellar distances, this doesn't work for most stars, or for galaxies.

Fortunately for astronomers and cosmologists, there is a class of stars called Cepheid variables, whose brightness varies periodically over time. Further, the period T of the oscillation in brightness is related to the total output power of the star. For a large, high power Cepheid variable, the period may be longer than a month; for a small, low power star, it may be days.

Once we know the relation between the period T and the light power P, we can determine how much power a Cepheid puts out simply by timing the variation in its brightness. If we know how much light it puts out and how bright it appears viewed from Earth, we can work out how far away it is. Here's how it works:

First, we look at Cepheid variables whose distances are known. Some of them are close enough to allow us to determine their distance r from parallax. From the intensity (I = power per unit area) of light received on Earth, we work out their power P from the inverse square law. Consider a sphere centred on the Cepheid variable. The area of the sphere is 4πr². All of the power P passes through this area, so the power per unit area is:

I = P/(4πr²).
From these measurements on relatively close Cepheid variables, we construct the calibration curve of P as a function of T. Once we have this curve, then whenever we find a distant Cepheid, we use T to get P. Then, using the same inverse square law, we can work out the distance r.
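A sketch of that last step, with invented illustrative numbers for the power and the measured intensity:

```python
# Sketch of the last step of the Cepheid method: given the power P read
# off the calibration curve and the intensity I measured at Earth, invert
# the inverse square law I = P/(4*pi*r**2). The numbers are invented for
# illustration only.

from math import pi, sqrt

P = 4e30      # assumed power of the Cepheid (W), roughly 10^4 suns
I = 3e-12     # assumed intensity measured at Earth (W/m^2)

r = sqrt(P / (4 * pi * I))
print(f"distance r = {r:.2e} m = {r / 9.461e15:.0f} light years")
```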

Cepheid variables were first studied in the first decade of the twentieth century by Henrietta Leavitt, one of the first women to become famous in astronomy. She studied Cepheid variables in one of the Magellanic Clouds. The stars in that galaxy are all at nearly the same distance from Earth, so she knew that their different apparent brightnesses were determined only by their power outputs. At the time, she could only use the Cepheid variables for relative distances, because the parallax method was not sufficiently accurate.

We now know that the Cepheid variable cycle involves thermal feedback produced by the different ionization states of helium, which is relatively abundant in older stars. Doubly ionised helium is more opaque than singly ionised. If the star is hot enough to produce doubly ionised helium, this opaque layer insulates the star, making it hotter still. As the temperature rises it expands, but this expansion cools it, so the helium captures an electron and becomes less opaque, which continues the cooling. Cooling causes it to contract, which raises the temperature, and the cycle continues.

The Cepheid variable method works for distant stars in our own galaxy, and it also works for 'close' galaxies such as the Magellanic Clouds. However, for distant galaxies, we can no longer distinguish individual stars. In these cases, various other methods are used. For instance, we can estimate the power of the whole galaxy (eg the brightest galaxy in a cluster) and use that to infer the distance from the inverse square law. One type of supernova (the exploding white dwarf star) provides another method, because these stars explode when they reach a particular mass, and so they do not vary much in intrinsic brightness.

#### The most distant objects are reported to be about 13 billion light years away, and the universe is said to be 14 billion years old. What stops us seeing further?

In a sense, the furthest visible thing one can see is exactly cT away, where T is the age of the universe. That is the big bang itself, which we see as the microwave background. This is microwave radiation coming almost uniformly from all directions in the sky - it is the bang, red shifted and cooled by the expansion of the universe, which makes its wavelengths longer just as it makes intergalactic distances longer. Those microwaves have taken all of the age of the universe to get here.

For localised objects, the oldest ones have to be very hot so that they are still visible with huge red shifts. And galaxies and stars didn't form for a while, so the oldest visible galaxies are a little younger than the universe.

### The atom, the photoelectric effect, energy levels, quanta and black body radiation

#### Where exactly did E = hν come from? [snip] What exactly was the maths that Planck was trying to do that made him quantise?

The keys are black body radiation (see the preceding question) and Boltzmann's ideas in statistical mechanics. Along with others at the time, Planck was trying to explain black body radiation: the intensity, as a function of wavelength, of EMR emitted from a surface in equilibrium with its own radiation (as in the interior of a heated box: see above). Classical physics gives an expression proportional to 1/λ⁴. This is approximately what is observed for very long wavelengths. But it fails spectacularly at medium and short wavelengths, a problem known as the ultraviolet catastrophe. To write an equation that fits the data, Planck needed to put in the denominator a factor that we now write as (exp(hc/λkBT) − 1):
B(λ,T) = (2hc²/λ⁵)/(exp(hc/λkBT) − 1).
We now recognise this as very much like the Boltzmann distribution of energies, exp(−E/kBT), which in turn comes from the equipartition of energy among different ways of vibrating.
Proviso: actually, the one in this factor comes from the Bose-Einstein distribution, which applies to photons and other bosons, and to which Boltzmann's distribution is only an approximation. The statistics of energy distribution depend on whether particles can share quantum numbers (bosons) or not (fermions). Boltzmann's distribution was extended for bosons (photons, gluons, the W, Z, Higgs and the graviton, if it exists) by Bose and Einstein (hence the name), and for fermions (leptons and quarks) by Fermi and Dirac (hence that name). To recover Boltzmann's distribution, consider the case of small λ, or large energy, where the 1 in the denominator is negligible.
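A quick numerical check, using standard values of the constants, that Planck's formula agrees with the classical 1/λ⁴ behaviour at long wavelengths and cuts it off at short ones:

```python
# Check numerically that Planck's formula matches the classical 1/lambda^4
# (Rayleigh-Jeans) behaviour at long wavelengths but cuts it off sharply
# at short wavelengths. Standard values of h, c and Boltzmann's constant.

from math import exp

h = 6.626e-34    # Planck's constant (J s)
c = 2.998e8      # speed of light (m/s)
kB = 1.381e-23   # Boltzmann's constant (J/K)

def planck(lam, T):
    """B(lambda, T) = (2hc^2/lambda^5)/(exp(hc/(lambda kB T)) - 1)."""
    return (2 * h * c**2 / lam**5) / (exp(h * c / (lam * kB * T)) - 1)

def rayleigh_jeans(lam, T):
    """Classical result, proportional to 1/lambda^4."""
    return 2 * c * kB * T / lam**4

T = 5800.0                      # roughly the sun's surface temperature (K)
long_ratio = planck(1e-2, T) / rayleigh_jeans(1e-2, T)    # 1 cm waves
short_ratio = planck(1e-7, T) / rayleigh_jeans(1e-7, T)   # 100 nm (UV)

print(f"long wavelength:  Planck/classical = {long_ratio:.4f}")   # near 1
print(f"short wavelength: Planck/classical = {short_ratio:.1e}")  # tiny
```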

Planck didn't like statistical mechanics and Boltzmann's work. Reluctantly, however, he found it necessary to use them. If radiation could only be emitted and absorbed in energy amounts that were inversely proportional to λ (E = hν = hc/λ), then Boltzmann's distribution of energy among the different modes would give the required short wavelength behaviour. The next step, the idea that the energy of radiation was inherently quantised (a little like the way matter is quantised in atoms etc), came from Einstein's analysis of the photoelectric effect, which is our next question.

I sometimes wonder how close Boltzmann was to discovering quantum mechanics. If you think about the entropy of a single atom in a box, classical physics would give it infinite entropy. Boltzmann would have known this, and quantisation of energy is only a couple of steps away... But sadly, Boltzmann was under attack from the philosophers for his insights in thermal physics and was probably not in a great position to pursue further radical implications of his ideas.

#### The photoelectric effect

"How is the photoelectric effect used in the following: breathalysers, solar cells and photocells?"
This would be stretching the usual meaning of the phrase 'photoelectric effect', which usually refers to the interaction of a photon and an electron in a metal to produce an electron no longer bound by the metal. Probably it is a misprint for 'photovoltaic effect', which is used in breathalysers, solar cells and photocells.

Photocells come in different types. Some are photovoltaic cells, some are phototransistors. In a phototransistor, the base of the transistor is exposed to light. Because photons can produce electrons in this region, the input light effectively replaces the base current (the input current in a normal transistor). So the output current of the transistor is determined by the light input. (See the photovoltaic effect and transistors.)

One type of breathalyser uses a chemical reaction involving alcohol to change the colour in an indicator. (The only one that I've ever been able to examine worked that way.) Some use the infrared spectrum of alcohol. They work like this:

Light of a known spectrum passes through the breath, and then through optical filters that select different wavelengths, onto a photocell. Thus (part of) the spectrum of the chemicals in the breath, including that of alcohol if present, is measured.

The photovoltaic effect is involved in the photocell. Go to Solar cells and the photovoltaic effect.

Could you please tell me about the relationship in solar cells among the photoelectric effect, semiconductors, electric fields and current?
As explained above, this may be a confusion between 'photoelectric' and 'photovoltaic'. Solar cells are usually semiconductors. A photon of light interacts with an electron and transfers it to a state with a higher electric potential. This creates an emf: the electron with higher potential can flow back to its original state via the external circuit and thus do work. Go to Solar cells and the photovoltaic effect.

#### By thinking that electrons behave like waves, how does it help to explain that the accelerating particle does not give out energy?

In the 'solar system' model of the atom, a particle-like electron travels in a circular orbit around the nucleus. There are different circles for orbits with different energy. Travelling in a circle, it would be accelerating (centripetal acceleration) and so would radiate. In quantum mechanics, the atom has an electron wave. The wave is a bit like a standing wave in a string (see waves and strings), except that it is three dimensional. It is going nowhere. No acceleration, so no radiation. Different energy orbits have different waves; most of them have nodes, just like the waves in a string, except that in three dimensions nodes are surfaces, not points.

Now in a string, the wave is in the displacement of the string. What is it that waves in an electron wave? The quantity is called ψ, the Greek letter psi. ψ is a complex quantity: it has real and imaginary components. If you take ψ at any place and multiply it by ψ* (which is like ψ, but with the opposite imaginary component), you get the probability (per unit volume) of interacting with the electron at that point. So the atomic nucleus is pictured (especially in chemistry text books) with clouds of 'electron probability' around it in different orbitals.

The interpretation of ψ as a function of position and time is a subtle question. In the case of the atom, ψ is a standing wave. In other cases, like the ψ for electrons in a cathode ray tube, ψ has the form of a travelling wave.

Is an electron a particle or a wave? Another subtle question. It can have wave properties (eg a wavelength) and particle properties (eg a position). However, it cannot be a 'good' wave and a 'good' particle at the same time. Because of Heisenberg's Uncertainty Principle, a precise measurement of the wavelength implies a poor measurement of position, and vice versa. So an experiment in which wavelength is controlled precisely will give you wave effects (such as interference), but the electrons will not be localised in space. Conversely, if you constrain the position, you have an uncertainty in the momentum and wavelength. Some people say that little things like electrons are 'wavicles'. I prefer to say that they are neither wave nor particle, and that these macroscopic ideas are misleading when applied to electrons.

#### Electron microscope

What is magnetic diffraction and focussing of electron beams? What are the differences in resolving power between optical and electron microscopes?

#### Heisenberg's Uncertainty Principle

(See also the separate page on this topic.) Heisenberg's Uncertainty Principle follows from a classical result, which is at least as old as Fourier. I prefer to introduce it as the Musician's Uncertainty Principle. When musicians tune up, we listen to the note for a long time so that we can adjust the frequency precisely. We tune by removing beats. (See What are interference beats?) If the frequency difference is one Hz, then you hear an interference beat every second. So, roughly speaking, if the frequencies differ by Δf, then you need a time of 1/Δf to notice. In other words:
Δf.Δt > ~ 1

(time taken to measure f) times (error in f) is about one or greater.

Musicians know this: if the chord is short, or if you are playing a percussive instrument, the tuning is not as critical. In a long sustained chord, you have to get the tuning accurate.
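Putting rough numbers on Δf·Δt ~ 1:

```python
# Rough numbers for the musician's uncertainty principle, Δf·Δt ~ 1: to
# detect a tuning error Δf by listening for beats, you must listen for
# about one beat period, 1/Δf.

tuning_errors = [10.0, 1.0, 0.1]                     # Δf in Hz
listening_times = [1 / df for df in tuning_errors]   # Δt in s

for df, t in zip(tuning_errors, listening_times):
    print(f"a {df} Hz error takes ~{t} s of listening to hear")
```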

Now in quantum mechanics and atomic physics, the energies of photons are hf, where h is Planck's constant. So in order to know the energy, you have to take a certain time to measure the frequency. Multiplying our previous inequality by h on both sides gives us

Δ(hf).Δt > ~ h

(uncertainty in energy) times (uncertainty in time) is greater than about h.

which is one example of the Heisenberg uncertainty principle. Applying it to spatial frequency (number of cycles per unit distance) rather than temporal frequency (number of cycles per unit time) gives
Δp.Δx > ~ h
(uncertainty in momentum) times (uncertainty in position) is greater than about h.
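A standard order-of-magnitude use of Δp·Δx ~ h is to estimate the minimum kinetic energy of an electron confined to an atom-sized region (the 0.1 nm confinement length is assumed):

```python
# Standard order-of-magnitude use of Δp·Δx ~ h: an electron confined to
# an atom-sized region must have a momentum spread, and hence a minimum
# kinetic energy, of roughly atomic scale. Δx = 0.1 nm is assumed.

h = 6.63e-34     # Planck's constant (J s)
m_e = 9.11e-31   # electron mass (kg)
dx = 1e-10       # assumed confinement length, about one atomic radius (m)

dp = h / dx                  # momentum uncertainty from Δp·Δx ~ h
E = dp**2 / (2 * m_e)        # corresponding kinetic energy
print(f"E ~ {E / 1.6e-19:.0f} eV")   # of order 100 eV: the atomic energy scale
```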

Werner Heisenberg won the Nobel prize in 1932.

Because h is so small (6.63 × 10⁻³⁴ J s), the consequences of the uncertainty principle are usually only important for photons, fundamental particles and phonons. There are, however, many physical processes whose evolution with time depends sensitively on the initial conditions. (Sensitivity to initial conditions is fashionably called chaos.) The uncertainty principle prohibits exact knowledge of initial conditions, and therefore repeated performances of such processes will diverge. (Physicists will also tell you that one cannot have exact knowledge anyway, for a variety of practical reasons, including the fact that you don't have enough memory to record the infinite number of significant figures required to record an exact measurement.)

Some philosophers regard the consequences of the uncertainty principle as having a more fundamental importance. The argument goes like this: if one could know exactly the position, velocity and other details, one could, in principle, compute the complete future of the universe. Since one cannot know the position and momentum of even one particle with complete precision, this calculation is impossible, even in principle. Most scientists find this a trivial argument. A memory capable of storing all this information would be as complex as the universe, and then the contents of that memory would have to be included in the calculation, and that would make the amount of information greater, and that information would have to be stored..... We rather point out that all of that information is actually contained in the universe which, as an analogue computer, is computing its own future already.

(See also Heisenberg's uncertainty principle and the musician's uncertainty principle, which has some demonstrations.)

#### The spin quantum number

What does "spin" refer to in particle physics? And why is this a necessary concept?
Let's start with some classical ideas. Angular momentum is the rotational analogue of (linear) momentum. If an everyday object is spinning, it has angular momentum. If we attach electric charge to that spinning object, the circulating charge acts like a loop of current, and produces a magnetic dipole, ie a little electromagnet.

So, when we find that an electron has angular momentum and a magnetic dipole, it is natural to talk of its spin. Natural, but somewhat misleading, because on the very small scale one must use quantum mechanics, rather than classical mechanics. Like the energy of electrons in an atom, the spin of a fundamental particle is quantised: only discrete values are allowed (+ and - 1/2 for the electron). Further, if one imagines the electron is a little ball of spinning charge and applies classical physics, one gets the wrong answer for the magnetic dipole.

So why is it a necessary concept? If we apply an external magnetic field, the energy of an electron will be increased or decreased depending on the direction of its magnetic dipole (and thus on the value of its spin). It also gives an extra quantum number. The Pauli exclusion principle forbids electrons to have the same quantum numbers so, for any energy level in the atom, there can be two electrons, with positive and negative spin. Thus spin allows twice as many electrons, which has very considerable consequences for the periodic table and chemistry!

#### Accelerators as probes of nuclear structure

Can you please explain why accelerators are used to probe into the structure of matter?
One reason is related to the de Broglie hypothesis: that the wavelength λ of any particle is λ = h/p, where h is Planck's constant and p its momentum. So, in order to probe the nucleus (size ~ 10⁻¹⁵ m), we need a 'probe' particle whose wavelength is smaller than this, which means one with large momentum. To obtain information about quarks (ie to look inside a nucleon), even higher momenta are required. Accelerators provide particles with very large momentum (travelling at within a fraction of a percent of the speed of light), and the momentum is well known.
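A back-of-envelope sketch of why probing the nucleus needs an accelerator, using only λ = h/p (and pc as the energy scale for an ultra-relativistic probe):

```python
# Momentum needed for a de Broglie wavelength comparable to a nucleus,
# and the corresponding energy scale pc for an ultra-relativistic probe.

h = 6.63e-34  # Planck's constant, J.s
c = 3.0e8     # speed of light, m/s
eV = 1.6e-19  # joules per electron-volt

def momentum_for_wavelength(lam_m):
    return h / lam_m  # de Broglie: p = h / lambda

p_nucleus = momentum_for_wavelength(1e-15)  # nuclear size ~ 10^-15 m
E_GeV = p_nucleus * c / eV / 1e9            # energy scale pc, in GeV

print(p_nucleus)  # ~6.6e-19 kg.m/s
print(E_GeV)      # ~1.2 GeV: accelerator territory
```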

To step back in history: Rutherford was able to probe the inside of the atom by using 'probe' particles smaller than the atom. From the angles of recoil, he was able to make an important conclusion about the atom: that nearly all the mass was localised in a very small region (the nucleus).

The second reason is to do with mass-energy equivalence. If you organise a collision that, relative to the centre of mass of the colliding particles, has a kinetic energy E, it is possible to create a particle-antiparticle pair, provided that 2mc² is less than E. Smashing an accelerated particle into a stationary target is therefore one way to study structure. One drawback is that the kinetic energy in the centre-of-mass frame is not very high, particularly when relativistic effects are included.

A better way of studying subatomic particles is to smash a proton (mass mp) into an antiproton. Sometimes they annihilate each other, and then you get the kinetic energy of the collision plus 2mpc², which may be enough to create various different particle-antiparticle pairs. Note that you usually create (or destroy) particle-antiparticle pairs, so that the total charge and spin of the things you create or destroy is zero.
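The threshold energies 2mc² are easy to estimate. A quick sketch for electron-positron and proton-antiproton pairs:

```python
# Pair-creation thresholds 2*m*c^2 for electron-positron and
# proton-antiproton pairs.

c = 3.0e8       # speed of light, m/s
eV = 1.6e-19    # joules per electron-volt
m_e = 9.11e-31  # electron mass, kg
m_p = 1.67e-27  # proton mass, kg

def pair_threshold_MeV(m_kg):
    return 2 * m_kg * c**2 / eV / 1e6

print(pair_threshold_MeV(m_e))  # ~1 MeV for an electron-positron pair
print(pair_threshold_MeV(m_p))  # ~1900 MeV (~1.9 GeV) for proton-antiproton
```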

### Semiconductors, transistors, solar cells etc

#### How do diodes and transistors work?

When you take some p material and n material and put them together, you get a diode (see the schematic diagram below). If you make the p side positive and the n side negative, then holes move from the p side to the junction, while electrons move from the n side to the junction. At the junction, the electrons 'fill' the holes, thereby destroying both the free electron and the hole. Meanwhile, the external circuit supplies new charge carriers at the terminals, so the process can continue indefinitely: the diode conducts in this direction. If you reverse the polarity, holes and electrons both move away from the junction. This leaves no charge carriers near the junction, so there is no conduction. Thus a diode conducts current in only one direction---the direction of the arrow in its circuit symbol. Diodes are useful in rectification (turning AC to DC), in logic circuits and in many other applications in electronics.

A junction transistor consists of a thin layer of one type of semiconductor sandwiched between two layers of the other type, as shown in the schematic diagrams above.
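The diode's one-way conduction described above is often summarised by the Shockley diode equation, I = Is(exp(V/nVT) − 1); the saturation current and ideality factor below are assumed typical values for a small silicon diode, not figures from the text:

```python
import math

# Shockley diode equation I = I_s * (exp(V / (n*V_T)) - 1): a standard
# model of the one-way conduction described above. I_s and n are
# assumed typical values for a small silicon diode.

I_S = 1e-12  # saturation current, A (assumed)
N = 1.0      # ideality factor (assumed)
V_T = 0.025  # thermal voltage at room temperature, V

def diode_current(v_volts):
    return I_S * (math.exp(v_volts / (N * V_T)) - 1.0)

i_forward = diode_current(0.6)   # forward bias: tens of milliamps
i_reverse = diode_current(-0.6)  # reverse bias: only ~ -I_S leaks

print(i_forward, i_reverse)
```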

Let's look at the npn transistor. An electron can travel from the emitter (n doped) to the collector (n doped) only if it can get through the base without 'colliding' with a hole in the base (p doped). If the layer is very thin, that is possible, but the chance of an electron getting through is a sensitive function of the potential difference between base and emitter (called the bias voltage). For typical silicon transistors, if you set the base-emitter voltage at 0.2 V, there is hardly any current between collector and emitter. If you set it at 0.6 or 0.7 V, you get close to maximum collector current (the size depends upon the size and packaging of the transistor, but tens or hundreds of mA is typical). If you set the base-emitter voltage above this, you have an ex-transistor.
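The sensitivity of collector current to base-emitter voltage mentioned above can be illustrated with the exponential junction law I_C ~ I_S·exp(V_BE/V_T); this is a simplified model, and the saturation current is an assumed value:

```python
import math

# Exponential dependence of collector current on base-emitter voltage,
# I_C ~ I_S * exp(V_BE / V_T). A simplified model; I_S is assumed.

I_S = 1e-14  # saturation current, A (assumed)
V_T = 0.025  # thermal voltage at room temperature, V

def collector_current(v_be):
    return I_S * math.exp(v_be / V_T)

ratio = collector_current(0.6) / collector_current(0.2)
print(ratio)  # ~9e6: why 0.2 V gives almost no current while 0.6 V is near maximum
```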

The pnp transistor operates similarly, except that it is the 'holes' that migrate across the thin base region, and the electrons in this region that control the flow. It is convenient to have symmetrical transistors (npn and pnp) for circuits with positive and negative supplies.

The field effect transistor or FET is simpler than the junction transistor. We show a p-gate transistor. The current through the n-doped material passes through the narrow section where it passes the 'gate' of p-doped material. The effective width of this passage can be made thinner or thicker by varying the voltage of the gate, and so removing conduction electrons from the thin passage. An advantage of FETs is that the input resistance of the device is high, which is what one usually wants when amplifying a small signal. In junction transistors, the input resistance is low. A disadvantage of FETs is that they usually can handle only small currents.

#### What was the impact of the invention of transistors, microchips and microprocessors on society?

Can you provide more resources for teaching the Age of Silicon (NSW syllabus topic)?
The US PBS has a website associated with their documentary Transistorized. It has a good basic review of the science involved, as well as much historical detail (including a lot of interesting things about the various personalities involved).

Another possibility is the Nobel prize website. The 2000 prize in Physics was awarded to Alferov, Kroemer and Kilby for contributions to the foundations of much of modern electronics and information technologies. The resources on this site range from the somewhat technical (each awardee gets to write a scientific article for Reviews of Modern Physics on their work), through their acceptance speeches and so forth, down to basic, graphics-heavy descriptions of their work, and even an online game for kids to learn about the various prizes (one of these relates to Kilby's work on the integrated circuit).

The more recent the prize, the more public material they have on the website, but you can go right back to Bardeen, Brattain and Shockley's 1956 prize for the transistor, and to some of the greats like Einstein, Heisenberg and co.

#### Solar cells and the photovoltaic effect

Could you please tell me about the relationship in solar cells between the photoelectric effect, semiconductors, electric fields and current?

### States of matter

The syllabus asks us to recall the states of matter and their properties and debate whether superconductivity is a new "state"

### Bose-Einstein condensates

Why is large wavelength so important? How does this tie in with quantum mechanics?
Are Bose-Einstein condensates a new state of matter?
They certainly have properties that are very different from those of solids, liquids, gases & plasmas. Is that enough to be called a new state of matter? This is a semantic question.

### Superconductors

How do superconductors work?
It is difficult to give a qualitative/conceptual description without missing something, but here is a simplistic explanation:

Electrons in normal metals occupy a set of quantum states, up to some maximum energy (called the "Fermi energy"). The relatively free "conduction electrons" (those which come from the unpaired valence electrons of the atoms) interact strongly with the positively charged ion cores, and as an electron moves through the lattice it will cause the cores to be displaced from their equilibrium positions. Electrons with energies near the Fermi energy are able to change their quantum state relatively easily, and thus any interaction, such as with the lattice, can result in a drastic change in the quantum states of these electrons. It happens that in superconductors the electrons near the Fermi energy become highly correlated, forming a macroscopic coherent quantum state with exotic properties. This state can be thought of as being made up of "electron pairs", but it is important to understand that these pairs are transitory things which change continuously in a dynamical way. At any instant a given electron is a member of many pairs.

Does superconductivity qualify as a new state of matter? Not according to the classification scheme proposed here. If we had a scheme that made metals a different state of matter from materials that don't share electrons, then under such a scheme superconductors might be a new state of matter. However, this is all taxonomy and semantics, and is not of great importance to physics.

#### What is a phonon?

The atoms of a crystalline solid form a regular lattice structure, which can be likened to an array of balls connected by springs, the springs representing forces between neighbouring atoms. Such structures can vibrate mechanically in various ways, either because of thermal motion or because of some external force. A sound wave travelling through a solid is an example of the latter. Quantum mechanics tells us that these vibrations can only gain or lose energy in discrete amounts, and these quanta are called 'phonons', by analogy with photons for light. Under ordinary conditions there is an enormous number of phonons or photons present and we do not see this 'graininess'. However, careful experiments confirm the picture.
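A quick calculation shows why the graininess is invisible for sound; the 1 nW power level below is an assumed illustrative value:

```python
# Energy quantum E = h*f of a sound wave (a phonon), showing why the
# graininess is invisible: even a tiny acoustic power corresponds to an
# enormous number of phonons per second. The 1 nW figure is assumed.

h = 6.63e-34  # Planck's constant, J.s

def phonon_energy(f_hz):
    return h * f_hz

E_A440 = phonon_energy(440.0)       # one phonon of concert A
phonons_per_second = 1e-9 / E_A440  # for an (assumed) 1 nW sound

print(E_A440)              # ~2.9e-31 J
print(phonons_per_second)  # ~3e21 phonons per second
```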

#### Levitation of a magnet by a superconductor

The magnetic field does not penetrate into a superconductor. This is called the Meissner effect. This effect, together with conservation of the magnetic flux, allows the levitation of a magnetic object above a superconductor, or of a superconductor above a magnet.

When a magnet is brought near a superconductor, this exclusion of the magnetic field distorts the field lines, as shown in the diagram below. In order to be quantitative about it, we observe that the smaller the gap between the object and the superconductor, the larger the magnetic field in the gap: the field is compressed in this gap. We can calculate the magnetic pressure (a force per unit area), which at any point is equal to the energy density (energy per unit volume) of the magnetic field. This is given by:

p = B²/2μo.
where B is the field strength and μo is the magnetic permeability of space. As we bring the magnet closer to the superconductor (middle picture), the field lines must be closer together, so the field is more intense, so its energy density and magnetic pressure p rise. Mechanical equilibrium (levitation) is achieved when the force due to this magnetic pressure is equal to the weight of the object.

To estimate the size of this effect, let us consider a cubic iron bar magnet with side a = 1 cm. Let the mass of the magnet be 0.01 kg, and let the field near the pole, in the absence of any superconductor (picture at left), be Bo ~ 0.01 Tesla. Now let us put the cube at distance x from the superconductor. To do the geometry properly is a little difficult, but we can make some approximations.

In the diagram at left, the field from the pole diverges over a distance comparable with the size of the magnet, ie over ~a. In the middle diagram, it diverges over the distance x, so the field lines have been concentrated by a factor of about a/x, so the field between the magnet and the superconductor has a field strength

B ~ Boa/x
For mechanical equilibrium, we want the force of the magnetic pressure acting on the pole of the magnet to equal the weight, so
mg = pa² = a²B²/2μo.
Rearranging gives an estimate for x, the levitation height: x ~ a²Bo/√(2μomg), which is about 2 mm for the figures above.

You probably don't have liquid nitrogen to do the experiment, but you can make a similar measurement using two similar magnets and by taking advantage of symmetry. If you keep the magnets symmetrical while bringing similar poles towards each other, then (because the magnets are equally strong) no field lines cross the plane of symmetry. So the top half of the field diagram looks just like that of the levitation case. So the obvious question is why you cannot levitate in the second case. Well, the problem is that the symmetrical arrangement of the two magnets is unstable and you will need to supply horizontal forces to keep them upright. With some ingenuity, you may be able to supply such horizontal forces without much vertical force. If you do, you can measure the distance at which the upper magnet is supported, and this is roughly twice the distance at which it would levitate over a superconductor. It is also possible to supply the horizontal forces using an array of magnets, and some 'executive' toys use this principle to levitate permanent magnets.
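Rearranging mg = a²B²/2μo with B ~ Boa/x gives x ~ a²Bo/√(2μomg); a quick numerical check with the figures in the text:

```python
import math

# Levitation height estimate from mg = a^2 * B^2 / (2*mu0) with
# B ~ B0 * a / x, using the figures given in the text.

mu0 = 4 * math.pi * 1e-7  # permeability of free space, T.m/A
a = 0.01                  # side of the cubic magnet, m
m = 0.01                  # mass of the magnet, kg
g = 9.8                   # gravitational acceleration, m/s^2
B0 = 0.01                 # field near the pole, T

x = a**2 * B0 / math.sqrt(2 * mu0 * m * g)
print(x)  # ~2e-3 m: levitation a couple of millimetres above the superconductor
```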

By the way, this trick of using two similar magnets in symmetry would also give an easy way to calculate the forces, rather than estimating them as we have done above. We treat the superconductor as a 'magnetic mirror' and calculate the 'image forces' due to the mirror image of the magnet.

#### The Meissner effect

The following two properties are characteristic of superconductors:
• 1) Zero resistance to electric current;
• 2) Repulsion of the magnetic field (Meissner effect).
Actually, an ideally pure normal metal (non-superconductor) would also have zero resistance at the absolute zero of temperature. However, a normal metal never manifests the Meissner effect. Thus the Meissner effect is the most important property of a superconductor.

The superconducting state is described by a macroscopic wave function. This wave function has a very special property of rigidity: it requires a lot of applied energy to change the 'shape' of the wave function. (To discuss this correctly, we really do need complex mathematical operations, so this simplified discussion is not quite correct, and the shape we are talking about is a shape in Hilbert space.) As a result of this rigidity, the curl of the electric current density is proportional to the magnetic field. (The curl is a mathematical way of representing properties of the shapes of lines in a vector field, the current density in this case. It is a measure of the amount of twist in the lines.) Let us assume now that the magnetic field penetrates inside the volume of the superconductor. Then, because of the property mentioned above, the field induces currents inside the volume of the superconductor. Any current is related to some internal movement and hence to the kinetic energy of that movement. Therefore such a state would have a very high energy. To minimise the energy, the superconductor develops currents on the surface in such a way that they exactly compensate the magnetic field inside. In this case current flows only within a thin (~10⁻⁸ m) surface layer, and therefore the energy of the system is relatively low. This explains the mechanism of the Meissner effect.

The Meissner effect is very important for condensed matter physics, and it is equally important for elementary particle physics. The masses of the particles arise from an effect that is very similar to the Meissner effect, because the physical vacuum is in a state similar to that of a superconductor (the Meissner state). However, at early stages of its history the Universe was very hot, the vacuum was in the "normal" (non-"superconducting") state, and all the particles were massless.

(The answers on levitation and the Meissner effect were provided by Prof Oleg Sushkov.)

#### Superconductivity and computers

I've read several books, all describing how superconductors, if used in computers, will allow them to operate at higher speeds. None of them describes how this actually happens.

#### Applications to magnetic fields, motors, power distributions, MRI

I'm finding it really difficult to find any info of "the effects of those applications [of superconductivity] on computers, generators, motors and transmission of electricity through power grids".
Currently, these are almost entirely potential applications, so do not expect to find much information. For potential applications to computers, see the previous section.

There are also potential applications to motors and generators. The stationary magnets or stators in these devices are often electromagnets, in order to save weight or initial expense. (See Motors and generators for details.) If we could easily make these electromagnets superconducting, then we would save on the electrical power required to keep current flowing in them to maintain the field. This would make them more efficient. However, the insulation required to retain the liquid helium necessary for high current superconductors makes such systems large, heavy and expensive.

Power engineers dream of using superconducting cables for the transmission of electricity through power grids. A significant fraction (typically 5-10%) of the power generated by power stations is lost in the distribution network, including ohmic heating of the transmission cables. However, the prospect of cooling the distribution network is daunting. 'High' temperature superconductors (those that superconduct at liquid nitrogen temperatures) are not (yet) suitable for transmission cables because they cannot carry high current densities, and they are usually not ductile or flexible.

(A little parenthesis about comparing electric cars with petrol cars. Although fossil fuelled power station generators are much more efficient than motor car engines, the distribution of electrical energy is much less efficient than the distribution of petrol, which tends to cancel out the efficiency of generation. However, power stations are less polluting than cars, and electric cars are still much more efficient than petrol cars. This is because electric cars are designed rationally for intelligent driving. Petrol cars are constrained by the need to protect the fragile egos of some male drivers and so are almost always vastly overpowered. In principle one could design efficient petrol cars but, because of the temporarily low price of oil, there is little incentive to do so.)

There are however some actual applications of superconductors. One relatively common example is in the large electromagnet used for the constant field in Magnetic Resonance Imaging (see MRI). Liquid helium cools the wire coils to allow the large currents required to maintain the large, uniform magnetic field efficiently. An interesting problem arises when one wishes to turn off the field and bring the magnet back to room temperature. The magnet is a large inductance (see AC circuits) with value L, carrying a large current i, and therefore storing an energy Li²/2. When the temperature starts to rise, resistance appears in the coils and the ohmic (i²R) heating would quickly dissipate all this magnetic energy as heat. Despite the presence of the liquid helium, there are difficulties in disposing of this energy safely, so the current must be gradually reduced to zero before the coils can be warmed.
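The stored energy Li²/2 is easily estimated. The inductance and current below are assumed order-of-magnitude values for an MRI magnet, not figures from the text:

```python
# Stored magnetic energy E = L*i^2/2 in a superconducting MRI magnet.
# L and i are assumed order-of-magnitude values, not from the text.

L = 20.0   # inductance, H (assumed)
i = 400.0  # current, A (assumed)

energy_J = 0.5 * L * i**2
print(energy_J)  # 1.6e6 J: enough to make sudden warming dangerous,
                 # hence the current is ramped down slowly first
```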

### Nuclear physics, radioisotopes, neutrinos etc

#### What are some of the industrial and medical applications of radioactivity and nuclear physics?

Radioactive materials give off charged particles (electrons, antielectrons, helium nuclei) and (uncharged) neutrons, and also high energy photons (packets of light). The charged particles collide with the charged particles in ordinary matter and give off more high energy photons.

A photon has (is?) an electromagnetic field. If it has sufficient energy, it can interact with an electron and remove it from its atom. This is called ionizing radiation.

Such radiation can, in sufficient doses, kill cells because, if it strikes DNA in enough places, it disrupts the molecule and prevents reproduction.

and how is this property utilised in medicine and industries?

#### How are isotopes used in engineering or agriculture?

An important example is in nuclear engineering: In countries such as France, the construction of power stations involves lots of nuclear engineers. One of the more important isotopes is 235U. There are also nuclear reactors used for producing diverse isotopes for medical and other uses. Some bad and good news: in various countries, engineers are involved both in the building and the dismantling of nuclear weapons.

Radioactive isotopes are used in some measurement devices. The domestic smoke detector is the most common: these often use americium. Radioactive sources are also used to measure the thickness and composition of thin films (suitably calibrated, a measure of transmitted radiation tells you how much material was present to absorb the radiation).

#### How are isotopes used in agriculture?

13C has been widely used (ie sufficiently widely that I've heard of it) in plant physiology to study the carbon cycle and photosynthesis. If you put 13C into a particular sugar and you find it in starch, then you know that there is a pathway from that sugar to starch etc. Often when researchers study biochemistry they choose a radioactive isotope to 'label' the biochemical in which they are interested. Then you can measure the concentration by measuring the radiation, and even better you can trace where it has come from. (Biologists would have better examples.)

As to agriculture, the link below reports the use of 15N to study root biomass and uptake, 13C for carbon exchange and 137Cs for studying soil redistribution.

#### How is beta decay described in terms of quarks?

A neutron can decay into a proton, an electron and an antineutrino. A neutron consists of an 'up' quark and two 'down' quarks. A proton consists of two 'up' quarks and one 'down' quark. The electron and the antineutrino are both leptons. How is it that one of the quarks appears to have changed from a 'down' quark to an 'up' quark?

A neutron consists of "up" and "down" quarks:    n = (udd).
Similarly, a proton is                         p = (uud)
The electric charge of the u quark is 2/3 (in units of the elementary charge) and the electric charge of the d quark is -1/3. As a result, the proton has charge 1 and the neutron is neutral. The neutron decay goes via the following mechanism: a d quark decays into a u quark and a virtual W⁻ boson, and as a result the neutron is converted into a proton. The virtual W⁻ boson lives for about 10⁻²⁶ seconds and then decays into an electron and an antineutrino. Blink and you miss it!

### Motors and Generators

See
Motors and generators for a background to this topic. There are descriptions and diagrams of the main classes of motors.

#### Force between two wires

The HSC physics syllabus, in the Motors and Generators topic, asks us to "solve problems and analyse information about simple motors using F/L = k.I1I2/d". I have never seen a problem relating this equation to motors. Can you suggest any? What relevance does this equation have to the operation of a simple motor?
Sorry, I cannot see any important direct relevance. The equation refers to infinite wires, ie wires whose separation is much smaller than their lengths. So the forces that make a motor work don't relate to this equation. The forces between, say, elements of wire in a stator coil and a rotor coil that are parallel are calculated using the Biot-Savart law rather than this equation. The only places in motors where wires are much closer together than their lengths are between adjacent wires in the coil winding. And yes, it's true that there would be a force tending to push the wires closer together, but as the wires (or their lacquer) are touching already, this doesn't have much effect.

Indirectly, one could say that the equation that you quote defines the ampere. So many electrical measurements on motors depend indirectly on this equation.
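With k = μo/2π, the quoted equation is F/L = μo·I1·I2/(2πd); evaluating it for two 1 A currents 1 m apart reproduces the (pre-2019 SI) defining value of 2 × 10⁻⁷ N/m:

```python
import math

# Force per unit length between two long parallel wires:
# F/L = mu0 * I1 * I2 / (2 * pi * d).

mu0 = 4 * math.pi * 1e-7  # permeability of free space, T.m/A

def force_per_length(i1_A, i2_A, d_m):
    return mu0 * i1_A * i2_A / (2 * math.pi * d_m)

f = force_per_length(1.0, 1.0, 1.0)
print(f)  # 2e-7 N/m: the old definition of the ampere
```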

Does the force between current carrying wires on either the same or opposite side of the coil affect the torque?
Internal forces in a rigid object have no direct effect on its motion. If the force were large enough to change the geometry of the coil, then there could be an effect, but this would be very small for any normal coil.
What is the implication of the force between current carrying wires for power distribution networks?
The wires must be very close together for the force to be important. In power distribution, the wires are usually separated by distances that make the force tiny. In transformers, wires are wound closely together and they exert larger forces on each other. I expect that transformers are tightly wound so that they don't rattle.

### Transformers

This section includes transformers, power lines, induction cooktops, eddy current switching and regenerative braking. See
Transformers for background.

#### Transformers in electricity supply and the home

Why does one need to have transformers in the transfer of electrical energy from a power station to its point of use? Much of the energy generated by a power station is lost in ohmic losses in the distribution network. The power lost in a wire of resistance R carrying current i is Ri². To make R small requires thicker wire, which is expensive, so instead i is made smaller. To deliver the same power P = Vi, V must then be made bigger. So, near the power station, a 'step-up' transformer steps the voltage up to a high value (e.g. 110 kV). At local substations and suburban transformers, this is successively stepped down to 240 V.
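A sketch of why stepping the voltage up helps; the delivered power and line resistance are assumed illustrative values, not figures from the text:

```python
# Ohmic loss in a transmission line delivering the same power at two
# different voltages. The 10 MW load and 5 ohm line resistance are
# assumed illustrative values, not figures from the text.

P = 10e6  # power delivered, W (assumed)
R = 5.0   # line resistance, ohms (assumed)

def line_loss(voltage_V):
    i = P / voltage_V  # current needed to deliver P at this voltage
    return i**2 * R    # ohmic loss Ri^2 in the line

loss_11kV = line_loss(11e3)
loss_110kV = line_loss(110e3)
print(loss_11kV / loss_110kV)  # ~100: ten times the voltage, one hundredth the loss
```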

Discuss why some electrical appliances in the home that are connected to the mains domestic power supply use a transformer. Many appliances require low voltage for solid state electronics (transistor circuits typically require 5-20 V DC) or for safety (eg a low voltage electric toothbrush is safer than a 240 V one).

Some appliances require high voltage (neon lights, the cathode ray tubes in TVs and computer monitors).

The usual approach is to use a transformer to get the voltage to the appropriate value, and then (if required) a rectifier, capacitors and regulator to convert AC to DC. (There are also DC voltage converters that 'chop' the AC on and off, allowing capacitors to charge to the desired voltage. These do not need transformers.)

#### How is heating caused by eddy currents in transformers overcome?

Laminating the core reduces the heat loss. The core is made out of strips of metal, separated by lacquer or other insulator, so the area of the eddy currents is reduced. Consequently, the flux and therefore the Faraday emf are smaller, so smaller eddy currents flow. You may notice transformers buzzing at 100 Hz (near G at the bottom of the bass clef) as the laminations all become electromagnets 100 times per second.

Eddy current heating is never completely overcome, and transformers are usually at least a little warm. (In Sydney, cockroaches like to live on or near transformers because they are warm.)

#### How does the principle of induction apply to cooktops in electric ranges?

Some electric ranges have a coil of wire instead of a hot plate. This carries AC and produces a strongly varying magnetic field. When a conducting saucepan is placed upon it, eddy currents are induced in the metal in the base of the saucepan. These produce heat via ohmic losses (and hysteresis in magnetisation).

One could consider the coil in the cooktop as a primary and the metal in the saucepan as the secondary of a transformer.

As the heat is produced directly in the saucepan itself, less heat is wasted in the cooktop or the air.

There is a further subtlety. Let's compare a pot made from aluminium, which is non-magnetic but has low resistivity, with one made from iron, which is magnetic but has rather higher resistivity. If the two were placed in the same magnetic field varying in the same way, the Faraday emf would be the same. The ohmic power loss in the two would be given by V²/R, where V is the Faraday emf and R the resistance. R would be lower in the Al, so on this naive argument the ohmic power loss in the Al would be greater.

However, there are three complications to this argument. First, the magnetic field in the Al pan will be less, because Al is non-magnetic. This makes it a less effective transformer, just as an air-cored transformer is less effective than an iron-cored one in low frequency applications. (Technically, we say that the magnetic permeability of Al is much lower than that of Fe. The ratio is a few thousand, so the effect is large.) Second, a higher secondary current actually decreases the magnetic field, because it provides a back emf in the primary. And third, ohmic losses are not the only losses: energy is also lost in magnetising and demagnetising the iron in each cycle (technically, hysteresis losses). So, for these three reasons, the heating in the iron pot will be greater than in the Al pot.

#### How are eddy currents used in switching devices?

Proximity switches often involve eddy currents. These switches have the advantage that they have no (internal) moving parts or electrical contacts. This is an advantage because springs and bearings can wear out and because electrical contacts can become corroded. Proximity switches are activated by the proximity of a conductor, such as a human finger. You will have noticed them in the control panels of lifts, for example. If the proximity of your finger throws the switch but the proximity of a non-conductor does not, the switch may work on eddy currents (although it could also work by capacitance). Proximity switches of one type work like this:

The switch contains an oscillator using a resonant circuit operating at radio frequencies. The coil sets up a high frequency magnetic field. When a conductor enters this field, eddy currents flow in the conductor. This can have two effects. First, it changes the impedance of the coil, and so changes the frequency* of the resonant circuit. Also, because of ohmic losses in the conductor, energy is lost from the resonant circuit into the conductor. This may be large enough to stop* the oscillations. One or other of these changes then activates the switching circuitry. The actual switching of a high current circuit might be achieved by a solid state device (MOSFETs, TRIACs etc), but for high current applications a relay is often used.

* You can think of the coil and the conductor as the primary and secondary of a transformer. The resistive load of the secondary is 'reflected' into the primary circuit, changing its properties. Note that both the frequency and the Q value of a resonant circuit depend on the resistance of the coil.
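As a rough numerical sketch of the first effect: the resonant frequency of an LC circuit is f = 1/(2π√(LC)), so a change in the coil's effective inductance shifts the frequency that the switching circuitry can detect. The component values and the 5% inductance change below are assumptions for illustration only, not values from any particular switch.

```python
import math

# Sketch of the frequency shift behind an eddy-current proximity switch.
# Component values are assumed for illustration only.
L = 100e-6        # coil inductance, H (assumed)
C = 1e-9          # capacitance, F (assumed)

def resonant_freq(L, C):
    """f = 1 / (2 pi sqrt(L C)) for an LC resonant circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f0 = resonant_freq(L, C)
# Eddy currents in a nearby conductor oppose the coil's field, reducing its
# effective inductance; assume a 5% drop for illustration.
f1 = resonant_freq(0.95 * L, C)

print(f"free-space frequency: {f0 / 1e3:.0f} kHz")
print(f"with conductor near:  {f1 / 1e3:.0f} kHz  (the shift that is sensed)")
```

Because f varies as 1/√L, a lower effective inductance gives a higher resonant frequency; a detector circuit only has to notice the shift (or the loss of oscillation), not measure it precisely.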

### Eddy current braking

Electromagnetic braking is smoother, but why is it an advantage over conventional braking? Does it brake more smoothly, in less time and less distance, than conventional braking?
The deceleration due to friction braking is limited (i) by how much normal force you can apply to the brake pad or brake shoe (often not a problem with servo-assisted hydraulic actuation), (ii) by the mechanics and size of the pad: it may shear or break up if the forces are too great, and (iii) by what to do with the heat generated. The last is quite important: think of how much kinetic energy a train has at speed: if you turn that into heat in a series of small brake pads, they may melt, vaporise, weld to the wheels etc.

The deceleration due to magnetic braking is limited by (i) the strength and size of the magnetic fields available, and (ii) the fact that the braking force falls as the speed falls: an eddy current brake cannot hold a vehicle at rest, so a friction brake is still needed for the final stop.
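A point about magnetic braking can be sketched numerically: to a first approximation the eddy-current braking force is proportional to speed, F = −bv, so the speed decays exponentially rather than reaching zero. The mass and drag coefficient below are assumed illustrative values, not data for any real vehicle.

```python
import math

# Sketch of eddy-current braking with force proportional to speed, F = -b v,
# so m dv/dt = -b v gives v(t) = v0 exp(-b t / m).
# m, b and v0 are assumed illustrative values.
m = 1000.0        # vehicle mass, kg (assumed)
b = 200.0         # magnetic drag coefficient, kg/s (assumed)
v0 = 30.0         # initial speed, m/s (assumed)

def speed(t):
    """v(t) = v0 exp(-b t / m)."""
    return v0 * math.exp(-b * t / m)

for t in (0, 5, 10, 20):
    print(f"t = {t:2d} s   v = {speed(t):6.2f} m/s")
# The speed halves every (m/b) ln 2 seconds but never quite reaches zero,
# which is why a friction brake must make the final stop.
```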

### Oscilloscope

Could someone please explain to me the timebase properties in CRO. Does it relate to horizontal or vertical movements?
The timebase controls the horizontal motion of the beam. (In timebase mode, the input voltage deflects the beam vertically; in X-Y mode, the two input voltages deflect the beam in the X and Y directions.)

The electron beam in a CRO can be deflected left-and-right or up-and-down by electric fields. These are produced between two metal plates which have a potential difference supplied by the horizontal and vertical amplifiers.

In the timebase mode, a voltage which increases linearly with time is generated and input to the horizontal amplifier: this sweeps the beam smoothly from left to right. (It then starts again from the left, at a time determined by the trigger, so the timebase waveform is actually a sawtooth. You never see this waveform on the screen, however: it's used to drive the beam.)

Without any voltage input, the timebase thus causes the oscilloscope to "draw" a horizontal line across the screen. The speed of 'drawing' the graph depends on the timebase settings, sometimes called the SWEEP SPEED.

This control knob (usually towards the right) sets the time axis so that one division represents the time interval indicated around the dial, from seconds (fully counterclockwise) down to microseconds (fully clockwise). If you set this knob fully counterclockwise, the beam will sweep so slowly across the screen that you can watch it cross. Fully clockwise, the sweep will look like a continuous line, because your eyes are not fast enough to see the motion.
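The sawtooth described above can be sketched numerically: within each sweep the horizontal deflection grows linearly with time, then snaps back to the left edge. The sweep time chosen below is an assumed setting, not a value from the text.

```python
import numpy as np

# Sketch of a CRO timebase waveform: a sawtooth that ramps the beam
# linearly from left (0) to right (1), then resets.
sweep_time = 1e-3                            # assume 1 ms per sweep
t = np.linspace(0.0, 4 * sweep_time, 4000, endpoint=False)

# Horizontal position of the beam, 0 (left edge) to 1 (right edge):
x = (t % sweep_time) / sweep_time

# Within one sweep the deflection grows linearly with time, so equal time
# intervals map to equal horizontal distances on the screen.
print(x[0], x[500], x[999])   # start, middle and end of the first sweep
```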

### Simple Harmonic Motion under gravity

We attach a mass m below a vertical spring with constant k (fig b). At equilibrium, the spring is extended a distance x, where
kx = mg. Let us now measure position with respect to this equilibrium position, using the new variable y. For instance, we might apply a steady force F vertically and achieve a new equilibrium at position y (fig c) where

F + k(x - y) = mg.
Substituting for x gives us
F + mg - ky = mg,     or
F = ky.
Put in words: the weight of the mass and the spring force due to the equilibrium extension cancel out. Displacement from this equilibrium position requires a force ky, and so the potential energy at position y (with respect to y = 0) is
Utotal   =   integral of ky dy   =   (1/2)ky²
This potential energy could be said to include a component due to the spring and another due to gravity. The choice of a reference for potential energy is arbitrary. Let's take it instead with respect to the unstretched position, and signify this difference with a prime.
U'total  =   U'grav + U'spring   =   (1/2)k(x − y)² + mgy

=   (1/2)kx² − kxy + (1/2)ky² + mgy

=   (1/2)kx² + (1/2)ky²

where the first term is a constant (the spring energy at equilibrium) and the second term is the energy required to displace it from that position.

So, from both the Newtonian and Hamiltonian points of view, the mass on the spring in the gravitational field behaves exactly as it would in the absence of gravity, except for the altered equilibrium position.
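The algebra above is easy to check numerically: since kx = mg, the gravitational term mgy cancels the cross term −kxy, so U'total differs from (1/2)ky² only by the constant (1/2)kx². The values below are arbitrary illustrative numbers.

```python
# Numeric check of the identity derived above:
# (1/2) k (x - y)^2 + m g y  ==  (1/2) k x^2 + (1/2) k y^2,  given k x = m g.
# k, m and the y values are arbitrary illustrative numbers.
k = 25.0          # spring constant, N/m
m = 0.40          # mass, kg
g = 9.8           # m/s^2
x = m * g / k     # equilibrium extension, from k x = m g

for y in (-0.1, 0.0, 0.05, 0.2):   # displacements from equilibrium
    lhs = 0.5 * k * (x - y) ** 2 + m * g * y
    rhs = 0.5 * k * x ** 2 + 0.5 * k * y ** 2
    assert abs(lhs - rhs) < 1e-12
print("identity holds: gravity only shifts the equilibrium position")
```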

### Drift velocity

See our page on drift velocity and Ohm's law, which introduces this topic. The text below is a summary of several questions about the drift velocity of charge carriers in a conductor. It arises from a peculiar and confusing statement about drift velocity in the physics syllabus in New South Wales, Australia, and two multiple choice questions in specimen papers in which insufficient information was given to allow one to answer. Briefly, if you are not studying in a NSW high school, you don't need to read this section.

What affects the drift velocity v of charge carriers in a conductor? We assume that steady state conditions are achieved, that the temperature is not changing and that the medium is behaving in a linear way. These conditions are well approximated in many experiments.
Under these conditions,
v = constant × E × q
where E is the applied electric field and q is the charge of the carrier. The constant depends on the material and the charge carrier being considered: it is high for a good conductor and a mobile carrier.

Why v depends on E: In steady state, the drift speed is (approximately) proportional to the force moving the carrier, which is Eq. If we increase the electric field (eg increase the voltage applied to a fixed length of conductor), then the force causing the charge carriers to move is greater, so they achieve a higher drift velocity before the 'driving force' is balanced by the 'drag force' due to interactions with the medium. (More formally, we would normally do this in terms of average speeds and consider the acceleration in the field and the regular collisions with the atoms in the material.)

Notice that the drift velocity does not depend explicitly on the geometry of the conductor in question (its area and length): if the cross sectional area were larger, and if we kept the electric field the same, then more charge would move at the same speed and we would get a larger current. If we made the conductor longer, but kept the field constant, then we would have a larger voltage applied to a higher resistance sample, and the current would be unchanged: the same number of charge carriers would pass a given point per unit time at the same speed.

Given the drift velocity, the material and the geometry, we can then work out the current. This is proportional to the number of current carriers available per unit volume, their charge, their drift velocity and the cross sectional area

I = nqvA
Does this mean that v and A are inversely proportional? Only in the special and rather peculiar case where you keep the current constant. But to keep I constant, you would have to change the field. Far more common, and easier to do, would be to keep E constant (e.g. keep the voltage constant and change A).

Is it possible to make v inversely proportional to A? Yes, over a limited range of currents. One way would be to use a very large EMF in series with a resistance R that was very much greater than the resistance of your sample.
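For a feel for the sizes involved, here is the classic estimate of drift velocity from I = nqvA for a copper wire. The carrier density is the standard textbook value for copper; the current and cross-sectional area are assumed for illustration.

```python
# Classic drift-velocity estimate from I = n q v A for a copper wire.
n = 8.5e28        # free electrons per m^3 in copper (standard textbook value)
q = 1.602e-19     # magnitude of the electron charge, C
A = 1.0e-6        # cross-sectional area, m^2 (a 1 mm^2 wire, assumed)
I = 1.0           # current, A (assumed)

v = I / (n * q * A)                  # drift velocity
v_doubled_A = I / (n * q * 2 * A)    # doubling A at the SAME current halves v

print(f"drift velocity          = {v:.2e} m/s")   # a fraction of a mm per second
print(f"with A doubled, I fixed = {v_doubled_A:.2e} m/s")
```

Note that the halving on the last line only happens because the current is held fixed; at fixed E (fixed voltage on a fixed length), v would be unchanged and the current would double instead, which is exactly the ambiguity in the specimen questions below.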

Question from the specimen paper (as quoted by one correspondent)

"In a certain experimental arrangement, the effect of the variation of the cross-sectional area of a conductor on the drift velocity of the electrons was investigated. If the original drift velocity was v m/s, then after doubling the cross-sectional area the new drift velocity would be expected to be:
a) v m/s      b) v/2 m/s      c) 2v m/s      d) 4v m/s"
This question cannot be answered without more information about the experimental arrangement. If the potential difference were held constant (a fairly common experiment), then the answer closest to correct would be (a). If the current were held constant (a less common but possible experiment), then the answer closest to correct would be (b).
Question from another specimen paper (as quoted by another correspondent)

Q8 Which of the following does the drift speed of electrons in a conducting wire depend on?
(A) The length of the wire      (B) The cross-sectional area of the wire      (C) The insulating material around the wire      (D) The straightness of the wire

This question cannot be answered without more information. If one connected the wire to an ideal EMF, the answer would be (A). If the internal resistance of the battery were much larger than the resistance of the wire, the answer would be approximately (B).
First, the real HSC paper is expected to contain few mistakes. When/if you encounter them, don't waste too much time. Give the answer that is least wrong---here (a)---and then quickly write that the question is incomplete. Then move on to the next question. After the exam is over, raise the point with your teacher (and on the bulletin board if you like) and then start making a fuss. If a question has been set and the correct answer is not given, or if the question is confusing, then the markers would be under considerable pressure to omit that question from the result.

### Miscellaneous questions in history and social studies

Michael Frayn's play "Copenhagen" treats the relationship between Niels Bohr and Werner Heisenberg. Bohr's wife Margrethe is the only other character, but many of the great theoretical physicists of the 1920s are mentioned. It is a great piece of theatre, and a painless way of learning a bit of history. (Physicists usually don't know much history--many of us went into physics because we didn't like rote learning. Be aware that physics syllabi in other states and in universities have much less history and rote learning than does the NSW high school syllabus.)

The Einstein-Planck debate
What debate? Einstein and Planck met (for instance at the Solvay Conference in 1927), they presumably spoke together, and they presumably must have disagreed on at least something, even if only on where to go for a beer after a hard day's physics. So in that sense, there probably has been a debate. The trouble is that there seems to be no evidence of it. This question was posted to a newsgroup for historians of science, and no-one responded with evidence. Because the peculiar syllabus taught in high schools in New South Wales devotes a segment to a debate between these two people, students in that state have been searching for evidence of such a debate. So far, no-one has posted any evidence on this site, although Bob Emery reported juxtaposed quotations of Einstein and Planck in the prologue of the book Planck M. (1931) Where is Science Going? Ox Bow Press, Woodbridge, CT (1981 reprint).

So it's one of those delightful rare things: an open question to which an answer just might possibly be found, particularly if you read German and have access to previously unpublished correspondence from the early twentieth century. Good luck! If you find something, please let us know and we shall make the answer available here.

The Nobel prizes in physics
What has been the impact of advances in understanding of matter on work of physicists ?
This question is pretty vague, and many answers are possible. Here are just a few.

These advances also led to molecular physics (with many applications in chemistry, biochemistry, pharmacy, medicine, materials science, engineering etc).

Knowledge of the nucleus led to the Standard Model, a comprehensive theory that explains what we know about nuclear interactions and which unifies the strong and weak nuclear forces with electromagnetism. This also led to a greater comprehension of the early stages of the universe.

### 2001 HSC Paper on Physics

Question 4 caused a lot of feedback. Was it B or C?
Both answers are correct. B is true: both generators produce AC. C is also true: Generator 1 produces DC and Generator 2 produces AC.

This comes about because generator 1 produces both AC and DC at the same time (ie a current that has both an AC and a DC component).

The splits in the ring in generator 1 are positioned so that the circuit is reversed at two orientations where the flux is changing at a reasonably high rate, so the emf goes quickly from one direction to an equal emf in the other. The current produced would be in the shape of sin(ωt) for half a cycle, then −sin(ωt) for the other half cycle, but it doesn't change sign at π/4 or 3π/4 radians, so the output has some DC component, and a larger AC component. Exactly where it changes depends on how you estimate the angle in the drawing. The current will not be discontinuous, because of the inductance of the coil, and there will be some spikes and sparking.
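The claim about the relative sizes of the DC and AC components can be checked numerically. The sketch below assumes the commutator reverses the connection at an angle θ0 = π/4 past the zero crossing (our own reading of the drawing, assumed for illustration); the mean of the resulting waveform is its DC component, and the rms of what remains is the AC component.

```python
import numpy as np

# Sketch (our own illustration, not from the exam paper) of the output of
# 'generator 1': a sinusoid whose sign is flipped by the commutator at an
# angle theta0 past the zero crossing.  theta0 is an assumed value.
theta0 = np.pi / 4
theta = np.linspace(0.0, 2 * np.pi, 100000, endpoint=False)

# The commutator reverses the connection every half turn, starting at theta0:
sign = np.where(((theta - theta0) % (2 * np.pi)) < np.pi, 1.0, -1.0)
emf = sign * np.sin(theta)

dc = emf.mean()                              # DC component = (2/pi) cos(theta0)
ac_rms = np.sqrt(np.mean((emf - dc) ** 2))   # rms of the AC component

print(f"DC component = {dc:.3f}")
print(f"AC rms       = {ac_rms:.3f}  (larger than the DC part)")
```

With an ideal commutator (θ0 = 0) the output is the fully rectified |sin ωt|, mostly DC; as θ0 approaches π/2 the DC component (2/π)cos θ0 shrinks to zero. At θ0 = π/4 the AC component is already the larger of the two, consistent with the answer above.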

So the examiners should have marked both B and C correct.

Other problems with the 2001 paper
Q3. 'Resistance' should be 'Resistivity', or else there should be some words between 'resistance of' and 'mercury'. The question as asked is dimensionally incorrect. This error is unlikely to have confused anyone.

Q28 The quantity should really be the 'specific acoustic impedance' or 'wave impedance', rather than the 'acoustic impedance', but this is unlikely to have confused anyone.

Question 5

### Hints on doing tests  