Matter & Interactions II, Week 11

More with circuits, and this time capacitors, and the brilliantly simple description M&I provides for their behavior. In chapter 19, we see that traditional textbooks have misled students in a very serious way regarding the behavior of capacitors. Those “other” textbooks neglect fringe fields. Ultimately, and unfortunately, this means that capacitors should not work at all! The reason becomes obvious in chapter 19 of M&I. We see that in a circuit consisting of a charged capacitor and a resistor, it’s the capacitor’s fringe field that initiates the redistribution of surface charge that, in turn, establishes the electric field inside the wire that drives the current. The fringe field plays the same role that a battery’s field plays in a circuit with a flashlight bulb and battery: it initiates the transient interval during which surface charge redistributes. As you may have already guessed, the capacitor’s fringe field is also what stops the charging process for an (initially) uncharged capacitor in series with a battery. As the capacitor charges, its fringe field increases and counters the electric field of the redistributed surface charges, thus decreasing the net field with time. If we want functional circuits, we simply cannot neglect fringe fields.

Ultimately, the M&I model for circuits amounts to the reality that a circuit’s behavior is entirely due to surface charge redistributing itself along the circuit’s surface in such a way as to create a steady state or a quasisteady state. It’s just that simple. You don’t need potential difference. You don’t need resistance. You don’t need Ohm’s law. You only need charged particles and electric fields.

One thing keeps bothering me though. Consider one flashlight bulb in series with a battery. The circuit draws a certain current, call it i_1. Now, consider adding nothing but a second, identical flashlight bulb in parallel with the first one. Each bulb’s brightness should be very nearly the same as that of the original bulb. The parallel circuit draws twice the current of the original lone bulb, i_2 = 2i_1, but that doubled current is divided equally between the two parallel flashlight bulbs. That’s all perfectly logical, and I can correctly derive this result algebraically. I end up with a factor of 2 multiplying the product of either bulb’s filament’s electron number density, cross sectional area, and electron mobility.

i_2 \propto 2nAu
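
For reference, here’s a sketch of that algebra under the usual idealizations (identical bulbs, an ideal battery maintaining a fixed emf, and connecting wires that contribute negligibly to the loop rule). For either loop containing the battery and one bulb, the loop rule fixes the filament field, and the node rule at the battery then doubles the current:

\mbox{emf} \approx E_{\mbox{\tiny fil}}L_{\mbox{\tiny fil}} \implies E_{\mbox{\tiny fil}} \approx \dfrac{\mbox{emf}}{L_{\mbox{\tiny fil}}}

i_1 = nAuE_{\mbox{\tiny fil}}

i_2 = i_1 + i_1 = 2nAuE_{\mbox{\tiny fil}} \propto 2nAu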

My uneasiness is over the quantity to which we should assign the factor of 2. A desktop experiment in chapter 18 establishes that we get a greater current in a wire when the wire’s cross sectional area increases. Good. However, in putting two bulbs in parallel, is it really obvious that the effective cross sectional area of the entire circuit has doubled? It’s not so obvious to me, because the cross sectional area can only have doubled by virtue of adding an identical flashlight bulb in parallel with the first one. Unlike in the experiment I mentioned, nothing about the wires in the circuit changes. Adding a second bulb surely doesn’t change the wire’s mobile electron number density; that’s silly. Adding a second bulb also surely doesn’t change the wire’s electron mobility; that’s equally silly. Well, that leaves the cross sectional area as the quantity to which we could assign the factor of 2, but I’m not convinced the assignment is as obvious as it first seems. One student pointed out that the factor of 2 probably shouldn’t be thought of as “assigned to” any particular variable but rather to the quantity nAu as a whole. This immediately reminded me of the relativistic expression for a particle’s momentum \vec{p} = \gamma m \vec{v} where, despite stubborn authors who refuse to actually read Einstein’s work, the \gamma applies to the quantity as a whole and not merely to the mass.

So, my question boils down to whether or not there is an obvious way to “assign” the factor of 2 to the cross sectional area. I welcome comments, discussion, and feedback.

 


Matter & Interactions II, Week 10

Chapter 18. Circuits. You don’t need resistance. You don’t need Ohm’s law. All you need is the fact that charged particles respond to electric fields created by other charged particles. It’s just that simple.

When I took my first electromagnetism course, I felt stupid because I never could just look at a circuit and tell what was in series and what was in parallel. And the cube of resistors…well, I still have bad dreams about that. One thing I know now that I didn’t know then is that according to traditional textbooks, circuits simply should not work. Ideal wires don’t exist, and neither do ideal batteries nor ideal light bulbs. Fringe fields, however, do indeed exist, and capacitors just wouldn’t work without them. So basically, I now know that the traditional textbook treatment of circuits is not just flawed, but deeply flawed to the point of being unrealistic.

Enter Matter & Interactions. M&I’s approach to circuits invokes the concept of a surface charge gradient to establish a uniform electric field inside the circuit, which drives the current. This was tough to wrap my brain around at first, but now I really think it should be the new standard mainstream explanation for circuits in physics textbooks. The concept of resistance isn’t necessary. It’s there, but not in its usual macroscopic form. M&I treats circuits from a purely microscopic point of view, with fundamental parameters like mobile electron number density, electron mobility, and conductivity, together with geometry in the form of wire length and cross sectional area. Combine these with charge conservation (in the form of the “node rule”) and energy conservation per charge (in the form of the “loop rule”) and that’s all you need. That’s ALL you need. No more “total resistance” and “total current” nonsense either. In its place is a tight, coherent, and internally consistent framework where the sought-after quantities are the steady state electric field in each part of the circuit and the resulting current in each part. No more remembering that series resistors simply add and parallel resistors add reciprocally. Far more intuitive is the essentially directly observable fact that putting resistors in series is effectively the same as increasing the filament length and putting resistors in parallel is effectively the same as increasing the circuit’s cross sectional area. It’s so simple, like physics is supposed to be.
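
To make that concrete, here is a minimal sketch in Python of how the analysis might look for two bulbs in series with an ideal battery. All names and numerical values below are my own made-up illustrations; the only physics is conventional current I = e n A u E plus the node and loop rules, and the unknowns are the steady-state filament fields.

# Two bulbs in series with an ideal battery, analyzed microscopically.
# All numerical values are made up for illustration.
emf = 1.5                  # battery emf (V)
e   = 1.6e-19              # elementary charge (C)
n   = 9e28                 # mobile electron number density (m^-3), rough tungsten-like value
u   = 1.2e-5               # electron mobility (m^2/(V s)), made-up value
L1, A1 = 4e-3, 1e-8        # filament 1: length (m) and cross sectional area (m^2)
L2, A2 = 4e-3, 2e-8        # filament 2: same length, twice the area
# Node rule (steady state): e*n*A1*u*E1 = e*n*A2*u*E2  =>  E2 = (A1/A2)*E1
# Loop rule (neglecting the tiny field needed in the thick connecting wires):
#   emf = E1*L1 + E2*L2
E1 = emf / (L1 + L2 * A1 / A2)
E2 = E1 * A1 / A2
I  = e * n * A1 * u * E1
print(E1, E2, I)           # fields in V/m, conventional current in A

The thinner filament needs the larger field, and the current everywhere follows from it, with no mention of resistance at all.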

Of course, in the next chapter (chapter 19) the traditional “Ohm’s law” model of circuits is seen to be emergent from chapter 18’s microscopic description, but honestly, I see no reason to dwell on this. Most of my students are going to become engineers anyway, and they’ll have their own yearlong circuit courses in which they’ll learn all the necessary details from the engineering perspective. For now, they’re much better off understanding how circuits REALLY work, and if they do, they’ll be far ahead of where I was when I was in their shoes as an introductory student, and they’ll have the deepest understanding of anyone in their classes after transferring. That’s my main goal after all.

Feedback welcome.


Matter & Interactions II, Week 7

This week, I was away at the winter AAPT meeting in Atlanta. Students began working on the experiments from chapter 17, which serve to introduce magnetic fields.

I want to emphasize some really cool things about the mathematical expression for a particle’s magnetic field:

\vec{B}_{\mbox{\tiny particle}}=\dfrac{\mu_o}{4\pi}\dfrac{Q\vec{v}\times\hat{r}}{\lVert\vec{r}\rVert^2}

This is really a single particle form of the Biot-Savart law. I’m going to morph it into something really interesting. I’m going to make use of the fact that c^2 = \frac{1}{\mu_o\epsilon_o}, which I assert to students will be derived in a later chapter.

\vec{B}_{\mbox{\tiny particle}}=\dfrac{\mu_o}{4\pi}\dfrac{Q\vec{v}\times\hat{r}}{\lVert\vec{r}\rVert^2}

\vec{B}_{\mbox{\tiny particle}}=\dfrac{1}{4\pi\epsilon_oc^2}\dfrac{Q}{\lVert\vec{r}\rVert^2}\vec{v}\times\hat{r}

\vec{B}_{\mbox{\tiny particle}}=\dfrac{1}{c^2}\vec{v}\times\left(\dfrac{1}{4\pi\epsilon_o}\dfrac{Q}{\lVert\vec{r}\rVert^2}\hat{r}\right)

\vec{B}_{\mbox{\tiny particle}}=\dfrac{1}{c^2}\vec{v}\times\vec{E}_{\mbox{\tiny particle}}

THIS IS AMAZING! This demonstrates that this new thing called magnetic field is kind of like a velocity dependent electric field. That’s an oversimplification, but it hints that something deep is revealing itself here. Velocity connotes reference frame, and we see a big hint here that magnetic field depends on one’s reference frame. This is foreshadowing special relativity! We can show something else with one more slight rearrangement.

c\vec{B}_{\mbox{\tiny particle}}=\dfrac{\vec{v}}{c}\times\vec{E}_{\mbox{\tiny particle}}

This means that if we express velocity in fractions of c, then the quantity c\vec{B} has the same dimensions as \vec{E} and can thus be expressed in the same unit as electric field! This conceptualization allows for some beautiful symmetry to show itself later on when we get to the Maxwell equations. In some ways, electric fields and magnetic fields are interchangeable. Again, this is a hint of some underlying unification of the two, the electromagnetic field tensor, which I’m working hard to find a way to introduce into the introductory course. If students can understand simple Lorentz transformations, then they should be able to understand how the electromagnetic field tensor transforms from one frame to another within the framework of special relativity, and we can show some beautiful physics. I realize I’m in the minority when it comes to something like this because we tend to think of our students as not being mathematically prepared. I’ve come to realize that perhaps…just perhaps…that is our perception only because we aren’t giving them the best mathematical foundations upon which to prepare for physics. Maybe it’s our fault. Maybe.
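
A quick numerical check makes the rearrangement above concrete. This is just a sketch with arbitrary values: it computes the single-particle Biot-Savart field directly and then computes (1/c^2)\,\vec{v}\times\vec{E}, and the two agree.

# Numerical check that (1/c^2) v x E reproduces the Biot-Savart field of a
# single moving charge. All values are arbitrary illustrative choices.
import numpy as np
eps0 = 8.854e-12                      # permittivity of free space
mu0  = 4e-7 * np.pi                   # permeability of free space
c    = 1.0 / np.sqrt(mu0 * eps0)      # ~3e8 m/s
Q    = 1.6e-19                        # charge (C)
v    = np.array([2e6, 0.0, 0.0])      # particle velocity (m/s)
r    = np.array([0.0, 1e-10, 0.0])    # from the particle to the observation point (m)
rhat = r / np.linalg.norm(r)
E = Q * rhat / (4 * np.pi * eps0 * np.linalg.norm(r)**2)              # Coulomb field
B_biot_savart = mu0 / (4 * np.pi) * Q * np.cross(v, rhat) / np.linalg.norm(r)**2
B_from_E      = np.cross(v, E) / c**2
print(B_biot_savart)                  # the two printed vectors agree
print(B_from_E)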

Anyway, these ruminations are things I want, and hope, students will see on their own, but all too often I find that my students have difficulty even engaging at a minimal level. I struggle with this, and I like to think, and hope, that maybe it’s because they don’t see the beauty. That’s why I nudge them in these new and different directions. Like I said above, it may very well be our fault.

Feedback is welcome as always.


Matter & Interactions II, Week 6

I’m writing this a whole week late due, in part, to having been away at an AAPT meeting and having to plan and execute a large regional meeting of amateur astronomers.

This week was all about the concept of electric potential and how it relates to electric field. I love telling students that this topic is “potentially confusing” because the word “potential” comes up in two different contexts. The first is in the context of potential energy. Potential energy, which I try very hard to call interaction energy, is a property of a system, not of an individual entity. There must be at least two interacting entities to correctly speak of interaction energy. Following Hecht [reference needed], I like to think of energy, and thus interaction energy, as a way of describing change in a system using scalars rather than vectors. Conservative forces, like gravitational and electric forces, can be described with scalar energies and fortunately, these forces play a central role in introductory physics. The second context is that of electric potential, a new quantity that is the quotient of a change in electric potential energy and the amount of charge that gets moved around as a result of an interaction. The distinction between the two contexts is subtle but very important.

Oh, and speaking of potential, or interaction, energy: Matter & Interactions is the only textbook I know of that correctly shows the origins of The World’s Most Annoying Negative Sign (TWMANS) and how it relates to potential energy. When you write the total change in your system’s energy, you can attribute it to work done by internal forces and work done by external forces. When you rearrange this expression to put all the internal terms on the lefthand side and all the external terms on the righthand side, you pick up a negative sign that goes on to become TWMANS. This term with the negative sign, which is nothing more than the opposite of the work done by forces internal to the system, is DEFINED to be the change in potential energy for the system. It’s just that simple, but this little negative sign caused me so much grief in both undergrad and graduate courses. Some authors explicitly included it, others didn’t and instead flipped the integration limits on integrals to account for it. Chabay and Sherwood include it explicitly and consistently, so there should be no trouble in knowing when and where it’s needed.
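
Spelled out, the bookkeeping looks something like this (a sketch consistent with the description above, with K the system’s total kinetic energy and W the work done by internal or external forces):

\Delta K = W_{\mbox{\tiny int}} + W_{\mbox{\tiny ext}}

\Delta K + \left(-W_{\mbox{\tiny int}}\right) = W_{\mbox{\tiny ext}}

\Delta U \equiv -W_{\mbox{\tiny int}} \implies \Delta K + \Delta U = W_{\mbox{\tiny ext}}

The negative sign in the second line is TWMANS, and the definition in the third line is where it goes to live.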

There is also some interesting mathematics in this chapter. Line integrals and gradients are everywhere and we see they are intimately related. In fact, they are inverses of each other. I want to talk about one mathematical issue in particular, though, and that is within the context of the following problem statement:

Given a region of space where there is a uniform electric field \vec{E} and a potential difference \Delta V between two points separated by displacement \Delta \vec{r}, calculate the magnitude of the electric field \lVert \vec{E}\rVert.

This problem amounts to “unwrapping” a dot product (in this case \Delta V = -\vec{E}\bullet\Delta\vec{r}), something the textbooks, to my knowledge, never demonstrate how to do. My experience is that students inevitably treat the dot product as scalar multiplication and attempt to divide by \Delta\vec{r}, and of course dividing by a vector isn’t defined in Gibbsian vector analysis. I think the only permanent cure for this problem is to take a more formal approach to introducing vectors and dot products earlier in the course, but I tend to think I’m in the minority on that, and I don’t really care. The problem needs to be addressed one way or the other. Solving either a dot product or a cross product for an unknown vector requires knowledge of two quantities (the unknown’s dot product with a known vector and the unknown’s cross product with a known vector, OR the unknown’s divergence and curl) as constraints on the solution. Fortunately, at this point in the course we’re dealing with static electric fields, which have no curl (\nabla\times\vec{E}=0), or equivalently (I think) \vec{E} is collinear with \nabla V (differing in sign because the gradient points in the direction of increasing potential (I don’t like saying that for some reason…) while the electric field points in the direction of decreasing potential), so we can find something about \vec{E} from a dot product alone. So, students need to solve \Delta V = -\vec{E}\bullet\Delta\vec{r} for \lVert\vec{E}\rVert. Here’s the beginning of the solution. The first trick is to express the righthand side in terms of scalars.

\Delta V = -\vec{E}\bullet\Delta\vec{r}

\Delta V = -\lVert\vec{E}\rVert \lVert\Delta\vec{r}\rVert\cos\theta

\lVert\vec{E}\rVert = -\dfrac{\Delta V}{\lVert\Delta\vec{r}\rVert\cos\theta}

We have a slight problem, and that is that the lefthand side is a vector magnitude and thus is always positive. We must ensure that the righthand side is always positive. I see two ways to do this. If \vec{E} and \Delta\vec{r} are parallel (\theta=0) then \Delta V must represent a negative number and TWMANS will ensure that we get a positive value for the righthand side, and thus also for the lefthand side. If \vec{E} and \Delta\vec{r} are antiparallel (\theta=\pi) then \Delta V must represent a positive number and TWMANS, along with the trig function, will ensure that we get a positive value for the righthand side, and again also for the lefthand side. I want to instill this kind of deep, geometric reasoning in my students, but I’m finding that it’s rather difficult. Their approach is to simply take the absolute value of the righthand side.

\lVert\vec{E}\rVert = \left\lvert-\dfrac{\Delta V}{\lVert\Delta\vec{r}\rVert\cos\theta}\right\rvert

It works numerically of course, but bypasses the physics in my opinion. There’s one more thing I want students to see here, and that is the connection to the concept of gradient. Somehow, they need to see

\lVert\vec{E}\rVert = -\dfrac{\Delta V}{\lVert\Delta\vec{r}\rVert\cos\theta}

as

E_x = -\dfrac{\partial V}{\partial x}

and I think this can be done if we think about the role of the trig function here, which tells us how much of \Delta\vec{r} is parallel to \vec{E}, and if we remember that the component label x is really just an arbitrary label for a particular direction. We could just as well use y, z, or any other label. We must be careful about signs here too, because the sign of E_x must be consistent with the geometry relative to the displacement.
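
Here is a quick numerical sketch of the unwrapping and of the gradient connection. The field and displacement values are made up purely for illustration.

import numpy as np
E  = np.array([250.0, 0.0, 0.0])   # uniform field (V/m), made-up value
dr = np.array([0.02, 0.03, 0.0])   # displacement between the two points (m)
dV = -np.dot(E, dr)                # potential difference; -5.0 V here
cos_theta = np.dot(E, dr) / (np.linalg.norm(E) * np.linalg.norm(dr))
E_mag = -dV / (np.linalg.norm(dr) * cos_theta)
print(E_mag)                       # recovers 250.0
Ex = -dV / dr[0]                   # finite-difference version of E_x = -dV/dx; also 250.0 here,
                                   # but only because this E happens to point along x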

As an aside, it kinda irks me that position vectors seem to be the only vectors for which we label the components with coordinates. I don’t know why that bothers me so much, but it does. Seems to me we should use r_x rather than just x and there’s probably a deep reason for this, but I’ve yet to stumble onto it. Perhaps it’s just as simple as noticing that a position’s components are coordinates. Is it that simple?

As always, feedback is welcome.


Matter & Interactions II, Week 5

This week was all about calculating electric fields for continuous charge distributions. This is usually students’ first exposure to what they think of as “calculus-based” physics because they are explicitly setting up and doing integrals. There’s lots going on behind the scenes though.

In calculus class, students are used to manipulating functions by taking their derivatives, indefinite integrals, and definite integrals. In physics, however, these ready-made functions don’t exist. When we write dQ, there is no function Q() for which we calculate a differential. The symbol dQ represents a small quantity of charge, a “chunk” as I usually call it. That’s it. There’s nothing more. Similarly, dm represents a small “chunk” of mass rather than the differential of a function m(). The progression usually begins with uniform linear charge distributions and moves on to angular (i.e. linear charge distributions bent into arcs of varying extents), then area, then volume charge distributions (Are “area” and “volume” adjectives?). One cool thing is how each type of distribution can be constructed from a previous one. You can make a cylinder of charge out of lines of charge. You can make a loop of charge out of a line of charge. You can make a plane of charge out of lines of charge. You can make a sphere of charge out of loops of charge. Beautiful! There are lots of ways to approach setting up the integral that sweeps through the charge distribution to get the net field.

It’s interesting to ponder the effect of changing the coordinate origin. Consider a charged rod. If the rod’s left end is at the origin, the limits of integration are 0 and L (the rod’s length). If the rod’s center is at the origin, the limits of integration are -L/2 and +L/2. The integrand looks slightly different, but the resulting definite integral is the same in both cases! Trivial? No! It’s yet another indication that Nature doesn’t care about coordinate systems; they’re a human invention and subject to our desire for mathematical convenience. This is also a good time to recall even (f(-x) = f(x)) and odd (f(-x) = -f(x)) functions, because then one can look at an integral and its limits and predict whether or not the integral must vanish, which connects with symmetry arguments from geometry. This, to me, is one of the very definitions of mathematical beauty. A given charge distribution’s electric field is independent of the coordinate system used to derive it. The forthcoming chapter on Gauss’s law and Ampère’s law relies on symmetries to predict electric and magnetic field structures for calculating flux and circulation, and that’s foreshadowed in this chapter.
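
Here is a quick numerical illustration of that coordinate independence, with made-up numbers: sum the contributions of the rod’s chunks at a point on the perpendicular bisector, once with the origin at the rod’s left end and once with the origin at the rod’s center.

# A check that the rod's field doesn't depend on where we put the origin.
# Made-up numbers: rod of length L and total charge Q along the x axis,
# observation point a distance d from the rod's center on its perpendicular bisector.
import numpy as np
eps0, L, Q, d, N = 8.854e-12, 1.0, 1e-9, 0.05, 10000
dQ = Q / N                                             # charge of each chunk
def E_at(x_chunks, obs_x, obs_y):
    rx, ry = obs_x - x_chunks, obs_y                   # from each chunk to the observation point
    r3 = (rx**2 + ry**2)**1.5
    k = 1.0 / (4.0 * np.pi * eps0)
    return np.array([np.sum(k * dQ * rx / r3), np.sum(k * dQ * ry / r3)])
chunks_left   = (np.arange(N) + 0.5) * L / N           # chunk positions, origin at the left end
chunks_center = chunks_left - L / 2.0                  # same chunks, origin at the rod's center
print(E_at(chunks_left,   L / 2.0, d))                 # observation point at (L/2, d)
print(E_at(chunks_center, 0.0,     d))                 # same physical point in the new coordinates
# Both print the same field vector; the physics doesn't care where the origin is.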

This is a lot to convey to students and from their point of view it’s a lot to understand. I hope I can do better at getting it all across to them than was done for me.

Feedback welcome as always.


Matter & Interactions II, Week 2

This week was yet another partial week. Between weather and holidays, we’ve not yet had a full week of classes. Such is life I guess.

This week, we looked at the electric field of a static particle and the electric field of a fixed dipole on the dipole’s axis and on the perpendicular bisector of the axis. I really wish introductory textbooks would introduce the full expression, in coordinate-free form of course, for a dipole field. I think it would go a long way toward reinforcing introductory understanding of vectors. We already present a particle’s field in coordinate-free form, but why not a dipole’s field? No one that I know of has taken the plunge. That includes me unfortunately. Maybe someday.
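
For the record, the expression I have in mind is the usual coordinate-free far-field form, valid at distances large compared to the charge separation, with \vec{p} the dipole moment:

\vec{E}_{\mbox{\tiny dipole}} \approx \dfrac{1}{4\pi\epsilon_o}\dfrac{3\left(\vec{p}\bullet\hat{r}\right)\hat{r}-\vec{p}}{\lVert\vec{r}\rVert^3}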

We spent all of Thursday (the course meets M-Th 10:00 a.m. -11:20 a.m.) working with GlowScript, our main programming environment this semester. I demonstrated how to define a new function, sgn() in this case. I’m rather surprised that it’s not internally defined by default, but it’s trivial to add to one’s program.
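
For reference, here’s a minimal version of the kind of definition I showed, written in ordinary Python/VPython syntax:

def sgn(x):
    # returns +1, -1, or 0 according to the sign of x
    if x > 0:
        return 1
    elif x < 0:
        return -1
    else:
        return 0

print(sgn(-3e-9))   # prints -1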

There’s not much else to say about this week. It’s all about laying a good foundation for the coming chapters. That’s important, but alas not always exciting.

 


Matter & Interactions II, Week 1

This week was supposed to begin on Monday, but we lost both Monday and Tuesday to snow and icy roads so this week was effectively just a two day week.

On Wednesday, I demonstrated Jupyter notebooks and informed the class that effective this semester, we’re moving away from Classic VPython. From this point on, we will only use GlowScript and Jupyter VPython. Using the latter is very important because it allows for file I/O whereas there’s no easy way to do that (that I’m aware of) with GlowScript. We will also continue using LaTeX (via Overleaf) for writing solutions.
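
As a tiny illustration of what I mean by file I/O, ordinary Python file handling works in a Jupyter VPython notebook; the file name below is just a hypothetical example.

# Writing simulation output to a file from a Jupyter VPython notebook.
with open("projectile_data.csv", "w") as f:
    f.write("t,x,y,z\n")
    f.write("0.0,0.0,0.0,0.0\n")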

 

[Figure: “forcefield” — the expressions for the electric force between two charged particles and for the electric field of a particle.]

On Thursday, I gave an overview of chapter 13 on electric force and the electric field of a particle. It’s interesting to note that the denominators of both expressions contain an area, specifically the area of a sphere. What might that be related to? I teased the class with this question in anticipation of the chapter on Gauss’s law.
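
Since the figure above may not render everywhere, the two expressions are of roughly this form in my notation (the typesetting in the figure itself may differ slightly), with \hat{r} pointing from the source charge toward the observation location:

\vec{F}_{\mbox{\tiny on q by Q}}=\dfrac{1}{4\pi\epsilon_o}\dfrac{\lvert Q\rvert\lvert q\rvert}{\lVert\vec{r}\rVert^2}\,\mbox{sgn}(Q)\,\mbox{sgn}(q)\,\hat{r}

\vec{E}_{\mbox{\tiny particle}}=\dfrac{1}{4\pi\epsilon_o}\dfrac{\lvert Q\rvert}{\lVert\vec{r}\rVert^2}\,\mbox{sgn}(Q)\,\hat{r}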

Note the presence of absolute value bars and the sgn() function in each expression. Charge, unlike mass, can be positive or negative. A vector’s magnitude, however, must always be positive without exception, at least if we are going to stick with the fundamental definitions from first semester physics. That means that we must use the absolute value of charge to calculate the magnitude of an electric force or electric field. We could always sidestep this issue by instead defining the signed magnitude to be the scalar part of the vector, but this isn’t consistent with a vector being the product of a magnitude and a direction. In the expression for electric force, note that we could also take the absolute value of the product of the two charges, which might be a better way to write it. I’ll have to think about that.

Anyway, the sgn() function is necessary computationally. A person can work out the correct directions for force and field by physical and geometric reasoning, but a computer must be told explicitly how to do it, and that’s the purpose of the sgn() function here. It assures the correct geometry based on the signs of the charges. I’ve never seen this use in any textbook, but it seems quite necessary to me in order to maintain the fundamental definition of a vector’s magnitude. Thus, I include it.
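
Here is a sketch of what that looks like computationally, in VPython-style Python; the names and numerical values are mine and purely illustrative.

from vpython import vector, mag, hat   # in GlowScript itself, no import is needed

def sgn(x):
    return 1 if x > 0 else (-1 if x < 0 else 0)

oofpez = 9e9                       # 1/(4*pi*epsilon_0) in N m^2/C^2
Q      = -2e-9                     # source charge (C); made-up value
r      = vector(0.03, 0.04, 0)     # from the source charge to the observation location (m)
E_mag  = oofpez * abs(Q) / mag(r)**2   # magnitude: always positive, built from |Q|
E_dir  = sgn(Q) * hat(r)               # direction: sgn(Q) flips it for a negative source
E      = E_mag * E_dir
print(E)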

Also note that we use double bars for vector magnitudes and single bars for absolute values. These are two conceptually different things and thus I feel they warrant different symbols. It is also consistent with what my students see in their calculus textbook and I try to maintain some sense of consistency between their math and physics texts.

UPDATE: Oh, one more thing. Every textbook I know of freely switches between Q and q for charge, even for the same expression and sometimes even for the same expression in the same chapter. This is confusing. To eliminate this confusion, I consistently use Q for a source charge (a charge associated with the creation (I don’t like that word) of an electric field) and q for an experiential charge (a charge that experiences an electric field created by another charge).

UPDATE: In the fourth edition of Matter & Interactions, Chabay and Sherwood deal with the sign issue by treating everything to the left of the unit vector in the above expressions as a signed scalar quantity and mention on page 520 that one should take the absolute value of this quantity to get the magnitude of the associated vector. Computationally, they calculate a particle’s electric field in one expression, without separately calculating the magnitude and direction, and this is fine. I think students should be aware of different sign conventions and their implications, but I also think foundational definitions should be sacrosanct. If the foundation is variable, it isn’t a foundation after all.

UPDATE: After much thought, I have decided that I am okay with defining the magnitude of a particle’s electric field to be the absolute value of the quantity preceding the direction and excluding the sgn() function. The resulting caveat is that without taking the absolute value, we must not call this quantity a magnitude; it is a signed scalar.

I ended Thursday’s class with a question:

WHY must the electric force shared by two charged particles lie along the line connecting them? 

This question can be answered with no numerical calculation or computation at all, but with physical reasoning using symmetry, specifically the fact that space is isotropic. The logic goes something like this:

  • Define a system to consist of two charged particles with charges Q and q, isolated from all other influences.
  • Assume that the force on q due to Q has a component that is NOT along the line connecting them, and draw an arrow representing this force with its tail on q.
  • Rotate the system around an axis coinciding with the line connecting q and Q by 180 degrees, and draw the new system.
  • Note that the rotated system is indistinguishable from the original system. This is important, because if nothing about the system changed, then we should expect there to be no change in the force on q due to Q.
  • However, since we assumed that the force on q has a component perpendicular to the line connecting q and Q, the force “looks different” for the rotated system compared to the original system. A uniqueness theorem guarantees that for every charge distribution, there is one and only one net force on each particle. Thus, there cannot be more than one “correct” net force on q due to Q.
  • If space is indeed isotropic and a change to the system leaves it looking the same, then it cannot be the case that the force on q due to Q has a component perpendicular to the line connecting q and Q.
  • Therefore, the force on q must have no component perpendicular to the line connecting q and Q, and thus it must lie along that line. This is a simple proof by contradiction.

This type of powerful reasoning, appealing to symmetry, has many uses in electromagnetic theory, specifically in the introductory course where students need to ascertain the directions of electric fields due to certain charge distributions. Symmetry plays a role in setting up the integrals necessary for such calculations. I think it is important to introduce reasoning by symmetry as early as possible. Note that this reasoning can also be applied to the geometry of the gravitational force from introductory mechanics.

Feedback is always welcome!