I’m combining two weeks in this post.
The first week, we dealt with magnetic forces. One thing that I have never thought much about is the fact that the quantity $\vec{v}\times\vec{B}$ is effectively an electric field, but one that depends on velocity. When velocity is involved, reference frames are involved, and that of course means Einstein is talking to us again. M&I addresses the fact that what we detect as an electric field and/or a magnetic field depends on our reference frame. This is fundamental material that I feel should be included in every introductory electromagnetic theory course. There’s really no good reason to omit it given that special relativity is a foundation of all contemporary physics. It’s sad to think that beginning next fall, our students won’t be exposed to this material any more.
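For anyone who wants to make the frame dependence concrete, the standard special-relativistic field transformations for a boost with speed $v$ along the $x$ axis (a textbook result of special relativity, not something specific to M&I) are:

$$E'_x = E_x, \qquad E'_y = \gamma\,(E_y - vB_z), \qquad E'_z = \gamma\,(E_z + vB_y)$$

$$B'_x = B_x, \qquad B'_y = \gamma\left(B_y + \frac{v}{c^2}E_z\right), \qquad B'_z = \gamma\left(B_z - \frac{v}{c^2}E_y\right)$$

with $\gamma = 1/\sqrt{1 - v^2/c^2}$. A pure electric field in one frame is a mixture of electric and magnetic fields in another; neither field is more fundamental than the other.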
The second week gets us into chapter 21, which presents Gauss’s law and Ampère’s law. There are many fine points and details to present here. I’ll try to list as many as I can think of.
- I use the words pierciness, flowiness, spreadingoutness, and swirliness to introduce the concepts of flux, circulation, divergence, and curl respectively.
- We have the term flux for the quantity given by surface integrals, but we rarely if ever see the term circulation for line integrals. I recommend introducing the term, primarily because it forms the basis for the definition of curl.
- The distinction between an open surface and a closed surface is very important.
- I, like M&I, prefer to write vector area as $\hat{n}\,\mathrm{d}A$ rather than $\mathrm{d}\vec{A}$ because it allows for introducing a “sneaky one” into the calculation of flux that lets a dot product become a product of scalars when the field is parallel to the surface’s unit normal:

$$\Phi = \int \vec{E}\cdot\hat{n}\,\mathrm{d}A = \int E\,(\hat{E}\cdot\hat{n})\,\mathrm{d}A = \int E\,\mathrm{d}A \qquad (\hat{E}\parallel\hat{n} \implies \hat{E}\cdot\hat{n} = 1)$$
- Similarly, I like writing an element of vector length, at least for electromagnetic theory, as $\hat{t}\,\mathrm{d}\ell$ rather than $\mathrm{d}\vec{\ell}$ (the $\ell$ is supposed to be bold but it doesn’t look bold to me). I don’t think I have ever seen this notation in an introductory course before, but I like it because students have seen unit tangents in calculus and this notation closely parallels that for vector area as described above. Plus, it also allows for a “sneaky one” in the calculation of circulation when the field is parallel to the path’s unit tangent:

$$\oint \vec{E}\cdot\hat{t}\,\mathrm{d}\ell = \oint E\,(\hat{E}\cdot\hat{t})\,\mathrm{d}\ell = \oint E\,\mathrm{d}\ell \qquad (\hat{E}\parallel\hat{t} \implies \hat{E}\cdot\hat{t} = 1)$$
- After this chapter, we can finally write Maxwell’s equations for the first time. I show them as both integral equations and as differential equations. One of my usual final exam questions is to write each of the four equations as both an integral equation and a differential equation and to provide a one sentence interpretation of each form of each equation.
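For reference, here are the four Maxwell equations in both forms (standard SI statements, written with unit normals and unit tangents rather than vector area and length elements):

$$\oint_S \vec{E}\cdot\hat{n}\,\mathrm{d}A = \frac{q_{\text{enc}}}{\varepsilon_0} \qquad\longleftrightarrow\qquad \vec{\nabla}\cdot\vec{E} = \frac{\rho}{\varepsilon_0}$$

$$\oint_S \vec{B}\cdot\hat{n}\,\mathrm{d}A = 0 \qquad\longleftrightarrow\qquad \vec{\nabla}\cdot\vec{B} = 0$$

$$\oint_C \vec{E}\cdot\hat{t}\,\mathrm{d}\ell = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t} \qquad\longleftrightarrow\qquad \vec{\nabla}\times\vec{E} = -\frac{\partial\vec{B}}{\partial t}$$

$$\oint_C \vec{B}\cdot\hat{t}\,\mathrm{d}\ell = \mu_0\left(I_{\text{enc}} + \varepsilon_0\frac{\mathrm{d}\Phi_E}{\mathrm{d}t}\right) \qquad\longleftrightarrow\qquad \vec{\nabla}\times\vec{B} = \mu_0\vec{J} + \mu_0\varepsilon_0\frac{\partial\vec{E}}{\partial t}$$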
That’s about it for these two chapters. I thought there was something else I wanted to talk about, but it seems to have escaped me and I’ll update this post if and when I remember it.
Feedback welcome as always.
We’re hanging out in chapter 19 looking at the properties of capacitors in circuits.
In response to my (chemist) department chair’s accusation that I’m not rigorous enough in my teaching of “the scientific method” as it’s practiced in chemistry, I just had “the talk” about “THE” scientific method with the class and about how it doesn’t exist. I will never forget Dave McComas (IBEX) telling the audience at an invited session I organized at AAPT in Ontario (CA) that we MUST stop presenting “the scientific method” as it is too frequently presented in the textbooks because it simply does not reflect how science works. No one hypothesizes a scientific discovery. Once a prediction is made and experimentally (or observationally, in the case of astronomy) verified, that’s not a discovery, because the outcome is expected. Even if the prediction isn’t verified, one of the required known possible outcomes is that the prediction is wrong. There’s nothing surprising here. True discoveries happen when we find something we had no reason to expect to be there in the first place. The Higgs boson? Not a discovery, because it was predicted forty or so years ago and we only recently had the technology to test for its presence. I don’t think anyone honestly expected it to not be found, but I think many theoretical particle physicists (not so) secretly hoped it wouldn’t be found because then we would have actually learned something new (namely that the standard model has problems).
The “scientific method” simply doesn’t exist as a finite numbered sequence of steps whose ordering is the same from discipline to discipline. Textbooks need to stop presenting it that way. Scientific methodology is more akin to a carousel upon which astronomers, chemists, physicists, geologists, or biologists (and all the others I didn’t specify) jump at different places. Observational astronomers simply don’t begin by “forming an hypothesis” as too many overly simplistic sources may indicate. Practitioners in different disciplines begin the scientific process at different places by the very nature of their disciplines, and I don’t think there’s a way to overcome that.
Rather than a rote sequence of steps, scientific methodology should focus on validity through testability and falsifiability. I know there are some people who think that falsifiability has problems, and I acknowledge them. However, within the context of introductory science courses, testability and falsifiability together form a more accurate framework for how science actually works. This is the approach I have been taking for over a decade in my introductory astronomy course. It is not within my purview to decide what is and is not appropriate for other disciplines, like chemistry. My chemist colleagues can present scientific methodology as they see fit. I ask for the same respect in doing so within my disciplines (physics and astronomy).
I now consider “the scientific method” to have been adequately “covered” in my calculus-based physics course.
Feedback welcome as always.
More with circuits, and this time capacitors, and the brilliantly simple description M&I provides for their behavior. In chapter 19, we see that traditional textbooks have misled students in a very serious way regarding the behavior of capacitors. Those “other” textbooks neglect fringe fields. Ultimately, and unfortunately, this means that capacitors should not work at all! The reason becomes obvious in chapter 19 of M&I. We see that in a circuit consisting of a charged capacitor and a resistor, it’s the capacitor’s fringe field that initiates the redistribution of surface charge that, in turn, establishes the electric field inside the wire that drives the current. The fringe field plays the same role that a battery’s field plays in a circuit with a flashlight bulb and battery. It initiates the charge redistribution transient interval. As you may have already guessed, the capacitor’s fringe field is what stops the charging process for an (initially) uncharged capacitor in series with a battery. As the capacitor charges, its fringe field increases and counters the electric field of the redistributed surface charges, thus decreasing the net field with time. If we want functional circuits, we simply cannot neglect fringe fields.
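The charging story above can be caricatured numerically. This is a toy model of my own construction, not M&I’s treatment: I assume the driving field from the redistributed surface charge is a constant `E_drive`, and that the opposing fringe field grows in proportion to the plate charge `Q` with an assumed constant `k`. All numerical values are illustrative.

```python
# Toy model of capacitor charging, tracked microscopically (my sketch, not M&I's).
# Net field in the wire = driving surface-charge field minus the growing fringe field.
n = 8.4e28      # mobile electron number density of copper, 1/m^3
A = 1e-6        # wire cross-sectional area, m^2 (assumed)
u = 4.5e-3      # electron mobility, (m/s)/(V/m) (assumed)
e = 1.6e-19     # elementary charge, C
E_drive = 0.1   # field from redistributed surface charge, V/m (assumed)
k = 1e5         # fringe field per coulomb of plate charge, (V/m)/C (assumed)

Q, dt = 0.0, 1e-9
for _ in range(200000):
    E_net = E_drive - k * Q        # fringe field opposes the driving field
    i = n * A * u * e * E_net      # microscopic current: i = |q| n A u E
    Q += i * dt                    # charge accumulates on the plates

# As Q approaches E_drive/k, the net field (and hence the current) dies out:
# the capacitor's own fringe field is what stops the charging.
print(Q, E_drive / k)
```

The point of the sketch is qualitative: the current shuts itself off not because of any macroscopic “resistance” rule, but because the fringe field cancels the driving field.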
Ultimately, the M&I model for circuits amounts to the reality that a circuit’s behavior is entirely due to surface charge redistributing itself along the circuit’s surface in such a way as to create a steady state or a quasisteady state. It’s just that simple. You don’t need potential difference. You don’t need resistance. You don’t need Ohm’s law. You only need charged particles and electric fields.
One thing keeps bothering me though. Consider one flashlight bulb in series with a battery. The circuit draws a certain current, say $i$. Now, consider adding nothing but a second, identical flashlight bulb in parallel with the first one. Each bulb’s brightness should be very nearly the same as that of the original bulb. The parallel circuit draws twice the current of the original lone bulb, but that doubled current is divided equally between the two parallel flashlight bulbs. That’s all perfectly logical, and I can correctly derive this result algebraically. I end up with a factor of 2 multiplying the product of either bulb’s filament’s electron number density, cross sectional area, and electron mobility.
My uneasiness is over the quantity to which we should assign the factor of 2. A desktop experiment in chapter 18 establishes that we get a greater current in a wire when the wire’s cross sectional area increases. Good. However, in putting two bulbs in parallel, is it really obvious that the effective cross sectional area of the entire circuit has doubled? It’s not so obvious to me, because the cross sectional area can only double by virtue of adding an identical flashlight bulb in parallel with the first one. Unlike the experiment I mentioned, nothing about the wires in the circuit changes. Adding a second bulb surely doesn’t change the wire’s mobile electron number density; that’s silly. Adding a second bulb also surely doesn’t change the wire’s electron mobility; that’s equally silly. Well, that leaves the cross sectional area to which we could assign the factor of 2, but I don’t find that assignment obvious either. One student pointed out that the factor of 2 probably shouldn’t be thought of as “assigned to” any particular variable but rather as applying to the quantity as a whole. This immediately reminded me of the relativistic expression for a particle’s momentum, $\vec{p} = \gamma m \vec{v}$, where, despite stubborn authors who refuse to actually read Einstein’s work, the $\gamma$ applies to the quantity as a whole and not merely to the mass.
So, my question boils down to whether or not there is an obvious way to “assign” the factor of 2 to the cross sectional area. I welcome comments, discussion, and feedback.
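The bookkeeping in question can be written out explicitly. This is my own illustration with assumed filament parameters; the physics is just the microscopic current expression $i = nAu\,eE$ applied to each branch, with the node rule adding the branch currents.

```python
# Parallel-bulb bookkeeping sketch (my illustration; all values assumed).
# One bulb: i = n * A * u * e * E. Two identical bulbs in parallel each
# carry the same i, so the battery supplies 2 * i.
n = 8.4e28   # filament mobile electron number density, 1/m^3 (assumed)
A = 1e-8     # filament cross-sectional area, m^2 (assumed)
u = 5e-5     # electron mobility, (m/s)/(V/m) (assumed)
e = 1.6e-19  # elementary charge, C
E = 100.0    # steady-state field in each filament, V/m (assumed)

i_one_bulb = n * A * u * e * E   # current through a single filament
i_battery = 2 * i_one_bulb       # node rule: the two branch currents add

# Whether the 2 "belongs to" A (an effective doubled cross section) or to
# the current as a whole is exactly the question raised above; the algebra
# is indifferent to where we attach it.
print(i_one_bulb, i_battery)
```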
Chapter 18. Circuits. You don’t need resistance. You don’t need Ohm’s law. All you need is the fact that charged particles respond to electric fields created by other charged particles. It’s just that simple.
When I took my first electromagnetism course, I felt stupid because I never could just look at a circuit and tell what was in series and what was in parallel. And the cube of resistors…well, I still have bad dreams about that. One thing I know now that I didn’t know then is that according to traditional textbooks, circuits simply should not work. Ideal wires don’t exist, and neither do ideal batteries nor ideal light bulbs. Fringe fields, however, do indeed exist, and capacitors just wouldn’t work without them. So basically, I now know that the traditional textbook treatment of circuits is not just flawed, but deeply flawed to the point of being unrealistic.
Enter Matter & Interactions. M&I’s approach to circuits invokes the concept of a surface charge gradient to establish a uniform electric field inside the circuit, which drives the current. This was tough to wrap my brain around at first, but now I really think it should be the new standard mainstream explanation for circuits in physics textbooks. The concept of resistance isn’t necessary. It’s there, but not in its usual macroscopic form. M&I treats circuits from a purely microscopic point of view with fundamental parameters like mobile electron number density, electron mobility, and conductivity, along with geometry in the form of wire length and cross sectional area. Combine these with charge conservation (in the form of the “node rule”) and energy conservation per charge (in the form of the “loop rule”) and that’s all you need. That’s ALL you need. No more “total resistance” and “total current” nonsense either. In its place is a tight, coherent, and internally consistent framework where the sought-after quantities are the steady state electric field in each part of the circuit and the resulting current in each part. No more remembering that series resistors simply add and parallel resistors add reciprocally. Far more intuitive is the essentially directly observable fact that putting resistors in series is effectively the same as increasing the filament length, and putting resistors in parallel is effectively the same as increasing the circuit’s cross sectional area. It’s so simple, like physics is supposed to be.
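To show what a purely microscopic solution looks like, here is a minimal sketch, in the spirit of that approach but with parameter values I made up, for two filaments in series. The unknowns are the steady-state fields, not resistances: the node rule forces the same current through both filaments, and the loop rule says the fields times the lengths add up to the emf.

```python
# Series circuit solved microscopically (my sketch; all values assumed).
# Node rule: same current everywhere -> n*A1*u*e*E1 = n*A2*u*e*E2
# Loop rule (energy per charge):       emf = E1*L1 + E2*L2
n = 9e28      # mobile electron number density, 1/m^3 (assumed)
u = 5e-5      # electron mobility, (m/s)/(V/m) (assumed)
e = 1.6e-19   # elementary charge, C
emf = 3.0     # battery emf, V (assumed)
A1, L1 = 1e-8, 0.005   # filament 1 geometry (assumed)
A2, L2 = 2e-8, 0.005   # filament 2: same length, twice the area (assumed)

E1 = emf / (L1 + (A1 / A2) * L2)   # node rule gives E2 = (A1/A2)*E1
E2 = (A1 / A2) * E1
i = n * A1 * u * e * E1            # the one current in the whole loop

# The fatter filament needs a weaker field to carry the same current;
# no "resistance" was defined anywhere in the solution.
print(E1, E2, i)
```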
Of course, in the next chapter (chapter 19) the traditional “Ohm’s law” model of circuits is seen to be emergent from chapter 18’s microscopic description, but honestly, I see no reason to dwell on this. Most of my students are going to become engineers anyway, and they’ll have their own yearlong circuit courses in which they’ll learn all the necessary details from the engineering perspective. For now, they’re much better off understanding how circuits REALLY work. If they do, they’ll be far ahead of where I was in their shoes as an introductory student, and they’ll have the deepest understanding of anyone in their classes after transferring. That’s my main goal, after all.
This week was a very short week consisting of only two days. We met as usual on Monday, but Tuesday was a “flip day” and ran as a Friday. This class doesn’t meet on Fridays so we only had one day this week, and we devoted it to tying up loose ends from chapter 17.
Next week, barring losing days to winter weather as I sit here and watch the forecast deteriorate, we will hit circuits the M&I way!
In an interesting development, I was informed by my coworker (a PhD physicist) that our department chair had approached him Tuesday morning to ask if he would like to take my calculus-based physics courses from me next year on the grounds that I’m not rigorous enough. Needless to say, I was shocked because the chair had not mentioned this to me and indeed has not spoken to me about it at all. Had my coworker not told me, I would not have known.
My chair, a PhD chemist, seems to think that because M&I emphasizes computation over traditional labs, and that what labs we do are not as rigorous as chemistry labs, either M&I or I or perhaps both are not appropriate for our students and indeed may be causing them to be ill prepared for transfer to universities. Of course this is all nonsense, but my chair actually said to my face that she knows more about computation, theory, and experiment than I do and that labs must be done the “chemistry way” or they’re not valid. If this weren’t so disgustingly true, it would be mildly funny. It’s not funny. It’s true.
I don’t know what I’m going to do, but it’s clear both M&I and I are probably on our way out at my current institution. My colleague (who, by the way, has no interest in teaching calculus-based physics) and I are both exploring numerous options, including leaving for another institution.
This week was devoted entirely to programming for chapter 17 on magnetic fields. At least two students had difficulties with lists, which was surprising given that they’d used them in a previous chapter. It was like they’d never seen them before. Must be something in the water.
Next week is basically a waste of time because there are only two class days, and only one on which this class will meet. Monday will be a normal day but Tuesday will run as a Friday. Strange? It’s an artifact of the fantasy world I inhabit during the week.
This week, I was away at the winter AAPT meeting in Atlanta. Students began working on the experiments from chapter 17, which serve to introduce magnetic fields.
I want to emphasize some really cool things about the mathematical expression for a particle’s magnetic field:

$$\vec{B} = \frac{\mu_0}{4\pi}\,\frac{q\,\vec{v}\times\hat{r}}{r^2}$$
This is really a single particle form of the Biot–Savart law. I’m going to morph it into something really interesting. I’m going to make use of the fact that $\mu_0 = \dfrac{1}{\varepsilon_0 c^2}$, which I assert to students will be derived in a later chapter. Substituting it in and regrouping gives

$$\vec{B} = \frac{1}{4\pi\varepsilon_0}\,\frac{q}{r^2}\,\frac{\vec{v}}{c^2}\times\hat{r} = \frac{\vec{v}}{c^2}\times\left(\frac{1}{4\pi\varepsilon_0}\,\frac{q\,\hat{r}}{r^2}\right) = \frac{\vec{v}}{c^2}\times\vec{E}$$
THIS IS AMAZING! This demonstrates that this new thing called magnetic field is kind of like a velocity dependent electric field. That’s an oversimplification, but it hints that something deep is revealing itself here. Velocity connotes reference frame, and we see a big hint here that magnetic field depends on one’s reference frame. This is foreshadowing special relativity! We can show something else with one more slight rearrangement:

$$c\vec{B} = \frac{\vec{v}}{c}\times\vec{E}$$
This means that if we express velocity in fractions of $c$, then the quantity $c\vec{B}$ has the same dimensions as $\vec{E}$ and can thus be expressed in the same unit as electric field! This conceptualization allows for some beautiful symmetry to show itself later on when we get to the Maxwell equations. In some ways, electric fields and magnetic fields are interchangeable. Again, this is a hint of some underlying unification of the two, the electromagnetic field tensor, which I’m working hard to find a way to introduce into the introductory course. If students can understand simple Lorentz transformations, then they should be able to understand how the electromagnetic field tensor transforms from one frame to another within the framework of special relativity, and we can show some beautiful physics. I realize I’m in the minority when it comes to something like this because we tend to think of our students as not being mathematically prepared. I’ve come to realize that perhaps…just perhaps…that is our perception only because we aren’t giving them the best mathematical foundations upon which to prepare for physics. Maybe it’s our fault. Maybe.
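As a quick sanity check, here is a short numerical verification (my own sketch, not from the text) that $cB = (v/c)\,E$ for a charge moving perpendicular to the line of sight. The charge, speed, and distance values are arbitrary illustrative choices.

```python
# Numeric check that cB = (v/c)E for a single moving charge,
# with v perpendicular to r-hat so that |v x r-hat| = v.
from math import pi

eps0 = 8.854e-12      # vacuum permittivity, F/m
mu0 = 4 * pi * 1e-7   # vacuum permeability, T*m/A
c = 1 / (mu0 * eps0) ** 0.5

q = 1.6e-19   # proton charge, C
v = 3e6       # speed, m/s (assumed)
r = 1e-10     # distance from the charge, m (assumed)

E = q / (4 * pi * eps0 * r**2)       # Coulomb field magnitude
B = mu0 * q * v / (4 * pi * r**2)    # single-particle Biot-Savart magnitude

# cB and (v/c)E agree: B is "E-like stuff" scaled by v/c.
print(c * B, (v / c) * E)
```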
Anyway, these ruminations are things I want, and hope, students see on their own but all too often I find they, at least in my case, have difficulty even engaging at a minimal level. I struggle with this, and like to think and hope that maybe it’s because they don’t see the beauty. That’s why I nudge them in these new and different directions. Like I said above, it may very well be our fault.
Feedback is welcome as always.