# Matter & Interactions II, Week 12

We’re hanging out in chapter 19 looking at the properties of capacitors in circuits.

In response to my (chemist) department chair’s accusation that I’m not rigorous enough in my teaching of “the scientific method” as it’s practiced in chemistry, I just had “the talk” with the class about “THE” scientific method and how it doesn’t exist. I will never forget Dave McComas (IBEX) telling the audience at an invited session I organized at AAPT in Ontario (CA) that we MUST stop presenting “the scientific method” as it is too frequently presented in textbooks, because it simply does not reflect how science works. No one hypothesizes a scientific discovery. Once a prediction is made and experimentally (or observationally, in the case of astronomy) verified, that’s not a discovery, because the outcome was expected. Even if the prediction isn’t verified, one of the known possible outcomes is that the prediction is wrong. There’s nothing surprising here. True discoveries happen when we find something we had no reason to expect to be there in the first place. The Higgs boson? Not a discovery, because it was predicted some forty years ago and we only recently had the technology to test for its presence. I don’t think anyone honestly expected it not to be found, but I think many theoretical particle physicists (not so) secretly hoped it wouldn’t be found, because then we would have actually learned something new (namely, that the standard model has problems).

The “scientific method” simply doesn’t exist as a finite numbered sequence of steps whose ordering is the same from discipline to discipline. Textbooks need to stop presenting it that way. Scientific methodology is more akin to a carousel upon which astronomers, chemists, physicists, geologists, or biologists (and all the others I didn’t specify) jump at different places. Observational astronomers simply don’t begin by “forming an hypothesis” as too many overly simplistic sources may indicate. Practitioners in different disciplines begin the scientific process at different places by the very nature of their disciplines, and I don’t think there’s a way to overcome that.

Rather than a rote sequence of steps, scientific methodology should focus on validity through testability and falsifiability. I know there are some people who think that falsifiability has problems, and I acknowledge them. However, within the context of introductory science courses, testability and falsifiability together form a more accurate framework for how science actually works. This is the approach I have been taking for over a decade in my introductory astronomy course. It is not within my purview to decide what is and is not appropriate for other disciplines, like chemistry. My chemist colleagues can present scientific methodology as they see fit. I ask for the same respect in doing so within my disciplines (physics and astronomy).

I now consider “the scientific method” to have been adequately “covered” in my calculus-based physics course.

Feedback welcome as always.

# Matter & Interactions II, Week 11

More with circuits, and this time capacitors, and the brilliantly simple description M&I provides for their behavior. In chapter 19, we see that traditional textbooks have misled students in a very serious way regarding the behavior of capacitors. Those “other” textbooks neglect fringe fields. Ultimately, and unfortunately, this means that capacitors should not work at all! The reason becomes obvious in chapter 19 of M&I. We see that in a circuit consisting of a charged capacitor and a resistor, it’s the capacitor’s fringe field that initiates the redistribution of surface charge that, in turn, establishes the electric field inside the wire that drives the current. The fringe field plays the same role that a battery’s field plays in a circuit with a flashlight bulb and battery: it initiates the transient interval of charge redistribution. As you may have already guessed, the capacitor’s fringe field is also what stops the charging process for an (initially) uncharged capacitor in series with a battery. As the capacitor charges, its fringe field increases and counters the electric field of the redistributed surface charges, thus decreasing the net field with time. If we want functional circuits, we simply cannot neglect fringe fields.
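Since M&I leans on computation anyway, here is a minimal iterative sketch of that charging story. This is my own toy model, not code from the text, and every parameter value is made up purely for illustration: as charge builds on the capacitor, its field increasingly opposes the battery’s contribution, so the net field in the filament, and hence the current, decays toward zero.

```python
# Toy model (not from M&I): a battery charging a capacitor through a
# filament. As charge Q builds on the capacitor, its field increasingly
# opposes the battery's contribution, so the net field in the filament,
# and hence the current, decays toward zero. All values are illustrative.

q = 1.6e-19   # magnitude of the electron charge (C)
n = 8.5e28    # mobile electron number density (1/m^3)
A = 1e-8      # filament cross-sectional area (m^2)
u = 7e-5      # electron mobility ((m/s) per (V/m))
L = 0.01      # filament length (m)
emf = 1.5     # battery emf (V)
C = 1e-6      # capacitance (F)

Q = 0.0       # charge on one capacitor plate (C)
dt = 5e-8     # time step (s)

for step in range(200):
    E_net = (emf - Q / C) / L   # net field in the filament (V/m)
    i = q * n * A * u * E_net   # conventional current, i = |q| n A u E (A)
    Q += i * dt                 # accumulated charge opposes the battery

# By the end of the loop, Q is essentially C * emf and the current has
# nearly vanished: the capacitor's own field has shut the process down.
```

Nothing here needed a resistance or Ohm’s law; the shutdown falls out of charged particles responding to the net field.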

Ultimately, the M&I model for circuits amounts to the reality that a circuit’s behavior is entirely due to surface charge redistributing itself along the circuit’s surface in such a way as to create a steady state or a quasisteady state. It’s just that simple. You don’t need potential difference. You don’t need resistance. You don’t need Ohm’s law. You only need charged particles and electric fields.

One thing keeps bothering me though. Consider one flashlight bulb in series with a battery. The circuit draws a certain current, $i_1$. Now, consider adding nothing but a second, identical flashlight bulb in parallel with the first one. Each bulb’s brightness should be very nearly the same as that of the original bulb. The parallel circuit draws twice the current of the original lone bulb, $i_2 = 2i_1$, but that doubled current is divided equally between the two parallel flashlight bulbs. That’s all perfectly logical, and I can correctly derive this result algebraically. I end up with a factor of 2 multiplying the product of either bulb’s filament’s electron number density, cross sectional area, and electron mobility.

$i_2 \propto 2nAu$

My uneasiness is over the quantity to which we should assign the factor of 2. A desktop experiment in chapter 18 establishes that we get a greater current in a wire when the wire’s cross sectional area increases. Good. However, in putting two bulbs in parallel, is it really obvious that the effective cross sectional area of the entire circuit has doubled? It’s not so obvious to me, because the cross sectional area can only double by virtue of adding an identical flashlight bulb in parallel with the first one. Unlike the experiment I mentioned, nothing about the wires in the circuit changes. Adding a second bulb surely doesn’t change the wire’s mobile electron number density; that’s silly. Adding a second bulb also surely doesn’t change the wire’s electron mobility; that’s equally silly. Well, that leaves the cross sectional area to which we could assign the factor of 2, but it’s not obvious to me that this is so obvious. One student pointed out that the factor of 2 probably shouldn’t be thought of as “assigned to” any particular variable but rather to the quantity $nAu$ as a whole. This immediately reminded me of the relativistic expression for a particle’s momentum, $\vec{p} = \gamma m \vec{v}$, where, despite stubborn authors who refuse to actually read Einstein’s work, the $\gamma$ applies to the quantity as a whole and not merely to the mass.
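The bookkeeping above can be sketched numerically (illustrative parameter values, not measurements of any real bulb). The loop rule gives each parallel filament the same steady-state field, and the node rule then doubles the current in the shared wires without doubling $n$, $A$, or $u$ in either filament, which is the student’s point in code form:

```python
# Sketch: two identical bulbs in parallel draw i2 = 2*i1.
# Illustrative values only; nothing here is a measured bulb parameter.

n = 8.5e28   # mobile electron number density in one filament (1/m^3)
A = 1e-10    # cross-sectional area of ONE filament (m^2)
u = 5e-5     # electron mobility ((m/s) per (V/m))
E = 100.0    # steady-state field in each filament (V/m), set by the battery

i_1 = n * A * u * E      # electron current through the lone bulb

# Loop rule: each parallel filament sees the same field E, so each
# carries i_1. Node rule: the shared wires carry the sum, so the 2
# multiplies the whole quantity n*A*u*E rather than any one factor.
i_2 = 2 * (n * A * u * E)

assert i_2 == 2 * i_1
```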

So, my question boils down to whether or not there is an obvious way to “assign” the factor of 2 to the cross sectional area. I welcome comments, discussion, and feedback.

# Matter & Interactions II, Week 10

Chapter 18. Circuits. You don’t need resistance. You don’t need Ohm’s law. All you need is the fact that charged particles respond to electric fields created by other charged particles. It’s just that simple.

When I took my first electromagnetism course, I felt stupid because I never could just look at a circuit and tell what was in series and what was in parallel. And the cube of resistors…well, I still have bad dreams about that. One thing I know now that I didn’t know then is that according to traditional textbooks, circuits simply should not work. Ideal wires don’t exist, and neither do ideal batteries nor ideal light bulbs. Fringe fields, however, do indeed exist, and capacitors just wouldn’t work without them. So basically, I now know that the traditional textbook treatment of circuits is not just flawed, but deeply flawed to the point of being unrealistic.

Enter Matter & Interactions. M&I’s approach to circuits invokes the concept of a surface charge gradient to establish a uniform electric field inside the circuit, which drives the current. This was tough to wrap my brain around at first, but now I really think it should be the new standard mainstream explanation for circuits in physics textbooks. The concept of resistance isn’t necessary. It’s there, but not in its usual macroscopic form. M&I treats circuits from a purely microscopic point of view, with fundamental parameters like mobile electron number density, electron mobility, and conductivity, and with geometry in the form of wire length and cross sectional area. Combine these with charge conservation (in the form of the “node rule”) and energy conservation per charge (in the form of the “loop rule”) and that’s all you need. That’s ALL you need. No more “total resistance” and “total current” nonsense either. In its place is a tight, coherent, and internally consistent framework where the sought-after quantities are the steady state electric field in each part of the circuit and the resulting current in each part. No more remembering that series resistors simply add and parallel resistors add reciprocally. Far more intuitive is the essentially directly observable fact that putting resistors in series is effectively the same as increasing the filament length, and putting resistors in parallel is effectively the same as increasing the circuit’s cross sectional area. It’s so simple, like physics is supposed to be.
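As a concrete (and entirely made-up) example of how far the node rule and loop rule alone can take you, here is a sketch for two different filaments in series with a battery. The unknowns are the steady-state fields in each filament; no resistance appears anywhere. The numbers are illustrative, not a worked example from M&I:

```python
# Sketch (illustrative numbers, not a worked example from M&I): two
# filaments in series with a battery. Unknowns: steady-state fields E1, E2.
#   node rule: n*A1*u1*E1 = n*A2*u2*E2   (same electron current everywhere)
#   loop rule: E1*L1 + E2*L2 = emf       (field in the thick wires neglected)

n = 8.5e28                         # mobile electron number density (1/m^3)
A1, u1, L1 = 1e-10, 5e-5, 0.005    # filament 1: area, mobility, length
A2, u2, L2 = 2e-10, 5e-5, 0.005    # filament 2: twice the area
emf = 3.0                          # battery emf (V)

ratio = (A1 * u1) / (A2 * u2)      # node rule gives E2 = ratio * E1
E1 = emf / (L1 + ratio * L2)       # substitute into the loop rule
E2 = ratio * E1

i = n * A1 * u1 * E1               # the single steady-state electron current

# The fatter filament needs only half the field to carry the same current.
assert abs(E1 - 2 * E2) < 1e-9
```

Two conservation statements, two unknown fields, one current: that really is all there is to it.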

Of course, in the next chapter (chapter 19) the traditional “Ohm’s law” model of circuits is seen to be emergent from chapter 18’s microscopic description, but honestly, I see no reason to dwell on this. Most of my students are going to become engineers anyway, and they’ll have their own yearlong circuits courses in which they’ll learn all the necessary details from the engineering perspective. For now, they’re much better off understanding how circuits REALLY work. If they do, they’ll be far ahead of where I was in their shoes as an introductory student, and they’ll have a deeper understanding than anyone else in their classes after transferring. That’s my main goal, after all.

Feedback welcome.

# Conceptual Understanding in Introductory Physics XXVIII

You may not agree that the topic(s) of this question belong in an introductory calculus-based physics course, but I’m going to pretend they do for the duration of this post. Gradient, divergence, and curl are broached in Matter & Interactions within the context of electromagnetic fields. Actually, gradient appears in the mechanics portion of the course.

One problem with these three concepts, especially divergence and curl, is the distinction between their actual definitions and how they are calculated. The former are rarely, if ever, seen at the introductory level and usually first appear in upper level courses. However, some authors [cite examples here] replace the physical definitions with the mathematical symbols invented by Heaviside and Gibbs to represent the calculation of these quantities. In other words, the divergence of $\mathbf{A}$ is frequently defined as $\nabla\cdot\mathbf{A}$ and the curl of $\mathbf{A}$ is frequently defined as $\nabla\times\mathbf{A}$. These should be treated as nothing more than symbols representing their respective physical quantities and should not be taken as equations for calculation. If one insists on keeping this notation, then the dot and cross should at least be kept with the nabla symbol, so that $\nabla\cdot$ represents divergence and $\nabla\times$ represents curl. Either way, these are operators that operate on vectors; their symbols should reflect that concept and should be interpreted as such, not as recipes for calculation. This book by Tai was extremely helpful in getting this point across to me.

Gradient has its own unique problem in that some sources claim that one can only take the gradient of a scalar, which is patently false. One can indeed take the gradient of, for example, a vector, but the object one gets back is not itself a vector. If we adopt a unified approach to vector algebra and vector calculus, we find that there are patterns associating the operand and the result when using these vector operators. For example, operating on a vector with $\nabla$ doesn’t produce a vector; it produces a second-rank tensor. This is one reason I would love to find a way to bring this approach into the introductory course. So many things would be unified.
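To make that rank-raising pattern concrete, here is a sketch in Cartesian index notation (the indices are used only to display the pattern; the underlying statement is coordinate-free):

$(\operatorname{grad} f)_i = \partial_i f$ — the operand is a scalar (rank 0), the result is a vector (rank 1);

$(\operatorname{grad} \mathbf{A})_{ij} = \partial_i A_j$ — the operand is a vector (rank 1), the result is a second-rank tensor.

Each application of the gradient raises the tensor rank of its operand by one, which is exactly the unifying pattern I mean.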

But now, on to the questions I want to ask here.

(a) Write a conceptual definition of gradient in words.

(b) Write a mathematical definition of gradient that does not depend on any particular coordinate system. You must not use the nabla symbol.

(c) Write a conceptual definition of divergence in words.

(d) Write a mathematical definition of divergence that does not depend on any particular coordinate system. You must not use the nabla symbol.

(e) Write a conceptual definition of curl in words.

(f) Write a mathematical definition of curl that does not depend on any particular coordinate system. You must not use the nabla symbol.

Go!

(Note: I need to revisit this post in the future to make sure the notion of applying gradient to a vector quantity can be handled in the coordinate-free way I have in mind. My intuition is that it can be, but I need to work out some details.)

# Angular Quantities II

In this post, I will address the first question on the list in the previous post. What exactly does it mean for something to be a vector?

In almost every introductory physics course, vectors are introduced as “quantities having magnitude and direction” and are eventually equated to graphical arrows. A vector is neither of these, but is something far more sophisticated. Remember that I’m coming at this as a physicist, not a pure mathematician. I will probably get more than a few things incorrect. Let me know if/when that happens. Let me see if I can present this at a level suitable for an introductory calculus-based physics course. Imagine you walk into class on the first day and start talking. Here goes.

We live in a Universe that has measurable properties, and that contains physical entities that also have measurable properties. A lot of physics consists of attempting to measure, and thus quantify, these properties (experiment). More important to some physicists is describing these properties mathematically and making predictions about them (theory) rather than attempting to measure them. We can invent mathematical objects to represent these measurable properties. The word represent is important here, because the mathematical object representing an entity is not the same thing as the entity itself. These mathematical objects themselves have properties, and these properties allow us to manipulate the objects so as to use them to make predictions about Nature.

The properties possessed by the mathematical objects we use to describe Nature collectively form something with a very strange name: a vector space. That sounds very technical and complicated. It is indeed a very technical term because it means something profound. However, as I will try to convince you now, it is not necessarily complicated at all.

I will use bold symbols (e.g. $\mathbf{u}, \mathbf{v}, \mathbf{w}$ etc.) to represent mathematical objects with the properties that collectively form a vector space. These mathematical objects have a generic name: vectors. Yes, that’s their name. Note that there is nothing at all here to do with arrows or anything else really. Vectors are nothing more than mathematical objects with properties that let us model and make predictions about the properties of the Universe we observe and try to understand in Nature. Be careful to understand that there are two sets of properties here, those of the Universe and its inhabitant entities, and those of the mathematical objects we use to represent those things. I’m not saying this is the best way to describe this, but it’s a start.

I will use italic symbols (e.g. $a, b, c$ etc.) to represent ordinary numbers you are already familiar with. Technically, they are real numbers, and every math course you have ever taken has used them whether or not you knew they had a name.

• In a vector space, addition is a closed operation.

If $\mathbf{u}$ and $\mathbf{v}$ are vectors then $\mathbf{u}+\mathbf{v}$ is also a vector.

Now consider scalar multiplication. You’ve known how to multiply real numbers for a long time, and again, there isn’t much new to see here. Multiplying a scalar and a vector gives another vector. We will explore the geometric implication of this later. Like vector addition, scalar multiplication is a closed operation.

• In a vector space, scalar multiplication is a closed operation.

If $\mathbf{w}$ is a vector and $c$ is a scalar, then $c\mathbf{w}$ is also a vector.

Here is a list of remaining properties that define a vector space.

• In a vector space, addition is commutative, meaning that the order of the vectors being added doesn’t matter.

$\mathbf{a}+\mathbf{b}=\mathbf{b}+\mathbf{a}$

• In a vector space, addition is associative, meaning vectors can be grouped in any way as long as the order isn’t changed.

$(\mathbf{u}+\mathbf{v})+\mathbf{w} = \mathbf{u}+(\mathbf{v}+\mathbf{w})$

• In a vector space, scalar multiplication is associative. The things you’re multiplying can be grouped differently as long as their order isn’t changed. You get the same vector either way. Cool!

$a(b\mathbf{c}) = (ab)\mathbf{c}$

• In a vector space, when you have the sum of two scalars multiplying a vector, the thing you get back is the sum of each scalar multiplying that vector.

$(a+b)\mathbf{c}=a\mathbf{c}+b\mathbf{c}$

• In a vector space, scalar multiplication is distributive over vector addition. Some authors equivalently say that vector addition is linear. Both of these mean the same thing, but I think the second way of saying it is more important, and I will try to show why later. When you have a scalar multiplying the sum of two vectors, the vector you get back is the sum of that scalar multiplying each vector separately.

$a(\mathbf{b}+\mathbf{c})=a\mathbf{b}+a\mathbf{c}$

• In a vector space, there is a multiplicative identity element such that multiplying any vector by it gives the same vector back. This effectively defines a unity element, commonly called 1 (one). This is important because sometimes we can exploit what I like to call a “sneaky 1” to help manipulate a mathematical expression. More on that when we need it.

$1\mathbf{u} = \mathbf{u}$

• In a vector space, there is an additive identity element such that adding it to any vector gives that same vector back as the sum. This is effectively a definition of a zero vector. Seeing zero written this way (as a vector) may seem strange, but you will get used to it.

$\mathbf{b} + \mathbf{0} = \mathbf{b}$

• In a vector space, every vector has an additive inverse element such that adding the two gives the additive identity (zero) element. For any vector $\mathbf{v}$ we have a vector $-\mathbf{v}$ such that the two sum to zero. Do not think of the $-$ sign as subtraction. Think of it as merely a symbol that turns the vector into its additive inverse.

$\mathbf{v}+(-\mathbf{v}) = \mathbf{0}$

We’re done. That’s it. These properties collectively and operationally define a vector space that is inhabited by mathematical objects called vectors. These properties also define the things we can do to manipulate vectors. Note there is no mention of subtraction, and there is no mention of division. There is vector addition and scalar multiplication. That’s all there is. This is really simple! Also note there is no mention of magnitude, direction, arrows, components, dot products, or cross products. If you don’t know what some of those terms mean, don’t worry. We will define them later.

Let me now convince you that you have dealt with vector spaces and vectors for many years and didn’t realize it. Consider the real numbers (that’s all positive numbers, negative numbers, and zero regardless of whether they’re rational or not, and regardless of whether they’re integers or not). Do they meet each and every one of the properties above? To convince yourself that they do, go through them one by one. Does adding two real numbers give a real number? Yes (3.2 + 5.9 = 9.1). Does adding 0 to 5 give 5? Yes. Does adding 6 to -6 give 0? Yes. You can do the rest. Therefore, I claim that without knowing it, you have been using vector spaces and vectors all along!

Now, let me ask you a new question. Consider only the natural numbers. Recall that these numbers are the ones you use for counting, and you’ve probably been using them longer than you’ve been using real numbers! Do the natural numbers (counting numbers) form a vector space, with each number being a vector? I will tell you that the answer is no, they do not, but I don’t want you to take my word for it. Go through each of the above properties one by one using counting numbers and see if you can convince yourself that these numbers do not inhabit a vector space.
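If you want to cheat a little with a computer, here is a minimal sketch (my own, with Python floats standing in for real numbers) that spot-checks a few of the axioms and shows exactly where the counting numbers fail:

```python
# Spot-checking a few vector space axioms numerically. Floats stand in
# for the real numbers; the counting numbers fail to supply an additive
# identity or additive inverses.

def is_natural(x):
    """Membership test for the counting numbers 1, 2, 3, ..."""
    return isinstance(x, int) and x >= 1

# Real numbers: closure, additive identity, and additive inverses all hold.
assert 3.25 + 5.5 == 8.75        # closure: the sum is another real number
assert 5.0 + 0.0 == 5.0          # additive identity
assert 6.0 + (-6.0) == 0.0       # additive inverse

# Natural numbers: closure under addition holds...
assert is_natural(3 + 4)
# ...but 0 is not a counting number (no additive identity),
assert not is_natural(0)
# ...and -5 is not a counting number (no additive inverses).
assert not is_natural(-5)
```

Checking the remaining axioms the same way is a good exercise; the inverse axiom alone is already enough to sink the counting numbers.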

This is a physics class, so let’s get more physicsy. In physics, as in all science, we use a system of units called the SI System. All scientists know about this system of units, but some subdisciplines (e.g. astrophysics) don’t use it yet. I hope this changes, because it will make many things simpler, but I digress. The SI System consists of seven independent fundamental units that represent seven fundamental quantities: mass, length (I prefer spatial displacement), time (I prefer temporal displacement), thermodynamic temperature, amount, luminous intensity, and electric current. All physically measurable properties in our Universe can be expressed in various combinations of these seven fundamental quantities and their units. Your question is: Do these seven fundamental quantities form a vector space? What a weird question! Still, it’s one you can address by, again, working your way through the defining properties of a vector space given above. See what you can come up with.

This may seem a very strange way to begin introductory physics, and it is! It’s strange, but I hope it will help get you to a place where your understanding is deeper than it would be had we begun in a traditional way. Accept the strangeness and discomfort you feel right now, and then let it go. There’s much learning to be done, and it starts here.

# Angular Quantities I

This is the first in a series of posts in which I want to share some hopefully interesting things about mathematical descriptions of rotational motion. This series was inspired by a talk given at the 2015 winter AAPT meeting in San Diego. The author claimed to have found a way to represent angular displacement as a vector (true, such an expression exists and is not widely used) and that angular displacements commute (false, in general they do not, except when infinitesimal). The same author presented an updated poster on this topic at the recent winter meeting in Atlanta. In researching the arguments presented in these two talks, following up on the references therein, and in searching the undergraduate and graduate physics and mathematics teaching literature on descriptions of angular quantities, I stumbled onto some of the most interesting topics I’ve ever encountered. As you may have already guessed, I want to find ways of bringing these gems of understanding into the introductory courses so students won’t be so mystified when they encounter them in upper level courses. By the way, the papers from these talks aren’t available online; I only have paper copies and I do not have the author’s permission to distribute them.

I am sure most of this will be trivial for many readers, so apologies in advance. Even though I too studied out of Goldstein in grad school, it was not the case that all my existing conceptual mysteries were solved. As always, I tend to frame things from the point of view of that introductory physics student for whom we want to provide an unparalleled physics experience. I don’t want that student to ever say, “Well, that was never pointed out to me in intro physics.” I want that student’s conceptual foundation to be better than mine was when I was that student.

In this initial post, I will list as many of the questions I can think of that arose as I researched this topic. I will not answer any of them in this post, but will attempt to do that in subsequent posts. I will put the questions into some preliminary order, but I can’t guarantee that order won’t change later. Some questions may change to more accurately reflect what I’m trying to explain.

1. What does it mean to be a vector?
2. What do vector dot products and vector cross products mean geometrically?
3. What is the physical significance of the double cross product (aka triple cross product)?
4. Is there a coordinate free expression for the total time derivative of a vector?
5. Is there a coordinate free expression for the time derivative of a unit vector (a direction)?
6. Can angular velocity be described as a vector?
7. Can angular displacement be described as a vector?
8. If work is calculated as the dot product of two vectors, then when calculating rotational work how can angular displacement not be a vector?
9. If angular velocity is a vector, shouldn’t its integral also be a vector and not a scalar?
10. Why does translational displacement commute?
11. How, if at all, are translation and rotation (revolution?) related?
12. Why do infinitesimal angular displacements commute?
13. Why do finite angular displacements not commute?
14. What is the distinction between rotation and orientation?
15. Is angular velocity the derivative of a rotation?
16. So then what is angular velocity the derivative of anyway?
17. Can angular velocity be integrated to get angular displacement?
18. Can these ideas be brought into the introductory calculus-based or algebra-based physics courses?

I think that’s all, at least for now. I don’t claim this list to be comprehensive. The number of questions isn’t significant either. Let’s see where this goes.

# Matter & Interactions II, Week 9

This week was a very short week consisting of only two days. We met as usual on Monday, but Tuesday was a “flip day” and ran as a Friday. This class doesn’t meet on Fridays so we only had one day this week, and we devoted it to tying up loose ends from chapter 17.

Next week, barring losing days to winter weather as I sit here and watch the forecast deteriorate, we will hit circuits the M&I way!

In an interesting development, I was informed by my coworker (a PhD physicist) that our department chair had approached him Tuesday morning to ask if he would like to take my calculus-based physics courses from me next year, on the grounds that I’m not rigorous enough. Needless to say, I was shocked, because the chair had not mentioned this to me and indeed has not spoken to me about it at all. Had my coworker not told me, I would not have known.

My chair, a PhD chemist, seems to think that because M&I emphasizes computation over traditional labs, and that what labs we do are not as rigorous as chemistry labs, either M&I or I or perhaps both are not appropriate for our students and indeed may be causing them to be ill prepared for transfer to universities. Of course this is all nonsense, but my chair actually said to my face that she knows more about computation, theory, and experiment than I do and that labs must be done the “chemistry way” or they’re not valid. If this weren’t so disgustingly true, it would be mildly funny. It’s not funny. It’s true.

I don’t know what I’m going to do, but it’s clear both M&I and I are probably on our way out at my current institution. My colleague (who, by the way, has no interest in teaching calculus-based physics) and I are both exploring numerous options, including leaving for another institution.