# Did Feynman Invent Feynman Notation?

In section 27-3 of The Feynman Lectures on Physics, Feynman describes a notation for manipulating vector expressions in a way that endows nabla with the property of following a rule similar to the product rule with which our introductory calculus students are familiar. It allows a vector expression with more than one variable to be expanded as though nabla operates on one variable while the other is held constant. The vector being differentiated is indicated with a subscript on nabla. Feynman’s equation 27.10 shows how this is written, and it is rather like treating the subscripted nablas as partial derivative operators. Feynman’s equation 27.11 shows the resulting vector identity for the divergence of a cross product. In between these two equations, Feynman explains that the subscripted nabla can be manipulated as though it were a vector (it is not) according to the rules of dot products (commutative), cross products (anticommutative), triple scalar products (cyclic permutation, swapping dots and crosses, etc.), and triple vector products (BAC-CAB, Jacobi identity, etc.). The strategy is to end up with only one vector (the one corresponding to a subscript) immediately to the right of each correspondingly subscripted nabla. Then you drop the subscripts, and you should have a valid vector identity. In the audio version of this lecture, Feynman comments that he doesn’t understand why this technique isn’t taught. It was never shown to me as either an undergraduate or graduate student. I suspect it’s treated as “one of those things” students are simply assumed to pick up at one point or another without it ever being explicitly addressed (much like critical thinking is treated).
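As a concrete check, the identity Feynman arrives at in equation 27.11, ∇·(A×B) = B·(∇×A) − A·(∇×B), can be verified symbolically. Here is a minimal sketch using sympy; the component fields are arbitrary polynomials I chose for the test, not anything from Feynman’s text:

```python
# Symbolic check of the divergence-of-a-cross-product identity
# (Feynman's eq. 27.11): div(A x B) = B . curl(A) - A . curl(B).
# The component fields are arbitrary polynomials chosen for the test.
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([x*y, y*z**2, z*x])
B = sp.Matrix([y**2, x*z, x + y*z])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

lhs = div(A.cross(B))
rhs = B.dot(curl(A)) - A.dot(curl(B))
print(sp.simplify(lhs - rhs))  # 0 confirms the identity
```

The same script makes a nice classroom exercise: have students swap in their own component fields and watch the difference simplify to zero every time.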

The issue here, for me, is whether or not Feynman invented this way of manipulating vector expressions. After all, the notation carries his name, so it might be reasonable to assume he invented the underlying method. My research shows that a very similar methodology is documented in the very first (as far as I know) textbook on vector analysis, Wilson’s Vector Analysis: A Text-Book for the Use of Students of Mathematics and Physics. This is the famous work based on Gibbs’ lecture notes and is the definitive early work on modern vector analysis. I continue to be surprised at how few people have consulted it (based on my asking whether or not they have). I offer the PDF version to my physics students in the hope they will use it in their studies. Chapter 3 is on the differential calculus of vectors, and section 74 on page 159 begins a presentation of using nabla as a “partial” operator in an expression, operating on only one vector while holding another constant. Wilson introduces a subscript notation that, unlike Feynman’s, indicates which vector is held constant for a differentiation.

This brings to mind the question of whether Feynman was aware of Wilson’s textbook and the method documented therein, and decided to change the nature of the subscript to show what is differentiated rather than what is held constant. I don’t see any way to know for sure, but it’s an interesting question to me because I suspect many students are not aware of Wilson’s textbook.

Wilson shows many worked examples on subsequent pages. Section 75 on page 161 shows more examples and consequences of this technique leading to a statement on page 162 that blows my mind! In the paragraph immediately surrounding equation (47) we see the following:

> If u be a unit vector, say a, the formula [referring to equation (47)] expresses the fact that the directional derivative [expression omitted] of a vector function v in the direction a is equal to the derivative of the projection of the vector v in that direction plus the vector product of the curl of v into the direction a.

Wow! This means that applying nabla as a partial operator leads to something of geometrical significance, which to me constitutes a new identity in itself. The lefthand side of Wilson’s equation (47) can be interpreted as the dot product of vector a and the gradient of vector v (a second rank tensor). My last post asks how the righthand side follows geometrically from that, something I’ve never seen in the literature.
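In modern notation, I read Wilson’s statement as (a·∇)v = ∇(a·v) + (∇×v)×a for a constant unit vector a. A quick symbolic check of that reading (my own sketch in sympy; the field v and the particular unit vector are arbitrary choices):

```python
# Check of Wilson's eq. (47), read as: (a . nabla)v = grad(a . v) + curl(v) x a,
# with a held as a constant unit vector. The field v is an arbitrary choice.
import sympy as sp

x, y, z = sp.symbols('x y z')
a = sp.Matrix([sp.Rational(1, 3), sp.Rational(2, 3), sp.Rational(2, 3)])  # unit vector
v = sp.Matrix([x*y*z, y**2 + z, x*z**2])

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# (a . nabla) v, computed componentwise as a . grad(v_i)
lhs = sp.Matrix([a.dot(grad(v[i])) for i in range(3)])
rhs = grad(a.dot(v)) + curl(v).cross(a)
print(sp.simplify(lhs - rhs).T)  # zero vector confirms the identity
```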

Tai’s recent book on vector and dyadic analysis presents what he calls the “method of symbolic vector,” which seems to formalize both Wilson’s and Feynman’s methods. The idea is that nabla is temporarily treated as a vector (with a new symbol), and any expression in which it appears can be symbolically manipulated according to all the rules of vector analysis to end up with a valid identity when nabla is once again treated as a differential operator (and restored to its rightful symbol). Tai definitely knew about Wilson’s text, as he references it frequently and devotes a considerable number of pages to commentary on Gibbs’ choices of notation (e.g., Gibbs’ use of a dot product as a symbol for divergence despite divergence not being defined as a dot product at all, and similarly his use of a cross product as a symbol for curl despite curl not being defined as a cross product). Tai refers to Feynman only once, at the bottom of page 147 and continuing onto page 148, but the reference is vague.

Regardless of who initially invented the use of nabla as a partial operator, I feel we need to expose students to this as early as possible as part of a stronger foundation in classical vector analysis than they currently get in the introductory courses.

# HELP! A Stubborn Vector Identity to Understand

Over the past three years or so, I have been researching the history and implementation of Gibbsian vector analysis with the intent of finding ways to incorporate it more thoroughly and more meaningfully into introductory calculus-based physics (possibly algebra/trig-based physics too). Understanding the usual list of vector identities has been part of this research. One vector identity that has frustrated me involves probably the most innocent looking quantity, the gradient of the dot product of two vectors. I have seen no fewer than five different expressions for the expansion of this seemingly harmless quantity. Here they are.

Now, equation (1) uses Feynman notation, which endows the nabla operator with the property of obeying the Leibniz rule, or the product rule, for derivatives. The subscript refers to the vector on which the nabla operator is operating while the other vector is treated as a constant. Note that in chapter 3 of Wilson’s text based on Gibbs’ lecture notes, the subscript denotes which vector is to be held constant, precisely the opposite of the way Feynman presents it. Equation (1) is merely an alternative way of writing the lefthand side and offers nothing new algebraically.

Equation (2) shows nabla operating on each vector in the dot product, which is something many students never see. As I was told years ago, students are told that one can only take the gradient of a scalar and not of a vector, which is patently false. The twist is that, unlike the gradient of a scalar, the gradient of a vector is not a vector; it is a second rank tensor, which can be represented by a matrix. This tensor, and its matrix representation, is also called the Jacobian. The dot product of this tensor with a vector gives a vector, so equation (2) is consistent with the fact that the lefthand side must be a vector. I can derive this expression using index notation.
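Since the equations themselves aren’t reproduced here, the following sketch assumes the Gibbs convention for the gradient of a vector, (∇A)ᵢⱼ = ∂Aⱼ/∂xᵢ, under which (∇A)·B becomes JᴬᵀB with Jᴬ the usual Jacobian matrix. With that assumption, the Jacobian form ∇(A·B) = (∇A)·B + (∇B)·A checks out symbolically for arbitrary polynomial fields:

```python
# Sketch of the Jacobian form of grad(A . B), assuming the Gibbs convention
# (grad A)_{ij} = dA_j/dx_i, so that (grad A) . B becomes J_A^T B, where
# J_A is the usual Jacobian matrix J_A[i, j] = dA_i/dx_j.
import sympy as sp

x, y, z = sp.symbols('x y z')
r = sp.Matrix([x, y, z])
A = sp.Matrix([x*y, y*z, z*x])      # arbitrary polynomial fields
B = sp.Matrix([x + z, x*y*z, y**2])

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])

JA = A.jacobian(r)   # JA[i, j] = dA_i / dx_j
JB = B.jacobian(r)

lhs = grad(A.dot(B))
rhs = JA.T * B + JB.T * A   # (grad A) . B + (grad B) . A
print(sp.simplify(lhs - rhs).T)  # zero vector
```

In index notation this is just the product rule: ∂ᵢ(AⱼBⱼ) = (∂ᵢAⱼ)Bⱼ + (∂ᵢBⱼ)Aⱼ.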

Equation (3) is equation (2) written in (a very slight variation of) matrix notation (the vectors are written as vectors and not as column matrices). I don’t think there is anything more to it.

Equation (4) is the traditional expansion of the lefthand side. It is derived from the BAC-CAB rule, with suitable rearrangements to make sure nabla operates on one vector in each term. Two such applications give equation (4). The “reverse divergence” operators are actually directional derivatives operating on the vectors immediately to the right of each operator. I can derive this expression using index notation.
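The traditional expansion described here is, I believe, the standard identity ∇(A·B) = (A·∇)B + (B·∇)A + A×(∇×B) + B×(∇×A); assuming that is the form of equation (4), it can be checked symbolically as well. In matrix terms the directional-derivative (“reverse divergence”) operator (A·∇) acting on B is the Jacobian of B applied to A:

```python
# Check of the traditional BAC-CAB expansion (assumed form of eq. (4)):
# grad(A . B) = (A . nabla)B + (B . nabla)A + A x curl(B) + B x curl(A).
# Arbitrary polynomial fields; (A . nabla)B is J_B * A in matrix form.
import sympy as sp

x, y, z = sp.symbols('x y z')
r = sp.Matrix([x, y, z])
A = sp.Matrix([x*y, y*z, z*x])
B = sp.Matrix([x + z, x*y*z, y**2])

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

lhs = grad(A.dot(B))
rhs = (B.jacobian(r) * A          # (A . nabla) B
       + A.jacobian(r) * B        # (B . nabla) A
       + A.cross(curl(B))
       + B.cross(curl(A)))
print(sp.simplify(lhs - rhs).T)  # zero vector
```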

Equation (5) is shown in problem 1.8.12 on page 48 of Arfken (6th edition). It has the advantage of using the divergences of the two vectors, which I think are easier to understand than the “reverse divergence” operators in equation (4). However, the “reverse curl” operators are completely new to me and I have never seen them in the literature anywhere other than in this problem in Arfken. I think this equation can be derived from equation (4) by appropriately manipulating the various dot and cross products. I have not yet attempted to derive this expression with index notation.

Now, many questions come to mind. I have arranged the first and second terms on the righthand sides of equations (4) and (5) to correspond to the first term on the righthand sides of equations (1), (2), and (3). Similarly, the third and fourth terms on the righthand sides of (4) and (5) correspond to the second term on the righthand sides of equations (1), (2), and (3). By comparison, this must mean that somehow from the gradient (Jacobian) of a vector come both a dot product and a triple cross product. How can this be?

How can the gradient (Jacobian) of a vector be decomposed into a dot product and a triple cross product?

I think I can partly see where the dot product comes from, and it’s basically the notion of a directional derivative. The triple cross products are a complete mystery to me. Is there a geometrical reason for their presence? Would expressing all this in the language of differential forms help? Equations (4) and (5) also seem to imply that the triple cross products are associative, which they generally are not. I think I can justify the steps to get from (4) to (5), so if anyone can help me understand geometrically how the Jacobian can be decomposed into a dot product (directional derivative) and the cross product of a vector and the curl of the other vector, I’d be very grateful.
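One way to see where the cross products come from, at least algebraically: split the Jacobian into its symmetric and antisymmetric parts. The antisymmetric part of Jᵥ, acting on a constant vector a, gives exactly ½(∇×v)×a, so the curl-cross terms in these identities are the antisymmetric half of a Jacobian, while the symmetric half carries the rest of the directional derivative. A sketch of that decomposition (my own, not taken from any of the texts cited above):

```python
# The antisymmetric part of the Jacobian of v acts like half the curl:
# (1/2)(J - J^T) a == (1/2) curl(v) x a for any constant vector a.
# So the curl-cross terms in the identities are the antisymmetric half
# of a Jacobian; the symmetric half carries the remaining derivative.
import sympy as sp

x, y, z = sp.symbols('x y z')
a = sp.Matrix(sp.symbols('a1 a2 a3'))   # arbitrary constant vector
v = sp.Matrix([y*z, x**2 * z, x*y**2])  # arbitrary polynomial field

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

J = v.jacobian(sp.Matrix([x, y, z]))
antisym = (J - J.T) / 2

print((antisym * a - curl(v).cross(a) / 2).expand().T)  # zero vector
```

Geometrically, the antisymmetric part of the Jacobian describes the local rotation of the field, which is at least suggestive of why a curl appears; I would still welcome a fuller geometric account.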

# Conceptual Understanding in Introductory Physics XXIV

This question was inspired by chapters 13, 14, and 15 of Matter & Interactions and would, I think, make a good final exam question even in courses where M&I isn’t used. The story line in those chapters makes a wonderful progression through different charge distributions and their fields and interactions with other similar charge distributions. The rather obvious patterns in this progression are worth emphasizing. They seem to be a consequence of superposition, which is one of the most conceptually astounding ideas in physics.

Make a table giving at least one charge distribution, or combination of distributions, that gives rise to an electric field or electric interaction (force) that varies as 1/(r^n) where n = 0, 1, 2, …, 9. It may be the case that not all values of n are represented in the table.

I can think of at least one example where a double digit value of n is needed, but most courses don’t deal with that situation.

# Conceptual Understanding in Introductory Physics XXIII

This question came to me while I was planning for this semester’s introductory calculus-based e&m course (using Matter & Interactions of course). My overall desire and plan is to move away from the traditional number crunching type of problems, where all students really do is manipulate coordinate components of vectors or perhaps vector magnitudes, all without any genuine concern for the underlying geometrical implications. I completely understand the importance of this skill, but with computing having become rather ubiquitous I think such number crunching can be relegated to computational activities and labs. To change the status quo, I want to build a library of conceptual questions and problems that go as far beyond number crunching as I can get. I want students to think about the assumptions we make in physics and about how those assumptions are formulated. I want students to be able to, as Cliff Swartz once said, know the answer to a problem before calculating it. (I’ll link to the reference for that paraphrased quote once I dig it up.)

This particular question addresses two things, one that I never questioned as a student and one I only recently thought about as a teacher. It also addresses my continual search for ways to introduce symmetry arguments into introductory physics as early as possible. See what you think. You may find this question intimately related to this post.

(a) Formulate an explanation for why the electric field of a particle, or any other finite charge distribution, must decrease, as opposed to increase or remain constant, as distance from the charge distribution increases.

(b) Formulate an explanation for why the electric field of an infinite (keeping in mind that true infinite charge distributions don’t exist) charge distribution must remain constant, as opposed to increase or decrease, as distance from the charge distribution increases. (It may help to consider an Aronsonian operational definition of “infinite charge distribution.” In other words, if a charge distribution can’t be truly infinite then what precisely do we really mean by “infinite charge distribution” in the first place?)

By the way, as always these questions are framed within the context of introductory calculus-based physics. I hope I have made correct assumptions about the physics of the situations. If not, please feel free to let me know. Oh, and yes, you could probably use gravitational or magnetic fields instead of electric fields in this question.