10. Summary

With this chapter, the main text on differential calculus comes to an end. Here we have given concrete form to the theory of differential calculus presented in Living Geometry, through many examples from a variety of scientific disciplines. Physics dominated the most, but this is to be expected: differential calculus was developed directly for it, and many physical applications reach beyond their original field.

I hope you now have a more or less comprehensive idea of what derivatives, integrals, and differential equations are. When becoming acquainted with a new subject, it often happens that one understands a concept but is not sure how it connects to another, which creates great uncertainty; this text has tried to connect everything as much as possible.

I would like to end this text on a hopeful note, which is why I will bring up the topics that did not fit into the main text, interesting extensions though they are. The goal is no longer for you to learn anything in particular, but rather to give an overview of all the curious corners into which the study of differential calculus can lead. The level of rigor will therefore drop a notch from the previous chapters; on the other hand, I will try to supplement everything with resources where it is explained properly.

Multidimensional calculus

The functions we have been using so far mostly had just one variable and were denoted $f(x)$. This is enough in many cases; however, when describing things in three-dimensional space, for example, we need something more general. So we simply introduce a function of two variables $f(x,y)$, which returns one number when supplied with two. An example of such a function is $$\begin{align*} f(x,y) = x^2 + y^2 \,, \text{e.g. } f(2,-2) = 8 \,. \end{align*}$$

Of course, we can similarly introduce a function of three, four, or $n$ variables. How do we differentiate then? We introduce the so-called partial derivative, that is, a derivative with respect to one variable, denoted with the symbol $\partial$. Its definition is $$\frac{\partial f(x,y)}{\partial x} \equiv \lim_{h \to 0} \frac{f(x+h,y) - f(x,y)}{h} \,,$$

that is, we vary only one variable. For our particular example, we have $$\frac{\partial f(x,y)}{\partial x} = 2x \,,$$

because while $x$ varies, $y$ is held constant, and the derivative of a constant is zero. With multidimensional functions it is harder to find a maximum, because we have two parameters we can change. More freedom means more options, and it requires slightly more advanced methods. In practice, numerical methods such as gradient descent are used to find the extremes of multidimensional functions.
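The two ideas above, a partial derivative as a finite difference in one variable and gradient descent as a way to find an extreme, can be sketched in a few lines of Python. This is a minimal illustration of my own, not code from the text; the step size 0.1 and the difference step $h$ are arbitrary choices:

```python
# f(x, y) = x^2 + y^2, the example function from the text
def f(x, y):
    return x**2 + y**2

def partial_x(f, x, y, h=1e-6):
    # vary x only; y is held constant, exactly as in the definition
    return (f(x + h, y) - f(x, y)) / h

def partial_y(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y)) / h

print(partial_x(f, 2.0, -2.0))   # close to 2x = 4

# gradient descent: repeatedly step against the gradient to find a minimum
x, y = 2.0, -2.0
for _ in range(200):
    gx, gy = partial_x(f, x, y), partial_y(f, x, y)
    x, y = x - 0.1 * gx, y - 0.1 * gy
print(x, y)                      # close to the minimum at (0, 0)
```

The descent walks downhill in both parameters at once, which is exactly the "more freedom" the text mentions.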

Differential Operators

In nature, certain combinations of derivatives occur more often than others. That is why we introduce so-called differential operators, which are shorthand for writing derivatives. Let us move for a moment into a world of three spatial variables and one temporal one. For example, let us monitor the temperature of a room, depending on where we measure and evolving over time: $T(x,y,z,t)$. We then introduce the Laplace operator $\Delta$: $$\begin{align*} \Delta T(x,y,z) \equiv \frac{\partial^2 T(x,y,z)}{\partial x^2} + \frac{\partial^2 T(x,y,z)}{\partial y^2} + \frac{\partial^2 T(x,y,z)}{\partial z^2} = \left( \frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2} + \frac{\partial^2 }{\partial z^2} \right) T(x,y,z) \,. \end{align*}$$

Yes, the Laplace operator, or Laplacian for short, is written with the same symbol as a capital delta. This combination of second derivatives measures how the value at a point differs from the average of its surroundings. With it we can write down Fourier's heat-conduction equation, which reads: $$\lambda\Delta T(x,y,z,t) = \frac{\partial}{\partial t} T(x,y,z,t) \,.$$

It says that the change in temperature over time is proportional to the difference between the temperature and the average of its surroundings ($\lambda$ is just the coefficient of thermal conductivity). Heat thus flows from place to place, and the moment it stops moving, we have $$\frac{\partial}{\partial t} T(x,y,z,t) = 0 \Rightarrow \lambda\Delta T(x,y,z,t) = 0 \,.$$

In other words, everywhere the temperature equals the average temperature of its surroundings, so the temperature no longer changes anywhere. In one dimension, and using further considerations, this simplifies to the equation we saw in chapter seven.
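The heat equation can be simulated numerically. The sketch below is a toy setup of my own, not from the text: a one-dimensional rod with a hot spot in the middle and cold ends, stepped forward with finite differences (the discrete Laplacian is exactly the "difference from the average of the neighbours"):

```python
# one-dimensional heat conduction dT/dt = lam * d^2T/dx^2,
# solved with an explicit finite-difference scheme
lam, dx, dt = 1.0, 1.0, 0.2    # dt chosen small enough for stability
T = [0.0] * 21
T[10] = 100.0                   # initial hot spot in the middle of the rod

for _ in range(500):
    # discrete Laplacian: difference from the average of the neighbours
    # (the two ends are held at temperature 0)
    lap = [0.0] + [T[i-1] - 2*T[i] + T[i+1] for i in range(1, 20)] + [0.0]
    T = [T[i] + lam * dt / dx**2 * lap[i] for i in range(21)]
```

After many steps the spike has spread out and flattened: heat "overflows" from warmer to cooler places until the temperature stops changing.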

Wave equation

Solving multidimensional equations is usually not as simple as the simplified heat-conduction example above. These are what we call partial differential equations (PDEs). One particularly important example is the wave equation, which for a function of two variables $u(x,t)$ has the form: $$\begin{align*} \frac{\partial^2 }{\partial x^2} u(x,t) &= \frac{1}{c^2} \frac{\partial^2 }{\partial t^2} u(x,t) \,, \end{align*}$$

where $c$ is the speed of wave propagation. This equation has many kinds of solutions, but let us consider one in particular: any function of the form $u(x,t)=f(x-ct)$. Let us substitute it in: $$\begin{align*} \frac{\partial^2 }{\partial x^2} f(x-ct) &= \frac{\mathrm{d}^2 f(x-ct)}{ \mathrm{d} (x-ct)^2} \left( \frac{\mathrm{d} (x-ct)}{ \mathrm{d} x} \right)^2 = \frac{\mathrm{d}^2 f(x-ct)}{ \mathrm{d} (x-ct)^2} 1^2 \\ \frac{1}{c^2} \frac{\partial^2 }{\partial t^2} u(x,t) &= \frac{\mathrm{d}^2 f(x-ct)}{ \mathrm{d} (x-ct)^2} \frac{1}{c^2} \left(\frac{\mathrm{d} (x-ct)}{ \mathrm{d} t} \right)^2 = \frac{\mathrm{d}^2 f(x-ct)}{ \mathrm{d} (x-ct)^2} \frac{1}{c^2} c^2 \,. \end{align*}$$

We see that the right and left sides are equal regardless of the choice of the function $f$. So the solution is indeed a wave, that is, some shape that moves through space at speed $c$ over time. You can picture, for example, the function $u(x,t) = (x-ct)^2$ at time $t=0$ and then at progressively later times: it is a parabola moving right at speed $c$.

We encounter this equation, for example, in the vibrations of a string, in the conduction of a signal through an undersea cable, in electromagnetic waves (light), or in waves at sea. We would get even more interesting solutions if we considered a function with three spatial parameters, but that is already too complex for us here.
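The substitution above can also be checked numerically: approximate both second derivatives of $u(x,t)=f(x-ct)$ with finite differences and compare the two sides. The test function $f$ below is an arbitrary smooth choice of mine, not anything from the text:

```python
import math

c, h = 3.0, 1e-4      # wave speed and finite-difference step

def f(s):
    # any smooth function will do; the wave equation doesn't care which
    return math.sin(s) + s**2

def u(x, t):
    return f(x - c * t)

def d2_dx2(x, t):
    # central-difference approximation of the second x-derivative
    return (u(x + h, t) - 2*u(x, t) + u(x - h, t)) / h**2

def d2_dt2(x, t):
    return (u(x, t + h) - 2*u(x, t) + u(x, t - h)) / h**2

x, t = 0.7, 0.2
lhs = d2_dx2(x, t)
rhs = d2_dt2(x, t) / c**2
print(abs(lhs - rhs))   # close to zero: the two sides agree
```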

Infinite series

In differential calculus, we cannot ignore infinite sums (series). Recall the first integration we did: in chapter three of Living Geometry, we calculated the area under a parabola, using the limit of sums with a growing number of terms. Differential calculus lets us further improve the techniques for evaluating such sums; for example, it can be proven that $$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} \,.$$

When a sum has a specific finite result, we say it converges. By contrast, the sum $$\sum_{n=1}^{\infty} \frac{1}{n}$$

diverges, that is, its sum grows beyond all limits. A notch harder than sums of numbers are sums of functions, in the general form $$\sum_{n=1}^{\infty} f_n (x)\,,$$

where $f_n(x)$ is some sequence of functions, e.g. $f_n(x) = \frac{x}{n}$. If the functions $f_n(x)$ are suitable, we can find what such a sum converges to, and thereby solve infinitely many infinite sums at once, one for each $x$. The situation is more complicated, though, because there are several types of convergence (pointwise and uniform).
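Convergence and divergence are easy to see experimentally. A small sketch of my own: summing the first many terms of $\sum 1/n^2$ lands near $\pi^2/6$, while the harmonic sum $\sum 1/n$ just keeps climbing:

```python
import math

# partial sum of 1/n^2: approaches pi^2/6 = 1.6449...
partial = sum(1 / n**2 for n in range(1, 100001))
print(abs(partial - math.pi**2 / 6))   # small: the series converges

# partial sums of the harmonic series 1/n keep growing without bound
h_small = sum(1 / n for n in range(1, 1001))
h_large = sum(1 / n for n in range(1, 1000001))
print(h_small, h_large)                # roughly 7.49 and 14.39
```

The harmonic sums grow like a logarithm, slowly but past any limit, which is exactly what divergence means here.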

Taylor expansion

The Taylor expansion arises when we ask: what polynomial best approximates a given function? After a good deal of mathematical analysis, we arrive at an answer: if we choose a nice enough function $f(x)$, we can approximate it as $$\begin{align*} f(x_0 + x) \approx \sum_{i=0}^{n} \frac{x^i}{i!} \frac{\mathrm{d}^i}{\mathrm{d} x^i} f(x)\Big\rvert_{x=x_0} \,, \end{align*}$$

where $x_0$ can be chosen arbitrarily as the point around which we expand; normally one chooses $x_0=0$. The symbol $i!$ means the factorial of $i$; the factorial of 3 is $3! = 3\cdot 2 \cdot 1 = 6$, and so on. The approximation is most accurate close to the point $x_0$.

We can choose the number $n$ however we want: the larger $n$ is, the more accurate an approximation we get. Note that for $n=1$ we recover the derivative formula we used in Living Geometry (only there we wrote $x + \Delta x$): $$ f(x_0 + x) \approx f(x_0) + x \cdot f'(x_0) \,.$$

This begs the question: what if we let $n$ go to infinity? Mathematical analysis shows that, under certain conditions, the infinite series then actually converges to the function. So we can express functions as polynomials of infinite degree, and that comes in handy very often. For example, the sine can be expressed as $$\sin (x) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} -\dots $$
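The partial sums of this series can be computed directly; a minimal sketch, keeping one, two, three, or five terms of the series above:

```python
import math

def sin_taylor(x, terms):
    # sum of (-1)^n * x^(2n+1) / (2n+1)!  for n = 0 .. terms-1
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

x = 1.0
for terms in (1, 2, 3, 5):
    print(terms, sin_taylor(x, terms))   # better and better approximations
print(math.sin(x))                        # the value being approximated
```

Already five terms agree with $\sin(1)$ to many decimal places, because the factorials in the denominators grow so quickly.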

You can see an illustration in the figure below, showing the sine function and its first few approximations.





Fourier expansion

If a function can be expressed as a polynomial of infinite degree, can it also be expressed in some other way in the same spirit? Joseph Fourier asked this question at the beginning of the 19th century and discovered that functions can also be expressed as an infinite series of sine functions with different periods. We will not present such a decomposition mathematically, but we illustrate it on the rectangular function in the figure below. There you can see the Fourier series for $n=1$, $n=2$, $n=10$, and $n=100$, getting closer and closer to the rectangular function.





The Fourier decomposition also has a physical significance. If we think of the function $f(x)$ as a sound signal, something like $I(t)$, the intensity over time, then the decomposition shows us which frequencies it contains. Moreover, the functions $\sin(a(x - ct))$ are a prominent solution of the aforementioned wave equation: using the Fourier decomposition, we can combine them to create new solutions.
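The partial sums shown in the figure can be sketched numerically. The text does not give the formula, so I use the standard Fourier series of a square wave of amplitude 1, $\frac{4}{\pi}\sum \frac{\sin((2k-1)x)}{2k-1}$, as an assumed example:

```python
import math

def square_partial(x, n):
    # first n odd-harmonic terms of a square wave's Fourier series
    return 4 / math.pi * sum(math.sin((2*k - 1) * x) / (2*k - 1)
                             for k in range(1, n + 1))

x = math.pi / 2            # the square wave equals 1 at this point
for n in (1, 2, 10, 100):
    print(n, square_partial(x, n))   # creeps toward 1 as n grows
```

Only the odd harmonics appear, each with a smaller and smaller weight, and the sum approaches the flat top of the rectangle.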

Functional analysis

In multidimensional analysis, we expanded the dimensions of the space in which we can differentiate. In functional analysis, we break out of this space entirely, into the space of functions. In a multidimensional space, we can express a point as a so-called vector, e.g. in 3D space as a triple $(x,y,z)$. A point in function space, by contrast, is a function $f(x)$; we could try to tabulate it as $(f(1), f(2), f(3), \dots)$ and conclude that a function can be expressed as an infinite vector this way, but that would be a mistake. Recall chapter zero of Living Geometry, where we touched on the difference between whole and real numbers: the real numbers are uncountable.

The space of functions is therefore infinitely more varied than any multidimensional space. Yet there is a glimmer of hope: polynomials. These can be expressed as points in a countably infinite space, which is a little more pleasant. How? Each polynomial is uniquely determined by its coefficients, so we can write $P(x) = 1 + x^1 - 2x^2$ as $P=(1,1,-2,\dots)$. And the class of polynomials is not at all poor; remember that, thanks to the Taylor expansion, many functions can be expressed as polynomials of infinite degree!
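The coefficient-vector idea is easy to make concrete. A small sketch of my own: the polynomial $P(x) = 1 + x - 2x^2$ becomes the list `[1, 1, -2]`, and evaluation is just a loop over the coefficients (Horner's scheme):

```python
def eval_poly(coeffs, x):
    # coeffs[i] is the coefficient of x^i; Horner's scheme evaluates
    # sum(coeffs[i] * x**i) without computing any powers explicitly
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

P = [1, 1, -2]              # P(x) = 1 + x - 2x^2
print(eval_poly(P, 2.0))    # 1 + 2 - 8 = -5
```

The finite list stands in for the whole function: adding polynomials is adding their coefficient vectors, just as with ordinary vectors.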

So if we know that the space of functions is in some sense manageable, how do we differentiate on it? Simply: we introduce the so-called functional $\mathcal F$. We put a function into it and it gives us a number, so $$ \mathcal F : f(x) \mapsto \mathcal F[f(x)] = a \,, $$

where $a$ is some real number depending on the choice of $f(x)$. A simple functional is, for example, $\mathcal F_0$, the one that assigns zero to every function. But that is not useful for anything practical, so we mostly construct others. For example, if $f(x)$ denotes the speed of a car depending on its position on the highway, a functional can be chosen that calculates the car's total fuel consumption. If we can then differentiate it, we can find the extremal function that minimizes the consumption. So while in ordinary analysis we are given a function and look for its maximum, here we are given a general problem and look for the function that best solves it.
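The fuel-consumption example can be sketched as code. Everything below is invented for illustration: the consumption model `rate(v)` and the two speed profiles are my own assumptions, not anything from the text. The point is only the shape of the thing: a functional takes a whole function and returns one number.

```python
def total_consumption(speed, a=0.0, b=100.0, steps=1000):
    # approximate the integral of rate(speed(x)) dx over [a, b]
    # with a simple Riemann sum
    dx = (b - a) / steps
    def rate(v):
        # made-up consumption model: cheapest at v = 80
        return 5 + 0.01 * (v - 80)**2
    return sum(rate(speed(a + i * dx)) * dx for i in range(steps))

# two candidate speed profiles; the functional assigns each a number
steady = lambda x: 80.0
rushed = lambda x: 80.0 + 40.0 * (x > 50)
print(total_consumption(steady))   # 500.0
print(total_consumption(rushed))   # larger: speeding costs fuel
```

Comparing the two numbers is the first step; the calculus of variations then asks which function among all of them makes the number smallest.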

In conclusion

These, then, are the directions in which one can go further in exploring mathematical analysis. We have omitted the important field of complex analysis here, since we never introduced complex numbers at all, and it would have been difficult to explain everything. Finally, a few more references.
