We're in the final episode of the series' journey to calculus. It's time to look back at the mathematical concepts we've encountered along the way and tie them together. Specifically, we will reveal the relationship between the recently introduced derivative and the integration that I roughly outlined in the chapter on Archimedes. It turns out that concepts as different as calculating the area under a curve and calculating the rate of change of a function have a great deal in common; their relationship is known in mathematics as the fundamental theorem of calculus. Once illuminated, this theorem will also bring a number of physical applications of the derivative and the integral to the surface. A door full of unprecedented possibilities will open; unfortunately, we will not be able to pass through it in this series, but at least the ground will be prepared for the passage.

When I say an example function, I mean a well-behaved one. Not all functions have derivatives at all points. For example, discontinuous functions (those you draw with several strokes, lifting the pencil off the paper between them) do not have derivatives at the points where the pieces fail to connect (points of discontinuity).

In the last chapter, we learned that the derivative of a function at a point is the slope of the tangent to that function at that point. The derivative, together with its formula, can be seen in the figure below, where we differentiate an example function.

In some literature, you will encounter the notions of the definite and the indefinite integral. We will not make such a division, as I consider it unnecessary.

In the chapter on Archimedes, we calculated the area under a parabola. We did that calculation purely as a mathematical curiosity, but it turns out that the area under a curve appears in many mathematical and physical problems. That is why the mathematical operation of integration is introduced. We say that the integral of the function $f(x)$ from $a$ to $b$ is equal to the area under the curve for $x$ in the interval from $a$ to $b$. We write $\int_a^b f(x)\,\mathrm{d} x= S$. All of this is shown in the figure below, where the area under the $x$ axis is naturally considered negative.

The word integration comes from the Latin integer, which means whole (from in- (not) + tangere (to touch), that is, untouched). Integration, then, is a kind of making whole, a unification. It is not for nothing that some politicians speak of integration as the inclusion of a group into a wider society. The $\int$ sign is not an $I$ but an elongated $S$. It was once common to write a long $s$ this way, for example in old German script, and you can still find that $s$ in German newspapers or on houses today. The $S$ stands for a sum, since integration is the summing of the rectangles that make up the area under the curve.

We face a similar terminological problem as with the derivative. The integral from $a$ to $b$ is the area under the curve $f(x)$ from $a$ to $b$. But let us now set $a=0$ and let $b$ be the variable $x$. This gives us integration as a mathematical operation: it assigns to the function $f(x)$ another function $F(x)$, equal to the integral of $f(x)$ from $0$ to $x$.

Let's take a closer look at the integration symbol. First, though, let's go back to Archimedes and calculate the area under some function from $0$ to $1$. Instead of $\int$, let's write the sum we introduced at the time: $$S\approx \sum_{i=0}^{n-1} f(i/n)\,\frac{1}{n} \,.$$

This expression says that we are adding up the areas of rectangles with sides $1/n$ and $f(i/n)$. If $n$ is big enough, we pass to the integral: $$S= \int_0^1 f(x)\, \mathrm{d} x \,.$$

In this integral, we have rectangles with a very small side $\mathrm{d} x$ and a second side $f(x)$. We run $x$ from zero to $1$ in steps of $\mathrm{d} x$.
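The sum above is easy to try out on a computer. A small numerical sketch (the function $f(x)=x^2$ and the values of $n$ are my own choices, picked to echo the Archimedes chapter):

```python
# Riemann-sum approximation of the integral of f(x) = x^2 from 0 to 1.
# The exact area is 1/3; the sum of rectangle areas approaches it as n grows.

def riemann_sum(f, n):
    """Sum of n rectangles of width 1/n with heights f(i/n)."""
    return sum(f(i / n) * (1 / n) for i in range(n))

f = lambda x: x ** 2
for n in (10, 100, 10000):
    print(n, riemann_sum(f, n))
# The printed values approach 1/3 = 0.333... as n increases.
```

The rectangles undershoot the true area slightly (each has its height taken at the left edge), but the error shrinks as $n$ grows, exactly as the limit transition promises.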

In the context of integration, we further introduce the notion of a primitive function. We say that $F(x)$ is a primitive function of $f(x)$ if it holds that $$F'(x)=f(x)\,.$$

For example, if we have the function $f(x)=3x^2$, its primitive function is $F(x)=x^3$. But there is a second primitive function $F_2(x)=x^3 +1$, and similarly an infinite variety of primitive functions, obtained by adding a different constant to the original each time. The primitive function is therefore not uniquely defined, and we write the general primitive function as $$F(x) + C\,,$$

where the integration constant is customarily denoted $C$.
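We can check numerically that the constant really doesn't matter: the slope of $x^3 + C$ is the same $3x^2$ no matter which $C$ we pick. A small sketch (the sample points and constants are my own choices):

```python
# Check that F(x) = x^3 + C is a primitive function of f(x) = 3x^2:
# the finite-difference slope of F matches f regardless of the constant C.

def numerical_derivative(F, x, h=1e-6):
    """Forward-difference estimate of F'(x)."""
    return (F(x + h) - F(x)) / h

f = lambda x: 3 * x ** 2
for C in (0.0, 1.0, -7.5):               # the added constant does not matter
    F = lambda x, C=C: x ** 3 + C
    for x in (0.5, 1.0, 2.0):
        assert abs(numerical_derivative(F, x) - f(x)) < 1e-4
print("x^3 + C differentiates back to 3x^2 for every C")
```

Adding a constant only shifts the graph up or down; it does not tilt any tangent, which is why differentiation wipes the constant out.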

Differentiation and integration are thus general operations on functions whose result is again a function, as the table shows.

As the table shows, the function resulting from integrating $f(x)$ is the area under the curve $f(x)$ from $0$ to $x$. Here $x$ is a variable, which is why we consider $F(x)$ a function. However, there is one other, as yet unmentioned relationship, namely $$F'(x) = f(x)\,.$$

Other authors introduce the integration operation slightly differently, in which case one cannot simply say that integration is the inverse of differentiation. These are, however, overly technical details.

The fundamental theorem states that the function $F(x)$ obtained by integrating $f(x)$ is also a primitive function of $f(x)$. In other words, differentiation is the inverse operation to integration: if we integrate a function and then immediately differentiate the result, we get the same function back. That is all the fundamental theorem of calculus says; let's explain it with a drawing.

So let's assume we have some function $f(x)$ and we know its primitive function $F(x)$. Let's look at the area under the curve from zero to the point $x_0$, and from zero to the point $x_0+h$.

I would like to stress (again) that you will not find mathematical proofs in the true sense of the word in this text; it is not rigorous enough for that. The claims made are not stated precisely enough and do not cover all possible cases. This is just an illustration of the main ideas.

The figure shows a colored area $F(x_0)$ and a larger hatched area $F(x_0+h)$. Now let's look at the smaller hatched area. It can be expressed, first, as $F(x_0+h)-F(x_0)$. But second, it can be expressed approximately as $f(x_0)\cdot h$ (or $f(x_0+h)\cdot h$). Written as an equation: $$F(x_0+h)-F(x_0) \approx f(x_0)\cdot h\,.$$

The smaller we choose $h$, the more accurately our equality holds: the rectangle $f(x_0)\cdot h$ replicates the shape of the niche (the smaller hatched area) ever more closely. We can also say that for any given tolerance between the rectangle and the exact shape of the niche, we can choose an $h$ so that the actual error is smaller than the required deviation. That is, if we wanted the error to be less than one percent, we could choose a small enough $h$. A tenth of a percent? The same. A thousandth? The same. A millionth? I suspect you already know where this is going: the limit transition. So we divide the equation above by $h$ and take the limit. This gets rid of the inaccuracy, and we can write a true equals sign: $$f(x_0) = \lim_{h\to 0} \frac{F(x_0+h)-F(x_0)}{h}\,.$$ The right-hand side is precisely the definition of the derivative, so this says $F'(x_0)=f(x_0)$.
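We can watch this limit transition happen numerically. A sketch with my own choice of area function $F(x)=x^3$ (so $f(x)=3x^2$) and the point $x_0=1$:

```python
# The difference quotient (F(x0+h) - F(x0))/h approaches f(x0) as h shrinks.
# Here F(x) = x^3 is the primitive function of f(x) = 3x^2, and x0 = 1.

F = lambda x: x ** 3
f = lambda x: 3 * x ** 2
x0 = 1.0
for h in (0.1, 0.01, 0.001, 1e-6):
    quotient = (F(x0 + h) - F(x0)) / h
    print(h, quotient)        # approaches f(x0) = 3
```

Each tenfold shrinking of $h$ moves the quotient visibly closer to $3$, which is exactly the "choose $h$ small enough for any tolerance" argument above.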

Whew, we've done all the hard math; I hope it didn't hurt too much. Now, how do we use our new knowledge for something useful? Let's start with what Newton probably invented calculus for: the motion of bodies. To no one's surprise, the velocity of a body is the change in its position over time; we know $v=s/t$. But that gives the average speed. What about the instantaneous speed? Simply, as I wrote in the opening chapter, all you have to do is take the average over an infinitesimally small time. This infinitely small time can be expressed by a limit: $$v=\lim_{\Delta t\to 0} \frac{\Delta s}{\Delta t}\,.$$

We just need to unpack the definition to see that velocity is simply the derivative of position with respect to time. So, for example, if we record our position on GPS during the day, its derivative tells us how fast we've been moving, something today's smart devices actually use. Similarly, we can use the fundamental theorem of calculus: the distance traveled is the integral of velocity ($s=\int_0^t v\,\mathrm{d} t$). That is, if we constantly watch the speedometer in the car and record the (possibly rapidly changing) speeds, we can use those speeds alone to determine what distance we've traveled, without ever looking out of the car!
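The GPS idea fits in a few lines of code. A sketch with invented position data (one sample per second; the numbers are mine, not from the text):

```python
# Recorded positions (metres) at one-second intervals, as a GPS might log them.
# Finite differences give speeds; summing speed * dt recovers the distance,
# illustrating that differentiation and integration undo each other.

dt = 1.0                                       # seconds between samples
positions = [0.0, 2.0, 5.0, 9.0, 14.0, 20.0]   # metres

speeds = [(positions[i + 1] - positions[i]) / dt
          for i in range(len(positions) - 1)]
print("speeds:", speeds)                       # m/s, one per interval

distance = sum(v * dt for v in speeds)         # "integrate" the speeds back
print("distance:", distance)                   # matches the final position
```

Differencing the positions and then summing the speeds returns exactly the distance covered, a discrete miniature of the fundamental theorem.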

But it's not just speed; the derivative describes anything that changes over time. Electric current? The change of charge over time, the derivative $\frac{\mathrm{d}}{\mathrm{d}t} Q$. Water flow? The change of volume over time, that is, $\frac{\mathrm{d}}{\mathrm{d}t} V$. Stock market growth? That, too, can be analyzed using a derivative. We find countless applications.

In addition, one more remarkable thing can be investigated using the derivative: the maxima and minima of a function. Look again at the example function from this chapter on which I illustrated the definition of the derivative. Its maximum (highest point) is a kind of hilltop. Imagine a tangent at that point: it is parallel to the $x$ axis, which means the derivative there is equal to zero. Conversely, if the derivative of a function at some point is equal to zero, that point is a candidate for a maximum (or minimum). It may only be a so-called local maximum, meaning the function climbs even higher somewhere further on, but it is still worth knowing that point.

The search for minima is used in another area: investigating the stability of a mechanical system (e.g., a bridge). It's common knowledge that physical systems try to get into a state with as little energy as possible. Once there, they are (mostly) in a stable state. So a bridge builder may ask: for what size of girder is the bridge most stable? He writes down the potential energy of the bridge (the energy the bridge has relative to the Earth's gravitational field, for a mass point $E=mgh$), differentiates it with respect to the length of the girder, and finds the most stable length. Similarly, a derivative can be used to look for the stable state of a spring or of a pair of atoms. That sounds more complicated, but the principle is the same: you just have to know how to differentiate.

What does it mean if the derivative of a function is positive at some point? The tangent at that point points upwards, which means the function slopes upwards: the function is rising at that point. So if the derivative of a function is positive, the function is growing (rising); if it's negative, the function is decreasing (falling). You can verify this with pencil and paper, for example, on the function in the first picture. And if the derivative is zero, the function neither rises nor falls; we are in the extreme-point situation described in the paragraph above. Needless to say how useful this property of the derivative is: we can use it to learn about functions, to investigate them.

An extreme point means either a maximum or a minimum. Whether we've found a minimum or a maximum can be verified with the second derivative (the derivative of the derivative).
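The recipe above, find where the derivative changes sign, works just as well numerically. A sketch with my own example function $f(x) = x^2 - 2x$, whose derivative $2x - 2$ vanishes at $x = 1$ (a minimum, since the second derivative, $2$, is positive):

```python
# Locating an extremum numerically: scan a grid and record where the
# derivative of f(x) = x^2 - 2x switches from negative to non-negative.

def derivative(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2 - 2 * x

xs = [i * 0.01 for i in range(-200, 400)]      # grid from -2 to 4
found = [(a + b) / 2 for a, b in zip(xs, xs[1:])
         if derivative(f, a) < 0 <= derivative(f, b)]
print("minimum near x =", found[0])            # close to 1
```

The sign change (falling, then rising) is precisely what distinguishes a minimum from a maximum, which would show the opposite pattern.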

Integrals on their own can be used to calculate the volumes of bodies. Have you ever wondered why the volume of a sphere is exactly $4/3 \pi r^3$? It suffices to choose the coordinates appropriately, and integration produces the result. A slight modification of the procedure also yields the center of gravity of a body, which again can be used in statics.
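One common way to set up the sphere calculation is to slice it perpendicular to the $x$ axis into thin disks of area $\pi(r^2-x^2)$ and integrate those areas from $-r$ to $r$. A numerical sketch (the radius and slice count are my own choices):

```python
# Volume of a sphere by integrating disk cross-sections pi*(r^2 - x^2)
# over [-r, r] with a midpoint Riemann sum; compare with 4/3*pi*r^3.

import math

def sphere_volume(r, n=10000):
    """Midpoint-rule sum of n thin disks of thickness dx."""
    dx = 2 * r / n
    return sum(math.pi * (r * r - (-r + (i + 0.5) * dx) ** 2) * dx
               for i in range(n))

r = 2.0
print(sphere_volume(r))            # numerical slicing
print(4 / 3 * math.pi * r ** 3)    # the closed-form answer
```

The two printed numbers agree to many decimal places, which is the integral $\int_{-r}^{r} \pi(r^2-x^2)\,\mathrm{d}x = \tfrac{4}{3}\pi r^3$ seen in action.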

In this series of articles, while we have shown the principle of derivative, we have nevertheless calculated it very lengthy, using limits. How can derivatives be calculated in practice?

According to an old saying, differentiation is like squeezing paste out of a tube: it's easy at first, but sometimes it's a struggle to squeeze it all out. And integration? Well, it's the inverse operation...

For calculations, tables of known derivatives and integrals are available. With them, the solution to a given problem can often be looked up immediately (though a proper physicist or mathematician would never use a table whose results he has not, at least at some point, derived himself). In addition, there are more advanced rules for computation: e.g., integration by parts, the product and quotient rules for derivatives, and so on. Calculating the derivative of an expression is usually relatively simple. But integrals can be very messy, and some cannot be solved in closed form at all.

Let's not kid ourselves: these days, derivative calculations can be done by software. For example, on the page www.wolframalpha.com, just enter `differentiate x^3*2+3x`

and the robot briskly replies that this equals $6x^2+3$. An experienced mathematician or physicist could compute such an example faster than the machine, but for more complex examples Wolfram|Alpha clearly wins. Similarly, you can calculate integrals with `integrate`

, you can try it yourself.
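The same kind of symbolic calculation can also be done locally in Python with the sympy library (assuming it is installed; this is an alternative to the website, not the tool the text uses):

```python
# Symbolic differentiation and integration of 2x^3 + 3x with sympy.

import sympy as sp

x = sp.symbols("x")
expr = 2 * x ** 3 + 3 * x

print(sp.diff(expr, x))        # the derivative: 6*x**2 + 3
print(sp.integrate(expr, x))   # a primitive function (sympy omits the + C)
```

Note that `integrate` returns one primitive function; the integration constant $C$ from earlier is left implicit.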

Perhaps the most complex area of mathematics within differential calculus is differential equations. These are problems such as: "I am thinking of a function. Its derivative is equal to $x^4$. What function am I thinking of?", written down in mathematical language as an equation. From an ordinary equation we are used to getting one, maybe two numbers as the solution. But here we want infinitely more as a solution: a function. The example above can be solved by simple integration, but in general, solving differential equations is very hard. Some equations, such as the Navier-Stokes equations, which describe the motion of fluids, have resisted a general solution for nearly two centuries despite the hard work of mathematicians.

Differential equations play a very important role in physics: the laws of nature take their form. E.g., Newton's law $F=m\cdot a$ (force equals mass times acceleration) is actually a differential equation. Acceleration is the change in velocity over time, i.e., the derivative of velocity with respect to time, and thus the second derivative of position: $$F=m \frac{\mathrm{d}^2}{\mathrm{d}t^2} x \,.$$

So the force is proportional to the second derivative of position. If, for example, we take the resistance force mentioned at the beginning of the series, which is proportional to the speed and acts against the motion, it holds that $F=-K\cdot \frac{\mathrm{d}}{\mathrm{d}t} x$. Overall, the equation takes the shape: $$-K\cdot \frac{\mathrm{d}}{\mathrm{d}t} x=m \frac{\mathrm{d}^2}{\mathrm{d}t^2} x \,.$$

The solution of this equation is an exponentially decaying function: the speed dies away over time. For different forces we would similarly obtain different solutions. Most physical equations are built on this principle.
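We can check the exponential decay by stepping the equation forward in tiny time steps (the simple Euler method). The mass, drag coefficient, initial speed, and step size are my own illustrative choices:

```python
# Euler-method sketch of the drag equation m*dv/dt = -K*v: starting from an
# initial speed, the speed decays exponentially, matching v0*exp(-K*t/m).

import math

m, K = 1.0, 0.5               # mass and drag coefficient (illustrative)
v, t, dt = 10.0, 0.0, 0.001   # initial speed 10 m/s, step 1 ms
while t < 5.0:
    v += (-K * v / m) * dt    # dv = (F/m)*dt with drag force F = -K*v
    t += dt

exact = 10.0 * math.exp(-K * 5.0 / m)
print(v, exact)               # the two values nearly coincide
```

The stepped value and the analytic solution $v_0 e^{-Kt/m}$ agree closely, and both head toward zero, the "exponentially decaying" behavior described above.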

Differential calculus is a very large subject, one that usually requires building many pages of mathematical theory. I have tried to condense that theory as much as possible, while, I believe, keeping the elements that make it special and without losing clarity. I also hope you've enjoyed the walk through the geometric landscape of differential calculus, done at least a little bit of mathematics, and thought about a concept or two. Perhaps the next time someone says they've never been good at math and don't care, you'll be able to show them a bit of mathematical beauty.