When calculus was developed in the 17th century, it was known as the infinitesimal calculus, that is, the art of calculating with infinitely small (infinitesimal) quantities. This was treated with a lot of suspicion, because the only real number that is infinitely small is zero, and it's impossible to divide by zero, yet people were pretending that they could divide by infinitesimals.
In England, the infinitesimal calculus was invented by Isaac Newton, but he didn't trust it; he used it to get answers but then found a proof of each answer using geometry, and he only published the geometry. This is a very difficult method; only Newton was enough of a genius to really pull it off. The main philosophical opponent of infinitesimals, Bishop Berkeley, also lived in England, so calculus wasn't used very effectively in England for almost 200 years.
Meanwhile in Germany, the infinitesimal calculus was independently discovered by Gottfried Leibniz; he was not as smart as Newton, so he had no choice but to use infinitesimals, no matter how fishy they were. So in France and Germany, many people were able to use infinitesimal calculus to great success, especially in applications to physics. The experiments showed that their science was correct, even if the mathematics didn't make logical sense.
By the 19th century, there were two main ways that people used infinitesimals, which we now call the differential calculus and the integral calculus. Between them, Augustin Cauchy and Bernhard Riemann were able to explain both of these in a logically rigorous way based on the concept of limits instead of infinitesimals. Finally, Karl Weierstrass found a complicated but perfectly rigorous account of limits, called the epsilon–delta definition. Now calculus finally had a solid logical foundation, but it no longer had anything to do with infinitesimals.
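For the curious, here is the usual modern form of that definition (the standard statement, not a quotation from this course's textbook), written in LaTeX notation:

\[
  \lim_{x \to c} f(x) = L
  \quad\text{means:}\quad
  \text{for every } \varepsilon > 0 \text{ there is a } \delta > 0 \text{ such that, for every } x,\
  0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon .
\]

Notice that nothing infinitely small appears anywhere in it; everything is stated in terms of ordinary finite tolerances epsilon and delta.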
This was great for the logicians and philosophers, but most scientists ignored it all. They were interested in using calculus to solve problems, not in complicated definitions to make ideas perfectly precise. So they continued to use calculus the way that Leibniz first developed it, the easy way, even though this meant thinking about infinitesimals. However, mathematics books began to use the epsilon–delta method, because this was the only way to make the subject mathematically precise, even though this made things more difficult to understand.
In 1960, Abraham Robinson sprang a surprise: using advanced ideas from mathematical logic, he found a rigorous way to make sense of infinitesimals after all! At first, this was even more complicated than the epsilon–delta definitions, but since then people have distilled it down to its simple essence. This method of doing calculus with infinitesimals, called nonstandard calculus, seems to be easier for students to understand. However, most textbooks have completely ignored it, sticking to the epsilon–delta approach. (One exception is Jerome Keisler's Elementary Calculus.)
There's an even more fundamental problem with the epsilon–delta method. In the textbook for this class, Weierstrass's precise definition of limit never appears! The reason is that this class is concerned with applications, and it's not important to establish a logically sound theory. Very well, but if we're not going to do the theory, then there's no reason to use this approach at all, and we can go back to the way that Leibniz originally did calculus. Furthermore, Leibniz's original infinitesimal calculus is the way that calculus is still applied in practice, and practical applications are what we are interested in.
Therefore, I'm going to run this class using infinitesimals. I know that what I say can be made rigorously precise using Robinson's ideas, but you don't have to worry about that. What you need to know is how to use infinitesimals to solve practical problems, which is the easiest way to use calculus, and that is what I'll teach you.
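To give you a first taste of what this looks like (a generic illustration of the infinitesimal method, not an example taken from the textbook), suppose that y = x^2 and let dx be an infinitesimal change in x. Then the corresponding change in y is

\[
  \mathrm{d}y = (x + \mathrm{d}x)^2 - x^2 = 2x\,\mathrm{d}x + (\mathrm{d}x)^2 ,
  \qquad\text{so}\qquad
  \frac{\mathrm{d}y}{\mathrm{d}x} = 2x + \mathrm{d}x .
\]

Since dx is infinitely small compared to 2x, we simply discard it and conclude that the derivative is 2x. (In Robinson's framework, discarding that leftover infinitesimal is made precise by taking the so-called standard part, but again, you won't need to worry about that.)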
The permanent URI of this web page is http://tobybartels.name/MATH-1400/2011FA/introduction/.