where \displaystyle \left({{\rm d}^ny\over {\rm d}x^n}\right)_{x_0} is the value of \displaystyle {{\rm d}^ny\over {\rm d}x^n} at x = x_0 .
Often x_0 = 0 , and the solution is
\displaystyle y = y_0 + x \left({dy\over dx}\right)_0 + {x^2\over 2!}\left({d^2y\over dx^2}\right)_0 + {x^3\over 3!}\left({d^3y\over dx^3}\right)_0 + ...
An approximate value of a definite integral can sometimes be found by expanding the integrand as a series and then integrating the series term by term. This works satisfactorily only if, once the limits are substituted, successive terms after the first few decrease rapidly in size.
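As a numerical sketch (the integrand is an illustrative choice, not from the text), the value of \displaystyle \int_0^1 e^{-x^2}\,{\rm d}x can be estimated by integrating the Maclaurin series of e^{-x^2} term by term:

```python
import math

# Maclaurin series: e^(-x^2) = sum over n of (-1)^n x^(2n) / n!
# Integrating term by term from 0 to 1 gives
# sum over n of (-1)^n / ((2n + 1) n!), whose terms shrink rapidly.
approx = sum((-1) ** n / ((2 * n + 1) * math.factorial(n)) for n in range(10))

# The exact value is (sqrt(pi)/2) erf(1), available from the standard library
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)

print(round(approx, 6))  # 0.746824
assert abs(approx - exact) < 1e-6
```

Ten terms already agree with the exact value to six decimal places, because the terms alternate in sign and fall off like 1/n!.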
Chapter 2 summary - Complex numbers
The complex number z = a + ib, a, b \in \Re can also be written as
z = r(\cos \theta + i\sin \theta) \quad and \quad z = re^{i\theta}
where -\pi < \theta \leq \pi and where r = \sqrt {(a^2 + b^2)} and \theta is the angle which the line representing z on an Argand diagram makes with the positive x-axis.
\cos iz = \cosh z , \quad \sin iz = i\sinh z , \quad \cosh iz = \cos z , \quad \sinh iz = i\sin z
where z \in \unicode{x2102} .
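These four identities can be checked numerically at any point of the complex plane (the test point below is an arbitrary illustration):

```python
import cmath

z = 0.7 + 0.3j  # arbitrary test point; any complex z works

# cos(iz) = cosh z  and  sin(iz) = i sinh z
err1 = abs(cmath.cos(1j * z) - cmath.cosh(z))
err2 = abs(cmath.sin(1j * z) - 1j * cmath.sinh(z))
# cosh(iz) = cos z  and  sinh(iz) = i sin z
err3 = abs(cmath.cosh(1j * z) - cmath.cos(z))
err4 = abs(cmath.sinh(1j * z) - 1j * cmath.sin(z))

print(max(err1, err2, err3, err4))  # effectively zero (rounding error only)
```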
If z_1 = r_1(\cos \theta_1 + i\sin\theta_1) and z_2 = r_2(\cos \theta_2 + i\sin\theta_2) then
z_1z_2 = r_1r_2(\cos (\theta_1 + \theta_2) + i\sin (\theta_1 + \theta_2)) \quad and \quad \displaystyle {z_1\over z_2} = {r_1\over r_2}(\cos (\theta_1 - \theta_2) + i\sin (\theta_1 - \theta_2))
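That is, under multiplication the moduli multiply and the arguments add, which can be confirmed with the standard library's polar-form helpers (the moduli and arguments below are illustrative values):

```python
import cmath

z1 = cmath.rect(2.0, 0.5)  # r1 = 2, theta1 = 0.5
z2 = cmath.rect(3.0, 0.9)  # r2 = 3, theta2 = 0.9

r, theta = cmath.polar(z1 * z2)
print(r, theta)  # modulus 2 * 3 = 6, argument 0.5 + 0.9 = 1.4
```

Note that when \theta_1 + \theta_2 falls outside (-\pi, \pi] , cmath.polar returns the equivalent principal argument.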
To find the inverse of a non-singular matrix, transpose the matrix of cofactors and divide by the determinant.
A matrix is called singular if its determinant is zero.
Only non-singular square matrices have an inverse.
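For a 2 × 2 matrix the cofactor recipe can be written out in full (the helper name below is a hypothetical illustration, not from the text):

```python
def inverse_2x2(m):
    """Invert [[a, b], [c, d]] via cofactors: adjugate divided by determinant."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    # cofactor matrix is [[d, -c], [-b, a]]; its transpose (the adjugate)
    # is [[d, -b], [-c, a]], and dividing by det gives the inverse
    return [[d / det, -b / det], [-c / det, a / det]]

m = [[4, 7], [2, 6]]
inv = inverse_2x2(m)
# m times inv should be the identity matrix
prod = [[sum(m[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(inv)  # [[0.6, -0.7], [-0.2, 0.4]]
```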
To find a 3 × 3 matrix which represents a linear transformation T, you find T({\bf i}) , T({\bf j}) and T({\bf k}) and make these the first, second and third columns respectively of the matrix.
If {\bf A} is a matrix and {\bf v} is a non-zero vector such that {\bf Av} = \lambda{\bf v} where \lambda is a scalar, then {\bf v} is called an eigenvector of {\bf A} and \lambda is the corresponding eigenvalue.
|{\bf A} - \lambda{\bf I}| = 0 is the characteristic equation of the matrix {\bf A}. The solutions of this equation give the eigenvalues of {\bf A}.
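For a 2 × 2 matrix the characteristic equation is a quadratic in \lambda , so the eigenvalues follow from the quadratic formula (the matrix below is an illustrative choice, not from the text):

```python
import math

# For A = [[a, b], [c, d]], expanding |A - lambda I| = 0 gives
# lambda^2 - (a + d) lambda + (ad - bc) = 0
a, b, c, d = 2, 1, 1, 2
trace, det = a + d, a * d - b * c
disc = math.sqrt(trace ** 2 - 4 * det)
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2
print(lam1, lam2)  # 3.0 1.0

# Verify A v = lambda v for the eigenvector (1, 1) belonging to lam1 = 3
v = (1, 1)
Av = (a * v[0] + b * v[1], c * v[0] + d * v[1])
assert Av == (lam1 * v[0], lam1 * v[1])
```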
\displaystyle {1\over \sqrt {(a^2+b^2+c^2)}} \pmatrix{a\\ b\\ c} is the normalised form of the eigenvector \pmatrix{a\\ b\\ c} .
The eigenvectors {\bf v}_1 and {\bf v}_2 are orthogonal if
{\bf v}_1 \cdot {\bf v}_2 = 0
The matrix \displaystyle \pmatrix{a& b& c\\ d& e& f\\ g& h& i} is orthogonal if \pmatrix{a\\ d\\ g} , \pmatrix{b\\ e\\ h} and \pmatrix{c\\ f\\ i} are each normalised vectors and if \pmatrix{a\\ d\\ g} \cdot \pmatrix{b\\ e\\ h} = 0 , \pmatrix{a\\ d\\ g} \cdot \pmatrix{c\\ f\\ i} = 0 and \pmatrix{b\\ e\\ h} \cdot \pmatrix{c\\ f\\ i} = 0 .
If {\bf A} is orthogonal then {\bf A}^{\rm T} = {\bf A}^{-1} .
A square matrix whose elements are all zero except those on the leading diagonal is called a diagonal matrix.
\displaystyle \pmatrix{a& 0\\ 0& b} and \displaystyle \pmatrix{a& 0& 0\\ 0& b& 0\\ 0& 0& c} are diagonal matrices.
A matrix {\bf A} which has {\bf A}^{\rm T} = {\bf A} is symmetric.
If {\bf A} is symmetric and {\bf P} is an orthogonal matrix whose columns are the normalised, orthogonal eigenvectors of {\bf A}, then {\bf P}^{\rm T}{\bf AP} is diagonal.
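As a small worked sketch (the matrix and its eigenvectors are illustrative choices, not from the text), the symmetric matrix {\bf A} = \pmatrix{2& 1\\ 1& 2} has eigenvalues 1 and 3 with orthogonal eigenvectors (1, -1) and (1, 1):

```python
import math

s = 1 / math.sqrt(2)
A = [[2, 1], [1, 2]]
P = [[s, s], [-s, s]]  # columns: (1,-1)/sqrt2 and (1,1)/sqrt2, both unit length

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Pt = [[P[j][i] for j in range(2)] for i in range(2)]  # transpose = inverse here
D = matmul(matmul(Pt, A), P)

# P^T A P is diagonal, with the eigenvalues on the leading diagonal
print([[round(x, 10) for x in row] for row in D])  # [[1.0, 0.0], [0.0, 3.0]]
```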
{\bf a} \times {\bf b} = |{\bf a}||{\bf b}|\sin \theta \,\hat {\bf n}
where \theta is the angle between {\bf a} and {\bf b} and \hat {\bf n} is a unit vector perpendicular to both {\bf a} and {\bf b} which is in the direction that a right-handed corkscrew would move when turned from {\bf a} to {\bf b}.
{\bf a} \times {\bf b} = -{\bf b} \times {\bf a}
{\bf i} \times {\bf j} = {\bf k}
{\bf j} \times {\bf k} = {\bf i}
{\bf k} \times {\bf i} = {\bf j}
where {\bf i}, {\bf j}, {\bf k} are the unit vectors in the directions of the positive x-, y- and z-axes respectively.
{\bf i} \times {\bf i} = {\bf 0}
{\bf j} \times {\bf j} = {\bf 0}
{\bf k} \times {\bf k} = {\bf 0}
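These products all follow from the component form of the cross product, which can be checked directly (the helper name is a hypothetical illustration, not from the text):

```python
def cross(a, b):
    """Component form: a x b = (a2 b3 - a3 b2, a3 b1 - a1 b3, a1 b2 - a2 b1)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert cross(i, j) == k
assert cross(j, k) == i
assert cross(k, i) == j
assert cross(i, i) == (0, 0, 0)

# a x b = -(b x a)
a, b = (1, 2, 3), (4, 5, 6)
assert cross(a, b) == tuple(-x for x in cross(b, a))
print(cross(a, b))  # (-3, 6, -3)
```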
The equation of the line through the point A, with position vector {\bf a}, parallel to the vector {\bf b}, where {\bf r} is the position vector of a general point R on the line, is
({\bf r} - {\bf a}) \times {\bf b} = {\bf 0}
The equation of the plane containing the points A and R, with position vectors {\bf a} and {\bf r} respectively, is {\bf r} \cdot {\bf n} = p , where p = {\bf a} \cdot {\bf n} and {\bf n} is a vector perpendicular to the plane.
The vector equation of a plane passing through the point with position vector {\bf a}, where {\bf b} and {\bf c} are non-parallel vectors in the plane, neither of which is zero, is
{\bf r} = {\bf a} + \lambda{\bf b} + \mu{\bf c} ,
where \lambda and \mu are scalars.
The distance d from the origin to the plane containing the point with position vector {\bf r} is
d = {\bf r} \cdot \hat {\bf n}
where \hat {\bf n} is a unit vector perpendicular to the plane.
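As a numerical illustration (the plane below is an arbitrary choice, not from the text), take the plane through the point (1, 2, 2) with normal vector (2, -1, 2):

```python
import math

r = (1, 2, 2)    # position vector of a point in the plane
n = (2, -1, 2)   # vector perpendicular to the plane, |n| = 3

n_len = math.sqrt(sum(x * x for x in n))
# d = r . n_hat = (r . n) / |n|
d = sum(ri * ni for ri, ni in zip(r, n)) / n_len
print(round(d, 4))  # 1.3333, i.e. r . n = 4 divided by |n| = 3
```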
The acute angle \theta between a line, with direction vector {\bf b}, and a plane with normal vector {\bf n} is given by
\displaystyle \sin \theta = {|{\bf b} \cdot {\bf n}|\over |{\bf b}||{\bf n}|}
The acute angle \theta between two planes is given by
\displaystyle \cos \theta = {|{\bf n}_1 \cdot {\bf n}_2|\over |{\bf n}_1||{\bf n}_2|}
where {\bf n}_1 is a vector perpendicular to one of the planes and {\bf n}_2 is a vector perpendicular to the other plane.
The shortest distance between the lines with equations {\bf r} = {\bf a} + \lambda{\bf b} and {\bf r} = {\bf c} + \mu{\bf d} where \lambda, \mu are scalars is given by
\displaystyle {|({\bf c} - {\bf a}) \cdot ({\bf b} \times {\bf d})|\over |{\bf b} \times {\bf d}|}
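The shortest distance is the length of the projection of {\bf c} - {\bf a} onto the common perpendicular {\bf b} \times {\bf d} . As a sketch with lines chosen so the answer is obvious (these vectors are illustrative, not from the text):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# First line is the x-axis; second runs parallel to the y-axis at height
# z = 5, so the shortest distance between them should be 5.
a, b = (0, 0, 0), (1, 0, 0)
c, d = (0, 0, 5), (0, 1, 0)

n = cross(b, d)                                # common perpendicular direction
diff = tuple(ci - ai for ci, ai in zip(c, a))  # c - a
dist = abs(dot(diff, n)) / math.sqrt(dot(n, n))
print(dist)  # 5.0
```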
A proof by mathematical induction consists of showing that if a theorem is true for some particular integral value of n, say n = k , then it is true for n = k + 1 . You also need to show that the theorem is true for some starting value of n such as n = 1 (or n = 2 , etc.). Then, since it is true for n = k + 1 whenever it is true for n = k , and it is true for n = 1 , it is true for n = 1 + 1 = 2 , n = 2 + 1 = 3 , and so on for all positive integral n.
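A loop is not a proof, but it mirrors the inductive structure: start from the base case and repeatedly apply the step. Here, as an illustration, is the classic claim 1 + 2 + ... + n = n(n + 1)/2 checked for the first hundred values of n:

```python
# Base case: total = 0 before any terms are added.
total = 0
for n in range(1, 101):
    total += n  # inductive step: extend the sum from n - 1 terms to n terms
    assert total == n * (n + 1) // 2  # the formula holds at every stage

print(total)  # 5050
```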