math_insights

Number theory

Primes are the atoms of the integers.

Fundamental theorem of arithmetic

Every integer greater than 1 can be written as a product of primes in exactly one way (up to the order of the factors).

Absolute values and inequalities

Remember to flip inequalities when taking the reciprocal of both sides (when both sides have the same sign) or when multiplying/dividing both sides by a negative.

|-a| = |a|

|ab| = |a||b|

|a/b| = |a|/|b|

|a + b| <= |a| + |b| Triangle inequality

\sqrt{ x^2 } = |x|

|x| < a \iff -a < x < a

a < b \iff b - a > 0

b < a \iff a - b > 0

a < b \iff a + c < b + c

a < b \iff a - c < b - c
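A quick worked check of the flipping rules (my own illustrative numbers): 2 < 3, but multiplying both sides by -1 flips it to -2 > -3, and taking reciprocals of the (positive) sides flips it to 1/2 > 1/3.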

Radicals

Taking the nth root is the same as raising to the power 1/n, i.e. it multiplies the exponent by 1/n: \sqrt[n]{x^m} = x^{m/n}

To rationalise a denominator just multiply by the conjugate.

Absolute values when an even root leaves an odd power inside the radical

Remember when you have an even root radical like sqrt(x^2), where the result would be a value to an odd power, you need to take the absolute value of the result: sqrt(x^2) = |x|

However sqrt(x^4) = x^2; we don't need the absolute value here since x^2 has an even power and is already non-negative, so it is the same as taking the absolute value: x^2 = |x^2|

Also sqrt(x^6) = |x^3|, but sqrt(x^8) = x^4 (no bars needed, since x^4 >= 0).
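A minimal numeric check of these radical identities (a sketch in Python; the sample values are my own):

import math

for x in [-3.0, -1.5, 2.0]:
    assert math.sqrt(x**2) == abs(x)      # even root, odd power left over: need |x|
    assert math.sqrt(x**4) == x**2        # even power left over: already non-negative
    assert math.sqrt(x**6) == abs(x**3)   # odd power left over: need the absolute value
print("radical/absolute value identities hold for the sample values")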

Factorials

Important rule for factorials

(n + 1) ! = (n+1) * n!

So

n!/(n+1)! = n!/((n+1) * n!) = 1/(n+1)
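A quick check of both rules (a sketch using Python's math.factorial; n = 6 is an arbitrary choice):

import math

n = 6
assert math.factorial(n + 1) == (n + 1) * math.factorial(n)       # (n+1)! = (n+1) * n!
assert math.factorial(n) / math.factorial(n + 1) == 1 / (n + 1)   # n!/(n+1)! = 1/(n+1)
print("factorial identities hold for n =", n)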

Link between Binomial coefficients and binomial expansion

If we have (a+b)^n this is a binomial expression

Then this is expanded without coefficients as a^n*b^0 + a^(n-1)*b^1 + a^(n-2)*b^2 + ... + a^0*b^n. Now each coefficient is the binomial coefficient n choose k, where k is the power of b.

N choose K, aka the binomial coefficient, comes into this because we are multiplying out binomial products such as (a+b)(a+b)(a+b).

Each term of the expansion comes from picking either the a or the b out of each of the n factors (a+b) and multiplying the picks together. The coefficient of a^(n-k) b^k is therefore the number of ways to pick b from exactly k of the n factors. The binomial coefficient gives you the coefficients of the binomial expansion because it counts the number of ways there are to pick K things from N things.
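A sketch of the link in Python: math.comb(n, k) is n choose k, and building the expansion from it matches evaluating (a+b)^n directly (the values a = 2, b = 3, n = 5 are my own):

import math

a, b, n = 2, 3, 5
# binomial theorem: (a+b)^n = sum over k of C(n,k) * a^(n-k) * b^k
expansion = sum(math.comb(n, k) * a**(n - k) * b**k for k in range(n + 1))
assert expansion == (a + b)**n
print((a + b)**n, "==", expansion)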

Complex Numbers

Addition of complex numbers composes translations (shifts); multiplication composes rotations (together with scalings, when the modulus is not 1).

  • zincy__: I think of it as the exponent is the composition of rotations
  • yeah, multiplying by e^(ix) rotates by x radians, so composition of rotations is why e^(i(x+y)) = e^(ix) e^(iy)
  • zincy__: I guess negating the exponent will always give a flip on the x axis?
  • yes
  • zincy__: I love complex numbers
  • the exponent represents the angle, and negative angles go in the opposite direction

  • zincy__: So if you are taking the real part of a fraction, and the denominator is a real number, say (e^i - 1) / (-4cos(1) + 4), can we split the fraction and drop the imaginary part?
  • if the bottom is a real number, then you can just take the real part of the top

Reason: if a, b, c are real, then (a+bi)/c = a/c + (b/c)i, with real part a/c = Re(top) / bottom.

Puzzle: it's also a somewhat interesting puzzle to figure out why multiplying by e^(ix) is a rotation by angle x, assuming that e^(ix) = cos(x) + i sin(x).
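A small sketch of the rotation picture using Python's cmath (the starting point and angle are my own choices): multiplying by e^(ix) rotates a point by x radians without changing its distance from the origin.

import cmath, math

z = 1 + 1j                        # starting point, at angle pi/4 from the positive real axis
x = math.pi / 2                   # rotate by 90 degrees
rotated = z * cmath.exp(1j * x)   # multiply by e^(ix)

assert math.isclose(abs(rotated), abs(z))                      # modulus preserved
assert math.isclose(cmath.phase(rotated), cmath.phase(z) + x)  # angle increased by x
print(rotated)                    # approximately -1+1j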

Analysis

Can think of the infimum as a limit: even when the infimum of a set is not attained, there is a sequence of elements of the set converging to it.

Convergence (sequences)

Add section on proving convergence of sequence using definition.

Boundedness of convergent sequences theorem

http://mathonline.wikidot.com/the-boundedness-of-convergent-sequences-theorem

Convergence proof using definition of limit (epsilon delta proofs)

N can be thought of as a function from epsilon to some natural number N, where taking n > N means successive terms lie in the epsilon neighbourhood of the limit l; formally, l - eps < a_n < l + eps.
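A sketch of the "N as a function of epsilon" idea for a_n = 1/n with limit l = 0 (the sequence and the epsilon values are my own choices):

import math

def N(eps):
    # for a_n = 1/n and l = 0: any n > 1/eps gives |a_n - l| < eps
    return math.ceil(1 / eps)

for eps in [0.1, 0.01, 0.001]:
    n = N(eps) + 1                 # pick some n > N(eps)
    assert abs(1 / n - 0) < eps    # the term lies in the epsilon neighbourhood of the limit
    print(f"eps={eps}: N={N(eps)}")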

Subsequences

A subsequence of a sequence is where we replace the k in a_k with a_{f(k)}, where f : N -> N is strictly increasing.

For example (a_{2k}) would be the even-indexed terms of the sequence (a_k), and (a_{2k+1}) would be the odd-indexed terms of the sequence.

Every sequence is a subsequence of itself. This subsequence would just be the identity map for the subscript index.

Remember - some subsequences of (a_k) converging doesn't imply (a_k) converges; think of ((-1)^n), where the even and odd subsequences converge but the sequence itself diverges. However, if a subsequence of (a_k) diverges then (a_k) diverges.

All subsequences of (a_k) must converge to the same limit otherwise the original sequence (a_k) diverges! Think of ((-1)^n) as an example.

Consider 1/(3n+1). Does it converge? Well, it is a subsequence of 1/n, and 1/n converges, therefore its subsequence 1/(3n+1) must converge.

Note (−1)^n is 1 for even n and −1 for odd n. So 1 + (−1)^n is 2 for even n and 0 for odd n. Therefore, if a_n = 1 + (−1)^n, then a_{2n} = 2 and a_{2n+1} = 0. Since the subsequences a_{2n} and a_{2n+1} have different limits (lim a_{2n} = 2 and lim a_{2n+1} = 0), the limit lim a_n does not exist.
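A quick numeric illustration of those two subsequences (my own snippet):

def a(n):
    return 1 + (-1)**n

evens = [a(2 * n) for n in range(1, 6)]      # a_{2n}   = 2, 2, 2, ...
odds = [a(2 * n + 1) for n in range(1, 6)]   # a_{2n+1} = 0, 0, 0, ...
print(evens, odds)  # two constant subsequences with different limits, so lim a_n does not exist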

Algebra of limits

Remember when you are taking the limit of a fraction you can divide every term in the fraction by the same value and then use algebra of limits to obtain the final limit.

\begin{proof}
\begin{align*}
  & \lim_{k \to \infty} \left| \dfrac{k}{k+1} \right| \\
  &= \lim_{k \to \infty} \dfrac{k}{k+1} \quad\quad \impliedby k \ge 0 \\
  &= \lim_{k \to \infty} \dfrac{\dfrac{k}{k}}{\dfrac{k}{k} + \dfrac{1}{k}} \\
  &= \lim_{k \to \infty} \dfrac{1}{1 + \dfrac{1}{k}} \\
  &= \dfrac{\lim(1)}{\lim(1) + \lim\left(\dfrac{1}{k}\right)} \quad\quad \impliedby \text{algebra of limits} \\
  &= \dfrac{1}{1 + 0} \\
  &= 1
\end{align*}
\end{proof}

Cheatsheet on sequence convergence/divergence http://mathonline.wikidot.com/strategies-for-determining-the-convergence-or-divergence-of

infinite series

We can view a series as a sequence of partial sums, so all our tools from sequences carry over: subsequences, sandwiching, etc.

It is important to be clear about the difference between:

(i) the sequence of terms (a_1, a_2, a_3, ...)
(ii) the sequence of partial sums (s_1, s_2, s_3, ...), where s_n = a_1 + a_2 + ··· + a_n

An important observation: a_n = s_n − s_{n−1} for n >= 2.

The sum of an infinite series is equal to the limit of partial sums.

Most often we don't have a formula for the partial sums so we have to use the convergence tests for series (Ratio, Root test and friends)

Remember that as a sequence (1/n) converges, but the harmonic series \sum 1/n diverges. Easy to forget!

Intuition behind why the harmonic series \sum 1/n diverges.

Treat the divergence of the harmonic series as a philosophical discovery: even when the pieces you are adding get ever smaller, the sum can still be infinitely large, because there are infinitely many pieces.

As an analogy, think of a stack of books. You can stack books infinitely high by pushing each new one on top out by an incrementally smaller amount. Your stack then grows a bit horizontally for each book you put on top, even though the horizontal increase gets smaller and smaller. No matter how small the increase gets, you have infinitely many books left with which to make up the horizontal gain you were getting n books ago.

Think of Hilbert's hotel - infinitely many rooms and the hotel is full, yet you can send each guest to the next room along and there will always be a room free. Anyway.

Just because you are adding infinitely many smaller and smaller pieces of something does not mean these pieces will all fit in a finite box; you may need an infinite box. The harmonic series is a case of infinitely many shrinking pieces that still add up to an infinite quantity.

You can regroup the terms of the harmonic series into blocks each summing to at least 1/2: 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ... >= 1 + 1/2 + 1/2 + 1/2 + ...

See here: https://socratic.org/questions/how-do-you-show-that-the-harmonic-series-diverges. The smaller the terms get, the more of them we need to make up the next 1/2, but we have infinitely many terms so there are always more. Think of Hilbert's hotel thought experiment.
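A sketch showing the partial sums of \sum 1/n creeping upward without bound, roughly like ln(n) (the cutoffs are my own):

import math

s = 0.0
for n in range(1, 10**6 + 1):
    s += 1 / n
    if n in (10, 10**3, 10**6):
        print(f"n={n}: partial sum = {s:.4f}, ln(n) = {math.log(n):.4f}")
# the partial sums keep growing (like ln n + 0.5772...), so the series diverges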

Lim sup and lim inf

Tails/shift and monotonicity still apply to series.

Can use monotonic and bounded to show convergence of series too (applied to the sequence of partial sums), and can do this on subsequences too.

Can also use the tails lemma or the shift rule to show convergence of a series by showing convergence of its tail.

Add all this to notes and verify

Theorem: the partial sums of a series of non-negative terms monotonically increase

Assume a_k is real. Then (s_n) is monotonically increasing if and only if a_k >= 0 (for k >= 2). Remember (s_n) is the sequence of partial sums, not the sequence of terms itself, which is (a_n).

convergence of infinite series

Remember \sum (-1)^{k+1} diverges. You simply cannot parenthesise infinitely many terms like this: (1-1) + (1-1) + ... The partial sums (1, 0, 1, 0, ...) form a divergent sequence. Also the terms (-1)^{k+1} do not converge to 0, so this series has no hope of converging.

Telescoping series \sum (a_{n+1} - a_n) have partial sums that collapse because the intermediate terms cancel: s_N = a_{N+1} - a_1. So the series converges exactly when (a_n) converges.

An infinite series converges iff its partial sums converge to some finite number.

If the sequence of terms of an infinite series does not converge to 0 then the series diverges. Otherwise, if the terms do converge to 0, all bets are off and we don't know whether the series converges or diverges. So this test only tells us something about the series when the terms do not approach 0.

To prove convergence of an infinite series we have to either show that the sequence of partial sums converges or use one of the series tests for convergence. Generally we won't be able to compute what value the series sums to.

Here is a 4 hour video where blackpenredpen shows the behaviour of 100 infinite series: https://www.youtube.com/watch?v=jTuTEcwvkP4&ab_channel=blackpenredpen

Convergence tests for infinite series

Useful cheatsheet on tests http://www.toomey.org/tutor/harolds_cheat_sheets/Harolds_Series_Convergence_Tests_Cheat_Sheet_2016.pdf

Geometric Series

For a series with a common ratio r, as n goes to infinity the absolute value of r must be less than one for the series to converge; in that case \sum_{n=0}^{\infty} a r^n = a / (1 - r).

A repeating decimal can be thought of as a geometric series whose common ratio is a power of 1/10. For example:

0.7777... = 7/10 + 7/100 + 7/1000 + 7/10000 + ... = (7/10) / (1 - 1/10) = 7/9
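A sketch checking the repeating decimal against the geometric sum formula a/(1-r), with a = 7/10 and r = 1/10 (snippet mine):

a, r = 7 / 10, 1 / 10
partial = sum(a * r**n for n in range(20))   # 0.7 + 0.07 + 0.007 + ...
closed_form = a / (1 - r)                    # = 7/9 = 0.777...
print(partial, closed_form)                  # both approximately 0.7778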

Arithmetic Series

An arithmetic series converges only in the degenerate case 0 + 0 + 0 + ...; any non-zero first term or common difference gives terms that don't tend to 0. So if an arithmetic series converges then its sum is necessarily 0. See the nth-term test for divergence.

p series

p-series are a specific type of series of the form \sum 1/n^p.

If p > 1, the series will converge, or in other words, the series will add up to a finite value.

If 0 < p ≤ 1, the series will diverge

Alternating series

An alternating series is of the form \sum (-1)^{k+1} a_k = -a_0 + a_1 - a_2 + a_3 - ... where the a_k are non-negative.

If both \sum a_{2k} and \sum a_{2k+1} converge, the limit exists. Remember that \sum a_k = \sum a_{2k} + \sum a_{2k+1} \implies \sum a_k = L_1 + L_2, a finite value, i.e. \sum a_k converges.

If only one of the two is convergent, the sum will diverge. When they're both divergent all bets are off and you need to look more closely.

absolute vs conditional convergence

Convergence generally refers to absolute convergence.

Absolute convergence means \sum |a_k| converges; equivalently, all arrangements/permutations of the terms result in the same finite sum.

In other words the sum of the infinite series is not changed by moving terms around. Rearranging terms can only change the sum when the series has both negative and positive terms.

Conditional convergence means the series converges only in some arrangements: rearranging the terms can result in a different sum (or in divergence).

Absolute convergence implies convergence

This is handy because you can use it to indirectly prove that series with negative terms converge: \sum |a_n| converges \implies \sum a_n converges.

Often this is useful for cases where you have negative terms in your series but you want to use the integral test which only works on series with positive terms.

Examples where "absolute convergence implies convergence" is handy: series like \sum sin(n)/n^2 or \sum cos(n)/n^2, where we can take the modulus of the trig factor and then prove the resulting positive series converges, which shows the original series converges. (Find more examples of this.)

This means that we can show a series converges by showing that it is absolutely convergent.

For example cos(n)/n^2 bounces around between negative and positive. But we can take the absolute value of each term and show it is at most 1/n^2, which is a p-series with p > 1; so \sum |cos(n)/n^2| is sandwiched between 0 and a convergent series and therefore the series converges absolutely.

It follows from absolute convergence that \sum cos(n)/n^2 converges.

Remember the sandwiching lemma! We can still use our tools from sequences on series, because partial sums are sequences!

Use the comparison test to show \sum |a_n| converges, i.e. the series converges absolutely, which then means it converges.
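A numeric sketch of the comparison (my own snippet): the partial sums of |cos(n)|/n^2 stay below those of the convergent p-series \sum 1/n^2.

import math

N = 10**5
abs_partial = sum(abs(math.cos(n)) / n**2 for n in range(1, N))
p_partial = sum(1 / n**2 for n in range(1, N))           # converges to pi^2/6
print(abs_partial, "<=", p_partial)                      # the comparison bound holds
print(sum(math.cos(n) / n**2 for n in range(1, N)))      # the original series' partial sum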

Divergence test - Or do terms -> 0 ?

If the terms of the series don't converge to 0 then the series won't converge. We need the terms to get smaller and smaller for the series to have a chance of converging.

However, crucially, if the terms do converge to 0 then all bets are off! This does not imply convergence or divergence; only failing to converge to 0 implies divergence. Think of the harmonic series \sum 1/n: its terms -> 0 and yet it diverges. This is because infinite series do not behave like finite sums.

Limit comparison test

Essentially we are looking at the ratio of terms of the two series and seeing if they behave similarly, i.e. does the limit of the ratio of terms between the two series converge to a finite positive value.

If a_n and b_n are both positive terms of their respective series, and the ratio of terms a_n / b_n approaches a positive and finite limit, then either \sum a_n and \sum b_n both diverge or they both converge.

When the ratio of corresponding terms settles down to a positive finite limit, the two series behave alike in the tail, so we can conclude they either both diverge or both converge.
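A numeric sketch of the test (series chosen by me): comparing a_n = 1/(n^2 + 1) with b_n = 1/n^2, the ratio tends to 1, and indeed both series converge.

for n in [10, 1000, 100000]:
    a_n = 1 / (n**2 + 1)
    b_n = 1 / n**2
    print(f"n={n}: a_n/b_n = {a_n / b_n:.6f}")   # approaches a positive finite limit (1)
# since \sum 1/n^2 converges (p-series, p = 2), \sum 1/(n^2 + 1) converges too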

Ratio test

We are examining the limit of the ratio of consecutive terms |a_{n+1}/a_n|. If this limit is less than 1 (the ratio keeps shrinking), the original series converges; if it is greater than 1 the series diverges; if it equals 1 the test is inconclusive.
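A sketch of the ratio test on \sum 1/n! (my own example): the ratio of consecutive terms tends to 0, which is less than 1, so the series converges.

import math

for n in [1, 5, 10, 20]:
    ratio = math.factorial(n) / math.factorial(n + 1)   # a_{n+1}/a_n for a_n = 1/n! equals 1/(n+1)
    print(f"n={n}: a_(n+1)/a_n = {ratio:.6f}")
# the limit of the ratio is 0 < 1, so the series converges (to e - 1)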

Root test

This is a very useful test. If you see things raised to the nth power in a series then the root test may be handy. Remember how nth roots work: (2^n)^(1/n) = 2^(n * 1/n) = 2^1 = 2.

Remember we must consider lim sup if we want to use the root test to prove a series diverges.
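A sketch of the root test (example series mine): for a_n = n/2^n, the nth root of a_n tends to 1/2 < 1, so \sum n/2^n converges.

for n in [10, 100, 1000]:
    a_n = n / 2**n
    print(f"n={n}: a_n^(1/n) = {a_n**(1 / n):.6f}")   # tends to 1/2 < 1, so the series converges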

l'hopital's rule

Useful when we are trying to find the limit of a quotient which evaluates to an indeterminate form, such as 0/0 or \infty/\infty.

We usually figure out derivatives by using limits. However l'hopital goes the other way and we use derivatives to figure out limits. Note l'Hopital can only be applied when we have a limit of a quotient which is in indeterminate form.

Indeterminate forms are forms such as the following: 0 \cdot (\pm\infty), \quad 1^{\infty}, \quad 0^0, \quad \infty^0, \quad \infty - \infty

\lim_{x \to a} \frac{f(x)}{g(x)} = \frac{0}{0} \quad \text{OR} \quad \lim_{x \to a} \frac{f(x)}{g(x)} = \frac{\pm\infty}{\pm\infty}

We are essentially looking at the derivatives to see which value is reached first: \lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}
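A standard worked example (mine, not from the original notes): the 0/0 form sin(x)/x at x = 0.

\lim_{x \to 0} \frac{\sin x}{x} = \frac{0}{0} \text{ (indeterminate)}, \quad \text{so} \quad \lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = 1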

Integration

Integration can be thought of as the effect of a continuous process of change.

Integration by substitution feels like function composition because it essentially views functions as compositions of functions and relates to the chain rule. You go from thinking of f(x) to f(x(u)), writing f entirely in terms of u.
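A worked u-substitution (my own example) showing the f(x) -> f(x(u)) switch:

\int 2x (x^2 + 1)^{17} \, dx \quad \text{with } u = x^2 + 1, \; du = 2x \, dx

= \int u^{17} \, du = \frac{u^{18}}{18} + C = \frac{(x^2 + 1)^{18}}{18} + C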


Chain rule

We view a function as a composition of smaller functions, differentiate these smaller functions, and combine the results to end up with f'.

Say we want to differentiate f(x) = (x^2 + 1)^17

Then split this function up into a composition of smaller functions: h(g(x)) = (g(x))^17 = (x^2 + 1)^17.

Which we then differentiate, multiplying the derivatives (chain rule): f'(x) = 17(x^2 + 1)^16 * 2x

Which we can view with a diagram: x --g--> x^2 + 1 --h--> (x^2 + 1)^17

Chain Rule - Definition If f(x) = h(g(x)) then f'(x) = h'(g(x)) × g'(x)
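A numeric sanity check of that derivative (a sketch; the evaluation point and finite-difference step are my choices):

def f(x):
    return (x**2 + 1)**17

def f_prime(x):
    return 17 * (x**2 + 1)**16 * 2 * x   # the chain rule result from above

x, h = 0.5, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central finite difference
print(numeric, f_prime(x))                  # the two values agree closely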

Power series

Power series are a form of infinite series. For convergence, the ratio and root tests tend to be the most frequently used.

Useful handout https://www.math.uh.edu/~jiwenhe/Math1432/lectures/lecture27_handout.pdf

Notice n starts at 0 typically.

A power series is a series of the form \sum \limits_{n=0}^{\infty} a_n (x - a)^n

Geometric series are a special case of power series

Remember what a geometric series looks like \sum \limits_{n=0}^{\infty} ax^n = ax^0 + ax^1 + ax^2 + ...

The geometric series, a power series centred at 0, converges for |x| < 1.

The geometric series is the special case of a power series where the centre is c = 0 and all the coefficients equal a. In that case you get the usual geometric series criteria: the power series converges for |x| < 1, diverges for |x| > 1, and the sum is just a/(1-x). So the interval of convergence is -1 < x < 1 and the radius of convergence is 1. This case of the power series is centred at 0. The radius of convergence is how far we can stray from the centre and still converge; this distance from the centre is the radius (half the interval of convergence).

The power series is said to be centred at a. When x = a the power series always converges, because you end up with

\sum \limits_{n=0}^{\infty} a_n (x - a)^n = a_0 (x-a)^0 + a_1 (x-a)^1 + a_2 (x-a)^2 + ... = a_0 \cdot 1 + a_1 \cdot 0 + a_2 \cdot 0 + ... = a_0

So if x = a then we always get convergence, because we are setting x to the centre of the power series and the series evaluates to a_0, since all the remaining terms are multiplied by 0.

"Centred at a" means that at x = a the power series does not depend on x: if x and a are the same then (x - a) = 0 appears in every term except the constant one.

In general the radius of convergence is the "size" of the interval where the series converges. A series will fall into one of three categories (below). We talk of the disc of convergence for complex power series and the interval of convergence for real power series.

(i) The series converges for all real numbers; we say the radius of convergence is \infty.
(ii) The series converges on an interval from a to b (possibly including the endpoints); the radius of convergence is (b − a)/2, half the length of the interval.
(iii) The series converges only at one number a; we say the radius of convergence is 0.

So there is always a radius of convergence.

A power series can be thought of as an infinite polynomial, which is just a function. Inside the circle of convergence we can treat a power series like a polynomial of degree \infty for the purposes of differentiation.

Unlike a regular infinite series, whose sum/value does not depend on a variable, a power series has an x variable in it, so its value depends on the value of x and it can be thought of as a function of x.

Power series can therefore be used to represent functions. Although a power series which fails to converge is useless for representing a function.

f(x) = \sum \limits_{n=0}^{\infty} a_n (x - a)^n

Being able to represent a function by an “infinite polynomial” is a powerful tool. Polynomial functions are the easiest functions to analyze, since they only involve the basic arithmetic operations of addition, subtraction, multiplication, and division.

Radius of convergence

The radius of convergence for the reals can be thought of as a one dimensional interval on the real number line containing the values of x for which the power series converges.

It is called a "radius" because in the complex plane the region of convergence is a two dimensional disc, and this is literally its radius. For the reals you can think of the radius as half of a one dimensional interval on the real number line.

The radius of convergence is half the length of the interval of convergence.

Interval of convergence

If a power series (centred at 0) converges at x_0 then it converges on the open interval (-|x_0|, |x_0|).

Check endpoints separately for interval of convergence to check if open/ closed or semi-closed interval.

Importantly, convergence is only guaranteed inside the open interval. You need to check both ends independently, by plugging the x value back in, to see whether the endpoints are included (closed) in the interval of convergence, e.g. 1 < x < 2 vs 1 <= x <= 2 vs 1 <= x < 2.

To get the radius of convergence we are essentially asking what is half the length of the interval of x for which this power series converges.

The radius can be 0, i.e. the power series only converges at its centre x = a, or the radius can be \infty, i.e. the power series converges for all values of x.
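A sketch for the power series \sum x^n / n (my own example; its radius of convergence is 1): inside the interval the partial sums settle, the endpoint x = -1 converges (alternating harmonic series), and the endpoint x = 1 diverges (harmonic series), so the interval of convergence is -1 <= x < 1.

def partial_sum(x, N=10**5):
    return sum(x**n / n for n in range(1, N))

print(partial_sum(0.5))    # inside the interval: settles near -ln(1 - 0.5) = 0.693...
print(partial_sum(-1.0))   # endpoint converges: near -ln(2) = -0.693...
print(partial_sum(1.0))    # endpoint diverges: grows like ln(N)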

Differentiation of power series

Can be used to find sum of a power series (or a series if you substitute x)

Look at this

https://math.stackexchange.com/a/811447/652394

Differentiation in this case is essentially just a method of transforming a known power series (like the geometric series and its sum) into the "target" power series whose sum you want to find.
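The classic instance of this transformation (a standard example, not spelled out in the notes): differentiate the geometric series term by term inside |x| < 1.

\sum_{n=0}^{\infty} x^n = \frac{1}{1-x} \quad \implies \quad \sum_{n=1}^{\infty} n x^{n-1} = \frac{d}{dx} \left( \frac{1}{1-x} \right) = \frac{1}{(1-x)^2}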

What are polynomials?

Best intuition for polynomials is to think of them as functions.

"a ring is what you get, when you have any set of things (that you decide to call "numbers"), with operations that you call "addition" and "multiplication", that satisfy a few specific laws/properties that feel familiar from arithmetic over e.g. integers. then, you get polynomials by throwing in one (or more) indeterminate into the mix, (conceptually) standing for "an arbitrary number" - ski

Why are polynomials interesting?

https://betterexplained.com/articles/intuition-for-polynomials/

Math is a language

  • ski: math is (partly) a language
  • CGILightning: ski, what's the other part of math?
  • ski: well. you could talk about concepts, mental constructions, ideas, visualization, imagination, creativity, hunches, analogy, inspiration, intuition, beauty, symmetry, balance, courage, making up your own rules, following whereever things lead, trying things out, adjusting, starting over, experimentation, gathering experience, &c.