- Use the alternating series test to check if an alternating series converges
- Understand the difference between absolute and conditional convergence
The Alternating Series Test
The Main Idea
Alternating series have terms that flip between positive and negative, creating a fundamentally different convergence behavior than series with all positive terms. The alternating signs can actually help a series converge even when the corresponding positive series diverges.
Alternating series have the form [latex]\sum_{n=1}^{\infty}(-1)^{n+1}b_n = b_1 - b_2 + b_3 - b_4 + \cdots[/latex] or [latex]\sum_{n=1}^{\infty}(-1)^n b_n = -b_1 + b_2 - b_3 + b_4 - \cdots[/latex], where [latex]b_n \geq 0[/latex].
The Alternating Series Test: An alternating series converges if:
- The terms [latex]b_n[/latex] are eventually decreasing: [latex]b_{n+1} \leq b_n[/latex] for all [latex]n[/latex] beyond some index
- The terms approach zero: [latex]\lim_{n \to \infty} b_n = 0[/latex]
Why does alternating help? The partial sums oscillate in a controlled way. Odd partial sums form a decreasing sequence bounded below, while even partial sums form an increasing sequence bounded above. Both converge to the same limit.
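This bracketing behavior can be observed directly. Below is a minimal Python sketch using the alternating harmonic series [latex]\sum_{n=1}^{\infty}(-1)^{n+1}/n[/latex] as the example (an illustrative choice, not one fixed by the lesson):

```python
# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
def partial_sum(N):
    """Sum of the first N terms of sum_{n>=1} (-1)^(n+1) / n."""
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

odd_sums = [partial_sum(N) for N in (1, 3, 5, 7)]    # S_1, S_3, S_5, S_7
even_sums = [partial_sum(N) for N in (2, 4, 6, 8)]   # S_2, S_4, S_6, S_8

# Odd partial sums decrease; even partial sums increase ...
assert all(a > b for a, b in zip(odd_sums, odd_sums[1:]))
assert all(a < b for a, b in zip(even_sums, even_sums[1:]))
# ... and every even partial sum lies below every odd one, so the two
# sequences squeeze toward a common limit (ln 2, for this series).
assert max(even_sums) < min(odd_sums)
```

Each new term flips the partial sum to the other side of the limit, which is exactly the "controlled oscillation" described above.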
The alternating signs create “partial cancellation” that allows the series to converge even when the absolute values of the terms don’t decrease fast enough for absolute convergence.
When the test fails: If [latex]b_n \not\to 0[/latex], the series diverges by the divergence test, regardless of the alternating pattern. If the terms don't eventually decrease, the test is inconclusive: the series may still converge or diverge, and you need another approach.
Determine whether the series [latex]\displaystyle\sum _{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}n}{{2}^{n}}[/latex] converges or diverges.
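Before watching the worked solution, a quick numeric probe can suggest the answer. The sketch below (a Python illustration; the values are computed here, not taken from the lesson) checks the two test conditions for [latex]b_n = n/2^n[/latex] and watches the partial sums settle near [latex]2/9[/latex], a value one can confirm by differentiating a geometric series:

```python
def b(n):
    """The positive part b_n = n / 2^n."""
    return n / 2 ** n

# Condition 1: terms eventually decrease (b_1 = b_2 = 1/2, then strictly down).
assert all(b(n + 1) <= b(n) for n in range(1, 50))
# Condition 2: terms tend to 0.
assert b(50) < 1e-12

def partial(N):
    """Partial sum of sum_{n>=1} (-1)^(n+1) * n / 2^n."""
    return sum((-1) ** (n + 1) * b(n) for n in range(1, N + 1))

# The partial sums stabilize near 2/9 ~ 0.2222, consistent with the
# convergence the alternating series test predicts.
assert abs(partial(60) - 2 / 9) < 1e-12
```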
For closed captioning, open the video on its original page by clicking the YouTube logo in the lower right-hand corner of the video display. In YouTube, the video will begin at the same starting point as this clip, but will continue playing until the very end.
You can view the transcript for this segmented clip of “5.5.1” here (opens in new window).
Remainder of an Alternating Series
The Main Idea
One of the biggest advantages of alternating series is that they come with built-in error estimates. When you approximate an alternating series with a partial sum, you can easily bound how much error you’re making.
The result: For an alternating series that passes the alternating series test, if you approximate the infinite sum [latex]S[/latex] using the [latex]N[/latex]th partial sum [latex]S_N[/latex], then the error satisfies:
[latex]|R_N| = |S - S_N| \leq b_{N+1}[/latex]
What does this mean? The error is at most the size of the very next term you would add. This is incredibly convenient compared to other series where error estimation can be quite complex.
Why does this work? The alternating nature creates a controlled oscillation. The partial sums “bracket” the true sum, alternating above and below it. The distance from [latex]S_N[/latex] to the true sum [latex]S[/latex] can’t be more than the distance to the next partial sum [latex]S_{N+1}[/latex].
Key Strategy: To get your error below a desired threshold, find the smallest [latex]N[/latex] such that [latex]b_{N+1}[/latex] is smaller than your error tolerance.
Important requirement: This estimate only works if the series satisfies the alternating series test conditions (terms decrease and approach zero). Without these conditions, the estimate doesn’t apply.
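The key strategy above can be sketched in a few lines of Python; the series [latex]\sum_{n=1}^{\infty}(-1)^{n+1}/n^2[/latex] and the tolerance below are illustrative choices:

```python
def smallest_N(b, tol):
    """Smallest N with b(N+1) < tol, so the estimate gives |R_N| <= b(N+1) < tol."""
    N = 1
    while b(N + 1) >= tol:
        N += 1
    return N

def b(n):
    return 1 / n ** 2   # terms of sum (-1)^(n+1) / n^2

# Need 1/(N+1)^2 < 1e-4, i.e. N + 1 > 100, so N = 100 terms suffice.
N = smallest_N(b, 1e-4)
assert N == 100
assert b(N + 1) < 1e-4 <= b(N)
```

Note how fast the required [latex]N[/latex] grows as the tolerance shrinks for slowly decaying terms; this is why alternating [latex]p[/latex]-series with small [latex]p[/latex] converge slowly.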
Find a bound for [latex]{R}_{20}[/latex] when approximating [latex]\displaystyle\sum _{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}}{n}[/latex] by [latex]{S}_{20}[/latex].
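Since the alternating harmonic series is known to sum to [latex]\ln 2[/latex], the bound in this example can be checked numerically. A Python sketch, assuming that known value:

```python
import math

def S(N):
    """Partial sum of the alternating harmonic series sum (-1)^(n+1) / n."""
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

true_sum = math.log(2)          # known value of the full series
R20 = abs(true_sum - S(20))

# The alternating series estimate promises |R_20| <= b_21 = 1/21.
assert R20 <= 1 / 21
```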
Absolute and Conditional Convergence
The Main Idea
When a series has both positive and negative terms, there are two distinct ways it can converge, and the difference between them reveals profound mathematical behavior.
The key distinction:
- Absolute convergence: Both [latex]\sum a_n[/latex] and [latex]\sum |a_n|[/latex] converge
- Conditional convergence: [latex]\sum a_n[/latex] converges but [latex]\sum |a_n|[/latex] diverges
Absolute convergence is stronger than regular convergence. If [latex]\sum |a_n|[/latex] converges, then [latex]\sum a_n[/latex] automatically converges. The converse is not true.
Why does absolute convergence matter? It provides stability. If a series converges absolutely, you can rearrange its terms in any order and still get the same sum.
If a series converges conditionally, you can rearrange its terms to:
- Make it diverge to infinity
- Make it converge to any real number you choose
- Make it diverge to negative infinity
The Riemann Rearrangement Theorem: Any conditionally convergent series can be rearranged to converge to any desired value or to diverge. This shows that conditional convergence is fundamentally unstable.
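The rearrangement idea can even be demonstrated computationally. The greedy construction below (an illustrative Python sketch; the target value [latex]\pi[/latex] is arbitrary) takes positive terms of the alternating harmonic series until the running sum exceeds the target, then negative terms until it drops below, and so on:

```python
import math

def rearranged_sum(target, steps):
    """Greedily rearrange 1 - 1/2 + 1/3 - ... toward an arbitrary target."""
    pos = (1 / n for n in range(1, 10 ** 7, 2))     # +1, +1/3, +1/5, ...
    neg = (-1 / n for n in range(2, 10 ** 7, 2))    # -1/2, -1/4, ...
    s = 0.0
    for _ in range(steps):
        # Below the target: spend a positive term; above it: a negative one.
        s += next(pos) if s <= target else next(neg)
    return s

# After enough steps the rearranged partial sums hug the chosen target.
assert abs(rearranged_sum(math.pi, 200_000) - math.pi) < 1e-3
```

This works precisely because the positive and negative parts each diverge on their own while the individual terms shrink to zero, which is the signature of conditional convergence.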
When you have a series with mixed signs, first test [latex]\sum |a_n|[/latex] using your standard tests (comparison, ratio, etc.). If it converges, you have absolute convergence. If it diverges, test the original series with the alternating series test or other methods to check for conditional convergence.
Determine whether the series [latex]\displaystyle\sum _{n=1}^{\infty }{\left(-1\right)}^{n+1}\frac{n}{\left(2{n}^{3}+1\right)}[/latex] converges absolutely, converges conditionally, or diverges.
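As a numeric probe before solving this example: since [latex]\frac{n}{2n^3+1} \leq \frac{1}{2n^2}[/latex], comparison with a convergent [latex]p[/latex]-series looks promising. The Python sketch below (the comparison series is a choice made here, not the lesson's worked solution) checks both the inequality and the boundedness of the partial sums of [latex]|a_n|[/latex]:

```python
def abs_term(n):
    """|a_n| = n / (2n^3 + 1)."""
    return n / (2 * n ** 3 + 1)

# Comparison: n/(2n^3 + 1) <= 1/(2n^2), since 2n^3 <= 2n^3 + 1.
assert all(abs_term(n) <= 1 / (2 * n ** 2) for n in range(1, 1000))

def abs_partial(N):
    """Partial sum of the absolute-value series."""
    return sum(abs_term(n) for n in range(1, N + 1))

# The increasing partial sums of |a_n| barely move in the tail,
# consistent with absolute convergence.
assert abs_partial(20000) - abs_partial(10000) < 1e-4
```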
You can view the transcript for this segmented clip of “5.5.3” here (opens in new window).