CS50 Lecture 3 — Algorithms
Learn searching and sorting algorithms, understand Big O notation, and discover why choosing the right approach can make your code millions of times faster.
Resources
Why Algorithms Matter
You can solve most problems with brute force — check every possibility until you find the answer. But when your data grows, brute force collapses.
Imagine you build an export lead finder that searches through 1 million companies. A bad algorithm takes 1 million steps. A good algorithm takes 20 steps. Same result, wildly different performance.
This lecture teaches you to think about efficiency — not just "does it work?" but "how well does it scale?"
Searching Algorithms
Linear Search
Check each item one by one:
    For each element in the list:
        If it's what we're looking for:
            Return it
    Return "not found"
Performance: If you have n items, it takes up to n steps. For 1 million items, that's up to 1 million checks.
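The pseudocode above can be sketched as runnable Python (the function name `linear_search` is my own):

```python
def linear_search(items, target):
    """Check each element in order; O(n) in the worst case."""
    for index, item in enumerate(items):
        if item == target:
            return index   # found: return its position
    return None            # exhausted the list: not found
```

Note that the loop may visit every element before giving up, which is exactly why the worst case is n steps.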
Binary Search
Only works on sorted data. Cut the search space in half each time:
Look at the middle element
If it's what we want: done!
If what we want is less: search left half
If what we want is more: search right half
Repeat
Performance: For 1 million items, it takes about 20 steps (log₂ of 1,000,000 ≈ 20). That's the power of halving.
Remember the phone book from Week 1? That was binary search.
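The halving steps above can be sketched in Python (a minimal iterative version; the name `binary_search` is my own):

```python
def binary_search(sorted_items, target):
    """Halve the search space each step; O(log n). Requires sorted input."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid                  # found it
        elif target < sorted_items[mid]:
            high = mid - 1              # search left half
        else:
            low = mid + 1               # search right half
    return None                         # not present
```

Because the remaining range shrinks by half on every comparison, the loop runs at most about log₂(n) times.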
Big O Notation — Measuring Speed
Big O is how computer scientists describe algorithm speed without worrying about specific hardware:
| Notation | Name | Example | 1M items |
|----------|------|---------|----------|
| O(1) | Constant | Array access by index | 1 step |
| O(log n) | Logarithmic | Binary search | ~20 steps |
| O(n) | Linear | Linear search | 1M steps |
| O(n log n) | Linearithmic | Merge sort | ~20M steps |
| O(n²) | Quadratic | Bubble sort | 1 trillion steps |
The difference is dramatic. O(n²) on a million items is essentially unusable. O(log n) is instant.
How to read Big O: It describes how the number of steps grows as the input grows, usually measured for the worst case. When someone says "this algorithm is O(n)", they mean the number of steps grows proportionally with the input size.
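You can check the ~20-step figure for binary search yourself. Each comparison halves the remaining candidates, so the step count for n items is about log₂(n):

```python
import math

# Number of halvings needed to narrow 1,000,000 candidates down to 1.
steps = math.ceil(math.log2(1_000_000))
print(steps)  # 20
```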
Sorting Algorithms
Sorting is one of the most studied problems in computer science. Why? Because sorted data enables fast search (binary search) and organized output.
Bubble Sort — Simple but Slow
Compare adjacent pairs and swap if they're in the wrong order. Repeat until no more swaps needed.
[5, 3, 8, 1] → compare 5,3 → swap → [3, 5, 8, 1]
[3, 5, 8, 1] → compare 5,8 → ok → [3, 5, 8, 1]
[3, 5, 8, 1] → compare 8,1 → swap → [3, 5, 1, 8]
... repeat passes until sorted → [1, 3, 5, 8]
Speed: O(n²) — for each element, you might need to compare against all other elements. Terrible for large datasets.
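The swap-until-done process above can be sketched in Python (a minimal version; the name `bubble_sort` is my own):

```python
def bubble_sort(items):
    """Swap adjacent out-of-order pairs until a full pass makes no swaps; O(n^2)."""
    items = list(items)        # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True  # a swap happened, so another pass is needed
    return items
```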
Selection Sort — Find the Minimum
Find the smallest item, move it to position 0. Find the next smallest, move to position 1. Repeat.
Speed: O(n²) — same as bubble sort. Simple to understand but doesn't scale.
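A minimal Python sketch of the same idea (the name `selection_sort` is my own):

```python
def selection_sort(items):
    """Repeatedly move the smallest remaining element into place; O(n^2)."""
    items = list(items)        # work on a copy
    for i in range(len(items)):
        # Find the index of the smallest element in the unsorted suffix...
        smallest = min(range(i, len(items)), key=lambda j: items[j])
        # ...and swap it into position i.
        items[i], items[smallest] = items[smallest], items[i]
    return items
```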
Merge Sort — Divide and Conquer
Split the list in half, sort each half, then merge the sorted halves together.
[5, 3, 8, 1]
Split: [5, 3] and [8, 1]
Split: [5] [3] and [8] [1]
Merge: [3, 5] and [1, 8]
Merge: [1, 3, 5, 8]
Speed: O(n log n) — dramatically faster than O(n²) for large lists. Variants of merge sort (such as Timsort) power many real-world sorting implementations.
The insight: By splitting the problem in half recursively, you do much less work overall. This "divide and conquer" strategy appears everywhere in computing.
Recursion — Functions That Call Themselves
Merge sort uses recursion — a function that calls itself with a smaller input:
    merge_sort(list):
        If list has 1 item: return it (already sorted)
        Split list into left_half and right_half
        left_half = merge_sort(left_half)    ← calls itself!
        right_half = merge_sort(right_half)  ← calls itself!
        Return merge(left_half, right_half)
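The pseudocode above translates almost line for line into Python (a minimal sketch; the helper name `merge` matches the pseudocode):

```python
def merge_sort(items):
    """Divide and conquer: split, sort each half recursively, merge; O(n log n)."""
    if len(items) <= 1:
        return items                    # base case: already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # recursive case: sort each half
    right = merge_sort(items[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])             # one side may have leftovers
    result.extend(right[j:])
    return result
```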
Every recursive function needs:
- Base case: When to stop (list has 1 item)
- Recursive case: Break the problem into smaller pieces and call yourself
Recursion is initially confusing but incredibly powerful. It's how file systems are traversed (folders within folders), how HTML is parsed (elements within elements), and how many algorithms work.
Connecting to Your Work
When your Next.js app fetches data and filters/sorts it, these algorithms are running underneath:
- `array.sort()` in JavaScript uses a variant of merge sort
- `array.find()` uses linear search
- Database indexes use structures similar to binary search
When a page loads slowly, it might be because something is doing O(n²) work that could be O(n log n). Understanding Big O helps you identify and fix these problems.
What to Do This Week
- Watch CS50 Lecture 3 — the sorting visualizations are excellent
- Visit VisuAlgo (link above) to see sorting algorithms animated
- Think about your own data: If you had a list of 10,000 export leads, how would you search for one? How would you sort them?
- Take the quiz below.
Quiz
For a sorted list of 1,000,000 items, approximately how many steps does binary search need?