Arrays

The simplest arrangement: data in a row, accessed by position.

🤔 Why does this exist?

The Story

The Computer's Memory

Memory is a long row of numbered slots

RAM is physically organized as billions of sequential addresses. Each has a unique number.

The CPU can jump to any address instantly

Given an address, the CPU reaches it in one step. This is called "random access".

Calculating an address is nearly free

Simple arithmetic: base_address + (index × item_size). One multiplication, one addition.

The Problem Before Arrays

Imagine storing 1000 student grades. Early computers had to track each grade's address separately—a second list just to find the first list.

Naive Approach: Store each item wherever there's space

Finding item #500 meant searching from the beginning. No pattern = no shortcut.

Cost: Average lookup checked half the items
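To see that cost in code, here is a minimal C sketch. The chain of scattered nodes stands in for "no pattern" storage; the struct and values are illustrative, not a historical layout:

    #include <stdio.h>
    #include <stdlib.h>

    /* One grade per node, each placed wherever the allocator found
     * space. With no positional pattern, a chain of pointers is the
     * only way to reach them. */
    struct node { int grade; struct node *next; };

    int main(void) {
        /* Build 1000 scattered nodes, item #0 at the head. */
        struct node *head = NULL;
        for (int i = 999; i >= 0; i--) {
            struct node *n = malloc(sizeof *n);
            n->grade = i % 101;
            n->next = head;
            head = n;
        }

        /* Finding item #500 means walking from the beginning. */
        struct node *cur = head;
        int checked = 1;
        while (checked <= 500) { cur = cur->next; checked++; }
        printf("grade #500 = %d after checking %d nodes\n",
               cur->grade, checked);

        while (head) { struct node *t = head->next; free(head); head = t; }
        return 0;
    }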

💭

"Why can't we just say "give me item 500" and get it immediately?"

The Array: Contiguous Memory + Position

💡 Key Insight: If items are stored back-to-back with no gaps, we can calculate any item's location instantly.

The Pattern: Reserve a continuous block of memory. Item N lives at: start + (N × size). Done.
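A minimal sketch of the pattern in C (the capacity of 8 and the value 45 are just for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Reserve one continuous block for 8 ints: no gaps. */
        int *arr = malloc(8 * sizeof(int));
        if (!arr) return 1;

        arr[3] = 45;             /* compiler computes start + 3 * size */
        printf("%d\n", arr[3]);  /* one multiply, one add, one read */

        free(arr);
        return 0;
    }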

GAINED

Instant access to any position—just calculate the address

SACRIFICED

Fixed size. Inserting or deleting in the middle means physically shifting every element after that position.
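Here is a minimal C sketch of the expensive case, inserting at the front, using the same sample values as the memory-layout table below:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        int arr[8] = {10, 25, 30, 45, 50};  /* 5 used, 3 spare slots */
        int used = 5;

        /* Insert 5 at index 0: every existing element must first be
         * shifted one slot to the right. That is 5 physical moves
         * before the one write we actually wanted. */
        memmove(&arr[1], &arr[0], used * sizeof arr[0]);
        arr[0] = 5;
        used++;

        for (int i = 0; i < used; i++) printf("%d ", arr[i]);
        printf("\n");  /* prints: 5 10 25 30 45 50 */
        return 0;
    }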

Array Memory Layout

Index  Address  Value
[0]    0x1000   10
[1]    0x1004   25
[2]    0x1008   30
[3]    0x100c   45
[4]    0x1010   50
[5]    0x1014   (empty)
[6]    0x1018   (empty)
[7]    0x101c   (empty)

Formula: address = 0x1000 + (index × 4)
arr[3] = 0x1000 + 12 = 0x100C → 45
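The same arithmetic in C. On a real machine the base address will be whatever the compiler chooses rather than 0x1000, but the 4-byte spacing and the formula are identical:

    #include <stdio.h>

    int main(void) {
        int arr[8] = {10, 25, 30, 45, 50};

        /* arr[i] lives at: start + (i * item_size).
         * Do the arithmetic by hand and compare with &arr[3]. */
        char *start = (char *)arr;
        int *computed = (int *)(start + 3 * sizeof(int));

        printf("&arr[3]  = %p\n", (void *)&arr[3]);
        printf("computed = %p\n", (void *)computed);
        printf("value    = %d\n", *computed);  /* prints 45 */
        return 0;
    }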

Interactive Visualization

💡 Insert at index 0 to see why shifting is expensive!

Constraints

Toggle constraints to see how they affect the choice of data structure.

Quick Reference

Access by index: O(1)
Insert at end: O(1) (given spare capacity)
Insert at middle: O(n)
Delete: O(n)
Search (unsorted): O(n)
Binary search (sorted): O(log n)
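The last row is the payoff of contiguity plus sorted order. A minimal sketch of binary search in C:

    #include <stdio.h>

    /* Halve the search range each step: at most about log2(n) probes. */
    int binary_search(const int *arr, int n, int target) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  /* avoids overflow of lo + hi */
            if (arr[mid] == target) return mid;
            if (arr[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;  /* not found */
    }

    int main(void) {
        int arr[] = {10, 25, 30, 45, 50};  /* must be sorted */
        printf("index of 45: %d\n", binary_search(arr, 5, 45));  /* 3 */
        printf("index of 99: %d\n", binary_search(arr, 5, 99));  /* -1 */
        return 0;
    }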

Click "Code Examples" or "Case Studies" above to explore!

This lesson is part of a shared journey.