Interactive Data Structures Visualizer — Dark Tech Academy

Learn Data Structures Visually

Interactive simulations, sorting visualizations, and AI-powered quizzes

What is a Data Structures Visualizer?

A Data Structures Visualizer helps students understand complex concepts like trees, graphs, and sorting algorithms through interactive animations. It allows learners to explore these concepts step-by-step and improve their problem-solving skills.

From Binary Trees to Sorting Algorithms, this platform provides a complete visual learning experience for data structures, powered by interactive simulations and AI quizzes.

📚 Stack
Linear Structures
Trees
Algorithms
Others
Representation type
Tree visualization
→ Insert values to build the binary tree (level-order)
Index mapping formulas
Root → index 0
Left child of i → 2×i + 1
Right child of i → 2×i + 2
Parent of i → ⌊(i−1) / 2⌋
struct Node {
    int data;
    struct Node *left, *right;
};
Default
Visited
Found
Not found
→ BST property: left < root < right at every node
BST properties

✦ Left subtree values are less than root

✦ Right subtree values are greater than root

✦ Both subtrees are individually valid BSTs

✦ In-order traversal yields a sorted sequence

✦ Search: O(log n) avg, O(n) worst case

Traversals
→ Build a BST then click a traversal
BF = 0 (balanced)
BF = ±1 (ok)
BF = ±2 (needs rotation)
AVL tree
Rotation log
→ Insert nodes to see rotations
LL rotation
Single right rotation
BF=+2, BF(left)≥0
RR rotation
Single left rotation
BF=−2, BF(right)≤0
LR rotation
Left then right
BF=+2, BF(left)<0
RL rotation
Right then left
BF=−2, BF(right)>0
→ AVL guarantees |BF| ≤ 1 at every node — height always O(log n)
max keys = 2t−1
Default node
Visited
Found
Not found / split
→ B-Tree: each node holds up to 2t−1 keys; splits on overflow, merges on underflow
B-Tree properties

✦ Every node has between t−1 and 2t−1 keys (root may have 1)

✦ All leaves appear at the same depth

✦ Keys in a node are stored in sorted order

✦ An internal node with k keys has exactly k+1 children

✦ Search / Insert / Delete: O(log n) guaranteed

When is a B-Tree used?

Databases (B+ Tree indexes in MySQL, PostgreSQL)

Filesystems (NTFS, HFS+, ext4)

✦ When data lives on disk — wide nodes minimise I/O

Insert algorithm
1. Descend from root to a leaf
2. Insert the key into the leaf (sorted)
3. If leaf has 2t−1 keys → SPLIT
4. Promote middle key to parent
5. Recurse upward if parent overflows
6. Root splits → tree grows 1 level
Default
On splay path
Splayed to root
Recently rotated
Splay tree
Rotation log
→ Insert or search nodes to see splay rotations
Zig
Single rotation — when accessed node is a child of root
Zig-Zig
Two rotations in the same direction (LL or RR)
Zig-Zag
Two rotations in opposite directions (LR or RL)
→ Every access splays the node to the root — amortized O(log n) per operation
BST Deletion — Interactive Demo
Search path
Node to delete
Inorder successor
Replacement
→ Load Demo tree then try deleting values (50, 30, 70 are interesting)
Deletion cases
Case 1 — Leaf node (0 children)
Simply remove the node. Set parent's pointer to NULL.
Case 2 — One child
Bypass the node: connect its parent directly to its single child.
Case 3 — Two children
Find inorder successor (smallest in right subtree). Copy its value here, then delete the successor (which is Case 1 or 2).
Step log
→ Perform a deletion to see step-by-step explanation
AVL Deletion — With automatic rebalancing
→ AVL deletion rebalances automatically after removing a node
Rotation log after deletion
→ Delete nodes to trigger rebalancing
AVL Delete Algorithm:
1. Find node using BST delete logic
2. After removing, update heights going up
3. For each unbalanced ancestor, check BF
4. Apply LL/RR/LR/RL rotation as needed
5. Continue up to root (multiple rotations possible)
Select rotation type

LL — Single Right Rotation

BF(node) = +2, BF(left child) ≥ 0

RR — Single Left Rotation

BF(node) = −2, BF(right child) ≤ 0

LR — Double: Left then Right

BF(node) = +2, BF(left child) < 0

RL — Double: Right then Left

BF(node) = −2, BF(right child) > 0

LL — Single Right Rotation
Triggered when: BF = +2 at a node AND left child has BF ≥ 0
BEFORE ROTATION
AFTER ROTATION
Live demo — insert values to trigger this rotation
→ Click "Auto-trigger" to automatically insert values that cause this rotation
Choose a tree shape to visualize memory
Tree visualization
→ Select a tree shape to visualize
Array representation — memory layout
Used · Wasted (NULL) · Unallocated
Array vs Linked — space complexity breakdown
Array Representation
→ Load a tree shape to see analysis
Linked List Representation
→ Always uses exactly 3×n words (data + left* + right*) regardless of shape
Space comparison for different tree shapes
Tree type Nodes (n) Height (h) Array slots Wasted slots Waste % Linked slots
Complete BT 7 2 7 0 0% 7×3=21
Balanced BST 7 2 7 0 0% 7×3=21
Right-skewed 7 6 2⁷−1=127 120 94% 7×3=21
n-node skewed n n−1 2ⁿ−1 2ⁿ−n−1 ≈100% n×3
Time complexity — bar chart
Binary Tree BST AVL Tree
Height race — build same sequence in all 3 trees
Binary Tree (level-order)
height
BST (insertion order)
height
AVL Tree (auto-balanced)
height
Detailed comparison
Feature Binary Tree BST AVL Tree
Search avg O(n) O(log n) O(log n)
Search worst O(n) O(n) O(log n)
Insert avg O(log n) O(log n) O(log n)
Insert worst O(log n) O(n) O(log n)
Delete avg O(log n) O(log n) O(log n)
Delete worst O(log n) O(n) O(log n)
Height guarantee O(n) O(n) O(log n)
Self-balancing No No Yes
Sorted in-order No Yes Yes
Space (array rep) O(n) if complete O(2ⁿ) if skewed ≈O(n^1.44) worst
Space (linked rep) O(n) O(n) O(n)
Delete complexity Simple 3 cases 3 cases + rebalance
Implementation Simple Moderate Complex
Array vs linked list representation
Array representation

✦ Root at index 0, children at 2i+1, 2i+2

✦ Fast random access by index — O(1)

✦ No pointer overhead per node

✦ Wastes memory for sparse/skewed trees (O(2ⁿ))

✦ Best for complete binary trees

Linked list representation

✦ Each node: data + left* + right*

✦ Memory efficient — always O(n) regardless of shape

✦ Pointer overhead: 2 pointers per node

✦ Dynamic — no pre-allocation needed

✦ Works for any tree shape

Sorting algorithms — time & space comparison
Algorithm Best Average Worst Space Stable Notes
Bubble Sort O(n) O(n²) O(n²) O(1) Simple, educational
Selection Sort O(n²) O(n²) O(n²) O(1) Min swaps (n−1)
Insertion Sort O(n) O(n²) O(n²) O(1) Fast for small/nearly sorted
Shell Sort O(n log n) O(n log²n) O(n²) O(1) Gap-based insertion
Merge Sort O(n log n) O(n log n) O(n log n) O(n) Divide & conquer, guaranteed
Quick Sort O(n log n) O(n log n) O(n²) O(log n) Fastest in practice
Heap Sort O(n log n) O(n log n) O(n log n) O(1) In-place, guaranteed
Counting Sort O(n+k) O(n+k) O(n+k) O(k) Integers only, k=range
Radix Sort O(nk) O(nk) O(nk) O(n+k) Digit-by-digit, k=digits
Green = optimal   Orange = acceptable   Red = poor   ·   Stable = equal elements preserve original order
🧠
AI-Generated Quiz
Fresh questions every time · AI-generated content — may contain errors.
1. Pick your topics
2. How many questions?
3. Difficulty
← Select at least one topic to start
⚡ Powered by AI
Sorting Visualizer — Watch 9 classic algorithms sort in real-time. Pick an algorithm, generate an array, then hit Start. Color legend: Blue=Comparing · Orange=Swapping · Purple=Pivot · Red=Min · Green=Sorted
Algorithms: Bubble · Selection · Insertion · Merge · Quick · Heap · Shell · Counting · Radix
Algorithm
Best: O(n) Avg: O(n²) Worst: O(n²) Space: O(1)
Unsorted
Comparing
Swapping
Pivot
Minimum
Sorted
Comparisons: 0
Swaps / Writes: 0
Time Elapsed: 0 ms
→ Generate an array and press ▶ Start to begin visualization
Stack — A Last-In, First-Out (LIFO) data structure. Elements are pushed onto and popped from the same end (the top). Think of a stack of plates: you always add/remove from the top.
Real-world analogy: Undo/Redo in a text editor — each action is pushed; Ctrl+Z pops the last one
Implementation
Stack visualization (Array)
→ Push values onto the stack (max 8)
Array slots (index 0 = bottom)
int stk[MAX] — static array
int top = -1 (empty)
push: stk[++top] = val
pop : return stk[top--]
peek: return stk[top]
Time complexity
Operation Array Linked List
Push O(1) O(1)
Pop O(1) O(1)
Peek / Top O(1) O(1)
Search O(n) O(n)
Space O(n) — fixed O(n) — dynamic
Stack applications

Function call stack — return addresses, local vars

Undo / Redo — history in text editors

Expression evaluation — infix to postfix

Backtracking — DFS traversal, maze solving

Balanced parentheses — compiler syntax check

Queue — A First-In, First-Out (FIFO) data structure. Elements are enqueued at the rear and dequeued from the front. Like a line at a ticket counter: first person in is first served.
Real-world analogy: CPU process scheduling, printer spooler, BFS graph traversal
Queue type
Queue visualization — Linear
→ Enqueue values; elements flow from rear to front (FIFO)
Pointer state
Front : −1
Rear  : −1
Size  : 0
Max   : 8
Linear Queue:
✦ Enqueue: rear++ → arr[rear] = val
✦ Dequeue: return arr[front++]
✦ Empty: front > rear
✦ Full: rear == MAX−1
⚠ Problem: front slots never reused (false overflow)
Time & space complexity
Operation Array Linked List
Enqueue O(1) O(1)
Dequeue O(1) O(1)
Front peek O(1) O(1)
Space O(n) fixed O(n) dynamic
Wasted space Yes (linear) No (circular)
Queue applications

BFS traversal — level-by-level graph/tree walk

CPU scheduling — round-robin process queue

Printer spooler — jobs wait in order

Network buffers — packet queuing in routers

Producer-consumer — inter-thread communication

Linked List — A linear data structure where each element (node) holds a data value and a pointer to the next node. Nodes live anywhere in memory — not in contiguous slots like arrays.
Real-world analogy: A treasure hunt — each clue points to the location of the next clue
List type
Linked list — Singly linked
→ Insert nodes to build the linked list
Operation log
→ Perform an operation to see step-by-step explanation
Singly vs Doubly
Feature Singly Doubly
Pointers / node 1 (next) 2 (prev+next)
Memory / node Less More
Reverse traversal No Yes
Delete given ptr O(n) O(1)
Insert at head O(1) O(1)
Insert at tail O(n)* O(1) w/ tail*
Node structure in C
/* Singly */
struct SNode { int data; struct SNode *next; };

/* Doubly */
struct DNode { int data; struct DNode *prev; struct DNode *next; };

✦ Insert at head: O(1) always

✦ Insert/delete at tail: O(1) with tail pointer

✦ Random access: O(n) — no index math

✦ Search: O(n) — must traverse

Red-Black Tree — A self-balancing BST where every node is colored RED or BLACK. Five invariants guarantee the tree height stays O(log n), making worst-case search/insert/delete always O(log n). Used in C++ std::map, Java TreeMap, and Linux process scheduler.
Real-world: Java TreeMap, C++ std::map, Linux CFS scheduler use Red-Black Trees
RED node
BLACK node
Violation (RED-RED)
Red-Black Tree
→ Insert values to see automatic Red-Black balancing with recoloring & rotations
Fix-up log
→ Insert nodes to see fix-up steps
Rule checker
Rule 1 — Root color
Root must always be BLACK
Status:
Rule 2 — No RED-RED
A RED node cannot have a RED parent
Status:
Rule 3 — Black height
Equal black-nodes on every root→leaf path
Status:
Rule 4 — NIL leaves
All NULL (NIL) leaf nodes are BLACK
Status: ✓ Always
Fix-up cases during insertion
Case 0 — Node is root
Simply recolor to BLACK. Done.
Case 1 — Parent is BLACK
No violation. Tree is valid — no fix needed.
Case 2 — Uncle is RED
Recolor: parent → BLACK, uncle → BLACK, grandparent → RED. Move problem up the tree.
Case 3 — Uncle is BLACK (outer)
Single rotation of grandparent + swap colors of parent & grandparent.
Case 4 — Uncle is BLACK (inner)
Double rotation: first rotate parent to convert to Case 3, then apply Case 3.
Complexity comparison
Operation BST (avg) BST (worst) Red-Black
Search O(log n) O(n) O(log n)
Insert O(log n) O(n) O(log n)
Delete O(log n) O(n) O(log n)
Height O(log n) avg O(n) skewed ≤ 2·log₂(n+1)
Rotations per insert None None ≤ 2
Key insight:
Red-Black trees guarantee height ≤ 2·log₂(n+1) because the longest path (alternating RED-BLACK) can be at most twice the shortest path (all BLACK). This ensures guaranteed O(log n) for all operations, making it the industry choice for sorted maps and sets.
Hash Table — A data structure that maps keys to indices using a hash function. Provides O(1) average-case insert, search, and delete. When two keys hash to the same index, a collision occurs — resolved by Separate Chaining or Linear Probing.
🗄️ Real-world: Python dicts, Java HashMap, database indices, browser caches
Collision resolution strategy
Mode: Separate Chaining
Hash fn:
→ Using default: hash(k) = k % 11
→ Enter a key and click Insert / Search / Delete to see the hash calculation
Hash table — size = 11 (prime), hash(k) = k % 11
→ Insert keys to see how hash(k) = k % 11 distributes them
Load factor
items / capacity 0/11 = 0.00
0.0 · 0.5 (good) · 0.7 (warn) · 1.0
How it works
Separate Chaining:
✦ Each bucket holds a linked list
✦ Collision: append to existing chain
✦ Search: hash(k) → traverse chain
✦ Load factor α = n/m (can exceed 1.0)
✦ Avg search: O(1 + α) — good if α ≤ 1
✦ Worst case: all keys hash to one bucket → O(n)
Collision resolution comparison
Property Separate Chaining Linear Probing
Insert (avg) O(1) O(1)
Search (avg) O(1+α) O(1/(1−α))
Delete O(1) Lazy deletion
Load factor limit α can > 1 Must be < 1
Cache performance Poor (pointers) Excellent (array)
Clustering No Primary clustering
Extra memory For linked nodes None
Why prime table size?

✦ Prime numbers (11, 13, 17…) reduce clustering and collisions.

✦ With a composite size (e.g. 12), multiples of 3 always map to indices 0, 3, 6, 9 — creating hotspots.

✦ A prime size ensures better distribution: gcd(step, size) = 1 guarantees full coverage during probing.

Real-world note: Java's HashMap uses a power-of-2 size with bit-masking for speed, but applies a secondary "scrambling" step to the hash to compensate for poor distribution. Python dicts use open addressing with a perturbation scheme.
Graph — A collection of nodes (vertices) connected by edges. Graphs model networks, maps, dependencies, and relationships. BFS explores level-by-level (shortest path in unweighted graphs); DFS dives deep first (cycle detection, topological sort). Dijkstra finds shortest paths in weighted graphs.
🗺️ Real-world: Google Maps (Dijkstra), social networks (BFS), compilers (DFS topological sort)
Graph type
(Undirected)
Canvas mode
Click anywhere on canvas to add a node
Graph canvas — Undirected
→ Click canvas to add nodes, then switch to Add Edge mode to connect them
Traversal log
→ Run BFS, DFS, or Dijkstra to see step-by-step traversal
Algorithm legend
Unvisited
Current
Visited
Selected/Shortest
BFS — Uses a Queue. Visits closest nodes first.
Finds shortest path (unweighted). O(V+E).

DFS — Uses a Stack (recursive). Dives deep first.
Detects cycles, finds connected components. O(V+E).

Dijkstra — Greedy + Min-heap. Finds shortest distances
from source to all nodes in weighted graph. O((V+E) log V).
Graph representation comparison
Property Adjacency Matrix Adjacency List
Space O(V²) O(V+E)
Edge check O(1) O(degree)
Add edge O(1) O(1)
Get neighbors O(V) O(degree)
Best for Dense graphs Sparse graphs
BFS/DFS O(V²) O(V+E)
BFS vs DFS quick reference

BFS → shortest path (unweighted) — social network degrees

DFS → cycle detection — dependency graph validation

DFS → topological sort — build systems, course scheduling

Dijkstra → weighted shortest — GPS routing (A* variant)

Connected components — use DFS or Union-Find

🔤 Trie — Prefix Tree

Each node stores a single character. Shared prefixes reuse nodes → fast word lookups, autocomplete, and spellchecking.

Trie structure
→ Insert words to build the prefix tree
Words stored
(empty)
Algorithm legend
Inner node
Word end
Match / highlight
Insert / Search / Delete — O(L) where L = word length
Space — O(total characters), best when many shared prefixes

🔺 Heap / Priority Queue

Complete binary tree maintaining the heap invariant parent ≤ children (Min-Heap) or parent ≥ children (Max-Heap). Priority Queue stores (priority, value) pairs.

Mode: Min-Heap · hint: numbers only
Heap tree
→ Insert values to build the min-heap
Array representation
(empty)
Algorithm legend
Node
Bubbling
Root / min
Insert / Extract-Min — O(log n)
Peek — O(1)
Build-Heap — O(n) via bottom-up heapify