Detail
The thesis tackles the Collatz problem by replacing its ad hoc case rules with linear operators and tensor-network structure, framing the map in terms familiar from quantum information. That operator view yields circuit-depth and entanglement-style bounds that let us prune trajectories instead of brute-forcing them.
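To make the operator framing concrete, here is a minimal sketch of one standard way to linearize the map; it is not the thesis's construction, and the modulus, the accelerated step `accelerated_collatz`, and the helper `residue_transition_matrix` are illustrative assumptions. It builds the 0/1 matrix recording how one accelerated Collatz step moves residue classes mod 2^k.

```python
import numpy as np

def accelerated_collatz(n: int) -> int:
    """One accelerated Collatz step: T(n) = n/2 if n is even, (3n+1)/2 if n is odd."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def residue_transition_matrix(k: int) -> np.ndarray:
    """0/1 matrix M on residue classes mod 2^k with M[s, r] = 1 iff some
    n ≡ r (mod 2^k) has T(n) ≡ s (mod 2^k).  Enumerating lifts mod 2^(k+1)
    suffices, because one accelerated step is determined at that precision."""
    m = 2 ** k
    M = np.zeros((m, m), dtype=np.uint8)
    for n in range(2 * m):                      # both lifts r and r + 2^k of every residue r
        M[accelerated_collatz(n) % m, n % m] = 1
    return M

M = residue_transition_matrix(4)
print(M.sum(axis=0))   # every residue class has exactly two possible successor classes mod 2^4
```

Powers of this matrix describe multi-step reachability between residue classes, which is the kind of object an operator-level pruning argument can act on.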
I benchmarked structured heuristics that prune the search tree using tensor contractions and low-rank factorizations. These heuristics make it cheap to discard whole families of trajectories at once: any family whose members would have to cross explicit cutoffs on norm growth or parity patterns in order to reach 1 is dropped without being enumerated.
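As a self-contained illustration of the parity-pattern side of such cutoffs (the tensor-contraction and low-rank machinery is the thesis's own and is not reproduced here), the sketch below groups integers by their residue mod 2^k, since every member of a class shares the same parity word over the first k accelerated steps, and compares the exact growth factor of that shared word against a cutoff. The cutoff value, the function names, and the use of the accelerated map are assumptions made for the example.

```python
def accelerated_collatz(n: int) -> int:
    """One accelerated Collatz step: T(n) = n/2 if n is even, (3n+1)/2 if n is odd."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def classify_parity_classes(k: int, growth_cutoff: float = 1.0):
    """For each residue class r mod 2^k, the first k accelerated steps follow a fixed
    parity word with `o` odd steps, so the composite map is affine:
        T^k(n) = (3^o * n + c_r) / 2^k   for every n ≡ r (mod 2^k).
    The leading factor 3^o / 2^k governs the norm growth of the entire family,
    so one comparison against a cutoff prunes all of its members at once."""
    pruned, kept = [], []
    for r in range(2 ** k):
        x, odd_steps = r, 0
        for _ in range(k):
            odd_steps += x % 2          # parity word is identical for every n ≡ r (mod 2^k)
            x = accelerated_collatz(x)
        factor = 3 ** odd_steps / 2 ** k
        (pruned if factor > growth_cutoff else kept).append((r, factor))
    return pruned, kept

pruned, kept = classify_parity_classes(k=12, growth_cutoff=1.0)
print(f"pruned {len(pruned)} of {2**12} residue classes; {len(kept)} remain for explicit search")
```

The point of the example is the amortization: one inequality per residue class replaces per-integer trajectory checks for everything in that class.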
The analysis highlights both wins and limits: operator factorizations cut the naive search dramatically, but they become brittle once the induced matrices lose sparsity. I document these failure cases alongside the cases where tensor-network intuition yields provable pruning.
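The sparsity-loss failure mode is visible even in the toy residue-class operator above: composing it with itself fills in quickly, which is exactly the regime where factorization-based shortcuts stop paying off. The sketch below only measures that fill-in with SciPy sparse matrices; it stands in for, and is not, the thesis's factorization code, and the choice of k and number of powers are arbitrary.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix

def accelerated_collatz(n: int) -> int:
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def residue_transition_matrix(k: int) -> csr_matrix:
    """Same 0/1 residue-class relation as before, stored sparsely."""
    m = 2 ** k
    M = lil_matrix((m, m), dtype=np.int64)
    for n in range(2 * m):
        M[accelerated_collatz(n) % m, n % m] = 1
    return M.tocsr()

k = 10
M = residue_transition_matrix(k)
P = M.copy()
for p in range(1, 9):
    density = P.nnz / (2 ** k) ** 2          # fraction of nonzero entries in M^p
    print(f"M^{p}: density {density:.4f}")
    P = P @ M
    P.data[:] = 1                            # keep a 0/1 reachability relation, not walk counts
```

The density roughly doubles with each power until it saturates, which is a compact way to see where a sparse or low-rank representation stops being an advantage.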
Across the write-up I emphasize reproducibility: a small suite of visualizations and benchmarks ships with the thesis so readers can replay the pruning strategies and see where they break.