A pre-print uploaded to arXiv on 29 May 2025 by researchers at Italy’s Istituto Italiano di Tecnologia introduces ReassembleNet, a deep-learning system that learns where every broken piece belongs by focusing on just 20 “anchor” points along each jagged edge.
The network begins by detecting dozens of candidate corners on a shard; a learnable selector then keeps the 20 points that carry the most geometric and textural information. Storing only those coordinates and their features trims graphics-card memory to about 37 GB for puzzles of up to 128 pieces, small enough for a single NVIDIA A100 card, whereas previous diffusion-based programs ran out of memory when they tried to track whole edges.
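The core idea of keeping only the highest-scoring corners can be illustrated with a minimal numpy sketch. The function name, the fixed scoring vector, and the 8-dimensional descriptors below are illustrative assumptions; the paper's actual selector is a trainable network, not a fixed dot product.

```python
import numpy as np

def select_anchor_points(candidates, features, weights, k=20):
    """Score each candidate corner and keep the top-k.

    candidates: (N, 2) array of (x, y) corner coordinates
    features:   (N, D) geometric/textural descriptors per corner
    weights:    (D,) scoring vector (a stand-in for a learned selector)
    """
    scores = features @ weights            # one scalar score per candidate
    keep = np.argsort(scores)[-k:][::-1]   # indices of the k highest scores
    return candidates[keep], features[keep]

# toy usage: 60 candidate corners with 8-dimensional descriptors
rng = np.random.default_rng(0)
pts, feats = rng.random((60, 2)), rng.random((60, 8))
w = rng.random(8)
anchors, anchor_feats = select_anchor_points(pts, feats, w)
print(anchors.shape)  # (20, 2)
```

Whatever the scoring model, the memory saving comes from the same step: downstream processing sees 20 points per shard instead of every pixel along the contour.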
On the RePAIR benchmark of 809 real Pompeian-style fragments, ReassembleNet set a new best score: average rotation error fell to 34.3 degrees and translation error to 17.9 mm, beating the long-time geometry-only leader Greedy Geom Match by 55 percent and 86 percent, respectively. In practice, conservators can now start with a short, ranked list of likely fits instead of testing every shard against every other.
Because labelled training puzzles are scarce, the team first created 5,000 "practice" puzzles—45,200 fragments in all—by digitally breaking high-resolution fresco photos, adding erosion and small random shifts to mimic excavation damage. Pre-training on this semi-synthetic set before fine-tuning on real data shaved another two millimetres off placement error.
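A rough analogue of that data-generation recipe is easy to sketch. The grid split, one-pixel border erosion, and jitter range below are assumptions for illustration; the paper's pipeline produces irregular breaks, not rectangular tiles.

```python
import numpy as np

def fragment_image(img, grid=2, erode=1, max_shift=3, seed=0):
    """Digitally 'break' an image into grid x grid fragments, eroding each
    fragment's border and jittering its recorded position, loosely mimicking
    excavation damage (parameters here are illustrative, not the paper's)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    fh, fw = h // grid, w // grid
    fragments = []
    for i in range(grid):
        for j in range(grid):
            tile = img[i*fh:(i+1)*fh, j*fw:(j+1)*fw].copy()
            # crude erosion: zero out a border of width `erode` to mimic wear
            tile[:erode] = 0
            tile[-erode:] = 0
            tile[:, :erode] = 0
            tile[:, -erode:] = 0
            # ground-truth placement with a small random shift
            dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
            fragments.append((tile, (i*fh + dy, j*fw + dx)))
    return fragments

frags = fragment_image(np.ones((64, 64)))
print(len(frags))  # 4
```

Each (tile, position) pair then serves as a labelled training example: the network sees the shuffled tiles and is scored against the jittered ground-truth placements.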
The authors acknowledge two limits: every shard receives the same 20 anchor points even when a particularly complex piece might need more, and the texture module is not yet rotation-invariant, so it can miss a painted pattern that reappears at a different angle. Even so, by proving that a small, carefully chosen slice of edge geometry can guide powerful AI, the study points to a future where months of manual puzzle-work shrink to minutes, and long-silent murals re-emerge far sooner for scholars and the public alike.
Produced with the assistance of a news-analysis system.