# Paper Dissector — AI Summarizer
Watch an LLM-style chunking pipeline break an academic paper into structured summaries, with animated chunking, compression, and extractive summarization.
## How It Works
The demo presents a mock arXiv paper on Neural Architecture Search. Click "Summarize" on any section to watch the text split into color-coded chunks, then compress into summary bullets using extractive summarization (TF-IDF scoring, position bias, signal phrase detection).
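The sentence-scoring step can be sketched as follows. This is a minimal illustration, not the demo's actual code: the signal-phrase list, the boost weights, and the per-sentence TF-IDF formulation are all assumptions.

```python
import math
import re

# Hypothetical signal phrases; the demo's actual list is not specified.
SIGNAL_PHRASES = ("we propose", "we show", "results indicate", "in conclusion")

def score_sentences(sentences):
    """Score sentences by TF-IDF, position bias, and signal phrases."""
    # Tokenize each sentence into lowercase words.
    docs = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    n = len(docs)
    # Document frequency: how many sentences contain each word.
    df = {}
    for words in docs:
        for w in set(words):
            df[w] = df.get(w, 0) + 1
    scores = []
    for i, words in enumerate(docs):
        if not words:
            scores.append(0.0)
            continue
        # TF-IDF summed over the sentence's distinct words
        # (term frequency measured within the sentence).
        tfidf = sum(
            (words.count(w) / len(words)) * math.log(n / df[w])
            for w in set(words)
        )
        # Position bias: earlier sentences get a larger boost.
        position = 1.0 / (1 + i)
        # Fixed bonus when a signal phrase appears (assumed weight).
        signal = 0.5 if any(p in sentences[i].lower() for p in SIGNAL_PHRASES) else 0.0
        scores.append(tfidf + position + signal)
    return scores
```

A leading sentence containing a signal phrase ("We propose …") will outrank a filler sentence later in the section, which is the behavior the animation highlights.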
| Control | Action |
|---|---|
| Section buttons | Summarize individual sections |
| Summarize All | Process the entire paper |
| Chunk size slider | Adjust chunk size (50-200 words) |
| Overlap slider | Control chunk overlap (0-50 words) |
| Output tabs | View Key Findings, Methods, or Results |
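The two sliders map onto a word-windowing scheme like the one below. This is a sketch of the assumed semantics (each chunk shares its last `overlap` words with the next chunk), not the demo's implementation.

```python
def chunk_words(text, chunk_size=100, overlap=20):
    """Split text into word chunks of up to `chunk_size` words,
    where consecutive chunks share `overlap` words of context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap  # how far each new chunk advances
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # final chunk reached the end of the text
    return chunks
```

With the defaults, a 250-word section yields three chunks, and each chunk repeats the previous chunk's last 20 words so no sentence loses its surrounding context at a boundary.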
## The Concept
Long-document summarization requires breaking text into manageable chunks, scoring sentences by relevance, and assembling structured output. This demo visualizes the pipeline that production LLM summarizers use — chunking with overlap to preserve context, then extracting the most informative sentences.
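The final assembly step, once every sentence has a score, reduces to picking the top-scoring sentences and restoring document order so the bullets read coherently. A minimal sketch (the cutoff `k` is an assumption):

```python
import heapq

def extract_summary(sentences, scores, k=3):
    """Return the k highest-scoring sentences in original document order."""
    top = heapq.nlargest(k, range(len(sentences)), key=lambda i: scores[i])
    return [sentences[i] for i in sorted(top)]
```

Re-sorting the winners by index matters: a summary that lists findings out of order reads as disjointed even when every sentence is individually relevant.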