How "Website to Text" actually works here
Paste a URL into the input. The page first renders in a headless browser so JavaScript-rendered content actually loads. A Mozilla-Readability-style extraction pass then identifies the main content area, discarding navigation, ads, and other boilerplate. Finally, a flattening step strips the structural markers: no # for headings, no * for lists, no fenced code blocks, just the words. The result is plain UTF-8 text that is copy-paste-ready, search-indexable, and valid input for any tool that expects a string.
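The same three stages can be reproduced with off-the-shelf libraries. Below is a minimal sketch, assuming Playwright for rendering, readability-lxml for extraction, and BeautifulSoup for flattening; these are illustrative stand-ins, not MDisBetter's actual stack, and url_to_text is a hypothetical helper name.

```python
# Illustrative render -> extract -> flatten pipeline (not MDisBetter's code).
# pip install playwright readability-lxml beautifulsoup4
# playwright install chromium
from playwright.sync_api import sync_playwright
from readability import Document
from bs4 import BeautifulSoup

def url_to_text(url: str) -> str:
    # 1. Render in a headless browser so JS-built content exists in the DOM.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()

    # 2. Readability-style pass keeps only the main content area.
    main_html = Document(html).summary()

    # 3. Flatten: drop all structure, keep the words as plain UTF-8.
    text = BeautifulSoup(main_html, "html.parser").get_text(separator="\n")
    return "\n".join(ln.strip() for ln in text.splitlines() if ln.strip())
```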
Single page or whole site
Paste one URL for a single-page extraction; that is the most common use case. To convert many pages of a site to text, either run the same workflow URL by URL through the web tool or use an open-source pipeline locally: Trafilatura's trafilatura --sitemap command reads a site's sitemap and emits one text file per page. MDisBetter is a web tool today with no programmatic API, so the OSS path is what makes the multi-page case scale.
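Trafilatura also exposes the same sitemap-driven workflow as a Python API, which is handy when you want more control than the CLI gives. A minimal sketch, assuming the target site publishes a discoverable sitemap; site_to_text and the output file naming are illustrative choices, not part of Trafilatura itself.

```python
# Multi-page site-to-text via Trafilatura's Python API,
# the programmatic twin of `trafilatura --sitemap`.
# pip install trafilatura
from pathlib import Path

import trafilatura
from trafilatura.sitemaps import sitemap_search

def site_to_text(homepage: str, out_dir: str = "txt_out") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, url in enumerate(sitemap_search(homepage)):  # URLs from the sitemap
        downloaded = trafilatura.fetch_url(url)
        if downloaded is None:
            continue  # skip fetch failures
        text = trafilatura.extract(downloaded)  # plain-text main content
        if text:
            (out / f"page_{i:04d}.txt").write_text(text, encoding="utf-8")

site_to_text("https://example.com")  # hypothetical target site
```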
If structure matters: use Markdown instead
Plain text is fine for search indexing, NLP token streams, and grep-style scripts. For AI input, that is, feeding page content to ChatGPT, Claude, or Gemini, Markdown is usually the better shape: the preserved heading hierarchy helps the model navigate the document, at a token cost far below raw HTML and only slightly above plain text. See /convert/url-to-markdown for the structure-preserving alternative.
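To make the token-cost comparison concrete, here is a quick check of the same snippet in three shapes, using tiktoken's cl100k_base encoding as a stand-in for whatever tokenizer your model actually uses; the sample strings are invented for illustration.

```python
# Token cost of the same content as HTML, Markdown, and plain text.
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # stand-in tokenizer

samples = {
    "raw HTML":   "<h2>Pricing</h2><ul><li><strong>Free</strong>: 10 pages/day</li></ul>",
    "Markdown":   "## Pricing\n\n- **Free**: 10 pages/day",
    "plain text": "Pricing\n\nFree: 10 pages/day",
}

for label, s in samples.items():
    print(f"{label:10} {len(enc.encode(s)):3} tokens")
```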