Why pasting URLs into ChatGPT keeps disappointing you
ChatGPT's built-in fetcher is a thin wrapper around a browser: it loads the page, runs some JavaScript, gives up if rendering takes too long, and hands whatever it got to the model. Anything behind a login, anything served from a CDN with bot detection, anything that loads its content via fetch() after the initial render tends to come back as an empty shell or a Cloudflare challenge page. You paste the URL again, get the same result, and conclude the model "can't read".
The actual fix is to do the rendering yourself, once. Convert the URL to Markdown: the converter handles JavaScript execution, paywall bypassing where allowed, and stripping the navigation, sidebar, footer, and cookie-banner garbage. ChatGPT then reads clean prose with proper headings instead of an HTML carcass.
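To make the "stripping the garbage" step concrete, here is a minimal sketch of the cleanup idea using only Python's standard library. Real converters (readability-style extractors, trafilatura, and the like) do far more; this toy version just demonstrates the principle of dropping chrome elements and keeping prose with Markdown headings. The class name and heading mapping are illustrative, not any particular tool's API.

```python
from html.parser import HTMLParser

# Elements that are page chrome, not content.
SKIP = {"script", "style", "nav", "aside", "footer", "header"}
# Map HTML heading levels to Markdown prefixes.
HEADINGS = {"h1": "#", "h2": "##", "h3": "###"}

class ProseExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []        # collected Markdown fragments
        self.skip_depth = 0  # > 0 while inside nav/script/etc.
        self.prefix = ""     # heading marker for the next text node

    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.skip_depth += 1
        elif tag in HEADINGS:
            self.prefix = HEADINGS[tag] + " "

    def handle_endtag(self, tag):
        if tag in SKIP and self.skip_depth:
            self.skip_depth -= 1
        elif tag in HEADINGS:
            self.prefix = ""

    def handle_data(self, data):
        text = data.strip()
        if text and not self.skip_depth:
            self.out.append(self.prefix + text)

def html_to_markdown(html: str) -> str:
    parser = ProseExtractor()
    parser.feed(html)
    return "\n\n".join(parser.out)
```

Feeding it `<nav>Home | About</nav><h1>Title</h1><p>Body text.</p>` yields `# Title` and `Body text.` with the navigation gone, which is the whole trick: the model only ever sees the part a human would read.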
The token-savings angle nobody mentions
Copy-paste an article's HTML into ChatGPT and you spend tokens on every class="prose-paragraph-large", every inline SVG, every analytics blob. A typical Substack post is ~120k tokens of HTML and ~6k tokens of Markdown — a 95% reduction with zero loss of meaning. On GPT-4o's 128k window, that's the difference between fitting one article and fitting twenty.
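A back-of-envelope version of that arithmetic, using the common rule of thumb of roughly 4 characters per token for English text (a real tokenizer such as tiktoken will give different exact counts, and the tiny strings here are stand-ins for a full page):

```python
def rough_tokens(text: str) -> int:
    # ~4 characters per token is a crude but serviceable estimate.
    return len(text) // 4

# A single paragraph, as served (attributes, analytics) vs. as Markdown.
html = '<p class="prose-paragraph-large" data-analytics="...">Hello, world.</p>'
markdown = "Hello, world."

saved = 1 - rough_tokens(markdown) / rough_tokens(html)
print(f"{rough_tokens(html)} -> {rough_tokens(markdown)} tokens "
      f"({saved:.0%} smaller)")
```

Even on one short paragraph the markup dominates; on a full page, where the HTML also carries inline SVGs and script blobs, the gap widens to the ~95% figure above.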