11 min read · MDisBetter

Word to Markdown for Technical Writers: Migration Playbook

Most technical writing teams who migrate from Word to a docs-as-code workflow underestimate the same thing: the back catalog. The tooling decision (MkDocs vs Docusaurus vs Jekyll vs Antora) gets the strategy meetings, the style-guide rewrite gets the workshop time, and the migration of the existing 600 .docx files quietly becomes someone's nights-and-weekends problem for six months. Done well, the migration is not a single project — it is a sequenced playbook that audits the library first, categorises documents by traffic and freshness, converts them progressively, and lets the new docs-as-code site grow alongside the legacy Word library until the cutover happens naturally. This article walks through that playbook end-to-end, with honest notes on what the web tool handles and where you should drop down to Pandoc on a local machine for batch work.

Why technical writers are migrating off Word in 2026

The case for moving technical documentation from Word to Markdown stopped being controversial around the time Stripe, Twilio, and GitLab made their public docs sites the gold standard the industry now imitates. The drivers are familiar to anyone who has lived through a documentation migration: version control with meaningful diffs, review through pull requests, automated builds and publishing, and plain-text source that any editor can open and any pipeline can process.

The decision is mostly settled. The execution is where teams stall — because the legacy library is large, heterogeneous, and full of documents whose authors have left the company.

Step 1: audit the existing Word library

Before converting anything, run an inventory. The output of this stage is a spreadsheet (or a database table, if your library is large enough to justify it) with one row per .docx file and, at minimum, the following columns: file path, title, owner or last known author, last-modified date, view count from your portal's analytics, and a provisional migration bucket.

For a library of a few hundred files, a junior writer can produce this inventory in a week. For libraries in the thousands, automate the file metadata extraction with python-docx or oletools and join against your portal's analytics export. Even an imperfect traffic signal — "this doc had 4 views last quarter" vs "this doc had 4,200" — changes the migration priorities dramatically.
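If installing python-docx isn't an option, the metadata extraction can be done with the standard library alone: a .docx file is a zip archive, and the title/author/last-modified properties live in docProps/core.xml. A minimal sketch (the demo file is a synthetic stand-in, not a real document):

```python
# A .docx is a zip archive; core metadata lives in docProps/core.xml.
# Stdlib-only sketch -- no python-docx required.
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def docx_metadata(path_or_file):
    """Return (title, creator, last_modified) from a .docx's core properties."""
    with zipfile.ZipFile(path_or_file) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    get = lambda tag: root.findtext(tag, default="", namespaces=NS)
    return get("dc:title"), get("dc:creator"), get("dcterms:modified")

# Demo with a minimal in-memory .docx stand-in:
core = (
    '<cp:coreProperties '
    'xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties" '
    'xmlns:dc="http://purl.org/dc/elements/1.1/" '
    'xmlns:dcterms="http://purl.org/dc/terms/">'
    '<dc:title>Install Guide</dc:title>'
    '<dc:creator>J. Doe</dc:creator>'
    '<dcterms:modified>2024-05-01T00:00:00Z</dcterms:modified>'
    '</cp:coreProperties>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("docProps/core.xml", core)
print(docx_metadata(buf))  # ('Install Guide', 'J. Doe', '2024-05-01T00:00:00Z')
```

Point a loop like this at the library, write one CSV row per file, and join against the analytics export in a spreadsheet.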

Step 2: categorise into three buckets

Not every document deserves the same treatment. The pragmatic split most teams converge on:

| Bucket | Definition | Action |
| --- | --- | --- |
| High-traffic / current | Frequently viewed, actively maintained, business-critical | Convert with care. Manual review of every output. First into the new site. |
| Archive / reference | Rarely viewed but legally or historically important | Bulk convert. Park in an /archive/ section. Don't restyle. |
| Stale / candidate for retirement | Not viewed in 12+ months, owner unknown or gone | Don't migrate at all. Flag for SME review or formal retirement. |

The retirement bucket is the most uncomfortable conversation but the highest-leverage. Most documentation libraries are 30-50% stale. Migrating that material costs real time and pollutes the new site with content nobody should be reading. Better to retire it explicitly than to drag it forward.
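The bucketing rule in the table can be expressed as a small function and run over the inventory spreadsheet; the view-count threshold here is an assumption to tune against your own analytics, not a standard:

```python
def bucket(views_last_quarter, months_since_edit, has_owner):
    """Assign a document to a migration bucket.
    VIEW_THRESHOLD is an assumed cutoff -- tune it to your analytics."""
    VIEW_THRESHOLD = 100
    if months_since_edit >= 12 and (views_last_quarter == 0 or not has_owner):
        return "retire"          # stale: flag for SME review or retirement
    if views_last_quarter >= VIEW_THRESHOLD:
        return "high-traffic"    # convert with care, one at a time
    return "archive"             # bulk-convert with Pandoc, don't restyle

print(bucket(4200, 2, True))   # high-traffic
print(bucket(4, 30, False))    # retire
```

Encoding the rule makes the retirement conversation concrete: the function's output is a proposal the SMEs argue with, not a decision buried in someone's head.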

Step 3: convert the high-traffic documents through the web tool

For the high-traffic bucket — the 50-200 documents that drive most reader value — the right workflow is one-at-a-time conversion through word-to-markdown with a manual quality check on every output. These are the pages readers actually see, so any conversion artifact (a collapsed table, a broken image reference, a mangled code block) is a visible defect worth a careful pass.

Workflow per document:

  1. Upload the .docx to the web converter
  2. Download the .md output
  3. Open in your editor (VS Code, Typora, Obsidian, whatever)
  4. Walk the document top to bottom, checking heading hierarchy, table integrity, image references, and code blocks
  5. Apply the new style guide as you go (rename ambiguous headings, split long paragraphs, add cross-references using your new site's URL structure)
  6. Commit to the docs-as-code repo with a meaningful message
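Step 4's structural check is partly automatable. A minimal stdlib-only sketch that flags heading-level jumps (H1 straight to H3), one of the most common artifacts in converted Word documents:

```python
import re

def heading_issues(md_text):
    """Flag heading-level jumps (e.g. H1 straight to H3) in Markdown text."""
    issues, prev = [], 0
    for lineno, line in enumerate(md_text.splitlines(), 1):
        m = re.match(r"(#{1,6})\s", line)
        if not m:
            continue
        level = len(m.group(1))
        if prev and level > prev + 1:
            issues.append((lineno, f"H{prev} -> H{level} skips a level"))
        prev = level
    return issues

sample = "# Install Guide\n\n### Prerequisites\n\n## Steps\n"
print(heading_issues(sample))  # flags the H1 -> H3 jump on line 3
```

Run it across the converted files before the editorial pass so the human reviewer spends attention on wording, not structure.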

A skilled tech writer can process 5-8 high-care documents per day at this level of attention. For 100 high-traffic documents, budget roughly three person-weeks of focused work. The output is the curated core of your new site.

Step 4: bulk-convert the archive bucket with Pandoc locally

For the archive bucket, the web tool's one-at-a-time workflow is the wrong shape. You have hundreds or thousands of files that need a structurally correct conversion but don't need restyling. The right tool is Pandoc on a local machine in a batch script:

#!/bin/bash
# Bulk convert /docs/legacy/*.docx to /docs/archive/*.md
set -euo pipefail

cd ~/docs/legacy
mkdir -p ../archive
for f in *.docx; do
  out="../archive/${f%.docx}.md"
  # Extract media into a per-document folder so image1.png from one
  # file doesn't overwrite image1.png from another
  pandoc "$f" -f docx -t gfm --wrap=preserve \
    --extract-media="../archive/media/${f%.docx}" -o "$out"
  echo "Converted: $f -> $out"
done

Pandoc is the structural workhorse of the docs world: it preserves heading hierarchy, extracts embedded images to a sibling /media/ folder, and produces GitHub-flavored Markdown that renders correctly in MkDocs, Docusaurus, and Jekyll without further tweaks. For 2,000 files, the script runs overnight on a laptop. The output goes into the new site's /archive/ section with a banner indicating the content was bulk-converted and may need editorial review.

This is the part of the playbook where being honest matters: the web tool at mdisbetter.com is a one-file-at-a-time tool. For libraries of thousands of documents, run Pandoc locally in batch. The right answer is to use both, split by bucket.

Step 5: choose the docs-as-code stack

The four contenders most technical writing teams settle between in 2026 are the same ones from the strategy meetings: MkDocs, Docusaurus, Jekyll, and Antora.

The choice matters less than the migration timing. MkDocs, Docusaurus, and Jekyll all consume Markdown, so switching between them later is largely a matter of swapping the build config, not rewriting content. Antora is the outlier: it is AsciiDoc-native, so choosing it changes the conversion target itself.

Step 6: manage style consistency during migration

The temptation when converting hundreds of documents written by dozens of authors over many years: leave the original style alone. The result: a new docs site that looks like a museum of conflicting voices. Better to invest in a style pass.

Practical checklist for the editorial pass on each high-care document: rename ambiguous headings, flatten the hierarchy to one H1 and nothing deeper than H4, split long paragraphs, normalise terminology, and point cross-references at the new site's URL structure.

Tools like Vale (an open-source prose linter) can be integrated into the docs-as-code pipeline to enforce these rules automatically on every pull request. Most teams adopt Vale within the first six months of going docs-as-code; it's the closest thing to a style copilot for technical writers.
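Getting started with Vale is a single config file at the repo root. A minimal .vale.ini, assuming the stock Vale style and Markdown sources (real configs usually add a team style package under StylesPath):

```ini
StylesPath = styles
MinAlertLevel = suggestion

[*.md]
BasedOnStyles = Vale
```

From there, `vale docs/` in CI fails the pull request on style violations, which is what keeps the migrated library from drifting back into a museum of voices.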

Step 7: handle images, tables, and diagrams

Three converted-from-Word elements need extra attention:

Images: Pandoc and the web tool both extract embedded images to a sibling folder. Resulting filenames are usually generic (image1.png, image2.png). Rename to descriptive filenames during the editorial pass. For a serious docs site, run the images through an image optimizer (squoosh, sharp) before committing — the originals embedded in Word are often 4MB PNG screenshots that should be 200KB.

Tables: Markdown's table model is much simpler than Word's. Merged cells, multi-row headers, and complex spans don't survive cleanly. For deep technical detail on what does and doesn't translate, see why Word tables are the hardest conversion problem. Practical recipe: simple data tables convert fine; complex layouts should be either flattened to simple rows/columns or replaced with a different presentation (lists, code blocks, or HTML embedded in the Markdown).
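Once a complex table has been flattened to simple rows and columns, regenerating the Markdown is mechanical. A small helper sketch for the flattening step:

```python
def md_table(rows):
    """Render a list of rows (first row = header) as a GFM pipe table.
    Cells are stringified; pipe characters inside cells are escaped."""
    esc = lambda cell: str(cell).replace("|", "\\|")
    header, *body = rows
    lines = ["| " + " | ".join(esc(c) for c in header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(esc(c) for c in row) + " |" for row in body]
    return "\n".join(lines)

print(md_table([["Flag", "Effect"],
                ["--wrap=preserve", "keep source line breaks"]]))
```

This pairs well with the editorial pass: flatten the Word table to a list of rows by hand, then let the helper emit consistent pipe syntax.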

Diagrams: Word's drawing canvas does not survive conversion meaningfully. Better long-term path: rebuild diagrams in Mermaid (rendered inline by MkDocs and Docusaurus), draw.io, or Excalidraw. The migration is the right moment to standardize.
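Mermaid diagrams live as plain text in the same repo, which is the whole point: they diff, review, and version like any other source. As an illustration, this playbook's own flow as a Mermaid flowchart:

```mermaid
flowchart LR
    A[".docx library"] --> B{"Audit and bucket"}
    B -->|high-traffic| C["Web tool + editorial pass"]
    B -->|archive| D["Pandoc batch"]
    B -->|stale| E["Retire"]
    C --> F["docs-as-code repo"]
    D --> F
```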

Step 8: cross-link and publish progressively

Don't wait for the entire library to be converted before going live. The docs-as-code site can launch with the high-traffic core (50-100 documents), with the archive bucket appearing in /archive/ and stale documents simply not migrated. Use redirects from the old SharePoint/Word URLs to the new site to preserve any deep links your users have bookmarked.
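The redirect mechanics depend on the host. Assuming a Netlify-style `_redirects` file (other hosts and the mkdocs-redirects plugin have equivalents; the paths below are hypothetical examples, not real URLs):

```
# _redirects — map old portal paths to the new docs site
/sites/docs/install-guide.docx   /guides/install/   301
/sites/docs/api-reference.docx   /reference/api/    301
```

Generate the mapping from the audit spreadsheet so every migrated document gets a redirect the day it goes live.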

For cross-feature workflows, the documentation team's Word inputs are not the only legacy format. Conference recordings and webinars become reference material via audio to Markdown; competitive documentation pages become research material via URL to Markdown; and PDF white papers become editable source via PDF to Markdown. The same docs-as-code repo can absorb content from all four sources, with the same Markdown grammar throughout.

For more on building knowledge bases at enterprise scale, see Word to Markdown for enterprise knowledge bases; for the SOP/wiki use case, see Word to Markdown for SOPs; for the deep-dive on why batch conversion is hard, see building an enterprise document migration pipeline.

Realistic timeline

For a 1,000-document Word library: budget two to three weeks for the audit and categorisation, three to six person-weeks of editorial work converting the 100-200 high-traffic documents, an overnight Pandoc run plus a few days of spot checks for the archive bucket, and the remainder of the second quarter for progressive publishing, redirects, and retirement sign-offs.

The whole migration is realistically a two-quarter project for a mid-sized team. Teams that try to compress it into a single sprint either ship a half-converted mess or burn out their writers. The progressive playbook is what makes it sustainable.

Frequently asked questions

Should I convert documents one at a time through the web tool, or batch with Pandoc?
Both — for different buckets. Use the web tool at mdisbetter.com for the high-traffic documents that need careful one-at-a-time conversion with an editorial pass. Use Pandoc locally in a bash loop for the archive bucket where structural correctness matters but stylistic restyling does not. The web tool is one-file-at-a-time by design; for thousands of documents the right answer is a local Pandoc batch script overnight, which we show in step 4. Mixing both based on document priority is the practical playbook.
What heading hierarchy should I use after converting from Word?
Standardize on a single H1 per page (the page title), H2 for major sections, H3 for sub-sections, and stop at H4 unless you have a deep reference document where it's truly necessary. Word documents accumulated over years often have five or six heading levels, much of it unintentional from copy-pasted content. Flatten aggressively during the editorial pass — most readers can't navigate beyond H3 in their head, and search engines reward clear hierarchy. MkDocs and Docusaurus both auto-generate sidebar navigation from your H2/H3 structure, so a clean hierarchy directly improves the site's usability.
How do I preserve the version history of a Word document when migrating to Git?
You don't, in any meaningful sense. Word's track-changes history doesn't translate to Git's commit log. Two pragmatic approaches: (1) commit the converted Markdown as a single initial commit per file with a message like 'Migrated from legacy Word doc 2018-2025' and start the Git history fresh from there, or (2) keep the original .docx files in a separate /legacy-archive/ folder of the repo for historical reference. Most teams do option 1 with a banner on the page noting the migration date. The real version history value of docs-as-code is forward-looking: every future change is properly versioned, blamed, and reviewed via pull request.