Automate Content Publishing with Make.com: One Article to Five Platforms

April 5, 2026 · 11 min read · make.com, content-automation, ai-content, notion, cross-post

I publish most articles to five places: my own site, LinkedIn, Dev.to, Hashnode, and X. Doing that by hand costs 30 to 45 minutes per piece. Copy the body, reformat for each platform, upload a cover image, adjust tags, set canonical URLs, schedule. Multiply by every article and the cost is hours per week of editor time that belongs elsewhere.

This is the tutorial for the Make.com scenario that replaced that routine. One Notion row flips to approved, one scenario fires, five platforms publish with proper metadata and canonical links pointing home. The setup takes an afternoon. It pays for itself in a week.

If you are picking between Make.com and n8n for this, read Make.com vs n8n for production workloads first. Make wins for fast assembly and first-party connectors. n8n wins for Code-Node flexibility and self-hosting. For this specific use case, either works. I built it in Make because the LinkedIn and Notion connectors are already there.

Why content teams lose hours to cross-posting

Cross-posting is deceptively boring work. No single step is hard. The sum is a tax on every article you ship.

A typical hand-rolled flow: open LinkedIn, paste body, delete markdown syntax that does not render, rewrite the first two lines as a hook because LinkedIn truncates. Open Dev.to, paste markdown, add frontmatter, pick four tags, set canonical_url, upload cover. Open Hashnode, authenticate, paste, pick tags again, canonical again. Medium, same again. X, write a 280 character version with a link, maybe thread it.

Every platform has its own quirks. Miss one and engagement drops or the SEO attribution leaks to the syndicated copy. The work is mechanical and high-risk, which is exactly what automation is for.

Realistic goal and stack

The goal is one article, five platforms, automatic, with canonical URLs back to my site and platform-specific intros that actually read like they were written for each audience.

The stack:

  • Source: Notion database, one row per article. Properties: title, slug, status, platforms (multi-select), published_at_* per platform, url_* per platform
  • Trigger: Make.com “Watch Database Items” on Notion, filtered to status = approved
  • Transform: Notion blocks to markdown and HTML, plus a Claude API call for platform-specific intros
  • Router: one branch per destination
  • Destinations: LinkedIn UGC Posts, Dev.to /api/articles, Hashnode GraphQL, Medium /users/{id}/posts, X v2 tweets
  • Writeback: update the Notion row with per-platform URLs and timestamps

My site is the canonical source. Every syndicated copy points back via canonical_url or equivalent. That is not optional. More on that below.

The scenario step by step

Build order in Make.com, module by module.

1. Notion > Watch Database Items. Point at your articles database. Set the filter to status = approved. Poll interval 15 minutes is fine for most teams. Make stores the last-seen cursor so you do not reprocess rows.

2. Notion > Get a Page (or HTTP to blocks/{id}/children). The trigger gives you the page metadata. You need the body separately because Notion splits content into blocks. I use the HTTP module with the official API because it handles pagination and gives me the raw blocks to convert.

3. Code / HTTP > Convert blocks to markdown. Two options. Run a small conversion service (notion-to-markdown behind a tiny HTTP endpoint) and call it here, or use a Make Code module if your plan supports it. You want clean markdown out: code fences preserved, images with URLs, headings intact.
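A minimal sketch of what that conversion service does, covering only the block types named above (headings, paragraphs, code fences, images); a production converter needs the full Notion block-type list and nested-block handling:

```python
def rich_text_to_md(rich_text):
    """Join a Notion rich_text array into plain text."""
    return "".join(span.get("plain_text", "") for span in rich_text)

def blocks_to_markdown(blocks):
    """Convert a list of Notion block objects to markdown."""
    lines = []
    for block in blocks:
        t = block["type"]
        data = block[t]
        if t == "heading_1":
            lines.append("# " + rich_text_to_md(data["rich_text"]))
        elif t == "heading_2":
            lines.append("## " + rich_text_to_md(data["rich_text"]))
        elif t == "paragraph":
            lines.append(rich_text_to_md(data["rich_text"]))
        elif t == "code":
            lang = data.get("language", "")
            lines.append(f"```{lang}\n{rich_text_to_md(data['rich_text'])}\n```")
        elif t == "image":
            # Notion images are either "file" (hosted) or "external"
            url = data.get("file", data.get("external", {})).get("url", "")
            lines.append(f"![image]({url})")
    return "\n\n".join(lines)
```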

4. Anthropic Claude > Generate platform intros. Single HTTP call to https://api.anthropic.com/v1/messages with tool use. One call returns all five platform-specific intros as structured JSON. Full pattern in Claude API structured output. The tool schema:

{
  "name": "platform_intros",
  "input_schema": {
    "type": "object",
    "properties": {
      "linkedin_hook": {"type": "string", "description": "First 2 lines, no markdown, hook only"},
      "devto_intro": {"type": "string", "description": "Markdown intro, 2-3 sentences"},
      "hashnode_meta": {"type": "string", "description": "Meta description, under 160 chars"},
      "medium_subtitle": {"type": "string", "description": "One line subtitle"},
      "x_post": {"type": "string", "description": "Under 240 chars to leave room for the URL"}
    },
    "required": ["linkedin_hook", "devto_intro", "hashnode_meta", "medium_subtitle", "x_post"]
  }
}

Force the tool with tool_choice: {"type": "tool", "name": "platform_intros"}. The response is guaranteed valid JSON matching your schema.

5. Router. One route per platform in the platforms multi-select. Use filters on each route so LinkedIn only fires when LinkedIn is selected, and so on.

6. Per-platform HTTP modules. Details in the next section.

7. Notion > Update Database Item. After each successful post, write the returned URL to url_linkedin, url_devto, etc., and stamp published_at_* with now. Put this at the end of each route.

The full scenario lands at 12 to 15 operations per article. One article a day is about 400 operations a month, well inside the Make.com Core tier.

Per-platform quirks

LinkedIn uses the UGC Posts API at https://api.linkedin.com/v2/ugcPosts. No markdown. Plaintext only. URLs auto-unfurl into cards if the Open Graph tags on your site are set. The first two lines are the hook and get cut off by the “see more” fold, so the Claude-generated hook belongs here. Use your personal or organization URN as author. Rate limit is roughly 100 posts per day per account, which no sane editorial calendar hits.
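The UGC Posts body that the HTTP module sends looks approximately like this sketch; the URN and URL are placeholders:

```python
def linkedin_ugc_body(author_urn, hook, canonical_url):
    """JSON body for POST https://api.linkedin.com/v2/ugcPosts.
    Plaintext commentary only; the ARTICLE media entry triggers the
    Open Graph unfurl on your canonical URL."""
    return {
        "author": author_urn,  # e.g. "urn:li:person:<id>" or an org URN
        "lifecycleState": "PUBLISHED",
        "specificContent": {
            "com.linkedin.ugc.ShareContent": {
                "shareCommentary": {"text": f"{hook}\n\n{canonical_url}"},
                "shareMediaCategory": "ARTICLE",
                "media": [{"status": "READY", "originalUrl": canonical_url}],
            }
        },
        "visibility": {"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"},
    }
```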

Dev.to uses POST /api/articles with a JSON body. Body is markdown. Tag limit is four, plain strings, no #. The canonical field is canonical_url (underscore). Include published: true to publish immediately. Auth header is api-key: <your key>. Rate limit is 9 posts per 30 seconds, which Make’s built-in rate-limit module handles cleanly if you ever batch.
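The Dev.to body, as a sketch, with the tag cap and the underscore field name applied:

```python
def devto_article_body(title, body_markdown, canonical_url, tags):
    """JSON body for POST https://dev.to/api/articles.
    Send with an `api-key` header; the response includes the article URL."""
    return {
        "article": {
            "title": title,
            "body_markdown": body_markdown,
            "published": True,                           # publish immediately
            "canonical_url": canonical_url,              # underscore, not camelCase
            "tags": [t.lstrip("#") for t in tags][:4],   # four max, no # prefix
        }
    }
```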

Hashnode is GraphQL, not REST. Endpoint https://gql.hashnode.com. You need the publication ID (grab it once from the dashboard, store it as a Make connection variable). The mutation:

mutation PublishPost($input: PublishPostInput!) {
  publishPost(input: $input) {
    post { id url }
  }
}

Pass title, contentMarkdown, publicationId, tags (array of objects with slug and name), and originalArticleURL for the canonical. Tags have to exist or be allowed on your publication.
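The GraphQL payload the HTTP module posts to gql.hashnode.com, sketched in Python; the publication id is a placeholder:

```python
def hashnode_publish_payload(title, markdown, publication_id, canonical_url, tags):
    """GraphQL payload for POST https://gql.hashnode.com
    (token goes in the Authorization header)."""
    mutation = """
    mutation PublishPost($input: PublishPostInput!) {
      publishPost(input: $input) { post { id url } }
    }
    """
    return {
        "query": mutation,
        "variables": {
            "input": {
                "title": title,
                "contentMarkdown": markdown,
                "publicationId": publication_id,
                "tags": [{"slug": t, "name": t} for t in tags],
                "originalArticleURL": canonical_url,  # the canonical back home
            }
        },
    }
```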

Medium uses POST /users/{userId}/posts. Pass title, contentFormat: "markdown", content, and, critically, canonicalUrl. Medium gives you limited styling control. The canonical field is the whole SEO game on Medium. Without it, Medium outranks your site for your own content.

X uses the v2 tweets endpoint, with a 280-character limit including the URL (URLs count as 23 characters regardless of length). For threads, post the first tweet, grab the ID, and post each follow-up with reply: {in_reply_to_tweet_id: <id>}. Rate limits are tight on X; stagger if you ever batch multiple articles.
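The reply chaining can be sketched as a loop that threads each tweet onto the id the previous POST returned; `send` stands in for whatever performs the actual POST /2/tweets call:

```python
def post_thread(texts, send):
    """Post a thread. `send(body)` performs POST /2/tweets and returns the
    new tweet id; each follow-up body carries reply.in_reply_to_tweet_id."""
    prev_id, ids = None, []
    for text in texts:
        body = {"text": text}
        if prev_id is not None:
            body["reply"] = {"in_reply_to_tweet_id": prev_id}
        prev_id = send(body)
        ids.append(prev_id)
    return ids
```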

Canonical URLs

This is the one thing most teams skip and it is the most expensive mistake.

Your site is the canonical source. Every syndicated copy must declare that. The reasons:

  • Without canonical tags, Google sees five near-duplicate pages and picks one to index. It usually picks the high-domain-authority platform (Medium, Dev.to), not your site.
  • With canonical tags pointing home, the syndicated copy passes link equity to your original. Your site ranks, not Medium.
  • If you ever unpublish a syndicated copy, the canonical makes sure search engines move the ranking back cleanly.

The field names per platform:

  • Dev.to: canonical_url
  • Hashnode: originalArticleURL
  • Medium: canonicalUrl
  • LinkedIn: N/A (no native field, the unfurled link handles attribution)
  • X: N/A (link in the post)

Set these in the HTTP bodies. Test with view-source: on the published copies to confirm the <link rel="canonical"> tag points at your site.
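That view-source check is easy to script. A quick regex spot check (not a full HTML parser; it tries both attribute orderings because platforms emit them differently):

```python
import re

def canonical_of(html):
    """Return the href of the first rel=canonical <link> in an HTML string,
    or None if absent."""
    m = re.search(r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)', html)
    if m is None:
        m = re.search(r'<link[^>]*href=["\']([^"\']+)["\'][^>]*rel=["\']canonical["\']', html)
    return m.group(1) if m else None
```

Fetch each published copy, run its HTML through this, and assert the result equals your article's URL on your own site.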

Structured output from Claude saves a call per platform

The naive approach is five Claude calls per article, one per platform. That is five round trips, five sets of input tokens, five rate-limit windows.

The correct approach is one call with tool use. The tool schema forces the model to return all five intros in a single structured response. That cuts API calls by 5x and lets you cache the brand-voice prefix once.

Wrap the system prompt with prompt caching on the brand-voice section. Full pattern in Claude API structured output. In practice, per article: ~1000 input tokens, ~500 output tokens, one cache hit on the system prefix. Cost per article is roughly $0.01 using Sonnet 4.6. The cost graph stays flat even as volume grows.

Error handling that keeps scenarios running

The default Make behavior is to halt the scenario on any module error. For a five-platform fan-out, that is the wrong default. One LinkedIn auth hiccup should not stop Dev.to, Hashnode, and Medium from going out.

Configure error handling on each platform route:

  • Break directive: on the HTTP module, add a break route on error. Retry once after 30 seconds.
  • Resume with log: if retry fails, write the failure to a Slack channel (or a Notion error log row) and continue. Do not halt.
  • Per-platform status: the Notion writeback should write either url_* on success or error_* with the message on failure. That way you can see at a glance which platforms need a manual rerun.
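The success-or-error writeback can be sketched as one function that builds the Notion `properties` payload; property names follow the url_* / published_at_* / error_* schema described above, and the `result` shape is an assumption of this sketch:

```python
from datetime import datetime, timezone

def writeback_properties(platform, result):
    """Notion properties payload for the per-platform writeback.
    `result` is {"ok": True, "url": ...} on success or
    {"ok": False, "error": ...} on failure."""
    if result["ok"]:
        return {
            f"url_{platform}": {"url": result["url"]},
            f"published_at_{platform}": {
                "date": {"start": datetime.now(timezone.utc).isoformat()}
            },
        }
    return {
        f"error_{platform}": {
            # Notion rich_text content is capped at 2000 chars
            "rich_text": [{"text": {"content": result["error"][:2000]}}]
        },
    }
```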

Never let one failure block the rest. Cross-posting is parallel work, treat it that way.

Rate limiting

  • LinkedIn: 100 posts per day per account. No real risk for normal editorial pace.
  • Dev.to: 9 posts per 30 seconds. Make’s rate-limit module handles this if you ever backfill.
  • Hashnode: soft limits, not documented precisely. One per minute is safe.
  • Medium: undocumented. I have not hit it.
  • X: tight. 300 posts per 3 hours per user. If you thread, each reply counts. Stagger scheduled posts at least 2 minutes apart.

For daily publication this is all well inside limits. For backfilling an archive, add a Sleep module between platforms.

Metadata you want to capture

Every platform returns useful data in the response. Capture it. You will need it for edits, deletes, and analytics later.

Per platform, write back to Notion:

  • Published URL (for sharing and analytics joins)
  • Published timestamp (in the platform’s own time, for engagement windows)
  • Platform-specific ID (LinkedIn URN, Dev.to article ID, Hashnode post ID, Medium post ID, X tweet ID). You need these for any future edit or delete call.

Schema in Notion: one URL property, one date property, one text property per platform. Looks verbose, but it is the only way to close the loop.

Cost of running this

Running costs for one article per day:

  • Make.com ops: 12-15 per article, ~400 per month (Core tier covers it)
  • Claude API: ~$0.01 per article, ~$0.30 per month
  • Notion: $0
  • Platform APIs: $0

Total ongoing cost is the Make.com subscription plus pennies in Claude tokens. The hand-rolled alternative is 30 to 45 minutes of editor time per article, which for one article a day is 15 to 22 hours a month of skilled work. The ROI is not close.

From cross-posting to full editorial ops

Cross-posting is one pipe in a full editorial system. The others:

  • Briefing generation: research > outline > approved brief, before writing starts
  • Draft review: style checks, claim verification, internal link suggestions
  • Performance tracking: pull engagement metrics at 48h, 7d, 30d, log back to Notion
  • Content repurposing: long article > LinkedIn carousel > X thread > newsletter section

You can build each piece yourself in Make.com or n8n. If you want it assembled as one product with the editorial workflows already wired up, that is Teedian. Same building blocks, pre-integrated.

Common mistakes

  • No canonical URLs. Medium outranks you for your own content. Every syndicated copy must declare canonical back to your site.
  • Identical text on every platform. LinkedIn is not Dev.to is not Medium. Each audience expects a different tone on the intro. The Claude structured-output call is cheap, use it.
  • No error handling. First failure halts the scenario, the other four platforms miss the post, you find out Monday morning.
  • Publishing to X before the blog is live. The X post links to the canonical URL. If that 404s for an hour because the deploy lagged, you lose the highest-velocity engagement window. Order the scenario so the site canonical is live first.
  • No retry on transient failures. LinkedIn auth blips happen. Retry once before escalating.
  • Hardcoded publication IDs. Store them as Make connection variables or Notion config rows, not inline in the module.

Extending

Once the core scenario is stable, extensions are straightforward:

  • Add Telegram posting for community groups (simple bot API call)
  • Add email newsletter (Mailchimp, Brevo) as another destination
  • Add a 48-hour-later trigger that pulls engagement and logs it back to Notion
  • Add a rewriter that tunes the same article for a second language and runs the pipeline again
  • Chain in a Claude structured-output step that generates thread variants per platform

Each extension is another branch off the existing router.

When n8n is a better fit

Make.com is the right tool for this tutorial because the connectors (Notion, LinkedIn, HTTP, Code) are first-party and the scenario fits the visual canvas. When to reach for n8n instead:

  • You want full Code-Node flexibility for per-platform text tweaks without inventing sub-scenarios
  • You need self-hosting for data sovereignty (regulated industries, EU data residency)
  • Volume is over 20,000 executions per month and Make.com ops pricing starts to hurt
  • You need long-lived jobs (over 40 minutes) that Make’s per-scenario timeout will kill

For editorial teams shipping up to a few articles a day, Make.com is the faster build and the cheaper run. For heavier or more code-bent teams, n8n wins. The Make vs n8n breakdown goes deeper. German readers: Make vs n8n Vergleich.

See the full Teedian engine