Building an Automated Content Pipeline That Posts to 6 Platforms
Every developer knows the pain: you write a great article, publish it on one platform, and then spend the next hour manually reformatting and posting it everywhere else. LinkedIn wants a professional tone. Twitter needs a 280-character hook. Dev.to wants proper frontmatter. Instagram needs an image.
I got tired of this, so I built an automated content distribution pipeline that publishes to 6 platforms from a single source of truth. Here's how.
The Architecture
The system is built on a simple principle: write once, distribute everywhere.
Hashnode (source of truth)
└──► Cross-Post Service
├──► Dev.to (API, canonical URL back to Hashnode)
├──► LinkedIn (Marketing API v2, article share)
├──► Threads (Meta API, text teaser)
├──► Instagram (Content Publishing API, card image)
├──► Twitter/X (API v2, 280-char hook)
└──► daily.dev (auto via RSS + Squad)
Each platform gets a tailored variant — not a blind copy-paste. The pipeline understands platform-specific constraints:
| Platform | Format | Limit | Auth |
|---|---|---|---|
| Dev.to | Markdown + frontmatter | ~10K words | API key |
| LinkedIn | Rich text + article card | 3,000 chars | OAuth 2.0 |
| Threads | Plain text | 500 chars | Bearer token |
| Instagram | Image + caption | 2,200 chars | Graph API |
| Twitter/X | Text + link | 280 chars | OAuth 1.0a HMAC-SHA1 |
| daily.dev | RSS auto-discovery | N/A | RSS feed |
The Platform Adapter Pattern
Each platform implements a common interface:
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class CrossPostResult:
    # Fields shown for illustration
    platform: str
    url: str | None = None
    success: bool = True

class PlatformAdapter(ABC):
    RATE_LIMIT: tuple[int, int]  # (max_requests, window_seconds)

    @abstractmethod
    async def publish(
        self, title: str, body: str,
        canonical_url: str | None = None,
        tags: list[str] | None = None,
        metadata: dict | None = None,
    ) -> CrossPostResult: ...

    async def check_rate_limit(self) -> bool:
        # Redis-backed sliding window
        ...
```
Adding a new platform means implementing a single class. The cross-post orchestrator never touches platform-specific details — it just calls `adapter.publish()`.
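To illustrate the fan-out (function and attribute names here are assumptions, not the real code), a minimal orchestrator publishes to every adapter concurrently and isolates failures:

```python
import asyncio

async def cross_post(adapters, title: str, body: str, canonical_url=None) -> dict:
    """Publish to all platforms concurrently; one failure never blocks the rest."""

    async def safe_publish(adapter):
        if not await adapter.check_rate_limit():
            return (adapter.name, "rate-limited")
        try:
            result = await adapter.publish(title, body, canonical_url=canonical_url)
            return (adapter.name, result)
        except Exception as exc:  # record the error per platform instead of raising
            return (adapter.name, exc)

    pairs = await asyncio.gather(*(safe_publish(a) for a in adapters))
    return dict(pairs)
```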
Smart Content Composition
The hardest part isn't the API calls — it's making content feel native to each platform. Nobody wants to read a 2,000-word blog post crammed into a tweet.
For Twitter, the composer prioritizes:
- Body text (or title as fallback)
- Canonical URL (t.co wraps to 23 chars)
- Hashtags (only if space permits)
- Truncation with an ellipsis at 280 chars
For Threads, it's different — title and body are joined, links are validated (max 5 unique URLs), and text is capped at 500 chars.
The Quality Ladder
Before any content goes out, it runs through a quality ladder:
- Ollama (local, free) generates the first draft
- A deterministic quality gate checks readability, hashtag density, and boring-start patterns
- If it fails, Claude Haiku polishes it (~$0.002 per variant)
- If that still fails, a template fallback guarantees output
This keeps costs near zero for 80% of content while ensuring quality.
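The control flow of the ladder is just a chain of fallbacks. Sketched with stand-in generator callables and a toy gate (the real gate scores readability, hashtag density, and boring-start patterns):

```python
from typing import Callable

def passes_gate(text: str) -> bool:
    """Toy deterministic gate: non-empty, not hashtag-stuffed, no boring opener."""
    if not text.strip():
        return False
    words = text.split()
    hashtag_density = sum(w.startswith("#") for w in words) / len(words)
    boring_start = text.lower().startswith(("in this post", "today i"))
    return hashtag_density < 0.3 and not boring_start

def quality_ladder(draft_fns: list[Callable[[], str]], fallback: str) -> str:
    """Try each rung in order (e.g. Ollama, then Claude Haiku); then the template."""
    for generate in draft_fns:
        text = generate()
        if passes_gate(text):
            return text
    return fallback  # guaranteed output
```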
The Feedback Loop
A weekly Celery task evaluates published content performance:
- Pulls engagement metrics (likes, comments, shares) per platform
- Compares top performers vs bottom performers
- Extracts patterns into a LearningEmbedding table
- These patterns are injected into future content generation prompts
Over time, the system learns what works on each platform and adapts.
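A toy version of the pattern-extraction step, assuming each post carries an engagement score and a few boolean features (the real system persists patterns as embeddings in the LearningEmbedding table):

```python
def extract_patterns(posts: list[dict], feature_keys: list[str]) -> dict[str, float]:
    """Lift of each feature: its rate in top performers minus the bottom half."""
    ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
    half = len(ranked) // 2
    top, bottom = ranked[:half], ranked[half:]

    def rate(group: list[dict], key: str) -> float:
        return sum(bool(p[key]) for p in group) / len(group)

    # Positive lift means the feature is over-represented in top performers
    return {key: rate(top, key) - rate(bottom, key) for key in feature_keys}
```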
Secrets Management
With 6+ platforms, credential management gets messy fast. Every secret lives in 1Password and is referenced via op:// URIs in a single .env.tpl file. Both dev and prod resolve secrets at runtime — no .env files committed, ever.
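Concretely (vault and item names here are hypothetical), the committed template holds only `op://` references, and the 1Password CLI resolves them at process start:

```shell
# .env.tpl — safe to commit; references, never values
DEVTO_API_KEY="op://Content-Pipeline/devto/api-key"
TWITTER_CONSUMER_SECRET="op://Content-Pipeline/twitter/consumer-secret"

# Resolve secrets at runtime, identically in dev and prod
op run --env-file=.env.tpl -- uvicorn app.main:app
```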
What's Next
- Performance dashboard — visualize engagement metrics per platform in the UI
- Token refresh automation — OAuth tokens expire (LinkedIn: 60 days, Meta: 60 days)
- RSS feed — already serving published content for aggregators
- WhatsApp broadcast — notify subscribers when new content drops
The Stack
- Backend: Python 3.13, FastAPI, SQLAlchemy 2 (async), Celery, PostgreSQL + pgvector
- Frontend: Next.js 16, TypeScript, TanStack Query, Tailwind, shadcn/ui
- AI: Quality ladder (Ollama to Haiku), RAG enrichment, performance learning loop
- Infra: 1Password secrets, Redis rate limiting, Hetzner VPS, Cloudflare tunnel
The entire pipeline — from content creation to multi-platform distribution — runs from a single dashboard. Write once, publish everywhere, learn from the results.
This post was itself cross-posted to multiple platforms using the pipeline described above.