
LinkedIn Content Engine

A content automation system that turns a weekly theme and field notes into a complete, structured LinkedIn brief — four ready-to-post drafts, trend-matched to that week's headlines, exported as formatted Word documents in under two minutes.

Node.js · Express · Anthropic SDK · docx · TypeScript
Outcomes
- 4 posts per weekly brief
- <2 min to generate a full week
- Trend-fed: live Google News input

Consistent LinkedIn presence is one of those content marketing problems that looks manageable until you’re doing it every week. The brief needs to be topical, the formats need to vary, and the drafts need to be structured enough to edit quickly rather than rewrite from scratch. Doing that manually every Monday is a tax on time that compounds badly across a quarter. I built the LinkedIn Content Engine to eliminate that tax.

What I built

A Node.js pipeline with an Express backend and a form-based frontend. A content planner fills in four fields: the week’s theme, field notes or observations from the week, a repurposing flag for any existing content worth reactivating, and an optional context note. The system handles the rest.
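A minimal sketch of the planner's input shape, assuming hypothetical field names (`theme`, `fieldNotes`, `repurpose`, `context`) for the four form fields described above:

```typescript
// Illustrative input shape for the weekly brief form; field names are
// assumptions, not the system's actual schema.
interface WeeklyBriefInput {
  theme: string;       // the week's theme
  fieldNotes: string;  // observations gathered during the week
  repurpose?: string;  // existing content worth reactivating
  context?: string;    // optional context note
}

// Reject submissions missing either required field before generation runs.
function validateInput(input: WeeklyBriefInput): string[] {
  const errors: string[] = [];
  if (!input.theme.trim()) errors.push("theme is required");
  if (!input.fieldNotes.trim()) errors.push("fieldNotes is required");
  return errors;
}
```

Keeping the optional fields genuinely optional means the planner can run the pipeline on a bare theme plus notes, which is the common Monday-morning case.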

Trend integration. Before generation runs, the engine pulls the week’s top headlines from Google News via RSS, filtered by topic relevance and cached for two hours. The trend data gets woven into the generation prompt so each post has a natural hook to something happening in the market right now, not a generic observation that could have been written any week.
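The two-hour cache can be sketched as a simple in-memory TTL map; the names here (`TrendCache`, `TTL_MS`) are illustrative, and fetching the RSS feed itself is assumed to happen elsewhere:

```typescript
const TTL_MS = 2 * 60 * 60 * 1000; // two hours, per the caching policy above

// Minimal per-topic cache: serve fresh headlines, evict stale ones.
class TrendCache {
  private entries = new Map<string, { value: string[]; storedAt: number }>();

  get(topic: string, now = Date.now()): string[] | undefined {
    const hit = this.entries.get(topic);
    if (!hit) return undefined;
    if (now - hit.storedAt > TTL_MS) {
      // Stale entry: drop it so the next run re-fetches the feed.
      this.entries.delete(topic);
      return undefined;
    }
    return hit.value;
  }

  set(topic: string, headlines: string[], now = Date.now()): void {
    this.entries.set(topic, { value: headlines, storedAt: now });
  }
}
```

Injecting `now` as a parameter keeps the expiry logic testable without waiting two hours.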

The generation pipeline. All four posts are generated in a single structured pass through the Claude API. The system prompt encodes a full content strategy: hook formats, storytelling structures, the ratio of insight to call-to-action, tone calibration for a professional audience. The model returns clean JSON. The server validates the shape and hands it to the export layer.
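The shape check before export might look like the following; the exact schema (a `posts` array with `hook`/`body`/`cta` fields) is an assumption for illustration:

```typescript
// Hypothetical shape of one generated post.
interface GeneratedPost {
  hook: string;
  body: string;
  cta: string;
}

// Parse the model's raw JSON and fail loudly if the shape is off,
// rather than handing a malformed payload to the export layer.
function parsePosts(raw: string): GeneratedPost[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data.posts) || data.posts.length !== 4) {
    throw new Error("expected exactly four posts");
  }
  for (const post of data.posts) {
    for (const key of ["hook", "body", "cta"]) {
      if (typeof post[key] !== "string" || !post[key].trim()) {
        throw new Error(`post missing field: ${key}`);
      }
    }
  }
  return data.posts;
}
```

Validating at this boundary means a bad generation fails fast with a clear error instead of producing a half-empty Word document.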

Dual-document export. Two Word documents come out of every run. The main brief contains all four posts formatted and ready to edit. The second document is structured as a NotebookLM source, so the content can feed directly into a research and ideation workflow without copy-pasting. Both files are zipped and returned in a single download.
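A simplified sketch of the export mapping: each post becomes a heading plus body paragraphs. The real system uses the docx library to build Word documents; this plain-object model is only meant to show the structure being assembled, and the field names are assumptions:

```typescript
// Neutral document model standing in for docx Paragraph objects.
interface DocBlock {
  style: "heading" | "paragraph";
  text: string;
}

// Map one generated post into an ordered list of document blocks:
// a numbered heading, the body split on blank lines, then the CTA.
function postToBlocks(index: number, hook: string, body: string, cta: string): DocBlock[] {
  return [
    { style: "heading", text: `Post ${index + 1}: ${hook}` },
    ...body.split("\n\n").map((text) => ({ style: "paragraph" as const, text })),
    { style: "paragraph", text: cta },
  ];
}
```

The same block list can feed both outputs: the editable brief and the NotebookLM-structured source, just with different templates around it.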

Usage logging. Every run logs token consumption, generation time, and week metadata. The log sits behind an API route so it’s easy to track cost and output quality over time without leaving the tool.
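One log record and a roll-up over a week might be sketched as follows; the field names (`inputTokens`, `generationMs`, and so on) are assumptions based on what the log tracks:

```typescript
// Hypothetical shape of one usage-log entry.
interface UsageRecord {
  week: string;         // e.g. "2024-W19"
  inputTokens: number;
  outputTokens: number;
  generationMs: number;
}

// Aggregate records into a quick cost-and-latency overview.
function summarise(records: UsageRecord[]): { totalTokens: number; avgMs: number } {
  const totalTokens = records.reduce((n, r) => n + r.inputTokens + r.outputTokens, 0);
  const avgMs = records.length
    ? records.reduce((n, r) => n + r.generationMs, 0) / records.length
    : 0;
  return { totalTokens, avgMs };
}
```

Serving a summary like this from the API route is what makes cost and output quality trackable without leaving the tool.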

Why a pipeline rather than a prompt

The manual version of this job is not just writing. It’s sourcing trends, deciding formats, maintaining tonal consistency across four different posts, and structuring output so it’s fast to edit rather than slow to rewrite. A single ChatGPT prompt doesn’t encode any of that. A pipeline does. The content strategy lives in the system prompt, the trend awareness lives in the RSS layer, and the structure lives in the export templates. The result is output that lands closer to publish-ready, not just “something to start from.”

What it improved

A full weekly content brief that used to take forty-five minutes of planning, drafting, and formatting now takes under two minutes to generate and another ten to review and personalise. The posts come out topical, varied in format, and structured consistently enough that editing feels like finishing rather than starting. For any team running a LinkedIn content programme at volume, that compression is the difference between content marketing that actually ships and content marketing that stays on the backlog.