i saw a tweet from tom dörr about a reddit scraper that works without api keys. my immediate thought: this should be an mcp server.
i've been coding with ai for over a year now. the pattern i keep coming back to is: give ai tools, not data. instead of copy-pasting reddit threads into a prompt or telling claude "hey, reference this repo and try to scrape data," just make the capability available directly. let claude decide when to fetch, what to query, and how to combine information.
so i built mcp-reddit: an mcp server that gives claude desktop and claude code native access to reddit. no api keys, no oauth flows. scrape subreddits, users, posts. store everything locally. query offline.
build time: ~3 hours (thanks claude code)
auth required: none
install: pip install mcp-reddit
why this matters
reddit is genuinely useful for research. what are people complaining about? what's trending in a niche? what pain points exist? business ideas? it's all there in discussions.
but accessing it programmatically sucks. reddit's api has rate limits, authentication requirements, and pricing that doesn't make sense for personal tools. the friction kills casual use cases.
the scraper i found uses old.reddit.com and libreddit mirrors—public, scrapeable, no auth needed. wrapping it as an mcp server means i can just ask claude "what's trending on r/claudeai?" and it handles the rest.
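to see how little is needed, here's a toy version of the no-auth trick: old.reddit.com serves any listing as plain json if you append .json, so a descriptive user-agent is the only requirement. the real scraper parses html and falls back to libreddit mirrors; this sketch just shows the core idea.

```python
import requests

def fetch_hot(subreddit: str, limit: int = 10) -> list[dict]:
    # reddit serves listings as json with no auth; it only asks for a
    # descriptive user-agent (the default python one tends to get blocked)
    url = f"https://old.reddit.com/r/{subreddit}/hot.json?limit={limit}"
    resp = requests.get(url, headers={"User-Agent": "mcp-reddit-demo/0.1"}, timeout=10)
    resp.raise_for_status()
    return [child["data"] for child in resp.json()["data"]["children"]]

for post in fetch_hot("claudeai"):
    print(post["score"], post["title"])
```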
the build: 3 hours with claude code
i used claude opus 4.5 via claude code for the whole thing. the process was smooth:
6:57 UTC: project setup, claude.md, hooks
7:40 UTC: core mcp server done, 7 tools working
8:19 UTC: published to pypi
9:14 UTC: added media downloads, bumped to v0.2
from zero to published package in a few hours. that's the power of coding with ai—you can actually ship side projects instead of abandoning them halfway through.
what it does
flowchart LR
    A[Claude Desktop or Claude Code] -->|tool call| B[mcp-reddit MCP Server]
    B -->|scrape| C[old.reddit.com]
    B -->|fallback| D[Libreddit Mirrors]
    B -->|store| E[Local SQLite at ~/.mcp-reddit/data]
    E -->|query| B
    B -->|response| A
8 tools available to claude:
scrape_subreddit — bulk collect posts (hot, new, top, with time filters)
scrape_user — get posts and comments from a user
scrape_post — single post with full comments + optional media download
get_posts / get_comments / search_reddit — query local database
get_top_posts — highest-scoring scraped content
list_scraped_sources — see what you've collected
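for a sense of how the wiring works, here's a minimal sketch of declaring a tool like scrape_subreddit with the official mcp python sdk (FastMCP). the decorator publishes the function's signature and docstring to claude as a callable tool. the db path and single-table schema here are assumptions for illustration, not mcp-reddit's actual source.

```python
import json
import sqlite3
from pathlib import Path

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reddit")
DB = Path.home() / ".mcp-reddit" / "data" / "reddit.db"  # assumed filename

@mcp.tool()
def scrape_subreddit(subreddit: str, sort: str = "hot", limit: int = 25) -> str:
    """Scrape a subreddit listing and store the posts locally."""
    url = f"https://old.reddit.com/r/{subreddit}/{sort}.json?limit={limit}"
    resp = requests.get(url, headers={"User-Agent": "mcp-reddit-demo/0.1"}, timeout=10)
    posts = resp.json()["data"]["children"]
    DB.parent.mkdir(parents=True, exist_ok=True)
    with sqlite3.connect(DB) as conn:  # toy schema: one row per post, json blob
        conn.execute("CREATE TABLE IF NOT EXISTS posts (id TEXT PRIMARY KEY, data TEXT)")
        conn.executemany(
            "INSERT OR REPLACE INTO posts VALUES (?, ?)",
            [(p["data"]["id"], json.dumps(p["data"])) for p in posts],
        )
    return f"scraped {len(posts)} posts from r/{subreddit}"

if __name__ == "__main__":
    mcp.run()  # speaks mcp over stdio, which is how claude launches it
```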
offline-first
everything gets stored locally in sqlite. scrape once, query forever. no network calls needed after the initial fetch.
this is intentional. if you're researching a topic, scrape the relevant subreddits once, then ask claude to analyze, summarize, find patterns—all from your local data. faster, and you're not hammering reddit's servers.
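once the data is local, "analysis" is just a sqlite read. here's a sketch of pulling the top posts offline, assuming the toy schema from the sketch above rather than mcp-reddit's real one:

```python
import json
import sqlite3
from pathlib import Path

DB = Path.home() / ".mcp-reddit" / "data" / "reddit.db"

with sqlite3.connect(DB) as conn:
    rows = conn.execute("SELECT data FROM posts").fetchall()

# rank scraped posts by score, no network call involved
posts = sorted((json.loads(r[0]) for r in rows), key=lambda p: p["score"], reverse=True)
for post in posts[:5]:
    print(post["score"], post["title"])
```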
setup
install:
pip install mcp-reddit
add to claude desktop or claude code config:
{
  "mcpServers": {
    "reddit": {
      "command": "uvx",
      "args": ["mcp-reddit"]
    }
  }
}
restart claude. done.
why this one felt different
i've built other things—chrome extensions, websites, apps. they all took a while. this was the fastest i've ever gone from idea to something live that other people can use.
it's also my first package on pypi (didn't even know about pypi before this). checked pypistats after one day: 320+ downloads.
there are entire paid products built around this—gummysearch, painonsocial, saasfinder—all for "find business ideas from reddit" or "research pain points." this mcp just... does that. for free. in your claude desktop.
how i'm using it
mostly casual stuff so far—asking what's trending on r/claudeai, checking discussions. but the real potential is research: finding pain points, business ideas, what people are complaining about in a niche.
haven't fully explored it yet, but the capability is there now. that's the point—build the tool, make it available, use it when you need it.
the takeaway: if you see a useful tool or script, think about whether it should be an mcp server. the pattern is powerful—give ai tools, not data. let it decide how to use them.
building mcps is easier than you think, especially with claude code. this took me a few hours. the hardest part was deciding to start.
credit to @ksanjeev284 for the underlying scraper.