Over the last two years I’ve built a small ecosystem of tools—some practical, some ambitious, some “because I wanted to see if I could.” Looking back, the pattern is pretty clear: when something feels oddly hard, slow, or opaque, I tend to build a thinner, clearer version that fits the way I actually think.
Here’s the tour.
The “I just want this to work” automations
A campground availability checker (Glacier National Park edition)
Glacier National Park is close to me, and it’s one of those places where the planning window never matches real life. By the time I’m ready to book a campsite, everything is gone—because the reservation system rewards people who plan early, not people who plan well.
So I stopped relying on the official waitlist and built my own watcher.
I reverse-engineered the API behind the campground reservation site and wrote a Python script that runs as a GitHub Actions workflow every 5–10 minutes. It can check:
- any campsite in a campground,
- specific sites I care about,
- and even vehicle permits for Going-to-the-Sun Road.
When it finds availability, it sends me a Telegram notification. It’s simple, reliable, and it works.
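The core loop is simple enough to sketch. This is a hedged sketch, not the real script: the availability endpoint, response shape, and IDs are hypothetical stand-ins (the actual reservation API differs), and only the Telegram Bot API URL is real.

```python
import json
import os
import urllib.parse
import urllib.request

# Hypothetical endpoint -- the real reservation API and its IDs differ.
AVAILABILITY_URL = "https://example.com/api/campgrounds/{campground_id}/availability"
# Real Telegram Bot API endpoint.
TELEGRAM_API = "https://api.telegram.org/bot{token}/sendMessage"

def find_open_sites(payload, wanted_sites=None):
    """Return site IDs with at least one 'Available' date, optionally
    restricted to the specific sites I care about."""
    open_sites = []
    for site_id, dates in payload.get("campsites", {}).items():
        if wanted_sites and site_id not in wanted_sites:
            continue
        if any(status == "Available" for status in dates.values()):
            open_sites.append(site_id)
    return open_sites

def fetch_availability(campground_id):
    """Pull the (hypothetical) availability JSON for one campground."""
    url = AVAILABILITY_URL.format(campground_id=campground_id)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def notify(text):
    """Send a Telegram message via the Bot API."""
    token = os.environ["TELEGRAM_TOKEN"]
    data = urllib.parse.urlencode(
        {"chat_id": os.environ["TELEGRAM_CHAT_ID"], "text": text}
    ).encode()
    urllib.request.urlopen(TELEGRAM_API.format(token=token), data=data, timeout=10)
```

A GitHub Actions cron schedule (something like `*/10 * * * *`) then runs the check on the 5–10 minute cadence.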
Calendar-based cash flow planning (twice)
I’ve always felt like cash flow planning tools miss something obvious: calendars are how humans naturally think about time-based commitments. So I built two different cash flow planner apps that use a calendar interface for scheduling expenses.
The newest one is intentionally “quick and dirty” in the best way:
- a lightweight Flask app,
- a calendar library,
- and a YAML file to store cash flow data.
Half a day of work got me something that feels more intuitive than most commercial options I’ve tried. It still feels like a weird gap in the market: good calendar-based cash flow planning shouldn’t be rare.
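The whole data model fits in a YAML file. Here's a minimal sketch of what that can look like; the schema (a starting balance plus dated entries) is my assumption, not necessarily the app's real format.

```python
from collections import defaultdict
from datetime import date

import yaml  # PyYAML

# Illustrative cash flow file -- the schema here is an assumption.
SAMPLE = """
starting_balance: 2500.00
entries:
  - name: rent
    amount: -1400.00
    date: 2024-07-01
  - name: paycheck
    amount: 2100.00
    date: 2024-07-05
"""

def daily_balances(doc):
    """Running balance on each date that has at least one entry.
    PyYAML parses ISO dates into datetime.date objects for us."""
    by_day = defaultdict(float)
    for entry in doc["entries"]:
        by_day[entry["date"]] += entry["amount"]
    balance = doc["starting_balance"]
    out = {}
    for day in sorted(by_day):
        balance += by_day[day]
        out[day] = round(balance, 2)
    return out

doc = yaml.safe_load(SAMPLE)
```

The Flask layer then just renders those per-day balances onto the calendar grid.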
Newsletter and list infrastructure (built for scale)
Deliverability and list platforms: SES + domains + Listmonk/Sendy
I’ve built list and delivery infrastructure for multiple newsletters using:
- Amazon SES for delivery,
- custom domains for reputation and branding,
- and both Listmonk and Sendy as front-end platforms.
This is very much a “playbook” I’ve refined over time: get content into people’s inboxes, run the system cleanly, keep deliverability healthy, and treat the pipeline like a product.
In total, I’ve operated this successfully at roughly 30–40k subscribers across multiple lists.
RAG, dashboards, and “make the invisible visible” interfaces
A custom RAG app with retrieval controls (for a friend)
One of the more distinctive things I built was a retrieval-augmented generation app for a friend. It runs in a Docker container, has a Streamlit front end, and uses LanceDB for the vector database.
But the interesting part isn’t the tech stack—it’s the philosophy.
Most RAG apps try to hide retrieval. They quietly fetch context and blend it into the answer, as if the user shouldn’t have opinions about what got retrieved.
Mine does the opposite: it makes retrieval tunable.
The interface gives the user control over:
- how many documents are retrieved,
- a threshold slider to control “how close” matches must be,
- and a histogram visualizing the distance distribution, so you can see retrieval quality at a glance.
Instead of forcing you into item-by-item micromanagement (“exclude this chunk”), it offers a more useful middle layer: “pull fewer, but more relevant,” or “pull more, but allow looser matches.”
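Those two controls reduce to two parameters on the retrieval call: a top-k limit and a distance cutoff. Here is a hedged sketch of that middle layer, with plain-Python cosine distance standing in for the LanceDB query (the function names and thresholds are mine, not the app's):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def retrieve(query_vec, chunks, top_k=5, max_distance=0.4):
    """chunks: list of (text, embedding) pairs.
    top_k is the 'how many documents' control; max_distance is the
    'how close must matches be' slider. Returns (distance, text) pairs,
    closest first -- and the distance list is what feeds the histogram."""
    scored = sorted(
        (cosine_distance(query_vec, vec), text) for text, vec in chunks
    )
    hits = [(d, t) for d, t in scored if d <= max_distance]
    return hits[:top_k]
```

Tightening `max_distance` gives you “fewer, but more relevant”; raising `top_k` with a looser cutoff gives you “more, but allow looser matches.”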
Then I added a prompt library: the final LLM call includes retrieved context plus a selected prompt from a curated set. I’ve seen partial versions of this in other tools like Onyx, but I liked the simplicity of treating prompts like reusable presets. It made the tool dramatically more usable.
A Streamlit app to visualize Indeed hiring data (including a Pyodide version)
This one was a quick build, but satisfying: a Streamlit app that live-retrieves Indeed’s job data from the GitHub repo where they publish it, then graphs it in a friendly interface.
I even made a version that runs in Pyodide (Python in the browser), which was mostly just fun: fewer moving parts, instant interactivity, and a neat demonstration of “this doesn’t have to be heavy to be useful.”
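The app is mostly “fetch a CSV, smooth it, chart it.” A hedged sketch of that shape, with a placeholder URL (check the Hiring Lab repo for the real file layout) and a column name that's my assumption:

```python
import pandas as pd

# Placeholder URL -- the real repo layout and filenames differ.
CSV_URL = "https://raw.githubusercontent.com/example/hiring-data/main/postings.csv"

def load():
    """Live-retrieve the published CSV from GitHub."""
    return pd.read_csv(CSV_URL)

def smooth(df, window=7):
    """Rolling mean over the postings index to tame daily noise.
    'postings_index' is an assumed column name."""
    out = df.copy()
    out["smoothed"] = out["postings_index"].rolling(window, min_periods=1).mean()
    return out
```

In the Streamlit version, `st.line_chart(smooth(load())["smoothed"])` is essentially the whole interface; the Pyodide version runs the same logic in the browser.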
Hotel rate tracking via Booking.com scraping (pre-Apify lessons)
I built a system that, for a given city and time window (usually 6–12 months), scraped Booking.com hotel pricing and saved it into JSON. Then I wrapped it with a Streamlit viewer.
The idea was straightforward: help Airbnb operators price competitively by knowing what hotels are charging.
The reality was also straightforward: scraping is the hard part. Dynamic sites don’t want to be scraped, and maintaining scrapers is a tax you pay forever.
If I were building this today, I’d almost certainly use an Apify actor for the data collection and focus on the customer-facing layer. This project taught me (again) that sometimes the “best code” is rented.
A client dashboard: enrich, qualify, and suggest outreach
I also built a multi-client dashboard to surface performance numbers across different systems, including website visitor de-anonymization.
The flow looked like this:
- Pull site visitor data from a de-anonymization tool (Warmly? Clearbit? one of that category).
- Enrich the visitor/company.
- Use an LLM to grade the visitor as a prospect.
- Show the result in a dashboard and suggest first outreach actions.
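The grade-and-suggest step is the interesting part of that flow. A hedged sketch with the LLM behind a pluggable callable, where the prompt wording and JSON schema are my assumptions rather than the production prompt:

```python
import json

def grade_visitor(company, llm_complete):
    """Ask an LLM to grade an enriched company as a prospect.

    company: enriched attributes (dict) from the de-anonymization step.
    llm_complete: any callable(prompt: str) -> str that returns JSON text
    (an OpenAI-style completion call in production).
    """
    prompt = (
        "Grade this website visitor as a sales prospect.\n"
        f"Company: {json.dumps(company)}\n"
        'Reply as JSON: {"grade": "A-F letter", "reason": "one sentence", '
        '"first_touch": "one suggested outreach action"}'
    )
    return json.loads(llm_complete(prompt))
```

The dashboard then just renders `grade`, `reason`, and `first_touch` next to each visitor.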
The implementation evolved over time—starting as a Flask app, experimenting with different front-end approaches (including fast HTML-style tooling), and eventually landing on something practical: a place where clients could see “enriched and qualified” visitors and what to do next.
A Cloudflare KV backend for LinkedIn auth cookies
One oddly specific piece of infrastructure: a Cloudflare Workers + KV store backend that securely stores LinkedIn authentication cookies. A browser plugin front end could fetch/store those cookies via the backend.
This was part of building systems that interact with LinkedIn while keeping the sensitive “session” layer separate and controllable.
The big one: a social listening + scraping system
If there’s a “main thread” through a lot of my recent work, it’s this:
I built a social listening / content scraping system that stores publishing and social activity in MongoDB. It can track a predefined set of people (call them influencers, experts, prospects—whatever fits), and it can also discover content through search-based discovery across the web and platforms like LinkedIn and Twitter.
At its largest ambition, it’s basically a partial “digital twin” of an information ecosystem: who’s publishing, what they’re saying, what’s getting engagement, and how activity clusters over time.
And once that data exists, you can do interesting things with it:
- Concise activity feeds: Instead of wrestling with LinkedIn’s UI for each person, you can see relevant ecosystem activity in a cleaner, denser format.
- Engagement recommendations: Interleave the feed with guidance on how to engage with a prospective client—what to comment on, what to reference, what signals matter.
- Automated newsletter generation: Filter to “the good stuff,” summarize it, and generate newsletters from the ecosystem.
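Most of those features start with the same query: “recent, high-engagement activity.” A hedged sketch of that filter over MongoDB-shaped documents (the document fields are my assumption of the schema), written as a pure function so no live database is needed:

```python
from datetime import datetime, timedelta, timezone

def good_stuff(posts, min_engagement=25, since_days=7, now=None):
    """Keep recent posts with enough engagement, newest first.

    Each post is a dict shaped like the stored documents (assumed schema):
    {"author", "text", "likes", "comments", "posted_at": datetime}.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=since_days)
    keep = [
        p for p in posts
        if p["posted_at"] >= cutoff
        and p["likes"] + p["comments"] >= min_engagement
    ]
    return sorted(keep, key=lambda p: p["posted_at"], reverse=True)
```

The concise feed renders this list directly; the newsletter generator summarizes it; the engagement recommender annotates each item before display.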
A note I can’t ignore: early versions of this involved me hand-building scrapers (with heavy LLM assistance, but still… hand-built). In hindsight it was kind of insane. These days, it’s often smarter to pay for proven scraping expertise and treat your advantage as what you do with the data.
List building as a pipeline, not a pile of CSVs
I built an email list contact management system that started as a Streamlit app and eventually became something I liked more: a command-line, Unix-like set of utilities for interacting with the database.
The database acts as the backbone for a list-building pipeline:
- start with LinkedIn scrape output,
- enrich progressively (including email lookup),
- validate addresses via Zerobounce,
- and integrate services primarily through APIs rather than constant CSV import/export.
The practical result is speed: list building went from “a bunch of manual file shuffling” to a pipeline with much higher throughput and fewer failure points.
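The Unix-like feel comes from composable stages, and in Python those fall out naturally as generators. A hedged sketch of the pipeline shape, with the email-finder and Zerobounce calls behind pluggable callables (the field names are illustrative):

```python
def from_linkedin(rows):
    """Normalize raw LinkedIn scrape rows into pipeline records."""
    for row in rows:
        yield {
            "name": row["name"].strip(),
            "company": row.get("company", ""),
            "email": None,
        }

def enrich_email(records, lookup):
    """lookup(name, company) -> email or None (an email-finder API)."""
    for rec in records:
        rec["email"] = lookup(rec["name"], rec["company"])
        yield rec

def deliverable(records, is_valid):
    """is_valid(email) -> bool; in production this was Zerobounce."""
    for rec in records:
        if rec["email"] and is_valid(rec["email"]):
            yield rec
```

Stages chain like a shell pipeline, `deliverable(enrich_email(from_linkedin(rows), lookup), check)`, and each stage can be swapped for a different API without touching the others.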
One-off systems for specific client outcomes
Finding buyers in company transactions
For one client, I built a custom scraping + enrichment + opportunity discovery system to find transactions they didn’t know about—specifically: when a company sold, who bought it.
It did:
- broad searching across sources likely to contain the answer,
- LLM-based extraction of relevant facts,
- LLM grading/qualification of candidate opportunities,
- plus some deliberate manual steps where human judgment mattered.
This was less about building a product and more about building a machine that reliably produces a particular kind of truth from messy public information.
Hardware detours and unfinished experiments
Two Arduino-based sewing machine controllers
These might be my favorite “this is so specific it has to be homemade” builds: two custom Arduino sewing machine controllers that translate foot pedal input into PWM DC motor control.
One pedal was potentiometer-based; the other used air pressure. The output controlled either:
- a 90V PWM-DC treadmill motor (converted),
- or a universal 120V motor.
The key feature was customization: three speed bands, including an ultra-slow mode (think one stitch per second), a slow band, and then a proportional mode. The fun part wasn’t just making it work—it was making it behave exactly the way I wanted.
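The band logic is just a mapping from raw pedal reading to motor duty cycle. Here it is sketched in Python rather than Arduino C, with thresholds that are illustrative rather than the real calibration values:

```python
def pedal_to_pwm(reading):
    """Map a 0-1023 pedal reading (10-bit ADC) to an 8-bit PWM duty cycle.
    Three bands: ultra-slow, slow, then proportional. Thresholds are
    illustrative, not the real calibration."""
    if reading < 50:           # dead zone: motor off
        return 0
    if reading < 300:          # ultra-slow band: one-stitch-at-a-time pace
        return 20
    if reading < 600:          # slow band: steady low speed
        return 60
    # proportional band: scale remaining pedal travel across 60..255
    span = (reading - 600) / (1023 - 600)
    return min(255, int(60 + span * 195))
```

The flat bands are the point: instead of a touchy linear pedal, light pressure always lands you in a predictable crawl, and only deep travel goes proportional.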
A voice survey app I didn’t finish (for a good reason)
I started building a simple voice survey web app:
- show a question,
- record a voice response,
- transcribe it,
- store it,
- analyze it with an LLM.
Then I stopped—because I realized tools like Voiceform already exist and would get me to the outcome faster. It was a useful reminder: quitting can be a form of shipping, if you quit in favor of the right tool.
A personal to-do app in Svelte
Mostly a side project and an excuse to play with Svelte. The UI was the interesting part: a 3x3 grid of tiles, exactly nine top-level initiatives, no more and no fewer. Tapping a tile opened a 3x3 grid of tasks within that initiative; tapping a task opened a modal card with the details (name, priority, and so on); swiping right dismissed the modal or navigated back up through the hierarchy.
An RSS feed for Torah readings via the Sefaria API
I also built a small app that generates an RSS feed for weekly and daily Torah readings by pulling from the Sefaria API. Simple idea, satisfying result: religious study content delivered in a format the internet still does extremely well.
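Generating the feed itself needs nothing beyond the standard library. A hedged sketch of the RSS-building half; the shape of the reading items (title, url, date) is my assumption about what gets extracted from Sefaria's API responses:

```python
import xml.etree.ElementTree as ET

def build_rss(readings):
    """Build an RSS 2.0 document from reading items.

    readings: list of {"title", "url", "date"} dicts (an assumed shape
    for what comes out of the Sefaria API calls).
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Torah Readings"
    ET.SubElement(channel, "link").text = "https://www.sefaria.org"
    ET.SubElement(channel, "description").text = "Weekly and daily readings"
    for r in readings:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = r["title"]
        ET.SubElement(item, "link").text = r["url"]
        ET.SubElement(item, "pubDate").text = r["date"]
    return ET.tostring(rss, encoding="unicode")
```

Serve that string with an `application/rss+xml` content type and any feed reader picks it up.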
The throughline
A lot of these projects share the same instinct: remove friction, expose control where it matters, and turn messy systems into something you can actually reason about.
Also: I spent a lot of time building “data collection” muscles—and then learned, repeatedly, that the real leverage is often in what you build on top of the data, not the scraping itself.
That’s a pretty good snapshot of the last two years: a pile of practical tools, a few ambitious systems, and a growing preference for investing in the parts that compound.