I designed the Poetry Explorer; now it was time to build it. As I’ve covered before, I’m a product designer, not a developer.

I’m not completely lacking in technical skills. I can read Python well enough to understand what’s happening, and I can write HTML, CSS, JavaScript, and PHP to varying degrees. But I can’t architect a production-grade backend or implement vector search from scratch without assistance.

So I didn’t write anything from scratch. I used LLMs, documentation, and developer-community resources, iterating until it worked. My goal wasn’t to become a developer. I needed to build a product I could launch on San Antonio Review to create a better content-exploration experience for our readers and a more efficient curation workflow for our editors.

I’ve worked across the full product lifecycle for a long time, in startup and enterprise software environments. One of the things I love most about building with technology is how it carries over to my work with development teams: I know which questions to ask, I can propose alternatives when something isn’t working because I understand the constraints, and I’m a better partner for it. This project was no different. The domain was new, but the same benefits applied.

This is a resourceful implementation by a designer who understands how to design systems.

What I needed was something that:

  • fit our small budget
  • required minimal coding, or coding I could handle with the help of assistants
  • was embeddable on WordPress
  • didn’t require months of content enrichment
  • used embedding-based semantic search to approximate emotional and thematic similarity

So I started chatting with LLMs and using them to generate code, searching Stack Overflow and asking questions, and reading docs until I arrived at this:

  • a JSONL file containing the poems
  • a Python script that reads each poem in the JSONL file, calls the OpenAI Embeddings API, and stores the vectors and metadata in a Pinecone index
  • a Python API (a /chat endpoint) where:
    • a user sends a query, e.g., “I feel anxious about work, and want something hopeful”
    • the query is embedded, Pinecone is searched, and the top 3 poems are retrieved
    • those poems and the user message are sent to the OpenAI chat API with the Poetry Finder system prompt
    • the generated answer (plus titles, authors, and URLs) is returned to the frontend
  • a webpage, embedded on WordPress, that talks to the Python API
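To make the ingestion step concrete, here’s a minimal sketch of how a script like that could look. To be clear, this is illustrative, not my actual code: the JSONL field names (id, title, author, url, text), the embedding model, and the choice to call the raw REST endpoints with the standard library instead of the official SDKs are all assumptions for the example.

```python
import json
import urllib.request

def load_poems(path):
    """Parse the JSONL file: one poem per line, blank lines skipped."""
    poems = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                poems.append(json.loads(line))
    return poems

def post_json(url, payload, headers):
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", **headers},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def embed_texts(texts, api_key):
    """Call the OpenAI Embeddings API for a batch of poem texts.
    (Model name is an assumption for the sketch.)"""
    body = post_json(
        "https://api.openai.com/v1/embeddings",
        {"model": "text-embedding-3-small", "input": texts},
        {"Authorization": f"Bearer {api_key}"},
    )
    return [item["embedding"] for item in body["data"]]

def to_vectors(poems, embeddings):
    """Pair each poem's metadata with its embedding for a Pinecone upsert."""
    return [
        {
            "id": poem["id"],
            "values": emb,
            "metadata": {k: poem[k] for k in ("title", "author", "url")},
        }
        for poem, emb in zip(poems, embeddings)
    ]

def upsert(vectors, index_host, pinecone_key):
    """Upsert vectors into the Pinecone index via its index-specific host."""
    return post_json(
        f"https://{index_host}/vectors/upsert",
        {"vectors": vectors},
        {"Api-Key": pinecone_key},
    )

# Usage (needs OPENAI_API_KEY, PINECONE_API_KEY, and the index host):
#   poems = load_poems("poems.jsonl")
#   embs = embed_texts([p["text"] for p in poems], openai_key)
#   upsert(to_vectors(poems, embs), index_host, pinecone_key)
```

The nice property of this shape is that everything except the two API calls is plain data wrangling, which is exactly the part a non-developer can reason about and test locally.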
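The /chat flow above can be sketched the same way. Again, this is an illustrative outline rather than the deployed code: the system prompt text, the metadata field names, and the response shape are placeholders I made up for the example.

```python
# Placeholder: stands in for the real Poetry Finder system prompt.
SYSTEM_PROMPT = "You are the Poetry Finder for San Antonio Review."

def pinecone_query_payload(query_embedding, top_k=3):
    """Body for Pinecone's /query endpoint: top-k neighbors with metadata."""
    return {"vector": query_embedding, "topK": top_k, "includeMetadata": True}

def build_messages(user_query, matches):
    """Assemble the chat messages: system prompt, retrieved poems, user query."""
    context = "\n".join(
        f'"{m["metadata"]["title"]}" by {m["metadata"]["author"]}'
        f' ({m["metadata"]["url"]})'
        for m in matches
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "system", "content": f"Candidate poems:\n{context}"},
        {"role": "user", "content": user_query},
    ]

def chat_response(answer, matches):
    """Shape the JSON sent back to the frontend: answer plus titles/authors/URLs."""
    return {
        "answer": answer,
        "poems": [
            {"title": m["metadata"]["title"],
             "author": m["metadata"]["author"],
             "url": m["metadata"]["url"]}
            for m in matches
        ],
    }
```

The default `top_k=3` mirrors the “top 3 poems” step above; the endpoint itself just chains these helpers between the embedding call and the chat completion call.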

I’m using Render to host the Python API because it was easy to set up and low-cost. Netlify is hosting the front end that I embedded on the Poetry category page at San Antonio Review.

I chose Pinecone and OpenAI for similar reasons. I’m generating under 5,000 vectors, so Pinecone’s free tier covers the whole index. OpenAI was similarly cost-effective at this dataset size.

But ultimately, all of these technical, platform, and tool choices were driven by the same motivation: I wanted to build something that surfaced semantic similarity so I could connect readers and editors with emotionally resonant content. Given my technical and budget constraints, which solutions would let me do that? That’s the same question I ask in any design project.

As with all projects, many things remain on the roadmap: performance optimization, completing the classifications, and building in logging. As I go, I learn to ask the questions I didn’t know I needed to ask, and those get added to the roadmap too. I don’t consider any of this failure; it’s how launching a product like this works. The stakes are low. I made sure nobody’s data was exposed, and conversations aren’t stored in a database or persisted long-term. So I’m comfortable.

Could a professional developer have done this better? Absolutely. I can’t address scale, performance, or production quality. But designers who can implement have a superpower because they understand constraints and complexity in ways other designers don’t.