This is Part 1 of a 4-part series on what I learned from surveying 777 dental practices while building CLIN. Part 2 covers the actual data. Part 3 analyzes the patterns. Part 4 shows how these insights led us to pivot to Dentplicity.
Getting healthcare professionals to respond to surveys is brutally difficult. They're overwhelmed, protective of their time, and skeptical of yet another software company asking for "just five minutes."
After ten months building CLIN in stealth, I needed to understand dental practice financial challenges before finalizing our neobank architecture. I couldn't build sophisticated banking infrastructure—from RTP settlement files to Durbin-exempt partnership structures—without understanding exactly how practices manage money, when they need credit, and what breaks in their current workflows.
Traditional market research felt hollow. I needed real conversations with real practice owners about cash flow timing, check clearing delays, and payment processing costs—the granular financial details that determine whether to build on FedNow rails or stick with ACH settlement timing.
The result: 777 verified responses from dental practices across the U.S. and Canada, with 100+ extended interviews. Here's exactly how I made it work, and how these insights shaped every technical decision from FBO account structures to interchange optimization.
Building the Outreach Infrastructure
My first attempts were disasters. Generic subject lines like "Quick Survey About Your Practice" generated 0.3% response rates. Healthcare professionals receive dozens of vendor pitches weekly—my emails disappeared into the noise.
I needed to scale legitimate outreach, which meant learning entirely new skills as an entrepreneur.
The tech stack that worked:
Instantly.ai for campaign management: This became my command center. Their YouTube channel is a goldmine for anyone learning cold email—I probably watched 30+ hours of their content. Most importantly, I learned about domain warming, which was absolutely clutch. You can't just spin up a new domain and blast emails; you need gradual warm-up sequences to build sender reputation.
Python scrapers for lead generation: We built custom scrapers to pull publicly available information from Google Maps and practice websites. Did we violate some terms of service? Probably 😉. But we were collecting public information, enriching it, and making it useful for genuine outreach. A simplified sketch of the approach appears after this list.
LinkedIn Sales Navigator: Essential for finding decision-makers and understanding practice structures before reaching out.
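The scraping step was less exotic than it sounds. Here's a minimal sketch, assuming you already have a list of practice URLs from public directories; the regexes, field names, and output format are illustrative, not the production code:

```python
# Simplified sketch of the practice-website scraper (illustrative, not the
# production version): pull publicly listed contact details from practice URLs.
import csv
import re
import time

import requests
from bs4 import BeautifulSoup

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def scrape_practice(url: str) -> dict:
    """Fetch one practice site and extract publicly listed contact info."""
    resp = requests.get(url, timeout=10, headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    text = soup.get_text(" ", strip=True)
    return {
        "url": url,
        "title": soup.title.string.strip() if soup.title and soup.title.string else "",
        "emails": sorted(set(EMAIL_RE.findall(text))),
        "phones": sorted(set(PHONE_RE.findall(text))),
    }

if __name__ == "__main__":
    with open("practice_urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    with open("leads.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["url", "title", "emails", "phones"])
        writer.writeheader()
        for url in urls:
            try:
                writer.writerow(scrape_practice(url))
            except requests.RequestException:
                continue  # skip unreachable or blocking sites
            time.sleep(2)  # throttle requests; be polite to small-practice sites
```

The output then went through enrichment (specialty, geography, decision-maker) before any email ever went out.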
The Local Ecosystem Strategy
Beyond cold email, we tapped into the dental community infrastructure:
Study clubs and labs: We reached out to organizations like Glidewell Clinical Education Center in Irvine—a state-of-the-art facility with a 40-seat auditorium that had hosted over 100,000 attendees in their online study club alone. These weren't just educational programs; they were community hubs where practice owners genuinely connected.
The relationship-first approach: When you come from a place of wanting to educate rather than sell, and genuinely want to understand the space, people are incredibly willing to help. Glidewell's success proved this—they built trust through education first, sales second. We introduced ourselves to the ecosystem first, then asked for insights.
The Campaign Testing That Actually Worked
I ran what I called "ABCDEFG campaigns" on Instantly.ai, testing everything from subject lines to calls to action. The 5-second rule became crucial: you need to get your message across via subject line, preview text, and first body paragraph within 5 seconds.
Email structure that converted:
- Line 1: "Hey, I'm building something..."
- Line 2: "Here's what I'm noticing people doing..."
- Line 3: "I'd love your thoughts—absolutely not trying to sell you"
- Personal email signature (not company domain)
The breakthrough insight: the shorter and more personal the email, the less explaining I had to do about the product. Making it obvious that this was a real message from a real person drove both replies and high open rates.
Email hygiene learnings: list validation was crucial, with clean lists segmented by specialty and geography. Reply speed mattered enormously, and we focused on scheduling actual conversations rather than just collecting survey responses.
Subject Line Testing
I A/B tested 47 different subject lines across 5,000+ emails. The winners:
"Building for my mom's dental practice - quick question" (8.4% open rate)
"Independent practice financial challenges (2-minute survey)" (7.9% open rate)
"Helping small practices like yours - quick input needed" (7.1% open rate)
The losers were predictably corporate:
- "Healthcare Fintech Market Research" (1.2% open rate)
- "Survey: Practice Management Solutions" (0.8% open rate)
- "Quick Survey About Your Business" (0.6% open rate)
Pattern: Personal connection beats professional polish. Healthcare professionals respond to authenticity, not marketing speak.
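If you replicate this kind of test, it's worth a quick check that an open-rate gap isn't just noise before crowning a winner. A minimal two-proportion z-test, with illustrative counts rather than the actual campaign numbers:

```python
# Quick significance check on two subject lines' open rates
# (counts are illustrative, not the actual campaign numbers).
from math import sqrt
from statistics import NormalDist

def open_rate_z_test(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# personal subject line (8.4%) vs. corporate-sounding control (1.2%)
z, p = open_rate_z_test(opens_a=42, sends_a=500, opens_b=6, sends_b=500)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p means the gap is unlikely to be chance
```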
Timing Everything
Healthcare schedules create predictable availability windows. After tracking response patterns for two months:
Best response times:
- Tuesday-Thursday, 11 AM - 2 PM EST: 73% higher response rates
- Avoid Mondays: Administrative catch-up day
- Avoid Fridays: Early closures and weekend prep
- Never evenings/weekends: Respect work-life boundaries
Regional variations mattered: West Coast practices responded better to 10 AM PST emails (1 PM EST). East Coast practices preferred 11 AM EST timing.
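The timing analysis behind these numbers was simple: log every send and reply with timestamps, then group by the recipient's local weekday and hour. A rough pandas sketch (the file and column names are illustrative):

```python
# Rough sketch of the send-time analysis (file and column names are illustrative):
# group sends by recipient-local weekday/hour and compare reply rates.
import pandas as pd

df = pd.read_csv("sends.csv", parse_dates=["sent_at"])  # one row per email sent

# sent_at is stored in UTC; convert to each recipient's local time zone first
local = df.apply(
    lambda r: r["sent_at"].tz_localize("UTC").tz_convert(r["recipient_tz"]), axis=1
)
df["weekday"] = local.map(lambda ts: ts.day_name())
df["hour"] = local.map(lambda ts: ts.hour)

reply_rates = (
    df.groupby(["weekday", "hour"])["replied"]  # replied is a 0/1 column
      .agg(["mean", "count"])
      .query("count >= 30")                     # ignore thin cells
      .sort_values("mean", ascending=False)
)
print(reply_rates.head(10))
```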
The Survey Design That Worked
I used Typeform with conditional logic, drawing on survey-design lessons from my urban planning background (the Minnesota Irvine urban planning survey) to avoid leading questions. The goal was to let providers talk while I listened.
Five minutes maximum. Seven questions total. Every question had to deliver specific, actionable insights.
Question 1: Practice basics. "How many dentists work in your practice?" (Solo, 2-3, 4-5, 6+)
Question 2: Financial pain identification. "What's your biggest monthly operational challenge?" (Multiple choice with write-in option)
Question 3: Pain quantification. "What percentage of your time do financial management tasks consume?" (Less than 10%, 10-25%, 25-50%, 50%+)
Question 4: Current tools. "What software do you use for financial management?" (Open text)
Question 5: Solution interest. "If someone built financial tools specifically for dental practices, what would matter most?" (Ranked priorities)
Question 6: Willingness to pay. "What do you currently spend monthly on financial/administrative tools?" (Ranges: Under $500, $500-1K, $1K-2.5K, $2.5K-5K, $5K+)
Question 7: Follow-up permission. "Would you be open to a 15-minute call to discuss these challenges in more detail?"
Why Typeform's conditional logic was crucial: respondents never had to skip irrelevant questions because, with branching logic, they simply never saw them. That meant higher completion rates and a more personal, human experience. For dental practices specifically, I could customize follow-up questions based on practice size, patient type, or satisfaction ratings.
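Typeform handles the branching in its builder, but conceptually it's just a routing function over earlier answers. A toy sketch of the idea (question IDs and the branching rule are illustrative, not the actual survey config):

```python
# Toy sketch of the conditional-logic idea behind the survey
# (question IDs and branching rules are illustrative, not the Typeform config).
def next_question(answers: dict) -> str:
    size = answers.get("practice_size")         # Q1: Solo, 2-3, 4-5, 6+
    if size is None:
        return "practice_size"
    if "biggest_challenge" not in answers:      # Q2 shown to everyone
        return "biggest_challenge"
    # Larger practices get a multi-location cash-flow follow-up;
    # solo practices skip straight to the tooling question.
    if size in ("4-5", "6+") and "multi_location_cashflow" not in answers:
        return "multi_location_cashflow"
    if "current_tools" not in answers:
        return "current_tools"
    return "done"

# Example: a solo practice never sees the multi-location question
print(next_question({"practice_size": "Solo", "biggest_challenge": "Cash flow timing"}))
# -> "current_tools"
```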
Key insight: Asking for specific percentages and dollar amounts generated quantifiable data I could compare across practices. Vague questions like "Do you have cash flow challenges?" produced useless yes/no answers. As Rob Fitzpatrick outlines in The Mom Test, the key to customer discovery is asking about specific past behaviors rather than hypothetical future intentions—which is exactly why asking for current spending amounts worked better than asking if they'd pay for new features.
I was building toward specific technical decisions: Should I integrate with RTP for instant settlement? How much would practices pay for same-day clearing? Did their cash flow patterns justify the engineering complexity of FedNow implementation versus traditional ACH timing?
These weren't just product questions—they were infrastructure architecture decisions that would determine our entire technical stack.
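To make that trade-off concrete, the back-of-the-envelope math looked something like the sketch below. Every number in it is an illustrative assumption, not quoted pricing or survey data:

```python
# Back-of-the-envelope RTP vs. ACH comparison (all numbers are illustrative
# assumptions): is instant settlement worth a per-payment fee for a given practice?
monthly_card_volume = 50_000       # practice's monthly card receipts, USD
ach_settlement_days = 2            # typical settlement delay on card/ACH rails
annual_cost_of_capital = 0.12      # what delayed cash effectively costs the practice
payments_per_month = 400
rtp_fee_per_payment = 0.25         # assumed per-transaction premium for instant rails

# Working capital tied up waiting on settlement, and what that float costs monthly
float_tied_up = monthly_card_volume * ach_settlement_days / 30
monthly_float_cost = float_tied_up * annual_cost_of_capital / 12
monthly_rtp_cost = payments_per_month * rtp_fee_per_payment

print(f"Float tied up in settlement delay: ${float_tied_up:,.0f}")
print(f"Implied monthly cost of that float: ${monthly_float_cost:,.2f}")
print(f"Monthly cost of instant settlement:  ${monthly_rtp_cost:,.2f}")
# Running this across the survey's volume tiers is how we compared where instant
# settlement pencils out versus where standard ACH timing is fine.
```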
Turning Conversations Into Knowledge
The follow-up that mattered most: transcript analysis with Google Notebook LM.
Out of 777 responses, 108 were interested in Zoom calls—and most of those actually happened. I took those transcripts and fed them into Google Notebook LM and Google Gemini, creating what became a living body of knowledge on the pains and trials of dental practice owners.
Why Notebook LM was crucial: Instead of manually coding hundreds of interview transcripts like traditional qualitative analysis, Notebook LM's AI identified semantic patterns automatically. I could ask "identify all passages discussing cash flow timing" and get relevant excerpts across all interviews in minutes instead of hours of manual review.
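Notebook LM is a hosted tool, but the underlying pattern, semantic search over transcript passages, is easy to approximate locally if you want something reproducible. A sketch with sentence-transformers (the model choice and file layout are my assumptions):

```python
# Approximate the "find every passage about cash flow timing" workflow locally
# (model choice and file layout are assumptions; Notebook LM does this hosted).
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# One .txt transcript per interview; split into rough paragraph-sized passages
passages = []
for path in Path("transcripts").glob("*.txt"):
    for chunk in path.read_text().split("\n\n"):
        if len(chunk.split()) > 20:
            passages.append((path.name, chunk.strip()))

texts = [chunk for _, chunk in passages]
emb = model.encode(texts, normalize_embeddings=True)

query = "delays in payment settlement and cash flow timing"
q = model.encode([query], normalize_embeddings=True)[0]
scores = emb @ q  # cosine similarity, since embeddings are normalized

for idx in np.argsort(scores)[::-1][:5]:
    name, chunk = passages[idx]
    print(f"[{scores[idx]:.2f}] {name}: {chunk[:120]}...")
```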
This became my secret weapon: I'd listen to the audio overviews it generated, podcast-style, while walking my dog, constantly learning from the actual voice of dental practice owners.
The Follow-Up Strategy That Built Relationships
Every completed survey received a personalized thank-you within 24 hours. I provided immediate value back.
The benchmark report: Respondents received a one-page summary comparing their challenges to similar practices in their region. "Based on 47 practices your size in California, 34% cite cash flow timing as their top challenge vs. 67% citing staff costs..."
This approach accomplished three things:
- Immediate value: Practices got useful benchmarking data
- Credibility building: Showed I was analyzing data seriously, not just collecting it
- Interview conversion: 31% of survey respondents agreed to extended interviews
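Generating those benchmark numbers was simple aggregation once responses were tagged by state and practice size. A sketch of the logic (the column and file names are illustrative):

```python
# Sketch of the benchmark aggregation behind the one-page reports
# (column and file names are illustrative).
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # one row per verified response

def benchmark(practice_row, responses):
    """Compare one practice against peers of the same size in the same state."""
    peers = responses[
        (responses["state"] == practice_row["state"])
        & (responses["practice_size"] == practice_row["practice_size"])
    ]
    challenge_mix = (
        peers["biggest_challenge"].value_counts(normalize=True).mul(100).round(0)
    )
    return {
        "peer_count": len(peers),
        "top_challenges_pct": challenge_mix.head(3).to_dict(),
        "your_challenge": practice_row["biggest_challenge"],
    }

example = responses.iloc[0]
print(benchmark(example, responses))
# -> e.g. {'peer_count': 47, 'top_challenges_pct': {'Staff costs': 67.0, ...}, ...}
```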
Banking infrastructure insights: The benchmark data revealed specific patterns that informed our technical architecture. Practices with >$50K monthly card volume consistently mentioned 2-3 day settlement delays as cash flow bottlenecks. This validated the business case for RTP integration—not for all customers, but for high-volume practices where instant settlement would justify the per-transaction cost premium.
Small practices cared more about predictable timing than speed. Large practices needed both speed and predictability. This segmentation became foundational to our tiered banking service architecture.
Geographic Distribution Strategy
I wanted geographic diversity but concentrated outreach where it would matter most for product development.
Target distribution:
- 30% California: Largest dental market, regulatory complexity
- 15% Texas: Independent practice concentration
- 10% Florida: Retiree population, unique payment patterns
- 8% New York: High operational costs, tech adoption
- 37% Other states: Ensuring national representation
Outreach sources:
- State dental board public directories
- Practice websites with contact forms
- LinkedIn practice administrator networks
- Industry conference attendee lists
Verification process: Every response required practice verification—website match, phone number confirmation, or LinkedIn profile validation. This eliminated 23% of initial responses as spam or incomplete.
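Most of that verification was mechanical, for example checking whether a respondent's email domain matched the practice website they claimed. A simplified version of that check (the helper names and free-mail list are illustrative):

```python
# Simplified version of the email/website verification check
# (helper and field names are illustrative).
from urllib.parse import urlparse

FREE_MAIL = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "aol.com"}

def domain_matches(email: str, website: str) -> bool:
    """True if the respondent's email domain matches their stated practice site."""
    email_domain = email.rsplit("@", 1)[-1].lower()
    parsed = urlparse(website if "//" in website else f"https://{website}")
    site_domain = parsed.netloc.lower().removeprefix("www.")
    return email_domain == site_domain

def needs_manual_review(email: str, website: str) -> bool:
    """Free-mail addresses or mismatched domains get phone/LinkedIn verification."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in FREE_MAIL or not domain_matches(email, website)

print(needs_manual_review("drsmith@gmail.com", "smilesdental.com"))           # True
print(needs_manual_review("frontdesk@smilesdental.com", "smilesdental.com"))  # False
```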
What Didn't Work
Industry partnerships: Approached three dental industry publications about survey promotion. All wanted payment or reciprocal marketing arrangements. Bootstrap constraints made this impossible, but direct outreach proved more authentic anyway.
Social media promotion: Posted survey links on dental Facebook groups and LinkedIn. Generated high response volume but low quality—mostly vendors and students, not practice owners.
Referral incentives: Offered $25 Amazon gift cards for completed surveys. Attracted participants motivated by rewards rather than genuine interest in the problem. Quality dropped significantly.
Cold calling: Tried calling practices directly. Receptionists blocked access, and interrupting patient care felt disrespectful. Email remained superior for initial contact.
The Human Element
The most valuable responses came from practices willing to share specific challenges. A dentist in Portland wrote three paragraphs about insurance claim delays affecting cash flow. A practice administrator in Nashville detailed exact software integration failures.
These detailed responses became product features. When multiple practices mentioned similar pain points, those moved to the top of our development priority list.
Banking infrastructure implications: The Portland dentist's insurance delay story led directly to our credit facility design. If insurance payments took 45-90 days but practices needed cash flow predictability, we could advance against verified claims—but only with real-time risk monitoring and automated underwriting based on deposit data patterns.
The Nashville integration failures informed our API-first architecture. Practices wanted their existing practice management software to "just work" with banking—no separate logins, no manual reconciliation. This meant building robust webhooks, standardized data formats, and fallback systems for when integrations failed.
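In practice, "just work" meant webhook receivers that verify the sender, acknowledge immediately, and queue the reconciliation work for retry instead of processing it inline. A minimal Flask sketch (the endpoint name, signature header, and queue are assumptions, not the actual CLIN API):

```python
# Minimal sketch of a practice-management webhook receiver (endpoint name,
# signature header, and queue are assumptions, not the actual CLIN API).
import hashlib
import hmac
import os
import queue

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["PMS_WEBHOOK_SECRET"].encode()
work_queue = queue.Queue()  # stand-in for a durable queue (e.g. SQS) in production

def verify_signature(body: bytes, signature: str) -> bool:
    """Reject payloads whose HMAC doesn't match the shared secret."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature or "")

@app.post("/webhooks/pms")
def pms_webhook():
    if not verify_signature(request.get_data(), request.headers.get("X-Signature", "")):
        abort(401)
    # Acknowledge immediately; reconciliation happens asynchronously so a slow or
    # failing downstream step never causes the PMS to see timeouts and stop sending.
    work_queue.put(request.get_json(silent=True) or {})
    return {"status": "queued"}, 202
```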
Lesson: Survey quantity matters, but survey quality determines product success. One detailed response from a frustrated practice owner teaches more than ten quick checkbox surveys—and in our case, each detailed response informed specific technical architecture decisions that would cost hundreds of thousands to implement incorrectly.
Scaling the Methodology
After proving the approach worked, I systematized outreach:
- Weekly targets: 250 new emails, 50 follow-ups, 10 benchmark reports
- Template variations: Five different email approaches to avoid spam filters
- Response tracking: Detailed spreadsheet with open rates, response rates, and geographic distribution
- Quality control: Phone verification for every response claiming $25K+ monthly spend
Results That Mattered
777 verified responses over ten months. 15.5% overall response rate after methodology optimization. 100+ extended interviews with practice owners genuinely interested in solutions.
Every feature we built had specific customer validation behind it. No guesswork, no assumptions—just direct feedback from practices we wanted to serve.
The technical infrastructure that emerged: These surveys didn't just inform product features—they determined our entire banking architecture. Cash flow timing insights led us to Durbin-exempt bank partnerships for 10x higher interchange rates. Integration pain points drove our API-first, webhook-heavy architecture. Credit needs shaped our real-time underwriting models based on deposit data patterns.
When VCs ask about product-market fit, I can point to specific survey responses that justify every major technical decision from FBO account structures to ML fraud detection algorithms. That's the difference between building on assumptions versus building on actual customer insights.
For Other Healthcare Entrepreneurs
This methodology transfers beyond dentistry. Healthcare professionals respond to authenticity, respect for their time, and immediate value exchange. Lead with personal connection, not business pitch. Ask specific questions that generate actionable data. Always provide value back to respondents.
The banking infrastructure lesson: If you're building in healthcare fintech, these conversations aren't just about product-market fit—they're about technical architecture validation. Understanding whether your customers need same-day settlement, real-time payments, or traditional ACH timing determines your entire infrastructure stack. Getting this wrong costs millions in retrofitting.
The 777 survey responses became our foundation for product development, pricing strategy, go-to-market approach, and most critically—our technical banking architecture. When you bootstrap—especially working from Newport Beach coffee shops between customer calls—every customer conversation matters. Make them count, and make sure they inform not just what you build, but how you build it.
Next: Part 2 breaks down exactly what those 777 practices told us about their financial challenges.
-AM
arvindmurthy at gmail
Data sources: CLIN Customer Discovery Whitepaper (777 verified survey responses, completed March 2025), personal outreach methodology documentation