Real Expertise vs. AI Content: Dr. Rachel Foster’s Fight to Outrank ChatGPT | Interview


Rachel: Hi Morgan, can you hear me?

Morgan: Yeah, perfectly. How are you?

Rachel: Tired. [laughs] But good. I just got off a twelve-hour shift, so forgive me if I’m not super articulate.

Morgan: No worries. I appreciate you making time. So you’re an actual doctor who creates health content?

Rachel: Yeah. I’m an emergency medicine physician. Been practicing for twenty years. And I’ve been writing health content online since about 2016— started as a side project to help patients understand medical information better.

Morgan: What made you start creating content?

Rachel: Honestly? Frustration. I’d spend thirty minutes explaining something to a patient, and then they’d go home and Google it and find some clickbait article full of misinformation. So I thought, maybe I can create better information that actually shows up when people search.

Morgan: Did it work?

Rachel: For a while, yeah. By 2020, I had this website with maybe 200 articles about common medical conditions, symptoms, treatments. I wasn’t making tons of money from it— maybe $3,000 a month in ad revenue— but it was reaching people. I’d get emails from readers saying “Your article helped me understand my diagnosis” or “I showed this to my doctor and it improved our conversation.”

Morgan: That sounds really fulfilling.

Rachel: It was. That was the whole point. Not the money, but knowing I was helping people navigate a really confusing system.

Morgan: When did AI content start affecting you?

Rachel: I first noticed it in mid-2023. I started seeing my articles drop in rankings. Like, I had this article about “Signs of a Heart Attack in Women” that had been ranking #1 for like three years. Suddenly it’s on page two, and when I check what’s ranking above me, it’s these generic health sites with articles that are clearly AI-generated.

Morgan: How could you tell they were AI-generated?

Rachel: The writing style is so obvious once you know what to look for. Very smooth, very confident, but completely lacking in nuance. Like, one article said “Women experiencing heart attacks may feel chest pain, shortness of breath, or nausea.” Which is technically true, but it’s also incredibly generic and doesn’t capture the complexity of how heart attacks actually present in women.

Morgan: But it ranked higher than your article?

Rachel: Way higher. And it killed me because my article had specific details from twenty years of seeing actual patients. I wrote about how women often describe the pain differently, how they’re more likely to have atypical symptoms, how age and risk factors change the presentation. Real clinical expertise. But Google seemed to prefer the AI slop.

Morgan: Why do you think that happened?

Rachel: Because the AI articles were perfect for SEO. They had all the right keywords, perfect structure, FAQ sections, related questions— everything Google’s algorithm was looking for. My articles were written for humans, not for algorithms.

Morgan: What did you do when you noticed this?

Rachel: At first, I tried to ignore it. I told myself that quality would win eventually. But by early 2024, my traffic had dropped by like 60%. Articles that used to get 10,000 views a month were getting 3,000. My ad revenue went from $3,000 a month to barely $1,000.

Morgan: That must have been frustrating.

Rachel: It was enraging. I’m a board-certified physician with two decades of experience, and I’m being outranked by content that was written by a chatbot in thirty seconds. No medical training, no clinical experience, just pattern matching from scraped internet text.

Morgan: Did you consider using AI yourself?

Rachel: I tried it. I fed ChatGPT some prompts like “Write an article about diabetes management” and it generated this very polished, very generic article. And I hated it. It wasn’t wrong, exactly, but it was soulless. It didn’t have any of the context or judgment that comes from actually treating diabetic patients.

Morgan: So what did you do instead?

Rachel: I decided to fight back by leaning into what AI can’t replicate— real expertise and personal experience. I started adding way more specific details to my articles. Case examples from my ER shifts, recent research I’d read, nuanced explanations of when to worry versus when to wait and see.

Morgan: Did that help?

Rachel: Not immediately. For like six months, I was doing all this extra work and seeing no results. I was writing these incredibly detailed, evidence-based articles, and they were still getting buried under AI content.

Morgan: When did things turn around?

Rachel: Late 2024, early 2025. I started noticing my articles slowly climbing back up. Not to where they’d been before, but improving. And I think it was because Google was finally starting to figure out how to identify and deprioritize low-quality AI content.

Morgan: What changed in your rankings?

Rachel: My heart attack article got back to position three. My diabetes article hit position two. Still not number one for most things, but way better than page two. And the traffic started recovering— I’m at like $2,200 a month in ad revenue now.

Morgan: That’s still down from your peak though.

Rachel: Yeah, and I don’t think I’ll ever get back to where I was. The landscape has changed. There are just so many more sites publishing health content now, and a lot of them have bigger teams and more resources than I do.

Morgan: Do you think human expertise still matters in the age of AI?

Rachel: [pause] I have to believe it does. Because if it doesn’t, then what’s the point? If a chatbot can replace twenty years of medical training and experience, then we’re all fucked.

Morgan: But you’re worried it might not matter enough.

Rachel: Yeah. I mean, I’m seeing patients every day who come in having self-diagnosed based on something they read online. And increasingly, that content is AI-generated. And sometimes it’s harmless, but sometimes people delay seeking care because the AI article said “this is probably nothing.”

Morgan: That’s dangerous.

Rachel: It’s really dangerous. And the thing is, the AI content isn’t always wrong. It’s just not specific enough. It can’t account for individual context. It can’t say “For someone your age with your medical history, this symptom is more concerning than it would be for someone else.”

Morgan: How do you compete with that?

Rachel: By being radically specific. I’ve started including way more author bylines, credentials, photos of me at work. I link to my medical license verification. I add disclaimers that say “I’m writing this as a practicing ER physician who has treated thousands of patients with these conditions.”

Morgan: Is that enough?

Rachel: I don’t know. I hope so. But I also think there needs to be systemic change. Google needs to do a better job of identifying and elevating content from actual medical professionals versus content generated by AI or written by non-experts.

Morgan: Have you seen Google try to do that?

Rachel: They say they prioritize E-E-A-T— Experience, Expertise, Authoritativeness, Trustworthiness. And I think they’re trying. But the problem is that AI content can fake a lot of those signals. It can cite real studies, use medical terminology correctly, sound authoritative. So distinguishing real expertise from simulated expertise is really hard.

Morgan: What would you want Google to do differently?

Rachel: I’d want them to verify credentials. Like, if someone’s writing health content and claiming to be a doctor, make them prove it. Link to their NPI number, their medical license, something. Because right now, anyone can slap “Dr.” in their byline and Google has no way to verify if it’s real.

Morgan: Do you think that would help?

Rachel: It would help separate actual medical professionals from content farms. But it wouldn’t solve the AI problem entirely because AI content doesn’t usually claim to be written by doctors. It’s just… published with no clear authorship.

Morgan: Have you thought about giving up on the website?

Rachel: All the time. [laughs] Like, I’m already working fifty-plus hours a week as a doctor. Why am I spending my limited free time fighting with AI content for search rankings?

Morgan: Why do you keep doing it?

Rachel: Because I still get those emails. People saying my article helped them understand their diagnosis, or helped them advocate for themselves with their doctor, or gave them the confidence to seek care when they were scared. And I can’t get those stories from anything else I do in my career.

Morgan: That’s powerful.

Rachel: It is. And it reminds me that even if I’m not ranking #1 anymore, even if my traffic is down, I’m still reaching people who need accurate medical information. And that matters more than the money or the rankings.

Morgan: Do you think AI will eventually replace medical content creators?

Rachel: For basic information? Maybe. Like, if someone just wants to know “What is pneumonia?” an AI can probably answer that adequately. But for nuanced, context-dependent medical information? I don’t think AI can replace human expertise. At least not yet.

Morgan: What’s an example of something AI can’t do well?

Rachel: Risk assessment. Like, AI can tell you the general symptoms of a stroke. But it can’t help you assess whether your specific combination of symptoms— which might be atypical— warrants immediate emergency care versus waiting to see your primary care doctor. That requires clinical judgment.

Morgan: And people need that kind of information.

Rachel: Desperately. That’s what I try to provide in my articles. Not just “here are the symptoms” but “here’s how to think about whether your symptoms are concerning for YOU specifically.”

Morgan: Does AI ever get things wrong in health content?

Rachel: Oh, constantly. I’ve seen AI articles that mix up similar-sounding conditions, that cite studies incorrectly, that give medication advice that’s technically accurate but would be dangerous for certain patient populations. The problem is that these errors are subtle enough that non-experts won’t catch them.

Morgan: That’s terrifying.

Rachel: It is. And it’s why I keep fighting to make sure human-created, expert-vetted content is still visible and accessible.

Morgan: What advice would you give to other experts who are being outranked by AI?

Rachel: Double down on what makes you unique. Your experience, your credentials, your specific insights. AI can mimic expertise, but it can’t replicate twenty years of seeing actual patients. Make that clear in everything you publish.

Morgan: Is that sustainable long-term?

Rachel: [pause] I honestly don’t know. I hope so. But I also think we need help from the platforms. If Google and other search engines don’t actively prioritize real expertise over simulated expertise, then experts are going to give up. And that would be a massive loss for information quality online.

Morgan: Do you think Google cares about that?

Rachel: I think they care about liability. If someone dies because they followed medical advice from an AI-generated article, and that article ranked #1 on Google, that’s a PR nightmare. So yeah, I think they care. I just don’t know if they care enough to make the hard changes needed.

Morgan: What would those hard changes look like?

Rachel: Manual review of health content. Verified credentials for medical authors. Maybe even removing AI-generated health content entirely unless it’s clearly labeled as such. These would be expensive and complicated, but they’d make a real difference.

Morgan: Do you think that’ll happen?

Rachel: [sighs] Probably not. At least not until something really bad happens that forces their hand.

Morgan: That’s pretty pessimistic.

Rachel: I’m an ER doctor. Pessimism is kind of our default. [laughs]

Morgan: [laughs] Fair enough. Alright, last question— are you glad you started creating health content, or do you wish you’d never bothered?

Rachel: [long pause] I’m glad I did it. Even with all the frustration and the AI competition and the declining traffic. Because I know I’ve helped people. And at the end of the day, that’s why I became a doctor in the first place.

Morgan: That’s a good answer.

Rachel: [laughs] Thanks. I try.

Morgan: Alright, I’ll let you go. I know you’re exhausted. Thanks for making time for this.

Rachel: Thanks for caring about this issue. It’s nice to talk to someone who gets it.

Morgan: Absolutely. Take care, Rachel.

Rachel: You too, Morgan.

[end]


Key Lessons Learned

“I’m a board-certified physician with two decades of experience, and I’m being outranked by content that was written by a chatbot in thirty seconds.”

1. SEO Optimization Beats Medical Expertise (Initially)

AI-generated articles ranked higher despite lacking clinical experience because they were “perfect for SEO”—optimal keyword usage, FAQ sections, structured formatting. Google’s algorithm initially rewarded technical optimization over actual expertise.

2. AI Content Is Generically Correct But Dangerously Non-Specific

AI can accurately state “women experiencing heart attacks may feel chest pain, shortness of breath, or nausea,” but it can’t provide the nuanced clinical judgment that comes from treating thousands of actual patients—like recognizing atypical presentations or risk-stratifying symptoms.

3. Quality Doesn’t Automatically Win

Dr. Foster assumed quality would triumph and initially tried to ignore AI competition. Traffic dropped 60% over roughly nine months (from $3,000 to $1,000 in monthly ad revenue) before she adapted her strategy. Expertise alone isn't enough without SEO sophistication.

4. Real Expertise Requires Radical Transparency

To compete, Dr. Foster added author bylines with credentials, photos from work, medical license verification links, and explicit disclaimers about her clinical experience. Making expertise visible and verifiable became essential differentiation.
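One common way to make author credentials machine-readable is schema.org structured data embedded as JSON-LD, which Google's documentation recommends for surfacing authorship signals. The sketch below is illustrative only: the headline is taken from the interview, but the URLs and reviewer details are hypothetical placeholders, not from Dr. Foster's actual site.

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "headline": "Signs of a Heart Attack in Women",
  "author": {
    "@type": "Person",
    "honorificPrefix": "Dr.",
    "name": "Rachel Foster",
    "jobTitle": "Emergency Medicine Physician",
    "sameAs": [
      "https://example.org/medical-license-verification"
    ]
  },
  "reviewedBy": {
    "@type": "Person",
    "name": "Rachel Foster"
  },
  "lastReviewed": "2025-01-15"
}
```

Markup like this doesn't prove credentials by itself, but it gives search engines a structured hook: the `sameAs` link can point at an official license-verification page, which is exactly the kind of signal Dr. Foster argues platforms should check.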

5. AI Can Fake Authority Signals

AI content can cite real studies, use medical terminology correctly, and sound authoritative—making it difficult for algorithms to distinguish simulated expertise from real clinical experience. E-E-A-T principles are easier to describe than to algorithmically enforce.

6. Context and Nuance Are Irreplaceable

“It can’t say ‘For someone your age with your medical history, this symptom is more concerning than it would be for someone else.’”

AI provides general information but lacks the clinical judgment to help individuals assess personal risk based on their specific circumstances, medical history, and symptom presentation.

7. Subtle Medical Errors Are the Most Dangerous

AI doesn’t make obvious mistakes like “aspirin cures cancer.” It makes subtle errors: mixing up similar conditions, citing studies incorrectly, or giving advice that’s technically accurate but dangerous for specific patient populations. Non-experts can’t catch these errors.

8. Mission Sustains Through Algorithm Changes

Dr. Foster continues despite lower traffic and revenue because she still receives emails from people saying her articles helped them understand diagnoses or advocate for themselves. Purpose outlasts optimization when rankings and revenue decline.

9. Platform Changes Lag Behind Harms

Google likely won’t implement expensive credential verification or AI content labeling until “something really bad happens that forces their hand.” Reactive rather than proactive platform governance means experts must adapt while waiting for systemic solutions.

10. Human Expertise Must Be More Than Correct

To compete with AI, expert content must be radically specific, transparently credentialed, contextually nuanced, and personally accountable. Being right isn’t enough—experts must prove they’re uniquely qualified to be right in ways AI cannot replicate.


About Dr. Rachel Foster

Dr. Rachel Foster is a board-certified emergency medicine physician with twenty years of clinical experience treating acute medical conditions in high-pressure hospital settings. Since completing her residency in 2005, she has worked in emergency departments across the Northeast and currently practices at a Level I trauma center.

In 2016, frustrated by patients finding medical misinformation online, Dr. Foster started creating evidence-based health content to help people understand medical conditions, symptoms, and treatment options. Her website grew to 200 articles addressing common emergency and primary care concerns, reaching approximately 120,000 monthly visitors by 2020 and generating $3,000 in monthly ad revenue.

Starting in mid-2023, Dr. Foster watched her carefully researched articles get systematically outranked by AI-generated health content. Despite two decades of clinical experience and meticulous attention to medical accuracy, her traffic dropped 60% as algorithms prioritized SEO-optimized AI content over human expertise.

“If a chatbot can replace twenty years of medical training and experience, then we’re all fucked.”

Rather than abandon her project, Dr. Foster adapted by radically emphasizing her credentials and clinical experience—adding author bylines with verification links, including workplace photos, citing specific patient cases, and explicitly framing content around her professional authority. By late 2024, her rankings began recovering as Google’s algorithms improved at identifying low-quality AI content.

Dr. Foster’s traffic has partially recovered to approximately 70% of peak levels ($2,200 monthly ad revenue), though she acknowledges she may never fully reclaim her former rankings. She continues creating content because readers regularly share how her articles helped them understand diagnoses, communicate with doctors, or make informed healthcare decisions.

She now advocates for platform-level changes including credential verification for medical content creators, clearer labeling of AI-generated health information, and algorithmic prioritization of demonstrable medical expertise over SEO optimization.

Dr. Foster lives in Boston, works fifty-plus hour weeks in emergency medicine, and spends her limited free time fighting to keep expert medical information accessible in an increasingly AI-dominated search landscape.


This interview was conducted via video call in November 2025, immediately after Dr. Foster completed a twelve-hour ER shift. She was forthcoming about both her professional frustrations and her commitment to providing accurate medical information. The conversation has been edited for clarity while preserving her emphasis on the irreplaceable value of clinical experience and judgment.
