I used ChatGPT to search what community services are available for over 65s in Newham. I had been building a list of services for our patients and wanted to see what it would return.
The list looked right. Most of the services I recognised. I had used them before. One I had not heard of caught my eye. A community centre called Bonny Downs.
I clicked the link.
Oops. That page can't be found.
It annoyed me. ChatGPT had given me an answer and referenced a broken link. Which made me wonder: if this one is wrong, what about the others?
Then I thought about a normal Tuesday. Back to back patients. Meetings to get back for. A family in front of me asking a simple question. Would I really have clicked every link before I answered them? Or would I have taken ChatGPT at face value and passed it on?
That is the question I want you to sit with before the rest of this issue.
This Week's Workflow
Last week I told you Perplexity is my first stop for research. It is. But research is rarely the whole job. Most clinical tasks need more than one tool because most clinical tasks need more than one kind of thinking.
Last weekend I tested this on a real example. Cognitive rehabilitation guidance for a carer supporting a spouse home from hospital after a traumatic brain injury. The kind of thing I would normally explain on a home visit and hope the family retained. The kind of thing that almost always falls through the gap between what we say and what gets remembered.
I started with ChatGPT. Not for research. For roleplay. I asked it to play the spouse. Late sixties. No medical background. Tired. Worried. I told it to ask me the questions a real family member would ask, and to flag when I used words it did not understand.

ChatGPT Roleplay Prompt
The conversation was uncomfortable in the right way. The line that stopped me was this.
I want to help them, I really do. But when we are actually at home, in the middle of the day, I do not always know what I am supposed to do.
That is every carer I have ever met. ChatGPT did not invent it. It surfaced it.
Then I went to Perplexity. I asked for current UK guidance on cognitive rehabilitation after fall related traumatic brain injury. Compensatory strategies. Environmental adaptations. Carer involvement. Progression principles. It pulled from eighteen sources including NICE NG211 and QS74.

Perplexity Operational Research Prompt
I now had two things ChatGPT could not give me on its own. The questions families actually ask. The evidence base I needed to answer them properly.
Then I went to Claude. I gave it the role, the context, and the information in that order. The role was a clinical documentation assistant. The context was the carer questions from ChatGPT and the evidence summary from Perplexity. The information was the composite scenario and the four strategies I had prescribed.

Claude Clinical Documentation Prompt
Claude produced a draft carer guide written for both the patient and the spouse: a carer guidance document (cognitive rehabilitation plan).
Three tools. Three jobs. Each one doing the thing the others cannot.
Find. Ask. Explain.
Use Perplexity when you are asking what is currently true. Use ChatGPT when you are asking what would a person actually ask me. Use Claude when you are asking can you bring this together and explain it clearly. The mistake most clinicians make is using one tool for everything because it is the one they have heard of.
Information Governance
Same rule as every issue. Nothing relating to a real patient goes into any of these tools. Not their name. Not their condition combined with their age and living situation. Not anything that would identify them to someone with reasonable knowledge of your service.
This issue covers a workflow that produces written guidance for patients and the people who support them. That is slightly different territory from the documentation workflows in earlier issues so it is worth being clear about where the lines sit.
Use composite scenarios, not real patients. A composite describes a patient type, not a specific person. An older adult several weeks post stroke living at home with someone who supports them is a population. It is not a real person. Stay at that level. The roleplay will still surface the same questions and the guidance will read just as well.
When you have a real patient in front of you the personalisation happens inside your Trust's documentation systems. Your EPR. Your service's tools. Claude does not see the real patient information.
One thing worth knowing about distribution. Individualised written guidance you produce for a specific patient you have assessed and signed off as part of their care sits within your normal professional scope. Material intended for general distribution is different. Ward leaflets. Resources handed to multiple patients. Anything Trust branded. If you want to use this workflow to produce that kind of material, take the draft to whoever governs patient information in your Trust before it goes anywhere. The route exists. Use it.
The same reminder as always. ChatGPT, Perplexity, and Claude are consumer products. They have not been through NHS procurement or been assessed against DCB0129 or DCB0160, the clinical risk management standards NHS health IT systems are assessed against. Until your IG lead tells you otherwise, keep real patient context out of the tools entirely.
If you are ever unsure ask your IG lead before you paste anything.
Where To Start This Week
Pick one conversation you have regularly that does not always land the way you want it to. A discharge explanation. A carer instruction. A self management plan you talk through on a home visit that families forget by the time you have driven away.
Open ChatGPT. Ask it to roleplay the family member or patient on the other side of that conversation. No medical background. Tired. Worried. Tell it to ask you the questions it would actually ask, and to flag when you use words it does not understand.
Have the conversation. Notice where it pushes back. Notice the questions you had not thought to answer.
Ten minutes. You will hear something about your own communication you did not expect.
Prompts Worth Saving
Prompt One. The roleplay.
Best used in ChatGPT.
You are a [family member or patient] of someone facing [condition or situation]. You have no medical background. You are tired and worried. I am the [health professional] involved in their care. Ask me the questions you would actually have. Tell me when I use words you do not understand. Tell me when an instruction is not clear enough to follow at home. Push back if my answers are too clinical. Keep going until you feel you understand what is happening and what you need to do.
Prompt Two. The operational research.
Best used in Perplexity.
Find the most relevant studies on [topic]. Then translate the findings into NHS operational impact. For each insight include what the study found, what it means in practice for NHS services, impact on cost, staffing, or patient flow, and whether it is realistically implementable. Include citations.
Prompt Three. The clinical documentation template.
Best used in Claude.
Download → Full Prompt
https://clinicallyintelligent.com/downloads/Prompt-03-Documentation-Clinically-Intelligent.pdf
Act as a clinical documentation assistant helping a [health professional] build a reusable [type of intervention] guidance template. I will give you a composite patient scenario and the [strategies / exercises / techniques] I have prescribed. Produce the output as a document laid out simply and clearly so a patient or carer can read it without effort. Use clear headings, short paragraphs, plain English at reading age eleven, and a warm tone written directly to the patient and the family member supporting them. Explain each [strategy / exercise / technique] using established clinical knowledge. Do not invent patient specific details. If something has not been provided write 'to be personalised by clinician' in that field. Scenario and [strategies / exercises / techniques]: [PASTE HERE]
Full Pack (all prompts + template)
Download everything →
https://clinicallyintelligent.com/downloads/Issue-04-Complete-Pack-Clinically-Intelligent.pdf
Opinion
We have all had families who are anxious, rightly so, and we are always told to put ourselves in their shoes and try to understand where they are coming from. That is harder than it sounds when you are a clinician with years of experience and the language to match.
In Newham we have one of the lowest health literacy rates in the UK. Families are usually given a lot of information by the many different professionals involved in the care of their relative. Discharge summaries provide the recommendations but they do not always provide the detail. The language is often above what a family can comfortably read. Some professionals do not add anything to them at all.
As an OT I advise patients on my recommendations every day. Most of that advice is verbal. There is no way to guarantee the family understood it. There is no way to guarantee they will retain it once I have left the home.
That gap between what we say and what gets remembered is where families struggle most. It is also where AI has something to offer that the system has not given them. Not as a replacement for the conversation. As a way to give the conversation something to leave behind.
In Case You Missed It
NHS App AI triage. Over one million patients can now book GP appointments directly through the NHS App using a tool called Smart Triage. Patients describe their symptoms and get routed to the right appointment without a receptionist or doctor reviewing the request first. Worth watching what this does to downstream workload for AHPs and community teams who pick up the patients GPs route on. Source: Digital Health, April 2026.
Digital by default. NHS England has set out its position on digital by default with electronic patient records at the centre of the strategy. Most clinicians will tell you the issue is not whether their Trust has an EPR. It is whether anyone has optimised it. Adoption is the headline. Optimisation is where the workload actually lives. Source: NHS England and HTN, April 2026.
Ambient scribing guidance. NHS England has published formal guidance on AI enabled ambient scribing in health and care settings. The guidance acknowledges these tools can reduce documentation workload but flags safety, data protection, and system integration as ongoing concerns. The tools have been running ahead of the guidance for the last year. The guidance is now catching up. Source: NHS England, April 2026.
That is all for Issue 04. Every week I will bring you something practical you can use and a view on where this space is heading. Next week: a single tool that fixes the part of your week you probably tolerate but shouldn't.
If someone you know would find this useful, pass it on.
Clinically Intelligent drops every Wednesday. If you are not yet subscribed you can join free at clinicallyintelligent.com.

