Keywords
Feasibility study, realist process evaluation, GP consultations,
Use of telephone, video and online consultations in general practice is increasing. This can lead to transactional consultations, which make it harder for patients to describe how symptoms affect their lives and can cause confusion about plans for future care. The aim of this study was to test the feasibility of a randomised controlled trial (RCT) of a complex intervention designed to address patients’ concerns more comprehensively and help them remember advice from general practitioners (GPs).
The complex intervention used two technologies: a patient-completed pre-consultation form at consultation opening and a doctor-provided summary report printed or texted at consultation closure. The feasibility of the intervention was tested in a cluster-randomised framework in six practices: four randomised to intervention, and two to control. Thirty patients were recruited per practice. Quantitative data was collected via patient-reported questionnaires and health records. GPs, patients and administrators were interviewed. Analysis included a process evaluation, recruitment and follow-up rates, and data completeness to assess feasibility of a future RCT.
The intervention was acceptable and useful to patients and GPs, but the process for the pre-consultation form required too much support from the researchers for a trial to be feasible. The two technologies were each useful, but for different types of patients. Recruitment rates were high (n=194) but attrition was also high, so the criteria to progress to an RCT were not met.
Both the pre-consultation form and the summary report showed important potential benefits. They should be considered as separate interventions and evaluated independently. The technology to send pre-consultation forms needs further development to allow integration with GP computer systems. The additional time needed to generate summary reports meant GPs preferred to use them selectively. Collecting outcome data using online questionnaires was efficient but associated with high attrition, so alternative approaches are needed before a full RCT is feasible.
The problem: Patients sometimes feel GP consultations are too short. Sometimes problems are missed, or patients do not remember everything the GP tells them. We wanted to improve GP consultations.
What we did: We developed a better way to start and end consultations. Before a GP consultation, patients fill in a form online with more detail on their problems. This is shared with their GP. At the end of the consultation GPs can give patients a one-page summary of what was discussed.
Would it work? Investigating whether the new method works would require a full trial. In a full trial, some GP practices would use the new method and some would carry on as normal. We would then compare how patients did in each group. Full trials are expensive, so instead we did a small study to see if a full trial is practical. In this small study, four practices used the method with 30 patients each. These patients completed questionnaires before the consultation and were sent follow-up questionnaires. We collected the same information from 30 patients each in two practices where the GPs did not use the new method. We then interviewed patients, GPs and administrators.
What we found: Patients and GPs found the online form and summary useful, but they do not necessarily need to be used together as each is independently useful.
Patients completed the questionnaire before the consultation, but only 36% completed the follow-up questionnaire.
Administrators needed help from the research team to share the online form with the GP. This would not be practical in a full trial.
Conclusions: Because of the findings, we do not currently propose a full trial. However, the online form and summary are both useful and could be taken forward separately and tested in other ways.
Patients often leave GP consultations with unaddressed concerns1,2. This can lead to high rates of re-consultation and increased morbidity in the population. Previous research shows that approximately 27% of patients consulting in primary care have seen a doctor or nurse for the same problem in the last four weeks3, and up to 50% of consultations in primary care are followed by another consultation within two weeks4. Although there are no estimates of re-consultation for unaddressed concerns in primary care, we know that problems are missed in up to 50% of primary care consultations2, and that reducing consultation rates by just 1% in 2016 could have saved the National Health Service (NHS) over £100 million5.
The increased levels of telephone triage since the start of the coronavirus disease 2019 (COVID-19) pandemic may have made it more difficult for patients to communicate their concerns. Telephone consultations tend to narrowly focus on presenting symptoms6,7 and may direct GP consultations down a more transactional route. For example, GPs may miss mental health problems or multiple patient concerns and lose opportunities for health promotion. Increased levels of non-face-to-face consultations may even lead to delayed diagnoses8. This might particularly affect patients who find it harder to communicate by telephone, further entrenching existing health inequities.
Patients seen in primary care often present with multiple complex problems, many of which are unrelated to physical symptoms, and include informational needs on symptom-management or self-care, emotional problems, health concerns or social problems9. In the context of multiple presenting problems, GPs tend to focus on physical symptoms10. While this prioritisation is entirely appropriate to ensure patient safety, any missed opportunity to improve patient understanding and ability to self-care is also costly: a study in 2015 found that increasing patient engagement in their own health could save the NHS £2 billion by 202011. Small changes to improve the ability of GPs to address patients’ presenting problems, concerns and questions could therefore have considerable impact on the overall NHS budget.
Opportunities to address patients’ problems are commonly missed at consultation opening (when the GP should elicit the patient’s reasons for attendance)2. Problems can also remain unaddressed at consultation closure if the advice given is unclear, particularly with regard to “safety-netting”: i.e. advising patients what to do if the problem does not resolve or gets worse12.
This study involved testing the feasibility of an intervention aimed at more comprehensively addressing patients’ concerns in general practice. The intervention focused on consultation opening and closing, incorporating use of an individual-level patient-reported outcome measure (PROM) at consultation opening and written information at consultation closure. It was named the Consultation Open and Close, or COAC intervention.
The COAC intervention development study was carried out immediately prior to this study and is published in two linked papers13,14. Both parts of the intervention (the pre-consultation form at opening and the summary report at closing) were developed and tested separately, according to Medical Research Council (MRC) guidance15.
The “opening” part of the intervention uses a form completed by patients before the consultation which gives their reasons for attending and highlights other common concerns (the pre-consultation form). This includes individualised information (a list generated by the patient of their reasons for attending, and the key issues they would like to discuss) and standardised questions (a short list of questions on common problems, with tick-box answers). The standard questions were based on the Primary Care Outcomes Questionnaire (PCOQ), a validated generic questionnaire developed to capture the main outcomes which can be influenced by primary care, including physical and emotional symptoms and function, self-care, health behaviour, adherence, and a sense of support16,17. This was adjusted using a person-based approach in three rounds18.
The “closing” part of the intervention involves a summary of the consultation being handed or sent to the patient. This summary is generated through a programmed set of automated actions (known as a “protocol”) which was developed within EMIS Health®19, the electronic patient records system used in 57% of GP practices in the UK20. The protocol allows GPs to load a clinical template (a structured form) to input information, with the most common types of information generated by tick-boxes (e.g. tests, referrals and generic safety-netting). When the GP saves the template, a Microsoft Word report is automatically generated and saved to the record. This can be printed or sent to patients via email or SMS (short message service), i.e. a text message sent to a mobile phone. The protocol, clinical template and document template to enable this were developed by One Care, the federation for GP practices in Bristol, North Somerset and South Gloucestershire (BNSSG). It was developed in the One Care test EMIS system, which uses Read codes and is used by the GP federation to test protocols before publishing them to practices. The patient-facing summary report was developed in consultation with a patient and public involvement (PPI) group and tested iteratively using a person-based approach. The final summary report had two sub-headings, shown in the example in Figure 2.
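To illustrate the idea (not the actual EMIS protocol, which was built by One Care and is not reproduced here), the sketch below shows how structured tick-box fields from a clinical template might be assembled into a patient-facing summary; the field names, headings and wording are hypothetical placeholders.

```python
# Illustrative sketch only: a stand-in for the EMIS protocol described above,
# which turned tick-box template entries into a Word report for the patient.
# All field names, headings and wording here are hypothetical.

TEMPLATE = """What we agreed today
{plan}

What to do if things change
{safety_netting}
"""  # the real report's two sub-headings were finalised with the PPI group

def make_summary(tests, referrals, safety_netting):
    """Assemble a plain-language summary from structured consultation fields."""
    plan_items = [f"- Test ordered: {t}" for t in tests]
    plan_items += [f"- Referral made: {r}" for r in referrals]
    plan = "\n".join(plan_items) or "- No tests or referrals today"
    return TEMPLATE.format(plan=plan, safety_netting=safety_netting)

# Example use: print the text, or pass it to the practice's SMS/email system.
print(make_summary(
    tests=["Full blood count"],
    referrals=["Physiotherapy"],
    safety_netting="If the pain gets worse or you develop a fever, contact the practice.",
))
```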
Examples of the pre-consultation form and summary report are shown in Figure 1 and Figure 2 respectively (reproduced from the two linked papers on the development and testing of the pre-consultation form14 and the summary report13).
This study was conceived before the COVID-19 pandemic but began recruitment during the pandemic. The NHS long-term plan had committed practices to offer online consultations from April 2020 and video from April 202121. The COVID-19 pandemic required accelerated adoption of these tools22; in March 2020, to reduce contagion, the UK government instructed general practices to conduct all consultations remotely unless there was urgent need otherwise23. Online consultations were implemented in UK general practices throughout 2020. Telephone or online consultation triage models were introduced24, with most triaged consultations done via telephone/video25.
Although patient satisfaction with general practices in 2021 was at its highest level in three years26, media reports suggested some patients found triage frustrating and access increasingly challenging27. General practice had been experiencing a workforce crisis for several years, and 2021 was widely acknowledged as the most pressured time ever28. Many GPs felt that media coverage reporting poor patient access was unfair and demoralising28. The study was implemented in this very challenging context. There was an imperative for the study to be manageable within GP workload, beneficial to the consultation and working to resolve rather than exacerbate these issues in general practice.
PROMs were originally designed for use at aggregate level, to compare the scores of groups of patients receiving different care29. However, PROMs are increasingly being used at an individual level to inform a consultation, set priorities or aid diagnosis29. Prior to the COVID-19 pandemic, feedback of individual-level PROMs information to clinicians had been used most widely in oncology30. PROMs feedback was found to have a positive effect on patient experience and patient care by promoting patient self-reflection, thereby helping patients remember their main concerns31, by improving patient-clinician communication32 and by making it easier for patients to share information which they find difficult to express verbally33. Since the COVID-19 pandemic, PROM feedback to clinicians in secondary care has become much more widespread. Secondary care clinicians in specialities such as rheumatology34 and cardiology35 have reduced the number of face-to-face appointments and the overall number of follow-up appointments while maintaining quality and patient safety through “remote monitoring”, which includes collecting symptom and/or PROMs information from patients in between appointments. Recognising that the wholescale shift to telephone consultations risked serious problems being missed in some specialities, some secondary care clinicians also collected PROMs information from patients immediately prior to appointments to aid communication and help identify possible hidden problems36,37.
Collection of symptom information from patients before appointments has also increased substantially in primary care since the start of the COVID-19 pandemic. However, in primary care this is dominated by electronic triage. Electronic triage forms were mandated by the NHS Long Term Plan21 and rolled out across general practice during this study. Electronic triage forms have features in common with ePROMs completed before consultations; both collect clinical information from the patient which is shared asynchronously with a clinician. However, they differ in purpose and content. Electronic triage forms collect information on symptoms and are primarily used to assess whether and what type of consultation a patient needs and with whom. The patient may not receive a consultation after completing an electronic triage form, but may instead be advised to self-care, to go to a pharmacist or Emergency Department (ED), or may receive advice from the GP through email or the triage portal. In contrast, the primary purpose of ePROMs shared with clinicians before consultations is not triage, as the patient already has a booked appointment; they can serve multiple purposes, including raising clinicians’ awareness of patient concerns or providing more detail on a patient’s problems29.
The current widespread digitisation in general practice38 offers a timely opportunity to integrate an ePROM into clinical practice for use at an individual-level to help identify patient concerns.
The NIHR draws a distinction between a “pilot” study (a full trial in miniature, including assessment of outcomes) and a “feasibility” study (research designed to investigate whether a randomised controlled trial (RCT) will be feasible, which does not include assessment of the primary outcomes)39.
Cluster trials are most appropriate for interventions using PROMs feedback to clinicians, because contamination at the level of clinician or practice is a common problem with such RCTs; clinicians who are trained to make use of certain techniques at consultation opening and closure do not readily “forget” this training for control arm patients in an individually randomised trial40. Trials of PROMs feedback to clinicians which have shown effects on patient outcome tend to use randomisation at the level of physicians or practices, rather than individual patients40. Randomisation at the clinician or practice level also offers the potential to minimise the confusion that individual-level randomisation could cause for patients. In low-risk contexts, a cluster design in which physicians or practices are randomly assigned to prescribe an alternative treatment can be implemented without obtaining individual patient consent for randomisation41.
The objective of this feasibility study was to test the COAC intervention in a cluster-randomised framework to establish the feasibility both of the intervention and of a cluster RCT of the intervention.
This was a cluster-randomised feasibility study focused on whether the intervention was acceptable and whether the trial was feasible in terms of recruitment and retention. Since the focus was on feasibility, we did not include formal comparison of the outcomes between control and intervention arms. The key research questions were:
1. Is the feasibility study able to recruit and retain patients in both arms?
2. Can the necessary outcome data be collected in both arms?
3. Is the intervention acceptable to patients and clinicians?
4. What were the key implementation factors which affected the study (recruitment and response of practice teams, recruitment and drop-out of patients, fidelity to the intervention, adaptations, acceptability)?
5. What outcomes were achieved, through what mechanisms and in what contexts? (This research question is reported on briefly here, and in detail in the linked papers13,14.)
This study was based in Bristol, North Somerset and South Gloucestershire in six primary care practices with a range of socioeconomic deprivation levels as well as urban, suburban and rural areas.
The intervention comprised the following:
1. Patients with an upcoming GP or nurse appointment were sent a text inviting them to complete a pre-consultation form before their consultation. This was done by practice administrators, who ran a daily search on the patient record for patients with upcoming appointments and sent them a batch SMS invitation.
2. Patients received the SMS invitation with a link to the form, configured in the software system REDCap®42. Patients clicked on the link to complete the form.
3. Preparation step: Administrators generated a colour-coded report from each patient form and manually uploaded each report to the patient record before the consultation (an illustrative sketch of the colour-coding follows this list). The GP reviewed the report before the consultation.
4. Opening: At the start of the consultation, the GP made it clear that they had read the report then gave the patient a reasonable length of time to elaborate before interrupting, redirecting or closing down.
5. Consultation: The GP carried out the consultation according to their normal practice.
6. Closure: The clinician provided a sub-set of patients with a written summary report of what was agreed in the consultation (given on paper for face-to-face patients, or sent by SMS or email for telephone patients). This was only provided to the sub-set of patients who had either had tests ordered, safety-netting advice given, a referral made or another specific follow-up arranged.
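The exact colour-coding rules sat in the study’s macro-enabled spreadsheet and are not reproduced here; the minimal sketch below only illustrates the traffic-light idea described later in the process evaluation, with an assumed 1–5 “how bothered” answer scale and assumed thresholds.

```python
# Minimal sketch of traffic-light colour-coding of tick-box answers for the
# GP-facing report. The answer scale, thresholds and item wording are
# assumptions for illustration, not the study's actual rules.

def traffic_light(score: int) -> str:
    """Map an assumed 1-5 'how bothered' response to a colour."""
    if score <= 2:
        return "GREEN"   # little or no concern
    if score == 3:
        return "AMBER"   # moderate concern
    return "RED"         # strong concern

form_answers = {
    "Pain or other physical symptoms": 4,
    "Low mood or anxiety": 2,
    "Worry that it might be a serious illness": 3,
}

# One line per standard question; the patient's free text is shown separately.
for item, score in form_answers.items():
    print(f"{item}: {score} [{traffic_light(score)}]")
```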
A proposed initial programme theory for how this intervention was intended to work is shown in Figure 3.
Randomisation was done at the practice level to avoid contamination40. We selected three practices in the top two deprivation quartiles and three practices in the bottom two. To achieve a balance on deprivation, the three most deprived practices were randomised one to control and two to intervention and similarly with the three least deprived practices. One or two GPs and administrators per intervention practice were trained in the intervention, including use of the pre-consultation form and consultation summary report. Control practices received a shorter training as the process was simpler for control practices (see workflow in Figure 4).
Each practice was asked to recruit 30 patients, resulting in 120 in the intervention and 60 in the control (see Table 1). An estimated 1,200 texts were expected to be sent to recruit 180 patients. Figure 5 shows this in an anticipated CONSORT flowchart of recruitment.
 | Intervention | Control | Total |
---|---|---|---|
Practices | 4 | 2 | 6 |
Patients | 120 | 60 | 180 |
It was estimated that at least 115 (64%) of the 180 patients responding to the initial text would provide follow-up data and agree to data sharing from the patient record. This sample size would enable estimation of the follow-up rate within two-sided 95% confidence limits of ± 14%. An improved follow-up rate would generate a narrower confidence interval. A sample size of 180 was also enough to allow a sufficient pool of participants for interview, assuming 20% would consent to this.
Practices who had participated in the intervention development study were approached by the study chief investigator (CI) and new practices were approached by the National Institute for Health Research (NIHR) Clinical Research Network (CRN) for the West of England. Practices received a Research Information Sheet for Practices (RISP) and interested practices then contacted the CI.
All selected practices already used SMS software (MJOG® or accuRX®) and the patient records system EMIS.
Patients were included if they:
▪ Were aged 17 or over (on the date of the SMS invitation to participate)
▪ Had an upcoming appointment with a recruiting GP within the next week
Patients were excluded if they:
▪ Were housebound
▪ Had not given permission to receive SMS messages from the practice
▪ Had a recent diagnosis of life-limiting or life-threatening illness
▪ Were deemed by the GP to be at serious suicidal risk
▪ Were unable to complete questionnaires in English, even with the help of carers
General practice administrators searched their practice database using an electronic search strategy which identified patients with upcoming appointments who met the inclusion criteria. Batch SMSes were sent to patients with a link to the baseline questionnaire hosted on REDCap. Each SMS contained the patient’s EMIS number, which the patient was required to input so their questionnaire could be identified.
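As a rough illustration of why the batch messages had to be personalised (each patient needed their own EMIS number embedded alongside the questionnaire link), the sketch below builds such messages; the URL, wording and field names are hypothetical, and the study itself used the practices’ SMS software (MJOG/accuRX) rather than custom code.

```python
# Illustrative sketch only: building personalised SMS invitations, each carrying
# the patient's own EMIS number and a link to the REDCap baseline questionnaire.
# The URL and message wording are hypothetical placeholders.
from dataclasses import dataclass

BASELINE_URL = "https://redcap.example.ac.uk/surveys/?s=BASELINE"  # placeholder

@dataclass
class Patient:
    emis_number: str
    mobile: str

def build_invitation(patient: Patient) -> str:
    """Compose one SMS body; the patient re-enters the EMIS number on the first
    REDCap screen so their completed questionnaire can be matched to the record."""
    return (
        "Your GP practice is taking part in a study about consultations. "
        f"Before your appointment, please complete this short form: {BASELINE_URL} "
        f"Your reference number is {patient.emis_number}."
    )

batch = [Patient("1234567", "07700900001"), Patient("7654321", "07700900002")]
messages = {p.mobile: build_invitation(p) for p in batch}  # one message per patient
```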
Administrators received an alert when a patient completed a questionnaire. On a regular basis, administrators downloaded the report of each patient’s responses from REDCap as a PDF and attached it to the EMIS patient record. The baseline questionnaire included an information screen explaining the purpose of the study and how the data would be used. Return of the questionnaire indicated consent. Patients were asked to consent to their contact phone number being shared with the University of Bristol for the purposes of sending a follow-up questionnaire. Consent for use of that phone number to contact the patient for interview, and for access to the patient’s record for demographics and re-consultation rates, was requested in the follow-up questionnaire41. A similar approach has been taken for a number of other cluster trials41,43,44. The researcher then took full informed consent from patients who agreed to be interviewed prior to their interview. This consent was written for face-to-face interviews and audio-recorded for telephone interviews.
Feasibility study data collected included clinician questionnaire data, interview data, and quantitative patient data.
Clinician and administrator questionnaire data: The GP questionnaire included information for each consultation (new/review), modality (face-to-face, telephone or video), whether the pre-consultation questionnaire was useful, and why a summary report was used. Administrators completed a questionnaire indicating the number of SMS invitations sent each day, the number of reports attached and any technical issues.
Interview data: Interviews in the feasibility study (up to 30 patients and 16 practice staff) were conducted by the CI and the project research associate. Patients and practitioners were interviewed to the point of achieving “information power”, i.e. when the data analysis has yielded one or more coherent theories which are relevant to the study aims45. We additionally had carried out 26 GP and patient interviews in the intervention development phase. For some of our research questions it was helpful to compare the two phases, so these were also used to inform the analysis.
Quantitative data: Quantitative data included patient-reported outcomes and data from the patient record. The patient-reported baseline data was collected in the initial questionnaire sent to patients as part of the intervention. This questionnaire collected a combination of pre-consultation form data (included as part of the intervention to inform the consultation) and data collected for research purposes. There is overlap between these since some of the PCOQ questionnaire items are used both for the intervention and as a research outcome measure. At the start of the intervention development study the pre-consultation form had 18 questions based on three domains of the PCOQ. Through the intervention development process, five questions were dropped and the two “support” questions reworded to better identify potential social prescribing needs. As a result, 11 of the 13 questions from the pre-consultation form overlap with the PCOQ, but there was a need to ask seven more questions from the PCOQ for research purposes. The quantitative/questionnaire data items which were collected in the feasibility study are listed in Table 2.
Measure | Data collection method |
---|---|
Data extracted from the patient record* | |
Proportion of patients with at least one follow-up appointment within 1) one month 2) three months | EMIS |
Patient demographic information | EMIS |
Patient reported information collected via SMS link and input into REDCap | |
Perceived clinician empathy and doctor-patient communication | Consultation and Relational Empathy (CARE) measure (10 items – follow-up only)44. |
Health and well-being | Three domains from the Primary Care Outcomes Questionnaire (18 items – baseline and follow-up)45. The fourth domain, confidence in seeking healthcare, was not collected. |
Health knowledge and self-care | |
Confidence in health plan | |
Patient satisfaction | Patient overall satisfaction with the consultation (single item – follow-up only) |
Index value of health-related quality of life for economic evaluation purposes | EQ-5D46 (5 items – baseline and follow-up) |
Extent to which the patient’s main problem was resolved | Single item adapted from other studies in primary care (follow-up only) |
Extent to which consultation addresses patients’ priorities | Single item adapted from the Long Term Conditions 6 (LTC6) questionnaire47 (follow-up only) |
Extent to which consultation provided patients with information to manage their health | Single item adapted from LTC6 (follow-up only) |
2.8.1 Quantitative data analysis
As this was a feasibility study, outcomes in the intervention and control groups were not formally compared. Instead, the analysis focused on reporting data for planning and for assessing the feasibility of the full trial. The analysis answers research questions 1 and 2.
Recruitment and retention
A CONSORT flow diagram46 was produced. Proportions with 95% confidence intervals were calculated using the exact binomial method for the number of patients recruited, retained and completing outcome data.
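As a minimal sketch of the exact binomial (Clopper-Pearson) interval referred to here, assuming Python with statsmodels as a stand-in for the study’s actual analysis software (Stata 16 was used), the reported proportions can be reproduced approximately as follows.

```python
# Sketch of exact (Clopper-Pearson) binomial confidence intervals; e.g. 28 of 39
# intervention patients re-consulting within twelve weeks gives roughly
# (0.55, 0.85), as reported in Table 7. Stata was used in the actual analysis.
from statsmodels.stats.proportion import proportion_confint

recruited, followed_up = 194, 70
low, high = proportion_confint(followed_up, recruited, alpha=0.05, method="beta")
print(f"Follow-up rate: {followed_up / recruited:.0%} (95% CI {low:.0%} to {high:.0%})")

low, high = proportion_confint(28, 39, alpha=0.05, method="beta")
print(f"Re-consultation within 12 weeks: {28 / 39:.0%} (95% CI {low:.2f} to {high:.2f})")
```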
Patient reported outcome measures
Patient-reported outcome measures were scored using STATA 16. The EQ-5D downloadable STATA scoring package was used47, and published scoring rules were used for the CARE measure and the PCOQ48,49. The COAC pre-consultation questionnaire contained some items common to the PCOQ. Additional PCOQ items were included sufficient to score three domains; the domain “confidence in seeking healthcare” was omitted as it is similar to a measure of satisfaction with the GP, and including it in a questionnaire which was going to be shared with the GP may have affected the way patients responded. Means and standard deviations were calculated for each measure. Missing data was not imputed; only complete data was analysed.
Patient record data
The data extracted from the patient record included patient characteristics; consultations in the 12-week period after and including the date of the index consultation; and clinical codes added on the day of the index consultation.
In the EMIS® patient records system used in our recruiting practices, a record is added to the consultations database for different kinds of staff activity, some of which are not actually consultations. To identify the proportion of patients with at least one follow-up appointment within four weeks and twelve weeks, the consultations data extracted from EMIS® was cleaned to:
▪ Only include records added by clinicians, and exclude administrator-added records,
▪ Only include records where the consultation type was face-to-face, telephone, video or online communication,
▪ Treat consultations of the same modality and same clinician type within 30 minutes of one another as the same consultation.
The proportions of patients with repeat consultations within four and twelve weeks of the index consultation were then calculated.
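A minimal sketch of these cleaning rules is given below, assuming a pandas DataFrame extracted from EMIS with hypothetical column names (the actual extraction and cleaning were done within the study’s own analysis pipeline).

```python
# Minimal sketch of the three cleaning rules above, using hypothetical column
# names for an EMIS consultations extract; not the study's actual code.
import pandas as pd

CLINICAL_MODALITIES = {"Face-to-face", "Telephone", "Video", "Online communication"}

def clean_consultations(df: pd.DataFrame) -> pd.DataFrame:
    """Keep clinician-entered records of clinical consultation types, and merge
    entries of the same modality and clinician type within 30 minutes."""
    df = df[df["entered_by_role"] != "Administrator"]
    df = df[df["consultation_type"].isin(CLINICAL_MODALITIES)]
    df = df.sort_values(["patient_id", "consultation_type", "clinician_type", "start_time"])
    gap = df.groupby(["patient_id", "consultation_type", "clinician_type"])["start_time"].diff()
    # A new consultation starts when there is no previous entry in the group
    # or the previous entry was more than 30 minutes earlier.
    return df[gap.isna() | (gap > pd.Timedelta(minutes=30))]

def reconsulted_within(df: pd.DataFrame, index_dates: pd.Series, weeks: int) -> pd.Series:
    """Flag, per patient, any consultation after the index date (same-day handling
    simplified here) and within the given number of weeks; index_dates is a
    Series indexed by patient_id."""
    merged = df.merge(index_dates.rename("index_date"), left_on="patient_id", right_index=True)
    in_window = (merged["start_time"] > merged["index_date"]) & (
        merged["start_time"] <= merged["index_date"] + pd.Timedelta(weeks=weeks)
    )
    return in_window.groupby(merged["patient_id"]).any()
```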
2.8.2 Process evaluation
As well as informing feasibility, the qualitative data collected informed a process evaluation. The process evaluation was carried out using MRC guidance on process evaluation of complex interventions and based within a realist evaluation framework. Realist evaluation is a theory-driven approach which aims to identify core theories about how a programme is supposed to work and test them out to see if they are plausible, practical and valid. The full realist evaluation answers research question 5 and is reported in two linked papers: one for the realist evaluation of the pre-consultation form14 and one for the realist evaluation of the summary report13.
In line with MRC guidance, the process evaluation questions included implementation factors (recruitment and response of practice teams, recruitment and drop-out of patients, fidelity to the intervention, adaptations, acceptability). The analysis reported here answers research questions 3 and 4.
The chief investigator (MM) read and re-read the initial interview transcripts from both patients and practitioners in order to gain an overall view of the accounts given, to identify patterns in the data. MM identified themes against the MRC framework of implementation factors. The analysis was reviewed with the qualitative team (GW and AS) and finalised.
Based on the evaluation, a set of pre-agreed success criteria (see Table 3) was assessed to decide whether to continue (i.e. apply for funding for an RCT), stop (not progress to an RCT), or modify the intervention.
This research was informed by patient and public involvement (PPI) both before the study commenced and during the study. PPI contributors received expenses and reimbursement in line with INVOLVE guidance50.
The PPI group were heavily involved in development of the intervention and met four times during the intervention development study to inform intervention design. Their input and the impact of this is described in the two linked papers13,14.
The PPI group also met at the end of the feasibility study to comment on the overall interpretation of the data and to discuss how the results could be used in future, either for additional research or to benefit patients and clinicians. The group assisted with drafting and approving the plain English summary for this paper and other publications.
This study was sponsored by the University of Bristol. Ethics approval was granted by Frenchay Research Ethics Committee51 and the Health Research Authority (HRA). The BNSSG Clinical Commissioning Group Clinical Effectiveness and Research Team provided research and development approval. The study was NIHR funded and supported by the CRN, who liaised with centres on the researchers’ behalf.
Insurance was provided by the University of Bristol as research sponsor. The study sponsor and funders did not have any role in study design; data collection, management, analysis, and interpretation of data; writing of the report; or the decision to submit the report for publication.
The feasibility study was registered in the ISRCTN registry (ISRCTN13471877) and on the CRN portfolio (42005). The study protocol was published before recruitment completed52.
The feasibility study quantitative results are shown in this section.
Table 4 shows the sites randomised and the patients recruited in these sites. The recruited sites covered a wide range of area deprivation levels. List sizes were larger than the English average; the smallest list size was just under 10,000 patients and the largest was a “super practice” with more than 40,000 patients. The ethnic mix was varied; one practice had over 40% of patients from Asian, mixed, black or other non-white ethnic backgrounds and two had less than 5%.
Site* | Site IMD decile | Site list size | Site % Asian, black, mixed or other non-white | Site median age bracket | Recruitment periodƗ | # patients recruited (% of those contacted) | # patients summary report used | # patients interviewed | # patients provided follow-up data (% of # recruited) | # patients shared data |
---|---|---|---|---|---|---|---|---|---|---|
Site 1 (Intervention) | 9 | 12,500–15,000 | 0–5% | 40–44 | 06/07/2021 – 16/07/2021 | 32 (25%) | 5 | 6 | 12 (38%) | 7 |
Site 2 (Intervention) | 5 | 15,000–17,500 | 5–10% | 35–39 | 05/07/2021 – 20/07/2021 14/09/2021 – 28/07/2021 | 25 (17%) | 14 | 7 | 8 (32%) | 7 |
Site 3 (Intervention) | 10 | 40,000–45,000 | 0–5% | 40–44 | 17/08/2021 – 31/08/2021 20/09/2021 – 04/10/2021 | 33 (40%) | 13 | 9 | 21 (64%) | 17 |
Site 4 (Intervention) | 1 | 7,500–10,000 | 40–45% | 30–34 | 27/09/2021 – 14/10/2021 | 32 (27%) | 5 | 8 | 11 (34%) | 8 |
Site 5 (Control) | 10 | 15,000–17,500 | 5–10% | 30–34 | 19/08/2021 – 03/09/2021 | 35 (28%) | n/a | n/a | 8 (23%) | 5 |
Site 6 (Control) | 2 | 20,000–22,500 | 10–15% | 35–39 | 27/09/2021 – 14/10/2021 | 37 (28%) | n/a | n/a | 10 (27%) | 7 |
Total | | | | | | 194 (27%) | 37 | 30 | 70 (36%) | 51 |
* Information from the first four columns has been extracted from the national General Practice profiles published by Public Health England53 and reflects the population of patients in each practice. The index of multiple deprivation (IMD) decile uses 39 separate indicators, organised across seven domains of deprivation, which are combined using appropriate weights to calculate an index score. Lower deciles represent more deprived populations.
Ɨ Follow-up patient-reported questionnaires were sent after 1 week and data extracted from the patient record after 12 weeks. In site 2, both GPs recruited during both recruitment periods. In site 3, one GP recruited in the first period and another in the second.
In total, 194 patients were recruited, of whom 70 provided follow-up data.
Figure 6 and Figure 7 show patients’ route to recruitment in both intervention and control arms.
Overall, the practices exceeded the recruitment targets of 120 for intervention practices and 60 for control. Of the patients contacted, 27% were recruited. Neither the control nor the intervention arm met the target for completion of follow-up questionnaires or data sharing (43% intervention and 25% control, versus a target of 80%). Of these, 39/52 (75%) in the intervention arm and 12/18 (67%) in the control arm agreed to data sharing from the patient record. As shown in Figure 6 and Figure 7, some patients were lost because they did not consent to follow-up and some because they consented but did not complete the questionnaire.
Table 4 shows how these recruitment rates varied across the sites. Rates across the two control sites were very similar. The intervention site rates varied more widely, with between 17% (site 2) and 40% (site 3) of patients contacted being recruited, and of those recruited, between 32% (site 2) and 64% (site 3) providing some follow-up data.
Table 5 shows the number and characteristics of the patients for whom patient record data was extracted. Because of the low follow-up rates, these figures are not necessarily representative of the patients recruited to this study. Most of the patients were female (67% intervention and 70% control). The average age was 55.6 in the intervention group and 47 in the control group. Based on the patient record, 62% of intervention patients and 40% of control patients had at least one long-term condition. In the intervention sites, 85% were white. In the control sites 50% were white. Ethnicity information was not available for four (8%) patients.
 | Intervention Sites | Control Sites |
---|---|---|
 | n=39 | n=10 |
Female, n (%) | 26 (67%) | 7 (70%) |
Age, mean | 55.6 | 46.9 |
White, n (%)* | 33 (85%) | 5 (50%) |
Have a long-term condition, n (%) | 24 (62%) | 4 (40%) |
Table 6 shows the patient-reported outcome measures analysis. Of the 122 intervention patients recruited to the study, 117 (96%) completed enough PCOQ items to score at least one PCOQ domain, and 115 (94%) completed the EQ-5D. In the control arm, all 72 recruits (100%) completed sufficient PCOQ items to score at least one domain and 70 (97%) completed the EQ-5D index.
Outcome | Intervention | | | Control | | |
---|---|---|---|---|---|---|
Baseline | n | Mean | SD* | n | Mean | SD |
PCOQ: Health and well-being | 117 | 3.43 | 0.99 | 72 | 3.18 | 1.00 |
PCOQ: Health knowledge and self-care | 115 | 4.04 | 1.00 | 71 | 3.84 | 0.88 |
PCOQ: Confidence in health plan | 116 | 4.16 | 0.65 | 72 | 3.71 | 0.74 |
EQ-5D index | 115 | 0.65 | 0.31 | 70 | 0.54 | 0.38 |
Follow-up | ||||||
PCOQ: Health and well-being | 49 | 3.70 | 1.02 | 16 | 3.35 | 1.04 |
PCOQ: Health knowledge and self-care | 49 | 4.29 | 0.85 | 16 | 3.88 | 0.95 |
PCOQ: Confidence in health plan | 49 | 4.17 | 0.64 | 16 | 3.91 | 0.78 |
EQ-5D index | 49 | 0.69 | 0.33 | 15 | 0.61 | 0.36 |
CARE | 48 | 4.32 | 0.80 | 15 | 4.38 | 0.82 |
Single Item: Satisfaction with last appointment | 52 | 4.54 | 0.75 | 18 | 4.61 | 0.70
Single Item: Discussed important issues | 52 | 4.54 | 0.96 | 18 | 4.50 | 0.92 |
Single Item: Satisfied with information | 52 | 4.42 | 0.89 | 18 | 4.22 | 1.11 |
Single Item: Main problem resolution** | 36 | 4.06 | 1.45 | 13 | 3.31 | 1.03 |
**n excludes 16 intervention and 4 control patients who responded n/a
Primary Care Outcomes Questionnaire (PCOQ), Consultation and Relational Empathy Measure (CARE) and first 3 single items scores from 1 (poor) to 5 (excellent)
EuroQol 5-dimensions (EQ-5D): Index score with zero at a state equivalent to death and 1 perfect health
Satisfaction with last appointment scores: very satisfied = 5, fairly satisfied = 4, neither satisfied nor dissatisfied = 3, fairly dissatisfied = 2, very dissatisfied = 1
Discussed important issues scores: yes = 5, most of them = 4, some of them = 3, not really = 2, no = 1
Satisfied with information scores: very satisfied = 5, fairly satisfied = 4, neither satisfied nor dissatisfied = 3, fairly dissatisfied = 2, very dissatisfied = 1
Main Problem Resolution scores: 7 = completely better, 6 = much better, 5 = better, 4 = slightly better, 3 = same, 2 = slightly worse, 1 = worse
Of those who completed the follow-up questionnaire single items, 49/52 (94%) also completed the PCOQ and 48/52 (92%) also completed the CARE measure. In the control arm, 16/18 (89%) completed the PCOQ and 15/18 (83%) completed the EQ-5D index.
Table 7 shows re-consultation rates. Re-consultation rates were calculated using consultations data extracted from EMIS. 72% (95% CI = 55%, 85%) of intervention patients and 90% (95% CI = 56%, 100%) of control patients re-consulted within twelve weeks. The data necessary to carry out this analysis was consistently coded in the patient record, and it was a relatively simple process to establish replicable data rules to calculate the consultation rates.
 | Intervention (n=39) | | Control (n=10) | |
---|---|---|---|---|
Re-consultation rates | n | Proportion (95% CI*) | n | Proportion (95% CI*) |
Proportion of patients who re-consulted within four weeks | 23 | 0.59 (0.42, 0.74) | 9 | 0.90 (0.56, 1.00) |
Proportion of patients who re-consulted within twelve weeks | 28 | 0.72 (0.55, 0.85) | 9 | 0.90 (0.56, 1.00) |
Table 8 shows the GP questionnaire results. GPs reported on 111 (91%) of recruited intervention patients. They found the pre-consultation form useful for 69% of patients, varying from 59% in site 1 to 78% in site 3. The summary report was used with 37 patients.
The process evaluation is presented in the following sections. Section 4.2 gives a summary of the interview participants and Section 4.3 gives a brief summary of the realist evaluation, which is presented in more detail in the two linked papers13,14. Section 4.4 presents some contextual factors affecting the implementation from the GP and patient perspectives. In Section 4.5, fidelity to the intervention and adaptations made are presented. Section 4.6 describes technical and process issues. In Section 4.7 and Section 4.8 the acceptability of the pre-consultation form and summary report are reported from the GP, patient and administrator perspectives. Finally, Section 4.9 presents the factors which may have affected recruitment and follow-up rates.
A total of 45 interviews were carried out: 30 patients, nine GPs and six administrators. In addition, the 26 GP and patient interviews from the intervention development phase were also used to inform the analysis. The number of interviews at each site in each phase is shown in Table 9. In the qualitative analysis which followed, patients 1 to 20 were from the intervention development study and patients 21 to 50 from the feasibility study. So that the evolution of their views can be compared, the same identifier is used across the studies for GPs who were in both studies.
A key finding from the GP interviews was that the pre-consultation form and the closure report were useful in different circumstances and for different types of patients. The closure report was completed for 30% of patients who completed a pre-consultation form (Table 4). GPs said this was because completing the summary report took a few extra minutes and the time trade-off was not worth it for the other patients. They also felt that there were several patients they would have liked to give a closure report, who were not eligible for one, because they had not completed the pre-consultation form. One GP summarised this as follows:
“The pre-consultation questionnaire and the post consultation things could be entirely separate in their usefulness […] you might bring in some elderly frail patients and they won’t have wanted to fill in the questionnaire […] but they do need a way of remembering what was decided about their medications or their tests or all the rest of it. A whole group of, let’s say younger, anxious people, doing the pre-consultation questionnaire will be really useful for us, efficient in what is a low-risk patient and the ones that truly are anxious we can back it up with the things [...] So, yes. I wouldn’t even think they’d necessarily need to be directly linked with each other.” (GP 2, feasibility study)
This GP had participated in the intervention development study, when each part of the intervention was tested separately. She felt the technologies should still be available separately. The pre-consultation form was useful for low-risk patients, but the summary report was most useful for frail and elderly patients who might need a memory aid after complex instructions, yet these patients may not have completed the pre-consultation form.
Because of this finding, separate programme theories were developed for the pre-consultation form and the closure report. These are presented separately in the two linked papers13,14. This resulted in a set of theorised outcomes for each part of the intervention and two programme theories replacing the initial programme theory shown in Figure 3.
For the pre-consultation questionnaire, six outcomes were identified. These included issues being raised that might not otherwise have been raised, particularly when patients had a concern they found difficult to voice. They also included a wider range of support being offered to patients, partly because hidden issues were uncovered and partly because having the information in writing emphasised its importance to GPs and enabled them to quickly focus on what mattered. GPs and patients felt time was used more effectively because the dialogue began before the consultation. Some patients felt their health and wellbeing was improved, partly because they were offered more support and partly through the therapeutic act of feeling more listened to. Feeling more listened to also made some patients more confident in seeking healthcare in the future. Finally, most patients were more satisfied with their consultation. A revised programme theory showing these outcomes and mechanisms is shown in Figure 8. This has been reproduced from the pre-consultation form development and realist evaluation paper14 but with the addition of how each of the outcomes could be measured, and whether the data to measure it was captured in this study.
For the summary report, five outcomes were identified. The key outcome was that patients and their family were clearer on the follow-up required because the report acted as a memory aid that patients could share and discuss with their family. Patients were reassured and empowered by the information. GPs felt that they reflected more on how to plan and communicate the follow-up and this, combined with the report being available on record for other GPs to see, led to a more appropriate care pathway for the patient. Finally, some GPs thought the audit trail on the record would be useful for medicolegal purposes in case of a legal dispute. A revised programme theory showing these outcomes and mechanisms is shown in Figure 9. Again, this is reproduced from the summary report development and realist evaluation paper13 but with the measurement aspect added.
GP factors
General practice was forced to change rapidly to a remote consulting model at the start of the COVID-19 pandemic. Although face-to-face consultations increased in 2021, they remained far below pre-pandemic levels. GPs also used SMS much more frequently than before54. In March 2021, one GP commented:
“If you had done this study 18 months ago, you would have probably got very different answers, so what this form added, because we just weren’t texting patients routinely, whereas now quite often if I’ve booked appointments I will text it to them so that they know when it is.” (GP 3, Intervention Development)
This meant that the technological environment in general practice was vastly different between the time of the study design and the conduct of the feasibility study. Practice booking procedures and policies had also changed. Many of the feasibility study sites had allocated most of their slots to telephone triage, and most released these appointments on the day. From the point of view of the COAC Study, this meant that the study had to fit in around these new procedures. How this was done depended on an individual agreement with the study CI and each practice principal investigator (see fidelity and adaptations).
The feasibility study also coincided with patient volumes increasing to pre-COVID levels and the roll-out of the vaccination programme. This put GPs under unprecedented pressure. As one GP said:
“This job would be easy if I had an hour per patient most of the time but from the point of view of how much detail you want, to be honest with you, you haven’t got time or you haven’t got the brain capacity to keep that all at the front of your mind.” (GP 1, Feasibility Study)
In the context of the COAC study, this meant that practice recruitment was slightly delayed, and the intervention had to be as time effective as possible for GPs. The availability and experience of administrative staff was also a factor. Some of the practices we recruited were short of administrative staff during the period of recruitment and keen for the administrative input to be kept to a minimum.
Patient factors
Some patients had perceived a deterioration in access to general practice following the switch to remote consulting. Patients described both finding it difficult to get through on the phone and finding it difficult to get their needs met by phone when they did finally get through. Two patients explained this as follows:
“I have found that, during the lockdown, with the telephone calls – the first doctor I had was absolutely useless – and I don’t know if it’s because of COVID – I don’t know if that’s relevant for the study, or not, but over the last year, the telephone appointments – they’ve seemed a bit disinterested, and you’ve got not much faith because no-one wants to see you.” (Patient 26, feasibility study)
“But because it was so difficult to get into, even before COVID, to be honest. I rang up today to make an appointment for some blood tests with this practice nurse. It takes you 33 minutes just to get through.” (Patient 28, feasibility study)
Patient 28 felt that it was difficult to contact the practice even before COVID, but that COVID had made it worse. Patient 26 felt frustrated that they could only get an appointment by telephone and felt that this did not meet their needs. This is particularly relevant for the COAC study, as the intention was that it would make telephone consultations less transactional and help patients feel more listened to, so that GPs did not seem “disinterested”. A minority of patients preferred the switch to telephone:
“I find that the telephone conversations are a lot better than the in-person face to face […] I think they have to focus more on you when they’re on the telephone having a conversation with you. There’s no waiting around in the surgery. There’s no… I don’t know. It seems to take less time for myself, and the GP and it seems to be a lot more focused.” (Patient 30, feasibility study)
Many patients found that the lack of continuity of care combined with the very limited time in GP consultations made it difficult to explain their problems to the GP:
“Sometimes it’s very hard to get consistency because, until recently, you just see whatever doctor got offered […] and because the time is so limited when you go and see a doctor… you have to take up half your time just going over your history because there’s no consistency, and that’s particularly when I have four, five, six things which I really want to get addressed.” (Patient 21, feasibility study)
This patient struggled to get continuity of care but had identified that they would benefit from it, and felt time was wasted at the start of the consultation repeating the history. This was a common complaint from patients. Many of the patients we interviewed said they often found communication with their GP difficult, and in some cases this was why they opted to complete the pre-consultation form. There were, however, many exceptions to this, where the patient either did not have difficulties with access or continuity or managed to receive a good service despite having issues.
Fidelity
Because of the COVID-19 pandemic regulations, the researchers were not able to directly observe consultations to assess fidelity. From the patient interviews it appeared that GP fidelity to the “Open” part of the intervention was strong. Most patients felt that the GP had read the form and had let them know this at the start of the consultation, often by raising issues before the patient did. Patients also reported that GPs listened at the start of the consultation rather than diverting it down the route of the issues on the form. GPs confirmed that they followed this part of the intervention as it was explained in the training:
“I had a patter that I got used to doing. I think it was partly the patter that when Mairead (CI who ran GP training) and me, we were discussing on our training was that to sort of try to open with ‘I’ve read your pre-consultation form. Thank you very much for being part of the study, is there anything you want to add’.” (GP 8, feasibility study)
There were two or three exceptions where patients said that the GP did not mention the form and they thought it had not been read. It seems possible this was due to a process failure whereby the report was not uploaded to the patient record in time (see technical and process issues).
Patient and GP interviews also suggested a high level of fidelity to the consultation summary report. Overall, the summary report was used with 30% of patients, but there was wide variation between practices, from 15% in one practice to 56% in another. As covered in the training, GPs completed the summary in patient-friendly language, summarised the advice given in the consultation and sent it to patients on the same day.
Administrators mostly followed the process described in the technical guide. Part of this process was to check the list of patients for exclusions that would not be picked up by the EMIS query (for example, patients with a recent life-limiting diagnosis). Some administrators extended the exclusion criteria, for example one practice did not send the invitation to patients who were booked for an intra-uterine contraceptive device (coil) clinic, as the recruiting GP felt it would not be useful for these patients to complete a form. Other administrators did not have time to go through the list for exclusions, so sent the SMS to all patients with booked appointments.
Practice-level customisation
The generic procedure documented in the protocol was adjusted for each practice to fit with local booking procedures and administrator capacity. Some sites sent the SMS invitations out in the morning to patients with same day appointments, and administrators attached the reports as they were received through the day. One site lifted the embargo on same-day telephone triage so that the SMS invitations could be sent the day before and the reports attached as a single task the following morning. Another site sent the SMS invitations once a week to patients with pre-booked appointments.
The free-text in the consultation summary report was customised for each practice to reflect local practice and procedures, e.g. on safety-netting advice for fast-track referrals, or procedures for test booking. Any minor changes like this were agreed after the training session and implemented between the GP training session and the first day of recruitment.
Adaptations
GPs were provided with a broad outline for the intervention but, within this, were encouraged to adapt the process to fit their consultation style. In letting the patient know they had read the form, some GPs simply said “Thank you for completing the form, is there anything else you want to add before we start”. Other GPs raised issues at the start of the consultation “I see the pain in your side is really bothering you and I’d like to support you”.
GPs gathered information from the pre-consultation form in slightly different ways. Some GPs used the form as a prioritisation tool by summarising the patient’s problems at the start of the consultation and agreeing which to focus on. All GPs read the more detailed information written by the patient, which was highlighted in blue. Some used the traffic-light system to pick out the aspects which bothered the patient most. Some GPs ignored the traffic-light system for individual lines but used the colours as a whole to gauge the patient’s overall well-being and mood (for example, a report that was mainly green was likely to be a quick consultation with no hidden agenda, whereas a report that was mainly red and amber indicated that the patient had a poor perception of their own health and wellbeing). Some used the traffic lights for mood and health concerns but paid less attention to them for pain and other physical symptoms.
“I would have read the free text sections and then focussed in on the fact that clearly they should have said that their pain is quite severe, so yeah, I would have concentrated on that and then sort of made a mental note about mentioning that CKD as well, but yeah, if a patient just said slight low mood or anxiety, slightly worried it might indicate a serious illness I probably wouldn’t have sort of separately asked about that.” (GP 7, feasibility study)
With the consultation summary report, GPs varied in the level of detail they provided and in how much of the summary they wrote up during the consultation versus afterwards. For phone consultations, most GPs entered at least some information into the EMIS template as they were consulting with the patient, then generated and formatted the Word document afterwards.
“I mean I can touch type so I’ll be looking at them but I’ll be typing […] But the kind of the bit at the end, the Alt F is a little bit tricky so I think I always did that bit afterwards.” (GP 5, Intervention Development round 3)
This GP wrote up the patient summary during the consultation but waited until the patient had left before running the Word macro to format the summary (referred to as “Alt-F” in the quote) so they could concentrate on getting this right. Some GPs read the report aloud to the patient as they typed to engage them in the process and create a shared decision-making plan. One GP also sent the reports via SMS even for face-to-face consultations:
“some of the patients I saw face to face I could have just printed it out and given it to them in the consult […] we’re so used to in the last 18 months consulting more on the phone and sending people texts, etc, etc, although we’ve been seeing face to face where we need to, I just think in my head I just made it electronic and send it to them on a text and I would say to them, I’ll send you the report on a text when I guess [laugh] I could have given them the physical print out. […] It gave me opportunity when they’d left just to type them out without them just sitting there watching me and then send it to them.” (GP 8, feasibility study)
Some GPs modified the way they took notes in EMIS so they did not duplicate information in the template:
“it was a question of remembering […] as long as I then go ‘oh I am going to be doing the template, don’t bother writing this because you are about to write it again into the template’, I then try to modify my normal history taking and note keeping in EMIS and abbreviate that and then switched over to using the template early so that I didn’t have to type.” (GP 2, feasibility study)
This GP modified her consultation notes, putting them in a very summarised format before launching the COAC template, and then used the template to store the notes that she might previously have put into EMIS without the template. This got easier with practice, and one GP was just starting to do that towards the end of recruitment.
“In theory I think if I got more used to doing it, you could almost use this protocol as your consultation. You know you could write everything in it and almost do it instead of your consultation notes. But I didn’t get slick enough for that.” (GP 6, intervention development study round 3)
Other GPs pointed out that there is a need for some technical language in the patient record, so the COAC template would never entirely replace the consultation notes.
Pre-consultation form
The pre-consultation report required use of REDCap, MJOG and a macro-enabled spreadsheet. There were some technical issues with all of these systems, which meant the process required much more support from the CI than had been anticipated. The macro-enabled spreadsheet generated some errors in practices using Office 365. A REDCap server failure at one point meant administrators were unable to generate reports and recruitment was briefly paused. A REDCap server update on another occasion meant the links no longer worked and needed to be replaced. The process adopted in the feasibility study involved patients keying in their own EMIS numbers (as opposed to the individualised links used in the intervention development study). A few patients keyed the EMIS number in incorrectly; administrators were normally able to recognise the typo and attach the correct report to the record.
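In the study this typo-spotting was done by eye. Purely as an illustrative sketch of how the same check could be approximated in a future, more automated process, the snippet below compares the number a patient typed against the list of EMIS numbers invited that day; all function and variable names are hypothetical, and the matching threshold is an assumption rather than anything used in the study.

```python
# Illustrative sketch only: flag likely mis-keyed EMIS numbers by comparing the
# patient's entry against that day's invited list. Not part of the study process.
from difflib import get_close_matches

def match_emis_number(keyed: str, invited_today: list[str]) -> str | None:
    """Return the most plausible intended EMIS number, or None if it needs a human check."""
    keyed = keyed.strip()
    if keyed in invited_today:
        return keyed                      # exact match: nothing to fix
    # Close matches tolerate a single mistyped or transposed digit.
    candidates = get_close_matches(keyed, invited_today, n=2, cutoff=0.8)
    if len(candidates) == 1:
        return candidates[0]              # one clear candidate: likely typo
    return None                           # zero or several candidates: ambiguous

# Example: a patient types "1234657" when "1234567" was invited.
print(match_emis_number("1234657", ["1234567", "7654321"]))  # -> "1234567"
```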
Some administrators were unused to the SMS system used (MJOG). Because of this, one made an error when sending the messages, so that the patients received a message without an EMIS number in it. This meant those patients were unable to complete a report, as the first screen asked for the EMIS number. Nearly all administrators commented that they would prefer to use a different SMS system (accuRX), as they were more familiar with it. However, at the time of the feasibility study, accuRX had only just developed its batch SMS functionality, and this did not include the ability to customise the message with a different EMIS number for each patient.
Finally, there were some process issues which meant that the pre-consultation report did not get uploaded onto the patient record in time for the GP to see it:
“I think there were two that one of them I didn’t get to the notes before I consulted the patient, so I didn’t see that at all beforehand. There was another one that I think I may have looked at mid-consultation, because it had just come in. So, there was just a bit of timing issue on those ones that we’d booked the same day, I think.” (GP 2, feasibility study)
This GP was in one of the sites where the on-the-day triage system meant that the SMS invitations were sent out on the same day as the appointment, so the window of time for the patient to complete the report and the administrator to upload it to EMIS was short.
Consultation summary report
Using existing software for the consultation summary report worked relatively well. The process was designed so that, once the GP had completed the template, producing the report and sending it to patients was quick and easy. However, it did involve three separate actions (pressing save in EMIS to generate the report, running the macro to format it, and attaching it to accuRX from EMIS). If any of these steps failed, this was frustrating for GPs. In one practice, the button to attach a document to accuRX directly from EMIS was not enabled, so the GPs in that practice had to save the document to the desktop:
“It was just the linking it from EMIS. So, when it’s saved, when you’d send it in the text message, it should say, ‘Send document from EMIS’, but it didn’t come up, for some reason. Then it wouldn’t send when it did come up, so I just saved it on the desktop and sent it from there instead, which just took a bit longer. But it did work eventually.” (GP 2, feasibility study)
In another practice, the GP moved to a different computer which did not have the word macro installed on it.
“It had bullet points and then obviously because the ones that you don’t use come out blank, they come out as a bullet point with a blank space beside it and when you use the macro it just tidies that all up and makes all the blank spaces disappear, which was great, it was very handy when it worked, but it didn’t work then on subsequent days on different computers, but it was very easy just to manually delete them within a few seconds.” (GP 6, intervention development study, round 3)
Both GPs seemed unconcerned about the technical glitches; GP 2 said it “just took a bit longer” and GP 6 said it was done “within a few seconds”. However, even if these technical faults add only slightly to the time, this is still additional time which GPs do not have, and it may dissuade them from completing the summary report.
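For readers unfamiliar with what the formatting macro was doing, the sketch below illustrates the tidy-up GP 6 describes: deleting the empty bullet lines left behind by unused template fields. This is a minimal illustration in Python (requiring the python-docx package), not the study’s actual Word macro (“Alt-F”), and the file name is hypothetical.

```python
# Illustrative sketch of the tidy-up step: remove blank bullet paragraphs from a
# generated Word report. Not the study's macro; file name is a placeholder.
from docx import Document

def remove_blank_bullets(path: str) -> None:
    doc = Document(path)
    for paragraph in list(doc.paragraphs):
        is_list_item = paragraph.style.name.startswith("List")
        if is_list_item and not paragraph.text.strip():
            # Drop the empty bullet paragraph from the document body.
            element = paragraph._element
            element.getparent().remove(element)
    doc.save(path)

remove_blank_bullets("summary_report.docx")
```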
GP practices all required customisation of the prefilled text to reflect local practice; for example, phone numbers to call if referrals were not received. Because the summary report was developed in a Read code system, the practices were not able to make direct changes to the free text, so One Care created and republished an adjusted version for each practice. Although this did not impact the practices, it would be difficult to administer if rolled out more widely.
Administrator perspective
Of the four intervention sites, one administrator felt an RCT would be easily manageable and the other three had reservations. The extent of the administrative burden depended on the practice appointment booking procedure and policy. In sites 1 and 3 appointments were booked on the day, and the administrator had to send out SMS invitations daily with a short window (sometimes only 1 hour) between the patient completing the pre-consultation form and the appointment.
“you’ve got to keep checking your emails and you’re in like five different apps. You know you’re in something that’s quite complicated, you know then you’ve got to stop that, come out of that, go put that on the record [the COAC report for the GP], then you’ve got to go back to where you were and that’s where there’s really time consuming ‘cause you’ve lost your track of what you were doing in the first place [ …] If you could set aside half an hour or whatever time it is, to just do that job for that amount of time and then go on something else, that would be a million times easier.” (Administrator 1, feasibility study)
“I think if we did it again with that in mind then I think we’d probably like you said, that other practices did, where they did them – let them be booked the day before so you’ve got time to get them all on to the system.” (Administrator 3, feasibility study)
Both these administrators found it difficult to send out the SMS invitations in the morning when the appointment was on the same day, as they had to remain alert throughout the day for completed forms, which was more difficult than uploading the reports as a single task.
Site 4 had adjusted their booking processes so that the forms could be sent out the day before. This administrator found the task much more manageable:
“when we first got together going through setting everything up […] I didn’t know my way around the system, whereas once I was up and running with it, it was actually really straightforward. […] it’s been great […] the way that the patients were contacted was very straightforward. […] I would be really excited if we did it, if it was a bigger project here, just because I think it was such a positive thing for the patients, so I think it would be brilliant.” (Administrator 6, feasibility study)
This administrator found the task straightforward and expressed interest in extending the study. However, the task was only acceptable to this administrator because a change was made to the embargo on appointments for the recruiting GP. If more GPs in the practice were recruiting, this change may not be possible without disrupting practice policy. The administrator also pointed out that she might need to reduce her levels of checking the patient list if more GPs were involved.
Site 2, in theory, should have had a similar experience to Site 4, as this site also sent the SMS invitations out in advance. However, this site was beset by technical difficulties:
“I thought it would be similar and quite streamlined like the last time around whereas this time it was quite hard work to actually get it and it was actually quite time consuming.” (Administrator 5, feasibility study)
This site had been involved in the intervention development study and the administrator explained that she had not found the process difficult then. However, because three different technical problems coincided, the feasibility study was more difficult, and this site had the longest recruitment period but the lowest number of recruits in the study. When asked if an RCT was feasible the administrator said:
“If it all worked, then yeah, we would be able to do it (an RCT).” (Administrator 5, feasibility study)
The two control practices both found the process relatively easy, although each had a minor technical problem; in one practice, a patient entered the EMIS number incorrectly so they needed to correct it and in the other, the REDCap link stopped working. If these problems were resolved the administrators felt an RCT would be feasible provided they were given the time to send the message daily.
The study CI was available throughout the recruitment period to assist with technical problems at short notice. This was possible because of the small number of recruiting sites but may be more difficult in a larger RCT with multiple sites recruiting simultaneously.
GP perspective
The pre-consultation report was straightforward and acceptable to GPs, who felt it fitted well into their way of working, particularly with the shift to more telephone consultations.
“I think it's reflecting the way that we're working. We're getting telephone consultations, and it might say something on the screen, and when we speak to the patient it's completely different. […] Doing this pre-consultation allows us to have an idea about patient's concerns, which is really, really useful.” (GP 4, feasibility study)
Most GPs were aware of the difficulties administrators had experienced, so caveated the level of acceptability when asked whether they would take part in an RCT or incorporate the pre-consultation form into routine use:
“I think, yes [I would use it in daily practice] if it didn’t involve admin time, and therefore, it just came through automatically. You know, they get automatically sent out when the patient booked an appointment, and it automatically came back into us.” (GP 2, feasibility study)
When asked directly whether they would be happy to use the pre-consultation form in clinical practice, this GP said yes, but only if it was automated so that it did not require so much administrative input. Other GPs agreed: setting aside the administrative difficulties, the form was useful and simple:
“It was quite straightforward. Obviously doing anything new adds a little bit of stress to the working day, but it was actually very straightforward.” (GP 2, feasibility study)
“it was really easy for me to read, I scanned it in 20-30 seconds, phoned the patient and said, ‘Thank you so much for doing this, I’ve had a look at your [ form you completed online… ] from that I can see that this is what’s going on, and this is what’s important to you.’ I found that actually really easy.” (GP 5, intervention development study round 3)
“my expectation was probably it would generate lots of extra issues that there wouldn’t be time to deal with and that it would be sort of too much, but actually yeah no I think it worked well in terms of sort of getting the relevant information and sort of getting to the heart of the issue earlier on that you probably would have done otherwise in the consultation I think.” (GP 7, feasibility study)
GP 2 explained that, even though the process was straightforward, it felt unfamiliar at first. Training and practice were both important factors in improving acceptability. GP 3 explained that, before their training session, they had been concerned that the pre-consultation element of the intervention would be time-consuming, as they had to read the document before speaking to the patient, but in fact it did not take too much time. The CI was aware from previous research that GPs often over-estimate the time taken and under-estimate the value added in these situations. She therefore timed the GPs reading the pre-consultation report in the training sessions. GPs were then asked to estimate how long they thought it had taken, and often overestimated. Through this process, the GPs realised that reading the pre-consultation form was a time-efficient way to gather information. One GP commented on this:
“we did the practices with Mairead (CI who trained the GPs) and it would be 20 seconds to scan this document, it does draw your eye to the important bits which is the red or the amber, and then the other bits you can quickly scan over but your mind isn’t so preoccupied by those, whereas if it was just a black and white document, in 20 seconds you’d be struggling getting that information.” (GP 5, feasibility study)
The training session highlighted to this GP that the document did not take as long to read as they had thought, and that the colour-coded format enabled them to obtain relevant information very quickly. Other GPs commented that they felt slightly awkward the first few times, as they were unused to greeting the patient with a synopsis of their problem, but this got easier over time.
Patient perspective
In considering acceptability from the patient perspective, it is necessary to make a distinction between the acceptability of the intervention, and the acceptability of the research. The first half of the baseline questionnaire was the pre-consultation form completed as part of the intervention. The second half was additional questions completed to inform the research. Some feasibility study patients commented that the questionnaire felt too long and repetitive.
“It just did feel a bit long, and a bit too many questions – I think some people would, maybe, give up half-way through ‘cause I almost did.” (Patient 26, feasibility study)
Patient 26 felt the questionnaire length should be reduced. However, in making this assessment they did not distinguish between the part of the questionnaire which was necessary for the intervention and the part necessary for research. Of the patients interviewed in the final round of the intervention development phase, all said that it was very straightforward to complete. No-one mentioned questionnaire length and three explicitly mentioned that it was quick to complete:
“it was very simple to follow. I didn’t have any problems understanding and it didn’t take long at all.” (Patient 12, intervention development study round 3)
“I think it only took me probably about 10 minutes, just reading through and writing it down. Yeah, it was very good.” (Patient 9, intervention development study round 3)
“I found it quite straightforward to use but I didn’t take very long. I understood the questions.” (Patient 11, intervention development study round 3)
These quotes suggest that it was the research part of the questionnaire that patients found long and repetitive: patients did not find the questionnaire too long in the intervention development phase (when it only included the questions required for the intervention), but some did in the feasibility study (when it also included the questions for research purposes).
The pre-consultation form had relatively good face validity. When asked if they thought the questions were relevant and informed their consultation, most patients said yes.
“Yeah, I was really pleased to get it […] I had to write all my symptoms down, what I was feeling, everything else, so that could be read through by a health professional, and then they can ring me and just kind of go through exactly what was happening. What went through my head was ‘what a brilliant idea. This is going to really help me’.” (Patient 28, feasibility study)
This patient was pleased to receive the form and felt when completing it that it would certainly help her consultation.
Other patients found the form more face valid for the first half of the questions (on pain, physical symptoms, life effects, mood and health concerns) than for the second half (on health knowledge, support, adherence, healthy lifestyle and confidence in plan). One patient suggested the form was “mixing two things”: the short-term issues the patient wanted to deal with in that consultation and the longer-term issues the patient may or may not want to raise in the consultation, and this hampered face validity when they were completing the form. A few patients with long-term chronic conditions found it difficult to complete the form when they had ongoing issues they didn’t want to discuss in that consultation:
“Well, I wasn’t going to the GP for anything to do with pain, yet I was being asked if I was in pain or not and I found that quite difficult because I have this degenerated disc in my back, I always have pain in my back, but I’ve managed it and it’s absolutely fine. It’s not what I was going to the GP about, so it was like is this question relevant to me, do I need to be answering it? It was a bit confusing really.” (Patient 30, feasibility study)
This patient was not sure whether to indicate pain on her form as it wasn’t the reason for her attendance.
Some patients had problems with specific questions on the form. One patient did not like being asked about whether they needed support with leading a healthy lifestyle:
“I just think well ‘hold on’ you know ‘I’ve got a hip pain, why do I need to tell you if I drink alcohol’, you know. […] And I just feel it’s entered into boxes and these boxes are then stored against you.” (Patient 2, intervention development study, round 1)
Some patients were confused by the final question “how confident are you that you are on the right path to dealing with your health problems?”. Mainly referring to this question, one patient said:
“I thought some of the questions were a little bit ambiguous like that – didn’t know if that was deliberate, or not. So, I just answered them as they were, rather than trying to read everything into ‘em.” (Patient 21, feasibility study)
Despite these problems with face validity, all patients who completed the form found it sufficiently face valid to complete; all said they would be happy to complete it again; and for the majority, face validity was strong.
Comparison with routinely used electronic triage forms
Both patients and GPs drew comparisons with the practice electronic triage forms, which were mandated by the NHS Long Term Plan. Some GPs commented on this without being prompted:
“It’s much more straightforward than some of the things we do in terms of the [online triage form] and things at the moment [...] So, I think people [GPs] wouldn’t have a problem looking at a shorter summary, particularly looking at what people were interested to talk about.” (GP 2, feasibility study)
“I was pleasantly surprised by it actually in terms of the sort of initial questionnaire I suppose I sort of – having had quite a lot of experience of things like [practice electronic triage form] … getting a large volume of information most of which isn’t that helpful and it was, you know I was surprised at it, that it was pretty quick to go through and actually I think you probably were saving time” (GP 7, feasibility study)
These two GPs commented that the COAC pre-consultation form was easier to process and more useful than the practice electronic triage forms. Some GPs felt the colour-coded fixed format traffic light system could be usefully transferred to the triage form.
“It’s certainly much quicker for a clinician to look at than an [practice electronic triage system] is. You just really want your eyes to go straight to, what it is that matters, and the traffic light system enables you to do that and getting engaged.” (GP 1, feasibility study)
Patients also commented that the form was easier to complete than the practice online triage form:
“initially I was a little bit cynical because I just thought, ‘Oh’, because the surgery has the form that you can fill in. I don’t know what it’s called […] and that’s a very longwinded thing you just fill in. I just thought, oh, it’s just like one of those things. It’s not really going to be very helpful […] but actually, when I went to my appointment and realised that, actually, it had actually been read, I think it’d been really useful. I was very, very pleasantly surprised.” (Patient 23, feasibility study)
Patient 23 commented that they were initially “cynical” because of their experience of completing the practice online triage form, but found the COAC form much more helpful. Another patient agreed and pointed out that the triage element of the other form made it difficult to fill in:
“The trouble is, at the moment, if I have something and ask for an appointment with the doctor, I can’t get it, unless it’s an emergency, without actually going online and after going through […] a set of about 20 questions – 25 questions, anyway. It starts right at the very top – ‘Are you having a heart attack or bleeding to excess?!’ and it widdles its way down – like a triage thing […] What I liked about the survey that you sent – a lot of it was your underlying position on things. I can see it’s a bit more deeper than that.” (Patient 21, feasibility study)
These quotes suggest that, despite issues with face validity for a minority of patients, the COAC form was still more acceptable to the patients who completed it than the practice’s online triage form. This may be due not just to the format, but to the fact that patients were given a choice about completing the COAC pre-consultation form, whereas the electronic triage form was used by some practices as a gateway to patients seeing a GP53.
GP perspective
Generating the summary report was straightforward and acceptable to GPs. GPs liked the result, both in terms of content and formatting, and, apart from minor technical problems, found it simple to produce.
“I think it’s a really nice, formatted document. You know I like that formatting, I like the way that we can just text it to them, then and there, you know or just after the consultation. Yes, no I think it’s all good.” (GP5, intervention development study, round 3)
“It was downloaded onto one of my FPs, so I could just double-click, and it would come up. So then it’s just a matter of saving and sending really, exporting and sending so, simple.” (GP 4, feasibility study)
As with the pre-consultation form, GPs felt their use of the summary report improved with practice.
“There were a couple of times I forgot at what point it was going to come up and I forgot that I’d written so I had to save the consult for it to pop up and I was sort of sat there waiting for it to form a document but it hadn’t because I hadn’t pressed the button yet! But otherwise, yeah it was fairly straightforward.” (GP 7, feasibility study)
Writing the report in patient-friendly language, sending it to the patient and reflecting more on the follow-up in order to do this all take time. One GP pointed out that, although this was only a few minutes, it could add up if used for many patients:
“Not ages to do but it did take longer to do than not doing it [laugh] and it took longer than just sending the patient a text which is what I would normally do if I felt they needed more follow-up. So another layer of admin kind of hassle to do isn’t it. And you’ve made it as simple as possible to be but actually at the end of the day, it’s just another thing to do and our days full of too many things to do already.” (GP 3, feasibility study)
“I think it’s a wonderful intervention […] My main worry is the time involved. […] I know it only takes a few minutes but it’s a few minutes that we tend not to have [laugh].” (GP5, intervention development study round 3)
These GPs both felt that although production of the report took very little time, in the current time-pressured context this was still significant. Another GP pointed out that it was important to select the patients who would benefit most from receiving the report:
“some consultations you just have to invest a little bit of extra time when they are complex […] I think personally I felt it was time really well spent […] going forward I would be wanting to use it irregularly, so it's not a huge amount of time in the day overall.” (GP5, intervention development study round 3)
Only one GP (GP 2) thought completion of the summary report did not necessarily take any more time. This GP explained that they typed the report as they went along and used it in consultations that were complex. They also pointed out that, had they not completed a summary, they might have spent more time verbally summarising to the patient and ensuring they understood. However, this GP tended to write shorter summary reports than the other GPs, and at least one of their patients found the summary report too brief to be useful. It therefore seems clear that, if the summary report is detailed enough to be useful to patients, it will take GPs a few minutes more per consultation.
Patient perspective
The summary report was very acceptable to patients. GPs asked patients if they would like to receive this, and the majority of patients readily accepted. Patients were unanimously confident in opening the text, because they had already been informed by their GP that they would receive it and because it came in an SMS from the practice. Patients had to put in their date of birth to access the summary report, which felt secure to them, and left them in no doubt that the message had come from the practice.
“If you’ve got something like that written down afterwards that you can read after you come off the phone, it’s quite handy.” (Patient 26, feasibility study)
“I think the text with the summary came, I think, the day after or something like that. So, it wasn’t on the same day […] And so it was an, oh, yeah, that’s really quite helpful. […] If every GP did that, I think that would be quite helpful to be honest.” (Patient 28, feasibility study)
“I thought the whole thing was a great idea. It was good to be able to forewarn the doctor what I was thinking about and it was useful to have that feedback afterwards to remind myself of what he’d said.” (Patient 41, feasibility study)
“It’s quite useful either to have a rough copy of what you’ve said or have somebody with you who is there to listen, especially if you’re worried about whatever it is that’s taken you there. So yeah, I hadn’t seen this particular doctor before and I was surprised and delighted when she printed out […] It’s quite obvious really, I think why haven’t we been doing this for years?” (Patient 40, feasibility study)
Out of all the patients interviewed, only two did not find the summary report useful. Both these patients felt they had a poor consultation and were generally unhappy with their GP practice. They felt there was a lack of detail in the report and this did not compensate for the poor consultation:
“basically it was only a thing telling you how many tablets to take, it wasn’t like – this is the problem, this is what’s causing it, and this is what we suggest – all it was was the fact that I’m going to increase the steroids from [described doses] I was on it already and I knew I was in trouble because you know, I kept falling over so it’s, they’re just, they’re not – doctors are not as, how shall I say, they’re not as caring, put it that way, certainly where I live, they’re just a little bit, know their business like you know?” (Patient 37, feasibility study)
Although the majority of patients were happy with the content of the report, patients varied in how technically easy they found it to access. Some patients who accessed their patient record online suggested that the summary report should be accessible via Patient Access. Some patients were disappointed that the link was only valid for two weeks, and some would have preferred to receive the summary by email; most GPs did not offer this option as they could not send it directly from the patient record via email, but could do so via SMS. The accuRX functionality to send the report by email was rolled out during recruitment, and the recruiting GP in site 4 offered patients the choice between email and SMS.
Recruitment rates
This study exceeded the target recruitment rate of 15% of patients responding to the SMS. Some intervention arm patients completed the form to help communication in the consultation and others because they wanted to help the research study:
“I did that [completed the questionnaire] because I thought that it might well help the doctor/patient communication.” (Patient 22, feasibility study)
“I completed it because I think the research is really important! […] I’ll be honest with you – I didn’t actually expect the doctors to have read it!” (Patient 21, feasibility study)
“I thought it was a good idea and I thought it might improve communication […] I kind of hope that doing the questionnaire might be helpful but I’m a bit sceptical about whether they would actually have time to read the questionnaire before a consultation.” (Patient 33, feasibility study)
Patient 22 above completed the form because they thought the intervention would be beneficial. Patient 21 completed it to help the research. Patient 33 also completed it because they thought the intervention would be helpful, although they were not fully confident that the GP would read the form in advance.
The SMS method of recruitment may have put off some patients. We were unable to interview non-responders to confirm this, but some patients who completed the form had initially hesitated:
“at first I thought ‘Oh why have I got this?’ and I received it quite early in the morning. And I was a bit half asleep and I thought ‘Ooh this is a bit strange’ you know, so that made me read it more … because you get rogue things, and I just did wonder whether it was rogue.” (Patient 2, intervention development study round 1)
This patient explained that they read the message carefully before completing it as they were initially unsure about whether it was “rogue”. This may have also affected the non-responders.
Follow-up rates
As shown in Figure 6 and Figure 7, the follow-up rates were 43% in the intervention arm and 25% in the control arm. This was lower than had been anticipated. In later interviews, patients were asked about this. Patients who completed a follow-up normally did so because they wanted to give feedback about a good experience, or they understood that completing the questionnaire was important for the research study:
“I think the doctor had explained that it was some research that was going on at the moment. And I thought, well it makes sense that obviously you’ll want to follow up. I can understand that some people wouldn’t be interested but anything that helps, I’m happy to do.” (Patient 47, feasibility study, form positively affected consultation)
“I think I just wanted to help the study […] I know that GP appointments across the country are not very satisfactory to a lot of people so I thought any little helps. We should do what we can to improve our services.” (Patient 50, feasibility study, form positively affected consultation)
“I had a really good consultation, so I was happy to give feedback about it. And yes, so that's why I completed it, actually. [Laughs]” (Patient 25)
These three patients all explained that they completed the follow-up questionnaire because they had had a good experience of the intervention and wanted to give feedback or help the research. They all understood the purpose of the follow-up questionnaire.
Patients who did not complete the follow-up gave their reasons as: 1) didn’t understand the reason for it, 2) too lengthy and participant feels they have “done their bit” already, 3) they forgot, 4) they were not sure the follow-up text was genuine, and 5) they did not remember receiving a text.
Most patients who had a good experience of the consultation did not complete the follow-up because they forgot or didn’t understand its importance, not because they were unwilling. For example, patient 18 said it simply “slipped down the to do list”. Patient 33 did not understand why they received it:
“I don’t know what the point is then. If it’s similar to the one that I’ve already done, why are we doing another one?” (Patient 33 – completed follow-up questionnaire immediately after the interview)
When the purpose of the follow-up questionnaire was explained to patient 33 by the interviewer, they immediately completed it.
Patients who had a less positive experience of the intervention were sometimes less positive about completing the follow-up questionnaires. For example, one patient who thought their form had not been read was asked about the follow-up questionnaire and said:
“My feeling was that I’d been asked to complete this questionnaire before I went, and I don’t know, it might have taken me 15 minutes, or so. I can’t remember quite how long it took, but I thought to myself, ‘Okay. I’ve done my bit. I’m not sure that I’ve got the energy to respond to this as well.’ It felt a bit too time demanding.” (Patient 48, not sure form had been read)
Patient 48 was one of the few who did not feel their form had been read. They therefore felt that the pre-consultation form took up their time without giving them any direct benefit and therefore did not feel inclined to complete the follow-up.
We do not have data on why control arm patients were less likely to complete follow-up questionnaires. However, control arm patients did not receive an intervention so may, like patient 48, have felt less inclined to complete a follow-up or less likely to understand the importance of completing the follow up.
In designing this study, five progression criteria were set for taking it forward to an RCT. The criteria are shown in Table 10.
1. Perceived benefit / acceptability of the intervention
The first criterion was the perceived benefit/acceptability of the intervention. The two parts of the intervention were most useful for different types of patients and consultations, which suggests the intervention should be substantially modified, i.e. split into two separate interventions.
Perceived benefit / acceptability of the pre-consultation form
As an intervention in its own right, the pre-consultation form was acceptable to patients and to GPs with some modifications. However, some practice administrators felt that the process of sending the SMS invitations and attaching the reports to the patient record should be automated. Although other administrators felt that they could proceed to a full trial, the process required support from the study CI to assist with technical problems and this may not be sustainable in a full trial.
Perceived benefit / acceptability of the summary report
The summary report was acceptable to almost all patients. Some patients felt it would be better received through email or Patient Access than SMS; this modification could be made relatively easily. GPs found the summary report useful only for complex consultations and patients, and it would need to be thoroughly user-tested in multiple scenarios to avoid minor technical problems slowing down report production.
2. Recruitment rates
Recruitment rates were acceptable in both control and intervention arms of the study.
3. Completion rates of baseline patient data
As shown in Figure 6 and Figure 7, completion rates of baseline data were high. In the intervention arm, 118/122 (97%) of recruits completed the baseline data, and 115/122 (94%) provided sufficient data to score both the PCOQ and the EQ-5D. In the control arm, 72/72 (100%) of recruits completed the baseline data and 70/72 (97%) provided sufficient data to score both the PCOQ and the EQ-5D.
4. Clinician questionnaire completion
All (100%) clinicians completed their questionnaires, covering 91% of patients.
5. Follow-up rates
Completion of the follow-up questionnaire and consent for data sharing both fell well below the target of 80%. The point estimates of intervention and control (43% intervention vs 25% control) were not sufficiently precise to establish if the difference in follow-up was statistically significant between arms.
Given that criteria 1 and 5 were not met, an RCT is unlikely to be feasible without improvements to increase follow-up. However, as patients found the intervention acceptable (criterion 1) and recruitment rates were high (criterion 2), this should be explored further.
This feasibility study did not meet all of the criteria for progression to an RCT. Despite this, both the pre-consultation form and the summary report were feasible and useful for patients and GPs. However, because they were useful for different types of patients, they are best considered as two separate interventions and evaluated independently. The additional time needed to generate summary reports meant GPs preferred to use it selectively in patients most likely to benefit. The process of sending the SMS invitations to patients and sharing the pre-consultation questionnaire with GPs was technically challenging for administrators and required support from the study CI. This technology needs further development to allow closer integration with routine IT systems before further evaluation is carried out, which should be possible, given current technical solutions available.
Follow-up rates were low; only 36% of patients completed the follow-up questionnaires and 26% agreed to share data from the patient record. It is likely this is due to the online method of recruitment which, while efficient, may have meant the patients were unclear about the importance of returning the follow-up information. Alternative data collection approaches would therefore be required before an RCT is feasible.
This study was successfully implemented during a challenging time for UK general practice. GP Principal Investigators in the practices perceived it as a highly successful study and recruitment levels were high. There was no obligation on practices who had participated in the intervention development study to continue to the feasibility study. Despite this, all practices who participated in the first phase continued to the feasibility study, indicating the engagement of GPs with the intervention. We recruited a representative mix of practices across different levels of deprivation, practice sizes and ethnicity mixes. The training of GPs and administrators was very successful, and we conducted a high number of interviews, which provided ample information power for our analysis45. For the patients who consented, we were able to successfully extract data from the patient record and count repeat consultations within four and twelve weeks.
The study had some limitations. We were unable to interview non-responders, which meant we did not get the perspective of patients who did not complete the questionnaire. We found that the pre-consultation form was very acceptable to the patients we interviewed; however, there was a 26% response rate, and the remaining 74% may have found it either unacceptable (for example, they were unable to complete it, or found it too long) or lacking face validity (not relevant to the reason for their attendance). We did not interview patients in the control arm, although we did interview GPs and the administrator in the control arm. Although we aimed for practices with a mix of deprivation status, rurality, age and ethnicities, all practices had similarities based on being in the same clinical commissioning group; for example, all used the same online triage platform. Some of the implementation and context findings may, therefore, be less transferable to other areas of England. We had substantial attrition in the study but, because we did not collect baseline characteristics of patients responding, we are only able to compare the characteristics of patients who agreed to record sharing (and thus completed the follow-up data).
The pre-consultation form was acceptable to patients and GPs but less acceptable to the practice administration. Although a minority of patients had problems with face validity of certain questions, all said they would be happy to complete the form again, indicating a base level of face validity. Both patients and GPs mentioned that the pre-consultation form was more useful than the practice online triage form. Other studies have noted that patients can perceive electronic triage forms as a barrier55, so this may be not just due to the format, but because the patients were given a choice to complete the COAC pre-consultation form whereas the electronic triage form was seen in some cases as a gateway to their seeing a GP.
We found that fidelity to the intervention was high, but that GPs made adaptations to fit with normal practice. In common with other studies, we found that training and ongoing support56 were an essential part of ensuring fidelity. GPs referred to the training session in their interviews, commenting that it helped them appreciate the benefits of the intervention and the relatively short time commitment of the pre-consultation form, and gave them the opportunity to practise in advance.
The pre-consultation form proved most useful for patients with complex problems, mental health issues, health concerns, a concern they find difficult to voice, or who find consultations nerve-racking. It was also useful for patients who sometimes feel that the GP doesn’t listen to them or understand their problems. It was less useful when patients who completed it had a quick problem, or when they had underlying chronic problems that were unrelated to their consultation.
We identified six possible outcomes of the pre-consultation form in a new programme theory. The previous programme theory (Figure 3) had included reduction in re-consultation rates as an outcome. However, interviews did not show any evidence that this is a likely outcome. The six outcomes, with possible methods of data collection, are shown in Table 11. The outcomes with most qualitative evidence, which are likely to be the primary outcomes in a larger study, are shown at the top of the table.
The summary report was highly valued by most of the patients who received it, provided the GP included a sufficient level of detail. It was acceptable to GPs provided that they could choose when to complete it, and most said they would only issue it when they had given important safety-netting advice or complex follow-up steps to patients who had memory problems, health anxiety or language problems.
Some GPs found that they adapted the way in which they wrote their consultation notes to fit the new template. As Greenhalgh has pointed out, such adaptations are an essential feature of embedding a new intervention57, and GPs who adapted found it easier and more useful than GPs who did not; for example, when GPs adapted by completing part of the template during the consultation, they discussed it with patients and it became a mechanism for engaging the patient in their care plan. The task of sending the report at the end of the consultation then took only a minute or two. GPs who carried out their consultation as normal and completed the template at the end found it took longer and did not obtain these additional benefits.
We identified five possible outcomes of the summary form in a new programme theory. These five outcomes, with possible methods of data collection, are shown in Table 12. The outcomes with most qualitative evidence, likely to be the primary outcomes in a larger study, are shown at the top of the table (increased patient knowledge and empowerment and patient and family clear on follow-up).
| | Outcome | Measurement | Data source |
|---|---|---|---|
| 1 | Patient clear on safety-netting and follow-up required | Not collected in this study. Possible measures include single items (e.g. "I know what to do if my condition gets worse"). Further investigation into multi-item validated questionnaires on safety-netting is required. | Patient-reported |
| 2 | Increased patient knowledge and empowerment | PCOQ knowledge domain. Single items on satisfaction. Patient enablement instrument60. | Patient-reported |
| 3 | Patient reassured | Not collected in this study. Reassurance is a broad concept, encompassing relief of anxiety, receipt of a clear explanation and the feeling of being in safe hands, so multiple patient-reported measures are possible61. | Patient-reported |
| 4 | More planned and coordinated care pathway for patient | Not collected in this study. Possible measures include an administrative report on calls to the practice or the item from the LTC6 questionnaire47: "Do you think the support and care you receive is joined up and working for you?" | Admin-reported / patient-reported |
| 5 | Audit trail available for medico-legal purposes | Not collected in this study. Possible measures include GP report on whether the report was used to provide evidence in the case of complaints. | GP-reported |
Recruitment rates (recruits per SMS invitation sent) were 26%, higher than the target of 15%. Rates varied among practices. The practice with the lowest recruitment rate was site 2, at 17%. This site had numerous technical difficulties, including one batch of SMS invitations being sent without the EMIS number patients needed to enter, which will have reduced the proportion of patients able to complete the pre-consultation form.
Follow-up rates were 36% against a target of 80%. We asked patients about this in later interviews. Some patients in the intervention arm did not understand why they had been sent a follow-up form. One of the key reasons for recruitment and retention problems in RCTs is communication58. Patients in this study were recruited without any verbal communication prior to recruitment. The short duration of the intervention (over a single consultation) meant there was little time for patients to absorb the fact that they were part of a research study. A systematic review of similarly brief interventions which sought to manipulate patient-perceived levels of clinician empathy reported reasonable follow-up rates for most studies59. However, most of these studies captured the follow-up data immediately after the intervention while the patient was present. In our study, patients received the follow-up questionnaire 10 days after the consultation, which may have affected their response.
The proportion of patients who agreed to share data from the record was also low. Patients were asked if they were willing to share data at the end of the follow-up questionnaire. Given the low rates of follow-up questionnaire completion, in retrospect this may not have been the best design; requesting consent for data sharing at the end of the baseline questionnaire rather than at the end of the follow-up would have given every recruit the opportunity to consent and potentially increased the proportion of patients consenting to data sharing. Many people are happy to share fully anonymised and de-identified health data for research purposes60, so making this request at the outset could have substantially increased the proportion of recruits agreeing to share data. However, it could also have led to lower initial recruitment rates.
The patient-reported questionnaires had low levels of missing data. Although follow-up response rates were low, most patients who responded completed the entire questionnaire. Scores were within the range expected for patients in primary care for the PCOQ61, EQ-5D and CARE measure. The data extracted from the patient record was complete in terms of patient characteristics, apart from ethnicity, which had 8% missing data (this is common in routine datasets)62. The consultation record data was comprehensive and we were able to calculate repeat consultations.
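As an aside, the repeat-consultation counts described here are straightforward to derive from an extract of dated consultation records. The sketch below is a minimal illustration of that calculation, not the study’s actual extraction code; the column names, data layout and example values are assumptions, and dates are assumed to be already parsed as datetimes.

```python
# Minimal sketch: count consultations per patient within a window after the
# index (study) consultation. Column names are illustrative, not the study schema.
import pandas as pd

def count_repeat_consultations(consultations: pd.DataFrame,
                               index_dates: pd.DataFrame,
                               weeks: int) -> pd.Series:
    """Count consultations per patient within `weeks` weeks after the index date."""
    merged = consultations.merge(index_dates, on="patient_id")
    window = pd.Timedelta(weeks=weeks)
    in_window = (
        (merged["consultation_date"] > merged["index_date"])
        & (merged["consultation_date"] <= merged["index_date"] + window)
    )
    return merged.loc[in_window].groupby("patient_id").size()

# Example with dummy data:
consultations = pd.DataFrame({
    "patient_id": ["A", "A", "B"],
    "consultation_date": pd.to_datetime(["2021-07-10", "2021-09-01", "2021-07-20"]),
})
index_dates = pd.DataFrame({
    "patient_id": ["A", "B"],
    "index_date": pd.to_datetime(["2021-07-01", "2021-07-01"]),
})
print(count_repeat_consultations(consultations, index_dates, weeks=4))   # A: 1, B: 1
print(count_repeat_consultations(consultations, index_dates, weeks=12))  # A: 2, B: 1
```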
The point estimates of intervention and control (43% intervention vs 25% control) were not sufficiently precise to establish if there was differential attrition. Systematic reviews of interventions carried out over an episode of care comprising more than one interaction, with an interactive recruitment process, have tended not to show indication of differential attrition63,64. In contrast, COAC is a brief intervention, delivered in a single consultation. Intervention arm patients had an opportunity to ask questions in an interview and gained a greater understanding through the form being used in their consultation but control arm patients did not have this opportunity, so might have been expected to have a relatively lower rate of follow-up completion.
There is some evidence of differential attrition in studies with similar designs. For example, Little et al. (2015) carried out a cluster RCT in which GPs were trained in empathetic non-verbal communication. Patients were given the follow-up questionnaire immediately after the consultation and encouraged to complete it before they left, with an option to post it. Follow-up rates were 92% in the intervention arm and 73% in the control arm65. Although the authors did not report this as differential attrition, the overlap between the arms’ confidence intervals was similarly small to that in this study, and intervention arm patients may have been more engaged by the brief intervention than those in the control arm.
Further design improvements may be required for either the pre-consultation form or the closure report to be adopted more widely. The extent to which interventions are taken up depends to a great extent on relative advantage: whether the individuals involved perceive that the new intervention is better than what has gone before66. Both GPs and patients perceived a relative advantage for both the pre-consultation questionnaire and the consultation closure report, but this advantage was higher for patients; GPs saw advantages for their patients rather than direct advantages to themselves as clinicians. Non-adoption and abandonment of technological interventions in healthcare is often high, even when relative advantage is initially perceived, and this is often explained by features of the technology (such as usability or the occurrence of technical errors)57.

The relative difficulty of using the pre-consultation form from an administrative perspective suggests abandonment would be likely without automation of the SMS sending and report attaching. Through partner companies, EMIS Web allows an automated message to be sent to patients on appointment booking67. This could be investigated further to see whether the survey link and EMIS number could be added to the appointment reminder message, negating the need for a daily batch SMS invitation to patients. Attaching the reports was identified as equally important for automation but may be more difficult to implement. Some GP electronic triage systems are not integrated with EMIS and require an administrator to manually copy and paste, or attach, the online consultation to the patient record. General practice allows a 24-hour window to respond to an online consultation, so a practice administrator can upload or copy the online consultations as a single daily task; the shorter window for the COAC form made this administrative task too difficult. Private companies are evolving their functionality rapidly, with approved Application Programming Interface (API) partner organisations able to interface with the EMIS Web clinical system68. Automation of this element of the COAC intervention should, therefore, be possible, but it may require substantial input from private providers.
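To make the shape of such an automated flow concrete, the sketch below shows where each manual administrator step would sit if it were automated. It is purely hypothetical: none of the function names correspond to real EMIS Web, MJOG or accuRX APIs, the URL is a placeholder, and the stubs simply print what a real integration would do.

```python
# Hypothetical sketch of an automated COAC flow. All names are placeholders;
# they do not represent real EMIS Web, MJOG or accuRX API calls.

def send_sms(mobile_number: str, message: str) -> None:
    print(f"SMS to {mobile_number}: {message}")  # stand-in for a messaging integration

def attach_to_record(patient_id: str, report_pdf: bytes) -> None:
    print(f"Attached {len(report_pdf)} bytes to record {patient_id}")  # stand-in for a clinical-system integration

def on_appointment_booked(patient_id: str, mobile_number: str) -> None:
    """Step 1: the booking confirmation carries a personalised survey link,
    removing the need for a daily batch SMS sent by an administrator."""
    link = f"https://example-survey-host/coac?record={patient_id}"  # placeholder URL
    send_sms(mobile_number, f"Before your appointment, please tell us more: {link}")

def on_form_completed(patient_id: str, report_pdf: bytes) -> None:
    """Step 2: the completed pre-consultation report is filed automatically,
    so no one has to watch for completed forms through the day."""
    attach_to_record(patient_id, report_pdf)

# Example run with dummy values:
on_appointment_booked("1234567", "07700900000")
on_form_completed("1234567", b"%PDF-1.4 ...")
```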
For the consultation summary report, GPs found the process relatively simple, provided there were no technical malfunctions. Use of this form hinges on the decision of each individual GP, and any spread of it would require local champions to persuade their peers that this technology is effective, safe, and professionally appropriate69,70. Any future roll-out of this should include identification of such champions and thorough testing to ensure all technical problems are resolved before starting recruitment. The main improvement suggested by patients was that they should receive the report through the medium of their choice, which was often email. This was difficult for GPs at the start of the recruitment period, but by August / September 2021, accuRX had added email to their functionality, so email was as easy for GPs as SMS. Some patients who accessed their patient record online also suggested that the summary report should be accessible via Patient Access. There is a need to make the medical record useful and understandable to patients71,72 while ensuring it contains sufficient technical medical information71. The COAC summary report may facilitate this shift in recording information in the medical record as it allows a patient-friendly report to be generated and stored alongside more technical notes.
Based on the findings in this feasibility study, a number of changes to study design are recommended before a larger evaluation or RCT is carried out.
Firstly, COAC is not a single intervention, but rather two different interventions. These may be used together, but they may not both be useful in all circumstances, so they should be offered separately to patients and evaluated separately.
Secondly, the summary report is almost ready to share, but first needs to be configured in SNOMED CT73, with the user guide updated so that practice staff can make their own customisations to the free text. If the summary report were to be evaluated in its own right, patients would be recruited directly by GPs in the consultation, and the option of traditional postal follow-up patient-reported questionnaires would be required, as the group of patients who benefit most from this may not be highly digitally literate.
Thirdly, the pre-consultation form requires substantial technical modifications to automate the sending of SMS invitations and to integrate the pre-consultation form with the patient record. To maximise recruitment rates in a future RCT, communications should be sent out before commencing recruitment to inform patients about the intervention. Patients should be asked to consent to sharing their phone number and data from the record at the start of the pre-consultation questionnaire, as a prerequisite to entering the study. This should substantially reduce attrition, especially if patients are followed up on this phone number by researchers. As identified in Table 11, two possible outcomes of the pre-consultation form could be assessed using data extracted from the patient record (“issues discussed that might not have been otherwise” and “wider range of tailored support offered to patients”). Using patient record data negates the need for long follow-up questionnaires and removes the problem of attrition entirely. Customised code lists should be developed in SNOMED CT to measure these outcomes (a minimal sketch of this kind of code-list count follows these recommendations). However, given we were unable to demonstrate that GPs code sufficiently consistently for these code lists to work, patient-reported outcomes should be captured as well.
Finally, although a cluster RCT is still a potentially appropriate design for each study, any future RCT should commence with an internal pilot to ensure that the changes recommended above are feasible, with stop-go criteria set based on acceptability, recruitment and retention.
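As a concrete illustration of the code-list approach mentioned in the third recommendation, the sketch below counts coded consultation entries per patient against a code list. It is a minimal sketch under stated assumptions: the SNOMED CT codes shown are dummies rather than a validated code list, and the column names and data layout are hypothetical, not the study’s data schema.

```python
# Minimal sketch: count coded entries per patient against a (dummy) code list.
# Codes and column names are placeholders, not a validated SNOMED CT code list.
import pandas as pd

SUPPORT_CODE_LIST = {"111111111", "222222222"}  # dummy concept IDs for illustration only

def count_coded_outcome(coded_entries: pd.DataFrame, code_list: set[str]) -> pd.Series:
    """Per-patient count of consultation entries whose code is in the code list."""
    flagged = coded_entries[coded_entries["snomed_code"].isin(code_list)]
    return flagged.groupby("patient_id").size()

# Example usage with dummy data:
coded_entries = pd.DataFrame({
    "patient_id": ["A", "A", "B"],
    "snomed_code": ["111111111", "999999999", "222222222"],
})
print(count_coded_outcome(coded_entries, SUPPORT_CODE_LIST))  # A: 1, B: 1
```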
Both the pre-consultation form and the summary report showed important potential benefits. They should be considered as separate interventions and evaluated independently. The technology to send pre-consultation forms needs further development to allow closer integration with routine IT systems; this should be possible given the technical solutions currently available.
The additional time needed to generate summary reports meant GPs preferred to use them selectively, for the patients most likely to benefit. Collecting outcome data using online questionnaires was efficient but associated with high attrition, so improvements to the study design are needed before a full RCT is feasible.
The underlying data are stored in three repositories: one for the qualitative data, one for the quantitative data and one for the extended data, as follows:
Qualitative data (restricted)
Researchers can apply for this data via a form on the repository:
https://doi.org/10.5523/bris.1ljvagu1sigje2duqj3ube527y (restricted access)74.
This project contains the qualitative data transcripts for the COAC feasibility study, where participants agreed that these could be shared with bona fide researchers outside the Bristol research team. Information about each transcript is listed as follows:
Transcript ID: The name of the transcript in the folder. The name consists of:
* a participant identifier
* the type of participant (patient, clinician or administrator)
* the site (1 to 4 – this was not reported in the paper for reasons of anonymity)
* the date of the interview
Participant identifier used in papers: This is the identifier used in this paper.
The folder also contains the consent form. All patients in this study consented to point 7 in this form: "I understand that after the study my anonymised data will be made available to bona fide researchers for future research studies, and it will not be possible to identify me from these data. If I agree to this, my data will be held for twenty years."
This dataset has an access level Restricted, which means it is not available via direct download but must be requested. Research participants did not give explicit consent to share this data as open data but agreed that it should be made available to approved bona fide researchers only, after their host institution has signed a Data Access Agreement. In order to request access to this data please complete the data request form available from the link above. We will consider any application from any organisation where an established research governance process is in place.
Data are available under a Non-Commercial Government Licence for public sector information.
Quantitative data (restricted)
Researchers can apply for this data via a form on the repository:
https://doi.org/10.5523/bris.3tncqi96uvlkg2451y8qfpjlfx75.
This dataset contains the patient-reported outcome data and the data extracted from the patient record for those patients who consented to data sharing beyond the research team. It also contains the CONSORT checklist for the study76.
This dataset has an access level Restricted, which means it is not available via direct download but must be requested. Research participants did not give explicit consent to share this data as open data but agreed that it should be made available to approved bona fide researchers only, after their host institution has signed a Data Access Agreement. In order to request access to this data please complete the data request form available from the link above. We will consider any application from any organisation where an established research governance process is in place.
Data are available under a Non-Commercial Government Licence for public sector information.
Extended data
University of Bristol: COAC Study Extended Dataset, https://doi.org/10.5523/bris.386dsq2e4iii225ms7du8pd5jq77.
This project contains the following extended data:
1. COAC-pre-consultationForm.doc
This file contains screenshots of the pre-consultation form which patients responded to in the COAC Study.
2. COACStudy-pre-consultationform-TableOfChanges.doc
This file contains a detailed table of changes made to the pre-consultation form in the COAC Intervention Study. Patients who are quoted in this table all consented to the first six points in the consent form included in this folder.
3. COACStudy-SummaryReport-TableOfChanges.doc
This file contains a detailed table of changes made to the summary report in the COAC Intervention Study. Patients who are quoted in this table all consented to the first six points in the consent form included in this folder.
4. COACStudy-TopicGuides.doc
This file contains the interview topic guides for the COAC Study.
5. PatientConsent-Interviewsv1.3.doc
This is the patient consent form used for the COAC Study.
6. PatientInfoInterviewStudy2v1.4.doc
This is the patient information leaflet given to patients interviewed for the COAC Study.
7. COREQ checklist – pre-consultation form
This is a checklist for the COREQ reporting guidelines, demonstrating how they were followed in collecting and analysing data about the pre-consultation form.
8. COREQ checklist – summary report
This is a checklist for the COREQ reporting guidelines, demonstrating how they were followed in collecting and analysing data about the summary report.
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
This paper has followed the CONSORT 2010 checklist of information to include when reporting a pilot or feasibility trial76. This is available in the quantitative data repository (see underlying data section).
We would like to acknowledge the valuable input of the PPI group, including Tom Yardley, Christina Stokes, Anna Ferguson-Montague, Fatima Ahmed, Mary Ellis and the other members who helped to design the intervention and interpret the findings. We would like to thank Grace Mander from One Care for assisting with development of the EMIS searches, Anna Rushowski, Andrew Appleton and Lakwinder Anota for helping design the administrative process and user-guide for the pre-consultation form, Victoria Wilson for assisting with co-ordination of the PPI meetings, and Calum Masterson from the CRN for assistance in recruiting practices. Thanks to advisory group members, Pete Bower, Gary Abel, Julia Frost, Louise Ting and Joanne Protheroe for providing advice and support. Finally, we are very grateful to all the recruiting GPs and patient participants whose input was essential, but cannot be acknowledged by name for the purposes of maintaining anonymity.
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Not applicable
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Health services research; pilot/feasibility studies; mixed methods
Is the work clearly and accurately presented and does it cite the current literature?
Yes
Is the study design appropriate and is the work technically sound?
Yes
Are sufficient details of methods and analysis provided to allow replication by others?
Yes
If applicable, is the statistical analysis and its interpretation appropriate?
Not applicable
Are all the source data underlying the results available to ensure full reproducibility?
Yes
Are the conclusions drawn adequately supported by the results?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Public Health, NHS Health Check, digital health check, health inclusion