Keywords
Remote monitoring; technology; technology-enabled care; health care; social care; evaluation; rapid evaluation
There is considerable interest in technology-enabled remote monitoring in the UK. The aim is to respond to system pressures and improve access, experience and quality of care. There is an urgent need for process, outcome and impact evaluations of interventions at various stages of development and implementation to address evidence gaps around adoption, spread, sustainability and inequalities.
DECIDE (Digitally Enabled Care in Diverse Environments) is a centre for rapid evaluation of technology-enabled remote monitoring funded by the National Institute for Health and Care Research (2023 to 2026). It aims to support service users, service commissioners and providers of remote monitoring services, to enable high quality care. Example questions include: Is the technology-enabled remote monitoring innovation needed and, if so, for whom? How are technology-enabled care pathways implemented, and what are associated outcomes and impacts? What are the opportunities and challenges for sustainability, scale-up and spread?
A range of qualitative, quantitative and economic methods will be used. Exact methods and questions will be dependent on the focus, scope and scale of each evaluation. Evaluations will be informed by relevant theory, including the Non-Adoption, Abandonment and the challenges to Spread, Scale-up and Sustainability of technological innovation in health and care (NASSS) framework.
A User Advisory Group and External Steering Committee, both with diverse voices, will help shape evaluation design, implementation and dissemination. Project-led dissemination will ensure timely sharing of insights and support impact.
Evaluations will advance understanding of when and for whom technology-enabled remote monitoring innovation is needed; how it works and how factors related to the intervention, implementation process and wider context influence adoption; associated outcomes and impacts, whether and how these tackle inequalities; and potential challenges to scale and spread. We aim to inform decision-making by policymakers, commissioners, providers, patients/service users and researchers.
There is increasing interest in whether and how technology can help people get the right care, at the right time and in the right place. One type of technology which may help with this is technology used for remote monitoring, which can include mobile applications, personal or medical devices (e.g. activity trackers) and so-called ‘virtual ward’ or ‘hospital at home’ services. Use of such technology has the potential to improve the quality of care, reduce hospital stays, support independent living, prevent transmission of disease and allow people to access care at a safe and suitable time and location.
Despite increased use of remote monitoring across different people and services, there is still a lot we don’t know. Do different types of monitoring work as they are meant to? Could their use be spread effectively across health and care services? Who benefits most from them, and what is needed to ensure they do not worsen access to or quality of care for some people?
The National Institute for Health and Care Research has funded a team from the University of Oxford and RAND Europe for three years to investigate the most promising types of remote monitoring. Patients, service users and carers are helping us decide how the research is done so it is as effective as possible. We have an experienced Patient and Public Involvement and Engagement lead and knowledgeable and engaged researchers. They are supported by a steering committee of experts from health and care services, policymaking and those who develop new technologies and services.
Together we will ensure that we evaluate important remote monitoring innovations, collect and analyse findings in a high-quality and timely way, communicate what we have learned in ways which people understand and can use, and bring together diverse groups to share knowledge and experiences.
There is increased interest in how we can improve the way health and social care is delivered, and how technology can help to ensure that people get the right care, at the right time, in the right place and in the right way. This is important in efforts to improve people’s health, wellbeing and experiences of care in fair and equitable ways. It also matters for tackling the increased pressures that health and care services face, such as long waiting times for patients to access care, workforce shortages and limited capacity in hospitals.
Technology-enabled remote monitoring is being introduced to improve care quality and service user experience, reduce unnecessary admissions to hospital and A&E, reduce length of stay in hospitals, prevent infection transmission and allow people to access care safely, at a suitable time and location. However, the current evidence base about the effectiveness and cost-effectiveness of technology-enabled remote monitoring is limited. As the pace of policy and innovation in this field increases, there is an urgent need to rapidly evaluate emerging innovations and provide the evidence base that can best inform technology-enabled remote monitoring services across the UK.
This protocol paper reports the establishment of a rapid evaluation centre, DECIDE (Digitally Enabled Care in Diverse Environments), focusing on technology-enabled remote monitoring as the core of its work. It first summarises the research and policy context for technology-enabled remote monitoring in UK health and social care, before setting out the approach and methods to be used for rapid evaluation.
Technology-enabled remote monitoring involves the use of technology, devices or apps to support patients to actively monitor and manage their health or long term conditions (e.g. asthma, heart failure). It enables the remote exchange of information, primarily between a patient or service user and a health or care professional, to assist in diagnosing or actively monitoring health or care status or promoting good health and care. Remote exchange is typically supported by consultations or other interactions (e.g. messaging) that facilitate sense-making of that information, enabling systematic review and monitoring, accompanied by (where needed) changes in the care being received.
Care enabled by technology and the allied use of remote monitoring is a key part of the policy vision of the NHS for improving value and access for patients, and recovery of the health and care system1–3. There is a significant policy and service-level push to develop technology-enabled care, and to use remote monitoring in ways that are meaningful and effective. Remote monitoring services range from use of apps, personal or medical devices (e.g. pulse oximeters, activity trackers) to support care, through to use of smartphones to share vital sign readings (e.g. blood pressure) with care providers, and use of technology to enable patients to get hospital standard care safely while remaining in their own home. Remote monitoring is being tested and used across the UK for a range of conditions, such as cardiovascular and respiratory diseases4.
The COVID-19 pandemic greatly increased the need and motivation to make use of such tools, and prompted the reduction of implementation barriers (e.g. approvals for remote monitoring technology)5. Various interventions launched across settings and service user populations during the pandemic focussed on protecting vulnerable and multimorbid patients (e.g. use of implantables for remote management of people with heart failure6, monitoring real time prevalence of COVID-19 in patients with multiple sclerosis7 or diabetes8, and developing remote neurorehabilitation services9) or monitoring spread of the disease itself. However, systematic use of remote monitoring interventions is fairly new. Published evidence is nascent and heterogeneous, the latter reflecting a breadth in service development and implementation practices (see Box 1). To date, technology-enabled remote monitoring interventions have typically been small scale, limited to a specific clinical or geographic focus, and provided limited evidence about outcomes that are meaningful for care providers and patients. This leaves those making decisions about the wider scaling and adoption of such interventions unclear about the potential benefits and costs of doing so.
Using technology to support remote monitoring in acute care in the home setting
The literature in this area is small but rapidly growing, typically drawing on small-scale feasibility or pilot studies conducted within a single organisation or clinical area (e.g. post kidney transplant10, post stroke arm/hand rehabilitation11, pre/post joint replacement12, chemotherapy toxicity support for specific cancers13, palliative care14 and atrial fibrillation15,16). The use of technologies varies, from use of a smartphone application to read home self-testing (e.g. urine testing), through to use of multiple technologies (e.g. motion sensors, smartphone applications and smartwatches) to support home monitoring. With the exception of one large randomised controlled trial (RCT)13, which focused on patient self-reported outcomes related to chemotherapy (e.g. quality of life, side effects, symptom burden, anxiety, self-efficacy and work limitations), studies broadly focus on recruitment, adherence, retention and acceptability. Virtual wards (also known as ‘hospital at home’) are being adopted across the NHS, though with limited evidence about impact and outcomes17,18, or about the supportive adoption context needed to enable them (e.g. staff, patient and carer training and support, workforce planning, and data and analytics infrastructure)19–21. The extent of technology-enabled remote monitoring (as opposed to remote monitoring not aided by a technology platform) in virtual wards, and the sociotechnical work required to make that happen, is unknown.
Using technology-enabled remote monitoring for care of people with long term and chronic conditions
A wide range of long term and chronic condition services have engaged with technology-enabled remote monitoring and been subject to evaluation, including (but not limited to) epilepsy22, cardiovascular conditions23–25, COPD26, diabetes27, paediatric sleep disorders28, mental health conditions29,30, dementia31, oncology32, Parkinson's disease33, rheumatoid arthritis34,35 and hypertension36. More work has been done in some areas than others, and much of the current evidence base is drawn from proof-of-concept studies and small-scale pilots which do not provide evidence of clinical efficacy, cost effectiveness, integration into workflow practices or long term patient acceptability. There is wide variation in evaluation design, scope and scale; most focus on telemonitoring systems that collect patient information via computers, tablets or dedicated devices, including biosensors/wearables, and transfer this to a web-based server. A small number collect patient data via SMS text message or provide online education. Issues of equity and access notwithstanding, evidence indicates high adherence rates, with patients often finding remote monitoring acceptable and reporting increased knowledge that triggers action37, and improved self-management. There is limited evidence reflecting positive impact of remote monitoring on self-reported quality of life. Qualitative evidence suggests that patients worry about the impact on interpersonal connections (e.g. fear of being ‘lost in data’) and the additional burden of technology-enabled remote monitoring (e.g. inputting data, out-of-pocket costs)38. User-centred development of technology-enabled remote monitoring systems and workflow implications for clinical teams are often neglected39,40.
Remote monitoring using technology to support social and home care
The evidence base informing understanding of UK-based technology-enabled remote monitoring to support social and home care is extremely limited41,42. Technologies in this space include ‘distributed systems’ (combining data from multiple home sensors and Internet of Things), hand-held/mobile devices (data capture through software/mobile devices, including tablets/assistive robots), and wearables (GPS and accelerometers) aimed at supporting assessment, short term reablement and longer-term monitoring and review. Although technology-enabled remote monitoring appears to offer potential to better target care services, provide more timely information to identify and support care needs, and provide more proactive care for people (e.g. monitoring people post-hospital discharge to understand care needs and target resources), there has been limited evaluation12,41–47. Reviews in this space to date typically focus on remote monitoring within the context of cognitive or neurological decline and are largely technology-centric, aiming to understand technical advances and capabilities, with a particular focus on machine-learning43–46. Almost all are confined to a specific research setting, consisting of feasibility studies with a technical focus on establishing correlations between machine learning/algorithm inferences and trusted/standard measures (e.g. cognitive assessments, paper diaries), as well as small scale/case study trials to evidence psychosocial outcomes (e.g. quality of life, reassurance). A small number of qualitative studies have been conducted to explore implementation challenges within the context of social care, which highlight some of the system-wide challenges to implementation and use48–50. However, there is a notable gap in evidence of on-the-ground experiences of implementing this technology, particularly how different stakeholders respond to the information captured by home sensors. 
There has also been little attention paid to the views and lived experience of service users provided with such technology, as well as issues surrounding disadvantaged groups and digital exclusion.
Growing interest in technology-enabled remote monitoring is accompanied by rising concerns over inequalities between groups in the population and concerns about potential digital exclusion1,2,51,52. Such inequalities are deep-rooted and widening, leading to disparate outcomes, varied access to services, and poor experiences of care, particularly for certain groups (e.g. sharing specific characteristics such as those related to race, gender, ethnicity, disability, socioeconomic status/deprivation, geography53–55). This is compounded by issues of digital exclusion, where some people (e.g. with learning or physical disabilities) have unequal access and capacity to use technologies that are increasingly essential to fully participate in health and social care51.
Rapid evaluation can provide timely, rigorous and evidence-based insights to inform decision-making on the adoption, implementation and use of technology to support remote monitoring56–59. A number of rapid evaluation teams exist. These largely focus on service innovations in health and social care, with a direct policy focus (often linked to a centrally-steered remote monitoring intervention) and the aim of rapidly foregrounding evidence that can inform timely decision-making60. There has already been some evaluation and learning related to technology-enabled remote monitoring by these rapid evaluation teams. Examples include evaluations on the use of pulse oximetry at home56, and in care homes61; the development of remote monitoring care models, including virtual wards18,56 and the use of remote AI monitoring in social care48.
Across existing rapid evaluations to date, assessment of technology-enabled remote monitoring intended to support health and social care has reported mixed results. Use of oximeters during the COVID-19 pandemic provided care homes with a level of reassurance, but knowledge of the additional support available through the NHS was limited61. Evaluation of the wider use of oximeters to monitor patients at home56 indicated lower mortality, but this was not statistically significant. Low numbers and high variability across the service meant that cost-effectiveness and long term impact were not assessed. Exploration of the use of in-home sensors to support social care struggled with recruitment, exacerbated by COVID-19, and found that limitations in digital infrastructure, system complexity and general service pressure led to challenges with implementation and disappointing outcomes62.
In sum, while there is growing potential to use technology-enabled remote monitoring to benefit more people and services, there has been limited high quality, peer reviewed published evaluation to date. There are opportunities to bring what evidence there is together in the context of the current policy push for technology-enabled remote monitoring. For instance, there is a lot we do not know about whether different types of remote monitoring work as intended; when they work and for whom, why and how; how they can be appropriately spread; and what is needed to make sure they are helpful and do not disadvantage access to and quality of care for some people. There is a need for process, outcome and impact evaluations of interventions at various stages of development and implementation to address questions around adoption, spread, sustainability and inequalities. There is also a need to consider the varied contexts in which technology-enabled interventions are intended to land, and to pay attention to the often ‘rugged’ landscapes63 of health and social care, characterised by emergence, system complexity and unpredictability, which require attention to the rich and complex factors associated with good outcomes.
Funded by the National Institute for Health and Care Research, Health and Social Care Delivery Research (NIHR HSDR) programme, DECIDE has been established as a centre for rapid evaluation with a dedicated focus on technology-enabled remote monitoring. A collaboration across the University of Oxford and RAND Europe, it brings together topic expertise in health and care innovation, with a track record of delivering cross-disciplinary and methodologically robust rapid evaluations. The aim is to generate a strong evidence base on the potential and limitations of technology-enabled remote monitoring in health and care, and support patients, service users, carers and those who commission remote monitoring services to enable high quality care and ensure decision-makers can make better-informed decisions.
DECIDE’s operational objectives are to:
a) Conduct formative, mixed method evaluation of remote monitoring interventions to help inform their implementation and adaptation, sharing insights and facilitating knowledge sharing across stakeholders.
b) Perform summative assessment of the potential of remote monitoring interventions to improve patient and service outcomes, as well as produce economic benefits.
c) Refine existing and inform new remote monitoring models in health and care through co-production and with a specific emphasis on inclusion, diversity and equity.
d) Draw transferable learning on the development, implementation and mainstreaming of technology-enabled remote monitoring in health and care.
Meaningful involvement of service users, carers, patients and members of the public is central to DECIDE, and will enable focused and relevant research and user-friendly outputs. We plan to involve service users throughout and in three key areas. Firstly, via a User Advisory Group with a lay chair and representation from Carers UK (carersuk.org/), National Voices (nationalvoices.org.uk/) and People Street (peoplestreet.net/). The Group is deliberately small, enabling rapid engagement and ensuring that DECIDE evaluates technology-enabled remote monitoring solutions that matter, asks the right questions, collects the right information appropriately, engages the right people, and interprets, communicates and shares learning effectively. To ensure that DECIDE holds issues of health and digital inequality as a central thread, User Advisory Group members co-produced the underpinning principles in Box 3.
Secondly, we will involve service users in dedicated patient and public involvement and engagement (PPIE) activities related to individual evaluations. We will invite at least one User Advisory Group member to join each evaluation PPIE group and expand this to ensure diverse representation, include specific areas of expertise/experience in health and social care, and ensure that we conduct evaluation ‘with’ or ‘by’ people who use services (rather than ‘to’, ‘about’ or ‘for’ them). The involvement team will vary in size and scope depending on the design and requirements of the evaluation. Evaluation-specific involvement activities are supported by a dedicated budget and are likely to include (but are not restricted to) advising on approaches to engage service users, design and delivery of participant activities and materials, and interpretation of findings to maximise dissemination across the whole community.
Finally, most, if not all, evaluations will involve consenting service users as participants, including, where relevant, both those who use technology-enabled remote monitoring and those who do not. Participant activities will generally involve generating data, which may include, for instance, participation in interviews, workshops and/or observations within each evaluation. Guided by DECIDE’s cross-cutting theme of inequalities (see below), we aim to ensure robust and (where relevant) diverse sampling and recruitment strategies so that evaluations engage with a range of participants and voices.
DECIDE uses rapid evaluation to enable timely evidence that can inform policy and practice relating to technology-enabled remote monitoring. Our approach is to provide rapid evaluation of selected interventions using robust methods and a theoretically informed approach, underpinned by a set of guiding principles (see Box 2), and with issues of equality, diversity and inclusion remaining central throughout.
• Asking the right questions about relevant, priority interventions through close engagement with policymakers and other stakeholders and building on the existing evidence base.
• Theoretically and methodologically rigorous evaluation using, and where appropriate adapting, tried and tested theoretical frameworks (e.g. NASSS64).
• Flexibility in evaluation design, supporting a range of scope, scale and complexity of remote monitoring interventions at diverse stages of development and adoption.
• Practically-oriented for usable insights in diverse and accessible formats, including de-implementation considerations.
• Attention to inclusiveness and inequalities to address the needs of diverse stakeholders and populations; ensuring that diverse voices are involved throughout.
• Consideration of future scenarios, including intervention sustainability, spread, scale up and environmental sustainability.
1. DIVERSE: Our oversight group will provide insights from a wide range of stakeholders including charities, care providers, and members of the public. For individual projects we will supplement our core oversight group with appropriate expertise, as relevant for that project
2. EQUITABLE ways of working: Participants of all types will be clearly informed of what will be expected and will be actively involved in co-creating expectations; work load will be defined and properly compensated; training will be provided
3. CONTEXTUAL: We will aim to be context sensitive and flexible. We are aware that different types of projects will require different levels and types of involvement, input and analysis. We will focus on relevance and intentional design, not quantity or tokenism
4. INFORMED: Communication will be clear and respectful at all times. We will be honest about what can and cannot be achieved and in what timeframes. We will be aware of and discuss the trade-offs between the need for speed and the need for granularity
5. DELIBERATIVE: We will be considered in our approach, engaging with institutional and lay contributors, and seeking their views and dialogue on what types of data and sources we should consult, with whom we should engage and how
6. ETHICAL: We will take into account a wide range of ethical issues, from questions of data equity and health inequalities to environmental impact and other forms of sustainability
DECIDE principles build on the wider rapid evaluation literature, which emerged from responses to humanitarian emergencies and unfolding crises65,66. The emphasis in rapid evaluation is on the ‘4Rs’ – rapid, responsive, relevant and rigorous57,67 – with the aim of maximising likelihood of impact of actionable findings in a short time frame (compared to traditional evaluation methods68–70). As is the case with DECIDE, use of the term ‘rapid’ can mean different things depending on the rationale, scope, and available timeline for each evaluation (e.g. fast set up, condensed data collection, accelerated dissemination57).
Rapid evaluation uses established methods, combined with focused, efficient and collaborative work processes, to rapidly assess interventions, programmes or policies and so provide timely feedback and insights that can support decision-making57. This typically involves adapting standard qualitative, quantitative and mixed methods approaches, for instance through compressed analysis or extrapolating findings from limited data, or use of larger teams allowing larger scale data collection and/or complex analysis. Careful attention to process and awareness of limitations support the trustworthiness and credibility of reporting, and ease tensions (e.g. between rigour and rapidity) that can come with working at speed65.
DECIDE has a UK-wide focus, with a remit to conduct up to six rapid evaluations over three years (June 2023 to July 2026), with one of these potentially offering longitudinal, comparative evaluation across the UK. This is important given the varied policy, system and infrastructural contexts in which each of the four UK nations operates, which have potential to shape technology-enabled remote monitoring and from which there is potential for cross-national learning71,72. For example, the NHS in Scotland has partnered with an independent sector provider to support national roll-out of remote monitoring in a number of pathways, including blood pressure monitoring and chronic pain management73,74. Other nations currently aim to support remote monitoring through national and regional funding and support programmes.
We envisage an engaged relationship with our funder that is supported by good governance via our Steering Committee (see below) and User Advisory Group, and informed by discussion with potential policy stakeholders. We plan early engagement with these key groups to inform both topic selection and subsequent topic specification, scoping and protocol development, as well as quality control and dissemination (Figure 1).
Selection of topics will happen via three routes (Figure 2). First, the DECIDE team will respond to priority evaluations identified by the NIHR team through the funder’s formal prioritisation processes. Second, the team will engage Steering Committee members, other stakeholders and potential policy customers to identify key topics for rapid evaluation as they relate to priority interventions and services (e.g. technology-enabled remote monitoring to support seasonal planning). Finally, and complementing existing routes, the team will draw on early rapid horizon scanning work, conducted at the outset of the funding period. This involved a combination of a rapid scoping review of the literature with stakeholder consultation to identify evidence gaps and potential areas for evaluations, and was informed by principles of good practice for health research priority setting75 (e.g. in terms of information gathering, inclusiveness, transparency).
Horizon scanning facilitated identification of the following five broad topic areas for potential evaluation which have shaped discussion about DECIDE’s work plan to date:
1. Using technology to support remote monitoring in acute care in the home setting
2. Using technology-enabled remote monitoring to care for people with long term/chronic conditions
3. Remote monitoring using technology to support social and home care
4. Use of tools and technologies to underpin, support and deliver remote monitoring
5. How technology-enabled remote monitoring is impacting on digital and health (in)equalities or seeking to support equitable access to services.
Exact questions to be addressed will depend on the focus, scope and scale of each evaluation, as well as the readiness of a technology-enabled care pathway for specific types of evaluation. We will work closely with relevant stakeholders (Figure 1) in designing and delivering evaluations, guided by a set of co-produced questions (see Box 4 for examples). For instance, some pathways may lend themselves to exploratory evaluation to identify the nature of implementation processes and early experiences, while more mature interventions, or those where the local data infrastructure is in place, may be ready for more impact-focused evaluation work.
- Is the technology-enabled/remote monitoring innovation needed or desired and if so, for and by whom?
- How does it work and what factors related to the innovation, implementation process and wider context influence adoption?
- What are the associated outcomes and impacts, and on whom (e.g. which end-user populations, service provider contexts)?
- How do the intervention and its impacts mitigate and/or tackle inequalities, and address equality, diversity and inclusion considerations?
- What are the unintended consequences of implementing and using technology-enabled remote monitoring, and how are these mitigated?
- What are the opportunities and challenges for the innovation to scale (within the local setting) and spread (more widely in health and care)?
- What de-implementation considerations matter?
Following selection of a topic, each potential evaluation will then require a brief topic specification to be co-produced with the identified policy customer (e.g. Department of Health and Social Care, NHS England, NHS Scotland). On confirmation by the funder, we will then scope and produce a full protocol for formal funder approval. Depending on the focus of each evaluation this is likely to involve rapid, scoping desk research and stakeholder consultation to understand patient journeys and service processes related to the remote monitoring innovation, familiarisation with key stakeholders, liaison with participating sites (Figure 1) and consideration of key process and/or outcome measures.
We will employ a core set of tried and tested methods across evaluation projects, while retaining flexibility for rapid tailoring of our approach depending on the needs of individual cases. The primary research method adopted for each project will depend on the time horizon agreed with the funder, input from the policy customer and other stakeholders, diffusion of the technology being evaluated, availability/accessibility of data (e.g. within electronic health records), and the results of scoping and literature reviews. We will proactively consider how to address inequalities in designing and undertaking evaluations, and use relevant theory that enables explicit consideration of digital inclusion (see below).
Following the process outlined in Figure 2, exact activities for each evaluation will be guided by the stage of adoption and nature of the evaluation but will include familiarisation discussions with patients/PPIE representatives, staff and other stakeholders to establish initial programme theory; rapid review of the available literature (e.g. using restricted/rapid review76 or living systematic review77); exploring outcome measures that are meaningful in each context, and agreement of data collection protocols with sites. PPIE contributors will be involved in shaping evaluation plans.
Once the formal protocol is approved by the funder, we will immediately progress the evaluation. Data collection will focus on five core evaluation domains, which will be adapted or extended depending on the needs of each project and the accessibility/availability of (particularly quantitative) data:
- implementation process (quantitative service metrics, e.g. number of patients using remote monitoring services and duration of use; qualitative metrics, e.g. workforce and service user engagement; and contextual influences on implementation enablers and barriers, e.g. funding and procurement processes);
- staff, patient and carer experiences (qualitative via staff/patient/carer interviews and focus groups, focused observation in clinical and home settings and/or quantitative via short surveys);
- health service access and use (quantitative metrics, routinely collected data, e.g. hospital admissions, emergency attendances, and length of stay, may include capacity metrics such as bed availability and occupancy);
- clinical progress or measures of quality of life/wellbeing, depending on patient condition/clinical or service area (e.g. patient outcomes, health-related quality of life – mostly quantitative, informed by routinely collected data and/or short bespoke surveys), and unanticipated outcomes (qualitative, e.g. case studies of significant events);
- economic outcomes, including resource use, economic costs and overall assessments of cost-effectiveness or cost-consequences.
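To illustrate the kind of quantitative service-use metrics listed above, here is a minimal sketch in Python. The record structure and field names are hypothetical; real extracts from routinely collected data (e.g. hospital admissions data) are far richer and would require cleaning and linkage first.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class Admission:
    """One hypothetical hospital admission record."""
    patient_id: str
    admitted: date
    discharged: date

def length_of_stay_days(a: Admission) -> int:
    """Whole days between admission and discharge."""
    return (a.discharged - a.admitted).days

def service_use_summary(admissions: list[Admission]) -> dict:
    """Aggregate simple service-use metrics from admission records."""
    return {
        "n_admissions": len(admissions),
        "n_patients": len({a.patient_id for a in admissions}),
        "mean_los_days": mean(length_of_stay_days(a) for a in admissions),
    }

records = [
    Admission("p1", date(2024, 1, 3), date(2024, 1, 7)),    # 4 days
    Admission("p2", date(2024, 2, 10), date(2024, 2, 12)),  # 2 days
    Admission("p1", date(2024, 3, 1), date(2024, 3, 4)),    # 3 days
]
print(service_use_summary(records))
# 3 admissions for 2 patients, mean length of stay 3 days
```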
Economic research methods will vary. Rapid evaluations of 9–12 months are likely to include economic evaluations based on patient-level observational studies and quasi-experimental designs (e.g. regression adjustment, difference-in-differences, propensity score matching), and economic evaluations based on decision-analytic models, including value of information analyses. For shorter evaluations, we will use alternative methods that support greater speed, including assessments of economic costs (using provider surveys, patient/client surveys, time/motion studies and other bottom-up costing approaches), budget impact analyses, and stated preference methods (e.g. discrete choice experiments, best-worst scaling). Where data are available and accessible, we anticipate drawing on routine data sources in health care and complementing these with primary, locally collected data in both health and social care settings. We will also apply an equity lens within each evaluation, seeking to ensure that disparities in care provision and outcomes by geography, socioeconomic status, ethnicity and other protected characteristics are adequately addressed (e.g. including consideration of the potential for inequalities as an outcome of the technology). For example, within economic evaluations, distributional cost-effectiveness methods will be used to assess trade-offs between efficiency and equity.
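As a hedged illustration of the quasi-experimental designs mentioned above, the following sketch shows the simplest two-group, two-period difference-in-differences estimate. The service metric, figures and site groupings are hypothetical, not drawn from any DECIDE evaluation; real analyses would use regression with covariate adjustment and checks of the parallel-trends assumption.

```python
from statistics import mean

def did_estimate(pre_treated, post_treated, pre_control, post_control):
    """Two-group, two-period difference-in-differences:
    the change in the treated group minus the change in the control group,
    which nets out shared time trends under the parallel-trends assumption."""
    treated_change = mean(post_treated) - mean(pre_treated)
    control_change = mean(post_control) - mean(pre_control)
    return treated_change - control_change

# Hypothetical monthly emergency attendances per 1,000 patients.
pre_treated  = [10.0, 11.0, 12.0]   # sites before adopting remote monitoring
post_treated = [8.0, 9.0, 10.0]     # same sites after adoption
pre_control  = [10.0, 10.5, 11.0]   # comparison sites, same periods
post_control = [10.0, 10.5, 11.0]

print(did_estimate(pre_treated, post_treated, pre_control, post_control))
# -2.0: attendances fell by 2 per 1,000 relative to the control-site trend
```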
We will work closely with sites to set up data collection (qualitative and/or quantitative as appropriate), and use options for telephone/video interviews as well as face-to-face. We will seek to use maximum variation sampling to recruit a diversity of patients and carers, for instance sampling across sociodemographic and ethnic backgrounds, health needs and conditions, and with different levels of self-assessed digital literacy/confidence/access.
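The logic of maximum variation sampling can be sketched programmatically, though in practice it is a judgement-based, iterative process rather than a mechanical selection. The attributes and candidate pool below are hypothetical, chosen only to show the idea of covering each distinct combination of sampling characteristics.

```python
def maximum_variation_sample(candidates, strata_keys):
    """Purposive sampling sketch: select one candidate for each distinct
    combination of stratum attributes, maximising diversity of the sample."""
    selected = {}
    for c in candidates:
        key = tuple(c[k] for k in strata_keys)
        selected.setdefault(key, c)  # keep the first candidate seen per stratum
    return list(selected.values())

# Hypothetical recruitment pool with two sampling dimensions.
candidates = [
    {"id": "a", "age_band": "18-40", "digital_confidence": "high"},
    {"id": "b", "age_band": "18-40", "digital_confidence": "high"},
    {"id": "c", "age_band": "65+",   "digital_confidence": "low"},
    {"id": "d", "age_band": "65+",   "digital_confidence": "high"},
]
sample = maximum_variation_sample(candidates, ["age_band", "digital_confidence"])
print([p["id"] for p in sample])  # one participant per distinct stratum
```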
In each evaluation we will conduct data analysis in parallel with data collection and feed findings directly into iterations of programme theory. We will establish the summative impact of the technology-based remote monitoring service across relevant qualitative and quantitative metrics.
Evaluation timelines will vary depending on scope and scale of projects. We will continue to revisit plans iteratively to ensure we remain well-placed to respond to shifting priorities and emerging learning. Evaluation findings will be documented in narrative form, describing how the technology was implemented.
DECIDE evaluations take as a starting point that the success of an intervention (e.g. a new technology) is intimately related to its interactions with the conditions and actions in the broader adoption context and wider health system78. Evaluations are informed by relevant theory, including the Non-adoption, Abandonment and challenges to Scale-up, Spread and Sustainability (NASSS) framework, an evidence-based framework consisting of seven interacting domains that guide thinking on the sociotechnical implementation, roll-out and embedding of technology-supported innovations in health and care64,79. NASSS will inform study design and data collection on the multiple interacting influences affecting implementation and sustainment of technology-enabled remote monitoring. The NASSS framework has already proven useful in previous research on remote monitoring (e.g. concerning heart failure and other cardiovascular conditions)23,80,81. Work is underway by the team to adapt the NASSS framework to the specific context of social care (to be reported separately).
In addition, evaluations will draw on a number of theoretical strands of work from science and technology studies, medical sociology and organisational theory (see Box 5) to underpin data analysis and synthesis. Where feasible and relevant, this theoretical engagement will enable development of cross-cutting themes (including, but not limited to, DECIDE’s focus on digital equity and equality) and comparative analysis across evaluations.
Digital exclusion and intersectionality, focusing on the multiple dimensions that contribute to some people being served less well by technology-enabled remote monitoring services and the potential to support inclusive design decisions for increasing equity in access, adoption, meaningful use and effectiveness of digital health interventions82–85
System complexity, recognising interdependence and emergence in the design and delivery of technology-enabled and remote monitoring health and care services79,86,87
Infrastructure, challenging the perception of technology as standalone and acknowledging the importance of the stuff “that other things run on”88, including technical and physical structures, as well as human, organisational and regulatory arrangements89
Socio-technical practices, acknowledging the additional (and often hidden or invisible) work needed to maintain, sustain, repair and evolve technologies underpinning remote monitoring90–94
Experiential dimensions of technology use, by staff, patients and carers, including the ways in which technology shapes and reshapes people’s interaction with the world of health and social care95,96; and the creation, digitalisation and representation of users through remote monitoring data97
Innovation systems and sociotechnical transitions, drawing on perspectives from science, technology and innovation studies98–100: enabling consideration of systems-level transition from incumbent to novel forms as dependent on co-evolving interactions between technical and social aspects of the innovating system; and interactions between bottom-up driven efforts (in innovation niches) and top-down systems orchestration100–102.
All evaluations will have senior oversight and draw on the Centre’s pool of experienced research staff from its collaborating organisations.
The programme will be overseen by an independent Steering Committee with diverse representation from policy, clinical care, social care, the commercial sector, and people with experience of patient advocacy groups or regulatory bodies, and with voices from across the four UK nations. The Steering Committee will advise on strategic and operational aspects of DECIDE activity, oversee independence and quality, and feed into topic identification and prioritisation (Figure 1). Additional input will be sought at the evaluation level, including (but not limited to) from service providers and health and care professionals. A cross-partnership Internal Advisory Group, consisting of senior Oxford, RAND and Health Innovation Network representation, will provide topic- and method-relevant advice and contribute to the design, planning, implementation and delivery of DECIDE’s work.
We will submit a classification request to the Research Governance Ethics and Assurance (RGEA) team at the University of Oxford to confirm the appropriate approval process for each rapid evaluation: (a) service evaluation that does not require research ethics approval, (b) research not requiring NHS HRA ethical approval, or (c) research requiring NHS HRA approval. We will follow the confirmed approvals process for each evaluation. All evaluations will be conducted in accordance with the Health Research Authority guidance (UK Policy Framework For Health and Social Care Research)103 and with the ‘The Concordat to support Research Integrity’104.
We will support diverse dissemination and engagement channels, engage early with a range of stakeholders, and enable timely, real-time formative feedback on emerging learning with those directly involved in each evaluation, supporting decision-making and learning cycles; final learning will be disseminated more widely.
The focus will be on tailored dissemination and outputs specific to individual evaluations, as well as synthesis/learning across evaluations (e.g. on topics, methods, effective PPIE), including:
- For policymakers and funder: evaluation reports and policy briefs, focused on evaluation outcomes and methodological innovation; formative and summative communications via informal and ad hoc meetings, webinars, social media.
- For commissioners of technology-enabled services: evaluation reports, evidence briefings and infographics/blogs, presentations at seminars/events; ad hoc in-person and virtual engagements.
- For providers and health and social care professionals: practical learning about remote monitoring, presentations at seminars/events; case vignettes and resources that can support appropriate local adoption, use and spread.
- For patients/users and the public: a public-facing website, using plain English and visuals; at least one patient/public led publication, plus co-designed patient/user facing resources to support adoption and use of technology-enabled remote care by different groups.
- For researchers: a series of journal papers, presenting evidence from evaluations along with theorisation of technology-enabled remote care; conference presentations/discussion.
Internal quality assurance processes will ensure that all outputs are peer reviewed (see Figure 1). To extend potential reach and impact, we will work with an applied design agency throughout to develop high quality materials and resources and tailor messaging across evaluations.
The UK health and social care system is experiencing numerous challenges, including tight budgets and pressure to deliver more for less. Technology is increasingly seen by policymakers, providers, commissioners and many members of the public as a route to improving access to and delivery of health and social care. This brings significant challenges in the design and implementation of technology-enabled remote monitoring services, and in spreading and scaling them. As a centre for rapid evaluation, DECIDE offers a route to evaluate promising remote monitoring practices quickly and rigorously, enabling mobilisation and dissemination of evidence in this rapidly growing field and informing decision-making in policy and practice.
Ethical approval and consent were not required.
SSh and SMa conceived the programme, and with input from CPa and FW, wrote the grant application. SSh as PI, along with SMa, CPa, FW, GH, GF, NF and JSu, secured funding. SSh led on drafting the protocol paper with input from SMa, CPa, AAN, FW, NN and JW. All authors contributed to subsequent drafts and approved the final version of the paper.
Thanks to Trish Greenhalgh for input to early development of the cross-cutting inequalities theme and Charlotte Thompson-Grant for administrative and research support.