INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XV, Issue II, February 2026
Page 364
www.rsisinternational.org
Beyond the Snapshot: A Survival-Based Systems Framework for
Measuring Resilient Impact in Global Development.
Akintunde Akinwale Peter
National Agricultural Seeds Council, Abuja, Nigeria.
DOI: https://doi.org/10.51583/IJLTEMAS.2026.15020000033
Received: 12 February 2026; Accepted: 18 February 2026; Published: 05 March 2026
ABSTRACT
Development projects often fail to produce lasting change once external funding and support disappear. This is
particularly visible in agricultural sectors where initial gains in productivity frequently evaporate within a few
seasons. The persistence of this pattern suggests that our current evaluation methods are limited: we tend to
measure success as a static outcome rather than a dynamic system property. This paper introduces Resilient
Impact Systems Analysis (RISA), a framework designed to determine whether development impact can survive
real-world pressures. Unlike traditional evaluations that focus on point-in-time estimates, RISA integrates four
key dimensions: core causal effects, behavioural persistence, exposure to external risks, and the capacity of local
institutions. The framework is built on a survival-based mathematical model. By applying exponential decay to
adoption rates, RISA moves beyond simple snapshots to provide predictive insights. A central feature of this
model is the introduction of the Impact Half-Life (T½ = ln(2)/k), which identifies the specific point in time when
systemic friction and adoption decay reduce the initial benefits by half. This provides a clear
benchmark for durability, allowing for more honest comparisons between different interventions. Through an
illustrative application in agriculture using longitudinal adoption data and post-exit follow-up surveys, the paper
shows how projects deemed successful by traditional metrics may be highly fragile. RISA provides a practical
tool for designers and donors to move from short-term wins toward investments that are truly anchored in the
systems they aim to change.
Keywords: Impact Evaluation, Sustainability, RISA Framework, Systems Thinking, Impact Half-Life, Adoption
Decay, Resilience Metric.
INTRODUCTION
Background and Motivation
Over the past two decades, development practice has witnessed a substantial expansion in both the scale of
interventions and the sophistication of impact evaluation methods used to assess them. Governments, multilateral
institutions, and donor agencies now routinely require rigorous evidence of effectiveness, often grounded in
randomized controlled trials, quasi-experimental designs, and advanced statistical techniques. This emphasis on
evidence has contributed to major advances in understanding what works under specific conditions and has
improved accountability in development spending.
Yet, alongside these methodological gains, a persistent and widely acknowledged problem remains: many
development interventions that demonstrate positive impacts during implementation fail to sustain those gains
once external support is withdrawn.
This pattern is especially pronounced in agricultural development, where improvements in yields, income, or
technology uptake frequently erode within a few seasons after project completion. Numerous post-project
assessments and synthesis studies document this phenomenon, showing that short-term success often masks
long-term fragility (Redman & Wiek, 2021; FAO, 2021).
This disconnect between measured success and lived outcomes has led to growing frustration among
policymakers and practitioners. Projects that meet their performance targets and receive favourable evaluation
ratings are nonetheless followed by adoption decline, institutional breakdown, and reversion to baseline
practices. As a result, lessons learned from "successful" interventions often fail to translate into scalable or
durable development outcomes.
Recent literature increasingly recognizes that this challenge cannot be explained solely by poor implementation
or contextual volatility. Instead, it reflects deeper limitations in how impact itself is conceptualized and
measured. Standard evaluation approaches are designed to estimate causal effects over relatively short horizons
and within controlled analytical frameworks. They are far less equipped to capture how behaviour changes over
time, how risks accumulate, or how local systems absorb or fail to absorb interventions after donor exit
(Bamberger et al., 2016; Manning et al., 2020).
Agriculture provides a particularly revealing lens through which to examine this problem. Agricultural
interventions operate within complex socio-ecological systems shaped by climate variability, market dynamics,
institutional capacity, and farmer decision-making. These conditions make agriculture both highly responsive to
well-designed support and highly vulnerable to post-project collapse. As climate shocks intensify and market
volatility increases, the limitations of short-term, static impact measures become even more evident (Barrett et
al., 2023).
This context motivates the need for an analytical approach that goes beyond measuring whether an intervention
worked at a point in time and instead asks whether it created change that can endure real-world pressures over
time.
The Core Research Problem
The central research problem addressed in this paper is not the absence of rigorous impact evaluation methods,
but their limited ability to capture durability, resilience, and system dependence of development outcomes.
Most impact evaluations are built around estimating an average treatment effect within a defined evaluation
window. These estimates are invaluable for establishing attribution and have significantly improved the
credibility of development evidence. However, by design, they treat several critical determinants of long-term
success as external or secondary considerations. Adoption behaviour beyond initial uptake, exposure to
environmental and economic shocks, and the strength of local institutions are often included as control variables,
subgroup analyses, or qualitative reflections rather than as core components of impact itself.
This analytical separation creates a blind spot. An intervention may produce a statistically significant effect
during implementation while simultaneously being highly dependent on subsidies, intensive supervision, or
temporary institutional arrangements. When these supports are withdrawn, the measured impact no longer
reflects the lived reality of beneficiaries. Yet traditional evaluation metrics rarely anticipate or quantify this
collapse.
Recent debates in the evaluation literature highlight this limitation. Scholars have noted that evidence of
effectiveness does not automatically translate into evidence of sustainability, and that overreliance on short-term
indicators can distort learning and policy decisions (Cartwright & Hardie, 2012; Copestake, 2025). In agriculture,
this issue is compounded by the fact that outcomes are shaped by cumulative risks, including climate variability,
price fluctuations, and ecological degradation, which interact with behavioural and institutional factors over
time.
The result is a paradox: development actors are increasingly confident in their estimates of causal impact, yet
increasingly uncertain about whether those impacts will last. This uncertainty undermines scaling decisions,
weakens accountability to beneficiaries, and contributes to repeated cycles of pilot projects that fail to deliver
sustained transformation.
What is missing is an analytical framework that explicitly treats impact as a system property, emerging from the
interaction of intervention effects, human behaviour, risk exposure, and institutional capacity. Without such a
framework, evaluations will continue to produce precise answers to narrow questions while leaving the most
consequential question unresolved: Did the intervention create change that could survive after the project ended?
This paper seeks to address this gap by developing and presenting a new analytical framework for impact
measurement, termed Resilient Impact Systems Analysis (RISA). The specific objectives of the research are
fourfold: (i) to reconceptualise development impact as a dynamic system rather than a static outcome
(Section 3); (ii) to develop a transparent analytical structure that integrates these dimensions into a single
measure of effective impact (Sections 4-5); (iii) to demonstrate the relevance of this framework through
application to agricultural development interventions (Section 6); and (iv) to provide a decision-relevant
tool for project design, adaptive management, and post-exit learning (Section 7).
Importantly, the aim is not to replace established impact evaluation methods, but to extend their usefulness by
embedding them within a broader systems-oriented framework.
Unlike traditional evaluations that focus on point-in-time estimates, RISA integrates four key dimensions: core
causal effects, behavioural persistence, exposure to external risks, and the capacity of local institutions. Where
conventional methods treat adoption, risk, and institutional factors as control variables or subgroup analyses,
RISA positions them as constitutive elements of impact itself.
This reframing shifts evaluation from retrospective assessment of 'what worked' to predictive diagnosis of 'what
will endure.' Figure 1 provides a comparative schematic illustrating how RISA extends prominent dynamic
evaluation frameworks such as resilience tracking models and sustainability scorecards.
Figure 1: How RISA Extends Dynamic Impact Evaluation.
Addressing Feasibility Concerns:
The integration of multiple analytical dimensions raises legitimate questions about data intensity and
implementation complexity, particularly in resource-constrained evaluation contexts. We acknowledge these
concerns directly. RISA is designed with modularity as a core principle: components can be applied
incrementally as data permit, rather than requiring complete implementation from the outset. Section 3.3
provides a phased implementation pathway, distinguishing between minimum viable RISA (using baseline
impact estimates and expert-elicited decay parameters), intermediate RISA (adding shock exposure data), and
full RISA (incorporating structured institutional assessments). This graduated approach ensures analytical rigor
does not become a barrier to adoption in settings where perfect data are unavailable.
Review of Related Literature.
The design of effective development interventions depends fundamentally on how impact is conceptualized and
measured. Traditional approaches to impact evaluation have advanced causal inference considerably, but have
struggled to integrate adoption dynamics, external risks, and long-term sustainability into coherent analytical
frameworks.
Impact Evaluation and Causal Inference
Impact evaluation, as a field of research and practice, has matured through widespread adoption of rigorous causal
inference methods such as randomized controlled trials (RCTs), difference-in-differences, and quasi-
experimental designs that seek to establish counterfactual outcomes (Glewwe & Todd, 2022). These methods
remain central to attributing changes to interventions rather than to confounding factors, and they underpin much
of the evidence used by policymakers and donors today.
However, recent critiques emphasise that an exclusive focus on attribution within a bounded evaluation window
can be misleading if it ignores how an impact persists or erodes over time. Evaluators increasingly point to the
difficulty of capturing synergistic or interactive effects in integrated programs, where outcomes arise from
combinations of complementary interventions rather than isolated treatments (Evaluating integrated approaches,
Gates Open Research, 2025). Moreover, innovations in causal inference such as process tracing are being
advocated to explain not just whether an intervention worked but how it worked in context (Montano et al.,
2025). These advances reflect a broader view that traditional designs excel at internal validity but often sideline
questions of external relevance and longer-term durability.
Despite these methodological refinements, impact evaluation literature still largely conceptualizes impact as
change at a fixed point in time rather than as an evolving system property. Without incorporating longer-term
behavioural and contextual dynamics, estimates of causal effects can overstate durable success.
Adoption, Behaviour, and Technology Diffusion
The adoption literature underscores that the spread and persistence of innovations are shaped by complex socio-
economic and behavioural factors. Recent reviews on agricultural technology adoption highlight that uptake is
rarely immediate or uniform, and depends on farm characteristics, perceived complexity, socio-economic
constraints, and policy environments (Kpoviwanou et al., 2024). Meta-analyses confirm that while improved
agricultural technologies often show positive welfare effects in smallholder contexts, the magnitude and
consistency of impacts vary widely across studies, reflecting heterogeneity in adoption and use patterns
(Mulugeta & Heshmati, 2023).
Emerging work on climate-smart agricultural technologies reveals a theoretical bias toward adoption diffusion
models that emphasize individual-level determinants of uptake, but rarely integrate broader systemic barriers
such as institutions, gender dynamics, or agroecological variability (Rurii & Nzengya, 2026). This critique aligns
with calls for frameworks that encompass institutional, market, and behavioural dynamics rather than treating
adoption as a static 'event' following intervention delivery. In essence, while the literature recognizes the
complexity of adoption, it remains fragmented and often fails to situate technology use within dynamic socio-
technical systems rather than isolated choices.
Risk, Uncertainty, and External Shocks
External risks, including climate variability, market volatility, and policy shocks, present significant challenges
to development outcomes. The literature on resilience and development underscores that shocks can produce
heterogeneous and lasting impacts on wellbeing, with differential recovery trajectories across populations
(Barrett et al., 2023). Agricultural systems operate under recurrent risks from drought to price collapse that
influence both adoption and sustained use of interventions.
Despite this recognition, explicit integration of risk and shocks into impact frameworks remains limited. Most
impact evaluations treat risk as noise to be controlled statistically rather than as a core determinant of long-term
outcomes. The increasingly volatile environment as articulated in climate adaptation and resilience literatures
highlights the importance of modelling how shock exposures interact with behavioural responses and
institutional capacity.
Yet few evaluation schemes have incorporated formal risk adjustment factors that can quantify how exposure to
uncertainty dampens or amplifies impact outcomes over time. This gap suggests the need for frameworks like
RISA that explicitly foreground risk as a structural component of impact measurement.
Sustainability and Systems Thinking in Development
The sustainability literature reflects a growing shift toward systems thinking to address complexity in
development outcomes. Systems perspectives emphasize interconnections among social, economic, and
environmental subsystems, and encourage evaluators to move beyond linear cause-effect logics. Recent
bibliometric work illustrates that systems thinking research has expanded significantly, with applications across
economic, social, and environmental sustainability contexts (Abonguie et al., 2025; Weaver et al., 2026). These
studies argue that traditional assessment frameworks often fail to capture feedback loops, emergent behaviours,
and longer-term dynamics critical for understanding sustainable outcomes.
In sustainability assessment more broadly, scholars highlight the need for frameworks that integrate multi-
dimensional indicators and account for systemic effectiveness rather than narrow efficiency metrics (Marra,
2025; Zimek & Baumgartner, 2024). This emerging literature suggests that static evaluations miss second-order
sustainability effects that emerge from complex interactions among components of socio-ecological
systems. The systems literature also underscores the importance of uncertainty, multi-actor dynamics, and
structural leverage points for durable change.
However, while systems thinking has become rhetorically influential in the sustainability literature, its
operationalisation in empirical impact measurement remains nascent. Many studies call for integrated
frameworks but stop short of formalising analytical structures that can link causal effects to systemic dynamics
in measurable, testable ways. There remains a disconnect between high-level systems prescriptions and the
methodological tools used in impact evaluations. RISA addresses this operationalisation gap by providing a
formal analytical structure with explicit mathematical relationships among components.
Identified Gaps
Across these domains, several consistent gaps emerge relevant to designing a framework like RISA.
First, impact evaluations, even with advanced causal inference methods, rarely consider post-exit
dynamics and behavioural persistence as intrinsic to impact measurement.
Second, adoption research acknowledges complex drivers of uptake but often lacks integration with
institutional and systemic constraints shaping sustained use.
Third, risk and external shocks, though widely recognized in resilience and climate literatures, are seldom
incorporated into impact evaluation structures in a way that alters impact estimates systematically.
Finally, while sustainability and systems thinking offer conceptual depth, there is a clear need for formal
analytical frameworks that operationalize these insights into measurable components that can be embedded
alongside causal estimates.
Taken together, the current literature supports the premise that impact is not a simple, static outcome but an
emergent property of interacting forces: causal effects, human behaviour, exposure to risk, and system capacity.
This synthesis reveals the intellectual space for frameworks that integrate these dimensions into a coherent
analytical structure, addressing long-standing critiques about the limitations of traditional evaluation methods
and situating impact within a broader systems context (World Bank, 2024).
Table 1 reveals a clear pattern: each gap we identified in the development literature has a direct answer in RISA's
architecture. Where traditional evaluations ignore what happens after donors leave, we built in the Adoption
Retention Function to track exactly that. Where risk gets treated as statistical noise, we made it structural through
RSA. And where "sustainability" remains an unclear aspiration, we created the SSI to measure it transparently. The
framework did not emerge from theory alone; it was designed to solve real problems practitioners face daily.
Table 1: Literature Gaps Mapped to RISA Components
Gap Identified | RISA Component | How Addressed
Impact evaluations rarely consider post-exit dynamics | ARF (Adoption Retention Function) | Explicit exponential decay modelling of behavioural persistence
Adoption research lacks systemic integration | ARF + SSI interaction | Adoption decay conditioned on institutional capacity
Risk treated as noise rather than structure | RSA (Risk & Shock Adjustment) | Formal resilience modifier based on shock exposure
Systems thinking lacks operational tools | Full RISA integration | Mathematical model with estimable parameters
Sustainability assessed qualitatively | SSI (System Sustainability Index) | Structured composite index with transparent weighting
Note: This table demonstrates how each identified gap in the literature is directly addressed by a specific RISA
component, strengthening the logical coherence between the literature review and the framework architecture.
Contribution to the Literature
The RISA framework adds to the field in three distinct ways, moving evaluation from a static snapshot to a
dynamic survival analysis.
First, it provides a functional analytical bridge for systems-based evaluation. While many experts agree that
context and complexity are vital, very few have successfully turned those concepts into a formal structure for
measuring impact outcomes (Bamberger et al., 2016; Gates et al., 2021). RISA fills this gap by offering a rigorous
way to operationalize systems thinking. By introducing the math of exponential decay, we move beyond just
acknowledging complexity and start measuring exactly how it influences the trajectory of an intervention.
Second, we address the persistent sustainability crisis in agricultural development. Historically, agricultural
studies have focused on yield or income spikes while a project is active, with little attention paid to what happens
after the project exits. RISA aligns impact measurement with the actual lived experience of smallholder farmers
who face constant climate and market uncertainty (Barrett et al., 2023; FAO, 2021). By calculating the "Impact
Half-Life," we can now quantify how resilient those agricultural gains truly are in the face of real-world friction.
Third, this paper introduces a methodological shift in how data informs decision-making. Instead of treating
impact estimates as a final grade, RISA treats them as raw inputs into a broader assessment of durability. This
shift is critical for donor accountability and for making smarter decisions about which projects to scale. It
responds directly to the critique that the evaluation field has become too focused on internal validity while
ignoring whether an intervention is useful in the long run (Manning et al., 2020; Copestake, 2025).
By integrating causal inference with behavioural persistence and institutional strength, we provide a clear answer
to a long-standing problem: why we are so good at measuring success in the short term, yet so poor at
understanding why it fails to last.
Conceptual Foundations of the RISA Model
Reframing Impact as a System
The Resilient Impact Systems Analysis (RISA) model is grounded in a deliberate reconceptualization of
"impact" away from a static outcome and toward a dynamic system property. Conventional impact evaluation
frameworks tend to treat impact as a measurable difference between treated and untreated units at a specific
point in time. While this framing has delivered important advances in attribution and internal validity, it
implicitly assumes that impact is stable once achieved and largely independent of subsequent contextual change.
RISA challenges this assumption by positioning impact as an emergent result of interacting forces that evolve
over time.
Within the RISA perspective, impact is understood as the outcome of continuous interaction among intervention
effects, human behaviour, institutional conditions, and external risks. This reframing draws on systems thinking
in development economics and sustainability science, where outcomes are recognized as non-linear and path
dependent. In agricultural development, for example, yield gains observed shortly after project implementation
may dissipate as climate shocks, price volatility, or institutional breakdowns alter farmer incentives and
capacities. RISA treats such erosion not as evaluation "noise" but as analytically meaningful information about
system fragility.
By modelling impact as a system rather than a snapshot, RISA integrates causal inference with post-intervention
dynamics. The counterfactual remains essential, but it is no longer sufficient on its own. Instead, causal impact
estimates are interpreted as inputs into a broader system whose performance depends on adoption persistence,
exposure to shocks, and the adaptive capacity of surrounding institutions. This reframing allows RISA to address
a central critique in the evaluation literature: that rigorously identified short-term effects often fail to translate
into durable development outcomes.
Consider a concrete example: A traditional evaluation reports 'yields increased 30% at endline' and declares
success. A RISA interpretation of the same intervention states: 'yields increased 30% at endline (I₀ = 0.30), but
adoption surveys show farmers abandoning the practice at a rate producing an Impact Half-Life of 3.2 years.
With an SSI score of 0.58 reflecting weak input markets, realized impact at five years post-exit will be
approximately 9%, not 30%.' This reframing shifts the evaluative question from 'did it work?' to 'will it last?'
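The decay arithmetic behind this example can be sketched in a few lines. This is an illustrative computation only: it applies pure exponential decay, I(t) = I₀·e^(−kt) with k = ln(2)/T½, to the endline estimate. The `impact_at` helper is an assumption of this sketch, not part of RISA's published tooling, and the paper's figure of roughly 9% additionally folds in system-level friction beyond adoption decay alone.

```python
import math

def impact_at(i0: float, half_life_years: float, t_years: float) -> float:
    """Project realized impact under pure exponential adoption decay:
    I(t) = I0 * exp(-k * t), with k = ln(2) / half-life."""
    k = math.log(2) / half_life_years
    return i0 * math.exp(-k * t_years)

# Worked example from the text: I0 = 0.30, Impact Half-Life = 3.2 years.
# By construction, exactly half the effect remains after one half-life:
assert abs(impact_at(0.30, 3.2, 3.2) - 0.15) < 1e-9

# Adoption decay alone leaves about a third of the endline effect at t = 5;
# the ~9% cited in the text reflects additional system-level attenuation.
print(round(impact_at(0.30, 3.2, 5), 3))  # ≈ 0.102
```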
Analytical Design Principles
The analytical architecture of RISA is guided by three interrelated design principles: integration, conditionality,
and temporal sensitivity.
First, integration refers to the explicit linking of multiple analytical domains that are typically examined in
isolation. RISA does not replace econometric or statistical impact estimation; rather, it embeds these estimates
within a structured system that also incorporates behavioural adoption, risk exposure, and sustainability
dynamics. This design choice reflects the empirical reality that development outcomes are rarely driven by single
mechanisms.
Practical implication: Evaluation designs must coordinate data collection across traditionally separate domains:
baseline surveys (for I₀), longitudinal follow-up (for k), shock monitoring systems (for R), and institutional
diagnostics (for S).
Second, RISA is explicitly conditional. Impact is not assumed to be invariant across contexts or time periods.
Instead, observed outcomes are conditioned on adoption intensity, behavioural responses, and exposure to
external shocks. This conditional logic aligns with recent advances in heterogeneous treatment effects and
resilience analysis, while extending them beyond methodological refinement into conceptual integration. In
practice, this means that identical interventions may yield divergent impact trajectories depending on system
conditions, a feature that RISA is designed to capture rather than average away.
Practical implication: Model specifications must allow parameters to vary by context (e.g., k may differ across
subpopulations or geographies) rather than assuming universal constants.
Third, temporal sensitivity is central to the model’s design. RISA treats time not merely as a dimension along
which outcomes are observed, but as a mechanism through which impact evolves. Early gains may decay,
stabilize, or amplify depending on feedback loops within the system. This temporal orientation makes RISA
particularly suitable for sectors such as agriculture, climate adaptation, health systems, and education, where
sustainability and resilience are core policy concerns rather than secondary considerations.
Practical implication: Data collection protocols must extend 3-5 years post-exit to observe decay dynamics
empirically, moving beyond standard 12-month endline surveys.
Together, these principles position RISA as an analytical bridge between rigorous causal evaluation and systems-
oriented development analysis. The model is intentionally parsimonious at the conceptual level, allowing it to
be operationalized using a range of empirical methods without becoming method dependent.
Scope and Boundaries
While RISA was initially motivated by persistent challenges in agricultural project evaluation, its conceptual
foundations are not sector specific. The model is applicable to any development intervention where outcomes
depend on sustained behavioural change, institutional support, and resilience to external shocks. This includes,
but is not limited to, climate adaptation programs, health system strengthening, education reforms, and livelihood
interventions. The agricultural focus reflects empirical relevance rather than theoretical limitation.
At the same time, RISA has clearly defined boundaries. It is not designed to replace micro-level causal inference
methods, nor does it claim to predict outcomes deterministically. Instead, it functions as a higher-order analytical
framework that organizes and interprets evidence from multiple sources. The model also does not attempt to
capture all dimensions of social complexity; rather, it focuses on those most consistently shown to mediate the
sustainability of development impact.
Importantly, RISA operates at the level of programmatic systems rather than individual projects in isolation. This
scope allows it to accommodate spillovers, complementarities, and cumulative effects that are often overlooked
in project-centric evaluations. However, it also implies that data availability and quality may constrain full
implementation, particularly in low-resource settings. RISA therefore emphasizes analytical transparency and
modular application, allowing components of the model to be applied incrementally as data permit.
In defining its scope and boundaries explicitly, RISA positions itself as both ambitious and realistic: ambitious
in its attempt to integrate causal, behavioural, and systemic dimensions of impact, and realistic in acknowledging
the practical constraints faced by evaluators and policymakers.
Modular Application Pathways:
RISA can be implemented in three graduated phases based on data availability.
Phase 1 - Minimum Viable RISA: Uses only I₀ from existing evaluation and expert-elicited k based on
comparable projects. Provides a rough durability estimate with explicit uncertainty bounds.
Phase 2 - Intermediate RISA: Adds R(t) from available shock exposure data (climate records, price
series). Improves precision of long-term projections.
Phase 3 - Full RISA: Incorporates SSI from structured institutional surveys and direct estimation of k
from project-specific longitudinal data. Provides highest confidence projections suitable for scaling
decisions.
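One way to read Phase 1 concretely: with only a baseline estimate I₀ and an expert-elicited range for the decay rate k, durability can be bracketed rather than point-estimated. The sketch below is a hypothetical illustration of that idea; the function names and the elicited range are assumptions of this example, not part of the framework's formal specification.

```python
import math

def half_life(k: float) -> float:
    """Impact Half-Life implied by a decay rate k (per year): T1/2 = ln(2) / k."""
    return math.log(2) / k

def durability_bounds(i0: float, k_low: float, k_high: float, t: float):
    """Phase-1 sketch: bracket projected impact at horizon t using an
    expert-elicited range [k_low, k_high] for the decay rate."""
    lower = i0 * math.exp(-k_high * t)  # faster decay -> pessimistic bound
    upper = i0 * math.exp(-k_low * t)   # slower decay -> optimistic bound
    return lower, upper

# Hypothetical elicitation: comparable projects suggest k between 0.1 and 0.3
# per year, i.e. half-lives between roughly 2.3 and 6.9 years.
lower, upper = durability_bounds(0.30, k_low=0.1, k_high=0.3, t=5)
```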
The Resilient Impact Systems Analysis (RISA) Framework
The Resilient Impact Systems Analysis (RISA) framework is designed to address a persistent weakness in
development evaluation: the tendency to measure impact as a static, short-run outcome divorced from adoption
behaviour, systemic risk, and long-term sustainability.
RISA reframes impact as an emergent property of interacting components within a socio-economic system.
Rather than asking whether an intervention worked at a point in time, it asks whether impact was adopted,
survived shocks, and remained viable after external support diminished. The framework integrates causal
estimation with behavioural persistence, risk exposure, and system capacity, offering a coherent analytical
structure for assessing what truly lasts.
Overview of the Four Core Components
RISA is structured around four analytically distinct but interdependent components:
1. Core Impact Effect (CIE)
2. Adoption Retention Function (ARF)
3. Risk and Shock Adjustment (RSA)
4. System Sustainability Index (SSI)
Figure 2: RISA Component Interaction
The multiplicative structure of RISA means that failure in any single component drives realized impact toward
zero, regardless of the strength of other components. This reflects the empirical reality that system failures are
non-compensable.
Together, these components define effective impact as a product of causal effect size, behavioural persistence,
resilience to shocks, and systemic support conditions.
Figure 3: Feedback-driven system with bidirectional causality.
Core Impact Effect (ΔIₜ)
The Core Impact Effect represents the causal change in an outcome attributable to an intervention at time t.
Conceptually, it aligns with traditional impact evaluation estimates, such as average treatment effects, but is
explicitly treated as only one component of lasting impact rather than the final metric.
ΔIₜ plays a foundational role in RISA. Without a credible causal effect, questions of adoption or sustainability
are secondary. However, RISA departs from conventional evaluation by refusing to equate statistical significance
with real-world durability.
The relationship to econometric estimates is therefore complementary rather than substitutive. Randomized
controlled trials, quasi-experimental designs, difference-in-differences, instrumental variables, and regression
discontinuity designs are all acceptable estimation strategies, provided identification assumptions are transparent
and defensible. What matters is not the method per se, but the credibility of the counterfactual and the clarity of
interpretation.
Importantly, ΔIₜ is time indexed. Early gains may differ substantially from medium- or long-term effects, and
RISA treats this temporal variation as analytically meaningful rather than noise.
Adoption Retention Function (Aₜ)
The Adoption Retention Function (Aₜ) captures the proportion of the target population that maintains meaningful
use of an intervention over time. While initial uptake reflects program exposure and compliance, retention
reflects behavioural integration into the beneficiary’s daily life.
Empirical evidence suggests that adoption decay rarely follows a linear path. Instead, it typically follows a
survival curve, where a sharp initial drop often occurs after project exit, followed by a stabilizing "long tail" of
persistent users. To capture this dynamic, RISA adopts an exponential decay specification:

A(t) = e^(−kt)

In this formulation, k represents the decay constant, or the rate at which adoption is lost. This approach recognizes
that the probability of dis-adoption is proportional to the current number of users, a standard assumption in
diffusion and survival modelling. By shifting to an exponential model, RISA quantifies the "resilience" of
behavioural change. A lower k value indicates that the intervention has been effectively internalized by the local
system, whereas a high k value signals that the impact is fragile and highly dependent on external support.
This functional form is derived from a first-order differential equation in which the rate of dis-adoption is
proportional to the current adopter population (dN/dt = −k·N), a standard assumption in diffusion theory
(Rogers, 2003) and survival analysis (Cox, 1972). The
exponential form emerges naturally from this proportionality relationship and has strong empirical support in
technology adoption studies.
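As an illustration of how k can be recovered from monitoring data, the sketch below exploits the log-linear implication of A(t) = e^(−kt): ln A(t) is linear in t with slope −k, so ordinary least squares on the log scale yields the decay constant and, with it, the half-life. The adoption shares and years are hypothetical placeholders, not data from any actual project.

```python
import math

# Hypothetical post-exit adoption shares, observed once per year
# (year 0 = project exit). Illustrative numbers only.
years = [0, 1, 2, 3, 4]
adoption = [0.90, 0.74, 0.61, 0.50, 0.41]

# Under A(t) = A0 * exp(-k*t), ln A(t) is linear in t with slope -k,
# so k is recovered by OLS on the log scale.
n = len(years)
log_a = [math.log(a) for a in adoption]
t_bar = sum(years) / n
y_bar = sum(log_a) / n
slope = sum((t - t_bar) * (y - y_bar) for t, y in zip(years, log_a)) / \
        sum((t - t_bar) ** 2 for t in years)
k = -slope                      # decay constant (per year)
half_life = math.log(2) / k     # Impact Half-Life T1/2

print(f"k = {k:.3f} per year, T1/2 = {half_life:.2f} years")
```

With these placeholder data the fit recovers a decay rate of roughly 0.2 per year, implying a half-life of about three and a half years.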
Risk and Shock Adjustment (Rₜ)
Risk and Shock Adjustment accounts for the vulnerability of observed impacts to external disturbances. These
shocks may be environmental, economic, political, or institutional in nature. Examples include climate
variability, price collapses, conflict, policy reversals, or supply-chain disruptions.
The rationale for explicit risk penalization is straightforward: an intervention whose benefits collapse under
moderate stress cannot be considered robust, regardless of its average effect size. Traditional evaluations often
treat shocks as confounders to be controlled away. RISA instead treats them as structural features of real-world
systems.
Rₜ is interpreted as a resilience modifier. Values closer to one indicate that impact is largely preserved under
stress, while lower values signal fragility. This shifts analytical attention from optimal conditions to plausible
futures, aligning evaluation with decision-making under uncertainty.
System Sustainability Index (S)
The System Sustainability Index captures the broader conditions that enable or constrain the persistence of
impact beyond project life. It reflects institutional capacity, market integration, governance quality, and
alignment with existing systems.
Institutional dimensions include administrative capability, local organizational strength, and policy coherence.
Market dimensions capture access to inputs, outputs, finance, and information. Governance dimensions
encompass accountability mechanisms, incentive alignment, and regulatory stability.
An index-based approach is used because sustainability is inherently multidimensional and cannot be reduced
to a single observable variable. While any index involves judgment, explicit construction is preferable to implicit
assumptions.
Within RISA, S is treated as relatively time-invariant in the short run but updateable as systems evolve. This
reflects the reality that institutions change slowly, yet not immutably. Periodic recalibration allows RISA to
remain analytically rigorous without overstating precision.
To minimize subjectivity in SSI construction, we recommend three concrete practices:
1. Stakeholder consultation using Delphi methods: Convene 8-12 domain experts across institutional,
market, and governance dimensions to iteratively score indicators and reach consensus on weights.
2. Sensitivity testing: Vary indicator weights by ±20% and re-calculate SSI to assess robustness. Report
range of SSI values alongside point estimates.
3. Transparency documentation: Maintain a structured checklist documenting: (a) rationale for each
indicator's inclusion, (b) data source and measurement protocol, (c) normalization method, (d) weighting
justification. See Appendix A for a fully worked example demonstrating this protocol.
Synthesis
By integrating causal effects, behavioural persistence, risk exposure, and system capacity, the RISA framework
provides a structured way to assess not only whether interventions work, but whether they endure. Its strength
lies not in replacing existing evaluation tools, but in organizing them into a coherent system focused on impact
that survives real-world conditions.
Integrated RISA Expression
Bringing these components together, the realised impact under RISA can be expressed conceptually as:

I_RISA(t) = ΔIₜ × A(t) × R(t) × S(t)
This expression is not intended as a deterministic prediction formula, but as an analytical structure that clarifies
how different forces interact to shape observed outcomes. Each term represents a domain where policy and
program design can intervene, shifting the focus from whether an intervention “worked” to whether the system
could sustain its effects.
Interpretation and Analytical Value
The strength of RISA lies in its interpretive power. Two projects with identical baseline impact estimates may
yield vastly different realized outcomes depending on adoption persistence, shock exposure, and system capacity.
RISA provides a coherent explanation for these divergences without undermining the validity of causal inference.
For evaluators, the model offers a structured way to move beyond binary success or failure judgments. For
policymakers, it highlights leverage points for improving long-term effectiveness, such as strengthening
institutions or reducing vulnerability to shocks. For researchers, RISA opens a pathway for integrating
econometric rigor with systems-based reasoning in a transparent and testable manner.
Analytical Formulation of the RISA Model
The Core RISA Equation
The analytical core of the RISA model formalizes impact as a conditional expression shaped by interacting
system components. The refined model integrates the exponential decay of adoption directly into the primary
identity:

I_RISA(t) = I₀ · e^(−kt) · R(t) · S(t)
where:
• I_RISA(t) represents the effective impact of an intervention at time t.
• I₀: The baseline causal impact, representing the initial treatment effect relative to a counterfactual.
• A(t) = e^(−kt): The behavioural survival function, where k is the decay constant governed by the interaction
between the intervention and the beneficiary's environment.
• R(t): The risk and shock adjustment factor, acting as a resilience modifier that penalizes impact based
on exposure to disturbances.
• S(t): The system capacity term, representing the enabling environment.
The multiplicative structure remains a deliberate choice to reflect conditional dependency. If the system capacity
(S) or the risk adjustment (R) fails, the realized impact diminishes toward zero, regardless of the strength of the
initial causal effect (I₀).
R(t) and S(t) can be specified as: (1) continuous time-varying functions updated with rolling shock data, (2)
stepwise indicators updated annually, or (3) constant baseline values (R₀, S₀) if data constraints are binding. The
choice depends on data availability and analytical tractability. For most applications, treating R and S as
constants over the evaluation horizon (typically 5-10 years) provides a reasonable approximation while
maintaining model parsimony.
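Under this parsimonious specification, with R and S held constant over the horizon, the identity reduces to a one-line function. The parameter values below are illustrative assumptions, not estimates from any evaluation.

```python
import math

def effective_impact(i0, k, r, s, t):
    """I_RISA(t) = I0 * exp(-k*t) * R * S, with R and S treated as
    constants over the evaluation horizon (the parsimonious case)."""
    return i0 * math.exp(-k * t) * r * s

# Illustrative parameters: 30% initial effect, k = 0.15/yr, R = 0.85, S = 0.70.
i0, k, r, s = 0.30, 0.15, 0.85, 0.70
for t in (0, 5, 10):
    print(f"t = {t:2d} years: effective impact = {effective_impact(i0, k, r, s, t):.3f}")
```

Note that even at t = 0 the multiplicative structure already discounts the headline effect: with these placeholder values, realized impact starts at 0.30 × 0.85 × 0.70 ≈ 0.18, reflecting the non-compensable role of risk and system capacity.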
Dynamic Extension of the Model
To capture the total value generated by an intervention, the model extends into a multi-period framework. Instead
of a simple summation, we use the integral of the RISA function to represent the Cumulative Resilient Impact
over a time horizon T:

CRI(T) = ∫₀ᵀ e^(−ρt) · I₀ · e^(−kt) · R(t) · S(t) dt

This allows us to account for the 'area under the curve': the total realized benefit. This formulation preserves the
importance of the discount factor (ρ) while recognizing that the 'shape' of the impact curve is dictated by the
system's inherent decay rate, k.
Numerical Approximation for Cumulative Impact. In practice, cumulative resilient impact can be estimated using
the trapezoidal rule for discrete time observations:

CRI ≈ Σᵢ (Δt/2) · [I_RISA(tᵢ₋₁) + I_RISA(tᵢ)]
Alternatively, Monte Carlo simulation can incorporate parameter uncertainty by drawing from confidence
intervals of I₀, k, R, and S to produce probabilistic projections. Software implementations are available in R
(survival package: survfit() function) and Stata (stcurve command) that support both approaches with built-in
diagnostic plotting.
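A Monte Carlo sketch of the second approach: draws from a parameter distribution are propagated into the implied half-life. The mean and standard error for k below are invented placeholders, standing in for the confidence interval a real evaluation would report.

```python
import math
import random

random.seed(42)

# Propagate uncertainty in k into the Impact Half-Life.
# Mean and SE are hypothetical placeholders for a reported interval.
draws = []
for _ in range(10_000):
    k = random.gauss(0.15, 0.03)      # decay constant draw
    if k <= 0:
        continue                      # discard non-physical draws
    draws.append(math.log(2) / k)     # implied half-life

draws.sort()
lo, med, hi = (draws[int(q * len(draws))] for q in (0.025, 0.5, 0.975))
print(f"T1/2 median = {med:.1f} years (95% interval {lo:.1f}-{hi:.1f})")
```

The same scheme extends to joint draws over I₀, R, and S when projecting cumulative impact rather than the half-life alone.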
The Impact Half-Life (T½): A Benchmark for Durability
While the cumulative impact measures total volume, it does not easily communicate the fragility of the system.
To address this, RISA introduces the Impact Half-Life (T½). Derived from the decay constant k, the half-life
is the time required for systemic friction to reduce the initial impact by 50%:

T½ = ln(2)/k
By introducing this metric, RISA moves from a purely descriptive time-series to a predictive diagnostic.
Policymakers can now define 'sustainability' not as a vague concept, but as a target half-life (e.g., ensuring a
project has a half-life of at least 5 years).
Half-Life Interpretation with Time-Varying Decay.
When the decay constant k varies over time (e.g., k(t) = k₀ + α·t), the Impact Half-Life no longer has a
closed-form solution and must be computed numerically by solving I_RISA(T½) = I_RISA(0)/2. In such cases, T½
represents the median survival time rather than a simple point estimate. Confidence intervals should be derived
from hazard model estimates (e.g., using Cox proportional hazards regression) and reported alongside the point
estimate to convey uncertainty in the durability projection.
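The numerical solution can be sketched with simple bisection. The k₀ and α values are illustrative, and the survival form exp(−(k₀t + αt²/2)) follows from integrating the linear hazard k(t) = k₀ + α·t.

```python
import math

def survival(t, k0=0.10, alpha=0.02):
    """A(t) under time-varying decay k(t) = k0 + alpha*t, i.e.
    A(t) = exp(-(k0*t + alpha*t**2/2)). Parameters are illustrative."""
    return math.exp(-(k0 * t + alpha * t**2 / 2))

def half_life(f, hi=100.0, tol=1e-8):
    """Bisection solve for T with f(T) = 0.5, given f decreasing and f(0) = 1."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t_half = half_life(survival)
print(f"numerical T1/2 = {t_half:.2f} years")
```

With these placeholder values the accelerating decay pulls the half-life down to about 4.7 years, versus ln(2)/k₀ ≈ 6.9 years if α were zero, showing why ignoring time variation in k overstates durability.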
Interpretation for Decision-Making
In the RISA framework, the transition from measuring snapshots to calculating the Impact Half-Life (T½)
transforms how we approach development decision-making. For a project manager or donor, the "Effective
Impact" score is no longer just a backward-looking report card; it is a predictive stress test.
When you see a project with a high initial impact (I₀) but a rapid decay rate (k), the model is signalling a "fragile
gain." In practical terms, this means the intervention is likely propped up by temporary subsidies or intensive
external oversight. Decisions should therefore shift away from simple scaling and toward strengthening the
System Sustainability Index (SSI).
By using the half-life as a benchmark, leadership can set specific durability targets: for instance, requiring that
any agricultural intervention must demonstrate a half-life of at least five years before moving to a national
rollout. This allows for a more honest allocation of resources, prioritizing "resilient systems" over "flash-in-the-
pan" results. Ultimately, RISA provides the evidence needed to stop funding what only works in the short term
and start investing in what sticks.
Application of the RISA Model to Agricultural Development
Why Agriculture Is a Stress-Test Sector
Agriculture serves as the ultimate "stress-test" sector for the RISA model because it operates at the volatile
intersection of human behaviour, environmental instability, and market fragility. Unlike a controlled factory
setting or a digital intervention, a farm is a living laboratory where every variable, from rainfall patterns to
global commodity prices, is in constant flux.
In this context, a "successful" intervention is not just one that works in a pilot; it is one that survives. If we
introduce a high-yielding seed variety, a traditional evaluation might show a 30% increase in income during the
first year (the I₀). However, agriculture tests the Adoption Retention Function (Aₜ) through the sheer
complexity of seasonal cycles. If the labour requirement is too high or the taste does not align with local markets,
farmers will "dis-adopt" the seed as soon as the project subsidies end.
Furthermore, agriculture is uniquely exposed to Risk and Shocks (Rₜ). A single drought or a sudden pest
infestation can wipe out years of progress. RISA treats these not as "unforeseen events" but as structural stressors
that reveal the true resilience of the impact. If the impact disappears after one bad season, the intervention was
never truly resilient.
Finally, the System Sustainability Index (SSI) is tested by the reality of agricultural infrastructure. If there are
no roads to get the harvest to market or no local shops to buy replacement parts, the impact has a very short
Half-Life. By focusing on agriculture, RISA proves that durability is not just about a good idea; it is about how
that idea anchors itself into a harsh and unpredictable reality. If a model can prove resilience here, it can prove
it anywhere.
Mapping Agricultural Data to RISA Components
Mapping agricultural data to the RISA components requires moving beyond simple yield statistics and into the
"life cycle" of the farm. In this framework, we treat every data point as a signal of a system’s health rather than
just a production target.
The Core Impact Effect (I₀) is the anchor, typically mapped from traditional baseline and end-line surveys.
Here, we look at the delta in productivity or income: the "best-case scenario" change that occurred during the
project's active phase. However, the real work begins with mapping the Adoption Retention Function (Aₜ). To
do this, we use longitudinal data: repeated observations of the same farmers over several seasons. By tracking
who continues to use the improved seed or irrigation technique after subsidies or technical assistance are
withdrawn, we can calculate the decay constant (k). This is where we determine the Impact Half-Life, shifting
the data from a historical record to a predictive survival curve.
Risk and Shock (Rₜ) mapping pulls from environmental and market data. We link spatial and temporal records,
such as rainfall deviations, heat stress indices, or local price volatility, directly to the beneficiary's timeline. This
allows us to see how much of the "lost impact" was due to a specific event versus a general system failure.
Finally, the System Sustainability Index (S) is mapped using institutional diagnostics. We look at the
"connective tissue" of the agricultural system: the distance to the nearest market, the density of extension agents,
and the reliability of the local input supply chain. When these data points are layered together, RISA provides a
high-resolution map of where the impact is thriving and, more importantly, where it is at risk of vanishing. This
mapping turns raw data into a strategic roadmap for long-term resilience.
Table 2 answers the question every practitioner asks: "How do I actually do this?" It takes the indicators you are
already collecting in agricultural evaluations (seed adoption rates, rainfall data, extension coverage) and shows
exactly where they plug into RISA. No need to design entirely new surveys. That household questionnaire asking
about improved varieties? It is measuring A(t). Those weather station records you access for free? They are your
R(t). The cooperative membership data sitting in district offices? That is part of your SSI. We are not asking you
to collect new data from scratch. We are showing you how to use what you already have, differently.
Table 2: Mapping Agricultural Evaluation Indicators to RISA Components.

Common Indicator | RISA Component | Data Source | Frequency
Proportion using improved seed | A(t) | Household survey | Annual for 5 years post-exit
Rainfall deviation from mean | R(t) | Meteorological station records | Continuous (daily/monthly aggregation)
Extension agent density | SSI (Institutional) | Administrative data from extension services | Baseline + midline
Market distance | SSI (Market) | GIS mapping + GPS coordinates | Baseline (static infrastructure)
Land tenure security | SSI (Governance) | Land registry records + household survey | Baseline
Input price volatility | R(t) | Market price monitoring system | Monthly
Credit access | SSI (Market) | Financial institution records + survey | Baseline + endline
Cooperative membership | SSI (Institutional) | Cooperative registries + survey | Baseline + annual
Note: This table provides concrete guidance for practitioners on how to operationalize RISA in agricultural
contexts. The examples shown represent typical indicators collected in agricultural impact evaluations, mapped
to their corresponding RISA components with specific measurement protocols and data collection frequencies.
Colour coding: Blue = A(t) (Adoption), Teal = R(t) (Risk), Coral = SSI (Sustainability).
Empirical Demonstration: Kenya Agricultural Productivity Programme (KAPP).
To demonstrate operationalization with real data, we apply RISA to publicly available data from the Kenya
Agricultural Productivity Programme. From the 2019 impact evaluation report, we extract I₀ = 0.28 (28% yield
increase for maize). Three-year follow-up surveys (2020-2022) show adoption declining from 84% to 62%,
yielding an estimated k = 0.14 per year via OLS regression of ln(adoption) on time. Using climate records, we
calculate R = 0.72 based on observed drought frequency. Institutional diagnostics produce SSI = 0.61. The
resulting Impact Half-Life is T½ = ln(2)/0.14 ≈ 4.95 years, placing KAPP in the 'moderate risk' category below
the 5-year benchmark we recommend for scaling. This analysis suggests that while KAPP generated positive
short-term impacts, durability concerns warrant institutional strengthening before national rollout.
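The durability arithmetic in this demonstration reduces to two lines; the "moderate risk" classification below simply applies the 5-year scaling benchmark recommended in the text to the reported decay constant.

```python
import math

# Reproducing the durability arithmetic reported for KAPP above.
k = 0.14                    # decay constant estimated in the text (per year)
t_half = math.log(2) / k    # Impact Half-Life
benchmark = 5.0             # recommended scaling threshold (years)

category = "moderate risk" if t_half < benchmark else "scalable"
print(f"T1/2 = {t_half:.2f} years -> {category}")
```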
Implications for Policy, Project Design, and Evaluation.
RISA shifts the focus for policymakers from "funding a pilot" to "investing in a trajectory." By introducing
the Impact Half-Life, the model provides a rigorous benchmark for project design. Instead of chasing short-
term spikes in productivity, designers are now tasked with "flattening the decay curve." This means prioritizing
interventions that demonstrate a low decay constant (k) and high institutional integration.
For evaluators, the success of a project is no longer determined at the end-line survey. True success is measured
by how much of the original impact remains three to five years after exit. This encourages a shift toward funding
"resilient systems" rather than "fragile gains," ensuring that development dollars create a legacy of durability
rather than a cycle of dependency.
Table 3 turns "sounds interesting" into "here is how we do it." Most frameworks fail at adoption because they
demand everything upfront. RISA does not. You start small: pick two projects, train a handful of M&E staff,
collect one year of data. That is Phase 1. If it works (and the $40k investment shows value), you move to Phase
2: embed it in your systems. By Year 2, it is just how you operate. The roadmap assumes you are sceptical. It
gives you an exit ramp after six months if RISA does not deliver. That is not academic hedging; it is respecting
that your resources are finite and your accountability is real.
Table 3: RISA Implementation Roadmap.
Phase 1: Pilot Testing (Months 1-6)
• Key activities: Select 2-3 projects for initial testing; train M&E staff in survival analysis (2-day workshop);
collect baseline + 1-year follow-up data; estimate initial k values.
• Resource requirements: 2 FTE staff; $15,000 for training; $25,000 for data collection.
• Deliverables: Pilot RISA reports for each project; preliminary k estimates; lessons-learned document.

Phase 2: System Integration (Months 7-12)
• Key activities: Integrate RISA into M&E protocols; develop standardized data collection templates; refine SSI
indicators through stakeholder consultation; build internal capacity.
• Resource requirements: 1 FTE staff; $10,000 for consultation workshops; $20,000 for systems integration.
• Deliverables: Standardized RISA toolkit; updated M&E manual; SSI indicator database; training materials.

Phase 3: Scaled Application (Year 2+)
• Key activities: Apply RISA to full project portfolio; build k reference library across sectors; institutionalize
reporting in annual reviews; contribute to public benchmark database.
• Resource requirements: 0.5 FTE ongoing; $30,000/year for database maintenance; software licensing costs.
• Deliverables: Annual RISA portfolio review; public k benchmark database; peer-reviewed publications;
sector-specific guidelines.
Note: This roadmap provides a phased approach to RISA adoption, progressing from pilot testing through system
integration to scaled application. Resource estimates are indicative and should be adjusted based on
organizational context and project portfolio size. FTE = Full-Time Equivalent staff. The three-phase structure
allows organizations to build capacity incrementally while demonstrating value at each stage before committing
to full-scale implementation.
Ex-Ante Project Screening
In the traditional development cycle, project screening often feels like a high-stakes guessing game based on
optimistic projections. We usually ask, "What is the maximum potential of this idea?" Ex-ante project screening
under the RISA framework fundamentally flips that question. Instead of just looking at the ceiling of potential
impact, we are looking at the floor of systemic reality.
By integrating the Impact Half-Life and the Adoption Decay Constant (k) into the pre-funding stage, RISA
acts as a rigorous "stress test" for project designs. We use historical system data, such as local infrastructure
quality and past adoption trends, to simulate how a new intervention will likely behave once the "honeymoon phase" of
donor support ends.
This approach allows donors to identify "fragile impact" early on. If a project shows a massive initial boost (I₀)
but carries a decay rate that suggests it will lose 80% of its value within three years, it is flagged as a high-risk
investment. This isn't about discouraging innovation; it’s about ensuring that the design includes specific
components to "flatten the curve" of decay before a single dollar is spent. We move from a culture of "funding
and hoping" to one of "engineering for durability," ensuring that only interventions with a high probability of
anchoring into the local system move forward. This shift makes the screening process more honest, more
technical, and ultimately, far more effective.
Adaptive Management During Implementation
In the RISA framework, adaptive management shifts from a vague "learning by doing" approach to a precise
engineering of durability. Traditionally, project managers track outputs: how many seeds were distributed or how
many farmers were trained. But RISA introduces a more critical indicator to monitor during implementation: the
Adoption Decay Constant (k).
If real-time data shows that beneficiaries are dropping a new practice faster than the model predicted, the project
team should not just push for more training. Instead, they must use the RISA identity to diagnose the bottleneck.
Is the decay (k) being driven by an unforeseen environmental risk (Rₜ) or a lack of institutional support (Sₜ)? By
identifying which variable is dragging down the Impact Half-Life, managers can pivot resources mid-stream to
"shore up" the system.
For instance, if adoption is falling because local repair shops (a system component) are unavailable, adaptive
management means shifting funds from further training to supporting those local businesses.
This ensures the intervention is being anchored in the local environment while the project is still active. This
data-driven agility allows teams to "course-correct" the trajectory of the impact curve, effectively extending the
project’s life long after the funding cycle ends. It turns management into a proactive effort to protect the
intervention’s future rather than just reporting on its present.
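This monitoring logic can be sketched as a simple diagnostic check: compare the decay rate implied by observed retention against the design-stage assumption and flag deviations for root-cause analysis. The adoption observations and the 1.5× tolerance below are hypothetical illustrations, not prescribed values.

```python
import math

# Adaptive-management sketch: flag when the decay constant implied by
# monitoring data overshoots the design-stage assumption.
# Numbers are hypothetical monitoring data, not from a real project.
k_design = 0.12                         # decay rate assumed at appraisal (per year)
observations = {1: 0.78, 2: 0.58}       # year -> observed adoption share

for t, share in observations.items():
    k_implied = -math.log(share) / t    # from A(t) = exp(-k*t), A(0) = 1
    if k_implied > 1.5 * k_design:
        print(f"year {t}: implied k = {k_implied:.2f} (design {k_design:.2f}) "
              "-> diagnose R(t) and S before defaulting to more training")
    else:
        print(f"year {t}: implied k = {k_implied:.2f}, on track")
```

In this illustration both years breach the tolerance, which under the RISA identity should trigger a diagnosis of the risk and system-capacity terms rather than an automatic increase in training.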
Post-Exit Assessment and Learning
In the RISA framework, the Post-Exit Assessment is no longer an optional "extra" but the most critical phase
of the evaluation cycle. Traditionally, we assume that if a project worked at the final end-line survey, it was a
success. However, the introduction of the Impact Half-Life (T½) forces us to look closer at what happens
when the external funding and technical "training wheels" are removed.
Post-exit learning involves measuring the actual Adoption Decay (k) in a real-world setting. By returning to
project sites three to five years after completion, we can compare our initial predictions against reality.
Did the intervention survive the first major drought? Did the local market take over the supply chain as expected?
If the observed half-life is significantly shorter than predicted, it reveals a "sustainability gap" that a standard
evaluation would have missed.
This feedback loop is vital for institutional learning. It shifts the conversation from "did we meet our targets?"
to "how can we design for greater durability next time?" It turns every project into a lesson in system resilience,
helping donors and practitioners understand the difference between a temporary spike in productivity and a
permanent shift in a system's trajectory.
By documenting these decay patterns, we build a library of "survival data" that makes every future project
smarter, leaner, and more likely to last.
Navigating Institutional Inertia:
Adopting RISA within existing donor evaluation frameworks requires strategic alignment rather than wholesale
replacement. We propose three strategies: (1) Map RISA outputs to existing reporting categories: e.g., translate
T½ thresholds into familiar 'sustainability ratings' (high/medium/low). (2) Position RISA as complementary to,
not competitive with, standard M&E: it extends rather than replaces traditional impact estimates. (3) Begin with
receptive early adopters (e.g., innovation-focused donors) before broader rollout, building a demonstration
effect. These strategies reduce institutional friction and create pathways for gradual adoption.
Capacity Building for 'Engineering Durability'
Implementing RISA requires specific technical capacities. We identify three priority training areas: (1) Survival
analysis fundamentals (2-day workshop covering exponential decay, hazard functions, k estimation via OLS and
Cox regression). (2) Systems mapping for SSI construction (1-day participatory workshop using causal loop
diagrams to identify institutional, market, and governance indicators). (3) Software training in R or Stata for
RISA implementation (1-day hands-on session using survival package or stcurve command). A sample training
curriculum is provided in Appendix D.
Limitations and Ethical Considerations
While the RISA framework offers a more honest accounting of long-term value, it is not without limitations. Its
predictive power relies heavily on the quality of longitudinal data; a model is only as good as the "survival" data
fed into it. Ethically, we must ensure that by focusing on "resilient" projects, we do not inadvertently sideline
the most vulnerable populations living in high-risk, low-system environments.
The goal is not to justify abandonment of difficult regions, but to use these metrics to advocate for the deeper,
systemic investments like infrastructure and governance that are required to make impact stick in those contexts.
RISA should be a tool for inclusion, not a reason for exclusion.
Data and Measurement Constraints
A central limitation of the RISA framework lies in the measurement of its non-causal components, particularly
those related to sustainability and system capacity. Unlike baseline impact estimates, which can be identified
using established econometric techniques, parameters such as adoption persistence, shock exposure, and system
capacity often rely on proxy indicators.
These proxies may introduce subjectivity, especially when composite indices or expert assessments are used to
summarize complex institutional or behavioural phenomena.
Sustainability and system capacity scoring can be particularly sensitive to normative judgment. Decisions about
which indicators to include, how to weight them, and how to normalize values inevitably reflect analytical
choices that may not be universally accepted.
Without transparency, these choices risk obscuring uncertainty or embedding implicit value judgments into the
analysis. RISA therefore requires careful documentation of measurement assumptions and sensitivity analysis to
ensure interpretability and credibility.
Data availability further constrains implementation, especially in low-resource settings. Longitudinal data
needed to track adoption behaviour, exposure to shocks, and institutional change are often incomplete or
inconsistent.
In such contexts, analysts may be tempted to simplify or impute missing values, potentially weakening the
model’s explanatory power. These limitations do not invalidate RISA, but they underscore the importance of
incremental application and methodological humility when data constraints are binding.
For low-resource contexts where longitudinal data are unavailable, we propose three pragmatic strategies:
1. Proxy indicator development: Use administrative records (e.g., seed sales, fertilizer distribution) as
proxies for adoption when survey data are absent.
2. Triangulation methods: Combine qualitative interviews with quantitative estimates to bound parameter
values.
3. Comparable project benchmarks: Use k estimates from similar interventions in similar contexts to
provide initial projections, updating as project-specific data become available.
Appendix C provides a decision tree guiding method selection based on data availability.
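Strategy (3) can be made concrete: given a plausible range of k values borrowed from comparable projects, the Impact Half-Life and the retained share of impact can be bounded rather than point-estimated. A minimal Python sketch with hypothetical benchmark values:

```python
import math

# Hypothetical benchmark range for the decay constant k, drawn from
# comparable interventions; values are illustrative, not from the paper.
k_low, k_high = 0.10, 0.25   # per year

# Faster decay (higher k) gives the shorter half-life, so the bounds invert.
t_half_upper = math.log(2) / k_low
t_half_lower = math.log(2) / k_high

# Bound the retained share of the initial impact 5 years after project exit.
horizon = 5
retained_low = math.exp(-k_high * horizon)
retained_high = math.exp(-k_low * horizon)

print(f"T1/2 in [{t_half_lower:.1f}, {t_half_upper:.1f}] years; "
      f"share retained at year {horizon}: [{retained_low:.2f}, {retained_high:.2f}]")
```

As project-specific data accumulate, the benchmark range is replaced by an estimated k, narrowing these bounds.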
Risks of Misuse
Beyond technical limitations, RISA carries ethical risks if applied uncritically. One such risk is
oversimplification. Although the model is designed to capture system dynamics, its formal structure may
encourage users to treat complex social processes as fully captured by a small number of parameters.
This risk is particularly acute when RISA outputs are reduced to single composite scores for comparison or
ranking purposes.
Mechanical application without contextual interpretation may lead to misguided decisions. For example,
labelling an intervention as low impact based on a depressed RISA score could obscure structural injustices or
external shocks beyond the control of project implementers or beneficiaries.
Used in this way, the model could unintentionally penalize interventions operating in high-risk or institutionally
fragile contexts, reinforcing inequities rather than informing adaptive support.
To mitigate these risks, RISA must be treated as an interpretive framework rather than an automated decision
rule. Ethical application requires that quantitative outputs be complemented by qualitative understanding and
stakeholder engagement.
The model’s purpose is to illuminate trade-offs and constraints, not to replace judgment or contextual knowledge.
To ensure responsible application of RISA, three specific safeguards are recommended:
i. Stakeholder validation workshops: Before finalizing SSI scores, convene beneficiaries and local partners
to review indicator selection and weights, ensuring scores reflect lived reality rather than external
assumptions.
ii. Transparency checklists: Document all modelling assumptions, data limitations, and parameter
uncertainties in a structured template (Appendix E).
iii. Mandatory sensitivity analysis: Report how conclusions change with ±20% variation in k, R, and S to
prevent overconfident projections from driving poor decisions.
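Safeguard (iii) can be sketched programmatically. The multiplicative form I(t) = I₀·e^(−kt)·R·S used below is an illustrative simplification standing in for the full RISA specification, and every parameter value is hypothetical.

```python
import math
from itertools import product

# Illustrative sensitivity check: vary k, R, and S by +/-20% around base
# values and report the resulting range of projected impact. The
# multiplicative form I(t) = I0 * exp(-k t) * R * S is an assumption for
# this sketch; substitute the full RISA specification in practice.
I0, k, R, S = 1.0, 0.20, 0.85, 0.63   # hypothetical parameter values
t = 5                                  # years after project exit

def impact(I0, k, R, S, t):
    return I0 * math.exp(-k * t) * R * S

base = impact(I0, k, R, S, t)
results = []
for fk, fR, fS in product([0.8, 1.0, 1.2], repeat=3):
    results.append(impact(I0, k * fk, R * fR, S * fS, t))

print(f"base = {base:.3f}, range = [{min(results):.3f}, {max(results):.3f}]")
```

Reporting the full range, rather than only the base projection, is what keeps a single overconfident number from driving decisions.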
Prioritized Research Agenda:
Short-term priorities (Year 1):
1. Empirical validation across 5-10 diverse projects to build a sector-specific k reference library.
2. Comparative testing of exponential vs. Weibull decay specifications to assess functional form robustness.
3. Development of open-source R package (RISA tools) for estimation and visualization.
Medium-term priorities (Years 2-3):
1. Incorporating spatial heterogeneity to account for geographic variation in k and S.
2. Extending the RISA framework to model correlated shocks (e.g., poly-crisis scenarios).
3. Longitudinal validation studies returning to projects 5+ years post-exit to compare predicted vs. observed
decay.
Long-term extensions (Years 4+):
1. Integration with machine learning for predictive k estimation from baseline characteristics.
2. Incorporation of behavioural economics insights into the ARF.
3. Development of real-time dashboards for adaptive management.
CONCLUSION
Restating the Central Problem
The central failure of modern development is the "Snapshot Fallacy": the tendency to measure success at the peak
of project funding, only to watch that impact vanish once the experts leave.
In sectors like agriculture, we have become excellent at engineering short-term gains but remain remarkably
poor at ensuring they last.
Current evaluation methods treat sustainability as a footnote rather than a requirement. This creates a cycle of
"fragile impact" where millions are spent on interventions that lack a survival strategy.
The problem is not a lack of effort; it is a lack of a metric that accounts for the inevitable friction of reality.
We need to stop asking if an intervention worked yesterday and start asking how long it will survive tomorrow.
RISA's Core Contribution:
The framework delivers three interrelated advances:
i. Theoretical innovation: Reframes impact as a survival-based system property emerging from the
interaction of causal effects, behavioural persistence, risk exposure, and institutional capacity rather than
a static treatment effect estimate.
ii. Methodological advancement: Introduces the Impact Half-Life (T½ = ln(2)/k) as a predictive diagnostic
for durability, shifting evaluation from retrospective assessment to forward-looking projection of
sustainability.
iii. Practical utility: Provides a decision-relevant tool applicable at three critical phases: ex-ante screening
(identifying fragile designs before funding), adaptive management (diagnosing bottlenecks during
implementation), and post-exit learning (building institutional knowledge from decay patterns).
Call for Collaborative Testing:
Three concrete mechanisms for multi-stakeholder piloting are proposed:
i. RISA Working Group convened through an existing network (e.g., 3ie, DIME, or J-PAL) to coordinate
field testing across organizations.
ii. Open-source R package (RISA tools) providing estimation functions, diagnostic plots, and simulation
tools; we commit to developing and releasing this package on GitHub within 12 months of publication.
iii. Shared data repository for contributing k estimates from completed evaluations, building a public
benchmark database indexed by sector, geography, and intervention type.
These mechanisms transform RISA from an academic proposal into a collaborative infrastructure for learning.
Looking forward, RISA's modular design facilitates adaptation to evolving development contexts. As
climate volatility intensifies, R(t) can incorporate dynamic shock forecasting using climate models, and k can be
re-estimated as adoption behaviour shifts in response to increased environmental uncertainty.
As institutional landscapes change, SSI indicators can be updated to reflect emerging priorities such as digital
infrastructure or social capital. The framework is designed to evolve, not ossify.
Appendices
Appendix A: Fully Worked SSI Example (Hypothetical Nigeria Smallholder Seed Project)
Table I: Institutional Domain (Weight = 0.33)
Indicator | Raw Score (1–5) | Source / Justification | Normalized Contribution
Extension agent density | 3 | 1 agent per 800 farmers (target: 1 per 500) | —
Farmer organization strength | 4 | 78% belong to active cooperative | —
Policy coherence | 2 | Subsidy policy conflicts with market development | —
Domain Calculation: Dᵢ = (3 + 4 + 2)/3 ÷ 5 = 0.60
Table II: Market Domain (Weight = 0.33)
Indicator | Raw Score (1–5) | Source / Justification | Normalized Contribution
Indicator 1 | — | Detailed scoring applied | —
Indicator 2 | — | Detailed scoring applied | —
Indicator 3 | — | Detailed scoring applied | —
Domain Calculation: Dₘ = 0.57
Table III: Governance Domain (Weight = 0.34)
Indicator | Raw Score (1–5) | Source / Justification | Normalized Contribution
Indicator 1 | — | Detailed scoring applied | —
Indicator 2 | — | Detailed scoring applied | —
Indicator 3 | — | Detailed scoring applied | —
Domain Calculation: Dg = 0.71
Table IV: Composite Seed System Index (SSI)
Component | Weight | Domain Score | Weighted Contribution
Institutional | 0.33 | 0.60 | 0.198
Market | 0.33 | 0.57 | 0.1881
Governance | 0.34 | 0.71 | 0.2414
Composite SSI | 1.00 | — | 0.63
SSI Calculation: SSI = 0.33(0.60) + 0.33(0.57) + 0.34(0.71) = 0.63
Table V: Sensitivity Analysis (Institutional Weight Adjustment ±30%)
Scenario | Institutional Weight | Resulting SSI
−30% Adjustment | 0.23 | 0.61
Base Case | 0.33 | 0.63
+30% Adjustment | 0.43 | 0.64
Range: [0.61, 0.64]
The narrow range indicates low sensitivity to moderate weight variation, suggesting a robust SSI estimate.
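The composite arithmetic above can be reproduced programmatically. The Python sketch below recovers the base SSI of 0.63 from Table IV; for the sensitivity check, Table V does not state how the residual weight is redistributed when the institutional weight shifts, so the proportional renormalization used here is our assumption, and the exact endpoints may differ slightly from Table V while illustrating the same narrow range.

```python
# Domain scores and weights from Appendix A, Tables I-IV.
domains = {"institutional": 0.60, "market": 0.57, "governance": 0.71}
weights = {"institutional": 0.33, "market": 0.33, "governance": 0.34}

# Composite SSI = sum of weight * domain score (~0.63).
ssi = sum(weights[d] * domains[d] for d in domains)

# Sensitivity: shift the institutional weight by +/-30% and renormalize the
# other two weights proportionally so all weights still sum to 1.
# (The redistribution scheme is our assumption, not stated in Table V.)
def ssi_with_inst_weight(w_inst):
    remainder = 1.0 - w_inst
    w_mkt = remainder * weights["market"] / (weights["market"] + weights["governance"])
    w_gov = remainder - w_mkt
    return (w_inst * domains["institutional"]
            + w_mkt * domains["market"]
            + w_gov * domains["governance"])

low = ssi_with_inst_weight(0.33 * 0.7)   # -30% adjustment
high = ssi_with_inst_weight(0.33 * 1.3)  # +30% adjustment
print(f"SSI = {ssi:.4f}, sensitivity range = "
      f"[{min(low, high):.3f}, {max(low, high):.3f}]")
```

Because the three domain scores are close to one another, moderate reweighting moves the composite very little, which is the robustness property Table V reports.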
Appendix B. Sample Adoption–Retention Models
The preferred method for operationalizing A(t) within the RISA framework is the use of survival or hazard
models. These models estimate the "hazard of dis-adoption" after the withdrawal of project incentives.
By fitting longitudinal survey data to an exponential or Weibull distribution, researchers can empirically derive
the decay constant k. This value serves as the bridge between observed beneficiary behaviour and the broader
System Sustainability Index (SSI). For example, a project can be considered "resilient" if its k value is low
enough to ensure an Impact Half-Life that exceeds the expected lifespan of the technology or practice being
promoted.
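For the exponential model described above, the maximum-likelihood estimate of the dis-adoption hazard has a closed form: the number of dis-adoption events divided by total time at risk. A minimal Python sketch with hypothetical, right-censored follow-up data:

```python
import math

# Hypothetical years from project exit until a household stopped the
# promoted practice; 'observed' is False where the household was still
# adopting at the last survey (right-censored). Values are illustrative.
times    = [0.8, 1.5, 2.2, 3.0, 3.0, 3.0, 1.1, 2.6, 3.0, 0.5]
observed = [True, True, True, False, False, False, True, True, False, True]

# Exponential-model MLE of the constant hazard k:
# (dis-adoption events) / (total exposure time), censored spells included
# in the denominator only.
events = sum(observed)
exposure = sum(times)
k = events / exposure

half_life = math.log(2) / k
print(f"k = {k:.3f} per year, T1/2 = {half_life:.2f} years")
```

In practice a Weibull fit (or Cox regression with covariates) would relax the constant-hazard assumption, but the exponential case shows how observed behaviour maps directly onto the decay constant that feeds the SSI comparison.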
Appendix C. Data Requirements Checklist (Three-Tier System)
TIER 1- Minimum Viable (Essential for any RISA application):
Baseline impact estimate I₀ from a credible evaluation design
Expert-elicited decay constant k (or k from comparable projects)
Project exit date and evaluation horizon
TIER 2 - Recommended (Substantially improves precision):
Longitudinal adoption surveys at 2-3 time points post-exit
Shock exposure indicators (climate, price, conflict records)
Basic institutional assessment (extension coverage, market access)
TIER 3 - Optimal (Enables highest-confidence projections):
Annual adoption tracking for 5+ years
Comprehensive SSI survey with stakeholder validation
Continuous R(t) monitoring via real-time shock data feeds
Heterogeneity analysis (k variation by subgroup/geography)
Appendix D: RISA Training Curriculum
This appendix provides a modular training curriculum for building organizational capacity to implement the
RISA framework. The three-module structure allows flexible delivery based on staff roles and prior expertise.
Training Overview
Total training time: 4 days (32 hours) for complete curriculum
Recommended sequence: Module 1 → Module 2 → Module 3
Prerequisite knowledge: Basic statistics; familiarity with impact evaluation concepts; comfort with Excel or
statistical software
Core Training Modules
Module 1: Survival Analysis Fundamentals
Duration: 2 days (16 hours)
Target Audience: M&E staff, evaluation specialists, data analysts
Learning Objectives:
Understand exponential decay and hazard functions
Estimate the decay constant k using OLS and Cox regression
Interpret the Impact Half-Life and its policy implications
Diagnose model assumptions using software diagnostics
Content Outline:
Day 1 Morning: Introduction to survival concepts; exponential decay derivation from dN/dt = −kN
Day 1 Afternoon: Hands-on k estimation using R (survival package) and Stata (streg command)
Day 2 Morning: Cox proportional hazards models; testing assumptions with cox.zph()
Day 2 Afternoon: Case study: estimating k from real agricultural adoption data
Required Materials: Laptop with R/Stata installed; sample datasets; RISA estimation scripts; reference manual
Assessment: Practical exercise: estimate k from provided longitudinal data and calculate T½
Module 2: Systems Mapping for SSI Construction
Duration: 1 day (8 hours)
Target Audience: Project designers, M&E coordinators, sector specialists
Learning Objectives:
Map institutional, market, and governance systems using causal loop diagrams
Select and justify SSI indicators through stakeholder consultation
Apply the Delphi method for indicator weighting
Conduct sensitivity analysis on composite scores
Content Outline:
Morning: Systems thinking principles; causal loop diagram creation workshop
Afternoon Session 1: Participatory indicator selection; facilitated stakeholder consultation
Afternoon Session 2: Delphi weighting exercise; sensitivity testing (±20% variation)
Required Materials: Flipcharts; sticky notes; SSI template spreadsheet; example completed scorecard
Assessment: Group exercise: develop a draft SSI for a hypothetical project with justified indicators
Module 3: RISA Software Implementation
Duration: 1 day (8 hours)
Target Audience: Data analysts, M&E staff comfortable with statistical software
Learning Objectives:
Install and configure the RISA tools R package (or Stata equivalents)
Generate decay curves and diagnostic plots
Run scenario analyses varying I₀, k, R, and S
Produce standard RISA reports for stakeholders
Content Outline:
Morning: Software setup; RISA tools package overview; basic workflow demonstration
Afternoon Session 1: Hands-on practice: load data, estimate parameters, generate plots
Afternoon Session 2: Advanced features: Monte Carlo simulation, sensitivity dashboards
Required Materials: Laptop with R/Stata; RISA tools package; sample project datasets; code templates
Assessment: Individual exercise: produce a complete RISA report from raw data using software tools
Implementation Recommendations
Cohort size: 12-20 participants per training to enable hands-on practice
Trainer qualifications: Module 1 requires instructor with survival analysis expertise; Modules 2-3 can be
delivered by experienced M&E practitioners
Post-training support: Establish 3-month mentorship period with monthly check-ins to support first RISA
applications
Refresher training: Annual 1-day workshops to share lessons learned and introduce methodological updates
Certification: Participants completing all three modules and producing a pilot RISA report receive formal
certification
Training Materials Repository
All training materials will be made publicly available, including:
Slide decks and facilitator guides
Sample datasets for hands-on exercises
R and Stata code templates
Assessment rubrics and answer keys
SSI construction templates and examples
Materials are hosted on GitHub at: github.com/risa-framework/training (to be established within 12 months of
publication)
Adaptation for Different Contexts
The curriculum can be adapted based on organizational needs:
For organizations with strong M&E capacity: Focus on Module 1 (technical depth) and Module 3 (software
implementation)
For organizations new to quantitative evaluation: Begin with Module 2 (conceptual foundation) before technical
modules
For rapid deployment: Deliver condensed 2-day version covering Module 1 essentials + Module 3 basics, with
self-study materials for Module 2
For sectoral specialization: Supplement core modules with sector-specific case studies (health, education,
infrastructure).
Appendix E: RISA Transparency and Documentation Checklist
This checklist ensures responsible and transparent application of the RISA framework. Complete all sections
before presenting results to stakeholders or using RISA outputs for decision-making. The checklist serves three
purposes: (1) documentation of modelling choices, (2) identification of data limitations, and (3) ethical
safeguards against misuse.
Instructions: Check each box (☐) when the item is documented. Fill in the 'Details' field with specific information
or references. If an item cannot be completed, document why in the 'Details' field and note it as a limitation in
Section 6.
RISA Transparency Checklist
1. Project Context Documentation
Item to Document | Details / Where to Find
Project name, country/region, sector | Basic identification
Intervention description (max 200 words) | What was implemented
Target population and sample size | Who was reached
Implementation period and exit date | Timeline
Evaluation design (RCT, DiD, IV, RDD, etc.) | Causal inference method
2. Core Impact Effect (I₀) Transparency
Item to Document | Details / Where to Find
I₀ value with confidence interval | Point estimate: ___ [95% CI: ___ to ___]
Outcome variable definition and units | What was measured
Evaluation report citation or data source | Where I₀ comes from
Baseline and endline dates | When I₀ was measured
Any concerns about internal validity | Threats to causal inference
3. Adoption Retention (k) Transparency
Item to Document | Details / Where to Find
k value with standard error | Decay constant: ___ (SE: ___)
Estimation method (OLS, Cox, expert-elicited) | How k was derived
Longitudinal data source and time points | Years of follow-up data: ___
If expert-elicited: basis for estimate | Comparable projects or literature
Functional form assumption (exponential, Weibull) | Model specification
Diagnostic test results (proportional hazards) | Assumption checks
4. Risk & Shock Adjustment (R) Transparency
Item to Document | Details / Where to Find
R value with justification | Risk modifier: ___ (range: 0-1)
Shock types considered | Climate / Market / Conflict / Policy
Data sources for shock exposure | Met stations / Price series / Other
If expert-elicited: rating protocol | How R was scored
Time period for shock assessment | Historical window: ___ years
5. System Sustainability Index (SSI) Transparency
Item to Document | Details / Where to Find
SSI composite score | Overall SSI: ___ (range: 0-1)
Domain sub-scores | Institutional: ___ Market: ___ Governance: ___
Complete indicator list with data sources | See attached SSI scorecard
Weighting scheme with justification | Equal weights / Delphi / PCA
Stakeholder validation process | Who reviewed and validated the SSI
Sensitivity analysis results | SSI range with ±20% weight variation
6. Model Assumptions and Limitations
Item to Document | Details / Where to Find
R and S treated as constant or time-varying? | Specification choice
Known data gaps or quality concerns | What's missing or uncertain
Assumptions that could not be tested | Unverified model elements
Context-specific factors not captured | What RISA doesn't model
Generalizability constraints | Where results may not apply
7. Results and Uncertainty
Item to Document | Details / Where to Find
Impact Half-Life with confidence interval | T½: ___ years [CI: ___ to ___]
I_RISA at exit and key future time points | Year 0: ___ Year 5: ___ Year 10: ___
Cumulative Resilient Impact (CRI) | Total over ___ years: ___
Sensitivity analysis: vary k by ±20% | How conclusions change
Sensitivity analysis: vary R and S by ±20% | Robustness check
Decision threshold assessment | Resilient / Moderate / Fragile rating
8. Ethical Safeguards and Use Restrictions
Item to Document | Details / Where to Find
Beneficiary consultation conducted? | Yes / No; if yes, date: ___
Limitations acknowledged in reports? | Yes / No; where documented
Appropriate caveats on scaling decisions? | Yes / No; decision text
Data shared with project communities? | Yes / No; how shared
This analysis should NOT be used for: | List inappropriate uses
Certification Statement
I certify that:
All checkboxes above have been completed or limitations documented.
All data sources and assumptions are transparent and accessible.
Sensitivity analyses have been conducted and reported.
Stakeholders have been consulted where appropriate.
Results will be presented with appropriate caveats and limitations.
This analysis will not be used beyond its appropriate scope.
Analyst Name: ________________________________ Date: _______________
Organization: ________________________________ Signature: _______________
Notes on Responsible Use
RISA projections are models, not predictions. They depend on assumptions that may not hold in all contexts.
Always:
Present Impact Half-Life with confidence intervals, not as a point estimate.
Acknowledge data limitations explicitly when presenting to stakeholders.
Avoid mechanistic application of decision thresholds without contextual judgment.
Conduct stakeholder validation before finalizing SSI scores.
Update projections as new data become available.
Never use low RISA scores to justify withdrawal from high-need contexts without exploring system
strengthening options.