Adaptive Trial Designs in Oncology
Definition
An adaptive design is a clinical trial that allows prospectively planned modifications to the design, sample size, population, randomization ratio, or statistical analysis based on accumulating interim data, while preserving the type I error rate and maintaining statistical power. Adaptations are determined by pre-specified rules codified in the Statistical Analysis Plan (SAP) before unblinding.
Regulatory perspective (FDA 2019, Final): Adaptive designs are permitted for all phases (exploratory, confirmatory); the key requirement is pre-specification and rigorous type I error control.
Regulatory perspective (ICH E20 2025, DRAFT Step 2b): Formalizes adaptive design principles globally; stricter requirements for simulation documentation and sensitivity analyses; introduces Bayesian adaptive frameworks as co-equal to frequentist conditional error function approaches.
FDA 2019 Classification: Well-Understood vs Less Well-Understood Adaptive Designs
Well-Understood Adaptive Designs (Lower Regulatory Burden)
Characteristics: Extensive experience; clear type I error control; limited complexity
Examples:
- Group Sequential Designs (GSD)
- Interim efficacy/futility stopping with pre-specified boundaries (O'Brien-Fleming, Pocock)
- No sample size re-estimation; no population changes
- Type I error controlled by spending functions
- Regulatory burden: Low; standard analysis
- R packages: rpact, gsDesign2
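The O'Brien-Fleming and Pocock boundaries mentioned above are typically implemented through an α-spending function. A minimal Python sketch of the Lan-DeMets O'Brien-Fleming-type spending function (one-sided α = 0.025; the constant 2.2414 is Φ⁻¹(1 - α/2)), standard library only:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def obf_spending(t: float, z_half: float = 2.2414) -> float:
    """Lan-DeMets O'Brien-Fleming-type alpha spent by information fraction t:
    alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t))), so alpha*(1) = alpha.
    z_half = Phi^{-1}(1 - alpha/2) ~= 2.2414 for one-sided alpha = 0.025."""
    return 2.0 * (1.0 - norm_cdf(z_half / sqrt(t)))

print(round(obf_spending(0.5), 5))   # alpha spent at the 50% interim
print(round(obf_spending(1.0), 5))   # total spend equals 0.025
```

At 50% information only about 0.0015 of the 0.025 is spent, which is why O'Brien-Fleming interim boundaries are so conservative.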
- Blinded Sample Size Re-Estimation
- Recalculate sample size at interim based on nuisance parameters only (event rate, dropout rate)
- Do NOT unblind treatment effect
- Preserves type I error under standard conditional error function
- Example: Event rate lower than expected → increase N proportionally
- Regulatory burden: Low; straightforward inflation factor or conditional error analysis
- R packages: rpact, adaptIVPT
- Planned Interim Analysis with Futility Only
- Test for futility (conditional power <20%) at interim
- Efficacy tested only at final analysis
- Type I error = α (no inflation)
- Regulatory burden: Minimal
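The conditional-power futility rule above has a closed form under the common "current trend" assumption (drift estimated from the interim statistic itself, which is one of several conventions); a Python sketch using the 20% threshold from the text:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z1: float, t: float, z_alpha: float = 1.96) -> float:
    """Conditional power under the 'current trend' assumption: the interim
    statistic z1 at information fraction t implies drift z1 / sqrt(t), giving
    CP = 1 - Phi((z_alpha - z1 / sqrt(t)) / sqrt(1 - t))."""
    return 1.0 - norm_cdf((z_alpha - z1 / sqrt(t)) / sqrt(1.0 - t))

# A mediocre interim signal at 50% information sits near the 20% futility line
print(round(conditional_power(1.0, 0.5), 3))
```

With z₁ = 1.0 at 50% information, CP is roughly 22%, just above a 20% futility threshold; a weaker interim statistic would trigger the stop.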
Less Well-Understood Adaptive Designs (Higher Regulatory Burden)
Characteristics: Complex type I error control; limited historical experience; requires simulation validation
Examples:
- Unblinded Sample Size Re-Estimation
- Re-estimate N based on observed treatment effect at interim
- Adaptive combination test (p₁, p₂, correlation structure) required to preserve α
- Risk: If treatment effect weak, adaptive increase in N + unblinded data → type I error inflation
- Control: Conditional error function approach (Cui, Hung, Wang 1999)
- Regulatory burden: HIGH; requires simulation proof of type I error control
- R packages: rpact (conditional error), MAMS
- Population Enrichment (Adaptive Subgroup Selection)
- Full population enrolled initially
- Interim analysis: If pre-specified biomarker/subgroup shows superior effect, adapt to enrich that population in stage 2
- Risk: Multiple testing across biomarkers; inflation if not pre-specified
- Control: Hierarchical gatekeeping or Bayesian methods; pre-specification of biomarkers and decision rules
- Example: PD-L1+ vs PD-L1- cohorts in immunotherapy trial; adapt if PD-L1+ shows benefit
- Regulatory burden: VERY HIGH; FDA requires detailed biomarker rationale, pre-specification, simulation
- Adaptive Randomization (Response-Adaptive, Covariate-Adaptive)
- Randomization probabilities change during trial based on:
- Response-adaptive: Interim efficacy/safety data (e.g., favor arm with higher response rate)
- Covariate-adaptive: Balance baseline covariates in real-time
- Risk: Bias in estimation; loss of blinding if observed (e.g., more patients to one arm = possible unblinding)
- Control: Analysis must adjust for adaptive allocation; conditional error function + sensitivity analysis
- Regulatory burden: VERY HIGH; FDA skeptical; requires rigorous justification and simulation
- R packages: adaptIVPT; specialized Bayesian software
- Seamless Phase 2/3 Designs
- Single trial spanning dose-ranging (Phase 2) and efficacy (Phase 3)
- Interim: Select optimal dose(s) based on Phase 2 data
- Stage 2: Confirm efficacy at selected dose(s) in larger population
- Risk: Dose selection based on same population; if selection wrong, phase 3 underpowered; multiplicity across doses
- Control: Pre-specification of dose selection criteria; hierarchical testing or closed testing procedures; often Bayesian
- Example: KEYNOTE-407 (pembrolizumab in squamous NSCLC) used adaptive design for dose confirmation
- Regulatory burden: HIGH; requires simulation and pre-specification
ICH E20 (2025 Draft, Step 2b): Key Differences from FDA 2019
What ICH E20 Adds to FDA 2019
- Formal Bayesian Adaptive Framework
- FDA 2019: Mentions Bayesian methods; primarily focuses on frequentist conditional error function
- ICH E20 DRAFT: Elevates Bayesian adaptive designs to co-equal status with frequentist approaches
- Allows Bayesian response-adaptive randomization with proper type I error control via posterior predictive probability
- Recognizes informative priors from historical data as valid for confirmatory trials (if pre-specified)
- Stricter Simulation Requirements
- FDA 2019: Simulation recommended for complex designs; not always required
- ICH E20 DRAFT: Mandatory simulation for all adaptive designs with:
- ≥10,000 replicates under null (type I error)
- ≥1,000 replicates under alternative (power)
- Scenario coverage: All plausible parameter combinations (not single-point estimates)
- Sensitivity analysis: MNAR (missing not at random) robustness if applicable
- Explicit Guidance on IDMC (Independent Data Monitoring Committee) Role
- FDA 2019: IDMC role implied; details sparse
- ICH E20 DRAFT: Detailed specifications:
- IDMC must have statistical expertise in adaptive design methodology
- Interim futility decisions must be based on pre-specified conditional power thresholds (not subjective judgment)
- Adaptive allocations (randomization ratio change, population switch) require IDMC recommendation + sponsor transparency
- Intercurrent Events and Missing Data Under Adaptation
- FDA 2019: Links to ICH E9(R1) but doesn't address how adaptations interact with intercurrent events
- ICH E20 DRAFT: Explicit guidance:
- If population enrichment occurs, estimand must be re-defined (principal stratum, hypothetical scenario, etc.)
- If randomization adapts, missing data strategy may need re-specification in the enriched population
- Example: If trial enriches to biomarker+ patients at interim, primary estimand shifts to biomarker+ population; missing data assumption (MAR vs MNAR) must be re-justified for new population
- Historical Data and Borrowing
- FDA 2019: Cautious on external data; generally recommends prospective trials
- ICH E20 DRAFT: Formalizes use of historical data borrowing via:
- Informative priors (Bayesian)
- Power priors (weighted historical information)
- Requires prior-data conflict assessment (Bayesian predictive checks)
- Discordance plan: What if historical data conflict with interim findings?
- Timing and Pre-specification in SAP
- FDA 2019: Pre-specification recommended; some flexibility for modifications via Type C meeting
- ICH E20 DRAFT: Stricter: All adaptive rules, decision thresholds, and conditional power/futility criteria must be in SAP before any unblinding
- Modifications post-IND or after interim only allowed under exceptional circumstances (with written FDA justification)
Sample Size Re-Estimation: Blinded vs Unblinded
Blinded Sample Size Re-Estimation (Well-Understood)
When permissible: Adjust N based on nuisance parameters (event rate, dropout, variance) observed at interim, without unblinding treatment effect.
Procedure:
- At interim (e.g., 50% information), assess event rate or variance in pooled population (all arms combined)
- Compare to original assumption
- Recalculate sample size: for a continuous endpoint, N_new = N_original × (σ_observed / σ_assumed)²; for a time-to-event endpoint, rescale patients and/or follow-up to restore the required event count
- Increase enrollment if needed; the trial continues blinded
- Final analysis: Combined test across both stages
Type I Error Control: Essentially preserved under blinded SSR (no conditional error function required), since no comparative treatment-effect information is used
Example (Oncology):
# Original assumption: median OS = 12 months → λ = ln(2)/12 ≈ 0.0578/month
# At interim: observed pooled event rate = 0.035/month (lower than assumed)
# Fewer events accrue per patient per month → more patients or longer follow-up needed
# Recalculate: N_new ≈ N_original × (0.0578 / 0.035) ≈ 1.65 × N_original (to restore the expected event count)
R Package: rpact supports blinded SSR with inflation factor calculation
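To sanity-check the arithmetic in code: a rough sketch assuming exponential survival and fixed follow-up, under which the expected event count scales approximately linearly with the hazard rate (rpact computes exact inflation factors accounting for accrual and censoring; the original sample size of 400 below is illustrative):

```python
from math import log

# Assumed at design: median OS = 12 months -> hazard = ln(2)/12 per month
lambda_assumed = log(2) / 12      # ~0.0578
lambda_observed = 0.035           # pooled (blinded) interim estimate

# Keep the expected number of events fixed: N scales with the rate ratio
inflation = lambda_assumed / lambda_observed
n_original = 400                  # illustrative original sample size
n_new = round(n_original * inflation)

print(round(inflation, 2), n_new)
```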
Unblinded Sample Size Re-Estimation (Less Well-Understood)
When permissible: Recalculate N based on observed treatment effect (e.g., observed HR) at interim.
Risk: After observing a weak effect, an unblinded increase in N can "rescue" the trial; a naive final analysis that ignores the data-driven sample size change inflates the type I error.
Type I Error Control Required: Conditional Error Function (Cui, Hung, Wang 1999)
Procedure:
- At interim, unblind and observe:
- Z₁ = interim test statistic (standardized log-rank, for example)
- p₁ = interim p-value
- Calculate conditional power (or, equivalently, the conditional error):
- CP = P(reject at final | interim data)
- If CP > 80%: trial likely to succeed; may stop early for efficacy
- If CP < 20%: trial futile; stop for futility
- If 20% ≤ CP ≤ 80%: continue with re-estimated N
- Pre-specify decision boundaries (α-spending function) that account for the interim look
- Final analysis combines Z₁ and Z₂ via a combination test:
- Z_combined = w₁ × Z₁ + w₂ × Z₂ (weights pre-specified, with w₁² + w₂² = 1 so Z_combined is standard normal under H₀)
- Or: inverse normal method on p-values: p_final = Φ(w₁ × Φ⁻¹(p₁) + w₂ × Φ⁻¹(p₂))
Type I Error Control: Guaranteed by proper weighting and spending function choice (requires simulation validation)
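The combination step can be condensed into a few lines; a Python sketch of the inverse normal method on stage-wise p-values, assuming pre-specified weights with w₁² + w₂² = 1:

```python
from statistics import NormalDist
from math import sqrt

_N = NormalDist()  # standard normal

def inverse_normal_p(p1: float, p2: float, w1: float, w2: float) -> float:
    """Combine stage-wise one-sided p-values via the inverse normal method.
    Weights must be pre-specified and satisfy w1^2 + w2^2 = 1 so the
    combined statistic is standard normal under H0."""
    assert abs(w1 * w1 + w2 * w2 - 1.0) < 1e-9
    z = w1 * _N.inv_cdf(1.0 - p1) + w2 * _N.inv_cdf(1.0 - p2)
    return 1.0 - _N.cdf(z)

# Equal-information stages: w1 = w2 = 1/sqrt(2)
w = 1.0 / sqrt(2.0)
print(round(inverse_normal_p(0.02, 0.02, w, w), 5))
```

Two stage-wise p-values of 0.02 combine to a final p-value well below 0.025, illustrating why the weights must be fixed before the interim: they determine how much each stage contributes.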
Example SAP Language:
At interim analysis (50% information), if observed treatment effect is weak
(HR > 0.80), the trial will be re-powered to detect HR = 0.80 instead of
the original 0.70. The conditional error function approach (Cui, Hung, Wang)
will be applied. Final analysis will use inverse normal combination method
with weights: w₁ = 0.5, w₂ = 0.5 (symmetric).
Type I error control verified via simulation (10,000 replicates under null
hypothesis) showing α-level ≤ 0.025.
R Packages:
- rpact: conditional error function, design, and analysis functions
- MAMS: multi-arm, multi-stage adaptive designs with SSR
Adaptive Randomization
Response-Adaptive Randomization (RAR)
Mechanism: Randomization probabilities shift during trial based on interim efficacy or safety.
Example:
- Initially: 1:1 randomization (50% to each arm)
- At interim: Observe arm A response rate = 60%, arm B = 40%
- Adapt: Shift to 65:35 (favor responding arm A)
- Rationale: Ethical (more patients get better treatment); efficiency (fewer failures)
Risk:
- Bias in estimation: If allocation favors arm A, final effect estimate may be inflated
- Type I error inflation: If decision rule based on interim p-value, unblinded re-randomization can amplify significance
- Unblinding risk: Unequal randomization visible to site staff; can compromise blinding
Type I Error Control: Requires conditional error function + sensitivity analysis
Regulatory stance (FDA 2019): Possible, but skeptical; requires:
- Robust justification (ethical benefit clearly outweighs complexity)
- Pre-specified adaptation rule (not data-driven)
- Simulation showing type I error control
- Sensitivity analysis (e.g., what are the operating characteristics if the adaptation rule had not been applied and allocation stayed fixed?)
Regulatory stance (ICH E20 DRAFT): Opens door to Bayesian response-adaptive randomization using posterior probabilities:
- Randomization probability ∝ posterior P(arm is best | interim data)
- Type I error controlled via Bayesian predictive probability framework
- More transparent than frequentist RAR
R Packages:
- adaptIVPT: inverse probability weighting for RAR
- Bayesian software: rstan / Stan models for Bayesian RAR
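The "randomization probability ∝ posterior P(arm is best)" idea can be sketched with Beta-Binomial posteriors for a binary response endpoint; the arm counts, uniform priors, and number of posterior draws below are illustrative:

```python
import random

def prob_best(successes, failures, n_draws=20000, seed=1):
    """Monte Carlo posterior probability that each arm has the highest
    response rate, using independent Beta(1 + s, 1 + f) posteriors
    (uniform priors on each arm's response probability)."""
    rng = random.Random(seed)
    wins = [0] * len(successes)
    for _ in range(n_draws):
        draws = [rng.betavariate(1 + s, 1 + f)
                 for s, f in zip(successes, failures)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

# Illustrative interim data: arm A 30/50 responders, arm B 20/50
p_best = prob_best(successes=[30, 20], failures=[20, 30])
print([round(p, 3) for p in p_best])
```

A Bayesian RAR rule would then set stage-2 allocation proportional to p_best, often tempered (e.g., raised to a power c < 1 and renormalized) to avoid extreme imbalance.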
Covariate-Adaptive Randomization
Mechanism: Real-time balancing of baseline covariates (age, disease stage, biomarker status) across arms.
Example:
- PD-L1 status and ECOG performance status are critical confounders
- Traditional 1:1 randomization may imbalance these in small trials
- Covariate-adaptive (e.g., minimization, stratified permuted blocks): Ensure balance in real-time
Advantages:
- Reduces confounding; increases precision
- Negligible impact on type I error when the analysis accounts for the balancing covariates
- Blinding preserved (the allocation algorithm is invisible to patients)
Regulatory stance: Well-understood; FDA 2019 accepts covariate-adaptive randomization routinely.
Minimal Requirements:
- Pre-specify covariates and balancing algorithm (including any random element used in minimization)
- Document in SAP
- Simulation generally not needed
R Packages:
- blockrand: stratified permuted blocks
- minimise: minimization algorithms
Seamless Phase 2/3 Designs and Population Enrichment
Seamless Phase 2/3 Design
Structure:
- Stage 1 (Phase 2): Dose-ranging study in smaller population (e.g., n=100)
- Objective: Select optimal dose based on efficacy/tolerability
- Primary endpoint: Response rate or dose-limiting toxicity
- Stage 2 (Phase 3): Efficacy confirmation at selected dose in larger population (e.g., n=300 additional)
- Objective: Confirm efficacy of chosen dose vs. control
- Primary endpoint: OS or PFS
Advantages:
- Time and cost savings (single trial instead of two)
- Expedited development pathway
Risks:
- Dose selection bias: If Phase 2 population is small or unrepresentative, selected dose may not be optimal for Phase 3 population
- Multiplicity: If testing multiple doses in Phase 2, selection introduces multiple comparisons
- Type I error inflation: If final analysis uses same Phase 2 data to both select dose and test efficacy
Type I Error Control:
- Closed testing procedures (graphical methods, gatekeeping)
- Hierarchical hypothesis structure:
- Primary: Test efficacy of selected dose (Phase 3 data alone)
- Secondary: Test dose A, then dose B, etc. (Phase 2 data, closed testing)
- Bayesian approach: Informative prior on selected dose based on Phase 2 data; Phase 3 likelihood-based update
Regulatory stance (FDA 2019): Acceptable; requires clear pre-specification of dose selection rule.
Regulatory stance (ICH E20 DRAFT): Favors structured Bayesian approach for dose selection and confirmation.
Example (Real Trial): KEYNOTE-407 (pembrolizumab + chemotherapy in squamous NSCLC)
- Phase 2: Pembrolizumab 10 mg/kg Q2W selected
- Phase 3: Confirmed 10 mg/kg as optimal (vs. chemotherapy alone)
- Design: Seamless, with dose selection at interim
Population Enrichment Adaptation
Mechanism: Trial starts with broad population; at interim, focus on biomarker-defined subgroup (e.g., PD-L1 high, BRCA mutant).
Rationale:
- Biomarker effect uncertain at trial start (continuous response across subgroups)
- Interim data reveals biomarker signal
- Efficiency: Enrich to biomarker+ patients if they show benefit
Procedure:
- Stage 1: Enroll all-comers (n=200) with baseline biomarker assessment
- Interim Analysis (after ~100 events):
- Compare treatment effect in biomarker+ vs. biomarker- subgroups
- Decision rule (pre-specified):
- If biomarker+ subgroup shows superior response (e.g., HR < 0.65), enrich to biomarker+ only
- Otherwise, continue with all-comers
- Stage 2: Enroll additional biomarker+ patients (n=200 more), if enriched
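The interim decision rule above is deliberately mechanical; a toy Python sketch of applying a pre-specified enrichment threshold (the HR estimates and the 0.65 cutoff mirror the example rule and are purely illustrative):

```python
def enrichment_decision(hr_biomarker_pos: float, threshold: float = 0.65) -> str:
    """Pre-specified interim rule: enrich to the biomarker+ subgroup in
    stage 2 only if its estimated hazard ratio beats the threshold."""
    if hr_biomarker_pos < threshold:
        return "enrich: biomarker+ only in stage 2"
    return "continue: all-comers in stage 2"

print(enrichment_decision(0.58))   # strong biomarker+ signal
print(enrichment_decision(0.75))   # no clear signal
```

The point is that no judgment call remains at the interim: the threshold is in the SAP before unblinding, and the IDMC only verifies that the rule was applied.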
Type I Error Control:
- Challenge: Testing in one population (all-comers), then switching to subgroup → multiplicity
- Solution: Hierarchical testing (test biomarker+ efficacy only if overall effect significant) OR closed testing + Holm-Bonferroni correction
- Bayesian alternative: Mixture prior reflecting uncertainty; posterior probability of biomarker+ benefit guides decision
Regulatory stance (FDA 2019): Requires:
- Clear clinical/biological rationale for biomarker
- Pre-specified enrichment rule (decision threshold, e.g., "enrich if HR < 0.65 in biomarker+ subgroup")
- Hierarchical testing or closed testing to control type I error
- Simulation validation
Regulatory stance (ICH E20 DRAFT): Formalizes via adaptive subgroup selection framework:
- Prior on biomarker effect specified (informative or flat)
- Interim decision rule based on posterior probability of benefit in each subgroup
- Phase 3 efficacy estimated in enriched population; adjusted for prior data borrowing
Estimand Impact:
- Original estimand (all-comers, treatment policy): "Effect of assignment to treatment in overall population"
- Enriched estimand (biomarker+, hypothetical): "Effect in biomarker+ subgroup, hypothetically continuing treatment"
- SAP must specify: How does enrichment change the estimand? (ICH E9(R1) implications)
Type I Error Control: Conditional Error Function and Combination Tests
Conditional Error Function (CEF) Approach
Principle: At interim, compute the conditional error probability (probability of rejecting H₀ at final, given current data).
Mathematical framework (Cui, Hung, Wang 1999):
Stage 1: Collect n₁ observations; compute test statistic Z₁, p-value p₁
Stage 2: Collect n₂ observations; compute Z₂, p₂
Conditional error: A(z₁) = P(reject H₀ at the final analysis | Z₁ = z₁, H₀ true)
If the interim decision is "continue to stage 2", the stage-2 rejection boundary is
chosen so that P(reject in stage 2 | Z₁ = z₁, H₀) = A(z₁); any stage-2 modification
that preserves this conditional rejection probability preserves the overall type I error α.
Advantages:
- Flexible: Allows sample size change, population switch, test change
- Efficient: Trials with weak but promising interim findings can be re-powered
- Transparent: Decision rules tied to conditional power
Implementation (Inverse Normal Method):
Z_combined = w₁ × Z₁ + w₂ × Z₂ (weights pre-specified, with w₁² + w₂² = 1 so Z_combined is standard normal under H₀)
Critical value for stage 2: c₂ = (z_α - w₁ × Z₁) / w₂
If Z₂ > c₂, reject H₀ at stage 2
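The stage-2 critical value is a one-liner; a Python sketch with normalized weights (w₁² + w₂² = 1 so the combined statistic is standard normal under H₀):

```python
from math import sqrt

def stage2_critical_value(z1: float, w1: float, w2: float,
                          z_alpha: float = 1.96) -> float:
    """c2 = (z_alpha - w1 * z1) / w2: reject H0 at stage 2 iff Z2 > c2.
    Weights are pre-specified with w1^2 + w2^2 = 1."""
    assert abs(w1 * w1 + w2 * w2 - 1.0) < 1e-9
    return (z_alpha - w1 * z1) / w2

w = 1.0 / sqrt(2.0)
print(round(stage2_critical_value(1.5, w, w), 3))
```

A promising interim result (z₁ > 0) lowers the stage-2 hurdle below z_α; an unimpressive one raises it, which is exactly how the conditional error is kept at its pre-planned level.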
Example (Oncology):
- Stage 1: 200 events observed; Z₁ = 1.5 (p₁ = 0.067, non-significant)
- Interim decision: Weak signal; re-estimate sample size for final stage 2
- Pre-specified weights: w₁ = 0.4, w₂ = 0.6 (asymmetric; more weight on stage 2)
- Stage 2: Collect 200 more events; Z₂ = 2.1
- Z_combined = (0.4 × 1.5 + 0.6 × 2.1) / √(0.4² + 0.6²) = 1.86 / 0.721 ≈ 2.58
- Since 2.58 > z_α = 1.96, reject H₀ at overall one-sided α = 0.025
R Packages:
- rpact: conditional error functions, design, analysis, inflation factors
- MAMS: multi-arm, multi-stage with conditional error control
Combination Tests (Fisher, Inverse Normal)
Fisher's Method (for p-values):
Test statistic: T_Fisher = -2 × [ln(p₁) + ln(p₂)]
Null distribution: χ² with 4 degrees of freedom
- Advantage: Intuitive; based on p-values
- Disadvantage: Less flexible for unequal information weights
Inverse Normal Method (for z-statistics):
Z_combined = (w₁ × Z₁ + w₂ × Z₂) / √(w₁² + w₂²)
Null distribution: N(0, 1) [standard normal]
- Advantage: Flexible; allows asymmetric weights for unequal stages
- Disadvantage: Assumes (asymptotically) normally distributed test statistics
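For two stages, Fisher's combination p-value has a closed form, since the χ² survival function with 4 df is exp(-x/2)(1 + x/2); a stdlib Python sketch:

```python
from math import exp, log

def fisher_combination_p(p1: float, p2: float) -> float:
    """Fisher's combination of two independent one-sided p-values.
    T = -2 * (ln p1 + ln p2) follows chi-square with 4 df under H0, whose
    survival function is exp(-x/2) * (1 + x/2) in closed form."""
    t = -2.0 * (log(p1) + log(p2))
    return exp(-t / 2.0) * (1.0 + t / 2.0)

print(round(fisher_combination_p(0.05, 0.05), 4))
```

Two stage-wise p-values of 0.05 combine to roughly 0.017, illustrating how the combined evidence can be stronger than either stage alone.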
Pre-specification in SAP:
"The primary analysis will use the inverse normal combination test with
weights w₁ = 0.5, w₂ = 0.5 (symmetric, equal information at each stage).
Type I error control verified via simulation:
- 10,000 replicates under H₀ (HR = 1.0)
- Estimated α = 0.0248 (95% CI: 0.0210–0.0290)
- Conclusion: consistent with type I error control at α = 0.025 ✓
"
Simulation Requirements per ICH E20 (DRAFT)
Mandatory Elements
1. Simulation scope (DRAFT requirement, stricter than FDA 2019):
- ≥10,000 replicates for type I error (null hypothesis true)
- ≥1,000 replicates for power (alternative hypothesis true)
- All plausible scenarios: Base-case, optimistic, pessimistic parameter values
- No single-point estimates allowed; must explore parameter ranges
2. Data-generating model specification:
- Piecewise hazard (if non-proportional hazards)
- Event rate, dropout rate, enrollment rate
- Interim timing (% information)
- Any adaptive rule parameters (e.g., conditional power threshold for futility)
3. Adaptive rule validation:
- Simulate the exact adaptive rule (blinded SSR, unblinded SSR, enrichment, RAR)
- Compute conditional power at interim under different scenarios
- Verify futility/efficacy stopping boundaries preserve α
4. Type I error control proof:
- Show estimated α ≤ 0.025 (with 95% CI) under null
- If multiple adaptive looks or interim decisions, confirm the familywise error rate (FWER) is controlled
5. Sensitivity analyses (DRAFT):
- MNAR (Missing Not At Random): If dropout mechanism may be informative, simulate under MNAR assumptions (delta adjustment, reference-based imputation)
- Alternative adaptive rules: If primary rule uncertain, test robustness to rule variation
- Dose selection bias: For seamless Phase 2/3, simulate impact of dose selection on final power
6. Operating characteristic reporting:
- Expected sample size, # events, trial duration under each scenario
- Probability of early stopping (efficacy, futility) at interim
- Conditional power distribution at interim
- Power by subgroup (if enrichment adaptive)
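The null-hypothesis check in item 4 can be prototyped in a few lines; a Monte Carlo sketch of a two-stage inverse normal test with a binding conditional-power futility stop, simulating the stage statistics directly as independent standard normals under H₀ (a simplification of the full survival data-generating model the guidance requires):

```python
import random
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def simulate_type1_error(n_reps: int = 10000, z_alpha: float = 1.96,
                         t: float = 0.5, futility_cp: float = 0.20,
                         seed: int = 7) -> float:
    """Rejection rate under H0 for a two-stage inverse normal test
    (equal weights) with a conditional-power futility rule at the interim."""
    rng = random.Random(seed)
    w = 1.0 / sqrt(2.0)
    rejections = 0
    for _ in range(n_reps):
        z1 = rng.gauss(0.0, 1.0)           # interim statistic under H0
        # current-trend conditional power at information fraction t
        cp = 1.0 - norm_cdf((z_alpha - z1 / sqrt(t)) / sqrt(1.0 - t))
        if cp < futility_cp:
            continue                        # binding futility stop: no rejection
        z2 = rng.gauss(0.0, 1.0)           # independent stage-2 increment
        if w * z1 + w * z2 > z_alpha:
            rejections += 1
    return rejections / n_reps

alpha_hat = simulate_type1_error()
print(alpha_hat)
```

Because the binding futility stop can only remove rejections, the estimated α lands below the nominal one-sided 0.025; a production simulation would replace the normal draws with the full piecewise-hazard survival model.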
Pre-Specification and FDA Communication
SAP Requirements (FDA 2019 + ICH E20 DRAFT)
Must be specified BEFORE any unblinding:
- Adaptive rule(s): exact decision criterion, e.g.:
"At interim analysis (after 100 events), if conditional power < 20%, trial will be stopped for futility. Conditional power is calculated as P(Z_final > 1.96 | interim data, assumed HR = 0.70 under alternative)."
- Type I error control method, e.g.:
"Inverse normal combination test with weights w₁ = 0.5, w₂ = 0.5. Type I error verified via simulation showing α ≤ 0.025."
- Simulation assumptions: document the data-generating model completely, e.g.:
"Event rate: λ_control = 0.05/month; HR = 1.0 (null), 0.70 (alternative); Dropout: 1%/month; Enrollment: 20/month; Interim timing: 50% events"
- Decision boundaries: efficacy and futility boundaries at interim, e.g.:
"Efficacy: None (continue to final). Futility: Stop if conditional power < 20%. Final: Reject H₀ if Z_combined > 1.96 (α = 0.025, one-sided)"
FDA Communication (Type C Meeting)
When to request feedback:
- Complex adaptive design (less well-understood category)
- Novel adaptive rule not extensively published
- Uncertainty about regulatory acceptability
Deliverables for meeting:
- Study protocol (draft)
- SAP (detailed adaptive rules + simulation)
- Simulation report: 10,000+ replicate results showing type I error control
- Sensitivity analyses (alternative rules, MNAR robustness)
- Literature references supporting approach
FDA Response: Type C meeting concludes with:
- "Acceptable" (design approved; proceed as planned)
- "Continue discussion" (minor modifications needed)
- "Not acceptable" (major redesign required)
R Packages for Adaptive Design Implementation
rpact: Comprehensive Adaptive Design Framework
Installation: install.packages("rpact")
Key functions:
- getDesignGroupSequential(): group sequential designs (O'Brien-Fleming, Pocock, spending functions)
- getDesignInverseNormal(): adaptive designs via the inverse normal combination test
- getDesignFisher(): adaptive designs via Fisher's combination test
- getSampleSizeSurvival() / getPowerSurvival(): sample size and power for time-to-event endpoints
- getSimulationSurvival(): simulation, including conditional-power-based sample size re-estimation
- getAnalysisResults(): analyze trial data under the chosen design
Example (Unblinded SSR), sketched with the inverse normal design (argument names per the rpact reference manual; verify values against the package documentation):
library(rpact)
# Two-stage inverse normal design: interim at 50% information,
# O'Brien-Fleming-type alpha spending, O'Brien-Fleming-type beta spending (futility)
design <- getDesignInverseNormal(
  kMax = 2,
  alpha = 0.025,
  beta = 0.2,
  informationRates = c(0.5, 1.0),
  typeOfDesign = "asOF",
  typeBetaSpending = "bsOF"
)
# Simulate a survival trial with conditional-power-based
# re-estimation of the stage-2 event number
sim <- getSimulationSurvival(
  design = design,
  lambda2 = 0.05,          # control hazard rate per month
  hazardRatio = 0.65,      # assumed treatment effect
  dropoutRate1 = 0.01, dropoutRate2 = 0.01, dropoutTime = 12,
  accrualTime = 20,
  maxNumberOfSubjects = 400,
  plannedEvents = c(150, 300),
  conditionalPower = 0.8,  # target CP for stage-2 re-estimation
  minNumberOfEventsPerStage = c(NA, 150),
  maxNumberOfEventsPerStage = c(NA, 300),
  maxNumberOfIterations = 10000
)
summary(sim)
adaptIVPT: Adaptive Designs with Inverse Probability Weighting
Installation: GitHub (Merck package)
Purpose: Response-adaptive randomization with bias correction
Key function: adaptIVPT() implements inverse probability weighting (IPW) to adjust for adaptive allocation bias
Use case: If response-adaptive randomization deployed, IPW ensures unbiased treatment effect estimate
MAMS: Multi-Arm, Multi-Stage Adaptive Designs
Installation: install.packages("MAMS")
Key features:
- Designs for multiple treatment arms with interim arm dropping
- Sample size re-estimation
- Conditional error function with combination tests
- Flexible efficacy/futility boundaries
Example (Multi-arm design), sketched per the MAMS package interface:
library(MAMS)
# Design: 2 experimental arms vs control, 2 stages, allowing interim arm dropping
design <- mams(
  K = 2,        # number of experimental arms
  J = 2,        # number of stages
  alpha = 0.025,
  power = 0.8,
  p = 0.65,     # P(treatment obs > control obs) under the interesting effect
  p0 = 0.55,    # same probability under the uninteresting effect
  r = 1:2,      # cumulative sample size ratios per stage, experimental arms
  r0 = 1:2      # cumulative sample size ratios per stage, control arm
)
print(design)
Backlinks
- Sample Size Re-estimation (SSR)
- Sensitivity Analysis Playbook for Oncology Trials
- ICH E9(R1) Estimand Framework
- Intercurrent Events in Oncology Trials
- Simulation-Based Power Analysis
- Group Sequential Designs (GSD)
Source: FDA Guidance "Adaptive Designs for Clinical Trials of Drugs and Biologics" (Final, November 2019); ICH E20 "Adaptive Design Clinical Trials" (DRAFT Step 2b, June 2025 revision pending); literature on conditional error functions, combination tests, Bayesian adaptive designs
Status: FDA 2019 guidance: Final. ICH E20: DRAFT (Step 2b, June 2025 revision in progress).
Compiled from: FDA 2019 adaptive guidance, ICH E20 draft documents, simtrial/rpact documentation, published literature on conditional error functions (Cui, Hung, Wang 1999; Jennison & Turnbull 2000)