Analysis Pipeline
Every GaiaLab analysis runs through five sequential stages. Within stages 1 and 2 the work itself is fully parallel: normalisation across input genes, and data fetch across all sources.
Gene normalisation
Input gene symbols are normalised to HGNC approved symbols. Aliases (e.g. HER2 → ERBB2) are resolved before any database query. Invalid symbols are flagged and excluded from scoring but included in the report.
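As a sketch, the normalisation step is a lookup against an alias table. The alias table, approved-symbol set, and function names below are illustrative placeholders; GaiaLab's real mapping comes from the full HGNC dataset.

```typescript
// Tiny illustrative subset of an HGNC alias table -- not GaiaLab's real data.
const HGNC_ALIASES: Record<string, string> = {
  HER2: "ERBB2",
  P53: "TP53",
};
const APPROVED = new Set(["ERBB2", "TP53", "BRCA1"]);

interface NormalisedGene {
  input: string;
  symbol: string | null; // null means invalid: excluded from scoring
  flagged: boolean;      // flagged symbols still appear in the report
}

function normaliseGene(raw: string): NormalisedGene {
  const upper = raw.trim().toUpperCase();
  // Resolve aliases (e.g. HER2 -> ERBB2) before any database query.
  const symbol = HGNC_ALIASES[upper] ?? upper;
  if (APPROVED.has(symbol)) {
    return { input: raw, symbol, flagged: false };
  }
  return { input: raw, symbol: null, flagged: true };
}
```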
Parallel data fetch — 35+ sources
All database queries run simultaneously via Promise.allSettled(). No source blocks another. A timeout or API error in one source does not prevent results from the remaining sources. Each client returns partial results on failure rather than throwing.
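The fan-out pattern described above can be sketched as follows. The client shape is an assumption, but the Promise.allSettled() behaviour matches the text: one rejected source yields an empty partial result instead of failing the run.

```typescript
type SourceResult = { source: string; data: unknown[]; ok: boolean };

// Hypothetical client shape; GaiaLab's real clients are source-specific.
interface SourceClient {
  name: string;
  fetch: () => Promise<unknown[]>;
}

async function queryAllSources(clients: SourceClient[]): Promise<SourceResult[]> {
  // Every query is issued at once; none blocks another.
  const settled = await Promise.allSettled(clients.map((c) => c.fetch()));
  // A rejected client becomes an empty partial result, never a thrown error.
  return settled.map((s, i) => ({
    source: clients[i].name,
    data: s.status === "fulfilled" ? s.value : [],
    ok: s.status === "fulfilled",
  }));
}
```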
Channel aggregation
Raw API responses are aggregated into 16 evidence channels by domain-specific aggregators. Each aggregator applies source-specific normalisation, deduplication, and confidence flags before passing data downstream.
Scoring and classification
Drug candidates are scored 0–100 across six weighted factors. Pathways are ranked by FDR-corrected enrichment p-value. Hypotheses are filtered by evidence quality and cross-deduplicated against input gene tokens.
6-agent AI synthesis
Six AI agents — Hypothesis, Critic, Evidence, Risk, Innovation, Synthesis — debate the structured data. Each agent receives grounded prompts seeded with scored outputs from stage 4, not raw database dumps. The Synthesis agent produces the final executive brief.
Data Sources
35+ sources queried per analysis. Coverage figures are per-gene, averaged across a 5-gene panel.
| Source | Domain | Data type | Auth required |
|---|---|---|---|
| PubMed / NCBI | Literature | Citation metadata, MeSH terms | Optional (higher rate limit) |
| PMC Full-Text | Literature | JATS XML, quantitative extraction (IC50, HR, OR, n=) | No |
| ChEMBL | Drug bioactivity | IC50, EC50, Ki, pChEMBL, mechanism of action | No |
| OpenTargets | Disease association | Gene-disease association scores by data type | No |
| BioGRID | Interaction | Protein-protein interactions, genetic interactions | Optional |
| STRING | Interaction | Functional association network scores | No |
| UniProt | Protein | Function, variants, subcellular location, PTMs | No |
| AlphaFold (EBI) | Structure | pLDDT per-residue confidence → druggability score | No |
| ClinicalTrials.gov | Clinical | Active trials, phase, intervention, NCT IDs | No |
| OpenFDA | Safety / Regulatory | Adverse event counts, drug approval status | No |
| KEGG | Pathway | Pathway membership, module associations | No |
| Reactome | Pathway | Hierarchical pathway enrichment | No |
| Gene Ontology | Functional annotation | BP, MF, CC terms | No |
| DGIdb | Drug-gene | Drug-gene interaction types and sources | No |
| DisGeNET | Disease-gene | Gene-disease associations with evidence score | API key |
| DrugBank | Drug | Drug targets, pharmacokinetics, interactions | API key |
| OMIM | Disease genetics | Mendelian disease associations | No (public API) |
| ClinVar | Variant | Pathogenic/benign variant classifications | No |
| Semantic Scholar | Literature | Citation graph, influential papers, open-access PDFs | Optional |
| GTEx | Expression | Tissue-specific expression, eQTL associations | No |
| CPTAC | Proteomics | Proteogenomic abundance and phospho-state summaries exposed through GaiaLab modality adapters | No |
| CELLxGENE | Single-cell | Cell-state and compartment enrichment summaries exposed through GaiaLab modality adapters | No |
| HMDB | Metabolomics | Metabolite-linked flux-axis context exposed through GaiaLab modality adapters | No |
| IntAct | Interaction | Curated molecular interactions with MI scores | No |
| cBioPortal | Cancer genomics | Alteration frequency; mutation-aware survival stratification in mapped TCGA cohorts, with single-gene loss subgrouping when discrete CNA data are available | No |
| Pathway Commons | Pathway | Merged pathway graph from 22 pathway databases | No |
FDR-Corrected Pathway Enrichment
GaiaLab uses a hypergeometric test for gene set enrichment, then applies Benjamini-Hochberg (BH) multiple testing correction across all tested pathways.
Hypergeometric test
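Written out, the standard one-sided over-representation form of the test (assuming the usual definitions: N background genes, K genes in the pathway, n input genes, k of them in the pathway) is:

```latex
P(X \ge k) \;=\; \sum_{i=k}^{\min(n,\,K)} \frac{\binom{K}{i}\binom{N-K}{n-i}}{\binom{N}{n}}
```

This is the probability of observing at least k pathway genes in the panel by chance alone.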
BH correction
Raw p-values across all pathways are ranked ascending. Each pathway receives an adjusted q-value:
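In the standard Benjamini-Hochberg step-up form, with m tested pathways and ranked raw p-values p₍₁₎ ≤ … ≤ p₍ₘ₎, the adjusted value is:

```latex
q_{(i)} \;=\; \min_{j \ge i} \left( \frac{p_{(j)} \cdot m}{j} \right)
```

The minimum over j ≥ i enforces that q-values are monotone in rank, so a pathway can never receive a smaller q-value than a pathway with a smaller raw p-value.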
Pathways are labelled by significance tier:
- high — q ≤ 0.01
- moderate — q ≤ 0.05
- nominal — q ≤ 0.10
- ns — q > 0.10 (not shown by default)
Only pathways at q ≤ 0.05 are included in the executive brief and drug scoring. Pathways at q ≤ 0.10 are shown in the full pathway panel with a "nominal" label. The display threshold, tightened from q < 0.20 to q ≤ 0.10, limits the expected false-discovery rate among displayed pathways to 1-in-10 rather than 1-in-5.
Citation Verification & Hallucination Detection
GaiaLab runs a three-stage evidence integrity pipeline on every analysis to ensure cited literature is real, relevant, and accurately represented.
Stage 0 — PMID existence check
All PMIDs produced by the AI synthesis layer are batch-queried against the NCBI PubMed E-utilities esummary API in groups of 50. Any PMID not found in PubMed's index is flagged as hallucinated and stripped from the result before display. The hallucination rate (hallucinated / total AI-cited PMIDs) is reported on the Trust dashboard.
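A sketch of the batch check, assuming the standard E-utilities esummary JSON shape (existing PMIDs come back as result entries without an error field); GaiaLab's exact response handling may differ.

```typescript
// Real NCBI E-utilities endpoint; batching and error handling are illustrative.
const ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi";

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

async function verifyPmids(pmids: string[]): Promise<Set<string>> {
  const found = new Set<string>();
  for (const batch of chunk(pmids, 50)) { // groups of 50, per the text
    const url = `${ESUMMARY}?db=pubmed&retmode=json&id=${batch.join(",")}`;
    const body = await (await fetch(url)).json();
    for (const id of batch) {
      // Assumption: PMIDs absent from PubMed's index carry an error field.
      const entry = body?.result?.[id];
      if (entry && !entry.error) found.add(id);
    }
  }
  return found;
}

// Trust-dashboard metric: hallucinated / total AI-cited PMIDs.
function hallucinationRate(cited: string[], verified: Set<string>): number {
  if (cited.length === 0) return 0;
  return cited.filter((p) => !verified.has(p)).length / cited.length;
}
```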
Stage 1 — NLI entailment check
Claims from the analysis are verified against their cited abstract text using DeBERTa-v3-large (cross-encoder/nli-deberta-v3-large), a state-of-the-art Natural Language Inference model. The entailment score threshold is 0.5 — claims that score below this are flagged as weakly supported. Context window: 2,500 characters per passage, 400 characters per claim.
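The flagging rule (not the model itself) can be sketched as below. `scoreEntailment` is a hypothetical stand-in for the DeBERTa cross-encoder call; the truncation limits and threshold follow the text.

```typescript
// Limits stated in the text.
const MAX_PASSAGE = 2500; // characters per passage
const MAX_CLAIM = 400;    // characters per claim
const ENTAILMENT_THRESHOLD = 0.5;

function checkClaim(
  claim: string,
  passage: string,
  // Hypothetical stand-in for cross-encoder/nli-deberta-v3-large inference.
  scoreEntailment: (premise: string, hypothesis: string) => number,
): { score: number; weaklySupported: boolean } {
  const score = scoreEntailment(
    passage.slice(0, MAX_PASSAGE),
    claim.slice(0, MAX_CLAIM),
  );
  // Claims scoring below the threshold are flagged as weakly supported.
  return { score, weaklySupported: score < ENTAILMENT_THRESHOLD };
}
```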
Stage 2 — ALCE-style cite metrics
Inspired by the ALCE attribution benchmark, GaiaLab computes cite-precision, cite-recall, and cite-F1 for each analysis:
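A simplified per-analysis version of the metrics, under the assumption that cite-precision counts verified citations over all emitted citations and cite-recall counts claims with at least one supporting citation; ALCE's exact definitions are more granular.

```typescript
interface Claim {
  citedPmids: string[];       // PMIDs the AI attached to this claim
  supportingPmids: string[];  // subset verified to actually support it
}

function citeMetrics(claims: Claim[]) {
  let cited = 0, correct = 0, supported = 0;
  for (const c of claims) {
    cited += c.citedPmids.length;
    correct += c.citedPmids.filter((p) => c.supportingPmids.includes(p)).length;
    if (c.supportingPmids.length > 0) supported += 1;
  }
  const precision = cited ? correct / cited : 0;            // cite-precision
  const recall = claims.length ? supported / claims.length : 0; // cite-recall
  const f1 =
    precision + recall ? (2 * precision * recall) / (precision + recall) : 0;
  return { precision, recall, f1 };
}
```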
These metrics are shown on the Trust page. A cite-F1 ≥ 0.6 is considered well-grounded.
Multi-agent citation floor
Any insight produced by the 6-agent debate that has zero verified PMIDs is annotated with citationFloor: false and its evidence quality is capped at "moderate". A "⚠ No PMIDs" badge is shown on the insight card in the analysis output.
Relation-Aware Drug Scoring
Each drug candidate is scored 0–100 across six weighted factors, then classified into a tier and assigned a floor/cap based on regulatory status.
Scoring formula
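A sketch of the weighted sum over six factors. The factor names and weights below are illustrative assumptions (they sum to 1), not GaiaLab's actual configuration; only contextRelevance, clinicalEvidence, and syntheticLethality are named elsewhere in this document.

```typescript
// Hypothetical weights; each factor is scored 0-100 upstream.
const WEIGHTS = {
  targetEvidence: 0.25,
  clinicalEvidence: 0.2,
  contextRelevance: 0.15,
  syntheticLethality: 0.15,
  safety: 0.15,
  novelty: 0.1,
} as const;

type Factors = { [K in keyof typeof WEIGHTS]: number };

function repurposingScore(f: Factors): number {
  let score = 0;
  for (const key of Object.keys(WEIGHTS) as Array<keyof typeof WEIGHTS>) {
    score += WEIGHTS[key] * f[key];
  }
  return Math.round(score); // final score on the 0-100 scale
}
```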
Tier classification
Tier I
Score ≥ 70. Strong evidence. On-panel target, clinical data, context match. Shown prominently in all views.
Tier II
Score 50–69. Moderate evidence. Includes all FDA-approved drugs that pass context filter. Up to 3 shown by default.
Tier III
Score < 50. Exploratory. Collapsed behind toggle. Requires explicit expansion by the user.
Filters applied before scoring
- Context relevance ≥ 40 required for off-panel drugs (≥ 30 for on-panel)
- Clinical evidence score ≥ 15 required for off-panel drugs
- Synthetic lethality only computed in oncology disease contexts
- Duplicate canonical drugs resolved by highest repurposingScore
6-Agent AI Debate
GaiaLab uses six specialised AI agents that each receive structured, scored data — not raw database text. Each agent has a defined role and adversarial mandate.
Hypothesis Agent
Generates mechanistic hypotheses from gene-pathway-drug co-occurrence patterns. Revises hypotheses in response to Critic flaws (iterative debate round).
Critic Agent
Identifies confounders, alternative explanations, and evidence gaps. Flags hypotheses that lack direct mechanistic support. Seeded with live OpenTargets and ChEMBL bioactivity data.
Evidence Agent
Assesses citation quality, recency, and quantitative support from PMC full-text extraction (IC50, HR, OR, n= values). Assigns grounding scores per claim.
Risk Agent
Evaluates safety signals from FDA FAERS adverse event counts and contraindication overlaps. Penalises drug candidates with high AE burden in the disease population.
Innovation Agent
Identifies novel angles — repurposing opportunities, combination hypotheses, and underexplored targets. Seeded with active ClinicalTrials.gov recruiting trials.
Synthesis Agent
Integrates debate outputs into the executive brief. Applies the advisory-therapeutic normaliser to ensure claim-level confidence aligns with citation coverage. Produces the final PMID evidence ledger.
Provider failover order
AI calls attempt providers in order: DeepSeek → OpenAI → Google Gemini → Anthropic Claude. Each call is gated by a token-bucket rate monitor. If a provider's bucket is empty, it is skipped without error and the next provider is tried.
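The failover loop can be sketched as below. The Provider shape and token accounting are illustrative, but the ordering and skip-on-empty-bucket behaviour match the text.

```typescript
interface Provider {
  name: string;
  tokens: number; // current token-bucket level (refill logic omitted)
  call: (prompt: string) => Promise<string>;
}

async function callWithFailover(
  providers: Provider[], // e.g. DeepSeek -> OpenAI -> Gemini -> Claude
  prompt: string,
): Promise<{ provider: string; output: string }> {
  for (const p of providers) {
    if (p.tokens <= 0) continue; // empty bucket: skip without error
    p.tokens -= 1;
    try {
      return { provider: p.name, output: await p.call(prompt) };
    } catch {
      // API error: fall through to the next provider in order.
    }
  }
  throw new Error("all providers exhausted");
}
```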
Confidence Tiers
Claim-level confidence is capped by citation coverage. AI-generated language cannot assert high confidence when the citation record does not support it.
| Confidence | Requirement | Display |
|---|---|---|
| High | On-panel target AND clinical evidence score ≥ 15 AND ≥ 6 PubMed citations | Green border, "strong evidence" label |
| Medium | 2–5 citations OR off-panel with clinical data | Blue border, "moderate evidence" label |
| Low | < 2 citations OR hypothesis only | Grey border, "exploratory" label |
Every cited claim includes a PMID. Claims without PMIDs are labelled "derived" or "hypothetical" and rendered with reduced visual prominence. This is enforced by the PMID evidence ledger, not by AI instruction — AI cannot override it.
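The table's requirements transcribe to a small function. Field names here are illustrative, and edge cases the table leaves ambiguous (e.g. more than 5 citations off-panel) are simplified to the nearest stated rule.

```typescript
type Confidence = "high" | "medium" | "low";

function confidenceTier(c: {
  onPanel: boolean;
  clinicalEvidenceScore: number;
  citations: number; // verified PubMed citations
}): Confidence {
  // High: on-panel AND clinical evidence score >= 15 AND >= 6 citations.
  if (c.onPanel && c.clinicalEvidenceScore >= 15 && c.citations >= 6) {
    return "high";
  }
  // Medium: 2+ citations, OR off-panel with clinical data.
  if (c.citations >= 2 || (!c.onPanel && c.clinicalEvidenceScore > 0)) {
    return "medium";
  }
  // Low: < 2 citations or hypothesis only.
  return "low";
}
```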
Immutable Analysis IDs
Every analysis run generates a permanent ID of the form gl-{timestamp}-{8-char-hash}. This ID is:
- Included in API responses and the analysis UI
- Linkable as a permanent URL: https://gaialabai.com/analysis/{id}
- Safe to cite in paper supplementary materials
- Stored as an immutable JSON snapshot in data/snapshots/
Snapshot files record the exact gene list, disease context, all database responses, all scored outputs, and the AI synthesis. A snapshot can be replayed to verify that the same inputs produce equivalent outputs under the same database state.
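An illustrative generator for IDs of this form. The real timestamp format and hash input are not specified in this document, so both are assumptions here (a compact UTC timestamp, and a random UUID hashed to 8 hex characters).

```typescript
import { createHash, randomUUID } from "node:crypto";

function analysisId(now: Date = new Date()): string {
  // Assumed timestamp shape: YYYYMMDDHHMMSS in UTC.
  const ts = now.toISOString().replace(/[-:.TZ]/g, "").slice(0, 14);
  // Assumed hash input: a random UUID, truncated to 8 hex characters.
  const hash = createHash("sha256").update(randomUUID()).digest("hex").slice(0, 8);
  return `gl-${ts}-${hash}`;
}
```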
Known Limitations
Database coverage gaps
Without paid API keys (DisGeNET, DrugBank), coverage falls to roughly 30 of the 35+ sources. These gaps are disclosed in the analysis output and do not produce false confidence — missing sources are simply absent, not filled with hallucinated data.
AI synthesis is probabilistic
The six AI agents reason from structured data but can still produce plausible-sounding errors. All AI output is gated by the PMID evidence ledger — claims without citation support are demoted. Users should treat the executive brief as a hypothesis generator, not a clinical decision tool.
Small panels (< 3 genes)
Pathway enrichment and drug scoring are less reliable with fewer than 3 genes. The hypergeometric test loses power and synthetic lethality detection is disabled. Results for single-gene queries are labelled accordingly.
Non-human species
GaiaLab is optimised for human gene symbols. Mouse orthologs (e.g. Trp53) are partially supported via alias resolution but may miss sources that do not cross-reference species.
Not a clinical decision support tool
GaiaLab is a research intelligence platform. Outputs are not validated for clinical use and should not inform patient treatment decisions without independent expert review and regulatory-grade validation.