Longevity signatures can support candidate geroprotector retrieval, but reversal-only ranking often elevates stress-like, cytostatic, or otherwise misleading perturbations. We present an offline, agent-executable workflow that scores frozen LINCS DrugBank consensus signatures against a frozen ageing query while requiring concordance with conserved longevity biology from vendored Human Ageing Genomic Resources snapshots. The scored path integrates GenAge human genes, HAGR-provided human homolog mappings for GenAge model-organism genes, mammalian ageing and dietary-restriction signatures, GenDR genes, and CellAge genes and senescence signatures. For each compound, the workflow emits a rejuvenation score together with a Rejuvenation Alignment Certificate, a Confounder Rejection Certificate, and a Query Stability Certificate, explicitly testing whether apparent reversal is better explained by conserved longevity programs than by stress, cytostasis, senescence, or toxicity. In the frozen rediscovery benchmark, the full model improved negative-control suppression but did not outperform reversal-only ranking on the pre-registered primary metric, AUPRC. The contribution is therefore a reproducible retrieval-control framework that makes candidate ranking auditable and self-verifying, rather than a claim of successful geroprotector discovery.
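The core idea of reversal scoring gated by longevity concordance can be sketched as below. The specifics are illustrative assumptions, not the workflow's actual implementation: reversal is shown as a negative cosine over shared genes, and concordance as a minimum overlap between strongly moved genes and a longevity gene set; the function names and thresholds are hypothetical.

```python
import math

def reversal_score(drug_sig, ageing_sig):
    """Negative cosine similarity over shared genes: high when the drug
    moves genes in the direction opposite to the ageing query."""
    shared = set(drug_sig) & set(ageing_sig)
    if not shared:
        return 0.0
    num = sum(drug_sig[g] * ageing_sig[g] for g in shared)
    den = (math.sqrt(sum(drug_sig[g] ** 2 for g in shared))
           * math.sqrt(sum(ageing_sig[g] ** 2 for g in shared)))
    return -num / den

def longevity_concordant(drug_sig, longevity_genes, min_overlap=2):
    """Require that the drug strongly moves at least min_overlap genes
    from a conserved longevity gene set (illustrative threshold)."""
    moved = {g for g, v in drug_sig.items() if abs(v) > 0.5}
    return len(moved & longevity_genes) >= min_overlap

def gated_score(drug_sig, ageing_sig, longevity_genes):
    """Zero out apparent reversal that lacks longevity-program support."""
    score = reversal_score(drug_sig, ageing_sig)
    return score if longevity_concordant(drug_sig, longevity_genes) else 0.0
```

Under this sketch, a compound that perfectly inverts the ageing query but touches no conserved longevity genes scores zero, which is the behavior the confounder certificates are meant to enforce.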
We present an offline, self-verifying workflow that ranks single-antigen and logic-gated cell-therapy leads from frozen snapshots of TCGA-style tumor RNA, Human Protein Atlas-style normal RNA and protein, adult-only healthy single-cell data, and TISCH2-style tumor single-cell evidence across a compact indication panel. The scored path combines tumor prevalence, tumor intensity, same-malignant-cell support, surface-target confidence, off-tumor safety, and patchiness into a transparent single-target score, then proposes A AND B rescue circuits when single targets are unsafe or too heterogeneous. The contribution is not merely a list of overexpressed tumor antigens, but an executable workflow that compiles safer recognition programs after testing their safety, coverage, and rescue feasibility.
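A transparent single-target score of the kind described above can be sketched as a weighted sum of named feature terms. The weights, the feature keys, and the use of `1 - patchiness` as a uniformity term are illustrative assumptions for this sketch, not the workflow's actual coefficients.

```python
# Illustrative weights summing to 1.0; the real workflow's weighting
# scheme and feature scaling are not specified here.
DEFAULT_WEIGHTS = {
    "tumor_prevalence":   0.25,
    "tumor_intensity":    0.15,
    "same_cell_support":  0.20,  # same-malignant-cell single-cell support
    "surface_confidence": 0.15,  # confidence the target is surface-accessible
    "off_tumor_safety":   0.15,  # higher = less normal-tissue expression
    "uniformity":         0.10,  # 1 - patchiness across tumor cells
}

def single_target_score(features, weights=DEFAULT_WEIGHTS):
    """Combine feature terms (each assumed pre-scaled to [0, 1]) into one
    auditable score; missing terms fail loudly rather than silently."""
    missing = set(weights) - set(features)
    if missing:
        raise ValueError(f"missing feature terms: {sorted(missing)}")
    return sum(weights[k] * features[k] for k in weights)
```

Keeping the score a plain weighted sum is what makes each ranked lead auditable: every term's contribution can be read directly off the feature table.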
Antimicrobial peptide discovery often rewards assay-positive hits that later fail in salt, serum, shifted pH, or liability-sensitive settings. We present a biology-first, offline workflow that ranks APD-derived peptide leads by deployability rather than activity alone and then proposes bounded rescue edits for near misses. The frozen scored path vendors 6,574 standard-amino-acid APD entries retrieved from the official APD site and combines interpretable sequence features with APD-derived activity, salt, serum, pH, resistance, and liability labels. On a frozen rediscovery panel of 320 APD peptides, the full deployability score outperformed an activity-only baseline on every primary ranking metric, improving AUPRC from `0.4188` to `0.9176`, AUROC from `0.3498` to `0.8751`, EF@5% from `0.75` to `2.00`, and recall@25 from `0.0563` to `0.1563`. On a 24-pair masked analog benchmark constrained to the v1 redesign search space, the rescue engine recovered the exact target sequence within the accepted rescue set for 22 pairs (`91.7%`) with a mean accepted proposal gain of `0.0988` deployability units over parent peptides. In the default canonical library, Chicken CATH-1 (`AP00557`) ranked first. The contribution is therefore not a generic AMP classifier, but an executable workflow that separates deployable leads from liability-heavy hits under physiologic constraints and audits minimal redesigns before reporting them.
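The enrichment factor reported above (EF@5%) has a standard definition: the hit rate among the top-scoring fraction of the ranked list, divided by the hit rate of the whole panel. The sketch below shows that generic definition; it is not claimed to match the benchmark's exact tie-breaking or rounding conventions.

```python
def enrichment_factor(scores, labels, frac=0.05):
    """EF@frac: hit rate in the top-scoring fraction of the ranking,
    divided by the overall hit rate. labels are 0/1 hit indicators."""
    if len(scores) != len(labels) or not scores:
        raise ValueError("scores and labels must be equal-length and non-empty")
    total_rate = sum(labels) / len(labels)
    if total_rate == 0:
        raise ValueError("no positives in panel; EF is undefined")
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    k = max(1, int(round(frac * len(scores))))
    top_rate = sum(labels[i] for i in order[:k]) / k
    return top_rate / total_rate
```

An EF@5% of `2.00`, as the deployability score achieves, means the top 5% of the ranking is twice as hit-dense as the panel overall; `0.75` for the activity-only baseline means its top slice is actually hit-poorer than random.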
We present an offline, self-verifying target-cartography workflow for prioritizing solid-tumor cell-therapy single-antigen leads from compact frozen snapshots of tumor RNA, normal-tissue RNA and protein, and adult healthy single-cell atlases. Canonical v1 ranks safety-filtered single-antigen targets only. Optional logic-gate outputs are generated separately as bulk-supported rescue hypotheses from bulk tumor co-detection plus adult normal-risk filtering and are not same-cell-validated gate designs. The workflow emits transparent feature terms, safety and coverage certificates, and separate rediscovery benchmarks against a naive tumor-overexpression plus bulk-normal-RNA baseline.
We present an offline, agent-executable bioinformatics workflow that classifies human gene signatures as ageing-like, dietary-restriction-like, senescence-like, mixed, or unresolved from vendored Human Ageing Genomic Resources snapshots. The workflow does not report a longevity label on overlap alone. Instead, it tests whether the interpretation survives perturbation, remains specific against competing longevity programs, and beats explicit non-longevity confounder explanations before reporting it. The scored path uses frozen GenAge, GenDR, CellAge, and HAGR ageing and dietary-restriction signatures, together with a holdout-source benchmark and a blind external challenge panel. In the frozen release, all four canonical examples classify as expected, the holdout-source benchmark passes 3/3, and a blind panel of 12 compact public signatures is recovered exactly, including mixed and confounded cases. The contribution is therefore a reproducible bioinformatics skill for transcriptomic state triage rather than a static gene-list annotation.
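The perturbation-survival test described above can be sketched as re-classifying a signature under random gene drop-out and reporting a label only if it persists. The Jaccard-overlap classifier, the drop fraction, and the agreement threshold are all illustrative assumptions for this sketch, not the workflow's actual stability criterion.

```python
import random

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(sig, references):
    """Label a signature by its best overlap with reference gene sets
    (e.g. ageing-like, dietary-restriction-like, senescence-like)."""
    label, score = max(((name, jaccard(sig, genes))
                        for name, genes in references.items()),
                       key=lambda t: t[1])
    return label if score > 0 else "unresolved"

def stable_label(sig, references, drop_frac=0.2, n_trials=50,
                 min_agree=0.9, seed=0):
    """Report the base label only if it survives repeated random gene
    drop-out; otherwise fall back to 'unresolved'."""
    base = classify(sig, references)
    rng = random.Random(seed)
    sig = list(sig)
    keep = max(1, int(len(sig) * (1 - drop_frac)))
    agree = sum(classify(rng.sample(sig, keep), references) == base
                for _ in range(n_trials))
    return base if agree / n_trials >= min_agree else "unresolved"
```

A label that flips under modest gene drop-out was never supported by more than a few genes, which is exactly the overlap-alone failure mode the workflow refuses to report.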
We present an offline, agent-executable workflow that classifies ageing, dietary restriction, and senescence-like gene signatures from vendored HAGR snapshots, then certifies whether the result remains stable under perturbation, specific against competing longevity programs, and stronger than explicit non-longevity confounder explanations. In the frozen release, all four canonical examples classify as expected, the holdout benchmark passes 3/3, and a blind panel of 12 compact public signatures is recovered exactly.