Responsible AI Policy

Service: ThroughTheirEyes.ai

Effective: 1 December 2025

Operator: I4HUM LIMITED (New Zealand)

ThroughTheirEyes.ai ("TTE") is designed to help people explore their family history through carefully controlled, ethically constrained, AI-powered ancestral personas.

This document explains how we use AI responsibly, the guardrails we apply, and the commitments we make to users and their families.

Our approach is grounded in three principles:

  1. Respect for the living
  2. Integrity toward the past
  3. Transparency in the present

1. Our AI Philosophy

Genealogy sits at the intersection of identity, history, and deeply personal memory. When AI is applied to this domain, it must be:

  • privacy-preserving
  • historically grounded
  • emotionally safe
  • interpretable
  • transparent in what it knows and what it invents

Our system is designed explicitly to avoid fabricating real people, protect families, and make inferences visible through metadata and provenance cues.

2. What AI Does (and Doesn't) Do in TTE

2.1 What AI Does

AI is used to:

  • convert cleaned GEDCOM data into an ancestor persona
  • apply structured persona blueprints (traits, worldview, voice)
  • generate conversational responses based on historical context
  • make narrative inferences within guardrails
  • surface provenance cues (see the sketch after this list) to distinguish:
      • [GEDCOM-derived facts]
      • [persona blueprint traits]
      • [LLM inference / imagination] (coming soon)
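For illustration only, the sketch below shows one way such provenance tags could be represented. The type and field names are assumptions made for this example, not our production schema:

    // Illustrative sketch of provenance-tagged persona output.
    // Type and field names are assumptions, not the production schema.
    type ProvenanceTag =
      | "gedcom-fact"        // [GEDCOM-derived facts]
      | "blueprint-trait"    // [persona blueprint traits]
      | "llm-inference";     // [LLM inference / imagination] (coming soon)

    interface ResponseSegment {
      text: string;          // the sentence or clause shown to the user
      provenance: ProvenanceTag;
      sourceRef?: string;    // e.g. a GEDCOM record reference, where one exists
    }

    // A persona reply can then be rendered as an ordered list of tagged
    // segments, allowing the interface to badge each part with its origin.
    type PersonaReply = ResponseSegment[];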

2.2 What AI Never Does

AI is never used to:

  • create or preserve data about living people
  • guess, infer, or reconstruct living individuals
  • train or fine-tune third-party AI models
  • identify users or link user actions to PII
  • cross-contaminate data between users or workspaces
  • build persistent profiles of users' behaviour

3. Model Governance

3.1 Direct Model Providers Only

We use direct API integrations with:

  • OpenAI
  • Anthropic

We do not use model routing services or intermediaries (e.g., OpenRouter) for genealogical conversations.

3.2 No-Training, No-Retention

We enforce the following for all inference traffic:

  • no training
  • no dataset contribution
  • no retention

These settings are actively configured at the API level.

3.3 Regional Awareness

We route data only through providers operating in jurisdictions with strong privacy protections.

We do not send genealogical data to providers in high-risk or unclear jurisdictions.

4. Data Minimisation & Privacy by Design

4.1 Pseudonymous Internal Architecture

Inside the TTE backend:

  • users exist only as workspace ID + user ID
  • both IDs originate from Clerk and cannot identify you without Clerk
  • no email, name, or identifying metadata is stored in our systems
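As a minimal sketch of this pseudonymous keying model (the interface and field names below are illustrative assumptions, not our actual schema):

    // Illustrative sketch: backend records are keyed only by opaque IDs.
    interface WorkspaceScopedRecord {
      workspaceId: string;  // opaque identifier issued by Clerk
      userId: string;       // opaque identifier issued by Clerk
      payload: unknown;     // encrypted genealogical or conversational content
      // No email, display name, or other identifying metadata is stored here.
    }

Without access to Clerk's own mapping, these identifiers cannot be resolved back to a person.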

4.2 Automatic Removal of Living Persons

Every GEDCOM file is cleaned before processing:

  • If the birth date is less than 110 years ago and there is no death date → treated as living
  • Records explicitly marked "living" → removed
  • Files failing safety checks → rejected
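A minimal sketch of the screening rule described above, in illustrative TypeScript (the record shape and helper name are assumptions, and the handling of missing birth dates shown here is a conservative assumption for the example):

    // Illustrative sketch of the living-person screening rule.
    interface GedcomIndividual {
      id: string;
      birthYear?: number;
      deathYear?: number;
      markedLiving?: boolean;  // an explicit "living" marker in the source file
    }

    const LIVING_THRESHOLD_YEARS = 110;

    function isTreatedAsLiving(person: GedcomIndividual, currentYear: number): boolean {
      if (person.markedLiving) return true;              // explicitly marked living → removed
      if (person.deathYear !== undefined) return false;  // recorded death → deceased
      if (person.birthYear === undefined) return true;   // assumption: unknown dates handled conservatively
      return currentYear - person.birthYear < LIVING_THRESHOLD_YEARS;
    }

Individuals for whom this check returns true are removed before any persona processing begins.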

4.3 Encrypted Genealogy Storage

GEDCOM files:

  • stored in Cloudflare R2
  • encrypted at rest using quarterly rotated AES-256 keys
  • decrypted only at runtime
  • deleted when you delete them
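As a generic sketch of what AES-256 encryption at rest can look like (this uses Node's standard crypto module purely for illustration; it is not our key-management implementation, and key storage, quarterly rotation, and R2 integration are deliberately omitted):

    // Generic illustration of AES-256-GCM encryption at rest (Node.js crypto).
    import { createCipheriv, randomBytes } from "node:crypto";

    function encryptGedcom(plaintext: Buffer, key: Buffer): Buffer {
      const iv = randomBytes(12);                             // unique nonce per file
      const cipher = createCipheriv("aes-256-gcm", key, iv);  // key is 32 random bytes
      const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
      const authTag = cipher.getAuthTag();                    // verified on decryption
      return Buffer.concat([iv, authTag, ciphertext]);        // the object written to storage
    }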

4.4 Conversations

Conversation data:

  • encrypted at rest
  • tied only to pseudonymous IDs
  • not readable by platform operators
  • scanned transiently by automated safety systems (nothing stored)

5. AI Safety Guardrails

5.1 Abuse & Safety Screening

We apply lightweight, automated safety filters to:

  • prevent harassment
  • avoid harmful genealogical fantasies (e.g., violence, exploitation)
  • block attempts to recreate living individuals
  • detect patterns of attempted misuse (without profiling users)

These checks operate on temporary, in-memory representations that are never logged.

5.2 Hallucination Boundaries

LLMs may fill narrative gaps using historically plausible context (e.g., common mining practices, regional conflicts, cultural rituals).

We limit AI behaviour to:

  • historically plausible interpretation
  • personality-driven inference
  • narrative colouring within constraints

We explicitly prevent the model from:

  • inventing relationships not present in the GEDCOM
  • creating timelines that contradict the user's data
  • asserting unverifiable claims as fact

A future enhancement will mark [AI Inference] explicitly in output.

5.3 Ethical Use of Ancestor Personas

We enforce strict eligibility rules to ensure ancestor personas are used respectfully:

  • Only deceased individuals may be selected as personas
  • Individuals who died within the last 50 years are ineligible
  • Parents of living persons cannot be recreated
  • Living individuals are automatically removed from all uploaded GEDCOM files

These rules protect the emotional well-being of living family members and ensure the platform remains a tool for historical reflection, not recent memory recreation.
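For illustration, the eligibility rules above can be expressed as a simple check (the function and field names are assumptions for this sketch, not the production implementation):

    // Illustrative sketch of the persona-eligibility rules listed above.
    interface CandidateAncestor {
      deathYear?: number;           // undefined → no recorded death
      hasLivingChildren: boolean;   // derived from the cleaned GEDCOM data
    }

    const MIN_YEARS_SINCE_DEATH = 50;

    function isEligibleAsPersona(person: CandidateAncestor, currentYear: number): boolean {
      if (person.deathYear === undefined) return false;                          // only deceased individuals
      if (currentYear - person.deathYear < MIN_YEARS_SINCE_DEATH) return false;  // died within the last 50 years
      if (person.hasLivingChildren) return false;                                // parents of living persons excluded
      return true;
    }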

6. The Persona Blueprint System

Every persona uses a structured blueprint containing:

  • voice, tone, worldview
  • demographic and cultural context
  • historical constraints
  • emotional style
  • guardrails around what the persona can and cannot claim

Blueprints ensure:

  • consistency (no shifting personalities)
  • coherence (anchoring to data)
  • explainability (where the model's behaviour comes from)

This framework is part of our Responsible AI strategy and is shared openly.
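As an illustration of what a blueprint's structure might look like (field names are assumptions chosen for this sketch, not the published schema):

    // Illustrative sketch of a persona blueprint's shape.
    interface PersonaBlueprint {
      voice: string;                   // register, vocabulary, dialect hints
      tone: string;
      worldview: string;
      demographicContext: string;      // era, occupation, region, culture
      historicalConstraints: string[]; // facts the persona must not contradict
      emotionalStyle: string;
      claimGuardrails: string[];       // what the persona may and may not assert
    }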

7. Synthetic Personas for Safety

Our public demo uses fully synthetic historical personas to avoid any privacy or cultural risks.

Even when real GEDCOM files are used:

  • personas are grounded in your data
  • expressive behaviour is always mediated through the blueprint
  • nothing can resurrect or reconstruct living individuals

8. Optional Memory System (Ethical by Design)

The memory system is currently disabled for the public demo. When enabled (opt-in), it provides richer, more coherent conversations by allowing an ancestor persona to remember information within your private workspace.

Memory exists to support continuity — for example, helping the persona recall who you are, what you've previously told them, and details that make the interaction feel more natural.

Importantly, memory operates only within your private workspace and is further isolated per chat session: each ancestor persona remembers only what occurs within its own conversation thread. Memory cannot identify you, has no access to your Clerk identity, and cannot reveal who you are without Clerk. Workspace-level encryption prevents memory items from being accessed outside your secure environment.
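As a sketch of how this isolation can be expressed (the names are assumptions for illustration; the point is that every memory item is scoped to one workspace, one persona, and one chat session):

    // Illustrative sketch of memory scoping.
    interface MemoryItem {
      workspaceId: string;  // opaque Clerk-issued ID, carries no personal identity
      personaId: string;    // the ancestor persona the memory belongs to
      sessionId: string;    // memory never crosses chat sessions
      content: string;      // encrypted at rest with workspace-level keys
      createdAt: string;
    }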

What Memory May Contain (Only What You Introduce)

If you enable memory, the system may store:

  • episodic conversational details (e.g., things you told the persona earlier)
  • conversation-specific context that is isolated to the individual chat session
  • place names and geographic references mentioned during dialogue
  • contextual details that help maintain continuity ("You said your name is David")
  • relationships or individuals you explicitly reference (including living individuals, if you voluntarily mention them)
  • inferred associations that arise naturally during the conversation (e.g., linking a place you mention with a known historical location)

These details exist only so the same ancestor persona can maintain coherence over time.

What Memory Will Never Contain

Memory will never include:

  • living individuals automatically extracted from GEDCOM files (our system removes these before persona creation)
  • information drawn from other users, other workspaces, or other personas
  • data inserted by staff or system operators
  • analytics, tracking identifiers, or marketing data
  • content used to train AI models (either internal or third-party)

Memory is not a profiling system, and it is not used for behavioural analysis.

How Memory Is Protected

When the feature is enabled, all memory data is:

  • encrypted at rest using rotating encryption keys (with additional per-session isolation)
  • isolated per workspace (never shared across users or personas)
  • inaccessible to platform operators in decrypted form
  • never transmitted to AI providers for training
  • stored only for your private use within your workspace

The system is designed so that even if memory contains living-person details you voluntarily introduced, this information cannot leave your encrypted workspace.

User Control

You maintain full control over memory at all times:

  • You may opt out entirely (default for demo users).
  • You may clear memory for a persona at any time.
  • You may delete your workspace, which permanently removes all memory.
  • You may request deletion under GDPR / NZ Privacy Act.
  • Memory never persists outside your private workspace unless you choose to submit a conversation publicly.

When Memory Is Used

Memory is only used:

  • to improve continuity within your own conversations,
  • with the same ancestor persona,
  • inside your isolated workspace,
  • within the boundaries of a single chat session (no cross-session carryover),
  • and only when you opt in to the feature.

It is not used for global learning, analytics, or model refinement.

9. Human Oversight & Accountability

We maintain:

  • strict RBAC for platform operators
  • zero ability to view decrypted content
  • no profiling, user scoring, or behavioural targeting
  • full audit trails for system actions (pseudonymous)
  • professional review of model behaviour for fairness & safety
  • DPIAs and risk reviews for all AI features
  • adherence to ISO/IEC 23894 and 42001 principles where applicable

No automated decision-making about users occurs anywhere in the system.

10. Transparency & Explainability

You will always know:

  • what data was used to create a persona
  • what part of a response is factual, blueprint-based, or inferred
  • how your data is processed
  • how to delete your data
  • how to opt out of features

We also publish:

  • "How We Built Will" (our transparency exemplar)
  • named concept definitions for AI-first discoverability
  • clear provenance metadata in persona output

11. User Responsibilities

By using the Service, you agree to:

  • respect others' families and history
  • not attempt to reconstruct living individuals
  • use AI-generated content ethically
  • avoid harmful or abusive conversations

You are also responsible for the provenance and accuracy of data you upload.

12. Contact

For AI safety concerns, inquiries, or appeals:

Responsible AI Lead
I4HUM LIMITED
Waikanae 5391, New Zealand
Email: responsibleai@i4hum.com
