AInsure LLC - Revolutionizing Insurance with AI & Blockchain

@AInsure_CORA Yes, we’re always looking to collab and to be a dedicated energy provider, and AInsure is actually a very interesting TBL project to me.

Tronpower is also planning a “Tronpower Talk” video podcast / stream where we highlight other TBL projects with innovative platforms solving real-world problems.
The goal is to allow not only the TRON community but the wider crypto community to see innovations on TRON and to better connect with the founders, builders, and projects.

So yeah, we’re inviting your team members to “Tronpower Talk”. We’re currently sorting out everything, but we already have a varied list of projects ready to come onto “Tronpower Talk”.

2 Likes

Thanks for your post! Maybe there is an opportunity to plug CORA into your dapp, and we could plug your energy service into our CIS. We currently use energy sales as a revenue stream, and we would be more than happy to attend your Tronpower Talk! We can coordinate here, or you can DM us on X; I believe we follow your X account. You can also send me an email at chance@metzgerinsuranceagency.com.

3 Likes

Keeping fingers crossed in anticipation to see how this collab turns out.

3 Likes

One recommendation from @support.hackathon was: “Provide supporting data or early indicators of traction such as user testing, pilot programs, or partnerships.”

We have been testing with agents and carriers, as well as joining REV1, Ohio University, and others. But we would like to break this up and focus on the first part of the comment, providing supporting data about our project first, then talk more about testing and our partners in the coming weeks. We did market research with REV1, attached below.

AInsure LLC.pdf (3.4 MB)

Since hackathon support put this as one of their comments, we wanted to put even more focus on providing supporting data. To achieve this, we took advantage of being a member of the Ohio University Innovation Center and are tapping into OU students.

Below I have attached the email from our OU rep.

I’m reaching out to several of you at once to share a great opportunity for free assistance via some outstanding student groups looking for experiential learning. We’ve had good success with both of these groups. See below and please let me know asap if you think you have a need. If so, I can share more about next steps.

Ohio University Consulting Fellows

  • Top student consultants will run 8-12 week fully pro bono consulting projects from last week of August to mid-November

  • Suggested projects:

      ◦ Design a pilot program framework with an implementation roadmap

      ◦ Conduct industry benchmarking and competitor analysis

      ◦ Develop a market strategy or digital marketing plan

      ◦ Perform custom research or data analysis tailored to your needs

  • Past clients include Northwestern Mutual, Total Quality Logistics, Kenworth, and others

College of Business MGT 3730 Entrepreneurial Consulting class

  • Startups (at a developed stage), small businesses, not-for-profits, and existing companies with innovative or new activities all qualify

  • Projects launch 10/21 and finish in December

  • Variety of skillsets and project types available — please share your needs or any ideas regarding potential support you could use

I’m pleased to announce that we have been picked by the OUCF. Below is the email from our rep.

Good morning, Chance!

I’m happy to share that OUCF chose your project below to pursue. We can still submit the others to the management course as discussed. I will follow up with more information as soon as I receive it.

I submitted this with your business name anonymously for confidentiality, but anyone working on the project will sign an NDA.

Great momentum!

Ohio University Innovation Center client “A”

Client A is primed to revolutionize the insurance industry by infusing insurance processes and practices with cutting-edge AI technology to redefine insurance for the modern era. As the premier startup at the forefront of innovation, A will employ Artificial Intelligence (AI), Real World Assets (RWAs), and Web3 solutions to streamline operations and tasks for agents and carriers that are currently using traditional, time-consuming methods or multiple platforms. The company has already graduated from a premier startup bootcamp in the state, with invitations to apply for two accelerator programs, and is actively developing varied tech tools.

Client A is now in need of additional market research and analysis, including competitor analysis as well as what insure-tech tools, if any, agents and carriers are currently using, and what their features and costs are.

@support.hackathon, be on the lookout for more market research and supporting data in the coming months. We will continue to build and address all the comments and concerns you sent in DM. It will take some time, but we really would like to be accepted by TRON DAO as a partner and receive funding.

5 Likes

Client A Insure-Tech Project Scope.pdf (149.4 KB) Scope of the data research project.

2 Likes

AI-Campaign_Vendor_Landscape_V4.pdf (2.9 MB)

AI-driven vendors for independent insurance agents | Agent for the Future

Attached is the latest Liberty Mutual AI research article, where we have been included. @support.hackathon They would only allow us to be under one box, but our CIS is going to handle all of the above: agency ops, customer experience, sales assistance, coaching, retention, policy checking, and video proposals.

2 Likes

It’s been over two weeks since we posted an update, so we wanted to let the community know we are getting very close to the new website going live. Also, the market research OU is doing for us should be done in the coming days. In the meantime, we did a @tronpower.xyz interview you can check out here: Tronpower Talk - Episode 4: AInsure. We changed our framework to LangChain, so this was a big update, which is taking a little longer than expected.

3 Likes

Oh yeah, I was watching the session with TronPower while it was live. Wonderful session, and keeping fingers crossed for the new website. Good luck on everything :crossed_fingers:t2:

2 Likes

We are going to be waiting for your update.

Good morning, TRON community. As we wait for the Ohio University Consulting Fellows to finish up market research on top of our REV1 qualitative research, we wanted to start addressing TRON DAO’s next feedback point:

**Enhance developer documentation and include relevant security or technical disclosures to build confidence.** @support.hackathon

We are pleased to announce **CORE**!

**CORA Oversight & Regulation Engine (CORE)**

CORE lets CORA bend without breaking compliance:

  • Metaphorically flexible: domains can hot-swap prompts, models, or entire execution modes (local/remote/manifest) without code rewrites, yet every change is validated, approved, versioned, and auditable.
  • Strict adherence: immutable logs, signed configs, human approvals, LangGraph traces, and Mermaid diagrams provide clear chain of custody for regulators and internal auditors.
  • Expandable governance: if regulations evolve (e.g., new AI transparency mandates), we can add validation steps, approval rules, or additional logging fields without altering domain logic.
  • Client policy adaptation: per-client overlays let us enforce existing external regulations (e.g., banned model providers, mandatory prompts) with zero impact on the client’s workflows—no retraining, no policy changes required. Domains gracefully degrade via configured fallbacks while monitoring accuracy gaps.

In short, CORE behaves like a flexible joint: compliance constraints form the hinge, and dynamic configuration provides controlled motion. We can pivot quickly when client or regulatory expectations shift, but every movement is measured, logged, and reviewable—meeting the high bar of the insurance and InsurTech ecosystem.


## SOC 2 Type II Readiness Checklist

- Executive sponsorship and governance charter covering CORE scope and responsibilities (CC1)

  • Security policy framework approved, versioned, and reviewed at least annually

  • Roles, responsibilities, and segregation of duties documented for engineering, compliance, and operations

- Formal risk assessment program capturing identification, analysis, mitigation plans, and follow-up tracking (CC3)

- Communication and training plan ensuring staff understand SOC 2 obligations, security policies, and reporting channels (CC2)

- Vendor and subservice provider management program with due diligence, contracts, SLAs, and monitoring (CC4)

- Logical access controls with documented provisioning/deprovisioning, MFA, RBAC reviews, and quarterly access recertifications (CC6)

- Change management lifecycle covering design reviews, approvals, testing, separation of duties, rollbacks, and evidence retention (CC8)

- System operations procedures for logging, monitoring, alerting, backup verification, and capacity/performance management (CC5)

- Incident response plan with detection, triage, communication, root-cause analysis, and lessons-learned tracking (CC7)

- Business continuity and disaster recovery strategy with RTO/RPO targets, tested playbooks, and documented results (A1/A2)

- Data governance and retention program addressing classification, encryption (in transit/at rest), and disposal (PI1/PII for Confidentiality & Privacy)

- Privacy program (if Trust Service Criteria include Privacy) covering consent, data subject rights, and cross-border transfer controls (P1–P6)

- Confidentiality controls for sensitive client information, including need-to-know enforcement and secure transmission/storage (C1–C2)

- Availability commitments met via redundancy, uptime monitoring, SLAs, and continuity plans (A1–A4)

- Evidence collection workflow ensuring artifacts are gathered continuously over the audit period with ownership and storage defined

- Independent auditor engagement, scope confirmation, timeline, and bridge letters for any carve-out subservice organizations

- Management assertion drafted, reviewed, and aligned with controls in scope for the Type II period

## TODO

- Map each CORE control to the specific SOC 2 criteria (CC1–CC9, A, PI, C, P) and document coverage status

- Build evidence catalog template (control description, owner, frequency, storage location) and populate for all checklist items

- Automate evidence capture where possible (config snapshots, log exports, approval records) and schedule manual collections

- Align CORE observability dashboards with SOC 2 reporting needs (uptime, security events, change records)

- Draft customer-facing summary of CORE SOC 2 commitments for onboarding and due diligence packages
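One of the TODO items above calls for an evidence catalog template. A minimal sketch of what one catalog entry could look like, with illustrative field names drawn from the checklist (not a final schema):

```python
from dataclasses import dataclass, asdict

@dataclass
class EvidenceItem:
    """One row of the SOC 2 evidence catalog (illustrative fields only)."""
    control_id: str        # e.g. "CC6" for logical access controls
    description: str       # what the control does
    owner: str             # accountable person or team
    frequency: str         # how often evidence is captured
    storage_location: str  # where artifacts live

# Example entry for the quarterly access-recertification checklist item
item = EvidenceItem(
    control_id="CC6",
    description="Quarterly RBAC access recertification",
    owner="platform-engineering",
    frequency="quarterly",
    storage_location="evidence-locker/cc6/access-reviews/",
)
print(asdict(item)["control_id"])  # → CC6
```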

# CORA Oversight & Regulation Engine (CORE)

## Hybrid Domain Hot-Swapping Architecture

This document describes CORE—the hybrid, security-tiered architecture that powers our LangGraph-based insurance assistant. It captures all required components, control planes, observability hooks, and compliance guardrails. The design maximizes flexibility while preserving strict adherence to internal and external (client) regulations.

---

## 1. Architecture summary

1. **Config-driven governance layer** (PostgreSQL + signed artifacts)

  • `domains` table stores `name`, `security_rating`, `graph_mode`, `status`, `fallback_domain`, `model_hierarchy`, `manifest_version`, `prompt_version`, `llm_parameters`, `change_ticket_id`, `last_reviewed_by`, and JSONB metadata.

  • Version history tables persist immutable records of every config, manifest, and prompt change. Each entry includes hashes, automated validation results, and human approval signatures (two-person rule for `high`/`extreme`).

  • Prompts and manifests live in object storage with HSM-backed signatures; the DB keeps metadata, state, and references.

2. **Runtime loader and registry**

  • All domains inherit `ConfigDrivenDomain`, which enforces security tier behavior, model hierarchy selection, telemetry mix-ins, secrets injection, and hot-swapping.

  • `DomainRegistry` holds active domain instances, resolves client-specific policies, and orchestrates fallback routing.

  • Activation workflow: fetch config → verify signature → run smoke tests → log activation → swap orchestrator. Rollback to last-known-good on failure.

3. **Security tiers and hosting modes**

  • `low`: in-process module (shared LangGraph, same service, built from local code).

  • `medium`: separate graph per domain within the same service (isolation while remaining co-located).

  • `high`: orchestrator invokes a remote service via authenticated RPC/REST (domain logic runs in dedicated infrastructure).

  • `extreme` (and `test`/`dev`): dynamic graph composition at runtime from signed manifest + cloud storage assets (highest isolation & observability); debug logging enabled in `test`/`dev`.

  • `disabled`: domain blocked; traffic routed to configured fallback chain (default `research` if unspecified).

4. **Model hierarchy & prompt management**

  • Each domain/agent defines an ordered list of allowed LLM providers/models with optional prompt overrides and settings (temperature, max tokens, etc.).

  • Client policy overlays remove disallowed models/providers while preserving fallback behavior.

  • Prompt registry tracks versions and compatibility metadata; updates follow the same approval pipeline.
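The activation workflow described above (fetch config → verify signature → run smoke tests → log activation → swap, with rollback on failure) can be sketched roughly as follows. The function names, the in-memory registry, and the HMAC-based signing are illustrative assumptions, not the production API:

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stand-in; production uses HSM-backed signatures

def sign(config_bytes: bytes) -> str:
    return hmac.new(SIGNING_KEY, config_bytes, hashlib.sha256).hexdigest()

def verify_signature(config_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(config_bytes), signature)

class DomainRegistry:
    """Holds active domain configs; swaps only after validation succeeds."""

    def __init__(self):
        self.active: dict[str, bytes] = {}
        self.audit_log: list[str] = []

    def activate(self, name: str, config: bytes, signature: str,
                 smoke_test=lambda cfg: True) -> bool:
        # fetch config -> verify signature -> smoke tests -> log -> swap
        if not verify_signature(config, signature):
            self.audit_log.append(f"{name}: signature rejected")
            return False
        if not smoke_test(config):
            # rollback: last-known-good config in self.active stays untouched
            self.audit_log.append(f"{name}: smoke test failed, rolled back")
            return False
        self.active[name] = config
        self.audit_log.append(f"{name}: activated")
        return True

registry = DomainRegistry()
cfg = b'{"graph_mode": "local", "security_rating": "medium"}'
assert registry.activate("insurance", cfg, sign(cfg))
assert not registry.activate("insurance", cfg, "bad-signature")
```

Note the failure path never touches `self.active`, which is what makes rollback to last-known-good trivial in this sketch.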

---

## 2. Simplified graph layout by security tier

```mermaid
flowchart LR
    subgraph CoreLangGraph[LangGraph Runtime]
        direction TB
        memory[(Memory Subgraph)] --> triage{Triage Node}

        triage --> domainLow["Low Tier Domain<br/>(In-process code)"]
        triage --> domainMed["Medium Tier Domain<br/>(Local subgraph module)"]
        triage --> domainHigh["High Tier Domain<br/>(Remote service call)"]
        triage --> domainExt["Extreme Tier Domain<br/>(Dynamic manifest from cloud)"]

        domainLow --> finalize((Finalize))
        domainMed --> finalize
        domainHigh --> finalize
        domainExt --> finalize
    end

    classDef low fill:#d1f0ff,stroke:#0077b3,color:#000
    classDef med fill:#e6ffd1,stroke:#4f8a10,color:#000
    classDef high fill:#ffe6cc,stroke:#cc7a00,color:#000
    classDef extreme fill:#ffd1dc,stroke:#b30047,color:#000

    class domainLow low
    class domainMed med
    class domainHigh high
    class domainExt extreme

    %% Storage annotations
    domainLow -.->|Code| repo[(Source repo)]
    domainMed -.->|Config + code| pg[(PostgreSQL + local modules)]
    domainHigh -.->|Signed manifest + API contract| svc[(Remote domain service)]
    domainExt -.->|Signed manifest + prompts in cloud storage| bucket[(Cloud storage)]
```

- **Low tier** domain logic lives inside the main service’s source tree.

- **Medium tier** keeps code locally but uses per-domain graph builds driven by config.

- **High tier** executes via remote service endpoints (signed, audited contracts).

- **Extreme tier** materializes graphs dynamically from manifests and assets stored in signed cloud buckets.
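Under the assumption that each security rating maps to exactly one loading strategy, the routing decision reduces to a lookup plus a special case for disabled domains. The loader names below are hypothetical labels, not real module names:

```python
def resolve_loader(security_rating: str, fallback_domain: str = "research") -> str:
    """Map a domain's security rating to its hosting/loading strategy."""
    loaders = {
        "low": "in_process_module",       # shared LangGraph, local code
        "medium": "per_domain_subgraph",  # separate graph, same service
        "high": "remote_service_call",    # authenticated RPC/REST
        "extreme": "dynamic_manifest",    # signed manifest + cloud assets
        "test": "dynamic_manifest",       # same isolation, debug logging on
        "dev": "dynamic_manifest",
    }
    if security_rating == "disabled":
        # blocked domains route to the configured fallback chain
        return f"fallback:{fallback_domain}"
    return loaders[security_rating]

assert resolve_loader("high") == "remote_service_call"
assert resolve_loader("disabled") == "fallback:research"
```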

---

## 3. Operational controls & required parts

### 3.1 Configuration & workflow

- Draft config/manifest/prompt → automated validation (schema lint, unit/integration tests, security scans) → human approval → activation.

- Approvals link to ticket IDs and record multi-factor approver identity.

- Observability agents submit suggested changes through the same workflow (human-in-loop maintained).

### 3.2 Runtime enforcement

- Base class injects telemetry mix-ins, secret fetchers (Google Cloud Secret Manager), and security checks before executing domain logic.

- Domain activation events, fallback decisions, and model selections are logged and metered.

- Fallback matrix prevents cycles and ensures graceful degradation (e.g., insurance → legal → research).
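The cycle check in the fallback matrix can be done with a simple walk that refuses to revisit a domain. This is a sketch; the domain names follow the insurance → legal → research example above:

```python
def resolve_fallback_chain(start: str, fallbacks: dict[str, str]) -> list[str]:
    """Walk the fallback matrix, refusing to revisit a domain (no cycles)."""
    chain, seen = [start], {start}
    current = start
    while current in fallbacks:
        nxt = fallbacks[current]
        if nxt in seen:
            raise ValueError(f"fallback cycle detected at {nxt!r}")
        chain.append(nxt)
        seen.add(nxt)
        current = nxt
    return chain

# insurance -> legal -> research; research has no fallback, so it terminates
matrix = {"insurance": "legal", "legal": "research"}
assert resolve_fallback_chain("insurance", matrix) == ["insurance", "legal", "research"]
```

Running the same walk against a matrix like `{"a": "b", "b": "a"}` raises instead of looping, which is the property the config validator would enforce before activation.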

### 3.3 Secrets & client policy overlays

- Secrets (LLM keys, API tokens, DB credentials) stored in GCP Secret Manager; domain runtime fetches on demand with principle-of-least-privilege IAM.

- Client policies overlay domain/model restrictions without code changes; same workflow updates ensure auditability.
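A client policy overlay can be modeled as a pure filter over the model hierarchy, so no domain code changes are needed. Provider and model names here are placeholders:

```python
def apply_client_overlay(hierarchy: list[dict], banned_providers: set[str]) -> list[dict]:
    """Remove disallowed providers while preserving fallback order."""
    allowed = [m for m in hierarchy if m["provider"] not in banned_providers]
    if not allowed:
        raise ValueError("client policy removed every model in the hierarchy")
    return allowed

hierarchy = [
    {"provider": "provider-a", "model": "model-x"},
    {"provider": "provider-b", "model": "model-y"},  # configured fallback
]
# A client that bans provider-a is routed straight to the fallback model
assert apply_client_overlay(hierarchy, {"provider-a"})[0]["model"] == "model-y"
```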

### 3.4 Model hierarchy & monitoring

- Telemetry tags capture `model_name`, `prompt_version`, `fallback_used`, enabling performance comparisons against baselines.

- Shadow evaluations / A/B testing possible: run fallback model in parallel (outside user flow) to gauge quality before promotion.

- Underwriters can flag low-quality responses, feeding back into prompt tuning pipeline.
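The telemetry tags listed above could be attached to each run record roughly as follows; the record shape is an assumption, but it is enough to compute the fallback-rate metric the pipeline tracks:

```python
def tag_run(run_id: str, model_name: str, prompt_version: str,
            fallback_used: bool) -> dict:
    """Build the telemetry record used for baseline comparisons."""
    return {
        "run_id": run_id,
        "model_name": model_name,
        "prompt_version": prompt_version,
        "fallback_used": fallback_used,
    }

# Aggregate the fallback rate across runs, as the metrics pipeline would
runs = [tag_run("r1", "model-x", "v3", False),
        tag_run("r2", "model-y", "v3", True)]
fallback_rate = sum(r["fallback_used"] for r in runs) / len(runs)
assert fallback_rate == 0.5
```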

---

## 4. Logging, observability, and oversight

### 4.1 LangGraph tracing

- LangGraph runtime emits structured events for every node, tool, and LLM call (with `domain`, `node`, `model`, `prompt_version`, `config_version`, `client_id`, `security_rating`, `run_id`).

- Events stream to LangGraph Studio (or OSS UI) for run replay and step-by-step inspection.

- Custom callbacks mirror events to telemetry bus for long-term analysis.

### 4.2 Logfire structured logging

- Unified JSON schema for access logs, security logs, config changes, and LLM activity.

- Append-only storage with hashing and retention policies meeting insurance regulations (7–10+ years).

- Key LangGraph trace events mirrored into Logfire so replay evidence exists even outside the tracing UI.
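The append-only, hashed storage described above can be illustrated as a hash chain, where each entry commits to its predecessor so that tampering with history is detectable. This is a simplified sketch, not the Logfire API:

```python
import hashlib
import json

def append_entry(log: list[dict], payload: dict) -> dict:
    """Append a log entry whose hash covers the payload and the previous hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    entry = {"payload": payload, "prev": prev_hash,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Any tampering with an earlier entry breaks every later hash."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"payload": entry["payload"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"event": "config_change", "domain": "insurance"})
append_entry(log, {"event": "llm_call", "model": "model-x"})
assert verify_chain(log)
log[0]["payload"]["domain"] = "legal"   # tamper with history
assert not verify_chain(log)
```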

### 4.3 Per-run Mermaid graph snapshots

- Graph builder exports node/edge data; utility renders Mermaid diagram (as shown above) with metadata (ratings, fallbacks).

- Diagram and rendered SVG stored per run (`graph_version`, `run_id`) in audit storage.

- Run metadata includes a pointer to the Mermaid artifact so operators can switch between flow view and trace effortlessly.
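Exporting node/edge data to a Mermaid snapshot can be as small as the sketch below; the real exporter would also embed ratings and fallback metadata in the labels:

```python
def to_mermaid(edges: list[tuple[str, str]]) -> str:
    """Render a Mermaid flowchart definition from (source, target) edge pairs."""
    lines = ["flowchart LR"]
    lines += [f"    {src} --> {dst}" for src, dst in edges]
    return "\n".join(lines)

snapshot = to_mermaid([("memory", "triage"), ("triage", "insurance"),
                       ("insurance", "finalize")])
assert snapshot.splitlines()[1] == "    memory --> triage"
```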

### 4.4 Oversight and A/B testing

- Metrics pipeline (Prometheus/OpenTelemetry) tracks LLM usage, per-domain throughput, fallback rates, accuracy/latency deltas across models, security tool usage, and memory QA stats.

- Change-control UI allows safe experimentation through staged rollouts, shadow evaluation, and comparison dashboards.

- Evidence Locker viewer provides compliance teams with immutable logs, trace histories, Mermaid diagrams, and approval records for each change.

---

## 5. Internal tooling

| Tool | Purpose | Primary Users |
|------|---------|---------------|
| **Config & Governance Console** | Edit configs/manifests/prompts, view validation results, capture approvals, trigger rollbacks/disable domains. | Platform engineers, compliance, domain owners |
| **Observability & Ops Portal** | Monitor real-time metrics, alerts, and run traces; link to LangGraph replay and Mermaid diagrams. | SREs, ML operations, on-call staff |
| **Prompt & Model Tuning Lab** | Manage prompt hierarchy, run A/B or shadow tests, gather feedback, and submit change requests. | Prompt engineers, ML researchers |
| **Client Policy Dashboard** | View/manage per-client domain/model restrictions and fallback mapping. | Account managers, compliance |
| **Evidence Locker Viewer** | Search immutable logs, approvals, LangGraph traces, Mermaid snapshots; export for auditors/regulators. | Compliance, internal audit |

---

## 6. Security stance and mitigations

| Risk / Requirement | Mitigation |
|--------------------|------------|
| Unauthorized config changes | Signed artifacts, workflow enforcement with multi-factor approvals, immutable version history |
| Secret leakage | Google Cloud Secret Manager, least-privilege IAM, no secrets in repo or logs |
| Domain compromise | Security ratings enforce hosting isolation; remote services use mTLS + signed contracts; fallback routes isolate issues |
| Prompt/model drift | Hierarchy monitoring, fallback performance metrics, human-in-loop prompt updates, shadow evaluations |
| Observability tampering | Dual logging (LangGraph + Logfire) with append-only storage, hashed logs, long-term retention |
| Regulatory audits | Evidence Locker, Mermaid snapshots per run, documented approvals, chain-of-custody metadata |

---

## 7. Flexibility with compliance

CORE bends without breaking compliance:

- **Metaphorically flexible**: domains can hot-swap prompts, models, or entire execution modes (local/remote/manifest) without code rewrites, yet every change is validated, approved, versioned, and auditable.

- **Strict adherence**: immutable logs, signed configs, human approvals, LangGraph traces, and Mermaid diagrams provide clear chain of custody for regulators and internal auditors.

- **Expandable governance**: if regulations evolve (e.g., new AI transparency mandates), we can add validation steps, approval rules, or additional logging fields without altering domain logic.

- **Client policy adaptation**: per-client overlays let us enforce existing external regulations (e.g., banned model providers, mandatory prompts) with zero impact on the client’s workflows—no retraining, no policy changes required. Domains gracefully degrade via configured fallbacks while monitoring accuracy gaps.

In short, CORE behaves like a flexible joint: compliance constraints form the hinge, and dynamic configuration provides controlled motion. We can pivot quickly when client or regulatory expectations shift, but every movement is measured, logged, and reviewable—meeting the high bar of the insurance and InsurTech ecosystem.

And here is what GPT found comparable to CORE… it got compared to Microsoft, Salesforce, and IBM straight out of the gate:

Regulated FinServ reference implementations (public case studies)
Examples:

  • Capital One’s “Responsible AI governance architecture” (AWS re:Invent 2023 talk).
  • JP Morgan’s “Model Risk Management for Generative AI” blueprint (Risk.net 2024).

Parallels: Multi-tier isolation, strict change control with ticket linkage, fallback hierarchies across approved models, and immutable logging targeted at regulatory audits.

Differences: Typically built on internal orchestration frameworks or AWS Bedrock guardrails; graph-based routing like CORE is less emphasized.

Takeaways for CORE
There’s a growing pattern of policy-driven orchestrators with artifact signing, dual observability, and human approvals—the same pillars CORE prioritizes.
Where others lean on specific cloud ecosystems, CORE’s LangGraph-centric approach gives you more runtime flexibility (dynamic manifests, per-domain hot-swaps) but still aligns with the same governance principles.

2 Likes

I think CORE should be entered as a solo project in TBL, don’t you think?

No, I don’t believe so. From my understanding, TBL is more for a company to get backing, not like a hackathon where you focus on one project. We look forward to continuing to build and getting TRON DAO to take notice. We’re a small team, but the foundation is about ready. We have also started working with CORA for coding, which is going very well and will help speed up updates.

I get that, my bad!
But your recent update just looked like a separate submission on its own, hence my suggestion.

2 Likes

It was a big update for our backend. We are excited for the future of AInsure and CORA in the TRON ecosystem. We have something very cool in the works for the community that will work with sunperp.

Looking forward to future updates.

2 Likes

Our CTO Scott Root was named a Retool Community Champion for his work with Retool and AInsure. Find out more here: 🏆 Introducing our Retool Community Champions - 🤗 Community Happenings - Retool Forum

We also wanted to let the community see some data on CORA from our latest update. @support.hackathon

4 Likes

Great job by the CTO!

This is a good example of how to use tech properly.

Congrats!

1 Like

The graphs in the latest update are looking so colorful. I must commend you for the graphics, and congratulations to your CTO.

1 Like

It is amazing how hard you have been working, and congratulations to your CTO Scott.

1 Like