
Artificial intelligence in healthcare is an operational reality today, reshaping how insurers process claims, how hospitals triage patients, and how clinicians diagnose disease. The pace of adoption is accelerating. According to a 2025 survey of 93 health insurers conducted by the National Association of Insurance Commissioners (NAIC), 92% of respondents are already using, planning to use, or actively exploring AI. Health insurance leads all lines of business in AI adoption, ahead of auto at 88%, home at 70%, and life insurance at 58%.
Where AI Is Already Delivering Results
AI's value in healthcare is not hypothetical. It is being demonstrated daily across clinical and administrative workflows. Traditional machine learning models, which predate the current wave of generative AI, remain deeply embedded in operations. The applications with genuinely proven traction are the unglamorous ones that nobody writes press releases about. Readmission risk prediction models allow providers and insurers to identify patients likely to return to the hospital shortly after discharge, enabling timely interventions. Mortality risk models estimate the probability of patient death within defined timeframes, supporting triage decisions that direct patients to the right level of care at the right moment in their health journey.
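To make the readmission example concrete, such models are often just a calibrated logistic score over a handful of discharge features. The sketch below is illustrative only: the feature names and coefficients are hypothetical, not drawn from any real deployment, where they would be fitted to historical discharge data.

```python
import math

# Hypothetical coefficients for illustration; a production model
# would be fitted and validated on historical discharge records.
COEFFS = {
    "intercept": -2.0,
    "prior_admissions": 0.45,   # admissions in the past 12 months
    "age_over_65": 0.60,        # binary age flag
    "los_days": 0.08,           # length of stay, in days
}

def readmission_risk(prior_admissions: int, age_over_65: bool, los_days: float) -> float:
    """Probability of readmission within the target window (logistic score)."""
    z = (COEFFS["intercept"]
         + COEFFS["prior_admissions"] * prior_admissions
         + COEFFS["age_over_65"] * (1 if age_over_65 else 0)
         + COEFFS["los_days"] * los_days)
    return 1 / (1 + math.exp(-z))
```

A patient with several prior admissions and a long stay scores materially higher than a first-time, short-stay patient, which is the signal a care team acts on at discharge.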
Generative AI has accelerated adoption in newer areas, particularly clinical documentation. Ambient listening tools now convert patient-provider conversations into text in real time, generating after-visit summaries and structured records that reduce the administrative burden on clinicians while simultaneously improving the quality of data available for downstream risk analytics. AI-driven triage of patient portal messages routes communications to the correct care team based on urgency, shortening response times and improving patient safety. Clinical documentation through ambient listening is the one genuinely transformative recent addition because it solves a problem physicians deeply hate: the two hours of note-writing after clinic that drives burnout more than almost anything else. Mammography AI has cleared regulatory bars and is reducing radiologist review time in real deployments. Discharge prediction and surgical scheduling models are delivering measurable operational improvements in systems that have the data infrastructure to support them.
Medical imaging represents one of the most mature and equity-relevant AI domains. Automated detection of abnormalities in mammography screening reduces physician review time, prompts earlier scrutiny of high-risk findings, and has been shown to reduce diagnostic delays with disproportionate benefits for underserved communities where access to specialist radiologists is limited.
The areas still facing real barriers are almost everything that sounds exciting in conference presentations. Agentic AI in clinical workflows is mostly pilot stage. Multimodal models combining imaging, notes, claims, and lab data sound powerful but require data integration work that most health systems have not completed. Voice diagnostics and scent-based detection are genuinely promising in research settings but nowhere near clinical deployment at scale. The barrier in almost every case is not the algorithm. It is data quality, integration, change management, and the clinical trust problem: getting physicians to actually use the output rather than ignore it.

Newsletter continues after job posts…
👔 New Actuarial Job Opportunities For The Week
Have you signed up to our weekly job alerts on Actuary List? We post 50 new handpicked jobs every week that match your expertise. To ensure you don’t miss out, sign up here. Here are a few examples of new jobs this week:
Travelers - USA - Manager Actuarial Product Specialty
Salary Range: $109k-$180k
AIG - USA - Actuary & AVP (P&C Pricing)
Salary Range: $199k-$260k
Swiss Re - Canada - Actuarial Analyst
Salary Range: $74k-$110k
Interested in advertising with us? Visit our sponsor page
Build, Buy, or Partner: Strategic Choices in AI Deployment
The strategic decision that faces every healthcare organisation considering AI investment is whether to build solutions in-house, purchase ready-made applications from vendors, or engage in collaborative partnerships. Each path carries distinct trade-offs that depend on the nature of the use case, available resources, time-to-value requirements, and expected return on investment.
Off-the-shelf solutions offer the fastest path to deployment, often operational within weeks rather than months, and are well-suited when the use case closely matches a vendor's existing offering and speed is critical. Internal builds are justified when the organisation possesses proprietary data that provides a genuine competitive edge, when in-house teams have the technical capacity to manage the full model lifecycle, or when cost-benefit analysis clearly favours self-development over licensing.
As organisational capabilities mature, the calculus between build, buy, and partner shifts, and healthcare organisations that treat it as a fixed policy rather than a context-dependent judgment risk either underinvesting in proprietary advantage or wasting resources on in-house development where vendor solutions are already superior.
What is actually delivering results in practice is bought solutions implemented with serious change-management investment on top, and internal builds in organisations that have spent years cleaning their data before they started modelling. The organisations that are failing are the ones that bought a solution, pointed it at messy data, got poor outputs, and concluded that AI does not work in healthcare. The technology was fine. The data was not ready.

Governance: The Non-Negotiable Foundation
The future of AI in healthcare depends less on technological capability than on governance. The main barrier to widespread adoption is not building or buying sufficiently powerful models; it is ensuring that those models are deployed safely, fairly, and transparently, and that they maintain the trust of clinicians, patients, and regulators over time.
Effective governance requires sustained multidisciplinary collaboration. Data scientists and clinical experts must work together from model ideation through production, with cybersecurity specialists, ethicists, and compliance officers also integrated into the review process. Before full deployment, every AI tool, whether built internally or sourced from a vendor, should undergo real-world piloting to expose performance gaps before broader release. Bias review and detection was described as a non-negotiable step: even models built without explicitly sensitive variables can embed discrimination through underlying data. A ZIP code, for example, may serve as a hidden proxy for race or socioeconomic status, inadvertently reinforcing the very inequities that healthcare AI is supposed to help address.
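One basic bias-review step can be sketched directly: compare average model scores across groups the model never saw explicitly. A material gap is a signal that some feature, such as ZIP code, is acting as a proxy. This is a minimal illustration, not a complete fairness audit, which would also examine calibration and error rates within each group; all names here are hypothetical.

```python
from collections import defaultdict

def score_disparity(scores, groups):
    """Largest gap in mean model score between any two groups.

    scores: model outputs per individual; groups: parallel list of
    group labels (from a holdout dataset where the protected attribute
    is known for audit purposes, even though the model never uses it).
    """
    by_group = defaultdict(list)
    for s, g in zip(scores, groups):
        by_group[g].append(s)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(means.values()) - min(means.values())
```

A disparity near zero does not prove fairness on its own, but a large one is a concrete, reviewable finding that a governance committee can act on before deployment.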
Once deployed, governance responsibilities do not end. Continuous monitoring is required to detect model drift, the gradual degradation in predictive accuracy as the real-world environment diverges from training conditions, and to ensure the tool is being used appropriately by its intended audience. Whereas early vendors were often reluctant to share development and validation details, regulatory pressure and FDA oversight of clinical AI models have moved the industry toward fuller disclosure of training data sources, bias review outputs, and monitoring procedures.
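One widely used way to quantify drift is the Population Stability Index (PSI), which compares the live score distribution against a baseline from validation. Below is a minimal pure-Python sketch; the ten equal-width bins and the common rule of thumb that PSI above 0.25 signals material drift are conventions, not requirements.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline score sample
    (e.g. validation scores) and live production scores.
    Rough convention: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(data):
        counts = [0] * bins
        for x in data:
            counts[sum(x > e for e in edges)] += 1
        # Floor at a tiny value so empty bins don't blow up the log term.
        return [max(c / len(data), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Running this weekly on production scores, and alerting when the index crosses the chosen threshold, is the kind of unglamorous monitoring that keeps a deployed model trustworthy.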

Emerging Capabilities: What Is Next on the Horizon
Looking beyond current deployments, multimodal AI models stand out as the most transformative near-term development. These systems combine structured claims data, laboratory results, unstructured clinical notes, ambient audio recordings, and medical imaging into unified analytical frameworks, enabling a level of patient understanding that no single data source can provide alone. The ability to extract social determinants of health or undocumented clinical indicators from narrative notes, filling the gaps that often limit comprehensive patient assessment, was highlighted as a particularly promising capability.
Several concrete near-term clinical opportunities were cited: retinal imaging integrated into AI-driven workflows to detect hypertension, cardiovascular disease, and potentially early-onset Alzheimer's; pharmacogenetics models to personalise drug dosing in depression, cancer, and blood disorders; and deep learning applied to ultrasound images for breast cancer detection in resource-limited settings, functioning as a first-pass triage tool to expand access in communities where specialist imaging is unavailable. Voice analysis for the detection of diabetes or early dementia, and novel data sources such as social media signals for identifying mental health trends, were discussed as longer-horizon possibilities requiring robust consent frameworks and data governance before deployment.
It is clear that fully automated clinical decision-making remains aspirational. AI should augment, not replace, clinical judgment, and all prescribing and treatment decisions must remain clinician-led. Federated learning, which trains models on data that never leaves its source institution, and synthetic data generation were identified as critical enablers for responsible innovation, allowing model development to proceed without compromising patient privacy.
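The privacy mechanics of federated learning can be illustrated with the aggregation step alone: each institution trains locally and shares only model weights, which a coordinator combines weighted by local sample counts (the FedAvg idea). This toy sketch assumes a simple flat weight vector; real systems add secure aggregation and many training rounds.

```python
def federated_average(local_weights, local_counts):
    """FedAvg-style aggregation across institutions.

    local_weights: one fitted weight vector per institution.
    local_counts: number of local training samples per institution.
    Patient records never leave their source; only weights move.
    """
    total = sum(local_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, local_counts)) / total
        for i in range(dim)
    ]
```

The weighting matters: a hospital contributing three times the data pulls the global model three times as hard, without ever exposing a single record.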

The Actuarial Profession's Expanding Role in Healthcare AI
Actuaries are exceptionally well-positioned to lead, but only if the profession embraces a broader mandate than traditional pricing and modelling.
Actuaries' core competencies (identifying trends, quantifying uncertainty, integrating business insight, and maintaining rigorous standards of professional accountability) map directly onto the governance and oversight challenges posed by healthcare AI. The profession's historical expertise in forming risk classes for insurance pricing carries with it a specific responsibility: ensuring that AI-driven models do not inadvertently introduce variables that violate legal or ethical boundaries with respect to protected classes such as race, ethnicity, gender, and sexual orientation. As AI expands into claims adjudication, fraud detection, workflow automation, and marketing, actuaries can provide the validation layer that keeps AI-driven decisions both technically sound and socially responsible.
AI has to be embedded within enterprise risk management frameworks, treated not as an isolated technological project but as a systemic risk category requiring organisation-wide governance. Actuaries, who already lead or participate in corporate ERM functions, are natural custodians of this broader oversight role. The actuarial profession should develop cross-disciplinary fluency in social sciences, behavioural economics, and human decision-making, equipping actuaries to anticipate the secondary social effects of data-driven decisions and to guide AI deployment in ways that serve both commercial and public interest.

Conclusion: Promise, Responsibility, and the Road Ahead
Actuaries are in a genuinely strong position here and are underplaying it. The core actuarial skills (quantifying uncertainty, understanding model limitations, thinking in distributions rather than point estimates, designing governance frameworks for long-tail risks) are exactly what AI deployment in healthcare needs, and almost nobody else in the AI ecosystem has been trained in them. A data scientist building a readmission model optimises for AUC and declares victory. An actuary asks what happens in the 5% of cases where the model is confidently wrong, what the downstream cost of those errors is, and whether the confidence intervals are being communicated honestly to the clinicians using the output. That second set of questions is the one that prevents harm.
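That "confidently wrong" question can be made operational: among the predictions the model is most sure about, how often is it actually wrong? The sketch below is a minimal version of that check; the 0.9 confidence threshold is illustrative, and a fuller analysis would attach a downstream cost to each error type.

```python
def confident_error_rate(probs, outcomes, threshold=0.9):
    """Error rate among high-confidence predictions.

    probs: predicted probabilities of the positive outcome.
    outcomes: observed 0/1 outcomes.
    A prediction is 'confident' when p >= threshold or p <= 1 - threshold.
    """
    confident = [(p, y) for p, y in zip(probs, outcomes)
                 if p >= threshold or p <= 1 - threshold]
    if not confident:
        return 0.0
    wrong = sum(1 for p, y in confident if (p >= 0.5) != bool(y))
    return wrong / len(confident)
```

A well-calibrated model should be wrong in well under 10% of its 90%-plus-confidence calls; if this rate comes back at 30%, the confidence being shown to clinicians is not honest, whatever the headline AUC says.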
The mindset shift required is significant though. Actuaries who are waiting for AI to come to them through a traditional modelling workflow are going to find themselves increasingly peripheral. The profession needs to be in the room when AI governance frameworks are being designed, when vendor contracts are being negotiated, when bias audits are being scoped, and when boards are being educated about AI as an enterprise risk category. None of those conversations start with a spreadsheet or a pricing model. They start with credibility built through genuine understanding of how these systems work and where they fail, combined with the communication skills to translate that understanding for a non-technical audience. Actuaries who develop that combination are going to be among the most valuable people in healthcare organisations over the next decade. Those who do not are going to watch other professions occupy territory that should have been theirs.


Last week we covered When the Algorithm Answers for Itself: AI Governance Regulations by Colorado for Insurance.
👉 If you missed last week’s issue, you can find it here.

💼 Sponsor Us
Get your business or product in front of thousands of engaged actuarial professionals every week.
💥 AI Prompt Of The Week
About This Prompt
Get a list of best practices for improving Excel performance – e.g. using fewer volatile functions, simplifying complex formulas, enabling manual calc, or moving data to a database – which can be critical when working with large models.
The Prompt:
My Excel workbook with thousands of formulas is running slow. How can I optimize this spreadsheet to make it calculate faster and avoid crashes?





