Jeffrey Sullivan has served as chief technology officer for Consensus Cloud Solutions (formerly part of J2 Global) since February 2019. Prior to joining Consensus, Sullivan was chief technology officer for Demandforce (formerly Internet Brands) and vice president of technology for the health market segment at Internet Brands.
Sullivan is a professional writer in the areas of technology and creative writing, with more than 200 published magazine articles and book chapters to his credit.
JS: We saw the impact of this technology within a state-run healthcare organization that was managing thousands of incoming unstructured documents. It needed a better way to manage Medicare and Medicaid insurance claims for processing and approval. Natural language processing (NLP) and AI located the claim forms within the trove of documents, extracted and structured the most important information, and then routed that information to the most appropriate workflow.
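The pipeline Sullivan describes, classify an incoming document, extract its key fields, then route it to a workflow, can be sketched in miniature as follows. This is an illustrative sketch only: the keyword rules, field labels and queue names are invented for the example, and a production system would use trained NLP models rather than regexes.

```python
import re

# Hypothetical destination queues; a real system would hand off
# to a workflow or case-management engine.
WORKFLOWS = {"claim": "claims-review-queue", "other": "manual-triage-queue"}

def classify(text: str) -> str:
    """Naive keyword classifier standing in for an NLP model."""
    keywords = ("medicare", "medicaid", "claim")
    return "claim" if any(k in text.lower() for k in keywords) else "other"

def extract_fields(text: str) -> dict:
    """Pull a few illustrative fields with regexes (assumed labels)."""
    fields = {}
    for label in ("Member ID", "Provider", "Claim Amount"):
        match = re.search(rf"{label}:\s*(.+)", text)
        if match:
            fields[label] = match.group(1).strip()
    return fields

def route(text: str) -> tuple[str, dict]:
    """Classify the document, structure its data, pick a workflow."""
    doc_type = classify(text)
    data = extract_fields(text) if doc_type == "claim" else {}
    return WORKFLOWS[doc_type], data

# Example: an unstructured fax becomes structured, routed data.
fax = "Medicaid claim\nMember ID: 12345\nClaim Amount: $250.00"
queue, data = route(fax)
```

The point of the sketch is the shape of the flow, not the rules themselves: once classification and extraction are automated, the same document stream that once required manual triage lands in the right queue with its data already structured.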
JS: For many organizations, skepticism often accompanies the acquisition of AI solutions, but skepticism should not deter progress. Paired with human and automated review mechanisms, AI can build trust. It can be piloted on high-effort manual workflows (e.g., extracting data from fax transmissions) to deliver early benefits. AI can deliver organizational value incrementally rather than shooting for the stars with the first implementation.
JS: There is no silver bullet here. The best way to drive adoption and trust is transparency: share the results, warts and all. Start small, establish guardrails and build on successes. Adapt the technology to existing workflows, not the other way around. When technology improves outcomes, especially without forcing major changes to existing workflows, trust builds and adoption follows. It’s a win for everyone, including patients.
JS: Biases in AI models are mostly the result of poorly selected training populations that are not representative of the population the model will serve. Our industry must keep the roots of racial and gender biases in mind when examining the models and algorithms underlying AI systems. Much like the documentation of treatment populations in drug studies, documentation of how an algorithm was derived should be part of the evaluation process for AI systems.
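One concrete way to act on this documentation idea is to measure how a training set's demographic mix compares with the population the model will serve. A minimal sketch (the group labels and population shares below are invented for illustration):

```python
from collections import Counter

def representation_gap(train_labels, population_shares):
    """Compare a training set's group mix to the target population.

    train_labels: one group label per training example.
    population_shares: dict mapping group -> expected share (sums to 1.0).
    Returns group -> (training share - population share); large negative
    values flag under-represented groups worth documenting and fixing.
    """
    counts = Counter(train_labels)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in population_shares.items()}

# Hypothetical example: group "B" makes up 40% of the target
# population but only 10% of the training data.
gaps = representation_gap(
    ["A"] * 90 + ["B"] * 10,
    {"A": 0.6, "B": 0.4},
)
```

A table of such gaps, recorded alongside the model, is the algorithmic analogue of documenting the treatment population in a drug study: it does not remove bias by itself, but it makes the basis of the model inspectable.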
JS: AI poses several challenges that the industry has not had to address in the past. The AI itself should adhere to the same standards as any other protected health information (PHI) application, but the implementation of responsible and ethical AI practices must also be addressed. Best practices such as the HITRUST CSF certification and similar security frameworks should be applied to both the development systems producing the AI solutions and the production environments in which they run.
JS: A quick Google search for “AI partner” yields about 865 million results, which can make selecting one feel daunting. Healthcare is unlike many other industries: it’s personal, which is why you need the right partner for you. Any AI partner must understand the nature of your business; support the high bar for protecting electronic protected health information (ePHI); have a proven track record of innovation; and possess the ability to automate and understand your business model.
JS: AI will take some time to permeate all areas of healthcare. Administrative functions will adopt it first, including order requests for equipment, case prioritization and routing, and linking images such as faxes, scans and PDFs to patient records. Early adopters will still blend an element of human touch with more fully integrated AI applications. Over time, we will see epidemiological models for early diagnosis bearing more and more fruit.
To learn more, visit www.consensus.com