How are you tackling the challenge of scattered healthcare data?



CTO, 201 - 500 employees
We're early on. I have high hopes for FHIR because it covers the types of information that our labs will have; patient history is something FHIR lends itself to very nicely. I find it really encouraging that there are so many companies now in the space. The thing I found most daunting is that in the absence of these aggregators, you actually have to connect to all of the systems yourself and basically have an army of project managers. While we have some direct connections, my money is on working with companies that continuously and consistently expand their reach in terms of what types of data they have. Obviously you have to choose a partner that's secure; you want to be able to trust their privacy practices and their security practices above all else. Hopefully they also continue to be successful in growing their reach. Most companies only cover portions of the United States, which makes it difficult to ensure that you have reliable match rates. We're hoping that as the ecosystem matures, we'll see improvement there.
Managing Director, 1,001 - 5,000 employees

It's such a big problem and challenge. I hope in our lifetimes we see a solution to it. I know of a company that uses EHR data, and they have this challenge around harmonizing data. What they have done is create their own proprietary ontology where they have mapped outcomes. They have all of these feeders and they use machine learning to adapt. Basically what they've said is, “we're not going to wait for a big thing like FHIR to be adopted. We're going to do it ourselves.” That's an extreme measure that not every company can take. Hopefully we do get to a place where we can understand healthcare data across the country, and internationally too.
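To make the harmonization idea concrete, here is a minimal, purely hypothetical sketch (that company's pipeline isn't public, and the ontology entries below are invented): map locally coded outcome labels from different EHR feeds onto one shared ontology term, with a fuzzy fallback for labels that don't match exactly.

```python
# Purely hypothetical sketch -- not that company's pipeline. It illustrates
# the general idea of mapping locally coded outcome labels from different
# EHR feeds onto one shared ontology term, with a fuzzy fallback.
from difflib import get_close_matches
from typing import Optional

# Tiny stand-in for a proprietary outcomes ontology
ONTOLOGY = {
    "myocardial infarction": "OUTCOME:MI",
    "stroke": "OUTCOME:STROKE",
    "all-cause mortality": "OUTCOME:DEATH",
}

def harmonize(local_label: str) -> Optional[str]:
    """Map a feeder system's label to a canonical ontology code, if possible."""
    key = local_label.strip().lower()
    if key in ONTOLOGY:
        return ONTOLOGY[key]
    # Fuzzy fallback; a real system might use a learned model here instead.
    close = get_close_matches(key, ONTOLOGY.keys(), n=1, cutoff=0.8)
    return ONTOLOGY[close[0]] if close else None

if __name__ == "__main__":
    for label in ["Myocardial Infarction", "Stroke ", "acute MI", "fracture"]:
        print(f"{label!r:26} -> {harmonize(label)}")
```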

CIO/CTO, 1,001 - 5,000 employees

Healthcare is at that inflection point, right? People in the industry have suddenly woken up to these issues, and I think COVID also helped tremendously. It's an exciting time to be in healthcare, but there's a lot of work that needs to be done.

Managing Director, 1,001 - 5,000 employees
I have found that with this company, TriNetX, you can really reduce your time to clinical trial and figure out how to design it, because you can run a lot of these queries and get answers right off the platform. You can download the data as well and use it for analysis, but the platform itself gives you the ability to go ahead, put in queries, and get insights. It helps connect you to hospital systems that might have specific types of patients or the diversity in the population that you're looking for.
Director of Data in Healthcare and Biotech, 10,001+ employees
Disparate, heterogeneous datasets are certainly some of the most significant challenges in healthcare technology. I assert that this is the crux of numerous issues that emanate in all directions, some not so obviously connected. The lack of real-time, actionable, clean, and validated data is a substantial problem. Internally at Wilmot, we developed a system called "Hyperion" to tackle this issue. (For more information, please refer to our publication: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000036)

Our approach was based on the understanding that we could never fully eliminate the problem. However, we believed that if we could lower the skill floor enough to eliminate the need for multiple full-time equivalents and high skill sets to curate new data sources, we would make substantial progress. Therefore, we built a custom Python/Flask web application that lets us select datasets (provided we know the data format: SQL, XML, JSON, API, Oracle, etc.), make drag-and-drop connections, and have users define the update cadence (real-time, hourly, etc.). The system automatically generates all the necessary jobs and scripts in the background to keep the data sources continuously updated and connected.
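The sketch below is not Hyperion's actual code (the publication above describes the real system); it is only a minimal, hypothetical illustration of the pattern just described: register a source with a known format and update cadence, then generate a recurring sync job that keeps it current. The SourceConfig fields and the fetch callback are made up for the example.

```python
# Hypothetical sketch only -- not Hyperion's implementation. It illustrates
# registering a data source with a known format and update cadence, then
# generating a recurring job that keeps the source refreshed automatically.
import threading
import time
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SourceConfig:
    name: str             # e.g. "demo_source" (made-up source name)
    fmt: str              # "sql", "xml", "json", "api", ...
    cadence_seconds: int  # how often to refresh (hourly would be 3600)

def make_sync_job(cfg: SourceConfig, fetch: Callable[[], List[dict]]) -> Callable[[], None]:
    """Build a job that pulls from the source; format-specific logic sits behind `fetch`."""
    def job() -> None:
        rows = fetch()
        print(f"[{cfg.name}] refreshed {len(rows)} records ({cfg.fmt})")
    return job

def schedule(cfg: SourceConfig, job: Callable[[], None]) -> threading.Timer:
    """Arm a timer that runs the job after each interval and re-arms itself."""
    def run() -> None:
        job()
        schedule(cfg, job)  # re-arm for the next interval
    timer = threading.Timer(cfg.cadence_seconds, run)
    timer.daemon = True
    timer.start()
    return timer

if __name__ == "__main__":
    cfg = SourceConfig(name="demo_source", fmt="json", cadence_seconds=5)
    schedule(cfg, make_sync_job(cfg, fetch=lambda: [{"id": 1}, {"id": 2}]))
    time.sleep(12)  # let the demo refresh a couple of times before exiting
```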

The system goes a bit beyond that, with validation scripts running to check manually entered data, such as addresses in Epic. Since accurate address information is critical for real-time geospatial feeds, our system compares the entered addresses with the national address lists from the Census Bureau. We can immediately detect if an entry is questionable beyond certain confidence intervals and make corrections on the spot. We have expanded this validation process over the years. For example, in oncology, we have cancer registries and decision support tools in separate databases. By combining them, we can quickly check whether essential data like demographics, disease information, etc. are in alignment, and flag any discrepancies in real time.
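As a rough illustration of the address check (again, not the production validator; the reference addresses and the 0.85 threshold below are invented for the example), the idea is to compare a manually entered address against a reference list such as a Census Bureau extract and flag anything that falls below a similarity threshold.

```python
# Hypothetical sketch only -- not the production validator. It compares a
# manually entered address against a small reference list (a stand-in for a
# Census Bureau extract) and flags entries below a similarity threshold.
from difflib import SequenceMatcher
from typing import List, Tuple

REFERENCE_ADDRESSES = [  # invented examples standing in for a national list
    "601 ELMWOOD AVE ROCHESTER NY 14642",
    "265 CRITTENDEN BLVD ROCHESTER NY 14642",
]

def normalize(addr: str) -> str:
    """Uppercase, strip punctuation, and collapse whitespace for comparison."""
    return " ".join(addr.upper().replace(".", "").replace(",", "").split())

def best_match(entered: str, references: List[str]) -> Tuple[str, float]:
    """Return the closest reference address and its similarity score (0 to 1)."""
    entered_n = normalize(entered)
    scored = [(ref, SequenceMatcher(None, entered_n, normalize(ref)).ratio())
              for ref in references]
    return max(scored, key=lambda pair: pair[1])

def validate(entered: str, threshold: float = 0.85) -> None:
    match, score = best_match(entered, REFERENCE_ADDRESSES)
    status = "OK  " if score >= threshold else "FLAG"
    print(f"{status}  {entered!r} -> {match!r} (score {score:.2f})")

if __name__ == "__main__":
    validate("601 Elmwood Avenue, Rochester, NY 14642")  # close match, passes
    validate("61 Elmwod Av, Rochster")                    # low score, flagged
```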

The system isn't perfect in the slightest. However, it alleviates a considerable amount of burden and lowers the required skill level to the point where the team can develop and iterate significantly faster than it could before this system existed.
