Data Lake

A data lake is a central repository for storing structured and unstructured data from many sources. The concept is akin to a lake fed by multiple streams: data flows in from various sources, is stored as is in a single reservoir, and then flows out to the applications across an organization that need it.

Functions of a Data Lake

Data Ingestion

The purpose of this step is to set up ingestion pipelines for the various kinds of data that need to be stored. In the clinical context, these may include the text of clinical encounters from the EHR, imaging data from the PACS system, and laboratory observations; there is also the potential to create a streaming pipeline that ingests HL7 messages as they are generated (a minimal sketch follows the tool list below).

Tools

  • Apache Flume [1]
  • Apache Kafka [2]
  • Apache Storm [3]
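
For example, an HL7 streaming step might look like the following sketch, which uses the kafka-python client to publish a single HL7 v2 ADT message to a Kafka topic; the broker address, topic name, and sample message are illustrative placeholders.

  # Publish one HL7 v2 message to a Kafka topic feeding the data lake.
  from kafka import KafkaProducer  # kafka-python client

  producer = KafkaProducer(
      bootstrap_servers="kafka.example.org:9092",    # assumed broker address
      value_serializer=lambda m: m.encode("utf-8"),  # send messages as UTF-8 bytes
  )

  # A fictional ADT (admit) message, one segment per line, joined with carriage returns.
  hl7_adt = "\r".join([
      "MSH|^~\\&|EHR|HOSPITAL|LAKE|HOSPITAL|202010270800||ADT^A01|MSG0001|P|2.5",
      "PID|1||123456^^^HOSPITAL^MR||DOE^JANE||19800101|F",
      "PV1|1|I|ICU^101^A",
  ])

  # Each encounter event becomes one record on the ingestion topic.
  producer.send("hl7-adt-events", hl7_adt)
  producer.flush()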

Data Storage and Retention

The purpose of data lake storage is to provide a central repository for multiple data types, both structured and unstructured, so that, for example, image files and vital signs can be stored in the same location. Advantages of a cloud-based storage service, such as AWS or Azure, include cheaper long-term storage and the ability to scale easily as more space is required (a minimal upload sketch follows the tool list below).

Tools

  • Apache Hive [4]
  • Apache HBase [5]
  • MapR-DB [6]
  • Azure Data Lake Storage [7]
  • AWS Data Lake Storage [8]
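
For example, landing a raw file in cloud object storage used as the lake might look like the following sketch, which uses the boto3 client to upload a DICOM image to an S3 bucket as-is; the bucket name, key prefix, and file name are illustrative placeholders.

  # Store a raw imaging file in the lake's object store without transforming it.
  import boto3

  s3 = boto3.client("s3")  # credentials are taken from the environment

  s3.upload_file(
      Filename="chest_xray_0001.dcm",                    # local raw file
      Bucket="example-clinical-data-lake",               # assumed bucket name
      Key="raw/imaging/2020/10/27/chest_xray_0001.dcm",  # stored as-is under a raw/ prefix
  )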

Data Processing

A number of distributed data processing tools, largely based on the Hadoop and Spark frameworks, exist to run extract, transform, and load (ETL) jobs that clean the data for downstream users. During this phase a variety of transformations can take place, including machine learning pipelines that analyze the different types of data (a minimal ETL sketch follows the tool list below).

Tools

  • MapReduce - a programming model and an associated implementation for processing big data [9]
  • Apache Hive - a data warehouse system [10]
  • Apache Spark - a unified analytics engine for large-scale data processing [11]
  • Apache Storm - an open-source distributed real-time computation system [12]
  • Apache Drill - a schema-free SQL query engine for Hadoop, NoSQL, and cloud storage [13]
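
For example, a simple ETL job over laboratory observations might look like the following PySpark sketch, which reads raw CSV extracts from the lake, applies light cleaning, and writes a curated Parquet table back for downstream users; the paths and column names are illustrative placeholders.

  # Clean raw lab observations and write a curated table back to the lake.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("lab-observation-etl").getOrCreate()

  # Extract: read the raw CSV files as-is from the lake's raw zone.
  raw = spark.read.option("header", True).csv("s3a://example-clinical-data-lake/raw/labs/")

  # Transform: normalize test names, cast results to numbers, drop rows without a result.
  curated = (
      raw.withColumn("test_name", F.lower(F.trim(F.col("test_name"))))
         .withColumn("result_value", F.col("result_value").cast("double"))
         .dropna(subset=["result_value"])
  )

  # Load: write the curated output as Parquet into the lake's curated zone.
  curated.write.mode("overwrite").parquet("s3a://example-clinical-data-lake/curated/labs/")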

Data Access

Data access refers to how downstream users and applications query the data held in the lake. Several of the tools listed above, such as Apache Hive, Apache Drill, and Spark SQL, provide SQL interfaces directly over data stored in the lake (a minimal query sketch is shown below).
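
For example, an analyst might query the curated laboratory data from the processing sketch above using Spark SQL; the path, view name, and column names are illustrative placeholders.

  # Query curated lake data with standard SQL through Spark.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("lab-access").getOrCreate()

  # Expose the curated Parquet data as a temporary SQL view.
  spark.read.parquet("s3a://example-clinical-data-lake/curated/labs/") \
       .createOrReplaceTempView("labs")

  # Mean result per test across the curated observations.
  spark.sql("""
      SELECT test_name, AVG(result_value) AS mean_result
      FROM labs
      GROUP BY test_name
  """).show()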


Difference from Data Warehouse

A data warehouse is a database for relational data, in which the data is extracted, cleaned, and transformed before being stored in a pre-defined schema. This data is optimized for fast SQL queries.

A data lake stores raw data from both relational and non-relational sources without forcing it into the constraints of a single database schema. Depending on the analytics required by different areas of the organization, the extract, transform, and load steps are performed within the data lake and the results are distributed to clients according to their needs [1]. In the clinical setting, this allows free-text progress notes, laboratory observations, and imaging data to be stored in the same central location and analyzed together (a schema-on-read sketch is shown below).
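
For example, schema-on-read might look like the following sketch, in which raw JSON progress notes sit in the lake with no predefined schema and Spark infers a structure only when the files are read; the path and field names are illustrative placeholders.

  # Impose structure at read time rather than at write time.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

  # Spark infers the fields from the raw JSON files when they are read.
  notes = spark.read.json("s3a://example-clinical-data-lake/raw/notes/")

  # Project only the fields this particular analysis needs.
  notes.select("patient_id", "encounter_id", "note_text").show(truncate=False)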

Data Swamp

A data lake that is not actively governed can become unruly and degrade into a data swamp, in which data is poorly documented, of uncertain quality, and difficult for users to find, trust, or use.


References

  1. Holmes DE. Big Data [Internet]. Oxford University Press; 2017 [cited 2020 Oct 26]. Available from: https://aws.amazon.com/big-data/datalakes-and-analytics/what-is-a-data-lake/

Submitted by Tom Nahass