Data extraction is the act or process of retrieving data from (usually unstructured or poorly structured) data sources for further data processing or data storage (data migration). Import into an intermediate extracting system is thus usually followed by data transformation and possibly the addition of metadata prior to export to another stage in the data workflow.

Usually, the term data extraction is applied when (experimental) data is first imported into a computer from primary sources, like measuring or recording devices. Today's electronic devices will usually present an electrical connector (e.g. USB) through which 'raw data' can be streamed into a personal computer.

Data sources

Typical unstructured data sources include web pages, emails, documents, PDFs, social media, scanned text, mainframe reports, spool files and multimedia files. Extracting data from these unstructured sources has grown into a considerable technical challenge: whereas historically data extraction had to deal with changes in physical hardware formats, the majority of current data extraction deals with extracting data from unstructured sources and from differing software formats. The growing practice of extracting data from the web is referred to as "web data extraction" or "web scraping".
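A minimal sketch of web data extraction using only the Python standard library's `html.parser`; the HTML fragment and the `LinkExtractor` class name are illustrative, not part of any particular scraping tool:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A small HTML fragment standing in for a fetched web page.
page = ('<html><body><a href="https://example.com/a">A</a>'
        '<p>text</p><a href="/b">B</a></body></html>')

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # → ['https://example.com/a', '/b']
```

Real scrapers typically fetch pages over HTTP and use a tolerant HTML library, but the principle is the same: walk the markup and keep only the fields of interest.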

Imposing structure

The act of adding structure to unstructured data takes a number of forms:

  • Using text pattern matching such as regular expressions to identify small or large-scale structure e.g. records in a report and their associated data from headers and footers;
  • Using a table-based approach to identify common sections within a limited domain e.g. in emailed resumes, identifying skills, previous work experience, qualifications etc. using a standard set of commonly used headings (these would differ from language to language), e.g. Education might be found under Education/Qualification/Courses;
  • Using text analytics to attempt to understand the text and link it to other information.
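The first approach above, text pattern matching, can be sketched with a regular expression in Python; the report layout and field names here are hypothetical:

```python
import re

# Hypothetical plain-text mainframe report: each record line carries an
# ID, a description and an amount, surrounded by a header and a footer.
report = """ACME MONTHLY REPORT            Page 1
0001  Widget assembly     145.50
0002  Gasket inspection    72.00
END OF REPORT"""

# The pattern imposes record structure: only data lines match, so the
# header and footer are discarded automatically.
record = re.compile(r"^(\d{4})\s+(.+?)\s+(\d+\.\d{2})$", re.MULTILINE)
rows = [(m.group(1), m.group(2), float(m.group(3)))
        for m in record.finditer(report)]
print(rows)
# → [('0001', 'Widget assembly', 145.5), ('0002', 'Gasket inspection', 72.0)]
```

The table-based approach works analogously, but matches against a standard set of section headings (e.g. "Education", "Qualification") instead of record patterns.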

See also

  • Data mining, discovery of patterns in large data sets using statistics, database knowledge or machine learning
  • Data retrieval, obtaining data from a database management system, often using a query with a set of criteria
  • Extract, transform, load (ETL), procedure for copying data from one or more sources, transforming it, and loading it into a destination system
  • Information extraction, automated extraction of structured information from unstructured or semi-structured machine-readable data[1], for example using natural language processing to extract content from images, audio or documents

References

  1. ^ Hartley, Miranda. "Using AI to Extract Unstructured Data From PDFs: Benefits & Considerations". Evolution AI. Retrieved 20 November 2024.