QuantHub features powerful no-code tools to collect your data regardless of its origin or format.
We have you covered whether you need to import bulk data of any type from external storage or collect custom observations via surveys and polls. Our pluggable system offers the flexibility to import data from any commercial data source.
Bulk Data Ingestion
The Loading Dock component, combined with Ingestion Pipelines, enables bulk data import in a configurable, reproducible, and automated fashion.
Loading Dock is the main data ingestion gateway, where all the complexity of working with multiple APIs and data formats is resolved, giving data processing pipelines a unified model for consuming data.
Mapping and Structuring
The Ingestion Pipeline is an intermediary between the raw data in the Loading Dock and your datasets. It performs sophisticated ingestion procedures, including data filtering, unpivoting, mapping, and error resolution, all in a no-code fashion.
This is where raw source data from the Loading Dock is mapped to the Data Structure Definition (DSD) standard format, ensuring all data arrives in the dataset structured and formatted for consumption.
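To make the mapping step concrete, here is a minimal sketch of renaming raw source columns to DSD components. The component names (REF_AREA, TIME_PERIOD, OBS_VALUE, FREQ) follow common SDMX conventions; the mapping table and function are illustrative assumptions, not QuantHub's actual API.

```python
# Hypothetical sketch: mapping a raw source record to a DSD-style
# observation. Names follow common SDMX conventions and are
# assumptions for illustration, not QuantHub's real schema.

RAW_TO_DSD = {
    "country": "REF_AREA",
    "period": "TIME_PERIOD",
    "value": "OBS_VALUE",
}

def map_to_dsd(raw_row: dict, defaults: dict) -> dict:
    """Rename raw columns to DSD components and fill fixed dimensions."""
    obs = {dsd_key: raw_row[src_key] for src_key, dsd_key in RAW_TO_DSD.items()}
    obs.update(defaults)  # e.g. a constant FREQ dimension for quarterly data
    return obs

row = {"country": "DE", "period": "2023-Q4", "value": 1.7}
print(map_to_dsd(row, {"FREQ": "Q"}))
# {'REF_AREA': 'DE', 'TIME_PERIOD': '2023-Q4', 'OBS_VALUE': 1.7, 'FREQ': 'Q'}
```

In a no-code tool, the `RAW_TO_DSD` table is what the user configures in the UI; the rest is the pipeline's job.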
Ingest bulk data and metadata into the system from diverse external data sources.
Collect data in configurable, reproducible, and automated ways.
Juggle multiple APIs and data formats.
Automatically aggregate, filter, and map your raw data to an SDMX format for further analysis.
Leverage a pluggable architecture with a functional API to integrate with even more sources.
Keep a history of all your data ingestions. Trace every data point to its source and replay historical ingestions.
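One of the procedures listed above, unpivoting, is worth illustrating: it turns a wide table (one column per period) into the long, one-observation-per-row shape that SDMX-style datasets expect. This is a minimal sketch; the column names are illustrative assumptions.

```python
# Hypothetical sketch of the "unpivoting" step: melting a wide table
# into long-format observations. Column names are illustrative only.

def unpivot(rows, id_cols, value_cols):
    """Melt wide rows into one record per (id..., TIME_PERIOD, OBS_VALUE)."""
    out = []
    for row in rows:
        for col in value_cols:
            obs = {k: row[k] for k in id_cols}  # carry identifying dimensions
            obs["TIME_PERIOD"] = col            # the period was a column name
            obs["OBS_VALUE"] = row[col]         # the cell becomes the value
            out.append(obs)
    return out

wide = [{"REF_AREA": "FR", "2022": 2.1, "2023": 1.8}]
for obs in unpivot(wide, ["REF_AREA"], ["2022", "2023"]):
    print(obs)
# {'REF_AREA': 'FR', 'TIME_PERIOD': '2022', 'OBS_VALUE': 2.1}
# {'REF_AREA': 'FR', 'TIME_PERIOD': '2023', 'OBS_VALUE': 1.8}
```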
Ingestion of Individual Observations
Our powerful Survey Module lets you create paperless surveys that work on all devices and support custom logic, diverse data formats, logical operations, branching, pagination, and more.
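Branching is the core of such custom logic: which question a respondent sees next depends on a previous answer. The sketch below shows the idea with a simple lookup table; the question ids and branching rules are hypothetical, not QuantHub's survey schema.

```python
# Hypothetical sketch of survey branching: the next question is chosen
# by (current question, answer). Ids and rules are illustrative only.

BRANCHES = {
    ("q1_employed", "yes"): "q2_sector",
    ("q1_employed", "no"): "q3_seeking",
}

def next_question(current_id, answer):
    """Return the next question id, or None to end this branch."""
    return BRANCHES.get((current_id, answer))

print(next_question("q1_employed", "yes"))  # q2_sector
print(next_question("q1_employed", "no"))   # q3_seeking
```

In a no-code builder, the `BRANCHES` table corresponds to the rules a survey designer draws in the UI.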
Raw data collected from questionnaires is automatically mapped to the standard DSD format and inserted directly into the dedicated dataset, ready for further processing and analysis.