While big data analytics generates plenty of buzz, there is far less discussion of how to get the necessary data into those systems in the first place, a job that can require setting up and maintaining numerous data processing pipelines.
To help solve this problem, Santa Clara, California, startup DataTorrent has released what it calls the first enterprise-grade ingestion application for Hadoop: DataTorrent dtIngest.
The application is designed to streamline the process of collecting, aggregating, and moving data onto and off of a Hadoop cluster.
The software is based on Project Apex, an open source software package available under the Apache 2.0 license.
Working as a component within a Hadoop platform, dtIngest can handle both streaming and batch data. It can exchange data across a variety of file systems and protocols, including NFS, FTP, the Hadoop Distributed File System (HDFS), Amazon Web Services' Simple Storage Service (S3), Apache Kafka, and the Java Message Service (JMS).
The software is fault tolerant: it can automatically resume a file transfer after a disruption. It also provides a point-and-click interface and monitoring logs.
The company has released dtIngest for free, hoping that users will upgrade to its commercial Hadoop data ingestion software, DataTorrent RTS 3. That product is based on dtIngest and Project Apex and adds capabilities for operational management, application development, and data visualization.
DataTorrent was co-founded by Amol Kekre and Phu Hoang, a pair of engineers who previously worked at Hadoop pioneer Yahoo. The company has formed partnerships with Hadoop distributors Hortonworks and Pivotal, and has raised nearly $24 million in early-stage funding.