This section provides information that will help you get started with configuring a pipeline using iWay Big Data Integrator (iBDI).
This use case demonstrates how structured (relational) data can be ingested through a pipeline. A data warehouse builder might use this example to understand how to model data within a Big Data context.
If you are familiar with Sqoop, this step will look similar: select RDBMS as the source. The right pane shows the properties of the selected object.
Select a defined Data Source Profile from the drop-down list. For the partition column, provide an integer column (for example, customer_id), and in the sql field, enter the query: select * from customers.
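Outside of the iBDI tooling, a comparable ingest can be sketched as a plain Sqoop import command. This is an illustrative sketch, not the command iBDI generates; the JDBC URL, credentials, and target directory below are placeholders:

```shell
# Hypothetical Sqoop equivalent of this pipeline source.
# The JDBC URL, user name, and target directory are placeholders.
sqoop import \
  --connect jdbc:oracle:thin:@dbhost:1521/ORCL \
  --username dbuser -P \
  --query 'SELECT * FROM customers WHERE $CONDITIONS' \
  --split-by customer_id \
  --target-dir /data/customers
```

The --split-by option shows why an integer partition column such as customer_id is requested: Sqoop computes the minimum and maximum of that column and divides the query into ranges so that parallel mappers can each ingest a slice of the table.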
Select Show to examine the first 20 records.
Select Console.
The Run Configurations dialog opens.
Right-click and select New Configuration from the context menu.
The Create, Manage and Run Configurations dialog opens.
Specify a name for the new pipeline configuration.
In the Pipeline field, click Browse to open the current project. Expand the project, open the Pipelines folder, and then select the pipeline file to be deployed.
If the job is to be run as a scheduled cron job, select Publish. Otherwise, select Deploy.
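When Publish is chosen, the published job can later be launched on a schedule by the system scheduler. A minimal crontab entry might look like the following; the script path, log path, and schedule are assumptions for illustration, not values from this document:

```shell
# Run the published iBDI job every night at 02:00.
# The launch-script and log paths are placeholders.
0 2 * * * /opt/ibdi/jobs/customers_pipeline/run.sh >> /var/log/ibdi/customers_pipeline.log 2>&1
```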
Enter the host, port, user name, and password for the machine where the iBDI edge node has been provisioned.
For the path, enter the subfolder on the provisioned iBDI edge node where this object will be located. If an existing name is used, the object is overwritten; the client tool does not warn you, but the console displays an overwrite message.
The Console pane (deployments only) shows the job execution log, including the output of the pipeline. For example: