Pipeline is a new feature introduced in Data Integration.
- It orchestrates a series of jobs as a single process.
- It lets you orchestrate EPM Cloud jobs across instances from one location.
- It provides better control and visibility over the full extended data integration process of preprocessing, data loading, and post-processing jobs.
- A Pipeline contains multiple stages, each of which includes jobs that can run in serial or in parallel.
A Pipeline can include the following job types for multiple target applications (an illustrative sketch of this structure follows the list):
- integrations
- business rules
- business rulesets
- open batches (by file, location, and name)
- objects to and from the Object Store
- substitution variables
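
To make the stage and job structure concrete, here is a minimal, hypothetical sketch expressed as a Python data structure. The Pipeline, stage, and job names are invented for illustration; this is not an actual export or API format, just a way to visualize stages running their jobs in parallel or in serial:

```python
# Hypothetical layout of a Pipeline; all names are invented for illustration.
pipeline = {
    "name": "DAILY_CLOSE",
    "stages": [
        {   # Stage 1: preprocessing jobs run in parallel
            "name": "Pre-Processing",
            "mode": "parallel",
            "jobs": [
                {"type": "Set Substitution Variable", "name": "CurrentMonth"},
                {"type": "File Operations", "name": "CopyFromObjectStore"},
            ],
        },
        {   # Stage 2: load data, then calculate, in serial order
            "name": "Load and Calculate",
            "mode": "serial",
            "jobs": [
                {"type": "Integration", "name": "GL_Actuals_Load"},
                {"type": "Business Rule", "name": "ConsolidateAll"},
            ],
        },
    ],
}
```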
Applies to: FCCS, EPBCS, TRCS, Planning, and Planning Modules
Pipeline is available from the Data Integration home page by clicking the Add icon and then selecting the Pipeline option.
A Pipeline job is identified with an icon on the Data Integration home page. Pipeline jobs can also be found by searching for the word "pipeline," or part of the word, from Search.
Run Pipeline Option
- It is available from the Data Integration home page and lets you execute the jobs in the Pipeline, send emails, and attach logs.
- While the Pipeline is running, its status is shown. Customers can also see the status of the running Pipeline in Process Details.
- Each individual job in the Pipeline is submitted separately and creates a separate job log in Process Details.
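
Besides the home page, a Pipeline can also be triggered programmatically. The sketch below is a minimal, hedged example using Python's `requests` against the Data Integration REST API. The endpoint path, payload fields (`jobType`, `jobName`), response field (`jobId`), and status codes are assumptions to verify against Oracle's EPM REST API documentation, and the instance URL and credentials are placeholders:

```python
import time

import requests
from requests.auth import HTTPBasicAuth

# Placeholder instance URL and credentials -- replace with your own.
BASE_URL = "https://example-epm.oraclecloud.com"
AUTH = HTTPBasicAuth("service.admin@example.com", "password")

# Submit the Pipeline as a Data Integration job.
# NOTE: the jobType/jobName values are assumptions; check the EPM REST API docs.
resp = requests.post(
    f"{BASE_URL}/aif/rest/V1/jobs",
    json={"jobType": "PIPELINE", "jobName": "DAILY_CLOSE"},
    auth=AUTH,
)
resp.raise_for_status()
job_id = resp.json()["jobId"]  # response field name is an assumption

# Poll the job until it leaves the "in process" state.
# In the Data Management REST API, status -1 means "in process" (assumption).
while True:
    status = requests.get(f"{BASE_URL}/aif/rest/V1/jobs/{job_id}", auth=AUTH).json()
    if status.get("status") != -1:
        break
    time.sleep(30)

print("Pipeline finished with status:", status.get("status"))
```

Note that this polls only the overall Pipeline job; each individual job still appears with its own log in Process Details, as described above.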
Benefits:
- The Pipeline gives customers a framework to combine multiple different job types into a single process instead of a series of processes.
- Previously, customers had to use the Batch option in Data Management, which restricted the types of jobs that could be included in the batch.
Thank you!
Created by - Mohit Jain & Megha Gupta