Data Integrator acts here exactly like any other ABAP programmer: you look at the structure of the objects, call some functions, cache this and read that, and once the ABAP source code is complete, it gets tested and moved to production.
In a DataFlow you are absolutely free: you can join data from different databases, use any kind of transform, etc. An ABAP program, on the other hand, can be executed on one SAP system only, has to have a program name, and can call SAP functions inside. In order to support all of that in the GUI, a separate object - the R/3 DataFlow - was created.
When you create a new R/3 DataFlow (e.g. via the tool palette), you are asked where the ABAP should be executed (which SAP datastore), and you provide a name for the ABAP file on the jobserver, the ABAP report name as seen from within SAP, and a SAP job name. Therefore it is guaranteed that only objects and functions of this datastore are used, and the built-in functions are reduced to the list of functions we can generate inside an ABAP (or how would you call an exec() function inside an ABAP, hmm?).
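To see why the built-in functions have to be restricted, consider what a mapping has to turn into once it is generated as ABAP. The snippet below is only an illustrative, hand-written sketch (the table, field and variable names are my own, not what the code generator actually emits): something like substr() on a text field can be expressed as plain ABAP offset/length access, whereas a function like exec() simply has no ABAP statement to generate.

```abap
* Illustrative sketch only - names are assumptions, not the real
* generated code. A mapping like substr(MAKT.MAKTX, 1, 10) can be
* generated as plain ABAP offset/length access:
DATA: lv_maktx TYPE makt-maktx,
      lv_short TYPE c LENGTH 10.

lv_maktx = 'Some material description text'.
lv_short = lv_maktx+0(10).   " substr(..., 1, 10) -> offset 0, length 10
```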
Inside the R/3 DataFlow you can use all SAP objects and functions of this particular system, just like in a regular DataFlow, but the last object of each R/3 DataFlow has to be a DataTransport object - the file generated by the ABAP and downloaded by the jobserver.
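To make that last point concrete, here is a minimal hand-written sketch of what such a generated ABAP conceptually does: read the source rows inside SAP and write them line by line into the flat file that the DataTransport object points to and the jobserver later downloads. The report name, table, fields and file path are assumptions for illustration; the real generated code looks different.

```abap
REPORT zdi_transport_sketch.

* Hand-written sketch, not the actual generated ABAP: select the
* source rows inside SAP and write them to the flat file that the
* DataTransport object (and the jobserver download) refers to.
CONSTANTS c_file TYPE string VALUE '/tmp/zdi_transport.dat'.

DATA: lt_mara TYPE STANDARD TABLE OF mara,
      ls_mara TYPE mara,
      lv_line TYPE string.

* Read the source table inside SAP
SELECT matnr mtart matkl
  FROM mara
  INTO CORRESPONDING FIELDS OF TABLE lt_mara.

* Write one delimited line per row into the transport file
OPEN DATASET c_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
LOOP AT lt_mara INTO ls_mara.
  CONCATENATE ls_mara-matnr ls_mara-mtart ls_mara-matkl
    INTO lv_line SEPARATED BY ','.
  TRANSFER lv_line TO c_file.
ENDLOOP.
CLOSE DATASET c_file.
```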
- How to read the ABAP
- How to execute the ABAP
- Moving to Production
- Moving ABAP to Production (DI 12.1)
- Common Questions
- Custom ABAP Transform
- Calling functions inside the ABAP