Summary

Quite frequently you have to load mass data via a web service or a BAPI, but you neither want to load one record at a time nor can you make one huge call with all the data. The solution is simple: using the from/where clauses of the root level and the nested level, you can control how many calls are made and how much data goes into each call. But which from/where clauses?

Let's imagine we have a large source table, and the function we are going to call has one input schema that accepts as many rows as desired.

The normal approach would be to use Row_Gen and the source table to control the number of rows in each schema.


Either we use the Row_Gen as the "From" clause for the query's root level and the table itself as the "From" for the nested schema, hence calling the function once with all the data.


Or vice versa: the source is used as the "From" for the query's root level and each schema is one row large - the Row_Gen is used as the "From" clause of the nested schema.
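To make the two extremes concrete, here is a minimal Python sketch. The call_function name and the driver loop are purely illustrative stand-ins for the web service/BAPI call, not Data Services APIs:

```python
# Illustrative stand-in for the web service / BAPI call.
def call_function(rows):
    """Pretend remote call; returns how many rows it received."""
    return len(rows)

source_rows = [{"id": i} for i in range(1, 26)]  # 25 source rows

# Variant 1: Row_Gen drives the root level -> one call with all the data.
calls_all_at_once = [call_function(source_rows)]

# Variant 2: the source drives the root level -> one call per row.
calls_one_per_row = [call_function([row]) for row in source_rows]

print(len(calls_all_at_once), calls_all_at_once[0])  # 1 25
print(len(calls_one_per_row), calls_one_per_row[0])  # 25 1
```

Neither extreme is attractive for mass data: the first may exceed what one call can carry, the second pays the call overhead per record.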


In order to get a mixture of both, we need some way to break the rows into groups and to know which row belongs to which group. The gen_row_num() function is one option: we can divide this number by 1000 and take the integer part of the result, and that becomes the group number. So the first 1000 rows belong to group 0, for the second 1000 the division returns 1, and so on.

As a result we will have 1000 rows in each schema, and the root node, which controls the number of function calls, will have as many rows as there are batches of 1000.
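The grouping step can be sketched in Python. This is a hand-written illustration, not Data Services code; a batch size of 3 and 7 rows keep the output small, and enumerate() supplies 0-based row numbers so that the first batch is group 0 exactly as described above:

```python
from itertools import groupby

BATCH_SIZE = 3  # 1000 in the article; small here so the output is inspectable

source_rows = [f"row{i}" for i in range(1, 8)]  # 7 rows -> 3 batches

# Assign each row its group number: integer division of a 0-based
# row number by the batch size.
numbered = [(i // BATCH_SIZE, row) for i, row in enumerate(source_rows)]

# Rows with the same group number form one nested schema.
batches = [[row for _, row in grp]
           for _, grp in groupby(numbered, key=lambda t: t[0])]

# The root level gets one row per batch -> one function call per batch.
print(len(batches))               # 3
print([len(b) for b in batches])  # [3, 3, 1]
```

The last batch simply carries the remainder, so no row is lost when the row count is not a multiple of the batch size.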






1 Comment

  1. Former Member

    Note that because you are using a selection on ROW_WITHIN_BATCH = 0 as your criterion, you will always miss the first batch (as gen_row_num() starts indexing at 1, not 0).

    Because gen_row_num() does not start with 0, the first ROW_WITHIN_BATCH is mod(1, 1000). This returns 1, not 0, so you will never include it.

    The solution is to use 

    • mod(gen_row_num() - 1, 1000)
    • trunc((gen_row_num()-1) / 1000, 0)

    instead.
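    The off-by-one can be checked with a short Python sketch, with // and % standing in for the Data Services trunc() and mod() functions. For each formula pair, we collect the groups that actually contain a row whose ROW_WITHIN_BATCH is 0:

```python
BATCH = 1000
N_ROWS = 2500  # 1-based row numbers, as gen_row_num() produces them

def groups_with_marker(group_fn, pos_fn):
    """Set of group numbers containing a row whose within-batch position is 0."""
    return {group_fn(n) for n in range(1, N_ROWS + 1) if pos_fn(n) == 0}

# Naive formulas: trunc(n / 1000) and mod(n, 1000) on the 1-based row number.
naive = groups_with_marker(lambda n: n // BATCH, lambda n: n % BATCH)

# Corrected formulas: shift to 0-based first, as the comment suggests.
fixed = groups_with_marker(lambda n: (n - 1) // BATCH,
                           lambda n: (n - 1) % BATCH)

print(naive)  # {1, 2} -> group 0 never gets a root-level row
print(fixed)  # {0, 1, 2} -> every batch is represented
```

    With the naive formulas the only rows where mod returns 0 are rows 1000 and 2000, and the division already places those in groups 1 and 2, so the first batch has no representative at the root level.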