
The scope collection of the transfer tool is based on the associations between the BW logical transport (TLOGO) objects as they are defined in the standard BW TLOGO framework.

There are two main types of associations (called association types):

  • USED_BY: If object #1 returns object #2 as USED_BY, then #1 is used by #2
  • DEPENDENT: If object #2 returns object #1 as DEPENDENT, then #2 is dependent on #1

Note that in the above examples the roles of #1 and #2 are switched: this is on purpose, because the two association types are inverse to each other. If #1 is used by #2, then #2 is dependent on #1.

A third association type expresses an existential relationship:

  • EXISTENTIAL: Object #1 is part of object #2, so neither of them can exist without the other
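The inverse relationship between USED_BY and DEPENDENT can be illustrated with a small sketch. The object names and the dictionary-based association model below are made-up assumptions for the example, not anything from the actual TLOGO framework:

```python
# Illustrative check of the inverse relationship between USED_BY and
# DEPENDENT. Object names and data structures are invented for the example.

# What each object returns per association type (simplified assumption).
used_by = {"INFOOBJECT_A": ["INFOCUBE_1"], "INFOCUBE_1": []}
dependent = {"INFOCUBE_1": ["INFOOBJECT_A"], "INFOOBJECT_A": []}

def is_inverse(used_by, dependent):
    """#1 is used by #2 exactly when #2 is dependent on #1."""
    used_pairs = {(o1, o2) for o1, objs in used_by.items() for o2 in objs}
    dep_pairs = {(o1, o2) for o2, objs in dependent.items() for o1 in objs}
    return used_pairs == dep_pairs

print(is_inverse(used_by, dependent))  # → True
```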

There are two more relevant association types:

  • SEND_DATA_TO: points upwards in the data flow
  • RECEIVE_DATA_FROM: points downwards in the data flow

These association types are also browsed during scope collection; however, they are taken into account only if the object does not also return the same child object with association type USED_BY or DEPENDENT. This is in fact a shortcoming of the corresponding object, which the transfer tool tries to mitigate.

As stated, the basic collection logic for Inplace-Transfer and for Shell- and Remote-Transfer applies the above logic recursively, i.e. the associations of objects that have been found are followed as well. The association type EXISTENTIAL is always followed; regarding DEPENDENT and USED_BY, however, the scenarios differ in which association types they follow:

  • Shell- and Remote-Transfer follow the collection logic of the standard transport collection. This means all objects on which an object is dependent must be collected, i.e. association type DEPENDENT is followed, while association type USED_BY is ignored. This is because in Shell- and Remote-Transfer the objects are indeed transported. For example, an InfoCube is dependent on its InfoObjects, hence the InfoObjects must be transported and therefore be part of the same scope as the InfoCube.
  • For Inplace-Transfer, on the contrary, the USED_BY association type is followed, but the DEPENDENT associations are ignored. This is because if any object is changed or replaced by the transfer in the current system, then all objects which are dependent on that object, i.e. all objects by which the changed object is used, must eventually be adjusted as well. For example, an InfoCube is used by a Transformation, hence the Transformation must be adjusted and therefore be part of the same scope as the InfoCube.
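The scenario-dependent traversal described above can be sketched as a simple recursive collection. The association model, object names, and function names below are illustrative assumptions, not the transfer tool's actual implementation (EXISTENTIAL, which is always followed, is omitted for brevity):

```python
# Sketch of the basic scope collection: which association types are
# followed depends on the transfer scenario. All names are invented.

# Each object maps association types to the objects it returns.
ASSOCIATIONS = {
    "INFOCUBE_1": {"DEPENDENT": ["INFOOBJECT_A"], "USED_BY": ["TRANSFORMATION_1"]},
    "TRANSFORMATION_1": {"DEPENDENT": ["INFOCUBE_1"], "USED_BY": []},
    "INFOOBJECT_A": {"DEPENDENT": [], "USED_BY": ["INFOCUBE_1"]},
}

FOLLOWED = {
    "INPLACE": ["USED_BY"],   # collect objects that must be adjusted
    "SHELL": ["DEPENDENT"],   # collect objects that must be transported
    "REMOTE": ["DEPENDENT"],
}

def collect_scope(start_objects, scenario):
    """Recursively follow the association types relevant for the scenario."""
    scope, todo = set(), list(start_objects)
    while todo:
        obj = todo.pop()
        if obj in scope:
            continue
        scope.add(obj)
        for assoc_type in FOLLOWED[scenario]:
            todo.extend(ASSOCIATIONS.get(obj, {}).get(assoc_type, []))
    return scope

# Inplace: the InfoCube is used by the Transformation, which joins the scope.
print(sorted(collect_scope(["INFOCUBE_1"], "INPLACE")))
# Shell: the InfoCube depends on its InfoObject, which joins the scope.
print(sorted(collect_scope(["INFOCUBE_1"], "SHELL")))
```

Note that the real tool adds exceptions on top of this basic logic (for example, InfoObjects are not collected for Inplace-Transfer, as described further below).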

On top of this basic logic, there are some associations between certain object types which must be followed regardless of the association type between them, in order for the transfer tool to function correctly.
These are called SUPER STRONG associations, and they are relevant for all scenarios:

  • Error-DTPs and their DTPs (in both directions)
  • DataSources and their Source systems (in both directions)
  • DataSources and their InfoPackages (in this direction only)

Especially for Remote-Transfer, there is another type of relationship that becomes important and must be followed: the request flow. Since data and request information are copied from the sender BW into the receiver BW/4, all objects which a data request touches on its (potential) way from the DataSource to the last DataTarget must be part of the same scope:

  • All source objects of an object along the data flow must be in scope so that the data requests in the object have the corresponding source object available
  • All target objects of an object along the data flow must be in scope, because if the requests in the already transferred object are changed, the requests of the target object can no longer be transferred consistently later.

In order to achieve this request flow consistency, all the following objects are considered to have so-called STRONG associations with each other:

  • Link Objects: Transformations, Data Transfer Processes, InfoPackages, HANA Analytic Processes
  • Node Objects: InfoCubes, DataStores, advanced DataStores, DataSources, Destinations, InfoSources, Source Systems
  • Union Objects: MultiProvider, CompositeProvider
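The request-flow consistency requirement means that STRONG associations are effectively followed in both directions along the data flow. This can be sketched as a bidirectional graph traversal; the flow graph and all names below are illustrative assumptions, not the tool's real object model:

```python
# Sketch of the request-flow collection for Remote-Transfer: starting
# from any object, both its sources and its targets along the data flow
# are pulled into the scope. Names are invented for the example.

# Directed data flow: source object -> target objects.
DATA_FLOW = {
    "DATASOURCE_1": ["ADSO_STAGING"],
    "ADSO_STAGING": ["ADSO_REPORTING"],
    "ADSO_REPORTING": [],
}

def collect_request_flow(start, flow):
    """Follow STRONG associations in both directions (sources and targets)."""
    # Build the reverse direction so sources can be reached from targets.
    reverse = {obj: [] for obj in flow}
    for src, targets in flow.items():
        for tgt in targets:
            reverse.setdefault(tgt, []).append(src)
    scope, todo = set(), [start]
    while todo:
        obj = todo.pop()
        if obj in scope:
            continue
        scope.add(obj)
        todo += flow.get(obj, []) + reverse.get(obj, [])
    return scope

# Starting in the middle of the flow still collects the whole path.
print(sorted(collect_request_flow("ADSO_STAGING", DATA_FLOW)))
# → ['ADSO_REPORTING', 'ADSO_STAGING', 'DATASOURCE_1']
```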

As a consequence, the Remote-Transfer always has the largest scope compared to Shell- and Inplace-Transfer, and it can sometimes reveal surprising load paths that exist in the system but were not obvious.

The basic idea of the transfer tool is to minimize the collected scope. In order to achieve this, especially for Inplace-Transfer, a number of exceptions to the above basic collection logic are implemented:

  • InfoObjects are not collected. This is possible because InfoObjects are dependent only on other InfoObjects, and also because the transfer of InfoObjects is very special: they are switched to the new TSN-based request management, after which new data can no longer be loaded in BW, but only in BW/4. Therefore this transfer is done only in the Ready-for-Conversion phase, as one of the last steps before the actual system conversion.
  • Members of InfoAreas and Application Components are only collected if the user started the collection with an InfoArea or Application Component in the first place. Otherwise the scope would suddenly become very big, while an adjustment of the members of an InfoArea or Application Component is not required during Inplace-Transfer.
  • The same holds true for Planning Areas, whose members are likewise not collected.
  • MetaChains of ProcessChains are not collected, because MetaChains do not have to be adjusted if a subchain is transferred
  • InfoPackages which are using Query Variables are not collected, because they do not have to be adjusted, but they would bring completely different data flows into the scope
  • Query Elements and HANA Analysis Processes are not collected from advanced DataStore Objects and CompositeProviders, because they need no adjustment if their source is already BW/4-compatible

Now this basic idea of minimizing the scope might sometimes be unwanted, because it can require adding a lot of start objects in order to get everything together that is considered to belong to a certain data flow. Besides different buttons which allow choosing the start objects, the following features mitigate this requirement:

  • From the DataFlow in the administrator workbench, a button is available to transfer the shown objects (not for 7.0x)
  • In the start object screen, there is a button “minimal scope”, which is de-selected by default. If it is pressed, the aforementioned collection behavior takes place. If it is NOT pressed, then:
    • For the start objects of an Inplace-Transfer, not only the association type USED_BY is followed, but the association type DEPENDENT as well. This means all objects around the start objects are collected, not only the mandatory objects. In addition, for process chains not only the processes are collected, but also the objects on which the single processes in turn depend. This makes it possible to enter a process chain in order to transfer a number of InfoProviders which are processed by the chain. Furthermore, Query Elements and HANA Analysis Processes on top of advanced DataStores and CompositeProviders are also collected.
    • For the start objects of a Shell- or Remote-Transfer, not only the association type DEPENDENT is followed, but the association type USED_BY as well. This means all objects around the start objects are collected, not only the mandatory objects.
  • For Shell- and Remote-Transfer there is an additional button, Collect Queries, which is available if the minimal scope button is pressed. Queries would not be collected from the start objects if the minimal scope applies, because they are only USED_BY the start objects. However, you might not want to de-select the minimal scope button, because a lot of other objects that you do not want could come in as well. Therefore this button disables the minimal scope for Query Elements only.
