
Overview

In the context of generating extractor functions, it is important to understand the system landscape in which CDC is used. For SAP Solution Manager there is typically either a two- or three-system landscape. For CDC itself it does not really matter whether it runs on the development (DEV), test (QA), or productive (PRD) Solution Manager, as long as the required connections (such as RFC destinations or database connections) to the intended source systems are available. Normally you use CDC on the development Solution Manager only for initial testing, such as evaluating the features, building a proof of concept, or testing custom enhancements, with connections to the development or test source systems. For real productive usage, that is, comparing data from the productive source systems, CDC runs on the productive Solution Manager.

Please note that it is not possible to transport comparison objects and comparison instances, because they are considered application data and would use different connection parameter values depending on the environment. However, there is a download/upload function for comparison object data models or whole comparison objects. Typically, though, you build new comparison objects directly in the productive Solution Manager and test them with connections to the development/test source systems before running high-volume comparisons between the productive source systems. This way you need only one comparison object for modeling, and you can have multiple comparison instances pointing to the different environments.

For source type ABAP - RFC to SAP ABAP System

At CDC design-time

At design-time, that is, during the modeling of comparison objects, you need two types of RFC destinations to the development source system.

  • A Read RFC destination, which uses a background connection with a system logon user, to access the data dictionary and provide value helps on tables and fields.
    • Generated RFC destinations of this type (e.g. by Solution Manager Setup) are named SM_<sid>CLNT<clnt>_LOGIN, where <sid> is the system ID and <clnt> is the client number.
  • A Trusted RFC destination, which uses a connection with the same logon user (trusted connection), to generate the specific extractor functions. For this code generation you need developer authorization to create the corresponding function group and the remote-enabled extractor function modules.
    • Generated RFC destinations of this type are named SM_<sid>CLNT<clnt>_TRUSTED, where <sid> is the system ID and <clnt> is the client number.

Afterwards, the generated extractor function modules are transported from the development system via the test system to the productive system, using the standard transport management tools. It is not possible to generate the extractor coding directly in the productive system, because it does not allow repository changes.

NOTE

To avoid the need for generation and transport, you can use the source type “SAP ABAP System (using RFC to generic extractor)” instead.

 

At Run-time for Data Extraction

At run-time, that is, during the actual extraction and comparison of data, you only need a Read RFC destination, which in this case needs authorization to read from the respective application database tables. Typically you compare the data from the productive source systems using the transported extractor function modules. Alternatively, you can use a generic extractor that is already present in the productive source systems.

For source type ADBC - Remote Database

For data sources using a remote database connection, no transport is required. At design-time the database connection is used to query the data dictionary in order to provide search helps for tables and fields. Based on the designed comparison object model, two SQL statements are generated: one to do a count (to estimate the number of expected records) and one to do the actual data extraction. These SQL statements are stored centrally on the Solution Manager as comparison object parameter values.
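
As an illustration, the two generated statements could look like the following sketch. The table and field names (SALES_DOC, DOC_ID, STATUS, AMOUNT) are purely hypothetical and depend on your comparison object model; the exact statements CDC generates may differ.

  -- Count statement: estimates the number of records to be extracted
  SELECT COUNT(*)
    FROM SALES_DOC
   WHERE STATUS = 'R';

  -- Extraction statement: reads the key and comparison fields, sorted by the
  -- key so that both sources can be processed in the same order
  SELECT DOC_ID, STATUS, AMOUNT
    FROM SALES_DOC
   WHERE STATUS = 'R'
   ORDER BY DOC_ID;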

 

At Run-time for Data Extraction

At run-time, the generated SQL statement is used to extract data from the productive non-ABAP database.


Filtering on the comparison

Sometimes not all data existing in the systems should be compared. The reason might be that comparing some of the data is meaningless, or that you want to restrict the data to different organizational units.

An example of the first case is data that has to be released before it is sent to the target systems (e.g. service orders have to be released in a central customer care system before they are replicated to the maintenance systems). In this case a comparison of unreleased service orders would be useless, as by definition they exist only in the source system. As this behaviour is built into the process, it would not make sense to allow end users to change this filter; instead it should be built into your data model. To allow this type of filtering, model-specific filters can be maintained, which are valid for all comparisons using this model.

An example of the second case could be that multiple organizational units with different responsible persons work in the same system. Here it might be sensible to define a comparison for each responsible person, so that each employee only sees the data he or she is responsible for. The data model is identical for all of them, but each should only see a subset of the data. For this purpose, comparison-specific filters can be defined as well.

Sometimes you want to filter on values of multiple fields in the same comparison. In this case it is important to provide a well-defined relationship between the different filter values. For example, you might want to compare sales orders for certain sales organizations and distribution channels. If you only provide the information that you are interested in data for sales organization 1, sales organization 2, distribution channel A, and distribution channel B, the data selection considers all possible combinations, i.e. the value pairs (sales organization 1, distribution channel A), (sales organization 1, distribution channel B), (sales organization 2, distribution channel A), and (sales organization 2, distribution channel B).

This might be fine in most cases, but if you are only interested in the combinations (sales organization 1, distribution channel A) and (sales organization 2, distribution channel B), you need to provide this relationship explicitly, which can be done via filter sets, as shown in the sketch below.
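
The difference can be pictured in SQL terms as follows. This is only a sketch with hypothetical table and field names (SALES_ORDER, SALES_ORG, DISTR_CHANNEL); it is not the literal code CDC generates.

  -- Independent filter values: the selection covers all four combinations
  SELECT *
    FROM SALES_ORDER
   WHERE SALES_ORG IN ('1', '2')
     AND DISTR_CHANNEL IN ('A', 'B');

  -- Filter set: only the explicitly defined value pairs are selected
  SELECT *
    FROM SALES_ORDER
   WHERE (SALES_ORG = '1' AND DISTR_CHANNEL = 'A')
      OR (SALES_ORG = '2' AND DISTR_CHANNEL = 'B');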

Types of Filters

 

You can use various types of filters to restrict the data being compared for the sources. The following filter types are available in Cross Database Comparison.

Object filters with fixed values (type 1)
  • Data model: A fixed filter value is defined at source level, for both sources or for one source only. The filter value is used during generation to create a hard-coded where clause.
  • Comparison: Not visible on the comparison level.
  • Comparison run: Fetches the data from the generated extractor with its hard-coded where clause.

Comparison filters with variable values (type 2)
  • Data model: The Variable Filter flag is selected in the data model, either at mapping level (relevant for both sources) or at source level (relevant for one source only).
  • Comparison: The actual filter values can be maintained on the comparison level. They are used for all comparison runs of this comparison instance.
  • Comparison run: Hands the filter values over to the extractor, which handles them dynamically.

Object filters with relative dates (type 3, variant of type 1)
  • Data model: If a filter in the data model contains the relative date syntax (starting with ‘$’), it is not generated as a hard-coded filter in the extractor.
  • Comparison: Not visible on the comparison level. Valid for all comparison instances of this comparison object.
  • Comparison run: Translates the relative date syntax into an absolute timestamp and hands the filter values over to the extractor, which handles them dynamically.

Comparison filters with relative dates (type 4, variant of type 2)
  • Data model: The Variable Filter flag is selected in the comparison object, either at mapping level (relevant for both sources) or at source level (relevant for one source only).
  • Comparison: Relative date syntax (starting with ‘$’) can be maintained on the comparison level. It is used for all comparison runs of this comparison instance.
  • Comparison run: Translates the relative date syntax into an absolute timestamp and hands the filter values over to the extractor, which handles them dynamically.

Model-specific filters are fixed filter values at source level. You can define them for both sources or for one source only. The filter value is used during generation to create a hard-coded where clause. If a filter in the data model contains the relative date syntax (starting with ‘$’), however, it is not generated as a hard-coded filter in the extractor.

Below is an example of fixed filters.

Below is an example of using relative dates syntax for one source system. See Using relative date syntax for more information.
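
In SQL terms, a fixed (type 1) filter roughly corresponds to a condition that is baked into the generated extractor, while a variable (type 2) filter is supplied at run-time. The following sketch uses hypothetical names (SERVICE_ORDER, STATUS) and is not the literal generated code.

  -- Type 1: fixed filter value, hard-coded into the generated where clause
  SELECT *
    FROM SERVICE_ORDER
   WHERE STATUS = 'RELEASED';

  -- Type 2: variable filter, the value is handed over by the comparison run
  SELECT *
    FROM SERVICE_ORDER
   WHERE STATUS = ?;  -- placeholder filled with the comparison-specific value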


For comparison-specific filters, the Variable Filter flag is selected in the comparison object, either at mapping level (relevant for both sources) or at source level (relevant for one source only). In the comparison, the actual filter values can then be maintained; these are used for all comparison runs of this comparison. The comparison run hands the filter values over to the extractor, which handles them dynamically. Alternatively, relative date syntax (starting with ‘$’) can be maintained in the comparison; in that case the comparison run translates the relative date syntax into an absolute timestamp before handing the filters over to the extractor. See Using relative date syntax for more information.

On object level, set the flag and the date type for “Variable Filter” in the mapping details popup.

After selecting the filter and clicking OK, a new option with the heading “Filters from mapping” appears in the comparison-specific filters group.


Using relative date syntax

 

There are many use cases where the comparison should be time-dependent. For example, you only want to compare documents created yesterday, or during the last month. Entering static date filter values does not make sense here, because you would have to adapt the comparison filters before each run. To achieve relative date filtering, which means calculating relative dates instead of using absolute dates, CDC provides a special syntax for defining fixed points and offsets for relative dates.

Syntax for Relative Dates

Instead of a fixed (absolute) date, you can enter a special keyword for the start date (prefixed by a $ character) and optionally an additional offset as a difference in days.

Syntax = <StartDate>[<Difference>]

For <StartDate> the following keywords are available:

Keyword for <StartDate> | Description | Example using 2013/04/25 as Reference Date
$TODAY | Current date (today) | 2013/04/25
$FDOCW | First day of current week | 2013/04/22
$LDOCW | Last day of current week | 2013/04/28
$FDOCM | First day of current month | 2013/04/01
$LDOCM | Last day of current month | 2013/04/30
$FDOCQ | First day of current quarter | 2013/04/01
$LDOCQ | Last day of current quarter | 2013/06/30
$FDOCY | First day of current year | 2013/01/01
$LDOCY | Last day of current year | 2013/12/31
$FDOPW | First day of previous week | 2013/04/15
$LDOPW | Last day of previous week | 2013/04/21
$FDOPQ | First day of previous quarter | 2013/01/01
$LDOPQ | Last day of previous quarter | 2013/03/31
$FDOPM | First day of previous month | 2013/03/01
$LDOPM | Last day of previous month | 2013/03/31
$FDOPY | First day of previous year | 2012/01/01
$LDOPY | Last day of previous year | 2012/12/31
$FDONW | First day of next week | 2013/04/29
$LDONW | Last day of next week | 2013/05/05
$FDONM | First day of next month | 2013/05/01
$LDONM | Last day of next month | 2013/05/31
$FDONQ | First day of next quarter | 2013/07/01
$LDONQ | Last day of next quarter | 2013/09/30
$FDONY | First day of next year | 2014/01/01
$LDONY | Last day of next year | 2014/12/31
$CWEEK | Current calendar week (offset in weeks) | 201317
$FIPEC | Fiscal Period (current) | 004.2013
$FIPEP | Fiscal Period (previous) | 003.2013

These additional keywords are meant for full timestamps only:

Keyword for <StartDate> | Description | Example for 2013/04/25 at 08:21:47 with offset -2
$TIMES | Timestamp now (offset in seconds) | 2013/04/25 08:21:45
$TIMEM | Timestamp now (offset in minutes) | 2013/04/25 08:19:47
$TIMEH | Timestamp now (offset in hours) | 2013/04/25 06:21:47
$TIMED | Timestamp now (offset in days) | 2013/04/23 08:21:47
$HOURA | Absolute hour (+ = today / - = yesterday) | 2013/04/24 02:00:00
$HOURR | Current full hour (offset in hours) | 2013/04/25 06:00:00
$DELTA | Delta mode (timestamp of last run) | n/a

The optional <Difference> is entered as a positive or negative offset (usually in days, unless otherwise noted):

  • using '+' increments days (= move start date into the future)
  • using '-' decrements days (= move start date into the past)

Examples:

  • $TODAY-2 = day before yesterday
  • $TODAY+2 = day after tomorrow

The relative date selection can be combined in ranges (using the LOW and HIGH fields of the selection criteria), e.g. to define "between" intervals.

Example: You want to filter on documents with a creation date in the last month. The selection criteria would be <FieldName> = $FDOPM to $LDOPM.
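
At run-time the two relative dates are translated into absolute values before the selection is executed. With 2013/04/25 as reference date, the resulting selection would roughly correspond to the following sketch (hypothetical table and field names; the date literal format depends on the database).

  -- $FDOPM to $LDOPM resolved for reference date 2013/04/25
  SELECT DOC_ID, CREATED_ON
    FROM SALES_DOC
   WHERE CREATED_ON BETWEEN '2013-03-01' AND '2013-03-31';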


Conversion of comparison key fields

For comparison key fields, conversion might be required even though the field contents have the same format in both systems. The reason is that during the comparison the correct sort order of the key fields is checked, and the field contents are compared as strings. For example, the numeric field contents 3, 4, 22, 100 have the correct sort order, but when copied into a string field, the sort order ‘3’, ‘4’, ‘22’, ‘100’ is wrong. You can use a suitable built-in conversion type like ‘INT: length x’ to fill the field contents with leading zeros up to x integer places, which gives the correct sort order ‘003’, ‘004’, ‘022’, ‘100’.
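
The effect can be sketched in SQL terms (hypothetical table and field names; LPAD is available in most databases, and the built-in CDC conversion types do the equivalent padding internally).

  -- ORDER_NUM is assumed to be stored as a character field; compared as
  -- strings, '100' sorts before '22' and '3', which is the wrong order
  SELECT ORDER_NUM
    FROM ORDER_HEADER
   ORDER BY ORDER_NUM;

  -- Padding with leading zeros to a fixed width (the effect of 'INT: length 3')
  -- restores the expected order: '003', '004', '022', '100'
  SELECT LPAD(ORDER_NUM, 3, '0') AS ORDER_KEY
    FROM ORDER_HEADER
   ORDER BY ORDER_KEY;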

Integer comparison key fields

For integer comparison key fields, conversion is required unless all field contents have the same number of integer places. For example, in the unlikely case that all field contents have two integer places, like 11, 21, 34, 54, no conversion is required; but in the usual case of field contents with a different number of integer places, like 3, 4, 22, and 100, conversion is required.

There are specific built-in conversion types for integers, all with a label starting with ‘INT’. There are conversion types that fill the field contents with leading zeros up to a fixed number of integer places, like ‘INT: length 2’, ‘INT: length 3’, etc., and there are conversion types that fill the field contents with leading zeros up to the length specified for the data type in one of the source systems, ‘INT: length source 1’ and ‘INT: length source 2’.

Example

A numeric character key field has length 2 in source system 1 and length 4 in source system 2. As all key values fit into the shorter length of source system 1, you could choose conversion type ‘INT: length source 1’, as in the following screenshot, or alternatively ‘INT: length 2’, which has the same effect in this example. Please note that the conversion is required for both sources, as otherwise there are no leading zeros and the sort order would not be correct.

All integer conversion rules also remove possible trailing blanks, because ABAP adds a trailing blank when converting a positive integer into a string (a negative integer gets a trailing minus sign). There is also a special conversion type ‘INT: remove trailing blanks’ available that only removes trailing blanks.

RESTRICTION

There is one restriction for integer comparison key fields: integers with negative content cannot be used as comparison key fields, as the correct sort order cannot be guaranteed.

Decimal floating point number comparison key fields

For decimal floating point number comparison key fields, conversion is required unless all field contents have the same number of integer and decimal places, for the same sort-order reason as described for integers above.

There are specific built-in conversion types for decimal floating point numbers, all with a label starting with ‘DEC’. Among them are conversion types that fill the field contents with leading and trailing zeros up to the length and number of decimals specified for the data type in one of the source systems, such as ‘DEC: length source 1, dec. s.1’ and ‘DEC: length source 2, dec. s.2’.

Example

A decimal floating point number key field has length 15 and 3 decimals in source system 1, and length 3 and 1 decimal in source system 2. You could choose conversion type ‘DEC: length source 2, dec. s.2’, as in the following screenshot, or alternatively any other ‘DEC: length source x, dec. s.x’ type, which would have the same effect in this example. Please note that the conversion is required for both sources, as otherwise there are no leading and trailing zeros and the sort order would not be correct.

There is one restriction for decimal floating point number comparison key fields: decimal floating point numbers with negative content cannot be used as comparison key fields, as the correct sort order cannot be guaranteed.

UUID/GUID comparison key fields

For UUID/GUID comparison key fields, conversion is not required as long as the UUID/GUID format is either binary 16 or character 32. These UUID/GUID formats are automatically extracted as character 32 UUIDs/GUIDs and the correct sort order should be given.

There are specific built-in conversion types for character 22 or character 26 UUID/GUID formats, all starting with ‘UUID’.

Comparison fields (no keys)

Comparison data fields that are not used as comparison key fields do not necessarily require conversion, because no sort order check is performed on them. Conversion is only required if there are specific mapping rules between the fields.

Example 1

When data is transferred from a field in source system 1 to a field in source system 2, the third of three decimal places is cut off. In this case you can use the built-in conversion type ‘DEC: decrease to 2 decimals’ for source system 1 to cut off the third decimal place of the field content in source system 1.

Example 2

Data in a field in source system 1 is case-sensitive, whereas the data in the corresponding field in source system 2 is in upper-case letters only. In this case you can use the built-in conversion type ‘CASE: convert into upper case’ for source system 1 to convert the field content in source system 1 to upper case.
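
In SQL terms, the effect of these two conversion types can be sketched as follows (hypothetical table and field names; the actual conversion is performed by CDC on the extracted data, not by statements you write yourself).

  -- Example 1: 'DEC: decrease to 2 decimals' corresponds to cutting off the
  -- third decimal place before comparing the amounts (positive values assumed)
  SELECT DOC_ID, FLOOR(AMOUNT * 100) / 100 AS AMOUNT_2DEC
    FROM SALES_DOC;

  -- Example 2: 'CASE: convert into upper case' corresponds to comparing the
  -- upper-case form of the text field
  SELECT DOC_ID, UPPER(DESCRIPTION) AS DESCRIPTION_UPPER
    FROM SALES_DOC;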



