CDC (Cross-Database Comparison) compares data between two data sources. The following source types are available for the two data sources.
Source Type | Description |
---|---|
SAP ABAP-based systems | |
ABAP | SAP ABAP System (using RFC to generated extractor): Generates an individual extractor function module in the managed system, extracts data via RFC to the generated function module, and compares data in SAP Solution Manager. The generation creates a separate function module for each data model in the customer namespace. This allows enhancements in the generated code, for example to add application logic, but requires a transport from the development system to the production system. |
ABDY | SAP ABAP System (using RFC to generic extractor): Generates SQL statements, extracts data via RFC to a generic extractor function module in the managed system, and compares data in SAP Solution Manager. The generic extractor function module is delivered with the add-on ST-PI as standard code and therefore does not allow extensions. |
IDC | Determine inconsistent entries in one system: Generates SQL statements, compares data within one system via RFC to a generic extractor function module, transfers comparison results to the CDC application in SAP Solution Manager. |
BIQY | Business Intelligence (MDX Query): Generates MDX queries for BW info objects, extracts data via query execution in the managed BW system, compares data in SAP Solution Manager. |
BWRI | SAP BW Data Manager Read Interface: The source type uses an RFC connection to the BW system to call the BW Data Manager Read Interface to read the data. |
Databases | |
ADBC | Remote Database (using ADBC): Generates SQL statements, extracts data via ABAP Database Connectivity to supported DBMS (including HANA), compares data in SAP Solution Manager. |
HANA | Comparison in one SAP HANA Database: Generates SQL statements, compares data between SAP HANA and another database connected via SAP HANA Smart Data Access in the HANA database, transfers comparison results to the CDC application in SAP Solution Manager. |
Web Services | |
ODAT | OData Service: Generates OData queries, extracts data via HTTP query execution, compares data in SAP Solution Manager. |
Ariba | |
ARIB | Ariba Networks (Using Transaction Monitoring API V2): Generates an OpenAPI query, extracts data via the Transaction Monitoring API, and compares data in SAP Solution Manager. |
ARIP | Ariba P2P/P2O (Using Operational Reporting API): Generates an OpenAPI query, extracts data via the Operational Reporting API, and compares data in SAP Solution Manager. |
Files | |
FIXS | XML File on Application Server: No generation, extracts data from XML files on application server of SAP Solution Manager, compares data in SAP Solution Manager. |
FIXL | XML File on local PC: No generation, extracts data from XML file on local PC, compares data in SAP Solution Manager. |
CSVS | CSV File on Application Server: No generation, extracts data from comma-separated value (CSV) files on the application server of SAP Solution Manager, compares data in SAP Solution Manager. |
Custom | |
Custom Source Type | You can create additional custom source types using the CDC framework. For details, refer to Extensibility of CDC. |
Capabilities of the different source types
Source Type | Dialog | Background | Aggregation Support | Iteration Support | Table and column name support | Duplicate check support |
---|---|---|---|---|---|---|
ABAP | X | X | NATIVE BY SOURCE TYPE | X | X | - |
ABDY | X | X | NATIVE BY SOURCE TYPE | X | X | X |
ADBC | X | X | NATIVE BY SOURCE TYPE | X | X | X |
BIQY | X | X | - | X | X | - |
BWRI | X | X | GENERIC AFTER EXTRACTION | - | X | - |
CSVS | X | X | GENERIC AFTER EXTRACTION | X | - | - |
FIXL | X | - | GENERIC AFTER EXTRACTION | - | - | - |
FIXS | X | X | GENERIC AFTER EXTRACTION | X | - | - |
HANA | X | X | NATIVE BY SOURCE TYPE | X | X | - |
IDC | X | X | - | X | X | - |
ODAT | X | X | GENERIC AFTER EXTRACTION | X | X | - |
ARIB | X | X | - | X | X | - |
ARIP | X | X | - | - | X | - |
Dependencies between extraction strategy and source type
Extraction Strategy VS Source Type | Default: Individual data selection in each source system | Use keys of source system 1 to select in source system 2 / Use keys of source system 2 to select in source system 1 | Execute comparison in one system |
---|---|---|---|
SAP ABAP System (using RFC to generated extractor) | X | X | - |
SAP ABAP System (using RFC to generic extractor) | X | X | - |
Remote Database (using ADBC) | X | X | - |
XML File on Application Server | X | X | - |
XML File on local PC | X | X | - |
CSV File | X | X | - |
OData | X | X | - |
BIQY | X | X | - |
BWRI | X | - | - |
Comparison in one HANA database | - | - | X |
Determine inconsistent entries in one system (IDC) | - | - | X |
SAP ABAP System (using RFC to generated extractor)
Run Mode: BACKGROUND AND DIALOG Aggregation: NATIVE Iteration: SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: REQUIRED Please check here
Generates an extractor function module in the managed system, extracts data via RFC to the generated function module, compares data in SAP Solution Manager.
You choose this for an ABAP-based system. The data is read using a function module that needs to be generated individually for each comparison object. The generation creates a separate function module for each data model in the customer namespace. This allows enhancements in the generated code, for example to add application logic, but requires a transport from the development system to the production system.
Parameters
The following data must be entered:
Parameter | Description |
---|---|
RFC Destination (Read) | Specify the connection that is used to read the data from the ABAP Dictionary and the application tables of the source system. |
RFC Destination (Trusted) | Specify the connection that is used to create the extractor function module in the source system. You need developer authorization. |
Function Group | Specify the name of the function group (new or existing function group). |
Function Name | Specify the name of the function module that is generated by the application. |
Package | Specify the name of the package in which a new function group should be created. Not required for already existing function groups. |
Transport Request | Specify the transport request that should be used for the new function group and / or new function module. Not required for new function groups and function modules stored as local object (package $TMP). |
SAP ABAP System (using RFC to generic extractor)
Run Mode: BACKGROUND AND DIALOG Aggregation: NATIVE Iteration: SUPPORTED Duplicate Check: SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: REQUIRED Please check here
You choose this for an ABAP system. The data is read using a generic extractor function module. It is not necessary to generate individual extractor function modules; instead, SQL statements are generated and used by the generic extractor function module to extract the data. The generic extractor function module is delivered with the add-on ST-PI as standard code and therefore does not allow extensions. You can find more details about the generic extractor function module in SAP Note 1819794 - CDC: Generic extractor function module.
Parameters
Enter the following data:
Parameter | Description |
---|---|
RFC Connection | Specify the connection that is used to read the data from the ABAP Dictionary and the application tables of the source system. |
The following fields are non-editable and generated by the tool | |
SQL Statement SELECT | The SELECT part of the Open SQL query to be executed on the table, constructed from the data model |
SQL Statement FROM | The FROM part of the Open SQL query, depending on the table(s) selected; it can reference a single table or contain JOINs |
SQL Statement Fixed WHERE | The WHERE part of the Open SQL query, generated from the data model if fixed filters are applied |
SQL Statement GROUP BY | The GROUP BY part of the Open SQL query, generated if grouping is used |
SQL Statement ORDER BY | The ORDER BY part of the Open SQL query generated for the data model. The query is always sorted by the key fields of the data model. |
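For illustration only, the generated parts assembled into one statement could look roughly as follows. This is just a sketch: the table ZSALES_HDR and its fields are made up, and the real parts are always generated by the tool from the data model and shown in the fields above.

```abap
" Hypothetical illustration of how the generated Open SQL parts fit together
" (table ZSALES_HDR and its fields are invented for this example).
SELECT doc_id, comp_code, SUM( amount ) AS amount   " SELECT part
  FROM zsales_hdr                                   " FROM part
  WHERE doc_status = 'C'                            " fixed WHERE part
  GROUP BY doc_id, comp_code                        " GROUP BY part
  ORDER BY doc_id, comp_code                        " ORDER BY part
  INTO TABLE @DATA(lt_result).
```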
Business Intelligence (MDX Query)
Run Mode: BACKGROUND AND DIALOG Aggregation: NOT AVAILABLE Iteration: SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: REQUIRED Please check here
Generates MDX queries for BW info objects, extracts data via query execution in the managed BW system, compares data in SAP Solution Manager.
You can use this source type to extract data from SAP BW InfoObjects. The metadata is presented as follows: InfoObjects appear as table names, while key figures appear as field names.
Parameters
The following parameters are required:
Parameter | Description |
---|---|
RFC Connection | Specify the RFC destination of the SAP BW system |
The following fields are non-editable and generated by the tool | |
MDX query statement | The MDX query is generated and used to extract the data from the SAP BW system |
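As a rough illustration only, a generated MDX query could have the following shape. The InfoCube $0SD_C03, the key figure 0AMOUNT, and the characteristic 0MATERIAL are hypothetical names; the actual query is always generated by the tool from the data model.

```mdx
-- Hypothetical MDX query against a BW InfoCube (names are placeholders)
SELECT
  { [Measures].[0AMOUNT] } ON COLUMNS,
  NON EMPTY [0MATERIAL].MEMBERS ON ROWS
FROM [$0SD_C03]
```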
BW Read Interface (BWRI)
Run Mode: BACKGROUND AND DIALOG Aggregation: GENERIC AFTER EXTRACTION Iteration: NOT SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: NONE
This source type uses an RFC connection to the BW system to call the BW Data Manager Read Interface to read the data. In the data model an InfoCube is entered as table and the characteristics and key figures as table columns. There is no generation, but the information in the data model is used to call the BW Data Manager Read Interface.
Currently, the source type BWRI supports the comparison of data from BW InfoCubes only. Characteristics can be used as key or data fields, but key figures can be used as data fields only. Both characteristics and key figures can be used as context fields.
Parameters
The following information is required:
Parameter | Description |
---|---|
RFC Connection | Specify the connection to the BW system |
Note
This source type has a few restrictions when it comes to using filters. The source type supports filters on data model level and on comparison level, but both types of filters cannot be applied to the same field: such filters would have to be combined with an AND operator, and the BW Data Manager Read Interface offers only one filter range table, which does not support this.
Remote Database (using ADBC)
Run Mode: BACKGROUND AND DIALOG Aggregation: NATIVE Iteration: SUPPORTED Duplicate Check: SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: REQUIRED Please check here
You choose this for a non-ABAP system where you can establish a direct remote database access. The data is read via a secondary database connection using a native SQL statement with ADBC (ABAP Database Connectivity). This works for all SAP-supported RDBMS including HANA.
Parameters
Parameters | Description |
---|---|
Database Connection Name | Specify the connection that is used to read the data for the comparison. It is maintained in transaction DBACOCKPIT. |
Database Schema Name | Specify the name of the database schema from which the data is read. |
The following fields are non-editable and generated by the tool: | |
SQL Statement (Count) | The SQL statement used to count the number of expected objects is displayed after generation. |
SQL Statement (Extract) | The SQL statement used to extract the source data is displayed after generation. |
SQL Statement (Duplicates) | The SQL statement used to extract a list of duplicate objects is displayed after generation. |
Direct remote database access
In addition to its own local database, SAP Solution Manager can also create secondary database connections to “remote” (external) databases of other non-ABAP or non-SAP systems. The connection to a remote database uses ADBC (ABAP Database Connectivity), an ABAP Objects-based API that supports all officially SAP-supported database management systems (DBMS) and allows running native SQL queries from the ABAP stack. CDC can generate such native SQL extraction queries to read and compare the data with other data sources. The corresponding secondary database connection must be maintained with transaction DBACOCKPIT. The old method of using transaction DBCO is still supported but less user-friendly.
From a technical point of view, you need an installation of the appropriate database client and Database Shared Library (DBSL) for each type of SAP-supported DBMS, if not already installed. For example, if SAP Solution Manager runs on a MaxDB database and you want to connect to an external application that also uses a MaxDB database, no additional installation is necessary. However, if this external application runs on MS SQL Server, you would need to install an additional DBSL and database client from Microsoft.
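The following minimal ABAP sketch illustrates how ADBC runs a native SQL query over a secondary database connection. It is not the code generated by CDC; the connection name MY_REMOTE_DB, the schema MYSCHEMA, and the table ORDERS are assumptions for this example.

```abap
" Minimal ADBC sketch: read a few columns from a remote table over a
" secondary database connection maintained in DBACOCKPIT (assumed name MY_REMOTE_DB).
TYPES: BEGIN OF ty_row,
         doc_id TYPE c LENGTH 10,
         amount TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_row.
DATA lt_rows TYPE STANDARD TABLE OF ty_row.

DATA(lo_con)  = cl_sql_connection=>get_connection( 'MY_REMOTE_DB' ).
DATA(lo_stmt) = lo_con->create_statement( ).
DATA(lo_res)  = lo_stmt->execute_query(
  |SELECT DOC_ID, AMOUNT FROM "MYSCHEMA"."ORDERS" ORDER BY DOC_ID| ).
lo_res->set_param_table( REF #( lt_rows ) ).  " target internal table for the result set
lo_res->next_package( ).                      " fetch the rows
lo_res->close( ).
lo_con->close( ).
```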
The following SAP Notes provide additional information on how to setup the secondary database connections (also known as database multiconnect) for the different supported DBMS.
DBMS code | Description | SAP Note |
---|---|---|
ADA | SAP MaxDB/liveCache | 955670 DB multiconnect with SAP MaxDB as secondary database |
DB2 | DB2 UDB for OS/390 | 160484 DB2/390: Database multiconnect with EXEC SQL |
DB4 | DB2 UDB for AS/400 | 146624 IBM i: Database Multiconnect for IBM DB2 for i |
DB6 | DB2 UDB for LUW | 200164 DB6: DB multiconnect with DB6 as target database |
HDB | SAP HANA database | 1597627 SAP HANA connection |
MSS | Microsoft SQL Server | 178949 MSSQL: Database MultiConnect |
ORA | Oracle | 339092 DB MultiConnect with Oracle as secondary database |
SYB | SAP ASE | 1507573 External DB connect to an SAP ASE database |
SIQ | SAP IQ | 1737415 Sybase IQ: Enable remote/secondary connect to SAP IQ |
Using SAP HANA SDA (Smart Data Access) you can connect to further remote sources not listed here, such as Hadoop, Teradata, and others. The HANA database exposes them as virtual tables, and CDC can consume them with the source type ADBC or HANA as well. So the access path, from the CDC point of view, would be SolMan/CDC => ADBC => HANA => SDA => RemoteDB.
Please see here for further information: https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/a07c7ff25997460bbcb73099fb59007d.html
Comparison in one SAP HANA Database
Run Mode: BACKGROUND AND DIALOG Aggregation: NATIVE Iteration: SUPPORTED Duplicate Check: NOT SUPPORTED Metadata retrieval: SUPPORTED
Extraction Strategy: EXECUTE COMPARISON IN ONE DATABASE Special Permissions: NONE
This source type enables comparisons directly in an SAP HANA database. You can compare data from the SAP HANA database with data from a second database (SAP HANA or a different database). For this purpose, the tables of the second database must be available as virtual tables in the SAP HANA database. This is realized using SAP HANA Smart Data Access:
SAP HANA Smart Data Access
SAP HANA Smart Data Access is based on virtual tables that map to existing objects at the remote data source.
For information about how to create virtual tables using SAP HANA Smart Data Access, please see the corresponding chapter in the SAP HANA Administration Guide: SAP HANA Smart Data Access.
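As a rough sketch only (it assumes a remote source named MY_REMOTE_SRC has already been created by the database administrator; the schema and table names are made up), a virtual table is created in the SAP HANA database and can then be queried like a local table:

```sql
-- Expose a table of the second database as a virtual table in SAP HANA (SDA).
-- MY_REMOTE_SRC, REMOTE_SCHEMA, and ORDERS are placeholder names.
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_ORDERS"
  AT "MY_REMOTE_SRC"."<NULL>"."REMOTE_SCHEMA"."ORDERS";

-- The virtual table can now be read like any local table:
SELECT COUNT(*) FROM "MYSCHEMA"."VT_ORDERS";
```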
For most source types, CDC extracts data from both data sources and compares them in SAP Solution Manager. To compare data directly in an SAP HANA database, you first change the extraction strategy from Default: Individual Data Selection in Each Source System to Execute Comparison in One System. After this, instead of selecting two source types, you select only one source type Comparison in One SAP HANA Database.
Parameters
The following information is required:
Parameter | Description |
---|---|
Database Connection Name | Specify the connection that is used to read the data for the comparison |
Database Schema Name | Specify the name of the database schema from which the data is read |
The following fields are non-editable and generated by the tool | |
SQL Statement (Count System 1, Count System 2) | The SQL statement used to count the number of expected objects is displayed after generation. |
SQL Statement (Comparison) | The SQL statement used for comparison between the data in the two source systems. |
SQL Statement (Extract) | The SQL statement used to extract the source data is displayed after generation. |
During comparison, data is compared in the SAP HANA database and only the comparison result is returned to the CDC application in SAP Solution Manager:
This source type determines row hashes and uses them for comparison. Where available, only the key values and row hashes are transferred from the second database to the SAP HANA database. After the inconsistencies are determined, only the requested information, such as inconsistency details and context fields for the inconsistent objects, is read from the two databases.
This source type offers improved performance because the comparison is moved from the application server of SAP Solution Manager to the more powerful SAP HANA database. In addition to data comparison within one SAP HANA database or between two SAP HANA databases, this approach is also suitable for various other databases such as Teradata, SAP Sybase ASE, SAP Sybase IQ, Oracle, and MS SQL Server. Currently the source type does not support comparison in blocks of configurable block size, so we recommend splitting very large comparisons first; see Mass Activities for more information.
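To illustrate the idea only (this is not the statement generated by the tool; the schema, table, and field names are hypothetical), a hash-based comparison in SQL could look roughly like this: the keys of both tables are joined and the row hashes are compared, so only mismatching keys need to be returned to SAP Solution Manager.

```sql
-- Sketch of a hash-based comparison inside SAP HANA.
-- ORDERS is a local table, VT_ORDERS a virtual table exposed via SDA.
SELECT COALESCE(a."DOC_ID", b."DOC_ID") AS doc_id
FROM "MYSCHEMA"."ORDERS" a
FULL OUTER JOIN "MYSCHEMA"."VT_ORDERS" b
  ON a."DOC_ID" = b."DOC_ID"
WHERE a."DOC_ID" IS NULL                                   -- missing in system 1
   OR b."DOC_ID" IS NULL                                   -- missing in system 2
   OR HASH_SHA256(TO_BINARY(TO_VARCHAR(a."AMOUNT") || '|' || a."STATUS"))
   <> HASH_SHA256(TO_BINARY(TO_VARCHAR(b."AMOUNT") || '|' || b."STATUS"));
```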
OData Service
Run Mode: BACKGROUND AND DIALOG Aggregation: GENERIC AFTER EXTRACTION Iteration: SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: NONE
Use this source type for data from cloud systems that offer an accessible OData service.
Parameters
The following information is required:
Parameter | Description |
---|---|
RFC Connection | Specify the connection that is used to connect to the OData service. For this purpose set up an HTTP connection to an external server (Type G). |
URL Extension | If the URL in the RFC destination needs any additional extension, it must be given here. |
The following fields are non-editable and generated by the tool | |
Query | This is the OData query which is generated depending on the data model. Note: Please be careful about the number of fields selected and the number of conditions added to the filters. The query has a maximum length of 2048 characters; if it exceeds this length, the query cannot be executed and the comparison will fail. |
OData Version | This is used to extract the OData version of the service. Features may or may not be available depending on the version. |
For connections to cloud systems, for example SuccessFactors, only basic authentication over a secure HTTP connection is supported.
CDC supports all OData versions from 2.0 onwards. Up to Service Pack 6 (SP06), only the ATOM format is supported; from SP07 onwards, the JSON response format is supported as well.
The following functions must be accessible and working for CDC to function with the OData service:
- Metadata ($metadata)
- Count ($count and/or $inlinecount)
- Querying, filtering and expand operations ($select,$filter,$expand)
- Paging operations ($top and $skip)
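For illustration, a generated OData query typically combines these options. The service path and entity set below are hypothetical, and the actual query string is generated by the tool and shown in the Query field; it is shown here on multiple lines for readability, but is sent as a single string.

```
/sap/opu/odata/sap/ZORDERS_SRV/OrderSet
  ?$select=OrderID,Amount,Status
  &$filter=CreatedAt ge datetime'2024-01-01T00:00:00'
  &$inlinecount=allpages
  &$top=1000&$skip=0
```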
Metadata interpretation
General display
Each OData entity set is treated as a table: after entering the RFC destination, add a table to the OData source. When you select the entity set, its properties and associations, if any, are displayed as columns. CDC supports only one primary OData entity at a time. Each entity set is linked to an entity type, which has properties and navigation properties to other entities.
For fields of the entity that have a standard EDM type (such as Edm.Int32 or Edm.String), the field is shown together with an equivalent ABAP DDIC type.
Note
The ABAP DDIC type shown is just for information. During extraction and comparison CDC treats all OData responses as strings.
Special properties
Up to Service Pack 6 (SP0-SP6) | From Service Pack 7 (SP07+) |
---|---|
If the property has an attribute that has KeepInContent = "false", CDC will search for the Target path and display the column name as: PropertyName_S_TargetName. Example : Name_S_Title. | Shown as a normal field |
If the property has a complex type, CDC treats it as a structure and displays it as follows: ComplexTypeName__ComplexTypePropertyName. Example : Address__Street | If the property has a complex type, CDC treats it as a structure and displays it as follows: ComplexTypeName_C_ComplexTypePropertyName. Example : Address_C_Street |
If the property is part of an association (relation), then the column name is displayed as follows: AssociatedEntityName_N_AssociatedEntityPropertyName. Example: Category_N_ID. Only 1..1 associations are supported. | If the property is part of an association (relation), then the column name is displayed as follows: for 1..1 associations it is AssociatedEntityName_N_AssociatedEntityPropertyName. Example: Category_N_ID. For 1..* or *..1 associations it is AssociatedEntityName_NR_AssociatedEntityPropertyName. Example: Product_NR_ID. |
Key fields defined in the OData metadata are also displayed among the table fields.
Supported and equivalent data types
The table below lists the equivalent OData and DDIC data types for comparison. As stated above, these types are for information only; all OData responses are treated as strings.
OData Type | ABAP Type Equivalent | Support |
---|---|---|
Null | N/A | Yes, as (' ') |
Edm.Binary | D16R | Yes |
Edm.Boolean | CHAR | No (Conversion needed) |
Edm.Byte | INT1 | Yes |
Edm.DateTime | DATS | Support only after conversion |
Edm.DateTimeOffset | DATS | Support only after conversion |
Edm.Decimal | FLTP | Yes |
Edm.Double | FLTP | Yes |
Edm.Single | FLTP | Yes |
Edm.Guid | LCHR | Yes |
Edm.Int16 | INT2 | Yes |
Edm.Int32 | INT4 | Yes |
Edm.Int64 | INT4 | Yes |
Edm.SByte | LRAW | Yes |
Edm.String | SSTRING | Yes |
Edm.Date | DATS | Support only after conversion |
Edm.Time | TIMS | Support only after conversion |
Ariba Networks (Using Transaction Monitoring API V2)
Run Mode: BACKGROUND AND DIALOG Aggregation: NOT SUPPORTED Iteration: SUPPORTED FOR SELECTED FIELDS Duplicate Check: NOT SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: NONE
This source type generates an OpenAPI query, extracts data via the Transaction Monitoring API, and compares data in SAP Solution Manager.
The SAP Ariba Transaction Monitoring REST API gives users of SAP Ariba cloud solutions a fast and reliable mechanism to track business documents as they flow between applications. This allows a high degree of visibility and traceability into business processes and documents.
Parameters
The following parameters must be entered:
Parameter | Description |
---|---|
OAuth Destination | RFC destination pointing to the authentication service. This is an HTTPS service in which the username and password must be saved; it is used to generate the OAuth token. |
OpenAPI Destination | The second destination to be created; it points to the HTTP service that holds the Transaction Monitoring API. |
API Key | API key for the Transaction Monitoring API; it is unique to every customer. |
The following fields are non-editable and generated by the tool | |
Query | OpenAPI query generated from the data model. |
Requirements for Ariba Networks
Follow the steps for enabling the Transaction monitoring API as detailed in the SAP Ariba Transaction monitoring REST API guide available here.
The following steps are required to use the Transaction Monitoring API with Ariba Network:
- Enable alternate cXML document flow in Ariba Network.
SAP Ariba Customer Support must enable alternate cXML document routing for the supplier-buyer relationship. Ask your Designated Support Contact to log a service request.
- Create SAP Ariba OpenAPI account.
To create your account, access the SAP Ariba Open API portal (https://developer.ariba.com/api/login), click Don’t have an account yet? and complete the Registration Request Form. Alternatively, contact your SAP Ariba Customer Success Manager or Account Manager to create the account.
- Provision TM OpenAPI and BPM integration (Client-ID to ANID mapping).
To create the client-ID mapping, contact your SAP Ariba Customer Success Manager or Account Manager.
Documents supported
Ariba Process Name | Document Name |
---|---|
TRANSACTION_TRACKING_PURCHASEORDER | Purchase Order |
TRANSACTION_TRACKING_SHIPNOTICEDOCUMENT | (Advanced) Ship Notice |
TRANSACTION_TRACKING_COMPONENTCONSUMPTIONREQUEST | Component Consumption Request |
TRANSACTION_TRACKING_CONFIRMATIONDOCUMENT | Order Confirmation |
TRANSACTION_TRACKING_COPYREQUEST | Wrapper which contains another document type (like the copy of a Purchase Order to another trading partner) |
TRANSACTION_TRACKING_PRODUCTACTIVITYMESSAGE | Forecast |
TRANSACTION_TRACKING_PRODUCTREPLENISHMENTMESSAGE | Forecast Commit and Manufacturing Visibility |
Metadata provided
Fieldname | Key | Description |
---|---|---|
document_id | X | This is the unique identifier of the business document that is being tracked in an event. This could contain the value of one of the following identifiers: OrderID, ConfirmID, ShipmentID, and InvoiceID. |
sender_id | | Identity of the sender of the business document, for example the sender's Ariba Network ID. |
reciever_id | | Identity of the receiver of the business document, for example the receiver's Ariba Network ID. |
process_name | | The specific business document that is tracked in an event. Processes relating to the supported documents (see Documents supported above) are tracked. |
changed_by | | The name of the application/node of the published event. |
events_NR_correlation_id | | The purchase order (PO) ID from the source system. This helps in easy identification of the PO numbers which are affected. It is specified only when the document is a Change PO, an Advance Ship Notice (ASN), or an Order Confirmation. |
events_NR_order_date | | The date on which the order was created in the source system. This field is mapped either to the Document Date or the Reference Document Date. It is mapped to the Reference Document Date to trace the lineage in PO change instances, as a PO change does not contain orderReference information. |
events_NR_alert_type | | The business document processing status. This indicates the success or failure of document processing. It also includes warnings, if there are any. |
events_NR_event_name | | The event name that indicates the document processing stage. |
events_NR_event_timestamp | | The time the event was triggered. |
events_NR_event_id | | The unique identifier of the event. |
events_NR_host | | The web host domain link. |
events_NR_reference_payload_id | | The cXML Payload ID for the PO. This helps associate the PO with other related business documents, such as Order Confirmation and Advance Ship Notice. |
events_NR_error_message | | The error information from Ariba Network, which contains more information on why an error has occurred. This allows the user to quickly identify corrective actions. |
events_NR_source | | The source of the event. |
events_NR_date_happened | | The event date, which is the date and time when the event was triggered. Dates are in epoch format. |
date_happened | | The event date, which is the date and time when the event was triggered. Dates are in epoch format. |
Iteration support
The Ariba source type for CDC supports iteration and key-based selection for the following fields (only one can be selected at a time):
- document_id
- correlation_id
- transaction_id
Limitations
Because the CDC source type uses the API, it is limited in a few ways:
- When the ARIBA source type is the primary source type, the date_happened filter is mandatory; it must define a time range between two points in time and cannot reach more than 30 days into the past. Relative time filters are recommended.
- Filters can only be applied to the fields as detailed in the filters section. Other combinations are not possible.
Ariba P2P/P2O (Using Operational Reporting API)
Run Mode: BACKGROUND AND DIALOG Aggregation: NOT SUPPORTED Iteration: NOT SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: NONE
This source type generates an OpenAPI query, extracts data via the Operational Reporting API, and compares data in SAP Solution Manager.
The Operational Reporting API for procurement enables you to extract and report on the transactional procurement data that you need to make operational decisions, such as invoices to pay or purchase requisitions that need approval.
Parameters
The following parameters must be entered:
Parameter | Description |
---|---|
OAuth Destination | RFC destination pointing to the authentication service. This is an HTTPS service in which the username and password must be saved; it is used to generate the OAuth token. |
OpenAPI Destination | The second destination to be created; it points to the HTTP service that holds the Operational Reporting API. |
Realm | The Ariba Realm for the P2P/P2O service |
API Key | API Key of the application |
The following fields are non-editable and generated by the tool | |
Query | OpenAPI query generated from the data model. |
Metadata
The CDC source type for P2P/P2O provides the following tables:
- CDC_CopyOrder
- CDC_DirectOrder
- CDC_ERPOrder
- CDC_Invoice
- CDC_InvoiceRequisition
- CDC_Recipt
- CDC_Requisition
Each table has custom fields pertaining to its document type, and all tables have a few fields in common:
- createDate
- updateDate
- status
Only one date field can be selected at a given time. By default, all statuses are selected.
Filters
The following fields are filterable using CDC:
Field Name | Object filter supported? | Instance filter supported? | Combine with |
---|---|---|---|
createDate and updateDate | Date/Time must be given in the format YYYY-MM-DDTZHH:MM, must always be a range, the timespan cannot be more than 31 days, and it cannot be more than 30 days in the past. | Date/Time can use relative time filters like $TODAY. | All |
status | Depending on the table it can have different statuses. | No | All |
Supported filter values for status
Document type | Status |
---|---|
Requisition | |
Purchase order including ERPOrder, DirectOrder, and CopyOrder | |
Receipt | |
Invoice | |
Invoice Reconciliation | |
Limitations
Because the CDC source type uses the API, it is limited in a few ways:
- When the ARIBA source type is the primary source type, the createDate/updateDate filter is mandatory; it must define a time range between two points in time and cannot reach more than 30 days into the past. Relative time filters are recommended.
- Filters can only be applied to the fields as detailed in the filters section. Other combinations are not possible.
View creation on Ariba P2P/P2O
The cross-database comparison for Ariba P2P/P2O works with views provided by the Operational Reporting API. However, not all views are supported; CDC uses a few custom views for each document type. When the comparison is run, the tool tries to create a view using the Reporting API; the view that is created depends on the date field selected. For example, if you select the view CDC_Requisitions with the date filter create_date, a custom view named CDC_Requisitions_CR is created and then used for extracting the data. Note that the view is created only once, when the comparison is run successfully for the first time.
XML File on Application Server
Run Mode: BACKGROUND AND DIALOG Aggregation: GENERIC AFTER EXTRACTION Iteration: SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: NOT SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: NONE
No generation, extracts data from XML files on application server of SAP Solution Manager, compares data in SAP Solution Manager.
You choose this for data that is stored in an XML file on an SAP Solution Manager application server. The source type can process single and multiple XML files.
Parameters
The following data is required:
Parameters | Description |
---|---|
Host name | Specify the name of the application server |
File Path | Specify the path on the SAP Solution Manager application server under which the XML file containing the data to be compared is saved. |
File Name | Specify the name of the XML file. If no file name but only a path is supplied, all files in the given path are processed. Moreover, the file name allows the wildcards *, +, $TODAY and $YESTERDAY. If multiple files are selected, data from all these files is processed together. |
Created from, Created to | Enter a time from when and a time until when files are considered. Certain key words are possible for entering the times. |
XML transformation | Enter a parameter in order to execute an XSL transformation that converts data from a different format into the required asXML format. In this way it is possible to process XML files that are originally not in the required asXML format. |
Requirements
The CDC source type FIXS for XML files on the application server requires either XML files directly in asXML format, or XML files in a different format together with a corresponding transformation that brings the required data from these files into asXML format.
asXML is the format that you get if you serialize an internal ABAP table using the identity transformation ID. The example below shows a table with the columns FIRST_NAME, LAST_NAME and CITY; each table row appears as one <ITEM> tag.
The XML declaration at the beginning of the file is optional, whereas the namespace definition needs to refer to ABAP XML. The part of the XML file containing the data to be compared needs to be included in tag <asx:values> (and </asx:values> respectively).
Beyond this level a tag <TABLE> marks the start of the data table, whereas several tags opening and closing with <item> (and </item> respectively) mark the beginning and the end of a line item in the table to be compared.
Within the <item> tags the names of the respective table fields need to be used as tag names. These tags contain the actual item data.
Furthermore, it is important that the tags in the XML file in asXML format correspond to the tags in the mapping connection of the CDC data model.
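As a minimal sketch (the internal table lt_persons and its row type are invented for this example), such an asXML file can be produced from ABAP with the identity transformation ID:

```abap
" Serialize an internal table to asXML using the identity transformation ID.
TYPES: BEGIN OF ty_person,
         first_name TYPE string,
         last_name  TYPE string,
         city       TYPE string,
       END OF ty_person.
DATA lt_persons TYPE STANDARD TABLE OF ty_person.

lt_persons = VALUE #( ( first_name = 'Jason' last_name = 'Li' city = 'Shanghai' ) ).

DATA lv_xml TYPE string.
" The parameter name TABLE becomes the <TABLE> tag in the asXML output.
CALL TRANSFORMATION id SOURCE table = lt_persons RESULT XML lv_xml.
```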
Example
ABAP Table:
FIRST_NAME | LAST_NAME | CITY |
---|---|---|
Jason | Li | Shanghai |
Peter | Meier | Berlin |
Michael | Smith | Washington |
Converted into asXML this table would look like this:
<?xml version="1.0" encoding="UTF-8"?> <asx:abap xmlns:asx=" http://www.sap.com/abapxml" version="1.0"> <asx:values> <TABLE> <ITEM> <FIRST_NAME>Jason</FIRST_NAME> <LAST_NAME>Li</LAST_NAME> <CITY>Shanghai</CITY> </ITEM> <ITEM> <FIRST_NAME>Peter</FIRST_NAME> <LAST_NAME>Meier</LAST_NAME> <CITY>Berlin</CITY> </ITEM> <ITEM> <FIRST_NAME>Michael</FIRST_NAME> <LAST_NAME>Smith</LAST_NAME> <CITY>Washington</CITY> </ITEM> </TABLE> </asx:values> </asx:abap>
In the definition of the data model it should look like this:
(Here “TEST_TABLE” is used as table name.)
XML File on local PC
Run Mode: DIALOG Aggregation: GENERIC AFTER EXTRACTION Iteration: NOT SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: NOT SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: NONE
You choose this for data that is stored in an XML file on the local PC.
Parameters
The following data must be entered:
Parameter | Description |
---|---|
File Name | Name and path of the XML file on the local system |
Requirements
The CDC source type FIXL for XML files on local PC requires XML files in asXML format.
asXML is the format that you get if you serialize an internal ABAP table using the identity transformation ID. The example below shows a table with the columns FIRST_NAME, LAST_NAME and CITY; each table row appears as one <ITEM> tag.
The XML declaration at the beginning of the file is optional, whereas the namespace definition needs to refer to ABAP XML. The part of the XML file containing the data to be compared needs to be included in tag <asx:values> (and </asx:values> respectively).
Beyond this level a tag <TABLE> marks the start of the data table, whereas several tags opening and closing with <item> (and </item> respectively) mark the beginning and the end of a line item in the table to be compared.
Within the <item> tags the names of the respective table fields need to be used as tag names. These tags contain the actual item data.
Furthermore, it is important that the tags in the XML file in asXML format correspond to the tags in the mapping connection of the CDC data model.
Example
ABAP Table:
FIRST_NAME | LAST_NAME | CITY |
---|---|---|
Jason | Li | Shanghai |
Peter | Meier | Berlin |
Michael | Smith | Washington |
Converted into asXML this table would look like this:
<?xml version="1.0" encoding="UTF-8"?> <asx:abap xmlns:asx=" http://www.sap.com/abapxml" version="1.0"> <asx:values> <TABLE> <ITEM> <FIRST_NAME>Jason</FIRST_NAME> <LAST_NAME>Li</LAST_NAME> <CITY>Shanghai</CITY> </ITEM> <ITEM> <FIRST_NAME>Peter</FIRST_NAME> <LAST_NAME>Meier</LAST_NAME> <CITY>Berlin</CITY> </ITEM> <ITEM> <FIRST_NAME>Michael</FIRST_NAME> <LAST_NAME>Smith</LAST_NAME> <CITY>Washington</CITY> </ITEM> </TABLE> </asx:values> </asx:abap>
In the definition of the data model it should look like this:
(Here “TEST_TABLE” is used as table name.)
CSV File on Application Server
Run Mode: BACKGROUND AND DIALOG Aggregation: SUPPORTED Iteration: SUPPORTED Duplicate Check: NOT SUPPORTED
Metadata retrieval: SUPPORTED Extraction Strategy: INDIVIDUAL SELECTION AND KEY BASED SELECTION Special Permissions: NONE
This source type was introduced with SAP Solution Manager 7.2. You use it for data stored in comma-separated value (CSV) files.
Parameters
The following data is required:
Parameters | Description |
---|---|
Host name | Specify the name of the application server |
File Path | Specify the path on the SAP Solution Manager application server under which the CSV file containing the data to be compared is saved. |
File Name | Specify the name of the CSV file. If no file name but only a path is supplied, all files in the given path are processed. Moreover, the file name allows the wildcards *, +, $TODAY and $YESTERDAY. If multiple files are selected, data from all these files is processed together. |
Created from, Created to | Enter a time from when and a time until when files are considered. Certain key words are possible for entering the times. |
Header Line | Specify whether the CSV file has a header line |
Field Separator | Specify the field separator of the CSV file |
Metadata
Enter any table name, and the field names from the header line (or any field names if there is no header line). There is no value help for table and field names; you have to enter them manually.
CSV files supported
The source type supports different kinds of CSV files:
- CSV files with or without header lines: The header line is not compared, but it determines the order in which the fields are mapped.
- CSV files with different field separators: A comma is used as default field separator, but it is possible to enter different field separators.
- CSV files in different formats: The source type automatically checks whether the file is in UTF-8 format. In UTF-8 format, a possible byte order mark (BOM) is skipped and not included in the comparison. In a non-Unicode system, the file is read without being converted. In a Unicode system, if the file is not in UTF-8 format, the characters of the file are handled in accordance with the non-Unicode code page that would be assigned at read time in a non-Unicode system, based on the entry in database table TCP0C for the current text environment.
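For illustration, a small CSV file with a header line and a semicolon as field separator could look as follows; the field names are hypothetical. With Header Line set, the first line is used only to determine the field mapping and is not compared.

```
FIRST_NAME;LAST_NAME;CITY
Jason;Li;Shanghai
Peter;Meier;Berlin
Michael;Smith;Washington
```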