

This documentation is intended to help solve problems that may occur at a customer site while running a productive SAP Content Server based on SAP MaxDB.

Symptom

There are situations in which documents in an SAP Content Server accidentally become inaccessible, e.g.:

  • unintended dropping of repositories
  • unintended deletion of documents
  • running more than one SAP Content Server
  • unintended connection of a test system to a productive SAP Content Server

All these operations lead to missing documents or repositories that are not up to date.

Reason and Prerequisites

While dropping repositories or deleting documents are usually handling errors, they can be mitigated with the same means as the other two examples. Running more than a single SAP Content Server might be fine if it is planned in the system setup, but it becomes cumbersome if it is detected later that all repositories should reside in a single SAP Content Server.

When a test system is set up as a copy of a productive system, still connects to the SAP Content Server of production (which is the case if the test system is not reconfigured with respect to the SAP Content Server), and operations on the SAP Content Server happen, data loss may be the consequence.

Solution

General Backup/Recovery for SAP Content Server

As we consider here the SAP Content Server with SAP MaxDB as document storage, all benefits of SAP MaxDB can be utilized to preserve the data. That is, regular backups should be configured; with log backups and the current log, point-in-time recoveries can be performed with the SAP MaxDB tools. This may be sufficient to solve some of the above situations or serve as a starting point for further operations.
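
A rough sketch of such a point-in-time recovery with DBMCLI follows (the backup media names BackData and BackLog and the target time are placeholders; the media must have been defined with medium_put beforehand, and the exact syntax may vary between MaxDB versions):

dbmcli -d sdb -u <user DBM>,<password DBM>

db_admin
recover_start BackData data
recover_start BackLog log until 20200515 120000
db_online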

LOADERCLI and the table structure of the SAP Content Server

Essentially, there are three kinds of tables that make up an SAP Content Server:

  1. CONTREP - describes all repositories
  2. DOCUMENTS* - metadata of the documents
  3. COMPONENTS* - content of the documents

* means here that each repository has its own tables DOCUMENTS* and COMPONENTS*, where * is the repository number (four digits with leading zeros, e.g. 0001).

For details, see SAP Note 1678771: SAP Content Server database.
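
For a quick overview of which repositories and repository tables exist in a given SAP Content Server, a check with sqlcli can help (a sketch; the query against the MaxDB system view DOMAIN.TABLES assumes the tables are owned by the user SAPR3):

sqlcli -d sdb -u SAPR3,SAP

select contrepnr from "SAPR3"."CONTREP"
select tablename from domain.tables where owner = 'SAPR3' and tablename like 'DOCUMENTS%'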

LOADERCLI is able to export and import tables between SAP MaxDB databases. It is flexible not only in selecting which parts of a table should be exported, but also in error-tolerant importing of tables, which allows merging documents into SAP Content Server repositories.

Extracting repositories with LOADERCLI

If lost repositories cannot be retrieved with backup/recovery, LOADERCLI can usually help, provided the repository is still available in a backup. Consider, for example, that although your productive SAP Content Server lost a single repository (due to some mishap), the other repositories remained in use and received new documents. Then no recovery can serve both goals: getting back the lost repository while keeping the current state of the other repositories.

Solution
  1. Build up an SAP Content Server from a backup that contains the lost repository.
  2. Export the lost repository with LOADERCLI.

This solution is also suited if only a certain range of documents should be retrieved, since LOADERCLI is able to export table content with a specified WHERE clause.

Importing repositories/documents with LOADERCLI

With the export files from the previous section, you can import the content of the tables into an existing SAP Content Server. This way, you can restore not only complete repositories but also individual documents (e.g. from a certain time period) of a repository. With that, we can finish the solution started above:

Solution (continued)

3. Import the exported repository into your productive SAP Content Server.


The handling of LOADERCLI is described in SAP Note 1770207 - Export and import of a table.


Examples

In the following, typical scenarios are described, the necessary sequence of steps is given, and in the details below, the individual LOADERCLI or SQL statements are listed.

TST denotes the SAP Content Server that holds the documents to be exported; e.g. it is built up as a restore from a regular backup that contains the missing documents or repositories. SDB denotes the productive SAP Content Server. The user to access the SAP Content Server is SAPR3 with password SAP. As the repository in question, the repository with contrepnr=1 is chosen; therefore, the tables belonging to this repository are DOCUMENTS0001 and COMPONENTS0001. You probably have to adapt these notations when solving problems at your site, with the appropriate choice of database names, users, passwords, and repositories.

I. Deleted Docs

  1. Recover backup in test system
  2. Export the information to be restored
  3. Import into production system (ignore duplicates)

II. Deleted Repositories

  1. Recover backup in test system
  2. Export the repositories to be restored (LOADERCLI with generated keybounds, available as of 7.9.10.04)
  3. Import into production system (in parallel, as of 7.xx)

III. Incorrect content server in production system

Documents were written to another SAP Content Server instead of the one intended for this purpose.

  1. Export documents (from a given timestamp)
  2. Import into production system (update duplicates)

IV. Export and import individual documents using their Doc ID

V. Copy entire repositories with parallel export and import.

VI. Copy large repositories split into parts.


Details

Case I

Assumption: In repository 1 (related tables: DOCUMENTS0001, COMPONENTS0001), all documents originating from 2019 have been deleted by mistake.

sqlcli -d sdb -u SAPR3,SAP

select count(*) from DOCUMENTS0001 where '2018-12-31' < datem and datem < '2020-01-01'
select count(*) from COMPONENTS0001 where '2018-12-31' < compdatem and compdatem < '2020-01-01'

Both selects return 0.

Create a test system TST by restoring a backup of the SAP Content Server that contains the deleted data.

Then the corresponding documents have to be exported from TST (date within 2019):

loadercli -d TST -u SAPR3,SAP

EXPORT COLUMNS * FROM DOCUMENTS0001 WHERE ('20181231' < DATEM and DATEM < '20200101') DATA OUTFILE 'DOCUMENTS0001_extract.data' RECORDS
EXPORT COLUMNS * FROM COMPONENTS0001 WHERE ('20181231' < COMPDATEM and COMPDATEM < '20200101') DATA OUTFILE 'COMPONENTS0001_extract.data' RECORDS

Afterwards import into the productive SAP Content Server SDB:

loadercli -d sdb -u SAPR3,SAP

IMPORT TABLE DOCUMENTS0001 IGNORE DUPLICATES data instream file 'DOCUMENTS0001_extract.data' RECORDS
IMPORT TABLE COMPONENTS0001 IGNORE DUPLICATES data instream file 'COMPONENTS0001_extract.data' RECORDS

The option "IGNORE DUPLICATES" ensures that existing documents are not replaced by documents from the export. This means that even if the export from the restore contains more documents than were deleted, no documents are changed in the production system. Only deleted entries are imported.

Finally, check in productive SAP Content Server SDB:

sqlcli -d sdb -u SAPR3,SAP

select count(*) from DOCUMENTS0001 where '2018-12-31' < datem and datem < '2020-01-01'
select count(*) from COMPONENTS0001 where '2018-12-31' < compdatem and compdatem < '2020-01-01'

Both selects return a value > 0.

Case II

Assumption: An entire repository or large parts of it have been deleted.

Create a test system TST by restoring a backup of the SAP Content Server that contains the deleted data.

If the repository does not exist anymore in the target system, you must export the relevant information from the table CONTREP and import it into the production system. In this example, repository 1 is selected, i.e. contrepnr = 1.

Export from TST:

loadercli -d TST -u SAPR3,SAP

EXPORT COLUMNS * FROM "SAPR3"."CONTREP" where contrepnr = 1 DATA OUTFILE 'CONTREP.data' RECORDS

Import into SDB:

loadercli -d sdb -u SAPR3,SAP

IMPORT TABLE "SAPR3"."CONTREP" IGNORE DUPLICATES data instream file 'CONTREP.data' RECORDS

Export of the entire repository (here 0001)

For very large repositories, you can speed up the export/import by exporting or importing the repository in disjoint parts. As of release 7.9.10.04, LOADERCLI supports this division of the repository.

If you are using an older version of MaxDB or you do not need parallel export or import, you can simply export the repository with:

loadercli -d TST -u SAPR3,SAP

EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" DATA OUTFILE 'DOCUMENTS0001_full.data' RECORDS

EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" DATA OUTFILE 'COMPONENTS0001_full.data' RECORDS

Then import it into the production system SDB:

loadercli -d sdb -u SAPR3,SAP

IMPORT TABLE "SAPR3"."DOCUMENTS0001" IGNORE DUPLICATES data instream file 'DOCUMENTS0001_full.data' RECORDS

IMPORT TABLE "SAPR3"."COMPONENTS0001" IGNORE DUPLICATES data instream file 'COMPONENTS0001_full.data' RECORDS

If you want to use parallel export and import:

Check how many documents exist in repository 1:

sqlcli -d TST -u SAPR3,SAP

select count(*) from documents0001

Let us assume that 20000 documents are available. Furthermore, you want to use four parallel export/import loader processes (the option 'bound 5000' results from 20000 documents / 4 parallel exports). The LOADERCLI of MaxDB 7.9.10.04 then generates the export commands with:

loadercli -d TST -u SAPR3,SAP

export table DOCUMENTS0001 keybounds outstream 'docs_cmd.txt' bound 5000
export table COMPONENTS0001 keybounds outstream 'comps_cmd.txt' bound 5000

Note that usually the individual export commands do not contain exactly 5000 elements. It is possible that three or five export commands are created instead of four.

The files docs_cmd.txt and comps_cmd.txt contain the export commands that can be executed in parallel.

docs_cmd.txt (in this example with four export commands; the character strings <XXX>, <YYY>, <ZZZ> stand for concrete key values generated by LOADERCLI):

EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" < <XXX>
) DATA OUTFILE 'DOCUMENTS0001_part1.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" >= <XXX>
) AND ("NAME" < <YYY>
) DATA OUTFILE 'DOCUMENTS0001_part2.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" >= <YYY>
) AND ("NAME" < <ZZZ>
) DATA OUTFILE 'DOCUMENTS0001_part3.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" >= <ZZZ>
) DATA OUTFILE 'DOCUMENTS0001_part4.data' RECORDS


comps_cmd.txt (in this example with three export commands; the character strings <AAA>, <BBB> stand for concrete key values generated by LOADERCLI):

EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" WHERE ("NAME" < <AAA>
) DATA OUTFILE 'COMPONENTS0001_part1.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" WHERE ("NAME" >= <AAA>
) AND ("NAME" < <BBB>
) DATA OUTFILE 'COMPONENTS0001_part2.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" WHERE ("NAME" >= <BBB>
) DATA OUTFILE 'COMPONENTS0001_part3.data' RECORDS

The respective export commands can be processed independently by different LOADERCLI processes, that is, also in parallel.

The call for the first export for DOCUMENTS0001 is then:

loadercli -d TST -u SAPR3,SAP

EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" < <XXX>) DATA OUTFILE 'DOCUMENTS0001_part1.data' RECORDS

The remaining calls are executed in the same way and can run in parallel.

Import into production system SDB

Like the export, the imports can be executed independently of each other by different loader processes.

loadercli -d sdb -u SAPR3,SAP

IMPORT TABLE "SAPR3"."DOCUMENTS0001" IGNORE DUPLICATES data instream file 'DOCUMENTS0001_part1.data' RECORDS

and

loadercli -d sdb -u SAPR3,SAP

IMPORT TABLE "SAPR3"."COMPONENTS0001" IGNORE DUPLICATES data instream file 'COMPONENTS0001_part1.data' RECORDS


for all sequential numbers in the files DOCUMENTS0001_part<no>.data and COMPONENTS0001_part<no>.data.
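
If you want to script these per-part imports, a small shell loop can generate the batch files and start the LOADERCLI processes in parallel (a sketch assuming a Unix shell and four document parts; the COMPONENTS0001 parts are handled analogously):

for i in 1 2 3 4; do
  echo "IMPORT TABLE \"SAPR3\".\"DOCUMENTS0001\" IGNORE DUPLICATES data instream file 'DOCUMENTS0001_part${i}.data' RECORDS" > import_docs${i}.txt
  loadercli -d sdb -u SAPR3,SAP -b import_docs${i}.txt &
done
wait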

Case III

Assumption: Documents of repository 1 were not stored in the production system but in a test system by mistake.

The steps to be performed for this are very similar to those from Case I.
Assume that all documents created on or after 15 May 2020 have been stored in the test system TST instead of the production system SDB.

Check the creation date of documents in the production system:

sqlcli -d sdb -u SAPR3,SAP

select max(datem) from documents0001
| EXPRESSION1 |
| ----------- |
| 2020-05-14  |


Export the relevant documents from the test system TST:

loadercli -d TST -u SAPR3,SAP

EXPORT COLUMNS * FROM DOCUMENTS0001 WHERE ('20200515' <= DATEM) DATA OUTFILE 'DOCUMENTS0001_extract.data' RECORDS

EXPORT COLUMNS * FROM COMPONENTS0001 WHERE ('20200515' <= COMPDATEM) DATA OUTFILE 'COMPONENTS0001_extract.data' RECORDS


Import these documents into production system SDB:

loadercli -d sdb -u SAPR3,SAP

IMPORT TABLE DOCUMENTS0001 UPDATE DUPLICATES data instream file 'DOCUMENTS0001_extract.data' RECORDS

IMPORT TABLE COMPONENTS0001 UPDATE DUPLICATES data instream file 'COMPONENTS0001_extract.data' RECORDS


The option "UPDATE DUPLICATES" ensures that existing documents are replaced by documents from the export.

Check whether new documents have been added:

sqlcli -d sdb -u SAPR3,SAP

select count(*) from documents0001 where datem >= '2020-05-15'

The number is greater than 0.

Case IV

Assume that a certain document is to be transferred from one SAP Content Server (TST) to another (SDB) using its Doc ID.

Given is the document ID 58AD7742438618E0E1008000AC1CE249. For identification, always use a LIKE condition (including the trailing '%'), as shown in the example below.
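
Before the export, you can verify with sqlcli that the document is actually present in the source system (a quick check using the same LIKE condition; both counts should be greater than 0):

sqlcli -d TST -u SAPR3,SAP

select count(*) from DOCUMENTS0001 where lower(name) like lower('58AD7742438618E0E1008000AC1CE249%')
select count(*) from COMPONENTS0001 where lower(name) like lower('58AD7742438618E0E1008000AC1CE249%')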

Export the document from SAP Content Server TST:

loadercli -d TST -u SAPR3,SAP

EXPORT COLUMNS * FROM DOCUMENTS0001 WHERE lower(name) like lower('58AD7742438618E0E1008000AC1CE249%') DATA OUTFILE 'DOCUMENTS0001_doc58ad.data' RECORDS

EXPORT COLUMNS * FROM COMPONENTS0001 WHERE lower(name) like lower('58AD7742438618E0E1008000AC1CE249%') DATA OUTFILE 'COMPONENTS0001_doc58ad.data' RECORDS


Import the document to SAP Content Server SDB:

loadercli -d sdb -u SAPR3,SAP

IMPORT TABLE DOCUMENTS0001 UPDATE DUPLICATES data instream file 'DOCUMENTS0001_doc58ad.data' RECORDS

IMPORT TABLE COMPONENTS0001 UPDATE DUPLICATES data instream file 'COMPONENTS0001_doc58ad.data' RECORDS

Case V

Copy entire repositories with parallel export and import.

You want to transfer individual or all repositories of an SAP Content Server from one system to another. If all repositories are to be transferred, backup/restore may also be an option.

In the following, SRC and DST denote the source and target system, respectively. The user is SAPR3 with password SAP. You must adjust this information according to your setup.

1. Create the user SAPR3 in DST:

sqlcli -d DST -u <user SYSDBA>,<password SYSDBA>

CREATE USER SAPR3 PASSWORD SAP DBA NOT EXCLUSIVE DEFAULTCODE ASCII

2. Transfer the catalog and repository information of the user SAPR3:

allreposexport.txt:

export user catalog outstream file 'sapr3.cat'
//
EXPORT COLUMNS * FROM "SAPR3"."CONTREP" DATA OUTFILE 'CONTREP.data' RECORDS

allreposimport.txt:

import user catalog instream file 'sapr3.cat'
//
IMPORT TABLE "SAPR3"."CONTREP" IGNORE DUPLICATES data instream file 'CONTREP.data' RECORDS

Run the commands:

loadercli -d SRC -u SAPR3,SAP -b allreposexport.txt

loadercli -d DST -u SAPR3,SAP -b allreposimport.txt


Note that all table definitions are transferred here. If you want to transfer only individual repositories, you can either delete the associated tables DOCUMENTS<nr> and COMPONENTS<nr> in the target system (in DST: drop table ...) or remove them from the file 'sapr3.cat' before the import. <nr> corresponds to the CONTREPNR column of the CONTREP table for the associated repository. Furthermore, you have to delete the corresponding rows from table CONTREP (delete from "SAPR3"."CONTREP" where CONTREPNR in (<comma separated list of nr>)).
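
For example, if repository 2 is not to be transferred but its definition was imported with the catalog, the following statements remove it from DST (a sketch; the repository number 2 is hypothetical, adjust it to your setup):

sqlcli -d DST -u SAPR3,SAP

drop table "SAPR3"."DOCUMENTS0002"
drop table "SAPR3"."COMPONENTS0002"
delete from "SAPR3"."CONTREP" where CONTREPNR in (2)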

3a Transferring a repository without parallel partitioning:

Assumption: Repository 1 is to be transferred, i.e., the tables DOCUMENTS0001 and COMPONENTS0001. The number 0001 corresponds to the repository number (four digits with leading zeros, here 0001).

Export the repository from SRC:

exportdocs1.txt

set isolation level 1
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" DATA OUTFILE 'DOCUMENTS0001_full.data' RECORDS

exportcomps1.txt

set isolation level 1
//
EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" DATA OUTFILE 'COMPONENTS0001_full.data' RECORDS

Run the commands:

loadercli -d SRC -u SAPR3,SAP -b exportdocs1.txt

loadercli -d SRC -u SAPR3,SAP -b exportcomps1.txt


And then import into DST:

importdocs1.txt

IMPORT TABLE "SAPR3"."DOCUMENTS0001" IGNORE DUPLICATES data instream file 'DOCUMENTS0001_full.data' RECORDS

importcomps1.txt

IMPORT TABLE "SAPR3"."COMPONENTS0001" IGNORE DUPLICATES data instream file 'COMPONENTS0001_full.data' RECORDS

Run the commands:

loadercli -d DST -u SAPR3,SAP -b importdocs1.txt

loadercli -d DST -u SAPR3,SAP -b importcomps1.txt


You can perform these steps in parallel, not only for DOCUMENTS<nr> and COMPONENTS<nr>, but also for each repository you want to transfer.

For small repositories, you can combine the export and import of DOCUMENTS0001 and COMPONENTS0001 into a single batch file:

exportrepo1.txt

set isolation level 1
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" DATA OUTFILE 'DOCUMENTS0001_full.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" DATA OUTFILE 'COMPONENTS0001_full.data' RECORDS

importrepo1.txt

IMPORT TABLE "SAPR3"."DOCUMENTS0001" IGNORE DUPLICATES data instream file 'DOCUMENTS0001_full.data' RECORDS
//
IMPORT TABLE "SAPR3"."COMPONENTS0001" IGNORE DUPLICATES data instream file 'COMPONENTS0001_full.data' RECORDS


Run the commands:

loadercli -d SRC -u SAPR3,SAP -b exportrepo1.txt

loadercli -d DST -u SAPR3,SAP -b importrepo1.txt

3b Transferring a repository with parallel partitioning:

Assumption: Repository 1 is to be transferred.

If a repository is very large, you can use LOADERCLI as of version 7.9.10.04 to perform an efficient distribution of the export.

Check how many documents exist:

sqlcli -d SRC -u SAPR3,SAP

select count(*) from documents0001

Let us assume that 20000 documents are available.

You want to use four parallel export/import loader processes (the option 'bound 5000' results from 20000 documents / 4 parallel exports):

loadercli -d SRC -u SAPR3,SAP

export table DOCUMENTS0001 keybounds outstream 'docs_cmd.txt' bound 5000
export table COMPONENTS0001 keybounds outstream 'comps_cmd.txt' bound 5000

Note that the individual export commands do not normally contain exactly 5000 elements. It is possible that three or five export commands are created instead of four.


The files docs_cmd.txt and comps_cmd.txt contain the export commands that can be executed in parallel.

docs_cmd.txt (in this example with four export commands; the character strings <XXX>, <YYY>, <ZZZ> stand for concrete key values generated by LOADERCLI):

EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" < <XXX>
) DATA OUTFILE 'DOCUMENTS0001_part1.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" >= <XXX>
) AND ("NAME" < <YYY>
) DATA OUTFILE 'DOCUMENTS0001_part2.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" >= <YYY>
) AND ("NAME" < <ZZZ>
) DATA OUTFILE 'DOCUMENTS0001_part3.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS0001" WHERE ("NAME" >= <ZZZ>
) DATA OUTFILE 'DOCUMENTS0001_part4.data' RECORDS


comps_cmd.txt (in this example with three export commands; the character strings <AAA>, <BBB> stand for concrete key values generated by LOADERCLI):

EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" WHERE ("NAME" < <AAA>
) DATA OUTFILE 'COMPONENTS0001_part1.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" WHERE ("NAME" >= <AAA>
) AND ("NAME" < <BBB>
) DATA OUTFILE 'COMPONENTS0001_part2.data' RECORDS
//
EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" WHERE ("NAME" >= <BBB>
) DATA OUTFILE 'COMPONENTS0001_part3.data' RECORDS

The respective export commands can be processed independently by different loader processes, that is, also in parallel. To do so, copy each export command into its own batch file, e.g.:

export_comps2.txt:

EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS0001" WHERE ("NAME" >= <AAA>
) AND ("NAME" < <BBB>
) DATA OUTFILE 'COMPONENTS0001_part2.data' RECORDS

Afterwards, the following LOADERCLI calls can be executed in parallel:

loadercli -d SRC -u SAPR3,SAP -b export_docs1.txt

loadercli -d SRC -u SAPR3,SAP -b export_docs2.txt

loadercli -d SRC -u SAPR3,SAP -b export_docs3.txt

loadercli -d SRC -u SAPR3,SAP -b export_docs4.txt

loadercli -d SRC -u SAPR3,SAP -b export_comps1.txt

loadercli -d SRC -u SAPR3,SAP -b export_comps2.txt

loadercli -d SRC -u SAPR3,SAP -b export_comps3.txt



Import into target system DST:

Like the export, the imports can be executed independently of each other by different loader processes. For each part, you have to provide an import command.

E.g.

import_docs1.txt:

IMPORT TABLE "SAPR3"."DOCUMENTS0001" IGNORE DUPLICATES data instream file 'DOCUMENTS0001_part1.data' RECORDS

As before, the import commands can run in parallel as well:

loadercli -d DST -u SAPR3,SAP -b import_docs1.txt

loadercli -d DST -u SAPR3,SAP -b import_docs2.txt

loadercli -d DST -u SAPR3,SAP -b import_docs3.txt

loadercli -d DST -u SAPR3,SAP -b import_docs4.txt

loadercli -d DST -u SAPR3,SAP -b import_comps1.txt

loadercli -d DST -u SAPR3,SAP -b import_comps2.txt

loadercli -d DST -u SAPR3,SAP -b import_comps3.txt


Case VI

Copy large repositories split into parts.

You want to transfer individual or all repositories of an SAP Content Server from one system to another. The downtime of the SAP Content Server should be short, while the repositories are large. For this case, a procedure is described here for transferring single parts of an SAP Content Server while it is in use.

In the following, SRC and DST denote the source and target system, respectively. The user is SAPR3 with password SAP. The repository in question has contrepnr=1, i.e. the tables DOCUMENTS0001 and COMPONENTS0001 are considered. You must adjust this information according to your setup.

When exporting from an SAP Content Server on which users create, modify, or delete documents, it is necessary to set the isolation level to 1 at the beginning of each LOADERCLI export process:

set isolation level 1

In order to cut the repositories into parts, the modification dates of the tables DOCUMENTS0001 (DATEM) and COMPONENTS0001 (COMPDATEM) are used. In LOADERCLI, both dates are given in the format 'YYYYMMDD' (e.g. 20201231 for New Year's Eve). These dates make it possible to finally put together all documents of a repository in the destination system DST. Please make sure that you do not miss documents at the boundary dates. In the following example, three parts comprising the years 2018, 2019, and 2020 are used.
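
To verify that the chosen parts cover the whole repository and no documents are missed at the boundary dates, you can count the rows that fall outside all intervals (a sketch for the 2018-2020 example; both selects should return 0):

sqlcli -d SRC -u SAPR3,SAP

select count(*) from documents0001 where DATEM <= '2017-12-31' or DATEM >= '2021-01-01'
select count(*) from components0001 where COMPDATEM <= '2017-12-31' or COMPDATEM >= '2021-01-01'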

First, we determine the number of documents in the source system for the chosen parts and in total:

sqlcli -d SRC -u SAPR3,SAP

select count(*) from documents0001 where '2017-12-31' < DATEM and DATEM < '2019-01-01'
select count(*) from components0001 where '2017-12-31' < COMPDATEM and COMPDATEM < '2019-01-01'

select count(*) from documents0001 where '2018-12-31' < DATEM and DATEM < '2020-01-01'
select count(*) from components0001 where '2018-12-31' < COMPDATEM and COMPDATEM < '2020-01-01'

select count(*) from documents0001 where '2019-12-31' < DATEM and DATEM < '2021-01-01'
select count(*) from components0001 where '2019-12-31' < COMPDATEM and COMPDATEM < '2021-01-01'

select count(*) from documents0001
select count(*) from components0001

Then the export is started. Depending on the I/O capacity of the system, the exports can run in parallel or be shifted to time slots where the load on the source SAP Content Server is low. The actual size of the exports, in terms of the number of documents, can be controlled by appropriate start and end dates for the single exports of the tables DOCUMENTS0001 and COMPONENTS0001.

In the following, we use 3 input files for LOADERCLI in order to export each year in parallel:

export2018.txt

set isolation level 1
//
EXPORT COLUMNS * FROM DOCUMENTS0001 WHERE ('20171231' < DATEM and DATEM < '20190101') DATA OUTFILE 'DOCUMENTS0001_2018.data' RECORDS
//
EXPORT COLUMNS * FROM COMPONENTS0001 WHERE ('20171231' < COMPDATEM and COMPDATEM < '20190101') DATA OUTFILE 'COMPONENTS0001_2018.data' RECORDS


export2019.txt

set isolation level 1
//
EXPORT COLUMNS * FROM DOCUMENTS0001 WHERE ('20181231' < DATEM and DATEM < '20200101') DATA OUTFILE 'DOCUMENTS0001_2019.data' RECORDS
//
EXPORT COLUMNS * FROM COMPONENTS0001 WHERE ('20181231' < COMPDATEM and COMPDATEM < '20200101') DATA OUTFILE 'COMPONENTS0001_2019.data' RECORDS


export2020.txt

set isolation level 1
//
EXPORT COLUMNS * FROM DOCUMENTS0001 WHERE ('20191231' < DATEM and DATEM < '20210101') DATA OUTFILE 'DOCUMENTS0001_2020.data' RECORDS
//
EXPORT COLUMNS * FROM COMPONENTS0001 WHERE ('20191231' < COMPDATEM and COMPDATEM < '20210101') DATA OUTFILE 'COMPONENTS0001_2020.data' RECORDS


Then, the 3 LOADERCLI calls can be executed simultaneously:

loadercli -d SRC -u SAPR3,SAP -b export2018.txt

loadercli -d SRC -u SAPR3,SAP -b export2019.txt

loadercli -d SRC -u SAPR3,SAP -b export2020.txt


You can import these parts in an arbitrary order, except the last ones for DOCUMENTS0001 and COMPONENTS0001, which have to be imported during the downtime of the productive instance:

E.g. either:

loadercli -d DST -u SAPR3,SAP

IMPORT TABLE DOCUMENTS0001 UPDATE DUPLICATES data instream file 'DOCUMENTS0001_2018.data' RECORDS

IMPORT TABLE COMPONENTS0001 UPDATE DUPLICATES data instream file 'COMPONENTS0001_2018.data' RECORDS

or:

import2018.txt:

IMPORT TABLE DOCUMENTS0001 UPDATE DUPLICATES data instream file 'DOCUMENTS0001_2018.data' RECORDS
//
IMPORT TABLE COMPONENTS0001 UPDATE DUPLICATES data instream file 'COMPONENTS0001_2018.data' RECORDS


loadercli -d DST -u SAPR3,SAP -b import2018.txt


Check in the target system:

sqlcli -d DST -u SAPR3,SAP

select count(*) from documents0001 where '2018-12-31' < DATEM and DATEM < '2020-01-01'
select count(*) from components0001 where '2018-12-31' < COMPDATEM and COMPDATEM < '2020-01-01'


Finally, during the downtime to switch production from SRC to DST, you have to transfer the last part of the repository. You can check whether all rows of DOCUMENTS0001 and COMPONENTS0001 match between SRC and DST, e.g. for 2019:

sqlcli -d SRC -u SAPR3,SAP

select count(*) from documents0001 where '2018-12-31' < DATEM and DATEM < '2020-01-01'
select count(*) from components0001 where '2018-12-31' < COMPDATEM and COMPDATEM < '2020-01-01'

sqlcli -d DST -u SAPR3,SAP

select count(*) from documents0001 where '2018-12-31' < DATEM and DATEM < '2020-01-01'
select count(*) from components0001 where '2018-12-31' < COMPDATEM and COMPDATEM < '2020-01-01'


Please note that if a document or one of its components was deleted, or a component was added after the export, the repository in DST may contain documents or components that are no longer needed. This can happen due to the nature of exporting over a time interval while the repository is in use (i.e. can be changed). If this is not acceptable in your case, you have to repeat the exports and imports of all parts for which the number of rows in DOCUMENTS0001 or COMPONENTS0001 differs. Before the import, you have to delete all document information for the corresponding time interval. Assuming 2019 has to be exported and imported again:


loadercli -d SRC -u SAPR3,SAP

EXPORT COLUMNS * FROM DOCUMENTS0001 WHERE ('20181231' < DATEM and DATEM < '20200101') DATA OUTFILE 'DOCUMENTS0001_2019.data' RECORDS
EXPORT COLUMNS * FROM COMPONENTS0001 WHERE ('20181231' < COMPDATEM and COMPDATEM < '20200101') DATA OUTFILE 'COMPONENTS0001_2019.data' RECORDS


Delete in DST:

sqlcli -d DST -u SAPR3,SAP

delete from documents0001 where '2018-12-31' < DATEM and DATEM < '2020-01-01'
delete from components0001 where '2018-12-31' < COMPDATEM and COMPDATEM < '2020-01-01'


Import in DST:

loadercli -d DST -u SAPR3,SAP

IMPORT TABLE DOCUMENTS0001 UPDATE DUPLICATES data instream file 'DOCUMENTS0001_2019.data' RECORDS
IMPORT TABLE COMPONENTS0001 UPDATE DUPLICATES data instream file 'COMPONENTS0001_2019.data' RECORDS


Of course, the exports and imports can be executed in parallel using batch command files and the -b option of LOADERCLI.
Check in SRC:

sqlcli -d SRC -u SAPR3,SAP

select count(*) from documents0001 where '2018-12-31' < DATEM and DATEM < '2020-01-01'
select count(*) from components0001 where '2018-12-31' < COMPDATEM and COMPDATEM < '2020-01-01'

Check in DST:

sqlcli -d DST -u SAPR3,SAP

select count(*) from documents0001 where '2018-12-31' < DATEM and DATEM < '2020-01-01'
select count(*) from components0001 where '2018-12-31' < COMPDATEM and COMPDATEM < '2020-01-01'




Further Information

SAP Note 1678771 - SAP Content Server database
SAP Note 1770207 - Export and import of a table

SAP Content Server Documentation