

Best practice for BPC script logic


This wiki is a brief introduction to writing BPC script logic according to best practice, and to analyzing and resolving script logic issues efficiently.


Script logic is a powerful tool that allows BPC consultants to write customized code to meet specific, varying business requirements. Compared to other calculation types, script logic has the following advantages:

Time flexibility:
     – Real-time calculations using default logic.
     – Runs at any scheduled time using the package scheduler.
Code flexibility:
     – Different formulas can be applied to different models within an environment.
     – Customized coding can achieve specific calculation requirements.

However, without best practice, script logic can also cause performance or other issues that impact BPC functions.

This wiki introduces some useful points to help BPC consultants use script logic according to best practice, as well as some useful tools for analyzing script logic issues.

Some troubleshooting tools:

  1. LGX file: The LGX file is generated after script logic is validated and saved, and contains the expanded code of the script logic. It is found in UJFS under webfolders --> (Environment) --> adminapp --> (Model).

    The difference between the LGF and LGX files: the LGF file contains the source code as you wrote it, while the LGX file contains the expanded code that is actually executed.

    You can always check the content of the LGX file to see whether the detailed code of your script logic works as you expect.
  2. Script logic log: The script logic log file contains the calculation result or error after your script logic is executed. The log file is under UJFS --> webfolders --> (Environment) --> (Model) --> privatepublications --> (User) --> tempfiles.
    This log not only shows the execution result and detailed error messages, but also contains the calculation logic and the time consumed by each part. So, always check this log if any issue, error, or performance problem occurs with your script logic.
  3. SLG1 trace: This is useful if a specific error happens in the backend. Find it by going to transaction SLG1 and specifying Object: UJ*, the user ID, and the execution time.
  4. UJKT: UJKT is a very useful test tool that allows you to test script logic directly in the backend by entering the correct parameters. Using the Execute (Simulate) button, you can run your problematic script logic without touching the data in the database, so you can test your code as many times as you want.
    How to use UJKT: Go to transaction UJKT --> specify Environment and Model --> paste your script logic --> change the dynamic dimension context or parameters to specific ones --> Execute or Execute (Simulate).

Seven suggestions to improve your script logic performance (best practice):

  1. Load in memory only the required data: Always set *XDIM_MEMBERSET to the most specific dimension granularity possible, to avoid loading too much data into memory during the calculation. A tightly scoped member set performs better than a broad one.
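As a sketch of this point (the dimension and member names below are hypothetical, not from the original wiki), restricting every relevant dimension loads far less data into memory than restricting only one of them:

```
// Better: every relevant dimension is restricted before the calculation
*XDIM_MEMBERSET TIME = 2023.JAN
*XDIM_MEMBERSET CATEGORY = ACTUAL
*XDIM_MEMBERSET ACCOUNT = REVENUE

// Worse: only TIME is restricted, so all ACCOUNT and CATEGORY
// records for that period are loaded into memory
*XDIM_MEMBERSET TIME = 2023.JAN
```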

  2. Carefully select the “triggers” of your calculations: Putting the scope of the source data into the trigger part (*WHEN, *IS, ...) as much as possible performs better than filtering inside the calculation part (EXPRESSION=, ...).
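A sketch of the comparison (account names are hypothetical): the version that narrows the scope in the *WHEN/*IS trigger reads only the matching records, while the version that triggers on every record filters too late:

```
// Better: only REVENUE records are read and processed
*WHEN ACCOUNT
*IS REVENUE
*REC(EXPRESSION = %VALUE% * 1.1, ACCOUNT = PLAN_REVENUE)
*ENDWHEN

// Worse: every record in scope is read; the filter is applied
// only inside a nested trigger
*WHEN ACCOUNT
*IS *
    *WHEN ACCOUNT
    *IS REVENUE
    *REC(EXPRESSION = %VALUE% * 1.1, ACCOUNT = PLAN_REVENUE)
    *ENDWHEN
*ENDWHEN
```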

  3. Keep the logic structure as compact as possible: Writing your code in a compact format not only improves readability but also improves calculation performance.
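For illustration (member names are made up), several *REC statements inside one *WHEN block scan the source data once, whereas repeating the *WHEN block scans it once per block:

```
// Compact: one pass over the source data, two target records
*WHEN ACCOUNT
*IS REVENUE
*REC(EXPRESSION = %VALUE% * 1.1, CATEGORY = PLAN)
*REC(EXPRESSION = %VALUE% * 0.2, ACCOUNT = TAX)
*ENDWHEN

// Less compact: the same source scope is scanned twice
*WHEN ACCOUNT
*IS REVENUE
*REC(EXPRESSION = %VALUE% * 1.1, CATEGORY = PLAN)
*ENDWHEN
*WHEN ACCOUNT
*IS REVENUE
*REC(EXPRESSION = %VALUE% * 0.2, ACCOUNT = TAX)
*ENDWHEN
```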

  4. Minimize the number of COMMITs: Sometimes you need to use the result of one calculation as the source data of the next. However, overusing *COMMIT in script logic hurts performance, so restructure the code to minimize the number of *COMMIT statements. For a simple calculation, in most cases even writing the formula twice performs better than using *COMMIT.
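A sketch with hypothetical accounts: instead of committing an intermediate result and reading it back, the first formula can simply be repeated inside the second *REC, so everything is posted in a single write:

```
// With *COMMIT: the intermediate result is written to the database
// and then read back for the second step
*WHEN ACCOUNT
*IS REVENUE
*REC(EXPRESSION = %VALUE% * 1.1, ACCOUNT = PLAN_REVENUE)
*ENDWHEN
*COMMIT
*WHEN ACCOUNT
*IS PLAN_REVENUE
*REC(EXPRESSION = %VALUE% * 0.2, ACCOUNT = PLAN_TAX)
*ENDWHEN
*COMMIT

// Without *COMMIT: the first formula is repeated, so both targets
// are calculated from the same source records in one pass
*WHEN ACCOUNT
*IS REVENUE
*REC(EXPRESSION = %VALUE% * 1.1, ACCOUNT = PLAN_REVENUE)
*REC(EXPRESSION = %VALUE% * 1.1 * 0.2, ACCOUNT = PLAN_TAX)
*ENDWHEN
```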

  5. Keep in default logic only the calculations that are absolutely required to be performed in real time: In a word, put calculations into specific script logic files as much as possible, rather than into default.lgf. This minimizes the possibility of issues when saving data in an input schedule or during package execution.

  6. Carefully check what the log files say: Always check the script logic log introduced above whenever any issue happens with your script logic.

  7. Run a stress test before going live: Script logic performance can change dramatically as the quantity of transaction data or dimension members increases, so do enough stress testing before moving your script logic code from QA to PRD.

Two case studies:

  1. Delta load (/CPMB/LOAD_DELTA_IP) posts records even though the package fails:
    Issue description: After running the package Delta load (/CPMB/LOAD_DELTA_IP), the result message shows failed, but data is still saved to the backend.

    Investigation result: The error happens in default logic, which is executed after the package has completed, so an error in the script logic does not revert the result of the package execution. Hence, when an end user executes the package, this behavior occurs: the package fails but the data is updated.
    Solution: An enhancement was added to the Delta load (/CPMB/LOAD_DELTA_IP) package to give BPC users the option of reverting the package result when an error happens in default logic. See KBA 2093127.
    Experience from this case study: Keep in default logic only the calculations that are absolutely required to be performed in real time.

  2. Performance of RUNALLOCATION logic is very bad:
    Issue description: The *RUNALLOCATION function in script logic shows bad performance after go-live; the time consumed increased from 2 hours to 10 hours.

    Investigation steps: First check the customer's script logic log.
    There you will see that the allocation calculation is repeated over 1000 times, each run consuming 75 s. From further investigation, we found that the customer used a *FOR loop to repeat the RUNALLOCATION for all ENTITY members, and in their PRD system ENTITY has over 5000 members.
    Solution: Instead of using a *FOR loop, put the ENTITY member context into *XDIM_MEMBERSET. The allocation is then executed only once, with the same result. See KBA 2155230.
    Experience from this case study: 1. Load in memory only the required data; 6. Carefully check what the log files say; 7. Run a stress test before going live.
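The pattern from this case can be sketched as follows (the *FACTOR/*DIM lines and member names are illustrative, not the customer's actual code):

```
// Anti-pattern: the allocation is executed once per ENTITY member
*FOR %ENT% = BAS(TOTAL_ENTITY)
*XDIM_MEMBERSET ENTITY = %ENT%
*RUNALLOCATION
*FACTOR = USING/TOTAL
*DIM ACCOUNT WHAT = OVERHEAD; WHERE = ALLOC_OVERHEAD; USING = HEADCOUNT; TOTAL = <<<
*ENDALLOCATION
*NEXT

// Fix: scope all entities at once so the allocation runs a single time
*XDIM_MEMBERSET ENTITY = BAS(TOTAL_ENTITY)
*RUNALLOCATION
*FACTOR = USING/TOTAL
*DIM ACCOUNT WHAT = OVERHEAD; WHERE = ALLOC_OVERHEAD; USING = HEADCOUNT; TOTAL = <<<
*ENDALLOCATION
```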

When you raise incidents:

  1. If you face an error message/dump after your script logic execution, you can:
    1. Search for KBAs/Notes based on the keywords you find in ST22, SLG1, and the script logic log. When several notes are found, check the component, SP version, and conditions in each KBA/Note to make sure the note is applicable to your system.
    2. If the customer is using HANA DB, or a HANA-related parameter is triggered, check whether all the requirements are met according to the following KBAs:
      ENABLE_HANA_MDX (1904344): BPC & BW versions.
      ENABLE_ACCELERATOR (2003863): BPC & BW & HANA DB versions.
      Since many earlier BPC versions and HANA revisions have bugs related to script logic calculation, make sure your BPC/HANA version is new enough.

  2. If you face a performance issue or wrong output:
    Do enough pre-analysis to narrow the root cause down to script logic before raising an incident. Make sure the issue is really caused by script logic and not by report formatting, an MDX formula, or something else.

