Thursday, 27 October 2011

Conversion of DTP's (after BDLS)

After a BDLS has run, usually following a system copy/restore, not all existing DTP's are always properly 'BDLS-ed'.

To 'BDLS' DTP's, program RSBKDTP_BDLS can be used.

Friday, 21 October 2011

SAP BW user exit alternative

Instead of applying/changing code in the 'user-exit' ABAP includes ZXRSAUnn or ZXRSRUnn, you can also use BAdI RSU5_SAPI_BADI.
Tcode SE18:

Implementation --> Overview or Implementation --> Create
Double-click on method DATA_TRANSFORM

Within METHOD IF_EX_RSU5_SAPI_BADI~DATA_TRANSFORM the required user exit code can be entered/altered.
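As an illustrative sketch only (the DataSource name 'ZEXAMPLE' is hypothetical, and the exact parameter names should be verified in your release), an implementation of DATA_TRANSFORM could look like this:

```abap
METHOD if_ex_rsu5_sapi_badi~data_transform.
* Sketch only: the DataSource name 'ZEXAMPLE' is hypothetical.
* C_T_DATA is a generic table, so a generic field-symbol is used.
  FIELD-SYMBOLS: <l_s_data> TYPE any.

  CASE i_datasource.
    WHEN 'ZEXAMPLE'.
      LOOP AT c_t_data ASSIGNING <l_s_data>.
        " place the record-level user exit logic here
      ENDLOOP.
    WHEN OTHERS.
      " leave all other DataSources untouched
  ENDCASE.
ENDMETHOD.
```

The CASE on i_datasource is important: unlike the ZXRSAUnn includes, this single BAdI method is called for every DataSource, so unrelated extractions must fall through untouched.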

Program to activate transformations in a closed BW system

Without having to transport your transformation (again), you can also execute ABAP program RSDG_TRFN_ACTIVATE to re-activate an inactive transformation.

Tuesday, 30 August 2011

Regression tests? Make easy cube content downloads

It's been a while (aka long time) since I've last posted an entry to this blog. My apologies...

Ever wondered how to make life easier when executing regression tests on cube content?
Wouldn't it be easy to just download the cube content to excel twice and compare the 2 downloads?
To achieve the above, just use transaction RSCRM_BAPI.



Via RSCRM_BAPI you can easily make query extracts and save these extracts to excel.
The preferred way of working for a regression test:
1) Execute RSCRM_BAPI on a query on the cube in question and download the query result to excel
2) Apply changes to the transformation/update rule of that cube
3) Reload a subset of data to that cube
4) Execute step 1 again (with the same variables)
5) Compare the 2 excel downloads (created in step 1 and step 4) and you're done!

That's all folks....

Friday, 17 December 2010

Performance vs statistics

Ever wondered why, even though you've deleted all indexes on your cube, the (massive) initial load still took a very, very long time?
It's all about (database) statistics!
Unfortunately the statistics can't be refreshed during (massive) data loading...even though you might think it's possible, looking at the screenshot below.

Unfortunately, the "refresh statistics" button doesn't work properly.

Luckily there's a workaround: refreshing the statistics of a cube while data is being loaded into it.
Run report RSANAORA and enter the fact table of the cube for which the statistics need to be refreshed:



ps. With transaction DB20 you can validate/check the correctness of the current statistics. As long as the traffic light is green, you've got nothing to worry about :-)

Tuesday, 14 December 2010

Starting a DTP WITHOUT a process chain

In the old days, working with infopackages, it was easy to schedule loads from within the infopackage. Schedule the infopackage directly, or in the background on a specific date + time.
Unfortunately this easy scheduling is not available from within the DTP. To schedule an individual DTP, you HAVE to create a process chain, containing the DTP you want to schedule.

Luckily there's a workaround for this. Applying the code below will create an ABAP which makes it possible to schedule (individual) DTP's, without the necessity of a process chain.



Enter the technical DTP name in the parameter field of the ABAP and schedule the ABAP. By scheduling the ABAP, the DTP is automatically scheduled ;-)
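The code from the screenshot isn't reproduced here; a minimal sketch along the same lines uses the DTP API class cl_rsbk_dtp. The method names below (factory, create_request, doit) are as found in typical 7.x releases and should be verified in your system:

```abap
REPORT z_start_dtp.
* Minimal sketch: start a single DTP without a process chain.
* The cl_rsbk_dtp API is release-dependent - verify before use.
PARAMETERS: p_dtp TYPE rsbkdtpnm OBLIGATORY.   " technical DTP name

DATA: l_r_dtp     TYPE REF TO cl_rsbk_dtp,
      l_r_request TYPE REF TO cl_rsbk_request.

l_r_dtp = cl_rsbk_dtp=>factory( p_dtp ).
l_r_request = l_r_dtp->create_request( ).   " create a new DTP request
l_r_request->doit( ).                       " execute it
```

Once created as a Z-report, this program can itself be scheduled in the background (SM36/SE38) on any date and time, which restores the infopackage-style scheduling comfort.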

Thursday, 9 September 2010

Easy overview of Multiprovider assignment/identification

When creating/maintaining a multiprovider, consisting of several dataproviders, it's hard to get an overview of the 'linking' of every multiprovider-'field' to its dataprovider field(s).
The SAP BW GUI doesn't deliver the desired overview in one glance....you have to click on every individual field to see the way it's linked.

By looking at the content (SE16) of table RSDICMULTIIOBJ, the multiprovider-linking will be shown as stated in the screengrab below:
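Instead of browsing with SE16, you can also pull the assignments programmatically. A sketch (the multiprovider name ZMULTI01 is hypothetical; field names as in typical releases):

```abap
* Sketch: list the part-provider assignments of one multiprovider.
* The multiprovider name ZMULTI01 is hypothetical.
DATA: lt_map TYPE STANDARD TABLE OF rsdicmultiiobj.

SELECT * FROM rsdicmultiiobj INTO TABLE lt_map
  WHERE infocube = 'ZMULTI01'   " the multiprovider
    AND objvers  = 'A'.         " active version only
```

Each resulting row links one multiprovider infoobject to the part-provider (and part-provider infoobject) it is identified with.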

Thursday, 2 September 2010

Easy overview of (the runtime of) Process Chains

When you're interested in the runtime of several process chains (e.g. during last night), transaction codes RSPC, RSPC1 and RSPCM don't completely do the trick.
Luckily there's a workaround via transaction code SA38,
report name /SSA/BWT
Execute the program, check radiobutton Process Chain Analysis and press F8.

In the following screen press F5 (=button Process Chains)

After pressing F5, a pop-up will be displayed in which you can enter the desired selection criteria

After pressing F8, the following result will be shown:

Friday, 27 August 2010

Connecting a DB connect source system

Connecting a DB connect source system isn't always as easy as it seems.
Go to RSA13 (or RSA1 --> Source systems) and create a new DB connect source system.



Right-click on the newly created source system to set the connection parameters.


Fill in the correct credentials and you're there... or so you might think.
For some reason, the settings you've just entered are not saved correctly, and therefore your DB connect source system is not properly connected... and thus unusable.
You can keep on repeating the steps above (in vain)... or you can add an entry to table DBCON.



Just enter the DB connect source system name in this table and then retry entering the settings (via RSA13)... This time it will work and the settings will be saved!

Wednesday, 14 July 2010

BRAIN 441 (query) error

Herewith some explanation regarding the BRAIN441 error which can occur when using formula variables in a query.
When a query uses a formula variable of the type "replacement path", it can, for example, replace the occurrence of a characteristic with another value.



The BRAIN441 error will occur when the query doesn't "know" how to replace the characteristic as it might occur in several InfoProviders, when using a query created on top of a Multiprovider.
As shown in the screengrab below, characteristic SSC_COKY occurs in all 3 Infoproviders (contained in the Multiprovider)



When you tick all 3 checkboxes when identifying/assigning, the BRAIN441 error will occur when executing the query, as it doesn't "know" which characteristic to replace.
Should it replace SSC_COMPL__SSC_COKY (the green boxes) or should it replace SSC_COKY (the red box)?
As long as both characteristics differ, the BRAIN441 error will keep occurring.
The solution, in this case, is to tick only the checkboxes which refer to the same characteristic: here, the characteristics in the green boxes.

Friday, 9 July 2010

Using STATICS command

It is commonly known that you should not read from database tables in field routines; otherwise you will run into performance problems. The usual approach is to fill an internal table in the start routine and do a binary search on that internal table in the field routine.

It is difficult to capture such a procedure in a function module or include without having bits of coding sitting in different places (data declaration, start routine, field routine). The example below makes use of the STATICS command, which keeps an internal table in memory during LUW execution.
The data is read into memory once and can then be accessed quickly using READ TABLE statements.
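A sketch of such a field routine, using MAKT (material texts) purely as an illustrative lookup table; the routine and parameter names are hypothetical:

```abap
* Sketch of a field routine using STATICS.
* The lookup table is loaded from the database on the first call
* only; every subsequent call reuses the table kept in memory.
FORM compute_field USING    p_matnr  TYPE matnr
                   CHANGING p_result TYPE maktx.

  STATICS: s_t_makt TYPE SORTED TABLE OF makt
                    WITH UNIQUE KEY matnr.
  DATA: l_s_makt TYPE makt.

  IF s_t_makt IS INITIAL.
    " first call: fill the lookup table once
    SELECT * FROM makt INTO TABLE s_t_makt
             WHERE spras = sy-langu.
  ENDIF.

  " keyed read on a sorted table = implicit binary search
  READ TABLE s_t_makt INTO l_s_makt WITH KEY matnr = p_matnr.
  IF sy-subrc = 0.
    p_result = l_s_makt-maktx.
  ENDIF.
ENDFORM.
```

Because the table is declared as SORTED with a unique key, the READ TABLE is a binary search automatically; no separate start-routine fill or SORT is needed.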

Tuesday, 8 June 2010

Before/Pre aggregation in 7.x query

Within BW 3.x you were able to execute a pre-aggregation on every calculated keyfigure.
In BW 7.x this is no longer possible for 'local' calculated keyfigures (as the option of aggregation is greyed out).
By creating a global calculated keyfigure, it is still possible to influence the aggregation as shown in the screengrab below

Tuesday, 11 May 2010

Constant selection in query (Apples and Oranges)

The great thing about an infoset is that it can combine data from different sources via a "JOIN" (inner/outer) mechanism. This may come in very handy if you want to report on two different sources (apples & oranges) without having to show the annoying '#'.
But what to do if you want to show data from the Apples bucket, even though this bucket is empty for the corresponding Oranges bucket? (and vice versa)

Eg.
Apples: 10 pieces in January 2010 for customer A
Oranges: 5 pieces in January 2010 for customer B

With an infoset (JOIN) on Apples & Oranges (where Apples is 'leading') it is impossible to show the Oranges of January for customer B because Apples has no customer B entry.

To be able to show the Oranges data, you have to use a multiprovider which uses the UNION mechanism. The downside of the multiprovider is that it will show a # for January Oranges for Customer A and a # for January Apples for Customer B.

This unwanted # behaviour can be overruled by using constant selection:
Step 1: Create restricted keyfigure


Step 2: Tick on constant selection



Without constant selection:
January 2010;Customer A; Apples 10; Oranges #
January 2010;Customer B; Apples #; Oranges 5

With constant selection:
January 2010;Customer A; Apples 10; Oranges 5
January 2010;Customer B; Apples 10; Oranges 5

Monday, 8 March 2010

DB02: table diagnostics and 'native sql' commands

Via DB02 you can find out what 'issues' you might have on database level.

In the screengrab above it is shown that 4 DB tables no longer exist in the database. You can use 'native SQL' commands to physically delete these 4 tables from the database. This native SQL can be executed via a call (SE37) of function module DB_EXECUTE_SQL.
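A hedged sketch of such a call, assuming the function module accepts the statement via its STATEMENT parameter (verify in SE37 first); the table name is hypothetical, and dropping tables is of course irreversible:

```abap
* Sketch: drop an orphaned table via native SQL - use with great care!
* The table name /BIC/EZEXAMPLE is hypothetical.
DATA: l_statement TYPE string.

l_statement = 'DROP TABLE "/BIC/EZEXAMPLE"'.

CALL FUNCTION 'DB_EXECUTE_SQL'
  EXPORTING
    statement = l_statement
  EXCEPTIONS
    OTHERS    = 1.
IF sy-subrc <> 0.
  WRITE: / 'Native SQL statement failed'.
ENDIF.
```

Double-check via DB02 that the table really is an orphan (exists in the database but not in the ABAP dictionary) before executing anything like this.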

Thursday, 4 March 2010

SAPconnect: Set BW mail/fax/internet settings

Use transaction SCOT to setup, for example, an SMTP connection on your BW server.

Thursday, 25 February 2010

Currency conversion in query

Looking at the properties of a keyfigure,

you sometimes find a currency conversion being executed.
If you're wondering where this conversion is being maintained, take a look at transaction RSCUR

Within transaction RSCUR you can customize/maintain the conversion which should be executed.

Friday, 19 February 2010

Activation of a cube/iobj/mpro/odso in a (closed) production system

Within SAP BI 3.x and also within SAP BI 7.x you have the possibility to activate infocube's, DSO's, etc. in a productive (closed) system, without having to transport them (again) from development.
To achieve this activation, the following ABAP programs can be used:

RSDG_ODSO_ACTIVATE: Activation of all ODS Objects
RSDG_IOBJ_ACTIVATE: Activation of all InfoObjects
RSDG_MPRO_ACTIVATE: Activating Multiproviders
RSDG_CUBE_ACTIVATE: Activation of InfoCubes

Friday, 29 January 2010

ABAP maximum for compiled code (within user exit) reached

Last week we encountered some strange behaviour (read: short dumps) in our BW system. For some strange reason the user exit for variables (function module EXIT_SAPLRRS0_001; include ZXRSRU01) was generating short dumps. Analysis revealed that there is a maximum of 32kb of compiled code which the ABAP runtime environment can handle. (This limitation concerns not the number of coded lines, but the amount of byte code generated.)
This limit was breached because we had placed all our user-exit code in one big CASE/ENDCASE statement.
When we replaced this big CASE/ENDCASE statement with several smaller CASE/ENDCASE chunks, everything worked fine again.
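The fix, as a sketch (all variable names are hypothetical): one giant CASE statement is split into several smaller sequential CASE statements, each covering a subset of the variables, so no single statement generates an oversized block of byte code.

```abap
* Include ZXRSRU01 - sketch with hypothetical variable names.
* First chunk of variables:
CASE i_vnam.
  WHEN 'ZVAR_001'.
    " ... logic for ZVAR_001 ...
  WHEN 'ZVAR_002'.
    " ... logic for ZVAR_002 ...
ENDCASE.

* Second chunk of variables:
CASE i_vnam.
  WHEN 'ZVAR_101'.
    " ... logic for ZVAR_101 ...
  WHEN 'ZVAR_102'.
    " ... logic for ZVAR_102 ...
ENDCASE.
```

Functionally nothing changes (i_vnam matches at most one WHEN branch across the chunks), but the compiled size per statement stays below the limit.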



Thanks to Crispian for the analysis

Wednesday, 27 January 2010

Easily find the request ID you're looking for

Two easy ways to find Request ID's in a BW 7.0 system:

1) Use transaction code RSRQ and enter the request ID you're looking for


2) When 'monitoring' a request ID, you can press SHIFT-F5 or go to the menu-bar and press Request -> Other Request. Within the pop-up which appears, another request ID (the one you're looking for) can be entered.

Friday, 22 January 2010

Adding a filter value within a DTP

It's been a while (in fact: it's been too long) since I've last posted a blog entry. The reason for this is that I've been working as a team-lead for the last couple of months. Therefore I was more 'occupied' with leading a team than I was with inventing/discovering great BI 7.0 solutions....
But as I'm back in the saddle, I'm proudly presenting my first 2010 blog entry:

Within the DTP-extraction tab, you're able to set a filter to control the data you're going to extract:


The standard coding which will be displayed will be something like this:

read table l_t_range with key fieldname = ' '.
l_idx = sy-tabix.
if l_idx <> 0.
   modify l_t_range index l_idx.   " entry found: update the existing range line
else.
   append l_t_range.               " no entry yet: add a new range line
endif.

Please be aware that, if l_idx = 0 (and thus no entry is found in table l_t_range), you also have to fill in the fieldname.

Eg. Usually, for filling a range, the following code is entered:
   l_t_range-sign = 'I'.
   l_t_range-option = 'EQ'.
   l_t_range-low = 'something'.
This coding is sufficient when the read statement (read table l_t_range with key fieldname = 'whatever') finds an entry in table l_t_range...and thus l_idx <> 0.

When no entry is found (and thus l_idx = 0), you also have to add the following line to the code:
   l_t_range-fieldname = 'FIELDNAME'.
Otherwise filling the range has no added value, as the system does not know for which field the range should be filled.
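Putting the pieces together, a complete filter-routine body could look like this (the field name ZFIELD and the value are hypothetical):

```abap
* Complete sketch: fill a DTP filter range for field ZFIELD.
read table l_t_range with key fieldname = 'ZFIELD'.
l_idx = sy-tabix.

l_t_range-fieldname = 'ZFIELD'.   " essential when no entry was found
l_t_range-sign      = 'I'.
l_t_range-option    = 'EQ'.
l_t_range-low       = 'something'.

if l_idx <> 0.
   modify l_t_range index l_idx.
else.
   append l_t_range.
endif.
```

Setting the fieldname unconditionally is the simplest way to cover both branches: it is harmless in the modify case and mandatory in the append case.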

Friday, 30 October 2009

Exclude query to use BW Accelerator (BWA)

What's in a name :
High Performance Analytics(HPA) became Business Intelligence Accelerator(BIA) which became Business Warehouse Accelerator(BWA) which will, eventually, become Open Database Analytics(ODA).

Anyway: after you've enabled a cube to be indexed/loaded into the BWA, all queries belonging to that cube will be executed via the BWA. You can exclude a particular query, belonging to a "BWA cube", from being run via the BWA.
Find the query which shouldn't use the BWA via SE16 and table RSRREPDIR.
Within table RSRREPDIR, set field NOHPA (= Do Not Use HPA Index) to 'X' and save the change. From then onwards, this particular query will not use the BWA, even though the cube it "runs" on is BWA-enabled.

Friday, 16 October 2009

Debug startroutine in BI 7.0

Within BI 3.x it was quite easy to debug your startroutine. Pressing the sequence F7 -> F6 -> F5 brought you directly to the startroutine (code). Within BI 7.0 the "F7->F6->F5" trick doesn't work anymore.
To debug a startroutine in BI 7.0, the following steps need to be executed:

1) Go to Display Generated Program within the transformation

2) Jump all the way down to find some startroutine coding you recognize :-) and place a break-point.

3) Find the load you want to debug within the DTP monitor

4) Press the debugging button

5) Execute the debugging request and the execution will stop at the break-point which has been created/set at step 2.


Big thanks go out to Freek Geldof who provided the necessary information!

Monday, 5 October 2009

ADAPT: Application Design for Analytical Processing Technology

I had the pleasure to attend a training at SAP Netherlands in which the ADAPT modeling methodology was explained. As I've become very enthusiastic about this modeling "tool", I'd recommend you all to read the following white paper, written by the Symmetry Corporation.

Monday, 28 September 2009

Hierarchy on navigational attribute decreased query performance

A multiprovider, based upon 5 different infoproviders and 2 infoobjects, was causing a performance drain whenever a query was executed.
Every query, based upon this multiprovider, contained a hierarchy on a navigational attribute of an infoobject.
Eg. Infoobject A --> navigational attribute B --> Hierarchy C.
To make sure that the hierarchy was displayed correctly, infoobject B was also added to the multiprovider, even though this had no added value (as no "fact" data belonging to infoobject B was to be displayed).
Infoobject B (IOB in the picture below) is a navigational attribute of 0MATERIAL

Infoobject B (IOB) has been added to the multiprovider and was identified (assigned) via the checkbox.
After the checkbox was un-ticked (see the red arrow in the picture above), and thus infoobject B was no longer part of the identification, query performance increased.
Conclusion: know what you're doing when inserting infoobjects into a multiprovider, as misuse can drastically decrease performance!

Wednesday, 23 September 2009

Making an added function module (to a function group) visible in SE80

After creating an (additional) function module within a function group, it wasn't visible within SE80 after being transported to another BW system.
When displaying the function group via transaction SE80 --> Function Group, the newly created function module WAS NOT visible, even though the import/transport ended without errors.
The trick, to make all function modules visible within the function group, is to update the navigation index as shown in the screen-grab below: