Ever wondered why, even though you've deleted all indexes on your cube, the (massive) initial load still took a very, very long time?
It's all about (database) statistics!
Unfortunately, the statistics can't be refreshed during a (massive) data load... even though you might think it's possible, looking at the screenshot below: the "refresh statistics" button simply doesn't work properly.
Luckily there's a workaround: refreshing the statistics of a cube while data is being loaded into it.
Run report RSANAORA and enter the fact table of the cube for which the statistics need to be refreshed, e.g. /BIC/FZSALES for the F-fact table of cube ZSALES.
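If you'd rather script this than run the report manually, the underlying Oracle mechanism can be triggered via native SQL. A minimal sketch, assuming an Oracle-based system; the schema owner and fact table name are illustrative assumptions, so verify both before running it:
* Hedged sketch: gather fresh Oracle statistics for one fact table.
* Schema ('SAPR3') and table ('/BIC/FZSALES') are illustrative assumptions.
EXEC SQL.
  BEGIN
    DBMS_STATS.GATHER_TABLE_STATS( ownname => 'SAPR3',
                                   tabname => '/BIC/FZSALES' );
  END;
ENDEXEC.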
PS: With transaction DB20 you can check the correctness of the current statistics. As long as the traffic light is green, you've got nothing to worry about :-)
Friday, 17 December 2010
Tuesday, 14 December 2010
Starting a DTP WITHOUT a process chain
In the old days, working with infopackages, it was easy to schedule loads from within the infopackage: run it directly, or in the background on a specific date and time.
Unfortunately this easy scheduling is not available from within the DTP. To schedule an individual DTP, you HAVE to create a process chain containing the DTP you want to schedule.
Luckily there's a workaround for this. The code below creates an ABAP report which makes it possible to schedule individual DTPs without needing a process chain.
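A minimal sketch of such an ABAP, assuming the standard CL_RSBK_DTP runtime API (verify the class and method names in your release):
REPORT z_start_dtp.

* Technical name of the DTP to start (the report's parameter field).
PARAMETERS: p_dtp TYPE rsbkdtpnm OBLIGATORY.

DATA: lr_dtp     TYPE REF TO cl_rsbk_dtp,
      lr_request TYPE REF TO cl_rsbk_request.

* Instantiate the DTP and create a new load request for it.
lr_dtp = cl_rsbk_dtp=>factory( p_dtp ).
lr_request = lr_dtp->create_request( ).

* Flag and start the request; scheduling this report in SM36/SM37
* therefore schedules the DTP itself.
lr_request->set_ctflag( ).
lr_request->doit( ).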
Enter the technical DTP name in the parameter field of the ABAP and schedule the ABAP; by scheduling the ABAP, the DTP is scheduled automatically ;-)
Thursday, 9 September 2010
Easy overview of Multiprovider assignment/identification
When creating/maintaining a multiprovider consisting of several dataproviders, it's hard to get an overview of how every multiprovider field is linked to its dataprovider fields.
The SAP BW GUI doesn't deliver the desired overview at a glance... you have to click on every individual field to see how it's linked.
By looking at the content of table RSDICMULTIIOBJ (via SE16), the multiprovider linking is shown in one overview, as in the screengrab below:
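If you prefer a programmatic variant of the SE16 check, a minimal sketch; the multiprovider name is an illustrative assumption and the field names should be verified in SE11:
* Hedged sketch: read the InfoObject mapping of one multiprovider
* straight from RSDICMULTIIOBJ (active version only).
DATA: lt_map TYPE STANDARD TABLE OF rsdicmultiiobj.

SELECT * FROM rsdicmultiiobj INTO TABLE lt_map
  WHERE infocube = 'ZMYMPRO'   " multiprovider technical name (assumption)
    AND objvers  = 'A'.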
Thursday, 2 September 2010
Easy overview of (the runtime of) Process Chains
When you're interested in the runtime of several process chains (e.g. during last night), transaction codes RSPC, RSPC1 or RSPCM don't do the trick completely.
Luckily there's a workaround via transaction code SA38, report /SSA/BWT.
Execute the program, check the radio button Process Chain Analysis and press F8.
In the following screen press F5 (= button Process Chains).
After pressing F5, a pop-up is displayed in which you can enter the desired selection criteria.
After pressing F8, the following result is shown:
Friday, 27 August 2010
Connecting a DB connect source system
Connecting a DB connect source system isn't always as easy as it seems.
Go to RSA13 (or RSA1 -> Source systems) and create a new DB connect source system.
Right-click the newly created source system to set the connection parameters.
Fill in the correct credentials and you're there... or at least, that's what you might think.
For some reason, the settings you've just entered are not saved correctly, so your DB connect source system is not properly connected... and thus unusable.
You can keep repeating the steps above in vain... or you can add an entry to table DBCON by hand.
Just enter the DB connect source system name in this table and then re-enter the settings (via RSA13). This time it will work and the settings will be saved!
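A minimal sketch of that hack; the connection name is an illustrative assumption, and you can of course also create the entry via SE16/SM30 instead:
* Hedged sketch: create the missing DBCON entry so that RSA13 can
* save its connection settings afterwards.
DATA: ls_dbcon TYPE dbcon.

ls_dbcon-con_name = 'MY_DB_SOURCE'.  " name of the DB connect source system
INSERT dbcon FROM ls_dbcon.
COMMIT WORK.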
Wednesday, 14 July 2010
BRAIN 441 (query) error
Herewith some explanation regarding the BRAIN441 error which can occur when using formula variables in a query.
When a query uses a formula variable of the type "replacement path", it can, for example, replace the occurrence of a characteristic with another value.
The BRAIN441 error occurs when the query doesn't "know" how to replace the characteristic, because it occurs in several InfoProviders; for example in a query created on top of a multiprovider.
As shown in the screengrab below, characteristic SSC_COKY occurs in all 3 InfoProviders (contained in the multiprovider).
When you tick all 3 checkboxes when identifying/assigning, the BRAIN441 error will occur when executing the query, as it doesn't "know" which characteristic to replace.
Should it replace SSC_COMPL__SSC_COKY (the green boxes) or should it replace SSC_COKY (the red box)?
As long as both characteristics differ, the BRAIN441 error will keep on occurring.
The solution, in this case, is to tick only the checkboxes which refer to the same characteristic: here, the ones in the green boxes.
Friday, 9 July 2010
Using the STATICS command
It is commonly known that you should not read from database tables in field routines; otherwise you will run into performance problems. The usual approach is to fill an internal table in the start routine and do a binary search on that internal table in the field routine.
It is difficult to capture such a procedure in a function module or include without bits of coding sitting in different places (data declaration, start routine, field routine). The example below makes use of the STATICS statement, which keeps an internal table in memory during LUW execution.
The data is read once into memory and can be accessed quickly using READ TABLE statements.
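A minimal sketch of the pattern inside a field routine; the lookup table ZMAT_ATTR and its fields MATNR/MATKL are illustrative assumptions. Note that STATICS is only allowed in FORM-based (3.x-style) routines; in a method-based 7.x transformation, a class attribute plays the same role:
* Hedged sketch of a 3.x-style update-rule field routine.
* ZMAT_ATTR / MATNR / MATKL are illustrative assumptions.
STATICS: st_mat TYPE STANDARD TABLE OF zmat_attr.

DATA: ls_mat TYPE zmat_attr.

* Fill the buffer on the first call only; STATICS keeps the table
* alive across all subsequent calls of this routine.
IF st_mat IS INITIAL.
  SELECT * FROM zmat_attr INTO TABLE st_mat.
  SORT st_mat BY matnr.
ENDIF.

* Fast in-memory lookup instead of a SELECT per record.
READ TABLE st_mat INTO ls_mat
     WITH KEY matnr = comm_structure-matnr
     BINARY SEARCH.
IF sy-subrc = 0.
  RESULT = ls_mat-matkl.
ENDIF.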
Tuesday, 8 June 2010
Before/Pre aggregation in 7.x query
Within BW 3.x you were able to execute a pre-aggregation on every calculated keyfigure.
In BW 7.x this is no longer possible for 'local' calculated keyfigures (as the option of aggregation is greyed out).
By creating a global calculated keyfigure, it is still possible to influence the aggregation, as shown in the screengrab below.
Tuesday, 11 May 2010
Constant selection in query (Apples and Oranges)
The great thing about an infoset is that it can combine data from different sources via a "JOIN" (inner/outer) mechanism. This may come in very handy if you want to report on two different sources (apples & oranges) without having to show the annoying '#'.
But what if you want to show data from one bucket even though the corresponding entry in the other bucket is empty (and vice versa)?
E.g.:
Apples: 10 pieces in January 2010 for customer A
Oranges: 5 pieces in January 2010 for customer B
With an infoset (JOIN) on Apples & Oranges (where Apples is 'leading') it is impossible to show the Oranges of January for customer B because Apples has no customer B entry.
To be able to show the Oranges data, you have to use a multiprovider which uses the UNION mechanism. The downside of the multiprovider is that it will show a # for January Oranges for Customer A and a # for January Apples for Customer B.
This unwanted # behaviour can be overruled by using constant selection:
Step 1: Create a restricted keyfigure.
Step 2: Tick constant selection.
Without constant selection:
January 2010; Customer A; Apples 10; Oranges #
January 2010; Customer B; Apples #; Oranges 5
With constant selection:
January 2010; Customer A; Apples 10; Oranges 5
January 2010; Customer B; Apples 10; Oranges 5
Monday, 8 March 2010
DB02: table diagnostics and 'native sql' commands
Via DB02 you can find out what 'issues' you might have on the database level.
The screengrab above shows 4 DB tables which no longer exist in the ABAP dictionary. You can use 'native SQL' commands to physically delete these 4 tables from the database. This native SQL can be executed via a call (SE37) of function module DB_EXECUTE_SQL.
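As an alternative to the SE37 route, native SQL can also be issued directly from a small report via EXEC SQL. A hedged sketch; the table name is an illustrative assumption, and since native SQL bypasses the ABAP dictionary, triple-check the statement before executing it:
* Hedged sketch: physically drop one orphaned table on Oracle.
* The quoted identifier is needed because of the '/' in the name.
EXEC SQL.
  DROP TABLE "/BIC/FZOLDCUBE"
ENDEXEC.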
Thursday, 25 February 2010
Currency conversion in query
Looking at the properties of a keyfigure, you sometimes find a currency conversion being executed.
If you're wondering where this conversion is maintained, take a look at transaction RSCUR.
Within transaction RSCUR you can customize/maintain the conversion which should be executed.
Friday, 19 February 2010
Activation of a cube/iobj/mpro/odso in a (closed) production system
Within SAP BI 3.x and also within SAP BI 7.x you have the possibility to activate infocubes, DSOs, etc. in a productive (closed) system, without having to transport them (again) from development.
To achieve this activation, the following ABAP programs can be used:
RSDG_ODSO_ACTIVATE: Activation of all ODS Objects
RSDG_IOBJ_ACTIVATE: Activation of all InfoObjects
RSDG_MPRO_ACTIVATE: Activation of Multiproviders
RSDG_CUBE_ACTIVATE: Activation of InfoCubes
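These reports can be run directly via SE38, or called from your own program. A small hedged sketch; since the selection-screen parameter names vary by release, the call goes via the selection screen:
* Hedged sketch: call the activation report for InfoCubes without
* hard-coding its selection-screen parameter names.
SUBMIT rsdg_cube_activate VIA SELECTION-SCREEN AND RETURN.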
Friday, 29 January 2010
ABAP maximum for compiled code (within user exit) reached
Last week we encountered some strange behavior (read: short dumps) in our BW system. For some strange reason the user exit for variables (function module EXIT_SAPLRRS0_001; include ZXRSRU01) was generating short dumps. Analysis revealed that there's a maximum of 32 KB of compiled code that the ABAP runtime environment can handle. (This limitation concerns not the number of coded lines, but the amount of byte code generated.)
This limit was breached because we had placed all our user-exit code in one big CASE/ENDCASE statement.
When we replaced this big CASE/ENDCASE statement with several smaller CASE/ENDCASE chunks, everything worked fine again.
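For illustration, a hedged sketch of the shape of the fix in include ZXRSRU01; all variable and form names are made up for the example:
* Instead of one monolithic CASE covering every variable, use several
* smaller CASE/ENDCASE blocks and delegate the bodies to FORM routines,
* so no single block generates too much byte code.
CASE i_vnam.
  WHEN 'ZVAR_FISCPER'.
    PERFORM fill_fiscper CHANGING e_t_range.
  WHEN 'ZVAR_COMPANY'.
    PERFORM fill_company CHANGING e_t_range.
ENDCASE.

CASE i_vnam.
  WHEN 'ZVAR_PLANT'.
    PERFORM fill_plant CHANGING e_t_range.
ENDCASE.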
Thanks to Crispian for the analysis
Wednesday, 27 January 2010
Easily find the request ID you're looking for
Two easy ways to find Request ID's in a BW 7.0 system:
1) Use transaction code RSRQ and enter the request ID you're looking for
2) When 'monitoring' a request ID, you can press SHIFT-F5 or go to the menu bar and choose Request -> Other Request. In the pop-up which appears, enter the request ID you're looking for.
Friday, 22 January 2010
Adding a filter value within a DTP
It's been a while (in fact: it's been too long) since I last posted a blog entry. The reason for this is that I've been working as a team lead for the last couple of months, so I was more 'occupied' with leading a team than with inventing/discovering great BI 7.0 solutions...
But as I'm back in the saddle, I'm proudly presenting my first 2010 blog entry:
Within the DTP-extraction tab, you're able to set a filter to control the data you're going to extract:
The standard coding which will be displayed will be something like this:
read table l_t_range with key fieldname = ' '.
l_idx = sy-tabix.
if l_idx <> 0.
   modify l_t_range index l_idx.
else.
   append l_t_range.
endif.
Please be aware that, if l_idx = 0 and thus no entry is found in table l_t_range, you also have to fill in the fieldname.
E.g. usually, for filling a range, the following code is entered:
   l_t_range-sign = 'I'.
   l_t_range-option = 'EQ'.
   l_t_range-low = 'something'.
This coding is sufficient when the read statement (read table l_t_range with key fieldname = 'whatever') finds an entry in table l_t_range, and thus l_idx <> 0.
When no entry is found (and thus l_idx = 0), you also have to add the following line to the code:
   l_t_range-fieldname = 'FIELDNAME'.
Otherwise filling the range has no added value, as the system does not know for which field the range should be filled.
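Putting it all together, a sketch of a complete filter routine; the field name CALDAY and the sy-datum value are illustrative assumptions:
* Look for an existing range entry for the field.
read table l_t_range with key fieldname = 'CALDAY'.
l_idx = sy-tabix.

* Fill the range. fieldname must be set explicitly, because the
* append branch below creates a brand-new entry.
l_t_range-fieldname = 'CALDAY'.
l_t_range-sign      = 'I'.
l_t_range-option    = 'EQ'.
l_t_range-low       = sy-datum.

if l_idx <> 0.
  modify l_t_range index l_idx.   " entry found: overwrite it
else.
  append l_t_range.               " no entry yet: append a new one
endif.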