Channel: SCN : All Content - Data Warehousing

ABAP program - end of process chain


Hi,

 

I have a requirement: at the end of each process chain, an ABAP program must trigger an email.

The process chains follow the naming format Z<Module name>; for example, the SD process chain is ZSD, the PS process chain is ZPS, and so on.

We read a table based on the module to identify the users to whom the email should be sent.

However, I am unable to identify the name of the process chain at runtime from my ABAP program.

Is there any way to identify the name of the process chain at runtime from ABAP program?

Thanks.
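
One approach that is often suggested (not from the original post, and only sketched here): since every chain calls the program through its own process variant, the chain name can be looked up in the chain definition table RSPCCHAIN via that variant, and the mail can then be sent with the standard BCS classes. The variant handling, the recipient table ZMAIL_RECIPIENTS and its fields are assumptions for illustration; verify the RSPCCHAIN field names in SE11.

REPORT zend_of_chain_mail.

* Assumption: each chain ZSD, ZPS, ... runs this program with its own,
* uniquely named ABAP process variant, supplied here as a parameter.
PARAMETERS p_vari TYPE rspcchain-variante.

DATA: lv_chain  TYPE rspcchain-chain_id,
      lv_module TYPE string,
      lt_mail   TYPE TABLE OF ad_smtpadr,
      lv_mail   TYPE ad_smtpadr,
      lo_send   TYPE REF TO cl_bcs,
      lo_doc    TYPE REF TO cl_document_bcs,
      lt_text   TYPE bcsy_text,
      ls_text   TYPE soli,
      lv_subj   TYPE so_obj_des.

START-OF-SELECTION.

* Find the active chain that contains this program variant as an ABAP step
  SELECT SINGLE chain_id FROM rspcchain INTO lv_chain
    WHERE objvers  = 'A'
      AND type     = 'ABAP'
      AND variante = p_vari.

* Derive the module from the Z<Module name> convention, e.g. ZSD -> SD
  lv_module = lv_chain+1(2).

* Hypothetical customer table that maps a module to e-mail recipients
  SELECT smtp_addr FROM zmail_recipients INTO TABLE lt_mail
    WHERE module = lv_module.

* Send the notification via the standard BCS classes
  TRY.
      lo_send = cl_bcs=>create_persistent( ).
      ls_text-line = 'The process chain has finished.'.
      APPEND ls_text TO lt_text.
      CONCATENATE 'Process chain' lv_chain 'finished'
        INTO lv_subj SEPARATED BY space.
      lo_doc = cl_document_bcs=>create_document( i_type    = 'RAW'
                                                 i_subject = lv_subj
                                                 i_text    = lt_text ).
      lo_send->set_document( lo_doc ).
      LOOP AT lt_mail INTO lv_mail.
        lo_send->add_recipient(
          cl_cam_address_bcs=>create_internet_address( lv_mail ) ).
      ENDLOOP.
      lo_send->send( ).
      COMMIT WORK.
    CATCH cx_bcs.
      MESSAGE 'E-mail could not be sent' TYPE 'I'.
  ENDTRY.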


Compare DataSource in R/3


Hi,

 

I need to compare a DataSource between the R/3 Development and Quality systems, but I do not have authorizations for RSA2, RSA6 (display) or RSA3 (extraction).

Is there any other way to compare this DataSource in R/3?

 

Regards

Raju M
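
If table display or a small custom report is still allowed, one workaround (a sketch, not from the original thread) is to read the DataSource definition directly from the extractor metadata tables ROOSOURCE (header) and ROOSFIELD (field list) in both systems and compare the two outputs. The field names below are quoted from memory and should be checked in SE11; the DataSource name is only an example.

REPORT zcompare_datasource.

PARAMETERS p_ds TYPE roosource-oltpsource DEFAULT '2LIS_11_VAHDR'.

DATA: ls_source TYPE roosource,
      lt_fields TYPE TABLE OF roosfield,
      ls_field  TYPE roosfield.

START-OF-SELECTION.

* Header of the active DataSource version
  SELECT SINGLE * FROM roosource INTO ls_source
    WHERE oltpsource = p_ds
      AND objvers    = 'A'.

* Field list of the active DataSource version
  SELECT * FROM roosfield INTO TABLE lt_fields
    WHERE oltpsource = p_ds
      AND objvers    = 'A'.

  WRITE: / 'DataSource:', ls_source-oltpsource,
         / 'Extractor :', ls_source-extractor,
         / 'Delta     :', ls_source-delta.
  LOOP AT lt_fields INTO ls_field.
    WRITE: / ls_field-field, ls_field-selection.
  ENDLOOP.

Running the report in Development and Quality and comparing the two lists (for example after downloading them to spreadsheets) gives a comparable view of the DataSource definition without RSA2/RSA6.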

SAP ECC Integration with Informatica Power Exchange - BCI Datasources


Hi Experts,

 

I am working with Informatica and SAP ECC. We are extracting data into Informatica from SAP ECC via Business Content DataSources.

We are using more than 30 DataSources, including master data (attributes, texts) and LO Cockpit extractors.

It turned out that during initialization or delta loads we cannot trigger the DataSources in parallel (say 3-4 at a time), because during staging in Informatica the data from all parallel DataSources arrives at once, and the process workflow then does not know to which data target each set of data should go.

There is a particular mechanism in Informatica of data load:

1 Listener -> 1 Staging Area -> many process workflows (which direct data to the data targets)

 

But the problem is that if there is only one staging area, we cannot go for parallel data loads from SAP ECC.

I am still not sure about the single staging area; my Informatica expert told me this. I have worked in SAP BW, where we have a lot more flexibility.

I find it hard to believe that extraction from ECC into Informatica has to be sequential with no parallelism, because in the future we will have hundreds of DataSources, and loading everything in sequence would take days to complete a single load.

 

Is there any way to load data in parallel into Informatica from SAP ECC?


Any suggestions, please!

 

Regards

Amit

Transactional System Vs Data Warehouse


Hello Everyone,


Please find below an informative document comparing a transactional system with a data warehouse.

 

Transactional System Versus Data Warehouse or Data Mart

Transactional systems are designed to collect business data efficiently. Depending on the size of the application, the number of transactions stored in the database could range from hundreds or thousands to millions or more, which means such systems must write that information into the database efficiently, within a fraction of a second.

 

Have you ever compared prices for the same flight from different vendors, only to find that by the time you decided to buy, the ticket that suited you best was gone? Thousands of people access the same information at the same time. This is an example of a transactional system in which all transactions are stored as-is. When you buy a ticket, that event is one transaction; when you change your ticket, that is a separate event; and when you cancel it, it is treated as yet another event.

 

On the other hand, a data warehouse is designed for efficient retrieval of many records in various views and aggregates. The goal is fast retrieval of data from the database rather than fast writes into it. When designing a data warehouse or a data mart, the data is collapsed into fewer tables, so most of the information required for reporting can be found in one place. In addition, the design makes it easy to answer questions such as: How many sales were made by Region A? Who is the top salesperson this year? Which is the best-selling product? How is the newly launched product trending over time? A database design that supports this kind of reporting is called a star schema and is built using fact and dimension tables.
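
As a small illustration (hypothetical table and field names, written in ABAP Open SQL since that is the register of this collection), answering "How many sales were made by Region A?" against a star schema takes a single join from the fact table to the region dimension plus an aggregation:

DATA lv_units TYPE i.

* ZSALES_FACT and ZDIM_REGION are illustrative fact and dimension tables
SELECT SUM( f~sales_qty )
  FROM zsales_fact AS f
  INNER JOIN zdim_region AS r ON f~region_id = r~region_id
  INTO lv_units
  WHERE r~region_name = 'Region A'.

WRITE: / 'Units sold in Region A:', lv_units.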


Although a data warehouse refers to an enterprise-wide BI solution, a data mart focuses on certain functional areas. Both data warehouses and data marts use a star schema for their database design. It is common for organizations to implement their BI solution in phases, starting with data marts that address the needs of a department or specific business area and then adding other departments or business areas. The design supports an integrated method of reporting using each of the data marts.


Requirements for building a data warehouse or data mart


The data mart is focused on a specific functional area or department. Depending on your business environment, you may need to populate your data mart with data from only one data source or from multiple data sources. First, you must understand the business questions that need to be answered; based on that, you then need to find out where the data required for reporting is stored, that is, your source systems. After you identify all your source systems, the data needs to be modeled using the star schema database design concept. The data is modeled logically first and then converted into a physical database model by creating the tables defined in the data model. Typically, the data modeling is done by a data modeler, whereas creation of the database and tables is handled by the database administrator. It is not uncommon for the database administrator to wear both hats; however, this is not always the case.


After the data has been modeled, the data from the source system is moved into the data mart via programs that extract, transform, and load (ETL) the data. This task is usually done by an ETL developer, and there are a number of off-the-shelf ETL tools available to accomplish it.


That covers the comparison of transactional system and data warehouse functionality.

 

Thanks & regards,

Vishakha Nigam

NLS - Setting done in table not reflecting in Cube


Hi Expert,

 

I updated the backend table (RSDCUBE) and set the read mode for near-line storage to 'X' (Near-Line Access Switched On), but the change is not reflected in the MultiProvider settings. Please let me know if there is a different table that needs to be updated for this.

 

Thanks,

Anupama

Maintain the link node in a hierarchy


Hello,

 

I have loaded two hierarchies, H1 and H2, from the ERP system and maintained a hierarchy Z in BW that consists of H1 and H2. The node names are exactly the same as in the two loaded hierarchies. Now I want to maintain the "link node" flag for the two nodes H1 and H2.

 

When I maintain the hierarchy, this field is display-only (and it is blank).

 

How can I change this field to 'X'?

 

Thanks!

Issue with planning Sequence



Hi Team,

 

We are facing an issue when we run the planning sequence.

Error: Master can not read value from subprocess.

Version: SAP BI 7.3, Support Package 10.

 

Could you please help me out if anyone has an idea?

 

Regards,

 

Satya.

Locked Dimensions in InfoCube Editing


Hi there,

 

I have a situation editing an InfoCube where all of the dimensions and InfoObjects are locked (they have turned blue). I have done the following:

 

  • Deleted all data including dimensions; I have performed this action around ten times
  • Deleted all aggregates
  • Deleted indexes
  • Made sure no BWA indexes exist
  • Checked all dimension and fact tables in SE16 and no data exists, except for line-item dimensions
  • Executed the "Unlock InfoObjects" task in the Extras menu in InfoCube edit mode

 

Strangely, I am able to switch off the option of a dimension being a line-item dimension and to insert a new InfoObject into said dimension. But that is all I can do, and it doesn't really help. I can't remove any existing InfoObjects, and thus I can't remove any dimensions, which is what I need to do.

 

Current release is 7.31

 

The only other thing to add is that in the past this InfoCube had two remodeling rules executed which a) replaced a key figure and b) inserted a new key figure, but I don't see why this should have an effect.

 

Does anyone have any suggestions?

 

Cheers

 

Martyn


Star Schema Vs Snowflake Schema Vs Fact constellation Schema


Data Warehousing Schemas

  1. Star Schema
  2. Snowflake Schema
  3. Fact Constellation

 

Star Schema

  • A single large central fact table and one table for each dimension
  • Every fact points to one tuple in each of the dimensions and has additional attributes
  • Does not capture hierarchies directly.

[Figure: Star schema]

Snowflake Schema

  • A variant of the star schema model.
  • A single, large, central fact table and one or more tables for each dimension.
  • Dimension tables are normalized, i.e. the dimension data is split into additional tables (see the query sketch after the figure below).

[Figure: Snowflake schema]
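
To make the difference concrete, here is the same question ("units sold for the Dairy category") against a star design and against a snowflake design; the tables and fields are hypothetical. In the star schema the product dimension already carries the category, while the snowflake normalizes it into its own table and therefore needs one more join:

DATA lv_qty TYPE i.

* Star schema: category name is held directly in the product dimension
SELECT SUM( f~sales_qty )
  FROM zsales_fact AS f
  INNER JOIN zdim_product AS p ON f~product_id = p~product_id
  INTO lv_qty
  WHERE p~category_name = 'Dairy'.

* Snowflake schema: the category is split into a separate, normalized table
SELECT SUM( f~sales_qty )
  FROM zsales_fact AS f
  INNER JOIN zdim_product  AS p ON f~product_id  = p~product_id
  INNER JOIN zdim_category AS c ON p~category_id = c~category_id
  INTO lv_qty
  WHERE c~category_name = 'Dairy'.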

 

       Fact Constellation:

  • Multiple fact tables share dimension tables.
  • This schema is viewed as a collection of stars and is therefore also called a galaxy schema or fact constellation.
  • Sophisticated applications require such a schema.

 

[Figure: Fact constellation schema]

 

Case Study:

 

  • Afco Foods & Beverages is a new company which produces dairy, bread and meat products, with its production unit located at Baroda.
  • Their products are sold in the North, North West and Western regions of India.
  • They have sales units at Mumbai, Pune, Ahmadabad, Delhi and Baroda.
  • The President of the company wants sales information.

 

Sales Information

 

  • Report: The number of units sold: 113.
  • Report: The number of units sold over time.

[Figure: Number of units sold over time]

Building Data Warehouse

 

  • Data Selection
  • Data Preprocessing
    1. Fill missing values
    2. Remove inconsistency
  • Data Transformation & Integration
  • Data Loading

Data in the warehouse is stored in the form of fact tables and dimension tables.

 

[Figure: Fact and dimension tables]


Thanks & Regards,

Vishakha Nigam

Strange behavior in Decision step for WORKING_DAY in process chain


Hi all,

 

We are on BW 7.31. I have a process chain with a sub-chain where a decision step decides whether it is the second working day of the month or not. If not, the process chain stops and goes to the next sub-chain. If it is the second working day, it starts a new sub-chain in which the financial data for the last month is loaded.

 

The formula for the decision is as follows:

WORKINGDAY_MONTH( SYST-DATUM, 'NL', '' ) = 2

 

The strange thing is that in the test systems the decision works fine, but in production it shows strange behavior. I have sketched three situations:

 

Situation 1: the 2nd day of the month is the second working day, so the sub-chain starts after the decision, and of course not on the 1st and 3rd day.

Cal. day      1   2   3   4   5   6   7   8   9   10
Weekday       Mo  Tu  We  Th  Fr  Sa  Su  Mo  Tu  We
Working day   1   2   3   4   5   -   -   6   7   8

 

Situation 2: the second working day falls on a Monday. What happened in this case is that the sub-chain for financial data also ran during the weekend AND on Monday, as if the weekend days were also treated as the 2nd working day. This caused problems because of double (triple) requests.

Cal. day      27  28  29  30  1   2   3   4   5   6
Weekday       Mo  Tu  We  Th  Fr  Sa  Su  Mo  Tu  We
Working day   21  22  23  24  1   -   -   2   3   4

 

So we changed the formula and put '-' in the last parameter (after 'NL') so that the last working day is kept in memory in case the date is not a working day. This was found in the F1 help pop-up for WORKINGDAY_MONTH in the formula edit screen. The new formula is:

WORKINGDAY_MONTH( SYST-DATUM, 'NL', '-' ) = 2

 

Situation 3: with the new formula there was a new problem. With working day 2 on a Friday, the system keeps the value during the weekend, so again Saturday and Sunday were treated as a "sort of working day 2".

Cal. day      28  29  30  1   2   3   4   5   6   7
Weekday       Mo  Tu  We  Th  Fr  Sa  Su  Mo  Tu  We
Working day   22  23  24  1   2   -   -   3   4   5

 

What makes it even stranger is that it only happens in production. I have checked the factory calendar tables (TFACD) and they are the same in all systems. Is there another setting that determines the working days, or how they are kept in memory?

 

I hope someone has seen this before or knows a simple setting or solution.

 

Thanks!


Best regards,


Daan Boon

The Netherlands
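
One way to rule out the factory calendar itself (a sketch, not part of the original post; DATE_CONVERT_TO_FACTORYDATE is the standard factory calendar function module, but its parameter names are quoted from memory and should be verified) is to run a small report in each system that prints, for every day of the month, whether calendar 'NL' treats it as a working day:

REPORT zcheck_workingdays.

PARAMETERS: p_fcal   TYPE tfacd-ident DEFAULT 'NL',
            p_year   TYPE gjahr       DEFAULT '2014',
            p_mon(2) TYPE n           DEFAULT '01'.

DATA: lv_day(2)  TYPE n,
      lv_char(8) TYPE c,
      lv_date    TYPE d,
      lv_factday TYPE scal-facdate,
      lv_ind     TYPE scal-indicator.

START-OF-SELECTION.

  DO 31 TIMES.
    lv_day = sy-index.
    CONCATENATE p_year p_mon lv_day INTO lv_char.
    lv_date = lv_char.

    CALL FUNCTION 'DATE_CONVERT_TO_FACTORYDATE'
      EXPORTING
        date                       = lv_date
        factory_calendar_id        = p_fcal
        correct_option             = '+'
      IMPORTING
        factorydate                = lv_factday
        workingday_indicator       = lv_ind
      EXCEPTIONS
        date_invalid               = 1
        factory_calendar_not_found = 2
        OTHERS                     = 3.

    IF sy-subrc = 0.
*     The indicator is initial when the date itself is a working day
      WRITE: / lv_date, 'working day indicator:', lv_ind,
               'factory day no.:', lv_factday.
    ENDIF.
  ENDDO.

If the weekend days come out differently in production than in the test systems, the calendar (or its buffered copy) differs; otherwise the problem is more likely in how the decision step evaluates WORKINGDAY_MONTH.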

Short dump when trying to change DTP status to 'green'


Hi,

 

For a DTP delta run that brings in 0 records, the technical status turns green after 5-6 seconds, but the overall status remains yellow.

If we manually set it to 'red', I am able to do so, delete the request and run a new one, but again it ends with the overall status yellow.

 

If I try to set it to 'green' manually, the following short dump occurs:

Category               ABAP Programming Error

Runtime Errors         MESSAGE_TYPE_X

ABAP Program           SAPLRSMDATASTATE

Application Component  BW-WHM-DST

 

I have already scanned all SAP notes relevant for this short dump, but none of them relates to this scenario.

 

Please let me know if anyone has faced this issue and how it was solved.

 

Version: BW 7.02 SP14

 

Regards,

Rathy


Issue in Transport Request


Hi,

I am trying to understand how transports work. I have an issue while moving some SAP BW objects.

 

Transport X contains a process chain, transformations, some newly created variants, and routines. It has been moved to Q.

 

Now the problem is that I do not need to move the process chain to Production; that means I need to move only the transformations, the newly created variants, and the routines.

 

How do I do that?

If I create another transport Y containing just the objects apart from the process chain, take that TR to Q and then to P, and stop TR X in Quality, will it work?

Inventory aging report


Hi,

 

I need help calculating the aging buckets below (in days) for virtual characteristics using a BAdI, for an inventory aging report in SAP BI.

 

Buckets: 0-15, 16-30, 31-60, 61-90, 91-120, 121-365, 365-730 and >731

 

 

Thanks in advance.

 

Regards,

Madhu.
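
The bucket derivation itself, independent of the virtual characteristic plumbing (usually done in the BAdI RSR_OLAP_BADI), is just a range check on the age in days. A sketch follows; the routine name is made up, and the overlapping 121-365 / 365-730 boundaries from the post are read here as 121-365 and 366-730:

* Hypothetical helper: map an age in days (e.g. sy-datum minus the
* goods receipt date) to the bucket text used as the characteristic value.
FORM get_aging_bucket USING    iv_days   TYPE i
                      CHANGING cv_bucket TYPE char10.
  IF iv_days <= 15.
    cv_bucket = '0-15'.
  ELSEIF iv_days <= 30.
    cv_bucket = '16-30'.
  ELSEIF iv_days <= 60.
    cv_bucket = '31-60'.
  ELSEIF iv_days <= 90.
    cv_bucket = '61-90'.
  ELSEIF iv_days <= 120.
    cv_bucket = '91-120'.
  ELSEIF iv_days <= 365.
    cv_bucket = '121-365'.
  ELSEIF iv_days <= 730.
    cv_bucket = '366-730'.
  ELSE.
    cv_bucket = '>730'.
  ENDIF.
ENDFORM.

In the BAdI implementation the age would be computed per row (for example sy-datum minus the stock or receipt date) and the returned text moved into the virtual characteristic.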

0UNIT - SID missing, but T006 filled


Hey guys,

 

The InfoObject 0UNIT is filled from T006 (source system).

Using the transfer of global settings, I got the unit 'GB' into BW.

 

But the entry is not in the SID table of 0UNIT.

I also tried the function modules RSDMD_INITIAL_LINE_INSERT and CONVERSION_EXIT_CUNIT_INPUT, but without success.

On executing the FM RSDMD_INITIAL_LINE_INSERT nothing happens.

Is that OK?

 

Can anybody explain to me how the values are transferred to the SID table?

I do not know the technical background.

 

Thanks,

Barbara
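
For background, the external value 'GB' first has to be converted to the internal unit key with the conversion exit before it can be compared with the SID table of 0UNIT, which follows the standard /BI0/S<InfoObject> naming (/BI0/SUNIT). The following is only a check, not a fix, and the S-table field names are assumed from the usual layout:

DATA: lv_ext TYPE c LENGTH 3 VALUE 'GB',
      lv_int TYPE msehi,
      lv_sid TYPE rssid.

* Convert the external unit 'GB' into the internal unit key from T006
CALL FUNCTION 'CONVERSION_EXIT_CUNIT_INPUT'
  EXPORTING
    input          = lv_ext
  IMPORTING
    output         = lv_int
  EXCEPTIONS
    unit_not_found = 1
    OTHERS         = 2.

IF sy-subrc <> 0.
  WRITE: / 'Unit GB is not known in T006 of the BW system'.
ELSE.
* Check whether the internal value already has an SID in the S table of 0UNIT
  SELECT SINGLE sid FROM /bi0/sunit INTO lv_sid
    WHERE unit = lv_int.
  IF sy-subrc = 0.
    WRITE: / 'SID exists:', lv_sid.
  ELSE.
    WRITE: / 'No SID yet for this unit'.
  ENDIF.
ENDIF.

If the conversion itself fails, the unit is missing from T006 in BW and the transfer of global settings has to be repeated; if only the SID is missing, reloading or activating the unit master data normally creates it, since SIDs are generated when master data values are written, not by the conversion exit.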

Missing dimension in selective deletion options


Hello,

 

I urgently need to selectively delete records from some InfoCubes. Now I realize that for one InfoCube I can't see the characteristics of my time dimension at all.

 

This is totally weird. There is, of course, a time dimension with 0FISCYEAR, 0FISCPER, etc.

However, these characteristics are neither shown nor available for selective deletion. Again, the whole dimension is missing from the screen.

 

This happens for just one cube; in all other cubes I can see these time characteristics on the selective deletion screen.

 

Any ideas?


APD Question


How do you prevent users from corrupting the InfoObject data when using APD?

All about Data Transfer Process (DTP) - SAP BW 7


Data Transfer Process (DTP)


A DTP determines the process for transferring data between two persistent or non-persistent objects within BI.

As of SAP NetWeaver 7.0, an InfoPackage loads data from a source system only as far as the PSA; it is the DTP that handles the further loading of data from there.

 

 

 

Use

  • Loading data from the PSA to InfoProvider(s).
  • Transferring data from one InfoProvider to another within BI.
  • Distributing data to a target outside the BI system, e.g. open hub destinations.

 

In the process of transferring data within BI, the transformations define the mapping and the logic for updating data to the data targets, whereas the extraction mode and update mode are determined by the DTP.

 

NOTE: A DTP is used to load data within the BI system only, except in VirtualProvider scenarios, where a DTP can be used for a direct data fetch from the source system at runtime.

 

 

Key Benefits of using a DTP over conventional IP loading

  1. A DTP follows a one-to-one mechanism between a source and a target, i.e. one DTP feeds data to only one data target, whereas an InfoPackage loads data to all data targets at once. This is one of the major advantages over the InfoPackage method and enables most of the other benefits.
  2. Isolation of data loading from the source into the BI system (PSA) from loading within the BI system. This allows data loads to InfoProviders to be scheduled at any time after the data has been loaded from the source.
  3. Better error handling through the temporary storage area, semantic keys and the error stack.

 

 

Extraction

There are two types of Extraction modes for a DTP – Full and Delta.

 

 

 

Full:

 

Extraction mode Full works the same way as in an InfoPackage.

It selects all the data available in the source based on the Filter conditions mentioned in the DTP.

When the source of data is any one from the below InfoProviders, only FULL Extraction Mode is available.

  • InfoObjects
  • InfoSets
  • DataStore Objects for Direct Update

 

Delta is not possible when the source is any one of the above.


 

Delta:

                    
Unlike an InfoPackage, a delta transfer using a DTP does not require an explicit initialization. When a DTP is executed in extraction mode Delta for the first time, all existing requests up to that point are retrieved from the source and the delta is initialized automatically.

 

The following three options are available for a DTP with extraction mode Delta:

  • Only Get Delta Once.
  • Get All New Data Request By Request.
  • Retrieve Until No More New Data.

 

 

     I      Only get delta once:

If this indicator is set, a snapshot scenario is built: the data available in the target is an exact replica of the source data.

Scenario:

Let us consider a scenario in which data is transferred from a flat file to an InfoCube, and the target must contain only the data from the latest flat-file load. Each time a new request is loaded, the previous request has to be deleted from the target; for every new load, any previous request loaded with the same selection criteria should be removed from the InfoCube automatically. This is necessary whenever the source delivers only the last status of the key figures, i.e. a snapshot of the source data.

Solution – Only Get Delta Once

A DTP with a full load would satisfy the requirement. However, a full DTP is not recommended here, because it loads all requests from the PSA regardless of whether they were loaded previously or not. To avoid duplicated data from full loads, we would have to schedule a PSA deletion every time before the full DTP is triggered again.

 

'Only Get Delta Once' does this job in a much more efficient way, as it loads only the latest request (delta) from the PSA to the data target.

      1. Delete the previous Request from the data target.
      2. Load data up to PSA using a Full InfoPackage.
      3. Execute DTP in Extraction Mode: Delta with ‘Only Get Delta Once’ checked.

 

The above 3 steps can be incorporated in a Process Chain which avoids any manual intervention.

 

 

     II     Get all new data request by request:

If you set this indicator in combination with ‘Retrieve Until No More New Data’, a DTP gets data from one request in the source. When it completes processing, the DTP checks whether the source contains any further new requests. If the source contains more requests, a new DTP request is automatically generated and processed.

 

NOTE: If ‘Retrieve Until No More New Data’ is unchecked, the above option automatically changes to ‘Get One Request Only’. This would in turn get only one request from the source.

Also, once the DTP is activated, the option 'Retrieve Until No More New Data' no longer appears in the DTP maintenance.

 

 

 

Package Size

The number of Data records contained in one individual Data package is determined here.

Default value is 50,000.

 
 

 

Filter

  
The selection criteria for fetching the data from the source are determined or restricted by the filter.

 

We have following options to restrict a value / range of values:

 

   Multiple selections

 

   OLAP variable

 

   ABAP Routine

 

A check mark to the right of the Filter button indicates that filter selections exist for the DTP.

 


 

 

Semantic Groups

Choose Semantic Groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the same key are combined in a single data package.

This setting is only relevant for DataStore objects with data fields that are overwritten. This setting also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected.

 

A check mark to the right of the 'Semantic Groups' button indicates that semantic keys exist for the DTP.

 
  

 

Update

[Screenshot: Update tab settings]

 

 

Error Handling

 

  • Deactivated:

If an error occurs, the error is reported at the package level and not at the data record level.

The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.

This results in faster processing.

 

  • No Update, No Reporting:

If errors occur, the system terminates the update of the entire data package. The request is not released for reporting. The incorrect record is highlighted so that the error can be assigned to the data record.

The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.

 

  • Valid Records Update, No Reporting (Request Red):

This option allows you to update valid data. This data is only released for reporting after the administrator checks the incorrect records that are not updated and manually releases the request (by a QM action, that is, setting the overall status on the Status tab page in the monitor).

The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.

 

  • Valid Records Update, Reporting Possible (Request Green):

Valid records can be reported immediately. Automatic follow-up actions, such as adjusting the aggregates, are also carried out.

The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.

 

 

 

Error DTP

Erroneous records in a DTP load are written to a stack called the error stack.

The error stack is a request-based table (a PSA table) into which erroneous data records from a data transfer process (DTP) are written. It is based on the source of the DTP (PSA, DSO or InfoCube); that is, records from the source are written to the error stack.

In order to update the data to the data target, the records in the error stack have to be corrected and the error DTP run manually.

 

 

Execute

[Screenshot: Execute tab]

 

 

 

Processing Mode

 

Serial Extraction, Immediate Parallel Processing:

A request is processed in a background process when a DTP is started in a process chain or manually.

 

 
 

 

Serial in dialog process (for debugging):

A request is processed in a dialog process when it is started in debug mode from DTP maintenance.
This mode is ideal for simulating the DTP execution in debugging mode. When it is selected, you can activate or deactivate session breakpoints at the various stages, such as extraction, data filtering, error handling, transformation and data target update.

You cannot start requests for real-time data acquisition in debug mode.

 

Debugging Tip:

When you want to debug a DTP, you cannot set a session breakpoint in the editor where you write the ABAP code (e.g. the DTP filter). You need to set session breakpoints in the generated program, as shown below:

 

[Screenshot: Session breakpoint in the generated DTP program]

 

 

No data transfer; delta status in source: fetched:

This processing mode is available only when the DTP is operated in delta mode. It is similar to a delta initialization without data transfer in an InfoPackage.

In this mode the DTP executes directly in dialog. The generated request marks the data found in the source as fetched, but does not actually load any data to the target.

This mode can be chosen even if the data has already been transferred previously using the DTP.

 
  

 

Delta DTP on a DSO

There are special data transfer options when the data is sourced from a DSO to another data target.

 

[Screenshot: Extraction options for a DSO source]

 

  • Active Table (with Archive)

       The data is read from the DSO active table and from the archived data.

 

  • Active Table (Without Archive)
    The data is only read from the active table of a DSO. If there is data in the archive or in near-line storage at the time of extraction, this data is not extracted.

 

  • Archive (Full Extraction Only)
    The data is only read from the archive data store. Data is not extracted from the active table.

 

  • Change Log
    The data is read from the change log and not the active table of the DSO.

aggregates vs compression


Hi,

After loading the data into the cube, I perform a roll-up (aggregates are maintained) and I have checked the option "compress after roll-up"; this ran successfully. Two tick marks appear (compression status of the aggregate, roll-up status).

But the data has not moved from the F table to the E table. Why? Should I again go to the Compress tab, enter the request, and release it?

delete operation for transport request


Hi expert,

 

 

(1) I want to delete the whole request BI7K900013, but objects are locked. I can delete the locked objects one by one, but how can I delete the whole request at once, in a single operation?

 

(2) When I try to delete an object, it shows that the object is locked, but we can continue and delete it anyway. What causes objects to be locked in a request?

 

(3) I have the following image for requests; does it mean the release process is still going on? How do I delete those unfinished requests?

 

When I try to delete it, it shows the following:

 

 

(4) In request BI7K900013 I have some released tasks. When I try to delete such a task, it fails with the following message. Please tell me how I can delete it.

 

 

(5) In my request BI7K900013 there are some tasks belonging to other people. When I try to delete one of them, it fails with the following message. Please tell me how I can delete it.

 


 

Many Thanks.

Error transporting a HANA-optimized semantically partitioned DSO


Hi expert,

We have BW on HANA. I have created a HANA-optimized, semantically partitioned DSO. It works fine; I have tested it with a BEx query. But when I tried to transport it and released the transport in Development, I got the following error. I have repeatedly reactivated the DSO, but the error persists.

 

Object LPOA ZT_OPA: Active version differs from modified version

Message no. RSO667

 

Thanks for your help!
