Data Applications → Incorta Data Delivery of Oracle EBS for Azure Synapse

About Incorta Data Delivery of Oracle EBS for Azure Synapse

Incorta Data Delivery streamlines the integration between Oracle E-Business Suite (EBS) and Microsoft Azure Synapse Analytics. Incorta Data Delivery accelerates data mart deployment, and automates data model design and source schema mapping with the Incorta Oracle EBS data application. The Incorta Oracle EBS data application provides a new, faster, more effective option for analyzing and understanding your data, with no data modeling or extract, transform, load (ETL) required.

Following is the high-level Incorta Data Delivery process:

  • Connect Incorta to your data source with a data application, and load pre-existing physical schema fact and dimension tables.
  • The physical schema tables are replicated to Parquet files that reside on Azure Data Lake Storage (ADLS) Gen2.
  • Load Synapse data marts from the Parquet files.
  • Visualize the data with Power BI.

Note
  • Incorta offers a free 30-day Data Delivery trial, after which Incorta will assist you with licensing options.
  • For existing on-premises customers, refer to Configure Incorta Data Delivery to enable Data Delivery for your Incorta instance.

Following are the Oracle EBS modules and business flows for which Incorta Data Delivery is available:

Financial Modules

  • General Ledger: Helps you analyze core financial statements and the underlying transactions

Supply Chain Planning Modules

  • Order Management: Helps you automatically identify risks to open orders and seamlessly unite data from various business functions, so you can optimize order fulfillment and maximize customer satisfaction

Business Flows

  • Procure to Pay: Helps you analyze the end-to-end buying process, from initial request and purchase through to invoicing and payment for goods and services received

Sign up for Incorta Data Delivery

You can sign up for a free trial of Incorta Data Delivery from the Azure Synapse Analytics Studio. Once you complete the signup process, you will receive an email with your Incorta Data Delivery credentials.

Configure Incorta Data Delivery

To configure Incorta Data Delivery, the following steps are required:

Configure the Oracle EBS Data Application

Incorta Data Delivery is pre-configured with sample Oracle EBS data, which will allow you to preview the Oracle EBS to Synapse data integration process before you connect to and configure the data application for your Oracle EBS instance. If you would like to use your own Oracle EBS data, follow the steps in the Configure the Oracle EBS data application section of Data applications → Install and Configure Oracle EBS for Incorta On-Premises.

External Data Sources

Here are the predefined External Data Sources:

| External Data Source | Type | Description |
| --- | --- | --- |
| EBS | oracle | This external data source connects to your Oracle EBS database. Refer to Configure the Oracle EBS data application for details on how to update the properties for your instance. |
| Incorta | mysql | This external data source is for the Incorta metadata. |
| Postgres | postgres | This external data source is for Incorta over Incorta. |
| SparkSql | postgres | This external data source is for materialized views. |
Important

Do not delete the External Data Sources, as they are required.

Data Files

Here are the pre-defined Local Data Files used in the COMMON physical schemas:

  • Date_US.csv
  • Month_US.csv
  • Period.csv
  • Quarter.csv
Important

Do not delete the Local Data Files, as they are required.

Global Variables

Here are the pre-defined global variables:

| Global Variable | Description |
| --- | --- |
| flex_value_set_id | This global variable is used in the Account_Hierarchy and CostCenter_Hierarchy tables in the EBS_FIN_COMMON physical schema. Refer to Configure the Oracle EBS data application for details on how to update the value of this global variable for your Oracle EBS instance. |
| Default_InitialExtractDate | Optionally use this global variable to limit rows for large transaction tables during a full load. For example: last_update_date > $$Default_InitialExtractDate. Refer to Configure the Oracle EBS data application for more details. |
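
As an illustration only (the table and column here are examples, not part of the data application), a large transaction table's extraction query might reference the global variable as follows; Incorta substitutes the variable's value for $$Default_InitialExtractDate at load time:

-- Hypothetical extraction query for a large EBS transaction table
SELECT *
FROM GL.GL_JE_LINES
WHERE LAST_UPDATE_DATE > $$Default_InitialExtractDate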

Security Groups

Here are the pre-defined security groups:

| Security Group | Description |
| --- | --- |
| Incorta_Internal_User | This is the Incorta internal user that corresponds to the intelligent_ingest_for_azure_synapse@incorta.com email address. You cannot change this user's password because it is a system user. Do not delete this user; if you do, contact Incorta Support. |
| Super User | This is the Administrator. You can change this password using the Profile Manager. |

EBS Common Physical Schemas

| Physical Schema | Description |
| --- | --- |
| EBS_CAL_COMMON | This physical schema contains the Oracle EBS calendar tables |
| EBS_COMMON_DIM_ALL | This physical schema contains the materialized views copied to parquet that represent Oracle EBS dimensions used across modules and business flows, such as calendars, accounts, cost centers, operating units, and more |
| EBS_FIN_COMMON | This physical schema contains the Oracle EBS finance tables |
| EBS_FND_COMMON | This physical schema contains the Oracle EBS foundation, or application object library, tables |
| EBS_HR_COMMON | This physical schema contains the Oracle EBS human resources tables |
| EBS_ITEM_COMMON | This physical schema contains the Oracle EBS Item Master tables |
| EBS_PARTY_COMMON | This physical schema contains the Oracle EBS party, or business entity, tables |
Note

The EBS_XLA and EBS_XLA_MV physical schemas are included with Incorta Data Delivery, but are not used.

EBS_COMMON_DIM_ALL Materialized Views

Here are the EBS_COMMON_DIM_ALL materialized views copied to Azure Synapse:

| Materialized View | Description | Primary/Foreign Keys |
| --- | --- | --- |
| DimAccountHierarchy | This materialized view contains the dimensions for the account hierarchy, or parent/child relationships | Account |
| DimAccounts | This materialized view contains the dimensions for the accounts | CODE_COMBINATION_ID |
| DimBalancingSegment | This materialized view contains the dimensions for the balancing segment, or segment used to ensure that all journals balance for each balancing segment value | CODE_COMBINATION_ID |
| DimCodeCombinations | This materialized view contains the dimensions for the code combinations, or the segment pairings for each accounting key flexfield (KFF) | CODE_COMBINATION_ID |
| DimCostCenter | This materialized view contains dimensions for the cost centers | CODE_COMBINATION_ID |
| DimCustomer | This materialized view contains the customer dimensions | SITE_USE_ID |
| DimFiscalCalendar | This materialized view contains the fiscal calendar dimensions | GL_Date |
| DimGLPeriods | This materialized view contains the general ledger period dimensions | PERIOD_SET_NAME, PERIOD_NAME |
| DimGregorianCalendar | This materialized view contains the Gregorian calendar dimensions | Date |
| DimItem | This materialized view contains the inventory item dimensions | INVENTORY_ITEM_ID, ORGANIZATION_ID |
| DimLedgers | This materialized view contains the ledger dimensions | LEDGER_ID |
| DimOperatingUnit | This materialized view contains the operating unit dimensions | ORGANIZATION_ID |
| DimPerson | This materialized view contains the details for the person entity | PERSON_ID |
| DimSuppliers | This materialized view contains the supplier dimensions | VENDOR_ID |
| DimSupplierSite | This materialized view contains the supplier site dimensions | VENDOR_SITE_ID |

CommonDimensionDM Business Schema

The CommonDimensionDM Business Schema is used to create the EBS_COMMON_DIM_ALL materialized views.

Order Management Physical Schemas

| Physical Schema | Description |
| --- | --- |
| EBS_AR | This physical schema contains the Oracle EBS accounts receivable tables |
| EBS_AR_FACTS_ALL | This physical schema contains the accounts receivable fact tables copied to parquet |
| EBS_AR_MV | This physical schema contains the GL codes and invoice tax materialized views |
| EBS_AR_SNP | This physical schema contains the AR aging and write-off materialized views |
| EBS_OM_FACTS_ALL | This physical schema contains the order management fact tables copied to parquet |
| EBS_ONT | This physical schema contains the Oracle EBS order management tables |

EBS_AR_FACTS_ALL Materialized Views

Here are the EBS_AR_FACTS_ALL materialized views copied to Azure Synapse:

| Materialized View | Description | Primary/Foreign Keys |
| --- | --- | --- |
| FactARPaymentSchedules | This materialized view contains the AR payment schedules | PAYMENT_SCHEDULE_ID |
| FactCashReceipts | This materialized view contains the cash receipts | CASH_RECEIPT_HISTORY_ID |

ARCashReceiptsDM Business Schema

The ARCashReceiptsDM Business Schema is used to create the EBS_AR_FACTS_ALL materialized views.

EBS_OM_FACTS_ALL Materialized Views

Here are the EBS_OM_FACTS_ALL materialized views copied to Azure Synapse:

| Materialized View | Description | Primary/Foreign Keys |
| --- | --- | --- |
| FactInvoiceDetails | This materialized view contains the invoice detail facts | CUSTOMER_TRX_LINE_ID |
| FactSalesOrderHeader | This materialized view contains the sales order header facts | HEADER_ID |
| FactSalesOrderLine | This materialized view contains the sales order line facts | CUSTOMER_TRX_LINE_ID, LINE_ID |

OrderManagementDM Business Schema

The OrderManagementDM Business Schema is used to create the EBS_OM_FACTS_ALL materialized views.

General Ledger Physical Schema

| Physical Schema | Description |
| --- | --- |
| EBS_GL | This physical schema contains the Oracle EBS general ledger tables |
| EBS_GL_FACTS_ALL | This physical schema contains the general ledger fact and dimension tables copied to parquet |

EBS_GL_FACTS_ALL Materialized Views

Here are the EBS_GL_FACTS_ALL materialized views copied to Azure Synapse:

| Materialized View | Description | Primary/Foreign Keys |
| --- | --- | --- |
| DimJournalCategory | This materialized view contains the journal category dimensions | JE_CATEGORY_NAME |
| DimJournalSource | This materialized view contains the journal source dimensions | JE_SOURCE_NAME |
| FactGLBalances | This materialized view contains the general ledger balance facts | CODE_COMBINATION_ID, LEDGER_ID, CURRENCY_CODE, PERIOD_NAME |
| FactJournalDetails | This materialized view contains the journal detail facts | JE_HEADER_ID, JE_LINE_NUM |

GeneralLedgerDM Business Schema

The GeneralLedgerDM Business Schema is used to create the EBS_GL_FACTS_ALL materialized views.

Procure to Pay Physical Schemas

| Physical Schema | Description |
| --- | --- |
| EBS_AP | This physical schema contains the Oracle EBS accounts payable tables |
| EBS_AP_FACTS_ALL | This physical schema contains the accounts payable fact and dimension tables copied to parquet |
| EBS_INV | This physical schema contains the Oracle EBS inventory tables |
| EBS_PA | This physical schema contains the Oracle EBS projects tables |
| EBS_PO | This physical schema contains the Oracle EBS purchase order tables |
| EBS_PO_FACTS_ALL | This physical schema contains the purchase order fact and dimension tables copied to parquet |

EBS_AP_FACTS_ALL Materialized Views

Here are the EBS_AP_FACTS_ALL materialized views copied to Azure Synapse:

| Materialized View | Description | Primary/Foreign Keys |
| --- | --- | --- |
| DimProject | This materialized view contains the project dimensions | PROJECT_ID |
| DimInvoiceHeaders | This materialized view contains the invoice header dimensions | INVOICE_ID |
| FactInvoiceDistributions | This materialized view contains the invoice distribution facts | INVOICE_DISTRIBUTION_ID |
| FactInvoiceHolds | This materialized view contains the invoice hold facts | HOLD_ID |
| FactInvoicePayments | This materialized view contains the invoice payment facts | INVOICE_PAYMENT_ID |

PayablesDM Business Schema

The PayablesDM Business Schema is used to create the EBS_AP_FACTS_ALL materialized views.

EBS_PO_FACTS_ALL Materialized Views

Here are the EBS_PO_FACTS_ALL materialized views copied to Azure Synapse:

| Materialized View | Description | Primary/Foreign Keys |
| --- | --- | --- |
| DimBuyer | This materialized view contains the buyer dimensions | AGENT_ID |
| DimPurchaseOrderHeader | This materialized view contains the purchase order header dimensions | PO_HEADER_ID |
| FactPurchaseOrderDistribution | This materialized view contains the purchase order distribution facts | PO_DISTRIBUTION_ID |
| FactPurchaseOrderReceipts | This materialized view contains the purchase order receipt facts | PO_HEADER_ID, INVOICE_ID, PO_DISTRIBUTION_ID, ORG_ID, VENDOR_ID, TRANSACTION_ID |
| FactPurchaseOrderRequisition | This materialized view contains the purchase order requisition facts | DISTRIBUTION_ID |
| FactPurchaseOrderShipment | This materialized view contains the purchase order shipment facts | LINE_LOCATION_ID |

PurchaseOrderDM Business Schema

The PurchaseOrderDM Business Schema is used to create the EBS_PO_FACTS_ALL materialized views.

Load2Synapse Physical Schemas

Following are the pre-defined physical schemas that trigger the ingest of data into Azure Synapse:

  • Load2Synapse_EBS_AP_FACTS_ALL
  • Load2Synapse_EBS_AR_FACTS_ALL
  • Load2Synapse_EBS_COMMON_DIM_ALL
  • Load2Synapse_EBS_GL_FACTS_ALL
  • Load2Synapse_EBS_PO_FACTS_ALL
  • Load2Synapse_EBS_OM_FACTS_ALL

Configure Your Azure Synapse Analytics SQL Pool

Perform the following configuration steps in your Azure Synapse Analytics SQL Pool before you load data into Synapse from Incorta:

Note

It is assumed that you have already created a Synapse workspace and a dedicated SQL Pool in the Azure portal.

Create an Incorta login in the master database for your SQL pool

Here are the steps to create an Incorta login in the master database for your SQL pool. You must replace <PASSWORD> with a secure password.

  • In Synapse Analytics, for your workspace, select New → SQL script.
  • In the SQL editor, in the SQL Script tab, in Connect to, first select your dedicated SQL Pool and then, in Use database, select master.
  • In the SQL editor, run the following CREATE LOGIN command, replacing <PASSWORD> with a secure password for the new login.
CREATE LOGIN incorta WITH PASSWORD = '<PASSWORD>';

Create the Incorta user with grants for the SQL pool database

Here are the steps to create an Incorta user with grants for the selected SQL pool database:

  • In Synapse Analytics, in Develop, select + → SQL script.
  • In the SQL editor, in the SQL Script tab, in Connect to, first select your dedicated SQL Pool and then, in Use database, select your SQL pool database.
  • In the SQL editor canvas, run the following SQL commands:
CREATE USER incorta FOR LOGIN incorta;
EXEC sp_addrolemember 'db_ddladmin', 'incorta';
EXEC sp_addrolemember 'db_datawriter', 'incorta';
EXEC sp_addrolemember 'db_datareader', 'incorta';
GRANT ALTER ANY EXTERNAL FILE FORMAT to incorta;
GRANT ADMINISTER DATABASE BULK OPERATIONS to incorta;

Create an Azure Synapse Data Destination

When you create an Azure Synapse Data Destination, you can seamlessly ingest physical schema tables from Incorta into your Azure Synapse instance.

Here are the steps:

  • In the Navigation bar, select Data.
  • In the Action bar, select + New → Add Data Destination.
  • In the Choose a Data Destination dialog, in Schema Destination, select Synapse Analytics.
  • In the New Data Destination dialog, specify the Azure Synapse schema destination properties.
  • To test the connection, select Test Connection.
  • To save, select Ok.

For more information on how to manage Data Destinations, refer to Tools → Data Manager.

Set a Data Destination for a Physical Schema

Set a data destination for physical schemas that will be sent to Synapse. Here are the steps:

  • In the Navigation bar, select Schema.
  • In the Action bar, select Settings (gear icon) → Set Data Destination.
  • In the Set Schema Destination for <SCHEMA_NAME> dialog,
    • Select a Schema Destination or select + New Schema Data Destination to create a new one.
    • Select Done. A Synapse Analytics icon will appear to the left of the schema name to indicate a data destination has been set for the schema.

Set the data destination for the physical schemas you need to send to Synapse, which include:

  • EBS_AP_FACTS_ALL
  • EBS_AR_FACTS_ALL
  • EBS_COMMON_DIM_ALL
  • EBS_GL_FACTS_ALL
  • EBS_OM_FACTS_ALL
  • EBS_PO_FACTS_ALL

You will need to create a data destination for any additional physical schemas you would like to ingest into Azure Synapse.

Create Materialized Views to Trigger Data Ingest

For each physical schema that will trigger data ingest into Azure Synapse, create a materialized view. Refer to Load2Synapse Physical Schemas for a list of Load2Synapse physical schemas that have already been created. Each materialized view should contain the following lines of code for full and incremental load scripts:

Script (full load script)

from incorta_synapse import load_to_synapse
df = load_to_synapse("<PHYSICAL_SCHEMA_NAME>", "Full")
save(df)

Incremental Script (incremental load script)

from incorta_synapse import load_to_synapse
df = load_to_synapse("<PHYSICAL_SCHEMA_NAME>", "Incremental")
save(df)
Note

These examples pass the schema name as a parameter in the load script. There is also an option to pass a list of tables in the same schema as a comma-separated list of fully qualified names, as follows:

df = load_to_synapse(["<PHYSICAL_SCHEMA_NAME>.<TABLE_NAME_1>", "<PHYSICAL_SCHEMA_NAME>.<TABLE_NAME_2>"], "Full")
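
For example, assuming the fully qualified names are passed as quoted strings (as with the schema name in the scripts above), a full load script limited to two of the EBS_GL_FACTS_ALL materialized views could look like this:

from incorta_synapse import load_to_synapse
# Load only the listed tables from the EBS_GL_FACTS_ALL physical schema into Synapse
df = load_to_synapse(["EBS_GL_FACTS_ALL.FactGLBalances", "EBS_GL_FACTS_ALL.FactJournalDetails"], "Full")
save(df)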

Load Data

Following is the recommended load order:

  • Oracle EBS common physical schemas (for example, EBS_CAL_COMMON)
  • Oracle EBS base module physical schemas (for example, EBS_AP)
  • Materialized Views and Snapshots (EBS_AR_SNP and EBS_AR_MV)
  • _ALL physical schemas (for example, EBS_AP_FACTS_ALL)

Here is the detailed load order for the pre-defined physical schemas:

  • EBS_CAL_COMMON
  • EBS_FIN_COMMON
  • EBS_FND_COMMON
  • EBS_HR_COMMON
  • EBS_ITEM_COMMON
  • EBS_PARTY_COMMON
  • EBS_AP
  • EBS_AR
  • EBS_GL
  • EBS_XLA
  • EBS_XLA_MV
  • EBS_PO
  • EBS_ONT
  • EBS_INV
  • EBS_PA
  • EBS_AR_MV
  • EBS_AR_SNP
  • EBS_COMMON_DIM_ALL
  • EBS_AP_FACTS_ALL
  • EBS_AR_FACTS_ALL
  • EBS_GL_FACTS_ALL
  • EBS_OM_FACTS_ALL
  • EBS_PO_FACTS_ALL
  • Load2Synapse_EBS_COMMON_DIM_ALL
  • Load2Synapse_EBS_AP_FACTS_ALL
  • Load2Synapse_EBS_AR_FACTS_ALL
  • Load2Synapse_EBS_GL_FACTS_ALL
  • Load2Synapse_EBS_OM_FACTS_ALL
  • Load2Synapse_EBS_PO_FACTS_ALL

Manage Physical Schema Load Jobs

A set of pre-defined physical schema load jobs are available to load data from Oracle EBS into Incorta, and load data from Incorta into Azure Synapse. The recommended load order above is built into the pre-defined load job schedule.

| Schema Load Job | Description |
| --- | --- |
| EBS_CAL_COMMON_Job | This job loads the EBS_CAL_COMMON physical schema |
| EBS_FIN_COMMON_Job | This job loads the EBS_FIN_COMMON physical schema |
| EBS_FND_COMMON_Job | This job loads the EBS_FND_COMMON physical schema |
| EBS_HR_COMMON_Job | This job loads the EBS_HR_COMMON physical schema |
| EBS_ITEM_COMMON_Job | This job loads the EBS_ITEM_COMMON physical schema |
| EBS_PARTY_COMMON_Job | This job loads the EBS_PARTY_COMMON physical schema |
| EBS_AP_Job | This job loads the EBS_AP physical schema |
| EBS_AR_Job | This job loads the EBS_AR physical schema |
| EBS_GL_Job | This job loads the EBS_GL physical schema |
| EBS_PO_Job | This job loads the EBS_PO physical schema |
| EBS_ONT_Job | This job loads the EBS_ONT physical schema |
| EBS_INV_Job | This job loads the EBS_INV physical schema |
| EBS_PA_Job | This job loads the EBS_PA physical schema |
| EBS_COMMON_DIM_ALL_Job | This job loads the EBS_COMMON_DIM_ALL physical schema and Azure Synapse |
| EBS_AR_MV_Job | This job loads the EBS_AR_MV physical schema |
| EBS_AR_SNP_Job | This job loads the EBS_AR_SNP physical schema |
| EBS_AP_FACTS_ALL_Job | This job loads the EBS_AP_FACTS_ALL physical schema |
| EBS_AR_FACTS_ALL_Job | This job loads the EBS_AR_FACTS_ALL physical schema |
| EBS_GL_FACTS_ALL_Job | This job loads the EBS_GL_FACTS_ALL physical schema |
| EBS_OM_FACTS_ALL_Job | This job loads the EBS_OM_FACTS_ALL physical schema |
| EBS_PO_FACTS_ALL_Job | This job loads the EBS_PO_FACTS_ALL physical schema |
| Load2Synapse_EBS_COMMON_DIM_ALL_Job | This job loads EBS_COMMON_DIM_ALL into Azure Synapse |
| Load2Synapse_EBS_AP_FACTS_ALL_Job | This job loads EBS_AP_FACTS_ALL into Azure Synapse |
| Load2Synapse_EBS_AR_FACTS_ALL_Job | This job loads EBS_AR_FACTS_ALL into Azure Synapse |
| Load2Synapse_EBS_GL_FACTS_ALL_Job | This job loads EBS_GL_FACTS_ALL into Azure Synapse |
| Load2Synapse_EBS_OM_FACTS_ALL_Job | This job loads EBS_OM_FACTS_ALL into Azure Synapse |
| Load2Synapse_EBS_PO_FACTS_ALL_Job | This job loads EBS_PO_FACTS_ALL into Azure Synapse |

Activate Schema Load Jobs

Here are the steps to activate the schema load jobs:

  • In the Navigation bar, select Scheduler.
  • In the Context bar, select Schema Loads.
  • In the Schema Loads list view, for each job listed in the above table, in the Status column, select SUSPENDED.
  • In the Confirm Changing Status dialog, select Ok. The job Status should change to ACTIVE.

Change Load Job Frequency

The pre-defined load jobs are scheduled to run once every hour. You can change the frequency to support your business needs. Here are the steps to change the frequency of the schema load jobs:

  • In the Navigation bar, select Scheduler.
  • In the Context bar, select Schema Loads.
  • In the Schema Loads list view, select a schema load job name.
  • In the Edit Schema Load dialog, in Every, enter an increment and select a period.
  • Select Save.

Customize Incorta Data Delivery

There may be customizations that you would like to make to support your specific business needs. Here are some common customizations along with detailed implementation steps:

Add New Columns to Existing Fact or Dimension Tables

You can add new columns to existing fact or dimension tables. Following are the steps for both types of tables:

Add the New Column to the Physical Schema as Needed
Add the New Column to the Business Schema
  • Add the column to the appropriate data mart business schema. The name of the business schema has a suffix of DM, the abbreviation for data mart. Within each DM business schema, there is a business view for each fact and dimension table. Here are the steps to add the column to the business view for the example:
    • In the Navigation bar, select Business Schema.
    • In the Business Schema Manager, select the CommonDimensionsDM business schema to open it.
    • In the Business Schema Designer, for the DimSuppliers business view, select More Options (⋮ vertical ellipsis).
    • Select Edit.
    • To add the new column, drag and drop it from the Data panel to the business schema view.
    • In the Action bar, select Done.
Add the New Column to the Materialized View
  • Add the new column to the corresponding fact or dimension materialized view in the appropriate physical schema. The name of the physical schema will end in _ALL. Here are the steps to add the column to the materialized view for the example:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the EBS_COMMON_DIM_ALL schema to open it.
    • In the Schema Designer, in the Tables section, select the DimSuppliers table to open it.
    • In the Table Editor, select the Materialized icon to open the Data Source dialog.
    • In the Data Source dialog, in Script, select Edit Query. Add the new column to the end of the SELECT statement (see the sketch after these steps).
    • Select Validate.
    • Select Done.
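
The query change itself is simply one more column appended to the end of the SELECT list. The following sketch is illustrative only — the shipped DimSuppliers query is more involved, and the source table (EBS_AP.AP_SUPPLIERS) and new column (VENDOR_TYPE_LOOKUP_CODE) are assumptions used for the example:

-- Illustrative only: append the new column at the end of the existing SELECT list
SELECT s.VENDOR_ID,
       s.VENDOR_NAME,
       s.LAST_UPDATE_DATE,
       s.VENDOR_TYPE_LOOKUP_CODE   -- newly added column
FROM EBS_AP.AP_SUPPLIERS s
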
Load the Physical Schemas
  • If you added the new column to the physical schema in the first step, perform a full load of the physical schema. Here are the steps to perform a full load of the physical schema for the example:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the EBS_PARTY_COMMON schema to open it.
    • In the Schema Designer, in the Action bar, select Load → Full Load.
  • Perform a full load of the physical schema containing the materialized view. This will also load Synapse, as the physical schema contains a materialized view script that has a name with a prefix of Load_Synapse. Here are the steps to perform a full load of the materialized view physical schema for the example:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the EBS_COMMON_DIM_ALL schema to open it.
    • In the Schema Designer, in the Action bar, select Load → Full Load.
Check the Load Status
  • Check the status of the Synapse data load with the following steps:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the _ALL schema of interest to open it.
    • In the Schema Designer, in the Tables section, select the Load_Synapse table to open it.
    • In the Columns section, select the Preview Data link.
    • In the Preview dialog, review the rows_status column.
  • If the Synapse data load is successful, add the new column to the Power BI model and visualizations.

Add a New Dimension to an Existing Fact Table

You can add a new dimension to an existing fact table. Following are the steps:

Design and Model the New Dimension
  • Identify the base EBS source tables and views, and the key column, or composite key, you will use to join the new dimension to the existing fact table.
Add the Source Tables to the Physical Schema as Needed
Add the Dimension Table to the Business Schema
  • Add the dimension table to the appropriate data mart business schema. The name of the business schema has a suffix of DM, the abbreviation for data mart. Within each DM business schema, there is a business view for each fact and dimension table. Here are the steps to add the dimension table to the business schema for the example:
    • In the Navigation bar, select Business Schema.
    • In the Business Schema Manager, select the CommonDimensionsDM business schema to open it.
    • In the Business Schema Designer, in the Action bar, select + New → Add New View.
    • Name the view DimPartners.
    • To add new columns, drag and drop them from the Data panel to the business schema view.
      • Add the LAST_UPDATE_DATE from each of the EBS source tables. If there are multiple last update date columns, add a unique suffix, such as LAST_UPDATE_DATE_PO_HDR. Add logic for null dates: ifNull(EBS_PO.RCV_SHIPMENT_HEADERS.LAST_UPDATE_DATE, timestamp("1980-01-01 00:00:00.000"))
      • Add the ID column(s) that will be used as the key to join to the same ID in the fact table.
    • In the Action bar, select Done.
Create a New Materialized View for the Dimension
  • Create a new Incorta PostgreSQL materialized view in the corresponding _ALL physical schema. Here are the steps for the example:

    • In the Navigation bar, select Schema.

    • In the Schema Manager, select the Schemas tab.

    • In the list view of physical schemas, select the EBS_COMMON_DIM_ALL schema to open it.

    • In the Schema Designer, in the Action bar, select + New → Derived Table → Materialized View.

      • For Language, select Incorta PostgreSQL.

      • Enable Incremental.

      • Select Edit Query for Script and Incremental Script, and enter the SQL. See below for tips on how to generate the SQL and add logic to the incremental script using the last update date.

      • In properties, select Add Property, and enter the following:

        | Key | Value |
        | --- | --- |
        | spark.executor.cores | 1 |
        | spark.executor.memory | 1G |
        | spark.driver.memory | 3G |
        | spark.cores.max | 1 |

    • Select Validate.
    • For Table Name, enter DimPartners.
    • In the Table Editor, for the columns that are keys in the materialized view, in the Function column, select key.
    • Select Done.
      Tip → How to generate a PostgreSQL SELECT statement with the Schema Wizard
      • You can use the Schema Wizard to generate the PostgreSQL SELECT statement with the following steps:
        • In the Navigation bar, select Schema.
        • In the Action bar, select + New → Schema Wizard.
        • In step 1, Choose a Source,
          • In Name, enter a physical schema name.
          • For Select a Datasource, select Postgres.
        • In step 2, Manage Tables,
          • Select a data source and a table.
          • Select Customize SQL.
          • Select Format.
          • Copy the SQL and paste it into the Incorta PostgreSQL materialized view Script property.

Tip → How to add last update date logic to the Incremental Script
  • For the Incremental Script, add logic to get all rows that were modified after the maximum last update date of the dimension materialized view's underlying tables.
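
A minimal sketch of this pattern, assuming a hypothetical DimPartners view built over EBS_PARTY_COMMON.HZ_PARTIES (the source table and columns are illustrative, not shipped content):

-- Illustrative incremental script: return only rows modified after the most recent
-- last update date already loaded into the materialized view
SELECT p.PARTY_ID,
       p.PARTY_NAME,
       p.LAST_UPDATE_DATE
FROM EBS_PARTY_COMMON.HZ_PARTIES p
WHERE p.LAST_UPDATE_DATE > (SELECT MAX(LAST_UPDATE_DATE) FROM EBS_COMMON_DIM_ALL.DimPartners)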


Load the Physical Schemas
  • If you added a new table to the physical schema in the first step, perform a full load of the physical schema. Here are the steps to perform a full load for the example:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the EBS_PARTY_COMMON schema to open it.
    • In the Schema Designer, in the Action bar, select Load → Full Load.
  • Perform a full load of the schema that contains the new Incorta PostgreSQL materialized view. This will also load Synapse, as the physical schema contains a materialized view that has a name with a prefix of Load_Synapse. Here are the steps to perform a full load of the materialized view physical schema for the example:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the EBS_COMMON_DIM_ALL schema to open it.
    • In the Schema Designer, in the Action bar, select Load → Full Load.
Check the Load Status
  • Check the status of the Synapse data load with the following steps:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the _ALL schema of interest to open it.
    • In the Schema Designer, in the Tables section, select the Load_Synapse table to open it.
    • In the Columns section, select the Preview Data link.
    • In the Preview dialog, review the rows_status column.
  • If the Synapse data load is successful, add the new dimension table to the Power BI model and visualizations. Create joins from the new dimension table to the fact tables. Create a concatenated key if the join uses a composite key.

Add a New Fact Table

You can add one or more new fact tables that may involve new dimensions. Following are the steps:

Design and Model the New Fact Table
  • Identify the base EBS source tables and views, and the key column, or composite key, you will use to join the new fact table to new and existing dimensions.
Add the Source Tables to the Physical Schema as Needed
  • Determine if the EBS source tables for the new fact table exist in the Incorta physical schema. For example, if you want to add a new fact called FactBulkOrder, identify the EBS source tables that contain the related data. If the EBS source tables exist, continue to the next step. If they do not exist, refer to the Apply Optional Customizations to the Oracle EBS data applications section of Data applications → Install and Configure Oracle EBS for Incorta On-Premises to add the tables to the Incorta physical schema.
  • Add the fact table to the appropriate data mart business schema. The name of the business schema has a suffix of DM, the abbreviation for data mart. Within each DM business schema, there is a business view for each fact and dimension table. Here are the steps to add the fact table to the business schema for the example:
    • In the Navigation bar, select Business Schema.
    • In the Business Schema Manager, select the PurchaseOrderDM business schema to open it.
    • In the Business Schema Designer, in the Action bar, select + New → Add New View.
    • Name the view FactBulkOrder.
    • To add new columns, drag and drop them from the Data panel to the business schema view.
      • Add the LAST_UPDATE_DATE from each of the EBS source tables. If there are multiple last update date columns, add a unique suffix, such as LAST_UPDATE_DATE_PO_HDR. Add logic for null dates: ifNull(EBS_PO.RCV_SHIPMENT_HEADERS.LAST_UPDATE_DATE, timestamp("1980-01-01 00:00:00.000"))
      • Add the ID column(s) that will be used as the key to join each dimension table to the new fact table.
    • In the Action bar, select Done.
Create a New Materialized View for the Fact Table
  • Create a new Incorta PostgreSQL materialized view in the corresponding _ALL physical schema. Here are the steps for the example:

    • In the Navigation bar, select Schema.

    • In the Schema Manager, select the Schemas tab.

    • In the list view of physical schemas, select the EBS_PO_FACTS_ALL schema to open it.

    • In the Schema Designer, in the Action bar, select + New → Derived Table → Materialized View.

      • For Language, select Incorta PostgreSQL.

      • Enable Incremental.

      • Select Edit Query for Script and Incremental Script, and enter the SQL. See below for tips on how to generate the SQL and add logic to the incremental script using the last update date.

      • In properties, select Add Property, and enter the following:

        | Key | Value |
        | --- | --- |
        | spark.executor.cores | 2 |
        | spark.executor.memory | 2G |
        | spark.driver.memory | 3G |
        | spark.cores.max | 4 |

    • Select Validate.

    • For Table Name, enter FactBulkOrder.

    • In the Table Editor, for the columns that are keys in the materialized view, in the Function column, select key.

    • Select Done.

      Tip → How to generate a PostgreSQL SELECT statement with the Schema Wizard
      • In the Navigation bar, select Schema.
      • In the Action bar, select + New → Schema Wizard.
      • In step 1, Choose a Source,
        • In Name, enter a physical schema name.
        • For Select a Datasource, select Postgres.
      • In step 2, Manage Tables,
        • Select a data source and a table.
        • Select Customize SQL.
        • Select Format.
        • Copy the SQL and paste it into the Incorta PostgreSQL materialized view Script property.

Tip → How to add last update date logic to the Incremental Script
  • For the Incremental Script, add logic to get all rows that were modified after the maximum last update date of the fact materialized view's underlying tables.
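
The same pattern applies to the fact case, checking each suffixed last update date. This sketch assumes a hypothetical FactBulkOrder view over EBS_PO.PO_HEADERS_ALL and EBS_PO.PO_LINES_ALL (all names are illustrative):

-- Illustrative incremental script: pick up any header or line modified after the latest
-- last update dates already loaded into the fact materialized view
SELECT h.PO_HEADER_ID,
       l.PO_LINE_ID,
       l.QUANTITY,
       h.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_PO_HDR,
       l.LAST_UPDATE_DATE AS LAST_UPDATE_DATE_PO_LINE
FROM EBS_PO.PO_HEADERS_ALL h
JOIN EBS_PO.PO_LINES_ALL l ON l.PO_HEADER_ID = h.PO_HEADER_ID
WHERE h.LAST_UPDATE_DATE > (SELECT MAX(LAST_UPDATE_DATE_PO_HDR) FROM EBS_PO_FACTS_ALL.FactBulkOrder)
   OR l.LAST_UPDATE_DATE > (SELECT MAX(LAST_UPDATE_DATE_PO_LINE) FROM EBS_PO_FACTS_ALL.FactBulkOrder)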


Load the Physical Schemas
  • Perform a full load of the physical schema you added a new table to in the first step. Here are the steps to perform a full load for the example:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the EBS_PO schema to open it.
    • In the Schema Designer, in the Action bar, select Load → Full Load.
  • Perform a full load of the schema that contains the new Incorta PostgreSQL materialized view. This will also load Synapse, as the physical schema contains a materialized view that has a name with a prefix of Load_Synapse. Here are the steps to perform a full load of the materialized view physical schema for the example:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the EBS_PO_FACTS_ALL schema to open it.
    • In the Schema Designer, in the Action bar, select Load → Full Load.
Check the Load Status
  • Check the status of the Synapse data load with the following steps:
    • In the Navigation bar, select Schema.
    • In the Schema Manager, select the Schemas tab.
    • In the list view of physical schemas, select the _ALL schema of interest to open it.
    • In the Schema Designer, in the Tables section, select the Load_Synapse table to open it.
    • In the Columns section, select the Preview Data link.
    • In the Preview dialog, review the rows_status column.
  • If the Synapse data load is successful, add the new fact table to the Power BI model and visualizations. Create joins from the new fact table to all new or existing dimension tables. Create a concatenated key if the join uses a composite key.

Add a New Data Source and Create Facts and Dimensions

You can add one or more new fact or dimension tables from data sources other than Oracle EBS. Following are the steps:

  • Design and model the fact table and related dimensions, if they do not already exist. Identify the data source tables and views, and the key column, or composite key, you will use to join the new fact table to new and existing dimensions.
  • Create a new external data source in Incorta. Refer to References → Connectors for a list of available connectors and detailed instructions on how to connect to your external data source.
  • Create a new physical schema or add to an existing physical schema to create the tables from the new data source. Refer to Tools → Schema Manager for detailed instructions on how to create or modify a physical schema.
  • Perform a full load of the new or modified existing physical schema.
  • Refer to the previous customization sections for steps on how to add new columns, dimensions, and facts.

Additional Information

Troubleshooting

  • For large tables, to limit full data loads, add a SQL clause to the table query: WHERE CREATION_DATE > <DATE> (see the sketch after this list). You may need to contact Incorta Support to increase your cluster size.
  • Some formula columns may be incompatible with Synapse. In this situation, you can create the formula columns in Power BI instead.
  • A data type mismatch may occur if the parquet data types are not mapped properly to the MS SQL data types.
  • If table creation fails in Synapse, this may be due to the following:
    • Key columns are not defined in the Incorta materialized view
    • Synapse is paused
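
For example (the table name and date are placeholders), the extraction query for a large table could be limited like this:

-- Illustrative only: restrict the full load of a large table by creation date
SELECT *
FROM AP.AP_INVOICE_DISTRIBUTIONS_ALL
WHERE CREATION_DATE > TO_DATE('2022-01-01', 'YYYY-MM-DD')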

Support for Source System Row Deletion

Incorta supports the deletion of rows in the source system through the use of security filters. For more information, refer to Discussions on the Incorta Community.

About Entity Object Naming Conventions for the EBS Data Application

  • Oracle EBS physical schemas have a prefix of EBS.
  • Business schemas with facts and dimensions have a suffix of DM, the abbreviation for data mart.
  • Dimension business views and materialized views have a prefix of Dim.
  • Fact business views and materialized views have a prefix of Fact.
  • Physical schemas used to load parquet and Synapse have an _ALL suffix.
  • Materialized views used to load Synapse have a Load_Synapse prefix.