Introduction
This paper focuses on understanding the Data Collection architecture. Data Collection is a process that pulls data from designated data sources into the planning data store. The Data Collection process consists of the pull process and the Operational Data Store (ODS) load process.

Note: This document is intended for use as a reference document. Certain sections of the text are intentionally repeated throughout the document, so that each section can be read and understood independently of the others. This document is designed to supplement the Oracle Advanced Supply Chain Planning and Oracle Global ATP Server User's Guide and other APS training class notes.

Definitions

Applications Data Store (ADS): Represents all the source data tables used to build and maintain the planning data store within Oracle ASCP. It represents a single source data instance, e.g. a transaction system (Source Instance).

Operational Data Store (ODS): The part of Oracle ASCP that represents all the planning data tables that act as destinations for the collected data from each of the data sources (both ADS and legacy). This acts as the input for the snapshot portion of the planning process. ODS and PDS share the same physical tables, where a special plan identifier (for example, -1) is used for distinction.

Planning Data Store (PDS): Represents all the tables within Oracle ASCP, which encompass those in the ODS and other output tables from planning, including copies/snapshots of the input data used for the current planning run, striped by the appropriate plan identifier.

Overview of Running Collections

The user has the flexibility in determining when the snapshot of information from the transaction system (Source Instance) should be taken, and in deciding what information to capture with each run of Data Collection. The data collection program can be set to run upon submission of a job request and at specified time intervals, and to collect different types of information with different frequencies; e.g. dynamic data such as sales orders can be collected frequently, while static data such as department resources can be collected at longer intervals.

Data collection consists of the following: the pull programs, which move data from the source instance into staging tables, where data integrity is checked; and the ODS load, which moves the data from the staging tables into the APS instance's Operational Data Store.
Pull Program

Collects the data from the ADS and stores the data in the staging tables. This pull program is a registered AOL concurrent program that can be scheduled and launched by a system administrator. If you are using a legacy system, you must write your own pull program. The pull program performs the following major processes:

- Refreshes snapshots.
- Launches the pull workers to perform pulls from the appropriate source tables and insert the data into the staging tables.

ODS Load

A PL/SQL program which performs the data transform and moves the data from the staging tables to the ODS. This collection program is a registered AOL concurrent program that can be scheduled and launched by the system administrator. The Launch_Monitor procedure performs the following major processes:

- Key transformation: generates new local IDs for global attributes such as items, category sets, vendors, vendor sites, customers and customer sites.
- Launches the ODS Load Workers to perform Create, Update, and Delete operations for each entity in the ODS (MSCPDCW).
- Recalculates the sourcing history based on the latest sourcing information and the data from the transaction systems.
- Recalculates the net resource availability based on the calendars, shifts and department resources information.
- Purges the data in the staging tables (MSCPDCP).

Collection Workbench

The Collection Workbench is a centralized data repository providing collected data from the source. Data from different source instances can be viewed using the Collection Workbench. The functionality here is similar to the Planner Workbench functionality. The Collection Workbench is used to verify that the intended data has been collected and, if necessary, to troubleshoot errors in data collection and rerun the data collection program.
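Outside the workbench, a quick sanity check is possible with row counts. The statements below are an illustrative sketch, run on the planning instance, using staging and ODS tables named elsewhere in this document:

```sql
-- After the data pull stage, staged rows are visible in the MSC_ST_% tables:
SELECT COUNT(*) FROM MSC_ST_SYSTEM_ITEMS;

-- After a successful ODS load (and purge), the staging tables are empty and
-- the collected data appears in the ODS, striped by the plan identifier -1:
SELECT COUNT(*) FROM MSC_SYSTEM_ITEMS WHERE PLAN_ID = -1;
```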
The Planning Database Link is defined by the database administrator on the APS (destination) planning instance. There are two database links created: one on the source instance that points to the destination instance (known as the Application Database Link), and one on the destination instance that points to the source instance (known as the Planning Database Link). Together, the two links make the connection bidirectional.

The Planning Database Link is used for data collection from the transaction source to the planning instance. When an action, such as releasing a planned order or requisition, occurs in the APS planning system, the data is published to the transaction instance. The first step in this process is to send a signal via the Planning Database Link to the transaction instance. This initiates a remote procedure that pulls the planned order or requisition from the planning instance to the transaction instance.

The Application Database Link is used for completing the publishing process. The remote procedure that pulls the planned order or requisition record from the planning instance to the transaction instance does so by using the Application Database Link.

Centralized vs. Distributed Configuration

This refers to the deployment of planning and ERP modules. In a centralized deployment, planning and ERP modules are on one machine and in the same database instance. In a distributed configuration, planning and ERP modules are on different database instances and are usually on different machines.
Centralized vs. decentralized planning has nothing to do with the way the planning and ERP modules are deployed; it has to do with whether you are planning all orgs in an enterprise together or separately. In centralized planning, you plan all orgs in ONE PLAN. In decentralized planning, you plan different subsets of orgs in different plans and link them up by feeding the supply schedule of one org as the demand schedule of another.

The following database link is created in the source instance:

In source (Application Database Link)
=============================
create public database link APS
connect to apps identified by apps using 'APS.world';

A corresponding link is created in the destination instance (the Planning Database Link), pointing back at the source. The tnsnames.ora entry for the source instance (vis1153) looks like:

vis1153.world =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=aps.us.oracle.com)(PORT=7500))
    (CONNECT_DATA=(SID=vis1153)))

Define Instances

Navigation path for defining the instances:
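The destination-side link creation is not shown above; the statement below is a sketch of what it might look like, mirroring the source-side example. The link name vis1153 matches the M2A_DBLINK value shown later in this document, but the connect string is an assumption:

```sql
-- On the destination (APS) instance, pointing back at the source
-- (the Planning Database Link):
create public database link vis1153
connect to apps identified by apps using 'vis1153.world';
```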
Advanced Supply Chain Planner > Setup > Instances

The Database Administrator uses this form to set up instance names and to specify the release version of the source database, and the Application Database Link and Planning Database Link associated with the instance names. Complete the fields and flags in the Application Instances window. Set up organizations by clicking on the Organizations button. The Organizations window lists the organizations within the instance. To enable data collection for an organization within an instance, select the Enable check box.

The database links can be confirmed by running the following SQL scripts after the instances are defined.

In vis1153 SQL*Plus
================
SELECT db_link FROM dba_db_links WHERE db_link = 'APS';

DB_LINK
-------
APS

SELECT m2a_dblink, a2m_dblink FROM mrp_ap_apps_instances;

M2A_DBLINK  A2M_DBLINK
----------  ----------
vis1153     APS

The MRP_AP_APPS_INSTANCES table points to the destination (APS) instance. It will always have only one row. The MSC_APPS_INSTANCES table lists all the source instances from which the destination (APS) instance can pull data.

The fields and flags in the Application Instances window are:

FIELD/FLAG                 DESCRIPTION
Instance Code              Choose from multiple instances.
Instance Type              Discrete, Process, Discrete and Process, or Other. If the source is Discrete, only the discrete entities are collected; if Process, only OPM-related entities are collected.
Version                    Unique version for the specified instance.
Application Database Link  A link to connect the Application database to Oracle ASCP.
Planning Database Link     A link to connect Oracle ASCP to the Application database.
Enable Flag                Select this option to enable the collection process.
GMT Difference             The difference between the instance time zone and GMT.

Specify parameters for the data to be collected

Data Pull Parameters

In the Data Pull Parameters form, specify the data to retrieve from the selected instance.
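On the destination instance, the registered source instances can be listed directly from MSC_APPS_INSTANCES. The query below is a sketch; INSTANCE_CODE is assumed to be a column of this table, alongside the two link columns shown above:

```sql
-- In APS (destination) SQL*Plus: one row per registered source instance
SELECT INSTANCE_CODE, M2A_DBLINK, A2M_DBLINK
FROM   MSC_APPS_INSTANCES;
```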
The Complete Refresh flag works in conjunction with the other Yes/No data flags listed lower in the form. When Complete Refresh is set to Yes, all the original data in the Operational Data Store (ODS) is purged. Then the data whose flags are set to Yes are collected and inserted into the ODS.
Example 1: Complete Refresh = Yes (Complete Refresh mode), Pull Items = Yes, Pull BOM/Routings = No. After the data collection, the ODS will contain items but no information about bills and routings.
When Complete Refresh is set to No (Incremental Refresh, or Net Change mode), the data collection is performed in incremental refresh mode. The data existing in the ODS is not purged; the data whose flags are set to Yes is refreshed.
In the incremental refresh mode, net changes of the following entities are supported:

1) Item: new items and item attribute changes.
2) BOM/Routing: any change in BOM/Routing, except for substitute components. In the case of using alternate/simultaneous resources, changing the resource step number is not supported.
3) Supply/Demand: any change in supply/demand, including onhand quantities and hard reservations.
4) Resource: a) department resource capacity changes defined in the simulation set; b) WIP line resource changes; c) WIP job resource requirement changes; d) supplier capacity changes.

In the Data Pull Parameters form, the number of workers to be employed can be specified.

Example 2: Complete Refresh = No, Pull Items = Yes, Pull BOM/Routings = No. After the data collection, the ODS will contain refreshed item information and the same bills and routings information that existed before the incremental refresh occurred.

ODS Load Parameters

In the ODS Load Parameters form, the number of workers to be employed can be specified.

Recalculate Net Resource Availability (NRA) parameter: If a new resource is defined or the availability of a resource has changed on the transaction (source) instance, then the Recalculate Net Resource Availability field on the ODS Load Parameters form should be set to Yes.

Recalculate Sourcing History parameter: Choose whether to allow sourcing history to affect the allocation of orders among external supply sources. Setting the Recalculate Sourcing History flag to Yes causes the sourcing history to be recalculated.

The Data Collection Process

To understand the Data Collection process, we first need to understand the architecture of the data in the source instance. This can be illustrated with an example that follows the flow of data for a given Bill of Material. The bill is defined in the source instance as follows:

SK-DCTEST01
  SK-DCTEST02 - Component
  SK-DCTEST03 - Component

Once this bill is defined, the following tables are populated.
SELECT ASSEMBLY_ITEM_ID ITEM_ID, SEGMENT1,
       BOM.ORGANIZATION_ID ORG_ID,
       BILL_SEQUENCE_ID BILL_SEQ_ID
FROM   BOM_BILL_OF_MATERIALS BOM, MTL_SYSTEM_ITEMS MTL
WHERE  ASSEMBLY_ITEM_ID = 8307
AND    BOM.ORGANIZATION_ID = MTL.ORGANIZATION_ID
AND    ASSEMBLY_ITEM_ID = INVENTORY_ITEM_ID;

ITEM_ID SEGMENT1    ORG_ID BILL_SEQ_ID
------- ----------- ------ -----------
   8307 SK-DCTEST01    207       18328

SELECT COMPONENT_ITEM_ID ITEM_ID, SEGMENT1,
       COMPONENT_QUANTITY COMPONENT_QTY
FROM   BOM_INVENTORY_COMPONENTS COMPS, MTL_SYSTEM_ITEMS MTL
WHERE  BILL_SEQUENCE_ID = 18328
AND    MTL.ORGANIZATION_ID = 207
AND    COMPONENT_ITEM_ID = INVENTORY_ITEM_ID;

ITEM_ID SEGMENT1    COMPONENT_QTY
------- ----------- -------------
   8309 SK-DCTEST02             2
   8317 SK-DCTEST03             2
There are three snapshots involved in this example: MTL_SYS_ITEMS_SN for items, BOM_BOMS_SN for the bill of material header, and BOM_INV_COMPS_SN for the bill components. For each of these snapshots there is a synonym created, viz. MRP_SN_SYS_ITEMS, MRP_SN_BOMS, and MRP_SN_INV_COMPS respectively.
When data in the transaction (source) instance is added or changed, the changes are reflected in the snapshots after the Refresh Snapshot process completes successfully. If you run the following SQL scripts before the Refresh Snapshot process is run, you will get zero rows returned. If you run the same scripts after the Refresh Snapshot process has successfully run, they will return rows.
SELECT COUNT(*) FROM MRP_SN_BOMS WHERE ASSEMBLY_ITEM_ID = 8307;

  COUNT(*)
----------
         0

SELECT COUNT(*) FROM MRP_SN_INV_COMPS WHERE BILL_SEQUENCE_ID = 18328;

  COUNT(*)
----------
         0
Additionally, there are views created on the snapshot synonyms which are used in the data collection process to load the data from the source instance into the MSC staging tables. In this example, two views are used to load the Bill of Material data into the staging tables: MRP_AP_BOMS_V and MRP_AP_BOM_COMPONENTS_V. After the Refresh Snapshot process has successfully run, run the following SQL scripts.

SELECT ASSEMBLY_ITEM_ID ITEM_ID, SEGMENT1,
       BOM.ORGANIZATION_ID ORG_ID,
       BILL_SEQUENCE_ID BILL_SEQ_ID
FROM   MRP_AP_BOMS_V BOM, MTL_SYSTEM_ITEMS MTL
WHERE  ASSEMBLY_ITEM_ID = 8307
AND    BOM.ORGANIZATION_ID = MTL.ORGANIZATION_ID
AND    ASSEMBLY_ITEM_ID = INVENTORY_ITEM_ID;

ITEM_ID SEGMENT1    ORG_ID BILL_SEQ_ID
------- ----------- ------ -----------
   8307 SK-DCTEST01    207       36656

SELECT COMPS.INVENTORY_ITEM_ID ITEM_ID, SEGMENT1,
       USAGE_QUANTITY USAGE_QTY
FROM   MRP_AP_BOM_COMPONENTS_V COMPS, MTL_SYSTEM_ITEMS MTL
WHERE  MTL.INVENTORY_ITEM_ID = COMPS.INVENTORY_ITEM_ID
AND    MTL.ORGANIZATION_ID = 207
AND    BILL_SEQUENCE_ID = 36656;

If you look at the LOAD_BOM procedure for the data pull, it pulls the Bill of Material data from the above two views and populates the following two staging tables: MSC_ST_BOMS and MSC_ST_BOM_COMPONENTS. After the data pull is complete, and before the purge staging process is started, the following SQL scripts can be run.

SELECT ASSEMBLY_ITEM_ID ITEM_ID, SEGMENT1,
       BOM.ORGANIZATION_ID ORG_ID,
       BILL_SEQUENCE_ID BILL_SEQ_ID
FROM   MSC_ST_BOMS BOM, MTL_SYSTEM_ITEMS MTL
WHERE  ASSEMBLY_ITEM_ID = 8307
AND    BOM.ORGANIZATION_ID = MTL.ORGANIZATION_ID
AND    ASSEMBLY_ITEM_ID = INVENTORY_ITEM_ID;

ITEM_ID SEGMENT1    ORG_ID BILL_SEQ_ID
------- ----------- ------ -----------
   8307 SK-DCTEST01    207       36656

SELECT COMPS.INVENTORY_ITEM_ID ITEM_ID, SEGMENT1,
       USAGE_QUANTITY USAGE_QTY
FROM   MSC_ST_BOM_COMPONENTS COMPS, MTL_SYSTEM_ITEMS MTL
WHERE  MTL.INVENTORY_ITEM_ID = COMPS.INVENTORY_ITEM_ID
AND    MTL.ORGANIZATION_ID = 207
AND    BILL_SEQUENCE_ID = 36656;

ITEM_ID SEGMENT1    USAGE_QTY
------- ----------- ---------
   8317 SK-DCTEST03         2
   8309 SK-DCTEST02         2
If you look at the LOAD_BOM procedure for the ODS data load, it pulls the Bill of Material data from the above two staging tables and populates the following two tables: MSC_BOMS and MSC_BOM_COMPONENTS.

Before the data is populated in these tables, a key transformation process takes place, in which the source inventory item IDs are mapped to the inventory item IDs in the planning instance. For inventory item IDs, the mapping is stored in the following table: MSC_ITEM_ID_LID.
SELECT SR_INVENTORY_ITEM_ID SR_ITEM_ID,
       INVENTORY_ITEM_ID ITEM_ID
FROM   MSC_ITEM_ID_LID
WHERE  SR_INVENTORY_ITEM_ID IN (8307, 8309, 8317);

SR_ITEM_ID ITEM_ID
---------- -------
      8307    2333
      8309    2334
      8317    2335

SELECT INVENTORY_ITEM_ID ITEM_ID, ITEM_NAME,
       SR_INVENTORY_ITEM_ID SR_ITEM_ID,
       ORGANIZATION_ID ORG_ID
FROM   MSC_SYSTEM_ITEMS
WHERE  ITEM_NAME LIKE 'SK-DC%';

ITEM_ID ITEM_NAME   SR_ITEM_ID ORG_ID
------- ----------- ---------- ------
   2333 SK-DCTEST01       8307    207
   2334 SK-DCTEST02       8309    207
   2335 SK-DCTEST03       8317    207
SELECT COMPS.INVENTORY_ITEM_ID ITEM_ID, ITEM_NAME,
       COMPS.ORGANIZATION_ID ORG_ID,
       USAGE_QUANTITY USAGE_QTY
FROM   MSC_BOM_COMPONENTS COMPS, MSC_SYSTEM_ITEMS MTL
WHERE  BILL_SEQUENCE_ID = 36656
AND    COMPS.INVENTORY_ITEM_ID = MTL.INVENTORY_ITEM_ID
AND    COMPS.ORGANIZATION_ID = MTL.ORGANIZATION_ID;

ITEM_ID ITEM_NAME   ORG_ID USAGE_QTY
------- ----------- ------ ---------
   2334 SK-DCTEST02    207         2
   2335 SK-DCTEST03    207         2
The user has the flexibility in determining when a snapshot of information from the source system should be taken, and in deciding what information to capture with each job. The data collection program can be set to run upon submission of a job request and at specified time intervals, and to collect different types of information with different frequencies. For example, dynamic data such as sales orders can be collected frequently, while static data, such as department resources, can be collected at longer intervals.
The objective is to set up data collection as needed to create a current replica of information for the APS system to use in its model. To a degree, this is a self-balancing decision: in the incremental refresh (net change) mode, collection workers can detect and collect only changed data. The data will then be at least as old as the job run time. The data collection process is run as a request set. Data can be collected from only one instance with each request set. Request sets are divided into one or more stages, which are linked to determine the sequence in which your requests are run. Each stage consists of one or more requests that you want to run in parallel (at the same time, in any order). To run requests in sequence, you assign requests to different stages, and then link the stages in the order you want the requests to run.
The concurrent manager allows only one stage in a request set to run at a time. When one stage is complete, the following stage is submitted. A stage is not considered complete until all of the requests in the stage are complete. One advantage of using stages is the ability to run several requests in parallel and then move sequentially to the next stage.

In Data Collection there are two stages, the Data Pull stage and the ODS Load stage. In the Data Pull stage, the Planning Data Pull, Refresh Snapshot, and Planning Data Pull Worker requests are run. In the ODS Load stage, the Planning ODS Load, Planning ODS Load Worker, and Planning Data Collection Purge Staging Tables requests are run.

Complete refresh: ignores the most recent refresh data and collects all data. Incremental refresh: collects only the incremental changes since the most recent refresh.

Refresh Snapshot (MSRFWOR)

The Refresh Snapshot process consistently refreshes one or more snapshots. It uses Oracle's replication management API DBMS_SNAPSHOT.REFRESH. A comma-separated list of snapshots is provided, along with the type of refresh to perform for each snapshot listed: 'F' or 'f' indicates a fast refresh, 'C' or 'c' indicates a complete refresh.
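A call of the kind the Refresh Snapshot process makes can be sketched using the snapshots from the BOM example. This is an illustration of the DBMS_SNAPSHOT.REFRESH API only, not the actual MSRFWOR code; the snapshot list and refresh methods are assumptions:

```sql
BEGIN
  -- One method character per snapshot: 'F' = fast (incremental), 'C' = complete
  DBMS_SNAPSHOT.REFRESH('MTL_SYS_ITEMS_SN,BOM_BOMS_SN,BOM_INV_COMPS_SN', 'FFF');
END;
/
```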
Six different processes comprise the Data Collection process. Each of these processes is launched during the Data Collection run.

CONCURRENT PROGRAM                              SHORT NAME
Planning Data Pull                              MSCPDP
Refresh Snapshot                                MSRFWOR
Planning Data Pull Worker                       MSCPDPW
Planning ODS Load                               MSCPDC
Planning ODS Load Worker                        MSCPDCW
Planning Data Collection Purge Staging Tables   MSCPDCP

The Refresh Snapshot process is started, and the Purge Staging Tables process is also submitted while the Refresh Snapshot process is running. For purging the staging tables, the MSC_CL_COLLECTION.PURGE_STAGING_TABLES_SUB procedure is called. This procedure is called because there is a COMMIT after every task: if a previous data pull failed, there would be data left in the staging tables.
The Planning Data Pull Workers communicate with the Planning Data Pull program via database pipes. Messages sent between processes on database pipes include new tasks, task completion messages, etc.
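This style of pipe messaging can be sketched with the DBMS_PIPE package. The pipe name, message content, and timeout below are hypothetical, not the actual protocol used by the pull programs:

```sql
DECLARE
  l_status INTEGER;
  l_task   VARCHAR2(30);
BEGIN
  -- Monitor side: pack a task name into the local buffer, then send it on a named pipe
  DBMS_PIPE.PACK_MESSAGE('TASK_ITEM1');
  l_status := DBMS_PIPE.SEND_MESSAGE('MSC_PULL_PIPE');   -- 0 indicates success

  -- Worker side: wait up to 60 seconds for a message, then unpack the task name
  l_status := DBMS_PIPE.RECEIVE_MESSAGE('MSC_PULL_PIPE', 60);
  IF l_status = 0 THEN
    DBMS_PIPE.UNPACK_MESSAGE(l_task);
  END IF;
END;
/
```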
Please refer to the appendix for a complete list of tasks and procedures called.
The MRP:Cutoff Date Offset Months profile option is used for resource availability. Please refer to the appendix for a complete list of tasks and procedures called.
The data in the staging tables is purged after the Planning ODS Load completes successfully. The data collection process will attempt to purge the already extracted data and any data orphaned as a result of a termination of the Data Collection process. The MRP:Purge Batch Size profile option is used to determine the number of records to be deleted at a time. The tables purged by the Purge Staging Tables process are:
MSC_ST_BOM_COMPONENTS MSC_ST_BOMS MSC_ST_DEMANDS MSC_ST_ROUTINGS MSC_ST_COMPONENT_SUBSTITUTES MSC_ST_ROUTING_OPERATIONS MSC_ST_OPERATION_RESOURCES MSC_ST_OPERATION_RESOURCE_SEQS MSC_ST_PROCESS_EFFECTIVITY MSC_ST_OPERATION_COMPONENTS MSC_ST_BILL_OF_RESOURCES MSC_ST_BOR_REQUIREMENTS MSC_ST_CALENDAR_DATES MSC_ST_PERIOD_START_DATES MSC_ST_CAL_YEAR_START_DATES MSC_ST_CAL_WEEK_START_DATES MSC_ST_RESOURCE_SHIFTS MSC_ST_CALENDAR_SHIFTS MSC_ST_SHIFT_DATES MSC_ST_RESOURCE_CHANGES MSC_ST_SHIFT_TIMES MSC_ST_SHIFT_EXCEPTIONS MSC_ST_NET_RESOURCE_AVAIL MSC_ST_ITEM_CATEGORIES MSC_ST_CATEGORY_SETS MSC_ST_SALES_ORDERS MSC_ST_RESERVATIONS MSC_ST_SYSTEM_ITEMS MSC_ST_DEPARTMENT_RESOURCES MSC_ST_SIMULATION_SETS MSC_ST_RESOURCE_GROUPS MSC_ST_SAFETY_STOCKS MSC_ST_DESIGNATORS MSC_ST_ASSIGNMENT_SETS
MSC_ST_SOURCING_RULES MSC_ST_SR_ASSIGNMENTS MSC_ST_SR_RECEIPT_ORG MSC_ST_SR_SOURCE_ORG MSC_ST_INTERORG_SHIP_METHODS MSC_ST_SUB_INVENTORIES MSC_ST_ITEM_SUPPLIERS MSC_ST_SUPPLIER_CAPACITIES MSC_ST_SUPPLIER_FLEX_FENCES MSC_ST_SUPPLIES MSC_ST_RESOURCE_REQUIREMENTS MSC_ST_TRADING_PARTNERS MSC_ST_TRADING_PARTNER_SITES MSC_ST_LOCATION_ASSOCIATIONS MSC_ST_UNIT_NUMBERS MSC_ST_PROJECTS MSC_ST_PROJECT_TASKS MSC_ST_PARAMETERS MSC_ST_UNITS_OF_MEASURE MSC_ST_UOM_CLASS_CONVERSIONS MSC_ST_UOM_CONVERSIONS MSC_ST_BIS_PFMC_MEASURES MSC_ST_BIS_TARGET_LEVELS MSC_ST_BIS_TARGETS MSC_ST_BIS_BUSINESS_PLANS MSC_ST_BIS_PERIODS MSC_ST_ATP_RULES MSC_ST_PLANNERS MSC_ST_DEMAND_CLASSES MSC_ST_PARTNER_CONTACTS
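The batched purge controlled by MRP:Purge Batch Size can be sketched as follows for a single staging table. This is a simplified illustration, not the actual PURGE_STAGING_TABLES_SUB code, and the batch-size value is an assumption:

```sql
DECLARE
  l_batch_size CONSTANT PLS_INTEGER := 1000;  -- value of profile MRP:Purge Batch Size
BEGIN
  LOOP
    -- Delete at most one batch of rows, then commit to keep rollback usage small
    DELETE FROM MSC_ST_SYSTEM_ITEMS WHERE ROWNUM <= l_batch_size;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```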
MRPCLHAB.pls / MRPCLHAS.pls

Snapshot Logs: BOMMPSNL.sql, MTLMPSNL.sql, MRPMPSNL.sql, WIPMPSNL.sql, POMPSNAL.sql, OEMPSNAL.sql
Create Snapshot Tables: BOMMPSNP.sql, MTLMPSNP.sql, MRPMPSNP.sql, WIPMPSNP.sql, POMPSNAP.sql, OEMPSNAP.sql
Create Synonyms: MRPMPSNS.sql
Create Triggers: MRPMPCRT.sql
Create Views: MRPMPCRV.sql

MSCCLAAB.pls / MSCCLAAS.pls: This package executes the tasks assigned to the Pull Worker.
MSCCLBAB.pls / MSCCLBAS.pls: This package collects data from the staging tables into the ODS.
MSCCLFAB.pls / MSCCLFAS.pls: This package launches the pull program's Monitor/Worker processes.
MSCCLJAB.pls / MSCCLJAS.pls: This package is used to exchange partitions with the tables.
MRPCLEAB.pls / MRPCLEAS.pls: This package launches the Refresh Snapshot process.
APPENDIX
Planning Data Pull Worker (MSCPDPW) task list. The procedures are defined in the package body MSC_CL_PULL_WORKER, MSCCLAAB.pls.

TASK NAME                PROCEDURE NAME
TASK_ITEM1               LOAD_ITEM
TASK_ITEM2               LOAD_ITEM
TASK_ITEM3               LOAD_ITEM
TASK_PO_SUPPLY           LOAD_PO_SUPPLY
TASK_WIP_SUPPLY          LOAD_WIP_SUPPLY
TASK_OH_SUPPLY           LOAD_OH_SUPPLY
TASK_MPS_SUPPLY          LOAD_MPS_SUPPLY
TASK_MDS_DEMAND          LOAD_MDS_DEMAND
TASK_WIP_DEMAND          LOAD_WIP_DEMAND
TASK_SALES_ORDER         LOAD_SALES_ORDER
TASK_BIS                 LOAD_BIS107 (10.7 source), LOAD_BIS110 (11.0 source), LOAD_BIS115 (11.5 source)
TASK_BOM                 LOAD_BOM
TASK_ROUTING             LOAD_ROUTING
TASK_CALENDAR_DATE       LOAD_CALENDAR_DATE
TASK_SCHEDULE            LOAD_SCHEDULE
TASK_RESOURCE            LOAD_RESOURCE
TASK_TRADING_PARTNER     LOAD_TRADING_PARTNER
TASK_SUB_INVENTORY       LOAD_SUB_INVENTORY
TASK_HARD_RESERVATION    LOAD_HARD_RESERVATION
TASK_SOURCING            LOAD_SOURCING
TASK_SUPPLIER_CAPACITY   LOAD_SUPPLIER_CAPACITY
TASK_CATEGORY            LOAD_CATEGORY
TASK_BOR                 LOAD_BOR
TASK_UNIT_NUMBER         LOAD_UNIT_NUMBER
TASK_SAFETY_STOCK        LOAD_SAFETY_STOCK
TASK_PROJECT             LOAD_PROJECT
TASK_PARAMETER           LOAD_PARAMETER
TASK_UOM                 LOAD_UOM
TASK_ATP_RULES           LOAD_ATP_RULES
TASK_USER_SUPPLY         LOAD_USER_SUPPLY
TASK_USER_DEMAND         LOAD_USER_DEMAND
TASK_PLANNERS            LOAD_PLANNERS
TASK_DEMAND_CLASS        LOAD_DEMAND_CLASS
TASK_BUYER_CONTACT       LOAD_BUYER_CONTACT
TASK_LOAD_FORECAST       LOAD_FORECASTS

Planning ODS Load Worker (MSCPDCW) task list. The procedures are defined in the package body MSC_CL_COLLECTION, MSCCLBAB.pls.

TASK NAME                PROCEDURE NAME
TASK_SUPPLY              LOAD_SUPPLY
TASK_BOR                 LOAD_BOR
TASK_CALENDAR_DATE       LOAD_CALENDAR_DATE
TASK_ITEM                LOAD_ITEM
TASK_RESOURCE            LOAD_RESOURCE
TASK_SALES_ORDER         LOAD_SALES_ORDER
TASK_SUBINVENTORY        LOAD_SUB_INVENTORY
TASK_HARD_RESERVATION    LOAD_HARD_RESERVATION
TASK_SOURCING            LOAD_SOURCING
TASK_SUPPLIER_CAPACITY   LOAD_SUPPLIER_CAPACITY
TASK_CATEGORY            LOAD_CATEGORY
TASK_BOM                 LOAD_BOM
TASK_UNIT_NUMBER         LOAD_UNIT_NUMBER
TASK_SAFETY_STOCK        LOAD_SAFETY_STOCK
TASK_PROJECT             LOAD_PROJECT
TASK_PARAMETER           LOAD_PARAMETER
TASK_BIS_TARGET_LEVELS   LOAD_BIS_TARGET_LEVELS
TASK_BIS_TARGETS         LOAD_BIS_TARGETS
TASK_BIS_BUSINESS_PLANS  LOAD_BIS_BUSINESS_PLANS
TASK_BIS_PERIOD          LOAD_BIS_PERIODS
TASK_ATP_RULES           LOAD_ATP_RULES
TASK_NET_RESOURCE_AVAIL  LOAD_NET_RESOURCE_AVAIL
TASK_PLANNERS            LOAD_PLANNERS
TASK_DEMAND_CLASS        LOAD_DEMAND_CLASS
TASK_BIS_PFMC_MEASURES   LOAD_BIS_PFMC_MEASURES
[Process flow diagram: the Data Collection flow annotated with the packages involved at each step, including the pull Monitor/Workers (MSCCLFAB.pls), Refresh Snapshot (MRPCLEAB.pls), and the ODS load (MSCCLBAB.pls).]
DATA COLLECTION

[Figure: Data Collection architecture. The source (transaction) instance holds the core apps data (ADS): items, BOMs, routings, MDS, and SCP data. Step 1, Data Collection, moves this planning data across the Planning Database Link into the destination (planning/APS) instance; step 2, the ODS Load, populates the ODS. Running a plan then produces the PDS (= ODS snapshot + plan output), and release actions are published back to the source.]