Third-party Contributors
Business Objects products in this release may contain redistributions of software
licensed from third-party contributors. Some of these individual components may
also be available under alternative licenses. A partial listing of third-party
contributors that have requested or permitted acknowledgments, as well as required
notices, can be found at: http://www.businessobjects.com/thirdparty
2008-10-09
Contents
Chapter 1 Introduction to Data Federator 25
The Data Federator application.................................................................26
An answer to a common business problem.........................................26
Fundamental notions in Data Federator....................................................29
Data Federator Designer: design time.................................................30
Data Federator Query Server: run time................................................30
Important terms....................................................................................30
Data Federator user interface....................................................................31
Overview of the methodology....................................................................33
Adding the targets................................................................................35
Adding the datasources........................................................................35
Mapping datasources to targets...........................................................36
Checking if data passes constraints.....................................................36
Deploying the project............................................................................37
urlTemplate.........................................................................................479
Configuring connectors to web services..................................................479
List of resource properties for web service connectors......................480
Managing resources and properties of connectors.................................483
Managing resources using Data Federator Administrator..................483
Creating and configuring a resource using Data Federator Administrator..........486
Copying a resource using Data Federator Administrator...................488
List of pre-defined resources..............................................................489
Managing resources using SQL.........................................................491
Creating a resource using SQL..........................................................491
Deleting a resource using SQL..........................................................492
Modifying a resource property using SQL..........................................493
Deleting a resource property using SQL............................................494
System tables for resource management..........................................494
Collation in Data Federator......................................................................495
Supported Collations in Data Federator.............................................496
Setting string sorting and string comparison behavior for Data Federator SQL queries..........497
How Data Federator decides how to push queries to sources when using binary collation..........500
getSchemas.......................................................................................751
getForeignKeys..................................................................................752
refreshTableCardinality.......................................................................754
clearMetrics........................................................................................755
addLoginDomain................................................................................755
delLoginDomains................................................................................756
alterLoginDomain...............................................................................756
getLoginDomains...............................................................................757
addCredential.....................................................................................757
delCredentials....................................................................................758
alterCredential....................................................................................760
getCredentials....................................................................................761
Using patterns in stored procedures........................................................762
Network Connections.........................................................................792
Index 803
1 Introduction to Data Federator
The Data Federator application
This tool differs in its architecture from ETL (Extract, Transform, Load) tools in
that the data it manages is not replicated in another system, but is presented in
the form of optimized virtual data tables. The virtual database is a collection of relational
tables that are manipulated with SQL but do not hold stored data.
Data Federator allows you to consolidate your various data sources into one
coherent set of target tables. From these consolidated, virtual, target tables,
reporting tools can perform queries and be confident that the data are reliable,
trustworthy and up-to-date. For example, you can create a universe using
BusinessObjects Designer or create a query directly against the virtual target
tables using Crystal Reports.
Most businesses maintain several data sources that are spread across
different departments or sites. Often, duplicate information appears within
the various data sources but is cataloged in such a way that makes it difficult
to use the data to make strategic decisions or perform statistical analysis.
When your task involves consolidating several disparate data sources, you
most likely face the following challenges.
• simplicity and productivity - you want to develop a solution once
• quality control - you want to ensure that the consolidated data can be
trusted and is correct
• performance - you want to make sure that access to the data is optimized
to produce results quickly
• maintenance - you want to develop a solution that requires little or no
maintenance as new sources of data are added or as existing sources
change
When faced with the above challenges, you can define the problem in terms
of the following needs.
• need to retrieve the content of each source
• need to aggregate the information relating to the same customer
The following diagrams illustrate how Data Federator addresses the above
needs.
Data Federator operates between your sources of data and your applications.
The communication between the data and the Data Federator Query Server
takes place by means of "connectors." In turn, external applications query
data from Data Federator Query Server by using SQL.
Internally, Data Federator uses virtual tables and mappings to present the
data from your sources of data in a single virtual form that is accessible to
and optimized for your applications.
The following diagram shows the internal operation of Data Federator and
how it can aggregate your sources of data into a form usable by your
applications.
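The federation idea can be pictured with ordinary SQL. The following sketch is an analogy only, not Data Federator's actual engine: it uses an in-memory SQLite database, with invented table and column names, to present two separate sources as a single virtual table through a view.

```python
import sqlite3

# Two separate "sources" (names invented for this illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crm_customers (customer_id INTEGER, name TEXT)")
conn.execute("CREATE TABLE erp_clients (client_no INTEGER, client_name TEXT)")
conn.executemany("INSERT INTO crm_customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])
conn.executemany("INSERT INTO erp_clients VALUES (?, ?)",
                 [(3, "Initech")])

# The "virtual target table": no stored data of its own, just a mapping
# that exposes both sources under one unified schema.
conn.execute("""
    CREATE VIEW all_customers AS
    SELECT customer_id AS id, name FROM crm_customers
    UNION ALL
    SELECT client_no AS id, client_name FROM erp_clients
""")

rows = conn.execute("SELECT id, name FROM all_customers ORDER BY id").fetchall()
print(rows)  # one coherent result set drawn from both sources
```

The view holds no data of its own; it only maps queries onto the underlying sources, which is the role the virtual target tables play in Data Federator.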
Design time is the phase of defining a representation of your data, and run
time is the phase where you use that representation to query your data.
At design time, you use Data Federator Designer to define a data model,
composed of datasource tables and target tables. Mapping rules, domain
tables and lookup tables help you to achieve this goal.
Once your data model and its associated metadata are in place, your
applications can query these virtual tables as a single source of data. Your
applications connect to and launch queries against Data Federator Query
Server.
Behind the scenes at run time, the Data Federator Query Server knows how
to query your distributed data sources optimally to reduce data transfers.
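Why pushing query work down to the sources matters can be shown with a toy simulation (hypothetical code, not Data Federator's optimizer): filtering at the source transfers far fewer rows than filtering after transfer, while the answer stays the same.

```python
# Hypothetical illustration of predicate pushdown.
source_rows = [{"client_id": i, "region": "EU" if i % 2 else "US"}
               for i in range(1000)]

def query_without_pushdown(predicate):
    transferred = list(source_rows)          # every row crosses the network
    return [r for r in transferred if predicate(r)], len(transferred)

def query_with_pushdown(predicate):
    transferred = [r for r in source_rows if predicate(r)]  # source filters first
    return transferred, len(transferred)

pred = lambda r: r["region"] == "EU"
naive, sent_naive = query_without_pushdown(pred)
pushed, sent_pushed = query_with_pushdown(pred)
assert naive == pushed           # identical results...
print(sent_naive, sent_pushed)   # ...but far fewer rows transferred
```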
Important terms
The following table lists some of the fundamental terms when working with
Data Federator. For a full list of definitions, see Glossary on page 766.
Term Description
datasource A datasource is a representation of a source of your data, in tabular form. You define a datasource in Data Federator Designer.
The main components of the Data Federator Designer user interface are:
• (A) the breadcrumb, showing you the position of the current window in
the tree view
• (B) the tabs, where you navigate among your open projects
• (C) the project toolbar, where you add, import or export projects
• (D) the tree view, where you navigate among the components in your
project
• (E) the main view, where you define your components
• (F) the Save button, which saves the changes you made on the current
window
• (G) the Open button, which lets you open a project from the project
Configuration window
• (H) the Reset button, which resets the changes you made on the current
window
The following diagram summarizes steps 1-3 above. These steps represent
the construction phase in Data Federator Designer, at the end of which Data
Federator understands your source data and can present it as a federated
view.
Adding the targets is a matter of designing the schemas of the tables that
your applications will query.
This design is driven by the needs of your applications. You define the target
schema by examining what data your applications require, and by
implementing this schema as a target table in Data Federator Designer.
Related Topics
• Managing target tables on page 46
Depending on the type of source, you define the data access system in which
it is stored, you select the capabilities of the data access system, or you
describe the data extraction parameters if your source is a text file.
Related Topics
• About datasources on page 66
• Creating text file datasources on page 158
• Creating generic JDBC or ODBC datasources on page 138
• Configuring a remote Query Server datasource on page 202
Once the mappings are in place, Data Federator Query Server knows how
to transform, in real-time, the data in your datasources into the form required
by your targets.
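A mapping can be pictured as a row-level transformation from the source schema into the target schema. The sketch below uses invented field names and is only an analogy for the transformation the Query Server performs.

```python
# Hypothetical mapping rule: a source row in one schema is transformed,
# on the fly, into the schema required by the target table.
source_row = {"fname": "Ada", "lname": "Lovelace", "status_code": "MD"}

def map_to_target(row):
    # Combine and rename source columns to match the target schema.
    return {
        "full_name": row["fname"] + " " + row["lname"],
        "marital_status": row["status_code"],
    }

target_row = map_to_target(source_row)
print(target_row)
```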
Related Topics
• Mapping datasources to targets process overview on page 216
Once your mappings are defined, Data Federator Designer helps you check
the validity of the data that results from the mappings.
Once your constraints are defined, Data Federator Designer lets you check
each mapping, and mark the ones that are producing valid results, in order
to refine the rules so that they are ready for production.
Related Topics
• Testing mapping rules against constraints on page 294
• Defining constraints on a target table on page 294
When your mappings are tested in Data Federator Designer, you can deploy
your project on Data Federator Query Server.
When you deploy your project, its tables are usable by applications that send
queries to Data Federator Query Server.
Related Topics
• Managing a project and its versions on page 308
2 Starting a Data Federator project
Working with Data Federator
You should use Data Federator Administrator to change the login parameters
after installation.
Related Topics
• Starting Data Federator Administrator on page 384
Related Topics
• Data Federator Administrator overview on page 384
• Starting Data Federator Administrator on page 384
Related Topics
• Adding a project on page 41
• Opening a project on page 41
• Managing a project and its versions on page 308
Adding a project
To define targets, datasources and mappings, you must add a project to the
Data Federator list of projects.
When you add a project, it appears in the Data Federator list of projects, and
you can switch between different projects.
1. At the top of the window, click Projects.
2. Click Add project.
The New project window appears.
3. Enter a name and description for the project in the Project name and
Description fields, and click Save.
Data Federator adds the project to the list of projects.
Opening a project
You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.
In order to work on your targets, datasources and mappings, you must open
a project. You open projects from the Projects tab.
1. At the top of the window, click the Projects tab.
2. In the tree list, click your-project-name.
The "Configuration" window appears.
3. Click Open.
The your-project-name tab appears.
Once your project is open, you can add targets, datasources and
mappings to it.
Related Topics
• Unlocking projects on page 43
• Managing target tables on page 46
• About datasources on page 66
• Creating database datasources using resources on page 72
• Mapping datasources to targets process overview on page 216
• Opening multiple projects on page 319
You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.
1. Click the Projects tab.
2. Click the Delete this project icon.
Related Topics
• Unlocking projects on page 43
• Managing a project and its versions on page 308
The project closes and becomes unlocked for other user accounts.
Related Topics
• Unlocking projects on page 43
• Managing a project and its versions on page 308
Unlocking projects
When you open a project, Data Federator Designer locks it. When other user
accounts try to access the project, Data Federator refuses, and indicates
that it is locked by your user account.
To unlock a project, the user account that locked it must log in and close the
project.
If the password for the user account that locked the project is lost, the system
administrator can reset it. You can then log in using the user account that
locked the project, and unlock it.
If you open the same project on two machines with the same user account,
the last machine will lock the project. If you return to the first machine, the
project will be open, but you will not be able to save your changes. In this
case, you will have to decide if you want to keep the changes you made on
the first machine or on the second machine.
Data Federator also automatically unlocks the project after the session
timeout value expires. This value is set to 30 minutes.
Related Topics
• Closing Data Federator projects on page 42
• Managing a project and its versions on page 308
3 Creating target tables
Managing target tables
You define target tables in the Data Federator Designer user interface. Once
you have defined the target tables and deployed your project, the Data
Federator server (Data Federator Query Server) exposes your tables to your
other applications.
2. Type a name for the table in the Table name field, and a description in
the Description field.
3. Click Add columns, then click the number of columns that you want to
add.
Empty rows appear in the Table schema pane. Each row lets you define
one column.
4. Fill in each row in the Table schema pane with the name and type of the
column that you want to add.
5. Click Save.
Your target table appears in the Target tables tree list.
Related Topics
• Inserting rows in tables on page 618
• Adding a mapping rule for a target table on page 217
• Using data types and constants in Data Federator Designer on page 604
This procedure shows you how to add a target table by opening a file that
contains a DDL script.
1. Select Add target tables from DDL script from the Add drop-down
arrow.
The Import a DDL script window appears.
3. Click Save.
Data Federator Designer executes your DDL script and adds a new table.
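As a rough illustration of the kind of DDL script this procedure imports, the following sketch runs an invented script through SQLite (standing in here for Data Federator Designer) and lists the columns of the resulting table.

```python
import sqlite3

# A sample DDL script of the kind you might import (names invented).
ddl_script = """
CREATE TABLE customers (
    client_id   INTEGER NOT NULL,
    full_name   VARCHAR(100),
    PRIMARY KEY (client_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl_script)  # SQLite stands in for the Designer here
columns = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]
print(columns)  # the table schema created by the script
```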
Related Topics
• Adding a mapping rule for a target table on page 217
• Formats of files used to define a schema on page 608
This procedure shows you how to create a new target table by copying an
existing table or datasource.
1. Select Add target table from existing table from the Add drop-down
arrow.
The Add target table from existing table window appears.
2. Expand the Tables tree list and select the target table to be added.
The name of the selected table appears in the Replace with table field.
3. Click the table that you want to use.
The name of the selected table appears in the Selected table field, with
copyOf_your-table-name in the New target table's name field.
4. If you want to create a default mapping rule for your new target table,
select the Create default mapping rule check box.
A default mapping rule maps each column of your new table to its
corresponding column in the original table.
5. Click Save.
The Target tables window is displayed showing all created target tables.
6. Click copyOf_your-table-name in the tree list.
The copyOf_your-table-name window appears, showing the columns
copied from your original table to your new target table.
Related Topics
• Adding a mapping rule for a target table on page 217
You can change the name of a target table from the Target tables > your-target-table-name window.
1. In the tree list, click your-target-table-name.
2. In the General pane, type the new name of your target table.
3. Click Save.
The name of the target table changes.
Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52
This section describes the options you have when you are defining the
schema of a target table. You can use this information while adding a target
table manually.
Option Description
Key icon specifies if the column is the key, or part of the key, of the target table
Description icon allows you to enter a description of the column
Delete icon
Related Topics
• Adding a target table manually on page 46
(Data Federator does not show this status in the interface. All new targets
are put in this status.)
This table shows what to do for each status of the target life cycle.
Related Topics
• Deactivating a mapping rule on page 280
• Mapping values using formulas on page 222
• Testing mappings on page 280
• Deploying a version of a project on page 324
Element Description
Lineage tab
Impact tab
Data Federator lets you test a target by using the Target table test tool pane.
Testing a target
The target must have the status "mapped" (see Determining the status of a
target table on page 50).
You can run a query on a target to test that all of its mapping rules are
mapping values correctly and consistently.
1. In the tree list, click your-target-table-name.
The Target tables > your-target-table-name window appears.
2. In the Target table test tool pane, click View data to see the query
results.
For details on running the query, see Running a query to test your
configuration on page 614.
For details on printing the results of the query, see Printing a data sheet
on page 617.
Data Federator displays the data in columns in the Data sheet frame.
3. Verify that the values appear correctly.
If they do not, try adjusting the mapping rules in the target.
Tip:
Example tests to perform on your mapping rule:
• Fetch the first 100 rows.
• Count the total number of rows. Run a query, as in Testing a mapping rule on page 281, and select the Show total number of rows only check box.
• Query a specific primary key value. For example, if you have a target table with a primary key of client_id in the range 6000000-6009999, type:
client_id=6000114
Click View data, and verify the value of each column with the data in your datasource table.
• Verify that the primary key columns are never NULL.
If any of the returned columns are NULL, verify that your mapping rule does not insert NULL values.
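The last check above amounts to asserting that no returned row has a NULL in a primary key column. A minimal sketch of that check, with invented column names:

```python
# Hypothetical check: no primary key column in the query result may be NULL.
rows = [
    {"client_id": 6000114, "name": "Acme"},
    {"client_id": 6000115, "name": None},   # NULL in a non-key column is fine
]
primary_key_columns = ["client_id"]

bad_rows = [r for r in rows
            if any(r[col] is None for col in primary_key_columns)]
print(len(bad_rows))  # 0 means the mapping rule never produced a NULL key
```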
You create domain tables when you want to make an enumeration available
for a column in one of your target tables.
You can also use a domain table to constrain the values in the column of a
target table. See Using a domain table to constrain possible values on
page 263.
The following procedure is an example of a domain table that you can use
to enumerate a list of values for a column called marital_status. The list
in this example contains a code for each marital status.
2. In the Table name field, type a name for your new domain table.
3. In the "Table schema" pane, click Add columns, then click 1 column to
add one column.
One empty row appears in the "Table schema" pane.
5. Click Save.
The your-domain-table-name window appears.
6. In the "Table contents" pane, click Add, then click Add again.
The "Add rows" window appears, showing one empty row with the columns
that you defined.
7. Click Add rows, then click 3 rows to add three more rows.
8. In the field that you named marital_status, enter the values:
• SE
• MD
• DD
• WD
9. Click Save.
The "Update report" window appears.
10. Click Close.
The your-domain-table-name window appears, showing your new
table with the values you entered. You can now use this domain table to
define a set of values for a column in a target table.
Related Topics
• Using data types and constants in Data Federator Designer on page 604
This section shows some examples of domain tables that you can use in
different cases.
marital_status
SE
MD
DD
WD
marital_status marital_status_description
SE single
MD married
DD divorced
WD widowed
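Used as a constraint, a domain table simply restricts a column to the enumerated values. A minimal sketch, using the marital_status domain above:

```python
# Sketch of a domain constraint: the marital_status column may only take
# values that exist in the domain table.
domain = {"SE": "single", "MD": "married", "DD": "divorced", "WD": "widowed"}

def passes_domain_constraint(value):
    return value in domain

print(passes_domain_constraint("MD"))  # in the domain: accepted
print(passes_domain_constraint("XX"))  # not in the domain: rejected
```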
division_code division_code_description department_code department_code_description
• You must have created a text file containing the domain data. The file
must be in comma-separated value (CSV) format, as in the example
above.
• For details on data types that you can use, see Using data types and
constants in Data Federator Designer on page 604.
If you have a lot of domain data, you can enter it into your domain table
quickly by importing the data from a text file.
For example, Data Federator can import domain data such as the following.
file: my-domain-data.csv
"1";"single"
"2";"married"
"3";"divorced"
"4";"widowed"
2. Add a datasource that points to the file from which you want to import.
3. When the Domain tables > your-domain-table window appears, click
Add, then click Add from datasource table.
The Domain tables > your-domain-table > Add rows from a
datasource window appears.
4. In the Select a datasource table field, select the datasource table to be
added to the domain table.
The columns of the selected datasource table are displayed in the Select
a subset of columns field on the right. You can, if required, select one
or all of the columns in this field and click View Data to display the
contents of the selected columns.
5. In the Domain columns mapping pane, map the required datasource
column using each domain table column's drop-down list.
6. Click Save.
The Domain tables > your-domain-table-name > Update report
window is displayed and your file's imported data is added to your domain
table.
Related Topics
• Creating text file datasources on page 158
• Adding a domain table to enumerate values in a target column on page 55
3. Click Save.
Related Topics
• Adding a target table manually on page 46
To delete a domain table, you must first remove references to it from any
lookup and target tables.
1. In the tree list, click Domain tables.
2. Select the tables that you want to delete.
3. Click Delete, and click OK to confirm.
Related Topics
• Dereferencing a domain table from your target table on page 60
• Dereferencing a domain table from a lookup table on page 251
This procedure shows how to use the values that you entered in a domain
table as the values that can appear in a column of your target table.
1. Add a target table. See Managing target tables on page 46.
2. When the Target tables > New target table window appears, click Add
columns, then, from the list, click 1.
An empty row appears in the Target schema pane.
The Target tables > New target table > Domain constraint table 'your-
column-name' window appears.
6. In the list, expand the name of your domain table, then click the column
that you want to use as the domain.
For example, if your domain table contains the columns marital_code,
and marital_code_description, click marital_code.
The name of the domain table appears in the Selected table box. The
name of the column appears in the Selected column box.
7. Click Save.
The "Target tables > New target table" window appears.
When you choose values for this column in Data Federator Designer,
only the values in the domain table will appear.
4 Defining sources of data
About datasources
About datasources
Data Federator projects use datasources to access a project's sources of
data. A datasource is a pointer to, and a representation of, the data that is
kept in a source. For example, this could be a relational database in which
you store customer data. A datasource can also point to a text file, for
example one in which you keep sales information.
Datasources that you can create fall into the following categories:
• Databases are datasources that represent databases such as Oracle,
Access and DB2. Data Federator includes pre-defined resources that you
can use to help configure your datasource to achieve the best
performance.
This category includes relational databases that use JDBC drivers, ODBC
drivers, and Open Client drivers.
• Text file datasources provide access to data held in text files, for example
comma-separated value (.csv) files.
• XML/web datasources provide access to data held in XML files, or data
provided by web services.
• A Remote Query Server datasource uses a remote Data Federator Query
Server as a source of data.
Related Topics
• Creating generic JDBC or ODBC datasources on page 138
• Generic and pre-defined datasources on page 71
• Creating remote Query Server datasources on page 202
• Creating text file datasources on page 158
• Adding an XML file datasource on page 179
• Adding a web service datasource on page 183
The following diagram shows what you see in Data Federator Designer when
you work with datasources.
When you create a new datasource, Data Federator marks its status as
Draft, to indicate that the definition is incomplete. In order to use your
datasource in a mapping, you must finalize the definition by making its
status Final.
• Draft: A datasource is a draft when you first create it. When a datasource
is a draft, you can modify it, but you cannot use it in a mapping.
When a datasource is Final, you cannot modify it, but you can use it in
a mapping.
Status Description Next step
Draft, Complete Datasource definition and schema definition parameters are complete and valid. Test the datasource configuration.
Related Topics
• Setting a text file datasource name and description on page 159
• Creating generic JDBC or ODBC datasources on page 138
Once you have created a resource in Administrator, you can use it to create
datasources in Designer.
• Datasource availability:
• A generic datasource does not use a configured resource. You have
to re-enter all the configuration parameters every time you create a
new generic JDBC datasource.
• You can use a pre-defined resource configuration for multiple
datasources.
Related Topics
• About datasources on page 66
• Configuration parameters for generic JDBC and ODBC datasources on
page 150
• Creating generic JDBC or ODBC datasources on page 138
• Managing resources using Data Federator Administrator on page 483
Related Topics
• Creating JDBC datasources from custom resources on page 133
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Parameter Description
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select DB2, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your DB2 database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your DB2 database. If you are not
sure which resource to choose, ask your Data Federator administrator.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
Parameter Description
For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
• Password
• Port
• Schema
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Informix, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Informix database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Informix database. If you are
not sure which resource to choose, ask your Data Federator administrator.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Parameter Description
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name
where parameter is the deployment context parameter that you want to use.
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select MySQL, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your MySQL database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your MySQL database. If you are
not sure which resource to choose, ask your Data Federator administrator.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Oracle, and click Save.
The "Draft" configuration screen is displayed.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• Schema
• SID
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Netezza, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Netezza database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Netezza database. If you are
not sure which resource to choose, ask your Data Federator administrator.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Progress, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Progress database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Progress database. If you
are not sure which resource to choose, ask your Data Federator
administrator.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• Progress DB password
• Progress DB schema
• Progress DB username
• SequeLink data source name
• SequeLink server host name
• SequeLink server port
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select SAS, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your SAS database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your SAS database. If you are not
sure which resource to choose, ask your Data Federator administrator.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• Password
• Port
• SAS/SHARE server host name
• Schema
• User Name
Using data sets that are not pre-defined to the SAS/SHARE server
You can configure Data Federator to access multiple data sets that are not
pre-defined to the SAS/SHARE server. These are data sets that are not
included in the current SAS configuration.
To access these data sets, you use the Connection Parameters area of
the configuration screen. To configure a non pre-defined data set, use the
following procedure:
1. In the Connection Parameters area, select the Use data sets that are
non pre-defined to the SAS/SHARE server check box. A set of Location
and Library Name fields appears.
2. In the Location field, enter the path for the dataset, in the format required
for the operating system that you are using.
3. In the Library name field, enter a name to use to refer to the data set,
and select the Prefix table name with schema name checkbox. The
library name that you entered appears as a SCHEMA.
4. Click Add data set to add a new, empty set of Location and Library
name fields, ready to define a further set if you require it.
To delete a defined data set, click the Delete button (shown as a cross
on the user interface) at the right of the data set to delete.
5. Click Save to save the configuration.
Related Topics
• Installing drivers for SAS connections on page 427
• Ensure that you have the necessary drivers installed for SQL Server.
Installing drivers is a minimal part of configuring connectors, and is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select SQL Server, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your SQL Server database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your SQL Server database. If you
are not sure which resource to choose, ask your Data Federator
administrator.
Related Topics
• Managing login domains on page 525
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Sybase, and click Save.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• Default database
• Password
• Schema
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Sybase IQ, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Sybase IQ database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Sybase IQ database. If you
are not sure which resource to choose, ask your Data Federator
administrator.
Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Teradata, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Teradata database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Teradata database. If you
are not sure which resource to choose, ask your Data Federator
administrator.
Related Topics
• Managing login domains on page 525
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
1. Access the project to which you want to add the datasource, and at the
top of the Data Federator Designer screen, click Add, and from the
pull-down list, click Add datasource.
The "New Datasource" screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
3. From the list, select JDBC from defined resource, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
pull-down list, select the custom resource that you want to use.
5. On the "Draft" screen, configure the parameters. Refer to the connection
parameters and descriptions information for details.
6. Add the datasource tables to your datasource. Refer to the information
on adding tables to a database datasource for details.
7. Click Save.
Your JDBC datasource is added.
Related Topics
• Connection parameters for JDBC from custom resource datasources on
page 135
• Adding tables to a relational database datasource on page 157
• Testing and finalizing datasources on page 210
JDBC connection URL: for example, jdbc:mysql://localhost:3306/database-name
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
1. Access the project to which you want to add the JDBC or ODBC
datasource, and at the top of the Data Federator Designer screen, click
Add.
The New Datasource screen is displayed.
2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the pull-down list, select either Generic JDBC datasource or
Generic ODBC datasource, and click Save.
The Draft configuration screen is displayed.
4. In the Connection Parameters area, enter the connection details as
required.
5. When you have entered the connection parameters, click the Test button
to check that they are correct.
If the details are incorrect, a dialog box is displayed with the details. Use
this information to fix the problem.
6. In the Configuration Parameters area, enter the configuration details
as required.
7. In the Optimization Parameters area, select the optimization details as
required.
8. Once the connection is working, click Save.
Your datasource is added, and you can add datasource tables to it.
Related Topics
• Connection parameters for generic JDBC datasources on page 141
• Connection parameters for generic ODBC datasources on page 146
• Configuration parameters for generic JDBC and ODBC datasources on
page 150
Driver location: for example, to set the location of the JDBC driver for a
MySQL database, if you put the file in the Data Federator installation
directory, type:
C:\Program Files\BusinessObjects Data Federator XI 3.0\LeSelect\drivers\mysql-connector-java-3.1.12-bin.jar
JDBC connection URL: for example, jdbc:mysql://localhost:3306/database-name
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526
For this datasource type, you can use deployment context parameters for
the following fields.
• JDBC connection URL
Related Topics
• Defining a connection with deployment context parameters on page 156
For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
For example: BOOLEAN=BIT;STRING=VARCHAR
The default is 0. 0 means no limit.
Related Topics
• About datasources on page 66
You use the "Optimization Parameters" pane to specify the capabilities that
your database management system supports. This reduces the amount of
processing that Data Federator performs, and improves performance.
For the capabilities that you do not select, Data Federator performs the
operation. For example, if you clear the Supports aggregates checkbox,
Data Federator performs the aggregate operation on the data it retrieves
from the database management system.
Note:
To get the best performance, check the capabilities of your database, and
select the matching capability check boxes for as many options as possible.
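To make the behavior concrete, the following Python sketch (purely illustrative, not part of Data Federator; the row values are hypothetical) mimics what happens when the Supports aggregates check box is cleared: the rows are retrieved unaggregated, and the grouping and summing happen in the federation layer instead of in the database.

```python
from collections import defaultdict

# Raw rows as retrieved from the database management system
rows = [("A", 10), ("A", 5), ("B", 7)]

# With "Supports aggregates" cleared, the per-group SUM is computed
# after retrieval rather than being pushed down to the database
totals = defaultdict(int)
for group, value in rows:
    totals[group] += value

print(dict(totals))  # {'A': 15, 'B': 7}
```

Selecting the check box would let the database compute this aggregate itself, which is why matching the check boxes to your database's real capabilities improves performance.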
For example, you can define a deployment context for a group of datasources
running on a development server, and another deployment context for the
same group of datasources running on a production server.
Within each deployment context that you define for a project, you use an
identical set of deployment parameter names to define the connection
parameters common to each datasource. You then use these names in your
datasource definition rather than the actual values, and at deployment time,
Data Federator substitutes the values corresponding to the deployment type
that you select.
The deployment parameters that you can use with a datasource definition
depend on the connection's resource type.
Related Topics
• Defining deployment parameters for a project on page 155
Perform this task to create deployment contexts so that you can deploy the
project on multiple servers. Typically, you would create a deployment context
for each server on which the project is to be deployed.
1. Open a project and select it, and in the Tree list, select Configuration to
display the Configuration screen.
2. On the "Configuration" screen, click Deployment contexts to expand
the Deployment Contexts pane.
3. Click the Add new context link to display the Add a new context screen.
4. In the General pane, enter a name for your context. If this context is to
be the default, select the is default check box. The parameters that you
define are then used when you deploy a project and do not select a
deployment context.
5. On the Add a new deployment context screen, click the Add
Parameters button.
6. From the list that appears, select the number of parameters that you want
to add to the deployment context.
A row for each parameter appears, ready for you to supply the parameter
definitions.
7. Define each parameter, by entering a name and a corresponding value
in each of the Name and Value fields.
8. When you have finished defining parameters, click OK to save your
settings.
Example:
To define a deployment context variable for a production server with a
host name of prodserver5, you would enter the following in the Name and
Value fields:
• Name: ProductionServer
• Value: prodserver5
In your connection definition, you would define the host name as follows
to specify prodserver5 with the deployment context:
${ProductionServer}
Related Topics
• Defining a connection with deployment context parameters on page 156
• Using deployment contexts on page 325
Once you have defined a deployment context, you can use the parameters
it contains to configure a connection. The parameters that you can use
with a deployment context depend on the resource type that you are using.
For example, for a Microsoft Access datasource, you can use the following:
• dsn
• user
• password
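As an illustration only (this code is not part of the product), the substitution that happens at deployment time can be sketched in Python; the ${ProductionServer} name follows the example above, and any name without a value in the selected context is left untouched here as an assumption of this sketch:

```python
import re

def substitute(template, context):
    """Replace ${name} placeholders with values from a deployment context."""
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: context.get(m.group(1), m.group(0)),
                  template)

production = {"ProductionServer": "prodserver5"}
print(substitute("${ProductionServer}", production))  # prodserver5
```

At deployment time, Data Federator performs this kind of substitution for each connection field that uses a deployment context parameter.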
Once you have defined a relational database datasource, you can add tables
to it.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.
3. Select the tables that you want to include in your new datasource, and
click Save.
You need to make your datasource final before you can use it.
Related Topics
• Creating generic JDBC or ODBC datasources on page 138
• Updating the tables of a relational database datasource on page 158
• Making your datasource final on page 212
Once you have defined a relational database datasource, you can update
the tables in it.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.
The tables that you have not selected appear as not in use.
The tables that you previously selected but that have been dropped from
the data access system appear as no longer usable. You must delete
the definitions of these tables from your datasource.
3. Select the tables that you want to include in your new datasource, and
click Save.
In general, Data Federator can understand any format in which the data in
the text file is arranged into columns. The columns can be a fixed length or
separated by a specific character. When you add a text file datasource, you
can use the File Extraction Parameters pane to configure these and other
options. Data Federator can then transform your text files into relational,
tabular data.
You can create a datasource from a single text file, or you can create a
datasource from multiple text files.
Files with separators are generally simpler to generate and are easier to
read. Many software applications can generate these types of files from
internal data.
Note:
Depending on the choice of character (or the character sequence) used as
a separator, you must be certain that the separator is not included in the
value for a field. If the character that you chose as a separator appears in
the value of a field, Data Federator will add two fields instead of one.
Files with fields of fixed length are more restrictive because the size of
each field does not vary.
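A minimal Python illustration of the note above (the sample values are hypothetical): splitting a line on the separator yields an extra field whenever the separator character appears inside a value.

```python
# Separator ';' does not appear inside any value: 3 fields, as intended
good = "MARY;123;SALES"
print(good.split(";"))  # ['MARY', '123', 'SALES']

# Separator ';' appears inside the intended value "SALES;EMEA":
# 4 fields instead of 3
bad = "MARY;123;SALES;EMEA"
print(bad.split(";"))   # ['MARY', '123', 'SALES', 'EMEA']
```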
3. Enter a name and description for your datasource in the Datasource
name and Description boxes, then click Save.
The datasource is added, and it appears in the tree list at the left of the
screen. The datasource Draft screen is displayed. You can now add
tables, then select the source file or files for it.
Related Topics
• Selecting a text data file on page 160
• You must have defined a text file datasource, and allocated a name to
it.
3. In the Table name box, type a name for your new table.
4. In the General pane, click Browse.
The Browse frame appears.
5. Use the Browse frame to locate and select your source file.
To browse a different drive, enter the drive letter in the Directory box,
and click Browse again.
For example, to locate a text file on the Q: drive, enter "Q:\" in the
Directory box and click Browse.
When you select a file, the file name appears in the File name or pattern
box.
6. Click Save.
Data Federator references the file for use with this datasource table.
Related Topics
• Setting a text file datasource name and description on page 159
• Selecting multiple text files as a datasource on page 173
After you choose your source file, you define how Data Federator parses
the text that the file contains.
1. In the tree list, expand your-datasource-name, then expand Draft,
and click your-datasource-table-name.
2. In the File extraction parameters pane, complete the text boxes that
describe how Data Federator parses your source file.
• To preview the contents of your file as you configure the file extraction
parameters, click Preview in the General pane.
The Data sheet frame appears, showing the first rows of your source
file.
3. In the Field formatting pane, complete the text boxes that describe the
format of the data in the fields of your source file.
4. Click Save.
Data Federator saves the definition of the format of your source file, which
allows Data Federator to parse the data in the file.
Related Topics
• File extraction parameters on page 162
• Field formatting parameters for text files on page 164
• Selecting a text data file on page 160
Use these parameters to help you when configuring text file extraction
parameters.
File type: Delimited or Fixed width. For a delimited file, for example:
MARY;123;SALES
JOHN;456;PURCHASING
Related Topics
• Configuring file extraction parameters on page 161
Date and time language: for example, if your file contains dates such as
13 janvier, 2000, you should set your Date and time language to fr (French).
Date format: for example, 31/12/2002 or 12/31/2002.
Decimal separator: for example, 123.99.
Related Topics
• Configuring file extraction parameters on page 161
• Date formats used in text files on page 176
• Using data types and constants in Data Federator Designer on page 604
For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
• You must have configured the extraction parameters of your source file.
• The text file that you are configuring must have a header row.
Once you have configured the extraction parameters of your source file, you
must define the schema of your datasource table. Use this procedure to
automatically extract the schema from the first line in your source file.
1. In the tree list, expand your-datasource-name, then expand Draft,
and click your-datasource-table-name.
3. In the Schema definition pane, select First line after the ignored header
of the file.
To preview the contents of your file as you define the schema, click
Preview in the General pane.
The Data sheet frame appears, showing the first rows of your source file.
Data Federator extracts the fields in the file, and uses the first line to
create a title for each column.
5. Verify and select the correct types for all the columns.
6. Select the check boxes under the key icon to specify the primary key.
Note:
You can select multiple check boxes to indicate a primary key defined by
multiple columns.
7. Click Save.
The text file is registered as the source of your datasource.
Related Topics
• Configuring file extraction parameters on page 161
• Generating a schema when a text file has no header row on page 171
• Using a schema file to define a text file datasource schema on page 171
• Using data types and constants in Data Federator Designer on page 604
• Making your datasource final on page 212
You can indicate the primary key while defining the schema of your
datasource.
1. Open the Table schema window.
2. Define one or more columns as a key in the Table schema pane, by
selecting the checkboxes under the Key icon
Data Federator lets you import the schema of a datasource from an external
file. The schema must be in a DDL format.
1. Write your schema file, using one of the formats that Data Federator
recognizes.
2. Open the Table schema window.
3. In the Schema definition pane, from the Schema location text box,
select SQL DDL file or Proprietary DDL file.
4. In the Schema definition pane, choose your source file by clicking
Browse, then navigate to your file and click Select.
To preview the contents of your source text file as you define the schema,
click Preview in the General pane.
The Data sheet frame appears, showing the first rows of your source file.
Related Topics
• Automatically extracting the schema of your datasource table on page 168
• Formats of files used to define a schema on page 608
When your source file does not have a first line that defines the names of
the columns, Data Federator can extract the number of columns, which you
can then name manually.
1. Open the Table schema window.
2. In the Schema definition pane, from the Schema location text box,
select Automatic from the structure of the file.
To preview the contents of your file as you define the schema, click
Preview in the General pane.
The Data sheet frame appears, showing the first rows of your source file.
Related Topics
• Automatically extracting the schema of your datasource table on page 168
• Defining the schema of a text file datasource manually on page 172
• Making your datasource final on page 212
You can define the schema of your datasource manually if the schema
information is not contained in the source file.
1. Open the Table schema window.
Note:
The columns must be indicated in the order that the fields appear in the
datasource.
When the source of the datasource is a file containing fixed-length fields,
you must also indicate the number of characters of each field.
Related Topics
• Using data types and constants in Data Federator Designer on page 604
You can specify multiple files simultaneously when you are selecting a text
file as a datasource. Note, however, that the files must have the same table
schema.
1. Specify your multiple source files in the File name or pattern text box in
the File pane of the Datasources > your-datasource-name > Draft
> your-datasource-table-name window.
The File name or pattern text box indicates the names of the files that
are used to populate the datasource table. You can associate multiple
source files to the same datasource table by separating the names of
each file with a semi-colon ';', and also by using the following symbols:
• Use the symbol "*" to indicate any sequence of characters. For
example "dat*.csv" specifies all the files with names starting with "dat"
and ending with ".csv".
• Use the symbol "?" to indicate any single character. For example
"dat?.csv" specifies the files whose names are composed of the
character string "dat" followed by any character, then the
extension "csv".
2. Click Save.
Data Federator references the file for use with this datasource table.
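The "*" and "?" symbols behave like standard filename wildcards. As an illustration only (Python's fnmatch module, not part of Data Federator), using the patterns from the examples above:

```python
from fnmatch import fnmatch

# "*" matches any sequence of characters
print(fnmatch("data2020.csv", "dat*.csv"))  # True

# "?" matches exactly one character
print(fnmatch("dat5.csv", "dat?.csv"))      # True
print(fnmatch("dat55.csv", "dat?.csv"))     # False: two characters after "dat"
```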
Related Topics
• Selecting a text data file on page 160
The following table shows some examples of the numeric formats that Data
Federator reads from text files when you use a text file as a datasource.
If your text file uses the number... | and the column type is... | Data
Federator interprets it as...
These examples assume that you have set the decimal separator to "."
(period).
Rules that Data Federator uses to read numbers from text files
• For integers, no error is returned if the string data overflows the size of
an integer (MAX_VALUE = 2147483647 (2^31-1) and MIN_VALUE =
-2147483648 (-2^31)).
• For doubles, no error is returned if the string data overflows the size of a
double (64 BITS). The value is truncated.
• White space is removed.
• Parsing stops at the first non-digit character (except decimal separators,
grouping separators, exponential symbols ("e" and "E") and signs ("+"
and "-")).
• The exponential symbol and the decimal separator can each be used only
once; otherwise parsing stops.
• The exponential symbol must be followed by a digit or a sign, otherwise
parsing stops.
• The symbols "+" and "-" can be used at the beginning of the string data
or after the exponent symbol.
• In Data Federator Designer, if you do not indicate any decimal or grouping
separators, the application uses the default separators corresponding to
the language field.
• No error is returned if you define the same symbol for column separator,
decimal separator and grouping separator (column separator has priority
over decimal separator, which has priority over grouping separator).
• If parsing cannot complete while following these rules, Data Federator
stops parsing and throws an exception.
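The rules above can be approximated with a short Python sketch. This is an illustration of the described parsing behavior, not the actual implementation; it assumes well-formed input and ignores the priority handling for identical separator symbols.

```python
def parse_number(text, decimal_sep=".", grouping_sep=","):
    """Sketch of the number-parsing rules for text file fields."""
    s = "".join(text.split())            # white space is removed
    out = []
    seen_decimal = seen_exp = False
    for ch in s:
        if ch.isdigit():
            out.append(ch)
        elif ch == grouping_sep and not seen_exp:
            continue                      # grouping separators are skipped
        elif ch == decimal_sep and not seen_decimal and not seen_exp:
            seen_decimal = True           # decimal separator allowed once
            out.append(".")
        elif ch in "eE" and not seen_exp and out:
            seen_exp = True               # exponential symbol allowed once
            out.append("e")
        elif ch in "+-" and (not out or out[-1] == "e"):
            out.append(ch)                # sign at start or after exponent
        else:
            break                         # parsing stops at any other character
    return float("".join(out))

print(parse_number("1,234.56"))  # 1234.56
print(parse_number("12.5E3"))    # 12500.0
print(parse_number(" 99 USD"))   # 99.0: parsing stops at 'U'
```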
The following table shows some examples of the date formats that you can
write in the Date format box.
For details on date formats, see the Java 2 Platform API Reference for the
java.text.SimpleDateFormat class, at the following URL:
http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.
Note:
For extracting hours, the pattern hh or KK extracts in 12-hour time, while HH
or kk extracts in 24-hour time.
The following table shows the results of different patterns on hour values.
Input value | Pattern | Extracted hour
'00' | hh | 00
'00' | HH | 00
'12' | hh | 00
'12' | HH | 12
'24' | hh | 00
'24' | HH | 00
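Python's strptime directives offer an analogue of these Java patterns (%I for 12-hour, %H for 24-hour). A minimal illustration, noting that Python's parser is stricter than SimpleDateFormat's lenient mode and rejects out-of-range values such as '24':

```python
from datetime import datetime

# 12-hour parsing needs an AM/PM marker to recover the full hour
assert datetime.strptime("07 PM", "%I %p").hour == 19
# 24-hour parsing reads the hour directly
assert datetime.strptime("19", "%H").hour == 19
```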
Related Topics
• Field formatting parameters for text files on page 164
• Using data types and constants in Data Federator Designer on page 604
You can modify the data extraction parameters of the draft version of a
datasource.
Changing the data extraction parameters of your datasource has the following
consequences:
• Your table schemas are erased.
Related Topics
• File extraction parameters on page 162
You require the machine address, user name, password, and port number
for the remote machine.
Data Federator can access text files on a remote machine on a network. The
following options are available:
• Use an SMB share providing that your machine is in the same SMB
domain or workgroup as the distant machine.
• Use an FTP account.
The path must start with the name of the shared directory without the
leading backslash "\".
Note:
If connecting to a public SMB directory on UNIX, you must log in with the
username guest.
Data Federator can use an XML file as a datasource. The elements and
attributes in the XML file are mapped to tables and columns, depending on
how you configure the datasource. You can use the Elements and Attributes
pane to configure which tables and columns you want to create from the
elements in the XML file.
Related Topics
• Using the elements and attributes pane on page 189
Your datasource is added, and you can choose and configure a source
file for it.
• You must have added a datasource whose source type is XML file.
For example, enter Q:\ in the Directory box and click Browse.
5. Click Select.
The file name appears in the XML file name box in the Configuration
pane.
Related Topics
• Selecting multiple XML files as datasources on page 199
• Adding an XML file datasource on page 179
• Using a remote XML file as a datasource on page 200
For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• User Name
Related Topics
• Defining a connection with deployment context parameters on page 156
• You must have chosen and configured a source file of type "XML".
Once you have defined an XML datasource, you can add tables to it.
3. Select the datasource tables that you want to include in your new
datasource, and click Save.
Related Topics
• Choosing and configuring a source file of type XML on page 180
• Using the elements and attributes pane on page 189
• Making your datasource final on page 212
Data Federator can use a web service as a datasource. The elements and
attributes in the response of the web service are mapped to tables and
columns, depending on how you configure the datasource.
Some of the new concepts introduced by Data Federator web services are:
• SQL access to web services
• input columns (to pass values to web service parameters)
• automatic mapping of XML to relational schemas
Related Topics
• Using the elements and attributes pane on page 189
You can add a datasource using the Add button beneath a project's tab.
1. Click Add, then select Add datasource.
The New Datasource window appears.
• You must have added a datasource whose source type is web service.
4. Select one of the following radio buttons depending on how the datasource
tables of your web service should be generated:
• Normalized
• Denormalized
Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.
The "Operation selection" pane expands, letting you select the operations
that you want to access from the web service.
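As an illustration of the normalized/denormalized distinction, here is a sketch of the rows that might be generated for a deeply nested element. The row and key names are invented for the example, loosely following the cpq_package XML sample shown later in this chapter.

```python
# Hypothetical rows generated for a <division_xlate> element nested as
# cpq_package > divisions > division > division_xlate.

# Normalized: a foreign key only to the immediate parent (<division>).
normalized_row = {
    "pk": 1,
    "division_fk": 65,
    "lang": "en",
    "text": "Networking",
}

# Denormalized: foreign keys to every ancestor element.
denormalized_row = {
    "pk": 1,
    "division_fk": 65,
    "divisions_fk": 1,
    "cpq_package_fk": 1,
    "lang": "en",
    "text": "Networking",
}

assert "cpq_package_fk" not in normalized_row
assert "cpq_package_fk" in denormalized_row
```

Denormalized tables make it easier to filter a child element by any ancestor without joins, at the cost of wider rows.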
Related Topics
• Adding a web service datasource on page 183
• You must have added a datasource whose source type is web service.
• You must have extracted the available operations from the web service.
The "Operations output schemas" pane expands, and you can select
which elements you want Data Federator to convert to tabular form.
Related Topics
• Adding a web service datasource on page 183
• Extracting the available operations from a web service on page 183
• Using the SOAP header to pass parameters to web services on page 186
• Selecting which response elements to convert to tables in a web service
datasource on page 187
For this datasource type, you can use deployment context parameters for
the following field.
Related Topics
• Defining a connection with deployment context parameters on page 156
There are two parts to the authentication on web services: on the server that
hosts the web services, and using the SOAP header to pass authentication
parameters to the web service operations.
To authenticate on the server that hosts the web service datasources, Data
Federator lets you use the same fields that you use for authenticating on
any type of datasource. To pass parameters in the SOAP header to the web
service operation, Data Federator provides a field where you can enter header
parameters.
Related Topics
• Authenticating on a server that hosts web services used as datasources
on page 185
• Using the SOAP header to pass parameters to web services on page 186
To authenticate on the server that is hosting the web services that you are
using as datasources, you use the "Web service authentication" pane to
enter your authentication details.
1. Edit your web service datasource.
Related Topics
• Adding a web service datasource on page 183
• Authentication methods for database datasources on page 207
• You must have extracted the available operations from the web service.
If you are accessing a web service that requires parameters in the SOAP
request, you can add these parameters in the SOAP header.
Related Topics
• Extracting the available operations from a web service on page 183
• Selecting the operations you want to access from a web service on
page 184
• You must have chosen and configured a source file for your web service
datasource.
• You must have extracted the available operations from the web service.
• You must have selected the operations you want to access from a web
service.
Once you have defined a web service datasource, you can add tables to it.
Related Topics
• Selecting the operations you want to access from a web service on
page 184
• Using the elements and attributes pane on page 189
• Making your datasource final on page 212
• Defining the schema of a datasource on page 204
You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.
You create pre-filters when making the mappings from datasources to targets.
Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Assigning constant values to input columns using pre-filters on page 233
You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.
Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Assigning dynamic values to input columns using table relationships on
page 234
You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.
You create input value functions when making the mappings from datasources
to targets.
Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Propagating values to input columns using input value functions on
page 234
attributes from your XML or web service datasource you wish to use when
you generate your datasource tables.
The following example of XML code shows several highlighted elements and
attributes:
<cpq_package version="2.0.0.0">
<catalog_entry_path>ftp://ftp.google.com/pub/cp077.exe</catalog_entry_path>
<filename>cp002877.exe</filename>
<divisions>
<division key="65">
<division_xlate lang="en">Networking</division_xlate>
<division_xlate lang="ja">Networking</division_xlate>
</division>
<division key="6">
<division_xlate lang="en">Server</division_xlate>
<division_xlate lang="ja">Server</division_xlate>
</division>
</divisions>
</cpq_package>
The table below describes how Data Federator maps these XML elements
and attributes to its own elements and attributes, and subsequently generates
and displays them:
Elements in List view are displayed in a list. In Explorer view they are
displayed in the Folder > Folder > File format of the Folders Explorer bar
in Windows Explorer. Attributes are always displayed in list form. Both
elements and attributes selected in List view remain selected when Explorer
view is selected, and vice versa.
If an Element or Attribute is... | It means...
The table below describes how different combinations of these colors and
check boxes affect elements:
The Find next feature, as shown and highlighted in the image below, allows
you to locate all occurrences of an element, an attribute, or both. It is available
in the Elements and Attributes panels of both the List and Explorer view,
and is especially useful if you have several elements / attributes of the same
name in a large XML or web service datasource.
The image below shows an example of the List view tab located within the
Elements and attributes pane of the Datasources > your-datasource-
name > Draft window:
The List view tab consists of an Elements panel listing the datasource's
elements, and an Attributes panel listing a highlighted dark blue element's
attributes, if any. The List view tab, in common with the Explorer view tab,
allows you to select or de-select any element or its attributes for inclusion in
your XML datasource tables.
When you select an element in List view, if there is more than one element
of that name (for example, 'DatesofAttendance (2)', as shown in the image
above), all those elements (and their attributes) will be included in your
datasource table. You can de-select one or every attribute of a selected
element in the Attributes panel.
Elements Panel
View only elements already View only those elements that appear in your
used for tables check box current, not yet updated, datasource table.
Attributes Panel
The image below shows an example of the Explorer view tab located within
the Elements and attributes pane of the Datasources > your-data
source-name > Draft window:
The Explorer view tab displays your elements in expandable directory form.
It allows you to select one or more elements for inclusion in your
datasource tables where two or more elements of the same name exist. Attributes
are displayed in selectable (non-clickable) list form, as in the List view.
Elements Panel
Note:
Main view link
This link is only enabled if you have clicked on a
'More' icon. See 'More icon', below for details.
All elements drop-down list-box
Display:
• all elements
• selected elements
• selected elements and their children
• selected elements and their parents
Note:
More icon
The path from the actual root element to the select-
ed element is displayed at the top of the Elements
panel.
Attributes Panel
You can specify multiple files simultaneously when you are selecting an XML
file as a datasource.
Note:
All your XML files must have the same schema, and your XML schema must
be an external XSD file.
1. Specify the multiple source files in the XML file name text box in the
Configuration pane of the Datasources > your-datasource-name
> Draft window.
The XML file name text box displays the names of the files that are used
to populate the datasource table. You can associate multiple source files
to the same datasource table by separating the names of each file with
a semi-colon ';', and also by using the following symbols:
• Use the symbol "*" (asterisk) to indicate any sequence of characters.
For example "dat*.csv" specifies all the files with names starting with
"dat" and ending with ".csv".
• Use the symbol "?" (question mark) to indicate any single character.
For example "dat?.csv" specifies the files whose names are
composed of the character string "dat" followed by any character,
then followed by the extension "csv".
2. Select the From external XSD (Xml Schema Definition) file radio button
to the right of XML schema location, click Browse to the right of the
XML schema file name field and navigate to and select your external
XSD file.
Note:
You cannot have an XML schema location inside an XML file if you are
using multiple XML files as datasources.
3. Select one of the following radio buttons depending on how your XML
datasource tables should be generated:
• Normalized
• Denormalized
Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.
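The "*" and "?" patterns described in step 1 follow standard glob matching, which you can check with Python's fnmatch module. This is an analogy for illustration, not Data Federator's own matcher.

```python
from fnmatch import fnmatch

# "*" matches any sequence of characters
assert fnmatch("dat2008.csv", "dat*.csv")
# "?" matches exactly one character
assert fnmatch("dat5.csv", "dat?.csv")
assert not fnmatch("dat55.csv", "dat?.csv")  # two characters after "dat"
```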
Related Topics
• Choosing and configuring a source file of type XML on page 180
• Choosing and configuring a source file of type XML on page 180
You require the HTTP details, or the machine address, user name, password,
and port number for the remote machine.
• For a remote source on an SMB network, you must indicate the path
from the shared directory. For example if the shared directory is
shareDir and the files are contained in a sub directory data, then you
must indicate the path as shareDir\data.
The path must start with the name of the shared directory without the
leading backslash "\".
Note:
If connecting to a public SMB directory on UNIX, you must log in with
the username guest.
• For a remote source on an HTTP network, you must indicate the path
of the URL from the end of the IP address or server name. For
example: /Folder1/Folder2/File.xml
• You must have added a datasource whose source type is web service.
• You must have added tables to your web service datasource.
Symbol='SAP'
Data Federator contacts the web service, using any values that you assigned
in the Filter field as the values of the input parameters. When the web service
responds, Data Federator displays the response as a table in the Query
tool.
If Data Federator displays an error in contacting the web service, you can
ask your administrator to reconfigure the connector to web services.
Related Topics
• Adding a web service datasource on page 183
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Configuring connectors to web services on page 479
Note:
The remote installation of Query Server must be running in order for you
to use it for a datasource.
Related Topics
• About datasources on page 66
• Managing resources using Data Federator Administrator on page 483
For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• User Name
• Remote catalog and schema
Related Topics
• Defining a connection with deployment context parameters on page 156
Managing datasources
You can perform most simple operations on datasources from the tree list
in Data Federator Designer.
The Datasources node in the tree list displays the "Datasources" window,
where you can browse, edit and delete datasources.
You can define information about the schema of a datasource in its table
schema window.
1. Open the Table schema window.
2. For each column, set the options as follows.
Option Description
Description icon
Click this icon to display a new window allowing you to enter a description
of the column.
Delete a column
4. Click Save.
Related Topics
• Using data types and constants in Data Federator Designer on page 604
The Impact and lineage pane for your datasource table expands and
appears.
If you are adding a JDBC datasource or a web service datasource that has
a column with required values, you can check the Input box beside the
column to represent this requirement.
Web service datasources often have columns with required values, because
web services require a request parameter, like a stock ticker symbol, in order
to provide a specific response.
JDBC datasources rarely have columns with required values, but if they do
you can use the Input feature to represent this.
1. Edit the schema of your datasource table.
a. In the treelist, expand Datasources.
b. Click your-datasource-table-name.
In the window that appears, you can edit the schema in the Table
Schema pane.
Related Topics
• Mapping values to input columns on page 233
You can change the source type of the draft version of a datasource.
Changing the source type of your datasource has the following consequences:
• Your data extraction parameters are erased.
• Your table schemas are erased.
3. Click Change type, then click the type to which you want to change your
datasource.
Deleting a datasource
Note:
The tests must be done table by table. It is often practical to test the
datasource tables when they are created.
Related Topics
• Running a query on a datasource on page 211
You can run a query on a datasource to test that your datasource definition
and schema definition are returning the right values.
2. In the Query tool pane, select the columns of the datasource table you
wish to test and click View data.
Data Federator extracts data from the file, then displays the data in
columns in the query frame.
3. Verify that the values in your file appear under the correct columns.
If they do not, try adjusting the schema again.
For example, if you enter the value "dd-MM-yyyy" in the Date format
box, and the dates in your text file are "01-02-2000", where "01" means
"January", then Data Federator will extract the wrong date.
Make sure you use the value "MM-dd-yyyy" if the month appears before
the day in your source file.
• Verify that each value lines up in its correct column.
For example, in the following figure, there is a problem with one value
that breaks across two columns.
To fix this, make sure that you choose the field separator correctly when
you configure the extraction parameters.
Related Topics
• Running a query to test your configuration on page 614
• Printing a data sheet on page 617
• File extraction parameters on page 162
Note:
If you already have a datasource in final, it is replaced.
Your draft is erased.
To edit a datasource that you have already made final, you must copy it to
a draft.
Note:
The datasource previously in draft is replaced.
Mapping datasources to targets
Mapping datasources to targets process overview
Related Topics
• Adding a mapping rule for a target table on page 217
• Selecting a datasource table for the mapping rule on page 218
• Writing mapping formulas on page 219
The following diagram shows what you see in Data Federator Designer when
you work with mappings:
The main components of the user interface for working with mappings are:
• (A) the tree view, where you navigate among your target tables and
mappings
• (B) the main view, where you configure your mappings
• (C) an expanded node, showing a mapping rule for a target table with the
datasource, lookup and domain tables that participate in the mapping
rule
• (D) an expanded pane, showing how you edit mapping formulas
You add a mapping rule to map data from source tables to target tables.
1. In the tree list, click Target tables, then your-target-table-name, then
Mapping rules.
The Mapping rules window appears.
2. Click Add.
The New mapping rule window appears.
3. In the General pane, type the name and description of your mapping rule
and click Save.
The new mapping rule appears in the tree list.
Related Topics
• Managing target tables on page 46
You can select multiple datasource tables to use as the source of a mapping
rule. This procedure shows how to select a single datasource table.
A datasource table that has a column that contributes to the key of a target
table is called a "core table".
1. In the tree list, click Target tables, then your-target-table-name, then
Mapping rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click the Add a new
table to the mapping rule icon.
The Add a table to the mapping pop-up window appears.
3. In the tree list of the Add a table to the mapping pop-up window, select
the required datasource.
The name of your selected datasource table appears in the Selected
table field.
4. By selecting the appropriate checkboxes as required, define its alias,
whether it should be a core table and whether it should have distinct rows.
5. Click OK.
The selected datasource table appears in the mapping rule.
• concat(S12.FIRSTNAME, S12.LASTNAME)
4. Click Save.
Data Federator verifies and saves the mapping formula for the column.
Tip:
Displaying the column type when writing mapping formulas
When writing a mapping formula, you will likely need to know the type of the
column that you are mapping. An easy way to display the type is to roll the
mouse over the name of the column.
Related Topics
• Selecting a datasource table for the mapping rule on page 218
• Mapping values using formulas on page 222
(Data Federator does not show this status in the interface. All new
mapping rules are put in this status.)
• completed
• tested
This table shows what to do for each status of the mapping rule life cycle.
Related Topics
• Managing relationships between datasource tables on page 253
• Writing mapping formulas on page 219
• Adding a mapping rule for a target table on page 217
• Testing mappings on page 280
To concatenate text values:
=concat(concat(S1.lastname, ', '), S2.firstname)
For details about the data types in Data Federator, see Using data types and
constants in Data Federator Designer on page 604.
When you have to map a lot of columns, Data Federator can add mapping
formulas automatically.
OR
• Data Federator finds a datasource column S.A where the name of S.A
equals the name of T.A, ignoring all periods, hyphens, and other
non-alphanumeric characters.
In these cases, Data Federator adds the mapping formula A = S.A.
• You must have referenced a domain table from your target table.
In this example, the target table has a column called "country" that is not
available in the datasource table. However, all of the rows in the datasource
table are known to have the same value of country.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
A frame appears, showing the domain table that you used as the
domain of this column.
b. Click the value that you want to use as a constant.
You can click any column, but only the value from the column that you
selected in the schema of the target table will appear in the mapping
formula.
Related Topics
• Using a domain table as the domain of a column on page 62
You can run a query on a mapping formula to test that it is correctly mapping
values to the target table.
1. In the tree list, expand your-target-table-name, expand Mapping
rules, then click your-mapping-rule-name.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name window appears.
beside a formula.
A menu appears.
3. Click Edit.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name > column-name window appears.
4. In the Formula test tool pane, click View data to see the query results.
For details on running the query, see Running a query to test your
configuration on page 614.
Data Federator displays the data in columns in the Data sheet frame.
Data Federator offers a set of standard aggregate functions that you can
use in your formulas.
When you need nested aggregate functions in one formula, you must
decompose them into separate terms. For example, instead of:
SUM(S1.A1 + AVG(S1.A2))
write:
SUM(S1.A1) + AVG(S1.A2)
When you use an aggregate function in your mapping rule, the resulting
query will perform a groupby on all columns that are not aggregates.
If you use the following formulas, where none of the columns are marked as
a key:
target.A = source.A
target.B = source.B
target.C = MAX(source.C)
S1.Customer | S1.Amount
Almore | 100
Beamer | 100
Beamer | 150
Costly | 100
Costly | 200
Costly | 250

T.Customer | T.Amount
Almore | 100
Beamer | 125
Costly | 183.333
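The result table above is what a group-by over the non-aggregate column produces. Here is a hedged Python sketch of that behavior, using an AVG aggregate (which matches the figures shown) grouped on S1.Customer:

```python
from collections import defaultdict

source = [
    ("Almore", 100), ("Beamer", 100), ("Beamer", 150),
    ("Costly", 100), ("Costly", 200), ("Costly", 250),
]

# group by the non-aggregate column (Customer) ...
groups = defaultdict(list)
for customer, amount in source:
    groups[customer].append(amount)

# ... then apply the aggregate (AVG) to each group's Amount values
target = {customer: sum(a) / len(a) for customer, a in groups.items()}

assert target["Almore"] == 100
assert target["Beamer"] == 125
assert round(target["Costly"], 3) == 183.333
```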
You can use a case statement formula when you want to express the result
as a series of possible cases instead of as a single formula.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. In the Formulas field of the Mapping formulas pane, enter the case
statement directly and click Save
3. Or:
a. In the Mapping formulas pane, click the Edit icon
A menu appears.
b. Click Edit as case statement.
c. Click OK.
The Case statement pane appears.
d. Click Add case, then Add new case.
Conditions are tested in this order... | Enter this in the column If... | Enter this in the column then...
1 | S6.DAT_ENT LIKE '1%' | date = permute(S6.DAT_ENT, 'AyyMMdd', '19yy-MM-dd')
2 | S6.DAT_ENT LIKE '2%' | date = permute(S6.DAT_ENT, 'AyyMMdd', '20yy-MM-dd')
other cases | |
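Case conditions are tested in order, and the first match wins, with "other cases" as the fallback. A hedged Python sketch of that evaluation order (the condition and result lambdas are invented stand-ins, not the permute calls above):

```python
def evaluate_case(cases, default, value):
    """Test each (condition, result) pair in order; the first match wins."""
    for condition, result in cases:
        if condition(value):
            return result(value)
    return default(value)

cases = [
    # analogous to: If S6.DAT_ENT LIKE '1%' then ...
    (lambda s: s.startswith("1"), lambda s: "matched case 1"),
    # analogous to: If S6.DAT_ENT LIKE '2%' then ...
    (lambda s: s.startswith("2"), lambda s: "matched case 2"),
]
other = lambda s: "other cases"

assert evaluate_case(cases, other, "1990315") == "matched case 1"
assert evaluate_case(cases, other, "2080101") == "matched case 2"
assert evaluate_case(cases, other, "X") == "other cases"
```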
Related Topics
• Mapping datasources to targets process overview on page 216
• Inserting rows in tables on page 618
For example, when you are configuring your query, and you click the Default
button in the Case statement test tool pane, Data Federator will limit the
selected columns to those that are referenced in your formula.
1. In the tree list, expand Target tables, then expand your-target-table-
name, then expand Mapping rules, then click your-mapping-rule-name.
beside the case statement formula that you want to test, then click Edit.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name > your-column-name window appears.
3. In the Case statement test tool pane, click the Default button.
Data Federator limits the selected columns to those that your formula
uses.
Data Federator displays the data in columns in the Data sheet frame.
5. Verify that the values in the target columns appear correctly.
If they do not, try adjusting the mapping formula.
Related Topics
• Running a query to test your configuration on page 614
• The syntax of case statement formulas on page 620
Related Topics
• Assigning constant values to input columns using pre-filters on page 233
• Assigning dynamic values to input columns using table relationships on
page 234
• Propagating values to input columns using input value functions on
page 234
• Setting a constant in a column of a target table on page 224
Related Topics
• Adding a pre-filter on a column of a datasource table on page 236
When one of the source tables in your mapping rule has a column that
requires a value, and you want to force the query to provide this value, you
can use an input value function.
When you use an input value function in such a way, the user or application
that sends the query is responsible for providing a value in the where clause
of the query. When this value is not provided, Data Federator Query Server
throws an error.
1. Edit the source tables in your mapping rule.
a. In the treelist, click Target tables > your-target-table-name >
Mapping rules > your-mapping-rule-name.
2. Add an input value function on the column.
Pre-filters let you limit the source data that Data Federator queries in a
mapping rule. For example, you can use a pre-filter to limit customer data to
customers who were born after a certain date.
You can use a pre-filter on each datasource table that is used in a mapping
rule.
• post-filters
Post-filters let you limit the data after it has been treated by table
relationships.
You can add a pre-filter on a column of a datasource table to limit the data
that Data Federator retrieves from the column.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, select the table including
the column whose values you want to filter and click the Edit the selected
table icon.
4. Expand the tree list in the Tables and Columns pane and select a column
on which to add a filter formula.
Press Ctrl + Spacebar to activate autocomplete and display all possible
column names, if required.
5. Enter the filter formula in the Formula pane using, if required, the Tables
and Columns, Operator and Functions panes.
An example filter formula is:
6. Click OK.
You are returned to the Edit the mapping source - your-datasource-table-name
pop-up window.
7. Click OK.
The Table relationships and pre-filters pane shows a Filter icon
Related Topics
• Mapping datasources to targets process overview on page 216
• The syntax of filter formulas on page 619
Editing a pre-filter
You can edit a pre-filter using the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, place your cursor over
each table to display any filter details:
3. Select the table including the column whose filter you want to edit and
click the Edit the selected table icon.
The Edit the mapping source - your-table-name pop-up window
appears with filter details in the Pre-filter pane:
6. Click Update.
Related Topics
• Mapping datasources to targets process overview on page 216
Deleting a pre-filter
You can delete a pre-filter using the Table relationships and pre-filters
pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, place your cursor over
each table to display any filter details:
3. Select the table including the column whose filter you want to delete and
click the Edit the selected table icon.
The Edit the mapping source - your-table-name pop-up window
appears with filter details in the Pre-filter pane:
Related Topics
• Mapping datasources to targets process overview on page 216
You need a lookup table when the values in the column of a datasource table
must be translated to the values in the column of a target table.
A lookup table associates the values of columns of one table to the values
of columns in another table.
• Lookup tables hold columns of data with mappings to other columns of
data.
• The data in a lookup table is stored on the Data Federator Query Server.
• You can combine a lookup table with a domain table to map the values
in a datasource column to the values in a domain table (see Using lookup
tables on page 242).
• Lookup tables support up to 5000 rows.
Your target table has a column "sex" with the enumerated values:
• 1
• 2
The corresponding datasource column contains the values "F" and "M". To
complete your mapping, you must create a lookup table that maps:
• F to 1
• M to 2
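In its simplest form, the lookup table this example requires behaves like a small key-value mapping; a minimal Python sketch:

```python
# sketch of the sex lookup table described above: datasource value -> target value
sex_lookup = {"F": 1, "M": 2}

assert sex_lookup["F"] == 1
assert sex_lookup["M"] == 2
```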
The following table lists the types of lookup tables that Data Federator lets
you create.
You can use a lookup table to map values from a datasource table to values
in a domain table.
You need a lookup table when the values in the column of a datasource table
must be translated to the values in the column of a target table.
The following process lists the steps in adding a lookup table to associate
the values of a column in a datasource table to a column in a domain table.
• (1) Add a lookup table (see Adding a lookup table on page 244).
• (2) Reference a datasource table in your lookup (see Referencing a
datasource table in a lookup table on page 246).
• (3) Reference a domain table in your lookup (see Referencing a domain
table in a lookup table on page 247).
• (4) Map the values in the datasource table to the values in the domain
table (see Mapping values between a datasource table and a domain
table on page 248).
3. In the Table name box, type a name for your new lookup table.
4. In the Table schema pane, click Add columns, then click Add
datasource column to add one column from a datasource table.
An empty datasource column appears.
5. In the Table schema pane, click Add columns, then click Add domain
column to add one column from a domain table.
You can add columns repeatedly.
For details about the data types in Data Federator, see Using data types
and constants in Data Federator Designer on page 604.
7. Click Save.
Your new lookup table appears in the tree list.
When you have created a lookup table, you can reference a datasource
table. The datasource table is the first part of the lookup.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.
4. In the list, expand the datasource, then click the datasource table whose
column you want to use in the lookup.
The name of the selected datasource appears in the Selected Datasource
box.
The name of the selected datasource table appears in the Selected table
box.
• You must have added a lookup table (see Adding a lookup table on
page 244).
• You must have added a domain table (see Adding a domain table to
enumerate values in a target column on page 55).
When you have created a lookup table, you can reference a domain table.
The domain table is the second part of the lookup.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.
4. In the list, click the domain table whose column you want to use in the
lookup.
The name of the selected domain table appears in the Lookup table box.
6. Click Save.
The Lookup tables > your-lookup-table-name > Description
window appears, showing the domain table you selected as part of the
lookup.
• You must have referenced a datasource table and a domain table in your
lookup (see Referencing a datasource table in a lookup table on page
246 and Referencing a domain table in a lookup table on page 247).
This section shows how to associate the values in the column of a datasource
table with the values in the column of a domain table.
In this example, the set of values in the datasource is {F, M}, and the set
of values in the domain table is {1, 2}.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.
The Lookup tables > your-lookup-table-name > Modify row window
appears. This frame contains a blank row containing two text boxes. The
first text box is a column from your datasource table. The second text box
is a column from your domain table.
The your-domain-table-name frame appears.
7. Click Save.
The Table contents pane shows the row (F, 1) in your table. This means
that the value F in your datasource table is associated to the value 1 in
your domain table.
If you have a lot of lookup data, you can enter it into your Data Federator
project quickly by importing the data from a text file.
For example, Data Federator can import data such as the following.
file: my-lookup-data.txt
"political_region";"code"
"Alabama";"1"
"Alaska";"2"
"Arizona";"3"
"Arkansas";"4"
"California";"5"
"Colorado";"6"
"Connecticut";"7"
"Delaware";"8"
"Florida";"9"
"Georgia";"10"
"Hawaii";"11"
"Idaho";"12"
"Illinois";"13"
"Indiana";"14"
"Iowa";"15"
"Kansas";"16"
"Kentucky ";"17"
"Louisiana ";"18"
"Maine";"19"
"Maryland";"20"
"Massachusetts";"21"
"Michigan";"22"
"Minnesota";"23"
"Mississippi";"24"
"Missouri";"25"
"Montana";"26"
"Nebraska";"27"
"Nevada";"28"
"New Hampshire";"29"
"New Jersey";"30"
"New Mexico";"31"
"New York";"32"
"North Carolina";"33"
"North Dakota";"34"
"Ohio";"35"
"Oklahoma ";"36"
"Oregon";"37"
"Pennsylvania";"38"
"Rhode Island";"39"
"South Carolina";"40"
"South Dakota";"41"
"Tennessee";"42"
"Texas";"43"
"Utah";"44"
"Vermont";"45"
"Virginia";"46"
"Washington";"47"
"West Virginia";"48"
"Wisconsin";"49"
"Wyoming";"50"
2. Add a lookup table and reference a datasource and domain table in it.
5. Refer to the Select a datasource table field and select the datasource
table to be added to the lookup table.
The first columns of the selected lookup table, and their types, are
displayed in the lookup table drop-down list-boxes beneath the Lookup
columns mapping pane.
They are also displayed in the Select a subset of columns field on the
right. You can, if required, select one or all of the columns in this field and
click View Data to display the contents of the selected columns.
6. Refer to the Lookup columns mapping pane and map the required
datasource column from each lookup table column's drop-down list-box.
7. Click Save.
The Lookup tables > your-lookup-table-name window is displayed
and your file's imported data is added to your lookup table.
Related Topics
• Using data types and constants in Data Federator Designer on page 604
• Adding a lookup table on page 244
• Referencing a datasource table in a lookup table on page 246
• Referencing a domain table in a lookup table on page 247
• Creating text file datasources on page 158
In this version, the only way to dereference a domain table from a lookup
table is to delete the lookup table.
Related Topics
• Deleting a lookup table on page 252
2. Select the check box beside the lookup table you want to delete.
3. Click Delete.
You need to manage relationships when you have multiple datasource tables
and the data in those tables is related.
Related Topics
• The process of mapping multiple datasource tables to one target table
on page 264
To find the datasource tables that have no, or incorrect, relationships to the
core tables, look for red bars in the Table relationships and pre-filters
pane.
The image above, however, also shows a solid grey line between the first
two (non-core) tables. This means their relationship is satisfactory. A dotted
red line means the relationship is erroneous.
The following table shows the meaning of the colors depending on whether
a table is a core table or not.
Core table:
• Red: at least one table has no relationships with the other core tables.
• Grey: all of the core tables are related through relationships to other core tables.
Related Topics
• Adding a relationship on page 256
Adding a relationship
4. Select columns from each table and the operator from the drop-down
between Table 1 and Table 2.
An example relationship is:
S1.A2 = S2.A2
5. Click OK.
The Table relationships and pre-filters pane shows your relationship.
Editing a relationship
5. Click OK.
Related Topics
• Mapping datasources to targets process overview on page 216
• Finding incomplete relationships on page 254
• The syntax of relationship formulas on page 621
Deleting a relationship
3. Click Remove.
4. Click OK to confirm you want to remove the relationship formula.
The Table relationships and pre-filters pane no longer shows the
relationship.
4. Click Save.
When you map multiple datasources to a target, you must distinguish between
core tables and non-core tables.
• Use a core table to choose the set of rows that will populate your target
table (the result set).
When you set two or more tables as core, the result set is defined by the
join of all the core tables.
• Use non-core tables to extend the attributes of each row in the result set.
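The distinction above can be sketched as join semantics: the core tables are joined to define the result set, while a non-core table only adds attributes to rows that already exist. A minimal Python illustration with hypothetical tables and keys (not a Data Federator API):

```python
# Core tables are joined to define the result set (like an inner join);
# a non-core table extends each row with extra attributes (like a left join)
# and never removes rows from the result set.
core1 = {"k1": {"date": "d1"}, "k2": {"date": "d2"}}      # core table
core2 = {"k1": {"region": "r1"}, "k3": {"region": "r3"}}  # core table
noncore = {"k1": {"note": "n1"}}                          # non-core table

result = {}
for key in core1.keys() & core2.keys():           # join of all core tables
    row = {**core1[key], **core2[key]}
    row.update(noncore.get(key, {"note": None}))  # non-core extends, never filters
    result[key] = row

print(result)
# {'k1': {'date': 'd1', 'region': 'r1', 'note': 'n1'}}
```

Note that "k2" is dropped because it has no match in the second core table, whereas a missing non-core match would only have produced a None attribute.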
The following icon, displayed beneath datasource table aliases such as S1,
S2 or S10 in the Table relationships and pre-filters pane, indicates they
are core tables:
The table below describes how you use core tables to configure the meaning
of table relationships:
The effects on the target table of designating a source table as a core table
are represented in the following diagram:
You can use a domain table to constrain the values of a target table by
defining a relationship between the domain table and your datasource.
1. Add the datasource table as a source of your mapping.
2. Add the domain table as a source of your mapping.
3. Add a relationship between the key columns of the datasource table
whose values you want to constrain and the domain table.
datasource-id.key-column-id = domain-id.key-column-id
For example:
S1.A1 = S2.A1
Only the rows of the datasource whose ID matches one of the IDs in the
domain table appear in the target.
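The effect of such a relationship can be sketched as a filter over the datasource rows. The following Python sketch uses hypothetical data (column A1 plays the role of the key column); only rows whose key appears in the domain table survive:

```python
# A relationship between a datasource key column and a domain table keeps
# only the datasource rows whose ID appears in the domain table.
datasource_rows = [
    {"A1": 1, "name": "a"},
    {"A1": 2, "name": "b"},
    {"A1": 9, "name": "x"},  # ID not in the domain table: excluded
]
domain_ids = {1, 2}          # the values enumerated by the domain table

target_rows = [row for row in datasource_rows if row["A1"] in domain_ids]
print([row["name"] for row in target_rows])  # ['a', 'b']
```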
Related Topics
• Managing target tables on page 46
• Managing domain tables on page 54
The following process lists the steps in mapping two datasource tables to
a single target table.
• (1) Add a datasource table (see Adding multiple datasource tables to a
mapping on page 265).
• (2) Write mapping formulas (see Writing mapping formulas when mapping
multiple datasource tables on page 265)
• (3) Add relationships between the datasource tables (see Adding a
relationship when mapping multiple datasource tables on page 267)
Tip:
In what order to proceed when mapping multiple datasource tables
Start by adding the datasource tables that map the key of the target table,
then proceed with the datasource tables that are needed to map the non-key
columns of the target table.
Writing mapping formulas for multiple datasource tables is the same as for
a single datasource table. To write a mapping formula, see Writing mapping
formulas on page 219.
Table 5-13: Example datasource table that contributes one non-key column

order_id    quantity
200101      3
200102      40
200103      12
200104      560
200200      10
555555      10
The mapping formulas are:
• target.order_id = source1.order_id
• target.date = source1.date
• target.quantity = source2.quantity
S1.A2 = S2.A2
4. Click Save.
The Table relationships and pre-filters shows the relationships.
5. Repeat steps 2-4 until all your datasource tables form a chain.
No datasource table may be left without a relationship to another
table.
The syntax of relationship formulas allows you to use AND to map multiple
relationships between more than two tables simultaneously.
The relationship in this example is:
source1.order_id = source2.order_id
If you have followed the two examples Writing mapping formulas when
mapping multiple datasource tables on page 265 and the current one, the
result is as follows.
• The row (order_id1, date1, quantity1) exists in the target table T if there
is a row with a key of value "order_id1" in the datasource source1, and
there is a row with a key of value "order_id1" in the datasource source2.
• The row (order_id1, date1, null) exists in the target table T if there is a
row with a key of value "order_id1" in the datasource source1, and there
is no row with a key of value "order_id1" in the datasource source2.
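The two cases above behave like a left outer join from source1 to source2: every source1 row reaches the target, and quantity is null when source2 has no matching key. A minimal Python sketch with hypothetical data:

```python
# Sketch of the result semantics described above.
# source1 maps the key and date; source2 contributes the quantity column.
source1 = {"200101": "2001-01-15", "200200": "2002-02-01"}  # order_id -> date
source2 = {"200101": 3}                                     # order_id -> quantity

# Left outer join: quantity is None (NULL) when source2 has no matching key.
target = [
    (order_id, date, source2.get(order_id))
    for order_id, date in source1.items()
]
print(target)
# [('200101', '2001-01-15', 3), ('200200', '2002-02-01', None)]
```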
Table 5-17: Example target table composed of columns from two datasource tables
Related Topics
• Mapping datasources to targets process overview on page 216
• The syntax of relationship formulas on page 621
When you map two or more datasource tables to one target, the result
depends on several factors.
This section demonstrates the effect of the following factors on the result of
a mapping from two datasource tables to a target table:
This section shows how mappings between the same datasource tables and
target can produce different results.
Note:
This is the same example as the one in the procedure Adding a relationship
when mapping multiple datasource tables on page 267
Example: Two datasource tables where the first one's values are optional
In this example, the factors are as follows.
Note:
Because the target order_id is mapped from source2, in the result, date
may be NULL.
Example: Two datasource tables where both values are required
In this example, the factors are as follows. For the schemas, see the
examples Writing mapping formulas when mapping multiple datasource
tables on page 265 and Adding a relationship when mapping multiple
datasource tables on page 267.
Related Topics
• Managing relationships between datasource tables on page 253
Data Federator lists all the mapping rules in the Mapping rules window.
• Click Target tables, then your-target-table-name, then Mapping
rules.
The Mapping rules window appears, showing a list of your mapping
rules.
Copying a mapping rule is a quick way to add a new mapping rule. When
you copy a mapping rule, the new mapping rule contains the same datasource
tables, lookup tables, and correct mapping formulas as the original mapping
rule.
Note:
Data Federator only copies correct mapping formulas. Therefore, even if
your original mapping rule is complete, the copied mapping rule may be
incomplete.
1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Click the Copy this mapping rule icon.
A message box appears. The message asks you to confirm the copy.
Delete a mapping rule when you do not need any of the mapping formulas
between datasources and target tables inside the mapping rule.
1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Select the check box beside the mapping rule that you want to delete.
You can also select multiple mapping rules at the same time.
3. Click Delete.
A message box appears. The message asks you to confirm the deletion.
Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52
You can deactivate a mapping rule from the Target tables > your-target-
table-name > Mapping rules window.
1. In the tree list, expand your-target-table-name, then click Mapping
rules.
2. In the List of mapping rules pane, check the box beside the mapping
rule that you want to deactivate.
3. Click Deactivate.
The mapping rule is deactivated.
Testing mappings
To test a mapping rule, you must verify if the information you entered allows
Data Federator to correctly populate the target tables.
Data Federator lets you test a mapping rule by using the Query tool pane.
You can run a query on a mapping rule to test that it is correctly mapping
values to the target table.
1. In the tree list, expand your-target-table-name, expand Mapping
rules, then click your-mapping-rule-name.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name window appears.
2. In the Mapping rule test tool pane, click View data to see the query
results.
For details on running the query, see Running a query to test your
configuration on page 614.
For details on printing the results of the query, see Printing a data sheet
on page 617.
Data Federator displays the data in columns in the Data sheet frame.
Tip:
Example tests to perform on your mapping rule
• Fetch the first 100 rows.
Run a query, as in Testing a mapping rule on page 281, and select the
Show total number of rows only check box.
For example, if you have a target table with a primary key of client_id in
the range 6000000-6009999, type:
client_id=6000114
Click View data, and verify the value of each column with the data in your
datasource table.
• Verify that the primary key columns are never NULL.
If any of the returned columns are NULL, verify that your mapping rule
does not insert NULL values.
The following procedure shows how to add any type of table to a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The following procedure shows how to replace any type of table in a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. Select the table to be replaced and click Edit.
A new window Edit the mapping source appears:
3. Or:
a. Right-click the table to be replaced.
A context-sensitive menu appears:
b. Click Edit.
A new window Edit the mapping source appears:
4. Expand the Tables tree list and select the replacement table.
The name of the selected table appears in the Replace with table field.
5. Click OK to add the replacement table to the mapping rule.
The following procedure shows how to delete any type of table from a
mapping rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Right-click the table to be deleted.
A context-sensitive menu appears:
3. Click Remove.
4. Click OK.
The selected table is deleted from the mapping rule, but a reference to it
remains.
Note:
The target table itself is not deleted.
The following procedure shows how to view the columns of any type of table
when it is part of a mapping rule.
The following procedure shows how to set the alias of a table in a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. Select the table to be replaced and click Edit.
A new window Edit the mapping source appears:
3. Or:
a. Right-click the table to be replaced.
A context-sensitive menu appears:
b. Click Edit.
A new window Edit the mapping source appears:
4. In the Properties panel, enter an alias in the Update table alias field
and click OK.
The alias of the table in the mapping rule is set.
Managing constraints
Testing mapping rules against constraints
You can define constraints once you have defined a target table.
Types of constraints
This table describes the types of constraints that you can run in Data
Federator Designer.
Test Description
Related Topics
• Defining key constraints for a target table on page 295
• Adding a domain table to enumerate values in a target column on page 55
You define a key constraint when you create the schema of the target table.
1. In the tree list, click your-target-table-name.
The Target tables > your-target-table-name window appears.
2. Select the Key check box for each column that you want to define as a
key.
3. Click Save.
When you click Constraints > your-key-constraint-name in the
tree list, in the Constraint checks pane, all mapping rules that have the
status "completed" appear in the list.
You define a NOT-NULL constraint when you create the schema of the target
table.
2. Select the Not null check box for each column on which you want to
define a NOT-NULL constraint.
3. Click Save.
When you click Constraints > your-column-name_not_null in the tree
list, in the Constraint checks pane, all mapping rules that have the status
completed appear in the list.
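Both kinds of checks reduce to simple row-level predicates. As an illustrative sketch (Python, with hypothetical rows and function names; in the product these checks run on the Query Server), a key constraint fails when a key value repeats, and a NOT-NULL constraint fails when a column holds NULL:

```python
# Hypothetical target rows used to illustrate constraint checks.
rows = [
    {"client_id": 6000114, "name": "A"},
    {"client_id": 6000115, "name": None},   # NOT-NULL violation on "name"
    {"client_id": 6000114, "name": "C"},    # key violation: duplicate client_id
]

def key_violations(rows, key):
    """Rows whose key value appears more than once (key-constraint violations)."""
    seen, dups = set(), []
    for row in rows:
        if row[key] in seen:
            dups.append(row)
        seen.add(row[key])
    return dups

def not_null_violations(rows, column):
    """Rows whose column is NULL (NOT-NULL constraint violations)."""
    return [row for row in rows if row[column] is None]

print(len(key_violations(rows, "client_id")))   # 1
print(len(not_null_violations(rows, "name")))   # 1
```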
• You must have added a mapping rule (see Adding a mapping rule for a
target table on page 217).
2. Click Add.
The your-target-table-name > Constraints > your-constraint-
name > New constraint window appears.
3. In the General pane, type a name and description for your constraint.
4. In the Constraint definition pane, select a type for your constraint and
enter a constraint formula.
5. Click Save.
The custom constraint is added to the set of available constraints.
In the Constraint checks pane, all mapping rules that have the status
"completed" appear in the list.
For a full list of functions that you can use, see Function reference on
page 624.
For details about the data types in Data Federator, see Using data types and
constants in Data Federator Designer on page 604.
Parameter Description
Parameter Description
To analyze the constraint violations a mapping rule returns, you must check
constraints on the mapping rule.
Tip:
Resolving constraint violations in a constraint check
Related Topics
• Checking constraints on a mapping rule on page 299
• Viewing constraint violations on page 303
• Mapping datasources to targets process overview on page 216
• Filtering constraint violations on page 302
• You must have defined the constraints for the mapping rule.
See:
• Defining key constraints for a target table on page 295,
• Defining not-null constraints for a target table on page 295 or
• Defining custom constraints on a target table on page 296.
Data Federator lets you compute all the rows in a target table that do not
satisfy a constraint.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name .
The Target tables > your-target-table-name > Constraints >
your-constraint-name window appears.
For details on the settings of the Query check pane, see Configuring a
constraint check on page 297.
• You must have defined the constraints for the mapping rule.
See:
• Defining key constraints for a target table on page 295,
• Defining not-null constraints for a target table on page 295 or
• Defining custom constraints on a target table on page 296.
Data Federator lets you compute all the rows in a target table that do not
satisfy a constraint.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name.
The Target tables > your-target-table-name > Constraints >
your-constraint-name window appears.
2. In the Constraint checks pane, select the check boxes beside the
mapping rules you want to test.
3. Click Check constraints.
In the Constraint checks pane, the date of the check is displayed in the
Launch date column, and the number of violations that caused the
constraint to fail is displayed in the Violations column: zero, an integer,
or the value ">n".
If you have:
• constraint violations, see Viewing constraint violations on page 303.
• no constraint violations, see Marking a mapping rule as validated on page 303.
You must have computed the constraint violations for at least one constraint
(see: Computing constraint violations on page 300).
You can filter constraint violations into subsets in order to help you analyze
their sources of error.
1. In the Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name window, in
the Filtered constraint violations pane, click Add.
The Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name > Filtered
constraint violations window appears.
5. Click Save.
Your filter is saved.
You can make as many filters as you need to organize and help you treat
the constraint violations.
After you have checked a constraint, and you are sure that a mapping rule
has satisfied the constraint formula, you can mark the mapping rule as
validated.
When a mapping rule is in the status tested, it can be deployed on the Data
Federator Query Server.
1. In the tree list, expand your-target-table-name, then Constraints,
then click your-constraint-name.
The your-target-table-name > Constraints > your-constraint-
name window appears.
2. Select the check box beside the mapping rule that you want to mark as
validated.
3. In the Constraint checks pane, click Validate.
The mapping rule is marked as validated for this constraint.
Related Topics
• Deploying a version of a project on page 324
• You must have checked the constraint at least once (see Computing
constraint violations on page 300).
Each time you check a constraint, its results are stored so you can read
them without checking the constraint again.
You can view the stored results of the most recent constraint check.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name.
For details on printing the results of the constraint, see Printing a data
sheet on page 617.
Column Description
Reports
This section describes the reports you can run on your constraints. It consists
of:
• "Generating a constraint report".
Managing projects
Managing a project and its versions
While a project is in development, you can store and modify multiple versions
of it. When you are ready to deploy the project, you choose one of the
versions and deploy it to a Data Federator Query Server. Data Federator
also keeps a list of all the deployments you have made.
The following diagram shows what you see on the Data Federator user
interface when you work with projects:
The main components of the user interface for working with projects are:
• (A) the Projects tab, where you switch from a list of the projects to the
current project
Note:
The current version is automatically saved into an archive file called Latest
Version.
Related Topics
• Opening a project on page 41
• Including a project in your current project on page 315
• Deploying a version of a project on page 324
• Downloading a version of a project on page 313
You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.
To define a project and to manage its versions, you must select the project.
1. Either:
a. Select the tab of the project.
The project's Configuration window appears.
2. Or:
a. At the top of the window, click Projects
The list of projects appears.
b. In the tree list, select the project.
Note:
The project is selected and you can manage its versions and its
description.
Related Topics
• Unlocking projects on page 43
• Adding a project on page 41
You can always load a version of a project that you have stored in an archive.
This lets you manage important project versions.
1. Open the project.
2. From the Store drop-down arrow, select Entire project.
You can also store the current version of selected target tables.
The New archive file window appears.
3. Enter a name and description in the Name and Description fields
respectively, and click Store.
Related Topics
• Opening a project on page 41
• Loading a version of a project stored on the server on page 314
• Loading a version of a project stored on your file system on page 315
• Storing the current version of selected target tables on page 312
• Downloading a version of a project on page 313
You can always load a version of a project that you have stored in an archive.
This lets you manage important project versions.
1. Open the project.
2. From the Store drop-down arrow, select Select targets.
The New archive file window appears.
3. Enter a name and description in the Name and Description fields
respectively.
4. In the Targets selection pane, in the Select the targets to archive box,
select the check boxes beside the targets you want to archive.
When you select a target, its dependencies appear in the Dependencies
of box.
You can click the name of a target in the Select the targets to archive
box to show its dependencies without selecting it.
Use the check box at the top of the Select the targets to archive box to
select all the displayed targets.
Note:
Once you have stored a version of a project, you can download its archive
file to your file system.
Related Topics
• Opening a project on page 41
• Loading a version of a project stored on the server on page 314
• Loading a version of a project stored on your file system on page 315
• Storing the current version of a project on page 311
• Downloading a version of a project on page 313
A browser dialog opens, asking you if you want to save the archive file.
3. Save the archive file on your file system.
Related Topics
• Editing the configuration of a project on page 310
• Storing the current version of a project on page 311
Data Federator lets you load a deployed version of a project if you want to
modify it. When you edit a deployed version, you must deploy it again before
your changes take effect.
1. Open the project.
2. Click Load.
3. Select the Archive on server radio button, then in the Select archive
or deployed version pane, expand your-project-name.
4. Click the name of the archive or deployed version you want to load.
The name of the clicked version appears in the Name box. The description
and creation date also appear.
5. Click Save.
Data Federator loads the contents of the deployed version and it becomes
the current version.
Data Federator lets you load a deployed version of a project if you want to
modify it. When you edit a deployed version, you must deploy it again before
your changes take effect.
1. Open the project.
2. Click Load.
3. Select the Archive on file system radio button, and click Browse.
The Choose file window appears.
4. Navigate to and click the name of the archive or deployed version you
want to load, and click Open.
You are returned to the Load from an archive window with the path of
your selected archive displayed in the Archive file field.
5. Click Save.
Data Federator loads the contents of the deployed version and it becomes
the current version.
Related Topics
• Opening a project on page 41
You can merge the contents of an archive file or a deployed version with the
contents of the current version of a project.
1. Open the project.
2. Click Include.
3. Select an archive to include.
You can select an archive either from the Data Federator server or from
your file system. You can do both of these the same way as loading a
version of a project.
4. Choose how Data Federator treats included components that have the
same names as existing ones.
Table 7-2: How Data Federator treats included components that have the same names
as existing ones
Table 7-3: How Data Federator treats the status of included components
6. Click Save.
Data Federator includes the archive file into the project, the Projects >
your-project-name window appears, and a message is displayed
advising whether the project was included successfully.
Related Topics
• Opening a project on page 41
• Loading a version of a project stored on your file system on page 315
• Loading a version of a project stored on the server on page 314
• Mapping datasources to targets process overview on page 216
You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.
You can open multiple projects at the same time by selecting them from the
"Projects" window.
1. At the top of the window, click the Projects tab.
2. In the tree list, click Projects.
3. Click the checkbox beside the projects that you want to open.
4. Click Open.
A tab appears for each project you opened. Each tab contains the latest
version of the project.
Click the tab of the project that you want to work on.
Once your project is opened, you can add targets, datasources and
mappings to it.
Related Topics
• Opening a project on page 41
• Managing target tables on page 46
• About datasources on page 66
• Mapping datasources to targets process overview on page 216
You can export all of the projects at once from the Projects tab.
1. Above the tree view, click the Projects tab.
The Projects window appears.
2. Click Export all projects.
Data Federator creates an archive file whose name begins with
projects_export_. The message "Successfully exported all projects"
appears.
3. Click Download the archive file to download the projects.
Your browser asks you if you want to save the archive file.
4. Save the archive file using your browser.
4. Click Save.
Data Federator imports the set of projects contained in the selected
archive file.
Related Topics
• Unlocking projects on page 43
• Exporting all projects on page 320
Deploying projects
You deploy a project when you want other applications to query its tables.
When you deploy a project, it becomes a catalog on Data Federator Query
Server. The datasource, target, lookup and domain tables in the project
become tables in the catalog.
Related Topics
• Deploying a version of a project on page 324
If you want to deploy on a remote installation of Query Server, you must first
configure Data Federator so that it can connect to a remote server.
If you want to deploy the project onto a server cluster, click the Add servers
button and add the details of each of the cluster servers.
This configures Data Federator Designer to deploy the project on the servers
for which you provided the details. Each server uses the same datasource
connection parameters, and accesses the same datasource.
Note:
When deploying to a cluster, if deployment to one of the servers fails for any
reason, the deployment is not rolled back. That is, the project is deployed to
all servers in the cluster except the server or servers where the deployment
failed.
Related Topics
• Configuring Data Federator Designer to connect to a remote Query Server
on page 584
• Sharing Query Server between multiple instances of Designer on page 585
When you deploy a new project, your user account is automatically granted
"select" and "undeploy" rights on that project.
If you want other user accounts to query a project, your administrator must
create those user accounts and give them authorizations to read the tables
in your project.
Related Topics
• About user accounts, roles, and privileges on page 504
You can also store versions of projects to the file system, and open versions
that you have stored either on the Data Federator server or on the file system.
Related Topics
• Managing a project and its versions on page 308
Data Federator allows you to enter a title and description for the archive.
This lets you maintain a history of all the projects that you have deployed.
You can also import an old version of a project.
• Data Federator creates a catalog on Data Federator Query Server using
the name you chose for the catalog. This overwrites any previous catalog
of the same name.
For example, if you deployed projectA in the catalog OP, and you deploy
projectB in the catalog OP, projectA is overwritten.
Related Topics
• Managing a project and its versions on page 308
Note:
You can deploy an empty project, but it is not useful until you have completed
the steps above.
This procedure shows how to deploy your project to Data Federator Query
Server. When you deploy a project on Query Server, your datasources and
target tables can be queried by an application that connects to Query Server.
1. Open the project.
2. Click Deploy.
3. In the Default deployment address pane, enter the deployment options.
Note:
Choose a unique catalog name for the project. If you choose a catalog
name that already has a project deployed in it, you will overwrite the
existing catalog.
4. Type a name and description for the deployed version in the General
pane.
5. Click Save.
Your project is accessible for querying through Data Federator Query
Server, at the catalog name that you specified. Target tables are available
in the schema named targetschema, while datasource tables are available
in the schema named source.
Note:
Any previous deployment in the same catalog is overwritten.
The target tables are deployed as indicated by the option Deploy only
integrated target tables.
Related Topics
• Opening a project on page 41
• Managing target tables on page 46
• Making your datasource final on page 212
• Mapping datasources to targets process overview on page 216
• Servers on which projects are deployed on page 322
• User rights on deployed catalogs and tables on page 322
• Storage of deployed projects on page 323
• Version control of deployed projects on page 323
• Reference of project deployment options on page 327
• Using deployment contexts on page 325
For example, you can define a deployment context for a group of datasources
running on a development server, and another deployment context for the
same group of datasources running on a production server.
Within each deployment context that you define for a project, you use an
identical set of deployment parameter names to define the connection
parameters common to each datasource. You then use these names in your
datasource definition rather than the actual values; at deployment time, you
choose the deployment context whose values are applied. The deployment
parameters that you can use with a datasource definition depend on the
connection's resource type.
Related Topics
• Defining deployment parameters for a project on page 155
• Defining a connection with deployment context parameters on page 156
Parameter                  Description

Server address and         specifies the address and port of the Data
Server port                Federator Query Server on which you want to
                           deploy your project

Catalog name               specifies the name that you want your project to
                           have on Data Federator Query Server

Add servers button         displays the option to add a server or servers,
                           when you are deploying to a cluster

Move to button             lets you move a server within a cluster of servers
8 Managing changes
Overview
This section describes the impact of your changes in Data Federator.
When working on a project in Data Federator, you can make changes to the
following components.
• targets
• mappings
• datasources
• lookup tables
• domain tables
• constraint checks
The type of change you make to each of these components can impact some
of the other components. This section lists what you should verify for each
type of change.
Data Federator indicates values that you must fix before your changes are complete.
Modifying a target
To modify a target, see Adding a target table manually on page 46.
Modifying a mapping
To modify a mapping, see Mapping datasources to targets process overview
on page 216.
9
9 Introduction to Data Federator Query Server
Data Federator Query Server overview
The Data Federator Query Server extends standard database user
management and the SQL query language. This allows the query engine
and virtual target tables to provide additional functionality and higher
performance.
Real-time access to sources of data is divided into two steps: the connector
and the driver.
The illustration below shows an example of how the Data Federator Query
Server relates to the sources of data.
The database administrator is the person who sets up the connection and
allows others to connect to the database. This person is not necessarily the
data expert who controls the Data Federator Designer. For this reason, a
system of aliases facilitates access to the often highly-customized database
resources.
Related Topics
• The Data Federator application on page 26
Related Topics
• About user accounts, roles, and privileges on page 504
• Managing resources using Data Federator Administrator on page 483
• Data Federator Query Server overview on page 344
• Query execution overview on page 530
Security recommendations
Business Objects recommends that you use a firewall and the standard
HTTP protocol to protect access to the Data Federator Query Server.
10 Connecting to Data Federator Query Server using JDBC/ODBC drivers
Connecting to Data Federator Query Server using JDBC
This procedure shows you what you need to install in order to use the JDBC
driver for Data Federator Query Server in your client application.
This procedure applies when you have the Data Federator or Data Federator
Drivers installer.
1. Use the Data Federator or Data Federator Drivers installer on the Data
Federator CD-ROM to install the JDBC driver.
See the Data Federator Installation Guide for installation details.
2. Add data-federator-installation-dir/JdbcDriver/lib/thindriver.jar
to the classpath that your client application must search when
loading the Data Federator Query Server JDBC driver.
3. Use this as the class name of the JDBC driver that your client application
loads:
com.businessobjects.datafederator.jdbc.DataFederatorDriver
Note:
Data Federator remains compatible with previous versions. You can still
use the class name LeSelect.ThinDriver.ThinDriver, but it is recommended
that you update to the new name above.
If you do not have the Data Federator installer, but you can access the JDBC
files that come with Data Federator, then you can make a JDBC connection
as follows.
1. Retrieve thindriver.jar from the machine where Data Federator is
installed, from the directory
data-federator-installation-dir/JdbcDriver/lib.
If:
• the parameter commProtocol=jacORB is used, you will also need
avalon-framework-4.1.5.jar, jacorb.jar and logkit-1.2.jar
• your client application uses JDK 1.4, you will also need icu4j.jar
2. Copy these files to a directory of your choice
(your-jdbc-driver-directory).
3. Add your-jdbc-driver-directory/thindriver.jar to the classpath
that your client application must search when loading the Data Federator
Query Server JDBC driver.
4. Launch your client application.
For certain applications, you may need to launch the JVM with the
following option:
-Djava.endorsed.dirs=your-jdbc-driver-directory
jdbc:datafederator://host[:port][/[catalog]][[;param-name=value]*]
Note:
Data Federator remains compatible with previous versions. You can still
use the url prefix jdbc:leselect:, but it is recommended that you update
to the new prefix above.
The catalog must be the same as the catalog name you used to deploy
one of your projects. For example, if you named your catalog "OP":
jdbc:datafederator://localhost/OP
The parameters in the JDBC connection URL let you configure the
connection.
Note:
The classpath that your application uses must include:
your-jdbc-driver-directory/thindriver.jar
Related Topics
• Installing the JDBC driver with the Data Federator installer on page 350
• JDBC URL syntax on page 358
• Parameters in the JDBC connection URL on page 361
• Deploying a version of a project on page 324
• Example Java code for connecting to Data Federator Query Server using
JDBC on page 353
The following code block suggests how to use the Data Federator driver to
connect to Data Federator Query Server and execute an SQL query.
You can replace the SQL statement in this example with any statement that
is supported by Data Federator. For the names of the stored procedures that
you can call using this kind of code, see the list of stored procedures.
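Since the original code listing is not reproduced here, the following sketch suggests the shape of such a program, based on the driver class name and URL syntax given in this chapter. The host, port, catalog, user, password, and table names are illustrative assumptions, not values from your installation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DataFederatorExample {

    // Build a JDBC URL following the syntax documented in this guide:
    // jdbc:datafederator://host[:port][/[catalog]]
    static String buildUrl(String host, int port, String catalog) {
        return "jdbc:datafederator://" + host + ":" + port + "/" + catalog;
    }

    // Connect and run a query. Requires thindriver.jar on the classpath
    // and a running Query Server; the user, password, and table names
    // below are illustrative.
    static void runQuery() throws Exception {
        Class.forName(
            "com.businessobjects.datafederator.jdbc.DataFederatorDriver");
        try (Connection conn = DriverManager.getConnection(
                 buildUrl("localhost", 3055, "OP"), "myUser", "myPassword");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM \"OP\".\"targetschema\".\"MyTable\"")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }

    public static void main(String[] args) {
        // Print the URL only; runQuery() needs a live Query Server.
        System.out.println(buildUrl("localhost", 3055, "OP"));
    }
}
```

Note that 3055 is the default port documented later in this chapter, so in practice you can omit it from the URL.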
Related Topics
• JDBC URL syntax on page 358
• Parameters in the JDBC connection URL on page 361
• System table reference on page 728
• List of stored procedures on page 744
To connect to Data Federator Query Server via ODBC, you must use the
OpenAccess ODBC to JDBC Bridge.
The Data Federator Drivers installer installs and configures the OpenAccess
ODBC to JDBC Bridge.
Driver Class
com.businessobjects.datafederator.jdbc.DataFederatorDriver

URL
jdbc:datafederator://<host>[:<port>][/[<catalog>]][[;param-name=value]*]
• The <catalog> must be the same as the catalog name you used to deploy
one of your projects.
• For example, if you named your catalog "OP":
jdbc:datafederator://localhost/OP
• Add parameters in the JDBC connection URL as required.
4. In your ODBC client application, use the DSN name that you created in
your "ODBC Data Source Administrator".
Your client application can establish an ODBC connection to Data
Federator Query Server.
Related Topics
• Installing the ODBC driver for Data Federator (Windows only) on page 354
• Parameters in the JDBC connection URL on page 361
• Using ODBC when your application already uses another JVM on page 357
• Deploying a version of a project on page 324
You should add everything after the equal sign (=) to the classpath that
your application uses.
3. Change the value of the JVM_DLL_NAME property to the "JVM" path that
your application uses.
For example, when you install Data Federator, the value of the
JVM_DLL_NAME property may be:
You should change everything after the equal sign (=) to the path to the
"JVM" that your application uses.
Accessing data
You can access data in Data Federator using SQL statements.
Your queries include the names of the catalog, schema, and table, unless
the catalog or schema names are specified by default.
"[catalog-name]"."[schema-name]"."[table-name]"
For details on catalog, schema and table naming, see Data Federator SQL
grammar on page 713.
Related Topics
• Properties of user accounts on page 511
The table below describes the parts of the URL syntax shown above, and
applies to all following examples:
port
• This part is optional.
• If you do not name a port, the URL points to port 3055.

catalog
• This part is optional.
• If you do not name a catalog, the URL points to /OP.

Note:
In the above syntax, each host entry can be either Data Federator Query
Server or Data Federator Connection Dispatcher.
Use the JDBC URL to define values for the parameters. Any values that you
set in the JDBC URL will override, for a single connection, the default values
and the values for your client.
Example:
The following example URL causes Data Federator to:
• apply fault tolerance on host1, host2 and host3
• apply connection failover if the host mainhost is down
jdbc:datafederator://mainhost;alternateServers=host1,host2,host3;user=jill;
Related Topics
• Configuring fault tolerance for Data Federator on page 601
• Parameters in the JDBC connection URL on page 361
This section lists the parameters you can add in the JDBC connection URL.
Note:
All parameters are case-sensitive.
When you extract the files from thindriver.jar, you will find the following
two files, which contain the default values of all the parameters.
• thindriver.jar/properties/thin_params.properties
• thindriver.jar/properties/thin_site_params.properties
Parameter Description

user
user=[username]

password
password=[password]

catalog
catalog=[catalog-name]

schema
schema=[schema-name]

alternateServers
alternateServers=host1[:port1][,host2[:port2][,...]]
alternateServers=host1[:port1][&host2[:port2][&...]]

alternateServersFile
alternateServersFile=true
alternateServersFile=false
This file is at USER_HOME/.datafederator/alternate_servers.list

checkVersion
checkVersion=[STRICT|MAJOR|NONE]

commProtocol
commProtocol=[JacORB|JDKORB]

commWindow
commWindow=[integer]

dataFetcher
dataFetcher=[PREFETCH|ONDEMAND]

enforceMetadataColumnSize
enforceMetadataColumnSize=[true|false]
When the enforceMetadataColumnSize parameter is set to true,
maxMetadataColumnSize is also used for the following metadata
information: maximum catalog name length, maximum schema name
length, maximum table name length, maximum column name length,
maximum user name length, maximum character literal length and
maximum binary literal length. The maximum query statement length is
defined as (1000 * maxMetadataColumnSize).

maxMetadataColumnSize
maxMetadataColumnSize=[integer]
When the enforceMetadataColumnSize parameter is set to true,
maxMetadataColumnSize is also used for the following metadata
information: maximum catalog name length, maximum schema name
length, maximum table name length, maximum column name length,
maximum user name length, maximum character literal length and
maximum binary literal length. The maximum query statement length is
defined as (1000 * maxMetadataColumnSize).

enforceStringSize
enforceStringSize=[true|false]

maxStringSize
maxStringSize=[integer]
If the enforceStringSize parameter is set to true, then when a VARCHAR
value has a size greater than maxStringSize, the value is truncated and a
warning message is set.

enforceServerMetadataDecimalSize
enforceServerMetadataDecimalSize=[none|maxScale|fixedScale]

enforceMaxDecimalSize
enforceMaxDecimalSize=[none|maxScale|fixedScale]
The enforceMaxDecimalSize parameter is used in conjunction with
enforceServerMetadataDecimalSize, primarily to configure the precision
and scale of values returned by Data Federator Query Server.

maxDecimalPrecision
maxDecimalPrecision=[>=20]

maxDecimalScale
maxDecimalScale=[0<=maxDecimalScale<=maxDecimalPrecision]

ODBCMode
ODBCMode=[true|false]

rowsAtATime
rowsAtATime=[integer]
It is recommended not to force this to any value, as its dynamic adjustment
helps to prevent client starvation. Clients may be found in situations where
they cannot retrieve data for a long time if another set of clients has forced
the fetch size to a large value.

autoReconnect
autoReconnect=[YES|NO]
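The enforceStringSize/maxStringSize behavior described above can be sketched as follows. This is only a client-side illustration of the truncation rule, not the driver's actual implementation, and warning handling is omitted.

```java
public class TruncateDemo {

    // When enforcement is on and a VARCHAR value is longer than
    // maxStringSize, the value is truncated to maxStringSize characters.
    static String enforce(String value, int maxStringSize) {
        return value.length() > maxStringSize
                ? value.substring(0, maxStringSize)
                : value;
    }

    public static void main(String[] args) {
        System.out.println(enforce("ABCDEFGHIJ", 4)); // longer value is cut
        System.out.println(enforce("AB", 4));         // shorter value kept
    }
}
```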
Related Topics
• Configuring the precision and scale of DECIMAL values returned from
Data Federator Query Server on page 535
JDBC limitations
The following JDBC methods are not supported by the Data Federator JDBC
driver.
• Connection methods:
• setAutoCommit (no effect)
• setHoldability (no effect)
• setReadOnly (Data Federator Query Server is always read-only)
• setSavePoint, releaseSavePoint
• setTransactionIsolation (no effect)
• setTypeMap (no effect)
• commit, rollback (no effect)
• Statement methods:
• setCursorName (no effect)
• setFetchDirection (only supported when the parameter direction is
ResultSet.FETCH_FORWARD)
• PreparedStatement methods:
• ResultSet methods:
• setFetchDirection (only supported when the parameter direction is
ResultSet.FETCH_FORWARD)
ODBC limitations
The following ODBC functions are not supported by the Data Federator
ODBC driver.
• SQLColumnPrivileges
• SQLColumnPrivilegesW
• SQLCopyDesc
• SQLFetchScroll
• SQLGetDescField
• SQLGetDescFieldW
• SQLGetDescRec
• SQLGetDescRecW
• SQLGetDiagField
• SQLGetDiagFieldW
• SQLGetDiagRec
• SQLGetDiagRecW
• SQLNativeSQL
• SQLNativeSQLW
• SQLParamOptions
• SQLSetDescField
• SQLSetDescRec
• SQLSetDescRecW
• SQLSetPos
• SQLSetScrollOptions
• SQLTablePrivileges
• SQLTablePrivilegesW
SQL Constraints
This section lists the SQL syntax that is accepted by Data Federator Query
Server.
11 Using Data Federator Administrator
Data Federator Administrator overview
Before you begin, ensure you or your system administrator has followed the
steps to install and configure Data Federator in the Data Federator Installation
Guide.
Related Topics
• Data Federator Query Server overview on page 344
Server configuration
The following section details steps to consider when configuring your Data
Federator Query Server deployment.
Related Topics
• Objects tab on page 385
• Managing user accounts with SQL statements on page 516
• Administration tab on page 387
Objects tab
You use this tab to navigate the objects on Data Federator Query Server.
At the highest level, a list of functions appears. When you navigate within
the objects, the following tabs appear:
• Info tab, displays information about the object
• Content tab, displays the contents for that object in the form of a table
You use this tab most frequently to execute SQL queries and to perform
administrative functions on Data Federator Query Server.
When you first log in, the My Query Tool tab appears by default.
Related Topics
• Managing queries with Data Federator Administrator on page 397
• Key functions of Data Federator Administrator on page 347
Administration tab
You use this tab to monitor the queries that are running and to display a
history of the queries that have already executed.
Queries that are running are displayed in the Query Running tab. If you
have no running queries, nothing displays here.
The Query History tab displays the history of the last ten queries.
The Server Status menu item displays a summary of the status of Data
Federator Query Server.
The Connector Settings menu item lets you manage resources for
connectors to data sources.
The User Rights menu item lets you manage users, roles and permissions.
The Statistics menu item lets you view updated statistics of queries that
you run on Data Federator Query Server.
Statistics are estimates of the amount of data in a column or table. Data
Federator can use statistics to optimize the queries it runs. By default, Data
Federator calculates statistics by itself. If you want, you can override the
values that Data Federator calculates.
You can use the Global Refresh of Statistics pane to refresh the statistics
of your tables automatically.
1. Select the option that controls how you want to perform the global refresh.
For details, see List of options for the Global Refresh of Statistics pane
on page 396.
2. Use the Statistics list pane to select the list of tables for which you want
statistics to be refreshed automatically.
3. Click OK.
You can use the Statistics List pane to select the list of tables for which
you want to display statistics.
1. Select List only statistics in.
2. Select values in the Catalog, Schema and Table boxes.
The statistics will be displayed for the tables that you select.
You can use the Statistics List pane to record the tables and columns for
which statistics were recently requested by Query Server.
1. Click List only cardinalities recently requested.
2. Set the session parameter Leselect.core.statistics.recorder.enabled to
true.
When queries are executed, the list of tables with their statistics is
displayed automatically.
Option Description

Only columns
computes the number of distinct values for each column

Only tables
computes the number of distinct values in each table

Excluding when value is overridden by user
the statistics will not be computed if you entered a value during the
definition of the datasource
Related Topics
• Defining the schema of a datasource on page 204
Select No from the Fetch all rows drop-down list if you want to fetch
only the number of rows that are displayed.
Related Topics
• Starting Data Federator Administrator on page 384
• Data Federator Query Server query language on page 688
• Query execution overview on page 530
12 Configuring connectors to sources of data
About connectors in Data Federator
For most sources of data that support JDBC, you just copy the JDBC
driver to a directory where Data Federator can find it, and there is nothing
more to configure.
• proprietary middleware
For sources of data that do not support JDBC, you must install the
vendor's middleware, and point Data Federator to the middleware. In
most cases, you have already installed the middleware, and you just need
to tell Data Federator where to find it.
In order to configure a connector for Access, you must install an ODBC driver
and create an entry in your operating system's ODBC data source
administrator.
1. Install the ODBC driver for Access.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).
Parameter Description
Data Source The name that you defined in your operating system's data
Name source manager, in the field Data Source Name.
In order to configure a connector for DB2, you must install JDBC drivers.
These drivers are usually available from the DB2 website.
1. Download the JDBC driver for DB2.
You get a driver in the form of a .jar file or several .jar files.
Use the following link to download the IBM DB2 JDBC Universal Driver.
The product is called IBM Cloudscape.
After you install IBM Cloudscape, you can find the driver file in ibm-
cloudscape-install-directory/lib/db2jcc.jar. The file db2jcc.jar
is the driver you can use for DB2.
http://www14.software.ibm.com/webapp/download/
2. Copy the driver .jar files to
data-federator-install-dir/LeSelect/drivers
This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.
When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.
Related Topics
• Pointing a resource to an existing JDBC driver on page 460
The table below lists the properties that you can configure in Informix
resources.
BOOLEAN addSchema
Set to True if you want to see the schema as a prefix for table names.

example:
'TABLE;SYSTEM TABLE;VIEW'

callerImpersonation: authentication in the database is done using the
same credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping
from the Data Federator user to the user of the database. In this case, the
parameter loginDomain should be set to a registered login domain.

BOOLEAN ignoreKeys
Set to True if you do not want the connector to query the data source to
get keys and foreign keys metadata.

BOOLEAN isPasswordEncrypted
Set to True if the password is encrypted. The password is defined by the
password parameter.

0 means no limit.

INTEGER nbPreparedStatementsPerQuery
Maximum number of prepared statements in the query pool.

pre-defined value networkLayer
Specifies the network layer of the database that you want to connect to.
When you create a resource, choose the value that corresponds to the
database to which you are connecting.

STRING password
Defines the password of the corresponding user.
Note:
This property is a keyword, so you must enclose it in quotes when using
the ALTER RESOURCE statement, e.g. "password".

example:

BOOLEAN setFetchSize
Defines if the connector should set the default fetch size.

example:
jdbc3 format:
SQL92 format:
example:

True/Yes False/No supportsboolean
Specifies if the middleware or database supports BOOLEANs as
first-class objects. The default value for this parameter depends on the
database. For pre-defined resources, this parameter is already set to its
correct value, but you can override it. Default: No.

integer maxRows
Lets you define the maximum number of rows you want returned from
the database.

boolean allowPartialResults
Must be yes to allow partial results if maxRows is set.
Related Topics
• transactionIsolation property on page 478
In order to configure a connector for MySQL, you must install JDBC drivers.
These drivers are usually available from the MySQL website.
1. Download the JDBC driver for MySQL.
You get a driver in the form of a .jar file or several .jar files.
http://dev.mysql.com/downloads/connector/j/5.1.html
2. Copy the driver .jar files to
data-federator-install-dir/LeSelect/drivers
This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.
When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.
Related Topics
• Pointing a resource to an existing JDBC driver on page 460
To force a collation value for MySQL, change the value of the
datasourceSortCollation, datasourceCompCollation or
datasourceBinaryCollation JDBC resource properties.
Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483
In order to configure a connector for Oracle, you must install JDBC drivers.
These drivers are usually available from the Oracle website.
1. Download the JDBC driver for Oracle.
You get a driver in the form of a .jar file or several .jar files.
http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
2. Copy the driver .jar files to
data-federator-install-dir/LeSelect/drivers
This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.
When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.
Related Topics
• Pointing a resource to an existing JDBC driver on page 460
Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483
%   ?225
/   ?22f
\   ?25c
.   ?22e
#   ?223
?   ?23f
This version of Data Federator supports Netezza NPS Server version 3.0
or 3.1.
To let Data Federator connect to your Netezza NPS Server database, you
must install the Netezza ODBC driver (version 3.0 or 3.1).
Parameter Description
Data Source The name that you defined in your operating system's data
Name source manager, in the field Data Source Name.
The table below lists the properties that you can configure in Netezza
resources.
BOOLEAN addSchema
Set to True if you want to see the schema as a prefix for table names.

example:
'TABLE;SYSTEM TABLE;VIEW'

callerImpersonation: authentication in the database is done using the
same credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping
from the Data Federator user to the user of the database. In this case, the
parameter loginDomain should be set to a registered login domain.

BOOLEAN ignoreKeys
Set to True if you do not want the connector to query the data source to
get keys and foreign keys metadata.

BOOLEAN isPasswordEncrypted
Set to True if the password is encrypted. The password is defined by the
password parameter.

0 means no limit.

INTEGER nbPreparedStatementsPerQuery
Maximum number of prepared statements in the query pool.

pre-defined value networkLayer
Specifies the network layer of the database that you want to connect to.
When you create a resource, choose the value that corresponds to the
database to which you are connecting.

STRING password
Defines the password of the corresponding user.
Note:
This property is a keyword, so you must enclose it in quotes when using
the ALTER RESOURCE statement, e.g. "password".

example:

BOOLEAN setFetchSize
Defines if the connector should set the default fetch size.

example:
jdbc3 format:
SQL92 format:
example:

True/Yes False/No supportsboolean
Specifies if the middleware or database supports BOOLEANs as
first-class objects. The default value for this parameter depends on the
database. For pre-defined resources, this parameter is already set to its
correct value, but you can override it. Default: No.

integer maxRows
Lets you define the maximum number of rows you want returned from
the database.

boolean allowPartialResults
Must be yes to allow partial results if maxRows is set.
Related Topics
• transactionIsolation property on page 478
Data Federator loads the JDBC driver for Progress OpenEdge. The JDBC
driver for Progress connects to the OEM SequeLink Server. The OEM
SequeLink Server connects to the Data Federator DataDirect Progress
OpenEdge ODBC driver. The ODBC driver connects to the Progress
OpenEdge 10.0B client. Finally, the Progress OpenEdge 10.0B client
connects to the Progress database.
The OEM SequeLink Server and the Data Federator DataDirect Progress
OpenEdge ODBC driver should be on the same Windows machine as the
Progress OpenEdge 10.0B client.
The connection from the Progress OpenEdge 10.0B client to the Progress
database is covered in your Progress documentation.
Related Topics
• Installing OEM SequeLink Server for Progress connections on page 423
• Configuring middleware for Progress connections on page 423
In order to bridge the JDBC driver for Progress to the Data Federator
DataDirect Progress OpenEdge ODBC driver, you must install the SequeLink
Server for ODBC Socket 5.5 OEM version. The SequeLink Server installation
is provided on the Data Federator DVD.
• Run the following script from the Data Federator DVD.
drivers/sl550socket/oemsetup.bat
DLC=C:\Progress\OpenEdge
PATH=%PATH%;%DLC%\bin
3. Run the Data Federator Driver installer and choose an install set that
contains the connector driver for Progress OpenEdge 10.0B.
4. Open your operating system's "ODBC Data Source Administrator".
On Windows, you configure DSN entries in the "ODBC Data Source
Administrator".
6. Use the Data Federator DVD and install the SequeLink Server OEM
version (drivers/sl550socket/oemsetup.bat).
The SequeLink Server is a bridge between the JDBC driver for Progress
and the Data Federator DataDirect OpenEdge driver.
When you complete the above steps, any connections that users create of
type jdbc.progress.openedge will connect to Progress through the SequeLink
Server.
Give the following information to users of Data Federator Designer that want
to connect to this Progress server.
Parameter Description
Related Topics
• Installing OEM SequeLink Server for Progress connections on page 423
In order to use a SAS connector, you must install a driver that lets Data
Federator connect to a SAS/SHARE server.
You can install the driver as you would install any other JDBC driver for Data
Federator.
This version of Data Federator supports SAS with the SAS/SHARE server
version 9.1 or higher.
In order to connect to SAS sources from Data Federator, you must install a
SAS/SHARE driver for JDBC.
The SAS/SHARE driver for JDBC should be on the same machine as Data
Federator.
SAS is sensitive to the ordering of tables in the from clause. For the fastest
response from the SAS/SHARE server, the table names in the from clause should
appear in descending order with respect to their cardinalities.
You can ensure that Data Federator generates tables in this order by keeping
the statistics in Data Federator accurate. You can do this using Data
Federator Administrator.
To control the order of tables manually, you can also set the sasWeights
resource property for the SAS JDBC connector.
Related Topics
• Managing statistics with Data Federator Administrator on page 394
• Managing resources and properties of connectors on page 483
• List of JDBC resource properties for SAS on page 428
The table below lists the properties that you can configure in JDBC resources.
Example
EMPLOYEE=16;DEPARTMENT=4
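As an illustration of how the sasWeights example above might be applied, the statement below is a sketch only: the resource name jdbc.sas.sas9 is taken from the pre-defined resources listed later in this guide, but the property-assignment syntax used with ALTER RESOURCE is an assumption, not confirmed by this extract.

```sql
-- Sketch: weight tables by cardinality so that larger tables appear
-- first in the from clause. The SET syntax here is assumed.
ALTER RESOURCE "jdbc.sas.sas9"
SET 'sasWeights' = 'EMPLOYEE=16;DEPARTMENT=4'
```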
Related Topics
• List of JDBC resource properties on page 461
In order to configure a connector for SQL Server, you must install JDBC
drivers. These drivers are available from the Microsoft website.
1. Download the JDBC driver for SQL Server.
You get a driver in the form of a .jar file or several .jar files.
Note:
The recommended driver for SQL Server 2000 is the SQL Server JDBC driver
SP3 (version 2.2.0040), available at:
http://www.microsoft.com/downloads/details.aspx?familyid=07287b11-0502-461a-b138-2aa54bfdc03a&displaylang=en
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers
This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.
When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.
Related Topics
• Pointing a resource to an existing JDBC driver on page 460
To force a collation value for SQL Server, change the value of the
datasourceSortCollation, datasourceCompCollation, or datasourceBinaryCollation
JDBC resource properties.
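As a hedged sketch of such a change, the statement below uses the pre-defined resource name jdbc.sqlserver.sqlserver2005 and the collation value Latin1_general_bin, both of which appear elsewhere in this guide; the property-assignment syntax itself is an assumption.

```sql
-- Sketch only: force a binary collation on a SQL Server source.
-- The SET syntax is assumed, not documented in this extract.
ALTER RESOURCE "jdbc.sqlserver.sqlserver2005"
SET 'datasourceBinaryCollation' = 'Latin1_general_bin'
```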
Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483
The middleware comes with Sybase and lets Data Federator talk to the
database. For details on installing it, see the Sybase documentation.
Once you install and configure the middleware, you can use Data Federator
to connect to Sybase data sources.
To let Data Federator connect to your Sybase database, you must have the
following configuration.
• Data Federator Query Server and Sybase Open Client library, 12.5
or 15.0, must be installed on the same machine
• the Sybase Open Client library must be included in the environment
variable that defines the library path
$ export SYBASE=/opt/sybase
$ export SYBASE_OCS=OCS-15_0
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${SYBASE}/${SYBASE_OCS}/lib:${SYBASE}/${SYBASE_OCS}/lib3p
$ export SYBASE=/opt/sybase
$ export SYBASE_OCS=OCS-15_0
$ export LIB_PATH=$LIB_PATH:${SYBASE}/${SYBASE_OCS}/lib:${SYBASE}/${SYBASE_OCS}/lib3p
Parameter Description
Related Topics
• http://infocenter.sybase.com/help/index.jsp
The table below lists the properties that you can configure in Sybase
resources.
BOOLEAN addSchema
Set to True if you want to see the schema as a prefix for table names.
STRING allowTableType
The list of table types to take into consideration, for example: 'TABLE;SYSTEM TABLE;VIEW'
pre-defined value authenticationMode
callerImpersonation: authentication in the database is done using the same credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping from the Data Federator user to the user of the database. In this case, the parameter loginDomain should be set to a registered login domain.
STRING database
Sybase only
BOOLEAN ignoreKeys
Set to True if you do not want the connector to query the data source to get keys and foreign keys metadata.
INTEGER maxConnectionIdleTime
The maximum time an idle connection is kept in the pool of connections. Unit is milliseconds. 0 means no limit.
INTEGER nbPreparedStatementsPerQuery
Maximum number of prepared statements in the query pool.
pre-defined value networkLayer
Specifies the network layer of the database that you want to connect to. When you create a resource, choose the value that corresponds to the database to which you are connecting.
STRING password
Defines the password of the corresponding user.
Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "password".
BOOLEAN setFetchSize
Defines if the connector should set the default fetch size.
True/Yes, False/No supportsboolean
Specifies if the middleware or database supports BOOLEANs as first-class objects. The default value for this parameter depends on the database. For pre-defined resources, this parameter is already set to its correct value, but you can override it. Default: No.
integer maxRows
Lets you define the maximum number of rows you want returned from the database.
boolean allowPartialResults
Must be yes to allow partial results if maxRows is set.
Related Topics
• transactionIsolation property on page 478
In order to configure a connector for Sybase IQ, you must install an ODBC
driver and create an entry in your operating system's ODBC data source
administrator.
1. Install the ODBC driver for Sybase IQ.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).
The table below lists the properties that you can configure in Sybase IQ
resources.
BOOLEAN addSchema
Set to True if you want to see the schema as a prefix for table names.
STRING allowTableType
The list of table types to take into consideration, for example: 'TABLE;SYSTEM TABLE;VIEW'
pre-defined value authenticationMode
callerImpersonation: authentication in the database is done using the same credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping from the Data Federator user to the user of the database. In this case, the parameter loginDomain should be set to a registered login domain.
STRING database
Sybase only
BOOLEAN ignoreKeys
Set to True if you do not want the connector to query the data source to get keys and foreign keys metadata.
INTEGER maxConnectionIdleTime
The maximum time an idle connection is kept in the pool of connections. Unit is milliseconds. 0 means no limit.
INTEGER nbPreparedStatementsPerQuery
Maximum number of prepared statements in the query pool.
pre-defined value networkLayer
Specifies the network layer of the database that you want to connect to. When you create a resource, choose the value that corresponds to the database to which you are connecting.
STRING password
Defines the password of the corresponding user.
Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "password".
BOOLEAN setFetchSize
Defines if the connector should set the default fetch size.
True/Yes, False/No supportsboolean
Specifies if the middleware or database supports BOOLEANs as first-class objects. The default value for this parameter depends on the database. For pre-defined resources, this parameter is already set to its correct value, but you can override it. Default: No.
integer maxRows
Lets you define the maximum number of rows you want returned from the database.
boolean allowPartialResults
Must be yes to allow partial results if maxRows is set.
Related Topics
• transactionIsolation property on page 478
To let Data Federator connect to your Teradata database, you must install
a Teradata ODBC driver (versions 3.04 or 3.05).
Parameter Description
Data Source The name that you defined in your operating system's data
Name source manager, in the field Data Source Name.
The table below lists the properties that you can configure in Teradata
resources.
BOOLEAN addSchema
Set to True if you want to see the schema as a prefix for table names.
STRING allowTableType
The list of table types to take into consideration, for example: 'TABLE;SYSTEM TABLE;VIEW'
pre-defined value authenticationMode
callerImpersonation: authentication in the database is done using the same credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping from the Data Federator user to the user of the database. In this case, the parameter loginDomain should be set to a registered login domain.
capabilities
The list of capabilities supported by the database, for example: 'isjdbc=true;outerjoin=false;rightouterjoin=true'
BOOLEAN isPasswordEncrypted
Set to True if the password is encrypted. The password is defined by the password parameter.
INTEGER maxConnectionIdleTime
The maximum time an idle connection is kept in the pool of connections. Unit is milliseconds. 0 means no limit.
INTEGER nbPreparedStatementsPerQuery
Maximum number of prepared statements in the query pool.
pre-defined value networkLayer
Specifies the network layer of the database that you want to connect to. When you create a resource, choose the value that corresponds to the database to which you are connecting.
BOOLEAN setFetchSize
Defines if the connector should set the default fetch size.
True/Yes, False/No supportsboolean
Specifies if the middleware or database supports BOOLEANs as first-class objects. The default value for this parameter depends on the database. For pre-defined resources, this parameter is already set to its correct value, but you can override it. Default: No.
integer sampleSize
Lets you define the maximum number of rows to return in a random sample from the database.
boolean allowPartialResults
Must be yes to allow partial results if maxRows is set.
The following capabilities are set to false: minAggregate, maxAggregate,
avgAggregate, sumAggregate, union, unionAll, countAggregate,
aggregateDistinct.
By convention, the names of JDBC connectors start with the prefix "jdbc.".
If you create a JDBC connector, you should maintain this convention.
Related Topics
• Managing resources and properties of connectors on page 483
• List of JDBC resource properties on page 461
By default, Data Federator looks for JDBC drivers in the directory
data-federator-install-dir/LeSelect/drivers. You can keep the drivers in
a different directory by changing the path in the resource.
For example, to set the directory for the oracle9 resource, use the following
statement.
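The statement itself is not shown in this extract. A plausible sketch, using the driverLocation property and an example path that both appear later in this guide; the resource name jdbc.oracle.oracle9 and the property-assignment syntax are assumptions:

```sql
-- Sketch only: point the oracle9 resource at a driver archive outside
-- the default drivers directory. Resource name and SET syntax assumed.
ALTER RESOURCE "jdbc.oracle.oracle9"
SET 'driverLocation' = '/usr/local/javaapps/oracle_classes12.zip'
```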
Related Topics
• Managing resources using Data Federator Administrator on page 483
• Modifying a resource property using SQL on page 493
• List of JDBC resource properties on page 461
The table below lists the properties that you can configure in JDBC resources.
Parameter Description
url
the url to the database
Example
jdbc:oracle:thin:@server.mydo
main.com:1521:ora
jdbcClass
This property is required for JDBC resources to
work in Data Federator Designer.
Example
oracle.jdbc.driver.OracleDriver
driverLocation
This property is required for JDBC resources to
work in Data Federator Designer.
Example
/usr/local/javaapps/oracle_classes12.zip
or C:\DRIVERS\oracle_classes12.zip
driverProperties
a list of driver properties
Example
selectMethod=cursor;connectionRetryCount=2
username
the name of the user account used to connect to the database
Example
smith
password
the password of the corresponding user account
isPasswordEncrypted
specifies if the password has been encrypted by Data Federator Designer
authenticationMode
one of: {configuredIdentity, callerImpersonation,
principalMapping}
1. configuredIdentity: Authentication on the
database is done using value of parameters
username and password.
2. callerImpersonation: Authentication on the
database is done using the same credential as
used to connect to Query Server.
3. principalMapping: Authentication on the database
is done using a mapping from the user of the
connector (principal) to a database user account.
In this case, the parameter loginDomain should
be set to a registered login domain.
loginDomain
the name of a login domain
supportsCatalog
specifies if the JDBC driver supports the notion
of catalog
escapeIdentifierQuoteString
defines the string used to escape the identifier quote string (as returned
by DatabaseMetaData#getIdentifierQuoteString) when it appears inside an
identifier
addCatalog
specifies if Data Federator should prefix table
names with the name of the catalog
supportsSchema
specifies if the JDBC driver supports the notion
of schema
schema
the schema or schema pattern, or the list of
schema or schema patterns you want to access
Example
schema="SMITH", schema="SMITH;JOHN",
schema="SM%"
showAllTables
specifies if Data Federator should show all tables
from the selected schemas or schema patterns
ignoreKeys
specifies if the wrapper should not query the JDBC
driver to get key or foreign key metadata
supportsBoolean
specifies if the JDBC driver or database does not
support booleans as first class objects
trimTrailingSpaces
specifies if Data Federator should remove extra
spaces from catalog, schema, table, column, key
and foreign key names
Some JDBC drivers return metadata padded with
blank spaces. Setting this parameter to yes will
ensure that extra spaces in catalog, schema, ta-
ble, column, key and foreign key names are re-
moved.
collationName
Deprecated: use datasourceBinaryCollation in-
stead.
Example
collationName="latin1_bin"
datasourceBinaryCollation
the source collation to use for comparisons that need to be evaluated with
a binary collation (like, not like and function evaluations)
Unset by default.
Example
datasourceBinaryCollation="Latin1_general_bin"
datasourceCompCollation
the source collation to use for comparison operations
Unset by default.
Example
datasourceCompCollation="Latin1_general_ci_ai"
datasourceSortCollation
the source collation to use for sort operations (order by)
Unset by default.
Example
datasourceSortCollation="Latin1_general_ci_as"
compCollationCompatible
specifies if the collation for comparison operations
in the data source is compatible with the current
setting in Query Server
Example
compCollationCompatible="true"
sortCollationCompatible
specifies if the collation for sort operations (order by) in the data source
is compatible with the current setting in the Query Server
When set to true, the server can ignore the collation of sort operations,
and order by expressions can be safely pushed to the source.
Example
sortCollationCompatible="true"
sqlStringType
identifies the SQL dialect supported by the
database
one of:
• sql92
• sql99 (reserved for future usage)
• oracle8
• oracle9
• jdbc3 (JDBC syntax is used for outer joins)
• sas
Defaults to the SQL dialect supported by the
source as identified by the parameter sourceType.
If sourceType is undefined, then defaults to sql92.
sourceType
one of:
• oracle8R1
• oracle8R2
• oracle8R3
• oracle9R1
• oracle9R2
• sqlserver
• mysql
• db2
• access
• progress
• openedge
• sybase
• teradata
• sas
castColumnType
a list of mappings from the type used by the
database to the type used by JDBC
Example
allowTableType
a list of table types to take into consideration when
the metadata of the underlying database is re-
trieved
Elements are separated by the character ;. Do not
put spaces between the elements.
Example
TABLE;SYSTEM TABLE;VIEW
capabilities
a list of all capabilities supported by the database
Example
isjdbc=true;outerjoin=false;rightouter
join=true
nbPreparedStatementsPerQuery
defines the maximum number of statements that can be used concurrently when
executing parameterized queries
useParameterInlining
specifies whether the JDBC connector should use java.sql.PreparedStatement
or java.sql.Statement objects to execute parameterized queries
transactionIsolation
one of:
• TRANSACTION_READ_COMMITTED
• TRANSACTION_READ_UNCOMMITTED
• TRANSACTION_REPEATABLE_READ
• TRANSACTION_SERIALIZABLE
Default: not set.
defaultFetchSize
the default fetch size to set when creating statement objects
setFetchForwardDirection
specifies if fetch forward should be explicitly set
setReadOnly
specifies if connections should not be set to read only
sessionProperties
a list of session variables set on the database
Example
selectMethod=cursor;connection
RetryCount=2
useIndexInOrderBy
specifies if index (column position) should be used
instead of alias (column name) in the order by
clause of submitted queries
Example
translationFile
the name of the XML file which contains transla-
tion definitions
The value can be absolute or relative to data-federator-install-dir. If
this parameter is not specified, the default file will be used.
The table below lists the connection-management properties that you can configure in JDBC resources.
maxConnections
the maximum number of simultaneous connec-
tions to the underlying database
maxPoolSize
the maximum number of idle (free) connections
to keep in the pool
maxLoadPerConnection
the maximum load authorized for each connection
maxConnectionIdleTime
the maximum time an idle connection is kept in
the pool of connections
reaperCycleTime
Deprecated. There is now only one reaper for all
JDBC connector configurations. The system pa-
rameter leselect.core.jdbc.reaperCycleTime can
be used to control how often the connection reaper
should check for idle connections, or bad connec-
tions (those that are suspected to be broken due
to connection failure).
connectionTestQuery
the SQL test query that can be used to check if
connections to the underlying database are valid
Caution: this query should be cheap to execute.
Example
connectionFailureDetectionOnError
a keyword indicating the kind of connection failure detection that should
be done when an SQLException occurs
• sqlState: specifies that failure detection should be done using SQLState
codes
connectionFailureSQLStates
the list of SQLState codes that indicate a connection failure
Example
61000
Related Topics
• List of JDBC resource properties on page 461
The table below lists the most common JDBC driver classes and the syntax.
You can use these when configuring the JDBC resource property named
jdbcClass.
DB2
com.ibm.db2.jcc.DB2Driver
MySQL
com.mysql.jdbc.Driver
Oracle
oracle.jdbc.driver.OracleDriver
SAS
com.sas.net.sharenet.ShareNetDriver
SQLServer
com.microsoft.jdbc.sqlserver.SQLServerDriver
SQLServer 2005
com.microsoft.sqlserver.jdbc.SQLServerDriver
Related Topics
• List of JDBC resource properties on page 461
The table below defines the default values used by the pre-defined databases.
You can use these when configuring the JDBC resource property named
urlTemplate.
DB2
jdbc:db2://hostname[:port]/databasename
MySQL
jdbc:mysql://hostname[:port]/databasename
Oracle 8
jdbc:oracle:thin:@hostname[:port]:databasename
Oracle 9
jdbc:oracle:thin:@hostname[:port]:databasename
Oracle 10
jdbc:oracle:thin:@hostname[:port]:databasename
Progress
jdbc:sequelink://hostname[:port];serverDataSource=sequelinkdatasourcename
SQLServer 2000
jdbc:microsoft:sqlserver://hostname[:port];databasename=databasename
SQLServer 2005
jdbc:sqlserver://hostname[:port];databasename=databasename
Related Topics
• List of JDBC resource properties on page 461
transactionIsolation property
Predefined value
urlTemplate
This property defines the template of the JDBC URL used to connect to the
database.
This value is a hint. Data Federator Designer shows the value of this property
while adding a datasource, and users of Data Federator Designer complete
the remaining values.
String
For the URL templates of common JDBC resources, see List of pre-defined
JDBC URL templates on page 476.
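As a hedged sketch of overriding this hint for an Oracle resource: the template value below matches the pre-defined Oracle template listed in this guide, while the resource name and the property-assignment syntax are assumptions.

```sql
-- Sketch only: resource name and SET syntax are assumptions; the
-- template value is the pre-defined Oracle template from this guide.
ALTER RESOURCE "jdbc.oracle.oracle9"
SET 'urlTemplate' = 'jdbc:oracle:thin:@hostname[:port]:databasename'
```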
You can configure this resource as you would configure any resource in Data
Federator Administrator.
Note:
Any changes that you make to the resource ws.generic will apply to all web
services that are deployed on your installation of Data Federator Query
Server.
Related Topics
• Creating and configuring a resource using Data Federator Administrator
on page 486
• List of resource properties for web service connectors on page 480
The table below lists the properties that you can configure in web service
resources.
<soapenv:Body>
  <ns:GetQuote xmlns:ns="http://www.xignite.com/services/">
    <ns:Symbol>SAP</ns:Symbol>
  </ns:GetQuote>
</soapenv:Body>
Then if addNamespaceInParameter is true, Data Federator detects if the web
service expects a namespace. If so, it generates the line containing the
parameter as:
<ns:Symbol>SAP</ns:Symbol>
If addNamespaceInParameter is false, it generates the line as:
<Symbol>SAP</Symbol>
You can set the properties of connectors by using one of the pre-defined
resources. Resources let you re-use the same set of properties for different
connectors.
You can also make your own resources. When you make a resource, you
define properties, the values of the properties, and then you choose a name
for your set of properties. See the documentation on managing resources
and properties for details.
Note:
In order to use a resource to make a connection, you must install drivers for
your sources of data.
Task Actions
Create a resource
Delete a resource
Copy a resource
Delete a property
The values that users define when adding a datasource in Data Federator
Designer override the values defined in the resource.
Related Topics
• Data Federator Administrator overview on page 384
• Creating and configuring a resource using Data Federator Administrator
on page 486
• Copying a resource using Data Federator Administrator on page 488
Your resource name must begin with a letter [a-zA-Z]. The characters that
follow can be any number of alphanumeric [a-zA-Z0-9], dot . and
underscore _, in any order, but each dot must be immediately followed by
an alphanumeric or underscore.
A resource name must start with a prefix that identifies that type of the
resource.
You can create a new, empty resource, and then add the parameters that
you require.
A dialog box appears, prompting you to enter a name for your new
resource.
5. Enter a name for your resource and click OK.
The dialog box closes.
6. Below the Property Name heading, click Add a property.
7. From the Property Name pull-down list, select a property to add, and in
the Property Value field, enter a value for the property. Click OK to add
the property.
Data Federator Administrator validates your entry and displays an error
message if it is invalid for the property.
8. Repeat the process to add the properties that you want.
In order to be usable from Data Federator Designer, JDBC connectors
require some specific properties. See the list of properties for JDBC
resources to learn which properties are required.
When you finish, the new resource that you created is available to use in
your Data Federator Designer projects.
Related Topics
• Managing resources using Data Federator Administrator on page 483
• Valid names for resources on page 486
• List of JDBC resource properties on page 461
You can create a new resource by copying an existing resource, and then
adding and modifying the parameters that you require.
1. Log in to Data Federator Administrator.
The Data Federator Administrator screen is displayed.
2. At the top of the screen, click the Administration tab.
The Administration panel is displayed. At the left of the panel, the list
of administration options is displayed.
3. In the list of administration options, click the Connector Settings option.
The "Resource" panel is displayed.
4. In the Resource pull-down list, select the resource to copy, and click
the Copy icon.
A dialog box appears, prompting you to enter a name for the new copy.
5. Enter a name for the resource copy and click OK.
The properties configured in the copied resource are displayed.
6. Add, edit, and delete properties as required to configure the new resource.
When you finish editing the properties, click OK.
7. Repeat the process to add and modify the properties that you want.
In order to be usable from Data Federator Designer, JDBC connectors
require some specific properties. See the list of properties for JDBC
resources to learn which properties are required.
When you finish, the new resource that you created is available to use in
your Data Federator Designer projects.
Related Topics
• Managing resources using Data Federator Administrator on page 483
• Valid names for resources on page 486
Data Federator delivers a set of pre-defined resources for the most popular
databases. This table lists the names of those that are available with the
installation.
IBM Informix XPS (ODBC): odbc.informix.informixXPS84, odbc.informix.informixXPS85
SAS (JDBC through SAS/SHARE server): jdbc.sas.sas9
SQL Server 2005 (JDBC): jdbc.sqlserver.sqlserver2005
Sybase (native library, Open Client): openclient.sybase.sybaseASE12, openclient.sybase.sybaseASE15
Teradata (ODBC): odbc.teradata.teradataV2R5, odbc.teradata.teradataV2R6
Generic ODBC: odbc.generic.odbc, odbc.generic.odbc3
For example, the My Query Tool tab in Data Federator Administrator lets
you create, delete, and modify resources and their properties.
Related Topics
• Managing resources using Data Federator Administrator on page 483
• Data Federator Administrator overview on page 384
• List of JDBC resource properties on page 461
CREATE RESOURCE resource_name
This statement lets you create a new empty resource. Use the statement
Alter Resource to define associated properties.
Syntax
Figure 12-3: CREATERESOURCE statement
CREATE RESOURCE " new_resource_name "
[ FROM " existing_resource_name " ]
CREATE RESOURCE new_resource_name FROM existing_resource_name
Tip:
Quickly duplicating an existing resource
Using the syntax FROM existing_resource_name in this statement lets
you create a new resource with a set of initial properties. The new resource
inherits all the properties of the existing resource. This is an easy way to
duplicate resources.
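Based on the syntax shown in Figure 12-3, the sketch below first creates an empty resource and then duplicates it with its properties; the resource names are placeholders chosen to follow the naming rules described earlier in this chapter.

```sql
-- Create an empty resource, then duplicate it with all its properties.
-- Resource names are placeholders.
CREATE RESOURCE "jdbc.mysql.custom"
CREATE RESOURCE "jdbc.mysql.custom2" FROM "jdbc.mysql.custom"
```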
Related Topics
• Valid names for resources on page 486
This statement defines the location of the JDBC driver associated with the
resource for MySQL.
This statement deletes the property for the JDBC driver location.
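The two statements referred to above do not appear in this extract. A plausible sketch, assuming a MySQL resource named jdbc.mysql, a placeholder driver path, and SET / DROP PROPERTY clauses that this guide does not confirm (only the ALTER RESOURCE statement itself is mentioned):

```sql
-- Both the SET and DROP PROPERTY clauses are assumptions; the
-- resource name and driver path are placeholders.
ALTER RESOURCE "jdbc.mysql"
SET 'driverLocation' = 'C:\DRIVERS\mysql-connector.jar'
ALTER RESOURCE "jdbc.mysql"
DROP PROPERTY 'driverLocation'
```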
Once you define and edit resources, you can verify the metadata that Data
Federator Query Server has stored for these objects by querying the system
tables.
This example shows how to check the location of the JDBC driver file that
Data Federator uses to connect to your MySQL resource.
1. Open the Data Federator Administrator, as described in Starting Data
Federator Administrator on page 384.
2. Click the tab SQL.
3. Type the query to check the property.
For example, to check the property that defines the location of the
JDBC driver for a MySQL database, type:
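The query itself is missing from this extract. A sketch of what such a check might look like, assuming a system table that exposes resource properties; the table and column names below are assumptions, since this guide confirms only that resource metadata can be queried from system tables:

```sql
-- Hypothetical system-table query; table and column names are assumed.
SELECT * FROM resources WHERE name = 'jdbc.mysql'
```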
For the other operations that you can perform on the resources, see
Managing resources using SQL on page 491.
For the resource properties that are available for JDBC connectors, see
"List of JDBC resource properties".
Data Federator and the database systems that it accesses sort and compare
character data using rules that define the correct sequence of characters.
For most database systems, you can configure options to specify whether
the database system should consider upper or lower case, accent marks,
character width or types of kana characters.
Case sensitivity
If a system treats the character M the same as the character m, then the
system is case-insensitive. A computer treats M and m differently because
it uses ASCII codes to differentiate the input. The ASCII value of M is 77,
while m is 109.
Accent sensitivity
If a system treats the character a the same as the character á, then the
system is accent-insensitive. A computer treats a and á differently because
it uses ASCII codes for differentiating the input. The ASCII value of a is 97
and á is 225.
Kana sensitivity
When a system treats the Japanese kana characters Hiragana and Katakana
differently, the system is kana-sensitive.
Width sensitivity
When a single-byte character (half-width) and the same character
represented as a double-byte character (full-width) are treated
differently, the system is width-sensitive.
Related Topics
• Supported Collations in Data Federator on page 496
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
Example:
en_US_AS_CI - English, US, accent sensitive, case insensitive
Related Topics
• Collation in Data Federator on page 495
You can use the sort and comp parameters to set how Data Federator treats
sorting and comparison for strings.
The sort parameter is used to define how strings will be sorted by Data
Federator Query Server. The value of the sort parameter is one of the
supported collation values. The default is binary.
The comp parameter is used to define how strings will be compared in SQL
queries. The value of the comp parameter is either
Example:
LASTNAME FIRSTNAME SALARY DEPARTMENT_NAME
Smith 2
SmIth 1
Smith 4
Related Topics
• Collation in Data Federator on page 495
• Supported Collations in Data Federator on page 496
• Managing parameters using Data Federator Administrator on page 554
• Modifying properties of a user account with SQL on page 517
When collations are binary, Query Server decides whether or not to push a
subquery on a particular datasource by examining only the SQL capabilities
of the source of data.
Thus, in the general case, Query Server assumes that the underlying source
of data is using a default collation that is compliant with the binary collation
in Data Federator.
For SQLServer, MySQL and Oracle only, it is possible to force Data Federator
Query Server to use binary collation even if the default collation on the source
Related Topics
• Collation in Data Federator on page 495
• Setting string sorting and string comparison behavior for Data Federator
SQL queries on page 497
• Supported Collations in Data Federator on page 496
• Specific collation parameters for SQL Server on page 431
• Specific collation parameters for MySQL on page 411
• Specific collation parameters for Oracle on page 412
13 Managing user accounts and roles
About user accounts, roles, and privileges
Related Topics
• Deploying projects on page 321
• About user accounts on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Designer user account on page 506
• Creating a Data Federator Query Server user account on page 506
Related Topics
• About user accounts, roles, and privileges on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Designer user account on page 506
• Creating a Data Federator Query Server user account on page 506
Related Topics
• Granting privileges to a user account or role on page 514
• Creating a Data Federator Designer user account on page 506
• Creating a Data Federator Query Server user account on page 506
Related Topics
• About user accounts on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Query Server user account on page 506
Related Topics
• About user accounts on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Designer user account on page 506
The following table summarizes how to manage user accounts on the User
Accounts tab.
Table 13-1: The User Accounts view of the Security tab in Data Federator Administrator
Task Actions
Related Topics
• Managing privileges with Data Federator Administrator on page 514
• Properties of user accounts on page 511
SCHEMA
the default schema associated with the user
LANGUAGE
specifies the default language for error messages
COMP
the default collation to use for comparison opera-
tions
SORT
the default collation to use for sort operations
You can assign a role to a different role. The role inherits the assigned role's
privileges.
Every user is part of the role PUBLIC. Any privilege that you grant to PUBLIC
is automatically available to all users. The PUBLIC role is always available.
The following table summarizes how to manage roles on the Security tab.
Task Actions
Related Topics
• Data Federator Administrator overview on page 384
The following statement creates a user account. You must log in with an
administrator account to create user accounts. The optional keyword ADMIN
lets you assign administrator privileges to the user account that you create.
An administrator has total control over Data Federator Query Server.
Figure 13-1: CREATE USER statement
CREATE USER user_name PASSWORD password [ ADMIN ]
To delete a user that you have already created, use the drop user statement.
Figure 13-2: DROP USER statement
DROP USER user_name
To change a user's password, use the ALTER USER ... SET PASSWORD
statement.
Figure 13-3: ALTER USER SET PASSWORD statement
ALTER USER user_name SET PASSWORD new_password
Related Topics
• Properties of user accounts on page 511
To verify that you have correctly created, dropped, or altered a user, you
can write a query that reads the system table called users.
For more information, see users on page 735.
Such a query displays the list of users that Data Federator Query Server
has registered in the system table.
In SQL, you define authorizations for objects; that is, you set privileges
and the roles that can use those privileges. Data Federator Query Server
defines authorizations following the same principles as standard SQL
syntax.
Privileges are created and granted with the GRANT statement, and are
deleted with the REVOKE statement and the DROP ROLE statement. Any
defined privilege must contain the following information:
• name of the object on which the privilege acts
• user or role that may use the privilege
• action that may be performed on the specified object
To verify the privileges that you have defined, you can query the permissions
system table.
Related Topics
• List of privileges on page 522
• permissions on page 738
• Grammar for managing users on page 721
• Deploying a version of a project on page 324
About grantees
For the detailed syntax of the grantees that you can use, see Grammar for
managing users on page 721.
Syntax
Figure 13-5: GRANT privileges statement
GRANT privileges [ ON { CATALOG | TABLE | SCHEMA | RESOURCE } objectname
] TO { grantees | PUBLIC }
The variable grantees must be a username or a list of usernames,
separated by commas.
Related Topics
• List of privileges on page 522
To revoke or remove privileges from a user or role, use the following syntax.
You can revoke privileges from all grantees by using the PUBLIC keyword.
Syntax
Figure 13-6: REVOKE privileges statement
REVOKE privileges [ ON { CATALOG | TABLE | SCHEMA | RESOURCE }
objectname ] FROM { grantees | PUBLIC }
Related Topics
• Managing privileges using SQL statements on page 518
This statement is unique to Data Federator and allows users to verify their
own privileges. An Administrator can also check the privileges of any user.
Syntax
Figure 13-7: CHECK AUTHORIZATION statement
CHECK [ AUTHORIZATION ] privileges [ FOR user_name ][ ON { CATALOG
| TABLE | SCHEMA | RESOURCE } objectname ]
This statement returns a result set with one row, one column "AUTHORIZED"
(type BIT).
The value of this column is true if the user has the listed privileges; false
otherwise.
Related Topics
• Managing privileges using SQL statements on page 518
Related Topics
• Resource system tables on page 739
List of privileges
SELECT: allows a user to access tables.
ALTER RESOURCE: allows a user to alter a specified resource.
DROP RESOURCE: allows a user to drop a specified resource.
Related Topics
• About user accounts, roles, and privileges on page 504
• Grammar for managing users on page 721
The CREATE ROLE statement defines a new role. You can create a role
and apply it to more than one user. To avoid confusion between roles and
users, the CREATE ROLE statement is explicit.
Figure 13-8: CREATE ROLE statement
CREATE ROLE identifier
The GRANT roles statement assigns one or more roles to grantees (a grantee
can be a user or a role).
Figure 13-10: GRANT roles statement
GRANT roles TO grantees
Note:
David Smith now has BankTeller and BankManager roles.
Note:
This grants the BankTeller role to all users.
Once you have created and modified the user roles, you can verify the data
stored in Data Federator Query Server by querying the system tables.
You can use login domains to map users or roles to credentials that allow
them to log in to specific databases. This method of authenticating users lets
you hide database credentials while still allowing users to log in to those
databases.
The statement
maps the user account benoit to the user account julia with password
julia on the machine mysql@mandelbrot.
When the user benoit accesses datasources that are on the mysql server,
Data Federator will log in using the account julia.
These system tables are stored in the catalog called leselect and the schema
called system.
Related Topics
• Managing user accounts with SQL statements on page 516
14
14 Controlling query execution
Query execution overview
You can use Data Federator Administrator to view the target tables that you
have deployed.
1. Start Data Federator Administrator.
2. Click the Objects tab.
3. In the tree list, expand Objects, then TABLE, then your-catalog-name,
then targetSchema.
Related Topics
• Managing target tables on page 46
4. Click the Information tab to display information about the table on the
screen.
5. Click the Content tab to display the contents of the table.
The content that displays depends on the number of rows you enter
in the Maximum rows to display option.
6. Once you display the tables and their contents, you have the information
you need to enter queries on the data, optimize the mapping rules, and
test your data strategy.
Related Topics
• About datasources on page 66
Querying metadata
Dynamic applications that are not hard-coded to work with a specific set of
tables must have a mechanism for determining the structure and attributes
of the objects in any database to which they connect. These applications
may require information such as the following.
• the number and names of the tables in the targets and datasources
• the number of columns in a table together with the name, data type, scale,
and precision of each column
• the keys that are defined for a table
Related Topics
• Using stored procedures to retrieve metadata on page 533
• List of stored procedures on page 744
• JDBC metadata methods on page 533
• ODBC metadata functions on page 534
The JDBC specification defines a set of methods that return result sets
containing the metadata information. This is a standard way of presenting
catalog information at the database level, for example, lists of tables,
columns, and data types.
The Data Federator JDBC driver supports the standard JDBC metadata
methods. For more information on standard JDBC metadata methods, see
the Sun JDBC Reference Guide,
http://java.sun.com/j2se/1.4.2/docs/api/java/sql/package-summary.html
The ODBC specification defines a set of functions that return result sets
containing the metadata information. This is a standard way of presenting
catalog information supported at a database level, for example, lists of tables,
columns, data types.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/odbc/htm/odbcodbc_api_reference.asp
Cancelling a query
Data Federator provides a command that lets you cancel a running query.
The cancel command is asynchronous. Therefore, in some cases, when you
cancel a query, your client application may see the query as cancelled while
Data Federator Query Server has not yet completed the cancellation.
To cancel a query:
• Run the following command.
ADMIN CANCEL id
id is the ID of your query, which you can get from the Running Queries
tab in the Data Federator Administrator, in the Administration tab.
You can cancel all running queries using the ADMIN CANCEL ALL QUERIES
command.
Data types
Data types are propagated from datasources to Data Federator.
The following entries describe, for each combination of enforceMaxDecimalSize
and enforceServerMetadataDecimalSize, the DECIMAL metadata reported by the
Data Federator JDBC Driver for DECIMAL columns, and how the DECIMAL values
from DECIMAL columns are returned.

enforceMaxDecimalSize = NONE, enforceServerMetadataDecimalSize = NONE
Reported metadata: the DECIMAL metadata retrieved from Data Federator Query
Server.
Values: returned as obtained from Data Federator Query Server. No checks on
decimal precision and scale are done.

enforceMaxDecimalSize = NONE, enforceServerMetadataDecimalSize = FIXED_SCALE
Note:
An exception is raised when the required decimal precision and scale cannot
be ensured. A warning is issued when a truncation of the decimal digits is
applied to ensure the required decimal precision and scale.

enforceMaxDecimalSize = NONE, enforceServerMetadataDecimalSize = MAX_SCALE
Note:
An exception is raised when the required decimal precision and scale cannot
be ensured. A warning is issued when a truncation of the decimal digits is
applied to ensure the required decimal precision and scale.
Note:
An exception is raised when the maximum decimal precision cannot be ensured.
A warning is issued when a truncation of the decimal digits is applied to
ensure the required decimal precision. The integer part of the DECIMAL value
cannot exceed maxDecimalPrecision - maxDecimalScale.

enforceMaxDecimalSize = FIXED_SCALE, enforceServerMetadataDecimalSize =
FIXED_SCALE
Reported metadata: computed_precision and computed_scale, computed as follows:
• When server_precision is less than server_max_precision, computed_scale =
maxDecimalScale and computed_precision = MIN(server_precision - server_scale,
maxDecimalPrecision - maxDecimalScale) + maxDecimalScale, and
• when server_precision is greater than or equal to server_max_precision,
computed_scale = maxDecimalScale and computed_precision =
maxDecimalPrecision.
Values: adjusted with respect to the computed DECIMAL metadata to have no
more than computed_precision and exactly computed_scale.
Note:
An exception is raised when truncation is not possible.

enforceMaxDecimalSize = FIXED_SCALE, enforceServerMetadataDecimalSize =
MAX_SCALE
Reported metadata: computed_precision and computed_scale, computed as follows:
• When server_precision is less than server_max_precision, computed_scale =
maxDecimalScale and computed_precision = MIN(server_precision,
maxDecimalPrecision), and
• when server_precision is greater than or equal to server_max_precision,
computed_scale = maxDecimalScale and computed_precision =
maxDecimalPrecision.
Values: adjusted with respect to the computed DECIMAL metadata to have no
more than computed_precision and exactly computed_scale.
Note:
An exception is raised when truncation is not possible.

enforceMaxDecimalSize = MAX_SCALE, enforceServerMetadataDecimalSize =
FIXED_SCALE
Reported metadata: computed_precision and computed_scale, computed as follows:
• When server_precision is less than server_max_precision, computed_scale =
MIN(server_scale, maxDecimalScale) and computed_precision =
MIN(server_precision - server_scale, maxDecimalPrecision - computed_scale) +
computed_scale, and
• when server_precision is greater than or equal to server_max_precision,
computed_scale = maxDecimalScale and computed_precision =
maxDecimalPrecision.
Values: adjusted with respect to the computed DECIMAL metadata to have no
more than computed_precision and exactly computed_scale.
Note:
An exception is raised when truncation is not possible.

enforceMaxDecimalSize = MAX_SCALE, enforceServerMetadataDecimalSize =
MAX_SCALE
Reported metadata: computed_precision and computed_scale, computed as follows:
• When server_precision is less than server_max_precision, computed_scale =
MIN(server_scale, maxDecimalScale) and computed_precision =
MIN(server_precision, maxDecimalPrecision), and
• when server_precision is greater than or equal to server_max_precision,
computed_scale = maxDecimalScale and computed_precision =
maxDecimalPrecision.
Values: adjusted with respect to the computed DECIMAL metadata to have no
more than computed_precision and no more than computed_scale.
Note:
An exception is raised when truncation is not possible.
Note:
• server_precision is the metadata DECIMAL precision retrieved from Data
Federator Query Server
• server_scale is the metadata DECIMAL scale retrieved from Data
Federator Query Server
• server_max_precision is the value of the parameter
core.common.maxDecimalPrecision, defined at the level of Data Federator
Query Server to control the maximum decimal precision used when computing
the decimal precision of DECIMAL columns from result sets of queries.
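The FIXED_SCALE / FIXED_SCALE rule above can be sketched in a few lines. This is an illustrative Python model of the computation described in the table, not Data Federator code; the function and argument names are shortened for readability.

```python
def computed_decimal_metadata(server_precision, server_scale,
                              server_max_precision,
                              max_decimal_precision, max_decimal_scale):
    """Sketch of the FIXED_SCALE / FIXED_SCALE metadata computation."""
    if server_precision < server_max_precision:
        computed_scale = max_decimal_scale
        computed_precision = (min(server_precision - server_scale,
                                  max_decimal_precision - max_decimal_scale)
                              + max_decimal_scale)
    else:
        # server_precision >= server_max_precision
        computed_scale = max_decimal_scale
        computed_precision = max_decimal_precision
    return computed_precision, computed_scale

# For example, a DECIMAL(20, 4) column with server_max_precision = 38,
# maxDecimalPrecision = 27 and maxDecimalScale = 6 is reported as
# DECIMAL(22, 6): min(20 - 4, 27 - 6) + 6 = 22.
```

The example values above (precision 20, scale 4, and so on) are chosen for illustration only.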
Related Topics
• Scale and precision on page 700
info buffermanager
To view queries registered for the buffer manager, use the following
command.
To view detailed buffer allocation for operators, use the following command.
info wrappermanager
15
15 Optimizing queries
Tuning the performance of Data Federator Query Server
Updating statistics
To ensure that Data Federator Query Server can optimize your queries in
the best possible manner, you can update the statistics on your datasource
or target tables.
Related Topics
• The Statistics menu item on page 393
You can optimize the performance of Data Federator Query Server by paying
attention to the use of your system swap file.
The size of the swap file depends on the number of retrieved rows and
the complexity of the query.
Generally, you can tell that Data Federator will use the swap file when your
query uses one of the operators that consume memory. The Data Federator
documentation contains a list of these operators.
You can use the following strategies to optimize access to the swap file.
• Set the server parameter core.workingDir.
You can use this parameter to set the location of the swap file.
Point this parameter to a partition that has the most efficient disk access.
Use the syntax of the host OS (c:\\tmp or /tmp).
• In UNIX, configure your swap area for parallel access, for example RAID.
Related Topics
• Operators that consume memory on page 548
Optimizing memory
To audit your system's memory use, use the statement info buffermanager.
You can use the following strategies to optimize how Data Federator uses
memory.
• Set the server parameter core.bufferManager.executorMemory.
This parameter lets you configure the amount of memory used for query
execution.
Defines the number of queries that consume memory that can run
concurrently. Other queries are not affected.
This parameter limits how many operators that consume memory run in
parallel.
Decrease this number if the operators in your queries are consuming too
much memory.
You can approximate the average size and number of operators in your
queries by counting the number of large tables in different datasources.
Related Topics
• Statistics on the buffer manager on page 543
• Queries registered for buffer manager on page 543
• Detailed buffer allocation for operators on page 543
• List of parameters on page 557
• Operators that consume memory on page 548
The following are the operators that can consume memory when you use
them in your queries.
• join
• cartesian product
• orderby
• groupby
• groupby when you have a lot of different values in the group (a large
group set)
Data Federator does not use a significant amount of memory when it performs
scans of tables, projections, filters, function evaluation or when it pushes the
operations down to the sources.
Data Federator can do this when your query involves a very large table and
a relatively small table, for example 50M rows and 20000 rows. Additionally,
your query must use the values in the small table to filter those in the
large table.
You can configure the "bind join" operator using the following four parameters.
• leselect.core.optimizer.bindJoin.minCardinality
This parameter specifies the cardinality threshold (on the large table)
required to activate the "bind join" operator.
• leselect.core.optimizer.bindJoin.reductionFactor
The number of rows in the large table is divided by this parameter to
calculate the threshold under which the estimated result of the "bind
join" must fall.
• leselect.core.optimizer.bindJoin.maxQueries
When Data Federator uses the "bind join" operator, it fetches the rows
from the small table into memory and generates parameterized queries.
This parameter defines the maximum number of parameterized queries.
• leselect.core.optimizer.bindJoin.useIndexOnly
Specifies that Data Federator should only use the "bind join" when there
is an index on the source column.
Figure 15-1: How Data Federator decides to activate a "bind join" with parameters
minCardinality=15000, reductionFactor=1000 and maxQueries=100
Example: Activating a "bind join" on a query with a small table and a very
large table
This example shows how to set system and session parameters to activate
the "bind join", when you have a small table containing 100 rows and a
large table with 50M rows. We also assume that when the values of the
small table are used to filter the values in the large table, 10000 rows will
be returned.
Refresh the statistics once your Data Federator project has been deployed.
You can refresh statistics in the Data Federator Administrator.
The number of rows in the large table is divided by this number to calculate
a threshold. In this case, the threshold is 50000 (50M / 1000 = 50000). Data
Federator then checks the statistics, which show that the "bind join" will
return approximately 10000 rows, which is below the threshold.
If you set this value too low, Data Federator will use a "bind join" when it is
not efficient. For example, if you set this value to 1, Data Federator will use
a "bind join" even when the number of rows returned by the "bind join" is
50M (50M / 1 = 50M). This is equivalent to doing a full table scan.
If you set this value to 2, Data Federator will use a "bind join" when the
number of rows returned by the "bind join" is half of that returned by a table
scan. This is not a sufficient gain over a full table scan.
If you set this value too high, Data Federator will not use a "bind join" when
it would be efficient. For example, if you set this value to 50M, Data
Federator will only use the "bind join" if the number of rows returned by the
"bind join" is 1 (50M / 50M = 1).
Setting this value to 1000 is generally equivalent to requesting that the "bind
join" be activated when its result is 1000 times smaller than a table scan.
With these settings, Data Federator should be able to perform a "bind join"
and thus run your query with optimal speed and use of memory.
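The decision logic in this example can be sketched as follows. This is an illustrative model of the activation rule described above (cardinality, query-count and threshold checks), not the actual optimizer implementation; the function name and argument order are invented for the sketch.

```python
def bind_join_activated(large_rows, small_rows, estimated_result_rows,
                        min_cardinality, reduction_factor, max_queries):
    """Sketch of how the optimizer might decide to use a bind join."""
    if large_rows < min_cardinality:
        return False  # the large table is too small to bother
    if small_rows > max_queries:
        return False  # too many parameterized queries would be needed
    # The filtered result must beat a full scan by the reduction factor.
    threshold = large_rows / reduction_factor
    return estimated_result_rows < threshold

# The example above: a 50M-row table joined with a 100-row table, where
# statistics estimate 10000 result rows. 10000 < 50M / 1000 = 50000, so
# the bind join is activated.
```

With reduction_factor set to 1, the threshold equals the large-table cardinality, which is why such a setting degenerates into a full table scan, as explained above.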
Related Topics
• Managing statistics with Data Federator Administrator on page 394
16
16 Managing system and session parameters
About system and session parameters
Session parameters are defined for one JDBC or ODBC connection. The
value of these parameters can be different among connections.
Each session parameter takes its default value from the system parameter
of the same name. When you change the value of a system parameter
corresponding to a session parameter, the new value is only taken into
account on new sessions.
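The defaulting behavior described above can be modeled with a short sketch. The class and method names are invented for illustration and are not a Data Federator API.

```python
class QueryServerModel:
    """Sketch: sessions snapshot system parameter values at connection time."""

    def __init__(self):
        self.system_params = {}

    def alter_system(self, name, value):
        # A changed system value is seen only by sessions opened afterwards.
        self.system_params[name] = value

    def open_session(self):
        # Each session parameter takes its default from the system parameter
        # of the same name, copied when the connection is created.
        return dict(self.system_params)

server = QueryServerModel()
server.alter_system("language", "en")
session_a = server.open_session()
server.alter_system("language", "fr")
session_b = server.open_session()
# session_a still sees "en"; only the new session_b sees "fr".
```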
You can use system and session parameters to configure various aspects
of Data Federator, such as the following.
• use of memory
• use of network
• the order of execution of queries
• optimizations
Table 16-1: What you can do on the System parameters and Session parameters tabs
in Data Federator Administrator
Task Actions
Use ALTER SYSTEM default or SET default to set all system parameters
or all session parameters to their default values.
Note:
The keyword default is not enclosed in quotation marks when used in either
SQL statement.
Related Topics
• Data Federator Administrator overview on page 384
type: boolean
needs restart? no
type: integer
needs restart? no
default value: 0
type: integer
needs restart? no
default value: 2
Parameter Description
core.queryEngine.distinct.nbPartitions
The optimal number of first level partitions to produce for the distinct
operator. (A new value of this parameter takes effect when there are no
queries registered in the BufferManager.)
scope: system only
type: integer
needs restart? no
type: integer
needs restart? no
comm.jdbc.port
The port for remote links from a client to Data Federator Query Server.
scope: system only
type: integer
needs restart? no
type: integer
needs restart? no
needs restart? no
type: string
needs restart? no
type: integer
needs restart? no
default value: 2
core.bufferManager.maxConcurrentOperatorsPerQuery
The maximum number of memory-consuming concurrent operators. (A new value
of this parameter should take effect when there are no queries registered
in the BufferManager. Currently, you must restart the server.)
scope: system only
type: integer
needs restart? no
default value: 5
type: string
needs restart? no
type: string
needs restart? no
type: integer
needs restart? no
type: integer
needs restart? no
default value: 10
core.queryEngine.hash.maxPartitions
The maximum number of first level partitions to produce for the hash
algorithms. (A new value of this parameter takes effect on subsequent
queries.)
scope: system only
type: integer
needs restart? no
type: string
needs restart? no
type: boolean
needs restart? no
type: boolean
needs restart? no
type: boolean
needs restart? no
type: integer
needs restart? no
type: long
needs restart? no
type: integer
needs restart? no
leselect.core.optimizer.minCardForAsynchPrefetch
The minimum cardinality required to trigger an asynchronous prefetch. -1
means that no asynchronous prefetch is allowed.
scope: system only
type: long
needs restart? no
type: string
needs restart? no
type: string
needs restart? no
default value: 60
type: integer
needs restart? no
default value: 27
type: integer
needs restart? no
default value: 6
type: integer
needs restart? no
default value: 40
core.common.scaleForMaxDecimalPrecision
The maximum value that is reported by Data Federator Query Server for the
decimal scale of a column.
scope: system only
type: integer
needs restart? no
default value: 6
type: integer
needs restart? no
default value: 20
type: integer
needs restart? no
type: integer
needs restart? no
type: long
needs restart? no
type: long
needs restart? no
leselect.core.optimizer.cache.enabled
Enables the optimizer cache. If this is set to true, then two queries with
the same execution plan, differing only in their constants, need only one
optimizer execution. The second query is executed with the same plan as
the first, but with modified constants.
scope: system and session
type: boolean
needs restart? no
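The caching idea can be illustrated with a sketch that normalizes constants out of a query string, so that two queries differing only in their constants map to the same cache key. This is a simplification: the real optimizer keys on execution plans, and the regexes below handle only numeric and single-quoted string literals.

```python
import re

def plan_cache_key(sql):
    """Replace literals with '?' so queries differing only in constants
    share one optimizer-cache entry."""
    sql = re.sub(r"'[^']*'", "?", sql)           # string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)   # numeric literals
    return sql

k1 = plan_cache_key("SELECT * FROM orders WHERE qty > 10 AND city = 'Paris'")
k2 = plan_cache_key("SELECT * FROM orders WHERE qty > 25 AND city = 'Lyon'")
# k1 == k2, so the second query can reuse the first query's plan.
```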
type: integer
needs restart? no
type: string
needs restart? no
type: boolean
needs restart? no
type: integer
needs restart? no
type: integer
needs restart? no
Parameter Description
language defines the ISO language code for the locale
scope: system and session
type: string
needs restart? no
default value: en
type: string
needs restart? no
default value: US
type: string
needs restart? no
type: string
needs restart? no
type: long
needs restart? no
leselect.core.optimizer.bindJoin.maxQueries
When Data Federator uses the bind join operator, it fetches the rows coming
from the small table into memory and generates parameterized queries. This
parameter defines the maximum number of parameterized queries, which is
also the maximum number of rows that can be retrieved from the small table
for the bind join operator to be used.
scope: system only
type: long
needs restart? no
leselect.core.optimizer.bindJoin.useIndexOnly
Specifies that Data Federator only uses the bind join operator on indexed
columns.
scope: system only
type: boolean
needs restart? no
leselect.core.optimizer.bindJoin.reductionFactor
type: string
needs restart? no
Related Topics
• Guidelines for using system and session parameters to optimize queries
on large tables on page 548
Note:
The startup parameter core.workingDir can only be changed by modifying
the server.properties file; unlike other system parameters, it cannot be
changed in the Administrator UI.
#core.workingDir=
2. Set the parameter core.workingDir to a partition that has the most efficient
disk access.
Make sure you erase the hash (#) character from the beginning of the
line.
To type the name of the directory, use the syntax of the host OS (c:\\tmp
or /tmp).
Note:
The backslash character (\) is an escape character. To represent a single
backslash, you must enter two backslashes in a row (\\).
core.workingDir=c:\\tmp
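A quick way to check the escaping: the sketch below parses a property line the way a Java properties reader treats doubled backslashes. It is an illustration only and handles just this one escape; real properties files support more.

```python
def parse_working_dir(line):
    """Parse a key=value line, collapsing a doubled backslash into one."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip().replace("\\\\", "\\")

# The file line core.workingDir=c:\\tmp yields the path c:\tmp.
key, path = parse_working_dir(r"core.workingDir=c:\\tmp")
```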
17
17 Backing up and restoring data
About backing up and restoring data
Once the administrator has entered their username and password, the Data
Federator Backup and Restore tool backs up and restores the following data:
• projects, including datasource definitions, targets, mappings, lookup tables
and domain tables
• connector resources
• usernames and authorizations
Data Federator Administrators can start the Data Federator Backup and
Restore Tool as follows.
1. Ensure you are logged out of your Data Federator applications.
2. Use one of the following methods to start the Backup and Restore Tool.
• Windows: data-federator-install-dir\bin\backuptoolgui.bat
• AIX, Solaris or Linux (GUI mode): data-federator-install-dir/bin/backuptoolgui.sh
• AIX, Solaris or Linux (console mode): data-federator-install-dir/bin/backuptoolcon.sh
If you are using Windows, and the Data Federator Windows services are
installed, the Backup and Restore Tool restarts the Data Federator
services automatically. Otherwise, use the shutdown and startup scripts.
4. If you are using AIX, Solaris or Linux, restart the Data Federator servers
by using the command: [data-federator-install-dir]/bin/start
up.sh
Related Topics
• Starting the Data Federator Backup and Restore tool on page 576
Data Federator Administrators can restore data using the Data Federator
Backup and Restore Tool.
1. Start the Data Federator Backup and Restore Tool.
2. Click Restore. (In console mode, type 1 and press Enter.)
3. Choose the directory where you backed up the data that you want to
restore.
The Backup and Restore Tool restores your data.
4. If you are using AIX, Solaris or Linux, restart the Data Federator servers
by using the command: data-federator-install-dir/bin/startup.sh
Related Topics
• Starting the Data Federator Backup and Restore tool on page 576
18
18 Deploying Data Federator servers
About deploying Data Federator servers
Projects are deployed on the same machine that runs Designer. In this
configuration, your application connects to a single Data Federator Query
Server.
• deploy a project on multiple installations on a cluster
Related Topics
• Deploying a project on a single remote Query Server on page 582
• Deploying a project on a cluster of remote instances of Query Server on
page 586
• Configuring fault tolerance for Data Federator on page 601
Related Topics
• Deploying projects on page 321
• Deploying a version of a project on page 324
• Using deployment contexts on page 325
• Possibilities for deploying a project on a single remote instance of Query
Server on page 583
The following possibilities are available for configuring Data Federator with
a single remote Query Server.
• You can deploy projects on the local Query Server. Designer and Query
Server run on the same machine.
Related Topics
• Deploying a version of a project on page 324
• Using deployment contexts on page 325
When you install Designer and Query Server on separate machines, there
must not be a firewall between the two machines.
Figure 18-1: Architecture of an installation of Data Federator Designer with a remote Query
Server
1. Use the Data Federator installer to install Data Federator Designer and
Data Federator Query Server on machine A.
2. Use the Data Federator installer to install Data Federator Designer and
Data Federator Query Server on machine B.
3. At project deployment time on machine A, configure the deployment
address of machine B.
When you deploy projects from Designer, the domain and lookup tables are
deployed on the remote machine (machine B).
Note:
While you edit your project in Data Federator Designer, you use the local
installation of Query Server and the local domain and lookup repository.
Related Topics
• Deploying a version of a project on page 324
• Using deployment contexts on page 325
When you deploy projects from each Designer on machine A1 or A2, its
domain and lookup tables are deployed in a dedicated database on machine
B.
Note:
In this type of shared configuration, if users on two different machines deploy
to a catalog of the same name, the first catalog is overwritten.
Related Topics
• Deploying a version of a project on page 324
• Using deployment contexts on page 325
To use Connection Dispatcher, you must be sure that the project that your
application wants to access has been deployed in a catalog of the same
name on each Query Server that belongs to the cluster.
Note:
Connection Dispatcher only dispatches connections. Queries sent on the
same connection are always executed on the same installation of Query
Server. Load balancing is done only at connection time when a new
connection for an installation of Query Server is acquired.
Related Topics
• Deploying projects on page 321
• Possibilities for deploying a project on a cluster of remote instances of
Query Server on page 587
Note:
You cannot install multiple instances of Data Federator Query Server on the
same physical machine without using VMware.
Figure 18-3: A single Data Federator Designer, multiple instances of remote Query Server
and Connection Dispatcher
Related Topics
• Deploying a version of a project on page 324
If you installed the Data Federator Windows Services component in the Data
Federator installer, the Connection Dispatcher server starts automatically.
Specifically, the Data Federator Windows Services start the Connection
Dispatcher server.
The table below shows the Windows service that runs the Connection
Dispatcher server.
If you did not install the Data Federator Windows Services component in the
Data Federator installer, you can use the startup scripts to start Connection
Dispatcher.
The following table shows the script that runs Connection Dispatcher.
The following table lists the name of the script that runs Connection
Dispatcher on AIX, Solaris or Linux.
Table 18-3: List of scripts that run Connection Dispatcher on AIX, Solaris or Linux
Stop the following Data Federator Windows Service in order to shut down
Data Federator Connection Dispatcher.
Note:
When you shut down Connection Dispatcher, any new connection requests
from clients that are pointing to Connection Dispatcher will fail. Make
sure to either restart Connection Dispatcher or point the clients to a
different server.
If you did not install the Data Federator Windows Services component in the
Data Federator installer, you can use the shutdown scripts to shut down
Connection Dispatcher.
The following table shows the name of the script that shuts down Connection
Dispatcher:
Data Federator Connection Dispatcher: data-federator-install-dir\dispatcher\bin\shutdown.bat
Note:
When you shut down Connection Dispatcher, any new connection requests
from clients that are pointing to Connection Dispatcher will fail. Make
sure to either restart Connection Dispatcher or point the clients to a
different server.
The shutdown operation and status are noted in the configured log.
The following table lists the name of the script that shuts down Connection
Dispatcher.
Table 18-6: List of scripts that shut down Connection Dispatcher on AIX, Solaris or Linux
Note:
When you shut down Connection Dispatcher, any new connection requests
from clients that are pointing to Connection Dispatcher will fail. Make
sure to either restart Connection Dispatcher or point the clients to a
different server.
The shutdown operation and status are noted in the configured log.
You can set parameters for the Connection Dispatcher in the dispatch
er.properties file.
1. Shut down Connection Dispatcher if it is running.
2. Edit the file data-federator-install-dir/dispatcher/conf/dispatch
er.properties.
3. Restart Connection Dispatcher.
Related Topics
• Parameters for Connection Dispatcher on page 596
Generally, there are three types of parameters that configure how long servers
are considered valid.
• The parameter leselect.dispatcher.reverifyTime specifies the
minimum time that Connection Dispatcher waits before contacting a server
to verify that it is still valid.
The effective time between verifications may vary, but it cannot be less
than this value.
• The parameter leselect.dispatcher.validityTime specifies the time
that Connection Dispatcher considers a server to be valid after the last
reverification.
• The parameters leselect.dispatcher.blockedReverifyTime and
leselect.dispatcher.stopedReverifyTime control reverification for
blocked (non-responsive) servers and stopped servers respectively.
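The way these parameters interact can be made concrete with a small model. The following Python snippet is an illustrative sketch only, not the product's implementation; it assumes, per the dispatcher.properties units, that reverify intervals are expressed in chunks of 10 seconds and validity time in milliseconds:

```python
# Illustrative model of the dispatcher's validity/reverification timing.
# NOT the product's implementation; it only shows how the parameters relate.

def needs_reverify(now_ms, last_verified_ms, reverify_chunks):
    """True when the dispatcher may contact the server again.
    reverify_chunks is in chunks of 10 seconds (10000 ms)."""
    return now_ms - last_verified_ms >= reverify_chunks * 10_000

def is_valid(now_ms, last_verified_ms, validity_ms):
    """True while the cached server reference is still considered valid."""
    return now_ms - last_verified_ms < validity_ms

# With the defaults reverifyTime=6 (60 s) and validityTime=120000 (120 s),
# a server checked at t=0 may be reverified from t=60 s on, and its cached
# reference expires at t=120 s if no reverification happened.
print(needs_reverify(70_000, 0, 6))   # True
print(is_valid(70_000, 0, 120_000))   # True
print(is_valid(130_000, 0, 120_000))  # False
```

This also illustrates why a validity time of twice the reverify interval is a safe default: a reference only expires if a whole reverification opportunity was missed.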
You can raise this value if you find that your servers are becoming invalid
before Connection Dispatcher checks them. However, because the default
validity time is twice the reverify interval, servers should rarely become
invalid between checks.
You may lower this value if you want to shorten the time during which
Connection Dispatcher assumes that a server has the same load and status.
This way you increase the confidence in the information that the Connection
Dispatcher has about each server, and Connection Dispatcher will perform
load balancing and failover with more accuracy.
For a list of the possible values, see Parameters for Connection Dispatcher
on page 596.
Parameter Description

leselect.dispatcher.validityTime=120000
  Validity time of a server reference in the cache (milliseconds).

leselect.dispatcher.reverifyTime=6
  The interval, in chunks of 10 seconds, at which a server reference is rechecked.

leselect.dispatcher.stopedReverifyTime=3
  The interval, in chunks of 10 seconds, at which a stopped server is rechecked.

leselect.dispatcher.blockedReverifyTime=12
  The interval, in chunks of 10 seconds, at which a non-responsive (heavily loaded) server is rechecked.

leselect.dispatcher.thresholdLatency=500
  The latency threshold, in milliseconds, at which a server is considered loaded enough to decrease its priority by one level. The most prioritized servers are returned first when a client asks for a reference.

#leselect.dispatcher.clusterConfigFile=
  If no value is specified for this parameter, the configuration file is created by default in USER_HOME/.datafederator/dispatcher_servers_conf.xml.

log4j.configuration=properties/logging_info.properties
  Points to an existing logging configuration file that controls the level of logging.
You must have installed and started Data Federator Connection Dispatcher.
For details, see the Data Federator Installation Guide.
add jdbc:datafederator://server1
The following command shows the installations of Query Server that are
in the list.
list
help
Related Topics
• Possibilities for deploying a project on a cluster of remote instances of
Query Server on page 587
• Parameters in the JDBC connection URL on page 361
To add a server, stop the Connection Dispatcher, open this file for editing
and under the <servers> root tag add a tag with the following syntax.
• Attributes between brackets [] can be omitted and the default values are
used in this case.
• The URL attribute is the URL of an installation of Data Federator Query
Server. The other attributes have the same meaning as the ones found in
the dispatcher.properties file, except that they are not prefixed by
leselect.dispatcher.
Caution:
Even if you can add a reference to the URL of the Connection Dispatcher
itself, or to the URL of another Connection Dispatcher, this is not
recommended. Such references will prevent Connection Dispatcher from
correctly identifying the latency of the servers, and load balancing will not
function.
Note:
When you start Connection Dispatcher from a script on Windows or Unix-like
systems, the Connection Dispatcher servers configuration file is found at
USER_HOME/.datafederator/dispatcher_servers_conf.xml. When you
start Connection Dispatcher as a Windows service, the value of the SYSTEM
USER home directory depends on the version of Windows you have, and
you must use Windows search to find the correct .datafederator directory.
Related Topics
• Parameters for Connection Dispatcher on page 596
• Finding the Connection Dispatcher servers configuration file when running
Connection Dispatcher as a Windows service on page 780
• Parameters in the JDBC connection URL on page 361
The alternate servers are contacted in the order in which you list
them in the URL. Each host in your list can be either a Query Server or a
Connection Dispatcher.
You can configure fault tolerance for connections from your application to
Data Federator by using the JDBC URL.
Related Topics
• JDBC URL syntax on page 358
• Format of the Connection Dispatcher servers configuration file on page 600
19 Data Federator Designer reference
Using data types and constants in Data Federator Designer
• TIMESTAMP: in formulas, use a function to return a TIMESTAMP constant.
• TIME: in formulas, use a function to return a TIME constant.
• DATE: in formulas, use a function to return a date constant.
• true
• false (in lower, upper, or mixed case)
http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.
Parameter Description
ColumnName[:ColumnType][(Length)]

Examples:
my_column:varchar(20)
your_column:date
our_column:integer;their_column:integer
DDL script
Example:
Related Topics
• Using a schema file to define a text file datasource schema on page 171
• Adding a target table from a DDL script on page 47
• Using data types and constants in Data Federator Designer on page 604
Click None to clear all the selected columns. If you run your query with
no selected columns, the query will return all columns.
5. Verify that the values in your file appear as you expect them to appear.
This depends on the component you are testing. Examples are:
• If you are testing a datasource, see Running a query on a datasource
on page 211.
• If you are testing a mapping rule, see Testing a mapping rule on
page 281.
Query configuration
Use these parameters when Running a query to test your configuration on
page 614.
Parameter Description
• In the Data sheet pane of the component on which you have run a query,
click Print.
A PDF window appears, as shown in the example below:
When you click Add, Data Federator adds the new row below the row you
selected.
• operand = NULL
• operand <> NULL
• operand IS NULL
• operand IS NOT NULL
• operand LIKE string-constant
• operand NOT LIKE string-constant
You can use the following operands when writing a filter formula:
• columns from datasource tables
• Refer to your datasource tables by their id numbers (Sn).
• Refer to columns in datasource tables by their aliases. This is either
an id number or a name (Sn.An or Sn.[column_name]).
• constants
• functions
• Use the Data Federator functions to convert or combine the column
values or constants.
• Choose the appropriate types for the data that you want to convert.
Related Topics
• isLike on page 649
• Function reference on page 624
• Using data types and constants in Data Federator Designer on page 604
• Use the Data Federator functions to convert or combine the column values
or constants.
For a full list of functions that you can use, see Function reference on
page 624.
• Choose the appropriate types for the data that you want to convert.
For details about the data types in Data Federator, see Using data types
and constants in Data Federator Designer on page 604.
Related Topics
• The syntax of filter formulas on page 619
You can use the following operands when writing a relationship formula:
• constants
Related Topics
• Using data types and constants in Data Federator Designer on page 604
20 Function reference
Aggregate functions
In aggregate functions, you can use the SQL keyword distinct in front of
column names.
Related Topics
• Writing aggregate formulas on page 227
AVG

Syntax
DECIMAL AVG(INTEGER n)
DECIMAL AVG(DECIMAL d)
COUNT

Syntax
INTEGER COUNT(INTEGER n)
INTEGER COUNT(DECIMAL c)
INTEGER COUNT(DOUBLE d)
INTEGER COUNT(STRING s)
INTEGER COUNT(TIMESTAMP m)
INTEGER COUNT(TIME t)
INTEGER COUNT(DATE a)
INTEGER COUNT(BOOLEAN b)
MAX
INTEGER MAX(INTEGER n)
DECIMAL MAX(DECIMAL c)
DOUBLE MAX(DOUBLE d)
TIMESTAMP MAX(TIMESTAMP m)
TIME MAX(TIME t)
DATE MAX(DATE d)
MIN
INTEGER MIN(INTEGER n)
DECIMAL MIN(DECIMAL c)
DOUBLE MIN(DOUBLE d)
TIMESTAMP MIN(TIMESTAMP m)
TIME MIN(TIME t)
DATE MIN(DATE d)
SUM
DECIMAL SUM(INTEGER n)
DECIMAL SUM(DOUBLE d)
Numeric functions
abs
decimal abs(decimal n)
integer abs(integer n)
abs(-2^31) = -2^31
Restrictions
returns null if the argument is null
acos
asin
Returns the arc sine of an angle, in the range of -pi/2 through pi/2.
atan
Returns the arc tangent of an angle, in the range of -pi/2 through pi/2.
atan2
ceiling
Returns the smallest value that is not less than the argument and is equal
to a mathematical integer.
integer ceiling(integer n)
decimal ceiling(decimal n)
cos
cot
degrees
double degrees(integer n)
double degrees(decimal c)
exp
Returns the exponential value of a number "d", of type double. This is the
value of e raised to the exponent d.
floor
Returns the largest value that is not greater than the argument and is equal
to a mathematical integer.
Note:
The type of the value returned is not converted. Therefore, floor (1.9) == 1.0.
If you want to convert the value to an integer, use a conversion function like
toInteger() (see toInteger on page 679).
integer floor(integer n)
decimal floor(decimal n)
log
Returns the base e logarithm of double number "d". The argument "d" must
be greater than 0. Returns null if the argument is negative or equal to 0.
log10
Returns the base 10 logarithm of double number "d". The argument d must
be greater than 0. Returns null if the argument is negative or equal to 0.
mod
pi
power
radians
double radians(integer n)
double radians(decimal c)
rand
Returns a double value d, where 0 <= d < 1. You can provide a seed integer to
initialize the random number generator.

Syntax
double rand(integer n)
double rand()
round
Returns the value rounded to the specified number of decimal places "p". The
function rounds towards the nearest neighbor unless both neighbors are
equidistant; in this case, it rounds up (that is, away from zero).
Note:
The type of the value returned is not converted. Therefore, round(1.9) ==
2.0. If you want to convert the value to an integer, use a conversion function
like toInteger() (see toInteger on page 679).
double round(double n)
decimal round(decimal n)
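This round-half-away-from-zero rule differs from, for example, Python's built-in round(), which rounds halves to even. A sketch of the documented rule using Python's decimal module (an illustration of the behavior, not the Query Server implementation):

```python
from decimal import Decimal, ROUND_HALF_UP

def df_round(x, places=0):
    """Sketch of the documented rounding: nearest neighbor,
    ties rounded away from zero."""
    q = Decimal(1).scaleb(-places)  # e.g. places=1 -> Decimal('0.1')
    return float(Decimal(str(x)).quantize(q, rounding=ROUND_HALF_UP))

print(df_round(1.9))      # 2.0
print(df_round(2.5))      # 3.0  (tie rounds away from zero)
print(df_round(-2.5))     # -3.0
print(df_round(1.45, 1))  # 1.5
```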
sign
Returns the positive (1), zero (0), or negative (-1) sign of the argument.
integer sign(integer n)
double sign(double d)
sin
sqrt
Returns the square root of a number. The argument must be positive. Returns
null if the argument is negative.
tan
trunc
If the value m is negative, the function starts at the mth digit left of the decimal
point and sets to zero all the digits to the right of that position.
double trunc(double n)
decimal trunc(decimal n)
Alias truncate()
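The negative-m behavior described above (zeroing digits to the left of the decimal point) can be sketched as follows. This is an illustrative model under the assumption that trunc simply drops digits without rounding, not the product's code:

```python
import math

def df_trunc(x, m=0):
    """Sketch of trunc(): drop digits beyond decimal position m without
    rounding. A negative m zeroes digits left of the decimal point."""
    factor = 10.0 ** m
    return math.trunc(x * factor) / factor

print(df_trunc(13.579, 2))    # 13.57
print(df_trunc(1234.56, -2))  # 1200.0
print(df_trunc(-3.7))         # -3.0 (truncation toward zero, not flooring)
```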
Date/Time functions
curdate

Syntax
date curdate()

Returns the current date as a date value. This function is a "system function"
and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data
access system used as the source.
curtime

Syntax
time curtime()

Returns the current local time as a time value. This function is a "system
function" and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data
access system used as the source.
dayName
Syntax
string dayName(date a)
string dayName(timestamp m)
dayOfMonth
Syntax
integer dayOfMonth(date a)
integer dayOfMonth(timestamp m)
dayOfWeek
Returns an integer from 1 to 7 representing the day of the week in date "a"
or timestamp "m". The first day of a week is Sunday.
Syntax
integer dayOfWeek(date a)
integer dayOfWeek(timestamp m)
dayOfYear
Returns an integer from 1 to 366 representing the day of the year in date "a"
or timestamp "m".
Syntax
integer dayOfYear(date a)
integer dayOfYear(timestamp m)
decrementDays
Decrements the given number of days "n" from date "a" or timestamp "m".
hour
Syntax
integer hour(time t)
integer hour(timestamp m)
incrementDays
Increments the date "a" or timestamp "m" argument with the given number
of days "n".
minute
Syntax
integer minute(time t)
integer minute(timestamp m)
month
Syntax
integer month(date a)
integer month(timestamp m)
monthName
Syntax
string monthName(date a)
string monthName(timestamp m)
now

Syntax
timestamp now()
Returns a timestamp value representing date and time. This function is a
"system function" and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data
access system used as the source.
quarter
Syntax
integer quarter(date a)
integer quarter(timestamp m)
second
Syntax
integer second(time t)
integer second(timestamp m)
timestampadd
Syntax
The interval argument can be one of the following values:
• 'SQL_TSI_SECOND' or 1
• 'SQL_TSI_MINUTE' or 2
• 'SQL_TSI_HOUR' or 3
• 'SQL_TSI_DAY' or 4
• 'SQL_TSI_WEEK' or 5
• 'SQL_TSI_MONTH' or 6
• 'SQL_TSI_QUARTER' or 7
• 'SQL_TSI_YEAR' or 8
timestampdiff
Syntax
The interval argument can be one of the following values:
• 'SQL_TSI_SECOND' or 1
• 'SQL_TSI_MINUTE' or 2
• 'SQL_TSI_HOUR' or 3
• 'SQL_TSI_DAY' or 4
• 'SQL_TSI_WEEK' or 5
• 'SQL_TSI_MONTH' or 6
• 'SQL_TSI_QUARTER' or 7
• 'SQL_TSI_YEAR' or 8
trunc
week
Syntax
integer week(date a)
integer week(timestamp m)
year
Syntax
integer year(date a)
integer year(timestamp m)
String functions
ascii
char
Returns the character whose ascii value corresponds to the INTEGER "n"
where n is between 0 and 255. Returns NULL if n is out of range.
Returns the ascii value of the INTEGER "n" where n is between 0 and 255.
Returns NULL if n is out of range.
concat
containsOnlyDigits
Returns true if the string "s" contains only digits, false otherwise.
Syntax
boolean containsOnlyDigits(string s)
insert
isLike
Checks a string s1 for a matching pattern s2. The pattern follows the SQL
92 standard. The string s3 can be used to specify an escape character in
the pattern.
If an '_' or '%' occurs in the string s1, you can match it by defining a character
s3 and, in the pattern s2, preceding the '_' or '%' by s3.
Note:
The third argument is a character used to escape metacharacters. See
restrictions below.

Restrictions
• (string s1, string s2): if s1 == NULL or s2 == NULL, returns NULL
• (string s1, string s2, string s3): if s1 == NULL or s2 == NULL or s3 == NULL, returns NULL
• (string s1, string s2, string s3): in s2, any occurrence of s3 must be followed by '_' or '%' or a second s3
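The LIKE-with-escape semantics described above can be illustrated by translating the pattern into a regular expression. The following Python sketch is an illustration only (the function name sql_like is invented here; this is not Data Federator code):

```python
import re

def sql_like(s, pattern, escape=None):
    """Translate an SQL-92 LIKE pattern ('%' = any run of characters,
    '_' = any single character) into a regex and match s against it.
    The optional escape character makes the following '_' or '%' literal."""
    parts = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if escape is not None and ch == escape:
            i += 1                        # escaped char is taken literally
            parts.append(re.escape(pattern[i]))
        elif ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))
        i += 1
    return re.fullmatch(''.join(parts), s) is not None

print(sql_like('abcd', 'ab%'))       # True
print(sql_like('50%', '50!%', '!'))  # True  ('!' escapes the '%')
print(sql_like('50x', '50!%', '!'))  # False
```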
left
Alias leftStr()
leftStr
Alias left()
len
Alias length()
lPad
Pads a string "s1" on the left side to a specified length "n" using another
string "s2".
Examples
lPad('AB','x', 4) = 'xxAB'
lPad('ABC','x', 2) = 'AB'
lPad('ABC','cd', 7) = 'cdcdABC'

Note:
If n is less than the length of s1, s1 is truncated.
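Read together with the note, the examples suggest the following behavior, sketched here in Python (an illustration, not the product's code):

```python
def lpad(s, pad, n):
    """Pad s on the left with repetitions of pad up to total length n;
    if n is not longer than s, truncate s to n characters instead."""
    if n <= len(s):
        return s[:n]
    need = n - len(s)
    return (pad * need)[:need] + s   # repeat pad, then cut to exact width

print(lpad('AB', 'x', 4))    # 'xxAB'
print(lpad('ABC', 'x', 2))   # 'AB'
print(lpad('ABC', 'cd', 7))  # 'cdcdABC'
```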
lTrim
Removes the first sequence of spaces and tabs from the left side of the string
s.
If you specify s1 and s2, lTrim removes the first sequence of s2 from the left
side of s1. The string s2 must be a single character.
Syntax
string lTrim(string s)
string lTrim(string s1, string s2)
Examples
lTrim(' AB CD ') = 'AB CD '
match
Returns true if the first string "s1" matches the pattern "s2", false otherwise.
Refer to the Sun™ Java documentation for more information on Java regular
expressions at
http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html.
permute
Takes the first string s1, whose reference pattern is supplied in the second
argument reference-pattern, and applies a new pattern new-pattern
to produce a resulting string. The new pattern is expressed by permuting the
letters defined in the reference pattern.
• The reference pattern assigns each character in string s to the character
in the corresponding position in reference-pattern. The length of
reference-pattern must be equal to the length of s.
• The new pattern permutes the characters that were assigned in the
reference pattern.
In this example, the first 'D' refers to the first character in string s, the second
'D' to the second character in s, '/' to the third character in s, the first 'M' to
the fourth character and so on. This is why the length of reference-pattern
must always equal the length of the string s. The function returns an
error if the two strings are of different lengths.

string s: 22/09/1999
reference-pattern: DD/MM/YYYY
new-pattern: YYYY-MM-DD
result: 1999-09-22
Text can also be inserted into the new pattern, provided none of the letters
are already used in the reference pattern. For example, using the new pattern
'MM/DD Year: YYYY' produces the following string: '09/22 Year: 1999'. The
permute function is helpful not only in transforming formats (dates, times,
encoding), but also in extracting information from a code of pre-defined length
(refer to the examples below).
permute('02/09/2003', 'DD/MM/YYYY', 'YYYY-MM-DD') = '2003-09-02'

permute('02-09/2003', 'DD/MM/YYYY', 'YYYY-MM-DD') = '2003-09-02'

permute('02/09_2003', 'DD/MM/YYYY', 'DL :MM/DD An :YYYY') = 'DL :09/02 An :2003'

permute('2003-09-02', 'CCYY-MM-DD', 'MM/YY') = '09/03'

permute('03/03/21-0123', 'YY/MM/DD-NNNN', 'YYMMDDNNNN') = '0303210123'

permute('2003NL987M08J21', 'YYYYXXXXXXMMXDD', 'YYYY-MM-DD') = '2003-08-21'
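A simplified Python model can make the permute semantics concrete. This sketch assumes that each letter in the reference pattern captures the character at its position and that characters of the new pattern not used as reference letters are literal text; it does not reproduce every corner case (such as inserting literal text that reuses a reference letter), and it is not the product's implementation:

```python
def permute(s, ref_pattern, new_pattern):
    """Reassemble the characters of s, grouped by the letters of
    ref_pattern, according to new_pattern. Illustrative model only."""
    if len(s) != len(ref_pattern):
        raise ValueError('s and ref_pattern must have the same length')
    captured = {}
    for ch, letter in zip(s, ref_pattern):
        if letter.isalpha():                     # non-letters are separators
            captured.setdefault(letter, []).append(ch)
    streams = {k: iter(v) for k, v in captured.items()}
    out = []
    for ch in new_pattern:
        # replay each letter's captured characters in order; other
        # characters in the new pattern are emitted literally
        out.append(next(streams[ch]) if ch in streams else ch)
    return ''.join(out)

print(permute('02/09/2003', 'DD/MM/YYYY', 'YYYY-MM-DD'))
# '2003-09-02'
print(permute('2003NL987M08J21', 'YYYYXXXXXXMMXDD', 'YYYY-MM-DD'))
# '2003-08-21'
```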
permuteGroups
Takes the first string "s1", whose reference pattern is supplied in the second
argument "reference-pattern", and applies a new pattern "new-pattern" to
produce a resulting string. The new pattern is expressed by permuting the
groups of regular expressions defined in the reference pattern.
If the input string does not match the pattern, the function returns NULL.
Refer to the Sun Java documentation for more information on Java regular
expressions at
http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html.
Syntax
string permuteGroups(string s1, string reference-pattern, string new-pattern)
• permuteGroups('1978-01-12', '([\d]*)(.)([\d]*)(.)([\d]*)', '{5}/{3}') = 12/01
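Because permuteGroups is defined in terms of Java regular-expression groups, its behavior can be approximated in Python with the re module. A sketch under that assumption, not the actual implementation:

```python
import re

def permute_groups(s, ref_pattern, new_pattern):
    """Match s against ref_pattern; substitute each {n} in new_pattern
    with the n-th captured group. Returns None (the doc's NULL) when
    the input string does not match the pattern."""
    m = re.fullmatch(ref_pattern, s)
    if m is None:
        return None
    return re.sub(r'\{(\d+)\}',
                  lambda g: m.group(int(g.group(1))),
                  new_pattern)

print(permute_groups('1978-01-12',
                     r'([\d]*)(.)([\d]*)(.)([\d]*)',
                     '{5}/{3}'))  # '12/01'
```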
pos
Returns the position of the first occurrence of string "s1" in string "s2". Returns
0 if string s1 is not found. The first character is in position 1. If "start" is
specified, the search begins from position "start" in s2.
Alias locate()
Examples
pos('cd','abcd') = 3
pos('abc', 'abcd') = 1
pos('cd', 'abcdcd') = 3
pos('cd', 'abcdcd', 3) = 3
pos('cd', 'abcdcd', 4) = 5
pos('ef', 'abcd') = 0
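The 1-based indexing convention above maps directly onto Python's 0-based str.find, as this sketch shows (illustration only, not product code):

```python
def pos(s1, s2, start=1):
    """Position of the first occurrence of s1 in s2, 1-based; 0 when not
    found. The optional start position begins the search there."""
    return s2.find(s1, start - 1) + 1   # find() returns -1, which maps to 0

print(pos('cd', 'abcd'))       # 3
print(pos('cd', 'abcdcd', 4))  # 5
print(pos('ef', 'abcd'))       # 0
```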
repeat
Returns a string formed by repeating string "s". The string is repeated "n"
times. Returns NULL if the count is negative.
replace

Replaces all occurrences of string "s2" in string "s1" with string "s3".

replaceStringExp

Replaces all occurrences of string "s2" in string "s1" with string "s3", following
the syntax of a Java regular expression.
Refer to the Sun Java documentation for more information on Java regular
expressions at
http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html
Syntax
string replaceStringExp(string s1, string s2, string s3)
right
Alias rightStr()
rightStr
Alias right()
rPad
Pads a string "s1" on the right side to a specified length "n" using another
string "s2".
Note:
If n is less than the length of s1, s1 is truncated.
rPos
Returns the position of the last occurrence of string "s1" in string "s2". Returns
0 if string s1 is not found. The first character is in position 1, and the counting
proceeds from left to right.
Examples
rPos('CD','ABCD') = 3
rPos('CD', 'ABCDCD') = 5
rPos('ABC', 'ABCD') = 1
rPos('EF', 'ABCD') = 0
rTrim
Removes the first sequence of spaces and tabs from the right side of the
string s.
If you specify s1 and s2, rTrim removes the first sequence of s2 from the
right side of s1. The string s2 must be a single character.
Syntax
string rTrim(string s)
string rTrim(string s1, string s2)
Examples
rTrim(' AB CD ') = ' AB CD'
space
subString
This function extracts the sub-string beginning in position "n1" that is "n2"
characters long, from string "s". If string s is too short to provide "n2" characters,
the end of the resulting sub-string corresponds to the end of string "s" and
is thus shorter than "n2".
If you do not specify n2, the sub-string from n1 to the end of s is returned.
Syntax
string substring(string s, integer n1)
string substring(string s, integer n1, integer n2)
substring('ABCD', 2, 2) = 'BC'
substring('ABCD', 0, 2) = NULL
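The clamping behavior (the result ends at the end of s when n2 reaches past it) and the NULL result for a non-positive start position can be sketched as follows (illustrative Python, not product code):

```python
def substring(s, n1, n2=None):
    """Extract n2 characters of s starting at 1-based position n1;
    without n2, take the rest of the string. Positions below 1 yield
    None (the doc's NULL)."""
    if n1 < 1:
        return None
    if n2 is None:
        return s[n1 - 1:]
    return s[n1 - 1:n1 - 1 + n2]   # Python slicing clamps at the end

print(substring('ABCD', 2, 2))   # 'BC'
print(substring('ABCD', 3, 99))  # 'CD' (clamped at the end of s)
print(substring('ABCD', 0, 2))   # None
```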
toLower
Alias lcase()
toLower('ABCD') = 'abcd'
Examples
toLower('Cd123') = 'cd123'
toUpper
Alias ucase()
trim
Removes the first sequence of spaces and tabs from the left and right sides
of the string s.
If you specify s1 and s2, trim removes the first sequence of s2 from the left
and right sides of s1. The string s2 must be a single character.
Syntax
string trim(string s)
string trim(string s1, string s2)
System functions
database

Syntax
string database()
Returns the name of the database (catalog). This function is a "system
function" and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data
access system used as the source.
ifElse
min
nvl
Alias ifNull()
user

Syntax
string user()
Returns the user name. This function is a "system function" and has the
following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data
access system used as the source.
valueIfElse
Conversion functions
cast
Casts the first argument "x" as the type specified by the second argument.
The second argument is a type name that can have the following
values:
• NULL
• STRING
• DOUBLE
• DECIMAL
• DATE
• TIME
• TIMESTAMP
• BOOLEAN
timestamp cast(type x AS TIMESTAMP)
boolean cast(type x AS BOOLEAN)
convert
Converts the first argument "x" to the type specified by the second argument.
The second argument is a string constant that can have the following values:
string convert(type x, 'STRING')
timestamp convert(type x, 'TIMESTAMP')
boolean convert(type x, 'BOOLEAN')
convertDate
Converts a string "s" having a specific format into a DATE. The INTEGER
"n" is reserved for format values. It must equal 1.
Syntax
date convertDate(string s, integer n)
hexaToInt
intToHexa
toBoolean
boolean toBoolean(boolean b)
boolean toBoolean(string s)
Examples
toBoolean('true') = true
toBoolean('TrUe') = true
toBoolean('tru') = false
toBoolean('False') = false
toBoolean('F') = false
toBoolean('f') = false
toDate
The string s should appear as 'YYYY-MM-DD' where 'YYYY' is the year, 'MM'
is the month and 'DD' is the day.
No restrictions are imposed on month, day or year digit values. If the month
digit is greater than 12 or the day digit does not exist in the corresponding
month, the "toDate" function uses the internal calendar to convert to the
correct date. Thus, '2003-02-29' will be converted to '2003-03-01' and
'2002-14-12' to '2003-02-12'.
Syntax
date toDate(date a)
null toDate(null u)
date toDate(string s)
date toDate(timestamp m)
Examples
toDate('2003-02-12') = '2003-02-12'
toDate('2003-02-29') = '2003-03-01'
toDate('2002-14-12') = '2003-02-12'
toDate('1994-110-12') = '2003-02-12'
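The calendar normalization shown in the examples (overflowing months and days carried forward into later dates) can be reproduced with Python's datetime arithmetic. A sketch under the assumption that overflow is resolved by plain month and day carrying, not the product's implementation:

```python
from datetime import date, timedelta

def to_date(s):
    """Normalize a 'YYYY-MM-DD' string whose month or day may overflow,
    carrying the excess forward. Illustration of the documented behavior."""
    y, m, d = (int(part) for part in s.split('-'))
    y += (m - 1) // 12            # carry whole years out of the month field
    m = (m - 1) % 12 + 1
    return (date(y, m, 1) + timedelta(days=d - 1)).isoformat()

print(to_date('2003-02-29'))   # '2003-03-01'
print(to_date('2002-14-12'))   # '2003-02-12'
print(to_date('1994-110-12'))  # '2003-02-12'
```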
toDecimal
decimal toDecimal(string s)
decimal toDecimal(decimal c)
decimal toDecimal(integer n)
decimal toDecimal(null)
toDouble
double toDouble(string s)
double toDouble(decimal c)
double toDouble(integer n)
double toDouble(null u)
toInteger
integer toInteger(string s)
integer toInteger(decimal c)
integer toInteger(integer n)
integer toInteger(null u)
toNull
NULL toNull(BOOLEAN b)
NULL toNull(DATE a)
NULL toNull(DECIMAL c)
NULL toNull(DOUBLE d)
NULL toNull(NULL u)
NULL toNull(STRING s)
NULL toNull(TIME t)
NULL toNull(TIMESTAMP m)
toString
For details on date formats, see the Java 2 Platform API Reference for
the java.text.SimpleDateFormat class, at the following URL:
"http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html".
Syntax
STRING toString(BOOLEAN b)
STRING toString(DATE a)
STRING toString(DECIMAL c)
STRING toString(DOUBLE d)
STRING toString(INTEGER n)
STRING toString(NULL u)
STRING toString(STRING s)
STRING toString(TIME t)
STRING toString(TIMESTAMP m)
STRING toString(DECIMAL c, INTEGER n)
STRING toString(DOUBLE d, INTEGER n)
STRING toString(TIMESTAMP m, STRING s)
Alias str()
Examples
toString(45) = '45'
toString(45.9) = '45.9'
toString('2002-09-09') = '2002-09-09'
toString('23:08:08') = '23:08:08'
toString('2002-03-03 23:08:08.0') = '2002-03-03 23:08:08'
toString(true) = 'T'
toString(false) = 'F'
toTime
Examples of STRINGs that respect this format: '23:09:07' and '03:11:29'.
An error is returned if the format is wrong. No restrictions are imposed
on hour, minute or second values. If the number of minutes or seconds
is greater than 60 or the number of hours is greater than 24, the toTime()
function converts to the correct time.

Syntax
TIME toTime(STRING s)
TIME toTime(DATE a)
TIME toTime(TIMESTAMP m)
TIME toTime(NULL u)
Examples
toTime('02:10:09') = '02:10:09'
toTime('0:450:29') = '07:30:29'
toTimestamp
For details on date formats, see the Java 2 Platform API Reference for
the java.text.SimpleDateFormat class, at the following URL:
http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.
If the month number is greater than 12, the day of the month does not exist
in the corresponding month, the number of minutes or seconds is greater
than 60, or the hour digit is greater than 24, the toTimestamp function uses
an internal clock and calendar to convert to the correct timestamp. Thus,
'2002-09-09 25:14:180' will be converted to '2002-09-10 01:17:00'.
Syntax
TIMESTAMP toTimestamp(STRING s)
TIMESTAMP toTimestamp(STRING s1, STRING s2)
TIMESTAMP toTimestamp(TIME t)
TIMESTAMP toTimestamp(TIMESTAMP m)
TIMESTAMP toTimestamp(NULL u)
Examples
toTimestamp('2003-02-12 02:10:09') = '2003-02-12 02:10:09.0'
toTimestamp('2003-02-29 02:10:09') = '2003-03-01 02:10:09.0'
toTimestamp('2002-14-12 02:10:09') = '2003-02-12 02:10:09.0'
toTimestamp('1994-110-12 02:10:09') = '2003-02-12 02:10:09.0'
toTimestamp('2003-02-12 0:450:29') = '2003-02-12 07:30:29.0'
toTimestamp('2002-09-09 25:14:180') = '2002-09-10 01:17:00.0'
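The same carrying logic extends to the time part; '25:14:180' works out to 01:17:00 on the following day. A hedged Python sketch of the documented normalization, not the product's code:

```python
from datetime import datetime, timedelta

def to_timestamp(s):
    """Normalize 'YYYY-MM-DD hh:mm:ss' where any field may overflow,
    carrying excess months, days, hours, minutes and seconds forward.
    Illustration of the documented behavior only."""
    date_part, time_part = s.split(' ')
    y, mo, dy = (int(p) for p in date_part.split('-'))
    h, mi, se = (int(p) for p in time_part.split(':'))
    y += (mo - 1) // 12                  # carry whole years out of the month
    mo = (mo - 1) % 12 + 1
    return datetime(y, mo, 1) + timedelta(days=dy - 1, hours=h,
                                          minutes=mi, seconds=se)

print(to_timestamp('2002-09-09 25:14:180'))  # 2002-09-10 01:17:00
print(to_timestamp('2003-02-12 0:450:29'))   # 2003-02-12 07:30:29
```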
val
The string "s" must be in the decimal number format where the period is used
as a separator for the decimal portion. An error is returned if s is not in the
decimal number format.
val('2987.9') = 2987.9
val('UUYGV76') = 0.0
21 SQL syntax reference
SQL syntax overview
For historical reasons and to support hierarchies, catalog names start with
"/". The "/" character is also the delimiter between two levels in a hierarchy.
c.s.t /c/s/t
"c"."s"."t" /c/s/t
"/c1/c2"."s"."t" /c1/c2/s/t
"/c1/c2".s.t /c1/c2/s/t
If there is a catalog or schema defined by default, you can omit the name
of the catalog or schema in the reference to the table.
For example, to reference the table "/c1/c2".s.t when the default catalog is "/c1/c2" and the default schema is "s", you can use the name t.
Catalogs
For details on the JDBC URL, see JDBC URL syntax on page 358.
For details on specifying a default catalog name for users, see Modifying
properties of a user account with SQL on page 517.
Catalog hierarchy
If you specify a catalog and schema in the JDBC connection URL, you need
only query relative names below the specified schema. If you specify nothing,
you have access to a global view of the system.
"/" refers to the root catalog.
"/c1" refers to the catalog c1 under the root.
"/c1/c2" refers to the catalog c2 under c1.
Default catalogs
You can specify a default catalog either when you connect to Data Federator
Query Server or by calling the java.sql.Connection#setCatalog(String
catalog) method. Specifying a default catalog allows you to send queries
without fully qualifying table names.
Schemas
If the catalog is "/" and the schema is "s1", the absolute path is /s1.
If the catalog is "/c1" and the schema is "s1", the absolute path is /c1/s1.
If the catalog is "/c1/c2" and the schema is "s1", the absolute path is /c1/c2/s1.
Tables
A table is attached to one schema. The table name must be unique within a
schema to which it belongs.
When neither the default catalog nor the default schema are set, table
identifiers are constructed by giving the catalog name, the schema name
and the table name. In standard SQL syntax, the table identifier is constructed
by concatenating the catalog name, the schema name and the table name
separated by a "." (period).
"/".s1.t1
Note:
This syntax is "/" "s1" "t1"
equivalent to the
absolute path syn-
tax: /s1/t1.
c1.s1.t1
Note:
This syntax is "c1" "s1" "t1"
equivalent to the
absolute path syn-
tax: /c1/s1/t1
"/c1/c2".s1.t1
Note:
This syntax is
equivalent to the "/c1/c2" "s1" "t1"
absolute path syn-
tax:
/c1/c2/s1/t1
When default catalog and/or default schemas are set, catalog names and
schema names can be omitted in the table identifier:
To reference the table "/c1/c2".s1.t1:
• if the default catalog is "/c1/c2" and the default schema is "s1", use t1
• if the default catalog is "/c1/c2" and no default schema is set, use s1.t1
Columns
To reference the if the default and if the default then use the
column... catalog is... schema is... qualified name...
/c1/s1/t1.col1
c1.s1.t1.col1
"/c1" s1/t1.col1
c1.s1.t1.col1
Correct: "c1/c2"."sche+ma"."Tab-le1".col1
Incorrect: /c1/c2.sche+ma.Tab-le1.col1
Related Topics
• Parameters in the JDBC connection URL on page 361
In a traditional database, length, precision and scale are set when creating
a column since they define the properties of the stored value. Data Federator
is a virtual database and does not store any values. Thus length, precision
and scale are not defined at schema definition time. Their values are
dynamically inferred from the contributing source tables.
Here is a list of the data types that are known to Data Federator Query Server.
• BIT
Since all databases do not use the same data types or interpret them in the
same way, Data Federator has standardized a mapping between the common
database types and Data Federator Query Server.
The following table details the correspondence between the internal data
types used in Data Federator and the JDBC data types returned by the Data
Federator JDBC driver.
BIT BIT
DATE DATE
TIMESTAMP TIMESTAMP
TIME TIME
INTEGER INTEGER
DOUBLE DOUBLE
DECIMAL DECIMAL
VARCHAR VARCHAR
NULL NULL
When accessing a JDBC data source, Data Federator does the mapping of
the JDBC types returned by the JDBC driver to the internal Data Federator
data types. The following table details the correspondence between the
JDBC data types and the Data Federator type that is used for the mapping.
BIT BIT
DATE DATE
TIME TIME
TIMESTAMP TIMESTAMP
Data Federator Query Server converts TIME data into TIMESTAMP data by
setting the date to '1970-01-01'.
When two expressions have different data types, the data type of the result
of an expression that combines these two expressions with an arithmetic
operator is determined by applying the data type precedence.
The table below gives the vector (length, precision, scale) for all Data
Federator expressions.
BIT (1, 1, 0)
DATE (10, 0, 0)
TIMESTAMP (29, 9, 0)
TIME (8, 0, 0)
NULL (0, 0, 0)
DECIMAL Inferred
Related Topics
• Configuring the precision and scale of DECIMAL values returned from
Data Federator Query Server on page 535
Expressions
Functions in expressions
For more information on the specific functions that you can use in
expressions, see Function reference on page 624.
Operators in expressions
Names of identifiers and constants must start with a letter and use only letters
and underscores. You can, however, use any characters in an identifier or
constant name if you enclose it in double quotes (").
The following table describes the Data Federator syntax for identifiers and
numeric constants:
Integer
  Format: INTEGER: nnn (one or more digits)
  Examples: 12, 14, 15

Double or Decimal
  Format: DOUBLE/DECIMAL: nn.nn (one or more digits, followed by a point, followed by one or more digits)
  Examples: 12.3, 13.222, 11.3

Date
  Format: DATE: {d 'yyyy-mm-dd'}
  Example: {d '2005-03-28'}

Timestamp
  Format: TIMESTAMP: {ts 'yyyy-mm-dd hh:mm:ss.ffff'}
  Example: {ts '2005-03-28 01:11:34.23222'}

Identifier with special characters
  Format: any string inside double quotes
  Example: "!%any name you like$#$%"
Comments
To add comments to the SQL Statements precede the text with a double
hyphen (--), or with a pound sign (#). Comments terminate at the end of the
line.
Statements
You can write SQL queries to retrieve or manipulate data that is stored in
Data Federator Query Server. A query can be issued in several forms:
Related Topics
• Data Federator Administrator overview on page 384
SELECT Statement
Although queries have various ways of interacting with a user, they all
accomplish the same task: They present the result set of a SELECT statement
to the user. Even if the user never specifies a SELECT statement, the client
software transforms each user query into a SELECT statement that is sent
to Data Federator Query Server.
The SELECT statement retrieves data from Data Federator Query Server
and returns it to the user in one or more result sets. A result set is a tabular
arrangement of the data from the SELECT. Like an SQL table, the result set
is made up of columns and rows.
The full syntax of the SELECT statement is complex, but most SELECT
statements describe four primary properties of a result set:
• The number and attributes of the columns in the result set. The following
attributes must be defined for each result set column:
• The data type of the column.
• The size of the column, and for numeric columns, the precision and
scale.
• The source of the data values returned in the column.
• The tables from which the result set data is retrieved, and any logical
relationships between the tables.
• The conditions that the rows in the source tables must meet to qualify for
the SELECT. Rows that do not meet the conditions are ignored.
• The sequence in which the rows of the result set are ordered.
Statement
SELECT ProductID, Name, ListPrice
FROM Production.Product
WHERE ListPrice > $40
ORDER BY ListPrice ASC
• SELECT clause
The column names listed after the SELECT keyword (ProductID, Name,
and ListPrice) form the select list. This list specifies that the result set
has three columns, and each column has the name, data type, and size
of the associated column in the Product table. Because the FROM clause
specifies only one base table, all column names in the SELECT
statement refer to columns in that table.
• FROM clause
The FROM clause lists the Product table as the one table from which
the data is to be retrieved.
• WHERE clause
The WHERE clause specifies that the only rows in the Product table that
qualify for this SELECT statement are those in which the value of the
ListPrice column is more than $40.
• ORDER BY clause
The ORDER BY clause specifies that the rows of the result set are sorted
in ascending order on the ListPrice column.
Here is the complete list of the SQL statements that are supported by Data
Federator Query Server. A particular set of SELECT statements is supported
and, unless noted, the entire standard SQL-92 syntax. In particular, both the
SQL-92 grammar and the JDBC escape syntax for outer joins are supported.
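For example, the same left outer join can be written in either supported syntax. The table and column names below are illustrative:

```sql
-- SQL-92 grammar for an outer join
SELECT c.Name, o.OrderID
FROM Customers c LEFT OUTER JOIN Orders o
  ON c.CustomerID = o.CustomerID

-- Equivalent JDBC escape syntax for the same outer join
SELECT c.Name, o.OrderID
FROM {oj Customers c LEFT OUTER JOIN Orders o
  ON c.CustomerID = o.CustomerID}
```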
CALL Statement
GRANT Statement
REVOKE Statement
SELECT Statement
Note:
There is a particular set of compatible statements.
Caution:
Data Federator does not support correlated queries, MINUS, INTERSECT, and
EXCEPT.
The following table lists the SQL-92 statements that are not supported by
Data Federator Query Server.
Column Constraints
CONNECT Statement
CLOSE Statement
COMMIT Statement
DELETE Statement
DISCONNECT Statement
OPEN Statement
INSERT Statement
DESCRIBE Statement
EXECUTE Statement
FETCH Statement
PREPARE Statement
ROLLBACK Statement
SET TRANSACTION
UPDATE Statement
WHENEVER Statement
Syntax key
The table below explains the conventions used in the command line syntax.
Symbol Description
text a keyword
The following section details the complete SQL Select clause grammar used
with Data Federator.
start
query [ ; ] EOF
query
WITHView
SQLSelectFromWhere
fromClause
FROM tableReferenceList
tableReferenceList
tableReference
tableReferenceAtomicTerm
{ tablePrimary
| jdbcOuterJoin
| "(" query ")" [ [ AS ] identifier ]
| "(" tableReference ")" [ [ AS ] identifier [ "(" projectAlias
[ , projectAlias ]... ")" ] ] }
tablePrimary
table [ [ AS ] tableAlias ]
table
{
{ identifier | delimitedIdentifier }
[ "." { identifier | delimitedIdentifier } ]
[ "." { identifier | delimitedIdentifier } ]
| URLIDENTIFIER
}
qualifiedJoinPart
jdbcOuterJoin
joinType
outerJoinType
joinSpecification
{ joinCondition | namedColumnsJoin }
joinCondition
ON disjunction
namedColumnsJoin
addUsing
columnName
projectAlias
{ identifier | delimitedIdentifier }
selectExpression
{ tableStar | disjunction
[ AS { identifier | delimitedIdentifier } ]
}
tableStar
functionTermJdbc
functionTerm
analyticFunctionPart
disjunction
conjunction
escapeChar
quotedString
quotedString
QUOTED_STRING_LITERAL
delimitedIdentifier
DELIMITED_IDENTIFIER
IDENTIFIER
columnName
{ identifier | delimitedIdentifier }
negationTerm
comparisonTerm
[ COMPARISON_OPERATOR
{ additiveTerm | { ANY | SOME | ALL } "(" query ")" }
| BETWEEN additiveTerm AND additiveTerm
| inValuesOrQuery
| LIKE additiveTerm [ ESCAPE escapeChar ]
| IS { NULL_LITERAL | NOT NULL_LITERAL }
| NOT { BETWEEN additiveTerm AND additiveTerm
| LIKE additiveTerm [ ESCAPE escapeChar ]
}
]
inValuesOrQuery
inValues
additiveTerm
factor
unaryTerm
variable
table [ . columnName ]
constant
atomicTerm
{
functionTerm [ analyticFunctionPart ] | functionTermJdbc
| variable
| constant
| "(" disjunction ")"
| caseExpression
| coalesceExpression
}
caseExpression
CASE {
additiveTerm WHEN additiveTerm THEN additiveTerm
[ WHEN additiveTerm THEN additiveTerm ]...
|
WHEN comparisonTerm THEN additiveTerm
[ WHEN comparisonTerm THEN additiveTerm ]
}
[ ELSE additiveTerm ]
END
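Both forms of the caseExpression production above can be sketched as follows; the table and column names are hypothetical:

```sql
-- Simple form: CASE additiveTerm WHEN ... THEN ...
SELECT CASE Status WHEN 1 THEN 'active'
                   WHEN 2 THEN 'inactive'
                   ELSE 'unknown' END
FROM Accounts

-- Searched form: CASE WHEN comparisonTerm THEN ...
SELECT CASE WHEN ListPrice > 40 THEN 'premium'
            ELSE 'standard' END
FROM Product
```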
coalesceExpression
tableAlias
{ delimitedIdentifier | identifier }
startRoutineQuery
procedureCall [ ; ] EOF
procedureCall
procedureArguments
procedureArgument
procedureConstant
Non-terminals
Non-terminals are syntax elements that follow a specific syntax. The following
section details the syntax for non-terminals.
Start
dataDefinitionQuery EOF
dataDefinitionQuery
createUser
adminFlag
[
ADMIN
]
dropUser
alterUser
passwordSetting
propertySetting
userPropertyName userPropertyValue
userPropertyName
{ string | identifier }
createRole
dropRole
revokeRole
grantRole
roles
identifiers
grantPrivilege
revokePrivilege
privileges
objectPrivileges
checkAuthorization
objectName
tableName
tableName
identifier | URLIDENTIFIER
grantees
authorizationId
PUBLIC | identifier
action
columnNames
createResource
dropResource
alterResource
identifier
resourcePropertyValue
{ string | identifier }
referencedResource
identifier
identifiers
string
QUOTED_STRING_LITERAL
identifier
{ IDENTIFIER | DQUOTED_STRING_LITERAL }
createResource
CREATE RESOURCE identifier
dropResource
DROP RESOURCE identifier
alterResource
ALTER RESOURCE identifier {SET resourcePropertyName
resourcePropertyValue | RESET resourcePropertyName }
resourcePropertyName
identifier
resourcePropertyValue
{string | identifier}
identifiers
identifier [ , identifier ]...
string
QUOTED_STRING_LITERAL
identifier
{IDENTIFIER | DQUOTED_STRING_LITERAL}
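The resource statements above can be combined as in the following sketch. The resource name and the property name are hypothetical; the property names actually available depend on the resource being configured:

```sql
CREATE RESOURCE myConnector

-- Set, then reset, a hypothetical resource property
ALTER RESOURCE myConnector SET loginTimeout '30'
ALTER RESOURCE myConnector RESET loginTimeout

DROP RESOURCE myConnector
```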
Chapter 22 System table reference
Metadata system tables
• /leselect/system/catalogs
• /leselect/system/schemas

Function system tables
• /leselect/system/functionSignatures

User system tables
• /leselect/system/users
• /leselect/system/userProperties
• /leselect/system/roles
• /leselect/system/roleMembers
• /leselect/system/userRoles
• /leselect/system/permissions

Resource system tables
• /leselect/system/resources
• /leselect/system/resourceProperties

• /leselect/system/dual
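Assuming the system table URLs listed above can be used as table identifiers (the grammar's URLIDENTIFIER), a system table can be queried like an ordinary table. This is a sketch, not a definitive invocation:

```sql
-- List all users known to Data Federator Query Server
SELECT * FROM /leselect/system/users
```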
catalogs
schemas
systemTables
NULL means no access restriction.
7 ESTIMATED_ROW_COUNT DOUBLE an estimation of the number of rows in the system table
procedures
2 PROCEDURE_DESCRIPTION VARCHAR a short description of the procedure
procedureSignatures
2 SIGNATURE_SEQ INTEGER the sequence number within the signature
3 PARAMETER_SEQ INTEGER sequence number of the parameters within the signature
functions
2 SIGNATURE_SEQ INTEGER sequence number within signature
5 PARAMETER_SEQ INTEGER sequence number for parameter within signature
functionSignatures
2 SIGNATURE_SEQ INTEGER sequence number within signature
5 PARAMETER_SEQ INTEGER sequence number for parameter within signature
users
userProperties
roles
roleMembers
userRoles
permissions
resources
resourceProperties
dual
1 DUMMY VARCHAR single dummy column
connections
2 CONNECTION_ID VARCHAR the identifier of this connection
23
23 Stored procedure reference
List of stored procedures
Related Topics
• Using stored procedures to retrieve metadata on page 533
getTables

Parameters:
• VARCHAR catalog: a catalog name
• VARCHAR tableNamePattern: a table name pattern. You can use a pattern to match the table name as it is stored in the database.
• VARCHAR types: a list of table types to include
Related Topics
• Using patterns in stored procedures on page 762
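Assuming stored procedures are invoked with the supported CALL statement and that NULL arguments are accepted for unrestricted parameters, a call might look like this sketch:

```sql
-- Retrieve all tables whose name starts with 'CUST', in any catalog
CALL getTables(NULL, 'CUST%', NULL)
```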
getCatalogs

Returns: The result set returned by this stored procedure is identical to the one returned by the JDBC method java.sql.DatabaseMetaData#getCatalogs().
getKeys

Returns:
• VARCHAR KEY_NAME: the name of the key
• VARCHAR COLUMN_NAME: the name of the column to which this key refers
• INTEGER KEY_SEQ: sequence number within key
• BIT IS_PRIMARY: specifies if this key is a primary key

Parameters:
• VARCHAR tableURL: a table URL
getFunctionsSignatures

Returns:
• VARCHAR FUNCTION_NAME: the name of the function
• INTEGER SIGNATURE_SEQ: the sequence number within the signature
• INTEGER RETURN_DATA_TYPE: the data type
• VARCHAR RETURN_TYPE_NAME: the data type name
• INTEGER PARAMETER_SEQ: the parameter sequence
• INTEGER PARAMETER_DATA_TYPE: the parameter data type
• VARCHAR PARAMETER_TYPE_NAME: the name of the parameter type
getColumns

Parameters:
• VARCHAR catalog: a catalog name; must match the catalog name as it is stored in the database; "" retrieves those without a catalog
• VARCHAR tableNamePattern: a table name pattern; must match the table name as it is stored in the database
• VARCHAR columnNamePattern: a column name pattern; it must match the column name as it is stored in the database
getSchemas

Returns: The result set returned by this stored procedure is identical to the one returned by the JDBC method java.sql.DatabaseMetaData#getSchemas().
getForeignKeys

Returns:
• VARCHAR FK_NAME: the foreign key name
• VARCHAR RK_NAME: the referenced key name
• VARCHAR RKTABLE_NAME: the referenced key table name
• VARCHAR FKCOLUMN_NAME: the name of the column this foreign key refers to
• VARCHAR RKCOLUMN_NAME: the name of the column associated to the referenced key
• INTEGER KEY_SEQ: sequence number within key

Parameters:
• VARCHAR tableUrl: a table URL
refreshTableCardinality
Computes and stores the estimated row count for the tables matching the
given pattern in the given catalog or schema.
Parameters:
• VARCHAR catalogName: the catalog name
• VARCHAR schemaName: the schema name
• VARCHAR tablePattern: the table pattern, such as 'D%'
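For example, using the CALL statement (the catalog and schema names below are hypothetical):

```sql
-- Recompute row-count estimates for all tables starting with 'D'
CALL refreshTableCardinality('myCatalog', 'mySchema', 'D%')
```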
clearMetrics
Clears all the metrics that were stored by any previous calls to
refreshTableCardinality.
addLoginDomain
Registers a new login domain with the specified name and description.
This routine can be called by an ADMIN user or by a user with the privilege:
CREATE ANY LOGINDOMAIN
Parameters:
• VARCHAR loginDomainName: a name for this login domain
• VARCHAR loginDomainDescription: a description of this login domain
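For example (the domain name and description below are illustrative):

```sql
CALL addLoginDomain('oracleProduction', 'Production Oracle server for sales data')
```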
delLoginDomains
Deletes all registered login domains matching the specified name pattern.
For details on patterns, see Using patterns in stored procedures on page 762.
This routine can be called by an ADMIN user or by a user with the DROP
LOGINDOMAIN privilege on each login domain matching the pattern.
When this routine is executed, all credentials and all privileges associated
with the deleted login domains are also removed.
Parameters:
• VARCHAR loginDomainNamePattern: a pattern for the login domain name
alterLoginDomain
Modifies an existing login domain (with the specified name), setting a new
description.
This routine can be called by an ADMIN user or by a user with the privilege
ALTER LOGINDOMAIN ON loginDomainName.

Parameters:
• VARCHAR loginDomainName: the name of an existing login domain
• VARCHAR loginDomainDescription: a new description of this login domain
getLoginDomains
Parameters:
• VARCHAR loginDomainNamePattern: a pattern for the login domain name
addCredential
Adds a new credential for the current user and the specified login domain
name. Or, adds a new credential for the specified authorizationID (user or
role) and login domain name.
The credential is divided into a public part and private part. The private part
will be encrypted when stored in the credential store.
Typically, the public part is the username, and the associated private part
is the password.
Parameters:
• VARCHAR authorizationID: the Data Federator user or role name, or the keyword 'PUBLIC' to denote any user
• VARCHAR loginDomainName: a login domain name
• VARCHAR public_credential: the public part of the credential
• VARCHAR private_credential: the private part of the credential
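For example, to store a credential that any user can use on a login domain. The domain name, username, and password below are illustrative only:

```sql
CALL addCredential('PUBLIC', 'oracleProduction', 'scott', 'tigerPassword')
```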
delCredentials
Deletes credentials matching the specified login domain name pattern for
the current user. Or, deletes credentials matching the specified
authorizationID pattern and login domain name pattern.

Returns:
• CREDENTIALS_DELETED: the number of deleted credentials

Parameters:
• VARCHAR authorizationIDPattern: a pattern for the user or role name
• VARCHAR loginDomainNamePattern: a pattern for the login domain name
alterCredential

Modifies an existing credential for the current user and the specified login
domain name. Or, modifies an existing credential for the specified
authorizationID (user or role) and login domain name.

The credential is divided into a public part and a private part. The private
part will be encrypted when stored in the credential store.

Typically, the public part is the username, and the associated private part
is the password.

Parameters:
• VARCHAR authorizationID: the Data Federator user or role name, or the keyword 'PUBLIC' to denote any user
• VARCHAR public_credential: the public part of the credential
• VARCHAR private_credential: the private part of the credential
getCredentials

Returns all credentials matching the specified login domain name pattern.
Or, returns all credentials matching the specified authorizationID pattern
and login domain name pattern.

Returns:
• PUBLIC_CREDENTIAL: the public part of the credential

Parameters:
• VARCHAR authorizationIDpattern: the Data Federator user or role name, or the keyword 'PUBLIC' to denote any user
• VARCHAR loginDomainNamePattern: a pattern for the login domain name
%ment
matches:
apartment
treatment
payment

_____ment
matches:
apartment
treatment
but not:
payment

AAA%port_100
matches:
AAAairportX100
AAAseaportB100
AAAport8100
but not:
AAAport100
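These patterns apply wherever a stored procedure accepts a pattern parameter; for example (the domain names are hypothetical):

```sql
-- Delete every login domain whose name ends in 'Test'
CALL delLoginDomains('%Test')
```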
Chapter 24 Glossary
a table in a datasource;
Chapter 25 Troubleshooting
Installation
Installation
This section lists common questions about installation.
bash-2.05$ sh ./install.bin
Preparing to install...
Extracting the installation resources from the installer
archive...
Configuring the installer for this system's environment...
Launching installer...
Invocation of this Java Application has caused an
InvocationTargetException. This application will now exit.
(LAX)
Cause
When installing from a remote machine, you may get this error because the
host cannot detect the graphic environment.
Action
Run the installer in console mode:
sh ./install.bin -i console
I get the error "input line is too long" when installing the services on Windows
2000.
Cause
Action
You need to rename the jar files in the directory bin/LeSelect/extlib, so that
their names are shorter.
If you use McAfee Anti-Virus, you must enable 'On-Access Scan' before you
start Data Federator.
If the 'On-Access Scan' was not enabled, stop the Data Federator Designer
service, enable the 'On-Access Scan' and restart the Designer service.
When using Data Federator Administrator, I get an error like Missing method
or missing parameter converters.
Cause
If you install a version of Data Federator and then, within three days, install
a newer version or a hotfix, your browser's cache may not be cleared. This
can lead to errors in which new Data Federator Administrator functions try
to use new HTML pages that do not exist in the browser cache.
Action
In Internet Explorer 7.0, for example, you can clear the cache as follows:
choose the Tools menu, click Delete Browsing History..., then in the
Temporary Internet Files section, click Delete Files.
In Internet Explorer 6.0: choose the Tools menu, click Internet Options,
then, in the General tab, in the Temporary Internet Files section, click
Delete Files.
In Firefox 2.0: choose the Tools menu, click Options, then, in the Privacy
tab, in the Private Data section, click Clear Now....
Cause
When upgrading Windows to a new Service Pack or version, the home folder
for the system user may change.
Action
If you upgrade your version of Windows or install a Windows service pack,
for example, find the system user's home directory on your version of
Windows as follows:
• ensure you are signed in as an administrator
• stop and re-start the Connection Dispatcher service
• select the Include Hidden and System Files option of the Windows
Search tool
• locate the .datafederator directory, and
• copy the dispatcher_servers_conf.xml file from your old .datafeder
ator directory to the new one.
Note:
If you do not do this, the affected Data Federator services will start with an
empty configuration after the Windows upgrade.
Datasources
This section lists common questions about working with datasources.
Cause
Action
Cause
Action
Make sure that in the separator text box, you did not enter a space before
or after the separator itself.
For example:
• "; " matches a semicolon followed by a space

If your separator is a semicolon only, the "; " value above will not match.
Cause
Action
Connection parameters
You receive an error message when you attempt to add a datasource, such
as JDBC from defined resource. The message reads "The connection
could not be established. Either the connection parameters are not correct
or the server is not accessible."
Cause
You incorrectly entered the syntax or did not respect the URLTEMPLATE
parameters. You did not set the transactionIsolation parameter correctly.
Action
Cause
Action
Mappings
This section lists common questions about working with mappings.
Cause
If you add a lookup table after you add a mapping, you must add your
datasource table to your mapping a second time. Then, Data Federator
re-imports all the lookup tables linked to your datasources.
Action
I test for a BOOLEAN value in a mapping formula and the formula does not
work.
Cause
Did you specify "= true" in the predicates that return BOOLEAN values?
Certain data access systems return the values "1" and "0", or others, instead
of true and false. When you test the value, you should explicitly test for
equivalence to "TRUE" rather than relying on the returned value.
Action
Instead of:
if( match(S20.HomePhone,'0[0-68].*') ...
use:
if( match(S20.HomePhone,'0[0-68].*') = true ...
Cause
You either:
• added a relationship which causes the relationship path to return to a
source table that has been reached before, or
• added a relationship which causes the relationship path starting from the
very first source table in the path to eventually return to itself.
Action
The diagram below shows relationship paths that cause the message "These
source relationships introduce cycles in the graph" to be displayed:
The lines with the red tick show a valid relationship. The lines with a black
cross show relationships that will cause the message "These source
relationships introduce cycles in the graph" to be displayed.
The first line including a black cross introduces a cycle in the graph because
it adds a relationship which causes the relationship path to return to a source
table that has been 'reached' before, namely, the second bar from the left.
The second line including a black cross introduces a cycle in the graph
because it adds a relationship which causes the relationship path which
starts from the very first source table in the path to eventually return to itself.
The following error is displayed: The table ''{t}'', used in the mapping rule
''{m}'' is no longer available.
Cause
The table has been deleted from its datasource, though not necessarily from
the mapping rule, keeping all relationships between this and other tables in
the mapping rule intact.
Action
Either:
Note:
This procedure also deletes all relationships between this table and all other
tables in the mapping rule.
Or:
Related Topics
• Adding a table to a mapping rule on page 282
• Replacing a table in a mapping rule on page 284
• Deleting a table from a mapping rule on page 286
Cause
Either the source table has been added to the mapping rule and its columns
map the key of the target table, or the source table links two core tables.
Action
Set the source table as a core table, or change the key of the target table.
Related Topics
• Choosing a core table on page 261
• Defining key constraints for a target table on page 295
• Configuring meanings of table relationships using core tables on page 261
The source table is connected to another core table through one or more
non-core tables.
Action
Set the source table as a core table, or change the key of the target table.
Related Topics
• Choosing a core table on page 261
• Defining key constraints for a target table on page 295
• Configuring meanings of table relationships using core tables on page 261
Cause
The target table has no key columns, and no source table has been set
as core.
Action
Set the source table as a core table, or set a key of the target table.
Related Topics
• Choosing a core table on page 261
• Defining key constraints for a target table on page 295
• Configuring meanings of table relationships using core tables on page 261
Domain tables
Cause
Have you removed all the links to the domain table? You must first make
sure that no other tables use the domain table.
For example, in a target table, columns that are of type "enumerated" may
reference your domain table.
Action
You must change these references before you can delete the domain table.
Cause
Did you type a name for your table? It is possible to create a table with no
name, but you cannot select it.
Action
Cause
Data Federator allows blank characters in column names, but the spaces
may be difficult to see on the screen.
Action
Cause
Action
I get the exception: Parser has reached the entity expansion limit "xxx" set
by the Application.
Cause
You may get this error if you manipulate large XML files within Data Federator,
for example the configuration files used to define connectors to sources of
data. This means that the number of XML entities used in your file has
exceeded the default limit. The default allows 1000000 entities.
Action
Cause
Action
Accessing data
This section lists common questions about accessing data on Data Federator
Query Server.
I added target tables, but I cannot access them on Data Federator Query
Server.
Cause
Action
I added target tables and deployed the project, but I cannot access the target
tables on Data Federator Query Server.
Cause
Do your target tables have the status "mapped"? Are your mapping rules in
the status "completed"?
Action
Data Federator only deploys target tables and mapping rules that have the
following status:
• Target tables are deployed when they have status "mapped".
• Mapping rules are deployed when they have status "completed".
To set the status of a target table to "mapped", make sure its mapping rules
have the status "completed". For details on setting the status of target tables,
see Determining the status of a mapping on page 220.
on Windows when the Query Server service was started with a user account
that does not have sufficient rights to access the CSV files over the network.
Cause
Action
What are the Data Federator Windows services and how do I start/stop them?
For a description of the Data Federator Windows services, see the Data
Federator Installation Guide.
Networking
This section lists common questions about networking. It covers how to
configure Data Federator Server if your server has multiple network cards
and / or independent sub-networks, or if your computer is not networked
correctly.
Network Connections
If the client is unable to contact the given IP, errors of the type [ThinDriver]
Cannot reach server. Reason: Retries exceeded, couldn't reconnect
to IP address or [ThinDriver] Server has probably been
restarted or clients entities on server have expired as no
activity was recorded on them for a long time. are displayed.
Note:
The correct IP is the one that the clients may use to contact the server.
Data Federator Query Server supports only one network interface. That is,
only one IP address on which the server can be contacted by clients.
Configurations with multiple clients in multiple independent sub-networks
are not supported. If you have a multi-interface configuration on your server,
you should explicitly specify the public IP to be used by the server.
All clients that can contact this IP can create connections to the server.
Action
Note:
By default, following installation of Query Server, this parameter is not set
and the server tries to identify the IP from the computer's network
configuration. This may sometimes fail, and in some cases, after a system
restart, the IP may change randomly if the computer's network configuration
does not explicitly specify what the public IP/network interface is.
Chapter 26 Data Federator logs
About Data Federator logs
This file also contains the log of the JDBC connection with Data Federator
Query Server and the log of the persistence layer.
• [data-federator-installation-dir]/tomcat/logs/*.log
• [data-federator-installation-dir]/tomcat/logs/localhost_datamap_log.[yyyy-mm-dd].txt
#log4j.configuration=conf/logging_all.properties
#log4j.configuration=conf/logging_level4.properties
#log4j.configuration=conf/logging_level3.properties
#log4j.configuration=conf/logging_level2.properties
#log4j.configuration=conf/logging_level1.properties
log4j.configuration=conf/logging.properties
log4j.configuration=conf/logging_all.properties
#log4j.configuration=conf/logging_level4.properties
#log4j.configuration=conf/logging_level3.properties
#log4j.configuration=conf/logging_level2.properties
#log4j.configuration=conf/logging_level1.properties
#log4j.configuration=conf/logging.properties
• data-federator-installation-dir/LeSelect/startup.sh
data-federator-installation-dir/LeSelect/log/*.log
Appendix A Get More Help
http://www.businessobjects.com/support/
For more information, contact your local sales office, or contact us at:
http://www.businessobjects.com/services/consulting/
http://www.businessobjects.com/services/training
documentation@businessobjects.com
Note:
If your issue concerns a Business Objects product and not the documentation,
please contact our Customer Support experts. For information about
Customer Support visit: http://www.businessobjects.com/support/.
E
effect of changes 330
exporting
   projects 313

F
fault tolerance
   Query Server 601
filter formulas
   editing 619
   syntax 619
filtering 619
   constraint violations 302
   datasource columns 235, 236, 239, 241
filters
   adding to mapping rules 235, 236, 239, 241
formulas
   filters 619
   if-then 620
   relationships 621
function
   conversion type 671
   string type 647
functions
   abs 628
   acos 629
   ascii 647
   asin 629
   atan 629
   atan2 629
   cast 671
   ceiling 630
   char 647
   concat 648
   containsonlydigits 648
   convert 672
   convertDate 674
   cos 630
   cot 630
   curdate 637
   curtime 638
   database 667
   date/time 637
   dayname 638
   dayofmonth 639
   dayofweek 639
   dayofyear 639
   decrementdays 640
   degrees 631
   exp 631
   floor 631
   hexaToInt 675
   hour 640
   ifElse 668
   ifNull 669
   incrementdays 640
   insert 648
   intToHexa 675
   isLike 649
   lcase 665
   left 650, 651
   leftStr 650, 651
   len 651
   length 651
   locate 658
   log 632
   log10 632
   lPad 652
   lTrim 652
   match 653
   min 669
   minute 641
   mod 632
   month 641
   monthname 641
   now 642
   numeric 628
   nvl 669
   permute 654
   permuteGroups 657
U
unlocking
   projects 43
url 479
   syntax 352
URLTEMPLATE property 479
   databases 479
user account
   default 40
user accounts
   managing 507
   managing using SQL 516
   mapping to login domains 526
   properties 511
user interface
   datasources 67
   mappings 216
   overview 31
   projects 308
user management 728
user properties
   default 517
users
   deleting 516
   dropping 516
   managing 507, 514
   managing using SQL 516

W
web service
   adding datasources 184
   properties for connectors 480
web service datasources
   using deployment contexts 184
web services
   adding datasources 183, 184
   adding tables 187
   assigning constant values to parameters 188
   assigning dynamic values to parameters 188
   authenticating on 185
   authenticating on server 185
   connectors 479
   datasources from 182
   extracting operations from 183
   propagating values to parameters 189
   responses 187
   testing datasources 201
Windows services 591
   not installed 590, 591
workflow
   datasources 67
   mappings 220
   projects 309
working directory
   configuring 573
WSDL files
   choosing 183, 184
   extracting operations from 183
   testing 201

X
XML
   adding datasources 181
XML datasources
   using deployment contexts 181
xml files
   datasources from 179