
Data Federator User Guide

BusinessObjects Data Federator XI 3.0


Copyright © 2008 Business Objects, an SAP company. All rights reserved. Business Objects
owns the following U.S. patents, which may cover products that are offered and
licensed by Business Objects: 5,295,243; 5,339,390; 5,555,403; 5,590,250;
5,619,632; 5,632,009; 5,857,205; 5,880,742; 5,883,635; 6,085,202; 6,108,698;
6,247,008; 6,289,352; 6,300,957; 6,377,259; 6,490,593; 6,578,027; 6,581,068;
6,628,312; 6,654,761; 6,768,986; 6,772,409; 6,831,668; 6,882,998; 6,892,189;
6,901,555; 7,089,238; 7,107,266; 7,139,766; 7,178,099; 7,181,435; 7,181,440;
7,194,465; 7,222,130; 7,299,419; 7,320,122 and 7,356,779. Business Objects and
its logos, BusinessObjects, Business Objects Crystal Vision, Business Process
On Demand, BusinessQuery, Cartesis, Crystal Analysis, Crystal Applications,
Crystal Decisions, Crystal Enterprise, Crystal Insider, Crystal Reports, Crystal
Vision, Desktop Intelligence, Inxight and its logos, LinguistX, Star Tree, Table
Lens, ThingFinder, Timewall, Let There Be Light, Metify, NSite, Rapid Marts,
RapidMarts, the Spectrum Design, Web Intelligence, Workmail and Xcelsius are
trademarks or registered trademarks in the United States and/or other countries
of Business Objects and/or affiliated companies. SAP is the trademark or registered
trademark of SAP AG in Germany and in several other countries. All other names
mentioned herein may be trademarks of their respective owners.

Third-party Contributors: Business Objects products in this release may contain
redistributions of software licensed from third-party contributors. Some of these
individual components may also be available under alternative licenses. A partial
listing of third-party contributors that have requested or permitted
acknowledgments, as well as required notices, can be found at:
http://www.businessobjects.com/thirdparty

2008-10-09
Contents
Chapter 1 Introduction to Data Federator 25
The Data Federator application.................................................................26
An answer to a common business problem.........................................26
Fundamental notions in Data Federator....................................................29
Data Federator Designer: design time.................................................30
Data Federator Query Server: run time................................................30
Important terms....................................................................................30
Data Federator user interface....................................................................31
Overview of the methodology....................................................................33
Adding the targets................................................................................35
Adding the datasources........................................................................35
Mapping datasources to targets...........................................................36
Checking if data passes constraints.....................................................36
Deploying the project............................................................................37

Chapter 2 Starting a Data Federator project 39


Working with Data Federator.....................................................................40
Login and passwords for Data Federator..................................................40
Adding new users......................................................................................40
Starting a project........................................................................................41
Adding a project...................................................................................41
Opening a project.................................................................................41
Deleting Data Federator projects.........................................................42
Closing Data Federator projects...........................................................42
Unlocking projects................................................................................43

Data Federator User Guide 3



Chapter 3 Creating target tables 45


Managing target tables..............................................................................46
Adding a target table manually.............................................................46
Adding a target table from a DDL script...............................................47
Adding a target table from an existing table.........................................47
Changing the name of a target table....................................................48
Displaying the impact and lineage of target tables...............................48
Details on configuring target table schemas........................................49
Determining the status of a target table.....................................................50
How to read the Impact and lineage pane in Data Federator Designer.....52
Testing targets...........................................................................................53
Testing a target.....................................................................................53
Managing domain tables............................................................................54
Adding a domain table to enumerate values in a target column..........55
Examples of domain tables..................................................................56
Adding a domain table by importing data from a file............................59
Dereferencing a domain table from your target table...........................60
Exporting a domain table as CSV........................................................61
Deleting a domain table........................................................................61
Using domain tables in your target tables.................................................61
Using a domain table as the domain of a column................................62

Chapter 4 Defining sources of data 65


About datasources.....................................................................................66
Datasource user interface....................................................................67
Draft and Final datasources.................................................................67
About configuration resources..............................................................70
Generic and pre-defined datasources..................................................71
Creating database datasources using resources......................................72

Adding Access datasources.................................................................72


Adding DB2 datasources......................................................................76
Adding Informix datasources................................................................81
Adding MySQL datasources.................................................................86
Adding Oracle datasources..................................................................91
Adding Netezza datasources...............................................................96
Adding Progress datasources............................................................101
Adding SAS datasources...................................................................107
Adding SQL Server datasources........................................................113
Adding Sybase datasources...............................................................119
Adding Sybase IQ datasources..........................................................124
Adding Teradata datasources............................................................128
Creating JDBC datasources from custom resources.........................133
Creating generic database datasources..................................................138
Creating generic JDBC or ODBC datasources..................................138
Managing database datasources............................................................154
Using deployment contexts ...............................................................154
Defining deployment parameters for a project ..................................155
Defining a connection with deployment context parameters .............156
Adding tables to a relational database datasource............................157
Updating the tables of a relational database datasource...................158
Creating text file datasources..................................................................158
About text file formats.........................................................................159
Setting a text file datasource name and description..........................159
Selecting a text data file.....................................................................160
Configuring file extraction parameters...............................................161
Automatically extracting the schema of your datasource table..........168
Indicating a primary key in a text file datasource...............................170
Managing text file datasources................................................................170
Editing the schema of an existing table..............................................170
Using a schema file to define a text file datasource schema.............171

Generating a schema when a text file has no header row.................171


Defining the schema of a text file datasource manually.....................172
Selecting multiple text files as a datasource .....................................173
Numeric formats used in text files......................................................174
Date formats used in text files............................................................176
Modifying the data extraction parameters of a text file.......................177
Using a remote text file as a datasource............................................178
Creating XML and web service datasources...........................................179
About XML file datasources...............................................................179
Adding an XML file datasource..........................................................179
Choosing and configuring a source file of type XML..........................180
Adding datasource tables to an XML datasource..............................181
About web service datasources.........................................................182
Adding a web service datasource......................................................183
Extracting the available operations from a web service.....................183
Selecting the operations you want to access from a web service......184
Authenticating on a web service datasource......................................185
Authenticating on a server that hosts web services used as datasources..........185
Using the SOAP header to pass parameters to web services...........186
Selecting which response elements to convert to tables in a web service datasource..........187
Assigning constant values to parameters of web service operations..188
Assigning dynamic values to parameters of web service operations..188
Propagating values to parameters of web service operations...........189
Managing XML and web service datasources.........................................189
Using the elements and attributes pane.............................................189
Selecting multiple XML files as datasources .....................................199
Using a remote XML file as a datasource..........................................200
Testing web service datasources.......................................................201
Creating remote Query Server datasources............................................202

Configuring a remote Query Server datasource................................202


Managing datasources............................................................................204
Defining the schema of a datasource.................................................204
Authentication methods for database datasources............................207
Displaying the impact and lineage of datasource tables....................208
Restricting access to columns using input columns...........................209
Changing the source type of a datasource........................................209
Deleting a datasource........................................................................210
Testing and finalizing datasources...........................................................210
Running a query on a datasource......................................................211
Making your datasource final.............................................................212
Editing a final datasource...................................................................213

Chapter 5 Mapping datasources to targets 215


Mapping datasources to targets process overview.................................216
The user interface for mapping..........................................................216
Adding a mapping rule for a target table............................................217
Selecting a datasource table for the mapping rule.............................218
Writing mapping formulas...................................................................219
Determining the status of a mapping.......................................................220
Mapping values using formulas...............................................................222
Mapping formula syntax.....................................................................222
Filling in mapping formulas automatically..........................................223
Setting a constant in a column of a target table.................................224
Testing mapping formulas..................................................................226
Writing aggregate formulas................................................................227
Writing case statement formulas........................................................230
Testing case statement formulas........................................................232
Mapping values to input columns............................................................233
Assigning constant values to input columns using pre-filters.............233
Assigning dynamic values to input columns using table relationships.234

Propagating values to input columns using input value functions......234


Adding filters to mapping rules................................................................235
The precedence between filters and formulas...................................235
Adding a pre-filter on a column of a datasource table........................236
Editing a pre-filter...............................................................................239
Deleting a pre-filter.............................................................................241
Using lookup tables.................................................................................242
What is a lookup table?......................................................................242
The process of adding a lookup table between columns...................243
Adding a lookup table.........................................................................244
Referencing a datasource table in a lookup table..............................246
Referencing a domain table in a lookup table....................................247
Mapping values between a datasource table and a domain table.....248
Adding a lookup table by importing data from a file...........................249
Dereferencing a domain table from a lookup table............................251
Deleting a lookup table.......................................................................252
Exporting a lookup table as CSV.......................................................252
Using a target as a datasource................................................................252
Managing relationships between datasource tables................................253
The precedence between formulas and relationships........................253
Finding incomplete relationships........................................................254
Adding a relationship..........................................................................256
Editing a relationship..........................................................................259
Deleting a relationship........................................................................260
Choosing a core table........................................................................261
Configuring meanings of table relationships using core tables..........261
Using a domain table to constrain possible values............................263
The process of mapping multiple datasource tables to one target table..........264
Adding multiple datasource tables to a mapping...............................265
Writing mapping formulas when mapping multiple datasource tables.265

Adding a relationship when mapping multiple datasource tables......267


Interpreting the results of a mapping of multiple datasource tables....270
Combining mappings and case statements.......................................274
Managing a set of mapping rules............................................................276
Viewing all the mapping rules.............................................................276
Opening a mapping rule.....................................................................276
Copying a mapping rule.....................................................................277
Printing a mapping rule......................................................................278
Deleting a mapping rule.....................................................................279
Displaying the impact and lineage of mappings.................................279
Activating and deactivating mapping rules..............................................279
Deactivating a mapping rule...............................................................280
Activating a mapping rule...................................................................280
Testing mappings.....................................................................................280
Testing a mapping rule.......................................................................281
Managing datasource, lookup and domain tables in a mapping rule......282
Adding a table to a mapping rule.......................................................282
Replacing a table in a mapping rule...................................................284
Deleting a table from a mapping rule.................................................286
Viewing the columns of a table in a mapping rule..............................286
Setting the alias of a table in a mapping rule.....................................287
Restricting rows to distinct values......................................................289
Details on functions used in formulas......................................................292

Chapter 6 Managing constraints 293


Testing mapping rules against constraints...............................................294
Defining constraints on a target table......................................................294
Types of constraints...........................................................................294
Defining key constraints for a target table..........................................295
Defining not-null constraints for a target table....................................295
Defining custom constraints on a target table....................................296

Syntax of constraint formulas.............................................................296


Configuring a constraint check...........................................................297
Checking constraints on a mapping rule.................................................299
The purpose of analyzing constraint violations..................................299
Computing constraint violations.........................................................300
Computing constraint violations for a group of mapping rules...........301
Filtering constraint violations..............................................................302
Marking a mapping rule as validated.................................................303
Viewing constraint violations..............................................................303
The Constraint checks pane...............................................................304
Reports....................................................................................................306

Chapter 7 Managing projects 307


Managing a project and its versions........................................................308
The user interface for projects............................................................308
The life cycle of a project....................................................................309
Editing the configuration of a project..................................................310
Storing the current version of a project..............................................311
Storing the current version of selected target tables..........................312
Downloading a version of a project....................................................313
Loading a version of a project stored on the server...........................314
Loading a version of a project stored on your file system..................315
Including a project in your current project..........................................315
Opening multiple projects...................................................................319
Exporting all projects..........................................................................320
Importing a set of projects..................................................................321
Deploying projects...................................................................................321
Servers on which projects are deployed............................................322
User rights on deployed catalogs and tables.....................................322
Storage of deployed projects..............................................................323
Version control of deployed projects..................................................323

Deploying a version of a project.........................................................324


Using deployment contexts ...............................................................325
Reference of project deployment options...........................................327

Chapter 8 Managing changes 329


Overview..................................................................................................330
Verifying if changes are valid..............................................................330
Modifying the schema of a final datasource............................................331
Deleting an installed datasource.............................................................333
Modifying a target....................................................................................335
Adding a mapping....................................................................................337
Modifying a mapping................................................................................337
Adding a constraint check........................................................................338
Modifying a constraint check...................................................................338
Modifying a domain table.........................................................................338
Deleting a domain table...........................................................................339
Modifying a lookup table..........................................................................341
Deleting a lookup table............................................................................341

Chapter 9 Introduction to Data Federator Query Server 343


Data Federator Query Server overview...................................................344
Data Federator Query Server architecture..............................................344
How Data Federator Query Server accesses sources of data...........345
Key functions of Data Federator Administrator..................................347
Security recommendations......................................................................348

Chapter 10 Connecting to Data Federator Query Server using JDBC/ODBC drivers 349
Connecting to Data Federator Query Server using JDBC.......................350
Installing the JDBC driver with the Data Federator installer...............350

Installing the JDBC driver without the Data Federator installer..........351


Connecting to the server using JDBC................................................352
Example Java code for connecting to Data Federator Query Server using JDBC..........353
Connecting to Data Federator Query Server using ODBC......................354
Installing the ODBC driver for Data Federator (Windows only)..........354
Connecting to the server using ODBC...............................................355
Using ODBC when your application already uses another JVM........357
Accessing data........................................................................................357
JDBC URL syntax....................................................................................358
Parameters in the JDBC connection URL..........................................361
JDBC and ODBC Limitations...................................................................378
JDBC and ODBC Limitations.............................................................378
SQL Constraints......................................................................................380

Chapter 11 Using Data Federator Administrator 383


Data Federator Administrator overview...................................................384
Starting Data Federator Administrator.....................................................384
To end your Data Federator Administrator session.................................384
Server configuration.................................................................................384
Exploring the user interface ....................................................................385
Objects tab.........................................................................................385
My Query Tool tab..............................................................................386
Administration tab...............................................................................387
The Server Status menu item.............................................................389
The Connector Settings menu item....................................................390
The User Rights menu item................................................................391
The Configuration menu item.............................................................392
The Statistics menu item....................................................................393
Managing statistics with Data Federator Administrator...........................394
Using the Statistics tab to refresh statistics automatically..................395

Selecting the tables for which you want to display statistics..............395


Recording statistics that Query Server recently requested................395
List of options for the Global Refresh of Statistics pane....................396
Managing queries with Data Federator Administrator.............................397
Executing SQL queries using the My Query Tool tab.........................397

Chapter 12 Configuring connectors to sources of data 399


About connectors in Data Federator........................................................400
Configuring Access connectors...............................................................400
Configuring Access connectors..........................................................400
Configuring DB2 connectors....................................................................401
Configuring DB2 connectors..............................................................401
Configuring Informix connectors..............................................................402
Supported versions of Informix...........................................................402
Configuring Informix connectors........................................................402
List of Informix resource properties....................................................403
Configuring MySQL connectors...............................................................410
Configuring MySQL connectors.........................................................410
Specific collation parameters for MySQL...........................................411
Configuring Oracle connectors................................................................412
Configuring Oracle connectors...........................................................412
Specific collation parameters for Oracle............................................412
How Data Federator transforms wildcards in names of Oracle tables.413
Configuring Netezza connectors.............................................................414
Supported versions of Netezza..........................................................414
Configuring Netezza connectors........................................................414
List of Netezza resource properties...................................................415
Configuring Progress connectors............................................................422
Configuring connectors for Progress..................................................422
Installing OEM SequeLink Server for Progress connections.............423
Configuring middleware for Progress connections.............................423

Configuring SAS connectors...................................................................426


Configuring connectors for SAS.........................................................426
Supported versions of SAS................................................................427
Installing drivers for SAS connections................................................427
Optimizing SAS queries by ordering tables in the from clause by their cardinality..........428
List of JDBC resource properties for SAS..........................................428
Configuring SQL Server connectors........................................................430
Configuring SQL Server connectors..................................................430
Specific collation parameters for SQL Server....................................431
Configuring Sybase connectors...............................................................432
Supported versions of Sybase...........................................................432
Configuring Sybase connectors.........................................................432
Installing middleware to let Data Federator connect to Sybase.........433
List of Sybase resource properties.....................................................434
Configuring Sybase IQ connectors..........................................................442
Supported versions of Sybase IQ......................................................442
Configuring Sybase IQ connectors....................................................442
List of Sybase IQ resource properties................................................443
Configuring Teradata connectors.............................................................451
Supported versions of Teradata.........................................................451
Configuring Teradata connectors.......................................................451
List of Teradata resource properties...................................................452
Default values of capabilities in connectors.............................................459
Configuring connectors that use JDBC...................................................460
Pointing a resource to an existing JDBC driver..................................460
List of JDBC resource properties.......................................................461
List of JDBC resource properties for connection pools......................472
List of common JDBC classes............................................................475
List of pre-defined JDBC URL templates...........................................476
transactionIsolation property..............................................................478


urlTemplate.........................................................................................479
Configuring connectors to web services..................................................479
List of resource properties for web service connectors......................480
Managing resources and properties of connectors.................................483
Managing resources using Data Federator Administrator..................483
Creating and configuring a resource using Data Federator Administrator......................................................486
Copying a resource using Data Federator Administrator...................488
List of pre-defined resources..............................................................489
Managing resources using SQL.........................................................491
Creating a resource using SQL..........................................................491
Deleting a resource using SQL..........................................................492
Modifying a resource property using SQL..........................................493
Deleting a resource property using SQL............................................494
System tables for resource management..........................................494
Collation in Data Federator......................................................................495
Supported Collations in Data Federator.............................................496
Setting string sorting and string comparison behavior for Data Federator SQL queries.......................................497
How Data Federator decides how to push queries to sources when using binary collation...................................500

Chapter 13 Managing user accounts and roles 503


About user accounts, roles, and privileges..............................................504
About user accounts...........................................................................504
Creating a Data Federator administrator user account...........................505
Creating a Data Federator Designer user account..................................506
Creating a Data Federator Query Server user account...........................506
Managing user accounts with Data Federator Administrator...................507
Properties of user accounts.....................................................................511
Managing roles with Data Federator Administrator.................................511


Granting privileges to a user account or role...........................................514


Managing privileges with Data Federator Administrator..........................514
Managing user accounts with SQL statements.......................................516
Creating a user account with SQL.....................................................516
Dropping a user account with SQL....................................................516
Modifying a user password with SQL.................................................517
Modifying properties of a user account with SQL...............................517
Listing user accounts using SQL........................................................517
Managing privileges using SQL statements ...........................................518
About grantees...................................................................................519
Granting a privilege with SQL.............................................................519
Revoking a privilege with SQL...........................................................520
Checking a privilege with SQL...........................................................520
Verifying privileges using system tables.............................................521
List of privileges..................................................................................522
Managing roles with SQL statements......................................................523
Creating a role with SQL....................................................................523
Dropping a role with SQL...................................................................524
Granting roles with SQL.....................................................................524
Verifying roles using system tables....................................................524
Managing login domains..........................................................................525
Adding a login domain........................................................................525
Modifying a login domain description.................................................525
Deleting login domains.......................................................................526
Mapping user accounts to login domains...........................................526
System tables for user management.......................................................527
Using a system table to check the properties of a user.....................527

Chapter 14 Controlling query execution 529


Query execution overview.......................................................................530
Auditing and monitoring the system........................................................530


Viewing target tables..........................................................................530


Viewing datasource tables.................................................................530
Querying metadata.............................................................................532
Cancelling a query...................................................................................534
Cancelling a query..............................................................................534
Cancelling all running queries............................................................535
Data types................................................................................................535
Configuring the precision and scale of DECIMAL values returned from Data Federator Query Server.......................535
Viewing system configuration..................................................................543
Statistics on query execution..............................................................543
Statistics on the buffer manager.........................................................543
Queries registered for buffer manager...............................................543
Detailed buffer allocation for operators..............................................543
Statistics on wrapper management....................................................544

Chapter 15 Optimizing queries 545


Tuning the performance of Data Federator Query Server.......................546
Updating statistics..............................................................................546
Optimizing access to the swap file.....................................................546
Optimizing memory............................................................................547
Operators that consume memory.......................................................548
Guidelines for using system and session parameters to optimize queries on large tables...................................548

Chapter 16 Managing system and session parameters 553


About system and session parameters...................................................554
Managing parameters using Data Federator Administrator.....................554
Managing parameters using SQL statements.........................................556
List of parameters....................................................................................557
Configuring the working directory............................................................573


Chapter 17 Backing up and restoring data 575


About backing up and restoring data.......................................................576
Starting the Data Federator Backup and Restore tool.............................576
Starting the Backup and Restore tool................................................576
Backing up your Data Federator data......................................................578
Restoring your Data Federator data........................................................578

Chapter 18 Deploying Data Federator servers 581


About deploying Data Federator servers.................................................582
Deploying a project on a single remote Query Server.............................582
Possibilities for deploying a project on a single remote instance of Query Server.........................................583
Configuring Data Federator Designer to connect to a remote Query Server...................................................584
Sharing Query Server between multiple instances of Designer.........585
Deploying a project on a cluster of remote instances of Query Server....586
Possibilities for deploying a project on a cluster of remote instances of Query Server....................................587
Starting and stopping Connection Dispatcher.........................................589
Starting Connection Dispatcher when Data Federator Windows Services are installed.........................................589
Starting Connection Dispatcher when Data Federator Windows Services are not installed.....................................590
Starting Connection Dispatcher on AIX, Solaris or Linux...................590
Shutting down Connection Dispatcher when Data Federator Windows Services are installed....................................591
Shutting down Connection Dispatcher when Data Federator Windows Services are not installed................................591
Shutting down Connection Dispatcher on AIX, Solaris or Linux........592
Shutting down Connection Dispatcher on AIX, Solaris or Linux........592
Configuring Connection Dispatcher.........................................................593
Setting parameters for Connection Dispatcher..................................593


Guidelines for using Connection Dispatcher parameters to configure validity times of references to servers................593
Configuring logging for Connection Dispatcher..................................595
Parameters for Connection Dispatcher..............................................596
Managing the set of servers for Connection Dispatcher....................599
Format of the Connection Dispatcher servers configuration file........600
Configuring fault tolerance for Data Federator........................................601

Chapter 19 Data Federator Designer reference 603


Using data types and constants in Data Federator Designer..................604
Date formats in Data Federator Designer................................................607
Data extraction parameters for text files..................................................607
Formats of files used to define a schema................................................608
Running a query to test your configuration..............................................614
Query configuration.................................................................................615
Printing a data sheet................................................................................617
Inserting rows in tables............................................................................618
The syntax of filter formulas.....................................................................619
The syntax of case statement formulas...................................................620
The syntax of relationship formulas.........................................................621

Chapter 20 Function reference 623


Function reference...................................................................................624
Aggregate functions...........................................................................624
Numeric functions...............................................................................628
Date/Time functions...........................................................................637
String functions...................................................................................647
System functions................................................................................667
Conversion functions..........................................................................671


Chapter 21 SQL syntax reference 687


SQL syntax overview...............................................................................688
Data Federator Query Server query language........................................688
Identifiers and naming conventions....................................................688
Data Federator data types..................................................................696
Expressions........................................................................................701
Comments..........................................................................................706
Statements.........................................................................................706
Data Federator SQL grammar.................................................................713
Syntax key..........................................................................................714
Grammar for the SELECT clause.......................................................715
Grammar for managing users............................................................721
Grammar for managing resources.....................................................725

Chapter 22 System table reference 727


System table reference............................................................................728
Metadata system tables.....................................................................729
Function system tables.......................................................................733
User system tables.............................................................................735
Resource system tables.....................................................................739
Other system tables...........................................................................740

Chapter 23 Stored procedure reference 743


List of stored procedures.........................................................................744
getTables............................................................................................744
getCatalogs........................................................................................746
getKeys..............................................................................................746
getFunctionsSignatures......................................................................748
getColumns........................................................................................750


getSchemas.......................................................................................751
getForeignKeys..................................................................................752
refreshTableCardinality.......................................................................754
clearMetrics........................................................................................755
addLoginDomain................................................................................755
delLoginDomains................................................................................756
alterLoginDomain...............................................................................756
getLoginDomains...............................................................................757
addCredential.....................................................................................757
delCredentials....................................................................................758
alterCredential....................................................................................760
getCredentials....................................................................................761
Using patterns in stored procedures........................................................762

Chapter 24 Glossary 765


Glossary...................................................................................................766
Terms and descriptions............................................................................766

Chapter 25 Troubleshooting 777


Installation................................................................................................778
Installing from remote machine..........................................................778
Input line is too long: error on Windows 2000....................................778
Input line is too long: error on Windows 2000....................................779
McAfee's On-Access Scan.................................................................779
Errors like missing method due to uncleared browser cache after installation..............................................779
Finding the Connection Dispatcher servers configuration file when running Connection Dispatcher as a Windows service......780
Datasources.............................................................................................781
File on SMB share..............................................................................781
Separators not working......................................................................781


Cannot edit an existing datasource....................................................782


Connection parameters......................................................................782
Targets.....................................................................................................783
Cannot see any datasources in targets windows...............................783
Mappings.................................................................................................783
Cannot reference lookup table in existing mapping...........................783
Error in formula that uses a BOOLEAN value....................................784
Source relationships introduce cycles in the graph............................784
Table used in mapping rule is no longer available.............................785
Table added to the mapping rule should be core...............................786
Table added to the mapping rule should not be core.........................786
At least one table should be core.......................................................787
Domain tables..........................................................................................787
Cannot remove domain table.............................................................787
Data Federator Designer.........................................................................788
Cannot select a table..........................................................................788
Cannot find column names.................................................................788
Cannot use column or table after changing the source......................789
Data Federator connectors......................................................................789
Exception for entity expansion limit....................................................789
On Teradata V2R6, error Datatype mismatch in the Then/Else expression.....................................................790
Accessing data........................................................................................790
Target tables not accessible on Data Federator Query Server..........790
Target tables not accessible from deployed project on Data Federator Query Server.........................................791
Cannot access CSV files on a remote machine using a generic ODBC connection...............................................791
Data Federator services..........................................................................792
Starting and stopping services...........................................................792
Networking...............................................................................................792


Network Connections.........................................................................792

Chapter 26 Data Federator logs 795


About Data Federator logs.......................................................................796
Data Federator Designer logs.................................................................796
Data Federator Query Server logs..........................................................796
Activating Data Federator Query Server logs....................................796

Appendix A Get More Help 799

Index 803



Chapter 1 Introduction to Data Federator

The Data Federator application


Data Federator is an Enterprise Information Integration (EII) application that
provides a uniform, coherent and integrated view of distributed and
heterogeneous data sources. The data sources can be spread across a
network, managed by different data management systems and administered
under different areas of the organization.

Data Federator differs in its architecture from ETL (Extract, Transform, Load) tools in that the data it manages is not replicated in another system but presented as optimized virtual data tables. The virtual database is a collection of relational tables that are manipulated with SQL but do not hold stored data.

Data Federator allows you to consolidate your various data sources into one coherent set of target tables. From these consolidated virtual target tables, reporting tools can perform queries and be confident that the data is reliable, trustworthy and up-to-date. For example, you can create a universe using BusinessObjects Designer, or create a query directly against the virtual target tables using Crystal Reports.

An answer to a common business problem

Most businesses maintain several data sources that are spread across different departments or sites. Often, duplicate information appears within the various data sources but is cataloged in a way that makes it difficult to use the data to make strategic decisions or perform statistical analysis.

The following diagram illustrates the classic approach to consolidating data.


What are the challenges?

When your task involves consolidating several disparate data sources, you
most likely face the following challenges.
• simplicity and productivity - you want to develop a solution once
• quality control - you want to ensure that the consolidated data can be
trusted and is correct
• performance - you want to make sure that access to the data is optimized
to produce results quickly
• maintenance - you want to develop a solution that requires little or no maintenance as new sources of data are added or as existing sources change

How can the problem be defined?

When faced with the above challenges, you can define the problem in terms
of the following needs.
• need to retrieve the content of each source
• need to aggregate the information relative to the same customer


• need to reconcile or transform the data to follow a uniform representation

How does Data Federator solve this problem?

The following diagrams illustrate how Data Federator addresses the above
needs.

Data Federator operates between your sources of data and your applications.
The communication between the data and the Data Federator Query Server
takes place by means of "connectors." In turn, external applications query
data from Data Federator Query Server by using SQL.
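
For instance, once a project is deployed, a client application could issue an ordinary SQL query against a virtual target table. The table and column names below are placeholders, not names from an actual project:

```sql
-- Query a virtual target table as if it were an ordinary relational table.
-- Data Federator Query Server fetches and combines the underlying source
-- data at query time; nothing is replicated.
SELECT CustomerId, Name, Country
FROM Customer
WHERE Country = 'FR'
ORDER BY Name
```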

The following diagram shows where Data Federator operates in relation to your sources of data and your applications.


Internally, Data Federator uses virtual tables and mappings to present the
data from your sources of data in a single virtual form that is accessible to
and optimized for your applications.

The following diagram shows the internal operation of Data Federator and
how it can aggregate your sources of data into a form usable by your
applications.

Fundamental notions in Data Federator


You work with Data Federator in two phases:
• design time
• run time


Design time is the phase of defining a representation of your data, and run
time is the phase where you use that representation to query your data.

Data Federator Designer: design time

At design time, you use Data Federator Designer to define a data model,
composed of datasource tables and target tables. Mapping rules, domain
tables and lookup tables help you to achieve this goal.

The outcome of this phase is a mapping from your datasources to your


targets. Your target tables are virtual tables that live inside Data Federator,
and they can be queried at run time.

Data Federator Query Server: run time

Once your data model and its associated metadata are in place, your
applications can query these virtual tables as a single source of data. Your
applications connect to and launch queries against Data Federator Query
Server.

Behind the scenes at run time, the Data Federator Query Server knows how
to query your distributed data sources optimally to reduce data transfers.

Important terms

The following table lists some of the fundamental terms when working with
Data Federator. For a full list of definitions, see Glossary on page 766.

target
    This is the database that you create using Data Federator Designer: it consolidates the data of multiple sources into a form that can be used by your applications.

target table
    A target table is one of the tables that you define in your target.

datasource
    A datasource is a representation of a source of your data, in tabular form. You define a datasource in Data Federator Designer.

connector
    A connector is a file that defines your sources of data in a format that Data Federator Query Server understands. When you use Data Federator Designer to add a datasource, the definition that you make is stored in a configuration file for a connector.

lookup table
    This is a table that typically maps values from one column to a different column. You define it in Data Federator Designer, and you use it when adding mappings.

mapping
    A mapping is a set of rules that define a correspondence between a set of datasource tables and a target table.

Data Federator user interface


The following diagram shows the layout and elements of a Data Federator
Designer window.


Data Federator Designer maintains a consistent interface for all the components of a Data Federator project.

The main components of the Data Federator Designer user interface are:
• (A) the breadcrumb, showing you the position of the current window in
the tree view
• (B) the tabs, where you navigate among your open projects
• (C) the project toolbar, where you add, import or export projects
• (D) the tree view, where you navigate among the components in your
project
• (E) the main view, where you define your components
• (F) the Save button, which saves the changes you made on the current
window
• (G) the Open button, which lets you open a project from the project
Configuration window
• (H) the Reset button, which resets the changes you made on the current
window

Overview of the methodology
This section introduces the methodology that you can follow to work with
Data Federator effectively.
You complete the following steps when working with Data Federator.
1. Add the targets.
2. Add the datasources.
3. Map the datasources to the targets.
4. Check the target data against constraints.
5. Deploy the project.

The following diagram summarizes steps 1-3 above. These steps represent
the construction phase in Data Federator Designer, at the end of which Data
Federator understands your source data and can present it as a federated
view.


Adding the targets

Adding the targets is a matter of designing the schemas of the tables that
your applications will query.

This design is driven by the needs of your applications. You define the target
schema by examining what data your applications require, and by
implementing this schema as a target table in Data Federator Designer.

Related Topics
• Managing target tables on page 46

Adding the datasources

In Data Federator, you reference your existing sources of data by adding "datasources".

Data Federator accepts database management systems and CSV files as sources of data. These sources can be located on different servers at different locations and use different protocols for access.

Depending on the type of source, you define the data access system in which
it is stored, you select the capabilities of the data access system, or you
describe the data extraction parameters if your source is a text file.

Once you have referenced either a text or database system as a source, Data Federator names it a "datasource". The term "datasource" refers to Data Federator's representation of an actual source of data. It is this abstraction that lets Data Federator understand the data and perform real-time queries on it.

Related Topics
• About datasources on page 66
• Creating text file datasources on page 158
• Creating generic JDBC or ODBC datasources on page 138
• Configuring a remote Query Server datasource on page 202


Mapping datasources to targets

The mapping phase links your datasources to your targets.


During the mapping phase, you can use filters, relationships and formulas
to convert values from the ones in your datasources to the ones expected
by your targets.

Mapping formulas let you make computations on existing data, in order to convert it to its target form. Data Federator Designer also lets you add data that does not exist in your datasource by creating lookup tables and domain tables. You can also describe additional logic by adding filters and relationships between datasource tables.

Once the mappings are in place, Data Federator Query Server knows how
to transform, in real-time, the data in your datasources into the form required
by your targets.

Related Topics
• Mapping datasources to targets process overview on page 216

Checking if data passes constraints

Once your mappings are defined, Data Federator Designer helps you check
the validity of the data that results from the mappings.

Data Federator Designer defines several default constraint checks, such as checking that a primary key column never produces duplicate values, or checking that a column marked as "NOT-NULL" does not have any NULLs in it. You can also add custom constraints.

Once your constraints are defined, Data Federator Designer lets you check
each mapping, and mark the ones that are producing valid results, in order
to refine the rules so that they are ready for production.
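Conceptually, the two default checks correspond to queries like the following standard-SQL sketch. The CLIENT table and its columns are illustrative examples, not objects defined by Data Federator:

```sql
-- Rows that would violate the primary-key check (duplicate key values):
SELECT client_id, COUNT(*) AS occurrences
FROM CLIENT
GROUP BY client_id
HAVING COUNT(*) > 1;

-- Rows that would violate a NOT-NULL check on last_name:
SELECT *
FROM CLIENT
WHERE last_name IS NULL;
```

If both queries return no rows, the data produced by the mapping passes those two constraints.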

Related Topics
• Testing mapping rules against constraints on page 294
• Defining constraints on a target table on page 294


Deploying the project

When your mappings are tested in Data Federator Designer, you can deploy
your project on Data Federator Query Server.

When you deploy your project, its tables are usable by applications that send
queries to Data Federator Query Server.
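For example, once the project is deployed as a catalog, a client application connected to Data Federator Query Server through JDBC or ODBC can send ordinary SQL queries against the target tables. The catalog, schema, and table names below are placeholders for the names used in your own deployment:

```sql
SELECT client_id, last_name, marital_status
FROM my_catalog.targetSchema.CLIENT
WHERE marital_status = 'MD'
```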

Related Topics
• Managing a project and its versions on page 308



Starting a Data Federator project

Working with Data Federator


To start working with Data Federator, you create a "project" in Data Federator
Designer.

A "project" is a workspace containing all the components used by Data Federator: targets, datasources, mappings, lookup tables, domain tables, and constraint checks. Each project has versions, and each version is either:
• in development
• deployed

While you work on a project, it is considered to be in development. When you are ready to put your work into production, you deploy the project. Once you deploy a project, it becomes a catalog on Data Federator Query Server, and other applications can send queries to it.

Login and passwords for Data Federator


The default user name is sysadmin.

The default password is sysadmin.

You should use Data Federator Administrator to change the login parameters
after installation.

Related Topics
• Starting Data Federator Administrator on page 384

Adding new users


You can add new users to Data Federator using Data Federator Administrator.

Related Topics
• Data Federator Administrator overview on page 384
• Starting Data Federator Administrator on page 384

Starting a project
To start a project in Data Federator Designer, you add a project and then
open the project.

Related Topics
• Adding a project on page 41
• Opening a project on page 41
• Managing a project and its versions on page 308

Adding a project

To define targets, datasources and mappings, you must add a project to the
Data Federator list of projects.

When you add a project, it appears in the Data Federator list of projects, and
you can switch between different projects.
1. At the top of the window, click Projects.
2. Click Add project.
The New project window appears.
3. Enter a name and description for the project in the Project name and
Description fields, and click Save.
Data Federator adds the project to the list of projects.

Opening a project

You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.

In order to work on your targets, datasources and mappings, you must open
a project. You open projects from the Projects tab.
1. At the top of the window, click the Projects tab.
2. In the tree list, click your-project-name.
The "Configuration" window appears.


3. Click Open.
The your-project-name tab appears.

4. Click the your-project-name tab.


The latest version of your project opens.

Once your project is open, you can add targets, datasources and
mappings to it.

Related Topics
• Unlocking projects on page 43
• Managing target tables on page 46
• About datasources on page 66
• Creating database datasources using resources on page 72
• Mapping datasources to targets process overview on page 216
• Opening multiple projects on page 319

Deleting Data Federator projects

You can only delete a project that is not locked by another user account. If it is locked, wait for the other user account to unlock the project, or wait until the other user account's session expires.
1. Click the Projects tab.
2. Click the Delete this project icon.

Related Topics
• Unlocking projects on page 43
• Managing a project and its versions on page 308

Closing Data Federator projects


1. Click the your-project-name tab.
2. Click the Close this project icon.


The project closes and becomes unlocked for other user accounts.

Related Topics
• Unlocking projects on page 43
• Managing a project and its versions on page 308

Unlocking projects

When you open a project, Data Federator Designer locks it. When other user
accounts try to access the project, Data Federator refuses, and indicates
that it is locked by your user account.

To unlock a project, the user account that locked it must log in and close the
project.

If the password for the user account that locked the project is lost, the system administrator can reset it. You can then log in using the user account that locked the project, and unlock it.

If you open the same project on two machines with the same user account,
the last machine will lock the project. If you return to the first machine, the
project will be open, but you will not be able to save your changes. In this
case, you will have to decide if you want to keep the changes you made on
the first machine or on the second machine.

Data Federator also automatically unlocks the project after the session
timeout value expires. This value is set to 30 minutes.

Related Topics
• Closing Data Federator projects on page 42
• Managing a project and its versions on page 308



Creating target tables


Managing target tables


Target tables are the Data Federator tables that you create to present data
in the correct format to your external applications.

You define target tables in the Data Federator Designer user interface. Once
you have defined the target tables and deployed your project, the Data
Federator server (Data Federator Query Server) exposes your tables to your
other applications.

Adding a target table manually


1. Select Add a new target table from the Add drop-down arrow.
The New target table window appears.

2. Type a name for the table in the Table name field, and a description in
the Description field.
3. Click Add columns, then click the number of columns that you want to
add.
Empty rows appear in the Table schema pane. Each row lets you define
one column.

You can add rows repeatedly.

4. Fill in each row in the Table schema pane with the name and type of the
column that you want to add.
5. Click Save.
Your target table appears in the Target tables tree list.

Related Topics
• Inserting rows in tables on page 618
• Adding a mapping rule for a target table on page 217
• Using data types and constants in Data Federator Designer on page 604


Adding a target table from a DDL script

This procedure shows you how to add a target table by opening a file that
contains a DDL script.
1. Select Add target tables from DDL script from the Add drop-down
arrow.
The Import a DDL script window appears.

2. Import a DDL script in one of the following ways:


• Click Import a DDL script, then click Browse and select a file that
contains a DDL script that defines a table.
• Click Manual input, then type in a DDL script in the text box.

3. Click Save.
Data Federator Designer executes your DDL script and adds a new table.
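For example, a DDL script along the following lines defines a single target table with a primary key and NOT-NULL columns. The table and column names here are illustrative only; use the names that your applications require:

```sql
CREATE TABLE CLIENT (
    client_id      INTEGER     NOT NULL,
    last_name      VARCHAR(50) NOT NULL,
    first_name     VARCHAR(50),
    marital_status VARCHAR(2),
    PRIMARY KEY (client_id)
)
```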

Related Topics
• Adding a mapping rule for a target table on page 217
• Formats of files used to define a schema on page 608

Adding a target table from an existing table

This procedure shows you how to create a new target table by copying an
existing table or datasource.
1. Select Add target table from existing table from the Add drop-down
arrow.
The Add target table from existing table window appears.

2. Expand the Tables tree list and select the target table to be added.
The name of the selected table appears in the Replace with table field.
3. Click the table that you want to use.
The name of the selected table appears in the Selected table field, with
copyOf_your-table-name in the New target table's name field.


4. If you want to create a default mapping rule for your new target table,
select the Create default mapping rule check box.
A default mapping rule maps each column of your new table to its
corresponding column in the original table.
5. Click Save.
The Target tables window is displayed showing all created target tables.
6. Click copyOf_your-table-name in the tree list.
The copyOf_your-table-name window appears, showing the columns
copied from your original table to your new target table.

7. Modify the columns as required, and click Save when done.

Related Topics
• Adding a mapping rule for a target table on page 217

Changing the name of a target table

You can change the name of a target table from the Target tables > your-target-table-name window.
1. In the tree list, click your-target-table-name.
2. In the General pane, type the new name of your target table.
3. Click Save.
The name of the target table changes.

Displaying the impact and lineage of target tables


1. Open your target table.
2. Click Impact and Lineage.
The Impact and lineage pane for your target table expands and appears.

Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52


Details on configuring target table schemas

This section describes the options you have when you are defining the
schema of a target table. You can use this information while adding a target
table manually.

Table 3-1: Description of the Table schema pane

• Select: allows you to select a row, if you want to move it or add a new row before it (see Inserting rows in tables on page 618)
• Column name: lets you enter the name of a column of the target table
• Type: the data type of the column (see Using data types and constants in Data Federator Designer on page 604)
• Domain table: if the data type is enumerated, this box lets you choose the domain table that contains the allowed values for this column (see Managing domain tables on page 54)
• Key icon: specifies if the column is the key, or part of the key, of the target table
• Not-null: specifies if the values in this column must not be NULL
• Input column icon: if checked, Data Federator refuses to answer queries on this table unless the querying application supplies a value for this column
• Description icon: allows you to enter a description of the column
• Delete icon: deletes the row

Related Topics
• Adding a target table manually on page 46

Determining the status of a target table


Data Federator displays the current status of each of your targets. You can
use this status to learn if you have entered all of the information that Data
Federator needs to use the target.
Each target goes through the statuses:
• incomplete

(Data Federator does not show this status in the interface. All new targets
are put in this status.)

• mapped

The status is shown in the Target tables > your-target-table-name > your-mapping-rule-name window.

This table shows what to do for each status of the target life cycle.

• incomplete: Not all of the active mapping rules in the target are complete. You can deactivate the mapping rules, or make the active mapping rules complete.

• mapped: All of the active mapping rules in the target are complete. You can test the mapping rules.
complete.

Related Topics
• Deactivating a mapping rule on page 280
• Mapping values using formulas on page 222
• Testing mappings on page 280
• Deploying a version of a project on page 324


How to read the Impact and lineage pane in Data Federator Designer
Table 3-3: How to read the Impact and lineage pane in Data Federator Designer

• boxes: Each box represents a component of your Data Federator Designer project. A box can be a target table, a final datasource table, a mapping rule, a domain table or a lookup table.
• arrows (Lineage tab): On the Lineage tab, the arrows show where data comes from. Each arrow points to the component from which the first component gets its data.
• arrows (Impact tab): On the Impact tab, the arrows show where data goes. Each arrow points to the component to which the first component provides data.

Testing targets
To test a target, you verify that the information you entered allows Data Federator to correctly populate the target tables.
You can encounter the following problems:
• You have written a mapping formula that maps the wrong value.
• Your mapping formulas do not result in sufficient information for your
target columns.
• Your mapping formulas result in null values in columns that must not be
NULL.

Data Federator lets you test a target by using the Target table test tool pane.

Testing a target

The target must have the status "mapped" (see Determining the status of a
target table on page 50).

You can run a query on a target to test that all of its mapping rules are
mapping values correctly and consistently.
1. In the tree list, click your-target-table-name.
The Target tables > your-target-table-name window appears.

2. In the Target table test tool pane, click View data to see the query
results.
For details on running the query, see Running a query to test your
configuration on page 614.

For details on printing the results of the query, see Printing a data sheet
on page 617.
Data Federator displays the data in columns in the Data sheet frame.
3. Verify that the values appear correctly.
Otherwise, try adjusting the mapping rules in the target again.


Example tests to run on a mapping rule

Tip:
Example tests to perform on your mapping rule
• Fetch the first 100 rows.

Run a query, as in Testing a mapping rule on page 281, and select the
Show total number of rows only check box.

The number of rows will appear above the query results.


• Fetch a single row.

For example, if you have a target table with a primary key of client_id in
the range 6000000-6009999, type:

client_id=6000114

in the Filter box.

Click View data, and verify the value of each column with the data in your
datasource table.
• Verify that the primary key columns are never NULL.

Type the formula:

client_id <> NULL

If any of the returned columns are NULL, verify that your mapping rule
does not insert NULL values.
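For reference, the same two spot checks can be expressed in standard SQL against the target (the CLIENT table name is illustrative). Note that the Designer filter box above uses its own expression syntax (client_id <> NULL), whereas standard SQL uses the IS NULL / IS NOT NULL predicates:

```sql
-- Fetch a single row by its primary key:
SELECT * FROM CLIENT WHERE client_id = 6000114;

-- Look for rows whose primary key is NULL (this should return no rows):
SELECT * FROM CLIENT WHERE client_id IS NULL;
```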

Managing domain tables


In Data Federator, domain tables are tables that have the following properties:
• Like datasource tables, domain tables hold columns of data.
• Unlike datasource tables, the data in a domain table is stored on the Data
Federator Query Server.

• Unlike datasource tables, you can use a domain table to create an
enumeration to be used in a target table (see Using a domain table as
the domain of a column on page 62).
• Domain tables support up to 5000 rows.
• You can combine a domain table with a lookup table to map the values
in a datasource column to the values in a domain table (see Using lookup
tables on page 242).

You create domain tables when you want to make an enumeration available for a column in one of your target tables.

You can also use a domain table to constrain the values in the column of a
target table. See Using a domain table to constrain possible values on
page 263.

Adding a domain table to enumerate values in a target column

The following procedure is an example of a domain table that you can use
to enumerate a list of values for a column called marital_status. The list
in this example contains a code for each marital status.

In this example, the list contains:


• SE (to represent single)
• MD (married)
• DD (divorced)
• WD (widowed)

1. Click Add > Add domain table.


The "New domain table" window appears.

2. In the Table name field, type a name for your new domain table.
3. In the "Table schema" pane, click Add columns, then click 1 column to
add one column.
One empty row appears in the "Table schema" pane.

4. Complete the row with the following values:


• In the Column name box, type marital_status.
• In the Type box, type String.
• In the key column (key icon), select the key check box.

5. Click Save.
The your-domain-table-name window appears.

6. In the "Table contents" pane, click Add, then click Add again.
The "Add rows" window appears, showing one empty row with the columns
that you defined.

7. Click Add rows, then click 3 rows to add three more rows.
8. In the field that you named marital_status, enter the values:
• SE

• MD

• DD

• WD

9. Click Save.
The "Update report" window appears.
10. Click Close.
The your-domain-table-name window appears, showing your new
table with the values you entered. You can now use this domain table to
define a set of values for a column in a target table.
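In SQL terms, the domain table built above is equivalent to a small single-column table populated as follows. This is only an illustration of the resulting contents; you create and fill the table through Designer, not by running SQL:

```sql
CREATE TABLE marital_status_domain (
    marital_status VARCHAR(2) PRIMARY KEY
);

INSERT INTO marital_status_domain VALUES ('SE');
INSERT INTO marital_status_domain VALUES ('MD');
INSERT INTO marital_status_domain VALUES ('DD');
INSERT INTO marital_status_domain VALUES ('WD');
```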

Related Topics
• Using data types and constants in Data Federator Designer on page 604

Examples of domain tables

This section shows some examples of domain tables that you can use in
different cases.

Example: Single-column domain table used as an enumeration
You can use this type of table to enumerate the values in the column of a
target table (see Using a domain table as the domain of a column on
page 62)

marital_status

SE

MD

DD

WD

Example: Two-column domain table used as an enumeration, with descriptions
You can use this type of table to enumerate the values in the column of a
target table, and add a description to each value. You can use the
descriptions to make the corresponding values easier to remember.

See Using a domain table as the domain of a column on page 62.

marital_status    marital_status_description

SE                single
MD                married
DD                divorced
WD                widowed

Example: Four-column domain table with a relationship between the columns
You can use this type of table in the following situation:
• You want to enumerate the values of one column of a target table.
• The column is related to another column, and you want to represent this
relationship.

In this example, you could use department_code as the domain of a column called "department" in your target table, and you could populate the first column, division_code, based on the value of department_code.

See Using a domain table as the domain of a column on page 62.

division_code   division_code_description   department_code   department_code_description

HR              human resources             D101              benefits
HR              human resources             D102              new hires
RD              research and development    D111              servers
RD              research and development    D112              workstations
MKTG            marketing                   D121              North America
SLS             sales                       D231              North America
PRCH            purchasing                  D241              Global

Adding a domain table by importing data from a file

• You must have created a text file containing the domain data. The file must be in comma-separated value (CSV) format, as in the example below.
• For details on data types that you can use, see Using data types and
constants in Data Federator Designer on page 604.

If you have a lot of domain data, you can enter it into your domain table
quickly by importing the data from a text file.

For example, Data Federator can import domain data such as the following.

file: my-domain-data.csv
"1";"single"
"2";"married"
"3";"divorced"
"4";"widowed"

1. Add a domain table.


2. Add a datasource that points to the file from which you want to import.
3. When the Domain tables > your-domain-table window appears, click
Add, then click Add from datasource table.
The Domain tables > your-domain-table > Add rows from a
datasource window appears.

4. Refer to the Select a datasource table field and select the datasource
table to be added to the domain table.
The columns of the selected datasource table are displayed in the Select
a subset of columns field on the right. You can, if required, select one
or all of the columns in this field and click View Data to display the
contents of the selected columns.

5. Refer to the Domain columns mapping pane and map the required
datasource column from each domain table column's drop-down list-box.
6. Click Save.
The Domain tables > your-domain-table-name > Update report
window is displayed and your file's imported data is added to your domain
table.

Related Topics
• Creating text file datasources on page 158
• Adding a domain table to enumerate values in a target column on page 55

Dereferencing a domain table from your target table


1. Edit your target table.
2. In the Table Schema pane, find a column that references your domain
table and select String under the Type column.
Do this for each column that is of type Enumerated, and that references
your domain table.

3. Click Save.

Related Topics
• Adding a target table manually on page 46


Exporting a domain table as CSV

• You must have added a domain table.

See Managing domain tables on page 54.

1. In the tree list, click Domain tables.


The Domain tables window appears.
2. Select the table you want to export as CSV.
The Domain tables > your-domain-table-name window appears.
3. Click Export.
The File download window appears giving you the option of opening or
saving your Domain_your-domain-table-name.csv file.
4. Click Save and save the .csv file to a location of your choosing.

Deleting a domain table

To delete a domain table, you must first remove references to it from any
lookup and target tables.
1. In the tree list, click Domain tables.
2. Select the tables that you want to delete.
3. Click Delete, and click OK to confirm.

Related Topics
• Dereferencing a domain table from your target table on page 60
• Dereferencing a domain table from a lookup table on page 251

Using domain tables in your target tables


This section describes how to use domain tables. You can use domain tables
to enumerate the values of a column in your target table.


Using a domain table as the domain of a column

• You must have created a domain table as described in Managing domain tables on page 54.

This procedure shows how to use the values that you entered in a domain
table as the values that can appear in a column of your target table.
1. Add a target table. See Managing target tables on page 46.
2. When the Target tables > New target table window appears, click Add
columns, then, from the list, click 1.
An empty row appears in the Target schema pane.

3. In the Column name box, type a name for your column.


4. In the Type list, click Enumerated.
An edit icon appears beside the Type box.

5. Click the Edit icon.
The Target tables > New target table > Domain constraint table 'your-column-name' window appears.

This window shows a list of your domain tables.

6. In the list, expand the name of your domain table, then click the column
that you want to use as the domain.
For example, if your domain table contains the columns marital_code,
and marital_code_description, click marital_code.

The name of the domain table appears in the Selected table box. The
name of the column appears in the Selected column box.

7. Click Save.
The "Target tables > New target table" window appears.

The name of the domain table and domain column that you selected
appears in the Domain table box in the row that defines your new target
column.

When you choose values for this column in Data Federator Designer, only the values in the domain table will appear.

To associate a set of enumerated values in your datasource with a set of enumerated values in your target, see The process of adding a lookup table between columns on page 243.

To constrain rows in your datasource to those whose values match a set of enumerated values in your target, see Using a domain table to constrain possible values on page 263.



Defining sources of data


About datasources
Data Federator projects use datasources to access a project's sources of data. A datasource is a pointer that represents the data kept in a source. For example, the source could be a relational database in which you store customer data. A datasource can also point to a text file, for example one in which you keep sales information.

Datasources are a basic component of Data Federator. A datasource consists of a table, or a set of tables. Once you define a datasource, you can connect your project to the datasource, and populate your target tables with the data.

Datasources that you can create fall into the following categories:
• Databases are datasources that represent databases such as Oracle,
Access and DB2. Data Federator includes pre-defined resources that you
can use to help configure your datasource to achieve the best
performance.

This category includes relational databases that use JDBC drivers, ODBC
drivers, and openclient drivers.
• Text file datasources provide access to data held in text files, for example
comma-separated value (.csv) files.
• XML/web datasources provide access to data held in XML files, or data
provided by web services.
• A Remote Query Server datasource uses a remote Data Federator Query
Server as a source of data.

Related Topics
• Creating generic JDBC or ODBC datasources on page 138
• Generic and pre-defined datasources on page 71
• Creating remote Query Server datasources on page 202
• Creating text file datasources on page 158
• Adding an XML file datasource on page 179
• Adding a web service datasource on page 183


Datasource user interface

The following diagram shows what you see in Data Federator Designer when
you work with datasources.

The main components of the datasource user interface are:


• (A) the tree view, where you navigate among your datasources
• (B) the main view, where you define your datasources
• (C) collapsed nodes in the tree, each representing one datasource
• (D) an expanded node, showing a datasource with two statuses: a draft
and a final
• (E) a pane, showing parameters for a datasource

Draft and Final datasources

When you create a new datasource, Data Federator marks its status as Draft, to indicate that the definition is incomplete. When you have finalized the definition, you must make the datasource Final before you can use it in a mapping.
• Draft: A datasource is a draft when you first create it. When a datasource
is a draft, you can modify it, but you cannot use it in a mapping.


The datasource appears under Draft in the tree list.

A draft has two statuses: Incomplete and Complete.


• Incomplete: Certain configuration parameters have not been filled in.
The values are either not complete or they are invalid.
• Complete: All necessary configuration parameters are filled in and
are valid.
• The datasource passes automatically from Incomplete to
Complete as soon as you fill in the required configuration
parameters correctly.
• The datasource passes automatically from Complete to
Incomplete if you replace a correct value with an incorrect one.
This is also the case when you add a new table in which the
required parameters are not filled in correctly.

• Final: A datasource is final when you click Make Final.

When a datasource is Final, you cannot modify it, but you can use it in
a mapping.

The datasource appears under Final in the tree list.

Table 4-1: Summary of the life cycle of a datasource

• Draft, Incomplete: Some datasource definition and schema definition parameters are invalid, or the datasource table schema is incomplete. You can modify the datasource configuration (symbols in the interface indicate invalid parameters) and define the datasource table schemas.

• Draft, Complete: Datasource definition and schema definition parameters are complete and valid, and all the datasource table schemas are defined. You can test the datasource configuration and, if all the datasource tables have been added, make the datasource Final.

• Final: The datasource appears in the datasource tree list under Final. If you need to change a Final datasource, you must copy it to a Draft first.

Related Topics
• Setting a text file datasource name and description on page 159
• Creating generic JDBC or ODBC datasources on page 138


• Defining the schema of a text file datasource manually on page 172


• Running a query on a datasource on page 211
• Making your datasource final on page 212
• Editing a final datasource on page 213

About configuration resources

The Data Federator software includes pre-defined configuration resources
that you can use to create datasources. For example, the Data Federator
software includes resources for databases including the following:
• Oracle
• MySQL
• SQL Server
• DB2
• Microsoft Access

Using Data Federator Administrator, you can:


• Modify a pre-defined resource to change the configurations of all
datasources that use it
• Copy a pre-defined resource, and use the copy as the base for a new
resource
• Create a new resource

Once you have created a resource in Administrator, you can use it to create
datasources in Designer.

In addition to using pre-defined resources, you can configure a generic
JDBC connection for a datasource. Unlike a resource, this connection can
only be used by the datasource for which it is created.

There are three types of resources:


• JDBC resources provide access through JDBC. These are used for
databases such as Access, Oracle, and DB2.
• ODBC resources provide access through ODBC. These are used for
databases such as Netezza, Teradata, and Informix.

• Openclient resources are used for databases such as Sybase.

Generic and pre-defined datasources

A generic datasource is a connection configuration that you create in Data
Federator Designer. You do not require Administrator access to define a
generic datasource.

A generic datasource differs from a pre-defined resource in the following
ways:
• Performance: A generic datasource does not perform as well as a
pre-defined resource:
• With a generic datasource, much of the data processing is performed
by the Data Federator application software.
• With a pre-defined resource, as much processing as possible is
handled by the database software, which results in better performance.
In addition, for pre-defined datasources, the connection parameters
have been optimized and tested for maximum performance.

• Datasource availability:
• A generic datasource does not use a configured resource. You have
to re-enter all the configuration parameters every time you create a
new generic JDBC datasource.
• You can use a pre-defined resource configuration for multiple
datasources.
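The performance difference described above comes down to where filtering and other processing happen. The following sketch (illustrative Python only, not Data Federator code; all names and data are invented) contrasts fetching every row and filtering in the federator with pushing the filter down so that the source returns only matching rows:

```python
# Illustration only: why pushing processing down to the database helps.
rows = [("Smith", "Paris"), ("Jones", "London"), ("Lee", "Paris")]

def generic_style(rows):
    """Generic datasource: fetch everything, then filter in the federator."""
    fetched = list(rows)                  # all rows cross the wire
    return [r for r in fetched if r[1] == "Paris"], len(fetched)

def predefined_style(rows):
    """Pre-defined resource: the filter is pushed into the source query,
    so only matching rows are transferred (stands in for WHERE city='Paris')."""
    transferred = [r for r in rows if r[1] == "Paris"]
    return transferred, len(transferred)

local, moved_all = generic_style(rows)
pushed, moved_few = predefined_style(rows)
assert local == pushed          # same answer either way
assert moved_few < moved_all    # but less data moved when pushed down
```

The result set is identical; only the amount of data transferred and processed by the federator differs.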

Related Topics
• About datasources on page 66
• Configuration parameters for generic JDBC and ODBC datasources on
page 150
• Creating generic JDBC or ODBC datasources on page 138
• Managing resources using Data Federator Administrator on page 483


Creating database datasources using resources
To create a database datasource, you can:
• Use a resource definition to set the configuration parameters. A resource
can be used in multiple datasources across multiple projects.

Pre-defined resource definitions are supplied with the Data Federator
software, and you can create custom resources using Data Federator
Administrator.
• Configure a new, generic JDBC or ODBC datasource. Unlike resources,
the configuration can only be used with the datasource for which it is
created.

Related Topics
• Creating JDBC datasources from custom resources on page 133

Adding Access datasources

To create a datasource for Access:


• Ensure that the connector for Access is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Access. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Access, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Access database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Access database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information
about connection parameters for Access datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information
on adding tables to database datasources for details.
7. Click Save.
Your Access datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Access datasources

Authentication mode: The method to use to authenticate users' login
credentials:
• Use a specific database logon for all Data Federator users: Data
  Federator connects to the database using the username and password that
  you enter. For each user, Data Federator uses the same username and
  password.
• Use the Data Federator logon: Data Federator connects to the datasource
  using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the
  datasource by mapping Data Federator users to database users. Data
  Federator uses potentially different usernames and passwords for all
  Data Federator users, depending on how you or your administrator have
  set up the login domains.

Defined resource: The Data Federator resource that holds the configuration
information that you want to use.

Login domain: The name that your Data Federator installation uses to refer
to a database server or set of servers on which you can log in. Your Data
Federator administrator chose this name when adding login domains.

ODBC DSN: The ODBC Data Source Name to use.

Password: The password that Data Federator enters for the username.

Table types: The object types to show:
• TABLE and VIEW: Choose this to see both tables and views when you click
  View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the
  database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source
of data.
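The three Authentication mode options differ only in where the database credentials come from. As a conceptual sketch (illustrative Python, not Data Federator code; all user names, passwords, and mode labels are invented for this illustration):

```python
# Hypothetical sketch of the three authentication modes (illustration only).
FIXED = ("df_service", "s3cret")       # one database logon for everyone
LOGIN_DOMAIN = {                       # Data Federator user -> db credentials
    "alice": ("alice_db", "pw1"),
    "bob": ("bob_db", "pw2"),
}

def credentials(mode, df_user, df_password):
    if mode == "specific":      # same database logon for all users
        return FIXED
    if mode == "df_logon":      # reuse the Data Federator login itself
        return (df_user, df_password)
    if mode == "login_domain":  # map each user to separate database credentials
        return LOGIN_DOMAIN[df_user]
    raise ValueError(mode)

assert credentials("specific", "alice", "x") == ("df_service", "s3cret")
assert credentials("df_logon", "alice", "x") == ("alice", "x")
assert credentials("login_domain", "bob", "x") == ("bob_db", "pw2")
```

With a login domain, the mapping table is maintained by your Data Federator administrator rather than in the datasource itself.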

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Access datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• User Name

To use a deployment context parameter in a datasource definition field, use
the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.
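This `${parameter}` notation is the familiar template-placeholder form. As an illustration of how such placeholders resolve (the parameter names below are invented, and this uses Python's standard `string.Template`, which happens to share the same syntax; it is not Data Federator itself):

```python
from string import Template

# Hypothetical deployment context: parameter names are invented for this sketch.
deployment_context = {"DB_HOST": "dbserver01", "DB_USER": "report_reader"}

# A field value of "${DB_HOST}" resolves to the value defined in the context.
host_field = Template("${DB_HOST}").substitute(deployment_context)
user_field = Template("${DB_USER}").substitute(deployment_context)

assert host_field == "dbserver01"
assert user_field == "report_reader"
```

Defining the values once in a deployment context lets the same datasource definition point at different servers in different deployments.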

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding DB2 datasources

To create a datasource for DB2:


• Ensure that the connector for DB2 is configured. Usually, your Data
Federator administrator configures the connectors.

• Ensure that you have the necessary drivers installed for DB2. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select DB2, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your DB2 database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your DB2 database. If you are not
sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information
about connection parameters for DB2 datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information
on adding tables to database datasources for details.
7. Click Save.
Your DB2 datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157


• Defining a connection with deployment context parameters on page 156


• Testing and finalizing datasources on page 210

Connection parameters for DB2 datasources

Authentication mode: The method to use to authenticate users' login
credentials:
• Use a specific database logon for all Data Federator users: Data
  Federator connects to the database using the username and password that
  you enter. For each user, Data Federator uses the same username and
  password.
• Use the Data Federator logon: Data Federator connects to the datasource
  using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the
  datasource by mapping Data Federator users to database users. Data
  Federator uses potentially different usernames and passwords for all
  Data Federator users, depending on how you or your administrator have
  set up the login domains.

Database name: The name of the database to which to connect.

Defined resource: The Data Federator resource that holds the configuration
information that you want to use.

Host name: The name of the host where the database is located.

Login domain: The name that your Data Federator installation uses to refer
to a database server or set of servers on which you can log in. Your Data
Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

Prefix table names with the schema name: Specifies if Data Federator
should add the name of the schema in its SQL queries to this JDBC data
source. You can select this option only if you are using a JDBC data
source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated
by commas. The % character (percent) means "all schemas". If you use
multiple schemas, you should use the option Prefix table names with the
schema name to distinguish tables from different schemas.

Table types: The object types to show:
• TABLE and VIEW: Choose this to see both tables and views when you click
  View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the
  database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source
of data.
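The Host name, Port, and Database name parameters correspond to the components of a conventional JDBC connection URL. For orientation only (the values below are invented; the URL shapes are the standard ones documented for each vendor's JDBC driver, not anything Data Federator-specific):

```python
# Conventional JDBC URL shapes built from host, port, and database name.
# (Hypothetical values; see each vendor's JDBC driver docs for options.)
def db2_url(host, port, database):
    return f"jdbc:db2://{host}:{port}/{database}"

def mysql_url(host, port, database):
    return f"jdbc:mysql://{host}:{port}/{database}"

def oracle_thin_url(host, port, sid):
    # Oracle identifies the instance by its SID rather than a database name.
    return f"jdbc:oracle:thin:@{host}:{port}:{sid}"

print(db2_url("dbhost", 50000, "SAMPLE"))  # jdbc:db2://dbhost:50000/SAMPLE
```

This is why the connection parameter list varies slightly by database type: Oracle asks for a SID where DB2 and MySQL ask for a database name.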

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in DB2 datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
• Password
• Port
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use
the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Informix datasources

To create a datasource for Informix:


• Ensure that the connector for Informix is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Informix. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.


• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Informix, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Informix database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Informix database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information
about connection parameters for Informix datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information
on adding tables to database datasources for details.
7. Click Save.
Your Informix datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Informix datasources

Authentication mode: The method to use to authenticate users' login
credentials:
• Use a specific database logon for all Data Federator users: Data
  Federator connects to the database using the username and password that
  you enter. For each user, Data Federator uses the same username and
  password.
• Use the Data Federator logon: Data Federator connects to the datasource
  using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the
  datasource by mapping Data Federator users to database users. Data
  Federator uses potentially different usernames and passwords for all
  Data Federator users, depending on how you or your administrator have
  set up the login domains.

Defined resource: The Data Federator resource that holds the configuration
information that you want to use.

Login domain: The name that your Data Federator installation uses to refer
to a database server or set of servers on which you can log in. Your Data
Federator administrator chose this name when adding login domains.

Network layer: The middleware type, for example JDBC or ODBC.
Note: Data Federator inserts this value when you select the Defined
Resource, and it cannot be changed.

ODBC DSN: The ODBC Data Source Name to use.

Password: The password that Data Federator enters for the username.

Prefix table names with the schema name: Specifies if Data Federator
should add the name of the schema in its SQL queries to this JDBC data
source. You can select this option only if you are using a JDBC data
source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated
by commas. The % character (percent) means "all schemas". If you use
multiple schemas, you should use the option Prefix table names with the
schema name to distinguish tables from different schemas.

User Name: The username that Data Federator uses to connect to the source
of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Informix datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use
the syntax:
${parameter}


where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding MySQL datasources

To create a datasource for MySQL:


• Ensure that the connector for MySQL is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for MySQL. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select MySQL, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your MySQL database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your MySQL database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information
about connection parameters for MySQL datasources for details.

You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information
on adding tables to database datasources for details.
7. Click Save.
Your MySQL datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for MySQL datasources

Authentication mode: The method to use to authenticate users' login
credentials:
• Use a specific database logon for all Data Federator users: Data
  Federator connects to the database using the username and password that
  you enter. For each user, Data Federator uses the same username and
  password.
• Use the Data Federator logon: Data Federator connects to the datasource
  using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the
  datasource by mapping Data Federator users to database users. Data
  Federator uses potentially different usernames and passwords for all
  Data Federator users, depending on how you or your administrator have
  set up the login domains.

Database name: The name of the database to which to connect.

Defined resource: The Data Federator resource that holds the configuration
information that you want to use.

Host name: The name of the host where the database is located.

Login domain: The name that your Data Federator installation uses to refer
to a database server or set of servers on which you can log in. Your Data
Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

Prefix table names with the database name: Specifies if Data Federator
should add the name of the database in its SQL queries to this JDBC source
of data. You can select this option only if you are using a JDBC data
source that can use the database name in queries.

Table types: The object types to show:
• TABLE and VIEW: Choose this to see both tables and views when you click
  View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the
  database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source
of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in MySQL datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name

• Password
• Port
• User Name

To use a deployment context parameter in a datasource definition field, use
the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Oracle datasources

To create a datasource for Oracle:


• Ensure that the connector for Oracle is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Oracle. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Oracle, and click Save.
The "Draft" configuration screen is displayed.


4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Oracle database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Oracle database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information
about connection parameters for Oracle datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information
on adding tables to database datasources for details.
7. Click Save.
Your Oracle datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Oracle datasources

Authentication mode: The method to use to authenticate users' login
credentials:
• Use a specific database logon for all Data Federator users: Data
  Federator connects to the database using the username and password that
  you enter. For each user, Data Federator uses the same username and
  password.
• Use the Data Federator logon: Data Federator connects to the datasource
  using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the
  datasource by mapping Data Federator users to database users. Data
  Federator uses potentially different usernames and passwords for all
  Data Federator users, depending on how you or your administrator have
  set up the login domains.

Defined resource: The Data Federator resource that holds the configuration
information that you want to use.

Host name: The name of the host where the database is located.

Login domain: The name that your Data Federator installation uses to refer
to a database server or set of servers on which you can log in. Your Data
Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

Prefix table names with the schema name: Specifies if Data Federator
should add the name of the schema in its SQL queries to this JDBC data
source. You can select this option only if you are using a JDBC data
source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated
by commas. The % character (percent) means "all schemas". If you use
multiple schemas, you should use the option Prefix table names with the
schema name to distinguish tables from different schemas.

SID: The system identifier for the Oracle database.

Table types: The object types to show:
• TABLE and VIEW: Choose this to see both tables and views when you click
  View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the
  database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source
of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Oracle datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• Schema
• SID
• User Name

To use a deployment context parameter in a datasource definition field, use
the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Netezza datasources

To create a datasource for Netezza:

• Ensure that the connector for Netezza is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Netezza. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Netezza, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Netezza database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Netezza database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Netezza datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Netezza datasource is added.


Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Netezza datasources

Authentication mode
    The method to use to authenticate users' login credentials:
    • Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
    • Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
    • Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource
    The Data Federator resource that holds the configuration information that you want to use.

Login domain
    The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Network layer
    The middleware type, for example JDBC or ODBC.
    Note: Data Federator inserts this value when you select the Defined resource, and it cannot be changed.

ODBC DSN
    The ODBC Data Source Name to use.

Password
    The password that Data Federator enters for the username.

Port
    The port to which to connect.

User Name
    The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters in Netezza datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Progress datasources

To create a datasource for Progress:


• Ensure that the connector for Progress is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Progress.
Installing drivers is the minimal part of configuring connectors. It is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.


2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Progress, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Progress database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Progress database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Progress datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Progress datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Progress datasources

Authentication mode
    The method to use to authenticate users' login credentials:
    • Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
    • Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
    • Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource
    The Data Federator resource that holds the configuration information that you want to use.

Host name
    The name of the host where the database is located.

Login domain
    The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

ODBC DSN
    The ODBC Data Source Name to use.

Port
    The port to which to connect.

Prefix table names with the database name
    Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Prefix table names with the schema name
    Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Progress DB password
    The authentication password for the Progress database.

Progress DB schema
    The schema for the Progress database.

Progress DB username
    The database username for the Progress database.

Schema
    The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

SequeLink data source name
    The name that your Data Federator administrator defined as the data source name in the administration interface of SequeLink Server (your-sequelink-data-source-name) while configuring the connector to the database.

SequeLink server host name
    The name of the host where your Data Federator administrator installed the SequeLink Server, while configuring the connector to the database.

SequeLink server port
    The port of the host where your Data Federator administrator installed the SequeLink Server, while configuring the connector to the database.

Table types
    • TABLE and VIEW: Choose this to see both tables and views when you click View tables.
    • TABLE: Choose this to see only tables when you click View tables.
    • VIEW: Choose this to see only views when you click View tables.
    • ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name
    The username that Data Federator uses to connect to the source of data.
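
For example, the Schema field accepts values such as the following (the schema names are illustrative):

    Schema: SALES,HR    (use tables from the SALES and HR schemas only)
    Schema: %           (use tables from all schemas)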

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters in Progress datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Progress DB password
• Progress DB schema
• Progress DB username
• SequeLink data source name
• SequeLink server host name
• SequeLink server port

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding SAS datasources

To create a datasource for SAS:


• Ensure that the connector for SAS is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for SAS. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.


1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select SAS, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your SAS database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your SAS database. If you are not
sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for SAS datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your SAS datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for SAS datasources

Authentication mode
    The method to use to authenticate users' login credentials:
    • Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
    • Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
    • Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource
    The Data Federator resource that holds the configuration information that you want to use.

Login domain
    The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password
    The password that Data Federator enters for the username.

Port
    The port to which to connect.

Prefix table names with the schema name
    Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

SAS/SHARE server host name
    For SAS databases, the host name of the server where the SAS/SHARE server is running.

Schema
    The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Table types
    • TABLE and VIEW: Choose this to see both tables and views when you click View tables.
    • TABLE: Choose this to see only tables when you click View tables.
    • VIEW: Choose this to see only views when you click View tables.
    • ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

Use data sets that are not pre-defined to the SAS/SHARE server
    Select this check box to access multiple data sets that are not pre-defined to the SAS/SHARE server. These are data sets that are not included in the current SAS configuration. See the documentation on using data sets that are not pre-defined to the SAS/SHARE server for details.

User Name
    The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in SAS datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Password
• Port
• SAS/SHARE server host name
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Using data sets that are not pre-defined to the SAS/SHARE server

You can configure Data Federator to access multiple data sets that are not
pre-defined to the SAS/SHARE server. These are data sets that are not
included in the current SAS configuration.

To access these data sets, you use the Connection Parameters area of
the configuration screen. To configure a data set that is not pre-defined, use the
following procedure:
1. In the Connection Parameters area, select the Use data sets that are
not pre-defined to the SAS/SHARE server check box. A set of Location
and Library name fields appears.
2. In the Location field, enter the path for the data set, in the format required
for the operating system that you are using.
3. In the Library name field, enter a name to use to refer to the data set,
and select the Prefix table names with the schema name check box. The
library name that you entered appears as a SCHEMA.
4. Click Add data set to add a new, empty set of Location and Library
name fields, ready to define a further set if you require it.
To delete a defined data set, click the Delete button (shown as a cross
on the user interface) at the right of the data set that you want to delete.
5. Click Save to save the configuration.
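
For example, to expose two data sets that are not part of the current SAS configuration, you might enter values such as the following (the paths and library names are illustrative):

    Location: /data/sas/sales        Library name: SALESLIB
    Location: C:\sasdata\finance     Library name: FINLIB

With Prefix table names with the schema name selected, the tables of these data sets then appear with the library name as their schema, for example SALESLIB.customers.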

Related Topics
• Installing drivers for SAS connections on page 427

Adding SQL Server datasources

To create a datasource for SQL Server:


• Ensure that the connector for SQL Server is configured. Usually, your
Data Federator administrator configures the connectors.


• Ensure that you have the necessary drivers installed for SQL Server.
Installing drivers is the minimal part of configuring connectors. It is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select SQL Server, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your SQL Server database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your SQL Server database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for SQL Server datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your SQL Server datasource is added.

Related Topics
• Managing login domains on page 525

• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210

Connection parameters for SQL Server datasources

Authentication mode
    The method to use to authenticate users' login credentials:
    • Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
    • Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
    • Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Database name
    The name of the database to which to connect.

Defined resource
    The Data Federator resource that holds the configuration information that you want to use.

Host name
    The name of the host where the database is located.

Login domain
    The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password
    The password that Data Federator enters for the username.

Port
    The port to which to connect.

Prefix table names with the database name
    Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Prefix table names with the schema name
    Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema
    The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Table types
    • TABLE and VIEW: Choose this to see both tables and views when you click View tables.
    • TABLE: Choose this to see only tables when you click View tables.
    • VIEW: Choose this to see only views when you click View tables.
    • ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name
    The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in SQL Server datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
• Password
• Port
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Sybase datasources

To create a datasource for Sybase:


• Ensure that the connector for Sybase is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Sybase. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Sybase, and click Save.


The "Draft" configuration screen is displayed.


4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Sybase database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Sybase database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Sybase datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Sybase datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Sybase datasources

Authentication mode
    The method to use to authenticate users' login credentials:
    • Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
    • Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
    • Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Default database
    The name of the database running on the Sybase server.

Defined resource
    The Data Federator resource that holds the configuration information that you want to use.

Login domain
    The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Network layer
    The middleware type, for example JDBC or ODBC.
    Note: Data Federator inserts this value when you select the Defined resource, and it cannot be changed.

Password
    The password that Data Federator enters for the username.

Prefix table names with the schema name
    Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema
    The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Server name
    The name that your Data Federator administrator defined in the Server name field of the Server object in Sybase Open Client. Ask your Data Federator administrator for the value that he or she chose for the parameter sybase-server-name while configuring the Sybase connector.

Set quoted identifier
    For Sybase databases only, a boolean that determines if the quote character (") is used to enclose identifiers.

User Name
    The username that Data Federator uses to connect to the source of data.
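
To illustrate the Set quoted identifier option: when it is enabled, Data Federator encloses identifiers in quote characters in the SQL that it sends to the Sybase server, for example (the table and column names are illustrative):

    SELECT "OrderId" FROM "sales"."Order Details"

Quoting identifiers in this way allows names that contain spaces or reserved words to be referenced in queries.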

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters in Sybase datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Default database
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Sybase IQ datasources

To create a datasource for Sybase IQ:


• Ensure that the connector for Sybase IQ is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Sybase IQ.
Installing drivers is the minimal part of configuring connectors. It is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.

The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Sybase IQ, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Sybase IQ database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Sybase IQ database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Sybase IQ datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Sybase IQ datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Sybase IQ datasources

Authentication mode
The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for each Data Federator user, depending on how you or your administrator have set up the login domains.

Defined resource
The Data Federator resource that holds the configuration information that you want to use.


Login domain
The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

ODBC DSN
The ODBC Data Source Name to use.

Password
The password that Data Federator enters for the username.

Prefix table names with the schema name
Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema
The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.


User Name
The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Sybase IQ datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.
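To picture how this substitution behaves, the following sketch resolves ${parameter} references in a field value. This is an illustrative approximation only, not Data Federator code; the parameter names shown (ProductionDSN, DeployUser) are hypothetical examples, not Data Federator defaults.

```python
import re

def resolve_field(field_value, context):
    """Replace each ${parameter} reference with its deployment context value."""
    def lookup(match):
        name = match.group(1)
        if name not in context:
            raise KeyError("undefined deployment parameter: " + name)
        return context[name]
    return re.sub(r"\$\{([^}]+)\}", lookup, field_value)

# Hypothetical deployment context for a production server.
production = {"ProductionDSN": "SYBASE_IQ_PROD", "DeployUser": "dfadmin"}

print(resolve_field("${ProductionDSN}", production))   # SYBASE_IQ_PROD
print(resolve_field("user-${DeployUser}", production)) # user-dfadmin
```

At deployment time, Data Federator performs the equivalent substitution internally, using the deployment context that you select.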

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Teradata datasources

To create a datasource for Teradata:


• Ensure that the connector for Teradata is configured. Usually, your Data
Federator administrator configures the connectors.

• Ensure that you have the necessary drivers installed for Teradata.
Installing drivers is part of configuring connectors, and is also normally
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Teradata, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Teradata database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Teradata database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Teradata datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Teradata datasource is added.

Related Topics
• Managing login domains on page 525


• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210

Connection parameters for Teradata datasources

Authentication mode
The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for each Data Federator user, depending on how you or your administrator have set up the login domains.


Defined resource
The Data Federator resource that holds the configuration information that you want to use.

Login domain
The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Network layer
The middleware type, for example JDBC or ODBC.
Note: Data Federator inserts this value when you select the Defined resource, and it cannot be changed.

ODBC DSN
The ODBC Data Source Name to use.

Password
The password that Data Federator enters for the username.

Prefix table names with the schema name
Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.


Schema
The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

User Name
The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Teradata datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}

where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Creating JDBC datasources from custom resources

To create a datasource from a custom resource:


• Ensure that you or the Data Federator administrator has created the
custom resource that corresponds to your database.
• Ensure that you have the necessary driver software for your database
installed, for example JDBC drivers.
• Ensure that you have the necessary driver connection parameters to hand.
These are normally available from the driver supplier.
• Ensure that you have the necessary database access and authentication
details to hand.

1. Access the project to which you want to add the datasource, and at the
top of the Data Federator Designer screen, click Add, and from the
pull-down list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
3. From the list, select JDBC from defined resource, and click Save.
The Draft configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
pull-down list, select the custom resource that you want to use.
5. On the "Draft" screen, configure the parameters. Refer to the connection
parameters and descriptions information for details.
6. Add the datasource tables to your datasource. Refer to the information
on adding tables to a database datasource for details.
7. Click Save.
Your JDBC datasource is added.


Related Topics
• Connection parameters for JDBC from custom resource datasources on page 135
• Adding tables to a relational database datasource on page 157
• Testing and finalizing datasources on page 210


Connection parameters for JDBC from custom resource datasources

Authentication mode
The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for each Data Federator user, depending on how you or your administrator have set up the login domains.


Defined resource
The Data Federator resource that holds the configuration information that you want to use.

JDBC connection URL
The database connection information in the form of a JDBC connection URL. For example: jdbc:mysql://localhost:3306/database-name. For other types of JDBC drivers, see the description of the property urlTemplate.

Login domain
The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password
The password that Data Federator enters for the username.

Prefix table names with the database name
Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.


Prefix table names with the schema name
Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema
The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Supports catalog
Select this check box if the database supports metadata catalog functionality.

Supports schema
Select this check box if the database supports schema catalog functionality.


Table types
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name
The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Creating generic database datasources

Creating generic JDBC or ODBC datasources

To create a generic JDBC or generic ODBC datasource:

• Ensure that you have the necessary driver software for your database
installed, for example JDBC or ODBC drivers.
• Ensure that you have the necessary driver connection parameters to
hand. These are normally available from the driver supplier.
• Ensure that you have the necessary database access and authentication
details to hand.

1. Access the project to which you want to add the JDBC or ODBC
datasource, and at the top of the Data Federator Designer screen, click
Add.
The New Datasource screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the pull-down list, select either Generic JDBC datasource, or
Generic ODBC datasource, and click Save.
The Draft configuration screen is displayed.
4. In the Connection Parameters area, enter the connection details as
required.
5. When you have entered the connection parameters, click the Test button
to check that they are correct.
If the details are incorrect, a dialog box is displayed with the details. Use
this information to fix the problem.
6. In the Configuration Parameters area, enter the configuration details
as required.
7. In the Optimization Parameters area, select the optimization details as
required.
8. Once the connection is working, click Save.
Your datasource is added, and you can add datasource tables to it.

Related Topics
• Connection parameters for generic JDBC datasources on page 141
• Connection parameters for generic ODBC datasources on page 146
• Configuration parameters for generic JDBC and ODBC datasources on page 150


• Defining a connection with deployment context parameters on page 156
• Optimization parameters for generic JDBC and ODBC datasources on page 153
• Adding tables to a relational database datasource on page 157
• Testing and finalizing datasources on page 210


Connection parameters for generic JDBC datasources

Authentication mode
The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for each Data Federator user, depending on how you or your administrator have set up the login domains.

Defined resource
The Data Federator resource that holds the configuration information that you want to use.


Driver location
The directory and filename of the JDBC driver that Data Federator uses to connect to your JDBC datasource. Enter the path and filename of the driver file that is delivered with the database management system that you want to use as a source.
For example, to set the location of the JDBC driver for a MySQL database, if you put the file in the Data Federator installation directory, type:
C:\Program Files\BusinessObjects Data Federator XI 3.0\LeSelect\drivers\mysql-connector-java-3.1.12-bin.jar

Driver properties
The properties of the JDBC driver. The properties that are available depend on the database management system to which you want Data Federator to connect. See the documentation for your database management system for a description of the available properties. In this field, enter properties in the form:
property-name=value[;property-name=value]*


JDBC connection URL
The database connection information in the form of a JDBC connection URL. For example: jdbc:mysql://localhost:3306/database-name. For other types of JDBC drivers, see the description of the property urlTemplate.

JDBC driver
The filename of the JDBC driver to load.

Login domain
The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password
The password that Data Federator enters for the username.

Prefix table names with the database name
Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.


Prefix table names with the schema name
Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema
The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Session properties
The session properties that Data Federator attempts to set on the source database management system. In this field, enter properties in the form:
property-name=value[;property-name=value]*

Supports catalog
Select this check box if the database supports metadata catalog functionality.


Supports schema
Select this check box if the database supports schema catalog functionality.

User Name
The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters for generic ODBC datasources

Authentication mode
The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for each Data Federator user, depending on how you or your administrator have set up the login domains.

Defined resource
The Data Federator resource that holds the configuration information that you want to use.


Login domain
The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

ODBC DSN
The ODBC Data Source Name to use.

ODBC version
The ODBC version.

Password
The password that Data Federator enters for the username.

Prefix table names with the database name
Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.


Schema
The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Supports catalog
Select this check box if the database supports metadata catalog functionality.

Supports schema
Select this check box if the database supports schema catalog functionality.

User Name
The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in generic JDBC datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• JDBC connection URL

• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Connection parameters in generic ODBC datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156


Configuration parameters for generic JDBC and ODBC datasources

Use these parameters when configuring a generic JDBC or ODBC datasource, that is, a JDBC or ODBC datasource for which there is no existing resource file.

Cast column types
Lets you list mappings between the database type and the JDBC type. Write the database type followed by = (equals), then the JDBC type. Separate mappings with ";" (semicolon). For example:
BOOLEAN=BIT;STRING=VARCHAR
This is useful when the default mapping done by the driver is incorrect or incomplete.

Ignore keys
Specifies if Data Federator ignores the keys when retrieving a schema from the JDBC data source.


Maximum load per connection
The maximum load authorized for each connection. This value can be used to control the maximum number of cursors open per connection. You should also set this parameter to 1 if the driver used to connect to the database does not support connection sharing among threads. The default is 0, which means no limit.

Quoted names
Specifies if Data Federator puts quotes around table names when connecting to the JDBC data source.

Set read only
Specifies if Data Federator will attempt to set the read only option when connecting to your JDBC data source.
• Select this check box if you are sure that your JDBC data source supports the read only option. This optimizes performance.
• Clear this check box if your JDBC data source does not support the read only option.

SQL dialect
The SQL dialect to use. The choices are:
• SQL 92
• SQL 99
• ODBC


SQL states for connection failure
The list of specific SQL State codes that can be used to detect a stale connection when an SQLException is thrown by the underlying database. Standard X/OPEN codes for connection failures (starting with the two-character class 08) do not need to be specified here. An example of a specific code for Oracle is 61000 (ORA-00028: your session has been killed). In this case, you should enter the code 61000. The default is empty. Elements are separated by the character semicolon (;) with no space between elements.


Table types
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

SQL string type
Specifies the norm of the syntax that Data Federator uses to write requests for your JDBC data source.
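The two semicolon-separated formats above, the Cast column types mapping and the SQL states list, can be illustrated with a short parsing sketch. This is not Data Federator's implementation; it only shows how such values decompose, assuming the formats described in the table.

```python
def parse_type_casts(value):
    """Parse 'DBTYPE=JDBCTYPE;...' into a mapping of database type to JDBC type."""
    casts = {}
    for pair in value.split(";"):
        if pair:
            db_type, jdbc_type = pair.split("=", 1)
            casts[db_type] = jdbc_type
    return casts

def is_stale_connection(sql_state, extra_codes):
    """Class 08 states always indicate failure; configured codes extend the list."""
    return sql_state.startswith("08") or sql_state in extra_codes

print(parse_type_casts("BOOLEAN=BIT;STRING=VARCHAR"))
# {'BOOLEAN': 'BIT', 'STRING': 'VARCHAR'}

extra = "61000".split(";")  # value entered in "SQL states for connection failure"
print(is_stale_connection("08S01", extra))  # True (standard class 08)
print(is_stale_connection("61000", extra))  # True (Oracle: session killed)
```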

Related Topics
• About datasources on page 66

Optimization parameters for generic JDBC and ODBC datasources

You use the "Optimization Parameters" pane to specify the capabilities that
your database management system supports. This reduces the amount of
processing that Data Federator performs, and improves performance.


For the capabilities that you do not select, Data Federator performs the
operation. For example, if you clear the Supports aggregates checkbox,
Data Federator performs the aggregate operation on the data it retrieves
from the database management system.
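As a sketch of what this fallback means, assume the source returns raw rows because Supports aggregates is cleared; the federation layer must then compute the aggregate itself, conceptually as follows. This is an illustration of the principle only, not Data Federator's query engine.

```python
from collections import defaultdict

# When "Supports aggregates" is cleared, the source receives a plain row fetch
# and the federation layer computes the GROUP BY aggregate locally.
def aggregate_locally(rows, key, value):
    totals = defaultdict(int)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

rows = [{"region": "EMEA", "amount": 10},
        {"region": "EMEA", "amount": 5},
        {"region": "APAC", "amount": 7}]
print(aggregate_locally(rows, "region", "amount"))  # {'EMEA': 15, 'APAC': 7}
```

When the capability check box is selected instead, the equivalent GROUP BY is pushed down to the database, which is why matching the check boxes to the database's real capabilities improves performance.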

Note:
To get the best performance, check the capabilities of your database, and
select the matching capability check boxes for as many options as possible.

Managing database datasources

Using deployment contexts

Deployment contexts allow you to easily deploy a project on multiple servers.


Using deployment contexts, you can define multiple sets of datasource
connection parameters to use with a project's deployment. Each deployment
context represents a different server deployment.

For example, you can define a deployment context for a group of datasources
running on a development server, and another deployment context for the
same group of datasources running on a production server.

When you define the connection parameters for a datasource, in place of the configuration values, you use the corresponding parameter name. At deployment time, you select a deployment context, and Data Federator substitutes the appropriate values for the connection.

Within each deployment context that you define for a project, you use an
identical set of deployment parameter names to define the connection
parameters common to each datasource. You then use these names in your
datasource definition rather than the actual values, and at deployment time,
Data Federator substitutes the values corresponding to the deployment type
that you select.

The deployment parameters that you can use with a datasource definition depend on the connection's resource type.

Related Topics
• Defining deployment parameters for a project on page 155

• Defining a connection with deployment context parameters on page 156

Defining deployment parameters for a project

Perform this task to create deployment contexts so that you can deploy the
project on multiple servers. Typically, you would create a deployment context
for each server on which the project is to be deployed.
1. Open the project, and in the Tree list, select Configuration to display the Configuration screen.
2. On the "Configuration" screen, click Deployment contexts to expand
the Deployment Contexts pane.
3. Click the Add new context link to display the Add a new context screen.
4. In the General pane, enter a name for your context. If this context is to
be the default, select the is default check box. The parameters that you
define are then used when you deploy a project and do not select a
deployment context.
5. On the Add a new deployment context screen, click the Add
Parameters button.
6. From the list that appears, select the number of parameters that you want
to add to the deployment context.
A row for each parameter appears, ready for you to supply the parameter
definitions.
7. Define each parameter, by entering a name and a corresponding value
in each of the Name and Value fields.
8. When you have finished defining parameters, click OK to save your
settings.

Example:
To define a deployment context variable for a production server with a host name of prodserver5, you would enter the following in the Name and Value fields:
• Name: ProductionServer
• Value: prodserver5


In your connection definition, you would define the host name as follows
to specify prodserver5 with the deployment context:

${ProductionServer}

Related Topics
• Defining a connection with deployment context parameters on page 156
• Using deployment contexts on page 325

Defining a connection with deployment context parameters

You must have defined a deployment context with the connection configurations to use.

Once you have defined a deployment context, you can use the parameters
it contains to configure a connection. The parameters with which you can
use a deployment context depends on the resource type that you are using.
For example, for a Microsoft Access datasource, you can use the following:
• dsn
• user
• password

For a DB2 datasource, you can use the following:


• host
• port
• database
• user
• password
• schema

• To use a deployment parameter, in the parameter field, use the syntax ${paramname}, where paramname is the name of the deployment parameter as defined in the Deployment parameters definition screen.

Example:
If you have defined a deployment parameter DeployHost to define the host, you would use the following syntax in the Connection Definition's Host name field: ${DeployHost}

Adding tables to a relational database datasource

• You must have added a datasource with a relational database source
type—that is, either a datasource defined using a resource, or a generic
JDBC or ODBC datasource, for example:
• JDBC from defined resource
• Generic JDBC
• ODBC from defined resource
• Generic ODBC

Once you have defined a relational database datasource, you can add tables
to it.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.

2. In the Datasource tables pane, click Update all tables.


The Datasources > your-datasource-name > Draft > Update tables
window appears.

3. Select the tables that you want to include in your new datasource, and
click Save.
You need to make your datasource final before you can use it.

Related Topics
• Creating generic JDBC or ODBC datasources on page 138
• Updating the tables of a relational database datasource on page 158
• Making your datasource final on page 212


Updating the tables of a relational database datasource

Once you have defined a relational database datasource, you can update
the tables in it.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.

2. In the Datasource tables pane, click Update all tables....


The Datasources > your-datasource-name > Draft > Update tables
window appears.

The tables that you previously selected appear as in use.

The tables that you have not selected appear as not in use.

The tables that you previously selected but that have been dropped from
the data access system appear as no longer usable. You must delete
the definitions of these tables from your datasource.

3. Select the tables that you want to include in your new datasource, and
click Save.

Creating text file datasources


Data Federator can use text files as a datasource. There are many formats
of text files from which Data Federator can extract data. The simplest of
these formats is comma-separated value (CSV).

In general, Data Federator can understand any format in which the data in
the text file is arranged into columns. The columns can be a fixed length or
separated by a specific character. When you add a text file datasource, you
can use the File Extraction Parameters pane to configure these and other
options. Data Federator can then transform your text files into relational,
tabular data.

You can create a datasource from a single text file, or you can create a
datasource from multiple text files.


About text file formats

Data Federator supports access to two types of text files:


• files with fields of fixed length
• files with field separators (commonly called "CSV files")

Files with separators are generally simpler to generate and are easier to
read. Many software applications can generate these types of files from
internal data.

Note:
Whatever character (or character sequence) you choose as the separator,
be certain that the separator never appears inside the value of a field. If
the character that you chose as a separator appears in the value of a field,
Data Federator will read two fields instead of one.
Files with fields of fixed length are more restrictive because the size of
each field does not vary.
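The risk is easy to see with a short sketch (Python here, purely for illustration; the sample values are invented): splitting on the separator yields an extra field whenever the separator also occurs inside a value.

```python
# Field separator is ";", but the second field's value, "SALES;EMEA",
# happens to contain the separator -- three fields come back, not two.
line = "MARY;SALES;EMEA"
print(line.split(";"))   # ['MARY', 'SALES', 'EMEA']

# With a separator that never occurs in the data, the two
# intended fields come back intact.
line = "MARY|SALES;EMEA"
print(line.split("|"))   # ['MARY', 'SALES;EMEA']
```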

Setting a text file datasource name and description

You can define a text file to use as a datasource.


1. At the top left of the Designer window, click Add, and from the pull-down
list, select Add Datasource.
The New datasource window is displayed.
2. Click Add datasource, then click Text file.
The Datasources > Draft window appears.

3. Enter a name and description for your datasource in the Datasource
name and Description boxes, then click Save.
The datasource is added, and it appears in the tree list at the left of the
screen. The datasource Draft screen is displayed. You can now add
tables, then select the source file or files for it.

Related Topics
• Selecting a text data file on page 160


Selecting a text data file

• You must have defined a text file datasource, and allocated a name to
it.

This procedure describes how to use a single text file as a datasource. To
use multiple text files, see the documentation on selecting multiple text files
as a datasource.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.

2. In the Datasource tables pane, click Add.


The Datasources > your-datasource-name > Draft > New table
window appears.

3. In the Table name box, type a name for your new table.
4. In the General pane, click Browse.
The Browse frame appears.

5. Use the Browse frame to locate and select your source file.
To browse a different drive, enter the drive letter in the Directory box,
and click Browse again.

For example, to locate a text file on the Q: drive, enter "Q:\" in the
Directory box and click Browse.

When you select a file, the file name appears in the File name or pattern
box.

6. Click Save.
Data Federator references the file for use with this datasource table.

Related Topics
• Setting a text file datasource name and description on page 159
• Selecting multiple text files as a datasource on page 173


Configuring file extraction parameters

• You must have chosen a source file for your datasource.

After you choose your source file, you define how Data Federator parses
the text that the file contains.
1. In the tree list, expand your-datasource-name, then expand Draft,
then click your-datasource-table-name in the tree list.
The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. In the File extraction parameters pane, complete the text boxes that
describe how Data Federator parses your source file.
• To preview the contents of your file as you configure the file extraction
parameters, click Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source
file.

3. In the Field formatting pane, complete the text boxes that describe the
format of the data in the fields of your source file.
4. Click Save.
Data Federator saves the definition of the format of your source file, which
allows Data Federator to parse the data in the file.

Related Topics
• File extraction parameters on page 162
• Field formatting parameters for text files on page 164
• Selecting a text data file on page 160


File extraction parameters

Use these parameters to help you when configuring text file extraction
parameters.

Parameter: File charset
The character set used in your source file.

Parameter: File type
• Delimited
Choose this when your source file has entries separated by a character.
For example:
MARY;123;SALES
JOHN;456;PURCHASING
• Fixed width
Choose this when your source file has entries with fixed widths.
For example:
MARYxxx 123xx SALESxxx
JOHNxxx 456xx PURCH.xx

Parameter: Field separator
The character that separates fields in your source file.

Parameter: Ignore line separator
Specifies whether, for text files with fields of fixed width, Data Federator
considers the newline character (\n) as the end of a line when this character
occurs in the last field before the normal number of characters.
For example, for fields of six characters, the last field is normally padded
to its full width:
JOExxx USAxxx 1980xx Mxxxxx
However, the last field may not be padded if it ends in a newline character:
JOExxx USAxxx 1980xx M\n
• Select this check box, for fields of fixed width, if your data is padded in
the last field.
• Clear this check box if your data uses the newline character to end the
last field.

Parameter: Text qualifier
The character that surrounds text, for example " (double quote) or ' (single
quote).

Parameter: Escape character for text qualifier
The character that allows a text qualifier to be treated literally.
For example, if the text qualifier is " (double quote) and the escape
character is \ (backslash), then Data Federator considers the entire line
that follows as one text value:
"roles \"admin\", \"manager\", \"root\""
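The two file types and the escape-character behavior can be sketched with Python's csv module and plain string slicing. This is an illustration of the parsing rules above, not Data Federator's own implementation; the field widths and sample rows are invented.

```python
import csv
import io

# Delimited: ";" separates fields, '"' is the text qualifier and
# "\" escapes a literal qualifier inside a value.
delimited = 'MARY;123;"roles \\"admin\\", \\"manager\\""\n'
reader = csv.reader(io.StringIO(delimited), delimiter=";",
                    quotechar='"', escapechar="\\", doublequote=False)
print(next(reader))  # ['MARY', '123', 'roles "admin", "manager"']

# Fixed width: each field occupies a fixed number of characters
# (x stands for padding, as in the examples above).
widths = [7, 6, 9]
record = "MARYxxx 123xx SALESxxx"
fields, pos = [], 0
for w in widths:
    fields.append(record[pos:pos + w])
    pos += w
print(fields)  # ['MARYxxx', ' 123xx', ' SALESxxx']
```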

Related Topics
• Configuring file extraction parameters on page 161

Field formatting parameters for text files

Use these parameters when configuring the file extraction parameters.


Parameter: Date and time language
Specifies the language in which month names and weekday names are
represented in your source file.
For example, if your source file has dates like:
13 janvier, 2000
you should set your Date and time language to fr (French).
These codes are the lower-case, two-letter codes as defined by ISO-639.
You can find a full list of these codes at a number of sites, such as:
http://www.ics.uci.edu/pub/ietf/http/related/iso639.txt

Parameter: Date and time country
Specifies the country of the localization format used to represent dates in
your source file.
For example, if your Date and time language is English, and your source
file has dates like:
31/12/2002
you should set your Date and time country to UK.
If your Date and time language is English, and your source file has dates
like:
12/31/2002
you should set your Date and time country to US.
These codes are the upper-case, two-letter codes as defined by ISO-3166.
You can find a full list of these codes at a number of sites, such as:
http://www.chemie.fu-berlin.de/diverse/doc/ISO_3166.html

Parameter: Decimal separator
The character that separates whole numbers from decimals in your source
file.
For example, if your source file has numbers like:
123.99
you must set the Decimal separator to "." (period).

Parameter: Thousands grouping separator
The character that separates groups of thousands in your source file.
For example, if your source file has numbers like:
26,120,500,000
(for 26 billion, 120 million, 500 thousand)
you must set the Thousands grouping separator to "," (comma).

Parameter: DATE format, TIMESTAMP format, TIME format
Specifies the format in which dates, times and timestamps are represented
in your source file.

Related Topics
• Configuring file extraction parameters on page 161
• Date formats used in text files on page 176
• Using data types and constants in Data Federator Designer on page 604


Connection parameters in text datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• User Name

To use a deployment context parameter in a datasource definition field, use
the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Automatically extracting the schema of your datasource table

• You must have configured the extraction parameters of your source file.
• The text file that you are configuring must have a header row.

Once you have configured the extraction parameters of your source file, you
must define the schema of your datasource table. Use this procedure to
automatically extract the schema from the first line in your source file.
1. In the tree list, expand your-datasource-name, then expand Draft,
then click your-datasource-table-name in the tree list.


The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. Click Table schema.
The Datasources > your-datasource-name > Draft > your-datasource-table-name > Table schema window appears.

3. In the Schema definition pane, select First line after the ignored header
of the file.
To preview the contents of your file as you define the schema, click
Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source file.

4. Click Extract schema.


A confirmation window appears. Click OK to extract the schema and
replace the current schema.

Data Federator extracts the fields in the file, and uses the first line to
create a title for each column.

5. Verify and select the correct types for all the columns.
6. Select the check boxes under the key icon to specify the primary key.
Note:
You can select multiple check boxes to indicate a primary key defined by
multiple columns.

7. Click Save.
The text file is registered as the source of your datasource.

Related Topics
• Configuring file extraction parameters on page 161
• Generating a schema when a text file has no header row on page 171
• Using a schema file to define a text file datasource schema on page 171
• Using data types and constants in Data Federator Designer on page 604
• Making your datasource final on page 212


Indicating a primary key in a text file datasource

You can indicate the primary key while defining the schema of your
datasource.
1. Open the Table schema window.
2. Define one or more columns as a key in the Table schema pane, by
selecting the checkboxes under the Key icon.

Managing text file datasources

Editing the schema of an existing table


1. In the tree list, expand your-datasource-name, then expand Draft, then
click your-datasource-table-name in the tree list.

The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. In the Table schema pane, click Table schema.
The Datasources > your-datasource-name > Draft > your-datasource-table-name > Table schema window appears, where you can
edit the table schema.


Using a schema file to define a text file datasource schema

Data Federator lets you import the schema of a datasource from an external
file. The schema must be in a DDL format.
1. Write your schema file, using one of the formats that Data Federator
recognizes.
2. Open the Table schema window.
3. In the Schema definition pane, from the Schema location text box,
select SQL DDL file or Proprietary DDL file.
4. In the Schema definition pane, choose your source file by clicking
Browse, then navigate to your file and click Select.
To preview the contents of your source text file as you define the schema,
click Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source file.

5. Click Extract schema.


Data Federator extracts the schema from your DDL file.

Related Topics
• Automatically extracting the schema of your datasource table on page 168
• Formats of files used to define a schema on page 608

Generating a schema when a text file has no header row

When your source file does not have a first line that defines the names of
the columns, Data Federator can extract the number of columns, which you
can then name manually.
1. Open the Table schema window.
2. In the Schema definition pane, from the Schema location text box,
select Automatic from the structure of the file.


To preview the contents of your file as you define the schema, click
Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source file.

3. Click Extract schema.


Data Federator extracts the correct number of columns, based on the
character you entered as the column separator.

4. Name the columns manually.


5. Click Save.
The text file is registered as the source of your datasource. You can now
make your datasource final.

Related Topics
• Automatically extracting the schema of your datasource table on page 168
• Defining the schema of a text file datasource manually on page 172
• Making your datasource final on page 212

Defining the schema of a text file datasource manually

You can define the schema of your datasource manually if the schema
information is not contained in the source file.
1. Open the Table schema window.

• Add more columns: select a row, then click Add columns, then click
the number of new columns you want to add.
• Change a column name: edit the name of the column in the Column
name column.
• Change a column type: edit the type of the column in the Column type
column.
• Delete a column: click the Delete icon in the row that defines the column
that you want to delete.
• Extract the entire schema again: in the Schema definition pane, click
Extract schema.

Note:
The columns must be indicated in the order that the fields appear in the
datasource.
When the source of the datasource is a file containing fixed-length fields,
you must also indicate the number of characters of each field.

2. Click Save to save your changes.

Related Topics
• Using data types and constants in Data Federator Designer on page 604

Selecting multiple text files as a datasource

You can specify multiple files simultaneously when you are selecting a text
file as a datasource. Note, however, that the files must share the same table
schema.
1. Specify your multiple source files in the File name or pattern text box in
the File pane of the Datasources > your-datasource-name > Draft
> your-datasource-table-name window.


The File name or pattern text box indicates the names of the files that
are used to populate the datasource table. You can associate multiple
source files to the same datasource table by separating the names of
each file with a semi-colon ';', and also by using the following symbols:
• Use the symbol "*" to indicate any sequence of characters. For
example "dat*.csv" specifies all the files with names starting with "dat"
and ending with ".csv".
• Use the symbol "?" to indicate any single character. For example
"dat?.csv" specifies the files whose names are composed of the
character string "dat", followed by any single character, followed by the
extension ".csv".

2. Click Save.
Data Federator references the file for use with this datasource table.
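The "*" and "?" symbols behave like ordinary glob wildcards. A quick sketch with Python's fnmatch (illustrative only; the file names are invented):

```python
from fnmatch import fnmatch

files = ["dat.csv", "dat1.csv", "data_2008.csv", "sales.csv"]

# "*" matches any sequence of characters (including none).
print([f for f in files if fnmatch(f, "dat*.csv")])
# ['dat.csv', 'dat1.csv', 'data_2008.csv']

# "?" matches exactly one character.
print([f for f in files if fnmatch(f, "dat?.csv")])
# ['dat1.csv']
```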

Related Topics
• Selecting a text data file on page 160

Numeric formats used in text files

The following table shows some examples of the numeric formats that Data
Federator reads from text files when you use a text file as a datasource.

If your text file uses the number...    and the column type is...    Data Federator interprets it as...
+1 234.56                               INTEGER                       1234
-1 234 euros                            INTEGER                       -1234
-1 234 567.89                           DECIMAL                       -1234567.89
+1 234 567.89 euros                     DECIMAL                       1234567.89
-1 234 567,89 (where the file uses
"," (comma) as the decimal separator)   DECIMAL                       -1234567
-1 e+2 euros                            DECIMAL                       -100

These examples assume that you have set the decimal separator to "."
(period).

Rules that Data Federator uses to read numbers from text files
• For integers, no error is returned if the string data overflows the size of
an integer (MAX_VALUE = 2147483647 (2^31-1) and MIN_VALUE =
-2147483648 (-2^31)).
• For doubles, no error is returned if the string data overflows the size of a
double (64 BITS). The value is truncated.
• White space is removed.
• Parsing stops at the first non-digit character (except decimal separators,
grouping separators, exponential symbols ("e" and "E") and signs ("+"
and "-")).
• The exponential symbol and the decimal separator can be used only one
time, otherwise parsing stops.
• The exponential symbol must be followed by a digit or a sign, otherwise
parsing stops.
• The symbols "+" and "-" can be used at the beginning of the string data
or after the exponent symbol.
• In Data Federator Designer, if you do not indicate any decimal or grouping
separators, the application uses the default separators corresponding to
the language field.


• No error is returned if you define the same symbol for column separator,
decimal separator and grouping separator (column separator has priority
over decimal separator, which has priority over grouping separator).
• If parsing cannot complete while following these rules, Data Federator
stops parsing and throws an exception.
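The rules above can be condensed into a small sketch. This is a simplified illustration of the stated rules, not the actual Data Federator parser; column-type truncation (for example, reading a DECIMAL string into an INTEGER column) and overflow handling are not modeled.

```python
def read_number(text, decimal_sep=".", grouping_sep=","):
    # White space and grouping separators are removed.
    chars = [c for c in text if not c.isspace() and c != grouping_sep]
    num, seen_dec, seen_exp = "", False, False
    i = 0
    while i < len(chars):
        c = chars[i]
        if c.isdigit():
            num += c
        elif c in "+-" and (num == "" or num[-1] == "e"):
            # Signs may appear at the start or right after the exponent symbol.
            num += c
        elif c == decimal_sep and not seen_dec and not seen_exp:
            num += "."
            seen_dec = True
        elif c in "eE" and not seen_exp and i + 1 < len(chars) \
                and (chars[i + 1].isdigit() or chars[i + 1] in "+-"):
            # The exponent symbol must be followed by a digit or a sign.
            num += "e"
            seen_exp = True
        else:
            # Parsing stops at the first other non-digit character.
            break
        i += 1
    # A string that yields no parsable number raises an exception here.
    return float(num) if (seen_dec or seen_exp) else int(num)

print(read_number("-1 e+2 euros"))    # -100.0
print(read_number("-1 234 euros"))    # -1234
print(read_number("-1 234 567.89"))   # -1234567.89
```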

Date formats used in text files

The following table shows some examples of the date formats that you can
write in the Date format box.

If your text file uses these dates...                      Enter this format...
2002-06-01, 1999-12-31, or 1970-01-01                      yyyy-MM-dd
10:20 AM January 31st, 1998 or 8:00 PM March 10th, 1990    hh:mm aa MMMM dd, yyyy
1/2/95, 12/15/01 or 4/30/2001                              M/d/yy

For details on date formats, see the Java 2 Platform API Reference for the
java.text.SimpleDateFormat class, at the following URL:

http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.

Note:
For extracting hours, the pattern hh or KK extracts in 12-hour time, while HH
or kk extracts in 24-hour time.
The following table shows the results of different patterns on hour values.

The hour value...    using the pattern...    results in the value...
'00'                 hh                      00
'00'                 HH                      00
'12'                 hh                      00
'12'                 HH                      12
'24'                 hh                      00
'24'                 HH                      00
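These patterns are Java SimpleDateFormat patterns. Python's strptime directives differ, but the first format-table row can be reproduced with the equivalent directives (%Y-%m-%d for yyyy-MM-dd, %I with %p for 12-hour time) as a sanity check; this is an illustration, not the mechanism Data Federator uses.

```python
from datetime import datetime

# Java pattern yyyy-MM-dd corresponds to strptime's %Y-%m-%d.
for raw in ("2002-06-01", "1999-12-31", "1970-01-01"):
    print(datetime.strptime(raw, "%Y-%m-%d").date())

# Like the Java pattern hh with aa, %I with %p reads 12-hour time.
t = datetime.strptime("10:20 AM", "%I:%M %p")
print(t.hour, t.minute)  # 10 20
```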

Related Topics
• Field formatting parameters for text files on page 164
• Using data types and constants in Data Federator Designer on page 604

Modifying the data extraction parameters of a text file

You can modify the data extraction parameters of the draft version of a
datasource.

Changing the data extraction parameters of your datasource has the following
consequences:
• Your table schemas are erased.

1. At the bottom of the tree list, click Datasources.


2. Expand your-datasource-name, then click Draft.


The Datasources > your-datasource-name > Draft window appears.

3. Modify the data extraction parameters.

Related Topics
• File extraction parameters on page 162

Using a remote text file as a datasource

You require the machine address, user name, password, and port number
for the remote machine.

Data Federator can access text files on a remote machine on a network. The
following options are available:
• Use an SMB share, provided that your machine is in the same SMB
domain or workgroup as the remote machine.
• Use an FTP account.

To create a datasource using a text file on a remote machine:


1. In the Configuration pane, set the Source type parameter to Text file.
2. In the Configuration pane, set the Protocol parameter to FTP file system
or SMB share.
3. In the Configuration pane, set the Hostname, Port, Username, and
Password fields to the correct access details for the remote machine.
4. When adding a table to a remote FTP source, in the File name or pattern
field, configure the name of your remote text file.
For a remote source over FTP, indicate the absolute path. Note that the
root path for an FTP connection can differ from the root path on the
remote machine. For example, a root path for FTP may be /public/data
while the absolute path on the remote server is /home/user/public/data.

For a remote source on an SMB network, you must indicate the path from
the shared directory. For example, if the shared directory is shareDir and
the files are contained in a subdirectory data, then you must indicate the
path as shareDir\data.

The path must start with the name of the shared directory without the
leading backslash "\".

Note:
If connecting to a public SMB directory on UNIX, you must log in with the
username guest.

Creating XML and web service datasources

About XML file datasources

Data Federator can use an XML file as a datasource. The elements and
attributes in the XML file are mapped to tables and columns, depending on
how you configure the datasource. You can use the Elements and Attributes
pane to configure which tables and columns you want to create from the
elements in the XML file.

Related Topics
• Using the elements and attributes pane on page 189

Adding an XML file datasource

You can add a datasource from the Datasources window.


1. In the tree list, click Datasources.
2. Click Add datasource, then click XML data source.
The Datasources > Draft window appears.
3. Complete the datasource name and description in the Datasource name
and Description boxes, then click Save.


Your datasource is added, and you can choose and configure a source
file for it.

Choosing and configuring a source file of type XML

• You must have added a datasource whose source type is XML file.

1. In the tree list, expand your-datasource-name, then click Draft.


The Datasources > your-datasource-name > Draft window appears.

2. Select your source file.


Note that multiple XML source files must have the same schema.
3. In the Configuration pane, click Browse to select your source file.
The Browse frame appears.
4. Select your source file from the file list.
If you want to browse a different drive, enter the drive letter in the
Directory box, and click Browse in the Browse frame.

For example, enter Q:\ in the Directory box and click Browse.

5. Click Select.
The file name appears in the XML file name box in the Configuration
pane.

6. Select one of the following radio buttons depending on the location of
your XML schema file:
• Inside XML file
• From external XSD (Xml Schema Definition) file
If you are sourcing an external XSD file, click Browse to the right of the
XML schema file name field and navigate to and select it as described
in Step 4, above.
7. Select one of the following radio buttons depending on how your XML
datasource tables should be generated:
• Normalized
• Denormalized

Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.

8. Click Generate Elements and Attributes.


The Elements and Attributes pane is displayed showing a populated
Elements field in List view.
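The difference between the two table-generation modes can be sketched on a tiny invented document: in normalized mode a row references only its immediate parent, while in denormalized mode it references every ancestor. This is an illustration only; Data Federator's actual column generation is configured in the Elements and Attributes pane.

```python
import xml.etree.ElementTree as ET

# Invented three-level document: catalog > product > review.
doc = ET.fromstring(
    "<catalog id='c1'><product id='p1'><review id='r1'/></product></catalog>"
)

def rows(elem, ancestors=(), normalized=True):
    # One row per element; the foreign-key columns depend on the mode.
    fks = ancestors[-1:] if normalized else ancestors
    yield (elem.tag, elem.get("id"), list(fks))
    for child in elem:
        yield from rows(child, ancestors + (elem.get("id"),), normalized)

# Normalized: each row's only foreign key is its immediate parent.
print(list(rows(doc, normalized=True)))
# [('catalog', 'c1', []), ('product', 'p1', ['c1']), ('review', 'r1', ['p1'])]

# Denormalized: each row carries foreign keys for all its ancestors.
print(list(rows(doc, normalized=False)))
# [('catalog', 'c1', []), ('product', 'p1', ['c1']), ('review', 'r1', ['c1', 'p1'])]
```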

Related Topics
• Selecting multiple XML files as datasources on page 199
• Adding an XML file datasource on page 179
• Using a remote XML file as a datasource on page 200

Connection parameters in XML datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• User Name

To use a deployment context parameter in a datasource definition field, use
the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding datasource tables to an XML datasource

• You must have chosen and configured a source file of type "XML".


Once you have defined an XML datasource, you can add tables to it.

You do this by selecting the required Elements and Attributes in the
Elements and Attributes pane, and clicking Create tables.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.
2. In the Elements and Attributes pane, select the Elements and
Attributes that you want to appear in your datasource tables and click
Create tables.
The Tables pane appears showing your selected Elements and
Attributes.

3. Select the datasource tables that you want to include in your new
datasource, and click Save.

Related Topics
• Choosing and configuring a source file of type XML on page 180
• Using the elements and attributes pane on page 189
• Making your datasource final on page 212

About web service datasources

Data Federator can use a web service as a datasource. The elements and
attributes in the response of the web service are mapped to tables and
columns, depending on how you configure the datasource.

Some of the new concepts introduced by Data Federator web services are:
• SQL access to web services
• input columns (to pass values to web service parameters)
• automatic mapping of XML to relational schemas

The responses of web service requests appear in Data Federator in tabular
form. You can use the "Elements and Attributes" pane to map the web service
response to tables and columns.

Related Topics
• Using the elements and attributes pane on page 189

• Mapping values to input columns on page 233

Adding a web service datasource

You can add a datasource using the Add button beneath a project's tab.
1. Click Add, then select Add datasource.
The New Datasource window appears.

2. Complete the datasource name and description in the Datasource name


and Description boxes.
3. From the Datasource type box, select Web service data source.
4. Click Save.
Your datasource is added, and you can choose and configure a web
service.

Extracting the available operations from a web service

• You must have added a datasource whose source type is web service.

1. Edit your web service datasource.


2. From the WSDL location list, choose HTTP.
3. In the URL box, type the URL of your WSDL file.
For example, you could type http://www.xignite.com/xQuotes.asmx?WSDL
to use the Xignite web service for checking stock quotes.

4. Select one of the following radio buttons depending on how the datasource
tables of your web service should be generated:
• Normalized
• Denormalized
Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.

5. Click Generate operations.


The "Operation selection" pane expands, letting you select the operations
that you want to access from the web service.

Related Topics
• Adding a web service datasource on page 183

Selecting the operations you want to access from a web service

• You must have added a datasource whose source type is web service.
• You must have extracted the available operations from the web service.

1. Edit your web service datasource.


2. In the "Operation selection" pane, check the boxes beside the operations
you want Data Federator to access.
3. Click Generate operations output schemas.

The "Operations output schemas" pane expands, and you can select
which elements you want Data Federator to convert to tabular form.

Related Topics
• Adding a web service datasource on page 183
• Extracting the available operations from a web service on page 183
• Using the SOAP header to pass parameters to web services on page 186
• Selecting which response elements to convert to tables in a web service
datasource on page 187

Connection parameters in web service datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following field.

• URL

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.
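Conceptually, the substitution works like Python's string.Template. This is a sketch only; Data Federator performs the substitution internally when the datasource is deployed, and the quotes_host parameter below is hypothetical:

```python
# A sketch of how ${parameter} placeholders in a datasource field might be
# resolved against deployment context parameters. Data Federator does this
# internally; this only illustrates the syntax.
from string import Template

deployment_context = {"quotes_host": "www.xignite.com"}  # hypothetical parameter

url_field = "http://${quotes_host}/xQuotes.asmx?WSDL"
resolved = Template(url_field).substitute(deployment_context)
print(resolved)  # http://www.xignite.com/xQuotes.asmx?WSDL
```

Changing the deployment context then repoints the datasource without editing its definition.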

Related Topics
• Defining a connection with deployment context parameters on page 156

Authenticating on a web service datasource

There are two parts to authentication on web services: authenticating on the server that hosts the web services, and using the SOAP header to pass authentication parameters to the web service operations.

To authenticate on the server that hosts the web service datasources, Data
Federator lets you use the same fields that you use for authenticating on
any type of datasource. To pass parameters in the SOAP header to the web
service operation, Data Federator provides a field where you can enter header
parameters.

Related Topics
• Authenticating on a server that hosts web services used as datasources
on page 185
• Using the SOAP header to pass parameters to web services on page 186

Authenticating on a server that hosts web services used as datasources

• You must have added a web service datasource.

To authenticate on the server that is hosting the web services that you are
using as datasources, you use the "Web service authentication" pane to
enter your authentication details.
1. Edit your web service datasource.


2. In the "Web service authentication" pane, choose the authentication mode you want to use, and complete the required parameters.
The parameters you must enter depend on your choice of authentication
mode. Data Federator provides different types of authentication modes
that let you choose which credentials you want to send to the datasource.

Related Topics
• Adding a web service datasource on page 183
• Authentication methods for database datasources on page 207

Using the SOAP header to pass parameters to web services

• You must have extracted the available operations from the web service.

If you are accessing a web service that requires parameters in the SOAP
request, you can add these parameters in the SOAP header.

Parameters in the SOAP header apply to a single operation in the web service, and they do not change when you make a request. For example,
an operation that retrieves a stock quote may require that you provide a
username and password in the SOAP header. The username and password
would not change each time you made a request. In contrast, you could ask
for a different stock symbol each time, so you would give the stock symbol
in the SOAP body.

In the Data Federator interface, you provide a parameter in a SOAP header using the "Header parameters" pane. You provide a parameter in the SOAP
body using an input column.
1. Select the operations that you want to access from the web service.
2. In the "Operation selection" pane, click Header parameters.
3. If the web service requires header parameters, such as authentication
tokens or passwords for each operation, use the SOAP header to pass
parameters to the web service.
Depending on the definition of the web service, the content of the header parameters may differ. There may be no header parameters at all.

Check the Sensitive box if the value of the parameter is a password or other sensitive data.
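The split between header and body parameters can be sketched as follows. This is an illustration only, built around a hypothetical GetQuote operation; the element and namespace names are not taken from any real web service:

```python
# A sketch of the kind of SOAP request a web service datasource sends:
# fixed credentials travel in the SOAP header, while the per-request stock
# symbol travels in the body. All operation element names are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(username, password, symbol):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    auth = ET.SubElement(header, "Authentication")    # header parameters
    ET.SubElement(auth, "Username").text = username
    ET.SubElement(auth, "Password").text = password   # sensitive value
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    quote = ET.SubElement(body, "GetQuote")           # operation call
    ET.SubElement(quote, "Symbol").text = symbol      # input column value
    return ET.tostring(env, encoding="unicode")

# The credentials stay the same across calls; only the symbol changes.
print(build_request("alice", "secret", "SAP"))
```

This mirrors the distinction above: header parameters are entered once in the "Header parameters" pane, while body parameters map to input columns.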

Related Topics
• Extracting the available operations from a web service on page 183
• Selecting the operations you want to access from a web service on
page 184

Selecting which response elements to convert to tables in a web service datasource

• You must have chosen and configured a source file for your web service
datasource.
• You must have extracted the available operations from the web service.
• You must have selected the operations you want to access from a web
service.

Once you have defined a web service datasource, you can add tables to it.

You do this by selecting the required elements and attributes in the Operations output schemas pane, then clicking Create tables.
1. Edit your web service datasource.
2. In the Operations selection pane, choose the operations that you want
the web service to return.
3. In the Operations output schemas pane, select the elements and
attributes that you want Data Federator to expose as tables.
a. In the Operations box, select an operation to see its elements and
attributes.
The elements and attributes that the operation returns appear in the
Operations output schemas pane.
b. In the Operations output schemas pane, select the elements and
attributes that you want to appear in your datasource tables.
4. Click Create tables.
The Tables pane appears, showing your selected elements and attributes.


See the documentation on defining the schema of a datasource for details on how to read the icons in the "Table schema" pane.

Related Topics
• Selecting the operations you want to access from a web service on
page 184
• Using the elements and attributes pane on page 189
• Making your datasource final on page 212
• Defining the schema of a datasource on page 204

Assigning constant values to parameters of web service operations

To assign constant values to parameters in web services, use pre-filters on the input columns that correspond to the parameters.

You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.

You create pre-filters when making the mappings from datasources to targets.

Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Assigning constant values to input columns using pre-filters on page 233

Assigning dynamic values to parameters of web service operations

To assign dynamic values to parameters in web services, use table relationships on the input columns that correspond to the parameters.

You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.

You create table relationships when making the mappings from datasources
to targets.

Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Assigning dynamic values to input columns using table relationships on
page 234

Propagating values to parameters of web service operations

To propagate values to parameters in web services, use input value functions on the input columns that correspond to the parameters.

You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.

You create input value functions when making the mappings from datasources
to targets.

Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Propagating values to input columns using input value functions on
page 234

Managing XML and web service datasources

Using the elements and attributes pane

The Elements and attributes pane of the Datasources > your-datasource-name > Draft window allows you to select which elements and


attributes from your XML or web service datasource you wish to use when
you generate your datasource tables.

It consists of a List view and an Explorer view, as shown in the image below, each containing two panels: Elements and Attributes.

The following example of XML code shows several highlighted elements and
attributes:

<cpq_package version="2.0.0.0">
<catalog_entry_path>ftp://ftp.google.com/pub/cp077.exe</cata
log_entry_path>
<filename>cp002877.exe</filename>
<divisions>
<division key="65">
<division_xlate lang="en">Networking</division_xlate>
<division_xlate lang="ja">Networking</division_xlate>
</division>
<division key="6">
<division_xlate lang="en">Server</division_xlate>
<division_xlate lang="ja">Server</division_xlate>
</division>
</divisions>
</cpq_package>

The table below describes how Data Federator maps these XML elements
and attributes to its own elements and attributes, and subsequently generates
and displays them:

XML example | XML term | Data Federator term | Generates in Data Federator
lang="en" | Attribute | Attribute | Column
<filename> | Simple element (no child elements) | Attribute | Column
<divisions> | Complex element (with child elements) | Element | Table
<division key="65"> | Multi-value element (occurs more than once in its parent element) | Element | Table

Elements in List view are displayed in a list. In Explorer view they are
displayed in the Folder > Folder > File format of the Folders Explorer bar
in Windows Explorer. Attributes are always displayed in list form. Both
elements and attributes selected in List view remain selected when Explorer
view is selected, and vice versa.

The Elements panel contains elements which can be 'checked', 'unchecked', white, light blue, or dark blue. Attributes in the Attributes panel have identical properties, but cannot be dark blue. The table below describes the meaning of these properties:

Table 4-25: Meanings of colors and check boxes

• Checked: Its check box has been selected.
• Unchecked: Its check box has been deselected.
• Dark blue: It was the last item on the screen to have been clicked; it has the focus. Note: Attributes cannot be clicked, and cannot therefore be dark blue.
• Light blue: It has been modified from one state to another. For example, if an element is already included in a datasource and is 'checked', it will be white. If it is then unchecked, it will appear light blue (and will not be included in the datasource tables the next time Update Tables is clicked).
• White: It has been neither modified nor has the focus.

The table below describes how different combinations of these colors and check boxes affect elements:

Table 4-26: Combinations of colors and check boxes on elements

• Unchecked and white: The element did not generate a datasource table currently listed in the Tables pane, nor has it been checked to generate one. Selecting its check box will cause it to generate a datasource table, including all its attributes, the next time Update Tables is clicked.
• Unchecked and light blue: The element generated a datasource table currently listed in the Tables pane, but has been unchecked so it will not generate one the next time Update Tables is clicked. Selecting its check box will cause it to generate a datasource table, including all its attributes, the next time Update Tables is clicked.
• Checked and white: The element generated a datasource table currently listed in the Tables pane, and will continue to generate one when Update Tables is clicked. Clearing its check box will cause its generated datasource table to be deleted from the Tables pane the next time Update Tables is clicked.
• Checked and light blue: The element did not generate a datasource table currently listed in the Tables pane, but will generate one the next time Update Tables is clicked. Clearing its check box will prevent it from generating a datasource table when Update Tables is clicked.

The Find next feature, as shown and highlighted in the image below, allows
you to locate all occurrences of an element, an attribute, or both. It is available
in the Elements and Attributes panels of both the List and Explorer view,
and is especially useful if you have several elements / attributes of the same
name in a large XML or web service datasource.

The Find next feature is described in the table below:


• Elements and attributes drop-down list-box: Select elements, attributes, or elements and attributes, and display filtered results in the Elements and Attributes panels of both the List and Explorer views.
• Find next field: Enter the element / attribute you wish to locate. Note: Press Ctrl + Spacebar to activate autocomplete and display all previously entered terms.
• Find next link: Locate, and highlight in dark blue, every occurrence of the term entered in the Find next field. When used in the Explorer view, it also shows its location in your XML or web service datasource.

Using the list view of the elements and attributes pane

The image below shows an example of the List view tab located within the Elements and attributes pane of the Datasources > your-datasource-name > Draft window:


The List view tab consists of an Elements panel listing the datasource's elements, and an Attributes panel listing a highlighted dark blue element's attributes, if any. The List view tab, in common with the Explorer view tab, allows you to select or de-select any element or its attributes for inclusion in your XML datasource tables.

When you select an element in List view, if there is more than one element of that name, for example 'DatesofAttendance (2)', as shown in the image above, all those elements (and their attributes) will be included in your datasource table. You can de-select one or every attribute of a selected element in the Attributes panel.

The List view tab contains the following features:

Elements panel:
• Header Row check box: Select every instance of every element and its attributes for inclusion in / removal from your datasource table.
• Lower check boxes: Select every instance of the selected element and its attributes for inclusion in / removal from your datasource table.
• Elements title: List the elements in ascending / descending order.
• View only elements already used for tables check box: View only those elements that appear in your current, not yet updated, datasource table.
• Individual elements: Display the element in dark blue, and display its attributes in the Attributes panel.

Attributes panel:
• Header Row check box: Select every attribute of the selected element for inclusion in / removal from your datasource table.
• Lower check boxes: Select an individual attribute for inclusion in / removal from your datasource table.

Using the explorer view of the elements and attributes pane

The image below shows an example of the Explorer view tab located within the Elements and attributes pane of the Datasources > your-datasource-name > Draft window:


The Explorer view tab displays your elements in expandable directory form. It allows you to select one or more instances of an element for inclusion in your datasource tables where two or more elements share the same name. Attributes are displayed in selectable (non-clickable) list form, as in the List view.

The Explorer view tab contains the following features:

Elements panel:
• Main view link: Display the elements from the root directory. Note: This link is only enabled if you have clicked on a 'More' icon; see 'More icon', below, for details.
• All elements drop-down list-box: Display all elements; selected elements; selected elements and their children; or selected elements and their parents.
• Expand all link: Expand the + signs if they are not already expanded, and display all child elements in the XML file.
• View details link: Display a Details tab showing only the highlighted dark blue element and its direct parent / child path.
• Element box: Highlight the element in dark blue and display its attributes in the Attributes panel.
• Check box within an Element box: Select all the selected element's attributes for inclusion in / removal from your datasource table. Note: See 'Lower check boxes', below.
• More icon: Display only the element to which it is attached as a 'root' element. Note: The path from the actual root element to the selected element is displayed at the top of the Elements panel.

Attributes panel:
• Header Row check box: Select every attribute of the selected element for inclusion in / removal from your datasource table.
• Arrow in header row: Expand or contract the Attributes panel.
• Lower check boxes: Select an individual attribute for inclusion in / removal from your datasource table.

Selecting multiple XML files as datasources

You can specify multiple files simultaneously when you are selecting an XML
file as a datasource.

Note:
All your XML files must have the same schema, and your XML schema must
be an external XSD file.

1. Specify the multiple source files in the XML file name text box in the Configuration pane of the Datasources > your-datasource-name > Draft window.
The XML file name text box displays the names of the files that are used
to populate the datasource table. You can associate multiple source files
to the same datasource table by separating the names of each file with
a semi-colon ';', and also by using the following symbols:
• Use the symbol "*" (asterisk) to indicate any sequence of characters.
For example "dat*.csv" specifies all the files with names starting with
"dat" and ending with ".csv".
• Use the symbol "?" (question mark) to indicate any single character.
For example "dat?.csv" specifies the files whose names are
composed of the character string "dat" followed by any character,
then followed by the extension "csv".


2. Select the From external XSD (Xml Schema Definition) file radio button
to the right of XML schema location, click Browse to the right of the
XML schema file name field and navigate to and select your external
XSD file.
Note:
You cannot have an XML schema location inside an XML file if you are
using multiple XML files as datasources.

3. Select one of the following radio buttons depending on how your XML
datasource tables should be generated:
• Normalized
• Denormalized
Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.

4. Click Generate Elements and Attributes.


The Elements and attributes pane is displayed showing a populated
Elements field in List view.
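The file-pattern rules from step 1 behave like standard filename globbing. The sketch below uses Python's fnmatch to show which files each pattern selects, including the semicolon-separated list form:

```python
# The "*" and "?" patterns in the XML file name field behave like standard
# filename globbing; fnmatch illustrates which files each pattern matches.
from fnmatch import fnmatch

files = ["dat1.csv", "dat10.csv", "data.csv", "sales.csv"]

# "*" matches any sequence of characters
star = [f for f in files if fnmatch(f, "dat*.csv")]
print(star)       # ['dat1.csv', 'dat10.csv', 'data.csv']

# "?" matches exactly one character
qmark = [f for f in files if fnmatch(f, "dat?.csv")]
print(qmark)      # ['dat1.csv', 'data.csv']

# Several patterns can be combined with ";" as in the XML file name box
patterns = "dat?.csv;sales.csv".split(";")
selected = sorted({f for f in files for p in patterns if fnmatch(f, p)})
print(selected)   # ['dat1.csv', 'data.csv', 'sales.csv']
```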

Related Topics
• Choosing and configuring a source file of type XML on page 180

Using a remote XML file as a datasource

You require the http details, or the machine address, user name, password,
and port number for the remote machine.

Data Federator can access XML files on a remote machine on a network. Data Federator can access files using an SMB share if your machine is in the same SMB domain or workgroup as the remote machine. Otherwise, Data Federator can use an FTP account or HTTP access.
1. In the Configuration pane, set the Source type parameter to XML file.
2. In the Configuration pane, set the Protocol parameter to FTP file system,
SMB share, or HTTP.
3. In the Configuration pane, set the Hostname, Port, Username, and
Password fields to the correct access details for the remote machine.

4. When adding a table to a remote source, in the File name or pattern field, configure the name of your remote XML file:
• For a remote source over FTP, enter the absolute path. Note that a
root path for an FTP connection can be different from the root path on
the remote machine. For example, a root path for FTP may be /pub
lic/data while the absolute path on the remote server is /home/us
er/public/data.

• For a remote source on an SMB network, you must indicate the path
from the shared directory. For example if the shared directory is
shareDir and the files are contained in a sub directory data, then you
must indicate the path as shareDir\data.

The path must start with the name of the shared directory without the
leading backslash "\".

Note:
If connecting to a public SMB directory on UNIX, you must log in with
the username guest.

• For a remote source on an HTTP network, you must indicate the path
of the URL from the end of the IP address or server name. For
example: /Folder1/Folder2/File.xml
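How the connection fields combine into an access path for each protocol can be sketched as follows. This is illustrative only; Data Federator assembles these paths internally, and the exact formats shown are assumptions based on the rules above:

```python
# A sketch of how the Protocol, Hostname, Port, and path fields might
# combine into an access path. Only the path rules (absolute FTP path,
# SMB path from the shared directory, URL path for HTTP) come from the
# documentation; the assembled formats are illustrative assumptions.
def access_path(protocol, host, port, path):
    if protocol == "FTP":
        # absolute path on the FTP server, which may differ from the
        # filesystem path on the remote machine
        return f"ftp://{host}:{port}{path}"
    if protocol == "SMB":
        # path starts at the shared directory, without a leading backslash
        return rf"\\{host}\{path}"
    if protocol == "HTTP":
        # path portion of the URL after the server name
        return f"http://{host}:{port}{path}"
    raise ValueError(f"unknown protocol: {protocol}")

print(access_path("SMB", "fileserver", None, r"shareDir\data\orders.xml"))
print(access_path("HTTP", "example.com", 80, "/Folder1/Folder2/File.xml"))
```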

Testing web service datasources

• You must have added a datasource whose source type is web service.
• You must have added tables to your web service datasource.

1. In the tree list, expand Datasources, then your-datasource-name, then Draft, then click your-datasource-table-name.
2. Click Query tool.
3. For any column in your datasource that is an "input column", assign the
column a constant value in the Filter field of the Query tool panel.
For example, if the Symbol column is an "input column", enter a formula
like the following in the Filter field:

Symbol='SAP'

4. Click View data.


Data Federator contacts the web service, using any values that you assigned
in the Filter field as the values of the input parameters. When the web service
responds, Data Federator displays the response as a table in the Query
tool.

If Data Federator displays an error in contacting the web service, you can
ask your administrator to reconfigure the connector to web services.

Related Topics
• Adding a web service datasource on page 183
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Configuring connectors to web services on page 479

Creating remote Query Server datasources


You can use a remote Data Federator Query Server as a datasource. You
use the tables that Query Server returns as your source.

Configuring a remote Query Server datasource

In order to use a remote installation of Data Federator Query Server as a datasource, you require the following details from the remote installation of Query Server that you want to use:
• hostname and port
• login username and password details
• the catalog and schema details to use

Note:
The remote installation of Query Server must be running for you to use it as a datasource.

To add a remote Data Federator Query Server as a datasource:


1. At the top of the window, click Add, and from the pull-down list, select
Add datasource.
The New Datasource window is displayed.

2. Enter a name for your datasource, and in the Datasource type pull-down
list, select Remote Query Server.
3. Click Save.
The Draft details window is displayed.
4. Enter the remote Query Server details, and then click the Test the
Connection button.
A message is displayed, confirming that the test was successful.
5. Click Update all tables to display the available tables.
The tables in the catalog and schema that you selected are listed. The
datasource is now available for use.

Related Topics
• About datasources on page 66
• Managing resources using Data Federator Administrator on page 483

Connection parameters in Remote Query Server datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• User Name
• Remote catalog and schema

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156


Managing datasources
You can perform most simple operations on datasources from the tree list
in Data Federator Designer.
The Datasources node in the tree list displays the "Datasources" window,
where you can browse, edit and delete datasources.

Defining the schema of a datasource

You can define information about the schema of a datasource in its table
schema window.
1. Open the Table schema window.
2. For each column, set the options as follows.

• Key (key icon): Check this check box to specify that the column is part of the table's key.
• Index (index icon): Check this check box to specify that the column is an index. Index means that the column has a large number of distinct values.
• Input: Check this check box to specify that a value must be provided for this column in order to retrieve the rest of the row. You can use the input column to prevent the rows of the target table from being retrieved if the values for this column are not provided in the query. This prevents operations like "SELECT *". More precisely, it forces the WHERE clause to provide a value for the column.
• Distinct: Enter the number of distinct values that appear in this column. Data Federator Query Server uses this value to optimize queries.
• Description icon: Click this icon to display a new window allowing you to enter a description of the column.

3. If your datasource is a text file, you can amend it as follows.

• Add more columns: Select a row, then click Add columns, then click the number of new columns you want to add.
• Change a column name: Edit the name of the column in the Column name column.
• Change a column type: Edit the type of the column in the Column type column.
• Delete a column: Click the Delete icon in the row that defines the column that you want to delete.
• Extract the entire schema again: In the Schema definition pane, click Extract schema.

Note:
The columns must be indicated in the order that the fields appear in the
datasource.
When the source of the datasource is a file containing fixed-length fields,
you must also indicate the number of characters of each field.

4. Click Save.

Related Topics
• Using data types and constants in Data Federator Designer on page 604

Authentication methods for database datasources

When selecting a data access authentication method, the following options are available:

• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Displaying the impact and lineage of datasource tables
1. Open your datasource table.
2. Click Impact and Lineage.
Note:
Only datasources that have Final status show impact and lineage.

The Impact and lineage pane for your datasource table expands and
appears.

Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52

Restricting access to columns using input columns

If you are adding a JDBC datasource or a web service datasource that has
a column with required values, you can check the Input box beside the
column to represent this requirement.

Web service datasources often have columns with required values, because web services require a request parameter, like a stock ticker symbol, in order to provide a specific response.
JDBC datasources rarely have columns with required values, but if they do
you can use the Input feature to represent this.
1. Edit the schema of your datasource table.
a. In the treelist, expand Datasources.
b. Click your-datasource-table-name.
In the window that appears, you can edit the schema in the Table
Schema pane.

2. Check Input next to the columns that you want to restrict.
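The effect of an input column can be sketched as follows. This is a simulation, not Data Federator code; the column name and response values are hypothetical:

```python
# A sketch of why an input column restricts access: the source cannot be
# read without a value for that column, so a query must supply one in its
# WHERE clause. The web-service call here is simulated.
def query_quotes(where=None):
    where = where or {}
    if "Symbol" not in where:          # Symbol is the input column
        raise ValueError("rejected: input column 'Symbol' needs a value")
    # a real datasource would now call the web service with where["Symbol"]
    return [{"Symbol": where["Symbol"], "Price": 42.0}]   # simulated response

print(query_quotes({"Symbol": "SAP"}))  # WHERE Symbol='SAP': allowed
try:
    query_quotes()                      # behaves like SELECT *: rejected
except ValueError as e:
    print(e)
```

This is why checking Input prevents operations like "SELECT *" on the table.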

Related Topics
• Mapping values to input columns on page 233

Changing the source type of a datasource

You can change the source type of the draft version of a datasource.

Changing the source type of your datasource has the following consequences:
• Your data extraction parameters are erased.
• Your table schemas are erased.

1. At the bottom of the tree list, click Datasources.


2. Expand your-datasource-name, then click Draft.


The Datasources > your-datasource-name > Draft window appears.

3. Click Change type, then click the type to which you want to change your
datasource.

Deleting a datasource

You can delete a datasource from the Datasources window.


1. Click Datasources.
The Datasources window appears.

2. In the Datasources window, select the check box of the datasource(s) you want to delete.
3. Click Delete.
Data Federator deletes the datasources you selected.

Testing and finalizing datasources


To test a datasource, you must verify if the information you entered allows
Data Federator to extract all data from source files and to correctly populate
the datasource tables.

You can encounter the following problems:


• Incorrect configuration, for example, incorrect values in the datasource
definition or extraction parameters, or an incorrect schema definition.
• Your configuration is not consistent with the data in the source file.

You can perform tests by running a query.

Note:
The tests must be done table by table. It is often practical to test the
datasource tables when they are created.

A datasource is completely tested when all its tables have been tested, and
are all correctly populated.

Related Topics
• Running a query on a datasource on page 211

Running a query on a datasource

You can run a query on a datasource to test that your datasource definition
and schema definition are returning the right values.

Running a query on your datasource is a way to test what values Data Federator retrieves when it uses your datasource to respond to requests.
1. In the tree list, expand your-datasource-name, then expand Draft, then
click your-datasource-table-name in the tree list.

The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. In the Query tool pane, select the columns of the datasource table you
wish to test and click View data.
Data Federator extracts data from the file, then displays the data in
columns in the query frame.

3. Verify that the values in your file appear under the correct columns.
If they do not, adjust the schema and test again.

Example: Tests to perform on your datasource when its source is a text file
• Verify that Data Federator extracts the right number of rows.

Run a query, as in Running a query on a datasource on page 211, and select the Show total number of rows only check box.


The number of rows will appear above the query results.


• Verify that Data Federator extracts dates in the correct format.

For example, if you enter the value "dd-MM-yyyy" in the Date format
box, and the dates in your text file are "01-02-2000", where "01" means
"January", then Data Federator will extract the wrong date.

Make sure you use the value "MM-dd-yyyy" if the month appears before
the day in your source file.
• Verify that each value lines up in its correct column.

For example, in the following figure, there is a problem with one value
that breaks across two columns.

To fix this, make sure that you choose the field separator correctly when
you configure the extraction parameters.
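To see how a mismatched date pattern silently swaps the month and the day, you can reproduce the pitfall with standard date parsing. This sketch uses Python's strptime as an analogue of the Date format patterns; the format codes differ from the "dd-MM-yyyy" style shown above, but the behavior is the same:

```python
from datetime import datetime

raw = "01-02-2000"  # in this source file, "01" is the month (January)

# A day-first pattern silently swaps month and day:
wrong = datetime.strptime(raw, "%d-%m-%Y")   # analogue of "dd-MM-yyyy"
right = datetime.strptime(raw, "%m-%d-%Y")   # analogue of "MM-dd-yyyy"

print(wrong.strftime("%Y-%m-%d"))  # 2000-02-01 -- February 1: wrong
print(right.strftime("%Y-%m-%d"))  # 2000-01-02 -- January 2: correct
```

Both patterns parse the value without any error, which is why this mistake is easy to miss: only inspecting the extracted dates reveals it.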

Related Topics
• Running a query to test your configuration on page 614
• Printing a data sheet on page 617
• File extraction parameters on page 162

Making your datasource final

Once the datasource is final, you can use it in a mapping.

When you make a datasource final:
• Its previous final version is replaced.
• Its draft is erased.

1. In the bottom of the tree list, click Datasources.


2. Click your-datasource-name in the tree list.

The Datasources > your-datasource-name window appears.

3. Click Make final.


Your datasource appears in Datasources > your-datasource-name
> Final.

Note:
If you already have a datasource in final, it is replaced.
Your draft is erased.

Editing a final datasource

To edit a datasource that you have already made final, you must copy it to
a draft.

When you copy a final datasource to a draft:


• Its previous draft is replaced.


• Its final remains, and you can still use it in a mapping.

1. In the bottom of the tree list, click Datasources.


2. Click your-datasource-name in the tree list.

The Datasources > your-datasource-name window appears.

3. Click Copy to draft.


Your datasource appears in Datasources > your-datasource-name
> Draft.

Note:
The datasource previously in draft is replaced.

5 Mapping datasources to targets

Mapping datasources to targets process overview
The following process shows the simplest mapping that you can add.
The process lists the steps in mapping a single datasource table to a single
target table.

• (1) Add a mapping rule for a target table.


• (2) Select a datasource table that maps the key of the target table.
• (3) Write mapping formulas to map the columns of the target table.

Related Topics
• Adding a mapping rule for a target table on page 217
• Selecting a datasource table for the mapping rule on page 218
• Writing mapping formulas on page 219

The user interface for mapping

The following diagram shows what you see in Data Federator Designer when
you work with mappings:


The main components of the user interface for working with mappings are:
• (A) the tree view, where you navigate among your target tables and
mappings
• (B) the main view, where you configure your mappings
• (C) an expanded node, showing a mapping rule for a target table with the
datasource, lookup and domain tables that participate in the mapping
rule
• (D) an expanded pane, showing how you edit mapping formulas

Adding a mapping rule for a target table

• You must have created a target table.

You add a mapping rule to map data from source tables to target tables.
1. In the tree list, click Target tables, then your-target-table-name, then
Mapping rules.
The Mapping rules window appears.


2. Click Add.
The New mapping rule window appears.

3. In the General pane, type the name and description of your mapping rule
and click Save.
The new mapping rule appears in the tree list.

Related Topics
• Managing target tables on page 46

Selecting a datasource table for the mapping rule

• You must have added a mapping rule.


• You must have created a datasource.
• You must have made your datasource final.

You can select multiple datasource tables to use as the source of a mapping
rule. This procedure shows how to select a single datasource table.

A datasource table that has a column that contributes to the key of a target table is called a "core table".
1. In the tree list, click Target tables, then your-target-table-name, then
Mapping rules, then your-mapping-rule-name.
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click the Add a new
table to the mapping rule icon.
The Add a table to the mapping pop-up window appears.
3. In the tree list of the Add a table to the mapping pop-up window, select
the required datasource.
The name of your selected datasource table appears in the Selected
table field.
4. By selecting the appropriate checkboxes as required, define its alias,
whether it should be a core table and whether it should have distinct rows.
5. Click OK.
The selected datasource table appears in the mapping rule.

Related Topics
• Adding a mapping rule for a target table on page 217
• About datasources on page 66
• Creating generic JDBC or ODBC datasources on page 138
• Making your datasource final on page 212
• Managing datasource, lookup and domain tables in a mapping rule on
page 282

Writing mapping formulas

• You must have added a mapping and selected a datasource table.

You use mapping formulas to define relationships between values in your datasource tables and values in your target tables.
1. Click Target tables, then your-target-table-name, then Mapping
rules.
The Mapping rules window appears, showing a list of your mapping
rules.

2. Click the Edit this mapping rule icon beside the mapping rule that you want to open.


The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name window appears, where you can modify
your mapping rule.

3. Edit the formula, using Ctrl + Spacebar to display autocomplete, if required.
Examples of mapping formulas for a target column are:
• S4.ID

• concat(S12.FIRSTNAME, S12.LASTNAME)

4. Click Save.
Data Federator verifies and saves the mapping formula for the column.


Tip:
Displaying the column type when writing mapping formulas
When writing a mapping formula, you will likely need to know the type of the
column that you are mapping. An easy way to display the type is to roll the
mouse over the name of the column.

A tooltip appears, showing the column type.

Related Topics
• Selecting a datasource table for the mapping rule on page 218
• Mapping values using formulas on page 222

Determining the status of a mapping


Data Federator displays the current status of each of your mapping rules.
You can use this status to learn if you have entered all of the information
that Data Federator needs to use the mapping rule.

Each mapping rule goes through the statuses:


• incomplete

(Data Federator does not show this status in the interface. All new
mapping rules are put in this status.)
• completed
• tested

The status is shown in the Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window, in the Status pane.

This table shows what to do for each status of the mapping rule life cycle.

The status "incomplete" means one of the following, and you can respond as shown:
• The mapping rule has no datasource tables. Add a datasource table to the mapping rule.
• Some of the columns in the target table are not mapped. Write a mapping formula for all the columns in the target table.
• Datasource tables are not linked together by relationships, or some datasource tables are not linked to core tables. Add relationships from the tables to the core tables.

The status "completed" means you have defined all of the formulas and relationships, but you have not checked the mapping rule against the constraints. Test the mapping rule.

The status "tested" means you have checked the data of the mapping rule against all of the constraints that are marked as required.

Related Topics
• Managing relationships between datasource tables on page 253
• Writing mapping formulas on page 219
• Adding a mapping rule for a target table on page 217
• Testing mappings on page 280


Mapping values using formulas


You use mapping formulas to define relationships between values in your
datasource tables and values in your target tables.
The Data Federator mapping formulas also let you transform values. For
example, you can use formulas to construct new values in your target tables,
combine multiple values, or calculate results.

Mapping formula syntax

Use the following rules when writing a mapping formula:


• Start the formula with an equals sign (=).
• Refer to your datasource tables by their id numbers (Sn).
• Refer to columns in datasource tables by their aliases. The alias is either
an id number or a name (Sn.An or Sn.[column_name]).
• Use the Data Federator functions to construct the column values or
constants.

Example: Basic functions you use in a mapping formula

Table 5-2: Examples of basic functions in a mapping formula

• To convert a date value from one format to another, use the formula:
=permute( S1.date_of_birth, 'AyyMMdd', '19yy-MM-dd' )

• To concatenate text values, use the formula:
=concat(concat( S1.lastname, ', ' ), S2.firstname)

• To extract a substring, use the formula:
=substring(S1.A1, 5, 10)

For a full list of functions that you can use, see Function reference on
page 624.

For details about the data types in Data Federator, see Using data types and
constants in Data Federator Designer on page 604.

Filling in mapping formulas automatically

When you have to map a lot of columns, Data Federator can add mapping
formulas automatically.

Data Federator automatically maps columns in the datasource tables whose names match the columns in the target table.
1. Select a datasource table for a mapping rule (see Selecting a datasource
table for the mapping rule on page 218).
2. In the Mapping formulas pane, click Auto map.
If:
• the target column T.A is empty, AND
• Data Federator finds a datasource column S.A where the name of S.A = the name of T.A, OR
• Data Federator finds a datasource column S.A where the name of S.A = the name of T.A, ignoring all periods, hyphens, and other non-alphanumeric characters,

then Data Federator fills in the formula = S.A for the target column A.
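The matching rule above can be sketched as follows. This is an illustrative reconstruction, not Data Federator's actual implementation; the function and the sample column names are hypothetical:

```python
import re

def normalize(name):
    # Strip periods, hyphens, and other non-alphanumeric characters
    return re.sub(r"[^0-9A-Za-z]", "", name)

def auto_map(target_columns, source_columns, existing):
    """Return formulas for unmapped target columns whose names match a
    source column, either exactly or after normalization."""
    formulas = dict(existing)
    by_norm = {normalize(s): s for s in source_columns}
    for t in target_columns:
        if t in formulas:               # already mapped: leave it alone
            continue
        if t in source_columns:         # exact name match
            formulas[t] = "= S." + t
        elif normalize(t) in by_norm:   # match ignoring punctuation
            formulas[t] = "= S." + by_norm[normalize(t)]
    return formulas

print(auto_map(["ID", "LAST_NAME"], ["ID", "LASTNAME"], {}))
# {'ID': '= S.ID', 'LAST_NAME': '= S.LASTNAME'}
```

Note that a column that already has a formula is skipped, matching the behavior described in the example below.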


Example: An example of mapping formulas that are filled in automatically


In the following figure, Data Federator has found the names of several
datasource columns and mapped them to the target columns. It has not
affected the columns that were already mapped.

Setting a constant in a column of a target table

• You must have referenced a domain table from your target table.

You may want to set a constant value for two reasons:


• Your target table has a column that does not appear in the datasource
tables, so you must create the value for each row.
• You know that all of the values in a datasource table map to the same
value in a target table.

In this example, the target table has a column called "country" that is not
available in the datasource table. However, all of the rows in the datasource
table are known to have the same value of country.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:

a. In the Mapping formulas pane, in the target column whose value you
want to map, enter a constant value.
3. Or, if the target column is type "enumerated":
a. In the Mapping formulas pane, beside the target column whose value
you want to map, click the Choose domain table value icon

A frame appears, showing the domain table that you used as the
domain of this column.
b. Click the value that you want to use as a constant.


You can click any column, but only the value from the column that you selected in the schema of the target table will appear in the mapping formula.

The value appears as a formula in the Mapping formulas pane.

4. In the frame showing the domain table, click Close.


5. Click Save to apply your changes.
Data Federator Query Server maps all values of this column that come
from the source of this mapping to a constant.

Related Topics
• Using a domain table as the domain of a column on page 62

Testing mapping formulas

• You must have added a mapping (see Mapping datasources to targets process overview on page 216).
• You must have added a formula (see Writing mapping formulas on page 219).

You can run a query on a mapping formula to test that it is correctly mapping
values to the target table.
1. In the tree list, expand your-target-table-name, expand Mapping
rules, then click your-mapping-rule-name.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name window appears.

2. In the Mapping formulas pane, click the Edit icon beside a formula.


A menu appears.

3. Click Edit.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name > column-name window appears.

4. In the Formula test tool pane, click View data to see the query results.
For details on running the query, see Running a query to test your
configuration on page 614.

Data Federator displays the data in columns in the Data sheet frame.

5. Verify that the values are correct.


Otherwise, try adjusting the mapping formula again.

Writing aggregate formulas

Data Federator offers a set of standard aggregate functions that you can
use in your formulas.

Aggregate functions perform an operation over a set of data, for example on all the values in a column.

There is a list of aggregate functions at Aggregate functions on page 624.


Nesting aggregate functions

When you need nested aggregate functions in one formula, you must
decompose them into separate terms.

For example, the formula:

SUM(S1.A1 + AVG(S1.A2))

must be written as:

SUM(S1.A1) + AVG(S1.A2)
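The decomposed form can be checked against standard SQL, where aggregates likewise cannot be nested inside one another. This sketch uses SQLite with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE S1 (A1 REAL, A2 REAL)")
con.executemany("INSERT INTO S1 VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

# SUM(A1 + AVG(A2)) is not valid as a nested aggregate; the decomposed
# form computes each aggregate as a separate term instead.
total = con.execute("SELECT SUM(A1) + AVG(A2) FROM S1").fetchone()[0]
print(total)  # SUM(A1) = 6 and AVG(A2) = 20, so 26.0
```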

How aggregate formulas result in groupby

When you use an aggregate function in your mapping rule, the resulting
query will perform a groupby on all columns that are not aggregates.

If you use the following formulas, where none of the columns are marked as
a key:

target.A = source.A
target.B = source.B
target.C = MAX(source.C)

then Data Federator interprets them as the query:

SELECT A, B, MAX(C) FROM T GROUP BY A, B

The aggregate formula is applied by calculating the maximum value of C for all the groups of rows where A and B are identical.

Example: The effect of using aggregate formulas


For the following data and formulas:


Data:

S1.Customer    S1.Amount
Almore         100
Beamer         100
Beamer         150
Costly         100
Costly         200
Costly         250

Formulas: T.Customer = S1.Customer and T.Amount = AVG(S1.Amount)

The result is as follows.

T.Customer    T.Amount
Almore        100
Beamer        125
Costly        183.333
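This worked example can be reproduced with the equivalent SQL GROUP BY query. The sketch below uses SQLite and assumes the same sample data as above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE S1 (Customer TEXT, Amount REAL)")
con.executemany(
    "INSERT INTO S1 VALUES (?, ?)",
    [("Almore", 100), ("Beamer", 100), ("Beamer", 150),
     ("Costly", 100), ("Costly", 200), ("Costly", 250)],
)

# The mapping T.Customer = S1.Customer, T.Amount = AVG(S1.Amount) behaves
# like this query: non-aggregated columns become the GROUP BY keys.
rows = con.execute(
    "SELECT Customer, AVG(Amount) FROM S1 GROUP BY Customer ORDER BY Customer"
).fetchall()
for customer, amount in rows:
    print(customer, round(amount, 3))
# Almore 100.0
# Beamer 125.0
# Costly 183.333
```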


Writing case statement formulas

• You must have added a mapping.

You can use a case statement formula when you want to express the result
as a series of possible cases instead of as a single formula.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. In the Formulas field of the Mapping formulas pane, enter the case statement directly and click Save.
3. Or:
a. In the Mapping formulas pane, click the Edit icon beside the formula.

A menu appears.
b. Click Edit as case statement.
c. Click OK.
The Case statement pane appears.
d. Click Add case, then Add new case.

To insert a row at a specific position, see the section on inserting rows
in tables.
e. Edit the If and then formulas.
For details on case statement formulas, see the section on the syntax
of case statement formulas.
f. Click Save.

Example: A basic case statement formula

Table 5-6: Example of a case statement formula

Conditions are tested in this order:
1. If S6.DAT_ENT LIKE '1%', then date = permute(S6.DAT_ENT, 'AyyMMdd', '19yy-MM-dd')
2. If S6.DAT_ENT LIKE '2%', then date = permute(S6.DAT_ENT, 'AyyMMdd', '20yy-MM-dd')
3. For all other cases (click Add case > Add default case to add this row), date = '0001-01-01'
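A case statement reads like an ordered if/then chain. The following is an illustrative Python analogue of the example above; the date reconstruction here only imitates what the permute formulas do and is not Data Federator's permute function:

```python
def map_date(dat_ent):
    """Case-statement analogue: conditions are tested in order.
    dat_ent is coded 'AyyMMdd', where A=1 means 19yy and A=2 means 20yy."""
    yy, mm, dd = dat_ent[1:3], dat_ent[3:5], dat_ent[5:7]
    if dat_ent.startswith("1"):        # case 1: LIKE '1%'
        return "19%s-%s-%s" % (yy, mm, dd)
    elif dat_ent.startswith("2"):      # case 2: LIKE '2%'
        return "20%s-%s-%s" % (yy, mm, dd)
    else:                              # default case
        return "0001-01-01"

print(map_date("1991231"))  # 1999-12-31
print(map_date("2050607"))  # 2005-06-07
print(map_date("XYZ"))      # 0001-01-01
```

Because conditions are tested in order, the first matching case wins and the default case only applies when no other condition matches.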

Related Topics
• Mapping datasources to targets process overview on page 216
• Inserting rows in tables on page 618


• The syntax of case statement formulas on page 620

Testing case statement formulas

You can test a conditional formula by running a query on it.

Data Federator adapts the default query to your formula.

For example, when you are configuring your query, and you click the Default
button in the Case statement test tool pane, Data Federator will limit the
selected columns to those that are referenced in your formula.
1. In the tree list, expand Target tables, then expand your-target-table-
name, then expand Mapping rules, then click your-mapping-rule-name.

The Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window appears.

2. Click the Edit icon beside the case statement formula that you want to test, then click Edit.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name > your-column-name window appears.

3. In the Case statement test tool pane, click the Default button.
Data Federator limits the selected columns to those that your formula
uses.

4. Click View data to see the query results.
For details on running the query, see Running a query to test your configuration on page 614.

Data Federator displays the data in columns in the Data sheet frame.

5. Verify that the values in the target columns appear correctly.
Otherwise, try adjusting the mapping formula.

Related Topics
• Running a query to test your configuration on page 614
• The syntax of case statement formulas on page 620

Mapping values to input columns
When your source tables have columns that require you to provide values,
Data Federator Designer guides you to make sure that the values will be
provided.

To take advantage of the ability of Data Federator Designer to guide you, do the following.
1. Add all of your tables.
2. Add relationships and pre-filters between tables.
3. If Data Federator tells you that a value for an input column is missing:
• Check if you can assign a constant value to the input column, using
a new pre-filter.
• If not possible, see if you can assign a dynamic value that comes from
another table, using a table relationship.
• If not possible, use an input value function to propagate the decision
to a higher target table.

Related Topics
• Assigning constant values to input columns using pre-filters on page 233
• Assigning dynamic values to input columns using table relationships on
page 234
• Propagating values to input columns using input value functions on
page 234
• Setting a constant in a column of a target table on page 224

Assigning constant values to input columns using pre-filters
1. Add a pre-filter on the input column.
2. Set the pre-filter to a constant value.
a. In the Formula panel, enter the formula your-column-name-or-alias
= constant.


Related Topics
• Adding a pre-filter on a column of a datasource table on page 236

Assigning dynamic values to input columns using table relationships
1. Add a source table to your mapping.
a. In the tree list, click your-target-table-name > Mapping rules >
your-mapping-rule-name
b. In the Table relationships and pre-filters pane, click Add table.
c. In the Add a table to the mapping pane, click the table you want to
add, then click Add.
2. Add a relationship to the input column from the table you added.
a. In the Table relationships and pre-filters pane, click Add
relationship.
b. In the Add relationship pane, in the Columns box, click the input
column in the first table, and another column in the second table.
c. Click Add to formula, then Add.

Propagating values to input columns using input value functions

When one of the source tables in your mapping rule has a column that
requires a value, and you want to force the query to provide this value, you
can use an input value function.

When you use an input value function in such a way, the user or application
that sends the query is responsible for providing a value in the where clause
of the query. When this value is not provided, Data Federator Query Server
throws an error.
1. Edit the source tables in your mapping rule.
a. In the treelist, click Target tables > your-target-table-name >
Mapping rules > your-mapping-rule-name.
2. Add an input value function on the column.

a. In the Table relationships and pre-filters pane, click Input value
function....
3. Use the input value function to connect the source column to a column
in your target table.
For example, to propagate the value from the column A1 in table T, type
= T.A1.

Adding filters to mapping rules


Filters let you control data in mapping rules in two ways.
• pre-filters

Pre-filters let you limit the source data that Data Federator queries in a
mapping rule. For example, you can use a filter to limit customer data to
those who are born after a certain date.

You can use a pre-filter on each datasource table that is used in a mapping
rule.
• post-filters

Post-filters let you limit the data after it has been treated by table
relationships.

You can use one post-filter per mapping rule.

The precedence between filters and formulas

Pre-filters are applied before the table relationships.

Post-filters are applied after the table relationships.
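The difference between the two filter stages can be illustrated with plain SQL: a pre-filter restricts a source table before the table relationship (join) is applied, while a post-filter restricts the combined result. The table and column names in this sketch are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, born TEXT)")
con.execute("CREATE TABLE orders (cust_id INTEGER, amount REAL)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "1960-05-01"), (2, "1980-03-15")])
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 50), (2, 70), (2, 30)])

# Pre-filter: restrict a source table BEFORE the relationship is applied
pre = """
SELECT c.id, SUM(o.amount)
FROM (SELECT * FROM customers WHERE born > '1970-01-01') c
JOIN orders o ON o.cust_id = c.id
GROUP BY c.id
"""

# Post-filter: apply the condition AFTER the tables are combined
post = """
SELECT c.id, SUM(o.amount) AS total
FROM customers c JOIN orders o ON o.cust_id = c.id
GROUP BY c.id
HAVING total > 60
"""

print(con.execute(pre).fetchall())   # [(2, 100.0)]
print(con.execute(post).fetchall())  # [(2, 100.0)]
```

In this small example both queries happen to return the same rows, but they filter at different stages: the pre-filter never even reads customer 1's orders, while the post-filter computes customer 1's total and then discards it.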


Adding a pre-filter on a column of a datasource table

• You must have added a mapping.

You can add a pre-filter on a column of a datasource table to limit the data
that Data Federator retrieves from the column.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, select the table including
the column whose values you want to filter and click the Edit the selected
table icon.

The Edit the mapping source - your-datasource-table-name pop-up window appears:

3. Click Set a pre-filter.


The Edit Pre-Filter pop-up window appears:


4. Expand the tree list in the Tables and Columns pane and select a column
on which to add a filter formula.
Press Ctrl + Spacebar to activate autocomplete and display all possible
column names, if required.
5. Enter the filter formula in the Formula pane using, if required, the Tables
and Columns, Operator and Functions panes.
An example filter formula is:

S12.DATE_OF_BIRTH > '1970-01-01'

The column name appears in the Formula pane.

Note:
You can enter multiple filter formulas on different columns.

6. Click OK.
You are returned to the Edit the mapping source - your-datasource-table-name pop-up window.
7. Click OK.
The Table relationships and pre-filters pane shows a Filter icon over each table where you added a pre-filter.

• The Valid Filter icon appears when a pre-filter is correct.
• The Invalid Filter icon appears when a pre-filter is incorrect.

Related Topics
• Mapping datasources to targets process overview on page 216
• The syntax of filter formulas on page 619

Editing a pre-filter

• You must have added a mapping.

You can edit a pre-filter using the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, place your cursor over
each table to display any filter details:


3. Select the table including the column whose filter you want to edit and
click the Edit the selected table icon.
The Edit the mapping source - your-table-name pop-up window appears with filter details in the Pre-filter pane:

4. Click the Edit Pre-filter button.


The Edit Pre-filter pop-up window appears.
5. Edit the filter formula in the Formula pane, using the Tables and
Columns, Operator and Functions panes, if required.
Note:
When you edit a filter formula, you cannot change the column on which
you filter. To change the column, delete the filter, and add a new filter on
a different column.

6. Click Update.

Related Topics
• Mapping datasources to targets process overview on page 216


Deleting a pre-filter

You can delete a pre-filter using the Table relationships and pre-filters
pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, place your cursor over
each table to display any filter details:

3. Select the table including the column whose filter you want to delete and
click the Edit the selected table icon.
The Edit the mapping source - your-table-name pop-up window appears with filter details in the Pre-filter pane:

4. Click the Delete Pre-filter button.


The Pre-filter pane shows the Set a Pre-filter button.
5. Click OK.
The pre-filter formula is removed and the pre-filter disappears from the
Table relationships and pre-filters pane.


Related Topics
• Mapping datasources to targets process overview on page 216

Using lookup tables


You can use a lookup table to map values from a datasource table to values
in a domain table.

You need a lookup table when the values in the column of a datasource table
must be translated to the values in the column of a target table.

What is a lookup table?

A lookup table associates the values of columns of one table to the values
of columns in another table.
• Lookup tables hold columns of data with mappings to other columns of
data.
• The data in a lookup table is stored on the Data Federator Query Server.
• You can combine a lookup table with a domain table to map the values
in a datasource column to the values in a domain table (see Using lookup
tables on page 242).
• Lookup tables support up to 5000 rows.

Example: A case where you might need a lookup table


The following is an example of a lookup table that you can use to associate
a list in a datasource table to a different list in your target table. In this
procedure, the datasource table uses text codes to represent sex, while
the target table uses integers.

A datasource table has a column "sex" with the values:


• F
• M

Your target table has a column "sex" with the enumerated values:
• 1

• 2

To complete your mapping, you must create a lookup table that maps
• F to 1
• M to 2
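A lookup table behaves like a simple key-to-value mapping applied during the mapping step. Here is a minimal sketch of the sex-code lookup described above, with hypothetical row data:

```python
# The lookup table associates each datasource value with a domain value.
lookup_sex = {"F": 1, "M": 2}

source_rows = [("Alice", "F"), ("Bob", "M")]

# During mapping, each source value is translated through the lookup.
target_rows = [(name, lookup_sex[sex]) for name, sex in source_rows]
print(target_rows)  # [('Alice', 1), ('Bob', 2)]
```

A source value with no entry in the lookup has no translation, which is why the lookup must cover every code that appears in the datasource column.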

The following table lists the types of lookup tables that Data Federator lets
you create.

• Datasource table to domain table: You create a lookup table with two columns. The first column references a column in the datasource table; the second column references a column in the domain table.

• Datasource table to datasource table: You create a lookup table with two columns. The first column references a column in a datasource table; the second column references a column in another datasource table.

The process of adding a lookup table between columns

You can use a lookup table to map values from a datasource table to values
in a domain table.


You need a lookup table when the values in the column of a datasource table
must be translated to the values in the column of a target table.

The following process lists the steps in adding a lookup table to associate
the values of a column in a datasource table to a column in a domain table.

• (1) Add a lookup table (see Adding a lookup table on page 244).
• (2) Reference a datasource table in your lookup (see Referencing a
datasource table in a lookup table on page 246).
• (3) Reference a domain table in your lookup (see Referencing a domain
table in a lookup table on page 247).
• (4) Map the values in the datasource table to the values in the domain
table (see Mapping values between a datasource table and a domain
table on page 248).

Adding a lookup table

This procedure shows how to add a lookup table, which establishes a correspondence between one set of values and another set of values.
1. In the bottom of the tree list, click Lookup tables.
2. Click Add lookup table, then click Create a lookup table.
The Lookup tables > New lookup table window appears.

3. In the Table name box, type a name for your new lookup table.
4. In the Table schema pane, click Add columns, then click Add
datasource column to add one column from a datasource table.
An empty datasource column appears.

5. In the Table schema pane, click Add columns, then click Add domain
column to add one column from a domain table.
You can add columns repeatedly.

To add a new column at a specific position, see Inserting rows in tables
on page 618.

An empty domain column appears. The domain column is marked by a domain icon.

6. Enter the following values in the table.

In the text box... enter the following...

Datasource column, Column name assoc_sex_char

Datasource column, Column type STRING

Domain column, Column name assoc_sex_num

Domain column, Column type INTEGER

For details about the data types in Data Federator, see Using data types
and constants in Data Federator Designer on page 604.

7. Click Save.
Your new lookup table appears in the tree list.

The Lookup tables > your-lookup-name window appears. In this window, you can reference a datasource table and a domain table (see Referencing a datasource table in a lookup table on page 246 and Referencing a domain table in a lookup table on page 247).


Referencing a datasource table in a lookup table

• You must have added a lookup table.


• You must have added a datasource that contains a datasource table.
• You must have made your datasource final.

When you have created a lookup table, you can reference a datasource
table. The datasource table is the first part of the lookup.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.

The Lookup tables > your-lookup-table-name window appears.

3. In the Datasource table contributors to lookup values pane, click Add.


The Lookup tables > your-lookup-table-name > Description >
Add a new relationship window appears.

4. In the list, expand the datasource, then click the datasource table whose
column you want to use in the lookup.
The name of the selected datasource appears in the Selected Datasource
box.

The name of the selected datasource table appears in the Selected table
box.

The list of columns you can map appears in the your-lookup-first-column-name list.

5. From the your-lookup-datasource-column-name list, select the name of the column that you want to use in the lookup.
6. Click Save.
The Lookup tables > your-lookup-table-name window appears,
showing the datasource table you selected as part of the lookup.

Related Topics
• Adding a lookup table on page 244
• About datasources on page 66
• Creating generic JDBC or ODBC datasources on page 138
• Making your datasource final on page 212

Referencing a domain table in a lookup table

• You must have added a lookup table (see Adding a lookup table on
page 244).
• You must have added a domain table (see Adding a domain table to
enumerate values in a target column on page 55).

When you have created a lookup table, you can reference a domain table.
The domain table is the second part of the lookup.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.

The Lookup tables > your-lookup-table-name window appears.

3. In the Domain contributors to lookup values pane, click Add.


The Lookup tables > your-lookup-table-name > Description >
Add a new relationship window appears.

4. In the list, click the domain table whose column you want to use in the
lookup.
The name of the selected domain table appears in the Lookup table box.

The list of columns you can map appears in the your-lookup-second-column-name list.

5. From the your-lookup-domain-column-name list, select the name of the column that you want to use in the lookup.


6. Click Save.
The Lookup tables > your-lookup-table-name > Description
window appears, showing the domain table you selected as part of the
lookup.

Mapping values between a datasource table and a domain table

• You must have referenced a datasource table and a domain table in your
lookup (see Referencing a datasource table in a lookup table on page
246 and Referencing a domain table in a lookup table on page 247).

This section shows how to associate the values in the column of a datasource table to the values in the column of a domain table.

In this example, the set of values in the datasource is {F, M}, and the set
of values in the domain table is {1, 2}.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.

The Lookup tables > your-lookup-table-name window appears.

3. In the Datasource table contributors to lookup values pane, select your datasource table, and click Update table contents.
The Table contents pane shows the values that Data Federator imported
from your datasource table. If you followed the procedure at Referencing
a datasource table in a lookup table on page 246, then you see the
following values:
• F
• M

4. Beside the row "F", click the Edit icon.
The Lookup tables > your-lookup-table-name > Modify row window
appears. This frame contains a blank row containing two text boxes. The
first text box is a column from your datasource table. The second text box
is a column from your domain table.

5. Click the Lookup values icon.
The your-domain-table-name frame appears.

6. In the domain table, click the row (1, female).


The value "1" appears in the lookup table.

7. Click Save.
The Table contents pane shows the row (F, 1) in your table. This means
that the value F in your datasource table is associated to the value 1 in
your domain table.

8. Repeat steps 4 to 7 to associate the value M to the value 2.


Your lookup table is complete, and is ready to use in a mapping.

Adding a lookup table by importing data from a file

If you have a lot of lookup data, you can enter it into your Data Federator
project quickly by importing the data from a text file.

For example, Data Federator can import data such as the following.

file: my-lookup-data.txt
"political_region";"code"
"Alabama";"1"
"Alaska";"2"
"Arizona";"3"
"Arkansas";"4"


"California";"5"
"Colorado";"6"
"Connecticut";"7"
"Delaware";"8"
"Florida";"9"
"Georgia";"10"
"Hawaii";"11"
"Idaho";"12"
"Illinois";"13"
"Indiana";"14"
"Iowa";"15"
"Kansas";"16"
"Kentucky ";"17"
"Louisiana ";"18"
"Maine";"19"
"Maryland";"20"
"Massachusetts";"21"
"Michigan";"22"
"Minnesota";"23"
"Mississippi";"24"
"Missouri";"25"
"Montana";"26"
"Nebraska";"27"
"Nevada";"28"
"New Hampshire";"29"
"New Jersey";"30"
"New Mexico";"31"
"New York";"32"
"North Carolina";"33"
"North Dakota";"34"
"Ohio";"35"
"Oklahoma ";"36"
"Oregon";"37"
"Pennsylvania";"38"
"Rhode Island";"39"
"South Carolina";"40"
"South Dakota";"41"
"Tennessee";"42"
"Texas";"43"
"Utah";"44"
"Vermont";"45"
"Virginia";"46"
"Washington";"47"
"West Virginia";"48"
"Wisconsin";"49"
"Wyoming";"50"

1. Make a new text file to define the lookup table data.


Ensure the new file is in comma-separated value (CSV) format, as in the
example above.

2. Add a lookup table and reference a datasource and domain table in it.

3. Add a second datasource that points to the file from which you want to
import.
4. When the Lookup tables > your-lookup-table-name window
appears, click Add, then click Add from a datasource table.
The Lookup tables > your-lookup-table-name > Add rows from
a datasource window appears.

5. Refer to the Select a datasource table field and select the datasource
table to be added to the lookup table.
The first columns of the selected lookup table, and their types, are
displayed in the lookup table drop-down list-boxes beneath the Lookup
columns mapping pane.

They are also displayed in the Select a subset of columns field on the
right. You can, if required, select one or all of the columns in this field and
click View Data to display the contents of the selected columns.

6. Refer to the Lookup columns mapping pane and map the required
datasource column from each lookup table column's drop-down list-box.
7. Click Save.
The Lookup tables > your-lookup-table-name window is displayed, and your file's imported data is added to your lookup table.
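As a side note, the semicolon-delimited, double-quoted format shown in the example file above can be read with standard CSV tooling. The following is a minimal, hypothetical Python sketch (not part of Data Federator) that parses a couple of lines in that format:

```python
import csv
import io

# Hypothetical sketch: reading lookup data in the format shown above,
# i.e. semicolon-delimited fields enclosed in double quotes.
sample = '"political_region";"code"\n"Alabama";"1"\n"Alaska";"2"\n'

reader = csv.DictReader(io.StringIO(sample), delimiter=";", quotechar='"')
rows = {row["political_region"]: int(row["code"]) for row in reader}
print(rows)  # {'Alabama': 1, 'Alaska': 2}
```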

Related Topics
• Using data types and constants in Data Federator Designer on page 604
• Adding a lookup table on page 244
• Referencing a datasource table in a lookup table on page 246
• Referencing a domain table in a lookup table on page 247
• Creating text file datasources on page 158

Dereferencing a domain table from a lookup table

In this version, the only way to dereference a domain table from a lookup
table is to delete the lookup table.

Related Topics
• Deleting a lookup table on page 252


Deleting a lookup table


1. In the tree list, click Lookup tables.
The Projects > Lookup tables window opens.

2. Select the check box beside the lookup table you want to delete.
3. Click Delete.

Exporting a lookup table as CSV

• You must have added a lookup table.

See Adding a lookup table on page 244.

1. In the tree list, click Lookup tables.


The Lookup tables window appears.
2. Select the table you want to export as CSV.
The Lookup tables > your-lookup-table-name window appears.
3. Click Export.
The File download window appears giving you the option of opening or
saving your Lookup_your-lookup-table-name.csv file.
4. Click Save and save the .csv file to a location of your choosing.

Using a target as a datasource


You can use a target as a datasource for another mapping. This lets you
build several levels of mappings.
1. Click Target tables, then your-target-table-name, then Mapping rules, then your-mapping-rule-name.
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click Add table.
The Add a table to the mapping pane appears.
3. Click on a specific table.
The name of the selected table appears in the Selected table field.

4. Click Add to add the table to the mapping rule.
The table appears in the tree list under the Target tables >your-target-
table-name >Mapping rules >your-mapping-rule-name >Tables
node.

Managing relationships between datasource tables
This section shows how to manage the relationships between datasource
tables, which you need to do while mapping columns from multiple datasource
tables.

You need to manage relationships when you have multiple datasource tables and the data in those tables is related.

Related Topics
• The process of mapping multiple datasource tables to one target table
on page 264

The precedence between formulas and relationships

When you add a relationship between datasources, it is applied after the mapping formulas.


Finding incomplete relationships

To find the datasource tables that have no, or incorrect, relationships to the
core tables, look for red bars in the Table relationships and pre-filters
pane.


The diagram also shows a solid gray line between the first two (non-core) tables. This means their relationship is satisfactory. A dotted red line means the relationship is erroneous.

Meaning of colors in the datasource relationships diagram

The meaning of a red bar in the datasource relationships diagram depends on whether you have designated the table as a core table or a non-core table. In either case, it indicates a situation that should be corrected.

You can right-click a table to see if it is a core table or not.

The following table shows the meaning of the colors depending on whether
a table is a core table or not.


Table 5-9: The meaning of colors in the datasource relationships diagram

Core table
• Red bar: At least one table has no relationships with the other core tables.
• Blue bar: All of the core tables are related through relationships to other core tables.

Non-core table
• Red bar: The table has no relationship to any core table, neither directly nor through other tables.
• Blue bar: The table has a relationship to at least one core table, either directly or through other tables.

Relationship lines
• Dotted red line: The relationship has an error.
• Solid gray line: The relationship is correct.

Add relationships as required until no red bar is displayed.

Related Topics
• Adding a relationship on page 256

Adding a relationship

• You must have added a mapping.

You can add a relationship between datasource tables in the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping rules, then your-mapping-rule-name.
The your-mapping-rule-name window appears.
2. Find the datasource tables that have no relationships.

3. In the Table relationships and pre-filters pane, click Add relationship.

The Add relationship pop-up window appears.


4. Select columns from each table and the operator from the drop-down
between Table 1 and Table 2.
An example relationship is:

S1.A2 = S2.A2

The resultant formula is displayed in the Formula pane.


Note:
Alternatively, write and edit the formula directly in the Formula pane.

5. Click OK.
The Table relationships and pre-filters pane shows your relationship.

Related Topics
• Mapping datasources to targets process overview on page 216
• Finding incomplete relationships on page 254
• The syntax of relationship formulas on page 621

Editing a relationship

• You must have added a mapping.

You can edit a relationship between datasource tables in the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping rules, then your-mapping-rule-name.
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click the relationship to
select it.

3. Click the Edit the selected datasource relationship icon.


The Edit relationship pop-up window appears.
4. Select columns from each table and the operator from the drop-down
between Table 1 and Table 2, as required.
The resultant formula is displayed in the Formula box.
Note:
Alternatively, write and edit the formula directly in the Formula box.

5. Click OK.


The Table relationships and pre-filters pane shows your relationship.

Related Topics
• Mapping datasources to targets process overview on page 216
• Finding incomplete relationships on page 254
• The syntax of relationship formulas on page 621

Deleting a relationship

You can delete a relationship between datasource tables in the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping rules, then your-mapping-rule-name.
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click the relationship to
select it.

The selected relationship is shown in bold.

3. Click Remove.
4. Click OK to confirm you want to remove the relationship formula.
The Table relationships and pre-filters pane no longer shows the
relationship.


Choosing a core table

The following procedure shows how to designate a table as a core table.


1. Edit the mapping rule.
2. Right-click the table that you want to designate as a core table.
The table appears selected in the Table relationships and pre-filters
pane, and a context menu appears.
3. On the context menu, select the Core table option.

4. Click Save.

Configuring meanings of table relationships using core tables

When you map multiple datasources to a target, you must distinguish between
core tables and non-core tables.
• Use a core table to choose the set of rows that will populate your target
table (the result set).

When you set two or more tables as core, the result set is defined by the
join of all the core tables.


• Use non-core tables to extend the attributes of each row in the result set.

Example: The effect of setting a table as core or non-core


Suppose that you have two tables: Customers and Orders.

If you set the Customers table to core and the Orders table to non-core, then a join between the two tables returns all customers, including those who did not purchase anything (a left outer join).

If you set the Customers table to core and the Orders table to core, then a join between the two tables returns only those customers who purchased something (an inner join).

The following icon, displayed beneath datasource table aliases such as S1,
S2 or S10 in the Table relationships and pre-filters pane, indicates they
are core tables:

The table below describes how you use core tables to configure meanings
of table relationships:

• If you have one source table and want to map a column to the key of the target table, ensure the source table is a core table.
• If you have one source table and the target table has no key columns, ensure the source table is a core table.
• If you have two source tables and want to display all values in all rows, including null values, ensure only one source table is a core table.
• If you have two source tables and do not want to display rows that contain null values, ensure both source tables are core tables.
• If you have three source tables and a non-core table between two core tables, change the non-core table to a core table, or change one of the outer core tables to a non-core table.

The effects on the target table of designating a source table as a core table are represented in the following diagram:

Using a domain table to constrain possible values

• You must have created a target table.


• You must have created a domain table.
• You must have added a datasource table.

You can use a domain table to constrain the values of a target table by defining a relationship between the domain table and your datasource.
1. Add the datasource table as a source of your mapping.
2. Add the domain table as a source of your mapping.
3. Add a relationship between the key columns of the datasource table
whose values you want to constrain and the domain table.


To add this kind of relationship, add a relationship as in Adding a relationship on page 256, and enter the following formula.

datasource-id.key-column-id = domain-id.key-column-id

For example:

S1.A1 = S2.A1

Only the rows of the datasource whose ID matches one of the IDs in the
domain table appear in the target.
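The effect of this relationship can be sketched as a simple membership filter. This is a hypothetical Python illustration (the IDs and column names are invented, not Data Federator code):

```python
# Hypothetical sketch: only datasource rows whose key appears in the
# domain table survive the relationship and reach the target.
domain_ids = {1, 2}  # key values enumerated by the domain table
datasource_rows = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 9, "v": "c"}]

target_rows = [row for row in datasource_rows if row["id"] in domain_ids]
print([row["id"] for row in target_rows])  # [1, 2] -- id 9 is filtered out
```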

Related Topics
• Managing target tables on page 46
• Managing domain tables on page 54

The process of mapping multiple datasource tables to one target table

The following process lists the steps in mapping two datasource tables to a single target table.
• (1) Add a datasource table (see Adding multiple datasource tables to a
mapping on page 265).
• (2) Write mapping formulas (see Writing mapping formulas when mapping
multiple datasource tables on page 265)
• (3) Add relationships between the datasource tables (see Adding a
relationship when mapping multiple datasource tables on page 267)

Tip:
In what order to proceed when mapping multiple datasource tables
Start by adding the datasource tables that map the key of the target table,
then proceed with the datasource tables that are needed to map the non-key
columns of the target table.


Adding multiple datasource tables to a mapping

Adding multiple datasource tables is the same as adding a single datasource table multiple times.

To add a datasource table to a mapping, see Selecting a datasource table for the mapping rule on page 218. Repeat this to add all of your datasource tables.

Writing mapping formulas when mapping multiple datasource tables

Writing mapping formulas for multiple datasource tables is the same as for
a single datasource table. To write a mapping formula, see Writing mapping
formulas on page 219.

While you are adding mapping formulas:


• When you have a choice between multiple datasource tables to map the
key in the target table, see Interpreting the results of a mapping of multiple
datasource tables on page 270.

Example: Mapping columns from two datasource tables to one key column

In this example, you need to create a target table with columns that come from two datasource tables.

Example datasource table that contributes a key and non-key column:
source1.order_id (key) source1.date

200101 April 2001

200102 January 2000

200103 March 2003

200104 March 2002

200200 January 2000

444444 January 2000

Table 5-13: Example datasource table that contributes one non-key column

source2.order_id (key) source2.quantity

200101 3

200102 40

200103 12

200104 560

200200 10

555555 10

Table 5-14: Example target table schema with columns from two datasource tables

target.order_id (key) target.date target.quantity

[key-values] [values] [values]

In this example, map the columns as follows.

Map this... to...

source1.order_id target.order_id

source1.date target.date

source2.quantity target.quantity

For the difference between mapping source1.order_id or source2.order_id to target.order_id, see Interpreting the results of a mapping of multiple datasource tables on page 270.

Adding a relationship when mapping multiple datasource tables

• You must have added a mapping.

You can add a relationship between datasource tables in the Table relationships and pre-filters pane.
1. In the tree list, expand your-target-table-name, expand Mapping rules, then click your-mapping-rule-name.


The Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window appears.

2. In the Table relationships and pre-filters pane, click Add relationship.


The Create datasource relationship frame appears.

3. Edit the relationship formula in the Formula box.


An example relationship is:

S1.A2 = S2.A2

4. Click Save.
The Table relationships and pre-filters shows the relationships.

5. Repeat steps 2-4 until all your datasource tables form a chain.
None of your datasource tables must be left without a relationship to
another table.

The syntax of relationship formulas allows you to use AND to map multiple
relationships between more than two tables simultaneously.
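Conceptually, each AND-ed equality acts as one more join predicate that a combination of rows must satisfy. The sketch below is a hypothetical Python illustration of that idea (the table aliases and column names are invented):

```python
# Hypothetical sketch: a compound relationship such as
#   S1.A2 = S2.A2 AND S2.B1 = S3.B1
# keeps only the row combinations that satisfy every equality.
s1 = [{"A2": 10}, {"A2": 20}]
s2 = [{"A2": 10, "B1": "x"}, {"A2": 20, "B1": "y"}]
s3 = [{"B1": "x"}]

matches = [
    (r1, r2, r3)
    for r1 in s1
    for r2 in s2
    for r3 in s3
    if r1["A2"] == r2["A2"] and r2["B1"] == r3["B1"]
]
print(len(matches))  # 1: only the A2=10 / B1="x" combination chains through all three
```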

Example: A relationship between two datasource tables


In example Writing mapping formulas when mapping multiple datasource
tables on page 265, add the following relationship.


Add the relationship from... to...

source1.order_id source2.order_id

If you have followed the two examples Writing mapping formulas when
mapping multiple datasource tables on page 265 and the current one, the
result is as follows.
• The row (order_id1, date1, quantity1) exists in the target table T if there
is a row with a key of value "order_id1" in the datasource source1, and
there is a row with a key of value "order_id1" in the datasource source2.
• The row (order_id1, date1, null) exists in the target table T if there is a row with a key of value "order_id1" in the datasource source1, and there is no row with a key of value "order_id1" in the datasource source2.


Table 5-17: Example target table composed of columns from two datasource tables

target.order_id (key) target.date target.quantity

200101 April 2001 3

200102 January 2000 40

200103 March 2003 12

200104 March 2002 560

200200 January 2000 10

444444 January 2000 <null>
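The target table above can be reproduced with a small sketch of left-outer-join semantics, since source1 is the core table in this example. This is a hypothetical Python illustration of the documented behaviour, not Data Federator code:

```python
# Hypothetical sketch of the join described above: source1 is the core
# table, so every source1 row reaches the target, and quantity comes
# from source2 when a matching order_id exists (None otherwise).
source1 = {
    200101: "April 2001", 200102: "January 2000", 200103: "March 2003",
    200104: "March 2002", 200200: "January 2000", 444444: "January 2000",
}
source2 = {200101: 3, 200102: 40, 200103: 12, 200104: 560, 200200: 10, 555555: 10}

target = [(oid, date, source2.get(oid)) for oid, date in source1.items()]
print(target[0])   # (200101, 'April 2001', 3)
print(target[-1])  # (444444, 'January 2000', None): no matching source2 row
```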

Related Topics
• Mapping datasources to targets process overview on page 216
• The syntax of relationship formulas on page 621

Interpreting the results of a mapping of multiple datasource tables

When you map two or more datasource tables to one target, the result
depends on several factors.

This section demonstrates the effect of the following factors on the result of
a mapping from two datasource tables to a target table:

• datasource tables that contribute to the key columns (which are called
"core tables")
• relationships between datasource tables

This section shows how mappings between the same datasource tables and
target can produce different results.

Example: Two datasource tables where the second one's values are optional
In this example, the factors are as follows.

• Core tables: source1. For details on core tables, see "ch531.dita#fm_2006071221_1199008-eim-titan".
• Relationships between datasource tables: source1.order_id = source2.order_id. For details on adding relationships, see Managing relationships between datasource tables on page 253.

The result is as follows.


• The row (order_id1, date1, quantity1) exists in the target table T
if there is a row with a key of value order_id1 in the datasource
source1, and there is a row with a key of value order_id1 in the
datasource source2.
• The row (order_id1, date1, null) exists in the target table T if there is
a row with a key of value order_id1 in the datasource source1, and
there is no row with a key of value order_id1 in the datasource
source2.


Note:
This is the same example as the one in the procedure Adding a relationship
when mapping multiple datasource tables on page 267.

Example: Two datasource tables where the first one's values are optional
In this example, the factors are as follows.

• Core tables: source2. For details on core tables, see "ch531.dita#fm_2006071221_1199008-eim-titan".
• Relationships between datasource tables: source1.order_id = source2.order_id. For details on adding relationships, see Managing relationships between datasource tables on page 253.

The result is as follows.


• The row (order_id1, null, quantity1) exists in the target table T
if there is no row with a key of value order_id1 in the datasource
source1, and there is a row with a key of value order_id1 in the
datasource source2.
• The row (order_id1, date1, quantity1) exists in the target table T
if there is a row with a key of value order_id1 in the datasource
source1, and a row with a key of value order_id1 in the datasource
source2.

Note:
Because the target order_id is mapped from source2, in the result, date
may be NULL.


Combining mappings and case statements

This section demonstrates the effect of a case statement on the result of a mapping from two datasource tables to a target table.

Example: Two datasource tables where both values are required
In this example, the factors are as follows. For the schemas, see the
examples Writing mapping formulas when mapping multiple datasource
tables on page 265 and Adding a relationship when mapping multiple
datasource tables on page 267.

• Core tables: source1
• Relationships between datasource tables: source1.order_id = source2.order_id

The result is as follows.


• The row (order_id1, date1, quantity1) exists in the target table T if there
is a row with a key of value "order_id1" in the datasource source1, and
there is a row with a key of value "order_id1" in the datasource source2.
• The row (order_id1, date1, null) cannot exist in the target table T because
of the formula that denies rows where source2.id is NULL.

Related Topics
• Managing relationships between datasource tables on page 253


Managing a set of mapping rules


This section shows how to add, delete and modify your mapping rules quickly.

Viewing all the mapping rules

Data Federator lists all the mapping rules in the Mapping rules window.
• Click Target tables, then your-target-table-name, then Mapping
rules.
The Mapping rules window appears, showing a list of your mapping
rules.

Opening a mapping rule

There are two ways to open a mapping rule.


1. Either:
a. Find the mapping rule in the tree list and click its name.
2. Or:
a. Open the Mapping rules window as in Viewing all the mapping rules
on page 276.
b. Click the Edit this mapping rule icon beside the mapping rule that you want to open.


The Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window appears, where you can modify your mapping rule.

Copying a mapping rule

Copying a mapping rule is a quick way to add a new mapping rule. When
you copy a mapping rule, the new mapping rule contains the same datasource
tables, lookup tables, and correct mapping formulas as the original mapping
rule.

Note:
Data Federator copies only correct mapping formulas. Therefore, even if
your original mapping rule is complete, the copied mapping rule may be
incomplete.
1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Click the Copy this mapping rule icon beside the mapping rule that
you want to copy.


A message box appears. The message asks you to confirm the copy.

When you confirm, the Target tables > your-target-table-name >
Mapping rules > CopyOfyour-mapping-rule-name window appears,
where you can modify your new mapping rule.

3. In the General pane, in the Description box, type a new description of
your mapping rule. Data Federator uses all the characters before the first
space of this description as the label of your mapping rule.
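The labeling rule above (the characters before the first space become the label) can be sketched as follows; the function name and the sample description are hypothetical:

```python
def label_from_description(description):
    # Everything before the first space of the description becomes
    # the mapping rule's label.
    return description.split(" ", 1)[0]

# label_from_description("rule_orders maps orders to the target")
# returns "rule_orders"
```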

Printing a mapping rule

You can print the definition of a mapping rule to a PDF file.


1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Click the Print this mapping rule icon beside the mapping rule that
you want to print.


Deleting a mapping rule

Delete a mapping rule when you do not need any of the mapping formulas
between datasources and target tables inside the mapping rule.
1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Select the check box beside the mapping rule that you want to delete.
You can also select multiple mapping rules at the same time.

3. Click Delete.
A message box appears. The message asks you to confirm the deletion.

When you confirm, the selected mapping rule is deleted.

Displaying the impact and lineage of mappings


1. Open your mapping rule.
2. Click Impact and Lineage.
The Impact and lineage pane for your mapping rule expands and appears.

Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52

Activating and deactivating mapping rules
In order to test your targets, you can deactivate mapping rules that you are
not ready to use.

By default, all mapping rules that you add are activated.


Deactivating a mapping rule

You can deactivate a mapping rule from the Target tables >
your-target-table-name > Mapping rules window.
1. In the tree list, expand your-target-table-name, then click Mapping
rules.
2. In the List of mapping rules pane, check the box beside the mapping
rule that you want to deactivate.
3. Click Deactivate.
The mapping rule is deactivated.

The mapping rule appears in gray in the tree list.

In the tree list, you can click your-target-table-name to see that
the target's status has been updated; for example, it may change to
"mapped" if the mapping rule that you deactivated was the only one
preventing the target from being mapped.

Activating a mapping rule

All mapping rules are activated by default.


To activate a mapping rule that you previously deactivated, follow the same
procedure as in Deactivating a mapping rule on page 280, and use the
Activate button instead.

Testing mappings
To test a mapping rule, verify that the information you entered allows
Data Federator to correctly populate the target tables.

You can encounter the following problems:

• You have written a mapping formula that maps the wrong value.
• Your mapping formulas do not provide sufficient information for your
target columns.

• Your mapping formulas result in null values in columns that must not be
NULL.

Data Federator lets you test a mapping rule by using the Query tool pane.

Testing a mapping rule

• You must have added a mapping (see Mapping datasources to targets
process overview on page 216).
• You must have added formulas to map all the values in the target table
(see Writing mapping formulas on page 219).

You can run a query on a mapping rule to test that it is correctly mapping
values to the target table.
1. In the tree list, expand your-target-table-name, expand Mapping
rules, then click your-mapping-rule-name.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name window appears.

2. In the Mapping rule test tool pane, click View data to see the query
results.
For details on running the query, see Running a query to test your
configuration on page 614.

For details on printing the results of the query, see Printing a data sheet
on page 617.

Data Federator displays the data in columns in the Data sheet frame.

3. Verify that the values appear correctly.
If the values are not correct, adjust the mapping rule and test again.

Example tests to run on a mapping rule

Tip:
Example tests to perform on your mapping rule
• Fetch the first 100 rows.


Run a query, as in Testing a mapping rule on page 281, and select the
Show total number of rows only check box.

The number of rows will appear above the query results.


• Fetch a single row.

For example, if you have a target table with a primary key of client_id in
the range 6000000-6009999, type:

client_id=6000114

in the Filter box.

Click View data, and verify the value of each column with the data in your
datasource table.
• Verify that the primary key columns are never NULL.

Type the formula:

client_id <> NULL

If any of the returned columns are NULL, verify that your mapping rule
does not insert NULL values.
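The tests above can be emulated outside Data Federator; a minimal Python sketch over hypothetical query results (the rows and column values are invented for illustration):

```python
# Hypothetical rows returned by a mapping rule query.
rows = [
    {"client_id": 6000114, "name": "A"},
    {"client_id": 6000115, "name": "B"},
    {"client_id": None,    "name": "C"},
]

# Fetch a single row: the filter client_id=6000114.
single = [r for r in rows if r["client_id"] == 6000114]

# Verify that the primary key column is never NULL: any match here
# means the mapping rule inserts NULL key values.
null_keys = [r for r in rows if r["client_id"] is None]
# single holds the one row "A"; null_keys flags the row "C".
```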

Managing datasource, lookup and domain tables in a mapping rule
This section shows how to add, delete, or view the contents of the tables
that participate in a mapping rule. These can be datasource tables, lookup
tables, or domain tables.

Adding a table to a mapping rule

The following procedure shows how to add any type of table to a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name

The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click Add table.
A new window, Add a table to the mapping, appears.

3. Select a table.
The name of the selected table appears in the Selected table field.
4. By using the appropriate fields and check boxes, define the table's
alias, whether it is a core table, and whether it should return distinct
rows.
5. Click OK to add the table to the mapping rule.


Replacing a table in a mapping rule

The following procedure shows how to replace any type of table in a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. Select the table to be replaced and click Edit.
A new window, Edit the mapping source, appears.


3. Or:
a. Right-click the table to be replaced.
A context-sensitive menu appears.
b. Click Edit.
A new window, Edit the mapping source, appears.

4. Expand the Tables tree list and select the replacement table.
The name of the selected table appears in the Replace with table field.
5. Click OK to add the replacement table to the mapping rule.

Deleting a table from a mapping rule

The following procedure shows how to delete any type of table from a
mapping rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Right-click the table to be deleted.
A context-sensitive menu appears.

3. Click Remove.
4. Click OK.
The selected table is deleted from the mapping rule, but a reference to it
remains.
Note:
The target table itself is not deleted.

Viewing the columns of a table in a mapping rule

The following procedure shows how to view the columns of any type of table
when it is part of a mapping rule.

• Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name, then Tables, then
your-table-name.
The columns in the expanded table appear.

Setting the alias of a table in a mapping rule

The following procedure shows how to set the alias of a table in a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. Select the table whose alias you want to set and click Edit.
A new window, Edit the mapping source, appears.


3. Or:
a. Right-click the table whose alias you want to set.
A context-sensitive menu appears.
b. Click Edit.
A new window, Edit the mapping source, appears.


4. In the Properties panel, enter an alias in the Update table alias field
and click OK.
The alias of the table in the mapping rule is set.

Restricting rows to distinct values

The following procedure shows how to restrict values in a table used in a
mapping rule to distinct values only.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name


The your-mapping-rule-name window appears.


2. Either:
a. Select the table that you want to restrict and click Edit.
A new window, Edit the mapping source, appears.
3. Or:
a. Right-click the table that you want to restrict.
A context-sensitive menu appears.


b. Click Edit.
A new window, Edit the mapping source, appears.

4. Select the distinct rows check box.
5. Click OK to restrict the values in the selected table to distinct values only.


Details on functions used in formulas


See Function reference on page 624 for a list of functions in Data Federator.


6 Managing constraints

Testing mapping rules against constraints


This section describes how to test your mappings in Data Federator. This is
a way to test the integrity of the data in order to improve the definition of your
mapping rules, before you put a project into production.

Defining constraints on a target table


This section describes how to define the constraints that you want to check
on your mapping rules.

You can define constraints once you have defined a target table.

Types of constraints

Data Federator Designer checks several pre-defined constraints on mapping
rules, and lets you define custom constraints.

This table describes the types of constraints that you can run in Data
Federator Designer.

Test                  Description

key constraint        checks if the values of the key in the target
                      table are unique

NOT-NULL constraint   checks if the values of a column in the target
                      table are not null

custom constraint     checks a formula that you define on the columns
                      of a target table

domain constraint     checks if the values in an enumerated column are
                      in the associated domain table
                      Every enumerated column has a domain table. This
                      domain table defines the valid values for the
                      column. The domain constraint checks that all the
                      column values are in the domain.

Related Topics
• Defining key constraints for a target table on page 295
• Adding a domain table to enumerate values in a target column on page 55
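The four constraint types amount to row-level checks on the target data; a Python sketch over hypothetical target rows (the column names, values, and sample domain are invented for illustration):

```python
# Hypothetical target rows and a domain table for the status column.
rows = [
    {"client_id": 1, "status": "open"},
    {"client_id": 1, "status": "closed"},
    {"client_id": 2, "status": None},
]
status_domain = {"open", "closed"}

# Key constraint: the key values must be unique.
keys = [r["client_id"] for r in rows]
key_ok = len(keys) == len(set(keys))                        # False: 1 repeats

# NOT-NULL constraint: the column must never be null.
not_null_ok = all(r["status"] is not None for r in rows)    # False: one NULL

# Domain constraint: values must come from the domain table.
domain_ok = all(r["status"] in status_domain
                for r in rows if r["status"] is not None)   # True

# Custom constraint: any BOOLEAN formula over the columns,
# here the (hypothetical) formula client_id > 0.
custom_ok = all(r["client_id"] > 0 for r in rows)           # True
```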

Defining key constraints for a target table

You define a key constraint when you create the schema of the target table.
1. In the tree list, click your-target-table-name.
The Target tables > your-target-table-name window appears.

2. Select the Key check box for each column that you want to define as a
key.
3. Click Save.
When you click Constraints > your-key-constraint-name in the
tree list, in the Constraint checks pane, all mapping rules that have the
status "completed" appear in the list.

Defining not-null constraints for a target table

You define a NOT-NULL constraint when you create the schema of the target
table.


1. In the tree list, click your-target-table-name.
The Target tables > your-target-table-name window appears.

2. Select the Not null check box for each column on which you want to
define a NOT-NULL constraint.
3. Click Save.
When you click Constraints > your-column-name_not_null in the tree
list, in the Constraint checks pane, all mapping rules that have the status
completed appear in the list.

Defining custom constraints on a target table

• You must have added a mapping rule (see Adding a mapping rule for a
target table on page 217).

You can define a custom constraint on a target table by writing a constraint
formula.
1. In the tree list, expand your-target-table-name, then Constraints.
The your-target-table-name > Constraints window appears.

2. Click Add.
The your-target-table-name > Constraints > your-constraint-name
> New constraint window appears.

3. In the General pane, type a name and description for your constraint.
4. In the Constraint definition pane, select a type for your constraint and
enter a constraint formula.
5. Click Save.
The custom constraint is added to the set of available constraints.

In the Constraint checks pane, all mapping rules that have the status
"completed" appear in the list.

Syntax of constraint formulas

Use the following rules when writing a constraint formula:

• Write a formula that returns a BOOLEAN value.
• Refer to columns by their aliases. The alias is either an id number or a
name (An or [column_name]).
• Use the Data Federator functions to construct the column values or
constants.

Example: Basic functions you use in a constraint formula

Table 6-2: Examples of basic functions in a constraint formula

To do this...                              use the formula...

check if a date is later than 01-01-1970   date > '01-01-1970'

For a full list of functions that you can use, see Function reference on
page 624.

For details about the data types in Data Federator, see Using data types and
constants in Data Federator Designer on page 604.

Configuring a constraint check

Use these parameters when Computing constraint violations on page 300 or
Computing constraint violations for a group of mapping rules on page 301.

Parameter                Description

Available columns        lists the available columns that you can test

Selected columns         lists the columns that you have chosen to test
                         • Click on the columns in the Available
                           columns list to add columns to your
                           constraint. They appear in the Selected
                           columns list.
                         • Click on the columns in the Selected columns
                           list to remove columns from your constraint.
                         • Click All to add all the columns to your
                           constraint.
                         • Click None to remove all the columns from
                           your constraint.
                         • Click Default to add the default columns to
                           your constraint. The default columns for a
                           constraint depend on the constraint type.

Sort by                  the column by which the constraint results are
                         ordered

Sort order               the order in which Data Federator displays the
                         constraint rows

Retrieved rows           specifies how many rows you want your
                         constraint to return
                         Use this box to limit the size of the returned
                         data when your constraint may return a large
                         number of rows.

Compute total number of  specifies if you want your constraint to
constraint violations    return a count of all violations
                         This option will retrieve more detailed
                         information, but it will double the processing
                         time of the query.

Checking constraints on a mapping rule


This section describes how to check the constraints that you defined on a
single mapping rule.

The purpose of analyzing constraint violations

Analyzing constraint violations indicates what you should do to improve the
integrity of the data that a mapping rule returns.

To analyze the constraint violations a mapping rule returns, you must check
constraints on the mapping rule.

Tip:
Resolving constraint violations in a constraint check

There are several ways to resolve a failed constraint.
• Change the mapping rule to handle the cases revealed by the constraint
violations.
• Filter the constraint violations to help organize them into smaller subsets.
• Correct the offending values in the source data, if the source data is
erroneous.
• Relax the constraint, in the case that it rejects useful data.


Related Topics
• Checking constraints on a mapping rule on page 299
• Viewing constraint violations on page 303
• Mapping datasources to targets process overview on page 216
• Filtering constraint violations on page 302

Computing constraint violations

• You must have defined the constraints for the mapping rule.

See:
• Defining key constraints for a target table on page 295,
• Defining not-null constraints for a target table on page 295 or
• Defining custom constraints on a target table on page 296.

Data Federator lets you compute all the rows in a target table that do not
satisfy a constraint.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name.
The Target tables > your-target-table-name > Constraints >
your-constraint-name window appears.

2. In the Constraint checks pane, click the Edit contents icon beside
the mapping rule you want to test.
The Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name window appears.

3. In the Query check pane, click Check constraint.


In the Current check results pane, the Number of constraint violations
box displays the number of violations that caused the constraint to fail,
and the date of the check is displayed in the Launch date box.

The value ">n" appears in the Number of constraint violations box if
the number in the Retrieved rows box is "n", and the number of constraint
violations is greater than "n".

For details on the settings of the Query check pane, see Configuring a
constraint check on page 297.
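The ">n" display rule can be sketched as follows; this is an illustration of the display behavior described above, not the actual implementation:

```python
def violations_display(violations, retrieved_rows):
    # At most retrieved_rows violations are fetched; when the total
    # exceeds that limit, the count is shown as ">n" instead of an
    # exact number.
    shown = violations[:retrieved_rows]
    if len(violations) > retrieved_rows:
        return ">" + str(retrieved_rows), shown
    return str(len(violations)), shown

# violations_display(list(range(250)), 100) gives the label ">100"
# and the first 100 rows.
```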

4. Enter your comments about this check in the Comments box.


5. In the Current check results pane, click View results.
The rows in your mapping rule that fail the constraint appear in the Data
sheet frame.

Computing constraint violations for a group of mapping rules

• You must have defined the constraints for the mapping rule.

See:
• Defining key constraints for a target table on page 295,
• Defining not-null constraints for a target table on page 295 or
• Defining custom constraints on a target table on page 296.

Data Federator lets you compute all the rows in a target table that do not
satisfy a constraint.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name.
The Target tables > your-target-table-name > Constraints >
your-constraint-name window appears.

2. In the Constraint checks pane, select the check boxes beside the
mapping rules you want to test.
3. Click Check constraints.
In the Constraint checks pane, the date of the check is displayed in the
Launch date column, and the number of violations that caused the
constraint to fail is displayed in the Violations column: zero, an integer,
or the value ">n".


If you have:
• constraint violations, see Viewing constraint violations on page 303.
• no constraint violations, see Marking a mapping rule as validated on page 303.

Filtering constraint violations

You must have computed the constraint violations for at least one constraint
(see: Computing constraint violations on page 300).

You can filter constraint violations into subsets in order to help you analyze
their sources of error.
1. In the Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name window, in
the Filtered constraint violations pane, click Add.
The Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name > Filtered
constraint violations window appears.

2. In the Constraint definition pane, in the Filter on constraint violations
box, type a formula to filter the constraint violations.
For example, if your constraint returns all dates that are greater than the
year 2000, you can enter the formula date > toDate('2006-01-01').

3. In the Query check pane, click Check constraint.


In the Current check results pane, the Number of constraint violations
box shows the number of only those violations that match the filter.

4. In the Current check results pane, click View results.


The Data sheet frame appears, showing the filtered violations.

5. Click Save.
Your filter is saved.

You can make as many filters as you need to organize and help you treat
the constraint violations.
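A filter like the one above partitions the violations into a smaller subset; in Python terms (the violation rows are hypothetical):

```python
from datetime import date

# Hypothetical constraint violations returned by a date constraint.
violations = [
    {"order_id": 1, "date": date(2003, 5, 1)},
    {"order_id": 2, "date": date(2007, 2, 1)},
]

# The filter date > toDate('2006-01-01') keeps only the violations
# after that date, giving a smaller subset to analyze.
filtered = [v for v in violations if v["date"] > date(2006, 1, 1)]
# filtered contains only the order_id 2 violation
```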


Marking a mapping rule as validated

After you have checked a constraint, and you are sure that a mapping rule
has satisfied the constraint formula, you can mark the mapping rule as
validated.

Data Federator remembers which constraints you marked as validated for
each mapping rule. Once you mark a mapping rule as validated for all the
required constraints, Data Federator changes the status of the rule to tested.

When a mapping rule is in the status tested, it can be deployed on the Data
Federator Query Server.
1. In the tree list, expand your-target-table-name, then Constraints,
then click your-constraint-name.
The your-target-table-name > Constraints > your-constraint-name
window appears.

2. Select the check box beside the mapping rule that you want to mark as
validated.
3. In the Constraint checks pane, click Validate.
The mapping rule is marked as validated for this constraint.

Related Topics
• Deploying a version of a project on page 324

Viewing constraint violations

• You must have checked the constraint at least once (see Computing
constraint violations on page 300).

Each time you check a constraint, its results are stored so that you can read
them without checking the constraint again.
You can view the stored results of the most recent constraint check.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name.


The Target tables > your-target-table-name > Constraints >
your-constraint-name window appears.

2. In the Constraint checks pane, click the Edit contents icon beside
the mapping rule whose results you want to view.
The Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name window appears.

3. In the Current check results pane, click View results.


The result of the most recently checked constraint appears in the Data
sheet frame.

For details on printing the results of the constraint, see Printing a data
sheet on page 617.

The Constraint checks pane

The Constraint checks pane shows the characteristics of each mapping
rule for one constraint.

Column        Description

Mapping rule  the mapping rules on which this constraint must be
              checked

Launch date   the date of the last check of this constraint

Violations    the number of violations that caused the constraint to
              fail the last check

Analysis      the way that Data Federator calculated the last check
              • Enforced
                specifies that Data Federator checks this constraint
                by examining the structure of your mapping rule; it is
                not necessary to examine the data.
              • Violated
                specifies that Data Federator calculates that this
                constraint is failed by the structure of the mapping
                rule; it is not necessary to examine the data.
              These types of checks are faster than a standard check.
              • <empty>
                specifies that Data Federator must examine the data to
                check this constraint

Validated     whether the mapping rule has been validated

Reports
This section describes the reports you can run on your constraints. It consists
of:
• "Generating a constraint report".


7 Managing projects

Managing a project and its versions


This section describes how to manage the versions of a project while it is
under development.

While a project is in development, you can store and modify multiple versions
of it. When you are ready to deploy the project, you choose one of the
versions and deploy it to a Data Federator Query Server. Data Federator
also keeps a list of all the deployments you have made.

For details on deploying projects, see Deploying projects on page 321.

The user interface for projects

The following diagram shows what you see on the Data Federator user
interface when you work with projects.

The main components of the user interface for working with projects are:
• (A) the Projects tab, where you switch from a list of the projects to the
current project

• (B) the tab for the current version of the project (prj_accounts,
prj_chile, prj_customer_satisfaction)
• (C) the tree view, where you open the configuration of your projects
• (D) the main view, where you see a list of your projects, with their names
and descriptions
• (E) the checkbox, where you select the project you want to open

The life cycle of a project

A project can have multiple simultaneous versions:
• the current version
• several stored versions (archive files)
• several deployed versions

Note:
The current version is automatically saved into an archive file called Latest
Version.

The following table summarizes the life cycle of a project:

The version...  means...                       what you can do...

current         Your working version of the    Store the project.
                project that can be modified   Include an archive file or
                in Data Federator Designer     a deployed version of
                                               another project.
                                               Deploy the project.

archive file    A past version of a project    Load the archive file to
                that has been stored on the    make it the current
                server                         version.
                                               Download the archive file
                                               on your file system.

deployed        A version that has been        Load the archive file to
                deployed on Data Federator     make it the current
                Query Server and can be used   version.
                by applications                Download the archive file
                                               on your file system.

Related Topics
• Opening a project on page 41
• Including a project in your current project on page 315
• Deploying a version of a project on page 324
• Downloading a version of a project on page 313

Editing the configuration of a project

You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.

To define a project and to manage its versions, you must select the project.
1. Either:
a. Select the tab of the project.
The project's Configuration window appears.

Note:
The project is selected and you can manage its versions and its
description.

2. Or:
a. At the top of the window, click Projects.
The list of projects appears.
b. In the tree list, select the project.

The project's Configuration window appears.

Note:
The project is selected and you can manage its versions and its
description.

Related Topics
• Unlocking projects on page 43
• Adding a project on page 41

Storing the current version of a project

You can store the current version of a project in an archive file.

You can always load a version of a project that you have stored in an archive.
This lets you manage important project versions.
1. Open the project.
2. From the Store drop-down arrow, select Entire project.
You can also store the current version of selected target tables.
The New archive file window appears.
3. Enter a name and description in the Name and Description fields
respectively, and click Store.


Data Federator saves the current version of your project in an archive.


The archive appears in the tree list of the Projects tab, and in the Archive
files pane.
Note:
Once you have stored a version of a project, you can download its archive
file to your file system.

Related Topics
• Opening a project on page 41
• Loading a version of a project stored on the server on page 314
• Loading a version of a project stored on your file system on page 315
• Storing the current version of selected target tables on page 312
• Downloading a version of a project on page 313

Storing the current version of selected target tables

As well as storing the entire current version of a project, you can also store
the current version of selected target tables in an archive file.

You can always load a version of a project that you have stored in an archive.
This lets you manage important project versions.
1. Open the project.
2. From the Store drop-down arrow, select Select targets.
The New archive file window appears.
3. Enter a name and description in the Name and Description fields
respectively.
4. In the Targets selection pane, in the Select the targets to archive box,
select the check boxes beside the targets you want to archive.
When you select a target, its dependencies appear in the Dependencies
of box.

A dependency is a target that the selected target uses as a source.

You can click the name of a target in the Select the targets to archive
box to show its dependencies without selecting it.

Use the check box at the top of the Select the targets to archive box to
select all the displayed targets.

5. Click Store.
Data Federator saves the selected targets of your project in an archive.
The archive appears in the tree list of the Projects tab, and in the Archive
files pane.

Note:
Once you have stored a version of a project, you can download its archive
file to your file system.

Related Topics
• Opening a project on page 41
• Loading a version of a project stored on the server on page 314
• Loading a version of a project stored on your file system on page 315
• Storing the current version of a project on page 311
• Downloading a version of a project on page 313

Downloading a version of a project

You must have stored a version of your project.

You can download an archive file or a deployed version of a project to your file system. You can use this downloaded file to import into a different installation of Data Federator Designer.
1. Edit the configuration of the project.
2. Expand the Archive files pane, and click the Download archive file icon
of the project to be downloaded.


A browser dialog opens, asking you if you want to save the archive file.
3. Save the archive file on your file system.

Related Topics
• Editing the configuration of a project on page 310
• Storing the current version of a project on page 311

Loading a version of a project stored on the server

Data Federator lets you load a deployed version of a project if you want to
modify it. When you edit a deployed version, you must deploy it again before
your changes take effect.
1. Open the project.
2. Click Load.
3. Select the Archive on server radio button, then in the Select archive
or deployed version pane, expand your-project-name.
4. Click the name of the archive or deployed version you want to load.
The name of the selected version appears in the Name box, along with its description and creation date.
5. Click Save.
Data Federator loads the contents of the deployed version and it becomes
the current version.

Related Topics
• Opening a project on page 41

Loading a version of a project stored on your file system

Data Federator lets you load a deployed version of a project if you want to
modify it. When you edit a deployed version, you must deploy it again before
your changes take effect.
1. Open the project.
2. Click Load.
3. Select the Archive on file system radio button, and click Browse.
The Choose file window appears.
4. Navigate to and click the name of the archive or deployed version you
want to load, and click Open.
You are returned to the Load from an archive window with the path of
your selected archive displayed in the Archive file field.
5. Click Save.
Data Federator loads the contents of the deployed version and it becomes
the current version.

Related Topics
• Opening a project on page 41

Including a project in your current project

You can merge the contents of an archive file or a deployed version with the
contents of the current version of a project.
1. Open the project.
2. Click Include.
3. Select an archive to include.
You can select an archive either from the Data Federator server or from your file system, in the same way as when you load a version of a project.


4. Choose how Data Federator treats included components that have the
same names as existing ones.

Table 7-2: How Data Federator treats included components that have the same names
as existing ones

• When you want to replace existing datasource tables, domain tables, lookup tables and target tables, including their mappings, when names match: clear both check boxes, Keep existing datasources, domain and lookup tables when names match and Keep existing mappings when target table names match. This will replace datasource tables, domain tables, lookup tables and mappings when components of the same name are included.

• When you want to keep existing datasources, domain and lookup tables when names match: select the Keep existing datasources, domain and lookup tables when names match check box. This will keep the existing datasource, domain and lookup tables when the ones in the included file have the same names. However, if the existing datasource has no draft version, then the final version of the matching datasource becomes the draft version. An error message appears if an included mapping rule points to an included datasource table whose schema changes because it has the same name as an existing datasource table. You will need to correct this.

• When you want to keep existing mappings when target table names match: select the Keep existing mappings when target table names match check box. This will keep the existing mappings when the target tables in the included file have the same names as the existing target tables. An error message appears if an existing mapping rule points to an existing datasource table whose schema changes because it has the same name as an included datasource table. You will need to correct this. Furthermore, an existing mapping rule may not be valid if it points to an existing datasource table whose name matches an included datasource table, and is therefore replaced. This will also need to be corrected.

The Data Federator procedure for treating included components is illustrated in the image below, in the example where neither the Keep existing datasources, domain and lookup tables when names match check box nor the Keep existing mappings when target table names match check box is selected.

Note that the large, shape-containing squares represent target tables, and the small rectangles, mappings in target tables:


5. Choose how Data Federator treats the status of included components.

Table 7-3: How Data Federator treats the status of included components

• When you want to try to flag mapping rules in all included target tables as validated: check the box Validate all target tables. Data Federator flags included target tables as validated if they have the status mapped. This option is useful when you work on multiple projects separately, and you have already validated the project that you want to include.

• When you want to leave mapping rules in included target tables with the flag "validated" or "invalidated" that they had before you included them: clear the box Validate all target tables. This option is useful when you work on multiple projects separately, and you want to validate all target tables on a master project.

6. Click Save.
Data Federator includes the archive file into the project, the Projects >
your-project-name window appears, and a message is displayed
advising whether the project was included successfully.

Related Topics
• Opening a project on page 41
• Loading a version of a project stored on your file system on page 315
• Loading a version of a project stored on the server on page 314
• Mapping datasources to targets process overview on page 216

Opening multiple projects

You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.

You can open multiple projects at the same time by selecting them from the
"Projects" window.
1. At the top of the window, click the Projects tab.
2. In the tree list, click Projects.


The Projects window appears.

3. Click the checkbox beside the projects that you want to open.
4. Click Open.
A tab appears for each project you opened. Each tab contains the latest
version of the project.

Click the tab of the project that you want to work on.

Once your project is opened, you can add targets, datasources and
mappings to it.

Related Topics
• Opening a project on page 41
• Managing target tables on page 46
• About datasources on page 66
• Mapping datasources to targets process overview on page 216

Exporting all projects

You can export all of the projects at once from the Projects tab.
1. Above the tree view, click the Projects tab.
The Projects window appears.
2. Click Export all projects.
Data Federator creates an archive file whose name begins with
projects_export_. The message "Successfully exported all projects"
appears.
3. Click Download the archive file to download the projects.
Your browser asks you if you want to save the archive file.
4. Save the archive file using your browser.

Related Topics
• Storing the current version of a project on page 311

Importing a set of projects

• You must have exported a set of projects.


• Make sure that the project is not locked by another user account. If it is,
you can unlock it.

You can import a set of projects you previously exported.


1. Select the Projects tab.
The Projects window appears.
2. Click Import projects.
The Import projects from the file system window appears.
3. Click Browse and select an archive file you previously exported.
Note:
To overwrite all the projects listed in the tree view, select the Overwrite
existing projects checkbox.

4. Click Save.
Data Federator imports the set of projects contained in the selected
archive file.

Related Topics
• Unlocking projects on page 43
• Exporting all projects on page 320

Deploying projects
You deploy a project when you want other applications to query its tables.
When you deploy a project, it becomes a catalog on Data Federator Query
Server. The datasource, target, lookup and domain tables in the project
become tables in the catalog.

When you use the default settings to deploy a project, it is deployed on a local installation of Data Federator Query Server. You can change these settings to deploy on a remote installation of Query Server, or on a cluster of servers.

Related Topics
• Deploying a version of a project on page 324

Servers on which projects are deployed

If you want to deploy on a remote installation of Query Server, you must first
configure Data Federator so that it can connect to a remote server.

If you want to deploy the project onto a server cluster, click the Add servers
button and add the details of each of the cluster servers.

This configures Data Federator Designer to deploy the project on the servers
for which you provided the details. Each server uses the same datasource
connection parameters, and accesses the same datasource.

Note:
When deploying to a cluster, if deployment to one of the servers fails for any
reason, the deployment is not rolled back. That is, the project is deployed to
all servers in the cluster except the server or servers where the deployment fails.

Related Topics
• Configuring Data Federator Designer to connect to a remote Query Server
on page 584
• Sharing Query Server between multiple instances of Designer on page 585

User rights on deployed catalogs and tables

When you deploy a new project, your user account is automatically granted "select" and "undeploy" rights on that project.

If you want other user accounts to query a project, your administrator must
create those user accounts and give them authorizations to read the tables
in your project.

Authorizations can apply to catalogs, schemas, tables and columns.

When you redeploy a project, the current authorizations are examined and
updated automatically. For example, any privilege that references a table or
column that no longer exists is automatically removed. This privilege is called
a deprecated privilege.

Related Topics
• About user accounts, roles, and privileges on page 504

Storage of deployed projects

When you deploy a version of a project to Query Server, Data Federator also stores the version as an archive file on the Data Federator server. When storing, Data Federator allows you to enter a title and description for the archive.

You can also store versions of projects to the file system, and open versions
that you have stored either on the Data Federator server or on the file system.

Related Topics
• Managing a project and its versions on page 308

Version control of deployed projects

When you deploy a version of a project:


• The version becomes a deployed version on Data Federator Query Server.

Data Federator allows you to enter a title and description for the archive.
This lets you maintain a history of all the projects that you have deployed.
You can also import an old version of a project.
• Data Federator creates a catalog on Data Federator Query Server using
the name you chose for the catalog. This overwrites any previous catalog
of the same name.

For example, if you deployed projectA in the catalog OP, and you deploy
projectB in the catalog OP, projectA is overwritten.

Related Topics
• Managing a project and its versions on page 308


Deploying a version of a project

Before you deploy a project, ensure that you have:


• added a datasource and made it final
• added at least one target table
• created a mapping between the datasource and at least one target table

Note:
You can deploy an empty project, but it is not useful until you have done the
above.
This procedure shows how to deploy your project to Data Federator Query
Server. When you deploy a project on Query Server, your datasources and
target tables can be queried by an application that connects to Query Server.
1. Open the project.
2. Click Deploy.
3. In the Default deployment address pane, enter the deployment options.
Note:
Choose a unique catalog name for the project. If you choose a catalog
name that already has a project deployed in it, you will overwrite the
existing catalog.

4. Click Deploy current version.
The Projects > your-project-name > New deployed version window appears.

5. Type a name and description for the deployed version in the General
pane.
6. Click Save.
Your project is accessible for querying through Data Federator Query
Server, at the catalog name that you specified. Target tables are available
in the schema named targetschema, while datasource tables are available
in the schema named source.

Note:
Any previous deployment in the same catalog is overwritten.

For example, if you deployed projectA in the catalog MyCatalog, and
you deploy projectB in the catalog MyCatalog, projectA is
overwritten.

The target tables are deployed as indicated by the option Deploy only
integrated target tables.
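For example, assuming you deployed into a catalog named MyCatalog, and your project contains a target table named Customers built from a datasource table named crm_customers (both table names are hypothetical), a client connected to Query Server could address the tables with fully qualified names like these:

```
-- Query a target table (deployed in the schema "targetschema")
SELECT * FROM MyCatalog.targetschema.Customers;

-- Query the underlying datasource table (deployed in the schema "source")
SELECT * FROM MyCatalog.source.crm_customers;
```

The catalog name you chose at deployment time is the first part of each qualified name, which is why each project needs a distinct catalog name.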

Related Topics
• Opening a project on page 41
• Managing target tables on page 46
• Making your datasource final on page 212
• Mapping datasources to targets process overview on page 216
• Servers on which projects are deployed on page 322
• User rights on deployed catalogs and tables on page 322
• Storage of deployed projects on page 323
• Version control of deployed projects on page 323
• Reference of project deployment options on page 327
• Using deployment contexts on page 325

Using deployment contexts

Deployment contexts allow you to easily deploy a project on multiple servers.


Using deployment contexts, you can define multiple sets of datasource
connection parameters to use with a project's deployment. Each deployment
context represents a different server deployment.

For example, you can define a deployment context for a group of datasources
running on a development server, and another deployment context for the
same group of datasources running on a production server.

When you define the connection parameters for a datasource, in place of the configuration values, you use the corresponding parameter name. At deployment time, you select a deployment context, and Data Federator substitutes the appropriate values for the connection.

Within each deployment context that you define for a project, you use an
identical set of deployment parameter names to define the connection
parameters common to each datasource. You then use these names in your
datasource definition rather than the actual values, and at deployment time,


Data Federator substitutes the values corresponding to the deployment type that you select.

The deployment parameters that you can use with a datasource definition depend on the connection's resource type.

Related Topics
• Defining deployment parameters for a project on page 155
• Defining a connection with deployment context parameters on page 156


Reference of project deployment options

Server address and Server port
Specifies the address and port of the Data Federator Query Server on which you want to deploy your project. These are default options, and you can change them each time you deploy a project.

Username and Password
Specifies the username and corresponding password of the account used to access the Data Federator Query Server. The Data Federator Query Server administrator sets up this account. These are default options, and you can change them each time you deploy a project.

Catalog name
Specifies the name that you want your project to have on Data Federator Query Server. When you deploy your project, it becomes a catalog on the Data Federator Query Server. This option lets you name the catalog.
Note:
If you deploy two or more projects on the same installation of Data Federator Query Server, you must use a different catalog name for each project in order to distinguish them. For example, if you deployed projectA in the catalog OP, and you deploy projectB in the catalog OP, projectA is overwritten.
This is a default option, and you can change it each time you deploy a project.

Add servers button
Displays the option to add a server or servers, when you are deploying to a cluster.

Move to button
Lets you move a server within a cluster of servers. For cluster deployments, select a server, and use this button to re-locate the server position in the list.



8 Managing changes

Overview
This section describes the impact of your changes in Data Federator.
When working on a project in Data Federator, you can make changes to the
following components.
• targets
• mappings
• datasources
• lookup tables
• domain tables
• constraint checks

The type of change you make to each of these components can impact some
of the other components. This section lists what you should verify for each
type of change.

Verifying if changes are valid

Data Federator Designer displays an icon to indicate values that you must fix before your changes are complete. When you see this icon, click it to view a detailed message.


Modifying the schema of a final datasource

When you modify a datasource, verify the components in the following table.

To modify a final datasource, see the procedure at Editing a final datasource on page 213.

mappings
• What to verify: Did you change a column that has a relationship to another datasource table?
How to verify it: Edit the relationships in the Table relationships and pre-filters, and verify if any of the relationships reference the column that you changed. See Finding incomplete relationships on page 254.
• What to verify: Do the mapping rules directly reference datasource columns that have changed?
How to verify it: Look for the icon that indicates an invalid formula. See Mapping datasources to targets process overview on page 216.
• What to verify: Do the mapping rules directly reference datasource columns that no longer exist?
How to verify it: Look for the icon that indicates an invalid formula. See Mapping datasources to targets process overview on page 216.
• What to verify: Do the mapping formulas expect types that have changed?
How to verify it: Look for the icon that indicates an invalid formula. See Mapping datasources to targets process overview on page 216.

constraint checks
• What to verify: Does the constraint check find constraint violations because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraints still pass.

lookup tables
• What to verify: Do the lookup tables map a column that you modified in the datasource?
How to verify it: Check the schema of your lookup table to make sure that the columns still match the datasource. If they do not match, create a lookup table with a schema that matches your new datasource. See Mapping values between a datasource table and a domain table on page 248.

Deleting an installed datasource


When you delete a datasource, verify the components in the following table.

To delete a final datasource, see the procedure at Deleting a datasource on page 210.


mappings
• What to verify: Did you delete a datasource table that participates in a relationship to another datasource table?
How to verify it: Edit the relationships in the Table relationships and pre-filters, and verify if any of the relationships reference the datasource table that you deleted. See Finding incomplete relationships on page 254.
• What to verify: Do the mapping rules directly reference datasource columns that no longer exist?
How to verify it: Look for the icon that indicates an invalid formula. See Mapping datasources to targets process overview on page 216.

constraint checks
• What to verify: Does the constraint check find constraint violations because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraints still pass.

lookup tables
• What to verify: Do the lookup tables map a column from a datasource you deleted?
How to verify it: Edit the relationships in the Table relationships and pre-filters, and verify if any of the relationships reference the datasource that you deleted. You may need to add a different datasource, and a different lookup table, to complete the relationship. See Finding incomplete relationships on page 254.

Modifying a target
To modify a target, see Adding a target table manually on page 46.

When you modify a target, verify the following components.


mappings
• What to verify: Did you change the definitions of the primary keys?
How to verify it: You must consider what result a query will return when you run it on the new definitions of the keys. Check if the new definition of your keys yields a different result. See Mapping datasources to targets process overview on page 216.
• What to verify: Do the mapping formulas result in types that have changed?
How to verify it: Look for the icon that indicates an invalid formula. See Mapping datasources to targets process overview on page 216.

constraint checks
• What to verify: Does the constraint check find constraint violations because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraints still pass.

Adding a mapping
To add a mapping, see Mapping datasources to targets process overview
on page 216.
When you add a mapping, verify the following components.

constraint checks
• What to verify: Does the key constraint check find constraint violations among the set of mapping rules because of the new mapping?
How to verify it: Check that the key constraints still pass.

Modifying a mapping
To modify a mapping, see Mapping datasources to targets process overview
on page 216.

When you modify a mapping, verify the following components.

constraint checks
• What to verify: Does the constraint check find constraint violations among the set of mapping rules because of changes to the mapping?
How to verify it: Check that the constraints still pass.


Adding a constraint check


To add a constraint check, see Defining constraints on a target table on
page 294.
When you add a constraint check, verify the following components.

mappings
• What to verify: Does the new constraint check make a mapping fail?
How to verify it: Check that the mappings still pass the constraint checks.

Modifying a constraint check


To modify a constraint check, see Defining constraints on a target table on
page 294.

When you modify a constraint check, verify the following components.

mappings
• What to verify: Does the constraint check make a mapping fail?
How to verify it: Check that the mappings still pass the constraint checks.

Modifying a domain table


To modify a domain table, see Adding a domain table to enumerate values
in a target column on page 55.

When you modify a domain table, verify the following components.

mappings
• What to verify: Does a mapping rule directly reference a value that you changed in the domain table?
How to verify it: Run a query on the mapping, and check that there are no blank values in the rows that reference the column that you changed in the domain table. See Testing a mapping rule on page 281.

constraint checks
• What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint checks still pass.

lookup tables
• What to verify: Does the lookup table reference columns that have changed in the domain table?
How to verify it: Check the schema of your lookup table to make sure that the columns still match the domain table. If they do not match, create a lookup table with a schema that matches your domain table. See Referencing a domain table in a lookup table on page 247.

Deleting a domain table


To delete a domain table, see Deleting a domain table on page 61.


When you delete a domain table, verify the following components.

mappings
• What to verify: Does a mapping rule directly reference a value in the deleted domain table?
How to verify it: Run a query on the mapping, and check that there are no blank values in the rows that reference the column in the deleted domain table. See Testing a mapping rule on page 281.

constraint checks
• What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint checks still pass.

lookup tables
• What to verify: Does the lookup table reference columns in the deleted domain table?
How to verify it: Check the schema of your lookup table to make sure that the columns do not reference columns in the deleted domain table. If they do, add a domain table to replace the one you deleted. See Referencing a domain table in a lookup table on page 247.

Modifying a lookup table
To modify a lookup table, see Mapping values between a datasource table
and a domain table on page 248.
When you modify a lookup table, verify the following components.

mappings
• What to verify: Does a mapping rule use a value that you deleted or changed in the lookup table?
How to verify it: Run a query on the mapping, and check that there are no blank values in the rows that reference the column in the lookup table. See Testing a mapping rule on page 281.

constraint checks
• What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint checks still pass.

Deleting a lookup table


To delete a lookup table, see Deleting a table from a mapping rule on
page 286.

When you delete a lookup table, verify the following components.


mappings
• What to verify: Does a mapping rule use a value in the deleted lookup table?
How to verify it: Run a query on the mapping, and check that there are no blank values in the rows that reference the lookup table. See Testing a mapping rule on page 281.

constraint checks
• What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint checks still pass.



9 Introduction to Data Federator Query Server

Data Federator Query Server overview


This chapter describes the architecture and the main administration functions
of Data Federator Query Server.

The Data Federator Query Server expands the standard database user
management and the SQL query language. This allows the query engine
and virtual target tables to provide additional functionality and higher
performance.

Data Federator Query Server architecture


The illustration below shows an overview of the main components of Data Federator: Data Federator Query Server, Data Federator Designer and Data Federator Administrator.


How Data Federator Query Server accesses sources of data

Real-time access to sources of data is divided into two steps: the connector and the driver.

Connector
The connector expands the functionality of the database driver to work with Data Federator Query Server. The connector is an XML file (.wd file) that defines the parameters by type of database and contains metadata about the data managed by Data Federator Query Server.

Driver
The driver provides a common access method for querying tools. The driver is supplied with the database, and is a file that defines the access parameters to query the database it supports. Data Federator Query Server supports ODBC and JDBC drivers.

Database (source)
Data Federator Query Server supports many sources of data, for example: Oracle, SQL Server, MySQL, CSV files, XML files or web services.


The illustration below shows an example of how the Data Federator Query
Server relates to the sources of data.


Data experts and data administrators

The database administrator is the person who sets up the connection and allows others to connect to the database. This person is not necessarily the data expert who controls the Data Federator Designer. For this reason, a system of aliases facilitates access to the often highly-customized database resources.

Once the connections are made, Data Federator provides an end-to-end
view via Data Federator Administrator and Data Federator Designer, from
the database resources to the target tables. These two web-based tools
respectively provide functionality for the data administrator and the data
expert.

Related Topics
• The Data Federator application on page 26

Key functions of Data Federator Administrator

The main functions performed by Data Federator Administrator on Data
Federator Query Server are as follows:
• Setting up and managing users and their roles
• Managing resources and database connections
• Monitoring query execution

Related Topics
• About user accounts, roles, and privileges on page 504
• Managing resources using Data Federator Administrator on page 483
• Data Federator Query Server overview on page 344
• Query execution overview on page 530


Security recommendations
Business Objects recommends that you protect Data Federator Query Server
behind a firewall and access it using the standard HTTP protocol.



10 Connecting to Data Federator Query Server using JDBC/ODBC drivers

Connecting to Data Federator Query Server using JDBC
This section describes how to connect your applications using JDBC so that
they can retrieve data from Data Federator Query Server.

Installing the JDBC driver with the Data Federator installer

This procedure shows you what you need to install in order to use the JDBC
driver for Data Federator Query Server in your client application.

This procedure applies when you have the Data Federator or Data Federator
Drivers installer.
1. Use the Data Federator or Data Federator Drivers installer on the Data
Federator CD-ROM to install the JDBC driver.
See the Data Federator Installation Guide for installation details.

2. Add data-federator-installation-dir/JdbcDriver/lib/thindriver.jar to the
classpath that your client application must search when loading the Data
Federator Query Server JDBC driver.
3. Use this as the class name of the JDBC driver that your client application
loads:
com.businessobjects.datafederator.jdbc.DataFederatorDriver

Note:
Data Federator remains compatible with previous versions. You can still
use the class name LeSelect.ThinDriver.ThinDriver, but it is recommended
that you update to the new name above.

4. Launch your client application.


For certain applications, you may need to launch the JVM with the
following option:

-Djava.endorsed.dirs=data-federator-installation-dir/JdbcDriver/lib

If your application does not allow you to set the java.endorsed.dirs
option, set a system CLASSPATH variable that includes all the .jar files
from the directory:
data-federator-installation-dir/JdbcDriver/lib

Installing the JDBC driver without the Data Federator installer

If you do not have the Data Federator installer, but you can access the JDBC
files that come with Data Federator, then you can make a JDBC connection
as follows.
1. Retrieve thindriver.jar from the machine where Data Federator is
installed, from the directory data-federator-installation-dir/JdbcDriver/lib.
If:
• the parameter commProtocol=jacORB is used, you will also need
avalon-framework-4.1.5.jar, jacorb.jar and logkit-1.2.jar
• your client application uses JDK 1.4, you will also need icu4j.jar
2. Copy these files to a directory of your choice (your-jdbc-driver-directory).
3. Add your-jdbc-driver-directory/thindriver.jar to the classpath
that your client application must search when loading the Data Federator
Query Server JDBC driver.
4. Launch your client application.
For certain applications, you may need to launch the JVM with the
following option:

-Djava.endorsed.dirs=your-jdbc-driver-directory

If your application does not allow you to set the java.endorsed.dirs
option, set a system CLASSPATH variable that includes all the above .jar
files.


Connecting to the server using JDBC

This procedure shows how to establish a connection between your application
and Data Federator Query Server.
1. Install the JDBC driver for Data Federator Query Server.
2. In the client application that you are using to connect, enter the connection
URL:

jdbc:datafederator://host[:port][/[catalog]][[;param-name=value]*]

Note:
Data Federator remains compatible with previous versions. You can still
use the url prefix jdbc:leselect:, but it is recommended that you update
to the new prefix above.

The catalog must be the same as the catalog name you used to deploy
one of your projects.

For example, if you named your catalog "OP":

jdbc:datafederator://localhost/OP

The parameters in the JDBC connection URL let you configure the
connection.
Note:
The classpath that your application uses must include:
your-jdbc-driver-directory/thindriver.jar

Your client application establishes a connection to Data Federator Query
Server.

Related Topics
• Installing the JDBC driver with the Data Federator installer on page 350
• JDBC URL syntax on page 358
• Parameters in the JDBC connection URL on page 361
• Deploying a version of a project on page 324
• Example Java code for connecting to Data Federator Query Server using
JDBC on page 353


Example Java code for connecting to Data Federator Query Server using JDBC

The following code block shows how to use the Data Federator driver to
connect to Data Federator Query Server and execute an SQL query.

The statement shown in this block is select distinct(CATALOG) from
/leselect/system/schemas where SCHEMA='targetSchema'. This statement
retrieves all of the distinct catalogs from the system table that lists all of the
schemas of target tables that you have deployed on Query Server.

You can replace the SQL statement in this example with any statement that
is supported by Data Federator. For the names of the stored procedures that
you can call using this kind of code, see the list of stored procedures.

Example: Code block showing how to connect to Data Federator Query
Server through JDBC

/* requires the imports java.sql.Connection, java.sql.DriverManager,
   java.sql.ResultSet and java.sql.Statement */

/* loads the driver for Data Federator Query Server */
Class.forName( "com.businessobjects.datafederator.jdbc.DataFederatorDriver" );

/* sets up the url and parameters to pass in the url */
String strUrl = "jdbc:datafederator://localhost";
String strUser = "sysadmin";
String strPass = "sysadmin";

/* creates a representation of a connection */
Connection conn =
    DriverManager.getConnection( strUrl, strUser, strPass );

/* creates a representation of a statement */
Statement statement = conn.createStatement();

/* sets up the schema name */
String strSchema = "targetSchema";

/* writes the text of the statement */
String strSelectStatement =
    "select distinct(CATALOG) "
    + "from /leselect/system/schemas "
    + "where SCHEMA='"
    + strSchema
    + "'";

/* executes the query and retrieves the result set */
ResultSet rs =
    statement.executeQuery( strSelectStatement );

/* iterates through the rows */
while( rs.next() )
{
    /* prints the value of one column in the row */
    System.out.println( rs.getString( "CATALOG" ) );
}

/* closes the connections and frees resources */
rs.close();
statement.close();
conn.close();

Related Topics
• JDBC URL syntax on page 358
• Parameters in the JDBC connection URL on page 361
• System table reference on page 728
• List of stored procedures on page 744

Connecting to Data Federator Query Server using ODBC
This section describes how to connect your applications using ODBC so that
they can retrieve data from Data Federator Query Server.

Installing the ODBC driver for Data Federator (Windows only)

To connect to Data Federator Query Server via ODBC, you must use the
OpenAccess ODBC to JDBC Bridge.

The Data Federator Drivers installer installs and configures the OpenAccess
ODBC to JDBC Bridge.

• Use the Data Federator Drivers installer on the Data Federator CD-ROM
to install the OpenAccess ODBC to JDBC Bridge.
See the Data Federator Installation Guide for installation details.

Connecting to the server using ODBC

This procedure shows how to establish a connection between your application
and Data Federator Query Server.
1. Install the OpenAccess ODBC to JDBC Bridge for Data Federator Query
Server.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start, then Programs, then Administrative Tools,
then click Data Sources (ODBC).

3. Add a DSN entry of type OpenAccess ODBC to JDBC Bridge, and
configure it as follows.

For the parameter... Enter...

DSN Name
a name of your choice

Driver Class
com.businessobjects.datafederator.jdbc.DataFederatorDriver

URL
the following JDBC URL:

jdbc:datafederator://<host>[:<port>][/[<catalog>]][[;param-name=value]*]

• The <catalog> must be the same as the catalog name you used to deploy
one of your projects. For example, if you named your catalog "OP":
jdbc:datafederator://localhost/OP
• Add parameters in the JDBC connection URL as required.

• Click Test to test the connection to Data Federator Query Server.

4. In your ODBC client application, use the DSN name that you created in
your "ODBC Data Source Administrator".
Your client application can establish an ODBC connection to Data
Federator Query Server.

Related Topics
• Installing the ODBC driver for Data Federator (Windows only) on page 354
• Parameters in the JDBC connection URL on page 361
• Using ODBC when your application already uses another JVM on page 357
• Deploying a version of a project on page 324


Using ODBC when your application already uses another JVM
1. Edit the OpenAccess configuration file.
You can find the OpenAccess configuration file in
data-federator-drivers-install-dir\OaJdbcBridge\bin\iwinnt\openrda.ini.

2. Copy the value of the CLASSPATH property to your application's classpath.


For example, when you install Data Federator Drivers, the value of the
CLASSPATH property may be:

CLASSPATH=C:\Program Files\Business Objects\BusinessObjects Data Federator 12 Drivers\OaJdbcBridge\jdbc\thindriver.jar;C:\Program Files\BusinessObjects Data Federator XI 3.0\OaJdbcBridge\oajava\oasql.jar

You should add everything after the equal sign (=) to the classpath that
your application uses.

3. Change the value of the JVM_DLL_NAME property to the "JVM" path that
your application uses.
For example, when you install Data Federator, the value of the
JVM_DLL_NAME property may be:

JVM_DLL_NAME=C:\Program Files\Business Objects\BusinessObjects Data Federator 12 Drivers\jre\bin\client\jvm.dll

You should change everything after the equal sign (=) to the path to the
"JVM" that your application uses.

Accessing data
You can access data in Data Federator using SQL statements.

Your queries include the names of the catalog, schema, and table, unless
the catalog or schema names are specified by default.

The structure of a table name in a query is:

"[catalog-name]"."[schema-name]"."[table-name]"


For details on catalog, schema and table naming, see Data Federator SQL
grammar on page 713.

Example: To query a table

If you named your catalog "/OP", and your schema is "targetSchema", you
could make the query:

SELECT * FROM "/OP"."targetSchema"."clients"

If your default catalog is "/OP", and your default schema is "targetSchema",
you could make the query:

SELECT * FROM clients
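The quoting pattern above can be wrapped in a small helper. The following Java sketch is illustrative only (the qualifiedName helper is not part of the Data Federator API); it builds the fully qualified table reference used in the first query:

```java
public class TableRef {
    /* quotes catalog, schema and table names as
       "[catalog]"."[schema]"."[table]"; illustrative helper only */
    static String qualifiedName(String catalog, String schema, String table) {
        return "\"" + catalog + "\".\"" + schema + "\".\"" + table + "\"";
    }

    public static void main(String[] args) {
        String sql = "SELECT * FROM "
            + qualifiedName("/OP", "targetSchema", "clients");
        /* prints: SELECT * FROM "/OP"."targetSchema"."clients" */
        System.out.println(sql);
    }
}
```

Such a helper keeps the double-quote placement in one place, which is easy to get wrong when concatenating catalog, schema and table names by hand.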

Related Topics
• Properties of user accounts on page 511

JDBC URL syntax


The syntax of a JDBC URL in Data Federator Query Server is as follows.
Figure 10-1: JDBC URL for connecting to Data Federator Query Server directly
jdbc:datafederator://[ host[ :port ] ][ /catalog ][ ;param-name=value ]*

The table below describes the parts of the URL syntax shown above, and
applies to all following examples:

URL part Description

jdbc:datafederator://
the prefix required to connect to Data Federator Query Server
Note:
Data Federator remains compatible with previous versions. You can still
use the URL prefix jdbc:leselect:, but it is recommended that you update
to the new prefix above.

host
the host where Data Federator Query Server is running
Note:
• This is an optional part.
• If you do not name a host, the URL points to localhost.

port
the port on which Data Federator Query Server is running
Note:
• This is an optional part.
• If you do not name a port, the URL points to 3055.

catalog
the name of the catalog to which you connect
Note:
• This is an optional part.
• If you do not name a catalog, the URL points to /OP.

param-name=value
the JDBC parameters of the connection
Note:
• This is an optional part.
• JDBC parameters are case-sensitive.
• Multiple JDBC parameters must be separated by semi-colons.


To connect to Query Server with fault tolerance, use the alternateServers
URL parameter, as follows:
Figure 10-2: JDBC URL for connecting to Data Federator Query Server with fault tolerance
jdbc:datafederator://[ host[ :port ] ][ /catalog ][ ;alternateServers=host1[ :port1 ][ &host2:port2 ]* ][ ;param-name=value ]*
You can also use , (comma) to separate the entries in the list of hosts to use
for fault tolerance, as follows:
Figure 10-3: Alternative JDBC URL for connecting to Data Federator Query Server with fault tolerance
jdbc:datafederator://[ host[ :port ] ][ /catalog ][ ;alternateServers=host1[ :port1 ][ ,host2:port2 ]* ][ ;param-name=value ]*
Note:
If you use OpenAccess, you must use the & (ampersand) to separate entries
in the list of hosts.

In the above syntax, each host entry can be either Data Federator Query
Server or Data Federator Connection Dispatcher.

Use the JDBC URL to define values for the parameters. Any values that you
set in the JDBC URL will override, for a single connection, the default values
and the values for your client.

Example:
The following example URL causes Data Federator to:
• apply fault tolerance on host1, host2 and host3
• apply connection failover if the host mainhost is down

jdbc:datafederator://mainhost;alternateServers=host1,host2,host3;user=jill
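Because these URLs are assembled from several optional parts, client code often builds them programmatically. The following Java sketch is an illustration only (the UrlBuilder class and its defaults are not part of the Data Federator driver API); it follows the URL syntax described in this section:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UrlBuilder {
    /* builds a Data Federator JDBC URL from its optional parts;
       hypothetical helper, not part of the Data Federator driver */
    static String buildUrl(String host, Integer port, String catalog,
                           Map<String, String> params) {
        StringBuilder url = new StringBuilder("jdbc:datafederator://");
        /* if you do not name a host, the URL points to localhost */
        url.append(host == null ? "localhost" : host);
        if (port != null) {
            url.append(':').append(port);
        }
        if (catalog != null) {
            url.append('/').append(catalog);
        }
        /* JDBC parameters are case-sensitive and separated by semi-colons */
        for (Map.Entry<String, String> p : params.entrySet()) {
            url.append(';').append(p.getKey()).append('=').append(p.getValue());
        }
        return url.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<String, String>();
        params.put("alternateServers", "host1,host2,host3");
        params.put("user", "jill");
        /* prints: jdbc:datafederator://mainhost;alternateServers=host1,host2,host3;user=jill */
        System.out.println(buildUrl("mainhost", null, null, params));
    }
}
```

A LinkedHashMap is used so that the parameters appear in the URL in the order they were added.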

Related Topics
• Configuring fault tolerance for Data Federator on page 601
• Parameters in the JDBC connection URL on page 361


Parameters in the JDBC connection URL

This section lists the parameters you can add in the JDBC connection URL.
Note:
All parameters are case-sensitive.

Finding the default values of parameters
The default values for all the system and query parameters are stored in the
file thindriver.jar. You can access these files by using an archiving tool
to extract the contents of thindriver.jar.

When you extract the files from thindriver.jar, you will find the following
two files, which contain the default values of all the parameters.
• thindriver.jar/properties/thin_params.properties
• thindriver.jar/properties/thin_site_params.properties
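Once extracted, these .properties files can be inspected with java.util.Properties. The sketch below parses sample content in memory rather than reading the jar; the two entries shown (rowsAtATime=-1 and checkVersion=STRICT) match the defaults documented in the parameter descriptions below, and the class name DefaultParams is illustrative:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class DefaultParams {
    public static void main(String[] args) throws IOException {
        /* sample content standing in for an extracted .properties file */
        String sample = "rowsAtATime=-1\ncheckVersion=STRICT\n";
        Properties props = new Properties();
        props.load(new StringReader(sample));
        /* reads a value, falling back to a default if the key is absent */
        System.out.println(props.getProperty("rowsAtATime", "-1"));
        System.out.println(props.getProperty("checkVersion", "STRICT"));
    }
}
```

To read the real files, you would replace the StringReader with a stream opened on the extracted properties file.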


Parameter Description

user
user=[username]

Specifies the name of the user connecting to Data Federator Query Server.

password
password=[password]

Specifies the password of the user connecting to Data Federator Query
Server.

catalog
catalog=[catalog-name]

Specifies the default catalog for the current user. If present, the catalog
element of the URL overrides the value of the catalog parameter.

schema
schema=[schema-name]

Specifies the default schema for the current user.

alternateServers
alternateServers=host1[:port1][,host2[:port2][,...]]
alternateServers=host1[:port1][&host2[:port2][&...]]

Used for fault tolerance. Specifies the list of alternate servers to try upon
failed connections.

alternateServersFile
alternateServersFile=[true|false]

Used for fault tolerance. Specifies whether to read a file to find the list of
alternate servers.

This file is at USER_HOME/.datafederator/alternate_servers.list

checkVersion
checkVersion=[STRICT|MAJOR|NONE]

Setting this parameter allows the verification of compatibility between the
JDBC driver and Data Federator Query Server.

STRICT means that an exception is thrown when a difference is detected
between the version of the JDBC driver and the version of Data Federator
Query Server. No connection can be established in such a case.

MAJOR means that an exception is only thrown when a difference is detected
between the major version of the JDBC driver and the major version of Data
Federator Query Server. In such a case, no connection can be established.
However, a connection can still be established when the minor versions are
different.

NONE means that any difference between Data Federator Query Server and
the JDBC driver versions is allowed. A connection is always established, but
unexpected effects can occur when the JDBC driver and Data Federator
Query Server are incompatible.

The default value of the checkVersion parameter is STRICT.

commProtocol
commProtocol=[JacORB|JDKORB]

Allows the user to choose the implementation of the communication protocol
between the Data Federator JDBC driver and Data Federator Query Server.

This parameter can take one of the following values: JacORB or JDKORB.

JacORB means that the communication protocol is based on the JacORB
implementation of the CORBA specification.

JDKORB means that the communication protocol is based on the internal
JDK implementation of the CORBA specification.

The default value is JDKORB.

commWindow
commWindow=[integer]

Allows the user to specify the maximum number of chunks of rowsAtATime
size that can be fetched in advance, when the parameter
dataFetcher=PREFETCH.

The default value is 2.

dataFetcher
dataFetcher=[PREFETCH|ONDEMAND]

Allows the user to choose the data fetching type used by the Data Federator
JDBC driver to transfer query result set data from Data Federator Query
Server.

This parameter can take one of the following values: PREFETCH or
ONDEMAND. ONDEMAND means that data is fetched in chunks of
rowsAtATime size. PREFETCH means that multiple chunks of rowsAtATime
size can be fetched in advance. The user will be able to retrieve data after
the first chunk of rowsAtATime size is fetched.

The default value is PREFETCH.

enforceMetadataColumnSize
enforceMetadataColumnSize=[true|false]

When you set this parameter to true, you can limit the string size of
VARCHAR values from VARCHAR columns from metadata result sets to
the value of the maxMetadataColumnSize parameter.

When the enforceMetadataColumnSize parameter is set to true,
maxMetadataColumnSize is also used for the following metadata information:
maximum catalog name length, maximum schema name length, maximum
table name length, maximum column name length, maximum user name
length, maximum character literal length and maximum binary literal length.
The maximum query statement length is defined as
(1000 * maxMetadataColumnSize).

This parameter can be TRUE or FALSE.

The default value is FALSE.

maxMetadataColumnSize
maxMetadataColumnSize=[integer]

Allows the user to specify the maximum size for columns of type VARCHAR
in metadata result sets.

This parameter can take an integer value that should be at least equal to
maximum(maxDecimalPrecision + 2, 29) and not greater than maxStringSize.
The value 29 represents the display size of a TIMESTAMP value.

The default value is 255.

When retrieving VARCHAR values, only the parameter maxStringSize is
used to truncate and eventually generate a truncation warning, even when
dealing with metadata result sets.

When the enforceMetadataColumnSize parameter is set to true,
maxMetadataColumnSize is also used for the following metadata information:
maximum catalog name length, maximum schema name length, maximum
table name length, maximum column name length, maximum user name
length, maximum character literal length and maximum binary literal length.
The maximum query statement length is defined as
(1000 * maxMetadataColumnSize).

enforceStringSize
enforceStringSize=[true|false]

When you set this parameter to true, the string size of VARCHAR data is
checked and limited to the maximum string size defined in the maxStringSize
parameter, and the strings are truncated to fit this. A warning is issued
whenever a string truncation is applied.

This parameter can be TRUE or FALSE.

The default value is FALSE.

maxStringSize
maxStringSize=[integer]

Allows the user to specify the maximum size of string values from columns
of type VARCHAR in result sets of queries.

This parameter can take an integer value that should be at least equal to
maximum(maxDecimalPrecision + 2, maxMetadataColumnSize, 29). The
value 29 represents the display size of a TIMESTAMP value.

The default value is 9000.

If the enforceStringSize parameter is set to true, then when a VARCHAR
value has a size greater than maxStringSize, the value is truncated and a
warning message is set.

enforceServerMetadataDecimalSize
enforceServerMetadataDecimalSize=[none|maxScale|fixedScale]

Allows the user to specify whether the precision and scale reported at
metadata level by Data Federator Query Server for DECIMAL data should
be checked and enforced over the DECIMAL data obtained from Data
Federator Query Server. This parameter is used in conjunction with the
enforceMaxDecimalSize parameter.

This parameter can be NONE, MAXSCALE or FIXEDSCALE.

The default value is NONE.

enforceMaxDecimalSize
enforceMaxDecimalSize=[none|maxScale|fixedScale]

Allows the user to specify whether the precision and scale of DECIMAL
data should be limited to the user-defined maximum precision
maxDecimalPrecision or maximum scale maxDecimalScale.

This parameter can be NONE, MAXSCALE or FIXEDSCALE.

The default value is NONE.

The enforceMaxDecimalSize parameter is used in conjunction with
enforceServerMetadataDecimalSize, primarily to configure the precision
and scale of values returned by Data Federator Query Server.

maxDecimalPrecision
maxDecimalPrecision=[>=20]

Allows the user to specify the maximum precision of DECIMAL values.

This parameter can take an integer value greater than or equal to 20. When
ODBCMode = TRUE, the value must be less than or equal to 40.

The default value is 27.

The display size of a DECIMAL value is equal to (maxDecimalPrecision + 2).

maxDecimalScale
maxDecimalScale=[0<=maxDecimalScale<=maxDecimalPrecision]

Allows the user to specify the maximum scale of DECIMAL values.

This parameter can take an integer value between 0 and
maxDecimalPrecision.

The default value is 6.

ODBCMode
ODBCMode=[true|false]

When you set this parameter to true, the following parameters are set by
default:
• enforceMetadataColumnSize = TRUE
• enforceStringSize = TRUE
• enforceMaxDecimalSize = MAXSCALE

This parameter can be TRUE or FALSE.

The default value is FALSE.

rowsAtATime
rowsAtATime=[integer]

Specifies the data fetch size. With this parameter, you give a hint to Data
Federator Query Server on the number of rows to be sent in one data transfer
to the Data Federator JDBC driver.

By default the parameter is set to -1, which activates an algorithm that
dynamically and automatically adjusts the size of the fetched data. The
algorithm examines the memory available on the server and the query
execution speed. The value gets adjusted throughout query execution in
order to assure the best performance for all clients.

It is recommended not to force this parameter to any value, as its dynamic
adjustment helps to prevent client starving. Clients may find themselves
unable to retrieve data for a long time if another set of clients has forced the
fetch size to a large value: the data preparation for fetching may consume a
lot of fetching memory on the server, leaving no memory for other clients.

On the other hand, if you need a client to perform faster than others, this
parameter may help. In all cases, the server may limit this value to a
maximum, which is decided depending on the available resources for query
execution.

The default value is -1.

autoReconnect
autoReconnect=[YES|NO]

Specifies if the client application respawns connections upon error. When a
connection respawns automatically, your session parameters will be reset
to their default values. To prevent session parameters from being reset when
there is an error in the connection, leave this value as NO.

The default value is NO.

Related Topics
• Configuring the precision and scale of DECIMAL values returned from
Data Federator Query Server on page 535

JDBC and ODBC Limitations

JDBC limitations

The following JDBC methods are not supported by the Data Federator JDBC
driver.
• Connection methods:
• setAutoCommit (no effect)
• setHoldability (no effect)
• setReadOnly (Data Federator Query Server is always read-only)
• setSavePoint, releaseSavePoint
• setTransactionIsolation (no effect)
• setTypeMap (no effect)
• commit, rollback (no effect)

• Statement methods:
• setCursorName (no effect)
• setFetchDirection (only supported when the parameter direction is
ResultSet.FETCH_FORWARD)

• setFetchSize (no effect)


• setMaxFieldSize (only supported when the parameter max is 0)
• setQueryTimeOut (only supported when the parameter seconds is 0)
• setEscapeProcessing (no effect)
• addBatch, clearBatch, executeBatch
• executeUpdate

• PreparedStatement methods:

• setArray/AsciiStream/BinaryStream/Blob/Bytes/
CharacterStream/Clob/Long/Ref/UnicodeStream/URL
• getParameterMetaData

• ResultSet methods:
• setFetchDirection (only supported when the parameter direction is
ResultSet.FETCH_FORWARD)

• setFetchSize (no effect)


• isAfterLast/BeforeFirst/First/Last
• absolute, first, last, beforeFirst, afterLast
• cancelRowUpdates
• deleteRow, insertRow
• getArray/AsciiStream/BinaryStream/Blob/Bytes/
CharacterStream/Clob/CursorName/Ref/UnicodeStream/URL
• moveToCurrentRow, moveToInsertRow
• refreshRow
• relative
• rowDeleted/Inserted/Updated
• update methods

ODBC limitations

The following ODBC functions are limited on or not supported by the
OpenAccess ODBC to JDBC Bridge for Data Federator.
• SQLAllocHandle (SQL_HANDLE_DESC not supported)
• SQLBindCol (Bookmark column not supported)
• SQLExtendedFetch (Accepts only SQL_FETCH_NEXT as fFetchType
value; Fetches only one row)
• SQLFreeHandle (SQL_HANDLE_DESC not supported)
• SQLBulkOperations


• SQLColumnPrivileges
• SQLColumnPrivilegesW
• SQLCopyDesc
• SQLFetchScroll
• SQLGetDescField
• SQLGetDescFieldW
• SQLGetDescRec
• SQLGetDescRecW
• SQLGetDiagField
• SQLGetDiagFieldW
• SQLGetDiagRec
• SQLGetDiagRecW
• SQLNativeSQL
• SQLNativeSQLW
• SQLParamOptions
• SQLSetDescField
• SQLSetDescRec
• SQLSetDescRecW
• SQLSetPos
• SQLSetScrollOptions
• SQLTablePrivileges
• SQLTablePrivilegesW

SQL Constraints
This section lists the SQL syntax that is accepted by Data Federator Query
Server.

Related Topics
• SQL syntax overview on page 688



11 Using Data Federator Administrator

Data Federator Administrator overview


This chapter covers the procedures to start Data Federator Administrator
and to perform basic queries, and describes its user interface. Data Federator
Administrator is a web-based client application that serves as a window onto
Data Federator Query Server.

Before you begin, ensure you or your system administrator has followed the
steps to install and configure Data Federator in the Data Federator Installation
Guide.

Related Topics
• Data Federator Query Server overview on page 344

Starting Data Federator Administrator


1. Click Start > Programs > BusinessObjects Data Federator XI Release
3 > Data Federator Query Server Administrator.
2. Enter your user name and password in the User and Password text
boxes, then click OK.
A status message appears at the top right side of the screen: Connected
as user 'user_name'.

To end your Data Federator Administrator session
• Click the Logout link, which appears at the top right of the screen.
Your session ends and the login screen appears.

Server configuration
The following section describes steps to consider when configuring your Data Federator Query Server deployment.

Exploring the user interface
If you have the appropriate rights, you can perform any of the following actions:
• Explore objects in the Objects tab.
• Enter queries in the My Query Tool tab.
• Monitor queries and manage Data Federator Query Server in the
Administration tab.

Related Topics
• Objects tab on page 385
• Managing user accounts with SQL statements on page 516
• Administration tab on page 387

Objects tab

You use this tab to navigate in Data Federator Query Server objects. At the
highest level, a list of functions appears. When you navigate within the
objects, the following tabs appear:
• Info tab, displays information about the object
• Content tab, displays the contents for that object in the form of a table

Click Refresh to refresh the table content.


My Query Tool tab

You use this tab most frequently to execute SQL queries and to perform
administrative functions on Data Federator Query Server.

When you first log in, the My Query Tool tab appears by default.


Related Topics
• Managing queries with Data Federator Administrator on page 397
• Key functions of Data Federator Administrator on page 347

Administration tab

You use this tab to monitor queries that are running and to display a history of queries that have already executed.

Viewing the running queries

Queries that are running are displayed in the Query Running tab. If you
have no running queries, nothing displays here.


Viewing the query history

The Query History tab displays the history of the last ten queries.


Each query record contains the following information:


• Status: the status of the query
• SQL: the SQL statement executed
• User: the user account that executed the query
• Exception: any exceptions that occurred
• Subqueries: the queries executed by the connectors

The Server Status menu item

The Server Status menu item displays a summary of the status of Data
Federator Query Server.


The Connector Settings menu item

The Connector Settings menu item lets you manage resources for
connectors to data sources.


The User Rights menu item

The User Rights menu item lets you manage users, roles and permissions.


The Configuration menu item

The System Parameters and Session Parameters tabs under the Configuration menu item let you manage system and session parameters.


The Statistics menu item

The Statistics menu item lets you view updated statistics of queries that
you run on Data Federator Query Server.


Managing statistics with Data Federator Administrator
Data Federator Administrator provides a window where you manage statistics
of tables and columns. You can use this window to view or configure statistics
in order to optimize your queries.

Statistics are estimates of the amount of data in a column or table. Data Federator can use statistics to optimize the queries it runs. By default, Data Federator calculates statistics by itself. If you want, you can override the values that Data Federator calculates.


Using the Statistics tab to refresh statistics automatically

You can use the Global Refresh of Statistics pane to refresh the statistics
of your tables automatically.
1. Select the option that controls how you want to perform the global refresh.
For details, see List of options for the Global Refresh of Statistics pane
on page 396.

2. Use the Statistics list pane to select the list of tables for which you want statistics to be refreshed automatically.
3. Click OK.

Selecting the tables for which you want to display statistics

You can use the Statistics List pane to select the list of tables for which
you want to display statistics.
1. Select List only statistics in.
2. Select values in the Catalog, Schema and Table boxes.
The statistics will be displayed for the tables that you select.

3. Click Refresh to update the statistics.

Recording statistics that Query Server recently requested

You can use the Statistics List pane to record the tables and columns for
which statistics were recently requested by Query Server.
1. Click List only cardinalities recently requested.
2. Set the session parameter Leselect.core.statistics.recorder.enabled to
true.


When queries are executed, the list of tables with their statistics is
displayed automatically.

List of options for the Global Refresh of Statistics pane

• Only columns: computes the number of distinct values for each column.
• Only tables: computes the number of distinct values in each table.
• All tables and columns: computes the number of distinct values for each column and the number of rows in each table.
• Excluding when value is overridden by user: the statistics will not be computed if you entered a value during the definition of the datasource.
• Including when value is overridden by user: the statistics will be computed and will overwrite the value you entered during the definition of the datasource.

Related Topics
• Defining the schema of a datasource on page 204

Managing queries with Data Federator Administrator
You can use Data Federator Administrator to run queries and keep track of
queries you have run in the past.
1. Log in to Data Federator Administrator.
2. Click the My Query Tool tab.
3. Enter an SQL-syntax query.
4. Set the options as follows.
• If you want to limit the number of results displayed in the Query Result, enter a number in the Maximum rows to display text box. The default value is 5.
• If you want to fetch more rows than are displayed (for example, to see the activity of the query in the log files), select Yes from the Fetch all rows drop-down list. Select No if you want to fetch only the number of rows that are displayed.

5. Click Execute to run the query.
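For example, a query entered in the query tool might look like the following sketch. The catalog, schema, and table names here are hypothetical placeholders; substitute the names that appear in your own Objects tab:

```sql
-- Hypothetical names: "myCatalog", "mySchema" and CUSTOMERS are placeholders.
SELECT CustomerId, Name
FROM "myCatalog"."mySchema".CUSTOMERS
WHERE Country = 'France'
```

With Maximum rows to display left at its default of 5, only the first five result rows appear in the Query Result area.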

Related Topics
• Starting Data Federator Administrator on page 384
• Data Federator Query Server query language on page 688
• Query execution overview on page 530

Executing SQL queries using the My Query Tool tab


1. Enter the SQL query in the Query Editor panel.
Note:
Select objects from the Catalog, Schema, Table and Column columns
of the table in the "Objects Browser", if required. Note also that the "Query
Editor" text area can be shrunk or enlarged by placing your cursor over
the grey bar at the bottom of the text area and moving it up or down
accordingly.


• If you want to limit the number of results displayed in the Query Results panel, enter a number in the Maximum rows to display text box. The default value is 5.
• If you want to fetch more rows than are displayed (for example, to see the activity of the query in the log files), select Yes from the Fetch all rows drop-down list. Select No if you want to fetch only the number of rows that are displayed.

2. Click Run Query to execute the SQL query.


Note:
Select the text of one query with your cursor and click Run Selected
Query to execute just that selected query.
The query runs, and the results are displayed in the Query Results panel.

12 Configuring connectors to sources of data

About connectors in Data Federator


In Data Federator, configuring a connector means installing drivers or
middleware, and then setting parameters so that Data Federator can connect
to a source of data.

In general, Data Federator connects to sources of data in one of two ways.


• JDBC

For most sources of data that support JDBC, you just copy the JDBC
driver to a directory where Data Federator can find it, and there is nothing
more to configure.
• proprietary middleware

For sources of data that do not support JDBC, you must install the
vendor's middleware, and point Data Federator to the middleware. In
most cases, you have already installed the middleware, and you just need
to tell Data Federator where to find it.

Configuring Access connectors


In order to configure a connector for Access, you must install an ODBC driver
and create an entry in your operating system's ODBC data source
administrator.
1. Install the ODBC driver for Access.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name) entry to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Access.

Parameter: Data Source Name
Description: The name that you defined in your operating system's data source manager, in the field Data Source Name.

Configuring DB2 connectors


In order to configure a connector for DB2, you must install JDBC drivers.
These drivers are usually available from the DB2 website.
1. Download the JDBC driver for DB2.
You get a driver in the form of a .jar file or several .jar files.

Use the following link to download the IBM DB2 JDBC Universal Driver (the product is called IBM Cloudscape): http://www14.software.ibm.com/webapp/download/

To complete this download, you must register on the IBM website. Registration is free.

After you install IBM Cloudscape, you can find the driver file in ibm-cloudscape-install-directory/lib/db2jcc.jar. The file db2jcc.jar is the driver you can use for DB2.
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.


Related Topics
• Pointing a resource to an existing JDBC driver on page 460

Configuring Informix connectors

Supported versions of Informix

This version of Data Federator supports Informix XPS.

The middleware is the IBM Informix ODBC Driver.

Configuring Informix connectors

In order to configure a connector for Informix, you must install an ODBC driver and create an entry in your operating system's ODBC data source administrator.
1. Install the ODBC driver for Informix.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name) entry to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Informix.

Parameter: Data Source Name
Description: The name that you defined in your operating system's data source manager, in the field Data Source Name.

List of Informix resource properties

The table below lists the properties that you can configure in Informix
resources.


• addCatalog (BOOLEAN): Set to True if you want to see the catalog as a prefix for table names.
• addSchema (BOOLEAN): Set to True if you want to see the schema as a prefix for table names.
• allowTableType (semi-colon-separated list of values): Lists the table types to take into consideration in the metadata that is retrieved from the underlying database. Special case: if this attribute is empty (' '), all table types are allowed. Example: 'TABLE;SYSTEM TABLE;VIEW'
• authenticationMode (one of {configuredIdentity, callerImpersonation, principalMapping}): configuredIdentity means that authentication in the database is done using the value of the parameters username and password. callerImpersonation means that authentication in the database is done using the same credential as the one used to connect to the Query Server. principalMapping means that authentication in the database is done using a mapping from the Data Federator user to the user of the database; in this case, the parameter loginDomain should be set to a registered login domain.
• capabilities (mapping): Defines what the data source supports in terms of relational operators. It lists all capabilities supported by the database. Depending on the supported relational operators, Data Federator manages the queries differently. For example, if you specify outerjoin=false, that tells Data Federator Query Server to execute this operator within the Data Federator Query Server engine. An example is: isjdbc=true;outerjoin=false;rightouterjoin=true. The Data Federator documentation has a full list of capabilities.
• defaultFetchSize: Gives the driver a hint as to the number of rows that should be fetched from the database when more rows are needed. If the value specified is negative, the hint is ignored. The default value is -1.
• ignoreKeys (BOOLEAN): Set to True if you do not want the connector to query the data source to get keys and foreign keys metadata.
• isPasswordEncrypted (BOOLEAN): Set to True if the password is encrypted. The password is defined by the password parameter.
• maxConnectionIdleTime (INTEGER): The maximum time an idle connection is kept in the pool of connections. The unit is milliseconds. The default is 60000 ms (60 s). 0 means no limit.
• nbPreparedStatementsPerQuery (INTEGER): The maximum number of prepared statements in the query pool.
• networkLayer (pre-defined value): Specifies the network layer of the database that you want to connect to. When you create a resource, choose the value that corresponds to the database to which you are connecting. For Informix, this should be: Informix CLI.
• password (STRING): Defines the password of the corresponding user. Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "password".
• schema (semi-colon-separated list of values): Defines the schema names or patterns that you access. Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "schema". You can specify several schemas, and you can use wildcards: 'T%' = T followed by zero or more characters; 'S_' = S followed by any single character.
• setFetchSize (BOOLEAN): Defines if the connector should set the default fetch size.
• sourceType (pre-defined value): Identifies the version of the database. For Informix, possible values are: Informix XPS 8.4, Informix XPS 8.5.
• sqlStringType (pre-defined value): Defines the syntax used to generate the SQL string. This parameter lets Data Federator Query Server translate the queries expressed in the Data Federator SQL syntax to the syntax specific to the database. According to the query language of the database, the possible values are: sql92, sql99, jdbc3. Example of the jdbc3 format: SELECT * FROM {oj T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2}. Example of the SQL92 format: SELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2.
• transactionIsolation (pre-defined value): Attempts to change the transaction isolation level for connections to the database. The transactionIsolation parameter is used by the connector to set the transaction isolation level of each connection made to the underlying database. The Data Federator documentation has more details about the transactionIsolation property.
• user (STRING): Defines the username of the database account. Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "user". Example: ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'
• supportsboolean (True/Yes or False/No): Specifies if the middleware or database supports BOOLEANs as first-class objects. The default value for this parameter depends on the database. For pre-defined resources, this parameter is already set to its correct value, but you can override it. Default: No.
• maxRows (integer): Lets you define the maximum number of rows you want returned from the database. The default value 0 means no limit.
• allowPartialResults (boolean): Must be yes to allow partial results if maxRows is set. Accepted values are yes/no or true/false; the default value is no.
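Assuming the ALTER RESOURCE syntax shown for the user property also applies to the other properties, a sketch of capping result sizes on a hypothetical resource named "jdbc.myresource" might look like this (the resource name and the cap of 10000 are illustrative, not defaults):

```sql
-- "jdbc.myresource" is a placeholder resource name; 10000 is an illustrative cap.
ALTER RESOURCE "jdbc.myresource" SET maxRows '10000'
ALTER RESOURCE "jdbc.myresource" SET allowPartialResults 'yes'
```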

Related Topics
• transactionIsolation property on page 478


Configuring MySQL connectors


In order to configure a connector for MySQL, you must install JDBC drivers.
These drivers are usually available from the MySQL website.
1. Download the JDBC driver for MySQL.
You get a driver in the form of a .jar file or several .jar files.
http://dev.mysql.com/downloads/connector/j/5.1.html
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.

Related Topics
• Pointing a resource to an existing JDBC driver on page 460


Specific collation parameters for MySQL

• datasourceSortCollation: the source collation for sort operations
• datasourceCompCollation: the source collation for comparisons
• datesourceBinaryCollation: the source collation for binary comparisons

Resource properties for setting collation parameters on MySQL


Three JDBC resource properties let you force the specific collation to use
for MySQL, even if your source of data has a different default collation.

To force a collation value for MySQL, change the value of the datasource
SortCollation, datasourceCompCollation or datesourceBinaryCollation JDBC
resource properties.

Example: Setting specific collation parameters for MySQL

datasourceCompCollation="utf8_swedish_ci"
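Following the ALTER RESOURCE pattern used elsewhere in this chapter, one way this value might be applied is sketched below. The resource name "jdbc.myresource" is a hypothetical placeholder:

```sql
-- Placeholder resource name; the collation value comes from the example above.
ALTER RESOURCE "jdbc.myresource" SET datasourceCompCollation 'utf8_swedish_ci'
```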

Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483


Configuring Oracle connectors


In order to configure a connector for Oracle, you must install JDBC drivers.
These drivers are usually available from the Oracle website.
1. Download the JDBC driver for Oracle.
You get a driver in the form of a .jar file or several .jar files.
http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.

Related Topics
• Pointing a resource to an existing JDBC driver on page 460

Specific collation parameters for Oracle

Resource properties for setting collation parameters on Oracle


A JDBC resource property lets you force the specific collation to use for
Oracle, even if your source of data has a different default collation.

The default setting is sessionProperties:NLS_TERRITORY=AMERICA;NLS_LANGUAGE=ENGLISH;NLS_SORT=BINARY;NLS_COMP=BINARY

The NLS_COMP and NLS_SORT parameters are used by Oracle to define the collation for comparison and sort operations. By default, both NLS_COMP and NLS_SORT are set to BINARY.

To force a specific collation on Oracle, change the value of the sessionProperties JDBC resource property.
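As a sketch, again assuming the ALTER RESOURCE pattern applies and using a hypothetical resource name, forcing a linguistic collation might look like the following. The NLS values are standard Oracle session parameters; check your Oracle documentation for the values appropriate to your locale:

```sql
-- Placeholder resource name; NLS_SORT and NLS_COMP control Oracle collation.
ALTER RESOURCE "jdbc.myresource" SET sessionProperties
  'NLS_TERRITORY=AMERICA;NLS_LANGUAGE=ENGLISH;NLS_SORT=FRENCH;NLS_COMP=LINGUISTIC'
```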

Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483

How Data Federator transforms wildcards in names of Oracle tables

Certain wildcards in Oracle table names do not appear in Data Federator. For example, when searching for all tables in an object type, a table with the name abc/table1 appears as abc?22ftable1.

The table below describes how wildcards appear in Data Federator:


• % is replaced by ?225
• / is replaced by ?22f
• \ is replaced by ?25c
• . is replaced by ?22e
• # is replaced by ?223
• ? is replaced by ?23f

Configuring Netezza connectors

Supported versions of Netezza

This version of Data Federator supports Netezza NPS Server versions 3.0
or 3.1.

To let Data Federator connect to your Netezza NPS Server database, you must install the Netezza ODBC driver (version 3.0 or 3.1).

Configuring Netezza connectors

In order to configure a connector for Netezza, you must install an ODBC driver and create an entry in your operating system's ODBC data source administrator.
1. Install the ODBC driver for Netezza.
2. Open your operating system's "ODBC Data Source Administrator".

To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name) entry to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Netezza.

Parameter: Data Source Name
Description: The name that you defined in your operating system's data source manager, in the field Data Source Name.

List of Netezza resource properties

The table below lists the properties that you can configure in Netezza
resources.


• addCatalog (BOOLEAN): Set to True if you want to see the catalog as a prefix for table names.
• addSchema (BOOLEAN): Set to True if you want to see the schema as a prefix for table names.
• allowTableType (semi-colon-separated list of values): Lists the table types to take into consideration in the metadata that is retrieved from the underlying database. Special case: if this attribute is empty (' '), all table types are allowed. Example: 'TABLE;SYSTEM TABLE;VIEW'
• authenticationMode (one of {configuredIdentity, callerImpersonation, principalMapping}): configuredIdentity means that authentication in the database is done using the value of the parameters username and password. callerImpersonation means that authentication in the database is done using the same credential as the one used to connect to the Query Server. principalMapping means that authentication in the database is done using a mapping from the Data Federator user to the user of the database; in this case, the parameter loginDomain should be set to a registered login domain.
• capabilities (mapping): Defines what the data source supports in terms of relational operators. It lists all capabilities supported by the database. Depending on the supported relational operators, Data Federator manages the queries differently. For example, if you specify outerjoin=false, that tells Data Federator Query Server to execute this operator within the Data Federator Query Server engine. An example is: isjdbc=true;outerjoin=false;rightouterjoin=true. The Data Federator documentation has a full list of capabilities.
• defaultFetchSize: Gives the driver a hint as to the number of rows that should be fetched from the database when more rows are needed. If the value specified is negative, the hint is ignored. The default value is -1.
• ignoreKeys (BOOLEAN): Set to True if you do not want the connector to query the data source to get keys and foreign keys metadata.
• isPasswordEncrypted (BOOLEAN): Set to True if the password is encrypted. The password is defined by the password parameter.
• maxConnectionIdleTime (INTEGER): The maximum time an idle connection is kept in the pool of connections. The unit is milliseconds. The default is 60000 ms (60 s). 0 means no limit.
• nbPreparedStatementsPerQuery (INTEGER): The maximum number of prepared statements in the query pool.
• networkLayer (pre-defined value): Specifies the network layer of the database that you want to connect to. When you create a resource, choose the value that corresponds to the database to which you are connecting. For Netezza, this should be: ODBC.
• password (STRING): Defines the password of the corresponding user. Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "password".
• schema (semi-colon-separated list of values): Defines the schema names or patterns that you access. Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "schema". You can specify several schemas, and you can use wildcards: 'T%' = T followed by zero or more characters; 'S_' = S followed by any single character.
• setFetchSize (BOOLEAN): Defines if the connector should set the default fetch size.
• sourceType (pre-defined value): Identifies the version of the database. For Netezza, the possible value is: Netezza Server.
• sqlStringType (pre-defined value): Defines the syntax used to generate the SQL string. This parameter lets Data Federator Query Server translate the queries expressed in the Data Federator SQL syntax to the syntax specific to the database. According to the query language of the database, the possible values are: sql92, sql99, jdbc3. Example of the jdbc3 format: SELECT * FROM {oj T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2}. Example of the SQL92 format: SELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2.
• transactionIsolation (pre-defined value): Attempts to change the transaction isolation level for connections to the database. The transactionIsolation parameter is used by the connector to set the transaction isolation level of each connection made to the underlying database. The Data Federator documentation has more details about the transactionIsolation property.
• user (STRING): Defines the username of the database account. Note: This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "user". Example: ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'
• supportsboolean (True/Yes or False/No): Specifies if the middleware or database supports BOOLEANs as first-class objects. The default value for this parameter depends on the database. For pre-defined resources, this parameter is already set to its correct value, but you can override it. Default: No.
• maxRows (integer): Lets you define the maximum number of rows you want returned from the database. The default value 0 means no limit.
• allowPartialResults (boolean): Must be yes to allow partial results if maxRows is set. Accepted values are yes/no or true/false; the default value is no.

Related Topics
• transactionIsolation property on page 478


Configuring Progress connectors

Configuring connectors for Progress

Summary of the connection from Data Federator to Progress


In order to use a Progress connector, you must install the Progress
middleware and a driver that lets Data Federator connect to the Progress
middleware.

Details of the connection from Data Federator to Progress


In order to connect to Progress databases, you must do the following:
• install the OEM SequeLink Server for ODBC Socket 5.5 from the Data
Federator Drivers DVD (directory drivers/sl550socket). For Windows
platforms only.
• install the OEM ODBC driver for Progress OpenEdge (Data Federator
DataDirect Progress OpenEdge ODBC driver) using the Data Federator
Drivers installer. For Windows platforms only.
• install the Progress OpenEdge 10.0B client
• configure DSN entries to point to your Progress databases

Data Federator loads the JDBC driver for Progress OpenEdge. The JDBC
driver for Progress connects to the OEM SequeLink Server. The OEM
SequeLink Server connects to Data Federator DataDirect Progress OpenEdge
ODBC driver. The ODBC driver connects to the Progress OpenEdge 10.0B
client. Finally, the Progress OpenEdge 10.0B client connects to the Progress
database.

The OEM SequeLink Server and the Data Federator DataDirect Progress
OpenEdge ODBC driver should be on the same Windows machine as the
Progress OpenEdge 10.0B client.

The connection from the Progress OpenEdge 10.0B client to the Progress
database is covered in your Progress documentation.


Figure 12-1: Architecture of a connection from Data Federator to Progress

Related Topics
• Installing OEM SequeLink Server for Progress connections on page 423
• Configuring middleware for Progress connections on page 423

Installing OEM SequeLink Server for Progress connections

In order to bridge the JDBC driver for Progress to the Data Federator
DataDirect Progress OpenEdge ODBC driver, you must install the SequeLink
Server for ODBC Socket 5.5 OEM version. The SequeLink Server installation
is provided on the Data Federator DVD.
• Run the following script from the Data Federator DVD.
drivers/sl550socket/oemsetup.bat

You can find documentation on the SequeLink Server in the directory drivers/sl550socket/doc on the Data Federator DVD.

Configuring middleware for Progress connections


1. Install a Progress OpenEdge 10.0B client. See the Progress
documentation for details.


2. Set your environment variables to point to the Progress OpenEdge installation as follows.

DLC=C:\Progress\OpenEdge

PATH=%PATH%;%DLC%\bin

3. Run the Data Federator Driver installer and choose an install set that
contains the connector driver for Progress OpenEdge 10.0B.
4. Open your operating system's "ODBC Data Source Administrator".
On Windows, you configure DSN entries in the "ODBC Data Source
Administrator".

To open the "ODBC Data Source Administrator" on a standard installation of Windows, click Start > Programs > Administrative Tools > Data Sources (ODBC).

5. Add a DSN entry of the type "Data Federator DataDirect Progress OpenEdge", and configure it as follows.

For the parameter... Enter...

Data source name
a name of your choice: your-progress-data-source-name; for example, accounts-progress-data-source

Description
a description of your choice, to describe this Progress data source

Host
the name of the machine where your Progress database is installed

Port
the port of the Progress database


Database name
the name of the Progress database to which you want to connect

User
a username that has at least read privileges on the Progress database to which you want to connect; for example, progress-username

6. Use the Data Federator DVD and install the SequeLink Server OEM
version (drivers/sl550socket/oemsetup.bat).
The SequeLink Server is a bridge between the JDBC driver for Progress
and the Data Federator DataDirect OpenEdge driver.

7. Open the administration interface of the SequeLink Server.


8. Add a data source, and configure it as follows.

For the parameter... Enter...

data source name (in tree list)
a name of your choice: your-sequelink-data-source-name

the attribute DataSourceSOCODBCConnStr, in the term DSN
the text DSN= followed by the name that you chose in the DSN entry for your ODBC driver: your-progress-data-source-name. For example, for the attribute DataSourceSOCODBCConnStr, enter DSN=accounts-progress-data-source


When you complete the above steps, any connections that users create of
type jdbc.progress.openedge will connect to Progress through the SequeLink
Server.
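The key configuration value in step 8 is the DSN= attribute that ties the SequeLink data source to the ODBC DSN from step 5. As a minimal illustration (the helper below is hypothetical, not a Data Federator or SequeLink API), the DataSourceSOCODBCConnStr value is simply the DSN= prefix plus the ODBC data source name:

```java
// Illustrative only: build the value of the SequeLink attribute
// DataSourceSOCODBCConnStr from the ODBC DSN name chosen in step 5.
public class SequeLinkConnStr {
    public static String build(String odbcDsnName) {
        return "DSN=" + odbcDsnName; // e.g. DSN=accounts-progress-data-source
    }

    public static void main(String[] args) {
        System.out.println(build("accounts-progress-data-source"));
    }
}
```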

Give the following information to users of Data Federator Designer who want to connect to this Progress server.

Parameter Description

SequeLink data source name
The name you defined as the data source name in the administration interface of SequeLink Server: your-sequelink-data-source-name

SequeLink server host name
The name of the host where you installed the SequeLink Server.

SequeLink server port
The port of the host where you installed the SequeLink Server.

Related Topics
• Installing OEM SequeLink Server for Progress connections on page 423

Configuring SAS connectors

Configuring connectors for SAS

In order to use a SAS connector, you must install a driver that lets Data
Federator connect to a SAS/SHARE server.

A SAS/SHARE server is a server that allows you to connect to SAS data sets. For more information about SAS/SHARE, see the SAS website.

You can install the driver as you would install any other JDBC driver for Data
Federator.

Related Topics
• Pointing a resource to an existing JDBC driver on page 460
• http://www.sas.com/products/share/index.html

Supported versions of SAS

This version of Data Federator supports SAS with the SAS/SHARE server
version 9.1 or higher.

Installing drivers for SAS connections

In order to connect to SAS sources from Data Federator, you must install a
SAS/SHARE driver for JDBC.

The SAS/SHARE driver lets Data Federator connect to a SAS/SHARE server.


The SAS/SHARE server accesses your SAS data sets.

The SAS/SHARE driver for JDBC should be on the same machine as Data
Federator.

To set up your SAS/SHARE server, see your SAS documentation.

Figure 12-2: Architecture of an installation from Data Federator to SAS

• Install a driver for a JDBC connection to SAS, as you would install a regular JDBC driver in Data Federator.

Users can now add a datasource of type SAS.


Optimizing SAS queries by ordering tables in the from clause by their cardinality

SAS is sensitive to the ordering of tables in the from clause. For the fastest response from the SAS/SHARE server, the table names in the from clause should appear in descending order with respect to their cardinalities.

You can ensure that Data Federator generates tables in this order by keeping
the statistics in Data Federator accurate. You can do this using Data
Federator Administrator.

To control the order of tables manually, you can also set the sasWeights
resource property for the SAS JDBC connector.

Related Topics
• Managing statistics with Data Federator Administrator on page 394
• Managing resources and properties of connectors on page 483
• List of JDBC resource properties for SAS on page 428

List of JDBC resource properties for SAS

The table below lists the properties that you can configure in JDBC resources.

Parameter Description
sasWeights
a mapping of table names to weights used to order
the tables in the from clause when generating a
query in the SAS dialect
Tables in the from clause are ordered according
to weights, in descending order. The weight is by
default set to the table cardinality but it can be
overridden using this parameter. This ordering is
done only for inner joins.

A table name here is the name as exported by the connector. A weight is a long value.

Example

EMPLOYEE=16;DEPARTMENT=4

Using this setting, the EMPLOYEE table will appear before the DEPARTMENT table when pushing a query on SAS with a join of these two tables.

If this parameter is not specified, or if no weight is defined for a given table, then the weight is by default the cardinality of the table (as set in Query Server).
If a table name is unknown, it is simply ignored.

This parameter is taken into account only when the parameter sqlStringType is set to sas.
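To make the ordering rule concrete, here is a small sketch (class and method names are hypothetical; the real ordering happens inside the connector) that parses a sasWeights string such as EMPLOYEE=16;DEPARTMENT=4, falls back to the table cardinality when no weight is given, and sorts the from-clause tables in descending order:

```java
import java.util.*;

// Hypothetical sketch of the sasWeights ordering rule: explicit weights
// override cardinalities, tables without a weight keep their cardinality,
// and the from-clause tables are sorted by weight in descending order.
public class SasWeights {
    public static Map<String, Long> parse(String value) {
        Map<String, Long> weights = new HashMap<>();
        for (String pair : value.split(";")) {
            String[] kv = pair.split("=");
            weights.put(kv[0].trim(), Long.parseLong(kv[1].trim()));
        }
        return weights;
    }

    public static List<String> orderTables(List<String> tables,
                                           Map<String, Long> cardinalities,
                                           Map<String, Long> weights) {
        List<String> ordered = new ArrayList<>(tables);
        ordered.sort(Comparator.comparingLong(
                (String t) -> weights.getOrDefault(t, cardinalities.getOrDefault(t, 0L)))
                .reversed());
        return ordered;
    }

    public static void main(String[] args) {
        Map<String, Long> weights = parse("EMPLOYEE=16;DEPARTMENT=4");
        Map<String, Long> cardinalities = Map.of("EMPLOYEE", 3L, "DEPARTMENT", 40L);
        // EMPLOYEE (weight 16) comes before DEPARTMENT (weight 4),
        // even though DEPARTMENT has the larger cardinality.
        System.out.println(orderTables(List.of("DEPARTMENT", "EMPLOYEE"),
                                       cardinalities, weights));
    }
}
```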

Related Topics
• List of JDBC resource properties on page 461


Configuring SQL Server connectors

Configuring SQL Server connectors

In order to configure a connector for SQL Server, you must install JDBC
drivers. These drivers are usually available from the SQL Server website.
1. Download the JDBC driver for SQL Server.
You get a driver in the form of a .jar file or several .jar files.

Note:
The recommended driver for SQL Server 2000 is SQL Server JDBC driver
SP3 (the version is 2.2.0040)

The recommended driver for SQL Server 2005 is version v1.0.809.102.

http://www.microsoft.com/downloads/details.aspx?familyid=07287b11-0502-461a-b138-2aa54bfdc03a&displaylang=en
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.
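On startup, the server scans its drivers directory for .jar files to load. A minimal sketch of that selection step (the helper is illustrative; Data Federator's actual driver loader is internal):

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: from the files found in LeSelect/drivers, keep only
// the .jar files that a JDBC driver loader would put on its classpath.
public class DriverJars {
    public static List<String> selectJars(List<String> fileNames) {
        return fileNames.stream()
                .filter(name -> name.toLowerCase().endsWith(".jar"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(selectJars(List.of("sqljdbc.jar", "readme.txt", "msbase.JAR")));
    }
}
```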

Related Topics
• Pointing a resource to an existing JDBC driver on page 460


Specific collation parameters for SQL Server

Property Description

datasourceSortCollation
the source collation for sort operations

datasourceCompCollation
the source collation for comparisons

datasourceBinaryCollation
the source collation for binary comparisons

Resource properties for setting collation parameters on SQL Server


Three JDBC resource properties let you force the specific collation to use
for SQL Server, even if your source of data has a different default collation.

To force a collation value for SQL Server, change the value of the datasourceSortCollation, datasourceCompCollation, or datasourceBinaryCollation JDBC resource properties.

Example: Setting specific collation parameters for SQL Server


• datasourceBinaryCollation="Latin1_general_bin"
• datasourceSortCollation="french_ci_ai"
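Conceptually, forcing a source collation means the generated SQL Server query carries an explicit COLLATE clause rather than relying on the column's default. A hypothetical illustration (the helper below is not Data Federator code; COLLATE itself is standard Transact-SQL):

```java
// Illustrative sketch: append an explicit COLLATE clause to an ORDER BY,
// as a query generator might when datasourceSortCollation is forced.
public class CollateClause {
    public static String orderBy(String column, String sortCollation) {
        if (sortCollation == null || sortCollation.isEmpty()) {
            return "ORDER BY " + column; // fall back to the column's default collation
        }
        return "ORDER BY " + column + " COLLATE " + sortCollation;
    }

    public static void main(String[] args) {
        System.out.println(orderBy("name", "french_ci_ai"));
    }
}
```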

Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483


Configuring Sybase connectors

Supported versions of Sybase

This version of Data Federator supports Sybase Adaptive Server Enterprise versions 12.5 or 15.0.

To let Data Federator connect to your Sybase Adaptive Server Enterprise database, you must have:
• Data Federator Query Server and Sybase Open Client library installed
on the same machine
• your library path set so that Data Federator can find the Sybase Open
Client library

Configuring Sybase connectors

In order to configure a connector for Sybase, you must do the following:
• Install middleware for Sybase.

This middleware may be a driver, a client application, or a combination of both.
• Configure the middleware to point to your database.
• Configure Data Federator to point to the middleware.

The middleware comes with Sybase and lets Data Federator talk to the
database. For details on installing it, see the Sybase documentation.

Once you install and configure the middleware, you can use Data Federator
to connect to Sybase data sources.


Installing middleware to let Data Federator connect to Sybase

To let Data Federator connect to your Sybase database, you must have the
following configuration.
• Data Federator Query Server and Sybase Open Client library, 12.5
or 15.0, must be installed on the same machine
• the Sybase Open Client library must be included in the environment variable that defines the library path

On Windows, this variable is called PATH.

Make sure that your PATH variable contains: C:\sybase\Shared\Sybase Central 4.3;C:\sybase\ua\bin;C:\sybase\OCS-15_0\lib3p;C:\sybase\OCS-15_0\dll;C:\sybase\OCS-15_0\bin;

On Linux and Solaris, this variable is called LD_LIBRARY_PATH:

$ export SYBASE=/opt/sybase
$ export SYBASE_OCS=OCS-15_0
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${SYBASE}/${SYBASE_OCS}/lib:${SYBASE}/${SYBASE_OCS}/lib3p

On AIX, this variable is called LIB_PATH:

$ export SYBASE=/opt/sybase
$ export SYBASE_OCS=OCS-15_0
$ export LIB_PATH=$LIB_PATH:${SYBASE}/${SYBASE_OCS}/lib:${SYBASE}/${SYBASE_OCS}/lib3p

1. Make sure Sybase Open Client is configured to connect to your Sybase server, where the Server name is defined as sybase-server-name.
For example, you can install Open Client Directory Server Editor
(dsedit). Then, use dsedit to add a Server Object and choose a name
for this object, sybase-server-name.

For details on installing the Sybase middleware, see your vendor's documentation: http://infocenter.sybase.com/help/index.jsp


2. In Data Federator Designer, when adding a datasource to your Sybase database, use sybase-server-name as the value of the Server name field.
Give the following information to users of Data Federator Designer who want to connect to this Sybase server.

Parameter Description

Server name
the name defined in the Server name field of the Server object in Sybase Open Client: sybase-server-name

Default database
the name of the database running on the Sybase server: sybase-database

User Name
the name of the user account used to connect to the Sybase database: sybase-username

Password
the password for the user account used to connect to the Sybase database: sybase-password

Related Topics
• http://infocenter.sybase.com/help/index.jsp

List of Sybase resource properties

The table below lists the properties that you can configure in Sybase
resources.

Type Parameter Description
BOOLEAN addCatalog
Set to True if you want to see the
catalog as a prefix for table names.

BOOLEAN addSchema
Set to True if you want to see the
schema as a prefix for table names.

Separated list of values (semi-colon) allowTableType
Lists the table types to take into consideration in the metadata that is retrieved from the underlying database.

Special case: if this attribute is empty (' '), all table types are allowed.

example:

'TABLE;SYSTEM TABLE;VIEW'

One of {configuredIdentity, callerImpersonation, principalMapping} authenticationMode
configuredIdentity: authentication in the database is done using the value of the parameters username and password.

callerImpersonation: authentication in the database is done using the same credential as the one used to connect to the Query Server.

principalMapping: authentication in the database is done using a mapping from the Data Federator user to the user of the database. In this case, the parameter loginDomain should be set to a registered login domain.



Mapping capabilities
Defines what the data source supports in terms of relational operators. It lists all capabilities supported by the database.

Depending on the supported relational operators, Data Federator manages the queries differently. For example, if you specify outerjoin=false, that tells Data Federator Query Server to execute this operator within the Data Federator Query Server engine.

An example is: isjdbc=true;outerjoin=false;rightouterjoin=true.

The Data Federator documentation has a full list of capabilities.

STRING database
Sybase only

the name of the default database

Default value defaultFetchSize


This parameter gives the driver a hint
as to the number of rows that should
be fetched from the database when
more rows are needed.

If the value specified is negative, then


the hint is ignored.

The default value is -1.

BOOLEAN ignoreKeys
Set to True if you do not want the connector to query the data source to get keys and foreign keys metadata.

BOOLEAN isPasswordEncrypted
Set to True if the password is encrypted. The password is defined by the password parameter.

INTEGER maxConnectionIdleTime
The maximum time an idle connection is kept in the pool of connections.

Unit is milliseconds.

The default is 60000 ms (60 s).

0 means no limit.

INTEGER nbPreparedStatementsPerQuery
Maximum number of prepared statements in the query pool.

Pre-defined value networkLayer
Specifies the network layer of the database that you want to connect to. When you create a resource, choose the value that corresponds to the database to which you are connecting.

For Sybase, this should be SybaseOpenClient.

STRING password
Defines the password of the corresponding user.
Note:
This property is a keyword, so you
must enclose it in quotes when using
the ALTER RESOURCE statement,
e.g. "password".



Separated list of values (semi-colon) schema
Defines the schema names or patterns that you access.
Note:
This property is a keyword, so you
must enclose it in quotes when using
the ALTER RESOURCE statement,
e.g. "schema".

You can specify several schemas.


You can also specify wildcards for
schemas.

example:

'T%' = T followed by zero or more characters
'S_' = S followed by any single character

BOOLEAN setFetchSize
Defines if the connector should set
the default fetch size.

BOOLEAN setQuotedIdentifier
Sybase only

Specifies if the quote character (") is used around identifiers.

If true, Data Federator puts quotes around table and column identifiers when it sends queries to Sybase.

If false, Data Federator does not put quotes around identifiers, but this means that identifiers with any complex characters will fail.

The setting setQuotedIdentifier=true corresponds to the statement set quoted_identifier=on in Sybase.

Predefined value sourceType
Identifies the version of the database. For Sybase, possible values are:
• Sybase Adaptive Server 12
• Sybase Adaptive Server 15



Predefined value sqlStringType
Defines the syntax used to generate the SQL string. This parameter lets Data Federator Query Server translate the queries expressed in the Data Federator SQL syntax to the syntax specific to the database.

According to the query language of the database, the possible list of values is: sql92, sql99, jdbc3.

example:

jdbc3 format:

SELECT * from {oj T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2}

SQL92 format:

SELECT * from T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

Predefined value transactionIsolation
Attempts to change the transaction isolation level for connections to the database. The transactionIsolation parameter is used by the connector to set the transaction isolation level of each connection made to the underlying database.

The Data Federator documentation has more details about the transactionIsolation property.

STRING user
Defines the username of the database account.
Note:
This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "user".

example:

ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

True/Yes, False/No supportsboolean
Specifies if the middleware or database supports BOOLEANS as first-class objects. The default value for this parameter depends on the database. For pre-defined resources, this parameter is already set to its correct value, but you can override it. Default: No.

integer maxRows
Lets you define the maximum number of rows you want returned from the database (default value 0: no limit).

boolean allowPartialResults
Must be yes to allow partial results if maxRows is set (yes/no, true/false; default value is no).

Related Topics
• transactionIsolation property on page 478
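The wildcards accepted by the schema property in the table above ('%' and '_') follow SQL LIKE semantics. A sketch of how such a pattern selects schema names (the helper class is hypothetical, shown only for illustration):

```java
import java.util.regex.Pattern;

// Illustrative sketch: translate a SQL LIKE-style schema pattern
// ('%' = zero or more characters, '_' = exactly one character)
// into a regular expression and test schema names against it.
public class SchemaPattern {
    public static boolean matches(String pattern, String schemaName) {
        StringBuilder regex = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '%') regex.append(".*");
            else if (c == '_') regex.append(".");
            else regex.append(Pattern.quote(String.valueOf(c)));
        }
        return schemaName.matches(regex.toString());
    }

    public static void main(String[] args) {
        System.out.println(matches("T%", "TRADES"));   // true
        System.out.println(matches("S_", "SA"));       // true
        System.out.println(matches("S_", "SALES"));    // false
    }
}
```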


Configuring Sybase IQ connectors

Supported versions of Sybase IQ

This version of Data Federator supports Sybase Adaptive Server IQ versions 12.6 or 12.7.

To let Data Federator connect to your Sybase Adaptive Server IQ database, you must install Sybase ODBC 9 for Adaptive Server IQ.

Configuring Sybase IQ connectors

In order to configure a connector for Sybase IQ, you must install an ODBC
driver and create an entry in your operating system's ODBC data source
administrator.
1. Install the ODBC driver for Sybase IQ.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name) entry to point to your database.
Please refer to the vendor documentation for details on this configuration step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Sybase IQ.

Parameter Description
Data Source Name
The name that you defined in your operating system's data source manager, in the field Data Source Name.

List of Sybase IQ resource properties

The table below lists the properties that you can configure in Sybase IQ
resources.


Type Parameter Description


BOOLEAN addCatalog
Set to True if you want to see the
catalog as a prefix for table names.

BOOLEAN addSchema
Set to True if you want to see the
schema as a prefix for table names.

Separated list of values (semi-colon) allowTableType
Lists the table types to take into consideration in the metadata that is retrieved from the underlying database.

Special case: if this attribute is empty (' '), all table types are allowed.

example:

'TABLE;SYSTEM TABLE;VIEW'

One of {configuredIdentity, callerImpersonation, principalMapping} authenticationMode
configuredIdentity: authentication in the database is done using the value of the parameters username and password.

callerImpersonation: authentication in the database is done using the same credential as the one used to connect to the Query Server.

principalMapping: authentication in the database is done using a mapping from the Data Federator user to the user of the database. In this case, the parameter loginDomain should be set to a registered login domain.

Mapping capabilities
Defines what the data source supports in terms of relational operators. It lists all capabilities supported by the database.

Depending on the supported relational operators, Data Federator manages the queries differently. For example, if you specify outerjoin=false, that tells Data Federator Query Server to execute this operator within the Data Federator Query Server engine.

An example is: isjdbc=true;outerjoin=false;rightouterjoin=true.

The Data Federator documentation has a full list of capabilities.

STRING database
Sybase only

the name of the default database

Default value defaultFetchSize


This parameter gives the driver a hint
as to the number of rows that should
be fetched from the database when
more rows are needed.

If the value specified is negative, then


the hint is ignored.

The default value is -1.

BOOLEAN ignoreKeys
Set to True if you do not want the connector to query the data source to get keys and foreign keys metadata.



BOOLEAN isPasswordEncrypted
Set to True if the password is encrypted. The password is defined by the password parameter.

INTEGER maxConnectionIdleTime
The maximum time an idle connection is kept in the pool of connections.

Unit is milliseconds.

The default is 60000 ms (60 s).

0 means no limit.

INTEGER nbPreparedStatementsPerQuery
Maximum number of prepared statements in the query pool.

Pre-defined value networkLayer
Specifies the network layer of the database that you want to connect to. When you create a resource, choose the value that corresponds to the database to which you are connecting.

For Sybase IQ, this should be ODBC.

STRING password
Defines the password of the corresponding user.
Note:
This property is a keyword, so you
must enclose it in quotes when using
the ALTER RESOURCE statement,
e.g. "password".

Separated list of values (semi-colon) schema
Defines the schema names or patterns that you access.
Note:
This property is a keyword, so you
must enclose it in quotes when using
the ALTER RESOURCE statement,
e.g. "schema".

You can specify several schemas.


You can also specify wildcards for
schemas.

example:

'T%' = T followed by zero or more characters
'S_' = S followed by any single character

BOOLEAN setFetchSize
Defines if the connector should set
the default fetch size.



BOOLEAN setQuotedIdentifier
Sybase only

Specifies if the quote character (") is used around identifiers.

If true, Data Federator puts quotes around table and column identifiers when it sends queries to Sybase.

If false, Data Federator does not put quotes around identifiers, but this means that identifiers with any complex characters will fail.

The setting setQuotedIdentifier=true corresponds to the statement set quoted_identifier=on in Sybase.

Predefined value sourceType
Identifies the version of the database. For Sybase IQ, possible values are:
• Sybase ASIQ 12

Predefined value sqlStringType
Defines the syntax used to generate the SQL string. This parameter lets Data Federator Query Server translate the queries expressed in the Data Federator SQL syntax to the syntax specific to the database.

According to the query language of the database, the possible list of values is: sql92, sql99, jdbc3.

example:

jdbc3 format:

SELECT * from {oj T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2}

SQL92 format:

SELECT * from T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

Predefined value transactionIsolation
Attempts to change the transaction isolation level for connections to the database. The transactionIsolation parameter is used by the connector to set the transaction isolation level of each connection made to the underlying database.

The Data Federator documentation has more details about the transactionIsolation property.



STRING user
Defines the username of the database account.
Note:
This property is a keyword, so you must enclose it in quotes when using the ALTER RESOURCE statement, e.g. "user".

example:

ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

True/Yes, False/No supportsboolean
Specifies if the middleware or database supports BOOLEANS as first-class objects. The default value for this parameter depends on the database. For pre-defined resources, this parameter is already set to its correct value, but you can override it. Default: No.

integer maxRows
Lets you define the maximum number of rows you want returned from the database (default value 0: no limit).

boolean allowPartialResults
Must be yes to allow partial results if maxRows is set (yes/no, true/false; default value is no).

Related Topics
• transactionIsolation property on page 478
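The interaction between maxRows and allowPartialResults described in the table above can be sketched as follows (a hypothetical helper; the real check happens inside the Query Server):

```java
import java.util.List;

// Illustrative sketch: with maxRows set, either truncate the result
// (allowPartialResults = yes) or reject it (allowPartialResults = no).
public class RowLimit {
    public static <T> List<T> apply(List<T> rows, int maxRows, boolean allowPartialResults) {
        if (maxRows <= 0 || rows.size() <= maxRows) {
            return rows; // 0 means no limit
        }
        if (!allowPartialResults) {
            throw new IllegalStateException(
                "Result exceeds maxRows=" + maxRows + " and partial results are not allowed");
        }
        return rows.subList(0, maxRows); // keep only the first maxRows rows
    }

    public static void main(String[] args) {
        System.out.println(apply(List.of(1, 2, 3, 4), 2, true)); // [1, 2]
    }
}
```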

Configuring Teradata connectors

Supported versions of Teradata

This version of Data Federator supports Teradata V2R5.1 or V2R6.

To let Data Federator connect to your Teradata database, you must install
a Teradata ODBC driver (versions 3.04 or 3.05).

Configuring Teradata connectors

In order to configure a connector for Teradata, you must install an ODBC driver and create an entry in your operating system's ODBC data source administrator.
1. Install the ODBC driver for Teradata.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name) entry to point to your database.
Please refer to the vendor documentation for details on this configuration step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Teradata.


Parameter Description
Data Source Name
The name that you defined in your operating system's data source manager, in the field Data Source Name.

List of Teradata resource properties

The table below lists the properties that you can configure in Teradata
resources.

Type Parameter Description
BOOLEAN addCatalog
Set to True if you want to see the
catalog as a prefix for table names.

BOOLEAN addSchema
Set to True if you want to see the
schema as a prefix for table names.

Separated list of values (semi-colon) allowTableType
Lists the table types to take into consideration in the metadata that is retrieved from the underlying database.

Special case: if this attribute is empty (' '), all table types are allowed.

example:

'TABLE;SYSTEM TABLE;VIEW'

one of {config authentication


configuredIdentity: authentication in
uredIdentity, Mode
callerImperson
the database is done using the value
ation, princi
of the parameters username and
palMapping} password.

callerImpersonation: authentication
in the database is done using the
same credential as the one used to
connect to the Query Server.

principalMapping: authentication in
the database is done using a map-
ping from Data Federator user to the
user of the database. In this case,
the parameter loginDomain should
be set to a registered login domain.
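As a sketch, a configured-identity setup could be applied with the ALTER RESOURCE statement whose syntax appears in the examples elsewhere in this chapter. The resource name odbc.teradata is an assumption; substitute the name of your own Teradata resource.

```sql
-- Hypothetical resource name "odbc.teradata"; replace it with the
-- name of the Teradata resource in your installation.
ALTER RESOURCE "odbc.teradata" SET authenticationMode 'configuredIdentity'
```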


capabilities (Mapping)
Defines what the data source supports in terms of relational operators.
It lists all capabilities supported by the database.

Depending on the supported relational operators, Data Federator
manages the queries differently. For example, if you specify
outerjoin=false, Data Federator Query Server executes this operator
within the Data Federator Query Server engine.

Example:

'isjdbc=true;outerjoin=false;rightouterjoin=true'

For a list of capabilities, see the list of capabilities in the
reference for JDBC resource properties.

defaultFetchSize (Default value)
This parameter gives the driver a hint as to the number of rows that
should be fetched from the database when more rows are needed.

If the value specified is negative, the hint is ignored.

The default value is -1.
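Applying the ALTER RESOURCE syntax used elsewhere in this chapter, the capability string above could be set as sketched below; the resource name odbc.teradata is an assumption.

```sql
-- Hypothetical resource name; the capability string is the example
-- given in the capabilities description above.
ALTER RESOURCE "odbc.teradata" SET capabilities 'isjdbc=true;outerjoin=false;rightouterjoin=true'
```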

ignoreKeys (BOOLEAN)
Set to True if you do not want the connector to query the data source
to get keys and foreign keys metadata.

isPasswordEncrypted (BOOLEAN)
Set to True if the password is encrypted. The password is defined by
the password parameter.

maxConnectionIdleTime (INTEGER)
The maximum time an idle connection is kept in the pool of connections.

The unit is milliseconds. The default is 60000 ms (60 s). 0 means no
limit.

nbPreparedStatementsPerQuery (INTEGER)
The maximum number of prepared statements in the query pool.

networkLayer (pre-defined value)
Specifies the network layer of the database that you want to connect
to. When you create a resource, choose the value that corresponds to
the database to which you are connecting.

For Teradata, the value should be: Teradata


password (STRING)
Defines the password of the corresponding user.

Note: This property is a keyword, so you must enclose it in quotes
when using the ALTER RESOURCE statement, e.g. "password".

schema (semicolon-separated list of values)
Defines the schema names or patterns that you access.

Note: This property is a keyword, so you must enclose it in quotes
when using the ALTER RESOURCE statement, e.g. "schema".

You can specify several schemas. You can also specify wildcards for
schemas.

Example:

'T%' = T followed by zero or more characters
'S_' = S followed by any single character

setFetchSize (BOOLEAN)
Defines if the connector should set the default fetch size.

sourceType (pre-defined value)
Identifies the version of the database. For Teradata, possible values
are:
• Teradata V2 R5
• Teradata V2 R6
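The schema patterns above could be applied as in the following sketch, using the ALTER RESOURCE syntax shown elsewhere in this chapter; the resource name odbc.teradata is an assumption.

```sql
-- Hypothetical resource name; "schema" must be enclosed in quotes
-- because it is a keyword. Matches schemas starting with T plus
-- two-character schemas starting with S.
ALTER RESOURCE "odbc.teradata" SET "schema" 'T%;S_'
```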

sqlStringType (pre-defined value)
Defines the syntax used to generate the SQL string. This parameter
lets Data Federator Query Server translate the queries expressed in
the Data Federator SQL syntax into the syntax specific to the database.

According to the query language of the database, the possible list of
values is: sql92, sql99, jdbc3

Example:

jdbc3 format:

SELECT * FROM {oj T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2}

SQL92 format:

SELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

transactionIsolation (pre-defined value)
Attempts to change the transaction isolation level for connections to
the database. The transactionIsolation parameter is used by the
connector to set the transaction isolation level of each connection
made to the underlying database.

For details, see transactionIsolation property on page 478.


user (STRING)
Defines the username of the database account.

Note: This property is a keyword, so you must enclose it in quotes
when using the ALTER RESOURCE statement, e.g. "user".

Example:

ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

supportsboolean (True/Yes, False/No)
Specifies if the middleware or database supports BOOLEANs as
first-class objects. The default value for this parameter depends on
the database. For pre-defined resources, this parameter is already
set to its correct value, but you can override it. Default: No.

maxRows (integer)
Lets you define the maximum number of rows you want returned from the
database. (Default value 0: no limit.)

sampleSize (integer)
Lets you define the maximum number of rows to return in a random
sample from the database (Teradata only). (Default value 0: no limit.)

allowPartialResults (boolean)
Must be yes to allow partial results if maxRows is set. (yes/no,
true/false; the default value is no.)
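A row-limiting setup could be sketched as follows with the ALTER RESOURCE syntax used elsewhere in this chapter; the resource name and the value 10000 are assumptions chosen for illustration.

```sql
-- Hypothetical resource name and row limit, shown for illustration only.
ALTER RESOURCE "odbc.teradata" SET maxRows '10000'
ALTER RESOURCE "odbc.teradata" SET allowPartialResults 'yes'
```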

Default values of capabilities in connectors
Capabilities information exported by connectors is as shown in the table
below:

Capabilities / Connectors   Teradata   Sybase ASE   Sybase ASIQ   Informix XPS   Netezza

outerJoin                   false      false        false         false          -
leftOuterJoin               -          false        false         false          -
rightOuterJoin              -          false        false         false          -
minAggregate                -          false        -             -              -
maxAggregate                -          false        -             -              -
avgAggregate                -          false        -             -              -
sumAggregate                -          false        -             -              -
union                       -          -            false         -              -
unionAll                    -          -            false         -              -
countAggregate              -          -            false         -              -
aggregateDistinct           -          -            false         -              -

Configuring connectors that use JDBC


Connectors to JDBC sources are the most common type of connectors that
Data Federator uses.

By convention, the names of JDBC connectors start with the prefix
jdbc. (including the trailing dot). If you create a JDBC connector,
you should maintain this convention.

In order to be usable from Data Federator Designer, JDBC connectors
require some specific properties. See the list of properties for JDBC
resources to learn which properties are required.

Related Topics
• Managing resources and properties of connectors on page 483
• List of JDBC resource properties on page 461

Pointing a resource to an existing JDBC driver

By default, Data Federator looks for JDBC drivers in the directory data-
federator-install-dir/LeSelect/drivers. You can keep the drivers in
a different directory by changing the path in the resource.

• In Data Federator Administrator, use the "Connector Resources" tab to
set the driverLocation property to the name of the directory where you
keep your drivers.
You can also use the ALTER RESOURCE statement to set the driverLocation
property.

For example, to set the directory for the oracle9 resource, use the following
statement.

ALTER RESOURCE "jdbc.oracle.oracle9" SET driverLocation 'C:\drivers\ojdbc14.jar'

Related Topics
• Managing resources using Data Federator Administrator on page 483
• Modifying a resource property using SQL on page 493
• List of JDBC resource properties on page 461

List of JDBC resource properties

The table below lists the properties that you can configure in JDBC resources.


Parameter Description
url
the url to the database

Example

jdbc:oracle:thin:@server.mydomain.com:1521:ora

jdbcClass
This property is required for JDBC resources to
work in Data Federator Designer.

the class name of the JDBC driver used to connect


to the database

Example

oracle.jdbc.driver.OracleDriver

driverLocation
This property is required for JDBC resources to
work in Data Federator Designer.

the location of the JDBC driver used to connect to the database

The location is defined as a list of jar files and directories
separated by the system-dependent path separator (: on UNIX-like
systems, ; on Windows).

Example

/usr/local/javaapps/oracle_classes12.zip
or C:\DRIVERS\oracle_classes12.zip
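A minimal sketch of setting the required JDBC properties on a resource, using the ALTER RESOURCE syntax shown in this chapter. The resource name jdbc.myresource is an assumption; the class, URL, and driver path reuse the Oracle examples above.

```sql
-- Hypothetical resource name; the values are the Oracle examples
-- given in the property descriptions above.
ALTER RESOURCE "jdbc.myresource" SET jdbcClass 'oracle.jdbc.driver.OracleDriver'
ALTER RESOURCE "jdbc.myresource" SET url 'jdbc:oracle:thin:@server.mydomain.com:1521:ora'
ALTER RESOURCE "jdbc.myresource" SET driverLocation 'C:\DRIVERS\oracle_classes12.zip'
```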

driverProperties
a list of driver properties

Elements are separated by the character ;. Do not put spaces between
the elements.
Example

selectMethod=cursor;connectionRetryCount=2

Parameter Description
user
the username of the database account

Example

smith

password
the password of the corresponding user account

isPasswordEncrypted
specifies if the password has been encrypted by
Data Federator Designer

authenticationMode
one of: {configuredIdentity, callerImpersonation,
principalMapping}
1. configuredIdentity: Authentication on the database is done using
the values of the parameters username and password.
2. callerImpersonation: Authentication on the
database is done using the same credential as
used to connect to Query Server.
3. principalMapping: Authentication on the database
is done using a mapping from the user of the
connector (principal) to a database user account.
In this case, the parameter loginDomain should
be set to a registered login domain.

loginDomain
the name of a login domain

Used only when authenticationMode=principalMapping. It identifies the
set of credentials to use when connecting to the underlying database.
This login domain should have been previously registered in Query
Server (using the stored procedure addLoginDomain).
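A principal-mapping setup could be sketched as follows with the ALTER RESOURCE syntax used in this chapter; the resource name and the login domain name mydomain are assumptions.

```sql
-- Hypothetical resource and login domain names, for illustration only.
-- The login domain must already be registered in Query Server.
ALTER RESOURCE "jdbc.myresource" SET authenticationMode 'principalMapping'
ALTER RESOURCE "jdbc.myresource" SET loginDomain 'mydomain'
```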

supportsCatalog
specifies if the JDBC driver supports the notion
of catalog

The default is true.


Parameter Description
escapeIdentifierQuoteString
defines the string used to escape the identifier quote string (as
returned by DatabaseMetaData#getIdentifierQuoteString) when it
appears inside an identifier

By default, this escape string is set to the identifier quote string
itself. If set to '' (empty), no escape will be done.

addCatalog
specifies if Data Federator should prefix table
names with the name of the catalog

If supportsCatalog is false, this parameter is ignored.

Possible values are true, yes, false or no. The default value is
false.

supportsSchema
specifies if the JDBC driver supports the notion
of schema

The default value is true.

schema
the schema or schema pattern, or the list of
schema or schema patterns you want to access

Schemas and schema patterns can be separated with one of the following
characters: ',', ';' or ' ' (space). If supportsSchema is false, this
parameter is ignored. If empty or not present, it defaults to null
(equivalent to the pattern %).

Example

schema="SMITH", schema="SMITH;JOHN",
schema="SM%"
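The same property could also be set afterwards with the ALTER RESOURCE syntax used in this chapter; the resource name jdbc.myresource is an assumption, and the pattern reuses the example above.

```sql
-- Hypothetical resource name; "schema" is quoted because it is a
-- keyword. Matches all schemas whose names start with "SM".
ALTER RESOURCE "jdbc.myresource" SET "schema" 'SM%'
```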

Parameter Description
addSchema
specifies if Data Federator should prefix tables
with the name of the schema
If supportsSchema is false, this parameter is ignored.

Possible values are true, yes, false or no. The default value is true
if multiple schemas are specified or the attribute schema is empty
(or undefined); false otherwise.

showAllTables
specifies if Data Federator should show all tables
from the selected schemas or schema patterns

If showAllTables is false, only the tables that you selected when
defining the datasource are returned.

Possible values are true, yes, false or no. The default value is
true.

ignoreKeys
specifies if the wrapper should not query the JDBC
driver to get key or foreign key metadata

The Sun JDBC-ODBC bridge does not support such calls, and this option
should be set to true.

Possible values are true, yes, false or no. The default value is
false.

supportsBoolean
specifies if the JDBC driver or database supports booleans as
first-class objects

The default value for this parameter depends on the database. If this
is one of the supported source types, this parameter is already set
to its correct value. However, it can be overridden.

Possible values are true, yes, false or no. The default value is
false.


Parameter Description
trimTrailingSpaces
specifies if Data Federator should remove extra
spaces from catalog, schema, table, column, key
and foreign key names
Some JDBC drivers return metadata padded with blank spaces. Setting
this parameter to yes will ensure that extra spaces in catalog,
schema, table, column, key and foreign key names are removed.

Possible values are true, yes, false or no. The default value is
false.

collationName
Deprecated: use datasourceBinaryCollation instead.

the source collation to use for binary operations

Example

collationName="latin1_bin"

datasourceBinaryCollation
the source collation to use for comparisons that need to be evaluated
with a binary collation (like, not like and function evaluations)

This is used for SQL Server and MySQL to add a collate clause in
queries where the semantics of binary collation are required. If
unset, no collate clause is generated for these operations.

Unset by default.

Example

datasourceBinaryCollation="Latin1_general_bin"

Parameter Description
datasourceCompCollation
the source collation to use for comparisons (other than like, not
like and function evaluations)

This is used for SQL Server and MySQL to add a collate clause in
queries. If unset, no collate clause is generated for these
operations.

Unset by default.

Example

datasourceCompCollation="Latin1_general_ci_ai"

datasourceSortCollation
the source collation to use for sort operations (order by)

This is used for SQL Server and MySQL to add a collate clause in
queries. If unset, no collate clause is generated for these
operations.

Unset by default.

Example

datasourceSortCollation="Latin1_general_ci_as"

compCollationCompatible
specifies if the collation for comparison operations in the data
source is compatible with the current setting in Query Server

When set to true, the server can ignore the collation of comparison
operations and predicates can be safely pushed on the source.

Possible values are true, yes, false or no. The default value is
false.

Example

compCollationCompatible="true"


Parameter Description
sortCollationCompatible
specifies if the collation for sort operations (order by) in the data
source is compatible with the current setting in the Query Server

When set to true, the server can ignore the collation of sort
operations and (order by) expressions can be safely pushed on the
source.

Possible values are true, yes, false or no. The default value is
false.

Example

sortCollationCompatible="true"

sqlStringType
identifies the SQL dialect supported by the
database

one of:
• sql92
• sql99 (reserved for future usage)
• oracle8
• oracle9
• jdbc3 (JDBC syntax is used for outer joins)
• sas
Defaults to the SQL dialect supported by the
source as identified by the parameter sourceType.
If sourceType is undefined, then defaults to sql92.

Parameter Description
sourceType
identifies the database

one of:
• oracle8R1
• oracle8R2
• oracle8R3
• oracle9R1
• oracle9R2
• sqlserver
• mysql
• db2
• access
• progress
• openedge
• sybase
• teradata
• sas

castColumnType
a list of mappings from the type used by the
database to the type used by JDBC

These should be in the form databasetype=jdbctype.

This is useful when the default mapping done by the driver is
incorrect or incomplete.
Note:
For officially supported databases the type map-
pings are set implicitly, but you can override them.

Example

for Oracle JDBC driver FLOAT=FLOAT;BLOB=BLOB


Parameter Description
allowTableType
a list of table types to take into consideration when the metadata of
the underlying database is retrieved

Elements are separated by the character ;. Do not put spaces between
the elements.

Special case: if this attribute is '' (empty), all table types are
allowed.

Example

TABLE;SYSTEM TABLE;VIEW

capabilities
a list of all capabilities supported by the database

Elements are separated by the character ;. Do not put spaces between
the elements.

Example

isjdbc=true;outerjoin=false;rightouterjoin=true

nbPreparedStatementsPerQuery
defines the maximum number of statements that can be used concurrently
when executing parameterized queries

useParameterInlining
specifies whether the JDBC connector should use
java.sql.PreparedStatement or java.sql.Statement objects to execute
parameterized queries

When set to true, the JDBC connector uses java.sql.Statement objects
to execute parameterized queries. The parameterized query is inlined,
replacing placeholders with constant values. This option is useful
for JDBC drivers that do not support prepared statements well (for
example, Progress OpenEdge). The default value is false.

Parameter Description
transactionIsolation
the transaction isolation level

one of:
• TRANSACTION_READ_COMMITTED
• TRANSACTION_READ_UNCOMMITTED
• TRANSACTION_REPEATABLE_READ
• TRANSACTION_SERIALIZABLE
Default: not set.

defaultFetchSize
the default fetch size to set when creating statement objects

Default: not set.

setFetchForwardDirection
specifies if fetch forward should be explicitly set

Possible values are true, yes, false or no. The default value is
false.

setReadOnly
specifies if connections should be set to read-only

Possible values are true, yes, false or no. The default value is
false.


Parameter Description
sessionProperties
a list of session variables set on the database

Elements are separated by the character ;. Do not put spaces between
the elements.

Example

selectMethod=cursor;connectionRetryCount=2

useIndexInOrderBy
specifies if index (column position) should be used
instead of alias (column name) in the order by
clause of submitted queries

The default value is false (except for databases which do not handle
aliases in the order by clause well).

Example

If we order by columns 2 and 3, we generate ORDER BY 2, 3 instead of
ORDER BY C2, C3.

translationFile
the name of the XML file which contains translation definitions

The value can be absolute or relative to
data-federator-install-dir. If this parameter is not specified, the
default file will be used.

List of JDBC resource properties for connection pools

The table below lists the connection pool properties that you can
configure in JDBC resources.

Parameter Description
maxIdlePools
the maximum number of pools that can be kept
idle
If this value is reached, the oldest unused pool is
closed and removed. 0 means no limit. The default
value is 24.

maxConnections
the maximum number of simultaneous connections to the underlying
database

The value 0 means no limit. The default value is 0.

maxPoolSize
the maximum number of idle (free) connections
to keep in the pool

The value 0 means no limit. The default value is 32.

maxLoadPerConnection
the maximum load authorized for each connection

This value can be used to control the maximum number of cursors open
per connection. 0 means no limit. The default value is 0.

maxConnectionIdleTime
the maximum time an idle connection is kept in
the pool of connections

The unit is milliseconds. 0 means no limit. The default value is
60000 (60 seconds).
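Pool sizing could be adjusted as sketched below with the ALTER RESOURCE syntax used in this chapter; the resource name and the sizes 16 and 8 are assumptions chosen for illustration.

```sql
-- Hypothetical resource name and pool sizes, for illustration only.
ALTER RESOURCE "jdbc.myresource" SET maxConnections '16'
ALTER RESOURCE "jdbc.myresource" SET maxPoolSize '8'
```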

reaperCycleTime
Deprecated. There is now only one reaper for all JDBC connector
configurations. The system parameter
leselect.core.jdbc.reaperCycleTime can be used to control how often
the connection reaper should check for idle connections, or bad
connections (those that are suspected to be broken due to connection
failure).


Parameter Description
connectionTestQuery
the SQL test query that can be used to check if
connections to the underlying database are valid
Caution: this query should be cheap to execute.

Example

An example of a test query for Oracle could be SELECT 1 FROM DUAL. An
empty string means no test query. The default value is empty.
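Setting the Oracle test query from the example above could be sketched as follows; the resource name jdbc.oracle.oracle9 is reused from the driverLocation example earlier in this chapter.

```sql
-- The test query is the Oracle example given above; keep test
-- queries cheap to execute.
ALTER RESOURCE "jdbc.oracle.oracle9" SET connectionTestQuery 'SELECT 1 FROM DUAL'
```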

connectionFailureDetectionOnError
a keyword indicating the kind of connection failure detection that
should be done when an SQLException occurs
• sqlState: specifies that failure detection should be done using
SQLState codes

SQLState codes for connection failures have a 2-character class 08.
Some JDBC drivers do not comply with this standard; in this case, the
parameter connectionFailureSQLStates can be used to specify the list
of all specific SQLState codes returned by the driver.
• testQuery: specifies that failure detection should be done using
the test query defined by the parameter connectionTestQuery

The default value is sqlState.

connectionFailureSQLStates
the list of specific SQLState codes that can be used to detect a
connection failure when an SQLException is thrown by the underlying
database

Standard codes for connection failures (starting with the
two-character class 08) do not need to be specified here. An example
of a specific code for Oracle is 61000 (ORA-00028: your session has
been killed). Elements are separated by the character ;. Do not put
spaces between the elements. The default value is empty.

Example

61000

Related Topics
• List of JDBC resource properties on page 461

List of common JDBC classes

The table below lists the most common JDBC driver classes and their
syntax. You can use these values when configuring the JDBC resource
property named jdbcClass.

Database name    JDBC attributes

Access           sun.jdbc.odbc.JdbcOdbcDriver

DB2              com.ibm.db2.jcc.DB2Driver

MySQL            com.mysql.jdbc.Driver

Oracle           oracle.jdbc.driver.OracleDriver

Progress         com.ddtek.jdbc.sequelink.SequeLinkDriver

SAS              com.sas.net.sharenet.ShareNetDriver

SQLServer        com.microsoft.jdbc.sqlserver.SQLServerDriver

SQLServer 2005   com.microsoft.sqlserver.jdbc.SQLServerDriver

Related Topics
• List of JDBC resource properties on page 461

List of pre-defined JDBC URL templates

The table below defines the default values used by the pre-defined databases.
You can use these when configuring the JDBC resource property named
urlTemplate.

Database name    URL template

Access           jdbc:odbc:<ODBC_DSN>

DB2              jdbc:db2://hostname[:port]/databasename

MySQL            jdbc:mysql://hostname[:port]/databasename

Oracle 8         jdbc:oracle:thin:@hostname[:port]:databasename

Oracle 9         jdbc:oracle:thin:@hostname[:port]:databasename

Oracle 10        jdbc:oracle:thin:@hostname[:port]:databasename

Progress         jdbc:sequelink://hostname[:port];serverDataSource=sequelinkdatasourcename

SAS              jdbc:sharenet://hostname:port

SQLServer 2000   jdbc:microsoft:sqlserver://hostname[:port];databasename=databasename

SQLServer 2005   jdbc:sqlserver://hostname[:port];databasename=databasename

Related Topics
• List of JDBC resource properties on page 461

transactionIsolation property

The transactionIsolation parameter attempts to change the transaction
isolation level for connections to the database. It is used by the
JdbcWrapper (JDBC connector) to set the transaction isolation level
of each connection made to the underlying database.

The exact meaning of each level is defined in the JDBC specification.

Predefined value

Expected values: {TRANSACTION_READ_COMMITTED |
TRANSACTION_READ_UNCOMMITTED | TRANSACTION_REPEATABLE_READ |
TRANSACTION_SERIALIZABLE}

Example: Improving performance using transactionIsolation
Use the transactionIsolation parameter to enhance the performance of
the connector if the database supports the setting of such levels. If
you can tolerate dirty reads, non-repeatable reads and phantom reads,
set the transaction isolation to TRANSACTION_READ_UNCOMMITTED. You
can expect better performance.
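Such a setting could be applied with the ALTER RESOURCE statement used elsewhere in this chapter; the resource name jdbc.myresource is an assumption.

```sql
-- Hypothetical resource name, for illustration only. Allows dirty
-- reads in exchange for better performance.
ALTER RESOURCE "jdbc.myresource" SET transactionIsolation 'TRANSACTION_READ_UNCOMMITTED'
```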

urlTemplate

This property defines the template of the JDBC URL used to connect to the
database.

This value is a hint. Data Federator Designer shows the value of this property
while adding a datasource, and users of Data Federator Designer complete
the remaining values.

For example, if the value of urlTemplate is jdbc:hostname, users of
Data Federator Designer must replace hostname by the correct host.

String

Example: urlTemplate property


jdbc:oracle:thin:@server.mydomain.com:1521:mydatabase

Depending on the database, the URL follows a specific template. The
template is a character sequence with or without variables.

For the URL templates of common JDBC resources, see List of pre-defined
JDBC URL templates on page 476.

Configuring connectors to web services


You can configure the web service using the properties in the web service
resource. The web service resource is called ws.generic.


You can configure this resource as you would configure any resource in Data
Federator Administrator.

Note:
Any changes that you make to the resource ws.generic will apply to all web
services that are deployed on your installation of Data Federator Query
Server.

Related Topics
• Creating and configuring a resource using Data Federator Administrator
on page 486
• List of resource properties for web service connectors on page 480

List of resource properties for web service connectors

The table below lists the properties that you can configure in web service
resources.

Parameter: addNamespaceInParameter
Default value: true
Description:

By default, Data Federator decides if it should add namespaces before
parameters in SOAP requests.

You can use the parameter addNamespaceInParameter to deactivate the
default behavior and force Data Federator to suppress namespaces.

For example, if you have the following SOAP body with the single
parameter Symbol:

<soapenv:Body>
  <ns:GetQuote xmlns:ns="http://www.xignite.com/services/">
    <ns:Symbol>SAP</ns:Symbol>
  </ns:GetQuote>
</soapenv:Body>

Then if addNamespaceInParameter is true, Data Federator detects if
the web service expects a namespace. If so, it generates the line
containing the parameter as:

<ns:Symbol>SAP</ns:Symbol>

If addNamespaceInParameter is false, Data Federator will generate the
line containing the parameter as:

<Symbol>SAP</Symbol>

Managing resources and properties of connectors

Resources are used for configuring connectors in a flexible way. A
resource is a set of properties with a name. The name usually
corresponds to a connector for which the resource contains
properties, like jdbc.sas or jdbc.mysql.

You can set the properties of connectors by using one of the pre-defined
resources. Resources let you re-use the same set of properties for different
connectors.

You can also make your own resources. When you make a resource, you
define properties, the values of the properties, and then you choose a name
for your set of properties. See the documentation on managing resources
and properties for details.

Note:
In order to use a resource to make a connection, you must install drivers for
your sources of data.

Managing resources using Data Federator Administrator

Data Federator Administrator provides a window where you manage
resources.


To access the Data Federator Administrator interface for managing
resources, log in to Data Federator Administrator, click the
Administration tab, then click Connector Settings.

You must be logged in with an administrator user account to change
the accounts of other users. If you log in as a regular user, you can
only change your own information.

The following table summarizes how to manage resources on the
Connector Resources tab.

Table 12-8: Connector resources tab summary

Task Actions

Create a resource

Delete a resource

Copy a resource

Add a property to a resource Click Add a property.

Delete a property

Impact of modifications to a resource on a configured connector


When properties of a resource are modified, the changes are made visible
the next time the datasource is accessed. This allows you to update an
existing connector configuration dynamically and in a flexible way.

The values that users define when adding a datasource in Data Federator
Designer override the values defined in the resource.


Related Topics
• Data Federator Administrator overview on page 384
• Creating and configuring a resource using Data Federator Administrator
on page 486
• Copying a resource using Data Federator Administrator on page 488

Valid names for resources

Your resource name must begin with a letter [a-zA-Z]. The characters that
follow can be any number of alphanumeric [a-zA-Z0-9], dot . and
underscore _, in any order, but each dot must be immediately followed by
an alphanumeric or underscore.

A resource name must start with a prefix that identifies the type of
the resource.

Available prefixes are:


• jdbc
• odbc
• openclient
• ws

Example: Valid names for resources


• jdbc.My.Resource1
• jdbc.My_Resource1_
• odbc.My_Resource___for___My_Database
• odbcMy.Resource.for.My.Database

Creating and configuring a resource using Data Federator Administrator

You can create a new, empty resource, and then add the parameters
that you require.

1. Log in to Data Federator Administrator.
The Data Federator Administrator screen is displayed.
2. At the top of the screen, click the Administration tab.
The Administration panel appears. At the left of the panel, the list of
administration options appears.
3. In the list of administration options, click the Connector Settings option.
The "Resource" panel appears.
4. At the right of the Resource pull-down list, click the New icon.
A dialog box appears, prompting you to enter a name for your new
resource.
5. Enter a name for your resource and click OK.
The dialog box closes.
6. Below the Property Name heading, click Add a property.
7. From the Property Name pull-down list, select a property to add, and in
the Property Value field, enter a value for the property. Click OK to add
the property.
Data Federator Administrator validates your entry and displays an error
message if it is invalid for the property.
8. Repeat the process to add the properties that you want.
In order to be usable from Data Federator Designer, JDBC connectors
require some specific properties. See the list of properties for JDBC
resources to learn which properties are required.

When you finish, the new resource that you created is available to use in
your Data Federator Designer projects.
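As an alternative to the interface, the same setup can be scripted with the SQL statements described in "Managing resources using SQL" later in this chapter. The sketch below assumes a hypothetical resource name and connection values; DRIVERLOCATION, user, password and schema are properties shown elsewhere in this guide, and your driver may need others from the list of JDBC resource properties:

```sql
CREATE RESOURCE "jdbc.my_new_resource"

ALTER RESOURCE "jdbc.my_new_resource" SET DRIVERLOCATION 'C:\drivers\my-jdbc-driver.jar'
ALTER RESOURCE "jdbc.my_new_resource" SET "user" 'dbuser'
ALTER RESOURCE "jdbc.my_new_resource" SET "password" 'dbpassword'
ALTER RESOURCE "jdbc.my_new_resource" SET "schema" 'myschema'
```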

Related Topics
• Managing resources using Data Federator Administrator on page 483
• Valid names for resources on page 486
• List of JDBC resource properties on page 461


Copying a resource using Data Federator Administrator

You can create a new resource by copying an existing resource, and then
adding and modifying the parameters that you require.
1. Log in to Data Federator Administrator.
The Data Federator Administrator screen is displayed.
2. At the top of the screen, click the Administration tab.
The Administration panel is displayed. At the left of the panel, the list
of administration options is displayed.
3. In the list of administration options, click the Connector Settings option.
The "Resource" panel is displayed.
4. In the Resource pull-down list, select the resource to copy, and click the Copy icon.
A dialog box appears, prompting you to enter a name for the new copy.
5. Enter a name for the resource copy and click OK.
The properties configured in the copied resource are displayed.
6. Add, edit, and delete properties as required to configure the new resource.
When you finish editing the properties, click OK.
7. Repeat the process to add and modify the properties that you want.
In order to be usable from Data Federator Designer, JDBC connectors
require some specific properties. See the list of properties for JDBC
resources to learn which properties are required.

When you finish, the new resource that you created is available to use in
your Data Federator Designer projects.
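The copy operation has an SQL equivalent, using the CREATE RESOURCE ... FROM syntax described later in this chapter. A sketch with hypothetical names and values:

```sql
CREATE RESOURCE "jdbc.my_copy" FROM "jdbc.mysql"

ALTER RESOURCE "jdbc.my_copy" SET "user" 'otheruser'
ALTER RESOURCE "jdbc.my_copy" SET "password" 'otherpassword'
```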

Related Topics
• Managing resources using Data Federator Administrator on page 483
• Valid names for resources on page 486

• List of JDBC resource properties on page 461

List of pre-defined resources

Data Federator delivers a set of pre-defined resources for the most popular databases. This table lists the names of those that are available with the installation.

Table 12-9: Pre-defined resources

Name of source               Type of driver or middleware       Name of resource
Access                       JDBC                               jdbc.access
DB2                          JDBC                               jdbc.db2
DB2 iSeries (as400)          JDBC                               jdbc.db2.iSeries
DB2 zSeries (os390)          JDBC                               jdbc.db2.zSeries
IBM Informix XPS             ODBC                               odbc.informix.informixXPS85
                                                                odbc.informix.informixXPS84
MySQL                        JDBC                               jdbc.mysql
Netezza                      ODBC                               odbc.netezza
Oracle 8                     JDBC                               jdbc.oracle.oracle8
Oracle 9                     JDBC                               jdbc.oracle.oracle9
Oracle 10                    JDBC                               jdbc.oracle.oracle10
Progress OpenEdge            Progress through JDBC to ODBC      jdbc.progress.openedge
                             bridge
SAS                          JDBC through SAS/SHARE server      jdbc.sas.sas9
SQL Server                   JDBC                               jdbc.sqlserver
SQL Server 2005              JDBC                               jdbc.sqlserver.sqlserver2005
Sybase                       native library (Open Client)       openclient.sybase.sybaseASE12
                                                                openclient.sybase.sybaseASE15
Sybase Adaptive Server IQ    ODBC                               odbc.sybase.sybaseASIQ12
Teradata                     ODBC                               odbc.teradata.teradataV2R5
                                                                odbc.teradata.teradataV2R6
Generic ODBC                 ODBC                               odbc.generic.odbc
                                                                odbc.generic.odbc3
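A pre-defined resource is often most useful as a template: copy it, then adjust only the properties that differ for your environment. A sketch using one of the pre-defined names, where the name of the copy is hypothetical:

```sql
CREATE RESOURCE "jdbc.oracle.oracle10.custom" FROM "jdbc.oracle.oracle10"
```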

Managing resources using SQL

As an alternative to using the Data Federator Administrator to manage resources, you can also manage resources using an extension of the standard SQL-92 syntax from a command line or other tool.

For example, the My Query Tool tab in Data Federator Administrator lets you create, delete, and modify resources and their properties.

In order to be usable from Data Federator Designer, JDBC connectors require some specific properties. See the list of properties for JDBC resources to learn which properties are required.

Related Topics
• Managing resources using Data Federator Administrator on page 483
• Data Federator Administrator overview on page 384
• List of JDBC resource properties on page 461

Creating a resource using SQL

You can create resources using one of the following statements.

CREATE RESOURCE resource_name

This statement lets you create a new empty resource. Use the ALTER RESOURCE statement to define associated properties.


Syntax
Figure 12-3: CREATE RESOURCE statement

CREATE RESOURCE "new_resource_name" [ FROM "existing_resource_name" ]

Example: Creating a resource

CREATE RESOURCE MyResource

CREATE RESOURCE new_resource_name FROM existing_resource_name

Example: Creating a resource by copying

CREATE RESOURCE "jdbc.oracle.oracle10" FROM "jdbc.oracle.oracle9"

Tip:
Quickly duplicating an existing resource
Using the FROM existing_resource_name clause in this statement lets you create a new resource with a set of initial properties. The new resource inherits all the properties of the existing resource. This is an easy way to duplicate resources.

Related Topics
• Valid names for resources on page 486

Deleting a resource using SQL

This statement lets you delete an existing resource.


Figure 12-4: DROP RESOURCE statement

DROP RESOURCE "existing_resource_name"

Example: Deleting a resource

DROP RESOURCE "jdbc.oracle.oracle10"


Modifying a resource property using SQL

This statement lets you define or modify a property of a given resource. If the property is undefined, it is created. If the property is already defined, its value is updated with the new value.

ALTER RESOURCE "resource_name" SET property_name property_value

ALTER RESOURCE "resource_name" SET "keyword_property_name" property_value

Figure 12-5: ALTER RESOURCE SET statement

ALTER RESOURCE "existing_resource_name" SET { property_name 'property_value' | "keyword_property_name" 'property_value' }

Note:
When the property name is also a keyword, such as password, schema or user, then you must enclose the property name in quotes.

Example: Modifying a resource

ALTER RESOURCE "jdbc.mysql" SET DRIVERLOCATION 'C:\Program Files\MySQL\mysql-connector-java-5.1\mysql-connector-java-5.1.jar'

This statement defines the location of the JDBC driver associated with the resource for MySQL.

ALTER RESOURCE "jdbc.myresource" SET "password" 'newpassword'

ALTER RESOURCE "jdbc.myresource" SET "schema" 'myschema'

ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'


Deleting a resource property using SQL

This statement lets you delete an existing property of a given resource.

ALTER RESOURCE "resource_name" RESET property_name

ALTER RESOURCE "resource_name" RESET "keyword_property_name"

Figure 12-6: ALTER RESOURCE RESET statement

ALTER RESOURCE "existing_resource_name" RESET { property_name | "keyword_property_name" }

Note:
When the property name is also a keyword, such as password, schema or user, then you must enclose the property name in quotes.

Example: Deleting a resource property

ALTER RESOURCE "jdbc.oracle.oracle10" RESET driverLocation

This statement deletes the property that defines the JDBC driver location.

ALTER RESOURCE "jdbc.myresource" RESET "password"

ALTER RESOURCE "jdbc.myresource" RESET "schema"

ALTER RESOURCE "jdbc.myresource" RESET "user"

System tables for resource management

Once you define and edit resources, you can verify the metadata that Data
Federator Query Server has stored for these objects by querying the system
tables.

For a detailed reference of the available system tables related to resources, click the links below or see System table reference on page 728.
• resources on page 739
• resourceProperties on page 739
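For example, listing the registered resources is a single query against the resources system table; this is a sketch, and the exact column layout is described in the system table reference:

```sql
SELECT * FROM /leselect/system/resources
```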

These system tables are stored in the catalog called leselect and the schema
called system.

Using a system table to check the property of a resource

This example shows how to check the location of the JDBC driver file that
Data Federator uses to connect to your MySQL resource.
1. Open the Data Federator Administrator, as described in Starting Data Federator Administrator on page 384.
2. Click the tab SQL.
3. Type the query to check the property.
For example, to check the property that defines the location of the JDBC driver for a MySQL database, type:

SELECT * FROM /leselect/system/resourceProperties WHERE RESOURCE_NAME='jdbc.mysql' AND PROPERTY_NAME='DRIVERLOCATION';

Data Federator Administrator displays the value of the property of the resource.

For the other operations that you can perform on the resources, see
Managing resources using SQL on page 491.
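Dropping the PROPERTY_NAME condition from the query in the example above turns it into a quick audit of every property defined on a resource:

```sql
SELECT * FROM /leselect/system/resourceProperties WHERE RESOURCE_NAME='jdbc.mysql';
```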

For the resource properties that are available for JDBC connectors, see List of JDBC resource properties on page 461.

Collation in Data Federator


Collation is a set of rules that determines how data is sorted and compared.

Data Federator and the database systems that it accesses sort and compare
character data using rules that define the correct sequence of characters.
For most database systems, you can configure options to specify whether
the database system should consider upper or lower case, accent marks,
character width or types of kana characters.

Case sensitivity
If a system treats the character M the same as the character m, then the
system is case-insensitive. A computer treats M and m differently because


it uses ASCII codes to differentiate the input. The ASCII value of M is 77,
while m is 109.

Accent sensitivity
If a system treats the character a the same as the character á, then the system is accent-insensitive. A computer treats a and á differently because it uses character codes to differentiate the input. The Latin-1 code of a is 97, while á is 225.

Kana sensitivity
When the Japanese kana characters Hiragana and Katakana are treated differently, the system is kana-sensitive.

Width sensitivity
When a single-byte character (half-width) and the same character represented as a double-byte character (full-width) are treated differently, the system is width-sensitive.

Related Topics
• Supported Collations in Data Federator on page 496
• How Data Federator decides how to push queries to sources when using
binary collation on page 500

Supported Collations in Data Federator

The following collations are supported in Data Federator:

binary
Unicode binary sort (or compatible with Unicode binary sort; for example, sort on the ASCII charset is compatible with sort on the Unicode charset)
locale_AI_CI
locale, accent insensitive, case insensitive
locale_AI_CS
locale, accent insensitive, case sensitive
locale_AS_CI
locale, accent sensitive, case insensitive
locale_AS_CS
locale, accent sensitive, case sensitive

where locale is defined as LN_CY with
• LN - ISO language code (for example, en)
• CY - ISO country code (for example, US)

Note:
All the Data Federator collations are kana-insensitive and width-insensitive.

Example:
en_US_AS_CI - English, US, accent sensitive, case insensitive

Related Topics
• Collation in Data Federator on page 495

Setting string sorting and string comparison behavior for Data Federator SQL queries

You can use the sort and comp parameters to set how Data Federator treats
sorting and comparison for strings.

The sort parameter is used to define how strings will be sorted by Data
Federator Query Server. The value of the sort parameter is one of the
supported collation values. The default is binary.

The comp parameter is used to define how strings will be compared in SQL queries. The value of the comp parameter is either:

• one of the supported collation values
• the keyword Linguistic: in this case, the collation used is the collation defined by the sort parameter.

The sort and comp parameters can be defined as a session parameter, a system parameter, or as a property of a user account.


• If the sort or comp parameter is defined in session parameters, this value will be used for the current connection.
• If not defined in session parameters, the sort or comp property of the user account will be used for the current connection.
• If not defined as a property of the current user account, the sort or comp system parameter will be used for the current connection.

The values of the sort and comp parameters have an impact on the result of SQL operations applied to string values. An operation might be a function, a SQL operator such as GROUP BY or ORDER BY, or a filter expression such as T.A < 'e'. The table below summarizes the SQL operators that are sensitive to the comp and sort parameters:

SQL Expressions                    Sensitivity

=, !=, <, >, <=, >=                comp sensitive
BETWEEN, NOT BETWEEN               comp sensitive
CASE                               comp sensitive
DISTINCT                           comp sensitive
GROUP BY                           comp sensitive
HAVING                             comp sensitive
IN, NOT IN                         comp sensitive
LIKE, NOT LIKE                     insensitive: binary only
ORDER BY                           sort sensitive
UNION ALL                          insensitive

SQL Functions                      Sensitivity

MAX, MIN                           comp sensitive
Data Federator string functions    insensitive: binary only

Example:

SELECT LASTNAME, count(*)
FROM EMPLOYEE E
WHERE DEPARTMENT_NAME = 'Sales'
GROUP BY LASTNAME

Table 12-11: Employee table

LASTNAME    FIRSTNAME    SALARY    DEPARTMENT_NAME
Smith       John         6000      Sales
SmIth       Jo           4000      Sales
Smith       John         2000      SaLes
Smith       Albert       7000      Sales

When the comp parameter is en_US_AS_CS, the result is:

Smith 2
SmIth 1

When the comp parameter is en_US_AI_CI, the result is:

Smith 4
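The same comp sensitivity applies to the other operators listed in the table above. For example, a DISTINCT query over the sample Employee table (a sketch, not taken from the guide) returns a different number of rows depending on the comp parameter:

```sql
SELECT DISTINCT LASTNAME FROM EMPLOYEE
-- With comp = en_US_AS_CS, Smith and SmIth are distinct values: two rows.
-- With comp = en_US_AI_CI, they compare as equal: one row.
```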

Related Topics
• Collation in Data Federator on page 495
• Supported Collations in Data Federator on page 496
• Managing parameters using Data Federator Administrator on page 554
• Modifying properties of a user account with SQL on page 517

How Data Federator decides how to push queries to sources when using binary collation

The optimizer in Data Federator Query Server performs pushdown analysis to decide if an SQL operation can be pushed down to a data source.

When collations are binary, Query Server decides whether or not to push a
subquery on a particular datasource by examining only the SQL capabilities
of the source of data.

Thus, in the general case, Query Server assumes that the underlying source
of data is using a default collation that is compliant with the binary collation
in Data Federator.

For SQL Server, MySQL and Oracle only, it is possible to force Data Federator Query Server to use binary collation even if the default collation on the source is not compliant with binary collation. (See the MySQL, SQL Server and Oracle sections for details of how to configure resource parameters for binary collation.)

Related Topics
• Collation in Data Federator on page 495
• Setting string sorting and string comparison behavior for Data Federator
SQL queries on page 497
• Supported Collations in Data Federator on page 496
• Specific collation parameters for SQL Server on page 431
• Specific collation parameters for MySQL on page 411
• Specific collation parameters for Oracle on page 412



13 Managing user accounts and roles

About user accounts, roles, and privileges


You use Data Federator Administrator to manage user accounts and roles,
and the privileges they have, for Data Federator Query Server:
• A user account is a login ID with permissions that allow it to perform
specific actions.
• A role is a collection of user accounts that you can use to control access
to groups of users.
• A privilege is a user account's access rights to a table, schema, catalog
or other objects on Data Federator Query Server. A user account's set
of privileges defines the actions that the user account can perform. You
can set privileges at the catalog, schema, table, or column level.
Note:
The privileges you apply to Data Federator Query Server are independent
of the authorization mechanism used by the federated databases (the
datasources accessed via the connectors). That is, the privileges that
you grant apply only to the Data Federator Query Server, and not to the
databases that a Query Server accesses.

Related Topics
• Deploying projects on page 321
• About user accounts on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Designer user account on page 506
• Creating a Data Federator Query Server user account on page 506

About user accounts

There are three types of user account:


• Administrator and Designer user accounts are used to manage the Data Federator installation, and to create and deploy projects.

Administrator account users can log on to Data Federator Administrator and perform the administration and maintenance tasks that are available.

These accounts can also be used to create and deploy projects: administrator account users can log on to Data Federator Designer, create and configure projects, and then deploy these projects so that they are available to client applications.
• Designer-only user accounts are used to create and deploy projects only.
• Query Server user accounts are used by client applications to log on to
a query server and access specific projects. You configure these user
accounts to access the sub-set of deployed projects that they require.

Related Topics
• About user accounts, roles, and privileges on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Designer user account on page 506
• Creating a Data Federator Query Server user account on page 506

Creating a Data Federator administrator user account

Perform the following on the user interface to create a user account with administrator privileges.
1. In Data Federator Administrator, select User Rights, then click the User
Accounts tab.
2. Click Create a new user account to display the Create User Account
dialog box.
3. In the General area, enter the user name, and select the is an
administrator check box.
4. If the user ID is for accessing a specific schema in the project, in the
Default Schema area, enter or select the schema to use.
5. Click OK to complete the process.
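The equivalent account can also be created with the SQL statement described in "Managing user accounts with SQL statements" later in this chapter; the user name and password here are hypothetical:

```sql
CREATE USER AdminUser PASSWORD 'secret' ADMIN
```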

Related Topics
• Granting privileges to a user account or role on page 514
• Creating a Data Federator Designer user account on page 506
• Creating a Data Federator Query Server user account on page 506


Creating a Data Federator Designer user account
You can use an account with Administrator rights to access Designer and
create and deploy projects. Alternatively, you can create a user account for
Designer users only. For this type of user account, you grant privileges to
create, modify and deploy projects only, so that it cannot be used for
administration purposes such as creating other users.
1. In Data Federator Administrator, select User Rights, then click the User
Accounts tab.
2. Click Create a new user account to display the Create User Account
dialog box.
3. In the General area, ensure that you do not check the is an administrator
check box.
4. Click OK to complete the process.
5. On the "Data Federator Query Server Administrator" screen, select the My Query Tool tab, and in the Query Editor, run the following query:

GRANT EXECUTE ON PROCEDURE executeNativeUpdateQuery TO <username>

where <username> is the user account that you created.
6. In the Query Editor, run the following query:

GRANT DEPLOY, UNDEPLOY ON CATALOG "/" TO <username>

where <username> is the user account that you created.
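Put together, provisioning a hypothetical Designer-only account named jdoe from SQL looks like this (CREATE USER is described later in this chapter):

```sql
CREATE USER jdoe PASSWORD 'jdoe_password'
GRANT EXECUTE ON PROCEDURE executeNativeUpdateQuery TO jdoe
GRANT DEPLOY, UNDEPLOY ON CATALOG "/" TO jdoe
```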

Related Topics
• About user accounts on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Query Server user account on page 506

Creating a Data Federator Query Server user account
You can create a Query Server account to use with client applications that
access a Data Federator Query Server. A Query Server account has no
privileges to perform administration functions, or to create and deploy projects.
1. In Data Federator Administrator, select User Rights, then click the User
Accounts tab.

2. Click Create a new user account to display the Create User Account
dialog box.
3. In the General area, clear the is an administrator check box.
4. To set a default catalog for the user, in the Default Catalog area, click Select an existing catalog, and select the catalog from the pull-down list. When the user executes a query on a table without specifying the catalog, this default catalog is used.
5. To set a default schema for the user, in the Default Schema area, click Select an existing schema, and select the schema from the pull-down list. When the user executes a query on a table without specifying the schema, this default schema is used.
6. If the user has a default catalog, in the Permission on Default Catalog
area, click the Grant SELECT privilege on default catalog check box
to allow the user to read tables in its default catalog.
7. Click OK to complete the process.
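The base account itself can also be created with SQL (described later in this chapter); the default catalog, default schema, and SELECT privilege are then configured as account properties and privileges. The name and password below are hypothetical:

```sql
CREATE USER QueryUser PASSWORD 'qs_password'
```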

Related Topics
• About user accounts on page 504
• Creating a Data Federator administrator user account on page 505
• Creating a Data Federator Designer user account on page 506

Managing user accounts with Data Federator Administrator

Data Federator Administrator provides a window where you manage user accounts.


To access the Data Federator Administrator interface for managing user accounts, log in to Data Federator Administrator, click the Administration tab, then click User Rights.

The following table summarizes how to manage user accounts on the User Accounts tab.

Table 13-1: The User Accounts view of the Security tab in Data Federator Administrator

Task: Actions

to add a user account: Click Create a new user account.

to delete a user account: In the Username column, beside the username, click the Delete icon.

to edit a user account: In the Username column, beside the username, click the Edit icon.

to create a user account with administrator status: When you create the user account, in the General pane, select the is an administrator check box, and click OK. Note: You can grant an account administrator status at creation time only. You cannot grant administrator status to an existing account.

to change the user account's password: Edit the user account, then, in the Password pane, enter the new password in the Enter a new password box, and click OK.

to set a default catalog for the user account: Edit the user account, then, in the Default Catalog pane, either select a catalog from the Select an existing catalog list, or enter the path to a catalog in the Enter path of catalog box, and click OK.

to set a default schema for the user account: Edit the user account, then, in the Default Schema pane, either select a schema from the Select an existing schema list, or enter the path to a schema in the Enter path of schema box, and click OK.

to grant SELECT privilege on the default catalog to a user account: Edit the user account and, in the Privilege on Default Catalog area, select the Grant select privilege on default catalog checkbox and click OK. Select this checkbox only if the user account has a default catalog. If the user account has no default catalog, or if you want to grant SELECT privilege on another catalog, see the section on managing privileges with Data Federator Administrator.

to grant a role to a user account: Edit the user account, then, in the Granted Roles pane, hold down the CTRL key and click the roles that you want to grant so that they are selected (blue), and click OK.

to revoke a role from a user account: Edit the user account, then, in the Granted Roles pane, hold down the CTRL key, click the roles that you want to revoke so that they are not selected (white), and click OK.

Related Topics
• Managing privileges with Data Federator Administrator on page 514
• Properties of user accounts on page 511

Properties of user accounts

Parameter    Description

CATALOG      The default catalog associated with the user. This catalog is used by default when no catalog is specified in a table reference. When a default catalog is defined, the user only sees the tables that are available in this catalog.

SCHEMA       The default schema associated with the user. This schema is used by default when no schema is specified in a table reference.

LANGUAGE     Specifies the default language for error messages.

COMP         The default collation to use for comparison operations.

SORT         The default collation to use for sort operations.

Managing roles with Data Federator Administrator
Roles provide a way to group privileges together. Large organizations can
have thousands of users with the same privileges on the same catalogs,
schemas, tables or other objects in the database. Roles facilitate the process
of granting and revoking privileges for large numbers of users.
When you assign a role to a user, that user gets every privilege in the role.
A user may be assigned many roles and a role may be assigned to many
users.

You can assign a role to a different role. The role inherits the assigned role's
privileges.


Every user is part of the role PUBLIC. Any privilege that you grant to PUBLIC
is automatically available to all users. The PUBLIC role is always available.

Data Federator Administrator provides a window where you can manage roles.

To access the Data Federator Administrator interface for managing roles, log in to Data Federator Administrator, click the Administration tab, click User Rights, then click Roles.

The following table summarizes how to manage roles on the Roles tab.

Table 13-2: The Roles tab in Data Federator Administrator

Task: Actions

to add a role: Click Create a new role.

to delete a role: In the Role name column, beside the name of the role, click the Delete icon.

to edit a role: In the Role name column, beside the name of the role, click the Edit icon.

to grant a role to a role (this lets you maintain a hierarchy of roles): Edit the role, then, in the Granted Roles pane, hold down the CTRL key, click the roles that you want to grant so that they are selected (blue), and click OK.

to revoke a role from a role: Edit the role, and in the Granted Roles pane, hold down the CTRL key and click the roles that you want to revoke so that they are not selected (white), then click OK.

to grant the role to user accounts: Edit the role, then, in the User Members pane, click the user accounts to which you want to grant the role so that they are selected (blue), and click OK.

to revoke a role from a user account: Edit the user account, then, in the Granted Roles pane, hold down the CTRL key, click the roles that you want to revoke so that they are not selected (white), and click OK.


Granting privileges to a user account or role
For a user account or role, you can control access to a catalog, a schema, a table, or a column. That is, you can either grant or deny access to a specific catalog, schema, table, or column.
1. In Data Federator Administrator, select User Rights, then click the
Privileges tab.
2. In the Privileges Editor, select the user account name, and click the Add
button.
3. In the Add permissions for user dialog box, select the catalog to display
the schemas that the catalog contains, click a schema to display the
tables that the schema contains, and click a table to display the columns
in that table. Select the catalog, schema, table or column to which you
want to set access privileges.
4. To grant access, click Add as Allow, and to deny access, click Add as
Deny.
When you set the access privileges, the results appear in the Privileges
table for the user.
• In the left-most column, a green tick or a red minus sign denotes
whether the privilege is granted or denied.
• In the next column, a catalog, schema, table or column icon denotes
the item to which the privilege applies.

5. To check the privilege that applies to a component, select the catalog, schema, table or column and click Check. A message is displayed that tells you whether access is granted or denied.
6. Click the Cancel button to close the Add permissions for user dialog
box and apply the privileges.
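Privileges granted through this editor can also be expressed with GRANT statements, following the pattern used for Designer accounts elsewhere in this chapter. The privilege keyword and catalog path below are assumptions for illustration only:

```sql
GRANT SELECT ON CATALOG "/myCatalog" TO Smith
```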

Managing privileges with Data Federator Administrator
Data Federator Administrator provides a window where you manage
privileges.


To access the Data Federator Administrator interface for managing privileges, log in to Data Federator Administrator, click the Administration tab, then click User Rights. Then select the Privileges tab.

The following table summarizes how to manage privileges on the Privileges tab.

Table 13-3: The Privileges tab in Data Federator Administrator

to add a privilege: Click Create a new privilege.

to delete a privilege: In the Username column, beside the privilege, click the Delete icon.

to edit a privilege: In the Username column, beside the privilege, click the Edit icon.

Related Topics
• Data Federator Administrator overview on page 384


Managing user accounts with SQL statements

As an alternative to using the Data Federator Administrator to manage user accounts (see Managing user accounts with Data Federator Administrator on page 507), you can manage users using an extension of the standard SQL-92 syntax from a command line or other tool.

Creating a user account with SQL

The following statement creates a user account. You must log in with an
administrator account to create user accounts. The optional keyword ADMIN
lets you assign administrator privileges to the user account that you create.
An administrator has total control over Data Federator Query Server.
Figure 13-1: CREATE USER statement
CREATE USER user_name PASSWORD password [ ADMIN ]

Example: Adding a user

CREATE USER Smith PASSWORD smith


CREATE USER "David Smith" PASSWORD 'dsmith' ADMIN
CREATE USER David PASSWORD "David"

Dropping a user account with SQL

To delete a user that you have already created, use the DROP USER statement.
Figure 13-2: DROP USER statement
DROP USER user_name

Example: Dropping a user

DROP USER Smith


Modifying a user password with SQL

To change a user's password, use the ALTER USER ... SET PASSWORD statement.
Figure 13-3: ALTER USER and SET PASSWORD statements
ALTER USER user_name SET PASSWORD new_password

Example: Changing a user's password

ALTER USER "Smith" SET PASSWORD "xyz12345"

Modifying properties of a user account with SQL

To define or change default user properties, use the following statements. Default user properties can be defined for catalog, schema, or language.
Figure 13-4: ALTER USER statement
ALTER USER user_name SET property_name "property_value"

Example: Changing a user's properties

ALTER USER Smith SET CATALOG "MyCatalog"


ALTER USER Smith SET CATALOG "MyNewCatalog"
ALTER USER Smith SET comp "fr_FR_AS_CS"

Related Topics
• Properties of user accounts on page 511

Listing user accounts using SQL

To verify that you have correctly created, dropped, or altered a user, you can create a query that gets the information from the system table called users. For more information, see users on page 735.


Example: Verifying the list of users

SELECT * FROM /leselect/system/users

This query displays a list of users that Data Federator Query Server has
registered in the system table.

Managing privileges using SQL statements
Privileges determine the actions that a user account can perform on a catalog,
schema or table. In addition to the user interface, you can control privileges
using SQL statements.

In SQL you define authorizations for objects; that is, you set privileges and
set the roles that can use those privileges. Data Federator Query Server
defines authorizations following the same principles as in the standard SQL
syntax.

Privileges are created and granted with the GRANT statement, and are
deleted with the REVOKE statement and the DROP ROLE statement. Any
defined privilege must contain the following information:
• name of the object on which the privilege acts
• user or role that may use the privilege
• action that may be performed on the specified object

To verify the privileges that you have defined, you can query the permissions
system table.

Related Topics
• List of privileges on page 522
• permissions on page 738
• Grammar for managing users on page 721
• Deploying a version of a project on page 324


About grantees

The term grantees refers to a set of one or more user identifiers, role identifiers, or the keyword PUBLIC.

For the detailed syntax of the grantees that you can use, see Grammar for
managing users on page 721.

Granting a privilege with SQL

This statement grants one or more privileges on a given object to one or more grantees.

The keyword PUBLIC can be used to grant privileges to all grantees.

Syntax
Figure 13-5: GRANT privileges statement
GRANT privileges [ ON { CATALOG | TABLE | SCHEMA | RESOURCE } objectname ] TO { grantees | PUBLIC }
The variable grantees must be a user or role name, or a list of names separated by commas.

Example: Granting privileges

GRANT SELECT ON MyTable TO Smith
GRANT SELECT (EMPNO, ENAME, JOB) ON jdbc.oracle.EMPLOYEE TO Smith
GRANT SELECT ON CATALOG MyCatalog TO Smith
GRANT SELECT ON SCHEMA MyCatalog.MySchema TO Smith
GRANT SELECT ON TABLE MyCatalog.MySchema.MyTable TO Smith
GRANT SELECT (MyColumn) ON MyTable TO Smith
GRANT SELECT ON MyTable TO PUBLIC
GRANT DEPLOY, UNDEPLOY ON CATALOG "/" TO erin

Related Topics
• List of privileges on page 522


Revoking a privilege with SQL

To revoke or remove privileges from a user or role, use the following syntax.
You can revoke privileges from all grantees by using the PUBLIC keyword.

Syntax
Figure 13-6: REVOKE privileges statement
REVOKE privileges [ ON { CATALOG | TABLE | SCHEMA | RESOURCE }
objectname ] FROM { grantees | PUBLIC }

Example: Revoking privileges

REVOKE SELECT ON MyTable FROM Smith


REVOKE SELECT ON CATALOG MyCatalog FROM Smith
REVOKE SELECT ON SCHEMA MyCatalog.MySchema FROM Smith
REVOKE SELECT ON TABLE MyCatalog.MySchema.MyTable FROM Smith
REVOKE SELECT (MyColumn) ON MyTable FROM Smith
REVOKE SELECT ON MyTable FROM PUBLIC
REVOKE DEPLOY, UNDEPLOY ON CATALOG "/" FROM erin

Related Topics
• Managing privileges using SQL statements on page 518

Checking a privilege with SQL

This statement is unique to Data Federator and allows users to verify their
own privileges. An Administrator can also check the privileges of any user.

Syntax
Figure 13-7: CHECK AUTHORIZATION statement
CHECK [ AUTHORIZATION ] privileges [ FOR user_name ] [ ON { CATALOG | TABLE | SCHEMA | RESOURCE } objectname ]

This statement returns a result set with one row, one column "AUTHORIZED"
(type BIT).

The value of this column is true if the user has the listed privileges; false
otherwise.

Note:
• An administrator can check privileges for any user using the optional part FOR user_name.
• CHECK and CHECK AUTHORIZATION are equivalent.

Example: Checking privileges

CHECK AUTHORIZATION DEPLOY ON CATALOG MyCatalog


CHECK SELECT ON CATALOG MyCatalog
CHECK SELECT ON SCHEMA MyCatalog.MySchema FOR Smith
CHECK SELECT ON TABLE MyCatalog.MySchema.MyTable
CHECK SELECT (MyColumn) ON MyTable
CHECK SELECT ON MyTable FOR PUBLIC
CHECK DEPLOY ON CATALOG MyCatalog
CHECK DEPLOY, UNDEPLOY ON CATALOG "/" FOR erin

Related Topics
• Managing privileges using SQL statements on page 518

Verifying privileges using system tables

You can also query the system table /leselect/system/permissions to return the list of privileges that are stored in Data Federator Query Server.

By default, only an administrator can read this table. An administrator can give the SELECT privilege on this table to other users.

Related Topics
• Resource system tables on page 739


List of privileges

Privilege: SELECT
Description: Allows a user to access tables.

Privilege: DEPLOY
Implies: When granted to a non-admin user, DEPLOY implies the privileges SELECT and UNDEPLOY on any schema created in this catalog; when granted to an admin user, DEPLOY implies SELECT and UNDEPLOY on the whole catalog.
Description: Allows a user to deploy connectors to datasources. A user must have this privilege in order to define datasources in Data Federator Designer.

Privilege: UNDEPLOY
Description: Allows a user to undeploy connectors.

Privilege: CREATE ANY RESOURCE
Description: Allows a user to create any resource.

Privilege: ALTER ANY RESOURCE
Description: Allows a user to alter any resource.

Privilege: DROP ANY RESOURCE
Description: Allows a user to drop any resource.

Privilege: SELECT ANY TABLE
Description: Allows a user to access any table.

Privilege: DEPLOY ANY SCHEMA
Description: Allows a user to deploy any schema.

Privilege: UNDEPLOY ANY SCHEMA
Description: Allows a user to undeploy any schema.

Privilege: ALTER RESOURCE
Description: Allows a user to alter a specified resource.

Privilege: DROP RESOURCE
Description: Allows a user to drop a specified resource.

Related Topics
• About user accounts, roles, and privileges on page 504
• Grammar for managing users on page 721

Managing roles with SQL statements


As an alternative to using the Data Federator Administrator to manage roles
(see Managing roles with Data Federator Administrator on page 511), you
can manage roles using an extension of the standard SQL-92 syntax from
a command line or other tool.

Creating a role with SQL

The CREATE ROLE statement defines a new role. You can create a role and apply it to more than one user. To avoid confusion between roles and users, the CREATE ROLE statement is explicit.
Figure 13-8: CREATE ROLE statement
CREATE ROLE identifier

Example: Creating a role

CREATE ROLE BankTeller


CREATE ROLE "BankManager"


Dropping a role with SQL

The DROP ROLE statement deletes a role.


Figure 13-9: DROP ROLE statement
DROP ROLE identifier

Example: Dropping a role

DROP ROLE BankTeller


DROP ROLE "BankManager"

Granting roles with SQL

The GRANT roles statement assigns one or more roles to grantees (a grantee
can be a user or a role).
Figure 13-10: GRANT roles statement
GRANT roles TO grantees

Example: Granting a role

GRANT BankTeller TO BankManager

GRANT BankManager TO "David Smith"

Note:
David Smith now has BankTeller and BankManager roles.

GRANT BankTeller TO PUBLIC

Note:
This grants the BankTeller role to all users.

Verifying roles using system tables

Once you have created and modified the user roles, you can verify the data stored in Data Federator Query Server by querying the system tables. For more information, see roles on page 736, roleMembers on page 737, or userRoles on page 737.

Managing login domains


A "login domain" is a database server, a file server, or a set of servers, on
which users can log in.

You can use login domains to map users or roles to credentials that allow
them to log in to specific databases. This method of authenticating users lets
you hide database credentials while still allowing users to log in to those
databases.

Adding a login domain


• To add a login domain, use the CALL addLoginDomain statement.
Figure 13-11: CALL addLoginDomain statement
CALL addLoginDomain 'name_of_your_login_domain', 'description_of_your_login_domain'

CALL addLoginDomain 'mysql@mandelbrot', 'MySQL server on the machine mandelbrot'

Modifying a login domain description


• To modify a login domain, use the CALL alterLoginDomain statement.
Figure 13-12: CALL alterLoginDomain statement
CALL alterLoginDomain 'name_of_your_login_domain', 'description_of_your_login_domain'

CALL alterLoginDomain 'mysql@mandelbrot', 'MySQL server in test lab'


Deleting login domains


• To delete a login domain, use the CALL delLoginDomains statement.
Figure 13-13: CALL delLoginDomains statement
CALL delLoginDomains '{ name_of_your_login_domain | pattern }'

CALL delLoginDomains 'mysql@mandelbrot'

CALL delLoginDomains '%'

Mapping user accounts to login domains

You must have added a login domain.


1. Log in to Data Federator Administrator.
2. To map user accounts to a login domain, use the CALL addCredential statement.
Figure 13-14: CALL addCredential statement
CALL addCredential [ { 'name_of_user_account_on_data_federator' | 'name_of_role_on_data_federator' | 'PUBLIC' }, ] 'name_of_login_domain', 'name_of_user_account_on_login_domain', 'password_on_login_domain'
Note:
You can only specify a user account or role on Data Federator, or the
keyword PUBLIC if you are logged in as an administrator; otherwise, the
mapping always applies to your user account.

The statement

CALL addCredential 'benoit', 'mysql@mandelbrot', 'julia', 'julia'

maps the user account benoit to the user account julia with password
julia on the machine mysql@mandelbrot.

When the user benoit accesses datasources that are on the mysql server,
Data Federator will log in using the account julia.

System tables for user management
Once you define and edit users, you can verify the metadata that Data
Federator Query Server has stored for them by querying the system tables.
For a detailed reference of the available system tables related to users, click the links below or see System table reference on page 728.
• users on page 735
• userProperties on page 736
• roles on page 736
• roleMembers on page 737
• userRoles on page 737
• permissions on page 738

These system tables are stored in the catalog called leselect and the schema
called system.

Using a system table to check the properties of a user

This example shows how to check the properties of a user.


1. Open the Data Federator Administrator, as described in Starting Data Federator Administrator on page 384.
2. Click the tab SQL.
3. Type the query to check the property.
For example, to check the properties that are defined for the user "Vic", type:

SELECT * FROM /leselect/system/userProperties WHERE USER_NAME = 'Vic';

Data Federator Administrator displays the properties of the user "Vic".

Related Topics
• Managing user accounts with SQL statements on page 516



14 Controlling query execution
Query execution overview


This chapter explains the controls you can use to manage the functions of Data Federator Query Server. The query execution controls are organized into the following sections:
• Auditing and monitoring the system on page 530
• Viewing system configuration on page 543

For information on the capabilities of Data Federator Administrator, see Data Federator Administrator overview on page 384.

Auditing and monitoring the system


You can use the Data Federator Administrator to access and test instances
of the Data Federator Query Server objects (catalogs, schemas, and tables).

Viewing target tables

You can use Data Federator Administrator to view the target tables that you
have deployed.
1. Start Data Federator Administrator.
2. Click the Objects tab.
3. In the tree list, expand Objects, then TABLE, then your-catalog-name, then targetSchema.

Related Topics
• Managing target tables on page 46

Viewing datasource tables

You can view datasource tables using Data Federator Administrator.


1. Start Data Federator Administrator.
2. Click the Objects tab.

3. In the tree list, expand Objects, then TEST/your-user-account-name/sources, then your-datasource-name, then your-table-name.

4. Click the Information tab to display information about the table on the
screen.
5. Click the Content tab to display the contents of the table.
The content that displays is dependent on the number of rows you enter
in the Maximum rows to display option.


6. Once you have displayed the tables and their contents, you have the information you need to write queries on the data, optimize the mapping rules, and test your data strategy.

Related Topics
• About datasources on page 66

Querying metadata

Dynamic applications that are not hard-coded to work with a specific set of
tables must have a mechanism for determining the structure and attributes
of the objects in any database to which they connect. These applications
may require information such as the following.
• the number and names of the tables in the targets and datasources
• the number of columns in a table together with the name, data type, scale,
and precision of each column
• the keys that are defined for a table

Applications based on Data Federator Query Server can access the
information in the system catalogs by using the following:

Related Topics
• Using stored procedures to retrieve metadata on page 533
• List of stored procedures on page 744
• JDBC metadata methods on page 533
• ODBC metadata functions on page 534

Using stored procedures to retrieve metadata

The following SQL command uses stored procedures to retrieve metadata:


CALL procedureName [ arg1 [ , arg2 ] ]

Example: Calling stored procedures

CALL getTables '%', '%', '%', 'table;view'

CALL getTables 'MyCatalog', 'oracle%', '%', 'table'

CALL getFunctionsSignatures '%'

JDBC metadata methods

The JDBC specification defines a set of methods returning result sets that
contain the metadata information. This is a standard way of presenting catalog
information supported at database level, for example, lists of tables, columns,
data types.

The Data Federator JDBC driver supports the standard JDBC metadata
methods. For more information on standard JDBC metadata methods, see
the Sun JDBC Reference Guide,

http://java.sun.com/j2se/1.4.2/docs/api/java/sql/package-summary.html
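Data Federator clients retrieve this information through the JDBC driver's DatabaseMetaData methods. As a rough, self-contained illustration of the run-time discovery pattern only, the sketch below uses Python's stdlib sqlite3 as a stand-in database (not Data Federator itself); the table and column names are invented for the example.

```python
import sqlite3

# Stand-in database with one table, so metadata can be discovered at run time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empno INTEGER, ename TEXT, job TEXT)")

# Analogous in spirit to DatabaseMetaData.getTables():
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]

# Analogous in spirit to DatabaseMetaData.getColumns():
columns = [col[1] for col in conn.execute("PRAGMA table_info(employee)")]

print(tables)   # ['employee']
print(columns)  # ['empno', 'ename', 'job']
```

A dynamic application would loop over the discovered tables and columns instead of hard-coding them.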


ODBC metadata functions

The ODBC specification defines a set of functions that return result sets
containing the metadata information. This is a standard way of presenting
catalog information supported at a database level, for example, lists of tables,
columns, data types.

The OpenAccess ODBC to JDBC Bridge provides ODBC connectivity to Data Federator Query Server. The OpenAccess ODBC to JDBC Bridge supports most of the standard ODBC metadata methods.

For more information on the ODBC function limitations of the OpenAccess ODBC to JDBC Bridge, see JDBC and ODBC Limitations on page 378. For more information on standard ODBC metadata functions, see the Microsoft ODBC 3.0 software development kit and programmer's reference.

For an ODBC API reference, see:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/odbc/htm/odbcodbc_api_reference.asp

Cancelling a query
Data Federator provides a command that lets you cancel a running query. The cancel command is asynchronous. Therefore, in some cases, when you cancel a query, your client application may see the query as cancelled while Data Federator Query Server has not yet completed the cancellation.

Cancelling a query

You can cancel a query using the ADMIN CANCEL command.


• Run the following command.

ADMIN CANCEL id

id is the ID of your query, which you can get from the Running Queries
tab in the Data Federator Administrator, in the Administration tab.


Cancelling all running queries

You can cancel all running queries using the ADMIN CANCEL ALL QUERIES
command.
• Run the following command.

ADMIN CANCEL ALL QUERIES

Data types
Data types are propagated from datasources to Data Federator.

Configuring the precision and scale of DECIMAL values returned from Data Federator Query Server

Depending on the values of the JDBC URL parameters enforceMaxDecimalSize and enforceServerMetadataDecimalSize, the precision and scale of DECIMAL data can be limited to the user-defined maximum precision maxDecimalPrecision and scale maxDecimalScale, as described in the table below.

Table 14-1: Precision and scale of DECIMAL data

Each entry below gives, for a combination of enforceMaxDecimalSize and enforceServerMetadataDecimalSize, the DECIMAL metadata reported by the Data Federator JDBC Driver for DECIMAL columns, and how DECIMAL values from those columns are treated.

enforceMaxDecimalSize = NONE, enforceServerMetadataDecimalSize = NONE
• Reported metadata: the DECIMAL metadata retrieved from Data Federator Query Server.
• Values: returned as obtained from Data Federator Query Server. No checks on decimal precision and scale are done.

enforceMaxDecimalSize = NONE, enforceServerMetadataDecimalSize = FIXED_SCALE
• Reported metadata: the DECIMAL metadata retrieved from Data Federator Query Server.
• Values: fixed at the decimal precision and scale set at metadata level on Data Federator Query Server.
• Note: An exception is raised when the required decimal precision and scale cannot be ensured. A warning is issued when a truncation of the decimal digits is applied to ensure the required decimal precision and scale.

enforceMaxDecimalSize = NONE, enforceServerMetadataDecimalSize = MAX_SCALE
• Reported metadata: the DECIMAL metadata retrieved from Data Federator Query Server.
• Values: limited to the default decimal precision and scale set at metadata level on Data Federator Query Server.
• Note: An exception is raised when the required decimal precision and scale cannot be ensured. A warning is issued when a truncation of the decimal digits is applied to ensure the required decimal precision and scale.

enforceMaxDecimalSize = FIXED_SCALE, enforceServerMetadataDecimalSize = NONE
• Reported metadata: maxDecimalPrecision and maxDecimalScale.
• Values: the decimal precision is limited to no more than the value of the maxDecimalPrecision parameter. The decimal scale is truncated as much as required to ensure the maximum decimal precision and to have exactly the maximum decimal scale (maxDecimalScale parameter).
• Note: An exception is raised when the maximum decimal precision cannot be ensured. A warning is issued when a truncation of the decimal digits is applied to ensure the required decimal precision. The integer part of the DECIMAL value cannot exceed maxDecimalPrecision – maxDecimalScale.

enforceMaxDecimalSize = FIXED_SCALE, enforceServerMetadataDecimalSize = FIXED_SCALE
• Reported metadata: computed_precision and computed_scale, computed as follows. When server_precision is less than server_max_precision, computed_scale = maxDecimalScale and computed_precision = MIN(server_precision - server_scale, maxDecimalPrecision - maxDecimalScale) + maxDecimalScale. When server_precision is greater than or equal to server_max_precision, computed_scale = maxDecimalScale and computed_precision = maxDecimalPrecision.
• Values: adjusted with respect to the computed DECIMAL metadata to have no more than computed_precision and exactly computed_scale.
• Note: An exception is raised when truncation is not possible.

enforceMaxDecimalSize = FIXED_SCALE, enforceServerMetadataDecimalSize = MAX_SCALE
• Reported metadata: computed_precision and computed_scale, computed as follows. When server_precision is less than server_max_precision, computed_scale = maxDecimalScale and computed_precision = MIN(server_precision, maxDecimalPrecision). When server_precision is greater than or equal to server_max_precision, computed_scale = maxDecimalScale and computed_precision = maxDecimalPrecision.
• Values: adjusted with respect to the computed DECIMAL metadata to have no more than computed_precision and exactly computed_scale.
• Note: An exception is raised when truncation is not possible.

enforceMaxDecimalSize = MAX_SCALE, enforceServerMetadataDecimalSize = NONE
• Reported metadata: maxDecimalPrecision and maxDecimalScale.
• Values: the decimal precision is limited to no more than the value of the maxDecimalPrecision parameter. The decimal scale is truncated as much as required to ensure the maximum decimal precision and to have no more than the maximum decimal scale (maxDecimalScale parameter).
• Note: An exception is raised when the maximum decimal precision cannot be ensured. A warning is issued when a truncation of the decimal digits is applied to ensure the required decimal precision. The integer part of the DECIMAL value cannot exceed maxDecimalPrecision – maxDecimalScale.

enforceMaxDecimalSize = MAX_SCALE, enforceServerMetadataDecimalSize = FIXED_SCALE
• Reported metadata: computed_precision and computed_scale, computed as follows. When server_precision is less than server_max_precision, computed_scale = MIN(server_scale, maxDecimalScale) and computed_precision = MIN(server_precision - server_scale, maxDecimalPrecision - computed_scale) + computed_scale. When server_precision is greater than or equal to server_max_precision, computed_scale = maxDecimalScale and computed_precision = maxDecimalPrecision.
• Values: adjusted with respect to the computed DECIMAL metadata to have no more than computed_precision and exactly computed_scale.
• Note: An exception is raised when truncation is not possible.

enforceMaxDecimalSize = MAX_SCALE, enforceServerMetadataDecimalSize = MAX_SCALE
• Reported metadata: computed_precision and computed_scale, computed as follows. When server_precision is less than server_max_precision, computed_scale = MIN(server_scale, maxDecimalScale) and computed_precision = MIN(server_precision, maxDecimalPrecision). When server_precision is greater than or equal to server_max_precision, computed_scale = maxDecimalScale and computed_precision = maxDecimalPrecision.
• Values: adjusted with respect to the computed DECIMAL metadata to have no more than computed_precision and no more than computed_scale.
• Note: An exception is raised when truncation is not possible.

Note:
• server_precision is the metadata DECIMAL precision retrieved from Data Federator Query Server
• server_scale is the metadata DECIMAL scale retrieved from Data Federator Query Server
• server_max_precision is the value of the parameter core.common.maxDecimalPrecision defined at the level of Data Federator Query Server to control the maximum decimal precision used when computing the decimal precision of DECIMAL columns from result sets of queries.
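As a worked example of these rules, the sketch below implements only the FIXED_SCALE / FIXED_SCALE combination from Table 14-1 in Python. The function name and plain-argument form are illustrative, not part of the product.

```python
def computed_decimal_metadata(server_precision, server_scale,
                              server_max_precision,
                              max_decimal_precision, max_decimal_scale):
    """Compute (computed_precision, computed_scale) for the case
    enforceMaxDecimalSize = FIXED_SCALE and
    enforceServerMetadataDecimalSize = FIXED_SCALE (Table 14-1)."""
    computed_scale = max_decimal_scale
    if server_precision < server_max_precision:
        computed_precision = min(server_precision - server_scale,
                                 max_decimal_precision - max_decimal_scale
                                 ) + max_decimal_scale
    else:
        computed_precision = max_decimal_precision
    return computed_precision, computed_scale

# A DECIMAL(10,2) column, with server_max_precision = 38,
# maxDecimalPrecision = 28, and maxDecimalScale = 6:
print(computed_decimal_metadata(10, 2, 38, 28, 6))  # (14, 6)
```

The second branch shows why a very wide server column (precision at or above server_max_precision) is simply capped at maxDecimalPrecision and maxDecimalScale.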

Related Topics
• Scale and precision on page 700

Viewing system configuration
You can audit all system information using Data Federator Administrator.
The system information that you can monitor includes statistics on query execution, statistics on the buffer manager, queries registered for the buffer manager, detailed buffer allocation for operators, and statistics on wrapper management.

This information is available using the info statement.

Statistics on query execution

To view statistics on query execution, use the following command.

info executor statistics

Statistics on the buffer manager

To view statistics on the buffer manager, use the following command.

info buffermanager

Queries registered for buffer manager

To view queries registered for the buffer manager, use the following
command.

info buffermanager queries

Detailed buffer allocation for operators

To view detailed buffer allocation for operators, use the following command.

info buffermanager operators


Statistics on wrapper management

To view statistics on wrapper management, use the following command.

info wrappermanager

15 Optimizing queries

Tuning the performance of Data Federator Query Server
You can tune the performance of your queries on Data Federator Query
Server. To do this, you can maintain updated statistics on the data in your
data sources, and set parameters to optimize Query Server for your data.

Updating statistics

To ensure that Data Federator Query Server can optimize your queries in
the best possible manner, you can update the statistics on your datasource
or target tables.

For an overview of the Statistics tab in Data Federator Administrator, see The Statistics menu item on page 393.

Related Topics
• The Statistics menu item on page 393

Optimizing access to the swap file

You can optimize the performance of Data Federator Query Server by paying
attention to the use of your system swap file.

The size of the swap file is dependent on the number of retrieved rows and
the complexity of the query.

Generally, you can tell that Data Federator will use the swap file when your
query uses one of the operators that consume memory. The Data Federator
documentation contains a list of these operators.

You can use the following strategies to optimize access to the swap file.
• Set the server parameter core.workingDir.

You can use this parameter to set the location of the swap file.

Point this parameter to a partition that has the most efficient disk access.
Use the syntax of the host OS (c:\\tmp or /tmp).

Note:
The backslash character (\) is an escape character. To represent a single
backslash, you must enter two backslashes in a row (\\).

• In UNIX, configure your swap area for parallel access, for example RAID.
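For example, a server-parameter entry for the swap directory might look like the following. The properties-file format shown here is an assumption (set the parameter however your installation configures server parameters), and the paths are placeholders; only the parameter name core.workingDir comes from this guide.

```properties
# Hypothetical server-parameter entry; point the swap directory at the
# partition with the most efficient disk access.
# On Windows, double each backslash, since \ is an escape character:
core.workingDir=c:\\fast_disk\\tmp
# On UNIX:
# core.workingDir=/fast_disk/tmp
```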

Related Topics
• Operators that consume memory on page 548

Optimizing memory

To audit your system's memory use, use the statement info buffermanager.

You can use the following strategies to optimize how Data Federator uses
memory.
• Set the server parameter core.bufferManager.executorMemory.

This parameter lets you configure the amount of memory used for query
execution.

Set this parameter either as a percentage of the Java Virtual Machine memory (for example 50%), or as a fixed value with a suffix indicating the units (for example, 512M, 512m, 1024K or 1024k).
• Set the server parameter core.bufferManager.maxConcurrentQueries.

This parameter defines how many memory-consuming queries can run concurrently. Other queries are not affected.

Enter a small value here if you have many large queries.

Enter a large value if you have many small queries.


• Set the server parameter core.bufferManager.maxConcurrentOperatorsPerQuery (default value 5).

This parameter limits how many operators that consume memory run in
parallel.

Decrease this number if the operators in your queries are consuming too
much memory.

You can approximate the average size and number of operators in your queries by counting the number of large tables in different datasources accessed. For example, four large tables in different datasources in one mapping rule result in three joins that consume memory.
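As an illustration of how the memory-related settings above could be read, the following sketch interprets values such as 50% or 512M accepted by core.bufferManager.executorMemory. This parser is not part of Data Federator; the helper name and the integer truncation are assumptions, only the accepted forms (percentage, M/m, K/k) come from the documentation.

```python
def parse_memory_setting(value, jvm_bytes):
    """Interpret a memory setting such as '50%', '512M' or '1024k'.

    Illustrative sketch only: the percent form and the M/m (megabyte)
    and K/k (kilobyte) suffixes follow the documentation; the rounding
    behavior here is an assumption.
    """
    value = value.strip()
    if value.endswith("%"):
        # percentage of the Java Virtual Machine memory
        return jvm_bytes * int(value[:-1]) // 100
    number, unit = int(value[:-1]), value[-1].lower()
    if unit == "m":
        return number * 1024 * 1024
    if unit == "k":
        return number * 1024
    raise ValueError("unsupported memory setting: " + value)

# '50%' of a 1 GiB JVM heap resolves to 512 MiB
half_gig = parse_memory_setting("50%", 1024 ** 3)
```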

Related Topics
• Statistics on the buffer manager on page 543
• Queries registered for buffer manager on page 543
• Detailed buffer allocation for operators on page 543
• List of parameters on page 557
• Operators that consume memory on page 548

Operators that consume memory

The following are the operators that can consume memory when you use
them in your queries.
• join
• cartesian product
• orderby
• groupby, when you have a lot of different values in the group (a large group set)
Data Federator does not use a significant amount of memory when it performs
scans of tables, projections, filters, function evaluation or when it pushes the
operations down to the sources.

Guidelines for using system and session parameters to optimize queries on large tables

In certain situations, Data Federator optimizes the amount of data transferred from large tables by trying to retrieve only those values that match values in a smaller table. This reduces the data transferred to a fraction of the full amount, and thus leads to faster queries.

Data Federator can do this when your query involves a very large table and a relatively small table, for example 50M rows and 20000 rows. Additionally, your query must use the values in the small table to filter those in the large table. The operator that Data Federator uses in these situations is called a "bind join".

You can configure the "bind join" operator using the following four parameters.
• leselect.core.optimizer.bindJoin.minCardinality

This parameter specifies the cardinality threshold (on the large table)
required to activate the "bind join" operator.
• leselect.core.optimizer.bindJoin.reductionFactor

This number specifies how much smaller the result of a "bind join" must be, compared to a full table scan, for Data Federator to consider that the "bind join" is useful.
• leselect.core.optimizer.bindJoin.maxQueries

When Data Federator uses the "bind join" operator, it fetches the rows
from the small table into memory and generates parameterized queries.
This parameter defines the maximum number of parameterized queries.
• leselect.core.optimizer.bindJoin.useIndexOnly

Specifies that Data Federator should only use the "bind join" when there
is an index on the source column.
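To illustrate the role of maxQueries: each row fetched from the small table becomes one parameterized probe against the large table. The following sketch is purely illustrative; the function name and the query shape are assumptions, not part of Data Federator.

```python
def bind_join_probes(small_table_keys, max_queries=100):
    """Turn the keys fetched from the small table into parameterized
    queries against the large table; give up when there are more keys
    than max_queries (illustrative model of the documented limit)."""
    if len(small_table_keys) > max_queries:
        return None  # too many probes: a full table scan is used instead
    return [("SELECT * FROM large_table WHERE join_col = ?", key)
            for key in small_table_keys]

probes = bind_join_probes(["FR", "DE", "US"])
```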


Figure 15-1: How Data Federator decides to activate a "bind join" with parameters minCardinality=15000, reductionFactor=1000 and maxQueries=100
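The decision illustrated in the figure can be approximated in a few lines. This is a reading of the documented rules, not Data Federator's actual implementation; the function name and the exact comparison operators are assumptions.

```python
def activates_bind_join(large_rows, small_rows, estimated_result_rows,
                        min_cardinality=15000, reduction_factor=1000,
                        max_queries=100):
    """Approximate the bind-join activation test described above."""
    if large_rows < min_cardinality:
        return False  # large table below the cardinality threshold
    if small_rows > max_queries:
        return False  # would need too many parameterized queries
    threshold = large_rows / reduction_factor
    return estimated_result_rows <= threshold  # result small enough to pay off

# a 50M-row large table, a 100-row small table and an estimated
# 10000 matching rows: the "bind join" is activated
decision = activates_bind_join(50_000_000, 100, 10_000)
```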

Example: Activating a "bind join" on a query with a small table and a very large table

This example shows how to set system and session parameters to activate the "bind join" when you have a small table containing 100 rows and a large table with 50M rows. We also assume that when the values of the small table are used to filter the values in the large table, 10000 rows will be returned.

Refresh the statistics once your Data Federator project has been deployed.
You can refresh statistics in the Data Federator Administrator.

Set leselect.core.optimizer.bindJoin.minCardinality to 15000. The number of rows in the large table exceeds 15000, so this value will allow Data Federator to use a "bind join".

Set leselect.core.optimizer.bindJoin.reductionFactor to 1000. This is a good default value. It is used as follows.

The number of rows in the large table is divided by this number to calculate a threshold. In this case, the threshold is 50000 (50M / 1000 = 50000). Data Federator then checks the statistics, which show that the "bind join" will return about 10000 rows. This is under the threshold of 50000 and therefore allows Data Federator to use the "bind join".

If you set this value too low, Data Federator will use a "bind join" when it is
not efficient. For example, if you set this value to 1, Data Federator will use
a "bind join" even when the number of rows returned by the "bind join" is
50M (50M / 1 = 50M). This is equivalent to doing a full table scan.

If you set this value to 2, Data Federator will use a "bind join" when the
number of rows returned by the "bind join" is half of that returned by a table
scan. This is not a sufficient gain over a full table scan.

If you set this value too high, Data Federator will not use a "bind join" when
it would be efficient. For example, if you set this value to 50M, Data
Federator will only use the "bind join" if the number of rows returned by the
"bind join" is 1 (50M / 50M = 1).

Setting this value to 1000 is generally equivalent to requesting that the "bind
join" be activated when its result is 1000 times smaller than a table scan.

Set leselect.core.optimizer.bindJoin.maxQueries to 100 or higher to be sure that a "bind join" will be used. The number of rows in the small table is 100, so the maximum number of parameterized queries will be lower than or equal to this number. Therefore, a value of 100 will allow Data Federator to use the "bind join".

Finally, set leselect.core.optimizer.bindJoin.useIndexOnly to false, unless you do not have indexes on the columns used in the join.

With these settings, Data Federator should be able to perform a "bind join"
and thus run your query with optimal speed and use of memory.

Related Topics
• Managing statistics with Data Federator Administrator on page 394



Chapter 16 Managing system and session parameters

About system and session parameters


There are two levels of parameters in Data Federator: system and session. System parameters are shared by a running instance of Data Federator Query Server. To be sure that new values of these parameters take effect, you must restart the server after changing them.

Session parameters are defined for one JDBC or ODBC connection. The
value of these parameters can be different among connections.

Each session parameter takes its default value from the system parameter
of the same name. When you change the value of a system parameter
corresponding to a session parameter, the new value is only taken into
account on new sessions.
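The scoping rule above — a session starts from the current system values, and later system changes affect only new sessions — can be modeled with a toy snapshot. This is purely an illustration of the documented behavior, not how Query Server is implemented.

```python
system_params = {"comm.jdbc.port": "3055"}  # system-level values

def open_session():
    # each new JDBC or ODBC session takes its defaults from a snapshot
    # of the current system parameters (toy model)
    return dict(system_params)

s1 = open_session()
system_params["comm.jdbc.port"] = "3999"    # ALTER SYSTEM-style change
s2 = open_session()
# s1 still sees the old value; only the new session s2 sees 3999
```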

You can use system and session parameters to configure various aspects
of Data Federator, such as the following.
• use of memory
• use of network
• the order of execution of queries
• optimizations

Managing parameters using Data Federator Administrator
Data Federator Administrator provides a window where you manage system
and session parameters.


To access the Data Federator Administrator interface for managing parameters, log in to Data Federator Administrator (see Data Federator Administrator overview on page 384), click the Administration tab, then click System Parameters or Session Parameters.

The following table summarizes how to manage parameters on the System Parameters and Session Parameters tabs.

Table 16-1: What you can do on the System parameters and Session parameters tabs in Data Federator Administrator

Task: modify a system parameter or session parameter
Actions: In the row containing the parameter, type the new value in the Value box, and click OK.


Managing parameters using SQL statements

Use the ALTER SYSTEM SQL statement to manage system parameters, and the SET SQL statement to manage session parameters, as described below.
Figure 16-1: ALTER SYSTEM statement
ALTER SYSTEM { 'parameter_name' 'parameter_value' | default }
Figure 16-2: SET statement
SET { 'parameter_name' 'parameter_value' | default }

Use ALTER SYSTEM default or SET default to set all system parameters
or all session parameters to their default values.

Note:
The keyword default is not enclosed in quotation marks when used in either
SQL statement.

Example: ALTER SYSTEM statement


ALTER SYSTEM 'country' 'united_states'

ALTER SYSTEM default

Example: SET statement


SET 'comm.jdbc.port' '3999'

SET default

Related Topics
• Data Federator Administrator overview on page 384

List of parameters
leselect.show_hidden_parameters
Specifies whether Data Federator Administrator shows hidden parameters in its interface. Warning: You should not modify this parameter unless requested to do so by a member of the Business Objects support team.
scope: system and session; type: boolean; needs restart? no; default value: false

leselect.monitor.executor.internal.level
The monitoring level of the internal monitor.
scope: system only; type: integer; needs restart? no; default value: 0

leselect.monitor.executor.external.level
The monitoring level of the external monitor.
scope: system only; type: integer; needs restart? no; default value: 2

core.queryEngine.distinct.nbPartitions
The optimal number of first-level partitions to produce for the distinct operator. (A new value of this parameter takes effect when there are no queries registered in the BufferManager.)
scope: system only; type: integer; needs restart? no; default value: 300

leselect.core.maxConnections
Specifies the maximum number of simultaneous connections authorized to Data Federator Query Server. New connections are rejected when this threshold is reached.
scope: system only; type: integer; needs restart? no; default value: 32767

comm.jdbc.port
The port for remote links from a client to Data Federator Query Server.
scope: system only; type: integer; needs restart? no; default value: 3055

comm.jdbc.connPort
The static port for JDBC communication.
scope: system only; type: integer; needs restart? no; default value: 5512

comm.jdbc.SSLconnPort
The static port for JDBC communication over SSL.
scope: system only; type: integer; needs restart? no; default value: 5514

comm.jdbc.connIP
The IP address of Query Server when multiple IP addresses are available. Only one IP address is currently supported per Query Server. Specify none if only one port is configured and you want to allow Query Server to select it by default. Note: Specify none if your network configuration is not optimal and you cannot connect to Query Server from a client on another system. Also verify that the remote client can connect to the IP address provided here.
scope: system only; type: string; needs restart? no; default value: none

core.bufferManager.maxConcurrentQueries
The maximum number of parallel queries. (A new value of this parameter takes effect when there are no queries registered in the BufferManager.)
scope: system only; type: integer; needs restart? no; default value: 2

core.bufferManager.maxConcurrentOperatorsPerQuery
The maximum number of memory-consuming concurrent operators. (A new value of this parameter should take effect when there are no queries registered in the BufferManager. Currently, you must restart the server.)
scope: system only; type: integer; needs restart? no; default value: 5

core.bufferManager.executorStaticMemory
The minimum memory space allocated to operators upon initialization. It is either an exact value, for example core.bufferManager.executorStaticMemory=50M (the value should be less than the memory space allocated to the executor; see the core.bufferManager.executorMemory parameter), or a percentage of the executor memory size, for example core.bufferManager.executorStaticMemory=25%. (A new value of this parameter takes effect when there are no queries registered in the BufferManager.)
scope: system only; type: string; needs restart? no; default value: 25%

core.bufferManager.executorMemory
The memory space allocated to the executor. It is either a memory size, for example core.bufferManager.executorMemory=256M, or a percentage of the memory size allocated by the JVM, for example core.bufferManager.executorMemory=80%. (A new value of this parameter takes effect when there are no queries registered in the BufferManager.)
scope: system only; type: string; needs restart? no; default value: 80%

core.bufferManager.bufferSize
The size of one page, in number of rows. (A new value of this parameter takes effect when there are no queries registered in the BufferManager.)
scope: system only; type: integer; needs restart? no; default value: 128

core.queryEngine.historySize
The maximum size of the history for the repository of executed queries.
scope: system only; type: integer; needs restart? no; default value: 10

core.queryEngine.hash.maxPartitions
The maximum number of first-level partitions to produce for the hash algorithms. (A new value of this parameter takes effect on subsequent queries.)
scope: system only; type: integer; needs restart? no; default value: 1987

leselect.wrappers.text.dataDir
Data directory for the connector to text files. This is the default directory where the connector to text files looks for text files; if the path is relative, it is relative to data-federator-installation-dir.
scope: system only; type: string; needs restart? no; default value: data

leselect.core.optimizer.internalBindJoin.enable
Enables or disables the internal bind join.
scope: system only; type: boolean; needs restart? no; default value: false

leselect.core.optimizer.orderBasedOptimization
If set to true, activates all rules that perform order-based optimization.
scope: system only; type: boolean; needs restart? no; default value: true

leselect.core.optimizer.profitabilityBasedJoinOrdering
If set to true, activates the join-ordering rule that tries to build bushy trees based on profitability.
scope: system only; type: boolean; needs restart? no; default value: true

comm.HTTPNew.port
The default port that Data Federator Administrator uses.
scope: system only; type: integer; needs restart? no; default value: 3080

leselect.core.optimizer.minCardForGroupByTransformation
The minimum cardinality of distinct values required to decide to eliminate GroupBy nodes by using the order of sources. If 0, GroupBy elimination is always done.
scope: system only; type: long; needs restart? no; default value: 300

core.queryEngine.mergeAggregate.nbPartitions
The number of partitions to use in the MergeBasedGroupByAggregate algorithm. (A new value of this parameter takes effect on subsequent queries.)
scope: system only; type: integer; needs restart? no; default value: 300

leselect.core.optimizer.minCardForAsynchPrefetch
The minimum cardinality that determines an asynchronous prefetch. A value of -1 means that no asynchronous prefetch is allowed.
scope: system only; type: long; needs restart? no; default value: 50000

comm.CORBA.conn.closeOnFault.delay
Close-on-fault delay in time slices for CORBA connections. If set to 0, it is disabled. This is the delay after which Data Federator Query Server decides to close a connection if the client does not show any activity on this connection. Enter a value in tens of seconds. The default is 3 hours.
scope: system only; type: string; needs restart? no; default value: 1080

comm.CORBA.statement.closeOnFault.delay
Close-on-fault delay in time slices for SQL statements. If set to 0, it is disabled. This is the delay after which Data Federator Query Server decides to close a statement if the client does not show any activity on this statement. Enter a value in tens of seconds. The default is 10 minutes.
scope: system only; type: string; needs restart? no; default value: 60

core.common.defaultDecimalPrecision
The value that is reported by Data Federator Query Server for the decimal precision of a column if the connector does not return a value for the column. Under normal circumstances, the connector always supplies this value.
scope: system only; type: integer; needs restart? no; default value: 27

core.common.defaultDecimalScale
The value that is reported by Data Federator Query Server for the decimal scale of a column if the connector does not return a value for the column. Under normal circumstances, the connector always supplies this value.
scope: system only; type: integer; needs restart? no; default value: 6

core.common.maxDecimalPrecision
The maximum value that is reported by Data Federator Query Server for the decimal precision of a column.
scope: system only; type: integer; needs restart? no; default value: 40

core.common.scaleForMaxDecimalPrecision
The maximum value that is reported by Data Federator Query Server for the decimal scale of a column.
scope: system only; type: integer; needs restart? no; default value: 6

core.common.maxNumberOfFractionalDigitsForLocaleDouble
The maximum number of fractional digits in the string representation of a double when using the locale-sensitive function toStringL(double, varchar).
scope: system only; type: integer; needs restart? no; default value: 20

core.queryEngine.hashJoin.nbPartitions
The estimated optimal number of first-level partitions for the HashJoin/HashOuterJoin algorithms. (A new value of this parameter takes effect on subsequent queries.)
scope: system only; type: integer; needs restart? no; default value: 300

core.queryEngine.orderAggregate.nbPartitions
The number of partitions to use in the OrderBasedGroupByAggregate algorithm. (A new value of this parameter takes effect on subsequent queries.)
scope: system only; type: integer; needs restart? no; default value: 1987

leselect.core.optimizer.minStoreCardForMergeJoin
The minimum cardinality of store size that justifies the use of an ordered merge join.
scope: system only; type: long; needs restart? no; default value: 10000

leselect.core.optimizer.minTransferCardForMergeJoin
The minimum cardinality of transfer that justifies the use of an ordered merge join.
scope: system only; type: long; needs restart? no; default value: 30000

leselect.core.optimizer.cache.enabled
Enables the optimizer cache. If this is set to true, two queries with the same execution plan, differing only in their constants, need only one optimizer execution. The second query is executed with the same plan as the first, but with modified constants.
scope: system and session; type: boolean; needs restart? no; default value: false

leselect.core.optimizer.cache.maxEntries
The maximum number of entries in the optimizer cache. The optimizer cache is an LRU cache. When the maximum number of entries is exceeded, the least recently used cache entry is removed.
scope: system only; type: integer; needs restart? no; default value: 100

leselect.core.optimizer.cache.maxLoadFactor
The maximum load factor of the optimizer cache. The optimizer cache entries are reached by a hash table with a hash code generated from the SQL expression of the query. The size of the hash table is ((maxEntries / maxLoadFactor) + 1).
scope: system only; type: string; needs restart? no; default value: 0.75

leselect.core.statistics.recorder.enabled
Enables recording of statistics requests. If set to true, all statistics requests (table and column cardinalities) are kept in a non-persistent memory area. The statistics requests are available through the system table /leselect/system/statisticRequests.
scope: system and session; type: boolean; needs restart? no; default value: false

leselect.core.statistics.recorder.maxTableRecords
The maximum number of entries for table statistics requests that the recorder stores in memory. The statistics recorder is an LRU cache.
scope: system only; type: integer; needs restart? no; default value: 100

leselect.core.statistics.recorder.maxColumnRecords
The maximum number of entries for column statistics requests that the recorder stores in memory. The statistics recorder is an LRU cache.
scope: system only; type: integer; needs restart? no; default value: 1000

language
Defines the ISO language code for the locale.
scope: system and session; type: string; needs restart? no; default value: en

country
Defines the ISO country code for the locale.
scope: system and session; type: string; needs restart? no; default value: US

sort
Defines the sort collation.
scope: system and session; type: string; needs restart? no; default value: binary

comp
Defines the comp collation.
scope: system and session; type: string; needs restart? no; default value: binary

leselect.core.optimizer.bindJoin.minCardinality
Specifies the cardinality threshold on the large table required to activate the bind join operator.
scope: system only; type: long; needs restart? no; default value: 15000

leselect.core.optimizer.bindJoin.maxQueries
When Data Federator uses the bind join operator, it fetches the rows coming from the small table into memory and generates parameterized queries. This parameter defines the maximum number of parameterized queries, which is also the maximum number of rows that can be retrieved from the small table for the bind join operator to be used.
scope: system only; type: long; needs restart? no; default value: 100

leselect.core.optimizer.bindJoin.useIndexOnly
Specifies that Data Federator only uses the bind join operator on indexed columns.
scope: system only; type: boolean; needs restart? no; default value: false

leselect.core.optimizer.bindJoin.reductionFactor
The fraction of rows returned by a bind join compared to a full table scan in order for Data Federator to consider the bind join useful. If too many values must be retrieved, the bind join becomes less useful; in that case Data Federator executes a table scan. For example, if you have a table with 10M rows and the reduction factor is 1000, Data Federator uses the bind join operator if fewer than 10M / 1000 = 10000 rows are fetched to execute the bind join.
scope: system only; type: string; needs restart? no; default value: 1000
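The hash-table sizing rule quoted for leselect.core.optimizer.cache.maxLoadFactor can be checked with one line of arithmetic. The integer truncation here is an assumption; the manual does not specify the rounding.

```python
max_entries = 100        # default leselect.core.optimizer.cache.maxEntries
max_load_factor = 0.75   # default leselect.core.optimizer.cache.maxLoadFactor
# documented size of the hash table: ((maxEntries / maxLoadFactor) + 1)
table_size = int(max_entries / max_load_factor) + 1
```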

Related Topics
• Guidelines for using system and session parameters to optimize queries on large tables on page 548

Configuring the working directory


You can configure the working directory used by Data Federator Query Server by setting the startup parameter core.workingDir in the server.properties file.

Note:
The startup parameter core.workingDir can only be changed by modifying
the server.properties file, and not via the Administrator UI as per system
parameters.

1. Edit the file data-federator-installation-dir/LeSelect/conf/server.properties.


In the server.properties file, find the section Debugging, which contains the following lines:

#core.workingDir=

2. Set the parameter core.workingDir to a partition that has the most efficient
disk access.
Make sure you erase the hash (#) character from the beginning of the
line.

To type the name of the directory, use the syntax of the host OS (c:\\tmp
or /tmp).

Note:
The backslash character (\) is an escape character. To represent a single
backslash, you must enter two backslashes in a row (\\).

core.workingDir=c:\\tmp



Chapter 17 Backing up and restoring data

About backing up and restoring data


Data Federator provides a tool for administrators to back up and restore all
Data Federator data at once. This is useful for example when moving to a
new server.

Once the administrator has entered their username and password, the Data
Federator Backup and Restore tool backs up and restores the following data:
• projects, including datasource definitions, targets, mappings, lookup tables
and domain tables
• connector resources
• usernames and authorizations

Starting the Data Federator Backup and Restore tool

Data Federator administrators can start the Data Federator Backup and Restore Tool in two modes: graphical or console. Both modes offer the same functionality.

Starting the Backup and Restore tool

Data Federator Administrators can start the Data Federator Backup and
Restore Tool as follows.
1. Ensure you are logged out of your Data Federator applications.
2. Use one of the following methods to start the Backup and Restore Tool.

To start the Backup and Restore Tool:

On Windows, in GUI mode: in the Start menu, click Programs > BusinessObjects Data Federator XI 3.0 > Data Federator Backup and Restore Tool. You can also run the Backup and Restore Tool with the following batch script: data-federator-install-dir\bin\backuptoolgui.bat

On Windows, in console mode: run the following batch script: data-federator-install-dir\bin\backuptoolcon.bat

On AIX, Solaris or Linux, in GUI mode: run Data_Federator_Backup_and_Restore_Tool_GUI from the links directory that you chose for Data Federator. You can also run the Backup and Restore Tool with the following shell script: data-federator-install-dir/bin/backuptoolgui.sh

On AIX, Solaris or Linux, in console mode: run Data_Federator_Backup_and_Restore_Tool_Console from the links directory that you chose for Data Federator. You can also run the Backup and Restore Tool with the following shell script: data-federator-install-dir/bin/backuptoolcon.sh

3. Enter your username and password.


The Data Federator Backup and Restore Tool stops the Data Federator services and prepares to back up or restore your data.


Backing up your Data Federator data


Your user account must have administrator rights in order to use the Backup
and Restore Tool.
1. Start the Data Federator Backup and Restore Tool.
2. Click Backup. (In console mode, type 0 and press enter).
3. Choose a directory where you want to store your data.
You can type the name of a new directory, and the Backup and Restore
Tool will create it.

The Backup and Restore Tool backs up your data.

If you are using Windows, and the Data Federator Windows services are
installed, the Backup and Restore Tool restarts the Data Federator
services automatically. Otherwise, use the shutdown and startup scripts.

4. If you are using AIX, Solaris or Linux, restart the Data Federator servers
by using the command: [data-federator-install-dir]/bin/start
up.sh

Related Topics
• Starting the Data Federator Backup and Restore tool on page 576

Restoring your Data Federator data


Your user account must have administrator rights in order to use the Backup
and Restore Tool, and you must have previously backed up your data using
it.

Data Federator Administrators can restore data using the Data Federator
Backup and Restore Tool.
1. Start the Data Federator Backup and Restore Tool.
2. Click Restore. (In console mode, type 1 and press enter).
3. Choose the directory where you backed up the data that you want to
restore.
The Backup and Restore Tool restores your data.

If you are using Windows, and the Data Federator Windows services are
installed, the Backup and Restore Tool restarts the Data Federator
services automatically. Otherwise, use the shutdown and startup scripts.

4. If you are using AIX, Solaris or Linux, restart the Data Federator servers
by using the command: data-federator-install-dir/bin/startup.sh

Related Topics
• Starting the Data Federator Backup and Restore tool on page 576



Chapter 18 Deploying Data Federator servers

About deploying Data Federator servers


You can deploy Data Federator servers in the following ways:
• deploy a project on a single Data Federator installation

Projects are deployed on the same machine that runs Designer. In this configuration, your application connects to a single Data Federator Query Server.
• deploy a project on multiple installations on a cluster

In this configuration, your application connects to Data Federator Connection Dispatcher, which routes each connection to the installation of Query Server that has the lowest load.
• deploy a project on cluster installations that use load balancing and fault tolerance functionality

This configuration is the same as using multiple installations on a cluster, except that Data Federator automatically reroutes failed connections to other running instances of Connection Dispatcher or Query Server.
• deploy a project on multiple independent servers

With this configuration, the deployments on each server are independent of each other. That is, there is no load balancing or fault tolerance involved.

Related Topics
• Deploying a project on a single remote Query Server on page 582
• Deploying a project on a cluster of remote instances of Query Server on
page 586
• Configuring fault tolerance for Data Federator on page 601

Deploying a project on a single remote Query Server

By default, when you deploy a project from Data Federator Designer, it is deployed on the installation of Query Server that is installed on the same machine.



You can also deploy projects on a remote installation of Query Server by
configuring the server details at deploy time.

Related Topics
• Deploying projects on page 321
• Deploying a version of a project on page 324
• Using deployment contexts on page 325
• Possibilities for deploying a project on a single remote instance of Query
Server on page 583

Possibilities for deploying a project on a single remote instance of Query Server

The following possibilities are available for configuring Data Federator with a single remote Query Server.
• You can deploy projects on the local Query Server. Designer and Query Server run on the same machine.

This configuration is the default.
• You can deploy projects on a single remote Query Server that is dedicated to a single Designer.

You do this by configuring the Query Server at deployment time.
• You can deploy projects on a single remote Query Server that is used by multiple installations of Designer.

You do this by configuring the Query Server at deployment time for each of your Designer installations.

Related Topics
• Deploying a version of a project on page 324
• Using deployment contexts on page 325




Configuring Data Federator Designer to connect to a remote Query Server

When you install Designer and Query Server on separate machines, there must not be a firewall between the two machines.

Figure 18-1: Architecture of an installation of Data Federator Designer with a remote Query Server

1. Use the Data Federator installer to install Data Federator Designer and
Data Federator Query Server on machine A.
2. Use the Data Federator installer to install Data Federator Designer and
Data Federator Query Server on machine B.
3. At project deployment time on machine A, configure the deployment
address of machine B.
When you deploy projects from Designer, the domain and lookup tables are
deployed on the remote machine (machine B).

Note:
While you edit your project in Data Federator Designer, you use the local
installation of Query Server and the local domain and lookup repository.

Related Topics
• Deploying a version of a project on page 324
• Using deployment contexts on page 325




Sharing Query Server between multiple instances of Designer

When you share an installation of Data Federator Query Server among multiple installations of Data Federator Designer, you configure the details of the Data Federator Query Server installation at deployment time.

Figure 18-2: Architecture of multiple installations of Data Federator Designer sharing a Query Server

When you deploy projects from each Designer on machine A1 or A2, its domain and lookup tables are deployed in a dedicated database on machine B.

Note:
In this type of shared configuration, if users on two different machines deploy
to a catalog of the same name, the first catalog is overwritten.




For example, if a user on machine A1 deploys project1 in the catalog OP, and a user on machine A2 then deploys project2 in the catalog OP, project1 is overwritten.

Related Topics
• Deploying a version of a project on page 324
• Using deployment contexts on page 325

Deploying a project on a cluster of remote instances of Query Server

You can optimize your Data Federator Query Server configuration by setting up a cluster of servers and using Connection Dispatcher to distribute the connections.

To use Connection Dispatcher, you must be sure that the project that your application wants to access has been deployed in a catalog of the same name on each Query Server that belongs to the cluster.

Data Federator Connection Dispatcher works by redirecting client connections to the installation of Query Server with the lowest load. It continually polls the installations of Query Server to measure which one has the lowest load.

To measure the load, Connection Dispatcher compares the latency time of each response from every installation of Query Server. Connection Dispatcher then routes each new connection to the appropriate server.

Note:
Connection Dispatcher only dispatches connections. Queries sent on the same connection are always executed on the same installation of Query Server. Load balancing is done only at connection time, when a new connection to an installation of Query Server is acquired.

Related Topics
• Deploying projects on page 321
• Possibilities for deploying a project on a cluster of remote instances of
Query Server on page 587




Possibilities for deploying a project on a cluster of remote instances of Query Server

The following possibilities are available for configuring a cluster of installations of Data Federator Query Server.
• You can install multiple instances of Query Server on different physical machines.
• You can install multiple instances of Query Server on one machine with multiple virtual machines, such as VMware.

You do this by installing the virtual machines and assigning separate IP addresses to all of them. You can then activate Connection Dispatcher and list the different virtual machines in the Connection Dispatcher server configuration file.

Note:
You cannot install multiple instances of Data Federator Query Server on the same physical machine without using VMware.




Figure 18-3: A single Data Federator Designer, multiple instances of remote Query Server and Connection Dispatcher

Related Topics
• Deploying a version of a project on page 324
• Using deployment contexts on page 325

Starting and stopping Connection Dispatcher

If you install Connection Dispatcher with the Data Federator Windows Services component, you can use Windows services to start Connection Dispatcher.

Once Connection Dispatcher is started, you can connect to it by opening a telnet session to localhost (port 3344). You can enter commands to Connection Dispatcher directly in its telnet console.

Starting Connection Dispatcher when Data Federator Windows Services are installed

If you installed the Data Federator Windows Services component in the Data Federator installer, the Data Federator Windows Services start the Connection Dispatcher server automatically.

The table below shows the Windows service that runs the Connection Dispatcher server.

Table 18-1: List of Windows services

Name of service: Data Federator Connection Dispatcher
The service runs: DataFederator.ConnectionDispatcher




Starting Connection Dispatcher when Data Federator Windows Services are not installed

If you did not install the Data Federator Windows Services component in the Data Federator installer, you can use the startup scripts to start Connection Dispatcher.

The following table shows the script that runs Connection Dispatcher.

Table 18-2: List of scripts that run Connection Dispatcher on Windows

Name of script: data-federator-install-dir\dispatcher\bin\startup.bat
The script runs: Data Federator Connection Dispatcher

Starting Connection Dispatcher on AIX, Solaris or Linux

The following table lists the name of the script that runs Connection Dispatcher.

Table 18-3: List of scripts that run Connection Dispatcher on AIX, Solaris or Linux

Name of script: data-federator-install-dir/dispatcher/bin/startup.sh
The script runs: Data Federator Connection Dispatcher




Shutting down Connection Dispatcher when Data Federator Windows Services are installed

Stop the following Data Federator Windows Service to shut down Data Federator Connection Dispatcher.

Table 18-4: List of Windows services

Name of service: Data Federator Connection Dispatcher
The service controls: DataFederator.ConnectionDispatcher

Note:
When you shut down Connection Dispatcher, any new connection requests from clients that point to Connection Dispatcher will fail. Make sure to either restart Connection Dispatcher or point the clients to a different server.

All connections that were already acquired may still be used.

The shutdown operation and status are noted in the configured log.

Shutting down Connection Dispatcher when Data Federator Windows Services are not installed

If you did not install the Data Federator Windows Services component in the Data Federator installer, you can use the shutdown scripts to shut down Connection Dispatcher.

The following table shows the name of the script that shuts down Connection Dispatcher:

Table 18-5: List of scripts that stop Connection Dispatcher on Windows

Name of script: data-federator-install-dir\dispatcher\bin\shutdown.bat
The script shuts down: Data Federator Connection Dispatcher

Note:
When you shut down Connection Dispatcher, any new connection requests from clients that point to Connection Dispatcher will fail. Make sure to either restart Connection Dispatcher or point the clients to a different server.

All connections that were already acquired may still be used.

The shutdown operation and status are noted in the configured log.

Shutting down Connection Dispatcher on AIX, Solaris or Linux

The following table lists the name of the script that shuts down Connection Dispatcher.

Table 18-6: List of scripts that shut down Connection Dispatcher on AIX, Solaris or Linux

Name of script: data-federator-install-dir/dispatcher/bin/shutdown.sh
The script shuts down: Data Federator Connection Dispatcher

Note:
When you shut down Connection Dispatcher, any new connection requests from clients that point to Connection Dispatcher will fail. Make sure to either restart Connection Dispatcher or point the clients to a different server.

All connections that were already acquired may still be used.

The shutdown operation and status are noted in the configured log.

Configuring Connection Dispatcher

You can configure Connection Dispatcher to update the list of servers among which it dispatches connections, as well as the way it treats the servers.

These parameters are available in the Connection Dispatcher properties file, data-federator-install-dir/dispatcher/conf/dispatcher.properties.

Setting parameters for Connection Dispatcher

You can set parameters for Connection Dispatcher in the dispatcher.properties file.
1. Shut down Connection Dispatcher if it is running.
2. Edit the file data-federator-install-dir/dispatcher/conf/dispatcher.properties.
3. Restart Connection Dispatcher.
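As an illustration, a dispatcher.properties file that uses the documented default values might look like the following. The property names and values are taken from the parameter reference in this chapter; the comments are added here for readability, and you should adjust the values to your environment.

```properties
# Validity time of a server reference in the cache (milliseconds)
leselect.dispatcher.validityTime=120000
# Recheck interval for a server reference, in chunks of 10 seconds
leselect.dispatcher.reverifyTime=6
# Recheck interval for a stopped server, in chunks of 10 seconds
leselect.dispatcher.stopedReverifyTime=3
# Recheck interval for a non-responsive server, in chunks of 10 seconds
leselect.dispatcher.blockedReverifyTime=12
# Latency threshold (milliseconds)
leselect.dispatcher.thresholdLatency=500
# Port on which clients connect to Connection Dispatcher
comm.client.port=3555
# Logging configuration file
log4j.configuration=properties/logging_info.properties
```

Each of these parameters is described in Parameters for Connection Dispatcher on page 596.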

Related Topics
• Parameters for Connection Dispatcher on page 596

Guidelines for using Connection Dispatcher parameters to configure validity times of references to servers

The configuration file for Connection Dispatcher comes with the recommended settings for normal use of Connection Dispatcher. If you want to customize how Connection Dispatcher determines whether servers are valid, you can modify this file using the following guidelines.

Data Federator User Guide 593


18 Deploying Data Federator servers
Configuring Connection Dispatcher

Generally, there are three types of parameters that configure how long servers are considered valid.
• The parameter leselect.dispatcher.reverifyTime specifies the minimum time that Connection Dispatcher waits before contacting a server to verify whether it is valid.

The effective time between verifications may vary, but it cannot be less than this value.
• The parameter leselect.dispatcher.validityTime specifies the time that Connection Dispatcher considers a server to be valid after the last reverification.
• The parameters leselect.dispatcher.blockedReverifyTime and leselect.dispatcher.stopedReverifyTime control reverification for blocked (non-responsive) servers and stopped servers, respectively.

Using the parameter for validity time in Connection Dispatcher

You can change the parameter leselect.dispatcher.validityTime to a very high value if you are using Connection Dispatcher but do not need to balance the load among servers. In this case, the load information for each server is no longer guaranteed to be at most validityTime old, but Connection Dispatcher will still try to reverify each server at least once every reverifyTime period.

You can raise this value if you find that your servers are becoming invalid before Connection Dispatcher checks them. However, since the default value is set at two times the reverify value, your servers should very rarely hang more often than this.

You may lower this value if you want to decrease the length of time during which Connection Dispatcher considers a server to have the same load and status. This increases the confidence in the information that Connection Dispatcher has about each server, and Connection Dispatcher will perform load balancing and failover with more accuracy.

If leselect.dispatcher.validityTime is lower than leselect.dispatcher.reverifyTime, Connection Dispatcher will consider servers invalid before it verifies them. Therefore, when you set validityTime very low, make sure that it remains at least as high as reverifyTime.
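Because validityTime is expressed in milliseconds while the reverify intervals are expressed in chunks of 10 seconds, it helps to convert both to seconds before comparing them. The following sketch illustrates the relationship using the documented default values; the helper function is hypothetical, not part of the product.

```python
# Compare the Connection Dispatcher timing parameters in common units.
# validityTime is in milliseconds; reverifyTime is in chunks of 10 seconds.

def effective_seconds(validity_time_ms, reverify_time_chunks):
    """Convert both settings to seconds for comparison."""
    validity_s = validity_time_ms / 1000     # milliseconds -> seconds
    reverify_s = reverify_time_chunks * 10   # chunks of 10 s -> seconds
    return validity_s, reverify_s

# Default values from dispatcher.properties:
# leselect.dispatcher.validityTime=120000, leselect.dispatcher.reverifyTime=6
validity_s, reverify_s = effective_seconds(120000, 6)

print(validity_s)   # 120.0 seconds: twice the reverification interval
print(reverify_s)   # 60 seconds

# As the guidelines recommend, validityTime should be at least as high
# as reverifyTime, or servers expire before they are ever rechecked.
assert validity_s >= reverify_s
```

With the defaults, a server reference is rechecked at least every 60 seconds and stays valid for 120 seconds, which matches the "two times the reverify value" relationship described above.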




Using the parameter for reverification time in Connection Dispatcher

You can change the parameter leselect.dispatcher.reverifyTime to a lower value if you want Connection Dispatcher to verify the servers more often.

To verify servers, Connection Dispatcher makes a connection to each server and measures the time the server takes to respond. This puts a small but extra load on your servers. You can set this value higher if you find that your servers are becoming overloaded by Connection Dispatcher reverifications.

If leselect.dispatcher.validityTime is lower than leselect.dispatcher.reverifyTime, Connection Dispatcher will consider servers invalid before it verifies them. Therefore, when you set validityTime very low, make sure that it remains at least as high as reverifyTime.

Configuring logging for Connection Dispatcher

By default, the level of Connection Dispatcher logging is set to info.

You can configure a different level of logging in the dispatcher.properties file (for details on properties in Connection Dispatcher, see Parameters for Connection Dispatcher on page 596).
• Set the property log4j.configuration to the name of the file that corresponds to the level of logging that you want.

For example, to set the highest level of logging and force the logging to go to the active console, set the property as follows: log4j.configuration=properties/logging_all.properties

For a list of the possible values, see Parameters for Connection Dispatcher on page 596.




Parameters for Connection Dispatcher

leselect.dispatcher.validityTime=120000
Validity time of a server reference in the cache (milliseconds).

leselect.dispatcher.reverifyTime=6
The interval, in chunks of 10 seconds, at which a server reference is rechecked.

leselect.dispatcher.stopedReverifyTime=3
The interval, in chunks of 10 seconds, at which a stopped server is rechecked.

leselect.dispatcher.blockedReverifyTime=12
The interval, in chunks of 10 seconds, at which a non-responsive (heavily loaded) server is rechecked.

leselect.dispatcher.thresholdLatency=500
The latency threshold, in milliseconds, at which the server is considered loaded enough to decrease its priority by one level. The most prioritized servers are returned first when a client asks for a reference.

comm.client.port=3555
The port on which the clients connect to the connection dispatcher. If no value is given, this parameter defaults to 3055, which is the Data Federator Query Server default.

#leselect.dispatcher.clusterConfigFile=
Points to an XML file which provides the cluster's configuration (keeps the list of all servers with their parameters). If the file does not exist, one will automatically be created. At shutdown, the list of servers and their configurations is persisted, and a backup is automatically generated as well.

If no value is specified for this parameter, this configuration file is created by default in USER_HOME/.datafederator/dispatcher_servers_conf.xml.

When entering the name of a path that contains backslash (\) characters, you must enter a double backslash (\\). For example, in Windows, to point to a file named C:\serverinfo.xml, enter the value C:\\serverinfo.xml.

For details on the format of this file, see Format of the Connection Dispatcher servers configuration file on page 600.

log4j.configuration=properties/logging_info.properties
Points to an existing logging configuration file that controls the level of logging. The possible values for this property are:
• logging_all.properties: the highest level of logging
• logging_info.properties: intermediate level of logging
• logging_none.properties: no logging

Logging is output directly to the active console.

Managing the set of servers for Connection Dispatcher

You must have installed and started Data Federator Connection Dispatcher. For details, see the Data Federator Installation Guide.

A common way to use Connection Dispatcher is to distribute connections to several servers in a network.
1. Open a telnet connection to localhost on port 3344.

In a command line, use the following command to open the telnet connection.

telnet localhost 3344




2. Use the Connection Dispatcher commands to manage the list of installations of Query Server over which you want Connection Dispatcher to distribute connections.

The following command adds one installation of Query Server.

add jdbc:datafederator:://server1

The following command shows the installations of Query Server that are in the list.

list

The following command displays help information.

help

Related Topics
• Possibilities for deploying a project on a cluster of remote instances of
Query Server on page 587
• Parameters in the JDBC connection URL on page 361

Format of the Connection Dispatcher servers configuration file

By default, the Connection Dispatcher servers configuration file is created in USER_HOME/.datafederator/dispatcher_servers_conf.xml.

To add a server, stop Connection Dispatcher, open this file for editing, and under the <servers> root tag add a tag with the following syntax.

<server URL="jdbc:datafederator:..." [blockedReverifyTime="..."] [reverifyTime="..."] [stopedReverifyTime="..."] [thresholdLatency="..."] [validityTime="..."]/>

• Attributes between brackets [] can be omitted; the default values are used in this case.
• The URL is an installation of Data Federator Query Server. The other attributes have the same meaning as the ones found in the dispatcher.properties file, except that they are not prefixed by leselect.dispatcher.
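For example, a servers configuration file that lists two installations of Query Server might look like the following. The host names and the URL form are placeholders (see the JDBC URL syntax reference for the exact format); the second entry overrides one default while the first relies entirely on the defaults.

```xml
<servers>
  <server URL="jdbc:datafederator://host1:3055"/>
  <server URL="jdbc:datafederator://host2:3055" validityTime="240000"/>
</servers>
```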



To remove a server, edit the Connection Dispatcher servers configuration file dispatcher_servers_conf.xml and delete the <server> entry.

Caution:
Although you can add a reference to the URL of the Connection Dispatcher itself, or to the URL of another Connection Dispatcher, this is not recommended. Such references prevent Connection Dispatcher from correctly identifying the latency of the servers, and load balancing will not function.

Note:
When you start Connection Dispatcher from a script on Windows or Unix-like
systems, the Connection Dispatcher servers configuration file is found at
USER_HOME/.datafederator/dispatcher_servers_conf.xml. When you
start Connection Dispatcher as a Windows service, the value of the SYSTEM
USER home directory depends on the version of Windows you have, and
you must use Windows search to find the correct .datafederator directory.

Related Topics
• Parameters for Connection Dispatcher on page 596
• Finding the Connection Dispatcher servers configuration file when running
Connection Dispatcher as a Windows service on page 780
• Parameters in the JDBC connection URL on page 361

Configuring fault tolerance for Data Federator

Data Federator can apply fault tolerance to your connections to Query Server. Fault tolerance lets you specify a complementary list of servers when your application connects to Query Server. If the connection to the host provided in the URL fails, the servers in the alternate list are tried one by one until a valid connection can be established.

The alternate servers are solicited in the order in which you have listed them in the URL. Each host in your list can be either a Query Server or a Connection Dispatcher.

You can configure fault tolerance for connections from your application to Data Federator by using the JDBC URL.




1. Deploy a cluster of replicated instances of Data Federator Query Server.
   a. Install Data Federator Query Server on multiple hosts.
   b. Deploy the same projects on each installation of Query Server.
2. List all of these hosts in the JDBC URL that your application uses to connect to Data Federator.

To list the hosts, you can use either the property alternateServers or alternateServersFile in the JDBC URL.

If you use the alternateServers property, enter a list of hosts separated by ampersand (&) or comma (,).

If you use the alternateServersFile property, enter a list of hosts in the Connection Dispatcher servers configuration file.
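As an illustration only, a connection URL using the alternateServers property might take a form like the following. The host names, port and catalog are placeholders, and the exact property and separator syntax is defined in JDBC URL syntax on page 358, which you should follow rather than this sketch.

```text
jdbc:datafederator://primary:3055/MyCatalog?alternateServers=backup1:3055,backup2:3055
```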

Related Topics
• JDBC URL syntax on page 358
• Format of the Connection Dispatcher servers configuration file on page 600



Chapter 19 Data Federator Designer reference

Using data types and constants in Data Federator Designer

Data Federator uses the following data types.

You may use these data types when:
• defining schemas of datasource tables, domain tables, lookup tables or target tables
• writing mapping formulas
• writing filter formulas
• writing constraint formulas

STRING
A STRING of a specified length.
Constants: use single quotes (') around string constants, for example 'xxxx' or 'my-string'.

INTEGER
An integer n, where -2147483648 <= n <= 2147483647.
Constants: use numeric constants without quotes, for example 100 or 22.

DOUBLE
A double-precision value d, where -1.79E308 <= d <= 1.79E308.
Constants: use numeric constants without quotes, for example 100 or 22.5.

DECIMAL
A decimal value c, where -10^38 +1 <= c <= 10^38 -1.
Constants: use numeric constants without quotes, for example 100 or 22.5.

TIMESTAMP
A date and time value in the format yyyy-MM-dd HH:mm:ss.SSS.
Constants: use single quotes (') around DATE, TIME, or TIMESTAMP constants, for example '2005-11-20', '12:30:00' or '1999-10-25-07:20:05'. In formulas, use a function to return a TIMESTAMP constant.

TIME
A time in the format HH:mm:ss.
Constants: use single quotes (') around DATE, TIME, or TIMESTAMP constants, for example '2005-11-20', '12:30:00' or '1999-10-25-07:20:05'. In formulas, use a function to return a TIME constant.

DATE
A DATE in the format yyyy-MM-dd.
Constants: use single quotes (') around DATE, TIME, or TIMESTAMP constants, for example '2005-11-20', '12:30:00' or '1999-10-25-07:20:05'. In formulas, use a function to return a date constant.

BOOLEAN
A BOOLEAN value.
Constants: use BOOLEAN values without quotes. These are case insensitive: TRUE, true, FALSE, false, or mixed case.

ENUMERATED
One of the values defined in a domain table.

NULL
A NULL value.
Constants: use the NULL constant without quotes: null.


Related Topics
• toTimestamp on page 683
• toTime on page 682
• toDate on page 676
• Adding a domain table to enumerate values in a target column on page 55

Date formats in Data Federator Designer


For details on date formats, see the Java 2 Platform API Reference for the
java.text.SimpleDateFormat class, at the following URL:

http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.

Data extraction parameters for text files

The following table shows the parameters that you set when extracting data from text files.

You need these parameters, for example, when:
• importing domain table data from a file (see Adding a domain table by importing data from a file on page 59)
• importing lookup table data from a file (see Adding a lookup table by importing data from a file on page 249)

File name
The name and location of your file.

Field separator
The character that separates fields in your source file.

String delimiter
The character that surrounds text, for example " (double quote) or ' (single quote).

Number of header lines to discard
The number of rows, starting from the top, that Data Federator ignores when reading the file. Enter 1 if your file has one header row.

Map table columns by order
Specifies that Data Federator considers the first value that it finds in the file as the first column, the second value as the second column, and so on.

Map table columns by name
Specifies that Data Federator attempts to match the names of the headings that it finds in your file to the names of the columns that you have defined in your Data Federator project.
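To illustrate how the first four parameters map onto a file, the following sketch reads a small sample by hand with Python's csv module. The sample data and variable names are hypothetical; Data Federator performs this extraction itself.

```python
import csv
import io

# A sample file with one header row, ";" as the field separator
# and " (double quote) as the string delimiter.
sample = 'id;name;city\n1;"Smith; John";Paris\n2;"Doe; Jane";Lyon\n'

header_lines_to_discard = 1   # "Number of header lines to discard"

# delimiter maps to "Field separator", quotechar to "String delimiter".
reader = csv.reader(io.StringIO(sample), delimiter=';', quotechar='"')
rows = list(reader)

header, data = rows[0], rows[header_lines_to_discard:]
print(header)   # ['id', 'name', 'city']
print(data[0])  # ['1', 'Smith; John', 'Paris']
```

Note that the string delimiter is what allows the field separator (";") to appear inside a value such as "Smith; John" without splitting it into two fields.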

Formats of files used to define a schema


You can use schema files to define the schemas of tables in Data Federator
Designer. You may need schema files in two cases:
• when defining a datasource schema
• when defining a target table schema



File format: Proprietary DDL file

A file that describes your datasource schema in the following format:

ColumnName[:ColumnType][(Length)]

Different columns are separated by ";" (semi-colon).

Examples:

my_column:varchar(20)

your_column:date

our_column:integer;their_column:integer

You can use any standard SQL column types. When Data Federator reads the DDL, it converts the SQL types into its own types as follows:
• For a column of type STRING, use the VARCHAR type in the DDL.
• For a column of type DECIMAL, use the DECIMAL, BIGINT, NUMERIC or NUMBER type in the DDL.
• For a column of type DOUBLE, use the DOUBLE, FLOAT or REAL type in the DDL.
• For a column of type BOOLEAN, use the BOOLEAN or BIT type in the DDL.
• For a column of type INTEGER, use the INTEGER, INT or SMALLINT type in the DDL.
• For a column of type DATE, use the DATE type in the DDL.
• For a column of type TIME, use the TIME type in the DDL.
• For a column of type TIMESTAMP, use the TIMESTAMP type in the DDL.

For details about the data types in Data Federator, see Using data types and constants in Data Federator Designer on page 604.

File format: DDL script

A file that describes your datasource schema in DDL SQL statements.

Example:

create TABLE Clients ( id INTEGER not null, name CHAR(20) null, date DATE null, PRIMARY KEY (id) )

You can use any standard SQL column types. When Data Federator reads the DDL, it converts the SQL types into its own types as follows:
• For a column of type STRING, use the VARCHAR type in the DDL.
• For a column of type DECIMAL, use the DECIMAL, BIGINT, NUMERIC or NUMBER type in the DDL.
• For a column of type DOUBLE, use the DOUBLE, FLOAT or REAL type in the DDL.
• For a column of type BOOLEAN, use the BOOLEAN or BIT type in the DDL.
• For a column of type INTEGER, use the INTEGER, INT or SMALLINT type in the DDL.
• For a column of type DATE, use the DATE type in the DDL.
• For a column of type TIME, use the TIME type in the DDL.
• For a column of type TIMESTAMP, use the TIMESTAMP type in the DDL.

For details about the data types in Data Federator, see Using data types and constants in Data Federator Designer on page 604.
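The proprietary DDL format is simple enough to sketch a parser for. The following is a hypothetical illustration of how the ColumnName[:ColumnType][(Length)] entries decompose; Data Federator performs this parsing itself, and the function name is an invention for this example.

```python
import re

# One column definition: ColumnName[:ColumnType][(Length)]
COLUMN = re.compile(r'^(?P<name>\w+)(?::(?P<type>\w+))?(?:\((?P<length>\d+)\))?$')

def parse_proprietary_ddl(ddl):
    """Split a proprietary DDL line on ';' and decompose each column."""
    columns = []
    for part in ddl.split(';'):
        match = COLUMN.match(part.strip())
        if not match:
            raise ValueError('bad column definition: %r' % part)
        columns.append((match.group('name'),
                        match.group('type'),
                        match.group('length')))
    return columns

print(parse_proprietary_ddl('my_column:varchar(20)'))
# [('my_column', 'varchar', '20')]
print(parse_proprietary_ddl('our_column:integer;their_column:integer'))
# [('our_column', 'integer', None), ('their_column', 'integer', None)]
```

As the optional brackets in the format suggest, both the type and the length may be absent, in which case the parser returns None for those parts.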

Related Topics
• Using a schema file to define a text file datasource schema on page 171
• Adding a target table from a DDL script on page 47
• Using data types and constants in Data Federator Designer on page 604

Running a query to test your


configuration
Data Federator lets you run a query on datasources, mappings, and
components such as mapping formulas, to test that your configuration is
correctly extracting and converting values to the target table.

You run a query in situation such as the following.


• testing the definition and schema of a datasource (see Running a query
on a datasource on page 211)
• testing a mapping formula (see Writing aggregate formulas on page 227)
• testing a case statement formula (see Testing case statement formulas
on page 232)
• testing a mapping rule (see Testing a mapping rule on page 281)

Running a query on the parts of your configuration is a way to test what values Data Federator returns when you query your target table.
1. Open the Query tool pane of the component that you want to test.
2. Leave the parameters in the Query tool pane as their default values.
If you want to configure your query, see Query configuration on page 615.

3. To add columns to your query, click on the columns in the Available
columns list. They appear in the Selected columns list.
To remove columns from your query, click on the columns in the Selected
columns list.

Click All to select all the columns in your query.

Click None to clear all the selected columns. If you run your query with
no selected columns, the query will return all columns.

Click Default to get the most relevant set of columns.

4. Click View data.


Data Federator extracts data from the file, then displays the data in
columns in the Data sheet frame.

5. Verify that the values in your file appear as you expect them to appear.
This depends on the component you are testing. Examples are:
• If you are testing a datasource, see Running a query on a datasource
on page 211.
• If you are testing a mapping rule, see Testing a mapping rule on
page 281.

Query configuration
Use these parameters when Running a query to test your configuration on
page 614.

Available columns: lists the available columns that you can query.

Selected columns: lists the columns that you have chosen to query.

To add columns to your query, click the columns in the Available columns list. They appear in the Selected columns list.
• Click a column in the Selected columns list to remove it from your query.
• Click All to add all the columns to your query.
• Click None to clear all the columns from the selection. When you run a query with no columns selected, the query returns all the columns.
• Click Default to add the most relevant columns. The set of default columns depends on which component you are testing.

Filter: specifies the filter on the returned rows.

Sort by: the column by which the query results are ordered.

Sort order: the order in which Data Federator displays the query results.


Retrieved rows: specifies how many rows you want your query to return. Use this box to limit the size of the returned data when your query source is very large.

Show total number of rows only: specifies if you want your query to return a count of all the rows in your query source.

Retrieve distinct values only: specifies if you only want your query to return distinct rows.

Printing a data sheet


You must have run a query to test your configuration. See Running a query
to test your configuration on page 614.
Data Federator lets you print a data sheet or test tool frame on datasources,
targets, and elements such as mapping formulas to test that your
configuration is correctly extracting and converting values to the target table.

You can print a data sheet after:


• testing a target (see Testing a target on page 53)
• running a query on a datasource (see Running a query on a datasource
on page 211)
• viewing constraint violations (see Viewing constraint violations on page 303)
• testing a mapping rule (see Testing a mapping rule on page 281)

• In the Data sheet pane of the component on which you have run a query,
click Print.
A PDF window appears, containing the data sheet.


Inserting rows in tables


When you select a row and click Add, Data Federator adds the new row below the row you selected.


You can do this when you:


• add columns to a target table (see Adding a target table manually on
page 46)
• add columns to a domain table (see Adding a domain table to enumerate
values in a target column on page 55)
• add columns to a lookup table (see Adding a lookup table on page 244)
• add conditions to a case analysis formula (see Writing case statement
formulas on page 230)

The syntax of filter formulas


When writing a filter formula, you can use any combination of the Data Federator functions that select columns from your tables and that result in a BOOLEAN value.

An example filter formula is:

S2.order_date > '1999-12-31'

You can use the following structures of formulas:


• operand comparison-operator operand
where comparison-operator is one of: <, >, = or <>


• operand = NULL
• operand <> NULL
• operand IS NULL
• operand IS NOT NULL
• operand LIKE string-constant
• operand NOT LIKE string-constant

You can use the following operands when writing a filter formula:
• columns from datasource tables
• Refer to your datasource tables by their id numbers (Sn).
• Refer to columns in datasource tables by their aliases. This is either
an id number or a name (Sn.An or Sn.[column_name]).

Column names are case-sensitive.

• constants
• functions
• Use the Data Federator functions to convert or combine the column
values or constants.
• Choose the appropriate types for the data that you want to convert.

Related Topics
• isLike on page 649
• Function reference on page 624
• Using data types and constants in Data Federator Designer on page 604

The syntax of case statement formulas


Use the following rules when writing a case analysis mapping formula:
• Enter the conditions in the order in which you want Data Federator to test
them.
• Data Federator will test condition 1 before condition 2, condition 2 before
condition 3, and so on.
• In the If column:

• Enter the value of the test you want to make, using the syntax of a
filter formula.

• In the then column:


• Enter the value of the result that you want, using the syntax of a filter
formula.
• Refer to your datasource tables by their id numbers (Sn).
• Refer to columns in datasource tables by their aliases. This is either
an id number or a name (Sn.An or Sn.[column_name]).

Column names are case-sensitive.

• Use the Data Federator functions to convert or combine the column values
or constants.

For a full list of functions that you can use, see Function reference on
page 624.
• Choose the appropriate types for the data that you want to convert.

For details about the data types in Data Federator, see Using data types
and constants in Data Federator Designer on page 604.

Related Topics
• The syntax of filter formulas on page 619

The syntax of relationship formulas


When writing a relationship formula, you can use any combination of the Data Federator functions that select columns from your tables and that result in the same type between the columns that are to have the relationship.

An example relationship formula is:

S1.A1 = toInteger( S2.A10 )

You can use the following structures of formulas:


• simple_formula AND simple_formula, where a simple formula is:
• operand = operand

You can use the following operands when writing a relationship formula:


• columns from datasource tables


• Refer to your datasource tables by their id numbers (Sn).
• Refer to columns in datasource tables by their aliases. This is either
an id number or a name (Sn.An or Sn.[column_name]).

Column names are case-sensitive.

• constants

Use data types in Data Federator Designer to format constants.


• functions
• Use the Data Federator functions to convert or combine the column
values or constants.
• Choose the appropriate types for the data that you want to convert.

Related Topics
• Using data types and constants in Data Federator Designer on page 604

Function reference

Aggregate functions

This section describes the aggregate functions in Data Federator Designer.

Aggregate functions perform an operation over a set of data.

In aggregate functions, you can use the SQL keyword distinct in front of
column names.

Related Topics
• Writing aggregate formulas on page 227

AVG

Returns the average of a set of values.

Syntax:
DECIMAL AVG(INTEGER n)
DECIMAL AVG(DECIMAL d)

Examples:
• To calculate the average of the sums of two columns that contain INTEGERs or DECIMALs:
  = AVG( S1.A1 + S1.A2 )
• To calculate the average of the values in a column that contains numbers written as STRINGs:
  = AVG( toInteger( S1.A1 ) )


COUNT

Counts the number of values in a set.

Syntax:
INTEGER COUNT(INTEGER n)
INTEGER COUNT(DECIMAL c)
INTEGER COUNT(DOUBLE d)
INTEGER COUNT(STRING s)
INTEGER COUNT(TIMESTAMP m)
INTEGER COUNT(TIME t)
INTEGER COUNT(DATE a)
INTEGER COUNT(BOOLEAN b)

Example:
• To count the number of values in a column:
  = COUNT( S1.A1 )

MAX

Returns the maximum value in a set.


Syntax:
INTEGER MAX(INTEGER n)
DECIMAL MAX(DECIMAL c)
DOUBLE MAX(DOUBLE d)
STRING MAX(STRING s)
TIMESTAMP MAX(TIMESTAMP m)
TIME MAX(TIME t)
DATE MAX(DATE d)

Example:
• To return the maximum value of a column:
  = MAX( S1.A1 )

MIN

Returns the minimum value in a set.


Syntax:
INTEGER MIN(INTEGER n)
DECIMAL MIN(DECIMAL c)
DOUBLE MIN(DOUBLE d)
STRING MIN(STRING s)
TIMESTAMP MIN(TIMESTAMP m)
TIME MIN(TIME t)
DATE MIN(DATE d)

Example:
• To return the minimum value of a column:
  = MIN( S1.A1 )

SUM

Returns the sum of a set of values.


Syntax:
DECIMAL SUM(INTEGER n)
DECIMAL SUM(DECIMAL c)
DECIMAL SUM(DOUBLE d)

Example:
• To calculate the sum of the values in a column:
  = SUM( S1.A1 )

Numeric functions

The following numeric functions are available in Data Federator.

abs

Returns the absolute, positive value of the numeric argument.

Syntax:
decimal abs(decimal n)
double abs(double n)
integer abs(integer n)

Restrictions:
• abs(-2^31) = -2^31
• returns null if the argument is null


acos

Returns the arc cosine of an angle, in the range of 0 through pi.

Syntax double acos(double d)

Restrictions if abs(d) > 1, returns null

asin

Returns the arc sine of an angle, in the range of -pi/2 through pi/2.

Syntax double asin(double d)

Restrictions if abs(d) > 1, returns null

atan

Returns the arc tangent of an angle, in the range of -pi/2 through pi/2.

Syntax double atan(double d)

atan2

atan2(x, y) converts rectangular coordinates (x, y) to polar (r, theta). This


method computes the phase theta by computing an arc tangent of y/x in the
range of -pi to pi.


Syntax double atan2(double x, double y)

Restrictions if x==0 and y==0, returns null.

ceiling

Returns the smallest value that is not less than the argument and is equal
to a mathematical integer.

Syntax:
integer ceiling(integer n)
double ceiling(double n)
decimal ceiling(decimal n)

cos

Returns the cosine of an angle.

Syntax double cos(double d)

cot

Returns the cotangent of an angle. Returns null if sine is equal to 0.

Syntax double cot(double d)

Restrictions if sin(d) == 0, returns null.


degrees

Converts an angle measured in radians to an approximately equivalent angle measured in degrees.

Syntax:
double degrees(integer n)
double degrees(double d)
double degrees(decimal c)

exp

Returns the exponential value of a number "d", of type double. This is the
value of e raised to the exponent d.

Syntax double exp(double d)

Examples exp(10) == e^10 == 22 026.4658

Restrictions Throws exception if overflow.

floor

Returns the largest value that is not greater than the argument and is equal
to a mathematical integer.

Note:
The type of the value returned is not converted. Therefore, floor (1.9) == 1.0.
If you want to convert the value to an integer, use a conversion function like
toInteger() (see toInteger on page 679).


Syntax:
integer floor(integer n)
double floor(double n)
decimal floor(decimal n)

log

Returns the base e logarithm of double number "d". The argument "d" must
be greater than 0. Returns null if the argument is negative or equal to 0.

Syntax double log(double d)

Restrictions if d <= 0, returns null

log10

Returns the base 10 logarithm of double number "d". The argument d must
be greater than 0. Returns null if the argument is negative or equal to 0.

Syntax double log10(double d)

mod

Returns the remainder of two integers, when n1 is divided by n2.

Syntax integer mod(integer n1, integer n2)


Restrictions if n2 == 0, returns null

pi

Returns the constant pi.

Syntax double pi()

power

Returns the number raised to an exponent. The exponent must be an integer.

Syntax:
double power(integer n1, integer n2)
double power(double n1, integer n2)
decimal power(decimal n1, integer n2)

Restrictions:
• if n1 == 0 and n2 < 0, returns null
• throws exception if overflow

radians

Converts an angle measured in degrees to an approximately equivalent angle measured in radians.

Syntax:
double radians(integer n)
double radians(double d)
double radians(decimal c)

rand

Returns a double value d, where 0 <= d < 1. You can provide a seed integer to initialize the random number generator.

Syntax:
double rand(integer n)
double rand()

round

Returns the value rounded to the specified number of decimal places "p". The function rounds towards the nearest neighbor unless both neighbors are equidistant; in this case, it rounds up (i.e., rounds away from zero).

If you do not specify p, this function rounds to zero decimal places.

Note:
The type of the value returned is not converted. Therefore, round(1.9) ==
2.0. If you want to convert the value to an integer, use a conversion function
like toInteger() (see toInteger on page 679).


Syntax:
integer round(integer n, integer p)
double round(double n, integer p)
decimal round(decimal n, integer p)
integer round(integer n)
double round(double n)
decimal round(decimal n)

Restrictions:
• The function rounds towards the nearest neighbor unless both neighbors are equidistant, in which case it rounds away from zero.
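The tie-breaking rule above differs from the "round half to even" behavior of some environments (including Python's built-in round()). As an illustration only, with `df_round` being a hypothetical name and not a Data Federator API, the documented behavior can be sketched with Python's decimal module:

```python
from decimal import Decimal, ROUND_HALF_UP

# Illustrative sketch: ties round away from zero, as documented above.
def df_round(n, p=0):
    q = Decimal(1).scaleb(-p)  # quantum: 1, 0.1, 0.01, ... for p = 0, 1, 2, ...
    return float(Decimal(str(n)).quantize(q, rounding=ROUND_HALF_UP))
```

Note that decimal.ROUND_HALF_UP rounds ties away from zero, matching the restriction stated above for both positive and negative inputs.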

sign

Returns the positive (1), zero (0), or negative (-1) sign of the argument.

Syntax:
integer sign(integer n)
decimal sign(decimal c)
double sign(double d)

sin

Returns the sine of an angle.

Syntax double sin(double d)


sqrt

Returns the square root of a number. The argument must be positive. Returns
null if the argument is negative.

Syntax double sqrt(double d)

Restrictions if d<0, returns null

tan

Returns the tangent of an angle.

Syntax double tan(double d)

Restrictions if cos(d) == 0, returns null

trunc

Returns the value n truncated to m decimal places. If m is omitted, n is truncated to 0 decimal places. If the value m is negative, the function starts at the mth digit left of the decimal point and sets to zero all the digits to the right of that position.


Syntax:
integer trunc(integer n, integer m)
double trunc(double n, integer m)
decimal trunc(decimal n, integer m)
integer trunc(integer n)
double trunc(double n)
decimal trunc(decimal n)

Alias: truncate()

Examples:
trunc( 10.1234, 1 ) == 10.1
trunc( 10.1234, 2 ) == 10.12
trunc( 1862.1234, -1 ) == 1860
trunc( 1862.1234, -2 ) == 1800
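The negative-places rule can be sketched as follows; `df_trunc` is a hypothetical helper written for illustration, not a Data Federator API:

```python
import math

# Illustrative sketch of the truncation rule above, including negative "m":
# a negative m zeroes digits to the left of the decimal point.
def df_trunc(n, m=0):
    if m >= 0:
        factor = 10 ** m
        return math.trunc(n * factor) / factor
    factor = 10 ** (-m)
    return math.trunc(n / factor) * factor
```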

Date/Time functions

The following date/time functions are available in Data Federator.

curdate

Returns the current date as a date value (return type: date). This function is a "system function" and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data access system used as the source.


curtime

Returns the current local time as a time value (return type: time). This function is a "system function" and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data access system used as the source.

dayName

Returns a character string representing the day component of date "a" or timestamp "m".

Syntax:
string dayName(date a)
string dayName(timestamp m)

Restrictions:
The name is returned in English and in upper case. The possible return values are: SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY.


dayOfMonth

Returns an integer from 1 to 31 representing the day of the month in date "a" or timestamp "m".

Syntax:
integer dayOfMonth(date a)
integer dayOfMonth(timestamp m)

dayOfWeek

Returns an integer from 1 to 7 representing the day of the week in date "a"
or timestamp "m". The first day of a week is Sunday.

Syntax:
integer dayOfWeek(date a)
integer dayOfWeek(timestamp m)

Restrictions: The first day of the week is Sunday.

dayOfYear

Returns an integer from 1 to 366 representing the day of the year in date "a" or timestamp "m".

Syntax:
integer dayOfYear(date a)
integer dayOfYear(timestamp m)


decrementDays

Decrements the given number of days "n" from date "a" or timestamp "m".

Syntax:
date decrementDays(date a, integer n)
timestamp decrementDays(timestamp m, integer n)

hour

Returns an integer from 0 to 23 representing the hour component of time "t" or timestamp "m".

Syntax:
integer hour(time t)
integer hour(timestamp m)

incrementDays

Increments the date "a" or timestamp "m" argument with the given number
of days "n".

date incrementDays(date a, integer


n)
Syntax
timestamp incrementDays(timestamp
t, integer n)


minute

Returns an integer from 0 to 59 representing the minute component of time "t" or timestamp "m".

Syntax:
integer minute(time t)
integer minute(timestamp m)

month

Returns an integer from 1 to 12 representing the month component of date "a" or timestamp "m".

Syntax:
integer month(date a)
integer month(timestamp m)

monthName

Returns a character string representing the month component of date "a" or timestamp "m".

Syntax:
string monthName(date a)
string monthName(timestamp m)


Restrictions:
The name is returned in English and in upper case. The possible return values are: JANUARY, FEBRUARY, MARCH, APRIL, MAY, JUNE, JULY, AUGUST, SEPTEMBER, OCTOBER, NOVEMBER, DECEMBER.

now

Returns a timestamp value representing the current date and time (return type: timestamp). This function is a "system function" and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data access system used as the source.


quarter

Returns an integer from 1 to 4 representing the quarter in date "a" or timestamp "m". The value 1 represents January 1 through March 31.

Syntax:
integer quarter(date a)
integer quarter(timestamp m)

second

Returns an integer from 0 to 59 representing the second component of time "t" or timestamp "m".

Syntax:
integer second(time t)
integer second(timestamp m)

timestampadd

Returns a timestamp calculated by adding "count" number of intervals to timestamp "m".

Syntax:
timestamp timestampadd(string interval-constant, integer count, timestamp m)
timestamp timestampadd(integer interval-constant, integer count, timestamp m)

The interval-constant may be one of the following values:
• 'SQL_TSI_FRAC_SECOND' or 0
• 'SQL_TSI_SECOND' or 1
• 'SQL_TSI_MINUTE' or 2
• 'SQL_TSI_HOUR' or 3
• 'SQL_TSI_DAY' or 4
• 'SQL_TSI_WEEK' or 5
• 'SQL_TSI_MONTH' or 6
• 'SQL_TSI_QUARTER' or 7
• 'SQL_TSI_YEAR' or 8

Restrictions:
• The computation can depend on the daylight savings of the locale for 'SQL_TSI_HOUR'.
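The effect of the fixed-length interval constants can be sketched as follows. This is an illustration only, not Data Federator's implementation: `ts_add` is a hypothetical name, and the calendar-dependent intervals (month, quarter, year) are omitted because they need calendar-aware arithmetic.

```python
from datetime import datetime, timedelta

# Fixed-length intervals only; month/quarter/year are calendar-dependent
# and not handled in this sketch.
_UNITS = {'SQL_TSI_SECOND': 'seconds', 'SQL_TSI_MINUTE': 'minutes',
          'SQL_TSI_HOUR': 'hours', 'SQL_TSI_DAY': 'days',
          'SQL_TSI_WEEK': 'weeks'}

def ts_add(interval_constant, count, m):
    # Add "count" intervals to timestamp m; count may be negative.
    return m + timedelta(**{_UNITS[interval_constant]: count})
```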

timestampdiff

Returns an integer representing the number of intervals by which timestamp "m2" is greater than timestamp "m1".

Syntax:
integer timestampdiff(string interval-constant, timestamp m1, timestamp m2)
integer timestampdiff(integer interval-constant, timestamp m1, timestamp m2)

The interval-constant may be one of the following values:
• 'SQL_TSI_FRAC_SECOND' or 0
• 'SQL_TSI_SECOND' or 1
• 'SQL_TSI_MINUTE' or 2
• 'SQL_TSI_HOUR' or 3
• 'SQL_TSI_DAY' or 4
• 'SQL_TSI_WEEK' or 5
• 'SQL_TSI_MONTH' or 6
• 'SQL_TSI_QUARTER' or 7
• 'SQL_TSI_YEAR' or 8

Restrictions:
• If the difference is very large, the result can trigger an exception.
• Currently, the computation can depend on the daylight savings of the locale for 'SQL_TSI_HOUR'.
• The first day of a week is Sunday.
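For the fixed-length intervals, the counting rule can be sketched like this (an illustrative helper, not a Data Federator API; calendar-dependent intervals and negative differences are not handled):

```python
from datetime import datetime

_SECONDS = {'SQL_TSI_SECOND': 1, 'SQL_TSI_MINUTE': 60,
            'SQL_TSI_HOUR': 3600, 'SQL_TSI_DAY': 86400,
            'SQL_TSI_WEEK': 604800}

def ts_diff(interval_constant, m1, m2):
    # Whole intervals by which m2 exceeds m1 (m2 >= m1 assumed).
    return int((m2 - m1).total_seconds() // _SECONDS[interval_constant])
```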

trunc

Truncates the timestamp "m" to the nearest day.


Syntax: timestamp trunc(timestamp m)

week

Returns an integer from 1 to 53 representing the week in the year of date "a" or timestamp "m". A week is defined as the days starting Sunday and ending Saturday.

Syntax:
integer week(date a)
integer week(timestamp m)

Restrictions:
• The first day of a week is Sunday.
• The first week contains at least one day.
• If the first day of a year is a Saturday, then the following applies:
  • January 1st is considered as week 1.
  • January 2nd to 8th is considered week 2.
  • December 25th to 31st is considered week 53.
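The Sunday-based numbering above can be sketched as follows; `df_week` is a hypothetical helper written against the stated rules (week 1 starts on January 1 and may be short, each later week starts on a Sunday), not a Data Federator API:

```python
from datetime import date

def df_week(a):
    jan1 = date(a.year, 1, 1)
    # Python: Monday=0..Sunday=6; days in the (possibly short) first week
    # before the first Sunday of the year.
    first_week_len = 7 - ((jan1.weekday() + 1) % 7)
    doy = (a - jan1).days  # 0-based day of the year
    if doy < first_week_len:
        return 1
    return 2 + (doy - first_week_len) // 7
```

For 2005, whose January 1 falls on a Saturday, this reproduces the restrictions listed above.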

year

Returns an integer representing the year component of date "a" or timestamp "m".

Syntax:
integer year(date a)
integer year(timestamp m)

String functions

The following string functions are available in Data Federator.

ascii

Returns an integer representing the code value of the leftmost character in string "s". Returns NULL if the string is NULL.

Syntax: integer ascii(string s)

Restrictions: returns NULL if s == '' (NULL string)

char

Returns the character whose ASCII value corresponds to the INTEGER "n", where n is between 0 and 255. Returns NULL if n is out of range.

Syntax string char(integer n)

Restrictions returns NULL if n < 0 or n > 255


concat

Concatenates two strings.

Syntax: string concat(string s1, string s2)

Examples: concat('AB', 'CD') = 'ABCD'

Restrictions: if s1 == NULL or s2 == NULL, returns NULL

containsOnlyDigits

Returns true if the string "s" contains only digits, false otherwise.

Syntax: boolean containsOnlyDigits(string s)

insert

Returns a character string formed by deleting "length" characters from string


"s1" beginning at the position "start", and inserting string "s2" into string "s1"
at start. The value of the position "start" must be an INTEGER in the range
of 1 to the length of the string s1 plus one. The value of length must be an
INTEGER in the range of 0 to the length of the string s1. Returns NULL if
the start or the length is out of range.


Syntax: string insert(string s1, integer start, integer length, string s2)

Restrictions: if start is not in the range [1 .. s1.length] or length < 0, returns NULL

isLike

Checks a string s1 for a matching pattern s2. The pattern follows the SQL
92 standard. The string s3 can be used to specify an escape character in
the pattern.

If an '_' or '%' occurs in the string s1, you can match it by defining a character
s3 and, in the pattern s2, preceding the '_' or '%' by s3.

The pattern is as follows.


• characters are either:
• the "metacharacters" '%' (percent) or '_' (underscore)
• "regular characters", which include any character that is not a
metacharacter

• a '_' matches any single character


• a '%' matches any string of characters
• any regular character in the pattern s2 matches the same character in s1
• if an '_' or '%' occurs in the string s1, you can match it by defining a
character s3 and, in the pattern s2, preceding the '_' or '%' by s3


Syntax:
boolean isLike(string s1, string s2)
boolean isLike(string s1, string s2, string s3)

Note:
The third argument above is a character used to escape metacharacters. See restrictions below.

Examples:
isLike("ABCD", "AB%") = true
isLike("ABCD", "AB_D") = true
isLike("10000", "100%") = true
isLike("10000", "100\%", "\") = false
isLike("status: 100%", "100\%", "\") = true

Restrictions:
• (string s1, string s2): if s1 == NULL or s2 == NULL, returns NULL
• (string s1, string s2, string s3): if s1 == NULL or s2 == NULL or s3 == NULL, returns NULL
• (string s1, string s2, string s3): in s2, any occurrence of s3 must be followed by '_' or '%' or a second s3
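The matching rules above can be illustrated by translating the LIKE pattern into a regular expression. This is an illustrative sketch only, assuming whole-string matching per SQL-92; `is_like` is a hypothetical name, not Data Federator's implementation:

```python
import re

def is_like(s1, s2, s3=None):
    # Translate the SQL-92 LIKE pattern s2 into a regex;
    # s3, if given, is a single escape character.
    parts, i = [], 0
    while i < len(s2):
        c = s2[i]
        if s3 is not None and c == s3 and i + 1 < len(s2):
            parts.append(re.escape(s2[i + 1]))  # escaped literal _ or %
            i += 2
            continue
        if c == '%':
            parts.append('.*')   # % matches any string of characters
        elif c == '_':
            parts.append('.')    # _ matches any single character
        else:
            parts.append(re.escape(c))
        i += 1
    return re.fullmatch(''.join(parts), s1, re.DOTALL) is not None
```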

left

Returns "n" characters from the left of a string "s".


Syntax: string left(string s, integer n)

Alias: leftStr()

Restrictions: if n <= 0, returns NULL

leftStr

Returns "n" characters from the left of a string "s".

Syntax: string leftStr(string s, integer n)

Alias: left()

Restrictions: if n <= 0, returns NULL

len

Returns the length of a string "s". Spaces are counted.

Syntax integer len(string s)

Alias length()


lPad

Pads a string "s1" on the left side to a specified length "n" using another
string "s2".

Syntax: string lPad(string s1, string s2, integer n)

Examples:
lPad('AB', 'x', 4) = 'xxAB'
lPad('ABC', 'x', 2) = 'AB'
lPad('ABC', 'cd', 7) = 'cdcdABC'

Restrictions:
• if n < s1.length, returns leftStr(s1, n)
• if n <= 0, returns NULL
• if s2 == '' (null string), returns NULL

Note:
If n is less than the length of s1, s1 is truncated.
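The edge cases above can be sketched as follows; `l_pad` is a hypothetical helper written for illustration (NULL is modeled as Python's None), not a Data Federator API:

```python
def l_pad(s1, s2, n):
    # Mirror of the documented edge cases: truncate when n < len(s1),
    # NULL (None here) when n <= 0 or s2 is the null string.
    if n <= 0 or s2 == '':
        return None
    if n < len(s1):
        return s1[:n]
    return (s2 * n)[: n - len(s1)] + s1
```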

lTrim

Removes the first sequence of spaces and tabs from the left side of the string
s.

If you specify s1 and s2, lTrim removes the first sequence of s2 from the left side of s1. The string s2 must be a single character.


Syntax:
string lTrim(string s)
string lTrim(string s1, string s2)

Examples:
lTrim(' ABCD') = 'ABCD'
lTrim(' AB CD ') = 'AB CD '

Restrictions:
• (string s): the characters removed are: ' ', '\t', '\r'
• (string s): if lTrim(s) == '', returns NULL
• (string s1, string s2): if lTrim(s1, s2) == '', returns NULL
• (string s1, string s2): s2 must be a single character.

match

Returns true if the first string "s1" matches the pattern "s2", false otherwise.

Syntax: boolean match(string s1, string s2)

The pattern s2 is a Java regular expression. For example, '([A-Z]*)([0]*)(\d*)'


is a valid pattern.


Refer to the Sun™ Java documentation for more information on Java regular
expressions at
http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html.

permute

Permutes a string using two templates.

Takes the first string s1, whose reference pattern is supplied in the second
argument reference-pattern, and applies a new pattern new-pattern
to produce a resulting string. The new pattern is expressed by permuting the
letters defined in the reference pattern.
• The reference pattern assigns each character in string s to the character
in the corresponding position in reference-pattern. The length of
reference-pattern must be equal to the length of s.
• The new pattern permutes the characters that were assigned in the
reference pattern.

For example, the character string s = '22/09/1999', which represents a date, can be converted to '1999-09-22', as follows.

The reference pattern can be described as 'DD/MM/YYYY', where 'D' is the day, 'M' the month and 'Y' the year. The letters are matched according to their position.

In this example, the first 'D' refers to the first character in string s, the second
'D' to the second character in s, '/' to the third character in s, the first 'M' to
the fourth character and so on. This is why the length of reference-pat
tern must always equal the length of the string s. The function returns an
error if the two strings are of different lengths.

Once letter mapping has been defined, new-pattern must be provided to transform string s. For example, if 'YYYY-MM-DD' is the new pattern, the function defines the transformation of s to a new date format. Thus, for s = '22/09/1999' we get '1999-09-22'.

string s: 22/09/1999
reference-pattern: MM/DD/YYYY
new-pattern: YYYY-MM-DD
result: 1999-22-09

Text can also be inserted into the new pattern, provided none of the letters
are already used in the reference pattern. For example, using the new pattern
'MM/DD Year: YYYY' produces the following string: '09/22 Year: 1999'. The
permute function is helpful not only in transforming formats (dates, times,
encoding), but also in extracting information from a code of pre-defined length
(refer to the examples below).


Syntax: string permute(string s1, string reference-pattern, string new-pattern)

Examples:
• Change the format of how a date is represented:
  permute('02/09/2003', 'DD/MM/YYYY', 'YYYY-MM-DD') = '2003-09-02'
  permute('02-09/200', 'DD/MM/YYYY', 'YYYY-MM-DD') = '2003-09-02'
  permute('02/09_2003', 'DD/MM/YYYY', 'DL :MM/DD An :YYYY') = 'DL :09/02 An :2003'
• Extract a month and year from a string of characters representing one date:
  permute('2003-09-02', 'DDYY-MM-YY', 'MM/YY') = '09/03'
• Compose a number from an internal code:
  permute('03/03/21-0123', 'bbbYY/MM/DD-NNNN', 'YYMMDDNNNN') = '0303210123'
• Extract date information from internal code:
  permute('2003NL987M08J21', 'YYYYXXXXXXMMXDD', 'YYYY-MM-DD') = '2003-08-21'
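The positional-letter semantics described above can be sketched as follows. This is a simplified, hypothetical reading (`permute` here is not Data Federator's implementation): it does not handle literal text in the new pattern whose letters also occur in the reference pattern.

```python
def permute(s, ref, new):
    # Each occurrence of a pattern letter in `new` consumes the next
    # character that the same letter covered in `ref`; other characters
    # in `new` are emitted literally.
    if len(s) != len(ref):
        return None  # the two strings must have equal length
    groups, counts, out = {}, {}, []
    for ch, p in zip(s, ref):
        groups.setdefault(p, []).append(ch)
    for p in new:
        if p in groups:
            i = counts.get(p, 0)
            out.append(groups[p][i])
            counts[p] = i + 1
        else:
            out.append(p)
    return ''.join(out)
```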


permuteGroups

Replaces strings of characters by matching a pattern.

Takes the first string "s1", whose reference pattern is supplied in the second
argument "reference-pattern", and applies a new pattern "new-pattern" to
produce a resulting string. The new pattern is expressed by permuting the
groups of regular expressions defined in the reference pattern.

The "reference-pattern" is a Java regular expression whose groups are defined in parentheses. For example, in the reference-pattern "([A-Z]*)([0]*)(\d*)", there are three groups: ([A-Z]*), ([0]*) and (\d*).

In "new-pattern", the groups are referenced by digits enclosed in braces (e.g.
'{1}'). The first group is {1}, the second is {2}, and so on. If new-pattern must
contain a '{' (left brace) character that does not reference a group, the
character must be preceded by the character '\' (backslash).

The permuteGroups function is helpful not only in transforming formats (dates,
times, encoding), but also in extracting information from a code (see the
examples below).

If the input string does not match the pattern, the function returns NULL.

Refer to the Sun Java documentation for more information on Java regular
expressions at
http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html.


Syntax string permuteGroups(string s1, string reference-pattern, string new-pattern)

Examples
• permuteGroups( '1978-01-12', '([\d]*)(.)([\d]*)(.)([\d]*)', '{5}/{3}' ) = '12/01'

• permuteGroups( '345', '([a-z]*)', '{1}' ) = NULL

• permuteGroups( 'text', '([a-z]*)(\d*)', '{2}' ) = NULL

If a group does not match (in a matching pattern), the result is a null string
for that group.
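The behavior can be approximated with Python's `re` module, which uses the same regular-expression group concept as Java. This is an illustrative sketch only: the function name is hypothetical, escaped braces are not handled, and Data Federator's treatment of empty results may differ.

```python
import re

def permute_groups(s, ref_pattern, new_pattern):
    # Match the WHOLE input against the reference pattern; return
    # NULL (None) when it does not match, as documented above.
    m = re.fullmatch(ref_pattern, s)
    if m is None:
        return None
    # Substitute each {n} reference with the corresponding matched group;
    # an unmatched group contributes a null (empty) string.
    def repl(ref):
        g = m.group(int(ref.group(1)))
        return g if g is not None else ''
    return re.sub(r'\{(\d+)\}', repl, new_pattern)
```

With this sketch, `permute_groups('1978-01-12', r'([\d]*)(.)([\d]*)(.)([\d]*)', '{5}/{3}')` produces `'12/01'`, and a non-matching input such as `'345'` against `'([a-z]*)'` produces `None`.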

pos

Returns the position of the first occurrence of string "s1" in string "s2". Returns
0 if string s1 is not found. The first character is in position 1. If "start" is
specified, the search begins from position "start" in s2.


Syntax integer pos(string s1, string s2, integer start)

Alias locate()

Examples
pos('cd','abcd') = 3

pos('abc', 'abcd') = 1

pos('cd', 'abcdcd') = 3

pos('cd', 'abcdcd', 3) = 3

pos('cd', 'abcdcd', 4) = 5

pos('ef', 'abcd') = 0

Restrictions
• start < 1 is equivalent to start == 1
• if start > length of s2, returns 0
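The 1-based position rules can be sketched as follows (an illustrative Python approximation; the function name is hypothetical):

```python
def pos(s1, s2, start=1):
    # Positions are 1-based; 0 means "not found".
    if start < 1:
        start = 1          # start < 1 behaves like start == 1
    # str.find is 0-based and returns -1 when absent, so shifting by
    # one gives the documented 1-based result and 0 for "not found".
    return s2.find(s1, start - 1) + 1
```

For example, `pos('cd', 'abcdcd', 4)` returns 5, matching the examples above.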

repeat

Returns a string formed by repeating string "s". The string is repeated "n"
times. Returns NULL if the count is negative.


Syntax string repeat(string s, integer n)

Restrictions if n <= 0, returns NULL

replace

Replaces all occurrences of string "s2" in string "s1" with string "s3".

Syntax string replace(string s1, string s2, string s3)

Examples replace( 'rar', 'a', 'ada' ) returns 'radar'

Restrictions
• if s2 == '' (null string), returns s1
• if s3 == '' (null string), does not return NULL

replaceStringExp

Replaces all occurrences of string "s2" in string "s1" with string "s3", following
the syntax of a Java regular expression.

Refer to the Sun Java documentation for more information on Java regular
expressions at
http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html


Syntax string replaceStringExp(string s1, string s2, string s3)

right

Returns "n" characters from the right of a string "s".

Syntax string right(string s, integer n)

Alias rightStr()

Restrictions if n <= 0, returns NULL

rightStr

Returns "n" characters from the right of a string "s".

Syntax string rightStr(string s, integer n)

Alias right()

Restrictions if n <= 0, returns NULL


rPad

Pads a string "s1" on the right side to a specified length "n" using another
string "s2".

Syntax string rPad(string s1, string s2, integer n)

Restrictions
• if n < length of s1, returns leftStr(s1, n)
• if n <= 0, returns NULL
• if s2 == '' (null string), returns NULL

Note:
If n is less than the length of s1, s1 is truncated.
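The padding and truncation rules can be sketched as follows (an illustrative Python approximation of the documented behavior; the function name is hypothetical):

```python
def r_pad(s1, s2, n):
    # NULL (None) for a non-positive length or an empty pad string.
    if n <= 0 or s2 == '':
        return None
    # When n is shorter than s1, the string is truncated on the right.
    if n < len(s1):
        return s1[:n]
    # Otherwise append copies of s2 until length n is reached, cutting
    # the last copy short if necessary.
    padded = s1
    while len(padded) < n:
        padded += s2
    return padded[:n]
```

For example, `r_pad('ab', 'xy', 5)` gives `'abxyx'`: two full copies of the pad would overshoot, so the second copy is cut short.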

rPos

Returns the position of the last occurrence of string "s1" in string "s2". Returns
0 if string s1 is not found. The first character is in position 1, and the counting
proceeds from left to right.


Syntax integer rPos(string s1, string s2)

Examples
rPos('CD','ABCD') = 3

rPos('CD', 'ABCDCD') = 5

rPos('ABC', 'ABCD') = 1

rPos('EF', 'ABCD') = 0

rTrim

Removes the first sequence of spaces and tabs from the right side of the
string s.

If you specify s1 and s2, rTrim removes the first sequence of s2 from the
right side of s1. The string s2 must be a single character.


Syntax
string rTrim(string s)
string rTrim(string s1, string s2)

Examples
rTrim('ABCD ') = 'ABCD'

rTrim(' AB CD ') = ' AB CD'

Restrictions
• (string s): the characters removed are: ' ', '\t', '\r'
• (string s): if rTrim(s) == '', returns NULL
• (string s1, string s2): if rTrim(s1, s2) == '', returns NULL
• (string s1, string s2): s2 must be a single character

space

Returns a string of "n" spaces. Returns NULL if "n" is negative.

Syntax string space(integer n)

Restrictions if n <= 0, returns NULL


subString

Returns a substring from a string.

This function extracts the sub-string beginning in position "n1" that is "n2"
characters long, from string "s". If string s is too short to make "n2" characters,
the end of the resulting sub-string corresponds to the end of string "s" and
is thus shorter than "n2".

If you do not specify n2, the sub-string from n to the end of s will be returned.

Syntax
string subString(string s, integer n)
string subString(string s, integer n1, integer n2)

Examples
subString('ABCD', 2, 2) = 'BC'

subString('ABCD', 2, 10) = 'BCD'

subString('ABCD', 0, 2) = NULL

Restrictions if n2 <= 0, or n1 > length of s, or n1 <= 0, or s == '', returns NULL
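A sketch of the documented 1-based extraction rules (an illustrative Python approximation; the function name is hypothetical):

```python
def sub_string(s, n1, n2=None):
    # 1-based start position; NULL (None) outside the documented bounds.
    if s == '' or n1 <= 0 or n1 > len(s):
        return None
    # With no length, take everything from n1 to the end of s.
    if n2 is None:
        return s[n1 - 1:]
    if n2 <= 0:
        return None
    # Python slicing silently stops at the end of s, which reproduces
    # the "shorter than n2" behavior described above.
    return s[n1 - 1:n1 - 1 + n2]
```

For example, `sub_string('ABCD', 2, 10)` gives `'BCD'`, because the string ends before 10 characters can be taken.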

toLower

Converts a string into lower case.


Syntax string toLower(string s)

Alias lcase()

Examples
toLower('ABCD') = 'abcd'

toLower('Cd123') = 'cd123'

toUpper

Converts a string into uppercase.

Syntax string toUpper(string s)

Alias ucase()

Examples toUpper('abcd') = 'ABCD'

trim

Removes the first sequence of spaces and tabs from the left and right sides
of the string s.

If you specify s1 and s2, trim removes the first sequence of s2 from the left
and right sides of s1. The string s2 must be a single character.


Syntax
string trim(string s)
string trim(string s1, string s2)

Restrictions
• (string s): the characters removed are: ' ', '\t', '\r'
• (string s): if trim(s) == '', returns NULL
• (string s1, string s2): if trim(s1, s2) == '', returns NULL
• (string s1, string s2): s2 must be a single character
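The two trim forms can be sketched as one Python function (an illustrative approximation; the name is hypothetical, and Python's `strip` removes all leading and trailing occurrences, which matches the "first sequence" rule for contiguous runs):

```python
def trim(s1, s2=None):
    # Default form trims spaces, tabs and carriage returns from both
    # ends; the two-argument form trims the single character s2 instead.
    chars = ' \t\r' if s2 is None else s2
    out = s1.strip(chars)
    # An empty result becomes NULL (None), per the restrictions above.
    return out if out != '' else None
```

For example, `trim('xxABx', 'x')` gives `'AB'`, and trimming an all-blank string gives `None`.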

System functions

The following system functions are available in Data Federator.

database

Returns the name of the database (catalog), as a string. This function is a
"system function" and has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data
access system used as the source.


ifElse

Returns a value based on a condition "b".

The condition b must be a boolean expression.


• If b resolves to 'true', the function returns the second argument.
• If b resolves to 'false', the function returns the third argument.
• For the condition b, you can use the syntax of a filter formula. See The
syntax of filter formulas on page 619.

Syntax
boolean ifElse(boolean b, boolean b1, boolean b2)
date ifElse(boolean b, date a1, date a2)
decimal ifElse(boolean b, decimal c1, decimal c2)
double ifElse(boolean b, double d1, double d2)
integer ifElse(boolean b, integer n1, integer n2)
null ifElse(boolean b, null u1, null u2)
string ifElse(boolean b, string s1, string s2)
timestamp ifElse(boolean b, timestamp m1, timestamp m2)
time ifElse(boolean b, time t1, time t2)

In any of the above signatures, the third argument may be null.

668 Data Federator User Guide


Function reference
Function reference 20

Restrictions if one of the second or third arguments is null, the function does
not necessarily return null
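The selection behavior, including why a null branch does not force a null result, can be sketched as (illustrative Python; the name is hypothetical):

```python
def if_else(b, v_true, v_false):
    # Only the selected branch determines the result, so a null (None)
    # in the unchosen branch never surfaces.
    return v_true if b else v_false
```

For example, `if_else(True, 'yes', None)` returns `'yes'` even though the third argument is null.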

min

Calculates the minimum of two values.

Syntax
type min(type t1, type t2)

The two arguments must be of the same type. "type" can be integer, double,
decimal, string, timestamp, time, date, boolean, or null.

Examples
min (2987, 45) = 45

min (-2987, 45) = -2987

min (2987.8, 45.0) = 45.0

min (-2987.8, 45.0) = -2987.8

Restrictions if t1 == null or t2 == null, returns null

nvl

Checks if the first argument is null.


• If the first argument is null, this function returns the second argument.
• If the first argument is not null, this function returns the first argument.


Syntax
boolean nvl(boolean b1, boolean b2)
date nvl(date a1, date a2)
decimal nvl(decimal c1, decimal c2)
double nvl(double d1, double d2)
integer nvl(integer n1, integer n2)
string nvl(string s1, string s2)
timestamp nvl(timestamp m1, timestamp m2)
time nvl(time t1, time t2)
null nvl(null u1, null u2)

Alias ifNull()

Restrictions if one of the arguments is null, this function does not necessarily
return null
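The fallback rule can be sketched as (illustrative Python; the name is hypothetical):

```python
def nvl(a, b):
    # Fall back to the second argument only when the first is null (None);
    # a null second argument only matters when the first is null too.
    return b if a is None else a
```

For example, `nvl(None, 'default')` returns `'default'`, while `nvl('value', 'default')` returns `'value'`.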

user

Returns the user name, as a string. This function is a "system function" and
has the following characteristics:
• It is non-deterministic.
• It returns the value from Data Federator Query Server, not from the data
access system used as the source.


valueIfElse

Returns a value if two values match; otherwise returns a different value.

Compares value a1 with a2.


• If a1 is equal to a2, returns value b1.
• Otherwise, returns value b2.

Syntax
type2 valueIfElse(type1 a1, type1 a2, type2 b1, type2 b2)

where type1 and type2 can be integer, double, decimal, string, timestamp,
time, date, boolean or null.

Conversion functions

The following conversion functions are available in Data Federator.

cast

Casts the first argument "x" as the type specified by the second argument.

The second argument is a type keyword that can have the following
values:
• NULL
• STRING
• INTEGER
• DOUBLE
• DECIMAL
• DATE
• TIME


• TIMESTAMP
• BOOLEAN

Support
This function is only supported in the Data Federator Administrator.

In Data Federator Designer, use the specific functions toBoolean(),
toInteger(), toDouble(), toDecimal(), toDate(), toTime(), toTimestamp(),
toString() and toNull().

Syntax
null cast(type x AS NULL)
string cast(type x AS STRING)
integer cast(type x AS INTEGER)
double cast(type x AS DOUBLE)
decimal cast(type x AS DECIMAL)
date cast(type x AS DATE)
time cast(type x AS TIME)
timestamp cast(type x AS TIMESTAMP)
boolean cast(type x AS BOOLEAN)

convert

Converts the first argument "x" to the type specified by the second argument.

The second argument is a string constant that can have the following values:

• 'NULL'
• 'STRING'
• 'INTEGER'
• 'DOUBLE'
• 'DECIMAL'
• 'DATE'
• 'TIME'
• 'TIMESTAMP'
• 'BOOLEAN'

Support
This function is only supported in the Data Federator Administrator.

In Data Federator Designer, use the specific functions toBoolean(),
toInteger(), toDouble(), toDecimal(), toDate(), toTime(), toTimestamp(),
toString() and toNull().


Syntax
null convert(type x, 'NULL')
string convert(type x, 'STRING')
integer convert(type x, 'INTEGER')
double convert(type x, 'DOUBLE')
decimal convert(type x, 'DECIMAL')
date convert(type x, 'DATE')
time convert(type x, 'TIME')
timestamp convert(type x, 'TIMESTAMP')
boolean convert(type x, 'BOOLEAN')

convertDate

Converts a string "s" having a specific format into a DATE. The INTEGER
"n" is reserved for format values. It must equal 1.

The following formats are available.


• The expected format of string s is CyyMMdd.
• "C" can have two values: 1, or 2, where 1 represents the 20th century,
and 2 represents the 21st century.
• "yy" represents the last two digits of the year.
• "MM" represents two digits of the month.
• "dd" represents two digits of the day.


Syntax date convertDate(string s, integer n)

hexaToInt

Converts the hexadecimal value of the string "s" into an integer.

Syntax integer hexaToInt(string s)

Examples hexaToInt( 'AF' ) == 175

intToHexa

Converts an integer "n" into a hexadecimal value. Returns the hexadecimal
value as a string.

If n < 0, this function returns the hexadecimal value of 2^32 + n. Thus,
intToHexa(-1) == 'FFFFFFFF'.

Syntax string intToHexa(integer n)
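Both conversions can be sketched in a few lines of Python (illustrative only; the names are hypothetical):

```python
def hexa_to_int(s):
    # Parse a hexadecimal string as an integer.
    return int(s, 16)

def int_to_hexa(n):
    # Negative values wrap to their 32-bit two's-complement form,
    # which is the documented 2^32 + n behavior.
    return format(n & 0xFFFFFFFF, 'X')
```

For example, `hexa_to_int('AF')` gives 175 and `int_to_hexa(-1)` gives `'FFFFFFFF'`.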

toBoolean

Converts the argument to a BOOLEAN value.


• If the argument is a string "s", this function returns the value true if s
equals 'true' or 'TRUE' or any mixed case variation of the string 'true'.
Otherwise, it returns false.
• If the argument is a BOOLEAN "b", this function returns the value b.


• If the argument is NULL, this function returns NULL.

Syntax
boolean toBoolean(boolean b)
null toBoolean(null u)
boolean toBoolean(string s)

Examples
toBoolean('true') = true

toBoolean('TrUe') = true

toBoolean('tru') = false

toBoolean('False') = false

toBoolean('F') = false

toBoolean('f') = false

Restrictions string s: if trim(s) == '', returns NULL
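The string form can be sketched as follows (an illustrative Python approximation; the name is hypothetical, and Data Federator's handling of surrounding whitespace in non-empty strings may differ):

```python
def to_boolean(s):
    # NULL (None) for an all-whitespace string, per the restriction above.
    if s.strip() == '':
        return None
    # True only for a case-insensitive match of the word 'true'.
    return s.lower() == 'true'
```

For example, `to_boolean('TrUe')` returns true while `to_boolean('tru')` returns false.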

toDate

Converts character string "s" to a date.

The string s should appear as 'YYYY-MM-DD' where 'YYYY' is the year, 'MM'
is the month and 'DD' is the day.

Examples of character strings that respect this format: '2003-09-07' and
'2003-11-29'. An error is returned if the format is wrong.

No restrictions are imposed on month, day or year digit values. If the month
digit is greater than 12 or the day digit does not exist in the corresponding
month, the "toDate" function uses the internal calendar to convert to the
correct date. Thus, '2003-02-29' will be converted to '2003-03-01' and
'2002-14-12' to '2003-02-12'.


Syntax
date toDate(date a)
null toDate(null u)
date toDate(string s)
date toDate(timestamp m)

Examples
toDate('2003-02-12') = '2003-02-12'

toDate('2003-02-29') = '2003-03-01'

toDate('2002-14-12') = '2003-02-12'

toDate('1994-110-12') = '2003-02-12'
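The calendar normalization described above can be sketched in Python (an illustrative approximation of the documented rollover rules; the function name is hypothetical):

```python
from datetime import date, timedelta

def to_date(s):
    # Normalize out-of-range month and day values through the calendar,
    # the way '2003-02-29' rolls over to '2003-03-01'.
    y, m, d = (int(part) for part in s.split('-'))
    y += (m - 1) // 12          # fold excess months into years
    m = (m - 1) % 12 + 1
    # Excess days roll forward from the first of the month.
    return str(date(y, m, 1) + timedelta(days=d - 1))
```

For example, `to_date('2002-14-12')` gives `'2003-02-12'`: month 14 folds into year 2003, month 2.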

toDecimal

Converts the argument to a decimal.


• If the argument is a string "s", s must be in the decimal number format
where the period is used as a separator for the decimal portion. An error
is returned if s is not in the decimal number format.
• If the argument is a decimal, double, or integer, this function returns the
decimal value of the argument.
• If the argument is null, this function returns null.


Syntax
decimal toDecimal(string s)
decimal toDecimal(decimal c)
decimal toDecimal(double d)
decimal toDecimal(integer n)
decimal toDecimal(null u)

Restrictions (string s): if trim(s) == '', returns NULL

toDouble

Converts the argument to a DOUBLE.


• If the argument is a string "s", s must be in the decimal number format
where the period is used as a separator for the decimal portion. An error
is returned if s is not in the decimal number format.
• If the argument is a DECIMAL, DOUBLE, or INTEGER, this function
returns the double value of the argument.
• If the argument is NULL, this function returns NULL.

Syntax
double toDouble(string s)
double toDouble(decimal c)
double toDouble(double d)
double toDouble(integer n)
double toDouble(null u)


Examples
toDouble ('2987.9') = 2987.9

toDouble ('-2987.9') = -2987.9

Restrictions (string s): if trim(s) == '', returns NULL

toInteger

Converts the argument to an integer.


• If the argument is a string "s", this function returns floor(s). An error is
returned if the integer represented by s is too big (for details on data types
and their ranges, see Using data types and constants in Data Federator
Designer on page 604).
• If the argument is a DECIMAL, DOUBLE, or INTEGER, this function
returns the INTEGER value of the argument.
• If the argument is NULL, this function returns NULL.

Syntax
integer toInteger(string s)
integer toInteger(decimal c)
integer toInteger(double d)
integer toInteger(integer n)
integer toInteger(null u)

Examples
toInteger ('2987') = 2987

toInteger ('-2987') = -2987


Restrictions (string s): if trim(s) == '', returns NULL

toNull

Converts the value of the argument into a null value.

NULL toNull(BOOLEAN b)

NULL toNull(DATE a)

NULL toNull(DECIMAL c)

NULL toNull(DOUBLE d)

Syntax NULL toNull(INTEGER n)

NULL toNull(NULL u)

NULL toNull(STRING s)

NULL toNull(TIME t)

NULL toNull(TIMESTAMP m)

toString

Converts the value of the argument into a string value.


• If you provide a single argument, the argument is converted into a string.
• For toString( double d, integer n ) and toString( decimal c,
integer n ), the integer n represents the number of fractional digits to
include in the resulting string. The decimal is rounded up to fit the number
of fractional digits.

• For toString( timestamp m, string s ), the string s represents a
pattern. The pattern defines the format in which you want to extract the
elements of the timestamp m.

For example, toString( 2001-12-30 10:12:32.222, 'yyyy/MM/dd' ) ==
'2001/12/30'.

For details on date formats, see the Java 2 Platform API Reference for
the java.text.SimpleDateFormat class, at the following URL:

"http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html".
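As an aside, Python's strftime offers an analogous pattern mechanism, though its directives differ from the Java SimpleDateFormat pattern letters that Data Federator uses (this snippet is illustrative only):

```python
from datetime import datetime

# The Java pattern 'yyyy/MM/dd' corresponds to '%Y/%m/%d' in Python.
ts = datetime(2001, 12, 30, 10, 12, 32, 222000)
formatted = ts.strftime('%Y/%m/%d')
# formatted is '2001/12/30', matching the toString example above
```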

Syntax
STRING toString(BOOLEAN b)
STRING toString(DATE a)
STRING toString(DECIMAL c)
STRING toString(DOUBLE d)
STRING toString(INTEGER n)
STRING toString(NULL u)
STRING toString(STRING s)
STRING toString(TIME t)
STRING toString(TIMESTAMP m)
STRING toString(DECIMAL c, INTEGER n)
STRING toString(DOUBLE d, INTEGER n)
STRING toString(TIMESTAMP m, STRING s)

Alias str()


Examples
toString(45) = '45'

toString (-45) = '-45'

toString(45.9) = '45.9'

toString (-45.9) = '-45.9'

toString('2002-09-09') = '2002-09-09'

toString('23:08:08') = '23:08:08'

toString('2002-03-03 23:08:08.0') = '2002-03-03 23:08:08'

toString(true) = 'T'

toString(false) = 'F'

Restrictions
• (double d, integer n): n must be a constant
• (decimal c, integer n): n must be a constant

toTime

Converts the argument to a time.


• If the argument is a STRING "s", this function converts s to a TIME. This
STRING should be in the format 'HH:MM:SS' where 'HH' is the hour, 'MM'
the minutes and 'SS' the seconds.

Examples of STRINGs that respect this format: '23:09:07' and '03:11:29'.
An error is returned if the format is wrong. No restrictions are imposed
on hour, minute or second values. If the number of minutes or seconds
is greater than 60 or the number of hours is greater than 24, the toTime()
function uses an internal clock to convert to the correct time. Thus,
'0:450:29' will be converted to '07:30:29' and '25:14:180' to '01:17:00'.
• If the argument is a DATE, TIME or TIMESTAMP, this function converts
the argument to a TIME.
• If the argument is NULL, this function returns NULL.

Syntax
TIME toTime(STRING s)
TIME toTime(DATE a)
TIME toTime(TIME t)
TIME toTime(TIMESTAMP m)
TIME toTime(NULL u)

Examples
toTime('02:10:09') = '02:10:09'

toTime('0:450:29') = '07:30:29'

toTimestamp

Converts the argument to a TIMESTAMP.


• If the argument is a STRING "s", this function converts s to a TIMESTAMP.
This STRING s should be in the format 'YYYY-MM-DD HH:mm:SS(.ssss)'
where 'YYYY' is the year, 'MM' is the month, 'DD' is the day, 'HH' is the
hour, 'mm' is the minute, 'SS' is the second, and 'ssss' is the millisecond.

Examples of character strings that respect this format: '2003-02-17
23:09:07' and '2003-11-12 03:11:29'.
• For toTimestamp( s1, s2 ), the string "s2" represents a pattern. The pattern
defines the format in which you want to extract the elements of the string
s1.


For example, toTimestamp( '4:30:26 PM on January 3, 1976',
'KK:mm:ss a \'on\' MMMM d, yyyy' ) == 1976-01-03 16:30:26.0.

For details on date formats, see the Java 2 Platform API Reference for
the java.text.SimpleDateFormat class, at the following URL:
http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.

An error is returned if the format is wrong. No restrictions are imposed on
year, month and day number values or on hour, minute and second number
values.

If the month number is greater than 12, the day of the month does not exist
in the corresponding month, the number of minutes or seconds is greater
than 60, or the hour digit is greater than 24, the timestamp function uses an
internal clock and calendar to convert to the correct timestamp. Thus,
'2002-09-09 25:14:180' will be converted to '2002-09-10 01:17:00'.

Syntax
TIMESTAMP toTimestamp(STRING s)
TIMESTAMP toTimestamp(STRING s1, STRING s2)
TIMESTAMP toTimestamp(DATE a)
TIMESTAMP toTimestamp(TIME t)
TIMESTAMP toTimestamp(TIMESTAMP m)
TIMESTAMP toTimestamp(NULL u)


Examples
toTimestamp('2003-02-12 02:10:09') = '2003-02-12 02:10:09.0'

toTimestamp('2003-02-29 02:10:09') = '2003-03-01 02:10:09.0'

toTimestamp('2002-14-12 02:10:09') = '2003-02-12 02:10:09.0'

toTimestamp('1994-110-12 02:10:09') = '2003-02-12 02:10:09.0'

toTimestamp('2003-02-12 0:450:29') = '2003-02-12 07:30:29.0'

toTimestamp('2002-09-09 25:14:180') = '2002-09-10 01:17:00.0'

Restrictions (time t): the date value of the constant is 1970-01-01

val

Converts a STRING "s" into a DECIMAL.

The string "s" must be in the decimal number format where the period is used
as a separator for the decimal portion. An error is returned if s is not in the
decimal number format.

Syntax DECIMAL val(STRING s)


Examples
val('2987.9') = 2987.9

val('-2987.9') = -2987.9

val('UUYGV76') = 0.0

Restrictions if trim(s) == '', returns NULL

21 SQL syntax reference

SQL syntax overview


This chapter introduces Data Federator Query Server query language and
syntax. It provides an overview of the main concepts of working with Data
Federator Query Server. Use the table below to learn how to use the syntax
described in this chapter to administrate the Data Federator Query Server.

Administration function        See this chapter
Managing users and roles       About user accounts, roles, and privileges on page 504
Managing resources             Managing resources and properties of connectors on page 483
Controlling query execution    Query execution overview on page 530

Data Federator Query Server query language
Whenever possible, Data Federator aligns to the standard SQL-92 syntax.
It is important to understand, however, how some elements are used by, or
influence, statements in Data Federator Query Server. This section describes
elements of SQL-92 implemented by Data Federator including object
management, data types, select and expressions.

Identifiers and naming conventions

For historical reasons and to support hierarchies, catalog names start with
"/". The "/" character is also the delimiter between two levels in a hierarchy.

Because "/" is not an alpha-numeric character, you have to delimit catalog
names by quotes when sending queries that use fully qualified table names
to Data Federator Query Server.

Example: Defining the name of a table

You can use two formats to reference a table: qualified name or URL.

Qualified name        URL
c.s.t                 /c/s/t
"c"."s"."t"           /c/s/t
"/c1/c2"."s"."t"      /c1/c2/s/t
"/c1/c2".s.t          /c1/c2/s/t

If there is a catalog or schema defined by default, you can omit the name
of the catalog or schema in the reference to the table.

To reference      if the default    and if the default    then use the         or use the
the table...      catalog is...     schema is...          qualified name...    URL...
c.s.t             c                                       s.t                  s/t
"/c1/c2".s.t      "/c1/c2"                                s.t                  s/t
"/c1/c2".s.t      "/c1/c2"          s                     t                    t


Catalogs

In Standard SQL a catalog is a named group of schemas. The catalog's
name qualifies the names of schemas that belong to it. You either state the
catalog name explicitly in the query, or at connection time in the JDBC URL,
or the database administrator supplies a default name for each user.

For details on the JDBC URL, see JDBC URL syntax on page 358.

For details on specifying a default catalog name for users, see Modifying
properties of a user account with SQL on page 517.

Catalog hierarchy

You can define hierarchies of catalogs in Data Federator by defining
hierarchies of directories in the catalog root directory. You must use this
hierarchy at the application level, because the hierarchy is flattened with
JDBC.

If you specify a catalog and schema in the JDBC connection URL, you need
only query relative names below the specified schema. If you specify nothing,
you have access to a global view of the system.

In Data Federator, each level of the hierarchy corresponds to a directory.
Each directory is a catalog. A directory may contain sub-catalogs or schemas.
Catalog names are written as directory paths:

Catalog name      Root catalog      Catalog      Sub-catalog
"/"               "/"
/c1               "/"               c1
"/c1/c2"          "/"               c1           c2


Default catalogs

You can specify a default catalog either when you connect to Data Federator
Query Server or by calling the java.sql.Connection#setCatalog(String
catalog) method. Specifying a default catalog allows you to send queries
without fully qualifying table names.

The following table shows some examples of default catalogs specified in
the connection URL.

Default catalog examples                  Description
jdbc:datafederator://localhost            no default catalog is defined
jdbc:datafederator://localhost/           specifies "/" as the default catalog
jdbc:datafederator://localhost/OP         specifies "/OP" as the default catalog
jdbc:datafederator://localhost/OP/        specifies "/OP" as the default catalog
jdbc:datafederator://localhost/c1/c2      specifies "/c1/c2" as the default catalog

Schemas

A SQL schema is a named group of tables or views. Schemas are dependent
on a catalog. The schema name must be unique within the catalog to which
it belongs.


Schema identifiers are absolute paths when no default catalog is set, or a
relative path from the default catalog directory:

Schema identifier      Catalog name      Schema name
/s1                    "/"               "s1"
/c1/s1                 "/c1"             "s1"
/c1/c2/s1              "/c1/c2"          "s1"

Tables

A table is attached to one schema. The table name must be unique within a
schema to which it belongs.

When neither the default catalog nor the default schema are set, table
identifiers are constructed by giving the catalog name, the schema name
and the table name. In standard SQL syntax, the table identifier is constructed
by concatenating the catalog name, the schema name and the table name
separated by a "." (period).

Data Federator supports standard SQL syntax. As an alternative syntax,
Data Federator allows you to define table identifiers by their absolute path.
Some examples are shown below:


Table identifier      Catalog name      Schema name      Table name

"/".s1.t1             "/"               "s1"             "t1"
Note: This syntax is equivalent to the absolute path syntax: /s1/t1.

c1.s1.t1              "c1"              "s1"             "t1"
Note: This syntax is equivalent to the absolute path syntax: /c1/s1/t1.

"/c1/c2".s1.t1        "/c1/c2"          "s1"             "t1"
Note: This syntax is equivalent to the absolute path syntax: /c1/c2/s1/t1.


When default catalog and/or default schemas are set, catalog names and
schema names can be omitted in the table identifier:

To reference the      if the default    and if the default    then use the
table...              catalog is...     schema is...          qualified name...
"/c1/c2".s1.t1        "/c1/c2"          "s1"                  t1
"/c1/c2".s1.t1        "/c1/c2"                                s1.t1 or s1/t1

Columns

A Table is described by a set of Columns. A column name must be unique
within a table to which it belongs. In standard SQL syntax, the column
identifier is constructed by concatenating the table identifier with the column
name, separated by a period ".".

To reference the      if the default    and if the default    then use the
column...             catalog is...     schema is...          qualified name...
c1.s1.t1.col1         "/c1"             "s1"                  t1.col1
c1.s1.t1.col1         "/c1"                                   s1.t1.col1 or s1/t1.col1

Using double quoted delimiters

To avoid misinterpretation of identifiers by the parser, you must use double
quoted delimiters for catalog, schema, table and column names when those
names contain non alpha-numeric characters.

Correct      "c1/c2"."sche+ma"."Tab-le1".col1


Incorrect /c1/c2.sche+ma.Tab-le1.col1

For reference information, see Object identifiers and numeric constants on
page 705.

Related Topics
• Parameters in the JDBC connection URL on page 361

Data Federator data types

In Data Federator, each column, local variable, expression, and parameter
has a related data type. A data type is a definition of the size and structure
of data that the object can hold, such as: integer data, character data, date
and time data or decimal data.

A data type associated with an object defines three attributes of the object:
• The kind of data contained by the object; see Known data types on page 696.
• The length or size of the value; see Type inference in expressions on page 700.
• The precision and scale of the number (numeric data types only); see
Scale and precision on page 700.

In a traditional database, length, precision and scale are set when creating
a column since they define the properties of the stored value. Data Federator
is a virtual database and does not store any values. Thus length, precision
and scale are not defined at schema definition time. Their values are
dynamically inferred from the contributing source tables.

Known data types

Here is a list of the data types that are known to Data Federator Query Server.
• BIT

• DATE
• TIMESTAMP
• TIME
• INTEGER
• DOUBLE
• DECIMAL
• VARCHAR
• NULL

Since not all databases use the same data types or interpret them in the
same way, Data Federator has standardized a mapping between the common
database types and Data Federator Query Server types.

Mapping Data Federator data types to JDBC data types

The following table details the correspondence between the internal data
types used in Data Federator and the JDBC data types returned by the Data
Federator JDBC driver.

Data Federator data type JDBC data type

BIT BIT

DATE DATE

TIMESTAMP TIMESTAMP

TIME TIME

INTEGER INTEGER


DOUBLE DOUBLE

DECIMAL DECIMAL

VARCHAR VARCHAR

NULL NULL

Mapping from JDBC data types to Data Federator data types

When accessing a JDBC data source, Data Federator does the mapping of
the JDBC types returned by the JDBC driver to the internal Data Federator
data types. The following table details the correspondence between the
JDBC data types and the Data Federator type that is used for the mapping.

JDBC data type Data Federator data type

TINYINT, SMALLINT, INTEGER, DECIMAL with precision <= 10 and scale = 0    INTEGER

BIT BIT

REAL, FLOAT, DOUBLE DOUBLE

BIGINT, DECIMAL, NUMERIC DECIMAL

VARCHAR, LONGVARCHAR, CHAR VARCHAR

DATE DATE

TIME TIME

TIMESTAMP TIMESTAMP

NULL and all other JDBC types NULL

Date and time conversion

Data Federator Query Server converts TIME data into TIMESTAMP data by setting the date part to '1970-01-01'.

Example: Conversion of a time into a timestamp


TIME '12:01:01' is converted to TIMESTAMP '1970-01-01 12:01:01.0'

Data Federator Query Server converts DATE data to TIMESTAMP data by adding the time 00:00:00.000000000.

Example: Conversion of a date to a timestamp


DATE '1999-01-01' is converted to TIMESTAMP '1999-01-01
00:00:00.000000000'


Type inference in expressions

When two expressions have different data types, the data type of the result of an expression that combines them with an arithmetic operator is determined by applying data type precedence.

Data Federator uses the following precedence order among types:

NULL < VARCHAR < INTEGER < DOUBLE < DECIMAL
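As an illustration (a sketch only — the table and column names below are hypothetical, not from this guide), multiplying an INTEGER column by a DOUBLE column yields a DOUBLE result, because DOUBLE has higher precedence than INTEGER:

```sql
-- qty is assumed INTEGER; price is assumed DOUBLE.
-- DOUBLE outranks INTEGER in the precedence order above,
-- so the result column line_total has type DOUBLE.
SELECT qty * price AS line_total
FROM /mycatalog/myschema/OrderLines
```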

Scale and precision

The length, precision, and scale of the result of an expression are inferred from the type of the result. In the case of VARCHAR or DECIMAL result types, the length, precision, and scale are inferred from those of the input expressions and from the function or operator that combines them.

The table below gives the vector (length, precision, scale) for all Data
Federator expressions.

Column type    Fixed limit (length, precision, scale)

BIT            (1, 1, 0)
INTEGER        (11, 10, 0)
DOUBLE         (22, 15, 0)
DATE           (10, 0, 0)
TIMESTAMP      (29, 9, 0)
TIME           (8, 0, 0)
NULL           (0, 0, 0)
DECIMAL        Inferred
VARCHAR        Precision and scale are always (0, 0); length is inferred

Related Topics
• Configuring the precision and scale of DECIMAL values returned from
Data Federator Query Server on page 535

Expressions

This section details the expressions in Data Federator SQL syntax.

Functions in expressions

For more information on the specific functions that you can use in
expressions, see Function reference on page 624.

Operators in expressions

Operators in expressions combine one or more simple expressions to form


a more complex expression.


Operator name                    Description

+ (Addition)                     An arithmetic operator that means "Addition" for numeric types and "Concatenation" for the VARCHAR type.
- (Subtract)                     An arithmetic operator that means "Subtraction".
* (Multiply)                     An arithmetic operator that means "Multiplication".
/ (Divide)                       An arithmetic operator that means "Division".
% (Modulo)                       An arithmetic operator. It returns the integer remainder of a division. For example, 12 % 5 = 2 because the remainder of 12 divided by 5 is 2.
** (Power)                       An arithmetic operator. It returns the value of the given expression to the specified power.
= (Equals)                       A comparison operator that means "Equal to".
> (Greater)                      A comparison operator that means "Greater than".
< (Less)                         A comparison operator that means "Less than".
>= (Greater Than Or Equal To)    A comparison operator that means "Greater than or equal to".
<= (Less Than Or Equal To)       A comparison operator that means "Less than or equal to".
<> (Not Equal To)                A comparison operator that means "Not equal to".
ALL                              A logical operator that is TRUE if all of a set of comparisons are TRUE.
AND                              A logical operator that is TRUE if both BOOLEAN expressions are TRUE.
ANY                              A logical operator that is TRUE if any one of a set of comparisons is TRUE.
BETWEEN                          A logical operator that is TRUE if the operand is within a range.
EXISTS                           A logical operator that is TRUE if a subquery contains any rows.
IN                               A logical operator that is TRUE if the operand is equal to one of a list of expressions.
LIKE                             A logical operator that is TRUE if the operand matches a pattern.
NOT                              A logical operator that inverts the value of any other BOOLEAN operator.
OR                               A logical operator that is TRUE if either BOOLEAN expression is TRUE.
SOME                             A logical operator that is TRUE if some of a set of comparisons are TRUE.
+ (Positive)                     A unary operator where the numeric value is positive.
- (Negative)                     A unary operator where the numeric value is negative.

Operator precedence levels

When a complex expression has multiple operators, operator precedence determines the sequence in which the operations are performed. The order of execution can significantly affect the resulting value.

Operators have the following precedence levels. An operator on a higher level is evaluated before an operator on a lower level:
• + (Positive), - (Negative)
• * (Multiply), / (Divide), % (Modulo), ** (Power)
• + (Add), + (Concatenate), - (Subtract)
• =, >, <, >=, <=, <> (Comparison operators)
• NOT
• AND
• OR
• ALL, ANY, BETWEEN, IN, LIKE, SOME
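For example, because * (Multiply) sits on a higher precedence level than + (Add), the following two expressions produce different results. The dual system table, described later in this guide, provides a convenient one-row source for such test queries (a sketch, assuming constant expressions are accepted in the select list):

```sql
-- Multiplication is evaluated first: 2 + (3 * 4) = 14
SELECT 2 + 3 * 4 FROM /leselect/system/dual

-- Parentheses force addition first: (2 + 3) * 4 = 20
SELECT (2 + 3) * 4 FROM /leselect/system/dual
```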

Object identifiers and numeric constants

Names of identifiers must start with a letter and use only letters, digits, and underscores. You can, however, use any characters in an identifier name if you enclose it in double quotes (").

For example, valid identifier names are ABC_12 and "!%any name you like$#$%".

The following table describes the Data Federator syntax for identifiers and
numeric constants:

To type a                              Use this definition                                                                               For example

Integer                                INTEGER: nnn (one or more digits)                                                                 12, 14, 15
Double or Decimal                      DOUBLE/DECIMAL: nn.nn (one or more digits, followed by a point, followed by one or more digits)   12.3, 13.222, 11.3
Date                                   DATE: {d 'yyyy-mm-dd'}                                                                            {d '2005-03-28'}
Time                                   TIME: {t 'hh:mm:ss'}                                                                              {t '01:10:12'}
Timestamp                              TIMESTAMP: {ts 'yyyy-mm-dd hh:mm:ss.ffff'}                                                        {ts '2005-03-28 01:11:34.23222'}
String or Varchar                      any string inside single quotes                                                                   'asdasdas'
Simple identifier                      any string starting with a letter, followed by any combination of letters, digits, and underscores   ABC_12
Identifier with special characters     any string inside double quotes                                                                   "!%any name you like$#$%"

Comments

To add comments to SQL statements, precede the text with a double hyphen (--) or a pound sign (#). Comments terminate at the end of the line.
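For example (reusing the Production.Product table from the SELECT example later in this chapter):

```sql
-- This entire line is a comment
SELECT ProductID, Name   -- everything after the double hyphen is ignored
FROM Production.Product
# This line is also a comment
```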

Statements

You can write SQL queries to retrieve or manipulate data that is stored in
Data Federator Query Server. A query can be issued in several forms:

• Data Federator Administrator, a web-based application that provides a graphical user interface (GUI) on top of Data Federator Query Server.
• A command-line SQL application.
• Another compatible utility that can issue a SELECT statement.
• A client or middle-tier application, such as a Microsoft Visual Basic application, that can map the data from an SQL Server table into a bound control, such as a grid.

Related Topics
• Data Federator Administrator overview on page 384

SELECT Statement

Although queries have various ways of interacting with a user, they all
accomplish the same task: They present the result set of a SELECT statement
to the user. Even if the user never specifies a SELECT statement, the client
software transforms each user query into a SELECT statement that is sent
to Data Federator Query Server.

The SELECT statement retrieves data from Data Federator Query Server
and returns it to the user in one or more result sets. A result set is a tabular
arrangement of the data from the SELECT. Like an SQL table, the result set
is made up of columns and rows.

The full syntax of the SELECT statement is complex, but most SELECT
statements describe four primary properties of a result set:
• The number and attributes of the columns in the result set. The following
attributes must be defined for each result set column:
• The data type of the column.
• The size of the column, and for numeric columns, the precision and
scale.
• The source of the data values returned in the column.
• The tables from which the result set data is retrieved, and any logical
relationships between the tables.

• The conditions that the rows in the source tables must meet to qualify for
the SELECT. Rows that do not meet the conditions are ignored.

Data Federator User Guide 707


21 SQL syntax reference
Data Federator Query Server query language

• The sequence in which the rows of the result set are ordered.

Example: SELECT statement


The following SELECT statement finds the product ID, name, and list price
of any products whose unit price exceeds $40.

Statement

SELECT ProductID, Name, ListPrice
FROM Production.Product
WHERE ListPrice > $40
ORDER BY ListPrice ASC

• SELECT clause

The column names listed after the SELECT keyword (ProductID, Name,
and ListPrice) form the select list. This list specifies that the result set
has three columns, and each column has the name, data type, and size
of the associated column in the Product table. Because the FROM clause
specifies only one base table, all column names in the SELECT
statement refer to columns in that table.
• FROM clause

The FROM clause lists the Product table as the one table from which
the data is to be retrieved.
• WHERE clause

The WHERE clause specifies the condition that the only rows in the
Product table that qualify for this SELECT statement are those rows in
which the value of the ListPrice column is more than $40.
• ORDER BY clause

The ORDER BY clause specifies that the result set is to be sorted in ascending sequence (ASC) based on the value in the ListPrice column.


SQL-92 statements supported by Data Federator Query Server

Here is the complete list of the SQL statements that are supported by Data Federator Query Server. A particular set of SELECT statements is supported and, unless noted, the entire standard SQL-92 syntax. In particular, both the SQL-92 grammar for outer joins and the JDBC syntax for outer joins are supported.
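For example, assuming two hypothetical tables Customers and Orders, the same left outer join can be written in either syntax. The JDBC escape form shown below uses the conventional oj keyword; check your driver documentation for the exact escape token:

```sql
-- SQL-92 outer join grammar
SELECT c.Name, o.OrderID
FROM Customers c LEFT OUTER JOIN Orders o
  ON c.CustomerID = o.CustomerID

-- JDBC escape syntax for the same outer join
SELECT c.Name, o.OrderID
FROM {oj Customers c LEFT OUTER JOIN Orders o
  ON c.CustomerID = o.CustomerID}
```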

Supported SQL Statement

ALTER USER Statement

CREATE USER Statement

CALL Statement

DROP USER Statement

GRANT Statement

REVOKE Statement

SELECT Statement
Note:
There is a particular set of compatible statements.
Caution:
Data Federator does not support correlated queries, MINUS, INTERSECT, and
EXCEPT.

The following table lists the SQL-92 statements that are not supported by
Data Federator Query Server.


Unsupported SQL Statement

Column Constraints

CONNECT Statement

CREATE PROCEDURE Statement

CREATE TABLE Statement

BEGIN-END DECLARE SECTION

CLOSE Statement

COMMIT Statement

CREATE INDEX Statement

CREATE SYNONYM Statement

CREATE TRIGGER Statement

CREATE VIEW Statement

DELETE Statement

DISCONNECT Statement


DROP PROCEDURE Statement

DROP TABLE Statement

EXEC SQL Delimiter

EXECUTE IMMEDIATE Statement

GET DIAGNOSTICS Statement

OPEN Statement

SET SCHEMA Statement

ISOLATION LEVEL Statement

UPDATE STATISTICS Statement

INSERT Statement

DECLARE CURSOR Statement

DESCRIBE Statement

DROP INDEX Statement


DROP SYNONYM Statement

DROP TRIGGER Statement

DROP VIEW Statement

EXECUTE Statement

FETCH Statement

GET DIAGNOSTICS EXCEPTION Statement

LOCK TABLE Statement

PREPARE Statement

ROLLBACK Statement

SET CONNECTION Statement

SET TRANSACTION

UPDATE Statement

WHENEVER Statement

Related Topics
• Modifying a user password with SQL on page 517
• Modifying properties of a user account with SQL on page 517
• Creating a user account with SQL on page 516
• List of stored procedures on page 744
• Dropping a user account with SQL on page 516
• Granting a privilege with SQL on page 519
• Revoking a privilege with SQL on page 520
• Grammar for the SELECT clause on page 715

Data Federator SQL grammar


Data Federator complies with the following set of standard SQL-92 statements, plus an additional subset for the particular needs of Data Federator Query Server. Refer to the specific grammar reference sections detailed below for the exact syntax when you begin writing queries.

For this grammar reference                            See

SELECT clause                                         Grammar for the SELECT clause on page 715
User management                                       Grammar for managing users on page 721
Resource management                                   Grammar for managing resources on page 725
Numeric constants and double-quoted identifiers       Object identifiers and numeric constants on page 705


Syntax key

The table below explains the conventions used in the command line syntax.

Symbol                        Description

text                          A keyword.
text                          A syntax element; appears in italics.
[ option ]                    An optional attribute; appears surrounded by square brackets [ ... ].
,                             Multiple attributes are separated by a comma.
{option1 | option2 | ...}     When you must choose one element from the listed elements, curly braces { ... } appear and the different elements are separated by the vertical bar | and followed by an ellipsis.
[ option1 | option2 | ...]    When you can choose zero or one of the listed elements, square brackets [ ... ] appear and the different elements are separated by the vertical bar | and followed by an ellipsis.
[ element ] ...               An element that appears zero or many times; appears inside square brackets, followed by an ellipsis.
element [ , element ] ...     An element that appears one or many times; appears with a comma, inside square brackets, followed by an ellipsis.

Grammar for the SELECT clause

The following section details the complete SQL Select clause grammar used
with Data Federator.

start

query [ ; ] EOF

query

[ WITHView ]... SQLSelectFromWhere [ { UNION | INTERSECT | EXCEPT } [ DISTINCT | ALL ] SQLSelectFromWhere ] [ ORDER BY orderByTerms [ , orderByTerms ]... ]

WITHView

WITH identifier AS "(" query ")"

SQLSelectFromWhere

SELECT [ DISTINCT ] { selectExpression [ , selectExpression ]... | MULT } fromClause [ WHERE disjunction ] [ GROUP BY additiveTerm [ , additiveTerm ]... ] [ HAVING disjunction ]


fromClause

FROM tableReferenceList

tableReferenceList

tableReference [ , tableReference ]...

tableReference

tableReferenceAtomicTerm [ qualifiedJoinPart ]...

tableReferenceAtomicTerm

{ tablePrimary
| jdbcOuterJoin
| "(" query ")" [ [ AS ] identifier ]
| "(" tableReference ")" [ [ AS ] identifier [ "(" projectAlias
[ , projectAlias ]... ")" ] ] }

tablePrimary

table [ [ AS ] tableAlias ]

table

{
{ identifier | delimitedIdentifier }
[ "." { identifier | delimitedIdentifier } ]
[ "." { identifier | delimitedIdentifier } ]
| URLIDENTIFIER
}

qualifiedJoinPart

[ NATURAL ] [ joinType ] JOIN tableReferenceAtomicTerm [ joinSpecification ]

jdbcOuterJoin

"{" OUTER_JOIN_JDBC jdbcOuterJoinPart "}"

jdbcOuterJoinPart

tableReferenceAtomicTerm [ outerJoinType OUTER JOIN jdbcOuterJoinPart joinSpecification ]

joinType

{ INNER | CROSS | outerJoinType [ OUTER ] }

outerJoinType

{ LEFT | RIGHT | FULL }

joinSpecification

{ joinCondition | namedColumnsJoin }

joinCondition

ON disjunction

namedColumnsJoin

USING "(" addUsing [ , addUsing ]... ")"

addUsing

columnName

projectAlias

{ identifier | delimitedIdentifier }

selectExpression

{ tableStar | disjunction
[ AS { identifier | delimitedIdentifier } ]
}

tableStar

{ tableAlias | table }.*


functionTermJdbc

"{" FUNCTION_JDBC { identifier | LEFT | RIGHT } "(" disjunction


[ , disjunction ]... ] ")" "}"

functionTerm

{ identifier | LEFT | RIGHT } "(" [ [ DISTINCT | ALL ]


disjunction [ , disjunction ]... | * ] ")"

analyticFunctionPart

OVER "(" [ PARTITIONBYvariable [ , variable ]... ] ORDERBYvari


able [ ASC | DESC ] [ , variable [ ASC | DESC ] ]...")"

disjunction

conjunction [ OR conjunction ]...

conjunction

negationTerm [ AND negationTerm ]...

escapeChar

quotedString

quotedString

QUOTED_STRING_LITERAL

delimitedIdentifier

DELIMITED_IDENTIFIER

identifier

IDENTIFIER

columnName

{ identifier | delimitedIdentifier }

negationTerm

[ NOT ] { comparisonTerm | EXISTS "(" query ")" }

comparisonTerm

[ COMPARISON_OPERATOR
{ additiveTerm | { ANY | SOME | ALL } "(" query ")" }
| BETWEEN additiveTerm AND additiveTerm
| inValuesOrQuery
| LIKE additiveTerm [ ESCAPE escapeChar ]
| IS { NULL_LITERAL | NOT NULL_LITERAL }
| NOT { BETWEEN additiveTerm AND additiveTerm
| LIKE additiveTerm [ ESCAPE escapeChar ]
}
]

inValuesOrQuery

[ NOT ] IN "(" { inValues | query } ")"

inValues

constant [ , constant ]...

additiveTerm

factor [ { PLUS | MINUS } factor ]...

factor

unaryTerm [ { MULT | DIVIDE | POWER | INT_DIVIDE | MOD } unaryTerm ]...


unaryTerm

[ PLUS | MINUS ] atomicTerm

variable

table [ . columnName ]

constant

BOOL_LITERAL | INT_LITERAL | FLOAT_LITERAL | DATE_LITERAL |


TIMESTAMP_LITERAL | TIME_LITERAL | NULL_LITERAL | quotedString
| PARAMETER

atomicTerm

{
functionTerm [ analyticFunctionPart ] | functionTermJdbc
| variable
| constant
| "(" disjunction ")"
| caseExpression
| coalesceExpression
}

caseExpression

CASE {
  additiveTerm WHEN additiveTerm THEN additiveTerm
    [ WHEN additiveTerm THEN additiveTerm ]...
  | WHEN comparisonTerm THEN additiveTerm
    [ WHEN comparisonTerm THEN additiveTerm ]
}
[ ELSE additiveTerm ]
END

coalesceExpression

COALESCE "(" additiveTerm [ , additiveTerm ]...")"

tableAlias

{ delimitedIdentifier | identifier }

orderByTerms

{ tableAlias [ . columnName ] | INT_LITERAL } [ ASC | DESC ]

startRoutineQuery

procedureCall [ ; ] EOF

procedureCall

CALL identifier procedureArguments

procedureArguments

[ procedureArgument [ , procedureArgument ]... ]

procedureArgument

{ procedureConstant | CAST "(" procedureConstant AS identifier ")" }

procedureConstant

{ BOOL_LITERAL | INT_LITERAL | FLOAT_LITERAL | DATE_LITERAL | TIMESTAMP_LITERAL | TIME_LITERAL | NULL_LITERAL | quotedString }
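Two hypothetical queries that exercise the productions above (the table name reuses the Production.Product example from earlier in this chapter, and the procedure-call argument values are illustrative only):

```sql
-- caseExpression and coalesceExpression in a select list
SELECT CASE WHEN ListPrice > 40 THEN 'premium' ELSE 'standard' END,
       COALESCE(Name, 'unknown')
FROM Production.Product

-- procedureCall: CALL identifier procedureArguments
CALL getTables('', '%', '%', NULL)
```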

Grammar for managing users

The following section describes the Data Federator user management grammar and its syntax. For examples and further explanation on how to use the grammar to formulate SQL queries, see About user accounts, roles, and privileges on page 504.

Non-terminals

Non-terminals are syntax elements that follow a specific syntax. The following
section details the syntax for non-terminals.


Start

dataDefinitionQuery EOF

dataDefinitionQuery

{ createUser | dropUser | alterUser | grantPrivilege | revokePrivilege | createRole | dropRole | grantRole | revokeRole | checkAuthorization | createResource | dropResource | alterResource }

createUser

CREATE USER identifier PASSWORD string adminFlag

adminFlag

[ ADMIN ]

dropUser

DROP USER identifier

alterUser

ALTER USER identifier SET { passwordSetting | propertySetting }

passwordSetting

PASSWORD { string | identifier }

propertySetting

userPropertyName userPropertyValue

userPropertyName

{ CATALOG | SCHEMA | LANGUAGE }

userPropertyValue

{ string | identifier }

createRole

CREATE ROLE identifier

dropRole

DROP ROLE identifier

revokeRole

REVOKE roles FROM grantees

grantRole

GRANT roles TO grantees

roles

identifiers

grantPrivilege

GRANT privileges TO grantees

revokePrivilege

REVOKE privileges FROM grantees

privileges

objectPrivileges ON [ TABLE | SCHEMA | CATALOG ] objectName

objectPrivileges

{ALL [ PRIVILEGES ] | action [, action] ...}


checkAuthorization

CHECK [ AUTHORIZATION ] privileges [ FOR authorizationId ]

objectName

tableName

tableName

identifier | URLIDENTIFIER

grantees

authorizationId [, identifier ]...

authorizationId

PUBLIC | identifier

action

SELECT [ columnNames ] | DEPLOY | UNDEPLOY

columnNames

"(" identifiers ")" identifiers [, identifiers ]...

createResource

CREATE RESOURCE identifier [ FROM referencedResource ]

dropResource

DROP RESOURCE identifier

alterResource

ALTER RESOURCE identifier { SET resourcePropertyName resourcePropertyValue | RESET resourcePropertyName }

resourcePropertyName

identifier

resourcePropertyValue

{ string | identifier }

referencedResource
identifier

identifiers

identifier { , identifier }...

string

QUOTED_STRING_LITERAL

identifier

{ IDENTIFIER | DQUOTED_STRING_LITERAL }
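The following statements illustrate this grammar; the user, role, and table names are placeholders, not accounts that exist in a default installation:

```sql
CREATE USER report_user PASSWORD 'initial_secret'
ALTER USER report_user SET PASSWORD 'new_secret'
CREATE ROLE readers
GRANT readers TO report_user
GRANT SELECT ON TABLE /mycatalog/myschema/Customers TO readers
REVOKE SELECT ON TABLE /mycatalog/myschema/Customers FROM readers
DROP ROLE readers
DROP USER report_user
```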

Grammar for managing resources

The following section describes the complete Data Federator resource management grammar. For examples and further explanation on how to use the grammar to formulate SQL queries, see Managing resources and properties of connectors on page 483.

createResource
CREATE RESOURCE identifier

dropResource
DROP RESOURCE identifier


alterResource
ALTER RESOURCE identifier { SET resourcePropertyName resourcePropertyValue | RESET resourcePropertyName }

resourcePropertyName
identifier

resourcePropertyValue
{string | identifier}

identifiers
identifier [ , identifier ]...

string
QUOTED_STRING_LITERAL

identifier
{IDENTIFIER | DQUOTED_STRING_LITERAL}
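For example (the resource name and property name are placeholders — the property names that are actually available depend on the connector, as described in Managing resources and properties of connectors):

```sql
CREATE RESOURCE myConnectorConfig
ALTER RESOURCE myConnectorConfig SET someProperty 'someValue'
ALTER RESOURCE myConnectorConfig RESET someProperty
DROP RESOURCE myConnectorConfig
```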

22 System table reference

System table reference


The following system tables are stored in the catalog 'leselect' and schema
'system'. System tables are divided into categories related to their functional
area:

System table category       Contains these system tables

Metadata system tables      • /leselect/system/catalogs
                            • /leselect/system/schemas

Function system tables      • /leselect/system/functionSignatures

User system tables          • /leselect/system/users
                            • /leselect/system/userProperties
                            • /leselect/system/roles
                            • /leselect/system/roleMembers
                            • /leselect/system/userRoles
                            • /leselect/system/permissions

Resource system tables      • /leselect/system/resources
                            • /leselect/system/resourceProperties

Other system tables         • /leselect/system/dual
                            • /leselect/system/connections

Related Topics
• Data Federator SQL grammar on page 713
• Query execution overview on page 530
• Using a system table to check the property of a resource on page 495
• Using a system table to check the properties of a user on page 527
• Querying metadata on page 532
• Metadata system tables on page 729
• Function system tables on page 733
• User system tables on page 735
• Resource system tables on page 739
• Other system tables on page 740

Metadata system tables

catalogs

/leselect/system/catalogs - This system table contains all catalogs available in Data Federator Query Server.

Position    Column name    Type       Description

1           CATALOG        VARCHAR    catalog name
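System tables can be queried with ordinary SELECT statements, for example:

```sql
SELECT CATALOG FROM /leselect/system/catalogs
```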

schemas

/leselect/system/schemas - This system table contains all schemas available in Data Federator Query Server.


Position Column name Type Description

1 SCHEMA VARCHAR schema name

2 CATALOG VARCHAR catalog name

systemTables

/leselect/system/systemTables - This system table contains a list of all system tables available in Data Federator Query Server.

Position    Column name            Type       Description

1           CATALOG                VARCHAR    the catalog containing the system table
2           SCHEMA                 VARCHAR    the schema of the system table
3           TABLE_NAME             VARCHAR    the name of the system table
4           URL                    VARCHAR    the Data Federator Query Server URL of the system table
5           DOCUMENT               VARCHAR    the system table document as an XML string
6           ACCESS_PATTERN         VARCHAR    the access patterns of the system table as an XML string; NULL means no access restriction
7           ESTIMATED_ROW_COUNT    DOUBLE     an estimation of the number of rows in the system table

procedures

/leselect/system/procedures - This system table contains all stored procedures available in Data Federator Query Server.

Position    Column name              Type       Description

1           PROCEDURE                VARCHAR    the name of the stored procedure
2           PROCEDURE_DESCRIPTION    VARCHAR    a short description of the procedure


procedureSignatures

/leselect/system/procedureSignatures - This system table contains descriptions of all the arguments used in stored procedures.

Position    Column name            Type       Description

1           PROCEDURE_NAME         VARCHAR    the name of the procedure
2           SIGNATURE_SEQ          INTEGER    the sequence number within the signature
3           PARAMETER_SEQ          INTEGER    sequence number of the parameter within the signature
4           PARAMETER_DATA_TYPE    INTEGER    the code of the data type of the parameter
5           PARAMETER_TYPE_NAME    VARCHAR    the data type of the parameter. This data type is one of the values in java.sql.Types. For a list of types used in Data Federator, see Mapping from JDBC data types to Data Federator data types on page 698.

Function system tables

functions

/leselect/system/function - This system table maintains information about all built-in functions.

Position    Column name            Type       Description

1           FUNCTION_NAME          VARCHAR    the name of the function
2           SIGNATURE_SEQ          INTEGER    sequence number within signature
3           RETURN_DATA_TYPE       INTEGER    data type of the returned value, from java.sql.Types
4           RETURN_TYPE_NAME       VARCHAR    data type name of the returned value
5           PARAMETER_SEQ          INTEGER    sequence number for parameter within signature
6           PARAMETER_DATA_TYPE    INTEGER    data type of the parameter, from java.sql.Types
7           PARAMETER_TYPE_NAME    VARCHAR    data type name of the parameter

functionSignatures

/leselect/system/functionSignatures - This system table maintains information about the signatures of all built-in functions.

Position    Column name            Type       Description

1           FUNCTION_NAME          VARCHAR    the name of the function
2           SIGNATURE_SEQ          INTEGER    sequence number within signature
3           RETURN_DATA_TYPE       INTEGER    data type of the returned value, from java.sql.Types
4           RETURN_TYPE_NAME       VARCHAR    data type name of the returned value
5           PARAMETER_SEQ          INTEGER    sequence number for parameter within signature
6           PARAMETER_DATA_TYPE    INTEGER    data type of the parameter, from java.sql.Types
7           PARAMETER_TYPE_NAME    VARCHAR    data type name of the parameter

User system tables

users

/leselect/system/users - This system table maintains information about all users.

Position    Column name    Type       Description

1           NAME           VARCHAR    the name of the user

userProperties

/leselect/system/userProperties - This system table maintains information about all user properties.

Position    Column name       Type       Description

1           USER_NAME         VARCHAR    the name of the user
2           PROPERTY_NAME     VARCHAR    the name of the property
3           PROPERTY_VALUE    VARCHAR    the value of the property

roles

/leselect/system/roles - This system table keeps information about all roles.

Position    Column name    Type       Description

1           NAME           VARCHAR    the name of the role

roleMembers

/leselect/system/roleMembers - This system table maintains information about role members. A member can be a user or a role.

Position    Column name    Type       Description

1           ROLE_NAME      VARCHAR    the name of the role
2           MEMBER_NAME    VARCHAR    the name of the member

userRoles

/leselect/system/userRoles - This system table maintains information about all direct or indirect roles of users. This table has restricted access.

Position    Column name    Type       Description

1           USER_NAME      VARCHAR    the name of the user
2           ROLE_NAME      VARCHAR    the name of the role

permissions

/leselect/system/permissions - This system table maintains information about all permissions (privileges that are granted or revoked).

Position    Column name         Type       Description

1           TYPE                VARCHAR    the type of the permission (ALLOW/DENY)
2           ACTION              VARCHAR    the action to perform (SELECT, DEPLOY, ...)
3           OBJECT_TYPE         VARCHAR    the object type (CATALOG, SCHEMA, TABLE, COLUMN)
4           OBJECT_NAME         VARCHAR    the object name (canonical name, fully qualified)
5           AUTHORIZATION_ID    VARCHAR    the user or role name

Resource system tables

resources

/leselect/system/resources - This system table maintains information about available resources. Resources are used to configure wrappers in a flexible way.

Position    Column name    Type       Description

1           NAME           VARCHAR    the name of the resource

resourceProperties

/leselect/system/resourceProperties - This system table maintains information about all resource properties.

Position    Column name       Type       Description

1           RESOURCE_NAME     VARCHAR    the name of the resource
2           PROPERTY_NAME     VARCHAR    the name of the property
3           PROPERTY_VALUE    VARCHAR    the value of the property

Other system tables

dual

/leselect/system/dual - This system table is a dummy table with a single row ("X").

Position  Column name  Type     Description
1         DUMMY        VARCHAR  single dummy column

connections

/leselect/system/connections - This system table maintains information about all connected users.

Position  Column name    Type       Description
1         USER           VARCHAR    the name of the user
2         CONNECTION_ID  VARCHAR    the identifier of this connection
3         LOGIN_TIME     TIMESTAMP  the date and time this user logged in to Query Server



23 Stored procedure reference

List of stored procedures


The section below describes the available stored procedures and their
arguments. Many are based on standard JDBC stored procedures, while
some are unique to Data Federator Query Server.

Related Topics
• Using stored procedures to retrieve metadata on page 533

getTables

Returns information about all the tables in the system.

Returns:
The result set returned by this stored procedure is identical to the one returned by the JDBC method java.sql.DatabaseMetaData#getTables().

Parameters:
VARCHAR catalog - a catalog name. You must match the catalog name as it is stored in the database. If you enter '', you retrieve items without a catalog.
VARCHAR schemaPattern - a schema name pattern. You can use a pattern to match the schema name as it is stored in the database. If you enter '', you retrieve items without a schema.
VARCHAR tableNamePattern - a table name pattern. You can use a pattern to match the table name as it is stored in the database.
VARCHAR types - a list of table types to include. Elements are separated by a semicolon ';'. For example, 'table;view'. This argument is optional.

Related Topics
• Using patterns in stored procedures on page 762

getCatalogs

Returns information about all the catalogs in the system.

Returns:
The result set returned by this stored procedure is identical to the one returned by the JDBC method java.sql.DatabaseMetaData#getCatalogs().

getKeys

Returns information about all the keys in the system.

Note:
The tableURL format follows the naming conventions described in this chapter, see Identifiers and naming conventions on page 688.

Returns:
This procedure returns a table containing:
VARCHAR - KEY_NAME - the name of the key
VARCHAR - COLUMN_NAME - the name of the column to which this key refers
INTEGER - KEY_SEQ - sequence number within key
BIT - IS_PRIMARY - specifies if this key is a primary key

Parameters:
VARCHAR tableURL - a table URL

getFunctionsSignatures

Returns information about the signatures of all functions in the system.

Returns:
For information on the column name definitions see functionSignatures on page 734.
This procedure returns a table containing:
VARCHAR - FUNCTION_NAME - the name of the function
INTEGER - SIGNATURE_SEQ - a short description of the function
INTEGER - RETURN_DATA_TYPE - the data type
VARCHAR - RETURN_TYPE_NAME - the data type name
INTEGER - PARAMETER_SEQ - the parameter sequence
INTEGER - PARAMETER_DATA_TYPE - the parameter data type
VARCHAR - PARAMETER_TYPE_NAME - the name of the parameter type

Parameters:
VARCHAR pattern - a function name pattern. For details on patterns, see Using patterns in stored procedures on page 762.

getColumns

Returns information about all the columns in the system.

Returns:
The result set returned by this stored procedure is identical to the one returned by the JDBC method java.sql.DatabaseMetaData#getColumns().

Parameters:
VARCHAR catalog - a catalog name; must match the catalog name as it is stored in the database; "" retrieves those without a catalog
VARCHAR schemaPattern - a schema name pattern; must match the schema name as it is stored in the database; "" retrieves those without a schema. For details on patterns, see Using patterns in stored procedures on page 762.
VARCHAR tableNamePattern - a table name pattern; must match the table name as it is stored in the database
VARCHAR columnNamePattern - a column name pattern; must match the column name as it is stored in the database. For details on patterns, see Using patterns in stored procedures on page 762.

getSchemas

Returns information about all the schemas in the system.

Returns:
The result set returned by this stored procedure is identical to the one returned by the JDBC method java.sql.DatabaseMetaData#getSchemas().

getForeignKeys

Returns information about all the foreign keys in the system.

Returns:
This procedure returns a table containing:
VARCHAR - FK_NAME - the foreign key name
VARCHAR - RK_NAME - the referenced key name
VARCHAR - RKTABLE_NAME - the referenced key table name
VARCHAR - FKCOLUMN_NAME - the name of the column this foreign key refers to
VARCHAR - RKCOLUMN_NAME - the name of the column associated to the referenced key
INTEGER - KEY_SEQ - sequence number within key

Parameters:
VARCHAR tableUrl - a table URL

refreshTableCardinality

Computes and stores the estimated row count for the tables matching the given pattern in the given catalog or schema.

Parameters:
VARCHAR catalogName - the catalog name
VARCHAR schemaName - the schema name
VARCHAR tablePattern - the table pattern, such as 'D%'. For details on patterns, see Using patterns in stored procedures on page 762.

clearMetrics

Clears all the metrics that were stored by any previous calls to refreshTableCardinality.

addLoginDomain

Registers a new login domain with the specified name and description.

This routine can be called by an ADMIN user or by a user with the privilege: CREATE ANY LOGINDOMAIN

When this routine is executed, the following privileges are automatically granted to the user:

DROP LOGINDOMAIN ON loginDomainName
ALTER LOGINDOMAIN ON loginDomainName

Parameters:
VARCHAR loginDomainName - a name for this login domain. This will be used when referencing this login domain in other stored procedures.
VARCHAR loginDomainDescription - a description of this login domain

delLoginDomains

Deletes all registered login domains matching the specified name pattern.

For details on patterns, see Using patterns in stored procedures on page 762.

This routine can be called by an ADMIN user or by a user with the DROP LOGINDOMAIN privilege on each login domain matching the pattern.

When this routine is executed, all credentials depending on the deleted login domains are also removed, and all privileges associated to the deleted login domains are automatically deleted.

Parameters:
VARCHAR loginDomainNamePattern - a pattern for the login domain name. For details on patterns, see Using patterns in stored procedures on page 762.

alterLoginDomain

Modifies an existing login domain (with the specified name), setting a new description.

This routine can be called by an ADMIN user or by a user with the privilege ALTER LOGINDOMAIN ON loginDomainName.

Parameters:
VARCHAR loginDomainName - the name of an existing login domain
VARCHAR loginDomainDescription - a new description of this login domain

getLoginDomains

Returns all existing login domains matching the specified pattern.

This routine can be called by any user.

Parameters:
VARCHAR loginDomainNamePattern - a pattern for the login domain name. For details on patterns, see Using patterns in stored procedures on page 762.

addCredential

Adds a new credential for the current user and the specified login domain name. Or, adds a new credential for the specified authorizationID (user or role) and login domain name.

The credential is divided into a public part and a private part. The private part will be encrypted when stored in the credential store. Typically, the public part is the username, and the associated private part is the password.

Any user can call this procedure. To call this procedure using the authorizationID parameter, you must be logged in with an ADMIN user account.

Parameters:
VARCHAR authorizationID - the Data Federator user or role name, or the keyword 'PUBLIC' to denote any user. This parameter is optional when the user is ADMIN. This parameter cannot be used by a regular user.
VARCHAR loginDomainName - a login domain name. This name should have been previously registered using the routine addLoginDomain.
VARCHAR public_credential - the public part of the credential
VARCHAR private_credential - the private part of the credential

delCredentials

Deletes credentials matching the specified login domain name pattern for the current user. Or, deletes credentials matching the specified authorizationID pattern and login domain name pattern.

Any user can call this procedure. To call this procedure using the authorizationID parameter, you must be logged in with an ADMIN user account.

Returns:
This routine returns the number of deleted credentials. The returned result set has one row and one column:
CREDENTIALS_DELETED: the number of deleted credentials.

Parameters:
VARCHAR authorizationIDPattern - a pattern for the user or role name. For details on patterns, see Using patterns in stored procedures on page 762. This parameter is optional when the user is ADMIN. This parameter cannot be used by a regular user.
VARCHAR loginDomainNamePattern - a pattern for the login domain name. For details on patterns, see Using patterns in stored procedures on page 762.

alterCredential

Modifies an existing credential for the current user and the specified login domain name. Or, modifies an existing credential for the specified authorizationID (user or role) and login domain name.

The credential is divided into a public part and a private part. The private part will be encrypted when stored in the credential store. Typically, the public part is the username, and the associated private part is the password.

Any user can call this procedure. To call this procedure using the authorizationID parameter, you must be logged in with an ADMIN user account.

Parameters:
VARCHAR authorizationID - the Data Federator user or role name, or the keyword 'PUBLIC' to denote any user. This parameter is optional when the user is ADMIN. This parameter cannot be used by a regular user.
VARCHAR loginDomainName - a login domain name. This name should have been previously registered using the routine addLoginDomain.
VARCHAR public_credential - the public part of the credential
VARCHAR private_credential - the private part of the credential

getCredentials

Returns all credentials matching the specified login domain name pattern. Or, returns all credentials matching the specified authorizationIDpattern and login domain name pattern.

Any user can call this procedure. To call this procedure using the authorizationIDpattern parameter, you must be logged in with an ADMIN user account.

Returns:
This procedure returns a result set with three columns:
AUTHORIZATION_ID: the Data Federator user or role name
LOGIN_DOMAIN: the login domain
PUBLIC_CREDENTIAL: the public part of the credential
The rows are ordered (in ascending order) by AUTHORIZATION_ID and LOGIN_DOMAIN.

Parameters:
VARCHAR authorizationIDpattern - the Data Federator user or role name, or the keyword 'PUBLIC' to denote any user. This parameter is optional when the user is ADMIN. This parameter cannot be used by a regular user.
VARCHAR loginDomainNamePattern - a pattern for the login domain name. For details on patterns, see Using patterns in stored procedures on page 762.

Using patterns in stored procedures

• Use the character % to match an arbitrary number of characters. For example, the pattern %ment matches apartment, treatment and payment.
• Use the character _ to match a single character. For example, the pattern _____ment matches apartment and treatment, but not payment.
• Use any literal character to match that literal character.
• Use a combination of the above to match a complex pattern. For example, the pattern AAA%port_100 matches AAAairportX100, AAAseaportB100 and AAAport8100, but not AAAport100.
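As a rough sketch (an illustration, not Data Federator's implementation), these pattern rules can be translated into a regular expression and checked against the examples above:

```python
import re

# '%' behaves like '.*' and '_' like '.' in a regular expression that
# must cover the whole name.
def sql_pattern_to_regex(pattern: str) -> re.Pattern:
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")   # any run of characters, including none
        elif ch == "_":
            parts.append(".")    # exactly one character
        else:
            parts.append(re.escape(ch))  # literal characters match themselves
    return re.compile("".join(parts))

def matches(pattern: str, name: str) -> bool:
    return sql_pattern_to_regex(pattern).fullmatch(name) is not None

# The examples from the list above:
assert matches("%ment", "payment")
assert matches("_____ment", "apartment") and not matches("_____ment", "payment")
assert matches("AAA%port_100", "AAAport8100") and not matches("AAA%port_100", "AAAport100")
```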



24 Glossary

Terms and descriptions


This section lists the terms used in the Data Federator applications and
documentation.

activated (mapping rule): a mapping rule that Data Federator Designer takes into account when deciding the status of a target table, and when deploying a project

add a datasource: the action of defining a new datasource in a Data Federator project

archive / archive file: a stored version of a project, stored in a file either on the Data Federator server, or on your file system

case: in a case statement, the combination of an (If, Then) condition and action

case statement: a mapping rule that treats different cases based on conditions

case statement formula: a formula in a case statement

column: in a table, a dimension that defines a single type of data

completed (status): the status that specifies that your configuration (of a target, mapping, or datasource) is not missing any parameters, and all parameters have valid values

component: an element that you work on in the Data Federator Designer workspace; either a target, a datasource, a mapping, a constraint, a domain table or a lookup table

composite row: a row that results from a database join operation on at least two datasource tables

conditional expression: in a mapping formula, an expression that contains conditions (such as "if ..." and "then...")

connector: a driver that allows Data Federator Query Server to connect to a source of data

constraint: a test that you define in Data Federator to verify a property of a mapping

constraint definition: the type and the formula that define a constraint

constraint formula: a formula that defines the conditions of a constraint

constraint report: a report that shows the results of the constraint checks and that indicates how to correct violations

constraint checking summary: the table showing the last check date and the last result for each rule on which a constraint applies

core table: a datasource table that has at least one column that must map to at least one key column of the target table

constraint violations: the mapping rows that fail the defined constraints when you run a constraint check

custom constraint: a constraint that you define by writing a constraint formula; see "constraint"

data-access application: a generic term for a database, a plain file, or some other application that lets Data Federator, or other applications, access data

data extraction parameters: in a datasource, the parameters that specify the format of the data in a text file; thus, the parameters that tell Data Federator how to extract the source data

datasource: a pointer to a source of data. It points to and represents the data that is kept in a data-access application.

datasource catalog: a collection of datasources

datasource definition: in a datasource, the parameters that define the name, description and the source of data

datasource relationship diagram: an element of the Data Federator Designer that displays the tables whose columns have relationships, highlights the tables that are missing relationships, and lets you manage those relationships

datasource table: a table in a datasource. More precisely, a datasource table is a pointer to a table in a source of data.

datasource table schema: the definition of a datasource table's columns and primary keys

data type: a definition of the size and structure of data that the object can hold, such as: integer data, character data, date and time data or decimal data

deactivated (mapping rule): a mapping rule that Data Federator Designer does not take into account when deciding the status of a target table, or when deploying a project

deploy: the action of putting a version of a project into production on Data Federator Query Server

deployed mapping: a mapping that is part of a project that you have deployed

deployed version (project): a version that is in production

domain constraint: the set of possible values that can exist in a column

domain: the set of permissible values of a column; a domain lets you define permissible values more stringently than a type

domain table: a table that defines a domain in Data Federator

draft datasource: a datasource that you are working on. You can modify a draft datasource, but it is not available to use in a mapping.

filter: an expression that limits the data returned from a query; used in mapping formulas and in testing queries. See also: pre-filter, post-filter

final datasource: a datasource that cannot be modified, and that you can use in a mapping

fixed width fields (in a file): in a text file that contains tabular data, columns whose values always have the same number of characters

functional expression: in a mapping formula, an expression that performs a function on a value

import a target table schema: import the schema of a table in order to define a target table

integrated (mapping rule status): a mapping rule whose constraints have been checked against other mapping rules in the same target table

integrated (target table status): a target table whose mapping rules all have the status "integrated"

invalidate (action of setting a flag for a mapping rule): the action of setting a flag that indicates that a mapping rule is "invalidated"; can also apply to a target table to indicate that some mapping rules are "invalidated"

invalidated (flag that you set for a mapping rule): a flag that indicates that a mapping rule has not satisfactorily passed a given constraint; can also apply to a target table to indicate that not all mapping rules have satisfactorily passed all of their constraints

key constraint: a constraint that verifies that all the keys in a table are unique

life cycle: the sequence of states through which your work passes as you work in Data Federator

lookup table: a table that you create, and whose content you enter, in Data Federator; the lookup table maps a column of values to another column of values

make final (a datasource): the action of turning a draft datasource into a final datasource

mapping: a set of mapping rules for one target table; or, the action of defining mappings

mapping formula: a definition of how the columns in the datasource tables map to one column in the target table

mapping rule: a set of formulas that map the columns of datasource tables to a target table

not-null constraint: a constraint that verifies that none of the values in a column are null

post-filter: a filter that limits the results of a mapping rule; post-filters operate on data after the table relationships

pre-filter: a filter that limits the data that comes from datasources; pre-filters operate on data before the mapping formulas

primary key: a column that is guaranteed to contain unique values, and whose values identify all of the rows in a table

primary key constraint: a constraint that prescribes that all the values of a column must be unique

project: a unit of data that groups targets, datasources, mappings, domains, lookups and constraint checks, and which lets you manage versions and archives of your work

project definition: the description of the project's deployment parameters, its list of archives and its list of deployed versions

range expression: in a mapping formula, an expression that defines a range

relationship: a link between the columns of two datasource tables, or between a datasource and a lookup table

row (in a table): a dimension that defines a single instance of data

sample datasource table data: data that is used in a constraint check to compare the data that you expect to extract from the sources to the data that Data Federator extracts from the sources

sample target table data: data that is used in a constraint check to compare the expected results to the actual results

schema definition: in a datasource, the definition of the columns and primary keys; the definition that allows Data Federator to query the data when the source is a text file

separator:
• in a text file that contains tabular data: a character that separates the values of two columns
• thousands separator: a character that separates groups of zeros in large numbers, such as "200,000,000"

status: an indicator of the level of completeness of the components of a project in Data Federator; the point in the life cycle of your work

target: a virtual database that you define in Data Federator Designer, whose data is mapped from datasources, and against which you can run queries through Data Federator Query Server

target table: a table in a target

validate (action of setting flag for a mapping rule): the action of setting a flag that indicates that a mapping rule is "validated"; can also apply when all the mapping rules in a target table are "validated"

validated (flag that you set for a mapping rule): a flag that indicates that a mapping rule has satisfactorily passed a given constraint; can also apply when all the mapping rules in a target table have satisfactorily passed all of their constraints

violation of a primary key constraint: the existence of two values in a column that are equal, where the column is specified as a primary key

uniqueness: the property of there being no elements in the same set that have the same value

update schema: the action of using the schema definition parameters to regenerate the schema of a datasource table

version (project): the contents of a project at a specific time


25 Troubleshooting

Installation
This section lists common questions about installation.

Installing from remote machine

I cannot install from a remote machine.

The following error is displayed after launching the installer.

bash-2.05$ sh ./install.bin
Preparing to install...
Extracting the installation resources from the installer
archive...
Configuring the installer for this system's environment...
Launching installer...
Invocation of this Java Application has caused an
InvocationTargetException. This application will now exit.
(LAX)

Cause

When installing from a remote machine, you may get this error because the
host cannot detect the graphic environment.

Action

Try installing the software using the console mode, as follows.

sh ./install.bin -i console

Input line is too long: error on Windows 2000

I get the error "input line is too long" when installing the services on Windows
2000.

Cause

In Windows 2000, the command line exceeds 2000 characters. Windows 2000 has a limit on the number of characters in one command.

Action

You need to rename the jar files in the directory bin/LeSelect/extlib, so that their names are shorter.


McAfee's On-Access Scan

If you use McAfee Anti-Virus, you must enable 'On-Access Scan' before you
start Data Federator.

If the 'On-Access Scan' was not enabled, stop the Data Federator Designer
service, enable the 'On-Access Scan' and restart the Designer service.

Errors like missing method due to uncleared browser cache after installation

When using Data Federator Administrator, I get an error like Missing method
or missing parameter converters.

Cause

If you install a version of Data Federator and then, within three days, use this version and install a newer version or a hotfix, your browser's cache may not be cleared. This can lead to errors in which new Data Federator Administrator functions try to use new HTML pages that do not exist in the browser cache.


Action

Clear your browser's cache after installing the newer version.

In Internet Explorer 7.0, for example, you can clear the cache as follows:
choose the Tools menu, click Delete Browsing History..., then in the
Temporary Internet Files section, click Delete Files.

In Internet Explorer 6.0: choose the Tools menu, click Internet Options,
then, in the General tab, in the Temporary Internet Files section, click
Delete Files.

In Firefox 2.0: choose the Tools menu, click Options, then, in the Privacy
tab, in the Private Data section, click Clear Now....

Finding the Connection Dispatcher servers configuration file when running Connection Dispatcher as a Windows service

Connection Dispatcher loses its configuration after a Windows upgrade.

Cause

When upgrading Windows to a new Service Pack or version, the home folder
for the system user may change.
Action

To find the system user’s home directory on your version of Windows, if you
upgrade your version of Windows or you install a Windows service pack for
example:
• ensure you are signed in as an administrator
• stop and re-start the Connection Dispatcher service
• select the Include Hidden and System Files option of the Windows
Search tool
• locate the .datafederator directory, and
• copy the dispatcher_servers_conf.xml file from your old .datafederator directory to the new one.

Note:
If you do not do this, the concerned DataFederator services will start with an
empty configuration after the Windows upgrade.

Related Topics
• Installing the JDBC driver without the Data Federator installer on page 351
• Format of the Connection Dispatcher servers configuration file on page 600

Datasources
This section lists common questions about working with datasources.

File on SMB share

I cannot use a file on an SMB share.

Cause

The connection parameters may be incorrect.

Did you type in the full host name?

Did you type the username and password?

Action

See File extraction parameters on page 162.

Separators not working

My separators are not working when I run a query on my datasource (for text files).

Cause

You may have entered an extra space in your separator.

Action

Make sure that in the separator text box, you did not enter a space before
or after the separator itself.

For example:
• "; " matches a semicolon followed by a space


• ";" matches a semicolon only

If your separator is a semicolon only, the first example will not work.

See Field formatting parameters for text files on page 164.
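The pitfall can be seen in miniature with plain string splitting (an analogy in Python, not Data Federator's own text parser, which is configured through the datasource parameters):

```python
# A separator entered as "; " -- semicolon plus space -- only splits
# where a space actually follows the semicolon. This line uses ";"
# with no spaces, so the wrong separator leaves the line unsplit.
line = "1;Smith;Paris"

assert line.split(";") == ["1", "Smith", "Paris"]   # ";" splits the line
assert line.split("; ") == ["1;Smith;Paris"]        # "; " matches nothing, so no split
```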

Cannot edit an existing datasource

I cannot edit my datasource anymore.

Cause

The datasource is not a draft.

Action

See Editing a final datasource on page 213.

Connection parameters

You receive an error message when you attempt to add a datasource, such as JDBC from defined resource. The message reads "The connection could not be established. Either the connection parameters are not correct or the server is not accessible."

Cause

You incorrectly entered the syntax or did not respect the URLTEMPLATE parameters, or you did not set the transactionIsolation parameter correctly.

Action

Refer to the documentation for information on the correct syntax.


• For information on creating or modifying resources, see Modifying a
resource property using SQL on page 493.
• For information on URLTEMPLATE syntax, see urlTemplate on page 479.
• For information on setting the transactionIsolation parameter, see
transactionIsolation property on page 478.

Targets
This section lists common questions about working with target tables.

Cannot see any datasources in targets windows

I cannot see my datasources when, in the targets window, I click Copy a datasource table schema.

Cause

Your datasource is not final.

Action

See Making your datasource final on page 212.

Mappings
This section lists common questions about working with mappings.

Cannot reference lookup table in existing mapping

I added a lookup table and I cannot reference it in an existing mapping.

Cause

Did you add your datasource a second time?

If you add a lookup table after you add a mapping, you must add your
datasource table to your mapping a second time. Then, Data Federator
re-imports all the lookup tables linked to your datasources.

Action

Add your datasource table to your mapping a second time.


Error in formula that uses a BOOLEAN value

I test for a BOOLEAN value in a mapping formula and the formula does not
work.

Cause

Did you specify "= true" in the predicates that return BOOLEAN values?

Certain data access systems return the values "1" and "0", or others, instead
of true and false. When you test the value, you should explicitly test for
equivalence to "TRUE" rather than relying on the returned value.

Action

Instead of using the format:

if( match(S20.HomePhone,'0[0-68].*') )

use:

if( match(S20.HomePhone,'0[0-68].*') = true )
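The underlying pitfall, different data access systems encoding true as "1", "0", "true", and so on, can be illustrated generically. The helper below is a Python sketch, not Data Federator formula syntax, and the list of accepted tokens is an assumption for illustration only:

```python
def as_boolean(value):
    """Normalize the varied truth encodings different backends return.
    The accepted token list here is illustrative, not exhaustive."""
    return str(value).strip().lower() in ("1", "true", "t", "y", "yes")

# Each representation is tested explicitly rather than relying on the
# raw value -- the same idea as writing "= true" in a mapping formula.
for raw in (1, "1", "true", "TRUE", 0, "false", ""):
    print(repr(raw), "->", as_boolean(raw))
```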

Source relationships introduce cycles in the graph

I add a relationship between sources in the Table relationships and
pre-filters pane of a mapping rule (as described in Adding a relationship on
page 256) and see the message "These source relationships introduce cycles
in the graph".

Cause

You either:
• added a relationship which caused the relationship path starting from the
very first source table in the path to eventually return to itself, or
• added a relationship which caused the relationship path to return to a
source table that had been 'reached' before.

Action

Remove the relationship and ensure you do not add one that causes the
relationship path to return either to the first source table in the path, or one
that has been connected to before.

The diagram below shows relationship paths that cause the message "These
source relationships introduce cycles in the graph" to be displayed:

The lines with a red tick show valid relationships. The lines with a black
cross show relationships that will cause the message "These source
relationships introduce cycles in the graph" to be displayed.

The first line including a black cross introduces a cycle in the graph because
it adds a relationship which causes the relationship path to return to a source
table that has been 'reached' before, namely, the second bar from the left.

The second line including a black cross introduces a cycle in the graph
because it adds a relationship which causes the relationship path which
starts from the very first source table in the path to eventually return to itself.
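In graph terms, the check described above is cycle detection on the undirected graph of table relationships. The sketch below is a generic Python illustration of such a check, not Data Federator's implementation; the table names are invented:

```python
def introduces_cycle(relationships, new_edge):
    """True if adding new_edge creates a cycle in the (undirected)
    relationship graph, i.e. some path revisits a table."""
    edges = list(relationships) + [new_edge]
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    seen = set()

    def dfs(node, parent):
        seen.add(node)
        for nxt in graph[node]:
            if nxt == parent:
                continue
            if nxt in seen or dfs(nxt, node):
                return True  # reached a table already 'reached' before
        return False

    return any(node not in seen and dfs(node, None) for node in graph)

# S1-S2-S3 is a valid chain; linking S3 back to S1 closes a cycle.
chain = [("S1", "S2"), ("S2", "S3")]
print(introduces_cycle(chain, ("S3", "S4")))  # False
print(introduces_cycle(chain, ("S3", "S1")))  # True
```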

Table used in mapping rule is no longer available

The following error is displayed: The table ''{t}'', used in the mapping rule
''{m}'' is no longer available.

Cause

The table has been deleted from its datasource, but not necessarily from
the mapping rule; all relationships between this table and other tables in
the mapping rule remain intact.

Action

Either:

a. Manually remove the proxy table from the mapping rule.


Note:
This procedure also deletes all relationships between this table and all other
tables in the mapping rule.
Or:

b. Select another table.


Note:
All relationships between the deleted table and other tables in the mapping
rule are applied to the new table. Ensure the new table has the same columns
as the table it replaced.

Related Topics
• Adding a table to a mapping rule on page 282
• Replacing a table in a mapping rule on page 284
• Deleting a table from a mapping rule on page 286

Table added to the mapping rule should be core

The following error is displayed: TableShouldBeCore.

Cause

Either the source table has been added to the mapping rule and its columns
map the key of the target table, or the source table links two core tables.

Action

Set the source table as a core table, or change the key of the target table.

Related Topics
• Choosing a core table on page 261
• Defining key constraints for a target table on page 295
• Configuring meanings of table relationships using core tables on page 261

Table added to the mapping rule should not be core

The following error is displayed: TableShouldBeNonCore.

Cause

The source table is connected to another core table through one or more
non-core tables.

Action

Set the source table as a core table, or change the key of the target table.

Related Topics
• Choosing a core table on page 261
• Defining key constraints for a target table on page 295
• Configuring meanings of table relationships using core tables on page 261

At least one table should be core

The following error is displayed: AtLeastOneTableShouldBeCore.

Cause

The target table has no key columns, and no source table has been set
as core.

Action

Set the source table as a core table, or set a key of the target table.

Related Topics
• Choosing a core table on page 261
• Defining key constraints for a target table on page 295
• Configuring meanings of table relationships using core tables on page 261

Domain tables

Cannot remove domain table

I cannot remove a domain table.

Cause


Have you removed all the links to the domain table? You must first make
sure that no other tables use the domain table.

For example, in a target table, columns that are of type "enumerated" may
reference your domain table.

Action

You must change these references before you can delete the domain table.

Data Federator Designer


This section lists common questions about working with Data Federator
Designer in general.

Cannot select a table

I cannot select a table in the datasources, targets, lookup or domain list.

Cause

Did you type a name for your table? It is possible to create a table with no
name, but you cannot select it.

Action

Make sure the table has a name.

Cannot find column names

Data Federator cannot find column names

Cause

Do your column names have leading or trailing spaces?

Data Federator allows blank characters in column names, but the spaces
may be difficult to see on the screen.

Action

If your columns have leading or trailing spaces, type the spaces in the names
when you use the Data Federator Designer interface.

This may apply to:


• column names in mapping formulas
• column names in filters
• column names in relationships between datasources
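If you can inspect a schema programmatically, comparing each name with its stripped form reveals the hidden blanks. This is a generic Python sketch, not part of Data Federator; the sample column names are invented:

```python
def columns_with_hidden_spaces(column_names):
    """Return the column names that carry leading or trailing blanks,
    which are easy to miss on screen."""
    return [name for name in column_names if name != name.strip()]

schema = ["CustomerId", " FirstName", "LastName ", "Phone"]
print(columns_with_hidden_spaces(schema))  # [' FirstName', 'LastName ']
```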

Cannot use column or table after changing the source

I cannot use a column or table that was working earlier

Cause

Did you change the table on your data-access system?

If you change tables in a database or in a text file that is referenced by
datasources in Data Federator, you must update the datasource tables in
Data Federator as well.

Action

To update the datasource tables, see "Editing the schema of an existing
table" and Updating the tables of a relational database datasource on
page 158.

Data Federator connectors


This section lists common questions about configuring connectors to sources
of data when using Data Federator.

Exception for entity expansion limit

I get the exception: Parser has reached the entity expansion limit "xxx" set
by the Application.

Cause


You may get this error if you manipulate large XML files within Data Federator,
for example the configuration files used to define connectors to sources of
data. It means that the number of XML entities used in your file has
exceeded the default limit of 1,000,000 entities.

Action

Use the Data Federator Administrator to set the parameter
leselect.core.entityExpansionLimit to a higher value.
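Before raising the limit, it can help to gauge how entity-heavy a file is. The sketch below is a generic Python illustration: it counts &name; entity references, which is only a lower bound on the parser's expansion count (nested entity definitions expand to more than one reference each), and the regex is a simplification of the XML name grammar:

```python
import re

ENTITY_EXPANSION_LIMIT = 1_000_000  # default limit quoted above

def count_entity_references(xml_text):
    """Count &name; entity references, ignoring the five predefined
    XML entities. A rough lower bound on the expansion count."""
    predefined = {"lt", "gt", "amp", "apos", "quot"}
    refs = re.findall(r"&([A-Za-z_][A-Za-z0-9._-]*);", xml_text)
    return sum(1 for name in refs if name not in predefined)

sample = "<cfg>&dbHost; &dbHost; &amp; &dbPort;</cfg>"
print(count_entity_references(sample))  # 3
```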

On Teradata V2R6, error Datatype mismatch in the Then/Else expression

I get the error Datatype mismatch in the Then/Else expression when
connecting to Teradata V2R6.

Cause

There is a problem with certain types of queries. Teradata is aware of this
problem in version V2R6 and has provided a fix.

Action

Upgrade to version V2R6.2.2.21 or get an e-fix of DR 108983.

Accessing data
This section lists common questions about accessing data on Data Federator
Query Server.

Target tables not accessible on Data Federator Query Server

I added target tables, but I cannot access them on Data Federator Query
Server.

Cause

Did you deploy your project? Target tables will only be accessible on Data
Federator Query Server if you deploy the project that contains them.

Action

Deploy a version of your project (see Deploying a version of a project on
page 324).

Target tables not accessible from deployed project on Data Federator
Query Server

I added target tables and deployed the project, but I cannot access the target
tables on Data Federator Query Server.

Cause

Do your target tables have the status "mapped"? Are your mapping rules in
the status "completed"?

Action

Data Federator only deploys target tables and mapping rules that have the
following status:
• Target tables are deployed when they have status "mapped".
• Mapping rules are deployed when they have status "completed".

This prevents incomplete target tables or mapping rules from appearing on
a production server.

To set the status of a target table to "mapped", make sure its mapping rules
have the status "completed". For details on setting the status of target tables,
see Determining the status of a mapping on page 220.
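The deployment rule above is effectively a status filter. The sketch below mirrors it in generic Python; the table/rule data structure is invented for illustration and is not Data Federator's internal model:

```python
def deployable(target_tables):
    """Keep only target tables whose own status is 'mapped' and whose
    mapping rules are all 'completed', mirroring the rule above."""
    return [
        table["name"]
        for table in target_tables
        if table["status"] == "mapped"
        and all(rule == "completed" for rule in table["rules"])
    ]

tables = [
    {"name": "Customers", "status": "mapped", "rules": ["completed"]},
    {"name": "Orders", "status": "draft", "rules": ["completed"]},
    {"name": "Items", "status": "mapped", "rules": ["completed", "draft"]},
]
print(deployable(tables))  # ['Customers']
```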

Cannot access CSV files on a remote machine using a generic ODBC
connection

When trying to create a datasource to read CSV files on a remote machine
using a generic ODBC connection, the connection fails. The problem occurs
on Windows when the Query Server service was started with a user account
that does not have sufficient rights to access the CSV files over the network.

Cause

Did you start the DataFederator.QueryServer service using an account that
does not have access to the CSV files?

Action

Restart the DataFederator.QueryServer service using an account that has
access to the CSV files, for example an administrator account.

Data Federator services

This section lists common questions about Data Federator services.

Starting and stopping services

What are the Data Federator Windows services and how do I start/stop them?

For a description of the Data Federator Windows services, see the Data
Federator Installation Guide.

Networking
This section lists common questions about networking. It covers how to
configure Data Federator Server if your server has multiple network cards
and/or independent sub-networks, or if your computer is not networked
correctly.

Network Connections

If the client is unable to contact the given IP, errors of the following types
are displayed: [ThinDriver] Cannot reach server. Reason: Retries
exceeded, couldn't reconnect to IP address, or [ThinDriver]
Server has probably been restarted or clients entities on
server have expired as no activity was recorded on them for
a long time.

Cause

A correct network configuration has the following characteristics:
• It returns the correct host name of the computer, and
• a ping on the computer’s hostname uses the correct IP of the computer
and not the localhost or other local IP which cannot be used by the clients
to contact the server.

Note:
The correct IP is the one that the clients may use to contact the server.

A network configuration may return an incorrect hostname or IP if, for
example, the server has multiple network interfaces configured (physical
and/or virtual). Virtual machines like VMware install virtual network interfaces.

Data Federator Query Server supports only one network interface. That is,
only one IP address on which the server can be contacted by clients.
Configurations with multiple clients in multiple independent sub-networks
are not supported. If you have a multi-interface configuration on your server,
you should explicitly specify the public IP to be used by the server.

All clients that can contact this IP can create connections to the server.

Action

Configure the parameter comm.jdbc.connIP to the correct IP address as
described above, and restart the server.

Note:
By default, following installation of Query Server, this parameter is not set,
and Query Server tries to identify the IP from the computer’s network
configuration. This may sometimes fail, and in some cases, after a system
restart, the IP may change randomly if the computer’s network configuration
does not explicitly specify what the public IP/network interface is.
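Why auto-detection can go wrong is easy to see: a multi-interface machine exposes several candidate addresses, and loopback or link-local ones are useless to remote clients. The helper below is a generic Python sketch of that kind of filtering, not anything Data Federator runs; the sample addresses are invented:

```python
import ipaddress

def pick_public_ip(candidates):
    """Return the first address a remote client could actually reach,
    skipping loopback and link-local candidates."""
    for ip in candidates:
        addr = ipaddress.ip_address(ip)
        if not (addr.is_loopback or addr.is_link_local):
            return ip
    return None

# Typical multi-interface machine: loopback, a virtual NIC on a
# link-local range, and the real LAN address.
print(pick_public_ip(["127.0.0.1", "169.254.12.7", "192.168.1.20"]))  # 192.168.1.20
```

Setting comm.jdbc.connIP explicitly removes this guesswork entirely.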



Data Federator logs


About Data Federator logs


You can use log files to help solve problems and to communicate information
to Business Objects support.
• Use Data Federator Designer logs to help resolve issues with designing
and setting up projects
• Use Data Federator Query server logs to help resolve issues with using
Data Federator projects.

Data Federator Designer logs


Data Federator Designer logs are activated by default. You find them in:
• [data-federator-installation-dir]/datamap/WEB-
INF/datamap/traces/datamap.log

This file also contains the log of the JDBC connection with Data Federator
Query Server and the log of the persistence layer.
• [data-federator-installation-dir]/tomcat/logs/*.log

• [data-federator-installation-dir]/tomcat/logs/local
host_datamap_log.[yyyy-mm-dd].txt

Data Federator Query Server logs


You can activate the Data Federator Query Server logs by modifying
.properties files.

Activating Data Federator Query Server logs


1. Edit the file data-federator-installation-dir/LeSelect/conf/server.properties,
and uncomment the line corresponding to the logging level that you want.

In the server.properties file, find the section, Debugging, which
contains the following lines:

#log4j.configuration=conf/logging_all.properties
#log4j.configuration=conf/logging_level4.properties
#log4j.configuration=conf/logging_level3.properties
#log4j.configuration=conf/logging_level2.properties
#log4j.configuration=conf/logging_level1.properties
log4j.configuration=conf/logging.properties

By default, the logging configuration is set to the file logging.properties.
This file configures the lowest level of logging. To change to the highest
level of logging, change this section to:

log4j.configuration=conf/logging_all.properties
#log4j.configuration=conf/logging_level4.properties
#log4j.configuration=conf/logging_level3.properties
#log4j.configuration=conf/logging_level2.properties
#log4j.configuration=conf/logging_level1.properties
#log4j.configuration=conf/logging.properties

2. Restart Data Federator Query Server.

On Windows, if Data Federator Query Server is installed as a Windows
service, restart the service.

On Unix, restart the server using the scripts:


• data-federator-installation-dir/LeSelect/shutdown.sh

• data-federator-installation-dir/LeSelect/startup.sh

Logs are found in the following files.

data-federator-installation-dir/LeSelect/log/*.log
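The edit in step 1 can also be scripted. The sketch below is generic Python, not a Data Federator tool; it assumes each candidate line starts with log4j.configuration= (optionally preceded by #), as in the listings above, and activates exactly one of them:

```python
def select_logging_config(properties_text, wanted):
    """Activate exactly one log4j.configuration line: comment out every
    candidate, then uncomment only the one ending with `wanted`.
    Assumes candidates have no leading whitespace, as shown above."""
    out = []
    for line in properties_text.splitlines():
        stripped = line.lstrip("#")
        if stripped.startswith("log4j.configuration="):
            out.append(stripped if stripped.endswith(wanted) else "#" + stripped)
        else:
            out.append(line)
    return "\n".join(out)

section = """#log4j.configuration=conf/logging_all.properties
log4j.configuration=conf/logging.properties"""
print(select_logging_config(section, "conf/logging_all.properties"))
```

Remember that, as in step 2, the change only takes effect after Query Server is restarted.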




Get More Help


Online documentation library


Business Objects offers a full documentation set covering all products and
their deployment. The online documentation library has the most up-to-date
version of the Business Objects product documentation. You can browse
the library contents, do full-text searches, read guides on line, and download
PDF versions. The library is updated regularly with new content as it becomes
available.

To access the online documentation library, visit http://help.sap.com/ and
click Business Objects at the top of the page.

Additional developer resources


https://boc.sdn.sap.com/developer/library/

Online customer support


The Business Objects Customer Support web site contains information about
Customer Support programs and services. It also has links to a wide range
of technical information including knowledgebase articles, downloads, and
support forums.

http://www.businessobjects.com/support/

Looking for the best deployment solution for your company?


Business Objects consultants can accompany you from the initial analysis
stage to the delivery of your deployment project. Expertise is available in
relational and multidimensional databases, in connectivities, database design
tools, customized embedding technology, and more.

For more information, contact your local sales office, or contact us at:

http://www.businessobjects.com/services/consulting/

Looking for training options?


From traditional classroom learning to targeted e-learning seminars, we can
offer a training package to suit your learning needs and preferred learning
style. Find more information on the Business Objects Education web site:

http://www.businessobjects.com/services/training

Send us your feedback
Do you have a suggestion on how we can improve our documentation? Is
there something you particularly like or have found useful? Drop us a line,
and we will do our best to ensure that your suggestion is included in the next
release of our documentation:

documentation@businessobjects.com

Note:
If your issue concerns a Business Objects product and not the documentation,
please contact our Customer Support experts. For information about
Customer Support visit: http://www.businessobjects.com/support/.

Business Objects product information


For information about the full range of Business Objects products, visit:
http://www.businessobjects.com.



Index
A
Access
    adding datasources 72, 76
    configuring connectors 400
    connection parameters 74
Access datasources
    using deployment contexts 76
activating
    mappings 280
adding
    datasources 72, 76, 81, 85, 86, 90, 91, 96, 101, 107, 112, 113, 118, 119, 124, 128, 132, 148, 149, 168, 181, 184, 203
    datasources from DDL scripts 171
    domain tables 54, 55
    login domains 525
    lookup tables 242, 243, 244
    mapping formulas 219, 222
    mappings 217, 218, 337
    projects 41
    targets 46
    targets from DDL scripts 47
ADMIN keyword 516
alternative syntax 692
archives
    storing project versions as 311, 312
authentication methods
    managing 207, 525
    on web services 185

B
backing up
    data
        Backup and Restore Tool 576
        Data Federator data 578
Backup and Restore Tool
    starting 576
backups 578
bind join
    optimizing 548

C
cancelling
    queries 534, 535
capabilities
    connectors 459
case statement 620
case statement formulas
    editing 230, 620
    syntax 230, 620
case statements 230
cast
    functions 671
catalog hierarchy 690
changing name
    target tables 48
check privilege statement 520
choosing file formats
    datasources 159
closing
    projects 42
colors
    meaning in source relationship diagram 255

columns connection parameters (continued)


restricting access to 209 generic ODBC 146
comments 706 Informix 83
configuring JDBC from custom resource 135
connectors 400, 401, 402, 410, 412, MySQL 88
414, 430, 432, 442, 451 Netezza 99
working directory 573 Oracle 93
connecting Progress 103
to Data Federator Query Server 352 SAS 109
connecting with JDBC SQL Server 115
sample code 353 Sybase 121
Connection Dispatcher Sybase IQ 126
about 586, 587 Teradata 130
configuring 593, 599 connections
logging 595 dispatching 586, 587
parameters connector definition files
setting 593 syntax 428, 461, 472
reverification 593, 594, 595 connectors
shutting down Access 400
AIX 592 capabilities 459
Linux 592 configuring 400, 401, 402, 410, 412,
Solaris 592 414, 430, 432, 442, 451
starting DB2 401
AIX 590 Informix 402
Linux 590 MySQL 410
Solaris 590 Netezza 414
Unix-like systems 590 Oracle 412
Windows 589 SAS 426
stopping SQL Server 430
AIX 592 Sybase 432
Linux 592 Sybase IQ 442
Solaris 592 Teradata 451
Unix-like systems 592 web services 479
using 589 constants
validity 593, 594, 595 mapping values to 224
connection parameters constraining values
Access 74 domain tables 263
datasources 74, 78, 83, 88, 93, 99, targets 263
103, 109, 115, 121, 126, 130, 135, constraint violations
141, 146 filtering 302
DB2 78 constraints
generic JDBC 141 defining 294, 295, 296, 297


conversion functions 671 datasources (continued)


core tables deleting 210
choosing 261 effect of changes 330
defining 264 from text files 158
meanings of table relationships using 261 from web services 182
create or modify a resource property 493 from xml files 179
create resource 491 generic JDBC 148
create role statement 523 generic ODBC 149
create user statement 516 Informix 81, 85
custom resources installing drivers 423, 427, 460
datasources 133 installing middleware 423, 433
JDBC 138
making draft 213
D making final 212
data managing 204
backing up managing workflow 67
Backup and Restore Tool 576 mapping to targets 217, 218
restoring mapping values to domains 248
Backup and Restore Tool 576 modifying 331, 333
Data Federator Query Server MySQL 86, 90
connecting to 352 Netezza 96, 101
data types 696 numeric formats 174
definition 604 Oracle 91, 96
mappings 697 Progress 101, 107
datasource tables Remote Query Server 203
displaying impact and lineage 208 running queries 211
referencing in lookup tables 246 SAS 107, 112
datasources SQL Server 113, 118
Access 72, 76 Sybase 119, 124
adding 72, 76, 81, 85, 86, 90, 91, 96, Sybase IQ 124, 128
101, 107, 112, 113, 118, 119, 124, Teradata 128, 132
128, 132, 148, 149, 168, 181, 184, testing 211
203 testing web services 201
adding web services 183, 184 text 168
choosing file formats 159 types of sources 209
choosing primary key 170 undoing final 213
connection parameters 74, 78, 83, 88, user interface 67
93, 99, 103, 109, 115, 121, 126, using target as 252
130, 135, 141, 146 using to define target table 47
creating from custom resources 133 web service 184
DB2 76, 81 XML 181
defining schema 172


date deployment contexts (continued)


conversion 699 using with DB2 81
date/time using with generic JDBC 148
functions 637 using with generic ODBC 149
date/time behavior 699 using with Informix 85
DB2 using with MySQL 90
adding datasources 76, 81 using with Netezza 101
configuring connectors 401 using with Oracle 96
connection parameters 78 using with Progress 107
DB2 datasources using with Remote Query Server 203
using deployment contexts 81 using with SAS 112
DDL scripts using with SQL Server 118
defining datasources 171 using with Sybase 124
defining targets 47 using with Sybase IQ 128
deactivating using with Teradata 132
mapping rules 280 using with text datasources 168
mappings 279 using with web service datasources 184
default catalogs 691 using with XML datasources 181
default user properties 517 dispatching
defining connections 586, 587, 593, 594, 595
constraints 294, 295, 296, 297 displaying
defining a schema statistics 395
of text file datasources 168, 171, 608 displaying in Data Federator Designer
defining extraction parameters impact and lineage 52
of text file datasources 607 domain table
text files 177 referencing in lookup tables 247
defining roles 523 domain tables
defining target table adding 54, 55
from datasources 47 constraining values in target tables 263
delete a resource property 494 dereferencing from lookup 251
delete resource 492 dereferencing from target 60
deleting enumerating values in target tables 61, 62
datasources 210 importing 59
login domains 526 managing 54
projects 42 modifying 338, 339
users 516 domains
deleting users 516 mapping datasources to 248
deploying double quoted delimiters 695
on a single remote Query Server 583 drop resource 492
projects 321, 324, 327 drop role statement 524
deployment contexts dropping
using with Access 76 users 516


E functions (continued)
curdate 637
effect of changes 330 curtime 638
exporting database 667
projects 313 date/time 637
dayname 638
dayofmonth 639
F dayofweek 639
fault tolerance dayofyear 639
Query Server 601 decrementdays 640
filter formulas degrees 631
editing 619 exp 631
syntax 619 floor 631
filtering 619 hexaToInt 675
constraint violations 302 hour 640
datasource columns 235, 236, 239, 241 ifElse 668
filters ifNull 669
adding to mapping rules 235, 236, 239, incrementdays 640
241 insert 648
formulas intToHexa 675
filters 619 isLike 649
if-then 620 lcase 665
relationships 621 left 650, 651
function leftStr 650, 651
conversion type 671 len 651
string type 647 length 651
functions locate 658
abs 628 log 632
acos 629 log10 632
ascii 647 lPad 652
asin 629 lTrim 652
atan 629 match 653
atan2 629 min 669
cast 671 minute 641
ceiling 630 mod 632
char 647 month 641
concat 648 monthname 641
containsonlydigits 648 now 642
convert 672 numeric 628
convertDate 674 nvl 669
cos 630 permute 654
cot 630 permuteGroups 657


functions (continued) functions (continued)


pi 633 user 670
pos 658 val 685
power 633 valueIfElse 671
quarter 643 week 646
radians 633 year 646
rand 634
repeat 659
replace 660
G
replaceStringExp 660 generic JDBC
right 661 adding datasources 148
rightStr 661 connection parameters 141
round 634 generic JDBC datasources
rPad 662 using deployment contexts 148
rPos 662 generic ODBC
rTrim 663 adding datasources 149
second 643 connection parameters 146
sign 635 generic ODBC datasources
sin 635 using deployment contexts 149
space 664 grant privilege statement 519
sqrt 636 grant role statement 524
str 680
subString 665
system type 667 I
tan 636
timestampadd 643 identifiers 688
timestampdiff 644 if-then formulas 620
toBoolean 675 impact and lineage
toDate 676 displaying for datasource tables 208
toDecimal 677 displaying for mappings 279
toDouble 678 displaying for target tables 48
toInteger 679 displaying in Data Federator Designer 52
toLower 665 importing
toNull 680 domain tables 59
toString 680 lookup tables 249
toTime 682 Informix
toTimestamp 683 adding datasources 81, 85
toUpper 666 configuring connectors 402
trim 666 connection parameters 83
trunc 636, 645 properties 403
truncate 636 Informix datasources
ucase 666 using deployment contexts 85


input columns JDBC from custom resource


assigning constant values to parameters connection parameters 135
188 JDBC URL templates 476
assigning dynamic values to parameters JVM 357
188
assigning values using binding functions
234
L
assigning values using table relationships life cycle
234 datasources 67
assigning values with pre-filters 233 mappings 220
mapping to 233 projects 309
propagating values to parameters 189 targets 50
providing values for 233 load balancing 586, 587
restricting access to columns 209 loading
input value functions projects 314, 315
using to assign values to input columns login
233, 234 default 40
installing drivers login and password
datasources 423, 427, 460 default 40
JDBC 460 login domains
Progress 423 adding 525, 755, 756
resources 423, 427, 460 definition 525
SAS 427 deleting 526, 756
installing middleware managing 525
datasources 423, 433 mapping 526
Progress 423 modifying 525
resources 423, 433 lookup tables
Sybase 433 adding 242, 243, 244
definition 242
J deleting 252
dereferencing domain table from 251
jdbc importing 249
using to connect to Data Federator 352 modifying 341
JDBC 460 referencing datasource table in 246
Add schema option 150 referencing domain table in 247
adding tables 157, 158, 181, 189
configuring data sources 150
datasources 138
M
installing drivers 460 making final
selecting sources 138 datasources 212
JDBC classes 475 managing
datasources 204


managing (continued) Modify user properties statement 517


domain tables 54 modifying
mappings 276, 277, 279, 337 datasources 331, 333
privileges 514 domain tables 338, 339
projects 308 login domains 525
roles 511 lookup tables 341
statistics 394 targets 335
tables in mappings 261, 282, 284, 286, MySQL
287, 289 adding datasources 86, 90
targets 46 configuring connectors 410
user accounts 507 connection parameters 88
user accounts using SQL 516 MySQL datasources
users 507, 514 using deployment contexts 90
users with SQL 516
mapping
source table keys 264
N
to input columns 233 naming
values to constant 224 resources 486
values to domain tables 248 naming conventions 688
mapping formulas Netezza
adding 219, 222 adding datasources 96, 101
testing 226, 232 configuring connectors 414
mapping rules connection parameters 99
deactivating 280 properties 415
mappings Netezza datasources
activating 280 using deployment contexts 101
adding 217, 218, 337 non-terminals 721
data types 697 Numeric
deactivating 279 functions 628
displaying impact and lineage 279 numeric constants 705
effect of changes 330
example tests 54, 281
managing 276, 277, 279, 337 O
managing tables in 261, 282, 284, 286,
287, 289 object identifiers 705
managing workflow 220 ODBC functions
running query 280, 281 privileges 378
testing 280, 281 ODBC metadata methods 534
user interface 216 opening
metadata methods multiple projects 319
ODBC 534 projects 41, 319
modify user password statement 517 operator precedence levels 704


optimizing Progress (continued)


queries installing middleware 423
bind join 548 Progress datasources
swap file access 546 using deployment contexts 107
Oracle projects
Add schema option 150 adding 41
adding datasources 91, 96 choosing servers for deployment 322
configuring connectors 412 closing 42
connection parameters 93 definition 40
Oracle datasources deleting 42
using deployment contexts 96 deploying 321, 322, 323, 324
exporting 313
loading 314, 315
P managing 308
parameters managing workflow 309
in SOAP header 186 opening 41, 319
parsing selecting 310
text files 161, 162 starting 41
password storage 323
default 40 storing 311, 312
patterns unlocking 43
in stored procedures 762 user interface 308
permissions 527, 728, 738 user rights 322
pre-filters version control 323
adding 236 properties
deleting 241 Informix 403
editing 239 Netezza 415
using to assign values to input columns 233 resources 489
precedence levels 704 Sybase 434
privileges 504, 505, 511, 518, 522, 738 Sybase IQ 443
for adding login domain 755 Teradata 452
for altering login domain 756 transactionIsolation 478
for deleting login domain 756 url 479
for Progress sources 423 user accounts 511
in SQL grammar 721
in SQL statements 706 Q
managing 514
ODBC functions 378 queries
Progress 422 cancelling 534, 535
adding datasources 101, 107 running 614, 615, 617
connection parameters 103 query optimization
installing drivers 423 bind join 548


query optimization (continued)
    large tables 548
    parameters 548
Query Server
    fault tolerance 601
    remote 584
    sharing by Designer 585

R

refreshing
    statistics 395
relationship formulas
    editing 621
    syntax 621
relationships 621
    adding in mappings 253, 254, 256, 259, 260, 267
    between datasource tables 253, 256, 259, 260, 267
    finding incomplete 254
remote
    deploying on Query Server 583
    Query Server 584
Remote Query Server
    adding datasources 203
Remote Query Server datasources
    using deployment contexts 203
remote text file as datasource 178
resource
    definition 483
resource management 483
resources
    installing drivers 423, 427, 460
    installing middleware 423, 433
    naming 486
    pre-defined 489
    properties 489
    properties for web service connectors 480
restoring
    backups 578
    data
        Backup and Restore Tool 576
        Data Federator data 578
restricting
    access to columns 209
revoke privilege statement 520
roles
    defining 523
    managing 511
running queries
    datasources 211
running query
    mappings 280, 281

S

sample code
    connecting with JDBC 353
SAS
    adding datasources 107, 112
    connection parameters 109
    connectors 426
    installing drivers 427
    optimizing queries 428
    order of tables in from clause 428
    properties 113
    resources 113
SAS datasources
    using deployment contexts 112
scale and precision 700
schema
    defining 168, 171, 608
selecting
    projects 310
setting
    table alias 287
sharing
    Query Server 585
shutting down
    Connection Dispatcher 591, 592


SOAP
    authenticating using SOAP header 185
    header 186
    passing parameters 186
source relationship diagram
    colors 255
source table keys
    mapping 264
SQL grammar 713
SQL Server
    adding datasources 113, 118
    configuring connectors 430
    connection parameters 115
SQL Server datasources
    using deployment contexts 118
SQL-92 statements 709
starting
    Connection Dispatcher 590
statement
    check privilege 520
    create role 523
    create user 516
    drop role 524
    drop user 516
    grant privilege 519
    grant role 524
    modify user password 517
    modify user properties 517
    revoke privilege 520
statements
    create or modify a resource 493
    create resource 491
    delete a resource property 494
    delete resource 492
statistics 395
    displaying 395
    managing 394
    refreshing 395
status
    targets 50
stopping
    Connection Dispatcher 591, 592
stored procedures
    addCredential 757
    addLoginDomain 755
    alterCredential 760
    alterLoginDomain 756
    clearMetrics 755
    delCredentials 758
    delLoginDomains 756
    getCatalogs 746
    getColumns 750
    getCredentials 761
    getForeignKeys 752
    getFunctionsSignatures 748
    getKeys 746
    getLoginDomains 757
    getSchemas 751
    getTables 744
    refreshTableCardinality 754
storing
    projects 311, 312
    selected tables 312
string functions 647
swap file
    optimizing access 546
Sybase
    adding datasources 119, 124
    configuring connectors 432
    connection parameters 121
    installing middleware 433
    properties 434
Sybase datasources
    using deployment contexts 124
Sybase IQ
    adding datasources 124, 128
    configuring connectors 442
    connection parameters 126
    properties 443
Sybase IQ datasources
    using deployment contexts 128
syntax
    url 352
    wd files 428, 461, 472


syntax key 714
system functions 667
system tables
    catalogs 729
    connections 740
    dual 740
    functions 733, 734
    functionsignatures 734
    permissions 738
    procedures 731
    procedureSignatures 732
    resourceproperties 739
    resources 739
    rolemembers 737
    roles 736
    schemas 729
    systemTables 730
    userproperties 736
    userroles 737
    users 735

T

table identifiers 692
table names
    wildcards 413
table relationships
    meanings of using core tables 261
    using to assign values to input columns 234
tables
    adding from JDBC source 157, 158, 181, 189
    adding to web service datasources 187
target tables
    changing name 48
    displaying impact and lineage 48
    storing 312
targets
    adding 46
    constraining values 263
    dereferencing domain table from 60
    effect of changes 330
    enumerating values 61, 62
    managing 46
    managing workflow 50
    modifying 335
    storing 312
    testing 53
    using as datasource 252
Teradata
    adding datasources 128, 132
    configuring connectors 451
    connection parameters 130
    errors 790
    properties 452
Teradata datasources
    using deployment contexts 132
testing
    datasources 211
    mapping formulas 226, 232
    mappings 280, 281
    targets 53
text
    adding datasources 168
text datasources
    using deployment contexts 168
text files
    datasources 159
    datasources from 158
    defining a schema 168, 171, 608
    defining extraction parameters 177, 607
    parsing 161, 162
    remote file as datasource 200
    remote text file as datasource 178
time
    conversion 699
transactionIsolation
    property 478
type inference 700


U

unlocking
    projects 43
url 479
    syntax 352
URLTEMPLATE property 479
    databases 479
user account
    default 40
user accounts
    managing 507
    managing using SQL 516
    mapping to login domains 526
    properties 511
user interface
    datasources 67
    mappings 216
    overview 31
    projects 308
user management 728
user properties
    default 517
users
    deleting 516
    dropping 516
    managing 507, 514
    managing using SQL 516

W

web service
    adding datasources 184
    properties for connectors 480
web service datasources
    using deployment contexts 184
web services
    adding datasources 183, 184
    adding tables 187
    assigning constant values to parameters 188
    assigning dynamic values to parameters 188
    authenticating on 185
    authenticating on server 185
    connectors 479
    datasources from 182
    extracting operations from 183
    propagating values to parameters 189
    responses 187
    testing datasources 201
Windows services 591
    not installed 590, 591
workflow
    datasources 67
    mappings 220
    projects 309
working directory
    configuring 573
WSDL files
    choosing 183, 184
    extracting operations from 183
    testing 201

X

XML
    adding datasources 181
XML datasources
    using deployment contexts 181
xml files
    datasources from 179
