1) Which two statements are correct about XML stages and their usage? (Choose two.)
A. XML Transformer stage converts XML data to tabular format.
B. XML Input stage converts XML data to tabular format.
C. XML Output stage uses XSLT stylesheet for XML to tabular transformations.
D. XML Output stage converts tabular data to XML hierarchical structure.
2) Which two statements are true about usage of the APT_DISABLE_COMBINATION environment
variable? (Choose two.)
A. Globally disables operator combining.
B. Must use the job design canvas to check which stages are no longer being combined.
C. Disabling generates more processes requiring more system resources and memory.
D. Locks the job so that no one can modify it.
3) When importing a COBOL file definition, which two are required? (Choose two.)
A. The file does not contain any OCCURS DEPENDING ON clauses.
B. The column definitions are in a COBOL copybook file and not, for example, in a COBOL source
file.
C. The file you are importing contains level 01 items.
D. The file you are importing is accessible from your client workstation.
4) A client requires that any job that aborts in a Job Sequence halt processing. Which three activities
would provide this capability? (Choose three.)
A. Exception Handler
B. Nested Condition Activity
C. Job trigger
D. Sendmail Activity
E. Sequencer Activity
5) A DataStage EE job is sourcing a flat file which contains a VARCHAR field. This field needs to be
mapped to a target field that is a date. Which task will accomplish this?
A. Use a Modify stage to perform the type conversion.
B. Perform a datatype conversion using DateToString function inside the Transformer stage.
C. Use a Copy stage to perform the type conversion.
D. DataStage automatically performs this type conversion by default.
6) You are reading data from a Sequential File stage. The column definitions are specified by a
schema. You are considering whether to follow the Sequential File stage with either a Transformer
or a Modify stage. Which two criteria require the use of one of these stages instead of the other?
(Choose two.)
A. You want to dynamically specify the name of an output column based on a job parameter,
therefore you select a Modify stage.
B. You want to add additional columns, therefore you select a Transformer stage.
C. You want to concatenate values from multiple input rows and write this to an output link,
therefore you select a Transformer stage.
D. You want to replace NULL values by a specified constant value, therefore you select a Modify
stage.
7) Which three actions are performed using stage variables in a parallel Transformer stage? (Choose
three.)
A. A function can be executed once per record.
B. Identify the last row of an input group.
C. Identify the first row of an input group.
D. A function can be executed once per run.
E. Look up a value from a reference dataset.
8) Which three UNIX kernel parameters have minimum requirements for DataStage installations?
(Choose three.)
A. MAXPERM - disk cache threshold
B. MAXUPROC - maximum number of processes per user
C. NOFILES - number of open files
D. SHMMAX - maximum shared memory segment size
E. NOPROC - no process limit
9) Which partitioning method would yield the most even distribution of data without duplication?
A. Random
B. Hash
C. Round Robin
D. Entire
10) Which environment variable controls whether performance statistics can be displayed in
Designer?
A. APT_NO_JOBMON
B. APT_PERFORMANCE_DATA
C. APT_PM_SHOW_PIDS
D. APT_RECORD_COUNTS
==================================================================================
Reading and writing null values in Datastage EE
There are a few considerations that need to be taken into account when handling nulls in Datastage:
All DSEE data types are nullable
Null fields do not have a value. In Enterprise Edition, NULL is represented by a special value outside the range of any legitimate Datastage value
Nulls can be written to nullable columns only (this setting is specified in the column properties)
The Datastage job will abort when a NULL is to be written to a column which does not allow nulls
Nulls can be converted to or from a value. For instance, in a Sequential File stage nulls must be handled explicitly by specifying a value to be written in place of the null.
In a sequential source stage it is possible to specify values that will be converted to NULLs
A stage can ignore null fields, trigger an error, or take another defined action
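As an illustration, the Modify stage provides the handle_null and make_null functions for explicit null conversion. A minimal sketch, assuming illustrative column names SourceCol and TargetCol (check the Modify stage documentation for the exact specification syntax in your version):

TargetCol = handle_null(SourceCol, 'UNKNOWN')
TargetCol = make_null(SourceCol, 'UNKNOWN')

The first specification replaces an incoming null with the literal 'UNKNOWN'; the second converts that literal back into a null on the output link.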
----------------------------------------------------------------------------------
How to manage Datastage DataSets?
The Datastage DataSets can be managed in a few ways:
The Datastage Designer GUI (also available in Manager and Director) provides a mechanism to view and manage data sets. It can be invoked from Tools -> Data Set Management
orchadmin command-line utility - a Unix tool that can list records and remove datasets (all component files, not just the header file)
dsrecords - a command-line utility that lists the number of records in a dataset
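A few sample invocations of these utilities (a sketch, assuming a dataset header file named myds.ds and a parallel engine environment that is already configured; see the orchadmin documentation for the full list of subcommands in your version):

dsrecords myds.ds       # report the number of records in the dataset
orchadmin dump myds.ds  # print the dataset records to standard output
orchadmin rm myds.ds    # remove the dataset together with all of its component files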
------------------------------------------------------------------------------------
What gets deployed when installing an Information Server Domain?
The following components are installed when deploying the IBM Infosphere Information Server domain:
Metadata Server - installed on a WebSphere Application Server instance
Datastage server (or multiple servers) - may run on the parallel (EE) or server engine
Repository Database - DB2 UDB by default (available with the installation), however another RDBMS can be used
Information Server clients: Administration console, Reporting console, DS Administrator, DS Designer, DS Director
Information Analyzer
Business Glossary
Optionally: Rational Data Architect, Federation Server
------------------------------------------------------------------------------------
What is included in the configuration layers?
Configuration layers indicate what is installed on the IBM Infosphere installation server (each layer is installed on the machine local to that installation).
The configuration layers include:
Datastage and Information Server clients
Engine - Datastage and other Infosphere applications
Domain - metadata server and installed product domain components
Repository database
Documentation
------------------------------------------------------------------------------------
What is the IBM Information Server startup sequence?
The Information Server startup sequence needs to be preserved to avoid errors.
The startup steps are as follows:
Metadata Server startup - from the Microsoft Windows Start menu (the start the server option) or by running startServer server1 (startServer.sh on Unix) from the application server profile's bin directory, where server1 is the default name of the application server hosting the Metadata Server
Start the ASB agent - by default it is set to start during system startup. To start it manually, go to the Information Server folder in the Start menu and click Start the agent. The agent is only required when the Infosphere components (Datastage and Metadata Server) run on different servers.
Open the Administration and Reporting consoles by clicking the Information Server Web Console icon.
Double-click the DataStage and QualityStage client icon to begin ETL development.
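On a Unix host, the first two steps can also be performed from the command line. A sketch, assuming default installation paths and a WebSphere profile named default (adjust the paths, profile name and server name to your installation):

cd /opt/IBM/WebSphere/AppServer/profiles/default/bin
./startServer.sh server1                                    # start the application server hosting the Metadata Server
/opt/IBM/InformationServer/ASBNode/bin/NodeAgents.sh start  # start the ASB agent manually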
----------------------------------------------------------------------------------
What privileges are required to run a parallel job?
To run a parallel job in Datastage, a user must have the following minimum permissions:
Read access to APT_ORCHHOME
Execute permissions on local programs and scripts
Read and Write access to the disk resources
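A quick way to check these permissions from the DataStage user's shell (a sketch; the resource and scratch disk paths below are placeholders - use the directories named in your configuration file):

ls -ld $APT_ORCHHOME            # must be readable by the DataStage user
ls -ld /data/ds/resource_disk   # placeholder resource disk path - needs read and write access
ls -ld /data/ds/scratch_disk    # placeholder scratch disk path - needs read and write access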
----------------------------------------------------------------------------------
What is the default ip address and port for Information Server Web Console?
The Infosphere Information Server environment administration is done in the IBM Information Server Web Console, accessed via a web browser.
The default address is http://localhost:9080, where localhost should be replaced by a hostname or IP address of the machine that hosts the MetaData server and the default port 9080 can be adjusted if necessary.
The web console login page asks for a user name and password. These are the login credentials specified during the installation; the administrator can manage user IDs in the console.
-----------------------------------------------------------------------------------
What are datastage user roles?
Infosphere Datastage user roles are set up in the project properties in Datastage Administrator. To prevent unauthorized access, all Datastage users must have a user role assigned in the administrative console.
Each Datastage user can have one of the following roles assigned:
Datastage Developer - has full access to all functions of a Datastage project
Datastage Operator - these users are allowed to run and manage released Datastage jobs
Datastage Super Operator - can browse the repository (read-only) and open the Designer client
Datastage Production Manager - creates and manages protected projects
-----------------------------------------------------------------------------------
What is server-side tracing?
Infosphere Datastage server-side tracing is enabled and disabled in the project properties, on the Tracing tab of the Datastage Administrator.
By default, server-side tracing is disabled, because tracing consumes significant system resources and causes a lot of server overhead.
When server-side tracing is enabled, information about all Datastage server activity is recorded and written to trace files.
It is strongly recommended that tracing be used only by experienced Datastage users or with the help of Datastage customer support.
-----------------------------------------------------------------------------------
Datastage imports and exports
All types of Infosphere Datastage objects (stored in the repository) can be easily exported to a file or imported into Datastage from a previously created export.
The main purposes of imports and exports are:
Projects and jobs backups
Maintaining and versioning
Moving objects between projects or datastage installations
Sharing projects and jobs between developers.
Imports and exports are executed from within the Datastage Client (in previous versions it was Datastage Manager).
To begin the export process, go to the Datastage and QualityStage Client menu and click Export -> Datastage components. Then select the types of objects to export and specify the details of an export file - path, name and extension (dsx or xml).
Keep in mind that the path to the export file is on the client machine, not the server.
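Exports can also be scripted with the dscmdexport client tool. A rough sketch - the flag syntax below is an assumption from memory and varies between versions, so verify it against your client's command-line documentation before use:

dscmdexport /D=domain:9080 /H=engine_host /U=dsadm /P=password MyProject C:\exports\MyProject.dsx

This would export the entire MyProject project to a dsx file on the client machine; domain, engine_host, dsadm and the target path are placeholders.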