Tuesday 22 May 2012

Converting to AL32UTF8 Character Set from the Oracle Default of WE8MSWIN1252


Scenario

The current database uses the WE8MSWIN1252 character set, and we need to migrate to a new database that uses AL32UTF8 encoding.

Method Opted:
Export/Import (you can also use the Data Pump utility to do the same)

STEPS:
1) Explicitly set the NLS_LANG environment variable to the character set of the source database.

#export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252

2) Export the source database with the export utility, taking a full export backup.

#exp system/sys file=orcl_full.dmp log=orcl_full.log full=y consistent=y statistics=none buffer=10000000

3) Create a new database with the AL32UTF8 character set.

4) Keep the NLS_LANG environment variable set to the source character set before importing into the new database.

#export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252

5) Import the data into the schema

# imp username/password file=sigma.dmp log=import_SIGMAUAT_BACKUP.log fromuser=scott_backup touser=scott

Related Issues:

ORA-12899: value too large for column

Cause: The column length is less than the length of the data that needs to be imported/inserted into that particular column.

Solution: Increase the column length.

Explanation: When a database is created using byte semantics, the sizes of CHAR and VARCHAR2 columns are specified in bytes, not characters. When the database uses a single-byte encoding, the number of characters is equivalent to the number of bytes. But if the database uses a multibyte character set, a character is no longer equivalent to a byte.

During migration to a new character set, it is important to verify the column widths of existing CHAR and VARCHAR2 columns, because they may need to be extended to support an encoding that requires multibyte storage.
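A quick way to spot columns at risk is to compare each value's byte length in the old and new encodings. A minimal Python sketch with made-up sample data (Python's cp1252 codec corresponds to WE8MSWIN1252):

```python
# Sketch (hypothetical sample data): check whether values that fit a
# byte-sized column under WE8MSWIN1252 still fit after AL32UTF8 conversion.

def max_utf8_bytes(values):
    """Largest UTF-8 byte length among the given strings."""
    return max(len(v.encode("utf-8")) for v in values)

# A VARCHAR2(10 BYTE) column: these values fit within 10 bytes in cp1252
# (Python's name for Windows-1252) ...
values = ["Größenwahn", "café"]
assert all(len(v.encode("cp1252")) <= 10 for v in values)

# ... but 'ö' and 'ß' take 2 bytes each in UTF-8, so the first value
# needs 12 bytes and would raise ORA-12899 on import.
print(max_utf8_bytes(values))  # 12
```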

Single-Byte and Multibyte Encoding

Character   WE8MSWIN1252 Encoding   AL32UTF8 Encoding
---------   ---------------------   -----------------
Ä           E4                      C3 A4
Ö           F6                      C3 B6
©           A9                      C2 A9
€           80                      E2 82 AC







Friday 18 May 2012

How to generate an AWR report

1) Connect as the SYS user

$ sqlplus / as sysdba

2) Run the awrrpt script, which you can find in the $ORACLE_HOME/rdbms/admin directory

SQL> @$ORACLE_HOME/rdbms/admin/awrrpt


3) Specify the report type

Enter value for report_type: html

4) Specify the number of days of snapshots you want listed for the AWR report

Enter value for num_days: 1

5) You will now see a list of the last day's completed snapshots (one snap ID per hour), from which you can choose the snap IDs for the specific time interval.

Instance     DB Name        Snap Id    Snap Started     Level
------------ ------------ --------- ------------------ -----
orcl         orcl             16159 18 May 2012 00:00      1
                              16160 18 May 2012 01:00      1
                              16161 18 May 2012 02:00      1
                              16162 18 May 2012 03:00      1
                              16163 18 May 2012 04:00      1
                              16164 18 May 2012 05:00      1



6) Specify the begin and end snapshot IDs:

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 16159
Begin Snapshot Id specified: 16159

Enter value for end_snap: 16162
End Snapshot Id specified: 16162

7)  Specify the Report Name

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1.html. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: path/myreport

How to find the redo generated by DML and DDL



Sometimes we want to know the redo generated by a transaction in Oracle. We can simply use "set autotrace on statistics":



SQL> set autotrace on statistics
SQL> insert into  object1  select * from all_objects;
49951 rows created.
Statistics
----------------------------------------------------------
       6693  recursive calls
       3866  db block gets
      38572  consistent gets
        263  physical reads
    5600712  redo size
        675  bytes sent via SQL*Net to client
        579  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
         15  sorts (memory)
          0  sorts (disk)
      49951  rows processed


But in a few cases, such as CREATE INDEX, we can find the redo using the queries below.


SQL> set verify off
SQL> column value new_val V
SQL> select a.name ,b.value value from v$statname a, v$mystat b  where a.statistic#=b.statistic# and lower(a.name) like '%redo size%' ;

NAME VALUE
---------------------------------------------------------------- ----------
redo size 185164

SQL> create table object3 as select * from dba_objects;

Table created.


SQL> select a.name , b.value ,b.value-&V redogenerated   from v$statname a, v$mystat b where a.statistic#=b.statistic#  and lower(a.name) like '%redo size%';

NAME                                                                  VALUE REDOGENERATED
---------------------------------------------------------------- ---------- -------------
redo size                                                            239880         54716

SQL> set verify off
SQL> column value new_val V
SQL> select a.name ,b.value value from v$statname a, v$mystat b
2 where a.statistic#=b.statistic# and lower(a.name) like '%redo size%' ;

NAME VALUE
---------------------------------------------------------------- ----------
redo size 2323044

SQL> create index obind1 on object3(object_type);

Index created.

SQL> select a.name , b.value ,b.value-&V redogenerated
2 from v$statname a, v$mystat b where a.statistic#=b.statistic#
3 and lower(a.name) like '%redo size%';

NAME                                                                  VALUE REDOGENERATED
---------------------------------------------------------------- ---------- -------------
redo size                                                           3503840       1180796
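The pattern in both runs is the same: read the session's cumulative "redo size" counter from v$mystat before and after the operation, then subtract. With the numbers from the CREATE INDEX run above:

```python
# v$mystat's "redo size" statistic is cumulative for the session, so the
# redo attributable to one operation is simply the difference between
# two snapshots of the counter (values taken from the session above).
before = 2323044   # redo size before CREATE INDEX
after = 3503840    # redo size after CREATE INDEX

print(after - before)  # 1180796 bytes of redo generated by the index build
```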


Wednesday 16 May 2012

ORA-00827: could not shrink sga_target to specified value



SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 24000M
sga_target big integer 24000M

SQL> alter system set sga_max_size=20000M scope=spfile;

System altered.

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup

ORACLE instance started.

Total System Global Area 2.3773E+10 bytes
Fixed Size 2126736 bytes
Variable Size 687869040 bytes
Database Buffers 2.3069E+10 bytes
Redo Buffers 14647296 bytes
Database mounted.
Database opened.

SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 22672M
sga_target big integer 22672M


sga_max_size is still greater than 20000M even though it had already been modified. After that I tried to modify sga_target and, oops, got an error:

 SQL> alter system set sga_target=18000M scope=both;
alter system set sga_target=18000M scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00827: could not shrink sga_target to specified value

Then I checked the size of the SGA subcomponents, i.e. db_cache_size:

SQL> show parameter db_cache_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------

db_cache_size big integer 22000M

So there was the issue: db_cache_size was set to 22000M. I shrank db_cache_size first, and after that I was able to shrink sga_target successfully.


SQL> alter system flush buffer_cache;

System altered.


SQL> alter system set db_cache_size=17008M scope=both;

System altered.

SQL> alter system set sga_target=18000M scope=both;

System altered.

SQL> alter system set sga_max_size=20000M scope=spfile;

System altered.

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 2.0972E+10 bytes
Fixed Size 2121400 bytes
Variable Size 2768240968 bytes
Database Buffers 1.8187E+10 bytes
Redo Buffers 14655488 bytes
Database mounted.
Database opened.
SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 20000M
sga_target big integer 18000M
SQL>

Now sga_max_size also shows 20000M.




Tuesday 15 May 2012

Configure UTL_MAIL in 10g


If you want to send email from Oracle 10g, you must install and set up the UTL_MAIL
package. You need to log in as the SYS user first.



SQL> show user
 USER is "SYS" 


The UTL_MAIL package provides a simple API to allow email to be sent from PL/SQL.
The package is loaded by running the following scripts.


SQL> host echo $ORACLE_HOME
/app/oracle/10g

SQL> @/app/oracle/10g/rdbms/admin/utlmail.sql

Package created.

Synonym created.
SQL> @/app/oracle/10g/rdbms/admin/prvtmail.plb

Package body created.

No errors.


Next, the SMTP_OUT_SERVER parameter must be set to identify the SMTP server.


SQL> alter system set smtp_out_server ='127.0.0.1:25' scope=both;

System altered.



Using the following code, we can send mail:



SQL> create or replace procedure vani as
2 subtext varchar(20) default 'abc';
3 begin
4 Utl_Mail.send(
5 sender => 'dba.vanitasharma@gmail.com',
6 recipients => 'dba.vanitasharma@gmail.com',
7 cc=>'dba.vanitasharma@gmail.com',
8 subject => subtext,
9 mime_type=>'text/html; charset=us-ascii'
10 );
11 end;
12
13 /

Procedure created.

SQL> execute vani

PL/SQL procedure successfully completed.

SQL>








Monday 14 May 2012

Issues in Installing 10gR2 on Linux 64-bit


Last week I installed Oracle 10.2.0.1.0 on Linux 2.6.9-22.ELsmp. I faced the errors below.

1)  Error in invoking target 'install' of makefile ins_ctx.mk

Solution:
Install rpm glibc-devel-2.3.4-2.39.i386.rpm

This RPM is required because a few installation files need 32-bit compatibility. The 32-bit and 64-bit RPMs work together, with the 64-bit versions as an add-on; I have both of them installed on my system.

When installing the 32-bit version on a 64-bit machine, you need the --force and --oldpackage options:

rpm -iv --force --oldpackage glibc-devel-2.3.4-2.39.i386.rpm


2) Exception String: Error in invoking target 'all_no_orcl' of makefile '/home/oracle/10.2.0/db_1/rdbms/lib/ins_rdbms.mk'.

Solution:
Install rpm
binutils-2.15.92.0.2.15 (x86_64)
binutils-2.15.92.0.2.13.0.0.0.2 (x86_64)

Query:
# rpm --query --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" binutils

Output:
binutils-2.15.92.0.2.15 (x86_64)
binutils-2.15.92.0.2.10.EL4 (i386)
binutils-2.15.92.0.2.13.0.0.0.2 (x86_64)

3) ORA-12547: TNS:lost contact when creating a database using DBCA
Solution:
Install rpm libaio-0.3.105-2.x86_64.rpm

Query:
# rpm --query --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" libaio

Output:
libaio-0.3.103.3 (i386)
libaio-0.3.105.2 (x86_64)


Friday 11 May 2012

Cross-Platform Transportable Tablespaces


A transportable tablespace allows you to move the data of a tablespace from one Oracle database to another. Oracle 10g additionally allows you to move tablespaces across different platforms.

Steps to move data across platforms

1) Check Platform Support and the File Conversion Requirement

The pattern for byte ordering in native types is called endianness. There are only two main patterns, big endian and little endian. Big endian means the most significant byte comes first, and little endian means the least significant byte comes first. If the source platform and the target platform are of different endianness, then an additional step must be taken on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform
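The two byte orders are easy to illustrate with Python's struct module; RMAN's CONVERT step performs the analogous reordering on datafile blocks when source and target endianness differ:

```python
import struct

value = 0x01020304  # a 32-bit integer

big = struct.pack(">I", value)     # big endian: most significant byte first
little = struct.pack("<I", value)  # little endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201
```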


SQL> select platform_name , endian_format from v$transportable_platform
SQL> /

PLATFORM_NAME                  ENDIAN_FORMAT
------------------------------ --------------
Solaris[tm] OE (32-bit)        Big
Solaris[tm] OE (64-bit)        Big
Microsoft Windows IA (32-bit)  Little
Linux IA (32-bit)              Little
AIX-Based Systems (64-bit)     Big
HP-UX (64-bit)                 Big
HP Tru64 UNIX                  Little
HP-UX IA (64-bit)              Big
Linux IA (64-bit)              Little
HP Open VMS                    Little
Microsoft Windows IA (64-bit)  Little
IBM zSeries Based Linux        Big
Linux x86 64-bit               Little
Apple Mac OS                   Big
Microsoft Windows x86 64-bit   Little
Solaris Operating System (x86) Little
IBM Power Based Linux          Big
Solaris Operating System (x86- Little
64)

HP IA Open VMS                 Little

19 rows selected.


2) Identify Tablespaces to Be Transported and Verify Self-Containment

SQL> execute dbms_tts.transport_set_check('NEW',true);

PL/SQL procedure successfully completed.

OR

SQL> BEGIN
  2    SYS.dbms_tts.transport_set_check
  3     ('NEw', incl_constraints=>TRUE, full_check=>FALSE);
  4  END;
  5  /

PL/SQL procedure successfully completed.

SQL> SELECT * FROM SYS.transport_set_violations;

no rows selected

SQL>

3) Make the Tablespace Read-Only

sql> alter tablespace NEw read only;

Tablespace altered.

4) Create a Directory for the Export Dump

SQL> create  directory dumdir  as '/home/oracle/dumpdir';

Directory created.

SQL> grant read on directory dumdir to system;

Grant succeeded.

SQL> grant write on directory dumdir to system;

Grant succeeded.

5) Take a Backup of the Tablespace

[oracle@dbdatah-server ~]$ expdp dumpfile=new.dmp logfile=new.log directory=dumdir transport_tablespace=new
LRM-00101: unknown parameter name 'transport_tablespace'

(The parameter name is transport_tablespaces, plural; the corrected command follows.)

[oracle@dbdatah-server ~]$ expdp dumpfile=new.dmp logfile=new.log directory=dumdir transport_tablespaces=new transport_full_check=y

Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 09 August, 2011 13:30:15

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=new.dmp logfile=new.log directory=dumdir transport_tablespaces=new transport_full_check=y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /home/oracle/dumpdir/new.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 13:31:37



6) Copy the Datafiles to the Destination Platform

7) Convert the Datafiles Belonging to the Transportable Tablespace

C:\oracle\product\10.2.0\db_1>rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Tue Aug 9 16:04:08 2011

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: GEN (DBID=1378351019)

RMAN> convert datafile "c:\dumpdir\new2.dbf" to platform="Microsoft Windows IA (32-bit)" db_file_nam
e_convert=('/home/oracle/oradata/dbdata/','c:\dumpdir');

Starting backup at 09-AUG-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=145 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input filename=C:\DUMPDIR\NEW2.DBF
converted datafile=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\DATA_D-DBDATA_I-504259222_TS-NEW_FNO-7_0
2MJIPC6
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:36
Finished backup at 09-AUG-11

RMAN> exit;


Recovery Manager complete.

8) Import the Tablespace
C:\oracle\product\10.2.0\db_1>impdp dumpfile=new.dmp logfile=new.log  directory=dumpdir transport_da
tafiles=c:\dumpdir\new1.dbf,c:\dumpdir\new2.dbf

Import: Release 10.2.0.4.0 - Production on Tuesday, 09 August, 2011 16:06:44

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** dumpfile=new.dmp logfile=new.log d
irectory=dumpdir transport_datafiles=c:\dumpdir\new1.dbf,c:\dumpdir\new2.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 16:06:55


C:\oracle\product\10.2.0\db_1>sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Tue Aug 9 16:07:21 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options



SQL> select tablespace_name,file_name from dba_data_files;

TABLESPACE_NAME
------------------------------
FILE_NAME
--------------------------------------------------------------------------------
USERS
C:\ORACLE\PRODUCT\10.2.0\ORADATA\GEN\USERS01.DBF

SYSAUX
C:\ORACLE\PRODUCT\10.2.0\ORADATA\GEN\SYSAUX01.DBF

UNDOTBS1
C:\ORACLE\PRODUCT\10.2.0\ORADATA\GEN\UNDOTBS01.DBF


TABLESPACE_NAME
------------------------------
FILE_NAME
--------------------------------------------------------------------------------
SYSTEM
C:\ORACLE\PRODUCT\10.2.0\ORADATA\GEN\SYSTEM01.DBF

NEW
C:\DUMPDIR\NEW2.DBF

NEW
C:\DUMPDIR\NEW1.DBF


6 rows selected.



SQL> select tablespace_name ,status from dba_tablespaces where tablespace_name='NEW';

TABLESPACE_NAME                STATUS
------------------------------ ---------
NEW                            READ ONLY

9) Put the Tablespace in Read-Write Mode

SQL> alter tablespace new  read write;

Tablespace altered.

SQL> select tablespace_name ,status from dba_tablespaces where tablespace_name='NEW';

TABLESPACE_NAME                STATUS
------------------------------ ---------
NEW                            ONLINE

SQL>

Thursday 10 May 2012

BINDSIZE, ROWS and READSIZE


The BINDSIZE, ROWS, and READSIZE parameters affect load performance.
Try to keep a grip on the commit frequency. If the commit frequency is high, performance becomes poor: commits happen at small intervals, and resources are repeatedly occupied writing to the datafiles. Likewise, if commits happen at very large intervals, performance degrades again, and the redo generated by a conventional-path load becomes large. Our objective is to specify these parameters with proper values.
Sometimes it is difficult to decide what values to use for BINDSIZE, READSIZE, and ROWS, and to understand how SQL*Loader behaves with these parameters. Here is a practical example to illustrate all the facts above.

We have a table into which the data has to be loaded:

COMP_ID    VARCHAR2(20)
COMP_NAME  VARCHAR2(100)
COMP_ADD   VARCHAR2(200)
CITY       VARCHAR2(30)
PHONE_NO   VARCHAR2(30)


·         BINDSIZE specifies the size of the bind array in bytes.
·         READSIZE specifies the size of the buffer used when reading the data.
·         The maximum value for BINDSIZE is 20971520.

First, let's see how SQL*Loader commits if we do not mention ROWS:

specified value for readsize(20000000) less than bindsize(20000000)
Commit point reached - logical record count 64
Commit point reached - logical record count 128

It commits every 64 rows; BINDSIZE and READSIZE have no effect if we use the default value of ROWS.

·         Now let's see what happens if ROWS is defined. I specified rows=200000 bindsize=20000000 readsize=20000000:

Commit point reached - logical record count 65936
Commit point reached - logical record count 131872

It overrides rows=200000 because the row limit in bytes (maximum row length * rows) exceeds the BINDSIZE limit; in that case Oracle determines the commit row count from the BINDSIZE.
One more example: now we decrease the BINDSIZE but keep the same ROWS as above, with bindsize=10000000 readsize=10000000:

Commit point reached - logical record count 7678
Commit point reached - logical record count 15358


You can check the log file for more details:

Space allocated for bind array:                9999230 bytes(7633 rows)
Read   buffer bytes:10000000


This time the commit interval decreased. But ROWS also plays a role in committing; let's check in the next example.


Specify the parameters bindsize=20000000 readsize=20000000 rows=200:
Commit point reached - logical record count 200
Commit point reached - logical record count 400

Space allocated for bind array:                 262000 bytes(200 rows)
Read   buffer bytes:20000000

·         If we specify only the ROWS parameter, commits take place at the given number of rows as long as those rows fit within the default BINDSIZE (256000 bytes); otherwise SQL*Loader commits at a row count calculated from the default BINDSIZE.

sqlldr scott/tiger control=multiplefile.ctl log=multiplefile.log  rows=3000
Commit point reached - logical record count 195


sqlldr scott/tiger control=multiplefile.ctl log=multiplefile.log  rows=30
Commit point reached - logical record count 30
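The behavior observed above can be summarized as: the effective commit interval is the requested ROWS, capped by how many maximum-length rows fit in the bind array. A Python sketch of that rule (the ~1310-byte row length is inferred from the log line "9999230 bytes(7633 rows)"; this models the observed behavior, not SQL*Loader's exact algorithm):

```python
def effective_rows(rows, bindsize, max_row_len):
    """Rows per commit: ROWS, capped by how many max-length rows fit in the bind array."""
    return min(rows, bindsize // max_row_len)

# rows=200000 with bindsize=10000000 is capped at 7633 rows of ~1310 bytes,
# matching "Space allocated for bind array: 9999230 bytes(7633 rows)".
print(effective_rows(200000, 10000000, 1310))   # 7633

# rows=3000 with the default bindsize of 256000 bytes is capped at 195,
# matching "Commit point reached - logical record count 195".
print(effective_rows(3000, 256000, 1310))       # 195

# rows=30 fits easily within the bind array, so the requested value wins.
print(effective_rows(30, 256000, 1310))         # 30
```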

·         The point to consider is the best way to set these parameters. I tried to consolidate by testing several parameter values and found some results. My system is idle, with no other process running. I am going to load a 46 MB file into a table with five columns. The table is truncated first (taking 15-18 seconds) and then the data is loaded.

[oracle@mediamath-server sqllder]$  sqlldr scott/tiger control=multiplefile.ctl log=multiplefile.log bindsize=20000000  readsize=20000000
Commit point reached - logical record count 64
Elapsed time: 1:49

sqlldr scott/tiger control=multiplefile.ctl log=multiplefile.log bindsize=20000000 readsize=20000000  rows=200000
Commit point reached - logical record count 15359
Elapsed time: 1:25

sqlldr scott/tiger control=multiplefile.ctl log=multiplefile.log bindsize=10000000 readsize=10000000  rows=5000
Commit point reached - logical record count 5000
Elapsed time: 1:05


From the above examples we can conclude that when the commit frequency is high (every 64 rows), loading takes more time and performance is poor. Committing at very large intervals also degrades performance, so always set a proper commit frequency.
Also, always try to specify the ROWS parameter; otherwise everything depends on BINDSIZE. If you have no idea of the maximum row length and specify only the BINDSIZE, it can increase your difficulties.
And, as the first example above shows, if you leave all of these parameters at their defaults, performance is at its worst.

SQL LOADER


SQL*Loader is an Oracle utility used to load data from external sources.
To perform a load with SQL*Loader we need a control file: simply a file with a .ctl extension that contains the parameters for the load, i.e. INFILE (the file from which data is loaded), the format of the table into which the data is loaded, the path of the discard file (containing records that do not meet the load criteria) and of the bad file (containing records that do not match the format of the table's columns), and how to insert the data: INSERT (simply insert data into an empty table), APPEND (append rows to the existing data), TRUNCATE (truncate the table first and then load the data) or REPLACE (delete the data from the table first and then insert the new data). If we have to load data in different formats, we have to create a different control file for each data format.
If we do not use the proper clauses, the number of bad or discarded records increases. First we have to analyze the data to be loaded and then create the control file according to the requirement; sometimes, to load data in the proper format, we need a bunch of the clauses provided by Oracle. In this article you will find a variety of control files for loading data from files of different formats.





First, how to run SQL*Loader from the command prompt:
sqlldr scott/tiger control=multiplefile.ctl log=multiplefile.log discard=archiver.dat bad=archiver.bad bindsize=10000000 readsize=1000000000 rows=5000

* First we load a file whose field data is separated by the delimiter "|".

Example of the data file:-

v7347|ebusi|sec-2|gurgaon|758475848
h476|hnl|sec-9|pune|647357364
g637|lokp|sec-8|pune|534537363

Example of Control file :-

load data
infile 'archvefile1.txt'
append into table archiver_data
fields terminated by '|'
(COMP_ID  ,
 COMP_NAME,
 COMP_ADD ,
 CITY     ,
 PHONE_NO )



* Load data from a file whose fields are separated by tabs.
X'09' specifies that field data is separated by tabs, and "str '|\n'" specifies the record terminator: here a record ends with a "|" followed by a line feed.
i.e




v7347   ebusi   sec-2   gurgaon 758475848 34|
h476     hnl    sec-9   pune    647357364 |
g637    lokp    sec-8   pune    534537363|



Control file tabsepratedata.ctl:

load data
infile 'archvefile1.txt' "str '|\n'"
append into table archiver_data
fields terminated by X'09'
(COMP_ID  ,
 COMP_NAME,
 COMP_ADD ,
 CITY     ,
 PHONE_NO )

* To load data from multiple files, specify multiple INFILE clauses, i.e.

load data
infile 'archvefile1.txt'
infile 'archvefile2.txt'
append into table archiver_data
fields terminated by X'09'
(COMP_ID  ,
 COMP_NAME,
 COMP_ADD ,
 CITY     ,
 PHONE_NO )



If we want to skip some records, SQL*Loader provides the OPTIONS clause with SKIP. Suppose a file has a header row (column names) that need not be inserted into the table; we can skip it by using SKIP=1.

In the next file we load data using fixed-length fields.

Example of data file:-

123456789...                                          <- character count
Id      name          address  city          Phone No <- header row

v7347   ebusi        sec-2   gurgaon      758475848
h476     hnl.mtphkl    sec-9   Cherapungi    647357364
g637    lokp         sec-8    pune         534537363


To insert the value of the first column we specify its position with POSITION(1:6), where the range covers the maximum length of the field.

Using fixed positions is faster than using delimiters, because with delimiters the loader must scan the data to find them. If the gap between fields is large, replacing the gap with commas improves performance.


Control file example :-


OPTIONS (SKIP=2)
load data
infile 'archvefile1.txt'
append into table archiver_data
 (COMP_ID POSITION(1:6)  ,
 COMP_NAME POSITION(7:14),
 COMP_ADD POSITION(15:24),
 CITY    POSITION(25:40) ,
 PHONE_NO POSITION(41:51))
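The POSITION(start:end) clauses above are just fixed character offsets. A hypothetical Python equivalent of the same extraction (POSITION is 1-based and inclusive, so it maps to the 0-based, end-exclusive slice [start-1:end]):

```python
# Fixed-width extraction mirroring POSITION(1:6), (7:14), (15:24), (25:40), (41:51).
FIELDS = {
    "COMP_ID":   (1, 6),
    "COMP_NAME": (7, 14),
    "COMP_ADD":  (15, 24),
    "CITY":      (25, 40),
    "PHONE_NO":  (41, 51),
}

def parse_fixed(record):
    # Convert 1-based inclusive positions to 0-based Python slices.
    return {name: record[start - 1:end].strip()
            for name, (start, end) in FIELDS.items()}

# Build a sample 51-character record with the same field widths.
rec = "".join(s.ljust(w) for s, w in
              [("v7347", 6), ("ebusi", 8), ("sec-2", 10),
               ("gurgaon", 16), ("758475848", 11)])

row = parse_fixed(rec)
print(row["COMP_ID"], row["CITY"])  # v7347 gurgaon
```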


If a record in the file to be loaded contains no data for a specific column, the complete row is treated as rejected. For example, when loading a CSV file, a record like nx890,,, (with trailing fields missing) is treated as rejected. To avoid this situation SQL*Loader has the TRAILING NULLCOLS option, which treats the missing fields of such records as null values and inserts NULL into the table.
Remember, if all the fields of a row are null, SQL*Loader never loads that row; it reports the count in the bad file with a note that all fields of the row are null, i.e.:
Record 1: Rejected - Error on table ARCHIVER_DATA, column COMP_ID.
Column not found before end of logical record (use TRAILING NULLCOLS)



Example of data file:-

v7347\n0code,ebusi,,sec-2,758475848,,,,
nx890,,,
h476\n0No,hnl,Cherapungi,,,647357364

ag637\n0code,lokp,sec-8,,,534537363,,,


In the above data file we have null columns and null rows, and the first record is to be broken into two lines. To handle the null columns we specify TRAILING NULLCOLS, and to split the value onto two lines we replace '\\n' in the data with a real newline. Notice it is the two literal characters '\\n', not an actual "\n"; the REPLACE function converts them into a new line (chr(10)).

Example of control file:-

load data
infile 'archvefile1.txt'
truncate  into table  archiver_data
fields terminated by ','
TRAILING NULLCOLS
(COMP_ID  "replace(:comp_id,'\\n',chr(10))" ,
 COMP_NAME ,
 COMP_ADD ,
 CITY     ,
 PHONE_NO )



COMP_ID
--------------------
v7347
0code
h476
0No
ag637
0code
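What the replace(:comp_id,'\\n',chr(10)) expression does can be mimicked in Python: the incoming field holds the two literal characters backslash and n, and the load substitutes a real newline (chr(10)) for them, producing the two-line values shown above:

```python
# The raw field contains a literal backslash + 'n', not a newline.
raw = r"v7347\n0code"                  # 12 characters: 'v7347', '\', 'n', '0code'
loaded = raw.replace("\\n", chr(10))   # same substitution as in the control file

print(loaded.split(chr(10)))  # ['v7347', '0code'] -- stored as two lines
```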


The WHEN clause is used to load records conditionally (only records that meet the given condition). In the example below we insert into the archiver_data table only records whose character at position 15 (the first character of the address field) is 'g'.


load data
infile 'archvefile1.txt'
truncate  into table  archiver_data
WHEN (15) = 'g'
TRAILING NULLCOLS
(COMP_ID POSITION(1:6)  ,
 COMP_NAME POSITION(7:14),
 COMP_ADD POSITION(15:24),
 CITY    POSITION(25:40) ,
 PHONE_NO POSITION(41:51))


We can concatenate one column with another, or concatenate a value with any column, e.g. concatenating an STD code with the phone number. We can also apply Oracle functions while inserting the data, load constant values using the CONSTANT clause, and populate a column from a sequence, as in the example below.

load data
infile 'archvefilenew.txt'
truncate  into table  archiver_data
 TRAILING NULLCOLS
(COMP_ID  CONSTANT 'njkl' ,
 COMP_NAME POSITION(7:14) "substr(:COMP_NAME,1,2)",
 COMP_ADD POSITION(15:24),
 CITY    POSITION(25:40) ,
 PHONE_NO POSITION(42:45) "0124||:PHONE_NO",
COMP_NO  "com_seq.nextval")