Friday, November 16, 2007

Some issues with ADRCI in Oracle 11g

In 11g, Oracle introduced a new feature called the Automatic Diagnostic Repository (ADR). All diagnostic information, including detected incidents, is stored in this repository, which is actually an OS directory tree. Along with this feature, Oracle also introduced a new command-line utility called adrci (Automatic Diagnostic Repository Command Interpreter) to make retrieving information from the ADR easy. There are already articles out there covering ADR in depth; here is a good one: ADR. So I am not going to go deep into ADR itself. Instead, I am going to talk about some issues I encountered using 'adrci'.

Some people have reported an editor problem when running 'show alert' in adrci. It is easily solved with 'set editor'. The error messages may look different on UNIX and Windows, but 'set editor' works on both platforms. Here is the failure on my Solaris box, followed by the fix:

adrci> show alert

ADR Home = /app/oracle/diag/rdbms/chooyu/chooyu:
*************************************************************************
Output the results to file: /tmp/alert_12361_1_chooyu_1.ado
screen: Unknown terminal type
I don't know what kind of terminal you are on - all I have is 'screen'.
[Using open mode]
"/tmp/alert_12361_1_chooyu_1.ado" 989 lines, 43823 characters
2007-11-13 12:27:30.840000 +08:00
:q
Additional information: 256
Additional information: 13

adrci> set editor vim
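
If you only want the most recent entries anyway, 'show alert' also takes a -tail option, which prints straight to the terminal and sidesteps the editor entirely (-f follows the log like 'tail -f'):

adrci> show alert -tail 20
adrci> show alert -tail -f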

Another issue is related to OS permissions. Most people will never hit it. However, if you create two OS groups, one for the Oracle software owner and one for DBAs, you will probably run into it while signed in under the DBA group, because Oracle puts all the directories under the owner's control during installation. In my case, I created two groups, 'oinstall' and 'dba', and two users, 'oracle' and 'swong'. I use 'oracle' as the software owner as always, so all the directories end up owned by oracle:oinstall. If I sign in as 'swong', which I put in the 'dba' group, I cannot see any home except the listener's. How to solve this? On Solaris, I had to use 'setfacl' to grant the 'dba' group access to DIAGNOSTIC_DEST; after that, 'show home' lists all the homes (you may need to 'set base' first), as the session below shows.
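
Something along these lines should do it (a sketch with my paths; Solaris setfacl has no recursive option, so walk the tree with find and keep the ACL mask in sync with the new entries; note that newly created files won't inherit these, so default ACL entries on the directories may be needed too):

$ find /app/oracle/diag -type d -exec setfacl -m group:dba:r-x,mask:r-x {} \;
$ find /app/oracle/diag -type f -exec setfacl -m group:dba:r--,mask:r-- {} \;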

$ adrci

ADRCI: Release 11.1.0.6.0 - Beta on Fri Nov 16 14:18:17 2007

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

No ADR base is set
adrci> set base /app/oracle
adrci> show home
ADR Homes:
diag/rdbms/orcl/orcl
diag/clients/user_unknown/host_411310321_11
diag/tnslsnr/myhost/listener
diag/diagtool/user_swong/host_2847951770_11


Oracle11gR1 installation on Solaris 10

Generally speaking, the Oracle 11gR1 installation on Solaris 10 is pretty smooth; I didn't encounter any problems during the installation itself. Nevertheless, burning a DVD for Solaris 64-bit really frustrated me.

I downloaded Oracle 11gR1 for SPARC Solaris 64-bit onto my Windows laptop. The first time, I burned a DVD-R with the default option (ISO9660 + Joliet). The next day, when I used it to install Oracle 11g on my Sun Ultra 25 (SPARC architecture), I realized that was wrong: the long file names had been shortened. So I burned another DVD-RW with a carefully chosen option (ISO9660:1999), since that guarantees file names longer than 250 characters. I thought that one would work, but it was even worse: the U25 didn't even recognize the DVD. What went wrong? I have no idea; I may figure it out later. In the end, I FTPed the downloaded Oracle 11gR1 ZIP file to the U25 and installed Oracle 11g from the local hard disk.

The installation procedure is pretty much like previous versions'. An OS user and an OS group need to be created for the Oracle software owner. Depending on your security preferences, you can create a single group, 'dba', acting as both the software-owner group and the DBA group, or you can create two groups, one as the software-owner group and the other as the DBA group. I created two: 'oinstall' as the software-owner group and 'dba' as the DBA group, plus a user 'oracle'. Put some basic environment variables into the 'oracle' profile; they are all similar to previous versions. Here are mine:

ORACLE_BASE=/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.1
ORACLE_SID=orcl
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib:/usr/sfw/lib:/usr/local/lib:$ORACLE_HOME/lib
PATH=$PATH:/usr/ucb:/usr/ccs/bin:/usr/sbin:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH   # sh/ksh profiles need the export

You may want to pre-create some directories and change their ownership, for example:
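
(Assuming the same /app/oracle layout as the profile above:)

# mkdir -p /app/oracle/product/11.1
# chown -R oracle:oinstall /app/oracle
# chmod -R 775 /app/oracle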

The next step is to check /etc/system to see whether all the necessary IPC parameters are configured; without them you won't be able to create a database, since shared memory is required. If they are already configured, you are ready to install; otherwise you need to reboot the machine for the /etc/system changes to take effect. Here are mine:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=5000
set semsys:seminfo_semmap=5000
set semsys:seminfo_semmsl=50
set semsys:seminfo_semmnu=100
set semsys:seminfo_semume=25
set semsys:seminfo_semopm=50

set shmsys:shminfo_shmmax=4294967296
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=64
set shmsys:shminfo_shmmin=1

set msgsys:msginfo_msgmap=258
set msgsys:msginfo_msgmnb=65535
set msgsys:msginfo_msgseg=8192
set msgsys:msginfo_msgtql=512
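
If you had to change /etc/system, then after the reboot you can double-check that the kernel picked the values up; sysdef dumps the System V IPC tunables among other things (output format varies by release):

# sysdef | grep -i sem
# sysdef | grep -i shm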

We are ready to go! Sign in as 'oracle' and execute 'runInstaller'. All the installer's pre-checks passed. I chose a custom installation and kept most of the default options. As I mentioned earlier, the installation was pretty smooth and no problems were encountered.

Finally, here is my machine's configuration.

Model: Sun Ultra 25 (SPARC)
CPU: UltraSPARC IIIi
RAM: 2GB
Hard disk: 2x80GB, 2x250GB
OS: SunOS 5.10 Generic_125100-07


Friday, May 04, 2007

Install BeleniX (OpenSolaris Live CD) into a USB thumb drive

The BeleniX Live CD works just fine. But when you try to install it onto a USB thumb drive following the installation instructions, you may get frustrated: the installer doesn't work! At least it didn't work for me. I had to debug the installer script step by step to see where the problems came from. Finally I worked it out and successfully installed BeleniX onto a USB drive. There are two places in the script you need to pay attention to.
  • The command at line 211
# rmformat -s /tmp/slices /dev/rdsk/c0t0d0p0
This command simply didn't work in my case. Maybe the slice information the script writes to /tmp/slices is incorrect, maybe not; since no error was returned, I have no idea why it failed. After running it, two slices are supposed to be created and labeled. However, if you then run the following command

# newfs /dev/rdsk/c0t0d0s0 < /dev/null
you will get an error like "no such device". In that case, run the command below instead and create and label the slices by hand, based on /tmp/slices:

# format -e
It's interactive; just follow the prompts to create and label the slices.

  • The command at line 281
# installgrub -mf stage1 stage2 $devs > /dev/null
You will get the error "can not read stage1 file stage1" if you follow the provided installer. The reason is that there is no file called stage1 in the /boot/grub directory. So where do you get the required stage1? Don't run

# umount /mnt/belenix
at line 274. You need to run the commands below first:

# cd /mnt/belenix/boot/grub
# installgrub -mf stage1 stage2 /dev/rdsk/c0t0d0s0 > /dev/null
# umount /mnt/belenix
# lofiadm -d $BELENIX_ISO_PATH
All the files required to run installgrub are in /mnt/belenix/boot/grub.

A successful installation! You can enjoy the LiveUSB boot from now on!

Thursday, April 26, 2007

Oracle bug 3744836 while using table function in 9205

Recently I used a pipelined table function in a package to pivot multiple rows into columns. It was my first time writing a pipelined table function; it's very cool.

However, after refining the package and recompiling it several times, I started hitting Oracle bug 3744836. The symptom:
the first time the package is compiled in a session, an ORA-04043 error appears with a message like "object SYS_PLSQL_84730_110_1 does not exist". Recompiling again and again in the same session then produces ORA-00600 internal errors. You can't drop the package either.

Searching Oracle Metalink, I found Note:3744836.8, which matches the symptom; it describes Oracle bug 3744836. According to the note, the bug is fixed in several patchsets, including 9207. Since I didn't want to apply the 9207 patchset at the time, I downloaded the available interim patch for the bug and applied it to the problematic 9205 server. Afterwards the problem seemed solved, as I was able to both recompile and drop the package. But that wasn't actually true! Some time later, while trying to recompile the package, I got the ORA-04043 and ORA-00600 errors again. It was very frustrating to see these errors after the patch had been applied; it's supposed to work, but it doesn't. Still not wanting to apply the 9207 patchset, I finally figured out a workaround.

The root cause of the ORA-04043 lies in the PL/SQL TYPEs (record, table) defined in the package and used by the pipelined table function. Every time the package is compiled, Oracle generates a new version of these TYPEs, the so-called versioned objects. They can be seen in user_objects:

select * from user_objects where object_type = 'TYPE';

They are named like SYS_PLSQL_%s_%c_%v, where the last substitution variable is the version. Oracle manages the relationship between packages and their versioned objects, and because of the bug that relationship somehow breaks. Even though the versioned objects are visible in user_objects, Oracle claims they don't exist when you drop or compile the package. For example, before applying the interim patch, SYS_PLSQL_%s_%c_1 and SYS_PLSQL_%s_%c_2 could co-exist, and Oracle would complain that SYS_PLSQL_%s_%c_1 doesn't exist when dropping or compiling the package. After applying the interim patch, only SYS_PLSQL_%s_%c_2 is left, which is an improvement, because Oracle now knows how to drop the version-1 objects. Unfortunately, it still associates the package with SYS_PLSQL_%s_%c_1 rather than SYS_PLSQL_%s_%c_2. Since the version-1 objects have been dropped and are gone, Oracle stubbornly insists SYS_PLSQL_%s_%c_1 doesn't exist, and you can't compile or drop the package.

Here is a simple way to bypass the bug until the 9207 patchset is applied: convert the PL/SQL TYPEs used by the table function in the package into Oracle stored TYPEs. This way, you take control of the packages, the types, and their relationships; Oracle won't version stored TYPEs automatically the way it does packaged TYPEs.
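
A minimal sketch of the workaround, with made-up names:

create type t_rate_row as object (rate_id number, rate number);
/
create type t_rate_tab as table of t_rate_row;
/
create or replace package rate_pkg as
  function get_rates return t_rate_tab pipelined;
end rate_pkg;
/
create or replace package body rate_pkg as
  function get_rates return t_rate_tab pipelined is
  begin
    -- dummy row; the real function would fetch and pipe actual rates
    pipe row (t_rate_row(1, 0.05));
    return;
  end get_rates;
end rate_pkg;
/

Because t_rate_row and t_rate_tab are stored SQL types rather than types declared in the package spec, there is nothing for Oracle to version when the package is recompiled.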

Wednesday, April 25, 2007

Working with SVN on Windows

Before using version control, I encourage you to read the documentation first. You can get it here: SubVersion
To start your journey with version control on Windows, you may need to install some client software first; on Linux/Unix things are much simpler. Depending on your experience and preference, the requirements vary. Here are several options.

1. Check in/out locally on the SVN server and map the directory to a local drive on Windows, probably using SAMBA
This is the one I am currently using. :-) I like the VIM editor on Windows more than the one on Solaris; it's more colorful.

2. Install the cygwin package, including both SSH and SVN, on your Windows box
You may want to generate RSA/DSA keys and install the public key into your account on the SVN server; otherwise you will get tired of typing your password, since you are prompted for it many times.
Here is how to check out remotely over the SSH protocol in a cygwin shell:

$ svn co svn+ssh://scott@myhost/svn_repository/myproj/trunk
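
Setting up the keys from a cygwin shell goes roughly like this (hypothetical user/host; the second command appends the public key to authorized_keys on the SVN server):

$ ssh-keygen -t rsa
$ ssh scott@myhost 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub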

3. Use an SVN GUI client called TortoiseSVN
The well-known free SSH client PuTTY is required. You can obtain the package here: PUTTY . Make sure you get the whole package, not just PUTTY.exe, especially plink.exe, which we are going to use with the SVN client.
You can get the GUI client here: TortoiseSVN

First, install the SVN client and reboot Windows; you'll see the options integrated into the context menu in Windows Explorer.
In TortoiseSVN -> Settings -> Network, configure the SSH client to use the plink.exe you installed in the previous step.
Using plink.exe without public-key authentication doesn't seem to work, so generate RSA/DSA keys with puttygen.exe and install the public part into your account on the SVN server.
Finally, in the local directory into which you want to check out, choose SVN Checkout ... from the pop-up context menu and type in the URL; the working copy lands in that local directory. Everything is done.
URL:
svn+ssh://scott@myhost/svn_repository/myproj/trunk
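
For reference, the SSH client field in TortoiseSVN -> Settings -> Network ends up looking something like this (hypothetical paths; -i points plink.exe at the private key you generated with puttygen.exe):

C:\Program Files\PuTTY\plink.exe -i C:\keys\svn_key.ppk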

Thursday, March 29, 2007

Replace tape and tape drive using Veritas NetBackup

To replace a tape drive:
  • Physically replace it
  • Synchronize the serial number of the new drive with the NetBackup database
# tpautoconf -report_disc
# tpautoconf -replace_drive drive_name -path drive_path

To replace a tape:
  • Expire it manually first
# bpexpdate -m A00001 -d 0 -h host
  • Delete the volume using Veritas Netbackup GUI
  • Remove its label and replace the tape with a new one carrying the old label
  • Import the new tape back into the library

It's difficult to do things in a proactive way

As things get older, it becomes more and more difficult to identify the root cause whenever a problem appears. A problem can arise at any point, or at multiple points, along the whole path; you have to isolate the issues from one another and fix them one by one. Sometimes it's just not possible to pin down the root cause: you don't know which issue is caused by which, since they are all correlated. It's pretty much like us human beings. It may take several days or even several weeks to fix everything and bring the ailing system back to normal, and of course availability drops a lot during that period. So being proactive is always much better than being reactive. It sounds like something everybody knows.

However, in the real world it's usually hard to replace things proactively, before they fail. People will say, "Why do we need to replace it when there is no problem at all? Look, it's running well!" Or sometimes: "I understand your concern. But you know, we have a tight budget right now; it will be hard to get the request approved. After all, things are still running well. Let's play it by ear!" But when a serious problem happens and the system becomes unavailable, these same people get nervous: "You must fix it ASAP! You must bring the system back by today! I don't care how you do it, you gotta get this done!" Sound familiar? This happens in our lives almost every day.

Wednesday, March 07, 2007

The storage representation for number zero

Yesterday I found something interesting in an Oracle 9205 database. A column called is_mandatory is defined as NUMBER(1,0); by business rule, only 0 or 1 can be stored in it. The strange part concerns 0: there are two different storage representations for the number zero!
09:18:50 > select is_mandatory, dump(is_mandatory) storage from t where is_mandatory != 1;

IS_MANDATORY STORAGE
------------ ------------------------------
0 Typ=2 Len=1: 128
0 Typ=2 Len=1: 128
0 Typ=2 Len=2: 193,1
0 Typ=2 Len=2: 193,1
0 Typ=2 Len=2: 193,1
0 Typ=2 Len=2: 193,1
0 Typ=2 Len=2: 193,1
0 Typ=2 Len=2: 193,1
0 Typ=2 Len=2: 193,1

9 rows selected.

Elapsed: 00:00:00.01
09:19:00 > select is_mandatory, dump(is_mandatory) storage from t where is_mandatory = 0;

IS_MANDATORY STORAGE
------------ ------------------------------
0 Typ=2 Len=1: 128
0 Typ=2 Len=1: 128

2 rows selected.

Elapsed: 00:00:00.01
Normally, 0 is always stored in Oracle like this:

11:14:16 > select dump(0) from dual;

DUMP(0)
------------------------------------------------
Typ=2 Len=1: 128

Elapsed: 00:00:00.08


However, there is another storage format for 0 in the table:
Typ=2 Len=2: 193,1
Going by Oracle's storage internals, this value should be zero as well. Yet I couldn't fetch those rows using is_mandatory = 0, since the literal 0 is always interpreted as 'Typ=2 Len=1: 128'. I couldn't even reproduce the scenario with plain SQL. These values were originally inserted/updated by a database stored procedure taking a set of parameter values from a front-end Java program via the JDBC thin driver. I wonder under what conditions 0 gets stored in this strange way. It behaves as if 'Typ=2 Len=2: 193,1' were something between 0 and 1.
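
If I read Oracle's internal NUMBER layout correctly, that guess fits: for a positive value the first byte is the base-100 exponent plus 193, each following byte is a base-100 mantissa digit plus 1, and canonical zero is the single byte 128. So '193,1' decodes to a mantissa digit of zero, something the canonical encoding never produces. Compare:

SQL> select dump(1) from dual;

DUMP(1)
------------------------------------------------
Typ=2 Len=2: 193,2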

11:42:02 > select is_mandatory from t where is_mandatory > 0 and is_mandatory < 1;
0
0
0
0
0
0
0
7 rows selected.
Elapsed: 00:00:00.00

I raised a question about this on AskTom, so this post will be updated once I get an answer from Tom.

Update: the problem was caused by an incompatible JDBC/RDBMS version pair. We run an Oracle 9205 database but were using the JDBC 8174 driver in the middle tier to access it. After I pointed the program at the 9205 JDBC driver and re-ran it, zero was stored correctly as 'Len=1: 128'. As one response put it:

'Len=2: 193,1' is definitely an invalid NUMBER; in 10.2.0.3 :
SQL> select dump ( utl_raw.cast_to_number ('C101') ) from dual;

DUMP(UTL_RAW.CAST_TO_NUMBER('C101'))
--------------------------------------------------------------------------------
Typ=2 Len=2: 193,1

SQL> select 1 / utl_raw.cast_to_number ('C101') from dual;
select 1 / utl_raw.cast_to_number ('C101') from dual
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel

Just an illegal bit representation that doesn't map to any NUMBER, and so produces unpredictable results (including process termination) if used.


I also did some tests myself in Oracle SQL Developer:

select utl_raw.cast_to_number('c101') from dual

UTL_RAW.CAST_TO_NUMBER('C101')
------------------------------
0

1 rows selected

select utl_raw.cast_to_number('C101')/1 from dual

UTL_RAW.CAST_TO_NUMBER('C101')/1
--------------------------------
146150163.7229892817193583822615272918645831532875

1 rows selected


If the second SQL statement is executed in SQL*Plus, it seems to never finish. A different result again! Completely unpredictable, indeed!

Friday, March 02, 2007

Finally the Oracle SQL Developer 1.1 patch can be applied

Oracle SQL Developer 1.1's 'Check for updates ...' had been broken for over a month. It seems the option was only made available for a short while, so even though a 1.1 patch 1 existed, I was not able to apply it via 'Check for updates ...'. The only other way to apply the patch is to reinstall the whole of SQL Developer, which I don't like, so I had been waiting for the option to come back.

This morning I tried 'Check for updates ...' again and found the option available, so I finally upgraded my SQL Developer to the latest version. Meanwhile, Oracle has released a new patch for SQL Developer 1.1, called patch 2. However, there is still a problem with patch application via 'Check for updates ...': SQL Developer keeps telling you there is a new patch to apply even after you have already applied it. Is that why Oracle disabled the option for over a month? It leaves me confused: I don't know what happens if I reapply patch 2. Probably nothing, it just repeats the application over and over. Oracle should sort this out as soon as possible.

Thursday, March 01, 2007

XML audit trail and RMAN

Oracle 10g introduces a new auditing feature: the audit trail can now be written to XML files at the operating-system level (audit_trail is a static parameter, so it must be set in the spfile and takes effect after a restart):

alter system set audit_trail=XML scope=spfile;

This sounds very exciting since we could do a lot of things with XML. At least, the audit trail is well structured, isn't it?

But there seems to be a problem using RMAN with this auditing feature enabled:

alter system set audit_sys_operations=true scope=spfile;

The statement above enables auditing only for the privileged SYS account. Now, when I try to connect with RMAN, I get the errors below:

swong@mars:flyhorse > rman target / nocatalog

Recovery Manager: Release 10.2.0.1.0 - Production on Thu Mar 1 10:28:10 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-06003: ORACLE error from target database:
ORA-09817: Write to audit file failed.


I encountered this error on Oracle 10.2.0.1. It is caused by unpublished bug 4600757; refer to Oracle Doc ID Note:391787.1 for more about this problem.

Wednesday, February 28, 2007

Make a unique constraint deferrable

Is it possible to create a deferrable unique constraint enforced by a unique index? No, it's impossible! You can create a unique constraint in the 'deferrable' state, and you can create a unique constraint enforced by a unique index, but you can't have both.

The statement below creates a unique constraint enforced, by default, by a unique index.

21:36:13 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:36:36 2 (charge_element_id, ce_det_version_no, display_order) using index;

Table altered.


This one creates a unique constraint in the 'deferrable' state, enforced by default by a non-unique index.

21:39:29 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:39:40 2 (charge_element_id, ce_det_version_no, display_order) deferrable using index;

Table altered.


You need to drop the constraint and the index separately.


You will get an error when trying to add a 'deferrable' unique constraint enforced by an existing unique index.

21:40:14 SQL> create unique index p_pced_uk on p_product_ce_determinants
21:40:44 2 (charge_element_id, ce_det_version_no, display_order);

Index created.

Elapsed: 00:00:00.03
21:40:49 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:40:57 2 (charge_element_id, ce_det_version_no, display_order) deferrable using index p_pced_uk;
alter table p_product_ce_determinants add constraint p_pced_uk unique
*
ERROR at line 1:
ORA-14196: Specified index cannot be used to enforce the constraint.

Elapsed: 00:00:00.06


However, if you add the constraint without 'deferrable', enforced by the same unique index, the statement succeeds.

21:41:08 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:41:23 2 (charge_element_id, ce_det_version_no, display_order) using index p_pced_uk;

Table altered.

Elapsed: 00:00:00.01


You still need to drop the constraint and the index separately.


By contrast, the statement below succeeds: a 'deferrable' unique constraint is created, enforced by an existing non-unique index.

21:42:17 SQL> create index p_pced_uk on p_product_ce_determinants
21:42:28 2 (charge_element_id, ce_det_version_no, display_order);

Index created.

Elapsed: 00:00:00.03
21:42:33 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:42:37 2 (charge_element_id, ce_det_version_no, display_order) deferrable using index p_pced_uk;

Table altered.

Elapsed: 00:00:00.03

I tested this in both 9i and 10g. They behave the same, but 10g gives clearer messages than 9i.
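
A quick toy script shows why the deferrable variant needs the non-unique index: duplicates are tolerated inside the transaction, and the check only fires at commit.

create table t (id number, constraint t_uk unique (id) deferrable initially deferred);
insert into t values (1);
insert into t values (1);             -- no error yet: the check is deferred
update t set id = 2 where rownum = 1; -- remove the duplicate before commit
commit;                               -- the constraint is verified here and passes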

Put tapes into a tape library

There are two ways to import an ordinary tape into a tape library using Veritas NetBackup: use 'New Volumes ...' via the 'Media' interface, or use 'Inventory Robot'. With 'Inventory Robot' you don't have to set anything in the advanced options; just remember to check 'clear media access port', and the library takes care of the rest. With 'New Volumes ...' you need to choose the proper tape type, the slot number, the Media ID and so forth. It's important NOT to click the 'Apply' button to take the action; click 'Close' instead. Otherwise you will get the error 'The Media ID is not unique in the database'.

To import a cleaning tape, it seems 'Inventory Robot' must be used, and the advanced options need configuring: add a new rule in the 'barcode rules' tab. In the pop-up window, choose the proper cleaning tape type, e.g. DLT_CLN, and give the barcode tag, e.g. CLN (it seems we don't have to append the barcode number; 'CLN' is enough). Finally, a count of remaining cleanings must be given, usually 25; each time the cleaning tape is used, the count automatically decrements by 1 until it reaches 0. If you don't follow these instructions, then even if the cleaning tape loads into the library successfully, the library won't recognize it as a cleaning tape, and you will get errors whenever you want to clean a tape drive.

Monday, February 26, 2007

How to get HBA WWNs on Solaris

Recently I wanted to get the WWNs of the HBAs in our servers to match them against a SAN report generated by a tool, in order to know which node in the report represents which server. Basically, there are two methods.

One of them is to run the command below:


# prtpicl -v -c scsi
SUNW,fas (scsi, 1140000032a)
...
:manufacturer JNIC
:fcode_revision Version 3.9
:copyright Copyright (c) 2000-2002 by JNIC
:reg
00 00 00 00 00 40 00 00 00 00 10 00 00 00 00 00 00 40 40 00 00 00 04 00 00 00 00 00 00 40 50 00 00
00 01 00 00 00 00 00 00 00 00 00 00 08 00 00
:emerald_id 0
:sbus_slot 0
:model FCE-1063
:parity-generated
:xilinx_rev 0x2000b
:revision-id 0x4
:device-id 0x1160
:my_wwn_lo 0x69a4966b
:my_wwn_hi 0x100000e0
:lport_cfg 0
:board_id 0x4966b
...

root@boxwood # prtpicl -v -c scsi-fcp
SUNW,qlc (scsi-fcp, 120000003b8)
...
:reg
00 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 10 10 00 00 00 00 00 00 00 00 00
00 00 00 00 00 01 00 02 00 10 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00
:node-wwn 20 00 00 e0 8b 1a 90 85
:port-wwn 21 00 00 e0 8b 1a 90 85
:assigned-addresses
81 00 10 10 00 00 00 00 00 00 03 00 00 00 00 00 00 00 01 00 82 00 10 14 00 00 00 00 00 10 00 00 00
00 00 00 00 00 20 00 82 00 10 30 00 00 00 00 00 12 00 00 00 00 00 00 00 02 00 00
...
:manufacturer QLGC
...
:reg
00 00 08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 08 10 00 00 00 00 00 00 00 00 00
00 00 00 00 00 01 00 02 00 08 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00
:node-wwn 20 00 00 e0 8b 1a 55 63
:port-wwn 21 00 00 e0 8b 1a 55 63
:assigned-addresses
81 00 08 10 00 00 00 00 00 00 03 00 00 00 00 00 00 00 01 00 82 00 08 14 00 00 00 00 08 00 00 00 00
00 00 00 00 00 20 00 82 00 08 30 00 00 00 00 08 02 00 00 00 00 00 00 00 02 00 00
:hba0-adapter-hard-loop-ID 0
:hba0-enable-adapter-hard-loop-ID 0
:devfs-path /ssm@0,0/pci@19,700000/SUNW,qlc@1
:driver-name qlc
:binding-name pci1077,2300
:bus-addr 1
:instance 1
:_class scsi-fcp
:name SUNW,qlc

The other method is to use 'prtconf'. Here is an example:

# prtconf -vp > /tmp/prtconf.out
# cd /tmp; vim prtconf.out

Node 0xf00fa738
port-wwn: 210000e0.8b111339
node-wwn: 200000e0.8b111339
compatible: 'pci1077,2200.1077.4082.5' + 'pci1077,2200.1077.4082' + 'pci1077,4082' + 'pci1077,2200' + 'pciclass,010000' + 'pciclass,0100'
interrupts: 00000004
vendor-id: '1077'
device-id: '2200'
revision-id: '5'
subsystem-id: 00004085
subsystem-vendor-id: '1077'
manufacturer: 'QLGC'
reg: 00000002.00030000.00000400.00000002.00030800.00000400
device_type: 'scsi-fcp'
name: 'SUNW,qlc'
version: 'ISP2200 Sbus FC-AL Host Adapter Driver: 1.13.07 09/16/02'


Thursday, February 22, 2007

How to catch a user-defined error message

Recently I have been working on a project with a database package holding all the related functions and procedures. I want a function to raise user-defined errors when it is called directly from the host environment; meanwhile, I want a calling stored procedure to be able to ignore some of those user-defined errors. The point is to catch specific user-defined errors, as opposed to Oracle predefined errors.

A RAISE_APPLICATION_ERROR call in the function raises the user-defined error, e.g.:

function test
  return number
is
  e_no_rate_found exception;
begin
  raise e_no_rate_found;
exception
  when e_no_rate_found then
    raise_application_error(-20306, 'No rate found!');
end;

If this function is called directly from SQL*Plus, the user-defined error -20306 is raised. I want to ignore this error in a procedure that calls the function. Catching a user-defined error (the codes must lie in the -20000..-20999 range) works exactly like catching an Oracle predefined error. In the procedure, we can write a block like this:

declare
  e_no_rate_found exception;
  pragma exception_init(e_no_rate_found, -20306);
  l_result number;
begin
  l_result := test;  -- a function must be invoked in an expression, not via 'call test;'
exception
  when e_no_rate_found then
    null;  -- swallow just this user-defined error
end;

The user-defined error -20306 is associated with the exception e_no_rate_found via the EXCEPTION_INIT pragma in the calling procedure.

Solaris utilities that can be used to connect to MS SQL Server

Sometimes we need to access MS SQL Server from Solaris; in many cases we need to get data out of MS SQL Server and import it into Oracle. I wondered whether there is any utility on Solaris that can do what isql (the interactive SQL client that comes with MS SQL Server) does on Windows. Here are two resources:
  • One is called FreeTDS
  • Another tool, called sqsh, uses FreeTDS to connect to MS SQL Server
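
Once FreeTDS is built and freetds.conf points at the SQL Server box, a sqsh session looks roughly like this (hypothetical server name and credentials):

$ sqsh -S mssqlhost -U sa -P secret
1> select @@version
2> go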

Tuesday, January 30, 2007

Interesting things while housekeeping a database

These days I've been housekeeping a database. Some tablespaces were created inefficiently, all with an unnecessarily huge 16MB extent size. As a result, even though most segments in those tablespaces hold only a few rows, each still occupies at least 16MB, and since there are a lot of such segments, the database has grown unnecessarily big. My job is to move these segments out of those tablespaces and shrink the database.
  • Move small tables to the small-extent (128KB) tablespaces, and big tables and table partitions to the big-extent (16MB) tablespaces. Moving the tables must be done first, since row movement invalidates all the associated indexes.
select 'alter table '||owner||'.'||segment_name||' move tablespace '||
       -- size cutoff: one old 16MB extent goes to the small tablespace; adjust to taste
       case when bytes/1048576 <= 16 then 'ts_small' else 'ts_big' end||';'
from dba_segments
where tablespace_name = 'TS_UNNECESSARY_BIG' and segment_type = 'TABLE'
union
select 'alter table '||owner||'.'||segment_name||' move partition '||
       partition_name||' tablespace ts_big;'
from dba_segments
where tablespace_name = 'TS_UNNECESSARY_BIG' and segment_type = 'TABLE PARTITION'
order by 1;

  • Rebuild small indexes in the small-extent (128KB) tablespaces, and big indexes and index partitions in the big-extent (1MB) tablespaces.
select 'alter index '||owner||'.'||segment_name||' rebuild tablespace '||
       -- same size cutoff as for the tables; adjust to taste
       case when bytes/1048576 <= 16 then 'ts_small_idx' else 'ts_big_idx' end||' nologging;'
from dba_segments
where tablespace_name = 'TS_UNNECESSARY_BIG' and segment_type = 'INDEX'
union
select 'alter index '||owner||'.'||segment_name||' rebuild partition '||
       partition_name||' tablespace ts_big_idx nologging;'
from dba_segments
where tablespace_name = 'TS_UNNECESSARY_BIG' and segment_type = 'INDEX PARTITION'
order by 1;

  • Even though the destination tablespaces are locally managed with small uniform extents, the moved segments are created with a large initial extent equal to their original size. So after moving the segments, I need to deallocate the unused blocks and shrink the initial extents. First I have to find the candidates for deallocation: probably the segments whose initial extent is larger than the tablespace's uniform extent and whose initial extent size equals the segment size.
select 'alter table '||owner||'.'||segment_name||' deallocate unused keep 128k;'
from dba_segments
where tablespace_name = 'TS_SMALL'
and segment_type = 'TABLE'
and initial_extent != 128*1024 and initial_extent = bytes
order by initial_extent;

Some interesting things showed up while deallocating the segments:
  1. Some segments can be deallocated down to 128K, the tablespace's uniform extent size; some cannot, even though their high watermark is not above 128K.
  2. Some segments can be deallocated down to the size nearest their high watermark; some cannot.
  3. Some statements report how many unused blocks sit above the segment's high watermark, while others don't.
  4. After changing 'keep 128k' more aggressively to 'keep 64k' or lower, some segments from point 1 that would not shrink to 128K did get deallocated to 128K.
  5. Likewise, some segments from point 2 that would not shrink to their smallest possible size were deallocated further.
I need to invest more time to figure out these interesting things.
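
One thing I plan to use while digging is dbms_space.unused_space, which reports the blocks above a segment's high watermark. A sketch with hypothetical owner/segment names (set serveroutput on first):

declare
  l_total_blocks  number;
  l_total_bytes   number;
  l_unused_blocks number;
  l_unused_bytes  number;
  l_file_id       number;
  l_block_id      number;
  l_last_block    number;
begin
  dbms_space.unused_space(
    segment_owner             => 'SCOTT',
    segment_name              => 'BIG_TABLE',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('blocks above the HWM: ' || l_unused_blocks);
end;
/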

Forward or backward?

Recently I was assigned to a project. The project itself is pretty simple in terms of business logic: there are only 4 screens to maintain on the front-end web pages, and the maintenance is straightforward. Users just want to insert, update, delete, or report on the data. What about the data? It is relatively static as well. So far, everybody would think this project is easy to do and the data easy to manipulate; I would have thought 4 or 5 tables in the database would be enough.

Wrong! The data model somebody designed is extremely complicated. There are tens of tables in the database to support his idea and back the front-end interface. This guy seems to live in a perfect world where everything must be flexible. He wants users to control which table columns are displayed on the web interface. He wants everything stored as IDs rather than real values; to find a real value, I have to walk down three or more tables. Even the simplest SQL will probably need to join five or more tables to get what we want. This guy wants to build an RDBMS instead of an application on top of the Oracle RDBMS!

Yes, he can say the model is very flexible: users can control almost everything, even data types and some constraints. But the price we pay is a hard-to-understand data model (sometimes even the designer himself has no idea what he's doing during project meetings, funny!), inevitable database performance overhead, and unnecessary flexibility. Users may not even want the flexibility we are introducing! He has simply overcomplicated everything.

I've been thinking about this. Database technology has been evolving for decades, from hierarchical databases to network databases to relational databases to object-oriented databases. One of the most important goals has been to simplify the data model as much as possible so that developers can focus on the real business logic instead of a complicated data model. We do ours and you do yours! In this project, everything seems inverted: instead of putting our effort into implementing the business logic, we spend 90% of the project time understanding the data model and massaging the data to keep it consistent. Is all of this really worth it? Are we going forward or backward? Time will tell!