Wednesday, February 28, 2007

Make a unique constraint deferrable

Is it possible to create a deferrable unique constraint enforced by a unique index? No, it isn't! You can create a unique constraint in the 'deferrable' state. You can create a unique constraint enforced by a unique index. But you can't have both.

The statement below creates a unique constraint enforced, by default, by a unique index.

21:36:13 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:36:36 2 (charge_element_id, ce_det_version_no, display_order) using index;

Table altered.


This one creates a unique constraint in the 'deferrable' state enforced, by default, by a non-unique index.

21:39:29 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:39:40 2 (charge_element_id, ce_det_version_no, display_order) deferrable using index;

Table altered.


You need to drop the constraint and the index separately.
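In other words, the cleanup takes two separate statements, something like this (a sketch; it assumes the implicitly created index inherits the constraint name p_pced_uk, which is what Oracle does when no index name is given):

alter table p_product_ce_determinants drop constraint p_pced_uk;
drop index p_pced_uk;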


You will get an error when trying to add a unique constraint in the 'deferrable' state enforced by an existing unique index.

21:40:14 SQL> create unique index p_pced_uk on p_product_ce_determinants
21:40:44 2 (charge_element_id, ce_det_version_no, display_order);

Index created.

Elapsed: 00:00:00.03
21:40:49 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:40:57 2 (charge_element_id, ce_det_version_no, display_order) deferrable using index p_pced_uk;
alter table p_product_ce_determinants add constraint p_pced_uk unique
*
ERROR at line 1:
ORA-14196: Specified index cannot be used to enforce the constraint.

Elapsed: 00:00:00.06


However, if you drop the 'deferrable' clause and add the constraint enforced by the existing unique index, the statement succeeds.

21:41:08 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:41:23 2 (charge_element_id, ce_det_version_no, display_order) using index p_pced_uk;

Table altered.

Elapsed: 00:00:00.01


You still need to drop the constraint and the index separately.


By contrast, the statements below succeed, and a unique constraint in the 'deferrable' state is created, enforced by the existing non-unique index.

21:42:17 SQL> create index p_pced_uk on p_product_ce_determinants
21:42:28 2 (charge_element_id, ce_det_version_no, display_order);

Index created.

Elapsed: 00:00:00.03
21:42:33 SQL> alter table p_product_ce_determinants add constraint p_pced_uk unique
21:42:37 2 (charge_element_id, ce_det_version_no, display_order) deferrable using index p_pced_uk;

Table altered.

Elapsed: 00:00:00.03

I tested this in both 9i and 10g. While they behave the same way, 10g gives clearer messages than 9i.
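
A quick way to double-check what you ended up with is to query the data dictionary (a sketch; DEFERRABLE, DEFERRED and UNIQUENESS are standard columns, while INDEX_NAME may not be present in very old releases):

select constraint_name, deferrable, deferred, index_name
from user_constraints
where constraint_name = 'P_PCED_UK';

select index_name, uniqueness
from user_indexes
where index_name = 'P_PCED_UK';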

Put tapes into a tape library

There are two ways to import an ordinary tape into a tape library using Veritas NetBackup. One is to use 'new volumes ...' via the 'Media' interface; the other is to use 'Inventory Robot'. When using 'Inventory Robot', you don't have to set up anything in the 'advanced option'; just remember to check 'clear media access port' and the library will take care of the rest. To use 'new volumes ...', you need to choose the proper tape type, the slot number, the Media ID and so forth. It's important NOT to click the 'Apply' button to take the action. Instead, click the 'Close' button. Otherwise, you will get the error 'The Media ID is not unique in the database'.

To import a cleaning tape into a tape library, it seems 'Inventory Robot' must be used, and 'Advanced Option' needs to be accessed to configure a few things. You need to add a new rule in the 'barcode rule' tab. In the pop-up window, choose the proper cleaning tape type, e.g. DLT_CLN, and give the barcode tag, e.g. CLN (it seems we don't have to append the barcode number; 'CLN' is enough). Finally, the number of remaining cleanings must be given; usually it's 25. Every time the cleaning tape is used, the number decrements by 1 until it reaches 0. If you don't follow these steps, even if the cleaning tape can be put into the library successfully, the library won't recognize it as a cleaning tape, and you will get errors whenever you want to clean a tape drive.

Monday, February 26, 2007

How to get HBA WWNs on Solaris

Recently I wanted to get the WWNs of the HBAs in our servers and match them against a SAN report generated by a tool, so that I could tell which node in the report represents which server. Basically, there are two methods.

One of them is to run the command below:


# prtpicl -v -c scsi
SUNW,fas (scsi, 1140000032a)
...
:manufacturer JNIC
:fcode_revision Version 3.9
:copyright Copyright (c) 2000-2002 by JNIC
:reg
00 00 00 00 00 40 00 00 00 00 10 00 00 00 00 00 00 40 40 00 00 00 04 00 00 00 00 00 00 40 50 00 00
00 01 00 00 00 00 00 00 00 00 00 00 08 00 00
:emerald_id 0
:sbus_slot 0
:model FCE-1063
:parity-generated
:xilinx_rev 0x2000b
:revision-id 0x4
:device-id 0x1160
:my_wwn_lo 0x69a4966b
:my_wwn_hi 0x100000e0
:lport_cfg 0
:board_id 0x4966b
...
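
For this JNI card the WWN is split across two properties: concatenating :my_wwn_hi and :my_wwn_lo gives 10:00:00:e0:69:a4:96:6b.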

root@boxwood # prtpicl -v -c scsi-fcp
SUNW,qlc (scsi-fcp, 120000003b8)
...
:reg
00 00 10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 10 10 00 00 00 00 00 00 00 00 00
00 00 00 00 00 01 00 02 00 10 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00
:node-wwn 20 00 00 e0 8b 1a 90 85
:port-wwn 21 00 00 e0 8b 1a 90 85
:assigned-addresses
81 00 10 10 00 00 00 00 00 00 03 00 00 00 00 00 00 00 01 00 82 00 10 14 00 00 00 00 00 10 00 00 00
00 00 00 00 00 20 00 82 00 10 30 00 00 00 00 00 12 00 00 00 00 00 00 00 02 00 00
...
:manufacturer QLGC
...
:reg
00 00 08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 08 10 00 00 00 00 00 00 00 00 00
00 00 00 00 00 01 00 02 00 08 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00
:node-wwn 20 00 00 e0 8b 1a 55 63
:port-wwn 21 00 00 e0 8b 1a 55 63
:assigned-addresses
81 00 08 10 00 00 00 00 00 00 03 00 00 00 00 00 00 00 01 00 82 00 08 14 00 00 00 00 08 00 00 00 00
00 00 00 00 00 20 00 82 00 08 30 00 00 00 00 08 02 00 00 00 00 00 00 00 02 00 00
:hba0-adapter-hard-loop-ID 0
:hba0-enable-adapter-hard-loop-ID 0
:devfs-path /ssm@0,0/pci@19,700000/SUNW,qlc@1
:driver-name qlc
:binding-name pci1077,2300
:bus-addr 1
:instance 1
:_class scsi-fcp
:name SUNW,qlc

The other method is to use 'prtconf'. Here is an example:

# prtconf -vp > /tmp/prtconf.out
# cd /tmp; vim prtconf.out

Node 0xf00fa738
port-wwn: 210000e0.8b111339
node-wwn: 200000e0.8b111339
compatible: 'pci1077,2200.1077.4082.5' + 'pci1077,2200.1077.4082' + 'pci1077,4082' + 'pci1077,2200' + 'pciclass,010000' + 'pciclass,0100'
interrupts: 00000004
vendor-id: '1077'
device-id: '2200'
revision-id: '5'
subsystem-id: 00004085
subsystem-vendor-id: '1077'
manufacturer: 'QLGC'
reg: 00000002.00030000.00000400.00000002.00030800.00000400
device_type: 'scsi-fcp'
name: 'SUNW,qlc'
version: 'ISP2200 Sbus FC-AL Host Adapter Driver: 1.13.07 09/16/02'


Thursday, February 22, 2007

How to catch a user-defined error message

Recently I have been working on a project in which there is a database package holding all the related functions and procedures. I want a function to raise user-defined errors when it is called directly from the host environment; meanwhile, when the function is called from a stored procedure, I want that procedure to ignore some of those user-defined errors. The point here is to catch certain user-defined errors but not the Oracle predefined errors.

There is a RAISE_APPLICATION_ERROR call in the function to raise the user-defined error, e.g.

function test
  return number
is
  e_no_rate_found exception;
begin
  raise e_no_rate_found;
exception
  when e_no_rate_found then
    raise_application_error(-20306, 'No rate found!');
end;

If this function is called directly from SQL*Plus, the user-defined error -20306 is raised. I want to ignore this error in a procedure which calls the function. Catching a user-defined error actually works the same way as catching an Oracle predefined error. In the calling procedure, we could write the block below:

declare
  e_no_rate_found exception;
  pragma exception_init(e_no_rate_found, -20306);
  l_result number;
begin
  l_result := test;
exception
  when e_no_rate_found then
    null;  -- silently ignore 'No rate found!'
end;

The user-defined error -20306 is associated with the exception e_no_rate_found by the EXCEPTION_INIT pragma in the calling procedure. (RAISE_APPLICATION_ERROR accepts error numbers in the range -20000 to -20999, and any of them can be mapped to a named exception this way; errors not mapped, say -20307, still propagate to the caller.)

Solaris utilities which can be used to connect to MS SQL Server

Sometimes we need to access MS SQL Server from the Solaris platform. In many cases, we need to get data out of MS SQL Server and import it into Oracle. I wondered if there is any utility on Solaris which can do things similar to what isql (the interactive SQL client that comes with MS SQL Server) can do on the Windows platform. Here are two resources:
  • One is called FreeTDS
  • The other is sqsh, a tool that uses FreeTDS to connect to MS SQL Server

Tuesday, January 30, 2007

Interesting things found while housekeeping a database

These days I've been housekeeping a database. Some tablespaces were created inefficiently, with an unnecessarily huge uniform extent size of 16MB. As a result, even though most of the segments in those tablespaces have only a few rows, each still occupies at least 16MB. Since there are a lot of such segments, the database has grown unnecessarily big. My job is to move these segments out of those tablespaces and shrink the database size.
  • Move small tables to the small-extent (128KB) tablespaces, and big tables and table partitions to the big-extent (16MB) tablespaces. Moving the tables must be done first, as the row movement will invalidate all the associated indexes. (The queries here only generate the ALTER statements; a sketch of spooling and running them follows this list.)
select 'alter table '||owner||'.'||segment_name||' move tablespace '||
case when bytes/1048576 <= 16 then 'ts_small;' else 'ts_big;' end
from dba_segments
where tablespace_name = 'ts_unnecessary_big' and segment_type = 'TABLE'
union
select 'alter table '||owner||'.'||segment_name||' move partition '||
partition_name||' tablespace ts_big;'
from dba_segments
where tablespace_name = 'ts_unnecessary_big' and segment_type = 'TABLE PARTITION'
order by 1;

  • Rebuild small indexes into the small-extent (128KB) tablespaces, and big indexes and index partitions into the big-extent (1MB) tablespaces.
select 'alter index '||owner||'.'||segment_name||' rebuild tablespace '||
case when bytes/1048576 <= 16 then 'ts_small_idx;' else 'ts_big_idx;' end
from dba_segments
where tablespace_name = 'ts_unnecessary_big' and segment_type = 'INDEX'
union
select 'alter index '||owner||'.'||segment_name||' rebuild partition '||
partition_name||' tablespace ts_big_idx nologging;'
from dba_segments
where tablespace_name = 'ts_unnecessary_big' and segment_type = 'INDEX PARTITION'
order by 1;

  • Even though the tablespaces into which the segments are moved are locally managed with uniform small extents, the moved segments are created with a big initial extent equal to their original size. After moving the segments, I need to deallocate the unused blocks and shrink the initial extent size. First, I need to find out which segments are candidates for deallocation: likely those whose initial extent is larger than the tablespace's uniform extent size and equal to the segment's total size.
select 'alter table '||owner||'.'||segment_name||' deallocate unused keep 128k;'
from dba_segments
where tablespace_name = 'ts_small'
and segment_type = 'TABLE'
and initial_extent != 128*1024 and initial_extent = bytes
order by initial_extent;
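
The queries above only generate the ALTER statements. One simple way to run them is to spool the output to a script first (a sketch; /tmp/move_segments.sql is just a scratch file name):

set heading off pagesize 0 feedback off trimspool on
spool /tmp/move_segments.sql
-- run one of the generating queries here
spool off
@/tmp/move_segments.sql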

Some interesting things showed up while deallocating the segments:
  1. Some segments can be deallocated down to 128k, the tablespace's uniform extent size; some cannot, even though their high water mark is not above 128k.
  2. Some segments can be deallocated down to the size nearest to the high water mark; some cannot.
  3. Some statements report how many unused blocks there are above the segment's high water mark, while some don't (the DBMS_SPACE sketch after this list shows another way to check).
  4. After changing 'keep 128k' more aggressively to 'keep 64k' or lower, some of the segments from point 1 that could not be shrunk to 128k were deallocated down to 128k after all.
  5. Likewise, after changing 'keep 128k' to 'keep 64k' or lower, some of the segments from point 2 that could not be shrunk to their smallest possible size were deallocated further.
I need to invest more time to figure out these interesting things.
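
For reference, DBMS_SPACE.UNUSED_SPACE reports the blocks above a segment's high water mark directly; a minimal sketch (the owner and table names are placeholders):

declare
  l_total_blocks  number;
  l_total_bytes   number;
  l_unused_blocks number;
  l_unused_bytes  number;
  l_file_id       number;
  l_block_id      number;
  l_last_block    number;
begin
  dbms_space.unused_space(
    segment_owner             => 'SOME_OWNER',   -- placeholder
    segment_name              => 'SOME_TABLE',   -- placeholder
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('unused blocks above HWM: '||l_unused_blocks);
end;
/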

Forward or backward?

Recently I was assigned to a project. The project itself is actually pretty simple in terms of business logic. There are only 4 screens whose data needs to be maintained through the front-end web pages, and the maintenance is also pretty straightforward: users just want to insert, update, delete or report on the data. What about the data? The data itself is relatively static as well. By now, everybody would think this project is easy to do and the data easy to manipulate. I would think 4 or 5 tables in the database would be enough.

Wrong! The data model designed by somebody is extremely complicated. There are tens of tables in the database to support his idea and back the front-end interface. This guy seems to live in a perfect world. He wants everything to be flexible. He wants users to control which table columns are displayed on the web interface. He wants everything to be stored as IDs rather than real values; to find a real value, I need to go down through 3 or more tables to fetch it. I suspect even the simplest SQL needs to join 5 or more tables to get what we want. This guy wants to build an RDBMS instead of an application on top of the Oracle RDBMS!

Yes, he can say the model is very flexible. Users can control almost everything, even the data types and some constraints. But the price we pay is a hard-to-understand data model (sometimes even the designer himself has no idea what he's doing during project meetings, funny!), inevitable database performance overhead, and unnecessary flexibility. Users may not even want the flexibility we introduce to them! He has simply overcomplicated things.

I've been thinking about this. Database technology has been evolving for decades, from hierarchical databases to network databases to relational databases to object-oriented databases. One of the most important goals has been to simplify the data model as much as possible so that developers can focus on the real business logic instead of a complicated data model. We do ours and you do yours! In this project, everything seems inverted. Instead of putting our effort into the business logic, we have to spend 90% of the project time understanding the data model and keeping the data consistent with itself. Is it really worth doing all of this? Are we going forward or backward? Time will tell!