2009-01-20

pipelined function vs. dbms_output

I don't like dbms_output.
So I turned to pipelined functions.
Here is a small example:
create or replace
function dummy
return DBMS_DEBUG_VC2COLL
PIPELINED -- NOTE the pipelined keyword
is
begin
pipe row (to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS' ));
dbms_lock.sleep(15);
pipe row (to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS' ));
return;
end;
/
Make sure you set the arraysize of SQL*Plus (or whatever client you use) small enough, otherwise you will get a bunch of results at once instead of as they occur.
set arraysize 1
select * from table(dummy());
gives
COLUMN_VALUE
--------------------
20-JAN-2009 14:24:17
and after 15 sec.
20-JAN-2009 14:24:32
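For contrast, the same test done with dbms_output only shows both lines after the whole block has finished, which is exactly what I dislike; a minimal sketch of what I mean:
set serveroutput on
begin
  dbms_output.put_line(to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS'));
  dbms_lock.sleep(15);
  dbms_output.put_line(to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS'));
end;
/
Both lines appear together, only after the 15 seconds have passed.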
It might not be a big advantage for everyone, it's just a thing I like.

2009-01-13

multiple ASM instances on one node (11gR1)

I was looking for a way to circumvent the limit of 63 ASM DiskGroups in a storage system.
My first thought was to create a 2nd (or even multiple) ASM instance on the same node, each managing separate DGs. This is quite easily possible: just create an spfile for the 2nd ASM and make sure to add *.DB_UNIQUE_NAME to avoid ORA-15150. I also recommend using different ASM_DISKSTRINGs, to avoid multiple mount attempts of the same DG on all ASM instances (a pfile sketch follows below).
So I could create different DGs in different ASM instances.
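A minimal sketch of a pfile for the 2nd ASM instance; the instance name +ASM2, the diskstring and the diskgroup name are just examples from my sandbox, not mandatory values:
*.instance_type='asm'
*.db_unique_name='+ASM2'
*.asm_diskstring='/dev/asm2/*'
*.asm_diskgroups='ASM2_DG'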
But now the trouble starts: in my test RDBMS I could only see the DGs of the first ASM.
So I created an SR (7275580.994) at MetaLink to ask how to do it and whether or not it's supported in single instance or RAC.
To summarize the answers:
  • It is possible to run multiple ASMs on a node
  • One RDBMS can only be served by one ASM
  • none of the GUIs are aware of multiple ASMs
  • it's not supported in RAC
Even so, these answers were worth some further test cases.
I have 2 ASMs: +ASM (default) and +ASM2 (2nd, with changed DB_UNIQUE_NAME and ASM_DISKSTRING) and one RDBMS. DG ASM1_DG is created in +ASM, DG ASM2_DG is created in +ASM2.
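Whenever I write below that a DG is 'visible in the RDBMS', I mean it shows up in this query, run in the database instance (just how I checked it, not an official definition):
select name, state, total_mb from v$asm_diskgroup;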

  1. Test 1
    +ASM up
    +ASM2 up
    => only ASM1_DG visible in RDBMS.
  2. Test 2
    +ASM down
    +ASM2 up
    => ASM2_DG visible
    ! create tablespace in ASM2_DG
  3. Test 3
    +ASM up
    +ASM2 up
    - restart RDBMS
    => only ASM2_DG visible in RDBMS
  4. Test 4
    +ASM up
    +ASM2 up
    => DBCA only shows ASM1_DG
  5. Test 5
    • shutdown +ASM
    • switch ASM1_DG to +ASM2 (some fiddling with ASM_DISKSTRING and symlinks)
    • mount ASM1_DG in +ASM2
    • create TS in ASM1_DG
    • shutdown RDBMS and +ASM2
    • switch ASM1_DG back to +ASM
    • startup +ASM, +ASM2 and rdbms
    • this constellation (ASM1_DG mounted on +ASM, ASM2_DG mounted on +ASM2) led to this error message:

      ORA-01157: cannot identify/lock data file 8 - see DBWR trace file
      ORA-01110: data file 8: '+ASM1_DG/berx2/datafile/asm1_dg.256.676065339'
      ORA-17503: ksfdopn:2 Failed to open file +ASM1_DG/berx2/datafile/asm1_dg.256.676065339
      ORA-15001: diskgroup "ASM1_DG" does not exist or is not mounted
      ORA-15001: diskgroup "ASM1_DG" does not exist or is not mounted
      
Test 5 led me to one guess: at startup an RDBMS can potentially access all ASMs, but the first attempt to access a DG pins the RDBMS to that ASM. Unfortunately I do not know how to prove this.
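At least the current assignment can be observed: each ASM instance lists the databases it serves in v$asm_client. A small sketch (run in each ASM instance); it shows which ASM serves the RDBMS right now, but it does not prove when the pinning happens:
select instance_name, db_name, status from v$asm_client;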
Within the SR Bug:6991236 was mentioned.
This Bug is not visible right now, but interpreting the header I guess it's an enhancement request to allow or support multiple ASM instances with different OS users in a cluster environment. It's shown as fixed (implemented) in 11gR2 and has status 98 (Suggestion Implemented). We will see when 11gR2 is out.

2009-01-09

avoid ORA-1555 in long running queries

11.1.0.7 brought an interesting new feature: Allow ORA-1555 to be ignored during table scan.
From the description for Bug 6688108:

ORA-1555 can occur during long running queries involving a ROWID Range or full table scan if the undo has recycled.
This error is normal and expected.

This fix is a special enhancement to allow an SCN_ASCENDING hint to be supplied in the SQL statement. With this hint the select will take a new snapshot SCN and continue the scan rather than reporting an ORA-1555 to the caller. This can lead to inconsistent results as the SELECT is allowed to see data which is not all consistent to one point in time.
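The hint goes into the statement like any other hint; a minimal sketch (the table name is just a placeholder):
select /*+ SCN_ASCENDING */ * from big_table;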

It is somewhat amazing that Oracle allows inconsistent results; you could see it as the 'other end' of the READ UNCOMMITTED isolation level.

I'm not sure where this feature makes sense, but there will be environments where it's better to get false data than no data.

2009-01-08

buffer cache poisoning

Out of some tests and discussions I wanted to create a way to pollute the buffer cache with blocks that do not exist there yet. As a result the BCHR also goes down, so I might produce a contradiction to Connor McDonald's choose_a_hit_ratio. There might be easier ways of doing so; feel free to suggest them, if you want.

How I test the BCHR:
column name format A32;
SELECT id, name, block_size,
 (1-(PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS))) HIT
FROM V$BUFFER_POOL_STATISTICS
ORDER BY NAME, BLOCK_SIZE;
On my test system the script is really slow, and I'm not sure about side effects it might have on other parts of the system than the buffer cache.

and my results (before the first run):
ID NAME       BLOCK_SIZE        HIT
-- ---------- ---------- ----------
3 DEFAULT          8192 .920069518
1 KEEP             8192 .158558559
The script's functionality explained:
  1. a cursor c_tab provides all tables of some quality (e.g. not owned by SYS)
  2. for every entry in this cursor, all rowids are collected into a TABLE OF ROWID
  3. every rowid is checked whether its block exists in x$bh; if not, it is selected directly and thereby pushed into the buffer cache.
and here the script itself (a big part just copied with permission from Reinhard Krämer):
DECLARE
counter  number;
obj_id   number;
rel_fno  number;
block_id number;
bh_count number;
bh_stmt  varchar2(200);
rid_stmt varchar2(200);
v_stt    varchar2(50);
v_rid    ROWID;
CURSOR c_tab
IS
  SELECT owner,
         table_name,
            owner
         || '.'
         || table_name owntab
  FROM DBA_tables
  WHERE owner NOT IN ('SYS')
         -- uncomment next line, if you want to restrict the run to a single table
         --AND table_name = 'T_KEEP'
         AND BUFFER_POOL NOT IN ('KEEP')
  --       AND owner = UPPER ('TEST')
  ;

g_string VARCHAR2 (4000) := NULL;
-- memory-array
TYPE t_rowid IS TABLE OF ROWID
  INDEX BY PLS_INTEGER;

r_rowid t_rowid;

-- print-procedure (mini-version of MSG)
PROCEDURE PRINT (
  p_text IN VARCHAR2 := NULL
)
IS
BEGIN
-- remove comment for debug
--      DBMS_OUTPUT.put_line (p_text);
 NULL;
END;
BEGIN
-- main
EXECUTE IMMEDIATE 'select b.ksppstvl InstValue
                 from x$ksppi a, x$ksppcv b
                 where a.indx = b.indx
                   and a.ksppinm=''_small_table_threshold''' into v_stt;
EXECUTE IMMEDIATE 'alter session set "_small_table_threshold" = 10';
counter := 0;
bh_stmt := 'select count(*)
          from x$bh
          where obj=:obj_id and dbablk=:block_id and dbarfil=:rel_fno';

FOR crec IN c_tab
LOOP
 PRINT(' tab: ' ||crec.owntab);
  g_string :=    'select rowid the_rowid from '
              || crec.owntab;

  BEGIN
     PRINT (   'using command: '
            || g_string);

     EXECUTE IMMEDIATE (g_string)
     BULK COLLECT INTO r_rowid;
  EXCEPTION
     WHEN OTHERS
     THEN
        PRINT (   SQLERRM
               || ' on '
               || g_string);
  END;

  IF r_rowid.COUNT > 0
  THEN
     PRINT (   'showing rowids for: '
            || crec.owntab);

     FOR i IN r_rowid.FIRST .. r_rowid.LAST
     LOOP
        PRINT (r_rowid (i));
        -- load block if not in X$bh
   
        obj_id   := dbms_rowid.rowid_object(r_rowid(i));
        rel_fno  := dbms_rowid.rowid_relative_fno(r_rowid(i));
        block_id := dbms_rowid.rowid_block_number(r_rowid(i));
        rid_stmt := 'SELECT rowid the_rowid FROM '
              || crec.owntab || ' WHERE rowid = :rid';
        execute immediate bh_stmt into bh_count using obj_id, block_id, rel_fno;
   
        IF bh_count = 0 THEN -- block currently not in buffer
          BEGIN
            execute immediate rid_stmt into v_rid using r_rowid(i);
            counter := counter + 1;
          END;
        END IF;
     END LOOP;
  END IF;
END LOOP;
-- bind variables are not allowed in ALTER SESSION, so the old value is concatenated back in
EXECUTE IMMEDIATE 'alter session set "_small_table_threshold" = ' || v_stt;
DBMS_OUTPUT.put_line ('counter: ' || counter);
END;
/


After a 2nd check the BCHR is really lower than before.

ID NAME       BLOCK_SIZE        HIT
-- ---------- ---------- ----------
3 DEFAULT          8192 .856567413
1 KEEP             8192 .158558559


One question might be why I didn't just run a lot of full table scans. I can not be sure how many blocks of these tables already exist in the buffer cache, and every buffer found there would increase the BCHR. Also, in Oracle 11g full scans might result in direct path reads which do not populate the buffer cache (in fact, they would even increase the BCHR, as all the internal selects on SEG$, TS$ and the other data dictionary tables needed to access the blocks are still served from the cache).
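If you want to check whether a run ended up doing direct path reads instead of buffered reads, the session statistics give a hint; a small sketch using v$mystat:
select n.name, s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and n.name in ('physical reads', 'physical reads direct');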
One last remark: if ever anyone argues about the need for ratios to judge the health of a database, just ask whether you get additional resources if the BCHR is above/below any limit - and then grab these resources :-)

2009-01-07

Instance parameters derived from cpu_count

Today I have a lot of pictures!
We got a brand new SUN T5140, and I managed to get my hands on it to run some tests.

Glenn Fawcett wrote some interesting entries in his blog about these so called chip multithreading CPUs. Two should be mentioned here, as they influenced this test case:
Oracle's vision of multi-core processors on Solaris and Virtual CPUs effect on Oracle SGA allocations.
I wanted to know which parameters are derived from cpu_count.
A freshly installed and patched RDBMS 11.1.0.7.0 was my victim.
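The loop below fills a small helper table init_parameters, which has to exist beforehand. A minimal sketch of its DDL; the quoted column names are inferred from the insert and the later select, so treat them as assumptions:
create table init_parameters (
  "CPUs"          number,
  "Parameter"     varchar2(80),
  "InstanceValue" varchar2(4000),
  "SystemValue"   varchar2(4000)
);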
I used this script to get my data:
#!/usr/bin/ksh  -x

integer i=1
while ((i <= 128));
do
  print " $i ";

  echo "*.compatible='11.1.0.0.0'
*.control_files='/appl/oracle/oradata/BERX2/control01.ctl','/appl/oracle/oradata/BERX2/control02.ctl','/appl/oracle/oradata/BERX2/control03
.ctl'
*.db_block_size=8192
*.db_name='BERX2'
*.MEMORY_MAX_TARGET=16G
*.SGA_TARGET=13G
*.cpu_count=$i"> initBERX2.ora

  echo "startup pfile='/appl/oracle/product/11.1.0.7/dbs/initBERX2.ora' ;

insert into init_parameters
select $i         ,
       a.ksppinm  ,
       b.ksppstvl ,
       c.ksppstvl
from x\$ksppi a, x\$ksppcv b, x\$ksppsv c
where a.indx = b.indx and a.indx = c.indx;

commit;

select distinct(\"CPUs\") from init_parameters;

shutdown;

exit;"> cpus/doit$i.sql

sqlplus "/ as sysdba" @cpus/doit$i.sql >cpus/log$i.log

sleep 5

  (( i = i + 1));
done

Afterwards I just had to get the interesting data out:
select /*+ PARALLEL */ i."CPUs" || ',' || i."Parameter" || ',' || i."InstanceValue"  
from init_parameters i  
where "Parameter" in (select distinct(i1."Parameter")
    from init_parameters i1,
       init_parameters i2
    where i1."Parameter" = i2."Parameter"
       and i1."CPUs" <> i2."CPUs"
      and i1."InstanceValue" <> i2."InstanceValue" )  
order by i."CPUs", i."Parameter";

and wasted my time in Excel creating the graphs:
(the whole spool output can be found here.)
I hope everyone enjoys these graphs. I'm sure many more tests could be done, so if you have an interesting test case which can be automated like mine, feel free to contact me.

2009-01-05

KEEP and DEFAULT buffer really separated (at least for LRU lists)

I had to answer a really simple question:
If I manage to put a table totally into the KEEP buffer (and it's the only table which will ever go there), will it be affected by other operations which go into the DEFAULT buffer cache?
At least if you disable Automatic Memory Management the answer is yes, the pools are really separated and the KEEP table stays untouched (even if some points have to be considered, but that's not part of the primary question).
Here my little testcase:
I have a table t_keep which should go into and stay in the KEEP buffer cache (and fits into it).
I have 2 other tables, t and t2, where t is much bigger than the DEFAULT pool; t2 also fits into the DEFAULT pool, but together they kick each other out (at least partially).
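For completeness, this is how a table gets assigned to the KEEP pool (the table is the one from this test; for an existing table the ALTER works just as well as the clause at CREATE time):
alter table test.t_keep storage (buffer_pool keep);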
And now for the tests:
(I had to make _small_table_threshold big enough to load all tables into the buffer cache and to avoid direct path reads)
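A minimal sketch of what I mean; the value is just an example, chosen to be larger than the biggest table (26289 blocks for T below):
alter session set "_small_table_threshold" = 30000;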

about the tables:
select table_name, buffer_pool, blocks from dba_tables where owner ='TEST';

TABLE_NAME                     BUFFER_     BLOCKS
------------------------------ ------- ----------
T2                             DEFAULT        401
T_KEEP                         KEEP           496
T                              DEFAULT      26289

select OBJECT_NAME, DATA_OBJECT_ID 
from dba_objects
 where owner ='TEST';
OBJ_NAME DATA_OBJECT_ID
-------- --------------
T2                64351
T_KEEP            64143
T                 64142
the test itself:
SELECT ROWID FROM TEST.T_KEEP;

... ROWIDs ...

select obj, count(*) from x$bh where obj in (64142, 64143,64351) group by obj;

OBJ   COUNT(*)
---------- ----------
64143        452

SELECT ROWID FROM TEST.T;

... ROWIDs ...

select obj, count(*) from x$bh where obj in (64142, 64143,64351) group by obj;

OBJ   COUNT(*)
---------- ----------
64143        452
64142       1080

select obj, count(*) from x$bh where obj in (64142, 64143,64351) group by obj; 

OBJ   COUNT(*)
---------- ----------
64351        386
64143        452
64142          1 
So ObjectID:64143 == T_KEEP is still in KEEP buffer cache.

Now the same test with Automatic Memory Management:
(only the results)
select obj, count(*) from x$bh where obj in (64142, 64143,64351) group by obj;
       OBJ   COUNT(*)
---------- ----------
     64143        452

select obj, count(*) from x$bh where obj in (64142, 64143,64351) group by obj;
       OBJ   COUNT(*)
---------- ----------
     64143        452
     64142      20111

select obj, count(*) from x$bh where obj in (64142, 64143,64351) group by obj;
       OBJ   COUNT(*)
---------- ----------
     64351        386
     64143        452
     64142      19725
So also in this case 64143 survived, whereas the other two were fighting for the same blocks in the DEFAULT buffer cache.