I had a (for my environment) unusual request:
After migrating a Repository Database from 9i to the latest 10g, I was asked to keep a backup of the old DB for at least 3 years.
This does not sound very unusual, but it's not that simple in our environment. We only keep backups for a few weeks up to some months, worst case. I also cannot simply back up the datafiles: the old database ran on Solaris, but we are switching to Linux right now. With a bit of bad luck I would not have any system left to restore (or open) this database backup at all.
This brought me to another solution. In my view it was not worth writing a blog post about, but Leighton L. Nelson asked me to, so here it is:
- export of the full database
I ran a simple export of the database. There is no expdp in 9i, so the choice was easy.
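A sketch of such a call of the classic export utility (connect string, file names and options here are assumptions, not the exact command used):

```
# classic exp - expdp does not exist before 10g
exp system/<password> full=y consistent=y \
    file=full_exp.dmp log=full_exp.log
```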
- compress the files
The dump (and the logfile!) were tarred together and compressed, just to save space.
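As shell commands this step could look like the following (the file names are just examples, not the ones actually used):

```shell
# bundle the export dump and its logfile into one archive, then compress it
tar cvf old_dump.tar full_exp.dmp full_exp.log
gzip -9 old_dump.tar    # yields old_dump.tar.gz
```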
- prepare a proper store
As mentioned above, there is no dedicated system for this purpose, so I had to prepare a place where the dump is safe. As a DBA, of course I know a good place to store data: a database!
First a DBFS came to my mind. But the DB is version 10.2 - no DBFS.
But it's quite simple to do the important steps manually:
create tablespace old_dump datafile '+<DG>' autoextend on;
create user old_dump identified by dump_old1 default tablespace old_dump;
grant connect, create table to old_dump;
alter user old_dump quota unlimited on old_dump;

connect old_dump/dump_old1
create table old_dump_store (
  id          integer primary key,
  description varchar2(2000),
  file_store  blob
) lob (file_store) store as (
  tablespace old_dump
  disable storage in row
  nocache logging
);

(SECUREFILE LOBs only arrived with 11g, so on 10.2 the LOB stays a classic BASICFILE.)
- insert the dump (and some metadata)
There is a nice way in SQL Developer to load a file into a BLOB. It's just that simple.
At last, a few words in the description column are worthwhile - so everyone knows what's inside the BLOB.
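If SQL Developer is not at hand, the same load can be scripted in PL/SQL with DBMS_LOB. A sketch, assuming a directory object pointing at the dump location (the directory and file names below are made up for illustration):

```
-- assumption: a DBA created a directory object and granted read on it, e.g.:
--   create directory dump_dir as '/u01/old_dump';
--   grant read on directory dump_dir to old_dump;
declare
  l_bfile bfile := bfilename('DUMP_DIR', 'old_dump.tar.gz');  -- hypothetical file name
  l_blob  blob;
begin
  -- create the row with an empty LOB locator, then fill it from the file
  insert into old_dump_store (id, description, file_store)
  values (1, 'Full 9i export of the old repository DB (tar.gz: dump + logfile)', empty_blob())
  returning file_store into l_blob;

  dbms_lob.open(l_bfile, dbms_lob.lob_readonly);
  dbms_lob.loadfromfile(l_blob, l_bfile, dbms_lob.getlength(l_bfile));
  dbms_lob.close(l_bfile);
  commit;
end;
/
```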