
Friday, December 27, 2013

DBMS_LOGMNR, or "How to see what's in the redo logs"


Contents:
1 Database-Level Supplemental Logging
2 Redo Log File Options
3 Example of Querying V$LOGMNR_CONTENTS
4 End the LogMiner Session





Database-Level Supplemental Logging:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
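
To check whether database-level supplemental logging is already enabled, you can query V$DATABASE first (a quick sanity check, not part of the original steps):

SELECT supplemental_log_data_min FROM v$database;
-- YES (or IMPLICIT) means minimal supplemental logging is active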



Redo Log File Options

Here you tell LogMiner which log files to look into. This can be done:

Automatically. Just define the time range you want to observe and start LogMiner.
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

EXECUTE DBMS_LOGMNR.START_LOGMNR( STARTTIME => '01-Jan-2003 08:30:00', ENDTIME => '01-Jan-2003 08:45:00', OPTIONS =>
DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);


Manually. You add every log file you are interested in, one by one, and then start LogMiner.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log.f', OPTIONS => DBMS_LOGMNR.ADDFILE);

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
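
Whichever option you use, once the session is started you can verify which files LogMiner actually registered by querying V$LOGMNR_LOGS (a quick sanity check; column names as documented for 11g):

SELECT filename, low_time, high_time
  FROM v$logmnr_logs;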



Example of Querying V$LOGMNR_CONTENTS


SELECT OPERATION, SQL_REDO, SQL_UNDO
  FROM V$LOGMNR_CONTENTS
  WHERE SEG_OWNER = 'OE' AND SEG_NAME = 'ORDERS' AND
  OPERATION = 'DELETE' AND USERNAME = 'RON';
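
You can slice V$LOGMNR_CONTENTS in other ways too; for example, a quick overview of how many redo records were mined per table and operation (a sketch; it can be slow on a large time window):

SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
  FROM v$logmnr_contents
 GROUP BY seg_owner, seg_name, operation
 ORDER BY redo_records DESC;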


End the LogMiner Session

EXECUTE DBMS_LOGMNR.END_LOGMNR;



Note: Everything is tied to your session, so if you get disconnected you will have to start over.

Source: http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/logminer.htm#i1015913

Tuesday, December 24, 2013

Oracle: Manage your tablespaces, useful queries

Show the free space available per tablespace:
SELECT
  ts.tablespace_name,
  TO_CHAR(SUM(NVL(fs.bytes,0))/1024/1024, '99,999,990.99') AS MB_FREE
FROM
  dba_free_space fs,
  dba_tablespaces ts
WHERE fs.tablespace_name(+)   = ts.tablespace_name
and ts.tablespace_name not in ('SYSAUX', 'SYSTEM', 'TEMP', 'TEMPBIG')
and ts.tablespace_name not like '%UNDO%'
GROUP BY
  ts.tablespace_name

Show more details for tablespace size management:
select df.tablespace_name "Tablespace",
totalusedspace "Used MB",
(df.totalspace - tu.totalusedspace) "Free MB",
df.totalspace "Total MB",
round(100 * ( (df.totalspace - tu.totalusedspace)/ df.totalspace))
"Pct. Free"
from
(select tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from dba_data_files 
group by tablespace_name) df,
(select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name
from dba_segments 
group by tablespace_name) tu
where df.tablespace_name = tu.tablespace_name 
order by tu.tablespace_name;


Coalesce a tablespace.
Alter tablespace TABLESPACE_NAME coalesce; 


Generate commands to shrink the datafiles down to their high-water mark:
select 'alter database datafile ''' || file_name || ''' resize ' || CEIL( ( NVL( hwm,1) * blksize ) / 1024 / 1024 ) || 'm;' cmd
from dba_data_files a,
  (select file_id, max(block_id + blocks - 1) hwm from dba_extents group by file_id) b,
  (select to_number(value) blksize from v$parameter where name = 'db_block_size') c
where a.file_id = b.file_id (+)
and CEIL(blocks * c.blksize / 1024 /1024) - CEIL((nvl(b.hwm,1) * c.blksize) / 1024 /1024) > 0
and tablespace_name not in ('SYSTEM','TEMP') and tablespace_name not like '%UNDO%';
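
The query only generates the resize commands; review them before running them. A (hypothetical) line of output looks like this:

alter database datafile '/oracle/oradata/DB/users01.dbf' resize 512m;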

See the biggest objects in the SYSTEM tablespace:
  select owner,segment_name,segment_type
 ,bytes/(1024*1024) size_m
 from dba_segments
 where tablespace_name = 'SYSTEM'
 and    bytes/(1024*1024) > 1
 order by size_m desc
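
If application segments (owners other than SYS or SYSTEM) show up here, they can usually be relocated to another tablespace. A sketch for a hypothetical table MYAPP.BIG_TABLE and its index (remember that indexes become UNUSABLE after a table move and must be rebuilt):

alter table myapp.big_table move tablespace users;
alter index myapp.big_table_pk rebuild tablespace users;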



Tuesday, December 17, 2013

Oracle: Archivelog activity

This will give you the number of log file switches per hour. You will then be able to see the peaks of activity (i.e. how much data has to be written) in your redo logs.
It is similar to counting ALTER SYSTEM ARCHIVE LOG CURRENT operations; the only difference is in how the current log is written out: ARCHIVE LOG CURRENT waits until archiving completes (synchronous), while SWITCH LOGFILE returns immediately (asynchronous).
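
For reference, these are the two commands being compared:

alter system switch logfile;
alter system archive log current;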


Date format: DD/MM/YYYY

SELECT
    SUBSTR(TO_CHAR(first_time, 'DD/MM/RR HH:MI:SS'),1,5)                          DAY
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'00',1,0)) H00
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'01',1,0)) H01
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'02',1,0)) H02
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'03',1,0)) H03
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'04',1,0)) H04
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'05',1,0)) H05
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'06',1,0)) H06
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'07',1,0)) H07
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'08',1,0)) H08
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'09',1,0)) H09
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'10',1,0)) H10
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'11',1,0)) H11
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'12',1,0)) H12
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'13',1,0)) H13
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'14',1,0)) H14
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'15',1,0)) H15
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'16',1,0)) H16
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'17',1,0)) H17
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'18',1,0)) H18
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'19',1,0)) H19
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'20',1,0)) H20
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'21',1,0)) H21
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'22',1,0)) H22
  , SUM(DECODE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH24:MI:SS'),10,2),'23',1,0)) H23
  , COUNT(*)                                                                      TOTAL
FROM
  v$log_history  a
WHERE
    TO_DATE(SUBSTR(TO_CHAR(first_time, 'MM/DD/RR HH:MI:SS'), 1,8), 'MM/DD/RR') >=  TO_DATE('&startDate', 'DD/MM/YYYY')
AND
    TO_DATE(substr(TO_CHAR(first_time, 'MM/DD/RR HH:MI:SS'), 1,8), 'MM/DD/RR')   <=   TO_DATE('&endDate', 'DD/MM/YYYY')
GROUP BY SUBSTR(TO_CHAR(first_time, 'DD/MM/RR HH:MI:SS'),1,5);


You will get a result with one row per day and one column per hour (H00 to H23), plus a total per day.

It will help you find the times when your database writes the most data.

Oracle: Monitoring indexes

Maintaining indexes consumes resources such as CPU and I/O, and knowing whether an index is actually used can also help you reclaim disk space. Keep in mind that the monitoring period must be representative of your application's life cycle (days, weeks or months) before the results mean anything.


Start monitoring an index:
alter index OWNER.INDEX_NAME monitoring usage

Stop monitoring an index:
alter index OWNER.INDEX_NAME nomonitoring usage

See the list of monitored indexes for a specific owner:
SELECT table_name,
       index_name,
       monitoring
FROM   v$object_usage
WHERE  table_name in (select table_name from dba_tables where owner=UPPER('&owner'))

Show the index usage for a specific owner:
SELECT table_name,
       index_name,
       used,
       start_monitoring,
       end_monitoring
FROM   v$object_usage
WHERE  table_name in (select table_name from dba_tables where owner=UPPER('&owner'));
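
Once the monitoring window is representative, you can generate cleanup statements for indexes that were never used, in the same spirit as the queries above (a sketch assuming the indexes belong to &owner; review every index before actually dropping it):

SELECT 'drop index ' || UPPER('&owner') || '.' || index_name || ';' AS cmd
FROM   v$object_usage
WHERE  used = 'NO'
AND    monitoring = 'YES'
AND    table_name in (select table_name from dba_tables where owner=UPPER('&owner'));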

Monday, December 16, 2013

Create a new Oracle database from cold backup

You will find below the process I use to create a new Oracle database (instance named NEW in this article) from a cold backup.


First, the actions to perform on the existing database:

Generate the init file for the new database. Keep it safe somewhere; we will modify it at a later stage.
create pfile from spfile;

To avoid any further problems, force a log file switch.
alter system switch logfile;

Generate the CREATE CONTROLFILE statement you will use on the new database:
alter database backup controlfile to trace;
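
If you are not sure where that trace file ends up, check the dump destination first (a quick helper, not in the original post):

show parameter background_dump_dest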

The statement is written to one of the most recent trace files in the background_dump_dest folder of your existing instance (the parameter checked above). The trace file also contains other commands to run on your new instance (adding tempfiles, for example). Change the instance name and the redo and datafile locations, and make sure the statement uses SET DATABASE, not REUSE. Below is what you should get after these modifications:
STARTUP NOMOUNT
CREATE CONTROLFILE set DATABASE "NEW" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    'E:\ORADATA\NEW\REDO01A.LOG',
    'E:\ORADATA\NEW\REDO01B.LOG'
  ) SIZE 25M,
  GROUP 2 (
    'E:\ORADATA\NEW\REDO02A.LOG',
    'E:\ORADATA\NEW\REDO02B.LOG'
  ) SIZE 25M,
  GROUP 3 (
    'E:\ORADATA\NEW\REDO03A.LOG',
    'E:\ORADATA\NEW\REDO03B.LOG'
  ) SIZE 25M
-- STANDBY LOGFILE
DATAFILE
  'E:\ORADATA\NEW\SYSTEM01.DBF',
  'E:\ORADATA\NEW\UNDOTBS01.DBF',
  'E:\ORADATA\NEW\SYSAUX01.DBF',
  'E:\ORADATA\NEW\TEMPBIG.DBF'
CHARACTER SET WE8ISO8859P1
;
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '\\SERVERNAME\FLASH_RECOVERY$\NEW\ARCHIVELOG\DATE\O1_MF_1_1_%U_.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'E:\ORADATA\NEW\TEMP01.DBF'
     SIZE 536870912  REUSE AUTOEXTEND ON NEXT 16777216  MAXSIZE 3072M;
-- End of tempfile additions.
--

Shutdown the Instance
shutdown immediate;

Note: Do not restart the original database before you have copied the datafiles and redo log files to their new location, or you will have to redo the steps above.




You can now start working on the new instance:


First, create all the necessary folders (oradata, diagnostic_dest or XXX_dump_dest, etc).

Create the password file:
orapwd file=full_location/FILE.ora force=y

The command will ask for the password. Make sure you specify the full path of the file, or it will be created in your current directory. Be careful: the default password file name differs between Linux and Windows. If the name is not right, you will get an ORA-01031 insufficient privileges error when trying to connect (see this article for more details):
- Linux: orapw<SID>
- Windows: PWD<SID>.ora
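
Later, once the new instance is up, you can verify that the password file is taken into account with a quick check against V$PWFILE_USERS:

SELECT username, sysdba FROM v$pwfile_users;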

Modify the init<SID>.ora file (SID, file locations, etc.)

Then, copy the datafiles and redo files from the existing database. Once this is done, you can restart the original database to reduce the downtime to a minimum.

For Windows only, create the service:
oradim -new -sid <SID> -startmode manual -spfile
oradim -edit -sid <SID> -startmode auto -srvcstart system

If you messed up this part, remove the service and recreate it:
oradim -delete -sid <SID>


Make sure you have all the required entries in the listener.ora and tnsnames.ora files (a minimal example follows below).
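
As a reminder, minimal entries could look like this (a sketch; the host, port and ORACLE_HOME path are assumptions to adapt). For tnsnames.ora:

NEW =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = your_host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = NEW)
    )
  )

And a static registration in listener.ora, useful before the instance can register itself:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = C:\Oracle\product\11.2.0\dbhome_1)
      (SID_NAME = NEW)
    )
  )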

Restart your listener (lsnrctl stop/start) so that it knows about your new instance.

Start the new instance with the NOMOUNT option, and execute the CREATE CONTROLFILE statement.

Open the database.
alter database open resetlogs upgrade;
Note: The upgrade option is mandatory only if you use this method to upgrade from 10g to 11g.

Then, add the TEMPFILE. You will find the statement in the trace file used for the CREATE CONTROLFILE statement (see above). Check whether the trace file lists any other required actions as well.

Last but not least, check the alert log and fix any issues that appear.

Then, recompile all invalid objects:
@?/RDBMS/ADMIN/utlirp.sql
shutdown immediate
startup mount
alter database open upgrade;
-- or: startup migrate;
--recompile all invalid objects
@?/RDBMS/ADMIN/utlrp.sql

This part is only needed if you used this method for a migration from 10g to 11g:
@catupgrd.sql
@utlrp.sql

TNS-03505: Failed to resolve Name

Check these:

lsnrctl status
tnsping <ORACLE_SID>

listener.ora
tnsnames.ora
sqlnet.ora

hostname
TNS_ADMIN environment variable (a quick check follows below)
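
To see which directory your tools are actually using, check the TNS_ADMIN variable (quick checks, one per OS):

echo %TNS_ADMIN%     (Windows)
echo $TNS_ADMIN      (Linux/Unix)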

Note: This article will be updated with more details and explanations

ORA-01031: insufficient privileges (on Windows)

When you have this error, here are a few things to check:
  • you have your password file. If not, recreate it:
orapwd file=c:\.....\database\PWD{SID}.ora force=[y/n] 

Careful, Linux DBAs: the file is named differently on Windows - PWD{ORACLE_SID}.ora (e.g. PWDORCL.ora). This kept me looking for a solution to this problem for a very long time...

  • remote_login_passwordfile must be set to EXCLUSIVE (a quick check follows after this list)
  • If you have two Oracle homes running on the same server (10g and 11g for example), set the TNS_ADMIN environment variable to the location of your sqlnet.ora, listener.ora and tnsnames.ora files
  • The oracle user is in the right group (dba)
  • Add this line to your sqlnet.ora file (I haven't tried it yet, but it is supposed to fix the issue):
SQLNET.AUTHENTICATION_SERVICES = (NTS)
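
A quick way to check the current password file setting, from SQL*Plus as a privileged user:

show parameter remote_login_passwordfile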


Thursday, December 12, 2013

Drop an Oracle database on Windows server

step 1: get the datafile, control file and redo log file locations
select file_name from dba_data_files
select name from v$controlfile
select member from v$logfile

step 2: set env variables if not done yet
set oracle_home=C:\Oracle\Orant1020
set oracle_sid=XXX

step 3: shut down the database and restart it mounted in restricted, exclusive mode.
SQL> shutdown immediate;
SQL> startup mount exclusive restrict;

step 4: drop the DB and exit sqlplus
SQL> drop database;

step 5: check that the database files have been removed, and remove them manually if not. Don't forget the folders, trace files, alert log, etc.

step 6: Drop the XXX related services using ORADIM
C:\> oradim -delete -sid XXX
Instance deleted.

step 7: Check whether the listener still lists any XXX services and unregister them if so. This is normally handled automatically, but it is worth checking (see below).
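
A quick way to do that check, and to make the listener reread listener.ora if you had static entries for XXX:

C:\> lsnrctl status
C:\> lsnrctl reload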