How To: Oracle ASM – Adding a SAN Disk to a Diskgroup on Windows

As a first article of the year, I have put together a little tutorial showing, as best I can, how to add new SAN storage to an ASM diskgroup in an 11g RAC environment on Windows (2008 onwards). Nothing very complex, but these remain common operations that deserve their own little tutorial 😉

As for the technical context, I have a two-node Oracle cluster (on physical machines), running 11.2.0.3 (Grid + RDBMS) on Windows 2008 R2.

The objective is to add 250 GB of SAN storage to my "DATA" diskgroup used by ASM, without causing any incident or downtime.

Warning:

Make sure the zoning has been properly done from the SAN to the 2 nodes, with no EXCLUSIVE access. Yes, it can happen…

=> As a side note, "exclusive" access shows up as operations performed on a new disk from one RAC node that are not propagated to the other node(s).

First of all, as soon as the SAN storage has been mapped on both servers, check that the disk is visible from the DISKPART utility.

Oracle recommends this utility for managing the disks that will be assigned to ASM; the "diskmgmt.msc" GUI should be avoided because of FREEZE problems (which I have run into a few times myself), which can turn the simple addition of a disk into a complex operation, to put it politely.

So we launch DISKPART from a DOS prompt on one of the RAC nodes, and list all the disks identified by the system, so we can then deal with the newly added storage.

Note: the new disk may not be visible on the cluster nodes. Don't panic, a RESCAN refreshes the Windows disk manager.
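For reference, a minimal DISKPART session for this step looks roughly like this (output omitted, and disk numbers will differ on your system):

C:\> diskpart
DISKPART> rescan
DISKPART> list disk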

[Screenshot: DISKPART "list disk" output showing the new offline disks]

In this list, we can see 2 Offline disks that are to be added. We are going to add disk 5, 250 GB, to our ASM disk group "DATA".

-> Still through DISKPART, we select the disk and begin creating the 2 partitions, first bringing the disk ONLINE (bringing the disk ONLINE must be done on every node of the RAC).

Note: For Windows ASM storage, the disk must have an EXTENDED partition and then a LOGICAL one (either at the full size of the disk, or not).

[Screenshot: DISKPART "select disk 5" and "online disk"]
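For the record, the command equivalent of this step is simply (to be repeated on each node):

DISKPART> select disk 5
DISKPART> online disk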

We create the EXTENDED partition:

[Screenshot: "create partition extended" failing because the disk is read-only]

Ah… too bad. The volume is READ ONLY, so it is impossible to create anything on it.

The origin of this problem is documented by Microsoft (KB) and relates to the Windows SAN policies governing SAN disk management. On 2008 and 2008 R2, the SAN policies set READ ONLY mode on any new shared SAN disk (VDS_SP_OFFLINE_SHARED mode), whereas in more recent versions READ WRITE (VDS_SP_ONLINE) is the default.

The different SAN policies:

VDS_SP_ONLINE: All newly discovered disks are brought online and made read-write.
VDS_SP_OFFLINE_SHARED: All newly discovered disks that do not reside on a shared bus are brought online and made read-write.
VDS_SP_OFFLINE: All newly discovered disks remain offline and read-only.

The proposed workaround is therefore to change the disk attributes and force it to READ WRITE with the command "ATTRIBUTES DISK CLEAR READONLY".

So be it: let's switch disk 5 to READ WRITE and carry on creating our partitions.

[Screenshot: "attributes disk clear readonly" run on disk 5]

=> Warning: this has to be done on every node of the RAC cluster.
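In command form, with a verification of the attributes afterwards:

DISKPART> select disk 5
DISKPART> attributes disk clear readonly
DISKPART> attributes disk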

To create the two partitions, follow the commands below; only the logical partition differs, in case a disk space limit needs to be set (which is not my case).

Creating the extended partition:

CREATE PARTITION EXTENDED

Creating the logical partition using all the disk space of the extended partition:

CREATE PARTITION LOGICAL 

Or, if a size must be specified (example for roughly 10 GB, SIZE being given in MB):

CREATE PARTITION LOGICAL SIZE=10000

Which gives, in my case:

[Screenshot: creation of the extended and logical partitions in DISKPART]

=> After this step, make sure the partitions are PROPERLY VISIBLE from the other nodes of the cluster. In my case I check on my second node, still via DiskPart, after first running a quick RESCAN to refresh the disk manager.

[Screenshot: the partitions visible from the second node after a rescan]

Just to be safe (normally this is the default since Windows 2008), we run an "AUTOMOUNT ENABLE" command to make sure the disk will be mounted at the next reboot.

[Screenshot: "automount enable" output]

Now that this step is done, there is one more point to check. It may have little impact, but it can be a nuisance for Oracle, Windows and the applications:

Each time new partitions are discovered, Windows AUTOMATICALLY assigns them a new drive letter. For classic (primary) partitions this is no big deal, but for the partitions that ASM is going to use it can cause problems.

=> So make sure you REMOVE the LETTER that must have been automatically assigned on the second node (the node from which the partition was created does not get a letter assigned automatically). This can be done directly in DISKPART, as sketched below, or through the GUI.
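If you prefer to stay in DISKPART rather than use the GUI, the removal goes roughly as follows (the volume number and the letter E are examples, check the output of "list volume" first):

DISKPART> list volume
DISKPART> select volume 7
DISKPART> remove letter=E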

In the GUI, after right-clicking the partition and selecting "Change drive letter and path":

[Screenshot: "Change drive letter and path" dialog]

=> and we remove!

[Screenshot: removing the drive letter]

We answer YES, of course…

[Screenshot: confirmation prompt]

And there we go, no more letter, all good. I can now move on to the Oracle part, and more specifically ASM, to grow my existing DISKGROUP with the LOGICAL partition created earlier.

The first action, under Windows (no need for this on Unix), is to label this partition so that it then shows up as CANDIDATE in ASM.

To do this, launch the ASMTOOL utility (available in the Grid Infrastructure installation directory), either in graphical mode or on the command line. Being on Windows, I will not complicate my life and will use the graphical method via ASMTOOLG (a command-line sketch follows for reference):
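Roughly, with an example device path and an example label, "asmtool -list" showing the exact device names on your system:

asmtool -list
asmtool -add \Device\Harddisk5\Partition1 DATA5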

[Screenshot: asmtoolg, "Add/change label" option]

We continue (NEXT) and reach the list of partitions visible to the system, along with those that are candidates to be stamped for "ASM":

[Screenshot: asmtoolg, list of devices and their status]

Partition 1 of Disk5, shown as CANDIDATE, is indeed the one created earlier.

Note:

I keep the same PREFIX for the ASM label, "DATA", since I am about to grow the diskgroup of that same name. That said, I could have used "DATA2" or "TOTO"; it would not have prevented the resize operation. It is more a matter of norms and conventions, so we might as well follow them 😉

So I select partition 1 of disk 5 and launch the ASM "stamping" of the disk:

[Screenshot: asmtoolg, stamping the selected partition]

[Screenshots: asmtoolg, stamping confirmation and completion]

 

I can finally switch to ASM to grow my diskgroup!

But let's first do a quick inventory, to get a before/after picture and also to identify the ASM disk that will be added.
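A minimal sketch of that inventory, assuming a SQL*Plus session on the ASM instance (column lists trimmed for readability):

SQL> select group_number, name, state, total_mb, free_mb from v$asm_diskgroup;
SQL> select path, header_status, mode_status, os_mb from v$asm_disk;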

[Screenshot: diskgroup state before the add (v$asm_diskgroup)]

[Screenshot: the new disk seen by ASM with HEADER_STATUS = PROVISIONED]

So ASM has indeed identified my disk, in PROVISIONED status and unused, ready to be added to the DATA diskgroup, so I launch my command:
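The add itself is a one-liner; the path below is an example, use the value reported in V$ASM_DISK.PATH for the partition stamped earlier:

SQL> alter diskgroup DATA add disk '\\.\ORCLDISKDATA5';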

[Screenshot: "alter diskgroup DATA add disk" command]

As soon as this command has been launched, we can query the REBALANCE operations in progress on the DATA diskgroup:
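The place to watch is V$ASM_OPERATION, for instance:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;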

[Screenshot: rebalance operation shown in v$asm_operation]

Hang on, what is this REBALANCE thing, by the way?

-> Each time one or more disks are added to a diskgroup, ASM automatically performs data redistribution operations, the goal being to spread the data as evenly as possible across all the disks that make up the diskgroup.

REBALANCE operations break down into 3 phases:

1. Analysis of the data to redistribute: "Planning"
2. Relocation of the data file EXTENTS: "File extents relocation"
3. Compaction: "Disk compacting"

By default, the REBALANCE level is set to 1 via the ASM_POWER_LIMIT parameter.

[Screenshot: ASM_POWER_LIMIT parameter at its default value of 1]

This means the operations will not put "too much" load on the machine/SAN resources (CPU/IO), so as not to hurt performance if, for instance, the operation is launched in the middle of the day.

But leaving this parameter at its default value can have a major impact: on a 100 GB database, spreading the data across all the disks will probably take an acceptable amount of time, during which few risks can materialize, whereas on a database of a few TB it is quite another story…

If the operation takes several hours to complete, the risk factors grow (disk failures, machine failures, SAN link cuts, reboots, you name it), which can leave the data stored by ASM in a fine mess.

It is therefore important to check how long ASM is going to take to perform its reorganization, so you can adjust accordingly and avoid a whole set of risks.

REBALANCE management is dynamic: we can raise or lower ASM_POWER_LIMIT with a command such as "alter diskgroup MYDISK rebalance power 3;". Let's see what this gives in my case:
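In practice, using my DATA diskgroup, the sequence is along these lines:

SQL> alter diskgroup DATA rebalance power 3;
SQL> select power, sofar, est_work, est_minutes from v$asm_operation;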

[Screenshot: rebalance at power 3, the estimated time is still rising]

We can see that the estimated time is not going down, quite the opposite (which is normal, since this is the first "Planning" phase). So let's move to rebalance level 4:

[Screenshot: rebalance raised to power 4]

Straight away, by raising the rebalance level to 4, we halve the time needed to reorganize the data.

[Screenshot: estimated time halved at power 4, final state of the diskgroup]

We now have our DATA diskgroup fully operational with its 250 new GB!

Enjoy !

Micka

 

Oracle 12c Multitenant Part 2: RMAN Backup and Restore of a PDB

This is the follow-up to the article "Oracle 12c Multitenant Part 1: 11g export/import on a 12c PDB": here

We previously covered the creation of a pluggable database and its loading via a Datapump import. Let's now tackle RMAN.

In 12c Enterprise with the Multitenant option, RMAN can back up 3 distinct things: the CDB, the PDBs, or the metadata.

In this article we will look at the pluggable database part, in my view the most common one in the production world.
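For reference, these three scopes map to commands along these lines (the PDB name is an example):

RMAN> backup database;                    # the whole CDB, all PDBs included
RMAN> backup pluggable database MYPDB1;   # one or more PDBs
RMAN> backup database root;               # the root (CDB$ROOT) only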

Let's take the case of a full backup of our CDB, which therefore includes all the PDBs:

A quick switch to archivelog mode before anything else:

SQL> alter system set log_archive_dest='/oradata/CDB/arch' scope=both ;
System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount ;
ORACLE instance started.

Total System Global Area 1.0503E+10 bytes
Fixed Size 5294664 bytes
Variable Size 5737809336 bytes
Database Buffers 4731174912 bytes
Redo Buffers 28258304 bytes
Database mounted.
SQL>alter database archivelog ;
Database altered.

SQL> alter database open;
Database altered.

SQL>alter pluggable database all open read write ;
Pluggable database altered.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oradata/CDB/arch
Oldest online log sequence 208
Next log sequence to archive 210
Current log sequence 210

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 MYPDB1 READ WRITE NO

Off we go for a little full backup:

RMAN> sql 'alter system switch logfile';

using target database control file instead of recovery catalog
sql statement: alter system switch logfile

RMAN> backup database format '/oratmp/Backup/bkpfull_svg_DB_on%_%t%U.bkp' plus archivelog ;

Starting backup at 18-SEP-15
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=423 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=210 RECID=1 STAMP=890684562
input archived log thread=1 sequence=211 RECID=2 STAMP=890684585
input archived log thread=1 sequence=212 RECID=3 STAMP=890684599
input archived log thread=1 sequence=213 RECID=4 STAMP=890685090
input archived log thread=1 sequence=214 RECID=5 STAMP=890685099
..
..
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oracle/product/12.1.0.2/dbs/0gqhesqa_1_1 tag=TAG20150918T084810 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 18-SEP-15

Starting backup at 18-SEP-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/oradata/CDB/undotbs01.dbf
input datafile file number=00001 name=/oradata/CDB/system01.dbf
input datafile file number=00003 name=/oradata/CDB/sysaux01.dbf
input datafile file number=00006 name=/oradata/CDB/users01.dbf
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_svg_DB_on%_8907292970hqhesqh_1_1.bkp tag=TAG20150918T084817 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/oradata/CDB/MYPDB1/tbs_mrsadmin_01.dbf
input datafile file number=00008 name=/oradata/CDB/MYPDB1/sysaux01.dbf
input datafile file number=00007 name=/oradata/CDB/MYPDB1/system01.dbf
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_svg_DB_on%_8907293040iqhesqo_1_1.bkp tag=TAG20150918T084817 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/oradata/CDB/MYPDB1/sysaux01.dbf
input datafile file number=00015 name=/oradata/CDB/MYPDB1/tbs_MYPDB1_01.dbf
input datafile file number=00013 name=/oradata/CDB/MYPDB1/system01.dbf
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_svg_DB_on%_8907293070jqhesqr_1_1.bkp tag=TAG20150918T084817 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/oradata/CDB/pdbseed/sysaux01.dbf
input datafile file number=00002 name=/oradata/CDB/pdbseed/system01.dbf
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_svg_DB_on%_8907293100kqhesqu_1_1.bkp tag=TAG20150918T084817 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 18-SEP-15

Starting backup at 18-SEP-15
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=267 RECID=58 STAMP=890729313
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oracle/product/12.1.0.2/dbs/0lqhesr2_1_1 tag=TAG20150918T084833 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-SEP-15

Starting Control File and SPFILE Autobackup at 18-SEP-15
piece handle=/oracle/product/12.1.0.2/dbs/c-3858261253-20150918-00 comment=NONE
Finished Control File and SPFILE Autobackup at 18-SEP-15

OK, now let's create a little "user" failure on the PDB MYPDB1:

SQL> alter session set container=MYPDB1;
Session altered.

SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS";
COUNT(*)
----------
5296

SQL> delete from MRS_EXECUTED_BLOCKS;
5296 rows deleted
SQL> commit;
SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS";
COUNT(*)
----------
0

At this point we have a simple user error… Two main options are available to us:

– A full restore of the PDB
or
– An incomplete restore, here meaning the restore of one or more tables in the case of this article.

Let's look first at the full restore of the PDB containing the table above.

Note: Of course, for this type of operation the PDB has to be CLOSED.

RMAN> sql 'alter pluggable database MYPDB1 close immediate';
using target database control file instead of recovery catalog
sql statement: alter pluggable database MYPDB1 close immediate

RMAN> run {
set until time "to_date('18-09-2015:08:50:00', 'DD-MM-YYYY:hh24:mi:ss')";
restore pluggable database MYPDB1;
recover pluggable database MYPDB1
AUXILIARY DESTINATION '/oratmp';
}2> 3> 4> 5> 6>

Note:
For PITR, if the fast recovery area is not configured, the "AUXILIARY DESTINATION" parameter is mandatory to perform the recover.

Starting restore at 18-SEP-15
using channel ORA_DISK_1

skipping datafile 13; already restored to file /oradata/CDB/MYPDB1/system01.dbf
skipping datafile 14; already restored to file /oradata/CDB/MYPDB1/sysaux01.dbf
skipping datafile 15; already restored to file /oradata/CDB/MYPDB1/tbs_MYPDB1_01.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 18-SEP-15

Starting recover at 18-SEP-15
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='twsh'

initialization parameters used for automatic instance:
db_name=CDB
db_unique_name=twsh_pitr_MYPDB1_CDB
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/oracle
_system_trig_enabled=FALSE
sga_target=2560M
processes=200
db_create_file_dest=/oratmp
log_archive_dest_1='location=/oratmp'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

starting up automatic instance CDB

Oracle instance started

Total System Global Area 2684354560 bytes

Fixed Size 3714440 bytes
Variable Size 654312056 bytes
Database Buffers 2013265920 bytes
Redo Buffers 13062144 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until time "to_date('18-09-2015:08:50:00', 'DD-MM-YYYY:hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';
}
executing Memory Script

executing command: SET until clause

Starting restore at 18-SEP-15
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=191 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /oracle/product/12.1.0.2/dbs/c-3858261253-20150918-00
channel ORA_AUX_DISK_1: piece handle=/oracle/product/12.1.0.2/dbs/c-3858261253-20150918-00 tag=TAG20150918T084835
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/oratmp/CDB/controlfile/o1_mf_bzqgo9fp_.ctl
Finished restore at 18-SEP-15

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until time "to_date('18-09-2015:08:50:00', 'DD-MM-YYYY:hh24:mi:ss')";
# switch to valid datafilecopies
switch clone datafile 13 to datafilecopy
"/oradata/CDB/MYPDB1/system01.dbf";
switch clone datafile 14 to datafilecopy
"/oradata/CDB/MYPDB1/sysaux01.dbf";
switch clone datafile 15 to datafilecopy
"/oradata/CDB/MYPDB1/tbs_MYPDB1_01.dbf";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 6 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 5, 3, 6;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

datafile 13 switched to datafile copy
input datafile copy RECID=1 STAMP=890731086 file name=/oradata/CDB/MYPDB1/system01.dbf

datafile 14 switched to datafile copy
input datafile copy RECID=2 STAMP=890731086 file name=/oradata/CDB/MYPDB1/sysaux01.dbf

datafile 15 switched to datafile copy
input datafile copy RECID=3 STAMP=890731086 file name=/oradata/CDB/MYPDB1/tbs_MYPDB1_01.dbf

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 18-SEP-15
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /oratmp/CDB/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /oratmp/CDB/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /oratmp/CDB/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /oratmp/CDB/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /oratmp/Backup/bkpfull_svg_DB_on%_8907292970hqhesqh_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/oratmp/Backup/bkpfull_svg_DB_on%_8907292970hqhesqh_1_1.bkp tag=TAG20150918T084817
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 18-SEP-15

datafile 1 switched to datafile copy
input datafile copy RECID=8 STAMP=890731102 file name=/oratmp/CDB/datafile/o1_mf_system_bzqgogyy_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=9 STAMP=890731102 file name=/oratmp/CDB/datafile/o1_mf_undotbs1_bzqgogyx_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=10 STAMP=890731102 file name=/oratmp/CDB/datafile/o1_mf_sysaux_bzqgogyz_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=11 STAMP=890731102 file name=/oratmp/CDB/datafile/o1_mf_users_bzqgogz0_.dbf

contents of Memory Script:
{
# set requested point in time
set until time "to_date('18-09-2015:08:50:00', 'DD-MM-YYYY:hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 5 online";
sql clone "alter database datafile 3 online";
sql clone 'MYPDB1' "alter database datafile
13 online";
sql clone 'MYPDB1' "alter database datafile
14 online";
sql clone 'MYPDB1' "alter database datafile
15 online";
sql clone "alter database datafile 6 online";
# recover pdb
recover clone database tablespace "SYSTEM", "UNDOTBS1", "SYSAUX", "USERS" pluggable database
'MYPDB1' delete archivelog;
sql clone 'alter database open read only';
plsql <>;
plsql <>;
# shutdown clone before import
shutdown clone abort
plsql <>>;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 1 online
sql statement: alter database datafile 5 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 13 online
sql statement: alter database datafile 14 online
sql statement: alter database datafile 15 online
sql statement: alter database datafile 6 online

The Recover section:

Starting recover at 18-SEP-15
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 267 is already on disk as file /oradata/CDB/arch/1_267_890666181.dbf
archived log for thread 1 with sequence 268 is already on disk as file /oradata/CDB/arch/1_268_890666181.dbf
archived log file name=/oradata/CDB/arch/1_267_890666181.dbf thread=1 sequence=267
archived log file name=/oradata/CDB/arch/1_268_890666181.dbf thread=1 sequence=268
media recovery complete, elapsed time: 00:00:00
Finished recover at 18-SEP-15

sql statement: alter database open read only
Oracle instance shut down
Removing automatic instance

Automatic instance removed
auxiliary instance file /oratmp/CDB/datafile/o1_mf_sysaux_bzqgogyz_.dbf deleted
auxiliary instance file /oratmp/CDB/controlfile/o1_mf_bzqgo9fp_.ctl deleted
Finished recover at 18-SEP-15

This log is quite interesting… We can see that RMAN creates a temporary "clone" instance of the CDB instance in order to perform a PITR recover of our PDB ONLINE, without any stop of the main instance (imagine a production system).

In "storage" terms, this also means you have to provision disk space on a filesystem (either case by case as in the example above, or permanently for a fast recovery area), and this can climb quickly in GB depending on the size of our PDB…

RMAN> sql 'alter pluggable database MYPDB1 open resetlogs';
sql statement: alter pluggable database MYPDB1 open resetlogs

RMAN>exit;

/oratmp/CDB[CDB]:sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Sep 18 09:20:33 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> set lines 200
SQL> select DBID,NAME,OPEN_MODE,OPEN_TIME from v$pdbs;

DBID NAME OPEN_MODE OPEN_TIME
---------- ------------------------------ ---------- ---------------------------------------------------------------------------
548741631 PDB$SEED READ ONLY 17-SEP-15 07.56.27.384 PM +02:00
888812708 MYPDB1 READ WRITE 18-SEP-15 09.20.02.391 AM +02:00

SQL> alter session set container=MYPDB1 ;
Session altered.

SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS";
COUNT(*)
----------
5296

=> OK, we got our data back, restored only our PDB with no impact on the availability of the main CDB instance, and therefore de facto none on the other PDBs (had we had several), but we still had to cause some downtime on the target PDB.

We could therefore have gone through a table-only restore, since RMAN (in 12c) allows restoring a table directly!

But what are we going to do with Datapump then, you may ask? Well, keep it, because it is always useful to have a dump on the side just in case (from experience, with the joys of tape robotics…).

Note:
Careful: for Datapump on a PDB, here again you must use a TNS ALIAS, because a full Datapump export of the CDB does not export the PDBs.
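A sketch of such an export, assuming a TNS alias MYPDB1 pointing at the PDB service and a directory object DPDIR already created inside the PDB:

expdp system@MYPDB1 directory=DPDIR dumpfile=mypdb1_full.dmp logfile=mypdb1_full.log full=y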

Since we have just restored the PDB, we need to re-simulate a failure; so we take another quick backup of our PDB, this time managing the archivelogs, because in this second example I am going to use UNTIL SEQUENCE…

SQL> alter session set container=cdb$root;
Session altered.

SQL> alter system switch logfile ;
System altered.

SQL> exit

RMAN> run {
backup pluggable database MYPDB1 format '/oratmp/Backup/bkpfull_MYPDB1_on%_%t%U.bkp' ;
backup archivelog all format '/oratmp/Backup/bkpfull_MYPDB1_ARCH_%t%U.bkp' ;
}

Starting backup at 18-SEP-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/oradata/CDB/MYPDB1/sysaux01.dbf
input datafile file number=00015 name=/oradata/CDB/MYPDB1/tbs_MYPDB1_01.dbf
input datafile file number=00013 name=/oradata/CDB/MYPDB1/system01.dbf
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp tag=TAG20150918T101728 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 18-SEP-15

Starting backup at 18-SEP-15
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=313 RECID=104 STAMP=890732742
input archived log thread=1 sequence=314 RECID=105 STAMP=890732754
input archived log thread=1 sequence=315 RECID=106 STAMP=890732763
input archived log thread=1 sequence=316 RECID=107 STAMP=890732769
input archived log thread=1 sequence=317 RECID=108 STAMP=890732780
input archived log thread=1 sequence=318 RECID=109 STAMP=890732790
input archived log thread=1 sequence=319 RECID=110 STAMP=890732823
input archived log thread=1 sequence=320 RECID=111 STAMP=890732835
input archived log thread=1 sequence=321 RECID=112 STAMP=890732847
input archived log thread=1 sequence=322 RECID=113 STAMP=890732861
input archived log thread=1 sequence=323 RECID=114 STAMP=890732869
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_MYPDB1_ARCH_89073465222qhf21s_1_1.bkp tag=TAG20150918T101731 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=253 RECID=44 STAMP=890727333
input archived log thread=1 sequence=254 RECID=45 STAMP=890727345
input archived log thread=1 sequence=255 RECID=46 STAMP=890727375
input archived log thread=1 sequence=256 RECID=47 STAMP=890727387
input archived log thread=1 sequence=257 RECID=48 STAMP=890727402
input archived log thread=1 sequence=258 RECID=49 STAMP=890727413
input archived log thread=1 sequence=259 RECID=50 STAMP=890727421
input archived log thread=1 sequence=260 RECID=51 STAMP=890727428
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_MYPDB1_ARCH_89073465923qhf223_1_1.bkp tag=TAG20150918T101731 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=366 RECID=157 STAMP=890734079
input archived log thread=1 sequence=367 RECID=158 STAMP=890734088
input archived log thread=1 sequence=368 RECID=159 STAMP=890734101
input archived log thread=1 sequence=369 RECID=160 STAMP=890734116
input archived log thread=1 sequence=370 RECID=161 STAMP=890734551
input archived log thread=1 sequence=371 RECID=162 STAMP=890734589
input archived log thread=1 sequence=372 RECID=163 STAMP=890734610
input archived log thread=1 sequence=373 RECID=164 STAMP=890734651
channel ORA_DISK_1: starting piece 1 at 18-SEP-15
channel ORA_DISK_1: finished piece 1 at 18-SEP-15
piece handle=/oratmp/Backup/bkpfull_MYPDB1_ARCH_89073466624qhf22a_1_1.bkp tag=TAG20150918T101731 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 18-SEP-15

Starting Control File and SPFILE Autobackup at 18-SEP-15
piece handle=/oracle/product/12.1.0.2/dbs/c-3858261253-20150918-08 comment=NONE
Finished Control File and SPFILE Autobackup at 18-SEP-15

RMAN> exit

-> We generate one more archivelog, so that we can stop at the previous one

/home/oracle[CDB]:sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Sep 18 09:25:55 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system switch logfile ;
System altered.

SQL> select sequence#,to_char(completion_time,'DD-MM-YYYY HH24:MI:SS') from v$archived_log where to_char(completion_time,'DD-MM-YYYY HH24:MI:SS') > '18-09-2015 10:10:00';

SEQUENCE# TO_CHAR(COMPLETION_
---------- -------------------
370 18-09-2015 10:15:51
371 18-09-2015 10:16:29
372 18-09-2015 10:16:50
373 18-09-2015 10:17:31
374 18-09-2015 10:18:41

SQL> alter session set container=MYPDB1 ;
Session altered.

SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS";
COUNT(*)
----------
5296

SQL> delete from "MYPDB1"."MRS_EXECUTED_BLOCKS";
5296 rows deleted.

SQL> commit;
Commit complete.

==> So let's launch the restore of this table from RMAN, stopping at sequence 374 (exclusive), which therefore does not contain the delete operation above…

Here there is no need to close the pluggable database, since the restore is incomplete and targets a specific object directly. On the other hand, you must specify which PDB stores the tables concerned:

Note:
I deliberately did a DELETE and not a DROP TABLE, because in the case of a drop table there is no need to go through a temporary table via the generation of a dump, as I do below. Indeed, RMAN does not allow restoring the contents of a table onto itself: you have to go through a temporary table, otherwise it fails.

Of course, you will tell me that since all the rows were deleted, we could simply drop the table and restore it in full… sure, why not! But suppose only a few rows were affected: we could then repair the data from a temporary table and avoid having to restore our entire table.

RMAN> recover table MYPDB1.MRS_EXECUTED_BLOCKS of pluggable database MYPDB1
until sequence 374
auxiliary destination '/oratmp' ;

Starting recover at 18-SEP-15
using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/18/2015 11:20:05
RMAN-05063: Cannot recover specified tables
RMAN-05112: table "MYPDB1"."MRS_EXECUTED_BLOCKS" already exists

So we do:

RMAN> recover table MYPDB1.MRS_EXECUTED_BLOCKS of pluggable database MYPDB1

2> until sequence 374
3> auxiliary destination '/oratmp'
4> remap table MYPDB1.MRS_EXECUTED_BLOCKS:MRS_EXECUTED_BLOCKS_TMP;

Starting recover at 18-SEP-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='zhkk'

initialization parameters used for automatic instance:
db_name=CDB
db_unique_name=zhkk_pitr_MYPDB1_CDB
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/oracle
_system_trig_enabled=FALSE
sga_target=2560M
processes=200
db_create_file_dest=/oratmp
log_archive_dest_1='location=/oratmp'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

starting up automatic instance CDB

Oracle instance started

Total System Global Area 2684354560 bytes

Fixed Size 3714440 bytes
Variable Size 654312056 bytes
Database Buffers 2013265920 bytes
Redo Buffers 13062144 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 18-SEP-15
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=191 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /oracle/product/12.1.0.2/dbs/c-3858261253-20150918-07
channel ORA_AUX_DISK_1: piece handle=/oracle/product/12.1.0.2/dbs/c-3858261253-20150918-07 tag=TAG20150918T101630
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/oratmp/CDB/controlfile/o1_mf_bzqo91sv_.ctl
Finished restore at 18-SEP-15

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 13 to new;
set newname for clone datafile 14 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 4 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 5, 3, 13, 14;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /oratmp/CDB/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 4 to /oratmp/CDB/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 18-SEP-15
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /oratmp/CDB/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /oratmp/CDB/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /oratmp/CDB/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /oratmp/Backup/backup_full_on%_8907345731rqhf1vd_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/oratmp/Backup/backup_full_on%_8907345731rqhf1vd_1_1.bkp tag=TAG20150918T101613
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00013 to /oratmp/CDB/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00014 to /oratmp/CDB/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp tag=TAG20150918T101728
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 18-SEP-15

datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=890737881 file name=/oratmp/CDB/datafile/o1_mf_system_bzqo975w_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=7 STAMP=890737881 file name=/oratmp/CDB/datafile/o1_mf_undotbs1_bzqo975t_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=890737881 file name=/oratmp/CDB/datafile/o1_mf_sysaux_bzqo975w_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=9 STAMP=890737881 file name=/oratmp/CDB/datafile/o1_mf_system_bzqo9p7f_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=10 STAMP=890737881 file name=/oratmp/CDB/datafile/o1_mf_sysaux_bzqo9p7d_.dbf

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 5 online";
sql clone "alter database datafile 3 online";
sql clone 'MYPDB1' "alter database datafile
13 online";
sql clone 'MYPDB1' "alter database datafile
14 online";
# recover and open database read only
recover clone database tablespace "SYSTEM", "UNDOTBS1", "SYSAUX", "MYPDB1":"SYSTEM", "MYPDB1":"SYSAUX";
sql clone 'alter database open read only';
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 1 online
sql statement: alter database datafile 5 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 13 online
sql statement: alter database datafile 14 online

Starting recover at 18-SEP-15
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 371 is already on disk as file /oradata/CDB/arch/1_371_890666181.dbf
archived log for thread 1 with sequence 372 is already on disk as file /oradata/CDB/arch/1_372_890666181.dbf
archived log for thread 1 with sequence 373 is already on disk as file /oradata/CDB/arch/1_373_890666181.dbf
archived log file name=/oradata/CDB/arch/1_371_890666181.dbf thread=1 sequence=371
archived log file name=/oradata/CDB/arch/1_372_890666181.dbf thread=1 sequence=372
archived log file name=/oradata/CDB/arch/1_373_890666181.dbf thread=1 sequence=373
media recovery complete, elapsed time: 00:00:00
Finished recover at 18-SEP-15

sql statement: alter database open read only

contents of Memory Script:
{
sql clone 'alter pluggable database MYPDB1 open read only';
}
executing Memory Script

sql statement: alter pluggable database MYPDB1 open read only

contents of Memory Script:
{
sql clone "create spfile from memory";
shutdown clone immediate;
startup clone nomount;
sql clone "alter system set control_files =
''/oratmp/CDB/controlfile/o1_mf_bzqo91sv_.ctl'' comment=
''RMAN set'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 2684354560 bytes

Fixed Size 3714440 bytes
Variable Size 671089272 bytes
Database Buffers 1996488704 bytes
Redo Buffers 13062144 bytes

sql statement: alter system set control_files = ''/oratmp/CDB/controlfile/o1_mf_bzqo91sv_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 2684354560 bytes

Fixed Size 3714440 bytes
Variable Size 671089272 bytes
Database Buffers 1996488704 bytes
Redo Buffers 13062144 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile 15 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 15;

switch clone datafile all;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME

Starting restore at 18-SEP-15
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=191 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00015 to /oratmp/ZHKK_PITR_MYPDB1_CDB/datafile/o1_mf_aedifica_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp tag=TAG20150918T101728
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 18-SEP-15

datafile 15 switched to datafile copy
input datafile copy RECID=12 STAMP=890737926 file name=/oratmp/ZHKK_PITR_MYPDB1_CDB/datafile/o1_mf_aedifica_bzqoc5kb_.dbf

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# online the datafiles restored or switched
sql clone 'MYPDB1' "alter database datafile
15 online";
# recover and open resetlogs
recover clone database tablespace "MYPDB1":"MYPDB1", "SYSTEM", "UNDOTBS1", "SYSAUX", "MYPDB1":"SYSTEM", "MYPDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile 15 online

Starting recover at 18-SEP-15
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 373 is already on disk as file /oradata/CDB/arch/1_373_890666181.dbf
archived log file name=/oradata/CDB/arch/1_373_890666181.dbf thread=1 sequence=373
media recovery complete, elapsed time: 00:00:00
Finished recover at 18-SEP-15

database opened

contents of Memory Script:
{
sql clone 'alter pluggable database MYPDB1 open';
}
executing Memory Script

sql statement: alter pluggable database MYPDB1 open

A quick focus on the "datapump" part of RMAN:

contents of Memory Script:

{
# create directory for datapump import
sql 'MYPDB1' "create or replace directory
TSPITR_DIROBJ_DPDIR as ''
/oratmp''";
# create directory for datapump export
sql clone 'MYPDB1' "create or replace directory
TSPITR_DIROBJ_DPDIR as ''
/oratmp''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/oratmp''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/oratmp''

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_zhkk_Avay":
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 12 MB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/COMMENT
EXPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported "MYPDB1"."MRS_EXECUTED_BLOCKS" 9.598 MB 438313 rows
EXPDP> Master table "SYS"."TSPITR_EXP_zhkk_Avay" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_zhkk_Avay is:
EXPDP> /oratmp/tspitr_zhkk_58836.dmp
EXPDP> Job "SYS"."TSPITR_EXP_zhkk_Avay" successfully completed at Fri Sep 18 11:12:21 2015 elapsed 0 00:00:11
Export completed

The import with the table remap:

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_zhkk_wCBw" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_zhkk_wCBw":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "MYPDB1"."MRS_EXECUTED_BLOCKS_TMP" 9.598 MB 438313 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/COMMENT
IMPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_zhkk_wCBw" successfully completed at Fri Sep 18 11:12:24 2015 elapsed 0 00:00:01

And now RMAN removes its workspace:

Removing automatic instance

Automatic instance removed
auxiliary instance file /oratmp/CDB/datafile/o1_mf_temp_bzqo9tv9_.tmp deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_temp_bzqo9th7_.tmp deleted
auxiliary instance file /oratmp/ZHKK_PITR_MYPDB1_CDB/onlinelog/o1_mf_3_bzqoc7g4_.log deleted
auxiliary instance file /oratmp/ZHKK_PITR_MYPDB1_CDB/onlinelog/o1_mf_2_bzqoc7dz_.log deleted
auxiliary instance file /oratmp/ZHKK_PITR_MYPDB1_CDB/onlinelog/o1_mf_1_bzqoc7cq_.log deleted
auxiliary instance file /oratmp/ZHKK_PITR_MYPDB1_CDB/datafile/o1_mf_aedifica_bzqoc5kb_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_sysaux_bzqo9p7d_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_system_bzqo9p7f_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_sysaux_bzqo975w_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_undotbs1_bzqo975t_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_system_bzqo975w_.dbf deleted
auxiliary instance file /oratmp/CDB/controlfile/o1_mf_bzqo91sv_.ctl deleted
Finished recover at 18-SEP-15

Let's check:

SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS";
COUNT(*)
----------
0

SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS_TMP";
COUNT(*)
----------
5296

SQL> insert into "MYPDB1"."MRS_EXECUTED_BLOCKS" select * from "MYPDB1"."MRS_EXECUTED_BLOCKS_TMP" ;
5296 rows created.

SQL> commit ;

Commit complete.

SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS";
COUNT(*)
----------
5296

SQL> drop table "MYPDB1"."MRS_EXECUTED_BLOCKS_TMP";
Table dropped.

As I mentioned earlier, in the case of a direct drop of a table, the recover statement is simpler:

SQL> alter session set container=MYPDB1;
Session altered.

SQL> drop table "MYPDB1"."MRS_EXECUTED_BLOCKS";
Table dropped.

The full restore of the table:

RMAN> recover table MYPDB1.MRS_EXECUTED_BLOCKS of pluggable database MYPDB1
until sequence 374
auxiliary destination '/oratmp' ;2> 3>

Starting recover at 18-SEP-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='otsm'

initialization parameters used for automatic instance:
db_name=CDB
db_unique_name=otsm_pitr_MYPDB1_CDB
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/oracle
_system_trig_enabled=FALSE
sga_target=2560M
processes=200
db_create_file_dest=/oratmp
log_archive_dest_1='location=/oratmp'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

starting up automatic instance CDB

Oracle instance started

Total System Global Area 2684354560 bytes

Fixed Size 3714440 bytes
Variable Size 654312056 bytes
Database Buffers 2013265920 bytes
Redo Buffers 13062144 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 18-SEP-15
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=191 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /oracle/product/12.1.0.2/dbs/c-3858261253-20150918-07
channel ORA_AUX_DISK_1: piece handle=/oracle/product/12.1.0.2/dbs/c-3858261253-20150918-07 tag=TAG20150918T101630
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/oratmp/CDB/controlfile/o1_mf_bzqoxp4w_.ctl
Finished restore at 18-SEP-15

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 13 to new;
set newname for clone datafile 14 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 4 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 5, 3, 13, 14;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /oratmp/CDB/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 4 to /oratmp/CDB/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 18-SEP-15
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /oratmp/CDB/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /oratmp/CDB/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /oratmp/CDB/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /oratmp/Backup/backup_full_on%_8907345731rqhf1vd_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/oratmp/Backup/backup_full_on%_8907345731rqhf1vd_1_1.bkp tag=TAG20150918T101613
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00013 to /oratmp/CDB/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00014 to /oratmp/CDB/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp tag=TAG20150918T101728
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 18-SEP-15

datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=890738541 file name=/oratmp/CDB/datafile/o1_mf_system_bzqoxvhv_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=7 STAMP=890738541 file name=/oratmp/CDB/datafile/o1_mf_undotbs1_bzqoxvhs_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=890738541 file name=/oratmp/CDB/datafile/o1_mf_sysaux_bzqoxvhv_.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=9 STAMP=890738541 file name=/oratmp/CDB/datafile/o1_mf_system_bzqoybkd_.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=10 STAMP=890738541 file name=/oratmp/CDB/datafile/o1_mf_sysaux_bzqoybkc_.dbf

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 5 online";
sql clone "alter database datafile 3 online";
sql clone 'MYPDB1' "alter database datafile
13 online";
sql clone 'MYPDB1' "alter database datafile
14 online";
# recover and open database read only
recover clone database tablespace "SYSTEM", "UNDOTBS1", "SYSAUX", "MYPDB1":"SYSTEM", "MYPDB1":"SYSAUX";
sql clone 'alter database open read only';
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 1 online

sql statement: alter database datafile 5 online

sql statement: alter database datafile 3 online

sql statement: alter database datafile 13 online

sql statement: alter database datafile 14 online

Starting recover at 18-SEP-15
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 371 is already on disk as file /oradata/CDB/arch/1_371_890666181.dbf
archived log for thread 1 with sequence 372 is already on disk as file /oradata/CDB/arch/1_372_890666181.dbf
archived log for thread 1 with sequence 373 is already on disk as file /oradata/CDB/arch/1_373_890666181.dbf
archived log file name=/oradata/CDB/arch/1_371_890666181.dbf thread=1 sequence=371
archived log file name=/oradata/CDB/arch/1_372_890666181.dbf thread=1 sequence=372
archived log file name=/oradata/CDB/arch/1_373_890666181.dbf thread=1 sequence=373
media recovery complete, elapsed time: 00:00:00
Finished recover at 18-SEP-15

sql statement: alter database open read only

contents of Memory Script:
{
sql clone 'alter pluggable database MYPDB1 open read only';
}
executing Memory Script

sql statement: alter pluggable database MYPDB1 open read only

contents of Memory Script:
{
sql clone "create spfile from memory";
shutdown clone immediate;
startup clone nomount;
sql clone "alter system set control_files =
''/oratmp/CDB/controlfile/o1_mf_bzqoxp4w_.ctl'' comment=
''RMAN set'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 2684354560 bytes

Fixed Size 3714440 bytes
Variable Size 671089272 bytes
Database Buffers 1996488704 bytes
Redo Buffers 13062144 bytes

sql statement: alter system set control_files = ''/oratmp/CDB/controlfile/o1_mf_bzqoxp4w_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 2684354560 bytes

Fixed Size 3714440 bytes
Variable Size 671089272 bytes
Database Buffers 1996488704 bytes
Redo Buffers 13062144 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile 15 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 15;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 18-SEP-15
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=191 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00015 to /oratmp/OTSM_PITR_MYPDB1_CDB/datafile/o1_mf_aedifica_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/oratmp/Backup/bkpfull_MYPDB1_on%_89073464821qhf21o_1_1.bkp tag=TAG20150918T101728
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 18-SEP-15

datafile 15 switched to datafile copy
input datafile copy RECID=12 STAMP=890738587 file name=/oratmp/OTSM_PITR_MYPDB1_CDB/datafile/o1_mf_aedifica_bzqozsvz_.dbf

contents of Memory Script:
{
# set requested point in time
set until logseq 374 thread 1;
# online the datafiles restored or switched
sql clone 'MYPDB1' "alter database datafile
15 online";
# recover and open resetlogs
recover clone database tablespace "MYPDB1":"MYPDB1", "SYSTEM", "UNDOTBS1", "SYSAUX", "MYPDB1":"SYSTEM", "MYPDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 15 online

Starting recover at 18-SEP-15
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 373 is already on disk as file /oradata/CDB/arch/1_373_890666181.dbf
archived log file name=/oradata/CDB/arch/1_373_890666181.dbf thread=1 sequence=373
media recovery complete, elapsed time: 00:00:00
Finished recover at 18-SEP-15

database opened

contents of Memory Script:
{
sql clone 'alter pluggable database MYPDB1 open';
}
executing Memory Script

sql statement: alter pluggable database MYPDB1 open

contents of Memory Script:
{
# create directory for datapump import
sql 'MYPDB1' "create or replace directory
TSPITR_DIROBJ_DPDIR as ''
/oratmp''";
# create directory for datapump export
sql clone 'MYPDB1' "create or replace directory
TSPITR_DIROBJ_DPDIR as ''
/oratmp''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/oratmp''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/oratmp''

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_otsm_qbiz":
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 320 KB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
EXPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/TRIGGER
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported "MYPDB1"."MRS_EXECUTED_BLOCKS" 218.4 KB 5296 rows
EXPDP> Master table "SYS"."TSPITR_EXP_otsm_qbiz" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_otsm_qbiz is:
EXPDP> /oratmp/tspitr_otsm_34847.dmp
EXPDP> Job "SYS"."TSPITR_EXP_otsm_qbiz" successfully completed at Fri Sep 18 11:23:21 2015 elapsed 0 00:00:11
Export completed

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_otsm_xfFz" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_otsm_xfFz":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "MYPDB1"."MRS_EXECUTED_BLOCKS" 218.4 KB 5296 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
IMPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/TRIGGER
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_otsm_xfFz" successfully completed at Fri Sep 18 11:23:24 2015 elapsed 0 00:00:01
Import completed

Removing automatic instance
Automatic instance removed
auxiliary instance file /oratmp/CDB/datafile/o1_mf_temp_bzqoyh6m_.tmp deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_temp_bzqoygsp_.tmp deleted
auxiliary instance file /oratmp/OTSM_PITR_MYPDB1_CDB/onlinelog/o1_mf_3_bzqozvs0_.log deleted
auxiliary instance file /oratmp/OTSM_PITR_MYPDB1_CDB/onlinelog/o1_mf_2_bzqozvqz_.log deleted
auxiliary instance file /oratmp/OTSM_PITR_MYPDB1_CDB/onlinelog/o1_mf_1_bzqozvpt_.log deleted
auxiliary instance file /oratmp/OTSM_PITR_MYPDB1_CDB/datafile/o1_mf_aedifica_bzqozsvz_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_sysaux_bzqoybkc_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_system_bzqoybkd_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_sysaux_bzqoxvhv_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_undotbs1_bzqoxvhs_.dbf deleted
auxiliary instance file /oratmp/CDB/datafile/o1_mf_system_bzqoxvhv_.dbf deleted
auxiliary instance file /oratmp/CDB/controlfile/o1_mf_bzqoxp4w_.ctl deleted
auxiliary instance file tspitr_otsm_34847.dmp deleted
Finished recover at 18-SEP-15

And voilà, we got our complete table back:

SQL> select count(*) from "MYPDB1"."MRS_EXECUTED_BLOCKS";
COUNT(*)
----------
5296
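
For the record, the 12c command behind a log like this one is RECOVER TABLE; the sketch below is reconstructed from the log above (point-in-time recovery up to log sequence 374, auxiliary instance and Data Pump dump under /oratmp) and may differ from the exact command that was run:

RMAN> RECOVER TABLE "MYPDB1"."MRS_EXECUTED_BLOCKS" OF PLUGGABLE DATABASE MYPDB1
  UNTIL SEQUENCE 374
  AUXILIARY DESTINATION '/oratmp'
  DATAPUMP DESTINATION '/oratmp';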

It goes without saying that there are plenty of other tests to run with our friend RMAN, but this already gives a good idea of what 12c can do!

Micka

Oracle RAC 11gR2 on Linux: Adding a node to the cluster

Installing a RAC on Linux is nice, but let's take it a step further by adding a new node to the cluster 🙂

For this article I'm using 3 VMs under VirtualBox, and I'll assume a two-node RAC is already up and running.

After making a full clone of one of the cluster nodes, the new virtual machine has to be configured, on both the network side and the storage side.

OS side:
-> Change the machine name: since it was cloned, its hostname must be updated.
Edit the file:

/etc/sysconfig/network
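
On this EL6-style system, the file would contain something along these lines (the hostname is this lab's; adjust to yours):

NETWORKING=yes
HOSTNAME=rac3.localdomain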

Network side:
-> update the settings of the network interfaces in use, namely MAC address, IP, etc.

Edit the files:

/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/sysconfig/network-scripts/ifcfg-eth2
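
As a sketch, the public-interface file on the new node could look like this (the IP comes from this lab's addressing plan; HWADDR must match the clone's regenerated MAC, and note that udev may rename cloned NICs, which is why the cluvfy output further down shows rac3's interfaces as eth4/eth5):

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.0.16        # rac3 public IP, illustrative
NETMASK=255.255.255.0
HWADDR=08:00:27:1F:18:12 # must match the new VM's NIC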

-> On a VM, /etc/hosts must be populated, since we are not using DNS resolution. The IPs for the new node ("public" IP / private / VIP) must therefore be added.

Example:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.0.10 rac1.localdomain rac1
10.10.0.11 rac2.localdomain rac2
10.10.0.16 rac3.localdomain rac3

10.10.0.12 rac1v.localdomain rac1v
10.10.0.13 rac2v.localdomain rac2v
10.10.0.17 rac3v.localdomain rac3v

10.10.0.14 rac-scan.localdomain rac-scan
10.10.0.15 rac-scan.localdomain rac-scan
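
Once the entries are in place on every node, a quick reachability sanity check can be run from each of them (a sketch):

for h in rac1 rac2 rac3; do ping -c 1 -W 1 $h; done   # each public name should answer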

RAC configuration side:

-> Set up SSH equivalence between the new node and the others, and vice versa. The expected result is that every node can log in to every other node without a password from the "oracle" account.

Example:

[oracle@rac3 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub rac2

oracle@rac2's password:

Now try logging into the machine, with "ssh 'rac2'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[oracle@rac3 ~]$ ssh rac1
 Last login: Fri Feb 20 18:17:00 2015 from rac2.localdomain
 [oracle@rac1 ~]$
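
Note: ssh-copy-id assumes the key pair already exists; on a freshly prepared node it may need to be generated first, e.g.:

[oracle@rac3 ~]$ ssh-keygen -t rsa   # accept the defaults; leave the passphrase empty for passwordless equivalence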

-> If applicable, update the files used to load the Oracle environments. In my case I have two files to edit: one to load the GRID HOME environment, the other for the RDBMS HOME.

[oracle@rac3 ~]$ cat .dbenv
export ORACLE_HOME=/oracle/product/db
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export ORACLE_SID=ORCL3

[oracle@rac3 ~]$ cat .gridenv
export ORACLE_HOME=/oracle/product/grid
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export ORACLE_SID=+ASM3

-> Remove the GRID HOME and RDBMS HOME software trees on the new node; they will be copied over during the ADDNODE procedure.

In my case:

rm -rf /oracle/product/db
rm -rf /oracle/product/grid

Starting the node addition:

— Prerequisite checks:

On one of the two active nodes, check the state of the future node, i.e. whether it is ready to be used and integrated into the cluster:

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ . .gridenv
[oracle@rac1 ~]$ which cluvfy
/oracle/product/grid/bin/cluvfy
[oracle@rac1 ~]$ cluvfy stage -pre nodeadd -n rac3 -verbose

Note: Given how large cluvfy's verbose output is, I recommend redirecting the command above into a dedicated log file so it can be studied, especially if problems are identified.
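
For example (the log path is just an illustration):

[oracle@rac1 ~]$ cluvfy stage -pre nodeadd -n rac3 -verbose > /tmp/cluvfy_pre_nodeadd_rac3.log 2>&1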

The output log in my example case:

Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "rac1"
 Destination Node Reachable?
 ------------------------------------ ------------------------
 rac3 yes
 Result: Node reachability check passed from node "rac1"

Checking user equivalence...

Check: User equivalence for user "oracle"
 Node Name Status
 ------------------------------------ ------------------------
 rac3 passed
 Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
 Node Name Status
 ------------------------------------ ------------------------
 rac1 passed
 rac2 passed
 rac3 passed

Verification of the hosts config file successful

Interface information for node "rac1"
 Name IP Address Subnet Gateway Def. Gateway HW Address MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0 10.10.0.10 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:2F 1500
 eth0 10.10.0.12 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:2F 1500
 eth0 10.10.0.15 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:2F 1500
 eth1 192.168.10.1 192.168.10.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:49 1500
 eth1 169.254.237.114 169.254.0.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:49 1500

Interface information for node "rac2"
 Name IP Address Subnet Gateway Def. Gateway HW Address MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0 10.10.0.11 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:01 1500
 eth0 10.10.0.13 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:01 1500
 eth0 10.10.0.14 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:01 1500
 eth1 192.168.10.2 192.168.10.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:02 1500
 eth1 169.254.183.81 169.254.0.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:02 1500

Interface information for node "rac3"
 Name IP Address Subnet Gateway Def. Gateway HW Address MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth4 10.10.0.16 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:1F:18:12 1500
 eth5 192.168.10.3 192.168.10.0 0.0.0.0 192.168.56.254 08:00:27:AE:45:66 1500

ERROR:
 PRVG-11050 : No matching interfaces "eth0" for subnet "10.10.0.0" on nodes "rac3"

Check: Node connectivity for interface "eth0"
 Source Destination Connected?
 ------------------------------ ------------------------------ ----------------
 rac1[10.10.0.10] rac1[10.10.0.12] yes
 rac1[10.10.0.10] rac1[10.10.0.15] yes
 rac1[10.10.0.10] rac2[10.10.0.11] yes
 rac1[10.10.0.10] rac2[10.10.0.13] yes
 rac1[10.10.0.10] rac2[10.10.0.14] yes
 rac1[10.10.0.10] rac3[10.10.0.16] yes
 rac1[10.10.0.12] rac1[10.10.0.15] yes
 rac1[10.10.0.12] rac2[10.10.0.11] yes
 rac1[10.10.0.12] rac2[10.10.0.13] yes
 rac1[10.10.0.12] rac2[10.10.0.14] yes
 rac1[10.10.0.12] rac3[10.10.0.16] yes
 rac1[10.10.0.15] rac2[10.10.0.11] yes
 rac1[10.10.0.15] rac2[10.10.0.13] yes
 rac1[10.10.0.15] rac2[10.10.0.14] yes
 rac1[10.10.0.15] rac3[10.10.0.16] yes
 rac2[10.10.0.11] rac2[10.10.0.13] yes
 rac2[10.10.0.11] rac2[10.10.0.14] yes
 rac2[10.10.0.11] rac3[10.10.0.16] yes
 rac2[10.10.0.13] rac2[10.10.0.14] yes
 rac2[10.10.0.13] rac3[10.10.0.16] yes
 rac2[10.10.0.14] rac3[10.10.0.16] yes
 Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "10.10.0.0"
 Source Destination Connected?
 ------------------------------ ------------------------------ ----------------
 rac1:10.10.0.10 rac1:10.10.0.12 passed
 rac1:10.10.0.10 rac1:10.10.0.15 passed
 rac1:10.10.0.10 rac2:10.10.0.11 passed
 rac1:10.10.0.10 rac2:10.10.0.13 passed
 rac1:10.10.0.10 rac2:10.10.0.14 passed
 rac1:10.10.0.10 rac3:10.10.0.16 passed
 Result: TCP connectivity check passed for subnet "10.10.0.0"

Checking subnet mask consistency...
 Subnet mask consistency check passed for subnet "10.10.0.0".
 Subnet mask consistency check passed.

Result: Node connectivity check failed

Checking multicast communication...

Checking subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0"...
 Check of subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0" passed.
 Check of multicast communication passed.

Checking CRS integrity...

Clusterware version consistency passed
 The Oracle Clusterware is healthy on node "rac1"
 The Oracle Clusterware is healthy on node "rac2"

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
 PRVG-1013 : The path "/oracle/product/grid" does not exist or cannot be created on the nodes to be added
 Result: Shared resources check for node addition failed

Checking node connectivity...

Checking hosts config file...
 Node Name Status
 ------------------------------------ ------------------------
 rac1 passed
 rac2 passed
 rac3 passed

Verification of the hosts config file successful

Interface information for node "rac1"
 Name IP Address Subnet Gateway Def. Gateway HW Address MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0 10.10.0.10 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:2F 1500
 eth0 10.10.0.12 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:2F 1500
 eth0 10.10.0.15 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:2F 1500
 eth1 192.168.10.1 192.168.10.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:49 1500
 eth1 169.254.237.114 169.254.0.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:49 1500

Interface information for node "rac2"
 Name IP Address Subnet Gateway Def. Gateway HW Address MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0 10.10.0.11 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:01 1500
 eth0 10.10.0.13 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:01 1500
 eth0 10.10.0.14 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:C4:FD:01 1500
 eth1 192.168.10.2 192.168.10.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:02 1500
 eth1 169.254.183.81 169.254.0.0 0.0.0.0 192.168.56.254 08:00:27:47:E3:02 1500

Interface information for node "rac3"
 Name IP Address Subnet Gateway Def. Gateway HW Address MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth4 10.10.0.16 10.10.0.0 0.0.0.0 192.168.56.254 08:00:27:1F:18:12 1500
 eth5 192.168.10.3 192.168.10.0 0.0.0.0 192.168.56.254 08:00:27:AE:45:66 1500

ERROR:
 PRVG-11050 : No matching interfaces "eth0" for subnet "10.10.0.0" on nodes "rac3"

ERROR:
 PRVG-11050 : No matching interfaces "eth1" for subnet "192.168.10.0" on nodes "rac3"

Check: Node connectivity for interface "eth0"
 Source Destination Connected?
 ------------------------------ ------------------------------ ----------------
 rac1[10.10.0.10] rac1[10.10.0.12] yes
 rac1[10.10.0.10] rac1[10.10.0.15] yes
 rac1[10.10.0.10] rac2[10.10.0.11] yes
 rac1[10.10.0.10] rac2[10.10.0.13] yes
 rac1[10.10.0.10] rac2[10.10.0.14] yes
 rac1[10.10.0.10] rac3[10.10.0.16] yes
 rac1[10.10.0.12] rac1[10.10.0.15] yes
 rac1[10.10.0.12] rac2[10.10.0.11] yes
 rac1[10.10.0.12] rac2[10.10.0.13] yes
 rac1[10.10.0.12] rac2[10.10.0.14] yes
 rac1[10.10.0.12] rac3[10.10.0.16] yes
 rac1[10.10.0.15] rac2[10.10.0.11] yes
 rac1[10.10.0.15] rac2[10.10.0.13] yes
 rac1[10.10.0.15] rac2[10.10.0.14] yes
 rac1[10.10.0.15] rac3[10.10.0.16] yes
 rac2[10.10.0.11] rac2[10.10.0.13] yes
 rac2[10.10.0.11] rac2[10.10.0.14] yes
 rac2[10.10.0.11] rac3[10.10.0.16] yes
 rac2[10.10.0.13] rac2[10.10.0.14] yes
 rac2[10.10.0.13] rac3[10.10.0.16] yes
 rac2[10.10.0.14] rac3[10.10.0.16] yes
 Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "10.10.0.0"
 Source Destination Connected?
 ------------------------------ ------------------------------ ----------------
 rac1:10.10.0.10 rac1:10.10.0.12 passed
 rac1:10.10.0.10 rac1:10.10.0.15 passed
 rac1:10.10.0.10 rac2:10.10.0.11 passed
 rac1:10.10.0.10 rac2:10.10.0.13 passed
 rac1:10.10.0.10 rac2:10.10.0.14 passed
 rac1:10.10.0.10 rac3:10.10.0.16 passed
 Result: TCP connectivity check passed for subnet "10.10.0.0"

Check: Node connectivity for interface "eth1"
 Source Destination Connected?
 ------------------------------ ------------------------------ ----------------
 rac1[192.168.10.1] rac2[192.168.10.2] yes
 rac1[192.168.10.1] rac3[192.168.10.3] yes
 rac2[192.168.10.2] rac3[192.168.10.3] yes
 Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "192.168.10.0"
 Source Destination Connected?
 ------------------------------ ------------------------------ ----------------
 rac1:192.168.10.1 rac2:192.168.10.2 passed
 rac1:192.168.10.1 rac3:192.168.10.3 passed
 Result: TCP connectivity check passed for subnet "192.168.10.0"

Checking subnet mask consistency...
 Subnet mask consistency check passed for subnet "10.10.0.0".
 Subnet mask consistency check passed for subnet "192.168.10.0".
 Subnet mask consistency check passed.

Result: Node connectivity check failed

Checking multicast communication...

Checking subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0"...
 Check of subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...
 Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 3.8675GB (4055420.0KB) 1.5GB (1572864.0KB) passed
 rac3 3.8675GB (4055420.0KB) 1.5GB (1572864.0KB) passed
 Result: Total memory check passed

Check: Available memory
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 2.5106GB (2632604.0KB) 50MB (51200.0KB) passed
 rac3 3.6186GB (3794392.0KB) 50MB (51200.0KB) passed
 Result: Available memory check passed

Check: Swap space
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 3.9375GB (4128764.0KB) 3.8675GB (4055420.0KB) passed
 rac3 3.9375GB (4128764.0KB) 3.8675GB (4055420.0KB) passed
 Result: Swap space check passed

Check: Free disk space for "rac1:/oracle/product/grid,rac1:/tmp"
 Path Node Name Mount point Available Required Status
 ---------------- ------------ ------------ ------------ ------------ ------------
 /oracle/product/grid rac1 / 12.7949GB 7.5GB passed
 /tmp rac1 / 12.7949GB 7.5GB passed
 Result: Free disk space check passed for "rac1:/oracle/product/grid,rac1:/tmp"

Check: Free disk space for "rac3:/oracle/product/grid,rac3:/tmp"
 Path Node Name Mount point Available Required Status
 ---------------- ------------ ------------ ------------ ------------ ------------
 /oracle/product/grid rac3 / 15.0664GB 7.5GB passed
 /tmp rac3 / 15.0664GB 7.5GB passed
 Result: Free disk space check passed for "rac3:/oracle/product/grid,rac3:/tmp"

Check: User existence for "oracle"
 Node Name Status Comment
 ------------ ------------------------ ------------------------
 rac1 passed exists(54321)
 rac3 passed exists(54321)

Checking for multiple users with UID value 54321
 Result: Check for multiple users with UID value 54321 passed
 Result: User existence check passed for "oracle"

Check: Run level
 Node Name run level Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 5 3,5 passed
 rac3 5 3,5 passed
 Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
 Node Name Type Available Required Status
 ---------------- ------------ ------------ ------------ ----------------
 rac1 hard 65536 65536 passed
 rac3 hard 65536 65536 passed
 Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
 Node Name Type Available Required Status
 ---------------- ------------ ------------ ------------ ----------------
 rac1 soft 1024 1024 passed
 rac3 soft 1024 1024 passed
 Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
 Node Name Type Available Required Status
 ---------------- ------------ ------------ ------------ ----------------
 rac1 hard 16384 16384 passed
 rac3 hard 16384 16384 passed
 Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
 Node Name Type Available Required Status
 ---------------- ------------ ------------ ------------ ----------------
 rac1 soft 16384 2047 passed
 rac3 soft 16384 2047 passed
 Result: Soft limits check passed for "maximum user processes"

 Check: System architecture
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 x86_64 x86_64 passed
 rac3 x86_64 x86_64 passed
 Result: System architecture check passed

Check: Kernel version
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 2.6.39-400.17.1.el6uek.x86_64 2.6.32 passed
 rac3 2.6.39-400.17.1.el6uek.x86_64 2.6.32 passed
 Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
 Node Name Current Configured Required Status Comment
 ---------------- ------------ ------------ ------------ ------------ ------------
 rac1 250 250 250 passed
 rac3 250 250 250 passed
 Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
 Node Name Current Configured Required Status Comment
 ---------------- ------------ ------------ ------------ ------------ ------------
 rac1 32000 32000 32000 passed
 rac3 32000 32000 32000 passed
 Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
 Node Name Current Configured Required Status Comment
 ---------------- ------------ ------------ ------------ ------------ ------------
 rac1 100 100 100 passed
 rac3 100 100 100 passed
 Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
 Node Name Current Configured Required Status Comment
 ---------------- ------------ ------------ ------------ ------------ ------------
 rac1 128 128 128 passed
 rac3 128 128 128 passed
 Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
 Node Name Current Configured Required Status Comment
 ---------------- ------------ ------------ ------------ ------------ ------------
 rac1 4398046511104 4398046511104 2076375040 passed
 rac3 4398046511104 4398046511104 2076375040 passed
 Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
 Node Name Current Configured Required Status Comment
 ---------------- ------------ ------------ ------------ ------------ ------------
 rac1 4096 4096 4096 passed
 rac3 4096 4096 4096 passed
 Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "aio-max-nr"
 Node Name Current Configured Required Status Comment
 ---------------- ------------ ------------ ------------ ------------ ------------
 rac1 1048576 1048576 1048576 passed
 rac3 1048576 1048576 1048576 passed
 Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "binutils"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed
 rac3 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2 passed
 Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
 rac3 compat-libcap1-1.10-1 compat-libcap1-1.10 passed
 Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
 rac3 compat-libstdc++-33(x86_64)-3.2.3-69.el6 compat-libstdc++-33(x86_64)-3.2.3 passed
 Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed
 rac3 libgcc(x86_64)-4.4.7-3.el6 libgcc(x86_64)-4.4.4 passed
 Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed
 rac3 libstdc++(x86_64)-4.4.7-3.el6 libstdc++(x86_64)-4.4.4 passed
 Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
 rac3 libstdc++-devel(x86_64)-4.4.7-3.el6 libstdc++-devel(x86_64)-4.4.4 passed
 Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
 rac3 sysstat-9.0.4-20.el6 sysstat-9.0.4 passed
 Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 gcc-4.4.7-3.el6 gcc-4.4.4 passed
 rac3 gcc-4.4.7-3.el6 gcc-4.4.4 passed
 Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed
 rac3 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.4 passed
 Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 ksh-20120801-21.el6.1 ksh-20100621 passed
 rac3 ksh-20120801-21.el6.1 ksh-20100621 passed
 Result: Package existence check passed for "ksh"

Check: Package existence for "make"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 make-3.81-20.el6 make-3.81 passed
 rac3 make-3.81-20.el6 make-3.81 passed
 Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed
 rac3 glibc(x86_64)-2.12-1.107.el6 glibc(x86_64)-2.12 passed
 Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed
 rac3 glibc-devel(x86_64)-2.12-1.107.el6 glibc-devel(x86_64)-2.12 passed
 Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
 rac3 libaio(x86_64)-0.3.107-10.el6 libaio(x86_64)-0.3.107 passed
 Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
 Node Name Available Required Status
 ------------ ------------------------ ------------------------ ----------
 rac1 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
 rac3 libaio-devel(x86_64)-0.3.107-10.el6 libaio-devel(x86_64)-0.3.107 passed
 Result: Package existence check passed for "libaio-devel(x86_64)"

Checking for multiple users with UID value 0
 Result: Check for multiple users with UID value 0 passed

Check: Current group ID
 Result: Current group ID check passed

Starting check for consistency of primary group of root user
 Node Name Status
 ------------------------------------ ------------------------
 rac1 passed
 rac3 passed

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
 Check: Time zone consistency
 Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
 Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
 No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking to make sure user "oracle" is not in "root" group
 Node Name Status Comment
 ------------ ------------------------ ------------------------
 rac1 passed does not exist
 rac3 passed does not exist
 Result: User "oracle" is not part of "root" group. Check passed
 Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
 File "/etc/resolv.conf" does not have both domain and search entries defined
 Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
 domain entry in file "/etc/resolv.conf" is consistent across nodes
 Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
 search entry in file "/etc/resolv.conf" is consistent across nodes
 Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
 All nodes have one search entry defined in file "/etc/resolv.conf"
 Checking all nodes to make sure that search entry is "localdomain" as found on node "rac1"
 All nodes of the cluster have same value for 'search'
 Checking DNS response time for an unreachable node
 Node Name Status
 ------------------------------------ ------------------------
 rac1 failed
 rac3 failed
 PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1,rac3

File "/etc/resolv.conf" is not consistent across nodes
 Pre-check for node addition was unsuccessful on all the nodes.

My reading of the log is that the node is OK: I disregard the RESOLV.CONF errors, since DNS is not used in the current configuration (no DNS management), and that error was already present when the cluster was first built on the other two nodes.

So I move on to installing Grid Infrastructure on the future node.

For this part we will use the "addNode.sh" script, which automatically installs the Grid Infrastructure software on our new node. Be careful though: this script only works if, and only if, all the prerequisites are OK.

In my case, with the RESOLV.conf-related errors, it is bound to fail:

[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3v}"
..
..
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac1,rac3

File "/etc/resolv.conf" is not consistent across nodes
Pre-check for node addition was unsuccessful on all the nodes.
[oracle@rac1 bin]$

Let's be sneaky, then, and export the IGNORE_PREADDNODE_CHECKS variable to Y so the run doesn't abort:

[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3v}"
Starting Oracle Universal Installer...

Performing tests to see whether nodes rac2,rac3 are available
 ............................................................... 100% Done.

-----------------------------------------------------------------------------
 Cluster Node Addition Summary
 Global Settings
 Source: /oracle/product/grid
 New Nodes
 Space Requirements
 New Nodes
 rac3
 /: Required 6.55GB : Available 24.41GB
 Installed Products
 Product Names
 Oracle Grid Infrastructure 11.2.0.3.0
 Sun JDK 1.5.0.30.03
 Installer SDK Component 11.2.0.3.0
 Oracle One-Off Patch Installer 11.2.0.1.7
 Oracle Universal Installer 11.2.0.3.0
 Oracle USM Deconfiguration 11.2.0.3.0
 Oracle Configuration Manager Deconfiguration 10.3.1.0.0
 Enterprise Manager Common Core Files 10.2.0.4.4
 Oracle DBCA Deconfiguration 11.2.0.3.0
 Oracle RAC Deconfiguration 11.2.0.3.0
 Oracle Quality of Service Management (Server) 11.2.0.3.0
 Installation Plugin Files 11.2.0.3.0
 Universal Storage Manager Files 11.2.0.3.0
 Oracle Text Required Support Files 11.2.0.3.0
 Automatic Storage Management Assistant 11.2.0.3.0
 Oracle Database 11g Multimedia Files 11.2.0.3.0
 Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
 Oracle Globalization Support 11.2.0.3.0
 Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
 Oracle Core Required Support Files 11.2.0.3.0
 Bali Share 1.1.18.0.0
 Oracle Database Deconfiguration 11.2.0.3.0
 Oracle Quality of Service Management (Client) 11.2.0.3.0
 Expat libraries 2.0.1.0.1
 Oracle Containers for Java 11.2.0.3.0
 Perl Modules 5.10.0.0.1
 Secure Socket Layer 11.2.0.3.0
 Oracle JDBC/OCI Instant Client 11.2.0.3.0
 Oracle Multimedia Client Option 11.2.0.3.0
 LDAP Required Support Files 11.2.0.3.0
 Character Set Migration Utility 11.2.0.3.0
 Perl Interpreter 5.10.0.0.2
 PL/SQL Embedded Gateway 11.2.0.3.0
 OLAP SQL Scripts 11.2.0.3.0
 Database SQL Scripts 11.2.0.3.0
 Oracle Extended Windowing Toolkit 3.4.47.0.0
 SSL Required Support Files for InstantClient 11.2.0.3.0
 SQL*Plus Files for Instant Client 11.2.0.3.0
 Oracle Net Required Support Files 11.2.0.3.0
 Oracle Database User Interface 2.2.13.0.0
 RDBMS Required Support Files for Instant Client 11.2.0.3.0
 RDBMS Required Support Files Runtime 11.2.0.3.0
 XML Parser for Java 11.2.0.3.0
 Oracle Security Developer Tools 11.2.0.3.0
 Oracle Wallet Manager 11.2.0.3.0
 Enterprise Manager plugin Common Files 11.2.0.3.0
 Platform Required Support Files 11.2.0.3.0
 Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
 RDBMS Required Support Files 11.2.0.3.0
 Oracle Ice Browser 5.2.3.6.0
 Oracle Help For Java 4.2.9.0.0
 Enterprise Manager Common Files 10.2.0.4.3
 Deinstallation Tool 11.2.0.3.0
 Oracle Java Client 11.2.0.3.0
 Cluster Verification Utility Files 11.2.0.3.0
 Oracle Notification Service (eONS) 11.2.0.3.0
 Oracle LDAP administration 11.2.0.3.0
 Cluster Verification Utility Common Files 11.2.0.3.0
 Oracle Clusterware RDBMS Files 11.2.0.3.0
 Oracle Locale Builder 11.2.0.3.0
 Oracle Globalization Support 11.2.0.3.0
 Buildtools Common Files 11.2.0.3.0
 Oracle RAC Required Support Files-HAS 11.2.0.3.0
 SQL*Plus Required Support Files 11.2.0.3.0
 XDK Required Support Files 11.2.0.3.0
 Agent Required Support Files 10.2.0.4.3
 Parser Generator Required Support Files 11.2.0.3.0
 Precompiler Required Support Files 11.2.0.3.0
 Installation Common Files 11.2.0.3.0
 Required Support Files 11.2.0.3.0
 Oracle JDBC/THIN Interfaces 11.2.0.3.0
 Oracle Multimedia Locator 11.2.0.3.0
 Oracle Multimedia 11.2.0.3.0
 HAS Common Files 11.2.0.3.0
 Assistant Common Files 11.2.0.3.0
 PL/SQL 11.2.0.3.0
 HAS Files for DB 11.2.0.3.0
 Oracle Recovery Manager 11.2.0.3.0
 Oracle Database Utilities 11.2.0.3.0
 Oracle Notification Service 11.2.0.3.0
 SQL*Plus 11.2.0.3.0
 Oracle Netca Client 11.2.0.3.0
 Oracle Net 11.2.0.3.0
 Oracle JVM 11.2.0.3.0
 Oracle Internet Directory Client 11.2.0.3.0
 Oracle Net Listener 11.2.0.3.0
 Cluster Ready Services Files 11.2.0.3.0
 Oracle Database 11g 11.2.0.3.0
 -----------------------------------------------------------------------------

Instantiating scripts for add node (Thursday, October 8, 2015 3:30:48 PM CEST)
 . 1% Done.
 Instantiation of add node scripts complete

Copying to remote nodes (Thursday, October 8, 2015 3:31:05 PM CEST)
 ............................................................................................... 96% Done.
 Home copied to new nodes

Saving inventory on nodes (Thursday, October 8, 2015 3:41:24 PM CEST)
 . 100% Done.
 Save inventory complete
 WARNING:
 The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
 /oracle/product/grid/root.sh #On nodes rac3
 To execute the configuration scripts:
 1. Open a terminal window
 2. Log in as "root"
 3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/product/grid was successful.
 Please check '/tmp/silentInstall.log' for more details.

As the log above indicates, root.sh must now be run on the new node:

[root@rac3 grid]# ./root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME= /oracle/product/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
 The contents of "dbhome" have not changed. No need to overwrite.
 The contents of "oraenv" have not changed. No need to overwrite.
 The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
 Database Configuration Assistant when a database is created
 Finished running generic part of root script.
 Now product-specific root actions will be performed.
 Using configuration parameter file: /oracle/product/grid/crs/install/crsconfig_params
 User ignored Prerequisites during installation
 clscfg: EXISTING configuration version 5 detected.
 clscfg: version 5 is 11g Release 2.
 Successfully accumulated necessary OCR keys.
 Creating OCR keys for user 'root', privgrp 'root'..
 Operation successful.
 Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Post-installation steps:
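
Before looking at the resources, a quick way to confirm that rac3 is now a cluster member is olsnodes (a sketch; -n shows the node numbers, -s the Active/Inactive status):

[root@rac3 grid]# /oracle/product/grid/bin/olsnodes -n -s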

On the new node, check the cluster components:

[root@rac3 grid]# /oracle/product/grid/bin/crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ONLINE ONLINE rac3 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
OFFLINE OFFLINE rac3
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.mrs.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.rac3.vip
1 ONLINE ONLINE rac3
ora.scan1.vip
1 ONLINE ONLINE rac2
ora.scan2.vip
1 ONLINE ONLINE rac1

Not bad at all, except that there is no SCAN listener for our rac3 ...

[root@rac3 grid]# su - oracle
[oracle@rac3 ~]$ srvctl status nodeapps
VIP rac1v is enabled
VIP rac1v is running on node: rac1
VIP rac2v is enabled
VIP rac2v is running on node: rac2
VIP rac3v is enabled
VIP rac3v is running on node: rac3
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
Network is running on node: rac3
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
GSD is not running on node: rac3
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
ONS daemon is running on node: rac3
[oracle@rac3 ~]$ srvctl config nodeapps
 Network exists: 1/10.10.0.0/255.255.255.0/eth0, type static
 VIP exists: /rac1v/10.10.0.12/10.10.0.0/255.255.255.0/eth0, hosting node rac1
 VIP exists: /rac2v/10.10.0.13/10.10.0.0/255.255.255.0/eth0, hosting node rac2
 VIP exists: /rac3v/10.10.0.17/10.10.0.0/255.255.255.0/eth0, hosting node rac3
 GSD exists
 ONS exists: Local port 6100, remote port 6200, EM port 2016

A quick overall check:

[oracle@rac3 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

SCAN listeners:

[oracle@rac3 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node rac1
[oracle@rac3 ~]$ srvctl config scan
SCAN name: rac-scan, Network: 1/10.10.0.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan/10.10.0.14
SCAN VIP name: scan2, IP: /rac-scan/10.10.0.15

==> So no SCAN configuration shows up for node 3... which is perfectly normal, since our SCAN only has two IPs.

Let's try adding an IP to the SCAN, and then sorting out the SCAN listener:

1 – We stop everything:

[oracle@rac3 ~]$ srvctl stop scan_listener
[oracle@rac3 ~]$ srvctl stop scan
[oracle@rac3 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
SCAN VIP scan2 is enabled
SCAN VIP scan2 is not running

[oracle@rac3 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is not running

OK, at this point /etc/hosts must be edited to add an IP to the SCAN. With DNS-based resolution, you would simply have an IP added to the DNS in use.

10.10.0.14 rac-scan.localdomain rac-scan
10.10.0.15 rac-scan.localdomain rac-scan
10.10.0.19 rac-scan.localdomain rac-scan

Of course, this must be done on every node of the RAC.

From the root account:

[root@rac3 bin]# pwd
/oracle/product/grid/bin
[root@rac3 bin]# ./srvctl modify scan -n rac-scan
[root@rac3 bin]# ./srvctl config scan
SCAN name: rac-scan, Network: 1/10.10.0.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan/10.10.0.14
SCAN VIP name: scan2, IP: /rac-scan/10.10.0.15
SCAN VIP name: scan3, IP: /rac-scan/10.10.0.19

[root@rac3 bin]# ./srvctl modify scan_listener -u
[root@rac3 bin]# ./srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

==> All of this looks pretty good!! Let's start everything back up and check!
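
The start commands themselves did not make it into the capture; they would be along these lines:

[root@rac3 bin]# ./srvctl start scan
[root@rac3 bin]# ./srvctl start scan_listener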

[root@rac3 bin]#./srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac3
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node rac1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node rac2

[root@rac3 bin]# ./srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac3
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node rac1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node rac2
[root@rac3 bin]#

[root@rac3 bin]# ./crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ONLINE ONLINE rac3 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
OFFLINE OFFLINE rac3
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac3
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac2
ora.cvu
1 ONLINE ONLINE rac1
ora.mrs.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.rac3.vip
1 ONLINE ONLINE rac3
ora.scan1.vip
1 ONLINE ONLINE rac3
ora.scan2.vip
1 ONLINE ONLINE rac1
ora.scan3.vip
1 ONLINE ONLINE rac2

Much better!!

Now let's move on to installing the RDBMS HOME, going back to a session on one of the two active nodes (for convenience I reuse the same node, RAC1):

As with the addNode of the Grid home, IGNORE_PREADDNODE_CHECKS=Y must be used so the procedure doesn't abort.

[oracle@rac1 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3v}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 4027 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes rac2,rac3 are available
............................................................... 100% Done.

........
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /oracle/product/db
New Nodes
Space Requirements
New Nodes
rac3
/: Required 4.87GB : Available 19.05GB
Installed Products
Product Names
Oracle Database 11g 11.2.0.3.0
Sun JDK 1.5.0.30.03
Installer SDK Component 11.2.0.3.0
Oracle One-Off Patch Installer 11.2.0.1.7
Oracle Universal Installer 11.2.0.3.0
Oracle USM Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Oracle DBCA Deconfiguration 11.2.0.3.0
Oracle RAC Deconfiguration 11.2.0.3.0
Oracle Database Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Client 10.3.2.1.0
Oracle Configuration Manager 10.3.5.0.1
Oracle ODBC Driverfor Instant Client 11.2.0.3.0
LDAP Required Support Files 11.2.0.3.0
SSL Required Support Files for InstantClient 11.2.0.3.0
Bali Share 1.1.18.0.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
Platform Required Support Files 11.2.0.3.0
Oracle RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus Required Support Files 11.2.0.3.0
Oracle Display Fonts 9.0.2.0.0
Oracle Ice Browser 5.2.3.6.0
Oracle JDBC Server Support Package 11.2.0.3.0
Oracle SQL Developer 11.2.0.3.0
Oracle Application Express 11.2.0.3.0
XDK Required Support Files 11.2.0.3.0
RDBMS Required Support Files for Instant Client 11.2.0.3.0
SQLJ Runtime 11.2.0.3.0
Database Workspace Manager 11.2.0.3.0
RDBMS Required Support Files Runtime 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Exadata Storage Server 11.2.0.1.0
Provisioning Advisor Framework 10.2.0.4.3
Enterprise Manager Database Plugin -- Repository Support 11.2.0.3.0
Enterprise Manager Repository Core Files 10.2.0.4.4
Enterprise Manager Database Plugin -- Agent Support 11.2.0.3.0
Enterprise Manager Grid Control Core Files 10.2.0.4.4
Enterprise Manager Common Core Files 10.2.0.4.4
Enterprise Manager Agent Core Files 10.2.0.4.4
RDBMS Required Support Files 11.2.0.3.0
regexp 2.1.9.0.0
Agent Required Support Files 10.2.0.4.3
Oracle 11g Warehouse Builder Required Files 11.2.0.3.0
Oracle Notification Service (eONS) 11.2.0.3.0
Oracle Text Required Support Files 11.2.0.3.0
Parser Generator Required Support Files 11.2.0.3.0
Oracle Database 11g Multimedia Files 11.2.0.3.0
Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle Multimedia Annotator 11.2.0.3.0
Oracle JDBC/OCI Instant Client 11.2.0.3.0
Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
Precompiler Required Support Files 11.2.0.3.0
Oracle Core Required Support Files 11.2.0.3.0
Sample Schema Data 11.2.0.3.0
Oracle Starter Database 11.2.0.3.0
Oracle XML Query 11.2.0.3.0
XML Parser for Oracle JVM 11.2.0.3.0
Oracle Help For Java 4.2.9.0.0
Installation Plugin Files 11.2.0.3.0
Enterprise Manager Common Files 10.2.0.4.3
Expat libraries 2.0.1.0.1
Deinstallation Tool 11.2.0.3.0
Oracle Quality of Service Management (Client) 11.2.0.3.0
Perl Modules 5.10.0.0.1
JAccelerator (COMPANION) 11.2.0.3.0
Oracle Containers for Java 11.2.0.3.0
Perl Interpreter 5.10.0.0.2
Oracle Net Required Support Files 11.2.0.3.0
Secure Socket Layer 11.2.0.3.0
Oracle Universal Connection Pool 11.2.0.3.0
Oracle JDBC/THIN Interfaces 11.2.0.3.0
Oracle Multimedia Client Option 11.2.0.3.0
Oracle Java Client 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Oracle Code Editor 1.2.1.0.0I
PL/SQL Embedded Gateway 11.2.0.3.0
OLAP SQL Scripts 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Oracle Locale Builder 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
SQL*Plus Files for Instant Client 11.2.0.3.0
Required Support Files 11.2.0.3.0
Oracle Database User Interface 2.2.13.0.0
Oracle ODBC Driver 11.2.0.3.0
Oracle Notification Service 11.2.0.3.0
XML Parser for Java 11.2.0.3.0
Oracle Security Developer Tools 11.2.0.3.0
Oracle Wallet Manager 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Oracle Clusterware RDBMS Files 11.2.0.3.0
Oracle UIX 2.2.24.6.0
Enterprise Manager plugin Common Files 11.2.0.3.0
HAS Common Files 11.2.0.3.0
Precompiler Common Files 11.2.0.3.0
Installation Common Files 11.2.0.3.0
Oracle Help for the Web 2.0.14.0.0
Oracle LDAP administration 11.2.0.3.0
Buildtools Common Files 11.2.0.3.0
Assistant Common Files 11.2.0.3.0
Oracle Recovery Manager 11.2.0.3.0
PL/SQL 11.2.0.3.0
Generic Connectivity Common Files 11.2.0.3.0
Oracle Database Gateway for ODBC 11.2.0.3.0
Oracle Programmer 11.2.0.3.0
Oracle Database Utilities 11.2.0.3.0
Enterprise Manager Agent 10.2.0.4.3
SQL*Plus 11.2.0.3.0
Oracle Netca Client 11.2.0.3.0
Oracle Multimedia Locator 11.2.0.3.0
Oracle Call Interface (OCI) 11.2.0.3.0
Oracle Multimedia 11.2.0.3.0
Oracle Net 11.2.0.3.0
Oracle XML Development Kit 11.2.0.3.0
Database Configuration and Upgrade Assistants 11.2.0.3.0
Oracle JVM 11.2.0.3.0
Oracle Internet Directory Client 11.2.0.3.0
Oracle Enterprise Manager Console DB 11.2.0.3.0
HAS Files for DB 11.2.0.3.0
Oracle Net Listener 11.2.0.3.0
Oracle Text 11.2.0.3.0
Oracle Net Services 11.2.0.3.0
Oracle Database 11g 11.2.0.3.0
Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Thursday, October 8, 2015 5:18:55 PM CEST)
. 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Thursday, October 8, 2015 5:19:24 PM CEST)
............................................................................................... 96% Done.
Home copied to new nodes

Saving inventory on nodes (Thursday, October 8, 2015 5:35:10 PM CEST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/product/db/root.sh #On nodes rac3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/product/db was successful.
Please check '/tmp/silentInstall.log' for more details.

OK, as the procedure requests, let's go run root.sh on the new node RAC3:

[root@rac3 db]# id
uid=0(root) gid=0(root) groups=0(root)
[root@rac3 db]# pwd
/oracle/product/db
[root@rac3 db]# ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/product/db

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

=> In addition to root.sh, which handles the LOCAL configuration of the RDBMS home, I also run ROOTCRS.PL from the GRID HOME so that this new RDBMS HOME gets registered at the cluster level:

[root@rac3 db]# cd /oracle/product/grid/crs/install/
[root@rac3 install]# ./rootcrs.pl
Using configuration parameter file: ./crsconfig_params
User ignored Prerequisites during installation
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

=> OK, now let's verify with CLUVFY that everything in this setup is correctly configured:

Note: for this check I do not use the VERBOSE option, unlike the initial prerequisite verification.

[oracle@rac3 ~]$ . .gridenv
[oracle@rac3 ~]$ cluvfy stage -post nodeadd -n rac3

Performing post-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac3"

Checking user equivalence...
PRVF-4007 : User equivalence check failed for user "oracle"
Check failed on nodes:
rac3

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Post-check for node addition was unsuccessful on all the nodes.

-> A classic case: passwordless SSH access must also be set up for the node to reach itself locally.

[oracle@rac3 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub rac3
oracle@rac3's password:
Now try logging into the machine, with "ssh 'rac3'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[oracle@rac3 ~]$ ssh rac3
The authenticity of host 'rac3 (10.10.0.16)' can't be established.
RSA key fingerprint is 2e:95:07:d2:83:6e:61:0b:7f:2c:ba:cc:fd:14:2f:63.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac3,10.10.0.16' (RSA) to the list of known hosts.
oracle@rac3's password:
Last login: Thu Oct 8 14:17:25 2015 from rac1.localdomain

[oracle@rac3 ~]$ exit
logout
Connection to rac3 closed.

Let's rerun cluvfy:

[oracle@rac3 ~]$ cluvfy stage -post nodeadd -n rac3

Performing post-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac3"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac2 : 10.10.0.16" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac2 : 10.10.0.17" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac2 : 10.10.0.14" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac1 : 10.10.0.16" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac1 : 10.10.0.17" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac1 : 10.10.0.14" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac3 : 10.10.0.17" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac3 : 10.10.0.14" failed
TCP connectivity check failed for subnet "10.10.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.0.0".
Subnet mask consistency check passed.

Node connectivity check failed

Checking multicast communication...

Checking subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking cluster integrity...

Cluster integrity check passed

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/oracle/product/grid" is not shared
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac2 : 10.10.0.16" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac2 : 10.10.0.17" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac2 : 10.10.0.14" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac1 : 10.10.0.16" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac1 : 10.10.0.17" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac1 : 10.10.0.14" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac3 : 10.10.0.17" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 10.10.0.16" and "rac3 : 10.10.0.14" failed
TCP connectivity check failed for subnet "10.10.0.0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 192.168.10.3" and "rac2 : 192.168.10.3" failed

ERROR:
PRVF-7617 : Node connectivity between "rac3 : 192.168.10.3" and "rac1 : 192.168.10.3" failed
TCP connectivity check failed for subnet "192.168.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.0.0".
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed.

Node connectivity check failed

Checking multicast communication...

Checking subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of GSD node application (optional)
GSD node application is offline on nodes "rac2,rac1,rac3"

Checking existence of ONS node application (optional)
ONS node application check passed

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "rac-scan"...

ERROR:
PRVG-1101 : SCAN name "rac-scan" failed to resolve

ERROR:
PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 10.10.0.14) failed

ERROR:
PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 10.10.0.15) failed

ERROR:
PRVF-4657 : Name resolution setup check for "rac-scan" (IP address: 10.10.0.19) failed

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan"

Verification of SCAN VIP and Listener setup failed

User "oracle" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was unsuccessful on all the nodes.

The IP connectivity errors only involve rac3's own addresses (the public IP, the VIP, and the SCAN IP whose VIP currently runs on rac3): this is expected, since these IPs do not communicate with one another on the same node.

[oracle@rac3 ~]$ grep rac3 /etc/hosts
10.10.0.16 rac3.localdomain rac3
10.10.0.17 rac3v.localdomain rac3v
[oracle@rac3 ~]$ grep 0.14 /etc/hosts
10.10.0.14 rac-scan.localdomain rac-scan

The errors on SCAN name resolution are considered "normal" since we have no DNS here, so resolution is simply not possible.

In my test case on VMs, I consider these errors normal. Apart from them, no other anomalies, so we can move on to adding the database to this new node RAC3.

Two options are available: either manually, via a set of scripts (a sketch of that route follows below), or with DBCA. I will use DBCA.
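
For the record, here is a minimal sketch of what the manual route boils down to on the SQL side, run from one of the existing instances. File sizes and destinations are illustrative assumptions, and the new instance must still be declared to the cluster afterwards (srvctl add instance -d MRS -i MRS3 -n rac3):

-- Undo tablespace and redo thread for the third instance (illustrative sizes)
CREATE UNDO TABLESPACE undotbs3 DATAFILE '+DATA' SIZE 500M AUTOEXTEND ON;

ALTER DATABASE ADD LOGFILE THREAD 3
  GROUP 5 ('+DATA') SIZE 50M,
  GROUP 6 ('+DATA') SIZE 50M;
ALTER DATABASE ENABLE PUBLIC THREAD 3;

-- Instance-specific parameters in the shared spfile
ALTER SYSTEM SET instance_number = 3 SID = 'MRS3' SCOPE = SPFILE;
ALTER SYSTEM SET thread = 3 SID = 'MRS3' SCOPE = SPFILE;
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS3' SID = 'MRS3' SCOPE = SPFILE;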

Connect to one of the two nodes where an "MRS" instance is running:

[oracle@rac3 ~]$ srvctl status database -d MRS
Instance MRS1 is running on node rac1
Instance MRS2 is running on node rac2

[root@rac1 install]# xhost +
access control disabled, clients can connect from any host
[root@rac1 install]# su - oracle
[oracle@rac1 ~]$ xhost +
access control disabled, clients can connect from any host
[oracle@rac1 ~]$ . .dbenv
[oracle@rac1 ~]$ dbca

(Screenshots dbca_1 through dbca_10: the DBCA wizard steps for adding the new MRS3 instance on rac3.)

Once the operation is complete, a few final checks:

[oracle@rac1 ~]$ srvctl status database -d MRS
Instance MRS1 is running on node rac1
Instance MRS2 is running on node rac2
Instance MRS3 is running on node rac3

[oracle@rac1 ~]$ srvctl config database -d MRS
Database unique name: MRS
Database name: MRS
Oracle home: /oracle/product/db
Oracle user: oracle
Spfile: +DATA/MRS/spfileMRS.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MRS
Database instances: MRS1,MRS2,MRS3
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed

At the database level:

SQL> set lines 200
SQL> col instance for a60
SQL> col member for a60
SQL> select thread#,instance,enabled from v$thread;

THREAD# INSTANCE ENABLED
---------- ------------------------------------------------------------ --------
1 MRS1 PUBLIC
2 MRS2 PUBLIC
3 MRS3 PUBLIC
SQL> select * from v$logfile order by 1;

GROUP# STATUS TYPE MEMBER IS_
---------- ------- ------- ------------------------------------------------------------ ---
1 ONLINE +DATA/mrs/onlinelog/group_1.257.884100551 NO
2 ONLINE +DATA/mrs/onlinelog/group_2.258.884100557 NO
3 ONLINE +DATA/mrs/onlinelog/group_3.265.884117077 NO
4 ONLINE +DATA/mrs/onlinelog/group_4.266.884117081 NO
5 ONLINE +DATA/mrs/onlinelog/group_5.269.892577503 NO
6 ONLINE +DATA/mrs/onlinelog/group_6.270.892577507 NO

6 rows selected.

SQL> select file_name from dba_data_files where tablespace_name='UNDOTBS3';

FILE_NAME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/mrs/datafile/undotbs3.268.892577493

On the database side, everything looks correct. Now let's check on the cluster side:

[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac2,rac1,rac3
[oracle@rac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ONLINE ONLINE rac3 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
OFFLINE OFFLINE rac3
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ONLINE ONLINE rac3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac3
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac2
ora.cvu
1 ONLINE ONLINE rac1
ora.mrs.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
3 ONLINE ONLINE rac3 Open
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.rac3.vip
1 ONLINE ONLINE rac3
ora.scan1.vip
1 ONLINE ONLINE rac3
ora.scan2.vip
1 ONLINE ONLINE rac1
ora.scan3.vip
1 ONLINE ONLINE rac2

=> Everything is perfect! We have successfully added a third node to the RAC, and a new instance along with it!

Enjoy!

An external table for the listener.log

A simple and effective idea: expose the listener log inside a database so you can run SELECT queries against it.

Useful for audits, intrusion analysis (easier identification), or simply for daily reporting.

1 -> First, create the directory object pointing at the location of the listener.log ('myrep' stands for the actual path of that directory):


create or replace directory LISTENERLOG as 'myrep';
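
If the table will be queried by a user other than its owner, that user also needs read access on the directory (the grantee below is just an example):

GRANT READ ON DIRECTORY LISTENERLOG TO audit_user;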

2 -> Then create an external table that points to the "listener.log" file.

Caution: the table's columns must match the record format of the listener.log file.


CREATE TABLE listenerlog (
  datejour      DATE,
  connect_data  VARCHAR2(2000),
  protocol_info VARCHAR2(80),
  event         VARCHAR2(200),
  sid           VARCHAR2(200),
  return_code   NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY LISTENERLOG
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY newline
    NOBADFILE NODISCARDFILE NOLOGFILE
    FIELDS TERMINATED BY "*" LRTRIM
    (
      datejour CHAR date_format DATE mask "DD-MON-YYYY hh24:mi:ss",
      connect_data,
      protocol_info,
      event,
      sid,
      return_code
    )
  )
  LOCATION ('listener.log')
)
REJECT LIMIT UNLIMITED;

Caution: REJECT LIMIT must be set to UNLIMITED. Lines that do not match the expected field format (banners, continuation lines, etc.) are rejected, so with a lower limit the query may fail, depending on the size of the listener.log.

After that, an ordinary query can be run to refine the results.
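
For example, two hypothetical queries against the table defined above (the 'establish' event tag is the usual marker for connection requests in the listener.log):

-- Connection attempts refused by the listener (non-zero return code)
SELECT datejour, connect_data, protocol_info
FROM listenerlog
WHERE return_code <> 0
ORDER BY datejour DESC;

-- Daily connection volume, e.g. for reporting
SELECT TRUNC(datejour) AS log_day, COUNT(*) AS connections
FROM listenerlog
WHERE event = 'establish'
GROUP BY TRUNC(datejour)
ORDER BY 1;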

Modifying Tables – Columns & Constraints

Syntax:

ALTER TABLE [schema.]table
constraint_clause [,…]
[ENABLE enable_clause | DISABLE disable_clause]
[{ENABLE|DISABLE} TABLE LOCK]
[{ENABLE|DISABLE} ALL TRIGGERS];

Constraint options:

ADD out_of_line_constraint(s)
ADD out_of_line_referential_constraint
DROP PRIMARY KEY [CASCADE] [{KEEP|DROP} INDEX]
DROP UNIQUE (column [,…]) [{KEEP|DROP} INDEX]
DROP CONSTRAINT constraint [CASCADE]
MODIFY CONSTRAINT constraint constrnt_state
MODIFY PRIMARY KEY constrnt_state
MODIFY UNIQUE (column [,…]) constrnt_state
RENAME CONSTRAINT constraint TO new_name

The various constraint states:

[[NOT] DEFERRABLE] [INITIALLY {IMMEDIATE|DEFERRED}]
[RELY | NORELY] [USING INDEX using_index_clause]
[ENABLE|DISABLE] [VALIDATE|NOVALIDATE]
[EXCEPTIONS INTO [schema.]table]

Examples:

Adding a column:

ALTER TABLE STAFF
ADD PROVIDER Varchar2(35);

Adding a default value to a column:

ALTER TABLE STAFF
MODIFY PROVIDER Varchar2(35) DEFAULT 'ABC Ins';

Adding two columns and dropping a constraint:

ALTER TABLE OPTIONS
ADD (STAFF_ID INT, SO_PENSION INT);

ALTER TABLE OPTIONS
DROP CONSTRAINT Myconstraint_name;
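
To illustrate the constraint states listed above, a hypothetical example (the table and constraint names are assumptions):

-- Enable a constraint without checking rows already present,
-- then give it a more explicit name
ALTER TABLE STAFF
MODIFY CONSTRAINT staff_provider_ck ENABLE NOVALIDATE;

ALTER TABLE STAFF
RENAME CONSTRAINT staff_provider_ck TO staff_provider_check;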