SunCluster. Migrate Oracle from UFS to ZFS.

Initial environment

metaset ora01-disk

  • /dev/did/rdsk/d3
  • /dev/did/rdsk/d6
metadevices
  • /dev/md/ora01-disk/d10
  • /dev/md/ora01-disk/d20
resources (resource group ora01)
  • ora01-listener
    • dependencies: ora01-disk,ora01-host
  • ora01-oracle
    • dependencies: ora01-disk,ora01-host
  • ora01-disk
    • dependencies: -
    • mountpoints: /u01, /u02
  • ora01-host
    • dependencies: -
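
Before starting, the layout above can be double-checked. This is only a quick sketch; it assumes the resources live in resource group ora01 (the group used later with clresource create):

  # Solaris Volume Manager side
  metaset -s ora01-disk
  metastat -s ora01-disk

  # cluster side
  clrg status ora01
  clrs list -v -g ora01
  clrs show -v ora01-oracle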

Migrate procedure

  1. Map the new disks to your cluster
    • on both nodes run cfgadm -al
    • on one node run format -e <disk_wwn> and label the disk with an EFI label
    • on both nodes run scdidadm -r; scdidadm -u; scdidadm -i
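
    A possible session for step 1. The WWN and the resulting DID numbers (d9-d12, matching the zpool create in step 2) are just examples:

      # on both nodes: make sure the new LUNs are visible
      cfgadm -al

      # on one node: write an EFI label on each new disk
      format -e <disk_wwn>        # inside format: label -> EFI

      # on both nodes: rebuild the DID database and the global devices
      scdidadm -r
      scdidadm -u
      scdidadm -i

      # check which DID numbers the new disks received
      scdidadm -L
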
  2. Create zpool
    • on active cluster node zpool create ora01-pool01 /dev/did/dsk/d9s0 /dev/did/dsk/d10s0 /dev/did/dsk/d11s0 /dev/did/dsk/d12s0
    • create datasets as described in this document
    • chown oracle:dba /ora01-pool01/*
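
    A sketch of step 2. The dataset names and the 8k recordsize are only examples borrowed from common ZFS-for-Oracle advice, not taken from the document referenced above:

      # on the node that currently masters the ora01 resource group
      zpool create ora01-pool01 /dev/did/dsk/d9s0 /dev/did/dsk/d10s0 \
          /dev/did/dsk/d11s0 /dev/did/dsk/d12s0

      # example datasets (mounted at /ora01-pool01/u01 and /ora01-pool01/u02)
      zfs create -o recordsize=8k ora01-pool01/u01
      zfs create ora01-pool01/u02

      chown oracle:dba /ora01-pool01/u01 /ora01-pool01/u02
      zpool status ora01-pool01
      zfs list -r ora01-pool01
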
  3. Add zpool to cluster
    • if you are using SunCluster 3.3, patch 145333-11 or later must be installed
    • add the zpool to the cluster: clresource create -g ora01 -t SUNW.HAStoragePlus -p Zpools=ora01-pool01 -p ZpoolsSearchDir=/dev/did/dsk ora01-pool01
    • add dependencies: clrs set -p Resource_dependencies=ora01-disk,ora01-pool01,ora01-host ora01-oracle
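
    Step 3 collected into one block, with two checks added on top of the procedure (the showrev patch check and a test switch of the ora01 group between sc01-n1 and sc01-n2):

      # SunCluster 3.3 only: verify the zpool support patch is installed
      showrev -p | grep 145333

      # register the zpool as an HAStoragePlus resource in group ora01
      clresource create -g ora01 -t SUNW.HAStoragePlus \
          -p Zpools=ora01-pool01 -p ZpoolsSearchDir=/dev/did/dsk ora01-pool01

      # make the database resource depend on the new storage resource too
      clrs set -p Resource_dependencies=ora01-disk,ora01-pool01,ora01-host ora01-oracle

      # optional: check that the pool really moves with the group
      clrs status ora01-pool01
      clrg switch -n sc01-n2.labs.localnet ora01
      clrg switch -n sc01-n1.labs.localnet ora01
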
  4. Migrate oracle database
    • unmonitor oracle database: clrs unmonitor ora01-oracle
    • copy datafiles to new location
    • edit the pfile (or spfile) and set the new control file paths
    • startup mount and rename files using alter database rename file 'file_name' to 'new_file_name';
    • open database: alter database open;
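
    A sketch of step 4, assuming ORACLE_SID=ora01, a pfile at $ORACLE_HOME/dbs/initora01.ora and datafiles under /u01/oradata and /u02/oradata; all paths and file names are placeholders:

      # stop monitoring so the cluster agent does not restart the instance
      clrs unmonitor ora01-oracle

      # as the oracle user: stop the instance and copy the files
      sqlplus / as sysdba
        SQL> shutdown immediate
        SQL> exit
      cp -pr /u01/oradata /ora01-pool01/u01/
      cp -pr /u02/oradata /ora01-pool01/u02/

      # point control_files at the new location in the pfile
      # (with an spfile: alter system set control_files=... scope=spfile
      #  before the shutdown above)
      vi $ORACLE_HOME/dbs/initora01.ora

      # mount, rename every datafile and online redo log, then open
      sqlplus / as sysdba
        SQL> startup mount
        SQL> alter database rename file '/u01/oradata/ora01/system01.dbf'
                                     to '/ora01-pool01/u01/oradata/ora01/system01.dbf';
        SQL> -- repeat for each datafile and online redo log
        SQL> alter database open;
        SQL> exit

      # put the resource back under monitoring
      clrs monitor ora01-oracle
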
  5. Remove the unused SVM diskset and disks
    • check that nobody is still using the old filesystems: fuser -c /u01; fuser -c /u02
    • remove cluster dependencies: clrs set -p Resource_dependencies=ora01-pool01,ora01-host ora01-oracle;  clrs set -p Resource_dependencies=ora01-pool01,ora01-host ora01-listener
    • remove resource from resource group: clrs disable ora01-disk; clrs delete ora01-disk
    • remove the SVM diskset
      • clear metadevices: metaclear -s ora01-disk -a
      • metaset -s ora01-disk -d /dev/did/rdsk/d3
      • the last disk is removed with the -f option: metaset -s ora01-disk -f -d /dev/did/rdsk/d6
      • remove the nodes from the diskset. Do this for every node, leaving the node where the diskset is currently primary for last: metaset -s ora01-disk -d -h sc01-n2.labs.localnet && metaset -s ora01-disk -d -h sc01-n1.labs.localnet
      • remove the old /u01 and /u02 records from /etc/vfstab
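
    Step 5 collected into one block for copy-paste; these are the same commands as above, in order (dependencies first, then the resource, then the SVM objects):

      # nothing should be using the old UFS filesystems any more
      fuser -c /u01
      fuser -c /u02

      # drop ora01-disk from the dependency lists, then remove the resource
      clrs set -p Resource_dependencies=ora01-pool01,ora01-host ora01-oracle
      clrs set -p Resource_dependencies=ora01-pool01,ora01-host ora01-listener
      clrs disable ora01-disk
      clrs delete ora01-disk

      # tear down the SVM metadevices and the diskset itself
      metaclear -s ora01-disk -a
      metaset -s ora01-disk -d /dev/did/rdsk/d3
      metaset -s ora01-disk -f -d /dev/did/rdsk/d6
      metaset -s ora01-disk -d -h sc01-n2.labs.localnet
      metaset -s ora01-disk -d -h sc01-n1.labs.localnet

      # finally clean the old /u01 and /u02 lines out of /etc/vfstab
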
HOW TO
  • Add disk to cluster zpool
    • map disk to host
    • label it with EFI label using: format -e <disk>
    • update global devices on both nodes: scdidadm -r; scdidadm -u; scdidadm -i
    • zpool add <poolname> </dev/did/dsk/new_disk_s0>
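
    For example, growing ora01-pool01 with one more LUN (d13 is just an example DID number; check the real one with scdidadm -L):

      # after the LUN is mapped to both nodes
      cfgadm -al
      format -e                 # put an EFI label on the new disk
      scdidadm -r; scdidadm -u; scdidadm -i

      # find the new DID device, then grow the pool
      scdidadm -L
      zpool add ora01-pool01 /dev/did/dsk/d13s0
      zpool status ora01-pool01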
