
Showing posts from 2012

Lofiadm

lofiadm is the command you use to mount an existing CD-ROM image under Sun Solaris UNIX. It associates a file with a block device, which is useful when the file contains an image of some filesystem (such as a floppy or CD-ROM image), because the block device can then be used with the normal system utilities for mounting, checking, or repairing the filesystem.

Mounting an existing ISO CD-ROM image under Solaris UNIX: if your image is named cd.iso, type:

# lofiadm -a /path/to/cd.iso
/dev/lofi/1

Please note that the file name argument to lofiadm must be fully qualified: the path must be absolute, not relative (thanks to Mike for the tip). /dev/lofi/1 is the device; use it to mount the ISO image as a read-only, randomly accessible filesystem with the mount command:

# mount -F hsfs -o ro /dev/lofi/1 /mnt
# cd /mnt
# ls -l
# df -k /mnt

Alternatively, combine both steps into one (note the mount point is still required at the end):

# mount -F hsfs -o ro `lofiadm -a /path/to/image.iso` /mnt
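When you are finished with the image, unmount it and detach the lofi device again. A minimal cleanup sketch, assuming /dev/lofi/1 is the device created above:

```shell
# Unmount the filesystem, then remove the file-to-device association
umount /mnt
lofiadm -d /dev/lofi/1      # equivalently: lofiadm -d /path/to/cd.iso
lofiadm                     # with no arguments, lists any remaining mappings
```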

Memory usage by process

The Solaris pmap command will provide the total memory usage of each process. The following shell script prints the memory usage of each process, sorted by ascending memory usage.

#!/bin/sh
/usr/bin/printf "%-6s %-9s %s\n" "PID" "Total" "Command"
/usr/bin/printf "%-6s %-9s %s\n" "---" "-----" "-------"
for PID in `/usr/bin/ps -e | /usr/bin/awk '$1 ~ /[0-9]+/ { print $1 }'`
do
   CMD=`/usr/bin/ps -o comm -p $PID | /usr/bin/tail -1`
   # Avoid "pmap: cannot examine 0: system process"-type errors
   # by redirecting STDERR to /dev/null
   TOTAL=`/usr/bin/pmap $PID 2>/dev/null | /usr/bin/tail -1 | \
       /usr/bin/awk '{ print $2 }'`
   [ -n "$TOTAL" ] && /usr/bin/printf "%-6s %-9s %s\n" "$PID" "$TOTAL" "$CMD"
done | /usr/bin/sort -n -k2

Example output:

PID    Total
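Since pmap reports each total with a trailing K, the per-process figures can also be summed into a machine-wide total with a small awk filter. A sketch, assuming input lines shaped like the script's output (PID, K-suffixed total, command):

```shell
# Sum the K-suffixed "Total" column (second field) of the script's output.
sum_totals() {
    awk '{ sub(/K$/, "", $2); kb += $2 } END { printf "%d KB total\n", kb }'
}

printf '101 1024K sshd\n102 2048K nscd\n' | sum_totals
# prints: 3072 KB total
```

Pipe the body of the script (minus its header lines) into sum_totals to get an aggregate figure.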

Solaris mirroring

There are a lot of notes about mirroring, but I always used this one:

4) Time to get our hands dirty!

     The following steps should ideally be done while in single-user mode.

     4.1) Making both drives the same.

          We start by slicing the second drive the same way as our first drive, the master.

          # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

          There is no need to newfs the second drive's slices here; that will be done automatically by the mirror syncing later.

     4.2) Metadbs

          We can now set up our metadbs.

          # metadb -a -f -c3 c1t0d0s7 c1t1d0s7

          Since this is the initial creation of the metadbs, we need to force it with -f; -a adds the metadbs, and -c tells it how many replicas to create. You can see the results with metadb -i, a very handy tool for checking the state of your metadbs.

     4.3) Initializing the devices

          Now we set up the initial metadevices.

          # metainit -f d11 1 1 /dev/r
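The excerpt cuts off at the first metainit. For reference, a typical continuation of the SVM root-mirroring procedure looks like the sketch below; the slice names c1t0d0s0/c1t1d0s0 and metadevice numbers d10/d11/d12 are assumptions that must match your own layout:

```shell
# Submirrors: one on the master's root slice, one on the new drive
metainit -f d11 1 1 c1t0d0s0
metainit d12 1 1 c1t1d0s0

# One-way mirror on the first submirror only
metainit d10 -m d11

# For the root filesystem, let SVM update /etc/vfstab and /etc/system
metaroot d10

# Reboot, then attach the second submirror; this starts the resync
metattach d10 d12

# Watch the sync progress
metastat d10
```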

memory error detect XSCF uboot

If you see something like this when you power on your server:

memory error detect 80000008, address 000002d0 data 55555555 -> fbefaaaa
capture_data hi fbefaaaa lo deadbeef ecc 1b1b
capture_attributes 01113001 address 000002d0
memory error detect 80000008, address 000002d4 data aaaaaaaa -> deadbeef
capture_data hi fbefaaaa lo deadbeef ecc 1b1b
capture_attributes 01113001 address 000002d4
memXSCF uboot  01070000  (Feb  8 2008 - 11:12:19)
XSCF uboot  01070000  (Feb  8 2008 - 11:12:19)
SCF board boot factor = 7180
    DDR Real size: 256 MB
    DDR: 224 MB

then your XSCF card is broken. Replace it with a new one. After the replacement, the XSCF prompt will ask you to enter the chassis serial number (this is the S/N of your server, located on the front of the chassis):

Please input the chassis serial number : XXXXXXX
1:PANEL
Please select the number : 1
Restoring data from PANEL to XSCF#0.
Please wait for several minutes ... setdefaults : XSCF clear : start ......

M5000 XSCF Flash update

login as: eis-installer
eis-installer@172.28.134.35's password:

XCP version of Panel EEPROM and XSCF FMEM mismatched,
        Panel EEPROM=1072, XSCF FMEM=1102

XSCF> version -c xcp
XSCF#0 (Active )
XCP0 (Current): 1102
XCP1 (Reserve): 1102

XSCF> getflashimage -u anonymous ftp://10.10.10.10/FFXCP1111.tar.gz
Existing versions:
        Version                Size  Date
        FFXCP1102.tar.gz   44063324  Wed May 16 15:17:13 UTC 2012
Warning: About to delete existing versions. Continue? [y|n]: y
Removing FFXCP1102.tar.gz.
Password:
  0MB received
  1MB received
  2MB received
  ...
 41MB received
 42MB received
Download successful: 43188 Kbytes in 61 secs (787.980 Kbytes/sec)
Checking file... MD5: 0c37ccaaa7241d188b832702e23ba04a

XSCF> version -c xcp -v
XSCF#0 (Active )
XCP0 (Current): 1102
OpenBoot PROM : 02.24.0000
XSCF          : 0
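Once the image has downloaded and its MD5 checks out, the update itself is normally applied from the same XSCF shell with flashupdate and verified afterwards. A sketch (the -s 1111 version number matches the FFXCP1111 image fetched above; flashupdate prompts for confirmation and resets the XSCF during the update):

```shell
XSCF> flashupdate -c update -m xcp -s 1111
# ...wait for the XSCF to update both banks and come back...
XSCF> version -c xcp -v
```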

Sun Cluster 3.2 & SCSI Reservation Issues

If you have worked with LUNs and Sun Cluster 3.2, you may have discovered that removing a LUN from a system is not always possible, because of the SCSI-3 reservation that Sun Cluster places on the disks. The example scenario below walks you through how to overcome this issue and proceed as though Sun Cluster were not even installed.

Example: We had a 100GB LUN from a Hitachi disk array that we were using in a metaset controlled by Sun Cluster. We had removed the resource from the Sun Cluster configuration and removed the device with cfgadm/devfsadm; however, when the storage admin attempted to remove the LUN id from the Hitachi array zone, the array indicated the LUN was still in use. From the Solaris server side it did not appear to be in use, but Sun Cluster had set SCSI-3 reservations on the disk.

Clearing the Sun Cluster SCSI reservation steps:

1) Determine what DID d
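The excerpt is truncated before the actual commands. On Sun Cluster 3.2 the reservation is usually inspected and cleared with the cluster's bundled scsi utility; a hedged sketch, where the DID device d4 is a placeholder — identify the right DID instance (e.g. with cldevice list -v) and double-check before scrubbing, since this removes the reservation keys:

```shell
# List the registered SCSI-3 keys on the DID device
/usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2

# Show the current reservation
/usr/cluster/lib/sc/scsi -c inresv -d /dev/did/rdsk/d4s2

# Scrub (remove) the keys so the LUN can be released from the array
/usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/d4s2
```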

Jumpstart reminder

bash-3.2# cd /cdrom
bash-3.2# cd sol_10_811_sparc/
bash-3.2# cd Solaris_10/Tools/
bash-3.2# cd /opt/
bash-3.2# mkdir solaris
bash-3.2# cd /cdrom/sol_10_811_sparc/Solaris_10/Tools/
bash-3.2# ./setup_install_server /opt/solaris
Verifying target directory...
Calculating the required disk space for the Solaris_10 product
Calculating space required for the installation boot image
Copying the CD image to disk...
Copying Install Boot Image hierarchy...
Copying /boot netboot hierarchy...
Install Server setup complete

bash-3.2# cd /
bash-3.2# mkdir jumpstart
bash-3.2# cp -r /opt/solaris/Solaris_10/Misc/jumpstart_sample /jumpstart
bash-3.2# cp -R /jumpstart/jumpstart_sample/* /jumpstart/
bash-3.2# cd /jumpstart
bash-3.2# rm -rf jumpstart_sample
bash-3.2# vi /etc/dfs/dfstab
share -F nfs -o ro,anon=0 /jumpstart

bash-3.2# shareall
bash-3.2# cd /jumpstart/
bash-3.2# ./check
Validating rules...
Validating profile host_class...
Validating profile zfsrootsimple...
Validating profile net924_su
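After the check script validates the rules, each install client is typically registered with add_install_client from the Tools directory. A sketch; the MAC address, server name "jumphost", client name "client1", and platform group sun4u are placeholders for your environment:

```shell
bash-3.2# cd /opt/solaris/Solaris_10/Tools
bash-3.2# ./add_install_client -e 0:14:4f:aa:bb:cc \
    -s jumphost:/opt/solaris \
    -c jumphost:/jumpstart \
    -p jumphost:/jumpstart \
    client1 sun4u
```

Then boot the client from the OBP with `boot net - install`.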

Finding PID of process in Solaris by port uses

There are two ways to find the PID by knowing which port is used:

lsof (LiSt Open Files):

lsof | grep <port_number>

1. lsof doesn't work from a non-global zone.
2. Use the -z option with lsof to list open files and processes within a non-global zone, i.e. "lsof -z".

Script to find the process using a given port:

#!/usr/bin/ksh
#
#grep -v "^#" /etc/services | awk '{print $2}' | cut -d"/" -f1 | sort -n | uniq >sorted-services
line='-------------------------------------------------------------------------'
pids=`/usr/bin/ps -ef | sed 1d | awk '{print $2}'`

# Prompt user or use 1st cmdline argument
while [ $# -eq 0 ]; do
if [ $# -eq 0 ]; then
         read ans?"Enter port you would like to know the pid for:  "
else
         ans=$1
fi

# Check all pids for this port, then list that process
for f in $pids
do
         /usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
#/export/baha/s
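The excerpt is cut off right after the grep; the loop presumably continues by printing each matching process, along these lines (a sketch, not the original author's code — the ps columns are one reasonable choice):

```shell
for f in $pids
do
   /usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
   if [ $? -eq 0 ]; then
      echo "$line"
      # Show the PID and full command line of the process holding the port
      /usr/bin/ps -o pid,args -p $f | sed 1d
   fi
done
break   # leave the outer while loop after one pass
```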