
Posts

Showing posts from 2015

Veritas File System (VxFS) tuning

Preamble: In addition to Veritas Volume Manager (VxVM), Symantec also offers Veritas File System (VxFS), which is most often used in combination with VxVM. Symantec claims the highest benefit is obtained when using both together. This document was written on Red Hat Enterprise Linux Server release 5.5 (Tikanga) with the following VxFS/VxVM releases:

[root@server1 ~]# rpm -qa | grep -i vrts
VRTSvlic-3.02.51.010-0
VRTSfssdk-5.1.100.000-SP1_GA_RHEL5
VRTSob-3.4.289-0
VRTSsfmh-3.1.429.0-0
VRTSspt-5.5.000.005-GA
VRTSlvmconv-5.1.100.000-SP1_RHEL5
VRTSodm-5.1.100.000-SP1_GA_RHEL5
VRTSvxvm-5.1.101.100-SP1RP1P1_RHEL5
VRTSvxfs-5.1.100.000-SP1_GA_RHEL5
VRTSatClient-5.0.32.0-0
VRTSaslapm-5.1.100.000-SP1_RHEL5
VRTSatServer-5.0.32.0-0
VRTSperl-5.10.0.7-RHEL5.3
VRTSdbed-5.1.100.000-SP1_RHEL5

To avoid putting a real server name in your document, set a prompt such as:

export PS1="[\u@server1 \W]# "

VxFS file system phys...
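The excerpt is cut off before the tuning itself, but as a rough illustration of where it is heading, here is a minimal sketch of how VxFS tunables are typically inspected and changed with vxtunefs; the mount point /data, the device path and the read_pref_io value are hypothetical examples, not taken from the post:

# Print the current VxFS tunables of a mounted file system
# (assumes a VxFS file system mounted at /data)
vxtunefs /data

# Change a tunable online, e.g. the preferred read request size
# (262144 is only an example value)
vxtunefs -o read_pref_io=262144 /data

# To make a tunable persistent across remounts, it is usually added
# to /etc/vx/tunefstab, e.g.:
#   /dev/vx/dsk/datadg/datavol  read_pref_io=262144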

Oracle servers: DIMM error

If you have a DIMM problem and, regrettably, cannot diagnose which DIMM is at fault, use FMA to spot it. The send mondo panics are likely a result of the ECC errors described in Sun Alert 235041. The dump device is not set up correctly and can be fixed with the following command:

dumpadm -d swap

Unfortunately, the FMA system is disabled, so we cannot determine which DIMM is causing the errors:

svcs-av.out:disabled       -             Sep_13        - svc:/system/fmd:default

To re-enable the FMA system, first remove the previous fault history, as it appears to be causing problems when FMA starts up. Shut down the FMA subsystem:

$ svcadm disable -s svc:/system/fmd:default

Remove all files from the FMA log directories. This applies only to the files found; all directories must be left intact:

$ cd /var/fm/fmd
$ find /var/fm/fmd -type f -exec ls {} \;

Check that only files within the /v...
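The excerpt stops mid-sentence, but the usual continuation of this procedure is to remove the listed log files, re-enable fmd and then query the fault manager once new ECC events have been diagnosed. A minimal sketch of that sequence, assuming the same paths and service name as above:

# Remove only regular files, leaving the directory tree intact
$ find /var/fm/fmd -type f -exec rm {} \;

# Re-enable the fault management daemon
$ svcadm enable svc:/system/fmd:default

# Once the next ECC errors have been diagnosed, list the faults;
# the output names the FRU, i.e. the faulty DIMM
$ fmadm faulty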

Less known Solaris Features: Protecting files from accidental deletion with ZFS

I thought I knew a lot about Solaris; however, today I found out about a feature in Solaris I had never heard of. It was on an internal discussion alias. Or, to be exact... I think I've read that part of the man page but never connected the dots. Let's assume you have a set of files in a directory that you shouldn't delete. It would be nice to have some protection, so that a short but fatally placed rm typed under caffeine deprivation doesn't wipe out these important files. It would be nice if the OS protected you from deleting them unless you really, really want to (and thus execute additional steps). Let's assume those files are in /importantfiles. You can mark this directory with the nounlink attribute:

root@aramaki:/apps/ADMIN# chmod S+vnounlink .
root@aramaki:/apps/ADMIN# touch test2
root@aramaki:/apps/ADMIN# echo "test" >> test2
root@aramaki:/apps/ADMIN# cat test2
test
root@aramaki:/apps/ADMIN# rm test2
rm: test2 not removed: Not owner
ro...
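As a rough sketch of the whole workflow implied by the excerpt (the directory /importantfiles and the file obsolete.txt are hypothetical; only the S+vnounlink syntax is confirmed by the post, the S-v form to clear it is the usual inverse):

# Protect the directory: entries in it can no longer be unlinked
root@aramaki:~# chmod S+vnounlink /importantfiles

# Any attempt to delete now fails, even as root
root@aramaki:~# rm /importantfiles/obsolete.txt
rm: obsolete.txt not removed: Not owner

# When you really want to delete something, clear the attribute first
root@aramaki:~# chmod S-vnounlink /importantfiles
root@aramaki:~# rm /importantfiles/obsolete.txt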

Less known Solaris features: Dump device estimates in Solaris 11.2

One recurring question from customers is "How large should I size the dump device?". Since Solaris 11.2 there is a really comfortable way to get a number. You can use dumpadm to get an estimate of the dump device size:

root@solaris:~# dumpadm -e
Estimated space required for dump: 407,81 M

It is important to know that the estimate is based on the current dump configuration and the current state of the system, so you should run this command at a moment that is representative of your usual load. This dependency becomes obvious when you switch off the dumping of ZFS metadata:

root@solaris:~# dumpadm -c kernel-zfs
      Dump content : kernel without ZFS metadata
       Dump device : /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory : /var/crash
  Savecore enabled : yes
   Save compressed : on
root@solaris:~# dumpadm -e
Estimated space required for dump: 337,02 M
root@solaris:~# dumpadm -c kernel+zfs
      Dump content : kernel with ZFS metadata
       Dump device : /dev/zvol/dsk/rpool/du...
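The excerpt breaks off mid-output, but the natural follow-up to the estimate is to compare it with the size of the dedicated dump zvol and resize that zvol if needed. A minimal sketch, assuming the default rpool/dump device shown above; the 2G value is only an example:

# Get the estimate for the current dump configuration
root@solaris:~# dumpadm -e

# Check how large the dedicated dump zvol currently is
root@solaris:~# zfs get volsize rpool/dump

# Grow the dump zvol if the estimate suggests more space is needed
root@solaris:~# zfs set volsize=2G rpool/dump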