Monday, November 23, 2015

Fixing server error: SMART error (CurrentPendingSector) detected on host: (your_server_hostname)

This morning I got an email from one of my servers with this content:

-----------------

This email was generated by the smartd daemon running on:

   host name: (your_server_hostname)
  DNS domain: yourdomain.com
  NIS domain: (none)

The following warning/error was logged by the smartd daemon:

Device: /dev/sdb [SAT], 1 Currently unreadable (pending) sectors


For details see host's SYSLOG.

You can also use the smartctl utility for further investigation.
The original email about this issue was sent at Sat Nov 21 15:15:52 2015 CST
Another email message will be sent in 24 hours if the problem persists.



---------------------


The confusing part is that when I check the hard drive /dev/sdb using the smartmontools utility smartctl, it actually says PASSED!

smartctl -H /dev/sdb
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-2.6.32-40-pve] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED



But when I run a self-test, short or long:

smartctl --test=short /dev/sdb

or 

smartctl --test=long /dev/sdb


and check the result using:


smartctl -a /dev/sdb


I found an error in the self-test log:

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%      6518         84256


As you can see there is a problem, and only the self-test caught it; the overall health assessment still said PASSED.

I replaced the bad drive /dev/sdb and ran another short test on the new drive, which came back clean. Problem confirmed and fixed.



How to convert or create .pem and .key files from a .p12 certificate file for a Kount RIS certificate

I have a client who uses Kount for fraud detection and management.  Every year Kount requires its RIS (Risk Inquiry System) certificate to be renewed.

This certificate is an X.509 certificate which has to be generated from the Kount website.

Here are the steps I took to renew Kount RIS certificate:



Step 1:  Log in to Kount at awc.kount.net  (If you are a developer like me and have not logged into Kount for more than a few months, you may need to use the forgot-password flow)

Step 2:  go to Admin menu > RIS Certificate > Create New certificate
             (USE FIREFOX - which allows export of the p12 file)
             (p12 file will be installed inside firefox)
             (export the p12 file to your local computer)

Step 3:  Copy the p12 file to your web server and convert it to .pem and .key files there using OpenSSL
              
To convert from .p12 to .pem
openssl pkcs12 -clcerts -nokeys -in your_source.p12 -out your_target.pem

To convert from .p12 to .key (you will be prompted for a passphrase to protect the output key; add -nodes to write it unencrypted)
openssl pkcs12 -nocerts -in your_source.p12 -out your_target.key
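If you want to verify the conversion before touching the real Kount certificate, you can build a throwaway .p12 and run the same two commands against it. A sketch (all file names and the password "secret" are made up for this example):

```shell
# Build a self-signed cert + key, pack them into a .p12,
# then run the same two conversions as in Step 3.
cd "$(mktemp -d)"

openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout demo_orig.key -out demo.crt 2>/dev/null

openssl pkcs12 -export -inkey demo_orig.key -in demo.crt \
  -out demo.p12 -passout pass:secret

# .p12 -> .pem (certificate only)
openssl pkcs12 -clcerts -nokeys -in demo.p12 -passin pass:secret -out demo.pem

# .p12 -> .key (-nodes writes the key unencrypted, so no passphrase prompt)
openssl pkcs12 -nocerts -nodes -in demo.p12 -passin pass:secret -out demo.key
```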

Step 4:   Copy the .p12, .pem and .key files to the uniform filenames used by your PHP script.
              


Step 5:   Copy the newly generated .pem and .key files to any other web server you have.


Thursday, October 29, 2015

Decompress multiple .gz files from one directory to a different target directory

Let's say you have a directory full of .gz files and you want to decompress all of them with one statement.

find . -name "*.gz" | while read -r filename; do gzip -cdv "$filename" > ../decompressed/"$(basename "$filename" .gz)"; done

The above statement uses the 'find' command to get the list of .gz files in the current (.) directory, then pipes it to a while loop which runs gzip -cdv on each file and redirects the output into the ../decompressed/ directory.
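A quick way to try this end-to-end in a scratch area (all paths below are throwaway examples):

```shell
# Set up a scratch source directory with a couple of sample .gz files.
work=$(mktemp -d)
mkdir "$work/src" "$work/decompressed"
cd "$work/src"

echo "hello one" > a.txt && gzip a.txt   # creates a.txt.gz
echo "hello two" > b.txt && gzip b.txt   # creates b.txt.gz

# Decompress each .gz into ../decompressed/, stripping the .gz suffix
# (quoting and basename keep odd file names from breaking things).
find . -name "*.gz" | while read -r filename; do
  gzip -cd "$filename" > ../decompressed/"$(basename "$filename" .gz)"
done
```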


Send the result of a cron script to email using 'mail' inside crontab

Sometimes you want to email yourself the output of a particular crontab script.

Here is the statement I use to get this done:

/usr/bin/php /path/to/my_script.php | mail -s "My Results" yourname@email.com

Nice, simple and effective.

How to add / insert date using Linux CLI bash shell in cron jobs

Knowing how to add / insert a date using the Linux CLI bash shell is useful when creating crontab jobs that execute daily. For example, you run a script daily, want to keep the output logged each time it runs, and want to retain those logs for the past 30 days.

What has worked for me is adding the following to whatever command you are executing:

`date +\%Y_\%m_\%d-\%H_\%M`

When combined with 'tee' and 2>&1 it will send all output to a particular file; here is an example:

some_script.sh 2>&1 | tee /log/`date +\%Y_\%m_\%d-\%H_\%M`.log 

The above statement will store the output of some_script.sh into a log file with a name like:

/log/2015_03_29-02_00.log
/log/2015_03_30-02_00.log
/log/2015_03_31-02_00.log

and so forth...

Basically the example above uses backticks ` ... ` (command substitution) to execute the date command. The backslashes before each % are required because % is a special character in crontab lines; drop them when running the command directly in a shell.
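A quick way to see what the substitution produces, run directly in a shell (so no backslashes before the % signs; the /log path is just an example):

```shell
# Command substitution runs date first, then splices its output into the path.
stamp=$(date +%Y_%m_%d-%H_%M)
logfile="/log/${stamp}.log"
echo "$logfile"
```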

Here are some options in date command that you can use:

  %%   a literal %
  %a   locale's abbreviated weekday name (e.g., Sun)
  %A   locale's full weekday name (e.g., Sunday)
  %b   locale's abbreviated month name (e.g., Jan)
  %B   locale's full month name (e.g., January)
  %c   locale's date and time (e.g., Thu Mar  3 23:05:25 2005)
  %C   century; like %Y, except omit last two digits (e.g., 20)
  %d   day of month (e.g., 01)
  %D   date; same as %m/%d/%y
  %e   day of month, space padded; same as %_d
  %F   full date; same as %Y-%m-%d
  %g   last two digits of year of ISO week number (see %G)
  %G   year of ISO week number (see %V); normally useful only with %V
  %h   same as %b
  %H   hour (00..23)
  %I   hour (01..12)
  %j   day of year (001..366)
  %k   hour, space padded ( 0..23); same as %_H
  %l   hour, space padded ( 1..12); same as %_I
  %m   month (01..12)
  %M   minute (00..59)
  %n   a newline
  %N   nanoseconds (000000000..999999999)
  %p   locale's equivalent of either AM or PM; blank if not known
  %P   like %p, but lower case
  %r   locale's 12-hour clock time (e.g., 11:11:04 PM)
  %R   24-hour hour and minute; same as %H:%M
  %s   seconds since 1970-01-01 00:00:00 UTC
  %S   second (00..60)
  %t   a tab
  %T   time; same as %H:%M:%S
  %u   day of week (1..7); 1 is Monday
  %U   week number of year, with Sunday as first day of week (00..53)
  %V   ISO week number, with Monday as first day of week (01..53)
  %w   day of week (0..6); 0 is Sunday
  %W   week number of year, with Monday as first day of week (00..53)
  %x   locale's date representation (e.g., 12/31/99)
  %X   locale's time representation (e.g., 23:13:48)
  %y   last two digits of year (00..99)
  %Y   year
  %z   +hhmm numeric time zone (e.g., -0400)
  %:z  +hh:mm numeric time zone (e.g., -04:00)
  %::z  +hh:mm:ss numeric time zone (e.g., -04:00:00)
  %:::z  numeric time zone with : to necessary precision (e.g., -04, +05:30)
  %Z   alphabetic time zone abbreviation (e.g., EDT)


Tail the last rows of a log and create a new aggregated log file (rotate / truncate / shorten rotating log files)

These examples are very useful ways to quickly truncate and rotate your log files. This is a basic necessity of server administration: if you do not rotate your logs, you will run out of disk space and one day your server will stop functioning properly.


Example for NGINX web server logs (USR1 tells nginx to reopen its log files; HUP makes it reload its configuration):

cp /log/nginx/error.log /data_local/tmp_nginx_log/nginx_error_`date +"%Y_%m_%d"`.log; truncate -s0 /log/nginx/error.log; kill -USR1 $( cat /var/run/nginx.pid )
kill -HUP $( cat /var/run/nginx.pid ) 

Example for APACHE web server logs:

tail -n 1000 error_php_apache.log > error_php_apache.log.tmp && mv -f error_php_apache.log.tmp error_php_apache.log && /etc/init.d/httpd graceful
or a simpler way:
tail -n 1000 apache-error.log > apache-error.1.log && truncate -s 1M apache-error.log && service apache2 graceful

Get the last 1000 lines from all *.log files and store them all in /log/all_logs

find /log -name "*.log" -type f | xargs tail -n 1000 2>&1 | tee /log/all_logs 
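The same pipeline can be tried safely in a scratch directory first (paths made up for the example):

```shell
# Two small sample logs in a throwaway directory.
work=$(mktemp -d)
printf 'alpha line\n' > "$work/one.log"
printf 'beta line\n'  > "$work/two.log"

# tail prints a "==> file <==" header before each file when given several,
# so the aggregate file records which log each chunk came from.
find "$work" -name "*.log" -type f | xargs tail -n 1000 2>&1 | tee "$work/all_logs"
```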

Tuesday, August 25, 2015

Use different port while using RSYNC with SSH command option

I have used RSYNC a lot to copy files between Linux servers, but never knew how to use a port other than the default port 22. So I looked it up and found the solution.

Here it is

rsync -avz -e "ssh -p $portNumber" user@remoteip:/remotepath/files/ /localpath/


Hope this helps someone :)

Saturday, July 25, 2015

MySQL: back up a database to a compressed GZIP file with date and time in the file name

How to back up a MySQL database to a compressed gzip file named with the current date and time (as in the cron examples above, the backslashes before % are for crontab; drop them in an interactive shell):

mysqldump -u {username} -p{password} {database} | gzip -9 - > /{target_directory}/{sub_dir}/{dbname}-`date '+\%Y_\%m_\%d-\%H_\%M'`.sql.gz



To restore (decompress and pipe back into MySQL):

gunzip < some-backup-file.gz | mysql -u {username} -p{password}
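The restore works because gzip round-trips byte-for-byte, and you can sanity-check that half of the pipeline without a database. A sketch with a dummy dump file:

```shell
# Stand-in for mysqldump output in a scratch directory.
work=$(mktemp -d)
printf 'CREATE TABLE t (id INT);\n' > "$work/dump.sql"

gzip -9 - < "$work/dump.sql" > "$work/backup.sql.gz"   # like the backup step
gunzip < "$work/backup.sql.gz" > "$work/restored.sql"  # like the restore step
```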


How to synchronize files to cloud storage like Google Drive using rclone

Use rclone to copy local files or directories to Google Drive, Google Cloud Storage, Rackspace, etc.


Main website:
http://rclone.org

How to Install and Use RClone with Google Drive

1. Download 'binary' to /usr/local/src
cd /usr/local/src
wget http://downloads.rclone.org/rclone-v1.17-linux-amd64.zip

2. Decompress zip
unzip rclone-v1.17-linux-amd64.zip

3. Copy rclone binary to /usr/local/bin (so that it can be executed from anywhere)
cd rclone-v1.17-linux-amd64
cp rclone /usr/local/bin/

4. Create a configuration
rclone config

Answer the questions here; this sets up a named connection, and you will be required to authorize it (get a token) from Google.

For Google Drive, when asked which type of storage, answer #6 'drive'.

5. Test it like this:

rclone --bwlimit=500k --log-file="/log/rclone/{{{backup_log_file.log}}}" sync {{{source_directory_to_sync}}} {{{rclone_connection_name}}}:{{{destination_directory_in_google_drive}}}

Notes: give destination_directory_in_google_drive without a leading /, and when asked during config about using your own certificate / key, just use rclone's.

for example:

(Here I am copying the /databak/last_30_day directory from my server to the google_drive_databak connection)
rclone --bwlimit=500k --log-file="/log/rclone/filename.log" sync /databak/last_30_day google_drive_databak:last_30_day

(Here I am copying /some_file.zip from my server to the google_drive_databak connection's export directory)
rclone --bwlimit=500k --log-file="/log/rclone/filename.log" sync /some_file.zip google_drive_databak:export

Sunday, June 28, 2015

Find files in a directory, excluding files with certain permissions

Why is this useful? When your directory contains thousands of files, chmod is slow and surprisingly I/O-taxing, so it pays to use the 'find' command first and chmod only the files that actually need it.


find /your_directory ! -perm 0777

-OR-

find /your_directory \! \( -perm 0777 -o -perm 0775 \)
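Putting it together with chmod, a sketch (scratch directory; the 0777 target is just for the example, and -exec ... + batches many files into each chmod call):

```shell
# One file already at the target mode, one that needs fixing.
work=$(mktemp -d)
touch "$work/already_ok" "$work/needs_fix"
chmod 0777 "$work/already_ok"
chmod 0644 "$work/needs_fix"

# Only files NOT already 0777 are passed to chmod.
find "$work" -type f ! -perm 0777 -exec chmod 0777 {} +
```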

Check if file or directory exists using bash or shell script sh

if [ -f file_path_name.ext ]
then
echo "file exists"
else
echo "file does not exist"
fi


if [ -d directory_path ]
then
echo "dir exists"
else
echo "dir does not exist"
fi

Send a HUP (hang-up) signal to a Linux process to make it re-read its configuration file

kill -HUP $( cat /var/run/nginx.pid )

Find where inodes are being used inside directories

This command will count the inodes (directory entries) used per directory, recursively:

find {{{starting directory}}} -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
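A demo in a scratch tree shows the per-directory counts the pipeline reports (here with -type f added so only files are counted; the plain command also counts directory entries):

```shell
# sub1 holds three files, sub2 holds one, so sub1 should come out heavier.
work=$(mktemp -d)
mkdir "$work/sub1" "$work/sub2"
touch "$work/sub1/a" "$work/sub1/b" "$work/sub1/c" "$work/sub2/d"

# %h prints each file's parent directory; uniq -c counts files per directory.
find "$work" -xdev -type f -printf '%h\n' | sort | uniq -c | sort -k 1 -n
```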

Sunday, February 1, 2015

Quick and dirty bash shell script to find issues on your linux server

I usually place this file in my home directory and name it 'find_issues.sh'

cd ~
nano find_issues.sh
chmod +x find_issues.sh

Content of find_issues.sh:


#!/bin/bash

read -sn 1 -p "Press any key to check keyword 'error'";echo
tail -n 1000 /var/log/messages | grep error

read -sn 1 -p "Press any key to check keyword 'fail'";echo
tail -n 1000 /var/log/messages | grep fail

read -sn 1 -p "Press any key to check keyword 'panic'";echo
tail -n 1000 /var/log/messages | grep panic

read -sn 1 -p "Press any key to check keyword 'fault'";echo
tail -n 1000 /var/log/messages | grep fault

read -sn 1 -p "Press any key to check keyword 'timeout'";echo
tail -n 1000 /var/log/messages | grep timeout
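The five near-identical blocks above can be collapsed into a loop over the keywords. A sketch (the "press any key" prompts are dropped so it can run unattended, and the log path becomes a parameter; the function name is my own):

```shell
# Scan the last 1000 lines of a log file for several trouble keywords.
check_issues() {
  logfile="$1"
  for keyword in error fail panic fault timeout; do
    echo "== $keyword =="
    tail -n 1000 "$logfile" | grep "$keyword"
  done
}
```

Usage: check_issues /var/log/messages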

Check if directory exists using bash or linux shell script

if [ -d directory_path ]
then
   echo "dir exists"
else
   echo "dir does not exist"
fi

Check if file exists using bash or linux shell script


if [ -f file_path_name.ext ]
then
   echo "file exists"
else
   echo "file does not exist"
fi

Disable email notification from cronjobs or crontab

The first technique disables ALL email notifications.

Add this line at the very top of your crontab:

MAILTO=""


The second technique disables notifications for an individual task:

somecommand >/dev/null 2>&1
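Put together in a crontab, the two techniques look like this (the schedules and script paths are hypothetical; with MAILTO="" at the top the per-job redirection is belt-and-braces):

```
MAILTO=""
# m  h  dom mon dow  command
30 2 *   *   *      /usr/bin/php /path/to/my_script.php
0  3 *   *   *      /usr/bin/some_other_command >/dev/null 2>&1
```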



Remove file(s) in directory recursively older than certain number of days using find and xargs

find <dir_path> -type f -mtime +<days> | xargs rm

For example, to remove files older than 3 days:

find /data/log -type f -mtime +3 | xargs rm

For files older than 5 minutes:
find /data/log -type f -mmin +5 | xargs rm

To remove files changed within the last # days (note the minus sign):
find /dir -type f -mtime -{num_day} | xargs rm


Even though all the examples above use 'rm' to remove files, you can substitute 'rm' with other commands such as 'ls' to list the matching files instead.
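One caveat: piping file names into xargs breaks when names contain spaces or newlines. A NUL-delimited pipeline is safe, sketched here in a scratch directory with one file backdated past the age cutoff:

```shell
# One fresh file to keep, one backdated file to delete (names contain spaces).
work=$(mktemp -d)
touch "$work/keep me.log"
touch -d "10 days ago" "$work/old file.log"

# -print0 / xargs -0 delimit names with NUL bytes, so spaces are harmless.
find "$work" -type f -mtime +3 -print0 | xargs -0 rm

# GNU find can also do it alone:  find "$work" -type f -mtime +3 -delete
```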

To make sure all Linux mounts from /etc/fstab are mounted, or to remount them

mount -a


This command surprisingly does not double-up the mounts; it is smart enough to skip anything already mounted.

Find large files in a Linux file system from the command line, recursively (GNU find defaults to the current directory when no starting path is given; put a path such as / before -maxdepth to search elsewhere):

find . -maxdepth 6 -type f -size +1G > ~/largefiles.txt
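To see it in action without hunting for a real 1 GB file, a sparse file works, since -size tests the apparent size. A sketch in a scratch directory:

```shell
# Sparse file: 2 GB apparent size, almost nothing on disk.
work=$(mktemp -d)
truncate -s 2G "$work/big.img"
touch "$work/small.txt"

# -printf '%s %p' also shows each match's size in bytes.
find "$work" -maxdepth 6 -type f -size +1G -printf '%s %p\n'
```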