Friday, January 29, 2010

InnoDB Performance Tuning

InnoDB is a transaction-safe, ACID compliant MySQL storage engine. It has commit, rollback, and crash recovery capabilities, and offers row level locking. The engine's overview page explains, “InnoDB has been designed for maximum performance when processing large data volumes. Its CPU efficiency is probably not matched by any other disk-based relational database engine.”

Insert Performance

  • Only flush logs once per second

    Tunable: innodb_flush_log_at_trx_commit

    By default, the InnoDB storage engine is ACID compliant, meaning that it flushes each transaction to the file system when it is committed. You can set the above tunable to 0 to disable this, telling InnoDB to flush to disk only once per second. Alternatively, you can set the above tunable to 2, letting the file system handle the flushing of data to the disk.
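For example, in my.cnf (which value to choose depends on how much durability you are willing to trade for speed; the choice of 2 below is only illustrative):

```ini
[mysqld]
# 1 (default): write and flush the log to disk at every commit - full ACID durability
# 0: write and flush the log roughly once per second
# 2: write the log at every commit, but let the OS flush it roughly once per second
innodb_flush_log_at_trx_commit = 2
```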

  • Increase the log buffer

    Tunable: innodb_log_buffer_size

    As the InnoDB storage engine flushes its buffer a minimum of once every second, there's no reason to make this buffer very large unless you are inserting very large values into the table (such as large BLOB fields). For most systems doing lots of INSERTs, increasing this tunable to anywhere from 2M to 8M should be sufficient.

  • Increase the InnoDB log file size

    Tunable: innodb_log_file_size

    When data is added to an InnoDB table, it is first recorded in an InnoDB log file. If you're inserting large quantities of data, increasing the size of these log files can greatly boost performance. If you do a lot of inserts, you should boost your log file size to at least 25% of the size of your buffer pool. For best performance, you may want to increase your total log file size up to the size of your buffer pool (up to a current limit of 4GB). However, note that allocating large InnoDB log files means that recovery will be slow: if MySQL crashes, InnoDB has to scan the entire log files at startup, a very time-consuming process.

    Note: It's not enough to just edit my.cnf to change the size of your log files. Instead, you must do all of the following steps:

    1. Edit my.cnf, setting a new log file size.
    2. Gracefully shut down the MySQL server.
    3. Remove (or archive) the existing InnoDB log files.
    4. Start the MySQL server, allowing it to create new log files.
    5. Verify that the new log files are the size you set in step #1.
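Both log-related tunables above go in my.cnf. A sketch, assuming a hypothetical 1G buffer pool (sizes are illustrative, not recommendations for your hardware):

```ini
[mysqld]
innodb_buffer_pool_size = 1G
# 2M-8M is sufficient for most INSERT-heavy systems
innodb_log_buffer_size = 8M
# at least 25% of the buffer pool size for insert-heavy workloads
innodb_log_file_size = 256M
```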
  • Test alternative flush methods

    Tunable: innodb_flush_method

    By default, InnoDB uses the fsync() system call. On some systems it can be faster to flush using O_DSYNC. It is important to benchmark this change, as which flush method will perform best is dependent on your system.

  • Disable AUTOCOMMIT

    InnoDB treats all statements as transactions, adding overhead when inserting lots of data. If you need to INSERT lots of data, first call "SET AUTOCOMMIT = 0;". Then, execute a group of INSERT statements followed by a manual "COMMIT;". It is best to run some benchmarks to determine the optimal size for your transactions. If you make your transactions too big, they will become disk-bound, reducing INSERT performance. You can increase the supported size of your commits by increasing the size of your Buffer Pool.
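As a sketch of the pattern (the log_entries table and its columns are hypothetical):

```sql
-- treat the whole batch as one transaction instead of one per statement
SET AUTOCOMMIT = 0;

INSERT INTO log_entries (created_at, message) VALUES (NOW(), 'first');
INSERT INTO log_entries (created_at, message) VALUES (NOW(), 'second');
-- ... more INSERTs; benchmark to find the optimal batch size ...

COMMIT;             -- flush the whole batch at once
SET AUTOCOMMIT = 1; -- restore the default per-statement commits
```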

  • Disable UNIQUE_CHECKS

    If you have UNIQUE constraints on any of your secondary keys, you can greatly boost insert performance into large tables by running “SET UNIQUE_CHECKS=0” before you INSERT the data, and then “SET UNIQUE_CHECKS=1” when you are done. However, be certain that you are not inserting any duplicate data before you do this.
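A sketch of the pattern (the INSERTs in the middle stand for your bulk load):

```sql
-- skip secondary-index uniqueness checks during a bulk load
SET UNIQUE_CHECKS = 0;

-- ... bulk INSERT statements for data already known to be duplicate-free ...

-- re-enable the checks once the load is done
SET UNIQUE_CHECKS = 1;
```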

  • Presort your data by the Primary Key

    InnoDB physically stores data sorted by the primary key. Thus, if you presort your data before you INSERT it into the database the InnoDB storage engine can handle it more efficiently. If you INSERT data in a random order, it can also cause your tables to become fragmented. If there's no way to INSERT data in order, you may want to consider making your primary key an auto_increment field.

General Performance Tips

  • Search with your Primary Key

    InnoDB offers optimal performance for searching by PRIMARY KEY compared to any other index. This is because data is stored on disk and in memory sorted by the primary key.

  • Keep your Primary Key short

    If your Primary Keys are long, this will result in very large, slow indexes. If your existing Primary Key is long, you may want to convert it to a Unique Key, then add a new auto_increment field as the primary key. If, however, you do your lookups using only the Primary Key, then leave it as is even if the Primary Key column is long.

  • Only create necessary indexes

    InnoDB stores an uncompressed copy of your Primary Key with each Secondary Key. Thus, if you have lots of indexes and you have a large Primary Key, your indexes are going to use a lot of disk space.

  • Optimizing SELECT COUNT(*)

    The design of the InnoDB storage engine prevents it from storing the actual row count of each table, so it actually has to count rows to return SELECT COUNT(*). However, if you add a WHERE clause to the COUNT(*), InnoDB offers the same performance as MyISAM. Alternatively, you can parse the output of SHOW TABLE STATUS LIKE 'NAME' to get quick access to an estimate of the number of rows in table NAME. (For static tables, this will be accurate. For quickly changing tables, it will only be close.)
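For example (the orders table and customer_id column are hypothetical):

```sql
-- exact count: InnoDB has to count the rows
SELECT COUNT(*) FROM orders;

-- with an indexed WHERE clause, performance matches MyISAM
SELECT COUNT(*) FROM orders WHERE customer_id = 42;

-- fast estimate: read the Rows column from the table status
SHOW TABLE STATUS LIKE 'orders';
```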

  • Don't empty a table with DELETE FROM or TRUNCATE

    Emptying a large table using DELETE FROM or TRUNCATE is slow on InnoDB. This is because for both operations InnoDB has to process each row of the table, and due to its transactional design it first writes each delete action to the transaction log, then applies it to the actual table. For better performance, if not limited by foreign keys, use DROP TABLE followed by CREATE TABLE to empty a large table.
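A sketch of the pattern, using a hypothetical sessions table:

```sql
-- slow on a large InnoDB table: every row is processed and logged
-- DELETE FROM sessions;
-- TRUNCATE TABLE sessions;

-- fast, provided no foreign keys reference the table
DROP TABLE sessions;
CREATE TABLE sessions (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    data BLOB
) ENGINE=InnoDB;
```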



InnoDB Performance Worksheet - Drupal Performance and Scalability Checklist


Source: Drupal Performance Agency # Pages: 4

Last Modified: 10:49 PM Jan 12, 2008 License: Creative Commons Attribution-ShareAlike 2.0

Description: InnoDB is a transaction-safe, ACID compliant MySQL storage engine. It has commit, rollback, and crash recovery capabilities, and offers row level locking. The engine's overview page explains, “InnoDB has been designed for maximum performance when processing large data volumes. Its CPU efficiency is probably not matched by any other disk-based relational database engine.”

This checklist was designed by the Drupal Performance Agency. For more information, email perf@tag1consulting.com.


Reference:
http://tag1consulting.com/files/InnoDB_0.pdf
http://tag1consulting.com/InnoDB_Performance_Tuning

promiscuous mode enabled

Jan 28 12:04:53 bsd-test kernel: em0: promiscuous mode enabled
Jan 28 12:04:53 bsd-test kernel: em0: promiscuous mode disabled

# ifconfig -a
em0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500

to answer the rest of your question:

promiscuous mode means the interface listens to all traffic that crosses it; various applications (particularly sniffers) place an interface in promiscuous mode in order to capture network traffic (notably tcpdump and any other apps that use tcpdump's functions, like ethereal).

Here are some suggestions from Elements of Programming Style:

* Make sure all variables are initialized before use.
* Don't stop at one bug (there are always more).
* Initialize constants in the DATA DIVISION; initialize variables with executable code.
* Watch for off-by-one errors.
* Test programs at their boundary values.
* Program defensively.

Wednesday, January 27, 2010

MySQL Engines: InnoDB vs. MyISAM – A Comparison of Pros and Cons

The two major table storage engines for MySQL databases are InnoDB and MyISAM. To summarize the differences in features and performance:
  1. InnoDB is newer, while MyISAM is older.
  2. InnoDB is more complex, while MyISAM is simpler.
  3. InnoDB is stricter about data integrity, while MyISAM is loose.
  4. InnoDB implements row-level locking for inserting and updating, while MyISAM implements table-level locking.
  5. InnoDB has transactions, while MyISAM does not.
  6. InnoDB has foreign keys and relationship constraints, while MyISAM does not.
  7. InnoDB has better crash recovery, while MyISAM is poor at recovering data integrity after a system crash.
  8. MyISAM has a full-text search index, while InnoDB does not.
In light of these differences, InnoDB and MyISAM have their unique advantages and disadvantages against each other. They each are more suitable in some scenarios than the other.

Advantages of InnoDB

  1. InnoDB should be used where data integrity is a priority, because it inherently takes care of it with the help of relationship constraints and transactions.
  2. Faster for write-intensive (INSERT, UPDATE) tables, because it uses row-level locking and only holds up changes to the same row that is being inserted or updated.

Disadvantages of InnoDB

  1. Because InnoDB has to enforce the relationships between tables, database administrators and schema designers have to spend more time designing the data models, which are more complex than those of MyISAM.
  2. It consumes more system resources, such as RAM. In fact, many recommend disabling the InnoDB engine after installing MySQL if there is no substantial need for it.
  3. No full-text indexing.

Advantages of MyISAM

  1. Simpler to design and create, and thus better for beginners. No need to worry about foreign-key relationships between tables.
  2. Faster than InnoDB overall, as a result of its simpler structure and much lower use of server resources.
  3. Full-text indexing.
  4. Especially good for read-intensive (SELECT) tables.

Disadvantages of MyISAM

  1. No data-integrity checks (e.g. relationship constraints), which then becomes a responsibility and overhead of the database administrators and application developers.
  2. Doesn't support transactions, which are essential in critical data applications such as banking.
  3. Slower than InnoDB for tables that are frequently inserted into or updated, because the entire table is locked for any insert or update.
The comparison is pretty straightforward: InnoDB is more suitable for data-critical situations that require frequent inserts and updates. MyISAM, on the other hand, performs better with applications that don't depend heavily on data integrity and mostly just select and display the data.

Wednesday, January 20, 2010

CRLF line terminators make a shell script fail to run.

The shell script does not work properly.

I have a script like this:

# cat test
#!/bin/sh
cat `find /usr/local/www/backup/scripts/crontab/coredata/logs -type f | sort | tail -n 1`

# file test
test: ASCII English text, with CRLF line terminators

That causes the script to fail.

Converting the file to Unix line endings (LF) solves the problem.
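A quick way to strip the carriage returns is tr(1) (dos2unix or sed work too). A minimal sketch, using a throwaway copy of the script:

```shell
# create a small script with DOS (CRLF) line endings
printf '#!/bin/sh\r\necho hello\r\n' > test-crlf.sh

# strip the carriage returns, leaving Unix (LF) line endings
tr -d '\r' < test-crlf.sh > test-lf.sh

# the converted script now runs correctly
sh test-lf.sh   # prints "hello"
```

Afterwards, `file test-lf.sh` should report plain ASCII text without the "CRLF line terminators" note.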

Tuning Apache Worker MPM and APC Settings

Recently I noticed the server often became hard to connect to, even though everything was still running. Looking at the graphs, memory usage would spike, then spill over into swap, and the machine would hang hard. Presumably spam outfits keep coming by to "help" stress-test the server, so the flood of requests blows it up!

A small server with a slow CPU and little RAM can't lend itself out for load testing the way Pixnet can. So I had to adjust the parameters. After testing, the settings below work best: when requests pile up, connections still get a little slower, but performance is much better.

The current hardware is a PIII-800 with 384MB of RAM. Yes, it's that modest, so spammers, please stay away!

The Worker MPM configuration is as follows:


StartServers 1
ServerLimit 3
MaxClients 100
MinSpareThreads 25
MaxSpareThreads 50
ThreadLimit 50
ThreadsPerChild 50
MaxRequestsPerChild 10000

Some notes on these parameter settings:

StartServers is the number of httpd processes spawned at startup, and ServerLimit defines the maximum number of processes. Each process contains ThreadsPerChild threads (each thread can serve one request, instead of the traditional model of one process per request; when threads run short, Apache automatically spawns more threads or children). MinSpareThreads and MaxSpareThreads define the minimum and maximum numbers of idle threads. ThreadLimit caps the number of threads per child. The product of ThreadsPerChild and ServerLimit bounds MaxClients, the maximum number of requests that can be handled at once. Finally, MaxRequestsPerChild determines how many requests a child serves before it is shut down and, only if needed, respawned. The default is zero (never shut down), but I prefer recycling children: it avoids security problems and, more importantly, doesn't hold on to resources when that many processes aren't needed.
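The relationship between those settings can be sanity-checked with a little shell arithmetic (the values below are the ones from the configuration shown earlier):

```shell
# values from the Worker MPM configuration above
ServerLimit=3
ThreadsPerChild=50
MaxClients=100

# MaxClients is bounded by ServerLimit * ThreadsPerChild
max_supported=$((ServerLimit * ThreadsPerChild))
echo "upper bound for MaxClients: $max_supported"   # prints 150

if [ "$MaxClients" -le "$max_supported" ]; then
    echo "MaxClients=$MaxClients fits within the limit"
fi
```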

While tuning, you can run load tests with ab:

ab -c <number of concurrent connections> -n <total number of requests>

With this configuration, the Lifetype part of my server holds up to ab -c 50 -n 500 before timing out, while swap stays under 50MB. Static pages, gallery2, other PHP apps, and so on can handle ab -c 100 -n 1000 and beyond without problems. I'll keep an eye on it and hope it doesn't hang again.



I also adjusted the APC settings. The previous TTL was dreadful: comments and post updates took ages to show up. The settings I've been happy with for a while now are:

[APC]
apc.stat=0
apc.enabled=1
apc.shm_segments=1
apc.shm_size=128
apc.ttl=300
apc.user_ttl=300
apc.num_files_hint=1024
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.enable_cli=1

With the TTL set to 300 seconds, the cache hit rate is still as high as 96%, the cache size stays around 50MB, and both cache performance and update speed meet my needs.

For your reference.

Tuesday, January 19, 2010

[Solved] Problem (IDE cable) when adding a disk.

Hi,

I added the disk in Disk Management, then went to format it as UFS and got this:

Creating partition:
gpt create: unable to open device '/dev/ad2': Input/output error
gpt add: unable to open device '/dev/ad2': Input/output error
Creating filesystem with 'Soft Updates':
newfs: /dev/ad2p1: could not find special device
Done!

Nov 21 16:55:24 kernel: ad2: FAILURE - READ_DMA status=51 error=84 LBA=16
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=16
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=16
Nov 21 16:55:24 kernel: ad2: FAILURE - READ_DMA status=51 error=84 LBA=2
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=2
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=2
Nov 21 16:55:24 kernel: ad2: FAILURE - READ_DMA status=51 error=84 LBA=2
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=2
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=2
Nov 21 16:55:24 kernel: ad2: FAILURE - READ_DMA status=51 error=84 LBA=16
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=16
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=16
Nov 21 16:55:24 kernel: ad2: FAILURE - READ_DMA status=51 error=84 LBA=2
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=2
Nov 21 16:55:24 kernel: ad2: WARNING - READ_DMA UDMA ICRC error (retrying request) LBA=2

===========================================

Hardware issue

1. check cable(IDE,SATA, PATA) and connectors
2. check disk and disk controllers

===========================================

Back up any data you care about now. Use the smartmontools port or hunt down a utility from Samsung which'll do a surface test (read only, nondestructive).

You can also run a "dd if=/dev/ad10 of=/dev/null bs=8192" to do a full read test under FreeBSD, and see how many CRC errors show up.

===========================================
This is *not* a FreeBSD/hard disk compatibility issue; this is probably a hardware problem, or possibly (though less likely) a configuration problem.

Does the error always occur at the same location (LBA=64093375)? It may be that this part has bad sectors (i.e. the disk is physically damaged).

As mentioned before, install /usr/ports/sysutils/smartmontools, and run % smartctl -a /dev/ad0, this will read the disks' SMART information.
If you're not sure what it means, then just post the output of this command.

Above all, I would highly recommend running a hard disk test. MHDD is an excellent program; Ultimate Boot CD includes many such tools (including MHDD).

A basic test can be done with dd(1):
# dd if=/dev/ad0 of=/dev/null bs=1m conv=noerror
But note that this test is far from perfect since it does not report speed.

The reason Windows XP and Ubuntu work fine may be because they don't use the damaged area of the disk or because they silently ignore the error.
===========================================

Reference:
http://sourceforge.net/apps/phpbb/freenas/viewtopic.php?f=78&t=288
http://freebsd.monkey.org/freebsd-stable/200508/msg00150.html
http://forums.freebsd.org/showthread.php?t=879

Only Non-English Character Matching

<?php
//Non-English Character Matching:

//The "character set" block (square brackets) allows us to match characters, but since we can only build ranges of English chars, one trick is to use ASCII or Unicode matching like this:

//ASCII matching can be performed like this:
preg_match('/[\x00-\x80]+/', $str);

//Unicode matching can be performed like this (PCRE uses \x{...} with the /u modifier, not \uXXXX):
preg_match('/[\x{0000}-\x{0080}]+/u', $str);

//For our case, to match only non-English chars use:
preg_match('/[^\x00-\x80]+/', $str);

//To match ALL chars (both English & non-English & some non-chars as well, perhaps) use:
preg_match('/[a-zA-Z\x00-\xFF]+/', $str);

// Chinese character range in Unicode
preg_match('/^[\x{4e00}-\x{9fa5}]+$/u', $str);
?>

Sunday, January 17, 2010

The Ins and Outs of Fsck - Dealing with corrupt filesystems

fsck is used to check and resolve problems with filesystems. If you have corruption on one or more filesystems, then read on. Fsck is not used to check the disks themselves - under Solaris use format(8) for that.
Above all remember this;
You MUST NOT run fsck on a mounted filesystem

If you're in a hurry, skip down to Interacting with Fsck.

Most people's first experience with fsck comes after their system has crashed and they're faced with cryptic and daunting questions from it. This is unfortunate because they're probably under considerable pressure to get the system running again and don't know what to do. If you're new to Unix and responsible for one or more systems I would encourage you to find an unimportant workstation and experiment with fsck a little - umount a filesystem and fsck it. If the machine doesn't have any data on it you could pull the power and see what happens when the machine reboots...

This FAQ focuses on Solaris, though most of it is also applicable to other Unix variants, including Linux.
How fsck normally works

Unix, any Unix, will refuse to mount a filesystem that was not unmounted cleanly. This is because it may be corrupt and mounting a corrupt filesystem will likely cause the system to crash.

When the system boots, all filesystems are checked to see whether they are Clean. The term simply means that the filesystem was unmounted properly after its last use. If the filesystem is Dirty, then fsck will be called in to check it out in more detail. Some Unix variants such as Linux will also run fsck after the filesystem has been mounted N times - N is the maximal mount count.

Modern Unix systems run fsck automatically in what is known as Preen mode. In this mode fsck will fix minor problems that do not result in data loss - such as the Clean/Dirty state flag. If it finds any problems that may result in data loss it will flip into Interactive mode - this is how most people first encounter fsck.
Interacting with Fsck

When you first encounter fsck, it seems as though only people with a PhD in computer science should be dealing with it - the messages are that cryptic.

It's really not that hard; tell someone to deal with the panicking users, close the door, and turn your phone off. You need to concentrate on this...

Take note of these points;

You must not mount a corrupt filesystem.
Some systems (including older Solaris systems) will let you mount a corrupt filesystem after fsck has been run on it. Doing so will almost certainly cause the system to crash later and your corruption might be even worse.
Most interaction with fsck consists of answering Yes or No
This is in answer to a series of questions that, in essence, mean 'Shall I fix this corruption?'. Newcomers are inclined to answer No because they don't understand the implications. If you answer No even once, the filesystem corruption may not be cleared. You must run fsck again in this instance.

Minor Corruption

I define minor corruption as a case where you've not lost data, but fsck can't tell.
An example of fsck encountering minor corruption is shown below;

sun (ksh) # fsck /dev/rdsk/c0t3d0s3
** /dev/rdsk/c0t3d0s3
** Last mounted on /usr
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
UNREF FILE I=343651 OWNER=root MODE=100644
SIZE=0 MTIME=Jun 13 09:43 2003
CLEAR? y

** Phase 5 - Check Cyl groups
FREE BLK COUNT(S) WRONG IN SUPERBLK
SALVAGE? y

25947 files, 588044 used, 133186 free (11674 frags, 15189 blocks, 1.6%
fragmentation)

Here fsck found an unreferenced file - that's an inode with no directory entry pointing to it. There's no name on the file because filenames are stored in directories. The only information shown is the inode number (I=343651), size, ownership, permissions, and modification time. This inode refers to a file that is empty. Also, as the inode number is a high one, it's very unlikely that this file is important - we answer Y (yes) to the CLEAR? question.

The superblock's free block count ("FREE BLK COUNT") will likely always be wrong if fsck made any modification to the file system on earlier phases. We answer Y (yes) to tell fsck to correct it.

Fsck's preen mode could not be expected to resolve this problem automatically - it is possible that an empty file could be significant. We made a judgement here, as you may have to.
Mid-Level Corruption

If you get to this point then you have lost at least one and possibly several files. If you're lucky you've only lost a few files that were open when the system crashed. At worst you've lost several directories and with them all the files in them. Send out for the backup tape, you're going to need it.

The following example of corruption showing loss of real data has been abridged for inclusion here;

** /dev/rdsk/c0t1d0s6
** Last Mounted on
** Phase 1 - Check Blocks and Sizes
UNKNOWN FILE TYPE I=97
CLEAR? yes

UNALLOCATED I=10 OWNER=root MODE=0
SIZE=0 MTIME=Jan 1 07:00 1970
NAME=?

REMOVE? yes

** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
UNREF DIR I=213509 OWNER=root MODE=40755
SIZE=512 MTIME=Mar 13 17:16 1999
RECONNECT? yes

** Phase 4 - Check Reference Counts
LINK COUNT DIR I=35722 OWNER=bin MODE=40755
SIZE=512 MTIME=Mar 13 17:24 1999 COUNT 5 SHOULD BE 4
ADJUST? yes

** Phase 5 - Check Cyl groups
FREE BLK COUNT(S) WRONG IN SUPERBLK
SALVAGE? yes

2683 files, 164403 used, 504020 free (1804 frags, 62777 blocks,
0.2% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****

Phase#1 shows that we've lost two files; we have no idea of their size or contents. The I=10 entry is especially suspect because all the values are zero - root is UID 0, and the 1st Jan 1970 also equates to an epoch time of 0. I=10 is a very low inode number - in general, the lower the number, the more serious the problem. Both lost inodes could have been directories - there is no way of knowing. There has been serious corruption of the inode table here.

Phase#3 reveals an unconnected directory. This is a directory that is not included in any other directory, and should only hold true for the root inode, which with I=213509 this certainly is not. The 'RECONNECT? yes' causes fsck to make an entry in the lost+found directory, the name will be '#213509'. Once the filesystem is mounted you can 'cd /lost+found/#213509' and investigate what the directory contains and possibly identify where in the filesystem it should be.

Phase#4 shows a directory with an incorrect link count. The inode holding the directory has a link count of 5, but fsck could only find 4 directory entries pointing to it. This is probably the least serious error shown on this run.

The filesystem is probably safe to mount, though to be 100% sure you ought to fsck it again.

Assuming this is the only corrupt filesystem you can either 'exit' single user mode, or simply reboot the machine.

After the machine boots you need to decide what to do with this filesystem. This is a judgement call that you must make, and which depends on many factors outside the scope of this FAQ. Personally, faced with the above fsck results, unless the filesystem was totally unimportant I would consider the overall level of damage sufficient to warrant a full restore.

You shouldn't spend too long trying to fix this level of corruption; if more than half a dozen files have gone west, you need to consider restoring the whole filesystem from backup.
Severe Corruption

At this level you may have lost the entire filesystem. It's really a case of seeing what you can salvage rather than getting the filesystem back on its feet. If it's a filesystem that the system can live without in order to boot, you might consider removing it from /etc/vfstab (/etc/fstab on Linux) so that you can boot the system multi-user.

If you run fsck on what you consider to be a 'good' filesystem, and see something like this, then you have severe corruption;

sun# fsck /dev/rdsk/c0t0d0s1
** /dev/rdsk/c0t1d0s1 (NO WRITE)
BAD SUPER BLOCK: MAGIC NUMBER WRONG
USE AN ALTERNATE SUPER-BLOCK TO SUPPLY NEEDED INFORMATION;
eg. fsck [-F ufs] -o b=# [special ...]
where # is the alternate super block. SEE fsck_ufs(1M).

fsck did not identify this partition as containing a filesystem. Double check that you entered the correct device file, assuming you did...

Using alternate superblocks

When you create a filesystem with newfs, it pumps out a long list of numbers - super-block locations. The super-block contains key information about a filesystem; without it you don't have a usable filesystem. Solaris creates a backup super-block at the start of every cylinder group, and there is always one at block #32. Try this; who knows...

sun (ksh) # fsck -o b=32 /dev/rdsk/c0t1d0s6
Alternate super block location: 32.
** /dev/rdsk/c0t1d0s6
** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FREE BLK COUNT(S) WRONG IN SUPERBLK
SALVAGE? y


2746 files, 169956 used, 498467 free (1051 frags, 62177 blocks,
0.1% fragmentation)
***** FILE SYSTEM WAS MODIFIED *****

Well, that doesn't happen very often! It looks like the superblock itself was the only thing corrupted. It lives at the start of the disk, so perhaps something wrote over it?

Officially you are supposed to record the super-block numbers when you create a filesystem; no one ever does. Assuming the filesystem was created with default parameters, you can get a list of super-block backups by running newfs with the '-N' option;

sun (ksh) # newfs -N /dev/rdsk/c0t1d0s1
/dev/rdsk/c0t1d0s1: 237000 sectors in 50 cylinders of 20 tracks,
237 sectors
115.7MB in 4 cyl groups (16 c/g, 37.03MB/g, 17792 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 76112, 152192, 228272,

This shows the next super-block backup at block 76112, which you can try if block 32 didn't work for you. To be honest, though, if things are that bad it's probably a waste of time.
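If you do want to record the numbers, they are easy to scrape out of saved newfs -N output. A minimal sketch, assuming a grep that supports -o (GNU or modern BSD); the here-document stands in for real newfs output captured on your own system:

```shell
#!/bin/sh
# The here-document stands in for `newfs -N` output saved from your system.
cat > newfs.out <<'EOF'
super-block backups (for fsck -F ufs -o b=#) at:
 32, 76112, 152192, 228272,
EOF

# Keep only the digit runs: one backup block number per line,
# ready to feed to `fsck -o b=#`.
grep -o '[0-9][0-9]*' newfs.out > superblocks.txt
cat superblocks.txt
```

This prints 32, 76112, 152192, and 228272, one per line, so the list is on hand the day the primary super-block gets clobbered.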

Saturday, January 16, 2010

MySQL Slaving featuring FreeBSD Snapshots

MySQL Slaving featuring FreeBSD Snapshots
From Devpit

For any number of reasons, a MySQL slave can fall out of sync. If this happens, the following procedure can be used to clean up and resynchronize the slave(s). It is designed to minimize DB downtime by taking advantage of FreeBSD snapshots.

Assumptions: slaving has already been set up, and we just want to clear out the old binary logs and start fresh. For information on setting up slaving for the first time, refer to http://dev.mysql.com/doc/refman/4.1/en/replication-howto.html and http://dev.mysql.com/doc/refman/4.1/en/replication-faq.html.

* Go to the MySQL master and open a mysql session as root

# Delete all old logs and start fresh
RESET MASTER;
# Lock the tables and note the log position
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS; # Make a note of the file and position.
# IMPORTANT: keep this mysql session open, or it will unlock your tables.

* Take a snapshot

# Refer to /usr/src/sys/ufs/ffs/README.snapshot or mount(8)
mount -u -o snapshot /var/db/mysql/.snap/dbsnap /var/db/mysql

* Get mysql going again. The rest we can do while it runs.

UNLOCK TABLES;
exit;

* Mount the snapshot

mdconfig -a -t vnode -f /var/db/mysql/.snap/dbsnap -u 4
mount -r /dev/md4 /mnt

* Go to the slave and use a big hammer!

/usr/local/etc/rc.d/mysql-server.sh stop
# save my my.cnf!
cp -p /var/db/mysql/my.cnf ~
rm -rf /var/db/mysql/*
# If you don't have root ssh set up (it's not a good idea anyway) - or if your db is not huge - you can just make a tar and copy it over instead.
rsync -av root@mymaster:/mnt/ /var/db/mysql
# copy back our my.cnf
cp -p ~/my.cnf /var/db/mysql/
# go into /var/db/mysql and remove any mymaster.* files.
# Start the slave
/usr/local/etc/rc.d/mysql-server.sh start
mysql> CHANGE MASTER TO
-> MASTER_HOST='mymaster.mecasa.com',
-> MASTER_USER='slave',
-> MASTER_PASSWORD='xxxxxxxx',
-> MASTER_LOG_FILE='mysql-bin.000001', # you recorded this earlier
-> MASTER_LOG_POS=79; # you recorded this earlier
mysql> start slave;

* Go back to master and cleanup snapshot

umount /mnt
mdconfig -d -u 4
rm /var/db/mysql/.snap/dbsnap

Alternate approach

Instead of mounting the snapshot, you can use the dump and restore utilities to rebuild its contents on the target machine. For example, from the target machine, execute:

target$ cd /var/db/mysql # An empty filesystem
target$ ssh source 'dump -0 -C 32 -f - /var/db/mysql/.snap/dbsnap' | restore -rf -

Friday, January 15, 2010

Quickly create a large dummy file on FreeBSD

Quickly create a large dummy file on FreeBSD
Written on Jun 04, 2009 // Monologues.

# dd if=/dev/urandom of=test bs=1m count=2

This creates a 2.0M file called “test”.

[root@bentley ~]# ls -alh test
-rw-r--r--  1 root  wheel   2.0M Jun  4 15:20 test
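A note in passing: /dev/urandom gets slow once the file is big. If you only need the file to have a given size (contents don't matter), seeking past the end and writing one byte creates a sparse file almost instantly. A sketch that should behave the same on FreeBSD and Linux:

```shell
# Create a 10 MB file near-instantly: write a single byte at the last offset,
# leaving the rest as a hole (no random data is generated).
dd if=/dev/zero of=test2 bs=1 count=1 seek=$((10*1024*1024 - 1)) 2>/dev/null

wc -c < test2   # reports 10485760
```

The trade-off is that a sparse file occupies almost no disk blocks, so it won't do if you are trying to fill a disk; for that, stick with /dev/zero or /dev/urandom as input.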

Wednesday, January 13, 2010

Setting up a Layer 3 tunneling VPN using OpenSSH

Setting up a Layer 3 tunneling VPN using OpenSSH

Posted by emeitner on Mon 2 Jul 2007 at 16:37

This article describes how to use the new tunneling features of OpenSSH V 4.3 to establish a VPN between two Debian or Debian-like systems. Note that by tunneling I am referring to layer-3 IP-in-SSH tunneling, not the TCP connection forwarding that most people refer to as tunneling.

When operational this VPN will allow you to route traffic from one computer to another network via an SSH connection.

This is a brief recipe rather than a "HOW-TO". It is assumed you are familiar with all of the basic concepts.
Requirements

* Debian Etch and/or Ubuntu Edgy systems
* SSH version 4.3 or higher is required on both ends of the VPN.

Introduction

SSH V 4.3 introduced true layer-2 and layer-3 tunneling, allowing easy-to-configure VPNs built on existing SSH authentication mechanisms. The VPN configuration described below allows a client (or, if you prefer the stupid term, a "road warrior") to connect to a firewall/server and access the entire private network behind it.

Previously I never allowed root login via SSH to any machine; I always logged in under a personal account and then used sudo, so it made sense to turn off root logins via SSH (PermitRootLogin=no). With the advent of the new tunneling features, there is a need for a limited root login for the purpose of establishing the SSH VPN. This is required because the user that connects to the sshd server must have permission to set up a tunnel (tun) interface. Until OpenSSH allows non-root users to do so (see http://marc.info/?l=openssh-unix-dev&m=115651728700190&w=2), we will have to do it this way.

OpenSSH also has a few features for easily tearing down an SSH connection without having to track all sorts of PIDs. I use the control-connection feature to do so; see the ssh man page for the -M, -O, and -S switches. This lets you use the ifup and ifdown commands to control the SSH VPN easily.
Scenario

In this recipe two machines will be configured:

* A server which is a firewall and has access to a private network ¹
* A client which initiates the connections to the server and gains direct access to the private network
 --------         /\_/-\/\/-\       -----------------     
| Client |~~~~~~~/ Internet /~~~~~~| Server/Firewall |~~~[ private net ]
 --------        \_/-\/\_/\/      / ----------------- \            
    ||\                           \          ||\       \
    || {tun0}                      {eth0}    || {tun0}  {eth1}
    ||                                       ||
    \-================= tunnel ==============-/ 
For this recipe, let's number things like this:

* the private net is 10.99.99.0/24
* eth0 on the server has public IP 5.6.7.8
* eth1 on the server has private IP 10.99.99.1
* the VPN network is 10.254.254.0/30
* tun0 on the server has private IP 10.254.254.1
* tun0 on the client has private IP 10.254.254.2

On the Client

If you do not already have them, generate an SSH keypair for root:

$ sudo ssh-keygen -t rsa

/etc/network/interfaces: Add this stanza to the file:

iface tun0 inet static
pre-up ssh -S /var/run/ssh-myvpn-tunnel-control -M -f -w 0:0 5.6.7.8 true
pre-up sleep 5
address 10.254.254.2
pointopoint 10.254.254.1
netmask 255.255.255.252
up route add -net 10.99.99.0 netmask 255.255.255.0 gw 10.254.254.1 tun0
post-down ssh -S /var/run/ssh-myvpn-tunnel-control -O exit 5.6.7.8

The first time we connect to the server as root, we may need to acknowledge saving the server's SSH key fingerprint:

$ sudo ssh 5.6.7.8
The authenticity of host '5.6.7.8 (5.6.7.8)' can't be established.
RSA key fingerprint is aa:fe:a0:38:7d:11:78:60:01:b0:80:78:90:ab:6a:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '5.6.7.8' (RSA) to the list of known hosts.

Don't bother logging in; just hit CTRL-C.
On the server

/etc/ssh/sshd_config: Add/modify the two keywords to have the same values as below.

PermitTunnel point-to-point
PermitRootLogin forced-commands-only

The PermitRootLogin line is changed from the default of no. You do restrict root SSH login, right?

/root/.ssh/authorized_keys: Add the following line.

tunnel="0",command="/sbin/ifdown tun0;/sbin/ifup tun0" ssh-rsa AAAA ..snipped.. == root@server

Replace everything starting with "ssh-rsa" with the contents of root's public SSH key from the client (/root/.ssh/id_rsa.pub on the client).

/etc/network/interfaces: Add the following stanza.

iface tun0 inet static
address 10.254.254.1
netmask 255.255.255.252
pointopoint 10.254.254.2

/etc/sysctl.conf: Make sure net.ipv4.conf.default.forwarding is set to 1

net.ipv4.conf.default.forwarding=1

This takes effect at the next reboot, so make it active now:

$ sudo sysctl net.ipv4.conf.default.forwarding=1

Using the VPN

user@client:~$ sudo ifup tun0
RTNETLINK answers: File exists
run-parts: /etc/network/if-up.d/avahi-autoipd exited with return code 2

user@client:~$ ping -c 2 10.99.99.1
PING 10.99.99.1 (10.99.99.1) 56(84) bytes of data.
64 bytes from 10.99.99.1 icmp_seq=1 ttl=64 time=96.3 ms
64 bytes from 10.99.99.1 icmp_seq=2 ttl=64 time=94.9 ms

--- 10.99.99.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 94.954/95.670/96.387/0.780 ms
user@client:~$ sudo ifdown tun0
Exit request sent.

You may get the two errors shown above after running ifup. No problem; they are harmless.
Things to watch for

* Client side VPNs. Firewalls such as Firestarter will block all traffic to and from any tun interface. You will need to modify the scripts in /etc/firestarter to get around this.

Next steps

Once you have this running it is fairly easy to route traffic between two networks on each end of the VPN. See the first reference link below for details.
Possible improvements

* Something to monitor and restart the VPN if it fails, such as autossh: http://packages.debian.org/stable/net/autossh
* Automatic starting of the VPN upon first packet from client destined for the remote private network.
* Use of a remote DNS server by client when VPN is active.

References

* https://help.ubuntu.com/community/SSH_VPN
* http://wouter.horre.be/node/63
* man ssh
* man ssh_config
* man sshd_config
* man interfaces

1) The server can be behind a firewall, but this requires some additional configuration of the firewall. Primarily, the firewall must forward some port to port 22 on the server. The firewall will need to also know how to route traffic destined for the VPN to the server.

======================================================================
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by drgraefy (128.59.xx.xx) on Mon 2 Jul 2007 at 18:31
[ Send Message | View Weblogs ]
Thank you so much for writing this up. So timely for me. I have been thinking about how to set this up for a couple of different reasons. I can't wait to try it.

[ Parent | Reply to this comment ]

#2
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by wuzzeb (64.5.xx.xx) on Mon 2 Jul 2007 at 23:15
[ Send Message ]
If you don't want to run the server as root, you could probably use tunctl in the /etc/network/interface script on the server.

apt-get install uml-utilities

Something like


iface tun-ssh- inet static
pre-up tunctl -u -t tun-ssh-
address 10.254.254.1
netmask 255.255.255.252
pointopoint 10.254.254.2
post-down tunctl -d tun-ssh-


or if you want to use tun0...


iface tun0 inet static
pre-up tunctl -u -t tun0
address 10.254.254.1
netmask 255.255.255.252
pointopoint 10.254.254.2
post-down tunctl -d tun0

[ Parent | Reply to this comment ]

#3
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (207.61.xx.xx) on Tue 3 Jul 2007 at 19:47
What are the advantages of this OpenSSH VPN method over using OpenVPN? OpenVPN is easy to setup in PSK mode, and with the wrapper scripts certificate mode isn't much harder. OpenVPN also supports IP over UDP or TCP. UDP is usually the better choice. OpenVPN can easily be configured for extruded intranet mode where all clear text Internet traffic passes through the VPN first. The config for the OpenSSH VPN is messy in comparison.

I am trying to imagine a scenario when an OpenSSH VPN would be better then an OpenVPN tunnel. The only one I can think of is when the outbound firewall has been viciously locked down to allow port 22 only. In that case I would be tempted to just run OpenVPN on port 22.

[ Parent | Reply to this comment ]

#8
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (209.217.xx.xx) on Fri 20 Jul 2007 at 17:08
Agreed. There is another scenario, where the server is located behind a firewall that only allows incoming port 22, and you need to run SSH on that port for other reasons.

Besides, this capability is not exactly new. 9 years ago, if you had network connectivity between two machines, SSH, and something that converts a stream of IP packets into a stream of bytes (e.g. slip or PPP), you could build a clumsy, slow VPN out of it. Today, the "convert a stream of IP packets into a stream of bytes" part of that process is done at the packet level with a tunnel interface, support is built into the SSH binary, and it can be activated with a bunch of non-backward-compatible configuration directives and command line options.

Half of the problem with VPNs over TCP is that a TCP connection may delay packets arbitrarily long times, while a UDP or other packet-based transport generally loses the packets instead of delaying them. The other half of the problem is that TCP peers literally hurl packets down the pipe until they start getting lost, then observing which packets are lost and when to derive an estimate for bandwidth availability. Since a TCP-based VPN never drops a packet until it is completely overwhelmed, and a TCP peer communicating through the VPN is trying to optimize its performance by watching for dropped packets, a TCP-based VPN with TCP peers on both sides therefore always tends toward being overwhelmed with traffic. This problem can be mitigated by using traffic control to artificially limit outgoing bandwidth through the VPN, but this only works at all if you limit the bandwidth to significantly less than what is actually available.

There are other problems too, e.g. a lost packet on the VPN carrier generally causes a delay which causes the TCP peers sending data through the VPN to believe their packets were lost, so they retransmit their packets. But since the VPN is running over TCP, the original packets are not lost, only delayed and retransmitted by the VPN's TCP stack, so the data in question is actually transmitted *three* times over the network (once lost, once retransmitted by the VPN, and once retransmitted by each TCP peer over the VPN). If there are multiple TCP connections over the VPN then this problem is multiplied by the number of connections. On slow carrier networks the extra retransmissions can cause more delays, and therefore more retransmits, so the problem cascades until the VPN is fully saturated by its own overhead (i.e. available bandwidth for user packets is zero).

OpenVPN supports TCP (and HTTP) as a carrier, but it really does so only as a means of last resort. If you still can't use OpenVPN, then OpenSSH provides another level of last resort. If you can't use a version of OpenSSH new enough to provide the VPN feature, then you can tunnel OpenVPN over OpenSSH too. ;-)

[ Parent | Reply to this comment ]

#14
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (213.207.xx.xx) on Sat 29 Sep 2007 at 12:17
My friend's (who knows more than me about these things) reply:

The only problem is that UDP doesn't get past 90% of hotel networks... So yes, TCP has disadvantages but being stateful it is better handled by badly-configured hotel firewalls...

[ Parent | Reply to this comment ]

#4
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (88.200.xx.xx) on Thu 5 Jul 2007 at 07:24
Hey, it is well known that tunneling TCP inside an SSH tunnel is a bad idea because of the TCP flow-control algorithm, which causes the "normal" TCP packet sizes SSH uses to send and receive to get smaller and smaller as time goes by. So the TCP overhead becomes greater and greater, up to the point of reconnection, and then it starts again. This is unavoidable by IP design.
But this is not the case of openvpn tunneling. Think about it.

[ Parent | Reply to this comment ]

#5
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (74.130.xx.xx) on Tue 17 Jul 2007 at 04:19
I've set this up. My server only has one ethernet connection. eth1.

I can connect the tunnel, and ping eth1 to and from each machine. I am having trouble pinging a different server on the servers local network from my client.

Any help? is this a packet forwarding problem?

[ Parent | Reply to this comment ]

#6
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (74.130.xx.xx) on Tue 17 Jul 2007 at 05:08
alright. so far, i have this vpn connected. my server is behind a DSL router and obviously port 22 is forwarded correctly. I can ping (from here, the client) my servers local ip address (192.168.1.58) and it's pointopoint ip address (10.254.254.1) but I can't ping anything else on my office network, like 192.168.1.200

[ Parent | Reply to this comment ]

#7
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by emeitner (216.153.xx.xx) on Tue 17 Jul 2007 at 16:54
[ Send Message | View emeitner's Scratchpad | View Weblogs ]
Running

cat /proc/sys/net/ipv4/ip_forward

should return 1. If not you need to make sure net.ipv4.conf.default.forwarding was set properly. Run

ip route list

on your client. You should see an entry like:

192.168.1.0/24 via 10.254.254.1 dev tun0

[ Parent | Reply to this comment ]

#9
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by eperede (82.131.xx.xx) on Sun 12 Aug 2007 at 09:55
[ Send Message ]
Thank you for this howto. If you made all like here described, it works great, except one thing. If the network connection under the openssh tunnel breaks down (example: DSL periodically break), you cannot bring the tunnel again up.

# ifup tun0
channel 1: open failed: administratively prohibited: open failed
/sbin/ifdown: interface tun0 not configured
SIOCSIFADDR: No such device
tun0: ERROR while getting interface flags: No such device
SIOCSIFNETMASK: No such device
… (a lot of setup failures follow)
Failed to bring up tun0

I have tested it in Debian Etch. I removed tun0 interface manually on both side but it didn't help. I have to restart the server if I want to establish a new connection.
Is it a bug?
How could I establish a new connection without a restart of the server?

Thanks for the answer(s)!

[ Parent | Reply to this comment ]

#10
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by emeitner (69.129.xx.xx) on Sun 12 Aug 2007 at 13:07
[ Send Message | View emeitner's Scratchpad | View Weblogs ]
It may be that a firewall that sits between the two machines is dropping idle TCP connections. Try adding

-oServerAliveInterval=60

To the ssh command in the pre-up stanza in /etc/network/interfaces on the client.

[ Parent | Reply to this comment ]

#11
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by eperede (82.77.xx.xx) on Sun 12 Aug 2007 at 19:00
[ Send Message ]
I think, it's not a solution for my problem.

Your solution is good, if the VPN connection alone breaks down.

My problem is the following:
OpenSSH VPN connection works upon an ADSL connection. If the ADSL goes down (it's normally every day) without turning VPN off (without ifdown tun0), then I cannot bring up the VPN connection again.
How could I reconnect without restarting the server?

Thanks for the answer(s).

[ Parent | Reply to this comment ]

#12
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (217.12.xx.xx) on Mon 27 Aug 2007 at 23:37
Sorry for not answering your question, I just want to say how much I *love* OpenVPN because it can handle this situation :)
I have OpenVPN tunnel from my laptop to a server, NFS mounted disk over it and I play some music from that disk.. Sometimes the wireless access point in my flat freezes, the first sign is that the music stops playing :) I manually restart the AP and when it comes back online, the music just starts playing again, without any OpenVPN restart or NFS remount :)

[ Parent | Reply to this comment ]

#13
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (203.6.xx.xx) on Thu 13 Sep 2007 at 07:13
You may want to consider using a specific sudo setup to allow a non-root user to set up the tun interface. That way you steer clear of allowing root logins at all over SSH

[ Parent | Reply to this comment ]

#15
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by jaume (81.248.xx.xx) on Tue 30 Oct 2007 at 03:16
[ Send Message ]
Regarding allowing root ssh logins: I think you can make a special user, say ssh-vpn, add it to the sudoers file with permission only to bring the tun interface up and down, and then add "sudo ifup tun" to the authorized keys. I have used this sort of trick to do other things (performing a centralised backup) with root authority without enabling root ssh logins.
Nice article!

[ Parent | Reply to this comment ]

#16
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by emeitner (216.153.xx.xx) on Tue 30 Oct 2007 at 17:21
[ Send Message | View emeitner's Scratchpad | View Weblogs ]
Yes I did try that. It appears that the tun interface is set up before the user has the ability to run any commands. So unless the user has root access from the start, bringing up the tun interface fails.

[ Parent | Reply to this comment ]

#17
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (131.215.xx.xx) on Wed 5 Dec 2007 at 21:42
Can this be ported to Windows?

[ Parent | Reply to this comment ]

#18
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (220.233.xx.xx) on Sat 11 Apr 2009 at 08:36
Awesome guide. This has helped me create a remote access connection to my work server where I have no control over the internet end connections.

[ Parent | Reply to this comment ]

#19
Re: Setting up a Layer 3 tunneling VPN with using OpenSSH
Posted by Anonymous (99.51.xx.xx) on Mon 10 Aug 2009 at 09:36
Remember to cd /dev/ && MAKEDEV tun ... as root on both ends of the connection. Debian and Ubuntu seem to omit this device node by default. (ls /dev/net to see if there is a tap0 or tun device node thereunder.)

Also note that MAKEDEV (at least one version I used) must be run from the /dev/ directory ... or it will make the directory and device node under wherever you happen to be.

It also seems that the best place to get the tunctl command in recent versions of Debian and Ubuntu is from the uml-utilities package?

[ Parent | Reply to this comment ]

#20
Tunnel setup without root access
Posted by Anonymous (77.164.xx.xx) on Thu 24 Sep 2009 at 13:59
Good guide, and the only one I found that makes explicit that root permissions are needed. That explained a lot of problems I've had!

Being root at both ends finally got it working for me. Then I tried lowering privileges, which failed _until_ I made /dev/net/tun accessible to me. So a bit of playing around with chmod/chgrp seems to resolve having to be root for setting up a tunnel.

[ Parent | Reply to this comment ]

#21
Re: Tunnel setup without root access
Posted by Anonymous (80.108.xx.xx) on Fri 6 Nov 2009 at 22:46
Hi, I have the same problem, but just lowering the privileges on /dev/net/tun is not sufficient. Do you have any more hints on where to adjust privileges?

Thanks

Monday, January 11, 2010

awstats php script

# Use this LogFormat for limited IIS log (default log format from IIS 6)
LogFormat="date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-bytes"

Don't put line breaks in.

A format like this works well for IIS logs:
LogFile="C:/WINNT/system32/LogFiles/W3SVC3/ex%YY-24%MM-24%DD-24.log"

When you run awstats, you can also specify which file you want it to run on (as long as it is where you specified for the LogFile location above):

perl c:\awstats-6.5\wwwroot\cgi-bin\awstats.pl -config=mymodel -LogFile="C:/WINNT/system32/LogFiles/W3SVC3/ex061123.log" -update

Again, no line breaks.

Sample list.txt
ex090120.log
ex090121.log
ex090122.log

Run awstats.pl
<?php
error_reporting(E_ALL);
ini_set('display_errors', TRUE);
ini_set('display_startup_errors', TRUE);

ini_set('max_execution_time', 0); 

echo '[START] ' . date('Y-m-d H:i:s') . PHP_EOL;


$domain_name = 'www.example.com';
$log_path = 'G:/LogFiles/W3SVC2113097918';


$lines = file('list.txt');
$lineCount = count($lines);

for ($i = 0; $i < $lineCount; $i++) {
  $line = rtrim($lines[$i]);

  $cmd = 'perl D:/www/Apache2.2/cgi-bin/awstats-6.95/wwwroot/cgi-bin/awstats.pl -config=' . $domain_name . ' -LogFile="' . $log_path . '/' . $line . '" -update';
  exec($cmd);

  echo $line . PHP_EOL;
}

echo '[END] ' . date('Y-m-d H:i:s') . PHP_EOL;

echo "done";
?>
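If you ever drive awstats from a Unix box instead, the same batch update is a short Bourne shell loop. A dry-run sketch (it only echoes the commands; the config name and log directory are carried over from the PHP script above and are placeholders to replace):

```shell
#!/bin/sh
# Recreate the sample list.txt from above, then print (not run) the awstats
# update command for each log file. Drop the `echo` to actually execute them.
printf 'ex090120.log\nex090121.log\nex090122.log\n' > list.txt

while IFS= read -r logfile; do
  echo perl awstats.pl -config=www.example.com \
    -LogFile="G:/LogFiles/W3SVC2113097918/$logfile" -update
done < list.txt
```

As with the PHP version, one command is issued per log file, so a failed day can be re-run individually by trimming list.txt.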

Generate Output
<?php
$domain_name = 'www.example.com';
$year = '2009';
$log_file = 'G:/LogFiles/W3SVC2113097918/ex%YY-24%MM-24%DD-24.log';


for ($i = 1; $i<=12; $i++) {
  $month = sprintf("%02d", $i);

  $cmd = 'perl D:/www/Apache2.2/cgi-bin/awstats-6.95/wwwroot/cgi-bin/awstats.pl -config=' . $domain_name . ' -LogFile="' . $log_file . '" -month=' . $month . ' -year=' . $year . ' -output -staticlinks > ' . $domain_name . '_' . $year . '-' . $month . '.html';
  exec($cmd);

  echo $month . PHP_EOL;
}
?>

Thursday, January 7, 2010

ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably

ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably

In computer science, ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. An example of a transaction is a transfer of funds from one bank account to another, even though it might consist of multiple individual operations (such as debiting one account and crediting another).

Although Jim Gray is credited with defining, in the late 1970s, these key transaction properties of a reliable system, and with helping to develop the technologies that automatically achieve these,[1] the acronym ACID was coined by Andreas Reuter and Theo Haerder in 1983.[2]

Tuesday, January 5, 2010

add index

ALTER TABLE table_name DROP INDEX LatestTradeDate, ADD INDEX id_Key_LatestTradeDate USING BTREE(ID_KEY, LatestTradeDate);

ALTER TABLE table_name ADD INDEX LatestTradeDate USING BTREE(LatestTradeDate);

How do I grep (i.e. search for a string) recursively through subdirectories on UNIX?

Excluding .svn subversion directories
# find . -not -path '*/.svn/*' -type f -print | xargs grep -I -n -e 'pattern' > result.txt

# cat result.txt

Use the -l option of grep to show matched file names only.

# find . \! -path '*/.svn/*' -type f -print | xargs grep -I -n -e 'pattern'

To tell find to exclude .svn directories, use the -prune option:
# find . -path '*/.svn/*' -prune -o -type f -print | xargs grep -I -n -e 'pattern'
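If you have GNU grep (2.5.3 or later), the --exclude-dir option does the same job without find at all. A quick demonstration on a throwaway tree:

```shell
# Build a tiny tree containing a .svn directory, then search while skipping it.
mkdir -p demo/.svn demo/src
echo 'pattern here' > demo/src/a.txt
echo 'pattern here' > demo/.svn/b.txt

grep -rl --exclude-dir=.svn 'pattern' demo   # lists only demo/src/a.txt
```

The find/-prune pipelines above remain the portable fallback on systems whose grep lacks the option.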
==========================================
How do I grep (i.e. search for a string) recursively through subdirectories on UNIX?

# grep -r "modules" .

This searches through all files starting from the current directory down. Note that this includes non-text files.

# find . | xargs grep some_pattern

To restrict your search to certain file names or file types:
# find . -name "*.txt" | xargs grep some_pattern
================================
Searching for a String in Multiple Files

Ever need to search through all your files for a certain word or phrase? I did, and to make matters more complicated, the files were all in different sub-directories. A quick Google search turned up a few scripts involving multiple commands and temporary files. Then I found a simpler solution.

If you're a Unix/Linux/BSD user, you probably know about the grep command, but did you know it's recursive? That's right, grep can search through your entire directory tree, scanning every file along the way. It will even return the results in multiple formats!

Here's an example. In this case we're searching for the word "modules":

grep -r "modules" .

By using the "-r" switch, we're telling grep to scan files in the current directory and all sub-directories. It will return a list of files the string was found in, and a copy of the line it was found on.

If you'd rather just get the file names and skip the rest of the output, use the "-l" switch, like so:

grep -lr "modules" .

Here's another tip: grep also supports regular expressions, so matching against a wildcard pattern is easy:

grep -lr "mod.*" .

That command will print a list of files containing the string "mod" (the trailing ".*" is redundant, since grep already matches substrings).

You can also use grep to search for multiple words:

grep -r "drupal\|joomla\|wordpress" .
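The same alternation is a little cleaner with extended regular expressions (grep -E), which drop the backslashes:

```shell
# A sample file, then the -E form of the multi-word search.
printf 'a drupal site\nnothing to see\n' > sample.txt

grep -El 'drupal|joomla|wordpress' sample.txt   # prints sample.txt
```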

And, of course, grep supports file name wildcards in the standard unix fashion. In this example, grep will search only file names starting with "log":

grep -lr "mod.*" ./log*

Unfortunately, not all versions of grep support recursive searching, and some use different switches, so if you're running into problems, check your man pages:

man grep

All commands listed in this tutorial have been tested on the latest version of FreeBSD.
Posted by John on 2008-02-05

Monday, January 4, 2010

maatkit mk-table-sync note

@ECHO OFF

echo [START] %DATE% %TIME%

:: To sync db.tbl1 from host1 to host2:

::  mk-table-sync --execute u=user,p=pass,h=host1,D=db,t=tbl host2

:: Sync all tables in host1 to host2 and host3:

::  mk-table-sync --execute host1 host2 host3

:: Syntactically there are basically a few forms.
:: First form: u=,p=,h=,D=DB_NAME,t=TABLE_NAME
:: - The database names on the two sides can differ.
:: - But only one table name can be specified at a time.

:: Second form: --databases db1,db2,db3,dbMore
:: - Make sure the database names on both sides are the same; there is no way to specify a different database name.
:: - Make sure the table structures on both sides are identical. If you later add a table or alter a table on the master database, you must make the same table change on the server being synced.
:: - Example: mk-table-sync u=myName,p=myPass,h=192.168.1.15 u=myName,p=myPass,h=localhost --charset utf8 --execute --verbose --databases test3,test2

:: Third form: --databases test3 --tables member,asdf,asdf2
:: - Make sure the database names on both sides are the same; there is no way to specify a different database name.
:: - This form lets you specify which tables to sync.
:: - Example: mk-table-sync u=myName,p=myPass,h=192.168.1.15 u=myName,p=myPass,h=localhost --charset utf8 --execute --verbose --databases test3 --tables tb1,tb2,tb3

:: Useful parameters:
:: --print
:: --verbose
:: --execute
:: --databases
:: --tables
:: --ignore-columns
:: --ignore-databases
:: --ignore-engines
:: --ignore-tables
:: --charset utf8 

perl C:\maatkit-5240\bin\mk-table-sync u=myName,p=myPass,h=192.168.100.156 u=myName,p=myPass,h=localhost --print --verbose --databases db_name

:: perl C:\maatkit-5240\bin\mk-table-sync u=myName,p=myPass,h=111.83.37.224 u=myName,p=myPass,h=localhost --charset utf8 --execute --verbose --databases db_name

echo [END] %DATE% %TIME%

PAUSE