Monday, January 31, 2011

Changing Your Shell

The easiest way to change your shell is to use the chsh command. Running chsh will place you into the editor that is in your EDITOR environment variable; if it is not set, you will be placed in vi. Change the “Shell:” line accordingly.

You can also give chsh the -s option; this will set your shell for you, without requiring you to enter an editor. For example, if you wanted to change your shell to bash, the following should do the trick:

% chsh -s /usr/local/bin/bash
Note: The shell that you wish to use must be present in the /etc/shells file. If you have installed a shell from the ports collection, then this should have been done for you already. If you installed the shell by hand, you must do this.

For example, if you installed bash by hand and placed it into /usr/local/bin, you would want to:

# echo "/usr/local/bin/bash" >> /etc/shells
Then rerun chsh.
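Putting the pieces together, the check that chsh performs against /etc/shells can be sketched as follows. This is only a sketch: it works on a scratch file called shells.txt rather than the real /etc/shells, and /usr/local/bin/bash stands in for whatever shell you installed by hand.

```shell
# Scratch stand-in for /etc/shells with two stock entries
printf '%s\n' /bin/sh /bin/csh > shells.txt

shell=/usr/local/bin/bash

# Append the new shell only if it is not already listed -- this is what
# the "echo >> /etc/shells" step above does for a hand-installed shell
grep -qx "$shell" shells.txt || printf '%s\n' "$shell" >> shells.txt

# chsh-style check: the requested shell must now be present
grep -qx "$shell" shells.txt && echo "ok to chsh -s $shell"
```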

Process Forking with PHP

Background

Before you can use the PHP process control functions you must compile
the PCNTL extension into PHP using the --enable-pcntl
configure option (i.e. ./configure --enable-pcntl along with
all the other configuration options you would like to compile into the PHP
binary). No additional libraries need to be pre-installed. Note that these
process control functions will not work on non-Unix platforms (e.g.
Microsoft Windows).

Basic Forking Example

A very basic and commonly used example for forking a process in PHP is
as follows:

$pid = pcntl_fork();

if($pid) {
  // parent process runs what is here
}
else {
  // child process runs what is here
}

Child process is a copy of the parent process

What actually happens when you call the pcntl_fork()
function is that a child process is spawned which is exactly the same as
the parent process and continues processing from the line below the
function call. All variables and objects etc are copied into the child
process as-is but these are new copies which belong to the new process.
Modifying them in the child process does not affect the values in the
parent (or any other forked) process.

The parent process receives the child's process ID in $pid,
whereas in the child process $pid is zero; hence the if test. Note
that in the above example both processes continue running whatever
code comes after the if statement, something which is rarely mentioned in
examples of PHP process forking on the web.

To illustrate this, we'll modify the example above slightly to add
additional output as follows:

$pid = pcntl_fork();

if($pid) {
  echo "parent\n"; // parent process runs what is here
}
else {
  echo "child\n"; // child process runs what is here
}
echo "both\n"; // both processes run what is here

Running this will display something like the following (the exact order
of the lines can vary, since the two processes run concurrently):

parent
both
child
both

Making the parent process wait until the child has finished

So ideally you want to let either the child or the parent continue
processing the rest of the script, and have the other process exit once
the fork has happened. If you never collect the exit status of your
children, however, each child that finishes lingers in the process table
as a "zombie" process. The parent process therefore needs to wait
until all the child processes have finished running before exiting itself.
You can do this using the pcntl_waitpid() function, which
will cause the parent process to wait until the child process has
completed. You can then either just let the parent process exit, or do any
tidy-up code that is required.

An example of doing this is as follows:

$pid = pcntl_fork();

if($pid) {
  // this is the parent process
  // wait until the child has finished processing, then end the script
  pcntl_waitpid($pid, $status, WUNTRACED);
  exit;
}

// the child process runs its stuff here and then ends

The exit call in the parent process ensures that
processing stops at that point and the parent does not execute any of the
code intended for the child. Another way of doing the same thing without
the exit call would be as follows:

$pid = pcntl_fork();

if($pid) {
  // this is the parent process
  // wait until the child has finished processing, then end the script
  pcntl_waitpid($pid, $status, WUNTRACED);
}
else {
  // the child process runs its stuff here
}
Exit codes from the child process

You could optionally have an exit call at the end of the
child part of the if statement.

The $status parameter passed to
pcntl_waitpid() stores the termination status of the child
process. If the child process returns 0 (i.e. success) then it will also be
zero. On my Linux desktop the value returned as $status was
the value passed to exit multiplied by 256. So if the child
process ended with exit(2) my system returned 512 as the
$status value. Whether this is the same across all Unix
systems I do not know; the portable way to extract the exit code is
pcntl_wexitstatus($status).

Getting the parent process to wait until the child process has
completed is useful for then doing something else based on the return
value of the child process as shown in the following example:

$pid = pcntl_fork();

if($pid) {
  // this is the parent process
  // wait until the child has finished processing, then end the script
  pcntl_waitpid($pid, $status, WUNTRACED);
  if($status > 0) {
    // an error occurred so do some processing to deal with it
  }
}
else {
  // the child process runs its stuff here
  if($condition) { // some condition determined by the child's work
    exit(0); // this indicates success
  }
  else {
    exit(1); // this indicates failure
  }
}

Forking multiple child processes

This final example illustrates how you can fork several children from
the parent process with PHP. The loop runs three times, forking a child
process on each iteration and storing the child PIDs in an array. After
running its work, each child exits. A second loop then runs in the parent
to ensure all child processes have finished before the parent resumes its
own processing.

Note it is very important in this sort of process that the child
explicitly exits in its section of the script, otherwise each child will
continue running through the first, and then second, loop.

$pids = array();

for($i = 0; $i < 3; $i++) {

  $pids[$i] = pcntl_fork();

  if(!$pids[$i]) {
    // child process works here and then exits
    exit;
  }
}

for($i = 0; $i < 3; $i++) {
  pcntl_waitpid($pids[$i], $status, WUNTRACED);
}

// complete parent processing now all children have finished

The PHP manual pages for the process control functions can be found in
the PCNTL section of the online PHP manual. There are a number of
user-contributed notes for each of the functions which should also help
with your understanding of process forking in PHP.

Saturday, January 29, 2011

To check if a file contains UTF-8 BOM at header

To check if a file contains UTF-8 BOM at header:

# hexdump -n 3 -C 2.txt
00000000 ef bb bf

If the first three bytes are ef bb bf, the answer is YES.

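A self-contained way to try the check (a sketch; it uses od, which is more widely available than hexdump, and octal escapes to write the BOM bytes):

```shell
# Create one file with a UTF-8 BOM (EF BB BF) and one without
printf '\357\273\277hello\n' > bom.txt
printf 'hello\n' > plain.txt

# Print YES/NO depending on whether the first three bytes are ef bb bf
check_bom() {
    first3=$(head -c 3 "$1" | od -An -tx1 | tr -d ' \n')
    if [ "$first3" = "efbbbf" ]; then echo "$1: YES"; else echo "$1: NO"; fi
}

check_bom bom.txt    # bom.txt: YES
check_bom plain.txt  # plain.txt: NO
```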

ISO 8859-1 is almost identical to ISO 8859-15; -15 replaces the generic
currency sign with the Euro symbol and swaps in a few more French
characters. The only way to tell them apart is to look at the symbols in
context.

UTF-8 is identical to ISO 8859-1 for the first 128 ASCII characters, which
include all the standard keyboard characters. Beyond that, characters
are encoded as multi-byte sequences.

Unicode is usually encoded in UTF-16. If you're lucky, there might be
a BOM (Byte Order Mark) of 0xFF 0xFE or 0xFE 0xFF as the first two bytes
in the file. Otherwise, if the text contains basic 7-bit ASCII characters,
look for a 0x00 (null) byte as every other byte.

Friday, January 21, 2011

Taskbar Shuffle - rearrange taskbar buttons in Windows XP


Taskbar Shuffle is a simple, small, free utility that lets you drag and drop your Windows taskbar buttons to rearrange them. Here’s a full feature list:

Full 32-bit and 64-bit support
Reorder your taskbar buttons by dragging and dropping them
Reorder your tray icons in the same way
Reorder tasks in a grouped button's popup menu in the same way
Middle-click to close programs on your taskbar
Works with UltraMon (version 3+ only) taskbars
Tweak taskbar button grouping

If you run more than a few applications at a time, you'll definitely find Taskbar Shuffle useful, so give it a try!

Wednesday, January 19, 2011

How to Change Timezone in FreeBSD


By default, FreeBSD is configured with UTC timezone. In order to localize your timezone, follow the steps below.

1. cd /usr/share/zoneinfo

2. cp America/Vancouver /etc/localtime
Now that the local time zone has been configured, let's get the updated time and date from an NTP server.

1. ntpdate

Thursday, January 13, 2011

could not listen on UDP socket: permission denied

could not listen on UDP socket: permission denied
creating IPv4 interface re0 failed interface ignored

[] mac_portacl_load="YES"

[] FreeBSD Bind DNS chroot jail

Wednesday, January 12, 2011

ifconfig says that the status of re0 is "no carrier"

ifconfig says that the status of re0 is "no carrier"

re0: flags=8802 metric 0 mtu 1500
 ether 00:16:36:7f:6e:b6
 media: Ethernet autoselect (10baseT/UTP )
 status: no carrier

[] We have to use that special parameters with ifconfig because our switch does not support autoselect.

[] You can add those mediaopt options to ifconfig_re0 in rc.conf:

# vi /etc/rc.conf
ifconfig_re0="inet netmask media 1000baseTX mediaopt full-duplex"

[] Other than that, 'no carrier' is usually 'no cable' or a dead/downed switchport.

[] Which could happen if you get lots of duplex and speed mismatches; a switch might decide to block off that port. On proper equipment I'd set speed and duplex explicitly on both sides and never auto-negotiate. Most unmanaged switches are a bit tricky though, and Realtek isn't the best either. I know my Realtek cards and my el-cheapo Sweex switch at home will throw a fit if I don't use auto/auto.

[] restart freebsd networking service:

# /etc/rc.d/netif restart && /etc/rc.d/routing restart

# /etc/rc.d/netif forcerestart && /etc/rc.d/routing forcerestart

[] pciconf -lv
re0@pci0:3:0:0: class=0x020000 card=0x2aa9103c chip=0x813610ec rev=0x05 hdr=0x00
    vendor     = 'Realtek Semiconductor'
    device     = 'Realtek 10/100/1000 PCI-E NIC Family all in one NDIS Driver v5.728.0604.2009 06/04/2009 (Rtl8023)'
    class      = network
    subclass   = ethernet

Tuesday, January 11, 2011

CMake - build automation software

CMake is a unified, cross-platform, open-source build system that allows developers to build, test and package software by specifying build parameters in simple, portable text files. It works in a compiler-independent manner and the build process works in conjunction with native build environments, such as Make, Apple's Xcode and Microsoft Visual Studio. It also has minimal dependencies, requiring only a C++ compiler [2]. CMake is open source software and is developed by Kitware.

CMake is a robust, versatile tool that can:

Create libraries
Generate wrappers
Compile source code
Build executables in arbitrary combinations

autoconf and automake. With these tools, entering the hall of Unix programming becomes an easy matter.

Written by 陳雍穆 ( armor ; armor AT netlab Dot cse Dot yzu Dot edu Dot tw ).



To my fellow students taking the computer networks course: you are all surely facing the problem of how to develop programs quickly under Unix. On a Unix system a program is more than just its main function. Once you write in a modular way, splitting code across files, how do you compile all these scattered pieces? As Figure 1 shows, with only a few files you can still do it by hand, building the server and client executables from the individual .o files. But if the project at hand grows...



On Unix systems there is a tool called make that helps us develop and compile programs. As Figure 2 shows, typing a single make command carries out all of the commands from Figure 1 in one go. Writing a Makefile, however, is no easy matter; the GNU manual alone is daunting. Following an existing file as a template, most people can still produce a decent Makefile, but having to mind all those TABs is a nuisance. And if our program uses various libraries, then when it is moved to another machine, checking whether that machine has those libraries is a big problem. So naturally GNU has even better tools:

With these tools, entering the hall of Unix programming becomes an easy matter.



Before getting into how to use automake, it is best to first understand what a Makefile is. A Makefile is the file that tells make how to compile and link a program.
A basic Makefile rule has the following format:
target ... : prerequisites ...
        command
        ...

  • target: a file produced by the build; it can be an executable or an object file. A target can also be the name of an action, such as clean or install.

  • prerequisites: the files needed to build the target; a target is usually built from several files.

  • command: the action make should carry out. Each command line must begin with one or more tab characters (spaces will not work).

  • Comments: in a Makefile, any text starting with "#" is a comment, and make ignores it.

  • Multi-line entries: when writing a Makefile, if a command is longer than one line, add a backslash ( \ ) at the end of the line to indicate that the next line is its continuation; the two lines are treated as one.

  • macro: as the GNU manual puts it, a variable is a name defined in a Makefile to represent a string of text, usually standing in for a complex, detailed command applied to targets, prerequisites, or commands. In some versions of make, variables are called macros. The macro format is as follows:
    <string> = <value>

Below are a few Makefile examples.


edit : main.o kbd.o command.o display.o \
       insert.o search.o files.o utils.o
        cc -o edit main.o kbd.o command.o display.o \
                   insert.o search.o files.o utils.o

main.o : main.c defs.h
        cc -c main.c
kbd.o : kbd.c defs.h command.h
        cc -c kbd.c
command.o : command.c defs.h command.h
        cc -c command.c
display.o : display.c defs.h buffer.h
        cc -c display.c
insert.o : insert.c defs.h buffer.h
        cc -c insert.c
search.o : search.c defs.h buffer.h
        cc -c search.c
files.o : files.c defs.h buffer.h command.h
        cc -c files.c
utils.o : utils.c defs.h
        cc -c utils.c
clean :
        rm edit main.o kbd.o command.o display.o \
           insert.o search.o files.o utils.o


objects = main.o kbd.o command.o display.o \
          insert.o search.o files.o utils.o

edit : $(objects)
        cc -o edit $(objects)
main.o : main.c defs.h
        cc -c main.c
kbd.o : kbd.c defs.h command.h
        cc -c kbd.c
command.o : command.c defs.h command.h
        cc -c command.c
display.o : display.c defs.h buffer.h
        cc -c display.c
insert.o : insert.c defs.h buffer.h
        cc -c insert.c
search.o : search.c defs.h buffer.h
        cc -c search.c
files.o : files.c defs.h buffer.h command.h
        cc -c files.c
utils.o : utils.c defs.h
        cc -c utils.c
clean :
        rm edit $(objects)



You can type whereis XXXX at the shell prompt to locate XXXX, e.g. whereis gcc, as shown in the figure below.

You can install either from a distribution's prepackaged binaries or from source code. CD images of the following four Linux distributions are available for download.

Name              Maker
RedHat Linux      Red Hat
Slackware Linux   Patrick Volkerding
Debian GNU/Linux  GNU
Linux-Mandrake    Mandrakesoft



This section walks through one example of using automake. automake has many configurable options; we use only a few simple ones here, and for the details you will still have to consult the GNU automake manual and the GNU autoconf manual.

Suppose we have already written a set of programs and want automake to build the Makefile for us. As Figure 3 shows, this example program set consists of ten files: client.c client.h gettime.c gettime.h gmt2local.c gmt2local.h inits.c inits.h server.c config.h. The programs build a client executable and a server executable that communicate over the network. The Makefile.OLD.TXT in the figure is the old Makefile; we will not use it here.



Step 1: run autoscan to produce a configure.scan file, and rename it to configure.in, as shown in Figures 3 and 4.

Step 2: edit the contents of configure.in. The default file produced by autoscan is not always the same; it varies with the vendor's modifications to the system packages. Figure 5 below shows the default configure.in produced in this example, and Figure 6 the edited one.

In the edited configure.in we added AM_INIT_AUTOMAKE(s907441, 1.0) and AC_PROG_CC, and changed AC_OUTPUT(Makefile).

  • AC_INIT(FILE): generated by autoscan itself; do not modify it.

  • AC_PROG_CC: checks for the system's C compiler.

  • AC_OUTPUT(FILE): automake uses this setting to decide what files to generate. We want to generate a Makefile, so we fill in Makefile.

  • Anything beginning with dnl is a comment.

This assignment is submitted using the student ID as the file name, so we set the PACKAGE in AM_INIT_AUTOMAKE(PACKAGE, VERSION) to the student ID and VERSION to the version number. In other words, if the assignment changes before the deadline, bump VERSION by 1 and run the remaining steps again. For the other settings, consult the GNU autoconf manual.

Step 3: run aclocal and autoconf, which produce the files aclocal.m4 and configure respectively, as shown in Figure 7.


Step 4: using an editor, create the Makefile.am file with the contents shown in Figure 8.


  1. AUTOMAKE_OPTIONS= foreign

    AUTOMAKE_OPTIONS records the strictness level, which mainly determines whether the package must meet the GNU standards. The default is gnu, which requires the package to contain certain files mandated by GNU, such as INSTALL, NEWS, README, COPYING, AUTHORS, and ChangeLog. foreign is a looser level that only checks that the build files work properly.

  2. bin_PROGRAMS= client server

    This can be understood as the targets: the executables to build.

  3. client_SOURCES= client.c config.h

    This one is more obvious: foo_SOURCES corresponds to the prerequisites from the previous section, so now you get the idea!! Macros can be used here as well:

    xs = a.c b.c
    foo_SOURCES = c.c $(xs)
    automake expands $(xs) to a.c b.c, so the whole line becomes foo_SOURCES = c.c a.c b.c.

  4. server_SOURCES= server.c config.h gettime.c gettime.h gmt2local.c gmt2local.h inits.c inits.h


Step 5: run automake --add-missing to generate Makefile.in, as shown in Figure 9. automake produces the corresponding Makefile.in from Makefile.am, scanning configure.in at the same time.


Step 6: run ./configure. Here we can see automake's power: it checks a pile of header files, function calls, the compiler, and so on, as shown in Figure 10. At this point the Makefile we have been waiting for is finally produced. The headers that configure checks are determined by the AC_CHECK_HEADERS( ) and AC_CHECK_FUNCS( ) settings in configure.in.


Step 7: run make, letting make compile and link the program according to the Makefile, as shown in Figure 11. The finished state is shown in Figure 12: the client and server executables are now present.






A. What functions does the Makefile produced by automake provide?

  1. make all

    The same as running the make command by itself.

  2. make clean

    Removes all executables and object files ( .o ), as in Figure 13.

  3. make distclean

    make clean, plus deleting the Makefile and other files generated by ./configure, as in Figure 14.

  4. make install

    Installs the compiled executables into the system directories; by default they go into /usr/local/bin. Running ./configure --help
    shows that among the Configuration settings, prefix is set to /usr/local, bindir is EPREFIX/bin, and EPREFIX
    in turn equals prefix. If we did not specify a directory when running ./configure to produce the Makefile, these defaults apply.
    So we can also change where the program is finally installed through the arguments given to ./configure:
    with ./configure --prefix=PREFIX, PREFIX is the directory you want to install into. For example, ./configure --prefix=/www
    installs the executables under /www.

  5. make dist

    Packs the program and related files into a tar.gz archive. Based on the settings in configure.in,
    the archive is named package-version.tar.gz, as in Figure 15.

  6. make distcheck

    make dist, plus checking that the resulting tar.gz actually works: unpacking it, running ./configure, and running make all.
    A tar.gz that passes this check is ready for distribution, as in Figure 16.


B. Will a package prepared for distribution run when moved to another platform?

configure is a shell script that runs under the ordinary Unix sh shell, and the other files (install-sh, missing, mkinstalldirs) are shell scripts too, so problems are unlikely.

C. How do I repackage an updated program set?

  1. make distclean
  2. aclocal
  3. autoconf
  4. create the Makefile.am file
  5. automake --add-missing
  6. make distcheck

D. How do I test an updated program set?

This is a slightly silly question. If only a small part of a function changed, just run make again and look at what gcc reports. If the change is big enough to overhaul the structure or add and remove files, then run the whole process again from the top and regenerate the Makefile.



There are many more features and settings: using other compilers ( C++ / Assembly / Fortran 77 / Java / Yacc and Lex ), multi-directory setups, building shared libraries, using the various macros, and so on. These cannot all be written up in one sitting (it would amount to translating the manual). For the rest, you will have to hit the manuals yourselves. Reference:

what is the difference between find -exec cmd {} + and xargs


Which one is more efficient over a very large set of files and should be used?

Method 1:
# find . -exec ls -l {} \;

Note: the above command executes ls -l once for each individual file found.

Note: running the command once per file is slow, but it never hits the "Argument list too long" error that shell wildcard expansion (e.g. ls -l *) can produce in a directory with very many files.

Note: each file name is passed to the command as a single argument, with special characters preserved, as if the file name had been enclosed in single quotes.

Method 2:
# find . | xargs ls -l

Note: the above command constructs an argument list from the output of find and passes it to ls.

Note: find feeds xargs a long list of file names; xargs then splits this list into sublists and runs the command once for every sublist.

Consider if the output of the find command produced:

H1
H2
H3

the Method 1 command would execute
ls -l H1
ls -l H2
ls -l H3

but the Method 2 would execute
ls -l H1 H2 H3
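The splitting into sublists is easy to observe by capping the arguments per invocation with -n (a sketch, with echo standing in for the real command):

```shell
# Five names, at most two arguments per invocation of echo
printf '%s\n' H1 H2 H3 H4 H5 | xargs -n 2 echo
# H1 H2
# H3 H4
# H5
```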

Note: using -exec starts a new program (grep, in that example) for each file found, while xargs is less resource-hungry; with -exec you can end up having spawned hundreds of greps.

Note: The main reason you would use xargs is efficiency.

When you use "-exec cmd {} \;" with 'find', it starts a new process for each file that is found.

Note: the Method 2 command is faster than the Method 1 command because xargs collects file names and executes the command with as long an argument list as possible. Often this amounts to just a single invocation.

However, the plain xargs solution will fail on file names containing spaces, tabs, etc., because xargs splits its input on whitespace. Try:

# touch "stupid name"

and then retry the two commands.

There is a third solution that combines the best of both worlds. It is in Posix but not every version of the find command supports it. It's like the first syntax except that instead of \; you just use + to terminate the command.

Method 3:
# find . -exec cmd {} +

Note: "find . -exec cmd {} +" command will NOT start a new process for each file, so it is as efficient as xargs command.
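You can observe the difference in the number of invocations with echo standing in for the command (a sketch; run it in a scratch directory):

```shell
mkdir -p exec-demo && cd exec-demo
touch f1 f2 f3

# One process for all files: a single output line
find . -type f -exec echo {} + | wc -l    # 1

# One process per file: one output line each
find . -type f -exec echo {} \; | wc -l   # 3
```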

Method 4:
# find . -print0 | xargs -0 cmd -option1 -option2

Note: -print0 must come at the end of the find expression (after the paths and tests), or find will output the wrong results.

Note: without -print0 this does not work if a file name contains a space, tab, etc. It can even be a security vulnerability: with a file named "foo -o index.html", the -o would be treated as an option. Try it in an empty directory: "touch -- foo\ -o\ index.html; find . | xargs cat". You'll get: "cat: invalid option -- 'o'"

Note: above command will work even if filenames contain funky characters (-print0 makes find print NULL-terminated matches, -0 makes xargs expect this format.)
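The breakage and the -print0 fix are easy to reproduce (a sketch; run it in a scratch directory):

```shell
mkdir -p print0-demo && cd print0-demo
touch "stupid name"

# Whitespace splitting: ls receives './stupid' and 'name', both fail
find . -type f | xargs ls 2>/dev/null | wc -l    # 0 lines of output

# NUL separation: ls receives the one real file name
find . -type f -print0 | xargs -0 ls | wc -l     # 1
```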

Method 5:
for FILE in `find srcDir -name "*.log" -print 2> /dev/null`; do
    python pyScript $FILE `dirname $FILE`/python.log
done

Method 6:
If the command to run is CPU intensive you may want to use GNU Parallel:

# find . | parallel command

Watch the intro video to learn more:

Thanks tange.


Renaming files with find, sed, xargs and mv

IrfanView - desktop screenshot tool


IrfanView is a very fast, small, compact and innovative FREEWARE (for non-commercial use) graphic viewer for Windows 9x, ME, NT, 2000, XP, 2003, 2008, Vista, and Windows 7.

Thursday, January 6, 2011

Find UNIX files modified within a number of days


My Note (to check there is no bad code (Code injection) in files)

cat /dev/null > ${badcode_file}

echo "===== [base64_decode] ======" >> ${badcode_file}

find /www/ -not -path '*/.svn/*' -type f -print0 | xargs -0 grep -Inle 'base64_decode' >> ${badcode_file}

echo "===== [eval] ======" >> ${badcode_file}

find /www/ -not -path '*/.svn/*' -type f -print0 | xargs -0 grep -Inle 'eval' >> ${badcode_file}

echo "===== [modified within three days] ======" >> ${badcode_file}

find /www/ -mtime -3d -not -path '*/.svn/*' -type f -print0 | xargs -0 grep -Inle 'eval' >> ${badcode_file}

cat ${badcode_file}


To find all files modified within the last 3 days, excluding .svn related files.
# find /www/ -mtime -3d -not -path '*/.svn/*' -type f

To find all files modified within the last 5 days:
# find /www/ -mtime -5 -print

Note: the - in front of the 5 modifies the meaning of the time to "less than five days ago".

Note: the trailing slash of a directory is necessary if the directory is a symbolic link (ex: /www/).

To find all files modified more than five days ago.
# find /www/ -mtime +5 -print

To find all files modified exactly five days ago.
# find /www/ -mtime 5 -print

Note: Without the + or -, the command would find files with a modification time of five days ago, not less or more.

Possible time units are as follows:
s       second
m       minute (60 seconds)
h       hour (60 minutes)
d       day (24 hours)
w       week (7 days)


Wednesday, January 5, 2011

Renaming files with find, sed, xargs and mv
Posted in Links by jeffhung @ March 6th, 2009 |

With UNIX, once you have really mastered tools such as find, sed and xargs, you can already handle most complex needs without learning any scripting language.


SHELL> find . -type f -name '*.vcproj' -print0 \
| sed -e 's/\.vcproj$//' \
| xargs -0 -n 1 -I @ mv @.vcproj @.vc9.vcproj

First, find lists all the file names to be renamed. Then sed strips off the extension. Finally, xargs's -I feature[1] stands in for each input item with @ and assembles the mv command: @.vcproj restores the part sed removed, rebuilding the original path, while @.vc9.vcproj forms the desired new name.

Stripping the extension with sed only to add it back, however, is a rather pointless round trip. Once I learned that sed's -e option can be given several times, and that its p command prints its input untouched, I realized the command above can actually be written like this:

SHELL> find . -type f -name '*.vcproj' \
| sed -e p -e 's/\.vcproj$/.vc9.vcproj/' \
| xargs -n2 mv

As before, find lists all the file names to be renamed. Then sed transforms them, using two -e commands: the p command first prints the original file name, and the s/// command then rewrites it into the form we want.

For example, if three files a.vcproj, b.vcproj and c.vcproj are to be renamed, find produces:

a.vcproj
b.vcproj
c.vcproj

After passing through sed -e p -e 's/\.vcproj$/.vc9.vcproj/', this becomes:

a.vcproj
a.vc9.vcproj
b.vcproj
b.vc9.vcproj
c.vcproj
c.vc9.vcproj

The odd-numbered lines are printed by sed's p command and the even-numbered lines by the s/// command.

This way, the paths found come out grouped two lines at a time. Feeding them to xargs -n2, which takes two at a time, assembles commands like the following:

mv a.vcproj a.vc9.vcproj
mv b.vcproj b.vc9.vcproj
mv c.vcproj c.vc9.vcproj


1. -I is the xargs parameter on FreeBSD; on Linux the parameter has a different name. ↩
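The whole pipeline can be tried out safely in a scratch directory:

```shell
mkdir -p rename-demo && cd rename-demo
touch a.vcproj b.vcproj c.vcproj

# Print each old name followed by its new name, then feed the pairs to mv
find . -type f -name '*.vcproj' \
| sed -e p -e 's/\.vcproj$/.vc9.vcproj/' \
| xargs -n2 mv

ls    # a.vc9.vcproj  b.vc9.vcproj  c.vc9.vcproj
```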

what is the difference between find -exec cmd {} + and xargs

FreeBSD Ramdisk

When you need heavy access to lots of small files, I can't think of anything better than a memory disk. Here is how to create one:

1. First create an empty directory where you want the ramdisk, for example:
mkdir /ramdisk

2. Use mdmfs [-s size] md-device mount-point to create the ramdisk:
mdmfs -s 64m md1 /ramdisk
This uses memory device md1 to create a 64MB ramdisk mounted at /ramdisk.

3. Check whether it was created successfully:
df -h

How to empty a log file?

If you use tcsh, first unset noclobber, then cat /dev/null > logfile.

If you use bash, first set +o noclobber, then cat /dev/null > logfile.
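The effect is easy to demonstrate in a Bourne-style shell (a sketch):

```shell
set -o noclobber
echo first > logfile            # creating a new file is fine

( echo second > logfile ) 2>/dev/null \
  || echo "overwrite refused"   # noclobber blocks > on an existing file

set +o noclobber
cat /dev/null > logfile         # now truncation works again
wc -c < logfile                 # 0
```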

Find and Remove Files

$ find . -name "*.pyc" -type f -print0 | xargs -0 /bin/rm -f

find normally defaults to -print; if the found file names might contain spaces, newlines and similar characters, -print0 is safer, paired with the -0 option of xargs.

what is the difference between find -exec cmd {} + and xargs

find command excludes specific directories with regular expression


Method 1:
# find -E /www/drupal/sites/ -type d -mindepth 1 -maxdepth 1 -not -regex '.*/(.svn|all|dir1|dir2)' | xargs ls -ld

Method 2:
# find /www/drupal/sites/ -type f -not -path '*/.svn/*' -not -path '*/dir1/*'

Note: you need the -E option on FreeBSD. The -E option interprets regular expressions as extended (modern) regular expressions rather than basic regular expressions (BRE's). The re_format(7) manual page fully describes both formats.

Note: the trailing slash of a directory is necessary if the directory is a symbolic link (ex: /www/).
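A minimal reproduction of the -not -path form (a sketch; the directory names are made up):

```shell
mkdir -p site/.svn site/dir1 site/real
touch site/.svn/entries site/dir1/skip.me site/real/page.php

# Exclude anything under .svn and anything under dir1
find site -type f -not -path '*/.svn/*' -not -path '*/dir1/*'
# site/real/page.php
```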

what is the difference between find -exec cmd {} + and xargs

How to deinstall uninstall all PHP extensions on FreeBSD ports at once


# pkg_delete -r php5-5.2.12

Note: ports are removed in exactly the same way as packages.

Automatically block spam IP address

STAR says:
* For auto-blocking there are already several well-known tools.
* Did you write your own shell script to do the auto-blocking?

DANNY says:
* Which well-known tools :|?
* PF has it built in :|

STAR says:
* denyhosts - this one needs no firewall
* fail2ban - this one works with a firewall; on Linux I use it with iptables

DANNY says:

STAR says:
* Basically both are scripts someone else has written and packaged up
* fail2ban has more features and more scripts, and is more complex

Tuesday, January 4, 2011

OmNiTTY - a great tool for synchronized operations

If you routinely manage several machines, OmNiTTY can definitely become one of your best helpers. How so? Suppose today you had to install some package on 10 machines by hand over putty, and you decided to connect to them one at a time and install it individually; I can only say you are being very silly.

OmNiTTY is one solution to this kind of problem: it sends your commands simultaneously to every tagged machine. Simply put, a repetitive command only has to be typed once. Isn't that convenient?

Now let's walk through how to operate OmNiTTY; as for installation, please study the instructions on its website.

After starting OmNiTTY, you will see the following options along the bottom.


F1 menu: as the name says, pressing it shows a menu of available actions.

[ r ] rename machine is the option I use most often; labeling machines makes it clearer which is which.

F2/3 Sel: F2 and F3 move up and down to select the machine to switch to.
F4 tag: choose which machines get tagged; if tagging one by one is too tedious, F1 offers helpers.
F5 add: the first thing to do after starting OmNiTTY is to add a machine!
F6 del: machines no longer needed can be removed with F6.
F7 mcast: once this is enabled, every tagged machine accepts control from a single command, performing the operation in sync.

This software is really well suited to managing machines in bulk, especially when they all run the same OS; it makes you feel like a tiger that has grown wings. If the machines have Screen installed, combining the two feels even better.

Published by Gea-Suan Lin at June 7, 2006 in Computer, Murmuring, Network and Software.

A while back on #bsdchat I heard rafan say that OmniTTY can connect to many machines over ssh and then, in Multicast Mode, issue commands to every one of them; add ssh-agent to skip the password typing and it becomes super convenient XD

After hearing that I never had a chance to try it, until today, when something on the PR System needed /etc/login.conf updated; a perfect excuse to play with it.

First enter Omnitty, open a pile of machines with F5, tag them all with F1 + T, enter Multicast Mode with F7, then type in the commands, and it looks like this: (by the way, you can move up and down with F2/F3 and watch each machine's progress XD)

Then you can move up and down with F2/F3 and check whether every machine has finished:

Trick: when adding, use @filename,
with one machine per line in filename :p


from Tsung's Blog by jon

When you have 1 machine, you manage it by sshing in and doing the work.

When you have 5 machines, you start wanting to run things remotely, using:

Usage: ssh hostname "command to run"
ssh hostname 'sudo cp http.conf /usr/local/apache/conf/'
ssh hostname "ls"
ssh hostname "sudo /usr/local/apache/bin/apachectl restart"

When you have 30 machines, the management style becomes:

Usage: for i in values; do local-command 'command for the remote machine'; done;
for i in 1 2 3; do scp xxx.conf w$i.hostname:; done;
for i in 1 2 3; do /usr/bin/ssh w$i.hostname 'sudo mv xxx.conf /usr/local/conf'; done;
for i in 1 2 3; do /usr/bin/ssh w$i.hostname 'sudo ls /'; done;
1 2 3 ... can be as many machines as you like, or whatever values you choose.
Note: do not wrap the local command in " or '; only the command intended for the remote machine is wrapped in " or '.

Just when I thought this was lazy enough, a mighty colleague of mine said: still not good enough~ he had already written a script dedicated to pushing to hundreds of front-end machines. If you want to know more about this expert, visit his blog: George Lee's blog

Let's have a look at the script:

#!/bin/sh
#for A in 8 9 10 11 12; do
A=1;
MAX=12;
PREFIX=w;
SOURCE="/xxx/http.conf";
REMOTEDIR="/usr/local/apache/conf"
while [ $A -le $MAX ]; do
    HOST="$PREFIX$A.hostname";
    echo "$HOST :";
    rsync -arvz --rsh=ssh $SOURCE $HOST:$REMOTEDIR/.
    #sudo rsync -arvz --rsh=ssh $SOURCE $HOST:$REMOTEDIR/.
    #scp parse_search.php $HOST:.
    #rsync -arvz --rsh=ssh $HOST:.
    #rsync -arvz --rsh=ssh xxx.conf $HOST:$REMOTEDIR/.
    A=`expr $A + 1`;
done;

I have mosaicked out a few things above, but some historical relics deserve to be preserved, e.g. for A in 8 9 10 11 12; do: that line alone shows the script has lived through the "managing 30 machines" era described above before evolving into today's handy form. As expected of an elder, Orz...

Using this script is simple; just follow these steps:

Download the script and save it to a file
Edit the bolded places in the script
chmod +x the file
A, MAX: from 1 to 12 (the example above yields w1, w2 ... w12)
PREFIX: the prefix of the machine names (gives w1, w2 ...)
SOURCE: where the file lives on the current machine
REMOTEDIR: where the file should go on the remote machines
HOST's hostname: the name of the remote machines

This script only works when the machine names end in consecutive numbers (PS: the line A=`expr $A + 1`; keeps adding 1)
The script contains many usage variations, all commented out with #, and a quick read of the code shows it is a very handy tool
If a machine in the numbered list does not exist or is down, don't worry: the script keeps going, just yelping a few error messages

That way is really too painful~ [Reply]

Try omnitty.

OmNiTTY - a great tool for synchronized operations


From GaryLee


Export PuTTY's saved settings from the registry:

> regedit.exe /e PuTTY.reg HKEY_CURRENT_USER\Software\SimonTatham

Import them back:

> regedit.exe /i Putty.reg


Monday, January 3, 2011

rsync - synchronizing two file trees

rsync - synchronizing two file trees 15 December 2000


This article originally appeared quite some time ago.  But for some unknown reason, it was lost from the indexes.  I've just come back to upgrade it with some new error observations.

We now return you to your regularly scheduled read...

rsync is an amazing and powerful tool for moving files around. I know of people who use it for file transfers, keeping DNS server records up to date, and, along with sshd, remotely restarting services when rsync reports a file change (how they do that, I don't know, I'm just told they do it).

This article describes how you can use rsync to synchronize file trees.  In this case, I'm using two websites to make sure one is a backup of the other.  As an example, I'll be making sure that one box contains the same files as the other box in case I need to put the backup box into production, should a failure occur.


rsync can be used in six different ways, as documented in man rsync:

1. for copying local files. This is invoked when neither source nor destination path contains a : separator.

2. for copying from the local machine to a remote machine using a remote shell program as the transport (such as rsh or ssh). This is invoked when the destination path contains a single : separator.

3. for copying from a remote machine to the local machine using a remote shell program. This is invoked when the source contains a : separator.

4. for copying from a remote rsync server to the local machine. This is invoked when the source path contains a :: separator or a rsync:// URL.

5. for copying from the local machine to a remote rsync server. This is invoked when the destination path contains a :: separator.

6. for listing files on a remote machine. This is done the same way as rsync transfers except that you leave off the local destination.

I'll only be looking at copying from a remote rsync server (4) to a local machine and when using a remote shell program (2).

Installing

This was an easy port to install (aren't they all, for the most part?). Remember, I have the entire ports tree, so I did this:

# cd /usr/ports/net/rsync

# make install

If you don't have the ports tree installed, you have a bit more work to do... As far as I know, you need rsync installed on both client and server, although you do not need to be running rsyncd unless you are connecting via method 4.

Setting up the server

In this example, we're going to be using a remote rsync server (4). On the production web server, I created the /usr/local/etc/rsyncd.conf file. The contents are based on man rsyncd.conf.

uid             = rsync

gid             = rsync

use chroot      = no

max connections = 4

syslog facility = local5

pid file        = /var/run/


[www]

        path    = /usr/local/websites/

        comment = all of the websites

You'll note that I'm running rsync as rsync:rsync.  I added lines to vipw and /etc/group to reflect the new user.  Something like this:

rsync:*:4002:4002::0:0:rsync daemon:/nonexistent:/sbin/nologin

rsync:*:4002:


Then I started the rsync daemon and verified it was running by doing this:

# rsync --daemon

# ps auwx | grep rsync

root 30114 0.0 3.7 936 500 ?? Ss 7:10PM 0:00.04 rsync --daemon

And I found this in /var/log/messages:

rsyncd[30114]: rsyncd version 2.3.2 starting

Then I verified that I could connect to the daemon by doing this:

# telnet localhost 873


Connected to localhost.

Escape character is '^]'.


I determined the port 873 by looking at man rsyncd.conf.

See the security section for more information.

You can also specify a login and user id.  But if you do that, I suggest you make /usr/local/etc/rsyncd.conf non-world readable:

chmod 640 /usr/local/etc/rsyncd.conf

This example is straight from the man page.  Add this to the configuration file:

auth users = tridge, susan

secrets file = /usr/local/etc/rsyncd.secrets

The /usr/local/etc/rsyncd.secrets file would look something like this:

tridge:mypass
susan:herpass

And don't forget to hide that file from the world as well:

chmod 640 /usr/local/etc/rsyncd.secrets

Setting up the client

You may have to install rsync on the client as well. There wasn't much to set up on the client. I merely issued the following command. The rsync server in question is ducky.

rsync -avz ducky::www /home/dan/test

In the above example, I'm connecting to ducky, getting the www collection, and putting it all in /home/dan/test.

And rsync took off!  Note that I have not implemented any security here at all.   See the security section for that.

I checked the output of my first rsync and decided I didn't want everything transferred.  So I modified the command to this:

rsync -avz --exclude* --exclude wusage/* ducky::www /home/dan/test

See the man pages for more exclusion options.

I also wanted deleted server files to be deleted on the client.  So I did this:

rsync -avz --delete ducky::www /home/dan/test

Of course, you can combine all of these arguments to suit your needs.
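Put together, a full mirror pull against my server might look like this (ducky and the paths are the examples from above):

```shell
# Archive mode, compressed, mirror deletions, skip the wusage files.
rsync -avz --delete --exclude 'wusage/*' ducky::www /home/dan/test
```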

I found the --stats option interesting:

Number of files: 2707

Number of files transferred: 0

Total file size: 16022403 bytes

Total transferred file size: 0 bytes

Literal data: 0 bytes

Matched data: 0 bytes

File list size: 44388

Total bytes written: 132

Total bytes read: 44465

Security

My transfers occur on a trusted network and I'm not worried about the contents of the transfer being observed. However, you can use ssh as the transfer medium by using the following command:

rsync -e ssh -avz ducky:www test

Note that this differs from the previous example in that you have only one : (colon) not two as in the previous example. See man rsync for details. In this example, we will be grabbing the contents of ~/www from host ducky using our existing user login. The contents of the remote directory will be synchronized with the local directory test.

Now if you try an rsync, you'll see this:

$ rsync -e ssh -avz --delete ducky:www /home/dan/test


@ERROR: auth failed on module www

Here I supplied the wrong password and I didn't specify the user ID. I suspect it used my login; a check of the man page confirmed this. This was my next attempt. You can see that I added the user name before the host, ducky.

$ rsync -e ssh -avz --delete susan@ducky:www /home/dan/test


receiving file list ... done

wrote 132 bytes read 44465 bytes 1982.09 bytes/sec

total size is 16022403 speedup is 359.27

In this case, nothing was transferred as I'd already done several successful rsyncs.

The next section deals with how to use a password in batch mode.

Do it on a regular basis

There's no sense in having an rsync set up if you aren't going to use it on a regular basis. In order to use rsync from a cron job, you should supply the password in a non-world-readable file. I put my password in /home/dan/test/rsync.password. Remember to chmod 640 that password file!

I put the command into a script file, which looks like this:


/usr/local/bin/rsync -e ssh -avz --stats --delete susan@ducky::www /home/dan/test --password-file /home/dan/test/rsync.password

Remember to chmod 740 the script file!

Then I put this into /etc/crontab in order to run this command every hour (this should be all on one line):

7 * * * * root /usr/home/dan/ 2>&1 | mail -s "rsync script" root

The above will mail you a copy of the output.

If you want to use ssh as your transport medium, I suggest using the authorized_keys feature.
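The authorized_keys setup can be sketched as follows; susan and ducky are the example user and host from above, and the key filename is the ssh default:

```shell
# Generate a key pair with no passphrase (suitable for cron jobs;
# guard the private key file carefully).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Append the public key to the remote account's authorized_keys.
cat ~/.ssh/id_rsa.pub | ssh susan@ducky 'cat >> ~/.ssh/authorized_keys'

# This should now run without prompting for a password.
rsync -e ssh -avz --delete susan@ducky:www /home/dan/test
```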

My comments

I think rsync is one of the most powerful tools I've seen for transferring files around a network and the Internet.  It is just so powerful! Although I actually use cvsup to publish the Diary, I am still impressed with rsync.

Some recent errors I encountered

I was recently adding some new files to my rsync tree. I found these errors:

receiving file list ... opendir(log): Permission denied

opendir(fptest): Permission denied

opendir( Permission denied

opendir( Permission denied

readlink dan: Permission denied

opendir(default): Permission denied

It took me a while to understand the problem. It's a read permission issue: rsyncd didn't have permission to read the files in question. You can either make rsyncd run as a different user, or change the permissions on the files.
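With the rsyncd.conf above, the daemon runs as rsync:rsync, so everything under the module path must be readable by that user. A sketch of the two fixes, using the log directory from the errors above as the example:

```shell
# Either open up read access for everyone; the capital X adds
# execute (search) permission on directories only, not files.
chmod -R o+rX /usr/local/websites/log

# ...or hand the files to the rsync user outright.
chown -R rsync:rsync /usr/local/websites/log
```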

If you get the user id for rsync wrong, you'll see this error:

$ rsync -avz xeon::www /home/dan/rsynctest

@ERROR: invalid uid

I had the rsync user misspelt as rysnc.

Passwordless SSH Connections - ssh-keygen

Sunday, January 2, 2011

How to Disable Clean URL in Drupal

How to Disable Clean URL in Drupal

delete from drupal_db.variable where name = 'clean_url';
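Before deleting the row, it's worth checking what is there. A sketch using the mysql command-line client; the database name drupal_db and the root credentials are assumptions from the query above, so substitute your own:

```shell
# Inspect the current setting first.
mysql -u root -p -e \
  "select name, value from drupal_db.variable where name = 'clean_url';"

# Then remove it; Drupal falls back to the default, which is
# clean URLs disabled.
mysql -u root -p -e \
  "delete from drupal_db.variable where name = 'clean_url';"
```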